AI is rhubarb.
Ah – I see what you’re thinking – you think I mean ‘Rhubarb’ like ‘Blah Blah Blah’. No, it’s not that at all.
AI is like rhubarb growing. You know it's happening, and sometimes you swear you can see it happening, but not quite. It's slow enough that you think, "I'll go and do something else for a while." When you remember to come back, it's grown an extraordinary amount, and you have to pretend to everyone that you think the leaves are ornamental, because there's no way you can get rid of it without hydraulics.
How quickly will AI grow?
There is no end of opinions on how AI-like technologies will change the world, so it doesn't matter if I add mine (caveat: I know nothing; this is just idle speculation). So here it is: the use of AI-like technologies will transform my later life, and absolutely the lives of my children. However, it will grow slowly enough to allow middle-aged professional men to say things like "You can't replace human judgement" for a few years yet.
But it's coming, and people recognise we need to get ready. I am not a fan of banning research unless there's a huge barrier to entry that stops everyone else from researching too. So we'll more likely have rules regulating its use and development. And, in its white paper 'A pro-innovation approach to AI regulation', the UK government has started thinking about exactly that.
AI regulation
What's in the paper? Well, surprisingly little (and that's good). Here are some highlights:
- The paper sets out the aims of AI regulation as:
- Safety, security and robustness.
- Appropriate transparency and explainability.
- Accountability and governance.
- Contestability and redress.
- There's no new regulator as such (though there is centralised support) – the onus is on existing regulators to develop and apply rules. I like this idea, because context is important.
- There's a mention of AI assurance, and some links, but few details beyond a promise that 'This Will Grow'.
And not much else, really. Sure, there are a lot of words, and nice phrases like the need to regulate outcomes rather than technology, but it's really drawing a line in the sand: (a) AI-like technologies are coming, or are already here; (b) we have to regulate them; (c) they're going to impact everything, so existing regulators need to figure this out (with centralised help); and (d) somehow we'll assure these things.
What's next?
There's a lot of excitement at the moment, and new use cases for AI-like technologies are popping up everywhere. We know we'll need some rules, and that they're more likely to look like accounting than Asimov, but beyond that, we're shooting in the dark. Regulation won't keep pace with development (especially if growth is exponential), but it's making a start, and that helps bring a little certainty to what is an existentially uncertain technology.
I usually invite questions, but in this case I just don't know. I'm happy to talk through possibilities if you want to get in touch, but you probably know as much as I do on this one. I'm just blogging because it's interesting.