When the draft “Regulation On A European Approach For Artificial Intelligence” leaked earlier this week, it made quite a splash – and not just because it’s the size of a novella. It goes to town on AI just as fiercely as GDPR did on data: proposing chains of responsibility, defining the “high-risk AI” that gets the full force of the regulations, threatening multi-million-euro fines for non-compliance, and setting out a whole set of harmful behaviours and limits on what AI can do to individuals and in general.
What it does not do is define AI, saying that the technology is changing so rapidly it makes sense only to regulate what it does, not what it is. So yes, chatbots are included, even though you can write a simple one in a few lines of ZX Spectrum BASIC. In general, if it’s sold as AI, it’s going to get treated like AI. That’ll make marketing think twice.
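The point about simplicity is no exaggeration. Here's a toy rule-based chatbot – a Python sketch in the spirit of those few lines of Spectrum BASIC, not anything from the draft itself – that could nonetheless be marketed as “AI”:

```python
# A trivial keyword-matching chatbot: no learning, no model, just a lookup.
# If it's sold as AI, the draft regulation would treat it as AI.
RULES = {
    "hello": "Hello there!",
    "how are you": "I'm just a few lines of code, thanks.",
}

def reply(message: str) -> str:
    """Return the first canned response whose trigger appears in the message."""
    for trigger, response in RULES.items():
        if trigger in message.lower():
            return response
    return "Tell me more."
```

Nothing here would trouble a regulator – which is precisely the draft's approach: the rules bite according to what the system does, not how clever it is inside.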
A regulated market puts responsibilities on your suppliers that will limit your own liabilities: a well-regulated market can enable as much as it moderates. And if your AI doesn’t go wrong, well, the regulator leaves you alone. Your toy Spectrum chatbot sold as entertainment won’t hurt anyone; chatbots let loose on social media to learn what humans do and then amplify hate speech are another matter. Doubtless there are “free speech for hatebots” groups out there: not on my continent, thanks.
It also means that countries with less well-regulated markets can’t take advantage. China has a history of aggressive AI development to monitor and control its population, and there are certainly ways to turn a buck or yuan by tightly controlling your consumers. But nobody could make a euro at it, as it wouldn’t be allowed to exist within, or offer services to, the EU. Regulations that are primarily protectionist for economic reasons are problematic, but ones that say you can’t sell cut-price poison in a medicine bottle tend to do good.
There will be regulation. There will be costs. There will be things you can’t do then that you can now. But there will be things you can do that you couldn’t do otherwise, and while the level playing field of the regulators’ dreams is never quite as smooth for the small company as the big, there’ll be much less snake oil to slip on.
The draft classifies high-risk AIs and requires them to be registered and monitored, with named contact people and insight into how they work; it also calls for pan-EU datasets for AIs to train on. There’s a lot of really good stuff in there.

It may be an artificial approach to running a market, but it is intelligent.