As it did years ago with online data protection, the EU wants to limit the most abusive uses of AI and penalize them.
When the European Union drafted its internet data protection law a few years ago – the same law that would soon become a headache for companies – it became a pioneer in adapting legislation to the changes brought by the network and by companies' use of personal data. Regulations adopted elsewhere have continued along the lines set by that law.
Data protection remains very important, but technology has made things far more complex, and companies now have much more ambitious tools at their disposal, with a wider range of potential applications.
This is the case with artificial intelligence, which has advanced powerfully in recent years and is already integrated into many of the services consumers use and many of the channels companies rely on to reach them. AI is also becoming a target of criticism and a source of fears about how much, and in what ways, its power could be abused.
Now it is also set to be one of the areas in which the European Union will introduce – everything points in that direction – a legislative package limiting its scope and its most potentially abusive uses.
The proposal comes from the European Commission, the body responsible for putting forward EU legislation. For now it is just that, a proposal (though one already known because the text was leaked to Bloomberg), but it makes clear that the European Union has a firm interest in regulating artificial intelligence and how it affects citizens' lives.
In general, the EU wants to limit the use of artificial intelligence to control and manipulate citizens. The two main areas the regulations would target are mass surveillance and social behavior scoring.
As Bloomberg points out, the EU's objective is to create a framework that forces AI to be transparent, upholds high standards of privacy, and keeps the technology under human control.
What the regulation will focus on
More specifically, the regulation would ban across the European Union all AI systems used to manipulate human behavior, establish social scores, or carry out indiscriminate surveillance. It would also ban systems used to exploit information about citizens. The only exception would be AI systems applied to public security (military systems, for example, would fall outside the regulation's scope).
Other uses of AI would not be prohibited, but they would be subject to some control. This is the case with biometric identification systems: under the proposal, deploying them in public places would require prior permission from the authorities.
Likewise, all uses of artificial intelligence considered high risk (driverless cars fall into this category, for example, but so does remote surgery) would have to undergo prior inspections.
If the plans go ahead, the regulations will affect all companies operating in the European Union, whether or not they are EU companies. Failure to comply with these rules will carry a fine: if everything is approved, up to 4% of a company's total worldwide revenue.