What if Europe is too hard on AI? Companies fear a slowdown in innovation

January 06, 2023

Forbes published an article on impending AI regulations for the EU in the wake of UNESCO’s global AI agreement, signed by 193 member states in November 2021. Alongside other business leaders, Marek Rosa weighs in on the potential implications for tech innovation. This article has been translated from Czech. The original can be found here.


The state, companies, and entrepreneurs agree that regulation of technology, and especially of artificial intelligence, is necessary. Experts believe that the still-unclear wording, and above all the final method and degree of risk classification, will determine how strongly the regulation affects the European market and, indeed, businesses in general.

According to EU officials, the AI Act is the first proposal of its kind in the world to classify technologies according to their use and impose different levels of restrictions on them depending on their risk.

The goal is to strengthen safety, trust in technologies and ensure their greater use in the European Union through such regulation.

According to Jan Míča, head of the Department of the European Digital Agenda, artificial intelligence systems that could negatively affect the life, health, safety, or fundamental rights of citizens are considered high-risk systems.

In order for such products to be considered trustworthy, transparent and non-discriminatory, they must first meet a number of requirements.

Systems evaluating the behavior of citizens in public, using remote biometric identification (except for narrowly defined exceptions), or those that use subliminal techniques for the purpose of influencing behavior should be banned completely in the European Union.

Such a citizen-scoring system is already in use in China and is one example of what the Act is meant to prevent. According to Business Insider, while it is not yet a nationwide system, the state plans to make it mandatory for all.

“People can be punished if they drive badly, buy too many video games or steal,” the newspaper said. The punishment then takes various forms, from restricting internet speeds to banning air travel.

However, if citizens behave well, their social credit will increase and, in turn, they may receive discounts on energy bills or better interest rates at banks.

New technological innovations are to play a big role in such a system. Millions of cameras in the country already use facial recognition software. In addition to physical surveillance measures, the behavior of citizens on the Internet can also be evaluated.

According to Míča, the very classification of artificial intelligence systems as high-risk will be a major challenge for Europe and for the upcoming act, which will also establish rules for so-called general-purpose artificial intelligence (GPAI) systems.

The European Commission did not cover these systems in its original proposal. They are typically offered by larger companies, which often oppose the regulation.

“Regulation in this regard may negatively affect EU entities in terms of what AI solutions will be available to them, versus markets that are not subject to regulation,” says Petr Hirš, team director at the Dataclair.ai artificial intelligence center.

Fields with these kinds of systems are already dominated by a very narrow group of the world’s largest technology companies. “We believe that it is in the public interest that we support as much as possible those rare entities and projects that manage to compete with corporate giants,” he adds.

As currently drafted, the requirements could, according to the Dataclair.ai team, force developers of these models to treat them as high-risk systems regardless of how the models will actually be used.

This method of evaluating systems is, according to Hirš, one of the main shortcomings of the Artificial Intelligence Act.

According to Marek Rosa of GoodAI, much will depend on the final details: how to define what is and is not artificial intelligence, how to test whether a given application has caused harm, and where to draw the line between what is acceptable and what is over the edge.

“The most important thing is that the regulation does not slow down innovation and does not make it impossible to use AI in areas where it could be very useful, and where the benefits would outweigh the risks,” he adds.

Companies and applications that will be affected by the act will also have to undergo a self-assessment process, which, according to Rosa, necessarily incurs additional time and administrative costs.

“On the other hand, ethical companies that care about their customers should consider the impact of their own products regardless of the regulation,” he says, though he notes the act could still have negative effects in specific cases.

For example, startups will not want to take the risk of violating the regulation, so they may think twice about innovating in an area where it is not clear whether they meet the criteria or not.

However, startups are not the only ones factoring regulation into their decisions, as Hirš and his team confirm. “At our Dataclair.ai center, we already consider, when developing new solutions, whether this is an area that will be regulated in the future.”

In addition, companies will face fines for failing to comply with AI supervision requirements. The original proposal set fines of up to six percent of annual turnover.

According to Hirš and his team, this could be devastating for any organization, especially where a relatively small data science team operates within a large company.

Despite the current uncertainty about the final text of the act, companies see the regulation as a step in the right direction. According to Hirš, the legislation simply needs to be drafted in cooperation with academia and the commercial sector.

This would anchor the rules in feedback from the real world. A crucial question, he said, will also be how quickly the EU institutions can respond to new breakthroughs in AI and how flexibly those can be integrated into existing legislation.

This will be a difficult task for the Czech Presidency as well, according to Rosa. He argues that the healthiest solution would be to regulate after the problem has occurred.

“I know that some will argue that we cannot risk people’s lives. On the other hand, not innovating may cost us more lives,” he explains. At the same time, he welcomes the fact that the AI Act is structured in such a way that it allows for provisional modifications to rules and items that prove problematic.

Moreover, this is not the first rule of its kind that companies have had to follow. Míča points out that the AI Act itself refers to rules arising from existing legislation, such as the General Data Protection Regulation (GDPR), the Machinery Products Regulation, and the Medical Devices Regulation.

Then there are, for example, the systems used in the financial and banking sectors, which are a highly regulated legal ecosystem.

Regulation of artificial intelligence has been discussed since 2019. The start of December saw another breakthrough on the road to final approval of the AI Act.

After a number of modifications, the EU Member States gave their final support to the European Commission’s proposal from last year. It is now up to the Council of the European Union and the European Parliament to agree on the final version of this proposal.
