The AI Act is still just a draft, but investors and business owners in the European Union are already nervous about the possible outcomes.
Will it prevent the European Union from being a serious competitor in the global AI race?
According to regulators, that's not the case. But let's look at what's happening.
The AI Act and risk assessment
The AI Act divides the risks posed by artificial intelligence into different categories, but before doing that, it narrows the definition of artificial intelligence to cover only systems based on machine learning and logic-based approaches.
This doesn't only serve to differentiate AI systems from simpler pieces of software; it also helps us understand why the EU wants to categorize risk.
The different uses of AI are categorized into unacceptable risk, high risk, and low or minimal risk. Practices that fall under the unacceptable-risk category are prohibited.
These practices include:
- Practices that rely on techniques operating beyond a person's consciousness,
- Practices that exploit vulnerable groups of the population,
- AI-based systems designed to classify people according to personal characteristics or behaviors,
- AI-based systems that use biometric identification in public spaces.
Some use cases, considered similar to the prohibited practices, fall instead under the category of “high-risk” practices.
These include systems used to recruit workers or to assess and analyze people’s creditworthiness (which could be dangerous for fintech). In these cases, every business that creates or uses this type of system must produce detailed reports explaining how the system works and the measures taken to avoid risks for people and to be as transparent as possible.
On paper everything looks clear and correct, but there are some problems that regulators should address.
The Act looks too generic
One of the aspects that most worries business owners and investors is the lack of attention to specific AI sectors.
For instance, companies that produce and use AI-based systems for general purposes could be treated as if they were using artificial intelligence for high-risk use cases.
This means they would have to produce detailed reports that cost time and money. SMEs are no exception, and since they form the largest part of European economies, they could become less competitive over time.
It is precisely the difference between US and European AI companies that raises the biggest concerns: Europe doesn’t have large AI companies like the US does, since the European AI environment is made up mainly of SMEs and startups.
According to a survey conducted by appliedAI, a large majority of investors would avoid investing in startups labeled as “high-risk”, precisely because of the complexities involved in this classification.
ChatGPT changed the EU's plans
EU regulators were expected to finalize the document on April 19th, but the discussion over the different definitions of AI-based systems and their use cases delayed the delivery of the final draft.
Moreover, not all tech companies agree with the current version of the document.
The point that caused the most delays is the differentiation between foundation models and general-purpose AI.
An example of an AI foundation model is the one behind OpenAI's ChatGPT: these systems are trained on large quantities of data and can generate almost any kind of output.
General-purpose AI, by contrast, covers systems that can be adapted to different use cases and sectors.
EU regulators want to strictly regulate foundation models, since they could pose more risks and negatively affect people's lives.
How the US and China are regulating AI
If we look at how EU regulators are treating AI, one thing stands out: they seem less willing to cooperate with the industry and the public than regulators elsewhere.
In the US, for instance, the Biden administration sought public comments on the safety of systems like ChatGPT before designing a possible regulatory framework.
In China, the government has been regulating AI and data collection for years, and its main concern remains social stability.
So far, the country that seems best positioned on AI regulation is the UK, which has preferred a "light" approach - but it's no secret that the UK wants to become a leader in AI and fintech adoption.
Fintech and the AI Act
When it comes to companies and startups that provide financial services, the situation is even more complicated.
In fact, if the Act remains in its current version, fintechs will be bound not only by existing financial regulations but also by this new regulatory framework.
The fact that creditworthiness assessment could be labeled as a high-risk use case is just one example of the burden fintech companies would have to carry, preventing them from being as flexible as they’ve been so far in gathering investments and staying competitive.
Conclusion
As Peter Sarlin, CEO of Silo AI, pointed out, the problem is not regulation, but bad regulation.
An Act that is too generic could harm innovation and all the companies involved in the production, distribution, and use of AI-based products and services.
If EU investors are scared off by a "high-risk" label on a startup or company, the AI environment in the European Union could be negatively affected, while the US is gathering public comments to improve its technology and China already has a clear opinion on how to regulate artificial intelligence.
According to Robin Röhm, cofounder of Apheris, one possible scenario is that startups will move to the US - a country that may have a lot to lose when it comes to blockchain and cryptocurrencies, but that could win the AI race.
If you want to know more about fintech and discover fintech news, events, and opinions, subscribe to FTW Newsletter!