Artificial Intelligence (AI) has entered our lives, transforming how we live and work. However, the rise of AI raises doubts about what our future will look like, leaving questions about safety and ethics in the air: "Could these technologies be harmful to society? How can we guarantee the transparency of private research? How can we ensure that these systems are fair and respect human rights? Which applications could be most harmful to citizens?"
These questions led the 27 European Union (EU) countries to adopt the first legislation entirely dedicated to AI this year.
What impact will the new rules have on research?
The new legislation underlines the political commitment to promote investment in AI research and innovation, and it will strongly shape the EU's research and innovation (R&I) programs, such as Horizon Europe and Digital Europe. Existing public-private partnerships are also expected to evolve to reflect the new political priorities around AI.
The European Commission and the European Parliament play a critical role in defining future calls for proposals under these programs and future partnerships in this field. Direct cooperation with these EU institutions is essential to ensure that we, as research stakeholders, stay informed and can participate in the different initiatives.
Are the citizens protected?
Partially. The regulation addresses the complexity, bias, and behavior of AI systems to ensure they are compatible with human rights. It establishes a non-exhaustive list of "high-risk AI applications" that can be banned. However, it does not define penalties in case of abuse.
The European Commission has called on all EU countries to establish procedures to assess compliance with the use of these high-risk applications and potential sanctions. These measures will be vital to protecting citizens throughout the development of AI technology.
So-called "high-risk AI applications" include the use of AI to influence behavior, government-run social scoring, and real-time remote biometric identification (e.g., to detect emotions) in publicly accessible spaces.
Other controversial practices under the watchful eye of regulators include curriculum vitae scanning tools that rank job candidates, and AI systems used to prioritize the dispatch of emergency first-response services, including firefighters and medical assistants.
What's next?
The new rules will come into force over the next two years and are binding on all EU countries. A new AI Office will be set up within the European Commission to help companies, research service providers, and others comply with the new legislation.
It is now crucial that Europe avoids becoming too dependent on third countries for key AI technologies. Catching up with the big US tech players will require significant resources and combined efforts.
We will continue to follow the AI discussions closely, working side by side with regulators, to ensure that we develop ethical, first-class AI technology for our economy and society.
By: Talita Soares
EU Strategy & Policy Advisor | Internationalization Department CCG/ZGDV Institute