On August 1, 2024, the EU's Artificial Intelligence Act (AI Act) came into force. With this new law, the EU has reached a significant milestone in the regulation of artificial intelligence (AI). The goal is to strike a balance between fostering innovation and protecting citizens' rights. The AI Act responds to the rapid development of AI technologies and the risks and ethical challenges associated with their use. In this blog article, we highlight the key aspects of the AI Act and its implications for businesses, software developers, and AI users.
The AI Act is a comprehensive legislative act that creates a framework for the development, placing on the market, and use of AI in the EU. It covers all AI models and systems and aims to ensure that AI systems are safe and transparent and respect citizens' fundamental rights. The AI Act categorizes AI applications according to their risk and sets out specific requirements and obligations for each category.
The AI Act divides AI systems into four main categories:
Unacceptable risk: AI practices that pose a clear threat to fundamental rights, such as social scoring, are prohibited.
High risk: AI systems in sensitive areas such as critical infrastructure, education, employment, or law enforcement are subject to strict requirements.
Limited risk: Systems such as chatbots are subject to transparency obligations.
Minimal risk: All other AI applications, such as spam filters, remain largely unregulated.
The AI Act places various requirements on the development and use of AI systems, particularly in the category of high-risk applications:
Transparency: Users must be informed that they are interacting with an AI system and be made aware of its capabilities and limitations.
Safety and accuracy testing: High-risk AI systems must undergo extensive testing and certification to ensure their safety and accuracy.
Data governance: Software developers must ensure that the data used to develop AI systems is of high quality and representative. Discrimination and bias must be minimized.
Monitoring and control: Operators of AI systems must introduce mechanisms for monitoring and control to prevent misuse and malfunctions.
Each EU Member State must establish a national supervisory authority and appoint a representative of that authority to the European Artificial Intelligence Board. In Germany, this role could fall to the data protection authorities or the Federal Network Agency (Bundesnetzagentur). In addition, there will be an advisory forum and a European AI Office responsible for monitoring general-purpose AI (GPAI) models, supported by a scientific panel of independent experts.
The AI Act also has a number of changes in store for companies and developers who want to offer AI technologies in the EU. For providers of high-risk AI systems in particular, the new law means increased responsibility and additional compliance costs. Companies may need to rethink their development processes and invest to meet the new requirements. However, this could also lead to a competitive advantage, as certified security and transparency can increase user confidence.
The AI Act offers numerous advantages for consumers. The strict transparency and security requirements are intended to minimize the risks of AI applications and strengthen citizens' trust in these technologies. The data governance and anti-discrimination measures are intended to help promote fair and equitable AI systems that respect the rights of all citizens.
The AI Act is an important step in the regulation of artificial intelligence and shows that the EU is determined to usher in a safe and transparent era of AI. While companies and developers face new challenges, the Act also offers the opportunity to have a positive impact on society through higher standards and greater trust in AI systems. Only time will tell how the AI Act proves itself in practice and what adjustments may be necessary in the future to keep pace with rapid technological advances.