What the AI Act means for companies

The European Commission wants to regulate artificial intelligence more closely: stricter requirements for AI systems will apply across the EU in the future. What does this mean for companies, and how can they prepare?

Artificial intelligence is a powerful technology that is developing rapidly and is already being used in many sectors, for example in industry, agriculture, healthcare, finance, and mobility. AI is special because it can imitate human cognitive capabilities: it can read, sort, and evaluate data and draw logical conclusions from it. This offers huge opportunities, but also poses risks. The way an algorithm arrives at a solution is often difficult for humans to follow, and the algorithm itself does not distinguish whether specific values are being upheld or whether some interests are being prioritized over others. This can result in discrimination, the misuse of personal data, or the violation of human dignity.




AI Act: AI legislation

Against this background, the EU is responding to the need to regulate AI systems more closely with the planned Artificial Intelligence Act (AIA). The European legislative institutions are currently negotiating the corresponding legislative proposal, which will form the world's first comprehensive legal framework for AI. It aims to safeguard people's fundamental rights in the development and application of AI technologies, to strengthen trust in the technology, and at the same time to drive investment and innovation in this area. In the long term, this should see Europe become the global centre of AI excellence, providing AI systems that are human-centric, sustainable, secure, inclusive, and trustworthy. In the near future, it means that companies in the EU will be required to assess their AI systems and the associated risk. Depending on the outcome of that risk assessment, they must then carry out an appropriate conformity assessment that demonstrates or declares that the requirements for trustworthy AI are met.

The new EU regulations follow a risk-based approach:

  • Minimal risk AI applications that do not use personal data or have an impact on humans will not be regulated (for example, predictive maintenance).
  • Limited risk AI applications will have to comply with transparency requirements (for example, providing an indication that the AI system is a chatbot).
  • High-risk AI applications that may adversely affect people's safety or their fundamental rights must meet further, more stringent requirements (this particularly applies to areas such as mobility, medicine, or workers' management).
  • Dangerous AI applications that, for example, predict and monitor people's social behaviour from data in order to directly influence people's behaviour or opinions (for example, social scoring) will be banned.
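As a rough illustration of this risk-based triage, the four tiers above could be sketched in code. The flags, tier names, and order of checks below are simplifying assumptions made for illustration; they are not the legal test defined in the AI Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers loosely modelled on the AI Act's approach."""
    MINIMAL = "minimal"        # e.g. predictive maintenance: not regulated
    LIMITED = "limited"        # e.g. chatbots: transparency duties
    HIGH = "high"              # e.g. mobility, medicine: strict requirements
    PROHIBITED = "prohibited"  # e.g. social scoring: banned

def classify(uses_personal_data: bool, affects_people: bool,
             impacts_safety_or_rights: bool, manipulates_behaviour: bool) -> RiskTier:
    """Toy triage of an AI application into one of the four tiers.

    The input flags and the order of checks are illustrative assumptions,
    not the criteria laid down in the regulation itself.
    """
    if manipulates_behaviour:          # e.g. social scoring
        return RiskTier.PROHIBITED
    if impacts_safety_or_rights:       # e.g. medical or mobility systems
        return RiskTier.HIGH
    if uses_personal_data or affects_people:  # e.g. a customer-facing chatbot
        return RiskTier.LIMITED
    return RiskTier.MINIMAL            # e.g. predictive maintenance

# A predictive-maintenance system touching neither people nor personal data:
print(classify(False, False, False, False).value)  # prints "minimal"
```

In practice, the actual classification follows the definitions and annexes of the regulation, and a high-risk finding triggers the conformity assessment described above rather than a simple label.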

The role of standards and specifications

In drafting the specific requirements, the EU relies on the principle of the "New Legislative Framework" (NLF): the legislator confines itself to formulating essential requirements and protection goals and requests the European standardization organizations to specify these technically in the form of standards and specifications. Especially in the area of high-risk AI applications, standards and specifications are central: they define the safety requirements that AI systems must meet before they are placed on the market, for example in terms of transparency, accuracy, explainability, or quality. Standards and specifications can thus play a decisive role in protecting against bias, discrimination, or manipulation.

The Standardization Roadmap on AI identifies needs for action

With the German Standardization Roadmap on AI, DIN and DKE have presented a comprehensive analysis of which standards and specifications already exist and where there is still a concrete need for standardization in AI applications. The second edition, published in December 2022, focused on nine key topics: Fundamentals, Security/safety, Testing and certification, Sociotechnical systems, Industrial automation, Mobility, Medicine, Financial services, and Energy and environment. The standards still required for compliance with the AI Act are currently being drawn up at European level by experts from business, science, the public sector, and civil society; once examined by the European Commission, they will be published as "harmonized European Standards". Compliance with such standards creates a presumption that the requirements of the legal act have been met.

Whoever sets the standard controls the market

The New Legislative Framework explicitly calls for broad participation in the development of AI-relevant standards. The advantage: the more experts contribute their knowledge to the standardization process, the higher the market acceptance of the resulting standard. Companies of all sizes, as well as stakeholders from research and civil society, therefore now have the opportunity to actively shape the legal requirements for AI by helping to set the necessary standards and specifications. This offers significant competitive advantages: those involved in the standardization process gain a competitive lead through early access to new knowledge and can ensure their interests are taken into account as standards are developed. Products and services can then be adapted accordingly during development and brought to market faster. Last but not least, standards and specifications create trust in AI applications, which improves the market acceptance of one's own products. This is a huge opportunity for German businesses, and for industry and science as a whole: Germany could become a leading centre for AI. Experts interested in participating can register for free on the DIN.ONE collaboration platform.