Second edition of the German Standardization Roadmap AI

Update of the groundbreaking results and recommendations for action from the first edition


After nearly a year of intensive work, the 2nd edition of the Standardization Roadmap AI was handed over to Robert Habeck, Vice Chancellor and Federal Minister for Economic Affairs and Climate Action, at the German government's Digital Summit on 09 December 2022 and subsequently published. The Roadmap updates the results of the first edition and presents an expanded and updated analysis of the current state of and need for international standards and specifications for AI. This forms the basis to establish "Artificial Intelligence made in Germany" as a globally recognized seal of approval for trustworthy technology. The Roadmap is part of the German government's AI Strategy and was commissioned by the Federal Ministry for Economic Affairs and Climate Action (BMWK). The development and consolidation of the Roadmap is being accompanied by a high-level coordination group for AI standardization and conformity with a mandate from the German government.


Six overarching recommendations for action and more than 100 standardization needs were formulated with the active participation of more than 570 experts from industry, science, civil society and politics. In addition to the previous key topics of fundamentals, security/safety, testing and certification, industrial automation, mobility and medicine, the second edition also addresses the new focus areas of sociotechnical systems, financial services and energy/environment.

The work focused on the draft regulation on AI (Artificial Intelligence Act) published by the EU Commission in spring 2021, which attributes a central role to standards and specifications pertaining to high-risk AI applications: Requirements for AI systems, such as transparency, robustness and accuracy, are to be technically specified through harmonized European Standards. The second edition also identifies concrete needs for standards and specifications that need to be developed for the implementation of the European Commission's planned AI Act.

Presentation of results and kick-off for the implementation of the second edition

In a free virtual event on 26 January 2023, 10:00 am to 12:30 pm, DIN and DKE, together with the Federal Ministry for Economic Affairs and Climate Action (BMWK), presented the results of the second edition and provided information on how to participate in the practical implementation of the results.


Participation in the implementation of the results is welcomed

DIN and DKE are keen to welcome interested parties from industry, science, the public sector and civil society to help implement the results and actively support the upcoming standardization work. Implementation will help support German industry and science in the international competition in the field of artificial intelligence and create innovation-friendly conditions for the technology of the future. In addition, it aims to foster trust and confidence in AI.

Interested experts can register to participate on the DIN.ONE collaboration platform.


DIN e. V.
Filiz Elmas

Am DIN-Platz
Burggrafenstraße 6
10787 Berlin



Nine key topics related to artificial intelligence

Fundamentals

As a cross-sectional technology, artificial intelligence offers great potential in many industries and areas. The basis for this is reliable, functional and, above all, safe and secure AI. The Standardization Roadmap covers terminologies and significant areas of application, such as language technologies, imaging sensor technology and quantum AI, and describes AI methods and capabilities. Moreover, ethical principles, data quality and other criteria are also addressed. All of these topics form the basis for a common understanding of artificial intelligence and can be reflected in standards and specifications. This promotes interoperability and the interaction of different systems.

Security/safety

AI systems must be safe/secure in several different ways: People who interact with an AI system must be protected (safety), and the data used must not be misused (security). Only a deeper consideration of the security/safety of AI-based technologies and applications can enable their comprehensive use in industry and society. Currently, autonomous AI systems still face the challenge of being able to prove that they do not pose a risk to life and limb. The Standardization Roadmap describes an approach towards achieving the necessary risk reduction and exploiting the market potential of corresponding applications, in particular by designing the interfaces between ethics, law and technology as regards safety. AI also needs trust and confidence in cybersecurity and privacy. Testing and certification can help to instil trust, and they are in turn based on standards and specifications: the focus here is on the life cycle of the data and algorithms used in AI systems.

Testing and certification

AI systems differ from conventional software in a number of ways, in that data and algorithms take on a much more central role. This is why, for instance, the planned AI Act provides for audits of AI applications that could pose a high risk to health, safety or users’ rights to privacy. Quality requirements, for example for the technical reliability as well as the explainability and traceability of AI applications, are central here. In order to strengthen trust in AI applications, quality criteria and test procedures are necessary that describe AI systems technically and make them measurable. Standards and specifications describe requirements for criteria and procedures (with regard to quality, security/safety and transparency) and form the basis for the certification of AI applications. Certification can also contribute to the creation of a trustworthy “AI Made in Europe” brand and so strengthen Europe's international competitiveness in the field of AI. To this end, the Standardization Roadmap recommends developing a horizontal AI quality standard, which can be the basis for further industry-specific tests and certifications.
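What it could mean to make AI quality criteria "measurable" can be sketched in a few lines of Python. The metrics, the toy model and the perturbation scheme below are purely illustrative assumptions, not part of the Roadmap: accuracy on clean data, and a simple robustness score defined here as accuracy under small random input perturbations.

```python
import random

def accuracy(model, samples):
    """Fraction of labelled samples the model classifies correctly."""
    correct = sum(1 for x, label in samples if model(x) == label)
    return correct / len(samples)

def robustness(model, samples, noise=0.1, trials=5, seed=0):
    """Accuracy under small random input perturbations (illustrative metric)."""
    rng = random.Random(seed)
    perturbed = [
        ([v + rng.uniform(-noise, noise) for v in x], label)
        for x, label in samples
        for _ in range(trials)
    ]
    return accuracy(model, perturbed)

# Toy "model": classify by the sign of the feature sum.
model = lambda x: int(sum(x) > 0)
samples = [([0.9, 0.4], 1), ([-0.8, -0.3], 0), ([0.7, -0.2], 1), ([-0.6, 0.1], 0)]

print(accuracy(model, samples))     # 1.0 on clean data
print(robustness(model, samples))   # 1.0: margins exceed the noise level
```

A real test procedure under a horizontal quality standard would of course define the metrics, datasets and acceptance thresholds normatively; the point of the sketch is only that such criteria can be expressed as reproducible computations.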

Sociotechnical systems

AI technology always operates in the context of humans and the organizational environment. For successful AI solutions, therefore, it is not enough to look only at the technology. Instead, the sociotechnical system in which artificial intelligence is used and interacts with humans should also be considered. The goal is to identify users’ needs and to design AI in such a way that it supports them in their tasks in the best possible way. In particular, small and medium-sized enterprises and start-ups should be enabled to integrate AI technologies into their business models. Standardization's task is to include all relevant groups of people and perspectives, taking sociotechnical aspects into account. In the draft regulation of the AI Act, for example, human oversight, the possibility of intervention and transparency play significant roles for AI systems. Technical and social components must therefore be thought of from the human perspective and aligned with this in mind during the development process.

Industrial automation

Artificial intelligence is a key technology when it comes to the digital transformation of the manufacturing sector. An important function is ascribed to the digital mapping of physical reality, the “digital twin”. The Standardization Roadmap outlines current challenges related to data models for the use of AI in industrial automation. The Roadmap also addresses how humans and machines interact and how AI systems can be integrated. Standards and specifications are essential in industrial automation; they promote cross-company interoperability and help in the implementation of regulatory frameworks.

Mobility

The use of artificial intelligence offers many advantages in the mobility sector, for example, when self-learning systems take over complex automated control functions and optimize traffic flows or mobility chains. Standards and specifications for the mobility sector can:

  • form a basis for operationalization of the planned European AI Act by enabling objective methods for testable development according to trustworthiness-by-design;
  • support dynamic, continuous (re)certification or type approval in the sense of continuous system development;
  • define interfaces and minimum requirements for interoperability, data exchange, trustworthiness and safety for automated mobility systems.
Medicine

Medicine of the future will be unthinkable without AI - whether in diagnostics, therapy, early detection or in everyday care.
At the same time, the use of technology in this area is challenging: it concerns not only health and personal data, but also letting people benefit from medical progress quickly and safely.
Standards and specifications can help:

  • increase the usability of data for AI-based systems in medicine;
  • check the performance and safety of AI-based medical devices;
  • create trust and acceptance among users and patients;
  • efficiently implement the quality infrastructure of the regulatory framework for AI in medical devices.
Financial services

Money makes the world go round: Financial services ensure social participation for everyone and are also demanding and highly sensitive products, particularly in a digitized world. Cash has long been replaced by pure data streams, terrain seemingly perfect for the use of artificial intelligence. How can these new opportunities be used responsibly without overlooking the risks? Can customer behaviour models become data leaks? What data should be allowed to be used for decision models, and how is it determined when enough information is available to make fair decisions? How do AI models fit into conventional bank risk management? Standards and specifications make answering these questions much easier.

Energy/environment

Energy is part of the critical infrastructure, and the energy transition in particular is an important political issue. Economic efficiency, security of supply, climate protection and the switch to renewable energies are on the agenda at the same time. In the future, smart grids, which combine energy technology with information and communication technologies, will become increasingly important. Artificial intelligence must be integrated into this system of data models and system architectures. AI is also a significant tool for achieving the 17 Sustainable Development Goals defined by the United Nations and offers potential uses in the context of the European Green Deal. For example, AI can be used to develop cross-sectoral behavioural recommendations for market players and consumers to minimize the ecological footprint. In the environmental sector, AI can also be a tool for greater resource efficiency in industry and for processing large volumes of data in various sectors of the economy. In this context, standardization contributes to Germany's transformation into a climate-neutral industrialized country.

Central recommendations for action

Horizontal conformity assessment and certification programme for trusted AI

How can the requirements of industry, public authorities and civil society on AI be made objectively verifiable? For economic growth and the successful use of AI systems, taking European values into account, a conformity assessment and certification programme is needed. Based on reliable and reproducible tests, sound statements on AI trustworthiness can be made.

Establish data infrastructures and data quality standards for the development and validation of AI systems

The quality of an AI system often depends on the quality of the data used. The extent to which the German AI industry and start-ups in particular can access corresponding datasets is thus a strategic competitive factor. Suitable infrastructures are therefore required to collect, describe and provide datasets. Standards and specifications ensure interoperability and define quality requirements.
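As an illustration of what machine-checkable data quality requirements might look like, the following sketch computes three hypothetical indicators for a small labelled dataset: completeness (share of non-missing fields), duplicate ratio and label balance. The choice of metrics, the `label` field and the record format are assumptions for illustration, not prescribed by any existing standard.

```python
def dataset_quality_report(records, label_key="label"):
    """Compute simple, reproducible quality indicators for a labelled dataset.

    Illustrative metrics: completeness (share of non-missing fields),
    duplicate ratio and label counts.
    """
    fields = sorted({k for r in records for k in r})
    total = len(records) * len(fields)
    present = sum(1 for r in records for f in fields if r.get(f) is not None)
    seen, duplicates, labels = set(), 0, {}
    for r in records:
        key = tuple(sorted(r.items()))  # exact-duplicate detection
        if key in seen:
            duplicates += 1
        seen.add(key)
        labels[r.get(label_key)] = labels.get(r.get(label_key), 0) + 1
    return {
        "completeness": present / total,
        "duplicate_ratio": duplicates / len(records),
        "label_counts": labels,
    }

records = [
    {"text": "a", "label": "pos"},
    {"text": "b", "label": "neg"},
    {"text": "b", "label": "neg"},   # exact duplicate
    {"text": "c", "label": None},    # missing label
]
report = dataset_quality_report(records)
print(report)  # completeness 0.875, duplicate_ratio 0.25
```

A data quality standard would additionally have to fix thresholds and domain-specific indicators (e.g. representativeness); the sketch only shows that such requirements can be evaluated automatically and reproducibly.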

Understand humans as part of the system in all phases of the AI life cycle

What degree of transparency is sufficient in which context and for which target group? How can human oversight be implemented in AI systems? And what information must be available as a basis for human intervention in the system? These are all questions that must be thought of from the human perspective, and the technical and social components must be developed and aligned accordingly. In “lighthouse projects”, concrete tests must be carried out to determine how affected and involved people can be integrated into all phases of the AI life cycle. In addition, standards for the sociotechnical aspects of the planned AI Act must be developed; here it is particularly important that all relevant target groups are involved in a balanced manner. Finally, the sociotechnical perspective, previously underrepresented in standardization, should be further developed by experts.

Develop specifications for the conformity assessment of learning systems in the medical field

Learning AI systems in medicine can be continuously improved, for example, via new training data, information on faulty behaviour and corrections obtained. At the same time, the correspondingly high safety requirements must be met. This requires a (re-)conformity assessment with suitable test methods. In order to pave the way for market access for such test methods, the Roadmap calls for the specification of boundary conditions that permit the automated release of continual learning systems. For this purpose, the initiation of medicine-specific subprojects in cooperation with the relevant stakeholders is recommended in order to implement project results in standards, specifications and generally practicable test methods.

Develop safe and trusted AI applications in mobility through best practices

The use of AI technologies in the context of mobility is characterized by complex boundary conditions: interaction with a constantly changing environment and many other actors. Malfunctions can pose high risks for humans and the environment. Trustworthy AI systems are therefore essential: the planned AI Act specifies various aspects of trustworthiness, which are to be concretized on the basis of standards and specifications across the entire life cycle of an AI system. A best practice catalogue should support the efficient development and safeguarding of systems in operation. The Roadmap also recommends the development of standards and specifications that define minimum requirements, especially for safety and essential trust aspects.

Develop overarching data standards and dynamic modelling techniques

Standards and specifications are needed to overcome data system boundaries and to develop reference procedures. The Standardization Roadmap recommends the following pilot projects:

  • Establishing a common terminology, semantics, taxonomy, and data mappings and schemas based on them in the domains of materials science/construction to determine energy efficiency and environmental impacts.
  • Developing an industry-independent communication format for determining the energy and resource consumption of goods and services.
  • Developing a methodology to assess the runtime, accuracy, and sustainability performance of AI systems.
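The third pilot project could, in a very reduced form, start from measurements like the following sketch: wall-clock runtime and throughput of a batch of predictions, plus a rough energy estimate derived from an assumed average power draw. The 50 W figure and the toy model are placeholders; a real assessment methodology would rely on hardware counters or external metering.

```python
import time

def profile_inference(predict, inputs, avg_power_watts=50.0):
    """Measure runtime and throughput of a batch of predictions and derive
    a rough energy estimate (runtime x assumed average power draw).

    avg_power_watts is a placeholder assumption, not a measured value.
    """
    inputs = list(inputs)
    start = time.perf_counter()
    outputs = [predict(x) for x in inputs]
    runtime_s = time.perf_counter() - start
    return {
        "runtime_s": runtime_s,
        "throughput_per_s": len(inputs) / runtime_s if runtime_s else float("inf"),
        "energy_joules_est": runtime_s * avg_power_watts,
        "outputs": outputs,
    }

# Toy "model": squares its input.
metrics = profile_inference(lambda x: x * x, range(10_000))
print(metrics["runtime_s"], metrics["energy_joules_est"])
```

Accuracy would be assessed separately against labelled reference data (as in any conformity test); combining the three dimensions into one sustainability score is exactly the kind of methodological question the pilot project would have to settle.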

Recording of the presentation of the results and the kick-off for the implementation of the second edition by DIN and DKE together with the BMWK.

DIN and DKE have produced the second edition on behalf of the Federal Ministry for Economic Affairs and Climate Action, and handed it over to ...

The work on the Standardization Roadmap AI is being steered and accompanied by a high-level coordination group.

Robot finger touching a human finger

Ethical aspects in standardization regarding AI in autonomous machinery and vehicles.

Dr. Gerhard Schabhüser from the German Federal Office for Information Security was involved in the development of the Standardization Roadmap AI as a permanent guest of the steering group.
The Federal Association for Information Technology, Telecommunications and New Media (Bitkom e. V.) represents more than 2,600 digital economy companies in Germany.
Dr. Wolfgang Hildesheim from IBM Deutschland has been actively involved as a member of the steering group for the Standardization Roadmap AI by DIN and DKE.