EU AI Act: What companies can expect in 2025 – and how to prepare
With the EU AI Act, the European Union has created a comprehensive legal framework that regulates the use of artificial intelligence (AI) in companies. The aim is to promote innovation while minimizing risks for people, society and fundamental rights. For companies, this means specific obligations – and the first deadlines already fall in 2025.
But what exactly does the AI Act require? Which systems are affected? And how can companies make sure they are compliant in good time? In this article, we give an overview of the most important dates and requirements – and show how Syngenity® GmbH can support you along the way.
What is the EU AI Act?
The EU AI Act is the first comprehensive AI regulation worldwide. It distinguishes between four risk classes of AI systems: minimal, limited, high and unacceptable risk. Depending on the classification, different transparency, security, documentation and control requirements apply.
The focus is particularly on so-called high-risk AI systems, for example in the areas of biometric identification, critical infrastructure, personnel recruitment or credit scoring. However, providers of generative AI and general-purpose AI (GPAI) – i.e. systems that can be used for many purposes – must also fulfill new obligations.
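To make the four-tier model a little more concrete, here is a minimal Python sketch of how a company might keep an internal inventory of its AI systems mapped to the Act's risk classes. The example systems and their classifications are purely illustrative assumptions – a real classification has to follow the criteria of the AI Act itself.

```python
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AISystem:
    """One entry in a company's internal AI inventory (illustrative)."""
    name: str
    purpose: str
    risk_class: RiskClass


# Hypothetical inventory entries; a real classification must follow
# the criteria of the AI Act, not this simplified mapping.
inventory = [
    AISystem("cv-screening", "ranking job applicants", RiskClass.HIGH),
    AISystem("support-chatbot", "customer FAQ assistant", RiskClass.LIMITED),
    AISystem("spam-filter", "internal e-mail filtering", RiskClass.MINIMAL),
]

high_risk = [s.name for s in inventory if s.risk_class is RiskClass.HIGH]
print("Systems needing a conformity assessment:", high_risk)
```

An inventory like this is also a useful starting point for the later deadlines, since it shows at a glance which systems fall under which set of obligations.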
Important deadlines in 2025
The AI Act takes effect in stages. Several key requirements apply as early as 2025:
June 2025: Mandatory labeling for generative AI
Providers of generative AI systems – for example for text, image or speech generation – must ensure that users can clearly recognize when content has been generated by an AI. This transparency obligation is intended to prevent AI-generated content from being mistaken for human-created content and used for disinformation or manipulation.
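What such labeling can look like in practice depends on the medium. As a minimal sketch, the example below attaches both a visible notice and machine-readable metadata to generated text; the placeholder generate_text function stands in for the actual model call, and the field names are illustrative assumptions, not prescribed by the Act.

```python
from datetime import datetime, timezone


def generate_text(prompt: str) -> str:
    # Placeholder for a call to a real generative model.
    return f"Draft reply to: {prompt}"


def label_ai_output(prompt: str) -> dict:
    """Wrap generated text with a visible notice and machine-readable metadata."""
    text = generate_text(prompt)
    return {
        "content": text,
        "visible_notice": "This text was generated by an AI system.",
        "metadata": {
            "ai_generated": True,
            "model": "example-model-v1",  # illustrative identifier
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


print(label_ai_output("customer asks about delivery times"))
```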
August 2025: Obligations for general-purpose AI providers
Providers of GPAI systems must meet extensive requirements from August 2025. These include, among other things:
- Transparency about training data and model architecture
- Documentation of system capabilities and limitations
- Risk assessment and risk mitigation measures
- Traceability and options for human oversight
These obligations apply regardless of whether the system is directly classified as high-risk AI – the broad applicability alone makes GPAI a special focus of regulation.
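The Act does not prescribe one fixed format for this documentation. As a rough, purely illustrative sketch, the points from the list above could be collected in a single machine-readable record like the one below; every field name and value is an assumption for illustration, not an official template.

```python
import json

# Illustrative GPAI documentation record covering the obligations listed above.
gpai_documentation = {
    "model_name": "example-gpai-v1",
    "training_data": {
        "summary": "Publicly available web text plus licensed corpora (illustrative).",
        "known_gaps": ["low coverage of minority languages"],
    },
    "architecture": {"type": "transformer", "parameters_billions": 7},
    "capabilities_and_limits": {
        "intended_uses": ["text summarization", "drafting"],
        "known_limitations": ["may produce factually wrong statements"],
    },
    "risk_assessment": [
        {
            "risk": "generation of disinformation",
            "mitigation": "output labeling and content filters",
        },
    ],
    "human_oversight": "Outputs in customer-facing workflows require human review.",
}

# Persist the record so it can be versioned alongside the model.
with open("gpai_documentation.json", "w", encoding="utf-8") as f:
    json.dump(gpai_documentation, f, indent=2, ensure_ascii=False)
```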
October 2025: Risk assessment and compliance testing for high-risk AI
Companies that develop or use high-risk AI systems must carry out a comprehensive risk assessment by October 2025. A conformity assessment is also required to ensure that the system meets the requirements of the AI Act. This assessment can be carried out internally or – depending on the system – by a notified body.
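How such an assessment is tracked internally is up to each company. As a simple sketch, a minimal checklist structure for preparing the conformity assessment might look like the following; the items and evidence references shown are illustrative and far from exhaustive.

```python
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    """One requirement to verify before the conformity assessment (illustrative)."""
    requirement: str
    evidence: str = ""
    fulfilled: bool = False


@dataclass
class HighRiskAssessment:
    system_name: str
    items: list[ChecklistItem] = field(default_factory=list)

    def open_items(self) -> list[str]:
        """Return all requirements that still lack evidence of fulfillment."""
        return [i.requirement for i in self.items if not i.fulfilled]


assessment = HighRiskAssessment(
    system_name="cv-screening",
    items=[
        ChecklistItem("Risk management process documented", "RM-2025-03", True),
        ChecklistItem("Training data governance reviewed"),
        ChecklistItem("Human oversight measures defined"),
    ],
)
print("Open items before conformity assessment:", assessment.open_items())
```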
December 2025: Mandatory registration for high-risk AI systems
From December 2025, high-risk AI systems may only be placed on the market if they have previously been registered in the official EU database for AI systems. This measure is intended to create transparency and enable better market surveillance.
What does this mean for companies?
The requirements of the AI Act are extensive – and they don’t just affect large tech companies. Medium-sized companies that develop, use or import AI systems also have to deal with the new obligations. Those who react too late risk fines, market barriers or reputational damage.
At the same time, the AI Act also offers opportunities: companies that act early can position themselves as responsible and trustworthy providers. They create clear structures, improve their data and IT security and strengthen the trust of customers, partners and supervisory authorities.
How Syngenity® GmbH supports you
Implementing the AI Act requires technical, legal and organizational expertise. This is exactly where Syngenity® GmbH comes in. We support companies on their path to AI compliance – pragmatically, efficiently and tailored to your situation.
Our services include, among other things:
- Analysis of your existing AI systems and classification according to risk classes
- Support in preparing risk assessments and declarations of conformity
- Development of transparency and documentation guidelines
- Integration of AI Act requirements into existing management systems (e.g. ISO 27001)
- Preparation for audits and inquiries from authorities
Our aim is not only to make you compliant, but also resilient and future-proof. Because responsible AI is not an obstacle – it is a competitive advantage.
Conclusion: act now instead of catching up later
The EU AI Act is here – and it is changing the rules of the game for the use of artificial intelligence in Europe. Companies that use or develop AI should familiarize themselves with the requirements now and take concrete measures. The 2025 deadlines are closer than they seem.
With Syngenity® GmbH, you have an experienced partner at your side who will guide you safely through the regulatory requirements. Let us work together to ensure that your AI systems are not only innovative, but also legally compliant and trustworthy.
Further information and contact details can be found at www.syngenity.com