
The AI Act in Practice — What Every Enterprise Deploying AI Needs to Know

ESKOM.AI Team · 2026-04-15 · Reading time: 8 min

AI Act — A New Regulatory Reality

The European Union became the first jurisdiction in the world to create a comprehensive regulation governing artificial intelligence. The AI Act entered into force in August 2024 and is being applied in phases: the first obligations are already enforceable, and further requirements phase in through 2027. Every enterprise deploying AI systems in its operations within the EU must familiarize itself with these rules.

The AI Act does not prohibit artificial intelligence; it regulates it through a risk-based approach. The higher the risk, the stricter the requirements. Most business applications of AI fall into the minimal or limited risk categories, which means relatively light obligations. But there are areas where the requirements are very stringent.

Risk Classification of AI Systems

The AI Act divides AI systems into four risk categories:

  • Unacceptable risk (prohibited) — systems that use subliminal techniques to manipulate human behavior, social scoring by public authorities, and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions). These systems are simply banned.
  • High risk — AI used in recruitment, credit scoring, healthcare, education, the administration of justice, and critical infrastructure. The strictest requirements apply: documentation, testing, transparency, human oversight, and registration in the EU database.
  • Limited risk — chatbots, deepfakes, content-generating systems. Transparency obligation: the user must know they are interacting with an AI.
  • Minimal risk — most business AI applications: spam filters, product recommendations, internal process automation. Minimal or no additional obligations.
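To make the taxonomy concrete, here is a minimal Python sketch of how an internal AI inventory might pre-classify use cases. The category names follow the Act, but the use-case mapping and the `triage` helper are illustrative assumptions, not a substitute for a legal assessment against Article 5 and Annex III.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative first-pass mapping; a real classification requires checking
# each use case against Article 5 (prohibitions) and Annex III (high risk).
USE_CASE_RISK = {
    "social_scoring": RiskCategory.UNACCEPTABLE,
    "cv_screening": RiskCategory.HIGH,
    "credit_scoring": RiskCategory.HIGH,
    "customer_chatbot": RiskCategory.LIMITED,
    "spam_filter": RiskCategory.MINIMAL,
}

def triage(use_case: str) -> RiskCategory:
    # Unknown use cases default to HIGH so they are flagged for manual review.
    return USE_CASE_RISK.get(use_case, RiskCategory.HIGH)

print(triage("cv_screening"))  # RiskCategory.HIGH
```

Defaulting unknown cases to high risk is a deliberately conservative choice: it routes anything unrecognized to a human reviewer rather than silently treating it as harmless.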

Who Is a Provider and Who Is a Deployer of an AI System?

The AI Act distinguishes two key roles. A provider is the entity that creates and places an AI system on the market. A deployer is the entity that uses an AI system in its business operations. The obligations differ for each role — providers face more stringent requirements regarding technical documentation and certification.

A company that purchases an off-the-shelf AI solution from a provider and uses it for its own processes is a deployer. A company that substantially modifies a system, or fine-tunes a model and offers it under its own name, may itself become a provider, with all the associated obligations.
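The distinction can be sketched as a simple decision function. This is a hedged illustration of the rule of thumb described above, not legal advice; the input flags are assumptions chosen for the example.

```python
def determine_role(develops_system: bool,
                   substantially_modifies: bool,
                   markets_under_own_name: bool) -> str:
    """Rough first-pass role check for an AI inventory (illustrative only).

    A company that builds a system, substantially modifies one, or places
    one on the market under its own name is treated here as a provider;
    otherwise it is a deployer of someone else's system.
    """
    if develops_system or substantially_modifies or markets_under_own_name:
        return "provider"
    return "deployer"

# Buying an off-the-shelf tool and using it as-is:
print(determine_role(False, False, False))  # deployer
# Fine-tuning a model and offering it under your own brand:
print(determine_role(False, True, True))    # provider
```

In borderline cases, such as light customization of a purchased system, the answer depends on the facts and should go to legal counsel rather than a script.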

Deployer Obligations for High-Risk Systems

If your company uses a high-risk AI system (e.g., a scoring system in credit decisions, a CV pre-screening tool, a diagnostic support system in healthcare), you must:

  • Ensure human oversight over the AI system's decisions
  • Retain the logs automatically generated by the system for at least six months (see the logging sketch after this list)
  • Conduct a Fundamental Rights Impact Assessment (FRIA) where required, primarily for public bodies and private entities providing public services
  • Inform employees about AI systems that affect them
  • Report serious incidents and malfunctions to the relevant supervisory authority
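For the logging and human-oversight obligations, the sketch below shows one possible shape of a decision log with a six-month retention check. Everything here, including the field names, the `record_decision` helper, and the JSON-lines format, is an assumption for illustration; the Act does not prescribe a specific log format.

```python
import json
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least six months, per the deployer obligation

def record_decision(log_path: str, system_id: str, input_ref: str,
                    ai_output: str, human_reviewer: str, overridden: bool) -> None:
    """Append one AI decision to a JSON-lines log (illustrative schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which high-risk system produced the output
        "input_ref": input_ref,            # reference to the input data, not the data itself
        "ai_output": ai_output,
        "human_reviewer": human_reviewer,  # who exercised human oversight
        "overridden": overridden,          # True if the reviewer changed the AI's decision
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def is_past_retention(entry: dict) -> bool:
    """True once an entry is older than the minimum retention window."""
    ts = datetime.fromisoformat(entry["timestamp"])
    return datetime.now(timezone.utc) - ts > RETENTION
```

Note that six months is a floor, not a ceiling: a deployer would typically check other legal retention duties before deleting anything that is merely past this minimum window.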

Penalties and Timelines

Penalties for violating the AI Act are severe: up to EUR 35 million or 7% of global annual turnover, whichever is higher, for violations involving prohibited systems, and up to EUR 15 million or 3% of turnover for most other violations. The timelines are staggered: the prohibitions on unacceptable-risk systems have applied since February 2025, while requirements for high-risk systems phase in through 2026–2027.
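As a quick illustration of how the "whichever is higher" cap works, the sketch below computes the maximum possible fine for a given turnover. The caps are the figures quoted above; the turnover value is a hypothetical example.

```python
def max_fine_eur(annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of an AI Act fine: the fixed cap or the turnover-based
    cap, whichever is higher."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * annual_turnover_eur)  # prohibited systems: EUR 35M or 7%
    return max(15_000_000, 0.03 * annual_turnover_eur)      # most other violations: EUR 15M or 3%

# Hypothetical company with EUR 2 billion global annual turnover:
print(max_fine_eur(2_000_000_000, prohibited_practice=True))   # 140,000,000.0
print(max_fine_eur(2_000_000_000, prohibited_practice=False))  # 60,000,000.0
```

For large enterprises the turnover-based cap dominates, which is why the percentage figures, not the fixed amounts, are what boards tend to focus on.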

How ESKOM.AI Supports AI Act Compliance

ESKOM.AI helps organizations prepare for AI Act requirements. We offer audits of existing AI systems for risk classification, development of technical documentation, implementation of human oversight and logging mechanisms, and training for compliance teams. Every new AI deployment we deliver is designed with AI Act compliance in mind from day one.

#AI Act #EU regulation #compliance #risk classification #governance