EU Commission unveils draft guidelines on prohibited AI practices under the AI Act

In a significant move toward bolstering the ethical oversight of artificial intelligence, the European Commission has published draft guidelines outlining prohibited AI practices as defined under the AI Act. The initiative is part of the EU’s broader ambition to balance technological innovation with the safeguarding of fundamental rights and European values.
Designed as a tool to aid stakeholders, from tech developers to legal experts, the guidelines offer both legal explanations and practical examples. This dual approach is intended to help organizations understand and comply with the requirements set forth by the AI Act, the world's first comprehensive legal framework on artificial intelligence. Although the document sheds light on the Commission’s interpretation of the prohibitions, it remains non-binding. The definitive legal interpretations will ultimately rest with the Court of Justice of the European Union (CJEU).
A clear stance on unacceptable AI applications
The draft guidelines provide a detailed overview of AI practices that the Commission deems unacceptable due to their potential risks. Among the practices highlighted are harmful manipulation, social scoring, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. These activities are considered particularly concerning because they can undermine individual freedoms and create significant societal risks. The AI Act classifies AI systems into several risk categories:
- Prohibited practices: AI systems posing an unacceptable risk to fundamental rights and public trust; these are banned outright.
- High-risk systems: applications subject to stringent regulatory oversight before and after they are placed on the market.
- Transparency obligations: systems that must meet specific disclosure requirements.
The guidelines focus on clarifying the practices that fall under the prohibited category. In doing so, the Commission aims to foster a safer digital ecosystem in which innovation does not come at the expense of ethical standards.
The prohibitions cover the placing on the market, putting into service, or use of AI systems engaged in manipulative, exploitative, or harmful practices, such as social control or surveillance, that violate fundamental rights and EU values. Such practices undermine core principles enshrined in the EU Charter of Fundamental Rights, including human dignity, freedom, equality, democracy, and the rule of law, as well as the rights to non-discrimination, data protection, privacy, freedom of expression, and a fair trial.
Next steps
The AI Act entered into force on 1 August 2024 and will become fully applicable two years later, on 2 August 2026. The governance rules and the obligations for general-purpose AI models take effect earlier, on 2 August 2025, while the rules for high-risk AI systems embedded in regulated products benefit from an extended transition period until 2 August 2027.
While the draft guidelines have received the Commission’s approval, they have not yet been formally adopted. Stakeholders are encouraged to review the document and prepare for its potential implications as the regulatory framework evolves.
As the EU continues to lead the way in AI regulation, these guidelines could set a global benchmark for ensuring that the deployment of AI technologies aligns with ethical and human rights considerations.