Prohibited AI
Since 2 February 2025, the AI Act has prohibited certain AI systems and tools. Prohibited AI systems pose an unacceptable risk: they may, for example, restrict people's freedom of choice, discriminate or manipulate, or conflict with fundamental human rights. The AI Act does not replace the GDPR; both laws apply.
The use of prohibited AI systems and tools in the educational environment is therefore not permitted. If there is a suspicion that prohibited AI is being or will be used in education (i.e. by educational support staff, teachers or students, in an educational environment or during an internship), this must be reported immediately via: aionderwijs@uu.nl.
This email address can also be used for advice on prohibited AI systems or tools, or on the use of AI in a prohibited use case. Using these AI tools in an educational pilot project, or conducting research (e.g. with USO or EMP resources) into these systems with the aim of using them in UU education or offering them to other parties, is likewise not permitted. This also covers the use of prohibited AI in, for example, master's research projects, graduation assignments or internships.
Which AI systems and tools are prohibited (Art. 5)
- "social scoring" based on certain social behaviour or personal characteristics;
- "predictive policing": assessing or predicting the risk of individuals committing criminal offences;
- creating or expanding facial recognition databases (e.g. via scraping);
- manipulating or misleading people;
- remote biometric identification for law enforcement purposes (with some exceptions);
- biometric categorization, whereby people are classified into certain sensitive categories on the basis of biometric data;
- emotion recognition in the workplace and in education (Art. 5.1.f).
Recital 18 of the AI Act explains that the term emotion recognition refers to emotions such as happiness, sadness, anger, fear, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not cover physical states such as pain or fatigue. The mere detection of clearly visible expressions, gestures or movements is not prohibited, unless they are used to identify or infer emotions. An example of emotion recognition in the educational environment is the use of AI-assisted VR glasses that deliberately record students' emotions such as aggression or fear. This prohibition applies to all employees, not only to students.
If an AI system or tool does not fall under the category of Prohibited AI, this does not automatically mean the tool is permitted. At the very least, an AI risk assessment is required. The same applies to algorithms that could have potentially harmful effects on education, our educational systems, or human rights.
Governance of prohibited AI
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) supervises compliance with the AI Act and can impose fines in the event of violations.
As a student, teacher or educational support staff member, you are not authorized to determine independently whether an AI system falls under Prohibited AI or another high-risk category of the AI Act; the law may, for example, contain an exception. Such determinations can only be made by a certified AI Compliance Officer (CAICO) or an AI Governance Officer. If in doubt, use the designated email address, anonymously if necessary.
For each specific use case, AI compliance officers (CAICO) carry out a thorough assessment to determine whether the AI system is indeed subject to the prohibitions set out in the law. This requires careful analysis and may depend on the context of use, as well as on the university's role: as data controller, as third-party user of the AI, or, for example, as facilitator of the AI system.
The responsibility for safeguarding the UU educational environment against prohibited AI lies with the faculty vice-deans of education and, by mandate, with the directors of education/programme directors.