High-risk AI

The provisions on so-called high-risk AI systems and tools in the EU AI Regulation (EU AI Act) will come into force on 2 August 2026. In anticipation of this, this page already provides information to raise awareness and allow timely preparation for the requirements of this legislation. Most of the provisions in the AI Regulation focus on AI systems that pose a 'high risk' to the health, safety or fundamental rights of individuals. These AI systems and tools must meet a series of requirements specifically laid down in the Regulation to ensure their reliability and safety. More information can be found below and on the website.

The number of high-risk AI systems and tools in the education domain is expected to be limited. If, after reading the explanation of the AI Act below, you believe that high-risk AI systems or tools may be in use in education at UU now or in the future, please report this via aionderwijs@uu.nl or contact the AI Governance officer for education via y.g.roman@uu.nl. It may turn out not to be high-risk AI, for example because it falls under an exception or because a lower risk category can be achieved with a few adjustments.

Purely scientific research with or on AI systems and tools is, in principle, not subject to the AI Act. However, if the ultimate goal of the research is to deploy and apply the AI system or tool in education and it is potentially high-risk AI, it is advisable to report this in good time. Educational purposes refer to the use of AI with students, by teachers, or in educational support and educational systems.

What are high-risk AI systems and tools in education?

The EU AI Regulation classifies AI systems as high-risk AI if they meet one or both of the following conditions:

  1. The AI system is intended to be used as a safety component of a product covered by Annex I of the AI Act, or is itself such a high-risk product. Examples include AI used in cars, medical devices and elevators.
  2. The AI system falls under one of the areas listed in Annex III of the Act, including the educational sector:
    • AI systems and tools intended to determine access, admission and allocation to education (or a course, track or minor, etc.)
    • AI systems and tools that evaluate learning outcomes and, where appropriate, guide the learning process (e.g. automated grading, learning analytics systems with AI)
    • AI systems and tools that assess the appropriate level of education or determine who has access to a particular level of education (e.g. adaptive learning systems with AI)
    • AI systems and tools that monitor and detect unauthorised behaviour during tests (proctoring)
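As a rough illustration of how these two conditions combine, the sketch below expresses them as a simple check. It is purely illustrative and not an official classification tool: the function and the category labels are our own assumptions for this page, the legal text (Art. 6 and Annexes I and III) is always leading, and the exceptions listed in the next section are not taken into account.

```python
# Purely illustrative sketch; the legal text of the AI Act is always leading.
# The labels below are our own names for the education-related Annex III areas
# summarised above, not official terms from the Regulation.

ANNEX_III_EDUCATION_AREAS = {
    "access_admission_allocation",      # determining access, admission and allocation
    "evaluation_of_learning_outcomes",  # e.g. automated grading, AI learning analytics
    "assessment_of_education_level",    # e.g. adaptive learning systems with AI
    "proctoring",                       # detecting unauthorised behaviour during tests
}


def is_high_risk(safety_component_of_annex_i_product: bool,
                 annex_iii_area: str | None) -> bool:
    """Return True if either of the two high-risk conditions applies."""
    # Condition 1: safety component of (or itself) a product covered by Annex I
    if safety_component_of_annex_i_product:
        return True
    # Condition 2: the system falls under an Annex III area (education areas shown here)
    return annex_iii_area in ANNEX_III_EDUCATION_AREAS


# Example: an AI proctoring tool used during exams
print(is_high_risk(False, "proctoring"))  # True
```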

Please note: the following are not covered by the AI Act

  • AI systems and tools used exclusively for military or defence purposes; Art. 2.3
  • AI systems and tools developed and used solely for the purpose of scientific research and development; Art. 2.6 and 2.8
  • AI systems and tools used purely for personal, non-professional purposes; Art. 2.10. For example, a teacher who experiments with an AI tool purely for their own private use.
  • Open-source AI systems and tools that do not function as independent AI systems and are not used for regulated applications; Art. 2.12

Roles within the AI Act

The requirements of the EU AI Act vary depending on the role (actor). The two most common roles within the UU are:

  1. Deployer in the role of end user or data controller
  2. Provider

In the educational domain, the UU is in most cases a deployer.

Requirements for deployers of high-risk AI systems and tools (Art. 26 et seq.)

  1. Use in accordance with the provider's instructions and ensure human oversight by trained personnel; Art. 26.1-3 and Art. 14.1-4
  2. Ensure that input data under the deployer's own control is relevant and representative; Art. 26.4
  3. Monitor operation; in the event of a risk or serious incident, stop use and notify first the provider and then the supervisory authority; Art. 26.5, 72, 73 and 79.1
  4. Retain logs for at least six months, or longer if required; Art. 26.6
  5. Inform employees in the workplace about the AI system before it is put into use; Art. 26.7
  6. Do not use AI systems that are not registered in the European AI database for high-risk AI; Art. 26.8
  7. Carry out a DPIA where applicable, for example when personal data is processed in the AI system; Art. 26.9, 27.4 and GDPR Art. 35
  8. Inform data subjects (natural persons) when the AI system makes decisions about them; Art. 26.11
  9. Cooperate with supervisory authorities (including the Dutch Data Protection Authority) and provide documentation upon request; Art. 26.12
  10. If AI is used for emotion recognition or biometric categorisation, inform the persons to whom it is applied (unless the use falls under prohibited AI); Art. 50.3
  11. Conduct a Fundamental Rights Impact Assessment (FRIA) on the AI system prior to its use and notify the supervisory authority of the results; Art. 27.1-5. We have chosen the for this purpose.
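Purely as an illustration of how a deployer might keep track of these obligations internally, the sketch below records each point with its article reference in a simple checklist. The structure, names and code are our own assumptions and are not prescribed by the AI Act; the actual assessment is an organisational and legal task, not a coding exercise.

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    description: str    # short summary of the deployer obligation
    articles: str       # article reference(s) in the AI Act
    fulfilled: bool = False

# Hypothetical checklist mirroring the deployer requirements listed above
DEPLOYER_CHECKLIST = [
    Obligation("Use per the provider's instructions; trained human oversight", "Art. 26.1-3, 14.1-4"),
    Obligation("Input data under own control is relevant and representative", "Art. 26.4"),
    Obligation("Operational monitoring; stop use and report risks/incidents", "Art. 26.5, 72, 73, 79.1"),
    Obligation("Retain logs for at least six months", "Art. 26.6"),
    Obligation("Inform employees before the system is put into use", "Art. 26.7"),
    Obligation("Only use systems registered in the EU high-risk AI database", "Art. 26.8"),
    Obligation("Carry out a DPIA where personal data is processed", "Art. 26.9, 27.4; GDPR Art. 35"),
    Obligation("Inform data subjects about decisions made about them", "Art. 26.11"),
    Obligation("Cooperate with supervisory authorities; provide documentation", "Art. 26.12"),
    Obligation("Inform persons subject to emotion recognition or biometric categorisation", "Art. 50.3"),
    Obligation("Complete a FRIA before use and notify the supervisory authority", "Art. 27.1-5"),
]

def outstanding(checklist: list[Obligation]) -> list[str]:
    """Return the obligations that have not yet been marked as fulfilled."""
    return [f"{o.description} ({o.articles})" for o in checklist if not o.fulfilled]

for item in outstanding(DEPLOYER_CHECKLIST):
    print(item)
```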

Requirements for providers of high-risk AI systems and tools (Art. 16 et seq.)

  1. Comply with the requirements referred to in Art. 16.a, which include:
    • Data management: representative, bias-limited and authorised datasets; Art. 10
    • Transparency and human oversight measures built in and clear instructions for the user; Art. 13–14
    • Ensuring the accuracy, robustness and cyber security of the AI system; Art. 15
    • Post-market monitoring, incident reporting and subsequent corrective action; Art. 20, 72–73
  2. Carry out a conformity assessment, draw up the EU declaration of conformity and affix the CE marking; Art. 16.b, f–i and 43
  3. Apply a risk management system across the entire life cycle; Art. 16.c, 9 and 17
  4. Maintain technical documentation and logs; Art. 16.d, 18 and 19
  5. Take corrective action in the event of non-compliance with the AI Act; Art. 16.j and 20
  6. Register the AI system in the EU database; Art. 16.i
  7. Maintain a quality management system (QMS); Art. 17.1
  8. Be available to supervisory authorities such as the Dutch Data Protection Authority and be able to demonstrate compliance; Art. 16.k and 21
  9. Ensure that the AI system is accessible to persons with disabilities; Art. 16.l

In case of non-compliance

Failure to comply with the obligations under the AI Act may result in fines (Art. 99.8). In the Netherlands, the Dutch Data Protection Authority (AP) determines any fines. Each Member State decides for itself whether and to what extent financial penalties are imposed on higher education institutions; these are unlikely to reach the full EU maximum, but penalties comparable to those under the GDPR remain likely.

Is the university a provider, a deployer, or both (according to the EU AI Act)?

See the table below for four different situations:

AI for educational purposes | Provider? | Deployer? | Comments
UU develops and sells (or makes available) an AI system or tool to third parties | yes | no | Only provider obligations apply
UU purchases ready-made AI tools and uses them internally with lecturers, teaching support staff and/or students | no | yes | Only deployer obligations apply
UU develops an AI system or tool and uses it itself with its own lecturers, teaching support staff and/or students | yes | yes | Both sets of obligations apply
UU uses an existing AI system or tool, but fine-tunes (adapts) it in such a way that the purpose/risk of the AI system or tool changes | becomes provider | yes | Both sets of obligations apply after modification

Even if the AI system or tool is used entirely internally, the AI Act applies the strictest set of requirements and the system is treated as if it were being placed on the market.

If the UU is both the deployer and the provider of an AI system or tool, the obligations of both categories apply. In addition, other relevant legislation must of course also be complied with, such as the GDPR, cybersecurity legislation, the WHW, etc. Which other legislation applies depends on the environment and the application in which the AI system operates; this differs in the educational domain, for example, from the domains of C&F, HR, FCA or C&M.
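The four situations in the table can also be read as a small decision rule: developing (or substantially modifying) an AI system makes the UU a provider, while using it internally in education makes the UU a deployer. The sketch below is merely a paraphrase of that table with hypothetical parameter names; the actual role determination is made as part of the risk assessment described below.

```python
def uu_roles(develops_or_substantially_modifies: bool,
             uses_internally_in_education: bool) -> tuple[bool, bool]:
    """Return (provider, deployer) for the four situations in the table above."""
    provider = develops_or_substantially_modifies   # table rows 1, 3 and 4
    deployer = uses_internally_in_education         # table rows 2, 3 and 4
    return provider, deployer


# Examples mirroring the table rows
print(uu_roles(True, False))   # develops and sells to third parties -> provider only
print(uu_roles(False, True))   # purchases ready-made AI and uses it internally -> deployer only
print(uu_roles(True, True))    # develops or fine-tunes and uses it itself -> both roles
```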

AI system and tool risk assessment

To determine whether an AI system and/or tool qualifies as prohibited or high-risk AI, a risk assessment must be carried out. This also applies to chatbots, AI tutors, etc. This analysis must be carried out by a CAICO.