UU AI Ethical Code of Conduct
We recognise the potentially significant impact of AI technology on our institution, the academic community, our values and on our students and staff. With the goal of responsible innovation with and through AI, we believe it is important to establish a number of safety and management measures for the use of AI within our university.
0. UU strategy & existing codes of conduct & public and academic values
The university provides education in line with the UU educational model, in which the responsible use of AI is an integral part. Our teachers promote innovative teaching, stimulate critical thinking and prepare students for their future careers and roles in society. At the same time, our teachers ensure the correct use of AI and contribute to the development of basic academic skills. As a basic principle, the university maintains agency over the application and deployment of AI.
- Our researchers, teachers and teaching support staff design and develop AI with ethically sound intentions, both within our university teaching and in our collaborative relationships.
- Our faculty and teaching support staff use AI with the intention of enhancing the teaching and learning experience and promoting student success in line with the UU educational model.
- Our employees and students only use and/or develop AI that complies with Dutch, European and applicable international law.
- Our staff and students make every effort to use and/or develop only AI that is consistent with our democratic principles and the values of our rule of law.
- Our staff and students actively ensure that the use, generation and promotion of AI is consistent with the principles of open science, fair data use (including no deliberate omission of data) and the ethical conduct of the academic community.
- Our staff and students do not intentionally use, generate or promote fake or deepfake content (images, code, text, audio, videos, etc.) produced with AI that may cause harm to individuals, the university, its reputation, the integrity of academic teaching and research, or society at large: the do-no-harm principle.
1. Human autonomy
AI systems must respect human autonomy and dignity. Humans must be able to adequately direct and intervene in decisions made by AI. Fundamental rights, human autonomy and oversight must be ensured throughout the life cycle of an AI system.
- The university does its utmost to protect digital sovereignty and our academic values from Big Tech dominance and geopolitical influences that threaten our democracy.
- Our teachers and researchers will ensure that there will always be human oversight when developing or using algorithms in critical use scenarios.
- AI can help our teachers and researchers and is not primarily meant to replace our staff.
2. Technically robust and safe
AI should be secure and robust in design and use, with a focus on reliability and protection against system failure or misuse. This includes resilience to attacks, reliability, reproducibility and overall security of AI systems to avoid unintended negative effects.
- Our teachers and educational support staff take all necessary precautions (as far as possible) to ensure the proper functioning and safe use of AI, while safeguarding the privacy of both students and staff.
- Our staff and students do not use prohibited AI. For high-risk AI systems or algorithms that may affect human rights, we work in accordance with the GDPR, the EU AI Act and any additional UU terms and conditions.
- Investigation of prohibited or high-risk AI is carried out within appropriate legal sandbox conditions, if required by law.
3. Social and environmental well-being
Promote sustainability and environmental friendliness, and strengthen the positive social impact of AI on democracy and society. AI should contribute to the common good, take into account potential social and environmental impacts and strive for positive impact.
- Our staff and students make every effort to use AI in line with the SDGs, putting the protection of our planet first. Our staff and students consider the negative impact of using AI on our environment and use AI only with the intention of adding value.
- Our staff and students do not intentionally use or develop AI for applications that harm social welfare or living beings.
- In our communications, we ensure an ethically responsible image of AI to avoid the suggestion that AI has human capabilities.
4. Privacy and data security
Respecting privacy, data quality and integrity, and facilitating access rights. AI systems should comply with privacy rules and handle (personal) data with care, while respecting the rights and dignity of individuals.
- Our staff and students protect their own IP (intellectual property) and privacy and respect the IP and privacy of others, and make responsible use of AI in line with the GDPR and Responsible AI criteria (EU Trustworthy AI).
- Our staff and students do not use AI for the purpose of gaining unauthorised access to data, websites or systems, including restricted parts of university networks or databases.
- Our researchers are committed to responsible and fair use of AI and data management and privacy. They respect the AI policies of funding agencies, research partners and publishers.
- Our staff and students only use AI that complies with Dutch knowledge-security regulations, in order to protect our academic values and IP.
5. Transparency
Ensure traceability, explainability and clear communication about AI systems. The operation and outcomes of AI should be understandable and traceable; users should know when they are dealing with an AI system.
- Staff and students using AI ensure that they are aware of the rules for responsible use of AI and apply them accordingly.
- Employees and students are actively reminded to use AI responsibly, with the Executive Board facilitating the necessary support.
- Staff and students are transparent about the use of AI (including, for example, the use of digital twins or personal clones) in their studies and work.
6. Diversity, non-discrimination and fairness
AI should be designed with inclusion in mind so that everyone can benefit from it and avoid unequal treatment or bias. Avoid unfair bias, promote accessibility and involve stakeholders in the process.
- When selecting and using AI systems and tools, our staff and students make every effort to deploy AI that is accessible, transparent, fair, inclusive, non-discriminatory and respectful of diversity.
- Our staff and students do not use AI to deliberately generate content that is biased, misleading, deceptive or deliberately incomplete, or that promotes false information, political or religious views, or generates harmful stereotypes that seriously impede open academic debate.
7. Responsibility
There should be clear roles and responsibilities for the design, implementation and use of AI, enabling oversight and corrective action in case of undesirable consequences. Provide audit capabilities, document, minimise negative impacts, mitigate risks and provide accountability and redress mechanisms.
- AI governance, policies, risk mitigation and documentation strategies are developed under the responsibility of the Directorates, Faculty Boards and the Executive Board.
- Deans are responsible for the AI awareness of the academic community and the legitimate adoption of AI (WHW Art 9.15).
- Our staff and students are personally responsible for the content they generate with (the help of) AI.