Section 10
Ethical principles and AI safety
Responsible AI, internal policy, and safeguarding obligations.
10.1 Ethical principles
Transparency and explainability
Staff, students and citizens should be able to understand when AI is used and what role it plays in a decision.
Accountability and human oversight
AI supports judgement rather than replacing it, with clear lines of responsibility for outcomes.
Non-discrimination and fairness
Systems should be checked for bias across gender, ethnicity, geography and other characteristics.
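To make the fairness check concrete, a first-pass audit often compares positive-outcome rates across demographic groups. The sketch below is illustrative only, not part of the concept: the field names `group` and `admitted`, and the 0.8 review threshold (a common heuristic), are assumptions for the example.

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the positive-outcome rate for each demographic group.

    `records` is a list of dicts; `group_key` and `outcome_key` are
    hypothetical field names chosen for this illustration.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for r in records:
        g = r[group_key]
        totals[g] += 1
        if r[outcome_key]:
            positives[g] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.

    A widely used heuristic flags ratios below 0.8 for human review;
    the threshold itself is a policy choice, not a fixed rule.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Example: group A selected at 0.8, group B at 0.4 -> ratio 0.5,
# which would fall below the 0.8 heuristic and prompt review.
records = (
    [{"group": "A", "admitted": i < 8} for i in range(10)]
    + [{"group": "B", "admitted": i < 4} for i in range(10)]
)
rates = selection_rates(records, "group", "admitted")
ratio = disparate_impact_ratio(rates)
```

A check like this covers only one notion of fairness (demographic parity); an audit in line with this principle would also look at error rates per group and at proxy variables such as geography.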
Data protection
Only necessary data should be collected and processed in line with the personal data framework.
Security and reliability
Tools should be tested before deployment, supported by contingency plans and regular audits.
Inclusivity and accessibility
Solutions need to work for users with different abilities and different levels of technical confidence.
Environmental sustainability
The concept also weighs the environmental footprint of AI systems and prefers energy-efficient platforms where possible.
10.2 Internal policy
The internal AI policy is developed in Phase I and approved in Phase II. It covers approved tools, confidentiality rules, verification requirements, incident reporting, and staff accountability.
10.3 Protection of children and vulnerable groups
The concept adds specific safeguards for children, including limits on data collection, restrictions on student profiling without explicit consent, and the protection of a learner's right to make mistakes without long-term profiling consequences.