Research Article
Contents
- Article title
- Abstract
- Keywords
- Relevance to practice
- 1. Introduction
- 2. EU AI Act (Transparency & Human Oversight Requirements)
  - 2.1. Overview of the EU AI Act (parties and classification of systems)
    - Risk-based approach
    - Roles in AI systems in the EU AI Act
  - 2.2. Transparency requirements in the EU AI Act
    - Risk-based requirements
    - Complaint mechanism
  - 2.3. Human oversight requirements in the EU AI Act
    - Development phase
    - Monitoring
    - Exemption
  - 2.4. Fairness principle under the EU AI Act
- 3. Explainable AI (XAI)
  - 3.1. What is XAI?
  - 3.2. XAI characteristics
  - 3.3. XAI design considerations
  - 3.4. Example model
  - 3.5. XAI techniques
    - Local Interpretable Model-agnostic Explanations (LIME)
    - SHapley Additive exPlanations (SHAP)
  - 3.6. Overview
    - Ease of implementation
  - 3.7. Limitations
- 4. What does it mean for the internal auditor?
  - 4.1. The role of the internal auditor
    - Advisory capacity
    - Assurance function
  - 4.2. The role of XAI in assessing transparency and human oversight
    - Transparency and explainability
    - Human oversight
  - 4.3. Auditing AI systems leveraging XAI
    - AI standards and frameworks
- 5. Conclusion
- References