HCAIM - Socially Responsible AI

The HCAIM (Human-Centered AI Master) program includes a comprehensive module on socially responsible AI as part of its Evaluation phase (Module C), which focuses on the evaluation aspects of AI development. This phase emphasizes understanding the societal impact of AI products, technology trends, and compliance, as well as maintaining the human element throughout the design, development, and evaluation of AI systems.
The Practical Focus lesson plan on socially responsible AI combines theoretical lectures with interactive sessions. Key areas include the societal impact of AI, with lectures on positive and negative externalities and on corporate social responsibility (ISO 26000) in the use of human-centered AI systems. Topics such as fair operating practices in AI-based recruitment, AI-based decision-making, human intervention in AI decisions, and the psychological aspects of working with AI are explored in interactive sessions.
Additionally, the program addresses consumer issues related to AI, including filter bubbles, data storage, and fair practices. Socio-legal aspects are covered in sessions on product responsibility and copyright problems. The module also examines broader implications of AI, including economic gaps (the digital divide), effects on human behaviour, environmental impact (carbon footprint), education, and AI-powered warfare. Each topic includes an interactive session that applies these concepts to real-world scenarios.
This module aims to equip students with the skills to develop and evaluate AI systems responsibly and ethically. For more information, students can visit the HCAIM consortium’s homepage.