
Academic Handbook MSc Artificial Intelligence and Ethics

Advanced Topics in Responsible Artificial Intelligence Course Descriptor

Course code: LDSCI7230
Discipline: Computer Science
UK Credit: 30
US Credit: N/A
FHEQ level: 7
Date approved: June 2023
Core attributes: N/A
Pre-requisites: None
Co-requisites: None

Course Summary

This course explores the social and ethical dimensions of responsible artificial intelligence. AI systems and their development can pose risks to individuals and societies. Responsible AI encompasses the design, development, implementation, and regulation of AI systems to ensure that they are ethical, safe, and trustworthy. Responsible AI approaches are naturally interdisciplinary: they include investigations into ethics, societal impact, regulation, design, and responsibility for AI systems, with particular attention to AI that will have a significant impact on our lives. A particular focus is placed on sustainability in terms of societal and environmental impact.
Students will cover a range of advanced topics in Responsible AI, with particular attention to recent, current, and near-future issues in AI. Students will gain an understanding of how philosophy can interact with other disciplines in discussions of artificial intelligence, and how these approaches can inform the development and management of new technologies. Each topic is examined through a philosophical lens and through at least one other disciplinary approach with particular relevance to that topic. Additional disciplinary approaches may include (but are not limited to) business studies, digital humanities, law, politics, sociology, psychology, and computer science. Topics will be research-led, informed by the expertise of Northeastern University London.

Course Aims

The aims of the course are to: 

  • Provide a foundation of philosophical concepts and values relevant to current and emerging issues in responsible AI.
  • Develop the critical skills needed to apply concepts and approaches from a variety of disciplines to ethical and societal questions in AI.
  • Develop an understanding of how philosophical thinking can contribute to the beneficial development of artificial intelligence.
  • Enable students to form, elaborate, and defend their own views in this field, to appreciate the relevance and value of diverse viewpoints, and to contribute constructively to work aimed at addressing concrete problems.
  • Develop students’ capacity to engage constructively with peers, process arguments in a charitable manner, and engage in productive class debate.

Learning Outcomes

On successful completion of the course, students will be able to:

Knowledge and Understanding

K1d Demonstrate sound knowledge and sophisticated understanding of key questions and debates in the field of Responsible AI from an interdisciplinary perspective.
K3d Evaluate societal dimensions of AI and data practices and demonstrate a comprehensive understanding and critical awareness of key philosophical issues (ethical, cultural, privacy, or policy) surrounding data use, data processing, and AI.
K4d Critically review and analyse key issues in the development and application of artificial intelligence in society, identify issues and limitations, and propose directions for the responsible development, management, and future innovation of AI systems.

Subject Specific Skills

S1d Critically assess the design, development, and implementation of artificial intelligence systems in society, and propose ways to improve them.
S3d Produce original ideas for the future development, implementation, and management of responsible AI systems.
S4d Understand the importance of embedding ethical considerations into the development of data applications and AI systems.

Transferable and Professional Skills

T2d Consistently display an excellent level of technical proficiency in written English and command of scholarly terminology, so as to be able to deal with complex issues in a sophisticated and systematic way.
T3d Demonstrate initiative in working independently, effectively, and to deadlines.
T4d Communicate effectively, with rigorous arguments appropriate for both technical and non-technical audiences, about the development and application of responsible AI systems and the contemporary philosophical questions that surround them, through oral presentations and written reports.

Teaching and Learning

This course has a dedicated Virtual Learning Environment (VLE) page with a syllabus and range of additional resources (e.g. readings, question prompts, tasks, assignment briefs, discussion boards) to orientate and engage you in your studies.

The scheduled teaching and learning activities for this course are:

Lectures/Labs: Contact hours are typically a mix of weekly lectures and lab sessions:

  • Version 1: all sessions in the same-sized group, or
  • Version 2: most sessions in larger groups and some sessions in smaller groups.

Faculty hold regular ‘office hours’, which are opportunities for students to drop in or sign up to explore ideas, raise questions, or seek targeted guidance or feedback, individually or in small groups.

Students are expected to attend and participate in all scheduled teaching and learning activities for this course and to manage their directed learning and independent study.

Indicative total learning hours for this course: 300

Assessment

Both formative and summative assessment are used as part of this course, with formative opportunities typically embedded within interactive teaching activities delivered via the VLE.

Summative

AE  Assessment Activity   Weighting (%)  Duration    Length
1   Presentation          20             10 minutes  N/A
2   Written Assignment    80             N/A         4,000 words

Further information about the assessments can be found in the Course Syllabus.

Feedback

Students will receive formative and summative feedback in a variety of ways, written (e.g. marked up on assignments or via the VLE) or oral (e.g. as part of interactive teaching sessions or in office hours).

Indicative Reading

Note: Comprehensive and current reading lists for courses are produced annually in the Course Syllabus or other documentation provided to students; the indicative reading list provided below is used as part of the approval/modification process only.

Books

  • Voeneky, S., Kellmeyer, P., Mueller, O., & Burgard, W. (Eds.) (2022). The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
  • Dubber, M. D., Pasquale, F., & Das, S. (Eds.) (2020). The Oxford Handbook of Ethics of AI. Oxford University Press.
  • Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Springer.
  • von Braun, J., Archer, M. S., Reichberg, G. M., & Sánchez Sorondo, M. (Eds.) (2021). Robotics, AI, and Humanity: Science, Ethics, and Policy. Springer Nature.
  • Noble, S. U. (2018). Algorithms of Oppression. New York University Press.
  • Sandler, R. L. (Ed.) (2014). Ethics and Emerging Technologies. Palgrave Macmillan UK.

Journals

  • Autor, D. H. (2015). ‘Why Are There Still So Many Jobs? The History and Future of Workplace Automation’. The Journal of Economic Perspectives, 29, pp. 3–30.
  • Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).
  • Spencer, D. A. (2022). ‘Automation and Well-Being: Bridging the Gap between Economics and Business Ethics’. Journal of Business Ethics.
  • Hristov, K. (2016). ‘Artificial Intelligence and the Copyright Dilemma’. IDEA, 57, p. 431.
  • van Wynsberghe, A. (2021). ‘Sustainable AI: AI for Sustainability and the Sustainability of AI’. AI and Ethics, 1, pp. 213–218. https://doi.org/10.1007/s43681-021-00043-6
  • Hongladarom, S., & Bandasak, J. (2023). ‘Non-Western AI Ethics Guidelines: Implications for Intercultural Ethics of Technology’. AI & Society, pp. 1–14.
  • La Fors, K. (2022). ‘Toward Children-Centric AI: A Case for a Growth Model in Children-AI Interactions’. AI & Society, pp. 1–13.

Reports

  • Taylor, S., Pickering, B., Boniface, M., Anderson, M., Danks, D., Følstad, A., Leese, M., Müller, V., Sorell, T., Winfield, A., & Woollard, F. (2018). Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation (1.0). Zenodo. https://doi.org/10.5281/zenodo.1303253

Electronic Resources

  • University of Oxford. Ethics in AI [Podcast]. Available at: https://podcasts.ox.ac.uk/series/ethics-ai
  • Ethical OS Toolkit. Available at: https://ethicalos.org/

Indicative Topics

Note: Comprehensive and current topics for courses are produced annually in the Course Syllabus or other documentation provided to students. The indicative topics provided below are used as a general guide and part of the approval/modification process only.

  • The ethics of automation
  • Authorship, AI, and copyright
  • AI in the classroom
  • Human-computer interaction
  • Human-centred AI
  • Sustainable AI
  • AI in healthcare
  • Global governance for AI

Version History

Title: LDSCI7230 Advanced Topics in Responsible AI

Approved by: Academic Board

Location: Academic Handbook/Programme Specifications and Handbooks/Postgraduate Programme Specifications/MSc Computer Science Programme Specification/Course Descriptors

Version number: 1.0
Date approved: June 2023
Date published: June 2023
Owner: Alexandros Koliousis
Proposed next review date: April 2028
Modification (as per AQF4) & category number: