ICT902 Artificial Intelligence and Machine Learning
Semester 1 2026
Assessment 2 – Ethical Considerations in AI Solutions – 30% Guide (Open Assessment)
Submission Deadline: Sunday 19 April 2026, 11:59 PM
Total Assessment Weighting: 30%

Purpose of this assessment
This assessment aims to develop students' ability to critically examine ethical issues arising from the development and deployment of Artificial Intelligence (AI) technologies in real-world contexts. By engaging with a contemporary AI application or scenario, students will apply conceptual knowledge, ethical reasoning frameworks, and analytical thinking to identify potential risks, biases, governance challenges, and societal implications. The task requires students to evaluate the responsible use of AI systems, assess their broader organisational and social impact, and formulate well-reasoned, evidence-based recommendations. Through this process, students will strengthen their capacity for critical reflection, professional judgement, and the production of a structured academic report that demonstrates ethical awareness and responsible AI practice in complex decision-making environments.

This assessment demonstrates achievement of the following learning outcome:
ULO 4: Explore and critically assess the ethical considerations surrounding the development and deployment of AI technologies in society.

Task description
This is an individual assessment designed to evaluate students' ability to critically analyse ethical considerations arising from the development and deployment of Artificial Intelligence (AI) systems in real-world contexts. Students will be assigned a contemporary AI application or scenario situated within a realistic organisational or societal setting. The scenario will outline the purpose of the AI system, its operational environment, relevant stakeholders, data usage practices, and key ethical concerns.
The task requires students to produce a comprehensive written report of approximately 2,000 words that demonstrates critical analysis, ethical reasoning, conceptual understanding, and professional academic communication. The report must evaluate the ethical implications of the AI application, assess risks and governance challenges, and propose well-reasoned recommendations to support responsible AI development and deployment.

This assessment aims to simulate a professional ethical review of an AI solution. Students will be required to:
- Analyse the assigned AI scenario and identify key ethical issues and stakeholders
- Examine potential risks, including bias, fairness, transparency, accountability, privacy, and societal impact
- Evaluate the organisational, regulatory, and governance implications of deploying the AI system
- Propose evidence-based recommendations to mitigate ethical risks and enhance responsible AI practice
- Critically reflect on the broader implications of AI adoption in organisational and societal contexts

To complete this assessment, students are required to:
- Critically analyse the assigned AI application or scenario, identifying ethical risks and contextual factors.
- Evaluate potential impacts on individuals, organisations, and society, including issues of fairness, bias, privacy, and accountability.
- Apply relevant ethical principles, governance considerations, and responsible AI concepts introduced in the unit.
- Develop structured, evidence-based recommendations to improve the ethical design, implementation, and oversight of the AI system.
- Demonstrate academic rigour through clear argumentation, appropriate referencing, and integration of scholarly or industry sources.

The final submission must include:
- A structured written report (PDF, approximately 2,000 words) analysing the AI scenario, evaluating its ethical implications, and presenting recommendations.
- Appropriate academic references supporting the ethical analysis and argumentation.
This assessment aims to help students:
- Critically evaluate ethical challenges in AI development and deployment.
- Apply responsible AI principles to realistic organisational scenarios.
- Formulate structured, evidence-based ethical recommendations.
- Communicate complex ethical and technical considerations in a professional academic format.
- Demonstrate readiness to engage responsibly with AI technologies in professional practice.

Structure
This assessment must be submitted in academic report format and include the assessment cover sheet provided on the ICT902 Moodle page. The report should include an introduction, main body, conclusion, recommendations, and a reference list.

Formatting
The submission should be approximately 2,000 words, written in either Calibri or Times New Roman font, size 12. The document should be double-spaced and include a minimum of ten references in APA 7 format. The assessment carries a total weighting of 30%.

Headings and Subheadings: Use a clear and consistent hierarchy for headings and subheadings. For instance, main headings should be bold in a larger font size, while subheadings should be bold in a smaller font size.

Font Style and Size: Ensure consistency in font style (e.g., Calibri or Times New Roman) and size (12-point for body text). The entire document should be double-spaced with uniform paragraph spacing.

Alignment: Use consistent alignment throughout the document; left-aligned body text is generally easier to read, while justified text gives a neater, more organised appearance.

Proofreading and Editing: Review your work for grammatical errors and clarity. Consider using tools such as Grammarly, or peer feedback, to improve writing quality.

Due Date: Week 7

Resources Available:
- Lecture slides and notes from weeks one to ten.
- Videos available in the "Readings and Viewings" section of OASIS.

Guide
The guide below provides a concise overview of how to approach the assessment.
SCI Cover Page: (no word count)

Table of Contents: (no word count; structured outline)

Introduction (300–350 words)
- Introduce the assigned AI application or scenario and its relevance in today's data-driven environment.
- Highlight the growing importance of ethical oversight in AI development and deployment.
- Outline how the report will examine the AI system, analyse ethical risks, evaluate governance implications, and propose responsible AI recommendations.

Body of the Report:

Paragraph 1: AI System and Impact Analysis (400–450 words)
- Describe the nature and purpose of the AI system (e.g., predictive model, automated decision system, intelligent assistant).
- Identify affected stakeholders, including individuals, organisations, and society.
- Assess potential impacts such as fairness concerns, bias, privacy risks, transparency limitations, or accountability gaps.
- Discuss possible regulatory or compliance implications where relevant.

Paragraph 2: Ethical Risk and Governance Evaluation (450–500 words)
- Identify key ethical risks associated with the AI system, including data quality concerns, bias in training data, model opacity, or decision-making autonomy.
- Evaluate organisational responsibility, governance structures, and oversight mechanisms.
- Assess risk severity, potential harm, and long-term societal implications.
- Integrate relevant ethical principles and responsible AI concepts introduced in the unit.

Paragraph 3: Responsible AI Strategy and Mitigation Framework (450–500 words)
- Propose a structured strategy to mitigate the identified ethical risks.
- Recommend governance mechanisms, transparency practices, monitoring processes, or policy interventions.
- Discuss accountability structures, fairness auditing, and continuous evaluation mechanisms.
- Justify