AI considerations

As a tutor, you will be concerned that students may use generative AI on assessments in ways that fail to demonstrate real learning. AI tools can produce increasingly high-quality work, evolving alongside students' capabilities to leverage them. These realities highlight shortcomings in many conventional assessments, leaving them vulnerable to AI exploitation. Rather than banning AI or ignoring how pervasive it has become across education and the workplace, we encourage you to consider directly addressing its emerging role in your course and assessment design. It is already apparent that there are no quick fixes or panaceas here: each current assessment will need to be examined on a case-by-case basis and judgements made about its suitability within an era of ubiquitous AI. We have a collective responsibility and opportunity to guide students on AI integration that upholds academic integrity while tapping the potential of this new technology.

In a recent article, Lodge et al. (2023) provide a taxonomy of six options for responding to and redesigning assessments in light of advances in generative AI over the short, medium and long term:

  • Ignore: Unlikely to be viable long-term as the use of AI will become more mainstream.
  • Ban: Not viable as students will find ways to bypass any prohibition and AI will become embedded into a range of IT based productivity tools.
  • Invigilate: Observing students undertake assessment tasks to confirm that they are not using AI tools can help but is not the only solution.
  • Embrace: Allowing AI use in assessments prepares students for an AI-powered world, but the value is questionable if AI outputs can easily be submitted as assessment tasks.
  • Design Around: Exploiting current AI limitations may work in the short-term but becomes less viable as the technology rapidly advances.
  • Rethink: Fundamentally reimagining assessment design using best practices.

Lodge argues that "rethink" and "embrace" options will become increasingly important over time, alongside the targeted use of ‘invigilate’. Building on Lodge's taxonomy, the following principles focus specifically on viable strategies for embracing AI's role in assessment while rethinking assessment philosophy and practice more broadly.

We have developed four principles from a number of key documents to provide you with a framework for assessment design.

Assessment design should take a holistic, programme-level view to support student development

Assessment should be designed across a programme of study to map and support the development of students' knowledge and capabilities. A programme's assessments as a whole should aim to capture the learning journey, not just isolated demonstrations of competence.

We believe this is important because it:

  • Provides multiple touchpoints to evaluate student learning over time, allowing progress to be judged at several points rather than through a single high-stakes assessment.
  • Allows for assessment methods that reveal thinking and process, which AI cannot easily replicate since it produces polished final outputs rather than an authentic learning process.
  • Enables synoptic or integrative assessments that evaluate the ability to integrate and apply learning across a programme, because AI has difficulty producing integrated applications of complex learning.
  • Combines formative and low-stakes summative assessments with high-stakes invigilated assessments, which can together assure overall achievement because they support each other in assuring graduate outcomes.
  • Portrays a richer, more nuanced profile of graduate capabilities because programme-level assessment provides a fuller picture of what students are capable of and identifies different ways to encourage and capture their learning.
  • Reduces reliance on assessable components that are most vulnerable to AI influence because overreliance on isolated written assessments risks enabling misconduct.
  • Aligned assessments distributed over time are less susceptible to misconduct because connectivity between assignments over time can deter cheating.

Suggested actions for programme teams to discuss:

  • Map all assessments to overarching programme outcomes to identify gaps, duplication, and coherence.
  • Consider a synoptic, capstone or integrative assignment that requires applying learning from across the programme.
  • Replace some high-stakes exams with assessments revealing student thinking and decision-making over time.
  • Identify key proficiencies that need invigilation at appropriate points in the curriculum rather than attempt to invigilate every aspect of learning.
  • Survey alumni and employers to evaluate whether assessment prepares graduates as intended for the workplace or further study.
  • Review grades over time to check for patterns that may suggest ineffective assessments or misconduct.
  • Provide resources for tutors to develop learning outcomes, assignments, and rubrics focused on higher-order skills.
  • Articulate for students the connections between learning activities, assessments, and overarching programme goals.

Assessment should focus on exposing students’ thinking, decision-making and reflection on process, not just final products

The role of assessment extends beyond evaluating end results; vital insight is gained by mapping the iterative decisions, reflections, and refinements students make throughout a learning period.

We believe this is important because it:

  • Allows evaluation of higher-order thinking and analysis which AI tools currently cannot match, because generative AI focuses on producing fluent final outputs rather than the complex cognitive processes behind them.
  • Provides insight into the student's incremental development of knowledge because mapping a student's progress, failures, recoveries and refinement reveals authentic personal growth.
  • Guards against potential misconduct by exposing each student's unique pathway because documenting all the steps to the final product may deter inappropriate uses of AI.
  • Develops skills around metacognition, ethical judgement and critical reflection valued by employers.
  • Aligns assessment with learning processes rather than mainly achievement because overemphasis on marks and grades often disconnects assessment from the learning journey itself.

Suggested actions for programme teams to discuss:

  • Break down assessments into multiple components that demonstrate the evolution of student work over time.
  • Consider models of continuous assessment to capture learning and provide clear developmental support.
  • Require students to submit reflective memos explaining their decision-making process and changes made based on feedback.
  • Design rubrics and grading schemes that value evidence of critical thinking and evaluation skills.
  • Integrate regular teacher-student discussions, critiques and interactive feedback sessions into courses.
  • Structure group assessments with individual grading components tied to personal contributions.
  • Ask students to actively identify areas of weakness, mistakes or knowledge gaps and reflect on them.
  • Provide opportunities for students to assess their own work as well as peer review others against standards.
  • Replace high-stakes exams that only demonstrate recall with assessments revealing applied understanding.
  • Consider implementing a student portfolio reviewed periodically to monitor and reflect on the learning journey.
  • Develop synoptic, integrated or capstone projects spanning programme components focused on skill demonstration.

Assessment should promote the ethical and responsible use of AI tools aligned with discipline values and practices

Assessment should require students to apply AI technologies judiciously, following the norms and expectations of their field of study.

We believe this is important because it:

  • Embeds critical thinking about appropriate AI use within disciplines rather than treating it generically because norms vary across fields.
  • Promotes tutor/student dialogue and clarity about AI's role in their programs because shared expectations enable consistency in preparing and evaluating students.
  • Connects AI skills to graduate attributes valued by employers and societies because responsible innovation is imperative as AI proliferates.
  • Enables students to understand how to properly credit generative AI and how to use it to enrich their development, since clear guidelines around use will support responsible student practice.

Suggested actions for programme teams to discuss:

  • Consult with industry advisors, accreditation bodies and other organisations to determine appropriate AI use guidelines for the discipline.
  • Audit learning outcomes for opportunities to embed critical thinking around ethical AI into assessments.
  • Design rubrics that evaluate legal, social and ethical implications in student work involving AI.
  • Provide checklists outlining professional codes and conventions needed to guide use of AI.
  • Scaffold opportunities across foundation and Level 4 for attributed AI practice ahead of summative assessments.

Assessment should incorporate tutor-student partnerships and dialogue around expectations, standards, and use of AI tools

Tutors and students should engage in regular discussion of grading approaches, benchmark targets, and suitable versus unsuitable uses of generative tools. Assessment design at the programme level should offer opportunities for tutor-student partnerships in co-constructing meaningful standards, justifiable expectations, and parameters for ethical leveraging of AI.

We believe this is important because it:

  • Promotes transparency and consistency in AI expectations across courses and programmes because unified understanding of appropriate use is co-developed.
  • Enables tutors and students to make contextual judgements collaboratively as suitability of AI is interpreted through joint consensus.
  • Embeds ethical discernment about AI tools within learning activities since critical reflection is shaped through ongoing discussion.
  • Allows standards and rubrics to emerge from mutual negotiations as criteria are weighed from both tutor and learner perspectives.
  • Develops student skills in self-evaluation and peer review with AI assistance by demystifying assessment standards through conversation.
  • Encourages student accountability in judging and applying AI tools appropriately via consistent and clear boundary setting.

Suggested actions for programme teams to discuss:

  • Build time for student-tutor co-creation of assessment rubrics and quality criteria into the programme.
  • Structure group tutorials focused on appraising example assignments involving AI technologies.
  • Require students to submit a commentary justifying their AI use against standards developed in class.
  • Scaffold reflective opportunities leading up to final projects analysing evolving thought processes.
  • Survey students on their understanding of rules and expectations early on and adjust guidelines accordingly.
  • Implement mid-term focus groups for tutors and students to compare judgements of standards and appropriate AI use.
  • Jointly develop an applied code of conduct for generative technology use in the discipline.

Additional Information

Advice on detection or what to do if you suspect the unauthorised use of AI

We do not recommend the use of any automated AI detection tools. The reasons for this are as follows:

  • There are instances across the sector of even market-leading AI detection tools providing misleading or inaccurate data with respect to the use of AI.
  • Students using AI for legitimate purposes (for example, supporting their English as a non-native speaker) can be flagged as ‘misuse’ by detection software. This can result in AMPs being initiated where not required.
  • Emphasis on ‘endpoint’ assessment: generative AI detectors typically measure the final output rather than the process behind it.

If you suspect a student of misconduct, or want to know more about academic misconduct procedures, we recommend contacting your Faculty Registrar.

Glossary

Synoptic, integrative, and capstone assessments use complex activities to bring knowledge from across the curriculum into practice. The definitions found in the literature are often used interchangeably, but a hierarchical relationship can be seen as follows.

Capstone assessment

Targets graduating students at the departmental level; similar to a dissertation but supported by smaller cornerstone assessments across the programme (Hansen et al., 2023)

Focuses on students’ transition from university, using real-world local community issues explored through group work (Couch, Wood and Knight, 2015; Hansen et al., 2023; Kerrigan and Jhaj, 2007)

Integrative assessments

Emphasises connections across learning and the application of that learning, involving synthesis around a complex problem, modelled on the idea of 'transfer of learning' (Constantinou, 2020; Birenbaum et al., 2006; Mungal and Cloete, 2016)

Examines students' learning strategies as well as knowledge (Crisp, 2012)

Synoptic assessment

Assesses accumulated knowledge on one topic from different parts of the programme (Southall and Wason, 2016), an entire year (Baartman and Quinlan, 2024), or a few modules (Barbosa-Bouças and Otermans, 2021)

More substantive in complexity and topics covered than 'normal' assessments (Constantinou, 2020)


References

Acknowledgement

The principles above were developed from the following documents.