Review of (some) debates in AI and Assessment

Since the public release of general-purpose AI tools in November 2022, the role of these technologies in educational assessment has become a significant point of discussion among educators. There is a growing consensus that thoughtful integration of AI into educational practices is essential: the goal is to equip students with the knowledge and skills to navigate the world independently, with educators guiding the process and ensuring the responsible use of AI. AI has the potential to enhance assessments by promoting active learning and developing critical thinking and problem-solving skills. Educators play a central role in helping students use AI tools critically and actively for research and analysis, enriching the learning experience and preparing them for a future where AI is integral to their professional and personal lives.

Assessment Scales and a Post-AI Learning Taxonomy

The AI Assessment Scale (AIAS), developed by Leon Furze and colleagues, provides a framework for integrating artificial intelligence into educational assessments. This scale, which offers a range of levels from ‘no AI’ to ‘full AI,’ is designed to meet diverse assessment needs. Its primary purpose is to guide educators and students in understanding AI’s ethical and appropriate use in assessments. The AIAS has been widely adopted by educational institutions and recognised by organisations such as UNESCO and the Australian Tertiary Education Quality and Standards Agency (TEQSA). This recognition reassures educators about the credibility and relevance of the scale, promoting assessment transparency and ethical use of AI.

The AIAS includes levels that address both technological and pedagogical aspects of AI integration. One significant feature is the inclusion of a level that allows unrestricted use of AI in assessments, acknowledging that students often possess advanced skills in using generative AI, which can be leveraged to explore innovative ways of achieving learning outcomes.

  • Pedagogically, the scale distinguishes between levels where AI is used for planning and research and those where it is used for evaluation and feedback. This distinction helps educators make informed decisions about incorporating AI into their teaching practices.
  • On validity, the AIAS prioritises assessment validity over assessment security. The authors argue that because sophisticated AI use is effectively undetectable, permitting any use of AI in practice permits all uses; validity is therefore prioritised at every level except the one that prohibits AI entirely, helping to ensure that assessments remain valid and reliable even where AI is involved.
  • Stylistically, the scale uses neutral colours to avoid hierarchical implications and to improve accessibility for people with visual impairments.

These features reflect a comprehensive effort by Furze and his colleagues to ensure that the AI Assessment Scale remains an effective tool for educators navigating the integration of AI into their teaching practices. Through continuous refinement and engagement with diverse perspectives, the AIAS supports educators in leveraging AI while maintaining academic integrity and enhancing learning outcomes.

Philippa Hardman’s Post-AI Learning Taxonomy presents a transformative approach to redefining learning and assessment. As AI technologies become integral to educational and professional environments, traditional frameworks like Bloom’s Taxonomy are being challenged. Hardman suggests a paradigm shift towards fostering higher-order thinking skills that AI cannot replicate—skills vital for the future human workforce.


Her taxonomy emphasises developing skills that complement AI capabilities, such as critical analysis, creative problem-solving, and effective collaboration with AI systems. It encourages educators and learners to reconceptualise what effective learning entails in academic and corporate environments, and it underscores the importance of understanding, applying, analysing, collaborating, creating, and disrupting AI.

This innovative framework focuses on leveraging AI for deeper learning and skill development. It invites educators to embrace the opportunities presented by AI, fostering an educational landscape where human creativity and critical thinking are enhanced rather than replaced by technology. Through this lens, Hardman’s taxonomy seeks to prepare learners for a future where AI is not just a tool but a partner in achieving educational and professional excellence.

Swiss Cheese and Assessment Integrity

Phil Dawson is an expert in higher education assessment, particularly concerning AI and academic integrity. Dawson uses the Swiss Cheese Model to illustrate how to approach assessment integrity in a world where AI makes traditional methods less reliable. The model suggests that any individual assessment strategy, like a slice of Swiss cheese, will have holes: weaknesses that can be exploited. These weaknesses can be mitigated by layering multiple strategies, ‘layering Swiss cheese’. Each layer acts as a barrier, and while some AI-assisted cheating might slip through one hole, it is far less likely to get past all of them.

The Swiss Cheese Model is not specific to assessment or AI. However, Dawson’s application of the model highlights the need for a multi-faceted approach to assessment integrity in the age of AI. The model emphasises that relying on a single “AI-proof” assessment method is unrealistic. Instead, educators should focus on combining various strategies, such as:

  • Authentic assessment
  • Process-oriented evaluations
  • In-class activities
  • Reflective portfolios

By layering these approaches, educators can create a more robust system that minimises the risk of AI-assisted cheating while promoting genuine learning.
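To make the intuition concrete with some rough, purely illustrative numbers: if each layer on its own missed, say, 30% of AI-assisted misconduct, and the layers failed more or less independently, the chance of misconduct slipping past three layers would be roughly 0.3 × 0.3 × 0.3, or about 3%. Real assessment layers are never fully independent, and these figures are invented for the sake of the example, but the arithmetic captures Dawson’s point: individually imperfect defences become far stronger when stacked.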

Swiss Cheese (you get the concept, I am sure). Generated with Bing AI, 14 October 2024.

The relationship with your second brain

Over the past couple of years, the concept of a ‘second brain’ has gained currency, particularly around AI’s role as a collaborator that augments human intelligence. Championed by Ethan Mollick, the idea is that AI can extend our cognitive processes, assisting us with research, analysis, and problem-solving tasks. It echoes Tiago Forte’s ‘Building a Second Brain’, a system for organising digital information to enhance our cognitive capabilities. Framed this way, AI becomes a cognitive collaborator with clear relevance to education.

OpenAI. (2024). A conceptual image of two minds—one human and one AI—interacting in the same space [Digital image]. DALL-E. https://chat.openai.com/

When used effectively, technology can function like a “bicycle for the mind” (or even a motorbike), amplifying our abilities and helping us achieve goals more efficiently. Mollick emphasises that AI should be viewed as a co-intelligence that works alongside humans, enhancing our intelligence rather than replacing it. This active and collaborative approach is evident in examples where AI aids in generating ideas, automating routine tasks, and providing personalised learning experiences. AI can integrate into our thinking processes as an external tool, expanding our cognitive capacity and transforming our relationship with information.

An assessment example:

Humanities Essay Assessment Task: Exploring the Impact of AI on Human Cognition

This example assessment task invites humanities students to explore the impact of artificial intelligence on human cognition through the lens of the “second mind” concept. It challenges students to critically examine how AI is augmenting human intelligence and reshaping our relationship with knowledge and learning. The task requires strategic use of AI tools, such as research assistants and chatbots, to enhance research, analysis, and writing. Students document their AI interactions and reflect on AI’s influence while demonstrating their ability to integrate it thoughtfully and ethically into their work. Along the way, they are guided to critically evaluate AI-generated content, building skills in critical analysis, creative problem-solving, and effective collaboration with AI systems.

Task:

Write a 1500-word essay that explores the concept of a “second mind” in the context of AI’s increasing integration into human lives. Drawing on the ideas of Ethan Mollick and other relevant sources, examine how AI is augmenting human intelligence and reshaping our relationship with knowledge and learning. Consider the implications of this evolving relationship for education, particularly within the humanities disciplines.

AI Integration Requirements:

  • You must use AI to complete this assessment task. The AI Assessment Scale (AIAS) Level for this assessment is Level 4: AI Task Completion, Human Evaluation.
  • You can use AI to complete specified tasks; any AI-created content must be cited. You may also use AI to aid in rewriting after conducting your analysis.

Examples of how you can integrate AI include:

  • Research: Employ an AI research assistant like Elicit.org to locate relevant academic articles that discuss the “extended mind” concept.
  • Analysis: Engage with an AI chatbot to analyse specific arguments or concepts connected to the “second mind.” For example, you could ask the chatbot to summarise key ideas from Mollick’s work or generate counterarguments to his claims.
  • Brainstorming: Use an AI tool to generate ideas, create outlines, or suggest relevant examples. For instance, you could input your current essay structure into an AI tool and ask it to identify potential gaps in your argument or suggest additional perspectives.
  • Editing: Refine your writing using AI-powered grammar and style checkers like Grammarly.

Document Your AI Use: Submit a separate document outlining:

  • The specific AI tools you employed for each stage of the process.
  • The prompts you used to interact with those AI tools.
  • A critical reflection on how AI influenced your research, analysis, and writing process.

Assessment Criteria:

Your essay will be assessed on the following:

Content:

  • Depth of engagement with the “second mind” concept.
  • Sophistication of your analysis of how AI is changing human cognition.
  • An insightful exploration of the implications for education, particularly in the humanities.

Structure and Argumentation:

  • Clarity and coherence of your essay’s organisation.
  • Strength and persuasiveness of your arguments.
  • Effective use of evidence from the sources to support your claims.

AI Integration:

  • Strategic and purposeful use of AI tools to enhance your work.
  • Critical evaluation of AI-generated output and thoughtful incorporation into your analysis.
  • Thorough and insightful documentation of your AI usage process.

Writing Quality:

  • Clarity, precision, and academic style of your writing.
  • Accurate citation of all sources, including AI-generated content.

Note: Using AI to generate the entire essay, or substantial portions of it, without proper attribution is considered plagiarism and will result in academic penalties.

Conclusion

In conclusion, integrating artificial intelligence into educational assessment represents a shift in conceptualising learning and evaluating student performance. As highlighted, traditional frameworks like Bloom’s Taxonomy are increasingly inadequate in addressing the complexities introduced by AI technologies. Dr Philippa Hardman’s Post-AI Learning Taxonomy offers a forward-thinking approach emphasising higher-order thinking skills—critical analysis, creative problem-solving, and effective collaboration with AI systems—essential for the future workforce.

The AI Assessment Scale (AIAS), developed by Leon Furze and colleagues, further complements this vision by providing a structured framework for integrating AI into assessments. The scale’s emphasis on assessment validity over security reflects a nuanced understanding of AI’s role in education, recognising that sophisticated AI tools necessitate new approaches to evaluation. By prioritising validity, the AIAS encourages educators to embrace AI as a partner in learning rather than a threat to academic integrity.

Moreover, the concept of the “extended mind,” as discussed by Ethan Mollick and others, underscores the potential of AI to serve as a “second brain,” enhancing our cognitive capacities and transforming our relationship with information. This perspective aligns with Hardman’s taxonomy, advocating for an educational paradigm where human creativity and critical thinking are augmented by technology.

As we navigate an AI-saturated educational landscape, educators and learning professionals must embrace these challenges and opportunities with a critical mind. By leveraging AI’s capabilities thoughtfully and ethically, we move towards an educational system that values innovation and adaptability, preparing students for success.

Ultimately, integrating AI into assessment practices is not about replacing traditional models wholesale but about evolving them to reflect new realities. Educators play a vital role in shaping a future where AI is not just a tool but a collaborative partner in much of what we do.

Resources

Ashford-Rowe, K., Herrington, J., & Brown, C. (2014). Establishing the critical elements that determine authentic assessment. Assessment & Evaluation in Higher Education, 39(2), 205–222. https://doi.org/10.1080/02602938.2013.819566

Ajjawi, R., Tai, J., Nghia, T. L. H., Boud, D., Johnson, L., & Patrick, C-J. (2020). Aligning assessment with the needs of work-integrated learning: The challenges of authentic assessment in a complex context. Assessment & Evaluation in Higher Education, 45(2), 304–316. https://doi.org/10.1080/02602938.2019.1639613

Arnold, L., & Croxford, J. (2024). Is it time to stop talking about authentic assessment? Teaching in Higher Education. https://doi.org/10.1080/13562517.2024.2369143

Fawns, T., & Schuwirth, L. (2024). Rethinking the value proposition of assessment at a time of rapid development in generative artificial intelligence. Medical Education, 58(1), 14–16. https://doi.org/10.1111/medu.15259

Hardman, P. (2024, October 3). A post-AI learning taxonomy: Imagining a new framework for designing & assessing human learning. Dr Phil’s Newsletter. https://drphilippahardman.substack.com/p/a-post-ai-learning-taxonomy

MacCallum, K., Parsons, D., & Mohaghegh, M. (2024). The Scaffolded AI Literacy (SAIL) Framework for Education: Preparing learners at all levels to engage constructively with Artificial Intelligence. He Rourou, 1(1), 23. https://doi.org/10.54474/herourou.v1i1.10835

Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6). https://doi.org/10.53761/q3azde36

Pigg, S. (2024). Research writing with ChatGPT: A descriptive embodied practice framework. Computers and Composition, 71, 102830. https://doi.org/10.1016/j.compcom.2024.102830

AI writing citation

In crafting this analysis, I explored the “second mind and assessment” concept using NotebookLM and an extensive collection of notes and articles. This AI tool was instrumental in helping me synthesise my notes from more than 30 readings, organise my ideas, and refine my writing, ultimately enhancing the clarity and depth of my analysis.
