UK universities’ policy responses to Artificial Intelligence related academic misconduct

Presenter: Stephanas Lim, Imperial College London

Stephanas is a fourth-year Chemistry student at Imperial College London. As part of the Academic Integrity in STEMM i-Explore module, he and his group conducted research into the readiness of UK universities to address AI-related academic misconduct, which won the Group Research Prize at the 2023 Imperial College London Academic Integrity in STEMM student research conference. His findings were also presented at the 9th European Conference on Ethics and Integrity in Academia in July 2023. Within his College, he spoke on a panel at this year's Imperial College Festival of Learning and Teaching, where he focused on the student perspective on digital technologies in assessment and feedback.

Abstract:

As universities in the UK and around the world adapt to the mass emergence of generative artificial intelligence (AI) tools such as ChatGPT and Google Bard, the strong influence of these tools on the educational landscape of universities and other higher education institutions has become increasingly noticeable (Dwivedi et al., 2021; Swiecki et al., 2022). The ease of use, breadth of application and accessibility of AI tools create the potential for students both to enhance their learning and to breach university academic integrity policies. Similarly, AI tools are changing the way educators develop lesson plans and assessments in response to this effect on students' learning. It is this dual nature of AI tool use, affecting all stakeholders within universities, that underscores the importance of holistic regulation. While AI tools should be encouraged as a supplement to learning, especially in academically demanding university degree courses, universities play a crucial role in ensuring that AI tools do not create added opportunity, incentive and rationale to commit academic misconduct (Holden et al., 2021).

The risks posed by generative AI are not purely hypothetical. Google Trends data sampling interest in AI between March 2022 and March 2023 show that interest in AI as a search keyword and topic spiked in mid-December 2022 and late February 2023, periods coinciding with widespread interest in ChatGPT (Google, 2023). It is therefore of interest to understand whether UK universities, which are bound by regulatory guidelines, have been able to officially update their policies or provide new guidance to students on how AI can best be used to support their studies.

The study assessed the readiness of a sample of the top 50 UK universities in the Complete University Guide 2023 University League Table (Complete University Guide, 2023) to tackle AI-related academic misconduct, based on information presented in each university's published policy and student guidance documents. Data were collected between 27 February and 5 March 2023. A novel rating system, the AI Misconduct Readiness (AIMR) rating, was developed: keyword-based analysis was used to extract AI-related information from publicly accessible university documentation, which was then analysed qualitatively and converted to a numerical scale from 1 (most prepared) to 4 (least prepared), indicating how prepared each university's published policy and student guidance documents showed it to be to address AI-based academic misconduct.
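To make the method concrete, the following is a minimal Python sketch of a keyword-based pipeline of this general shape. The keyword list, the sentence splitting, and the banding of passages onto the 1-4 scale are illustrative assumptions, not the study's actual protocol; in the study the extracted information was analysed qualitatively by the researchers before being converted to a rating.

```python
import re

# Illustrative keywords for flagging AI-related passages in policy text
# (an assumption; the study's keyword set is not published here).
AI_KEYWORDS = [
    "artificial intelligence", "generative ai", "chatgpt",
    "large language model", "machine-generated",
]

def extract_ai_passages(policy_text: str) -> list[str]:
    """Return sentences from a policy document that mention an AI keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", policy_text)
    return [s for s in sentences
            if any(kw in s.lower() for kw in AI_KEYWORDS)]

def aimr_rating(passages: list[str]) -> int:
    """Map extracted passages to 1 (most prepared) .. 4 (least prepared).

    The banding below is a hypothetical stand-in for the study's
    qualitative judgement, which was applied manually.
    """
    if not passages:
        return 4  # no AI-specific guidance found at all
    mentions_misconduct = any("misconduct" in p.lower() for p in passages)
    if mentions_misconduct and len(passages) >= 3:
        return 1  # AI addressed explicitly and in some depth
    return 2 if mentions_misconduct else 3

def mean_and_sd(ratings: list[int]) -> tuple[float, float]:
    """Arithmetic mean and sample standard deviation (needs n >= 2)."""
    n = len(ratings)
    mean = sum(ratings) / n
    sd = (sum((r - mean) ** 2 for r in ratings) / (n - 1)) ** 0.5
    return mean, sd
```

A summary statistic such as the reported mean rating of 2.83 ± 0.91 would then come from applying mean_and_sd to the per-university ratings.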
All of the sampled universities were found to have publicly accessible academic integrity or misconduct policies, and across the sample an arithmetic mean AIMR rating of 2.83 ± 0.91 was obtained. This indicates a potential lack of readiness among UK universities to address AI misuse, at least as judged from published documentation. The ratings sat alongside vague definitions of offences and third-party services that did not reflect the technologically driven potential for academic misconduct, suggesting that more work in this area is needed across the higher education sector.

The authors note that this research was conducted in a rapidly evolving field and that universities are beginning to roll out guidance on AI tool usage. It is anticipated that further developments across the sector will allow examples of good practice in AI policy and guidance development to be shared during the conference presentation, which will also consider the general principles that are vital for institutions to withstand the potential threat of AI-related academic misconduct. It is hoped that this will provide timely student-led input into the discussions on AI taking place in higher education, both in the UK and more widely around the world.
