The Future of Feedback: How AI is Transforming Higher Education Practices

 

Dr Edd PITT
Reader and Programme Director for PGCHE,
Centre for the Study of Higher Education,
University of Kent,
UK

Dr Kristen LI
Lecturer,
Department of Computer Science,
HKBU

SUMMARY KEYWORDS
AI in Education, Assessment Innovation, Generative AI, Feedback Mechanisms, Personalized Learning, Academic Integrity, Data Privacy, Learning Analytics, Student Engagement, Pedagogical Strategies, AI Tools for Assessment, Collaborative Learning, Critical Thinking, Equity in Education, AI Bias, Educational Technology, Continuous Feedback, Research in Education, Teaching Practices

 

Dr Kristen LI  0:12 
Ladies and gentlemen, welcome to this podcast today. We have Dr Edd Pitt with us. Welcome to Hong Kong. Thank you. So we are going to discuss AI in assessment. Dr Edd Pitt, would you please first share a bit about your background and how you became interested in researching assessment and feedback in higher education?

Dr Edd PITT  0:35
So I began my career in HE in 2004 as an academic working in sports science, in sports coaching and psychology. At that time, there was quite a lot of pressure to do a PhD at the institution I was working at, and I was interested in student learning. I was lucky to spend a month at my institution with a really world-famous researcher called Royce Sadler, who was interested in assessment and feedback. He's written lots of seminal papers, and I just spent every day talking to him. At that time, I was doing a little bit of work writing a small book for students about how to write essays, and that was a sort of side thing, working with my colleague, Professor Lynn Norton. And really, after talking to Royce, he said, you know, it sounds like you're really into this, why don't you explore a PhD? At that time, I was really fortunate that the university I worked for funded a PhD, and I spent seven years doing the PhD, a full-time job with the PhD on the side, in assessment and feedback. And it was great because I was able to research my own teaching, colleagues' teaching and the student experience. I spent a lot of time immersed in the student experience of assessment and feedback, and it was clear from my PhD studies that there were lots of issues, and that set my path in my career to try and address those. I moved into academic development 11 years ago, and since then have consistently researched assessment and feedback, really looking at how educators can improve it, but also at how students can do more in assessment and feedback, take a bit more responsibility, and change the way that feedback is seen in HE. That's where my research over the last few years has certainly moved towards.

Dr Kristen LI  2:32
Yeah, thank you. Very interesting. Dr Pitt, how do you think technology is changing the landscape of assessment in higher education?

Dr Edd PITT  2:41
I think there's a lot of potential, actually, and lots of different areas that we could draw on. One of the things that comes home is that there could be a lot of assessment innovation with generative AI, and where that might happen more is in the integration of generative AI into the more familiar tools we're already using. We know that has happened over the last 10 or 15 years without us really appreciating it, but things like ChatGPT have obviously come about, and the response has been to think, well, how can we make assessment more authentic, dialogic and process focused, and how can we make that connection between ChatGPT and the human element of assessment? I also think, with AI and technology, there's the idea of enhanced realism in application. Particular examples I would give are things that are traditionally seen as really high risk. I'm thinking of the simulations you could create for, say, paramedic science, where you can recreate an RTC, a horrendous situation, and simulate it through VR and AI, and let students experience what it would be like in that moment, without the risks of recreating it in the real world, where people could potentially die. So I think there are lots of ways we can use AI and technology to help our students navigate complex simulations and complex problems in dynamic environments that can change, and there's a lot of power, then, in the instructor being able to manipulate the simulation and respond in a way that doesn't hurt people. I think that will help align what goes on in higher education with the real world, where these students might go off and do these really intense decision-making and synthesis jobs.
I also think we could have more diverse assessment formats, moving beyond just the traditional essays and exams that people use. That's not to say those are bad things, but students might be asked to present their findings through different media. We've seen things like videos, podcasts and interactive portfolios really starting to help our students develop the skills they're going to need when they move into the world of work or wider society, and those higher-order skills, moving our students away from testing recall towards more metacognitive abilities and critical thinking. We can get students to generate initial responses through AI, then critique, verify and refine those outputs, so that students have a deeper engagement with the material and we foster that critical evaluation. We know that, more broadly, AI is already outperforming humans in reading comprehension, image recognition, and language understanding and interpretation, and it's probably not going to be long before that's replicated for mathematical problems, general knowledge tests, complex reasoning, all of those sorts of things. So I think there's a huge focus on helping students understand how to harness the power of what all of this research is putting forward, and there's an onus on us to integrate AI into our assessment so that we better prepare our students for the future, in their lifelong learning.

Dr Kristen LI  6:34
Yeah, I hear a lot of advantages for students' experience of assessment, and later on we can discuss more. You just mentioned generative AI in education. So what are some specific examples of AI tools currently used in assessment and feedback?

Dr Edd PITT  6:50
Yeah, I've got a couple, actually. I was fortunate enough in the last year to work with a fantastic master's student at my institution in physics. One of the challenges physics educators have is preparing students for laboratory experiments: quite often the educators have to spend a long time explaining the experiment and getting students familiar with the equipment they're going to use, and the dangers of that are obviously that time slips and a lot of it is wasted, but also that the kit's very expensive and students might break it. So what this student wanted to do was create a virtual reality lab. He had actually already set up his own company creating virtual spaces, he was a complete whiz with VR, and he wanted to do an education-focused project for his master's. So he created this virtual reality lab, and wearing the goggles you could walk around the room, and it looked exactly like a science lab. Within it were six different stations, and he had created the full experiment. One of them was a pendulum swing, where you had to predict the arc of travel, and you were able to interact with that completely. You were also able to pick up different weights and place them on a scale, and the scale would change its reading depending on the weights you put on. You were able to interact with the full lab protocol and read it, manipulate the different things in the room, and talk to your peers. You were also able to go up to different parts of the room and ask the questions you might have asked the instructor, and the AI would generate a response, and this was amazing. The way he evaluated it was that he had three different conditions. He gave the students the lab protocol in written form.
He also exposed them to a video recording of him just using the lab equipment, and then he got them to use the VR. Overwhelmingly, the students said they loved the VR experience because it was immersive. They were able to ask questions, and once they got over the motion sickness and the strangeness of standing in a room, actually walking around and bumping into each other, it was a really great way for them to prepare before the laboratory, so that the real learning could happen in the lab and the time wasn't wasted. So that was one example. To add to that, I know a colleague at a university in London who has now developed different languages for the AI, so it can come back to you in your mother tongue. This particular institution has lots of international students asking questions, so he's training the AI in different languages so that they feel part of the same situation, and it was also breaking down the barriers of language struggles for some of the international students. The other thing that has happened in the last few years is using ChatGPT prompts for student feedback generation, and we've been experimenting with students using ChatGPT and asking it to rate the degree to which they've met the criteria. So they would say, “My particular piece of work has got these four criteria. These are the criteria. Could you have a look at my draft piece of work to see whether or not I've addressed them?” And we've had varying degrees of student engagement with that. Some students simply ask, “Is it good or bad?” “Fine.” Some students ask, “What could I add?”, and then we're stretching the rules a little. And then others are actually going back and forth, having a dialogue and saying, well, what could I do, and what about if I did this?
So there's more of an iterative, dialogic feature, and I can see how more students are going to think of AI as another resource, where we know that academics have time pressures and that it's not possible for them to look at all students' draft work. So helping our students understand how that iterative dialogue and change can happen with ChatGPT could be a potentially fruitful avenue for feedback generation. However, one note of caution is that we need to appreciate the quality of what comes out, and I'm sure in the latter part of this podcast we're going to talk about some of the issues around AI, so I don't want to spoil my responses to that. But there's the idea of being critical, like I mentioned earlier: the criticality about the information one receives, and training our students to do that, because we can't be 100% confident of all the outputs from AI.
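The criteria-based prompting Dr Pitt describes ("my work has these four criteria; have I addressed them?") can be sketched as a simple template a student might paste into ChatGPT. This is a hypothetical illustration only: the function name, wording, and criteria are invented, not the prompts used in the experiments he mentions.

```python
def build_feedback_prompt(criteria, draft):
    """Assemble a criteria-based feedback request of the kind described
    in the episode: list the assessment criteria, then ask for feedback
    on the draft against each one, without asking the AI to rewrite it."""
    criteria_list = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return (
        "My piece of work is assessed against these criteria:\n"
        f"{criteria_list}\n\n"
        "Please look at my draft below and, for each criterion, say to "
        "what degree I have addressed it and what I could improve. "
        "Do not rewrite the work for me.\n\n"
        f"DRAFT:\n{draft}"
    )

# Example use (hypothetical criteria for an essay):
prompt = build_feedback_prompt(
    ["Clear argument", "Use of evidence", "Structure", "Referencing"],
    "In this essay I argue that ...",
)
print(prompt)
```

The "do not rewrite the work for me" line reflects the distinction drawn in the episode between seeking iterative feedback and having the AI do the work.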

Dr Kristen LI  11:32
Yeah, as an educator, I'm thinking about the example you gave, like auto-grading where students can get feedback from AI. Is there any research comparing AI grading and feedback with human grading and feedback?

Dr Edd PITT  11:49
Yeah, so there was a study, and I can't remember the name off the top of my head. I think there were 12,100 essays, and they used ChatGPT to assess essays that had already been marked by academics, and over a series of iterations ChatGPT was able to get within roughly a one or two per cent margin of error of the human markers. Normally, if you and I were marking a piece of work, the tolerance between us would be much higher than a one or two per cent difference, and they were able to replicate that across over 12,000 essays. That suggests to me there's some efficacy in using AI to mark. I cannot remember the name, but it is out there if people want to Google it.
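The "one or two per cent margin" Dr Pitt recalls can be made concrete with a simple agreement metric: the average absolute gap between human and AI marks for the same scripts. This is a generic sketch with made-up marks, not data from the study mentioned.

```python
def mean_absolute_difference(human_marks, ai_marks):
    """Average absolute gap, in marks out of 100, between two sets of
    grades awarded to the same pieces of work."""
    assert len(human_marks) == len(ai_marks)
    return sum(abs(h - a) for h, a in zip(human_marks, ai_marks)) / len(human_marks)

human = [62, 58, 71, 49, 65]  # hypothetical human marks out of 100
ai    = [63, 57, 72, 50, 64]  # hypothetical AI marks for the same essays
print(mean_absolute_difference(human, ai))  # → 1.0
```

On a 0–100 scale, a mean absolute difference of 1.0 corresponds to the roughly one per cent agreement described; comparing two human markers the same way typically yields a larger gap.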

Dr Kristen LI  12:43
Yeah, I'm also thinking: we can use AI to mark the results of the student, so can we also use AI to assess the learning process of the student?

Dr Edd PITT  12:54
Yes, possibly. But I think, with the learning process, it might be more about the sorts of things they've put in. So you're talking about some sort of evidence base, you know, maybe they've used it two or three times. And I know of colleagues who have experimented with that, asking students to provide evidence of their interactions with something like ChatGPT and to show the distance they've travelled, in terms of how their work has changed: the way they may have used feedback from the AI, or not used it, which is, again, an important part for students to appreciate, that not all the feedback they receive has to be actioned. It could be that they feel it's not relevant at a particular time. So yes, I think that's a good way, and I think the learning in that is for them to reflect upon the impact those resources have had on the quality of the work they've produced.

Dr Kristen LI  13:47
Very interesting. So what benefits do you believe AI brings to the assessment process that traditional methods might lack?

Dr Edd PITT  13:56
I think one of the areas is definitely that sort of personalization and adaptivity that is sometimes difficult for educators to provide all of the time. I know a lot of students expect that when they sign up to university, but it's not always practical. AI can adapt in real time to a student's performance, based on what they put in; it can give targeted advice, maybe some resources to help them improve. And that sort of iterative element is, I think, sometimes more difficult for a human to provide in some situations. Where that could be really useful is in huge classes, you know, five or six hundred students, where it wouldn't otherwise be possible, and helping our students with the skills to use that successfully as an alternative resource is one avenue with potential. But it's probably the idea of efficiency and scalability, and that's aligned to what I've just said, in that if there are multiple assignments, multiple things academics are trying to do with students, the AI might help with a little bit of offloading of that workload, freeing up some time and allowing them to focus on the more complex tasks that happen within the teaching time, rather than having to do some of the mundane things. I'm not saying we're replacing academics. What I'm saying is that some of the things academics do might not always have to happen in the future, and they can then really focus their time on impacting students in the classroom with other tasks. And the idea of continuous feedback from the AI has potential; we need to look at how that feedback loop helps students correct their mistakes and improve their understanding, because we need to be mindful that we don't want to push students towards AI doing it for them, right?
So it needs to be a collaborative activity. I think AI can also help with analytical insights, in terms of instructors being able to understand multiple data points from students' behaviour. You could look at various different metrics of student performance and identify patterns that might not be apparent through traditional grading. We've had things like learning analytics, but those can be really cumbersome and time-consuming to create, and there's potential for AI to do some of the legwork, so that we get to know some of the patterns in our students' behaviours. In my own country, the UK, we have a big emphasis on retaining students and being able to spot some of the issues students are going through. AI might help us spot those quicker and then make some interventions to help students stay on track. So, early risk warnings for students, allowing for those interventions, and being able to look at missed opportunities for student support. But I think we should also look at our own teaching and how we can evaluate its effectiveness, moving beyond just metrics of student satisfaction to other elements that happen within the teaching time; some of that data might help us understand the impact of our teaching on our learners. One of the things I wanted to emphasize, though, is around bias and subjectivity, and helping our students to understand that. AI might be able to help reduce those biases in grading and feedback through the consistent application of criteria. My own work has looked at anonymity and things like that, and it hasn't necessarily proved overly successful in reducing bias and subjectivity.
So maybe we can look at how AI might play a part in that, in terms of applying criteria consistently, so that students feel assessment is fairer at the point at which marking happens. But obviously that is hugely dependent on how the AI has been trained in analyzing that data. So I suppose what I'm trying to say, in all of this, is that we're not advocating AI-proof assessments. We're saying that AI can enhance assessment, and it's not a case of "we're going to detect every use of AI", because I just don't think that's ever really going to be something we can do, even more so if we appreciate that it's going to be integrated into so many different things that we won't even realize it's there. But I think we're looking at developing more authentic forms of assessment that prioritize critical thinking and the application of knowledge, and AI could be a part of that, helping our students as another resource.
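The early-warning idea described above amounts to monitoring engagement signals and flagging students for human follow-up. A minimal rule-based sketch, where the metric names and thresholds are invented for illustration (real learning-analytics systems are trained on institutional data and far richer signals):

```python
def flag_at_risk(students, min_logins=3, min_submission_rate=0.75):
    """Flag students whose engagement falls below simple thresholds,
    so that a human tutor can follow up. Metrics and cut-offs here are
    hypothetical placeholders, not a validated model."""
    flagged = []
    for s in students:
        if s["weekly_logins"] < min_logins or s["submission_rate"] < min_submission_rate:
            flagged.append(s["id"])
    return flagged

# Hypothetical cohort data:
cohort = [
    {"id": "s01", "weekly_logins": 5, "submission_rate": 0.9},
    {"id": "s02", "weekly_logins": 1, "submission_rate": 0.8},
    {"id": "s03", "weekly_logins": 4, "submission_rate": 0.5},
]
print(flag_at_risk(cohort))  # → ['s02', 's03']
```

The point of keeping the flagging step this simple is the one made in the episode: the AI surfaces the pattern quickly, but the intervention remains a human decision.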

Dr Kristen LI  18:48
Yeah, I'm interested in the point you mentioned, that there can be personalized feedback in real time. I also teach classes of more than 300 students, so if they can get real-time, personalized, interactive, collaborative feedback, that could be very interesting.

Dr Edd PITT  19:05
Yeah, absolutely. And I think it'll be a balance, because the human still has to be part of this. We're not saying academics will be replaced by AI, because I don't think that's what anyone wants, or that it would be good for students or the human race. It's more of a joint venture between AI and the human, and students are going to need to be prepared, because that is where the world is going. But we want students to be prepared in the right way, so that we don't have this march towards everything being AI, you know, so we have some balance and some nuance in that.

Dr Kristen LI  19:41
Right, you mentioned a joint venture?

Dr Edd PITT  19:44
Yeah, I think so. I think it is a collaboration, yeah.

Dr Kristen LI  19:48
So are there any significant challenges or limitations to using AI in these areas?

Dr Edd PITT  19:53
Yeah, I think I mentioned academic integrity earlier. When it first came out, particularly things like ChatGPT, there was lots of concern around academic integrity, around plagiarism and the authenticity of students' work. But there's a debate around rethinking assessment to reduce the reliance on essays, exams and things people can cheat in, towards more complex, AI-resistant tasks that require original thought and personal engagement. I think we have to strike a balance, though; we can't throw every assessment out and say no to it because of AI. Some of the things I've said already in this podcast have been about understanding and embracing the challenges of academic integrity, but also about how AI can contribute to and improve students' work and understanding, and we need to strike a balance there; we're not going to go down the policing route in that sense. There are other things we need to think about in terms of ethics, because there are concerns around data privacy, intellectual property, and also equity and the digital divide. I know that here students have all got access to ChatGPT, but that isn't the case in all parts of the world. And even if students are given the option to use it, there's also a divide in the skill level students will have, and one of my concerns, from an equity point of view, is whether the use of ChatGPT and the like perpetuates the intellectual capital that some students have already got, and whether that's going to widen the gap between our students. Increasingly, we've got students from varying educational backgrounds and standards in our classrooms, and we've got to be mindful of that. So we want equal access, but we also want to make sure we don't unfairly advantage certain groups.
We know that institutions are rapidly using it, and there's varying practice there. I think we probably have to pause for thought about what's working in the sector. Some universities are going down the whole "we're going to catch everyone that's using it" route; others are embracing it. What's working? We need a lot more research on that, so that we can think about the implications for workforce planning in the future. And I think I've mentioned this, but I want to emphasize it again: it's about our identities as educators. How is this affecting us? For many, it will be a real challenge to suddenly learn these new skills and be up to speed, because in our profession you want to be, you know, the point of expertise, and this challenges and rebalances the whole dynamic, because for many of us our students will be a lot further ahead in their understanding of using this. That creates pressure on academics to think, well, how can I suddenly get up to speed and use this in a proficient way? The bias issue obviously comes up as well, because we've seen instances, particularly the Google example, where the algorithm was trained to be overly conscious of certain biases and went the other way, creating images that were completely historically incorrect in trying not to offend. So we have to think about the way AI is being trained, and try to reduce the chance that it reinforces bias. We also need to think about the diversity of the people involved in the training, because we don't want it to perpetuate the biases they might have. So there are lots of issues around that for us in the educational context.
And I think one of the other things is data protection: student data protection, but also the protection of the data we create. If we're using data that's formally held within an institutional environment, and then we take it out, put it into AI and use it there, there are ethical issues about data protection that we need to be really thinking about, in terms of the stringency of that. The last thing I wanted to talk about is reliability and validity, particularly the accuracy of AI outputs, and making sure we don't become so reliant on them that we think everything the AI says is correct. It's hard-wired to try and please us. So my concern is that students will put something in and it will come back with something, and if they haven't got the necessary level of criticality, they'll think, oh well, the AI has said this is great. It is conditioned to please us, so we have to be critical of the outputs. I think only scaffolded exposure to this over time, in a safe space, will help our students develop that criticality, much like we get them to develop criticality about the sources they're reading for their studies, questioning the validity and reliability of their sources of information. This is no different, and it's a skill we have to get our students to develop.

Dr Kristen LI  25:09
Yeah, right. Like the bias you mentioned: the data collected can be biased, the training and the algorithm can also be biased, and in the end the people who use it may use it improperly and make the results biased. So we need to be very careful.

Dr Edd PITT  25:26
Yeah, we do, and I think there's a lot of work for us to do in partnership with our students, to understand and have those conversations. Because alongside the way we're discussing AI in higher education, there's also a discussion in wider society about AI, all the fears people have, the media and the way it is being perceived. We have to think about how all of that is absorbed by our students, and how they can rationalize it and then go and influence society when they leave.

Dr Kristen LI  25:54
Right, right. So looking ahead, what do you see as the future of AI in higher education, particularly in assessment and feedback?

Dr Edd PITT  26:03
Yeah, I think it's going to require understanding and integration across universities. It can't just be one person saying, oh, I'm going to use this, right? It needs to be more joined up, with lots of different parts of the university coming together, and we've already seen some of that in terms of AI policies, but also in how we train our staff. I already mentioned the pressure on staff to learn yet more new things; I think we're really going to have to think about the time implications of that. You know, I was impressed by what the university here is giving; we haven't yet given that to staff, and is that not the first step, for staff to have it, to be able to understand it and use it? So really thinking about that joined-up approach to how staff are going to interact with it, to plan its use in assessment and feedback. I think we need a bit of an appreciation of its capability, and also of what it can't do yet. We're only at the start of the journey with AI, and I think there's going to be a lot of advancement in a very short space of time; what we think isn't possible today could be possible in a few months. Getting staff to use it themselves and understand it would help, but they should also recognize that it's not going to solve every problem, and that there will still need to be a lot of human interaction with the students. I want to emphasize that, because a lot of what we've talked about today is the potential, but I wouldn't want people listening to think that we're all going to be removed. I think it really is about balance.
As I mentioned earlier, I think the real future is that interaction with students, that generative element. The word generative is there because it is more than one input; it is back-and-forth dialogue, prompts, really investigating what it is, and it could potentially have a huge impact on the way students evolve and change as learners. But we need more research. We need lots of controlled empirical research on its effectiveness, the change it has on student learning behaviour, the quality of the outputs, whether students actually change the quality of their work as a result of interacting with AI. So lots and lots more research, which will take time for us to understand, and then thinking pedagogically about what the benefits are of using this in your teaching, in your assessment, and so on.

Dr Kristen LI  28:40
Yeah, right. Like you mentioned, the joint venture: we educators can try to collaborate with AI and try it out, and also, importantly, as you mentioned, evaluate whether, with AI's support, learning and assessment are improved compared with before.

Dr Edd PITT  28:56
That's right, yeah. And there could be some great opportunities for AI to use the data generated from things we already have, so that we can further understand the effectiveness of our practice. And of course, it could also potentially impact our workload allocation, freeing up some space for us to do things that could be more impactful, by offloading to AI the tasks that generally take us lots of time and frustrate us all, but maybe aren't the most effective use of our skills.

Dr Kristen LI  29:29
So what advice would you give to educators considering integrating AI into their assessment and feedback practice?

Dr Edd PITT  29:39
I would say, read around the journal articles that have come out in the last few years on the effectiveness and potential of AI in assessment and feedback. But don't worry, don't panic and think that the end is nigh; try it out yourself. Sign up, have a go, see what the potential is. If you're really worried about AI's impact on your assessment, think about your assessment: use AI to try and generate the output you think it might be able to generate, think about the process you went through to get that final result, and then think about the degree to which your students would be able to do that. How much support would they need? Do they need some training? Do they need you to create formative experiences so they can use AI, if you're going to embrace that? And also talk to your colleagues; think about the way colleagues across the whole programme or course team are using it. Having conversations with peers about this and how they're tackling it is probably a good way of understanding what your discipline is doing. There are lots of networks that many disciplines have around the world, and there are countless YouTube videos and research presentations now, freely available for us to engage with, to see what people are doing, what exciting things, how they're pushing the boundaries. But also, as I said, don't panic. I'm not convinced we're going to get replaced, but maybe some level of embracing is needed in order to understand the way things are moving forward. Thank you.

Dr Kristen LI  31:15
Thank you Dr Pitt. Actually, as an educator, my passion has been ignited after listening to you. I would like to try out the example that you mentioned, in my teaching and also assessment. Thank you very much.

Transcribed by https://otter.ai