Balancing Technology and Ethics: A Conversation with Professor Mark Pegrum

 

Professor Mark PEGRUM
Professor of Digital Learning and Deputy Head of School (International),
Graduate School of Education,
The University of Western Australia

Dr Rachel Siow ROBERTSON
Assistant Professor,
Department of Religion and Philosophy,
HKBU

SUMMARY KEYWORDS
AI in Education, Assessment Innovation, Generative AI, Feedback Mechanisms, Personalized Learning, Academic Integrity, Data Privacy, Learning Analytics, Student Engagement, Pedagogical Strategies, AI Tools for Assessment, Collaborative Learning, Critical Thinking, Equity in Education, AI Bias, Educational Technology, Continuous Feedback, Research in Education, Teaching Practices

 

Rachel Siow ROBERTSON  0:00
Great. Well, welcome, Professor Pegrum. It's really good to meet you and have you here with us. Before we dive into our discussion about ethics and technology in education, could you just share a bit about your background for us?

Mark PEGRUM  0:25 
Sure. Well, thank you very much for having me here. It's a great pleasure to be here. I've been working on digital and mobile learning for quite a few decades now, since the 90s, since I was a postgraduate student and first began tutoring, and so I've seen lots of waves of new technologies, lots of new trends and ideas come and go. I've been lucky enough over the years to have the chance to work in over 30 countries and to see how digital learning is rolling out in different places around the world. In terms of the current focus of my research, I'm working on digital literacies: essentially, the skill set that we and our students need to operate effectively in a world that is more and more digitally mediated. Within that larger skill set, my main focus points in the last few years would be attentional literacy, which is about how we bring insights from very ancient mindfulness traditions to bear on the attentional issues and the information overload issues that we face online; and also, and I'm sure this won't be surprising, AI literacy. AI literacy is actually something we've talked about for quite a few years, well before generative AI, but it's something that's on everyone's minds now, and so yes, there's a big focus on generative AI and AI literacy.

Rachel Siow ROBERTSON  1:46 
Great. Thank you. Thanks so much for that background. Let's get started. Just to start us off, then: what are some of the ethical issues in the use of digital technologies in higher education, especially for students and faculty?

Mark PEGRUM  2:03 
Okay, look, I'll probably talk mainly about generative AI, because, as I said a moment ago, it's something that's on everyone's mind at the moment; there are lots of conversations about it. In many ways, it's a new stage in the development of digital technologies, and it's a more powerful stage, which means, on the one hand, it's potentially more valuable, but on the other hand, it's potentially more risky. And the kinds of themes that come up with generative AI are actually themes we've talked about for years with other digital technologies as well. So in a sense, it takes those themes and almost exaggerates them or takes them to an extreme. Look, I think there are four main types of issues that are really important in this area.

First of all, informational issues. I'm sure you've heard about hallucinations: the fact that when we ask generative AI a question, it may simply invent a response, including inventing references. Beyond that, there are issues around the historical data sets on which it's trained, because any biases which are in those data sets, around gender, race, ethnicity, sexuality, etc., are going to be hard-baked into the responses we're getting from generative AI. And there's another looming issue which quite a few people are becoming quite concerned about; it's going under the name of model collapse these days. Essentially, what it means is that generative AI is increasingly going to be trained on the outputs of other generative AI, and that could lead to a real degrading of the information and potentially, eventually, to the collapse of the entire model. So all of those are informational issues, but they're about unintentionally false information. We also have issues to do with intentionally false information, or disinformation: deepfakes, which I'm sure you've heard about, where we've got images, or more often videos, often of well-known people, misrepresenting their words and actions. So these are all the informational issues, obviously very, very important for education.

Then there are pedagogical issues as well. There are many of these, but one example would be the individualization that is encouraged by all ed tech, actually, but particularly by generative AI. Certainly there's a place in education for individualized learning, particularly in a supporting, self-study kind of role, but it does sit very awkwardly with trends over recent years towards social constructivist, collaborative educational processes, which are all about people interacting and communicating with other people. So there are issues in that space, and also in the assessment space, because one thing that a number of people are concerned about is that we're going to count the things that are easy to count, and it might mean that we're missing lots of aspects of students' learning journeys. So those are some of the pedagogical issues.

Beyond that, there are privacy and surveillance issues, which we've talked about for a long time with all ed tech. I mean, essentially, LMSs, or learning management systems, are systems of total educational surveillance, and this is to say nothing of the arrival of facial recognition in the classroom. And of course, what's happening here is that AI, the algorithms, are making the potential uses of this data much more powerful, and risks come with that.

The last set of issues I'd mention are the environmental issues. All digital technologies are energy-hungry and environmentally destructive, but it's particularly true of the training and operation of generative AI. So those are some of the issues: the informational, the pedagogical, the privacy and surveillance issues, and then the environmental issues. So a lot to think of.

Rachel Siow ROBERTSON  5:44 
Thanks so much. That was a really helpful overview of some of those issues, especially to do with generative AI. Can we now think about some examples and get a bit more specific? Can you give us some examples of ethical dilemmas that students or faculty might face in the digital environment of higher education?

Mark PEGRUM  6:02 
Sure, sure. Well, many of them are related to the kinds of issues I was talking about. But, yeah, getting a bit more specific, one example, one cluster of dilemmas, if you like, is around creativity and copyright. Sometimes we have the notion that when we are asking generative AI to create something for us, we are actually creating something new. But in fact, what's happening is we are remixing existing content. So I think one set of questions is about how original our work is: when we're prompting generative AI and trying to get exactly what we're looking for, there is a big question about the level of originality. But there's also a question about copyright. Given that generative AI is effectively remixing existing work, whose work is it remixing, and who owns the copyright in that work? And you might have heard there are a number of cases now, particularly in the courts in the US, where content creators, from media organizations to artists, painters and graphic designers, are suing generative AI companies for having used their content without permission or without payment as part of their training data sets. So I think there are some important dilemmas around creativity and copyright. We'll have to see what happens in those lawsuits, but regardless of what happens legally, I think there are ethical questions there.

I think another area where we're seeing some ethical dilemmas is around diversity. I mentioned a few moments ago that a question some people are asking is, are we just going to count what is easy to count when we assess students? And the same kind of question applies when we're looking at students' diversity. Some people who have been critiquing learning analytics recently have been saying, well, you know, are we really seeing students as whole people, or are we seeing them as kind of aggregates of data categories that we can easily capture? What are we not seeing? Are we perhaps not seeing the whole person in many cases? And another issue to do with diversity that comes up particularly with generative AI is that we know that the AI detectors which the plagiarism detection companies are working on are grounded, according to some of the research that's coming out now, in factors like perplexity and burstiness, which essentially are measures of the unpredictability of a text. And those sorts of approaches are inherently biased against people who might use language in more constrained ways, such as non-native speakers of any given language, but also people with autism spectrum disorder. That came out, amongst others, in a report last year from Anthology, the company behind the Blackboard learning management system. So I think there are some big issues; there are some real ramifications here for diversity, equity and inclusion.

Finally, I'd mention environmental issues once again. I think we sometimes have a false notion that when we are using digital technologies, we're being environmentally friendly because we're not printing things out on paper. But actually there are very high environmental costs. To give you just a couple of examples, and these come from the opening plenary which Vicky Saumell gave this year at the IATEFL conference in the UK: to generate one image using an AI generator requires as much electricity, according to some research, as fully charging your smartphone. And according to some other research that's come out, for somewhere between five and 50 responses, depending on location and timing, the older GPT-3 needed to 'drink' half a liter of water, and that's because of the need to cool the server farms that underpin all of this technology. So as Vicky was saying in her plenary, every time we're refining our prompts, step after step after step, we should probably be thinking about the environmental costs: the electricity consumption, the water consumption and so on. So those are just three examples of different kinds of ethical dilemmas. There are quite a lot, but you can see, I think, the whole range of dilemmas there.
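[Editor's illustration: the detectors mentioned above are proprietary, but a rough sense of what a perplexity score measures can be had from the minimal sketch below. It assumes the open-source torch and transformers libraries and uses the small public GPT-2 model purely as a stand-in; the example sentences are invented. Lower perplexity means more predictable text, which is the property that can penalise writers who use language in more constrained ways.]

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Small public model used only for illustration; commercial detectors
    # use their own (undisclosed) models and thresholds.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under the language model (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Supplying labels makes the model return the mean cross-entropy loss.
            out = model(enc.input_ids, labels=enc.input_ids)
        return math.exp(out.loss.item())

    # Plainer, more formulaic prose tends to score lower, i.e. look more
    # "machine-like" to a perplexity-based detector, regardless of author.
    print(perplexity("The results of the study are presented in the table below."))
    print(perplexity("Moonlit accordions argue with the harbour about punctuation."))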

Rachel Siow ROBERTSON  10:20 
I guess a range, but maybe also some commonalities. I feel like when you mention things like whose work is being taken advantage of, who's being represented, and then who's being impacted by the environmental costs, it seems like one of the recurring themes is who gets to benefit and who doesn't, again and again.

Mark PEGRUM  10:38 
I think you're absolutely right, yes, because certainly some providers of the technology stand to make huge profits from data mining and, of course, from selling the technologies. But always, there is someone who is paying a price somewhere, whether that's the people whose data is being used to train generative AI, or whether that's people in the global south who are suffering the environmental and climate consequences. Yes, I think that's actually a really good way to look at it: almost like a cost-benefit analysis, who's gaining and who's paying.

Rachel Siow ROBERTSON  11:13 
So lots of dynamics popping up in the classroom. Yes. Okay, so considering the rapid pace of technological advancement, is there a risk that ethical guidelines in higher education will always lag behind? And if so, how can institutions proactively address this gap?

Mark PEGRUM  11:34 
Okay, yes, that's a very good question. The technology is evolving incredibly quickly, and it's definitely true that it's evolving faster than the law. It's evolving faster than policy, and not just in higher education institutions, or indeed in educational institutions, but in all kinds of institutions, really, in every area of life. There's a US researcher called S. Craig Watkins who has talked about this and said it's quite problematic that we have unleashed a new technology that we don't really understand very well in high-stakes environments, so healthcare, policing, education, without any real guardrails around it. So it's definitely a problem that we need to be talking about.

In terms of what we can do about it, this is where I would bring a digital literacies lens to bear on the whole area. I think it is about developing our digital literacies as staff, but also helping our students do the same thing. So we need to develop our technological literacy in general, but more specifically, of course, our AI literacy. There are various dimensions to AI literacy. For example, there are operational or functional dimensions: how you use the technology to get the kinds of results you want. But then there are critical dimensions as well: the ability to take a step back and look at the bigger picture, to ask the kinds of questions you asked a moment ago about who's gaining from this and who's paying for this. I think we need that spectrum of dimensions to be brought to bear on AI, and that will be part of our AI literacy skill set. So I think it's staff development and student development in this space, and probably that's something that we can do hand in hand, in a way, because we're all exploring this territory. I think it's probably good to have open conversations, so for us as educators to talk openly to our students: here are our concerns, this is what we're thinking, what do you think? And, you know, really get a dialogue going, because we all need to develop our skills in this fairly new area.

Rachel Siow ROBERTSON  13:39 
Yes, I would like to ask: in what kind of arenas do you see these literacies developing? Are you thinking of specific classes on those, or would teachers benefit from adding in a session, ahead of having it in courses across different disciplines?

Mark PEGRUM  13:58 
That's a really good question as well, and actually, it's something that we have been grappling with in the field of digital literacies as a whole for many, many years. Generally, most people would say that, rather than having separate digital literacies classes, it's best to integrate those skills: if you're teaching math or geography, whatever it might be, you can bring in information literacy and critical literacy, and nowadays AI literacy. I know some universities have just begun to run separate courses, and I don't think that's necessarily a bad thing, especially as we're just getting into this generative AI era. But I think the way we need to move in the medium to long term is really the integration of this skill set with all of the other content and skills that we're teaching students. Yeah.

Rachel Siow ROBERTSON  14:48 
I suppose I find that encouraging, because hopefully where disciplines are taught well, these literacies will be taught well.

Mark PEGRUM  14:56 
Yeah, I think so, yeah. I mean, certainly, you know, we're often teaching digital literacies, even if we don't specifically name those literacies, in a whole range of discipline areas. So, you know, we're teaching multimodal literacy and information literacy and search literacy and so on. So I think the AI literacy skill set is another skill set that we want to add into the mix, but it's a bit difficult to conceptualize exactly what it means at the moment. There have been a few attempts, and we'll see a lot more in the next few years, and that's where I think some open conversations with students across a range of disciplines, across a range of subject areas, would be really good.

Rachel Siow ROBERTSON  15:34 
Right. So we've already talked a bit about who stands to gain, who's benefiting and who's losing out. And we've talked about different players, so staff and students. Now I kind of wonder, in your view, are universities and their tech partners, like software and platform providers, playing fair with students and staff regarding digital ethics?

Mark PEGRUM  15:58 
It's a very interesting question. Actually, there is a term we're beginning to hear used now, which is platformisation. What it refers to is the fact that our public educational institutions are increasingly beholden to corporate technologies, including black-box AI technologies, where we really don't know what's happening inside those black boxes. And at the same time, we're talking about, as I said earlier, systems of total educational surveillance, which are vacuuming up students' data, but educators' data as well. We need to ask some questions about what algorithms are being applied to that data and what that data is being used to do. For example, is it being used to identify students who might be at risk? Is it being used to monitor staff, how often they're online and how often they interact with students, that kind of thing? And beyond that, there are the questions about commercialization, and I think this speaks to your question. When companies collect a huge amount of data, they're not necessarily using data about any individual, but that aggregated data is very, very useful. You know, data is the new gold in many ways. Essentially, they are commercializing our data, and that means that students and staff are engaging in unpaid labour. And so, you know, this is really an important question: is that ethical, actually? And do we know what's happening with that data? So I think we certainly need to be talking about this data mining and this issue of unpaid labour. It's an open question, really, at the moment.

Rachel Siow ROBERTSON  17:35 
What kind of scale are we talking at? Do you have an idea of how different educational software is connected with these companies?

Mark PEGRUM  17:46 
It's quite difficult to get the full picture, actually, and I think different companies are doing different things with the data, but certainly that potential for commercialization of the data is always there, and no doubt it's happening in some cases. I think staff and students need to be aware of what's happening with their data and where it's going. I think people need to have the option to opt out, and that's something that has been discussed recently: should students have the ability to say, no, I don't want my data collected by a learning management system, I don't want it subjected to learning analytics? We're kind of giving up control of our data the moment that we interact in any kind of online environment, and we really don't know what happens with it after that. So it's hard to give a clear answer, but we know various different practices are going on, and I think it's something that we need to shed some more light on and get people thinking about.

Rachel Siow ROBERTSON  18:41 
Yes, I think it's hard to even conceptualize what's happening, in terms of something that a student is doing now could be captured in data that could then be held and sold on for years into the future, right?

Mark PEGRUM  18:53 
Well, that's the other thing, because data can be retained for a really long period, and even if it's not possible at the moment to analyse data in certain ways or do certain things with it, that could become possible in five or 10 years' time, and so we just don't know what's down the road once we've given up the rights to our data.

Rachel Siow ROBERTSON  19:14 
Well, thanks so much. I feel like we've had a really helpful overview and then some specific examples, and just thought about what the current challenges are. Now, maybe let's look to the future and think about some future directions. Looking forward, what emerging trends in technology should we as educators be mindful of from an ethical standpoint?

Mark PEGRUM  19:37 
Well, I'd say the big one is what we've been talking about today already, which is generative AI, because, as I said right at the start, it's like a new stage of digital technology development. It's a more powerful stage, and therefore potentially more valuable but potentially more risky. So that's one side of it. But actually, as I also said earlier, the themes that come through when we're talking about generative AI are themes that have been there for a long time with many other digital technologies. So all of this stuff about privacy and surveillance and data collection and so on, that's not just to do with generative AI. So I think in keeping a focus on generative AI, which is, if you like, at the cutting edge, we will also be talking about themes that apply to all other digital technologies. And I think ultimately, what we need to be doing is to work on developing our digital literacies skill set, both educators and students, in order to be able to work more effectively with these technologies, but also to have the kind of critical lens that we've been very much focused on today.

Rachel Siow ROBERTSON  20:43 
Just with generative AI, are there some trends that you're seeing? I think you've spoken a bit about how OpenAI has said that current chatbot versions are not the end of what ChatGPT is. Could you speak a bit to what generative AI will do?

Mark PEGRUM  21:01 
Yes, so there was an announcement on X, formerly Twitter, earlier this year by the Head of Developer Relations at OpenAI, which of course is the company behind GPT and ChatGPT, and what he said was that the final form of ChatGPT is not going to be chat. That led to quite a bit of speculation about what this might mean. One possibility that has been suggested by a number of commentators is that we could be looking at humanoid AI robots. And we know, for example, that OpenAI is collaborating with a robotics company called Figure, and they're working on next-generation AI robots. They're not the only ones; other companies are also working in this space, so that is a trend that is certainly worth watching in coming years. We may well have AI robots chatting to us, rather than having to go onto our computers, as we do at the moment.

Rachel Siow ROBERTSON  21:55 
So it might look different, but perhaps with some similar themes still.

Mark PEGRUM  21:59 
Oh, I think definitely there will be similar themes coming through, and actually, developing a critical lens now is only going to be a benefit in the future as we see new twists and turns in the development of the technology, because these critical issues, these key issues, are going to be there.

Rachel Siow ROBERTSON  22:14 
Yeah, I suppose another trend that I'm interested in is how some of these things are just going to get more integrated as well. So instead of going to different platforms or different devices, maybe there will be one thing with all of these different functionalities coming together.

Mark PEGRUM  22:29 
I think you're absolutely right. I mean, we're in a very experimental stage at the moment with generative AI. It's a bit like the early days of Web 2.0, or social media, when there were dozens of new companies almost every day, it seemed, and new software platforms and so on, and many of them either amalgamated or disappeared over time. I think the same thing will happen. There are a lot of generative AI offerings out there, and they are not all going to be able to survive; it's worth remembering that a lot of them are not really financially self-sustaining at the moment. A lot of these companies, a lot of these platforms, are sustained by venture capital, and that's not really a long-term option. So all of the current pieces of software that we see, all of the current options, they won't all survive; there will be some consolidation. But I think you're quite right, there'll be integration as well. And, you know, we were talking before about privacy and surveillance and our personal data; that's going to become all the more risky as we see more integration between different platforms, and as we do more of our daily activities through possibly a single source, whether that's interacting with one online platform, or whether it's interacting with one AI robot, whatever it might be. So those risks are going to be even more extreme in the future.

Rachel Siow ROBERTSON  23:50 
Well, I feel like you've given us a lot of good tools already, but maybe we can bring some of that together. And just let me ask you, what advice would you give to universities and other educational institutions to foster ethical digital practices?

Mark PEGRUM  24:07 
Well, I think to begin with, I would reiterate what I said about digital literacies. I think that's a really key aspect of our response here. So it's developing those literacies for us as educators, but also helping our students to develop them. And I think, beyond that, we need to see these literacies as part of a pathway to digital citizenship. As you probably know, there's a project running at the moment, based here at Hong Kong Baptist University but involving a number of other universities in Hong Kong, which is about digital citizenship. In the conceptual framework of digital citizenship which this project has produced, and which will be widely disseminated in time, there are nine key elements, and one of those is digital literacies, and specifically information literacy. So I think that's an important signal that digital literacies are not only important in and of themselves, but they are an important stepping stone on the pathway to us becoming effective and ethical digital citizens. So I think seeing digital literacies as part of this larger concept of digital citizenship is quite a helpful way to look at the whole situation.

Rachel Siow ROBERTSON  25:21 
Right. And how would you sum up the ideal digital citizen? How do you understand what it is to be a good citizen in the digital realm?

Mark PEGRUM  25:30 
That's actually a very good question. There are lots of different aspects to it. I mean, one part is being able to operate effectively in spaces that are increasingly digitally mediated. So that's more the functional or operational side, but I think the ethical side is very important too, and that ties into the critical side that we've talked about as well: being able to step back to see the bigger picture, to consider which practices you come across seem to be ethical and which ones perhaps don't, but also considering your own actions and your own impact on this ecosystem. So I think we almost have to bring multiple lenses to bear on this concept. One of the complicated things about this whole discussion around digital citizenship is that obviously laws vary by jurisdiction, and so there are situations where something that is legal in one country is illegal, or perhaps discouraged, in another country. So there are some differences like that, and we're seeing some discussions at the moment about how we can develop a global concept of digital citizenship when we are dealing with a whole lot of different jurisdictions. That's an ongoing conversation at the moment, and it's a kind of complicating factor in all of this. But that issue aside, it's about having the operational and functional skills, it's about having the critical lens, and then considering what you find to be ethical, both in terms of other people's behaviour, but also particularly your own.

Rachel Siow ROBERTSON  27:05 
Yeah, I find that really helpful, just in contrast to some of the harms and dangers we were talking about at the beginning, where it seems like things are getting left out and only some things are being counted and valued. This picture of the digital citizen is someone who is valued as a whole.

Mark PEGRUM  27:22 
Yeah, it has to be something holistic. Yeah, absolutely, great.

Rachel Siow ROBERTSON  27:25 
Well, just to finish up with a final future-looking question: how can educators and policy makers play a more active role in shaping a more ethical digital future in higher education?

Mark PEGRUM  27:39 
Okay, well, again, I would reiterate the point I've made about the importance of digital literacies and the importance of seeing those as part of the pathway to digital citizenship. But beyond that, I would say I think educational institutions actually have an important leadership role to play. In other words, it's not just about us as educators developing these skills, or even us helping our students to develop them. We need to be talking to the wider public, because generative AI and all of these other technologies are impacting all of our everyday lives, and that means we need to involve a lot more stakeholders in the conversations about how these technologies are being rolled out and what guardrails there should perhaps be. We can't leave those conversations to computer scientists and engineers. I mean, they have to be part of the conversations, obviously, but because we're all impacted by this, I think we all need to be in those conversations. And I think that educational institutions could do more to involve the general public and to communicate with the general public about what the advantages and disadvantages are, what the issues are with these technologies. You know, sometimes in education, in academia, we tend to talk to other academics primarily, and I think we need to be presenting ideas about what's going on in a way that is accessible to the general public. We need to be leading debates, and we need to be publishing work not just in academic journals. I mean, that's important as well, but we also need to be publishing in the mainstream media or online, where our work can be accessed by a wider group of people. So I would like to see educational institutions in general take more of a leadership role.

Rachel Siow ROBERTSON  29:27 
Right. Do you think we could bring students along with us too, in that sort of work?

Mark PEGRUM  29:31 
Absolutely, yeah. Look, I think it's about educators and students and then, beyond the walls of our institutions, the general public, and I think everybody should be part of these conversations. Yeah.

Rachel Siow ROBERTSON  29:43 
Thanks, yeah. I find it really helpful to think about the active role and the leadership that we have, because a lot of the conversation about technology can sometimes come with a bit of fear or a feeling of disempowerment and uncertainty. But it's helpful to recognize that, as educators, we do have positions of power in our context, and there are things that we can do and that our students can do.

Mark PEGRUM  30:12 
Absolutely, I would totally agree.

Rachel Siow ROBERTSON  30:13
Great. Well, thank you so much, Professor Pegrum, for sharing your insights. For those interested in learning more, where can they find resources or follow your work?

Mark PEGRUM  30:28
Probably the best place to go is my website, which has a variety of different pages about different aspects of digital technologies. There is a page there on generative AI which I try to keep up to date. It's changing very quickly, so it's updated usually a few times a week. That's probably the best place to go, but you are certainly also welcome to email me with any questions or comments that you might have.

Rachel Siow ROBERTSON  30:47
Sure, thank you so much. We'll be providing those details of your website and email address for anyone interested as well. Thank you.

Mark PEGRUM  30:55
Thank you very much, it’s been great chatting.

Transcribed by https://otter.ai