AI for everyone, but not everything

Look. I grew up in the 80s. When you say AI to me, I automatically think of Skynet and humanity’s desperate fight for survival. Younger me certainly didn’t expect older me to be actively thinking about how to use AI for the public good. If anything, younger me would be bitterly disappointed that older me isn’t a ghostbuster. Life can be cruel like that.

Last week I was over in the US as part of Adobe’s International Study Tour. The first part of the trip was a day at the University of Texas at San Antonio (go Roadrunners! Meep meep!) learning about how the university has worked to embed digital skills into the curriculum. We then headed over to San Jose for the rest of the week, and so I am writing this having just got back home from the brilliant Adobe EduMAX 2024 conference. The theme of the event, along with the International Study Tour that I was fortunate enough to be a part of, was ‘Empowering the AI Generation: Building Career Pathways with Digital Storytelling’. It really is a unifying topic – one of the most obvious things about the week was that the delegates, despite coming from universities around the world, are all grappling with the same issue. And despite the differences in university type, sector funding and so on, we largely agreed on the need to adopt a positive approach to managing AI in university.

At Maynooth, we’re moving rapidly along this path. One of the key jobs I was tasked with when I started was to lead the development of guidelines and policies for the use of GenAI in our teaching and learning. It’s something that I was already engaged with in my previous role, so at least I wasn’t starting from nothing.

From early discussions with staff and students at Maynooth, it was clear that the lens we should look through was one centred on student anxiety and uncertainty over actually using GenAI. There has been a little work in this area already, some of which suggests that trust is a key issue – that students currently do not feel that ‘two-way transparency’ exists. In other words, students must put in a large amount of effort to demonstrate that GenAI has not been used in an assessment, yet academic staff do not provide sufficient detail of their own use of GenAI. Nervousness and uncertainty compound the problem – can students use AI, to what extent, and how will academics and lecturers respond when they declare this? The situation is made worse by the incorporation of AI tools into standard university software, such as Copilot in Microsoft Office. Despite this uncertainty, data from US universities shows that most students (50% of current non-users; 75% of regular AI users) will continue to use GenAI tools even if the academic or institution bans them (Tyton Partners, 2024).

At Maynooth, our key student issues were worries about committing academic misconduct, fears of false positives being flagged, uncertainty around legitimate usage (such as using Grammarly), staff communication, and the chance to discuss concerns with academics.

It’s important to start somewhere, and so our approach began with these premises: that the future workplace will be an AI-enhanced professional environment; that “Educators are in the difficult position of not fully knowing what the concrete outcomes of AI will be” (Fujii and Aoun, 2024); but that “One thing is certain, the AI you are using today is the worst AI you are ever going to use” (Mollick, 2024). We also framed it all with the phrase in the title of this blog – AI for everyone, but not everything. It recognises that AI needs to be used with consideration and thought, but also that we need to ensure that no-one is left behind as opportunities develop and the workplace changes.

As psychologist Alison Gopnik puts it, these are “cultural technologies”, like writing, print, libraries and internet searches. AI is a tool for human augmentation, not replacement. Indeed, one of the repeated comments from these past few days was that large companies do not think AI will take our jobs, but that workers who can use AI will take the jobs of those who cannot. Now, that is something that we in HE can address.

So how do we do something constructive? One of the tasks in our afternoon workshop was a speed think, or ideas purge (not sure I like that phrase…). We had just 60 seconds to write down the main challenges to integrating responsible GenAI into the curriculum that came to mind. My list: mistrust, threat to academic integrity, threat to the discipline, lack of understanding of the tech, legal issues, and institutional coordination. All valid, but you can’t just dwell on the negatives while the issue marches forward.

With that in mind, we’re in the midst of running our ‘GenAI Guidelines and Resources for Learning, Teaching, and Assessment’ project, funded through the Strategic Alignment of Teaching and Learning Enhancement Funding in Higher Education. It is led by Lisa O’Regan (Head of our Centre for Teaching & Learning) and Dr Aisling Flynn (Head of our Student Skills and Success unit) and supported by our Students’ Union. I actually spoke about it a couple of weeks ago at Cadmus’ UK AI Roundtable, ‘Generative AI Frameworks and Solutions’. There are a number of outputs for this project, but a key one is a set of guidelines co-created with our students. To do that, we took input from our AI Expert Advisory Group (which is very multidisciplinary and also includes PhD students) and then ran a series of writing sprints (including a pre-sprint with our students to help them become accustomed to the process and give them the confidence to get stuck in). These were periodically reviewed and supported with weekly Huddles. Finally, we pulled the comms together and created a GenAI Student Portal on our website, built by students for students, and officially launched in Academic Integrity Week – which is this week.

As you can imagine, we spent a lot of time talking about AI last week, but not a huge amount of that time was spent on assessments. There’s a lot of concern around using AI for cheating, but AI didn’t invent cheating, and what’s the worst that can happen – that we reflect upon our assessments to ensure that they are still relevant and appropriate? As one of the speakers at EduMAX said, students express themselves in a multimodal way, so shouldn’t assessment reflect this? Regardless, the application of AI in assessment is just one aspect of this debate. For me, a more interesting perspective is to think about how we can start to co-create with AI. AI is not leaving HE – after all, how many of us were told at school that we had to learn all of those maths processes by heart because we wouldn’t always have a calculator in our pockets when we were adults? What was very clear last week was that the rise of AI in the workplace will increase the focus on the facets that make us human – creativity, imagination, collaboration, resilience. As an anthropologist, my official response to that is, “Well, of course”.

As I flew back from California, I finally got around to watching the latest Mission: Impossible film. The key thorn in our collective heroes’ side this time – rogue AI. Can’t help but feel that Hollywood is kind of undermining my point here…

One Comment

  1. Hi Tim, really like this format, great to know what’s going on in your head. Like the point about transparency between Staff and Students. So true. If we’re not open and hence vulnerable to making mistakes, we can’t learn… and that works both ways.
    (You triggered a memory: The first video tape my Dad rented in the 80s was Terminator. His work colleagues had recommended it to him… “they said it was a good film”. We never saw the end, or much of the middle… Mum took over rentals from that point.)
