The rise of generative Artificial Intelligence (AI) has sparked intense debate within universities across the world. A common refrain expressed by lecturers to their students is: "If I catch you using AI…". While often framed as a defence of academic integrity, such language reveals a deeper anxiety about the role of AI in higher education. More importantly, it reflects a growing tendency toward what might be described as surveillance pedagogy, an approach that focuses primarily on detecting and policing student behaviour rather than guiding responsible engagement with new technologies.
This response is understandable. Universities are guardians of knowledge, and concerns about plagiarism, intellectual authenticity and academic honesty are legitimate. Yet if universities respond to AI primarily through prohibition and detection, they risk misunderstanding the scale of the transformation currently underway. Artificial intelligence is not simply another tool students might misuse; it represents a profound shift in how knowledge is created, accessed and applied.
The real challenge facing universities is therefore not whether AI will enter the classroom (it already has) but how institutions choose to govern its presence.
A growing body of scholarship suggests that a surveillance-driven response may be counterproductive. When universities frame AI use primarily as misconduct, they encourage concealment rather than learning. Students quickly become adept at hiding their engagement with AI tools, while the deeper questions, such as how these technologies shape thinking, creativity and knowledge production, remain unaddressed. In such environments, education risks becoming a game of cat and mouse between detection software and student ingenuity.
The consequences extend beyond the classroom. In the world of work, AI is rapidly becoming embedded in everyday organisational life, from recruitment systems and decision-support tools to data analytics and creative industries. Graduates entering these workplaces will be expected not only to understand AI but to use it responsibly and critically.
If universities treat AI as something to be banned or feared, they inadvertently widen the academic–practice gap. In the workplace, AI is becoming a partner in thinking. Yet in many universities it is still treated as a forbidden tool. The most valuable graduates will not be those who avoid AI but those who know how to question it, guide it and work with it responsibly. When universities teach students to hide their engagement with AI, they are not protecting learning; they are weakening students' preparation for the future of work. That disconnect risks preparing students for a world that no longer exists.
Instead of responding with surveillance, universities should embrace a different orientation, one I argue for here: stewardship.
Stewardship recognises that universities have a responsibility not merely to regulate technology but to shape how it is used in the pursuit of knowledge and the public good. Rather than asking whether students are using AI, universities should ask how they can teach students to use it ethically, critically and creatively.
Is the problem with us lecturers?
In our published research in the journal Reading and Writing on lecturers' digital literacy development, we note a concern that needs to be addressed. While academics may initially resist new technologies, pressure from students, peers and changing career expectations eventually calls for adaptation. AI is no different: universities can either steward this transition or pretend it is not happening.
Universities have seen this story before. Technologies are first dismissed as fads, then reluctantly adopted, and eventually become essential. Artificial intelligence is simply the next chapter in that story.
This shift from surveillance to stewardship involves four key changes.
First, universities must invest in AI literacy. Students and staff alike need to understand what AI systems can and cannot do, the biases embedded within them and the implications of outsourcing cognitive work to algorithmic systems. AI literacy is not simply about technical competence; it is about cultivating critical judgment in an AI-mediated world.
Second, institutions must strengthen the ethics of knowledge creation. Artificial intelligence raises profound questions about authorship, originality and intellectual responsibility. Addressing these issues requires dialogue, transparency and thoughtful assessment design, not simply technological detection systems.
Third, universities must remain attentive to epistemic justice. Most generative AI systems are trained predominantly on Western, English-language data. Without critical engagement, these technologies risk reinforcing global knowledge hierarchies that marginalise African perspectives and scholarship. Universities in the Global South therefore have a unique responsibility to ensure that AI engagement does not reproduce historical patterns of exclusion in knowledge production.
Finally, an appeal to all parties concerned. Thinking about AI in universities requires us to think simultaneously about ethics, intimacies and ecologies: about how we responsibly govern AI, how we sustain trust between lecturers and students, and how we protect the knowledge ecosystem of the university.
These four concerns are increasingly recognised within the South African higher education sector. National initiatives exploring critical AI literacies are beginning to shift the conversation away from narrow concerns about assessment integrity toward broader questions about the future of knowledge itself. Such initiatives acknowledge that the implications of AI differ across disciplines and that universities must develop contextually appropriate responses grounded in their educational missions.
At its core, the debate about AI is not merely technological. It is about the kind of intellectual culture universities wish to cultivate.
A surveillance-driven approach assumes that students must be monitored in order to protect knowledge. A stewardship approach assumes something different. It views students as stakeholders who can be trusted to engage responsibly with technology when universities provide the guidance, ethical frameworks and intellectual tools to do so.
The stakes are high.
Universities remain among the few institutions in society dedicated to the careful creation and stewardship of knowledge. If they respond to AI primarily through fear and control, they risk undermining the very intellectual curiosity they seek to protect.
But if they respond through stewardship, by fostering critical engagement, ethical reasoning and intellectual courage, they can transform this moment of disruption into an opportunity.
Artificial intelligence will undoubtedly reshape how we learn, work and think. The question is whether universities will meet this moment as institutions of surveillance or as communities of stewardship committed to guiding the future of knowledge.
This article is based on a talk given as part of a panel on AI: Ethics, Intimacies and Ecologies at Rhodes University on Tuesday 10 March 2026.
Willie Chinyamurindi is a professor in the Department of Applied Management Administration and Ethical Leadership at the University of Fort Hare.
