In a 2025 state-of-the-art overview published in Encyclopedia, Manolis Adamakis and Theodoros Rachiotis map what higher education currently knows (and still struggles with) about academic integrity, AI literacy, and governance. Synthesizing recent research and policy developments, including guidance linked to UNESCO and the OECD, the study delivers a central message: AI can widen access and boost efficiency, but without clear rules and skills it can also accelerate misconduct, bias, and shallow learning. This matters because universities are training the future workforce while AI adoption is outpacing institutional safeguards. The authors argue that the real question is no longer "ban or allow," but how to redesign teaching and assessment so that students still learn to think, verify, and create responsibly.

Across the evidence they review, the authors highlight that AI can support personalization and reduce workload in tasks such as drafting, feedback, and administration, but that outcomes depend heavily on human oversight and transparency. At the same time, academic integrity risks are growing, not only through undisclosed AI-assisted writing and plagiarism, but also through unreliable "AI detection" approaches that can produce false accusations and erode trust. The paper argues that AI literacy must now be treated as a core graduate skill, covering not just tool use but also verification, bias awareness, ethical disclosure, and critical evaluation. Finally, the authors warn that over-reliance may create "cognitive debt," gradually weakening memory, independence, and deep learning if assessment design fails to protect authentic thinking.

DOI: 10.3390/encyclopedia5040180


Editor: Bashir Mehvish

First-Round Review Editor: Guo Enkai

Second-Round Review Editor: Peng Xiyang