Is AI the future of higher education? Only if it remains human-centric

CO-EDP, VisionRI | Updated: 20-03-2025 12:25 IST | Created: 20-03-2025 12:25 IST
Representative Image. Credit: ChatGPT

Artificial intelligence (AI) is everywhere these days. However, without human oversight, this powerful technology risks becoming a black box: unexplainable, biased, and unreliable. When it comes to higher education, the stakes are even higher. Research integrity, academic rigor, and ethical considerations must remain at the forefront of AI-driven advancements.

For AI to truly enhance learning, research, and scholarly practices, it must be designed with human oversight, transparency, and accountability at its core. A recent study presents a novel framework that integrates human judgment at every stage of the AI process, ensuring that decision-making is guided by human values and ethical considerations. Titled "Human-Centered Artificial Intelligence in Higher Education: A Framework for Systematic Literature Reviews", the study advocates for AI systems that enhance research efficiency and scholarly inquiry without compromising academic integrity.

Beyond the black box: Making AI-driven research transparent and trustworthy

One of the primary challenges of AI in education is the lack of transparency in decision-making processes. AI-driven platforms often collect and analyze vast amounts of academic data, but without proper oversight, their outputs can be biased, misleading, or misaligned with academic principles. The study's Human-Centered AI for Systematic Literature Reviews (HCAI-SLR) framework directly addresses this issue by embedding transparency into every stage of AI deployment.

This framework follows a multi-phase AI governance model to ensure that AI-driven research processes are both transparent and ethically sound. The model incorporates three critical control points: Human-before-the-loop, Human-in-the-loop, and Human-over-the-loop. In the first phase, researchers set clear objectives and ethical boundaries before AI tools are deployed. In the second phase, AI is integrated into research workflows, but human experts continuously monitor and validate AI-generated outputs. The final phase ensures ongoing oversight, where human researchers review AI’s conclusions, mitigating risks related to bias and misinterpretation.
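The three control points described above can be sketched as a simple pipeline. This is an illustrative outline only; all function names, criteria, and data here are hypothetical stand-ins, not artifacts of the study itself.

```python
# Illustrative sketch of the three human control points.
# All names and sample data are hypothetical, not from the study.

def human_before_the_loop():
    """Phase 1: researchers set objectives and ethical boundaries up front."""
    return {
        "research_question": "How is AI used in higher-education SLRs?",
        "inclusion_criteria": ["peer-reviewed", "published 2015-2025"],
        "ethical_bounds": ["no fabricated citations", "log all AI outputs"],
    }

def human_in_the_loop(ai_outputs, reviewer):
    """Phase 2: a human expert validates each AI-generated output."""
    return [item for item in ai_outputs if reviewer(item)]

def human_over_the_loop(conclusions, audit):
    """Phase 3: ongoing oversight of AI conclusions for bias or misreading."""
    return [c for c in conclusions if audit(c)]

# Example run with stand-in data and deliberately simple checks.
protocol = human_before_the_loop()
screened = human_in_the_loop(["study A", "study B"], reviewer=lambda s: True)
final = human_over_the_loop(screened, audit=lambda c: c.startswith("study"))
print(final)
```

The point of the sketch is structural: the AI never moves a study from one phase to the next on its own; a human-supplied check gates every transition.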

By adopting this structured approach, the study demonstrates how AI can enhance systematic literature reviews (SLRs) without compromising the credibility of academic findings. The framework outlines specific AI-powered tools that can assist with data extraction, synthesis, and organization, while ensuring that the final interpretation remains in human hands. This not only improves research efficiency but also builds trust in AI-assisted academic practices.

Smart AI, smarter research: Ensuring integrity in an AI-driven academic world

AI-powered tools are revolutionizing how researchers conduct literature reviews, but maintaining academic rigor remains a challenge. The HCAI-SLR framework proposes a balanced approach where AI augments, rather than replaces, human research efforts. This study identifies two primary categories of AI tools used in systematic literature reviews: prompt-based AI tools and task-oriented AI tools.

Prompt-based AI tools, such as ChatGPT and Claude, facilitate interactive query-based research, helping researchers refine research questions, extract relevant literature, and summarize key findings. These tools support the initial phases of an SLR, assisting with study identification, synthesis, and analysis. Task-oriented AI tools, such as Typeset and Covidence, by contrast, focus on screening, quality assessment, and data extraction, enabling researchers to process large datasets efficiently while maintaining accuracy.

The study also introduces the concept of AI triangulation, a process where multiple AI tools cross-check each other’s outputs to identify inconsistencies or biases. This method strengthens the reliability of AI-assisted research by reducing systemic errors and ensuring that AI-driven insights align with human interpretation. AI triangulation is particularly useful in large-scale reviews where thousands of studies must be evaluated for relevance and credibility.
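One way to picture triangulation is as a vote across tools, where any disagreement is escalated to a human reviewer. The snippet below is a minimal sketch under that assumption; the tool names, the unanimity rule, and the status labels are all illustrative, not drawn from the study.

```python
# Hypothetical sketch of AI triangulation: cross-check relevance
# judgments from multiple AI tools and flag disagreements for humans.
from collections import Counter

def triangulate(judgments):
    """judgments maps tool name -> {study_id: include?}.
    Returns per-study status: 'agree-include', 'agree-exclude',
    or 'human-review' when the tools disagree."""
    result = {}
    studies = {s for votes in judgments.values() for s in votes}
    for study in sorted(studies):
        votes = [v.get(study) for v in judgments.values()]
        tally = Counter(v for v in votes if v is not None)
        if len(tally) == 1:  # unanimous verdict across tools
            include = tally.most_common(1)[0][0]
            result[study] = "agree-include" if include else "agree-exclude"
        else:
            result[study] = "human-review"  # disagreement -> human decides
    return result

verdicts = triangulate({
    "tool_a": {"s1": True, "s2": False, "s3": True},
    "tool_b": {"s1": True, "s2": False, "s3": False},
})
print(verdicts)  # s1 and s2 are unanimous; s3 goes to human review
```

The design choice worth noting is that disagreement is not resolved automatically: it is surfaced, which is exactly the "AI as aid, not decision-maker" stance the framework takes.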

Furthermore, by integrating human decision-making checkpoints throughout the research process, the HCAI-SLR framework ensures that AI serves as an aid rather than an autonomous decision-maker. Researchers retain control over final selections, data interpretation, and conclusions, reinforcing the ethical application of AI in academic research.

AI as a partner, not a replacement

The findings from this study suggest that human-centered AI can transform higher education by making research more efficient, accessible, and reliable. The HCAI-SLR framework provides a clear pathway for integrating AI into systematic literature reviews while preserving academic integrity, transparency, and ethical considerations. Beyond research, these principles can be extended to AI-driven educational platforms, ensuring that students and educators benefit from AI without compromising human oversight and ethical standards.

The study also highlights the need for AI literacy in academia, advocating for training programs that equip researchers, educators, and students with the skills needed to navigate AI-powered research tools responsibly. As AI continues to evolve, higher education institutions must prioritize frameworks like HCAI-SLR to create a balanced relationship between AI automation and human expertise.

Future advancements should focus on expanding the adaptability of the HCAI-SLR framework across diverse research domains, refining AI models to enhance their interpretability and ethical compliance, and integrating AI-driven insights with human-led academic practices. The key objective is to build a sustainable AI ecosystem in higher education where technology complements human intelligence rather than takes its place.

  • FIRST PUBLISHED IN:
  • Devdiscourse