AI evolution in academic libraries: A critical assessment framework for the digital age

CO-EDP, VisionRI | Updated: 17-01-2025 16:18 IST | Created: 17-01-2025 16:18 IST

The integration of artificial intelligence (AI) into research and higher education offers both unparalleled opportunities and unique challenges. In their article, “Cutting Through the Noise: Assessing Tools That Employ Artificial Intelligence”, Leticia Antunes Nogueira, Stine Thordarson Moltubakk, Andreas Fagervik, and Inga Buset Langfeldt examine the complexities of evaluating AI-driven tools for academic libraries. Published in IFLA Journal (2025), the study emphasizes the critical role of academic libraries as impartial evaluators and navigators of the rapidly expanding AI tool landscape. Libraries, as stewards of information and learning, are uniquely equipped to ensure that AI tools are adopted responsibly, aligning with academic values and societal needs.

Academic libraries have historically embraced emerging technologies, transitioning from custodians of physical resources to dynamic hubs of digital information management. AI represents the latest technological frontier, promising efficiency gains, new research capabilities, and enhanced learning experiences. However, its adoption is fraught with challenges, including opaque algorithms, potential biases, privacy concerns, and the risk of undermining academic craftsmanship. Libraries are increasingly tasked with evaluating these tools, offering a vital service to stakeholders who may lack the time or expertise to navigate the AI landscape.

The authors highlight that libraries are often approached by vendors pitching AI tools with claims of revolutionizing academic work. This pressure can lead to hasty adoption without thorough evaluation. Libraries, therefore, must act as critical gatekeepers, balancing the allure of innovation with the responsibility to uphold ethical and practical standards.

Complexities of AI tool assessment

AI is a multifaceted technology encompassing machine learning, neural networks, natural language processing, and other advanced methodologies. Its applications in academic contexts range from automated literature searches to predictive analytics in research. However, the hype surrounding AI often obscures its limitations, creating challenges for libraries attempting to discern genuine innovation from marketing hyperbole. The authors stress that while technical expertise is not mandatory, a basic level of technological literacy is essential for librarians tasked with evaluating these tools.

A framework for evaluating AI tools

The study proposes a structured framework for assessing AI tools, focusing on three core dimensions:

Tool Purpose, Design, and Technical Aspects 

Understanding a tool’s purpose is fundamental to assessing its value. Librarians must ask critical questions about the tool's intended function, how it integrates into existing workflows, and whether it meets its promised objectives. For instance, tools like ChatGPT or Microsoft Copilot may excel in natural language generation but lack reliability in providing accurate or trustworthy information. Libraries should demand transparency from vendors regarding algorithms, data sources, and potential limitations. The authors caution against adopting tools without a clear understanding of their design and functionality, as this can lead to misuse or ethical lapses.

Information Literacy and Academic Craftsmanship

Libraries are at the forefront of promoting information literacy, equipping users with the skills to critically engage with AI-generated outputs. The convenience offered by AI tools - such as automated citations or content recommendations - risks fostering superficial research practices and eroding the academic rigor of students and researchers. The study underscores the importance of evaluating whether AI tools enhance academic skills or inadvertently discourage critical engagement. For example, tools that summarize large volumes of data may save time but could reduce opportunities for deep, reflective learning.

Ethics and the Political Economy of AI

Ethical considerations are paramount in assessing AI tools. Issues such as data privacy, algorithmic biases, and environmental sustainability require careful scrutiny. Training large language models, for instance, consumes significant energy resources, raising questions about their environmental impact. Libraries must also examine the political economy of AI, including the power dynamics between large technology companies and academic institutions. Vendors’ business models - such as reliance on user data for profit - should be evaluated for alignment with academic values of transparency and equitable access.
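
Taken together, the three dimensions give library teams a template for recording what they learn about a candidate tool. As a minimal sketch (our illustration, not code or terminology from the study), the dimensions could be encoded so that observations and follow-up items are filed consistently; all class, field, and example names below are hypothetical:

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical sketch: the article's three assessment dimensions encoded so
    # that a library team can file observations about a candidate tool
    # consistently. Names and sample findings are illustrative, not from the study.
    class Dimension(Enum):
        PURPOSE_DESIGN = "Tool purpose, design, and technical aspects"
        INFORMATION_LITERACY = "Information literacy and academic craftsmanship"
        ETHICS = "Ethics and the political economy of AI"

    @dataclass
    class Finding:
        dimension: Dimension
        observation: str               # what the evaluating librarian observed
        needs_follow_up: bool = False  # flag issues to raise with the vendor

    findings = [
        Finding(Dimension.PURPOSE_DESIGN,
                "Vendor has not disclosed training data sources.", True),
        Finding(Dimension.ETHICS,
                "No information on the energy footprint of the underlying model.", True),
    ]
    for f in findings:
        flag = "FOLLOW UP" if f.needs_follow_up else "ok"
        print(f"[{flag}] {f.dimension.value}: {f.observation}")

Such a record keeps concerns from the three dimensions side by side, rather than letting technical, pedagogical, and ethical questions be handled in isolation.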

Practical tools: Guiding questions for assessment

To operationalize their framework, the authors provide a comprehensive set of guiding questions, organized into foundational and advanced levels. These questions include:

  • What specific problems does the tool address, and how does it improve workflows?
  • What are the tool's limitations, and are these clearly communicated by the vendor?
  • Does the tool integrate seamlessly with existing software, and what are the implications of vendor lock-in?
  • How are data privacy and security managed, and what rights do users retain over their data?
  • Does the tool’s functionality align with principles of research ethics and academic integrity?

The questions encourage librarians to engage deeply with tools, ensuring that their adoption is well-informed and aligned with institutional goals.
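
To make the checklist concrete, the guiding questions could be turned into an internal evaluation record that tracks which items remain unanswered before an adoption decision. The sketch below is our illustration rather than code from the study; the names, the sample entries, and the foundational/advanced assignments are hypothetical:

    from dataclasses import dataclass, field

    # Illustrative only: one way a library might organize the authors' guiding
    # questions into a working checklist. Names, sample entries, and level
    # assignments are hypothetical.
    @dataclass
    class GuidingQuestion:
        text: str
        level: str        # "foundational" or "advanced"
        answer: str = ""  # filled in during the evaluation

    @dataclass
    class ToolAssessment:
        tool_name: str
        vendor: str
        questions: list = field(default_factory=list)

        def unanswered(self):
            """Questions the team has not yet been able to answer."""
            return [q for q in self.questions if not q.answer.strip()]

    assessment = ToolAssessment(
        tool_name="Example summarization tool",
        vendor="Example vendor",
        questions=[
            GuidingQuestion("What specific problems does the tool address, "
                            "and how does it improve workflows?", "foundational"),
            GuidingQuestion("Are the tool's limitations clearly communicated "
                            "by the vendor?", "foundational"),
            GuidingQuestion("How are data privacy and security managed, and "
                            "what rights do users retain over their data?", "advanced"),
        ],
    )
    for q in assessment.unanswered():
        print(f"[{q.level}] {q.text}")  # items still open before any adoption decision

The point of such a structure is simply to make gaps visible: a tool with many open foundational questions is not yet ready for an informed adoption decision.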

Implications for academic libraries

The study highlights several implications for libraries in their role as evaluators and adopters of AI tools. First, libraries must navigate the tension between the pressure to adopt cutting-edge technologies and the need for deliberate, ethical decision-making. Second, they must build internal competencies to assess and integrate AI tools effectively. This involves fostering technological literacy among staff and collaborating with IT and legal departments to address complex issues such as data compliance and intellectual property.

Furthermore, libraries have a broader societal responsibility. As trusted, non-commercial entities, they can influence market dynamics by demanding transparency and ethical standards from vendors. By acting as critical intermediaries, libraries can steer the development of AI tools toward greater alignment with academic and public values.

Challenges and limitations

While the framework provides a robust starting point, the authors acknowledge its limitations. The rapidly evolving nature of AI means that assessment criteria must be continuously updated to address emerging technologies and challenges. Additionally, libraries must consider the diverse needs of their user base, tailoring their evaluations to the unique contexts of their institutions.

The study also emphasizes the importance of fostering collaboration between libraries, vendors, and other stakeholders. By working together, these groups can co-create tools that meet the practical needs of academia while adhering to ethical principles.

First published in: Devdiscourse