AI-driven learning analytics struggle to deliver measurable gains in higher education
- Country: South Africa
New research suggests that while artificial intelligence-driven learning analytics have advanced predictive capabilities, their real impact on educational outcomes remains uneven, particularly in resource-constrained contexts.
Published in Trends in Higher Education, the study "From Engagement to Outcomes: AI-Driven Learning Analytics in Higher Education—Insights for South Africa" assesses a decade of global research on AI in learning analytics and examines how these developments translate into the South African higher-education landscape.
The findings reveal a growing gap between technical progress in predictive modeling and the institutional, ethical, and pedagogical conditions required to turn predictions into improved student outcomes.
AI learning analytics are improving predictions but not outcomes
The study shows that AI-driven learning analytics have significantly enhanced universities’ ability to predict short-term academic performance and identify students at risk of failure or dropout. Early systems relied largely on descriptive dashboards that summarized student activity within learning management systems. Over time, these tools evolved into predictive models using machine learning techniques such as decision trees, random forests, and logistic regression.
More recently, deep-learning architectures and transformer-based models have entered the field, allowing institutions to process complex temporal patterns in student behavior. These systems can analyze clickstreams, assignment submissions, discussion participation, and other digital traces to generate early warning signals, often weeks before traditional assessments would reveal problems.
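The engagement-based risk scoring described above can be sketched in miniature. The example below is a hedged illustration, not the study's actual models: it uses hand-set weights and synthetic student features in a logistic scoring function, where a deployed system would learn the weights from historical data with techniques such as logistic regression or random forests.

```python
import math

# Hypothetical engagement features per student (synthetic values, not real data):
# weekly LMS logins, assignments submitted, discussion posts.
students = {
    "A": {"logins": 14, "submissions": 4, "posts": 6},
    "B": {"logins": 2,  "submissions": 1, "posts": 0},
}

# Illustrative weights chosen for this sketch; a real system would
# estimate them from past cohorts' outcomes.
WEIGHTS = {"logins": 0.15, "submissions": 0.8, "posts": 0.1}
BIAS = -2.0  # intercept: baseline log-odds of being on track

def risk_of_failure(features):
    """Return P(at risk) via a logistic model over engagement counts."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    p_on_track = 1 / (1 + math.exp(-score))
    return 1 - p_on_track  # higher engagement -> lower risk

for sid, feats in students.items():
    flag = "AT RISK" if risk_of_failure(feats) > 0.5 else "on track"
    print(sid, round(risk_of_failure(feats), 2), flag)
```

With these assumed weights, the disengaged student B crosses the risk threshold while the active student A does not, mirroring the early-warning behavior the review describes.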
The review finds strong and consistent evidence that behavioral engagement indicators are effective predictors of short-term academic outcomes, particularly course grades and early assessment performance. Time-aware models that update predictions incrementally throughout a semester consistently outperform one-time predictions, allowing institutions to intervene earlier.
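The time-aware idea can be shown with a minimal sketch: instead of a single one-off prediction, a running risk estimate is nudged each week as new engagement data arrives. The smoothing parameter and weekly numbers below are assumptions made for illustration only.

```python
# Sketch of a time-aware risk update: each week's engagement signal
# adjusts a running estimate, rather than predicting once per semester.
ALPHA = 0.4  # assumed weight given to the newest weekly signal

def weekly_risk(logins, expected=10):
    """Crude per-week signal: fraction of expected activity missing."""
    return max(0.0, 1.0 - logins / expected)

def update(prev_risk, this_week_logins):
    """Exponentially weighted update toward the latest weekly signal."""
    return (1 - ALPHA) * prev_risk + ALPHA * weekly_risk(this_week_logins)

risk = 0.5  # uninformed prior at semester start
for week, logins in enumerate([12, 9, 4, 1], start=1):  # declining engagement
    risk = update(risk, logins)
    print(f"week {week}: risk={risk:.2f}")
```

As the synthetic student's activity drops off, the estimate climbs week by week, surfacing a warning well before a traditional assessment would.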
However, the study also identifies a persistent limitation. Improved predictive accuracy does not automatically lead to improved learning outcomes. Across the literature, many institutions deploy dashboards and risk scores without embedding them into structured teaching, advising, or support workflows. As a result, analytics often function as diagnostic tools rather than drivers of meaningful pedagogical change.
The authors highlight what they call a “last-mile problem” in learning analytics. Predictions frequently fail to translate into action because educators are not provided with clear guidance on how to respond, when to intervene, or which support mechanisms to activate. In many cases, analytics outputs remain detached from everyday teaching practice, limiting their impact on retention and success.
This gap is particularly evident in studies evaluating learning analytics dashboards. While dashboards improve awareness and monitoring, the evidence shows inconsistent effects on achievement when they are used in isolation. Where positive outcomes are reported, they are typically associated with coordinated interventions such as targeted advising, curriculum redesign, or structured student support triggered by analytics signals.
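The "last-mile" gap the authors describe is, at root, a missing mapping from scores to actions. A minimal sketch of closing it, with thresholds and interventions that are purely illustrative assumptions, looks like this:

```python
# Sketch of closing the "last-mile" gap: each risk band maps to a
# concrete, pre-agreed action instead of a bare dashboard score.
# Thresholds and actions here are illustrative, not institutional policy.
INTERVENTIONS = [
    (0.8, "refer to academic advisor within 48 hours"),
    (0.5, "tutor-led check-in and study-plan review"),
    (0.3, "automated nudge: reminder of upcoming deadlines"),
]

def recommend(risk_score):
    """Translate a risk score into the strongest matching action."""
    for threshold, action in INTERVENTIONS:
        if risk_score >= threshold:
            return action
    return "no action: continue routine monitoring"

print(recommend(0.85))
print(recommend(0.4))
print(recommend(0.1))
```

The point of the sketch is that the output is an action an advisor can take, not an abstract number, which is the shift the review associates with positive outcomes.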
Ethical governance and contextual limits shape AI adoption
The study argues that sustainable AI adoption in higher education depends not only on model accuracy but also on trust, accountability, and regulatory compliance.
In the South African context, these concerns are shaped by the Protection of Personal Information Act (POPIA), which establishes strict requirements for consent, data minimization, purpose limitation, and accountability. While POPIA provides a strong legal foundation comparable to data-protection regimes in Europe, the study finds that institutional implementation remains uneven.
Many universities lack dedicated governance structures for AI and learning analytics. Policies governing data access, retention, and interpretation are often fragmented, leaving educators uncertain about how analytics outputs should be used in decision-making. Without clear accountability frameworks, there is a risk that predictive systems could reinforce bias, enable surveillance, or be misapplied in ways that harm students.
The review also sheds light on the growing importance of explainable AI in education. As predictive models become more complex, their outputs are increasingly opaque to educators and administrators. The literature shows that trust and effective use depend on the ability to explain why a student has been flagged as at risk and which factors are driving that assessment.
Explainability alone, however, is not sufficient. The authors emphasize that educators require training and institutional support to interpret analytics responsibly. Without adequate data literacy, even transparent models may be misunderstood or ignored, limiting their practical value.
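For linear risk models, one simple form of the explainability the literature calls for is a per-feature breakdown of the score. The sketch below uses assumed weights and a synthetic student to show how an educator could see which factors drove a flag; it is not the study's method, only an illustration of the principle.

```python
# Sketch of a simple explanation for a linear risk model: report how much
# each feature pushed the student's score toward "at risk".
# Weights and features are illustrative assumptions.
WEIGHTS = {"missed_deadlines": 0.9, "forum_posts": -0.2, "logins": -0.1}

def explain(features):
    """Return per-feature contributions to the risk score, largest first."""
    contribs = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sorted(contribs.items(), key=lambda kv: kv[1], reverse=True)

student = {"missed_deadlines": 3, "forum_posts": 1, "logins": 5}
for feature, contribution in explain(student):
    print(f"{feature:>16}: {contribution:+.1f}")
```

Here missed deadlines dominate the score, so an educator sees not just that the student was flagged but why, which is the precondition for trust the review identifies. Opaque deep models would need dedicated attribution techniques to offer the same transparency.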
These governance challenges are compounded by structural inequalities. Much of the research on AI-driven learning analytics originates from high-income countries with robust digital infrastructure and stable connectivity. Models developed in these contexts often assume continuous access to learning platforms and uniform patterns of engagement, assumptions that do not always hold in South African universities.
Bandwidth constraints, multilingual learning environments, uneven access to devices, and diverse modes of study complicate the interpretation of engagement data. Offline study, peer collaboration outside institutional platforms, and intermittent connectivity can distort analytics signals, increasing the risk of misclassification if models are not locally calibrated.
South Africa faces opportunity and evidence gaps
South Africa occupies a relatively advanced position within the African higher-education landscape. Several universities have implemented analytics-driven student success initiatives, including early warning systems, progression monitoring, and advising dashboards. National collaboration through sector-wide networks has further accelerated adoption and knowledge sharing.
However, the review also exposes a critical evidence gap. Only a small number of peer-reviewed empirical studies conducted within South African higher education were identified during the decade under review. This scarcity limits the ability to assess long-term outcomes, equity impacts, and the effectiveness of analytics-informed interventions at scale.
The dominance of Global North datasets and research raises concerns about transferability. Models trained on student populations in Europe or North America may not generalize well to South African contexts without careful adaptation. The authors argue that local data pipelines, validation studies, and longitudinal evaluations are essential to avoid reinforcing existing inequalities.
The study identifies several priorities for advancing AI-driven learning analytics in South Africa. First, institutions need to move beyond prediction toward integrated intervention systems where analytics are embedded in teaching and advising workflows. Signals must be timely, interpretable, and linked to specific actions rather than presented as abstract risk scores.
Second, governance frameworks must evolve alongside technical systems. Universities should establish clear policies for consent, data use, explainability, and human oversight, ensuring alignment with POPIA and international ethical guidance from bodies such as UNESCO and the OECD.
Third, capacity building is critical. Educators, advisors, and institutional leaders require training in data interpretation, AI literacy, and ethical decision-making. Without this human infrastructure, even advanced analytics systems are unlikely to deliver sustained improvements in student outcomes.
The authors call for more locally grounded research. Longitudinal studies evaluating the causal impact of analytics-driven interventions, particularly on retention and equity, are needed to move the field beyond predictive accuracy toward demonstrable educational value.
- FIRST PUBLISHED IN: Devdiscourse

