Can we trust digital competitiveness rankings? Study finds major index discrepancies
A newly published academic study has cast a spotlight on the foundations of digital competitiveness rankings, raising fundamental questions about how artificial intelligence and digital infrastructure are measured, interpreted, and compared globally. The study, titled "Digitalization and Artificial Intelligence: A Comparative Study of Indices on Digital Competitiveness" and published in the journal Information, investigates whether existing global indices evaluating digital readiness provide consistent and reliable assessments - or whether they tell conflicting stories about nations’ progress.
The research evaluates four widely cited digital indices and their rankings of 29 European countries over a six-year period from 2019 to 2024:
- World Digital Competitiveness Ranking (WDCR)
- Network Readiness Index (NRI)
- AI Readiness Index (AIRI)
- Digital Quality of Life Index (DQLI)
Each index captures a distinct view of the digital economy, from infrastructure to AI policy to internet affordability. However, the question confronting researchers was whether these indices measure progress in a consistent, methodologically sound way.
The core question this study confronts is whether discrepancies in country rankings across these indices arise from differences in how they define and measure digital competitiveness. It also asks whether these rankings, despite differing methodologies, tend to agree over time and whether those rankings for individual countries remain stable or fluctuate in response to external factors such as policy shifts, investment, or technological advancement.
The researchers deployed two key statistical methods - Friedman’s ANOVA and Kendall’s coefficient of concordance. The former assesses whether rankings differ significantly across indices and over time, while the latter tests for agreement among rankings, revealing whether these measures can be considered mutually reinforcing or fundamentally divergent. The findings are both revealing and sobering.
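The two tests can be sketched in a few lines of Python. The data below is invented for illustration (the paper's actual country scores are not reproduced here): 29 hypothetical countries scored by 4 hypothetical indices that share an underlying performance signal plus index-specific noise. `scipy.stats.friedmanchisquare` runs Friedman's test, and Kendall's W is computed from its standard formula, W = 12S / (m²(n³ − n)), where m is the number of indices and n the number of countries.

```python
# Illustrative sketch of the study's two tests on synthetic data.
# All scores below are invented; they are not the paper's data.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n_countries, n_indices = 29, 4

# Hypothetical scores: one row per country, one column per index.
# A shared "true performance" signal plus index-specific noise.
base = rng.normal(size=(n_countries, 1))
noise = 0.3 * rng.normal(size=(n_countries, n_indices))
scores = base + noise

# Friedman's ANOVA: do the indices assign significantly different
# ranks to the same countries? Each argument is one index's scores.
stat, p = friedmanchisquare(*scores.T)

# Kendall's coefficient of concordance W: agreement among the indices.
# Rank countries within each index (1 = lowest score), then apply
# W = 12 * S / (m^2 * (n^3 - n)).
ranks = scores.argsort(axis=0).argsort(axis=0) + 1
rank_sums = ranks.sum(axis=1)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
m, n = n_indices, n_countries
W = 12 * S / (m**2 * (n**3 - n))

print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}, Kendall W = {W:.2f}")
```

Because the synthetic indices share a common signal, W comes out close to 1, mirroring the study's finding that high concordance can coexist with methodological differences between indices.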
On one hand, the study confirms significant methodological differences between the indices, leading to varying rankings. This supports the concern that a country may rank high on one index but far lower on another - not necessarily because of actual performance gaps, but because of what is being measured and how. This is especially evident in the DQLI, which focuses heavily on internet access and user experience and showed the greatest variation in rankings compared to more economically focused indices like the WDCR and AIRI.
On the other hand, the study also found strong correlations across most indices, indicating a reassuring level of agreement on which countries are leading or lagging in digital competitiveness, lending credibility to the use of these rankings as policy benchmarks. Countries like Finland, Denmark, and the Netherlands consistently emerged as leaders, while others, including Romania, Greece, and Bulgaria, frequently occupied the lower ranks. These patterns remained stable across time and measurement frameworks, underscoring the persistence of digital divides within Europe.
Do rankings remain stable over time, or are they susceptible to short-term fluctuations?
The statistical results showed that while rankings do shift, particularly in response to AI investments or policy changes, the top and bottom positions tend to hold steady. High values of Kendall’s coefficient indicate that although methodologies differ, the overall agreement on country performance remains strong across years. The WDCR, for example, showed remarkable consistency in its annual rankings between 2019 and 2024.
The implications of these findings are far-reaching. At a strategic level, they suggest that governments and institutions cannot rely on a single index to shape digital policy or evaluate readiness. Instead, a multidimensional approach that triangulates between frameworks like the NRI, AIRI, and DQLI is necessary to capture the full scope of digital transformation. The study recommends treating indices as complementary rather than competitive, each offering insight into different but overlapping dimensions of the digital ecosystem.
The authors also emphasize the necessity for a comprehensive and harmonized approach to evaluating digital readiness, especially as artificial intelligence becomes increasingly embedded in public governance, infrastructure, and economic life. The AIRI, which focuses on government capacity to implement AI strategies, revealed strong alignment with the broader digital performance of leading nations. This suggests that AI maturity is a key driver, not just a byproduct, of national digital competitiveness.
Regional gaps and updated metrics for AI-era sustainability
The study also highlights significant disparities. Countries in Southern and Eastern Europe, while making notable strides in certain areas like e-government or mobile access, continue to lag in broader categories such as AI strategy development, digital infrastructure investment, and cybersecurity preparedness. These gaps raise concerns about regional fragmentation in Europe’s digital economy and highlight the risks of uneven development in the AI age.
This analysis leads to another pressing question - are digital indices capturing what really matters for long-term digital sustainability, or are they anchored in indicators that may become obsolete as technologies evolve? The study notes that updates in indicator composition, as seen in the NRI’s inclusion of AI and cloud adoption metrics, are critical for keeping pace with transformation. But without methodological transparency and periodic recalibration, the reliability of these indices could degrade.
Simply put, digital competitiveness is too complex to be captured by any single measure. While rankings offer visibility, they must be interpreted in context and supplemented with qualitative analysis. For policymakers, this means shifting from index-driven decisions to evidence-informed strategies that address digital inclusion, AI ethics, infrastructure gaps, and policy coherence, the study concludes.
First published in: Devdiscourse

