AI is no longer a tool: it is an actor in education and research
SURF's Tech Trends Report 2026 is based on international trend studies, enriched with insights from experts across the SURF cooperative.
Published on December 24, 2025

Bart, co-founder of Media52 and Professor of Journalism, oversees IO+, events, and Laio. A journalist at heart, he still writes as many stories as he can.
Artificial intelligence has crossed a threshold. What began as an experimental productivity aid has rapidly evolved into an infrastructural force reshaping how education and research function at their core. According to SURF’s Tech Trends 2026 report, AI is no longer something institutions can “try out” at the margins. It is becoming embedded in workflows, curricula, research practices, and governance structures, often faster than institutions can meaningfully respond.
The speed of adoption tells the story. Since the public release of ChatGPT in late 2022, large language models (LLMs) have reached hundreds of millions of users worldwide. AI is no longer confined to specialist labs; it is now built into everyday tools such as word processors, messaging apps and learning platforms. For students and researchers, AI has become ambient: always available, increasingly invisible, and hard to opt out of.
From monopoly access to a fragmented AI landscape
One of the most significant shifts identified by SURF is the diversification of access to AI models. Institutions are no longer limited to a small number of commercial cloud-based systems. Alongside hyperscalers, open-source initiatives and European alternatives are emerging, while lightweight models can now run locally on personal devices.
This diversification brings opportunities: more autonomy, more control over data, and more room for experimentation. But it also introduces complexity. Institutions must now navigate a fragmented ecosystem of models, licenses, infrastructure requirements and legal frameworks. The question is no longer whether to use AI, but which AI, under whose conditions, and for what purpose.
For education, this fragmentation risks widening existing inequalities. Students with better access to tools, skills or institutional support may gain a structural advantage. At the same time, AI is increasingly used for cognitive offloading (summarising texts, generating drafts, or explaining concepts), which raises fundamental questions about assessment, learning outcomes and the role of human judgment.
Responsible AI: values under geopolitical pressure
Europe has positioned itself as a global leader in responsible AI, notably through the EU AI Act. Yet SURF observes a growing tension between ethical ambitions and geopolitical realities. In an international race for AI supremacy, large technology companies are scaling back internal ethics programs. At the same time, governments increasingly frame AI as a strategic asset tied to economic power and national security.
This shift matters deeply for education and research. Universities depend on transparency, reproducibility and trust, values that can be undermined when AI systems become opaque, proprietary or driven primarily by commercial incentives. While many organisations express commitment to responsible AI, SURF notes a persistent gap between intention and implementation.
For students, this ambiguity is particularly problematic. They are expected to work with AI systems whose underlying values, training data and limitations are often unclear. As AI ethicist Virginia Dignum warns in the report, students must learn to operate in a professional culture that is itself increasingly ambivalent about responsibility.
Hardware hits back: AI meets physical limits
Another key insight from the report is the co-evolution of AI and hardware. While software innovation has accelerated, hardware has become a bottleneck. Training and running large models demands enormous computational power, energy and raw materials, all of which are increasingly scarce.
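To get a feel for the scale involved, a common rule of thumb from the scaling-laws literature puts training compute at roughly six FLOPs per model parameter per training token. The short sketch below applies that rule with illustrative figures; the model size, token count and hardware numbers are assumptions chosen for the example, not figures from the SURF report.

```python
# Back-of-envelope estimate of training compute and energy.
# Rule of thumb: training FLOPs ~ 6 * parameters * training tokens.
# All concrete figures below are illustrative assumptions, not SURF data.

params = 70e9          # a 70-billion-parameter model (assumption)
tokens = 2e12          # 2 trillion training tokens (assumption)
flops = 6 * params * tokens

gpu_flops = 300e12     # ~300 TFLOP/s sustained per accelerator (assumption)
gpu_watts = 700        # power draw per accelerator in watts (assumption)

gpu_seconds = flops / gpu_flops
energy_kwh = gpu_seconds * gpu_watts / 3.6e6  # joules -> kilowatt-hours

print(f"Training compute: {flops:.2e} FLOPs")
print(f"Single-GPU time:  {gpu_seconds / 86400 / 365:.1f} GPU-years")
print(f"Energy at 700 W:  {energy_kwh:.2e} kWh")
```

Even with these rough numbers, a single training run works out to decades of single-GPU time and hundreds of megawatt-hours, which is why such runs are spread across thousands of accelerators and why energy has become a first-order constraint.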
This has triggered a shift toward specialised chips, energy-efficient architectures and “hardware-aware” AI models. For education and research, this could be a turning point. Personal AI workstations and local compute may reduce dependence on hyperscale cloud infrastructure, improving privacy and autonomy. But the concentration of hardware production among a handful of global suppliers also introduces new vulnerabilities.
In this context, infrastructure is no longer a neutral technical choice. It becomes a strategic decision with implications for sustainability, sovereignty and long-term resilience.
From automation to collaboration
Perhaps the most profound change is conceptual: AI is moving from automation to collaboration. Rather than simply replacing tasks, AI increasingly acts as a partner: drafting code, generating hypotheses, providing feedback or supporting decision-making. Multi-agent systems and so-called “agentic AI” push this even further, enabling systems to act autonomously with minimal human oversight.
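What "agentic" means in practice can be made concrete with a short sketch. The loop below is a minimal illustration of the pattern most agent systems share, not any particular framework's API: the model proposes an action, a tool executes it, and the observation is fed back until the model declares the task done. Both `call_llm` and the tools are toy placeholders.

```python
# Minimal sketch of an agentic loop: a model proposes an action, a tool
# executes it, and the observation is fed back into the context.
# `call_llm` and both tools are toy placeholders, not a real framework's API.

def call_llm(history: list[str]) -> str:
    """Toy stand-in for a chat-model call: search once, then finish."""
    if not any(line.startswith("search:") for line in history):
        return "search: small language models in education"
    return "final: (toy answer based on the search result)"

TOOLS = {
    "search": lambda query: f"(toy) top result for {query!r}",
    "summarise": lambda text: f"(toy) summary of {len(text)} characters",
}

def run_agent(task: str, max_steps: int = 5) -> list[str]:
    history = [f"task: {task}"]
    for _ in range(max_steps):           # hard step limit keeps humans in control
        action = call_llm(history)
        if action.startswith("final:"):  # the model signals it is done
            history.append(action)
            break
        name, _, arg = action.partition(":")
        tool = TOOLS.get(name.strip())
        observation = tool(arg.strip()) if tool else f"unknown tool: {name!r}"
        history.append(f"{action} -> {observation}")
    return history

print("\n".join(run_agent("survey the use of SLMs in education")))
```

The step limit in the loop is the whole governance question in miniature: remove it, and the system runs until it decides to stop, which is precisely the loss of oversight the report warns about.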
In education, this could reduce workloads for teachers by automating grading, feedback and lesson planning. In research, AI agents may assist with literature reviews, data analysis and experimental design. Yet these gains come with risks: loss of oversight, blurred authorship, and growing dependence on systems whose reasoning is difficult to explain.
SURF is clear that efficiency must not become the dominant metric. Education is not a factory, and research is not a pipeline. AI should support meaningful learning and inquiry, not hollow them out.

Smaller models, bigger choices
The final AI trend highlighted by SURF may be the most hopeful. Alongside ever-larger models, small language models (SLMs) are gaining traction. These models require far less energy, can run locally, and are often better suited for specific tasks.
For institutions concerned with privacy, sustainability and cost, SLMs and edge AI offer an alternative path. They allow institutions to keep data on their own systems while reducing environmental impact and vendor lock-in. But they also demand new skills: model optimisation, hardware integration and strategic planning.
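To illustrate how low the barrier to local inference has become, the sketch below loads a small open model on a personal machine with the Hugging Face transformers library. The model name is only an example; any small instruction-tuned model that fits the available hardware and licence can be substituted.

```python
# Sketch: run a small open language model entirely on local hardware with the
# Hugging Face transformers library. The model name is an example; substitute
# any small instruction-tuned model your hardware and its licence allow.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # example ~0.5B-parameter model
)  # runs on CPU by default; nothing leaves the machine after the download

prompt = "Explain, in two sentences, what a small language model is."
result = generator(prompt, max_new_tokens=80, do_sample=False)
print(result[0]["generated_text"])
```

A model of this size typically runs acceptably on an ordinary laptop CPU, which is exactly the shift the report points to: local inference is no longer reserved for specialist hardware.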
A collective responsibility
Across all these trends, one conclusion stands out: no single institution can manage this transformation alone. AI reshapes not just technology stacks, but governance, pedagogy and public values. SURF’s role, as outlined in the report, is to enable collective learning, shared infrastructure and informed decision-making.
The real question for education and research is not whether AI will shape the future; it already does. The question is whether institutions will, in turn, shape AI, guided by public values rather than technological momentum alone.
