‘Energy efficiency is the unifying challenge across AI’
Doing more with less power: from the cloud to the tiniest sensors, that goal is the same across the entire AI technology landscape.
Published on November 14, 2025

© Zulfugar Karimov - Unsplash
Mauro swapped Sardinia for Eindhoven and has been an IO+ editor for 3 years. As a GREEN+ expert, he covers the energy transition with data-driven stories.
“Energy efficiency is the unifying challenge across AI—from the cloud to the tiniest sensors,” says Christian Bachmann, portfolio director for edge computing and wireless technology at imec. The researcher emphasizes that energy efficiency is the common thread across the AI landscape. “Whether it’s a massive data center or a microcontroller in a sensor, the goal is the same: do more with less power,” he continues.
Imec is one of the world’s leading nanoelectronics research institutions, headquartered in Belgium. Bachmann works at Eindhoven’s Holst Centre, a research hub created by imec and TNO, the Netherlands’ main organization for applied scientific research.
The portfolio director is one of the speakers at the upcoming Watt Matters in AI conference, taking place on November 26 in Eindhoven. In his talk, he will highlight imec’s work in this domain. As AI’s energy requirements soar, researchers are exploring new ways to curb its energy usage at every level.
Energy efficiency is the industry’s priority
To this end, imec is taking up the challenge: in close collaboration with its industry partners, it is developing innovations from the cloud to the edge. “On the semiconductor process technology level, we are, for instance, working on new, better memories. We are also giving our partners tools to engineer their AI architectures,” he explains.
During the last edition of its annual technology forum, ITF World, imec introduced a new software platform to explore future memories, processes, and connectivity options. This way, partners can better prepare for the future and, for instance, develop algorithms that are more effective on the hardware they will run on.
This approach resonates with the organization’s key strategy of co-designing hardware and algorithms. “We work closely with industrial partners to understand their workloads—what sensors they use, what AI models they need, and what constraints they face,” Bachmann explains. This collaborative approach ensures that new hardware is tailored to real-world applications.

Watt Matters in AI
Watt Matters in AI is a conference that aims to explore the potential of AI with significantly improved energy efficiency. In the run-up to the conference, IO+ publishes a series of articles that describe the current situation and potential solutions.
Getting inspired by the brain
One prominent strand of research aimed at making AI computing more energy-efficient is neuromorphic computing, an approach that draws on principles of computation from the human brain. The brain, after all, runs on roughly the energy of two LED lightbulbs. On top of this frugality, it can effectively power down the areas that are not required for a given process, a feature that AI chips so far lack.
Bachmann says he is fascinated by the human brain’s heterogeneity and complexity. “To some extent, we can now mimic some functionalities of the brain. These are, however, two very different architectures, and the principles can be applied in very different ways. Still, it is inspiring to look at what happens there and take inspiration for both hardware and software design,” says the researcher.

Christian Bachmann
Portfolio director for edge computing and wireless technology at imec
He joined imec in 2011 after obtaining his PhD in Electrical Engineering at Graz University of Technology. Within the research institute, he is working to address some of the R&D challenges for ultra-low power connectivity and compute.
Neuromorphic computing to overcome traditional processing bottlenecks
Traditional computing architectures, based on the Von Neumann model, separate memory and processing, leading to bottlenecks and inefficiencies. By contrast, neuromorphic computing is heterogeneous and integrates memory and computation, enabling event-based, low-power processing.
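To make that contrast concrete, here is a toy sketch (illustrative only, not any specific imec architecture) of what event-based processing buys: a clocked, dense pipeline touches every input sample on every tick, while an event-driven one does work only when a sensor value actually changes. Operation counts stand in as a rough proxy for energy.

```python
# Toy comparison: dense vs. event-driven processing of a sensor stream.
# Illustrative sketch only; "operations" serve as a crude energy proxy.

def dense_ops(stream):
    # A conventional clocked pipeline processes every sample,
    # whether or not anything changed.
    return len(stream)

def event_driven_ops(stream):
    # An event-based pipeline only does work when the input changes.
    ops, last = 0, None
    for sample in stream:
        if sample != last:
            ops += 1
            last = sample
    return ops

stream = [0, 0, 0, 1, 1, 0, 0, 0]  # a mostly static sensor reading
print(dense_ops(stream), event_driven_ops(stream))  # 8 vs. 3
```

For the sparse, slowly changing signals typical of edge sensors, the event-driven path performs a fraction of the work, which is the intuition behind neuromorphic designs.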
Bachmann highlights imec’s work on hybrid architectures, which combine convolutional neural networks (CNNs) with spiking neural networks (SNNs). “The brain is heterogeneous—different regions handle different tasks,” he notes. “We’re exploring how to replicate this heterogeneity in hardware, using the right architecture for the right workload.”
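The building block of the spiking side of such hybrids is the spiking neuron. A minimal sketch of a leaky integrate-and-fire neuron (a standard textbook model, not imec’s hardware design) shows the key property: it accumulates input over time and emits an output event only when a threshold is crossed, so downstream computation is triggered only by spikes.

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a
# spiking neural network. Textbook sketch; parameters are arbitrary.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return the spike train (0/1 per timestep) for an input sequence."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # fire an event...
            potential = 0.0    # ...and reset the membrane potential
        else:
            spikes.append(0)   # silent: no downstream work triggered
    return spikes

# The neuron stays silent until enough charge accumulates.
print(lif_neuron([0.6, 0.6, 0.0, 0.0, 1.2]))  # [0, 1, 0, 0, 1]
```

Because most timesteps produce no spike, an SNN accelerator can idle between events, which is where the energy savings over an always-on dense network come from.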
Multiple paths for energy efficiency
While neuromorphic computing is one of the most promising pathways to curb AI’s energy use, much can already be achieved with existing technology. “There is still a lot of room to boost the energy efficiency of data centers, leveraging already available hardware,” underlines imec’s researcher.
One example is the use of superconducting devices, which exploit quantum-mechanical effects at cryogenic temperatures. As a result, computations can run faster and with less energy. “With superconducting hardware, we can achieve up to a hundred times faster processing speed,” he explains.
With different solutions at their disposal, and the industry intent on higher efficiency, Bachmann believes the future of computing lies in heterogeneous architectures. “Simply put, there will be different AI accelerators or compute kernels within the same computing device. Each of them would run the processes for which it is best suited,” he clarifies.
According to the researcher, robotics will be a major driver of AI development. From applications widely available today, such as factory cobots, innovation will advance to power humanoid robots. The common denominator will stay the same: doing more with less power.
