AI perspectives for 2026
This past year has been twelve months of reading about AI, talking about AI, strategising about AI, experimenting with AI and observing AI in organisations.
It feels like there have been far more questions than answers, however, when it comes to assessing AI’s effects. There is a palpable mood of generalised uncertainty, whether over the prospect of a bursting AI bubble, the timing of artificial general intelligence (AGI) or AI’s elusive productivity impacts.
Everyone can see that AI will be transformative, but for many business leaders, 2025 was, to quote Macbeth, a tale “of sound and fury, signifying nothing.”
With this as backdrop, following are my thoughts on what’s ahead in AI in 2026, what it means for industry – and what we need to do differently.
AI: The bad, the ugly and the good
First, AI will do more harm in the next year than it did in 2025.
That’s not to say it won’t also do more positive things. But it seems almost inevitable that as we rely more and more on AI, people and things are going to be negatively impacted with greater frequency.
Perhaps that’s the price of admission. But how prepared are we for this?
In the past year, as chatbots have multiplied and their use has been normalised, tragic stories have emerged in the media of vulnerable people who have been led astray by interactions with AI. People (and cats) have been hit by proliferating AI-powered driverless vehicles.
Of course, vulnerable people are manipulated by fellow humans too, and cars with drivers also cause accidents. But the sheer amount of AI in our lives ensures there will be more impacts on all of us – often positive but also increasingly negative.
In the next year, we’ll go from a steady tide of so-called “AI slop” – synthetic, formulaic and untrustworthy content produced with generative AI – to a raging torrent. AI-generated articles have now reportedly overtaken human-authored writing on the Internet. One AI app, Suno, produces a Spotify catalog’s worth of music every two weeks. Growing levels of AI slop decrease trust and increase misinformation risks – in society, but in organisations as well.
We’ve heard about deepfakes reaching the enterprise – the phony, AI-generated voice of the CFO that instructs an unwitting financial analyst to remit payment to a fraudulent account. But are companies ready for AI agents purpose-built for blackmail and extortion, or agentic malware?
I believe 2026 will be the year that business leaders begin to give AI governance the attention it deserves. I’m afraid, however, it may take some painful lessons before managing AI risk is properly prioritised.
It’s complicated. The truth is that lots of technologies can be used to make mischief or even sow destruction. Utilised the wrong way (or by the wrong people), an airplane is dangerous. So is a nuclear power plant, a turbine or a power saw. Many machines and systems we rely upon every day can do all sorts of damage.
But by creating standards and protocols for their use, we can incorporate these technologies in how we work and live, with a high level of predictability and safety. That is the task before us as consumers of AI and especially as leaders of industry.
In a recent open letter to legislators on AI policy, several eminent researchers commented that, “There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.” They make a valid point.
I’m no prophet of doom, however, and don’t share the views of the AI doomsayers. In fact, I see the potential for AI to do tons of good in the world.
Take, for example, AlphaFold, an AI system developed by DeepMind, which has transformed biology by turning protein folding – a problem that bedeviled scientists for decades – into a solvable challenge. By predicting the structures of hundreds of millions of proteins, it has accelerated drug discovery and opened whole new avenues in disease research.
This is AI at its best: not replacing human insight but augmenting it to unlock breakthroughs that were once unimaginable.
Accelerating adoption of polyfunctional and humanoid robots was one of my predictions for 2025. It’s impossible not to feel both excitement and a chill when interacting with this technology, as I recently did firsthand. I have been astounded by the leaps in “physical AI” over the past year, particularly the capacity to shift robots from one task or environment to another in a matter of moments, thanks to what are known as “robotic foundation models.” There are many promising applications of physical AI technology in plant-floor contexts, and observers forecast that industrial humanoid robot deployments will double by 2030.
Pragmatic AI
While examples like AlphaFold represent step-change innovations – even new capabilities that work in ways humans don’t fully understand – much of the focus in AI in 2026 will be squarely on the mundane. Stronger governance around AI will be married with a consistent focus on realising tangible business value – what I call “pragmatic AI.”
I see big things in the next year for industrial AI, and for a very pragmatic reason: Industrial AI is highly mature. We have decades of experience running AI-powered predictive maintenance, for example, and these solutions are the perennial, undisputed champs in delivering business value through AI. Now, we’re applying the lessons learned in domains like engineering and design, energy management, quality assurance, supply chain orchestration and more.
I foresee a portfolio approach where industrial firms experiment with frontier models like GPT-5 and Gemini Ultra, but also recognise the opportunity presented by smaller, domain-specific language models. Chinese open-source models like Qwen, DeepSeek, ERNIE and Wu Dao are designed for efficiency, using smart architectures and adaptive techniques to deliver strong performance with lower compute-intensity. Backed by major platforms such as Alibaba and Baidu, these models make advanced AI more cost-effective and practical for real-world industrial use outside the lab, and are among the most downloaded globally.
Generative AI interfaces are another innovation making waves in industry, with AI assistants making it possible for engineers to “talk” directly with the industrial infrastructure they supervise and control. I anticipate greater focus on user experiences like this that support natural language queries and chat.
In 2026, firms will pivot from purely cost-optimised sourcing of compute toward geographically secure infrastructure in an effort to exert strategic control over AI use. Deloitte projects that nearly US$100 billion will be spent this coming year on sovereign AI compute.
From Canada to the Middle East to the European Union and beyond, there is a clear trend toward what Gartner refers to as “geopatriation” to act as a strategic technology hedge against volatility. This represents a profound shift from the overriding emphasis on cost control that has been the main driver of IT spending in recent years – a decidedly pragmatic recognition that complexity and risk are here to stay.
AI’s environmental balance sheet
Scrutiny of AI’s energy and water consumption will continue to rise in 2026. This is entirely appropriate because, on its current path, AI and data centre growth is not sustainable. By 2030, data centres will use as much electricity as the entirety of Japan, a country of 123 million people. Already, we’re seeing data centres that drain the aquifers on which local communities’ drinking water depends. We can do better.
Importantly, not all AI is created equal. Generating a funny image for your friends with ChatGPT or Nano Banana may use more electrons than running machine learning algorithms on a production-line process in a factory. Many industrial AI applications use around the same amount of energy as a typical spreadsheet.
Leaner, less energy-intensive models of AI must be on industry’s to-do list. It’s why I’m proud of AVEVA’s work with the Green Software Foundation, and of the fact that our chief technologist, Arti Garg, chairs the IEEE’s Working Group P7100, which is working to develop a technical standard to measure the environmental impact of AI.
We need to optimise “watts per token,” and deploy AI aggressively to improve energy efficiency in data centres. AI can be an asset, not just a liability, on the global balance sheet of energy use.
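As a back-of-the-envelope illustration of the “watts per token” idea: power in watts is energy per second, so dividing by token throughput gives energy per token. The 700 W accelerator and 1,000 tokens-per-second figures below are made-up numbers for illustration, not measurements from any real system.

```python
def joules_per_token(power_watts: float, tokens_per_second: float) -> float:
    """Energy cost of one generated token.

    Watts are joules per second, so dividing sustained power draw
    by sustained token throughput yields joules per token.
    """
    return power_watts / tokens_per_second

# Hypothetical example: a 700 W accelerator serving 1,000 tokens/s
# spends 0.7 joules on each token. Halving power draw or doubling
# throughput halves that figure - the lever the metric makes visible.
print(joules_per_token(700.0, 1000.0))
```

The value of a single ratio like this is that it lets teams compare very different efficiency interventions – smaller models, better batching, cooler silicon – on one axis.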
In that same light, I also hope we don’t lose sight of where energy is really going. The International Energy Agency highlights that industry – heavy and non-heavy – will consume around four times the electricity of all data centres in the next five years, even with skyrocketing growth due to AI.
This points to where the biggest potential for decarbonisation really lies: it’s industry. And it’s why AI-powered industrial intelligence is so critical to the energy transition.
Work in progress
A question I receive constantly is about AI’s impact on jobs. Once again, the picture is complex.
Morgan Stanley says that AI could impact 90% of professions. A recent MIT study finds AI can already replace 11.7% of the American workforce. New data from outplacement firm Challenger, Gray & Christmas estimates that nearly 55,000 U.S. jobs were lost in 2025 directly as a result of AI. A range of companies have publicly cited AI as a rationale for workforce reductions.
Tech layoffs are nothing new. But whether AI’s impact on workforce reductions is causal – or merely correlated – is key. In some cases, job losses that were coming anyway (due, say, to corporate underperformance or a planned restructuring) are cynically being attributed to AI and its ostensible productivity gains.
In the industrial sector, however, I see a clear direction of travel. Sectors like manufacturing, energy, mining and cement face a well-documented talent gap. Here, AI is augmenting roles, not supplanting them.
AI is helping new employees accelerate their time to productivity. It’s helping industrial companies preserve the institutional knowledge of employees headed for retirement. It’s making everyone better at making decisions.
Agentic AI will autonomously handle complex, multi-step tasks that once required constant human input and attention. It will connect data sources and adapt processes in real time, freeing experts to focus on contextualising insights and making good decisions.
So far, from the conversations I have had with customers all around the world, I see scant evidence of AI taking jobs in the industrial sector. On the contrary, AI is empowering people to do more.
Who’s the boss?
But it’s safe to assume that AI will lead to some labor displacement. So it has been with every disruptive technology in history, from the printing press to the steam engine to the web browser. I’m not so naïve as to discount that.
But part of being a leader in the age of AI is deciding how to use it ethically.
Organisations are made up of people. Indeed, the word’s very definition is “a group of people who work together for a shared purpose.”
A leader who uses AI explicitly to get rid of people has abdicated their responsibility to the organisation they serve.
Leaders should never actively seek this, or consider it a badge of success, as some have. Instead, they should focus on efficiency, resilience and sustainability – the hallmarks of modern competitiveness – and on harnessing AI to make their employees more productive.
There are plenty of value-creating AI use cases (and they will become more valuable as we learn how to optimise them over time). But intentionally destroying jobs isn’t one of them.
Finally, I hope that amid all the talk of agentic AI, we also keep in mind the idea of agency – as in, we have it. Humanity has agency in AI. We get to decide the kind of future we want to create with this tool.
And that’s all AI is: a tool. Tools can be used to build, or they can be used to tear down.
It’s up to us.
Energy Connects includes information by a variety of sources, such as contributing experts, external journalists and comments from attendees of our events, which may contain personal opinion of others. All opinions expressed are solely the views of the author(s) and do not necessarily reflect the opinions of Energy Connects, dmg events, its parent company DMGT or any affiliates of the same.
