Goodbye crystal-ball mentality! Replacing forecasting noise with key questions (that might defy an easy answer)

By Ricardo Mendes, CEO, Vawlt.


It’s that time of the year again: IT decision makers, industry analysts and self-proclaimed ‘thought leaders’ scramble to publish their forecasts and predictions, the louder the better. But do technological developments really conform so conveniently to a calendar? How realistic are such forecasts? Bold claims about the future of technology often turn into nothing more than pipe dreams. History shows that predicting market or technical shifts is rarely accurate and frequently overblown. Instead of straining to ‘call it right,’ it’s more productive to admit that we don’t have all the answers, and that’s okay. It’s an approach that encourages richer debates, deeper insights, and more honest engagement. With that in mind, in this piece I’m trying to share some personal questions, thoughts and concerns on the subject of the moment: AI.

 

AI - a technology surrounded by uncertainty

We are all witnessing the ongoing AI frenzy. Regardless of what people individually think of the benefits and risks of using AI, we can deduce one thing for sure: AI is here to stay, and we are in an AI hype cycle that won’t calm down anytime soon. This led me to consider questions around AI advances, outcomes and impacts. I don't mind admitting that I do not have all the answers to these challenges. I’m not an expert on the matter, so please take my opinion for what it’s worth. My aim is to give a voice to the questions about AI we are not asking (enough).

 

Where are we right now? Taking stock, looking ahead

First, my thoughts turn to the true impact of this technology’s current status quo. How big have the AI-supported advances been? We know that AI has helped to boost many areas, sectors and use cases: content creation, software development, support services, and plenty more examples we all recognise. Impressive, you might think, and you would be correct. I would argue that the current capabilities of AI technology have had a transformative impact comparable to the arrival of the Internet.

The million-dollar questions

The big question now is: what’s next? I can see a division of thought into two camps. One says that AGI (Artificial General Intelligence: simply put, AI as smart as humans) and ASI (Artificial Super Intelligence: AI smarter than humans) are around the corner; the other states that we are nowhere near achieving either.

Given the advances we have experienced in the last couple of years, I don’t doubt we will see more developments in the coming years, and I don’t dare to bet on timelines. However, there are questions that always come up for me whenever I delve deeper into the subject:

Status Quo: Will the existing technology and techniques behind current AI solutions be enough for the next generations of AI? At the end of the day, the technological conversations around AI centre on LLMs, training models and so on, but all these processes run on top of infrastructure: AI requires masses of (dedicated) processing units, storage and network. Will the existing state of all these technologies be enough for the next stages of AI? And what about the AI technology itself? Will the existing models and processes, with their underlying hardware, their roots in machine-learning algorithms, their data flows and so on, be enough for future needs? In fact, will they even be suitable for that purpose? If the answer to any of these is “no”, then I’d say we are not that close to having these next-level AIs. On the other hand, we have the existing AI to help accelerate the creation of any new technology, right?

ROI: There is already debate around the costs of running the current version of AI with full operational benefit. Further questions are emerging about whether, under present capacity and workloads, there is room to scale AI development to the next stages needed. I think we need to sit down and honestly ask whether we are obliged to find more efficient ways of, for instance, processing data in order to make AI economically sustainable. It’s a complex consideration with many sides to it, but I don’t hear enough interrogation of these challenges.

Ecology: There is already extensive and serious discussion around the impact of AI on the environment. Questions around disposal and electronic waste, constraints on raw materials for components, rising pressure on energy grids and, last but not least, the excessive energy consumption of AI all need to be addressed. The challenge now is to fully understand how much energy AI will consume in the future, and the forecasts are frightening.

Data privacy, cyber sovereignty, data manipulation: A real concern of mine is that, in order to reap the benefits of evolving AI technologies, organisations may want to relax (or, in fact, are already relaxing) their approach to data privacy, data control and operational sovereignty. Because of the high costs and complexity of running such solutions, they are likely to be made available centrally by a small number of entities that have the infrastructure and the knowledge to do so. Correspondingly, one must consider the costs and threats organisations may incur, not only in providing sensitive data to train models, but also in becoming operationally dependent on those entities. All this opens the door to a level of control over the world and over people’s lives that is clearly undesirable, to say the least. We already have a taste of this in the dependency entire companies have created by relying on single cloud providers. Single-cloud concentration is already a grave concern, and the new level of reliance driven by AI-powered tools and solutions could be even worse. Moreover, these entities (and/or their technologies) could manipulate the data, and thereby define the “truth”, at a level that would be almost impossible to roll back from. Will we, as citizens, accept this as inevitable? Or will we realise privacy-preserving models operating over encrypted data (the dream)? Will this even be possible (a wild evolution in the space of privacy-preserving computation and homomorphic encryption)? These are complex questions, for sure, but I believe they matter.
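To make the “computing on data without seeing it” dream a little more concrete, here is a minimal sketch of one privacy-preserving-computation building block: additive secret sharing, a primitive from multi-party computation (related in spirit to, though simpler than, homomorphic encryption). All names and parameters here are illustrative, not taken from any particular product or library. Each party holds only a random-looking share of a secret, yet the parties can jointly compute a sum of secrets without any of them ever seeing the inputs:

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(value, n=3):
    """Split `value` into n additive shares mod P.

    Any n-1 shares are uniformly random and reveal nothing
    about the secret; only all n together reconstruct it.
    """
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the secret."""
    return sum(shares) % P

# Two secrets, each split across three "parties".
a, b = 1234, 5678
shares_a, shares_b = share(a), share(b)

# Each party adds its own shares locally, never seeing a or b.
sum_shares = [(x + y) % P for x, y in zip(shares_a, shares_b)]

assert reconstruct(sum_shares) == (a + b) % P  # 6912
```

The point of the sketch is only this: computation over data that remains hidden from the computing parties is already mathematically possible for simple operations; the open question the paragraph above raises is whether it can ever be made practical at the scale and complexity of modern AI workloads.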

 

Let’s not forget: AI is supposed to make life easier and better

It is inevitable that AI is the way forward - this train won’t stop, and we are on this journey together. The fact that AI is developing at a blistering speed is both a blessing and a curse. As with any technology, AI places a great responsibility on its human creators. Asking critical questions at every step of the way ensures that we stay humble and realistic in the face of the complexity and uncertainty that surrounds AI. 

One thing is clear: making like the ostrich and sticking our heads in the sand won’t get us anywhere. Joining the giant AI hype machine without asking critical questions won’t be helpful in the long run either. What we do need are healthy debates around these challenges, so that we stay in the driving seat on our journey with rapidly developing AI technology, and we have to have these discussions now.
