WATCH: Agentic AI – from reactive to proactive
AI agents are computer programs that also possess special properties such as autonomy, meaning they can decide what to do on their own, stated Professor Alexiei Dingli while speaking with Game Lounge CEO Richard Dennys.
Properties of AI agents
With that in mind, Dingli continued that such AI agents have other properties which are discussed less often, such as the ability to travel between different servers, with the agent deciding for itself when to move. He added that agents can also communicate with humans and with other agents, and possess a human-like ability to solve problems.
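A minimal, purely illustrative sketch in Python of the four properties Dingli lists (autonomy, mobility between servers, communication, problem solving); the class and method names are hypothetical and not drawn from the interview.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy agent illustrating the properties described above (hypothetical API)."""
    name: str
    host: str                                  # server the agent currently runs on
    inbox: list = field(default_factory=list)  # messages from humans or other agents

    def decide(self, observation: str) -> str:
        """Autonomy: choose an action without being told what to do."""
        return "investigate" if "anomaly" in observation else "idle"

    def migrate(self, new_host: str) -> None:
        """Mobility: move itself to a different server."""
        print(f"{self.name}: moving from {self.host} to {new_host}")
        self.host = new_host

    def send(self, other: "Agent", message: str) -> None:
        """Communication: talk to other agents (or relay to humans)."""
        other.inbox.append((self.name, message))

    def solve(self, problem: list[int]) -> int:
        """Problem solving: here, a deliberately trivial example task."""
        return max(problem)


# Usage sketch
a, b = Agent("scout", "server-1"), Agent("analyst", "server-2")
a.migrate("server-3")
a.send(b, "found an anomaly in the logs")
print(a.decide("anomaly detected"), b.inbox, a.solve([3, 7, 2]))
```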
Dennys asked Dingli whether he had seen that ability put into action in a real-world setting, describing it as “one of those things where the concept sounds amazing”, though he has yet to see “that kind of killer app”.
Professor Dingli replied that there is a lot of noise in the space at the moment, and so he thinks it will be a few years before the proper infrastructure for agents emerges. Having said that, he commented that he used to work on agents over twenty years ago, “so the concepts are there, it’s just that the technology was not mature enough when we were working on them”. He remarked that if somebody proposes agents at this stage, he would advise people to “stay a little bit away from them”, as he thinks the technology still needs a few more years to mature.
Dennys said that, with regard to the gaming industry, a potential use case could be AI agents playing games themselves and then reporting back on the experience, providing feedback on things such as whether they were paid out or whether the game is any good.
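As a rough sketch of what such a game-testing agent might report, the snippet below simulates an agent playing many rounds and returning structured feedback; the report schema, hit rate, multipliers and RTP figures are entirely made up for illustration.

```python
import random
from dataclasses import dataclass


@dataclass
class PlaySessionReport:
    """Structured feedback an agent could return after a test session (hypothetical schema)."""
    rounds_played: int
    total_wagered: float
    total_paid_out: float
    payout_ok: bool          # did observed payouts roughly match expectations?
    notes: str


def play_and_report(rounds: int = 1000, bet: float = 1.0, expected_rtp: float = 0.96) -> PlaySessionReport:
    """Simulate an agent playing a slot-like game and reporting back on the experience."""
    paid_out = 0.0
    for _ in range(rounds):
        # Stand-in for actually driving the game client.
        if random.random() < 0.3:          # 30% hit rate (made-up number)
            paid_out += bet * 3.2          # made-up win multiplier
    wagered = rounds * bet
    observed_rtp = paid_out / wagered
    return PlaySessionReport(
        rounds_played=rounds,
        total_wagered=wagered,
        total_paid_out=paid_out,
        payout_ok=abs(observed_rtp - expected_rtp) < 0.05,
        notes=f"observed RTP {observed_rtp:.2%} vs expected {expected_rtp:.0%}",
    )


print(play_and_report())
```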
Dingli advised caution, saying that most agents today rely on a large language model, “and the problem with that is that large language models have this problem known as non-determinism, so you can never determine what their output will be”.
He said that the model might work perfectly 99% of the time, but there is always the danger of that remaining 1%, which can lead to what is commonly referred to as hallucinations.
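To give a sense of the non-determinism Dingli refers to, the toy example below samples tokens with temperature, the way language models typically generate text; the vocabulary and logits are invented, but the point is that even a strongly preferred answer is not guaranteed on every run.

```python
import math
import random


def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Temperature sampling over a toy vocabulary, mimicking how LLMs pick tokens."""
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]


# The model strongly prefers "paris", but sampling means it is not guaranteed.
logits = {"paris": 5.0, "london": 1.0, "rome": 0.5}
runs = [sample_next_token(logits, temperature=1.0) for _ in range(10_000)]
print({tok: runs.count(tok) for tok in logits})   # mostly "paris", occasionally not
```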
The professor suggested a “hybrid kind of approach” that combines more traditional AI with large language models. He thinks this would be the winning combination at this stage, but emphasised that caution should still be exercised “because a lot of people are now jumping on the agentic AI bandwagon”.
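One way to read that hybrid idea is to let the language model handle flexible input while deterministic, rule-based code validates or overrides its output. The sketch below is an assumption about what such a combination could look like; llm_extract_amount is a stand-in for a real model call, not an actual API, and the sanity-check bounds are made up.

```python
import re
from typing import Optional


def llm_extract_amount(text: str) -> str:
    """Stand-in for a large language model call; its output cannot be fully trusted."""
    return "around 42 euro, I think"           # may or may not be well-formed


def rule_based_extract_amount(text: str) -> Optional[float]:
    """Traditional, deterministic extraction: a plain regex with predictable behaviour."""
    match = re.search(r"(\d+(?:\.\d+)?)", text)
    return float(match.group(1)) if match else None


def hybrid_extract_amount(text: str) -> Optional[float]:
    """Use the LLM for flexibility, but validate it with deterministic rules."""
    candidate = rule_based_extract_amount(llm_extract_amount(text))
    if candidate is not None and 0 <= candidate <= 1_000_000:   # sanity check (made-up bounds)
        return candidate
    return rule_based_extract_amount(text)                      # deterministic fallback


print(hybrid_extract_amount("The payout was 42.50 euro last round."))
```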
Need for upskilling and reskilling
Dingli stated that he prefers not to make predictions about the future of AI at this stage, as things are moving very fast and the world may be a very different place within five to ten years.
He remarked that it is already understood that there will be “a massive problem with skills” in five years’ time. He said that the World Economic Forum has estimated that AI will create around 170 million new jobs requiring a new set of skills, meaning that workers will need to upskill or reskill for them.
However, Dingli remarked that he would always suggest following one’s passion when choosing what to study or which field to enter, “irrespective of whatever happens”. He commented that the most important skill someone can possess is adaptability, as people will likely need to retrain and get to grips with new technology as the world changes over their lifetime.