
ChatGPT not untouchable: Professor Dingli on state of the AI market

Professor Alexiei Dingli spoke with Game Lounge Media about the state of the AI market and his thoughts on how AI should be regulated.

The rise of Artificial Intelligence (AI), both across global industry and within the social zeitgeist, has been something to behold over the past few years. Although AI, as both fiction and working technology, is not a recent invention, it has become more relevant than ever, with some of the world’s most valuable tech companies heavily involved in AI development.

Game Lounge Media spoke with Professor Alexiei Dingli, an academic with a deep understanding of AI. Dingli is a professor of artificial intelligence at the University of Malta’s Faculty of Information and Communication Technology, and recently wrote an article examining OpenAI and ChatGPT’s place in the current AI market.

ChatGPT’s position in the market

Game Lounge Media asked Dingli to detail what signs had convinced him that ChatGPT’s position in the AI market has weakened, and which AI model he would currently consider dominant.

He replied that when he talks about ChatGPT “weakening”, he means that it no longer feels untouchable. “Competition has improved, alternatives have become credible, and users are more willing to switch tools depending on the task. That said, if we look at overall usage and public recognition, ChatGPT still holds a leading position,” he commented. What has changed, he continued, is not dominance but exclusivity: the market has moved from “one obvious choice” to “many strong options”, which he described as a healthy sign for innovation.

Continuing on that point, Dingli was asked whether he believes the rise in popularity of AI models other than ChatGPT has been driven more by improvements in those models or by user frustration with ChatGPT’s limitations.

“It is a mix of both,” he responded. “Alternative models have genuinely improved: offering faster responses, fewer restrictions, or better performance in specific tasks. At the same time, some users feel constrained by limits, pricing, or safety filters in ChatGPT, and are naturally curious to explore other options.”

Dingli said that once people realise they do not need to rely on just one AI, they start choosing tools the way they choose apps: based on convenience, preference, and trust. He remarked that this shift towards a multi-tool mindset is likely to define the next phase of AI adoption.

With a number of widely popular AI models now available, such as Gemini, Grok, and DeepSeek alongside ChatGPT, Game Lounge Media asked Dingli which factors he thinks set models apart the most, and what he believes makes one AI model more effective than another.

He said that what sets AI models apart is not just intelligence, but usefulness. An effective model, he stated, gives clear answers, admits uncertainty, works quickly, integrates well with other tools, and fits naturally into how people already work. He further remarked that some models excel at reasoning, while others are stronger in speed, coding, creativity, or real-time information.

“For most users, the ‘best’ model is the one that feels reliable, easy to use, and helpful without constant friction. As competition grows, we are moving away from one dominant model towards a landscape where different tools serve different needs.”

AI, reliability, bias, and regulation

That said, it has become commonplace for people to turn to AI models as a replacement for Google and other search engines. With that in mind, Dingli was asked whether AI is generally reliable and unbiased enough for this use case.

“I see AI increasingly replacing how people search rather than what they should trust,” he replied. He continued that AI systems are good at summarising information and explaining complex topics in plain language, “however, they are not truth engines”.

He said that unlike Google, “which points you to multiple sources,” AI often gives the user a single confident answer to their query, “and that answer can sometimes be wrong or incomplete”. He added that bias is also an issue, as AI learns from human-created data, which ultimately reflects human opinions, blind spots, and power structures.

“So while AI is excellent as a first step, guide, or assistant, it is not yet reliable enough to fully replace the need to verify information, especially when the topic matters.”

Delving further into that notion, Dingli has previously spoken about the dangers of misinformation through the use of AI, particularly with regard to video and image generation. The government of the United Kingdom recently stated that it will bring into force a law making it illegal to create non-consensual intimate images. Game Lounge Media asked Dingli whether he thinks such legislation would be a move in the right direction, and whether he believes it begins to address concerns about misinformation.

He responded that he believes such a step is both positive and necessary. He said that non-consensual explicit images cause real harm, whether emotional, reputational, or psychological, and that AI has made such abuse “easier and faster”. He continued that by criminalising the creation of such content, the law would recognise that harm begins long before something is shared widely.

With regard to the topic of misinformation, Dingli said that although introducing such laws would not solve misinformation as a whole, it would address one of the most damaging and personal forms of synthetic media abuse. “It sends a clear message that technological capability does not override human dignity.”

Dingli has also previously spoken about the potential issues posed by restrictive regulation, and how it may push users towards unregulated alternatives. Game Lounge Media asked him what sort of regulation he believes would be appropriate to address concerns about AI, and whether he thinks an educative approach could prove more effective.

“The key is to regulate harm, not curiosity,” he commented. Dingli said that, rather than banning tools outright, laws should focus on high-risk uses of AI, such as identity fraud, deepfake abuse, unsafe medical advice, or large-scale manipulation.

“Overly restrictive rules often push people towards unregulated systems that are harder to control. Education, on the other hand, builds long-term resilience. Teaching people how AI works, where it can fail, and how to verify outputs is one of the most effective safeguards we have,” he said. Continuing, he remarked that the best approach in practice is a combination of clear legal boundaries for serious misuse, and widespread AI literacy so that people can use the tools responsibly.

Professor Dingli concluded by saying that if there is one idea he would like to leave readers with, it is that AI is no longer about finding “the best model”, but rather about learning how to use these tools wisely, critically, and responsibly.



Game Lounge Content Team
Isaac Saliba
Journalist
Published on January 22, 2026