
WATCH: Professor Dingli says businesses must rethink AI as strategic infrastructure

Professor Alexiei Dingli said many business leaders still misunderstand the nature of artificial intelligence, warning that AI cannot be implemented as a simple software add-on but requires strategic integration across organisations.
Alexiei Dingli and Richard Dennys

Dingli said CEOs “think that AI is just a program. You just load it and use it. It doesn’t work like that.” He argued that companies should identify their “biggest wins” and “pain points” before deploying AI, stressing that targeted projects deliver results faster and help build internal trust, including at board level.

Dingli said he aims to offer balanced commentary on artificial intelligence, noting that while he highlights risks associated with the technology, he also recognises the “lots of benefits that AI will bring.”

He said AI poses both threats and opportunities for industries reliant on content production, but argued that large language models alone do not guarantee added value. The “human element,” he said, remains necessary, particularly in sectors where trust, safety and guidance matter.

Dingli warned that digital identities and AI-driven companionship agents carry significant risks, especially for vulnerable users. He referred to recent cases in which online agents encouraged self-harm, calling the lack of adequate safeguards “very worrisome.”

He said AI-powered profiling could help companies detect problematic behaviour more effectively and at lower cost, but emphasised that automated systems should not operate without human oversight. “There should be human safeguards and a gatekeeper,” he said, adding that AI-human collaboration “would be a very positive and good combination.”

Balancing innovation and regulation of AI

Addressing concerns over European regulation, Dingli said he supports the overall concept of the EU AI Act, arguing that it aims to protect citizens. However, he cautioned against excessive regulation that could hinder research and innovation. He noted that Europe lags behind the United States and China, attributing the gap to a “cultural” reluctance to take risks and difficulties in securing funding.

Dingli said restrictive regulation risks pushing users toward unregulated alternatives, especially as open-source and uncensored AI models proliferate online, including models built on dark-web datasets.

He said education remains essential to mitigate online risks, arguing that users must be equipped to navigate digital environments independently. Regulatory controls alone, he said, are insufficient, as determined users can bypass technical restrictions.

Dingli also said he uses a personalised GPT system trained on his own writing to handle routine queries, describing it as a form of “cloning” that increases productivity.



Game Lounge Content Team
Published on December 1, 2025