AI in Telecoms: Regulation, Risks and Rewards

6 December 2024

AI is transforming industries, including telecoms, but is the regulatory framework keeping up?

With a plethora of use cases – from customer service chatbots to network optimisation – AI is being integrated into the telecoms industry at pace. In 2024, the EU adopted the AI Act – a landmark for AI regulation. But can this regulatory model strike a balance between fostering innovation and ensuring security and trust?

AI use cases in telecoms

AI has been introduced into telecoms by operators and regulators globally, on both the consumer and network side. It can improve customer experience, facilitate smarter network management, and enable in-depth analysis of live data across a range of applications, as shown in the graph below.

The rise of these AI services and other GenAI tools is driving further growth in data traffic on telecom networks. However, we do not yet see AI in its current form as a driver of a new spike in mobile data traffic consumption – operators will watch this closely in the coming years.

Challenges created by AI in telecoms

Incorporating AI presents significant challenges, all of which any regulatory framework must address.

These challenges already have real regulatory implications for the tech industry. In 2024, Google had to rework its Gemini chatbot after it over-corrected for potential discriminatory bias, leading to inaccurate outputs. As AI matures, the impact on all industries, including telecoms, will grow – and the EU has worked to mitigate some of the associated risks and challenges with the introduction of the EU AI Act.

A first look at the EU AI Act

The EU AI Act defines a tier-based system in which the level of regulation reflects the criticality of the AI application. This ensures that regulatory efforts are proportional to the potential risks posed by AI applications, with the aim of balancing innovation and oversight. The Act splits the use of AI into the following four tiers:

  • Unacceptable Risk: Prohibits AI used for social scoring, manipulation, or mass surveillance due to its harmful impact
  • High Risk: Requires strict conformity assessments for AI systems used in sensitive areas like employment, education, public services, and law enforcement
  • Limited Risk: Mandates transparency obligations for AI systems such as chatbots, deepfakes, and emotion recognition
  • Minimal Risk: AI systems like simple chatbots, spam filters, and basic image/speech recognition face no specific obligations

High-risk applications, such as fraud detection and law enforcement, therefore face much stricter scrutiny, while the lower tiers carry correspondingly lighter obligations.

The tier-based system is particularly relevant for telecoms, where any AI interaction with providers' networks will fall under the higher-risk tiers, due to the associated impact on critical network infrastructure and the privacy of consumers' data. The numerous other applications of AI (including customer service features) will fall under the lower-risk tiers.
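
For illustration, the sketch below shows one way an operator might keep track of which tier its AI use cases could fall into and what obligations follow. The tier assignments (for example, treating network optimisation as high risk and spam filtering as minimal risk) are assumptions based on the descriptions above, not an official classification under the Act.

```python
# Illustrative sketch only: a hypothetical mapping of telecom AI use cases to
# EU AI Act risk tiers. Tier assignments are assumptions for discussion,
# not an official classification under the Act.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # strict conformity assessments
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations


# Hypothetical use-case-to-tier mapping for a telecom operator
TELECOM_USE_CASES = {
    "network_optimisation": RiskTier.HIGH,         # touches critical infrastructure
    "fraud_detection": RiskTier.HIGH,              # sensitive, law-enforcement-adjacent
    "customer_service_chatbot": RiskTier.LIMITED,  # must disclose AI use
    "spam_filtering": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "conformity assessment, risk management, human oversight",
    RiskTier.LIMITED: "transparency obligations (e.g. disclose AI use)",
    RiskTier.MINIMAL: "no specific obligations",
}


def describe(use_case: str) -> str:
    """Return a short description of the assumed regulatory burden for a use case."""
    tier = TELECOM_USE_CASES.get(use_case)
    if tier is None:
        return f"{use_case}: not classified in this sketch"
    return f"{use_case}: {tier.value} risk – {OBLIGATIONS[tier]}"


if __name__ == "__main__":
    for case in TELECOM_USE_CASES:
        print(describe(case))
```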

Beyond the tier-based approach, the EU AI Act provides a framework for disclosing the use of AI to users and regulators – for example, publishing key details about an AI system or informing customers that network optimisations are AI-driven.

Finally, the Act details how to ensure data quality and robustness for AI models – an important step for the continued use of AI in an industry where reliability and performance are key.
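
As a rough illustration of what such data-quality checks could look like in practice, the sketch below flags missing or implausible measurements in a hypothetical network-traffic training set. The field names, thresholds, and checks are assumptions chosen for the example, not requirements drawn from the Act.

```python
# Illustrative sketch of basic data-quality checks an operator might run on
# training data for a higher-risk AI system (e.g. network optimisation).
# Thresholds and field names are assumptions, not taken from the EU AI Act.
import math
from dataclasses import dataclass


@dataclass
class TrafficSample:
    cell_id: str
    throughput_mbps: float  # measured downlink throughput
    latency_ms: float       # measured round-trip latency


def quality_report(samples: list[TrafficSample]) -> dict:
    """Flag common data-quality problems: tiny samples, missing or implausible values."""
    issues = []
    if len(samples) < 100:  # assumed minimum sample size for training
        issues.append("dataset too small for reliable training")
    for s in samples:
        if math.isnan(s.throughput_mbps) or math.isnan(s.latency_ms):
            issues.append(f"missing measurement for cell {s.cell_id}")
        elif s.throughput_mbps < 0 or s.latency_ms < 0:
            issues.append(f"implausible negative value for cell {s.cell_id}")
    return {"n_samples": len(samples), "issues": issues, "passed": not issues}


if __name__ == "__main__":
    data = [
        TrafficSample("cell-001", 85.2, 21.0),
        TrafficSample("cell-002", float("nan"), 18.5),
    ]
    print(quality_report(data))
```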

Assessment of features

The EU AI Act gives rise to both positives and negatives, detailed below.

Potentially, some of the negatives could be offset by developing a bespoke framework for AI in telecoms – but is it really feasible to produce a bespoke framework for every industry using AI?

Conclusion

The AI Act marks a significant step towards addressing AI risks, but its implementation must strike a balance that preserves innovation, particularly in telecoms.

The Act is useful in numerous ways – the tier-based system creates a framework for balancing innovation and security, particularly where critical infrastructure is concerned, and its broad scope raises the potential for it to act as a global benchmark for AI regulation across all industries. However, its success depends on how it is implemented – ambiguity in how the tiers are defined creates risk.

Nevertheless, with careful refinements tailored to the telecoms industry – such as specifying which risk tier each of the applicable AI services belongs in – the EU AI Act could provide not only a global benchmark, but also a foundation for secure, innovative AI integration in this critical sector.

Authors

Marc Eschenburg, Partner
Cameron Currin, Manager
Callum Lerigo, Manager