Market Power and Digital Market Regulation: AI and Data

Comparative analysis: UK/EU, Japan, and China

In the UK, the Bank of England has raised concerns about the concentration and consolidation of market power in AI-related service markets within the financial sector.


Approximately 75% of UK financial services firms use AI. The top three providers control 73% of cloud services, 44% of AI models, and 33% of data services used by financial companies.


Three key issues are under scrutiny: vendor lock-in, operational resilience, and competition.


  1. Lock-In Risks: technical and commercial barriers hinder switching and entry, prompting scrutiny under new digital markets laws. The EU explicitly addresses vendor lock-in in its Data Act, ensuring companies can switch cloud providers, eventually at no additional cost.
  2. Operational Resilience: new rules bring "critical third parties," such as major cloud providers, under the oversight of the Financial Conduct Authority and the Prudential Regulation Authority, aiming to protect against cyber attacks and IT failures. These rules apply only after a provider receives a “critical” designation from the UK Treasury.
  3. Competition in Procurement: the UK Procurement Act aims to help smaller AI companies bid for public contracts, seeking to rebalance sector concentration.


In Japan, following the enactment of the Act on Promotion of Competition for Specified Smartphone Software, the Japan Fair Trade Commission published the final rules, which target the dominance of major providers, such as Apple and Google, in mobile software ecosystems.

Modeled on the EU Digital Markets Act, Japan’s regulation aims to foster competition, enhance security, and protect privacy in mobile software markets. Key provisions include rules to prevent designated companies from self-preferencing, restrictions on the use of users' data to prevent unfair competitive advantages, and a list of practices deemed anticompetitive (including anti-steering provisions).

Noncompliance can lead to investigations and fines of up to 20 percent of revenues for violations of ex-ante rules.


In China, new judicial guidelines on data-property protection instruct courts to handle AI-related disputes by avoiding overly restrictive interpretations of data ownership that might stifle innovation. The guidelines have two key objectives: to create predictability for AI developers and investors, and to strengthen China’s global tech position. Significantly, the new guidelines expand IP protection.


Each jurisdiction's approach reflects a nuanced variation on the same objective: preventing the abuse of market power.

The regulatory (UK/EU, Japan) and judicial/administrative (China) approaches converge in treating data as a fundamental economic asset whose concentration may disadvantage smaller firms or limit innovation. The notable development is China's clear embrace of IP development as an engine for high-tech growth and global competitiveness, suggesting a more innovation-centric approach than in the past.


At CILC, we provide expert legal guidance on antitrust, regulatory compliance, and intellectual property across the U.S., EU, and beyond. Our platform connects clients with qualified attorneys to navigate the complexities of multiple regulatory regimes, ensuring strategic advantage in a rapidly changing digital world.

The widening strategic gap between Chinese and U.S. approaches to Large Language Models

Open Source vs. Proprietary Models

Recent AI developments underscore a widening strategic gap between Chinese and U.S. approaches to advanced large language models (LLMs).


Notably:

  • Moonshot AI (Beijing) released Kimi K2, a trillion-parameter Mixture-of-Experts (MoE) model reportedly surpassing GPT-4.1 in coding and math benchmarks, and, crucially, made it open source.
  • OpenAI (San Francisco) postponed the release of its anticipated “open-weight” model, attributing the delay to extended safety reviews.


Chinese firms are leaning aggressively into open releases, sharing model weights freely to encourage rapid adoption and enable broad experimentation. This tactic is intended to grow developer ecosystems and accelerate both innovation and feedback cycles.

Leading U.S. AI companies, by contrast, are showing increased hesitation, tightening access to model weights and intellectual property. OpenAI’s latest delay follows broader industry reluctance to release high-performing models without restrictive licensing or additional vetting.


Impact on competition

The tradition of proprietary “AI moats” may be eroding as China’s open-weight models gain traction. Openly released models can dilute incumbents' moats by reducing barriers to parity and enabling collective improvement. Ready access to state-of-the-art models lets more actors experiment, build, and deploy novel use cases, potentially accelerating the AI adoption curve worldwide.


New Barriers?

With model weights more widely available, the limiting factor shifts to access to computational resources. This is especially true as high-end GPUs and accelerators face supply constraints, in part due to escalating chip export restrictions targeting China. Thus, even as models proliferate, only those with sufficient compute can fully leverage them.


Conclusion

The immediate gains in model quality and access by Chinese labs challenge U.S. incumbents’ dominance. Open-source pushes can foster ecosystems richer and more diverse than those around closed models, influencing who shapes the next generation of applications and research. Export controls and hardware limitations now risk becoming the primary choke point for global AI progress, overshadowing the previous focus on model secrecy. These trends signal a reconfiguration of the AI competitive landscape, elevating open-source momentum and forcing U.S. firms to rethink their approaches to IP, safety, and collaboration. The trajectory will depend on how each side navigates the compute challenge and the rapidly evolving regulatory climate.


At CILC, we provide expert legal guidance on antitrust, regulatory compliance, and intellectual property across the U.S., EU, and beyond. Our platform connects clients with qualified attorneys to navigate the complexities of multiple regulatory regimes, ensuring strategic advantage in a rapidly changing digital world.

AI and the Regulatory Framework: A U.S./EU Comparison

As ChatGPT falls under the European Commission’s lens, the Commission launches its AI-on-Demand platform

ChatGPT may soon be designated a "systemic digital service" under the EU Digital Services Act (DSA): the European Commission is actively considering designating it a "very large online platform" or "very large online search engine" (VLOP/VLOSE), the DSA's official terms for such services. ChatGPT’s web search feature in Europe has grown rapidly, reaching an average of 41.3 million monthly active users in the first quarter of 2025, up from 11.2 million in the previous period. The DSA's designation threshold is 45 million average monthly users, so ChatGPT is now very close to triggering this status.


Key Business Impacts of VLOP/VLOSE Designation

Designation would bring increased regulatory compliance and oversight:

- Stricter Obligations: implement robust risk-management systems, publish transparency reports, and undergo annual audits to assess and mitigate systemic risks such as misinformation, illegal content, and threats to fundamental rights.

- Data Access for Researchers: provide researchers and regulators with access to internal data, which could expose operational details and require new data-sharing infrastructure.

- Algorithmic Transparency: explain how its algorithms rank and present results, which could affect proprietary technology and competitive advantage.

- Higher Compliance Costs: building and maintaining these systems requires substantial ongoing investment.

- Penalties for Non-Compliance: the DSA allows fines of up to 6% of a provider's global annual turnover.

- Potential Slowdown in Innovation: compliance reviews may lengthen release cycles.

- Precedent for AI Regulation: this designation could set a precedent for how other AI-driven platforms are regulated, potentially influencing global standards and increasing scrutiny for similar services.

- Competitive Pressure: increased regulatory scrutiny could level the playing field, making it harder for dominant players to stifle competition and fostering a more diverse digital market.


Designating ChatGPT as a VLOP or VLOSE under the DSA would bring challenges for OpenAI. It would require significant investments in compliance and transparency, potentially slow innovation, and increase operational costs.

At the same time, the European Commission has launched its AI-on-Demand platform. The EU prioritizes ethical development, regulatory compliance, and digital sovereignty, often resulting in slower deployment but higher trust and transparency. Once again, the EU shows that its approach to innovation is collaborative, involving universities, research centers, and industry. This contrasts sharply with the competitive U.S. market, where initiatives focus more on feature-rich, user-friendly products and companies vie for market share and technological leadership.


Is This Protectionism or a Push for Broader Objectives?

The EU is pursuing a dual strategy: building up its AI ecosystem and infrastructure while regulating powerful foreign platforms (like ChatGPT) to ensure a level playing field, user safety, and compliance with EU law. It is also true that the DSA and AI Act are designed to apply to all relevant actors, regardless of origin, ensuring that both European and non-European platforms meet the same standards for transparency, accountability, and risk management. However, the regulatory environment does create a higher barrier to entry for non-EU firms, which must invest heavily in compliance. This can be seen as a form of regulatory protectionism, especially if the compliance burden disproportionately affects foreign companies with large European user bases. Yet the EU's stated goal remains uniform standards for safety, transparency, and accountability, whatever a platform's origin.


Conclusion

Back to the future. It is undeniable that two distinct economic philosophies shape the approach to digital markets in two of the world's largest economies.

The U.S. model is strongly influenced by Schumpeterian competition, which holds that economic progress is propelled by entrepreneurs who disrupt existing industries and technologies, continuously destroying old structures to make way for new, more efficient ones. This process is inherently dynamic and produces both winners and losers, but it is considered essential for long-term economic growth and increased productivity. The U.S. legal and regulatory environment is seen as adaptable enough to accommodate this dynamism, fostering a virtuous cycle of innovation and entry by startups and large incumbents alike, in which market leadership is constantly contested and technological change drives progress. Google, Apple, and Microsoft have all transformed markets through aggressive innovation, displacing older technologies and business models.

The EU model is shaped by ordoliberal ideas, which emphasize the importance of legal frameworks in ensuring fair competition, market contestability, and the protection of fundamental rights. The EU is not explicitly or solely ordoliberal, but these ideas are deeply embedded in its regulatory approach to digital markets and AI. Strict regulatory oversight and detailed compliance obligations could slow the pace of innovation, and in the current European debate many observers note that Europe is lagging behind the U.S. in AI and digital innovation.


At CILC, we provide expert legal guidance on antitrust, regulatory compliance, and intellectual property across the U.S., EU, and beyond. Our platform connects clients with qualified attorneys to navigate the complexities of both regulatory regimes, ensuring strategic advantage in a rapidly changing digital world.