Why the US needs an international AI framework
To prevent the fragmentation of the Internet, the US needs to lead an international AI framework

Concerns over security services potentially gaining access to Telegram data centers after the arrest of Pavel Durov in France are compounded by news of LinkedIn using personal data to train its generative AI. In addition, according to Brazil's National Data Protection Authority, which had suspended the practice in July 2024, Meta has resumed using Brazilian users' data to power its generative AI models. This trend signals that social platforms have shifted from being objects of national information security policy to direct matters of national security and personal safety.
A decade ago, the platforms were discussed as drivers of globalization, as well as means of social manipulation and channels for leaking "unwanted information". In response, the Chinese and Russian governments initiated the fragmentation of the Internet by launching national programs to control the flow of information on social media. China introduced the Great Firewall to monitor sensitive information and developed an alternative search engine (Baidu) and social media platform (WeChat), while in Russia Yandex rose for internet search and VKontakte for social media. Both countries initially intended to shield their societies from information manipulation and thus pioneered the fragmentation of the global Internet.
Current concerns over data security may fuel Internet fragmentation and de-globalization trends, as such cases pose threats to the national security of states and the personal safety of citizens. As the US government is caught in the dilemma of enforcing antitrust law and court settlements against its tech giants to foster a fairer ecosystem, or recruiting these companies to compete with their Chinese counterparts, anxiety over data protection and the use of data for unintended purposes is growing. In the absence of a clear international regulatory framework, individual countries are responding to this trend with national data protection laws aimed mainly at curbing the power of American tech giants such as Amazon, Google, Meta, and Microsoft.
As Washington postpones the adoption of a law protecting personal data from being used to train AI, American tech companies may soon find themselves in a jungle of national restrictions. Ultimately, Washington's intention to support market-driven AI with fewer legal barriers for its tech industry turns out to be counterproductive. In the global context, American companies must contend with court settlements in individual countries over privacy and data-safety enforcement. This trend also negatively affects the balance of power in the US-China tech war, as China is naturally rich in data to develop its AI models, while American companies depend on the international flow of this critical resource.
To secure its lead in AI law, the flow of necessary data, and a globally enabling business ecosystem for its tech industry, Washington needs to adopt a law that can serve as a framework and guideline for individual governments. As initial steps, American lawmakers could draw on the UNESCO Recommendation on AI Ethics, the OECD AI Principles, and the G20 AI Guidelines. As Anu Bradford argues in her book "Digital Empires", when American technology giants operate under data protection and privacy laws, their trustworthiness increases. Otherwise, individual governments will resort to Chinese-style restrictions in the name of national security or adopt regulations like the European AI Act. Both scenarios narrow the operating space for American technology companies, leaving them with less data to train AI than before and pushing them into court settlements for violations of national data laws.
Uniform standards for collecting data to train AI across all US states and allied countries could ease business for big tech, contrary to the US government's fear that data protection and privacy laws shrink the business environment for American companies. Yet, gripped by that fear, Washington has so far failed to offer the world an AI regulation framework that individual countries could use as a model for their national AI laws. As a result, US multinationals like Google, Amazon, Microsoft, and Meta may get trapped in a world of divergent national AI regulations. Worse, start-ups like OpenAI cannot afford resource-draining court battles over data security and privacy. For example, the Italian data privacy regulator Garante accused OpenAI of breaching European data protection laws in 2024.
To avoid further fragmentation of the Internet through country-by-country data regulations, it is time for Washington to initiate an AI framework that suits the data protection ambitions of both the US and the rest of the world. Otherwise, the tech giants, for whose sake the US government has been delaying data protection regulations, will find themselves trapped in a world of divergent standards.
Shakhboz Juraev is the Chief Coordinator of Technology in Global Affairs. He is the author of the book "Hybrid Strategy of Cybersecurity: The Role of Information Technology Companies in Chinese Cybersecurity Policy".