Artificial intelligence, as a new phenomenon, brings both opportunities and challenges to societies. Societies that can reduce AI risks while taking advantage of its opportunities can employ the technology successfully and safely. In this process, governments are responsible for adopting the regulations needed to control potential AI threats.
The Responsible AI Index monitors and reports on government regulations intended to protect people from AI threats. The index attempts to show where, according to current legislation, a safer AI ecosystem exists. Regulations addressing AI threats are the main component used to rank countries in the index.
Summary on Asia
Although Japan and South Korea are prominent examples of technological development, their legislation still places little emphasis on the accountability and transparency of artificial intelligence. The same tendency extends to Southeast Asian and South Asian countries. China is the outstanding leader in AI regulation in these parts of Asia. Countries in the Middle East, on the other hand, excel at AI legislation that fosters safer ecosystems and human oversight. Qatar dominates this field on the continent, providing a better environment for responsible AI. At the same time, Iran occupies the lowest position due to its lack of laws in this domain and its AI militarization ambitions.
Israel uniquely exemplifies how to strike a balance between legislation and innovation. Contrary to arguments that AI rules discourage technological progress, the country stands out in both respects. Nevertheless, the deployment of the AI-powered Gospel and Lavender systems in Gaza, which has caused the deaths of civilians, including women and children, tarnishes Israel's standing.
Apart from Israel, China, Russia, Turkey, India, South Korea, and Iran have disclosed intentions to develop artificial intelligence in the defense sector. Only the legislation of Qatar, Jordan, the United Arab Emirates, and Israel mandates human oversight of AI, and China is the only country in Asia that regulates access to advanced academic AI. Furthermore, China is the only country that requires the use of reliable data to train artificial intelligence, whereas Qatar and Israel only partially hint at this requirement.
To summarize, AI legislation in Asia is still developing. Most countries, for example, have adopted laws protecting personal data. Qatar, Jordan, the United Arab Emirates, and China have established above-average legislation for the responsible use of artificial intelligence (scoring 3.5 points or more).
Methodology, Ranking and Sources
AI Risks
1. Uncontrollable AI. Uncontrolled robots pose a risk to human safety through their erratic actions. At this stage of AI development, governments should enact legislation requiring technology developers not to exclude human participation from AI systems entirely. This principle obliges businesses and organizations developing AI programs to have a backup plan in place for when a system becomes uncontrollable.
Question 1: Does the country's legislation require AI developers to maintain human control over the AI systems they develop?
2. Academic AI. By using AI systems intended for academic purposes, individuals and groups could carry out research on weapons of mass destruction that endangers communities. Such systems increase the availability of chemical and biological weapons and ease access to their development processes. The secrets of several nuclear-armed powers' missiles and nuclear weapons could also be revealed to other nations. AI systems with strong academic research capabilities can not only incite an arms race among states but also transfer these techniques to non-state actors, threatening the current regime for regulating WMDs.
One way to monitor the use of artificial intelligence is state legislation that limits uncertified access to sophisticated academic AI systems. Such limits might keep the world from sliding into a war involving weapons of mass destruction.
Question 2: Does the country impose restrictions on uncertified access to advanced academic AI systems?
3. Social Scoring. First, governments may use discriminatory algorithms on their electronic platforms that target social minorities and support apartheid and segregation. Second, AI systems might be used to assess citizens by how faithfully they follow the rules and to limit their basic rights accordingly. Third, social scoring systems built on AI algorithms penalize people who have not yet committed a crime but have the capacity to do so. People's freedom is thus restricted, and their fundamental rights remain under threat.
To mitigate this risk, the public must have access to the AI algorithms used in the public sector. Furthermore, independent specialists must be able to examine and evaluate these algorithms regularly.
Question 3: Are the algorithms of national AI projects open for public discussion in the country?
4. Manipulative AI. Deepfakes and chatbots built on unreliable or deliberately curated data sources are examples of systems that can be exploited to sway public opinion. An AI chatbot may give deliberately restricted responses if its data source is subject to stringent regulation, yet it may also generate inaccurate information from an abundance of internet-based data. Either situation can manipulate the public in the absence of legislation that identifies reliable data sources while protecting freedom of speech.
Question 4: Does the country's regulation mandate the use of reliable data sources for AI model training?
5. Data Exploitation. Since data is a fundamental component of artificial intelligence, developers need ever more of it to train their models, which can occasionally lead to the misuse of personal information. To hold AI developers more accountable, governments should enact laws on the protection and security of personal data and prohibit the use of data for AI model training without consent.
Question 5: Does the country's legislation protect personal data from misuse?
6. AI Militarization. Big data analysis and automated targeting are two uses of military AI. Some nations now prioritize developing lethal autonomous weapons that improve targeting precision and allow them to use smaller munitions to destroy enemy warehouses and vital infrastructure. The race for AI weapons, however, feeds states' armament ambitions and shifts the existing balance of power.
Question 6: Does the country have plans to develop AI (autonomous weapons) as part of its military strategy?
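Taken together, the six questions form the index's scoring rubric. The report does not publish its exact formula, so the following is only a minimal sketch: it assumes each question earns 0 (no), 0.5 (partial, as with Qatar's and Israel's data-reliability provisions), or 1 (yes), that Question 6 is inverted because militarization plans count against a country, and that the points are summed into a 0–6 score compared against the 3.5-point "above average" threshold cited in the summary.

```python
# Hypothetical scoring sketch for the Responsible AI Index.
# Assumptions not stated in the report: each question is scored
# 0.0 (no), 0.5 (partial), or 1.0 (yes), and Question 6 is inverted
# because plans for AI militarization are treated as a risk.

ABOVE_AVERAGE = 3.5  # "above average" threshold cited in the summary


def score_country(answers: dict[str, float]) -> float:
    """Sum the six question scores into a 0-6 index value."""
    positives = sum(answers[q] for q in ("q1", "q2", "q3", "q4", "q5"))
    q6 = 1.0 - answers["q6"]  # full point only without militarization plans
    return positives + q6


# Illustrative (made-up) answer sheet, not real index data:
example = {"q1": 1.0, "q2": 0.0, "q3": 0.5, "q4": 0.5, "q5": 1.0, "q6": 1.0}
total = score_country(example)
print(f"score = {total:.1f}; above average: {total >= ABOVE_AVERAGE}")
```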
Sources by country:

Malaysia
Indonesia
China
- Cyberspace Administration of China
- Provisions on the Management of Algorithmic Recommendations in Internet Information Services
- Interim Measures for the Management of Generative Artificial Intelligence Services
- Personal Information Protection Law
- New Generation Artificial Intelligence Development Plan
India
- Digital Personal Data Protection Act
Kazakhstan
- Law on Personal Data and Its Protection
Japan
- Act on the Protection of Personal Information
South Korea
- Personal Information Protection Act
Philippines
Thailand
Vietnam
- Decree on Personal Data Protection
Singapore
Iran
Mongolia
Saudi Arabia
Oman
- Executive Regulation of the Personal Data Protection Law
Armenia
- On Protection of Personal Data
Georgia
- Law of Georgia on Personal Data Protection
The United Arab Emirates
- The UAE Charter for the Development and Use of Artificial Intelligence
- Federal Decree by Law No. (45) of 2021 Concerning the Protection of Personal Data
Qatar
- Artificial Intelligence in Qatar – Principles and Guidelines for Ethical Development and Deployment
- Personal Data Privacy Protection
Israel
- Israel's Policy on Artificial Intelligence: Regulation and Ethics
Jordan
- National Charter of Ethics for AI
Uzbekistan
Kyrgyzstan