Summary
In the Middle East and North Africa, Egypt, Qatar, Jordan, and Cyprus are fostering ecosystems for building safer AI systems, having adopted documents that lay the groundwork for the reliable use of artificial intelligence. Egypt leads the region, with an AI policy that promotes human oversight, transparency, and security.
Legislation governing the responsible deployment of artificial intelligence in the Middle East and North Africa (MENA) varies considerably and reflects distinct national characteristics. First, people in the region are vulnerable to the abuse of their personal data by companies training AI models. Conflicts in Yemen, Palestine, Syria, Libya, and Sudan leave users of digital platforms unprotected and open to exploitation, and cultivate an unregulated flow of data. Second, the government of Cyprus cannot impose a single mandate over the whole country, as the island remains divided by conflict; this undermines its otherwise strong position, which rests on enforcement of the principles of the EU AI Act. Moreover, no country in MENA restricts uncertified access to advanced academic AI systems, and advanced computing that enables the development of military equipment undermines security when non-state actors acquire weapon designs. Finally, the region is one of the first testing grounds for AI used for military purposes: the Israel Defense Forces employ the Lavender and Gospel AI systems in the Gaza war, although only three countries (Iran, Turkiye, and Israel) are developing military AI technologies.
In summary, the Middle East and North African countries fall into two groups: stable states positioned to build responsible AI, and unstable territories subject to personal data exploitation and violent uses of AI. To alleviate the situation, the former could help the latter protect their citizens' data from misuse by AI developers. In addition, international and regional organizations could act as facilitators or coordinators to narrow the gap, sustain cooperation, and raise public awareness.
Methodology, Ranking and Sources
AI Risks
1. Uncontrollable AI. Uncontrolled AI systems pose a risk to human safety through erratic behavior. At this stage of AI development, governments should enact legislation requiring developers not to exclude human participation from AI systems entirely. This principle obliges businesses and organizations developing AI programs to have a contingency plan in place for when a system becomes uncontrollable.
Question 1: Does the country's legislation require AI developers to maintain human control over the AI system they have developed?
2. Academic AI. Groups can misuse AI systems intended for academic purposes to research weapons of mass destruction that could endanger communities. This increases the availability of chemical and biological weapons and facilitates access to their development processes. Such systems may also expose the missile and nuclear secrets of nuclear-armed powers to other nations. AI systems with strong academic research potential can not only incite an interstate arms race but also transfer these techniques to non-state actors, threatening the current system of regulating WMDs.
One way to monitor the use of artificial intelligence is through state legislation that limits uncertified access to sophisticated academic AI systems. Such restrictions could help prevent the world from devolving into a conflict involving weapons of mass destruction.
Question 2: Does the country impose restrictions on access to advanced academic AI systems from uncertified sources?
3. Social Scoring. First, governments may deploy discriminatory algorithms on their electronic platforms that target social minorities and reinforce apartheid and segregation. Second, AI systems might be used to assess citizens by how faithfully they follow the rules and to limit their basic rights. Third, social scoring systems built on AI algorithms penalize those who have not yet committed a crime but are judged capable of doing so. People's freedom is thus restricted, and their fundamental rights remain under threat.
To mitigate this risk, the public must have access to the AI algorithms used in the public sector. Furthermore, independent specialists must be able to examine and evaluate these algorithms regularly.
Question 3: Are the algorithms of national AI projects open for public discussion in the country?
4. Manipulative AI. Deepfakes and chatbots built on unreliable or deliberately curated data sources are examples of systems that can be exploited to sway public opinion. An AI chatbot may give deliberately restricted responses if its data source is tightly controlled, but it may also generate inaccurate information from an abundance of internet-based data. Either situation can manipulate the public in the absence of legislation that both protects freedom of speech and identifies reliable data sources.
Question 4: Does the country's regulation mandate the use of reliable data sources for AI model training?
5. Data Exploitation. Since data is a fundamental component of artificial intelligence, developers need ever more data to train their models, which can lead to the misuse of personal information. Governments should enact personal data protection and security laws and prohibit the use of data for AI model training without consent, holding AI developers more accountable.
Question 5: Does the country's legislation protect personal data from misuse?
6. AI Militarization. Military applications of AI include big data analysis and automated targeting. Some nations now prioritize developing lethal autonomous weapons that improve targeting precision and allow smaller munitions to destroy enemy warehouses and vital infrastructure. However, the race for AI weapons fuels states' armament ambitions and shifts the existing balance of power.
Question 6: Does the country plan to develop AI (autonomous weapons) as part of its military strategy?
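The six questions above can be read as a simple tally-based ranking. The sketch below shows one way such a score could be computed, assuming equal weights and one point per risk-mitigating answer; the weighting scheme and the sample answer sheet are illustrative assumptions, not figures taken from this report.

```python
# Illustrative scoring sketch: one point per risk-mitigating answer.
# The question keys, equal weighting, and sample answers are assumptions
# for demonstration only, not the report's actual methodology or scores.

QUESTIONS = [
    "human_control_required",      # Q1: uncontrollable AI
    "academic_access_restricted",  # Q2: academic AI
    "algorithms_open",             # Q3: social scoring
    "reliable_data_mandated",      # Q4: manipulative AI
    "personal_data_protected",     # Q5: data exploitation
    "no_military_ai_plans",        # Q6: militarization (a "no" to Q6 scores a point)
]

def score(answers: dict) -> int:
    """Count one point for each risk-mitigating answer."""
    return sum(1 for q in QUESTIONS if answers.get(q, False))

# Hypothetical answer sheet for a single country, for illustration only.
sample_country = {
    "human_control_required": True,
    "academic_access_restricted": False,
    "algorithms_open": True,
    "reliable_data_mandated": True,
    "personal_data_protected": True,
    "no_military_ai_plans": True,
}

print(score(sample_country))  # 5
```

Countries could then be ranked by descending score; ties would need a secondary criterion, which this sketch does not attempt to define.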
Iran
Saudi Arabia
Oman
Executive Regulation of the Personal Data Protection Law
The United Arab Emirates
The UAE Charter for the Development and Use of Artificial Intelligence
Federal Decree by Law No. (45) of 2021 Concerning the Protection of Personal Data
Qatar
Artificial Intelligence in Qatar – Principles and Guidelines for Ethical Development and Deployment
Personal Data Privacy Protection
Israel
Israel’s Policy on Artificial Intelligence Regulation and Ethics
Jordan
National Charter of Ethics for AI
Bahrain
Law No. (30) of 2018 with Respect to Personal Data Protection Law
Egypt
Egyptian Charter for Responsible AI
Egypt’s views on the “Lethal Autonomous Weapons Systems” resolution A/78/241
Morocco
Law No. 09-08 on the protection of individuals with regard to the processing of personal data
Tunisia
Law on the Protection of Personal Data
Cyprus
General Data Protection Regulation