Summary
The Americas are the most influential region in artificial intelligence development because the United States, home to the largest data center operators, leading AI chip producers, and top universities, is located there. Moreover, US soft power reaches countries around the world, and they may take US legislation on the regulation of artificial intelligence for granted as a model of democratic principles and technological success. One example is the lack of federal laws regulating AI, which breeds the illusion that an unregulated market helps businesses develop AI technologies successfully because companies gain abundant data to train their models. As a result, some governments refuse to establish rules for companies building AI systems, thereby planting a slow-detonating mine in their societies. Seeing the US as a model to follow, they often forget that the country's technology companies stay ahead of the campaigns for AI regulation. The situation is in fact the reverse: the concentration of AI power in the US jeopardizes effective regulation that would protect personal rights and democratic elections in the country. The most recent US elections showed that social media platforms such as Elon Musk's Twitter can play a significant role in manipulating voters. Combined with AI-powered analytical tools, the same big data has an even greater impact on US democracy.
Similarly, the lack of AI regulation is pervasive across the Americas, where the majority of countries have not yet adopted laws on the responsible use of AI. Only Uruguay and Brazil have fostered ecosystems that better protect personal safety and national security from AI-related threats. In terms of index scores, Uruguay is the regional leader with 5 points.
Methodology, Ranking, and Sources
AI Risks
1. Uncontrollable AI. Uncontrolled robots pose a risk to human safety through their erratic actions. At the current stage of AI development, governments should enact legislation requiring technology developers not to exclude human participation from AI systems entirely. This principle obliges businesses and organizations developing AI programs to have a backup plan in place for when a system becomes uncontrollable.
Question 1: Does the country's legislation require AI developers to maintain human control over the AI system they have developed?
2. Academic AI. By using AI systems for academic purposes, private groups can carry out research on weapons of mass destruction that could prove hazardous to communities. This increases the availability of chemical and biological weapons and facilitates access to their development processes. Several nuclear-armed powers could also see the secrets of their missiles and nuclear weapons revealed to other nations. AI systems with strong academic research potential can not only incite an arms race among states but also transfer these techniques to non-state actors, threatening the current system for regulating WMDs.
One way to monitor the use of artificial intelligence is through state legislation that restricts uncertified access to sophisticated academic AI systems. Such restrictions could help keep the world from devolving into a war involving weapons of mass destruction.
Question 2: Does the country impose restrictions on uncertified access to advanced academic AI systems?
3. Social Scoring. First, governments may deploy discriminatory algorithms on their electronic platforms that target social minorities and support apartheid and segregation. Second, AI systems might be used to assess citizens based on how faithfully they follow the rules and to limit their basic rights. Third, social scoring systems built on AI algorithms may penalize people who have not yet committed a crime but are deemed capable of doing so. People's freedom is thus restricted and their fundamental rights remain under threat.
To mitigate these risks, the public must have access to the AI algorithms used in the public sector. Furthermore, independent specialists must be able to study and evaluate these algorithms regularly.
Question 3: Are the algorithms of national AI projects open for public discussion in the country?
4. Manipulative AI. Deepfakes and chatbots built on unreliable or deliberately curated data sources are examples of systems that can be exploited to sway public opinion. An AI chatbot may give deliberately restricted responses if its data source is subject to stringent controls, but it may also generate inaccurate information from an abundance of internet-based data. Both situations can manipulate the public in the absence of legislation that simultaneously protects freedom of speech and identifies reliable data sources.
Question 4: Does the country's regulation mandate the use of reliable data sources for AI model training?
5. Data Exploitation. Since data is a fundamental component of artificial intelligence, developers need ever more data to train their models, which can lead to the misuse of personal information. To hold AI developers more accountable, governments should enact laws on the protection and security of personal data and prohibit the use of data for AI model training without consent.
Question 5: Does the country's legislation protect personal data from misuse?
6. AI Militarization. Big data analysis and automated targeting are two military applications of AI. Some nations now prioritize developing lethal autonomous weapons that improve targeting precision and allow them to use smaller munitions to destroy enemy warehouses and vital infrastructure. However, the race for AI weapons feeds states' appetite for armaments and shifts the existing balance of power.
Question 6: Does the country have plans to develop AI (automated weapons) as part of its military strategy?
Canada
Artificial Intelligence Strategy
United States
Pentagon Official Lays Out DOD Vision for AI
Mexico
Federal Law on Protection of Personal Data Held by Private Parties
El Salvador
Nicaragua
Panama
Colombia
Brazil
Argentina
Ley de Protección de los Datos Personales (Personal Data Protection Law)
Uruguay
AI Strategy for the Digital Government
Ecuador
Personal Data Protection Organic Law
Peru
Ley de Protección de Datos Personales (Personal Data Protection Law)
Chile