
How to Regulate Artificial Intelligence the Right Way: State of AI and Ethical Issues

Current artificial intelligence (AI) systems are governed by existing regulations such as data protection, consumer protection and market competition laws.

It is essential that governments, leaders and policymakers develop a solid understanding of the fundamental differences between artificial intelligence, machine learning and deep learning.

Artificial Intelligence (AI) refers to computer systems designed to perform tasks usually reserved for human intelligence, using logic, if-then rules and decision trees. AI recognizes patterns in large amounts of quality data, providing insights, predicting outcomes and making complex decisions.
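
To make the "logic and if-then rules" side of this concrete, here is a minimal sketch of a classic rule-based decision routine. The function, thresholds and scenario are entirely hypothetical, chosen only to illustrate how early expert-system-style AI encodes decisions as explicit rules rather than learned parameters:

```python
# A toy rule-based decision system: explicit if-then rules, no learning.
# approve_loan and its thresholds are hypothetical, for illustration only.

def approve_loan(income: float, debt: float, credit_score: int) -> bool:
    """Decide with hand-written rules, the way early expert systems did."""
    if credit_score < 600:            # rule 1: reject poor credit outright
        return False
    if debt / max(income, 1) > 0.5:   # rule 2: reject debt-to-income above 50%
        return False
    return True                       # every rule passed, so approve

print(approve_loan(income=50_000, debt=10_000, credit_score=700))  # True
```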

Machine Learning (ML) is a subset of AI that uses advanced statistical techniques to enable computer systems to improve at tasks with experience over time. Voice assistants like Amazon’s Alexa and Apple’s Siri get better every year through consistent consumer use, coupled with machine learning taking place in the background.
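
"Improving with experience" can be shown in a few lines. The sketch below, which assumes scikit-learn and uses its bundled digits dataset purely for illustration, trains the same model family on progressively more examples; accuracy on held-out data typically rises as the "experience" grows:

```python
# A model generally scores better as it sees more training data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 500, len(X_train)):            # grow the "experience"
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```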

Deep Learning (DL) is a subset of machine learning that uses advanced algorithms to enable an AI system to practice performing tasks, by exposing multilayer neural networks to large amounts of data. It then uses what it has learned to recognize new patterns in the data. Learning can be supervised, unsupervised or reinforcement learning, as with Google DeepMind’s AlphaGo, which learned to beat humans at the game of Go.
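
For a feel of what "multilayer neural network" means in practice, here is a minimal supervised-learning sketch in PyTorch. The task, data and architecture are synthetic and illustrative only:

```python
# A small multilayer network trained by supervised learning on toy data.
import torch
import torch.nn as nn

# toy task: predict whether the sum of the 8 inputs is positive
X = torch.randn(1024, 8)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(            # "multilayer" = stacked hidden layers
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):          # repeated exposure to the data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```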

State of Artificial Intelligence in the Age of the Pandemic

[Image: The current state of AI]

Artificial intelligence (AI) is advancing in increasingly concrete ways in blockchain, education, the Internet of Things, quantum computing, the arms race and vaccine development.

During the COVID-19 pandemic, we have seen AI become increasingly essential to breakthroughs in everything from drug discovery to critical infrastructure like power grids.

AI-driven approaches have taken biology by storm with faster simulations of the human cellular machinery (proteins and RNA). This has the potential to transform drug discovery and healthcare.

Transformers have become a general-purpose architecture for machine learning, beating the state of the art in many areas, including natural language processing (NLP), computer vision and even protein structure prediction.
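
As a taste of how accessible Transformer models have become, the sketch below uses the Hugging Face `transformers` library’s `pipeline` API to run a pretrained sentiment classifier. It assumes the package is installed and that a default model can be downloaded; the input sentence and the shown output are illustrative:

```python
# Load a pretrained Transformer for sentiment analysis in two lines.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Regulating AI the right way is essential."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```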

AI is now a literal arms race rather than a figurative one.

Organizations must learn from the mistakes made with the Internet and prepare for safer AI.

Artificial intelligence is the field of developing computer systems able to perform tasks that humans are very good at, such as object recognition, speech recognition and understanding, and decision-making in constrained environments.

There are 3 stages of artificial intelligence:

[Image: 3 types of AI]

1. Artificial Narrow Intelligence (ANI), which has a limited range of abilities. Examples: AlphaGo, IBM’s Watson, virtual assistants like Siri, disease mapping and prediction tools, self-driving cars, and machine learning models such as recommender systems and deep learning translation.

2. Artificial General Intelligence (AGI), which has attributes on par with human abilities. This level has not yet been reached.

3. Artificial Super Intelligence (ASI), which has skills that surpass those of humans and could make them obsolete. This level has not yet been reached.

Why should governments regulate artificial intelligence?

[Image: Make AI accountable]

We must regulate artificial intelligence for two reasons.

  • First, because governments and businesses are using AI to make decisions that can have a significant impact on our lives. For example, algorithms used to grade academic performance can have a devastating effect.

  • Second, because whenever someone makes a decision that concerns us, they must be accountable to us. Human rights law sets minimum standards of treatment that everyone can expect. It gives everyone the right to a remedy when these standards are not met and they suffer harm.

Is there an international law on artificial intelligence?

[Image: How new laws and regulations are created]

Today, there is no international law or legislation specific to artificial intelligence that regulates its use. However, progress has been made through the passage of bills that regulate specific AI systems and frameworks.

Artificial intelligence has evolved rapidly over the past decades. It has made our lives much easier and saves us valuable time for other tasks.

AI must be regulated to protect the positive advances of the technology. To date, lawmakers around the world have failed to craft laws that specifically regulate the use of artificial intelligence. This allows for-profit companies to develop systems that can harm individuals and society at large.

National and international regulations on artificial intelligence

[Image: AI facts and figures]

For a number of years, national and local governments have been adopting strategies and working on new laws, but no comprehensive legislation has yet been passed.

China, for example, developed a strategy in 2017 to become the world leader in AI by 2030. In the United States, the White House has published ten principles for the regulation of AI. They include promoting “reliable, robust and trustworthy AI applications”, public participation and scientific integrity. International bodies that advise governments, such as the OECD and the World Economic Forum, have developed ethical guidelines.

[Image: AI regulation draft]

The Council of Europe has created a committee dedicated to developing a legal framework on AI. The most ambitious proposal to date comes from the EU: on April 21, 2021, the European Commission presented its proposal for a new AI law, the Artificial Intelligence Act.

Ethical Concerns of Artificial Intelligence

[Image: Ethical principles of AI]

EU police forces deploy facial recognition technologies and predictive policing systems. These systems are inevitably biased and thus perpetuate discrimination and inequality.

Crime prediction and recidivism risk scoring is a second AI application fraught with legal issues. A ProPublica investigation of an algorithm-based criminal risk assessment tool found that the formula was more likely to flag Black defendants as future criminals, wrongly labeling them at almost twice the rate of white defendants, while white defendants were mislabeled as low risk more often than Black defendants. We need to think about how we mass-produce decisions and treat people, especially low-income and low-status people, through automation, and about its implications for society.
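
The disparity ProPublica measured is, at its core, a difference in error rates between groups. The sketch below illustrates the arithmetic on entirely synthetic data (not the actual COMPAS dataset): a score that flags one hypothetical group more aggressively produces a higher false positive rate for people in that group who never reoffend:

```python
# Synthetic illustration of group-wise false positive rates (not real data).
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)            # hypothetical groups
reoffended = rng.random(10_000) < 0.3                  # ground truth
# a biased score: group A is flagged "high risk" more often than group B
flagged = rng.random(10_000) < np.where(group == "A", 0.45, 0.25)

for g in ("A", "B"):
    innocent = (group == g) & ~reoffended              # did NOT reoffend
    fpr = (flagged & innocent).sum() / innocent.sum()  # wrongly flagged share
    print(f"group {g}: false positive rate = {fpr:.2f}")
```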

How to Regulate Artificial Intelligence the Right Way

[Image: Regulatory issues]

Effective, rights-protecting AI regulation should, at a minimum, contain the following safeguards. First, AI regulation must prohibit use cases that violate fundamental rights, such as biometric mass surveillance and predictive policing systems. The ban should not contain exceptions allowing companies or public authorities to use them “under certain conditions”.

Second, there must be clear rules defining exactly what organizations must make public about their products and services. Companies must provide a detailed description of the AI system itself, including information about the data it uses, the development process, the purpose of the system, and where and by whom it is used. It is also essential that people exposed to AI are made aware of it, for example in the case of hiring algorithms. Systems that can have a significant impact on people’s lives should be further scrutinized and included in a publicly available database. This would make it easier for researchers and journalists to ensure that companies and governments are properly protecting our freedoms.
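
One way to picture such a public database entry is a structured disclosure record, along the lines of a "model card". The sketch below is a hypothetical schema: the field names are illustrative and do not come from any actual regulation:

```python
# A hypothetical disclosure record for a public AI registry (illustrative).
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemDisclosure:
    name: str
    purpose: str                  # what the system is for
    training_data: str            # what data it uses
    development_process: str      # how it was built and tested
    deployed_by: str              # where and by whom it is used
    affected_people_notified: bool

record = AISystemDisclosure(
    name="ExampleHiringScreener",
    purpose="Rank job applications for human review",
    training_data="Historical applications, 2015-2020, anonymized",
    development_process="Supervised gradient-boosted trees; annual bias audit",
    deployed_by="Acme Corp HR department",
    affected_people_notified=True,
)
print(json.dumps(asdict(record), indent=2))   # entry for the public database
```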

Third, individuals and the organizations that protect consumers must be able to hold governments and companies accountable when things go wrong. Existing liability rules need to be adapted to recognize that decisions are made by an algorithm, not by the user. This could mean obliging the company that developed the algorithm to verify the data its algorithms are trained on and the decisions they make, so that problems can be corrected.

Fourth, new regulations must ensure that there is a regulator that can hold companies and public authorities accountable and verify that they follow the rules. This watchdog should be independent and have the resources and powers necessary to do its job.

Finally, AI regulation should also contain safeguards to protect the most vulnerable. It should put in place a system for people who have been harmed by AI systems to file complaints and obtain compensation. Workers should have the right to take action against invasive AI systems used by their employer without fear of reprisal.

Conclusion

[Image: EU AI Act]


Trustworthy artificial intelligence must comply with all applicable laws and regulations, as well as a series of key requirements; specific assessment checklists help verify that each of these requirements is met:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, not diminish, limit or misguide human autonomy.

  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to handle errors or inconsistencies during all phases of the AI system’s life cycle.

  • Data privacy and governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

  • Transparency: The traceability of AI systems must be ensured.

  • Diversity, non-discrimination and fairness: AI systems must consider the full range of human abilities, skills and requirements, and ensure accessibility.

  • Societal and environmental well-being: AI systems should be used to drive positive social change and build sustainability and ecological responsibility.

  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.
