Artificial Intelligence Regulation: current laws relating to AI




European Union

In 2021, the EU reported that 6% of the Union’s small enterprises, 13% of medium-sized enterprises, and 28% of large enterprises used AI. The gap is likely explained by factors such as the complexity of implementing AI in an enterprise, economies of scale, and cost.

One thing is certain, however: the Union aims to capture the economic and societal benefits of AI technologies while remaining committed to a balanced approach.



The Artificial Intelligence Act (the AI Act)

The AI Act, a proposed European law on artificial intelligence, would be the first AI law enacted by a major regulator. The regulation groups AI applications into three risk categories (a short illustrative sketch follows the list):


Applications and systems that pose an unacceptable risk, such as Chinese government-run social scoring, are banned

High-risk applications, such as a CV-scanning tool that evaluates job applicants, must follow strict regulatory guidelines

Applications that are not explicitly prohibited or labeled as high-risk are mostly unregulated
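
To make the three-tier structure concrete, here is a minimal Python sketch of how an organization might label its own systems against these categories. The tier names, obligations, and example systems are illustrative assumptions, not text from the Act.

from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk categories."""
    UNACCEPTABLE = "banned"          # e.g. government-run social scoring
    HIGH = "strict obligations"      # e.g. CV-scanning hiring tools
    MINIMAL = "largely unregulated"  # everything not banned or high-risk

# Hypothetical internal inventory mapping systems to tiers.
inventory = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")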

At the same time, the regulation is accompanied by a robust funding policy to support AI: the Digital Europe and Horizon Europe programs will together contribute €1 billion to AI initiatives every year. Furthermore, 20% of the EU’s recovery scheme funds must be dedicated to digital transition and AI projects.


General Data Protection Regulation (GDPR)

The GDPR, though not exclusively focused on AI, includes provisions that impact the use of AI in handling personal data. It governs how AI applications can collect, process, and store personal data while prioritizing individual data privacy and consent.
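
As a loose illustration of the consent-first posture the GDPR imposes on AI pipelines, the sketch below filters a training set down to records with explicit consent for the stated purpose. The record structure and function names are hypothetical, not a GDPR-mandated API; a real pipeline would also need to handle other lawful bases, retention limits, and erasure requests.

from dataclasses import dataclass

@dataclass
class DataSubjectRecord:
    subject_id: str
    consent_given: bool  # explicit consent for this processing purpose
    purpose: str         # the purpose the consent was collected for

def records_for_training(records, purpose):
    # Keep only records whose recorded consent covers the stated purpose.
    return [r for r in records if r.consent_given and r.purpose == purpose]

records = [
    DataSubjectRecord("u1", True, "model-training"),
    DataSubjectRecord("u2", False, "model-training"),
    DataSubjectRecord("u3", True, "marketing"),
]
print(records_for_training(records, "model-training"))  # only u1 passes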


United States

Unlike the EU, with its more comprehensive regulatory framework, the United States does not yet have a federal privacy law. Instead, regulatory guidelines have been proposed by federal agencies and by several state and local governments.

State lawmakers are weighing AI’s benefits and challenges, and a growing number of measures have been introduced to study the technology’s impact and map out the room policymakers have to maneuver.


National AI Initiative Act (U.S. AI Act)

The National AI Initiative Act (U.S. AI Act) was enacted in January 2021. It was established to provide “an overarching framework to strengthen and coordinate AI research, development, demonstration, and education activities across all U.S. Departments and Agencies.”


The United States AI Act established offices and task forces to implement a national AI strategy involving various federal agencies. These include the Federal Trade Commission (FTC), the Department of Defense, the Department of Agriculture, the Department of Education, and the Department of Health and Human Services.


Algorithmic Accountability Act (Proposed)

This proposed legislation aims to regulate automated decision-making systems, including AI algorithms, to prevent discrimination and bias. If enacted, it would require companies to assess and mitigate the risks their AI systems pose to civil rights.
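
The bill’s central mechanism is the impact assessment. Below is a rough Python sketch of what such an assessment record might capture; the field names and the naive completeness check are our own invention, not language from the bill.

from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    # Hypothetical record of an automated-decision impact assessment.
    system_name: str
    decision_affected: str  # e.g. "credit approval"
    data_sources: list = field(default_factory=list)
    identified_risks: list = field(default_factory=list)  # bias, privacy, ...
    mitigations: list = field(default_factory=list)

    def is_complete(self):
        # Naive check: every identified risk has at least one mitigation.
        return len(self.mitigations) >= len(self.identified_risks)

assessment = ImpactAssessment(
    system_name="loan-scoring-v2",
    decision_affected="credit approval",
    data_sources=["application form", "credit bureau"],
    identified_risks=["disparate impact by zip code"],
    mitigations=["drop zip code feature and re-audit quarterly"],
)
print(assessment.is_complete())  # True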


The NIST AI Risk Management Framework (AI RMF)

The NIST AI Risk Management Framework (AI RMF) is designed to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.


The Framework is being created through a consensus-driven, open, transparent, and collaborative process that includes workshops and other avenues for public feedback. It is designed to help organizations manage the enterprise and societal risks involved in designing, developing, deploying, evaluating, and using AI systems through greater understanding, detection, and preemption of those risks.
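
The framework organizes this risk work around four core functions: Govern, Map, Measure, and Manage. A minimal Python sketch of a risk register keyed by those functions might look like the following; the register layout and sample entries are our own assumptions, not a format NIST prescribes.

# The four core functions come from the NIST AI RMF; everything else
# here (the register layout, the sample entries) is illustrative.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

risk_register = {fn: [] for fn in RMF_FUNCTIONS}

def log_activity(function, note):
    if function not in risk_register:
        raise ValueError(f"unknown RMF function: {function}")
    risk_register[function].append(note)

log_activity("GOVERN", "assign an accountable owner for the chatbot system")
log_activity("MAP", "document intended use and foreseeable misuse")
log_activity("MEASURE", "track error rates on a held-out test set")
log_activity("MANAGE", "define a rollback procedure for harmful outputs")

for fn in RMF_FUNCTIONS:
    print(fn, "->", risk_register[fn])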


Local Law 144 (the AI Law)

Local Law 144 is the first law in the United States to address the use of AI and other automated technology in the hiring process. It requires businesses to conduct bias audits of automated employment decision tools, including those that use artificial intelligence and related technology, and to publish specific notices about such tools to employees and job candidates in the city.
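
The bias audits the law calls for typically compare selection rates across demographic groups. The sketch below computes simple impact ratios in that spirit; the data, group labels, and the 0.8 review threshold (borrowed from the familiar four-fifths rule of thumb) are illustrative assumptions rather than the statute’s own test.

def selection_rates(outcomes):
    # outcomes maps group -> (selected, total applicants).
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(rates):
    # Each group's selection rate divided by the highest group's rate.
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical audit data for an automated CV-screening tool.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")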


New York City joins Illinois, Maryland, and several other jurisdictions that have implemented AI regulations addressing hiring and promotion bias in the workplace. However, in response to substantial public comment, the New York City Department of Consumer and Worker Protection announced that enforcement would be delayed until April 15, 2023.


The California Privacy Rights Act (CPRA)

The CPRA, which became effective on January 1, 2023, directly addresses automated decision-making. Under the Act, consumers have the right to understand (and opt out of) automated decision-making technologies, which include profiling consumers based on their “work performance, economic status, health, personal preferences, interests, reliability, behavior, location or movements.”
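
In code terms, the opt-out right means an automated profiling step has to check a stored preference before it runs. A minimal sketch follows; the registry and the placeholder scoring logic are invented for illustration, not anything the CPRA prescribes.

# Consumers who have opted out of automated profiling (illustrative).
opt_out_registry = {"consumer-42"}

def profile_consumer(consumer_id, features):
    if consumer_id in opt_out_registry:
        return None  # honor the opt-out: skip automated profiling
    # Placeholder scoring logic standing in for a real model.
    return sum(features.values())

print(profile_consumer("consumer-42", {"visits": 3}))  # None (opted out)
print(profile_consumer("consumer-7", {"visits": 3}))   # 3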


California Consumer Privacy Act (CCPA)

The CCPA gives California residents the right to know what personal information is being collected about them by businesses, including those using AI for data processing. It also grants the right to opt out of the sale of their personal information.


California’s Algorithmic Accountability and Transparency Act (AATA) was passed in September 2020. It requires businesses to disclose certain information about their algorithms, such as how they are used and what data they are trained on, and gives consumers the right to request information about how an algorithm was used to make a decision about them.


New York City’s Automated Decision Systems (ADS) Mitigation Toolkit, released in January 2023, is a set of guidelines and resources that help businesses mitigate the potential harms of ADS, covering topics such as transparency, accountability, and fairness.


Canada

Canada is investing heavily in AI: as of August 2020, $1 billion in contributions had been awarded across the country.


AI regulation entered a new era with the Canadian government’s announcement of a digital charter as part of a broader revamp of the country’s data privacy landscape. Part 3 of Bill C-27, the Digital Charter Implementation Act, 2022, would create the Artificial Intelligence and Data Act (AIDA), Canada’s first AI legislation. We’ll now take a closer look at it.


Artificial Intelligence and Data Act (AIDA)

In general, AIDA and the EU AI Act are both focused on limiting the risks of bias and harm caused by AI while attempting to strike a balance with the need to encourage technical innovation. Both AIDA and the EU AI Act define “artificial intelligence” in a technology-neutral manner, so as to be “future-proof” and to keep up with breakthroughs in AI.


AIDA takes a more principles-based approach. In contrast, the EU AI Act is more prescriptive in categorizing “high-risk” AI systems and harmful AI practices and in limiting their development and deployment.


Under both regimes, AI systems posing low or no risk are largely exempt, apart from transparency requirements. AIDA, however, places requirements only on “high-impact” AI systems and does not explicitly prohibit AI systems that pose an unacceptable level of risk. Most of AIDA’s substance and specifics are left to future regulations, including the definition of the “high-impact” AI systems to which most of its requirements attach.


United Kingdom

The UK is already home to a thriving AI sector, with research suggesting that more than 1.3 million UK businesses will use artificial intelligence and invest over £200 billion in the technology by 2040.

At the same time, when it comes to regulating AI, more remains to be done to address the complex challenges these emerging technologies present. In its National AI Strategy, the government committed to developing a pro-innovation national position on governing and regulating AI.

Rather than delegating responsibility for AI governance to a single regulatory body, as the EU is doing through its AI Act, the UK government’s proposals would allow different regulators to take a tailored approach to the use of AI in their own sectors, with the aim of boosting productivity and growth.


The core principles require developers and users to:

Ensure that AI is used in a safe way

Ensure that AI is technically secure and performs as intended

Make sure that AI is appropriately transparent and explainable

Consider fairness

Identify a legally liable person to be responsible for AI

Clarify routes to redress or contestability


China

China’s Personal Information Protection Law (PIPL) came into force in November 2021. It is a comprehensive privacy law regulating the collection, use, and transfer of personal data, and it includes provisions specifically relevant to AI, such as requiring companies to obtain user consent before collecting or using personal data for AI-powered decision-making.


