The Wild West of Artificial Intelligence regulations


Just as artificial intelligence is rapidly evolving, so is the legislative landscape. As with most new technologies, the establishment of any regulatory framework has lagged far behind the rise of artificial intelligence.

But over the past few months, the momentum for regulating artificial intelligence has reached an all-time high and legislators show no signs of slowing down. In the 2024 legislative session, at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills.

On May 17, Colorado became the first state in the country to pass a comprehensive regulatory framework for artificial intelligence.

Just four days later, the European Union voted to endorse the AI Act, the world’s first comprehensive regulation for providers of AI systems. Both regulations adopted a risk-based approach to address high-risk AI systems and their potential to cause “algorithmic discrimination.”

The European Union’s Regulatory Framework

The AI Act classifies artificial intelligence systems into four categories: unacceptable risk, high risk, limited risk and minimal risk. The higher the risk designation, the more restrictive the regulation.

For example, unacceptable uses, such as using AI to assess the risk of an individual committing criminal offenses, are strictly prohibited while minimal risk uses, such as an email provider’s spam filter, are unregulated.

Under the European Union’s regulatory framework, high-risk uses include any AI system that affects the health, safety or fundamental rights of a natural person, as well as critical infrastructure, education, employment, migration, democracy, elections, the rule of law and the environment.

The requirements set forth under the AI Act offer a strong preview of what can be expected for future federal or state legislation.

Colorado’s Regulatory Framework

Borrowing from the European Union’s sweeping AI Act, Colorado’s legislation targets developers of high-risk artificial intelligence systems, imposing a duty on such developers to exercise reasonable care to protect consumers from any “known or reasonably foreseeable” risks of algorithmic discrimination.

With limited exceptions, a high-risk artificial intelligence system is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making a consequential decision.”

The act enumerates eight high-risk uses for which algorithmic discrimination is actionable, one of which is legal services.

Other “high-risk” uses include essential government services, financial or lending services, education, employment, health care, housing and insurance. The new law does not go into effect until Feb. 1, 2026.

Indiana’s Measured Approach to Artificial Intelligence

Indiana has taken a more calculated approach to artificial intelligence.

On March 13, Gov. Eric Holcomb signed Senate Bill 150, which addresses artificial intelligence and cybersecurity. The law establishes an AI Task Force to study and assess the use of AI technology by state agencies.

The task force began work in July and will conclude in December 2027.

Takeaways for Indiana Lawyers

We are in the early days of AI regulation.

There is currently no comprehensive federal legislation that regulates or restricts the development and use of AI. But at least 20 AI-related bills have been introduced in Congress this summer alone.

Half of all states have AI legislation under consideration, with roughly a third of states having enacted at least one law regarding the technology.

And in the absence of comprehensive federal legislation on artificial intelligence, the patchwork of AI regulation will continue to grow.

Lawyers everywhere need to stay apprised of developments in AI, both for ourselves and for our clients. •

__________

Jayna Cacioppo is a litigation partner at Taft and co-chair of the firm’s Innovation Tools and Technology Committee. She can be reached at [email protected]. Christine Walsh is a litigation associate at Taft. She can be reached at [email protected].
