Local governments are utilizing A.I. for school assignments and bail decisions without regulation.

AI Legislation

States Taking the Lead in AI Regulation

Artificial intelligence (AI) has rapidly become an integral part of various sectors, including medicine, science, business, and education. As the deployment of AI systems continues to increase, legislators across the United States have recognized the need to protect their constituents from potential discrimination and other harms. However, they also strive not to stifle the progress and innovation brought by cutting-edge advancements. This delicate balance has prompted several states to take the lead in developing comprehensive AI regulations.

Leading the Way: Connecticut’s Approach

Connecticut has taken significant steps to address the challenges associated with AI. State Senator James Maroney, a Democrat and recognized authority on AI, believes it is crucial for the government to lead by example. By the end of 2023, Connecticut plans to complete an inventory of the government systems it operates that use AI, with the information made available to the public. Furthermore, state officials will regularly review these systems to ensure they do not perpetuate unlawful discrimination.

Senator Maroney aims to collaborate with lawmakers from Colorado, New York, Virginia, Minnesota, and other states. Together, they plan to develop model AI legislation focusing on product liability, impact assessments of AI systems, and other necessary guardrails. Maroney emphasizes the urgency of the situation, acknowledging that AI adoption is rapidly changing, and accountability measures must be put in place sooner rather than later.

Connecticut’s initiative is part of a broader trend, with at least 25 states, Puerto Rico, and the District of Columbia introducing AI-related bills this year. As of late July, 14 states and Puerto Rico had already adopted resolutions or enacted legislation, according to the National Conference of State Legislatures. These efforts primarily concentrate on understanding the deployment and usage of AI systems within their jurisdictions.

Gaining Insight: Gathering Data and Assessing Impact

Many states have chosen to establish advisory bodies or committees to study and monitor the AI systems employed by their respective state agencies. Texas, North Dakota, West Virginia, and Puerto Rico are among those taking this proactive approach. Louisiana, for instance, formed a technology and cyber security committee to assess the impact of AI on state operations, procurement, and policy.

Heather Morton, a legislative analyst at the National Conference of State Legislatures, explains that states are keen on gathering data to understand the scope and applications of AI within their borders. Questions such as “Who’s using it? How are you using it?” are being asked to gain insight into AI’s presence and potential ramifications. This data-driven approach drives the decision-making process and lays the foundation for future legislation.

Unveiling the Unknown: Transparency and Algorithmic Accountability

Connecticut’s recent law mandating the regular scrutiny of AI systems used by state agencies was prompted by an investigation conducted by the Media Freedom and Information Access Clinic at Yale Law School. The probe revealed that AI algorithms were already being utilized to assign students to schools, determine bail amounts, and distribute welfare benefits, among other tasks. However, transparency regarding the details of these algorithms remains limited.

The American Civil Liberties Union of Idaho also shed light on concerning AI practices. Richard Eppink, the organization’s legal director, testified before Congress about the misuse of “secret computerized algorithms” in Idaho’s assessment of individuals with developmental disabilities for federally funded health care services. Eppink emphasized that these automated systems relied on corrupt data that had not been properly validated.

Enhancing transparency in AI governance is crucial, especially considering the diverse applications of the technology, which range from algorithmic recommendations on platforms like Netflix to generative AI tools like ChatGPT. Given that breadth, concerns about disinformation and deceptive capabilities have understandably emerged.

From Hawaii to Arizona: Addressing AI Challenges

While many states have actively pursued AI legislation, some have faced challenges in devising appropriate measures. Hawaii, for example, did not pass any AI-related statutes this year because lawmakers were unsure how to address the issue. Instead, they passed a resolution urging Congress to adopt safety guidelines for AI usage, particularly regarding law enforcement and military applications.

Other states weighed narrower measures. Massachusetts proposed limits on mental health providers' use of AI and sought to prevent dystopian work environments where personal data is controlled by automated systems. Meanwhile, New York considered restricting employers from using AI as an automated decision-making tool in hiring.

On the other hand, Arizona’s Democratic Governor, Katie Hobbs, vetoed legislation aiming to prohibit voting machines from utilizing any AI software. She deemed the bill unnecessary, as the specific challenges it addressed were not prevalent in the state at that time.

A Federal Role: The Need for Nationwide AI Regulation

While numerous states are forging ahead with AI legislation, advocates also highlight the importance of federal leadership in this domain. The European Union has initiated efforts to establish comprehensive AI regulations, positioning itself as a global leader in AI governance. In the United States, bipartisan discussions concerning AI legislation have taken place in Congress. Senate Majority Leader Chuck Schumer emphasized the need to maximize the benefits of AI while mitigating its risks. However, specific details and commitments are yet to be determined.

Connecticut State Senator Maroney acknowledges the federal government's potential to lead AI regulation, but notes that state legislatures often act more swiftly, as they did on data privacy. States are at the forefront of AI regulation, and their experiences and insights can inform future federal measures.

Preparing for the Future: The Role of Education

Recognizing the omnipresence of machine systems in society, Washington State Senator Lisa Wellman, a former systems analyst and programmer, emphasizes the need for comprehensive AI preparation. In the upcoming legislative session, Wellman intends to introduce a bill mandating computer science education for high school graduation. She believes AI and computer science should become foundational aspects of education to ensure both students and policymakers understand and incorporate AI effectively.

As states progressively navigate the complexities of AI legislation, they aim to strike a delicate balance between fostering innovation and safeguarding against potential harms. By taking proactive steps, these states are setting an essential precedent for comprehensive and responsible AI governance in the United States.

Associated Press Writers Audrey McAvoy in Honolulu, Ed Komenda in Seattle, and Matt O’Brien in Providence, Rhode Island, contributed to this report.