Politico calls him California’s “chief gatekeeper” when it comes to AI rules and regulations. But state Sen. Thomas Umberg isn’t interested in closing the door on progress in how we use and develop large language models (LLMs). In fact, while the Santa Ana Democrat has concerns about the future of “AI” as we know it, he’s far more focused on finding a way to balance regulation and innovation. With California having set the direction for tech industry development and regulation for decades, it only makes sense to Umberg that the state take point in developing responsible but fair legislation this early in the technology’s lifespan.
Umberg joins TFIC co-host and Government Technology Staff Writer Ashley Silver and Governing* Staff Writer Zina Hutton to talk about his concerns with AI regulation, why states are leading the charge, how he deals with tech execs, and what steps states would need to take to craft a cohesive response to regulating AI.
Here are the top five takeaways from this episode:
- State-Led AI Regulation Efforts: States, particularly California, are at the forefront of AI regulation, drafting hundreds of legislative proposals. State Sen. Tom Umberg emphasizes the need for clear definitions of key terms like “artificial intelligence,” “transparency,” “bias” and “privacy” to create effective regulations.
- Challenges in Balancing Progress and Protection: Officials like Umberg struggle to balance the benefits of AI with the need to mitigate its risks. The complexity of AI requires extensive consultation with experts across various sectors to ensure regulations are well-informed and effective.
- Federal vs. State Responsibilities: There’s a perceived vacuum at the federal level in addressing AI regulation, prompting states to take the initiative. California, home to many AI companies, feels a unique responsibility to set national and potentially international standards.
- Risks and Opportunities of AI: AI’s integration into numerous aspects of life, including health care, law enforcement and employment, presents both significant benefits and potential catastrophic risks. Transparent and unbiased AI models could improve objectivity in areas like insurance and employment.
- Ongoing Efforts and Collaboration: Continuous dialogue with academics, AI enterprises and other stakeholders is crucial for developing robust regulations. Policymakers aim to find a “sweet spot” where regulations foster AI’s positive potential while minimizing its risks.
The following has been edited for length and clarity.
TFIC: What are your biggest concerns about AI regulation? Are there any concerns about free speech or curbing innovation that particularly stand out?
Sen. Umberg: Artificial intelligence is going to become part of everyone’s life, whether it’s employment, health care, law enforcement or the ability to speak freely. It will permeate all those things and be part of everything that we do. I believe that California’s going to set the standard for the country, maybe even internationally. I don’t think the federal government’s going to create the legal guidelines, standards and laws that may be required to ensure that, with artificial intelligence, we encourage the good and minimize the risk.
TFIC: As we’ve seen, states are stepping up where the federal government is currently stalling. What is it about state legislatures [that lend themselves] to swifter progress on AI legislation?
Umberg: This is an area that’s emerging very, very quickly. State governments, governors, legislatures are saying, ‘We need to step into this vacuum.’ And in California, we have a unique responsibility, not just because we’re so big, but also because so many of the artificial intelligence companies are here in California.
TFIC: Because of that relationship between the tech companies and legislators [in California], how do you work with these companies to figure out what that balance looks like? How do you talk to someone like Sam Altman and convince him to put the brakes on some OpenAI features, or get companies like his to work with state and local governments on regulations? How do you talk to the guys behind the curtain of AI?
Umberg: That is a question I wrestle with on an hourly basis. The folks I talk to often have an interest in making sure that whatever we do is beneficial to whatever interests they may have, and I’m aware of that. I talk to a whole variety of folks: academics, some of the medium-sized or smaller AI enterprises and, of course, some of the major AI enterprises. I solicit information and suggestions on a daily basis. I’m blessed with a staff that is extremely smart, extremely hardworking and extremely passionate about getting this right. So we are collecting information, trying to distill it and arrive at a point where we think we’ve hit that sweet spot.
TFIC: With hundreds of bills around AI, what would be the most effective first steps toward responsible AI regulation?
Umberg: For me, the most important place to start is with definitions that people can understand. If you say to an entity, ‘You must disclose if you’re using artificial intelligence to come to this decision,’ the first step in that process is [defining] what artificial intelligence is. What does artificial intelligence mean? What does transparency mean? What does bias mean? What privacy interests are at issue? Second is transparency. We need some way to mark products that are derivative of artificial intelligence. So you ask where to start: You start with definitions, and then with transparency, so folks know when artificial intelligence is being used to make a decision in law enforcement, employment or health care.
Our editors used ChatGPT 4.0 to summarize the episode in bullet form to help create the show notes. The main image for this story was created using DALL-E 3.
*Governing and Government Technology are both part of e.Republic.