Newsroom

October 31, 2025

Making AI Work for You

Why it Matters: Community discussions focused on artificial intelligence (AI) that emphasize education and collaboration help leaders understand AI’s applications and risks and deploy AI-powered tools with greater responsibility and equity. Through these communities of practice, California is continuing to ensure AI supports better decisions and delivers greater value for Californians.

SACRAMENTO, Calif. – CDT’s AI Community met on September 25 to explore practical ways to make AI work for end users. The session featured Dr. Hemant Bhargava, Distinguished Professor and Jerome and Elsie Suran Chair in Technology Management, Associate Dean for Academic Affairs, and Director of the Center for Analytics and Technology in Society at UC Davis, who outlined AI’s structural limits and practical oversight measures.

State Chief Technology Innovation Officer Vera Zakem opened the meeting by reaffirming that AI remains a top state priority. She framed the conversation around improving public service through effective AI governance and adoption, emphasizing the importance of balancing effectiveness with accountability. Zakem encouraged participants to consider how AI-driven changes will affect Californians and to tap into the state’s intellectual, innovative, and institutional strengths, especially the University of California system, as partners in shaping public-interest AI strategy, governance, and adoption.

Dr. Bhargava described AI as pervasive and economically consequential, with investment and adoption growing rapidly. While organizations adopting AI often outperform peers, he noted that many pilots fail to realize economic returns. This, he said, reflects a familiar challenge in adopting new technology: initial technical investment alone rarely yields lasting gains without organizational transformation and complementary innovations.

He cautioned that, despite some vital technical breakthroughs, modern AI models are probability-based pattern matchers that can produce authoritative but inaccurate results. Through examples like fabricated legal research citations and inconsistent mathematical reasoning, Dr. Bhargava underscored the need for verification rather than blind trust.

To manage those risks, Dr. Bhargava urged agencies to adopt deliberate oversight and verification practices, including prompt engineering, red-teaming, chain-of-thought elicitation, independent evaluation, and redundancy across models to raise confidence in results. “Using AI well requires oversight,” he noted, “and because oversight is expensive, we have to apply it strategically.” He closed by stressing human responsibility: “We are the ones accountable for decisions, and we need to exercise that oversight.”

He offered a concise four-point framework to guide agencies in rigorously validating AI outputs:  

  1. Magnitude of Consequence: Apply stronger oversight for higher-risk uses.
  2. Solution Landscape (Peaks or Plateaus): Determine whether high-quality results are common or rare for the use case.
  3. Construction Cost vs. Verification Cost × Rejection Rate: Favor AI use cases where a human operator struggles to construct a good-quality result (and AI can do it faster) but can easily verify the quality of an AI-produced result, after accounting for rejections and iterations.
  4. Data and Context Sensitivity: Guard or avoid AI use when private data or data from a sensitive context are involved.
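
The cost comparison in point 3 can be sketched as a simple decision rule. The sketch below is illustrative only, not from the presentation: the function name, the sample values, and the modeling choice (each rejected AI attempt triggers another attempt and another verification pass, so expected verification effort grows with the rejection rate) are assumptions.

```python
# Illustrative sketch of framework point 3: favor AI when expected
# verification effort is cheaper than human construction from scratch.
# All names and numbers are hypothetical, not from the presentation.

def favor_ai(human_construction_cost: float,
             verification_cost: float,
             rejection_rate: float) -> bool:
    """Return True when an AI use case looks favorable under point 3.

    Assumes each rejected AI attempt triggers another attempt and another
    verification pass, so with rejection rate r the expected number of
    verification passes is 1 / (1 - r).
    """
    if not 0 <= rejection_rate < 1:
        raise ValueError("rejection_rate must be in [0, 1)")
    expected_verification = verification_cost / (1 - rejection_rate)
    return expected_verification < human_construction_cost

# Example: verifying a draft costs 1 unit, 25% of drafts are rejected,
# and constructing the result by hand would cost 10 units.
print(favor_ai(human_construction_cost=10, verification_cost=1, rejection_rate=0.25))
```

Under these assumed numbers the expected verification cost is about 1.33 units, well below the 10-unit construction cost, so the use case favors AI; when verification is nearly as hard as construction, the rule flips.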

Following the presentation, Zakem and Dr. Bhargava discussed practical steps to build trustworthy AI in government to deliver for Californians. Dr. Bhargava stressed that literacy and targeted education are essential to foster trust. He recommended that agencies develop domain-specific protocols and task forces to evaluate trade-offs such as verification cost, consequence, and energy consumption, enabling them to balance realizing benefits with managing risks.

The meeting concluded with an extensive audience Q&A that generated dozens of questions about public sector AI concerns, including trust and literacy, verification versus construction costs, testing and metrics, environmental impacts, and procurement. 

The AI Community is open to California State Employees. To join the community, state employees can subscribe to the AIC listserv: https://cdt.ca.gov/technology-innovation/artificial-intelligence-community/subscribe-to-the-ai-community/.