The Charger Blog

Professor’s Artificial Intelligence Expertise Helps to Inform Connecticut State Policy

A leading artificial intelligence researcher and expert, Vahid Behzadan, Ph.D., was appointed to the new Connecticut Artificial Intelligence Working Group earlier this year. He’s bringing his own expertise, as well as what he’s discovering in the laboratory at the University, as he endeavors to promote statewide cyber safety.

December 6, 2023

By Renee Chmiel, Office of Marketing and Communications

Vahid Behzadan, Ph.D., speaks as part of a task force meeting. (Courtesy of Connecticut Network)

Vahid Behzadan, Ph.D., is passionate about the safe and ethical use of artificial intelligence (AI). He regularly brings his expertise to his students in the classroom, in the laboratory, and even to audiences around the world through his many interviews with various media outlets. He’s now also bringing his commitment to the responsible use of AI to the state level and to Connecticut policymakers as a member of the new Connecticut Artificial Intelligence Working Group.

Particularly concerned about the potential for AI to be used maliciously, Dr. Behzadan was grateful for the opportunity to be a part of the group. He acknowledges that while some AI regulations may be well-intentioned, they may also be misinformed and therefore unable to fully address the possible trajectories that AI development could take as it evolves. Through his new role, he’s excited to be leveraging his expertise while engaging with these issues directly.

"My appointment is a privilege that carries with it a profound sense of purpose," said Dr. Behzadan, an assistant professor of computer science and data science. "I aim to contribute constructively to discussions with policymakers, helping to bridge knowledge gaps without impeding innovation."

'The broader implications of AI in society'
Vahid Behzadan, Ph.D., and a student at the University.

The group has already met multiple times since its official establishment earlier this fall. Meetings have offered members the opportunity to hear testimony from major stakeholders in the AI industry, including policy organizations and tech leaders such as Google. Dr. Behzadan says the perspectives presented during the meetings have been "invaluable."

Excited about the collaborative nature of the task force, Dr. Behzadan believes the diverse expertise of its members will play a pivotal role in shaping the state’s AI policy and in ensuring that multiple ideas and perspectives are considered. He’s excited to share his work, including his recent AI forensics research, which, he believes, could help inform a legal framework for AI.

"I see my role as contributory, helping to ensure that our collective efforts benefit from a blend of academic research and practical, real-world application," he explains. "Together, we strive to craft policies that are as informed and effective as they are mindful of the broader implications of AI in society."

'Addressing AI at the state level'

Dr. Behzadan’s role has enabled him to play an active part in the discussions, which, he says, have contributed to a "rich exchange of ideas" aimed at ensuring a comprehensive legislative approach. He has synthesized the information that a variety of experts have presented, building an informed opinion that the working group can then use to guide its policies going forward.

"It is a delicate balance to strike," he said. "I am committed to doing this in a way that supports the responsible advancement of AI while also fostering an environment that encourages technological breakthroughs. My focus is on offering insights that are informed, nuanced, and forward-looking to craft policy that is as dynamic and adaptable as the field of AI itself."

Founder and director of the University’s Secure and Assured Intelligent Learning (SAIL) research group, Dr. Behzadan draws on his own research in the laboratory in his role with the task force. It has prepared him well, as the SAIL Lab is dedicated to advancing the understanding and implementation of secure and ethical AI systems. Researchers explore the intricate technical challenges inherent to AI security and safety, offering Dr. Behzadan a critical lens through which to view policy.

"State leaders' engagement is crucial because they are positioned to create policies that protect citizens, promote innovation, and ensure that the benefits of AI are shared equitably," he said. "Addressing AI at the state level allows for more nuanced and localized governance, which is vital given the varied applications and implications of AI to the state’s economy and social integrity."

'A path of informed regulation'
Vahid Behzadan, Ph.D.

As AI continues to evolve and improve, its capabilities, influence, and impact also expand, bringing the critical conversations around AI to fields outside of technology, such as education and healthcare. The need for policy discussions is also increasing, as laws will have to evolve along with AI technology. There’s a sense of urgency around addressing this rapidly changing technology and its expanding role, and Dr. Behzadan is eager to continue to explore this with the task force. He also addresses it with his students.

As an educator, Dr. Behzadan’s goal is to provide his students with a comprehensive view of AI – from code and algorithms to ethical and legislative concerns. This is especially true in the "AI and Cybersecurity" graduate course he teaches at the University. Designed to enable students to understand the technical aspects of AI security and safety as well as its societal and regulatory implications, the course prepares students to be active participants in the safe and responsible use of AI.

As a researcher and an expert, Dr. Behzadan retains a cautious optimism. He recognizes AI’s potential as a force for good in the world, but he believes its integration into society needs to be intentional. He emphasizes the need for education and understanding when it comes to fostering responsible development, and he hopes to do just that as an educator and as a member of the task force.

"I recommend that we pursue a path of informed regulation that promotes the notions of transparency and accountability that are grounded in solid technical foundations," he said. "I also stress the importance of public and specialist-level education on AI, to dispel myths and to build a foundation of understanding that will support responsible development."