
Women in AI: Brandie Nonnecke of UC Berkeley says investors should insist on responsible AI practices

To give AI academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching an interview series focused on remarkable women who have contributed to the AI revolution. We’ll be publishing several articles throughout the year as the AI boom continues, highlighting key work that often remains overlooked. Read more profiles here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab, headquartered at UC Berkeley, which supports interdisciplinary research on the role of regulation in promoting innovation. Nonnecke also co-directs the Berkeley Center for Law and Technology, where she leads projects on AI, platforms and society, as well as the UC Berkeley AI Policy Hub, an initiative to train researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts a video and podcast series, TecHype, which analyzes emerging technology policies, regulations and laws, offering insight into their benefits and risks and identifying strategies for putting technology to good use.

Questions and answers

In short, how did you get started in AI? What attracted you to the field?

I have worked in responsible AI governance for almost a decade. My background in technology and public policy, and their intersection with societal impacts, drew me to this field. AI is already ubiquitous and having a profound impact on our lives – for better and for worse. It’s important to me to contribute meaningfully to society’s ability to harness this technology for good rather than sit on the sidelines.

What work are you most proud of (in the field of AI)?

I’m really proud of two things we accomplished. First, the University of California was the first university to establish responsible AI principles and a governance structure to better ensure responsible procurement and use of AI. We take seriously our commitment to serving the public responsibly. I had the honor of co-chairing the UC Presidential Task Force on AI and its subsequent Permanent Council on AI. In these roles, I was able to gain first-hand experience thinking about how to best implement our Responsible AI principles to protect our faculty, staff, students, and the broader communities we serve. Second, I think it is essential that the public understands emerging technologies and their real benefits and risks. We launched TecHype, a video and podcast series that demystifies emerging technologies and provides advice on effective technical and policy interventions.

How can we meet the challenges of a male-dominated technology sector and, by extension, the male-dominated AI sector?

Be curious, persistent and don’t let imposter syndrome discourage you. I found it crucial to seek out mentors who support diversity and inclusion, and to offer the same support to others entering this field. Building inclusive communities in tech has been a powerful way to share experiences, advice, and encouragement.

What advice would you give to women looking to enter the AI field?

For women entering the AI field, my advice is threefold: relentlessly pursue knowledge, as AI is a rapidly evolving field. Embrace networking, because connections will open doors to opportunities and provide invaluable support. And advocate for yourself and others, because your voice is essential to shaping an inclusive and equitable future for AI. Remember, your unique perspectives and experiences enrich the field and drive innovation.

What are the most pressing issues facing AI as it evolves?

I think one of the most pressing issues facing AI as it evolves is not dwelling on the latest hype cycles. We’re seeing this now with generative AI. Of course, generative AI is showing significant progress and will have a huge impact, both good and bad. But other forms of machine learning are now being used to surreptitiously make decisions that directly affect everyone’s ability to exercise their rights. Rather than focusing on the latest wonders of machine learning, it is more important to focus on how and where machine learning is applied, regardless of its technological prowess.

What issues should AI users be aware of?

AI users should be aware of issues surrounding data privacy and security, the potential for bias in AI decision-making, and the importance of transparency in how AI systems function and make decisions. Understanding these issues empowers users to demand more responsible and equitable AI systems.

What is the best way to develop AI responsibly?

Building AI responsibly means integrating ethical considerations at every stage of development and deployment. This includes engaging diverse stakeholders, using transparent methodologies, applying bias-management strategies and conducting ongoing impact assessments. It is fundamental to prioritize the public good and ensure that AI technologies are developed with respect for human rights, equity and inclusiveness.

How can investors better promote responsible AI?

This is such an important question! For a long time, we never explicitly discussed the role of investors. I can’t express enough how impactful investors are! I think the cliché that “regulation stifles innovation” is overused and often false. Instead, I strongly believe that smaller companies can gain a late-mover advantage by learning from the larger AI companies that have developed responsible AI practices, and from guidance produced by academia, civil society and government. Investors have the power to shape the industry’s direction by making responsible AI practices a critical factor in their investment decisions. This includes supporting initiatives that address social challenges through AI, promoting diversity and inclusion within the AI workforce, and advocating for strong governance and technical strategies that help ensure AI technologies benefit society as a whole.
