
Women in AI: Arati Prabhakar thinks getting AI right is crucial

To give AI academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch has published a series of interviews focused on remarkable women who have contributed to the AI revolution. We publish these articles throughout the year as the AI boom continues, highlighting key work that often remains overlooked. Read more profiles here.

Arati Prabhakar is director of the White House Office of Science and Technology Policy and science advisor to President Joe Biden. Previously, she was director of the National Institute of Standards and Technology (NIST) – the first woman to hold this position – and director of DARPA, the US Defense Advanced Research Projects Agency.

Prabhakar holds a bachelor’s degree in electrical engineering from Texas Tech University and a master’s degree in electrical engineering from the California Institute of Technology. In 1984, she became the first woman to earn a doctorate in applied physics from Caltech.

In short, how did you get started in AI?

I took over as head of DARPA in 2012, and that was when machine learning-based AI was booming. We were doing incredible work with AI across the agency, and it was everywhere, so that was the first clue that something big was coming. I assumed this role at the White House in October 2022, and a month later, ChatGPT came out and captured everyone’s imagination with generative AI. This created a moment that President Biden and Vice President Kamala Harris seized to put AI on the right path, and that’s the work we’ve done over the last year.

What attracted you to the field?

I like big and powerful technologies. They always bring a light side and a dark side, and that’s certainly the case here. The most interesting work I get to do as a technologist is creating, harnessing, and steering these technologies, because ultimately, if we can do that, that’s where progress comes from.

What advice would you give to women looking to enter the AI field?

This is the same advice I would give to anyone interested in participating in AI. There are many ways to help, from soaking up the technology and developing it, to using it for many different applications, to doing the work necessary to ensure we manage the risks and harms of AI. Whatever you do, understand that this is a technology that brings both light and dark sides. Above all, go do something big and useful, because now is the time!

What are the most pressing issues facing AI as it evolves?

What I’m really interested in is: What are the most pressing issues for us as a nation as we advance this technology? A lot of good work has been done to put AI on the right track and manage risks. We still have much to do, but the President’s executive order and the White House Office of Management and Budget’s guidance to agencies on how to use AI responsibly are extremely important steps that put us on the right path.

And now I think the job is twofold. The first is to ensure that AI is implemented responsibly, so that it is safe, effective, and trustworthy. The second is to use it to think big and solve some of our big challenges. It has that potential for everything from healthcare to education, decarbonizing our economy, weather forecasting, and much more. It won’t happen automatically, but I think the journey will be worth it.

What issues should AI users be aware of?

AI is already in our lives. AI serves the ads we see online and decides what happens next in our feed. It’s behind the price you pay for a plane ticket. It may be behind the “yes” or “no” to your mortgage application. So the first thing is to be aware of how much of it is already present in our environment. This can be a good thing because of the creativity and scale it makes possible. But it also carries significant risks, and we all need to be smart users in a world that is empowered, and now increasingly driven, by AI.

What is the best way to develop AI responsibly?

Like any powerful technology, if your ambition is to use it to do something, you need to be responsible for it. This starts by recognizing that the power of these AI systems comes with enormous risks, and different types of risks depending on the application. We know you can use generative AI, for example, to boost creativity. But we also know that this can disrupt our information environment. We know this can create safety and security issues.

There are many applications where AI allows us to be much more efficient and to have a reach and scale that we’ve never had before. But you had better make sure it doesn’t incorporate bias or destroy privacy along the way before you reach that scale. And this has huge implications for work and for workers. If we can do this, it can empower workers to do more and earn more, but that will only happen if we pay attention. And that’s what President Biden has made clear we need to achieve: ensuring that these technologies enable workers, not displace them.
