
Women in AI: Anna Korhonen studies the intersection between linguistics and AI

To give AI academics and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching an interview series focused on remarkable women who have contributed to the AI revolution. We’ll be publishing several articles throughout the year as the AI boom continues, highlighting key work that often remains overlooked. Read more profiles here.

Anna Korhonen is Professor of Natural Language Processing (NLP) at the University of Cambridge. She is also a principal investigator at Churchill College, a member of the Association for Computational Linguistics, and a researcher at the European Laboratory for Learning and Intelligent Systems.

Korhonen was previously a member of the Alan Turing Institute, and she holds a PhD in computer science and a master’s degree in computer science and linguistics. Her research investigates NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a special interest in responsible and “human-centered” NLP that, in her own words, “draws on the understanding of human cognitive, social and creative intelligence.”

Q&A

In short, how did you get started in AI? What attracted you to the field?

I have always been fascinated by the beauty and complexity of human intelligence, particularly as it relates to human language. However, my interest in STEM subjects and their practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all of these interests.

What work are you most proud of in the field of AI?

While the science of building intelligent machines is fascinating and it’s easy to get lost in the world of language modeling, the ultimate reason we build AI is for its practical potential. I am very proud of the work in which my fundamental research in natural language processing has led to tools that can support social and global good: for example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or applications that can support education.

Much of my current research is driven by the mission to develop AI that can improve human life. AI has enormous positive potential for social and global good. Much of my work as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing this potential.

How can we address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I’m fortunate to work in an area of AI where we have a sizable female population and established support networks. I have found these extremely helpful in overcoming professional and personal challenges.

For me, the biggest problem is how the male-dominated industry sets the AI agenda. The current arms race to develop ever larger AI models at all costs is a prime example of this. It has a huge impact on the priorities of both academia and industry, as well as broad socioeconomic and environmental implications. Do we actually need larger models, and what are their overall costs and benefits? I think we would have asked these questions a lot earlier in the game if we had a better gender balance in the field.

What advice would you give to women looking to enter the AI field?

AI desperately needs more women at all levels, but particularly at the leadership level. The current leadership culture is not necessarily attractive to women, but active participation can change that culture – and, ultimately, the culture of AI. Unfortunately, women aren’t always good at supporting each other. I would really like to see a change in attitude in this regard: we need to actively network and help each other if we want to achieve a better gender balance in this field.

What are the most pressing issues facing AI as it evolves?

AI has developed incredibly quickly: it went from an academic field to a global phenomenon in less than a decade. During this time, most of the effort went into scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to be concerned about the safety and reliability of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.

What issues should AI users be aware of?

Current AI, while seemingly very fluent, ultimately lacks knowledge of the human world and the ability to understand the complex social contexts and norms within which we operate. Even the best technology today makes mistakes, and our ability to prevent or predict those errors is limited. AI can be a very useful tool for many tasks, but I wouldn’t trust it to educate my children or make important decisions for me. We humans should remain in charge.

What is the best way to develop AI responsibly?

AI developers tend to think about ethics after the fact, once the technology has already been built. The best time to think about it is before any development begins. Questions like “Do I have a diverse enough team to develop a fair system?”, “Is my data truly free to use and representative of all user populations?” and “Are my techniques robust?” are ones we really should ask from the start.

Although we can address part of this problem through education, we can only enforce it through regulation. Recent developments in national and global AI regulation are important, and they must continue in order to ensure that future technologies will be safer and more reliable.

How can investors better promote responsible AI?

Regulations on AI are emerging, and companies will eventually have to comply with them. We can think of responsible AI as sustainable AI, which is definitely worth investing in.
