Facebook announces the Ego4D first-person video dataset for AI training


Mark Zuckerberg, CEO and Founder of Facebook Inc., speaks during a House Energy and Commerce Committee hearing in Washington, DC, the United States, Wednesday, April 11, 2018.

Andrew Harrer | Bloomberg | Getty Images

Facebook announced a research project on Thursday in which it collected 2,200 hours of first-person video footage from around the world to train next-generation artificial intelligence models.

The project is called Ego4D, and it could prove crucial for Facebook’s Reality Labs division, which is working on many projects that could benefit from AI models trained on video footage shot from a human’s point of view. This includes smart glasses, like the Ray-Ban Stories released by Facebook last month, and virtual reality, in which Facebook has invested heavily since its $2 billion acquisition of Oculus in 2014.

The footage could teach artificial intelligence to understand or identify things in the real world, or in a virtual world, as seen from a first-person perspective through a pair of glasses or an Oculus headset.

Facebook announced that it will make the Ego4D dataset available to researchers in November.

“This release, which is an open dataset and a research challenge, will catalyze progress for us internally but also, in large part, externally in the academic community and [allow] other researchers to tackle these new problems, but now be able to do so in a more meaningful and scalable way,” Facebook lead research scientist Kristen Grauman told CNBC.

The dataset could be used to train AI models that help technologies such as robots understand the world faster, Grauman said.

“Traditionally, a robot learns by doing things in the world or by literally being held by the hand and shown how to do things,” Grauman said. “There’s an opening for them to learn from video, just from our own experience.”

Facebook and a consortium of 13 partner universities relied on more than 700 participants in nine countries to capture the first-person footage. Facebook says Ego4D has more than 20 times the hours of footage of any other such dataset.

Facebook’s academic partners included Carnegie Mellon in the United States, the University of Bristol in the United Kingdom, the National University of Singapore, the University of Tokyo in Japan and the International Institute of Information Technology in India, among others.

The images were captured in the US, UK, Italy, India, Japan, Singapore, and Saudi Arabia. Facebook said it hopes to expand the project to other countries, including Colombia and Rwanda.

“An important design decision for this project is that we wanted partners who are first and foremost leading experts in the field, interested in and motivated to pursue these issues, but also geographically diverse,” Grauman said.

Ray-Ban Stories Facebook Glasses

Sal Rodriguez | CNBC

Ego4D’s announcement comes at an interesting time for Facebook.

The company has continued to intensify its hardware efforts. Last month, it released the $299 Ray-Ban Stories, its first smart glasses. And in July, Facebook announced the formation of a product team to work specifically on the “metaverse,” a concept of creating digital worlds that multiple people can inhabit at the same time.

Over the past month, however, Facebook has been hit by a deluge of news articles stemming from a trove of internal company research disclosed by Frances Haugen, a former Facebook product manager turned whistleblower. Among the research published were slides showing that Instagram is harmful to teens’ mental health.

Images were captured using standard devices such as GoPro cameras and Vuzix smart glasses.

For privacy reasons, Facebook said participants were instructed to avoid capturing any personally identifying characteristics when collecting footage indoors. This includes people’s faces, conversations, tattoos, and jewelry. Facebook said it removed personally identifiable information from videos and scrambled the faces of passers-by and vehicle license plate numbers. Audio was also removed from most videos, the company said.

“For the university partners who did this video collection, step No. 1 for all of them was a pretty intensive and important process of creating an appropriate collection policy,” Grauman said.

