
NIST Launches New Platform to Evaluate Generative AI

The National Institute of Standards and Technology (NIST), the U.S. Department of Commerce agency that develops and tests technology for the U.S. government, businesses and the public, announced Monday the launch of NIST GenAI, a new program led by NIST to evaluate generative AI technologies, including text- and image-generating AI.

A platform designed to evaluate various forms of generative AI technology, NIST GenAI will publish benchmarks, help create “content authenticity” detection systems (i.e., deepfake checking) and encourage the development of software to spot the source of fake or misleading information, NIST says on its new NIST GenAI website and in a press release.

“The NIST GenAI program will release a series of problems designed to evaluate and measure the capabilities and limitations of generative AI technologies,” the press release states. “These assessments will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”

NIST GenAI’s first project is a pilot study to create systems that can reliably differentiate between human-created and AI-generated media, starting with text. (Although many services claim to detect deepfakes, studies – and our own tests – have shown that they are unreliable, especially when it comes to text.) NIST GenAI invites teams from academia, industry and research labs to submit either “generators” — AI systems for generating content — or “discriminators” — systems that attempt to identify AI-generated content.

The study’s generators will be asked to produce summaries given a topic and a set of documents, while the discriminators will detect whether a given summary was written by an AI or not. To ensure fairness, NIST GenAI will provide the data needed to train generators and discriminators; systems trained on publicly available data, including but not limited to open models like Meta’s Llama 3, will not be accepted.
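To make the generator/discriminator split concrete, here is a minimal sketch of the two kinds of systems the pilot describes. NIST has not published an API or interface for submissions, so the function names and the toy logic below are purely illustrative assumptions, not the program's actual specification.

```python
# Hypothetical sketch of the pilot's two roles (names and logic are assumptions,
# not NIST's actual submission interface).
# A "generator" maps a topic plus source documents to a summary;
# a "discriminator" maps a summary to a score estimating how likely it is AI-generated.
from typing import List


def generate_summary(topic: str, documents: List[str]) -> str:
    """Toy generator: stitches together the first sentence of each document.
    A real submission would use a trained summarization model."""
    leads = [doc.split(".")[0].strip() for doc in documents if doc.strip()]
    return f"{topic}: " + ". ".join(leads) + "."


def discriminate(summary: str) -> float:
    """Toy discriminator: returns a score in [0, 1], where higher means
    'more likely AI-generated'. A real submission would use a trained
    classifier; this placeholder keys off summary length as a stand-in."""
    return min(len(summary) / 1000.0, 1.0)


if __name__ == "__main__":
    docs = [
        "NIST announced a new evaluation program on Monday. It covers text and images.",
        "The pilot study focuses on distinguishing human from AI-written text.",
    ]
    summary = generate_summary("NIST GenAI pilot", docs)
    print(summary)
    print(f"AI-likelihood score: {discriminate(summary):.2f}")
```

In the actual study, NIST would supply the training data and score discriminators on how reliably they separate human-written summaries from generated ones.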

Registration for the pilot will begin on May 1, and results are expected to be released in February 2025.

The launch of NIST GenAI – and a study focused on deepfakes – comes as deepfakes are growing exponentially.

According to data from Clarity, a deepfake detection company, 900% more deepfakes have been created this year compared to the same period last year. Understandably, that is causing concern: a recent YouGov survey found that 85% of Americans were worried about the spread of misleading deepfakes online.

The launch of NIST GenAI is part of NIST’s response to President Joe Biden’s executive order on AI, which laid out rules requiring greater transparency from AI companies about how their models work and set a series of new standards, notably for labeling AI-generated content.

This is also NIST’s first AI-related announcement following the appointment of former OpenAI researcher Paul Christiano to the agency’s AI Safety Institute.

Christiano was a controversial choice due to his “catastrophic” views; he once predicted that “there is a 50% chance that the development of AI will result in [the destruction of humanity].” Critics – including, it seems, scientists within NIST – worry that Christiano will encourage the AI Safety Institute to focus on “fantasy scenarios” rather than realistic, more immediate AI risks.

NIST says NIST GenAI will inform the work of the AI Safety Institute.

Source: TechCrunch
