
How AI is poised to unlock innovations at an unprecedented pace

Artificial intelligence (AI) has rapidly evolved from a future promise to a current reality. Generative AI has become a powerful technology applied in countless contexts and use cases, each with its own potential risks and involving a diverse set of stakeholders. As enterprise adoption of AI accelerates, we find ourselves at a crucial juncture. Proactive policies and smart governance are needed to ensure AI develops as a trustworthy and equitable force. Now is the time to develop a policy framework that unlocks the full beneficial potential of AI while mitigating the risks.


The EU and the pace of AI innovation

The European Union has been a leader in AI policy for years. In April 2021, it presented its AI package, which included a proposal for a regulatory framework on AI (the AI Act).

These first steps have triggered discussions on AI policy amid accelerating innovation and technological change. Just as personal computing democratized access to the internet and to coding, fueling further waves of technology creation, AI is the latest catalyst poised to unlock innovations at an unprecedented pace. But such powerful capabilities come with great responsibility: we must prioritize policies that allow us to harness AI's power while protecting people from harm. To do this effectively, we need to recognize and address the differences between enterprise AI and consumer AI.


Enterprise AI versus consumer AI

Salesforce has been actively researching and developing AI since 2014; we introduced our first AI capabilities into our products in 2016 and established our Office of Ethical and Humane Use of Technology in 2018. Trust is our core value. That’s why our AI offerings are built on trust, security and ethics. As with many technologies, AI has many uses. Many people are already familiar with large language models (LLMs) through consumer apps like ChatGPT. Salesforce is leading the development of AI tools for businesses, and our approach distinguishes between consumer LLMs and what we classify as enterprise AI.

Enterprise AI is designed and trained specifically for professional environments, while consumer AI is open-ended and can be used by anyone. Salesforce is not in the consumer AI space: we build and deploy AI for enterprise customer relationship management (CRM). This means our AI is specialized to help our customers meet their unique business needs. We did this with Gucci using Einstein for Service: working with Gucci’s global customer service center, we helped create a framework that was standardized, flexible and aligned with the brand’s voice, enabling customer advisors to personalize each customer’s experience.

Besides their target audiences, consumer and enterprise AI differ in a few other key areas:

  • Context: Enterprise AI applications often have constrained inputs and outputs because they are designed for specific enterprise use cases. Consumer AI typically performs general tasks that can vary widely depending on how it is used, making it more prone to misuse and harmful effects, such as exacerbating discriminatory outcomes through unvetted data sources and the use of copyrighted materials.
  • Data: Enterprise AI systems rely on curated data, which is typically obtained consensually from enterprise customers and deployed in more controlled environments, limiting the risk of hallucinations and increasing accuracy. By contrast, consumer AI data can come from a wide range of unverified sources.
  • Privacy, security and data accuracy: Business clients often have their own regulatory requirements and may require service providers to ensure rigorous privacy, security and accountability controls to avoid bias, toxicity and hallucinations. Enterprise AI companies have an incentive to offer additional guarantees because their reputation and competitive advantage depend on it. Consumer AI applications are not subject to such strict requirements.
  • Contractual obligations: The relationship between an enterprise AI provider and its customers is governed by contracts or procurement rules that clarify the rights and obligations of each party and how data is processed. Enterprise AI offerings undergo regular review cycles to ensure continued alignment with high customer standards and responsiveness to a changing risk landscape. In contrast, consumer AI companies offer take-it-or-leave-it terms of service that inform users what data will be collected and how it may be used, with no ability for consumers to negotiate tailored protections.

Policy frameworks for ethical innovation

Salesforce serves organizations of all sizes, jurisdictions and industries. We are in a unique position to observe global trends in AI technology and identify developing areas of risk and opportunity.

Humans and technology work better together. To facilitate human oversight of AI technology, transparency is essential. This means that humans must have control and understand the appropriate uses and limitations of an AI system.

Another key element of AI governance frameworks is context. AI models used in high-risk contexts can have a profound impact on an individual’s rights and freedoms, including economic and physical harm, or effects on a person’s dignity, right to privacy and right not to be discriminated against. These “high-risk” use cases should be a priority for policymakers.


This is exactly what the EU AI Act does: it addresses AI risks and ensures the safety of people and businesses. It creates a regulatory framework that defines four levels of risk for AI systems (minimal, limited, high and unacceptable) and allocates obligations accordingly.

Comprehensive data protection laws and strong data governance practices are essential for responsible AI. For example, the EU’s General Data Protection Regulation (GDPR) has shaped data privacy regulation globally, using a risk-based approach similar to that of the EU AI Act. It contains principles that also bear on AI regulation: accountability, fairness, data security and transparency. GDPR sets the standard for data protection laws and will be a determining factor in how personal data is handled by AI systems.

Partnership for the future

Navigating the enterprise AI landscape is a multi-stakeholder endeavor that no single organization can tackle alone. Fortunately, governments such as the US, UK and Japan, and multilateral organizations such as the UN, EU, G7 and OECD, have launched efforts to collaboratively shape regulatory structures that promote both innovation and security. By forging the right cross-sector partnerships and aligning with principled governance frameworks, we can unlock the full transformative potential of AI while putting humans and ethics first.

Learn more about Salesforce’s enterprise AI policy recommendations.

