Meta’s White House A.I. pledge may be less significant than it seems.

The Promises and Challenges of A.I. Model Security

The recent announcement of commitments by seven major technology companies, including Amazon, Google, Meta, and Microsoft, to ensure the responsible development and use of artificial intelligence (A.I.) models has generated quite a buzz. On closer examination, however, these commitments may not be as groundbreaking as they initially appear.

First, it is important to note that these commitments apply only to models that surpass the capabilities of existing state-of-the-art systems, such as Anthropic’s Claude 2, OpenAI’s GPT-4, and Google’s PaLM 2. In other words, A.I. models that are already available could still be used to create malware or spread misinformation without any of the new safeguards in place.

Furthermore, many of the commitments are not new initiatives but actions these companies are already undertaking. The pledge to publish information about an A.I. system’s capabilities, limitations, and appropriate uses, for example, is something these firms already do to some extent. The voluntary nature of the commitments also raises concerns about accountability and enforcement.

Even when specific actions are mentioned in the commitments, there are still unanswered questions about how they will be implemented. For instance, the commitment to conduct extensive security testing of A.I. software before release states that testing will be done in part by independent experts. However, there is no clarification about who these experts will be, how their independence and expertise will be determined, and what specific testing they will perform.

One notable aspect of the commitments is the focus on protecting “proprietary and unreleased” model weights, the numerical coefficients that determine the outputs of a neural network. This poses a dilemma for Meta, a leading advocate of open-source A.I., which has previously published powerful language models along with their full weights. If the weights of a future model covered by the pledge were released or leaked in the same way, the commitment would be violated. Meta argues that open sourcing A.I. models spreads their benefits widely and allows vulnerabilities to be identified continuously, but the question of how to secure openly available weights remains unresolved.
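To make the term concrete, here is a minimal, purely illustrative sketch (a toy network in Python, not anything resembling Meta’s actual models) showing that “weights” are simply arrays of numbers: anyone who obtains the arrays can reproduce the model’s behavior exactly, which is why a release or leak is effectively irreversible.

```python
# Illustrative only: a toy two-layer network whose "weights" are plain
# numeric arrays. Frontier models work the same way at vastly larger scale,
# which is why weights, once published or leaked, cannot be recalled.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hypothetical layer-1 weights
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hypothetical layer-2 weights

def forward(x):
    """The model's outputs are fully determined by W1, b1, W2, b2."""
    hidden = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
    return hidden @ W2 + b2

x = rng.normal(size=(1, 4))
original_output = forward(x)

# "Releasing the weights" amounts to publishing these arrays.
np.savez("weights.npz", W1=W1, b1=b1, W2=W2, b2=b2)
w = np.load("weights.npz")
restored_output = np.maximum(x @ w["W1"] + w["b1"], 0.0) @ w["W2"] + w["b2"]

# Identical behavior can be reconstructed from the published numbers alone.
assert np.allclose(original_output, restored_output)
```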

The White House, along with other regulators, has yet to fully come to terms with the inherent challenges of open-source A.I. models. The inability to guarantee their safety poses a significant concern that needs to be addressed in future discussions and policies.

In the realm of generative A.I., Google’s parent company Alphabet faces its own challenges. Generative A.I. chatbots and search tools pose a potential threat to Google’s main revenue source—Search. While Google has demonstrated its A.I. capabilities, it has yet to prove that it can build a business model around the technology that matches the success of its current search engine. This issue is explored in further detail in a cover story in Fortune’s August/September magazine.

Exciting Developments and Concerns in A.I.

The A.I. landscape continues to evolve, presenting new opportunities and raising concerns. Here are some noteworthy developments:

  • OpenAI’s GPT-4 is facing delays in rolling out its image analysis capabilities due to concerns about facial recognition. Laws in certain countries and states prohibit the processing of biometric information without explicit consent.

  • Apple is reportedly working on its own language model and chatbot similar to what OpenAI and Google have developed. However, Apple has yet to devise a clear strategy for the consumer release of these language models.

  • The New York City subway system is using A.I. for video surveillance to identify fare dodgers. This system, which is set to expand to more stations, raises concerns about privacy and targeting marginalized groups.

  • Google is allegedly testing an A.I. tool called “Genesis” that can generate news articles. While the software aims to assist journalists by automating certain tasks, its pitch has received mixed reviews, with some executives expressing skepticism about its ability to replicate the accuracy and artistry of human news stories.

  • OpenAI’s head of trust and safety, Dave Willner, has stepped down amidst various safety issues the company is currently facing. OpenAI has been under scrutiny, facing a Federal Trade Commission probe and multiple lawsuits.

Advancing Common Sense Reasoning in A.I.

Improving the common sense reasoning abilities of A.I. models is a topic of current research. Chinese researchers have introduced a new multi-modal dataset called COCO-MMRD, challenging A.I. models to generate open-ended answers using both images and text. Innovative techniques, such as cross-modal attention and sentence-level analysis, have been employed to achieve better results. This research aims to bridge the gap between A.I. and human-level reasoning.
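As a rough illustration of what cross-modal attention means in this setting, here is a generic sketch in Python using PyTorch, with shapes and layer sizes of my own choosing rather than the architecture from the COCO-MMRD paper: text-token embeddings attend over image-patch embeddings so that each word’s representation is informed by the relevant regions of an image.

```python
# Generic cross-modal attention sketch (illustrative, not the paper's model):
# text tokens act as queries, image patches as keys/values, so each token's
# updated representation incorporates information from relevant image regions.
import torch
import torch.nn as nn

embed_dim, num_heads = 256, 8
cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

batch, n_tokens, n_patches = 2, 12, 49                     # hypothetical sizes
text_tokens = torch.randn(batch, n_tokens, embed_dim)      # e.g. question embeddings
image_patches = torch.randn(batch, n_patches, embed_dim)   # e.g. vision-encoder outputs

# Each text token (query) attends over all image patches (keys/values).
fused, attn_weights = cross_attn(query=text_tokens, key=image_patches, value=image_patches)

print(fused.shape)         # (2, 12, 256): image-aware token representations
print(attn_weights.shape)  # (2, 12, 49): how much each token attends to each patch
```

In a full system, the fused representations would typically feed a language decoder that generates the open-ended answer, while the attention weights offer a rough view of which image regions the model relied on.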

Lessons from History: A.I. and the Oppenheimer Moment

Drawing parallels to the historical development of nuclear technology, director Christopher Nolan sees similarities between the researchers building today’s most powerful A.I. systems and J. Robert Oppenheimer, the father of the atomic bomb. Both faced the daunting task of creating a groundbreaking yet potentially destructive technology.

Oppenheimer’s realization of the bomb’s devastating potential and subsequent guilt for the deaths caused by the bombings of Hiroshima and Nagasaki echo concerns expressed by deep learning pioneers, such as Geoff Hinton and Yoshua Bengio, who fear the unintended consequences of their work on artificial superintelligence.

While the nuclear age and the A.I. era share similarities, there are also significant differences. Unlike nuclear weapons, A.I. is software-based, making it easier to distribute and harder to regulate. Additionally, the scientific and ethical uncertainties surrounding the development of artificial superintelligence make it a complex challenge that requires careful consideration and governance.

Although an A.I. Oppenheimer moment may be on the horizon, it is crucial to recognize that we are not yet at the precipice. Time remains to establish regulations and governance systems that can guide the responsible development and deployment of A.I. technologies. Swift action is essential to prevent any potential harm from current A.I. systems while also addressing the unique risks posed by artificial superintelligence.

Conclusion

As the A.I. landscape continues to evolve, it is imperative to strike a balance between innovation and responsibility. The recent commitments made by major technology companies are a step in the right direction, but they highlight the ongoing challenges that need to be addressed. Ensuring the safety and ethical use of A.I. models, while also fostering innovation and accessibility, remains a complex task that requires continuous collaboration, transparency, and regulatory oversight.


Jeremy Kahn is a technology journalist who covers the latest developments in the A.I. sector. You can reach him at [email protected].

Additional articles from Fortune’s A.I. series:

  • A.I. is ‘Amazon Web Services for human effort’ because it will ‘democratize’ large workforces and allow startups to scale faster, investment firm CEO says—by Paolo Confino

  • A.I. might have what it takes to replace the C-suite. But experts say the top jobs are safe for cultural reasons—by Geoff Colvin

  • Marc Andreessen says his A.I. policy conversations in D.C. ‘go very differently’ once China is brought up—by Steve Mollman

  • Banks have used A.I. for decades—but now it’s going to take off like never before—by Ben Weiss

  • Elon Musk says Tesla will spend $1 billion to build a ‘Dojo’ A.I. supercomputer—but it wouldn’t be necessary if Nvidia could just supply more chips—by Christiaan Hetzner