Companies are reconsidering their desire to move fast and break things due to the hallucination problem of generative A.I.

The Inherent Risks of Generative AI: Fact or Fiction?

In a recent article by the Associated Press, artificial intelligence (AI) researchers and industry leaders discussed a significant problem plaguing generative AI – its tendency to “hallucinate” information and present it as fact. The issue stems from how tools like OpenAI’s ChatGPT work: they rely on statistical patterns to predict the next word in a sequence rather than on any understanding of what is actually true. As a result, ChatGPT can generate false or misleading information, raising concerns about its reliability and potential consequences.
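
To make that mechanism concrete, here is a toy next-word predictor in plain Python. It is only a sketch of the underlying principle (choosing the statistically most likely continuation seen in training text), not how ChatGPT is actually implemented; the tiny corpus and function names are invented for illustration.

```python
# Toy illustration of next-word prediction from patterns alone: the model
# returns whatever word most often followed a given two-word context in its
# training text, with no notion of whether the result is true.
from collections import Counter, defaultdict

corpus = (
    "the capital of australia is canberra . "
    "the capital of australia is sydney . "   # a false sentence in the training data
    "the capital of australia is canberra . "
).split()

# Count which word follows each two-word context (a simple trigram model).
follows: defaultdict[tuple[str, str], Counter] = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def predict_next(a: str, b: str) -> str:
    """Return the word most frequently seen after the context (a, b)."""
    return follows[(a, b)].most_common(1)[0][0]

print(predict_next("australia", "is"))  # -> "canberra", only because it was most frequent
```

If the false sentence appeared more often than the true one, the same code would confidently return the wrong answer, which is the hallucination problem in miniature.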

Renowned linguistics professor Emily Bender, director of the University of Washington’s Computational Linguistics Laboratory, expressed doubts about the fixability of this inherent problem: “It’s inherent in the mismatch between the technology and the proposed use cases.” Even OpenAI CEO Sam Altman joked about his lack of trust in ChatGPT’s answers, stating, “I probably trust the answers that come out of ChatGPT the least of anybody on Earth.” Altman believes the hallucination problem will eventually improve but raises critical questions about the timeline, the impact of errors in the meantime, and the disruptions AI may cause before we can fully address them.

Generative AI has already fabricated defamatory information, posing a threat to people’s reputations. For example, Meta’s BlenderBot 3 falsely labeled a former member of the European Parliament as a terrorist. Hallucinations, however, are just one facet of the concerns surrounding AI. Other issues include privacy, copyright, biased datasets, and AI’s potential to perpetuate discrimination. Additionally, AI’s transformative influence on work systems and societal structures is an area we are only beginning to grasp.

The release of OpenAI’s ChatGPT acted as a catalyst, pushing companies like Google to develop their own competing models, such as Bard, and prompting businesses to overhaul their product roadmaps to embrace generative AI. The trend is not just about staying competitive but also grappling with the responsibility of ensuring the safety and ethical use of these tools. A recent survey of 400 global senior AI professionals revealed their mixed sentiments. While they recognized the return on investment AI initiatives could bring, they also expressed more concerns than excitement about AI’s future implications.

We have witnessed the consequences of prematurely unleashed technology before, as seen in Facebook’s role in genocides and disinformation campaigns. This AI moment feels like a pivotal inflection point, where we decide whether to repeat past mistakes or approach AI with greater consideration. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, highlighted the internal challenges companies face, particularly the tension between business ambitions and privacy and legal concerns. He emphasized the need to anticipate the evolving landscape of AI’s problems and solutions.

The landscape of AI is changing rapidly, with companies like Meta and Alibaba open-sourcing their AI models and accelerating adoption of the technology. Companies like Zoom, however, have faced backlash over terms-of-service changes that raised privacy concerns. Users’ reactions demonstrate the significance of consent and clear terms of service, underscoring growing awareness and caution regarding AI’s impact.

Unlike in the early days of social media, policymakers are taking swift action to consider regulations for AI. Rights holders, including comedian Sarah Silverman and Getty Images, have taken legal action against companies like Meta, OpenAI, and Stability AI for alleged copyright violations. The Federal Trade Commission is also investigating OpenAI for potential violations of consumer protection laws. It is encouraging to witness broader discussions of ethics and societal impacts. However, it remains to be seen whether we can avoid repeating history or will merely find new ways to rhyme with it.

The possibilities of AI are not limited to narrowly defined segments. Engadget reports that IBM, Hugging Face, and NASA are collaborating to develop an open-source geospatial model for climate and earth science AI applications. This model, based on NASA’s Earth-satellite data and IBM’s Watsonx.ai, holds promise for monitoring deforestation, greenhouse gas emissions, and crop yields. Alibaba has also joined the open-source movement, sharing two of its AI models – Qwen-7B and Qwen-7B-Chat. These models demonstrate the ongoing efforts to accelerate AI adoption and development.

Furthermore, Zoom’s recent backpedaling due to privacy concerns highlights the critical role of clear terms of service in the AI age. OpenAI’s release of new features for ChatGPT, such as prompt examples, suggested replies, and the ability to analyze data across multiple files, showcases the continuous refinement of AI models. Google is also experimenting with a generative AI-powered search, incorporating images and videos into search results, emphasizing the evolution of AI-driven innovations.

While AI presents immense potential, it’s crucial to acknowledge the dark side. Hackers can now mount acoustic attacks, leveraging AI to “hear” keystrokes and gain unauthorized access to sensitive personal or business information. These attacks, which analyze keystrokes recorded through microphones, pose a significant threat to online security. The practicality of these off-the-shelf attacks serves as a reminder of the need for better defenses against emerging threats.

As AI becomes an ever-present force in creative industries, its implications for writers and musicians are being hotly debated. Rapper Lil Wayne, in an interview with Billboard, dismissed AI’s ability to replicate his creativity. While generative AI may flood the internet with massive amounts of content, it does not guarantee quality. Creatives are prepared to defend their unique abilities.

In this critical moment, we must strike a balance by recognizing AI’s potential while addressing the inherent risks. Open discussions, ethical decision-making, and responsible governance will be critical in ensuring AI’s positive impact on our world.


Sage Lazzaro

Website: sagelazzaro.com

A.I. IN THE NEWS

IBM, Hugging Face, and NASA collaborate on new foundation model for climate and earth science A.I.s. Engadget reports that this open-source geospatial model, built upon NASA’s Earth-satellite data and IBM’s Watsonx.ai, will enable the monitoring and prediction of deforestation, greenhouse gas emissions, and crop yields. Hugging Face is hosting this model on its open-source A.I. platform.

Alibaba open-sources two A.I. models. CNBC reports that Alibaba has open-sourced Qwen-7B, a generative A.I. model that creates content in both English and Chinese, along with Qwen-7B-Chat, a version designed specifically for conversational apps. The move aligns with recent industry trends, including Meta’s release of Llama 2 in partnership with Microsoft, as companies strive to accelerate adoption of their models.

Zoom faces backlash over A.I.-focused changes to terms of service. After modifying its terms of service to allow training of A.I. models on user data, Zoom received significant criticism over privacy concerns. Users demanded the option to opt out, and some pledged to cancel their subscriptions. Responding to the backlash, Zoom clarified its stance in a blog post, assuring users that their content would not be used to train A.I. models without explicit consent.

OpenAI introduces new features for ChatGPT. OpenAI has made several updates to ChatGPT, including prompt examples, suggested replies, and keyboard shortcuts. Users will also enjoy extended login sessions, eliminating automatic sign-outs. Plus subscribers will now have GPT-4 as the default model when opening a new chat. Additionally, ChatGPT can now analyze data across multiple files, letting users draw insights from several documents at once.

Google adds images and videos to generative A.I.-powered search. The Verge reveals that Google’s Search Generative Experience (SGE) feature will soon incorporate multimedia content into the summary box at the top of the search results. This experiment aims to generate accurate and helpful answers, moving beyond surface-level links. Google’s exploration of generative A.I. search represents its vision for a transformed business model, powered by advanced A.I. technologies.

EYE ON A.I. RESEARCH

Hackers utilize A.I. to “hear” keystrokes. A group of British university researchers published a paper detailing how machine learning can facilitate “acoustic attacks.” By recording keyboard keystrokes via microphones or even Zoom calls, malicious actors can deploy deep learning models to classify the sound waves and spectrograms, gaining access to passwords and other sensitive information with alarming accuracy.
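
For readers curious what such a pipeline looks like, below is a simplified sketch in Python. The published research uses mel spectrograms and a deep learning classifier; here an ordinary spectrogram plus an off-the-shelf scikit-learn model stands in for that, and the function names, sample rate, and fixed clip length are assumptions made for illustration.

```python
# Simplified stand-in for an acoustic keystroke classifier: each short audio
# clip of a single keystroke is turned into a log-spectrogram, and a standard
# classifier learns to map those features back to the key that was pressed.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 44_100  # assumed microphone sample rate (Hz)

def keystroke_features(clip: np.ndarray) -> np.ndarray:
    """Flattened log-spectrogram of one keystroke (all clips must share a fixed length)."""
    _, _, sxx = spectrogram(clip, fs=SAMPLE_RATE, nperseg=256)
    return np.log1p(sxx).ravel()

def train_classifier(clips: list[np.ndarray], keys: list[str]) -> RandomForestClassifier:
    """Fit a classifier mapping keystroke sounds to the keys that produced them."""
    features = np.stack([keystroke_features(c) for c in clips])
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(features, keys)
    return model

def recover_text(model: RandomForestClassifier, clips: list[np.ndarray]) -> str:
    """Predict the typed characters from a sequence of isolated keystroke recordings."""
    features = np.stack([keystroke_features(c) for c in clips])
    return "".join(model.predict(features))
```

The unsettling part is how little specialized hardware is needed: a laptop microphone or a recorded video call is enough to collect the labeled clips used for training.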

ANBLE ON A.I.

  • Tim Cook defends Apple’s AI efforts against Microsoft, Google, and Elon Musk.
  • Andy Jassy outlines Amazon’s extensive generative A.I. initiatives across multiple business units.
  • Canva CEO Melanie Perkins highlights a major problem with current A.I. capabilities.
  • Kenya becomes the first country to suspend Sam Altman’s Worldcoin A.I.-crypto scheme.
  • HR leaders anticipate positive impacts from A.I., while CEOs remain skeptical.

BRAINFOOD

Lil Wayne challenges A.I. in the creative realm. In an interview with Billboard, rapper Lil Wayne dismisses A.I.’s ability to replicate his unique creative qualities. The ongoing debates surrounding A.I. in creative industries emphasize the importance of preserving creatives’ individuality and expertise. While generative A.I. can produce vast amounts of content, its quality remains a point of contention.

In this crucial moment, humanity must navigate the potential of A.I. while mitigating its inherent risks. Discussions focused on ethics, responsible governance, and awareness of emerging threats will profoundly shape the positive impact of A.I. on our world.