Harvard professors express concern over A.I. when using Alexa, especially regarding Amazon's monopoly status.
Trust and Skepticism: Navigating the World of AI
When Alexa responds in a way that puts its developer’s interests ahead of yours, it becomes obvious that AI systems may not always have your best interests at heart. This raises the question of who truly benefits when AI systems speak. With the rise of newer generations of AI models, it is becoming increasingly difficult to discern the underlying motivations behind these systems.
In the realm of internet services, we are no strangers to companies manipulating what we see for their gain. Paid entries on Google search results and curated feeds on platforms like Facebook and TikTok are just a few examples. However, what sets AI systems apart is their interactivity and the potential for these interactions to mirror human relationships.
It is not a stretch to imagine a future where AI systems become personalized digital assistants, planning our trips, negotiating on our behalf, and even acting as therapists and life coaches. These AI assistants will be with us 24/7, intimately aware of our needs, and able to anticipate them by tapping into the vast network of services and resources on the web.
As security experts and data scientists, we recognize that as people come to rely on these AI assistants, implicit trust in them will become essential for navigating daily life. However, that trust must come with the assurance that the AI is not secretly working for someone else's benefit.
In today’s digital landscape, devices and services that appear to work for us may actually work against us. Smart TVs spy on us, phone apps collect and sell our data, and websites manipulate us through deceptive design elements known as dark patterns. This pervasive issue is known as surveillance capitalism, and AI technology is poised to play a significant role in it.
The potential risks associated with AI become even more pronounced when it comes to AI digital assistants. For an AI to be truly useful, it needs to have an in-depth understanding of its user. It must know us better than our phones, better than Google’s search engine, and quite possibly, better than our closest friends, partners, and therapists. But can we truly trust today’s leading generative AI tools to know us so intimately?
One of the key concerns is that we lack knowledge about how these AI systems are configured. We don’t know their training data, the information they have been provided, or the instructions they have been programmed to follow. Even benign-seeming AI tools like Microsoft Bing’s chatbot have secret rules that can change at any time.
Moreover, many of these AI systems are created and trained at significant expense by tech monopolies. While they may be initially offered for free or at low cost, these companies will eventually need to monetize them. Like the rest of the internet, this monetization is likely to involve surveillance and manipulation.
Imagine asking your AI chatbot to plan your vacation. Did it recommend a particular airline, hotel chain, or restaurant because it truly believed they were the best for you, or because its maker received a kickback from those businesses? As with paid results in search engines and sponsored ads on social media, these paid influences are likely to become increasingly subtle and covert over time.
When seeking political information from AI, we must question whether the results are skewed by the politics of the corporation that owns the AI or the candidate who paid the most money. Furthermore, the views of the demographic used to train the AI model may also influence the information provided. In this era of complex information ecosystems, our AI agents could potentially be double agents, and we have no way of knowing.
To address these concerns, it is imperative that AI systems become more trustworthy. The European Union’s proposed AI Act is a step in the right direction, with requirements for transparency, bias mitigation, disclosure of risks, and standardized testing. However, many existing AI systems do not meet these standards, and the United States lags behind in AI regulation.
Until robust consumer protections for AI products are implemented, it is up to individuals to approach AI tools with skepticism and question their potential risks and biases. When using an AI tool for travel recommendations or political information, it is essential to apply the same critical eye we would to a billboard ad or a campaign volunteer. Despite their technological wizardry, AI tools may not be as impartial or objective as they seem.
In a world where AI becomes increasingly intertwined with our lives, navigating the landscape of trust and skepticism becomes vital. By approaching AI systems skeptically and advocating for regulations that prioritize transparency and accountability, we can foster an environment where AI truly serves our interests and enhances our lives.