Uber crash highlights ongoing A.I. workplace concerns

Who Bears Responsibility When A.I. Causes Harm?

As most people logged off Friday evening to enjoy another sweltering summer weekend, a landmark legal case about who bears responsibility when A.I. is directly involved in physical real-world harm finally came to a close after five years.

Rafaela Vasquez, the operator behind the wheel of a self-driving Uber test car that struck and killed a pedestrian in Tempe, Ariz., in 2018, pled guilty to one count of endangerment. Maricopa County Superior Court Judge David Garbarino accepted the plea deal and sentenced her to three years of supervised probation, closing out the case for good. Vasquez was originally charged with negligent homicide, a felony that carries a sentence of up to eight years in prison.

The First Fatal Collision Involving Autonomous Vehicles

The 2018 crash, in which a woman named Elaine Herzberg was killed as she walked her bicycle across the street, was the first pedestrian fatality involving a self-driving vehicle. The case gripped observers as Uber and Vasquez each sought to deflect blame for a situation that not only lacked precedent but raised several questions about responsibility in a world where human workers are increasingly monitoring A.I. machines, taking direction from algorithms, and sitting on the front lines of imperfect A.I. systems built by corporate engineers.

When the crash initially happened, Vasquez thought Uber would stand behind her, according to an in-depth interview with Wired published last year. She was genuinely excited about the burgeoning industry and saw herself as a proud steward of the company, doing her job to monitor the company’s self-driving vehicles as they racked up practice miles. Arizona, which was loosening restrictions to bring in more business from Silicon Valley companies, had recently become a haven for Uber’s on-road testing program after California revoked the Uber cars’ registrations. The medical examiner officially labeled Herzberg’s death an accident, and Uber initially provided Vasquez with an attorney, but interactions with her supervisor quickly went from “consoling to unnerving,” according to Wired.

The tide really turned for Vasquez when the investigation revealed that her personal phone was streaming the TV show The Voice at the time of the crash. Dashcam footage also showed that she was looking down in the moments before the collision, and the police analysis later determined that Vasquez could have taken over the car in time, deeming the incident “entirely avoidable.”

The Blame Game

While the case didn’t go to trial, Vasquez’s defense was stacked with arguments that pinned culpability on her employer. In legal filings, Vasquez claimed that she wasn’t watching but only listening to The Voice, which was permitted under Uber’s guidelines. And when she was looking down, it was to check Slack messages on her work device, which she said needed to be monitored in real time. That monitoring had historically been handled by a second operator, but Uber had recently dropped the requirement to have two test operators in every vehicle and now had backup drivers like Vasquez working alone. The change made for long, lonely shifts circling the same roads, usually without incident or any need to intervene.

In another key part of her pre-trial defense, Vasquez’s attorneys cited findings from the National Transportation Safety Board, which determined that the car failed to identify Herzberg as a pedestrian and therefore failed to brake. The board also found that Uber had “an inadequate safety culture” and failed to prevent “automation complacency” among its test operators, a well-documented phenomenon in which workers tasked with monitoring automated systems come to trust that the machines have things under control and stop paying attention. Additionally, a former operations manager at the company submitted a whistleblower complaint about a pattern of poor safety practices in the self-driving car division just days before the crash.

“This story highlights once more that accidents involving A.I. are often ‘problems of many hands’ where different agents have a share of responsibility,” said Filippo Santoni de Sio, a professor of tech ethics and philosophy at Delft University of Technology who specializes in the moral and legal responsibility of A.I. and robotics. He’s previously written about this case. “While Uber or the regulators have come out clear from the legal investigations,” he added, “they clearly have a big share of moral responsibility for the death of Elaine Herzberg.”

A.I. and the Moral Responsibility Debate

As companies across industries integrate A.I. at a breakneck pace, there’s a pressing need to interrogate the moral, ethical, and business questions that arise when human workers increasingly work with, and at the behest of, A.I. systems they had no role in creating.

Just last week, Pennsylvania Democratic Senator Bob Casey argued that A.I. will be the next frontier in the fight for workers’ rights and introduced two bills to regulate the technology in the workplace. One, called the “No Robot Bosses Act,” would forbid companies from relying on automated and algorithmic systems to make decisions that affect employment, while the other targets A.I. workplace surveillance. Neither of these bills relates directly to situations like Vasquez’s (though they would seemingly affect Uber’s algorithm-directed rideshare drivers and corporate employees), but they’re just a small taste of what Congress, the EU, and other governments around the world are considering in terms of A.I. regulation for the workplace and beyond. Workers’ rights in the age of A.I. are even taking center stage in the current Hollywood strike, where actors are fighting a proposed contract provision that would allow studios to pay them for one day of work and then use A.I. to replicate their likenesses in perpetuity.

“The legal dispute over the liability is over,” Santoni de Sio said of Vasquez’s case, “but the ethical and political debate has just begun.”

With that, here’s the rest of this week’s A.I. news.

A.I. IN THE NEWS

Google launches RT-2 robotics model trained on its A.I. language models

That’s according to the New York Times, which got a preview of a one-armed robot powered by the RT-2 platform. In a demonstration, the robot successfully completed a variety of tasks that required reasoning and improvisation, such as correctly selecting the dinosaur when instructed to “pick up the extinct animal” from a lineup of animal figurines. Historically, engineers trained robots to perform mechanical tasks by programming them with an explicit list of instructions, which meant robots could only learn tasks slowly and one at a time, limiting their usefulness. But RT-2, trained on text and images from the internet, harnesses the latest developments in large language models to enable robots to learn new tasks on their own.

Biden seeks to limit A.I. investments in China

That’s according to Bloomberg, which reported that President Joe Biden is planning to sign an executive order limiting critical U.S. technology investments in China by mid-August. The order will focus on artificial intelligence, semiconductors, and quantum computing and is expected to prohibit certain transactions while leaving existing deals untouched.

Incoming restrictions send the price of Nvidia A.I. GPUs skyrocketing in China

With the U.S. cracking down on tech exports to China, Nvidia’s A.I. GPUs are selling there for as much as $70,000 per unit, more than twice what they sell for in the U.S., according to Tom’s Hardware. And that’s if it’s possible to get one at all. Since most A.I. clusters are built on Nvidia GPUs, demand is high from companies that need the units to support their systems as they grow.

Law school announces students can use ChatGPT and other generative A.I. on applications

The Sandra Day O’Connor College of Law at Arizona State University gave applicants the go-ahead to use generative A.I. to write their admissions materials, according to ANBLE. Dean Stacy Leeds said it’s just “one more of the tools that is in their toolbox” and that many applicants already pay for help from professional consultants, whereas generative A.I. is widely accessible. Applicants using tools like ChatGPT must certify that they used A.I. and that the information is true, just as they would if they had help from a professional consultant. The decision is the opposite of another recent one out of Michigan Law School, which explicitly banned prospective students from using the tech for admissions.

EYE ON A.I. RESEARCH

What Comes After Transformers?

The current tsunami of A.I. advancement is owed to one innovation in particular: the Transformer. First described in a 2017 research paper from Google, this type of neural network model has almost completely unseated its predecessor techniques and now underpins the majority of reigning machine learning models, from BERT to OpenAI’s various GPT models, where the “T” stands for Transformer. Now Stanford researchers are looking for alternatives to the notoriously compute-heavy (and expensive) Transformer approach that could deliver the same performance with more efficiency, calling the new line of research “Monarch Mixer.” In one trial that involved retraining BERT with their techniques, which essentially involve replacing the major elements of a Transformer with Monarch matrices, the researchers say they were able to get some “pretty decent results, even with fewer parameters.” This doesn’t mean the Transformer is on its way out just yet, but it’s an interesting beginning to what could be the next phase of models. You can read the research blog here.
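To make the idea a bit more concrete, here is a minimal, illustrative sketch of the kind of structured matrix the Monarch line of work describes: a dense weight replaced by two block-diagonal factors separated by a fixed permutation. This is not the researchers’ code; the function names, shapes, and permutation choice below are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative sketch only: a Monarch-style structured multiply that swaps a
# dense n x n weight for two block-diagonal factors separated by a fixed
# permutation. Names, shapes, and the permutation are assumptions for
# illustration, not the Monarch Mixer authors' code.

def block_diag_matmul(x, blocks):
    """Multiply x of shape (batch, n) by a block-diagonal matrix stored as
    (num_blocks, b, b) dense blocks, where n = num_blocks * b."""
    batch, n = x.shape
    nb, b, _ = blocks.shape
    x = x.reshape(batch, nb, b)                  # split the vector into blocks
    y = np.einsum("kij,bkj->bki", blocks, x)     # independent per-block matmuls
    return y.reshape(batch, n)

def monarch_matmul(x, blocks1, blocks2, perm):
    """Roughly y = B2 @ P @ B1 @ x: block-diagonal, permute, block-diagonal.
    With block size b = sqrt(n), this costs O(n * sqrt(n)) per vector instead
    of the O(n^2) of a dense weight."""
    y = block_diag_matmul(x, blocks1)
    y = y[:, perm]                               # fixed "block transpose" permutation
    return block_diag_matmul(y, blocks2)

# Toy usage: n = 16 split into 4 blocks of size 4.
n, nb, b = 16, 4, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((2, n))
blocks1 = rng.standard_normal((nb, b, b))
blocks2 = rng.standard_normal((nb, b, b))
perm = np.arange(n).reshape(nb, b).T.reshape(-1)
print(monarch_matmul(x, blocks1, blocks2, perm).shape)   # (2, 16)
```

The appeal of that structure is that the block-diagonal factors are far smaller than a full dense matrix, so the same layer does substantially less arithmetic, which is where the efficiency claims come from.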

ANBLE ON A.I.

  • Google U.K. boss says you can’t trust its chatbot Bard for accurate information – Prarthana Prakash
  • Generative A.I. will upend the workforce, McKinsey says, forcing 12 million job switches and automating away 30% of hours worked in the U.S. economy by 2030 – Paolo Confino
  • Microsoft’s CFO Amy Hood lays out an A.I. investment roadmap – Sheryl Estrada
  • Netflix wants to hire an A.I. ‘Product Manager’ on a $900,000 salary as streaming and A.I. devastate Hollywood – Chloe Berger
  • Microsoft, Google, and OpenAI just became charter members of what may be the first true A.I. lobby. Up next: Lawmakers write the rules – Paolo Confino

BRAINFOOD

A.I. and the Aliens

Outside of A.I., the news that captured the masses this past week was the bombshell House Oversight Committee hearing about UAPs (that’s Unidentified Anomalous Phenomena, the new term for UFOs), where witnesses testified that the U.S. government has UAP aircraft and biologics in its possession, has been covering up a decades-long crash retrieval program, and is misallocating funds to pay for it, among other allegations.

What didn’t come up in the hearing, which mostly focused on the implications for national security, is the widespread ambition around using A.I. for UAP tracking and research.

Machine learning, in particular, is useful for processing and making sense of vast amounts of data. In fact, when asked by the committee what needs to happen regarding UAPs, the three witnesses, all former U.S. military or intelligence officials, testified that a centralized system for tracking and analyzing UAP data should be the top priority.

Hypergiant, a Texas-based machine learning company and government contractor that develops critical infrastructure for defense and space, is one company tapping A.I. in the search for UAPs. The company built its CONTACT software, or Contextually Organized Non-Terrestrial Active Capture Tool, to categorize and analyze unidentified sightings captured by satellites.

In another example, a team of open-source developers decided to take matters into their own hands and recently launched a project called Sky 360, setting up 20 monitoring stations (and counting) and using A.I. to detect and analyze possible UAP sightings. The system is powered by the TensorFlow machine learning platform and uses computer vision to detect motion by comparing each incoming frame against previous frames.
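For a rough sense of how that kind of frame-to-frame motion detection works in practice, here is a minimal, hypothetical sketch using OpenCV. It is not the Sky 360 codebase; the file name, blur kernel, and thresholds are placeholder assumptions.

```python
import cv2

# Hypothetical sketch of frame-differencing motion detection, the general
# technique described above. Not the Sky 360 code; file name and thresholds
# are placeholder assumptions.
cap = cv2.VideoCapture("sky_footage.mp4")
ok, prev = cap.read()
prev_gray = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    diff = cv2.absdiff(prev_gray, gray)                    # change vs. previous frame
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 50:                        # ignore tiny flickers
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    prev_gray = gray

cap.release()
```

Frame differencing of this sort is cheap enough to run continuously on modest hardware, which is why it is a common first stage before any heavier classification of what the moving object actually is.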

Ravi Starzl, a computer science professor at Carnegie Mellon who focuses on A.I. and computational analysis, said he’s personally been helping multiple organizations develop machine learning systems for identifying and characterizing UAPs, including analysis of visual, radar, audio, and text data.

NASA has also suggested using A.I. to examine UAP data, and indeed the agency has already found success using A.I. to identify new exoplanets and explore parts of Mars that would have been unreachable otherwise, among other use cases. It’s not clear if NASA has already tapped A.I. for its UAP efforts, but the agency could shed light on this question in its much-awaited UAP report expected to be released this month.