Zoom promises not to use your calls to train AI without your consent, after criticism prompted some users to consider discontinuing the service.

Zoom Responds to Backlash Over AI Training

Zoom, the popular video communications company, has recently faced criticism and anger from users due to a controversial section in its terms of service that seemed to imply the company could use customers’ meetings to train its artificial intelligence (AI) algorithms. However, Zoom has now responded to the backlash and clarified its stance on the matter.

The Controversial Section

On August 6, Stack Diary, a tech news blog, pointed out a section of Zoom’s terms of service that raised concerns among users. According to section 10.2 of the terms, Zoom had the right to access, use, and collect customer data for various purposes, including machine learning or AI training. This section triggered a wave of negative feedback from users, who voiced their objections and threatened to switch to alternative platforms.

User Backlash and the Company’s Response

The negative response from users was swift and passionate. Gabriella Coleman, an anthropology professor at Harvard, expressed her frustration on X, the platform formerly known as Twitter, saying, “Well time to retire @Zoom, who basically wants to use/abuse you to train their AI.” Others, like video game developer Brianna Wu, took a more direct approach, declaring their intention to cancel their Zoom accounts and switch to competitors.

In response to the user backlash, Zoom’s chief product officer, Smita Hashim, published a blog post clarifying the company’s position on the use of customer data. The post revealed that Zoom had added a sentence to its terms of service stating that it would not use audio, video, or chat customer content to train its AI models without explicit user consent. This addition aimed to alleviate users’ concerns and provide greater clarity on how Zoom handles customer data.

Clearing the Air

While the clarification from Zoom has addressed some of the worries, lingering concerns remain about the extent to which users’ calls are protected. Aparna Bawa, Zoom’s COO, further clarified in a comment on Hacker News, the Y Combinator-run forum, that participants receive notifications within Zoom’s user interface when generative AI features are enabled, ensuring they are aware that their data may be used for product improvement purposes.

However, Stack Diary has pointed out that the revised terms of service may not fully protect users’ calls. Sean Hogle, a tech and intellectual property lawyer, highlighted that the new sentence only applies to customer content and does not encompass telemetry data, which includes information such as event time, client type, and operating system details.

The Broader Context

Zoom’s response to the backlash over AI training comes at a time when concerns about data usage for AI purposes are growing among the general public. In July, over 8,000 authors, including renowned writers Suzanne Collins and Margaret Atwood, signed an open letter addressed to companies like OpenAI and Stability AI, demanding compensation for their books used to train AI systems without permission.

Furthermore, a group of artists recently filed a lawsuit against AI companies such as Stability AI and Midjourney, alleging that their artwork was used to train AI image generators without consent. These incidents highlight the importance of user consent and proper handling of data in the emerging field of AI.


Conclusion

Zoom has taken steps to address the backlash it faced following the discovery of a controversial term in its terms of service. By clarifying its position and explicitly stating that it will not use customer content without consent, the company aims to alleviate users’ concerns. However, questions remain about the extent to which user calls are protected, particularly regarding telemetry data.

The recent events involving Zoom, as well as the broader instances of data usage without consent in the AI field, emphasize the need for companies to prioritize transparency and user consent when deploying AI technologies. As AI continues to play an increasingly prominent role in our lives, safeguarding personal data and ensuring proper ethical practices will become imperative for companies operating in this space.