
When the Terms of Service Change to Make Way for A.I. Training

Last July, Google made an eight-word change to its privacy policy that represented a significant step in its race to build the next generation of artificial intelligence.

Buried thousands of words deep in the document, Google tweaked its phrasing for how it used data for its products, adding that publicly available information could be used to train its A.I. chatbot and other services.

The subtle change was not unique to Google. As companies look to train their A.I. models on data that is protected by privacy laws, they’re carefully rewriting their terms and conditions to include words like “artificial intelligence,” “machine learning” and “generative A.I.”

Some changes to terms of service are as small as a few words. Others include the addition of entire sections to explain how generative A.I. models work, and the types of access they have to user data. Snap, for instance, warned its users not to share confidential information with its A.I. chatbot because it would be used in its training, and Meta alerted users in Europe that public posts on Facebook and Instagram would soon be used to train its large language model.

Those terms and conditions, which many people have long ignored, are now being contested by some users, including writers, illustrators and visual artists, who worry that their work is being used to train the very products that threaten to replace them.
