January 15, 2025

The Microsoft 365 website on a laptop, Tuesday, June 25, 2024, in New York. (Bloomberg | Getty Images)

The beginning of the year is a great time to practice some basic cyber hygiene. We are all told to patch, change passwords and update software. But a growing concern is the sometimes covert integration of artificial intelligence into the programs we use, which can invade our privacy.

“The rapid integration of artificial intelligence into our software and services has raised, and should continue to raise, significant questions about privacy policies that predate the AI era,” said Lynette Owens, vice president of global consumer education at cybersecurity company Trend Micro. Many of the programs we use today, whether email, bookkeeping, productivity tools, social media or streaming apps, may be governed by privacy policies that lack clarity on whether our personal data can be used to train artificial intelligence models.

“This makes it easy for all of us to have our personal information used without appropriate consent. It’s time for every app, website or online service to take a hard look at the data they are collecting, who they are sharing it with, how they share it, and whether or not it can be accessed to train artificial intelligence models,” Owens said. “There’s still a lot of work to be done.”

Artificial intelligence has been integrated into our daily lives

Owens said the underlying issues overlap with most programs and apps we use every day.

“Many platforms have been integrating AI into their operations for years, long before it became a buzzword,” she said.

As an example, Owens noted that Gmail already uses artificial intelligence for spam filtering and predictive text through its “Smart Compose” feature. “Streaming services like Netflix rely on artificial intelligence to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used artificial intelligence for facial recognition in photos and personalized content feeds.

“While these tools provide convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is collected and how that data is used to train artificial intelligence systems. Everyone should carefully review privacy settings to understand what data is being shared, and check the terms of service regularly for updates,” Owens said.

Microsoft’s connected experiences feature, which has come in for particular scrutiny, has been around since 2019 and arrives enabled by default with the option to opt out. Recent media reports describing it as a new feature, or one whose settings had changed, are inaccurate, according to the company and outside cybersecurity experts who have studied the issue. Sensational headlines aside, privacy experts do worry that advances in artificial intelligence could lead to the data and text in programs like Microsoft Word being used in ways that privacy settings don’t adequately cover.

“As tools like connected experiences continue to evolve, the implications of data use can broaden even if the underlying privacy settings don’t change,” Owens said.

A Microsoft spokesperson told CNBC in a statement that the company does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. The spokesperson added that in certain cases customers may consent to having their data used for specific purposes, such as custom model development explicitly requested by some commercial customers. In addition, the setting enables cloud-backed features many people have come to expect from productivity tools, such as real-time co-authoring, cloud storage, and tools like Editor in Word that provide spelling and grammar suggestions.

Default privacy settings are a problem

Ted Miracco, chief executive of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword: they promise productivity gains but raise serious privacy red flags. The setting’s default-on status, Miracco said, can opt people in to something they aren’t necessarily aware of, primarily related to data collection, and organizations may want to think twice before leaving the feature enabled.

“Microsoft’s assurances provide only partial relief, and still fail to alleviate some very real privacy concerns,” Miracco said.

Kaveh Vadat, founder of SEO marketing agency RiseOpp, said perception itself may be the problem.

Having the feature enabled by default changes the dynamic significantly, he said, and can leave users feeling intruded upon or manipulated.

His point is that in an environment where there is a lot of distrust and skepticism about artificial intelligence, companies need to be more transparent, not less.

Companies, including Microsoft, should emphasize opt-in defaults rather than opt-out ones, he said, and perhaps provide more granular, non-technical information about how personal content is handled, because perception can become reality.

“Even if the technology is completely safe, public perception will depend not just on facts but on fears and assumptions, especially in the age of artificial intelligence where users often feel disempowered,” he said.


Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick, said that making data sharing the default makes sense for companies commercially, but it is detrimental to consumer privacy.

Companies can enhance their products and stay competitive by configuring more data sharing out of the box, Hummel said. From a user’s perspective, however, prioritizing privacy with an opt-in model of data sharing would be “a more ethical approach,” he said. As long as the additional features offered through data collection are not essential, users can choose whichever arrangement better suits their interests.

Hummel said there are real benefits in the current trade-off between AI-enhanced tools and privacy, based on what he has seen in student work. Students who grew up with webcams, livestreaming on social media and all-encompassing technology tend to be less concerned about privacy, he said, and are embracing these tools with enthusiasm. “For example, my students are creating better presentations than ever before,” he said.

Managing the risks

Colby College librarian Kevin Smith said concerns in areas such as copyright law about the mass copying used to train large language models are overblown, but developments in artificial intelligence do intersect with core privacy concerns.

“Many of the privacy concerns currently being raised about AI have actually been around for years; the rapid deployment of AI trained on large language models has simply focused attention on some of these issues,” Smith said. “Personal information is all about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is a real change that we need to find ways to manage,” he added.

In most programs, the option to turn off AI features is buried in the settings. To disable connected experiences, for example, open a document and click “File,” then go to “Account” and find Privacy Settings. From there, open Manage Settings, scroll down to Connected Experiences and uncheck the box to turn the feature off. Once you do, Microsoft warns: “If you turn this feature off, you may not be able to get certain experiences.” Microsoft says leaving the setting on allows for more communication, collaboration and AI-assisted suggestions.
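For administrators or readers comfortable with a little scripting, Microsoft also documents per-user policy registry values that control connected experiences, so the opt-out can be applied without clicking through the menus. Below is a minimal Python sketch assuming Windows and a Microsoft 365 install that uses the Office 16.0 registry tree; the registry path and value names come from Microsoft’s published privacy-controls documentation, so verify them against your own version before relying on this.

```python
# Minimal sketch: write the documented Office privacy policy values that
# disable "connected experiences" for the current user on Windows.
# Assumption: Microsoft 365 Apps using the Office 16.0 registry tree; the
# path and value names follow Microsoft's privacy-controls documentation,
# where 1 means enabled and 2 means disabled. Verify for your version.
import winreg

POLICY_PATH = r"Software\Policies\Microsoft\office\16.0\common\privacy"

SETTINGS = {
    "DisconnectedState": 2,                   # all connected experiences
    "UserContentDisabled": 2,                 # experiences that analyze your content
    "DownloadContentDisabled": 2,             # experiences that download online content
    "ControllerConnectedServicesEnabled": 2,  # optional connected experiences
}

def apply_policies() -> None:
    # Create the policy key if it doesn't exist, then write each DWORD value.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, POLICY_PATH) as key:
        for name, value in SETTINGS.items():
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
            print(f"set {name} = {value}")

if __name__ == "__main__":
    apply_policies()
```

Per the documented semantics, setting a value to 1 re-enables the corresponding experiences, and deleting the values hands control back to the in-app privacy settings described above.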

In Gmail, open the app, click the menu, go to Settings, select the account you want to change, then scroll to the “General” section and uncheck the boxes next to the various “Smart features” and personalization options.

As cybersecurity vendor Malwarebytes put it in a blog post about the Microsoft feature: “If you’re working on the same document with others in your organization, turning this option off may result in some loss of functionality. … If you want to turn off these settings for privacy reasons and don’t use them anyway, by all means: these settings can all be found under Privacy Settings. But I couldn’t find any indication that these connected experiences were used to train artificial intelligence models.”

While the instructions are easy to follow, and knowing more about what you’re agreeing to is always a good idea, some experts say the onus shouldn’t be on consumers to deactivate these settings. “When companies implement features like this, they often present them as opt-ins for enhanced functionality, but users may not fully understand the scope of what they are consenting to,” said data privacy expert Wes Chaar.

“The crux of the matter is vague disclosures and a lack of clear communication about what ‘connected’ means and how deeply personal content is being analyzed or stored,” Chaar said. “For people outside of technology, it might be like inviting a helpful assistant into your home, only to learn later that they have recorded your private conversations for a training manual.”

Decisions about managing, limiting or even revoking access to data highlight the imbalance in the current digital ecosystem. “Without strong systems that prioritize user consent and provide control, individuals’ data can easily be repurposed in ways they neither anticipated nor benefited from,” Chaar said.
