AI and Privacy Issues: What You Need to Know

Wednesday, September 20, 2023, 11:33 PM, from eWeek
Privacy issues created by the widespread use of artificial intelligence are a major concern, even as businesses and consumers alike are excited by AI's potential to transform daily life. As more and more personal data is fed into AI models, consumers are rightfully concerned about their privacy and how their data is being used.
This guide helps consumers build a deeper understanding of AI's privacy implications. It is also meant for business owners and leaders who want to better understand their customers' concerns and use AI in a way that protects privacy without sacrificing functionality.
Table of Contents: AI and Privacy

Issues with AI and Privacy
How Is AI Data Collected?
Solutions for AI and Privacy Concerns
Bottom Line

Issues with AI and Privacy
Little regard for copyright and IP laws
AI models pull training data from all corners of the web. Unfortunately, many AI vendors either don't realize or don't care when they use someone else's copyrighted artwork, content, or other intellectual property without the owner's consent.
This problem grows much worse as models are trained, retrained, and fine-tuned with this data; many of today’s AI models are so complex that even their builders can’t confidently say what data is being used and who has access to it.
Also see: AI Detector Tools
Unauthorized incorporation of user data
When users input their own data into an AI model in the form of queries, there's a real possibility that this data will become part of the model's future training dataset. When that happens, the data can resurface in outputs to other users' queries, which is a particularly big issue if the original input was sensitive.
In a now-famous example, three different Samsung employees leaked sensitive company information to ChatGPT that may now be part of ChatGPT's training data. Many vendors, including OpenAI, are cracking down on how user inputs are incorporated into future training, but there's still no guarantee that sensitive data will remain secure and outside of future training sets.
Limited regulatory bodies and safeguards
Some countries and regulatory bodies are working on AI regulations and safe use policies, but no overarching standards are currently in place to hold AI vendors accountable for how they build and use artificial intelligence tools.
A number of AI vendors have already come under fire for IP violations and opaque training and data collection processes. But in most cases today, AI vendors get to decide their own data storage, cybersecurity, and usage rules without interference.
Also see: Top Generative AI Apps and Tools
Unauthorized usage of biometric data
A growing number of personal devices use facial recognition, fingerprints, voice recognition, and other biometric data security in place of more traditional forms of identity verification. Public surveillance devices also frequently use AI to scan for biometric data so individuals can be identified more quickly.
While these new biometric security tools are incredibly convenient, there's limited regulation regarding how AI companies can use this data once it's collected. In many cases, individuals don't even know that their biometric data has been collected, much less that it is being stored and used for other purposes.
Covert metadata collection practices
When a user interacts with an ad, a TikTok or other social media video, or pretty much any web property, metadata from that interaction, along with the person's search history and interests, can be stored for more precise content targeting in the future.
This method of metadata collection has been going on for years, but with the help of AI, more of that data can be collected and interpreted at scale, making it possible for tech companies to target their messaging at users without those users understanding how it works. While most sites have policies that mention these data collection practices, the disclosure is usually brief and buried in the midst of other policy text, so most users don't realize what they've agreed to, subjecting themselves and everything on their mobile devices to scrutiny.
Limited built-in security features for AI models
While some AI vendors may choose to build in baseline cybersecurity features and protections, many AI models do not have native cybersecurity safeguards in place. This makes it incredibly easy for unauthorized users and bad-faith actors to access and use other users' data, including personally identifiable information (PII).
Extended data storage periods
Few AI vendors are transparent about how long, where, and why they store user data, and the vendors who are transparent often store data for lengthy periods of time.
For example, OpenAI's policy says it can store user input and output data for up to 30 days "to identify abuse." However, it's not clear when or why the company considers itself justified in taking a closer look at individual users' data without their knowledge.
How Is AI Data Collected?
Web scraping and web crawling
AI tools often rely on web scraping and web crawling to build training datasets, because these methods require no special permissions and enable vendors to collect massive amounts of varied data.
Content is scraped from publicly available sources on the internet, including third-party websites, wikis, digital libraries, and more. In recent years, user metadata has also become a large portion of what’s collected through web scraping and crawling. This metadata is usually pulled from marketing and advertising datasets and websites with data regarding targeted audiences and what they engage with most.
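To make the mechanics concrete, here is a minimal Python sketch of the kind of scraper that might collect publicly available page text for a training corpus. The URL is a placeholder, and the robots.txt check is included deliberately; skipping checks like it is exactly the practice that fuels the concerns above.

import urllib.robotparser

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/articles/some-public-page"  # placeholder

def allowed_by_robots(url: str, agent: str = "example-crawler") -> bool:
    # Check robots.txt before fetching -- the polite-crawler baseline.
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(requests.compat.urljoin(url, "/robots.txt"))
    rp.read()
    return rp.can_fetch(agent, url)

def scrape_page_text(url: str) -> str:
    # Fetch one page and return its visible paragraph text.
    resp = requests.get(url, headers={"User-Agent": "example-crawler"}, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return "\n".join(p.get_text(strip=True) for p in soup.find_all("p"))

if __name__ == "__main__":
    if allowed_by_robots(URL):
        print(scrape_page_text(URL))

Note that nothing in the scraper itself distinguishes copyrighted or personal content from anything else; that judgment, when it happens at all, happens outside the code.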
User queries in AI models
When a user inputs a question or other data into an AI model, most AI models store that data for at least a few days. While that data may never be used for anything else, many artificial intelligence tools have been shown not only to collect that data but to hold onto it for future training purposes.
Biometric technology
Surveillance equipment, including security cameras, facial and fingerprint scanners, and microphones that detect human voices, can all be used to collect biometric data and identify people without their knowledge or consent.
State by state, rules are getting stricter regarding how transparent companies need to be when using this kind of technology. Yet for the most part, companies can still collect this data, store it, and use it without asking customers for permission.
IoT sensors and devices
Internet of Things (IoT) sensors and edge computing systems collect massive amounts of moment-by-moment data and process it nearby to complete larger computational tasks more quickly. AI software often takes advantage of an IoT system's database and collects relevant data through methods like data learning, data ingestion, secure IoT protocols and gateways, and APIs.
APIs
APIs give users an interface to many kinds of business software, so they can easily collect and then integrate different kinds of data for AI analysis and training. With the right API and setup, users can collect data from CRMs, databases and data warehouses, and both cloud-based and on-premises systems.
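As an illustration, here is a minimal Python sketch of pulling records from a business system over a REST API for later AI analysis. The endpoint, token, and field names are all hypothetical; every real CRM or data warehouse has its own schema and auth scheme.

import requests

API_BASE = "https://crm.example.com/api/v1"   # hypothetical endpoint
API_TOKEN = "..."                             # supplied by the vendor

def fetch_contacts(page: int = 1) -> list[dict]:
    # Fetch one page of contact records from the hypothetical CRM.
    resp = requests.get(
        f"{API_BASE}/contacts",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"page": page, "per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

contacts = fetch_contacts()
print(f"Pulled {len(contacts)} records for downstream AI analysis")

Because a short script like this can page through an entire customer database, the privacy question is less about the API mechanics and more about who holds the token and where the records end up.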
Public records
Whether records are digitized or not, public records are often collected and incorporated into AI training sets. Information about public companies, current and historical events, criminal and immigration records, and other public information can be collected with no prior authorization required.
User surveys and questionnaires
Though this data collection method is more old-fashioned, using surveys and questionnaires is still a tried-and-true way that AI vendors collect data from their users.
Users may answer questions about what content they’re most interested in, what they need help with, how their most recent experience with a product or service was, or any other question that gives the AI a better idea about how to personalize interactions with that person in the future.
Also see: 100+ Top AI Companies
Solutions for AI and Privacy Concerns
With a handful of best practices, tools, and additional resources, your business can effectively use artificial intelligence solutions without sacrificing user privacy. To protect your most sensitive data at all stages of AI usage, follow these tips:

Establish an appropriate use policy for AI: Internal users should know what data they can use and how and when they should use it when engaging with AI tools. This is particularly important for organizations that work with sensitive customer data, like protected health information (PHI) and payment information.
Invest in data governance and security tools: Some of the best solutions for protecting AI tools and the rest of your attack surface include extended detection and response (XDR), data loss prevention, and threat intelligence and monitoring software. A number of data-governance-specific tools also exist to help you protect data and ensure all data use remains in compliance with relevant regulations.
Read the fine print: AI vendors typically offer some kind of documentation that covers how their products work and the basics of how they were trained. Read this documentation carefully to identify any red flags, and if there’s something you’re not sure about or that’s unclear in their policy docs, reach out to a representative for clarification.
Use only non-sensitive data: As a general rule, do not input your business's or customers' most sensitive data into any AI tool, even if it's a custom-built or fine-tuned solution that feels private. If there's a particular use case you want to pursue that involves sensitive data, research whether there's a way to safely complete the operation with digital twins, data anonymization, or synthetic data, as shown in the sketch after this list.
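On that last point, here is a minimal Python sketch of pattern-based redaction that strips obvious PII from text before it goes to an AI tool. It is deliberately simple; real anonymization also has to handle names, addresses, and quasi-identifiers that regexes alone will miss.

import re

# Order matters: the narrower SSN pattern must run before the broader
# phone pattern, or the phone regex would swallow SSNs first.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    # Replace common PII patterns with typed placeholders.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-867-5309 about claim 123-45-6789."
print(redact_pii(prompt))
# -> Contact Jane at [EMAIL] or [PHONE] about claim [SSN].

Notice that the name "Jane" survives redaction, which is exactly why regex-only approaches are a starting point rather than a complete solution.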

For additional tips related to cybersecurity, risk management, and ethical AI use when it comes to generative AI in particular, check out these previous best practice guides:

Generative AI Ethics: Concerns and Solutions
Generative AI and Cybersecurity
Risks of Generative AI: 6 Risk Management Tips

Bottom Line: AI and Privacy Issues
AI tools present businesses and everyday consumers with all kinds of new conveniences, ranging from task automation to guided Q&A to product design and programming. But as much as these tools can simplify our lives, they also run the risk of violating individual privacy in ways that can damage vendor reputations, consumer trust, cybersecurity, and regulatory compliance.
It takes extra effort to use AI in a responsible way that protects user privacy, but it’s well worth it when you consider how privacy violations can impact a company’s public image. Especially as this technology matures and becomes more pervasive in our daily lives, it will become crucial to follow AI laws as they’re passed and develop more specific AI use best practices that align with your organization’s culture and customers’ privacy expectations.
Read next: Best Artificial Intelligence Software
https://www.eweek.com/artificial-intelligence/ai-privacy-issues/