
The Data You Didn’t Share: How AI Still Profiles You Through Others

You’ve locked down your social media, disabled ad tracking on your phone, and maybe even stopped sharing location data. Yet somehow, you still get eerily personalized ads and recommendations. How does that happen? The answer lies in a digital shadow you never created, but AI built it anyway.

This digital twin, formed not from your data but from the data of people connected to you, shows how AI’s power to profile isn’t limited to what you share.

How AI Profiles You Through Others

AI models thrive on connections and patterns. They don’t view individuals in isolation. Instead, they “connect the dots” using data from your social network, household devices, and shared behaviors.

For example, a smart home assistant like Alexa or a streaming service like Netflix learns from everyone in your household. If your sibling binge-watches horror movies, Netflix may recommend similar shows to you, even if you’ve never watched one. Spotify may infer your music tastes from the playlists your roommates play. These platforms gather signals from multiple users on shared devices, then generalize preferences across profiles.
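To see how that blending can happen, here is a minimal sketch in Python with invented genre data. No platform publishes its model, so treat this as an illustration of account-level pooling, not Netflix’s actual method:

```python
from collections import Counter

# Hypothetical watch histories for two profiles sharing one account.
household = {
    "you":     ["comedy", "documentary"],
    "sibling": ["horror", "horror", "thriller", "horror"],
}

def account_genre_scores(profiles):
    """Pool every profile's history into a single account-level signal."""
    pooled = Counter(genre for history in profiles.values() for genre in history)
    total = sum(pooled.values())
    return {genre: count / total for genre, count in pooled.items()}

# "horror" dominates the account signal, so it can leak into *your* recommendations.
print(sorted(account_genre_scores(household).items(), key=lambda kv: -kv[1]))
```

Real recommenders are far more sophisticated, but the core effect is the same: signals pooled at the account or device level bleed across the individuals behind it.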

Social media platforms go even further. You might not have a TikTok account, but AI can guess your interests based on your partner’s “For You” page or your shared Wi-Fi network. Being tagged in friends’ photos or appearing in their contact lists adds pieces to the puzzle. AI systems leverage this networked data to infer traits, interests, and habits.

A LinkedIn article explains this well: AI can build a profile of a user not only from direct input but also by analyzing conversations, tone, and patterns over time, even filling in gaps based on social connections.

Another article highlights how shared devices and social media footprints multiply the data AI can draw from, creating profiles even for those who never gave consent directly.
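A toy version of that networked inference might look like the sketch below, assuming a made-up friendship graph built from photo tags and synced contacts; the threshold and data are invented for illustration:

```python
from collections import Counter

# Hypothetical connections, e.g., from photo tags and uploaded contact lists.
friends = {"you": ["ana", "ben", "cara"]}

# Interests each connection has expressed directly on the platform.
declared = {
    "ana":  {"hiking", "podcasts"},
    "ben":  {"hiking", "gaming"},
    "cara": {"hiking"},
}

def infer_interests(person, graph, declared, min_share=0.5):
    """Attribute an interest to `person` if enough connections share it."""
    connections = graph.get(person, [])
    counts = Counter(i for c in connections for i in declared.get(c, set()))
    return {i for i, n in counts.items() if n / len(connections) >= min_share}

print(infer_interests("you", friends, declared))  # {'hiking'}, which you never declared
```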

Example Breakout: When Their AI Use Shapes Your Profile

Here’s a relatable scenario:

Imagine you share a dorm with a few classmates. They use ChatGPT to ask questions about business ethics, criminology, and neuroscience. They search the same academic databases, access the university’s learning platform through the same dorm Wi-Fi, and mention your name in study group chats.

Suddenly, you start noticing subtle shifts: your school library recommends articles aligned with those same topics. A smart scheduling tool prioritizes events and seminars you never expressed interest in. Even group project tools begin suggesting templates that match your classmates’ AI prompts.

You never shared your preferences, but the system inferred them anyway.

This happens because AI models often learn from clusters of behavior, especially in shared networks. It’s not just your activity being analyzed—it’s everyone else’s. Their digital behavior becomes the proxy for your profile.

This indirect personalization isn’t hypothetical. Recommendation engines, learning management systems, and even job boards are beginning to rely on aggregated behavior patterns to improve engagement.
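One plausible mechanism, sketched below with invented data and no specific platform in mind, is simple cohort aggregation: users seen behind the same network get grouped, and the group’s dominant topics become a recommendation prior for every member, including those who revealed nothing themselves:

```python
from collections import defaultdict, Counter

# Hypothetical activity log: (user, network_id, topic_or_None).
events = [
    ("roommate_a", "dorm_wifi_7", "criminology"),
    ("roommate_b", "dorm_wifi_7", "neuroscience"),
    ("roommate_a", "dorm_wifi_7", "business_ethics"),
    ("you",        "dorm_wifi_7", None),  # you browse, but reveal no topics
]

cohort_topics = defaultdict(Counter)  # topics observed per network
members = defaultdict(set)            # users observed per network

for user, network, topic in events:
    members[network].add(user)
    if topic:
        cohort_topics[network][topic] += 1

# Every member inherits the cohort's top topics, whether they contributed or not.
for network, users in members.items():
    prior = [t for t, _ in cohort_topics[network].most_common(2)]
    for user in sorted(users):
        print(user, "->", prior)
```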

The Problem with Opting Out

You’ve turned off tracking, cleared your cookies, and locked down your privacy settings. But you still get targeted ads, eerily accurate recommendations, and content that seems to know you better than you know yourself.

That’s because opting out only works in a world where your data stays yours. We don’t live in that world anymore.

Today’s AI systems learn from networks, not individuals. Even if you say no, the people around you—friends, roommates, coworkers—might say yes. Their data doesn’t just reflect their behavior; it shapes how AI perceives you. That’s the digital loophole we rarely talk about: your profile gets built from someone else’s consent.

This has real-world consequences.

  • Job screening algorithms might rank you lower because your social graph resembles past applicants who failed.
  • Loan or insurance decisions may be influenced by neighborhood behavior patterns—regardless of your personal credit score or health.
  • Education platforms might push certain tracks or subjects on you because your classmates frequently engage with them.

A study in PNAS found that AI systems could predict personal traits (like sexual orientation or political views) with high accuracy based solely on friends’ Facebook likes, even when the individuals themselves never posted such information. More recent tools use social and device-level signals to replicate this kind of inference across platforms.
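A stripped-down illustration of that style of inference, using scikit-learn on synthetic data (the actual studies trained on millions of real likes; the planted correlation here just stands in for whatever real-world signal the models exploit):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: each row records which of 20 pages a person's *friends*
# liked (1 = at least one friend liked it). The person shares nothing.
n_people, n_pages = 500, 20
X = rng.integers(0, 2, size=(n_people, n_pages))

# Plant a trait that correlates with three of those friend-liked pages.
trait = (X[:, 0] + X[:, 1] + X[:, 2] >= 2).astype(int)

model = LogisticRegression().fit(X, trait)
print("accuracy from friends' likes alone:", model.score(X, trait))
```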

The point isn’t that privacy controls are broken. It’s that they were never designed for a networked world.

Saying “I opt out” means little when the systems profiling you are built on collective data. Until laws, platforms, and AI models shift toward recognizing and limiting these indirect inferences, privacy remains a group responsibility and a group risk.

Where This Gets Tricky

AI often learns from what merely seems likely about you, drawing on indirect and sometimes misleading signals such as telemetry, the usage data that devices and apps report automatically.

These signals aren’t always personal. They can come from public posts, nearby devices, or even people who share no close ties with you. Yet algorithms can confidently fill in the blanks—and that can affect how you’re treated online.

For example:

  • Search engine results might differ based on what others in your region click, nudging you toward groupthink or reinforcing local biases.
  • Job recommendation systems have shown bias when inferring career interests based on aggregated behaviors of similar users, sometimes steering certain demographics away from higher-paying roles.
  • Retail pricing algorithms may adjust offers or discounts based on past behaviors of people with similar digital profiles, not because of your actions, but theirs.

In all these cases, AI uses statistical patterns to guess your preferences, identity, or intent. These guesses might seem harmless, until they start limiting your opportunities or shaping your digital experience in invisible ways.
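To make the pricing case concrete, here is a toy nearest-neighbor sketch with invented behavior vectors; real pricing systems are proprietary and far more complex, so this only illustrates the shape of the logic:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical behavior vectors (browsing categories, device type, region...).
you = np.array([0.9, 0.1, 0.4])
cohort = {  # past shoppers and the discount each one needed to convert
    "shopper_1": (np.array([0.8, 0.2, 0.5]), 0.05),
    "shopper_2": (np.array([0.1, 0.9, 0.3]), 0.30),
}

# Your offer is driven by the most similar stranger's behavior, not yours.
profile, discount = max(cohort.values(), key=lambda pd: cosine(you, pd[0]))
print(f"offered discount: {discount:.0%}")  # 5%, because shopper_1 converted cheaply
```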

What Can You Actually Do About Indirect AI Profiling?

You can’t completely stop AI from forming assumptions about you based on other people’s data. Still, there are ways to reduce how much of that indirect data sticks or how it’s used against you.

1. Review and Limit Shared Device Permissions

If you share smart TVs, speakers, or streaming accounts, check each app’s settings to:

  • Create separate user profiles
  • Disable cross-device personalization
  • Limit ad personalization where possible

Platforms like Netflix, YouTube, and Spotify offer account-level controls—use them to prevent blending preferences across people.

2. Turn Off Contact Syncing

Apps like WhatsApp, TikTok, Facebook, and email platforms often ask to sync contacts. If your friends or family do this, your name and number could land in their systems even without your knowledge.

You can:

  • Avoid giving apps access to your contacts
  • Ask people you live or work with to do the same
  • Regularly audit your own app permissions
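The mechanics behind this are mundane, as the hypothetical sketch below shows: when several users upload contact lists, entries about a non-user can be merged on a stable key like a phone number into a “shadow” record, without that person ever touching the app:

```python
from collections import defaultdict

# Hypothetical contact lists uploaded by three app users who know you.
uploads = {
    "user_a": [{"name": "Sam Lee", "phone": "555-0101"}],
    "user_b": [{"name": "Sam",     "phone": "555-0101", "email": "sam@example.com"}],
    "user_c": [{"name": "S. Lee",  "phone": "555-0101"}],
}

# Merge entries sharing a phone number into one record for a non-user.
shadow = defaultdict(lambda: {"names": set(), "emails": set(), "seen_by": set()})
for uploader, contacts in uploads.items():
    for contact in contacts:
        record = shadow[contact["phone"]]
        record["names"].add(contact["name"])
        if "email" in contact:
            record["emails"].add(contact["email"])
        record["seen_by"].add(uploader)

print(shadow["555-0101"])  # a profile of someone who never signed up
```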

3. Segment Your Digital Footprint

Use different browsers, email accounts, or even devices for different purposes (e.g., work vs. personal). This reduces the chance of cross-data inference:

  • Use privacy-focused browsers like Brave or Firefox
  • Set up dedicated Chrome profiles to separate activity
  • Clear cookies regularly or browse in incognito for sensitive searches

4. Audit and Adjust Ad Settings

Major platforms let you see how they categorize your interests and why you’re shown particular ads. Google’s My Ad Center and Facebook’s ad preferences, for example, list the interest categories inferred about you and let you remove or disable them. Reviewing these periodically can surface guesses you never fed the system directly.

5. Talk About Digital Boundaries

Especially in households and workplaces, indirect profiling becomes more powerful through shared habits. Consider having privacy conversations with those close to you:

  • Ask not to be added to bulk contact lists without permission
  • Set agreements for managing shared accounts or devices
  • Discuss smart home setups before enabling always-on voice assistants

A simple principle sums it up: “You Should Know When You’re Being Guessed.” If AI builds a picture of you from your friends or family, systems should let you see what’s assumed and give you tools to manage it.
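No mainstream platform exposes this today, so the following is purely a hypothetical sketch of what an “inference receipt” could look like: every guessed attribute carries its provenance and a switch the person can flip:

```python
from dataclasses import dataclass

@dataclass
class InferredAttribute:
    """A guessed trait, its indirect source, and a user-controlled veto."""
    name: str
    source: str        # where the guess came from (never your own input)
    confidence: float
    suppressed: bool = False

profile = [
    InferredAttribute("interest:true_crime", "household viewing history", 0.7),
    InferredAttribute("interest:hiking", "tagged friends' declared interests", 0.6),
]

# The user sees every guess and can veto it before it shapes their experience.
for attr in profile:
    print(f"{attr.name} <- {attr.source} ({attr.confidence:.0%})")
profile[0].suppressed = True  # "don't use this guess about me"
```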

AI Profiling Is More Than Pattern Recognition

These inferences don’t stay theoretical. They shape the content you’re shown, the prices you’re offered, and even how platforms rank your credibility. The profile becomes a proxy for identity, and the guess becomes the experience.

Unlike older systems that relied solely on personal data, modern AI models can confidently act on assumptions. This isn’t just about ads. In some cases, algorithmic predictions influence access to services, moderation decisions, or even eligibility scores for loans and insurance.

That’s why this issue matters. It’s no longer about whether AI knows the “real you.” It’s about how the profile created by others’ data becomes the version of you that systems respond to. You may never be consulted, yet the effects feel personal.
