I am moving away. To the digital suburbs, at least. I finally had a little free time, and I also ran out of patience with big tech companies. I have never been opposed to training on public data in principle, but recent behavior crossed a line for me. The issue is not just model quality or product direction. It is the growing gap between what users expect and how their data is actually being handled.
Here are a few things that pushed me:
- Google announced broader AI usage around personal ecosystem data, including information many people assume is private by default.
- Microsoft introduced features like Recall and Copilot workflows that normalize continuous capture and analysis of personal computing activity.
- AI crawlers from both AI-first companies and legacy platforms often ignore `robots.txt`, which pushes hosting costs onto independent site owners.
None of this feels user-first. It feels like the burden keeps moving in one direction.
§ § §
Why this started to bother me
The part that bothered me most is not that these systems are advanced. It is that the attack surface is no longer purely technical.
Historically, abusing sensitive information required real technical skill or direct system compromise. With LLMs, prompt design itself becomes part of the attack vector. We now live in a world where people can try to extract private context using natural language and a little persistence. That should make everyone more cautious about where personal files live and what systems can access them.
If your tax returns, banking records, legal files, and identity documents are sitting inside services that are actively being repurposed for AI features, you are trusting a very complicated stack to always behave exactly as intended.
I am not comfortable making that bet by default anymore.
The move: self-hosting
So I moved a big chunk of my digital life inward.
In the 2010s, self-hosting felt niche and fragile. Today it feels practical. Hardware is fast, software is cheap, and setup is much easier than it used to be. My baseline stack took an afternoon.
Hardware and networking setup
I bought an M4 Mac mini and use it as my personal server.
I chose it for three reasons:
- Its price-to-performance ratio is good.
- It fits cleanly into the Apple ecosystem I already use.
- It is simple enough to maintain without turning this into a full-time hobby.
For secure remote access, I use Tailscale as the VPN layer.
Tailscale does most of the heavy lifting. It gives me encrypted connectivity between my devices and the Mac mini without exposing services to the public internet. I did not want to spend weekends hand-rolling network configs or maintaining public ingress rules. This setup is boring, and boring is exactly what I wanted.
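For concreteness, here is roughly what the setup looks like on the command line. The Tailscale commands and flags are real, but the hostname `mac-mini` and the user account are my own choices; on macOS the App Store app is the more common install path, with Homebrew as an alternative for the CLI.

```shell
# Install the Tailscale CLI (the Mac App Store app works too).
brew install tailscale

# Join the tailnet and give this machine a stable hostname.
# This opens a browser window to authenticate with your identity provider.
sudo tailscale up --hostname=mac-mini

# From any other device on the same tailnet, reach the server by name.
# With MagicDNS enabled, tailnet hostnames resolve automatically.
ssh user@mac-mini

# Nothing is exposed to the public internet; this just lists peers
# and confirms traffic flows over the encrypted WireGuard mesh.
tailscale status
```

These are one-time setup commands rather than a script to run verbatim, which is part of the appeal: after `tailscale up`, there is essentially nothing to maintain.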
What I actually run
A Mac mini plus Tailscale handles most of the infrastructure. The remaining work is deciding what belongs there.
For me, the main categories are:
- **Sensitive files.** Tax docs, bank docs, and personal records now live in a tighter local workflow instead of being scattered across external apps.
- **Development workspace.** Personal projects and code move through this machine so I always have a stable server to SSH into and run scripts on.
- **Local AI utilities.** I run smaller local models for focused tasks like autocomplete and lightweight assistance.
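As a sketch of what those local AI utilities can look like: Ollama is one common way to serve small models on a machine like this. The commands below are real Ollama CLI and API calls, but the model tag is illustrative; pick whatever fits your RAM.

```shell
# Pull a small model once (tag is illustrative).
ollama pull llama3.2:3b

# One-off completion from the command line.
ollama run llama3.2:3b "Write a one-line git commit message for a typo fix."

# The same server exposes a local HTTP API on port 11434, so editors
# and scripts can call the model with no cloud dependency at all.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2:3b", "prompt": "hello", "stream": false}'
```

Pointed at the tailnet hostname instead of `localhost`, the same API is reachable from any of my devices without being reachable from anyone else's.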
I still use iCloud for backup and sync, but I changed the center of gravity. My day-to-day operations happen on infrastructure I control, then I sync outward where needed.
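The "sync outward" step can be as simple as a one-way rsync into the local iCloud Drive folder. The `com~apple~CloudDocs` path is the standard iCloud Drive location on macOS; the `~/vault/` source directory and the destination folder name are my own.

```shell
# One-way push from the server's working copy into iCloud Drive.
# --delete mirrors removals too; drop it for additive-only sync.
rsync -av --delete \
  ~/vault/ \
  "$HOME/Library/Mobile Documents/com~apple~CloudDocs/vault-backup/"
```

Run by hand or on a schedule (cron or launchd both work), this keeps the server as the source of truth while iCloud remains an off-site copy.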
Is this overkill?
Maybe for some people. For me, it has been worth it.
I trust this setup more, I understand it better, and I spend less time wondering what invisible data policy changed this week. I am not claiming everyone needs a home server. I am saying the default assumptions about where personal data should live are worth re-evaluating.
Moving to the suburbs of the internet has made my setup quieter, more private, and more deliberate.