The Targeting Issue
What the technology says it does. What it actually does.
⚙️ How It Actually Works
ChatGPT Memory — What OpenAI Says vs. What’s Actually Happening
OpenAI’s memory feature sounds simple: ChatGPT remembers things about you across conversations. Useful. Personal. Almost human.
Here’s what’s actually happening underneath.
When you interact with ChatGPT, the model doesn’t “remember” anything in the way you do. It has no persistent internal state between sessions. What the memory feature actually does is maintain a structured text file — essentially a notes document — that gets appended to your system prompt at the start of each new conversation.
So when ChatGPT “remembers” that you’re a vegetarian or that you prefer concise answers, it’s not because the model learned that. It’s because a fact was written to a file, and that file is now being fed back to the model every single time you open a new chat.
Now, this requires a few considerations: First: your memory file can be read by anyone with access to your account. It’s not encrypted in some special way — it’s just text, appended to context. Second: the model can be manipulated into writing things to your memory file that you didn’t intend. There are documented prompt injection attacks where malicious content in a webpage you asked ChatGPT to summarise caused it to save false information about you. Third: the “memory” is only as good as what gets written down. The model decides what’s worth storing — and that decision is itself a black box.
None of this makes the feature useless. It’s genuinely helpful for repeated workflows. But “memory” is doing a lot of rhetorical work here. What you actually have is a persistent, editable text file that gets stapled to your prompt. Understanding that changes how you’d use it — and how much you’d trust it.
You can view and edit your memory file in Settings → Personalization → Manage Memory. Worth doing.
🔍 What I’m Investigating
How ICE Agents Actually Use ELITE — and Who Built It
I’ve been reading court transcripts.
Not the most glamorous research method, but it’s one of the more honest ones: when ICE agents describe their own workflows under oath, they’re not doing PR. What’s emerged from a set of transcripts I’ve been working through is a picture of ELITE — Enhanced Leads Identification and Targeting Enforcement — as a map-based interface that agents use to identify and target communities. Not individuals with specific warrants. Communities.
That framing matters. A lot of surveillance technology gets described in terms of individual case management — tracking a specific person through a legal process. What the transcripts suggest is something with broader geographic and demographic sweep. I’m still building the evidentiary picture, so I’m not overstating what I have.
What I’m currently pulling on: the relationship between ELITE and Palantir. There’s a known Palantir contract with ICE — that’s public record. What’s less clear is the precise scope of that contract and whether ELITE sits inside it, adjacent to it, or is something else entirely. I’ve found connections that suggest the relationship is real, but I want to be precise about the architecture before I say more.
The question I most want to answer is about training data: what data was used to build the targeting models, who provided it, and whether communities were aware their data was feeding a system like this. That’s where I’m focused next.
More when I have it.
📖 Reading the Room
Atlas of AI by Kate Crawford — still the one I keep returning to
Crawford’s argument is structural: AI isn’t primarily a software problem, it’s an extraction problem. Extraction of labour, of data, of natural resources (the mines that produce the hardware), of political attention. The technology doesn’t exist in the cloud — it exists in physical infrastructure, in legal frameworks, in the bodies of people doing low-wage labelling work.
I’m returning to it now because the current AI governance conversation — EU AI Act, UK’s “pro-innovation” approach, US executive orders — is almost entirely focused on outputs: what can the model say, what can it do, what might it get wrong. Crawford’s frame asks a different question: what did it cost to build this, and who paid? The data labellers in Nairobi paid. Those are harder questions to regulate, which is probably why they’re not being regulated.
If you’re going to read one thing to understand why AI governance feels inadequate, start here.
🌍 Beyond Silicon Valley
What “Digital Access” Means When the Government Can Turn Off the Internet
Kashmir experienced the world’s longest internet shutdown in a democracy — over 550 days following August 2019. I remember not being able to speak with family in those months while I was at Stanford. During that period, every “digital inclusion” initiative — mobile banking, telemedicine, e-government — simply stopped working. Overnight. The region was practically thrown back in time.
I think about this when I read breathless takes about how fintech is “banking the unbanked” or how AI will “democratise access to expertise.” But the question is — for whom?
These claims assume infrastructure stability that much of the world cannot assume. The risk isn’t just that the technology doesn’t reach everyone — it’s that communities can become dependent on it and then have it removed as a political instrument.
❓Open Question
At what point does building more accurate surveillance become more dangerous than building less accurate surveillance?
The usual accountability argument is: this tool is biased, therefore fix it. But a more accurate ELITE might be a more effective ELITE. I’m not sure “make it work better” is the right ask when the underlying question — whether this targeting approach should exist at all — hasn’t been answered. I don’t know how to hold both of those at once. But I think you have to.
Thanks for reading. If something here made you think differently — or if you think I’ve got something wrong — reply and tell me. I read everything.


