Ask ChatGPT to help you write a news story and you'll get something that sounds like a news story. It'll be fluent, structured, and completely useless — because it doesn't know what you know.
It doesn't know your notes. It doesn't know the interview you did this morning. It doesn't know your publication's style guide, or the three previous articles you wrote on this topic, or the document a source sent you last week. You'd have to copy-paste all of that into a prompt every single time. Nobody does that. So the AI works blind, and the output is generic.
The problem is worse than it looks
Even when you do provide context — carefully pasting your notes, your draft, your background research into the chat window — the AI loses it. Ask a follow-up question three messages later and it's already forgotten half of what you gave it. The context window fills up, older information drops out, and you're back to square one. You either re-paste everything with every query, which is an enormous waste of time, or you accept degraded responses.
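To see why, here is a minimal sketch of how a typical chat interface might manage its window. The names, the token heuristic, and the drop-oldest policy are illustrative assumptions, not any specific product's code:

```typescript
type Message = { role: "user" | "assistant"; text: string };

// Rough token estimate: ~4 characters per token is a common heuristic.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// A typical chat client keeps only as much history as fits its budget,
// dropping the oldest messages first. Pasted notes are just another old
// message, so they are among the first things to go.
function fitToWindow(history: Message[], budgetTokens: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  // Walk backwards from the newest message until the budget is spent.
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].text);
    if (used + cost > budgetTokens) break; // everything older is silently dropped
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}
```

The notes you pasted in message one are ordinary history like everything else. A few long exchanges later they fall outside the budget, and the model answers as if it never saw them.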
This is why AI in journalism has never been as helpful as it could be. The technology is capable. The models are powerful. But the delivery mechanism — a blank chat window that forgets what you told it five minutes ago — is fundamentally broken for any serious professional workflow.
Context is not a feature — it's the architecture
RE::DACT was designed around one insight: AI is only as good as what it knows about your work. Not general knowledge — your specific work. Your notes, your sources, your research, your editorial standards.
In RE::DACT, you select any text or note and add it as context for the AI with one click. Your documentation, your earlier research, a paragraph you're struggling with — the AI sees everything it needs without you having to construct a prompt. In Settings, you define your publication, your style, your standards once. The AI remembers. Every response already fits your workflow.
The context doesn't disappear between queries. It doesn't degrade over a long session. It's managed by the workspace, not the chat window — so the AI always has exactly the context it needs, no matter how many questions you've asked.
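As an architectural sketch, here is one way workspace-managed context differs from a chat transcript. This is an illustration of the idea, not RE::DACT's actual implementation; every name in it (Workspace, pin, buildPrompt) is hypothetical:

```typescript
type ContextItem = { label: string; text: string };

// Hypothetical workspace: pinned context lives outside the chat transcript.
class Workspace {
  private pinned: ContextItem[] = []; // notes, drafts, research added with one click
  private profile = "";               // publication, style, standards from Settings

  setProfile(profile: string) { this.profile = profile; }
  pin(item: ContextItem) { this.pinned.push(item); }

  // Every query is assembled fresh from the workspace, so the pinned
  // context is present on the 1st question and the 50th alike.
  buildPrompt(question: string): string {
    const context = this.pinned
      .map((item) => `## ${item.label}\n${item.text}`)
      .join("\n\n");
    return `${this.profile}\n\n${context}\n\nQuestion: ${question}`;
  }
}
```

The transcript can grow as long as it likes; the prompt the model actually receives is rebuilt from the workspace on every call, so pinned context never ages out.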
What changes when AI has context
A fact-checking agent that knows your article can point to specific claims that need sourcing — not generate a generic checklist. A research assistant that knows your topic can find what's actually missing from your coverage — not return the first ten Google results. A writing assistant that knows your notes can suggest a lead that reflects what you actually learned — not produce a template.
The difference between a generic AI tool and a useful one isn't the model. It's whether the model knows what you're working on — and keeps knowing it.
Research that happens in the background
Context isn't only what you feed the AI. It's also what the AI feeds you. With RE::DACT's Sparks system, you define a topic and install a browser extension; from then on, it flags relevant content as you browse the web, even when you're not actively researching. You collect sparks for later, monitor specific sources, and build a research base without interrupting your actual work.
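One plausible shape for that flagging step, sketched with made-up names and a deliberately naive keyword score (a real system would more likely use embeddings or a model call):

```typescript
// Hypothetical relevance check a browser extension might run on each page.
// Keyword overlap is crude, but it shows the shape of the idea.
function relevanceScore(pageText: string, topicTerms: string[]): number {
  const text = pageText.toLowerCase();
  const hits = topicTerms.filter((term) => text.includes(term.toLowerCase()));
  return hits.length / topicTerms.length; // fraction of topic terms present
}

function maybeFlagAsSpark(pageText: string, topicTerms: string[]): boolean {
  // Flag the page as a spark if enough of the topic shows up in it.
  return relevanceScore(pageText, topicTerms) >= 0.4; // threshold is arbitrary
}
```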
The result is an AI assistant that gets smarter the more you use it. Not because the model improves — but because it accumulates the context of your actual journalism.