AI and accuracy at Spark
We use AI to keep our tutorials current. We’re also picky about what that means, because tutorial content that gets the details wrong is worse than tutorial content that doesn’t exist.
This page is the unfiltered version of how we work. If you spot anything in any article that doesn’t match what’s actually in the software, please flag it — the button is on every article. Reading this page is not required to flag something.
What AI does on Spark
When a vendor (Ableton, Resolume, Adobe, others) updates their software, their documentation usually updates too. Our pipeline does three things on a recurring schedule:
- Watches canonical sources. We harvest the official docs that vendors publish — the ones written by their own engineers and tech writers — and keep our own dated archive of them. When the source changes, we know.
- Compares our articles against those canonical sources. Each article on Spark links to one or more canonical sources it’s derived from. When a source changes, we re-evaluate our article against the new version, claim by claim.
- Drafts updates and routes them for review. When the AI finds something that’s drifted — a renamed parameter, a removed feature, new default behavior — it drafts a targeted edit and queues it for a human to review. Nothing gets published without that human step.
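If you like to think in code, the recurring sweep above can be sketched roughly like this. This is a minimal illustration, not our actual pipeline; every name here (`CanonicalSource`, `Article`, `sweep`, the fields) is hypothetical:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class CanonicalSource:
    url: str
    last_hash: str = ""  # fingerprint of the last archived snapshot

@dataclass
class Article:
    slug: str
    sources: list = field(default_factory=list)  # CanonicalSource objects

def sweep(articles, fetch, review_queue=None):
    """One pass of the drift sweep: fingerprint each canonical source,
    and if a source changed since the last archive, queue the article
    for claim-by-claim re-evaluation. A human reviews any drafted edit."""
    if review_queue is None:
        review_queue = []
    for article in articles:
        for source in article.sources:
            digest = hashlib.sha256(fetch(source.url).encode()).hexdigest()
            if digest != source.last_hash:  # the source changed
                source.last_hash = digest
                review_queue.append(article.slug)
    return review_queue
```

The point of the shape: the sweep only detects and queues; nothing in it publishes.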
What AI does not do on Spark
- It does not invent features. If our article says a knob does X, that claim is anchored to a canonical source we can show you. We list those sources on every AI-touched article.
- It does not auto-publish. Every change goes through a review queue. A person looks at the diff and either approves it, edits it, or rejects it.
- It does not write the brand voice from nothing. Our voice (“warm, witty, clearly explained”) is defined in a profile the AI applies to already-verified factual content; the facts always come first and are checked independently of the voice.
- It does not pretend to be human. Articles that have been touched by the pipeline carry a banner saying so, with the date of the last canonical check.
How accurate is this, really?
Honestly: mostly good, sometimes wrong, never dishonest about it.
The pipeline is built around a verification cascade: a fact-check step that compares each claim in an article against the canonical sources before the claim ships. Recent academic work on this kind of verification (FActScore, MiniCheck, and others) puts a well-tuned cascade somewhere in the high-90s percent accuracy on technical-document fact-checking. We tune ours conservatively: when the cascade is uncertain, the article is flagged for human review rather than auto-corrected.
That said — software is messy, edge cases exist, our understanding of any given vendor’s product can lag. You will sometimes find something wrong. When you do, we want to hear about it.
How to flag an inaccuracy
Every article has a “Flag this passage” button at the top and at the bottom. You can also highlight any sentence in the article, and a flag dialog will pop up.
You don’t need an account. We rate-limit by IP for anonymous flags but we never block them — false negatives (missed errors) are worse than false positives.
When you flag something:
- We capture the highlighted passage, a few sentences of surrounding context, and your reason if you give one.
- The flag goes into a triage queue.
- The pipeline runs a targeted re-verification on just that passage against the canonical sources — within minutes, usually.
- A human reviews the verification result and either confirms the issue (and queues a fix), dismisses it with a note, or asks for more info if the flag is ambiguous.
- If you were signed in when you flagged, we tell you what happened.
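Sketched as code, the triage of a single flag looks something like this. The record fields and the `verify` verdicts are hypothetical stand-ins for our internal tooling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    passage: str              # the highlighted text
    context: str              # a few surrounding sentences
    reason: Optional[str]     # optional, supplied by the flagger
    user_id: Optional[str]    # None for anonymous flags

def triage(flag, verify):
    """Targeted re-verification of just the flagged passage.
    `verify` checks the passage against the canonical sources and returns
    a verdict; a human makes the final call in every branch."""
    verdict = verify(flag.passage)
    if verdict == "contradicted":
        return "confirm-and-queue-fix"
    if verdict == "supported":
        return "dismiss-with-note"
    return "ask-for-more-info"  # ambiguous flags get a follow-up question
```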
How to find out when an article was last verified
Open any AI-touched article. The transparency banner at the top shows:
- The voice profile applied
- The canonical sources the article was derived from (with links — go check them yourself if you want)
- When the article was last verified against those sources
- The article version
If the article hasn’t been verified in a while, or our drift sweep flagged something, the banner will tell you that too.
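For the curious, the banner is driven by metadata along these lines. Field names and values are illustrative, not our actual schema:

```python
# Hypothetical shape of the per-article transparency metadata.
banner = {
    "voice_profile": "warm, witty, clearly explained",
    "canonical_sources": ["https://example.com/vendor-docs/feature"],
    "last_verified": "2024-06-01",  # date of the last canonical check
    "article_version": 7,
    "drift_flagged": False,         # set when the drift sweep finds something
}
```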
Why we do it this way
Three reasons.
One — software changes faster than human writers can keep up with, and stale tutorials are worse than no tutorials. The AI lets us watch everything, all the time, and triage what actually changed.
Two — the canonical-source anchoring is the difference between AI that helps and AI that hallucinates. Asking a model to “tell me about Ableton’s new sidechain” produces fiction. Asking it to “rewrite this passage to match what this specific Ableton documentation page says, citing your evidence” produces something verifiable.
Three — being upfront about it is the whole point. There’s a lot of AI content on the internet right now that pretends to be something else. We’d rather show you the receipts.
Questions or concerns
Email hello@pixelantern.com if you want to talk about any of this. If something feels off about an article and you’re not sure how to flag it, just write us — we’ll figure it out.
If you’re a vendor and you’d like to talk about how we represent your software, write us and we’ll set up a conversation.
If you’re a researcher or another platform thinking about doing similar work, we’re open to comparing notes.