Start Here
You found this page because someone told you about Get Fact, or you stumbled across one of our posts. Either way, here’s what you need to know.
Why Subscribe?
Subscribing gives you full access to every edition of the newsletter along with the complete archive.
The Problem
AI creates information faster than anyone can check it. Deepfakes, synthetic documents, and machine-written narratives now move at a pace our institutions were never designed to handle.
Misinformation cuts across everything that matters. It touches climate, crime, health, your business, and the information your teams rely on every day. This is not a niche problem. It shapes decisions in every part of life.
The old model relied on fact checking. Chase the falsehood after it spreads. Publish a correction almost no one sees. Repeat. That belonged to a world where misinformation traveled at the speed of print. We do not live in that world anymore.
There is fake content that is obviously funny, and then there is the content in the middle zone, where things get genuinely confusing.
What We Are Building
Get Fact is building verification infrastructure. Think of it the way you think about spellcheck. You do not call in a linguist every time you write an email. You have a system that quietly flags problems before they reach anyone else.
That is what we are creating for trust. Our AI tool, Laura, analyzes claims in news stories in real time and shows what is verified, what needs context, and what does not hold up. No opinions. No ideology. Just evidence, sourcing, and transparency.
No one has time to ask an AI three times to show a source. Verification should not feel like a scavenger hunt. We are making that process effortless and built into the way people already work.
We are a growing community of people who believe trust should not be controlled by a few giant platforms. We are building the infrastructure and the networks needed to meet this new AI-driven information crisis. Every person who joins strengthens the work and helps shape what comes next.
Who We Are
There are many of us behind Get Fact, but here on Substack most posts are written by Wilf Dinnick. Wilf spent twenty-five years reporting around the world, often from conflict zones, for CNN, Al Jazeera, ABC News, and Global News.
He saw how information, or the lack of it, can tear communities apart long before AI sped everything up. Later, he led global digital at Al Jazeera.
We started Get Fact because we began to see the same patterns appearing here in Canada and across democracies everywhere. The tactics are familiar. The technology is simply faster.
What This Newsletter Covers
Get Fact First is where we explore the forces reshaping trust in the AI era and how we are responding. You will find:
The landscape
What is happening with AI-driven misinformation, information warfare, and the collapse of institutional trust.
The infrastructure
Updates on Laura and the broader movement to make verification a built in part of how information flows.
The policy
What governments are doing and not doing to build verification infrastructure. Canada, the EU, and the gaps that remain.
The argument
Why verification needs to work like spellcheck instead of a courtroom. Why humans still matter. Why speed without trust is a losing game.
Where to Start Reading
If you are new, these pieces will give you the foundation:
“Same as the Old Boss” - Why the architects of our broken information ecosystem are now building AI, and why that matters.
“Fact Checking Was Yesterday’s Fix” - The case for shifting from reactive fact checking to proactive verification.
“How Not to Lose Canada’s Information War” - The case against treating misinformation as cleanup work.
The Mission
Facts should work like nutrition labels. You do not have to love them, but it matters that they exist. You should be able to see where someone got their information and decide for yourself.
Get Fact is building the infrastructure to make that possible for newsrooms, governments, companies, and anyone making decisions in a world where synthetic content is the default.
We are non-partisan. We do not take sides in politics. We take the side of evidence.
Stay Connected
Subscribe to get every post delivered to your inbox. Free subscribers get full access to the newsletter.
For now, we are not charging anyone to be part of this work. If you choose to donate, your support goes straight to keeping the lights on and building the tools that make verification possible. We are grateful to our paying supporters for helping this project grow.
As we grow, paid tiers will offer deeper analysis, early access to Laura, and opportunities to help shape what we build.
Try Laura
Our AI verification tool is in beta: https://www.getfact.ca/
If you want to test it, reach out.
Get in touch
If you work in government, media, or corporate communications and want to explore what verification infrastructure could look like in your organization, I would love to talk.
Wilf@getfact.ca / Hello@getfact.ca
Welcome. The work is urgent. Let’s get started.
Canada needs a real defence against misinformation. If you think that matters, help build it. Share this post. Bring someone who should be part of this conversation.
✔️ Use Laura.
✔️ Help build a Canada with facts.
🔗 getfact.ca
We apply the best in human and machine intelligence to verify what’s being said online about Canada and its people.
Read more: GetFact.ca
Watch: YouTube
Follow: Facebook | Instagram | TikTok | Bluesky
Listen to GetFact by Kevin Newman (Podcast):
Spotify | Apple Podcasts
Let us know if you see anything worth sharing: Canadians pushing back against attacks, misinformation, or disinformation.
Did we get something wrong? Tell us. It happens. We correct it.
DECLARATION OF MATERIAL CONFLICT OF INTEREST:
This group hereby declares an irreconcilable conflict: decades spent deploying information technology in war zones, around governments, and across democracies, by journalists, military personnel, government officials, and business leaders, have resulted in a wildly pro-technology bias. Declarants use AI daily. Find it astonishing. Are enthusiastic early adopters.
Said enthusiasm creates disclosed conflict: Declarants simultaneously (a) depend on tools that manufacture reality, and (b) spent careers verifying reality in contexts where getting it wrong destabilized democracies and risked people’s lives.
This is not Luddism. This is pattern recognition from professionals who’ve seen what happens when powerful information tools deploy without verification infrastructure. Spoiler: bad for our democracies.
The undersigned reserve the right to be both amazed and alarmed, often simultaneously, sometimes mid-sentence.