Canada Votes. Silicon Valley Counts.
Bill C-25: What Works, What Doesn't, and Why It Matters
We have been following Bill C-25, the Strong and Free Elections Act, closely. What follows is not a critique. The people who drafted this bill obviously know the gaps that exist. They are working within real constraints, and none of what we are about to say changes that.
But we think it is worth walking through, the opportunity and the limits alike, because this is not a bill most people are going to read. If you are worried about what you are seeing in your feeds and in the news, and wondering whether you can trust it, this is where the bill works. Where it stops. And why.
Bill C-25 has a long road ahead before it becomes law. With three federal by-elections looming, it needs to move quickly.
An informed public is not a threat to good legislation. It is the whole point of it.
Let us start with a simple and very obvious example.
If you saw a fake video of Prime Minister Carney during the last federal election, one that looked exactly like a CBC news report, pushing a cryptocurrency investment scam, you were watching one of the problems this bill is trying to solve. Those videos were created by accounts managed from multiple countries, with possible Russian involvement.
The bill is broader than just deepfakes. It extends rules against foreign interference and election bribery to apply year-round, not just during official campaign periods. It bans political donations made with cryptocurrency, money orders, and prepaid cards, all of which are hard to trace. It tightens enforcement powers and significantly increases fines.
Here is where it gets challenging.
The platforms where most Canadians get their news and political information, Facebook, Instagram, X, YouTube, are built, owned, and operated in the United States. They do not answer to Canadian electoral law the way a Canadian company would. The Commissioner of Canada Elections has no jurisdiction over a server in California.
The bill gives the Commissioner new power to pursue information-sharing agreements with other countries. That is a step forward. But it is an unlikely one, given the current state of our relationship with the United States. A diplomatic agreement is not the same as enforcement power.
The bill bans AI-generated content intended to mislead voters, with an exception for satire and parody. That exception is legally necessary. But bad-faith actors can simply call their content a joke. Proving deceptive intent in court is slow and expensive, and when the person behind the video is anonymous and operating overseas, there may be nobody to prosecute at all.
The fines tell the same story. Twenty-five thousand dollars for an individual sounds serious, and it may deter domestic actors; there were cases last year involving national (Canadian) political operatives. But against a state-backed foreign influence operation, it is meaningless. You cannot fine someone you cannot find. And by the time enforcement catches up, the election is over.
Here is the part that has not gotten enough attention.
The most underreported piece of this package is the $31.5 million commitment to strengthen the Rapid Response Mechanism at Global Affairs Canada. This is the federal team that monitors foreign interference campaigns, tracks their origins, and names them publicly. That investment matters more than any fine in the bill.
When a government publicly identifies a foreign influence operation and exposes it to international scrutiny, it creates real consequences. Diplomatic pressure. Reputational damage. A harder operating environment for the actors behind it. We see this as democracy fighting back against state-sponsored disinformation. Not quietly in court. Out loud, in public.
So where does that leave us?
Bill C-25 is serious legislation from people who understand the threat. The gaps in it are not failures of will. They are the unavoidable reality of trying to protect a Canadian democratic process that runs, in large part, on American platforms governed by American rules.
If you are someone who wonders whether what you are reading is real, whether that video is genuine, whether the story in your feed came from a journalist or a foreign bot farm, knowing how this bill works and where it stops is not cause for despair. It is the starting point for the next conversation.
RELATED UPDATES
Hate speech on X is surging post-moderation. USC Viterbi research found a 50% increase in hate speech, 30% rise in homophobic posts, and 42% rise in racist posts on X between January 2022 and June 2023 after Community Notes replaced structured moderation. Meta is now following the same model.
The EU’s first full reporting cycle under the Digital Services Act disinformation code (July to December 2025) was published recently, covering election integrity and crisis-related disinformation from Google, Meta, Microsoft, and TikTok. The reports describe measures taken but don’t assess whether they worked. USC Viterbi study | EU DSA reports
The Council of Europe published a ten-building-block framework (February 2026) for national strategies to resist disinformation, covering research infrastructure, media literacy, election safeguards, and cross-border cooperation. It’s a policy design document, not enforcement, but the timing ahead of several European elections makes it relevant. Read the framework
Let us know if you see anything worth sharing: Canadians pushing back against attacks, misinformation, or disinformation.
Did we get something wrong? Tell us. It happens. We correct it.
Get Fact. Verification, information integrity, misinformation and the threats worth understanding, direct to your inbox.





Thank goodness we have the EU (and other international entities) standing guard in juxtaposition to the unfettered, Silicon Valley debasement of our online digital commons. Withdrawing from US-based social media to take up European and other international sources has really countered so much gloom for me. It's more than simply ideological or cultural differences. Laws have been enacted in the US (to protect the libertarian ethos which underpins the US-centric version of the metaverse) that have become structural threats to democracy.
Response to Get Fact First
This is a serious piece.
And it gets one thing exactly right:
The problem is not the law. It is the system the law operates inside.
Bill C-25 assumes something traditional:
That democracy can be protected through rules and enforcement.
Define the boundary. Punish violations. Stabilise the system.
But the environment has changed.
Political discourse no longer happens inside national systems.
It happens on platforms like Meta, X, and YouTube.
Those platforms are not Canadian.
They are not governed by Canadian law.
That creates a structural gap:
👉 authority is national
👉 influence is global
So enforcement becomes secondary.
You cannot fine actors you cannot reach. You cannot prosecute actors you cannot identify.
And even if you could—
you are already too late.
Because influence does not depend on attribution.
It depends on circulation.
A deepfake does not need to convince.
It only needs to spread.
That is why the most important part of the bill is not the ban.
It is the exposure mechanism.
Naming interference.
Making it visible.
Raising reputational cost.
That is a shift:
control → friction
punishment → exposure
But even that has limits.
Not all actors care about being exposed.
Some are anonymous. Some are state-backed.
Some are designed to be untraceable.
So the deeper problem remains:
Democracy is national. Information is not.
And that creates a permanent asymmetry.
Bill C-25 strengthens the edges.
But it cannot control the centre.
Because the centre is not where the law is.
It is where attention is.
And attention is governed by:
speed, emotion, engagement
—not truth.
So this is not a solution.
It is a recognition.
The question is no longer:
“How do we stop interference?”
But:
“How do we make the system resilient to it?”
That is harder.
Less visible.
But much closer to reality.