Is This The Future We Wanted?
We're Asking the Wrong Questions About the Information War
Another turning point in artificial intelligence arrived in January, when Anthropic launched a new capability for its Claude AI model. The system could now access, edit, and move files directly on a user’s computer. In effect, the AI can now operate a machine on a person’s behalf.
Markets noticed immediately. Enterprise software stocks dropped sharply as investors realized what the shift implied: AI was no longer just answering questions. It was beginning to do work.
Within weeks, Microsoft moved to incorporate similar autonomous features into Microsoft Copilot.
The direction is clear. AI systems are evolving into agents, operating computers, executing tasks, and transacting on behalf of people, and it is happening faster than most expected.
That evolution matters for the problem we began with. Misinformation will not disappear. It may simply move deeper into these systems, into the software that increasingly acts for us, where it becomes harder to see and even harder to correct.
At the World Economic Forum Annual Meeting 2026, Elon Musk told the audience that AI could become smarter than any individual human within the year. The exact timeline may prove optimistic. The trajectory, however, is difficult to dispute.
AI will increasingly mediate how information is produced, distributed, and acted upon.
And that brings us back to the central problem.
Eighty-Six Percent Know the System Is Rigged
The trial documents are public. The polling is clear. Yet little has changed.
Start with the numbers.
Eighty-six percent of voters say companies like Meta and Google should be held accountable for what came out during recent court proceedings. Internal documents presented at trial showed that executives knew their platforms were building addictive products aimed at children. The companies’ own research connected those products to anxiety, depression, and suicidal thinking among children.
So the question is simple.
Why has nothing happened?
Part of the answer may lie in the language. The word “misinformation” often puts people on the defensive. It implies that someone was fooled, and the conversation quickly devolves into an argument about who is right.
A more useful concept might be ground truth.
Pilots use the term to describe the reliable reference points that tell them where they actually are. When those references disappear, the pilot is not suddenly incompetent. The pilot is flying blind.
The same thing is happening in the information environment. For years, digital systems have rewarded speed, outrage, and engagement rather than accuracy. That dynamic is not a human failure. It is a system failure.
And system failures require system fixes.
The polling reflects that instinct. People are not angry because they lost a debate about facts. They are angry because the system feels rigged.
The proposed fixes are not radical.
Restrictions on infinite scroll. Limits on push notifications. Rules governing algorithms that target children. These are familiar forms of consumer protection. We label food. We require seatbelts. We test drugs before they reach the market.
None of that is censorship. It is basic safety.
The platforms that shaped the modern information environment were relatively simple machines. They distributed content. They did not create it.
Generative AI changes that equation. These systems write, personalize, and scale information instantly.
Which means the stakes are rising quickly.
Politicians often wait for public permission before acting. In this case, the permission already exists.
Eighty-six percent is not a close call. It is a mandate.
The hesitation from governments is understandable. Democracies worry about restricting speech or regulating too aggressively. Those concerns should shape careful policy.
But they should not justify doing nothing.
Accuracy should not lose to virality. Children should not serve as test subjects for products companies already knew were harmful. And a society that values democratic norms must protect the systems that make those norms possible.
Canada needs a real defence against misinformation. If you think that matters, help build it. Share this post. Bring in someone who should be part of this conversation.
✔️ Use Laura.
✔️ Help build a Canada with facts. 🔗 getfact.ca
We apply the best in human and machine intelligence to verify what’s being said online about Canada and its people.
Read more: GetFact.ca
Watch: YouTube
Follow: Facebook | Instagram | TikTok | Bluesky
Listen to GetFact by Kevin Newman (Podcast):
Spotify | Apple Podcasts
Let us know if you see anything worth sharing: Canadians pushing back against attacks, misinformation, or disinformation.
Did we get something wrong? Tell us. It happens. We correct it.

Your argument about “ground truth” captures something important. The central problem in the information environment is increasingly structural rather than individual. When systems reward speed, outrage, and engagement over verification, the resulting distortions are predictable. The pilot analogy is useful: if the instruments are unreliable, the issue is not the competence of the pilot but the integrity of the system.
But the emergence of AI agents adds a second layer to the problem.
For most of the last decade, platforms primarily distributed information. Users still had to read, interpret, and act on it themselves. The new generation of AI systems is beginning to act on behalf of users—retrieving data, executing tasks, moving files, making decisions inside software environments. When systems transition from information distribution to delegated action, the consequences of faulty inputs become far more significant.
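To make that shift concrete, here is a deliberately simplified sketch in Python. Every name in it is hypothetical (fetch_claim, send_payment, the two agent functions), a stand-in rather than any vendor's real API; the point is only the shape of the risk. Once an agent acts rather than informs, an unverified input becomes an unverified action.

```python
# Illustrative sketch only: fetch_claim(), send_payment(), and both agents
# are hypothetical stand-ins, not any vendor's actual API.

def fetch_claim(source_url: str) -> dict:
    """Retrieval step: returns unverified data pulled from the open web."""
    return {"vendor": "Acme Corp", "invoice_total": 1250.00, "source": source_url}

def send_payment(vendor: str, amount: float) -> None:
    """Action step: in a real agent, this is an irreversible side effect."""
    print(f"Paid {vendor} ${amount:,.2f}")

def naive_agent(task_url: str) -> None:
    # Information flows straight into action, with no verification gate.
    claim = fetch_claim(task_url)
    send_payment(claim["vendor"], claim["invoice_total"])

def guarded_agent(task_url: str, verify) -> None:
    # Ground truth as an operational gate: no verified source, no action.
    claim = fetch_claim(task_url)
    if verify(claim):
        send_payment(claim["vendor"], claim["invoice_total"])
    else:
        print(f"Held for review: {claim['source']} could not be verified")

guarded_agent("https://example.com/invoice", verify=lambda claim: False)
# -> Held for review: https://example.com/invoice could not be verified
```

The difference between the two agents is one conditional, which is the point: verification is cheap to express and catastrophic to omit once the system has the authority to act.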
In that sense, the concept of ground truth becomes operational rather than merely informational. It is no longer only about whether people believe something inaccurate. It is about whether automated systems are acting on data that has been verified, contextualized, and traceable.
This raises three structural challenges.
First, verification must move upstream. In an environment where AI systems generate and process information at scale, the reliability of source data becomes a core infrastructure issue, much like financial auditing or aviation safety.
Second, transparency becomes critical. If autonomous systems mediate decisions or execute actions, users must be able to trace where the underlying information originated and how it was processed; a brief sketch of what such a record could look like follows the third point.
Third, institutional trust becomes part of the technical architecture. Democracies have traditionally relied on distributed institutions—courts, universities, scientific bodies, professional media—to maintain shared reference points. AI systems will increasingly depend on those institutions as anchors of verified knowledge.
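What might traceable information look like in practice? Here is a minimal, purely illustrative sketch. Every field name is assumed for the sake of the example, not drawn from any existing standard, but it shows how the three challenges above can map onto a single data structure.

```python
# Hypothetical provenance record: all field names are assumed for
# illustration, not taken from any existing standard or product.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProvenanceRecord:
    claim: str                          # the statement an agent might act on
    source: str                         # where it originated (first challenge)
    processing_chain: list = field(default_factory=list)  # how it was handled (second challenge)
    verified_by: Optional[str] = None   # which institution anchored it (third challenge)

    def actionable(self) -> bool:
        """A simple policy: no verified institutional anchor, no automated action."""
        return self.verified_by is not None

record = ProvenanceRecord(
    claim="Invoice #4411 totals $1,250.00",
    source="https://example.com/invoice-4411",
    processing_chain=["retrieved", "OCR extracted", "summarized by model"],
)
print(record.actionable())  # False: nothing should act on this claim yet
```

The design choice worth noticing is that verification is a property of the record itself rather than a step bolted on afterward, which is what moving verification upstream amounts to.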
None of this necessarily implies censorship. It points instead toward standards, auditing, and accountability mechanisms similar to those that govern other safety-critical systems.
The core issue, therefore, may not be misinformation alone. It is the preservation of reliable reference points in a digital environment where information is now produced, interpreted, and increasingly acted upon by machines.
In aviation, when the instruments fail, pilots are trained to rely on redundant systems and verified references.
The information ecosystem may now need to develop the same principle.
I saw how Claude now edits files on computers. That agent future is here sooner than expected. In Ottawa, the ETHI committee heard experts warn that AI will make deepfakes impossible to spot. They discussed it while reviewing our online harms bill. Check the evidence here: https://www.ourcommons.ca/DocumentViewer/en/44-1/ETHI/meeting-129/evidence. It makes you wonder whether our lawmakers can keep pace.