
On Compelled Silence and Compelled Lies

If a canary song is heard, but no one verifies the source, are we safe or already compromised?

Someone on We2.ee asked me about the “warrant canary” I recently introduced there. What it’s good for, how it holds up legally, whether canaries are more than just digital virtue signaling. It’s a sharp question, and one I wanted to explore thoroughly. So let’s dive into what warrant canaries are, their uncertain legal terrain, and why—and how—I chose to implement one anyway.

From Mineshafts to Mainframes

The term “warrant canary” originates in a familiar but grim coal mining practice. For generations, miners would carry caged canaries—small, famously sensitive songbirds—down into the shafts with them. The bird’s well-being was a constant, living indicator of air quality. As long as the canary sang, the air was safe. If it fell silent, it served as an urgent warning to GTFO before invisible toxic gases overwhelmed the miners, too.

A warrant canary is an attempt to apply this idiomatic concept in an entirely different context where the invisible threat isn’t carbon monoxide but a secret government order accompanied by a gag clause. The canary’s ‘song’, here, is a regularly published statement from a service provider asserting that no such orders have been received. Its sudden absence is the signal. But this elegant concept raises some thorny questions: can we trust a canary that continues to sing? And if it falls silent, what does that actually tell us?


Compelled Silence ≠ a Compelled Lie

The entire legal theory of a warrant canary hinges on a frankly audacious distinction that has never been definitively tested in court. It operates in the grey area between the government's power to compel silence (via a gag order) and the First Amendment's protection against compelled speech.

Laws like the USA PATRIOT Act and the Foreign Intelligence Surveillance Act (FISA) grant government agencies the power to issue secret orders for data, like National Security Letters (NSLs). These often come with gag orders, making it a crime for the recipient to disclose the order's existence. That is compelled silence.

However, forcing a provider to continue publishing a statement they know to be false (e.g., "we have not received any NSLs") would be compelling them to lie. This is a form of compelled speech that would face much stricter constitutional scrutiny. The warrant canary is designed to exploit this gap. Rather than lie, the provider simply stops speaking.

But let's be clear: this is a high-stakes legal gamble.

A prosecutor would almost certainly argue that the sudden, deliberate silence is a form of communication intended to circumvent the gag order. And given this administration’s relentless assault on civil liberties, paired with a Supreme Court that has shown willingness to reconsider long-standing precedents, it is far from certain that a court would favor the canary's subtle logic over a government demand for total secrecy.

A provider who lets their canary expire isn't just signaling to their users; they are potentially volunteering to be the test case in a novel, expensive, and foundational legal battle they are not guaranteed to win.


Warrant Canaries in the Wild

With this legal precarity as the backdrop, three prominent cases demonstrate the challenges, the triumphs, and why precise design is everything when navigating such hostile territory.

Apple: The Canary, the Standoff, and the Long Game

The most famous and fiercely debated warrant canary incident is Apple's. To understand it, you have to remember the context: it was November 2013, the ink barely dry on the Snowden/PRISM revelations. Public trust in tech companies' ability to protect user data from government overreach was at an all-time low. In its very first transparency report, Apple made a bold claim:

“Apple has never received an order under Section 215 of the USA Patriot Act. We would expect to challenge such an order if served on us.”

For privacy advocates, this was a masterstroke. The implication was that if this sentence ever vanished, users should assume Apple had been served such an order and was legally gagged from saying so.

Then, in the very next report, and in every one that followed, the sentence was gone. Rampant speculation ensued. Had the canary worked, signaling a secret order? Or was it just a lawyerly rewording in response to new DOJ reporting guidelines? The ambiguity was the message, and no one outside Apple knows for sure.

Of course, Apple’s canary saga doesn't exist in a vacuum. It must be viewed as part of a bigger pivot toward user privacy under Tim Cook, memorialized in his 2014 declaration that privacy is a fundamental human right. Just two years later, Apple’s commitment to user privacy faced an arguably much more meaningful test in its famous 2015–2016 standoff with the FBI. The Bureau demanded Apple create a backdoor into the San Bernardino shooter's iPhone; Apple refused. Apple aced the test, moving the conversation from the fine print of a transparency report to the front page of every major newspaper.

🍎
I have a lot more to say, as it turns out, about functional privacy in Apple's ecosystem. Watch this blog for posts about Advanced Data Protection, which brings long-awaited end-to-end encryption to the bulk of iCloud data, and Lockdown Mode, which—contrary to Apple’s documentation—isn’t just for predictable targets of “extremely rare and highly sophisticated cyber attacks.”

RiseUp: Silence in Seattle Cements Concept

If the Apple story is about interpreting ambiguous signals from a corporate giant, the experience of the activist-focused tech collective RiseUp is about a canary serving its exact, intended purpose under immense pressure. In late 2016, the collective failed to update the warrant canary on their website.

The silence did not go unnoticed. For a user base of activists and organizers deeply attuned to such signals, the lapse was an immediate red flag. Commentators like William Gillis at the Center for Stateless Society quickly documented the "dead canary," correctly interpreting it as a near-certain sign that RiseUp had been served a secret government order with a gag clause. The system, at least as an alarm, had worked.

Months later, in February 2017, the collective was finally able to publish a statement explaining the situation. They had received two sealed FBI warrants, both accompanied by gag orders, related to an international DDoS extortion ring and a ransomware operation—criminal activities that violated their terms of service. Faced with the choice of perjuring themselves by issuing a false canary or facing contempt of court charges and likely incarceration, they chose the correct (and only legally viable) option: they said nothing and let the canary expire.

In their own words:

"The canary was so broad that any attempt to issue a new one would be a violation of a gag order... This is not desirable, because if any one of a number of minor things happen, it signals to users that a major thing has happened."

As a result, RiseUp changed their canary, making it narrower and more specific.

Cloudflare: Questions of Scale and the Plausibility of Perfection

The lessons from Apple's ambiguity and RiseUp's high-stakes test culminate in the Cloudflare approach: a masterclass in surgical precision. As one of the core infrastructure providers of the modern web, the promises they make carry enormous weight. Their canary isn't a single sentence in a report, but a specific list of substantive actions they have never taken.

Since at least 2019, their transparency reports have consistently included the following six attestations:

1. Cloudflare has never turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.
2. Cloudflare has never installed any law enforcement software or equipment anywhere on our network.
3. Cloudflare has never provided any law enforcement organization a feed of our customers' content transiting our network.
4. Cloudflare has never modified customer content at the request of law enforcement or another party.
5. Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
6. Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

Given this clarity, it was interesting to see the community concern that erupted on platforms like Hacker News in 2023 and early 2024. Vigilant users noticed that the date stamp on their transparency report page appeared to be stale, leading to speculation that one of these six promises had been broken. The situation highlighted the hair-trigger sensitivity of the community, and even prompted a response from Cloudflare's CEO, Matthew Prince, who noted the delay was likely an oversight.

This confusion appears to stem from a stale date on a webpage, not a change in the canary's substance. My own research confirms the six attestations above have remained consistent through every semi-annual report. As of this writing, their canary page is fully up-to-date and those same six attestations remain intact.

Far from being a failure, I see the Cloudflare case as a textbook example of a canary working as intended, albeit in a roundabout way. The precision of their statements leaves little room for misinterpretation, and the community's swift reaction to a perceived lapse—even one that turned out to be a clerical error—proves that people are watching. That vigilance is a non-negotiable component of any successful canary system.

And yet, one must acknowledge the extraordinary nature of Cloudflare's position. Given that they route roughly 20% of the entire web, the claim to have a perfect record against immense, constant state-level pressure is, to say the least, a remarkable one. This doesn't invalidate their canary; rather, it underscores why the community's vigilance is so important. For those who wish to monitor for themselves, Cloudflare helpfully provides an RSS feed for its transparency reports.
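For those who want to script that vigilance, the essential check is simply "when was the newest item published?" Here is a minimal sketch; the feed URL is deliberately omitted (fetch it from Cloudflare's transparency page), and the stub XML below is purely illustrative:

```python
import xml.etree.ElementTree as ET
from datetime import datetime
from email.utils import parsedate_to_datetime

def latest_item_date(rss_xml: str) -> datetime:
    """Return the pubDate of the newest <item> in an RSS 2.0 feed."""
    root = ET.fromstring(rss_xml)
    dates = [
        parsedate_to_datetime(item.findtext("pubDate"))
        for item in root.iter("item")
        if item.findtext("pubDate")
    ]
    return max(dates)

# Stub feed for illustration; in practice, download the XML from
# Cloudflare's transparency-report RSS feed and pass it in.
sample = """<rss version="2.0"><channel>
  <item><title>Transparency Report H1 2024</title>
    <pubDate>Mon, 15 Jul 2024 00:00:00 GMT</pubDate></item>
</channel></rss>"""
print(latest_item_date(sample).date())  # 2024-07-15
```

A cron job comparing that date against a staleness threshold turns "people are watching" from a hope into a standing process.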


Words Are Very Necessary

So, after wading through the legal ambiguities and the real-world case studies, the question remains: is running a warrant canary a worthwhile endeavor, or just digital virtue signaling?

For me, it’s a yes—but only if it is a direct and thoughtful response to the lessons learned from those who came before:

  1. The attestations must be precise. As RiseUp learned, a broad canary is brittle. The We2.ee canary makes only four, narrowly-scoped promises focused on the most fundamental compromises: secret court orders, physical seizure, compelled surveillance modifications, and gag orders that would force a lie by omission.
  2. The proof must be verifiable and decentralized. As Apple’s ambiguity taught us, a canary needs a strong, public proof of life. Each We2.ee canary is time-stamped with a current news headline and a Monero block hash, then cryptographically signed with my PGP key.
  3. The process must be deliberate. A fully automated canary is worthless—a sophisticated adversary who compromises a server could simply let it keep running. My ⁠canary.py script gathers data but requires my manual review and GPG passphrase to sign and post the final message.
BTS: We2.ee’s weekly warrant canary
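For the curious, the core of that flow can be sketched in a few lines of Python. This is an illustrative outline, not the actual canary.py: the attestation wording, the headline and block-hash inputs, and the message layout are all stand-ins.

```python
import subprocess
from datetime import date

def compose_canary(headline: str, monero_block_hash: str) -> str:
    """Assemble the unsigned canary text with two proofs of freshness:
    a news headline (shows the text is recent) and a Monero block hash
    (shows it could not have been written in advance)."""
    return (
        f"We2.ee warrant canary — {date.today().isoformat()}\n\n"
        "As of this date, we have received no secret court orders\n"
        "and no gag orders. (Placeholder wording, not the real text.)\n\n"
        "Proof of freshness:\n"
        f"  headline: {headline}\n"
        f"  monero block: {monero_block_hash}\n"
    )

def sign_canary(text: str) -> str:
    """Clearsign with GPG. Deliberately interactive: gpg prompts for
    the passphrase, so a compromised server can't sign on its own."""
    result = subprocess.run(
        ["gpg", "--clearsign"],
        input=text, capture_output=True, text=True, check=True,
    )
    return result.stdout
```

The interactive `gpg --clearsign` step is the point: automation gathers the inputs, but only a human with the passphrase can produce a valid signature.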

Of course, even this design has weaknesses. A missing canary is still ambiguous—it could mean a gagged order, but it could also mean I’m on vacation. It’s vulnerable if an adversary compromises not just the server, but my private PGP keys. As the Cloudflare case proves, it only works if people are actually watching.

If a canary song is heard, but no one verifies the source,
are we safe or already compromised?

How to Verify the Canary

Step 1: Import My Public Key (One-time setup)

My public PGP key is the ultimate source of truth for my online identity. You can import it from the OpenPGP.org keyserver:

# Import the key from a public server
gpg --keyserver keys.openpgp.org --recv-keys '323A8A2C47B376224B3613B7535B265AEDBE5B44'

# Verify the fingerprint of the imported key
gpg --fingerprint sij@sij.law

# This should display: 323A 8A2C 47B3 7622 4B36 13B7 535B 265A EDBE 5B44

Step 2: Verify the Canary Message (Weekly check)

With my key imported, you can download the latest canary statement from the sw1tch repository and verify its cryptographic signature.

# Download the latest canary
curl -s https://sij.ai/sij/sw1tch/raw/branch/main/canary.txt > canary.txt

# Verify its signature
gpg --verify canary.txt

A successful check will show a Good signature from my key.

A warning about the key not being certified with a trusted signature is normal; the `Good signature from "Sangye Ince-Johannsen (Attorney) <sij@sij.law>"` line is what confirms the message is authentic.

The Bottom Line

This all leads to a simple protocol: if the weekly canary in the #announcements:we2.ee room is more than three days late without a prior announcement, ping me. If there’s still radio silence, you should assume the canary's purpose has been triggered and act accordingly.
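That three-day rule is easy to automate on the watcher's side. A hedged sketch, assuming the canary text contains an ISO date (YYYY-MM-DD); the real format may differ:

```python
import re
from datetime import date

def canary_is_stale(canary_text: str, max_age_days: int = 3) -> bool:
    """Find the first ISO date (YYYY-MM-DD) in the canary and report
    whether it is older than the allowed window."""
    m = re.search(r"\d{4}-\d{2}-\d{2}", canary_text)
    if m is None:
        return True  # no date at all is itself a red flag
    issued = date.fromisoformat(m.group())
    return (date.today() - issued).days > max_age_days

print(canary_is_stale("canary issued 2020-01-01"))  # True: long expired
```

Pair this with the `gpg --verify` step above and you have the whole watcher protocol in one script: fetch, verify the signature, check the age, and ping me if either test fails.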

Ultimately, running a canary for We2.ee is not about digital virtue signaling. It is about transforming an abstract commitment to user privacy into a concrete, weekly, verifiable action. It is not a solution to state surveillance, but it is an alarm bell—and in a world of compelled silence, a reliable alarm bell is a powerful tool.


The free open source code for sw1tch, including the canary.py module, is available on my personal code hub.

sij.ai/sij/sw1tch
