
On Compelled Silence and Compelled Lies

If a canary song is heard, but no one verifies the source, are we safe or already compromised?

Someone on We2.ee asked me about the “warrant canary” I recently introduced there: what it’s good for, how it holds up legally, whether canaries are more than just digital virtue signaling. They’re sharp questions, and ones I wanted to explore thoroughly. So let’s dive into what warrant canaries are, their uncertain legal terrain, and why—and how—I chose to implement one anyway.

From Mineshafts to Mainframes

The term “warrant canary” originates in a familiar but grim coal mining practice. For generations, miners would carry caged canaries—small, famously sensitive songbirds—down into the shafts with them. The bird’s well-being was a constant, living indicator of air quality. As long as the canary sang, the air was safe. If it fell silent, it served as an urgent warning to GTFO before invisible toxic gases overwhelmed the miners, too.

A warrant canary is an attempt to apply this idiomatic concept in an entirely different context where the invisible threat isn’t carbon monoxide but a secret government order accompanied by a gag clause. The canary’s ‘song’, here, is a regularly published statement from a service provider asserting that no such orders have been received. Its sudden absence is the signal. But this elegant concept raises some thorny questions: can we trust a canary that continues to sing? And if it falls silent, what does that actually tell us?


The entire legal theory of a warrant canary hinges on a frankly audacious distinction that has never been definitively tested in court. It operates in the grey area between the government's power to compel silence (via a gag order) and the First Amendment's protection against compelled speech.

compelled silence ≠ a compelled lie

Laws like the USA PATRIOT Act and the Foreign Intelligence Surveillance Act (FISA) grant government agencies the power to issue secret orders for data, like National Security Letters (NSLs). These often come with gag orders, making it a crime for the recipient to disclose the order's existence. That is compelled silence.

However, forcing a provider to continue publishing a statement they know to be false (e.g., "we have not received any NSLs") would be compelling them to lie. This is a form of compelled speech that would face much stricter constitutional scrutiny. The warrant canary is designed to exploit this gap. Rather than lie, the provider simply stops speaking.

But let's be clear: this is a high-stakes legal gamble.

A prosecutor would almost certainly argue that the sudden, deliberate silence is a form of communication intended to circumvent the gag order. And given this administration’s relentless assault on civil liberties, paired with a Supreme Court that has shown willingness to reconsider long-standing precedents, it is far from certain that a court would favor the canary's subtle logic over a government demand for total secrecy.

A provider who lets their canary expire isn't just signaling to their users; they are potentially volunteering to be the test case in a novel, expensive, and foundational legal battle they are not guaranteed to win.


Warrant Canaries in the Wild

With this legal precarity as a backdrop, three prominent cases demonstrate the challenges, the triumphs, and why precise design is everything when navigating such hostile territory.

Apple: The Canary, the Standoff, and the Long Game

The most famous and fiercely debated warrant canary incident is Apple's. To understand it, you have to remember the context: it was November 2013, the ink barely dry on the Snowden/PRISM revelations. Public trust in tech companies' ability to protect user data from government overreach was at an all-time low. In its very first transparency report, Apple made a bold claim:

“Apple has never received an order under Section 215 of the USA Patriot Act. We would expect to challenge such an order if served on us.”

For privacy advocates, this was a masterstroke. The implication was that if this sentence ever vanished, users should assume Apple had been served such an order and was legally gagged from saying so.

Then, in the very next report and all that would follow, the sentence was gone. Rampant speculation ensued. Had the canary worked, signaling a secret order? Or was it just a lawyerly rewording in response to new DOJ reporting guidelines? The ambiguity was the message, and no one outside Apple knows for sure.

Of course, Apple’s canary saga doesn't exist in a vacuum. It must be viewed as part of a bigger pivot toward user privacy under Tim Cook, memorialized in his 2014 statement that privacy is a fundamental human right. Just two years later, Apple’s commitment to user privacy faced an arguably much more meaningful test in its famous 2015–2016 standoff with the FBI. The Bureau demanded Apple create a backdoor into the San Bernardino shooter's iPhone; Apple refused. Apple aced the test, moving the conversation from the fine print of a transparency report to the front page of every major newspaper.

🍎
I have a lot more to say, as it turns out, about functional privacy in Apple's ecosystem. Watch this blog for posts about Advanced Data Protection, which brings long-awaited end-to-end encryption to the bulk of iCloud data, and Lockdown Mode, which—contrary to Apple’s documentation—isn’t just for predictable targets of “extremely rare and highly sophisticated cyber attacks.”

RiseUp: Silence in Seattle Cements Concept

If the Apple story is about interpreting ambiguous signals from a corporate giant, the experience of the activist-focused tech collective RiseUp is about a canary serving its exact, intended purpose under immense pressure. In late 2016, the collective failed to update the warrant canary on their website.

The silence did not go unnoticed. For a user base of activists and organizers deeply attuned to such signals, the lapse was an immediate red flag. Commentators like William Gillis at the Center for a Stateless Society quickly documented the "dead canary," correctly interpreting it as a near-certain sign that RiseUp had been served a secret government order with a gag clause. The system, at least as an alarm, had worked.

Months later, in February 2017, the collective was finally able to publish a statement explaining the situation. They had received two sealed FBI warrants, both accompanied by gag orders, related to an international DDoS extortion ring and a ransomware operation—criminal activities that violated their terms of service. Faced with a choice between issuing a false canary and breaking the gag order outright (risking contempt of court and likely incarceration), they took the only legally viable path: they said nothing and let the canary expire.

In their own words:

"The canary was so broad that any attempt to issue a new one would be a violation of a gag order... This is not desirable, because if any one of a number of minor things happen, it signals to users that a major thing has happened."

As a result, RiseUp changed their canary, making it narrower and more specific.

Cloudflare: Questions of Scale and the Plausibility of Perfection

The lessons from Apple's ambiguity and RiseUp's high-stakes test culminate in the Cloudflare approach: a masterclass in surgical precision. As one of the core infrastructure providers of the modern web, the promises they make carry enormous weight. Their canary isn't a single sentence in a report, but a specific list of substantive actions they have never taken.

Since at least 2019, their transparency reports have consistently included the following six attestations:

1. Cloudflare has never turned over our encryption or authentication keys or our customers' encryption or authentication keys to anyone.
2. Cloudflare has never installed any law enforcement software or equipment anywhere on our network.
3. Cloudflare has never provided any law enforcement organization a feed of our customers' content transiting our network.
4. Cloudflare has never modified customer content at the request of law enforcement or another party.
5. Cloudflare has never modified the intended destination of DNS responses at the request of law enforcement or another third party.
6. Cloudflare has never weakened, compromised, or subverted any of its encryption at the request of law enforcement or another third party.

Given this clarity, it was interesting to see the community concern that erupted on platforms like Hacker News in 2023 and early 2024. Vigilant users noticed that the date stamp on their transparency report page appeared to be stale, leading to speculation that one of these six promises had been broken. The situation highlighted the hair-trigger sensitivity of the community, and even prompted a response from Cloudflare's CEO, Matthew Prince, who noted the delay was likely an oversight.

This confusion appears to stem from a stale date on a webpage, not a change in the canary's substance. My own research confirms the six attestations above have remained consistent through every semi-annual report. As of this writing, their canary page is fully up-to-date and those same six attestations remain intact.

Far from being a failure, I see the Cloudflare case as a textbook example of a canary working as intended, albeit in a roundabout way. The precision of their statements leaves little room for misinterpretation, and the community's swift reaction to a perceived lapse—even one that turned out to be a clerical error—proves that people are watching. That vigilance is a non-negotiable component of any successful canary system.

And yet, one must acknowledge the extraordinary nature of Cloudflare's position. Given that they route roughly 20% of the entire web, the claim to have a perfect record against immense, constant state-level pressure is, to say the least, a remarkable one. This doesn't invalidate their canary; rather, it underscores why the community's vigilance is so important. For those who wish to monitor for themselves, Cloudflare helpfully provides an RSS feed for its transparency reports.


Words Are Very Necessary

So, after wading through the legal ambiguities and the real-world case studies, the question remains: is running a warrant canary a worthwhile endeavor, or just digital virtue signaling?

For me, it’s a yes—but only if it is a direct and thoughtful response to the lessons learned from those who came before:

  1. The attestations must be precise. As RiseUp learned, a broad canary is brittle. The We2.ee canary makes only four narrowly scoped promises focused on the most fundamental compromises: secret court orders, physical seizure, compelled surveillance modifications, and gag orders that would force a lie by omission.
  2. The proof must be verifiable and decentralized. As Apple’s ambiguity taught us, a canary needs a strong, public proof of life. Each We2.ee canary is time-stamped with a current news headline and a Monero block hash, then cryptographically signed with my PGP key.
  3. The process must be deliberate. A fully automated canary is worthless—a sophisticated adversary who compromises a server could simply let it keep running. My canary.py script gathers data but requires my manual review and GPG passphrase to sign and post the final message.
BTS: We2.ee’s weekly warrant canary
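To make that third point concrete, here is a heavily simplified sketch of what such a pipeline can look like. This is not the actual sw1tch code: the news feed and block-explorer endpoint are placeholders, the attestation wording is omitted, and the real script gathers more data before posting the signed result.

# A simplified sketch, not the real canary.py. The feed URL and block
# explorer endpoint are placeholders; swap in sources you trust.
import subprocess
from datetime import date

import feedparser
import requests

NEWS_FEED = "https://example.org/world-news/rss"        # placeholder
BLOCK_EXPLORER = "https://example.org/api/monero/tip"   # placeholder
SIGNING_KEY = "323A8A2C47B376224B3613B7535B265AEDBE5B44"

headline = feedparser.parse(NEWS_FEED).entries[0].title
block = requests.get(BLOCK_EXPLORER, timeout=30).json()["hash"]  # hypothetical response shape

draft = "\n".join([
    f"Warrant canary, {date.today().isoformat()}",
    "All attestations hold as of this date (actual wording omitted in this sketch).",
    "Proof of freshness:",
    f"  Headline: {headline}",
    f"  Monero block hash: {block}",
])
with open("canary_draft.txt", "w") as f:
    f.write(draft + "\n")

# The deliberate, manual step: read the draft, confirm every statement is
# still true, then clearsign it. gpg prompts for the passphrase, so this
# cannot run unattended.
subprocess.run(
    ["gpg", "--clearsign", "--local-user", SIGNING_KEY,
     "--output", "canary.txt", "canary_draft.txt"],
    check=True,
)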

Of course, even this design has weaknesses. A missing canary is still ambiguous—it could mean a gagged order, but it could also mean I’m on vacation. It’s vulnerable if an adversary compromises not just the server, but my private PGP keys. As the Cloudflare case proves, it only works if people are actually watching.

If a canary song is heard, but no one verifies the source,
are we safe or already compromised?

How to Verify the Canary

Step 1: Import My Public Key (One-time setup)

My public PGP key is the ultimate source of truth for my online identity. You can import it from the keys.openpgp.org keyserver:

# Import the key from a public server
gpg --keyserver keys.openpgp.org --recv-keys '323A8A2C47B376224B3613B7535B265AEDBE5B44'

# Verify the fingerprint of the imported key
gpg --fingerprint [email protected]

# This should display: 323A 8A2C 47B3 7622 4B36 13B7 535B 265A EDBE 5B44

Step 2: Verify the Canary Message (Weekly check)

With my key imported, you can download the latest canary statement from the sw1tch repository and verify its cryptographic signature.

# Download the latest canary
curl -s https://sij.ai/sij/sw1tch/raw/branch/main/canary.txt > canary.txt

# Verify its signature
gpg --verify canary.txt

A successful check will show a Good signature from my key.

A warning about the key not being certified with a trusted signature is normal; the `Good signature from "Sangye Ince-Johannsen (Attorney) <[email protected]>"` line is what confirms the message is authentic.
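If you'd rather not remember to run those two commands every week, the check is easy to script. Here is a minimal sketch; it assumes you completed Step 1 and leaves the alerting mechanism up to you.

# Fetch and verify the canary; exit non-zero if anything looks wrong.
# Assumes the signing key from Step 1 is already in your keyring.
import subprocess
import sys
from urllib.request import urlretrieve

CANARY_URL = "https://sij.ai/sij/sw1tch/raw/branch/main/canary.txt"
FINGERPRINT = "323A8A2C47B376224B3613B7535B265AEDBE5B44"

urlretrieve(CANARY_URL, "canary.txt")

# --status-fd 1 sends machine-readable status lines (including the signing
# key's fingerprint on the VALIDSIG line) to stdout; gpg exits non-zero on
# a bad or missing signature.
result = subprocess.run(
    ["gpg", "--status-fd", "1", "--verify", "canary.txt"],
    capture_output=True, text=True,
)

if result.returncode != 0 or FINGERPRINT not in result.stdout:
    sys.exit("Canary did not verify; treat it as a dead canary and investigate.")
print("Good signature; remember to also check the date inside the message.")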

The Bottom Line

This all leads to a simple protocol: if the weekly canary in the #announcements:we2.ee room is more than three days late without a prior announcement, ping me. If there’s still radio silence, you should assume the canary's purpose has been triggered and act accordingly.

Ultimately, running a canary for We2.ee is not about digital virtue signaling. It is about transforming an abstract commitment to user privacy into a concrete, weekly, verifiable action. It is not a solution to state surveillance, but it is an alarm bell—and in a world of compelled silence, a reliable alarm bell is a powerful tool.


The free open source code for sw1tch, including the canary.py module, is available on my personal code hub.

sij.ai/sij/sw1tch

Toward Enduring Web Citations

An attempt to solve the twin problems of impermanence and imprecision.

📖
deep·cite | /ˈdiːpˌsaɪt/
n. A citation that combines a textual reference with a durable hyperlink to that exact passage in a preserved copy of the source.
v. (–cited, –citing) To create a citation by joining a selected text passage to a permanent, pinpoint hyperlink of its archived source.

Before going into the mechanics, see it in action:

Here’s a deepcite I created on the U.S. Fish & Wildlife Service page cited below—try the link for yourself:

🔗
“Currently, there are at least 1,923 individuals in the 48 contiguous states, with 727 in the GYE demographic monitoring area, 1,092 in the NCDE, about 60 in the CYE and a minimum of 44 in the United States portion of the SE, although some bears have home ranges that cross the international border, as documented by C.M. Costello and L. Roberts in 2021 and M.A. Haroldson and others also in 2021.” Grizzly Bear (Ursus arctos horribilis) | U.S. Fish & Wildlife Service (last accessed Jun 15, 2025).

The architecture of the web is fundamentally at odds with the demands of lasting citation. Any link we use as a reference is undermined by two distinct problems: one of permanence, the other of precision.

The permanence problem—link rot—is well-known. A normal hyperlink is a fragile, hopeful pointer to a resource you don't control. Pages change, URL schemes evolve, and critical information simply disappears. The Internet Archive has been fighting this battle for decades.

The precision problem is more subtle, a chronic friction we’ve just grudgingly accepted. A link to a 10,000-word article isn’t a citation; it’s a research assignment you’ve foisted on your reader. In effect, they are asked to (a) take you at your word, (b) try to guess the right keywords for a ⌘F search from whatever contextual clues you’ve provided, or (c) just resign themselves to reading the document in full.

The web has an emerging tool for the precision problem: the text fragment URL. By appending #:~:text=... to the end of a link, you can direct nearly every modern web browser to scroll to and highlight a specific passage on that page. At first blush, this recent web standard seems incredibly useful. Provide a colleague a precise pincite to the one consequential fact you spot on line 2416 of a dense environmental report. Or point your future self back to a key insight toward the end of an obscure scientific study whose significance took you multiple reads to appreciate. This looks promising...
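To see the mechanics, here is roughly how such a link is assembled: percent-encode the selected passage and append it after the #:~:text= directive. This is a simplified sketch covering only the exact-match form; the full syntax also supports prefixes, suffixes, and start,end ranges.

# Build a simple exact-match text fragment link. A full implementation
# would also follow the spec's escaping rules for literal dashes and
# handle prefix/suffix/range directives.
from urllib.parse import quote

def text_fragment_url(page_url: str, passage: str) -> str:
    return f"{page_url}#:~:text={quote(passage, safe='')}"

print(text_fragment_url(
    "https://example.com/",
    "This domain is for use in illustrative examples in documents.",
))
# -> https://example.com/#:~:text=This%20domain%20is%20for%20use...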

But on its own, this technology only exacerbates the permanence problem. It creates a citation so specific that a single punctuation fix on the live page will break it—a phenomenon one might aptly call “fragment rot.”

A truly useful web citation needs to solve both problems at once: it must point precisely to the relevant portion of a cited source, and it must do it enduringly. For that to happen, the source itself must be frozen in time, exactly as it appeared when the citation was made.

Calling All Archivists

Inspired by my earlier work on a similar script for DEVONthink (which also has significant new features and improvements, as I'll detail in a follow-up post), I saw the potential a text fragment tool could have. I enlisted my trusty AI coding assistant, and within five minutes I had a working proof of concept. But my initial success was short-lived. I experienced fragment rot firsthand after just a few uses, and then again a few days later. The theoretical risk I’d anticipated was a practical, frequent reality.

This technical frustration soon collided with a much larger concern. Immediately after the presidential inauguration on January 20, 2025, information began disappearing from federal government websites—first sporadically, very soon systematically. Databases and other resources conservationists have long relied on from agencies like the EPA and NOAA, among others, abruptly went dark.

As a public interest environmental lawyer, I build my work on this data. My cases under the Endangered Species Act (ESA), Clean Water Act (CWA), and National Environmental Policy Act (NEPA—RIP) depend on a stable, verifiable administrative record. This wasn’t just another vaguely menacing news item portending yet more symbolic violence against the Rule of Law. It represented (and still represents) a clear, concrete, and immediate threat to my clients' interests and to the science-based, mission-driven advocacy my colleagues and I have built our careers on.

💭
This moment underscores a critical vulnerability. The thankless work of archivists has never been more essential, and the efforts of institutions like the Harvard Law Library to preserve these resources give me hope. At the same time, with this administration’s unprecedented hostility toward academia, open government, and truth itself, I am not confident we can rely solely on these institutional bulwarks.

I had already explored the world of self-hosting enough to have come across ArchiveBox, an open-source tool that creates high-fidelity, personal archives of web content. Its recent beta API made it the perfect engine. But ArchiveBox alone wasn’t sufficient. The URL for each archived snapshot includes a timestamp with microsecond accuracy, making it impossible to predict from the client side. I needed a custom bridge to sit between my browser and my archive.


The Web Deepcite Tool

My solution is composed of two parts that work together: a script that runs in your browser, and an optional backend you can host yourself.

1. Browser Deepciter (client)

The heart of the system is a single JavaScript file. I run it in Orion Browser using its excellent Programmable Button functionality with a keyboard shortcut, but it works just as well as a standard bookmarklet in any other modern browser.

When you select text on a page and run the script as-is, it assembles a deepcite formatted in rich text and stores it in your system clipboard, looking like this:

🔗
"This domain is for use in illustrative examples in documents." Example Domain (last accessed Jun 12, 2025).

note: the cite is hyperlinked with text fragments to the original:
https://example.com/#:~:text=This%20domain%20is ...

When you configure the script by pointing PROXY_BASE_URL to your self-hosted backend, and specifying URL_PREFIX to match your backend’s configuration, it creates a deepcite that looks the same, except that the citation's hyperlink points to the archived webpage.

2. Self-Hosted Backend (server)

The backend pairs a standard ArchiveBox instance with a FastAPI server that I wrote to act as a smart proxy with basic URL shortening and analytics functionality built in. When you create a deepcite, the backend tells ArchiveBox to save a 100% self-contained archive of the page using SingleFile.

When the link is visited, the proxy serves that file after injecting a minimalist banner at the top to indicate:

  • archival date;
  • any delay between when the citation was made and when the page was archived (this can happen if ArchiveBox had a long job queue or was unresponsive);
  • link to archived PDF of page;
  • link to original / live page; and
  • QR code for archival URL.
Detail view of example banner my custom server injects at the very top of archived pages
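For a sense of the shape of that proxy, here is a bare-bones sketch. It is not the actual deepcite server: the archiving step is stubbed out (the real backend calls ArchiveBox's API), the storage layout and banner are illustrative, and there is no persistence, PDF link, or QR code.

# A stripped-down sketch of the proxy idea, not the real deepcite server.
# archive_page() stands in for whatever call you make to ArchiveBox or
# SingleFile; the storage layout and banner are illustrative only.
import secrets
from pathlib import Path

from fastapi import FastAPI, HTTPException
from fastapi.responses import HTMLResponse
from pydantic import BaseModel

app = FastAPI()
DATA = Path("archives")          # one subdirectory per short ID
INDEX: dict[str, str] = {}       # short ID -> original URL (use a real DB)


class CiteRequest(BaseModel):
    url: str
    fragment: str = ""           # the #:~:text=... portion, if any


def archive_page(url: str, dest: Path) -> None:
    """Placeholder: hand the URL to ArchiveBox/SingleFile and store the
    resulting self-contained HTML as dest/'page.html'."""
    raise NotImplementedError


@app.post("/cite")
def create_cite(req: CiteRequest) -> dict:
    short_id = secrets.token_urlsafe(4)
    dest = DATA / short_id
    dest.mkdir(parents=True, exist_ok=True)
    archive_page(req.url, dest)
    INDEX[short_id] = req.url
    return {"link": f"/{short_id}{req.fragment}"}


@app.get("/{short_id}")
def serve_cite(short_id: str) -> HTMLResponse:
    page = DATA / short_id / "page.html"
    if not page.exists():
        raise HTTPException(status_code=404)
    banner = (
        f'<div style="padding:6px;background:#eee">'
        f'Archived copy · <a href="{INDEX.get(short_id, "#")}">live page</a></div>'
    )
    # Naive injection right after <body>; real pages often have attributes
    # on the body tag, so the actual server is more careful here.
    html = page.read_text(errors="ignore").replace("<body>", "<body>" + banner, 1)
    return HTMLResponse(html)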

Try First On My Demo Server

Setting up a self-hosted server can be a project. To help decide if this is a workflow you'd find useful, you can point the client script to my public demo instance. To do this, configure the variable at the top of the JavaScript file:

PROXY_BASE_URL = 'https://cit.is'
⚠️
Disclaimer: This is a demo instance only, intended as a public proof-of-concept and a temporary playground. I make no guarantee to host archives you may create on it for any duration. Any sites you archive while using this demo are visible to me and may be visible to others. Please use it with discretion.

Getting Your Own Setup Running


If you're as excited about this as I am, and want your very own permanent private archival deepciter, head over to my open source code repository to get started:

sij.ai/sij/deepcite

The README.md file in that repository provides the canonical step-by-step instructions. The setup process should be familiar to anyone who has dabbled in self-hosting. You will use a standard .env file to configure the ArchiveBox Docker container, and a config.yaml file to tell the proxy script where to find your ArchiveBox instance and how to behave. Once configured, you run the services with docker compose and the proxy script via Python.

Next Steps

This toolkit is already a core part of my own workflow, but I am considering several future improvements and welcome feedback. I'm currently mulling over adding the Internet Archive as an alternative to ArchiveBox, finding a creative way to bypass the need for a server script (perhaps by combining the Internet Archive with an API call to a link-shortening service), integrating deepcite functionality directly into ArchiveBox (e.g., by forking that project), and building browser extensions for a more polished UX than bookmarklets.


The web's citation problems aren't going away—if anything, the recent wave of government data disappearing has made clear how fragile our digital references really are. Deepcite won't solve every corner case, and setting up your own archive does require some technical effort. But for researchers, writers, and lawyers who depend on precise, durable evidence, the investment in a system you control is, I believe, a necessary one.

UPDATES

2025.06.20

I've added support for using SingleFile directly and bypassing ArchiveBox. In my testing so far, SingleFile is faster, more reliable, simpler, and uses a lot less space. In other words, a win/win/win/win. SingleFile is therefore now the default mode.

2025.06.22

I'm excited to share that I'm busy building this out as a subscription service at cit.is. Stay tuned for announcements about a public beta soon. In the meantime, please note I'm moving the demo deepcites from https://sij.law/cite/ to https://cit.is/.

Simplifying Web Services with Caddy

Running multiple web services doesn't have to be complicated. Here's how Caddy makes it simple by handling reverse proxying and HTTPS certificates automatically, plus a script I use to set up new services with a single command.

After my recent posts about We2.ee, Lone.Earth, Earth.Law and that pump calculator project, several folks asked about managing multiple websites without it becoming a huge time sink. The secret isn't complicated—it's a neat tool called Caddy that handles most of the tedious parts automatically.

Understanding Reverse Proxies

Traditionally, web servers were designed with a simple model: one server running a single service on ports 80 (HTTP) and—more recently as infosec awareness increased—443 (HTTPS). This made sense when most organizations ran just one website or application per server. The web server would directly handle incoming requests and serve the content.

But this model doesn't work well for self-hosting. Most of us want to run multiple services on a single machine: maybe a blog like this, a chat service like We2.ee, and a few microservices like that pump calculator. We can't dedicate an entire server to each service—that would be wasteful and expensive—and we can't run them all on ports 80/443 (only one service can use a port at a time).

This is where reverse proxies come in. They act as a traffic director for your web server. Instead of services competing for ports 80 and 443, each service runs on its own port, and the reverse proxy directs traffic for

  • A blog to port 2368
  • A Mastodon instance to port 3000
  • An uptime tracking service to port 3001
  • A code hub to port 3003
  • An encrypted chat service to port 8448
  • A DNS-over-HTTPS filter and resolver to port 8502
  • A Peertube instance to port 9000
  • An LLM API to port 11434
  • ... etc.

When someone visits any of your websites, the reverse proxy looks at which domain they're trying to reach and routes them to the right service. That's really all there is to it—it's just routing traffic based on the requested domain.
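For example, a Caddyfile that splits three (made-up) domains across three of those ports looks like this; each block just says "this domain goes to that port":

blog.example.com {
    reverse_proxy localhost:2368
}

chat.example.com {
    reverse_proxy localhost:8448
}

git.example.com {
    reverse_proxy localhost:3003
}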

Why Caddy Makes Life Easier

Caddy is a reverse proxy that manages this well and also happens to take care of one of the biggest headaches in web hosting: HTTPS certificates. Here's what my actual Caddy config looks like for this blog:

sij.law {
    reverse_proxy localhost:2368
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}

This simple config tells Caddy to:

  • Send all traffic for sij.law to the blog running on port 2368
  • Automatically get HTTPS certificates from Let's Encrypt
  • Renew those certificates before they expire
  • Handle all the TLS/SSL security settings

If you've ever dealt with manual certificate management or complex web server configurations, you'll appreciate how much work these few lines are saving.

Making Domain Setup Even Easier

To streamline things further, I wrote a script that automates the whole domain setup process. When I was ready to launch that pump calculator I mentioned in my last post on the open web, I just ran:

cf pumpcalc.sij.ai --port 8901 --ip 100.64.64.11

One command and done—cf creates the DNS record on Cloudflare and points it to the IP of the server running Caddy, creates a Caddy configuration that reverse proxies pumpcalc.sij.ai to port 8901 on my testbench server (which has the Tailscale IP address 100.64.64.11), and handles HTTPS certificate provisioning.
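For the curious, the gist of that workflow can be sketched in a few lines of Python. This is not the real cf script (see the repo below for that); the zone ID, Caddy host IP, file path, and error handling are all simplified placeholders.

# A rough sketch of the cf workflow: create the DNS record, append a Caddy
# site block, reload Caddy. Placeholder values throughout; the real script
# reads its configuration rather than hard-coding it.
import os
import subprocess

import requests

ZONE_ID = "YOUR_CLOUDFLARE_ZONE_ID"     # placeholder
CADDY_HOST_IP = "203.0.113.10"          # public IP of the Caddy server (placeholder)
CADDYFILE = "/etc/caddy/Caddyfile"
TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]

def add_site(domain: str, upstream_ip: str, port: int) -> None:
    # 1. Point the domain at the server running Caddy.
    requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": "A", "name": domain, "content": CADDY_HOST_IP, "proxied": False},
        timeout=30,
    ).raise_for_status()

    # 2. Append a site block that reverse proxies to the service's Tailscale IP.
    block = (
        f"\n{domain} {{\n"
        f"    reverse_proxy {upstream_ip}:{port}\n"
        "    tls {\n"
        "        dns cloudflare {env.CLOUDFLARE_API_TOKEN}\n"
        "    }\n"
        "}\n"
    )
    with open(CADDYFILE, "a") as f:
        f.write(block)

    # 3. Reload Caddy so it picks up the new site and fetches a certificate.
    subprocess.run(["caddy", "reload", "--config", CADDYFILE], check=True)

add_site("pumpcalc.sij.ai", "100.64.64.11", 8901)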

🌐
Using Tailscale for the connection means I don't need to expose the underlying service to the public internet—the server doesn't need a public IP address, port forwarding rules, or open firewall ports. I plan to do a deep dive on Tailscale in a future post, but for now just know it adds an important layer of security and simplicity to this setup.

If you want to try this script out yourself, see the more detailed documentation at sij.ai/sij/cf, and by all means have a look at the Python code and see how it works under the hood.

Getting Started

  1. Start by installing Caddy on your server
  2. Create a config for just one website
  3. Let Caddy handle your HTTPS certificates
  4. Add more sites when you're ready

Start small, get comfortable with how it works, and expand when you need to. Ready to dig deeper? The Caddy documentation is excellent, or feel free to reach out with questions.

Thinking Like a Developer, Pt. 1

When our backup pump failed on the homestead, I built a calculator to figure out what we really needed. It’s a small, open-source tool born from necessity and a few iterations with self-hosted AI.

I live on a homestead in southern Oregon, surrounded by the vastness of the Umpqua National Forest in every direction. It’s the kind of place where nature dictates the rhythm of life. We rely on a natural mountain spring for water most days, but when that fails (as nature sometimes does), we turn to a small stream or a mile-long ditch. According to local lore, some intrepid homesteader dug that ditch in the early 1900s to water a single cow.

These water sources connect to a network of pipes, backup pumps, and an unoptimized system that could generously be described as "inventive." When our pump failed recently, we faced an immediate and critical question: how powerful does a replacement pump need to be?


From Problem to Solution

To answer that question, I did what any coder-lawyer-homesteader would do—I wrote a script. Specifically, a pump power calculator that factors in pipe diameter, distance, flow rate, pipe material, and other inputs to calculate the horsepower needed for a given setup. It factors in key considerations like friction head loss, flow velocity, and static head, ultimately providing recommendations with a built-in safety margin. For example, in our setup, with a 4000-foot run of 1" pipe that rises up around 120 feet and delivers around 7.5 gallons per minute, it calculated we needed at least 0.75 HP—but 1 HP if we want a 30% safety margin.
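For the curious, a back-of-the-envelope version of that calculation fits in a few lines. This is not the calculator's exact code; it leans on the Hazen-Williams friction formula with an assumed C factor for smooth plastic pipe and a roughly 70% pump efficiency, but it lands in the same ballpark as the numbers above.

# Back-of-the-envelope pump sizing: Hazen-Williams friction loss plus static
# head, converted to brake horsepower. The C factor and pump efficiency are
# assumptions; the real calculator exposes more inputs than this.
def required_hp(length_ft, diameter_in, flow_gpm, static_head_ft,
                c_factor=150, efficiency=0.70, margin=0.30):
    # Hazen-Williams head loss (ft) over the whole pipe run.
    friction_ft = (0.002083 * length_ft * (100 / c_factor) ** 1.852
                   * flow_gpm ** 1.852 / diameter_in ** 4.8655)
    total_head_ft = friction_ft + static_head_ft

    # Water horsepower converted to brake horsepower at the assumed efficiency.
    brake_hp = flow_gpm * total_head_ft / (3960 * efficiency)
    return brake_hp, brake_hp * (1 + margin)

hp, hp_with_margin = required_hp(4000, 1.0, 7.5, 120)
print(f"~{hp:.2f} HP needed, ~{hp_with_margin:.2f} HP with a 30% margin")
# Prints roughly 0.77 HP and 1.00 HP, in line with the numbers quoted above.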

You can try it out for yourself at pumpcalc.sij.ai or embedded at the bottom of this post, and if you’re curious about the code, it’s open source at sij.ai/sij/pumpcalc. I built the calculator in Python using FastAPI for the backend and Jinja2 for templating—simple, reliable tools that get the job done without unnecessary complexity.

This wasn’t a solo endeavor. I leaned on the open-source AI tool Ollama and specifically QwQ, a powerful 32-billion-parameter research model that rivals leading commercial AI systems like OpenAI’s o1 in reasoning capabilities. QwQ particularly excels at technical problem-solving and mathematical tasks, making it perfect for engineering calculations like this.

🫠
I now avoid AI tools like ChatGPT or Claude because of their environmental impact and privacy concerns. Self-hosted tools like Ollama solve both issues: they’re private and energy-efficient. In fact, consulting Ollama to write this script used just about 6 Wh of additional electricity on my 2021 MacBook Pro—roughly the energy it takes to keep an efficient LED bulb running for half an hour.
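If you want to try a similar workflow, the official Ollama Python client makes consulting a local model a few lines of code. This sketch assumes you have Ollama installed and have already pulled the model (ollama pull qwq); the prompt is just an example.

# Assumes a local Ollama install with the qwq model already pulled.
import ollama

response = ollama.chat(
    model="qwq",
    messages=[{
        "role": "user",
        "content": (
            "Write a Python function that estimates the pump horsepower "
            "needed to move 7.5 GPM through 4000 ft of 1-inch pipe with "
            "120 ft of elevation gain. Show your reasoning."
        ),
    }],
)
print(response["message"]["content"])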

The Iterative Process of Coding

Developing this script wasn’t a one-and-done affair. It took five back-and-forth sessions with the AI to:

  1. Factor in relevant variables like pipe roughness and flow rate.
  2. Exclude unnecessary inputs that made the interface clunky.
  3. Add some polish, like the Gruvbox Dark color scheme that now graces the app.

Each iteration made the calculator more useful and user-friendly. By the end, I had something functional, simple, and—dare I say—elegant.



Why Share This?

I’m sharing this as the first in a series of "Thinking Like a Developer" stories, because I believe coding isn’t as mystifying as it might seem. If a lawyer on a homestead with a temperamental water system can write a pump calculator, anyone can. The key is thinking like a developer: break the problem into smaller, solvable pieces, and don’t be afraid to consult tools or collaborators along the way.

This approach to problem-solving—breaking down complex challenges and leveraging coding tools—mirrors how I approach legal technology challenges. I frequently rely on Python and AI libraries to streamline legal work, from document analysis to case law research. Whether it's calculating pump requirements or processing legal documents, the fundamental thinking process remains the same. Who knows? You might find your next project hidden in a problem you didn’t even know you wanted to solve.