Reverse Proxy in shadow-cljs

03 March 2026

I’ve broken my reverse proxy configuration in shadow-cljs multiple times, so I need a reminder for myself.

I have the server-side API running in a container, and the ClojureScript front end running in shadow-cljs in dev mode.

With a :proxy-url set, shadow-cljs will forward any request it can’t match to a file under the root to the other server. Because everything is then served from a single origin, this avoids cross-origin (CORS) failures in the browser during development.

 :dev-http {3000 {:root "public"
                  :proxy-url "http://localhost:7000/my-backend-service"}}

The important part is that the proxy URL has no trailing slash. With the slash, everything but the default index gets not-found errors from the backend server.
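For contrast, this is the shape of config (note the trailing slash) that produced those not-found errors for me, sketched with the same host and service path as above:

 :dev-http {3000 {:root "public"
                  ;; broken: trailing slash on :proxy-url
                  :proxy-url "http://localhost:7000/my-backend-service/"}}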


Teach the AI to Unit Test

19 February 2026

The Gemini AI will make some pretty good guesses about how a third-party API may work. It is good at searching the internet, but when an API has changed across versions, the mix of old and new docs and examples it finds can confuse it. In a dynamic language and environment, you won’t spot these errors until runtime.

To combat the ambiguity and to give the AI agent more power to solve its own problems, ask it to add some tests around the code that uses the API. (In my case, the API is the XTDB client API.) Once it has a way to execute the code through tests, it quickly starts figuring out where it’s made mistakes: it runs its own experiments to observe errors, searches for fixes, and applies those fixes around the codebase. I exhibit the same pattern when I’m doing it by hand.
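The tests I ask for look roughly like this — a minimal clojure.test sketch, where myapp.storage, put-doc!, and fetch-doc are hypothetical wrappers around the third-party client API, not the real XTDB calls:

 (ns myapp.storage-test
   (:require [clojure.test :refer [deftest is]]
             [myapp.storage :as storage]))

 ;; Pin down the behavior we expect from our wrapper over the
 ;; third-party API. If the upstream API changes across versions,
 ;; this fails at test time instead of surprising us at runtime.
 (deftest round-trip-test
   (storage/put-doc! {:id 1 :name "test"})
   (is (= "test" (:name (storage/fetch-doc 1)))))

Once a test like this runs, the agent can execute it after every change and read the failures itself.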

The tests also give you, the human, an easier entry point to evaluate the code the AI generated. If the tests look gnarly, you know to suggest refactorings to improve the architecture and make it easier to test. When the AI has the tests passing, and the test code is easy enough to read, then you can have a closer look at the application code to refine and keep that maintainable too.


Ask the AI

09 February 2026

Gemini CLI is getting even stronger for Clojure code. The clojure-mcp is part of that power: it’s exploring the code and fixing up syntax quicker now.

In addition to writing code, it’s been good at listing and implementing optimizations and refactorings to improve the code when asked.

I’d still like to remind you to ask the AI to explain itself. It’s still our responsibility to understand what it’s doing and to question its decisions, just as we would to get the best from any other teammate. It’s also our opportunity to learn: the explanation amounts to a comprehensive summary of all the internet searches I used to have to do myself.


Iterative Development with Gemini CLI

31 December 2025

Models and Expectations

I’ve had Gemini CLI installed on my workstation since August 2025.

Originally, it would default to the gemini-2.5-pro model; your "access" to that would run out for the day, and it would switch to gemini-2.5-flash. I found the flash model adequate for the way I use it for Clojure and ClojureScript, so most of the time I’d override it to use flash from the beginning, figuring I could kick over to pro if I hit a problem that needed more power.

Eventually, Gemini CLI started switching back and forth between models more intelligently, so it didn’t burn through your limited access to pro, and now with the 3.0 models I no longer override it.

Pairing with a Junior Developer

The AI agent by itself has read lots of documentation, and it’s pretty good at Googling the answers to questions and picking something to try. (I often get a bit of analysis paralysis when trying to choose a library.) It can be surprisingly good at translating sample usage of some JavaScript library it finds into a simple bit of ClojureScript.
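For example, a JavaScript snippet like localStorage.setItem("theme", "dark") from a library’s docs translates into ClojureScript interop along these lines:

 ;; js/ reaches browser globals; (.method obj args) calls the
 ;; JavaScript method directly.
 (.setItem js/localStorage "theme" "dark")
 (.getItem js/localStorage "theme") ;; => "dark"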

In my experience, it’s sometimes bad at matching parentheses, so I just fix them myself. Recently it seems to be getting better, and some Clojure MCP projects can clean up parentheses automatically.

I only ask it to do small tasks, and I closely review and test the code it generates. When it looks good, I commit and push the code, so I can always easily get back to a previous working version when the AI goes off the rails. I don’t have to worry too much about it getting confused or destroying something: I tell it to forget what we were doing, /clear the context, or just restart the agent completely, and recover my known good state from git. (Update 2026-02-17: /rewind may be better these days for clearing some context.)

I find that even if it fails to complete a task, I at least learn a little from what it did, and often have an initial direction or two to explore.

It’s pretty good at keeping my momentum when working and keeping me from spinning my wheels, like pairing with another programmer.

