19 February 2026
The Gemini AI will make some pretty good guesses about how a 3rd-party API may work. It is good at searching the internet, but when an API has changed across versions, the mix of old and new docs and examples it finds can confuse it. In a dynamic language and environment, you won’t spot these errors until runtime.
To combat the ambiguity, and to give the AI agent more power to solve its own problems, ask it to add some tests around the code that uses the API. (In my case, the API is the XTDB client API.) Once it has a way to execute the code through tests, it’ll quickly start figuring out where it’s made mistakes and running its own experiments: observing errors, searching for fixes, and applying those fixes around the codebase. I exhibit the same pattern when I’m doing it by hand.
The tests also give you, the human, an easier entry point for evaluating the code the AI generated. If the tests look gnarly, you know to suggest refactorings to improve the architecture and make it easier to test. When the AI has the tests passing, and the test code is easy enough to read, you can take a closer look at the application code to refine it and keep it maintainable too.
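A minimal sketch of what such a test might look like, using clojure.test. The namespace and normalize-result function are hypothetical stand-ins for a wrapper around a 3rd-party client call, not the real XTDB API:

```clojure
(ns app.api-test
  (:require [clojure.test :refer [deftest is run-tests]]))

;; Hypothetical wrapper around a 3rd-party client call; in a real
;; project this would shape the result of a query through the client API.
(defn normalize-result [row]
  {:user/id (first row)
   :user/name (second row)})

;; Pin down the shape we expect back, so the agent can run the tests
;; and see real errors instead of guessing from stale docs.
(deftest normalize-result-shape
  (is (= {:user/id 1 :user/name "alice"}
         (normalize-result [1 "alice"]))))

(run-tests)
```

Even a tiny shape-checking test like this gives the agent a fast feedback loop to run its experiments against.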
17 February 2026
I saw a tire was low on the display of my 2017 Chevy Bolt EV, so I topped it up with the tire-inflator jump-starter I keep in the car. The next day it still read low, so I hit it again and brought it up a little higher. When I checked in the car again, that tire still read low, but a back tire read high. The TPMS sensors weren’t registered in the right positions.
They’re apparently the type of sensor I can receive with my RTL-SDR as cars drive by the house.
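For the curious, the rtl_433 project includes decoders for many common TPMS sensors. Something like this should show nearby bursts (315 MHz is my assumption for a US-market car, and exact decoder support depends on your rtl_433 version):

```shell
# Listen for TPMS transmissions with an RTL-SDR dongle.
# 315 MHz is typical for US-market vehicles; adjust if needed.
rtl_433 -f 315M -F json
```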
To get the car to register each sensor in the right place, I needed to tell it to relearn the IDs. I bought a little $10 tool to trigger the sensors, though apparently I could have also triggered each sensor on demand by deflating the tires in turn.
Activate the Service Mode by pressing the Start button for 5 seconds with no foot on the brake.
On the driver’s display, select the Tire Pressure screen and click it to activate Relearn Mode. The horn chirps twice at the start, and the display says it’s active.
Start at the front left tire.
Place the relearn tool next to the sidewall near the valve stem, with the antenna pointing toward the center of the wheel.
Press the button once; it’ll transmit for 10 seconds or so to trigger the TPMS sensor.
The car chirps the horn when it registers the activated sensor.
Continue clockwise to the other tires (front right, back right, back left), activating each in turn; the horn chirps as each sensor registers.
Done!
09 February 2026
Gemini CLI
is getting even stronger
for Clojure code.
The clojure-mcp
is part of that power:
it’s exploring the code and fixing up syntax
quicker now.
In addition to writing code, it’s been good at listing and implementing optimizations and refactorings to improve the code when asked.
I’d still like to remind you to ask the AI to explain itself. It’s still our responsibility to understand what it’s doing and to question its decisions, just as we would with any other teammate to get the best from them. It’s also our opportunity to learn and understand, from a very comprehensive summary of all the internet searches we used to have to do ourselves.
31 December 2025
I’ve had Gemini CLI installed on my workstation since August 2025.
Originally,
it would default
to use the gemini-2.5-pro model
and your "access" to that
would run out for the day,
and it would switch to using gemini-2.5-flash.
I found the flash model to be adequate
for the way I’d use it to do Clojure and ClojureScript,
so most of the time I’d override
it to just use flash from the beginning.
I thought I could kick over to pro
if I found a problem for which I’d need more power.
Eventually,
Gemini CLI started switching back and forth
between models more intelligently,
so it didn’t burn through your limited access
to pro,
and I no longer override it with the 3.0 models.
The AI agent by itself has read lots of documentation, and it’s pretty good at Googling the answers to questions and picking something to try. (I often get a bit of analysis paralysis when trying to choose a library.) It can be surprisingly good at translating sample usage of some JavaScript library it finds into a simple bit of ClojureScript.
In my experience, it’s sometimes bad at matching parentheses, so I just fix them myself. Recently it seems to be getting better, and some Clojure MCP projects can clean up parentheses automatically.
I only ask it to do small tasks,
and I closely review and test
the code it generates.
When it looks good,
I commit and push the code,
but I know I can always
easily go back to a previous working version
when the AI goes off the rails.
I don’t have to worry too much
about it getting too confused
or destroying something.
I tell it to forget what we were doing,
/clear the context,
or just restart the agent completely,
and recover my known good state from git.
(Update 2026-02-17:
/rewind may be better these days
for clearing some context.)
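That recovery loop can be demonstrated in a throwaway repo (the file name and contents here are just for illustration):

```shell
# Commit a known-good state, simulate the agent mangling a file,
# then roll back to the last commit.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
echo '(ns core)' > core.clj
git add core.clj
git -c user.email=me@example.com -c user.name=me commit -qm "known good"
echo 'broken' >> core.clj   # simulated AI damage
git reset --hard -q HEAD    # recover the known-good state
cat core.clj                # back to the committed contents
```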
I find that even if it fails to complete a task, I at least learn a little from what it did, and often have an initial direction or two to explore.
It’s pretty good at keeping up my momentum while I work and keeping me from spinning my wheels, like pairing with another programmer.