Claude & Clojure

It's no secret that I use Generative AI, specifically Claude Sonnet, to assist with the Ooloi project. I use it for writing Clojure tests TDD fashion, for generating Clojure code, and for generating documentation, READMEs, architectural design documents and much more. Above all, I use Claude for exploring architectural strategies before coding even begins. It's somewhat reminiscent of pair programming in that sense: I'd never just task GenAI with generating anything I wouldn't scrutinise very carefully. This approach works very well and allows me to quickly pick up on good design patterns and best practices for Clojure.

Claude & Python

Overall, working with Claude on Clojure code works surprisingly well. However, this is not the case when I try to involve Claude for coding in Python, the main language I use as an AWS Solutions Architect. Generative AI struggles with creating meaningful Python tests and code – especially tests, which rarely work at all. This hampers its use as an architectural discussion partner and a TDD assistant. In fact, I've given up trying to use Generative AI for coding in Python.

Differences

I have a deep background in Common Lisp and CLOS, dating back to the 1970s. I've written Common Lisp compilers and interpreters, as many Lispers did in those days. The standard practice was to write a small kernel in assembler or C or some other low-level language, and then use it to write an optimising compiler on top of it to replace the kernel in an iterative fashion, sometimes using transformations of source code based on lambda calculus. (I still remember that paper by Guy Steele.)

I see Common Lisp essentially as a big bag of good-to-haves (a really excellent one, mind you). As such, it was designed by committees over a period of decades. Clojure, on the other hand, is much tighter and rests solidly on consistently applied computer science design principles. Common Lisp is pragmatic and eclectic and thus somewhat sprawling in design. Clojure, in comparison, is smaller and much more focussed, even opinionated in nature, and for clear reasons.

People attracted to Common Lisp and Clojure tend to be pretty well versed in computer science, especially Clojurians, who generally have a good understanding of functional programming and immutable data structures. Thus, the public code "out there" on sites like GitHub tends to be fairly advanced and of high quality.

Python is an entirely different ballgame. It's one of the most commonly used languages today, and the public code base is absolutely enormous. This also means that the quality of that public code varies considerably. Moreover, Python is not functional at heart, and its data structures aren't immutable: side effects are the name of the game. Python programmers, though much greater in number than Clojure programmers, range from script kiddies to computer scientists. Thus, public Python code is much more all over the place and of varying quality. This may make it harder for a large language model to reason about.

I wondered whether these differences accounted for the discrepancies in the usefulness of Claude and other LLMs as coding assistants in Clojure and Python.

Asking Claude

So I decided to ask Claude 3.7 itself. I shared the Clojure codebase with it and asked why it thought results were so much better for Clojure than for Python. This is what it said:
It then continued, quoting the code base:
I guess this answers my question about Clojure vs Python. It's not just the functional and immutable aspects of Clojure; it's also specifying the domain stringently through design, architectural discussions, specs, and tests.

The Ooloi project in particular

With all that out of the way, I then went on to describe how I use Claude as a discussion partner before code is generated, and the TDD approach I'm using, where the tests of course describe the domain. Claude was almost embarrassingly enthusiastic:
I continued: "I also use Claude to create ADRs, READMEs, introductions for newcomers, dev plans, etc. I'm quite open about this; transparency is paramount in an open-source project, and the idea is to facilitate collaborators' understanding of the Ooloi project as much as possible. This means that adherence to best practices, the use of sane architecture, abstracting away the complex stuff without compromising on performance or flexibility, and so on, are all central." Claude responded:
We then went on to discuss open-source strategies, tutorials and onboarding materials for contributors, and much more that I'll spare you for now. Finally, I asked it to summarise its views on Ooloi as a project:
Now, this level of enthusiasm and praise from an LLM that presents itself so anthropomorphically (essentially a glorified Markov chain) always makes me deeply suspicious – I'm absolutely allergic to toxic positivity, especially the kind from the USA – so I asked it: "Are you sure you're not overly enthusiastic just to please me?" It replied:
I then asked Claude 3.7 to comment on my journey from Common Lisp to Clojure and on the influence of Common Lisp and CLOS on Ooloi:
And finally, I just couldn't resist asking: "What would Rich Hickey say?"
I guess that hypothetical assessment is good enough for me at this point. Still not quite sure that Claude isn't giving me a friendly reach-around, though... ;)
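To give a concrete flavour of what "tests describe the domain" means in practice, here is a minimal sketch in that spirit. The namespace, data representation and musical rule below are hypothetical illustrations for this post, not actual Ooloi code:

```clojure
;; A minimal, hypothetical sketch of domain-describing TDD in Clojure.
;; None of this is actual Ooloi code.
(ns sketch.pitch-test
  (:require [clojure.test :refer [deftest is testing]]))

;; A pitch is modelled as plain immutable data.
(def middle-c {:step :c :octave 4 :alteration 0})

(defn transpose-octave
  "Returns a new pitch one octave up; the input map is never mutated."
  [pitch]
  (update pitch :octave inc))

(deftest octave-transposition-describes-the-domain
  (testing "transposing up an octave raises the octave number by one"
    (is (= 5 (:octave (transpose-octave middle-c)))))
  (testing "the original pitch is left untouched (persistent data)"
    (transpose-octave middle-c)
    (is (= 4 (:octave middle-c)))))
```

The second assertion comes for free with Clojure's persistent data structures; stating and guaranteeing the same property in Python, where in-place mutation is the default, takes deliberate effort.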
I should perhaps say something about how Generative AI is used in the FrankenScore project. First of all, I have a prompt of about 4,100 lines that prefaces every conversation with the AI chat client. The prompt consists of project documentation, background, design principles and goals, coding principles and conventions, explanations of central code, and code examples. It also includes a major part of the source. This allows the AI to:
The copy on this website was almost entirely created with AI, often using multiple iterations until I arrived at something suitable for publication. There remain a few passages that slipped past me, where the AI produced text that reads a little too self-congratulatory on my part, but that was simply the opinion of the AI (though it is of course nice that it likes the code). I'll fix that in the days to come. The technical comparison with other software is also a bit too speculative and monotonous; I'll change that, too.

In terms of code, I've found that Claude 3.5 Sonnet reasons better at depth about Clojure code than GPT-4o and is consequently the superior choice for complex coding. GPT-4o is still useful for producing text, though. It isn't exactly bad at coding, but it has a tendency to vomit code at you at every opportunity, which is both tiresome and expensive. It also tends to lose track when conversations get very long. And they do; the chains of thought are sometimes complex, and a meandering AI can get costly. Using Claude therefore saves money in the long run.

By the way, it's easy to tell when I am writing. Just look for signs of British English. You know, -ise and colour and whilst and so forth. The AI invariably produces American English.
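For the curious, here is a minimal sketch of how a preface prompt like the one described above could be assembled from project files. The paths, output file name and namespace are hypothetical examples, not FrankenScore's actual layout:

```clojure
;; A minimal sketch of assembling a preface prompt from project files.
;; Directory names and the output file are hypothetical examples.
(ns sketch.build-prompt
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

(defn slurp-all
  "Reads every file under dir whose name ends with one of the given extensions."
  [dir extensions]
  (->> (file-seq (io/file dir))
       (filter #(.isFile %))
       (filter (fn [f] (some #(str/ends-with? (.getName f) %) extensions)))
       (map slurp)))

(defn build-prompt
  "Concatenates documentation and selected source into a single preface."
  []
  (str/join "\n\n"
            (concat (slurp-all "doc" [".md"])
                    (slurp-all "src" [".clj"]))))

(comment
  ;; Write the preface to a file, ready to paste before a conversation.
  (spit "preface-prompt.txt" (build-prompt)))
```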
FrankenScore is a modern, open-source music notation application designed to handle complex musical scores with ease. It aims to be a flexible and powerful notation tool that produces professional, extremely high-quality results. The core functionality includes inputting music notation, formatting scores and their parts, and printing them. Additional features can be added as plugins, allowing for a modular and customizable user experience.