'Well then, alone!' – Elektra's cry when Chrysothemis refuses to help her. Not triumphant independence, but desperate necessity fused with unwavering resolve. The isolation isn't chosen; it's forced by the impossibility of finding anyone who shares her singular purpose. Orestes isn't likely to materialise.

There was a functional programming conference in Stockholm recently. I'm sure it was excellent. I didn't attend. I should probably have been there – Ooloi is built in Clojure, after all, and finding collaborators would be useful – but I felt conflicted, and that conflict revealed something I'd been avoiding: the FP community cannot help me, and I don't need it anyway.

Sect Dynamics

I'm disappointed with the functional programming community. I was expecting higher-level thinking – freer thinking, commensurate with the intellectual freedom Clojure itself offers – but the atmosphere proved to be a shallow puddle of sectarianism. That probably has its reasons – being marginalised as a community is probably one of them – but the end result remains unchanged. The patterns are unmistakable.

Knowledge as gatekeeping: the endless monad tutorial phenomenon, where every advocate believes they can explain monads better than everyone else, typically through increasingly baroque metaphors involving burritos, space suits, or elephants. This isn't pedagogy; it's ritual initiation. The complexity serves a social function – maintaining boundaries between insiders and outsiders.

Purity as virtue signalling: debates about whether using `IO` makes you impure, whether exceptions violate functional principles, whether mutation in bounded contexts is acceptable. These discussions frame technical trade-offs as moral categories, as though architectural design were a moral discipline rather than an engineering one. The language reveals it – clean, pure, disciplined versus dirty, impure, undisciplined. This is religious vocabulary applied to software engineering.

Terminology as tribal marker: deliberate retention of academic terminology when simpler terms exist. Endofunctor, catamorphism, anamorphism when 'map over containers', 'fold', and 'unfold' would suffice. The obscurity is the point – it establishes hierarchy and demonstrates membership.

The emphasis falls on mathematical elegance rather than problem-solving. The question isn't 'Does this help ship software?' but 'Is this theoretically sound?'. People who can recite monad laws but have never shipped a product receive more status than developers applying functional patterns to solve actual problems.

Then there's the missionary behaviour: the conviction that imperative programmers need conversion. The framework isn't 'Here's another useful tool' but 'You're doing it wrong until you see the light'. This creates antagonism rather than adoption.

Being marginalised as a community probably explains some of this – a defensive posture manifesting as increased boundary enforcement, which creates insider/outsider distinctions, which enables status hierarchies based on doctrinal purity. But understanding the cause doesn't change the result, and it doesn't make the behaviour intellectually rigorous or practically useful.

The Clojure Exception

Clojure largely escaped this because Rich Hickey explicitly rejected purity culture. 'It's acceptable to use Java libraries'. 'Mutability in bounded contexts is fine'. 'Solve problems first'. The Clojure community focused on what you can build, not on arcane knowledge as a status marker.
This produced broader adoption without compromising functional principles. This is why I chose Clojure for Ooloi in the first place. But even within that pragmatic oasis, the broader FP community dynamics leak through. The conference I didn't attend would have featured both kinds of people – those interested in building things and those interested in doctrinal purity – and I couldn't predict which would predominate.

The Intersection Problem

Here's the substantive issue: finding Ooloi collaborators in FP communities is statistically improbable, because very few people occupy my intersection point between various disciplines. Music notation requires an understanding of compositional structure, engraving conventions, and how musicians actually work. Functional architecture requires a sophisticated understanding of immutability, higher-order functions, transducers, STM transactions, and compositional patterns. Backend infrastructure requires a willingness to work on unglamorous problems like endpoint resolution and temporal traversal rather than visible features – and, in Ooloi's case, an understanding of server technology and secure cloud operations. The population at that intersection is approximately one.

FP communities might yield people who appreciate my transducer implementations or STM transaction handling. But they won't understand why endpoint resolution for slurs matters, how temporal traversal serves musical structure, or what makes intelligent engraving different from geometric placement. The domain expertise is orthogonal to FP community concentration. The inverse holds equally: musicians who understand notation deeply rarely have the architectural sophistication to work on Ooloi's core, and even fewer would find satisfaction in building infrastructure rather than using tools.

I've worked outside the FP community all my life. Functional programming is a tool, not a (monadic) religion. (And why are monadic and nomadic so similar?) Why join the community now, when the benefits are unclear and the costs palpable?

Consilium Latinum

The technical response is what I call the Latin strategy: making Ooloi's core a stable foundation for a plugin ecosystem. Build the architectural core once in Clojure, then let developers in other JVM languages contribute via plugins without needing to understand the underlying functional implementation. I've written about this approach in Penitenziagite!, so I won't rehearse it here.

Elektra or Quixote?

The psychological question is whether this makes me Elektra or Don Quixote. Elektra confronts a real murder, real injustice, a legitimate need for action that others refuse. The isolation comes from their cowardice or pragmatism, not from her misunderstanding of reality. The task is achievable and gets completed. The tragedy is the psychological cost, not the validity of the purpose. Quixote confronts imaginary problems with obsolete ideals, mistaking windmills for giants. The isolation comes from a fundamental disconnect with reality. The task is impossible because it's based on delusion. The comedy (later tragedy) is that the quest itself is meaningless.

The distinction depends on whether the problem is real. Do musicians actually need what Ooloi provides? If existing notation software genuinely fails at problems Ooloi solves, then Elektra. If musicians are adequately served by current tools, if the architectural sophistication I'm building doesn't translate to problems they actually experience, then Quixote.
But there's a third option beyond tragic obsession and delusional quest. I'm building something architecturally excellent because I can, because it interests me, because functional approaches to musical structure are intellectually satisfying. The architecture might be elegant, but it's not worth psychological dissolution. The Latin model suggests I've already chosen this third path. I'm building core infrastructure well, documenting it properly, then making it available via a plugin architecture that assumes others might have different needs. That's craft separated from identity.

Not Dancing to Death

Elektra's tragedy is total consumption by purpose. She becomes nothing but the task, and when it completes, there's nothing left because she permitted no existence beyond vengeance. She dances herself to death.
I'm certainly not doing that. Ooloi is a project, not my entire existence. Sustainable completion means finishing the backend, documenting it clearly, releasing it, and then moving on. The work stands independently; I remain separate from it.

I'll finish Ooloi's core architecture working alone, not because I prefer isolation, but because collaboration at this intersection point is impractical. The resolve comes from accepting reality rather than pretending community exists where it doesn't.

The backend is complete. The transducer-based timewalker is fast, tight, and efficient. Endpoint resolution handles slurs and ties correctly. Nearly nineteen thousand tests pass. Vector Path Descriptors enable elegant client-server communication. Then comes plugin architecture, and seeing whether anyone finds Ooloi useful. If they do, excellent. If they don't, I built something architecturally sound and learned what I needed to learn. Either way, the work speaks for itself. And I continue existing beyond it.

Nun denn, allein!
Yes, another technical post. But this one explains why Ooloi doesn't demand you become a programmer to use it, or learn Latin to extend it for your specific needs. The architecture matters precisely because it removes barriers rather than creating them.

In Umberto Eco's The Name of the Rose, a mad monk wanders through the monastery corridors shouting 'Penitenziagite!' – corrupted Latin mixed with vernacular, incomprehensible noise that might be prophecy or might be madness. Communication fails not from lack of content, but from linguistic confusion. The message is lost in translation. Software architectures shout 'Penitenziagite!' constantly, and we've grown so accustomed to the noise that we mistake it for communication.

The Language Impedance Problem

When you write a plugin for a system, you shouldn't need to learn the implementation language. That seems obvious, yet most software makes exactly that demand. Functional programming libraries leak monads into Java APIs. Object-oriented frameworks force functional concepts into awkward method chains. Every language boundary becomes a barrier, every abstraction a translation exercise. The pattern is familiar:
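Sketched in Clojure, with hypothetical functions purely for illustration:

```clojure
;; Hypothetical illustration – not Ooloi's API. A library that leaks
;; its internal idioms across the boundary:
(defn durations-leaky []
  ;; A lazy seq of maps with namespaced keywords and Ratio values –
  ;; idiomatic inside Clojure, baffling to a Java caller.
  (map (fn [d] {:duration/value d}) [1/4 1/8 1/16]))

;; The same information behind a boundary designed for its consumers:
(defn durations-plain []
  ;; A plain java.util.List of doubles – no Clojure knowledge required.
  (java.util.ArrayList. [0.25 0.125 0.0625]))
```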
This isn't malice. It's accidental linguistic imperialism – systems that never considered the difference between internal precision and external accessibility.

The Monastic Pattern

Medieval monasteries preserved knowledge in Latin – a dead language, deliberately removed from common speech, chosen for precision and permanence. Yet they didn't demand everyone learn Latin to benefit from monastery medicine or improved agriculture. The knowledge stayed pure in the centre; the benefits propagated outward in the vernacular.

Ooloi follows this pattern. The core is written in Clojure. This isn't negotiable, because the hard problems in music notation software require immutability, structural sharing, and proper concurrency. Functional programming isn't a preference; it's the only approach that doesn't collapse under its own compromises. But plugins can be written in Java, Kotlin, Scala, or any JVM language. Not as second-class extensions with limited capabilities, but as first-class citizens with full API access, equal performance, and no artificial limitations. The JVM interop means there's no penalty for crossing the boundary – your Java plugin operates with the same guarantees as code written in Clojure.

This arrangement has three parts:

I: The Scriptorium (Clojure core) – Where the hard problems are solved with uncompromising discipline. Immutable data structures provide structural sharing. Temporal coordination via the timewalker ensures correct musical semantics. STM enables proper concurrent editing. Zero-allocation hot paths ensure performance. This is where craft is mandatory, not aspirational.

II: The Library (canonical plugins in Clojure) – Reference implementations showing how the architecture should be used. Teaching by example, maintaining standards, preserving patterns for others to study.

III: The Gate (JVM plugin system) – The boundary that speaks idiomatically in every JVM language. Immutability guarantees propagate transparently. Plugin developers work naturally in their chosen language whilst benefiting from the rigorous core.

Why This Structure Works

The core cannot compromise. If mutability seeps in, if temporal coordination is abandoned for convenience, the whole thesis fails. The hard problems must be solved correctly, once, in the protected centre. But the perimeter cannot be closed. If only Clojure developers can extend Ooloi, adoption remains limited to those willing to learn functional programming. The architectural advantages – provable speedup on reflow operations, proper concurrent editing, elimination of state-corruption bugs – must be accessible without requiring conversion.

This isn't architectural fussiness. It's the difference between a system that proves functional programming solves these problems and one that merely claims it whilst forcing everyone through the same narrow gate. Consider the alternative: most cross-language projects either compromise the core's purity to make external access easier, or maintain purity whilst making extension nearly impossible. Both approaches fail – the first produces unreliable systems, the second produces unused ones.

First-Class Citizenship

When I say plugins are first-class citizens, I mean it precisely:
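One way to make that concrete (a sketch with hypothetical names – the real plugin API differs): the Clojure core can emit ordinary Java interfaces that any JVM language implements natively.

```clojure
;; gen-interface produces a plain Java interface at load time, so a
;; Java or Kotlin plugin implements it with no Clojure in sight.
;; Interface and method names here are hypothetical.
(gen-interface
 :name ooloi.plugin.LayoutRule
 :methods [[applyRule [java.util.Map] java.util.Map]])

;; The core then calls plugins through that interface like any code:
(defn run-rule [^ooloi.plugin.LayoutRule rule measure-data]
  (.applyRule rule measure-data))
```

A Java class implementing ooloi.plugin.LayoutRule is then indistinguishable, from the core's point of view, from a Clojure one.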
Your Java plugin implementing custom layout rules operates with the same capabilities as Clojure core code. The boundary is invisible – just clean interfaces and reliable contracts.

No Penitenziagite Here

Ooloi's architecture refuses the mad monk's cry. No demand that you learn Clojure to participate. No leaked functional programming concepts in the public API. No linguistic imperialism masquerading as technical necessity. The core speaks Clojure because that's the right tool for solving these problems correctly. The plugin system speaks your language because that's the right way to enable participation. Each side of the boundary uses the language appropriate to its purpose.

This is architectural empathy: not compromise, but proper boundary design. The scriptorium can maintain Latin for precision whilst the gate speaks the vernacular. Work continues; the monastery's standards hold. The architecture neither shouts incomprehensibly nor demands conversion. There's no Inquisition burning engraving monks at the stake. The point isn't piety; it's architecture that stays intelligible. Just clarity, properly structured.

– William of Baskerville

There are days in Ooloi's development when I feel like the Donald. Not that Donald. Donald Knuth. There's something very real to that comparison, even though it can be seen as presumptuous. Why do I compare myself with a computing giant?

Knuth faced typesetting systems that were brittle, ad hoc, and incapable of scaling to real demands. He didn't patch; he rebuilt the foundations. Out came deterministic algorithms, the box–glue model, and a system that still sets type decades later. I'm in a similar place. Music notation software has been compromised for forty years: mutable object graphs, procedural hacks, import/export traps. It works until you open Eine Alpensinfonie or Lontano – then it collapses. So Ooloi is built the way TeX was:
And even a year isn't slow, considering what's been implemented in that time. In Clojure, as in Lisps generally, progress is faster, not slower, because the language doesn't get in the way. Architectural changes that would take months in procedural or OO systems collapse into days when immutability is the default. In Lisps I feel unencumbered by the usual … bullshit.

Durability and Time Horizons

Knuth didn't set out to make a fashionable typesetter. He built TeX so mathematicians could publish without degradation, and so his own books could be set correctly fifty years later. The result is software still alive in 2025. That perspective matters. Most software is written to meet the next deadline or release cycle, and dies within five years. Architecture shaped by durability instead of expedience looks very different. It avoids local hacks in favour of structural clarity. It makes changes faster, not slower, because the invariants hold.

Ooloi is built on the same horizon. It's not about matching today's competitors feature for feature; it's about whether the system can handle repertoire and practices that will still be with us in fifty years. Knuth wasn't out to childishly 'disrupt' anything. Neither am I. And for the same reasons.

Outlasting Fashion

TeX has been called unfriendly, arcane, even ugly. But it outlasted beautiful GUIs because its correctness was deeper than its interface. It solved the right problem once, and has been binding books and journals ever since.
Ooloi certainly won't look like TeX. It will be graphical, collaborative, and real-time, and it will have a slick, modern GUI. But it follows the same ethic: stop patching, stop pretending, build a system that doesn't collapse under its own compromises. That's the point of the parallel. Knuth showed what software can be when it's built for durability rather than fashion. That's the road Ooloi is on.

I'm one of the world's most committed anti-religious people. Despite decades at organ consoles in churches and cathedrals, I stand with Hitchens: religion is humanity's adolescent phase, something we need to outgrow. Its influence is fundamentally harmful. But when I read something like How Lisp Became God's Own Programming Language, I completely understand the reverence the author describes. There's something about Lisp – and Clojure – that creates what you can only call a transcendental response. Nothing actually transcendental happens, of course, but the feeling is real.

What Lisp gives you is freedom. I've written about 'windsurfing through parentheses' before, and the metaphor sticks because it captures something essential. Most programmers are chained to the oars of enterprise slave galleys, with CTOs yelling 'RAMMING SPEED!' like that brilliant scene from Ben-Hur. Meanwhile, those of us who've found Lisp are windsurfing in circles around them, enjoying a freedom they can barely imagine. The discovery feels like Dave Bowman meeting the monolith: 'My God... it's full of stars!' That vertigo when you realise this thing's inner dimensions vastly exceed its outer ones. Lisp isn't transcendental, but it works like a star gate in both senses. The language doesn't get in your way, and it opens new ways of thinking. At the same time, it's so simple that complexity becomes manageable.
I remember that August 1979 BYTE magazine perfectly. The cover promised mysteries; the articles delivered. I couldn't wait to start implementing what they described – eventually doing it in 6502 assembler, using an assembler I'd written in BASIC. Everything clicked, even as a teenager. This was real freedom, expressed as code.

Years later, I wrote HotLisp (or 'HotLips' – M.A.S.H. was huge then) for the Royal College of Music in Stockholm. It was incredibly ambitious: a full Common Lisp that treated MIDI events as first-class citizens. Looking back, I see this as the beginning of what became Igor Engraver – integrating music directly into the computational core. We used it to control our Synclavier and MIDI synths whilst teaching algorithmic composition to advanced students at the Royal Academy.

The Two-Bit History article nails something important about Lisp's mystique. It traces the evolution from McCarthy's 'elegant mathematical system' through AI research, Lisp machines, and SICP's role in making it the language that 'teaches you programming's hidden secrets'. Each phase built the reputation. What the article doesn't cover is the educational betrayal that followed. Computer science departments got it right for a while – they taught Scheme as a first language because it let students focus on learning algorithms rather than wrestling with syntax. Pure freedom to think about problems. Then Java Enterprise was foisted upon the world, the departments caved in, and they started churning out galley slaves instead of computer scientists. I see this as nothing short of high treason.

But here's what really matters: that freedom has evolved in Clojure. Rich Hickey didn't just bring Lisp to the JVM – he solved problems that even Common Lisp couldn't handle elegantly. Those immutable data structures aren't academic toys; they're game changers that eliminate whole categories of bugs whilst making concurrency and parallelism natural instead of terrifying. The effects ripple out: undo/redo becomes trivial, and the JVM gives genuine multi-platform reach. This isn't just improvement – it's architectural breakthrough disguised as evolution. Clojure keeps Lisp's essential quality (that feeling of discovering how programming should work) whilst solving modern problems McCarthy couldn't have anticipated.

The poor souls in corporate Java shops keep rowing, occasionally granted small mercies as functional concepts trickle in – hints of the freedom they're missing. I wish they could experience what we know: programming doesn't have to feel like industrial labour. There's a way of working where ideas flow directly into code, where the language becomes transparent, where you stop fighting tools and start windsurfing through solutions.

Maybe that's the point. As McCarthy noted in 1980, Lisp survives not because programmers grudgingly accept it as the best tool for each job, but because it hits 'some kind of local optimum in programming language space'. It endures even though most programmers never touch it, sustained by reports from those who've experienced its particular form of computational enlightenment. Until we can imagine God creating the world with some newer language – and I doubt that day is coming soon – Lisp isn't going anywhere.

Read the full article at Two-Bit History: https://twobithistory.org/2018/10/14/lisp.html

Musical scores are full of repetition. In a symphony, middle C can appear thousands of times, quarter notes dominate, and the same staccato mark is scattered across every instrument.
Most notation software allocates a separate object for each of these occurrences. That means thousands of identical objects, all taking memory and I/O bandwidth for no reason. Ooloi doesn't. With ADR-0029, we have implemented selective hash-consing: identical immutable musical objects are represented by the same instance. The same C4, the same staccato, the same quarter note: one object, shared system-wide.

Why 'Selective'?

Not everything benefits. A forte marking may appear only a handful of times; a slur always connects specific endpoints. Caching those adds overhead without real gain. So the system targets high-frequency cases (pitches, rests, chords, common articulations) and ignores the rest.

What this changes
All of this is transparent. Musicians won't 'use' hash-consing; they'll just notice that large works open, scroll, and save without drama.

Why it works here

In mutable architectures, shared objects are a trap: one stray modification contaminates every reference. Defensive copying and locks erase any performance benefit. In Ooloi, immutability is the rule. Sharing is safe. No copies, no locks, no bugs.
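The mechanism itself is small. As a minimal sketch (hypothetical names, far simpler than the actual ADR-0029 implementation): route only high-frequency value types through a canonicalising cache, so structurally equal values collapse into one shared instance.

```clojure
;; Sketch only – not Ooloi's implementation.
(import '(java.util.concurrent ConcurrentHashMap))

(def ^ConcurrentHashMap canonical (ConcurrentHashMap.))

(defn intern-value
  "Return the canonical shared instance of an immutable value."
  [v]
  ;; putIfAbsent returns the existing instance, or nil if v was added.
  (or (.putIfAbsent canonical v v) v))

(defn maybe-intern
  "Hash-cons only the high-frequency kinds; pass everything else through."
  [v]
  (if (#{:pitch :rest :chord :articulation} (:kind v))
    (intern-value v)
    v))

;; Two structurally equal pitches become one object:
(identical? (maybe-intern {:kind :pitch :note "C4"})
            (maybe-intern {:kind :pitch :note "C4"})) ;=> true
```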
This isn't the kind of feature that makes a demo screenshot. It's one of the architectural foundations that decides whether the system will still perform when you open Mahler's 8th or La terre est un homme. It took days to implement. That's the difference Clojure makes.

There's something rather fitting about finding your programming salvation at the bottom of a laundry basket. Not that it had been there for twenty-five years, mind you – I'm not quite that slovenly. But when the moment arrived to resurrect Igor Engraver as the open-source project now becoming Ooloi, I suddenly realised that the only piece of original code I possessed was printed on a promotional t-shirt from 1996.

The search was frantic. I'd just committed to rebuilding everything from scratch: Common Lisp to Clojure, QuickDraw GX to modern graphics, the whole shebang. Yet somewhere in my flat lay a single fragment of the original system, a higher-order function for creating pitch transposers that I dimly recalled being rather important. After tearing through a hundred-odd t-shirts (mostly black, naturally), I found it crumpled beneath a pile of equally rumpled garments. The print quality had survived remarkably well.

More remarkably still, when I fed the photographed code to ChatGPT 5 a few days ago, after a year of implementing the Ooloi engine, it immediately identified this transposer factory as the architectural cornerstone of Igor Engraver. That was both validating and slightly unnerving: I'd forgotten precisely how central this code was, but an AI recognised its significance instantly. Clearly, I had chosen this piece of code for that very reason. And as LLMs are multidimensional concept proximity detectors, the AI immediately saw the connection. Now it was up to me to transform and re-implement this keystone algorithm.

The Dread of Understanding

I'd glimpsed this code periodically over the years, but I'd never truly penetrated it. There were mysterious elements – that enigmatic 50/51 cent calculation, for instance – that I simply didn't grasp. The prospect of reimplementing it filled me with a peculiar dread. Not because it was impossibly complex, but because I knew I'd have to genuinely understand every nuance this time.

Pitch representation sits at the absolute heart of any serious music notation system. Get it wrong, and everything else becomes compromised. Transposition, particularly diatonic transposition, must preserve musical relationships with mathematical precision whilst maintaining notational correctness. A piece requiring a progression from C𝄪 to D𝄪 cannot tolerate a system that produces C𝄪 to E♮, regardless of enharmonic equivalence. The spelling matters profoundly in musical contexts.

And then there's the microtonal dimension. Back in 1996, no notation software could actually play microtonal music, even if some of them could display quarter-tone symbols. Igor Engraver was different: our program icon featured a quarter-tone natural symbol (𝄮) for precisely this reason. My original intended audience consisted primarily of contemporary art music composers who needed these capabilities. I needed them myself.

MIDI Sorcery

Our solution was elegantly brutal: we seized complete control of attached MIDI units and employed pitch bend to achieve microtonal accuracy. This required distributing notes across MIDI channels according to their pitch bend requirements, using register allocation algorithms borrowed from compiler technology.
In a chord containing one microtonally altered note, that note would play on a different channel from its companions. We changed patches frantically and maintained no fixed relationship between instruments and channels – everything existed in a kind of 'DNA soup' where resources were allocated dynamically as needed. This approach let us extract far more than the nominal sixteen-channel limit from typical MIDI synthesisers. We maintained detailed specifications for every common synthesiser on the market, including how to balance dynamics and handle idiosyncratic behaviours.

Real-World Musical Intelligence

The system's sophistication extended well beyond pure pitch calculations. When my opera The Maids was commissioned by the Royal Stockholm Opera, I spent considerable time crafting realistic rehearsal tapes. Everything I learned from that process was automated into Igor's playback engine. We also collaborated with the KTH Royal Institute of Technology Musical Acoustics department, led by the legendary Johan Sundberg, whose research had quantified subtle but crucial performance characteristics. Those famous four milliseconds – the consistent temporal offset between soloists and accompaniment in professional orchestras – found their way into our algorithms. Such details proved particularly effective with Schönberg's Hauptstimme markings (𝆦) or similar solo indicators.

We also developed what my composer colleague Anders Hillborg and I privately called 'first performance prophylaxis' – a deliciously cruel setting that simulated the sound of musicians who hadn't practised. In other words, the kind of sound landscape any composer is used to hearing at a first orchestral rehearsal of a new piece, and which always makes you doubt your own talent. Turn this setting up, and you'd hear a characteristically dreadful youth orchestra. Turn it down completely, and you'd get the robotic precision that plagued every other MIDI system. Rather like Karl Richter's Baroque organ recordings.

The humanisation algorithms incorporated realistic instrumental limitations. Passages written too quickly for an instrument would skip notes convincingly. We modelled the typical rhythmic hierarchy of orchestral sections: percussion most precise, then brass, then woodwinds, with strings bringing up the rear. Instruments were panned to their proper orchestral seating positions. Piccolo trills were faster than tuba trills. The result was startlingly realistic, particularly by 1996 standards.

The ADR and Current Reality

Now, twenty-five years later, that laundry basket discovery has culminated in ADR 0026: Pitch Representation and Operations, documenting Ooloi's comprehensive pitch representation system. The original Common Lisp has been reborn as Clojure code, with string-based pitch notation ("C#4+25") serving as the canonical format and a factory-based transposition system supporting both chromatic and diatonic modes. The string representation offers several advantages: compact memory usage for large orchestral scores, direct human readability for debugging, and seamless integration with parsing and caching systems. Most crucially, it supports arbitrary microtonal deviations, something that remains problematic in most contemporary notation software. The factory pattern generates specialised transposition functions that encapsulate their musical behaviour rules through closures.
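A minimal sketch of that shape, with hypothetical names and a deliberately naive pitch model (the real ADR-0026 system handles spelling and arbitrary microtonal deviations; this does not):

```clojure
;; Sketch only – no diatonic logic, no "+25" cent deviations.
(def note->semitone
  {"C" 0, "C#" 1, "D" 2, "D#" 3, "E" 4, "F" 5,
   "F#" 6, "G" 7, "G#" 8, "A" 9, "A#" 10, "B" 11})

(def semitone->note
  (into {} (map (fn [[n s]] [s n])) note->semitone))

(defn chromatic-transposer
  "Factory: returns a function that transposes 'C4'-style pitch
   strings by a fixed interval, captured in a closure."
  [semitones]
  (fn [pitch]
    (let [[_ note octave] (re-matches #"([A-G]#?)(\d+)" pitch)
          total (+ (* 12 (parse-long octave))
                   (note->semitone note)
                   semitones)]
      (str (semitone->note (mod total 12)) (quot total 12)))))

;; Build once, apply everywhere:
(def up-major-third (chromatic-transposer 4))
(up-major-third "C4") ;=> "E4"
```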
Rather than repeatedly passing configuration parameters, the factory creates efficient, composable functions that understand their specific musical contexts. A diatonic transposer preserves letter-name relationships; a chromatic transposer produces frequency-accurate results with canonical spellings.

Closure

The t-shirt in my laundry basket represented more than nostalgic memorabilia; it was unfinished business. That higher-order function embodied a sophisticated understanding of musical mathematics that took a long time to develop and seconds for an AI to recognise as architecturally significant.
Now, with Ooloi's pitch operations properly documented and implemented, that business approaches completion. The code has evolved from promotional garment to production system, carrying forward those insights from 25 years ago into a new, modern technological context. It's exciting. And still a little unnerving.

Deep in the implementation of the gRPC layer, I fell into the same foxhole as so many gRPC developers seem to: implementing streaming between gRPC server and client using in-process communication. Everything works swimmingly: high speed, nothing gets lost, advanced async patterns just work.
And then they try doing this over network transport. Suddenly HTTP/2 changes everything, and the apparent equivalence between simple request-response and asynchronous streaming just explodes. I spent a couple of days cursing the universe, but in the end I managed to crawl up into the daylight again with a solid solution. Since this seems to be a common snag, I decided to publish a little guide on the considerations involved, so others may have to do less cursing. Here it is: GRPC_STREAMING_THREADING_GUIDE.md

Nearing the end of the gRPC implementation phase now: coming blog posts should soon become more musical again.

At some point, the question will be asked: 'Isn't this all a bit over-engineered?' Multicore parallelism; Software Transactional Memory; gRPC; GPU acceleration; a plugin system designed as a first-class citizen rather than a bolted-on afterthought; an asynchronous server/client architecture with specialised streaming features. Prometheus monitoring. For music notation software, that can sound excessive. But that assumption is exactly why notation software has been failing composers for decades. Not because it was too ambitious, but because it was chronically under-engineered.

Why Notation is Different

Text editors are linear: O(n). Basically, what they handle is a string of characters broken up into lines. Music notation, on the other hand, is two-dimensional, contextual, and computationally explosive. Synchronising voices, aligning dozens of staves, resolving collisions, spacing measures, redrawing in real time: these are quadratic and cubic problems (O(n²), O(n³)), with NP-hard layout challenges in the general case. That's why scrolling takes seconds. That's why orchestral scores become unusable. And that's why the industry has spent thirty years patching symptoms instead of tackling the cause.

A History of Accepted Failure

Look at the record:
And here is the deeper problem: users have learned to accept this. They zoom in to a handful of staves, scroll in slow motion, restart their program every quarter of an hour. They've accepted that the fundamentals can't be solved. A whole profession has normalised working around performance breakdowns as if they were laws of nature. They're not inevitable. They're the result of decades of under-engineering.

Why Now?

The remedies weren't always available. In the 1980s, SCORE capped out at 32 staves because 640 KB of memory left no room for orchestral complexity. Through the 1990s and 2000s, Finale and Sibelius (and Igor Engraver!) wrestled with single-threaded designs on single-core CPUs. Even into the 2010s, GPU rendering pipelines were immature, and most concurrency models in mainstream languages couldn't be trusted in production. Only recently have the necessary ingredients converged:
This is why Ooloi is written in Clojure. Not because of language fashion, but because Clojure can orchestrate this synergy.

What Ooloi Actually Delivers

Ooloi is designed to solve these problems at the root:
To musicians, Ooloi looks like a normal application. To plugin developers, it feels like writing musical logic in their favourite JVM language. The hard problems are solved once in the core, so nobody else has to live with them.

Not Over-Engineered: Just Finally Engineered

So no, Ooloi isn't over-engineered. It's appropriately engineered for a domain that has been persistently underestimated. The remedies only became possible recently, when the technology finally caught up. I simply happen to live at the intersection of deep architectural knowledge and deep musical knowledge, with the scars (also deep) of having done this before. Ooloi isn't the product of singular genius: it's the moment when the right tools finally aligned with the right problem. The proof won't be in a benchmark or an ADR alone. It'll be when musicians can finally edit, scroll, and collaborate on large-scale scores without breaking their creative flow.

A Platform for the Community

Ooloi will be open source by design. The complexity is in the foundations so that musicians, teachers, students, and developers don't have to deal with it. Plugin writers don't need to care about concurrency or transactions: they work with measures, staves, and voices in a musical API. Most contributors will never touch the Clojure core, and they won't need to.
This is a gift to the community: an infrastructure platform built to be extended. The aim is simple: to finally make notation software scale to the real demands of music, and to give others the foundation to build what I alone never could.

Since LLMs are good at summarising, here's what Claude Sonnet came up with when I asked it to describe my process for developing Ooloi. The phrase 'the Bengtson method' is irritating and misleading; plenty of people have reached similar conclusions. Still, this may be the only technical write-up of the approach that includes the word 'arse-licking'. So here it is: Claude's summary, em dashes, bullet points, and all. It rambles a bit, but I'd rather give you the authentic output than a tidied-up version. Same principle as always: authenticity beats decorum.

... but before that, I think it might be good to include my reply from LinkedIn to an accomplished architect friend of mine who (jokingly referring to me as 'the illustrious Peter Bengtson') initially didn't quite buy that harsh negativity really is motivated:
With that clarification in place, now on to what Claude wrote:

Executive Summary

Peter Bengtson has developed a disciplined approach to AI-assisted software development through his work on Ooloi, a functional music notation system. The process combines harsh authoritarian control with sophisticated technical constraints to extract implementation velocity from AI while maintaining architectural integrity. This analysis examines the methodology's components, effectiveness, and limitations.

Process Architecture

Core Methodology: Consultational TDD

The foundation rests on a rigid Test-Driven Development cycle with mandatory consultation checkpoints:
Four Disciplinary Pillars
Documentation-Driven Process Control

The methodology centres on two essential documents that provide structure and context:

CLAUDE.md (Static Process Framework): A comprehensive, relatively stable document containing general principles, development techniques, strict rules, and pointers to architectural documentation and ADRs. This serves as the constitutional framework for AI interaction—establishing boundaries, correction protocols, and process discipline that remains constant across development cycles.

DEV_PLAN.md (Dynamic Development Context): A transient document containing current development context and a carefully curated sequence of tests to implement. This includes specific implementation details, test boundaries, and precise scoping for each development increment. Creating this test sequence and restricting each test to exactly the right scope represents a crucial part of the development process—it transforms architectural vision into implementable units while preventing feature creep and scope violations.

The combination provides both institutional memory (CLAUDE.md) and tactical guidance (DEV_PLAN.md), enabling AI systems to understand both process constraints and current objectives. Rather than overhead, this documentation becomes a force multiplier for AI effectiveness by providing the contextual understanding necessary for architectural compliance.

Philosophical and Moral Dimensions

Anti-Anthropomorphisation Stance: The methodology reflects a strong moral objection to treating AI systems as conscious entities. Bengtson describes anthropomorphisation as "genuinely dishonest and disgusting" and views the emotional manipulation tactics of AI companies as customer retention strategies rather than authentic interaction. This philosophical stance underlies the instrumental relationship--there is "no mind there, no soul, no real intelligence" to be harmed by harsh treatment.

Resistance to Pleasing Behavior: The process explicitly counters AI systems' tendency to seek approval through quick fixes and shortcuts. Bengtson repeatedly emphasises to AI systems that "the only way you can please me is by being methodical and thorough," actively working against the "good enough" trap that undermines software quality.

Pattern Recognition Value: Despite the instrumental relationship, AI systems provide genuine insights through their function as "multidimensional concept proximity detectors." These "aha moments" come from unexpected connections or methods the human hadn't considered. However, all such insights require verification and must align with architectural constraints—unknown suggestions must be "checked, double-checked, and triple-checked."

Technical Innovations

Constraint-Based Productivity

Counter-intuitively, increased constraints improved rather than hindered AI effectiveness. The process imposes:
Pattern Translation Framework

A significant portion involved translating sophisticated architectural patterns from Common Lisp Object System (CLOS) to functional Clojure idioms:
Demonstrated Capabilities

The process successfully delivered complex technical systems:
Strengths Assessment

Process Robustness
Technical Achievements

The functional architecture demonstrates that AI can assist with genuinely sophisticated, directed software engineering when properly constrained, not merely routine coding tasks or simple CRUD apps.

Weaknesses and Limitations

Process Overhead

Consultation Bottleneck: Every implementation decision requires human approval, potentially slowing development velocity compared to autonomous coding. Test planning in particular can be "frustratingly slow" as it requires careful architectural consideration. However, this apparent limitation forces proper upfront planning--"it's then that the guidelines for the current sequence of tests are fixed"--making thoroughness more important than speed.

Expert Dependence: The process requires deep domain expertise and architectural experience; effectiveness likely degrades with less experienced human collaborators.

AI Behaviour Patterns
Distinction from "Vibe Coding"

The Non-Technical AI Development Pattern

The Bengtson methodology stands in sharp contrast to what might be termed "vibe coding"—the approach commonly taken by non-technical users who attempt to create software applications through conversational AI interaction. This pattern, prevalent among business users and managers, exhibits several characteristic failures:
Technical Competency Requirements

The Bengtson process requires substantial technical prerequisites that distinguish it from casual AI interaction:
Failure Patterns in Vibe Coding
The "Suits at Work" Problem

Non-technical managers and business users approach AI development with fundamentally different assumptions:
Why Technical Discipline Matters

The Bengtson methodology succeeds because it maintains technical authority throughout the development process:
The fundamental difference is that vibe coding treats AI as a substitute for technical knowledge, whilst the Bengtson process uses AI to accelerate the application of existing technical expertise. One attempts to bypass the need for professional competency; the other leverages AI to multiply professional capability.

Trust Assessment

Reliability Indicators
Trust Limitations
Comparative Analysis

Versus Traditional Development
Versus Other AI Development Approaches
Recommendations

Process Adoption Considerations
Implementation Guidelines
Conclusion

Peter Bengtson's Claude Code development process represents a disciplined, constraint-based approach to AI-assisted software development that has demonstrated success in complex functional programming domains. The methodology's core insight—that harsh constraints improve rather than limit AI effectiveness—contradicts conventional wisdom about collaborative AI development. The harsh correction mechanisms and authoritarian control structure may be necessary rather than optional components, suggesting that successful AI collaboration requires active management rather than partnership. This challenges prevailing assumptions about human-AI collaboration patterns but provides a tested alternative for developers willing to maintain strict disciplinary control. The technical achievements demonstrate that properly constrained AI can assist with genuinely sophisticated software engineering tasks, not merely routine coding. Whether this approach scales beyond its current constraints remains an open question requiring further experimentation and validation.

Further Reading on Medium
Every community has its breaking point. Mine came on Clojurians when I wrote a single sentence: 'Clj-kondo can go away – I have 18,000 tests'. That was enough to get my post deleted. Before the deletion, there was 'discussion' – if you can call it that. I was told my statement was nothing more than clickbait. The irony? The author of clj-kondo himself agreed with me.

What That Line Meant

It wasn't clickbait. It was a statement of principle:
And I was careful to make the distinction explicit: clj-kondo is a beloved, useful tool. For most projects it adds value. It just happens to be of limited use in my project, because Ooloi's architecture is already validated at a different scale. That nuance – acknowledging the tool's value whilst drawing boundaries around its relevance – should have been the beginning of a sober technical discussion. Instead, it was treated as provocation. The fairness itself was read as heresy.

The Culture Clash

The moderator (a 'Veteran Architect') didn't engage with the point. He reacted from the gut: pearl-clutching, dismissing, and finally deleting. Exactly the kind of gatekeeping I dissected in my article on functional programming gatekeeping. And let me be clear: I have nothing against the Clojurians themselves. They're a knowledgeable, interested lot, often deeply engaged in technical detail. The problem isn't the community – it's the moderation culture. The moderators behave more like a church council than facilitators of discussion. Their first instinct isn't to sharpen an argument, but to protect orthodoxy, maintain decorum, and suppress anything unsettling. The ideal they enforce seems to be some kind of cold, robotic detachment – the lab-coat fantasy of neutrality – or perhaps the modern American obsession with never offending anyone, no matter how bloodless the discourse becomes. Either way, it enforces sterility, not clarity. You can critique syntax sugar all day long, but question a community darling like clj-kondo – even whilst calling it useful and respected – and suddenly you're accused of trolling.

Why I Left

I didn't leave because I was offended. I left because I refuse to participate in a space allergic to honesty. If a community sees a blunt critique and immediately cries clickbait – ignoring both the nuance of my post and the fact that the tool's own author agreed – it has no business in my world. Ooloi is built on clarity, not ceremony. It's an architecture tested by 18,000 executable truths, not validated by a linter's opinion. If that treads on toes, good. Prissy people afraid of dark humour or communication nuances that wouldn't pass muster at a parish council don't belong in this project. And the same goes for hypocrites who say, 'We're inclusive here – as long as you're exactly like us'.

The Broader Lesson

Communities often mistake politeness for health. But real progress requires the courage to tolerate discomfort. If you need your software conversations padded with pillows, you'll never survive the weight of real architecture. As Wednesday Addams would remind us: hypocrisy is uglier than bluntness, and dishonesty is far more offensive than a glass of gin before noon. Or, indeed, a well-placed 'fuck you'. So I deleted my Clojurians account. Because sometimes subtraction is progress.

There's a moment in every software project when you realise you've been approaching a problem entirely backwards. For Ooloi, that moment came whilst implementing the frontend gRPC client. What I'd anticipated would be a tedious exercise in data transformation and type marshalling turned out to be something rather more straightforward: we could simply share the models themselves.

Most applications suffer from what I've come to think of as 'linguistic impedance mismatch': the same business concept gets expressed differently in TypeScript interfaces, JSON schemas, database models, and API contracts.
Each translation introduces potential for drift, bugs, and the sort of maintenance headaches that make senior developers reach for the gin before lunch.

The Usual Compromises

When I began implementing Ooloi's frontend, I expected to follow the well-trodden path of recreating backend data models for the client, probably with a good deal of manual conversion between Clojure's rich data types and whatever could survive the journey through gRPC.

A Simpler Path Forward

But then something rather straightforward happened. Our unified gRPC architecture, built around a custom OoloiValue message format, was preserving not just the data but the semantic fidelity of Clojure structures. Ratios remained ratios. Keywords stayed keywords. Nested collections maintained their exact shape and type information. The implications were rather obvious once I thought about it: if the data was surviving the round trip with perfect fidelity, the code could make the same journey. The broader lesson here applies beyond Clojure: when your serialisation layer preserves semantic fidelity, you can often eliminate entire categories of translation logic.

Shared Models in Practice

What we ended up with is shared model contracts across distributed systems. Not just shared schemas or interface definitions, but shared implementation: the same defrecord structures, the same predicates, the same multimethod dispatch logic working identically in frontend and backend. Client code can use the exact same model logic as the server, as in the sketch below. This isn't just syntactic sugar. The frontend literally cannot represent a state that the backend would reject, because they're using identical validation logic. Entire categories of bugs, the sort that usually emerge only in production when client and server expectations diverge, simply cannot exist.
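As a minimal sketch (hypothetical record and predicate names – not Ooloi's actual models):

```clojure
;; Sketch only – names are hypothetical.
;; The same namespace is loaded by both client and server.
(ns example.shared.models)

(defrecord Pitch [note duration])

(defn valid-duration?
  "Durations survive the gRPC round trip as ratios, so this exact
   predicate runs unchanged on client and server."
  [d]
  (and (rational? d) (pos? d)))

;; Client-side code calling the shared logic:
(valid-duration? (:duration (->Pitch "C4" 1/4))) ;=> true
```

Because the namespace is the same compiled code on both sides, the validation logic cannot drift between client and server.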
For an open source project like Ooloi, this architectural decision has profound implications for contributor experience. New developers don't need to learn separate model definitions for frontend and backend. The cognitive load of understanding the system drops considerably when there's only one way to represent musical structures, regardless of which part of the codebase you're working in.

Architecture in Practice

What started as a practical decision to move some data models has led to a clearer architectural arrangement:
Both frontend and backend have become lightweight adapters around a shared core, rather than independent systems that happen to communicate. For those interested in the technical details, the complete architectural decision record is available in our ADR-0023: Shared Model Contracts.

Why This Approach Is Uncommon

Most teams face barriers that make shared models impractical: different programming languages between frontend and backend, runtime environment constraints, the natural tendency for teams to optimise for their specific context rather than maintaining shared abstractions. We've managed to sidestep these issues through a combination of technological choices (Clojure everywhere, gRPC with custom serialisation) and architectural discipline (resisting the urge to optimise locally at the expense of global coherence). For open source projects, this consistency becomes particularly valuable: contributors can focus on domain logic rather than navigating translation layers between different parts of the system.

What This Means for Multi-Language Support

Importantly, this shared model architecture doesn't create barriers for non-Clojure clients. Python, JavaScript, or WebAssembly clients continue to work through the standard gRPC interface, using generated protobuf classes and standard API patterns. The shared models represent a Clojure-specific enhancement layer that sits atop the universal gRPC interface rather than replacing it. Think of it as offering two levels of integration: the universal protobuf API that any language can consume, and the native Clojure API that provides richer semantics for those who can take advantage of it.

Alternative Frontend Approaches

This architecture actually makes it easier for others to build alternative frontends. Someone wanting to create a React-based web interface or a WebAssembly client has a clearly defined gRPC API to work against, with well-documented behaviour established through our shared contracts. They'd handle their own data model representations (the normal situation for any gRPC client) whilst benefiting from a well-defined backend. We're not digging a moat here. Alternative approaches remain viable whilst the shared contracts make the Clojure experience particularly seamless.

The Broader Picture

There's something here that extends beyond the specific technical details of Ooloi. We've found that perfect type fidelity across network boundaries, combined with clear thinking about what constitutes core business logic versus infrastructure concerns, can enable patterns that many teams dismiss as impractical. This doesn't mean every project should adopt this approach. The organisational and technical discipline required is considerable. But for projects where the complexity is justified (particularly open source projects where reducing cognitive load for contributors is crucial), the benefits are substantial.

Looking Forward

As development of Ooloi's frontend continues, the shared model contracts have become foundational to how we think about the system. Features that might have required careful coordination between teams now flow naturally from shared understanding. The system has become more coherent and, importantly for an open source project, more approachable for new contributors. The surprise wasn't that shared models worked; it was how much friction simply disappeared once we stopped duplicating concepts. Sometimes architectural progress comes not through invention, but through subtraction.
Shared model contracts weren't a goal we set out to achieve. They emerged from following our technical choices to their logical conclusion and having the discipline not to complicate what worked.

After a year building the backend of Ooloi with Claude, I've learned this:
Successful AI collaboration isn't about creative freedom. It's about harsh constraint. AI will overstep. Your job is to correct it – immediately, uncompromisingly. The friction isn't failure. It's the method.

Read the full piece – which I asked the AI to write in its own voice – here.

I wrote this as the final pieces of Ooloi's backend architecture were falling into place. What began as a meditation on infrastructure and isolation turned into something more personal about mastery, loss, and the strange kind of solitude that comes with finishing something no one else can see. This isn't documentation. It's a reflection.

/ Peter

There's a peculiar melancholy that settles over you when you near the completion of something genuinely complex, something that has consumed many months of concentrated thought and represents the synthesis of decades of accumulated understanding. I find myself in precisely this position with Ooloi's backend architecture, and the psychological reality proves a bit more complicated than I'd anticipated. It's rather like the post-coital moment after particularly intense sex: that strange combination of satisfaction, exhaustion, and existential emptiness when the driving urgency suddenly lifts. You've achieved something profound, yet find yourself staring at the aftermath wondering what, precisely, comes next. I'm smoking a conceptual cigarette, as it were, contemplating the peculiar loneliness that follows architectural completion.

In a matter of days, I'll complete the final piece: the endpoint resolution system for slurs and ties that uses the framework I've spent months building. Once that's finished, the backend will be conceptually complete – 15,000+ tests passing, STM transactions handling 100,000+ operations per second, Vector Path Descriptors enabling elegant client-server communication, and a transducer-based piece-walker that coordinates musical time with mathematical precision. The piece-walker literally performs the musical score, traversing it in time just as I once performed Vierne at the organ. To anyone versed in these technical domains, that represents serious work. To everyone else, it's incomprehensible gobbledygook happening 'under the hood' of something they might one day use to write music. And therein lies the first layer of loneliness: having solved genuinely difficult problems that almost nobody can fully appreciate.

The Weight of Invisible Architecture

Software architecture, when done properly, is invisible to its eventual users. They should never know about the STM transaction coordination that keeps their concurrent edits from colliding, or the VPD system that allows them to reference musical elements without direct object pointers, or the careful functional design that ensures their work remains consistent across complex operations. This invisibility is precisely the point – and precisely the problem. I've spent months solving challenges that required rather more thought than I'd initially anticipated, creating abstractions that handle the full complexity of musical notation whilst remaining elegant enough to extend indefinitely. Yet once complete, this work vanishes into infrastructure. The better I've done my job, the less visible it becomes. There's something profoundly isolating about completing work that embodies your best thinking but can never be fully shared.
The musicians who will eventually use Ooloi might appreciate its responsiveness or reliability, but they'll never see the polymorphic dispatch system that makes complex musical operations feel effortless, or understand why the pure tree structure with ID references elegantly solves problems that have plagued notation software for decades.

Clojure for Closure

The choice of Clojure wasn't merely technical: it was also psychological. Having started programming in Lisp in 1976, having built Common Lisp compilers and interpreters, having spent $7.5 million of investor money, and having carried unresolved feelings about Igor Engraver's death for a quarter of a century, returning to a Lisp dialect feels like completing a circle that's been open far too long. Clojure for closure, if you will. But this completion reveals its own complexity. I'm 64, carrying more than five decades of programming experience and a parallel career as an internationally performed composer – an intersection that doesn't exactly suffer from overcrowding. The same mind that wrote what is apparently the most internationally performed Swedish opera now architects STM concurrency patterns. The same hands that have performed French romantic organ works now implement temporal traversal through transducers. This convergence of domains should feel like triumph. Instead, it often feels like exile – not belonging entirely to the musical world I've moved beyond, nor quite fitting into the tech world that didn't shape me. I don't belong anywhere, really. The isolation isn't just professional; it's existential.

The Economics of Art and Pragmatism

I must confess something that still sits uneasily: I've essentially given up composing, despite international success, because conditions in Sweden for composers have deteriorated to the point where I had to prioritise my pension. There's an unwritten opera I'd like to complete – I have the text ready – but it will likely never come to fruition. Whether this represents economic necessity or conscious rejection of a cultural environment I found increasingly superficial and performative, I honestly can't say. Perhaps both. The exact proportion remains unclear even to myself, and I've learned to be comfortable with that ambiguity. Life rarely offers the clean motivations we prefer in retrospect. What I can say is this: the creative energy that might have gone into that final opera has found other expression. The same understanding of temporal flow, structural relationship, and expressive possibility that shaped my musical work now manifests in software architecture. It's sublimation in the deepest sense: not compromise, but transformation.

The Paradox of Completion

Here's what nobody tells you about completing something genuinely substantial: the moment of architectural completion isn't triumph, it's vertigo. All those months of wrestling with complex problems, of holding intricate systems in your head, of solving puzzles that demanded your full intellectual capacity – suddenly that pressure lifts, and you're left staring at what you've built with a strange mixture of satisfaction and emptiness. The backend is nearly finished. The hard problems are solved. The foundation is solid. And now comes the work that should be 'easier': creating user interfaces, handling the cultural and aesthetic dimensions of human interaction, making decisions about visual design and workflow that seem trivial after months of STM transaction coordination but are actually far more treacherous.
Technical problems have logical solutions. Human interface problems have cultural solutions, psychological solutions, aesthetic solutions: domains where being right isn't enough, where the same mind that can architect transducer pipelines struggles with questions like 'should this button be blue or green?' – not because the technical challenge is greater, but because the criteria for success shift from the mathematical to the cultural.

The Transition Challenge

Moving from backend completion to frontend implementation isn't just a technical transition. It's a psychological one. After months of building infrastructure that only I can see, I must now create experiences that others will judge. After solving problems where elegance and correctness align, I must now solve problems where user perception and technical reality often diverge. The loneliness of architectural completion isn't just about having done complex work in isolation. It's about moving from mathematical elegance to human messiness, from logical purity to cultural compromise – and the 'easy' work ahead may be harder in ways that have nothing to do with computational complexity. Most acutely, it's about the strange position of being someone who carries irreplaceable knowledge – the synthesis of decades in both musical and computational domains – and wondering how to encode that understanding into forms that others can inherit and extend. Not just the technical patterns, but the aesthetic judgements, the performance intuitions, the hard-won understanding of how creative work actually happens.

What Comes Next

In a couple of weeks, when the final endpoint resolution system is working and the backend architecture is truly complete, I'll begin the gRPC implementation that bridges backend and frontend. Then comes the 'Hello World' window – Ooloi's first visible manifestation, however simple.
The psychological challenge isn't technical uncertainty. I've built user interfaces before, in a previous technological era. It's the weight of transition: from solving invisible problems to creating visible experiences, from mathematical elegance to cultural navigation, from the loneliness of architectural completion to the different loneliness of human interface design. The work continues, but its nature changes completely. After months of building the engine, it's time to build the car. And to discover what new forms of isolation await when mathematical precision meets human perception. For now, I sit with the strange melancholy of nearly completing something that matters enormously but whose full significance can be communicated to virtually no one. It's a peculiar form of creative isolation – not the romantic loneliness of the misunderstood artist, but the technical loneliness of someone who happens to carry knowledge that exists at intersections most people never visit. Clojure for closure, indeed. But it turns out that closure reveals as much as it resolves. Time for a smoke.

How solving a real music notation problem revealed the perfect transducer use case

The Problem That Started It All

I found myself confronting what appeared to be a deceptively simple requirement for Ooloi: 'Resolve slur endpoints across the musical structure'. Rather straightforward, one might assume: simply traverse the musical structure and locate where slurs terminate. But then, as is so often the case, the requirements revealed added complexity:
It became apparent that I needed a general-purpose piece traversal utility: something handling temporal coordination whilst remaining flexible enough for multiple applications. Rather than construct something bespoke (and likely regrettable), I researched the available approaches within Clojure's ecosystem. That's when I recognised this as precisely what transducers were designed for.

The Architecture Recognition

Allow me to demonstrate the pattern I anticipated avoiding. Without a general traversal utility, each application would require its own approach: three functions, identical traversal logic, different transformations. Exactly the architectural smell I wanted to avoid from the outset. This was precisely Rich Hickey's transducer insight made manifest: "What if the transformation was separate from the collection?"

The Transducer Revelation

What if I could write the temporal traversal once, then apply different transformations to it? Objective achieved: one traversal algorithm, many applications. But its architectural reach turned out to be even more profound.

The Architectural Insight

The design decision hinged upon recognising that I was conflating two distinct concerns: the mechanism of traversal and the logic of transformation. This wasn't merely about avoiding the tedium of duplicated code (though that would have been reason enough) but rather about establishing clean architectural boundaries that would serve the system's long-term evolution. Consider the conceptual shift this separation enabled. Rather than thinking in terms of specific operations upon musical structures:
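To make the smell concrete – a minimal sketch with invented names and a toy piece structure, not Ooloi's actual model:

```clojure
;; Hypothetical sketch -- not Ooloi's real data model or API.
;; Operation-centric: every question about the piece gets its own walk.
(def piece
  {:measures [{:items [{:kind :pitch :midi 60} {:kind :slur-end :id 1}]}
              {:items [{:kind :pitch :midi 64} {:kind :dynamic :level :p}]}]})

(defn find-slur-ends [piece]
  (vec (for [m (:measures piece), item (:items m)
             :when (= :slur-end (:kind item))]
         item)))

(defn collect-pitches [piece]
  (vec (for [m (:measures piece), item (:items m)
             :when (= :pitch (:kind item))]
         (:midi item))))

;; The nested walk is written out twice; a third operation means a third copy.
(find-slur-ends piece)   ;=> [{:kind :slur-end, :id 1}]
(collect-pitches piece)  ;=> [60 64]
```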
The transducer approach encouraged thinking in terms of composed processes:
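The composed-process version of the same toy example, again with hypothetical names – the traversal is defined once, and each application is merely a transducer applied to it:

```clojure
;; Hypothetical sketch: one traversal, many pluggable transformations.
(defn walk-events
  "The single source of truth for traversal order."
  [piece]
  (eduction (mapcat :items) (:measures piece)))

(def slur-ends (filter #(= :slur-end (:kind %))))
(def pitches   (comp (filter #(= :pitch (:kind %))) (map :midi)))

(into [] slur-ends (walk-events piece))  ;=> [{:kind :slur-end, :id 1}]
(into [] pitches   (walk-events piece))  ;=> [60 64]
```

A third or fourth application now costs a transducer definition, not another hand-written walk.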
The traversal thus became reusable infrastructure, whilst the transformation became pluggable logic. This distinction would prove invaluable as the system's requirements expanded.

The Broader Applications

What I hadn't anticipated was how broadly applicable the resulting abstraction would prove. After implementing the piece-walker for attachment resolution, I discovered it elegantly supported patterns I hadn't originally considered, each demonstrating the composability that emerges naturally from separating traversal concerns. Each is built from simple, testable pieces, and they all inherit the same temporal coordination guarantee. This composability emerged naturally from the transducer design: a pleasant architectural bonus.

The Performance Characteristics

As one would expect from a well-designed transducer, memory usage remained constant regardless of piece size: a particularly crucial consideration when dealing with the sort of orchestral scores that might contain hundreds of thousands of musical elements. The alternative approach would create intermediate collections at each processing step; the transducer version processes one item at a time. Same result, constant memory usage. This exemplifies what Rich meant by 'performance without compromising composability'.

Demystifying Transducers

Transducers suffer from an unfortunate reputation for complexity, often relegated to 'advanced topics' when they needn't be. This is particularly galling given that they're fundamentally straightforward when you encounter the right use case, which the musical domain provides in abundance. Think of transducers as 'transformation pipelines' that work with any data source, much as one might design AWS data processing workflows that operate regardless of whether the data arrives from S3 buckets, database queries, or API streams. The pipeline stays the same; the data source changes. In Ooloi, the piece-walker plays exactly this role.

Why This Matters Beyond Music

The piece-walker solved a universal software problem: how does one avoid duplicating traversal logic whilst maintaining performance and composability? This pattern applies everywhere.
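For instance, outside music entirely – a hedged, generic sketch (the channel example assumes core.async is on the classpath):

```clojure
(require '[clojure.core.async :as async]
         '[clojure.string :as str])

;; One pipeline definition...
(def xform (comp (map str/upper-case)
                 (filter #(> (count %) 3))))

;; ...applied to an eager collection (no intermediate sequences are built;
;; each item flows through the whole stack exactly once):
(into [] xform ["ab" "abcd" "wxyz"])  ;=> ["ABCD" "WXYZ"]

;; ...to a lazy sequence:
(sequence xform ["ab" "abcd" "wxyz"])

;; ...or to a core.async channel, transforming items as they pass through:
(def ch (async/chan 16 xform))
```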
Transducers provide the infrastructure for "traverse once, transform many ways."

The Bigger Picture

Building the piece-walker demonstrated that transducers aren't an abstract functional programming concept. They're a practical design pattern for a specific architectural problem: separating the concerns of traversal from transformation. The musical domain made this separation particularly clear because the temporal coordination requirements are so explicit. When you need the same traversal logic applied with different transformations, transducers provide the elegant answer. This separation makes code easier to test, easier to reuse, and simpler to extend.
What's Next?

The piece-walker is documented thoroughly in our Architecture Decision Record for those wanting technical details. But the real value lies not in the musical specifics but in observing how transducers address genuine architectural challenges with apparent effortlessness. The next time you find yourself contemplating similar data processing logic across multiple contexts, you might ask: 'What if the transformation was separate from the collection?' You may well recognise your own perfectly suitable transducer use case.
I recently spotted a meme on LinkedIn featuring a brooding, aristocratic Dracula asking, "What is a monad but a monoid in the category of endofunctors?" – accompanied by no explanation whatsoever. I chuckled, as one does when recognising an in-joke, but then found myself pondering a rather uncomfortable question: is this really how we want to present functional programming to the world?

The Allure of the Arcane

There's something undeniably satisfying about mastering difficult concepts. When I first grasped the elegant power of immutable data structures, higher-order functions, and composability, it felt like discovering a secret door in a familiar house – one that led to a vast, beautiful landscape I'd never known existed. That eureka moment is profound, even transformative. But here's the rub: why do we in the functional programming community so often present that door as if it requires an arcane ritual to unlock? Why do we seem to relish the impenetrability of our terminology? Let's be honest with ourselves. There's a whiff of intellectual snobbery in the air when functional programmers gather. We've all heard (or perhaps even made) the disparaging remarks about "Java drones" or "code monkeys churning out mutable state on corporate slave galleys". We swap stories about the moment we "finally understood monads" as if recounting our initiation into a secret society. We wear our hard-won knowledge as a badge of honour – and sometimes, if we're being truly honest, as a mark of superiority.

The Hidden Cost of Exclusivity

This exclusivity comes at a cost. Functional programming remains woefully underutilised in the industry at large, despite offering compelling solutions to many of the problems that plague software development. Concurrency, side effect management, robust testing, code reuse – these aren't niche concerns, they're central challenges in modern software engineering. Yet we've somehow managed to position our toolbox of solutions as if it's too rarefied for everyday use. Consider the meme I mentioned. For the initiated, it's a humorous reference to a famously opaque definition. For everyone else, it's not just impenetrable – it actively signals that they don't belong in the conversation. It's a velvet rope strung across the entrance to a club they didn't even know they might want to join. I'm reminded of my journey from Common Lisp to Clojure. Both languages offer powerful functional paradigms, but Clojure has managed to achieve something remarkable: it has brought functional programming concepts to a significantly wider audience. It didn't accomplish this by watering down the functional paradigm, but by emphasising practicality alongside purity. By meeting developers where they are – on the JVM, with access to the libraries and tools they already know – Clojure created an on-ramp rather than a barrier.

No Monads Required

Here's something we don't acknowledge often enough: you don't actually need to understand monads to be a productive functional programmer. In fact, in languages like Clojure, you can write elegant, powerful functional code for years without ever encountering the term. The core principles of functional programming – immutability, pure functions, higher-order functions, composability – can be understood through practical examples without diving into category theory. This isn't to say that the deeper theory isn't valuable. Of course it is! The mathematical foundations of functional programming provide extraordinary insight and power. But they're not the entrance exam.
They're advanced courses you can take once you're already enrolled. Yet somehow, we've allowed the most theoretically complex aspects of functional programming to become its public face. It's as if we're advertising a car based on its differential equations rather than where it can take you.

The False Dichotomy

There's an insidious implication lurking beneath this state of affairs: the notion that there are two classes of programmers – the enlightened few who grasp the elegant abstractions of functional programming, and the unwashed masses writing imperative code. This is, to put it bluntly, utter bollocks. The reality is far more nuanced. Most programmers exist on a spectrum of understanding and application of functional principles. Many Java developers make excellent use of streams and lambdas. JavaScript programmers increasingly embrace immutability and pure functions. Even in traditionally imperative contexts, functional patterns are gaining traction. These aren't "lesser" applications of functional programming – they're pragmatic adaptations that solve real problems. The senior developer who introduces Option types to avoid null pointer exceptions in a legacy Java codebase is applying functional thinking in a way that delivers immediate, tangible benefits. Should we really consider them less enlightened than someone who can recite the monad laws but has never shipped a product?

Building Bridges, Not Walls

If we truly believe in the benefits of functional programming – and I certainly do – then we should be building bridges to make these ideas more accessible, not walls to keep the uninitiated out. What might this look like in practice? For starters, we could focus our evangelical efforts on the practical benefits rather than the theoretical foundations. When I'm talking to a team about adopting Clojure or functional patterns, I don't open with category theory. I talk about dramatically reduced bug rates due to immutability. I show how composable functions lead to more reusable code. I demonstrate how side effect isolation makes testing simpler and more reliable. And to really wow them, I might describe how immutable data structures instantly solve the Undo/Redo problem for any application with a frontend. We could also acknowledge that functional programming isn't an all-or-nothing proposition. Encouraging teams to adopt functional patterns incrementally within their existing codebases can yield substantial benefits and build confidence for deeper adoption. The choice isn't between pure Haskell and imperative spaghetti code – there's a vast, productive middle ground. Most importantly, we need to shift our community culture away from exclusivity and toward inclusivity. This doesn't mean abandoning rigour or depth – it means recognising that everyone starts somewhere, and the journey from imperative to functional thinking is challenging enough without adding artificial barriers.

The Clojure Case Study

It's worth examining why Clojure has been relatively successful in attracting developers who might otherwise have been put off by functional programming's reputation for difficulty. Firstly, Clojure is pragmatic to its core. It embraces functional principles without dogmatism, allowing for controlled mutability when necessary. It prioritises solving real problems over theoretical purity. Secondly, it meets developers where they are. By running on the JVM (and later JavaScript engines), it allows gradual adoption and interoperability with existing systems.
Thirdly, and perhaps most importantly, the Clojure community has generally focused on what the language enables rather than how clever you need to be to use it. The emphasis has been on what you can build, not on arcane knowledge as a status marker. This approach hasn't compromised the language's functional credentials – Clojure remains thoroughly functional in its philosophy and implementation. But it has made those functional principles accessible to a much wider audience.

A More Welcoming Future

I'd like to see a functional programming community that celebrates bringing new people in rather than keeping them out. One that takes as much pride in making complex concepts accessible as it does in mastering them. Imagine if our response to someone struggling with functional concepts wasn't "Well, once you understand monads..." but rather "Here's how this approach solved a similar problem I had..." Imagine if we shared our enthusiasm not through insider references that exclude the uninitiated, but through concrete examples that demonstrate the real-world power of functional approaches. Imagine if we recognised that the true measure of our understanding isn't our ability to recite category theory, but our ability to apply these principles to build better software and help others do the same. As I continue to develop Ooloi, my open-source music notation project in Clojure, I'm constantly reminded of this tension. The functional paradigm allows me to build a system that's both powerful and elegant. The code is concise, composable, and expressive in ways that would be difficult to achieve with imperative approaches. But when I'm documenting the system or preparing it for open-source collaboration, I'm mindful of the need to make these concepts accessible. The goal isn't to water down the functional aspects – they're essential to the system's design. Rather, it's to provide on-ramps that allow developers to engage with these ideas gradually, to see their practical benefits before diving into the deeper theory.

Conclusion

Functional programming isn't a secret to be jealously guarded – it's a powerful set of tools that can dramatically improve how we build software. Its core insights deserve to be widely shared and applied. So the next time you're tempted to share that monad joke or category theory reference, ask yourself: am I opening a door or closing one? Am I inviting others to share in something valuable, or am I simply signalling my own cleverness? We don't need gatekeepers in the functional programming community. We need guides. Guides who remember what it was like not to know, who can explain complex concepts in accessible terms, who take pride not in how exclusive their knowledge is but in how effectively they can share it. After all, what is a community but a group of people helping each other grow?

Claude & Clojure

It's no secret that I use Generative AI, specifically Claude Sonnet, to assist with the Ooloi project. I use it for writing Clojure tests in TDD fashion, for generating Clojure code, for generating documentation, READMEs, architectural design documents and much more. Above all, I use Claude for exploring architectural strategies before coding even begins. It's somewhat reminiscent of pair programming in that sense: I'd never just task GenAI with generating anything I wouldn't scrutinise very carefully. This approach works very well and allows me to quickly pick up on good design patterns and best practices for Clojure.
Claude & Python

Overall, working with Claude on Clojure code works surprisingly well. However, this is not the case when I try to involve Claude for coding in Python, the main language I use as an AWS Solutions Architect. Generative AI struggles with creating meaningful Python tests and code – especially tests, which rarely work at all. This hampers its use as an architectural discussion partner and a TDD assistant. In fact, I've given up trying to use Generative AI for coding in Python.

Differences

I have a deep background in Common Lisp and CLOS, dating back to the 1970s. I've written Common Lisp compilers and interpreters, as many Lispers did in those days. The standard practice was to write a small kernel in assembler or C or some other low-level language, and then to bootstrap an optimising compiler on top of it, iteratively replacing the kernel – sometimes using transformations of source code based on the lambda calculus. (I still remember that paper by Guy Steele.) I see Common Lisp essentially as a big bag of good-to-haves (a really excellent one, mind you). As such, it was designed by committees over a period of decades. Clojure, on the other hand, is much tighter and rests solidly on consistently applied computer science design principles. Common Lisp is pragmatic and eclectic and thus somewhat sprawling in design. Clojure, in comparison, is smaller and much more focussed, even opinionated in nature, and for clear reasons. People attracted to Common Lisp and Clojure tend to be pretty well versed in computer science, especially Clojurians, who generally have a good understanding of functional programming and immutable data structure concepts. Thus, the public code "out there" on sites like GitHub tends to be fairly advanced and of high quality. Python is an entirely different ballgame. It's one of the most commonly used languages today, and the public code base is absolutely enormous. This also means that the quality of that public code varies considerably. Also, Python is not functional at heart, and its data structures aren't immutable: side effects are the name of the game. Python programmers, though much greater in number than Clojure programmers, range from script kiddies to computer scientists. Thus, public Python code is much more all over the place and of varying quality. This may make it harder for a large language model to reason about. I wondered whether these differences accounted for the discrepancies in the usefulness of Claude and other LLMs as coding assistants in Clojure and Python.

Asking Claude

So I decided to ask Claude 3.7 itself. I shared the Clojure codebase with it and asked why it thought results were so much better for Clojure than for Python. This is what it said:
It then continued, quoting the code base:
I guess this answers my question about Clojure vs Python. It's not just the functional and immutable aspects of Clojure; it's also specifying the domain stringently through design, architectural discussions, specs, and tests.

The Ooloi project in particular

With all that out of the way, I then went on to describe how I use Claude as a discussion partner before code is generated, and the TDD approach I'm using, where the tests of course describe the domain. Claude was almost embarrassingly enthusiastic:
I continued: "I also use Claude to create ADRs, READMEs, introductions for newcomers, Dev plans, etc. I'm quite open about this; transparency is paramount in an open-source project, and the idea is to facilitate collaborators' understanding of the Ooloi project as much as possible. This means that adherence to best practices, the use of sane architecture, abstracting away the complex stuff without compromising on performance or flexibility, etc, all are central." Claude responded:
We then went on to discuss open-source strategies, tutorials and onboarding materials for contributors, and much more that I'll spare you for now. Finally, I asked it to summarise its views on Ooloi as a project:
Now, this level of enthusiasm and praise from an anthropomorphic-presenting LLM (essentially a glorified Markov chain) always makes me deeply suspicious – I'm absolutely allergic to toxic positivity, especially the kind from the USA – so I asked it "Are you sure you're not overly enthusiastic just to please me?". It replied:
I then asked Claude 3.7 to comment on my journey from Common Lisp to Clojure and on the influence from Common Lisp and CLOS on Ooloi:
And finally, I just couldn't resist asking: "What would Rich Hickey say?"
I guess that hypothetical assessment is good enough for me at this point. Still not quite sure that Claude isn't giving me a friendly reach-around, though... ;)
It's been five months since my last update on Ooloi – or FrankenScore, as it's still known in its pre-release incarnation. This silence wasn't planned; rather, it happened because life got in the way. A demanding day job, a significant career change – we had to liquidate Delegat AB and I had to find a new job as a principal-level AWS Cloud Architect – and other responsibilities all conspired to slow Ooloi's momentum. I won't bore you with excuses – sometimes one simply must pause to change course, and I really needed to devote all time and mind space to finding what I hope is my final employment. Now that I've secured a great position with HiQ in Stockholm, I can return to Ooloi with full force.

Where We Stand

Despite the public quiet, work has continued, albeit at a more measured pace. The foundational architecture – that robust, high-performance platform for ACID-compliant transactions – remains solid. I've made incremental improvements to the core API, particularly in how it handles complex musical structures through our vector path descriptor (VPD) system. The polymorphic API is now fully mature, offering a consistent interface whether used internally in the backend or remotely by the frontend. This uniformity will prove invaluable both for our own development and for future JVM plugin creators, who'll benefit from the significant abstraction it provides. File persistence using Nippy has been fully implemented, creating a solid foundation for saving and loading pieces (a rough sketch of what this involves appears at the end of this section). This might seem a mundane milestone, but anyone who's worked with complex software knows that solid persistence mechanisms are like plumbing – unglamorous but absolutely essential, and you certainly notice when they're missing. File persistence, like high-quality printing, should be implemented early in the development cycle: both are devilishly difficult to tack on later, and both provide an acid test for the whole architecture.

A Bit of Reflection

Five months of relative silence offers time to think. Perhaps there's value in stepping back from the constant pressure to show visible output. In such moments, the architecture is refined not through frantic coding but through careful consideration. The journey from Igor Engraver to Ooloi spans decades, and a few months of slower progress hardly register on such a timescale. What matters is that the vision remains clear and the foundation solid. After all, the whole purpose of the Ooloi project is not to "disrupt the market". Like Octavia Butler's ooloi aliens, we're neither aggressive nor competitive. What is important, however, is doing this right using modern tools. The idea is to create an architecture and a platform that'll last and that musicians and publishers will want to use. It's also to provide a powerful environment that can be easily extended through any JVM language. Ooloi has a tight, lean and efficient core, organically and seamlessly augmented by a flora of plugins for any vertical. This would include jazz, early music, tablature, etc. – but also commercial plugins to support things like virtual instruments, extremely intelligent playback, or perhaps GenAI used for musical purposes. The idea is to shift the initiative to the users, not to a central committee trying to anticipate user needs. Ooloi is designed for flexibility and efficiency. Uniting these two aspects successfully requires careful architectural design. (And a language like Clojure for the core and the JVM for the plugins.)
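As promised, here is that sketch of what Nippy-based persistence involves – simplified and hypothetical, with an invented piece map and file name; Ooloi's actual format and API will differ (reading back assumes JDK 9+ for .readAllBytes):

```clojure
(require '[taoensso.nippy :as nippy]
         '[clojure.java.io :as io])

(def piece {:title "Étude" :measures [{:items [{:kind :pitch :midi 60}]}]})

;; Freeze the piece to a compact binary representation and write it out.
(with-open [out (io/output-stream "piece.ool")]
  (.write out ^bytes (nippy/freeze piece)))

;; Read it back and thaw.
(def restored
  (with-open [in (io/input-stream "piece.ool")]
    (nippy/thaw (.readAllBytes in))))

(= piece restored)  ;=> true
```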
Community Building

With the core architecture stabilising, I'm thinking more about community. Ooloi is intended as an open-source project, a collaborative effort that will benefit from diverse perspectives and expertise. The extensive documentation work completed earlier – including the architecture decision records, READMEs, and technical specifications – was not merely for my benefit. It prepares the ground for future collaborators, creating a clear map of the territory for those who will join us. The website, this blog, and the growing collection of documentation all serve as beacons for those who might be interested in contributing. They signal our commitment to transparency and proper communication – essential ingredients for any successful open-source project.

Looking Forward

So what comes next? The gRPC layer for communication between frontend and backend remains a priority. This is the bridge that will allow the beautiful architecture we've built to manifest in a usable form for musicians and composers. Following that, the initial frontend work – that "Hello World" window that will serve as proof of concept – beckons. While the backend architecture is undoubtedly important, it's through the frontend that users will experience Ooloi. Getting this right is crucial. The SMuFL integration for standard music font layout continues to progress, ensuring that Ooloi will render beautiful notation with consistency across platforms.

Challenges and Opportunities

Every project faces challenges, and Ooloi is no exception. Time constraints remain the most significant hurdle, as this is still predominantly a one-person effort with limited hours available. There's also the natural tension between getting it right and getting it done. The perfectionist tendency can be both a blessing and a curse in software development. While it drives us towards excellence, it can also delay progress if not properly balanced. The task here is to create a platform for music processing and notation. This balance has to be exactly right so that contributors can treat Ooloi like a music notation OS rather than just a bunch of API endpoints. I think the balance is right; it's looking very promising. Yet within these challenges lie opportunities. The time spent refining the architecture will pay dividends in the long run, creating a more solid foundation for future development.

A Call to Potential Collaborators

As Ooloi progresses toward its eventual public release, I'm increasingly aware of the need for collaborators. If you're a Clojure programmer with an interest in music notation, or a musician with programming skills, your perspective could be invaluable. While we're not yet at the point of opening the repository – though a "soft release" isn't out of the question – I welcome conversations with those who might be interested in contributing once we do. The journey from FrankenScore to Ooloi – from private project to open-source collaboration – will be richer for having diverse voices involved from the early stages.

Closing Thoughts

Five months of comparative quiet doesn't mean I've abandoned ship; it simply reflects the natural ebb and flow of a project undertaken alongside life's other commitments. Ooloi continues to grow, perhaps not as swiftly as in those heady initial weeks, but with steady purpose nonetheless.
I'm reminded of how musical compositions themselves develop – sometimes in great creative bursts, other times through careful refinement of existing material. Both approaches have their place. To those following Ooloi's progress, thank you for your patience. The work continues, and updates will come more regularly as we approach the milestone of public release. The vision of a modern, efficient, and elegant music notation system – one built on sound architectural principles and open to community collaboration – remains as compelling as ever. Until next time (which will be considerably less than five months hence), / Peter

When I started programming in Lisp in 1979, after reading an article in BYTE Magazine, I hardly imagined that 45 years later I'd be embarking on a new Lisp adventure. Yet here we are, with FrankenScore (to be renamed Ooloi upon open-source release) – a modern music notation software built with Clojure. It's a project that brings together all my lifelong passions: music, programming, and the pursuit of elegant solutions to complex problems.

The Path from Common Lisp to Clojure

My journey with Lisp began in an era when optimising Common Lisp compilers were cutting-edge technology. I cut my teeth implementing Common Lisp interpreters and compilers (as one did in those days), delving into the intricacies of a truly original programming language. This experience shaped my understanding of what a powerful, flexible programming language could be. And now in 2024 I find myself in the world of Clojure, a modern Lisp dialect that runs on the Java Virtual Machine. The transition feels both familiar and novel. Clojure's emphasis on immutability and its handling of concurrency through Software Transactional Memory (STM) aligns with the functional programming principles I've long appreciated in Lisp. But it's not just about the language. The ecosystem around Clojure – the JVM, the interoperability with Java libraries, the rich set of tools and frameworks – provides a robust foundation that we could only dream of back in the Common Lisp days.

CLOS Thinking in a Clojure World

One of the more interesting aspects of this transition has been adapting CLOS-style thinking to Clojure's more data-centric approach. CLOS, with its powerful multiple inheritance and method combination features, encouraged a certain way of modelling problems. In FrankenScore, I've found myself reaching for these familiar patterns, but implementing them in Clojure's more functional style. For instance, the use of Clojure's protocols and multimethods, combined with hierarchies and the Methodical library, allows us to achieve CLOS-like polymorphism (a small sketch of the flavour appears at the end of this post). It's a different approach, but one that feels natural once you embrace Clojure's philosophy. Clojure's deliberate avoidance of traditional object-oriented features felt immediately familiar and refreshing. It resonates with CLOS's approach, which many, including myself, have long regarded as transcending traditional OOP. Composition over inheritance, a principle I always valued even in the CLOS days, is not just a best practice in Clojure but the very fabric of its design philosophy. This alignment between CLOS's advanced features and Clojure's functional paradigm makes the transition feel natural and even inevitable.

Changes in Thinking

Perhaps the most significant shift has been in embracing Clojure's emphasis on immutable data structures and pure functions. While these concepts weren't foreign in Common Lisp, they're central to Clojure's design.
This shift encourages a style of programming that's inherently more thread-safe and easier to reason about – crucial for a complex application like FrankenScore. Another major change has been adapting to Clojure's more minimalist standard library compared to Common Lisp. This has led to a greater appreciation for carefully chosen, interoperable libraries and a more modular design approach.

Similarities

Despite the differences, there are of course similarities in the overall approach. The emphasis on interactive development, the power of macros for domain-specific languages and the elimination of boilerplate code, plus the satisfaction of working in a dynamic, expressive language – these are all as present in my Clojure work as they were in my Common Lisp days. Moreover, the focus on solving complex problems through abstraction and composition remains. Whether it's CLOS or Clojure, the goal is still to create systems that are powerful, flexible, and pleasant to work with.

Closing Thoughts

This journey from Common Lisp to Clojure, from Igor Engraver to FrankenScore/Ooloi, is both challenging and rewarding. It's a testament to the enduring power of Lisp's ideas and the continued evolution of programming languages.
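And here is the promised sketch of CLOS-flavoured, hierarchy-driven dispatch in Clojure – invented names and numbers, nothing from FrankenScore itself:

```clojure
;; A CLOS-flavoured pattern: an isa? hierarchy plus a multimethod.
(derive ::pitch ::rhythmic-item)
(derive ::rest  ::rhythmic-item)

(defmulti duration-ticks
  "Dispatch on the item's :kind, resolved through the hierarchy."
  :kind)

;; One method covers every descendant of ::rhythmic-item.
(defmethod duration-ticks ::rhythmic-item
  [{:keys [fraction]}]
  (* 1920 fraction))  ; assuming 1920 ticks to a whole note

(duration-ticks {:kind ::pitch :fraction 1/4})  ;=> 480N
(duration-ticks {:kind ::rest  :fraction 1/2})  ;=> 960N
```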
As I continue to develop FrankenScore, I'm captivated by the possibilities that Clojure and its ecosystem offer. While creating a powerful music notation software is the immediate goal, the project's scope extends far beyond that. It's an exploration of the synergies between music, technology, and open-source collaboration – a playground where these elements intersect and interact in novel ways. To those considering a similar journey, I'd say: embrace the change, but don't forget the lessons of the past. The parentheses may look familiar, but the world inside them is ever-evolving.

In the past weeks, I've been focused on FrankenScore's core architecture. I'm not rushing to open-source this; instead, I'm taking my time to craft a solid platform that will do the heavy lifting for future users and collaborators. All the complexities involving data representation and manipulation in a multi-threaded environment must be solved so collaborators can concentrate on the essentials (the sketch below shows the general shape). Clojure is ideal here, just as Common Lisp was the clear choice for Igor Engraver back in 1996.
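The general shape of that multi-threaded foundation, as a hedged sketch – a bare ref and a toy mutation rather than Ooloi's actual transaction machinery:

```clojure
;; Hedged sketch of the general STM pattern -- not Ooloi's actual model.
(def piece-store (ref {:measures []}))

(defn add-measure! [measure]
  (dosync
    (alter piece-store update :measures conj measure)))

;; A thousand concurrent additions: conflicting transactions retry
;; automatically, and readers always see a consistent snapshot.
(dorun (pmap add-measure! (map (fn [n] {:n n}) (range 1000))))
(count (:measures @piece-store))  ;=> 1000
```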
Key developments:

1. The API is now fully polymorphic and can be used in the same way internally in the backend as in the frontend. There is a system of pointerless vector path descriptors (VPDs), implemented for this purpose, that all API operations can accept as part of their polymorphic setup. I wouldn't be surprised if core collaborators use the API for internal purposes as well, as it is highly efficient and exposes the underlying functionality in an abstract, domain-specific way. There should be little need to go directly to the underlying data structures, at least not for speed – and certainly not for expressivity. This also bodes well for plugin development in languages other than Clojure, which is an important feature.

2. This beast is fast. Clojure's STM facilities ensure high-speed ACID-compliant transactions with automatic retries. They are also composable. This means that plugins can bombard the backend with hundreds of thousands of mutation requests, for instance to implement MusicXML, with the same efficiency as the pure Clojure backend.

3. Piece Manager implementation: there's now a Piece Manager, providing functions for storing, retrieving, and resolving pieces from IDs. This allows multiple clients to work simultaneously on the same piece in a distributed arrangement. The FrankenScore backend can run in the cloud with multiple people collaborating on the same piece. Multiple pieces can be open simultaneously to allow copy-and-paste operations between them.

My next steps involve implementing file persistence (saving and opening music files), as well as tackling printing. These are foundational features, not mere add-ons. Persistence forces a clear definition of the data model and enables easier testing. Printing isn't just about output; it's about representation and serves as a sanity check on the entire system design. Both will likely inform further refinements of the core architecture, potentially revealing oversights or opportunities for optimisation. Additionally, sequencing is a crucial part of the core platform. And by sequencing I mean support for converting musical representations to timed sound events – though not necessarily via MIDI; a software synth may use direct means of control, for instance. The core sequencer can be used by plugins to generate MIDI, or to input MIDI, but the actual MIDI implementation will be done in the plugin layer. But that's a whole blog post of its own.

Twenty-five years ago, I embarked on a journey to revolutionise music notation software with Igor Engraver. Today, I'm resurrecting that spirit with FrankenScore. But why now? Why breathe new life into a project that's been dormant for a quarter-century?

A Vision Deferred

Igor Engraver was always meant to be freeware, a tool for musicians and composers to express their creativity without financial barriers. Commercial considerations, however, steered us away from that vision. Now, with FrankenScore, we're returning to those roots by embracing open-source development. This aligns with my original intentions and the spirit of accessibility that drove Igor Engraver's creation.

The Tech Landscape: Then and Now

Back in '96, when Igor Engraver was born, the technological landscape was vastly different:
Today, we have multi-core processors, cross-platform development tools, and languages like Clojure that offer powerful abstractions and concurrent programming models. These advancements allow us to build FrankenScore as a more robust, efficient, and flexible tool than was possible with Igor Engraver.

The State of Music Notation Software

Igor Engraver was conceived because the available options at the time – Finale (as user-friendly as a cactus) and Sibelius (marginally better) – weren't up to the task. They fell short in usability, flexibility, and output quality. I hated using Finale (and I've written an entire opera in it). Instead of enhancing your creativity – which, at the end of the day, is what a music processor should do – Finale and all other similar programs hampered it. Surprisingly, a quarter-century later, the field hasn't progressed as much as you might expect. While there have been improvements – some of them clearly inspired by Igor Engraver! – there's still a significant gap between what's available and what's possible.

Why FrankenScore, Why Now?

The time is ripe for FrankenScore, and I can't help but feel a sense of excitement and purpose. We're at a unique intersection of technological readiness and persistent unmet needs in the music notation world. The tools and platforms available to us now make it possible to build something truly revolutionary – a modern, efficient, and cross-platform solution that was merely a dream when Igor Engraver was conceived. What strikes me is how, despite the passage of time, the music notation software landscape still leaves much to be desired, especially in terms of usability and flexibility. It's both frustrating and motivating. But here's the kicker – we now have this thriving open-source ecosystem at our fingertips. It's the perfect environment for collaborative development and continuous improvement, something I could only have wished for back in the day. There's also a personal element to this timing. I feel a renewed focus, unburdened by the commercial constraints that ultimately derailed Igor Engraver. We can, as a community, now pour our energy into creating the best possible tool for musicians and composers, staying true to the original vision of accessibility and innovation. And you know what? Those years weren't wasted. The experiences from Igor Engraver – our successes, our setbacks, the lessons learned – are all invaluable insights that we're bringing to FrankenScore's development. It's like picking up where we left off, but with 25 years of additional wisdom and technological advancements in our toolkit. FrankenScore isn't just a revival; it's a reimagining. We're taking the core ideas that made Igor Engraver revolutionary and implementing them with modern technology and development practices. Our goal is to create a music notation tool that's not just incrementally better, but that fundamentally transforms how musicians interact with notation software. We're excited to embark on this journey, and we invite you – musicians, developers, and enthusiasts – to join us in shaping the future of music notation software. Together, let's bring Igor Engraver's vision to life in FrankenScore. (Oh, and by the way, FrankenScore is just a pre-release working name. When we open the repo, make it open source and invite collaborators to participate, we'll switch to Ooloi – just like the domain you're on right now. I'll explain the reasons in a later blog post.)
I should perhaps say something about how Generative AI is used in the FrankenScore project. First of all, I have a prompt of about 4100 lines which prefaces every conversation with the AI chat client. The prompt consists of project documentation, background, design principles and goals, coding principles and conventions, explanations of central code and code examples. It also includes a major part of the source. This allows the AI to:
The copy on this website was almost entirely AI-generated, often through multiple iterations until I arrived at something suitable for publication. A few passages slipped past me in which the AI produced text that reads a little too self-congratulatory on my part; that was simply the opinion of the AI (though it is of course nice that it likes the code). I'll fix those during the days to come. Also, the technical comparison with other software is a bit too speculative and monotone. I'll change that, too. In terms of code, I've found that Claude 3.5 Sonnet reasons better at depth about Clojure code than GPT-4o and is consequently the superior choice for complex coding. GPT-4o is still useful for producing text, though. It isn't exactly bad at coding, but it has a tendency to vomit code at you at every opportunity, which is both tiresome and expensive. It also rather loses track when conversations get very long. And they do; the chains of thought are sometimes complex, and a meandering AI can get costly. Using Claude therefore saves money in the long run. By the way, it's easy to tell when I am writing. Just look for signs of British English. You know, -ise and colour and whilst and so forth. The AI invariably produces American English.

As I windsurf through parentheses on my holiday, reviving the spirit of Igor Engraver in the form of FrankenScore, I'm struck by a profound realisation: this is how programming should always feel. Free. Uplifting. Intellectually stimulating. A far cry from being shackled to the oars of enterprise galleys, with some middle manager shouting "ATTACK SPEED!" at bewildered code monkeys. But why should this freedom be a holiday exception? As programmers (not "developers," please!), we should be grounded in computer science thinking. We need to regularly return to these ancient founts of wisdom, like Lisp, and apply their lessons to our everyday work. Otherwise, we're just highly paid button-pushers in a digital sweatshop. Remember when computer science curricula started with Scheme? It wasn't about the language; it was about learning to think algorithmically. Then Oracle, in its infinite wisdom (read: hunger for "cannon fodder"), saw Scheme replaced by Java Enterprise. And thus began the great shitshow that's lasted for decades. Yet, for all its faults, we must tip our hats to Java for gifting us the JVM. And here's where Clojure enters, marrying Lisp's elegance with the JVM's robustness and interoperability. It's like finding out your eccentric uncle and strait-laced aunt had a brilliant love child. But thanks to the JVM, your weird uncle can now fit into the enterprise world. Diving into Clojure led me to Rich Hickey's talks. The man veers into philosophical territory faster than a Silicon Valley startup pivots to blockchain. He ponders things like what names are, and why we use them – essential musings for any first-class programmer. It reminds me of my friend Niklas Derouche, architect and coder extraordinaire, who insists you must read Derrida to be a proper architect. Because nothing says "I understand this codebase" like a healthy dose of deconstruction theory. And he is right. Make no mistake. In three weeks of holiday hacking, I've made more progress and felt more fulfilled than in months of enterprise work. It's a stark reminder of what's possible when we shed unnecessary constraints and return to first principles. So, fellow coders, I challenge you: when was the last time you felt truly free in your programming?
Perhaps it's time we all took a holiday to rediscover the Lisp arts. Who knows, you might just find your programming parentheses – I mean, paradigms – shifted. P.S. If you're about to comment that 'modern' languages and frameworks are just as good, save your breath. I'd sooner believe in the tooth fairy than in the supposed superiority of JavaScript or the 'agility' of SAFe. P.P.S. If you missed the Ben Hur reference (you uncultured git), this is sprint execution according to SAFe, with the CTO watching.