An Incremental Approach to Compiler Construction
Abdulaziz Ghuloum
http://scheme2006.cs.uchicago.edu/11-ghuloum.pdf
Abstract
Compilers are perceived to be magical artifacts, carefully crafted by the wizards, and unfathomable by the mere mortals. Books on compilers are better described as wizard-talk: written by and for a clique of all-knowing practitioners. Real-life compilers are too complex to serve as an educational tool. And the gap between real-life compilers and the educational toy compilers is too wide. The novice compiler writer stands puzzled facing an impenetrable barrier, “better write an interpreter instead.”
The goal of this paper is to break that barrier. We show that building a compiler can be as easy as building an interpreter. The compiler we construct accepts a large subset of the Scheme programming language and produces assembly code for the Intel-x86 architecture, the dominant architecture of personal computing. The development of the compiler is broken into many small incremental steps. Every step yields a fully working compiler for a progressively expanding subset of Scheme. Every compiler step produces real assembly code that can be assembled then executed directly by the hardware. We assume that the reader is familiar with the basic computer architecture: its components and execution model. Detailed knowledge of the Intel-x86 architecture is not required.
The development of the compiler is described in detail in an extended tutorial. Supporting material for the tutorial such as an automated testing facility coupled with a comprehensive test suite are provided with the tutorial. It is our hope that current and future implementors of Scheme find in this paper the motivation for developing high-performance compilers and the means for achieving that goal.
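The paper's first increment is famously tiny: a "compiler" whose entire source language is a single integer literal, emitted as x86 assembly. A rough Python sketch of that step (function and label names here are illustrative, not taken from the paper's code, which is written in Scheme):

```python
# Sketch of the tutorial's first incremental step: compile one integer
# literal into an x86 routine that returns it in %eax.

def compile_program(expr: int) -> str:
    """Emit AT&T-syntax x86 assembly for a program that is just an integer."""
    return "\n".join([
        "    .globl scheme_entry",
        "scheme_entry:",
        f"    movl ${expr}, %eax",  # place the result in the return register
        "    ret",
    ])

print(compile_program(42))
```

Every later step in the tutorial grows this function a little (immediates, primitives, conditionals, ...) while the compiler stays runnable end to end.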
One nice piece of advice I received is that books are like RAM: you don't have to go through them sequentially, but can do random access to the parts you need. With this in mind, I find it doable to get one of the thick books and only read the part that I need for my task.
But, to be fair, the above random-access method does not work when you don't know what you don't know. So I understand why having a light but good introduction to the topic is important, and I believe that's what the author is pointing out.
Compiler writing has progressed a lot, notably with metacompilers [1] written in a few hundred lines of code, adaptive compilation [3], and just-in-time compilers. Alan Kay's research group VPRI tackled the problems of complexity (in writing compilers) [4].
[1] Ometa https://tinlizzie.org/VPRIPapers/tr2007003_ometa.pdf
[2] Other ometa papers https://tinlizzie.org/IA/index.php/Papers_from_Viewpoints_Re...
[3] Adaptive compilation https://youtu.be/CfYnzVxdwZE?t=4575
the PhD thesis https://www.researchgate.net/publication/309254446_Adaptive_...
[4] Is it really "Complex"? Or did we just make it "Complicated"? Alan Kay https://youtu.be/ubaX1Smg6pY?t=3605
Nowadays I’ve heard Crafting Interpreters recommended. (https://craftinginterpreters.com)
The Nanopass paper link doesn’t work.
Been working on a toy compiler for fun recently.
I have ignored all the stuff about parsing theory, parser generators, custom DSLs, formal grammars etc. and instead have just been using the wonderful Megaparsec parser combinator library. I can easily follow the parsing logic, it's unambiguous (only one successful parse is possible, even if it might not be what you intended), it's easy to compose and re-use parser functions (this was particularly helpful for whitespace-sensitive parsing/line-fold handling), and it removes the tedious lexer/parser split you get with traditional parsing approaches.
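The parser-combinator idea itself is small enough to sketch. Megaparsec is a Haskell library, so this is just a toy Python analogue of the concept (all names invented for illustration): a parser is a function from (input, position) to (value, new position) or None, and combinators build bigger parsers out of smaller ones.

```python
# Toy parser combinators: each parser maps (s, i) -> (value, i') or None.

def char(c):
    """Parser that matches exactly one character c."""
    def parse(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        return None
    return parse

def choice(*parsers):
    """Try each parser in order; return the first success."""
    def parse(s, i):
        for p in parsers:
            r = p(s, i)
            if r is not None:
                return r
        return None
    return parse

def many1(p):
    """Apply p one or more times, collecting results."""
    def parse(s, i):
        results = []
        r = p(s, i)
        while r is not None:
            value, i = r
            results.append(value)
            r = p(s, i)
        return (results, i) if results else None
    return parse

digit = choice(*[char(d) for d in "0123456789"])
number = many1(digit)

value, pos = number("123+4", 0)
print("".join(value))  # consumes the leading digits: prints "123"
```

No separate lexer, no grammar file: the parsers are just ordinary functions you can test and compose.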
What taught me how to write a compiler was the BYTE magazine 1978-08 .. 09 issues which had a listing for a Tiny Pascal compiler. Reading the listing was magical.
What taught me how to write an optimizer was a Stanford summer course taught by Ullman and Hennessy.
The code generator was my own concoction, and is apparently quite unlike any other one out there!
I have the Dragon Book, but have never actually read it. So sue me.
Nanopass paper seems to be dead but can be found here at least https://stanleymiracle.github.io/blogs/compiler/docs/extra/n...
I have fond memories of implementing an optimizing compiler for the CS241 compiler course offered back then by Prof Michael Franz who was a student of Niklaus Wirth, probably the most exhilarating course during my time at UC Irvine. This was in 2009 so my memory is vague but I recall he provided a virtual machine for a simple architecture called DLX and the compiler was to generate byte code for it.
Google search points me to https://github.com/cesarghali/PL241-Compiler/blob/master/DLX... for a description of the architecture and possibly https://bernsteinbear.com/assets/img/linear-scan-ra-context-... for the register allocation algorithm
the article's framing around nanopass is undersold: the real insight isn't the number of passes but that each pass has an explicit input and output language, which forces you to think about what invariants hold at each stage. that discipline alone catches a surprising number of bugs before you even run the compiler. crenshaw is fantastic but this structural thinking is what separates toy compilers from ones you can actually extend later.
It's been about 4 years since I took a compilers course (from OMSCS, a graduate program) and I still shudder ... it was, hands down, the most difficult (yet rewarding) class I've taken.
I'm also writing a compiler and CS6120 from Cornell has helped me a lot: https://www.cs.cornell.edu/courses/cs6120/2025fa/self-guided...
The biggest issue with technical books is that they spend the first 1-2 chapters vaguely describing some area and then follow up with "but that's for a later, more advanced discussion" or "we'll cover that in the last 1-2 chapters." Don't vaguely tell me about something you're not going to go into detail about, because now all I'm thinking about while reading the subsequent chapters is all the questions I have about that topic.
I learned from the Dragon Book, decades ago. I already knew a lot of programming at that point, but I think most people writing compilers do. I'm curious if there really is an audience of people whose first introduction to programming is writing a compiler... I would think not, actually.
I wonder if it makes sense to do the nand2tetris course for an absolute beginner since it too has compiler creation in it.
Maybe I'm missing the point of this article? Writing a simple compiler is not difficult. It's not something for beginners, but towards the end of a serious CS degree program it is absolutely doable. Parsing, transforming into some lower-level representation, even optimizations - it's all fun and really not that difficult. I still have my copy of the "Dragon Book", which is where I originally learned about this stuff.
In fact, inventing new programming languages and writing compilers for them used to be so much of a trend that people created YACC (Yet Another Compiler Compiler) to make it easier.
Archive of a Forth translation of Crenshaw’s howto:
https://web.archive.org/web/20190712115536/http://home.iae.n...
A similarly scoped book series is "AI Game Programming Wisdom", which contains a multitude of chapters that focus on diverse, individual algorithms that can be practically used in games for a variety of use cases.
I might be in the minority, but I think the best way to learn how to write a compiler is to try writing one without books or tutorials. Keep it very small in scope at first, small enough that you can scrap the entire implementation and rewrite in an afternoon or less.
See also, Andy Keep's dissertation [1] and his talk at Clojure/Conj 2013 [2].
I think that the nanopass architecture is especially well suited for compilers implemented by LLMs as they're excellent at performing small and well defined pieces of work. I'd love to see Anthropic try their C compiler experiment again but with a Nanopass framework to build on.
I've recently been looking in to adding Nanopass support to Langkit, which would allow for writing a Nanopass compiler in Ada, Java, Python, or a few other languages [3].
[1]: https://andykeep.com/pubs/dissertation.pdf
All you really need is a copy of "The Unix Programming Environment", where you can implement `hoc` in a couple hours.
Any tips on reading material for code generation and scheduling for parallel engines?
> The best source for breaking this myth is Jack Crenshaw's series, Let's Build a Compiler!
Right, I've heard of that...
> , which started in 1988.
... Oh. Huh.
(Staring at the red dragon book on my bookshelf, which was my course textbook in the early 00s.)
Would a practical approach be to parse the source into Clang's AST format, then let it produce the actual executable?
fanf2 on Dec 25, 2015:
I quite like "understanding and writing compilers" by Richard Bornat - written in the 1970s using BCPL as the implementation language, so rather old-fashioned, but it gives a friendly gentle overview of how to do it, without excessive quantities of parsing theory.
https://t3x.org has literal books on that, from a simple C compiler to Scheme (you might heard of s9) and T3X0 itself which can run under Unix, Windows, DOS, CP/M and whatnot.
PS: Klong's intro to statistics - even if the language looks like a joke, it isn't. It can be damn useful, and far easier than Excel. And it comes with a command to output a PostScript file with your chart embedded.
Intro to statistics with Klong
https://t3x.org/klong/klong-intro.txt.html
https://t3x.org/klong/klong-ref.txt.html
On S9, well, it has Unix, Curses, sockets and so on support with an easy API. So it's damn easy to write something if you know Scheme/Ncurses and try stuff in seconds. You can complete the "Concrete Abstractions" book with it, and just adapt the graphic functions to create the (frame) one for SICP (and a few more).
And as we are doing compilers... with SICP you build everything from a register-machine simulator to a Scheme interpreter in Scheme itself.
I've been having a look at the Crenshaw series, and it seems pretty good, but one thing that kinda annoys me is the baked-in line wrapping. Is there a way to unwrap the text so it's not all in a small area on the left of my screen?
These days there's an even easier way to learn to write a compiler. Just ask Claude to write a simple compiler. Here's a simple C compiler (under 1500 lines) written by Claude: https://github.com/Rajeev-K/c-compiler It can compile and run C programs for sorting and searching. The code is very readable and very easy to understand.
*Donald Knute -> Donald Ervin Knuth is the author of the book "The Art of Computer Programming" (in progress for several decades; currently volume 4C is being written). It is quite advanced, and it will likely not cover compilers anymore (Addison-Wesley had commissioned a compiler book from Knuth when he was a doctoral candidate; now he is retired and has stated that his goal for the series has changed).
I disagree with the author's point: the "Dragon book"'s ("Compilers: Principles, Techniques, and Tools" by Aho et al.) Chapter 2 is a self-sufficient introduction into compilers from end to end, and it can be read on its own, ignoring the rest of the excellent book.
Another fantastic intro to compiler writing is the short little book "Compiler Construction" by Niklaus Wirth, which explains and contains the surprisingly short source code of a complete compiler (the whole book is highly understandable - pristine clarity, really), and all in <100 pages total (99).
(I learned enough from these two sources to write a compiler in high school.)