What similar changes have you seen that could compare to some extent to AI in the technology field?
Martin Fowler: It's the biggest I think in my career. I think if we looked back at the history of software development as a whole, the comparable thing would be the shift from assembly language to the very first high-level languages. The biggest part of it is the shift from determinism to non-determinism: suddenly you're working in an environment that's non-deterministic, which completely changes how you have to think about it.
What is your understanding and take on vibe coding?
Martin Fowler: I think it's good for explorations. It's good for throwaways, disposable stuff, but you don't want to be using it for anything that's going to have any long-term capability. When you're using vibe coding, you're actually removing a very important part of something, which is the learning loop.
What are some either new workflows or new software engineering approaches that you've kind of observed?
Narrator/Intro: Martin Fowler is a highly influential author and software engineer in domains like agile, software architecture, and refactoring. He is one of the authors of the Agile Manifesto in 2001, the author of the popular book Refactoring, and regularly publishes articles on software engineering on his blog.
In today's episode, we discuss how AI is changing software engineering, some interesting new software engineering approaches LLMs enable, why refactoring as a practice will probably get more relevant with AI coding tools, why design patterns seem to have gone out of style in the last decade, what the impact of AI is on agile practices, and much more. This podcast episode is presented by Statsig.
Martin, welcome to the podcast.
Martin Fowler: Well, thank you very much for having me. I didn't expect to be actually doing it face to face with you. That was rather nice.
It's all the better this way. I wanted to start with learning a little about how you got into software development, which was, what, 40-ish years ago?
Martin Fowler: Yeah. It was—yeah it would have been late 70s early 80s. Yeah. I mean like so many things it was kind of accidental really. At school I was clearly no good at writing because I got lousy marks for anything to do with writing.
Really?
Martin Fowler: Yeah. Oh absolutely. But I was quite good at mathematics and that kind of thing and physics. So, I kind of leaned towards engineering stuff and I was interested in electronics and things because the other thing is I'm hopeless with my hands. I can't do anything that requires strength or physical coordination.
So, all sorts of areas of engineering and building things—you know, I've tried looking after my car and I couldn't get the rusted nuts off or anything. It was hopeless. But electronics is okay because that's all more in the brain. You need to be able to handle a soldering iron, but that was about as much as I needed to do.
And then computers and it's easy. I don't even need a soldering iron. So, I kind of drifted into computers in that kind of way. And that was my route into software development. Before I went to university, I had a year working with the UK Atomic Energy Authority—or "ukulele" as we call it. And I did some programming in Fortran 4 and it seemed like a good thing to be able to do.
And then when I finished my degree, which was a mix of electronic engineering and computer science, I looked around and I thought, well, I could go into traditional engineering jobs, which weren't terribly well paid and weren't terribly high status, or I could go into computing where it looked like there was a lot more opportunity. And so I just drifted into computing. And this was before the internet took off.
What kind of jobs were there back then that you could get into? What was your first job?
Martin Fowler: Well, my first job was with a consulting company Coopers and Lybrand—or as I refer to them, Cheetum and Lightum. We were doing advice on information strategy in the particular group I was with although that wasn't my job. My job was I was one of the few people who knew Unix because I'd done Unix at college and so I looked after a bunch of workstations that they needed to run this weird software that they were running to help them do their strategy work. Then I got interested in what they were doing with their strategy work and kind of drifted into that. I look at it back now and think, god, that was a lot of snake oil involved. But hey, it was my route into the industry and it got me early into the world of object-oriented thinking and that was extremely useful to get into objects in the mid-80s.
And how did you get into like object-oriented? Back then in the mid-80s that was a very radical thing. And you said you were working at a consulting company which didn't seem like the most cutting edge. So how does two plus two get together? How did you get to do cutting edge stuff?
Martin Fowler: Because this little group was into cutting edge stuff and they had run into this guy who had some interesting ideas, some very good ideas as well as some slightly crazy ideas. And he packaged it up with the term object orientation, which wasn't really the case, but it was kind of, you know, it's part of the snake oil as it were. I mean, that's a little bit cruel to call it snake oil because he had some very good ideas as well. But that kind of led me into that direction and of course in time I've found out more about what object orientation was really about and that eventually led to my whole career in the next 10 or 15 years.
How did you make your way and eventually end up at Thoughtworks and also start to write some books and publish on the side? How did you go from someone who was brand new to the industry to starting to slowly become someone who was teaching others?
Martin Fowler: Well, here again bundles of accidents, right? So, while I was at that consulting company, I met another guy that they had brought in to help them work with this kind of area, an American guy who became the really the biggest mentor and influence upon my early career. His name is Jim Odell and he had been an early adopter of information engineering and had worked in that area.
He saw the good parts of these ideas that these folks were doing and he was an independent consultant and a teacher, so he spent a lot of his time doing work along those lines. I left Coopers and Lybrand after about a couple of years to actually join the crazy company which is called PEK. I was with them for a couple of years. It was a small company. There was a grand total of four of us in the UK office and that was the largest office in the company.
Wow.
Martin Fowler: Kind of thing. So I did—having seen a big company's craziness, I then saw a small company's craziness. Did that for a couple of years and then I was in a position to go independent and I did. Helped greatly by Jim Odell who fed me a lot of work basically, and also by some other work I got in the UK and that was great. I remember leaving PEK and thinking that's it, independence life for me. I'm never going to work for a company again.
Famous last words.
Martin Fowler: Exactly. And I carried on. I did well as an independent consultant throughout the '90s and during that time I wrote my first books. I moved to the United States in '93 and I was doing very well, very happily, and obviously there was the rise of the internet, lots of stuff going on in the late '90s.
It was a good time and I ran into this company called Thoughtworks and they were just a client. I would just go there and help them out. The story goes back a bit further: I had met Kent Beck and worked with Kent at Chrysler on the famous C3 project, which is kind of the birth project of extreme programming. So I'd worked on that, seen extreme programming, seen the agile thing.
So I'd got the object orientation stuff, I got the agile stuff, and then I came to Thoughtworks and they were tackling a big project. Still sizable, about 100 people working on the project. It was clearly going to crash and burn. But I was able to help them both see what was going on and how to avoid crashing and burning.
They invited me to join them and I thought, hey, you know, join a company again maybe for a couple of years. They're really nice people. They're my favorite client. You know, I always thought of it as other clients would say, "These are really good ideas, but they're really hard to implement." While Thoughtworks would say, "These are really good ideas. They're really hard to implement, but we'll give it a try." And they usually pulled it off. And so I thought, "Hey, with a client like that, I might as well join them for a little while and see what we can do." That was 25 years ago.
And then fast forward today, your title has been for over a decade, Chief Scientist.
Martin Fowler: Since I joined. That was my title at joining.
Since you joined. So I have to ask: what does a Chief Scientist at Thoughtworks do?
Martin Fowler: Well, it's important to remember I'm chief of nobody and I don't do any science. The title was given because that title was used a fair bit around that time for some kind of public-facing ideas kind of person. If I remember correctly, Grady Booch was Chief Scientist at Rational at the time actually. And there were other people who had that title. So it was a high-falutin, very pretentious title but they felt it was necessary.
It was weird because one of the things of Thoughtworks at that time was you could choose your own job title. Anybody could choose whatever job title they like. But I didn't get to choose mine. I had to take the Chief Scientist one. They didn't like titles like flagpole or battering ram or loudmouth, which is the one I most prefer.
One thing that Thoughtworks does every six months is the Thoughtworks Radar. Can you share a little bit of how Thoughtworks comes up with this technology radar? How do people at Thoughtworks stay this close to what is happening in the industry?
Martin Fowler: Okay. Yeah. Well, this will be a bit of a story. So, it started a bit over 10 years ago. Its origin was one of the things that we've really pushed at Thoughtworks is to have technical people, practitioners, really involved at various levels of running the business. One of the leaders of that was our former CTO Rebecca Parsons.
Rebecca became CTO and she said "I want an advisory board who will keep me connected with what's going on in projects." So she created this technology advisory board. She had me on the advisory board because I was very much a public face of a company. Originally that was just our brief. And then one of these meetings, Daryl Smith who was her TA (Technical Assistant) at the time, said "We've got all these projects going on, it would be good to get some picture of what kinds of technologies we're using and how useful they are."
He came up with this idea of the radar metaphor and the rings. It's a habit of ours: if we do something for internal purposes, we try to just make it public. We give away our secret sauce all the time.
Now the process has changed. Now we've created a process where people can submit "blips" (an entry). They brief the members of the "Doppler group." At the meeting we'll decide which of these blips to put on the radar and not. It's very much this bottom-up exercise. For me it's a bit weird because I'm so detached from the day-to-day these days that it's just this lineup of technologies and things I have no idea what most of them are, but interesting to hear about.
The radar is full of AI and LLM related things because this is a huge change. Looking back on your career, what similar changes have you seen that could compare to some extent to AI in the technology field?
Martin Fowler: It's the biggest I think for my career. I think if we looked back at the history of software development as a whole, the comparable thing would be the shift from assembly language to the very first high-level languages, which is before my time.
What was that shift like in terms of mindset? You really needed to know the internals of the hardware and the instructions.
Martin Fowler: I did very little assembly at university but it's been very useful because I never want to do it again. Things were very specific to individual chips. You had these very convoluted ways of doing even the simplest thing because your only instruction was something like move this value from a memory location to this register.
Even a relatively poor high-level language like Fortran 4, at least I can write things like conditional statements and loops. There's a definite shift of moving away from the hardware to thinking in terms of something a bit more abstract. You've got a degree of decoupling there.
With LLMs, it's a similar degree of mind shift. The shift is not so much of an increase of a level of abstraction—the biggest part of it is the shift from determinism to non-determinism. Suddenly you're working in an environment that's non-deterministic which completely changes how you have to think about it.
Can we talk about that shift in abstraction? Some say we have a new abstraction: the English language, which will generate the code. You're saying you don't think it's just an abstraction jump. Why?
Martin Fowler: I think the abstraction jump difference is smaller than the determinism/non-determinism jump. It's worth remembering one of the key things about high-level languages is the ability to create your own abstractions in that language.
An old Lisp adage is: "What you want to do is create your own language in Lisp and then solve your problem using the language that you've created." That way of thinking is a good way of thinking in any programming language. If you can balance those two nicely, that is what leads to very maintainable and flexible code.
AI helps us a little bit because we can build abstractions a bit more easily, but we have this problem: non-deterministic implementations of those abstractions. We've got to learn a whole new set of balancing tricks.
My colleague Unmesh Joshi has been writing about using the LLM to co-build an abstraction and then using the abstraction to talk more effectively to the LLM. There was a thing I read that talked about how if you describe chess matches to an LLM in plain English, it can't really understand how to play. But if you describe them in chess notation, then it can. By using a much more rigorous notation, you get more traction. That has great parallels with Domain Driven Design and Domain Specific Languages.
Is this the first time we're seeing a tool this widely used in software engineering that is non-deterministic?
Martin Fowler: It's a whole new way of thinking. Other forms of engineering think in terms of tolerances. My wife's a structural engineer; she always thinks in terms of: "What are the tolerances? How much extra stuff do I have to do beyond what the math tells me?" We need some of that thinking ourselves. What are the tolerances of the non-determinism that we have to deal with? We can't skate too close to the edge. I suspect we're going to have some noticeable crashes, particularly on the security side.
What are some either new workflows or new software engineering approaches that you've observed that sound exciting?
Martin Fowler: One area is being able to knock up a prototype in a matter of days. That's way more than you could have done previously. This is the "vibe coding" thing. For throwaway explorations, disposable little tools, and stuff by people who don't think of themselves as software developers, that's very valuable.
On the completely opposite end of the scale: helping to understand existing legacy systems. My colleagues take the code itself, do semantic analysis, populate a graph database, and use that in a RAG-like style to interrogate it: "Which bits of code touch this data as it flows through the program?" Incredibly effective. We put understanding of legacy systems into the "Adopt" ring of the radar.
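As a rough, invented sketch of the idea (not Thoughtworks' actual tooling): index which functions read or write each data field, then answer "which bits of code touch this data" from that index. A real system would derive the edges by semantic analysis of the codebase, store them in a graph database, and hand the matching subgraph to an LLM in a RAG style; all names below are hypothetical.

```javascript
// Toy code-knowledge graph: each edge records which function touches
// which data field, and how. A real pipeline would derive these edges
// by parsing the legacy codebase rather than hand-writing them.
const edges = [
  { fn: "loadCustomer",   field: "customer.balance", op: "read"  },
  { fn: "applyInterest",  field: "customer.balance", op: "write" },
  { fn: "printStatement", field: "customer.name",    op: "read"  },
];

// Retrieval step: given a question about a field, pull the relevant
// slice of the graph to supply to an LLM as context.
function touchesField(field) {
  return edges.filter(e => e.field === field).map(e => `${e.fn} (${e.op})`);
}
```

Asking `touchesField("customer.balance")` returns just the functions that read or write the balance, so the LLM only has to reason over that slice instead of the whole system.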
Can LLMs help us modify legacy code in a safe way?
Martin Fowler: It's still a question. James Lewis was playing with Cursor. He wanted to change the name of a class in a not-too-big program. It came back an hour and a half later and had used 10% of his monthly allocation of tokens, and all he's doing is changing the name of a class. We've had IDE functionality for that for 20 years with JetBrains ReSharper. So LLMs are not very efficient at that yet.
Another area that's up in the air is what happens when you've got a team of people. We will always want teams. The question is how do we best operate with AI in the team environment?
What is your take on "vibe coding"?
Martin Fowler: When I use the term vibe coding, I define it as: you don't look at the output code at all. Maybe take a glance at it out of curiosity, but you really don't care, and maybe you don't even know what you're doing because you've got no knowledge of programming.
It's good for throwaways, but you don't want to use it for anything long-term. As my colleague Unmesh wrote: when you're vibe coding, you're removing the learning loop. If you're not looking at the output, you're not learning. You cannot shortcut that process. When you produce something you didn't learn from, you don't know how to tweak and evolve it. All you can do is nuke it from orbit and start again.
I'm noticing it's so easy to give a prompt and get output, but you get tired of reviewing. How can people keep learning with these tools?
Martin Fowler: I am paying attention to Unmesh's approach of building a language to communicate to the LLM more precisely. Also, using it to understand unfamiliar environments. James Lewis was working with a game engine called Godot and C#. With an LLM, he can learn a bit about it and explore.
It's similar to when Stack Overflow appeared 10 or 15 years ago. People mindlessly copied and pasted snippets. As you get more experienced, you tell junior engineers: "You need to understand why it works." We've been here before, but now it's boosted and on steroids. If you don't care about the craft and understand the LLM's output, you'll eventually be no better than someone prompting it mindlessly.
You were just talking about the importance of testing when working with LLMs. One of the people I particularly focus on is Simon Willison—something he stresses constantly is the importance of tests. What's your take?
Martin Fowler: You've got to really focus a lot on making sure that the tests work together. And of course, this is where the LLMs struggle because you tell them to do the tests and they tell you "Everything's fine," then you run npm test and get five failures. They do lie to you all the time. In fact, if they were truly a junior developer, I would be having some words with HR.
I just had a weird experience where I told an LLM to add a configuration blob to a JSON file and add the current date. It just copied the last date. I said "That is not today's date." It said "I'm so sorry," then put yesterday's date. Even for the simplest things, as a professional, you should not trust. Don't trust, but do verify.
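The "don't trust, but do verify" stance can be mechanized: rather than accepting the model's claim that it used today's date, assert it against the real clock. A minimal sketch, with an invented config shape for illustration:

```javascript
// Hypothetical LLM-generated config blob whose `createdAt` field the
// model was asked to set to today's date. Here we simulate a correct
// answer; in practice this value would come back from the model.
const generatedConfig = {
  name: "example-blob",
  createdAt: new Date().toISOString().slice(0, 10), // "YYYY-MM-DD"
};

// Verify the claim instead of trusting it.
function isToday(isoDate) {
  return isoDate === new Date().toISOString().slice(0, 10);
}

if (!isToday(generatedConfig.createdAt)) {
  throw new Error(`Expected today's date, got ${generatedConfig.createdAt}`);
}
```

The same pattern generalizes: any property the model asserts about its output ("the tests pass", "the date is current") should be checked by a deterministic assertion you run yourself.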
One interesting area is spec-driven development. What if we define what we want it to do and give it a really good specification? Do you have déjà vu from waterfall development?
Martin Fowler: The similarity to waterfall is where people try and create a large amount of spec and not pay much attention to the code. To me, the key thing is you want to avoid that. It's got to be: do the smallest amount of spec you can possibly get to make some forward progress. Cycle with that, build it, get it tested, get it in production. What matters is the tight loops and thin slices.
Can we craft some kind of more rigorous spec to talk about? We still want the ubiquitous language notion—that it's the same language in our head as is in the code. We're seeing the same names. The structure is parallel, but obviously the way we think is a bit more flexible than the way code can be.
This must be especially important in enterprises where developers are not the majority of people.
Martin Fowler: That is the world I'm most familiar with. The corporate enterprise world is a whole different kettle of fish. Suddenly software developers are a small part of the picture. There are very complex business things, regulation, and a much worse legacy-system problem.
Banks tend to be more technologically advanced than most other corporations—retailers, airlines, and government agencies. I was chatting with folks at the Federal Reserve in Boston; they have to be extremely cautious. They are not allowed to touch LLMs at the moment because the consequences of error in a major government banking organization are pretty damn serious. So you've got to be really, really careful about that kind of stuff. Their constraints are very different, and it brought to mind an adage: to understand how a software development organization works, you have to look at the core business of the organization and see what they do.
Martin Fowler: I was at this agile conference for the Federal Reserve in Boston and they took me on a tour of where they handle the money. I saw the places where they bring in the notes, clean them, count them, and send them out again. The degree of care and control is strenuous. You look at that and say, "Yep, I can see why in the software development side that mindset percolates."
A lot of corporations have that similar notion. If you're involved in an airline, you are really concerned about safety and getting people to their destination. That affects your whole way of thinking.
We always see a divide in technology usage. Startups have everything to gain and nothing to lose, whereas large enterprises have a different risk tolerance. But AI seems to be everywhere rapidly. Are even the most risk-averse organizations already evaluating it?
Martin Fowler: Oh, it is. We see it all over the place, but with more caution in the enterprise world where they say, "Yeah, we also see the dangers here."
The important thing to remember with these big enterprises is they are not monolithic. Small portions can be very adventurous and other portions can be extremely not so. The variation within an enterprise often is bigger than the variation between enterprises.
LLMs are very good at refactoring. You wrote the book "Refactoring" in 1999 and refreshed it 20 years later. Can you bring us back to the environment of 1999 and the impact of the first edition?
Martin Fowler: I first came across refactoring at Chrysler working with Kent Beck. In my hotel room in Detroit, he showed me how he would refactor Smalltalk code. I've always cared a lot about something being comprehensible, but what he was doing was taking these tiny little steps. I was astonished at how small each step was, but because they were small, they didn't go wrong and they would compose beautifully.
Kent was focused on the first Extreme Programming book, so I thought, "Well, I'm going to write the refactoring book then." Whenever I was refactoring something, I would write careful notes for myself on how to extract a method without screwing it up. I did it in Java because Smalltalk was dying and Java was the "only programming language we'd ever need" in the late 90s.
The impact was that refactoring became a word. It got misused—people use "refactoring" to mean any change to a program—but refactoring is strictly these very small behavior-preserving changes. Tiny, tiny steps.
What made you do a second edition in 2019?
Martin Fowler: A sense of wanting to refresh it. Core ideas were sound, but Java from the late 1990s shows its age. I decided to switch to JavaScript to reach a broader audience and allow a less object-oriented centered way of describing things—"extract function" instead of "extract method."
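To make the "extract function" move concrete, here is a tiny invented JavaScript example in the spirit of the book's catalog (not taken from it): the behavior is identical before and after; only the structure changes, in one small step.

```javascript
// Before: a billing calculation buried inline in a reporting function.
function printInvoice(invoice) {
  let total = 0;
  for (const line of invoice.lines) {
    total += line.quantity * line.unitPrice;
  }
  return `Total: ${total}`;
}

// After: the loop extracted into its own named function.
// Behavior is unchanged; the intent now has a name that can be reused.
function calculateTotal(invoice) {
  let total = 0;
  for (const line of invoice.lines) {
    total += line.quantity * line.unitPrice;
  }
  return total;
}

function printInvoiceRefactored(invoice) {
  return `Total: ${calculateTotal(invoice)}`;
}
```

Because each step is this small and behavior-preserving, it is easy to verify with existing tests before composing it with the next step.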
With AI tools generating a lot more code faster, how do you think the value of refactoring thinking is going to change?
Martin Fowler: I expect it to be increasingly important. If you're going to produce a lot of code of questionable quality, but it works, then refactoring is the way to get it into a better state while keeping it working.
These tools at the moment cannot definitely refactor on their own. The refactoring mindset—boiling changes down to small steps that compose easily—is the trick. It provides a way for us to control the tool. Using the LLM as a starting point to drive a deterministic tool is where there's some interesting interplay.
In the early 2000s, everyone talked about patterns and software architecture. Since 2010, it seems technologists stopped talking about them. Why did that happen?
Martin Fowler: With patterns, you're trying to create a vocabulary to talk more effectively. Just like the medical world has jargon, we need nouns to describe alternatives and options. It's a shame the wind has gone out of the sails. Perhaps people were overusing them like pinning medals on a chest.
Grady Booch suggested that "Cloud" happened. Hyperscalers like AWS built well-architected building blocks (like DynamoDB), so you didn't need to reinvent the architecture anymore. You just use the blocks.
Martin Fowler: Yeah, but I suspect there's still patterns of using these things. In larger organizations, the lingo is baked in. It can take you several years just to figure out what the hell's going on because you have to learn all these interconnecting systems.
I remember chatting with someone at an established bank who joined from a startup. He said, "Now I've been here three years, I think I can understand the problem." It takes that long because it’s not a logical system—it’s built by humans over decades.
You were part of the 17 people who created the Agile Manifesto in 2001. What was the story there?
Martin Fowler: It started about a year before at a meeting Kent Beck ran. We discussed if Extreme Programming should be narrow or broad. That led to getting people from different groups together in Utah.
I don't remember much of the meeting itself. I curse myself for not writing a journal. I remember Bob Martin being insistent on making a "manifesto." I thought, "Oh well, the manifesto itself will be completely useless and ignored, but the exercise of writing it will be interesting."
And then it made a massive impact.
Martin Fowler: It was a shock. It gets misinterpreted, but it had an impact. In 2000, clients didn't want to work the way we wanted. They wanted a big plan over five years, two years of design, then implementation, then testing. Our notion was: "We'd like to do that entire process for a subset of requirements in one month, please." Agile made the world safe for people who wanted to work that way. We've made material progress, though it is still a pale shadow of what we originally wanted.
Does Agile work with AI? Will we see even shorter increments?
Martin Fowler: I still feel that building things in thin slices with human review is the way to bet. Improving the frequency is what we need to do. Speed up that cycle time. Look for ways to get ideas from running code from two weeks down to one week.
Boris from the Claude team shared how he did 20 prototypes of a feature in two days using AI. To me, this was like—wow. If you told me that before, I would have said it takes two weeks. It comes back to tightening feedback loops so we are able to learn.
How do you personally learn about AI and keep up to date?
Martin Fowler: The main way I learn is by working with people who are writing articles for my site. I'm not the best person to write this stuff because I'm not doing day-to-day production work. I learn through the editing process.
I also look for sources I trust. Birgitta Böckeler, Simon Willison, and Kent Beck.
How do you identify a "good source" of information?
Martin Fowler: A lack of certainty is a good thing. When people tell me "I know the answer to this," I'm suspicious. I like when people say "This is what I understand at the moment, but it's fairly unclear."
I look for someone who explores nuances. If someone says "Always use microservices" or "Never use microservices," both arguments are discounted. It’s when they say, "These are the factors and trade-offs you should consider," that my confidence increases.
What is your advice for junior software engineers starting out today?
Martin Fowler: You have to be using AI tools, but the hard part is you don't know if the output is good. Find good senior engineers to mentor you. A good experienced mentor is worth their weight in gold.
AI is handy, but it's gullible and likely to lie. Ask it: "What is leading you to say that? What are your sources?" AI is just regurgitating the internet—the question is, did it see the good stuff or the crap?
How do you feel about the tech industry in general right now?
Martin Fowler: Long-term, I'm positive. Demand is more than we can imagine. Short-term, it's strange. We are in a depression regarding jobs—hundreds of thousands of layoffs. The end of zero interest rates is the big thing that hit us, not AI.
We have this weird mix of no investment/depression in the software industry with an AI bubble going on at the same time. But the core skills of being a good developer aren't about writing code—it's about understanding what to write, which is communication with users.
Rapid Fire Questions
Favorite programming language and why?
Martin Fowler: At the moment, Ruby, because I’m so familiar with it. But my love is Smalltalk. There was nothing as much fun as programming in Smalltalk in the '90s.
What are one or two books you would recommend?
Martin Fowler:
Thinking, Fast and Slow by Daniel Kahneman. It gives you an intuition about probability and statistics, which is vital for software and life.
The Power Broker by Robert Caro. It's about Robert Moses and how power works in a democratic society. It's 1,200 pages, but magnificent writing.
A board game recommendation?
Martin Fowler: Concordia. It’s fairly abstract, easy to get into, but has a lot of richness in decision-making.
Martin, thank you so much.
Martin Fowler: Thank you. It worked out really well.
[Closing Remarks]
One of the things that really stuck with me is how the single biggest change with AI is going from deterministic systems to non-deterministic ones. The problem with vibe coding is that when you stop paying attention to the code, you stop learning. Be mindful of that trade-off.