The Future of Electric Motor Factories: A Drucker Perspective

The essence of management lies not in predicting trends or piling up technology, but in asking the right questions: What is the purpose of an electric motor factory? How does it organize people, resources, and processes in a knowledge society to create and keep customers? How does it make knowledge workers more effective, turning ordinary operations into extraordinary results?

In the future—by 2030 and beyond—electric motor factories will no longer be mechanical assembly lines. Their true purpose is to deliver results: precise, controllable rotary motion under extreme constraints of size, efficiency, precision, and reliability.

Delivering that result amplifies customers' capabilities, making end products smaller, lighter, more efficient, and smarter, which in turn reduces lifecycle costs, accelerates innovation, and serves broader societal goals: healthier lives, cleaner mobility, smarter production, and more humane interactions.

The factory's business will shift from "selling hardware" to "selling quantifiable results plus services," making it a partner in its customers' visibility and organizational transparency.

Digitization of outcomes—making uptime, efficiency, and precision more visible, certain, and measurable via IoT, AI, and data visualization—will be the core engine.


n5321 | February 28, 2026, 23:17

Understanding Prompt Engineering from Drucker's Management Perspective

Prompt engineering is often treated as a bag of technical tricks; Claude is now even shipping SKILL templates. People obsess over keywords, templates, sentence patterns, and assorted prompting hacks. But study those templates deeply and you discover that prompt engineering is, at its core, consistent with the operations of management!

In The Practice of Management, Peter F. Drucker laid out the management framework: the specific work of a manager is to set objectives, organize resources, motivate and communicate, measure performance, and develop people.

The two frameworks follow almost exactly the same pattern!

Analyzing the underlying problem model confirms the consistency even further.

The essence of management is shaping predictable results amid uncertainty. When we write prompts, we face a complex, probability-driven language model. We cannot directly control each of its outputs; we can only steer it with carefully designed prompts.

Borrow management's framework and prompt engineering stops being a pile of tricks: it becomes the design of a system that reliably produces results.

  1. Everything starts with setting objectives, the first task in Drucker's framework. An objective is not a vague wish but a clear, measurable direction. Consider a weak prompt: "Write an article." It lacks specificity, so the output can only be uncertain and unlikely to match expectations. A strong prompt, by contrast, defines the target reader, length, structure, and style, even the criteria for success.

    These elements do more than express an idea: they reduce uncertainty, or, put another way, increase convergence. Without clear objectives, an organization sinks into busy but fruitless work; likewise, a prompt without objectives lets the model's output diverge from what you expect.

  2. Next comes the design of organizational structure. In corporate management this means division of labor, task decomposition, and matching responsibility to roles, all to reduce chaos. Applied to prompt engineering, it means stepwise execution, explicit roles, supplied context, and defined boundaries. An effective prompt might read: "First analyze the problem, then present a framework, finally write the full text." That is not a casual writing tip but deliberate structural engineering. These constraints reduce the model's degrees of freedom and make its output more ordered and more reliable. Management tames complexity through organizing; so does prompt engineering: it builds dikes in a sea of probability.

  3. Clarity of communication is what keeps output stable. In an organization, vague wording leads to execution drift; in an AI model, ambiguous instructions amplify the randomness of probabilistic completion. The model does not "make mistakes"; it merely fills the space you left undefined. Writing a prompt is therefore an exercise in defining boundaries: use precise language to mark out the territory and eliminate ambiguity. Clarity is not a courtesy; it is the core control mechanism that turns potential chaos into dependable results.

  4. Measurement is another indispensable element. In prompt engineering, if word limits, structural requirements, style norms, and judgment criteria are missing, there is no way to assess output quality. The measurement mechanism is itself an invisible control tool. As prompt engineering matures, evaluation will become standard practice, helping us quantify the room for improvement.

  5. Management is never one-and-done; it is a continuous feedback loop: set objectives, execute, measure, adjust. Prompt engineering works the same way. The first output is rarely the final version; effective users iterate by tightening constraints, restructuring, and sharpening wording. This is not "correcting" the model; it is managing an intelligent system so that each loop lands closer to the ideal result. The deeper insight: an organization is a complex human system, and an AI model is a complex probabilistic system. Neither is fully predictable, yet both are acutely sensitive to structure.

The goal of management is not to eliminate uncertainty but to design frameworks that produce stable results within it. Prompt engineering does exactly that: it injects order into the AI "black box." What does this shift in perspective mean? If you treat prompt engineering as a bag of tricks, you will chase templates and hacks endlessly. If you treat it as management, you will ask yourself: Is my objective clear? Is the structure sound? Is the evaluation explicit? That is a manager's mindset rather than a passive user's, and it moves you from plain input-output toward system design.

Today we face AI models, but the core question is unchanged: how do you reliably produce results in a complex system? The answer is still the same: goal orientation, structural design, clear communication, rigorous measurement, and continuous iteration. Seen this way, prompt engineering is no longer just writing prompts; it is building an effective system for results.
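As a sketch of the idea, the snippet below treats a prompt the way a manager writes a brief: objective, structure, constraints, and success criteria are explicit fields rather than ad-hoc phrasing. The function and field names are my own illustration, not any library's API:

```python
# A minimal sketch of "prompt as management": every element that reduces
# uncertainty (objective, audience, structure, constraints, success criteria)
# is stated explicitly. All names here are illustrative.

def build_managed_prompt(objective, audience, structure, constraints, success_criteria):
    """Assemble a prompt like a manager's brief: goal first,
    then organization, then how the result will be measured."""
    lines = [
        f"Objective: {objective}",
        f"Audience: {audience}",
        "Structure:",
        *[f"  {i + 1}. {step}" for i, step in enumerate(structure)],
        "Constraints: " + "; ".join(constraints),
        "Success criteria: " + "; ".join(success_criteria),
    ]
    return "\n".join(lines)

prompt = build_managed_prompt(
    objective="Explain prompt engineering as a management discipline",
    audience="engineering managers new to LLMs",
    structure=["analyze the problem", "give a framework", "write the full text"],
    constraints=["800-1000 words", "plain business prose"],
    success_criteria=["each Drucker task mapped to a prompt element"],
)
print(prompt)
```

Iterating on such a prompt then means editing one labeled field at a time, which is exactly the measure-and-adjust loop described above.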


n5321 | February 28, 2026, 17:19

A practical guide to OpenAI prompt generation

So, you’ve started playing around with OpenAI. You’ve seen moments of brilliance, but you’ve probably also felt that flicker of frustration. One minute, it’s writing flawless code; the next, it’s giving you a completely generic answer to a customer question. If you’re finding it hard to get consistent, high-quality results, you're definitely not alone. The secret isn't just what you ask, but how you ask it.

This is where OpenAI Prompt Generation comes into play. It's all about crafting instructions that are so clear and packed with context that the AI has no choice but to give you exactly what you need.

In this guide, we'll walk through the pieces of a great prompt, look at the journey from writing prompts by hand to using automated tools, and show you how to put these ideas to work in a real business setting.

What is OpenAI Prompt Generation?

OpenAI Prompt Generation is the art of creating detailed instructions (prompts) to get Large Language Models (LLMs) like GPT-4 to do a specific job correctly. It’s a lot more than just asking a simple question. Think of it less like a casual chat and more like giving a detailed brief to a super-smart assistant who takes everything you say very, very literally.

The better your brief, the better the result. This whole process has a few stages of complexity:

  • Basic Prompting: This is what most of us do naturally. We type a question or command into a chat box. It works fine for simple things but doesn't quite cut it for more complex business needs.

  • Prompt Engineering: This is the hands-on craft of tweaking prompts through trial and error. It means adjusting your wording, adding examples, and structuring your instructions to get a better answer from the AI.

  • Automated Prompt Generation: This is the next step up, where you use AI itself (through something called meta-prompts) or specialized tools to create and fine-tune prompts for you.

Getting this right is how you actually get your money's worth from AI. When prompts are fuzzy, the results are all over the place, which costs you time and money. When they’re well-designed, you get predictable, quality outputs that can genuinely handle parts of your workload.

The core components of effective OpenAI Prompt Generation

The best prompts aren't just one sentence; they're more like a recipe with a few key ingredients. Based on what folks at OpenAI and Microsoft recommend, a solid prompt usually has these parts.

Instructions: Telling the AI what to do

This is the core of your prompt, the specific task you want the AI to tackle. The most common mistake here is being too vague. You have to be specific, clear, and leave no room for misinterpretation.

For instance, instead of saying: "Help the customer."

Try something like: "Read the customer's support ticket, figure out the main cause of their billing problem, and write out a step-by-step solution for them."

The second instruction is crystal clear. It tells the AI exactly what to look for and what the final answer should look like.

Context: Giving the AI the background info

This is the information the AI needs to actually do its job. A standard LLM has no idea about your company’s internal docs or your specific customer history. You have to provide that yourself. This context could be the text from a support ticket, a relevant article from your help center, or a user's account details.

The problem is that this information is usually scattered everywhere, hiding in your helpdesk, a Confluence page, random Google Docs, and old Slack threads. Manually grabbing all that context for every single question is pretty much impossible. This is where a tool that connects all your knowledge can be a huge help. For example, eesel AI solves this by securely connecting to all your company's apps. It brings all your knowledge together so the AI always has the right information ready to go, without you having to dig for it.

eesel AI connects to all your company's apps.

Examples: Showing the AI what "good" looks like (few-shot learning)

Few-shot learning is a seriously powerful technique. It just means giving the AI a few examples of inputs and desired outputs right inside the prompt. It’s like showing a new team member a few perfectly handled support tickets before they start. This helps guide the model’s behavior without having to do any expensive, time-consuming fine-tuning.
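Picking examples by hand maps naturally onto the chat-message format. Below is a minimal sketch of assembling a few-shot prompt; the tickets and policies are invented, and the role/content message shape is the common chat-completions convention:

```python
# Few-shot prompting sketch: past input/output pairs go into the message
# list ahead of the real question, so the model imitates the pattern.
# The example tickets and policy numbers below are made up.

def few_shot_messages(system, examples, question):
    msgs = [{"role": "system", "content": system}]
    for user_text, ideal_answer in examples:
        msgs.append({"role": "user", "content": user_text})
        msgs.append({"role": "assistant", "content": ideal_answer})
    msgs.append({"role": "user", "content": question})
    return msgs

messages = few_shot_messages(
    system="You are a support agent. Answer briefly and cite the relevant policy.",
    examples=[
        ("I was charged twice this month.",
         "Sorry about that! Duplicate charges are refunded within 5 days (billing policy 2.1)."),
        ("How do I reset my password?",
         "Use the 'Forgot password' link on the login page (account policy 1.4)."),
    ],
    question="My invoice shows the wrong company name.",
)
# `messages` is now ready to pass to a chat-completion API call.
```

The two worked examples do the teaching: the model sees the tone, the brevity, and the policy-citation habit before it ever sees the new ticket.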

Picking out a few good examples yourself is a great start. But what if an AI could learn from all of your team's best work? That's taking the idea to a whole new level. eesel AI can automatically analyze thousands of your past support conversations to learn your brand's unique voice and common solutions. It’s like giving your AI agent a perfect memory of every great customer interaction you've ever had.

Cues and formatting: Guiding the final output

Finally, you can steer the AI's response by using simple formatting. Using Markdown (like # for headings), XML tags (like ``), or even just starting the response for it ("Here’s a quick summary:") can nudge the model to give you a structured, predictable output. This is incredibly handy for getting answers in a specific format, like JSON for an API or a clean, bulleted list for a support agent.
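As a sketch of this idea, the snippet below pins the output to an exact JSON shape and validates whatever comes back. The reply string is a stand-in for a model response, not real API output:

```python
import json

# Output-formatting cue sketch: the prompt demands one exact JSON shape,
# and the caller fails loudly if the model drifts from it.

FORMAT_CUE = (
    "Reply with ONLY a JSON object of the form "
    '{"category": <string>, "urgent": <bool>, "summary": <string>} '
    "and nothing else."
)

def parse_reply(reply_text):
    """Validate that the reply matches the requested shape."""
    data = json.loads(reply_text)
    assert set(data) == {"category", "urgent", "summary"}, "unexpected keys"
    return data

# Stand-in for what a model might return when given FORMAT_CUE:
fake_reply = '{"category": "billing", "urgent": true, "summary": "Double charge on invoice."}'
ticket = parse_reply(fake_reply)
print(ticket["category"])  # billing
```

The validation step matters as much as the cue: a format request without a parser that rejects drift is just a polite suggestion.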

The evolution of OpenAI Prompt Generation: From manual art to automated science

Prompt generation isn't a single thing, it's more of a journey. Most teams go through a few stages as they get better at AI automation.

Level 1: Manual OpenAI Prompt Generation

This is where everyone begins. A person, usually a developer or someone on the technical side, sits down with a tool like the OpenAI Playground and fiddles with prompts. It’s a cycle of writing, testing, and tweaking.

The catch? It’s slow, requires a ton of specific knowledge, and just doesn't scale. A prompt that works perfectly in a testing environment is completely disconnected from the real-world business workflows where it needs to be used.

Level 2: Using prompt generator tools

Next up, teams often find simple prompt generator tools. These are usually web forms where you plug in variables like the task, tone, and format, and it spits out a structured prompt for you.

They can be useful for one-off tasks, like drafting a marketing email. But they're not built for business automation because they can't pull in live, dynamic information. The prompt is just a fixed block of text, it can't connect to your company's data or actually do anything.

Level 3: Advanced prompt generation with meta-prompts

This is where things get really clever. A "meta-prompt," as OpenAI's own documentation explains, is an instruction you give to one AI to make it create a prompt for another AI. You're essentially using AI to build AI. It’s the magic behind the "Generate" button in the OpenAI Playground that can whip up a surprisingly good prompt from a simple description.
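A meta-prompt can be as simple as a template. The wording below is my own illustrative sketch, not OpenAI's actual meta-prompt:

```python
# Meta-prompt sketch: an instruction given to one model asking it to WRITE
# a prompt for another model. The template text is illustrative only.

META_PROMPT_TEMPLATE = """You are an expert prompt engineer.
Write a detailed prompt for a language model that will: {task}.
The prompt you produce must include:
- a role for the model,
- an explicit output format,
- at least two worked examples,
- instructions for what to do when unsure.
Return only the prompt text."""

def make_meta_prompt(task):
    return META_PROMPT_TEMPLATE.format(task=task)

print(make_meta_prompt("triage incoming support tickets by urgency"))
```

Sending that text to a capable model yields a candidate prompt you can then test and refine, which is exactly what the Playground's "Generate" button automates.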

But even this has its limits. At its core, it's still a tool for developers. The great prompt it creates is still separate from your helpdesk, your knowledge base, and your team's daily grind. You still have to figure out how to get that prompt into your systems and connect it to your data.

The next step: Integrated AI platforms

The real goal isn't just to generate a block of text, it's to build an automated workflow. This is where you graduate from a prompt generator to a true workflow engine. The prompt becomes the "brain" of an AI agent that can access your company's knowledge, look up live data, and is allowed to take action, like tagging a ticket or escalating an issue.

This is exactly how eesel AI works. Our platform lets you set up your AI agent’s personality, knowledge sources, and abilities through a simple interface. You’re not just writing a prompt in a text box; you’re building a digital team member that works right inside your existing tools like Zendesk, with no complex coding needed.

With eesel AI, you can build a digital team member by setting up its personality, knowledge, and abilities through a simple interface, moving beyond simple OpenAI Prompt Generation.

The business impact: Understanding the costs of OpenAI Prompt Generation

While writing prompts can feel like a technical chore, its impact is all about the money. According to OpenAI's API pricing, you pay for both the "input" tokens (your prompt) and the "output" tokens (the AI's answer). This means every time you send a long, poorly written prompt, it costs you more money. Good prompt engineering is also about keeping costs down.
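To see how prompt length feeds the bill, here is a back-of-the-envelope sketch. The per-million-token prices are placeholder numbers for illustration, not OpenAI's current rates:

```python
# Rough cost sketch: input and output tokens are billed at different rates.
# The prices below are PLACEHOLDERS, not real OpenAI pricing.

def request_cost(input_tokens, output_tokens,
                 price_in_per_m=2.50, price_out_per_m=10.00):
    """Dollar cost of one request at per-million-token rates."""
    return (input_tokens * price_in_per_m + output_tokens * price_out_per_m) / 1_000_000

# A bloated 2,000-token prompt vs a tight 400-token prompt, same 300-token answer:
bloated = request_cost(2000, 300)
tight = request_cost(400, 300)
print(f"bloated: ${bloated:.4f}  tight: ${tight:.4f}")
```

At these placeholder rates the difference per request looks tiny, but multiplied across 100,000 requests the bloated prompt costs hundreds of dollars more for identical answers.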

OpenAI does have a feature called prompt caching that can help with speed and cost for prompts you use over and over. But it doesn’t fix the main issue of unpredictable usage, which can lead to some nasty surprise bills.

This is why "per-resolution" pricing models from many AI vendors can be so tricky. They lead to unpredictable costs that go up when you're busiest. With eesel AI’s pricing, you get clear, predictable plans based on a set number of monthly AI interactions. You’re in complete control of your budget, with no hidden fees, even if your support ticket volume suddenly doubles.

eesel AI’s pricing provides clear, predictable plans, giving you control over your budget for OpenAI Prompt Generation.

Go beyond the playground

The OpenAI Playground is a great place to experiment, but businesses need something reliable, scalable, and plugged into their day-to-day work. The final step is to move from a "prompt generator" to a full "workflow engine."

That's why having a safe place to test things out is so important. With eesel AI, you can run a powerful simulation using thousands of your past support tickets. You can see exactly how your AI agent will behave, check its responses, and get accurate predictions on how many issues it will solve and how much you'll save, all before it ever talks to a real customer. This lets you build and launch with total confidence.

The eesel AI platform allows you to run powerful simulations to test your OpenAI Prompt Generation against historical data before deployment.

Stop generating prompts, start building agents

Effective OpenAI Prompt Generation is structured, full of context, and always improving. While tinkering by hand and using simple tools are fine for small tasks, the real value for your business comes from weaving this intelligence directly into your workflows.

The goal isn't just to create better text. It's to automate repetitive tasks, give your team instant access to information, and deliver better, faster results for your customers. It's time to move beyond just writing prompts and start building intelligent agents that actually get work done.

Ready to see how easy it can be to build a powerful AI agent without touching a line of code? Set up your AI agent with eesel AI in minutes and see how our platform turns the complex world of prompt generation into a simple, straightforward experience.


n5321 | February 28, 2026, 09:06

The Origin of AI

It all really began in the 1950s. Computers were only a few years old when a group of young men (Alan Turing, John McCarthy, Marvin Minsky, Allen Newell, Herbert Simon) hit on a bold idea: could a machine be built that exhibits "intelligence"? In 1956 they held a summer workshop at Dartmouth College and coined the term "artificial intelligence" on the spot. The optimism was wildly excessive: some claimed all the problems of intelligence would be solved within 20 years. They believed that if you loaded logic, search, and symbol manipulation into a computer, you could simulate human thought.

So the first AI boom (1956–1973) began, and the results were genuinely impressive: the Logic Theorist proved mathematical theorems, the General Problem Solver tackled puzzles, the robot Shakey could wander around a room, and the chat program ELIZA managed to fool people into thinking it understood psychology. But the problems arrived quickly. These systems only worked in very narrow, highly structured toy worlds; faced with the real world's complexity, ambiguity, and uncertainty, they stalled. Computing resources could not keep up either: memory was tiny and machines were slow. In 1973 the UK's Lighthill report came down like a hammer, declaring that AI was basically going nowhere. Funding in the US and UK was slashed, and the first "AI winter" began.

From the 1970s into the early 1980s, AI kept a low profile, but it never completely died. Some researchers quietly turned to expert systems. The idea was simple: stop trying to make the machine "think" for itself, and instead encode human experts' knowledge, rule by rule, into if-then rule bases. MYCIN diagnosed bacterial infections, DENDRAL analyzed chemical molecules, and XCON configured computer orders for DEC. These systems really did make money and do useful work in certain domains. In the mid-1980s, Japan's Fifth Generation Computer project and DARPA's Strategic Computing Initiative poured money back in, and expert-system companies sprang up like mushrooms after rain.

The good times did not last. By the late 1980s everyone had discovered that no matter how many rules you wrote, you could never cover all the real world's exceptions; maintaining the knowledge bases was absurdly expensive; and the systems collapsed whenever a new situation appeared. Worst of all, the experts themselves often could not explain why they judged as they did. So the second AI winter arrived (1987–1993). Funding retreated, companies folded, and many people assumed AI was finished for good.

But beneath that winter, a few undercurrents were quietly flowing. One was neural networks. They actually dated back to the 1950s (the perceptron), but after Minsky and Papert's 1969 book Perceptrons tore single-layer networks apart, nearly everyone wrote them off. In the mid-1980s Rumelhart, Hinton, and Williams reinvented backpropagation, and multi-layer networks began to revive. Yann LeCun built convolutional neural networks that could recognize handwritten digits. But compute and data were still insufficient, and the consensus remained that neural networks were too slow and too much of a black box. The other undercurrent was probabilistic methods and machine learning: Judea Pearl's Bayesian networks, statistical learning theory, and support vector machines began to emerge. Unlike the stubbornness of symbolic AI, these approaches admitted that the world is uncertain and were willing to learn from data.

Then in 1997 something happened that made a lot of people look up again: IBM's Deep Blue beat world chess champion Kasparov. That was not a neural network; it was brute-force search plus hand-crafted features plus an evaluation function. But it told everyone that a specialized system really can surpass humans at a specific task.

The real turning point came later, in the early 2010s. GPUs suddenly made training deep neural networks feasible; the large-scale ImageNet dataset was released; and AlexNet (2012) cut the error rate in the image-recognition competition by more than half. People suddenly realized that with enough data, enough compute, and enough layers, neural network performance could soar. This was the "deep learning revolution." Since then AI has entered its third spring, an unprecedented boom:

  • 2014–2016: generative adversarial networks (GANs); AlphaGo defeats Lee Sedol (2016)

  • 2017: the Transformer architecture arrives ("Attention Is All You Need")

  • 2018–2022: BERT and the GPT series completely upend natural language processing

  • 2022–present: ChatGPT, GPT-4, Gemini, Claude, Llama, the o1 series... large language models make chatting, coding, and reasoning astonishing even to ordinary people

But I have to be honest with you (I have seen too many hype cycles): what we have now, powerful as it looks, is still essentially a set of extremely capable statistical pattern matchers. They have learned to imitate human language, images, and code from oceans of data, but they do not truly understand meaning, have no solid common sense, hold no causal model of the world, break easily on edge cases, and will hallucinate with a straight face. So the history of AI is really a history of humans repeatedly revising the definition of "intelligence." First we thought intelligence was logical reasoning; then expert knowledge; then pattern recognition and large-scale statistics. Now many people say that perhaps intelligence is just pattern matching at sufficient scale. But I keep feeling something is still missing: perhaps analogy, perhaps abstraction, perhaps a real grasp of "meaning." Let's not rush to declare victory, and let's not rush to declare defeat. This history teaches that every big breakthrough comes with enormous hype and a later sobering-up, and every winter stores energy for the next spring. Look: we are standing at another high point. But the real question is not whether machines will surpass us; it is how much we humans, in trying to build something "like us," have learned about ourselves.



n5321 | February 28, 2026, 01:07

Prompt Value

Walter G. Vincenti, an aeronautical engineer who spent 17 years at NASA and would later become a member of the U.S. National Academy of Engineering, moved to Stanford in 1957 to teach and rebuild its aeronautical engineering program. Vincenti was a heavyweight of the engineering world.

In 1970, after lunch one day, his economics colleague Nathan Rosenberg asked him: "What is it you engineers really do?"

The question stumped the great man! The fish is the last to know water: engineers may understand little about engineering in its philosophical or sociological sense. In the words of computing heavyweight Leslie Lamport, the point is even sharper: "If you think you know something but don't write it down, you only think you know it." If you cannot write it down clearly, you only imagine that you know.

So Vincenti turned to the philosophy of technology, and twenty years later published What Engineers Know and How They Know It.

The question of what engineers actually do was quietly swapped out, replaced with "What engineers do, however, depends on what they know."

Swapped without analysis, without argument, substituted as if it were an axiom!

Engineers want to know. They want to know!

In the AI era, large language models (LLMs) claim to have abstracted, or condensed, human knowledge. In practice, what an LLM "knows" depends on how we extract and steer it through prompts, and that is exactly the core of prompt engineering.

An LLM is not an entity that actively "knows"; it is a knowledge base whose latent patterns are activated by prompts.

As Vincenti stressed, if you do not write it down (or do not prompt precisely), you only "think you know." To borrow Lamport's line: if you cannot prompt clearly, your understanding of the LLM is an illusion as well.


n5321 | February 28, 2026, 01:06

What is a token?

Put simply, a token is the smallest unit through which a model "sees the world."

Think about reading a book: you do not process it one character at a time; you break sentences into meaningful chunks to understand them, right? The human brain is very good at this kind of segmentation. A computer, and especially a neural network, has no such intuition, so all text must first be cut into small pieces, and those pieces are called tokens. What does a token look like? Different models cut differently, but the mainstream tokenizers (the ones used by the GPT series, Claude, Llama, and Gemini) behave roughly like this:

  • A common English word like "hello" → probably a single token.

  • A long word like "unbelievable" may be split into "un" + "believ" + "able", three tokens.

  • Chinese is more direct: usually one character is one token (occasionally two frequent characters merge into one).

  • Punctuation, spaces, and special symbols are tokens too ("!" is a token of its own).

  • Numbers, URLs, and variable names in code get split very finely.

As an example, feed this sentence to a tokenizer: "人工智能正在改变世界。" The tokens might come out as ["人", "工", "智", "能", "正在", "改变", "世界", "。"], eight tokens in all. Now an English one: "The quick brown fox jumps over the lazy dog." might split into ["The", " quick", " brown", " fox", " jumps", " over", " the", " lazy", " dog", "."], about ten tokens. Notice that a token is not strictly a "word" or a "character": it is a segmentation the model learns for itself, the statistically most efficient way to cut text. OpenAI uses an algorithm called BPE (Byte Pair Encoding). In short: split all text into individual bytes, then repeatedly merge the pair that most often appears together into a new "word," until the vocabulary reaches the desired size (typically 50,000 to 100,000 token types).

Why do tokens matter so much? Because everything a large language model "understands" and "generates" is built on tokens:

  • The model's input limit (the context window) is measured in tokens. GPT-4o's 128k tokens, Claude 3.5's 200k, Gemini 1.5's 1M+: those numbers say how many tokens the model can "see" at once.

  • During training, the model's entire job is predicting the next token.

  • When you pay OpenAI or Anthropic, you are billed per token (input plus output).

  • How "smart" a model is depends heavily on how many tokens it saw during training (top models are now trained on trillions, even tens of trillions, of tokens).

So when someone says "this model's context window is 128k tokens," they are telling you it can hold and process, at one time, roughly 100,000 English words' worth of text (less for Chinese, since one character ≈ one token). One small trap is worth flagging: tokens are not evenly distributed:

  • Common words and common Chinese characters use few tokens (high efficiency).

  • Rare words, long-tail English, technical jargon, emoji, and odd variable names in code "eat" many tokens.

  • The same content might cost 100 tokens in English, 150 in Chinese, and 300 in code.

This is also why some people feel that Chinese "costs more tokens" than English. The model is not deliberately penalizing Chinese; the tokenizer's vocabulary is simply better optimized for English.
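The merge loop described above (split text into symbols, then repeatedly fuse the most frequent adjacent pair) can be sketched in a few lines of pure Python. Real tokenizers operate on bytes over huge corpora; this toy version only shows the mechanism:

```python
from collections import Counter

# Toy BPE sketch: start from single symbols, repeatedly merge the most
# frequent adjacent pair into one new symbol. Ties go to the pair seen first.

def bpe_merges(words, num_merges):
    vocab = Counter(tuple(w) for w in words)   # word -> frequency, as symbol tuples
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = Counter()
        for word, freq in vocab.items():       # rewrite every word with the merge
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges, vocab

merges, vocab = bpe_merges(["low", "lower", "lowest", "low"], num_merges=2)
print(merges)  # [('l', 'o'), ('lo', 'w')]
```

Because "lo" and then "low" appear in every word, they get merged first, which is exactly why frequent fragments end up as single cheap tokens while rare strings stay expensive.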


n5321 | February 28, 2026, 01:05

Interview with Leslie Lamport: Turing Award Winner


Teaser / Intro

Leslie Lamport: If you think you know something but don't write it down, you only think you know it.

Host: This is Leslie Lamport. He's a Turing Award winner famous for his contributions to distributed systems. And I interviewed him for the stories behind his papers.

Leslie Lamport: Their reaction shocked me. They became angry. I really thought they might physically attack me.

Host: What was it about Dijkstra's old solution that you felt was unsatisfactory? It was not an obvious idea to most people, and that had actually impressed Dijkstra.

Host: As the inventor of the Paxos algorithm, I asked him his thoughts on the competing Raft algorithm.

Leslie Lamport: There was a bug discovered in Raft and fixed, but I believe the algorithm that they found more understandable was one with that bug.

Host: I also enjoyed reflecting over his 50-year career. You said things like you never considered yourself smart. How could that be?

Leslie Lamport: Stupid people think they're smart because they're too stupid to realize they're not.

Host: You felt like a failure at some point because you wanted to develop this grand theory of concurrency and you never discovered it. Do you still feel that way?


The Bakery Algorithm & Dijkstra

Host: Here's the full episode. I wanted to start with the bakery algorithm. What is the problem that the bakery algorithm solves? And how did you discover the problem?

Leslie Lamport: Well, the problem was invented, or discovered, by Edsger Dijkstra in a paper from, I think, 1965, and I consider that really the beginning of the theory of concurrency, of concurrent programming. He was the first one who really made use of the idea of concurrency as a way of structuring programs as a collection of semi-independent tasks, where the processes have to synchronize with one another.

This was in the days of time sharing, right at the very beginning of time sharing and the idea of multiple people using the same computer. People realized that computers worked faster than humans, and computers were very expensive in those days, so they wanted a computer that could be used simultaneously by multiple people. The program each user was running was a separate program, but sometimes there were resources that got shared, for example a printer. Two people trying to print on the same printer at the same time: the result would be, you know, not very satisfactory.

So he realized there was this problem of synchronizing multiple processes via the idea of what he called a critical section: some piece of code in each of the processes such that at most one process can be executing that piece of code at any particular time. That code might be the code that prints something on the printer. So the problem was how to get the processes to synchronize among themselves so that at most one process was executing its critical section at a time.

It was in 1972 that I learned about the problem, because there was an article giving a solution to it in CACM, the Communications of the ACM. I used to program and I liked little programming problems, and this was just a very nice little programming problem. So I looked at the solution, which is fairly complicated, and I said, "Oh, gee, that shouldn't be so hard." And so I whipped off a very simple algorithm for two processes and submitted it to CACM. A couple of weeks later, I received a letter from the editor pointing out the bug in my program. That had two effects.

The first was that I realized concurrent programs were hard to get right and that you needed a proof that they were correct. The second was that it made me determined to solve that damn problem. And I came up with the bakery algorithm, which was inspired by what's now called the deli problem: there is a deli counter with a roll of tickets, every customer who comes in takes a ticket, and the next person to be served is the one holding the lowest-numbered ticket that hasn't been served yet.

Basically I took that idea, but since there was no central server (or at least the problem as specified by Dijkstra involved no central control), each process had to choose its own ticket. That was the basic idea, and the algorithm was quite simple. And I wrote a proof of correctness.

And the proof of correctness revealed to me that this algorithm had a very interesting property. There was a general feeling (in fact somebody published it in a book or paper) that it was impossible to implement mutual exclusion like this without using some lower-level mutual exclusion. The mutual exclusion generally assumed was that of shared registers: shared pieces of memory that could be written and read by different processes. The idea is that you couldn't have two processes writing at the same time, or one process reading while another process was writing. People assumed those actions were atomic, that they always performed as if they occurred in some specific order.

But the amazing thing about the bakery algorithm was that it didn't require that assumption. Each piece of shared memory was written by only a single process, so it didn't have to worry about two processes interfering with each other. The only problem that might arise is that somebody reading a value while it was being written might get some unknown value, but the algorithm worked anyway. If one process read while the register was being written, that reading process could get absolutely any value, and the algorithm still worked.
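The description above translates almost line for line into code. Here is a minimal sketch (my rendering, not Lamport's original notation) of the bakery algorithm using Python threads; the GIL hides the weak-memory subtleties he is describing, so this only illustrates the ticket logic:

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # let the busy-wait loops yield to other threads quickly

N = 3                        # number of processes
choosing = [False] * N       # True while process i is picking its ticket
number = [0] * N             # process i's ticket; 0 means "not competing"

def lock(i):
    choosing[i] = True
    number[i] = 1 + max(number)              # take a ticket above any ticket seen
    choosing[i] = False
    for j in range(N):
        while choosing[j]:                   # wait while j is still picking
            pass
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass                             # lower (ticket, id) pair goes first

def unlock(i):
    number[i] = 0                            # done: stop competing

counter = 0

def worker(i):
    global counter
    for _ in range(100):
        lock(i)
        counter += 1                         # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 300: no increment was lost
```

Note the single-writer property he mentions: slot `number[i]` is written only by thread `i`; every other thread merely reads it, and ties between equal tickets are broken by the process id.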

Host: I saw in your writing about this problem that you shared it with a colleague named Anatol Holt, and the proof was so remarkable that he didn't believe it and...

Leslie Lamport: Well, the result was so remarkable.

Host: Yes, yes. He didn't believe it.

Leslie Lamport: And I wrote the proof on the whiteboard for him, and he couldn't fault it, but he went home saying there must be something wrong with it. He obviously never found anything wrong with it.

Host: Right. I saw the name of the paper is "A New Solution of Dijkstra's Concurrent Programming Problem." What was it about Dijkstra's old solution that you felt was unsatisfactory and made you want to solve this problem?

Leslie Lamport: Well, there was an unsatisfactory aspect of his original solution: it had the property that if processes kept trying to enter their critical section, an individual process might be starved, might never get access to the critical section. That was solved by the next solution, which I think was Don Knuth's. What measured the quality of a solution was how long a process might have to wait, and I believe the bakery algorithm was the first one that was really first-come, first-served. That is, if one process chose its number before another process tried to enter, the first process would enter the critical section before the other. I believe the bakery algorithm was the first one with that property. And I also think it was simpler than other solutions.

Working with Dijkstra & The Gift of Abstraction

Host: In a lot of the writing I see that you worked with Dijkstra, and I saw that in 1976 you actually spent a month in the Netherlands working with him. Can you talk about that a little bit?

Leslie Lamport: Dijkstra had these things called EWDs, after his initials: little papers. When he thought of something, had some idea, he would write it down and send it out to people. Well, one of those EWDs described an algorithm that he and some associates (actually mentees, I guess you would call them) had written. It was the first concurrent garbage collection algorithm.

A way of writing programs had evolved in which there was a pool of memory: when a program needed a piece of memory, it would ask some server for it and be given that piece. At some point it would stop using the memory, but the program itself wouldn't know; the particular process that created the memory wouldn't know whether some other process was still using it or not. So there was an additional process called the garbage collector, which would go around examining the memory, decide which pieces were no longer being used, and put them back on what's called the free list, from which the server process handing out memory could take them.

I looked at it and realized that I could simplify the algorithm. The handling of the free list was done by a special process, which had to worry about its own coordination with the processes that were using the memory. And I realized that the free list could just be made part of the regular data structure, so it didn't need special handling. That seemed to me a very simple, very obvious idea. I sent it to him, and when I got the next version of the paper I discovered he had made me an author. I thought that was very generous of him, because it seemed like such a simple, obvious idea. I realized much later that it was not an obvious idea to most people, and that it had actually impressed Dijkstra.

That was the only thing I actually did with Dijkstra. Many years later he said that I had a remarkable ability at abstraction. Only in very recent years, maybe after I got the Turing Award, did I realize that the reason for my success, the reason I wound up getting a Turing Award, was not that I was particularly smart, but that I had this gift of abstraction, and Dijkstra was smart enough to see it. I was invited to spend a month there, though not with Dijkstra but with a colleague of his, Carel Scholten. Only one thing that was ever published came out of that. Carel and I would meet with Dijkstra once a week, and in the course of those discussions an idea somehow came up that led to a variant of the bakery algorithm, which I wrote up and published. So that was the one tangible result that came from my month in the Netherlands.

Host: Yeah. I saw you wrote that you spent one afternoon a week working, talking, and drinking beer at Dijkstra's house, and that you don't remember exactly who was in charge of what on that paper, but...

Leslie Lamport: Yeah. Well, I don't think I really could have gotten that drunk, because I probably drove to the meeting and back. So,

Host: Right. Right.

Leslie Lamport: The Dutch beer that I was drinking was not very alcoholic.

Time, Clocks, and the Ordering of Events

Host: I wanted to talk about your most cited paper, the one titled "Time, Clocks, and the Ordering of Events in a Distributed System." What's the story behind the paper and the problem you were solving with it?

Leslie Lamport: The origin was simple. Somebody sent me a paper on building distributed databases, where you have multiple copies of the data in different places and you need to keep them synchronized in some way. I looked at it and realized their solution had this problem: things would be executed as if they occurred in some sequence, but that sequence could be different from the sequence in which they actually happened.

The notion of what "happening before" means is not obvious, or not obvious to most people, but I happened to have learned about special relativity, in particular what's known as the space-time view of special relativity, where you consider space and time together as just one four-dimensional thing. Einstein wrote his paper in 1905, and in, I think, 1909, somebody whose name I'm blocking on provided this four-dimensional view. That four-dimensional view has a particular notion of what it means for one event to occur before another: one event happens before another if a signal emitted at the first event could be received before the second event happened, and the communication could not travel faster than the speed of light, because nothing can travel faster than the speed of light.

Well, I realized there was an obvious analogy. The notion of happens-before is exactly the same as in relativity, except that instead of asking whether one event can influence another by something traveling at the speed of light, you ask whether the first event could have affected the other by information carried in messages that were actually sent in the system. The thing that blew people away was this definition of happens-before in a distributed system. Also, this was the first paper I would say had a scientific result about distributed systems.

I made, perhaps, the mistake that I was warned against at some point, of having two ideas in one paper. The other thing I realized was that there was an algorithm that would produce an ordering satisfying that condition: if one event happened before another, then the first event would be ordered before the other. And I realized that if you had an algorithm to do that, you could use it to provide the synchronization you needed for any distributed system, because you could describe that system in terms of a state machine.
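The logical-clock algorithm discussed here can be sketched as follows. This is a minimal reconstruction of the idea, not code from the paper; the class and method names are mine. Each process keeps a counter, advances it at every event, stamps outgoing messages with it, and on receipt jumps past the message's timestamp; ordering events by the pair (counter, process id) then never contradicts happens-before.

```python
# A sketch of Lamport logical clocks (illustrative names, not from the paper).

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return (self.clock, self.pid)   # totally ordered timestamp

    def send(self):
        self.clock += 1
        return self.clock               # timestamp carried on the message

    def receive(self, msg_timestamp):
        # advance past the sender's timestamp, so receipt is ordered after send
        self.clock = max(self.clock, msg_timestamp) + 1
        return (self.clock, self.pid)

a, b = Process("A"), Process("B")
e1 = a.local_event()    # (1, "A")
t = a.send()            # message stamped 2
e2 = b.receive(t)       # (3, "B"): ordered after the send that caused it
```

Ties between concurrent events are broken arbitrarily but consistently by the process id, which is what makes the order total.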

And a state machine, as I described it then, is something that has a state, and a process executes commands that need to be executed in order. A command is simply something that makes a change to the state and produces a value. So you can describe the state machine just by how commands affect the state: what the new state is as a function of the original state, and what the output value is as a function of the original state.
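A state machine in this sense can be sketched very simply; the bank-account example and the names here are mine, for illustration only. The machine is just a function from (state, command) to (new state, output), and keeping replicas consistent reduces to feeding every replica the same commands in the same order.

```python
# A toy state machine: state is a balance, commands change it and return a value.

def apply(state, command):
    op, amount = command
    if op == "deposit":
        return state + amount, state + amount   # (new state, output)
    if op == "withdraw":
        return state - amount, state - amount
    raise ValueError(op)

commands = [("deposit", 100), ("withdraw", 30)]
replica1 = replica2 = 0
for c in commands:          # same commands, same order, on every replica
    replica1, _ = apply(replica1, c)
    replica2, _ = apply(replica2, c)
# both replicas necessarily end in the same state (70)
```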

It turned out that this was very obvious to me, but it's really, in practice, the important idea in that paper, because it showed this method of building distributed systems, and of thinking about concurrent systems, in terms of state machines. But that part was completely ignored. As a matter of fact, twice I talked to people about that paper and they said there was nothing in it about state machines, and I had to go back and reread the paper to be sure I wasn't going crazy. It really did talk about state machines.

It's important for another reason. If you're trying to understand a concurrent program (the bakery algorithm is really an exception), concurrent programs are written assuming atomic actions, so that you can assume the execution proceeds as a sequence of events. It turns out that the way to understand why a program produces the right answer is not just: you give it the right input and it produces the right answer. By the time you're in the middle of execution, what it was given at the beginning is ancient history. The only thing that tells the program what to do next is its current state.

And the way to understand a program, even a simple program that just takes input and produces an answer, is to ask: what property of the state at each point ensures that the answer it produces is going to be correct? That property, which mathematically is a boolean-valued function of the state, is called an invariant. Understanding the invariant is the way to understand the program. And I realized that the same thing is true of concurrent systems and concurrent programs. People like to write behavioral proofs, reasoning about sequences.

And the problem with that is that the number of possible sequences is exponential in the length of the sequence, so the complexity of your reasoning gets very high, and it's very easy to miss cases. The number of possible executions is exponential in the number of processes, but an invariance proof is basically quadratic in the number of processes. That's basically why invariance proofs are better. Still, for a long time, people doing distributed systems theory kept trying to develop methods and formalisms based on partial orderings, and they published a lot of papers, but if you want to do it in practice, that's not the way to do it. Well, I shouldn't say it's never the way: there are algorithms, like the bakery algorithm, where thinking in partial orderings is in fact a very good way of doing it. But those are the exceptions. The method that you can be sure will work is the use of invariants.
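What an invariant looks like can be shown on a tiny sequential example; the example is mine, not one from the interview. The loop below sums 1 through n, and the invariant (total equals i*(i+1)/2 after i iterations) is a boolean-valued function of the state (i, total) that holds at every step and implies the final answer is correct.

```python
# A loop with its invariant checked at every step (illustrative example).

def sum_to(n):
    total, i = 0, 0
    while i < n:
        i += 1
        total += i
        # the invariant: a boolean-valued function of the state (i, total)
        assert total == i * (i + 1) // 2
    return total

result = sum_to(10)   # 55
```

Understanding why this loop is correct means understanding why each iteration preserves that property, which is exactly the kind of reasoning an invariance proof makes systematic.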

The Byzantine Generals Problem

Host: I want to talk about, I guess, the next paper, which is the Byzantine generals problem. That's something we hear about and learn about going through computer science in college, and the name is great. I want to know the story behind that problem.

Leslie Lamport: After I wrote the time-clocks paper, which tells you how to build a distributed system assuming no failures, it was obvious that one reason for distributed systems is that you have multiple computers, so if one fails you can keep going. In particular, that was the problem being solved at SRI when I joined it. Before I got to SRI, I had started working on that problem, and there was no established idea of what I should assume a failure can do, so I assumed the worst possible case: that a failed process might do absolutely anything.

And I came up with an algorithm that would implement a state machine under that assumption. The algorithm I came up with used digital signatures: it used the fact that a faulty process might do anything, but it could not forge the signature of another process.

Host: which just means that the message can be trusted that it came from a private...

Leslie Lamport: Right, so you can relay messages, and people can check that the relayed message is actually the one that was originally sent. So I had a solution using that. When I got to SRI, I realized that the people there were trying to solve the same problem, but there were two differences. First of all, at the time I did this, which was 1975, very few people knew about digital signatures. In fact, I don't remember when the Diffie-Hellman paper was published, but it was around 1975, and I happened to know about digital signatures because Whit Diffie, one of the two authors of that paper, was a friend of mine. In fact, at one point we were at a coffee house and he was describing these things. He said, we have this problem of building digital signatures that we haven't solved. And I said, oh, that seems easy enough, and I sat down and, literally on a napkin, wrote out the first digital signature algorithm.

It was not practical at the time, because it required something like 128 bits to sign one bit of the thing you're signing. It's not quite as bad as you might think, because you could sign not the entire document but a hash of that document, which you assume people cannot forge.

Host: The hash, they can't reverse it.

Leslie Lamport: Yeah, you can't reverse it: take a hash and find some other document that satisfies that hash. But anyway, that's why digital signatures were part of my toolkit. The people at SRI didn't have that, but they had a nicer abstraction. Instead of getting agreement among the processes on a sequence of commands, they had an algorithm for agreement on a single command, and that algorithm would be executed multiple times. That was a nicer way of describing what you're doing than my method.

So the first paper that was published gave both algorithms. Since they didn't have digital signatures, they used a different algorithm, which had the property that to tolerate one faulty process you needed four processes, whereas if you used digital signatures you only needed three. The original paper contained both algorithms, and I was one of the authors. The algorithm without digital signatures is more complicated, and the general one for an arbitrary number of processes was really a work of genius. It was almost incomprehensible; you just had to read this complicated proof that, for the general case, to tolerate n faults you needed 3n+1 processes, whereas with digital signatures you could do with fewer. The algorithm for a single fault wasn't hard, but the one for multiple faults, which Marshall Pease was the one who did, was just brilliant. Later, in a later paper, I discovered a simpler proof, an inductive one: if it works for n minus one faults, it works for n. The original one was just brilliant; who knows how he discovered it. Anyway, we published that paper, and I realized that this was the whole idea of Byzantine faults.

So, a Byzantine fault is one where you assume a process can do anything. Now, I was assuming a process can do anything because I didn't know what else to assume, but the people at SRI had the contract for building a multi-computer system for flying airplanes, so they were the ones who appreciated the need to handle processes that do malicious things, because they really couldn't assume anything about what a faulty process would do. And every time you got an algorithm, say one that tried to tolerate one fault with three processes, you'd think: oh, this works; the bad case really couldn't happen in practice. And then you'd be able to find some sequence of plausible failures that would defeat the algorithm if there were a faulty process.

So you needed four. And for some reason I thought that digital signatures were almost a metaphor in the algorithm: since we weren't worried about malicious failures, just things that happen randomly, it should be possible to write a digital signature algorithm with a sufficiently low probability of failing. But I never worked on that, and nobody else ever did. So that algorithm was pretty much ignored, because digital signatures were very expensive in those days. I don't know what's being done now, because digital signatures are just computing, and computing is cheap. But I remember at some point I happened to be communicating with an engineer at Boeing, and I asked whether they knew about those results. He said yes; in fact, he was the one at Boeing who had read that paper, and his reaction was: oh, we need four computers.

But at any rate, I realized this was an important result and it should be well known, and I had learned one thing from Dijkstra. He wrote this paper called the dining philosophers problem. That paper got a lot of attention, but I think the basic problem was not particularly interesting; it just had a cute story to it. It involved a bunch of philosophers sitting around a table eating some funny kind of spaghetti that required two forks, with one fork between each pair of philosophers, so each fork was shared by two people. And I realized it was because of that cute story that the problem was popular.

And so I decided that our work needed a cute story, a nice story, and I invented the Byzantine generals. The idea, for the one-failure case, is that you have four generals who have to agree whether or not to attack. If they all attack, they'll win the battle; even if only three of them attack, they'll win; but if only two attack, they'll lose. And one of the generals might be a traitor. So how can you solve this problem? It's phrased in terms of these generals having to communicate and make the single decision whether to attack or retreat. And I called it the Byzantine generals problem.
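The four-generals, one-traitor case can be sketched with the oral-messages idea from the paper; this is a simplified reconstruction, and the traitor's behavior and the constant names here are my choices for illustration. A loyal commander sends its order to three lieutenants; each lieutenant relays what it heard to the other two; each loyal lieutenant obeys the majority of the three values it ends up holding, so a single traitorous lieutenant cannot split the loyal ones.

```python
# A simplified OM(1)-style round for 4 generals, 1 traitorous lieutenant.
from collections import Counter

def om1(commander_order, traitor):
    # round 1: a loyal commander sends the same order to lieutenants 1..3
    heard = {lt: commander_order for lt in (1, 2, 3)}

    def relay(src, dst):
        # a traitorous lieutenant relays conflicting values to different peers
        if src == traitor:
            return "ATTACK" if dst % 2 else "RETREAT"
        return heard[src]

    # round 2: each lieutenant relays, then decides by majority vote
    decisions = {}
    for lt in (1, 2, 3):
        values = [heard[lt]] + [relay(src, lt) for src in (1, 2, 3) if src != lt]
        decisions[lt] = Counter(values).most_common(1)[0][0]
    return decisions

d = om1("ATTACK", traitor=3)
# loyal lieutenants 1 and 2 both obey the commander's order despite the traitor
```

The harder case in the paper, a traitorous commander sending different orders to different lieutenants, needs the same majority-vote machinery and is why three generals are not enough.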

Host: I saw in your notes about the problem that there was maybe a prior version that was called the Chinese generals problem or something like that.

Leslie Lamport: Oh yeah. There was a different problem that Jim Gray described, basically an impossibility result, called the Chinese generals problem. I won't bother going into what it is, but it gave me the idea of generals. I actually initially thought of Albanian generals, because at that time Albania was a black hole as far as the rest of the world was concerned. It was a communist regime, part of the Soviet bloc, but it was even more Soviet than the Soviet Union, and more restrictive. Then my boss said, "Well, there are Albanians in the world, so shouldn't it have a different name?" And I realized there aren't any Byzantines around, and that was the perfect name.

Host: So it's interesting to me that this wasn't the first time the problem was specified, but it was the first time that you named it, gave it a good, catchy name, essentially, and added some additional results. What was it that you saw in that problem that made it interesting? Or rather, how do you know that a problem is worth putting extra time into?

Leslie Lamport: Oh, well, with this one it was because it was obvious that computers were going to fly our airplanes. This was during the oil crisis of the 70s, and people knew they could build more energy-efficient planes by reducing the size of the control surfaces. But that made the plane aerodynamically unstable. A pilot couldn't make all the adjustments needed to keep it flying, but a computer could. So it was clear that airplanes were going to be flown by computers, as they are today. And people thought that if you want to tolerate one fault you just use three computers; they didn't realize that with arbitrary faults you need four. So it was a really important result, and that's why I believed it needed to be well known.

Problem Solving & Paxos

Host: Generally, when you look at the problems you were solving with your work, how did you decide? If you're working at a company, you can decide based on, I guess, the impact to the company: is it going to make more money or save costs or something like that. But I wonder, across your career, thinking about the bakery problem or some of your later work as well, it's so open-ended. How do you know which problems are the ones worthwhile?

Leslie Lamport: Throughout my career I worked for private companies, not in academia or for the government. So some problems arose because an engineer would have a problem and come to me. Disk Paxos, for example, was a case of that: somebody actually wanted an algorithm to do what it did.

Host: You mentioned Paxos earlier, and I know that's one of your most famous works. I'm curious about the story behind that paper and the problem you were solving.

Leslie Lamport: Well, the problem is exactly the same one I was solving in the Byzantine generals work: building a fault-tolerant state machine. But by that time, the faults that interested industry were ones where failure meant the computer just stopped, not that it did arbitrary things. Paxos is an algorithm for building fault-tolerant systems that handle that class of faults.

The place I was working at was DEC SRC, which I joined in 1985, and they built one of the first distributed operating systems. These were basically the people who had come from Xerox PARC and had invented personal computing, but they also had the notion of distributed personal computing, and they invented the Ethernet for that. So all of the computers in the building were on a single Ethernet network and shared common storage, and they maintained consistency of that storage. Well, they didn't have an algorithm; they had an operating system with code that did that. And I didn't believe that what they did was possible. I forget exactly why I didn't think it was possible, but at any rate I started trying to come up with an impossibility proof: an algorithm to solve this would have to do this, and in order to do this it would have to do that. And at some point I stopped and said: oh, this isn't an impossibility proof. This is an algorithm that does it.

Host: You said that they had code but not an algorithm.

Leslie Lamport: Yeah.

Host: Um what do you mean by that?

Leslie Lamport: When most people sit down and start writing programs, they start by thinking in terms of code. One of the things I learned fairly early in my career, I don't remember exactly when: back in the days when I started writing algorithms, people were calling them programs, and I was probably calling them programs too. At some point I realized that I wasn't talking about programs; I was interested in algorithms.

An algorithm is something more abstract than a program. A program is written in some particular code, but an algorithm can be implemented by programs written in any kind of code. It's something at a higher level of abstraction. And of course I liked that, because abstraction is something I was good at, even without realizing that that's what I was doing.

So what I've spent a large part of my career doing, basically from about 2000 onward, was getting people who build concurrent systems not to just write code, but to have an algorithm. Now, a system does lots of things, but there should be some kernel of the program that's involved with synchronizing the different processes, or, in a distributed system, the different computers, and that code is very hard to get correct. You don't want to think in terms of code, because code conflates a lot of issues that are irrelevant to the concurrency aspect. So you should first get an algorithm that does the synchronization, and then implement that algorithm.

Host: I was looking at the Paxos paper and some of your notes about it, and I saw that there's an eight-year gap between when you came up with the algorithm and when the paper, called "The Part-Time Parliament," was actually published. Why the eight-year gap?

Leslie Lamport: Oh, well, the referees originally said this paper is okay, but not terribly important. Fortunately, Butler Lampson realized the importance of the algorithm, together with the idea that you can implement anything because it's implementing a state machine, and he went about proselytizing: building systems using Paxos and thinking in terms of state machines. So the idea was getting out, and I was in no hurry to publish, so I just let the paper sit. Eventually a new editor came along. I think the status of the paper was that it had been accepted but needed revision, and he decided, yes, let's publish it. It was eventually published with a few additions to mention work that had been done in the interim, and I got Keith Marzullo to do that part for me.

The story in the paper was that all this had happened centuries ago and this was a rediscovered manuscript, and I used that device so that when the details of something were, I considered, obvious and not interesting, the paper would say it's not clear what the Paxons did at this point. At any rate, Keith kept up the idea that this was a description of this ancient thing; he wrote a little preface to it and, I think, added some references.

Host: I saw in your writing, too, that when you presented the paper initially, you even dressed up like an Indiana Jones-style archaeologist. How did that go, when you presented this Paxos paper and algorithm?

Leslie Lamport: Well, I think the lecture may have gone well, but nobody understood the algorithm, or rather, nobody understood the significance of the algorithm.

Host: It sounds like no one understood it except for Butler Lampson. What did he see that made him unique, I guess?

Leslie Lamport: Well, he had a good understanding of building systems. He really deserved his Turing Award. He was one of the original people at Xerox PARC who were building distributed personal computing. He and Chuck Thacker, I think, were probably the two senior people in that lab.

Paxos vs. Raft

Host: I saw later there was a paper describing a new algorithm which seems to solve the same problem: the Raft paper. I was wondering if you read that, and what your thoughts were on it versus Paxos.

Leslie Lamport: The authors actually sent me a draft of the original paper, and I looked at it and said, I forget whether it was "send it back to me when you have an algorithm" or "send it back to me when you have a proof." I forget which one it was; you get the idea. I don't know whether they did add a proof to the paper or not. I never read later versions, but someone whose judgment I value had read it and said that it's basically the Paxos paper, with some of the details left unfinished by the Paxos paper filled in, but described in a very different way.

The basic idea of how Paxos works is that there are two phases, and you're trying to implement a sequence of decisions. It involves a leader, and the leader has to get elected. But it turns out that you can do the first phase once, and you don't have to do it again as long as you have the same leader. Only the second phase has to be done for each decision; then, if the leader fails, you have to elect a new leader and do the first phase again.

So I think about it in those two phases. But the way engineers like to think about it is: you keep doing the second phase until the leader fails, and then you go back and do the first phase. So it's explaining it in the opposite order. In fact, when you start fresh, you don't have to do the first phase at all; what the first phase does can just be built into the initial state. But I think those two phases are the right way to understand it.
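The two phases being discussed can be sketched for a single decision; this is my simplification of the protocol (synchronous calls, no message loss modeled, hypothetical names), not code from the paper. Phase 1 (prepare) is what a new leader runs once to learn of any value already accepted; phase 2 (accept) is the per-decision round it can then repeat while it remains leader.

```python
# A minimal single-decree Paxos sketch with three acceptors.

class Acceptor:
    def __init__(self):
        self.promised = -1            # highest ballot promised
        self.accepted = (-1, None)    # (ballot, value) last accepted

    def prepare(self, ballot):        # phase 1
        if ballot > self.promised:
            self.promised = ballot
            return self.accepted      # report any previously accepted value
        return None                   # reject lower ballots

    def accept(self, ballot, value):  # phase 2
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    majority = len(acceptors) // 2 + 1
    replies = [r for r in (a.prepare(ballot) for a in acceptors) if r is not None]
    if len(replies) < majority:
        return None
    # a proposer must adopt the highest-ballot value already accepted, if any
    best = max(replies)
    if best[1] is not None:
        value = best[1]
    acks = sum(a.accept(ballot, value) for a in acceptors)
    return value if acks >= majority else None

acceptors = [Acceptor() for _ in range(3)]
chosen = propose(acceptors, ballot=1, value="X")   # "X" is chosen
again = propose(acceptors, ballot=2, value="Y")    # a later ballot must keep "X"
```

The second call illustrates the safety property: once a value is chosen, any later leader's phase 1 forces it to re-propose that same value.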

But the Raft people also had this idea that Raft is better because it's simpler. I must say that a lot of people say that Paxos is hard to understand, and I don't understand why. I mean, I've explained it to some people in five minutes and they understood it. At any rate, the Raft people said theirs was simpler, and they even taught Paxos to one class and Raft to another, and yes, the students said that Raft was more understandable.

The interesting thing, though, is that there was a bug discovered in Raft and fixed, and I believe the algorithm the students found more understandable was the one with that bug. That made me realize: what does understanding mean? For me, understanding means you can write a proof of it. But what understanding means for most people is a warm, fuzzy feeling, and the Raft description gave them more of a warm, fuzzy feeling, because that seems to be the way programmers like to think about the algorithm: the second phase first, until you get a failure. But the way I describe it is the one that helps you get a better understanding of why it actually works.

LaTeX

Host: So, yeah, we've talked about a lot of your papers. I know one of your other contributions, whether you knew it or not at the time, was LaTeX, and building that is something that has impacted the entire academic community. What's the story behind wanting to build LaTeX?

Leslie Lamport: Oh, that was very simple. I was in the process of starting to write a book, and it was clear that TeX was the basic typesetting system one had to use. But I felt that I would need macros to make TeX do what I wanted it to do, and I figured that with a little extra effort I could make the macros usable by other people. The system I had been using before TeX was called Scribe, and the basic idea of Scribe was that you describe the logical structure of the document and Scribe does the formatting. Well, Scribe didn't do that great a job of formatting. But obviously I liked the abstraction: it's the writing that matters, not the typesetting.

And so, at some point, I met Peter Gordon of Addison-Wesley. I'm not sure what you would call him, but he looks for books to publish. He convinced me that I should write a book on it. In those days, it never occurred to me that people would actually spend money for a book about software. But, what the hell. And what he did was introduce me to a typographic designer at Addison-Wesley, who was responsible for the typographic design that's in the standard LaTeX styles. Basically, I did all that in my, quote, spare time. It took me six or nine months or so. I suppose the statute of limitations has run out, but I really spent some time working on it when I was allegedly billing the time to some project that had nothing to do with it.

Writing, Thinking, and Proofs

Host: On the topic of writing, you have a quote that I really enjoy: "If you're thinking without writing, you only think you're thinking." I was curious to hear your thoughts on what you mean by that.

Leslie Lamport: Well, it was really meant for people building computer systems. You have an idea and you think it's going to work, or you have something that you think somebody else will want to use. Well, write a description of it. There's an old maxim I heard somewhere: write the instruction manual before you write the program. Great advice. I did not do that with LaTeX, but when I was writing the book, whenever I discovered that something was hard to describe, hard to explain, it needed to be changed, and I made a number of changes to it as a result. But I didn't start at the beginning with the instruction manual.

Host: Why is writing conducive to good thinking?

Leslie Lamport: Because it's very easy to fool yourself. I mean, that underlies my whole idea of writing proofs. One thing I learned is that you have to write a correctness proof of a concurrent algorithm. And when my algorithms started to get more complicated, so did the proofs. I have a PhD in math; I knew how to write proofs, and I started writing them the way I would normally do. And I realized it just didn't work, because there were so many details involved that I just couldn't keep track of them.

And so, as a computer scientist, I knew how to deal with complexity: hierarchical structure. I devised a hierarchical structure where a proof is a sequence of steps, each of which has its own proof, and that proof is either a paragraph or itself a sequence of steps, each with its proof, which again can be either a paragraph or a sequence of steps, and so on. You break the whole problem up into smaller pieces. So there's never any question of where something is coming from. You're stating that this step follows from these particular steps, and if it does not follow from them, your proof is wrong. The theorem might still be correct, but your proof is wrong.
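The hierarchical structure being described can be sketched in LaTeX; the toy theorem and the step-numbering macros here are my illustration, not an example of Lamport's. Each numbered step cites exactly the steps it follows from, and any step's paragraph proof could itself be expanded into sub-steps.

```latex
% A toy structured proof: numbered steps, each with its own proof.
\noindent\textsc{Theorem.} If $n$ is odd, then $n^2$ is odd.
\begin{itemize}
  \item[$\langle1\rangle1.$] $n = 2k+1$ for some integer $k$.
        \\ \textsc{Proof:} Definition of odd.
  \item[$\langle1\rangle2.$] $n^2 = 2(2k^2+2k)+1$.
        \\ \textsc{Proof:} By $\langle1\rangle1$ and algebra:
        $(2k+1)^2 = 4k^2 + 4k + 1$.
  \item[$\langle1\rangle3.$] Q.E.D.
        \\ \textsc{Proof:} By $\langle1\rangle2$ and the definition of odd,
        since $2k^2+2k$ is an integer.
\end{itemize}
```

The point of the structure is that every step names its justification, so a gap ("it does not follow from the cited steps") is locally visible rather than hidden in a paragraph.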

Well, I discovered that worked great for writing my proofs of programs, but I also write proofs of theorems, things more like ordinary math, and I started trying it on them and discovered it worked beautifully. So I started trying to convince mathematicians to write these proofs. In one small seminar, I won't describe what it was about, I described this proof style to maybe 20 mathematicians, and their reaction shocked me. They became angry. I really thought they might physically attack me.

So I believe that what's going on, I mean, I believe that reaction is totally irrational, and when people act irrationally it tends to be out of fear. And what I believe mathematicians are afraid of is that they're going to have to write their proofs to convince a computer program. In fact, in one of those talks I gave, I said very clearly: you don't have to be any more formal than you usually are. You can write the exact same proof; it's just a matter of organizing things in a very simple hierarchical structure, and, when you're using a fact, mentioning that you're using it. Nothing about formalism or anything. And after I gave that talk, someone got up and said, "I don't want to have to write my proofs for a computer program."

And in fact, it is more work doing that, and the reason is that it reveals what you haven't said, that there are steps in there that you may think are obvious but that you haven't written down. If you think you know something but don't write it down, you only think you know it. And that's where errors come in. That's where the errors in that one-third of papers come in, because it really makes you honest.

Career Reflections: Industry & Academia

Host: When I look across your career, I think you had a lot of contributions people might expect to come from academia, these papers and things, but you did all of your work in industry. Why did you not see yourself as an academic, and more as someone working in industry?

Leslie Lamport: Well, I started out programming, and I eventually got jobs that took me into what we now call computer science. At the time I never even realized that there could be a science of computing. It wasn't until maybe the mid to late '70s that I realized yes, there was a computer science, and I was a computer scientist. But it never seemed to me that computer science was an academic subject. At some point I had to make a choice between doing computer science, without calling it computer science, or teaching math at a university. And I chose, for fairly random reasons, to do computer science. So until maybe the mid-'80s or so, it just didn't seem to me that computer science was something people needed to go to a university to learn. And I suppose, afterwards, I just didn't think it would be fun teaching computer science.

Host: I saw in your writing a footnote that said somewhere that you felt like a failure at some point, because you wanted to develop this grand theory of concurrency and you never discovered it. Do you still feel that way, or what are your thoughts on that footnote?

Leslie Lamport: A large percentage of the people who were doing things like I was doing, which is not a large number of people, had this notion that they were looking for the Turing machine of concurrency. The Turing machine was this abstraction which really captured what computing was, and they were looking for something that would be the Turing machine of concurrent computing, and nobody succeeded. I mean, there are some people who think they've succeeded. Petri nets are something I guess I don't have time to explain, but they were big in the '70s, and I was actually surprised to learn that there's still a large community of people doing Petri nets. But what I now realize is that Petri nets, and most of the things people were doing, were really language-based.

And I was never interested in languages. I'm interested in what the language is expressing. And I realized, in some sense, maybe I have realized what the Turing machine of computing is: state machines. State machines are a little bit different the way I now describe them. They don't have commands. They just have a state and a next-state relation, which is even simpler than talking about commands and values and such. To me, that's the Turing machine of concurrency. But it doesn't serve the function that Turing machines do, because what Turing machines do is describe what's possible, and state machines can describe anything, including things that are not possible.

And in fact, there's a good reason for that. For example, when I describe an algorithm, I will say that the values of a variable can be any integer. Now, you can't implement a program where a variable can hold any integer, but talking about machine integers would complicate things unnecessarily. People have this funny idea that because something is infinite, it's more complicated. They've got it backwards. Infinity was introduced to simplify things. The first thing you learn is arithmetic, and you're learning arithmetic with an infinite number of integers, because if you restricted it to a finite set of integers, arithmetic would become much more complicated.
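Lamport's view of a state machine as nothing but a set of initial states plus a next-state relation, with variables free to range over all integers, can be sketched in a few lines. This is an illustrative toy of my own (the names `init`, `next_rel`, and `is_behavior` are not any real tool's API), loosely echoing the Init/Next style of his TLA+ specifications:

```python
# A specification as Lamport describes it: no commands, no language,
# just an initial-state predicate and a next-state relation.

def init(state):
    # Initial condition: the counter x starts at 0.
    return state["x"] == 0

def next_rel(s, t):
    # Next-state relation: x either increments or stays the same.
    # Note x ranges over all integers; nothing here bounds it.
    return t["x"] == s["x"] + 1 or t["x"] == s["x"]

def is_behavior(states):
    # A (finite prefix of a) behavior is a sequence of states whose
    # first state satisfies init and whose consecutive pairs satisfy
    # the next-state relation.
    return init(states[0]) and all(
        next_rel(s, t) for s, t in zip(states, states[1:])
    )

good = [{"x": 0}, {"x": 1}, {"x": 1}, {"x": 2}]
bad = [{"x": 0}, {"x": 3}]
print(is_behavior(good))  # True
print(is_behavior(bad))   # False
```

The point of the abstraction is visible even in this toy: the relation describes which state changes are allowed, without saying anything about how a program would produce them.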

So the abstractions of mathematics, which people find difficult because they don't have the proper training in mathematics, are really what's simplifying things, and that's what you use. The state machine is described, by me, using mathematics; that's the most powerful way of doing it. But computer people, computer scientists and programmers, are really hung up on languages, and so they invent all sorts of languages. And they're all describable: in fact, if you want to give them a semantics, you would do it in terms of a state machine. They just think that inventing these languages improves your thinking. It doesn't. I mean, there are reasons why you use computer languages and don't write your programs in math, and they basically involve efficiency. But for understanding, you can't beat math, and attempts to do it with something that looks like a programming language are just the wrong way to go when you're trying to deal with concurrency.

Host: When I look at everything that you've written and all the stories, there are these little anecdotes. There are places where you say things like you never considered yourself smart, but you noticed that other kids had an awful time understanding things; or there's a problem that you solved where someone else had difficulties, but you don't view your contribution as a brilliant one or anything like that. And that doesn't connect with me, because you've also won a Turing Award and done all these amazing things. So how could it be that you, you know, merely discover things and are not smart, yet you've achieved so much?

Leslie Lamport: Well, there's this general thing that psychologists talk about: when someone is good at something, they don't realize how good they are at it, because it's simple to them. There's the opposite one, that people who are bad at something think they're better at it than they are, because they're bad at it. Or, to put it a little more concisely: stupid people think they're smart because they're too stupid to realize they're not. The gift that I have is not, in some sense, raw intelligence. It's abstraction, and it's only recently, the last 10 or so years, that I realized how much better I am at that than most other people.

Host: At this point, you've experienced so much. When you look back on your career, if you could go back to yourself when you had just graduated college and give yourself some advice, knowing what you know now, what would you say?

Leslie Lamport: One thing I learned fairly early in my life is that I shouldn't waste time trying to answer questions that I don't have to answer. I don't think about what I should have done, because that's a question that I don't have to answer.

Outro

Host: Thank you for listening to the podcast. It's a passion project of mine that I've really enjoyed building. Another passion project that I've been working on, kind of in secret, is building an ergonomic keyboard that I wish existed, and I finally have a prototype. So I'd love to show you what we've built. It's ultra low profile and ergonomic, and I couldn't find anything like it on the market; that's why we built it. I'll put a link to the keyboard in the description. You can take a look and learn more about the project there. We could definitely use your support. Also, if you have any feedback for me about the show, I'd love to hear it. Comments on YouTube have led to guests coming on, like Ilya Grigorik and David Fowler; I wasn't aware of them until someone dropped a comment. Also, feedback in the comments helped me learn to reduce the number of cliffhangers in the intros. So your comments definitely make a difference. Please keep letting me know what you'd like to see more of in the show, and I'll see you in the next episode.


n5321 | February 27, 2026 12:13

The third golden age of software engineering – thanks to AI, with Grady Booch

Host: Some people worry that AI writing surprisingly good code could mean the end of software engineering. But Grady Booch disagrees and says that we are entering the third golden age of software engineering. Grady Booch is one of the founding figures of software engineering as we know it. He co-created UML, pioneered object-oriented design, spent decades as an IBM Fellow, and has witnessed every major transformation this industry has undergone since the 1970s. In today's conversation, we discuss the three golden ages of software engineering and what history teaches us about surviving and thriving through major technology shifts; why coding has always been just one part of software engineering, and why the human skills of balancing technical, economic, and ethical forces are not going anywhere; Grady's direct response to Dario's prediction that software engineering will be automated in 12 months (spoiler: he does not hold back); and much more. If you want to understand that the massive change that AI is bringing has in fact happened before, and not just once, this episode is for you. This episode is presented by Statsig, the unified platform for flags, analytics, experiments, and more. Check out the show notes to learn more about them and our other season sponsors. So, Grady, it's great to have you back on the podcast again.

Grady Booch: Thanks for having me. Aloha.

Host: So, touching a little bit on the history of software engineering, you've said many times before that the entire history of software engineering is one of rising levels of abstraction. Can you walk us through the key inflection points that help us understand this, and then, of course, tie it into how AI plays into this?

Grady Booch: Well, the very term "software engineering" did not come to be until Margaret Hamilton was probably the first to anoint it. At the time she had just left the Manned Orbiting Laboratory project. She was working on the Apollo program, and she was one of the very few software developers in a sea of mostly men who were the hardware and structural engineers, and she wanted to come up with a phrase that distinguished what she did from the others. So she began using the term "software engineer," and I think we can rightfully give her the claim to being the first to coin it. There were others that followed; most notably, people talk about the NATO conference on software engineering, and when the organizers established that, which was actually a few years after Margaret's work, they did so as kind of a controversial name, not unlike how the term "artificial intelligence" was named controversially for its first conference on the West Coast. So there were others that followed, and after a period of time it kind of stuck. I think the essence of what Margaret and others were doing was to say there's something engineering-ish about this, in the sense that ours is a field that tries to build reasonably optimal solutions (you can't have perfect solutions) that balance the static and dynamic forces around them, much like what structural, electrical, and chemical engineers do. In the software world, of course, we deal with a medium that is extraordinarily fungible and elastic and fluid, and yet we still have the same kinds of forces upon us. Here we've got the forces of the laws of physics: you can't pass information faster than the speed of light, which is kind of annoying in some cases, but hey, we'll have to live with it. There are issues about how large we can build things, largely constrained by the hardware below us. And there are constraints we have on the algorithmic side of things.
We may know theoretically how to do something, such as the Viterbi algorithm, which was essential to the creation of cellular phones: for the longest time we didn't know how to implement it, but there was indeed a calculable solution. Similar story with the fast Fourier transform: we knew the theory, but until Fourier transforms could be turned into something computational, we couldn't progress. And there are also other constraints upon us, not just the scientific and computer-sciency ones, but human ones. Can I get enough people to do what I need to do? Can I organize teams doing what I want to do? Ideally the largest team size you want for software is zero. Well, that's not very practical. The next best one is one, and then it kind of grows from there. And there are projects that are simply of a scale that you cannot conceive of them being done by a small group of people. I mean, why do any of the large projects we have a cadre of folks in them? It's because the footprint of these systems, and their enduring economic and social importance, is so great that you can't rely upon just an individual; that software must endure beyond them. And increasingly, as software moves into the interstitial spaces of the world, we have the legal issues, such as we see with digital rights management, but, I think more importantly and overarching, the ethical issues. We know how to build certain things, but should we build them? Is it the right thing for us to do in our humanity? So these are the collection of things that absolutely are the static and dynamic forces that weigh upon a software engineer, and that's why I can say we are engineers: because, much like the other kinds of engineers, we build systems that balance those forces, and we do so in a medium that is absolutely wonderful. So that's software engineering.
Now, I mentioned in our last call that there are certain ages of software engineering, and I think, looking backward, there are at least two identifiable major epochs. In the earliest days there was no software, because what we did was simply manage our machines, and the difference between the hardware and the software was completely indistinguishable: putting plugs in a plugboard, as happened with the ENIAC. Is that programming? Well, yes, but there's not really software there; it's something else. It wasn't until our machines came to a certain point, in the late '40s and early '50s, that we began to see a difference. Most of the software written at that time was bespoke (well, really all of it was), and virtually all of that software was tied to a particular machine. But the economics of software were such that: we love these machines, we'd like them to be faster, but gosh, we've put a lot of investment into the software itself. Is there a way to decouple these things? Consider the recent history of our world: the term "digital" was not coined until the late '40s, and the term "software" not until the '50s. So even the acknowledgement that software was an entity unto itself happened just about within my lifetime, which is frightening to think about.

Host: Yeah. Like 70, 80 years ago. Wow.

Grady Booch: Yeah, exactly. So this is an astonishingly young industry. If you were to take Carl Sagan's cosmic calendar and put software in it, we would be in the last few nanoseconds of that cosmic calendar. It would be less than a blink of an eye. But anyway, as software began to be decoupled from hardware itself, folks such as Grace Hopper and others began to realize that this is a thing we could treat as a business and an industry, as an institution unto itself. The earliest software, of course, was assembly language, which was very much tied to the machine. And, jumping ahead a little bit, as IBM came along in the '60s, recognizing that there was a way to establish a whole architecture of machines with a common instruction set, it became possible to preserve software investments and yet decouple them from the hardware, in a way that let me improve my hardware without throwing away the software. Once that realization happened, which was both an engineering decision, a business decision, and overall an economic decision, the floodgates opened up, and all of a sudden we had a lot more software that could be, and needed to be, written. This was the first golden age of software engineering, in which software was an industry unto itself. The essential problems that world faced were problems of complexity: complexity in that we were building things that were difficult to understand, that were trying to manipulate our machines in some cunning ways, but it was complexity that by today's standards was laughably simple. This was the equivalent of "hello world," but they were problems that were hard unto themselves. And because we were so coupled to the machines, the primary abstraction used in the first golden age of software engineering was algorithmic abstraction, because that's what our machines did.
Most of our machines were meant for mathematical kinds of operations, and so, as was done in Fortran, it was a matter of building software that could do formula translation. So that was the realm, and those were the problems faced, of the first generation.

Host: And this first generation, timeline-wise, where would you put it, roughly?

Grady Booch: Timewise, I'd put it in the late '40s to the late '70s, or thereabouts.

Grady Booch: And that's what dominated that time frame. So the figures you would see would be Ed Yourdon, Tom DeMarco, Larry Constantine. This is when entity-relationship ideas came about. And those ideas of that kind of abstraction poured over not just into software but also into the data side of things. This was an extraordinarily vibrant period of time in software engineering, in which we had the invention of flowcharts, for example, which were an aid to thinking about how to construct these kinds of systems. You saw a division of labor, where you had people who would analyze the system, people who would then program it, people who would keypunch the solutions, people who would operate the computers. And again, this was largely driven by economic reasons, because the cost of the machines was far greater than the cost of the humans involved with them. So a lot of what was happening was done to optimize the use of the machines, which were very, very rare resources. The lesson in this, as we'll see coming back in the next generations, is that these forces, much like with software engineering itself, have shaped the very industry of software; economics and the whole social context also influence them. So in the first generation the focus was largely upon mathematical needs and the automation of existing business processes. What you had happen is that businesses had literal floors of offices with people doing accounting and payroll and the like. And this was the low-hanging fruit, because now, all of a sudden, we could accelerate those processes, and actually improve their precision, by pulling the human out and automating them. So the vast amount of software written during that time was business and mathematical and numerical kinds of things.
Now, this is important, because while this was the focus, it was not the only kind of thing, because you saw activity in the periphery. Or shall I say: from the point of view of a programmer at that time, it looked as if the dominant places were the IBMs, the insurance companies, the banks, and the like. But there was a lot of work going on outside that world, in the defense industry as well. We saw people moving software and hardware into our machines of destruction, into our aircraft, into our missiles. We saw it moving into weather forecasting. We saw it moving into medical devices. So while the concentration was on the things the general public would see, a lot of stuff was happening around the edges as well. I would say that in the first golden age of software engineering there was this central push of algorithmic abstractions into business and numerical things, but the real innovation was happening on that fringe, in particular not in business cases but in defense cases, because Russia was the clear and present threat for us at the time, and there was a need to build distributed systems of a real-time nature; most of the systems I've talked about were not real time. And so we saw the rise of experimental machines such as Whirlwind. We saw the work in the Mother of All Demos, which was experimentation with various human-interface kinds of things, which was not the center of gravity of software development at the time but one of the things on the fringes. We saw researchers such as David Parnas coming on the scene, and C.A.R. Hoare, Dijkstra, and others beginning to look at the formalisms of these systems, treating software development as actually a formal mathematical activity.

Host: Grady just mentioned formal methods and formal mathematics in software engineering. Being able to verify that software does what it should has been a problem since the early days of software engineering. And this leads us nicely to our seasonal sponsor, Sonar. As we're living through what Grady might call the third golden age of software engineering, AI coding assistants generate code faster than we ever thought was possible. This rapid code generation has already created a massive new bottleneck at code review. We're all feeling it. All that new AI-generated code must be checked for security, reliability, and maintainability. A question that is tricky to answer, though: how do we get the speed of AI without inheriting a mountain of risk? Sonar, the makers of SonarQube, has a really clear way of framing this: vibe, then verify. The vibe part is about giving your teams the freedom to use these AI tools to innovate and build quickly. The verify part is the essential automated guardrail. It's the independent verification that checks all code, human- and AI-generated, against your quality and security standards. Helping developers and organizational leaders get the most out of AI while still keeping quality, security, and maintainability high is among the main themes of the upcoming Sonar Summit. It's not just a user conference. It's where devs, platform engineers, and engineering leaders are coming together to share practical strategies for this new era.

Host: I'm excited to share that I'll be speaking there as well. If you're trying to figure out how to adopt AI without sacrificing code quality, come join us at the Sonar Summit. To see the agenda and register for the event on March 3rd, head to sonarsource.com/pragmatic/sonarssummit. With this, let's get back to Grady, and treating software development as a formal mathematical activity.

Grady Booch: And you saw the rise of, as I said, distributed and real-time systems, primarily in the defense world. From Whirlwind was begat a system called SAGE, the Semi-Automatic Ground Environment, which came about during the '50s and '60s, and indeed the last one was decommissioned, I think, in the 1990s. This was based upon the threat of Russia. This is pre-missiles: Russia would send a fleet of bombers over the Arctic and invade the United States. Thus was born the DEW Line, the Distant Early Warning system across Canada, and all that data was then fed into a series of systems called SAGE. This system was so large that it consumed, according to some reports, easily 20 to 30 percent of all the software developers in the United States at the time.

Host: Wow, that's a lot of folks.

Grady Booch: But remember, back then there were maybe only a few tens of thousands of software developers. Still, this was the biggest project.

Host: Basically, the military was the biggest spender in research and in moving the industry forward, right? Because they had...

Grady Booch: Absolutely correct. They had to, because it was a clear and present threat, and so a lot of the innovation was happening in the defense world. As I think I passed this phrase on to you in the documentary I'm working on: there are two major influences in the history of computing. One is commerce; we've talked about the economics already. And the second is warfare. Thus I claim, and I think there's much defense for it, that much of modern computing is really woven upon the loom of sorrow, referring back to Jacquard's loom. So yeah, a lot of the things we take for granted today, like the internet, like microminiaturization, came from government funding in these cases. We owe a lot to the Cold War. Was this still the first golden age, or had we passed it? These are things happening in the first golden age. But what I'm pointing out is that there was sort of a center of mass to it, with lots of things happening on the edge that were driving software out from its primary roots. So let's recap. In the first golden age, the focus was primarily upon mathematical and business kinds of applications, and the primary means of decomposition was algorithmic abstraction. We looked at the world through processes and functions, not so much through data. But on the fringe, we had organizations and use cases that were pushing us beyond that simple place: use cases that demanded distribution, use cases that demanded the coupling of multiple machines, use cases that demanded real-time processing, and use cases that demanded human user interfaces.

Grady Booch: The interfaces we deal with today have their roots in Whirlwind and in SAGE. That was the first graphical user interface: a CRT tube. These kinds of things were born from that. And I think the lesson from this is that software is a wonderfully dynamic, fluid, fungible domain, but it's also one that tends to grow, because once we've built something, and we know how to build it, and we have patterns for doing so, all of a sudden we discover there are economically interesting ways we can apply it elsewhere. So this was the first generation, the first golden age of software engineering. But you could begin to see cracks in the facade in the late '70s and early '80s. The NATO conference on software engineering was one of the first to do this in a big public way. NATO was realizing: we, NATO, have a software problem. We have an insatiable demand for software, and yet our ability to produce it, with quality, at speed; we just don't know how to do it. And so this was the so-called software crisis, and people didn't know what to do about it.

Host: Can you help us understand, or take us back: what was the crisis about? What were people saying, like, oh my gosh, this is the problem?

Grady Booch: Yeah. The problem, to recap, was that software was clearly useful, there were economic incentives to use it, and yet the industry could not generate quality software at scale fast enough.

Host: I see. So it was expensive, slow, and not good.

Grady Booch: There's a fourth one, which was that the demand was so great. It's like: wow, we want more of this stuff; give us more software. So those four things together put us in the sense of crisis. Notice, subtly, that it's not the same kind of crisis we have today, where we worry about surveillance, we worry about crashes, that kind of thing. So the nature of the problems has changed, and it does in every golden age.

Host: It's fascinating that this crisis existed, you know, looking back from our current reality.

Grady Booch: Yes, it's a very different world. But that was the clear and present danger at the time, and it was an exciting, vibrant time, because there was so much that could be done, and software being such a fungible, elastic, fluid medium meant that we were primarily limited just by our imagination. You add to this microminiaturization. Why did integrated circuits come about? Why did Fairchild come about and establish Silicon Valley as the basis for it? It's because of the transistor. Who was the first customer of Fairchild? It was the Air Force, primarily for the Minuteman missile. In fact, most of the transistors being made in Silicon Valley in the earliest days went to our Cold War programs. But that was great, because it established the economic basis for the whole infrastructure, making it possible to start doing these things at scale. And of course that begat integrated circuits, which begat personal computers, and so on. So here we are now in the late '70s, and the software crisis was quite clear. The US government in particular, to focus on one story, recognized that they had the problem of Babel: there were so many programming languages in place. By their count, there were at least 14,000 different programming languages used throughout military systems, back then, when software was so much smaller than today.

Host: Absolutely. It's incredible.

Grady Booch: And languages like Jovial were very popular; Jovial was kind of a play on words on COBOL and the like. We had the rise of ALGOL, which was not a military language, but the formal forces of Hoare and Dijkstra and Wirth led to this discipline of applying mathematical rigor to our languages, and so the idea of formal language research was born. You had this wonderful confluence of resources. By the late '70s, the government, recognizing that we had a problem, funded the Ada project, which at the time was called the joint program working group or something like that, and which was an attempt to reduce the number of languages in existence down to one language to rule them all. Now, what was interesting is that at this time there was a lot of interesting research feeding into it: the work on abstract data types, the ideas of information hiding from Dave Parnas, separation of concerns, the ideas that today we would call clean programming or clean coding, and the ideas of literate programming from Knuth. These kinds of things were bubbling away in the late '70s and early '80s, and Ada was a bit of a push to make that happen on a big scale. No other industry or company could really do it, because they didn't have the exposure or weight or gravitas or economic powerhouse that the US military at the time did. At the same time, you had some interesting work going on in laboratories like Bell Labs, which had begat C and Unix and the like, which were becoming incredibly important.
But there was this crazy researcher at the time by the name of Bjarne Stroustrup, who was saying, wow, this is kind of cool, but hey, let's take some of these ideas from Simula (I should mention Simula, which was the first object-oriented language) and see if we can apply them to C, because, you know, C's got problems. So what was happening in the background, in academia and on these fringes, was the realization that we needed new kinds of abstractions, and it wasn't just algorithmic abstractions; it was object abstractions. Turns out there's an interesting history behind that dichotomy. There is a discourse in Plato about that very kind of split, a dialogue between two people who are talking about how to look at the world, and one of them says we should look at the world in terms of its processes.

Host: This is the ancient Greek philosopher from, like, before Christ. That guy, Plato, brought up some parallel ideas.

Grady Booch: He brought up the idea of the dichotomy of looking at the world through two lenses. The very Plato whose work has now been banned in certain US universities because he was so radical, right? But in one of these dialogues, he observed that one of the speakers said, we have to look at the world through processes, how things flow. And the other one said, no, no, no, we have to look at it through things. And this is where the idea of atoms came about; the very term "atom" came from the Greek. So the idea of looking at the world through these two basic abstractions is not a new one. But people like Parnas and others, and the designers of Simula, said, "Wait a minute, we can apply these ideas to software itself, and we can look at the world not just through algorithmic abstractions, but through object abstractions." Now, there's another factor that came into play, and this is where the inventor of Fortran, John Backus, comes onto the scene. After Fortran (he did this at IBM, of course; he was made a Fellow), he went off and said, this was fun, but I want to do something else. Let's look at a different way of programming. And it was the idea of functional programming, which was looking at the world through mathematical functions, stateless kinds of things. So there was work here (we're talking the '70s now) in which the ideas of functional programming came to be. I had a chance to interview him a few months before he passed away, and I asked him, you know, why did functional programming never make the big time? And his answer was: because functional programming makes it easy to do hard things, but it makes it astonishingly impossible to do easy things.
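The three lenses discussed here (algorithmic or process, object, and functional) can be sketched side by side. A minimal, purely illustrative Python example, with names of my own choosing rather than anything from the conversation: the same running balance modeled three ways.

```python
# Algorithmic (process) view: data is passive, procedures act on it.
def deposit(balance, amount):
    return balance + amount

# Object view: state and the operations on it are bundled together,
# in the spirit of Simula and its descendants.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount
        return self.balance

# Functional view: a stateless transformation over values,
# in the spirit of John Backus's later work.
from functools import reduce

def balance_after(deposits):
    return reduce(lambda acc, amt: acc + amt, deposits, 0)
```

All three compute the same result; what differs is where the state lives and who is allowed to touch it, which is exactly the shift in abstraction being described.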

Host: Easy things.

Grady Booch: Yeah. So functional programming has a role, there's no doubt, and I think its foundations were laid at the time by John. Even today it has a role; it has a niche, but it hasn't become dominant, because of that very same edict. So, at any rate, here we are at the end of the first golden age of software engineering, moving into the second. What were the forces that led us into that? First off, it was growing complexity.

Host: Grady just mentioned how growing complexity was a force pushing the industry into a new golden age of software engineering. Fast forward to today, and software complexity keeps growing and growing, in part thanks to AI generating a lot more code, a lot faster. And this brings us nicely to our season sponsor, WorkOS. WorkOS provides the primitives that make it easy to make your app enterprise-ready. But under the hood, there's so much complexity that happens. I know this because I recently took part in an engineering planning meeting at WorkOS called the Hilltop review, where an engineer walks through their proposed implementation. In this review, we discussed how to implement authentication for customers when their users authenticate across several platforms using WorkOS. For example, what should happen if a user logs out on the mobile version? Should they stay logged in on the web version? What about the other way around? We covered ten-plus similar questions. The answer, as I learned, comes down to: it depends on what the customer using WorkOS wants. The WorkOS team walks through edge cases I had no idea existed, and then turns those decisions into configurable behavior in the admin panel, so customers choose the right trade-offs for their product and their users without having to build and maintain all of this logic themselves. But this is not always enough. When customers have unique needs, the WorkOS engineering team often works with them directly to figure out how to solve their very specific problem. They then generalize these solutions so they become part of the platform for everyone. After this planning session, I have a newfound appreciation for just how much complexity WorkOS absorbs so product and engineering teams don't have to. The same planning goes into all WorkOS products, and customers get all the benefit. Learn more at workos.com.

Host: And with this, let's get back to Grady and how the second golden age of software engineering came about.

Grady Booch: As I mentioned: growing complexity, the difficulty of building software fast enough and building big enough software, and I would add to this the things that came about in the defense world, which were the desire, and an obvious value, in building systems in a distributed kind of way. Now the personal computer comes onto the scene, because what was happening around that same time is that the fruits of microminiaturization came to be, and they led us to the personal computer.

Host: This was because of transistors, right? And the breakthroughs in electronics?

Grady Booch: Precisely. And this too was a vibrant time, because you had hobbyists who could put these things together and build them from scratch, and there were no personal computers at the time.

Host: Was this the first time in the history of computing that hobbyists could actually, meaningfully, get their hands on it?

Grady Booch: I think at scale, yes. You had hobbyists such as Pascal back in his day, who decided that his father was so tediously working over his accounting that Pascal built a little machine for him. So there was hobbyist work at that time, no doubt about it. But in terms of scale: remember, post World War II you had, especially in the United States, more disposable income, which made it possible for hobbyists to actually do these kinds of things. And then, lastly, you had the military, who was producing integrated circuits and transistors. All of a sudden, especially in Silicon Valley, you could go down to Fry's, or the Fry's equivalent (this was before Fry's), and buy these things. They were just there, and so it enabled people to play, and play is an important part in the history of software. So you had this wonderful thing happening in, I'd say, the late '70s and early '80s, which was a vibrant time of experimentation. There's a delightful book called What the Dormouse Said, which posits that the rise of the personal computer was tied together with the rise of the hippie counterculture: this drive toward power to the people, make love not war, these kinds of things. This is the era of Stewart Brand, the era of the Merry Pranksters and the like, and that led to things like The WELL, which was the very first social network, what today we'd call a bulletin board, which grew up in Silicon Valley. Quick aside: Stewart, just a lovely fellow.
He was actually mentioned as one of the Merry Pranksters in the book about them. He's still on the scene, and he's just released a wonderful book called Maintenance, part one, which looks at the problems of systems (software being one of them) and the problems of maintenance associated with them. Anyway, here we are, late '70s, early '80s, also a very vibrant time, because there was a lot of cool stuff that could be done.

Host: Yeah. And it's Stripe Press publishing this, actually. So I'll leave a link in the show notes below. It looks like a really nice book, and Stripe Press is known to produce excellent quality, so I'm actually excited to look into this.

Grady Booch: Yeah, it's a great, great book. So, the realization was that we now had the beginnings of theories of looking at the world not through processes, but through objects and classes. We had the demand pull of distributed systems, the demand pull from trying to build more and more complex systems. And so there was this perfect storm that really launched that second golden age. And that's frankly where I came onto the scene. I was just in a lucky place at a lucky time. I was at the time working at Vandenberg Air Force Base on missile systems and space systems. There was an envisioned military space shuttle, and I was part of that program as well. It was great. It was a fun place to be, because we'd have launches like twice a week. It was pretty cool. You'd run up and say, "Wow, look at that." It was pretty wild. At the building in which I worked, I had to evacuate whenever there was a launch, because if it was a Titan launch, the Titan launch pad was really close to us, and if it had blown up on the launch pad, it would have blown up our building, which would have been really annoying. So, yeah. Good stuff.

Grady Booch: And one other quick story: you could always tell when it was the secret launches going off, the secret spy satellites, because there were two clear indications. The first is all the hotels would fill up, because you'd have the contractors come in. And second, the day of the launch, the highway nearby where you could see the launch would fill up with people to watch it. So there were no secrets in that world. So here we are, late '80s. The world was poised for a new way of looking at the world, and that was object-oriented programming and object-oriented design. How does that differ from the first generation? It differs in the sense that we approach the world at a different level of abstraction. Rather than just looking at the data, which was this raw lake out here, and the algorithms we have to manipulate it, we bring them together into one place. We combined the objects and the processes together, and it worked. My gosh, it enabled us to do things we could not do before. It was the foundation for a lot of systems. Go out to the Computer History Museum and look at the software for MacWrite and MacPaint. It was written in Object Pascal, one of the early object-oriented programming languages. It's one of the most beautiful pieces of software I've seen. It's well structured, it's well organized, and in fact, many of the design decisions made in it you still see persist in systems such as Photoshop today. They still exist, which is an interesting story unto itself about the lifetime of software. So looking at software through the lens of objects proved to be very effective, because it allowed us to attack the software complexity problem in a new and novel way. And so, much like the first golden age, this was also a very vibrant time,
in, I would say, the '80s and '90s, where you had people such as the three amigos (me, Ivar Jacobson, and James Rumbaugh); you had Peter Coad; Larry Constantine was back on the scene; Ed Yourdon was back on the scene; a lot of folks who were saying, "Let's look at software not from processes but from objects," and thinking about it. Now, this was great. We made some mistakes. There was an overemphasis upon the idea of inheritance. We thought this would be the greatest thing; that was kind of wrong. But the idea of looking at the world from classes and objects was kind of built in. And so what began to happen (this was also an economic thing) is that as people started building these things, all of a sudden we saw the rise of platforms. Now, there was precedence for this, because in the first golden age of software, people started building the same kinds of things over and over again. The idea of collecting processes, collecting algorithms that were commonly used (how do I manipulate a hard drive or a drum? How do I write things to a teletype? How do I put things on a screen? How do I sort?) meant these kinds of algorithms could be codified, and so the first ideas of, if you will, packaging them up into reusable things came to be. This is when, at least in the world of business systems, IBM SHARE came to be. SHARE was a customer-organized group that literally shared software among one another.

Host: And this was in the first golden age, right?

Grady Booch: This was in the first golden age, right.

Host: So this was, looking back, a more primitive way of packaging related things, be that sorting algorithms or, as you said, IBM distributing functions and things like that?

Grady Booch: It wasn't IBM doing it. It was completely customer-driven. IBM supported it, but it wasn't done by them.

Grady Booch: Yeah. So the point is, this was the earliest open-source software. The ideas of open source existed. And remember, too, the economics of software and hardware back then: software was pretty much given away free by the main manufacturers. IBM did not charge for software until the late '60s and '70s, when they realized, my gosh, we can make money, and they decoupled software and hardware and started charging you for it. But in the earliest days, there was this vibrant community of people who could say, you know, gosh, I've written this thing. Go ahead and use it. That's fine. No problem. So open source was alive at that time. And the same thing began to happen in the second golden age, in which, much like the rise of operating systems, we saw the rise of open-source software; the same phenomenon applied in the second golden age, but now at a new level of abstraction. Oh, I want to have a new library for writing to these newfangled CRTs. Here it is. No competitive value in me having it, but by gosh, it enables me to build some really cool things. You can have it, too. So open source laid its roots, took its ideas from the first golden age, and applied itself in the second golden age, but at a different level of abstraction. Lurking in the background, speaking of economics, was the rise of platforms, because now, all of a sudden, these libraries were becoming bigger and bigger. And as we moved to distributed systems, there was the rise of what back then we called service-oriented architectures. We had HTML and the like; we could pass links back and forth, but there were some crazy folks who said, wouldn't it be cool if we could do things like share images? And that was one of the things that Netscape allowed: they produced this addition to HTML that allowed you to put in images. Wouldn't it be cool if we could pass messages back and forth via HTML?
So all of a sudden the internet, via HTML and HTTP protocols, became a medium at a higher level of abstraction for passing information and processes around. But there was a need to package it up. Thus were born service-oriented architectures, SOAP, service-oriented protocols, all the predecessors to what we have today. And this was laying the foundations, in the second golden age, for the beginnings of the platform era, which is what Bezos and others have really brought us to. Jumping ahead to our current age, you have these islands formed by all sorts of APIs around them. But it was in the second golden age that they were being born.

Host: And when you say platforms, what do you mean by the rise of platforms? How do you think of a platform?

Grady Booch: AWS would be a good one. Salesforce would be another one, in which I have these economically interesting castles defended by the moat around them, and those organizations, like Salesforce, give you access across the moat for a slight fee. Well, not even a slight fee.

Host: Yes. Not a slight fee.

Grady Booch: Yeah, under the assumption that, as a Salesforce, the cost of you doing it yourself is so high that it makes sense for you to buy from us. So during the second golden age, we saw the rise of those kinds of businesses, because the cost of certain kinds of software was sufficiently high, and the complexity was certainly high, that it allowed the business and the industry of these kinds of SaaS companies. So, let's look at the late '90s, early 2000s. Also a vibrant time, much like the first golden age. We had the growth of the internet. When did you get your first email address?

Host: My first email address I got sometime in maybe 2005 or 2006. It was still very fresh when Gmail launched. But when did you get your first email address?

Grady Booch: 1987, when it was the ARPANET. And in fact, at that time, we had a little book, probably a hundred pages long, that listed the email address of everybody in the world. It was pretty cool. You can find them online, and you can see my email there. It doesn't work anymore, because it doesn't have the same top-level domain kinds of things. So I've been on email since before email was cool. And as you saw these kinds of structures like email becoming a commodity thing in the second golden age of software, this is when software began to filter into the interstitial spaces of civilization, and it became not just this one thing fueling businesses or certain domains. It became part of the very fabric of civilization. This was important. And so now, the things we worried about in the first golden age, we'd solved them for the most part. They were part of the very atmosphere. We didn't think about algorithms much, because, gosh, everybody kind of knows about them. And this is as technology should be. The best technology evaporates and disappears and becomes part of the air that we breathe. And that's what's happening now, but it was in the second golden age that the foundations of where we are today were laid. So what happened around 2000 or so? Well, by that time the internet was big, lots of businesses were being built, but there was the crash around that time, because economically it just didn't make sense. So there was this great pullback. Also happening was the whole Y2K situation, where a lot of effort was put into solving that problem. People in retrospect say, well, gosh, we didn't need to worry about that. But being in the middle of it, you realize, oh no, there was a lot of heroic work. And if that hadn't been done, then lots of problems would have happened.
So this is a good example of how the best technology you simply don't see. A lot of effort and a lot of money was spent to avert a problem that simply did not manifest itself. That's a great thing.
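For readers who didn't live through it, the underlying Y2K bug class was mundane: dates stored with two-digit years produce nonsense arithmetic once the century rolls over. A minimal, purely illustrative Python sketch (the function names and the windowing cutoff are my own, not from the conversation):

```python
def years_between_2digit(start_yy, end_yy):
    # The classic Y2K bug: with two-digit years, intervals that
    # cross the century boundary come out wildly wrong.
    return end_yy - start_yy

def years_between_windowed(start_yy, end_yy):
    # One common remediation was "windowing": interpret 00-49 as
    # 20xx and 50-99 as 19xx (a stopgap, not a four-digit fix).
    def expand(yy):
        return 2000 + yy if yy < 50 else 1900 + yy
    return expand(end_yy) - expand(start_yy)

# A loan opened in '95, checked in '00:
print(years_between_2digit(95, 0))    # -95, nonsense
print(years_between_windowed(95, 0))  # 5
```

Multiply that one-line bug by every date field in decades of payroll, billing, and embedded systems, and the scale of the remediation effort Grady describes becomes clearer.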

Host: Grady just mentioned how the best technology is one that you simply do not see. This is an underrated observation, and it's true for most mission-critical software. When it works, it's invisible. It's only when it breaks that users notice it's there. There is, however, a problem with building reliable, invisible software. There's often a tension between moving fast with few guardrails, which can make things break, or putting in more guardrails for stability, but then slowing down your shipping speed. Well, there's a third way, which leads us nicely to our presenting sponsor, Statsig. Statsig built a unified platform that enables the best of both cultures: continuous shipping and experimentation. Feature flags let you ship continuously with confidence. Roll out to 10% of users, catch issues early, roll back instantly if needed. Built-in experimentation means every rollout automatically becomes a learning opportunity, with proper statistical analysis showing you exactly how features impact your metrics. And because it's all in one platform with the same product data and analytics, teams across your organization can collaborate and make data-driven decisions. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. They ship over 600 features behind feature flags, moving fast while protecting against metric regressions. Microsoft, Atlassian, and Brex use Statsig for the same reason: it's the infrastructure that enables both speed and reliability at scale. They have a generous free tier to get started, and pro pricing for teams starts at $150 per month. To learn more and get a 30-day enterprise trial, go to statsig.com/pragmatic. And with this, let's get back to the Y2K event that Grady was talking about. Yeah, I remember how stressful that time was, leading up to the year 2000.
I think some movies even came out predicting how the world would collapse. There was this fear of, will all these systems crash, and it started to become pretty intense in the few months leading up. I was, you know, a kid at that time, but when the year 2000 came, that was probably the most stressful new year, because you weren't sure. You were hoping, you know, and then nothing happened, and you're like, okay, it was just a hoax. So anyone who went through it kind of learned not to trust these predictions. But you're right: there was so much work to make sure that that overflow did not hit at the wrong place.

Grady Booch: Yeah. So here we are. Mentally put yourself in the first decade of the 2000s. It's a fun place, because, well, yeah, there was the crash, but still, so much fun stuff to do, so much great software to be written. We were still only limited, largely, by our imagination. Now I'm going to pause for a moment and backfill with some history that I hadn't mentioned. We've been talking about software in general. There was a parallel history going on in AI, in which we also saw some generations. The first golden age of AI was in the '40s and '50s, where you had people such as Herbert Simon and Newell, and Minsky in particular. The focus there was upon, gosh, we could build intelligence artificially using symbolic methods. So this was the first great age of AI, and the ideas of neural networks were tried. The thing they built was the SNARC, which was the first vacuum-tube artificial neuron. It took like five vacuum tubes to make a single neuron. And there was a report coming out of the UK at the time that said, we're spending a lot of money here, but by gosh, it doesn't work. And so the first golden age ended when they realized you can't really build anything interesting, and furthermore, neural networks are a dead end.
Largely a dead end because we didn't have the computational power to run them, and we didn't have the algorithmic concepts, the abstractions, to know what to do with them once we had them at scale. The second golden age of AI was really in the '80s, when you had people like Feigenbaum come along and say, hey, there's another way of looking at it, and it's looking at it through rules. Thus was born that idea of machine learning; things like MYCIN and the like came upon the scene. But there, too, we saw the AI winter come about. By the way, there was an interesting rise in hardware at the time. The Lisp machines and the thinking machines were all built during this time, a vibrant period for computer architectures. So you see these feeding into one another, but ultimately it failed, because the systems didn't scale up once you got beyond a few hundred if-then statements. We simply didn't have a means of building inference engines that could do anything with them. So here we are in an exciting time again, the first decade of the 2000s. AI was kind of back in the back rooms. We still had a lot of cool things to do, and more and more distributed kinds of systems. Plus, fueling that was the fact that software was now in the hands of individuals through personal computers, so the demand for software was even greater. I would claim, and this may be a little controversial, that we are in the third golden age of software engineering, but it actually started around the turn of the millennium. It's not now; it's then. And the first indication of its rise is that we saw a new rise in levels of abstraction, from individual components of our software programs to whole libraries and packages that were part of our platform. Oh, I need to do messaging. Well, I'm not going to do that on my own machine. I can go out to this library, which does messaging. I need to manage this whole chunk of data. Let's use Hadoop or something like that.
It wasn't around at the time, but the seeds were there, growing. So we again saw a growth in levels of abstraction, from just simple programs to subcomponents of systems, and that was the next great shift that happened, and our methodologies and our languages and all that began to follow. So the third golden age we've been in for several years already. And, not to get ahead of ourselves, what's happening with AI assistants and the like in the coding space is in many ways a reaction to the growth of those kinds of things, because we want to accelerate their use. We have so many of those kinds of libraries out there, and not enough people know about them. We want to accelerate the use of them by having aids that help us do so. So that's the context in which I put AI agents such as Cursor and ChatGPT: they are, in a way, a follow-on to the forces that have already led us to this third golden age. So we are now in a very vibrant time, but the problems are different from those of the first and second generations. What are the problems now? First, we have so much software; how do we manage it? And we have to deal with issues of safety and security. Can somebody sneak in something that I can't trust? How do I defend myself against that? It is so easy to inject something into the software supply chain. How do I prevent the bad guys from putting stuff inside there? How do I defend against it? The whole history behind Stuxnet and the like is a good one to show espionage in software. And so, all of a sudden, the human issues that we were insulated from for much of the history of software, because software has become so much a part of civilization, these human issues became front and center, clear and present, for our world. And the other element is the economic issues. We now had companies that were too big to fail. What would happen if a Microsoft were to go under? What would happen if a Google were to go under?
They're so economically important to the world that if they sneeze, some part of the world catches a cold. And so the problems we have now, in this third golden age of software, are different from those of the first and second generations, but equally as exciting. And then, last, we have the ethical issues. Because I can build this kind of software, it is possible for me to track where you are at every moment of the day. I can do that. Should I do that? Some will say yes, I should, because it's a good thing for humanity. Others will say, not so sure about that.

Host: So, I like how you laid it all out. It's very interesting, especially through both your experience and also sharing the history that I think a lot of us don't really reflect on: how it all started, and, honestly, how young it is. I mean, 70 or 80 years can be long depending on how old you are, but it's barely a generation.

Grady Booch: It's a couple of generations. Yeah.

Host: But one thing that I'm seeing across the industry right now, which feels like it contradicts this setup for a lot of software engineers today,

Host: is that there seems to be an existential dread that is accelerating, especially since the winter break. Before the winter break, these AI LLMs were pretty good for autocomplete; sometimes they could generate this or that. And over the winter break (I'm not sure if you played with some of the new models; I have), they started to generate really good code, to the point that I'm starting to trust them. And,

Grady Booch: yes,

Host: as far as the history of software goes, my understanding is that software developers have written code, and it's a hard thing to do. For a lot of us, it takes years to learn, and to be excellent at it even longer. And so a lot of us are starting to have this real existential crisis of, okay, well, the machine can write really, really good code. First of all, like, WTF, and how did this happen over the last few months? And then the question is, what next? It feels like it could shake the profession, because I feel coding has been so tightly coupled to software engineering, and now it might not be. So, taking a breath first and looking through both the history and your experience, what is your take on what's happening right now?

Grady Booch: Well, let me say that this is not the first existential crisis developers have faced.

Host: Tell us more.

Grady Booch: They have faced the same kind of existential crisis in the first and second generations. That's why I look at this and say, you know, this too will pass. When I talk to people who are concerned about it: don't worry, focus upon the fundamentals, because those skills are never going to go away. I had a chance to meet Grace Hopper. She was just delightful, you know, a fireplug of a woman. Just amazing. For your readers: go Google Grace Hopper and David Letterman. She appeared on the David Letterman show, and you'll get a sense of her personality.

Host: We're going to link it in the show notes below.

Grady Booch: She, of course, is the one who recognized, here we are in the '50s, that it was possible to separate our software from our hardware. This was threatening to those who were building the early machines, because they said, you know, gosh, you could never build anything efficient, because you have to be tied so closely to the machines. And many in that field wrote about it and expressed concerns that, you know, this is going to destroy what we do. And it should have. So we had here the beginnings of the first compilers. The same thing happened with the invention of Fortran, where people were saying, gosh, you know, we can write tight assembly language better than anybody else, better than any machine can do. But that was proved wrong when we moved up a level of abstraction, from assembly language to the higher-order programming languages. And so you had a set of people who were similarly concerned and distressed by the changes in levels of abstraction, because they recognized that the skills they had at that time were going to go away, and they were going to be replaced by the very thing they themselves created. Now, you didn't see as much of a crisis, because there weren't that many of us back in that time frame. We're talking a few thousands of people. Now we're talking millions of people, who ask, quite legitimately, the question: what does it mean for me? So I've had, as I'm sure you have had, a number of, especially, young developers come up to me and say, "Grady, what should I do? Am I choosing the wrong field? Should I do something different?" And I assure them that this is actually an exciting time to be in software, for the following reasons.
We are moving up a level of abstraction, much like what happened in the rise from machine language to assembly language, from assembly language to higher-order programming languages, from higher-order programming languages to libraries. The same kind of thing happened, and we're seeing the same change in levels of abstraction now. And I, as a software developer, don't have to worry about those details. So I view it as something that is extraordinarily freeing from the tedium I had to do, but the fundamentals still remain, as long as I am choosing to build software that endures, meaning that I'm not going to build it and throw it away. If you're going to throw it away, do what you want. That's great. And I see a lot of people using these agents for that very purpose. That's wonderful. You're going to go off and automate things you could not have afforded to do before. And if you're a single user for it, then more power to you. This is the hobbyist side of software, if you will, much like we saw in the earliest days of personal computers, where people would build these things. Great stuff. Great ideas will come from it.

Host: I like the comparison. Yes.

Grady Booch: Yeah. Great ideas will come from it. You know, people will build skills. We'll do things we could not have done before. We'll automate things that were economically not possible. They're not necessarily going to endure, but still we will have made a valuable impact.

Host: And I guess, just like in the first era, when people could buy a personal computer, you will have people come into the industry who have honestly nothing to do with it, and they might bring amazing ideas, right? Like back then, you know, a school teacher might have bought a personal computer. Today, I just talked to my neighbor upstairs, an accountant. She has instructed ChatGPT to build some Apps Script to help their accounting teams process things a bit better, because she knows how that thing works. Nothing to do with software, but now creating her own personal throwaway software, by the way.

Grady Booch: Yes, absolutely. The same parallels, and I celebrate that. I encourage it. I think it's the most wonderful thing, which is why we are in this vibrant period. In the early days of the personal computer, the very same thing happened. You found artists drawn to, especially, the PC and the Amiga at the time. You found gamers who realized, I've got a new medium for expression that I did not have before, and that's why it was a very vibrant time. The same thing is happening. And so much of the lamenting of "oh gosh, we have an existential crisis" comes from those who are narrowly focused upon their industry, not realizing that what's happening here is actually expanding the industry. We're going to see more software written by people who are not professionals. And I think that's the greatest thing around, because now we have software much like in the counterculture era of the personal computer. The same thing is happening today as well.

Host: I like what you're saying. However...

Host: [laughter] However, one thing I also pay attention to, one person I pay attention to, is Dario Amodei, the CEO of Anthropic. And the reason I pay attention to him is that I tend not to pay attention to CEOs, but he actually said something interesting about a year ago. He said he thinks most code will be generated by AI, about 90% of it, maybe, in a year, and then more. And we thought, that's silly. And then he was right, and the code was generated. And now he's said another thing that sounded interesting, but this next one sounds scary. He said, I quote, "software engineering will be automatable in 12 months." Now, this sounds a lot scarier, for reasons we know: coding is a subset of software engineering. But he said this. What is your take on it? You've had a strong response already.

Grady Booch: I have one or two things to say about it. So, first off, I use Claude. I use Anthropic's work. It's my go-to system. I've been using it for problems with JavaScript, with Swift, with PHP of all things, and Python. So I use it, and it's been a great thing for me, primarily because, you know, there are certain libraries I want to use, Google search sucks, documentation for these things sucks, and so I can use these agents to accelerate my understanding of them. But remember also, I have a foundation of at least one or two years of experience in these spaces (okay, a few decades) where I sort of understand the fundamentals. And that's why I said earlier that the fundamentals are not going to go away. And this is true in every engineering discipline: the fundamentals are not going to disappear; the tools we apply will change. So, Dario, man, I respect what you're saying, but recognize also that Dario has a different point of view than I do. He's leading a company that needs to make money, and he needs to speak to his stakeholders. So outrageous statements like that will be said. I think he said these kinds of things at Davos, if I'm not mistaken.

Host: It was, yes.

Grady Booch: And I'd say, politely, well, I'll use a scientific term in how I would characterize what Dario said and put it in context: it's utter... that's the technical term. Because I think he's profoundly wrong, and I think he's wrong for a number of reasons. First, I accept his point that it's going to accelerate some things. Is it going to eliminate software engineering? No. I think he has a fundamental misunderstanding as to what software engineering is. Go back to what I said at the beginning: software engineers are the engineers who balance these forces. We use code as one of our mechanisms, but it's not the only thing that drives us. None of the things that he or any of his colleagues are talking about attend to any of those decision problems that a software engineer has to deal with. None of those do we see within the realm of automation. His work is primarily focused upon automation at the lowest levels, which I would put akin to what was happening with compilers in those days. That's why I say it's another level of abstraction. Fear not, O developers: your tools are changing, but your problems are not. There's another reason why I push back on what he's saying. And that is, if you look at things like Cursor and the like, they have mostly been trained upon a set of problems that we have seen solved over and over again. And that's okay. Much like I said about the first generation, the first golden age, we had a certain set of problems, and so libraries were built around them. The same thing is happening here. If I need to build a UI on top of CRUD, some web-centric kind of thing, I can do it. And much like your friend, more power to them. They can do it themselves, because the power is there to do so. They're probably not going to build a business around it; some small percent of them might do so.
But it's enabled them to do things they could not do before, because they're now at a higher level of abstraction. What Dario neglects, and I'll use a bit of a paraphrase from Shakespeare: there are more things in computing, Dario, than are dreamt of in your philosophy. The world of computing is far larger than web-centric systems at scale. We see many of these things applied today on web-centric systems, and I think that's great and wonderful, but it means there's still a lot of stuff out there that hasn't yet been automated. So we keep pushing these fringes away. I told you those stories at the beginning because history is repeating itself, or some will say history is rhyming again. The same kinds of phenomena are applying today, just at a different level of abstraction. So that's the first point: the world of software is bigger than what he's looking at. It's bigger than just software-intensive systems. And then second, you know, if you look at the kinds of systems that most of these agents deal with, they are in effect automating patterns that we see over and over again, patterns they have been trained upon. Patterns themselves are new abstractions: not just single algorithms or single objects, but societies of objects and algorithms that work together. These agents are great at automating the generation of patterns. I want to do, you know, this kind of thing, and I can tell you in English, because that's how I describe the pattern. So anyway, that's why I think he's wrong. More power to him. But, you know, I think this is an exciting time, more than something to worry about existentially. Let me offer another story with regard to how we see a shift in levels of abstraction. English is a very imprecise language, full of ambiguity and nuance and the like. One would wonder, how could I ever make that a useful language? And the answer is: we already do this as software engineers.
I go to somebody and say, hey, I want my system to do this. It kind of looks like this, and I give them some examples. I do that already. And then somebody goes and turns that into code. We've moved up a level of abstraction to say, I'd like it to do this. I'll give you a concrete example. I'm working with a library I'd never touched before. It's the JavaScript D3.js library, which allows me to do some really fascinating visualizations. I go off and search for a site called Victorian Engineering Connections. It's just this lovely little site where a gentleman, Andrew, did this for a museum, and you can, you know, put in a name like George Boole, and you see his name, you find things about him, you find his social network around him, and you can go touch it and explore. It's very, very cool. And I said, "I want that kind of thing, but my gosh, I don't know how to do that. So what can I do?" He gave me his code. I realized it uses the D3.js library. I knew nothing about the D3.js library. So I said to Cursor, "Go build me the simplest one possible. Go do it out of, you know, five nodes, and show me." So I could then study the code. And then I could say, "Well, what I really wanted to do is this. Go make the nodes look like this, depending upon their kind." So, just like I would do with a human, I was expressing my needs in the English language, and now, all of a sudden, I didn't need to labor to turn that into reality. I could simply have a conversation with my tool to help me do that. So it reduced the distance between what I wanted and what it could do. And I think that's great. That's a breakthrough. But remember, as I said to Dario, this only works in those circumstances where I'm doing something that people have done hundreds and hundreds of times before. I could have learned it on my own. As Feynman would have said, you know, go do it yourself, because that's the only way you're going to understand it.
And my reaction is, that's great, but there's so much in the world I'm curious about; I can't understand it all. So let's decide what I want to do, then go do it for me. That's why I say these kinds of tools are another shift in the levels of abstraction: they're reducing the distance from what I'm saying in my English language to the programming language. The last thing I'll say is, you know, what do we call a language that is precise and expressive enough to be able to build executable artifacts? We call them programming languages. And it just so happens that English is a good enough programming language, much like COBOL was, in that if I give it those phrases in a domain that is well enough structured, it allows me to have good enough solutions that I, who know the fundamentals, can begin nudging and cleaning up the pieces. That's why the fundamentals are so important.

Host: And speaking of history rhyming, one thing that happened in both the first age and the second golden age, as we jumped abstractions, or every time we had a new abstraction, is that some skills became obsolete, and then there was a demand for new skills. For example, when we moved up from assembly, the skill of knowing the instruction set of a certain board and how to optimize for it became obsolete in favor of thinking at a higher level. In this jump right now, where I think it's safe to say we're going from writing code ourselves to the computer doing it pretty well while we check it and tweak it, what do you think will become obsolete, and what will become more important for software professionals?

Grady Booch: Great question. The software delivery pipeline is far more complex than it should be. My gosh, just getting something running is hard if you have no pipeline. If you're within a company such as a Google or a Stripe or whatever, you have a huge infrastructure around you.

Host: A custom one.

Grady Booch: Yes, a custom one. And so there is low-hanging fruit for the automation of those things. I mean, I don't need a human to fill in the edges of those kinds of things. By the way, I'm talking about infrastructure as software, in effect.

Host: [clears throat]

Grady Booch: It's not just, you know, raw lines of code. So this is low-hanging fruit, where we could begin seeing these agents where I say, "Hey, I want you to go, gosh, I don't know, spin up something for this part of the world. I don't want to write the code for that stuff, because it's complex and messy. I'd rather use an agent that helps me do it." So there's a case where I think you're going to have the loss of jobs in those places where it's messy and complex, because the automation has clear economic value and, frankly, value in terms of security. That's a place where people are going to need to reskill. In the building of simple applications and the like, well, I think people who had skills in saying "I want to build this thing for iOS" or whatever, they're going to lose some jobs, because frankly, people can do it just by prompting. And that's great, that's fine, because we've enabled a whole other generation of folks to do things that professionals did in the past, exactly what happened in the era of PCs themselves. What should these people do? Move up a level of abstraction. Start worrying about systems. So the shift now, I think, is from dealing with programs and apps to dealing with systems themselves, and that's where the new skill set should come in. If you have the skills of knowing how to manage complexity at scale, if you know as a software engineer how to deal with all of these multiple forces, which are human as well as technical, your job's not going away. If anything, there will be even greater demand for what you're doing, because those human skills are so rare and delicate.

Host: So, you mentioned the importance of having strong foundations, and you've previously said, I'm actually quoting you, "the field is moving at an incomprehensible pace for people without deep foundations and a strong model of understanding." What foundations would you recommend people look at? Both students, people who are at university studying or looking for their first job, and also software professionals who now actually want to go back and strengthen those foundations.

Grady Booch: I find my happy place, if you will, my sweet spot that I retreat back to when I'm faced with a difficult problem, in systems theory. Go read the work of Simon and Newell, The Sciences of the Artificial. There's a whole set of work that's come out on complexity and systems from the Santa Fe Institute. It's those kinds of fundamentals of systems theory that ground me in the next set of things I want to build. I think I mentioned in one of our previous discussions, I was doing some really interesting work on NASA's mission to Mars. We were faced with an issue of saying, "Hey, you know, we want to have people go off on these long missions. We want to put robots on the surface of Mars." And so I was commissioned to go off and think about that for a while. And in effect, I realized NASA wanted to build a HAL. And you'll notice I've got a HAL above me here.

Host: Yes.

Grady Booch: I'm a great one for history. This is my sword of Damocles that hangs behind me. If you know the history behind the sword of Damocles: Damocles was always kept humble, because on the throne there was a sword hanging right above him by a thread, so he felt constant unease. And this is why I have HAL behind me as well. For some reason, NASA didn't want the "kill all the astronauts" use case. Don't understand why, but we threw that one out. But if you look at the problems there, this is a systems engineering problem, because you needed something that was embodied in the spacecraft. Much of the kind of software we have today in AI is disembodied. Cursor, Copilot, and the like have no connection to the physical world. So our work was primarily in embodied cognition. Around the same time, I was studying under a number of neuroscientists, trying to better understand the architecture of the brain. And here's where the fundamentals came together for me, because I began to realize there are certain structures we see in systems engineering that I can apply to the structure of these really large systems. Take the ideas of Marvin Minsky's Society of Mind, which is a way of architecting systems of multiple agents. We're in agent programming now, and I think people are just beginning to tap into how those things apply. They need to go look at systems theory, because that problem, multiple agents, has been looked at already. Go read Minsky's Society of Mind. You'll see some ideas that will guide you in dealing with multiple agents. The ideas of Baars, which were manifest in early AI systems such as Hearsay: the ideas of global workspaces, blackboards, and the like, another architectural element. The ideas of subsumption architectures from Rodney Brooks, influenced by biological things. If you look at a cockroach, a cockroach is not a very intelligent thing.
But we know there's not a central brain in it, and yet it does some magnificent things. We have been able to map the entire neural network of the common worm, and we're not, you know, flush with worms running the world. There's something else going on there. Biological systems have an architecture to them. So, to go back to your question: by looking at architecture from a systems point of view, from biology, from neurology, from systems in the real world, as Herbert Simon and Newell did, this is what's guiding me to the next generation of systems. And so I would urge people looking at systems now: go back to those fundamentals. There is nothing new under the sun in many ways. We've just applied it in different ways. Those fundamentals of engineering are still there.

Host: And then, in closing, you gave some really good recommendations to read, to ponder, to educate yourself, and to get ideas that will probably be useful in this new world, especially as we're going to have a lot more agents. For example, I just heard that agents will be part of Windows 11, the operating system. So they will be everywhere. But looking back at the previous rises of abstraction, and also the previous golden ages: the people who did great at the start of a new golden age, or at the start of a new abstraction, even if they were not amazing at the previous one, what have you seen those people do? Based on this historical lesson, what would you recommend, if we were to kind of copy the successful things people did? Because I feel this is an opportunity as well, right? We have this rise of abstraction. A lot of people will be paralyzed. But there will be new superstars born who will be riding the wave, and they will be the experts of agents, of AI, of building these new and much more complex systems than we could have built before.

Grady Booch: As I alluded to earlier, the main thing that constrains us in software is our imagination. Well, actually, that's where we begin. We're not constrained by imagination; we can dream up amazing things. And yet we are constrained by the laws of physics, by how we build algorithms, by ethical issues and the like. So what's happening now is that you are actually being freed, because some of the friction, some of the constraints, some of the costs of development are disappearing for you. Which means now I can put my attention upon my imagination, to build things that simply were not possible before. I could not have done them because I couldn't have raised a team to do them; I couldn't have afforded that. I could not have done it because I couldn't have had the reach in the world that I have now. So think of it as an opportunity. It's not a loss. It'll be a loss for some who have a vested interest in the economics of this, but it's a net gain, because now all of a sudden these things unleash my imagination to allow me to do things that were simply not possible before in the real world. This is an exciting time to be in the industry. It's frightening at the same time, but that's as it should be. When you're on the cusp of something wonderful, you can look at the abyss and say, "Crap, I'm going to fall into it." Or you can say, "No, I'm going to leap, and I'm going to soar. This is the time to soar."

Host: Grady, thank you so much for giving us the overview, the outlook, and a little bit of perspective. I personally really appreciate this.

Grady Booch: and I hope I offered some hope as well.

Host: I think you definitely did. This was a really inspiring episode. Thank you, Grady.

Host: [music] One thing that really struck me was when Grady pointed out that developers have faced this exact existential crisis before, multiple times, in fact. When compilers came along, assembly programmers thought their careers were over. When high-level languages emerged, the same fear ripped through the industry. And each time, the people who understood what was actually happening, that it was just a new level of abstraction, came out ahead. This historical lens is something I think we often miss when we're caught up in the day-to-day anxiety of new AI capabilities. I don't think we're at the end of software engineering, and neither does Grady. We're at the beginning of another chapter, and if history is any guide, it's going to be a pretty exciting one.

Host: If you found this episode interesting, please subscribe on your favorite podcast platform and on YouTube. A special thank you if you also leave a rating for the show. Thanks, and see you in the next one.


n5321 | February 26, 2026, 00:36

Objectively Speaking - Episode 261

Jag: Hey everyone, and welcome to the 261st episode of Objectively Speaking. I'm Jag, CEO of the Atlas Society. I'm very excited to have Zoltan, or Zol, Cendes with us today to talk about his book, The Objectivist's Guide to the Galaxy: Answers to the Ultimate Questions of Life, the Universe, and Everything. And as you all can see, I have made quite a few bookmarks. I thoroughly enjoyed it, and I'm really excited to talk about it with Zol. So, welcome. Thanks for joining us.

Zol: Hi, Jag. Thanks for inviting me.

Jag: Absolutely. And also, folks, I did a review of this book on the Atlas Society website, so we're going to put that up through the links as well. So, Zol, I couldn't help but notice some similarities between your family's story and that of Ayn Rand, in terms of fleeing totalitarian communism. Your parents were Hungarian. How did you come to be born in a refugee camp in Austria?

Zol: Well, toward the end of the Second World War, the Soviets were invading Hungary, and my father and mother had a wagon with two horses. My older brother was three months old at the time, and they grabbed him and started going toward the Austrian border, and it was quite an experience, a long line of refugees going along. But before they got to Austria, an airplane came in and strafed the column of refugees. My parents grabbed my baby brother and jumped under a bridge. But one of the horses was killed, and the other was shot in one leg, so it ended up being three-legged. And they had to use that three-legged horse to keep pulling the wagon toward Austria, and my dad had to help push the wagon up the hills. But they made it. They had to go night and day to get into the British sector in Austria, where they ended up in a refugee camp run by the British. And there wasn't much there. The camp was a bunch of long tents where they had multiple families in one tent, and the families would put up blankets between the various locations to try to get a little bit of privacy. I came along a year and a half later, and of course they didn't have anything, so my crib was an old crate that had been abandoned. So life was very hard, but I ended up being born there, in this refugee camp.

Jag: Now, your father was a very academic, learned man; he had two PhDs. But in terms of getting an immigrant visa, that was given on the condition of him working on a farm, right? Tell us about that story and what happened.

Zol: Well, there are sort of two parts to that story. One is that Canada actually had a law. We immigrated to Canada, but Canada actually had a law that if you were an educated person, you could only be French or English; other nationalities couldn't come in and be educated. So my father got sponsorship from a farmer to go to a farm in Canada. But unfortunately, just before we were about to leave in the springtime, I got scarlet fever, and I was taken to a hospital and quarantined for a couple of months, where I couldn't see anyone other than the nurses. By the time I got out and we got new arrangements going, we ended up arriving in Canada, on this farm, in the middle of October, and there was nothing there. There's nothing you can really grow in October in Canada, and they had no money, very few possessions, and didn't speak English, so it was very difficult. But my dad found a job in an auto parts factory and went to work there. The farmer sued, saying he had to stay on the farm. And without really being able to speak much English, he had to persuade the judge that, well, we'll all starve to death if we stay on the farm. So he did end up working on the farm for several years. It was still a tough time. I remember one time he was laid off, and some people brought some boxes of canned food around so we'd have something to eat. So it was still difficult, but eventually we came out of that.

Jag: And eventually your dad did manage to finally get an academic post and then tragedy struck. What happened?

Zol: Yeah, he ended up first getting a job at the University of Windsor in Canada, then a university position in Georgia, then Michigan State, and then finally, no, Carnegie Mellon is where I taught; his final university was Central Michigan University. And things were going well. I was 19 at the time. He was home and I was home, and all of a sudden he collapsed. So I called an ambulance, and they took him to the hospital. He had a brain tumor, and he died three months later. And I remember, on his deathbed there in the hospital, him telling my mother, "Gee, I finally got our lives together, and now this happens." It was all extremely sad.

Jag: Well, you know, with all of these very traumatic experiences that happened to your parents and to you, growing up in some cases almost at a semi-starvation level, how do you feel that influenced you? And how did finding Objectivism help you think about and understand your life? I mean, I could see that someone might have had these experiences and said, the world is unfair, the world is chaotic.

Zol: The interesting thing is, even though our lives were difficult and poor, I was a happy child. I think partly, if you have good parents, and mine certainly were, they carry you along. My parents, my brother... I had a happy childhood, I'd say, even though there wasn't much to eat at times. So it's kind of attitude, I think, and the way your life develops. I did get out of it an ambition to do better. You know, my brother and I were kind of outsiders, and poorer than most of the people in Canada. So we strove to achieve things. I think it did provide an incentive to work hard and try to achieve something. And of course, eventually finding Ayn Rand really helped make all the ideas clear and beneficial.

Jag: How did you get into engineering?

Zol: Well, as a child I was always interested in building things and how things worked, so I had several projects going on all the time. I studied math and physics and, you know, decided on engineering as a good career. I was always consumed by technology and science, so it was a natural fit for me.

Jag: How did you come to see the value of computer simulation in the early 1980s? And can you describe the finite element method for our audience?

Zol: Well, actually, in 1970 I went to pursue my PhD at McGill University in Montreal, Canada, working with a wonderful professor by the name of Peter Silvester. He was one of the first to do electromagnetic field simulation. Just as a comment, electromagnetics is all around us. Light is an electromagnetic wave, and of course the signals that are propagating, carrying our image back and forth, are electromagnetic waves as well. And back then, the computers were really primitive by today's standards. I remember there was a huge air-conditioned room that you couldn't enter, and we used punched cards, which you had to submit and then wait for the results to come back to solve these problems. So it was a very different experience. I worked on electromagnetic field simulation during my graduate work and then continued to work on it afterwards.

Jag: Yeah, you started off in academia. Tell us a little bit about what your area of research and teaching was, and then how you ended up founding Ansoft, and what the company does.

Zol: Okay. I guess, you know, I was always interested in electromagnetics. Now, the equations were developed in 1860 or so, and they describe how electromagnetic fields work, but they're very difficult to solve. They're partial differential equations. And so in the computer, what you do is break the problem up into a bunch of smaller pieces. Like the bricks or the stonework behind me: you'd have this lump, and you break it up into small pieces and then model the fields in each one, and the computer puts it all together. In any event, I ended up at Carnegie Mellon University in 1982, doing research in this area, and an individual came by from Alcoa. Alcoa makes aluminum. The molten aluminum is poured into a mold, and that then forms an ingot, which when it's cooled is solid. But as it's going through the mold, it'll have all sorts of distortions from the mold. So they wanted to develop a contactless mold system. If you put a coil around the molten aluminum and run an alternating current through it, it will induce the opposite currents in the molten aluminum, and the force from the opposite currents will push it apart. So the electromagnetic field, in this case the magnetic field, will actually hold the molten aluminum without anything touching it. And so they were designing this technology, and they wanted a computer program to model it. And so I said, "Well, I'll go get a graduate student, and we'll start working on this problem right away." And he said, "No, no, we don't want you to do research. We want you to start a company and create a commercial program that we can actually use to design products." And so that's what I did. I accepted his proposal. He gave us a contract to start a company and create the software program. And of course, he used it for this particular application.
But electromagnetics applies to thousands of things, millions. And so we could sell the software for other applications. And so slowly, one step at a time, we grew the company from this initial beginning with Alcoa.

Jag: Wow. But that is one of those forks in the road: if you had not met that person, your life would be on a different trajectory. So, speaking of things that put one's life on a different trajectory, how and when did you discover Ayn Rand? Was it the fiction, the non-fiction? Was it a friend?

Zol: I guess I was about 30 at the time, and there was a colleague at work who was an objectivist, and he was trying to persuade me all the time to read something by Rand. But, you know, it's one of those things: until you read it, you don't really realize what you're missing. And it took me a while before I read it. The first book I read was The Fountainhead, and of course I loved it. I think it's a fabulous book, as we all know. Then I read Atlas Shrugged and the other works. So it really was a wonderful experience.

Jag: So it ultimately inspired a book about the ultimate answers to the ultimate questions of life, the universe, and everything. Let's talk about your new book, The Objectivist's Guide to the Galaxy. As I mentioned from all of the bookmarks, you gave me a lot to chew on. What inspired this undertaking of a book originally? And what most surprised you about the process of writing it?

Zol: Well, there's a saying among professors: if you want to learn a subject, teach it. Now, I'm not teaching anything anymore, but I thought, well, I want to explore the relationship between science and objectivism, so let me write a book about it. And I thought it would be very helpful, because I know many objectivists have very little education in terms of scientific principles. So I thought it would be very useful for objectivists to have a better foundation in science. And then it occurred to me that it works the other way as well. Perhaps there'll be a lot of scientists who read the book and then discover objectivism.

Jag: So is that your target? You had segmented target audiences: both objectivists and also non-objectivist scientists?

Zol: I think it's useful for both. But I really had in mind the objectivist who wants to learn more about science and how objectivism relates to science, and vice versa.

Jag: So in your acknowledgements you mentioned consulting with Atlas Society founder David Kelley and our senior scholar Stephen Hicks. How did they help contribute to your thinking?

Zol: Well, Stephen read some early drafts of what I wrote and was very helpful in straightening a few things out, and also his philosophy chart, where he provides the essential elements of medieval, modern, and postmodern philosophy. I found that a very useful concept to work with in the book. David and I had some conversations on induction and on the nature of space, and he was very helpful in a couple of the chapters that I wrote.

Jag: So the title of the book is a playful nod to the cult favorite, Douglas Adams's The Hitchhiker's Guide to the Galaxy. In your introduction, you describe feeling disappointed with the Hitchhiker's Guide. I couldn't help connecting Adams's postmodern meta-narrative with the despairing "Who is John Galt?" when it was expressed to mean that the universe is unknowable, unfixable: really a verbal shrug to express the futility of seeking answers. Do you see that connection? And how do objectivism and your book offer a radically different proposition?

Zol: Yes, that's a very interesting question. This connection never occurred to me, but you're right. In both cases, the question is really saying that you don't know, that it's impossible to answer. Now, of course, Rand answered the question in Atlas Shrugged, in Galt's speech, but Adams basically just wrote that the number 42 is the answer to this ultimate question of life, the universe, and everything. Recently I saw a post where someone wrote that when he read the number 42 in The Hitchhiker's Guide to the Galaxy, he laughed out loud; it was so funny. And it puzzled me. What? There's nothing funny about the number 42. But then I realized he laughed because the number 42 is a poke in the eye to anyone who claims to have any real fundamental knowledge. Anyone who says, "Gee, I know the answer": they don't believe it. And so they have to laugh and make a joke of it.

Jag: Right. Well, it's probably laughter that covers up a deeper anxiety, I would say. And the way that "Who is John Galt?" is used by many of the characters in Atlas Shrugged: there's a reason why Dagny finds it so infuriating, because she herself is on a search for answers, and this kind of resignation that answers are not possible is anathema to her. One of my favorite scenes in the book is when she and Rearden are about to take the first train on tracks made of Rearden Metal, and she's decided to call it the John Galt Line in order to reclaim this message. And when a reporter asks her "Who is John Galt?", she shoots back defiantly, "We are." So I think that's why so many people have connected to the question less as an admission of futility and more as a kind of defiant response to those who would say things are otherwise.

Jag: So, what's remarkable about your book is how you integrate the hard sciences, like physics, math, biology, and engineering, with philosophy. I was wondering, do you consider philosophy to be a science? And therefore, is objectivism a science?

Zol: Both philosophy and science begin with the three axioms: that existence exists, consciousness exists, and identity exists. So they have the same origin. Now, science focuses more on the metaphysics, what exists, but it has to use epistemology to figure out the answer. Philosophy is more on the epistemology side: how do we know it? But the "it" in that statement requires science to figure out what it is. So I think the two go together. It's very hard to decouple science and philosophy. They have to work hand in glove to give meaningful answers. So is it one or the other? I'd say both together are essential to arrive at truth.

Jag: Yes, I agree with that. To me, that's why, and you might disagree, we tend to see philosophy as a science, and science must be open to inquiry and elaboration. Science and philosophy are different from, let's say, a body of literature that was created by one person, is copyrighted, and cannot be changed. Science and philosophy are not necessarily about consensus; they are about inquiry. So, you describe philosophy as divided into four historical movements, with a 250-year battle between modernism and postmodernism. What are the roots of modernism, and how did Isaac Newton's Principia change man's understanding of existence?

Zol: Well, Newton's Principia Mathematica took nature and how things move and explained it using first principles and mathematics, particularly in terms of the orbits of the planets. He showed that they work through natural laws, and that all movement is really through natural law. This showed that you don't have to go to God to find the truth; you can find the truth by examining this world, reality. And because he did that, the Age of Enlightenment developed, which led into the modern world, where people are really looking at things that exist, trying to understand them and work with them. It really changed things from medieval philosophy, religion, and superstition to reason, logic, and experimentation. I think Newton's book is by far the most important book in human history for transforming the world from a backward, superstitious era to the modern era that we enjoy today.

Jag: So we've talked about the roots of modernism. How would you describe the roots of postmodernism?

Zol: Well, that's an unfortunate turn in history. Postmodernism was a term coined by Jean-François Lyotard, a French philosopher who didn't like the results of science. So he developed this new postmodern philosophy, but he really drew on the philosophy of Immanuel Kant from the 1700s, where Kant basically said that you can't really know things as they are, because everything you know comes through your senses, and your senses distort what you see. So there's the world we live in, but we can't really know true reality. And Lyotard then asked, how do you know truth? Well, it's collective truth. The only truth you have is the group's, and every group has its own truth; whatever most people think, that's how you define truth. But of course, when it's group truth, then you have conflicts and communism and collectivism and all sorts of horrible things. So postmodernism has its origins in Kant's philosophy, but it has grown more and more destructive through the decades, and now, as you probably know, postmodern philosophy drives the academy and a great deal of bad events around the world.

Jag: Yes, of course. I think Stephen did a wonderful job in his definitive book, Explaining Postmodernism, which we tried to distill in our Pocket Guide to Postmodernism. After the failure of communism and the Soviet Union became so apparent, there was an effort to repurpose class division, class struggle, oppressor/oppressed, onto all kinds of identity groups: this idea of lived experience, structural racism, all of these nebulous and nefarious forces. I think those who buy into it really develop a victim mindset, and it's one of the reasons why you see so much unhappiness and confusion among the left who've been indoctrinated with these philosophies.

Jag: So going back even earlier than postmodernism and modernism, you talk about the Chauvet caves, discovered in 1994, which contain prehistoric drawings and paintings dating back some 30,000 years. What do they tell us about cognitive development and the cognitive revolution in human beings, and what is the relevance to modern man in terms of how we form concepts?

Zol: Yeah, it's a fascinating question in terms of evolution: when did conceptual thinking originate? Animals are on the perceptual level; they don't have concepts. And the question is when people first developed concepts. Now, Homo sapiens have been around for about 330,000 years, but there's no evidence of concept formation that far back. Until about 30,000 years ago, there was no art, nothing substantial created by Homo sapiens. But around that time period, you have cave paintings and figurative models being formed. So all of a sudden, people are making objects, which shows a conceptual ability. You have to have concepts in order to appreciate art; animals simply can't appreciate art. So the archaeologists maintain that there was a cognitive revolution, or the "tree of life" mutation as they call it, around 30,000 years ago, that changed Homo sapiens from thinking on the perceptual level to forming concepts. And once you're able to form a concept, the brain develops more and more ways of thinking in higher-level concepts. So it's really been a 30,000-year evolution to develop the conceptual ability we have today.

Jag: So, speaking of caves, I thought you made a provocative connection between the 1999 sci-fi blockbuster The Matrix and Plato's allegory of the cave. What do they have in common? And why the persistent impulse in philosophy to doubt the evidence of the senses?

Zol: Well, I think mathematicians in particular are very much Platonists. The reason is that you can see two dolls over here or two balls over there, and you can count to the number two, but the number two doesn't exist as such in metaphysical reality. It's a higher-level concept. First-level concepts are things you point at, like dolls or balls, but you can't point at the number two. Now, there are good reasons to say that it's derived from reality, and we can go through that, but the point is that all these higher-level concepts (and mathematics is 100% higher-level concepts) are things you can't really point at. They're concepts formed in man's mind as tools, as knowledge, to help him interact with reality. So a lot of people, particularly the mathematically oriented, don't understand the connection between reality and numbers, and so they say, well, there's a Platonic universe, and, like in The Matrix, it's all a simulation. Serious "scientists" have actually written books saying that the universe is just a mathematical simulation, that nothing is really real. It's a stupid idea, but we don't seem to be able to stamp it out, even though it is a stupid idea.

Jag: Well, I think your book will take a good stab at stamping it out. Now, I'm going to get stamped out if I don't at least bring up some of these questions we have from the audience. I'm thrilled to see a lot of people here saying, "I was looking for a good book," and that they're going to be picking yours up. So, Ilishian asks, "Reflecting on your family's experience as immigrants, do you think the issues surrounding immigration today parallel the past, or is today's situation unique?"

Zol: I think there are parallels to be drawn. Even back in my day, not just anyone could come; it was limited. You had to have some qualifications or some ability; in our case, we had a farmer sponsor our way here. So immigration of course should be allowed, but there have to be controls so that criminals and terrorists don't get across the border. But in terms of parallels, it really was much more restricted earlier on than it is today.

Jag: All right. Alan Turner asks, "Having seen the rise of computers over the past 50 years or so, do you think progress is still advancing at the same pace, or are we starting to slow down?"

Zol: Progress is still advancing. And one comment there: computers get faster because there's software to simulate them. You could not build, say, the iPhone you have today if it wasn't simulated first. These things are so complicated, and the physics so involved, that it's only by simulating them that we can build them. And when the computer gets faster, you can simulate more, which speeds up the simulation further. So the computers will keep improving. Now, of course, the big event everyone is aware of is that AI has all of a sudden taken this dramatic turn, and I think everyone sees it's one of the few times in history where something comes out, ChatGPT, and people can see right away that it is going to change the world dramatically. So I think that technology, in the hardware but also in the software, especially AI now, is going to revolutionize the world over the next few decades. It's very exciting.

Jag: Yeah. So, getting back to your book, let's talk about tabula rasa, the fact that humans are born without innate ideas. Is that consistent with genetic variability in intelligence, ability, or temperament? And does such variability present any challenges to the idea of free will?

Zol: I don't think those two ideas are really in conflict, tabula rasa and variability in intelligence and such. A baby is born with billions of neurons and trillions of synaptic connections, but they have to think as they grow and make those connections stronger or weaker. Someone with more genetic ability for intelligence or some other trait may be able to perform better, but it's available to anyone. Anyone is free to think the way they like and promote the positive features of their character, or emphasize the negative ones. So I don't think there's a conflict between those two.

Jag: Yeah. So, you know, I was reflecting back on a dinner we had a while ago, at which you shared a term you had coined, "Selfsmart," as a way to encapsulate the ethics of long-term rational self-interest. Of course, Rand chose to provocatively capture this with her "virtue of selfishness." Do you think that such branding has contributed to misconceptions about objectivism?

Zol: Well, Rand was a genius in terms of philosophy and literature, and we all owe her a great debt, but unfortunately she wasn't the best at marketing, and I think the most unfortunate thing she did was to say, "Gee, I am selfish." Because, as she writes in The Virtue of Selfishness, for most people the word selfish means the brute who's going to trample over people and steal things and be a horrible person. And yet that's the word she uses for describing pursuing your own interests. So I think that word is very difficult to work with. If Rand hadn't used it, we would be much farther along in spreading objectivism through the wider society. But today, if you tell someone, "Gee, Rand is a selfish person. Why don't you read her books?", it turns them off. It's very difficult to rebrand a word that negative. No one wants to be thought of as a brute.

Jag: Yes. Well, you know, just as Rand described Dagny as overconfident in assessing her ability to save her railroad, to save the world, to keep it steady on her own shoulders, I think perhaps Rand also had a bit of overconfidence in her ability to change the perception of a word that was deeply ingrained as a negative into a positive. But at the same time, she certainly captured people's attention. So, speaking of Rand, as we did in our most recent video, we showed that she obviously was the victim of a lot of viciously unfair attacks. Now, for her admirers, this may have contributed to a kind of siege mentality in which there's a fear of acknowledging any mistakes. And I fear that fear presents a danger of conflating a personality with principles, whether with regard to Rand or other thinkers. Do you see that as well?

Zol: Well, if you go through the logic, beginning with existence and working your way up, the principles are sound regardless of other character or personality traits. So I think the philosophy, the basic principles, stands on its own. Now, I wouldn't want to say anything negative about Rand, because I owe her so much and she was such a great and wonderful person, but she was a human being, not a god. Any human being is going to make a mistake here or there, but as for us, we shouldn't emphasize anything like that. There are plenty of people who will criticize her.

Jag: Yeah. Now, my point is not that we should criticize her, or go out of our way to, but that a sense of justice, wanting to repay the debt we've incurred by benefiting so much from her, shouldn't lead us to evade things that were true, or unfortunate, or contradictory. We should also be in touch with reality. Now, my personal favorite chapters in The Objectivist's Guide to the Galaxy concern character and ethics. You argue that character is built through the repetitive choice of where to focus the mind. You write, quote, "Thinking about something modifies the neural connections in the brain, reinforces your thought patterns, and locks in good or bad behavior. Each person is responsible for what they think about, and hence the type of person they become." End quote. Can you give us an example, even a hypothetical one, of how this works in practice?

Zol: Well, first off, let me just say that the common saying in biology is that neurons that fire together wire together. There are experiments people have done in animals showing that the neural wiring in your brain depends on where the brain is focused. So it's a scientifically established fact at this point. Now, I'll take my own personal case, because I know that one. As I mentioned, at a young age I was very interested in science and engineering, and I worked on math and physics, and of course the more I did that, the better I became at it and the more skilled I got; obviously the neural connections get strengthened by use. So in terms of character, doing positive things like studying and trying to apply oneself, I believe, leads to a good character, whereas someone who says, "Oh, I don't want to study for that math test, I'm going to figure out a way to cheat," is strengthening the parts of his brain that focus on "How do I cheat?" and ends up with a bad character. So the thoughts you have literally rewire the circuits in your brain and make you a better or worse person.

Jag: So Rand's quote that "Art is the indispensable medium for the communication of a moral ideal" is probably the quote that I use the most in trying to explain the Atlas Society's strategy of leveraging artistic content like graphic novels or animated book trailers—even music videos—to reach new audiences. But your chapter gave me a much deeper understanding of why art is so indispensable because it helps humans grasp normative concepts concerning alternatives on how to behave. Perhaps you could walk us through this process using one of Rand's novels of how the characters and their choices concretize normative concepts and choices for the readers.

Zol: Well, probably the best one there is The Fountainhead, my favorite book, because The Fountainhead really is about integrity, and, as she says, integrity not just in architecture or building but in a man's soul. Howard Roark goes through many struggles. He's expelled from university. He's offered commissions he can't accept because they would destroy his values. He has many of these struggles, and yet he perseveres with his vision and comes to a heroic end. That's a character I have certainly related to when thinking things through during a conflict: what would Howard Roark do? How would he handle it? I think it's very helpful to have a character like Roark in mind when one is facing difficult conflicts and decisions.

Jag: Yes. You know, it reminds me of Rand's explanation of the mass appeal of James Bond. She said, "The obstacles confronting an average man are to him as formidable as Bond's adversaries." But what the image of Bond tells him is, quote, "It can be done." The sense of life it conveys is not one of a chaotic, malevolent universe in which we are at the mercy of capricious forces, but a benevolent one in which the right choices can help us ultimately prevail against great odds. Are there other works of art that have connected with you in that sense?

Zol: Before I go there, I just want to mention that art and movies have gone downhill so much. There's a sick mentality: even James Bond, in the latest movie, is a weaker character than before, and gangsters and various criminals are treated with respect, or, I'm not sure what the right word is, as semi-heroes in many movies today. So unfortunately we're living in a very bad time for art. In terms of positives, I would recommend the novels of Wilbur Smith. Wilbur Smith is a South African author, and he writes very dramatic scenes with great characters doing wonderful things, and he has several series of books that are very enjoyable to read. Obviously they're not as intellectually stimulating as Rand's books; nothing even comes close to that. But he does have a great sense of life and a positive image of heroes.

Jag: So looking back over your involvement with objectivism, what are some of the ways in which you've seen the movement change over time?

Zol: Well, I wish it had changed more, in many ways. Even 50 years ago, when I first became aware of it, it was always an outsider movement, with a very small percentage of people understanding that it's a true philosophy, that the world really does work like that. The vast majority of people around us just don't pay attention and don't understand it. And we're still in that phase; we haven't reached a critical mass. Now, what you're doing at the Atlas Society, I hope, can bring us to that critical mass. We certainly need more people to understand objectivism, to be aware of what it means and what it stands for. Hopefully what you're doing will allow us to get there and really make a change in the world. But it's been disappointing for me that this wonderful new philosophy really hasn't taken the world by storm the way it should have.

Jag: Well, to the extent that the Atlas Society's growth is a proxy for the potential growth of objectivism (quadrupling our revenues, growing our student conference by 50% year-over-year, putting out things like our animated book trailer of Atlas Shrugged, which drew 12 million views and doubled book sales of the novel), I'm feeling pretty optimistic. And one thing that I wouldn't call a change, but that has definitely struck our scholars who have been involved with objectivism for decades, is the remarkable growth of interest in objectivism overseas. We have our videos in English, but we've translated them into 12 different languages, and a video that gets a million views in English will get 5 million views in Spanish, 6 million views in Hindi, and 8 million views in Arabic. So I think the world is changing in ways that aren't necessarily perceptible on the surface. I think a lot of that has to do with preference falsification and collective illusions.

Zol: That's wonderful news, but surprising to me, because I guess I've been United States-centric in my thinking. Rand wrote here, and it's really the USA that has, for almost 250 years, led the world in freedom. But today it is floundering, as we all know, and it may be that there are other places in the world where freedom can come about. I'd certainly be thrilled if it happens and we really do see a rise in liberty around the world.

Jag: Oh, definitely. Earlier this year we launched Atlas Society International, with our 20 John Galt schools around the world and our new European conference, and we could easily have 50 schools given the demand in Africa, in Latin America, in Asia, in the Middle East, in India. But what you're saying is important, because it's going to be American philanthropy that funds that kind of expansion; sadly, the philanthropic tradition is not the same in Europe, and understandably, American donors are interested in funding American programs. Here's another challenge we're facing: 50 years ago, 70% of young people read novels on a daily basis for fun. Today that percentage is 12%, with young people spending upwards of nine hours online a day. How should those who care about Ayn Rand's ideas go about connecting with new audiences?

Zol: Well, I think what you're doing, going online and creating the videos and the graphic novels and such, is the right direction. I'm probably not the best person to figure out how to reach people better that way, but I think what you're doing is definitely the right approach.

Jag: Well, as opposed to what you're doing, which is, if not rocket science, then definitely science, marketing is not a science. It's just a matter of looking at your market, looking at your product, looking at consumption patterns, and then finding a way to serve your target audience with the kinds of products they want. I used to work at Dole Food Company, and I remember there were governments and charities concerned about certain deficiencies in certain countries, so could we find a way to have bananas with more iron to help with blindness in certain countries, because people were eating bananas? In a way it's similar. It's like: okay, people are eating bananas, or people are reading graphic novels and watching animated videos, so we adapt Rand's novels and find a way to make our banana contain objectivism.

Jag: Well, in just the few minutes we have left, was there anything else we didn't get to that you wanted to mention about your book? Or perhaps, given that our audience, at our conferences and online, is made up of young people, students, and young adults, any advice you'd give to young engineers or entrepreneurs aiming to make a transformative impact on their field?

Zol: Well, in terms of advice, the best advice is to follow your dream and pursue it. You have to think long range. I know when I started the company, all sorts of issues came up and produced impediments, but you've got to have that vision for where you want to go. Think long range. Find something you're really interested in, something new and original, and pursue it. Don't be sidetracked by minor issues that come up along the way. You really have to have the vision and stick to it.

Jag: Well, I agree with you. Find the thing that you love doing, that you would do even if they didn't pay you. Just don't let your bosses know that. Fortunately, I think they do know that at the Atlas Society. Do the thing that you will just lose yourself in for hours; find your default mode, and then keep persevering against all odds, being mindful of what you focus on. As the old adage goes: sow a thought, reap a habit; sow a habit, reap a character; and then reap a destiny. So I think that's really great advice, and between now and June of next year I'm going to be working on Zol (he's not giving me an answer now) to see if we can get him to come give some of that advice to our audience at Galt's Gulch next year and sign a few of these, because I'd love to get these into the hands of more people. So, those watching, don't deprive yourselves: go out and get the book. Zol, thank you very much. It's been a wonderful hour to spend with you, and thank you for the amazing achievement that is your book.

Zol: Well, thank you very much, Jennifer. Very pleasant talking with you.

Jag: All right. Well, thanks, all of you, for joining us today. Be sure to join us next week. Actually, I'm not going to be off on Wednesday: we are going to have Atlas Society senior scholars Stephen Hicks and Richard Salsman talking about public choice theory and the politics of self-interest. I know you were really patient last week with our planned interview with Martin Gurri, author of The Revolt of the Public; we've got him rescheduled, so check that out on our events page, and I'll see you for that interview as well. Thanks, everyone.


n5321 | February 13, 2026, 00:56

A practical guide to prompt engineering


We’ve all been there. You ask an AI chatbot a question, hoping for a brilliant answer, and get something so generic it's basically useless. It’s frustrating, right? The gap between a fantastic response and a dud often comes down to one thing: the quality of your prompt.

This is what prompt engineering is all about. It’s the skill of crafting clear and effective instructions to guide an AI model toward exactly what you want. This isn't about finding some secret magic words; it's about learning how to communicate with AI clearly.

This guide will walk you through what prompt engineering is, why it’s a big deal, and the core techniques you can start using today. And while learning to write great prompts is a valuable skill, it's also worth knowing that some tools are built to handle the heavy lifting for you. For instance, the eesel AI blog writer can turn a single keyword into a complete, publish-ready article, taking care of all the advanced prompting behind the scenes.

The eesel AI blog writer dashboard, a tool for automated prompt engineering, shows a user inputting a keyword to generate a full article.

What is prompt engineering?

So, what is prompt engineering?

Simply put, it's the process of designing and refining prompts to get a specific, high-quality output from a generative AI model.


It's way more than just asking a question. It's a discipline that blends precise instructions, relevant context, and a bit of creative direction to steer the AI.

Think of it like being a director for an actor (the AI). You wouldn't just hand them a script and walk away. You’d give them motivation, background on the character, and the tone you’re looking for to get a compelling performance. A prompt engineer does the same for an AI. You provide the context and guardrails it needs to do its best work.

An infographic explaining the concept of prompt engineering, where a user acts as a director guiding an AI model.

The whole point is to make AI responses more accurate, relevant, and consistent. It transforms a general-purpose tool into a reliable specialist for whatever task you have in mind, whether that’s writing code, summarizing a report, or creating marketing copy. As large language models (LLMs) have gotten more powerful, the need for good prompt engineering has exploded right alongside them.

Why prompt engineering is so important

It’s pretty simple: the quality of what you get out of an AI is directly tied to the quality of what you put in. Better prompts lead to better, more useful results. It's not just a nice-to-have skill; it’s becoming essential for anyone who wants to get real value from AI tools.

Here are the main benefits of getting good at prompt engineering:

  • Greater control and predictability: AI can sometimes feel like a slot machine. You pull the lever and hope for the best. Well-crafted prompts change that. They reduce the randomness in AI responses, making the output align with your specific goals, tone, and format. You get what you want, not what the AI thinks you want.

  • Improved accuracy and relevance: By giving the AI enough context, you guide it toward the right information. This is key to avoiding "hallucinations," which is a fancy term for when an AI confidently makes stuff up and presents false information as fact. Good prompts keep the AI grounded in reality.

  • Better efficiency: Think about how much time you've wasted tweaking a vague prompt over and over. Getting the right answer on the first or second try is a massive time-saver. Clear, effective prompts cut down on the back-and-forth, letting you get your work done faster.

The main challenge, of course, is that manually refining prompts can be a grind. It takes a lot of trial and error and a good understanding of how a particular model "thinks." But learning a few foundational techniques can put you way ahead of the curve.

Don't get me wrong, being able to engineer a good prompt is an important skill. If I had to guess, I'd say it accounts for about 25% of getting great results from a large language model.

Core prompt engineering techniques explained

Ready to improve your prompting game? This is your foundational toolkit. We'll move from the basics to some more advanced methods that can dramatically improve your results.

Zero-shot vs. few-shot prompt engineering

This is one of the first distinctions you’ll run into.

Zero-shot prompting is what most of us do naturally. You ask the AI to do something without giving it any examples of what a good answer looks like. You’re relying on the model's pre-existing knowledge to figure it out. For instance: "Classify this customer review as positive, negative, or neutral: 'The product arrived on time, but it was smaller than I expected.'" It's simple and direct but can sometimes miss the nuance you're after.

Few-shot prompting, on the other hand, is like giving the AI a little study guide before the test. You provide a few examples (or "shots") to show it the exact pattern or style you want it to follow. This is incredibly effective when you need a specific format. Before giving it your new customer review, you might show it a few examples first:

  • Review: "I love this! Works perfectly." -> Sentiment: Positive

  • Review: "It broke after one use." -> Sentiment: Negative

  • Review: "The shipping was fast." -> Sentiment: Neutral

By seeing these examples, the AI gets a much clearer picture of what you're asking for, leading to a more accurate classification of your new review.

An infographic comparing zero-shot prompt engineering (no examples) with few-shot prompt engineering (with examples).
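The few-shot pattern above can be sketched as a small prompt builder. This is a minimal illustration; the `build_few_shot_prompt` helper and the exact line format are assumptions, not part of any particular LLM library:

```python
# Few-shot prompting sketch: prefix the new input with labeled examples
# so the model can infer the expected pattern and output format.
# The helper name and template are illustrative.

EXAMPLES = [
    ("I love this! Works perfectly.", "Positive"),
    ("It broke after one use.", "Negative"),
    ("The shipping was fast.", "Neutral"),
]

def build_few_shot_prompt(review: str) -> str:
    """Build a sentiment-classification prompt with three worked examples."""
    lines = ["Classify the sentiment of each customer review."]
    for text, label in EXAMPLES:
        lines.append(f'Review: "{text}" -> Sentiment: {label}')
    # The unfinished final line invites the model to complete the pattern.
    lines.append(f'Review: "{review}" -> Sentiment:')
    return "\n".join(lines)

print(build_few_shot_prompt(
    "The product arrived on time, but it was smaller than I expected."
))
```

The same string would then be sent to whatever model you're using; the point is that the examples travel with the question.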

Chain-of-thought (CoT) prompt engineering

This one sounds complicated, but the idea is brilliant in its simplicity. Chain-of-thought (CoT) prompting encourages the model to break down a complex problem into a series of smaller, logical steps before spitting out the final answer. It essentially asks the AI to "show its work."

Why does this work so well? Because it mimics how humans reason through tough problems. We don’t just jump to the answer; we think it through step-by-step. Forcing the AI to do the same dramatically improves its accuracy on tasks that involve logic, math, or any kind of multi-step reasoning.

An infographic illustrating how Chain-of-Thought (CoT) prompt engineering breaks down a problem into logical steps.

The wildest part is how easy it is to trigger this. The classic zero-shot CoT trick is just to add the phrase "Let's think step-by-step" at the end of your prompt. That simple addition can be the difference between a right and wrong answer for complex questions.
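The zero-shot CoT trick is simple enough to capture in a one-line wrapper. A minimal sketch, assuming a hypothetical `with_chain_of_thought` helper:

```python
# Zero-shot chain-of-thought: append the step-by-step cue to any prompt.
# The wrapper name is illustrative; the trigger phrase is the classic
# zero-shot CoT cue described above.

def with_chain_of_thought(prompt: str) -> str:
    """Append the CoT cue so the model works through intermediate steps."""
    return prompt.rstrip() + "\n\nLet's think step-by-step."

question = "A store sells pens at 3 for $2. How much do 12 pens cost?"
print(with_chain_of_thought(question))
```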

Retrieval-augmented generation (RAG) for prompt engineering

Retrieval-augmented generation (RAG) is a powerful technique, especially for businesses. In a nutshell, RAG connects an AI to an external, up-to-date knowledge base that wasn't part of its original training data. Think of it as giving the AI an open-book test instead of making it rely purely on its memory.

Here’s how it works: when you ask a question, the system first retrieves relevant information from a specific data source (like your company’s private documents or help center). Then, it augments your original prompt by adding that fresh information as context. Finally, the LLM uses that rich, new context to generate a highly relevant and accurate answer.

An infographic showing the three steps of Retrieval-Augmented Generation (RAG) prompt engineering: retrieve, augment, and generate.
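The retrieve-augment-generate loop can be sketched in a few lines. This is a toy illustration: the keyword-overlap retriever, the sample documents, and the helper names stand in for a real vector store and model API call:

```python
# Toy RAG sketch: retrieve relevant passages, then augment the prompt
# with them as context. A production system would use embeddings and a
# vector database instead of keyword overlap.

DOCS = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm, Monday through Friday.",
    "Annual plans can be cancelled any time for a prorated refund.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def augment(query: str, docs: list[str]) -> str:
    """Prepend the retrieved passages as grounding context."""
    context = "\n".join(f"- {d}" for d in docs)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

query = "How long do refunds take?"
prompt = augment(query, retrieve(query))
print(prompt)
# The final "generate" step would send `prompt` to an LLM.
```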

This is huge for businesses because it means AI can provide answers based on current, proprietary information. It's the technology that powers tools like eesel AI's AI internal chat, which can learn from a company’s private Confluence or Notion pages to answer employee questions accurately and securely. RAG ensures the AI isn't just smart; it's smart about your business.

The eesel AI internal chat using Retrieval-Augmented Generation for internal prompt engineering, answering a question with a source link.

Best practices for prompt engineering

Knowing the advanced techniques is great, but day-to-day success often comes down to nailing the fundamentals. Here are some practical tips you can use right away to write better prompts.

Define a clear persona, audience, and goal

Don't make the AI guess what you want. Be explicit about the role it should play, who it's talking to, and what you need it to do.

  • Persona: Tell the AI who it should be. For example, "You are a senior copywriter with 10 years of experience in B2B SaaS." This sets the tone and expertise level.

  • Audience: Specify who the response is for. For instance, "...you are writing an email to a non-technical CEO." This tells the AI to avoid jargon and be direct.

  • Goal: State the desired action or output clearly, usually with a strong verb. For example, "Generate three subject lines for an email that announces a new feature."
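The three elements above slot naturally into a template. A minimal sketch, where the `build_prompt` helper and its field layout are illustrative:

```python
# Assemble a prompt from persona, audience, and goal.
# The template is one reasonable layout, not a required format.

def build_prompt(persona: str, audience: str, goal: str) -> str:
    """Combine the three checklist items into a single prompt string."""
    return (f"You are {persona}.\n"
            f"You are writing for {audience}.\n"
            f"Task: {goal}")

print(build_prompt(
    persona="a senior copywriter with 10 years of experience in B2B SaaS",
    audience="a non-technical CEO",
    goal="Generate three subject lines for an email that announces a new feature.",
))
```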

Provide specific context and constraints

The AI only knows what you tell it. Don't assume it understands implied context. Give it all the background information it needs to do the job right.

  • Context: If you're asking it to write about a product, give it the product's name, key features, and target audience. The more detail, the better.

  • Constraints: Set clear boundaries. Tell it the maximum word count ("Keep the summary under 200 words"), the desired format ("Format the output as a Markdown table"), and the tone ("Use a casual and encouraging tone").

Use formatting to structure your prompt

A giant wall of text is hard for humans to read, and it’s hard for an AI to parse, too. Use simple formatting to create a clear structure within your prompt. Markdown (like headers and lists) or even simple labels can make a huge difference.

For example, you could structure your prompt like this: "INSTRUCTIONS: Summarize the following article." "CONTEXT: The article is about the future of remote work." "ARTICLE: [paste article text here]" "OUTPUT FORMAT: A bulleted list of the three main takeaways."

This helps the model understand the different parts of your request and what to do with each piece of information.
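The labeled-section structure above can also be built programmatically. A minimal sketch; the `structured_prompt` helper and its uppercase-label convention are assumptions mirroring the example, not a standard API:

```python
# Join labeled sections so the model can tell instructions, context,
# and source material apart. Section names follow the example above.

def structured_prompt(sections: dict[str, str]) -> str:
    """Render each (label, body) pair as an uppercase-headed block."""
    return "\n\n".join(
        f"{name.upper()}:\n{body}" for name, body in sections.items()
    )

print(structured_prompt({
    "instructions": "Summarize the following article.",
    "context": "The article is about the future of remote work.",
    "article": "[paste article text here]",
    "output format": "A bulleted list of the three main takeaways.",
}))
```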

Iterate and refine your prompts

Your first prompt is almost never your best one. Prompt engineering is an iterative process. Think of it as a conversation. If the first response isn't quite right, don't just give up. Tweak your prompt, add more context, or try a different phrasing. Experiment with different techniques to see what works best for your specific task. Each iteration will get you closer to the perfect output.

As one Reddit user in r/PromptEngineering put it (https://www.reddit.com/r/PromptEngineering/comments/1byj8pd/comment/kz7j6kv/):

"There are a lot of tips to remember in these two guides, so I tried to 80/20 them all and I came up with 5 questions I usually run through when I'm putting a prompt together:

  1. Have you specified a persona for the model to emulate?

  2. Have you provided a clear and unambiguous action for the model to take?

  3. Have you listed out any requirements for the output?

  4. Have you clearly explained the situation you are in and what you are trying to achieve with this task?

  5. Where possible, have you provided three examples of what you are looking for?

The initials on each of the bolded words spells PARSE which is just an easy acronym to remember when you need them."

How the eesel AI blog writer automates prompt engineering

Learning all these manual techniques is powerful, but it’s also a lot of work, especially for complex tasks like creating SEO-optimized content at scale. This is where specialized tools come in to handle the heavy lifting for you.

The eesel AI blog writer is a key example. It has advanced prompt engineering built right into its core, so you don't have to become a prompt wizard to get high-quality results. Instead of spending hours crafting and refining complex, multi-part prompts, you just enter a keyword and your website URL. That’s it.

A screenshot of the eesel AI blog writer, a tool that automates advanced prompt engineering for content creation.

Behind the scenes, the eesel AI blog writer is running a series of sophisticated, automated prompts to generate a complete article. Here’s what that looks like:

  • Context-aware research: It acts like a specialized RAG system designed for content creation. It automatically researches your topic in real-time to pull in deep, nuanced insights, so you get a well-researched article, not just surface-level AI filler.

  • Automatic asset generation: It prompts AI image models to create relevant visuals and infographics for your post and automatically structures complex data into clean, easy-to-read tables.

  • Authentic social proof: It searches for real quotes from Reddit threads and embeds relevant YouTube videos directly into the article. This adds a layer of human experience and credibility that's nearly impossible to achieve with manual prompting alone.

An infographic detailing the automated prompt engineering workflow of the eesel AI blog writer, from keyword to publish-ready post.

The results speak for themselves. By using this tool, our own eesel AI blog grew from 700 to 750,000 daily impressions in just three months.

It's entirely free to try, and paid plans start at just $99 for 50 blog posts. It's built to give you the power of expert prompt engineering without the learning curve.

The future of prompt engineering

The field of prompt engineering is evolving fast. As AI models get smarter and more intuitive, the need for hyper-specific, "magic word" prompts might fade away. The models will get better at understanding our natural language and intent without needing so much hand-holding.

We’re already seeing a shift toward what’s called Answer Engine Optimization (AEO). This is less about tricking an algorithm and more about structuring your content with clear, direct answers that AI overviews (like in Google Search) and conversational tools can easily find and feature. It’s about making your content the most helpful and authoritative source on a topic.

An infographic comparing Traditional SEO, prompt engineering, and Answer Engine Optimization (AEO).

So, while the specific techniques we use today might change, the core skill won't. Being able to communicate clearly, provide good context, and define a clear goal will always be the key to getting the most out of AI, no matter how advanced it gets.

For those who prefer a visual walkthrough, there are excellent resources that break down these concepts further. The video below provides a comprehensive guide to prompt engineering, covering everything from the basics to more advanced strategies.

A comprehensive guide to prompt engineering, covering everything from the basics to more advanced strategies.

Prompt engineering is the key to unlocking consistent, high-quality results from generative AI. It's the difference between fighting with a tool and having a true creative partner.

Understanding the foundational techniques like zero-shot, few-shot, CoT, and RAG gives you the control to tackle almost any manual prompting task. But as we've seen, for high-value, repetitive work like creating amazing SEO content, specialized tools are emerging to automate all that complexity for you. These platforms have the expertise baked in, letting you focus on strategy instead of syntax.

Stop wrestling with prompts and start publishing. Generate your first blog post with the eesel AI blog writer and see the difference for yourself.


n5321 | 2026年2月11日 23:59