About "BOEING":

Thirty years of development and application of CFD at Boeing Commercial Airplanes,

A paper published in 2004 reviewing 30 years of CFD use at Boeing.

From 1973 to 2004, Boeing developed, built, and sold hundreds of billions of dollars' worth of airplanes. Over those 30 years, the tools used by Boeing's engineers had to be able to accurately predict and confirm the flight characteristics of an aircraft. Before 1973, those tools consisted of analytic approximation methods, wind tunnel tests, and flight tests. Over these three decades, however, Boeing added CFD to that toolbox.

This short note covers how CFD was acquired, developed, and applied at Boeing Commercial Airplanes in Seattle.

Introduction

In 1973, Boeing Commercial Airplanes ran an estimated 100 to 200 CFD analyses. By 2002, that number exceeded 20,000, and the cases involved far more complex physics and geometry. Why? The reasons:

  1. CFD is now acknowledged to provide substantial value and has created a paradigm shift in the vehicle design, analysis, and support processes;

  2. Boeing's CFD effort was led by a strong and capable visionary, Dr. Paul Rubbert, who recruited and was supported by many talented managers and technical people;

  3. The CFD effort was well diversified, involving algorithm research, code development, application and validation studies, process improvement, and user support;

  4. Boeing developed a broad line of products, supported by many innovative and demanding project engineers;

  5. Computing power and affordability improved by three to four orders of magnitude;

  6. Numerous pioneers in academia and the Government continued to make algorithmic breakthroughs;

  7. Funding managers at Boeing and in the Government were not averse to taking risks.

The role and value of CFD

The aerodynamicist's goal: predict and confirm flight characteristics. The traditional means: analytic approximation, wind tunnel testing, and flight testing. The new means: CFD, simulation by numerical algorithms. The value of CFD is that a small number of inexpensive simulations yields the understanding needed to complete a design. Specifically, CFD can be used in an "inverse design" or optimization mode, predicting the geometry changes needed to optimize certain flow characteristics or a payoff function (e.g., drag). It can also be used to analyze and extrapolate experimental data (typically obtained by testing a reduced-scale model in a wind tunnel) to obtain accurate full-scale airplane characteristics. And it can help engineers find the root cause of design problems.

Effective use of CFD is a key ingredient in the successful design of modern commercial aircraft.

Effective use of CFD is a key factor in Boeing's successful airplane design.

Intelligent, extensive, and careful use of CFD is a major strategy of Boeing product development. Experience to date at Boeing Commercial Airplanes has shown that CFD has had its greatest effect in the aerodynamic design of the high-speed cruise configuration of a transport aircraft.

Experience shows that CFD has played a crucial role in Boeing's airplane design. Over the past 20 years, the use of CFD in airplane development has saved Boeing tens of millions of dollars. Tens of millions of dollars may sound like a lot, but it is only a small fraction of the value CFD has created for Boeing. The larger part is the added value CFD brings to the airplane itself. Value to the airline customer is what sells airplanes!

Value is added to the airplane product by achieving design solutions that are otherwise unreachable during the fast-paced development of a new airplane. Value is added by shortening the design development process. Time to market is critical in the commercial world, particularly when starting after a competitor has committed a similar product to market. Very important in the commercial world is getting it right the first time. No prototypes are built. From first flight to revenue service is frequently less than one year! Any deficiencies discovered during flight test must be rectified sufficiently for government certification and acceptance by the airline customer based on a schedule set years before. Any delays in meeting this schedule may result in substantial penalties and jeopardize future market success. The added value to the airplane product will produce increased sales and may even open up completely new markets. The result is more profit to both the buyer and seller (who does not have to discount the product as much to make the sale). All this translates into greater market share.

The commercial value is spelled out in the passage quoted above.

The CFD development and application process

In industry, CFD has no value of its own. The only way CFD can deliver value is for it to affect the product. CFD must become an integral part of the engineering processes for designing, manufacturing, and supporting the product, and it must get into the hands of the engineers who execute these processes. That is the ideal.

The CFD developers and ‘‘expert’’ users can certainly contribute, but are only a part of the engineering process.

Getting CFD into "production" use is not trivial; it is frequently a multiyear process.

The CFD development process falls into five distinct phases:

  1. Phase I produces enabling-technology algorithms that provide a basic means for solving a given problem.

  2. Phase II is the initial exploration, validation, and demonstration of the new computational technology. Its main outputs are demonstrator codes (useful for computational experiments and demos), combined with a vision of what is really needed.

  3. Phase III supplies the substance of that vision. It usually requires generalizing or otherwise modifying the Phase II codes (possibly rewriting them completely) and coupling front- and back-end interfaces to produce user-friendly, well-understood, and maintainable software. Even then, engineers have yet to gain enough confidence to make important, standalone decisions based on the code; that takes time, exposure, and experience.

  4. Phase IV involves "applications research": design engineers, managers, and code developers work together to learn how the new capability will enter into and change the aerodynamic design process. This is where the software actually lands in practice.

  5. Phase V is the mature capability. A code usually takes a considerable amount of time to reach this level of maturity.

Forrester T. Johnson *, Edward N. Tinoco, N. Jong Yu

Received 1 June 2004; accepted 18 June 2004. Available online 26 February 2005.

Abstract

Over the last 30 years, Boeing has developed, manufactured, sold, and supported hundreds of billions of dollars worth of commercial airplanes. During this period, it has been absolutely essential that Boeing aerodynamicists have access to tools that accurately predict and confirm vehicle flight characteristics. Thirty years ago, these tools consisted almost entirely of analytic approximation methods, wind tunnel tests, and flight tests. With the development of increasingly powerful computers, numerical simulations of various approximations to the Navier–Stokes equations began supplementing these tools. Collectively, these numerical simulation methods became known as Computational Fluid Dynamics (CFD). This paper describes the chronology and issues related to the acquisition, development, and use of CFD at Boeing Commercial Airplanes in Seattle. In particular, it describes the evolution of CFD from a curiosity to a full partner with established tools in the design of cost-effective and high-performing commercial transports.


Contents

  1. Introduction

  2. The role and value of CFD

  3. The CFD development and application process

  4. Chronology of CFD capability and use
     4.1. Linear potential flow
          4.1.1. First generation methods––early codes
          4.1.2. First generation methods––TA230
          4.1.3. Second generation linear potential flow method––PANAIR/A502
     4.2. Full potential/coupled boundary layer methods
          4.2.1. A488/A411
          4.2.2. TRANAIR
          4.2.3. BLWF
     4.3. Euler/coupled boundary layer methods
     4.4. Navier–Stokes methods
          4.4.1. Structured grid codes––Zeus TLNS3D/CFL3D, OVERFLOW
          4.4.2. Unstructured grid codes––Fluent, NSU2D/3D, CFD++
          4.4.3. Other Navier–Stokes codes
          4.4.4. Next generation Navier–Stokes codes
     4.5. Design and optimization methods
          4.5.1. A555, A619 inverse design codes
          4.5.2. TRANAIR optimization

  5. Conclusions

  References

1. Introduction

In 1973, an estimated 100–200 computer runs simulating flows about vehicles were made at Boeing Commercial Airplanes, Seattle. In 2002, more than 20,000 CFD cases were run to completion. Moreover, these cases involved physics and geometries of far greater complexity. Many factors were responsible for such a dramatic increase: (1) CFD is now acknowledged to provide substantial value and has created a paradigm shift in the vehicle design, analysis, and support processes; (2) the CFD effort at Boeing was led by a strong and capable visionary, Dr. Paul Rubbert, who recruited and was supported by the services of a number of talented managers and technical people; (3) this CFD effort was well diversified, involving algorithm research, code development, application and validation studies, process improvement, and user support; (4) Boeing developed a broad line of products, supported by a number of innovative and demanding project engineers; (5) computing power and affordability improved by three to four orders of magnitude; (6) numerous pioneers in academia and the Government continued to make algorithmic breakthroughs; and (7) there were funding managers in Boeing and the Government who were not averse to taking risks.

It would be impossible to adequately address all these factors in this short paper. Consequently, we will concentrate on issues that were central to the efforts of the authors, who have been members of the CFD Development and Applications groups at Boeing, Seattle for more than 30 years. In Section 2, we describe the role and value of CFD as it has evolved over the last 30 years and as it may possibly evolve in the future. In Section 3, we describe the CFD development and application processes. In Section 4, we lay out a brief history of the codes and methods that were most heavily used at Boeing, Seattle, as well as some of the issues that lay behind their development. In Section 5, we draw some brief conclusions.

Finally, we note that CFD has had a long and distinguished history in many other parts of the Boeing Enterprise. That history would best be related by those intimately involved.

2. The role and value of CFD

The application of CFD today has revolutionized the process of aerodynamic design. CFD has joined the wind tunnel and flight test as primary tools of the trade [1–4]. Each has its strengths and limitations. Because of the tremendous cost involved in flight testing, modern aircraft development must focus instead on the use of CFD and the wind tunnel. The wind tunnel has the advantage of dealing with a ‘‘real’’ fluid and can produce global data over a far greater range of the flight envelope than can CFD. It is best suited for validation and database building within acceptable limits of a development program's cost and schedule. Historically, CFD has been considered unsuited for such a task. However, the wind tunnel typically does not produce data at flight Reynolds number, is subject to significant wall and mounting system corrections, and is not well suited to provide flow details. The strength of CFD is its ability to inexpensively produce a small number of simulations leading to understanding necessary for design. Of great utility in this connection is the fact that CFD can be used in an ‘‘inverse design’’ or optimization mode, predicting the necessary geometry shape changes to optimize certain flow characteristics or a payoff function (e.g., drag). Beyond this, CFD is heavily used to provide corrections for the extrapolation of data acquired experimentally (typically from testing a reduced scale model of the vehicle in a wind tunnel) to conditions that characterize the full-scale flight vehicle. Finally, CFD is used to provide understanding and insight as to the source of undesirable flight characteristics, whether they are observed in subscale model testing or in the full-scale configuration.

Effective use of CFD is a key ingredient in the successful design of modern commercial aircraft. The combined pressures of market competitiveness, dedication to the highest of safety standards, and desire to remain a profitable business enterprise all contribute to make intelligent, extensive, and careful use of CFD a major strategy for product development at Boeing.

Experience to date at Boeing Commercial Airplanes has shown that CFD has had its greatest effect in the aerodynamic design of the high-speed cruise configuration of a transport aircraft. The advances in computing technology over the years have allowed CFD methods to affect the solution of problems of greater and greater relevance to aircraft design, as illustrated in Figs. 1 and 2. Use of these methods allowed a more thorough aerodynamic design earlier in the development process, permitting greater concentration on operational and safety-related features.

The 777, being a new design, allowed designers substantial freedom to exploit the advances in CFD and aerodynamics. High-speed cruise wing design and propulsion/airframe integration consumed the bulk of the CFD applications. Many other features of the aircraft design were influenced by CFD. For example, CFD was instrumental in design of the fuselage. Once the body diameter was settled, CFD was used to design the cab. No further changes were necessary as a result of wind tunnel testing. In fact, the need for wind tunnel testing in future cab design was eliminated. Here, CFD augmented wind tunnel testing for aft body and wing/body fairing shape design. In a similar fashion, CFD augmented wind tunnel testing for the design of the flap support fairings. The wind tunnel was used to assess the resulting drag characteristics. CFD was used to identify prime locations for static source, sideslip ports, and angle-of-attack vanes for the air data system. CFD was used for design of the environmental control system (ECS) inlet and exhaust ports and to plan an unusual wind tunnel evaluation of the inlet. The cabin (pressurization) outflow valves were positioned with CFD. Although still in its infancy with respect to high-lift design, CFD did provide insight to high-lift concepts and was used to assess planform effects. The bulk of the high-lift design work, however, was done in the wind tunnel [5]. Another collaboration between the wind tunnel and CFD involved the use of CFD to determine and refine the corrections applied to the experimental data due to the presence of the wind tunnel walls and model mounting system.

The Next Generation 737-700/600/800/900 (illustrated in Fig. 2), being a derivative of earlier 737s, presented a much more constrained design problem. Again the bulk of the CFD focused on cruise wing design and engine/airframe integration. Although the wing was new, its design was still constrained by the existing wing-body intersection and by the need to maintain manual control of the ailerons in case of a complete hydraulic failure. As with the 777, CFD was used in conjunction with the wind tunnel in the design of the wing-body fairing, modifications to the aft body, and design of the flap track fairings and the high-lift system.

Boeing Commercial Airplanes has leveraged academia- and NASA-developed CFD technology, some developed under contract by Boeing Commercial Airplanes, into engineering tools used in new airplane development. As a result of the use of these CFD tools, the number of wings designed and wind tunnel tested for high-speed cruise lines definition during an airplane development program has steadily decreased (Fig. 3). In recent years, the number of wings designed and tested is more a function of changing requirements during the development program and the need to support more extensive aerodynamic/structural trade studies during development. These advances in developing and using CFD tools for commercial airplane development have saved Boeing tens of millions of dollars over the past 20 years. However, as significant as these savings are, they are only a small fraction of the value CFD delivered to the company.

A much greater value of CFD in the commercial arena is the added value of the product (the airplane) due to the use of CFD. Value to the airline customer is what sells airplanes! Value is added to the airplane product by achieving design solutions that are otherwise unreachable during the fast-paced development of a new airplane. Value is added by shortening the design development process. Time to market is critical in the commercial world, particularly when starting after a competitor has committed a similar product to market. Very important in the commercial world is getting it right the first time. No prototypes are built. From first flight to revenue service is frequently less than one year! Any deficiencies discovered during flight test must be rectified sufficiently for government certification and acceptance by the airline customer based on a schedule set years before. Any delays in meeting this schedule may result in substantial penalties and jeopardize future market success. The added value to the airplane product will produce increased sales and may even open up completely new markets. The result is more profit to both the buyer and seller (who does not have to discount the product as much to make the sale). All this translates into greater market share.

CFD will continue to see an ever-increasing role in the aircraft development process as long as it continues to add value to the product from the customer's point of view. CFD has improved the quality of aerodynamic design, but has not yet had much effect on the rest of the overall airplane development process, as illustrated in Fig. 4. CFD is now becoming more interdisciplinary, helping provide closer ties between aerodynamics, structures, propulsion, and flight controls. This will be the key to more concurrent engineering, in which various disciplines will be able to work more in parallel rather than in the sequential manner as is today's practice. The savings due to reduced development flow time can be enormous!

To be able to use CFD in these multidisciplinary roles, considerable progress in algorithm and hardware technology is still necessary. Flight conditions of interest are frequently characterized by large regions of separated flows. For example, such flows are encountered on transports at low speed with deployed high-lift devices, at their structural design load conditions, or when transports are subjected to in-flight upsets that expose them to speed and/or angle of attack conditions outside the envelope of normal flight conditions. Such flows can only be simulated using the Navier–Stokes equations. Routine use of CFD based on Navier–Stokes formulations will require further improvements in turbulence models, algorithm, and hardware performance. Improvements in geometry and grid generation to handle complexity such as high-lift slats and flaps, deployed spoilers, deflected control surfaces, and so on, will also be necessary. However, improvements in CFD alone will not be enough. The process of aircraft development, itself, will have to change to take advantage of the new CFD capabilities.

3. The CFD development and application process

In industry, CFD has no value of its own. The only way CFD can deliver value is for it to affect the product. To affect the product, it must become an integral part of the engineering process for the design, manufacture, and support of the product. Otherwise, CFD is just an add-on; it may have some value but its effect is limited. To make CFD an integral part of the Product Development and Support engineering processes, it must get into the hands of the engineers who execute these processes. This is the only way the volume of analysis/design runs necessary to affect the product can be made. Moreover, it is in the Product Development and Support organizations that ownership of the CFD/engineering processes resides, and it is these processes that management relies on when investing billions of dollars in a new airplane development. The CFD developers and ‘‘expert’’ users can certainly contribute, but are only a part of the engineering process.

Getting CFD into ‘‘production’’ use is not trivial––it is frequently a multiyear process. There are five distinct phases in the CFD development process. These are illustrated in Fig. 5.

Phase I produces enabling technology algorithms that provide a basic means for solving a given problem. Phase II, which overlaps Phase I, constitutes the initial attempts to explore, validate, and demonstrate a new computational technology. There are some limited pioneering applications at this stage, but the emerging technology is not yet at a state that will produce significant payoff or impact because the technology is still subject to surprise. Hence, managers and design engineers are unwilling at this point to make important, standalone design decisions based on computed results. Such decisions by users do not happen until well into Phase IV.

Many of the code developments end in the middle of Phase II with a contractor report or scientific paper that proclaims, ‘‘Gee whiz, look what can be done.’’ For many codes, this is a good and natural transfer point for industry to assume responsibility for further development, because most of what must occur beyond this point will be unique to the particular needs of each individual industry organization. Of course, this implies that corporate managers must have the wisdom to understand what they must support to turn such a code into a mature and effective capability that will live up to the ‘‘Gee whiz’’ expectations. That requires the time and investment associated with Phases III and IV.

The main outputs of Phase II are demonstrator codes (useful for computational experiments and demonstrations) combined with a vision of what is really needed. Phase III is aimed at supplying the substance of that vision and usually entails a generalization or other modification of Phase II codes (perhaps complete rewrites) combined with a coupling of front- and back-end interfaces to produce user-friendly, well-understood, and maintainable software. Most commercially available (COTS) codes have reached this stage of development. But even at this stage, their contribution or effect on the corporate bottom line is still minimal because engineers and managers don't yet understand how the existence of this new tool will change the engineering process and what it will be used for. They have yet to gain enough confidence to make important, standalone decisions based on the code. That takes time, exposure, and experience.

In the fourth phase, the payoff or effect of a code grows rapidly. Phase IV entails ‘‘applications research,’’ where design engineers, management, and code developers work together to learn how this new capability will enter into and change the aerodynamic design process. The applications research endeavor requires people with broad backgrounds who can ask the right questions of the algorithm researchers, and code developers who can intelligently question experimental data when test-theory comparisons don't agree. Both must also be good physicists, for it is not unusual to find that the shortcomings lie neither in the experiment nor in the quality of the computations, but in the fact that the theoretical model assumed in the computations was not an adequate description of the real physics. Needs for code refinements that were not anticipated invariably surface during this phase, and these refinements often require more algorithm research, additional geometry preprocessors, and so on. Over time, the requests for additions or refinements diminish until the code settles down to occupy its proper niche in the toolbox, and design engineers and managers have learned the capabilities, limitations, and proper applications of this now-mature code. Without the investments in Phase IV, the enormous payoff of having a mature capability in Phase V will not happen. An attempt to bypass Phase IV by taking a code developed by algorithm researchers and placing it directly in the hands of design engineers, who may not understand the underlying theoretical models, algorithms, and possible numerical idiosyncrasies, usually results in a prolonged period of frustration and unreliability that leads to abandonment of the code.

Product Development engineers must be able to focus on engineering processes and have little time for manipulating the CFD ‘‘process’’ (i.e., codes must be very user oriented). Stable, packaged software solutions enable and promote consistent processes. These not only put CFD into the hands of the Product Development/Product Support engineers but also allow the ‘‘expert’’ user to get fast results with reduced variation. Integrated packaged software solutions combine various components to go from ‘‘lofts to plots’’ in the time scale consistent with a fast-paced engineering program. These packages include scripted packages for ‘‘standard’’ configurations, geometry and grid/paneling generation components, flow solvers, and postprocessing components for analyzing the results. These are all placed under some form of software version control to maintain consistency.

A key component of CFD and most engineering processes is geometry. CAD systems, such as CATIA, dominate most geometry engineering needs. However, these systems are designed for component design and definition and are not well suited to CFD use. A key component of many Boeing Commercial Airplanes CFD processes is AGPS––Aero Grid and Paneling System [6]. AGPS is a geometry software tool implemented as a programming language with an interactive graphical user interface. It can be dynamically configured to create a tailored geometry environment for specific tasks. AGPS is used to create, manipulate, interrogate, or visualize geometry of any type. Since its first release in 1983, AGPS has been applied with great success within The Boeing Company to a wide variety of engineering analysis tasks, such as CFD and structural analysis, in addition to other geometry-related tasks.

Computing resources consisting of high-end computing and graphics workstations must also be integrated. Seamless mass data storage must be available to store the vast amount of information that will be generated during the engineering application. These resources require dedicated computing system administration. The software control and computing system administration are necessary to free the engineers to focus their work on the engineering processes and not be consumed by the ‘‘computing’’ process.

Close customer involvement and acceptance is absolutely essential to deriving value from CFD. Customers are responsible for implementing the engineering process that will use CFD. They own the process, they determine what CFD, if any, they will depend on to carry out their assigned tasks. They are being graded on the engineering tasks they accomplish not on which CFD codes they use. Their use and trust of CFD is based on a long-term relationship between supplier and user. This relationship has engaged the customer early on in demonstrations of a new code or new application of an existing code. Validation is an on-going process, first of cases of interest to the customer, and then of the customer's ability to implement the new tool. Frequently, parallel applications are undertaken in which the customer continues with the existing tools while the supplier/developer duplicates the process with the new tool. This is especially the case when the new tool may enable the development of an entirely new process for executing the engineering task.

The long-term relationship with the customer is essential from another point of view. Until recently, project engineers, without exception, initially rejected every new CFD development that later became the primary CFD analysis and design tool in Boeing Commercial Airplanes Product Development and Product Support organizations. Every new or proposed CFD capability was initially viewed as too difficult to use, too costly to run, not able to produce timely results, not needed, and so on. ‘‘Just fix what we already have,’’ the customer would tell the developers. The customers had a point. Not until the new CFD technology had been integrated with the customer's preprocessing/postprocessing tools and computing system, validated to the customer's program, guaranteed of long-term support, and committed to continuous development and enhancement would the new technology be useful to them.

This made it difficult for the developers to propose new Phase I, II and III efforts. In particular, the initiation and continual defense of Phase I efforts demanded clear and unwavering vision. True vision invariably requires a fundamental understanding of both needs and means. As customers generally did not have the specialized algorithmic knowledge underlying CFD numerics, it was incumbent on the developers to acquire a thorough understanding of customer needs and concerns. The developers learned they could not just throw a new CFD tool over the fence and expect the customer to use it no matter how good it might be. The customer was interested in getting an engineering job done and not in the CFD tool itself! The process of thoroughly understanding customer issues took many years, and early Phase I, II, and III efforts were mostly ‘‘technology push’’ efforts, which had to be funded by NASA or other Government agencies. As these efforts progressed to Phase IV and V, and the developers established a track record for producing useful capabilities, the situation gradually changed.

Each success allowed the developers a little more leeway. Often they spotted ‘‘niche’’ needs that could be satisfied by the introduction of their new technology. It was felt that when the users were satisfied with the usability and utility of the technology in these areas they would then be willing to consider whether or not replacing their old tools in other areas might offer distinct advantages. Once the users accepted a new capability, they often became very innovative and applied the codes in unanticipated ways, perpetually keeping the developers and validation experts in an anxious state. Most of the new applications were, in fact, legitimate, and the developers had to run fast to understand the implications involved as well as to try and anticipate future application directions. As time went on, code developers, application experts, and project engineers began understanding each other's functions and issues, and a certain amount of trust developed. Gradually, CFD became a ‘‘pull’’ rather than ‘‘push’’ technology. This transformation was greatly facilitated by the rotation of top engineers between these functions.

Today in Boeing Commercial Airplanes, more than 20,000 CFD runs a year are made to support product development and the various existing product lines. More than 90% of these runs are done by production engineers outside the research group. The CFD methods in use provide timely results in hours or days, not weeks or months. Sufficient experience with the methods has given management confidence in their results. This means that solutions are believable without further comparison of known results with experiment, that the CFD methods contain enough of the right physics and resolve the important physical and geometric length scales, that the numerics of the method are accurate and reliable, and that the CFD tools are already in place––for there is no time to develop and validate new methods. Most of all, management is convinced that the use of CFD makes economic sense. A look at the history of CFD at Boeing Commercial Airplanes will show how we got to this level of use.

4. Chronology of CFD capability and use

CFD today covers a wide range of capabilities in terms of flow physics and geometric complexity. The most general mathematical description of the flow physics relevant to a commercial transport is provided by the Navier–Stokes equations. These equations state the laws of conservation of mass, momentum, and energy of a fluid in thermodynamic equilibrium. Unfortunately, direct solutions to these equations for practical aircraft configurations at typical flight conditions are well beyond the capabilities of today's computers. Such flows include chaotic, turbulent motions over a very wide range of length scales. Computations for the simulations of all scales of turbulence would require solving for on the order of 10¹⁸ degrees of freedom!
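The 10¹⁸ figure quoted above is consistent with the standard Kolmogorov-scale estimate for direct numerical simulation; a rough version of that arithmetic, assuming a flight Reynolds number on the order of 10⁸ (an assumption made here, since the paper does not spell out the numbers), is:

```latex
% DNS grid-point estimate: the ratio of the largest eddies to the Kolmogorov
% scale is L/\eta \sim Re^{3/4}, so a 3D grid resolving all scales needs
N \;\sim\; \left(\frac{L}{\eta}\right)^{3} \;\sim\; Re^{9/4},
\qquad
Re \sim 10^{8} \;\;\Rightarrow\;\; N \sim \left(10^{8}\right)^{9/4} = 10^{18}.
```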

Fortunately, solutions to simplified (and more tractable) forms of these equations are still of great engineering value. Turbulent flows may be simulated by the Reynolds equations, in which statistical averages are used to describe details of the turbulence. Closure requires the development of turbulence models, which tend to be adequate for the particular and rather restrictive classes of flow for which empirical correlations are available, but which may not be currently capable of reliably predicting behavior of the more complex flows that are generally of interest to the aerodynamicist. Use of turbulence models leads to various forms of what are called the Reynolds-averaged Navier–Stokes equations.
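For reference, the averaging step described here can be written as the Reynolds decomposition and the resulting mean-momentum equation, shown below for incompressible flow for brevity (the compressible forms used in practice are analogous); the unclosed correlation term is what the turbulence models must supply:

```latex
u_i = \overline{u}_i + u_i', \qquad
\frac{\partial \overline{u}_i}{\partial x_i} = 0, \qquad
\frac{\partial \overline{u}_i}{\partial t}
 + \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j}
 = -\frac{1}{\rho}\frac{\partial \overline{p}}{\partial x_i}
 + \frac{\partial}{\partial x_j}\!\left(\nu \frac{\partial \overline{u}_i}{\partial x_j}
 - \overline{u_i' u_j'}\right).
```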

For many aerodynamic design applications, the flow equations are further simplified to make them more amenable to solution. Neglecting viscosity leads to the Euler equations for the conservation of mass, momentum, and energy of an inviscid fluid. Fortunately, under many flight conditions the effects of viscosity are small and can be ignored or simulated by the addition of the boundary layer equations, a much simplified form of the Reynolds-averaged Navier–Stokes equations.
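The inviscid conservation laws referred to in this paragraph are, in standard differential conservation form,

```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot(\rho\mathbf{u}) = 0, \qquad
\frac{\partial (\rho\mathbf{u})}{\partial t}
 + \nabla\!\cdot\!\left(\rho\,\mathbf{u}\otimes\mathbf{u} + p\,\mathbf{I}\right) = 0, \qquad
\frac{\partial (\rho E)}{\partial t}
 + \nabla\!\cdot\!\left[(\rho E + p)\,\mathbf{u}\right] = 0 .
```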

The introduction of a velocity potential reduces the need to solve five nonlinear partial differential equations (that make up the Euler equations) to the solution of a single nonlinear partial differential equation known as the full potential equation. However, the potential approximation assumes an inviscid, irrotational, isentropic (constant entropy) flow. Potential solutions can adequately simulate shock waves as long as they are weak, which is the normal case for commercial transport configurations.

Further simplifications eliminate all the nonlinear terms in the potential equation, resulting in the Prandtl–Glauert equation for linear compressible flows, or the Laplace equation for incompressible flows. The use of these equations is formally justified when the vehicle is relatively slender or thin and produces only small disturbances from freestream flow.
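The chain of simplifications in the preceding paragraphs can be summarized in standard form, with Φ the full potential, φ the small-disturbance perturbation potential, and M∞ the freestream Mach number:

```latex
% Full potential equation (steady, conservation form) with the isentropic
% density relation:
\nabla\!\cdot\!\big(\rho\,\nabla\Phi\big) = 0, \qquad
\rho = \rho_\infty\left[1 + \tfrac{\gamma-1}{2} M_\infty^2
\left(1 - \frac{|\nabla\Phi|^2}{U_\infty^2}\right)\right]^{\frac{1}{\gamma-1}} .
% Small disturbances, \Phi = U_\infty (x + \varphi):
(1 - M_\infty^2)\,\varphi_{xx} + \varphi_{yy} + \varphi_{zz} = 0
\;\;\text{(Prandtl--Glauert)}, \qquad
\nabla^2\varphi = 0 \;\;\text{(Laplace, incompressible limit)} .
```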

In the following sections, we describe the CFD capability most heavily used at Boeing Commercial Airplanes in Seattle over the last 30 years. For the purposes of a rough chronological summary, we can say the following. Before 1973, the main codes employed by project engineers involved linearized supersonic flows with linearized representations of the geometry or else 2D incompressible flows. From 1973 to 1983, panel methods, which could model complex geometries in the presence of linear subsonic and supersonic flows, took center stage. The nonlinear potential flow/coupled boundary layer codes achieved their prime from 1983 to 1993. Their Euler counterparts came into use later in that timeframe. From 1993 to 2003, Reynolds averaged Navier–Stokes codes began to be used with increasing frequency. Clearly, much of the development and demonstration work leading to the widespread use of these codes occurred from five to 10 years earlier than these dates. It is important to note that a considerable length of time is often required for a code to achieve the Phase V level of maturity. It is also important to realize that once a code achieves this level of maturity and is in use and accepted by the user community, it tends to remain in use, even though improved capability at the Phase III or IV level may be available.

The Boeing panel code, A502, remains in some use today, even though its underlying technology was developed almost 30 years ago. The full potential code TRANAIR still receives widespread and heavy use.

4.1. Linear potential flow

4.1.1. First generation methods––early codes

The flow physics described by the early linear methods were greatly simplified compared to the ‘‘real’’ flow. Similarly, the geometric fidelity of the actual configuration also had to be greatly simplified for the computational analysis to fit within the speed and size constraints of the computers of that era. In spite of such seemingly hopeless limitations, these early CFD methods were successfully applied during the supersonic transport development programs of the late 1960s––the Anglo-French Concorde and the United States/Boeing SST. The need for computational help in the aerodynamic development of these aircraft stemmed from two factors. First, there was the relative lack of experience in designing supersonic cruise aircraft (the first supersonic flight had occurred only 15 years earlier). Second, there is great sensitivity of supersonic wave drag to details of the aircraft design. Thus, the challenge of developing a viable low-drag design through empirical ‘‘cut and try’’ demanded whatever computational help was available. The opportunity to use simplified computational methods resulted because the design requirements for low supersonic wave drag led to thin, slender vehicles that minimized ‘‘perturbing’’ the airflow. These characteristics were consistent with the limitations of the linearized supersonic theory embedded in the early CFD codes. These codes included TA80 [7], a Supersonic Area Rule Code based on slender body theory; TA139/201 [8], a Mach Box Code based on linearized supersonic theory; and TA176/217 [9], a Wing-Body Code based on linear potential flow theory with linearized geometry representations. These codes ran on IBM7094 machines. The good agreement with test data predicted by these linear theory methods for a drag polar of the Boeing SST model 733-290 is shown in Fig. 6. This was a linear theory optimized design of the configuration that allowed Boeing to win the SST design development Government contract. The resulting supersonic transport designs ended up looking as they did, in part, because the early CFD codes could not handle more geometrically complex configurations.

The linear aerodynamics of the Wing-Body Code was later combined with linear structural and dynamic analysis methods in the FLEXSTAB [10] system for the evaluation of static and dynamic stability, trim state, inertial and aerodynamic loading, and elastic deformations of aircraft configurations at supersonic and subsonic speeds. This system was composed of a group of 14 individual computer programs that could be linked by tape or disk data transfer. The system was designed to operate on CDC-6000 and -7000 series computers and on the IBM 360/370 computers. A very successful early application of FLEXSTAB was the aeroelastic analysis of the Lockheed YF-12A as part of the NASA Flight Loads program. Thirty-two flight test conditions ranging from Mach 0.80 to 3.0 and involving hot or cold structures and different fuel loading conditions were analyzed at several load factors [11].

4.1.2. First generation methods––TA230

By 1973, 3D subsonic panel methods were beginning to affect the design and analysis of aircraft configurations at Boeing. Subsonic panel methods had their origins with the introduction of the Douglas Neumann program in 1962 [12]. This program was spectacularly successful for its time in solving the 3D incompressible linear potential flow (Laplace) equation about complex configurations using solid wall (Neumann) boundary conditions. The numerical method represented the boundary by constant strength source panels with the strengths determined by an influence coefficient equation set relating the velocities induced by the source panels to the boundary conditions. The lack of provision for doublet panels limited the class of solutions to those without potential jumps and hence without lift. One of the first computer programs for attacking arbitrary potential flow problems with Neumann boundary conditions [13,14] combined the source panel scheme of the Douglas Neumann program with variations of the vortex lattice technique [15]. This program became known as the Boeing TA230 program. A very useful feature of this program was the ability to handle, in a logical fashion, any well-posed Neumann boundary value problem. From its inception, the method employed a building block approach wherein the influence coefficient equation set for a complex problem was constructed by simply assembling networks appropriate to the boundary value problem. A network was viewed as a paneled surface segment on which a source or doublet distribution was defined, accompanied by a properly posed set of Neumann boundary conditions. The surface segment could be oriented arbitrarily in space and the boundary conditions could be exact or linearized. Several doublet network types with differing singularity degrees of freedom were available to simulate a variety of physical phenomena producing discontinuities in potential. Compressibility effects were handled through scaling. These features combined to allow the analysis of configurations having thin or thick wings, bodies, nacelles, empennage, flaps, wakes, efflux tubes, barriers, free surfaces, interior ducts, fans, and so on.
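To make the influence-coefficient idea concrete, the sketch below sets up a minimal 2D constant-strength source panel method in Python for non-lifting flow past a circular cylinder. It is only an illustration of the general scheme (simple quadrature in place of the analytic panel integrals, no doublet networks, no compressibility scaling) and is not a reconstruction of the Douglas Neumann or TA230 codes:

```python
import numpy as np

# Minimal 2D constant-strength source panel method (non-lifting cylinder).
N = 60                                              # number of panels
th = np.linspace(0.0, 2.0 * np.pi, N + 1)           # counterclockwise endpoints
xe, ye = np.cos(th), np.sin(th)
xc, yc = 0.5 * (xe[:-1] + xe[1:]), 0.5 * (ye[:-1] + ye[1:])   # control points
dx, dy = np.diff(xe), np.diff(ye)
slen = np.hypot(dx, dy)
nx, ny = dy / slen, -dx / slen                      # outward unit normals
tx, ty = dx / slen, dy / slen                       # unit tangents
Uinf = 1.0                                          # freestream along +x

def induced(xp, yp, j, nsub=20):
    """Velocity at (xp, yp) due to a unit-strength source sheet on panel j,
    integrated by midpoint quadrature over nsub sub-segments."""
    s = (np.arange(nsub) + 0.5) / nsub
    rx, ry = xp - (xe[j] + s * dx[j]), yp - (ye[j] + s * dy[j])
    r2 = rx * rx + ry * ry
    w = slen[j] / (2.0 * np.pi * nsub)
    return w * np.sum(rx / r2), w * np.sum(ry / r2)

# Influence coefficients: normal velocity at control point i per unit source
# density on panel j.  The self-influence term is the sheet jump sigma/2.
A = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i == j:
            A[i, j] = 0.5
        else:
            du, dv = induced(xc[i], yc[i], j)
            A[i, j] = du * nx[i] + dv * ny[i]

sigma = np.linalg.solve(A, -Uinf * nx)              # impermeability condition

# Checks: net source strength ~ 0 for a closed body; surface speed ~ 2*Uinf.
vt = Uinf * tx
for i in range(N):
    for j in range(N):
        if j != i:
            du, dv = induced(xc[i], yc[i], j)
            vt[i] += sigma[j] * (du * tx[i] + dv * ty[i])

print("net source strength (should be ~0):", np.sum(sigma * slen))
print("max surface speed (exact value 2.0):", np.abs(vt).max())
```

For a closed body the solved source strengths integrate to roughly zero, and the computed surface speed approaches the exact cylinder value of twice the freestream speed as the panel count increases.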

By 1973, Boeing had acquired a CDC 6600 for scientific computing, which allowed the TA230 program to solve problems involving hundreds of panels. This was sufficient to model full configurations with the fidelity necessary to understand component interactions.

One of the most impressive early uses of the TA230 code was in the initial design phase of the B747 Space Shuttle Carrier Aircraft (SCA). The purpose of the initial design phase was to define the modifications needed to accomplish the following missions: to ferry the Space Shuttle Orbiter; to air-launch the Orbiter; and to ferry the external fuel tank. To keep the cost of the program to a minimum, CFD was extensively used to investigate the Orbiter attitude during the ferry mission, the Orbiter trajectory and attitude during the launch test, and the external tank location and attitude during the ferry mission. At the conclusion of the design phase, the final configurations selected were tested in the wind tunnel to verify predictions. A typical example of a paneling scheme of the B747 with the Space Shuttle Orbiter is depicted in Fig. 7. In this example, the Orbiter incidence angle was 8 deg with respect to the B747 reference plane. The predicted lift coefficient, CL, as a function of wing angle of attack for this configuration is shown in Fig. 8. The agreement between the analyses and wind tunnel data shown in this figure is excellent.

TA230 was used with TA378 [16], a 3D Vortex Lattice Method with design/optimization capability, to develop winglets for a KC-135 aircraft. Wind tunnel tests confirmed a 7–8% reduction in airplane drag due to the installation of these winglets [17].

Another early CFD success was the improvement of the understanding of the interference drag of a pylon-mounted engine nacelle under the wing. The existence of unwanted interference drag had been revealed by wind tunnel testing, but the physical mechanism of the interference was still unknown. To avoid the interference drag, it is common practice to move the engine away from the wing. The resulting additional weight and drag due to the longer engine strut must be balanced against the potential interference drag if the engine is moved closer to the wing. CFD studies with TA230 along with specialized wind tunnel testing in the mid-1970s, provided the necessary insight into the flow mechanism responsible for the interference. This understanding led to the development of design guidelines that allowed closer coupling of the nacelle to the wing [18]. The Boeing 757, 767, 777, 737-300/400/500 series, Next Generation 737/600/700/800/900 series, and the KC-135R are all examples of aircraft where very closely coupled nacelle installations were achieved without incurring a significant drag penalty.

4.1.3. Second generation linear potential flow method––PANAIR/A502

The success of the TA230 code in modeling complete vehicle configurations and component interactions created a strong demand among Boeing aerodynamicists for CFD analyses and was undoubtedly the key factor that initiated the paradigm shift toward acceptance of CFD as an equal partner to the wind tunnel and flight test in the analysis and design of commercial aircraft. However, the paradigm shift was slowed by the fact that the code had to be run by experts possessing specialized knowledge, some of which was totally unrelated to aerodynamics. In fact, it often took weeks requiring the expertise of an engineer having months or years of experience with the method to set up and run a complex configuration. To some extent this was unavoidable; to correctly model a complex flow for which no previous user experience was available, the engineer had to understand the mathematical properties and limitations of potential flow. Nevertheless, once the boundary value problem was formulated, the user still had to contend with certain numerical idiosyncrasies and inefficiencies that required adherence to stringent paneling rules, frequently incompatible with the complex geometrical contours and rapidly changing aerodynamic length scales of the vehicle under analysis. Such difficulties were directly related to the use of flat panels with constant source and doublet strengths. Methods employing these features were quite sensitive to panel layout. Numerical problems arose when panel shapes and sizes varied, and fine paneling in regions of rapid flow variations often forced fine paneling elsewhere. In addition, excessive numbers of panels were often required since numerical accuracy was strongly affected by local curvature and singularity strength gradient. These problems placed severe limitations on the development of automatic panelers and other complementary aids aimed at relieving the user of the large amount of handwork and judgments associated with producing accurate numerical solutions.

Consequently, a method was developed under contract to NASA to enhance practical usability by improving upon the flat, constant singularity strength panels employed in the construction of networks [19]. This method featured the use of curved panels and higher order distributions of singularities. Source and doublet strengths were defined by least square fits of linear and quadratic splines to discrete values located at specific points on the networks. Higher order influence coefficients were obtained using recursion relations with the standard low order coefficients as initial conditions. Boundary conditions were enforced at the same or other discrete locations depending on their type. Virtually any boundary condition that made sense mathematically was provided for. In particular, the incorporation of Dirichlet boundary conditions not only offered the opportunity to design surface segments to achieve desired pressure distributions, but also clarified the nature of the boundary value problem associated with modeling viscous wakes and propulsion effects. Robin boundary conditions provided for the modeling of slotted walls, which allowed for direct comparisons of CFD results with wind tunnel data. These features were incorporated in the NASA code known as PANAIR and the Boeing code known as A502. The latter code was generalized to treat supersonic flows [20], free vortex flows [21], and time harmonic flows [22]. In the supersonic case, upwinding was achieved by forward weighting the least square singularity spline fits.
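To illustrate, in a highly simplified form, the kind of local least-squares spline fit described here, the sketch below fits a quadratic in two local panel coordinates to discrete doublet values at neighboring points. The 3x3 stencil, its sample values, and the coordinates are invented for the example; the actual A502/PANAIR fitting stencils and recursion relations are not reproduced:

```python
import numpy as np

# Least-squares fit of a local quadratic doublet distribution
#   mu(xi, eta) = a0 + a1*xi + a2*eta + a3*xi^2 + a4*xi*eta + a5*eta^2
# to discrete doublet values at neighboring grid points (illustrative only).

# Hypothetical local panel coordinates and doublet samples on a 3x3 stencil
xi, eta = np.meshgrid([-1.0, 0.0, 1.0], [-1.0, 0.0, 1.0])
xi, eta = xi.ravel(), eta.ravel()
mu_samples = 0.3 + 0.5 * xi - 0.2 * eta + 0.1 * xi * eta + 0.05 * np.random.randn(9)

# Design matrix for the six quadratic basis functions, solved in a least-squares
# sense (9 samples, 6 unknowns).
A = np.column_stack([np.ones_like(xi), xi, eta, xi**2, xi * eta, eta**2])
coeffs, *_ = np.linalg.lstsq(A, mu_samples, rcond=None)

# The fitted polynomial can be differentiated analytically, which is what makes
# higher-order influence coefficients tractable.
print("fitted quadratic coefficients:", np.round(coeffs, 3))
```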

The numerics incorporated into A502 solved a number of usability issues. Fig. 9 clearly demonstrates the relative insensitivity and stability of computed results to paneling. This insensitivity encouraged project users to apply the code and trust results. In addition, the boundary condition flexibility allowed users to experiment with various types of modeling, leading to a wide variety of applications never entirely envisioned by the developers.

The versatility of A502 paid off when a ‘‘surprise’’ was encountered during the precertification flight testing of the then new 737-300. The aircraft was not demonstrating the preflight wind tunnel based prediction of take-off lift/drag ratio. A fix was needed quickly to meet certification and delivery schedules. Specialized flight testing was undertaken to find the cause and to fix the performance shortfall. A CFD study was immediately undertaken to enhance understanding and provide guidance to the flight program. Eighteen complete configuration analyses were carried out over a period of three months. These included different flap settings, wind tunnel and flight wing twist, flow through and powered nacelle simulations, free air and wind tunnel walls, ground effect, seal and slotted flaps, and other geometric variations [23]. These solutions explained and clarified the limitations of previous low-speed wind tunnel test techniques and provided guidance in recovering the performance shortfall through ‘‘tuning’’ of the flap settings during the flight testing. The aircraft was certified and delivered on schedule. A comparison of the computed L/D predictions with flight is shown in Fig. 10.

A502 studies have been used to support other flight programs on a time-critical basis. In particular, the code was used to support engine/airframe installation studies in the early 1980s [24], to evaluate wind tunnel tare and interference effects, and to provide Mach blockage corrections for testing large models. In addition, the code was used for the design of the wingtip pod for the Navy E6-A, a version of the Boeing 707. No wind tunnel testing was done before flight. The FAA has accepted A502 analysis for certification of certain aircraft features that were shown to have minimal change from previous accepted standards. Finally, A502 was used to develop a skin waviness criterion and measurement technique that led to the virtual elimination of failed altimeter split testing during the first flight of every B747-400 aircraft coming off the production line. Initially, one of every three aircraft was failing this test, requiring several days of downtime to fix the problem. The A502-based procedure could identify excessive skin waviness before first flight and led to manufacturing improvements to eliminate the root cause of the problem.

A502 is still used today to provide quick estimates for preliminary design studies. A relatively new feature of the code takes advantage of available linear sensitivities to predict a large number of perturbations to stability and control characteristics and stability derivatives, including control surface sensitivities. Virtual control surface deflections and rotary dynamic derivatives are modeled through surface panel transpiration. Stability derivatives, such as the lift curve slope or directional stability, are calculated automatically. A typical application may involve 20 subcases submitted in a single run, with solutions available in an hour or so. Within the limitations of the code, all major stability and control derivatives can be generated in a single run (at a single Mach). The method is typically used to calculate increments between similar configurations. The code was recently used to calculate stability and control increments between a known baseline and a new configuration. A total of 2400 characteristics were computed for eight configurations by one engineer in a two-day period!

4.2. Full potential/coupled boundary layer methods

4.2.1. A488/A411

Since Murman and Cole [25] introduced a numerical solution method for the transonic small disturbance equation in the early 1970s, computational fluid dynamics method development for nonlinear flows has progressed rapidly. Jameson and Caughey [26] formulated a fully conservative, rotated finite volume scheme to solve the full potential equation––the well-known FLO27/28 codes. The Boeing Company acquired the codes and invested a significant amount of effort to advance the capability from Phase II to Phase V. Convergence reliability and solution accuracy were enhanced. To allow transonic analyses over complex transport configurations, a numerical grid generation method based on Thompson's elliptic grid generation approach [27] was developed [28] and tested extensively for wing or nacelle alone, wing-body, and wing-body-strut-nacelle configurations. The potential flow solvers FLO27/28 coupled with the 3D finite difference boundary layer code A411 [29] and the 3D grid generation code formed the major elements of the Boeing transonic flow analysis system, A488––the most heavily used analysis code at Boeing from the late 1970s to the early 1990s. The production version of the A488 system, illustrated in Fig. 11, included a number of preprocessing and postprocessing programs that could handle the complete analysis process automatically for specific configuration topologies––a truly useable code for design engineers. This integrated package combined the various software components to go from ‘‘lofts to plots’’ in the time scale consistent with a fast-paced engineering program––overnight!

Fig. 12 shows a comparison of A488 results obtained by project engineers with wing pressure distributions measured in flight on a 737-300. The computational model consisted of the wing, body, strut, and nacelle. The wing definition included the estimated aeroelastic twist for the condition flown. Although the character of the pressure distribution on the wing changes dramatically across the span, the computational results agree reasonably well with the measured data.

The Boeing Propulsion organization also employed a full potential/coupled boundary layer code called P582. It was developed at Boeing and used a rectangular grid [30] and multigrid acceleration scheme [31]. P582 was used extensively for engine inlet simulation and design in the late 1970s and 1980s and is still used in the Propulsion organization for various nacelle inlet simulations.

4.2.2. TRANAIR

By 1983, complex configurations were routinely being analyzed by project engineers using panel methods. Surface geometry generation tools were maturing, and users took for granted the ability to add, move, or delete components at will; readily change boundary condition types; and obtain numerically accurate solutions at reasonable cost in a day or two. On the other hand, the nonlinear potential flow codes required expert users and considerable flow time to obtain converged and accurate results on new and nonstandard configurations. Often, geometrical simplifications had to be made, jeopardizing the validity of conclusions regarding component interactions. Clearly, the nonlinear nature of the flow was responsible for numerous difficulties. The development of shocks in the flowfield prolonged convergence, especially if the shocks were strong and prematurely set in the wrong location. Moreover, weak and double shocks were often not captured accurately, if at all. Boundary layer coupling contributed problems as well, especially as separation was approached. Often, the boundary layer displacement effect had to be fixed after a certain number of iterations, leading to questionable results. Experts became very good at circumventing many of these problems; however, the one problem that could not readily be overcome was the necessity to generate a volume grid to capture nonlinear effects.

Even today, volume grid generation is one of the main barriers to routine use of nonlinear codes. Often the creation of a suitable grid about a new complex configuration can take weeks, if not months. In the early 1980s, the situation was far worse, and suitable grids were readily available only for standard and relatively simple configurations. Because of the enormous promise demonstrated by existing nonlinear methods, the panel method developers at Boeing were awarded a contract from NASA to investigate alternatives to surface fitted grid generation. In the next few paragraphs, we describe some of the technical issues that arose during this contract. They are of interest to this paper in that they followed directly from a ‘‘needs and usability’’ starting point rather than the usual ‘‘technology discovery’’ starting point. To a large extent, this has characterized the CFD development efforts at Boeing.

The developers started with a rather naïve approach, i.e., take an A502 paneling, with which the project users were already familiar, and embed it in a uniform rectangular grid to capture nonlinear effects (Fig. 13). This approach logically led to a sequence of subproblems that had to be addressed in turn [32]. First, one could hardly afford to extend a uniform grid into the far field to ensure proper far field influence. However, if the flow was assumed to be linear outside a compact region enclosing the configuration, one could use linear methods to obtain the far field influence. A discrete Green's function for the Prandtl–Glauert equation was constructed, which incorporated the effect of downstream sources and sinks resulting from wakes. This Green's function was applied using FFTs and the doubling algorithm of Hockney [33], a standard technique in astrophysics. The net effect was the same as if the uniform grid extended all the way to infinity, the only approximation being the assumption of linearity outside a compact box. As a byproduct of this solution, the user no longer had to estimate a suitable far field stretching ratio.
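The far-field device described here (a free-space Green's function applied with FFTs and Hockney-style domain doubling) can be illustrated with a small 2D Poisson example, since the subsonic Prandtl–Glauert operator reduces to a Laplacian after a coordinate scaling. The grid size, the Gaussian source, and the self-cell value below are arbitrary illustrative choices, not anything taken from TRANAIR:

```python
import numpy as np

# Hockney-style free-space convolution on a uniform grid: convolve a compact
# source with a tabulated free-space Green's function on a doubled, zero-padded
# grid so that the periodic FFT behaves like an unbounded domain.

n = 64
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / 0.005)     # compact source

# Free-space Green's function ln(r)/(2*pi) of the 2D Laplacian, tabulated on the
# doubled grid using the minimum-image separation per axis.
m = 2 * n
d = np.minimum(np.arange(m), m - np.arange(m)) * h
DX, DY = np.meshgrid(d, d, indexing="ij")
R = np.hypot(DX, DY)
G = np.zeros((m, m))
G[R > 0] = np.log(R[R > 0]) / (2.0 * np.pi)
G[0, 0] = (np.log(h / 2.0) - 1.0) / (2.0 * np.pi)          # crude self-cell value

# Zero-pad the source, multiply transforms, invert, and crop to the physical grid.
fpad = np.zeros((m, m))
fpad[:n, :n] = f
phi = np.real(np.fft.ifft2(np.fft.fft2(G) * np.fft.fft2(fpad)))[:n, :n] * h * h

# Far from the source the result should match a single equivalent point source.
total = f.sum() * h * h
r_corner = np.hypot(x[0] - 0.5, x[0] - 0.5)
print("phi at corner     :", phi[0, 0])
print("monopole estimate :", total * np.log(r_corner) / (2.0 * np.pi))
```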

The next problem that had to be addressed was how to handle the intersections of the grid with the paneling and how to apply boundary conditions. The developers decided to use a finite element approach based on the Bateman variational principle [34]. Upwinding was achieved by factoring the density at the centroid of the elements out of the stiffness integrals and then biasing it in an upwind direction. The elements intersecting the paneled boundary were assumed to have linear basis functions regardless of their shapes. Stiffness matrix integrals were then evaluated over the subset of the elements exposed to the flowfield. The integration was performed recursively using volume and then surface integration by parts. Additional surface integrals were added to impose the same variety of boundary conditions as available in A502.

The main problem with a uniform rectangular grid is its inability to capture local length scales of the geometry and flow. Consequently, grid refinement was an absolutely necessary feature of the approach. However, it was felt that solution adaptive grid refinement was necessary in any event to ensure accuracy, especially if the code was to be used by project engineers without the aid of the developers. The refinement mechanism was relatively straightforward, just divide each rectangular grid box into eight similar boxes (Fig. 14) and keep track of the refinement hierarchy using an efficient oct-tree data structure.
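
A minimal sketch of such a refinement hierarchy follows (author's illustration; the `Box` class, the point-based refinement flag, and the depth limit are assumptions, not TRANAIR data structures). Each flagged box is split into eight similar children, and the oct-tree is just the parent/child bookkeeping.

```python
import numpy as np

class Box:
    """One cell of the refinement hierarchy: an axis-aligned cube that can be
    split into eight similar children tracked by a simple oct-tree."""
    def __init__(self, corner, size):
        self.corner = np.asarray(corner, float)     # minimum corner (x, y, z)
        self.size = float(size)                     # edge length
        self.children = []                          # empty list => leaf cell

    def refine(self, needs_refinement, max_depth, depth=0):
        """Recursively split every box flagged by the error indicator."""
        if depth >= max_depth or not needs_refinement(self):
            return
        half = 0.5 * self.size
        for shift in np.ndindex(2, 2, 2):           # the eight sub-boxes
            child = Box(self.corner + half * np.array(shift), half)
            self.children.append(child)
            child.refine(needs_refinement, max_depth, depth + 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# toy "error indicator": refine any box containing one point of interest,
# a stand-in for a flow-feature or geometry-based sensor
target = np.array([0.3, 0.2, 0.7])
flag = lambda box: bool(np.all((box.corner <= target) & (target <= box.corner + box.size)))
root = Box((0.0, 0.0, 0.0), 1.0)
root.refine(flag, max_depth=5)
print(len(root.leaves()))                           # 36 leaf cells
```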

Development of a suitable error indicator was another matter, however. Mathematical theory certainly offered guidance here, but a surprising amount of engineering knowledge had to be injected into the process. A typical ‘‘gotcha’’ with a pure mathematical approach was the tendency of the refinement algorithm to capture the precise details of a wing tip vortex all the way from the trailing edge to the end of a wind tunnel diffuser.

The existence of refined grids complicated the design of a solution algorithm. Multigrid methods were somewhat of a natural here, but the developers were partial to direct solvers, as they had turned out to be so flexible for the panel codes, especially when it came to implementing unusual boundary conditions and coupling boundary layer equations and unknowns. They adopted a damped Newton method approach, with the Jacobian solved using a preconditioned GMRES iterative algorithm. A sparse direct solver was used as a preconditioner. Even with nested dissection ordering, the cost and storage for a complete factorization was prohibitive; hence they settled on the use of an incomplete factorization employing a dynamic drop tolerance approach, whereby small fill-in elements were dropped as they were formed. The method was surprisingly efficient and robust. As a rule, decomposition of the Jacobian resulted in fill-in factors of less than two and constituted less than 10% of the total run cost, even for grids having more than a million nodes.
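
The same solver strategy can be sketched with standard sparse linear algebra (a minimal illustration assuming SciPy; the toy residual, drop tolerance, and fill factor are placeholders, not TRANAIR's values): factor the Jacobian incompletely with a drop tolerance, use it to precondition GMRES, and damp the Newton step until the residual decreases.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def damped_newton(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Damped Newton iteration: the sparse Jacobian is incompletely factored
    with a drop tolerance (spilu) and used to precondition GMRES, echoing
    the solver strategy described above."""
    x = np.asarray(x0, float).copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = sp.csc_matrix(jacobian(x))
        ilu = spla.spilu(J, drop_tol=1e-5, fill_factor=10)      # incomplete LU
        M = spla.LinearOperator(J.shape, matvec=ilu.solve)      # preconditioner
        dx, _info = spla.gmres(J, -r, M=M)
        lam = 1.0                                               # damping: halve the step
        while lam > 1e-4 and np.linalg.norm(residual(x + lam * dx)) >= np.linalg.norm(r):
            lam *= 0.5                                          # until the residual drops
        x = x + lam * dx
    return x

# tiny demo: solve x_i**3 + x_i - 1 = 0 componentwise (diagonal Jacobian)
f = lambda x: x**3 + x - 1.0
Jf = lambda x: sp.diags(3 * x**2 + 1.0)
print(damped_newton(f, Jf, np.zeros(100))[:3])                  # ~0.6823 each
```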

Early versions of TRANAIR used the A411 boundary layer code in an indirectly coupled mode in much the same manner as A488. However, the desired convergence reliability was never achieved, and the shock boundary layer interaction model was occasionally suspect. About this time, Drela [35] developed an exceedingly accurate 2D integral boundary layer that he directly coupled with his 2D Euler solver. With Drela's help, the TRANAIR development team modified this boundary layer to incorporate sweep and taper effects and integrated it into the code. In this connection, the use of a direct solver was invaluable. The resultant code turned out to be very accurate for transport configurations and agreement with experiment was considered by project users to be quite remarkable.

As TRANAIR received increasing use, a number of enhancements were added. To model powered effects, regions of non-freestream but constant total temperature and pressure were simulated along with appropriate shear layer effects [36]. Far field drag calculations were added, which later led to the ability to perform aerodynamic optimization. Time harmonic capability was created for stability and control calculations. Aeroelastic effects were simulated by adding structural unknowns and equations to the system [37]. Here again the use of a sparse solver was invaluable.

Without question, the development of the TRANAIR code strongly benefited from the work and experiences of CFD pioneers such as Murman [25], Jameson [26], Hafez [38], Cebeci [39], McLean [29], Drela [35], and others. Nevertheless, about 10 major and 30 minor algorithms had to be developed or adapted. A few were quite far from the mainstream CFD efforts of the time and required considerable effort. It took almost five years of research and development before a truly useful result could be produced (1989). The TRANAIR code ultimately evolved into the Boeing workhorse aerodynamic code of the 1990s and up to the current time for analyzing flows about complex configurations. TRANAIR was heavily used in the design of the 777, the 737NG, and all subsequent modifications and derivatives to the Boeing Commercial Airplanes fleet. Since 1989, it has been run to completion more than 70,000 times on an enormously wide variety of configurations, some of which were not even vehicles. It has had about 90 users in Boeing. An older version of the code was used by NASA, the Air Force, the Navy, and General Aviation. In 2002, TRANAIR was run to completion at Boeing more than 15,000 times, which is considerable use for a complex geometry CFD code. If we had to choose one single technical feature of TRANAIR that was responsible for such widespread use, we would choose solution adaptive grid refinement. In retrospect, while this feature was intended to improve accuracy, its main benefit was to greatly relieve the user of the burdensome and labor-intensive task of generating a volume grid.

Even with substantially simplified gridding requirements, preparing the inputs for a general geometry CFD code and processing the outputs are still formidable tasks. An essential enabler for TRANAIR has been the development of a packaged process for inputting ‘‘standard’’ configurations. By ‘‘standard,’’ we mean those configuration types that have been scripted in the various components that make up the process. Configurations not included in the ‘‘standard’’ set can still be analyzed but will not benefit from the same degree of automation. This package, illustrated in Fig. 15, is compatible with, and takes advantage of, common Boeing Commercial Airplanes processes for geometry and postprocessing. At the center of this process is the TRANAIR flow solver. AGPS scripts have been developed to automate the paneling of ‘‘standard’’ configurations from AGPS lofts. AGPS scripts have also been developed to generate the input deck for the TRANAIR solver. These inputs define the flight conditions, solution adaptive gridding strategy, and the boundary layer inputs for ‘‘standard’’ configurations. A UNIX script is available to generate the various job control files to execute the solver on several types of computers. The TRANAIR solver generates several files for restarts of the solver and output processor, output files for various aerodynamic parameters, and a file for flowfield parameters. A special-purpose code, compatible with the unique TRANAIR grid structure, is available to view the flowfield properties. The package enables setting up and submitting for solution a ‘‘standard’’ configuration from AGPS lofts in one or two hours. Complete solutions from ‘‘lofts to plots’’ are frequently available in less than 12 h. ‘‘Standard’’ configurations include transport configurations such as, for example, a four-engine 747-like aircraft with underwing struts and nacelles, vertical and horizontal stabilizers, and boundary layer modeling on both wing and body.

During the aerodynamic design of the Boeing 777 in the early 1990s, the risk of significant interference drag due to the exhaust from the large engines was revealed through TRANAIR analysis. Neither the earlier linear-based CFD methods nor conventional wind tunnel testing techniques, which did not simulate the exhaust, would have detected this potential problem. Only a very expensive powered-nacelle testing technique could assess these interference effects. Three different manufacturers' engines were being considered for the new aircraft. Using the powered testing technique to develop the engine installations would have added considerable expense. Moreover, such a wind tunnel based development would have had unacceptable design flow time. Nonlinear transonic TRANAIR analysis by the product development engineers made it practical to address these installation problems, including the effects of the engine exhaust flows, in a timely manner. Had these problems gone undetected until late in the aircraft's development, when the powered testing is usually done, any fixes would have been extremely expensive to implement.

Fig. 16 shows a comparison of TRANAIR results with test data from a similar configuration. TRANAIR's ability to provide insight into design changes allowed a close ‘‘Working Together’’ relationship between the various Boeing engineering disciplines and the engine manufacturers. It is noteworthy that the exhaust systems of all three engine models are very similar in design, a feature found only on the 777. Key to the success of this application was the ability to model enough of the relevant physics and to provide solutions quickly enough to support the development schedule. The effect of CFD on the project was to provide information facilitating a closer working relationship between design groups. This enabled detecting problems early in the development process, when fixing or avoiding them was least expensive.

TRANAIR continues to see extensive use as the primary tool for transonic aerodynamic evaluation and design of commercial aircraft configurations. It is well suited for analysis in the attached and mildly separated flow portion of the flight envelope. For conditions with strong viscous interactions, one must resort to using the Navier–Stokes equations.

4.2.3. BLWF

The BLWF code was developed by researchers at the Central Aerohydrodynamic Institute (TsAGI) and enhanced under contract with the Boeing Technology Research Center in Moscow, CIS [40]. It saw its first use at Boeing in 1994. The BLWF technology was very similar to the technology of the A488 system that had been developed internally at Boeing. However, it differed from A488 in that it had been designed and tuned for workstations and, later, PC computing systems, instead of the large vector supercomputers that had been the main computational modeling tool within Boeing Commercial Airplanes. The tool was very responsive, providing solutions within minutes, rather than hours. The rapidity of response, along with the significant cost-of-use reduction from hosting on less expensive hardware systems, changed the nature of use of the modeling tool. New applications, such as Reynolds number corrections for wing loads, have become feasible with such a tool. This application requires solutions for about a dozen Mach numbers over a range of angles of attack (five to ten). Use of BLWF allows a database of hundreds of solutions to be generated in a matter of a few hours, rather than days or weeks. The code has also been used extensively in the preliminary design stage of aircraft definition. At this point in the airplane development cycle, there are typically a large number of significant changes in the aircraft definition, along with a need to understand the behavior of the configuration over a large range of conditions. BLWF allows more realistic modeling of the flight characteristics than other Preliminary Design methods and also provides an ability to obtain the information rapidly, allowing more effective cycling of the preliminary design through the evolution of an aircraft.

4.3. Euler/coupled boundary layer methods

The use of full potential/boundary layer coupling codes reaches its limit in predicting airplane performance at off-design conditions, where significant shock-induced flow separations or vortex flows generated from sharp edges of the configuration occur in the flowfield. The boundary layer approximation breaks down, and the irrotational/isentropic flow assumption is not a good approximation for such flow conditions. Moreover, wake locations must be estimated a priori, preventing the accurate analysis of flows where vortex interactions are an important feature.

Algorithm research in the early 1980s focused on solution of the Euler equations––the governing equations for inviscid fluid flows. The Boeing version of an Euler/boundary layer coupling code––A588––is based on FLO57 [41] coupled with the same boundary layer code A411 used in A488. The code also introduced a capability for simulating engine inlet and exhaust flows with various total pressures and total temperatures, as well as propfan engine power effects through the use of an actuator disk concept. A588 was the main analysis tool for isolated nacelle development studies until very recently. It provided accurate predictions of nacelle fan cowl pressure distributions, as well as fan cowl drag rise. The multiblock 3D Euler code was used extensively for the simulation of the propfan engine on the Boeing 7J7 program during the mid-1980s, as shown in Fig. 17. A key application was the evaluation of propfan engine installation effects on tail stability characteristics––including simulations that could not be accomplished in the wind tunnel.

Another Euler/integral boundary layer coupling code––A585, based on Drela and Giles [42]––was developed in the mid-1980s for 2D airfoil analysis and design. This code has been used extensively for advanced airfoil technology development, an essential capability for airplane product development engineers.

4.4. Navier–Stokes methods

The limitation of full potential or Euler/boundary layer coupling codes to flow regimes without significant flow separation led to the development and application of solutions of the Navier–Stokes equations, which are valid over the whole range of flight regimes for most commercial airplanes. Finite difference schemes [43] or finite volume schemes with either artificial numerical dissipation [44] or Roe's upwind scheme [45] were developed and tested extensively during the late 1980s and early 1990s. At the same time, development of turbulence models for attached and separated flow simulations progressed rapidly. The simple zero-equation Baldwin/Lomax model [46] was used extensively during the early stage of Navier–Stokes code applications. Later on, the Baldwin/Barth one-equation model [47] and the Spalart/Allmaras one-equation model [48], together with Menter's shear-stress transport k–ω model [49], became available and were used for a wide range of flight conditions including massively separated flows.

4.4.1. Structured grid codes––Zeus TLNS3D/CFL3D, OVERFLOW

Navier–Stokes technology using structured grids was well developed by the early 1990s and available to the industry. However, most existing structured grid Navier–Stokes codes require the users to provide high-quality 3D grids to resolve detailed viscous flows near configuration surfaces and viscous wake regions. The task of grid generation––both surface grid and field grid––has become one of the essential elements, as well as the bottleneck, in using Navier–Stokes technology for complex configuration/complex flow analysis. In addition, most Navier–Stokes solvers have not been thoroughly checked out and validated for numerical accuracy, convergence reliability, and application limitations. Boeing has acquired several Navier–Stokes codes from NASA, as well as from other research organizations, and has devoted a great deal of effort to testing the codes and validating numerical results against available wind tunnel and flight data. In addition, to make the codes usable tools for engineering design, Boeing CFD developers have rewritten a 3D grid generation code using an advancing front approach [50], so that precise control of grid quality––grid spacing, stretching ratio, and grid orthogonality near configuration surfaces––can be achieved. This is an important requirement for accurate resolution of viscous flow regions for all existing Navier–Stokes solvers.

Two structured grid generation approaches are currently in use: the matched/patched multiblock grid approach and the overset or overlap grid approach. The former approach subdivides the flowfield into a number of topologically simple regions, such that a high-quality grid can be generated in each region. This is a rather time-consuming and tedious process for complex configuration analysis. However, once this ‘‘blocking’’ process is done for one configuration, a similar configuration can be handled easily through the use of script or command files. The TLNS3D/CFL3D based Zeus Navier–Stokes analysis system [51] developed and used at Boeing for Loads and Stability and Control applications belongs to this structured, multiblock grid approach. The Zeus analysis system inherited the process developed in the A488 system, which packaged many user-friendly preprocessing programs that handled geometry and flow condition input as well as postprocessing programs that printed and plotted wing sectional data and airplane force and moment data. This has allowed the design engineers to reduce their input to just geometry lofts and flight conditions and obtain the solution within a few hours or overnight, depending on the size of the problem and the availability of computing resources. The Zeus system is illustrated in Fig. 18.

Some recent applications of using the Zeus Navier–Stokes analysis system include the prediction of Reynolds number effects on tail effectiveness, shown in Fig. 19. CFD results captured the effect of Reynolds number on horizontal tail boundary layer health and on tail effectiveness quite well.

Another application is the simulation of vortex generators on a complete airplane configuration [52] as shown in Fig. 20. The effects of vortex generators on airplane pitch characteristics are shown. Again, the results compare reasonably well with flight data with respect to predicting airplane pitch characteristics, even at relatively high angles of attack where the flow is massively separated. The CFD solution also provides flowfield details that illustrate the flow physics behind how vortex generators work to improve high-speed handling characteristics, a very useful tool for design engineers in selecting and placing vortex generators on lifting surfaces.

The second structured grid Navier–Stokes method uses the overset grid approach, whereby a flowfield grid is generated for each component of the configuration independently. Each grid overlaps with one or more of the others, and communication between the various grids is achieved through numerical interpolation in the overlap regions. The advantage of this approach is that each component of the configuration is relatively simple, and a high-quality local grid can be easily generated. However, one pays the price of performing complex 3D interpolation, with some risk of degrading overall numerical accuracy. The OVERFLOW code [43] used at Boeing for high-speed and high-lift configuration analysis belongs to this overset/overlap structured grid approach. Fig. 21 shows the overset grids and OVERFLOW solution of a complex high-lift system, including all high-lift components of the airplane [53]. Results agree well with experimental data for low to moderate angles of attack. At high angles of attack, there are complex flow separations in the flap and slat gap regions, which could not be simulated adequately with the current one- or two-equation turbulence models. Improvements in turbulence models for separated flow simulation, as well as in Navier–Stokes solver accuracy and robustness, are essential for a reliable prediction of airplane high-lift performance, as well as airplane pitch characteristics.

Another important element for successful use of Navier–Stokes technology in airplane design and analysis is the availability of high-performance computing. All Navier–Stokes codes require large memory and many CPU hours to resolve viscous flows over an airplane configuration. The rapid development of parallel computing hardware and software, as well as of PC clusters with large numbers of CPUs, has made the use of Navier–Stokes technology in practical airplane design and analysis a reality. The analysis of an airplane configuration with 16 vortex generators on each side of the wing involves approximately 25 million grid points. Using 56 CPUs on an SGI Origin 2000 machine, the CFD solution for each flight condition can be obtained within 11 h of flow time.

4.4.2. Unstructured grid codes––Fluent, NSU2D/3D, CFD++

The structured grid Navier–Stokes codes make highly efficient use of computer memory and processing power due to the well-ordered data structure used in the solution algorithm. However, they suffer from two major drawbacks: a lack of flexibility in handling complex geometry and the difficulty of implementing solution-adaptive gridding. These capabilities––complex geometry handling and solution-adaptive gridding––are essential for accurate and reliable predictions of airplane design and off-design performance. Consequently, it is less common and often more difficult to use CFD to analyze geometrically complex parts of the airplane, such as high-lift systems (flaps and slats), engine compartments, auxiliary power units, and so on. Paradoxically, the success of CFD in designing major components has eliminated many of the experiments that previously provided a ‘‘piggyback’’ opportunity to test these complicated devices. Consequently, there is an increased need to compute airflows around and through systems that are distinguished by very complex geometry and flow patterns. In the last decade, there has been impressive progress in unstructured grid Navier–Stokes code development [54–57]. Boeing Commercial Airplanes has explored and used Fluent, the more recent unstructured grid Navier–Stokes codes NSU2D/NSU3D of Mavriplis [54], and CFD++ of Chakravarthy [57] for 2D and 3D high-lift analysis with success.

A recent application of unstructured grid technology involved the use of Fluent V5 [58] to investigate the behavior of the efflux from engine thrust reversers [59]. A typical commercial airplane deploys its thrust reversers briefly after touchdown. A piece of engine cowling translates aft and blocker doors drop down, directing the engine airflow into a honeycomb structure called a cascade. The cascade directs the flow forward, which acts to slow the aircraft and decrease lift for more effective braking. There are some critical design considerations in properly directing the reversed flow. The reverser is used precisely at the time when the high-lift devices, the wing leading and trailing edge flaps and slats, are fully deployed. Consequently, the plumes of hot exhaust must be directed so as not to impinge on these devices. In addition, the plumes should not hit the fuselage or other parts of the aircraft. Moreover, reingestion (in which the reversed plume reenters the engine inlet), engine ingestion of debris blown up from the runway, and plume envelopment of the vertical tail (which affects directional control) must be avoided. To eliminate these effects, it is important for designers to know exactly where the exhaust plumes go.

The Tetra module of grid generation software from ICEM CFD Engineering [60] has been used to obtain fully unstructured meshes. Starting from a new airplane geometry (with cleaned-up lofts), these meshes can be created in a day or two. The grid generation software contains a replay capability so that minor changes to the geometry can be remeshed quickly. Because the entire CFD analysis cycle can be completed in about three days, designers can use this tool repeatedly as a way to optimize the design. In this way, it is possible to map the performance of the reverser against the power setting of the reversed engine fan and the airplane forward speed. Tests that involve geometry changes, such as the repositioning of the cascades or the nacelle relative to the wing or variation of the cascade angles, can be accomplished with minimal remeshing and analysis. Wind tunnel testing and expense are reduced, but the key benefits are really time and risk mitigation. If a need to change the design should become apparent after the tooling was built and the aircraft was in test, the delay in entry into service and the expense of retooling would be unacceptable. The grid and engine reverser efflux particle traces from one of these cases are illustrated in Fig. 22. Fluent is in widespread use at Boeing for other geometrically complex problems, such as cooling flows in engine compartments and dispersion of fire suppression chemicals.

4.4.3. Other Navier–Stokes codes

The Propulsion Analysis group at Boeing Commercial Airplanes has long acquired, supported, and used a number of other Navier–Stokes codes. The present authors are not qualified to describe this activity; however, we do wish to mention some of the codes involved. These include the Boeing Mach3 code, based on the implicit predictor–corrector methodology of MacCormack [61], the PARC code [62] of NASA Lewis, the WIND code [63], and BCFD [64], which is scheduled to be the platform for an Enterprise common Navier–Stokes code. These codes have been used for nacelle inlet analysis and design and for nacelle fan and core cowl nozzle performance studies [64,65].

4.4.4. Next generation Navier–Stokes codes

The successful application of Navier–Stokes codes during the last 10 years has raised expectations among Boeing engineers that CFD can become a routine tool for the loads analysis, stability and control analysis, and high-lift design processes. In fact, there is considerable speculation that it may be possible to populate databases involving tens of thousands of cases with results from Navier–Stokes CFD codes, if dramatic improvements in computing affordability continue over the next five years. For the first time, the affordability per Navier–Stokes data point may rival that of a wind tunnel generated data point. Of course, project engineers use CFD and wind tunnel data in a complementary fashion, so cost is not a competitive issue here. Before Navier–Stokes codes can be routinely used to populate databases, however, accuracy, reliability, efficiency, and usability issues need to be addressed. Gaps in data, inconsistent data, and long acquisition times seriously degrade the utility of a database. Even with current user aids, the application of Navier–Stokes codes to new configurations generally requires the services of an expert user. The generation of a ‘‘good grid’’ is still somewhat of an art and often quite labor intensive. Although everyone realizes that a ‘‘good grid’’ is necessary for accuracy and even convergence, there is no precise definition of what constitutes a ‘‘good grid’’. In fact, the definition would probably vary from code to code and is certainly case dependent. Usability problems are reflected in the fact that although Navier–Stokes codes are now considered capable of generating more accurate results, they are used far less frequently than TRANAIR at Boeing Commercial Airplanes.

Much of the current effort to improve the usability of our Navier–Stokes codes would have to be termed evolutionary. As is always the case with evolutionary improvements, it is necessary to determine whether or not incremental improvements are approaching a horizontal asymptote, while implementation costs are mounting. Boeing is currently involved in an effort to reevaluate the current technology and explore alternatives, much the same as was done 20 years ago in the case of potential flow. The project is called General Geometry Navier–Stokes Solver (GGNS).

From our TRANAIR experience, it seems rather evident that solution adaptive grids must be an essential feature for reliability and usability. This is especially true when computing flows at off-design conditions where our understanding of the flow physics is limited, making it difficult to generate ‘‘good grids’’. However, these grids must now be anisotropic and, more than likely, quite irregular. This places a huge burden on improving discretization fidelity, as current discretization algorithms do not seem to do well with irregular spacings and cell shapes. Higher order elements are certainly desirable for efficiency's sake and for capturing latent features. However, stabilization and limiter technologies need to be advanced to handle such elements. Current solvers are relatively weak, and convergence is often incomplete, especially when turbulent transport equations are involved. Some of these issues are addressed in detail elsewhere [66]. It should be noted that our reevaluation and development work here is a joint effort between the CFD developers at Boeing and their colleagues at the Boeing Technical Research Center in Moscow. We also note there are related efforts going on elsewhere. We mention in particular the FAAST project at NASA Langley.

4.5. Design and optimization methods

4.5.1. A555, A619 inverse design codes

Most existing CFD codes are analysis tools (i.e., given a configuration, the codes predict aerodynamic characteristics of the configuration). In airplane design, one would like to have tools that can provide design capability (i.e., given airplane aerodynamic characteristics, the codes generate realistic geometry). The design method used by Henne [67], which prescribes wing surface pressures and employs an iterative method to find the corresponding geometry, was one of the very first inverse design methods used in the airplane industry. Boeing Commercial Airplanes developed a similar method for wing design using the A555 code [68], illustrated in Fig. 23. This code was used extensively on the 7J7, 777, and 737NG programs. The code borrowed heavily from the A488 system to ensure usability in the fast-paced airplane development environment. On the Boeing 777 program, CFD contributed to a high degree of confidence in performance with only a three-cycle wing development program. Significantly fewer wing designs were tested for the 777 than for the earlier 757 and 767 programs. The resulting final design would have been 21% thinner without the ‘‘inverse design’’ CFD capability of A555. Such a wing would not have been manufacturable due to skin gages being too thick for the automatic riveting machines in the factory, and it would have had less fuel volume. Conversely, if the wing could meet the skin gage and fuel volume requirements, the cruise Mach number would have had to be significantly slower. In either case, the airplane would not have achieved customer satisfaction. The effect of CFD wing design in this case was an airplane that has dominated sales in its class since being offered to the airlines.

More recently, Campbell [69] introduced a constrained, direct, iterative, surface curvature method (CDISC) for wing design. The method has been incorporated into both the structured grid single-block Navier–Stokes code A619 [70], and the overset grid code OVERFLOW/OVERDISC at Boeing. Both codes are in use for configuration design in the product development organization.

4.5.2. TRANAIR optimization

Because of boundary condition generality, and in particular the use of transpiration to simulate surface movement, the TRANAIR code could have easily been substituted into the existing Boeing standard inverse aerodynamic design process, A555. However, the process itself had a number of issues. First and foremost was the difficulty of finding ‘‘good’’ pressure distributions for highly 3D flows. Such pressure distributions needed to result in acceptable off-design performance as well as low cruise drag. Although many rules of thumb were developed through the years, only a few highly experienced aerodynamicists could create acceptable distributions on a routine basis. Second, it was never clear whether the resultant designs were in fact optimal, a question of some importance in a highly competitive environment. Third, multidisciplinary constraints often had to be imposed after the fact, leading to a highly iterative and time-consuming process as well as potentially suboptimal designs.

A serendipitous result of the decision to use a powerful sparse solver to converge the TRANAIR analysis cases was the ability to rapidly generate solution sensitivities. In a sense, each sensitivity represented just another right-hand side for the already decomposed analysis Jacobian matrix to solve. In addition, the adaptive grid capability allowed accurate tracking of changes in critical flow features predicted by these sensitivities. Formally, it was an easy matter to feed the sensitivities into an optimization driver such as NPSOL [71] and systematize the design process as illustrated in Fig. 24. However, optimization codes have been notorious for promising spectacular results and then falling flat because of overly simplistic mathematical realizations of the problems. Aerodynamic design requires understanding of very complicated geometric, flow, and interdisciplinary constraints. These constraints are rather nebulous and often exist only in the minds of the designers. An initial optimization capability using TRANAIR was available in 1992 [72], but it took several more years before project users were willing to trust their design processes to optimization [73]. A wide variety of payoff functions and constraints were built into TRANAIR, but the one component of a payoff function that users were really interested in was, of course, drag. Consequently, a great deal of effort was invested in numerical work to improve TRANAIR's drag calculations. Careful studies in the mid-1990s [74] then validated the ability of TRANAIR to compute accurate drag increments for subsonic transports.
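
The ‘‘sensitivities are just extra right-hand sides’’ observation is easy to mimic with a toy linear stand-in (author's illustration; the matrices A and B, the payoff weights c, and the problem sizes are invented, not TRANAIR quantities): factor the analysis Jacobian once, back-substitute one right-hand side per design variable, and chain-rule the result into a payoff gradient that a nonlinear programming driver such as NPSOL could consume.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy model: "flow" unknowns u satisfy A u = b(s) for design variables s,
# and the payoff is D = c . u.
n, m = 200, 5                                   # flow unknowns, design variables
A = sp.diags([1.0, -4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csc")
B = np.random.default_rng(0).normal(size=(n, m))            # db/ds (assumed data)
c = np.ones(n) / n

lu = spla.splu(A)                               # factor the "Jacobian" once
dudS = np.column_stack([lu.solve(B[:, k]) for k in range(m)])  # one extra RHS per variable
grad_D = dudS.T @ c                             # payoff gradient by the chain rule
print(grad_D)                                   # ready to hand to an optimizer
```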

At the same time, a multipoint optimization capability was introduced, since it was well understood that drag minimization at a single flight condition was somewhat ill-posed and often led to unacceptable off-design characteristics. Moreover, users desired the capability to simultaneously optimize slightly different configurations having major portions of their geometries in common. By 1997, TRANAIR optimization had replaced inverse design as the preferred aerodynamic design process for flight conditions where full potential/boundary layer modeling is applicable. At the current time, the code can handle as many as 600 geometry degrees of freedom and 45,000 nonlinear inequalities. These inequalities represent the pointwise application of roughly 25 different types of flow and geometry constraints. The code has seen extensive use in the design of a large variety of configurations covering the Mach range from transonic to Mach 2.4. This has contributed (in several cases critically) to detailed development studies for a number of vehicles, some of which are illustrated in Fig. 25.

TRANAIR design/optimization applications that have affected a product include the payload fairing on the Sea Launch rocket, nacelle fan cowl for the GE90-115B engine, and the process used to determine ‘‘Reduced Vertical Separation Minimums’’ compliance for new and in-service aircraft.

Conclusions

During the last 30 years at Boeing Commercial Airplanes, Seattle, CFD has evolved into a highly valued tool for the design, analysis, and support of cost-effective and high-performing commercial transports. The application of CFD today has revolutionized the process of aerodynamic design, and CFD has joined the wind tunnel and flight test as a critical tool of the trade. This did not have to be the case; CFD could have easily remained a somewhat interesting tool with modest value in the hands of an expert as a means to assess problems arising from time to time. As the reader can gather from the previous sections, there are many reasons that this did not happen. The one we would like to emphasize in this Conclusion section is the fact that Boeing recognized the leverage in getting CFD into the hands of the project engineers and was willing to do all the things necessary to make it happen.




Early Investigation, Formulation and Use of NURBS at Boeing

Robert M. Blomgren Solid Modeling Solutions

David J. Kasik Boeing Commercial Airplanes

Geometry that defines the shape of physical products has challenged mathematicians and computer scientists since the dawn of the digital age. Such geometry has strict requirements for accuracy and must be able to be understood as documentation for products that have a multi-year life expectancy. In the commercial airplane business, product life expectancy is measured in decades.

Geometry data represents points and curves in two dimensions and points, curves, surfaces and solids in three dimensions. A large number of descriptive forms are now used that range from precise canonical definitions (e.g., circle, sphere, cone) to general parametric forms (e.g., Bézier, non-uniform rational B-spline (NURBS), multi-resolution).

Solids add a level of complexity when bounded with general surfaces because of the need for reliable and efficient surface/surface intersection algorithms.

Core geometry algorithms are compute intensive and rely on floating point arithmetic. The mathematical theory of computational geometry is well documented and relies on infinity and absolute zero, a continuing problem for digital computers.

Some of the computational problems can be avoided when a closed form solution is available that does not require convergence to compute a result. As the shapes people modeled expanded beyond canonical forms, more general representations (with the associated computational problems) became necessary.

This article describes how Boeing initiated and supported a concerted effort to formulate a more computationally useful geometry representation.

Boeing Motivation and Experience

Engineering drawings were the dominant output of computer-aided design (CAD) systems in the 1970s and 1980s. The primary examples were a set of turnkey systems built from minicomputers and Tektronix direct view storage tube displays. The standalone systems produced large amounts of paper engineering drawings and gave users the famous 'green flash effect' from the Tektronix terminals.

The majority of the systems used two-dimensional geometry entities (mostly canonical forms) and integer arithmetic to provide acceptable performance. Work done at General Motors was turned into a software-only product called AD-2000 and into a number of turnkey (computer hardware and CAD software) offerings from Computervision, Autotrol, Gerber, Intergraph and McDonnell-Douglas Automation. Applicon developed its own system but the result was architecturally and computationally similar.

The other major player was Lockheed. Lockheed developed CADAM, which ran on an IBM mainframe and IBM refresh graphics terminals, to create the computer analog of a drafting table. Performance was key. The team built a program that regularly achieved sub-quarter-second response for a large number of users. Dassault noted the success of CADAM and built CATIA not only to generate engineering drawings but also to improve computer-aided manufacturing functions. CATIA also started on IBM mainframes and refresh graphics terminals like the IBM 2250 and 3250.

Other production tools were built inside large aerospace and automotive companies. These systems were based on three-dimensional entities and addressed early stages of design. In aerospace, the batch TX-90 and TX-95 programs at Boeing and the interactive, IBM-based CADD system at McDonnell-Douglas generated complex aerodynamically friendly lofted surfaces. The automotive industry followed a different path because they most often worked with grids of points obtained from digitizing full-scale clay models of new designs. Surface fitting was essential. Gordon surfaces were the primary form used in General Motors, Overhauser surfaces and Coons patches at Ford, and Bézier surfaces at Renault.

The third rail of geometry, solid modeling, started to receive a significant amount of research attention in the late 1970s. Larry Roberts started using solids as a basis in his Ph.D. thesis, Machine Perception of Three-Dimensional Solids, at MIT in the early 1960s. The Mathematical Applications Group (MAGI) developed Synthavision, a constructive solid geometry (CSG) approach to modeling scenes for nuclear penetration analysis in the early 1970s. The University of Rochester PADL system, the General Motors GMSOLID program and others started making more significant impact in the later 1970s.

Boeing and CAD

With all this variation in approach, how did Boeing get involved in the NURBS business? There were three distinct drivers:

  1. Airplane surface design and definition

  2. Experience with turnkey drafting systems

  3. Convergence of the right people

Of Ships and Airplanes

Early commercial airplane design was derived from ship design. Both the terminology (airplanes have waterlines; they roll, pitch, and yaw; directions include fore, aft, port and starboard) and the fundamental design of surfaces (lofting to make airplanes fly smoothly in the fluid material called air) are still in use today.

The lofting process is based on sets of cross sections that are skinned to form an aerodynamic surface. The fundamental way surface lofters used to derive the curves was via conic sections. Most of the early techniques for generating families of sections specified a circular or elliptic nose followed by some sort of elaborate spline representation. The splines could then be scaled to give the desired thickness within a specified shape. Once generated, a conformal mapping routine produced the classic Joukowski family of sections.
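
The Joukowski construction mentioned above is compact enough to sketch directly (a minimal illustration; the circle offset and radius are arbitrary choices, not Boeing loft parameters): a circle passing through z = 1 is pushed through the conformal map ζ = z + 1/z to produce an airfoil-like section with a sharp trailing edge.

```python
import numpy as np

def joukowski_section(dx=-0.1, dy=0.1, n=200):
    """Map a circle passing through z = 1 through zeta = z + 1/z to obtain
    an airfoil-like section with a sharp trailing edge at zeta = 2."""
    center = dx + 1j * dy
    radius = np.hypot(1.0 - dx, dy)             # circle passes through z = 1
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    z = center + radius * np.exp(1j * theta)
    zeta = z + 1.0 / z                          # the Joukowski conformal map
    return zeta.real, zeta.imag

x, y = joukowski_section()                      # chord runs roughly from -2 to 2
```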

As Boeing computerized manual processes in the 1950s and 1960s, one of the earliest was lofting. Programs like TX-90, which evolved into TX-95, implemented the math equivalent of conic-based cross sections. These programs accepted batch card input and used listings, plots or a simple viewing program using a graphics terminal to review results. The lofts became the master dimensions and defined the exterior shape of an airplane.

All Boeing evaluations of geometry representation techniques placed a high premium on the ability of the form to represent conic sections exactly because of their importance in lofting.

The Drafting World

The advent of two new airplane programs (757 and 767) in the late 1970s placed a significant burden on the company. Because drawings were the lingua franca of both engineering and manufacturing, a concerted effort was started to improve the drafting process, and an explicit decision was made to buy commercial CAD products.

The two programs, located on different campuses in the Puget Sound region, chose different vendors to act as their primary CAD supplier. The 757 program chose Computervision (CV) CADDS, and the 767 chose Gerber IDS.

As the design and engineering job moved forward, staff members wanted to exchange information for parts that were similar between the two designs. The central CAD support staff, responsible for both systems, quickly discovered that translating geometry between CV and Gerber caused subtle problems. As a result, a design was put in place to translate all CV and Gerber entities into a neutral format. Translators were then built for CV to Neutral and Gerber to Neutral.

The design for the Geometry Data Base System (GDBMS) was therefore quite extensible, and translators were built to other turnkey systems. The GDBMS concept was ultimately adopted as the model for the Initial Graphics Exchange Specification (IGES), a format still in use.

The implementation of CAD caused two significant geometry problems. First, translation between systems, even with the neutral format, was fraught with problems. Geometry that was fine in CV could not be understood by Gerber and vice versa. The workaround to the problem required that users incorporate only 'translatable' entities on their drawings, which reduced the use of sophisticated, time saving features. Second, the overall set of entities was essentially two-dimensional. Boeing realized early on that the airplane design, engineering and manufacturing business required three-dimensional surfaces and solids.

Key People

Perhaps the biggest contributor to the Boeing geometry work was the willingness of managers, mathematicians and computer scientists to push the envelope. The team put together an environment that enabled the geometry development group to discover and refine the NURBS form. After validation of the computational stability and usefulness of the NURBS form, this new representation was accepted as the mathematical foundation of a next-generation CAD/CAM/CAE system better suited to Boeing.

The head of the central CAD organization, William Beeby, became convinced early on of the shortcomings of turnkey systems. He put a team of Robert Barnes, a McDonnell-Douglas veteran who understood the importance of CADD, and Ed Edwards, who brought experience of complex systems development from Ford and Battelle, in charge of the project.

Early experimentation focused on the display of complex surfaces. Jeff Lane and Loren Carpenter investigated the problem of directly rendering B-spline surfaces. Their work was important both academically through published papers [3] and from a sales perspective because potential users could see the results clearly.

Other early work focused on evaluation of then state-of-the-art solid modeling tools like Synthavision and PADL. Kalman Brauner, Lori Kelso, Bob Magedson and Henry Ramsey were involved in this work.

As the evaluations changed into a formal project, computing experts were added to the team in:

  • System architecture/user interface software (Dave Kasik)

  • 3D graphics (Loren Carpenter, Curt Geertgens, Jeremy Jaech)

  • Database management systems (Steve Mershon, Darryl Olson)

  • Software development for portability (Fred Diggs, Michelle Haffner, William Hubbard, Randy Houser, Robin Lindner)

The team pushed the envelope in all of these areas to improve Boeing's ability to develop CAD/CAM applications that were as machine, operating system, graphics and database independent as possible. New work resulted in Vol Libre, the first fractal animated film [1], user interface management systems [2] and software that ran on mainframes, minicomputers, workstations and PCs.

The rest of this article focuses on the key NURBS geometry algorithms and concepts that comprised the geometry portion of the overall framework (called TIGER - The Integrated Graphics Engineering Resource). Robert Blomgren led the efforts of Richard Fuhr, Peter Kochevar, Eugene Lee, Miriam Lucian, Richard Rice and William Shannon. The team investigated this new form and developed the robust algorithms that would move NURBS geometry from theory to a production implementation to support CAD/CAM applications. Other Boeing mathematicians (David Ferguson, Alan Jones, Tom Grandine) worked in different organizations and introduced NURBS in other areas.

Investigation into NURBS

As its name implies, the Boeing geometry development group was the portion of the TIGER project responsible for defining, developing and testing the geometric forms and algorithms. The functional specifications listed some 10 different types of curves (from lines to splines) and an extensive list of surfaces.

There was no requirement for upward compatibility with older design systems. The most difficult requirement was that no approximation could be used for a circle.

Early research pointed to the importance and difficulty of the curve/curve, curve/surface and surface/surface intersection algorithms. As it turned out, the development of robust intersections was critical to the Boeing formulation of the NURBS form.

Curve/Curve Intersection

An excellent and meticulous mathematician, Eugene Lee, was assigned the task of developing the curve/curve intersection algorithm. The representations needed for each of the required curve forms had already been defined. The initial approach––special-purpose intersection routines between each pair of forms––would have resulted in far too many special cases to maintain easily.

Lee realized that each segment could be represented as a rational Bézier segment at the lowest segment level. In short, doing one intersection would solve them all. It was a great step forward, but few people knew anything about rational Bézier segments. The primary references consisted of Faux and Pratt's geometry book, de Boor's A Practical Guide to Splines, and Lane and Riesenfeld's Bézier subdivision paper. No reference contained significant discussion of rational splines.

The Lane and Riesenfeld subdivision paper was used as the basis for Lee's first curve/curve intersection algorithm. The process relied on the fact that a Bézier curve could be very easily and quickly split into two Bézier curves. Since the min-max box is also split in two, box/box overlap was used to isolate the points of intersection between two curves. This algorithm gave reasonably good results.
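
The subdivision idea is easy to reproduce (a minimal sketch, not Lee's production code; the function names, tolerance, and demo curves are the author's): split each Bézier curve at its midpoint with de Casteljau's algorithm, reject pairs whose min-max boxes do not overlap, and recurse until the surviving pieces are small enough to treat as points. In practice the near-duplicate hits that cluster around each crossing would be merged.

```python
import numpy as np

def split_bezier(P, t=0.5):
    """de Casteljau subdivision: split a Bezier curve with control points P
    (k x 2 array) into two Bezier curves at parameter t."""
    left, right, Q = [P[0]], [P[-1]], np.asarray(P, float)
    while len(Q) > 1:
        Q = (1.0 - t) * Q[:-1] + t * Q[1:]
        left.append(Q[0])
        right.append(Q[-1])
    return np.array(left), np.array(right)[::-1]

def boxes_overlap(P, Q):
    """Min-max (bounding) box overlap test on the control polygons."""
    return bool(np.all(P.min(0) <= Q.max(0)) and np.all(Q.min(0) <= P.max(0)))

def intersect(P, Q, tol=1e-4, out=None):
    """Recursive curve/curve intersection by subdivision and box rejection."""
    out = [] if out is None else out
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    if not boxes_overlap(P, Q):
        return out
    if max(np.ptp(P, axis=0).max(), np.ptp(Q, axis=0).max()) < tol:
        out.append(0.5 * (P.mean(0) + Q.mean(0)))   # pieces are point-like
        return out
    for a in split_bezier(P):
        for b in split_bezier(Q):
            intersect(a, b, tol, out)
    return out

# demo: a cubic and a quadratic that cross twice; the hits cluster around
# each crossing and would be merged in practice
P = np.array([[0.0, 0.0], [0.3, 1.2], [0.7, -1.2], [1.0, 0.0]])
Q = np.array([[0.0, -0.5], [0.5, 1.0], [1.0, -0.5]])
print(len(intersect(P, Q)))
```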

Rational Bézier

Since Lee needed to convert circles and other conics to rational Bézier curves to allow use of the general curve/curve intersector, he became the Bézier conics expert. His work eventually led to an internal memo of February '81, A Treatment of Conics in Parametric Rational Bézier Form. At that time, Lee felt this memo was "too trivial" and "nothing new," and it was several years before he incorporated it into later publications. The content of the memo was foundational because it contained all the formulas for converting back and forth between conics defined by the classical methods in the plane and the new 3D definition of the rational Bézier conic containing three weighted points.
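
The three-weighted-point form is easy to check numerically (a minimal sketch using the standard construction, not the contents of Lee's memo): a 90° arc of the unit circle is an exact rational quadratic Bézier segment with end weights 1 and a middle weight of cos 45° = √2/2.

```python
import numpy as np

# A 90-degree arc of the unit circle as an exact rational quadratic Bezier:
# control points (1,0), (1,1), (0,1), weights 1, sqrt(2)/2, 1.
P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])

def rational_bezier(t):
    b = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])   # Bernstein basis
    return (b * w) @ P / (b * w).sum()

radii = [np.hypot(*rational_bezier(t)) for t in np.linspace(0.0, 1.0, 7)]
print(radii)                                                # all 1.0 to round-off
```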

Bézier to NURBS

The transition from uniform to non-uniform B-splines was rather straightforward, since the mathematical foundation had been available in the literature for many years. It just had not yet become a part of standard CAD/CAM applied mathematics.

The next step was to combine rational Bézier and non-uniform splines. Up to this point, the form

P(t) = Σᵢ wᵢ Pᵢ bᵢ(t) / Σᵢ wᵢ bᵢ(t)    (1)

was used for nothing more complex than a conic Bézier segment.

As the search for a single form continued, knowledge about knots and multiple knots led to the observation that Bézier segments, especially for conics, could be nicely embedded into a B-spline curve with multiple knots. This now seems simple because it is easy to verify that equation (1) for P(t) is valid for B-spline basis functions as well as Bernstein basis functions. By the end of 1980, the representation of all required curves by a single form was complete, and the form is now known as the NURBS.
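
A small check of equation (1) with B-spline basis functions (a sketch assuming SciPy's `BSpline` for the basis evaluation; the nine-control-point construction is the standard NURBS full circle, not necessarily the example the TIGER group used): double interior knots carry the conic Bézier segments, and every evaluated point lies on the unit circle to round-off.

```python
import numpy as np
from scipy.interpolate import BSpline

# The full unit circle as one NURBS curve: degree 2, double interior knots,
# nine control points, alternating weights 1 and sqrt(2)/2. Equation (1) is
# evaluated through homogeneous coordinates (w*x, w*y, w).
s = np.sqrt(2.0) / 2.0
P = np.array([[1, 0], [1, 1], [0, 1], [-1, 1], [-1, 0],
              [-1, -1], [0, -1], [1, -1], [1, 0]], dtype=float)
w = np.array([1, s, 1, s, 1, s, 1, s, 1])
knots = np.array([0, 0, 0, .25, .25, .5, .5, .75, .75, 1, 1, 1])

curve = BSpline(knots, np.column_stack([P * w[:, None], w]), k=2)
u = np.linspace(0.0, 1.0, 16, endpoint=False)
xyw = curve(u)
xy = xyw[:, :2] / xyw[:, 2:3]                    # divide through by the weight
print(np.hypot(xy[:, 0], xy[:, 1]))              # all 1.0 to round-off
```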

We quickly realized the importance of this new geometry form. The form could provide a concise and stable geometric definition to accurately communicate design data to and from subcontractors. It would no longer be necessary to send a subcontractor 5,000 points to define a curve segment well; a few NURBS coefficients could be used instead. Therefore, Boeing proposed NURBS as an addition to IGES in 1981.

Properties

This brief overview lists many properties of the NURBS form. These properties were observed early in the Boeing work on the form and drove NURBS to become the de facto standard representation for CAD/CAM geometry.

The NURBS form is extremely well suited for use on a computer since the coefficients, the Pᵢ given in equation (1) above, are actually points in three dimensions. Connecting the coefficients together to form a simple polygon yields a first approximation to the curve. The first and last points of the polygon are usually the actual start and end points of the curve.

Mathematically, a NURBS curve is guaranteed to be inside the convex hull of the polygon. Therefore, knowing where the polygon is means that the location of the curve is also known, and useful decisions can be made quickly. For example, the polygon may be used to determine a min-max box for each curve. The check for box/box overlap is very fast. So the curve/curve intersection process can trivially reject many cases because the bounding boxes do not overlap.

Another property of the polygon is that the curve cannot have more "wiggles" than the polygon does. Hence, if the polygon does not have an inflection, neither does the curve. When the curve is planar, this means that any line cannot intersect the curve more times than it intersects the polygon.

Simple linear transformations (translation and rotation) need only be applied to the polygon, a simple operation that carries the curve along with it.

As splines, a NURBS curve may consist of many segments (spans) that are connected together with full continuity. For example, a cubic curve may have C2 continuity between each span. Such curvature continuity is important in aerodynamic and automotive design.

Another NURBS advantage is that the continuity of each span is also under local control. In other words, each span of a cubic is defined by only the four neighboring coefficients of the polygon. Local control is guaranteed because only the four spans are modified if one Pᵢ is moved.

As a continuous set of spans, a NURBS curve is defined on a set of parameter values called knots. Each knot is the parameter value at the boundary between the spans. It is often desirable to increase the number of spans. For example, this occurs when more detail is needed for a curve in one region. Adding a new knot into the existing knots is a powerful feature that increases the number of spans by one without changing the curve.
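
Knot insertion is compact enough to sketch (a minimal illustration of Boehm's algorithm for the non-rational case; `insert_knot` and the random cubic example are the author's, and u is assumed to lie strictly inside an interior span): the curve is re-expressed with one more control point and one more knot, and evaluating before and after shows the geometry is unchanged.

```python
import numpy as np
from scipy.interpolate import BSpline

def insert_knot(knots, ctrl, p, u):
    """Boehm knot insertion for a degree-p B-spline (non-rational case):
    returns a new knot vector and control points describing the same curve.
    Assumes u lies strictly inside an interior span."""
    knots, P = np.asarray(knots, float), np.asarray(ctrl, float)
    k = np.searchsorted(knots, u, side="right") - 1        # knots[k] <= u < knots[k+1]
    Q = np.empty((len(P) + 1, P.shape[1]))
    Q[: k - p + 1] = P[: k - p + 1]
    for i in range(k - p + 1, k + 1):
        a = (u - knots[i]) / (knots[i + p] - knots[i])
        Q[i] = a * P[i] + (1.0 - a) * P[i - 1]
    Q[k + 1:] = P[k:]
    return np.insert(knots, k + 1, u), Q

# check on a clamped cubic: the curve is unchanged after insertion
p = 3
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
ctrl = np.random.default_rng(1).normal(size=(6, 2))
new_knots, new_ctrl = insert_knot(knots, ctrl, p, 1.5)
u = np.linspace(0.0, 3.0, 50)
print(np.abs(BSpline(knots, ctrl, p)(u) - BSpline(new_knots, new_ctrl, p)(u)).max())  # ~1e-16
```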

Surfaces

Given a set of basis functions like those for the NURBS form, it is a straightforward mathematical exercise to extend the curve definition to the corresponding surface definition. Such a representation is referred to as a tensor product surface, and the NURBS surface is such a surface defined over a square domain of (u,v) values. Holding one parameter fixed yields a NURBS curve in the other parameter. Extracting and drawing the NURBS curves of the surface at each of the u and v knot values results in a satisfactory display of the surface.
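
A minimal sketch of tensor-product evaluation (assuming SciPy's `BSpline`; the bicubic patch data is random and purely illustrative): collapsing the control net in v first produces the control points of the iso-parameter curve in u, which is exactly the curve-extraction idea used for display.

```python
import numpy as np
from scipy.interpolate import BSpline

def surface_point(P, ku, kv, tu, tv, u, v):
    """Tensor-product B-spline surface evaluation: collapse the control net
    in v first (one curve per row of the net), then evaluate the resulting
    u-curve. Holding v fixed this way yields the iso-parameter curve in u."""
    Q = np.array([BSpline(tv, P[i], kv)(v) for i in range(P.shape[0])])
    return BSpline(tu, Q, ku)(u)

# a random bicubic patch with clamped knots (illustrative data only)
P = np.random.default_rng(0).normal(size=(4, 4, 3))     # 4 x 4 control net in 3D
t = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)     # clamped cubic knots
print(surface_point(P, 3, 3, t, t, 0.3, 0.7))
```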

The one non-trivial drawback to tensor product surfaces is that all surfaces must have four sides. One side must collapse to a point to obtain a three-sided surface in 3D. This point is referred to as a pole or singularity. The partial derivatives of the surface must be calculated carefully at a pole. This is particularly important when surfaces are to be trimmed since the path of trimming in the (u,v) space must be determined correctly as the path approaches the pole.

Surface/Surface Intersection

Not all early ideas and experiments gave desirable results. Curve/curve subdivision worked well as the basis for the curve/curve intersection algorithm. However, using surface/surface subdivision as the basis for surface/surface intersection proved problematic.

Peter Kochevar developed and implemented the first Boeing NURBS surface/surface intersection algorithm using subdivision. The results quickly brought available computing resources to a halt because of computational complexity and data explosion.

Upon further analysis, it was observed that if the end result is a point, such as in curve/curve intersection, subdivision gives a good result since the segments become so small at the lowest level that line/line intersection can be used. But if the end result is a curve, which is the normal result of surface/surface intersection, the small surface segments become planes at the lowest level and the result depends on plane/plane intersection. This yields thousands of very short line segments that don't even join up. No satisfactory approach was discovered for surface/surface intersection until later.

Solids

Even in 1980, Boeing realized the importance of solid modeling. Kalman Brauner led a solids group that worked alongside the geometry development group. Their task was to design a state-of-the-art solid modeler and develop the requirements for Boeing's aerodynamic and mechanical design processes.

The requirements were given to the geometry development group to develop and test the appropriate algorithms. This was a useful cooperative effort between groups since the requirements for doing Boolean operations on solids are very stringent. Not only do the various intersections have to give accurate results, but they also have to be extremely reliable. Everything in a Boolean fails if any one of the many intersections fails.

This work on solids was later incorporated into the Axxyz NURBS based solid modeler.

Migration from Boeing Outward

Boeing was able to demonstrate the value of NURBS to the internal design and engineering community as well as a number of CAD vendors through TIGER. A decision to launch a new airplane program (the 777) resulted in a decision to purchase a next generation CAD system from a commercial vendor rather than build one internally. Ultimately, Dassault's CATIA running on IBM mainframes was chosen as the CAD system used to design and build the 777.

To IGES

One of the first adopters of NURBS was the IGES community. Dick Fuhr, of the TIGER geometry development group, was sent to the August 1981 IGES meeting where he presented the NURBS curve and surface form to the IGES community. At this meeting Boeing discovered that SDRC was also working with an equivalent spline form. The members of the IGES community immediately recognized the usefulness of the new NURBS form. In the years that have followed, NURBS has become the standard representation not only for CAD/CAM but also for other uses (e.g., animation, architecture) of computational geometric modeling.

To Axxyz

Even though Boeing chose to design commercial airplanes with CATIA, other groups expressed interest in the TIGER NURBS work for government and commercial purposes. The core NURBS implementations were given to Boeing Computer Services and a number of the technical staff built a computer independent CAD/CAM/CAE software system marketed under the name Axxyz. Axxyz debuted formally at the 1985 Autofact conference and was eventually sold to General Motors and Electronic Data Systems.

The Axxyz group did early implementations of NURBS surface bounded (B-Rep) solids as part of the first commercial product release. Topology information, based on the twin edge boundary representation, was added to enable trimmed NURBS surfaces to be used as faces that were combined into a shell structure defining a solid.

Other applications were added that associated engineering, manufacturing, and drafting data directly with the NURBS geometry. This approach added a wealth of tessellation and inquiry capabilities to the basic geometry algorithm library.

To Intergraph and Dassault

One of Boeing's goals was to improve the use of NURBS in commercial CAD packages. The algorithms that led to the software implemented in TIGER had all been documented. Both Dassault and Intergraph received copies of the algorithm books for ultimate implementation in their products.

Hard Problems

The NURBS implementation pushed the computing power of the late 1970s and 1980s to its limits. Performance tuning was always an adventure and permeated the various algorithm implementations.

The most critical problem, intersection, has already been discussed for both the curve/curve and surface/surface cases. Other issues, such as display and interaction, tolerance management, and translation, also arose.

Display

The first interactive NURBS implementations were delivered on calligraphic, vector-only graphics devices (the Evans and Sutherland Multi-Picture System). As the technology progressed into raster graphics, other work was done to generate solid, shaded images. From an architectural perspective, Boeing treated NURBS curves, surfaces and solids as standard objects that the graphics subsystem (not the application) could draw directly. In this way, changes could be made in display resolution as zooms occurred without involving the application.

Vector rendering of NURBS curves and surfaces relied on a chord-height tolerance technique. The technique, while slower than equal subdivision, was more aesthetically pleasing because areas of high curvature were drawn with more line segments. Shading used a similar technique to tessellate surfaces into polygons that were then used as input to a standard polygon shader.
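A hedged sketch of the chord-height idea for a single Bezier-style span: keep splitting until every control point lies within tolerance of the chord, so regions of high curvature automatically receive more line segments (the routine names and the planar 2D assumption are mine):

```python
import numpy as np

def chord_height(P):
    """Max distance from the control points to the chord P[0]-P[-1] (2D points)."""
    chord = P[-1] - P[0]
    L = np.linalg.norm(chord)
    if L == 0.0:
        return np.linalg.norm(P - P[0], axis=1).max()
    return np.abs(np.cross(chord, P - P[0])).max() / L

def split(P, t=0.5):
    """de Casteljau split of a Bezier control polygon into two halves."""
    left, right, Q = [P[0]], [P[-1]], P.astype(float)
    while len(Q) > 1:
        Q = (1.0 - t) * Q[:-1] + t * Q[1:]
        left.append(Q[0]); right.append(Q[-1])
    return np.array(left), np.array(right[::-1])

def tessellate(P, tol=1e-3, out=None):
    """Adaptive polyline: flat pieces emit one segment, curved pieces keep splitting."""
    if out is None:
        out = [P[0]]
    if chord_height(P) <= tol:
        out.append(P[-1])
    else:
        L, R = split(P)
        tessellate(L, tol, out)
        tessellate(R, tol, out)
    return out
```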

Tolerances

As with any digital representation, computation stops before reaching an exact zero. Perhaps the most difficult values to manage were the tolerance epsilons that indicated that an answer had been found. Experimentation to find the best set of tolerances was continual, balancing performance against accuracy. The tolerance values were changed frequently, and no one truly understood their interrelationships.

Translation

The Boeing NURBS implementations stored all entities in that form, even if the form that the user input was simpler or used less storage. In this way, a single intersection routine would be used for curves and a second routine for surfaces. Conceptually, the design was quite clean but numerous attempts to improve performance resulted in some special cases. Even so, the special cases were embedded in the intersection routines and not in the database form of the NURBS entities.

This approach caused some interesting problems with translation to terminology the user understood and to other CAD systems that understood more primitive entities and may not have accepted rich NURBS forms. The solution to the problem was to develop a set of routines that would examine the NURBS definition to see if it was close enough to being a line or a circle to call the entity a line or a circle. This information was used dynamically to report the simplest math formulation to the user. In addition, the same technique was useful when data was being translated into forms for other systems and applications. When a NURBS curve could be identified as an arc, an arc entity was generated for IGES and a radial dimension used in the drafting package.
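A simplified sketch of the "is this really a line or a circle?" test used during translation, here working from densely sampled planar points rather than from the NURBS coefficients themselves (the sampling shortcut and all names are assumptions, not the production routines):

```python
import numpy as np

def classify_curve(samples, tol=1e-6):
    """Classify densely sampled planar curve points as 'line', 'circle', or 'nurbs'."""
    P = np.asarray(samples, dtype=float)

    # Line test: every sample lies within tol of the chord through the end points.
    chord = P[-1] - P[0]
    L = np.linalg.norm(chord)
    if L > 0 and np.abs(np.cross(chord, P - P[0])).max() / L < tol:
        return "line"

    # Circle test: least-squares fit of x^2 + y^2 = 2*cx*x + 2*cy*y + c,
    # then check that every sample sits at the fitted radius.
    A = np.column_stack([2.0 * P[:, 0], 2.0 * P[:, 1], np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    if np.abs(np.linalg.norm(P - [cx, cy], axis=1) - r).max() < tol:
        return "circle"

    return "nurbs"
```

When the test reports a line or a circle, the translator can emit the simpler IGES entity or drive a radial dimension, as described above.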

Conclusion

In spite of our best efforts, 3D design applications require users to become geometry experts. NURBS is no panacea. But the foundational NURBS work done at Boeing did demonstrate the utility of the approach. As a result of this pioneering work, NURBS is still the preferred form to precisely represent both complex curves and surfaces and a large number of simple curves, surfaces and solids.

Acknowledgments

Boeing's John McMasters contributed to the discussion of the reality of lofting. Rich Riesenfeld and Elaine Cohen of the University of Utah acted as early consultants who introduced NURBS basics to Boeing. There was a huge number of contributors to the proliferation of NURBS through the industry that started in the mid-1980s. Tracing a complete genealogy of their continued work is well beyond the scope of this article. Our thanks go to all who have helped solidify the approach.



An Historical Perspective on Boeing's Influence on Dynamic Structural Analysis

The Finite Element Method, Craig-Bampton Reduction, and the Lanczos eigenvalue extraction method formed a foundation.

Of the three methods, Craig-Bampton Reduction and the Lanczos eigenvalue extraction method are far less widely known.

Craig-Bampton Reduction is used to reduce the number of degrees of freedom (DOF) in a finite element model while preserving the essential features of the structure's dynamic behavior. It was first systematically presented by Roy R. Craig Jr. and M. C. C. Bampton in their classic 1968 paper "Coupling of Substructures for Dynamic Analyses". At the time, modal analysis of large structures was computationally very expensive, so complex structures had to be broken into smaller substructures that were analyzed separately and then reassembled.

The Lanczos Eigenvalue Extraction Method is an iterative algorithm for extracting the lowest eigenvalues and their eigenvectors from large sparse matrices; it is particularly well suited to modal extraction in structural vibration analysis. It was introduced in 1950 by Cornelius Lanczos, a Hungarian-American mathematician. The method projects the original sparse matrix onto a low-dimensional Krylov subspace and solves the eigenvalue problem in that subspace, which is far more efficient than solving the original large matrix directly when only the low-frequency modes (e.g., the first few) are of interest.
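For reference, the textbook statement of the Lanczos projection for a symmetric matrix A (the plain single-vector form, not the block/shifted variant discussed later in this article):

```latex
\beta_{j+1}\, q_{j+1} = A q_j - \alpha_j q_j - \beta_j q_{j-1},
\qquad \alpha_j = q_j^{\mathsf T} A q_j,
\qquad
Q_m^{\mathsf T} A Q_m = T_m =
\begin{bmatrix}
\alpha_1 & \beta_2  &          &          \\
\beta_2  & \alpha_2 & \ddots   &          \\
         & \ddots   & \ddots   & \beta_m  \\
         &          & \beta_m  & \alpha_m
\end{bmatrix}
```

The eigenvalues of the small tridiagonal matrix T_m (the Ritz values) approximate the extreme eigenvalues of A after only m << n steps; in floating point the vectors q_j lose orthogonality, which is why the reorthogonalization issues mentioned later in this article matter so much.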

Today’s automobiles have superb handling and are extremely quiet compared to vehicles 30 years ago. The comfortable environment of a modern automobile is largely due to the industry’s focused efforts on Noise, Vibration, and Harshness (NVH) analysis as a means to improve their product and sales. At the heart of each NVH analysis is a multi-million DOF finite element vibro-acoustic model. The low to mid frequency acoustic and structural responses require that thousands of frequencies and mode shapes be calculated.

We look into the role Boeing played in the inspiration, development, and deployment of these numerical solution methods and summarize how these methods are used both within Boeing and outside of Boeing in the development of multitudes of products.

Today within Boeing, the finite element method is pervasive, with several thousand engineers utilizing FEA on a regular basis.

It turns out that it was Boeing’s need and desire to improve flutter prediction that led to Boeing’s leading role in the development of the finite element method.

Who invented finite elements? In the publication “The Origins of the Finite Element Method” [1], Carlos Felippa states:

“Not just one individual, as this historical sketch will make clear. But if the question is tweaked to: who created the FEM in everyday use? there is no question in the writer’s mind: M. J. (Jon) Turner at Boeing over the period 1950–1962. He generalized and perfected the Direct Stiffness Method, and forcefully got Boeing to commit resources to it while other aerospace companies were mired in the Force Method. During 1952–53 he oversaw the development of the first continuum based finite elements.”

Figure 1: Jon Turner

Jon Turner was the supervisor of the Structural Dynamics Unit at Boeing in Seattle. In the early 1950s, with the growing popularity of jet aircraft and demands for high-performance military aircraft, delta wing structures presented new modeling and analysis problems. Existing unidirectional models (that is, beam models) did not provide sufficient accuracy. Instead, two-dimensional panel elements of arbitrary geometry were needed.

At this time, Boeing had a summer faculty program, whereby faculty members from universities were invited to work at Boeing over the summer. In the summers of 1952-53, Jon Turner invited Ray Clough from the University of California at Berkeley, and Harold Martin from the University of Washington, to work for him on a method to calculate the vibration properties of the low-aspect-ratio box beam. This collaboration resulted in the seminal paper by Turner, Clough, Martin and Topp in 1956 [2], which summarized a procedure called the Direct Stiffness Method (DSM) and derived a constant strain triangular element along with a rectangular membrane element. (Topp was a structures engineer at the Boeing Airplane Company, Wichita Division.)

It is apropos to hear this story in the words of Clough. The following passage is from a speech by Clough transcribed and published in 2004. [3]:

“When I applied for the Boeing Summer Faculty job in June 1952, I was assigned to the Structural Dynamics Unit under the supervision of Mr. M. J. Turner. He was a very competent engineer with a background in applied mathematics, and several years of experience with Boeing. The job that Jon Turner had for me was the analysis of the vibration properties of a fairly large model of a ‘delta’ wing structure that had been fabricated in the Boeing shop. This problem was quite different from the analysis of a typical wing structure which could be done using standard beam theory, and I spent the summer of 1952 trying to formulate a mathematical model of the delta wing representing it as an assemblage of typical 1D beam components. The results I was able to obtain by the end of the summer were very disappointing, and I was quite discouraged when I went to say goodbye to my boss, Jon Turner. But he suggested that I come back in Summer 1953. In this new effort to evaluate the vibration properties of a delta wing model, he suggested I should formulate the mathematical model as an assemblage of 2D plate elements interconnected at their corners. With this suggestion, Jon had essentially defined the concept of the finite element method.

“So I began my work in summer 1953 developing in-plane stiffness matrices for 2D plates with corner connections. I derived these both for rectangular and for triangular plates, but the assembly of triangular plates had great advantages in modeling a delta wing. Moreover, the derivation of the in-plane stiffness of a triangular plate was far simpler than that for a rectangular plate, so very soon I shifted the emphasis of my work to the study of assemblages of triangular plate ‘elements’, as I called them. With an assemblage of such triangular elements, I was able to get rather good agreement between the results of a mathematical model vibration analysis and those


measured with the physical model in the laboratory. Of special interest was the fact that the calculated results converged toward those of the physical model as the mesh of the triangular elements in the mathematical model was refined.”

While Jon Turner’s application for DSM was vibration calculations to facilitate flutter and dynamic analysis, Ray Clough realized that DSM could be applied to stress analysis. In 1960, Clough penned the famous paper titled “The Finite Element Method in Plane Stress Analysis,” which both adapted the DSM method for stress analysis and simultaneously coined the phrase “Finite Element.” [4].

Besides the work done by those directly affiliated with Boeing, many others contributed to the development and popularization of today’s modern finite element method. In particular, J.H. Argyris, O.C. Zienkiewicz, and E.L. Wilson should be credited with their huge contributions in developing and broadening the scope of the finite element method beyond aerospace applications. References 1, 5 and 6 provide comprehensive historical background on the development and evolution of the finite element method. References 2, 4 and 17 can be considered seminal papers that laid out the foundation of the modern finite element method.

Of significance is that Argyris was a consultant to Boeing [1] in the early 1950’s and continued to collaborate with Boeing well into the 1960’s [17]. Both Turner and Topp were Boeing engineers, and Clough and Martin were affiliated with Boeing via the summer faculty program. Therefore, it is evident that Boeing, both inspired, and was directly involved in, the research and development that directly led to today’s modern finite element method.

Dr. Rodney Dreisbach (Boeing STF, retired 2015) nicely summarized Jon Turner's significance in the FEA development and deployment within Boeing's ATLAS program. He wrote about this in the BCA Structures Core "Life@Structures Blog" on November 14, 2013. His closing paragraph reads:

“In guiding the not-so-obvious steps leading up to the creation of the FEA Method, Jon Turner has been recognized as a scientist, an engineer, a mathematician, and an innovator. Furthermore, he was a visionary as exemplified by his continued leadership in addressing more advanced flight vehicles such as advanced composite structures for a Mach 2.7 supersonic cruise arrow-wing configuration in 1976, and his continued support and advocacy of Boeing’s development of the integrated multidisciplinary structural design and analysis system called ATLAS. The ATLAS System was a large-scale finite-element-based computing system for linear and nonlinear, metallic and composite, structural optimization, including the ply stackup of advanced composite structures. The engineering disciplines represented by the System included statics, weights, dynamics, buckling, vibrations, aeroelasticity, flutter, structural optimization, substructuring, acoustics, nonlinear mechanics, and damage tolerance. Its architecture was comprised of separate modules for the various technical disciplines, all of which shared a common data management system. The System also included several advanced matrix equation solvers and eigensolvers, as well as state-of-the-art substructuring techniques. Substructured interactions could be considered as being static, or as dynamic using either a modal synthesis or branch modes approach.”

Of significance in the above description of ATLAS, is that it closely describes NASTRAN as well. This is not a coincidence. The roots of both NASTRAN and ATLAS date back to the mid–late 1960’s. Boeing was the industrial center of finite element analysis and was ahead of the other major aerospace companies in recognizing the superiority of the displacement method and deploying that method within Boeing’s precursors to ATLAS.

In 1964, NASA recognized that the future of structural analysis, particularly for complex aerospace structures, was the finite element method. At this time, NASA created a committee composed of representatives from each NASA center and chaired by Tom Butler (considered by Dr. Richard MacNeal to be the Father of NASTRAN). The committee was commissioned to investigate the state of analysis in the aerospace industry and to find an existing finite element program worth recommending to all NASA centers. The first committee action was to visit the aircraft companies that had done prominent work in finite element analysis. In the end, this committee concluded that no single computer program "incorporated enough of the best state of the finite element art to satisfy the committee's hopes" and recommended that NASA sponsor development of its own finite element program [18]. This program would be called NASTRAN, an acronym for NAsa STRuctural ANalysis.

In July, 1965, NASA issued the RFP for NASTRAN. The MacNeal-Schwendler Corporation (MSC) was not recognized as a significant or large enough entity in the finite element world, and so it partnered with Computer Sciences Corporation as the lead in its response to the RFP. Boeing considered the RFP, but in the end did not submit a proposal. Had Boeing participated, according to Dr. MacNeal (co-founder of the MacNeal-Schwendler corporation), the NASTRAN contract would have certainly gone to Boeing since Boeing was the clear industrial leader in the finite element method.

In the mid-to-late 1990’s, as an employee of MSC, the author brought Dr. MacNeal to Boeing’s Renton engineering facility where Dr. MacNeal spoke to BCA’s team of finite element analysis experts. Dr. MacNeal began his talk by thanking Boeing for not participating in the NASTRAN RFP, and he went on to tell the story of how MSC essentially won the eventual NASTRAN contract due to Boeing’s decision to not participate.

Dr. MacNeal writes that Boeing departed NASA’s NASTRAN Bidders’ Conference after being told that they could not have an exception to NASA’s requirement that all work be done on NASA’s computers [18]. The NASA purchasing agent, Bill Doles, said that an exception could not be granted because NASA had determined that their computers had a lot of excess capacity and it would be uneconomical to pay the contractors for use of their computers. Boeing responded that they would carry the costs of their own computers as overhead and not charge NASA. Bill Doles responded that this was unacceptable since most of Boeing’s work was with the government, and the government would have to pay the overhead anyway. After this exchange, at the next break, the Boeing team abruptly departed the conference.

Nonetheless, Boeing had likely influenced the RFP. The RFP was essentially a collection of what NASA perceived to be the state of the art in FEA, gathered from its studies of the various aerospace FEA codes. The fact that NASTRAN (developed according to the requirements of the RFP) closely parallels ATLAS, both architecturally and capability-wise, may not be pure coincidence, but rather a result of NASA incorporating Boeing's "state of the finite element art" into the RFP.

III. CRAIG-BAMPTON COMPONENT MODE REDUCTION AND SYNTHESIS

As mentioned in the last section, ATLAS included several “state-of-the-art substructuring techniques.” One of these techniques was Component Mode Reduction. Component Mode Reduction is a technique for reducing a finite element model of a component down to a set of boundary matrices that approximately represent the dynamic characteristics of the component. The accuracy of the approximation is generally improved by increasing the number of component modes retained during the reduction process. The reduced component is generically referred to as a substructure but currently the term superelement, coined by commercial FEA software providers, is more prevalent.

There is a litany of component mode reduction and reduced-order modeling techniques, but one technique stands out due to its widespread usage and deployment in the popular commercial FEA packages (for example, MSC Nastran, NX Nastran, ABAQUS, and ANSYS). This technique is the "Craig-Bampton" (C-B) component mode reduction method, and it is applied to a wide variety of dynamic simulations not only in aerospace, where it was conceived, but also in virtually every industry where structural dynamics has a large influence on product design and performance, especially the automotive industry. [19]
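For reference, the standard Craig-Bampton transformation in textbook notation, with boundary (interface) DOFs u_b and interior DOFs u_i; this is the generic form, not an extract from ATLAS or any commercial code:

```latex
\begin{Bmatrix} u_b \\ u_i \end{Bmatrix}
=
\underbrace{\begin{bmatrix} I & 0 \\ \Phi_{ib} & \Phi_{iq} \end{bmatrix}}_{T}
\begin{Bmatrix} u_b \\ q \end{Bmatrix},
\qquad
\Phi_{ib} = -K_{ii}^{-1} K_{ib},
\qquad
\hat{K} = T^{\mathsf T} K T, \quad \hat{M} = T^{\mathsf T} M T
```

Here Φ_ib are the static constraint modes and Φ_iq a truncated set of fixed-interface normal modes of the interior partition; retaining more columns of Φ_iq improves the approximation, which is the "number of component modes retained" mentioned above.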

Within Boeing, the C-B technique is central to the Boeing Aeroelastic Process (BAP) that is used for flight loads and flutter analysis. Of significant importance to the flutter community is that the C-B methodology enables rapid frequency variation studies as well as insertion and tailoring of assumed modes. The C-B method is also extensively applied in propulsion dynamics for Windmilling, Fan Blade Out (FBO) loads and Engine Vibration Related Noise (EVRN) analyses.

The EVRN analysis is a coupled vibro-acoustic analysis where C-B reduction is performed on both the airframe and the acoustic fluid model, reduced down to the interface with the engine. Of significance is that this C-B superelement package can be delivered to the engine manufacturers in the form of boundary matrices and output transformation matrices (OTMs), thereby preserving all Boeing IP while enabling the engine companies to determine how different engine bearing and mount designs affect the interior cabin noise.

C-B reduction with OTMs is also central to Coupled Loads Analysis in Boeing's spacecraft business. Coupled Loads Analysis, in this context, is essentially the dynamic structural analysis of the complete space structure. For example, in the case of a rocket or launch vehicle, there is also the cargo (for example, a satellite). The various components of the launch vehicle and cargo are frequently built by different companies, and neither company can generally have visibility into the other's finite element models. However, the dynamics of the entire system must be analyzed. This is facilitated by the use of superelements, typically created using C-B reduction and OTMs, similar to what was described for the propulsion EVRN analysis. This process enables all parties to generate the detailed data necessary to analyze and design their structure while preserving any IP, export, and ITAR data requirements.

Outside of Boeing, it was summarized in the Introduction how the automotive industry applies FEA with C-B reduction to their NVH dynamic analyses of their vehicles and sub-systems. Another class of dynamic analysis performed in the automotive industry, and across virtually every other industry (including aerospace) that analyzes dynamic systems is Multi-Body Dynamic (MBD) simulation.

MBD is a numerical simulation method in which systems are composed as assemblies of rigid and/or elastic bodies. Connections between the bodies are modeled with kinematic joints or linear/nonlinear springs/bushings/dampers. If inertia (mass) is eliminated, and all bodies are rigid links with kinematic constraints, then the multibody analysis reduces down to a kinematic mechanism analysis. However, when mass is included, the analysis is inherently dynamic.

For the dynamic case with flexible bodies, the challenge is to bring the flexibility of each body into the system simulation in an accurate and efficient manner. The standard methodology used to create the “flex body” is to perform a C-B reduction where the body is reduced down to the interface DOFs that connect the body to its surrounding joints. Additional transformations may be done to put the interface matrices in a form compatible with the formulation of the MBD software system. However, the first step is typically the C-B reduction. All the popular commercial finite element packages have the ability to generate “flex bodies” of components from finite element models of the component and the C-B method is used to create the reduced mass and stiffness matrices that are processed to generate the flexible body. (There are other techniques beyond C-B that can be used to generate flex bodies, particularly when nonlinearities of the component model are needed. However, for the linear cases most prevalent today, the C-B method is pervasive.)

Therefore, at this point, we have seen that Boeing had a role with the inspiration and development of the finite element method, and we have discussed how the C-B reduction technique is prevalent across industries performing dynamic structural analysis. The C-B reduction technique was also one of the “state-of-the-art substructuring techniques” present in Atlas.

The seminal paper on the C-B method was published as “Coupling of Substructures for Dynamic Analysis” in July 1968 in the AIAA Journal [9] by Roy Craig of the University of Texas and Mervyn Bampton, a Boeing Sr. Structures Engineer. Hence the name of the method “Craig-Bampton.”

Figure 2: Roy Craig

Of note is that [9] describes both the C-B reduction technique and the synthesis of multiple C-B reduced parts to generate an accurate system-level dynamic model of substantially reduced order, enabling both accurate and efficient calculation of the dynamic characteristics of highly coupled structures. This AIAA paper has more than 1100 subsequent journal citations since publication, demonstrating the impact of the C-B methodology on subsequent applications and research. Of course, the motivation for this development within Boeing and ATLAS was the application of Flutter and Coupled Dynamic Loads analysis to highly redundant space vehicle and airframe structures.

Also of note is that this very same paper was earlier published within Boeing in 1966 as document D6-15509 [10]. (This document is available electronically from library.web.boeing.com.) This document was prepared by R. R. Craig, supervised by M. C. C. Bampton and approved by L.D. Richmond. This work took place when Craig was employed by Boeing as part of the summer faculty program. [19]

Therefore, just as we saw substantial collaboration between Boeing and leading researchers in the development of the finite element method, we see a similar collaboration with Roy Craig in inspiration, development, and deployment of the Craig-Bampton method for Boeing’s dynamic analysis needs. The methodology is most credited to Roy Craig who spent his 40 years at the University of Texas specializing in development of computational and experimental methods of flexible substructures. However, the need by Boeing for efficient and accurate coupled dynamic analysis methods inspired and accelerated the development that became the Craig-Bampton technique and 50 years later, this Craig-Bampton method is omnipresent! [19]

IV. THE LANCZOS METHOD OF EIGENVALUE EXTRACTION

The natural frequencies of a structure may be the most fundamental dynamic characteristic of a structure. Dynamicists use the natural frequencies and associated mode shapes to understand dynamic behavior and interplay of components in a dynamic system. The computation of a structure’s or substructure’s natural frequencies and mode shapes is of fundamental importance to the dynamicist.

From a mathematical perspective, the calculation of natural frequencies and mode shapes is an eigenvalue extraction problem in which the roots (eigenvalues) and associated mode shapes (eigenvectors) are computed from the dynamic equation of motion with the assumption of harmonic motion while neglecting damping and applying no loading.
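In standard structural-dynamics notation (no document-specific symbols), dropping damping and loading and assuming harmonic motion turns the equation of motion into a generalized eigenvalue problem:

```latex
M \ddot{u} + K u = 0, \qquad u = \phi \, e^{i \omega t}
\;\;\Longrightarrow\;\;
\left( K - \omega^{2} M \right) \phi = 0
```

The eigenvalues ω² give the natural frequencies and the eigenvectors φ the mode shapes.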

Eigenvalue/Eigenvector calculation is also a requirement of the C-B reduction method. The C-B method uses the natural frequencies and mode shapes of a component constrained at its interface to generate the dynamic portion of the reduced stiffness, mass and loads matrices. Therefore, a robust and efficient C-B reduction requires a robust and efficient eigenvalue/eigenvector calculation.

The Lanczos eigenvalue extraction method is by far the most prevalent eigenvalue extraction method used today in the popular finite element programs for vibration and buckling modes. Today, the AMLS and ACMS methods are promoted as the state-of-the-art eigenvalue extraction methods for the largest models commonplace in the automotive industry.

While AMLS and ACMS can easily outperform the Lanczos method on large models, they are essentially automated methods of substructuring the mathematical finite element model utilizing enhanced C-B reduction for an accurate approximation of each substructure. When these C-B reduced substructures are assembled, a final system level eigenvalue extraction is performed to compute approximate system level modes.

This complete substructuring, assembly, and solution process is captured in the AMLS and ACMS methods. However, it is typically the Lanczos method with C-B reduction that is utilized to form the reduced approximate system that is solved to obtain the approximate system frequencies and mode shapes.

The Lanczos method is the bread and butter of dynamicists, whether used directly for computation of natural frequencies and mode shapes, or used indirectly with the AMLS/ACMS and similar methods that are based upon automated component modal synthesis of very large systems.

Prior to the commercial availability of the Lanczos method in the mid 1980’s, dynamicists spent a large amount of thought and time in determining how to reduce a model down to a size that could be efficiently solved with their finite element program and yield an accurate, albeit approximate solution. This is precisely why the C-B and other dynamic reduction techniques were created. However, the underlying weakness of all these methods was an accurate, efficient, and robust eigenvalue extraction method for the reduction process.

From a high level, there were essentially two families of eigenvalue extraction methods from which a dynamicist could choose: 1) Iterative based methods such as Inverse Power and Subspace Iteration, and 2) Tridiagonal methods such as the Householder-QR method. The iterative methods were relatively fast and efficient, but suffered from accuracy issues and struggled with closely spaced and large numbers of roots. The Tridiagonal methods were relatively robust and could accurately solve for all the roots of a system. Unfortunately, they also required enormous amounts of memory and were very inefficient making them impractical for all but the smallest models. The Lanczos method gained instant popularity because it could solve large models both accurately and efficiently, eliminating the tedious reduction process for a large variety of dynamic analyses.

In the 1960’s-1980’s substructuring and component mode reduction were primarily performed to enable computation of a system’s modes when the system could not be solved without reduction on the computers of the time due to memory, disk, and time constraints. After the commercial availability of the Lanczos method, substructuring and component mode reduction were primarily performed for other reasons, such as to enable efficient frequency variation studies (as is the case with BCA’s standard flutter analysis process), or to generate reduced matrix level models of components that can be shared with a third party to assemble into their system.

Only in the last 15 years with the advent of High Performance Computing (HPC) systems, have the AMLS/ACMS methods brought us back to substructuring as the norm for solving the largest eigenvalue problems because parallelization and improved performance is more easily enabled using a substructured solution process.

So what does Boeing have to do with the Lanczos method? It is twofold. First, the method was invented by Cornelius Lanczos. He published the method in 1950 while working at the National Bureau of Standards [11, 14]. However, prior to joining the National Bureau of Standards, Lanczos was employed with the Boeing Aircraft Company in Seattle from 1946-49 where he was inspired to study and improve matrix methods and numerical eigenvalue extraction of linear systems. Shortly after leaving Boeing, he completed the formulation of what we now call the Lanczos eigenvalue extraction method [12, 13].

Cornelius Lanczos was a colleague of Albert Einstein, and on December 22, 1945, he penned this passage in a letter to Einstein:

”In the meantime, my unfavorable situation here at the University has changed for the better. I have been in cooperation with Boeing Aircraft in Seattle, Washington for almost two years. Our relationship has developed in such a way that the company offered me a permanent position. It is somewhat paradoxical that I with my scientific interest can always get on as an applied mathematician.” [13]

Figure 3: Cornelius Lanczos at his desk at Boeing Plant 1, Seattle, WA

More insight into Lanczos’ inspiration from his tenure at Boeing is obtained in his recorded interview by the University of Manchester in 1972 [12]. There are several references to his time at Boeing where among other things, he mentions:

“of course this eigenvalue problem interested me a great deal because in Boeing one encountered this eigenvalue problem all the time and the traditional methods, they give you – it was easy enough to get asymptotically the highest eigenvalue, but the question is how do you get all the eigenvalues and eigenvectors of a matrix in such a way that you shouldn’t lose accuracy as you go to the lower eigenvalues… I knew of course from theoretical physics that eigenvalues and eigenvectors, I mean wave mechanics, everything, is eigenvalues and eigenvectors. Only in this case it was numerical, and in Boeing when I was frequently asked to give lectures, one of the lecture topics was matrices and eigenvalues and linear systems so that I was familiar in a theoretical way of the behavior of linear systems, particularly large linear systems.”

After joining the National Bureau of Standards, Lanczos had the opportunity to complete the formulation of his method based upon his experience at Boeing. He applied it on an analog computer available to him, but in the end, he doubted the practicality of his method. In reference 12, he tells this story:

“And I will never forget when I think it was an 8x8 matrix and the eigenvalues varied in something like 10^6. I mean the highest to the lowest, and I expected that the highest eigenvalues would come out to 10 decimal places and then we gradually lose accuracy but actually all the eigenvalues came out to 10 decimal places. I mean this was a tremendous thrill to see that, that we didn’t lose anything, but of course it had to require the careful reorthogonalization process which makes my method practically, let’s say, of less value or perhaps even of no value.”

It is somewhat entertaining that the roots of the de facto standard eigenvalue extraction method for nearly 30 years were thought by its inventor to be “of less value, or perhaps even no value.” Of course, by Lanczos’ own admission, the method was difficult to apply in practice. However, the significance of the Lanczos method in maintaining accuracy was not lost on the mathematical community and over the years, many mathematicians studied the method and searched for numerical methodologies that would make the method practical and of high value. An in-depth historical development of the Lanczos method is beyond the scope of this writing. However, this leads us to Boeing’s second point of influence on the Lanczos method: The development and deployment of the first robust commercially viable implementation of the Lanczos method.

V. BOEING COMPUTER SERVICES AND BCSLIB

The late 1960’s is a significant period for Boeing as well as for finite element analysis, numerical computing, and mainframe computing data centers. At this time, Boeing had just launched the 747 in 1969 and was about to enter the big “Boeing Bust” which saw its employment drop from >100,000 down to under 40,000 by the end of 1971. At the same time, within Boeing, this bust is perhaps responsible for the consolidation of two largely disconnected Boeing math groups: one on the military side and one on the commercial side. In 1970, Boeing Computer Services (BCS) was formed and these two math groups were brought together under the BCS organization [15].

By the 1980s, BCS had a mature data center where time was leased on Boeing computers to run commercial applications like NASTRAN and ANSYS. The expertise of the math group resulted in the establishment of a software group that built and licensed the math library BCSLIB-ext as well as developed the systems and controls software Easy5 (the "-ext" version of BCSLIB was licensed externally; BCSLIB was used internally).

During the 1980’s and early 1990’s the BCS math/software team had a major impact on solutions of large linear static and dynamic systems. Notably, they were directly responsible for the first significant robust and efficient Lanczos method deployed in a commercial FEA package. In 1985, The MacNeal-Schwendler Corporation (MSC) released Nastran V65 with Boeing’s Lanczos eigensolver [22] and in the decade following, similar implementations were deployed in most of the other popular finite element packages.

The major players on the Boeing side were John Lewis, Horst Simon, Roger Grimes, and their manager Al Erisman. Louis Komzsik, from MSC also played a major role. Louis recognized the impact the Lanczos method would have if implemented robustly. He convinced MSC to fund Boeing to bring the Lanczos method to fruition in MSC Nastran. Louis was a perfectionist and drove the Boeing team to handle everything that could break so as to make it as bomb-proof as possible.

Figure 4: John Lewis, Horst Simon and Roger Grimes
Figure 5: Louis Komzsik

Taking the Lanczos method from an unstable, impractical methodology to a highly practical, robust, and efficient methodology was the result of the work of many researchers and the coalescence of several key breakthroughs. The summary, as provided to the author during an interview with John Lewis in May 2016, is as follows: The Lanczos algorithm in Boeing's BCSLIB code combined work from five PhD theses with critical industrial support. The key contributions to efficiency are:

  1. Block algorithms (2 Stanford PhD theses – Richard Underwood, John Lewis)

  2. Stability correction only as needed (2 Berkeley PhD theses – David Scott, Horst Simon; part of one of the Stanford theses – Lewis)

  3. Shifting (Swedish PhD thesis – Thomas Ericsson)

  4. Integrating all of these with a smart algorithm for choosing shifts (BCS – Grimes, Lewis & Simon)

The "creative breakthrough," according to John Lewis, emerged over a couple of pints with David Scott in a pub in Reading, England in 1980, where they discussed Ericsson's work on shifting and came up with a plan to improve upon earlier Lanczos method implementations. However, they could not get funding to implement the plan, so it sat for several years. In John Lewis's words, Louis Komzsik emerged as the "Guardian Angel" when he brought forward the funding from MSC to implement the plan in MSC Nastran. Louis was Hungarian, as was Lanczos, so he had great faith in his countryman's idea!

Besides the Lanczos component, the other major thrust of BCSLIB was the sparse direct linear equation solver. This solver provided a substantial performance boost in the solution of large linear systems and played a significant role in Lanczos performance. Within the Lanczos implementation, a series of linear solutions (matrix decompositions) takes place as the algorithm searches for the eigenvalues. Maximum performance depends on minimizing the number of decompositions. This requires algorithms for the selection of good trial eigenvalues, along with transformations to find both closely spaced roots and widely separated roots efficiently. (What is described here is the "shifting" and the "smart algorithm for choosing shifts" mentioned above.)
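The "shifting" referred to here is, in textbook form, the shift-invert spectral transformation; each shift σ costs one sparse factorization of K − σM, which is why smart shift selection controls the total number of decompositions:

```latex
\left( K - \sigma M \right)^{-1} M \, \phi = \theta \, \phi,
\qquad
\omega^{2} = \sigma + \frac{1}{\theta}
```

Eigenvalues near σ map to the largest θ, where Lanczos converges fastest, so closely spaced roots near a shift and widely separated roots far from it can both be captured by moving the shift.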

The work of the BCS math team cannot be overstated when it is recognized that 30–35 years after their heroic efforts, the Lanczos method is still prominent in the popular commercial FEA packages. We attribute much of the performance improvement in finite element solutions to computing hardware improvements. However, in the late 1980’s, between the Lanczos and Sparse Solver methods, engineers realized order of magnitude gains in solution performance independent of any hardware improvements. These two performance improvements meant that many models that had previously required substantial substructuring and complex dynamic reduction could now be solved directly with the Lanczos method.

Also of significance is that this Boeing team, along with Cray, went on to win the 1989 Society of Industrial and Applied Mathematics (SIAM) Gordon Bell Award. They received their award specifically for achieving record performance with their implementation of a general sparse matrix factorization on an 8-processor Cray Y-MP computer. This sparse matrix solver development was another great effort that found its way into the commercial FEA codes, contributing to both Lanczos efficiency and the solution efficiency of linear systems of equations.

In closing this section, the overall contribution of Al Erisman should be noted. Erisman managed and directed the math group from 1975 until his retirement in 2001. According to John Lewis, "Al Erisman created the ethos of the Boeing Math Group, which strongly valued academic-industrial collaboration." Were it not for Erisman, the industrial collaboration between MSC and Boeing may never have taken place.

VI. CONCLUSION

The finite element method was invented roughly 60 years ago. Craig-Bampton reduction was invented roughly 50 years ago and the modern Lanczos and Sparse solver methods were deployed into commercial FEA packages roughly 30 years ago. Virtually every Boeing product created since the 1950’s relied significantly in whole or in part on these technologies. The same can be said outside of Boeing where multitudes of consumer products ranging from toys to automobiles are engineered with significant application of these technologies. In many cases, engineers are utilizing these technologies today within modern GUI’s with no idea of the underlying solution methods and algorithms at play. The fact that after multiple decades, these technologies persist, albeit in often simpler and automated implementations, is a testament to the significance of these methods. Moreover, while Boeing did not solely invent any of these technologies, Boeing’s need to engineer some of the most complex and high performance structures, had a tremendous influence on the development and eventual deployment of these methods. We feel and see the effects of these technologies in the products all around us today. As we celebrate Boeing’s Centennial, it is appropriate to not only applaud our predecessors for the impact the products they engineered had on our society, but also applaud the engineers and mathematicians at Boeing who contributed to solution methods and algorithms that are routinely applied outside of Boeing to the development of the superb products that grace our society today.

It is also fitting to mention that on the same day this conclusion was penned, the author received an assignment to generate reduced dynamic models for the 777-9X folding wing tip. The author will utilize the aeroelastic finite element model along with C-B reduction and Lanczos eigenvalue extraction to form the flexible body representation of the airframe and folding wing tips. These reduced dynamic models will be integrated into the external controls multibody dynamic system model. Therefore, the work of Boeing engineers/mathematicians Turner, Bampton, Lanczos, Lewis, Simon, and Grimes will be applied to engineer perhaps the most iconic feature of Boeing’s next great commercial airplane. However, this is not unusual since as previously mentioned, superb products are being engineered all over the world with exactly these same methods every day!

ACKNOWLEDGMENTS

I would like to acknowledge John Lewis and Roger Grimes for their time spent outlining and explaining the Boeing BCSLIB Lanczos and Sparse Solver development history. I would like to acknowledge Louis Komzsik who provided both technical and historical background on the Lanczos development. I would like to thank both Dr. Rodney Dreisbach and Dr. Kumar Bhatia. Over the past decade, I discussed this piece of Boeing history with both on numerous occasions. Their passion for these simulation technologies and Boeing inspired me to document this piece of Boeing history.



A Methodology to Identify Physical or Computational Experiment Conditions for Uncertainty Mitigation

The best paper I have read recently! Written by a Turk with industry experience at Bosch who then went into academia, earned a PhD at Georgia Tech in the US, and worked on a NASA project along the way. Things written by people with both engineering and academic backgrounds really do read a bit differently.

It rests on a few basic points. Engineers struggle to grasp, completely and accurately, the physical relationships inside a product; that is the problem he wants to solve: the engineer's knowledge problem!

Simulation analysis is routine practice in the aerospace industry, and he has that experience, so he describes the problem using the aerospace industry's full vocabulary. I agree completely with the overall approach, and in essence it does not go beyond my own understanding! A fair number of people still spend their days staring at element types, degrees of freedom, and so on; they have not yet reached the practical stage! That said, he is still essentially using probability theory to reach the goal.

Put simply: simulation results go wrong very easily, or rather they are simply wrong, yet they are useful. Where do the errors come from? The paper's classification is good: one source is manufacturing, the other is knowledge. In theory you can never build a simulation model that is a perfect physical mirror of the real article. One reason is manufacturing: there is plenty of randomness, there are tolerances! The second is that your reconstruction of the real world is biased; scientific relationships are mathematical expressions obtained by abstracting the real world, and it is simply impossible to restore all of those expressions and apply them to engineering practice in full. His explanation of why simulation is nevertheless useful is also sound: the simulation results end up agreeing with the results seen in actual use. How is that achieved? The model has to be worked over extensively along the way! How? By trying things out! This amounts to a major supplement to that line of Swanson's, and it basically matches my own thinking. In short: trial and error on the computer until the relationships between design parameters and performance outcomes are essentially understood, then adjust the model so that, in the end, the computational experiments agree with the physical experiments, achieving predictive value!

But people in traditional industries do not understand IT well enough. If he knew databases and SQL, plus a bit of data visualization, he would be far more formidable!

Complex engineering systems require the integration of sub-system simulations and the calculation of system-level metrics to support informed design decisions. This paper presents a methodology for designing computational or physical experiments aimed at mitigating system-level uncertainties. (Counted this way, solutions to engineering problems fall into two categories; analytical, numerical, and experimental solutions would make it a three-way classification.)

The approach is grounded in a predefined problem ontology, where physical, functional, and modeling architectures (these three terms alone already screen out a lot of people!) are systematically established. (The framework of the simulation analysis is already fixed: the purpose, the domain, and the models are all in place.) By performing sensitivity analysis using system-level tools, critical epistemic uncertainties can be identified. (This is what I used to call the process parameters, covering both epistemic uncertainty and random error.) Based on these insights, a framework is proposed for designing targeted computational and physical experiments to generate new knowledge about key parameters and reduce uncertainty. (This framework is of some interest! How would it differ from mine? His emphasis is still on mathematical modeling.)


 The methodology is demonstrated through a case study involving the early-stage design of a Blended-Wing-Body (BWB) aircraft concept, illustrating how aerostructures analyses can support uncertainty mitigation through computer simulations or by guiding physical testing. The proposed methodology is flexible and applicable to a wide range of design challenges, enabling more risk-informed and knowledge-driven design processes.

1 Introduction and Background

Background: an industry you cannot get by in without academic credentials:

The design of a flight vehicle is a lengthy, expensive process spanning many years. (A needle-in-a-haystack, low-probability business: a solution exists, but finding it is very hard!) With the advance in computational capabilities, designers have been relying on computer models to make predictions about the real-life performance of an aircraft. (With growing computing power, designers have come to rely on computer models to predict an aircraft's real-world performance: analysis is required. This is the only industry completely covered by CAE, and it was covered very early! MSC's earliest business was aerospace only, and it never managed to expand beyond it.) However, the results obtained from computational tools are never exact due to a lack of understanding of physical phenomena, inadequate modeling, and abstractions in product details [1, 2, 3]. (The accuracy question everyone cares about! Unlike the solver vendors' script, this is the engineer's perspective: physics not fully mastered and inaccurate models make the simulation results wrong!) The vagueness in quantities of interest is called uncertainty. (The definition of uncertainty.) The uncertainty in simulations may lead to erroneous predictions regarding the product, creating risk. (If the simulation is not done well, the uncertainty in it creates risk: the engineer's core concern!)

The difficulty: money and time:

Because most of the cost is committed early in the design [4], any decision made on quantities involving significant uncertainty may result in budget overruns, schedule delays and performance shortcomings, as well as safety concerns. (The risk terms that come with designing like the blind men and the elephant are well put: over budget, behind schedule, poor performance, safety hazards.)

The goal: gain full knowledge and predict accurately:

Reducing the uncertainty in simulations earlier in the design process will reduce the risk in the final product. The goal of this paper is to present a systematic methodology to identify and mitigate the sources of uncertainty in complex, multi-disciplinary problems such as aircraft design, with a focus on uncertainties due to a lack of knowledge (i.e., epistemic).

The name I gave it is: how to gain the God's-eye view, omniscience!

1.1 The Role of Simulations in Design

Computational tools are almost exclusively used to make predictions about the response of a system under a set of inputs and boundary conditions [5].

Engineers really should throw away the set of talking points the CAE vendors laid down in the 1980s! This line has a bit of a first-principles flavor: CAE predicts whether the engineering solution matches the design intent!
At the core of computational tools lies a model representing the reality of interest, commonly in the form of mathematical equations obtained from theory or previously measured data. (Mathematical modeling.) How a computer simulation represents a reality of interest is summarized in Figure 1. Development of the mathematical model implies that there exists some information about the reality of interest (i.e., a physics phenomenon) at different conditions, so that the form of the mathematical equation and the parameters that have an impact on the results of the equation can be derived. The parameters include the coefficients and mathematical operations in the equations, as well as anything related to representing the physical artifact, boundary and initial conditions, and system excitation [6]. (A huge number of parameters is involved; if one parameter is wrong, the result is wrong!)

A complete set of equations and parameters are used to calculate the system response quantities (SRQ). Depending on the nature of the problem, the calculation can be straightforward or may require the use of some kind of discretization scheme.

If the physics phenomenon is understood well enough that the form of the mathematical representation is trusted, a new set of parameters in the equations (e.g., coefficients) may be sought in order to better match the results with an experimental observation. This process is called calibration. (Calibration: essentially, the word "validation" that I had picked myself is not accurate enough; this term describes it better!) With the availability of data on similar artifacts in similar experimental conditions, calibration enables the utilization of existing simulations to make more accurate predictions with respect to the measured "truth" model. (Points near the calibration point can be predicted accurately; nothing new in that!)
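As a toy illustration of what calibration means operationally (the model form, coefficient, and data below are invented for illustration and are not from the paper), one can tune an uncertain coefficient so that the simulated SRQ matches measurements in a least-squares sense:

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_srq(x, c_drag):
    """Toy 'simulation': an assumed model form with one uncertain coefficient."""
    return c_drag * x**2 + 0.1 * x          # hypothetical SRQ(x; c_drag)

# Hypothetical measurements of the "truth" at a few test conditions.
x_test = np.array([0.5, 1.0, 1.5, 2.0])
srq_measured = np.array([0.17, 0.38, 0.72, 1.18])

def residual(c):
    return simulate_srq(x_test, c[0]) - srq_measured

fit = least_squares(residual, x0=[0.2])      # prior guess for the coefficient
print(f"calibrated coefficient: {fit.x[0]:.3f}")
# Near the calibration conditions the tuned model now tracks the measurements;
# far from them, the model-form uncertainty discussed in the text remains.
```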

Figure 1: Illustration of a generic computer simulation and how System Response Quantities are obtained. Adapted from [6]

Most models are abstractions of the reality of interest, as they do not consider the problem at hand in its entirety but only in general aspects that yield useful conclusions without spending time on details that do not significantly impact the information gain. (This is another way of saying "every model is wrong, but some are useful"!) Generally, models that abstract fewer details of the problem are able to provide more detailed information with better accuracy, but they require detailed information about the system of interest and the external conditions to work with. Such models are called high-fidelity models (hi-fi, like the audio term). Conversely, low-fidelity models abstract larger chunks of the problem and have a quick turnaround time. They may be able to provide valuable information without so much effort going into setting up the model; unlike high-fidelity models, they can generally do it with very low computational cost. The choice of the fidelity of the model is typically left to the practitioner and depends on the application.

Read this alongside a passage from Swanson:

Let me go slightly sideways on that. When I first started teaching ANSYS, I said always start small. If you're going to go an auto crash analysis, you have one mass rep a car and one spring. Understand that? Then you can start modifying. You can put the mass on wheels and allow it to rotate a little bit when it hits a wall, and so on. So each, each new simulation should be a endorsement of what's happened before. Not a surprise. Yeah, if you get a surprise and simulation, you didn't understand the problem. Now, I'm not quite I think that relates to what your question was. And that is basically start at the start small and work your way up. Don't try to solve the big problem without understanding all the pieces to it.
It is just a different expression of the same idea.

A few decades ago, engineers had to rely on physical experiments to come up with new designs or tune them, as their computational capabilities were insufficient. (Optimization by building prototypes.) Such experiments, which include artifacts and instrumentation systems, are generally time consuming and expensive to develop. In the context of air vehicle design, design is an inherently iterative process, and these experiments would need to be rebuilt and reevaluated. Therefore, while they are effective for evaluation purposes, they cannot be treated as parametric design models unless they have been created with a level of adjustability. With the advance of more powerful computers and their widespread use in all industries (and yet even ANSYS has sold itself; clearly the industry never spread far beyond its niche), engineers turned to simulations to generate information about the product they are working on. Although detailed simulations can be prohibitively expensive in terms of work-hours and computation time, the use of computer simulations is typically cheaper and faster than following a build-and-break approach for most large-scale systems.

As the predictions obtained from simulations played a larger part in the design, concepts such as "simulation driven design" have become more prominent in many disciplines [7]. If the physics models are accurate, constructing solution environments with very fine grids to capture complex physics phenomena accurately becomes possible. (Accurate meshing is just one factor in getting accurate CAE results; it mainly affects convergence. Build an accurate model: broadly speaking, accurate model, accurate simulation results!) The cost of making a change in the design increases exponentially from initiation to entry into service [8]. If modeling and simulation environments that accurately capture the physics are used in the design loop, it will be possible to identify necessary changes earlier. (This is where the value of simulation lies: only if X is right will Y be right.) Because making design changes later may require additional changes in other connected sub-systems, it will lead to an increase in the overall cost [9].

1.2 Modeling Physical Phenomena

When the task of designing complex products involves the design of certain systems that are unlike their counterparts or predecessors, the capability of known physics-based modeling techniques may fall short. (The value of experience: for a brand-new product it is hard to build an accurate analysis model!) For example, when the goal is making predictions about a novel aircraft configuration, a gap is to be expected between the simulation predictions and the measurements from the finalized design. (The simulation results and the test results will essentially always differ: a quantitative difference!) If the tools are developed for traditional aircraft concepts (e.g., tube and wing configurations), there might even be a physics phenomenon occurring that will not be expected or captured. (There may even be a qualitative difference: the source of inaccurate prediction!) Even if there is none, the accuracy of models in such cases is still to be questioned. (Even if the simulation agrees with the physical check, the model may still be inaccurate; anyone with deep simulation experience knows this is no empty statement!) There are inherent abstractions pertaining to the design, and the best way to quantify the impact of variations in the quantities of interest from changing the geometric or material properties is by making a comparative assessment with respect to historical data. (Performance design needs PDM.) However, in this case, historical data simply do not exist. (There is no known data to use as a reference.)

Because of a lack of knowledge or inherent randomness, the parameters used in modeling equations, boundary/initial conditions, and the geometry are inexact, i.e., uncertain. (Fundamentally, uncertainty cannot be eliminated; he sees two causes: manufacturing plus ignorance. Industries without laboratories cannot do CAE calibration.) The uncertainty in these parameters and in the model itself (model parameters are inherently uncertain!) manifests itself as uncertainty in the model output. As mentioned before, any decision made on uncertain predictions will create risk in the design. (This is what John A. Swanson, the ANSYS founder, meant by "no surprises during the analysis": they essentially treat CAE as a quantitative tool; the trend is already understood, but exactly where the critical point lies is not, and that is when accurate judgment, and hence CAE, is needed.) In order to tackle the overall uncertainty, the sources of individual uncertainties must be meticulously tracked (the essential reason why "some are useful": the risks are known, like the lymph nodes in the body!) and their impact on the SRQs needs to be quantified. By studying the source and nature of these constituents, they can be characterized and the necessary next steps to reduce them can be identified.

In a modeling and simulation environment, every source of uncertainty has a varying degree of impact on the overall uncertainty. (Understand the relationships.) Each can then be addressed in a specific way depending on its nature. Uncertainties that are present because of a lack of knowledge (a cognition problem, i.e., epistemic uncertainty) can by definition be reduced [10]. The means to achieve this goal can be through designing a study or experiment that would generate new information about the model or the parameters in question. In this paper, the focus will be on how to design a targeted experiment for uncertainty reduction purposes (he promises to deliver a new "how"!). Such experiments are not a replication of the same experimental setup in a more trusted domain, but a new setup that is tailored specifically for generating new knowledge pertaining to that source of uncertainty.

(Note: reduce the impact of ignorance!)

An important consideration in pursuing targeted experiments is the time and budget allowed by the program. If a lower-level, targeted experiment to reduce uncertainty is too costly, or carries even more inherent unknowns due to its experimental setup, the designers may find it undesirable to pursue. Therefore, these lower-level experiments must be analyzed case by case, and their viability needs to be assessed. There is a trade-off between how much reduction in uncertainty can be expected and the cost of designing and conducting a tailored experiment. Realistically, only a limited number of them can be pursued. Considering the number of simulations used in the design of a new aircraft, trying to validate the accuracy of every parameter or assumption of every tool would lead to an insurmountable number of experiments. For the ultimate goal of reducing the overall uncertainty, the sources of uncertainty that have the greatest impact on the quantities of interest must be identified. (Note: the value of engineering is achieving the objective, not doing science; roughly understanding the problem and keeping it under control is enough!) Some parameters with relatively low uncertainty may have a great effect on a response, whereas another parameter with great uncertainty may have little to no effect on the response. (Note: another way of stating tolerancing.)

In summary, the train of thought that leads to experimentation to reduce the epistemic uncertainty in modeling and simulation environments is illustrated in Figure 2. If the physics of the problem is relatively well understood, a computational model can be developed. (Note: a mathematical model that captures the physical relationships is the foundation!) If not, one needs to perform discovery experiments simply to learn about the physical phenomenon [11]. (Note: solving the cognition problem!) Then, if the results of this model are consistent and accurate, it can be applied to the desired problem. (Note: simulation results agree with test results!) If not, the aforementioned lower-level experiments can be pursued to reduce the uncertainty in the models. The created knowledge should enable a reduction of uncertainty in the parameters or the models, thereby reducing the overall uncertainty. (Note: only by understanding the relationships can the uncertainty be controlled!)




1.3 Reduced-Scale Experimentation (Note: the aerospace practice of building and testing scaled-down models is itself a form of prediction!)

An ideal, accurate physical test would normally require building duplicates of the system of interest in the corresponding domain so that the obtained results reflect actual conditions. Although this poses little to no issue for computational experiments (barring inherent modeling assumptions), because the scale of the simulation artifact does not matter to a computational tool, it has major implications for the design of ground and flight tests. Producing a full-scale replica of the actual product, with all the details required for a certain test, is expensive and difficult in the aerospace industry. Although full-scale testing is necessary in some cases and for reliability/certification tests [12], it is desirable to reduce the number of such tests. Therefore, engineers have always tried to duplicate full-scale test conditions on a reduced-scale model of the test artifact.

The principles of similitude were laid out by Buckingham in the early 20th century [13,14]. Expanding on Rayleigh's method of dimensional analysis [15,16], he proposed the Buckingham Pi theorem. This method makes it possible to express a complex physical equation in terms of dimensionless and independent quantities (Π groups). Although they need not be unique, they are independent and form a complete set. These non-dimensional quantities are used to establish similitude between models of different scales. When a reduced-scale model satisfies certain similitude conditions with the full-scale model, the same response is expected from the two. Similitude can be categorized into three groups [17] (a small numeric example follows the list below):

1. Geometric similitude: the geometry is equally scaled.

2. Kinematic similitude: "Homologous particles lie at homologous points at homologous times" [18].

3. Dynamic similitude: homologous forces act on homologous parts or points of the system.
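As an illustration of how a single Π group constrains a sub-scale test (my own example, not taken from the paper), the sketch below checks Reynolds-number similitude, Re = ρVL/μ: at fixed air density and viscosity, a 1/5-scale model needs roughly five times the full-scale velocity to match Re. The assumed speeds and lengths are placeholders.

import numpy as np

# Illustrative only: Reynolds-number similitude for a geometrically scaled model.
rho, mu = 1.225, 1.81e-5        # sea-level air density [kg/m^3], dynamic viscosity [Pa*s]
V_full, L_full = 68.0, 2.0      # assumed full-scale speed [m/s] and reference length [m]
n = 0.2                          # geometric scale factor (sub-scale length = n * full-scale length)

Re_full = rho * V_full * L_full / mu
V_sub = Re_full * mu / (rho * n * L_full)   # speed that matches Re at the smaller scale
print(f"Re_full = {Re_full:.3e}, required sub-scale speed = {V_sub:.1f} m/s")

Note that the required sub-scale speed approaches the speed of sound, which conflicts with Mach similitude; this is exactly why, as discussed later in the paper, all similitude conditions usually cannot be satisfied simultaneously.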

1.4 Identification of Critical Uncertainties via Sensitivity Analysis (Note: a rather good concept.)

Over the recent decades, the variety of problems of interest has led to the development of many sensitivity analysis techniques. While some of them are quantitative and model-free [19], others depend on the specific type of mathematical model used [20]. Similar to how engineering problems can be addressed with many different mathematical approaches, sensitivity analyses can be carried out in different ways. For the most basic and low-dimensional cases, even graphical representations such as scatter plots may yield useful information about the sensitivities [21]. As the system gets more complicated, however, methods such as local sensitivity analyses (LSA), global sensitivity analyses (GSA), and regression-based tools such as prediction profilers may be used.

(Note: screening and filtering the key parameters, analogous to the critical and important dimensions on an engineering drawing!)

LSA methods provide a local assessment of the sensitivity of the outputs to changes in the inputs, and are only valid near the current operating conditions of the system. GSA methods are designed to address the limitations of LSA methods by providing a more comprehensive assessment of the sensitivity of the outputs to changes in the inputs. Generally speaking, GSA methods take into account the behavior of the system over a large range of input values, and provide a quantitative measure of the relative importance of different inputs in the system. In addition, GSA methods do not require assumptions about the relationship between the inputs and outputs, such as the function being differentiable, and are well suited for high-dimensional problems. Therefore, GSA methods have been dubbed the gold standard in sensitivity analysis in the presence of uncertainty [22].

Variance-based methods apportion the variance in the model outputs to the model inputs and their interactions. One of the most popular variance-based methods is the Sobol method [23]. Consider a function Y = f(X); the Sobol index for a variable X_i is the ratio of the output variability attributable to X_i to the overall variability in the output. These variations can be obtained from parametric or non-parametric sampling techniques. Following this definition, the first-order effect index of input X_i can be defined as:

S_{1i} = \frac{V_{X_i}\left( E_{X_{\sim i}}\left( Y \mid X_i \right) \right)}{V(Y)}    (1)

where the denominator represents the total variability in the response Y, and the numerator represents the variation of Y when changing X_i while keeping all the other variables constant. The first-order effect represents the variability caused by X_i alone. Following the same logic, the combined effect of two variables X_i and X_j can be calculated:

S_{1i} + S_{1j} + S_{2ij} = \frac{V_{X_{ij}}\left( E_{X_{\sim ij}}\left( Y \mid X_i, X_j \right) \right)}{V(Y)}    (2)

Finally, the total effect of a variable is the sum of its first-order effect and all of its interactions of all orders with the other input variables. Because the sum of all sensitivity indices must be unity, the total effect index of X_i can be calculated as [24]:

S_{Ti} = 1 - \frac{V_{X_{\sim i}}\left( E_{X_i}\left( Y \mid X_{\sim i} \right) \right)}{V(Y)}    (3)

Because the Sobol method is tool-agnostic and can be used without any approximation of the objective function, it is employed throughout this paper for sensitivity analysis purposes.
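A minimal sketch of how indices (1) and (3) can be estimated with the common pick-and-freeze (Saltelli/Jansen) Monte Carlo estimators, shown here on a toy function of my own choosing rather than on any model from the paper:

import numpy as np

def sobol_indices(f, d, N=4096, rng=np.random.default_rng(0)):
    # Two independent sample matrices on the unit hypercube [0, 1]^d.
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))             # total output variance
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # replace column i of A with column i of B
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / V        # Saltelli (2010) first-order estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V  # Jansen total-effect estimator
    return S1, ST

# Toy response: x0 dominates, x2 is inert.
f = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] ** 2 + 0.0 * X[:, 2]
print(sobol_indices(f, d=3))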

2 Development of the Methodology (Note: the logic here is highly consistent with that of science.)

The proposed methodology is developed with the purpose of identifying and reducing the sources of epistemic uncertainty in complex design projects in a systematic fashion. (Note: it mainly aims to solve a cognition problem and help engineers understand the problem!)

First, the problem for which the mitigation of uncertainty is the objective is defined, and the corresponding high-level requirements are identified. In this step, the disciplines involved in the problem at hand, and how requirements flow down to analyses, are noted. Then the problem ontology is formulated using functional, physical, and modeling decompositions. A top-down decision-making framework is followed to create candidate, fit-for-purpose modeling and simulation environments and to select the most appropriate one for the problem at hand. Upon completion of this step, a modeling and simulation environment that covers every important aspect of the problem and satisfies the set of modeling requirements is obtained, while acknowledging the abstractions of the environment.

The third step is to run the model to collect data. For the aforementioned reasons, there will be many uncertainties in the model; therefore the focus of this step is to identify the critical uncertainties that have a significant impact on the model response. (Note: this is his core process, a process of roughly working out the relationships. Following this approach takes a great deal of time!)

If this uncertainty is deemed unacceptable, or is found to have a significant impact on the decision-making process, then a lower-level experiment is designed in order to create new knowledge pertaining to that uncertainty. (Note: this is his horizontal layering, progressively working the problem out.) This new knowledge can be carried over to the modeling and simulation environment so that the impact of the new uncertainty characteristics (i.e., the probability distribution) on the model response can be observed. (Note: scientific observe-and-reason!) The main idea of the proposed methodology is to generate new knowledge in a systematic way (Note: another way of saying "science") and to update the appropriate components of the modeling and simulation environment with the newly obtained information. (Note: advancing the problem.)

This process is illustrated in a schematic way in Figure 3.


(Note: the way to approach a brand-new product!)

1. Problem Definition:
   All aircraft are designed to answer a specific set of requirements borne out of the needs of the market or stakeholders. (Note: well summarized.)

   The definition of the concept of operations outlines the broad range of missions and operations that the aircraft will be used for. After the overall purpose of the aircraft is determined, the designers can decide on the required capabilities, and the metrics by which these capabilities are going to be measured can be derived. Considering this information, a concept of the aircraft and its rough size can be determined. This process can be completed by decomposing the requirements and tracking their impact on metrics from the capability and operations perspectives [25]. For multi-role aircraft, additional required capabilities will emerge and parallel decompositions may need to be used. (Note: objective decomposition!)

2. Formulate the Problem Ontology:
   Following the systems-engineering-based methods given in References [25,26], a requirements analysis and breakdown can be performed, identifying the key requirements for the aircraft, the mission, and the wing structure to be analyzed. Then a physical decomposition of the sub-assembly is developed, outlining its key components and their functionalities, and the set of abstractions is decided on. (Note: similar to the concept of object-oriented programming!) Finally, a modeling architecture that maps the physical and functional components to the decompositions is created. (Note: analogous to the dimension chain in motor design, or the conceptual framework of a product!) This mapping from requirements all the way to the modeling and simulation attributes is called the problem ontology, illustrated in Figure 4. With this step it is possible to follow a decision-making framework to select the appropriate modeling and simulation tool for this task from among many candidates.






3. Identification of Critical Uncertainties:
   With a defined M&S environment, it is possible to run cases and rigorously identify the critical uncertainties that have the most impact on the quantities of interest. (Note: fundamentally, only someone familiar with both simulation and the physics can play this game!) As mentioned before, there is a plethora of methods for representing uncertainty mathematically and quantifying its impact. (Note: the mathematical modeling is realized through CAE!) For this use case, a sensitivity analysis is performed to determine which parameters have the greatest impact on the output uncertainty of the corresponding tools, by calculating total Sobol indices.

4. Design a Lower-level Experiment:

   The next task is to address the identified critical uncertainties in the selected M&S environment. (Note: simplify the problem and screen the parameters!) To this end, the steps corresponding to designing a lower-level experiment are illustrated in Figure 5. It is essential to note that the primary purpose of this lower-level experiment is not to model the same phenomenon in a subsequent higher-fidelity tool, but rather to use the extra fidelity to mitigate uncertainty for system-level investigation purposes.

   The experimental design is bifurcated into computational experiments (CX) and physical experiments (PX) (Note: carried out in parallel!), each of which may serve a unique purpose within the research context. For computational experiments, the focus is on leveraging computational models to simulate scenarios under various conditions and parameters, allowing a broad exploration of the problem space without the constraints of physical implementation. Conversely, the physical experiments involve the design and execution of experiments in a physical environment. This phase is intricately linked to the computational experiments: to the extent that they accurately represent the physical experiments, they can be used to guide the physical experimentation setups. This entails a careful calibration process, ensuring that the computational models reflect the real-world constraints and variables encountered in the physical domain as closely as possible. (Note: everyone's thinking is similar; solving this problem does require real technical depth!) This step is a standalone research area by itself, and it is only demonstrated here on a single case.

   Upon completion of the experimentation procedure, the execution phase takes place, where the experiments are conducted according to the predefined designs. This stage is critical for gathering empirical data and insights, which are then subjected to rigorous statistical analysis. The interpretation of these results forms the basis for drawing meaningful conclusions, ultimately contributing to the generation of new knowledge pertaining to the epistemic uncertainty in question. This methodological approach, characterized by its dual emphasis on computational and physical experimentation, provides a robust framework for analyzing uncertainties. (Note: the goal differs; I think the real goal is optimization!)

3 Demonstration and Results

3.1 Formulating the Problem Ontology

Development of a next-generation air vehicle platform involves significant uncertainties. To demonstrate how the methodology applies to such a scenario, the selected problem is the aerostructures analysis of a Blended-Wing-Body (BWB) aircraft in the conceptual design stage. The goal is to increase confidence in the predictions of the aircraft range by reducing the associated uncertainty in the parameters used in design. (Note: the aircraft industry is essentially craft-based, what we would call build-to-sample; its challenges are different, but there is real technical depth!) A representative OpenVSP drawing of the BWB aircraft used in this work is given in Figure 6.

Figure 6: BWB concept used in this work.

3.2 Identification of Critical Uncertainties

For the given use case, two tools are found to be appropriate for early-stage design exploration purposes: FLOPS and OpenAeroStruct.

As a low-fidelity tool (Note: a minimal model!), NASA's Flight Optimization System (FLOPS) [27] is used to calculate the range of the BWB concept for different designs. FLOPS is employed due to its efficiency in early-stage design assessment, providing a quick and broad analysis under varying conditions with minimal computational resources. FLOPS facilitates the exploration of a wide range of design spaces by rapidly estimating performance metrics, which is crucial during the conceptual design phase, where multiple design iterations are evaluated for feasibility and performance optimization. (Note: in purpose and methodology it is similar to ANSYS RMxprt, perhaps with fewer degrees of freedom or a more simplified model!) FLOPS uses historical data and simplified equations (Note: so it really is like RMxprt!) to estimate the mission performance and the weight breakdown of an aircraft. Because it is mainly based on simpler equations, its run time for a single case is very low, making it possible to run a relatively high number of cases. Because FLOPS uses lumped parameters, it is only logical to move to a slightly higher-fidelity tool appropriate for the conceptual design phase in order to break down the lumps. A more detailed analysis of the epistemic uncertainty variables then becomes possible.

OpenAeroStruct [28] is a lightweight, open-source tool designed for integrated aerostructural analysis and optimization. It combines aerodynamic and structural analysis capabilities within a gradient-based optimization framework, enabling efficient design of aircraft structures. The tool supports various analyses, including wing-deformation effects on aerodynamics and the optimization of wing shape and structure to meet specific design objectives. In this work it is used as a step following the FLOPS analyses, as it represents a step increase in fidelity. Given the selected analysis tools, the main parameter uncertainties to be investigated are the material properties (e.g., Young's modulus) and the aerodynamic properties (e.g., lift and drag coefficients).

Table 2: Nomenclature for mentioned FLOPS variables

Variable    Description
WENG        Engine weight scaling parameter
OWFACT      Operational empty weight scaling parameter
FACT        Fuel flow scaling factor
RSPSOB      Rear spar percent chord for BWB fuselage at side of the body
RSPCHD      Rear spar percent chord for BWB at fuselage centerline
FCDI        Factor to increase or decrease lift-dependent drag coefficients
FCDO        Factor to increase or decrease lift-independent drag coefficients
FRFU        Fuselage weight (composite for BWB)
E           Span efficiency factor for wing

First, 31 parameters are selected from the list of FLOPS inputs, either because they are not related to the design or because they are highly abstracted parameters that capture the largest amount of abstraction and may cause variance in the outputs. Of these 31 parameters, 27 are scaling factors and are assigned a range between 0.95 and 1.05. The remaining four are related to the design, such as the spar percent chord of the BWB at the fuselage centerline and at the side of the body, and they are swept over estimated, reasonable ranges. The parameter names and their descriptions are explained throughout the discussion of the results as necessary, but an overview is listed in Table 2 for the convenience of the reader. The aircraft range is calculated for each combination of sampled input parameters. For these parameters, 4096 samples are generated using Saltelli sampling [19], a variation of fractional Latin Hypercube sampling, due to its relative ease in calculating Sobol indices and the availability of existing tools. Calculation of the indices is carried out using the Python library SALib [29].
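A minimal sketch of this sampling-and-analysis step using the classic SALib interface; the run_flops wrapper and the two variable subsets shown are placeholders of mine, not part of FLOPS or of the paper's setup:

from SALib.sample import saltelli
from SALib.analyze import sobol
import numpy as np

# Problem definition in SALib's dictionary format; only two of the 31
# parameters are listed here for brevity, both with the 0.95-1.05 scaling range.
problem = {
    "num_vars": 2,
    "names": ["WENG", "FCDO"],
    "bounds": [[0.95, 1.05], [0.95, 1.05]],
}

X = saltelli.sample(problem, 1024)            # Saltelli sample of the input space

def run_flops(x):                             # placeholder for a call into FLOPS
    return 7000.0 * x[0] - 3000.0 * x[1]      # fake "range" response for illustration

Y = np.array([run_flops(x) for x in X])
Si = sobol.analyze(problem, Y)                # returns first-order and total indices
print(Si["S1"], Si["ST"])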

Figure 7: Comparison of sensitivity indices calculated by three different methods: Quasi-Monte Carlo, surrogate model with full sampling, and surrogate model with 10% sampling.

In Figure 7, the total sensitivity indices calculated with three different sampling strategies are shown. Blue bars represent Quasi-Monte Carlo (QMC) sampling using all 4096 input samples. To demonstrate how sensitivity rankings may change through the use of surrogate modeling techniques, two different response surface equations (RSEs) are employed. The first is an RSE constructed using all 4096 points, and the other is constructed using only 10% of the points, representing a case where fewer model evaluations can be afforded. After verifying that these models fit well, it is seen that they are indeed able to capture the trends, although the ranking of the important sensitivities needs to be treated with care.

In this analysis, the aerodynamic properties, material properties, and wing structure location are found to have a significant impact on the wing weight and aerodynamic efficiency. Therefore, they are identified as critical uncertainties. This is expected and consistent with the results found earlier for a tube-and-wing configuration in Reference [30].

3.3 Experiment Design and Execution

To demonstrate the methodology, the next step involves the low-fidelity aerostructures analysis tool OpenAeroStruct [28] and shows how such a tool can be utilized to guide the design of a physical experimentation setup.

1. Problem Investigation:

   • Premise: The variation in the range calculations is significantly influenced by uncertainties in the Young's modulus, the wingbox location, and the aerodynamic properties. These parameters belong to the disciplines of aerodynamics and structures.

   • Research Question: How can the uncertainties in these parameters, which impact the range, be reduced?

   • Hypothesis: The Breguet range equation is a good first-order approximation for calculating the maximum range of an aircraft [31]. A more accurate determination of the probability density function describing the aerodynamic performance of the wing will reduce the uncertainty in the wing weight predictions.

2. Thought Experiment: Visualizing the impact of more accurately determined parameters on the simulation results, we would expect to see a reduction in the variation of the simulation outputs.

3. Purpose of the Experiment: This experiment aims to reduce the parameter uncertainty in our wing aerostructures model. There is little to no expected impact of unknown physics that would interfere with the simulation results at such a high level of abstraction; in other words, the phenomenological uncertainty is expected to be insignificant for this problem. In order to demonstrate the proposed methodology, both avenues, computational experiment only and physical experiment, will be pursued.

4. Experiment Design: We decide to conduct a computational experiment that represents a physical experimentation setup, where the parameters pertaining to the airflow, material properties, and structural location are varied within their respective uncertainty bounds and the resulting lift-to-drag ratios are observed. For the subscale physical experiment, the boundary conditions of the experiment need to be optimized for the reduced scale so that the closest objective metrics can be obtained.

5. Computational Experiments for both cases:

   • Define the Model: We use the OpenAeroStruct wing structure model with a tubular-spar structural approximation and a wingbox model.

   • Set Parameters: The parameters to be varied are the angle of attack, Mach number, and location of the structure.

   • Design the Experiment: We use Latin Hypercube Sampling to randomly sample the parameter space (see the sampling sketch after this list). Then Sobol indices are computed to observe the global sensitivities over the input space.

   • Develop the Procedure: (Note: simulation and build-and-test proceed in parallel!)

     • For CX only: For each random set of parameters, run the OpenAeroStruct model and record the resulting predictions, case numbers, and run times. After enough runs to sufficiently explore the parameter space, analyze the results.

     • For PX only: Use the wingbox model only in OpenAeroStruct, pose the problem as a constrained optimization problem to obtain the PX experimentation conditions, treat the scale as a design variable, and scale the dimensionless parameters accordingly.
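A minimal sketch of the Latin Hypercube sampling step using scipy's QMC module; the four variables and their bounds are illustrative assumptions, and run_openaerostruct is a hypothetical wrapper, not the actual OpenAeroStruct API:

import numpy as np
from scipy.stats import qmc

# Assumed sweep ranges: angle of attack [deg], Mach number,
# spar location [fraction of chord], Young's modulus [Pa].
lower = [0.0, 0.80, 0.10, 60e9]
upper = [10.0, 0.87, 0.60, 80e9]

sampler = qmc.LatinHypercube(d=4, seed=0)
X = qmc.scale(sampler.random(n=256), lower, upper)   # 256 LHS samples in physical units

def run_openaerostruct(x):                           # placeholder for the aerostructural solve
    alpha, mach, spar_loc, E = x
    return 18.0 - 0.3 * alpha + 2.0 * mach           # fake L/D response for illustration

LD = np.array([run_openaerostruct(x) for x in X])
print(LD.mean(), LD.std())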

3.4 Computer Experiments for Uncertainty Mitigation

Reducing the uncertainty in the lift-to-drag ratio would have a direct impact on reducing the uncertainty in range predictions. L/D is a key aerodynamic parameter that determines the efficiency of an aircraft or vehicle in converting lift into forward motion while overcoming drag. By reducing the uncertainty in L/D, one can achieve more accurate and consistent estimates of the aircraft’s efficiency, resulting in improved range predictions with reduced variability and increased confidence.
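The link between L/D uncertainty and range uncertainty can be made concrete with the Breguet range equation mentioned in the hypothesis above; the one-line derivation below is my own illustration, not taken from the paper:

R = \frac{V}{c}\,\frac{L}{D}\,\ln\frac{W_i}{W_f}
\quad\Rightarrow\quad
\frac{\partial R}{\partial (L/D)} = \frac{R}{L/D}
\quad\Rightarrow\quad
\frac{\delta R}{R} \approx \frac{\delta (L/D)}{L/D}

so, to first order, a 2% standard deviation in L/D produces roughly a 2% standard deviation in the predicted range, holding the speed V, the specific fuel consumption c, and the weight fraction W_i/W_f fixed.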

To calculate L/D and the other required metrics, the low-fidelity, open-source aerostructures analysis software OpenAeroStruct is used. The BWB concept illustrated in Figure 6 is exported to OpenAeroStruct. For simplicity, the vertical stabilizers are ignored in the aerodynamics and structures analyses. The wingbox is abstracted as a continuous structure spanning from 10% to 60% of the chord throughout the whole structural grid. This setup is used for both the CX and PX cases, and the model parameters are manipulated according to the problem.

3.4.1 Test conditions

First, it is necessary to develop a simulation that reproduces the full-scale conditions. The simpler approximation models the wingbox structure as a tubular spar whose diameter reduces from root to tip. The diameters are calculated through optimization loops so that stress constraints are met. For the wingbox model, the location is approximated from the conceptual design of the structural elements using public-domain knowledge. For both cases, different aerodynamic and structural grids are employed to investigate the variance in the SRQs. Cruise conditions at 10,000 meters are investigated, with a constant Mach number of 0.84.

Five different model structures are tested for this experimentation setup with the same set of angle of attack, Mach number, spar location, and Young's modulus in order to make an accurate comparison. From these runs, lift-to-drag ratios are calculated and the histogram is plotted in Figure 8. The first observation is that although the wingbox model is a better representation of reality, its variance is higher than that of the tubular-spar models, and it is hypersensitive to certain inputs under some conditions. It is also seen that the predictions of the tubular-spar model generally lie between the predictions of the two different fidelities of the wingbox model.

Furthermore, the runtime statistics have a significant impact on how the results are interpreted, as well as on how many cases can realistically be considered. The overview is presented in Table 3. The mesh size is clearly the dominant factor in the average run time for a single case. An interesting observation is that the coarser wingbox model takes less time to run than the lower-fidelity tubular-spar model, and predicts a higher lift-to-drag ratio; the reason is that the wingbox model with the coarser mesh converged in fewer iterations than the tubular-spar model. Using a shared set of geometry definitions and parameters, as far as the corresponding fidelity level allows, showed that decreasing the mesh size resulted in less variance in the predicted SRQs, as expected. However, increasing the fidelity level comes with a new set of assumptions pertaining to the newly included subsystems or physical behavior. Therefore, one cannot definitively say that increasing the fidelity level decreases the parameter uncertainty without including the impact of the newly added parameters.

Table 3: OpenAeroStruct runtime and output variation statistics with respect to different model structures, on a 12-core 3.8 GHz machine with 32 GB RAM

Run type                 Std. deviation in L/D    Mean runtime [s]
Tubular spar - coarse    0.184                    6.79
Tubular spar - medium    0.169                    15.38
Wingbox - coarse         0.794                    2.5
Wingbox - medium         0.376                    20.7
Wingbox - fine           0.401                    76.67
Figure 8: Probability densities of CL/CD for five different model structures.

3.5 Leveraging Computer Experiments for Guiding Physical Experimentation (Note: build a scaled-down version of the aircraft and design the physical test campaign!)

3.5.1 Feasibility of the full-scale model

As discussed before, the construction and testing of full-scale vehicle models is almost never viable in the aerospace industry, especially in the earlier design phases. For this demonstration, a free-flying sub-scale test is pursued. The baseline experiment conditions are the same as in the computational-only experimentation, except for the appropriate scaling of parameters. Therefore, a scale that is optimal for a selected cost function needs to be found, subject to the constraints. For this use case, the following constraints are defined:

• Scale of the sub-scale model: n < 0.2.

• Mach number: 0.8 < Ma < 0.87. The effects of compressibility become much more dominant as Ma = 1 is approached, therefore the upper limit for the Mach number is kept at 0.87.

• Angle of attack: 0 < α < 10. Because not all similitude conditions will be met, the flight conditions may need to be simulated at a different angle of attack. This is normal practice in subscale testing [32].

• Young's modulus: E_S < 3 E_F. The Young's modulus of the model should be less than three times that of the full-scale design.

3.5.2 Optimize for similitude

For this optimization problem, a Sequential Least Squares Programming method is used. SLSQP is a numerical optimization algorithm that is particularly suited for problems with constraints [33]. It falls under the category of sequential quadratic programming (SQP) methods, which are iterative methods used for nonlinear optimization problems. The idea behind SQP methods, including SLSQP, is to approximate the nonlinear objective function using a quadratic function and solve a sequence of quadratic optimization problems, hence the term “sequential". In each iteration of the SLSQP algorithm, a quadratic sub-problem is solved to find a search direction. Then, a line search is conducted along this direction to determine the step length. These steps are repeated until convergence. One advantage of SLSQP is that it supports both equality and inequality constraints, which makes it quite versatile in handling different types of problems. It is also efficient in terms of computational resources, which makes it a popular choice for a wide range of applications.

Algorithm 1: Constrained optimization for finding physical experiment conditions
procedure Optimization(x)
     Define scaling parameters
     Define x = [n, α, Ma, h, E]
     Define constraints
     Initialize x with the initial guess [0.1, 0, 0.84, 10000, 73.1e9]
     while not converged do
          Evaluate cost function f(x) (Equation 7)
          Solve for gradients and search directions
          Run OpenAeroStruct optimization
          if failed case then
               Return a high cost function value
               Select a new x
          end if
     end while
     return x
end procedure

The algorithm used for this experiment is presented in Algorithm 1. For convenience, the altitude is taken as a proxy for air density. In the optimization process, the mass (including the fuel weight and its distribution) is scaled according to:

n_{mass} = \frac{\rho_F}{\rho_S}\, n^3    (4)

where ρ_F is the fluid density for the full-scale model, ρ_S the fluid density for the sub-scale model, and n is the geometric scaling factor [32]. Since aeroelastic bending and torsion are also of interest, the following aeroelastic parameters for bending (S_b) and torsion (S_t) must also be satisfied; they are defined as:

S_b = \frac{EI}{\rho V^2 L^4}    (5)

S_t = \frac{GJ}{\rho V^2 L^4}    (6)

These two parameters need to be duplicated in order to ensure the similitude of the inertial and aerodynamic load distributions for the same Mach number or scaled velocity, depending on the compressibility effects in the desired test regime [32]. The cost function is selected to be:

f(x) = \left| \frac{(C_L/C_D)_S - (C_L/C_D)_F}{(C_L/C_D)_F} \right|^2 + 30 \left| \frac{Re_S - Re_F}{Re_F} \right|^2 + 3000 \left| \frac{Ma_S - Ma_F}{Ma_F} \right|^2    (7)

where the lift-to-drag ratio, Reynolds number, and Mach number of the sub-scale model are quadratically penalized with respect to their deviation from the simulation results for the full-scale configuration. Because the magnitudes of the terms are vastly different, the second and third terms are multiplied by coefficients that scale their impact to the same level as the first term. For other problems, these coefficients offer flexibility to engineers: depending on how strongly certain deviations in the similarity parameters are penalized, the optimum scale and experiment conditions will change. In this application, the simulated altitude for the free-flying model is changed, rather than changing the air density directly.
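A minimal sketch of how such a constrained similitude search could be set up with scipy's SLSQP driver; evaluate_subscale stands in for the OpenAeroStruct run, the full-scale targets are illustrative numbers, and the bounds encode the constraints of Section 3.5.1. This is my sketch of the idea, not the authors' implementation:

import numpy as np
from scipy.optimize import minimize

# Illustrative full-scale targets for (L/D, Re, Ma).
LD_F, Re_F, Ma_F = 20.0, 3.0e7, 0.84
E_F = 73.1e9  # full-scale Young's modulus [Pa]

def evaluate_subscale(x):
    """Placeholder for an OpenAeroStruct run: returns (L/D, Re, Ma) of the sub-scale model."""
    n, alpha, mach, h, E = x
    rho = 1.225 * np.exp(-h / 8500.0)            # crude exponential atmosphere
    LD = 20.0 - 0.2 * alpha + 5.0 * (mach - 0.84)
    Re = rho * mach * 300.0 * n * 10.0 / 1.8e-5  # rough Re assuming a 10 m full-scale chord
    return LD, Re, mach

def cost(x):
    # Equation (7): quadratic penalties on relative deviations, with the paper's weights.
    LD_S, Re_S, Ma_S = evaluate_subscale(x)
    return (abs((LD_S - LD_F) / LD_F) ** 2
            + 30.0 * abs((Re_S - Re_F) / Re_F) ** 2
            + 3000.0 * abs((Ma_S - Ma_F) / Ma_F) ** 2)

x0 = [0.1, 0.0, 0.84, 10000.0, 73.1e9]           # [n, alpha, Ma, h, E]
bounds = [(0.01, 0.2), (0.0, 10.0), (0.80, 0.87), (0.0, 12000.0), (1e9, 3 * E_F)]
res = minimize(cost, x0, method="SLSQP", bounds=bounds)
print(res.x, res.fun)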

3.5.3 Interpret results

Without loss of generality, it can be said that the optimum solution may be unconstrained or constrained, depending on the nature and boundaries of the constraints. At this step, the solution needs to be verified as to whether it leads to a feasible physical experiment design. Probable causes of infeasibility are conflicts between the design variables, or the impossibility of matching them in a real experimentation environment. In such cases, the optimization process needs to be repeated with these constraints in mind. For this problem, however, the constrained optimum point is found to be:

• n = 0.2

• Ma = 0.86

• α = 10

• E_S = 219 GPa

• h = 0 m

• Re = 6.2 × 10^6

Furthermore, it can be noted that, due to the nature of the constrained optimization problem, the altitude for the sub-scale test was found to be sea level. While this is not completely realistic, it points to an experiment condition where high-altitude flight is not necessary. Finally, the Young's modulus for the sub-scale model is slightly below the upper threshold, which was three times the Young's modulus of the full-scale design. With this solution we can verify that solving a constrained optimization problem to find the experiment conditions is a valid approach, and it provides a baseline for other sub-scale problems as well.


4 Conclusion

In this work, a novel approach is introduced for the design of physical experiments, emphasizing the quantification of uncertainty in engineering models, with a specific focus on early-stage aircraft design. (Note: a small grammar error in the original here.) Sensitivity analysis techniques are used to identify specific computational and physical experimentation conditions, in order to tackle the challenge of mitigating epistemic uncertainty.

The findings indicate that this methodology not only facilitates the identification and reduction of critical uncertainties through targeted experimentation but also optimizes the design of physical experiments through computational effort. (Note: the author's understanding of databases may be limited; it feels like he does not know SQL, otherwise he could have gone further!) This synergy enables more precise predictions and more efficient resource utilization. Through a case study on a Blended-Wing-Body (BWB) aircraft concept, the practical application and advantages of the proposed framework are exemplified, demonstrating how subsequent fidelity levels can be leveraged for uncertainty mitigation purposes.

The presented framework for uncertainty management is adaptable to various design challenges. The study highlights the importance of integrating computational models to guide physical testing, fostering a more iterative and informed design process that saves resources. Of course, every problem and testing environment has its own challenges. Therefore, dialogue between all parties involved in model development and physical testing is encouraged.

Future research is suggested to extend the application of this methodology to different aerospace design problems, including propulsion systems and structural components. Additionally, the development of more advanced computational tools and algorithms could further refine uncertainty quantification techniques. With more detailed models and physics, and the integration of high-performance computers, it will be possible to see the impact of this methodology in later stages of the design cycle. The reduction of uncertainty in performance metrics can contribute to avoiding program risk: excessive cost, performance shortcomings, and delays.






n5321 | 2025-07-11 00:00

The history of Boeing

Legend and legacy : the story of Boeing and its people  

The preface is written rather sentimentally. I have recently become interested in Boeing. This book can only be read online.



n5321 | 2025-07-09 22:27

In-house CAD at the big manufacturers: Industry-Developed_CAD_CAM_Software


Between the late 1950s and mid-1980s, some major automotive and aerospace companies developed their own CAD/CAM programs. They wanted to leverage the programs to replace manual drafting and design practices to improve productivity, produce better designs, and accrue both technical and economic advantages. Frequently, these internal systems featured proprietary 3-D surface modeling, numerical control program generation, and engineering analysis capabilities. While shipbuilding, architecture, petrochemical, and electronics companies also moved from manual to CAD/CAM methods, they typically trailed the automotive and aerospace companies.


Main text:

Starting in the 1950s, automotive and aerospace companies purchased significant amounts of computer hardware. A number of the companies developed their own CAD/CAM programs to support the complexities and scale of their product development process. Not only were there few commercial CAD/CAM companies, but industrial companies also wanted to protect their intellectual property.

CAD/CAM programs supported drafting, 3-D surface modeling, numerical control (NC) programming, and/or engineering analysis. Drafting let users produce engineering drawings that documented designs and contained fabrication and assembly instructions. Some industrial companies, especially in the automotive and aerospace sectors, pushed the CAD envelope into 3-D surface modeling because surfaces define the external skins that drive automotive style and aerospace aerodynamics. Using the geometry CAD produced, CAM programs generated NC instructions for a new class of highly accurate machine tools. Finally, the geometry was essential input to complex engineering analysis programs (such as stress and aerodynamics).

This article begins with a general background and overview of the drafting, engineering, and manufacturing requirements in the automotive and aerospace industries. It then describes some of the technical differences between interactive CAD programs and other scientific and engineering programs in terms of performance, scale, and integration. The article then provides an overview of some of the program functions needed and why most of them were not available on a commercial basis.

This general picture is then followed by a more detailed discussion of CAD/CAM program examples from the two industries up through the mid-1980s. The automotive industry is covered first, with detailed examples from General Motors (GM), Ford, and Renault/Citroën. A similar discussion of the aerospace industry follows, with a focus on Lockheed, Northrop, McDonnell Douglas, Dassault Aviation, and Matra Datavision.

The article ends with a discussion of why and how these companies led the way in high-performance, large-scale, complex 3-D surface and NC programs. By contrast, early commercial CAD/CAM software companies focused on building programs that produced engineering drawings. In some cases, industrial companies purchased commercial programs to produce engineering drawings but relied on internal development for surface design and NC programming.

BACKGROUND

Like most forms of computing technology, CAD systems have evolved significantly. Some advances have been driven by computing technology itself, such as graphics processing units, personal computers, and cloud computing. Others have been driven by brilliant people developing and improving algorithms (such as finite elements for 3D stress analysis and nonuniform rational b-splines). Importantly, industrial companies realized that productivity improvements over manual techniques were possible using interactive graphics.

Automotive and aerospace companies have found benefits in developing and using highly interactive, computer-graphics-based CAD/CAM programs since the late 1950s. Computing helped automotive and aerospace companies move into the world of automated milling and machining with NC systems (CAM), analyzing smooth surfaces to define aerodynamically efficient and aesthetically pleasing external surfaces [computer-aided engineering (CAE)], and producing engineering drawings (CAD). Starting in the 1980s, other industries, such as shipbuilding, architecture, petrochemical plants, and manufacturing/assembly plants, adopted CAD/CAM methods more slowly.

Production-level automotive and aerospace CAD/CAM programs had features commercial companies introduced later. Early commercial offerings, as documented in David Weisberg’s excellent book [28], focused on generating engineering drawings. A few early industrial systems, such as Lockheed’s CADAM system, which became successful commercially [28, pp. 13-1–13-7], addressed engineering drawing, while other companies (such as Boeing and Ford) used commercial drafting systems.

Systems developed by industrial companies included not only 2-D engineering drawings but also CAM, engineering analysis, and 3-D surface design. By contrast, early commercial systems concentrated on producing 2-D engineering drawings. Daniel Cardoso Llach’s article [9] in this issue discusses how the 1950s CAM push to improve input definition for numerically controlled milling machines influenced some of the earliest CAD developments. Engineering analysis and surface-definition capabilities are discussed later in this article and the article by Kasik et al. [17].

Industrial and commercial systems differed for multiple reasons. First, CAD/CAM programs produce the complex, digital geometric representations and annotations needed to design, analyze, manufacture, and assemble products. Industrial companies wrote their own programs to protect their proprietary methods. Second, industrial companies chose to directly hire mathematicians, engineers, and programmers to build customized programs for 3-D surface design and engineering analysis. The programs reflected internal company practices and did not need to be as general as commercial offerings. A significant amount of the computer graphics techniques and mathematics implemented in industrial CAD/CAM programs still exist in today’s commercial offerings. Third, industrial companies were able to purchase mainframe computing. Mainframe performance was especially necessary for surface design and engineering analysis.

OVERVIEW

CAD/CAM programs produce two types of basic data. First, both automotive and aerospace require 3-D geometry to define their products. Second, they require text and 2-D/3-D geometry as input for engineering analysis (CAE) and instructions (such as finish, tolerances, and dimensions) for manufacturing and assembly. (Engineering analysis and CAE systems are beyond the scope of this article.)

Because the documentation medium is something flat (on paper, a computer screen, or microfilm), companies have long used 2-D engineering drawing techniques to represent 3-D geometry. The drawings represent 3-D objects as a collection of views (see Figure 1). Even if the CAD/CAM program defines geometry using 3-D coordinates, rendering techniques (such as shading, perspective, and dynamic rotation) are required to help the user understand the 3-D geometry on flat screens (see Figure 2).


FIGURE 1. Typical engineering drawing. (Source: https://pixabay.com/vectors/car-vehicle-draw-automobile-motot-34762/; used with permission.)


FIGURE 2. Annotated 3-D object. (Source: D. Kasik; used with permission.)

In short, CAD/CAM programs implement the necessary techniques to define, modify, and communicate the 2-D/3-D geometry and text needed to build complex products.

My Boeing job gave me a broad view of both commercial and industrial systems. As chief technical architect for Boeing’s internally developed CAD system [16], I was invited to numerous presentations from vendors and competitors and became acquainted with their internal details. Boeing CAD/CAM research and development work started in the late 1950s and ended in the late 1990s.

Academic systems are not included in this article because the most significant production program development work was being done by commercial CAD software companies and industrial companies. A number of academic research projects inspired CAD/CAM development nonetheless. The Massachusetts Institute of Technology [28, pp. 3-1–3-25] provided excellent late-1950s and early-1960s results focused on interactively generating 2-D geometry [24], 3-D geometry [15], and NC machine programming [23]. Although there were some academic contributions to solid modeling [27], [25], solids did not play a significant modeling role until Boeing used CATIA V3 and CATIA V4 to define the 777 with solids [21].

When assessing automotive and aerospace CAD programs, it is necessary to understand not only the data but also the user community:

  • those with technical expertise in one or more scientific, engineering, or manufacturing fields;

  • specialists who use interactive CAD systems to build 2-D engineering drawings or 3-D models based on specifications from technical experts;

  • those with programming skills who are willing to write their own software to solve problems not addressed to their satisfaction in commercial software.

The models and text guide the activities of downstream engineering, fabrication, assembly, and maintenance staff. Making the downstream more productive was a prime motivator for the development of early CAD programs. CAM programs started in the mid-1950s because NC machines required very lengthy programs that required part geometry and manufacturing instructions to fabricate individual parts [2]. Generating the geometry for NC programs led to the development of tools to make defining the geometry easier. Engineering programs (such as computational fluid dynamics and finite-element analysis) also relied on geometry that defined external surfaces for aerodynamic analysis, more detailed part forms for structural analysis, and many others.

CAD/CAM PROGRAM CHARACTERISTICS

On a technical level, interactive CAD/CAM programs differ from other scientific/engineering programs and transaction-oriented business systems because of the greater need for performance, scale, and integration. However, CAD/CAM programs and their users did not initially levy specific demands on processor speed, network speed, memory size, and data storage capacity. Instead, users tended to start with whatever technical facilities they could access and then later demanded more processor power, network bandwidth, memory, and data storage.

Performance Requirements

CAD/CAM interactive drafting and design performance must be close to real time to allow users to manipulate geometry (either 2-D or 3-D) efficiently and comfortably. Immediate response (measured as 0.5 seconds or less) [28, pp. 13-1–13-7] for simple operations makes the CAD/CAM program feel like it is responding in real time. Simple operations include sketching a line and rotating, moving, and zooming 3-D models.

By contrast, many other scientific/engineering programs are heavily compute-bound and can generally be run as batch programs. Even when able to be run interactively, users understand how complex the algorithms are and do not expect immediate results. Hence, the necessity for real-time interaction is relaxed.

Most interactive, transaction-oriented business systems do not require near-real-time interactive performance. They often feature form interfaces that require a person to fill out multiple fields prior to processing. Interaction must be fast enough to allow quick navigation from one text field to another. Once input is completed, the user starts transactions processed by a reliable database system and expects some delay.

The real-time interaction aspect of CAD/CAM programs meant that their implementation differed significantly from other types of online programs. Getting acceptable performance for CAD stressed interactive devices, operating systems and programming languages; data storage methods; and computing/network hardware.

Other forms of scientific computing generate or measure vast amounts of data, as in computational fluid dynamics or astronomy. When a person produces a CAD drawing or model, it is most often part of a larger collection of parts, subassemblies, and assemblies that ultimately define the entire product. A complex product, such as a commercial airplane or a building, requires thousands of drawings, hundreds of thousands of unique parts, and millions of individual parts. A configuration management system rather than a CAD system defines and controls interpart relationships and versions. (Configuration management systems are beyond the scope of this article.) The system must be able to handle all of the original data as versions evolve in addition to the data generated by CAE/CAM processes. All versions are stored to document design decisions and evolution.

The thousands of people involved in designing, analyzing, building, and maintaining a complex product put significant stress on the supporting software and hardware. It is critical for the software to keep track and organize all of the parts, drawings, analyses, and manufacturing plans. Tracking and organizing generally required centralized computing resources (yesterday’s mainframes and today’s cloud). Tracking and organizing CAD data on centralized mainframes was difficult enough. The problem got worse as personal computers started having enough computing power and networking resources to move design to a distributed computing environment. Although tracking and organizing mainframe-based data were difficult, and distributed work relied on detailed centralized tracking and organizing, making sure that a user was working on the latest version added complexity.

Scale: Product Complexity and Longevity

The problem of scale stresses computer systems across both size and time. Then, as computer performance improves, users tend to push the limits by attacking more complex problems, producing more design and simulation iterations, generating more numerous and more detailed models, and so on. For example, when Boeing developed the 777 during the late 1980s and early 1990s, each airplane was represented by a collection of models that contained about 300 million polygons. The fourth version of the Dassault Systèmes CATIA CAD system (CATIA V4) was the primary modeling tool. When the 787 started in 2004, the geometric models developed using CATIA V5 required more than 1 billion polygons. Although not necessarily as large in terms of absolute amounts of storage consumed as business systems, geometry data are structurally complex (with both intrapart and interpart relationships) and contain mostly floating-point values (for example, results of algorithms only come close to zero).

Scale is also measured in calendar time. CAD programs generate geometry and documentation data that represent products that could be in use for decades (such as aircraft and military aircraft) or more (such as power generators). CAD/CAM programs tend to have a shorter half-life than the product definition data they produce. This puts significant stress on data compatibility across vendors or across software versions from the same vendor. Different vendors’ implementations of the same type of entity could all too easily result in translation errors. New versions of a single vendor’s product could also result in translation errors.

Data Integration

CAD/CAM program integration has different variations [18]. Effective, active data integration allows different programs to read and potentially write geometry data directly without translation. For example, a finite-element analysis program requires geometry from which it builds a mesh of elements. Many analysis programs (such as NASTRAN) have been in existence for decades and still do not have direct access to CAD geometry models.

Having full data integration across all CAD/CAM/CAE programs is a complex and fragile endeavor that remains a challenge for multiple reasons. Different groups developed the programs and use different internal representational that require translation. For example, CAD-generated geometry must be translated into the nodes and elements that finite-element codes can process. Similarly, different organizations use different brands of CAD/CAM/CAE programs that also require translation. For example, Boeing used two different CAD systems (Computervision for the 757 and Gerber for the 767) that forced the company to develop its own translator.

The translation of geometric data has proven to be nearly as challenging as translating natural language. Programs often have unique data entities, different algorithms for the same function, and even different hardware floating-point representations. The differences mean that 100% accurate and precise translation among systems has yet to be realized.

INTERNAL AUTOMOTIVE AND AEROSPACE PROGRAM DEVELOPMENT

Three factors drove CAD/CAM adoption in the aerospace and automotive industries. First, companies observed that engineering drawing preparation was time-consuming for both an initial release and subsequent modifications. Interactive graphics obviated the need for drafting tables, drafting tools, and erasers. Drafters could generate and modify engineering drawings more quickly. Large plotters produced drawings on paper or mylar for certification agencies, such as the U.S. Federal Aviation Administration, for approval. Second, engineering analysis showed real promise in terms of virtually analyzing engineering characteristics, such as aerodynamics, structural integrity, and weight. Accurate geometry, especially external surface definitions, was required. Third, NC machines gained popularity and required efficient methods to create the geometry of individual parts.

Many automotive and aerospace companies developed their own programs. Unlike the early commercial CAD/CAM companies, which often relied on minicomputers, automotive and aerospace companies had enough mainframe resources to support a large user community and large amounts of data. A single mainframe could be upgraded to support tens and even hundreds of CAD/CAM users and provide acceptable interactive performance. In addition, aerospace and automotive companies hired the mathematical and programming talent needed to build CAD/CAM programs. The programs were tuned to internal corporate drafting standards, manufacturing, and surface-modeling techniques.

Commercial CAD software systems were able to penetrate a few large companies in the early days. For example, Boeing used them for 757 and 767 engineering drawings. However, it was more common for large aerospace and automotive companies to develop their own systems to give themselves a competitive advantage in surface modeling and NC programming. A few other large design and build companies in the shipbuilding, architecture, industrial design, process plant, and factory design industries also developed or used early CAD systems, like Fluor [20] and GE [2], but they were the exceptions. Automotive and aerospace led the way, but, in many cases, surface modeling and NC programming were the prime focus. Engineering drawing programs were developed primarily to save documentation labor.

Both commercial software companies and industrial companies developed dozens of CAD/CAM programs that had significant functional overlap. As is the case with other product classes, many competitors initially emerged. However, market evolution saw the many gradually coalesce into a few large players. The CAD/CAM business was no different. Today, a few large players (Autodesk, Dassault Systèmes, Parametric Technology, and Siemens) have acquired competitors or forced them into bankruptcy and now dominate the market [28, pp. 8-1–8-51, 13-1–13-7, 16-1–16-48, and 19-1–19-38].

The internal industrial programs stayed in production through the mid- to late 1980s. Commercial software companies started adding functions for 3-D solid and surface modeling and advanced NC programming. The commercial companies were able to spread development and maintenance costs over multiple clients, and industrial companies realized that commercial systems could provide cost savings.

The power of personal computers based on raster graphics devices also started matching and even exceeding minicomputer and workstation performance. Personal computers, which were much cheaper and offered another cost-savings opportunity, contributed to the demise of mainframe-based systems.

AUTOMOTIVE INDUSTRY

Companies like General Motors [19] and Renault [4] had strong research and development organizations and started recognizing CAD’s benefits in the late 1950s.


FIGURE 3. Coordinate measuring machine. (Source: https://www.foxvalleymetrology.com/products/metrology-systems/coordinate-measuring-machines/wenzel-r-series-horizontal-arm-coordinate-measuring-machines/wenzel-raplus-horizontal-arm-coordinate-measuring-machine/; used with permission.)

Automotive surfaces are often defined using full-scale clay models (see Figure 3). While manually sculpting new car body designs in clay was hard enough, manually transferring those shapes into computer-processable surfaces to support design, engineering, and manufacturing was even harder. Companies still use full-size coordinate measuring machines and numerical surface-fitting algorithms to do so.

The automotive industry especially cares about how a vehicle looks to a potential buyer. Mathematicians like Steve Coons (Massachusetts Institute of Technology, Syracuse, and Ford), Bill Gordon (GM and Syracuse), and Pierre Bézier (Renault) solved complex computational geometry problems both as academics and as employees, and their solutions became the basis for substantial improvements in surface modeling. The methods for defining surfaces, which are true 3-D objects, varied from company to company. For example, General Motors used full-scale coordinate measuring machines that captured height along the width and the length of a full-scale clay model of a proposed automobile, and Bill Gordon's surface algorithms accounted for height differences in the width and length measurements.
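
The blended-surface construction that Coons introduced is simple enough to sketch. The Python fragment below is a minimal illustration of a bilinearly blended Coons patch, with invented corner points and straight-line boundary curves; it shows only the underlying idea of blending boundary curves into an interior surface, not the proprietary GM or Ford algorithms.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch.

    c0(u), c1(u): boundary curves at v = 0 and v = 1
    d0(v), d1(v): boundary curves at u = 0 and u = 1
    The four curves must meet at the patch corners.
    """
    # Ruled surface between the two u-direction boundaries
    ruled_u = (1 - v) * c0(u) + v * c1(u)
    # Ruled surface between the two v-direction boundaries
    ruled_v = (1 - u) * d0(v) + u * d1(v)
    # Bilinear interpolation of the four corner points
    corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
               + (1 - u) * v * c1(0) + u * v * c1(1))
    # Boolean sum: add the ruled surfaces, subtract the double-counted corners
    return ruled_u + ruled_v - corners

# Example corners (invented) and straight-line boundary curves
P00 = np.array([0.0, 0.0, 0.0])
P10 = np.array([1.0, 0.0, 0.2])
P01 = np.array([0.0, 1.0, 0.1])
P11 = np.array([1.0, 1.0, 0.5])
c0 = lambda u: (1 - u) * P00 + u * P10     # v = 0 edge
c1 = lambda u: (1 - u) * P01 + u * P11     # v = 1 edge
d0 = lambda v: (1 - v) * P00 + v * P01     # u = 0 edge
d1 = lambda v: (1 - v) * P10 + v * P11     # u = 1 edge
print(coons_patch(c0, c1, d0, d1, 0.5, 0.5))   # interior point of the patch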

GM

GM started its CAD developments in the late 1950s [19]. The staff at GM Research (GMR) worked with IBM to develop time-sharing and graphics capabilities that were responsive enough to support interactive design. The original computer used was an IBM 704 (upgraded to a 7090 and then a 7094) running a Fortran language compiler. The program itself was called Design Augmented by Computers (DAC-1).

Not only did DAC-1 provide body-styling assistance, but it also forced the IBM–GM team to develop an early time-sharing strategy (the Trap Control System) in 1961. Time sharing itself was in its infancy in the 1960s and generally supported alphanumeric character terminals connected at low speeds (110 or 300 bits per second). Supporting interactive graphics performance required higher bit rates and put more pressure on the operating system. Earlier computers that supported graphics and light-pen interaction, such as Whirlwind, were dedicated to a single user.

IBM and GM formed a joint development project to develop a light-pen-driven interactive device to meet GM’s DAC-1 requirements. Even the choice of programming language was scrutinized. The Fortran compiler proved to be too slow, so DAC-1 moved to NOMAD, a customized version of the University of Michigan’s Michigan Algorithm Decoder compiler, in 1961–1962 [19].

Patrick Hanratty and Don Beque worked on the CAM systems that dealt with stamping the designs produced by DAC-1 between 1961 and 1964. Hanratty left GM in 1965 and went to a West Coast company, where he developed his design software. He later took his work and formed an independent company [2], [38, pp. 15-1–15-20].

DAC-1 was formally moved from GMR to a GM operating division in 1967, but that was not the end of GM CAD system development. Two different surface-modeling packages, Fisher Body and CADANCE, appeared in the 1970s. Each ran on IBM 360/370 machines using the PL/1 programming language, and most users had access to IBM 2250/3250 graphics terminals [17]. Some GM divisions reported between 50 and 100 DEC GT40 vector graphics terminals hooked up to a PDP 11/45, with an 11/05 handling communication to and from the mainframe. In the late 1970s, Fisher Body and CADANCE were merged into GM's Corporate Graphics System (CGS). The systems were based on GM proprietary surface geometry algorithms; Gordon surfaces [13] were particularly useful when fitting surfaces to scans of data collected from automobile clay body models.

GM developed its own mainframe-based solid-modeling system, GMSolid [7], in the early 1980s; it was eventually integrated into CGS. GMSolid used both constructive solid geometry (users combined solid primitives, like spheres, cylinders, and cones) and boundary representations (solid faces could contain arbitrary surfaces).
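
The constructive side of that distinction can be illustrated with a toy example. The sketch below treats solids as membership tests and combines them with Boolean operators into a CSG tree; the primitives and the "part" are invented for illustration and are not GMSolid's actual data structures (a production modeler also maintains the boundary representation).

```python
from typing import Callable, Tuple

Point = Tuple[float, float, float]
# A solid is represented implicitly by a membership test: is a point inside?
Solid = Callable[[Point], bool]

def sphere(cx: float, cy: float, cz: float, r: float) -> Solid:
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= r*r

def box(xmin, xmax, ymin, ymax, zmin, zmax) -> Solid:
    return lambda p: (xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
                      and zmin <= p[2] <= zmax)

# Boolean combinations build the CSG tree
def union(a: Solid, b: Solid) -> Solid:        return lambda p: a(p) or b(p)
def intersection(a: Solid, b: Solid) -> Solid: return lambda p: a(p) and b(p)
def difference(a: Solid, b: Solid) -> Solid:   return lambda p: a(p) and not b(p)

# A block with a spherical pocket cut out of one corner
part = difference(box(0, 2, 0, 1, 0, 1), sphere(2, 1, 1, 0.5))
print(part((0.5, 0.5, 0.5)))   # True: inside the block, outside the pocket
print(part((1.9, 0.9, 0.9)))   # False: inside the removed sphere
```

Point classification like this is enough for interference checks, but generating faces, edges, and drawings from such a tree is exactly why systems like GMSolid also carried a boundary representation.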

Ford

Ford developed a minicomputer-based 3-D system for multiple programs in the mid- to late 1970s. The Ford Computer Graphics System [6] used a Lundy HyperGraf refresh graphics terminal connected to a Control Data 18-10 M minicomputer. Ford modified the operating system to maximize performance. There was one terminal per minicomputer.

The programs supported product design with the Product Design Graphics System, which defined an auto body using Coons [12] or Overhauser [8] surfaces. Other functions included printed circuit board design, plant layout, die/tool element modeling, and NC. Ford used commercial CAD systems, such as Computervision and Gerber IDS, for drafting and design functions throughout its powertrain (engine, axle, transmission, and chassis). Ford used different minicomputer brands and graphics terminals for different programs: Computervision ran on its own proprietary minicomputer and a Tektronix direct-view storage tube; Gerber IDS ran on an HP 21MX and a Tektronix terminal; and the printed circuit board design program ran on a Prime 400 minicomputer and Vector General refresh graphics terminals.

Even though Ford worked in a distributed minicomputer-based (rather than mainframe-based) environment, the company used centralized servers to store, retrieve, and distribute its design files worldwide.

Renault and Citroën

Pierre Bézier popularized and implemented curve definitions for the smooth curves needed for auto bodies [10], building on work developed by Paul de Casteljau (a Citroën employee) in 1959. Bézier developed the nodes and control handles needed to represent and interactively manipulate Bézier curves via interactive graphics. He was responsible for the development of Renault's UNISURF system [5] for auto body and tool design; system development began in 1968, and UNISURF went into production in 1975 on IBM 360 mainframes.
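
The evaluation algorithm de Casteljau devised, repeated linear interpolation of the control points, takes only a few lines. The sketch below is a standard textbook formulation in Python with invented control handles, not Renault's or Citroën's actual code.

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t by repeated linear interpolation."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Each pass replaces n points with n-1 points interpolated at ratio t
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Cubic Bézier: two endpoints plus two "control handles"
handles = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print(de_casteljau(handles, 0.5))   # point at the middle of the curve
```

Dragging a control handle and re-running the evaluation is, in essence, the interactive manipulation Bézier built into UNISURF.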

Citroën developed two of its own systems (SPAC and SADUSCA) in parallel with Renault [30]. The systems were also based on de Casteljau’s work and ran on IBM 360 and 370 series computers and IBM 2250 graphics terminals.

AEROSPACE INDUSTRY

In the aerospace world, engineers started defining aerodynamically efficient surfaces shortly before the first powered flight at Kitty Hawk. The National Advisory Committee for Aeronautics (NACA) defined, tested, and published families of airfoils [5] in the late 1920s and 1930s. The idea was to assist aircraft development by predefining the aerodynamic characteristics of wing cross sections (see Figure 4).


FIGURE 4. Sample NACA airfoils. (Source: Summary of Airfoil Data, NACA Report 824, NACA, 1945; used with permission.)

Aerospace engineers must design surfaces that balance aerodynamic performance, structural integrity, weight, manufacturability, fuel efficiency, and other parameters. Industrial aerospace CAD systems adopted 3-D surface-definition technology that was consistent with their company surface-lofting practices and could produce surfaces that could be modified relatively easily, represented conics precisely, and exhibited C2 continuity. (C2 means continuous in the second derivative, an advantage when doing aerodynamic analysis.)
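
In parametric terms (the notation here is ours, not the article's), two adjoining curve segments r1(t) and r2(t), each parameterized on [0, 1], join with C2 continuity when position and the first two derivatives agree at the joint:

```latex
\mathbf{r}_1(1) = \mathbf{r}_2(0), \qquad
\mathbf{r}_1'(1) = \mathbf{r}_2'(0), \qquad
\mathbf{r}_1''(1) = \mathbf{r}_2''(0)
```

The analogous condition applies across surface patch boundaries, which is why curvature continuity, not just tangency, matters for aerodynamic analysis.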

Aerospace companies tried to use automotive surface-modeling methods, but they did not work particularly well. Automobile companies care more about the attractiveness of smooth surfaces, although aerodynamics has become more important as fuel-efficiency demands have increased. Aerospace design is driven by aerodynamic efficiency and demands C2 continuity for analysis, requirements that automotive surface methods did not handle well. Nonuniform rational B-spline (NURBS) surfaces became, and remain, the preferred aerospace surface-modeling method [26].
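
NURBS curves combine B-spline basis functions with weights, which is what lets them represent conics exactly. The sketch below is a standard Cox–de Boor evaluation in Python, with an invented quarter-circle example whose middle weight of √2/2 reproduces the arc exactly; production systems use far more careful numerics than this recursion.

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def nurbs_point(ctrl, weights, knots, p, t):
    """Evaluate a 2-D NURBS curve: a rational combination of B-spline basis functions."""
    num_x = num_y = den = 0.0
    for i, ((x, y), w) in enumerate(zip(ctrl, weights)):
        b = bspline_basis(i, p, t, knots) * w
        num_x += b * x
        num_y += b * y
        den += b
    return (num_x / den, num_y / den)

# Quadratic NURBS quarter circle: the rational weights make the conic exact
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, 2 ** 0.5 / 2, 1.0]
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(ctrl, weights, knots, p=2, t=0.5))  # ~ (0.7071, 0.7071)
```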

Lockheed

Lockheed focused on producing engineering drawings and NC programming, not surface modeling; the goal was to speed up both processes. Lockheed-California developed computer-aided drafting software internally to run on IBM mainframes and 2250/3250 graphics terminals [28, pp. 13-1–13-7]. Development started in 1965 as “Project Design” to create 2-D engineering drawings quickly, and Project Design was rechristened CADAM in 1972. An acceptable response time was deemed to be 0.5 seconds or less. CADAM operators were often judged by how fast they seemed to be working, even if little was actually happening; CADAM had many relatively short-duration functions that made operators appear busy.

Project Design drawings were used to drive NC machines as early as 1966. Use of the software spread quickly inside Lockheed, which established a separate business to sell CADAM in 1972. The new business started sending CADAM source code to others in 1974, including IBM Paris, Lockheed Georgia, and Lockheed Missile and Space in Sunnyvale, California. Eventually, IBM started a successful effort to offer CADAM (acquired from Lockheed) to drive mainframe sales.

Additional CAD development occurred at Lockheed Georgia in 1965 [28, pp. 4-3–4-4]. Spearheaded by Sylvan (Chase) Chasen, the software ran on CDC 3300 computers and Digigraphics terminals. The purpose was more to assist in NC program path planning than to create engineering drawings.

Northrop

Northrop military program funding often drove the development of aerospace company systems. Northrop Computer-Aided Design and Northrop Computer-Aided Lofting (NCAD/NCAL), which dates from the mid-1970s, is an excellent example [1]. Northrop based the design system for the B-2 Spirit stealth bomber on NCAD/NCAL. Other Northrop military programs and Northrop subcontractors used NCAD/NCAL for 3-D surface modeling and CADAM for drafting.

Northrop used funds from the B-2 program to develop NCAD/NCAL [14] rather than use similar systems from other contractors. NCAD/NCAL ran on IBM mainframes interconnected with classified networks. Importantly, the mainframes and networks crossed multiple corporate boundaries, including Boeing, Hughes Radar, GE Engines, and Vought. All partners had to use NCAD/NCAL and provide their own IBM mainframes. This approach simplified data integration and transfer issues and resulted in the first military aircraft fully designed on a CAD system. The B-2 program started in the early 1980s, and its first flight occurred 17 July 1989. The airplane is still in service today.

McDonnell Douglas

McDonnell Douglas implemented two distinctly different CAD systems [22]. The first, Computer Aided Design and Drafting (CADD), was developed in 1965 by the McDonnell Aircraft Company. It was initially a 2-D design and drafting system that was extended to 3-D in 1973, integrated with NC software in 1976 and sold commercially beginning in 1977.

McDonnell Douglas Automation (McAuto), the computer services business unit, purchased Unigraphics (UG) from United Computing in 1976. McAuto rewrote and expanded the United Computing system software based on a license to Hanratty’s ADAM software. The first production use of UG occurred at McDonnell Douglas in 1978. Bidirectional data exchange between the two was not completed until 1981 even though both were in production use.

The two systems’ implementations differed substantially. CADD ran on IBM mainframes and its geometry was based on parametric cubic polynomials and evaluators. Graphics support was primarily the IBM 2250, a 2-D-only device. Evans and Sutherland (E&S) [11] sold a number of Multi-Picture Systems (MPSs) as a 2250 alternative. The MPS featured hardware for 3-D transformations, which had the potential to offload the mainframe. E&S modified its controller to allow two terminals to share a single controller through a device called a Watkins box (named after the designer and developer, Gary Watkins). The Watkins box was attached to a small DEC minicomputer, which handled communications to and from the mainframe. This configuration provided enough savings over the 2250/3250 to justify the purchase of dozens of E&S terminals.

UG ran on multiple brands of midrange minicomputers, including DEC PDP and VAX systems as well as the Data General S/250, S/230, and S/200. UG derived its geometry from the ADAM system. Early versions of ADAM relied on canonical forms and special case geometry algorithms. Interactive graphics for UG was provided on Tektronix storage-tube devices.

Dassault Aviation

Dassault Aviation started its journey in computer graphics to help smooth curve and surface data in the late 1960s. In 1974, the company became one of the first licensees of Lockheed’s CADAM software for 2-D drafting.

Designing in 3-D took a different route. In 1976, Dassault Aviation acquired the Renault UNISURF program and its Bézier curve and surface capability to complement CADAM.

CATIA itself started in 1978 as the Computer-Aided Tridimensional Interactive (CATI) system. According to Francis Bernard [3], CATI was extended to surface modeling to generate geometry that would be easier to machine, a capability particularly important for wind tunnel models. CATI became CATIA in 1981, when Bernard convinced Dassault Aviation to commercialize the system through the Dassault Systèmes spinout. As both an internal and a commercial product, CATIA ran on IBM mainframes with attached IBM 2250/3250 and IBM 5080 graphics terminals. The early underlying geometry forms included Bézier curves and surfaces and grew to include canonical solid definitions and constructive solid geometry operations. Later versions ran on IBM RS/6000s and other Unix-based workstations.

Matra Datavision Euclid

French aerospace company Matra's Euclid system (not to be confused with the C-Side Subtec Euklid system for NC machining) began as a modeler for fluid flow. Originally developed by Brun and Theron at the Computer Science Laboratory for Mechanics and Engineering Sciences in Orsay, France, Euclid was sold by the French startup Datavision starting in 1979. Matra, a French conglomerate with aerospace components, bought the controlling interest in Euclid in 1980, and Dassault Systèmes purchased the software in 1998.

CONCLUSION

Even though internally developed CAD/CAM programs are unusual today, a number of commercial systems had their roots in early industrial programs. Internally developed programs had direct access to user communities and were able to develop math software that matched company practice. The interactive methods and mathematics influenced other industries, such as electronic games and animated films.

Early commercial CAD/CAM programs were packaged as turnkey systems. Each turnkey system supported only a few concurrent users at relatively slow speeds. Industrial companies, which had to support hundreds and even thousands of users, had the computer power (generally large mainframes), the talent (mathematicians and programmers), and the money to build their own proprietary CAD/CAM programs. By the late 1980s, however, there was not enough of a competitive advantage to continue development and support. At that point, commercial companies had developed enough manufacturing, surface design, and other capabilities that internal development and maintenance were no longer cost efficient.

Industrial companies experienced these requirements firsthand by developing their own CAD/CAM programs. Because of that experience, industrial companies can clearly articulate to today's commercial vendors the problems CAD/CAM programs have with performance, scale, and integration.

As noted earlier, the basic performance and integration requirements for CAD/CAM programs are essentially the same today as in the early days. Scale adds another zero or two to the left of the decimal point as CAD/CAM data quantities grow.

The mainframes and minicomputers of 1960–1985 were supplanted by workstations that cost significantly less. Workstations were overtaken by the ever-increasing compute power and the ability to network personal computers in the mid-1990s. Computing today has a turn-the-clock-back feel as cloud systems are gaining momentum, and current CAD systems are being delivered via the cloud. As was the case with early mainframes, cloud computing centralizes processing and data resources. Users take advantage of high-performance networks and access cloud systems remotely via lower cost PCs. When CAD software is executed in the cloud, license sharing becomes feasible and software updates occur remotely.

When applied to CAD, cloud computing faces the same scale and performance issues present in the early days with centralized mainframes and minicomputers. The cloud scales well from a raw processing perspective: it is easy to add more processing power, and servers are generally in the same physical location, which decreases data transfer costs. What is hard for cloud computing is satisfying CAD systems' requirement for near-real-time interactive performance, especially at significant distances. Many cloud services are based in data centers that are tens, hundreds, and even thousands of miles away, and such distances make near-real-time interactive performance difficult to achieve. Interactive performance therefore continues to force many CAD/CAM applications to run in a distributed manner: the application runs on a PC near the user, and the data are stored on a configuration-managed file server. When requested, the data are checked out from the server, downloaded to the PC, processed locally, and checked back in.
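
That check-out/check-in cycle can be sketched as follows. This is a minimal illustration under invented paths, file names, and a crude lock file; real installations rely on vendor-specific PDM/PLM APIs rather than raw file copies.

```python
import shutil
from pathlib import Path

# Hypothetical locations standing in for a configuration-managed file server
# and the engineer's local PC.
SERVER_VAULT = Path("/mnt/pdm_vault")
LOCAL_CACHE = Path.home() / "cad_workspace"

def check_out(part_name: str) -> Path:
    """Copy the managed master file to the local PC and mark it locked."""
    src = SERVER_VAULT / part_name
    dst = LOCAL_CACHE / part_name
    LOCAL_CACHE.mkdir(exist_ok=True)
    shutil.copy2(src, dst)
    (SERVER_VAULT / (part_name + ".lock")).touch()   # crude lock against concurrent edits
    return dst

def check_in(part_name: str) -> None:
    """Copy the locally modified file back to the vault and release the lock."""
    shutil.copy2(LOCAL_CACHE / part_name, SERVER_VAULT / part_name)
    (SERVER_VAULT / (part_name + ".lock")).unlink(missing_ok=True)

# Typical cycle (hypothetical part name): check out, edit locally in the CAD
# application, then check back in.
# local_copy = check_out("wing_rib_042.prt")
# ... open local_copy in the CAD system, modify, save ...
# check_in("wing_rib_042.prt")
```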

As AI has become more popular, using it to improve CAD/CAM user productivity is also being pursued. There has been significant research into design optimization and automated documentation production, with limited success to date. Design optimization relies on one or more engineering analyses to tweak the geometry; not only are multiple runs needed, but the suggested optimization can introduce changes to the geometry, such as folds and tears, that invalidate the model. Automated documentation production tries to create the exploded-view drawings typical of a parts catalog from the raw geometry. This seems straightforward because assembly components can easily be moved along an x-, y-, or z-axis. The issue is that an exploded view in a parts catalog shows a disassembly/reassembly sequence, and automated disassembly is a task that has been researched without success for decades.
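
The easy half of that problem, translating components along an axis, really does take only a few lines; the sketch below uses invented part names, offsets, and geometry. What it deliberately does not attempt is choosing the order of the parts so that the result reads as a valid disassembly sequence, which is the unsolved part.

```python
import numpy as np

def explode(components, direction=(1.0, 0.0, 0.0), spacing=50.0):
    """Translate each component along one axis by an increasing offset.

    components: list of (name, vertices) pairs, vertices as an (n, 3) array.
    The translation is trivial; ordering the parts into a disassembly
    sequence is the hard, unautomated step.
    """
    d = np.asarray(direction)
    exploded = []
    for k, (name, verts) in enumerate(components):
        exploded.append((name, np.asarray(verts, dtype=float) + k * spacing * d))
    return exploded

assembly = [
    ("housing", [[0, 0, 0], [10, 10, 10]]),
    ("gasket",  [[0, 0, 0], [10, 10, 1]]),
    ("cover",   [[0, 0, 0], [10, 10, 2]]),
]
for name, verts in explode(assembly):
    print(name, verts[0])   # each part shifted 50 units farther along +x
```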

CAD/CAM programs are still evolving with significant amounts of work still needed. The field remains as pertinent and as challenging as it was in the early years.


n5321 | 2025年7月7日 22:25

Boeing R&D

Recently I have become interested in Boeing's R&D system.

As Condit said, “Designing the airplane with no mock-up and doing it all on computer was an order of magnitude change.”


n5321 | 2025年7月7日 22:24