The Nature of Mathematical Modeling

Reading about systems engineering from the NASA side felt like a discussion of mathematical modeling, so I picked up The Nature of Mathematical Modeling, and it turns out the methodology of CAE lives in mathematical modeling too!

《The Nature of Mathematical Modeling》

Analytical solutions, numerical solutions, finite elements, partial differential equations, and so on. These are all CAE concepts!

This is a book about the nature of mathematical modeling and about the kinds of techniques that are useful for modeling (both natural and otherwise). It is oriented towards simple, efficient implementations on computers. (So our goals are aligned!)

The text has three parts. The first covers exact and approximate analytical techniques (ordinary differential and difference equations, partial differential equations, variational principles, stochastic processes) (the analytical-solution part); the second, numerical methods (finite differences for ODEs and PDEs, finite elements, cellular automata) (the numerical-analysis part); and the third, model inference based on observations (function fitting, data transforms, network architectures, search techniques, density estimation, filtering and state estimation, linear and nonlinear time series) (the experimental-solution part).

This classification is essentially the same as that Altair book, The Practice of...

Each of these essential topics would be the worthy subject of a dedicated text, but such a narrow treatment obscures the connections among old and new approaches to modeling. By covering so much material so compactly, this book helps bring it to a much broader audience. Each chapter presents a concise summary of the core results in an area, providing an accessible introduction to what they can (and cannot) do, enough background to use them to solve typical problems, and then pointers into the specialized research literature. The text is complemented by a Website and extensive worked problems that introduce extensions and applications. This essential book will be of great value to anyone seeking to develop quantitative and qualitative descriptions of complex phenomena, from physics to finance.

Professor Neil Gershenfeld leads the Physics and Media Group at the MIT Media Lab and codirects the Things That Think research consortium. His laboratory investigates the relationship between the content of information and its physical representation, from building molecular quantum computers to building musical instruments for collaborations ranging from Yo-Yo Ma to Penn & Teller. He has a BA in Physics from Swarthmore College, was a technician at Bell Labs, received a PhD in Applied Physics from Cornell University, and he was a Junior Fellow of the Harvard Society of Fellows.


n5321 | July 14, 2025, 23:18

A Methodology to Identify Physical or Computational Experiment Conditions for Uncertainty Mitigation

The best paper I have read recently! Written by a Turkish author with work experience at Bosch who went into academia, earned a PhD at Georgia Tech in the US, and worked on NASA projects along the way. Work written by someone with both an engineering and an academic background really does read differently.

It rests on a few basic points: engineers find it hard to grasp the physical relationships inside a product completely and accurately. That is the problem he wants to solve: the engineer's knowledge problem!

Simulation analysis is routine in the aerospace industry, and he has experience there, so he describes the problem in the full vocabulary of that industry. I completely agree with the overall approach, and in essence it does not go beyond what I already understood. Quite a few people still spend their days staring at element types, degrees of freedom and the like; they have not yet reached the practical stage. That said, he is essentially still using probability theory to achieve the goal.

Put simply: simulation results are easy to get wrong, or you could even say they are simply wrong, yet they are useful. So where do the errors come from? The paper's classification is good: one source is manufacturing, the other is knowledge. In theory you can never build a simulation model that is a perfect physical mirror of the real thing. First there is the manufacturing side, which carries a lot of randomness and tolerances. Second, your reconstruction of the real world is biased: scientific relationships are mathematical expressions obtained by abstracting reality, and it is simply not possible to restore all of those expressions and apply them to engineering practice. His explanation of why simulations are still useful is also sound: the simulation result ends up agreeing with the result in actual use. How do you get there? A lot of processing of the model is needed in between. How do you process it? You try things! This is a substantial supplement to that Swanson quote, and basically consistent with my own thinking. In short: trial and error on the computer until the response relationship between design parameters and performance results is basically understood, then process the model so that the computer experiment finally matches the physical experiment, achieving predictive value!

But people in traditional industries do not understand IT well enough. If he knew databases, knew SQL, and added a bit of data visualization on top, he would be far more formidable!

Complex engineering systems require the integration of sub-system simulations and the calculation of system-level metrics to support informed design decisions. This paper presents a methodology for designing computational or physical experiments aimed at mitigating system-level uncertainties. (So engineering solutions are counted here as two categories, computational and physical experiments; analytical, numerical and experimental solutions would make three.)

The approach is grounded in a predefined problem ontology, where physical, functional, and modeling architectures (these three terms alone have already filtered out a lot of people!) are systematically established. (The framework of the simulation analysis is thus fixed: purpose, domain and model are all in place!) By performing sensitivity analysis using system-level tools, critical epistemic uncertainties can be identified. (This is what I call the process parameters!) Based on these insights, a framework is proposed for designing targeted computational and physical experiments to generate new knowledge about key parameters and reduce uncertainty. (This framework is of some interest! How will it differ from mine?)


 The methodology is demonstrated through a case study involving the early-stage design of a Blended-Wing-Body (BWB) aircraft concept, illustrating how aerostructures analyses can support uncertainty mitigation through computer simulations or by guiding physical testing. The proposed methodology is flexible and applicable to a wide range of design challenges, enabling more risk-informed and knowledge-driven design processes.

1 Introduction and Background

The design of a flight vehicle is a lengthy, expensive process spanning many years. With the advance in computational capabilities, designers have been relying on computer models to make predictions about the real-life performance of an aircraft. (This is the only industry completely covered by CAE, and it was covered very early!) However, the results obtained from computational tools are never exact due to a lack of understanding of physical phenomena, inadequate modeling and abstractions in product details [1, 2, 3]. (The accuracy question everyone cares about! This differs from the solvers, and from the CAE vendors' standard lines; it is the engineer's perspective: poorly understood physics and inaccurate models lead to wrong simulation results!) The vagueness in quantities of interest is called uncertainty. The uncertainty in simulations may lead to erroneous predictions regarding the product, creating risk. (If the simulation is not done well, the uncertainty in it creates risk: the engineer's core concern!)

Because most of the cost is committed early in the design [4], any decision made on quantities involving significant uncertainty may result in budget overruns, schedule delays and performance shortcomings, as well as safety concerns. (The risk vocabulary that comes with blind-men-and-the-elephant design is well put: over budget, behind schedule, underperforming, and unsafe.)

Reducing the uncertainty in simulations earlier in the design process will reduce the risk in the final product. The goal of this paper is to present a systematic methodology to identify and mitigate the sources of uncertainty in complex, multi-disciplinary problems such as aircraft design, with a focus on uncertainties due to a lack of knowledge (i.e., epistemic).

The name I would give this is: how to obtain the God's-eye view, omniscience!

1.1 The Role of Simulations in Design

Computational tools are almost exclusively used to make predictions about the response of a system under a set of inputs and boundary conditions [5].

Engineers really should throw away the set of talking points that the CAE vendors established in the 1980s! This statement has a bit of a first-principles flavor: CAE predicts whether the engineering solution matches the design intent!
At the core of computational tools lies a model, representing the reality of interest, commonly in the form of mathematical equations that are obtained from theory or previously measured data. (Mathematical modeling.) How a computer simulation represents a reality of interest is summarized in Figure 1. Development of the mathematical model implies that there exists some information about the reality of interest (i.e., a physics phenomenon) at different conditions, so that the form of the mathematical equation and the parameters that have an impact on the results of the equation can be derived. The parameters include the coefficients and mathematical operations in the equations, as well as anything related to representing the physical artifact, the boundary and initial conditions, and the system excitation [6]. (A huge number of parameters is involved; get one parameter wrong and the result is wrong!)

A complete set of equations and parameters are used to calculate the system response quantities (SRQ). Depending on the nature of the problem, the calculation can be straightforward or may require the use of some kind of discretization scheme.

If the physics phenomenon is understood well enough that the form of the mathematical representation is trusted, a new set of parameters in the equations (e.g., coefficients) may be sought in order to better match the results with an experimental observation. This process is called calibration. (Essentially the term I had picked, validation, was not accurate enough; this word describes it better!) With the availability of data on similar artifacts in similar experimental conditions, calibration enables existing simulations to be used to make more accurate predictions with respect to the measured "truth" model. (Nothing new here, though!)
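As a concrete illustration of calibration in this sense, here is a minimal sketch, assuming a hypothetical one-parameter model form and made-up measurements, that tunes the coefficient with SciPy so the model output better matches the observed data:

import numpy as np
from scipy.optimize import curve_fit

def model(x, c):
    # Trusted functional form with one uncertain coefficient c
    # (a hypothetical quadratic response, not the paper's model).
    return c * x**2

x_meas = np.array([1.0, 2.0, 3.0, 4.0])    # experimental conditions (made up)
y_meas = np.array([2.1, 8.3, 18.2, 31.9])  # measured responses (made up)

# Calibration: find the coefficient that minimizes the squared mismatch
# between model predictions and the measurements.
c_fit, c_cov = curve_fit(model, x_meas, y_meas, p0=[1.0])
print(f"calibrated coefficient: {c_fit[0]:.3f} +/- {np.sqrt(c_cov[0, 0]):.3f}")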

Figure 1: Illustration of a generic computer simulation and how System Response Quantities are obtained. Adapted from [6].

Most models are abstractions of the reality of interest, as they do not consider the problem at hand in its entirety but only in the general aspects that yield useful conclusions, without spending time on details that do not significantly impact the information gain. (This is another way of saying "every model is wrong, but some are useful"!) Generally, models that abstract fewer details of the problem are able to provide more detailed information with better accuracy, but they require detailed information about the system of interest and the external conditions to work with. Such models are called high-fidelity models. Conversely, low-fidelity models abstract larger chunks of the problem and have a quick turnaround time. They may be able to provide valuable information without much effort going into setting up the model, and unlike high-fidelity models they can generally do so at very low computational cost. The choice of the fidelity of the model is typically left to the practitioner and depends on the application.

Read this alongside a quote from Swanson:

Let me go slightly sideways on that. When I first started teaching ANSYS, I said always start small. If you're going to go an auto crash analysis, you have one mass rep a car and one spring. Understand that? Then you can start modifying. You can put the mass on wheels and allow it to rotate a little bit when it hits a wall, and so on. So each, each new simulation should be a endorsement of what's happened before. Not a surprise. Yeah, if you get a surprise and simulation, you didn't understand the problem. Now, I'm not quite I think that relates to what your question was. And that is basically start at the start small and work your way up. Don't try to solve the big problem without understanding all the pieces to it.
Just a different expression of the same idea.

A few decades ago, engineers had to rely on physical experiments to come up with new designs or to tune them, as their computational capabilities were insufficient. (Design by building and testing samples.) Such experiments, which include artifacts and instrumentation systems, are generally time consuming and expensive to develop. In the context of air vehicle design, design is an inherently iterative process and these experiments would need to be rebuilt and reevaluated. Therefore, while they are effective for evaluation purposes, they cannot be treated as parametric design models unless they have been created with a level of adjustability. With the advance of more powerful computers and their widespread use in all industries (even ANSYS has sold itself; clearly the industry has not penetrated much further), engineers turned to simulations to generate information about the product they are working on. Although detailed simulations can be prohibitively expensive in terms of work-hours and computation time, the use of computer simulations is typically cheaper and faster than following a build-and-break approach for most large-scale systems.

As the predictions obtained from simulations played a larger part in the design, concepts such as "simulation-driven design" have become more prominent in many disciplines [7]. If the physics models are accurate, constructing solution environments with very fine grids to capture complex physics phenomena accurately becomes possible. (An accurate mesh is only one factor in accurate CAE results, and it mainly affects convergence; build an accurate model! Basically: accurate model, accurate simulation results!) The cost of making a change in the design increases exponentially from initiation to entry into service [8]. If modeling and simulation environments that accurately capture the physics are used in the design loop, it will be possible to identify necessary changes earlier. (A correct platitude.) Because making design changes later may require additional changes in other connected sub-systems, it will lead to an increase in the overall cost [9].

1.2 Modeling Physical Phenomena

When the task of designing complex products involves the design of certain systems that are unlike their counterparts or predecessors, the capability of known physics-based modeling techniques may come up short. (The value of experience: for a brand-new product it is hard to build an accurate analysis model!) For example, when the goal is to make predictions about a novel aircraft configuration, a gap is to be expected between the simulation predictions and the measurements from the finalized design (simulation and test results will basically always differ: a quantitative difference!). If the tools were developed for traditional aircraft concepts (e.g., tube-and-wing configurations), there might even be a physics phenomenon occurring that will not be expected or captured (even a qualitative difference, the source of inaccurate predictions!). Even if there is none, the accuracy of the models in such cases is still to be questioned. (Even if the simulation matches the physical test result, the model may still be inaccurate! Anyone with real simulation experience knows this is no exaggeration.) There are inherent abstractions pertaining to the design, and the best way to quantify the impact of variations in the quantities of interest caused by changing the geometric or material properties is to make a comparative assessment with respect to historical data. (Performance design needs PDM.) However, in this case, historical data simply do not exist. (There is no known data to use as a reference.)

Because of a lack of knowledge or inherent randomness, the parameters used in the modeling equations, the boundary/initial conditions, and the geometry are inexact, i.e., uncertain. (Fundamentally, uncertainty cannot be eliminated; he attributes it to two sources: manufacturing and ignorance. Industries without test labs cannot do CAE calibration.) The uncertainty in these parameters and in the model itself (model parameters are inherently uncertain!) manifests itself as uncertainty in the model output. As mentioned before, any decision made on uncertain predictions will create risk in the design. (This is what John A. Swanson, the ANSYS founder, meant by "no surprises in analysis": they essentially treat CAE as a quantitative tool; the trend is understood, but exactly where the critical point lies is not, and that is where accurate judgment, and therefore CAE, is needed.) In order to tackle the overall uncertainty, the sources of individual uncertainties must be meticulously tracked (the real reason why "some are useful": the risk is known, like the lymphatic system in the body!) and their impact on the SRQs needs to be quantified. By studying the source and nature of these constituents, they can be characterized and the necessary next steps to reduce them can be identified.

In a modeling and simulation environment, every source of uncertainty has a varying degree of impact on the overall uncertainty. (Understand the relationships.) Each can then be addressed in a specific way depending on its nature. If a source is present because of a lack of knowledge (a cognition problem, i.e., epistemic uncertainty), it can by definition be reduced [10]. The means to achieve this goal can be a study or experiment designed to generate new information about the model or the parameters in question. In this paper, the focus is on how to design a targeted experiment for uncertainty reduction purposes (here he promises a new "how"!). Such experiments are not a replication of the same experimental setup in a more trusted domain, but a new setup tailored specifically for generating new knowledge pertaining to that source of uncertainty.

Reduce the impact of ignorance!

An important consideration in pursuing targeted experiments is the time and budget allowed by the program. If a lower-level, targeted experiment to reduce uncertainty is too costly, or carries even more inherent unknowns due to its experimental setup, it might be undesirable for the designers to pursue. Therefore, these lower-level experiments must be analyzed on a case-by-case basis, and their viability needs to be assessed. There will be a trade-off between how much reduction in uncertainty can be expected and the cost of designing and conducting a tailored experiment. From a realistic perspective, only a limited number of them can be pursued. Considering the number of simulations used in the process of designing a new aircraft, trying to validate the accuracy of every parameter or assumption of every tool would lead to an insurmountable number of experiments. For the ultimate goal of reducing the overall uncertainty, the sources of uncertainty that have the greatest impact on the quantities of interest must be identified. (The value of engineering is achieving the purpose, not doing science; roughly understanding the problem and keeping it under control is enough!) Some parameters that have relatively low uncertainty may have a great effect on a response, whereas another parameter with great uncertainty may have little to no effect on the response. (This is another way of talking about tolerances.)


In summary, the train of thought that leads to experimentation to reduce the epistemic uncertainty in modeling and simulation environments is illustrated in Figure 2. If the physics of the problem are relatively well understood, then a computational model can be developed. (A mathematical model built on a clear understanding of the physical relationships is the foundation!) If not, one needs to perform discovery experiments, simply to learn about the physics phenomenon (solving the knowledge problem!) [11]. Then, if the results of this model are consistent and accurate, it can be applied to the desired problem. (Simulation results agreeing with test results!) If not, the aforementioned lower-level experiments can be pursued to reduce the uncertainty in the models. The created knowledge should enable the reduction of uncertainty in the parameters or the models, reducing the overall uncertainty. (Only by understanding the relationships clearly can the uncertainty be controlled!)


1.3 Reduced-Scale Experimentation (the aerospace industry's reduced-scale prototype testing: also a form of prediction!)

An ideal, accurate physical test would normally require building duplicates of the system of interest in the corresponding domain so that the obtained results reflect actual conditions. Although this poses little to no issue for computational experiments (barring inherent modeling assumptions), as the scale of the simulation artifact does not matter for a computational tool, it has major implications for the design of ground and flight tests. Producing a full-scale replica of the actual product with all the details required for a certain simulation is expensive and difficult in the aerospace industry. Although full-scale testing is necessary in some cases and for reliability/certification tests [12], it is desirable to reduce the number of such tests. Therefore, engineers have always tried to duplicate the full-scale test conditions on a reduced-scale model of the test artifact.

The principles of similitude were laid out by Buckingham in the early 20th century [13, 14]. Expanding on Rayleigh's method of dimensional analysis [15, 16], he proposed the Buckingham Pi theorem. This method makes it possible to express a complex physical equation in terms of dimensionless and independent quantities (Π groups). Although they need not be unique, they are independent and form a complete set. These non-dimensional quantities are used to establish similitude between models of different scales. When a reduced-scale model satisfies certain similitude conditions with the full-scale model, the same response is expected between the two. Similitudes can be categorized into three different groups [17], and a worked Π-group example follows the list:

  1. Geometric similitude: geometry is equally scaled.

  2. Kinematic similitude: "Homologous particles lie at homologous points at homologous times" [18].

  3. Dynamic similitude: homologous forces act on homologous parts or points of the system.
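As a textbook-style illustration of how Π groups arise (this example is not taken from the paper): consider the aerodynamic force F on a body of characteristic length L in a flow with density ρ, velocity V and viscosity μ. Five quantities built from three base dimensions leave 5 - 3 = 2 independent groups,

\Pi_1 = \frac{F}{\rho V^2 L^2}, \qquad \Pi_2 = \frac{\rho V L}{\mu} \quad (\text{the Reynolds number}),

so a reduced-scale test reproduces the full-scale force coefficient Π_1 whenever the Reynolds number (and, for compressible flow, the Mach number) is matched between the two scales.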

1.4 Identification of Critical Uncertainties via Sensitivity Analysis (a good concept)

Over the recent decades, the variety of problems of interest has led to the development of many sensitivity analysis techniques. While some of them are quantitative and model-free [19], some depend on the specific type of the mathematical model used [20]. Similar to how engineering problems can be addressed with many different mathematical approaches, sensitivity analyses can be carried out in different ways. For the most basic and low-dimensional cases, even graphical representations such as scatter plots may yield useful information about the sensitivities [21]. As the system gets more complicated however, methods such as local sensitivity analyses (LSA), global sensitivity analyses (GSA) and regression-based tools such as prediction profilers may be used.

Screen and filter the critical parameters, similar to the key and important dimensions on an engineering drawing!

LSA methods provide a local assessment of the sensitivity of the outputs to changes in the inputs, and are only valid near the current operating conditions of the system. GSA methods are designed to address the limitations of LSA methods by providing a more comprehensive assessment of the sensitivity of the outputs to changes in the inputs. Generally speaking, GSA methods take into account the behavior of the system over a large range of input values, and provide a quantitative measure of the relative importance of different inputs in the system. In addition, GSA methods do not require assumptions about the relationship between the inputs and outputs, such as the function being differentiable, and are well suited for high-dimensional problems. Therefore, GSA methods have been dubbed the gold standard in sensitivity analysis in the presence of uncertainty [22].

Variance-based methods apportion the variance in the model outputs to the model inputs and their interactions. One of the most popular variance-based methods is the Sobol method [23]. Consider a function f(X) = Y; the Sobol index for a variable X_i is the ratio of the output variability attributable to X_i to the overall variability in the output. These variations can be obtained from parametric or non-parametric sampling techniques. Following this definition, the first-order effect index of input X_i can be defined as:

S_{1i} = \frac{V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right)}{V(Y)}    (1)

where the denominator represents the total variability in the response Y whereas the numerator represents the variation of Y while changing Xi but keeping all the other variables constant. The first-order effect represents the variability caused by Xi only. Following the same logic, combined effect of two variables Xi and Xj can be calculated:

S_{1i} + S_{1j} + S_{2ij} = \frac{V_{X_{ij}}\left(E_{X_{\sim ij}}(Y \mid X_{ij})\right)}{V(Y)}    (2)

Finally, the total effect of a variable is the sum of its first-order effect and all of its interactions of all orders with the other input variables. Because the sum of all sensitivity indices must be unity, the total effect index of X_i can be calculated as [24]:

S_{Ti} = 1 - \frac{V_{X_{\sim i}}\left(E_{X_i}(Y \mid X_{\sim i})\right)}{V(Y)}    (3)

Because the Sobol method is tool-agnostic and can be used without any approximation of the objective function, it is employed throughout this paper for sensitivity analysis purposes.
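To make Equations (1) and (3) concrete, the sketch below estimates first-order and total Sobol indices for a toy two-input function with a basic pick-freeze Monte Carlo scheme (the Saltelli and Jansen estimators); the function and sample size are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy model: one strong input, one weak input, plus an interaction term.
    return x[:, 0] + 0.3 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

n, d = 100_000, 2
A = rng.uniform(0.0, 1.0, size=(n, d))   # first independent sample matrix
B = rng.uniform(0.0, 1.0, size=(n, d))   # second independent sample matrix
fA, fB = f(A), f(B)
var_Y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": swap only column i
    fABi = f(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var_Y        # first-order index, Eq. (1)
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var_Y  # total-effect index, Eq. (3)
    print(f"X{i + 1}: S1 ~ {S1:.3f}, ST ~ {ST:.3f}")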

2 Development of the Methodology

The proposed methodology is developed with the purpose of identifying and reducing the sources of epistemic uncertainty in complex design projects in a systematic fashion. (It is mainly aimed at solving the knowledge problem: helping engineers understand the problem clearly!)

First, the problem for which the mitigation of uncertainty is the objective is defined, and the corresponding high-level requirements are identified. In this step, the disciplines involved in the problem at hand and how requirements flow down to analyses are noted. Then, the problem ontology is formulated using functional, physical and modeling decompositions. A top-down decision-making framework is followed to create candidate, fit-for-purpose modeling and simulation environments and to select the most appropriate one for the problem at hand. Upon completion of this step, a modeling and simulation environment that covers every important aspect of the problem and satisfies the set of modeling requirements will be obtained, while acknowledging the abstractions of the environment.

The third step is to run the model and collect data. For the aforementioned reasons, there will be many uncertainties in the model. Therefore, the focus of this step is to identify the critical uncertainties that have a significant impact on the model response. (This is a process of roughly figuring out the relationships; done this way, though, it is far too time-consuming!)

If one deems this uncertainty to be unacceptable, or to have a significant impact on the decision-making process, then a lower-level experiment will be designed in order to create new knowledge pertaining to that uncertainty. This new knowledge can be carried over to the modeling and simulation environment so that the impact of the new uncertainty characteristics (i.e., probability distribution) on the model response can be observed. The main idea of the proposed methodology is to generate new knowledge in a systematic way and to update the appropriate components of the modeling and simulation environment with the newly obtained information. This process is illustrated schematically in Figure 3.

This is the playbook for developing a brand-new product!

  1. Problem Definition:
     All aircraft are designed to answer a specific set of requirements born out of the needs of the market or stakeholders. (A decent summary.)

     The definition of the concept of operations outlines the broad range of missions and operations this aircraft will be used for. After the overall purpose of the aircraft is determined, the designers can decide on the required capabilities and derive the metrics by which these capabilities are going to be measured. Considering this information, a concept and the rough size of the aircraft can be determined. This process can be completed by decomposing the requirements and tracking their impact on metrics from the capability and operations perspectives [25]. For multi-role aircraft, additional capabilities will emerge and parallel decompositions may need to be used. (Requirement decomposition!)

  2. Formulate the Problem Ontology:
     Following the systems-engineering-based methods given in References [25, 26], a requirements analysis and breakdown can be performed, identifying the key requirements for the aircraft, the mission and the wing structure to be analyzed. Then, a physical decomposition of the sub-assembly is developed, outlining its key components and their functionalities and deciding on the set of abstractions. (Rather like the concepts of object-oriented programming!) Finally, a modeling architecture that maps the physical and functional components to the decompositions is created. (Analogous to the dimension chain in motor design, or a product's conceptual framework!) This mapping from requirements all the way to the modeling and simulation attributes is called the problem ontology, and is illustrated in Figure 4. With this step it is possible to follow a decision-making framework to select the appropriate modeling and simulation tool among many candidates for this task.

  3. Identification of Critical Uncertainties:
     With a defined M&S environment, it is possible to run cases and rigorously identify the critical uncertainties that have the most impact on the quantities of interest. (You fundamentally need to know simulation and physics well to play this game!) As mentioned before, there is a plethora of methods for representing uncertainty mathematically and quantifying its impact. (Mathematical modeling realized through CAE!) For this use case, a sensitivity analysis will be performed to determine which parameters have the greatest impact on the output uncertainty in the corresponding tools, by calculating total Sobol indices.

  4. Design a Lower-level Experiment:
     The next task is to address the identified critical uncertainties in the selected M&S environment. (Simplify the problem, filter the parameters!) To this end, the steps corresponding to designing a lower-level experiment are illustrated in Figure 5. It is essential to note here that the primary purpose of this lower-level experiment is not to model the same phenomenon in a subsequent higher-fidelity tool, but rather to use the extra fidelity to mitigate uncertainty for system-level investigation purposes.

     The experimental design is bifurcated into computational experiments (CX) and physical experiments (PX), each serving a unique purpose within the research context (do both at the same time!). For computational experiments, the focus is on leveraging computational models to simulate scenarios under various conditions and parameters, allowing a broad exploration of the problem space without the constraints of physical implementation. Conversely, the physical experiments involve the design and execution of experiments in a physical environment. This phase is intricately linked to the computational experiments: to the extent that they accurately represent the physical experiments, they can be used to guide the physical experimentation setups. This entails a careful calibration process, ensuring that the computational models reflect the real-world constraints and variables encountered in the physical domain as closely as possible. (We think alike; solving this problem does take real technical skill!) This step is a standalone research area by itself, and it will only be demonstrated on a single case.

     Upon completion of the experimentation procedure, the execution phase takes place, where the experiments are conducted according to the predefined designs. This stage is critical for gathering empirical data and insights, which are then subjected to rigorous statistical analysis. The interpretation of these results forms the basis for drawing meaningful conclusions, ultimately contributing to the generation of new knowledge pertaining to the epistemic uncertainty in question. This methodological approach, characterized by its dual emphasis on computational and physical experimentation, provides a robust framework for analyzing uncertainties. (Our goals differ; I think the goal should be optimization!)

3 Demonstration and Results

3.1 Formulating the Problem Ontology

Development of a next-generation air vehicle platform involves significant uncertainties. To demonstrate how the methodology applies to such a scenario, the problem selected is the aerostructures analysis of a Blended-Wing-Body (BWB) aircraft in the conceptual design stage. The goal is to increase confidence in the predictions of the aircraft range by reducing the uncertainty associated with the parameters used in design. (The aircraft industry is essentially workshop-style, or what we call build-a-sample style; its challenges are different, but there is real technical depth here!) A representative OpenVSP drawing of the BWB aircraft used in this work is given in Figure 6.

Figure 6: BWB concept used in this work.

3.2 Identification of Critical Uncertainties

For the given use case, two tools are found to be appropriate for early-stage design exploration purposes: FLOPS and OpenAeroStruct.

As a low-fidelity tool (a minimal model!), NASA's Flight Optimization System (FLOPS) [27] will be used to calculate the range of the BWB concept for different designs. FLOPS is employed due to its efficiency in early-stage design assessment, providing a quick and broad analysis under varying conditions with minimal computational resources. FLOPS facilitates the exploration of a wide range of design spaces by rapidly estimating performance metrics, which is crucial during the conceptual design phase, where multiple design iterations are evaluated for feasibility and performance optimization. (In goal and methodology this resembles ANSYS RMxprt; it probably reduces element degrees of freedom or simplifies the model even further!) FLOPS uses historical data and simplified equations (so it really is just like RMxprt!) to estimate the mission performance and the weight breakdown of an aircraft. Because it is mainly based on simpler equations, its run time for a single case is very low, making it possible to run a relatively high number of cases. Because FLOPS uses lumped parameters, it is only logical to go to a slightly higher-fidelity tool appropriate to the conceptual design phase in order to break down the lumps. A more detailed analysis of the epistemic uncertainty variables will then be possible.

OpenAeroStruct [28] is a lightweight, open-source tool designed for integrated aerostructural analysis and optimization. It combines aerodynamic and structural analysis capabilities within a gradient-based optimization framework, enabling efficient design of aircraft structures. The tool supports various analyses, including wing deformation effects on aerodynamics and the optimization of wing shape and structure to meet specific design objectives. In this work it will be used as a step following the FLOPS analyses, as it represents a step increase in fidelity. According to the selected analysis tools, the main parameter uncertainties to be investigated are the material properties (e.g., Young's modulus) and the aerodynamic properties (e.g., lift and drag coefficients).

Table 2: Nomenclature for mentioned FLOPS variables

  Variable   Description
  WENG       Engine weight scaling parameter
  OWFACT     Operational empty weight scaling parameter
  FACT       Fuel flow scaling factor
  RSPSOB     Rear spar percent chord for BWB fuselage at side of the body
  RSPCHD     Rear spar percent chord for BWB at fuselage centerline
  FCDI       Factor to increase or decrease lift-dependent drag coefficients
  FCDO       Factor to increase or decrease lift-independent drag coefficients
  FRFU       Fuselage weight (composite for BWB)
  E          Span efficiency factor for wing

First, among the list of FLOPS input parameters, 31 are selected because they are either not related to the design or are highly abstracted parameters that may capture the largest amount of abstraction and cause variance in the outputs. Of these 31 parameters, 27 are scaling factors and are assigned a range between 0.95 and 1.05. The remaining four are related to the design, such as the rear spar percent chord of the BWB at the fuselage centerline and at the side of the body, and they are swept over estimated, reasonable ranges. The parameter names and their descriptions will be explained throughout the discussion of the results as necessary, but an overview is listed in Table 2 for the convenience of the reader. The aircraft range is calculated for each combination of sampled input parameters. From these parameters, 4096 samples are generated using Saltelli sampling [19], a variation of fractional Latin Hypercube sampling, due to its relative ease in calculating Sobol indices and the availability of existing tools. Calculation of the indices is carried out using the Python library SALib [29].
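A minimal sketch of this sampling-and-analysis workflow with SALib is shown below. Three scale factors with the 0.95 to 1.05 range stand in for the paper's 31 FLOPS inputs, and range_model is a hypothetical placeholder for the actual FLOPS range calculation:

import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Stand-in problem definition: a few FLOPS-style scaling factors in [0.95, 1.05].
problem = {
    "num_vars": 3,
    "names": ["FCDI", "FCDO", "OWFACT"],
    "bounds": [[0.95, 1.05]] * 3,
}

def range_model(x):
    # Hypothetical surrogate for the FLOPS range output; the real study
    # evaluates FLOPS itself for every sampled input vector.
    fcdi, fcdo, owfact = x
    return 7000.0 / (fcdi * fcdo) - 2000.0 * (owfact - 1.0)

X = saltelli.sample(problem, 1024)                # Saltelli sampling, as in the paper
Y = np.array([range_model(x) for x in X])
Si = sobol.analyze(problem, Y)                    # first-order and total Sobol indices
for name, st in zip(problem["names"], Si["ST"]):
    print(f"{name}: total-effect index = {st:.3f}")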

Figure 7: Comparison of sensitivity indices calculated by three different methods: quasi-Monte Carlo sampling, a surrogate model built with full sampling, and a surrogate model built with 10% sampling.

In Figure 7, the total sensitivity indices calculated with three different sampling strategies are shown. Blue bars represent quasi-Monte Carlo (QMC) sampling that uses all 4096 input samples. To demonstrate how sensitivity rankings may change through the use of surrogate modeling techniques, two different response surface equations (RSEs) are employed. The first is an RSE constructed using all 4096 points, and the other is constructed using only 10% of the points, at a correspondingly lower computational cost. After verifying that these models fit well, it is seen that they are indeed able to capture the trends, although the ranking of the important sensitivities needs to be treated with care.

In this analysis, it is seen that the aerodynamic properties, the material properties and the wing structure location have a significant impact on the wing weight and aerodynamic efficiency. Therefore, they are identified as critical uncertainties. This is expected and consistent with the results previously found for a tube-and-wing configuration in Reference [30].

3.3 Experiment Design and Execution

To demonstrate the methodology, the next step includes the low-fidelity aerostructures analysis tool OpenAeroStruct [28] and shows how such a tool can be utilized to guide the design of a physical experimentation setup.

  1. Problem Investigation:

     • Premise: The variations in range calculations are significantly influenced by uncertainties in the Young's modulus, wingbox location and aerodynamic properties. These parameters are related to the disciplines of aerodynamics and structures.

     • Research Question: How can the uncertainties in these parameters that impact the range be reduced?

     • Hypothesis: The Breguet range equation is a good first-order approximation for calculating the maximum range of an aircraft [31]. A more accurate determination of the probability density function describing the aerodynamic performance of the wing will reduce the uncertainty in the wing weight predictions.

  2. Thought Experiment: Visualizing the impact of more accurately determined parameters on the simulation results, we would expect to see a reduction in the variation of the simulation outputs.

  3. Purpose of the Experiment: This experiment aims to reduce the parameter uncertainty in our wing aerostructures model. There is little to no expected impact of unknown physics that would interfere with the simulation results at such a high level of abstraction. In other words, the phenomenological uncertainty is expected to be insignificant for this problem. In order to demonstrate the proposed methodology, both avenues, computational experiment only and physical experiment, will be pursued.

  4. Experiment Design: We decide to conduct a computational experiment representing a physical experimentation setup, in which the parameters pertaining to the airflow, material properties and structural location are varied within their respective uncertainty bounds and the resulting lift-to-drag ratios are observed. For the subscale physical experiment, the boundary conditions of the experiment need to be optimized for the reduced scale so that the closest objective metrics can be obtained.

  5. Computational Experiments for both cases:

     • Define the Model: We use the OpenAeroStruct wing structure model with a tubular-spar structural approximation and a wingbox model.

     • Set Parameters: The parameters to be varied are the angle of attack, the Mach number, and the location of the structures.

     • Design the Experiment: We use Latin Hypercube Sampling to randomly sample the parameter space. Sobol indices are then computed to observe the global sensitivities over the input space.

     • Develop the Procedure: (simulation and physical prototype testing proceed in parallel!)

       • For CX only: For each random set of parameters, run the OpenAeroStruct model and record the resulting predictions, case numbers and run times. After enough runs to sufficiently explore the parameter space, analyze the results (a sketch of this loop follows the list).

       • For PX only: Use the wingbox model only in OpenAeroStruct, pose the problem as a constrained optimization problem to get the PX experimentation conditions; the scale is now a design variable, and the dimensionless parameters are scaled accordingly.
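As referenced in the CX procedure above, a minimal sketch of that sampling-and-run loop is given below. The parameter bounds are illustrative, and run_aerostruct_case is a hypothetical stand-in for a full OpenAeroStruct aerostructural evaluation (the real study runs the tool itself):

import time
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample over the varied parameters (illustrative bounds):
# angle of attack [deg], Mach number, wingbox front-spar location (chord fraction).
sampler = qmc.LatinHypercube(d=3, seed=1)
lower = np.array([0.0, 0.80, 0.08])
upper = np.array([10.0, 0.87, 0.12])
cases = qmc.scale(sampler.random(n=200), lower, upper)

def run_aerostruct_case(alpha, mach, spar_loc):
    # Hypothetical stand-in for an OpenAeroStruct aerostructural run that
    # would return the predicted lift-to-drag ratio for these inputs.
    return 18.0 + 0.4 * alpha - 25.0 * (mach - 0.84) - 10.0 * (spar_loc - 0.10)

records = []
for alpha, mach, spar_loc in cases:
    t0 = time.time()
    l_over_d = run_aerostruct_case(alpha, mach, spar_loc)
    records.append((alpha, mach, spar_loc, l_over_d, time.time() - t0))

# The recorded predictions and run times can then be analyzed
# (e.g., Sobol indices, the L/D histograms of Figure 8, Table 3 statistics).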

3.4 Computer Experiments for Uncertainty Mitigation

Reducing the uncertainty in the lift-to-drag ratio would have a direct impact on reducing the uncertainty in range predictions. L/D is a key aerodynamic parameter that determines the efficiency of an aircraft or vehicle in converting lift into forward motion while overcoming drag. By reducing the uncertainty in L/D, one can achieve more accurate and consistent estimates of the aircraft’s efficiency, resulting in improved range predictions with reduced variability and increased confidence.
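For reference, the standard jet-aircraft form of the Breguet range equation (quoted here from general practice, not reproduced from the paper) makes this dependence explicit:

R = \frac{V}{c_T}\,\frac{L}{D}\,\ln\!\left(\frac{W_i}{W_f}\right)

so, for a fixed speed V, thrust-specific fuel consumption c_T and weight ratio W_i/W_f, a given relative uncertainty in L/D carries over one-to-one into the relative uncertainty of the predicted range.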

To calculate L/D and the other required metrics, the low-fidelity, open-source aerostructures analysis software OpenAeroStruct is used. The BWB concept illustrated in Figure 6 is exported to OpenAeroStruct. For simplicity, the vertical stabilizers are ignored in the aerodynamics and structures analyses. The wingbox is abstracted as a continuous structure, spanning from 10% to 60% of the chord throughout the whole structural grid. This setup is used for both the CX and PX cases, and the model parameters are manipulated according to the problem.

3.4.1 Test conditions

First, it is necessary to develop a simulation that reproduces the full-scale conditions. The simpler approximation models the wingbox structure as a tubular spar with a diameter that decreases from root to tip. The diameters are calculated through optimization loops so that the stress constraints are met. For the wingbox model, the location is approximated from the conceptual design of the structural elements using public-domain knowledge. For both cases, different aerodynamic and structural grids are employed to investigate the variance in the SRQs. Cruise conditions at 10,000 meters are investigated, with a constant Mach number of 0.84.

Five different model structures are tested for this experimentation setup with the same set of angle of attack, Mach number, spar location and Young's modulus in order to make an accurate comparison. Through these runs, the lift-to-drag ratios are calculated and the histogram is plotted in Figure 8. The first observation in this figure is that although the wingbox model is a better representation of reality, its variance is higher than that of the tubular-spar models, and it is hypersensitive to certain inputs in some conditions. It is also seen that the predictions of the tubular-spar model generally lie between the predictions of the two different fidelities of the wingbox model.

Furthermore, the runtime statistics have a significant impact on how the results are interpreted, as well as on how many cases can realistically be considered. The overview is presented in Table 3. The mesh size is clearly the most dominant factor in the average run time for a single case. An interesting observation is that the coarser wingbox model takes less time to run than the lower-fidelity tubular-spar model and predicts a higher lift-to-drag ratio. The reason is that the wingbox model with the coarser mesh was able to converge in fewer iterations than the tubular-spar model. Using a shared set of geometry definitions and parameters, as far as the corresponding fidelity level allows, showed that decreasing the mesh size resulted in less variance in the predicted SRQs, as expected. However, increasing the fidelity level comes with a new set of assumptions pertaining to the newly included subsystems or physical behavior. Therefore, one cannot definitively say that increasing the fidelity level will decrease the parameter uncertainty without including the impact of the newly added parameters.

Table 3: OpenAeroStruct runtime and output variation statistics with respect to different model structures, on a 12-core 3.8 GHz, 32 GB RAM machine

  Run type                Std. deviation in L/D   Mean runtime [s]
  Tubular spar - coarse   0.184                   6.79
  Tubular spar - medium   0.169                   15.38
  Wingbox - coarse        0.794                   2.5
  Wingbox - medium        0.376                   20.7
  Wingbox - fine          0.401                   76.67

Figure 8: Probability densities of CL/CD for five different model structures.

3.5 Leveraging Computer Experiments for Guiding Physical Experimentation (build a reduced-scale aircraft and design the physical test plan!)

3.5.1 Feasibility of the full-scale model

As discussed before, the construction and testing of full-scale vehicle models is almost always not viable in the aerospace industry, especially in the earlier design phases. For this demonstration, a free-flying sub-scale test will be pursued. The baseline experiment conditions will be the same as in the computational-only experimentation, except for appropriate scaling of the parameters. Therefore, a scale that is optimal for a selected cost function needs to be found, considering the constraints. For this use case, the following constraints are defined:

  • Scale of the sub-scale model: n < 0.2

  • Mach number: 0.8 < Ma < 0.87. The effects of compressibility become much more dominant as Ma = 1 is approached; therefore the upper limit for the Mach number is kept at 0.87.

  • Angle of attack: 0 < α < 10. Because not all similitude conditions will be met, the flight conditions for a different angle of attack need to be simulated. This is normal practice in subscale testing [32].

  • Young's modulus: E_S < 3 E_F. The Young's modulus of the model should be less than three times that of the full-scale design.

3.5.2 Optimize for similitude

For this optimization problem, a Sequential Least Squares Programming method is used. SLSQP is a numerical optimization algorithm that is particularly suited for problems with constraints [33]. It falls under the category of sequential quadratic programming (SQP) methods, which are iterative methods used for nonlinear optimization problems. The idea behind SQP methods, including SLSQP, is to approximate the nonlinear objective function using a quadratic function and solve a sequence of quadratic optimization problems, hence the term “sequential". In each iteration of the SLSQP algorithm, a quadratic sub-problem is solved to find a search direction. Then, a line search is conducted along this direction to determine the step length. These steps are repeated until convergence. One advantage of SLSQP is that it supports both equality and inequality constraints, which makes it quite versatile in handling different types of problems. It is also efficient in terms of computational resources, which makes it a popular choice for a wide range of applications.

Algorithm 1: Constrained optimization for finding physical experiment conditions

procedure Optimization(x)
    Define scaling parameters
    Define x = [n, α, Ma, h, E]
    Define constraints
    Initialize x with initial guess [0.1, 0, 0.84, 10000, 73.1e9]
    while not converged do
        Evaluate cost function f(x) (Equation 7)
        Solve for gradients and search directions
        Run OpenAeroStruct optimization
        if failed case then
            Return high cost function value
            Select new x
        end if
    end while
    return x
end procedure

The algorithm used for this experiment is presented in Algorithm 1. For convenience, the altitude is taken as a proxy for air density. In the optimization process, the mass (including the fuel weight and distribution) is scaled according to:

n_{mass} = \frac{\rho_F}{\rho_S}\, n^3    (4)

where ρ_F is the fluid density for the full-scale model, ρ_S the fluid density for the sub-scale model, and n is the geometric scaling factor [32]. Since aeroelastic bending and torsion are also of interest, the following aeroelastic parameters for bending (S_b) and torsion (S_t) must also be satisfied; they are defined as:

S_b = \frac{EI}{\rho V^2 L^4}    (5)
S_t = \frac{GJ}{\rho V^2 L^4}    (6)

These two parameters need to be duplicated in order to ensure the similitude of the inertial and aerodynamic load distributions for the same Mach number or scaled velocity, depending on the compressibility effects in the desired test regime [32]. The cost function is selected to be:

f(x) = \left| \frac{(C_L/C_D)_S - (C_L/C_D)_F}{(C_L/C_D)_F} \right|^2 + 30 \left| \frac{Re_S - Re_F}{Re_F} \right|^2 + 3000 \left| \frac{Ma_S - Ma_F}{Ma_F} \right|^2    (7)

where the lift-to-drag ratio, the Reynolds number and the Mach number of the sub-scale model are quadratically penalized with respect to their deviation from the simulation results of the full-scale model. Because the magnitudes of the terms are vastly different, the second and third terms are multiplied by coefficients that scale their impact to the same level as the first term. For other problems, these coefficients give engineers flexibility: depending on how strongly certain deviations in the ratios of the similarity parameters are penalized, the optimum scale and experiment conditions will change. In this application, the simulated altitude for the free-flying model is changed, rather than changing the air density directly.
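A minimal sketch of this constrained search with SciPy's SLSQP routine is given below, using Equation (7) as the cost. Here subscale_metrics is a hypothetical stand-in for the OpenAeroStruct evaluation of the scaled model, and the full-scale reference values, the surrogate response and the bounds are placeholders chosen to mirror the constraints of Section 3.5.1:

import numpy as np
from scipy.optimize import minimize

# Full-scale reference values (placeholders for the full-scale simulation results).
LD_F, RE_F, MA_F = 18.0, 3.1e7, 0.84
E_F = 73.1e9  # full-scale Young's modulus [Pa]

def subscale_metrics(x):
    # Hypothetical stand-in: evaluate the sub-scale model at x = [n, alpha, Ma, h, E]
    # and return (L/D, Re, Ma). The real workflow runs an OpenAeroStruct optimization.
    n, alpha, mach, h, E = x
    ld = 16.0 + 0.3 * alpha - 5.0 * (1.0 - n)      # illustrative response only
    re = RE_F * n * (1.0 - h / 4.0e4)              # Reynolds number falls with scale and altitude
    return ld, re, mach

def cost(x):
    # Equation (7): quadratic penalties on the deviation of L/D, Re and Ma
    # of the sub-scale model from the full-scale reference values.
    ld, re, ma = subscale_metrics(x)
    return (abs((ld - LD_F) / LD_F) ** 2
            + 30.0 * abs((re - RE_F) / RE_F) ** 2
            + 3000.0 * abs((ma - MA_F) / MA_F) ** 2)

x0 = np.array([0.1, 0.0, 0.84, 10000.0, E_F])      # initial guess from Algorithm 1
bounds = [(0.01, 0.2),      # scale n < 0.2
          (0.0, 10.0),      # angle of attack [deg]
          (0.80, 0.87),     # Mach number
          (0.0, 12000.0),   # altitude [m]
          (10e9, 3 * E_F)]  # Young's modulus, E_S < 3 E_F
res = minimize(cost, x0, method="SLSQP", bounds=bounds)
print(res.x, res.fun)

In the actual workflow each cost evaluation would trigger an OpenAeroStruct run, as outlined in Algorithm 1.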

3.5.3 Interpret results

Without loss of generality, it can be said that the optimum solution may be unconstrained or constrained, depending on the nature and the boundaries of the constraints. At this step, the solution needs to be verified as to whether it leads to a feasible physical experiment design. Probable causes of infeasibility are conflicts between the design variables, or the impossibility of matching them in a real experimentation environment. In such cases, the optimization process needs to be repeated with these constraints in mind. For this problem, however, the constrained optimum point is found to be:

  • n = 0.2

  • Ma = 0.86

  • α = 10

  • E_S = 219 GPa

  • h = 0 m

  • Re = 6.2 x 10^6

Furthermore, it can be noted that due to the nature of the constrained optimization problem, the altitude for the sub-scale test was found to be sea level. While this is not completely realistic, it points to an experiment condition where high-altitude flight is not necessary. Finally, the Young's modulus for the sub-scale model is slightly below the upper threshold, which was three times the Young's modulus of the full-scale design. With this solution we can verify that solving a constrained optimization problem to find the experiment conditions is a valid approach, and it provides a baseline for other sub-scale problems as well.


4 Conclusion

In this work, a novel approach is introduced for the design of physical experiments, emphasizing the quantification of uncertainty in engineering models (the original sentence has a small grammar error here), with a specific focus on early-stage aircraft design. Sensitivity analysis techniques are used intelligently to find specific computational and physical experimentation conditions, tackling the challenge of mitigating epistemic uncertainty.

Findings indicate that this methodology not only facilitates the identification and reduction of critical uncertainties through targeted experimentation but also optimizes the design of physical experiments through computational efforts. (The author probably does not understand databases well enough; it feels like he does not know SQL, otherwise he could have gone further!) This synergy enables more precise predictions and efficient resource utilization. Through a case study on a Blended-Wing-Body (BWB) aircraft concept, the practical application and advantages of the proposed framework are exemplified, demonstrating how subsequent fidelity levels can be leveraged for uncertainty mitigation purposes.

The presented framework for uncertainty management is adaptable to various design challenges. The study highlights the importance of integrating computational models to guide physical testing, fostering a more iterative and informed design process that will save resources. Of course, every problem and testing environment has its own challenges. Therefore, dialogue between all parties involved in model development and physical testing is encouraged.

Future research is suggested to extend the application of this methodology to different aerospace design problems, including propulsion systems and structural components. Additionally, the development of more advanced computational tools and algorithms could further refine uncertainty quantification techniques. With more detailed models and physics, and the integration of high-performance computing, it is possible to see the impact of this methodology in later stages of the design cycle. The reduction of uncertainty in performance metrics can help avoid program risk: excessive cost, performance shortcomings and delays.






n5321 | July 11, 2025, 00:00

The History of Boeing

Legend and Legacy: The Story of Boeing and Its People

The preface is rather sentimental. I have been interested in Boeing lately. This book can only be read online.



n5321 | July 9, 2025, 22:27

Diss SpaceX

Any SpaceX Engineers want to share some process advice with “old space”?

I work for what y’all would probably consider an “old space” entity and lately have been trying to figure out how to improve and sculpt our development and verification processes. Obviously SpaceX has been doing innovations in this regard, and in my opinion, is the distinguishing factor in their effectiveness. We have brilliant GN&C people, but if it takes 8 hours to run a sim when the dragon engineers have flight hardware on their desk, one of those teams is going to be able to test and improve ideas faster.

So here are some things I’ve sort of learned that I think spaceX is capitalizing on that others aren’t as much, fill in the blanks if you can.

  • Using proven COTS products when possible

  • When a proven COTS product is not available, build it in-house to reduce the chance of a garbage contractor burning you (we suffer from this a lot)

  • A focus on modern software development practices and applying that attitude to the vehicle. SpaceX benefits from being in the field of launch vehicles, which can be tested more readily in the actual operating environment than say a Martian orbiter/lander. I’d say they probably focus more on getting compile time and sim time tightened up as much as possible as well but I don’t know that for a fact.

  • Generally having less assessment time and more actually just make a decision and build it time to get it to the environment quicker. Again maybe not possible to extend as extremely to things outside of LEO but we’ll see. I know they couldn’t do this as much with dragon, which is closer to what I would want to emulate at my employer as opposed to starship.

What else am I missing? I don’t think there’s any great reason other entities can’t operate as quickly as SpaceX does, I really think eventually the methods used will spread and gain traction. Let’s speed that up a bit ;). If it makes you feel any better about spilling my employer is in no way competing with SpaceX, we only ever collaborate. Help us help you!



I'm in "old space" right now, and I'm fairly young.

I've found so many things I'd like to change coming from an agile background. I'll keep my list short, though:

  1. GD&T needs to go. If the prototype part works, it works. That's it. You can load test it, but if there's any reason you're not automatically assembling it after machining, you're doing something wrong.

  2. Stop using drawings for assemblies and internally machined parts (NX Drafting). I've spent 90%+ of my time making a drawing correct when the model and notes themselves were perfect. It's just stupid. Document your work and design intent - but don't have such specific requirements for drawings that make you churn out 10x fewer components.

  3. Stop holding "delegation" meetings. If you see something that needs to be done, do it. If you need someone to do something, add it to their backlog and talk to them about it. We don't need a 3 hour weekly meeting to discuss who is going to work on what for these few specific components.

  4. Improve prototype manufacturing flow. If I want to test an idea, let me make a cheap (maybe scaled down) version of it without having to go through the typical release process (which takes months and multiple peer reviews).

  5. Test complete assemblies more often. Continue to test individual parts, but if you put it all together and test it, you can pretty quickly find a bunch of issues - especially stuff like interferences and material property issues.

EDIT: edited point 1 to better express my frustration with GD&T for prototype parts.


GD&T needs to go. If the part works, it works. That's it. You'll load test it, but if there's any reason you're not automatically assembling it after machining, you're doing something wrong.

I hate doing GD&T as much as the next guy but this is absurd. Manufacturing processes are not infinitely precise or repeatable. Finding out the part doesn't fit isn't suddenly ok just because you test fit it immediately after machining. There are very few instances where a machined part can be immediately assembled without secondary operations anyway (is machining suddenly the only manufacturing process in existence?). And what about replacement parts? Without GD&T (or something like it), there is no guarantee the part will fit in every assembly.

You're basically advocating for reverting to a world of before interchangeable parts.


I can't talk about space, but for commercial nuclear it's a very similar phenomenon from what I can tell.

Extreme reluctance to try new technologies. So much so people have to retire before upgrading.

Kids show up straight out of college and have to learn 1970s technologies or software. Like they could eliminate an entire wing of the building with an Arduino and 3 weeks of programming, but they aren't allowed.

Specs are treated like the Bible. If you don't know where a spec came from, and often you don't, you aren't allowed to change it. These were often just picked from a long list of suitable solutions or numbers in the 70s because the loudest person wanted it. Now risk-averse management and regulators won't change them.

Implementing anything substantial requires you to convince your manager, who then has to convince their manager, on up to like 4 levels. The 4th level came from energy sales and has no respect for technical input or technical decision making. Once your 4th-level management is on board, they have to personally take the case to the regulator at some risk. The government regulator then takes 2 years.

Spacex apparently has few of these limitations. I don't think any amount of lower level process improvement will help until the whole system is overhauled.


Asimov was right, nuclear power will become a religion. And nuclear engineers the priests.


Jesus. Reading that I could actually feel the weight of that kind of bureaucratic inertia.


To generalize, the important difference between old space and new space is that new space companies are run by founders who stand to do very well only if their company is successful, and old space companies are run by seasoned executives who get paid a lot of money regardless.

That sets up very different incentives between the two kinds of companies. In new space, employees get rewarded by doing things that push the company towards the overall space-related goals. In old space, employees are typically rewarded by fitting into the existing company structure and conforming with the company culture.

Being an innovator is a great thing at a new space company and it can get you fired from an old space company.

I talk about this more in this video, where I talk about Blue Origin and Rocket Lab.

This isn't unique to the space world; this is why so many of the software innovations come from small startups rather than the tech giants.


Being an innovator is a great thing at a new space company and it can get you fired from an old space company.

I dunno about space but my experience in software was the exact opposite. I worked for a startup that was three years old and like a bright-eyed fool I assumed that everybody was looking for ways to improve our core product. However a lot of people had carved out their own little fiefdoms in the codebase and it turned out to be a cardinal sin to try to bypass any one of the fiefs. The company was only three years old and perfectly demonstrated Conway's law, clearly delineated by the four different programming languages that were being used. After they fired me, as a personal project I started rebuilding everything on my own as a way to recover my sanity, and from the rough draft skeleton I built out in a few weeks I could see that it was possible to replace a bloated product that required about 12 GB of memory to run with something performing faster on only 50 MB of memory.

Jaded and more cynical I moved on to the opposite of a startup, an established firm with huge legacy systems. And there I found that as long as I didn't break things, people were perfectly happy to pay me to experiment and re-imagine. It wasn't a panacea, all the knowledge silos and technical debt you would expect were present. But they weren't angry at me for trying to untangle the Gordian Knots, they were quite happy with me for trying. They didn't have the enormous sticks up their rear ends that the people at the startups did and it was okay to say that things weren't done perfectly the first time and needed to be redone based on the lessons learned.


Eric Berger wrote nicely about this in his book - "Musk is able to instantly make a competent, informed decision on any design direction in the company."


“Liftoff” really is an insightful book highlighting the DNA of SpaceX. I got the impression that having the money guy and the engineering guy happen to be the same person made a huge difference.



You don't have Elon Musk as CEO to keep the company culture alive. That is the primary problem with old school aerospace.

The companies started and managed by Jack Northrop, Donald Douglas, Bill Boeing et al. did amazing things when the founders were in charge. When the founders passed on and the board of directors took over, the culture died as well.



Is it though? NASA JPL does plenty of incredible things and there is no shining CEO. Lots of other really solid space projects that push the envelope, I think it has more to do with process and culture rather than CEOs, a culture of excellence can exist without these guys I think. You could have an incredible CEO take over Lockheed but their institutional culture is going to be very difficult to steer in a different direction at this point. I wonder if their initial success is more a function of their size and risk profile as opposed to their CEO. I’m not saying Elon isn’t a huge contributor to the success of SpaceX, but I think if he was gone SpaceX would continue to do pretty great work, but who knows.


Jack Northrop, Donald Douglas, Bill Boeing

I'm surprised Northrop wasn't named Nack



I think you're skimming the surface, but if it were that easy, more companies would be successfully emulating it.

The real aspects required for success are deeper, and therefore harder to implement in existing companies. I think there's an argument to be made that old space may never be able to successfully emulate SpaceX because it requires everyone to be bought into specific cultural principles from the start. If they aren't, it's nearly impossible to root out old cultural holdouts.

None of this is new, but I think it really comes down to things like:

  • Ownership at the individual level

  • Empowering those closest to the work to drive change

  • Ability to differentiate between "good enough" and "perfect" and willingness to accept the former

  • Acceptance of failure while maintaining accountability

  • Clear understanding of risk posture on each project throughout the organization

  • Use of project constraints to drive innovation - budget and schedule are knobs you turn to ride the line between "too scrappy" and "good enough"

  • Proper motive - there's a big difference between the decisions a company whose motive is to innovate towards a higher-level goal will make versus the decisions a company whose goal is to simply stay in business or turn profit.



    Not a SpaceX engineer…

    Set a goal. Every decision and meeting should be centered around if it accelerate or slows down the process to meet that goal. Everything else is just window dressing.

    -wasted time in meeting means less time working which slows everything down

    -waiting on suppliers slows down procurement. So in source and go vertical

    • only outsource for expertise, never for cost savings. It’s never as cheap as it sounds, and you lose control (read: speed)

    -software is the cheapest big purchase you can make. Dicking around looking for a SolidWorks license is a waste of so much time it hurts.

    • the finance guys exist to find a way to fund stuff. Engineers should never have to justify to a non-engineer why they need X. And cost should be a minor part of the conversation.

    • speed towards the goal is the only measure of success. Not weight savings, not man hours expended, not revenue forecasts. Just progress towards the goal.

    Now pick a goal that inspires engineers to want to work on the thing and let them go nuts.




  • This 100%. You absolutely nailed it.

    only outsource for expertise, never for cost savings. It’s never as cheap as it sounds, and you lose control (read: speed)

    It amazes me how much the company I work at ("old space") outsources basic machining tasks. We have all the equipment in our shop, but we outsource so much of it - it's nuts!

    software is the cheapest big purchase you can make. Dicking around looking for a SolidWorks license is a waste of so much time it hurts.

    A good IT department is critical, but so is using a superior CAD package: NX and TeamCenter.



    Watch this interview with NASA scientist Dan Rasky, where he talks about his experiences at SpaceX.


    If your company isn't willing to work at something for a while, realize it isn't going to work that way and then throw the entire concept away and go in an entirely new direction, regardless of time and money spent, this development style isn't for you.

    If your company isn't willing to blow things up, regularly.. this development style isn't for you.

    If your company is beholden to bean counters for every design decision.. this development style isn't for you.

    In all honesty, I think the reason SpaceX seems to outclass damn near everyone is this:

    The sunk cost fallacy doesn't exist.

    Elon is hands on at every major development decision and generally understands what he is being told.

    The only guiding principle is goal accomplishment. It doesn't matter how a system is "supposed" to work or how things have been done in the past. Can you show him the math on a better way? They'll try it.


    Get better leaders.

    Great leaders don't just motivate you, hell they might never even pat you on the back. But they give you the authority, responsibility, and resources you need to excel, and most importantly, they focus. They eliminate un-necessary work, they remove questionable requirements, and they streamline goals and objectives to only what's absolutely needed to succeed.


    I am just listening to Liftoff by Eric Berger. He reports that Elon attributed much of SpaceX's speed to the fact that Elon was both chief engineer and CFO, so there were no bean counters in the approval loop.


    I'm not a SpaceX Engineer, or any kind of Engineer, but my advice to "old space" is "just try to make a reusable rocket, or any kind of innovation".

    Old space is just stuck with the idea of building whatever they get told to build, without concern about reusability, or cost, or even if it ever flies.

    I'm sure ULA, Ariane, Roscosmos and even BO have some brilliant engineers who can figure out the solutions to the "hard, hard, hard" problems -- but their talents are being wasted. There needs to be a complete change in the top-level management who either don't think reusability is important, or who just don't care as long as they get paid.



  • I'm not an insider (in SpaceX in particular or space in general) but I design and run complex computer-driven systems and have thirty years' experience doing that, so I find this sort of thing to be fascinating.

    Two things I think I've picked up from the outside:

    1. They use COTS, but they're not trusting anymore. After they lost CRS-7 on 28 June 2015 because a COTS strut failed, they've taken to doing a lot more validation of the manufacturer's claims.

    2. I understand that they have an incredible in-house software system they share with Tesla. It really helps them track where every piece in every vehicle came from, so that they can know trends and recognize problems in sourcing early.



Yeah I think it's healthy to be skeptical of COTS. It should be used but it should be thoroughly tested to standards as well.


I'm an engineer at SpaceX. I'd recommend you read through this blog post at StackOverflow - it's an interview with an engineer in Software Delivery Engineering talking about how we develop flight software, and she goes into a lot of detail on process and off-the-shelf tech usage.

I'm in the same department, but Erin understands what we do way better than I do, and explains it better than I ever could. This is part of a whole series of blog posts on SO that go into software at SpaceX, but I think what we do is really underappreciated at a lot of places; everybody's interested in the actual software that goes on vehicles, but the software that helps write that software is, in many ways, a more complex problem.


Great contribution thank you! This is exactly the kind of rabbit hole I was hoping to find.


I noticed TDD being mentioned; is that something done for most SpaceX projects or more specifically for human-rated systems? I’m also curious how far y’all go with unit testing and what your strategy is. In my experience with human vehicles, there were formal design, code, and unit test reviews with 100% coverage required, then other V&V activities afterwards of course. Now I work on more R&D sort of work, where we’re flying technology demonstration work without humans on board. Our CI, MR, and unit test process is pretty open-ended right now, and I’m trying to find the right balance of testing and dev ops practices without taking too many resources from development and iteration. This is part of why I’m trying to suss out what processes produce large returns on investment, as there’s no way we can be as rigorous as previous projects.

I get the feeling that focusing unit tests on lower rates of coverage, but stress testing more at a functional level, would be more useful for us. I guess I’m mentioning this because I’m curious what your opinion is on the return on investment for various types of software delivery and testing practices. You’ve got a perspective from a pretty different world that could help us a lot.


This thread is the greatest thing I've ever seen on Reddit.


Adding to that:

"Everyone is the lead engineer". That means everyone is allowed to know the high level vision and is empowered to question things, design changes, etc.

Beware of the design of your thing reflecting structure of your organization (company, lab, agency, etc.). Shuttle with its multiple but not redundant hydrazine, hydrogen, oxygen and whatever loops is a prime example of that ailment. In-space propulsion and APU groups were separate so they designed their systems completely separately wasting mass and at the same time missing redundancy opportunities.



if it takes 8 hours to run a sim

In brief, get a better simulator.

By education, I'm a mathematician, but in a very checkered career, I did a fair amount of simulation, so I have a clue about what a simulator can be expected to do. Two or maybe three orders of magnitude in one-tenth to one-hundredth of real time is a really high-end product. These people are getting six orders of magnitude in real time, and cheaply enough that you can put one on every engineer's desk. To get one on my desk back when I was doing a lot of simulations, I probably wouldn't have killed anybody, but I would have committed a lot of illegal and immoral acts.






n5321 | 2025年7月7日 23:46

Decision-Making Under Uncertainty: the problem motor engineers face

1 Introduction

Digital thread is a data-driven architecture that links together information from all stages of the product lifecycle (e.g., early concept, design, manufacturing, operation, postlife, and retirement) for real-time querying and long-term decision-making [1,2]. Information contained in the digital thread may include sufficient representation (e.g., through numerical/categorical parameters, data structures, and textual forms) of available resources, tools, methods, processes, as well as data collected across the product lifecycle. A desired target of digital thread is its use as the primary source from which other downstream information, such as that used for design, analysis, and maintenance/operations related tasks, can be derived [2,3].

Although a significant challenge of the digital thread involves development of an efficient architecture and its associated processing of information, a relatively unexplored aspect of the digital thread is how to represent and understand the propagation of the uncertainty within the product lifecycle itself. (This is the most challenging problem in the R&D process.) High levels of uncertainty can lead to designs that are overconservative or designs that may require expensive damage tolerance policies, redesigns, or retrofits if analysis cannot show sufficient component integrity. One way to reduce this uncertainty is to incorporate data-driven decisions that are informed by data collected throughout the design process.

To assess the benefits of data-driven decisions, incorporating the value of future information into the decision-making process becomes critical. Performing this type of analysis opens up the possibility to assess not only just the current product generation or iteration but also the ones to follow. From this, new ways of thinking about design emerge, such as how can strategic collection of data from the current generation be used to improve the design of the next? For example, we may start asking what data should be collected, where should the data be collected, and when should the data be collected to minimize overall accrued costs? This problem involves analyzing uncertainty and the data collected over sequential stages of decision-making. In this article, we showcase how our digital thread analysis methodology, introduced in Ref. [4] and further developed in Ref. [5], analyzes this problem using Bayesian inference and decision theory, and solves it numerically using approximate dynamic programming.

To set the stage for discussion, we give a brief overview of the aspects of digital thread that are of relevance. Digital thread can be seen as a synthesis and maturing of ideas from product lifecycle management (PLM), model-based engineering (MBE), and model-based systems engineering (MBSE). PLM is the combination of strategies, methods, tools, and processes to manage and control all aspects of the product lifecycle across multiple products [6]. These aspects might include integrating and communicating processes, data, and systems to various groups across the product lifecycle. A key enabler of efficient PLM has been the development and implementation of MBE where data models or domain models communicate design intent to avoid document-based exchange of information [7,8], the latter of which can result in lossy transfer of the original sources, as well as to eliminate redundant and inconsistent forms of data representation and transfer. Examples of MBE data models include the use of mechanical/electronic computer-aided design tools and modeling languages such as system modeling language (SysML), unified modeling language, and extensible markup language (XML). MBSE applies the principles of MBE to support system engineering requirements related to formalization of methods, tools, modeling languages, and best practices used in the design, analysis, communication, verification, and validation of large-scale and complex interdisciplinary systems throughout their lifecycles [8-11]. Many recent assessments and applications of these ideas in the context of the digital thread can be found in additive manufacturing [12], 3D printing and scanning [13], computer numerical control machining [14], as well as detailed design representation, lifecycle evaluation, and maintenance/operations related tasks [15,16].

With a functional digital thread in place, characterizing uncertainty and optimizing under it can be performed with statistical methods and techniques. For instance, relevant sources of uncertainty within the product lifecycle, such as design parameters, modeling error, and measurement noise, can be identified, characterized, and ultimately reduced using tools and methods from uncertainty quantification [17-19]. With a means of assessing uncertainty, optimization under uncertainty can be performed with stochastic-based design and optimization methods. For instance, minimizing probabilistically formulated cost metrics subject to constraints that will arise for the digital thread decision problem may involve utilizing either stochastic programming or robust optimization. In stochastic programming, uncertainty is represented with probabilistic models, and optimization is performed on an objective statement with constraints involving some mean, variance, or other probabilistic criteria [20]. Alternatively, in robust optimization, the stochastic optimization problem is cast into a deterministic one through determining the maximum/minimum bounds of the sources of uncertainty and performing an optimization over the range of these bounds [21]. In addition, if reliability and robustness at the system level are required, uncertainty-based multidisciplinary design optimization can be employed [22-24].
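As a rough illustration of the distinction drawn here (generic textbook forms in my own notation, not the formulations used later in this article), a stochastic program optimizes an expectation subject to a probabilistic constraint, while a robust program optimizes against the worst case over an uncertainty set:

    \min_{x} \; \mathbb{E}_{\xi}\left[ f(x,\xi) \right] \quad \text{s.t.} \quad \mathbb{P}\left( g(x,\xi) \le 0 \right) \ge 1 - \varepsilon   \qquad \text{(stochastic programming)}

    \min_{x} \; \max_{\xi \in \mathcal{U}} f(x,\xi) \quad \text{s.t.} \quad g(x,\xi) \le 0 \;\; \forall\, \xi \in \mathcal{U}   \qquad \text{(robust optimization)}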

Of course, decision-making using the digital thread is not a one-time occurrence. Understanding the sequential nature of the multistage decision problem of the digital thread where one decision affects the next is critical for producing effective data-driven decisions. These decisions will have to be guided through some appropriate metric of assessing costs and benefits. This problem is explored in optimal experimental design where the objective is to determine experimental designs (in our case decisions) that are optimal with respect to some statistical criteria or utility function [25]. To assess the sequential nature of decision-making for the digital thread in particular, sequential optimal experimental design can be employed where experiments (again, decisions in our case) are conducted in sequence, and the results of one experiment may affect the design of subsequent experiments [26].

Despite the range of development and growth of digital thread and its application to manufacturing, maintenance/operations, and design related tasks in various multidisciplinary settings, a principled formulation that considers the propagation of uncertainty in the product lifecycle in the context of the digital thread is sparse. Furthermore, a mathematically precise way of analyzing and optimizing sequential data-informed decisions as new information is added to the digital thread remains absent. To address these gaps, in this article, we show that (1) the digital thread can be considered as a state that can dynamically change based on the decisions we make and the data we collect. We then show that (2) the evolution of uncertainty within the product lifecycle can be described with a Bayesian filter that can be represented in the digital thread itself. After expressing the digital thread in this way, we show how (3) the evolution of the digital thread can be modeled as a dynamical system that can be controlled using a stochastic optimal control formulation expressed as a dynamic program where the objective is to minimize total accrued costs over multiple stages of decision-making. Finally, we provide a (4) numerical algorithm to solve this dynamic program using approximate dynamic programming.

We illustrate our methodology through an example composite fiber-steered component design problem where the objective is to minimize total accrued costs over two design generations or iterations. In addition to evaluating design choices such as performing coupon level experiments to reduce uncertainty in materials as well as manufacturing and deploying a component to obtain operational data, effectiveness of data collection can be further tailored by determining where to place sensors (sensor placement) or selecting which sensors to use (sensor selection). The novelty in our approach is that these choices will be guided by the objective directly without the need for additional metrics or criteria.

The rest of this article is organized as follows: Sec. 2 sets up the illustrative design problem through which our methodology will be described and lays out the mathematical machinery that describes the dynamical process underlying the digital thread. Section 3 presents the decision problem for the digital thread-enabled design process and presents the numerical algorithm to solve it. Section 4 presents the results for the example design problem. Finally, Sec. 5 gives concluding remarks.

2 Design Problem Formulation

In this section, we formulate the design problem to be solved using our methodology. Section 2.1 describes the example design scenario through which we convey our methodology, Sec. 2.2 lays down the mathematical description of the problem, and Sec. 2.3 describes the underlying dynamical models that will be used for the overall decision-making process.

2.1 Scenario Description.

The design problem involves finding the optimal fiber angle and component thickness for a composite tow-steered (fiber-steered) planar (2D) component subject to cost and constraint metrics. We consider specifically the design of a chord-wise rib within a wingbox section of a small fixed wing aircraft of wingspan around 15 m, as shown in Fig. 1. The overall geometry has five holes of various radii with curved top and bottom edges.

Fig. 1
Geometry, initial sensor locations, and boundary conditions for the design problem for one loading condition. Transverse shear is directed out of the page and is not shown for clarity.

A challenge to our design task is the presence of uncertain inputs that directly influence the design of the component. In this problem, the uncertain inputs are the loading the component will experience in operation, the material properties of the component, and the specific manufacturing timestamps. Situations where these variables have most relevance occur during the early stage of design when testing and experimentation have not yet taken place or when a brand new product is brought to market where only partial information can be used from other sources due to its novelty.

Large uncertainties in these inputs can lead to conservative designs that can be costly to both manufacture and operate. Thus, the goal is to collect data to reduce these uncertainties to the degree necessary to minimize overall costs. Data can be collected through three different lifecycle paths: material properties can be learned through collection of measurements from coupon level experiments; manufacturing timestamps can be learned from a combination of a bill of materials, timestamps of individual processes, and other related documentation when a prototype or product is manufactured; and operating loads can be learned from strain sensors placed on the component in operation.

Although the task of learning the uncertain input variables through measurements can be addressed with methods from machine learning, and more classically from solution methods for inverse problems, this task in the context of the overall design problem is made complicated by the fact that collecting data comes at a cost. To see this, we illustrate the digital thread for this design problem in Fig. 2. Here, we see that collecting relevant data can require both time and financial resources. Although material properties data can be obtained fairly readily and quickly during the design phase through coupon level experiments, manufacturing data can only be obtained once a prototype is built. In addition, operational data can only be obtained once a prototype or a full component is built, equipped with sensors, and put into operation. Depending on the scale of the component, the manufacturing process can take weeks or months, and putting a full component into operation with proper functionality of all its parts and sensor instrumentation may take much longer. Thus, making cost-effective decisions that reduce uncertainty is critical for product reliability, reducing design process flow time, and minimizing total product expenses across its lifecycle.

Fig. 2
Illustration of digital thread for the example design problem. The product lifecycle stages of interest here are between design and operation.

2.2 Mathematical Formulation.

The key elements of the design problem are broken down into five items: a notion of time or stage, the uncertain input variables (what we would like to learn), the measurement data (what we learn from), the digital thread itself (how to represent what we know), and the decision variables (the decisions and design choices we can make).

Time or Stage. Time is modeled using nondimensional increments that enumerate the sequence of decisions made or to be made up to some finite horizon T. It is expressed as t ∈ {0, 1, …, T}. Physical time is allowed to vary between stages, which will be the case when different decisions take shorter physical times to execute (e.g., performing coupon experiments) or longer physical times to execute (e.g., manufacturing and deploying a component).

Inputs. The Ny inputs (the uncertain quantities to be learned) are described at each stage t as the vector yt. The variable yt is composed of parameters of a finite discretization of the five integrated through-thickness traction terms (in-plane loads per unit length, in-plane moments per unit length, and transverse shear per unit length) on the component boundary for a particular operating condition, material properties (material strengths), and parameters of a manufacturing process model used to compute process times consisting of Nm total steps. The composite structural model is based on the small-displacement Mindlin–Reissner plate formulation [27,28] specialized for composites. For the manufacturing process model, we employ the manufacturing process and associated parameters detailed in the Advanced Composite Cost Estimating Manual (ACCEM) cost model [29,30], which consists of Nm = 56 total steps for our problem.

Measurements. The Nz measurements are described at each stage t as the vector zt. The variable zt is composed of three strain sensor components for Ns sensor locations on the top surface of the component, material properties data determined from coupon level experiments, and timestamps for the Nm total steps of the manufacturing process. Coupon level experiments here involve the static failure of composite test specimens of appropriate loading and geometry to acquire data about material properties and failure strengths used for structural analysis. Note, as illustrated in Fig. 2, the components of zt are taken at different points along the product lifecycle (corresponding to coupon tests, manufacturing, and operation) and may not be fully populated at every stage t.

Decisions. Decision-making will encompass different strategies related to performing coupon tests and to manufacturing and deploying a new design to reduce uncertainty while minimizing costs. Associated with manufacturing and deployment are additional specifications of fiber angle, component thickness, and sensor placement/selection. We designate a high-level decision as ut ∈ {E, D}, where E corresponds to performing coupon tests (experiments) and D corresponds to manufacturing and deployment. For the decision D, we designate the additional design specifications as a geometrical parametrization of the component together with a parametrization for sensor selection.

For this problem, the geometrical parametrization is composed of the coefficients of a finite-dimensional parametrization of fiber angle and through-thickness of the component body, as well as the spatial locations of all Ns sensors. For simplicity, we model ply angle as a continuous function of the component body and do not model more detailed specifications such as individual ply and matrix compositions, additional layers consisting of different ply types, ply thicknesses, number of plies, and ply cutoffs at boundaries.

For sensor selection, we use a soft approach based on activation probabilities [31] to avoid the combinatorial problem associated with an exact binary (on/off) representation. Specifically, the sensor-selection parametrization assigns an activation probability to each sensor, so that the ith sensor is selected (active, on) with probability pi and not selected (inactive, off) with probability 1 − pi during operation. To quantify the effective number of active sensors for this particular parametrization, we use an effective sensor utilization quantity defined as follows:
(1)
Here, the utilization equals 0 when all sensors are always off/inactive and 1 when all sensors are always on/active.
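Since Eq. (1) itself is not reproduced above, here is a minimal Python sketch of the soft selection idea, under my own assumption that effective utilization is measured by the average activation probability; all sensor values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical activation probabilities for Ns = 5 strain sensors.
    activation_prob = np.array([0.9, 0.1, 0.7, 0.3, 1.0])

    # One choice consistent with the text: mean activation probability,
    # which is 0 if all sensors are always off and 1 if all are always on.
    effective_utilization = activation_prob.mean()

    # During a simulated operation, each sensor is independently on or off.
    active = rng.random(activation_prob.size) < activation_prob
    print(effective_utilization, active)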
Digital Thread. The digital thread at stage t reconciles the uncertain parts of the product lifecycle with its certain (or deterministically known) parts as follows:
(2)
where Rt is the representation of the deterministically known parts of the product lifecycle at stage t, which we collectively call resources; p(yt|It, ut) is the representation of the uncertainty within the product lifecycle itself at stage t; and the enclosing information space over the product lifecycle encapsulates all possible uncertain and certain elements across all stages. Provided a criterion of sufficiency is maintained [5], the digital thread can be represented in a number of equivalent ways. In this article, the uncertainty within the product lifecycle is represented using the distribution p(yt|It, ut), which specifies the probability distribution of the uncertain inputs yt given the history of collected data It = {R0, u0, …, ut−1, z0, …, zt−1} and the current decision to be made ut. The resources Rt are represented using a multi-data-type set that contains numerical, categorical, and/or character-like specifications of:
  1. Methods, Tools, and Processes: Enterprise level information and protocols of available methods, tools, and processes across the product lifecycle.

  2. Products: Product-specific design geometry, manufacturing process details, operational and data collection protocols, operation/maintenance/repair history, and lifecycle status.
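Purely as an illustrative data structure (field names are mine, not the paper's), the digital thread state at a stage can be pictured as the deterministic resources just listed plus a sampled representation of the input distribution p(yt|It, ut):

    from dataclasses import dataclass, field
    from typing import Optional
    import numpy as np

    @dataclass
    class DigitalThreadState:
        # Deterministic resources: methods, tools, processes, product data.
        resources: dict = field(default_factory=dict)
        # Monte Carlo samples and weights approximating p(y_t | I_t, u_t).
        belief_samples: Optional[np.ndarray] = None
        belief_weights: Optional[np.ndarray] = None
        stage: int = 0

    state0 = DigitalThreadState(
        resources={"material": "carbon/epoxy", "process": "56-step ACCEM layup"},
        belief_samples=np.random.default_rng(1).normal(size=(1000, 3)),
        belief_weights=np.full(1000, 1.0 / 1000),
        stage=0,
    )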

2.3 Dynamical Process of the Digital Thread.

With the design problem modeled, the dynamics of the digital thread can be described using the transition model
(3)
where the transition function evolves the digital thread from stage t to t + 1 given the decision ut and measurements zt at stage t. Within this transition model, the distribution p(yt|It, ut) is updated using the Bayesian filter
(4)
while the resources are updated according to
(5)
Here, the integration is performed over the uncertain inputs yt, and ν is the measure (or volume) over the uncertain inputs yt. The function Ψt allows for changing resources (adding new elements, updating existing elements, or removing existing elements) at stage t. The Bayesian filter in Eq. (4) models the process of data assimilation from stage t to stage t + 1. First, the likelihood term p(zt|yt, Rt, ut) inside the integral represents the collection of new measurements after a decision is performed, followed by the updating of our knowledge of yt after incorporating those measurements. Next, the term p(yt+1|yt, Rt+1, ut+1) represents inheritance and modification of information carried over from a previous design into the next design (e.g., loads from a past airplane reused and modified for a new airplane with a longer fuselage). Details of the derivation of this Bayesian filter can be found in Ref. [5].

The Bayesian filter in Eq. (4) can be computed using sequential Monte Carlo methods [32] or other linear/nonlinear filtering methods where appropriate. For linear models with Gaussian uncertainty, for example, the Bayesian filter is a variant of the Kalman filter (with the prediction and analysis steps reversed) and can be computed analytically. The resources can be managed and updated through MBE, MBSE, and PLM related tools or software as well as through other data management techniques.
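Because the text notes that the filter can be computed with sequential Monte Carlo, here is a minimal bootstrap-style sketch of one assimilation step; the likelihood and inheritance models are placeholders of my own, not the paper's:

    import numpy as np

    def smc_update(samples, weights, z_obs, predict, likelihood, rng):
        # Reweight samples of y_t by the likelihood p(z_t | y_t, R_t, u_t).
        w = weights * likelihood(z_obs, samples)
        w /= w.sum()
        # Resample to avoid weight degeneracy.
        idx = rng.choice(len(samples), size=len(samples), p=w)
        resampled = samples[idx]
        # Propagate to y_{t+1} through the inheritance/transition model.
        new_samples = predict(resampled, rng)
        new_weights = np.full(len(new_samples), 1.0 / len(new_samples))
        return new_samples, new_weights

    # Toy usage: scalar load parameter y, strain-like measurement z = 2*y + noise.
    rng = np.random.default_rng(0)
    samples = rng.normal(10.0, 3.0, size=5000)
    weights = np.full(samples.size, 1.0 / samples.size)
    lik = lambda z, y: np.exp(-0.5 * ((z - 2.0 * y) / 0.5) ** 2)
    pred = lambda y, rng: y + rng.normal(0.0, 0.1, size=y.shape)
    samples, weights = smc_update(samples, weights, 16.0, pred, lik, rng)
    print(samples.mean())  # posterior mean moves from 10 toward about 8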

3 Decision-Making Using the Digital Thread

In this section, we describe the decision-making problem for the design problem. Section 3.1 describes the specific decisions of interest, Sec. 3.2 describes the mathematical optimization problem associated to those decisions, Sec. 3.3 describes the approximate dynamic programming technique we employ to solve the mathematical optimization, and finally Sec. 3.4 provides a numerical algorithm to implement the approximate dynamic programming technique.

3.1 Decisions for the Problem Scenario.

For this problem scenario, we will produce two generations of a component and are allowed to perform one set of coupon experiments. This corresponds to a three-stage decision problem. In particular, we will be interested in the high-level decision sequences experiment-deploy-deploy, denoted EDD, and deploy-experiment-deploy, denoted DED. For example, the sequence DED means to manufacture and deploy a new design first, followed by performing coupon level experiments second, and finally manufacturing and deploying another new design. The second design benefits from data collected from both coupon experiments and operational measurements of the previous design. These two sequences are distinguished by whether coupon experiments should be performed before any design is ever manufactured (and subsequently deployed), via the sequence EDD, or right after the first design is manufactured and deployed, via the sequence DED.

For each of these two sequences, we are interested in how subsequent data assimilation influences designs and costs of the component over the two generations. This will be explored through a greedy scenario that makes no use of future information, a sensor placement scenario, and a sensor selection scenario. Sensor placement and selection are not combined together for this problem setup to assess the performance of each (location versus activity) independently.

3.2 Decision Statement for the Digital Thread.

The decision statement for the digital thread-enabled design process is given by the following Bellman equation:
(6)
Here, Vt* is the optimal value function or cost-to-go at stage t, the parameter γ ∈ [0, 1] is a discount factor, and the symbol * denotes optimal quantities or functions. The solution to this Bellman equation yields an optimal policy that defines a sequence of functions μt* specifying new designs and changes to the digital thread for each stage t up to the horizon T. Each function μt* of the optimal policy is a feedback policy, i.e., a function of the digital thread state at stage t. The expectation is taken over the uncertain inputs {yt, …, yT} and measurements {zt, …, zT}.
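In generic finite-horizon form (my notation, consistent with the description above but not copied from Eq. (6)), the recursion reads

    V_t^*(\mathcal{D}_t) = \min_{u_t} \, \mathbb{E}\!\left[ f_t(\mathcal{D}_t, u_t, y_t, z_t) + \gamma\, V_{t+1}^*(\mathcal{D}_{t+1}) \right]
    \quad \text{subject to} \quad \mathbb{E}\!\left[ g_t(\mathcal{D}_t, u_t, y_t, z_t) \right] \le 0

with the recursion terminated at the horizon T, where f_t and g_t stand for the stage-wise cost and constraint functions described below.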

The remaining functions in Eq. (6) are the stage-wise cost and constraint functions (the latter denoted gt), with Ng total constraints. For the stage-wise cost model, we employ a linear combination of the manufacturing cost model with cost functions that penalize aggressive fiber angle variation, aggressive component thickness variation, operational costs including strain sensor usage, as well as costs associated with coupon tests [5]. We do not penalize the placement of sensors for this particular setup. For the stage-wise constraint function during design stages, we use the Tsai-Wu failure criterion [33].
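For reference, the standard plane-stress form of the Tsai-Wu criterion (textbook version; the exact implementation used here is not reproduced) declares a ply safe when

    F_1\sigma_1 + F_2\sigma_2 + F_{11}\sigma_1^2 + F_{22}\sigma_2^2 + F_{66}\tau_{12}^2 + 2F_{12}\sigma_1\sigma_2 < 1

with F_1 = 1/X_t - 1/X_c, F_{11} = 1/(X_t X_c), F_2 = 1/Y_t - 1/Y_c, F_{22} = 1/(Y_t Y_c), F_{66} = 1/S^2, and the interaction term F_{12} commonly approximated as -\tfrac{1}{2}\sqrt{F_{11}F_{22}}, where X_t, X_c, Y_t, Y_c, and S are the longitudinal, transverse, and shear strengths.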

In some cases, the distribution of gt may be heavy-tailed, in which case the expected value of the constraint given in Eq. (6) may not produce robust enough designs. In those cases, the expected value of the constraint can be replaced with an appropriate measure of probabilistic risk, e.g., requiring the probability of constraint violation to be below some small ɛ > 0, or other criteria that can also take into consideration the severity of failure associated with the value of gt [34].

For the greedy scenario, the Bellman equation is solved by setting γ = 0. Although data assimilation still occurs through evolution of the digital thread via the transition model from stage t to t + 1, decisions determined from the Bellman equation at stage t with γ = 0 do not take into consideration the benefits or costs of future data assimilation because the value of future information that comes through Vt+1 at stage t is canceled out. For the sensor placement scenario, γ = 1 and the sensor locations are allowed to vary while the sensor selection probabilities are all fixed at one. For sensor selection, γ = 1 and the sensor selection probabilities are allowed to vary while the sensor locations are fixed. Note that structural tailoring for the sensor selection and sensor placement setups also takes place by design of the Bellman equation because the future value function Vt+1 is a function of ut. This dependency enables fiber angle and component thickness to also control the effectiveness of subsequent data collection in addition to sensor placement or sensor selection.

In total, there are six policies to compare: the greedy, sensor placement, and sensor selection scenarios for both the EDD and DED high-level decision sequences.

3.3 Solving the Decision Problem Using Approximate Dynamic Programming.

We solve the decision problem by first rewriting the Bellman equation given in Eq. (6) to produce the following equivalent, but notationally simpler statement:
(7)
where
(8)
The function St is the expected value of the forward (t + 1) optimal value function, Ot is the expected value of the stage-wise cost function at stage t, and Gt is the expected value of the stage-wise constraint function at stage t. Next, we use a combination of Monte Carlo sampling with policy and function approximation [35] to solve the optimization problem numerically. The functions Ot and Gt can be computed directly using Monte Carlo sampling methods. The remaining terms that need to be determined, namely μt, Vt, and St, will be approximated and updated using policy and function approximation. However, to apply the methods of function approximation to our problem, μt, Vt, and St need to first have explicit parametrized forms. The parametrized forms we use are as follows:
(9)
where ϕt is a vector of Mp basis functions at stage t, At is a matrix of basis coefficients for the policy function at stage t, Bt is a matrix of basis coefficients for the value function at stage t, φt is a vector of Mv basis functions at stage t, and Ct is a matrix of basis coefficients for the expected value of the forward value function at stage t. Two additional variables are used for setting initial conditions (e.g., initial component fiber angle, component thickness, and sensor locations).

The symbol * is dropped because we are now approximating the optimal functions with imposed structure, which may lose optimality. In addition, the expressions for Vt and St are exponentiated to ensure nonnegativity of the value function. For the implementation, the basis functions ϕt and φt are constructed using radial basis functions (Gaussian radial basis functions in particular) with inputs appropriately scaled to lie within the bi-unit hypercube of appropriate dimension. These basis functions are not direct functions of the digital thread, but functions of numerical features of the digital thread such as means, variances, and/or other statistical metrics of p(yt|It, ut), as well as other relevant numerical quantities from Rt.
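As a small illustration of these Gaussian radial-basis-function features (the centers, width, and the particular digital thread features below are assumptions, not values from the paper):

    import numpy as np

    def rbf_features(x, centers, width):
        # Gaussian RBF feature vector for an input already scaled to [-1, 1]^d.
        d2 = np.sum((centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    centers = rng.uniform(-1.0, 1.0, size=(25, 2))  # 25 centers in a 2D feature space

    # Example digital thread features: scaled mean and standard deviation of a load.
    x = np.array([0.2, -0.5])
    phi = rbf_features(x, centers, width=0.4)

    # Linear-in-features value function, exponentiated for nonnegativity as in the text.
    B = rng.normal(scale=0.1, size=25)
    value = np.exp(phi @ B)
    print(value)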

Next, At, Bt, and Ct are trained using a combination of least squares and approximate solutions of the Bellman equation. However, a direct application of least squares is challenging because we need to have a means of generating samples to train At, Bt, and Ct in the first place. We also do not know in advance how many samples are sufficient to yield good estimates of the policy and value functions, so we would like to have flexibility of incorporating new samples without much recalculation of the least squares formulas. Furthermore, performing the inverses in these least squares formulas can become computationally expensive for large dimensions of At, Bt, and Ct. Finally, we would like to have an incremental update rule to the approximation of the policy where new samples are obtained through exploration using the latest approximation of the policy.

These issues can be addressed by utilizing a recursive update of the least squares formulas, known as recursive least squares (RLS) [36]. Associated with the RLS formulation is a class of recursive approximate dynamic programming techniques given in Refs. [37] and [38] that we parallel. Details of the derivations for the RLS equations used in this article can be found in Ref. [5]. The RLS equations take the form:
(10)
Here, j is the update index that increases by one when a new data point is added, with the updates evaluated at some digital thread state and decision. The target terms for the policy and value-function updates are determined from the minimization:
(11)
while the corresponding target for St is given by
(12)
The decision used in this update is generated by sampling in the neighborhood of the current iterate of the policy or of the decision iterates during optimization of Eq. (11). Note that this decision is different from the policy decision itself, to allow additional flexibility on when and where to update St. This is because, unlike μt and Vt, St is a function of both the digital thread and the decision and thus requires a different sampling approach to capture different values of the decision at a given state of the digital thread.
At j = 0, the symmetric matrix in this recursion is the regularization term in the original least squares formulas for At and Bt. It is updated using the Sherman–Morrison matrix identity:
(13)
An identical formula holds for the symmetric matrix associated with Ct by swapping out the corresponding basis vectors and coefficient matrices in Eq. (13).
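A minimal recursive-least-squares update using the Sherman-Morrison identity, in generic form (the variable names and regularization scaling are mine, for illustration only):

    import numpy as np

    class RecursiveLeastSquares:
        # Fit theta in y ~ theta^T phi, one sample at a time, without re-solving
        # the full least-squares problem at every update.
        def __init__(self, n_features, reg_scale=1.0):
            self.theta = np.zeros(n_features)
            # P is the inverse of the regularized Gram matrix; starts as (1/reg) * I.
            self.P = np.eye(n_features) / reg_scale

        def update(self, phi, y):
            # Sherman-Morrison: rank-one update of the inverse Gram matrix.
            Pphi = self.P @ phi
            self.P -= np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)
            # Gain times prediction error updates the coefficients.
            gain = self.P @ phi
            self.theta += gain * (y - phi @ self.theta)
            return self.theta

    # Toy check: recover theta = [2, -1] from noisy linear data.
    rng = np.random.default_rng(0)
    rls = RecursiveLeastSquares(n_features=2, reg_scale=1e-3)
    for _ in range(500):
        x = rng.normal(size=2)
        rls.update(x, 2.0 * x[0] - 1.0 * x[1] + rng.normal(scale=0.01))
    print(rls.theta)  # approximately [2, -1]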
The digital thread at stage t + 1 is determined through forward simulation with the latest iteration of the policy:
(14)
while the digital thread at stage t = 0 is sampled from an initial distribution over digital thread states. Sampling the digital thread at stage t = 0 involves sampling different distributions for the uncertain inputs, which can be performed using a suitable hyperprior distribution or through direct sampling of the parameters of their parametrization, if applicable. Sampling R0 involves direct sampling from the set of allowable values (discrete and/or continuous) that its elements can take.

3.4 Numerical Implementation to Solve the Multistage Decision Problem.

The numerical implementation for the algorithm presented in Sec. 3.3 is divided between Algorithms 1-3. To initialize and train a policy, first initializePolicy is called and then trainPolicy is called however many times is necessary until a convergence threshold on the policy or value function is achieved [38]. Details of the subroutines are given in the following.

Simulate and Initialize Policy. Simulating a policy is described in simulatePolicy. Here, a given policy π0 along with a digital thread state at stage s are used to generate the future evolution of the digital thread from stage s onward. The output is the trajectory (i.e., the states) of the digital thread for t ∈ {s, …, T}. Note that the transition model outputs the digital thread at stage t + 1 from stage t, so stage T of the digital thread is computed from stage T − 1; thus the for loop is truncated at stage T − 1.

Information about product lifecycle elements (statistics of inputs, resources, products in operation, and their digital twins) at some t ∈ {s, …, T} within this trajectory is extracted through postprocessing of the appropriate digital thread state. Measurements are synthetically generated if no physical measurements are available during offline training. Online measurements come directly from test data or the actual physical systems.

To initialize a policy from scratch, initializePolicy is called, taking as input the high-level decision sequence and initial condition parameters. Here, the parameters ηt, κt > 0 represent the scaling factors of the initial least squares regularization term (in this implementation, the identity matrix of appropriate size).

Simulate and initialize policy

Algorithm 1

1: procedure simulatePolicy(…)
2:   for … do
3:     …
       ⊳ Obtain measurement through forward simulation or from physical system
4:     …
5:     …
6:   return …

1: procedure initializePolicy(…)
2:   for … do
       ⊳ Initialize values at each stage t
3:     …
4:     …
5:     …
6:     … with …
7:     … with …
       ⊳ Update parameters of … with initialized values
8:     …
9:   …
10:  return …
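A rough Python rendering of the simulatePolicy loop just described (the policy, measurement, and transition functions are stand-ins, not the paper's models):

    def simulate_policy(policy, state, s, T, transition, measure):
        # Forward-simulate the digital thread from stage s; stage T is produced
        # from stage T - 1, so the loop stops at T - 1 as in Algorithm 1.
        trajectory = [state]
        for t in range(s, T):
            decision = policy(state, t)        # feedback policy on the current state
            z = measure(state, decision, t)    # synthetic or physical measurement
            state = transition(state, decision, z, t)
            trajectory.append(state)
        return trajectory

    # Toy usage with trivial stand-in models.
    traj = simulate_policy(
        policy=lambda st, t: "E" if t == 0 else "D",
        state={"stage": 0},
        s=0, T=3,
        transition=lambda st, u, z, t: {"stage": t + 1, "last_decision": u},
        measure=lambda st, u, t: 0.0,
    )
    print(len(traj))  # 4 states, for stages 0 through 3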

Train policy

Algorithm 2

1: procedure trainPolicy(…)
     ⊳ Build a skeleton trajectory to perform updates
2:   Sample …
3:   simulatePolicy(…)
     ⊳ Update terms going backwards from …
4:   for … do
5:     …
       ⊳ Assign cost and constraint functions for deterministic optimizer at stage …
6:     costFunction(…)
7:     constraintFunction(…)
       ⊳ Run deterministic optimizer
8:     optimizer(…)
       ⊳ Update … and … using RLS update rule
9:     …
10:    …
11:    …
12:    …
13:    …
14:  return …

Cost and constraint functions

Algorithm 3

1: procedure costFunction(…)
2:   if … and … then
3:     for … do
4:       Sample … in a neighborhood of …
         ⊳ Obtain measurement through forward simulation
5:       …
         ⊳ Sample forward value function
6:       …
         ⊳ Update … using RLS update rule
7:       …
8:       …
9:       …
10:      …
11:      …
       ⊳ Construct forward value function
12:    …
13:  else
       ⊳ … (and hence …) is zero
14:    …
       ⊳ Evaluate … using Monte Carlo
15:    … where …
       ⊳ Construct Bellman equation
16:    …
17:  return …

1: procedure constraintFunction(…)
     ⊳ Evaluate … using Monte Carlo
2:   … where …
     return …

Train Policy. In trainPolicy, a digital thread state is first sampled, followed by generation of a digital thread trajectory using the input policy π0. Using this trajectory as a skeleton, optimization is performed backward from each digital thread state in the trajectory. A call to a deterministic optimizer is made in optimizer, which takes as arguments an initial condition, a cost function, and a constraint function. The deterministic optimizer can be any appropriate off-the-shelf optimizer. For this implementation, we use the trust-region method in MATLAB's fminunc and impose inequality constraints using penalty functions.

The optimizer can be run for a fixed number of iterations per call or until a suitable level of convergence is achieved. During or after running the optimizer, the parameters {At, Bt} of μt and Vt are updated before moving to stage t − 1. For the implementation, the update index j has been omitted since we only need to keep track of the current values of all relevant objects at any particular point in the routines. The procedure trainPolicy updates all parameters of the policy once over all stages, allowing flexibility for sequential updating in the future when necessary.

At lines 2–14 in costFunction of Algorithm 3, Ms samples of the decision near the current optimization iterate are generated and used to update the estimate of St. St is updated adaptively at the start of the cost function so that new samples are taken near every evaluation point of the optimization. The terms Gt and Ot are computed using Monte Carlo sampling; the same samples can be used for both Gt and Ot to save computation, or across subsequent iterations per optimization call if an expectation-maximization-related strategy is used. The policy π0 in the call to the cost function is passed "by reference," so that updates to any part of π0 are made immediately available at all levels and in all Algorithms.
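A compact sketch of the Monte Carlo estimates of the expected stage cost and constraint at one candidate decision, reusing the same input samples for both as suggested above (all models and numbers are placeholders):

    import numpy as np

    def expected_cost_and_constraint(decision, y_samples, stage_cost, stage_constraint):
        # Estimate O_t and G_t at one decision from the same set of input samples.
        costs = np.array([stage_cost(decision, y) for y in y_samples])
        cons = np.array([stage_constraint(decision, y) for y in y_samples])
        return costs.mean(), cons.mean()

    # Toy usage: thickness decision, uncertain strength y; g <= 0 means safe.
    rng = np.random.default_rng(0)
    y_samples = rng.normal(400.0, 40.0, size=2000)             # hypothetical strength, MPa
    cost = lambda thickness, y: 10.0 * thickness               # heavier parts cost more
    constraint = lambda thickness, y: 300.0 / thickness - y    # stress minus strength
    O, G = expected_cost_and_constraint(1.0, y_samples, cost, constraint)
    print(O, G)  # G < 0 on average means the constraint holds in expectation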

4 Results

In this section, we provide computational results for the example problem using the numerical algorithm given in Sec. 3.3. Data assimilation and uncertainty reduction trends are given in Sec. 4.1. Select component design, sensor placement, and sensor selection results are given in Sec. 4.2. Comparison of total costs across all policies is given in Sec. 4.3. Finally, a discussion of computational cost and complexity is given in Sec. 4.4.

4.1 Data Assimilation and Uncertainty Reduction.

Typical data assimilation and uncertainty reduction as a result of collecting measurements throughout the various lifecycle paths are shown for uncertain loads in Fig. 3, material properties in Fig. 4, and manufacturing process times in Fig. 5.

Fig. 3
Typical data assimilation and uncertainty reduction for the uncertain input loads after collecting strain sensor measurements during operation. Here, σ stands for standard deviation. Loading components are given as a function of a parameter s ∈ [0, 1] that wraps around the outer boundary of the component starting at the center of the far right edge of the component. (Color version online.)
Fig. 4
Typical data assimilation and uncertainty reduction for the uncertain material properties (material strengths) after collecting data from coupon failure tests (Color version online.)
Fig. 5
Typical data assimilation and uncertainty reduction for manufacturing times after collecting timestamps during manufacturing, ordered by decreasing step times (bottom to top) of the 20 longest processes. Here, σ stands for standard deviation. Numbers in parenthesis correspond to the step number in the manufacturing process. (Color version online.)

In Fig. 3, the loads used for the first design are compared to the loads estimated from operational data of the first design after manufacturing and deployment. These estimated loads are then used for the final design. The mean and two standard deviations of the variance for the initial estimate of loads (before any data assimilation) are shown with the red dashed-dotted line and red shading, respectively. Similarly, the mean and two standard deviations of the variance for the estimate of the loads after a design is deployed are shown with the blue dashed line and blue shading, respectively. The actual loads to be learned are shown with the thick magenta line. In this figure, we see that the large shifts in the mean for all loading components and variance reduction of the moments and shear after data assimilation indicate that the design of the next generation can be built lighter (and therefore at a lower cost) than the previous generation. This is because the loads for this particular scenario are learned to be of lower magnitude than what was used for the design of the previous generation. However, to have obtained this knowledge first, we had to manufacture and deploy first, foregoing any benefits provided by performing coupon experiments sooner.

In Fig. 4, the estimates of material strength properties known initially are compared to the estimates after learning from coupon level experiments. The probability density function for the initial estimate of strength properties is given by the red-shaded curve. Similarly, the probability density function for the strength properties after performing coupon level experiments is given by the blue-shaded curve. The actual strength properties to be learned are shown with the thick magenta vertical line. Here, we see that the large shifts in the mean and variance reduction of the strength properties after performing the coupon experiments indicate that the next design to be deployed will benefit from higher and more confident material strength property estimates and therefore be lighter and of lower cost. Of course, to have obtained this knowledge first, we had to forgo deploying a design earlier and the potential benefits it could have provided.
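The mean shifts and variance reductions in Figs. 3 and 4 are the characteristic signature of a conjugate Gaussian (Kalman-type) update. The toy scalar example below, with invented numbers, is only a stand-in for the paper's Bayesian filter of Eq. (4); it shows a biased strength estimate being pulled toward the truth and tightened as coupon measurements arrive.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented numbers: prior on a strength property that is 10% off the truth.
truth      = 1000.0          # actual (unknown) strength
prior_mean = 1100.0          # initial estimate, biased high
prior_var  = 150.0 ** 2
meas_noise = 80.0 ** 2       # coupon test scatter

mean, var = prior_mean, prior_var
for z in truth + np.sqrt(meas_noise) * rng.standard_normal(10):   # 10 coupon tests
    # Scalar Kalman/conjugate-Gaussian update: precision adds, mean is
    # pulled toward the measurement in proportion to the Kalman gain.
    gain = var / (var + meas_noise)
    mean = mean + gain * (z - mean)
    var  = (1.0 - gain) * var

print(f"posterior mean {mean:.1f} (truth {truth}), std {np.sqrt(var):.1f}")
```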

In Fig. 5, the estimates of manufacturing timestamps known initially are compared to the estimates after learning from data collected during the manufacturing of a component. The mean and two-standard-deviation band for the initial and final estimates of the timestamps are shown with the blue-shaded bars and yellow-shaded bars, respectively. The actual timestamps to be learned are shown with the red-shaded bars. From this figure, we see that, in addition to achieving better estimates of process times, only a small number of process steps contribute significantly to the total manufacturing time.

4.2 Component Design and Sensor Placement/Selection.

We highlight the first and final designs produced through the greedy, sensor placement, and sensor selection policies for the EDD decision sequence in Fig. 6 for component thickness and sensor location. For this example problem, optimized design geometries tend to be thicker around the holes and near the left and right of the component, and regions directly below the holes tend to thin out. Optimized geometries also tend to favor modifying thickness over modifying fiber angle to minimize costs. As a result, the fiber angle tends to be similar across all policies for the final design. Typical fiber directions for the final design are shown in Fig. 7. Here, fiber steering tends to be more prominent near the surface of the component as a result of the structure being heavily driven by out-of-plane loading for this problem.

Fig. 6
Thicknesses and sensor placement/selection for the greedy, sensor placement, and sensor selection policies for the EDD decision sequence. Sensor locations are given by “+” markers, while their initial locations (before optimization) are shown in the grayed out circle markers. Sensor activation probabilities are represented using a gray shading of the “+” markers, lighter for values near zero and darker for values near one. Strain sensor data are only collected for the first design.
Fig. 7
Typical optimized fiber direction for the final design across all policies. Fiber direction is shown in a scaled vertical coordinate where a factor of 0.5 of the thickness corresponds to the top surface, a factor of 0 to the mid-plane, and a factor of –0.5 to the bottom surface. Arrows designate the local fiber zeroth direction.

4.3 Comparison of Total Costs.

A comparison of mean total costs for all policies is shown in Fig. 8. Even though the final designs produced by the EDD and DED policies both benefit from operational and coupon-level experimental data, the costs for each policy are accumulated differently. As a result, we see that the EDD policies achieve lower total costs than the DED policies. The best strategy from the given initial state of the digital thread is to first perform experiments to drive down the uncertainty of the material strength properties and second to manufacture and deploy that design to learn about the uncertain loading conditions from data collected through operation. Interestingly, we see that manufacturing and deploying first leads to higher overall costs, because the lack of data about the material strength properties earlier results in heavier, more conservative designs. In addition, the corresponding operational costs are higher and accrued over a longer time frame. Overall, the results show that material strength properties have a larger impact on the overall costs than do the input loads, despite the fact that the means of the material strength properties were only 10% away from the true values compared to 50% for the input loads.

Fig. 8
Comparison of mean costs for all policies. Mean costs are normalized with respect to the total mean cost of the EDD—greedy policy. Effective sensor utilization is reported for policies with sensor selection.

From Fig. 8, we see that there is no strong benefit of sensor placement in this example problem setting, even when the locations of the sensors are not penalized. As long as sensors are initially well dispersed on the top surface of the component, the cost improvement from moving the sensors around is small. However, in our studies, we observed that sensor placement is typically more aggressive in the DED case than in the EDD case. This is because in the DED case, not learning the material properties before manufacturing the first design means that the first design will be thicker (and thus of higher cost) compared to the EDD case. Consequently, more effort is put into placement of the sensors to recover the total cost.

Although the changes in total accrued cost are low (largely due to our particular choice of sensor selection costs relative to total design costs), Fig. 8 shows that the optimized sensor selection policies have an effective sensor utilization of less than . That is, only  of all sensors need to be effectively active to recover sufficient data to minimize costs. To understand why, we compare the different load components in Fig. 3. For this example setup, the moments are the best-resolved loading components (lower variance and better mean estimates), while the normal and tangent loads are not resolved as effectively. From the standpoint of structural analysis, this means that the generated designs are robust to variations in the normal and tangent loading, while dominant structural sizing depends on the resolved moments and possibly the shears. Given the low effective sensor utilization values, these moments can be resolved effectively without heavy use of the available sensors. Interestingly, because sensor utilization is penalized, the optimized sensor selection policy for the DED case found it more advantageous to drive the sensor utilization to less than , forfeiting some of the structural efficiency of the next design, in order to preserve lower overall costs.

4.4 Computational Cost and Complexity.

Computational cost depends on the number of design decision variables, the number of uncertain input variables, the number of measurements, and the mesh discretization for the finite element model and cost calculation. For this example problem, each of the two design-based stages consisted of 3105 design variables (parameters for fiber steering, thickness, and sensor location). The high number of design variables arises from using a direct parametrization of the finite element model. The number of uncertain input variables is 1466 per stage (uncertain loads, material properties, and manufacturing parameters). The number of measurements is 187 per stage (strain measurements, material properties, and manufacturing timestamps). The policy and function approximation uses 2000 basis functions, with input dimensions on the order of the number of uncertain inputs for μt and Vt and on the order of the number of uncertain inputs and design decision variables for St. These approximations were updated one data point at a time using the rank-1 update of the RLS formulations (i.e., no matrix inversions). The process models for the uncertain input variables per stage are taken to be linear with Gaussian noise; thus, the Bayesian filter in Eq. (4) is calculated analytically using a variant of the Kalman filter (with the prediction and analysis steps reversed). This requires computing matrix inverses of dimension equal to the number of measurements per stage. As a result, samples for Monte Carlo estimation could be drawn easily using appropriately scaled normal distributions at each stage. It was determined that 50–100 Monte Carlo samples achieve estimates of Rt and Gt with less than 1% error (the point of reference being taken at 10,000 samples), a consequence of these terms having small variances for this problem. Optimization was accelerated through the use of gradient information computed using adjoint solves of the finite element model and analytical derivatives of the policy and function approximation forms. A detailed description of all modeled terms can be found in Ref. [5]. In total, 100 policy updates for all six policies took on the order of 8 h on a Windows 10 64-bit machine with 16 GB of RAM, where the majority of the time was spent solving the finite element model. Results were run for 2000 policy updates, although convergence of the policy to within 2% of final values was achievable within 200 updates.
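A generic rank-1 recursive least squares step of the kind described above (not the paper's exact formulation, which also carries the regularization terms Ht, Jt, ηt, and κt) can be sketched as follows; no matrix inverse is ever formed.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One rank-1 recursive least squares step.

    theta : current coefficient vector (M,)
    P     : current inverse information matrix (M, M)
    phi   : basis-function vector evaluated at the new point (M,)
    y     : new scalar target (e.g., an observed cost-to-go sample)
    lam   : forgetting factor (1.0 = ordinary RLS)
    """
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)           # Kalman-style gain, no matrix inverse
    theta = theta + gain * (y - phi @ theta)   # correct by the prediction error
    P = (P - np.outer(gain, Pphi)) / lam       # Sherman–Morrison rank-1 downdate
    return theta, P

# Tiny demo with 5 basis functions and noisy linear targets.
rng = np.random.default_rng(2)
M, w_true = 5, np.array([1.0, -2.0, 0.5, 3.0, 0.0])
theta, P = np.zeros(M), 1e3 * np.eye(M)
for _ in range(200):
    phi = rng.standard_normal(M)
    theta, P = rls_update(theta, P, phi, phi @ w_true + 0.01 * rng.standard_normal())
print(np.round(theta, 3))   # approaches w_true
```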

As with all methods involving quantification of uncertainty, the approaches proposed here may become computationally challenging for higher-dimensional problems. For instance, the main hurdle for scalability of the stage-wise optimization (line 8 in trainPolicy of Algorithm 2) is the number of design variables. One way to reduce the number of design variables is a reduced geometrical representation, i.e., not using a direct parametrization of a detailed finite element model but instead using simplified geometry or another low-dimensional representation of the component. Reducing the number of design variables for a given component then allows scaling up to multiple components more readily. Supplying derivative information for the stage-wise optimization is also beneficial. In addition, rather than running the detailed finite element model during filter updates (line 5 in simulatePolicy of Algorithm 1), measurement generation via forward simulation (line 4 in simulatePolicy of Algorithm 1 and line 5 in costFunction of Algorithm 3), and stage-wise optimization, a projection-based reduced-order model can be employed instead [39,40]. For the filter updates themselves, exploiting independence of the uncertain input variables can alleviate high dimensionality, allowing one to work with smaller transition models to propagate uncertain quantities. For instance, the loads on the component during operation are physically independent of the cost of manufacturing that component, given the design; therefore, filtering for loads and for manufacturing parameters can be done independently. For nonlinear models, the sequential importance sampling involved in the filter updates, as well as the sampling for Rt and Gt, can be accelerated through multifidelity Monte Carlo sampling techniques, where inexpensive (but less accurate) models are used in conjunction with expensive (but more accurate) models to reduce the total number of expensive evaluations [41,42]. Further acceleration for sampling can be achieved through parallel computation.
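One of the acceleration ideas mentioned above, multifidelity Monte Carlo [41,42], pairs many cheap low-fidelity evaluations with a few expensive high-fidelity ones through a control variate. A minimal two-model sketch, with made-up surrogate functions standing in for the finite element solves, might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(x):  return np.sin(x) + 0.05 * x**2       # "expensive" model (stand-in)
def f_lo(x):  return np.sin(x)                     # "cheap" correlated surrogate

n_hi, n_lo = 50, 5000                              # few expensive, many cheap samples
x_hi = rng.normal(size=n_hi)
x_lo = rng.normal(size=n_lo)

y_hi, y_lo_on_hi = f_hi(x_hi), f_lo(x_hi)
# Sample-estimated control-variate weight (optimal alpha = cov/var).
alpha = np.cov(y_hi, y_lo_on_hi)[0, 1] / np.var(y_lo_on_hi, ddof=1)

# High-fidelity mean corrected by the difference between the cheap-model
# means on the large and small sample sets.
est = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_on_hi.mean())
print(est, f_hi(rng.normal(size=200000)).mean())   # compare to a brute-force reference
```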

5 Conclusions

In this article, we presented a methodology to enable decision-making under uncertainty for a digital thread-enabled design process and applied it to a relevant structural composites design problem. This methodology enabled assessing a variety of decision-making strategies involving sensor placement, sensor selection, and structural tailoring as well as high-level decisions involving experimentation or manufacturing and deploying. Implementation of an approximate dynamic programming algorithm that utilizes a combination of function and policy approximation coupled with recursive least squares was also detailed.

In addition to learning the sensors' limitations in resolving the various uncertain loading inputs in the example, our method recognized that designs can be made robust to normal and tangent loadings when major sizing changes are driven predominantly by moment or transverse shear loading. Simultaneously, our method found that it could significantly reduce the effective number of active sensors needed to sense the dominant loading components efficiently, while forfeiting precise learning of the other loading components. This translated to reduced costs, since fewer effective sensors are needed to make cost-efficient design decisions. In addition, our method showed that sensor placement has only a small impact on the overall costs for this example problem setting.

Overall, our design methodology showcases how data-driven design decisions change based on the sources of uncertainty and the sequence in which we attempt to reduce them. Furthermore, both limitations and advantages of resources can be exploited to drive costs down. Our methodology is able to identify the order in which uncertainty must be reduced to achieve the lowest costs. The resulting policies output realizable design geometries that can be assessed with further detailed analysis. The novelty in our method is that sensor placement and selection can be determined directly (and to the degree necessary) from total accrued cost without requiring specification of additional metrics.

Note that our solution method yields a policy, i.e., a function of the digital thread. For the example problem, we tested this policy on just one set of inputs unknown to the policy. However, this same policy can be evaluated for other input scenarios, provided the inputs and initial conditions of these other scenarios are within some reasonable neighborhood of where the policy was trained. Furthermore, the computational effort to train a policy is divided between an offline step (initialization and training of the policy) and inexpensive online evaluations for prediction or subsequent updates.

In our cost modeling for the example design problem, experimental coupon failure test costs are small with respect to manufacturing costs. This may not be the case for larger scale static/fatigue testing of assemblies or systems. Nevertheless, our method is adaptable through appropriate modification of the stage-wise costs and constraints. In addition, input loading variances are relatively high with respect to the mean for this example, so designs generated by the optimized policies reflect robustness to a wide range of possible uncertain inputs. Reducing input variance can lead to more specialized designs (more specific tailoring of fiber angles and thickness) through cost savings obtained by limiting uncertain inputs that are less likely to occur.

Future work will look into multiple loading/operating conditions and failure modes, other recurrent design applications, expanding lifecycle costs to include inspections, maintenance, and repair, as well as applying the methodology to assemblies or larger systems consisting of multiple components and/or assemblies. In the latter, further development may likely need multilevel and sparse representations, reduced parametrization of individual component details, as well as employing reduced-order modeling of physics-based simulations, to manage complexity and retain computational performance.

Acknowledgment

The work was supported in part by AFOSR grant FA9550-16-1-0108 under the Dynamic Data Driven Application System Program (Program Manager Dr. E. Blasch); The MIT-SUTD International Design Center; and the United States Department of Energy Office of Advanced Scientific Computing Research (ASCR) grants DE-FG02-08ER2585 and DE-SC0009297, as part of the DiaMonD Multifaceted Mathematics Integrated Capability Center (program manager Dr. S. Lee).

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Obtaining Analysis Code

Code to generate all analysis data and figures is available online at https://github.com/victornsi/DT-AVT. All code is written in MATLAB R2018b on a Windows 10 64-bit machine.

Nomenclature

Superscripts/Subscripts

j = iteration index of terms in numerical algorithm
t = time or stage designating sequence of decisions
* = designation for optimal quantities or functions

Statistical Operators

p = probability distribution
= expected value
= probability measure

Time or Stage

= set of time or stage indices in consideration
T = final time index or horizon length from t = 0

Uncertain Inputs

yt = uncertain inputs at stage t
Ny = total number of uncertain inputs
ν = measure or volume over uncertain inputs

Measurements

zt = measurements across product lifecycle at stage t
Nm = total number of steps in manufacturing process
Ns = total number of strain sensors on component
Nz = total number of measurements

Decisions

ut = decisions at stage t
Nu = total number of decision variables
Np = total number of parameters to define fiber direction, component thickness, and sensor locations
= effective sensor utilization during sensor selection for design at stage t
= high-level decision between performing coupon testing or manufacturing and deploying a design at stage t
= fiber steering, component thickness, and sensor placement parameters for design at stage t
= sensor selection probabilities for design at stage t

Digital Thread

= information space over product lifecycle
= digital thread at stage t
= history of collected data at stage t
= representation of uncertainty in the product lifecycle at stage t
Rt = resources related to tools, methods, and processes in the product lifecycle at stage t
Φt = digital thread transition model at stage t
Ψt = resource transition model at stage t

Multistage Decision Statement

gt = stage-wise constraint function at stage t
rt = stage-wise cost function at stage t
Ng = total number of stage-wise constraints for design
= optimal value function at stage t
γ = discount factor
= optimal policy stage function at stage t
= optimal policy at stage t

Numerical Algorithm

at = vector used for setting initial conditions for the policy function at stage t
bt = scalar used for setting initial conditions for the value function at stage t
ct = scalar used for setting initial conditions for St at stage t
At = matrix of basis coefficients for the policy at stage t
Bt = matrix of basis coefficients for the value function at stage t
Ct = matrix of basis coefficients for St at stage t
Gt = expected value of the stage-wise constraint function at stage t
Ht = incremental regularization matrix used in the recursive least squares update for the policy and value function at stage t
Jt = incremental regularization matrix used in the recursive least squares update for St at stage t
Mp = total number of basis functions used for the parametrization of the policy and value function
Mv = total number of basis functions used for the parametrization of St at stage t
Ot = expected value of the stage-wise cost function at stage t
St = expected value of the forward value function at stage t + 1
ηt = least squares regularization scaling term for the policy and value function at stage t
κt = least squares regularization scaling term for St at stage t
ϕt = vector of basis functions for the value function and policy at stage t
φt = vector of basis functions for St at stage t

References

1. US Air Force, 2013, "Global Horizons Final Report: United States Air Force Global Science and Technology Vision – AF/ST TR 13-01," United States Air Force.
2. Kraft, E., 2015, "Hpcmp Create-AV and the Air Force Digital Thread," AIAA SciTech 2015, 53rd AIAA Aerospace Sciences Meeting, Kissimmee, FL, Jan. 5–9, pp. 1–13. 10.2514/6.2015-0042
3. West, T., and Pyster, A., 2015, "Untangling the Digital Thread: The Challenge and Promise of Model-Based Engineering in Defense Acquisition," INSIGHT, 18(2), pp. 45–55. 10.1002/inst.12022
4. Singh, V., and Willcox, K., 2018, "Engineering Design With Digital Thread," AIAA J., 56(11), pp. 4515–4528. 10.2514/1.J057255
5. Singh, V., 2019, "Towards a Feedback Design Process Using Digital Thread," Ph.D. thesis, MIT, Cambridge, MA.
6. Stark, J., 2015, Product Lifecycle Management, 3rd ed., Vol. 1, Springer International, New York.
7. Wymore, A., 1993, Model-Based Systems Engineering, CRC Press, Boca Raton, FL.
8. Ramos, A., Ferreira, J., and Barceló, J., 2012, "Model-Based Systems Engineering: An Emerging Approach for Modern Systems," IEEE Trans. Syst., Man, Cyber., Part C (Appl. Rev.), 42(1), pp. 101–111. 10.1109/TSMCC.2011.2106495
9. Estefan, J., 2009, "MBSE Methodology Survey," INSIGHT, 12(4), pp. 16–18. 10.1002/inst.200912416
10. Cloutier, R., 2009, "Introduction to This Special Edition on Model-Based Systems Engineering," INSIGHT, 12(4), pp. 7–8. 10.1002/inst.20091247
11. Loper, M., ed., 2015, Modeling and Simulation in the Systems Engineering Life Cycle: Core Concepts and Accompanying Lectures, Springer-Verlag, London.
12. Mies, D., Marsden, W., and Warde, S., 2016, "Overview of Additive Manufacturing Informatics: 'A Digital Thread'," Int. Mater. Manufact. Innovat., 5(1), pp. 114–142. 10.1186/s40192-016-0050-7
13. Mahan, T., Meisel, N., McComb, C., and Menold, J., 2019, "Pulling at the Digital Thread: Exploring the Tolerance Stack Up Between Automatic Procedures and Expert Strategies in Scan to Print Processes," ASME J. Mech. Des., 141(2), p. 021701. 10.1115/1.4041927
14. Lee, Y., and Fong, Z., 2020, "Study on Building Digital-Twin of Face-Milled Hypoid Gear From Measured Tooth Surface Topographical Data," ASME J. Mech. Des., 142(11), p. 113401. 10.1115/1.4046915
15. Gharbi, A., Sarojini, D., Kallou, E., Harper, D., Petitgenet, V., Rancourt, D., Briceno, S., and Mavris, D., 2017, "Standd: A Single Digital Thread Approach to Detailed Design," AIAA SciTech 2017, 55th AIAA Aerospace Sciences Meeting, Grapevine, TX, Jan. 9–13, pp. 1–13.
16. Thomsen, B., Kokkolaras, M., Månsson, T., and Isaksson, O., 2017, "Quantitative Assessment of the Impact of Alternative Manufacturing Methods on Aeroengine Component Lifing Decisions," ASME J. Mech. Des., 139(2), p. 021401. 10.1115/1.4034883
17. Smith, R., 2013, Uncertainty Quantification: Theory, Implementation, and Applications, SIAM.
18. Zaman, K., McDonald, M., and Mahadevan, S., 2011, "Probabilistic Framework for Uncertainty Propagation With Both Probabilistic and Interval Variables," ASME J. Mech. Des., 133(2), p. 021010. 10.1115/1.4002720
19. Xi, Z., 2019, "Model-Based Reliability Analysis With Both Model Uncertainty and Parameter Uncertainty," ASME J. Mech. Des., 141(5), p. 051404. 10.1115/1.4041946
20. Kall, P., Wallace, S., and Kall, P., 1994, Stochastic Programming, John Wiley & Sons, Chichester, UK.
21. Bertsimas, D., Brown, D., and Caramanis, C., 2011, "Theory and Applications of Robust Optimization," SIAM Rev., 53(3), pp. 464–501. 10.1137/080734510
22. Yao, W., Chen, X., Luo, W., van Tooren, M., and Guo, J., 2011, "Review of Uncertainty-Based Multidisciplinary Design Optimization Methods for Aerospace Vehicles," Progress Aeros. Sci., 47(6), pp. 450–479. 10.1016/j.paerosci.2011.05.001
23. Du, X., and Chen, W., 2002, "Efficient Uncertainty Analysis Methods for Multidisciplinary Robust Design," AIAA J., 40(3), pp. 545–552. 10.2514/2.1681
24. Kokkolaras, M., Mourelatos, Z., and Papalambros, P., 2004, "Design Optimization of Hierarchically Decomposed Multilevel Systems Under Uncertainty," ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Volume 1: 30th Design Automation Conference, pp. 613–624, Paper No. DETC2004-57357.
25. Atkinson, A., Donev, A., and Tobias, R., 2007, Optimum Experimental Designs, With SAS, Vol. 34 (Oxford Statistical Science Series), Oxford University Press, Oxford, UK.
26. Huan, X., and Marzouk, Y., 2013, "Simulation-Based Optimal Bayesian Experimental Design for Nonlinear Systems," J. Comput. Phys., 232(1), pp. 288–317. 10.1016/j.jcp.2012.08.013
27. Mindlin, R., 1951, "Influence of Rotatory Inertia and Shear on Flexural Motions of Isotropic, Elastic Plates," ASME J. Appl. Mech., 18, pp. 31–38.
28. Reissner, E., 1945, "The Effect of Transverse Shear Deformation on the Bending of Elastic Plates," ASME J. Appl. Mech., 12, pp. A69–A77.
29. Gutowski, T., Hoult, D., Dillon, G., Neoh, E., Muter, S., Kim, E., and Tse, M., 1994, "Development of a Theoretical Cost Model for Advanced Composite Fabrication," Compos. Manufact., 5(4), pp. 231–239. 10.1016/0956-7143(94)90138-4
30. Northrop Corporation, 1976, "Advanced Composites Cost Estimating Manual (ACCEM)," AFFDL-TR-76-87, Vol. 1, pp. 1–88.
31. Joshi, S., and Boyd, S., 2009, "Sensor Selection Via Convex Optimization," IEEE Trans. Signal Proc., 57(2), pp. 451–462. 10.1109/TSP.2008.2007095
32. Doucet, A., Freitas, N., and Gordon, N., eds., 2001, Sequential Monte Carlo Methods in Practice, Springer-Verlag, New York.
33. Tsai, S., and Wu, E., 1971, "A General Theory of Strength for Anisotropic Materials," J. Compos. Mater., 5(1), pp. 58–80.
34. Chaudhuri, A., Norton, M., and Kramer, B., 2020, "Risk-Based Design Optimization Via Probability of Failure, Conditional Value-at-Risk, and Buffered Probability of Failure," AIAA SciTech 2020, Managing Multiple Information Sources of Multi-Physics Systems, Orlando, FL, Jan. 6–10, pp. 1–18.
35. Busoniu, L., Babuska, R., Schutter, B., and Ernst, D., 2010, Reinforcement Learning and Dynamic Programming Using Function Approximators, Vol. 39 (Automation and Control Engineering), CRC Press, Boca Raton, FL.
36. Björck, A., 1996, Numerical Methods for Least Squares Problems, Society for Industrial and Applied Mathematics.
37. Powell, W., 2011, Approximate Dynamic Programming: Solving the Curses of Dimensionality, 2nd ed. (Wiley Series in Probability and Statistics), Wiley, New York.
38. Bertsekas, D., 2012, Dynamic Programming and Optimal Control—Approximate Dynamic Programming, 4th ed., Vol. II, Athena Scientific, Belmont, MA.
39. Benner, P., Gugercin, S., and Willcox, K., 2015, "A Survey of Projection-Based Model Reduction Methods for Parametric Dynamical Systems," SIAM Rev., 57(4), pp. 483–531. 10.1137/130932715
40. Quarteroni, A., and Rozza, G., eds., 2014, Reduced Order Methods for Modeling and Computational Reduction, Vol. 9, MS&A, Springer, New York.
41. Peherstorfer, B., Willcox, K., and Gunzburger, M., 2016, "Optimal Model Management for Multifidelity Monte Carlo Estimation," SIAM J. Sci. Comput., 38(5), pp. A3163–A3194. 10.1137/15M1046472
42. Kramer, B., Marques, A., Peherstorfer, B., Villa, U., and Willcox, K., 2019, "Multifidelity Probability Estimation Via Fusion of Estimators," J. Comput. Phys., 392, pp. 385–402. 10.1016/j.jcp.2019.04.071


n5321 | 2025年7月7日 23:43

SpaceX的研发体系


SpaceX uses 3D printers and a process of relentless refinement to streamline its Raptor engines. In the Raptor 3, plumbing and wiring that had been on the outside were fused into the motor’s metal structure.

What is behind SpaceX’s success? According to one former top employee, it is something called “The Algorithm.”

Tim Berry, head of manufacturing and quality at blended-wing-body startup JetZero, spent a decade at SpaceX, where he led the upper-stage production team for the Falcon 9 and Falcon Heavy family of rockets. Berry also led the Dragon Crew and Cargo integration team and was head of additive manufacturing.

  • How the Raptor 3 rocket engine was streamlined

  • “Your requirements are definitely dumb; You have to find a way to make them less dumb.”

The Algorithm was “drilled into our minds,” he said at the American Institute of Aeronautics and Astronautics Aviation Forum in Las Vegas in July. “It’s a five-step process for improving the design, making it ultimately easier to manufacture and finding ways to optimize along the way as well.”

Step 1 is: “Challenge the requirements,” Berry said. “Or as our benefactor used to say, ‘Your requirements are definitely dumb; you have to find a way to make them less dumb.’”

Step 2 involves deleting a part or a process step. “Really looking at the full value chain of what you’re working on and eliminating any unnecessary process steps while also finding opportunities to delete parts ultimately yields an overall optimization, whether it’s a reduction in labor or cycle time or anything like that,” he said.

Step 3: “Find additional opportunities to simplify the design or optimize it,” Berry said. “Make it easier to manufacture or eke a few more points of performance out of it.”

Step 4 is all about speed. “Find even more ways to go faster,” Berry explained. “You add more stations; you ramp up the manufacturing.”

And then, finally—Step 5—you automate. “Most people start with Step 5, and they automate a process that never should have existed in the first place,” said Berry. “It’s really important that you work the steps in order.”

After completing Step 5, rinse and repeat. “You’re never satisfied,” Berry said. “You’re constantly going back and finding opportunities to challenge your requirements, deleting more parts, simplifying, optimizing, going faster, and then finally, opportunities to automate, but only once you’ve really boiled down to the baseline process.”

SpaceX CEO Elon Musk is keen on reducing engineering to its basics via first principles thinking. Aristotle invented the first principles method some 2,400 years ago in his Metaphysics, describing it as trying to understand “the first basis from which a thing is known.”

“The normal way that we conduct our lives is we reason by analogy,” Musk explained in a 2012 interview. “We’re doing this because it’s like something else that was done, or it’s like what other people are doing. It’s mentally easier to reason by analogy rather than from first principles. First principles is a physics way of looking at the world, and what that really means is you kind of boil things down to the most fundamental truths.”

Musk often talks about competing SpaceX’s hardware against the laws of nature rather than other products on the market. That philosophy has driven SpaceX employees to simplify the Raptor rocket engine from something that looked “like a Christmas tree with how much stuff is on it” to a more spartan look, Berry said.

In August, SpaceX revealed the drastically streamlined Raptor 3 engine (see photo) and test-fired it. The company ditched a heat shield on the latest iteration of the methane-fueled engine by taking plumbing and wiring that was previously hanging on the outside and fusing it into the motor’s metal structure. To do so, SpaceX heavily relied on 3D printing, Musk wrote on the social media site X.

The sea-level variant of the Raptor 3 weighs 3,362 lb., compared with the 3,594-lb. Raptor 2, while generating 280 tons of force, compared with the current rocket’s 230 tons of force. The total weight of the Raptor 3 plus vehicle commodities and hardware is 3,792 lb. compared with the Raptor 2’s 6,338 lb.

“The amount of work required to simplify the Raptor engine, internalize secondary flow paths and add regenerative cooling for exposed components was staggering,” Musk said. “Getting close to the limit of known physics,” he added in another post.


n5321 | 2025年7月7日 23:40

COMPUTER AIDED DESIGN I: FORMING THE CAD SOFTWARE INDUSTRY

Starting in the 1950s, automotive and aerospace companies purchased significant amounts of computer hardware. A number of the companies developed their own CAD/CAM programs to support the complexities and scale of their product development process. Not only were there few commercial CAD/CAM companies, but industrial companies also wanted to protect their intellectual property.

CAD/CAM programs supported drafting, 3-D surface modeling, numerical control (NC) programming, and/or engineering analysis. Drafting let users produce engineering drawings that documented designs and contained fabrication and assembly instructions. Some industrial companies, especially in the automotive and aerospace sectors, pushed the CAD envelope into 3-D surface modeling because surfaces define the external skins that drive automotive style and aerospace aerodynamics. Using the geometry CAD produced, CAM programs generated NC instructions for a new class of highly accurate machine tools. Finally, the geometry was essential input to complex engineering analysis programs (such as stress and aerodynamics).

This article begins with a general background and overview of the drafting, engineering, and manufacturing requirements in the automotive and aerospace industries. It then describes some of the technical differences between interactive CAD programs and other scientific and engineering programs in terms of performance, scale, and integration. The article then provides an overview of some of the program functions needed and why most of them were not available on a commercial basis.

This general picture is then followed by a more detailed discussion of CAD/CAM program examples from the two industries up through the mid-1980s. The automotive industry is covered first, with detailed examples from General Motors (GM), Ford, and Renault/Citroën. A similar discussion of the aerospace industry follows, with a focus on Lockheed, Northrop, McDonnell Douglas, Dassault Aviation, and Matra Datavision.

The article ends with a discussion of why and how these companies led the way in high-performance, large-scale, complex 3-D surface and NC programs. By contrast, early commercial CAD/CAM software companies focused on building programs that produced engineering drawings. In some cases, industrial companies purchased commercial programs to produce engineering drawings but relied on internal development for surface design and NC programming.

BACKGROUND

Like most forms of computing technology, CAD systems have evolved significantly. Some advances have been driven by computing technology itself, such as graphics processing units, personal computers, and cloud computing. Others have been driven by brilliant people developing and improving algorithms (such as finite elements for 3-D stress analysis and nonuniform rational B-splines). Importantly, industrial companies realized that productivity improvements over manual techniques were possible using interactive graphics.

Automotive and aerospace companies have found benefits in developing and using highly interactive, computer-graphics-based CAD/CAM programs since the late 1950s. Computing helped automotive and aerospace companies move into the world of automated milling and machining with NC systems (CAM), analyzing smooth surfaces to define aerodynamically efficient and aesthetically pleasing external surfaces [computer-aided engineering (CAE)], and producing engineering drawings (CAD). Starting in the 1980s, other industries, such as shipbuilding, architecture, petrochemical plants, and manufacturing/assembly plants, adopted CAD/CAM methods more slowly.

Production-level automotive and aerospace CAD/CAM programs had features commercial companies introduced later. Early commercial offerings, as documented in David Weisberg’s excellent book [28], focused on generating engineering drawings. A few early industrial systems, such as Lockheed’s CADAM system, which became successful commercially [28, pp. 13-1–13-7], addressed engineering drawing, while other companies (such as Boeing and Ford) used commercial drafting systems.

Systems developed by industrial companies included not only 2-D engineering drawings but also CAM, engineering analysis, and 3-D surface design. By contrast, early commercial systems concentrated on producing 2-D engineering drawings. Daniel Cardoso Llach’s article [9] in this issue discusses how the 1950s CAM push to improve input definition for numerically controlled milling machines influenced some of the earliest CAD developments. Engineering analysis and surface-definition capabilities are discussed later in this article and the article by Kasik et al. [17].

Industrial and commercial systems differed for multiple reasons. First, CAD/CAM programs produce the complex, digital geometric representations and annotations needed to design, analyze, manufacture, and assemble products. Industrial companies wrote their own programs to protect their proprietary methods. Second, industrial companies chose to directly hire mathematicians, engineers, and programmers to build customized programs for 3-D surface design and engineering analysis. The programs reflected internal company practices and did not need to be as general as commercial offerings. A significant amount of the computer graphics techniques and mathematics implemented in industrial CAD/CAM programs still exist in today’s commercial offerings. Third, industrial companies were able to purchase mainframe computing. Mainframe performance was especially necessary for surface design and engineering analysis.

OVERVIEW

CAD/CAM programs produce two types of basic data. First, both automotive and aerospace require 3-D geometry to define their products. Second, they require text and 2-D/3-D geometry as input for engineering analysis (CAE) and instructions (such as finish, tolerances, and dimensions) for manufacturing and assembly. (Engineering analysis and CAE systems are beyond the scope of this article.)

Because the documentation medium is something flat (on paper, a computer screen, or microfilm), companies have long used 2-D engineering drawing techniques to represent 3-D geometry. The drawings represent 3-D objects as a collection of views (see Figure 1). Even if the CAD/CAM program defines geometry using 3-D coordinates, rendering techniques (such as shading, perspective, and dynamic rotation) are required to help the user understand the 3-D geometry on flat screens (see Figure 2).

FIGURE 1. Typical engineering drawing. (Source: https://pixabay.com/vectors/car-vehicle-draw-automobile-motot-34762/; used with permission.)

FIGURE 2. Annotated 3-D object. (Source: D. Kasik; used with permission.)

In short, CAD/CAM programs implement the necessary techniques to define, modify, and communicate the 2-D/3-D geometry and text needed to build complex products.
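As a generic illustration of the rotate-and-project step that any 3-D viewer performs to put geometry on a flat screen (not any particular CAD system's rendering pipeline), a minimal perspective projection might look like the following sketch; the angles and camera parameters are arbitrary.

```python
import numpy as np

def project(points, yaw=0.6, pitch=0.3, focal=2.0, camera_dist=5.0):
    """Rotate 3-D points and apply a simple pinhole perspective projection.

    Returns 2-D screen coordinates; purely a generic illustration of the
    rotate-then-project step, not a specific CAD system's renderer.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # rotation about y-axis
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # rotation about x-axis
    p = points @ (Rx @ Ry).T
    z = p[:, 2] + camera_dist                               # push model in front of camera
    return focal * p[:, :2] / z[:, None]                    # divide by depth

# Corners of a unit cube projected onto the screen plane.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
print(np.round(project(cube), 3))
```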

My Boeing job gave me a broad view of both commercial and industrial systems. As chief technical architect for Boeing’s internally developed CAD system [16], I was invited to numerous presentations from vendors and competitors and became acquainted with their internal details. Boeing CAD/CAM research and development work started in the late 1950s and ended in the late 1990s.

Academic systems are not included in this article because the most significant production program development work was being done by commercial CAD software companies and industrial companies. A number of academic research projects inspired CAD/CAM development nonetheless. The Massachusetts Institute of Technology [28, pp. 3-1–3-25] provided excellent late-1950s and early-1960s results focused on interactively generating 2-D geometry [24], 3-D geometry [15], and NC machine programming [23]. Although there were some academic contributions to solid modeling [27], [25], solids did not play a significant modeling role until Boeing used CATIA V3 and CATIA V4 to define the 777 with solids [21].

When assessing automotive and aerospace CAD programs, it is necessary to understand not only the data but also the user community:

  • those with technical expertise in one or more scientific, engineering, or manufacturing fields

  • specialists who use interactive CAD systems to build 2-D engineering drawings or 3-D models based on specifications from technical experts

  • those with programming skills who are willing to write their own software to solve problems not addressed to their satisfaction in commercial software.

The models and text guide the activities of downstream engineering, fabrication, assembly, and maintenance staff. Making the downstream more productive was a prime motivator for the development of early CAD programs. CAM programs started in the mid-1950s because NC machines required very lengthy programs combining part geometry and manufacturing instructions to fabricate individual parts [2]. Generating the geometry for NC programs led to the development of tools to make defining the geometry easier. Engineering programs (such as computational fluid dynamics and finite-element analysis) also relied on geometry that defined external surfaces for aerodynamic analysis, more detailed part forms for structural analysis, and many other analyses.

CAD/CAM PROGRAM CHARACTERISTICS

On a technical level, interactive CAD/CAM programs differ from other scientific/engineering programs and transaction-oriented business systems because of the greater need for performance, scale, and integration. However, CAD/CAM programs and their users did not initially levy specific demands on processor speed, network speed, memory size, and data storage capacity. Instead, users tended to start with whatever technical facilities they could access and then later demanded more processor power, network bandwidth, memory, and data storage.

Performance Requirements

CAD/CAM interactive drafting and design performance must be close to real time to allow users to manipulate geometry (either 2-D or 3-D) efficiently and comfortably. Immediate response (measured as 0.5 seconds or less) [28, pp. 13-1–13-7] for simple operations makes the CAD/CAM program feel like it is responding in real time. Simple operations include sketching a line and rotating, moving, and zooming 3-D models.

By contrast, many other scientific/engineering programs are heavily compute-bound and can generally be run as batch programs. Even when able to be run interactively, users understand how complex the algorithms are and do not expect immediate results. Hence, the necessity for real-time interaction is relaxed.

Most interactive, transaction-oriented business systems do not require near-real-time interactive performance. They often feature form interfaces that require a person to fill out multiple fields prior to processing. Interaction must be fast enough to allow quick navigation from one text field to another. Once input is completed, the user starts transactions processed by a reliable database system and expects some delay.

The real-time interaction aspect of CAD/CAM programs meant that their implementation differed significantly from other types of online programs. Getting acceptable performance for CAD stressed interactive devices, operating systems and programming languages; data storage methods; and computing/network hardware.

Other forms of scientific computing generate or measure vast amounts of data, as in computational fluid dynamics or astronomy. When a person produces a CAD drawing or model, it is most often part of a larger collection of parts, subassemblies, and assemblies that ultimately define the entire product. A complex product, such as a commercial airplane or a building, requires thousands of drawings, hundreds of thousands of unique parts, and millions of individual parts. A configuration management system rather than a CAD system defines and controls interpart relationships and versions. (Configuration management systems are beyond the scope of this article.) The system must be able to handle all of the original data as versions evolve in addition to the data generated by CAE/CAM processes. All versions are stored to document design decisions and evolution.

The thousands of people involved in designing, analyzing, building, and maintaining a complex product put significant stress on the supporting software and hardware. It is critical for the software to keep track and organize all of the parts, drawings, analyses, and manufacturing plans. Tracking and organizing generally required centralized computing resources (yesterday’s mainframes and today’s cloud). Tracking and organizing CAD data on centralized mainframes was difficult enough. The problem got worse as personal computers started having enough computing power and networking resources to move design to a distributed computing environment. Although tracking and organizing mainframe-based data were difficult, and distributed work relied on detailed centralized tracking and organizing, making sure that a user was working on the latest version added complexity.

Scale: Product Complexity and Longevity

The problem of scale stresses computer systems across both size and time. Then, as computer performance improves, users tend to push the limits by attacking more complex problems, producing more design and simulation iterations, generating more numerous and more detailed models, and so on. For example, when Boeing developed the 777 during the late 1980s and early 1990s, each airplane was represented by a collection of models that contained about 300 million polygons. The fourth version of the Dassault Systèmes CATIA CAD system (CATIA V4) was the primary modeling tool. When the 787 started in 2004, the geometric models developed using CATIA V5 required more than 1 billion polygons. Although not necessarily as large in terms of absolute amounts of storage consumed as business systems, geometry data are structurally complex (with both intrapart and interpart relationships) and contain mostly floating-point values (for example, results of algorithms only come close to zero).

Scale is also measured in calendar time. CAD programs generate geometry and documentation data that represent products that could be in use for decades (such as commercial and military aircraft) or more (such as power generators). CAD/CAM programs tend to have a shorter half-life than the product definition data they produce. This puts significant stress on data compatibility across vendors or across software versions from the same vendor. Different vendors' implementations of the same type of entity could all too easily result in translation errors. New versions of a single vendor's product could also result in translation errors.

Data Integration

CAD/CAM program integration has different variations [18]. Effective, active data integration allows different programs to read and potentially write geometry data directly without translation. For example, a finite-element analysis program requires geometry from which it builds a mesh of elements. Many analysis programs (such as NASTRAN) have been in existence for decades and still do not have direct access to CAD geometry models.

Having full data integration across all CAD/CAM/CAE programs is a complex and fragile endeavor that remains a challenge for multiple reasons. Different groups developed the programs and use different internal representations that require translation. For example, CAD-generated geometry must be translated into the nodes and elements that finite-element codes can process. Similarly, different organizations use different brands of CAD/CAM/CAE programs that also require translation. For example, Boeing used two different CAD systems (Computervision for the 757 and Gerber for the 767) that forced the company to develop its own translator.

The translation of geometric data has proven to be nearly as challenging as translating natural language. Programs often have unique data entities, different algorithms for the same function, and even different hardware floating-point representations. The differences mean that 100% accurate and precise translation among systems has yet to be realized.

INTERNAL AUTOMOTIVE AND AEROSPACE PROGRAM DEVELOPMENT

Three factors drove CAD/CAM adoption in the aerospace and automotive industries. First, companies observed that engineering drawing preparation was time-consuming, both for an initial release and for subsequent modifications. Interactive graphics obviated the need for drafting tables, drafting tools, and erasers. Drafters could generate and modify engineering drawings more quickly. Large plotters produced drawings on paper or mylar for certification agencies, such as the U.S. Federal Aviation Administration, for approval. Second, engineering analysis showed real promise in terms of virtually analyzing engineering characteristics, such as aerodynamics, structural integrity, and weight. Accurate geometry, especially external surface definitions, was required. Third, NC machines gained popularity and required efficient methods to create the geometry of individual parts.

Many automotive and aerospace companies developed their own programs. Unlike the early commercial CAD/CAM companies, which often relied on minicomputers, automotive and aerospace companies had enough mainframe resources to support a large user community and large amounts of data. A single mainframe could be upgraded to support tens and even hundreds of CAD/CAM users and provide acceptable interactive performance. In addition, aerospace and automotive companies hired the mathematical and programming talent needed to build CAD/CAM programs. The programs were tuned to internal corporate drafting standards, manufacturing, and surface-modeling techniques.

Commercial CAD software systems were able to penetrate a few large companies in the early days. For example, Boeing used them for 757 and 767 engineering drawings. However, it was more common for large aerospace and automotive companies to develop their own systems to give themselves a competitive advantage in surface modeling and NC programming. A few other large design and build companies in the shipbuilding, architecture, industrial design, process plant, and factory design industries also developed or used early CAD systems, like Fluor [20] and GE [2], but they were the exceptions. Automotive and aerospace led the way, but, in many cases, surface modeling and NC programming were the prime focus. Engineering drawing programs were developed primarily to save documentation labor.

Both commercial software companies and industrial companies developed dozens of CAD/CAM programs that had significant functional overlap. As is the case with other product classes, many competitors initially emerged. However, market evolution saw the many gradually coalesce into a few large players. The CAD/CAM business was no different. Today, a few large players (Autodesk, Dassault Systèmes, Parametric Technology, and Siemens) have acquired competitors or forced them into bankruptcy and now dominate the market [28, pp. 8-1–8-51, 13-1–13-7, 16-1–16-48, and 19-1–19-38].

The internal industrial programs stayed in production through the mid- to late 1980s. Commercial software companies started adding functions for 3-D solid and surface modeling and advanced NC programming. The commercial companies were able to spread development and maintenance costs over multiple clients, and industrial companies realized that commercial systems could provide cost savings.

The power of personal computers based on raster graphics devices also started matching and even exceeding minicomputer and workstation performance. Personal computers, which were much cheaper and offered another cost-savings opportunity, contributed to the demise of mainframe-based systems.

AUTOMOTIVE INDUSTRY

Companies like General Motors [19] and Renault [4] had strong research and development organizations and started recognizing CAD’s benefits in the late 1950s.

FIGURE 3. Coordinate measuring machine. (Source: https://www.foxvalleymetrology.com/products/metrology-systems/coordinate-measuring-machines/wenzel-r-series-horizontal-arm-coordinate-measuring-machines/wenzel-raplus-horizontal-arm-coordinate-measuring-machine/; used with permission.)

Automotive surfaces are often defined using full-scale clay models (see Figure 3). While manually sculpting new car body designs in clay was hard enough, manually transferring those shapes into computer-processable surfaces to support design, engineering, and manufacturing was even harder. Companies still use full-size coordinate measuring machines and numerical surface-fitting algorithms to do so.

The automotive industry especially cares about how a vehicle looks to a potential buyer. Mathematicians like Steve Coons (Massachusetts Institute of Technology, Syracuse, and Ford), Bill Gordon (GM and Syracuse), and Pierre Bézier (Renault) solved complex computational geometry problems both as academics and as employees. Their solutions became the basis for substantial improvements in surface modeling. The methods for defining surfaces, true 3-D objects, varied from company to company. For example, General Motors used full-scale coordinate measuring machines that capture height along the width and the length of a full-scale clay model of a proposed automobile. Bill Gordon’s surface algorithms accounted for height differences in the width and length measurements.
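The basic idea of turning measured clay-model heights into a smooth analytic surface can be sketched with a generic least-squares polynomial fit; this is only an illustration with invented data, not GM's proprietary Gordon-surface algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fake "clay model" height measurements z(x, y) on a coarse grid (invented data).
x, y = np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15))
z = 0.3 * np.sin(2 * x) + 0.1 * y**2 + 0.005 * rng.standard_normal(x.shape)

# Fit a bicubic polynomial surface z ~ sum c_ij x^i y^j by linear least squares.
powers = [(i, j) for i in range(4) for j in range(4)]
A = np.column_stack([x.ravel()**i * y.ravel()**j for i, j in powers])
coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)

def surface(xq, yq):
    """Evaluate the fitted analytic surface at a query point."""
    return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, powers))

# Fitted value at the grid center versus the noiseless target.
print(round(float(surface(0.5, 0.5)), 4), round(0.3 * np.sin(1.0) + 0.1 * 0.25, 4))
```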

GM

GM started its CAD developments in the late 1950s [19]. The staff at GM Research (GMR) worked with IBM to develop time-sharing and graphics capabilities that were responsive enough to support interactive design. The original computer used was an IBM 704 (upgraded to a 7090 and then a 7094) running a Fortran language compiler. The program itself was called Design Augmented by Computers (DAC-1).

Not only did DAC-1 provide body-styling assistance, but it also forced the IBM–GM team to develop an early time-sharing strategy (the Trap Control System) in 1961. Time sharing itself was in its infancy in the 1960s and generally supported alphanumeric character terminals connected at low speeds (110 or 300 bits per second). Supporting interactive performance required higher bit rates and put more pressure on the operating system. Earlier computers, such as Whirlwind, that supported graphics and light-pen interactive devices were dedicated to a single user.

IBM and GM formed a joint development project to develop a light-pen-driven interactive device to meet GM’s DAC-1 requirements. Even the choice of programming language was scrutinized. The Fortran compiler proved to be too slow, so DAC-1 moved to NOMAD, a customized version of the University of Michigan’s Michigan Algorithm Decoder compiler, in 1961–1962 [19].

Patrick Hanratty and Don Beque worked on the CAM systems that dealt with stamping the designs produced by DAC-1 between 1961 and 1964. Hanratty left GM in 1965 and went to a West Coast company, where he developed his design software. He later took his work and formed an independent company [2], [38, pp. 15-1–15-20].

DAC-1 was formally moved from the GMR to GM operating division in 1967, but they did not use the GM CAD system development. Two different surface-modeling packages, Fisher Body and CADANCE, appeared in the 1970s. Each ran on IBM 360/370 machines using the PL/1 programming language. Most users had access to IBM 2250/3250 graphics terminals [17]. Some GM divisions reported between 50 and 100 GM devices with DEC GT40 vector graphics terminals hooked up to a PDP 11/45. The 11/05 handled communication to and from the mainframe. In the late 1970s, Fisher Body and CADANCE were merged into GM’s Corporate Graphics System (CGS). The systems were based on GM proprietary surface geometry algorithms. Gordon surfaces [13] were particularly useful when fitting surfaces to scans of data collected from automobile clay body models.

GM developed its own mainframe-based solid-modeling system, GMSolid [7], in the early 1980s that was eventually integrated into CGS. GMSolid used both constructive (i.e., users used solid primitives, like spheres, cylinders, and cones) and boundary representations (i.e., solid faces contained arbitrary surfaces).

Ford

Ford developed a minicomputer-based 3-D system for multiple programs in the mid- to late 1970s. The Ford Computer Graphics System [6] used a Lundy HyperGraf refresh graphics terminal connected to a Control Data 18-10 M minicomputer. Ford modified the operating system to maximize performance. There was one terminal per minicomputer.

The programs supported product design with the Product Design Graphics System to define an auto body using Coons [12] or Overhauser [8] surfaces. Other functions included printed circuit board design, plant layout, die/tool element modeling, and NC. Ford used commercial CAD systems, such as Computervision and Gerber IDS, for drafting and design functions throughout its powertrain (engine, axle, transmission, and chassis). Ford used different minicomputer brands and graphics terminals for different programs. Computervision ran on its own proprietary minicomputer and a Tektronix direct-view storage tube; Gerber IDS ran on an HP 21MX and a Tektronix terminal; and the printed circuit board design program ran on a Prime 400 minicomputer and Vector General refresh graphics terminals.

Even though Ford worked in a distributed minicomputer-based (rather than mainframe-based) environment, the company used centralized servers to store, retrieve, and distribute its design files worldwide.

Renault and Citroën

Pierre Bézier popularized and implemented the curve definitions needed for the smooth curves of auto bodies [10], building on mathematics developed by Paul de Casteljau (a Citroën employee) in 1959. Bézier developed the nodes and control handles needed to represent and interactively manipulate Bézier curves via interactive graphics. He was responsible for the development of Renault’s UNISURF system [5] for auto body and tool design. System development began in 1968 and went into production in 1975 on IBM 360 mainframes.

Citroën developed two of its own systems (SPAC and SADUSCA) in parallel with Renault [30]. The systems were also based on de Casteljau’s work and ran on IBM 360 and 370 series computers and IBM 2250 graphics terminals.

AEROSPACE INDUSTRY

In the aerospace CAD/CAM world, companies started defining aerodynamically friendly surfaces shortly after the first powered flight at Kitty Hawk. The National Advisory Committee for Aeronautics (NACA) defined, tested, and published families of airfoils [5] in the late 1920s and 1930s. The idea was to assist aircraft development by predefining the aerodynamic characteristics of wing cross sections (see Figure 4).

FIGURE 4. Sample NACA airfoils. (Source: Summary of Airfoil Data, NACA Report 824, NACA, 1945; used with permission.)

Aerospace engineers must design surfaces that balance aerodynamic performance, structural integrity, weight, manufacturability, fuel efficiency, and other parameters. Industrial aerospace CAD systems adopted 3-D surface-definition technology that was consistent with their company surface-lofting practices and could produce surfaces that could be modified relatively easily, represented conics precisely, and exhibited C2 continuity. (C2 means continuous in the second derivative, an advantage when doing aerodynamic analysis.)

Aerospace companies tried to use automotive surface-modeling methods, but they did not work particularly well. Automobile companies care more about the attractiveness of smooth surfaces, although aerodynamics has become more important as fuel efficiency demands have increased. Aerospace design is driven by aerodynamic efficiency and demands C2 continuity for analysis, requirements that automotive surfaces did not handle well. Nonuniform rational b-spline surfaces became, and remain, the preferred aerospace surface-modeling method [26].

Lockheed

Lockheed focused on producing engineering drawings and NC programming, not surface modeling. The goal was to speed up both processes. Lockheed California developed computer-aided drafting software internally to run on IBM mainframes and 2250/3250 graphics terminals [28, pp. 13-1–13-7]. Development started in 1965 as “Project Design” to create 2-D engineering drawings quickly. Project Design was rechristened as CADAM in 1972. An acceptable response time was deemed to be 0.5 seconds or less. CADAM operators were often judged by how fast they seemed to be working, even if little was actually happening; CADAM had a lot of relatively short-duration functions that made operators appear busy.

Project Design drawings were used to drive NC machines as early as 1966. Use of the software spread quickly inside Lockheed, which established a separate business to sell CADAM in 1972. The new business started sending CADAM source code to others in 1974, including IBM Paris, Lockheed Georgia, and Lockheed Missile and Space in Sunnyvale, California. Eventually, IBM started a successful effort to offer CADAM (acquired from Lockheed) to drive mainframe sales.

Additional CAD development occurred at Lockheed Georgia in 1965 [28, pp. 4-3–4-4]. Spearheaded by Sylvan (Chase) Chasen, the software ran on CDC 3300 computers and Digigraphics terminals. The purpose was more to assist in NC program path planning than to create engineering drawings.

Northrop

Northrop military program funding often drove the development of aerospace company systems. Northrop Computer-Aided Design and Northrop Computer-Aided Lofting (NCAD/NCAL) is an excellent example [1]. Northrop based the system design for the mid-1970s B-2 Spirit stealth bomber on NCAD/NCAL. Other Northrop military programs and Northrop subcontractors used NCAD/NCAL for 3-D surface modeling and CADAM for drafting.

Northrop used funds from the B-2 program to develop NCAD/NCAL [14] rather than use similar systems from other contractors. NCAD/NCAL ran on IBM mainframes interconnected with classified networks. Importantly, the mainframes and networks crossed multiple corporate boundaries, including Boeing, Hughes Radar, GE Engines, and Vought. All partners had to use NCAD/NCAL and provide their own IBM mainframes. This approach simplified data integration and transfer issues and resulted in the first military aircraft fully designed on a CAD system. The B-2 program started in the early 1980s, and its first flight occurred 17 July 1989. The airplane is still in service today.

McDonnell Douglas

McDonnell Douglas implemented two distinctly different CAD systems [22]. The first, Computer Aided Design and Drafting (CADD), was developed in 1965 by the McDonnell Aircraft Company. It was initially a 2-D design and drafting system that was extended to 3-D in 1973, integrated with NC software in 1976 and sold commercially beginning in 1977.

McDonnell Douglas Automation (McAuto), the computer services business unit, purchased Unigraphics (UG) from United Computing in 1976. McAuto rewrote and expanded the United Computing system software based on a license to Hanratty’s ADAM software. The first production use of UG occurred at McDonnell Douglas in 1978. Bidirectional data exchange between the two was not completed until 1981 even though both were in production use.

The two systems’ implementations differed substantially. CADD ran on IBM mainframes and its geometry was based on parametric cubic polynomials and evaluators. Graphics support was primarily the IBM 2250, a 2-D-only device. Evans and Sutherland (E&S) [11] sold a number of Multi-Picture Systems (MPSs) as a 2250 alternative. The MPS featured hardware for 3-D transformations, which had the potential to offload the mainframe. E&S modified its controller to allow two terminals to share a single controller through a device called a Watkins box (named after the designer and developer, Gary Watkins). The Watkins box was attached to a small DEC minicomputer, which handled communications to and from the mainframe. This configuration provided enough savings over the 2250/3250 to justify the purchase of dozens of E&S terminals.

UG ran on multiple brands of midrange minicomputers, including DEC PDP and VAX systems as well as the Data General S/250, S/230, and S/200. UG derived its geometry from the ADAM system. Early versions of ADAM relied on canonical forms and special case geometry algorithms. Interactive graphics for UG was provided on Tektronix storage-tube devices.

Dassault Aviation

Dassault Aviation started its journey in computer graphics to help smooth curve and surface data in the late 1960s. In 1974, the company became one of the first licensees of Lockheed’s CADAM software for 2-D drafting.

Designing in 3-D took a different route. In 1976, Dassault Aviation acquired the Renault UNISURF program and its Bézier curve and surface capability to complement CADAM.

CATIA itself started in 1978 as the Computer-Aided Tridimensional Interactive (CATI) system. Francis Bernard [3] is credited with extending CATI to surface modeling to generate geometry that would be easier to machine, a capability particularly important for wind tunnel models. CATI became CATIA in 1981 when Bernard convinced Dassault Aviation to commercialize the system through the Dassault Systèmes spinout. As both an internal and commercial product, CATIA ran on IBM mainframes with attached IBM 2250/3250 and IBM 5080 graphics terminals. The early underlying geometry forms included Bézier curves and surfaces and grew to include canonical solid definitions and constructive solid geometry operations. Later versions ran on IBM RS/6000s and other Unix-based workstations.

Matra Datavision Euclid

French aerospace company Matra’s Euclid system (not to be confused with the C-Side Subtec Euklid system for NC machining) addressed modeling for fluid flow. Euclid was a modeler sold by the French startup Datavision in 1979. It was originally developed by Brun and Theron in the Computer Science Laboratory for Mechanics and Engineering Sciences in Orsay, France. Its initial purpose was fluid flow modeling. The French conglomerate Matra, which had aerospace components, bought the controlling interest in Euclid in 1980. Dassault Systèmes purchased the software in 1998.

CONCLUSION

Even though internally developed CAD/CAM programs are unusual today, a number of commercial systems had their roots in early industrial programs. Internally developed programs had direct access to user communities and were able to develop math software that matched company practice. The interactive methods and mathematics influenced other industries, such as electronic games and animated films.

Early commercial CAD/CAM programs were packaged as turnkey systems. Each turnkey system supported only a few concurrent users at relatively slow speeds. Industrial companies, which had to support hundreds and even thousands of users, had the computer power (generally large mainframes), the talent (mathematicians and programmers), and the money to build their own proprietary CAD/CAM programs. By the late 1980s, however, there was not enough of a competitive advantage to continue development and support. At that point, commercial companies had developed enough manufacturing, surface design, and other capabilities that internal development and maintenance were no longer cost efficient.

Industrial companies experienced the requirement firsthand by developing their own CAD/CAM programs. Because of that experience, industrial companies can clearly articulate the problems CAD/CAM programs have with performance, scale, and integration to today’s commercial vendors.

As noted earlier, the basic performance and integration requirements for CAD/CAM programs are essentially the same today as in the early days. Scale adds another zero or two to the left of the decimal point as CAD/CAM data quantities grow.

The mainframes and minicomputers of 1960–1985 were supplanted by workstations that cost significantly less. Workstations were overtaken by the ever-increasing compute power and the ability to network personal computers in the mid-1990s. Computing today has a turn-the-clock-back feel as cloud systems are gaining momentum, and current CAD systems are being delivered via the cloud. As was the case with early mainframes, cloud computing centralizes processing and data resources. Users take advantage of high-performance networks and access cloud systems remotely via lower cost PCs. When CAD software is executed in the cloud, license sharing becomes feasible and software updates occur remotely.

When applied to CAD, cloud computing faces the same scale and performance issues present in the early days with centralized mainframes and minicomputers. Cloud scales well from a raw processing perspective. It is easy to add more processing power, and servers are generally in the same physical location, which decreases data transfer costs. What is hard for cloud computing is satisfying CAD systems’ requirement for near-real-time interactive performance, especially at significant distances. Many cloud services are based in data centers that are tens, hundreds, and even thousands of miles away. Such distances make achieving near-real-time interactive performance difficult. Interactive performance continues to force many CAD/CAM applications to run in a distributed manner. Applications run on a PC near the user, whose data are stored on a file server that is configuration managed. When requested, the data are most often checked out from the server, downloaded to the PC, processed locally, and checked back in.

As AI has become more popular, using it to improve CAD/CAM user productivity is also being pursued. There has been significant research into design optimization and automated documentation production, with limited success to date. Design optimization relies on one or more engineering analyses to tweak the geometry. Not only are multiple runs needed to optimize the geometry, but the suggested optimization can force changes to the geometry (such as folds and tears) that make it unusable. Automated documentation production aims to create the exploded-view drawings typical of a parts catalog directly from the raw geometry. This seems straightforward because assembly components can easily be moved along an x-, y-, or z-axis. The issue is that an exploded view in a parts catalog shows a disassembly/reassembly sequence, and automated disassembly is a task that has been unsuccessfully researched for decades.

CAD/CAM programs are still evolving with significant amounts of work still needed. The field remains as pertinent and as challenging as it was in the early years.


n5321 | 2025年7月7日 22:25

Boeing R&D

Lately I have been interested in Boeing's R&D system.

As Condit said, “Designing the airplane with no mock-up and doing it all on computer was an order of magnitude change.”


n5321 | 2025年7月7日 22:24

Ansys Electromagnetic Design: Frequently Asked Questions

In RMxprt, how do you parameterize a concentric winding for a single-phase induction motor so that the turns scale up and down in a sinusoidal proportion?

(1) Conductors-per-layer setting: 0, 1, >1

  0: for auto design

  1: use the editor turns

  >1: scale the editor turns based on the max turns


(2) Define a winding variable, such as a scaling coefficient SCW, and parameterize it


(3) When the scaled turn counts come out with decimal places, round them to the nearest whole turn (see the sketch below)
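
A minimal Python sketch of the arithmetic behind steps (2) and (3); the base turn count, the coil count, and the variable name scw are illustrative assumptions, not RMxprt identifiers:

import math

# Hypothetical example: scale the turns of the concentric coils of one pole
# so they follow a sine profile, multiply by the parameterized coefficient
# SCW, and round the result to whole turns.
max_turns = 100   # turns of the largest coil (assumed)
scw = 0.9         # scaling coefficient, the swept design variable
coil_count = 4    # concentric coils per pole (assumed)

turns = []
for k in range(1, coil_count + 1):
    target = scw * max_turns * math.sin(k * math.pi / (2 * coil_count))
    turns.append(round(target))   # decimal turn counts are rounded to integers

print(turns)   # [34, 64, 83, 90] for the values above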


In RMxprt, how do you set up and use the user-defined slot editor for an arbitrary, non-standard slot shape?

1. Open the slot editor

  (1) In the project tree, select Rotor or Stator and double-click to open it

  (2) In the Rotor or Stator Properties window, click the Slot Type button and check User Defined Slot

  (3) Click User Defined Slot, then click OK


2. Configure the slot-editor interface


3. Edit the line segments



How to use the User Defined Data (UDO) feature in detail

1. How to enter User Defined Data

  – Click RMxprt/Design Settings…/User Defined Data

  – Check the Enable box

  – Enter the template file


2. How to load a User Defined Data template

  – Use the (machine_type).temp file under the Examples/RMxprt directory


3. How to use User Defined Data: Fractions

  Fractions: this parameter defines the minimum periodic fraction of the model; it sets how many periods of the model are exported for the one-click Maxwell 2D/3D finite-element design

  Notes

  – Periodicity of the generated model: 0 (minimum model); 1 (full model); 2 (half model); 3 (one-third model); ……

  – When the value is 0, RMxprt automatically computes the minimum periodic model

  Format and default values

  – Fractions, 0

  Supported machine types

  – All machines


4. How to use User Defined Data: ArcBottom

  Notes

  – Slot-bottom shape (core outside the Band): 0 (straight slot bottom); 1 (arc slot bottom, valid only for slot types 3 and 4)

  Format and default values

  – ArcBottom, 0

  Supported machine types

  – All machines


5. How to use User Defined Data: WireResistivity & WireDensity

  Notes

  – When WireResistivity = 0, the default is 0.0217 ohm.mm^2/m (copper wire)

  – When WireDensity = 0, the default is 8900 kg/m^3 (copper wire)

  Format and default values

  – WireResistivity, 0.0217

  – WireDensity, 8900

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)

  – To be added in Three-Phase Induction Motors (IndM3)

  – Single-Phase Induction Motors (IndM1)


6. How to use User Defined Data: LimitedTorque

  Notes

  – When LimitedTorque < ComputedRatedTorque, use ComputedRatedTorque for flux weakening control

  – Only for AC voltage simulated in the frequency domain

  Format and default values

  – LimitedTorque, 0

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)

  – To be added in Three-Phase Induction Motors (IndM3)


7. How to use User Defined Data: ControllingIq & ControllingId

  Notes

  – ControllingIq: controlling q-axis current for dq-current control

  – ControllingId: controlling d-axis current for dq-current control

  – ControllingIq = 0: without dq-current control

  – Only for AC voltage simulated in the frequency domain

  – LimitedTorque will not be used for dq-current control

  Format and default values

  – ControllingIq, 0

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)


8. How to use User Defined Data: TopSpareSpace & BottomSpareSpace

  Notes

  – Used to define top and bottom spare spaces which are occupied by non-working windings for poly-winding adjust-speed induction motors

  – 0 <= TopSpareSpace + BottomSpareSpace < 1

  Format and default values

  – TopSpareSpace, 0

  – BottomSpareSpace, 0

  Supported machine types

  – Three-Phase Induction Motors (IndM3)


9. How to use User Defined Data: SpeedAdjustMode

  Notes

  – Speed adjust mode: 0 (None); 1 (L-Mode); 2 (T-Main); 3 (T-Aux)

  – Winding setup in Maxwell 2D/3D designs only with non-speed-adjust mode

  Format and default values

  – SpeedAdjustMode, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


10. How to use User Defined Data: AdjustTurnRatio

  Notes

  – Turn ratio of the adjusting winding to the original main/aux winding at normal speed

  – For L-Mode: 0 <= AdjustTurnRatio < 1

  – For T-Main & T-Aux: 0 <= AdjustTurnRatio < infinity

  – Winding setup in Maxwell 2D/3D designs only with AdjustTurnRatio = 0

  Format and default values

  – AdjustTurnRatio, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


11. How to use User Defined Data: AuxCoilOnTop

  Notes

  – AuxCoilOnTop = 1: aux. winding on the top in slots

  – AuxCoilOnTop = 0: aux. winding on the bottom in slots

  – For concentric windings only

  Format and default values

  – AuxCoilOnTop, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


12. How to use User Defined Data: CapacitivePF

  Notes

  – CapacitivePF = 1: with capacitive (power factor) electric load

  – CapacitivePF = 0: with inductive (power factor) electric load

  – For Generator Operation Type only

  Format and default values

  – CapacitivePF, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


13. How to use User Defined Data: Connection

  Notes

  – Connection = 0: for wye-connected stator winding

  – Connection = 1: for delta-connected stator winding

  – For three-phase winding only

  Format and default values

  – Connection, 0

  Supported machine types

  – Claw-Pole Synchronous Generators (CPSG)
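
Judging from the "Format and default values" lines above, a user-defined data template appears to be a plain list of "Name, value" lines. A hedged sketch of such a file, using only the defaults listed above (the exact file syntax, and which entries belong in a given (machine_type).temp file, are assumptions):

Fractions, 0
ArcBottom, 0
WireResistivity, 0.0217
WireDensity, 8900
LimitedTorque, 0
ControllingIq, 0

In practice, each (machine_type).temp template would carry only the entries supported by that machine type.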

In RMxprt, how do you select and edit the winding wire-gauge library?

(1) Open the wire library: Machine\wire


(2) Select the wire library: Tools\Options\Machine Options


(3) Customize the wire library: Machine\wire

  After editing, Export:


(4) Activate the custom material library: Tools\Options\Machine Options


(5) Multiple parallel strands: Mixed


How do you convert between the flux densities computed by RMxprt and Maxwell?

1. RMxprt computes an average value, which is the actual flux density in the ferromagnetic material.

  Because of the lamination stacking factor, ventilation ducts, and unequal stator and rotor core lengths, all components must be referred to a single equivalent length in a 2-D finite-element analysis, usually the stator core length.

2. Maxwell computes the detailed flux-density distribution; the contour plots in the results show the length-equivalent distribution, not the actual flux density in the ferromagnetic material.

  Therefore, the Maxwell flux density should first be converted to the actual flux density before being compared with the RMxprt value.

3. Conversion method (a small numerical sketch follows the comparisons below):

  Maxwell flux density = RMxprt flux density × length-equivalence factor

  When RMxprt exports the one-click finite-element model, it applies the length equivalence automatically; the length-equivalence factor is shown in the material properties.

4. Case study

  In this example, the length-equivalence factor for both the stator and the rotor is 0.866, so the data highlighted in red should be compared with the finite-element results.


(1) Stator yoke flux-density comparison


(2) Rotor yoke flux-density comparison


(3) Rotor tooth flux-density comparison


(4) Stator tooth flux-density comparison
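
A minimal numerical sketch of the conversion from item 3, in Python, using the 0.866 factor from this case; the Maxwell reading is an illustrative value, not one taken from the report:

k_length = 0.866                  # length-equivalence factor shown in the material properties
b_maxwell = 1.39                  # flux density read from the Maxwell field plot, in tesla (assumed)
b_actual = b_maxwell / k_length   # actual core flux density, comparable with the RMxprt output
print(round(b_actual, 2))         # 1.61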


What does Embrace (pole-arc coefficient) mean in RMxprt?

Definition of Embrace (pole-arc coefficient): for surface-mounted and flat (bar-type) magnets, it is the ratio of the rotor central angle subtended by the magnet arc on the rotor surface (the inner arc on the rotor side) to the central angle corresponding to one rotor pole, as shown in the figure. Its value lies between 0 and 1.

Note: this option is not available when pole type 4 is selected. For interior (embedded) magnets, the pole-arc coefficient is defined as shown in the figure.
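
As a worked example of the definition above: in an 8-pole rotor each pole corresponds to a central angle of 360°/8 = 45°; if the magnet arc on the rotor surface subtends 36°, then Embrace = 36/45 = 0.8.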

 


What does Offset (pole-arc eccentricity) mean in RMxprt?

Definition: the distance from the rotor center to the center of the pole arc, as shown in Figure 7.12. This option is available for pole types 1–3. Entering 0 means a uniform air gap.


Comparison of 2-D and 3-D skewed-slot motor calculations in ANSYS Maxwell

Skewed slots are very common in motors. Skewing is inherently a 3-D electromagnetic-field problem, but 3-D analysis takes relatively long and consumes substantial computing resources. More than 80% of motor electromagnetic problems can be solved in 2-D, and analyzing skew in 2-D gives good results with much less time and far fewer resources.

Taking the no-load back-EMF analysis of a permanent-magnet motor as an example, compare the 2-D and 3-D results:

(1) Model


(2) 2-D back-EMF analysis (straight slots)


(3) 2-D back-EMF analysis setup (skewed slots)


(4) Distributed (DSO) solve


(5) Skewed-slot results


(6) Comparison of 2-D and 3-D results


(7) Software used for the 2-D and 3-D calculations


(8) Conclusions


Both 2-D and 3-D can handle skewed slots, and their results are very close.

Maxwell 2D + OPT + DSO takes less computation time than Maxwell 3D + MP.

In Maxwell, how do you import parametric-sweep table data from a file?

1. Find the example in the Help documentation


2. Edit the TXT or CSV file


3. Add the parameters from the file


4. Result


5. Solve



In Maxwell, how do you define the hysteresis characteristics of materials such as silicon steel?

1. Core Loss Model -> Hysteresis Model


2. The B-H curve dialog opens automatically; entering Hci is all that is needed.


What New Mesh technologies are available in Maxwell 2D?

1. Maxwell 2D Classic Mesh: skin depth


2. Maxwell 2D TAU Mesh: skin depth


3. Maxwell 2D TAU Mesh / Clone mesh


  


How do you set up ANSYS Maxwell for remote solving?

1. First, install the remote-solve software on both computers


2. Next, register RSM on both computers


3. Then set the Remote solve options in V14/V15 on the local machine, based on the other computer's IP address


4. Set up the Remote interface in ANSYS Maxwell


5. Solve on the computer at the specified IP address


6. Remote solving also supports co-simulation


In Maxwell, how do you define multiple solve jobs and manage them in a queue?

1. Open the Queue option


2. Click Analyze All


3. View the queue list and the solve progress


4. Result: the jobs are queued and solved successfully


Which settings can speed up Maxwell 2D computations?

1. Turn off the 2D report update option for Maxwell 2D transient solves:

  Tools > Options > General Options > Desktop Performance and set Report Update to "On completion".


2. Multi-core parallel acceleration for Maxwell 2D/3D pre- and post-processing

  Processor-count setting: under Tools, as shown in the figure, Number of Processors defaults to 4 (or half the machine's actual core count). This option only affects the preprocessing algorithms used in the pre/post-processing interface and lets them take full advantage of multiple processors.


3. Maxwell 2014 2D: TAU mesh


4. Use periodic models / symmetry boundary conditions and adaptive mesh refinement


5. Choose the time step sensibly and define a dynamic time step



How do you observe the waveform of the flux density at a point as it varies over time?

1. Draw a point anywhere in the model


2. Set up the Expression Cache


3. Results / Create Field Report / Rectangular Plot


4. The field-data waveform appears under Results


n5321 | 2025年7月3日 23:23

Adding a free SSL certificate

Let’s Encrypt is currently the most widely used free SSL certificate service worldwide. It lets your website support https:// access and show the browser's padlock 🔒 indicator.

The setup: Nginx listens on port 443 and handles SSL, while Django handles the application logic.

Step 1: Install certbot on Ubuntu

sudo apt update
sudo apt install certbot python3-certbot-nginx

Step 2: Use certbot to request the certificate automatically

sudo certbot --nginx -d autoem.net -d www.autoem.net

Step 3: Adjust the Nginx configuration for better compatibility, then test and restart (a sketch of the resulting server block follows the commands below):

sudo nginx -t

sudo systemctl restart nginx
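
For reference, a sketch of what the HTTPS server block can look like after certbot edits the Nginx config, assuming Django is served by an upstream such as gunicorn on 127.0.0.1:8000; the upstream address and the proxy headers are assumptions, not this server's actual config:

server {
    listen 443 ssl;
    server_name autoem.net www.autoem.net;

    # certificate files issued by Let's Encrypt / certbot
    ssl_certificate     /etc/letsencrypt/live/autoem.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/autoem.net/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;              # assumed Django (gunicorn) upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;    # tells Django the request arrived over HTTPS
    }
}

server {
    listen 80;
    server_name autoem.net www.autoem.net;
    return 301 https://$host$request_uri;              # redirect plain HTTP to HTTPS
}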


Step 4: Set up automatic renewal (Let’s Encrypt certificates are valid for 90 days)

sudo certbot renew --dry-run


Output:


Saving debug log to /var/log/letsencrypt/letsencrypt.log

Processing /etc/letsencrypt/renewal/autoem.net.conf

Account registered.

Simulating renewal of an existing certificate for autoem.net and www.autoem.net

Congratulations, all simulated renewals succeeded: 

  /etc/letsencrypt/live/autoem.net/fullchain.pem (success)




Recommended settings.py updates:
SECURE_SSL_REDIRECT = True

SESSION_COOKIE_SECURE = True

CSRF_COOKIE_SECURE = True
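
One more setting is usually needed when Django sits behind the Nginx proxy, so that SECURE_SSL_REDIRECT does not loop; this assumes the proxy forwards X-Forwarded-Proto as in the sketch above:

SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")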


n5321 | 2025年7月3日 11:59

About Us

An ordinary electric-machine engineer!
I used to want only to design the best motors; now I fix motor-design tools.
Hoping to help you make sense of electromagnetic concepts, firefight projects, and customize ANSYS Maxwell.

Learn more