关于: "CAE":

Ansys Electromagnetic Design: Frequently Asked Questions

In RMxprt, how do you parameterize the concentric winding of a single-phase induction motor so that the turns per coil scale in a sinusoidal proportion?

(1) Conductors-per-layer setting: 0, 1, or >1

  0: auto design

  1: use the turns entered in the winding editor as-is

  >1: scale the turns entered in the editor relative to the maximum turns


(2) Define a winding variable, e.g. a scaling factor SCW, and parameterize it.


(3) When the scaled turn count has a fractional part, round it to the nearest integer.
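As a minimal sketch (the names SCW, max_turns and the coil layout below are illustrative assumptions, not RMxprt identifiers), the sinusoidal scaling and rounding described above can be checked outside RMxprt like this:

  import math

  def concentric_turns(max_turns, num_coils, scw=1.0):
      """Scale each concentric coil's turns by a sine law and round to integers."""
      turns = []
      for k in range(num_coils):
          # place each coil at the centre of its span within the half pole pitch
          angle = math.pi / 2 * (k + 0.5) / num_coils
          turns.append(round(scw * max_turns * math.sin(angle)))
      return turns

  print(concentric_turns(max_turns=100, num_coils=3, scw=0.9))  # e.g. [23, 64, 87]

Sweeping SCW as a design variable then scales all coils together while keeping the sinusoidal distribution.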


In RMxprt, how do you set up and use the user-defined slot editor for arbitrary non-standard slot shapes?

1. Open the slot editor

  (1) In the Project tree, select Rotor or Stator and double-click it.

  (2) In the Rotor or Stator Properties window, click the Slot Type button and check User Defined Slot.

  (3) Click User Defined Slot, then click OK.


2. Set up the slot editor interface


3. Edit the line segments



A detailed guide to the User Defined Data (UDO) feature

1. How to enter User Defined Data

  – Click RMxprt / Design Settings… / User Defined Data

  – Check the Enable box

  – Enter the template file


2. How to call a User Defined Data template

  – Use the (machine_type).temp file under the Examples/RMxprt directory


3. How to use User Defined Data: Fractions

  Fractions: defines the model periodicity, i.e. which fraction of the machine is exported when the one-click Maxwell 2D/3D finite-element model is generated.

  Notes

  – Fraction of the generated model: 0 (minimum model); 1 (full model); 2 (half model); 3 (one-third model); …

  – When the value is 0, RMxprt automatically computes the minimum model fraction.

  Format and default value

  – Fractions, 0

  Supported machine types

  – All machines


4. How to use User Defined Data: ArcBottom

  Notes

  – Slot-bottom shape (for cores outside the Band): 0 (straight slot bottom); 1 (arc-shaped slot bottom, valid only for slot types 3 and 4)

  Format and default value

  – ArcBottom, 0

  Supported machine types

  – All machines


5. How to use User Defined Data: WireResistivity & WireDensity

  Notes

  – When WireResistivity = 0, the default 0.0217 ohm·mm²/m (copper wire) is used

  – When WireDensity = 0, the default 8900 kg/m³ (copper wire) is used

  Format and default values

  – WireResistivity, 0.0217

  – WireDensity, 8900

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)

  – To be added in Three-Phase Induction Motors (IndM3)

  – Single-Phase Induction Motors (IndM1)


6. How to use User Defined Data: LimitedTorque

  Notes

  – When LimitedTorque < ComputedRatedTorque, use ComputedRatedTorque for flux-weakening control

  – Only for AC voltage simulated in the frequency domain

  Format and default value

  – LimitedTorque, 0

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)

  – To be added in Three-Phase Induction Motors (IndM3)


7. How to use User Defined Data: ControllingIq & ControllingId

  Notes

  – ControllingIq: controlling q-axis current for dq-current control

  – ControllingId: controlling d-axis current for dq-current control

  – ControllingIq = 0: without dq-current control

  – Only for AC voltage simulated in the frequency domain

  – LimitedTorque will not be used for dq-current control

  Format and default value

  – ControllingIq, 0

  Supported machine types

  – Adjust-Speed Synchronous Motors (ASSM)


8. How to use User Defined Data: TopSpareSpace & BottomSpareSpace

  Notes

  – Used to define top and bottom spare spaces which are occupied by non-working windings for poly-winding adjust-speed induction motors

  – 0 <= TopSpareSpace + BottomSpareSpace < 1

  Format and default values

  – TopSpareSpace, 0

  – BottomSpareSpace, 0

  Supported machine types

  – Three-Phase Induction Motors (IndM3)


9. How to use User Defined Data: SpeedAdjustMode

  Notes

  – Speed adjust mode: 0 (None); 1 (L-Mode); 2 (T-Main); 3 (T-Aux)

  – Winding setup in Maxwell 2D/3D designs is supported only for the non-speed-adjust mode

  Format and default value

  – SpeedAdjustMode, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


10. How to use User Defined Data: AdjustTurnRatio

  Notes

  – Turn ratio of the adjusting winding to the original main/aux winding at normal speed

  – For L-Mode: 0 <= AdjustTurnRatio < 1

  – For T-Main & T-Aux: 0 <= AdjustTurnRatio < infinity

  – Winding setup in Maxwell 2D/3D designs is supported only with AdjustTurnRatio = 0

  Format and default value

  – AdjustTurnRatio, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


11. How to use User Defined Data: AuxCoilOnTop

  Notes

  – AuxCoilOnTop = 1: aux. winding on the top in slots

  – AuxCoilOnTop = 0: aux. winding on the bottom in slots

  – For concentric windings only

  Format and default value

  – AuxCoilOnTop, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


12. How to use User Defined Data: CapacitivePF

  Notes

  – CapacitivePF = 1: with capacitive (power factor) electric load

  – CapacitivePF = 0: with inductive (power factor) electric load

  – For the Generator operation type only

  Format and default value

  – CapacitivePF, 0

  Supported machine types

  – Single-Phase Induction Motors (IndM1)


13. How to use User Defined Data: Connection

  Notes

  – Connection = 0: for wye-connected stator winding

  – Connection = 1: for delta-connected stator winding

  – For three-phase windings only

  Format and default value

  – Connection, 0

  Supported machine types

  – Claw-Pole Synchronous Generators (CPSG)
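Pulling the entries above together, a User Defined Data block might look like the sketch below. The values are simply the defaults listed in the items above; the exact keyword spelling, which entries may coexist, and the line syntax should be verified against the (machine_type).temp template shipped under Examples/RMxprt:

  Fractions,        0
  ArcBottom,        0
  WireResistivity,  0.0217
  WireDensity,      8900
  LimitedTorque,    0
  ControllingIq,    0
  TopSpareSpace,    0
  BottomSpareSpace, 0
  SpeedAdjustMode,  0
  AdjustTurnRatio,  0
  AuxCoilOnTop,     0
  CapacitivePF,     0
  Connection,       0

Only the entries applicable to a given machine type are actually used, as indicated by the "Supported machine types" lists above.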

In RMxprt, how do you select and edit the wire-gauge library?

(1) Open the wire library: Machine\wire


(2) Select a wire library: Tools\Options\Machine Options


(3) Customize the wire library (Machine\wire)

  After editing, use Export:


(4) Activate the custom material library: Tools\Options\Machine Options


(5) Multiple parallel wires: Mixed


How do you convert between the flux densities computed by RMxprt and by Maxwell?

1. RMxprt reports average values, which are the actual flux densities in the ferromagnetic material.

  Because of the stacking factor, ventilation ducts, and unequal stator and rotor core lengths, all parts in a 2D finite-element analysis must be referred to one common length, normally the stator core length.

2. Maxwell computes the detailed flux-density distribution; the contour plots in its results show the length-equivalent flux density, not the actual flux density in the ferromagnetic material.

  The Maxwell flux density should therefore first be converted to the actual value before comparing it with the RMxprt result.

3. Conversion:

  Maxwell flux density = RMxprt flux density × length-equivalence coefficient

  When RMxprt exports the one-click finite-element model, it applies the length equivalence automatically; the length-equivalence coefficient is shown in the material properties.
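As a quick sanity check of the conversion outside the tools (function and variable names are illustrative; the 0.866 coefficient comes from the case study below):

  def maxwell_to_actual_b(b_maxwell, length_equiv_coeff):
      """Convert a length-equivalent Maxwell flux density back to the actual
      flux density in the iron, for comparison with the RMxprt value."""
      return b_maxwell / length_equiv_coeff

  # With a coefficient of 0.866, a Maxwell reading of 1.50 T corresponds to
  # roughly 1.73 T of actual flux density in the laminations.
  print(maxwell_to_actual_b(1.50, 0.866))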

4. Case study

  In this example, the length-equivalence coefficient of both the stator and the rotor is 0.866, so the data highlighted in red should be compared with the finite-element results.


(1) Stator yoke flux-density comparison


(2) Rotor yoke flux-density comparison


(3) Rotor tooth flux-density comparison


(4) Stator tooth flux-density comparison


What does Embrace (pole-arc coefficient) mean in RMxprt?

Definition of Embrace (pole-arc coefficient): for surface-mounted and flat (bar-type) magnets, it is the ratio of the rotor central angle subtended by the magnet arc on the rotor surface (the inner arc on the rotor side) to the central angle of one rotor pole, as shown in the figure. Its value lies between 0 and 1.

Note: this option is not available when pole type 4 is selected. For interior (embedded) magnets, the pole-arc coefficient is defined as shown in the figure.
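A small worked example of the definition (the numbers are illustrative, not taken from the figures):

  # Pole-arc coefficient (Embrace) = magnet arc angle / pole pitch angle
  poles = 8
  pole_pitch_deg = 360 / poles          # 45 mechanical degrees per pole
  magnet_arc_deg = 36                   # arc actually covered by one magnet
  embrace = magnet_arc_deg / pole_pitch_deg
  print(embrace)                        # 0.8, which lies between 0 and 1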

 


What does Offset (pole-arc eccentricity) mean in RMxprt?

Definition: the distance from the rotor centre to the centre of the pole-arc circle, as shown in Fig. 7.12. This option is available for pole types 1-3. Entering 0 gives a uniform air gap.


Comparison of 2D and 3D skewed-slot calculations for electric machines in ANSYS Maxwell

Slot skewing is a very common issue in electric machines. Skew is inherently a three-dimensional electromagnetic problem, but 3D field analysis takes relatively long and consumes considerable computing resources. More than 80% of machine electromagnetic problems can be solved in 2D, and analysing skew in 2D gives good results with much less time and far fewer resources.

Taking the no-load back-EMF analysis of a permanent-magnet machine as an example, the 2D and 3D results are compared below.

(1) Model


(2) 2D back-EMF analysis (straight slots)


(3) 2D back-EMF analysis setup (skewed slots)


(4) Distributed (DSO) solving


(5) Skewed-slot results


(6) Comparison of 2D and 3D results


(7) Software used for the 2D and 3D calculations


(8) Conclusions


Both 2D and 3D can handle skewed slots, and their results are very close.

Maxwell 2D + OPT + DSO requires less computation time than Maxwell 3D + MP.
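Maxwell's own 2D skew treatment is configured in the transient setup; as a purely conceptual illustration of the classic multi-slice idea behind 2D skew modelling (not Maxwell's internal implementation), the skewed back-EMF can be approximated by averaging several axially shifted straight-slot solutions:

  import math

  def skewed_backemf(emf_of_angle, skew_deg, n_slices=5):
      """Approximate the back-EMF of a skewed machine by averaging n_slices
      straight-slot solutions, each shifted by a fraction of the total skew
      angle (the classic multi-slice approximation)."""
      shifts = [(-skew_deg / 2) + skew_deg * (k + 0.5) / n_slices
                for k in range(n_slices)]
      return lambda theta: sum(emf_of_angle(theta + s) for s in shifts) / n_slices

  # Toy example: a straight-slot EMF with a 20% 5th harmonic; skewing by one
  # slot pitch (30 deg for a 12-slot stator) attenuates the harmonic.
  emf = lambda th: math.sin(math.radians(th)) + 0.2 * math.sin(5 * math.radians(th))
  skewed = skewed_backemf(emf, skew_deg=30)
  print(round(emf(90), 3), round(skewed(90), 3))  # 1.2 vs about 1.14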

In Maxwell, how do you import parametric sweep table data from a file?

1. Find the example in the Help documentation


2. Edit the TXT or CSV file (see the sketch after this list)


3. Add the parameters from the file


4. Result


5. Solve
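A small script for generating such a sweep file; the variable names and the exact column layout expected by Maxwell's import dialog are assumptions here, so check them against the Help example mentioned in step 1:

  import csv

  # Hypothetical design variables and the values to sweep
  sweep = {
      "magnet_thickness": [3.0, 3.5, 4.0],   # mm
      "air_gap":          [0.5, 0.7, 0.9],   # mm
  }

  with open("sweep_table.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(sweep.keys())            # header row: variable names
      writer.writerows(zip(*sweep.values()))   # one row per design point

  # sweep_table.csv then contains:
  # magnet_thickness,air_gap
  # 3.0,0.5
  # 3.5,0.7
  # 4.0,0.9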



In Maxwell, how do you define the hysteresis characteristics of materials such as silicon steel?

1. Set Core Loss Model -> Hysteresis Model


2. The B-H curve dialog opens automatically; entering Hci is sufficient.


Which New Mesh technologies are available in Maxwell 2D?

1. Maxwell 2D Classic Mesh with skin-depth-based refinement


2. Maxwell 2D TAU Mesh with skin-depth-based refinement


3. Maxwell 2D TAU Mesh / Clone mesh


  


How do you set up ANSYS Maxwell for remote solving?

1. First, the remote-solve software must be installed on both computers.


2. Next, RSM must be registered on both computers.


3. Then, in V14/V15, configure the Remote solve options on the local machine, based on the IP address of the remote computer.


4. Configure the Remote settings dialog in ANSYS Maxwell.


5. Submit the solve to the computer at the specified IP address.


6. Remote solving also supports co-simulation.


In Maxwell, how do you define multiple solve jobs and manage them in a queue?

1. Open the Queue option


2. Click Analyze All


3. Check the queue list and the solve progress


4. Result: the jobs are queued and solved successfully


Which settings can speed up Maxwell 2D calculations?

1. Turn off 2D report updating for Maxwell 2D transient solves:

  Tools > Options > General Options > Desktop Performance, and set Report Update to "On completion".


2. Multi-core acceleration of Maxwell 2D/3D pre- and post-processing

  Number of processors: under Tools, in the dialog shown below, the default value of Number of Processors is 4 (or half the actual number of cores of the machine). This option only affects the pre-/post-processing algorithms in the desktop interface and lets them take full advantage of multiple processors.


3. Maxwell 2014 2D: TAU mesh


4. Use periodic models / symmetry boundary conditions, and use adaptive mesh refinement


5. Choose the time step sensibly, or define a variable time step



How do you observe the waveform of the flux density at a point over time?

1. Draw a point anywhere in the model


2. Set up the Expression Cache


3. Results > Create Field Report > Rectangular Plot


4. The field-quantity waveform is then available under Results
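If the resulting report is exported to a CSV file, the waveform can also be checked with a short script outside Maxwell; the file name and column labels below are placeholders for whatever the export actually produces:

  import csv
  import matplotlib.pyplot as plt

  times, b_values = [], []
  with open("point_B_vs_time.csv") as f:            # hypothetical exported report
      reader = csv.DictReader(f)
      for row in reader:
          times.append(float(row["Time [ms]"]))     # placeholder column names
          b_values.append(float(row["Mag_B [T]"]))

  plt.plot(times, b_values)
  plt.xlabel("Time [ms]")
  plt.ylabel("|B| [T]")
  plt.title("Flux density at the probe point")
  plt.show()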


n5321 | July 3, 2025, 23:23

Maxwell on why to use ANSYS

Address to the Mathematical and Physical Sections of the British Association

James Clerk Maxwell
Liverpool, September 15, 1870
At several of the recent Meetings of the British Association the varied and important business of the Mathematical and Physical Section has been introduced by an Address, the subject of which has been left to the selection of the President for the time being. The perplexing duty of choosing a subject has not, however, fallen to me.
Professor Sylvester, the President of Section A at the Exeter Meeting, gave us a noble vindication of pure mathematics by laying bare, as it were, the very working of the mathematical mind, and setting before us, not the array of symbols and brackets which form the armoury of the mathematician, or the dry results which are only the monuments of his conquests, but the mathematician himself, with all his human faculties directed by his professional sagacity to the pursuit, apprehension, and exhibition of that ideal harmony which he feels to be the root of all knowledge, the fountain of all pleasure, and the condition of all action. The mathematician has, above all things, an eye for symmetry; and Professor Sylvester has not only recognized the symmetry formed by the combination of his own subject with those of the former Presidents, but has pointed out the duties of his successor in the following characteristic note:—
"Mr Spottiswoode favoured the Section, in his opening Address, with a combined history of the progress of Mathematics and Physics; Dr. Tyndall's address was virtually on the limits of Physical Philosophy; the one here in print," says Prof. Sylvester, "is an attempted faint adumbration of the nature of Mathematical Science in the abstract. What is wanting (like a fourth sphere resting on three others in contact) to build up the Ideal Pyramid is a discourse on the Relation of the two branches (Mathematics and Physics) to, their action and reaction upon, one another, a magnificent theme, with which it is to be hoped that some future President of Section A will crown the edifice and make the Tetralogy (symbolizable by A+A', A, A', AA') complete."
The theme thus distinctly laid down for his successor by our late President is indeed a magnificent one, far too magnificent for any efforts of mine to realize. I have endeavoured to follow Mr Spottiswoode, as with far-reaching vision he distinguishes the systems of science into which phenomena, our knowledge of which is still in the nebulous stage, are growing. I have been carried by the penetrating insight and forcible expression of Dr Tyndall into that sanctuary of minuteness and of power where molecules obey the laws of their existence, clash together in fierce collision, or grapple in yet more fierce embrace, building up in secret the forms of visible things. I have been guided by Prof. Sylvester towards those serene heights
"Where never creeps a cloud, or moves a wind, Nor ever falls the least white star of snow, Nor ever lowest roll of thunder moans, Nor sound of human sorrow mounts to mar Their sacred everlasting calm."
But who will lead me into that still more hidden and dimmer region where Thought weds Fact, where the mental operation of the mathematician and the physical action of the molecules are seen in their true relation? Does not the way to it pass through the very den of the metaphysician, strewed with the remains of former explorers, and abhorred by every man of science? It would indeed be a foolhardy adventure for me to take up the valuable time of the Section by leading you into those speculations which require, as we know, thousands of years even to shape themselves intelligibly.
But we are met as cultivators of mathematics and physics. In our daily work we are led up to questions the same in kind with those of metaphysics; and we approach them, not trusting to the native penetrating power of our own minds, but trained by a long-continued adjustment of our modes of thought to the facts of external nature.
As mathematicians, we perform certain mental operations on the symbols of number or of quantity, and, by proceeding step by step from more simple to more complex operations, we are enabled to express the same thing in many different forms. The equivalence of these different forms, though a necessary consequence of self-evident axioms, is not always, to our minds, self-evident; but the mathematician, who by long practice has acquired a familiarity with many of these forms, and has become expert in the processes which lead from one to another, can often transform a perplexing expression into another which explains its meaning in more intelligible language.
As students of Physics we observe phenomena under varied circumstances, and endeavour to deduce the laws of their relations. Every natural phenomenon is, to our minds, the result of an infinitely complex system of conditions. What we set ourselves to do is to unravel these conditions, and by viewing the phenomenon in a way which is in itself partial and imperfect, to piece out its features one by one, beginning with that which strikes us first, and thus gradually learning how to look at the whole phenomenon so as to obtain a continually greater degree of clearness and distinctness. In this process, the feature which presents itself most forcibly to the untrained inquirer may not be that which is considered most fundamental by the experienced man of science; for the success of any physical investigation depends on the judicious selection of what is to be observed as of primary importance, combined with a voluntary abstraction of the mind from those features which, however attractive they appear, we are not yet sufficiently advanced in science to investigate with profit.
Intellectual processes of this kind have been going on since the first formation of language, and are going on still. No doubt the feature which strikes us first and most forcibly in any phenomenon, is the pleasure or the pain which accompanies it, and the agreeable or disagreeable results which follow after it. A theory of nature from this point of view is embodied in many of our words and phrases, and is by no means extinct even in our deliberate opinions.
It was a great step in science when men became convinced that, in order to understand the nature of things, they must begin by asking, not whether a thing is good or bad, noxious or beneficial, but of what kind is it? and how much is there of it? Quality and Quantity were then first recognized as the primary features to be observed in scientific inquiry.
As science has been developed, the domain of quantity has everywhere encroached on that of quality, till the process of scientific inquiry seems to have become simply the measurement and registration of quantities, combined with a mathematical discussion of the numbers thus obtained. It is this scientific method of directing our attention to those features of phenomena which may be regarded as quantities which brings physical research under the influence of mathematical reasoning. In the work of the Section we shall have abundant examples of the successful application of this method to the most recent conquests of science; but I wish at present to direct your attention to some of the reciprocal effects of the progress of science on those elementary conceptions which are sometimes thought to be beyond the reach of change.
If the skill of the mathematician has enabled the experimentalist to see that the quantities which he has measured are connected by necessary relations, the discoveries of physics have revealed to the mathematician new forms of quantities which he could never have imagined for himself.
Of the methods by which the mathematician may make his labours most useful to the student of nature, that which I think is at present most important is the systematic classification of quantities.
The quantities which we study in mathematics and physics may be classified in two different ways. The student who wishes to master any particular science must make himself familiar with the various kinds of quantities which belong to that science. When he understands all the relations between these quantities, he regards them as forming a connected system, and he classes the whole system of quantities together as belonging to that particular science. This classification is the most natural from a physical point of view, and it is generally the first in order of time.
Why did "science" go from Sai xiansheng (赛先生, "Mr. Science") to kexue (科学)!
But when the student has become acquainted with several different sciences, he finds that the mathematical processes and trains of reasoning in one science resemble those in another so much that his knowledge of the one science may be made a most useful help in the study of the other.
When he examines into the reason of this, he finds that in the two sciences he has been dealing with systems of quantities, in which the mathematical forms of the relations of the quantities are the same in both systems, though the physical nature of the quantities may be utterly different.
He is thus led to recognize a classification of quantities on a new principle, according to which the physical nature of the quantity is subordinated to its mathematical form. This is the point of view which is characteristic of the mathematician; but it stands second to the physical aspect in order of time, because the human mind, in order to conceive of different kinds of quantities, must have them presented to it by nature.
I do not here refer to the fact that all quantities, as such, are subject to the rules of arithmetic and algebra, and are therefore capable of being submitted to those dry calculations which represent, to so many minds, their only idea of mathematics.
The human mind is seldom satisfied, and is certainly never exercising its highest functions, when it is doing the work of a calculating machine. What the man of science, whether he is a mathematician or a physical inquirer, aims at is, to acquire and develope clear ideas of the things he deals with. For this purpose he is willing to enter on long calculations, and to be for a season a calculating machine, if he can only at last make his ideas clearer.
But if he finds that clear ideas are not to be obtained by means of processes the steps of which he is sure to forget before he has reached the conclusion, it is much better that he should turn to another method, and try to understand the subject by means of well-chosen illustrations derived from subjects with which he is more familiar.
We all know how much more popular the illustrative method of exposition is found, than that in which bare processes of reasoning and calculation form the principal subject of discourse.
Now a truly scientific illustration is a method to enable the mind to grasp some conception or law in one branch of science, by placing before it a conception or a law in a different branch of science, and directing the mind to lay hold of that mathematical form which is common to the corresponding ideas in the two sciences, leaving out of account for the present the difference between the physical nature of the real phenomena.
The correctness of such an illustration depends on whether the two systems of ideas which are compared together are really analogous in form, or whether, in other words, the corresponding physical quantities really belong to the same mathematical class. When this condition is fulfilled, the illustration is not only convenient for teaching science in a pleasant and easy manner, but the recognition of the formal analogy between the two systems of ideas leads to a knowledge of both, more profound than could be obtained by studying each system separately.
There are men who, when any relation or law, however complex, is put before them in a symbolical form, can grasp its full meaning as a relation among abstract quantities. Such men sometimes treat with indifference the further statement that quantities actually exist in nature which fulfil this relation. The mental image of the concrete reality seems rather to disturb than to assist their contemplations. But the great majority of mankind are utterly unable, without long training, to retain in their minds the unembodied symbols of the pure mathematician, so that, if science is ever to become popular, and yet remain scientific, it must be by a profound study and a copious application of those principles of the mathematical classification of quantities which, as we have seen, lie at the root of every truly scientific illustration.
There are, as I have said, some minds which can go on contemplating with satisfaction pure quantities presented to the eye by symbols, and to the mind in a form which none but mathematicians can conceive.
There are others who feel more enjoyment in following geometrical forms, which they draw on paper, or build up in the empty space before them.
Others, again, are not content unless they can project their whole physical energies into the scene which they conjure up. They learn at what a rate the planets rush through space, and they experience a delightful feeling of exhilaration. They calculate the forces with which the heavenly bodies pull at one another, and they feel their own muscles straining with the effort.
To such men momentum, energy, mass are not mere abstract expressions of the results of scientific inquiry. They are words of power, which stir their souls like the memories of childhood.
For the sake of persons of these different types, scientific truth should be presented in different forms, and should be regarded as equally scientific whether it appears in the robust form and the vivid colouring of a physical illustration, or in the tenuity and paleness of a symbolical expression.
Time would fail me if I were to attempt to illustrate by examples the scientific value of the classification of quantities. I shall only mention the name of that important class of magnitudes having direction in space which Hamilton has called vectors, and which form the subject-matter of the Calculus of Quaternions, a branch of mathematics which, when it shall have been thoroughly understood by men of the illustrative type, and clothed by them with physical imagery, will become, perhaps under some new name, a most powerful method of communicating truly scientific knowledge to persons apparently devoid of the calculating spirit.
The mutual action and reaction between the different departments of human thought is so interesting to the student of scientific progress, that, at the risk of still further encroaching on the valuable time of the Section, I shall say a few words on a branch of physics which not very long ago would have been considered rather a branch of metaphysics. I mean the atomic theory, or, as it is now called, the molecular theory of the constitution of bodies.
Not many years ago if we had been asked in what regions of physical science the advance of discovery was least apparent, we should have pointed to the hopelessly distant fixed stars on the one hand, and to the inscrutable delicacy of the texture of material bodies on the other.
Indeed, if we are to regard Comte as in any degree representing the scientific opinion of his time, the research into what takes place beyond our own solar system seemed then to be exceedingly unpromising, if not altogether illusory.
The opinion that the bodies which we see and handle, which we can set in motion or leave at rest, which we can break in pieces and destroy, are composed of smaller bodies which we cannot see or handle, which are always in motion, and which can neither be stopped nor broken in pieces, nor in any way destroyed or deprived of the least of their properties, was known by the name of the Atomic theory. It was associated with the names of Democritus, Epicurus, and Lucretius, and was commonly supposed to admit the existence only of atoms and void, to the exclusion of any other basis of things from the universe.
In many physical reasonings and mathematical calculations we are accustomed to argue as if such substances as air, water, or metal, which appear to our senses uniform and continuous, were strictly and mathematically uniform and continuous.
We know that we can divide a pint of water into many millions of portions, each of which is as fully endowed with all the properties of water as the whole pint was; and it seems only natural to conclude that we might go on subdividing the water for ever, just as we can never come to a limit in subdividing the space in which it is contained. We have heard how Faraday divided a grain of gold into an inconceivable number of separate particles, and we may see Dr Tyndall produce from a mere suspicion of nitrite of butyle an immense cloud, the minute visible portion of which is still cloud, and therefore must contain many molecules of nitrite of butyle.
But evidence from different and independent sources is now crowding in upon us which compels us to admit that if we could push the process of subdivision still further we should come to a limit, because each portion would then contain only one molecule, an individual body, one and indivisible, unalterable by any power in nature.
Even in our ordinary experiments on very finely divided matter we find that the substance is beginning to lose the properties which it exhibits when in a large mass, and that effects depending on the individual action of molecules are beginning to become prominent.
The study of these phenomena is at present the path which leads to the development of molecular science.
That superficial tension of liquids which is called capillary attraction is one of these phenomena. Another important class of phenomena are those which are due to that motion of agitation by which the molecules of a liquid or gas are continually working their way from one place to another, and continually changing their course, like people hustled in a crowd.
On this depends the rate of diffusion of gases and liquids through each other, to the study of which, as one of the keys of molecular science, that unwearied inquirer into nature's secrets, the late Prof. Graham, devoted such arduous labour.
The rate of electrolytic conduction is, according to Wiedemann's theory, influenced by the same cause; and the conduction of heat in fluids depends probably on the same kind of action. In the case of gases, a molecular theory has been developed by Clausius and others, capable of mathematical treatment, and subjected to experimental investigation; and by this theory nearly every known mechanical property of gases has been explained on dynamical principles; so that the properties of individual gaseous molecules are in a fair way to become objects of scientific research.
Now Mr Stoney has pointed out¹ that the numerical results of experiments on gases render it probable that the mean distance of their particles at the ordinary temperature and pressure is a quantity of the same order of magnitude as a millionth of a millimetre, and Sir William Thomson has since² shewn, by several independent lines of argument, drawn from phenomena so different in themselves as the electrification of metals by contact, the tension of soap-bubbles, and the friction of air, that in ordinary solids and liquids the average distance between contiguous molecules is less than the hundred-millionth, and greater than the two-thousand-millionth of a centimetre.
These, of course, are exceedingly rough estimates, for they are derived from measurements some of which are still confessedly very rough; but if at the present time, we can form even a rough plan for arriving at results of this kind, we may hope that, as our means of experimental inquiry become more accurate and more varied, our conception of a molecule will become more definite, so that we may be able at no distant period to estimate its weight with a greater degree of precision.
A theory, which Sir W. Thomson has founded on Helmholtz's splendid hydrodynamical theorems, seeks for the properties of molecules in the ring vortices of a uniform, frictionless, incompressible fluid. Such whirling rings may be seen when an experienced smoker sends out a dexterous puff of smoke into the still air, but a more evanescent phenomenon it is difficult to conceive. This evanescence is owing to the viscosity of the air; but Helmholtz has shewn that in a perfect fluid such a whirling ring, if once generated, would go on whirling for ever, would always consist of the very same portion of the fluid which was first set whirling, and could never be cut in two by any natural cause. The generation of a ring-vortex is of course equally beyond the power of natural causes, but once generated, it has the properties of individuality, permanence in quantity, and indestructibility. It is also the recipient of impulse and of energy, which is all we can affirm of matter; and these ring-vortices are capable of such varied connexions and knotted self-involutions, that the properties of differently knotted vortices must be as different as those of different kinds of molecules can be.
If a theory of this kind should be found, after conquering the enormous mathematical difficulties of the subject, to represent in any degree the actual properties of molecules, it will stand in a very different scientific position from those theories of molecular action which are formed by investing the molecule with an arbitrary system of central forces invented expressly to account for the observed phenomena.
In the vortex theory we have nothing arbitrary, no central forces or occult properties of any other kind. We have nothing but matter and motion, and when the vortex is once started its properties are all determined from the original impetus, and no further assumptions are possible.
Even in the present undeveloped state of the theory, the contemplation of the individuality and indestructibility of a ring-vortex in a perfect fluid cannot fail to disturb the commonly received opinion that a molecule, in order to be permanent, must be a very hard body.
In fact one of the first conditions which a molecule must fulfil is, apparently, inconsistent with its being a single hard body. We know from those spectroscopic researches which have thrown so much light on different branches of science, that a molecule can be set into a state of internal vibration, in which it gives off to the surrounding medium light of definite refrangibility—light, that is, of definite wave-length and definite period of vibration. The fact that all the molecules (say, of hydrogen) which we can procure for our experiments, when agitated by heat or by the passage of an electric spark, vibrate precisely in the same periodic time, or, to speak more accurately, that their vibrations are composed of a system of simple vibrations having always the same periods, is a very remarkable fact.
I must leave it to others to describe the progress of that splendid series of spectroscopic discoveries by which the chemistry of the heavenly bodies has been brought within the range of human inquiry. I wish rather to direct your attention to the fact that, not only has every molecule of terrestrial hydrogen the same system of periods of free vibration, but that the spectroscopic examination of the light of the sun and stars shews that, in regions the distance of which we can only feebly imagine, there are molecules vibrating in as exact unison with the molecules of terrestrial hydrogen as two tuning-forks tuned to concert pitch, or two watches regulated to solar time.
Now this absolute equality in the magnitude of quantities, occurring in all parts of the universe, is worth our consideration.
The dimensions of individual natural bodies are either quite indeterminate, as in the case of planets, stones, trees, &c., or they vary within moderate limits, as in the case of seeds, eggs, &c.; but even in these cases small quantitative differences are met with which do not interfere with the essential properties of the body.
Even crystals, which are so definite in geometrical form, are variable with respect to their absolute dimensions.
Among the works of man we sometimes find a certain degree of uniformity. There is a uniformity among the different bullets which are cast in the same mould, and the different copies of a book printed from the same type.
If we examine the coins, or the weights and measures, of a civilized country, we find a uniformity, which is produced by careful adjustment to standards made and provided by the state. The degree of uniformity of these national standards is a measure of that spirit of justice in the nation which has enacted laws to regulate them and appointed officers to test them.
This subject is one in which we, as a scientific body, take a warm interest; and you are all aware of the vast amount of scientific work which has been expended, and profitably expended, in providing weights and measures for commercial and scientific purposes.
The earth has been measured as a basis for a permanent standard of length, and every property of metals has been investigated to guard against any alteration of the material standards when made. To weigh or measure any thing with modern accuracy, requires a course of experiment and calculation in which almost every branch of physics and mathematics is brought into requisition.
Yet, after all, the dimensions of our earth and its time of rotation, though, relatively to our present means of comparison, very permanent, are not so by any physical necessity. The earth might contract by cooling, or it might be enlarged by a layer of meteorites falling on it, or its rate of revolution might slowly slacken, and yet it would continue to be as much a planet as before.
But a molecule, say of hydrogen, if either its mass or its time of vibration were to be altered in the least, would no longer be a molecule of hydrogen.
If, then, we wish to obtain standards of length, time, and mass which shall be absolutely permanent, we must seek them not in the dimensions, or the motion, or the mass of our planet, but in the wave-length, the period of vibration, and the absolute mass of these imperishable and unalterable and perfectly similar molecules.
When we find that here, and in the starry heavens, there are innumerable multitudes of little bodies of exactly the same mass, so many, and no more, to the grain, and vibrating in exactly the same time, so many times, and no more, in a second, and when we reflect that no power in nature can now alter in the least either the mass or the period of any one of them, we seem to have advanced along the path of natural knowledge to one of those points at which we must accept the guidance of that faith by which we understand that "that which is seen was not made of things which do appear."
One of the most remarkable results of the progress of molecular science is the light it has thrown on the nature of irreversible processes—processes, that is, which always tend towards and never away from a certain limiting state. Thus, if two gases be put into the same vessel, they become mixed, and the mixture tends continually to become more uniform. If two unequally heated portions of the same gas are put into the vessel, something of the kind takes place, and the whole tends to become of the same temperature. If two unequally heated solid bodies be placed in contact, a continual approximation of both to an intermediate temperature takes place.
In the case of the two gases, a separation may be effected by chemical means; but in the other two cases the former state of things cannot be restored by any natural process.
In the case of the conduction or diffusion of heat the process is not only irreversible, but it involves the irreversible diminution of that part of the whole stock of thermal energy which is capable of being converted into mechanical work.
This is Thomson's theory of the irreversible dissipation of energy, and it is equivalent to the doctrine of Clausius concerning the growth of what he calls Entropy.
The irreversible character of this process is strikingly embodied in Fourier's theory of the conduction of heat, where the formulae themselves indicate, for all positive values of the time, a possible solution which continually tends to the form of a uniform diffusion of heat.
But if we attempt to ascend the stream of time by giving to its symbol continually diminishing values, we are led up to a state of things in which the formula has what is called a critical value; and if we inquire into the state of things the instant before, we find that the formula becomes absurd.
We thus arrive at the conception of a state of things which cannot be conceived as the physical result of a previous state of things, and we find that this critical condition actually existed at an epoch not in the utmost depths of a past eternity, but separated from the present time by a finite interval.
This idea of a beginning is one which the physical researches of recent times have brought home to us, more than any observer of the course of scientific thought in former times would have had reason to expect.
But the mind of man is not, like Fourier's heated body, continually settling down into an ultimate state of quiet uniformity, the character of which we can already predict; it is rather like a tree, shooting out branches which adapt themselves to the new aspects of the sky towards which they climb, and roots which contort themselves among the strange strata of the earth into which they delve. To us who breathe only the spirit of our own age, and know only the characteristics of contemporary thought, it is as impossible to predict the general tone of the science of the future as it is to anticipate the particular discoveries which it will make.
Physical research is continually revealing to us new features of natural processes, and we are thus compelled to search for new forms of thought appropriate to these features. Hence the importance of a careful study of those relations between mathematics and Physics which determine the conditions under which the ideas derived from one department of physics may be safely used in forming ideas to be employed in a new department.
The figure of speech or of thought by which we transfer the language and ideas of a familiar science to one with which we are less acquainted may be called Scientific Metaphor.
Thus the words Velocity, Momentum, Force, &c. have acquired certain precise meanings in Elementary Dynamics. They are also employed in the Dynamics of a Connected System in a sense which, though perfectly analogous to the elementary sense, is wider and more general.
These generalized forms of elementary ideas may be called metaphorical terms in the sense in which every abstract term is metaphorical. The characteristic of a truly scientific system of metaphors is that each term in its metaphorical use retains all the formal relations to the other terms of the system which it had in its original use. The method is then truly scientific—that is, not only a legitimate product of science, but capable of generating science in its turn.
There are certain electrical phenomena, again, which are connected together by relations of the same form as those which connect dynamical phenomena. To apply to these the phrases of dynamics with proper distinctions and provisional reservations is an example of a metaphor of a bolder kind; but it is a legitimate metaphor if it conveys a true idea of the electrical relations to those who have been already trained in dynamics.
Suppose, then, that we have successfully introduced certain ideas belonging to an elementary science by applying them metaphorically to some new class of phenomena. It becomes an important philosophical question to determine in what degree the applicability of the old ideas to the new subject may be taken as evidence that the new phenomena are physically similar to the old.
The best instances for the determination of this question are those in which two different explanations have been given of the same thing.
The most celebrated case of this kind is that of the corpuscular and the undulatory theories of light. Up to a certain point the phenomena of light are equally well explained by both; beyond this point, one of them fails.
To understand the true relation of these theories in that part of the field where they seem equally applicable we must look at them in the light which Hamilton has thrown upon them by his discovery that to every brachistochrone problem there corresponds a problem of free motion, involving different velocities and times, but resulting in the same geometrical path. Professor Tait has written a very interesting paper on this subject.
According to a theory of electricity which is making great progress in Germany, two electrical particles act on one another directly at a distance, but with a force which, according to Weber, depends on their relative velocity, and according to a theory hinted at by Gauss, and developed by Riemann, Lorenz, and Neumann, acts not instantaneously, but after a time depending on the distance. The power with which this theory, in the hands of these eminent men, explains every kind of electrical phenomena must be studied in order to be appreciated.
Another theory of electricity, which I prefer, denies action at a distance and attributes electric action to tensions and pressures in an all-pervading medium, these stresses being the same in kind with those familiar to engineers, and the medium being identical with that in which light is supposed to be propagated.
Both these theories are found to explain not only the phenomena by the aid of which they were originally constructed, but other phenomena, which were not thought of or perhaps not known at the time; and both have independently arrived at the same numerical result, which gives the absolute velocity of light in terms of electrical quantities.
That theories apparently so fundamentally opposed should have so large a field of truth common to both is a fact the philosophical importance of which we cannot fully appreciate till we have reached a scientific altitude from which the true relation between hypotheses so different can be seen.
I shall only make one more remark on the relation between Mathematics and Physics. In themselves, one is an operation of the mind, the other is a dance of molecules. The molecules have laws of their own, some of which we select as most intelligible to us and most amenable to our calculation. We form a theory from these partial data, and we ascribe any deviation of the actual phenomena from this theory to disturbing causes. At the same time we confess that what we call disturbing causes are simply those parts of the true circumstances which we do not know or have neglected, and we endeavour in future to take account of them. We thus acknowledge that the so-called disturbance is a mere figment of the mind, not a fact of nature, and that in natural action there is no disturbance.
But this is not the only way in which the harmony of the material with the mental operation may be disturbed. The mind of the mathematician is subject to many disturbing causes, such as fatigue, loss of memory, and hasty conclusions; and it is found that, from these and other causes, mathematicians make mistakes.
I am not prepared to deny that, to some mind of a higher order than ours, each of these errors might be traced to the regular operation of the laws of actual thinking; in fact we ourselves often do detect, not only errors of calculation, but the causes of these errors. This, however, by no means alters our conviction that they are errors, and that one process of thought is right and another process wrong.
One of the most profound mathematicians and thinkers of our time, the late George Boole, when reflecting on the precise and almost mathematical character of the laws of right thinking as compared with the exceedingly perplexing though perhaps equally determinate laws of actual and fallible thinking, was led to another of those points of view from which Science seems to look out into a region beyond her own domain.
"We must admit," he says, "that there exist laws" (of thought) "which even the rigour of their mathematical forms does not preserve from violation. We must ascribe to them an authority, the essence of which does not consist in power, a supremacy which the analogy of the inviolable order of the natural world in no way assists us to comprehend."

Footnotes from the original text:
¹ Phil. Mag., Aug. 1868. ² Nature, March 31, 1870.


n5321 | July 3, 2025, 11:04

ANSYS Electric Machine Design Column

  Electric machine design is a complex multiphysics problem involving electromagnetics, structures, fluids, thermal behaviour, and control. With the development of new materials, new manufacturing processes, and new machine technologies, the requirements on machine design are becoming ever more demanding and the required accuracy ever higher. Traditional design methods and tools can no longer meet the needs of modern machine design, and modern simulation technology is needed to solve these design challenges.

  To address the trends towards permanent-magnet, high-speed, brushless, digital, integrated, intelligent, and high-efficiency machines, and the technical challenges that come with them, ANSYS provides integrated design solutions and workflows that efficiently cover machine and drive/control system design across multiple domains and levels: from magnetic-circuit methods to finite elements, from components to systems, and from electromagnetics to multiphysics coupling.

  The integrated ANSYS machine design workflow mainly includes:

  1. Rapid machine design and concept selection: the magnetic-circuit design tool RMxprt quickly evaluates and optimizes initial machine designs, narrows the design space, and exports, with one click, 2D or 3D finite-element models and a system simulation model of the machine for later use;

  2. Accurate electromagnetic finite-element optimization: Maxwell 2D or 3D electromagnetic finite-element simulation, combined with the built-in external circuit or a Simplorer control circuit, is used to simulate and fine-tune the finite-element machine model and to export an equivalent-circuit model for later use;

  3. Integrated electric-drive system design: Simplorer simulates the machine and its control system, combined with SCADE automatic generation of embedded control code, Maxwell field-circuit coupling and transient co-simulation, and Q3D extraction of parasitic parameters for cables, busbars, and IGBTs, to simulate and optimize the whole electric-drive system with high accuracy;

  4. Coupled electromagnetic-thermal analysis: Maxwell exports the machine geometry and distributed losses to tools such as Mechanical or FLUENT for thermal simulation, enabling one-way or two-way electromagnetic-thermal coupling to predict the temperature rise of the machine under various operating conditions and to optimize the cooling system;

  5. Coupled electromagnetic-vibration-noise analysis: Maxwell exports the machine geometry to Mechanical, and the automated ANSYS electromagnetic-vibration-noise coupling workflow in Workbench is used to conveniently analyse the structural stress, deformation, and vibration and noise of the machine under various operating conditions.


Machine design

  In line with the trends towards permanent-magnet, brushless, high-speed, and high-efficiency machines, and the corresponding R&D needs and technical challenges, the ANSYS machine design column covers all aspects of machine design, including: rapid machine design, initial concept evaluation, and optimization based on magnetic-circuit methods; accurate analysis and parametric/optimization design based on transient electromagnetic finite-element analysis; thermal, stress, and deformation analysis based on finite elements; fluid thermal analysis and cooling-system optimization based on the finite-volume method; multiphysics design based on one-way or two-way electromagnetic-thermal-structural coupling; and coupled design based on the automated electromagnetic-vibration-noise workflow. Quickly optimizing traditional machine designs delivers high efficiency and energy savings; efficiently exploring and accumulating design experience for brushless and permanent-magnet machines supports the move to brushless and PM designs; and optimizing the electromagnetic and multiphysics behaviour of machines at high speed enables high-speed operation.

Multiphysics coupled design of machine electromagnetics, structure, and thermal behaviour

Electromagnetic design

● One-click finite elements

● Hysteresis material modelling

● Electromagnetic optimization

● Core loss calculation

● Eddy-current loss calculation

● High-performance computing

Structural design

● Stress and deformation

● Modal analysis

● Rotor dynamics and critical speeds

● Shaft deflection and strength calculation

● Machine assembly

● Fatigue life

Thermal design

● Heat conduction and temperature-rise analysis

● Ventilation and cooling flow analysis

● Cooling system design

● Thermal stress and thermal deformation

Multiphysics coupled design

● Electromagnetic heating

● Ventilation cooling

● Thermal stress and thermal deformation

● Vibration and noise


Electromagnetic design

  The electromagnetic design products built around Maxwell and RMxprt can quickly build machine models; compute the quantities of interest in machine design such as the magnetic field and flux-density distributions, torque-angle characteristics, and inductances; and obtain the distributions of electromagnetic heating, electromagnetic force, and electromagnetic torque.

  One-click finite elements: RMxprt can generate parameterized 2D and 3D finite-element models with one click, including automatic geometry creation, material assignment, mesh settings, boundary conditions, external-circuit generation, and solve setup. The user can then solve with one click, avoiding tedious finite-element operations and focusing directly on machine design and optimization, which greatly simplifies the design flow.

  Hysteresis material modelling: Maxwell's pioneering 2D/3D hysteresis material modelling and accurate transient electromagnetic finite-element analysis can precisely analyse core losses and the transient electromagnetic performance of hysteresis machines under various operating conditions, including hysteresis loss, torque characteristics, and power balance.

  Electromagnetic optimization: the parametric/optimization algorithms built into RMxprt and Maxwell make it easy to run parametric sweeps and optimizations over model parameters and operating conditions such as geometry, material properties, and excitation magnitude and frequency, for customized single-objective or combined multi-objective optimization. Users can perform rapid machine design and wide-range parametric sweeps in RMxprt to select the best design, and use Maxwell 2D/3D transient electromagnetic finite-element analysis to evaluate and optimize the transient electromagnetic performance, efficiency, cost, and so on of the machine under various operating conditions.

  Core and eddy-current loss calculation: Maxwell can accurately compute machine losses under normal and fault conditions, including winding copper loss, lamination iron loss, and eddy-current losses in magnets and conductors. These losses strongly affect machine efficiency, cooling and temperature rise, and permanent-magnet performance; computing them helps both to optimize efficiency for energy saving and to optimize the cooling design to reduce temperature rise.

  High-performance computing: the entire Maxwell transient electromagnetic finite-element process (meshing, matrix solving, field data processing, etc.) supports multithreaded parallel solving with high CPU utilization, making full use of the hardware and greatly accelerating the simulation of a single design. Parametric/optimization studies support multi-node parallel solving with near-linear scaling across nodes, greatly accelerating the simulation of many design variants.


Customized development

  ANSYS customization tools provide built-in one-click post-processing tools (UDO and ToolKit) dedicated to machine electromagnetic design and optimization. In addition, Maxwell and Q3D offer various customization tools that make the ANSYS interface friendlier and operation more convenient, so that a dedicated personal platform and interface can be tailored for each user more efficiently.

Typical machine electromagnetic design flow

  Machine design toolkits: UDO and ToolKit are customized toolkits built into Maxwell for machine design. UDO can output the machine's electromagnetic performance data directly after the finite-element analysis finishes; ToolKit can produce, with one click, Ld/Lq and efficiency maps for permanent-magnet and induction machines as well as torque-speed curves, using the MTPA control algorithm and accounting for the influence of temperature, frequency-dependent AC resistance, skew, and frequency-dependent core-loss coefficients on machine performance.
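As background for the MTPA (maximum torque per ampere) control mentioned above, a minimal sketch of the underlying idea; the machine parameters below are illustrative placeholders, not toolkit output:

  import math

  def mtpa_angle_deg(i_s, psi_m, ld, lq, pole_pairs):
      """Brute-force search for the current advance angle (measured from the
      q-axis) that maximizes torque at a fixed stator current magnitude, using
      the standard dq torque equation T = 1.5*p*(psi_m*iq + (Ld - Lq)*id*iq)."""
      def torque(gamma_deg):
          g = math.radians(gamma_deg)
          i_d = -i_s * math.sin(g)   # demagnetizing d-axis current component
          i_q = i_s * math.cos(g)
          return 1.5 * pole_pairs * (psi_m * i_q + (ld - lq) * i_d * i_q)
      return max(range(0, 91), key=torque)

  # Illustrative IPM parameters: 0.1 Wb magnet flux linkage, Ld = 0.3 mH,
  # Lq = 0.9 mH, 4 pole pairs, 100 A peak current -> about 24 degrees.
  print(mtpa_angle_deg(i_s=100, psi_m=0.1, ld=0.3e-3, lq=0.9e-3, pole_pairs=4))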

  Machine equivalent-circuit model extraction: based on the transient electromagnetic finite-element model of the machine, a customized Maxwell toolkit can automatically extract a nonlinear equivalent-circuit model of the machine. The model describes the relationship between flux linkage and current with partial differential equations and can be imported directly into Simplorer for system simulation, allowing faster and better analysis of the interaction between the machine and its control system.

  Extraction of permanent-magnet thermal demagnetization parameters: based on the demagnetization curves of the permanent magnet at different temperatures, the customized Maxwell PM thermal-demagnetization toolkit can automatically extract the parameters α and β that relate the intrinsic demagnetization curve to the normal demagnetization curve, and then use these two parameters in Maxwell and Fluent to simulate the one-way or two-way thermal demagnetization behaviour of the permanent-magnet machine.

  Cable parasitic-parameter extraction: the customized Maxwell/Q3D cable design toolkit can automatically, quickly, and efficiently build and solve parameterized geometric models. Through electromagnetic performance analysis, design optimization, high-performance computing, and extraction of electromagnetic parameters and system models, it links cable design with electric-drive system design and conducted-interference analysis, helping to achieve high-accuracy electric-drive system design.

  Machine design navigator: for the individual R&D needs of machine manufacturers, ANSYS can customize fully or semi-automatic machine design workflows and the corresponding toolkits, Chinese-language design interfaces, material libraries, product design reports, and so on, greatly accelerating machine development. The Chinese-language design interface can include design-flow descriptions, specification input, retrieval of historical designs, initial design analysis, accurate electromagnetic analysis, design reports, drawing generation, document archiving, and design-standard lookup.

  Machine design platform: for the individual R&D needs of machine manufacturers, ANSYS can provide a customized machine development platform that seamlessly integrates the various customized sub-projects and also offers a fully customized machine design workflow and development environment. In this environment, different machines have different design flows and automated simulation toolkits tailored to the user's actual needs, which greatly improves productivity and speeds up product development.



In RMxprt 2014, how do you parameterize the concentric winding of a single-phase induction motor so that the turns scale in a sinusoidal proportion?

In RMxprt 2014, how do you set up and use the user-defined slot editor for arbitrary non-standard slot shapes?

A detailed guide to the User Defined Data (UDO) feature

In RMxprt 2014, how do you select and edit the wire-gauge library?

How do you convert between the flux densities computed by RMxprt and by Maxwell?

What does Embrace (pole-arc coefficient) mean in RMxprt?

What does Offset (pole-arc eccentricity) mean in RMxprt?

Comparison of 2D and 3D skewed-slot calculations for electric machines in ANSYS Maxwell

In Maxwell 2014, how do you import parametric sweep table data from a file?

How do you define the hysteresis characteristics of materials such as silicon steel in Maxwell 2014?

Which New Mesh technologies are available in Maxwell 2D 2014?

How do you set up ANSYS Maxwell for remote solving?

In Maxwell 2014, how do you define multiple solve jobs and manage them in a queue?

Which settings can speed up Maxwell 2D calculations?

How do you observe the waveform of the flux density at a point over time?



n5321 | July 2, 2025, 22:56

taken the drudgery out of the design process

The computer has changed the practice of engineering forever. In the simplest terms, it has taken the drudgery out of the design process.


“In the words of James Clerk Maxwell: ‘the human mind is seldom satisfied, and is certainly never exercising its highest functions, when it is doing the work of a calculating machine.’”


James Clerk Maxwell,  Five of Maxwell's Papers

 But is the student of science to be withdrawn from the study of man, or cut off from every noble feeling, so long as he lives in intellectual fellowship with men who have devoted their lives to the discovery of truth, and the results of whose enquiries have impressed themselves on the ordinary speech and way of thinking of men who never heard their names? Or is the student of history and of man to omit from his consideration the history of the origin and diffusion of those ideas which have produced so great a difference between one age of the world and another? 
Source: Gutenberg
 I shall only make one more remark on the relation between Mathematics and Physics. In themselves, one is an operation of the mind, the other is a dance of molecules. The molecules have laws of their own, some of which we select as most intelligible to us and most amenable to our calculation. We form a theory from these partial data, and we ascribe any deviation of the actual phenomena from this theory to disturbing causes. 
Source: Gutenberg
 I do not here refer to the fact that all quantities, as such, are subject to the rules of arithmetic and algebra, and are therefore capable of being submitted to those dry calculations which represent, to so many minds, their only idea of mathematics.
The human mind is seldom satisfied, and is certainly never exercising its highest functions, when it is doing the work of a calculating machine. What the man of science, whether he is a mathematician or a physical inquirer, aims at is, to acquire and develope clear ideas of the things he deals with.
 
Source: Gutenberg
 The aim of an experiment of illustration is to throw light upon some scientific idea so that the student may be enabled to grasp it. The circumstances of the experiment are so arranged that the phenomenon which we wish to observe or to exhibit is brought into prominence, instead of being obscured and entangled among other phenomena, as it is when it occurs in the ordinary course of nature. To exhibit illustrative experiments, to encourage others to make them, and to cultivate in every way the ideas on which they throw light, forms an important part of our duty. 
Source: Gutenberg
 The quantities which we study in mathematics and physics may be classified in two different ways.
The student who wishes to master any particular science must make himself familiar with the various kinds of quantities which belong to that science. When he understands all the relations between these quantities, he regards them as forming a connected system, and he classes the whole system of quantities together as belonging to that particular science. This classification is the most natural from a physical point of view, and it is generally the first in order of time.
 
Source: Gutenberg
 But why should we labour to prove the advantage of practical science to the University? Let us rather speak of the help which the University may give to science, when men well trained in mathematics and enjoying the advantages of a well-appointed Laboratory, shall unite their efforts to carry out some experimental research which no solitary worker could attempt. 
Source: Gutenberg
 The irreversible character of this process is strikingly embodied in Fourier's theory of the conduction of heat, where the formulae themselves indicate, for all positive values of the time, a possible solution which continually tends to the form of a uniform diffusion of heat.
But if we attempt to ascend the stream of time by giving to its symbol continually diminishing values, we are led up to a state of things in which the formula has what is called a critical value; and if we inquire into the state of things the instant before, we find that the formula becomes absurd.
 
Source: Gutenberg
 We may perhaps tire our eyes and weary our backs, but we do not greatly fatigue our minds.
It is not till we attempt to bring the theoretical part of our training into contact with the practical that we begin to experience the full effect of what Faraday has called "mental inertia"—not only the difficulty of recognising, among the concrete objects before us, the abstract relation which we have learned from books, but the distracting pain of wrenching the mind away from the symbols to the objects, and from the objects back to the symbols.
 
Source: Gutenberg
 That, by working the nut on the axis, we can make the order of colours either red, yellow, green, blue, or the reverse. When the order of colours is in the same direction as the rotation, it indicates that the axis of the instrument is that of greatest moment of inertia. 4thly. That if we screw the two pairs of opposite horizontal bolts to different distances from the axis, the path of the instantaneous pole will no longer be equidistant from the axis, but will describe an ellipse, whose longer axis is in the direction of the mean axis of the instrument. 
Source: Gutenberg
 But the great majority of mankind are utterly unable, without long training, to retain in their minds the unembodied symbols of the pure mathematician, so that, if science is ever to become popular, and yet remain scientific, it must be by a profound study and a copious application of those principles of the mathematical classification of quantities which, as we have seen, lie at the root of every truly scientific illustration. 
Source: Gutenberg
 Two theories of the constitution of bodies have struggled for victory with various fortunes since the earliest ages of speculation: one is the theory of a universal plenum, the other is that of atoms and void. 
Source: Gutenberg
 Now a truly scientific illustration is a method to enable the mind to grasp some conception or law in one branch of science, by placing before it a conception or a law in a different branch of science, and directing the mind to lay hold of that mathematical form which is common to the corresponding ideas in the two sciences, leaving out of account for the present the difference between the physical nature of the real phenomena. 
Source: Gutenberg
 Investigations of this kind, combined with a study of various phenomena of diffusion and of dissipation of energy, have recently added greatly to the evidence in favour of the hypothesis that bodies are systems of molecules in motion.
I hope to be able to lay before you in the course of the term some of the evidence for the existence of molecules, considered as individual bodies having definite properties. The molecule, as it is presented to the scientific imagination, is a very different body from any of those with which experience has hitherto made us acquainted.
 
Source: Gutenberg
 When we mix together blue and yellow paint, we obtain green paint. This fact is well known to all who have handled colours; and it is universally admitted that blue and yellow make green. Red, yellow, and blue, being the primary colours among painters, green is regarded as a secondary colour, arising from the mixture of blue and yellow. Newton, however, found that the green of the spectrum was not the same thing as the mixture of two colours of the spectrum, for such a mixture could be separated by the prism, while the green of the spectrum resisted further decomposition. 
Source: Gutenberg
 This characteristic of modern experiments—that they consist principally of measurements,—is so prominent, that the opinion seems to have got abroad, that in a few years all the great physical constants will have been approximately estimated, and that the only occupation which will then be left to men of science will be to carry on these measurements to another place of decimals. 
Source: Gutenberg
 Even in the present undeveloped state of the theory, the contemplation of the individuality and indestructibility of a ring-vortex in a perfect fluid cannot fail to disturb the commonly received opinion that a molecule, in order to be permanent, must be a very hard body. 
Source: Gutenberg
 It is probable that important results will be obtained by the application of this method, which is as yet little known and is not familiar to our minds. If the actual history of Science had been different, and if the scientific doctrines most familiar to us had been those which must be expressed in this way, it is possible that we might have considered the existence of a certain kind of contingency a self-evident truth, and treated the doctrine of philosophical necessity as a mere sophism. 
Source: Gutenberg
 Such, then, were some of the scientific results which followed in this case from bringing together mathematical power, experimental sagacity, and manipulative skill, to direct and assist the labours of a body of zealous observers. If therefore we desire, for our own advantage and for the honour of our University, that the Devonshire Laboratory should be successful, we must endeavour to maintain it in living union with the other organs and faculties of our learned body. 
Source: Gutenberg


n5321 | July 1, 2025, 23:34

Failed Promises

For some time now, many of the most prominent and colorful pages in Mechanical Engineering magazine have been filled by advertisements for computer software. However, there is a difference between the most recent ads and those of just a few years earlier. In 1990, for example, many software developers emphasized the reliability and ease of use of their packages, with one declaring itself the “most reliable way to take the heat, handle the pressure, and cope with the stress” while another promised to provide “trusted solutions to your design challenges.”

More recent advertising copy is a bit more subdued, with fewer implied promises that the software is going to do the work of the engineer—or take the heat or responsibility. The newer message is that the buck stops with the engineer. Software packages might provide “the right tool for the job,” but the engineer works the tool. A sophisticated system might be “the ultimate testing ground for your ideas,” but the ideas are no longer the machine’s, they are the engineer’s. Options may abound in software packages, but the engineer makes a responsible choice. This is as it should be, of course, but things are not always as they should be, and that is no doubt why there have been subtle and sometimes not-so-subtle changes in technical software marketing and its implied promises.

Civil Engineering has also run software advertisements, albeit less prominent and colorful ones. Their messages, explicit or implicit, are more descriptive than promising. Nevertheless, the advertisements also contain few caveats about limitations, pitfalls, or downright errors that might be encountered in using prepackaged, often general-purpose software for a specific engineering design or analysis.

The implied optimism of the software advertisements stands in sharp contrast to the concerns about the use of software that have been expressed with growing frequency in the pages of the same engineering magazines. The American Society of Civil Engineers, publisher of Civil Engineering and a host of technical journals and publications full of theoretical and applied discussions of computers and their uses, has among its many committees one on “guidelines for avoiding failures caused by misuse of civil engineering software.” The committee’s parent organization, the Technical Council on Forensic Engineering, was the sponsor of a cautionary session on computer use at the society’s 1992 annual meeting, and one presenter titled his paper, “Computers in Civil Engineering: A Time Bomb!” In simultaneous sessions at the same meeting, other equally fervid engineers were presenting computer-aided designs and analyses of structures of the future.

There is no doubt that computer-aided design, manufacturing, and engineering have provided benefits to the profession and to humankind. Engineers are attempting and completing more complex and time-consuming analyses that involve many steps (and therefore opportunities for error) and that might not have been considered practicable in slide-rule days. New hardware and software have enabled more ambitious and extensive designs to be realized, including some of the dramatic structures and ingenious machines that characterize the late twentieth century. Today’s automobiles, for example, possess better crashworthiness and passenger protection because of advanced finite-element modeling, in which a complex structure such as a stylish car body is subdivided into more manageable elements, much as we might construct a gracefully curving walkway out of a large number of rectilinear bricks.

For all the achievements made possible by computers, there is growing concern in the engineering-design community that there are numerous pitfalls that can be encountered using software packages. All software begins with some fundamental assumptions that translate to fundamental limitations, but these are not always displayed prominently in advertisements. Indeed, some of the limitations of software might be equally unknown to the vendor and to the customer.
Perhaps the most damaging limitation is that it can be misused or used inappropriately by an inexperienced or overconfident engineer. The surest way to drive home the potential dangers of misplaced reliance on computer software is to recite the incontrovertible evidence of failures of structures, machines, and systems that are attributable to use or misuse of software.

One such incident occurred in the North Sea in August 1991, when the concrete base of a massive Norwegian oil platform, designated Sleipner A, was being tested for leaks and mechanical operation prior to being mated with its deck. The base of the structure consisted of two dozen circular cylindrical reinforced-concrete cells. Some of the cells were to serve as drill shafts, others as storage tanks for oil, and the remainder as ballast tanks to place and hold the platform on the sea bottom. Some of the tanks were being filled with water when the operators heard a loud bang, followed by significant vibrations and the sound of a great amount of running water. After eight minutes of trying to control the water intake, the crew abandoned the structure. About eighteen minutes after the first bang was heard, Sleipner A disappeared into the sea, and forty-five seconds later a seismic event that registered a 3 on the Richter scale was recorded in southern Norway. The event was the massive concrete base striking the sea floor.

An investigation of the structural design of Sleipner A’s base found that the differential pressure on the concrete walls was too great where three cylindrical shells met and left a triangular void open to the full pressure of the sea. It is precisely in the vicinity of such complex geometry that computer-aided analysis can be so helpful, but the geometry must be modeled properly. Investigators found that “unfavorable geometrical shaping of some finite elements in the global analysis … in conjunction with the subsequent post-processing of the analysis results … led to underestimation of the shear forces at the wall supports by some 45%.” (Whether or not due to the underestimation of stresses, inadequate steel reinforcement also contributed to the weakness of the design.) In short, no matter how sound and reliable the software may have been, its improper and incomplete use led to a structure that was inadequate for the loads to which it was subjected.

In its November 1991 issue, the trade journal Offshore Engineer reported that the errors in analysis of Sleipner A “should have been picked up by internal control procedures before construction started.” The investigators also found that “not enough attention was given to the transfer of experience from previous projects.” In particular, trouble with an earlier platform, Statfjord A, which suffered cracking in the same critical area, should have drawn attention to the flawed detail. (A similar neglect of prior experience occurred, of course, just before the fatal Challenger accident, when the importance of previous O-ring problems was minimized.) Prior experience with complex engineering systems is not easily built into general software packages used to design advanced structures and machines. Such experience often does not exist before the software is applied, and it can be gained only by testing the products designed by the software.
A consortium headed by the Netherlands Foundation for the Coordination of Maritime Research once scheduled a series of full-scale collisions between a single- and a double-hulled ship “to test the [predictive] validity of computer modelling analysis and software.” Such drastic measures are necessary because makers and users of software and computer models cannot ignore the sine qua non of sound engineering—broad experience with what happens in and what can go wrong in the real world.

Computer software is being used more and more to design and control large and complex systems, and in these cases it may not be the user who is to blame for accidents. Advanced aircraft such as the F-22 fighter jet employ on-board computers to keep the plane from becoming aerodynamically unstable during maneuvers. When an F-22 crashed during a test flight in 1993, according to a New York Times report, “a senior Air Force official suggested that the F-22’s computer might not have been programmed to deal with the precise circumstances that the plane faced just before it crash-landed.” What the jet was doing, however, was not unusual for a test flight. During an approach about a hundred feet above the runway, the afterburners were turned on to begin an ascent—an expected maneuver for a test pilot—when “the plane’s nose began to bob up and down violently.” The Times reported the Air Force official as saying, “It could have been a computer glitch, but we just don’t know.”

Those closest to questions of software safety and reliability worry a good deal about such “fly by wire” aircraft. They also worry about the growing use of computers to control everything from elevators to medical devices. The concern is not that computers should not control such things, but rather that the design and development of the software must be done with the proper checks and balances and tests to ensure reliability as much as is humanly possible.

A case study that has become increasingly familiar to software designers unfolded during the mid-1980s, when a series of accidents plagued a high-powered medical device, the Therac-25. The Therac-25 was designed by Atomic Energy of Canada Limited (AECL) to accelerate and deliver a beam of electrons at up to 25 mega-electron-volts to destroy tumors embedded in living tissue. By varying the energy level of the electrons, tumors at different depths in the body could be targeted without significantly affecting surrounding healthy tissue, because beams of higher energy delivered the maximum radiation dose deeper in the body and so could pass through the healthy parts. Predecessors of the Therac-25 had lower peak energies and were less compact and versatile. When they were designed in the early 1970s, various protective circuits and mechanical interlocks to monitor radiation prevented patients from receiving an overdose. These earlier machines were later retrofitted with computer control, but the electrical and mechanical safety devices remained in place. Computer control was incorporated into the Therac-25 from the outset. Some safety features that had depended on hardware were replaced with software monitoring.
“This approach,” according to Nancy Leveson, a leading software safety and reliability expert, and a student of hers, Clark Turner, “is becoming more common as companies decide that hardware interlocks and backups are not worth the expense, or they put more faith (perhaps misplaced) on software than on hardware reliability.” Furthermore, when hardware is still employed, it is often controlled by software.

In their extensive investigation of the Therac-25 case, Leveson and Turner recount the device’s accident history, which began in Marietta, Georgia. On June 3, 1985, at the Kennestone Regional Oncology Center, the Therac-25 was being used to provide follow-up radiation treatment for a woman who had undergone a lumpectomy. When she reported being burned, the technician told her it was impossible for the machine to do that, and she was sent home. It was only after a couple of weeks that it became evident the patient had indeed suffered a severe radiation burn. It was later estimated she received perhaps two orders of magnitude more radiation than that normally prescribed. The woman lost her breast and the use of her shoulder and arm, and she suffered great pain.

About three weeks after the incident in Georgia, another woman was undergoing Therac-25 treatment at the Ontario Cancer Foundation for a carcinoma of the cervix when she complained of a burning sensation. Within four months she died of a massive radiation overdose. Four additional cases of overdose occurred, three resulting in death. Two of these were at the Yakima Valley Memorial Hospital in Washington, in 1985 and 1987, and two at the East Texas Cancer Center, in Tyler, in March and April 1986. These latter cases are the subject of the title tale of a collection of horror stories on design, technology, and human error, Set Phasers on Stun, by Steven Casey.

Leveson and Turner relate the details of each of the six Therac-25 cases, including the slow and sometimes less-than-forthright process whereby the most likely cause of the overdoses was uncovered. They point out that “concluding that an accident was the result of human error is not very helpful and meaningful,” and they provide an extensive analysis of the problems with the software controlling the machine. According to Leveson and Turner, “Virtually all complex software can be made to behave in an unexpected fashion under certain conditions,” and this is what appears to have happened with the Therac-25.

Although they admit that to the day of their writing “some unanswered questions” remained, Leveson and Turner report in considerable detail what appears to have been a common feature in the Therac-25 accidents. The parameters for each patient’s prescribed treatment were entered at the computer keyboard and displayed on the screen before the operator. There were two fundamental modes of treatment, X ray (employing the machine’s full 25 mega-electron-volts) and the relatively low-power electron beam. The first was designated by typing in an “x” and the latter by an “e.” Occasionally, and evidently in at least some if not all of the accident cases, the Therac operator mistyped an “x” for an “e,” but noticed the error before triggering the beam. An “edit” of the input data was performed by using the “arrow up” key to move the cursor to the incorrect entry, changing it, and then returning to the bottom of the screen, where a “beam ready” message was the operator’s signal to enter an instruction to proceed, administering the radiation dose.
Unfortunately, in some cases the editing was done so quickly by the fast-typing operators that not all of the machine’s functions were properly reset before the treatment was triggered. Exactly how much overdose was administered, and thus whether it was fatal, depended upon the installation, since “the number of pulses delivered in the 0.3 second that elapsed before interlock shutoff varied because the software adjusted the start-up pulse-repetition frequency to very different values on different machines.”

Anomalous, eccentric, sometimes downright bizarre, and always unexpected behavior of computers and their software is what ties together the horror stories that appear in each issue of Software Engineering Notes, an “informal newsletter” published quarterly by the Association for Computing Machinery. Peter G. Neumann, chairman of the ACM Committee on Computers and Public Policy, is the moderator of the newsletter’s regular department, “Risks to the Public in Computers and Related Systems,” in which contributors pass on reports of computer errors and glitches in applications ranging from health care systems to automatic teller machines. Neumann also writes a regular column, “Inside Risks,” for the magazine Communications of the ACM, in which he discusses some of the more generic problems with computers and software that prompt the many horror tales that get reported in newspapers, magazines, and professional journals and on electronic bulletin boards.

Unfortunately, a considerable amount of the software involved in computer-related failures and malfunctions reported in such forums is produced anonymously, packaged in a black box, and poorly documented. The Therac-25 software, for example, was designed by a programmer or programmers about whom no information was forthcoming, even during a lawsuit brought against AECL. Engineers and others who use such software might reflect upon how contrary to normal scientific and engineering practice its use can be.

Responsible engineers and scientists approach new software, like a new theory, with healthy skepticism. Increasingly often, however, there is no such skepticism when the most complicated of software is employed to solve the most complex problems. No software can ever be proven with absolute certainty to be totally error-free, and thus its design, construction, and use should be approached as cautiously as that of any major structure, machine, or system upon which human lives depend. Although the reputation and track record of software producers and their packages can be relied upon to a reasonable extent, good engineering involves checking them out. If the black box cannot be opened, a good deal of confidence in it and understanding of its operation can be inferred by testing. The proof tests to which software is subjected should involve the simple and ordinary as well as the complex and bizarre. A lot more might be learned about a finite-element package, for example, by solving a problem whose solution is already known rather than by solving one whose answer is unknown. In the former case, something might be inferred about the limitations of the black box; in the latter, the output from the black box might bedazzle rather than enlighten.

In the final analysis it is the proper attention to detail—in the human designer’s mind as well as in the computer software—that causes the most complex and powerful applications to work properly. A fundamental activity of engineering and science is making promises in the form of designs and theories, so it is not fair to discredit computer software solely on the basis that it promises to be a reliable and versatile problem-solving tool or trusted machine operator. Nevertheless, users should approach all software with prudent caution and healthy skepticism, for the history of science and engineering, including the still-young history of software engineering, is littered with failed promises.


n5321 | 2025年6月19日 07:03

Diss CAE


I started my career doing FE modeling and analysis with ANSYS and NASTRAN. Sometimes I miss these days. Thinking about how to simplify a real world problem so far that it is solvable with the computational means available was always fun. Then pushing quads around for hours until the mesh was good had an almost meditative effect. But I don't feel overwhelmingly eager to learn a new software or language.

Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.

The only new player seems to be COMSOL. Has anyone experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?




I've used ansys daily for over a decade, and the only movement is in how they name their license tiers. It's a slow muddy death march. Every year I'm fighting the software more and more; the salesmen are clearly at the wheel.

They buy "vertical aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, that they bought ten years ago, because they want to push a competitive product with 20% of the features.

I've been using nastran for half as long but it isn't much better. It's all sales.

I dabbled a bit in abaqus, that seems nice. Probably because I just dabbled in it.

But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they get away with.


I've gone Abaqus > Ansys > Abaqus/LS-DYNA over my career and hate Ansys with a fiery passion. It's the easiest one to run your first model in, but when you start applying it to real problems it's a fully adversarial relationship. The fact you have to make a complete copy of the geometry/mesh to a new Workbench "block" to run a slightly different load case (and you can't read in orphaned results files) is just horrible.

Abaqus is more difficult to get up to speed in, but it's really nice from an advanced usability standpoint. They struggle due to cost though; it is hugely expensive and we've had to fight hard to keep it time and time again.

LS-Dyna is similar to Abaqus (though I'm not fully up in it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.


I don't know how long ago you used ansys, and I definitely don't want to sell it, but you can share geometry/mesh between those "blocks" (by dragging blocks on top of each other), and you can read in orphaned result files.


> Still no viable open-source solution.

For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straight forward Python code like this[2]. When I say fairly straight forward, I mean it follows the math pretty closely, it's not exactly high-school level stuff.

[1]: https://fenicsproject.org/

[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
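To give a flavor of how closely such code can follow the math, here is a minimal, library-free sketch (my own illustration, not FEniCS itself): piecewise-linear elements on a uniform mesh of [0, 1] for the 1D Poisson problem -u'' = f with u(0) = u(1) = 0, assembled and solved with NumPy against a manufactured solution.

```python
# Minimal 1D FEM sketch: -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# piecewise-linear ("hat") elements on a uniform mesh. Illustrative only.
import numpy as np

n_el = 20                       # number of elements (assumed)
n_nodes = n_el + 1
x = np.linspace(0.0, 1.0, n_nodes)
h = x[1] - x[0]

f = lambda s: np.pi**2 * np.sin(np.pi * s)   # source term for exact u = sin(pi x)

K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
F = np.zeros(n_nodes)              # global load vector

# Element-by-element assembly
ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness, linear elements
for e in range(n_el):
    nodes = [e, e + 1]
    xm = 0.5 * (x[e] + x[e + 1])           # one-point (midpoint) quadrature
    fe = f(xm) * (h / 2.0) * np.ones(2)    # local load vector (hat functions = 1/2 at midpoint)
    for a in range(2):
        F[nodes[a]] += fe[a]
        for b in range(2):
            K[nodes[a], nodes[b]] += ke[a, b]

# Apply homogeneous Dirichlet BCs by solving on the interior nodes only
interior = np.arange(1, n_nodes - 1)
u = np.zeros(n_nodes)
u[interior] = np.linalg.solve(K[np.ix_(interior, interior)], F[interior])

print("max nodal error vs exact sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))
```

FEniCS automates essentially these steps (basis functions, quadrature, assembly) from a near-verbatim statement of the weak form, which is what the tutorial linked above demonstrates on a larger elasticity problem.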


Interesting. Please bear with me as this is going off 25 year old memories, but my memory is that the workflow for using FEA tools was: Model in some 3D modelling engineering tool (e.g. SolidWorks), ansys to run FEA, iterate if needed, prototype, iterate.

So to have anything useful, you need that entire pipeline? For hobbyists, I assume we need this stack. What are the popular modelling tools?


To get started with Fenics you can maybe use the FEATool GUI, which makes it easier to set up FEA models, and also export Python simulation scripts to learn or modify the Fenics syntax [1].

[1]: https://www.featool.com/tutorial/2017/06/16/Python-Multiphys...


Yeah not my domain so wouldn't really know. For FEniCS I know Gmsh[1] was used. There's some work[2][3] been done to integrate FEniCS with FreeCAD. It seems FreeCAD also supports[4] other FEM solvers.

But, I guess you get what you pay for in this space still.

[1]: https://gmsh.info/

[2]: https://github.com/qingfengxia/Cfd

[3]: https://github.com/qingfengxia/FenicsSolver

[4]: https://wiki.freecad.org/FEM_Solver


You can export other CAD meshes for use in it


> For hobbyists, I assume we need this stack.

Just curious what kind of hobby leads to a finite element analysis?


Electronics (when you start to care about EMI or antenna design), model airplanes (for aerodynamics), rocketry, machining (especially if you want to get into SPIF), robotics, 3-D printing (especially for topology optimization), basically anything that deals with designing solid structures in the physical world. Also, computer graphics, including video games.

Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.


There are some obvious downsides and exceptions to this sentiment, but on balance, I really appreciate how the expansive access to information via the internet has fostered this phenomenon: where an unremarkable fella with a dusty media studies degree, a well-equipped garage, and probably too much free time can engineer and construct robotic machines, implement/tweak machine vision mechanisms, microwave radio transceivers, nanometer-scale measurements using laser diodes and optical interferometry, deep-sky astrophotography, etc., etc.. Of course, with burgeoning curiosity and expanding access to surplus university science lab equipment, comes armchair experts and the potential for insufferability[0]. It’s crucial to maintain perspective and be mindful of just how little any one person (especially a person with a media studies degree) can possibly know.

[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]


> comes armchair experts and the potential for insufferability

Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.

By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.

As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).

And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.

Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.


IMO, the mathematics underlying finite element methods and related subjects — finite element exterior calculus comes immediately to mind — are interesting enough to constitute a hobby in their own right.


FEniCs is mostly used by academic researchers, I used it for FEM modelling in magnetic for e.g. where the sorts of problems we wanted to solve you can’t do in a commercial package.


COMSOL's big advantage is it ties together a lot of different physics regimes together and makes it very easy to couple different physics together. Want to do coupled structures/fluid? Or coupled electromagnetism/mechanical? Its probably the easiest one to use.

Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc solvers out there - but they're all made by different vendors and don't play nicely with each other.


> The only new player seems to be COMSOL

Ouch. I kind of know Comsol because it was already taught in my engineering school 15 years ago, so that it still counts as a “new entrant” really gives an idea of how slow the field evolves.


The COMSOL company was started in 1986....


It used to be called FEMLAB :)

But they changed to COMSOL because they didn't have the trademark in Japan and FEM also gave associations to the feminine gender.


I am hoping this open source FEM library will catch on : https://www.dealii.org/. The deal in deal.II stands for Differential Equation Analysis Library.

It's written in C++, makes heavy use of templates and been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.

The documentation is vast, the examples are numerous and the library interfaces with other libraries like Petsc, Trilinos etc. You can output results to a variety of formats.

I believe support for triangle and tetrahedral elements has been added only recently. In spite of this, one quirk of the library is that meshes are called "triangulations".


I've worked with COMSOL (I have a smaller amount of ANSYS experience to compare to). For the most part I preferred COMSOL's UI and workflow and leveraged a lot of COMSOL's scripting capabilities which was handy for a big but procedural geometry I had (I don't know ANSYS's capabilities for that). They of course largely do the same stuff. If you have easy access to COMSOL to try it out I'd recommend it just for the experience. I've found sometimes working with other tools make me recognize some capabilities or technique that hadn't clicked for me yet.


Once you have a mesh that's "good enough", you can use any number of numeric solvers. COMSOL has a very good mesher, and a competent geometry editor. It's scriptable, and their solvers are also very good.

There might be better programs for some problems, but COMSOL is quite nice.


OpenFOAM seems like an open-source option but I have found it rather impenetrable - there are some YouTube videos and PDF tutorials, but they are quite dense and specific and don't seem to cover the entire pipeline

Happy to hear if people have good resources!


> Still no viable open-source solution.

Wait? What? NASTRAN was originally developed by NASA and open sourced over two decades ago. Is this commercial software built on top that is closed source?

I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.


There's a bunch of open source fem solvers e.g. Calculix, Code_Aster, OpenRadioss and probably a few unmaintained forks of (NASA) NASTRAN, but there's no multiphysics package I don't think.


These are at least capable of thermomechanical with fluid-structure coupling. Not all-physics but still multi. True that things with multi species diffusion or electromagnetics are missing, but maybe Elmer can fill the gap.


Abaqus is pretty big too. I've worked with both Ansys and Abaqus and I generally prefer the latter.


Abaqus is up there with Ansys as well, as others have mentioned.


As a recovering fe modeler, I understand completely.


I work in this field and it really is stagnant and dominated by high-priced Ansys/etc. For some reason Silicon Valley's open-source culture hasn't touched it. For open source, there's CalculiX, which is full of bugs, and Code_Aster, which everybody I've heard from says is too confusing to use. CalculiX has PrePoMax as a fairly new and popular pre/post.






n5321 | 2025年6月15日 23:43

Diss: Eighty Years of the Finite Element Method (2022)

Eighty Years of the Finite Element Method (2022) (springer.com)
203 points by sandwichsphinx | 102 comments



I've been a full-time FEM Analyst for 15 years now. It's generally a nice article, though in my opinion paints a far rosier picture of the last couple decades than is warranted.

Actual, practical use of FEM has been stagnant for quite some time. There have been some nice stability improvements to the numerical algorithms that make highly nonlinear problems a little easier; solvers are more optimized; and hardware is of course dramatically more capable (flash storage has been a godsend).

Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems. They have some nice results on the world's simplest "laboratory" problem, but accuracy is abysmal on most real-world problems - e.g. it might give good results on a cylinder in simple tension, but fails horribly when adding bending.

There's still nothing better, but looking back I'm pretty surprised I'm still basically doing things the same way I was as an Engineer 1; and not for lack of trying. I've been on countless development projects that seem promising but just won't validate in the real world.

Industry focus has been far more on Verification and Validation (ASME V&V 10/20/40) which has done a lot to point out the various pitfalls and limitations. Academic research and the software vendors haven't been particularly keen to revisit the supposedly "solved" problems we're finding.


I'm a mechanical engineer, and I've been wanting to better understand the computational side of the tools I use every day. Do you have any recommendations for learning resources if one wanted to "relearn" FEA from a computer science perspective?


I learned it for the first time from this[0] course; part of the course covers deal.ii[1] where you program the stuff you're learning in C++.

[0]: https://open.umich.edu/find/open-educational-resources/engin...

[1]: https://www.dealii.org/


Start with FDM. Solve Bernoulli deflection of a beam
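In that spirit, here is a small finite-difference sketch (my own illustration, with assumed values for E, I, L and q) for a simply supported Euler-Bernoulli beam under uniform load. It splits EI w'''' = q into two second-order solves, M'' = -q and EI w'' = -M, and checks the midspan deflection against the textbook value 5qL^4/(384EI).

```python
# Finite-difference sketch: simply supported Euler-Bernoulli beam, uniform load.
# EI w'''' = q is split into M'' = -q (with M(0)=M(L)=0) and EI w'' = -M (with w(0)=w(L)=0).
import numpy as np

E, I = 210e9, 8.0e-6      # assumed: steel, arbitrary second moment of area [m^4]
L, q = 4.0, 10e3          # assumed: 4 m span, 10 kN/m uniform load
n = 201                   # grid points
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Central second-difference matrix for interior points (boundary values are zero)
main = -2.0 * np.ones(n - 2)
off = np.ones(n - 3)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

# Step 1: bending moment from M'' = -q
M = np.zeros(n)
M[1:-1] = np.linalg.solve(D2, -q * np.ones(n - 2))

# Step 2: deflection from EI w'' = -M
w = np.zeros(n)
w[1:-1] = np.linalg.solve(D2, -M[1:-1] / (E * I))

print("FDM midspan deflection [mm]:", 1e3 * w[n // 2])
print("5qL^4/(384EI)         [mm]:", 1e3 * 5 * q * L**4 / (384 * E * I))
```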


Have a look at FEniCs to start with.


>Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems

Even Arnold's work? FEEC seemed quite promising last time I was reading about it, but never seemed to get much traction in the wider FEM world.


I kind of thought Neural Operators were slotting into some of the problem domains where FEM is used (based on recent work in weather modelling, cloth modelling, etc) and thought there was some sort of FEM -> NO lineage. Did I completely misunderstand that whole thing?


Those are definitely up next in the flashy-new-thing pipeline and I'm not that up to speed on them yet.

Another group within my company is evaluating them right now and the early results seem to be "not very accurate, but directionally correct and very fast", so there may be some value in non-FEM experts using them to quickly tell if A or B is a better design; but they will still need a more proper analysis in more accurate tools.

It's still early though and we're just starting to see the first non-research solvers hitting the market.


Very curious, we are getting good results with PiNN and operators, what's your domain?


I was under the impression that the linear systems that come out of FEM methods are in some cases being solved by neural networks (or partially, e.g. as a preconditioner in an iterative scheme), but I don't know the details.


Stagnant the last 15 years??? Contact elements, bolt preload, modeling individual composite fibers, delamination and progressive ply failure, modeling layers of material to a few thousandths of an inch. Design optimization. ANSYS Workbench = FEA For Dummies. The list goes on.


Have you heard of physics informed neural nets?

It seems like a hot candidate to potentially yield better results in the future


Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere? The article would start with the simplest possible method, something that could be implemented in short C program, and would continue with a progressively more accurate and complex methods.
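For reference, and not speaking for the poster, the target equations such a post would discretize are the rotating shallow water equations (flat bottom assumed here), with depth h, velocity u, Coriolis parameter f and gravity g:

```latex
% Rotating shallow water equations (advective form, flat bottom), to be discretized on the sphere
\[
\begin{aligned}
\frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{u}) &= 0, \\[2pt]
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  + f\,\hat{\mathbf{k}} \times \mathbf{u} &= -g\,\nabla h .
\end{aligned}
\]
```

On the sphere the gradient and divergence become surface operators of the spherical geometry, which is exactly where the meshing question raised further down the thread comes in.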


If you are interested in this, I'd recommend following an openfoam tutorial, c++ though.

You could do SWE with finite elements, but generally finite volumes would be your choice to handle any potential discontinuities and is more stable and accurate for practical problems.

Here is a tutorial. https://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/johanPi...


I'm looking for something like this, but more advanced. The common problem with such tutorials is that they stop with the simplest geometry (square) and the simplest finite difference method.

What's unclear to me is how I model the spherical geometry without exploding the complexity of the solution. I know that a fully custom mesh with a pile of formulas for something like the Laplace–Beltrami operator would work, but I want something more elegant than this. For example, can I use the Fibonacci spiral to generate a uniform spherical mesh, and then somehow compute gradients and the Laplacian?

I suspect that the stability of FE or FV methods is rooted in the fact that the FE functions slightly overlap, so computing the next step is a lot like using an implicit FD scheme, or better, a variation of the compact FD scheme. However I'm interested in how an adept in the field would solve this problem in practice. Again, I'm aware that there are methods of solving such systems (Jacobi, etc.), but those make the solution 10x more complex, buggier and slower.


Interesting that this reads almost like a ChatGPT prompt.


Lazy people have been lazy forever. I stumbled across an example of this the other day from the 1990s, I think, and was shocked how much the student emails sounded like LLM prompts: https://www.chiark.greenend.org.uk/~martinh/poems/questions....


At least those had some basic politeness. So often I'm blown away not only how people blithely write "I NEED HELP, GIMME XYZ NOW NERDS" but especially how everyone is just falling over themselves to actually help! WTF?

Basic politeness is absolutely dead, nobody has any concept of acknowledging they are asking for a favour; we just blast Instagram/TikTok reels at top volume and smoke next to children and elderly in packed public spaces etc. I'm 100% sure it's not rose-tinted memories of the 90s making me think, it wasn't always like this...


It reminds me of the old joke that half of the students are below average…


Except in Lake Wobegon, all of the children are above average


But that's not true, unless by "average" you mean the median.


Normally, it's all the same.


Only if the distribution has zero skewness.

Unless "normally" you mean the normal distribution, which indeed has zero skewness.


Yes, it was an admittedly bad pun.


> Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere?

Typically, Finite Volume Method is used for fluid flow problems. It is possible to use Finite Element Methods, but it is rare.


"As an AI language model, I am happy to comply with your request ( https://chatgpt.com/share/6727b644-b2e0-800b-b613-322072d9d3... ), but good luck finding a data set to verify it, LOL."


During my industrial PhD, I created an Object-Oriented Programming (OOP) framework for Large Scale Air-Pollution (LSAP) simulations.

The OOP framework I created was based on Petrov-Galerkin FEM. (Both proper 2D and "layered" 3D.)

Before my PhD work, the people I worked with (worked for) used spectral methods and Alternate-direction FEM (i.e. using 1D to approximate 2D.)

In some conferences and interviews certain scientists would tell me that programming FEM is easy (for LSAP.) I always kind of agree and ask how many times they have done it. (For LSAP or anything else.) I was not getting an answer from those scientists...

Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.


> Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.

FEM at its core ends up being just a technique to find approximate solutions to problems expressed with partial differential equations.

Finding solutions to practical problems that satisfy both the boundary conditions and the domain geometry is practically impossible with analytical methods. FEM trades off correctness for an approximation that can be exact at prescribed boundary conditions but is approximate in both how the domain is represented and in the solution itself, and has nice properties such as the approximation errors converging to the exact solution as the approximation is refined. This means rapidly larger computational budgets.
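The convergence property can be stated concretely. For a standard second-order elliptic problem with a sufficiently regular solution and conforming elements of polynomial degree p, the textbook a priori estimates are (h the mesh size, C a constant independent of h):

```latex
% A priori error estimates for conforming FEM on a second-order elliptic problem
\[
\| u - u_h \|_{H^1(\Omega)} \le C\,h^{p}\,\| u \|_{H^{p+1}(\Omega)},
\qquad
\| u - u_h \|_{L^2(\Omega)} \le C\,h^{p+1}\,\| u \|_{H^{p+1}(\Omega)} .
\]
```

Uniform h-refinement in 3D multiplies the number of unknowns by roughly eight for every halving of h, which is where the rapidly growing computational budget comes from.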


I also studied FEM in undergrad and grad school. There's something very satisfying about breaking an intractably difficult real-world problem up into finite chunks of simplified, simulated reality and getting a useful, albeit explicitly imperfect, answer out of the other end. I find myself thinking about this approach often.


A 45 comment thread at the time https://news.ycombinator.com/item?id=33480799


Predicting how things evolve in space-time is a fundamental need. Finite element methods deserve the glory of a place at the top of the HN list. I opted for "orthogonal collocation" as the method of choice for my model back in the day because it was faster and more fitting to the problem at hand. A couple of my fellow researchers did use FEM. It was all the rage in the 90s for sure.


From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :

FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method

>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))

>> FEA: Finite Element Analysis (applied FEM)

awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea

And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171


Interesting perspective. I just attended an academic conference on isogeometric analysis (IGA), which is briefly mentioned in this article. Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community. IGA has a lot of potential to solve many of the pain points of FEM. It has better convergence rates in general, allows for better timesteps in explicit solvers, has better methods to ensure stability in, e.g., incompressible solids, and perhaps most exciting, enables an immersed approach, where the problem of meshing is all but gone as the geometry is just immersed in a background grid that is easy to mesh. There is still a lot to be done to drive adoption in industry, but this is likely the future of FEM.


> IGA has a lot of potential to solve many of the pain points of FEM.

Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

If I recall correctly convergence rates are exactly the same, but the whole approach fails to realize that, other than boundaries, geometry and the fields of quantities of interest do not have the same spatial distributions.

IGA has been around for ages, and never materialized beyond the "let's reuse the CAD functions" trick, which ends up making the problem more complex without any tangible return when compared with plain old p-refinement. What is left in terms of potential?

> Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community.

I recall the name Tom Hughes. I have his FEM book and he's been for years (decades) the only one pushing the concept. The reason being that the whole computational mechanics community looked at it,found it interesting, but ultimately wasn't worth the trouble. There are far more interesting and promising ideas in FEM than using splines to build elements.


> Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

That's how it started, yes. The splines used to specify the geometry are trimmed surfaces, and IGA has expanded from there to the use of splines generally as the shape functions, as well as trimming of volumes, etc. This use of smooth splines as shape functions improves the accuracy per degree of freedom.

> If I recall correctly convergence rates are exactly the same

Okay, looks like I remembered wrong here. What we do definitely see is that in IGA you get the convergence rates of higher degrees without drastically increasing your degree of freedom, meaning that there is better accuracy per degree of freedom for any degree above 1. See for example Figures 16 and 18 in this paper: https://www.researchgate.net/profile/Laurens-Coox/publicatio...

> geometry and the fields of quantities of interest do not have the same spatial distributions.

Using the same shape functions doesn't automatically mean that they will have the same spatial distributions. In fact, with hierarchical refinement in splines you can refine the geometry and any single field of interest separately.

> What is left in terms of potential?

The biggest potential other than higher accuracy per degree of freedom is perhaps trimming. In FEM, trimming your shape functions makes the solution unusable. In IGA, you can immerse your model in a "brick" of smooth spline shape functions, trim off the region outside, and run the simulation while still getting optimal convergence properties. This effectively means little to no meshing required. For a company that is readying this for use in industry, take a look at https://coreform.com/ (disclosure, I used to be a software developer there).
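For readers new to IGA, the shape functions being discussed are B-splines (and their rational NURBS generalization), defined on a knot vector by the Cox-de Boor recursion; degree-p splines are C^{p-1}-continuous across non-repeated knots, which is the extra smoothness that classical C^0 Lagrange elements lack:

```latex
% Cox-de Boor recursion for B-spline basis functions on a knot vector {xi_i}
\[
N_{i,0}(\xi) =
  \begin{cases} 1 & \xi_i \le \xi < \xi_{i+1} \\ 0 & \text{otherwise} \end{cases}
\qquad
N_{i,p}(\xi) =
  \frac{\xi - \xi_i}{\xi_{i+p} - \xi_i}\, N_{i,p-1}(\xi)
  + \frac{\xi_{i+p+1} - \xi}{\xi_{i+p+1} - \xi_{i+1}}\, N_{i+1,p-1}(\xi)
\]
```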


I took a course in undergrad, and was exposed to it in grad school again, and for the life of me I still don't understand the derivations, either Galerkin or variational.


I learned from the structural engineering perspective. What are you struggling with? In my mind I have this logic flow: 1. strong form pde; 2. weak form; 3. discretized weak form; 4. compute integrals (numerically) over each element; 5. assemble the linear system; 6. solve the linear system.
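As a concrete instance of that flow (my illustration, using the Poisson model problem), steps 1-3 look like this; steps 4-6 are then numerical quadrature of the element integrals, assembly into K and F, and the linear solve:

```latex
% From strong form to linear system for the Poisson model problem
\[
\text{(1) strong form:}\quad -\nabla^2 u = f \ \text{in } \Omega, \quad u = 0 \ \text{on } \partial\Omega
\]
\[
\text{(2) weak form:}\quad \int_\Omega \nabla u \cdot \nabla v \, d\Omega
  = \int_\Omega f\,v \, d\Omega \quad \forall\, v \in H^1_0(\Omega)
\]
\[
\text{(3) discretize } u_h = \textstyle\sum_j u_j \varphi_j:\quad
K_{ij} = \int_\Omega \nabla \varphi_i \cdot \nabla \varphi_j \, d\Omega, \quad
F_i = \int_\Omega f\,\varphi_i \, d\Omega, \quad
K\mathbf{u} = \mathbf{F}
\]
```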


Luckily the integrals of step 4 are already worked out in textbooks and research papers for all the problems people commonly use FEA for, so you can almost always skip steps 1, 2, and 3.


Do you have any textbook recommendations for the structural engineering perspective?


For anyone interested in a contemporary implementation, SELF is a spectral element library in object-oriented fortran [1]. The devs here at Fluid Numerics have upcoming benchmarks on our MI300A system and other cool hardware.

[1] https://github.com/FluidNumerics/SELF


I have such a fondness for FEA. ANSYS and COSMOS were the ones I used, and I’ve written toy modelers and solvers (one for my HP 48g) and even tinkered with using GPUs for getting answers faster (back in the early 2000s).

Unfortunately my experience is that FEA is a blunt instrument with narrow practical applications. Where it’s needed, it is absolutely fantastic. Where it’s used when it isn’t needed, it’s quite the albatross.


My hot take is that FEM is best used as unit testing for machine design, not as the design guide it's often treated as. The greatest mechanical engineer I know once designed an entire mechanical wrist arm with five fingers, actuations, lots of parts and flexible finger tendons. He never used FEM at any part of his design. He instead did it the old-fashioned way: design and fab a simple prototype, get a feel for it, use the tolerances you discovered in the next prototype, and just keep iterating quickly. If I had gone to him and told him to model the flexor of his fingers in FEM, and then given him a book on how to use the FEM software correctly so that he didn't get nonsensical results, I would have slowed him down if anything. Just build and you learn the tolerances, and the skill is in building many cheap prototypes to get the best idea of what the final expensive build will look like.


> The greatest mechanical engineer I know, [...]

And with that you wrote the best reply to your own comment. Great programmers of the past wrote amazing systems just in assembly. But you needed to be a great programmer just to get anything done at all.

Nowadays dunces like me can write reasonable software in high level languages with plenty of libraries. That's progress.

Similar for mechanical engineering.

(Doing prototypes etc might still be a good idea, of course. My argument is mainly that what works for the best engineers doesn't necessarily work for the masses.)


Also, might work for a mechanical arm the size of an arm, but not for the size of the Eiffel tower.


The Eiffel Tower was built before FEM existed. In fact, I doubt they even did FEM-like calculations.


This is true, although it was notable as an early application of Euler-Bernoulli beam theory in structural engineering, which helped to prove the usefulness of that method.


I meant a mechanical arm the size of the Eiffel Tower. You don't want to iterate physical products at that size.


Going by Boeing vs. SpaceX, iteration seems to be the most effective approach to building robotic physical products the size of the Eiffel Tower.


I'm sure they are doing plenty of calculations beforehand, too.


Unquestionably! Using FEM.


Would FEM be useful for that kind of problem? It's more for figuring out whether your structure will take the load, where stress concentrations are, and what happens with thermal expansion. FEM won't do much for figuring out what the tolerances need to be on intricate mechanisms.


To be fair, FEM is not the right tool for mechanical linkage design (if anything, you'd use rigid body dynamics).

FEM is the tool you'd use to tell when and where the mechanical linkage assembly will break.


Garbage in garbage out. If you don't fully understand the model, then small parameter changes can create wildly different results. It's always good to go back to fundamentals and hand check a simplification to get a feel for how it should behave.


If he were designing a bridge, however ...


It's wrong to assume that everyone and every project can use an iterative method with endless prototypes. If you do, I have a prototype bridge to sell you.


Good luck designing crash resilient structures without simulating it on FEM based software though.


The FEM is just a model of the crash-resistant structure. Hopefully it will behave like the actual structure, but that is not guaranteed. We use the FEM because it is faster and cheaper than doing the tests on the actual thing. However, if you have the time and money to do your crash-resiliency tests on the actual product during the development phase, I expect the results would be much better.


Yes, with infinite time and budget you'd get much better results. That does not sound like an interesting proposition, though.


I’d guess most of the bridges in the US were built before FEM existed


Anyone can design a bridge that holds up. The Romans did it millennia ago.

Engineering is designing a bridge that holds up to a certain load, with the least amount of material and/or cost. FEM gives you tighter bounds on that.


The average age of a bridge in the US is about 40-50 years old and the title of the article has "80 years of FEM".

https://www.infrastructurereportcard.org/wp-content/uploads/...

I'd posit a large fraction were designed with FEM.


FEM runs on the same math and theories those bridges were designed with on paper.


They did just fine without such tools for the majority of innovation in the last century.


Having worked on the design of safety structures with mechanical engineers for a few projects, it is far, far cheaper to do a simulation and iterate over designs and situations than do that in a lab or work it out by hand. The type of stuff you can do on paper without FEM tends to be significantly oversimplified.

It doesn't replace things like actual tests, but it makes designing and understanding testing more efficient and more effective. It is also much easier to convince reviewers you've done your job correctly with them.

I'd argue computer simulation has been an important component of the majority of mechanical engineering innovation in the last century. If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit. We did "just fine" without cars for the majority of humanity, but motorized vehicles significantly changed how we do things and changed the reach of what we can do.


> It is also much easier to convince reviewers you've done your job correctly with them.

In other words, the work that doesn't change the underlying reality of the product?

> We did "just fine" without cars for the majority of humanity

We went to the moon, invented aircraft, bridges, skyscrapers, etc, all without FEM. So that's why this is a bad comparison.

> If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit.

Of course. That's what they are accustomed to. 80/20 paper techniques that were replaced by SW were forgotten.

When tests are cheap, you make a lot of them. When they are expensive, you do a few and maximize the information you learn from them.

I'm not arguing FEM doesn't provide net benefit to the industry.


What is your actual assertion? That tools like FEA are needless frippery or that they just dumb down practitioners who could have otherwise accomplished the same things with hand methods? Something else? You're replying to a practicing mechanical engineer whose experience rings true to this aerospace engineer.

Things like modern automotive structural safety or passenger aircraft safety are leagues better today than even as recently as the 1980s because engineers can perform many high-fidelity simulations long before they get to integrated system test. When integrated system test is so expensive, you're not going to explore a lot of new ideas that way.

The argument that computational tools are eroding deep engineering understanding is long-standing, and has aspects of both truth and falsity. Yep, they designed the SR-71 without FEA, but you would never do that today because for the same inflation-adjusted budget, we'd expect a lot more out of the design. Tools like FEA are what help engineers fulfill those expectations today.


> What is your actual assertion?

That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

Now what's my opinion? FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

> passenger aircraft safety are leagues better today

Yep, but that's just restating the pros. Local iteration and testing.

> You're replying to a practicing mechanical engineer

Oh drpossum and I are getting to know each other.

I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.


Replying to finish a discussion no one will probably see, but...

> That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

In refuting the original casually-worded blanket statement, yes, you're right. You can indeed design crash resilient structures without FEA. Especially if they are terrestrial (i.e., civil engineering).

In high-performance applications like aerospace vehicles (excluding general aviation) or automobiles, you will not achieve the required performance on any kind of acceptable timeline or budget without FEA. In these kinds of high-performance applications, the original statement is valid.

> FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

Do you have any experience in aerospace applications? Because quite often, we reliably achieve structural efficiencies, at prescribed levels of robustness, that we would not achieve sans FEA. It's a matter of making the performance bar, not a matter of simple vs. complex solutions.

> I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.

That was one of his points, not the main one. The idea that its primary value is pandering to paper-pushing regulatory bodies and "policy based governance" is specious. Does it help with your certification case? Of course. But the real value is that analyses from these tools are the substantiation we use to determine if the (expensive) design will meet requirements and survive all its stressing load cases before we approve building it. We then have a high likelihood that what we build, assuming it conforms to design intent, will perform as expected.


Except that everything's gotten abysmally complex. Vehicle crash test experiments are a good example of validating the FEM simulation (yes, that's the correct order, not vice versa).


How can you assert so confidently you know the cause and effect?

Certainly computers allow more complexity, so there is interplay between what it enables and what’s driven by good engineering.


FEM - because we can't solve PDEs!


Is it related to Galerkin?







n5321 | June 15, 2025, 23:31

How to get meaningful and correct results from your finite element model


Martin Bäker Institut für Werkstoffe, Technische Universität Braunschweig, Langer Kamp 8, D-38106 Braunschweig, martin.baeker@tu-bs.de November 15, 2018


Abstract

This document gives guidelines to set up, run, and postprocess correct simulations with the finite element method. It is not an introduction to the method itself, but rather a list of things to check and possible mistakes to watch out for when doing a finite element simulation.


The finite element method (FEM) is probably the most-used simulation technique in engineering. Modern finite-element software makes doing FE simulations easy – too easy, perhaps. Since you have a nice graphical user interface that guides you through the process of creating, solving, and postprocessing a finite element model, it may seem as if there is no need to know much about the inner workings of a finite element program or the underlying theory. However, creating a model without understanding finite elements is similar to flying an airplane without a pilot’s license. You may even land somewhere without crashing, but probably not where you intended to.

This document is not a finite element introduction; see, for example, [3, 7, 10] for that. It is a guideline to give you some ideas how to correctly set up, solve and postprocess a finite element model. The techniques described here were developed working with the program Abaqus [9]; however, most of them should be easily transferable to other codes. I have not explained the theoretical basis for most of them; if you do not understand why a particular consideration is important, I recommend studying finite element theory to find out.

1. Setting up the model

1.1 General considerations

These considerations are not restricted to finite element models, but are useful for any complex simulation method.

  • 1.1-1. Even if you just need some number for your design – the main goal of an FEA is to understand the system. Always design your simulations so that you can at least qualitatively understand the results. Never believe the result of a simulation without thinking about its plausibility.

  • 1.1-2. Define the goal of the simulation as precisely as possible. Which question is to be answered? Which quantities are to be calculated? Which conclusions are you going to draw from the simulation? Probably the most common error made in FE simulations is setting up a simulation without having a clear goal in mind. Be as specific as possible. Never set up a model “to see what happens” or “to see how stresses are distributed”.

  • 1.1-3. Formulate your expectations for the simulation result beforehand and make an educated guess of what the results should be. If possible, estimate at least some quantities of your simulation using simplified assumptions. This will make it easier to spot problems later on and to improve your understanding of the system you are studying.

  • 1.1-4. Based on the answer to the previous items, consider which effects you actually have to simulate. Keep the model as simple as possible. For example, if you only need to know whether a yield stress is exceeded somewhere in a metallic component, it is much easier to perform an elastic calculation and check the von Mises stress in the postprocessor (be wary of extrapolations, see 3.2-1) than to include plasticity in the model.

  • 1.1-5. What is the required precision of your calculation? Do you need an estimate or a precise number? (See also 1.4-1 below.)

  • 1.1-6. If your model is complex, create it in several steps. Start with simple materials, assume frictionless behaviour etc. Add complications step by step. Setting up the model in steps has two advantages: (i) if errors occur, it is much easier to find out what caused them; (ii) understanding the behaviour of the system is easier this way because you understand which addition caused which change in the model behaviour. Note, however, that checks you made in an early stage (for example on the mesh density) may have to be repeated later.

  • 1.1-7. Be careful with units. Many FEM programs (like Abaqus) are inherently unit-free – they assume that all numbers you give can be converted without additional conversion factors. You cannot define your model geometry in millimetres but then use SI units without prefixes everywhere else. Be especially careful in thermomechanical simulations due to the large number of different physical quantities needed there. And of course, also be careful if you use antiquated units like inch, slug, or BTU. Two common self-consistent unit sets are sketched below.
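As a rough illustration (not tied to any particular FE code), here is a sketch of two commonly used self-consistent unit sets; the steel values are typical textbook numbers and serve only to show that every quantity must be converted into the chosen system:

```python
# Sketch: two self-consistent unit systems often used with unit-free FE codes.
# Pick one system and convert *every* input into it; never mix systems.
UNIT_SYSTEMS = {
    "SI (m, kg, s)": {
        "length": "m", "force": "N", "stress": "Pa",
        "density": "kg/m^3", "energy": "J",
        "steel_density": 7850.0,   # kg/m^3 (typical value)
        "steel_E": 210e9,          # Pa (typical value)
    },
    "mm, tonne, s": {
        "length": "mm", "force": "N", "stress": "MPa",
        "density": "tonne/mm^3", "energy": "mJ",
        "steel_density": 7.85e-9,  # tonne/mm^3 (typical value)
        "steel_E": 210e3,          # MPa (typical value)
    },
}

for name, u in UNIT_SYSTEMS.items():
    print(f'{name}: E_steel = {u["steel_E"]} {u["stress"]}, rho_steel = {u["steel_density"]} {u["density"]}')
```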

1.2 Basic model definition

  • 1.2-1. Choose the correct type of simulation (static, quasi-static, dynamic, coupled etc.). Dynamic simulations are needed if inertial forces are relevant (elastic waves, changes in kinetic energy). If inertial forces are irrelevant, you should use static simulations.

  • 1.2-2. As a rule of thumb, a simulation is static or quasi-static if the excitation frequency is less than 1/5 of the lowest natural frequency of the structure [2]; a minimal numerical check is sketched after this list.

  • 1.2-3. In a dynamic analysis, damping may be required to avoid unrealistic multiple reflections of elastic waves that may affect the results [2].

  • 1.2-4. Explicit methods are inherently dynamic. In some cases, explicit methods may be used successfully for quasi-static problems to avoid convergence problems (see 2.1-9 below). If you use mass scaling in your explicit quasi-static analysis, carefully check that the scaling parameter does not affect your solution. Vary the scaling factor (the nominal density) to ensure that the kinetic energy in the model remains small [12].

  • 1.2-5. In a static or quasi-static analysis, make sure that all parts of the model are constrained so that no rigid-body movement is possible. (In a contact problem, special stabilization techniques may be available to ensure correct behaviour before contact is established.)

  • 1.2-6. If you are studying a coupled problem (for example thermo-mechanical) think about the correct form of coupling. If stresses and strains are affected by temperature but not the other way round, it may be more efficient to first calculate the thermal problem and then use the result to calculate thermal stresses. A full coupling of the thermal and mechanical problem is only needed if temperature affects stresses/strains (e. g., due to thermal expansion or temperature-dependent material properties) and if stresses and strains also affect the thermal problem (e. g., due to plastic heat generation or the change in shape affecting heat conduction).

  • 1.2-7. Every FE program uses discrete time steps (except for a static, linear analysis, where no time incrementation is needed). This may affect the simulation. If, for example, the temperature changes during a time increment, the material behaviour may strongly differ between the beginning and the end of the increment (this often occurs in creep problems where the properties change drastically with temperature). Try different maximal time increments and make sure that time increments are sufficiently small so that these effects are small.

  • 1.2-8. Critically check whether non-linear geometry is required. As a rule of thumb, this is almost always the case if strains exceed 5%. If loads are rotating with the structure (think of a fishing rod that is loaded in bending initially, but in tension after it has started to deform), the geometry is usually non-linear. If in doubt, critically compare a geometrically linear and non-linear simulation.
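A minimal numerical check for the rule of thumb in 1.2-2 might look as follows; the lowest natural frequency has to come from a modal analysis or a hand estimate, and the numbers below are placeholders:

```python
# Check for 1.2-2: treat the problem as (quasi-)static only if the excitation
# frequency is below 1/5 of the lowest natural frequency of the structure.
def quasi_static_ok(f_excitation_hz: float, f_lowest_natural_hz: float) -> bool:
    return f_excitation_hz < f_lowest_natural_hz / 5.0

# Placeholder example: 10 Hz loading, first natural frequency at 80 Hz.
print(quasi_static_ok(10.0, 80.0))  # True -> a (quasi-)static analysis is defensible
```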

1.3 Symmetries, boundary conditions and loads

  • 1.3-1. Exploit symmetries of the model. In a plane 2D-model, think about whether plane stress, plane strain or generalized plane strain is the appropriate symmetry. (If thermal stresses are relevant, plane strain is almost always wrong because thermal expansion in the 3-direction is suppressed, causing large thermal stresses. Note that these 33-stresses may affect other stress components as well, for example, due to von Mises plasticity.) Keep in mind that the loads and the deformations must conform to the same symmetry.

  • 1.3-2. Check boundary conditions and constraints. After calculating the model, take the time to ensure that nodes were constrained in the desired way in the postprocessor.

  • 1.3-3. Point loads at single nodes may cause unrealistic stresses in the adjacent elements. Be especially careful if the material or the geometry is non-linear. If in doubt, distribute the load over several elements (using a local mesh refinement if necessary).

  • 1.3-4. If loads are changing direction during the calculation, non-linear geometry is usually required, see 1.2-8.

  • 1.3-5. The discrete time-stepping of the solution process may also be important in loading a structure. If, for example, you abruptly change the heat flux at a certain point in time, discrete time stepping may not capture the exact point at which the change occurs, see fig. 1. (Your software may use some averaging procedure to alleviate this.) Define load steps or use other methods to ensure that the time of the abrupt change actually corresponds to a time step in the simulation. This may also improve convergence because it allows you to control the increments at the moment of the abrupt change, see also 2.1-4.

1.4 Input data

  • 1.4-1. A simulation cannot be more precise than its input data allow. This is especially true for the material behaviour. Critically consider how precise your material data really are. How large are the uncertainties? If in doubt, vary material parameters to see how results are affected by the uncertainties.

  • 1.4-2. Be careful when combining material data from different sources and make sure that they are referring to identical materials. In metals, don’t forget to check the influence of heat treatment; in ceramics, powder size or the processing route may affect the properties; in polymers, the chain length or the content of plasticizers is important [13]. Carefully document your sources for material data and check for inconsistencies.

  • 1.4-3. Be careful when extrapolating material data. If data have been described using simple relations (for example a Ramberg-Osgood law for plasticity), the real behaviour may strongly deviate from this.

  • 1.4-4. Keep in mind that your finite element software usually cannot extrapolate material data beyond the values given. If plastic strains exceed the maximum value specified, usually no further hardening of the material will be considered. The same holds, for example, for thermal expansion coefficients which usually increase with temperature. Using different ranges in different materials may thus cause spurious thermal stresses. Fig. 2 shows an example.

  • 1.4-5. If material data are given as equations, be aware that parameters may not be unique. Frequently, data can be fitted using different parameters. As an illustration, plot the simple hardening law A+Bεⁿ with values (130, 100, 0.5) and (100, 130, 0.3) for (A, B, n), see fig. 3 and the sketch after this list. Because of this, your simulation results may be insensitive to some changes in the parameters.

  • 1.4-6. If it is not possible to determine material behaviour precisely, finite element simulations may still help to understand how the material behaviour affects the system. Vary parameters in plausible regions and study the answer of the system.

  • 1.4-7. Also check the precision of external loads. If loads are not known precisely, use a conservative estimate.

  • 1.4-8. Thermal loads may be especially problematic because heat transfer coefficients or surface temperatures may be difficult to measure. Use the same considerations as for materials.

  • 1.4-9. If you vary parameters (for example the geometry of your component or the material), make sure that you correctly consider how external loads are changed by this. If, for example, you specify an external load as a pressure, increasing the surface area also increases the total load. If you change the thermal conductivity of your material, the total heat flux through the structure will change; you may have to adjust the specified thermal load accordingly.

  • 1.4-10. Frictional behaviour and friction coefficients are also frequently unknown. Critically check the parameters you use and also check whether the friction law you are using is correct – not all friction is Coulombian.

  • 1.4-11. If a small number of parameters are unknown, you can try to vary them until your simulation matches experimental data, possibly using a numerical optimization method. (This is the so-called inverse parameter identification [6].) Be aware that the experimental data used this way cannot be used to validate your model (see section 3.3).
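To see the parameter ambiguity of 1.4-5 for yourself, a minimal plotting sketch (assuming matplotlib is available; the axis labels and strain range are illustrative) could look like this:

```python
# Sketch for 1.4-5: the hardening law sigma = A + B * eps**n fitted with two different
# parameter sets can lie quite close together over part of the strain range, even
# though the individual parameters differ considerably.
import numpy as np
import matplotlib.pyplot as plt

eps = np.linspace(0.0, 0.2, 200)                 # illustrative plastic strain range
for A, B, n in [(130.0, 100.0, 0.5), (100.0, 130.0, 0.3)]:
    plt.plot(eps, A + B * eps**n, label=f"A={A}, B={B}, n={n}")

plt.xlabel("plastic strain")
plt.ylabel("flow stress")
plt.legend()
plt.show()
```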

1.5 Choice of the element type

Warning: Choosing the element type is often the crucial step in creating a finite element model. Never accept the default choice of your program without thinking about it.¹ Carefully check which types are available and make sure you understand how a finite element simulation is affected by the choice of element type. You should understand the concepts of element order and integration points (also known as Gauß points) and know the most common errors caused by an incorrectly chosen element type (shear locking, volumetric locking, hourglassing [1,3]).

The following points give some guidelines for the correct choice:

  • 1.5-1. If your problem is linear-elastic, use second-order elements. Reduced integration may save computing time without strongly affecting the results.

  • 1.5-2. Do not use fully-integrated first order elements if bending occurs in your structure (shear locking). Incompatible mode elements may circumvent this problem, but their performance strongly depends on the element shape [7].

  • 1.5-3. If you use first-order elements with reduced integration, check for hourglassing. Keep in mind that hourglassing may occur only in the interior of a three-dimensional structure where seeing it is not easy. Exaggerating the displacements may help in visualizing hourglassing. Most programs use numerical techniques to suppress hourglass modes; however, these may also affect results due to artificial damping. Therefore, also check the energy dissipated by this artificial damping and make sure that it is small compared to other energies in the model.

  • 1.5-4. In contact problems, first-order elements may improve convergence because if one corner and one edge node are in contact, the second-order interpolation of the element edge causes overlaps, see fig. 4. This may especially cause problems in a crack-propagation simulation with a node-release scheme [4, 11].

  • 1.5-5. Discontinuities in stresses or strains may be captured better with first-order elements in some circumstances.

  • 1.5-6. If elements distort strongly, first-order elements may be better than second-order elements.

  • 1.5-7. Avoid triangular or tetrahedral first-order elements since they are much too stiff, especially in bending. If you have to use these elements (which may be necessary in a large model with complex geometry), use a very fine mesh and carefully check for mesh convergence. Think about whether partitioning your model and meshing with quadrilateral/hexahedral elements (at least in critical regions) may be worth the effort. Fig. 5 shows an example where a very complex geometry has to be meshed with tetrahedral elements. Although the mesh looks reasonably fine, the system answer with linear elements is much too stiff.

  • 1.5-8. If material behaviour is incompressible or almost incompressible, use hybrid elements to avoid volumetric locking. They may also be useful if plastic deformation is large because (metal) plasticity is also volume conserving.

  • 1.5-9. Do not mix elements with different order. This can cause overlaps or gaps forming at the interface (possibly not shown by your postprocessor) even if there are no hanging nodes (see fig. 6). If you have to use different order of elements in different regions of your model, tie the interface between the regions using a surface constraint. Be aware that this interface may cause a discontinuity in the stresses and strains due to different stiffness of the element types.

  • 1.5-10. In principle, it is permissible to mix reduced and fully integrated elements of the same order. However, since they differ in stiffness, spurious stress or strain discontinuities may result.

  • 1.5-11. If you use shell or beam elements or similar structural elements, make sure to use the correct mathematical formulation. Shells and membranes look similar but behave differently, and there are a large number of different types of shell and beam elements with different behaviour.

¹The only acceptable exception may be a simple linear-elastic simulation if your program defaults to second-order elements. But if all you do is linear elasticity, this article is probably not for you.

1.6 Generating a mesh

  • 1.6-1. If possible, use quadrilateral/hexahedral elements. Meshing 3D-structures this way may be laborious, but it is often worth the effort (see also 1.5-7).

  • 1.6-2. A fine mesh is needed where gradients in stress and strain are large.

  • 1.6-3. A preliminary simulation with a coarse mesh may help to identify the regions where a greater mesh density is required.

  • 1.6-4. Keep in mind that the required mesh density depends on the quantities you want to extract and on the required precision. For example, displacements are often calculated more precisely than strains (or stresses) because strains involve derivatives, i.e. the differences in displacements between nodes.

  • 1.6-5. A mesh convergence study can be used to check whether the model behaves too stiffly (as is often the case for fully integrated first-order elements, see fig. 5) or too softly (which happens with reduced-integration elements). Be careful in evaluating this study: if your model is load-controlled, evaluate displacements or strains to check for convergence; if it is strain-controlled, evaluate forces or stresses. (Stiffness relates forces to displacements, so to assess stiffness you have to look at the quantity you do not prescribe.) If you use, for example, displacement control, displacements are not sensitive to the actual stiffness of your model since you prescribe the displacement. A sketch for evaluating such a study is given after this list.

  • 1.6-6. Check the shape and size of the elements. Inner angles should not deviate too much from those of a regularly shaped element. Use the tools provided by your software to highlight critical elements. Keep in mind that critical regions may be situated inside a 3D-component and may not be directly visible. Avoid badly-shaped elements, especially in regions where high gradients occur and in regions of interest.

  • 1.6-7. If you use local mesh refinement, the transition between regions of different element sizes should be smooth. As a rule of thumb, adjacent elements should not differ by more than a factor of 2–3 in their area (or volume). If the transition is too abrupt, spurious stresses may occur in this region because a region that is meshed finer is usually less stiff. Furthermore, the fine mesh may be constrained by the coarser mesh. (As an extreme case, consider a finely meshed quadratic region that is bounded by only four first-order elements – in this case, the region as a whole can only deform as a parallelogram, no matter how fine the interior mesh is.)

  • 1.6-8. Be aware that local mesh refinement may strongly affect the simulation time in an explicit simulation because the stable time increment is determined by the size of the smallest element in the structure. A single small or badly shaped element can drastically increase the simulation time.

  • 1.6-9. If elements are distorting strongly, remeshing may improve the shape of the elements and the solution quality. For this, solution variables have to be interpolated from the old to the new mesh. This interpolation may dampen strong gradients or local extrema. Make sure that this effect is sufficiently small by comparing the solution before and after the remeshing in a contour plot and at the integration points.

  • 1.6-10. Another way of dealing with strong mesh distortions is to start with a mesh that is initially distorted and becomes more regular during deformation. This method usually requires some experimentation, but it may yield good solutions without the additional effort of remeshing.
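For 1.6-5, a sketch for evaluating a mesh convergence study once you have extracted one scalar quantity of interest per mesh (the element counts and values below are placeholders) might look like this:

```python
# Sketch for 1.6-5: evaluate a mesh convergence study from results you have already
# extracted (one scalar quantity of interest per mesh, e.g. a displacement for a
# load-controlled model). Element counts and values below are placeholders.
results = [
    (1_000,  0.842),   # (number of elements, quantity of interest)
    (4_000,  0.878),
    (16_000, 0.889),
    (64_000, 0.892),
]

for (n_prev, q_prev), (n_cur, q_cur) in zip(results, results[1:]):
    rel_change = abs(q_cur - q_prev) / abs(q_cur)
    print(f"{n_prev:>7} -> {n_cur:>7} elements: relative change {rel_change:.3%}")
# If the relative change between the two finest meshes is still larger than the
# precision you need (see 1.1-5), the mesh is not yet converged.
```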

1.7 Defining contact problems

  • 1.7-1. Correctly choose master and slave surfaces in a master-slave algorithm. In general, the stiffer (and more coarsely meshed) surface should be the master.

  • 1.7-2. Problems may occur if single nodes get in contact and if surfaces with corners are sliding against each other. Smoothing the surfaces may be helpful.

  • 1.7-3. Nodes of the master surface may penetrate the slave surface; again, smoothing the surfaces may reduce this, see fig. 7.

  • 1.7-4. Some discretization error is usually unavoidable if curved surfaces are in contact. With a pure master-slave algorithm, penetration and material overlap are the most common problems; with a symmetric choice (both surfaces are used as master and as slave), gaps may open between the surfaces, see fig. 8. Check for discretization errors in the postprocessor.

  • 1.7-5. Discretization errors may also affect the contact force. Consider, for example, the Hertzian contact problem of two cylinders contacting each other. If the mesh is coarse, there will be a notable change in the contact force whenever the next node comes into contact. Spurious oscillations of the force may be caused by this. (An analytical cross-check for this case is sketched after this list.)

  • 1.7-6. Make sure that rigid-body motion of contact partners before the contact is established is removed either by adding appropriate constraints or by using a stabilization procedure.

  • 1.7-7. Second-order elements may cause problems in contact (see 1.5-4 and fig. 4) [4, 11]; if they do, try switching to first-order elements.
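For the Hertzian example in 1.7-5, the classical line-contact formulas can serve as an analytical plausibility check for the simulated contact pressure. The sketch below uses the standard textbook expressions for two parallel cylinders; verify them against your own reference before relying on them:

```python
# Sketch for 1.7-5: analytical Hertzian line-contact results (cylinder on cylinder)
# as a plausibility check for the FE contact pressure. Standard textbook formulas;
# check them against your own reference before relying on them.
import math

def hertz_line_contact(load_per_length, R1, R2, E1, nu1, E2, nu2):
    """Return (contact half-width, peak contact pressure) for two parallel cylinders."""
    R_eff = 1.0 / (1.0 / R1 + 1.0 / R2)                    # effective radius
    E_eff = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)  # effective modulus
    b = math.sqrt(4.0 * load_per_length * R_eff / (math.pi * E_eff))
    p_max = 2.0 * load_per_length / (math.pi * b)
    return b, p_max

# Example in mm / N / MPa units: two 50 mm radius steel cylinders, 1000 N per mm length.
b, p_max = hertz_line_contact(1000.0, 50.0, 50.0, 210e3, 0.3, 210e3, 0.3)
print(f"half-width = {b:.3f} mm, peak pressure = {p_max:.0f} MPa")
```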

1.8 Other considerations

  • 1.8-1. If you are inexperienced in using finite elements, start with simple models. Do not try to directly set up a complex model from scratch and make sure that you understand what your program does and what different options are good for. It is almost impossible to find errors in a large and complex model if you do not have long experience and if you do not know what results you expect beforehand.

  • 1.8-2. Many parameters that are not specified by the user are set to default values in finite element programs. You should check whether these defaults are correct; especially for those parameters that directly affect the solution (like element types, material definitions etc.). If you do not know what a parameter does and whether the default is appropriate, consult the manual. For parameters that only affect the efficiency of the solution (for example, which solution scheme is used to solve matrix equations), understanding the parameters is less important because a wrongly chosen parameter will not affect the final solution, but only the CPU time or whether a solution is found at all.

  • 1.8-3. Modern finite element software is equipped with a plethora of complex special techniques (XFEM, element deletion, node separation, adaptive error-controlled mesh-refinement, mixed Eulerian-Lagrangian methods, particle based methods, fluid-structure interaction, multi-physics, user-defined subroutines etc.). If you plan to use these techniques, make sure that you understand them and test them using simple models. If possible, build up a basic model without these features first and then add the complex behaviour. Keep in mind that the impressive simulations you see in presentations were created by experts and may have been carefully selected and may not be typical for the performance.

2. Solving the model

Even if your model is solved without any convergence problems, nevertheless look at the log file written by the solver to check for warning messages. They may be harmless, but they may indicate some problem in defining your model.

Convergence problems are usually reported by the program with warning or error messages. You can also see that your model has not converged if the final time in the time step is not the end time you specified in the model definition.

There are two reasons for convergence problems: On the one hand, the solution algorithm may fail to find a solution even though a solution of the problem does exist. In this case, modifying the solution algorithm may solve the problem (see section 2.2). On the other hand, the problem definition may be faulty so that the problem is unstable and does not have a solution (section 2.3).

If you are new to finite element simulations, you may be tempted to think that these errors are simply caused by specifying an incorrect option or forgetting something in the model definition. Errors of this type exist as well, but they are usually detected before calculation of your model begins (and are not discussed here). Instead, treat the non-convergence of your simulation in the same way as any other scientific problem. Formulate hypotheses why the simulation fails to converge. Modify your model to prove² or disprove these hypotheses to find the cause of the problems.

²Of course natural science is not dealing with “proofs”, but this is not the place to think about the philosophy of science. Replace “prove” with “strengthen” or “find evidence for” if you like.

2.1 General considerations

  • 2.1-1. In an implicit simulation, the size of the time increments is usually automatically controlled by the program. If convergence is difficult, the time increments are reduced.³ Usually, the program stops if the time increment is too small or if the convergence problems persist even after several cutbacks of the time increment. (In Abaqus, you get the error messages Time increment smaller than minimum or Too many attempts, respectively.) These messages themselves thus do not tell you anything about the reason for the convergence problems. To find the cause of the convergence problems, look at the solver log file in the increment(s) before the final error message. You will probably see warnings that tell you what kind of convergence problem was responsible (for example, the residual force is too large, the contact algorithm did not converge, the temperature increments were too large). If available, also look at the unconverged solution and compare it to the last, converged timestep. Frequently, large changes in some quantity may indicate the location of the problem.

  • 2.1-2. Use the postprocessor to identify the node with the largest residual force and the largest change in displacement in the final increment. Often (but not always) this tells you where the problem in the model occurs. (Apply the same logic in a thermal simulation looking at the temperature changes and heat fluxes.)

  • 2.1-3. If the first increment does not converge, set the size of the first time increment to a very small value. If the problem persists, the model itself may be unstable (missing boundary conditions, initial overlap of contacting surfaces). To find the cause of the problem, you can remove all external loads step by step or add further boundary conditions to make sure that the model is properly constrained (if you pin two nodes for each component, rigid body movements should be suppressed – if the model converges in this case, you probably did not have sufficient boundary conditions in your original model). Alternatively or additionally, you may add numerical stabilization to the problem definition. (In numerical stabilization, artificial friction is added to the movement of nodes so that stabilizing forces are generated if nodes start to move rapidly.) However, make sure that the stabilization does not affect your results too strongly. Also check for abrupt jumps in some boundary conditions, for example a finite displacement that is defined at the beginning of a step or a sudden jump in temperature or load. If you apply a load instantaneously, cutting back the time increments does not help the solution process. If this occurs, ramp your load instead.

  • 2.1-4. Avoid rapid changes in an amplitude within a calculation step (see also 1.2-7 and 1.3-5). For example, if you hold a heat flux (or temperature or stress) for a long time and then abruptly reduce it within the same calculation step, the time increment will suddenly jump to a point where the temperature is strongly reduced. This abrupt change may cause convergence problems. Define a second step and choose small increments at the beginning of the second step where large changes in the model can be expected.

  • 2.1-5. Try the methods described in section 2.2 to see whether the problem can be resolved by changing the solution algorithm.

  • 2.1-6. Sometimes, it is the calculation of the material law at an integration point that does not converge (to calculate stresses from strains at the integration points inside the solver, another Newton algorithm is used at each integration point [3]). If this is the case, the material definition may be incorrect or problematic (for example, due to incorrectly specified material parameters or because there is extreme softening at a point).

  • 2.1-7. Simplify your model step by step to find the reason of the convergence problems. Use simpler material laws (simple plasticity instead of damage, elasticity instead of plasticity), switch off non-linear geometry, remove external loads etc. If the problem persists, try to create a minimum example – the smallest example you can find that shows the same problem. This has several advantages: the minimum example is easier to analyse, needs less computing time so that trying things is faster, and it can also be shown to others if you are looking for help (see section 4).

  • 2.1-8. If your simulation is static, switching to an implicit dynamic simulation may help because the inertial forces act as natural stabilizers. If possible, use a quasi-static option.

  • 2.1-9. Explicit simulations usually have fewer convergence problems. A frequently heard piece of advice for solving convergence problems is to switch from implicit to explicit models. I strongly recommend switching from implicit static to explicit quasi-static for convergence reasons only if you understand the reasons for the convergence problems and cannot overcome them with the techniques described here. You should also keep in mind that explicit programs may offer different functionality (for example, different element types). If your problem is static, you can only use a quasi-static explicit analysis, which may also have problems (see 1.2-4). Be aware that in an explicit simulation, elastic waves may occur that may change the stress patterns.

³The rationale behind this is that the solution from the previous increment is a better initial guess for the next increment if the change in the load is reduced.

2.2 Modifying the solution algorithm

If your solution algorithm does not converge for numerical reasons, these modifications may help. They are useless if there is a true model instability, see section 2.3.

  • 2.2-1. Finite element programs use default values to control the Newton iterations. If no convergence is reached after a fixed number of iterations, the time step is cut back. In strongly non-linear problems, these default values may be too tight. For example, Abaqus cuts back on the time increment if the Newton algorithm does not converge after 4 iterations; setting this number to a larger value is often sufficient to reach convergence (for example, by adding *Controls, analysis=discontinuous to the input file).

  • 2.2-2. If the Newton algorithm does not converge, the time increment is cut back. If it becomes smaller than a pre-defined minimum value, the simulation stops with an error message. This minimum size of the time increment can be adjusted. Furthermore, if a sudden loss in stability (or change in load) occurs so that time increments need to be changed by several orders of magnitude, the number of cutbacks also needs to be adapted (see next point). In this case, another option is to define a new time step (see 2.1-4) that starts at this critical point and that has a small initial increment.

  • 2.2-3. The allowed number of cutbacks per increment can also be adapted (in Abaqus, use *CONTROLS, parameters=time incrementation). This may be helpful if the simulation proceeds at first with large increments before some difficulty is reached – allowing for a larger number of cutbacks enables the program to use large timesteps at the beginning. Alternatively, you can reduce the maximum time increment (so that the size of the necessary cutback is reduced) or you can split your simulation step in two with different time incrementation settings in the step where the problem occurs (see 2.1-4).

  • 2.2-4. Be aware that the previous two points will work sometimes, but not always. There is usually no sense in allowing a smallest time increment that is ten or twenty orders of magnitude smaller than the step size, or in allowing dozens of cutbacks; this only increases the CPU time.

  • 2.2-5. Depending on your finite element software, there may be many more options to tune the solution process. In Abaqus, for example, the initial guess for the solution of a time increment is calculated by extrapolation from the previous steps. Usually this improves convergence, but it may cause problems if something in the model changes abruptly. In this case, you can switch the extrapolation off (STEP, extrapolation=no). You can also add a line search algorithm that scales the calculated displacements to find a better solution (CONTROLS, parameters=line search). Consult the manual for options to improve convergence.

  • 2.2-6. While changing the iteration control (as explained in the previous points) is often needed to achieve convergence, the solution controls that are used to determine whether a solution has converged should only be changed if absolutely necessary. Only do so (in Abaqus, use *CONTROLS, parameters=field) if you know exactly what you are doing. One example where changing the controls may be necessary is when the stress is strongly concentrated in a small part of a very large structure [5]. In this case, an average nodal force that is used to determine convergence may impose too strong a constraint on the convergence of the solution, so that convergence should be based on local forces in the region of stress concentration. Be aware that since forces, not stresses, are used in determining the convergence, changing the mesh density requires changing the solution controls. Make sure that the accepted solution is indeed a solution and that your controls are sufficiently strict. Vary the controls to ensure that their value does not affect the solution.

  • 2.2-7. Contact problems sometimes do not converge due to problems in establishing which nodes are in contact (sometimes called “zig-zagging” [14]). This often happens if the first contact is made by a single node. Smoothing the contact surfaces may help.

  • 2.2-8. If available and possible, use general contact definitions where the contact surfaces are determined automatically.

  • 2.2-9. If standard contact algorithms do not converge, soft contact formulations (which implement a soft transition between “no contact” and “full contact”) may improve convergence; however, they may allow for some penetration of the surfaces and thus affect the results.

2.3 Finding model instabilities

A model is unstable if there actually is no solution to the mechanical problem.

  • 2.3-1. Instabilities are frequently due to a loss in load bearing capacity of the structure. There are several reasons for that:

    • The material definition may be incorrect. If, for example, a plastic material is defined without hardening, the load cannot increase after the component has fully plastified. Simple typos or incorrectly used units may also cause a loss in material strength.

    • Thermal softening (the reduction of strength with increasing temperature) may cause an instability in a thermo-mechanical problem.

    • Non-linear geometry may cause an instability because the cross section of a load-bearing component reduces during deformation.

    • A change in contact area, a change from sticking to sliding in a simulation with friction or a complete loss of contact between two bodies may also cause instabilities because the structure may not be able to bear an increase in the load.

  • 2.3-2. Local instabilities may cause highly distorted meshes that prevent convergence. It may be helpful to define the mesh in such a way that elements become more regular during deformation (see also 1.6-10).

  • 2.3-3. If your model is load-controlled (a force is applied), switch to a displacement-controlled loading. This avoids instabilities due to loss in load-bearing capacity.

  • 2.3-4. Artificial damping (stabilization) may be added to stabilize an unstable model. However, check carefully that the solution is not unduly affected by this. Adding artificial damping may also help to determine the cause of the instability. If your model converges with damping, you know that an instability is present.

2.4 Problems in explicit simulations

As already stated in 2.1-9, explicit simulations have fewer convergence problems than implicit simulations. However, sometimes even an explicit simulation may run into trouble.

  • 2.4-1. During simulation, elements may distort excessively. This may happen for example if a concentrated load acts on a node or if the displacement of a node becomes very large due to a loss in stability (for example in a damage model). In this case, the element shape might become invalid (crossing over of element edges, negative volumes at integration points etc.). If this happens, changing the mesh might help – elements that have a low quality (large aspect ratio, small initial volume) are especially prone to this type of problem. Note that second-order elements are often more sensitive to this problem than first-order elements.

  • 2.4-2. The stable time increment in an explicit simulation is given by the time a sound wave needs to travel through the smallest element. If elements distort strongly, they may become very thin in one direction so that the stable time increment becomes unreasonably small. In this case, changing the mesh might help.

3. Postprocessing

There are two aspects to checking that a model is correct: Verification is the process of showing that the model was correctly specified and actually does what it was created to do (loads, boundary conditions, material behaviour etc. are correct). Validation means to check the model by making an independent prediction (i. e., a prediction that was not used in specifying or calibrating the model) and checking this prediction in some other way (for example, experimentally).⁴

General advice: If you modify your model significantly (because you build up a complicated model in steps, have to correct errors or add more complex material behaviour to get agreement with experimental results etc.), you should again check the model. It is not clear that the mesh density that was sufficient for your initial model is still sufficient for the modified model. The same is true for other considerations (like the choice of element type etc.).

⁴Note that the terms “verification” and “validation” are used differently in different fields.

3.1 Checking the plausibility and verifying the model

  • 3.1-1. Check the plausibility of your results. If your simulation deviates from your intuition, continue checking until you are sure that you understand why your intuition (or the simulation) was incorrect. Never believe a result of a simulation that you do not understand and that should be different according to your intuition. Either the model or your understanding of the physical problem is incorrect – in both cases, it is important to understand all effects.

  • 3.1-2. Check your explanations for the solution, possibly with additional simulations. For example, if you assume that thermal expansion is the cause of a local stress maximum, re-run the simulation with a different or vanishing coefficient of thermal expansion. Predict the results of such a simulation and check whether your prediction was correct.

  • 3.1-3. Check all important solution variables. Even if you are only interested in, for example, displacements of a certain point, check stresses and strains throughout the model.

  • 3.1-4. In 3D-simulations, do not only look at contour plots of the component’s surface; also check the results inside the component by cutting through it.

  • 3.1-5. Make sure you understand which properties are vectors or tensors. Which components of stresses or strains are relevant depends on your model, the material, and the question you are trying to answer. Default settings of the postprocessor are not always appropriate; for example, Abaqus plots the von Mises stress as the default stress variable, which is not very helpful for ceramic materials.

  • 3.1-6. Check the boundary conditions again. Are all nodes constrained in the desired manner? Exaggerating the deformation (use Common plot options in Abaqus) or picking nodes with the mouse may be helpful to check this precisely.

  • 3.1-7. Check the mesh density (see 1.6-5). If possible, calculate the model with different mesh densities (possibly for a simplified problem) and make sure that the mesh you finally use is sufficiently fine. When comparing different meshes, the variation in the mesh density should be sufficiently large to make sure that you can actually see an effect.

  • 3.1-8. Check the mesh quality again, paying special attention to regions where gradients are large. Check that the conditions explained in section 1.6 (element shapes and sizes, no strong discontinuities in the element sizes) are fulfilled and that discontinuities in the stresses are not due to a change in the numerical stiffness (due to a change in the integration scheme or element size).

  • 3.1-9. Check that stresses are continuous between elements. At interfaces between different materials, check that normal stresses and tangential strains are continuous.

  • 3.1-10. Check that the normal stress at any free surface is zero.

  • 3.1-11. Check the mesh density at contact surfaces: can the actual movement and deformation of the surfaces be represented by the mesh? For example, if a mesh is too coarse, nodes may be captured in a corner or a surface may not be able to deform correctly.

  • 3.1-12. Keep in mind that discretization errors at contact surfaces also influence stresses and strains. If you use non-standard contact definitions (2.2-9), try to evaluate how these influence the stresses (for example by comparing actual node positions with what you would expect for hard contact).

  • 3.1-13. Watch out for divergences. The stress at a sharp notch or crack tip is theoretically infinite – the value shown by your program is then solely determined by the mesh density and, if you use a contour plot, by the extrapolation used by the postprocessor (see 3.2-1).

  • 3.1-14. In dynamic simulations, elastic waves propagate through the structure. They may dominate the stress field. Watch out for reflections of elastic waves and keep in mind that, in reality, these waves are dampened.

  • 3.1-15. If you assumed linear geometry, check whether strains and deformations are sufficiently small to justify this assumption, see 1.2-8.

3.2 Implementation issues

  • 3.2-1. Quantities like stresses or strains are only defined at integration points. Do not rely on extreme values from a contour plot – these values are extrapolated. It strongly depends on the problem whether these extrapolated values are accurate or not. For example, in an elastic material the extrapolation is usually reasonable; in an ideally-plastic material, extrapolated von Mises stresses may exceed the actual yield stress by a factor of 2 or more. Furthermore, the contour lines themselves may show incorrect maxima or minima, see fig. 9 for an example.

  • 3.2-2. It is often helpful to use “quilt” plots where each element is shown in a single color averaged from the integration point values (see also fig. 9).

  • 3.2-3. The frequently used rainbow color spectrum has been shown to be misleading and should not be used [8]. Gradients may be difficult to interpret because human color vision has a different sensitivity in different parts of the spectrum. Furthermore, many people have a color vision deficiency and are unable to discern reds, greens and yellows. For variables that run from zero to a maximum value (temperature, von Mises stress), use a sequential spectrum (for example, from black to red to yellow); for variables that can be positive and negative, use a diverging spectrum with a neutral color at zero, see fig. 10 and the sketch after this list.

  • 3.2-4. Discrete time-stepping (see 1.2-7) may also influence the post-processing of results. If you plot the stress-strain curve of a material point by connecting values measured at the discrete simulation times, the resulting curve will not coincide perfectly with the true stress-strain curve, although the data points themselves are correct.

  • 3.2-5. Complex simulation techniques (like XFEM, element deletion etc., see 1.8-3) frequently use internal parameters to control the simulation that may affect the solution process. Do not rely on default values for these parameters and check that the values do not affect the solution inappropriately.

  • 3.2-6. If you use element deletion, be aware that removing elements from the simulation is basically an unphysical process since material is removed. This may affect the energy balance or stress fields near the removed elements. For example, in models of machining processes, removing elements at the tool tip to separate the material strongly influences the residual stress field.
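As an illustration of 3.2-3, the sketch below contrasts a sequential and a diverging colormap on a synthetic field; in practice you would export the field values from the postprocessor, and the matplotlib colormap names are just one reasonable choice:

```python
# Sketch for 3.2-3: sequential colormap for a zero-to-maximum quantity (e.g. von Mises
# stress), diverging colormap centred on zero for a signed quantity (e.g. S11).
# The plotted field is synthetic and only illustrates the colormap choice.
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
signed_field = np.sin(4 * np.pi * x) * np.cos(2 * np.pi * y)   # positive and negative
positive_field = np.abs(signed_field)                           # zero to maximum

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
im1 = ax1.pcolormesh(x, y, positive_field, cmap="inferno")      # sequential
ax1.set_title("zero-to-max quantity: sequential")
fig.colorbar(im1, ax=ax1)

vmax = np.abs(signed_field).max()
im2 = ax2.pcolormesh(x, y, signed_field, cmap="RdBu_r",         # diverging, neutral at 0
                     vmin=-vmax, vmax=vmax)
ax2.set_title("signed quantity: diverging, centred on 0")
fig.colorbar(im2, ax=ax2)
plt.show()
```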

3.3 Validation

  • 3.3-1. If possible, use your model to make an independent prediction that can be tested.

  • 3.3-2. If you used experimental data to adapt unknown parameters (see 1.4), correctly reproducing these data with the model does not validate it, but only verifies it.

  • 3.3-3. The previous point also holds if you made a prediction and afterwards had to change your model to get agreement with an experiment. After this model change, the experiment cannot be considered an independent verification.

4. Getting help

If you cannot solve your problem, you can try to get help from your software vendor's support (provided you are entitled to it) or from the internet (for example, on ResearchGate or iMechanica). To get helpful answers, please observe the following points:

  • 4-1. Check that you have read relevant pages in the manual and that your question is not answered there.

  • 4-2. Describe your problem as precisely as possible. Which error occurred? What was the exact error message, and which warnings appeared? Show pictures of the model and describe it (which element type, which material, what kind of problem – static, dynamic, explicit, implicit, etc.).

  • 4-3. If possible, provide a copy of your model or, even better, provide a minimum example that shows the problem (see 2.1-7).

  • 4-4. If you get answers to your request, give feedback whether this has solved your problem, especially if you are in an internet forum or similar. People are sacrificing their time to help you and will be interested to see whether their advice was actually helpful and what the solution to the problem was. Providing feedback will also help others who find your post because they are facing similar problems.

Acknowledgement

Thanks to Philipp Seiler for many discussions and for reading a draft version of this manuscript, and to Axel Reichert for sharing his experience on getting models to converge.

References

[1] F. Armero. On the locking and stability of finite elements in finite deformation plane strain problems. Computers & Structures, 75(3):261–290, 2000.
[2] CAE Associates. Practical FEA simulations. https://caeai.com/blog/practical-fea-simulations?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=caeai. Accessed 31.5.2017.
[3] Martin Bäker. Numerische Methoden in der Materialwissenschaft. Fachbereich Maschinenbau der TU Braunschweig, 2002.
[4] Martin Bäker, Stefanie Reese, and Vadim V. Silberschmidt. Simulation of crack propagation under mixed-mode loading. In Siegfried Schmauder, Chuin-Shan Chen, Krishan K. Chawla, Nikhilesh Chawla, Weiqiu Chen, and Yutaka Kagawa, editors, Handbook of Mechanics of Materials. Springer Singapore, Singapore, 2018.
[5] Martin Bäker, Joachim Rösler, and Carsten Siemers. A finite element model of high speed metal cutting with adiabatic shearing. Computers & Structures, 80(5):495–513, 2002.
[6] Martin Bäker and Aviral Shrot. Inverse parameter identification with finite element simulations using knowledge-based descriptors. Computational Materials Science, 69:128–136, 2013.
[7] Klaus-Jürgen Bathe. Finite Element Procedures. Klaus-Jürgen Bathe, 2006.
[8] David Borland and Russell M. Taylor II. Rainbow color map (still) considered harmful. IEEE Computer Graphics and Applications, (2):14–17, 2007.
[9] Dassault Systèmes. Abaqus Manual, 2017.
[10] Guido Dhondt. The Finite Element Method for Three-Dimensional Thermomechanical Applications. Wiley, 2004.
[11] Ronald Krueger. Virtual crack closure technique: History, approach, and applications. Applied Mechanics Reviews, 57(2):109, 2004.
[12] A. M. Prior. Applications of implicit and explicit finite element techniques to metal forming. Journal of Materials Processing Technology, 45(1):649–656, 1994.
[13] Joachim Rösler, Harald Harders, and Martin Bäker. Mechanical Behaviour of Engineering Materials: Metals, Ceramics, Polymers, and Composites. Springer Science & Business Media, 2007.
[14] Peter Wriggers and Tod A. Laursen. Computational Contact Mechanics. Springer, 2006.



A Possible First Use of CAM/CAD


Norman Sanders Cambridge Computer Lab Ring, William Gates Building, Cambridge, England ProjX, Walnut Tree Cottage, Tattingstone Park, Ipswich, Suffolk IP9 2NF, England


Abstract

This paper is a discussion of the early days of CAM-CAD at the Boeing Company, covering the period approximately 1956 to 1965. This period saw probably the first successful industrial application of ideas that were gaining ground during the very early days of the computing era. Although the primary goal of the CAD activity was to find better ways of building the 727 airplane, this activity led quickly to the more general area of computer graphics, leading eventually to today’s picture-dominated use of computers.

Keywords: CAM, CAD, Boeing, 727 airplane, numerical-control.


1. Introduction to Computer-Aided Design and Manufacturing

Some early attempts at CAD and CAM systems occurred in the 1950s and early 1960s. We can trace the beginnings of CAD to the late 1950s, when Dr. Patrick J. Hanratty developed Pronto, the first commercial numerical-control (NC) programming system. In 1963, Ivan Sutherland at MIT's Lincoln Laboratory created Sketchpad, which demonstrated the basic principles and feasibility of computer-aided technical drawing.

There seems to be no generally agreed date or place where Computer-Aided Design and Manufacturing saw the light of day as a practical tool for making things. However, I know of no earlier candidate for this role than Boeing’s 727 aircraft. Certainly the dates given in the current version of Wikipedia are woefully late; ten years or so.

So, this section is a description of what we did at Boeing from about the mid-fifties to the early sixties. It is difficult to specify precisely when this project started – as with most projects. They don’t start, but having started they can become very difficult to finish. But at least we can talk in terms of mini eras, approximate points in time when ideas began to circulate and concrete results to emerge.

Probably the first published ideas for describing physical surfaces mathematically appeared in Roy Liming's Practical Analytic Geometry with Applications to Aircraft (Macmillan, 1944). His project was the Mustang fighter. However, Liming was sadly way ahead of his time; there weren't as yet any working computers or ancillary equipment to make use of his ideas. Luckily, we had a copy of the book at Boeing, which got us off to a flying start. We also had a mighty project to try our ideas on – and a team of old B-17/29 engineers who by now were running the company, rash enough to allow us to commit to an as yet unused and therefore unproven technology.

Computer-aided manufacturing (CAM) comprises the use of computer-controlled manufacturing machinery to assist engineers and machinists in manufacturing or prototyping product components, either with or without the assistance of CAD. CAM certainly preceded CAD and played a pivotal role in bringing CAD to fruition by acting as a drafting machine in the very early stages. All early CAM parts were made from the engineering drawing. The origins of CAM were so widespread that it is difficult to know whether any one group was aware of another. However, the NC machinery suppliers, Kearney & Trecker etc, certainly knew their customers and would have catalysed their knowing one another, while the Aero-Space industry traditionally collaborated at the technical level however hard they competed in the selling of airplanes.

2. Computer-Aided Manufacturing (CAM) in the Boeing Aerospace Factory in Seattle

(by Ken McKinley)

The world’s first two computers, built in Manchester and Cambridge Universities, began to function as early as 1948 and 1949 respectively, and were set to work to carry out numerical computations to support the solution of scientific problems of a mathematical nature. Little thought, if any, was entertained by the designers of these machines to using them for industrial purposes. However, only seven years later the range of applications had already spread out to supporting industry, and by 1953 Boeing was able to order a range of Numerically-Controlled machine tools, requiring computers to transform tool-makers’ instructions to machine instructions. This is a little remembered fact of the early history of computers, but it was probably the first break of computer application away from the immediate vicinity of the computer room.

The work of designing the software, the task of converting the drawing of a part to be milled to the languages of the machines, was carried out by a team of about fifteen people from Seattle and Wichita under my leadership. It was called the Boeing Parts-Programming system, the precursor to an evolutionary series of Numerical Control languages, including APT – Automatically Programmed Tooling, designed by Professor Doug Ross of MIT. The astounding historical fact here is that this was among the first ever computer compilers. It followed very closely on the heels of the first version of FORTRAN. Indeed it would be very interesting to find out what, if anything preceded it.

As early as it was in the history of the rise of computer languages, members of the team were already aficionados of two rival contenders for the job, FORTRAN on the IBM 704 in Seattle, and COBOL on the 705 in Wichita. This almost inevitably resulted in the creation of two systems (though they appeared identical to the user): Boeing and Waldo, even though ironically neither language was actually used in the implementation. Remember, we were still very early on in the development of computers and no one yet had any monopoly of wisdom in how to do anything.

The actual programming of the Boeing system was carried out in computer machine language rather than either of the higher-level languages, since the latter were aimed at a very different problem area to that of determining the requirements of machine tools.

A part of the training of the implementation team consisted of working with members of the Manufacturing Department, probably one of the first ever interdisciplinary enterprises involving computing. The computer people had to learn the language of the Manufacturing Engineer to describe aluminium parts and the milling machine processes required to produce them. The users of this new language were to be called Parts Programmers (as opposed to computer programmers).

A particularly tough part of the programming effort was to be found in the “post processors”, the detailed instructions output from the computer to the milling machine. To make life interesting there was no standardisation between the available machine tools. Each had a different physical input mechanism; magnetic tape, analog or digital, punched Mylar tape or punched cards. They also had to accommodate differences in the format of each type of data. This required lots of discussion with the machine tool manufacturers - all very typical of a new industry before standards came about.

A memorable sidelight, just to make things even more interesting, was that Boeing had one particular type of machine tool that required analog magnetic tape as input. To produce it the 704 system firstly punched the post processor data into standard cards. These were then sent from the Boeing plant to downtown Seattle for conversion to a magnetic tape, then back to the Boeing Univac 1103A for conversion from magnetic to punched tape, which was in turn sent to Wichita to produce analog magnetic tape. This made the 1103A the world’s largest, most expensive punched tape machine. As a historical footnote, anyone brought up in the world of PCs and electronic data transmission should be aware of what it was like back in the good old days!

Another sidelight was that detecting and correcting parts-programming errors was a serious problem, both in time and material. The earliest solution was to do an initial cut on wood or plastic foam or, on suitable machine tools, to replace the cutter with a pen or diamond scribe to ‘draw’ the part. This was thus the first ever use of an NC machine tool as a computer-controlled drafting machine, a technique vital later to the advent of Computer-Aided Design.

Meanwhile the U. S. Air Force recognised that the cost and complication of the diverse solutions provided by their many suppliers of Numerical Control equipment was a serious problem. Because of the Air Force's association with MIT they were aware of the efforts of Professor Doug Ross to develop a standard NC computer language. Ken McKinley, as the Boeing representative, spent two weeks at the first APT (Automatically Programmed Tooling) meeting at MIT in late 1956, with representatives from many other aircraft-related companies, to agree on the basic concepts of a common system where each company would contribute a programmer to the effort for a year. Boeing committed to support mainly the ‘post processor’ area. Henry Pinter, one of their post-processor experts, was sent to San Diego for a year, where the joint effort was based. As usually happened in those pioneering days it took more like 18 months to complete the project. After that we had to implement APT in our environment at Seattle.

Concurrently with the implementation we had to sell ourselves and the users on the new system. It was a tough sell believe me, as Norm Sanders was to discover later over at the Airplane Division. Our own system was working well after overcoming the many challenges of this new technology, which we called NC. The users of our system were not anxious to change to an unknown new language that was more complex. But upper management recognized the need to change, not least because of an important factor, the imminence of another neophytic technology called Master Dimensions.

3. Computer-Aided Design (CAD) in the Boeing Airplane Division in Renton

(by Norman Sanders)

The year was 1959. I had just joined Boeing in Renton, Washington, at a time when engineering design drawings the world over were made by hand, and had been since the beginning of time; the definition of every motorcar, aircraft, ship and mousetrap consisted of lines drawn on paper, often accompanied by mathematical calculations where necessary and possible. What is more, all animated cartoons were drawn by hand. At that time, it would have been unbelievable that what was going on in the aircraft industry would have had any effect on The Walt Disney Company or the emergence of the computer games industry. Nevertheless, it did. Hence, this is a strange fact of history that needs a bit of telling.

I was very fortunate to find myself working at Boeing during the years following the successful introduction of its 707 aircraft into the world’s airlines. It exactly coincided with the explosive spread of large computers into the industrial world. A desperate need existed for computer power and a computer manufacturer with the capacity to satisfy that need. The first two computers actually to work started productive life in 1948 and 1949; these were at the universities of Manchester and Cambridge in England. The Boeing 707 started flying five years after that, and by 1958, it was in airline service. The stage was set for the global cheap travel revolution. This took everybody by surprise, not least Boeing. However, it was not long before the company needed a shorter-takeoff airplane, namely the 727, a replacement for the Douglas DC-3. In time, Boeing developed a smaller 737, and a large capacity airplane – the 747. All this meant vast amounts of computing and as the engineers got more accustomed to using the computer there was no end to their appetite.

And it should perhaps be added that computers in those days bore little superficial similarity to today’s computers; there were certainly no screens or keyboards! Though the actual computing went at electronic speeds, the input-output was mechanical - punched cards, magnetic tape and printed paper. In the 1950s, the computer processor consisted of vacuum tubes, the memory of ferrite core, while the large-scale data storage consisted of magnetic tape drives. We had a great day if the computer system didn’t fail during a 24 hour run; the electrical and electronic components were very fragile.

We would spend an entire day preparing for a night run on the computer. The run would take a few minutes and we would spend the next day wading through reams of paper printout in search of something, sometimes searching for clues to the mistakes we had made. We produced masses of paper. You would not dare not print for fear of letting a vital number escape. An early solution to this was faster printers. About 1960 Boeing provided me with an ANalex printer. It could print one thousand lines a minute! Very soon, of course, we had a row of ANalex printers, wall to wall, as Boeing never bought one of anything. The timber needed to feed our computer printers was incalculable.

4. The Emergence of Computer Plots

With that amount of printing going on it occurred to me to ask the consumers of printout what they did with it all. One of the most frequent answers was that they plotted it. There were cases of engineers spending three months drawing curves resulting from a single night’s computer run. A flash of almost heresy then struck my digital mind. Was it possible that we could program a digital computer to draw (continuous) lines? In the computing trenches at Boeing we were not aware of the experimentation occurring at research labs in other places. Luckily at Boeing we were very fortunate at that time to have a Swiss engineer in our computer methods group who could both install hardware and write software for it; he knew hardware and software, both digital and analog. His name was Art Dietrich. I asked Art about it, which was to me the unaskable; to my surprise Art thought it was possible. So off he went in search of a piece of hardware that we could somehow connect to our computer that could draw lines on paper.

Art found two companies that made analog plotters that might be adaptable. One company was Electro Instruments in San Diego and the other was Electronic Associates in Long Branch, New Jersey. After yo-yoing back and forth, we chose the Electronic Associates machine. The machine could draw lines on paper 30x30 inches, at about twenty inches per second. It was fast! But as yet it hadn’t been attached to a computer anywhere. Moreover, it was accurate - enough for most purposes. To my knowledge, this was the first time anyone had put a plotter in the computer room and produced output directly in the form of lines. It could have happened elsewhere, though I was certainly not aware of it at the time. There was no software, of course, so I had to write it myself. The first machine ran off cards punched as output from the user programs, and I wrote a series of programs: Plot1, Plot2 etc. Encouraged by the possibility of selling another machine or two around the world, the supplier built a faster one running off magnetic tape, so I had to write a new series of programs: Tplot1, Tplot2, etc, (T for tape). In addition, the supplier bought the software from us - Boeing’s first software sale!

While all this was going on we were pioneering something else. We called it Master Dimensions. Indeed, we pioneered many computing ideas during the 1960s. At that time Boeing was probably one of the leading users of computing worldwide and it seemed that almost every program we wrote was a brave new adventure. Although North American defined mathematically the major external surfaces of the wartime Mustang P-51 fighter, it could not make use of computers to do the mathematics or to construct it because there were no computers. An account of this truly epochal work appears in Roy Liming’s book.

By the time the 727 project was started in 1960, however, we were able to tie the computer to the manufacturing process and actually define the airplane using the computer. We computed the definition of the outer surface of the 727 and stored it inside the computer, making all recourse to the definition via a computer run, as opposed to an engineer looking at drawings using a magnifying glass. This was truly an industrial revolution.

Indeed, when I look back on the history of industrial computing as it stood fifty years ago I cringe with fear. It should never have been allowed to happen, but it did. And the reason why it did was because we had the right man, Grant W. Erwin Jr, in the right place, and he was the only man on this planet who could have done it. Grant was a superb leader – as opposed to manager – and he knew his stuff like no other. He knew the mathematics, Numerical Analysis, and where it didn’t exist he created new methods. He was loved by his team; they would work all hours and weekends without a quibble whenever he asked them to do so. He was an elegant writer and inspiring teacher. He knew what everyone was doing; he held the plan in his head. If any single person can be regarded as the inventor of CAD it was Grant. Very sadly he died, at the age of 94, just as the ink of this chapter was drying.

When the Master Dimensions group first wrote the programs, all we could do was print numbers and draw plots on 30x30 inch paper with our novel plotter. Mind-blowing as this might have been, it did not do the whole job. It did not draw full-scale, highly accurate engineering lines. Computers could now draw, but they could not draw large pictures or accurate ones – or so we thought.

5. But CAM to the Rescue!

Now there seems to be a widely-held belief that computer-aided design (CAD) preceded computer-aided manufacturing (CAM). All mention of the topic carries the label CAD-CAM rather than the reverse, as though CAD led CAM. However, this was not the case, as comes out clearly in Ken McKinley’s section above. Since both started in the 1956-1960 period, it seems a bit late in the day now to raise an old discussion. However, there may be a few people around still with the interest and the memory to try to get the story right. The following is the Boeing version, at least, as remembered by some long retired participants.

5.1 Numerical Control Systems

The Boeing Aerospace division began to equip its factory about 1956 with NC machinery. There were several suppliers and control systems, among them Kearney & Trecker, Stromberg-Carlson and Thompson Ramo Wooldridge (TRW). Boeing used them for the production of somewhat complicated parts in aluminium, the programming being carried out by specially trained programmers. I hasten to say that these were not computer programmers; they were highly experienced machinists known as parts programmers. Their use of computers was simply to convert an engineering drawing into a series of simple steps required to make the part described. The language they used was similar in principle to basic computer languages in that it required a problem to be analyzed down to a series of simple steps; however, the similarity stopped right there. An NC language needs commands such as select tool, move tool to point (x,y), lower tool, turn on coolant. The process required a deep knowledge of cutting metal; it did not need to know about memory allocation or floating point.

It is important to recognize that individual initiative from below very much characterized the early history of computing - much more than standard top-down managerial decisions. Indeed, it took an unconscionable amount of time before the computing bill reached a level of managerial attention. It should not have been the cost; it should have been the value of computing that brought management to the punch. But it wasn't. I think the reason for that was that we computer folk were not particularly adept at explaining to anyone beyond our own circles what it was that we were doing. We were a corporate ecological intrusion which took some years to adjust to.

5.2 Information Consolidation at Boeing

It happened that computing at Boeing started twice, once in engineering and once in finance. My guess is that neither group was particularly aware of the other at the start. It was not until 1965 or so, after a period of conflict, that Boeing amalgamated the two areas, the catalyst being the advent of the IBM 360 system that enabled both types of computing to cohabit the same hardware. The irony here was that the manufacturing area derived the earliest company tangible benefits from computing, but did not have their own computing organization; they commissioned their programs to be written by the engineering or finance departments, depending more or less on personal contacts out in the corridor.

As Ken McKinley describes above, in the factory itself there were four different control media; punched Mylar tape, 80-column punched cards, analog magnetic tape and digital magnetic tape. It was rather like biological life after the Cambrian Explosion of 570 million years ago – on a slightly smaller scale. Notwithstanding, it worked! Much investment had gone into it. By 1960, NC was a part of life in the Boeing factory and many other American factories. Manufacturing management was quite happy with the way things were and they were certainly not looking for any more innovation. ‘Leave us alone and let’s get the job done’ was their very understandable attitude. Nevertheless, modernisation was afoot, and they embraced it.

The 1950s was a period of explosive computer experimentation and development. In just one decade, we went from 1K to 32K memory, from no storage backup at all to multiple drives, each handling a 2,400-foot magnetic tape, and from binary programming to Fortran 1 and COBOL. At MIT, Professor Doug Ross, learning from the experience of the earlier NC languages, produced a definition for the Automatically Programmed Tooling (APT) language, the intention being to find a modern replacement for the already archaic languages that proliferated across the 1950s landscape. How fast things were beginning to move suddenly, though it didn't seem that way at the time.

5.3 New Beginnings

Since MIT had not actually implemented APT, the somewhat loose airframe manufacturers’ computer association got together to write an APT compiler for the IBM 7090 computers in 1961. Each company sent a single programmer to Convair in San Diego and it took about a year to do the job, including the user documentation. This was almost a miracle, and was largely due to Professor Ross’s well-thought through specification.

When our representative, Henry Pinter, returned from San Diego, I assumed the factory would jump on APT, but they didn’t. At the Thursday morning interdepartmental meetings, whenever I said, “APT is up and running folks, let’s start using it”, Don King from Manufacturing would say, “but APT don’t cut no chips”. (That’s how we talked up there in the Pacific Northwest.) He was dead against these inter-company initiatives; he daren’t commit the company to anything we didn’t have full control over. However, eventually I heard him talking. The Aerospace Division (Ed Carlberg and Ken McKinley) were testing the APT compiler but only up to the point of a printout; no chips were being cut because Aerospace did not have a project at that time. So I asked them to make me a few small parts and some chips swept up from the floor, which they kindly did. I secreted the parts in my bag and had my secretary tape the chips to a piece of cardboard labeled ‘First ever parts cut by APT’. At the end of the meeting someone brought up the question of APT. ‘APT don’t cut no chips’ came the cry, at which point I pulled out my bag from under the table and handed out the parts for inspection. Not a word was spoken - King’s last stand. (That was how we used to make decisions in those days.)

These things happened in parallel with Grant Erwin’s development of the 727-CAD system. In addition, one of the facilities of even the first version of APT was to accept interpolated data points from CAD which made it possible to tie the one system in with the other in what must have been the first ever CAM-CAD system. When I look back on this feature alone nearly fifty years later I find it nothing short of miraculous, thanks to Doug Ross’s deep understanding of what the manufacturing world would be needing. Each recourse to the surface definition was made in response to a request from the Engineering Department, and each numerical cut was given a running Master Dimensions Identifier (MDI) number. This was not today’s CAM-CAD system in action; again, no screen, no light pen, no electronic drawing. Far from it; but it worked! In the early 1960s the system was a step beyond anything that anyone else seemed to be doing - you have to start somewhere in life.

6. Developing Accurate Lines

An irony of history was that the first mechanical movements carried out by computers were not a simple matter of drawing lines; they were complicated endeavors of cutting metal. The computer-controlled equipment consisted of vast multi-ton machines spraying aluminum chips in all directions. The breakthrough was to tame the machines down from three dimensions to two, which happened in the following extraordinary way. It is perhaps one of the strangest events in the history of computing and computer graphics, though I don't suppose anyone has ever published this story. Most engineers know about CAD; however, I do not suppose anyone outside Boeing knows how it came about.

6.1 So, from CAM to CAD

Back to square one for a moment. As soon as we got the plotter up and running, Art Dietrich showed some sample plots to the Boeing drafting department management. Was the plotting accuracy good enough for drafting purposes? The answer - a resounding No! The decision was that Boeing would continue to draft by hand until the day someone could demonstrate something that was superior to what we were able to produce. That was the challenge. However, how could we meet that challenge? Boeing would not commit money to acquiring a drafting machine (which did not exist anyway) without first subjecting its output to intense scrutiny. Additionally, no machine tool company would invest in such an expensive piece of new equipment without an order or at least a modicum of serious interest. How do you cut this Gordian knot?

In short, at that time computers could master-mind the cutting of metal with great accuracy using three-dimensional milling machines. Ironically, however, they could not draw lines on paper accurately enough for design purposes; they could do the tough job but not the easy one.

However, one day there came a blinding light from heaven. If you can cut in three dimensions, you can certainly scratch in two. Don’t do it on paper; do it on aluminium. It had the simplicity of the paper clip! Why hadn’t we thought of that before? We simply replaced the cutter head of the milling machine with a tiny diamond scribe (a sort of diamond pen) and drew lines on sheets of aluminium. Hey presto! The computer had drawn the world’s first accurate lines. This was done in 1961.

The next step was to prove to the 727 aircraft project manager that the definition that we had of the airplane was accurate, and that our programs worked. To prove it they gave us the definition of the 707, an aircraft they knew intimately, and told us to make nineteen random drawings (canted cuts) of the wing using this new idea. This we did. We trucked the inscribed sheets of aluminium from the factory to the engineering building and for a month or so engineers on their hands and knees examined the lines with microscopes. The Computer Department held its breath. To our knowledge this had never happened before. Ever! Anywhere! We ourselves could not be certain that the lines the diamond had scribed would match accurately enough the lines drawn years earlier by hand for the 707. At the end of the exercise, however, industrial history emerged at a come-to-God meeting. In a crowded theatre the chief engineer stood on his feet and said simply that the design lines that the computer had produced had been under the microscope for several weeks and were the most accurate lines ever drawn - by anybody, anywhere, at any time. We were overjoyed and the decision was made to build the 727 with the computer. That is the closest I believe anyone ever came to the birth of Computer-Aided Design. We called it Design Automation. Later, someone changed the name. I do not know who it was, but it would be fascinating to meet that person.

6.2 CAM-CAD Takes to the Air

Here are pictures of the first known application of CAM-CAD. The first picture is that of the prototype of the 727. Here you can clearly see the centre engine inlet just ahead of the tail plane. Seen from the front it is elliptical, as can be seen from the following sequence of manufacturing stages:- (Images of the manufacturing stages of the 727 engine inlet are shown here)

6.3 An Unanticipated Extra Benefit

One of the immediate, though unanticipated, benefits of CAD was transferring detailed design to subcontractors. Because of our limited manufacturing capacity, we subcontracted a lot of parts, including the rear engine nacelles (the covers) to the Rohr Aircraft Company of Chula Vista in California. When their team came up to Seattle to acquire the drawings, we instead handed them boxes of data in punched card form. We also showed them how to write the programs and feed their NC machinery. Their team leader, Nils Olestein, could not believe it. He had dreamed of the idea but he never thought he would ever see it in his lifetime: accuracy in a cardboard box! Remember that in those days we did not have email or the ability to send data in the form of electronic files.

6.4 Dynamic Changes

The cultural change to Boeing due to the new CAD systems was profound. Later on we acquired a number of drafting machines from the Gerber Company, who now knew that there was to be a market in computer-controlled drafting, and the traditional acres of drafting tables began slowly to disappear. Hand drafting had been a profession since time immemorial. Suddenly its existence was threatened, and after a number of years, it no longer existed. That also goes for architecture and almost any activity involving drawing.

Shortly afterwards, as the idea caught on, people started writing CAD systems which they marketed widely throughout the manufacturing industry as well as in architecture. Eventually our early programs vanished from the scene after being used on the 737 and 747, to be replaced by standard CAD systems marketed by specialist companies. I suppose, though, that even today’s Boeing engineers are unaware of what we did in the early 1960s; generally, corporations are not noted for their memory.

Once the possibility of drawing with the computer became known, the idea took hold all over the place. One of the most fascinating areas was to make movie frames. We already had flight simulation; Boeing ‘flew’ the Douglas DC-8 before Douglas had finished building it. We could actually experience the airplane from within. We did this with analog computers rather than digital. Now, with digital computers, we could look at an airplane from the outside. From drawing aircraft one could very easily draw other things such as motorcars and animated cartoons. At Boeing we established a Computer Graphics Department around 1962 and by 1965 they were making movies by computer. (I have a video tape made from Boeing’s first ever 16mm movie if anyone’s interested.) Although slow and simple by today’s standards, it had become an established activity. The rest is part of the explosive story of computing, leading up to today’s marvels such as computer games, Windows interfaces, computer processing of film and all the other wonders of modern life that people take for granted. From non-existent to all-pervading within a lifetime!

7. The Cosmic Dice

Part of the excitement of this computer revolution that we have brought about in these sixty years was the unexpected benefits. To be honest, a lot of what we did, especially in the early days, was pure serendipity; it looked like a good idea at the time but there was no way we could properly justify it. I think had we had to undertake a solid financial analysis most of the projects would never have got off the ground and the computer industry would not have got anywhere near today’s levels of technical sophistication or profitability. Some of the real payoffs have been a result of the cosmic dice throwing us a seven. This happened already twice with the first 727.

The 727 rolled out in November, 1962, on time and within budget, and flew in April, 1963. The 727 project team were, of course, dead scared that it wouldn’t. But the irony is that it would not have happened had we not used CAD. During the early period, before building the first full-scale mockup, as the computer programs were being integrated, we had a problem fitting the wing to the wing-shaped hole in the body; the wing-body join. The programmer responsible for that part of the body program was yet another Swiss by name Raoul Etter. He had what appeared to be a deep bug in his program and spent a month trying to find it. As all good programmers do, he assumed that it was his program that was at fault. But in a moment of utter despair, as life was beginning to disappear down a deep black hole, he went cap in hand to the wing project to own up. “I just can’t get the wing data to match the body data, and time is no longer on my side.” “Show us your wing data. Hey where did you get this stuff?” “From the body project.” “But they’ve given you old data; you’ve been trying to fit an old wing onto a new body.” (The best time to make a design change is before you’ve actually built the thing!) An hour later life was restored and the 727 became a single numerical entity. But how would this have been caught had we not gone numerical? I asked the project. At full-scale mockup stage, they said. In addition to the serious delay what would the remake have cost? In the region of a million dollars. Stick that in your project analysis!

The second occasion was just days prior to roll-out. The 727 has leading-edge flaps, but at installation they were found not to fit. New ones had to be produced over night, again with the right data. But thanks to the NC machinery we managed it. Don’t hang out the flags before you’ve swept up the final chip.

8. A Fascinating Irony

This discussion is about using the computer to make better pictures of other things. At no time did any of us have the idea of using pictures to improve the way we ran computers. This had to wait for Xerox PARC, a decade or so later, to throw away our punched cards and rub our noses into a colossal missed opportunity. I suppose our only defence is that we were being paid to build airplanes not computers.

9. Conclusion

In summary, CAM came into existence during the late 1950s, catalyzing the advent of CAD in the early 1960s. This mathematical definition of line drawing by computers then radiated out in three principal directions: (a) highly accurate engineering lines and surfaces, (b) faster and more accurate scientific plotting and (c) very high-speed animation. Indeed, the world of today's computer user consists largely of pictures; the interface is a screen of pictures; a large part of technology lessons at school uses computer graphics. And we must remember that the computers at that time were minuscule compared to the size of today's PC in terms of memory and processing speed. We've come a long way from that 727 wing design.



Analysis Origins - Fluent

This article chronicles the origins of Fluent, a pioneering Computational Fluid Dynamics (CFD) code in the 1980s that became the dominant market leader by the late 90s and is today part of ANSYS Inc., one of the leading simulation software providers for engineering.

“CHAM showed the world that fluid dynamics problems could be solved on a computer. Fluent, on the other hand, proved that engineers could use this software to solve real world problems.” Attributed to Brian Spalding

Many of today’s leading software companies emerged from the vision of a single pioneer. Fluent, on the other hand, grew out of the contributions of multiple personalities. The earliest was Hasan Ferit Boysan who came to Sheffield University in the United Kingdom in 1975 for graduate work in fluid mechanics, which at this time was almost universally performed with hand calculations. Boysan met Ali Turan, another student from Turkey, who was working with the Cora3 code, one of the earliest CFD codes developed by Professor Brian Spalding of Imperial College, London to model combustion in a dump combustor. As with the other CFD codes available at this time, users created an input deck of punch cards for Cora3. Errors in the deck were often discovered only after the solver crashed. Turan asked Boysan to help use Cora3 to solve a problem he was working on for his PhD thesis. Progress was slow because every time the researchers changed the geometry or boundary conditions, they had to manually recode the input deck. It was a painful experience, but they achieved enough results for Turan to complete his thesis. Boysan went back to Turkey in 1976 with a reputation of being able to get results from a CFD code.

In 1979, Jim Swithenbank, at the time Professor of Chemical Engineering at the University of Sheffield, invited Boysan back to Sheffield to help him develop a code capable of interactively defining geometry and boundary conditions for a specific problem involving cyclone separators. The resulting software was developed with a student, Bill Ayers, as part of his final-year research project and was published in the Transactions of the Institution of Chemical Engineers. With the permission of the authors, the editor of the publication added a note that readers could contact the authors to obtain a copy of the source code. Swithenbank and Boysan were surprised to receive several hundred requests for the code, alerting them to the commercial potential of an interactive CFD code.

 


Figure 1: Painting by Sheffield artist Joe Scarborough, showing locations from Fluent’s UK history.

“The picture is specific to what was the Fluent Europe entity, by local artist, Joe Scarborough, commissioned in 2000 when the company moved to its new premises next to Sheffield Airport (there was an airport) and tracking the history from Sheffield University.

The Sheffield University building on Mappin Street (top left) was where the early version of Fluent was developed. Next to that (narrow red building) is the original office on West Street when Fluent Europe was opened. It was in a few rooms above a book shop, where Ferit Boysan and Bill Ayers worked alongside a very large computer (physically, although not necessarily in terms of computational capacity). The supertram system was installed on West Street when the office was there, creating huge disruption to the center of Sheffield. The advertising hoarding for Rolls-Royce is a nod to them being the largest customer at that time.

Then on the right are the gardens and rear of Holmwood House, the building on Cortworth Road where Fluent Europe moved to in the 90s during expansion. The gardens of Holmwood House show families enjoying picnics - at that time there were a lot of people in the company starting to have families and the summer BBQ was typically a party in the garden. I haven't been able to find out anything about the greenhouse. When Holmwood House was sold, it was bought by one of the band members of the one-time popular rock combo Def Leppard.

Shown next to Holmwood House is the building at Sheffield Airport Business Park (the 'Airport' has subsequently been dropped from the name). Rather ironically the build of the new offices was delayed by delivery of the steel work – which came from Holland – not Sheffield.

The significance of the flags outside the new unit is that the Turkish flag reflects Ferit Boysan's origins, the Union Jack obviously indicates UK input, and the Stars & Stripes reflect the American ownership – originally Creare. Not sure about the European flag, but maybe there was a contribution to the cost of the new building from the EU?

In the distance top left, beyond the somewhat displaced ocean, there is a reference to Ferit's Turkish background and on the right is the Lebanon office in New Hampshire.

Towards the bottom on the left, you can see a few local details: Jessop's Hospital (just captured at the far-left) showing an expectant couple (again a reference to the number of young families) and the Red Deer pub that was a likely source of inspiration for those at Sheffield University due to its location next to Mappin Street and later a place that Fluent staff frequented. The football pitch is either capturing the 5-a-side team (that played late 90s to early 00s) or a local acknowledgement to Hallam FC - the oldest ground in the world.

In the foreground people are shown enjoying outdoor pursuits in the Peak District (cycle/ climb/ walk) - a common interest for many staff.

We're not sure about the sports cars but suspect one (maybe both?) was Ferit's.

There is a possibility that Ferit and his wife as well as Jim Swithenbank and his wife are shown somewhere too. We suspect Joan Swithenbank is standing at the back doorway of Holmwood House.”

Innovative Code made CFD Faster, More Accessible

Boysan and Ayers, a Sheffield graduate student, wrote a general-purpose version of this software that represented a major departure from the CFD codes of that era by featuring an interactive interface that enabled users to graphically change the geometry and boundary conditions and observe the resulting effects. The software also stepped the user through pre-processing, solving and post-processing. Called Tempest, the software could solve a 400-node geometry on the university’s Perkin Elmer 3205 computer that filled a room despite having only 1 megabyte of random-access memory.

Ayers showed the code to Combustion Engineering and Battelle Laboratories in the United States and both companies bought the source code for a few thousand dollars. Swithenbank and Boysan met with the Sheffield University finance director and legal officer and asked if the university wanted to invest in commercializing the code. Searching for an example to explain the business proposition to non-technical people, they pointed to a building in Sheffield which had been designed with a decorative pool. After the building was completed, the flow of air around the building splashed water onto pedestrians and made it necessary to pave over the pool. Swithenbank and Boysan said that Tempest could calculate the flow around the building and predict such problems in advance. The university officials, however, were alarmed to hear this and envisioned the building collapsing and the university being deluged with litigation. They told the erstwhile entrepreneurs that the university wanted nothing to do with their software.

License with Unlimited Support Puts Fluent on Growth Fast Track

Swithenbank freelanced for a consulting company in New Hampshire called Creare and wrote to the company in late 1982 asking for help in commercializing the software. (Over the years, Creare has proven to be a fertile serial company launcher [1].) The letter was passed to Peter Rundstandler, who circulated it to the partners of the firm to ask if anyone was interested in pursuing it. Everyone answered no except for Bart Patel, who sent a note back to Rundstandler saying “this could be fun.” Ayers installed the code on Creare's Digital Equipment Corporation PDP-11 minicomputer and showed it to Patel, who liked what he saw. Boysan and Ayers formed a company called Boteb. Creare purchased commercial rights to the software from Boteb, offering a 10% royalty on sales with $25,000 guaranteed, and agreed to hire Boteb for at least 1,000 hours of development and support services. Patel felt that the name Tempest sounded too complicated, so he changed it to Fluent to emphasize its ease of use.

"Several key business decisions by the founders in the early years were instrumental in helping distinguish Fluent in the emerging CFD marketplace," said Dr. S. Subbiah, one of the early employees at Fluent and subsequently a member of its executive team. Other developers of fluid dynamics software at the time sold a perpetual license and charged for support by the hour. Patel felt that users would require a lot of support and if they had to pay by the hour, they would use less support than they needed and end up not achieving results. So, he decided to instead sell an annual license that included unlimited support for a fee close to what competitors were charging for a perpetual license. This was a crucial point of distinction and it played a key role in Fluent’s eventual business success.

Another key decision was to bundle all physics models and solvers in Fluent and to offer it for a single annual lease price. At that time, the market leader, CHAM, offered a menu of various modules and solvers -- each at its own price. Customers found it hard to determine what solvers and modules they needed upfront, so they found Fluent's single all-in-one price attractive when considering investing in a new technology.

First Fluent Seminar Results in Sales to 80% of Attending Companies

Patel avoided a head-on assault on CHAM by focusing the initial marketing effort on combustion and particularly gas turbines. He asked Boysan and Ayers to add physical models to handle the movement of entrained droplets and particles and to integrate these models into the interactive user interface which set the new software apart.


Figure 2: Cover of invitation to first Fluent seminar

 


Figure 3: Fluent simulation results from a brochure produced in the early 1980s.

To kick off the marketing effort, in 1983 Patel invited Creare’s clients from leading combustion engineering companies to a seminar (Figure 2). A brochure was prepared on Fluent and distributed to prospective attendees (Figure 3). Patel asked attendees to submit test problems in advance and offered to present solutions during the seminar. Realizing that many of the attendees would be engineers who did not have purchasing authority, he created a video describing the capabilities of the software that attendees could show their managers. About 40 people attended the seminar. The attendees purchased $150,000 worth of software during the seminar and 80% ended up eventually buying Fluent. Patel hired the first employee, Barbara Hutchings, who handled technical support. Hutchings developed a team of customer support engineers that extended the “unlimited technical support” business model to include a sincere focus on doing whatever it took to help the customer become successful with Fluent. This approach helped develop customer loyalty and enabled management to use the support function as the "eyes and ears" of the company to understand where customers were struggling, what projects their management was lining up for them to tackle next, and what competitors were doing.

Fluent used multiple field teams, each focused on selling Fluent to a specific industry segment. These field teams were multi-disciplinary (sales, marketing, customer support and consulting) and they were managed as individual profit centers. "The early team leaders gained a lot of experience in developing a profitable business and many went on to successful management positions at Fluent and elsewhere," Subbiah said.

Boysan and Ayers remained in Sheffield and did most of the development work through the 1980s. When a problem arose, Patel called Boysan, sometimes in the middle of the night, and Boysan got up and tried to figure out what was going wrong. Boteb became the European distributor of Fluent and was eventually purchased by Fluent and re-christened Fluent Europe.

Keith Hanna described his experience in 1989 when he was a young researcher at British Steel PLC in Teesside, and the company was choosing between the then market leading general purpose CFD code, PHOENICS from Cham, STAR-CD from Computational Dynamics in London, and Fluent. Hanna said in his blog [2] : “Even back then Brian [Spalding] was viewed as a colossus in the CFD field, and the British Steel Fluid Flow experts were in awe of him. However, PHOENICS then had a very complex multi-code structure with “planets” and “satellites”, as Brian called them, and much scripting between codes. FLUENT with its integrated geometry engine, mesher, solver and postprocessor had less technical capabilities overall but even back then such a simple thing as ease-of-use and user experience had a big impact on potential users and FLUENT was chosen. PHOENICS only worked in batch mode at the time, whereas we liked the fact that you could stop the FLUENT solver during an iteration and view the flow field!”

University Relationships Supplied Key Talent

Patel early on began offering the software at low rates to universities and establishing relationships with key professors in order to obtain their assistance in recruiting their best students. He focused on hiring multi-talented individuals who had a passion for CFD. Candidates were subjected to interviews lasting a day or more. Key early hires included Dipankar Choudhury from University of Minnesota, who is currently Vice President of Research for ANSYS Inc., Wayne Smith from Cornell University, who led the development of Fluent’s unstructured mesh CFD solver and went on to become Senior Vice President Software Development at the CD-adapco business unit of Siemens PLM Software, and Zahed Sheikh, from the University of Iowa, who led the Fluent sales force in the early years and later went on to be an executive at Flomerics.

From a Structured to an Unstructured Mesh

The original Fluent code was a Cartesian mesh program, which meant that meshes could not be applied to arbitrary computer-aided design (CAD) geometries but rather had to be stair-stepped in areas where the boundary was curved. After a few abortive efforts, Boysan, Sergio Vasquez and Kanchan Kelkar developed a boundary-fitted version of Fluent in the early 1990s.


Figure 4: Product tree showing ANSYS acquisitions in CFD space

Another limitation of early Fluent was that it utilized structured meshes which were labor intensive in terms of mesh generation and not well suited to modeling complex geometries or capturing flow physics efficiently.

"The Fluent founders made many courageous decisions," Subbiah said. “One that sticks in my mind was the decision not to pursue block-structured meshing. In the early 1990s, Fluent was a single block code while competitors were offering multi-block solutions that offered significantly greater flexibility in meshing. Although there was strong market pressure on management to develop multi-block technology in Fluent, management decided to leapfrog them by investing in automated, unstructured technology - which, at that time, was largely unproven. This calculated risk led to Fluent leading the industry with the first release of automated unstructured meshing."

Another Creare employee, Wayne Smith, received Small Business Innovation Research (SBIR) funding from NASA to develop unstructured-mesh CFD software that could adapt during the solution, for example by increasing mesh density in areas with high gradients. After completing the SBIR, Smith and his team, which included Ken Blake and Chris Morley, transferred to Patel’s group within Creare to work on a commercial version of the new software. The results of their work were released in 1991 as the TGrid tetrahedral mesher and Rampant solver which targeted high Mach number compressible flows in aerospace applications.

With Rampant being limited to a relatively narrow range of problems, the original structured mesh Fluent code remained the flagship application through the early 1990s. Several key advances were made on Rampant between 1991 and 1993, however, that were to prove vital to Fluent's future growth. These included the introduction of the client-server Cortex architecture, a domain-decomposition parallel capability and Joe Maruszewski's implementation of a pressure-based control-volume finite element method utilizing algebraic multigrid optimized for solving incompressible flow problems. This version was introduced in the market as Fluent/UNS 1.0 in 1994. Later that year, Jayathi Murthy, who at that time led Fluent's research and development team, and Sanjay Mathur rewrote Fluent/UNS over a matter of weeks, switching over to a more efficient finite-volume formulation well suited for building in methods and physics for the majority of CFD applications. This version of the code was released as Fluent/UNS 3.2 in 1995. Murthy went on to an illustrious academic career and is currently Dean of Engineering at the University of California, Los Angeles. Rampant and Fluent/UNS continued as concurrent codes for several years and were combined into a single code with release 5 in 1998, at which point the original structured mesh Fluent code was discontinued. With Fluent 5, all the major ingredients of a potent CFD modeling capability were together in a single offering - unstructured mesh methods for the entire range of flow regimes, and a client-server architecture with an easy-to-use interactive user interface well suited to run on parallel supercomputers or clusters of the new generation of workstations made by Silicon Graphics, Sun and Hewlett-Packard.

Aavid Thermalloy Provides Capital for Growth

Fluent spun off from Creare in 1991 with Creare retaining a substantial minority stake. Patel talked to several investment banks seeking funding to buy out Creare’s stake and take the company to the next level, but despite Fluent generating a considerable cash flow they were not interested in financing a leveraged buyout. Patel happened to play golf during this period with the CEO of Aavid Thermalloy, a New Hampshire company producing heat sinks for electronics applications. Aavid was also looking for capital, so Fluent merged with Aavid and the combined company issued an Initial Public Offering (IPO) in January 1996 that made it possible to buy out the Creare shareholders and fund the expansion of the business.

The IPO also made it possible to issue stock options to employees and acquire several competitive CFD software companies to obtain their technology and engineering teams. These included Fluid Dynamics International and its FIDAP general purpose CFD software, and Polyflow S.A., whose Polyflow CFD software was designed to handle laminar viscoelastic flows. In January 2000, Willis Stein & Partners, a private equity investment firm, acquired the Aavid Thermal Technologies business unit. Meanwhile, Fluent sales grew from $8 million in 1995 to $100 million in 2004.

Acquisition by ANSYS, Inc.

In May 2006, Fluent Inc. was acquired by ANSYS, Inc., a computer aided engineering software company that up to that point specialized in solid mechanics simulation. A product tree showing ANSYS CFD acquisitions is shown in Figure 4. When ANSYS acquired Fluent, the two companies were roughly equal in revenues and as a result, former Fluent employees had a considerable influence on the operation of the combined company.

Brian Spalding, universally considered to be the father of the CFD industry, best defined the influence of Fluent on the CFD industry. Spalding once said that the company he founded, CHAM, showed the world that fluid dynamics problems could be solved on a computer. He said that Fluent, on the other hand, proved that engineers could use this software to solve real world problems. His statement reaffirms the success of Fluent in achieving its initial goals of providing interactive software combined with strong technical support that enabled engineers to, quoting the original 1983 Fluent brochure, “apply state-of-the-art computer simulation methods to analyze and solve practical design problems without costly, time-consuming computer programming.”



About Us

An ordinary motor engineer!
Once I only wanted to design the best motors; now I repair motor-design tools.
I hope I can help you make sense of electromagnetic concepts, put out project fires, and customize ANSYS Maxwell.

Learn more