Failed Promises

For some time now, many of the most prominent and colorful pages in Mechanical Engineering magazine have been filled by advertisements for computer software. However, there is a difference between the most recent ads and those of just a few years earlier. In 1990, for example, many software developers emphasized the reliability and ease of use of their packages, with one declaring itself the “most reliable way to take the heat, handle the pressure, and cope with the stress” while another promised to provide “trusted solutions to your design challenges.”

More recent advertising copy is a bit more subdued, with fewer implied promises that the software is going to do the work of the engineer—or take the heat or responsibility. The newer message is that the buck stops with the engineer. Software packages might provide “the right tool for the job,” but the engineer works the tool. A sophisticated system might be “the ultimate testing ground for your ideas,” but the ideas are no longer the machine’s, they are the engineer’s. Options may abound in software packages, but the engineer makes a responsible choice. This is as it should be, of course, but things are not always as they should be, and that is no doubt why there have been subtle and sometimes not-so-subtle changes in technical software marketing and its implied promises.

Civil Engineering has also run software advertisements, albeit less prominent and colorful ones. Their messages, explicit or implicit, are more descriptive than promising. Nevertheless, the advertisements also contain few caveats about limitations, pitfalls, or downright errors that might be encountered in using prepackaged, often general-purpose software for a specific engineering design or analysis.

The implied optimism of the software advertisements stands in sharp contrast to the concerns about the use of software that have been expressed with growing frequency in the pages of the same engineering magazines. The American Society of Civil Engineers, publisher of Civil Engineering and a host of technical journals and publications full of theoretical and applied discussions of computers and their uses, has among its many committees one on “guidelines for avoiding failures caused by misuse of civil engineering software.” The committee’s parent organization, the Technical Council on Forensic Engineering, was the sponsor of a cautionary session on computer use at the society’s 1992 annual meeting, and one presenter titled his paper, “Computers in Civil Engineering: A Time Bomb!” In simultaneous sessions at the same meeting, other equally fervid engineers were presenting computer-aided designs and analyses of structures of the future.

There is no doubt that computer-aided design, manufacturing, and engineering have provided benefits to the profession and to humankind. Engineers are attempting and completing more complex and time-consuming analyses that involve many steps (and therefore opportunities for error) and that might not have been considered practicable in slide-rule days. New hardware and software have enabled more ambitious and extensive designs to be realized, including some of the dramatic structures and ingenious machines that characterize the late twentieth century. Today’s automobiles, for example, possess better crashworthiness and passenger protection because of advanced finite-element modeling, in which a complex structure such as a stylish car body is subdivided into more manageable elements, much as we might construct a gracefully curving walkway out of a large number of rectilinear bricks.

For all the achievements made possible by computers, there is growing concern in the engineering-design community that there are numerous pitfalls that can be encountered using software packages. All software begins with some fundamental assumptions that translate to fundamental limitations, but these are not always displayed prominently in advertisements. Indeed, some of the limitations of software might be equally unknown to the vendor and to the customer.
Perhaps the most damaging limitation is that software can be misused or used inappropriately by an inexperienced or overconfident engineer. The surest way to drive home the potential dangers of misplaced reliance on computer software is to recite the incontrovertible evidence of failures of structures, machines, and systems that are attributable to use or misuse of software.

One such incident occurred in the North Sea in August 1991, when the concrete base of a massive Norwegian oil platform, designated Sleipner A, was being tested for leaks and mechanical operation prior to being mated with its deck. The base of the structure consisted of two dozen circular cylindrical reinforced-concrete cells. Some of the cells were to serve as drill shafts, others as storage tanks for oil, and the remainder as ballast tanks to place and hold the platform on the sea bottom. Some of the tanks were being filled with water when the operators heard a loud bang, followed by significant vibrations and the sound of a great amount of running water. After eight minutes of trying to control the water intake, the crew abandoned the structure. About eighteen minutes after the first bang was heard, Sleipner A disappeared into the sea, and forty-five seconds later a seismic event that registered a 3 on the Richter scale was recorded in southern Norway. The event was the massive concrete base striking the sea floor.

An investigation of the structural design of Sleipner A’s base found that the differential pressure on the concrete walls was too great where three cylindrical shells met and left a triangular void open to the full pressure of the sea. It is precisely in the vicinity of such complex geometry that computer-aided analysis can be so helpful, but the geometry must be modeled properly. Investigators found that “unfavorable geometrical shaping of some finite elements in the global analysis … in conjunction with the subsequent post-processing of the analysis results … led to underestimation of the shear forces at the wall supports by some 45%.” (Whether or not due to the underestimation of stresses, inadequate steel reinforcement also contributed to the weakness of the design.) In short, no matter how sound and reliable the software may have been, its improper and incomplete use led to a structure that was inadequate for the loads to which it was subjected.

In its November 1991 issue, the trade journal Offshore Engineer reported that the errors in analysis of Sleipner A “should have been picked up by internal control procedures before construction started.” The investigators also found that “not enough attention was given to the transfer of experience from previous projects.” In particular, trouble with an earlier platform, Statfjord A, which suffered cracking in the same critical area, should have drawn attention to the flawed detail. (A similar neglect of prior experience occurred, of course, just before the fatal Challenger accident, when the importance of previous O-ring problems was minimized.) Prior experience with complex engineering systems is not easily built into general software packages used to design advanced structures and machines. Such experience often does not exist before the software is applied, and it can be gained only by testing the products designed by the software.
A consortium headed by the Netherlands Foundation for the Coordination of Maritime Research once scheduled a series of full-scale collisions between a single- and a double-hulled ship “to test the [predictive] validity of computer modelling analysis and software.” Such drastic measures are necessary because makers and users of software and computer models cannot ignore the sine qua non of sound engineering—broad experience with what happens in and what can go wrong in the real world.

Computer software is being used more and more to design and control large and complex systems, and in these cases it may not be the user who is to blame for accidents. Advanced aircraft such as the F-22 fighter jet employ on-board computers to keep the plane from becoming aerodynamically unstable during maneuvers. When an F-22 crashed during a test flight in 1993, according to a New York Times report, “a senior Air Force official suggested that the F-22’s computer might not have been programmed to deal with the precise circumstances that the plane faced just before it crash-landed.” What the jet was doing, however, was not unusual for a test flight. During an approach about a hundred feet above the runway, the afterburners were turned on to begin an ascent—an expected maneuver for a test pilot—when “the plane’s nose began to bob up and down violently.” The Times reported the Air Force official as saying, “It could have been a computer glitch, but we just don’t know.”

Those closest to questions of software safety and reliability worry a good deal about such “fly by wire” aircraft. They also worry about the growing use of computers to control everything from elevators to medical devices. The concern is not that computers should not control such things, but rather that the design and development of the software must be done with the proper checks and balances and tests to ensure reliability as much as is humanly possible.

A case study that has become increasingly familiar to software designers unfolded during the mid-1980s, when a series of accidents plagued a high-powered medical device, the Therac-25. The Therac-25 was designed by Atomic Energy of Canada Limited (AECL) to accelerate and deliver a beam of electrons at up to 25 mega-electron-volts to destroy tumors embedded in living tissue. By varying the energy level of the electrons, tumors at different depths in the body could be targeted without significantly affecting surrounding healthy tissue, because beams of higher energy delivered the maximum radiation dose deeper in the body and so could pass through the healthy parts. Predecessors of the Therac-25 had lower peak energies and were less compact and versatile. When they were designed in the early 1970s, various protective circuits and mechanical interlocks to monitor radiation prevented patients from receiving an overdose. These earlier machines were later retrofitted with computer control, but the electrical and mechanical safety devices remained in place. Computer control was incorporated into the Therac-25 from the outset. Some safety features that had depended on hardware were replaced with software monitoring.
“This approach,” according to Nancy Leveson, a leading software safety and reliability expert, and a student of hers, Clark Turner, “is becoming more common as companies decide that hardware interlocks and backups are not worth the expense, or they put more faith (perhaps misplaced) on software than on hardware reliability.” Furthermore, when hardware is still employed, it is often controlled by software.

In their extensive investigation of the Therac-25 case, Leveson and Turner recount the device’s accident history, which began in Marietta, Georgia. On June 3, 1985, at the Kennestone Regional Oncology Center, the Therac-25 was being used to provide follow-up radiation treatment for a woman who had undergone a lumpectomy. When she reported being burned, the technician told her it was impossible for the machine to do that, and she was sent home. It was only after a couple of weeks that it became evident the patient had indeed suffered a severe radiation burn. It was later estimated she received perhaps two orders of magnitude more radiation than that normally prescribed. The woman lost her breast and the use of her shoulder and arm, and she suffered great pain.

About three weeks after the incident in Georgia, another woman was undergoing Therac-25 treatment at the Ontario Cancer Foundation for a carcinoma of the cervix when she complained of a burning sensation. Within four months she died of a massive radiation overdose. Four additional cases of overdose occurred, three resulting in death. Two of these were at the Yakima Valley Memorial Hospital in Washington, in 1985 and 1987, and two at the East Texas Cancer Center, in Tyler, in March and April 1986. These latter cases are the subject of the title tale of a collection of horror stories on design, technology, and human error, Set Phasers on Stun, by Steven Casey.

Leveson and Turner relate the details of each of the six Therac-25 cases, including the slow and sometimes less-than-forthright process whereby the most likely cause of the overdoses was uncovered. They point out that “concluding that an accident was the result of human error is not very helpful and meaningful,” and they provide an extensive analysis of the problems with the software controlling the machine. According to Leveson and Turner, “Virtually all complex software can be made to behave in an unexpected fashion under certain conditions,” and this is what appears to have happened with the Therac-25. Although they admit that to the day of their writing “some unanswered questions” remained, Leveson and Turner report in considerable detail what appears to have been a common feature in the Therac-25 accidents.

The parameters for each patient’s prescribed treatment were entered at the computer keyboard and displayed on the screen before the operator. There were two fundamental modes of treatment, X ray (employing the machine’s full 25 mega-electron-volts) and the relatively low-power electron beam. The first was designated by typing in an “x” and the latter by an “e.” Occasionally, and evidently in at least some if not all of the accident cases, the Therac operator mistyped an “x” for an “e,” but noticed the error before triggering the beam. An “edit” of the input data was performed by using the “arrow up” key to move the cursor to the incorrect entry, changing it, and then returning to the bottom of the screen, where a “beam ready” message was the operator’s signal to enter an instruction to proceed, administering the radiation dose.
Unfortunately, in some cases the editing was done so quickly by the fast-typing operators that not all of the machine’s functions were properly reset before the treatment was triggered. Exactly how much overdose was administered, and thus whether it was fatal, depended upon the installation, since “the number of pulses delivered in the 0.3 second that elapsed before interlock shutoff varied because the software adjusted the start-up pulse-repetition frequency to very different values on different machines.”
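
The hazard Leveson and Turner describe can be sketched, in heavily simplified form, as a race between a fast edit and a slow setup task. The toy program below is emphatically not the Therac-25 code, whose internals were never published; every name, value, and timing in it is invented purely to illustrate the general shape of the failure.

    import threading, time

    # Toy illustration only: a slow setup task works from a snapshot of the
    # entered parameters, so an edit made while it is still running is lost.
    params = {"mode": "x"}            # operator mistypes "x" (full-power mode)
    configured = []

    def setup_beam():
        snapshot = dict(params)       # setup uses the values seen right now
        time.sleep(1.0)               # long hardware setup (magnets, tables, ...)
        configured.append(snapshot)   # edits made during the sleep are ignored

    t = threading.Thread(target=setup_beam)
    t.start()
    time.sleep(0.1)
    params["mode"] = "e"              # quick correction to the low-power mode
    t.join()

    print("screen shows:", params["mode"], "| beam set up for:", configured[0]["mode"])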

Anomalous, eccentric, sometimes downright bizarre, and always unexpected behavior of computers and their software is what ties together the horror stories that appear in each issue of Software Engineering Notes, an “informal newsletter” published quarterly by the Association for Computing Machinery. Peter G. Neumann, chairman of the ACM Committee on Computers and Public Policy, is the moderator of the newsletter’s regular department, “Risks to the Public in Computers and Related Systems,” in which contributors pass on reports of computer errors and glitches in applications ranging from health care systems to automatic teller machines. Neumann also writes a regular column, “Inside Risks,” for the magazine Communications of the ACM, in which he discusses some of the more generic problems with computers and software that prompt the many horror tales that get reported in newspapers, magazines, and professional journals and on electronic bulletin boards.

Unfortunately, a considerable amount of the software involved in computer-related failures and malfunctions reported in such forums is produced anonymously, packaged in a black box, and poorly documented. The Therac-25 software, for example, was designed by a programmer or programmers about whom no information was forthcoming, even during a lawsuit brought against AECL. Engineers and others who use such software might reflect upon how contrary to normal scientific and engineering practice its use can be. Responsible engineers and scientists approach new software, like a new theory, with healthy skepticism. Increasingly often, however, there is no such skepticism when the most complicated of software is employed to solve the most complex problems.

No software can ever be proven with absolute certainty to be totally error-free, and thus its design, construction, and use should be approached as cautiously as that of any major structure, machine, or system upon which human lives depend. Although the reputation and track record of software producers and their packages can be relied upon to a reasonable extent, good engineering involves checking them out. If the black box cannot be opened, a good deal of confidence in it and understanding of its operation can be inferred by testing. The proof tests to which software is subjected should involve the simple and ordinary as well as the complex and bizarre. A lot more might be learned about a finite-element package, for example, by solving a problem whose solution is already known rather than by solving one whose answer is unknown. In the former case, something might be inferred about the limitations of the black box; in the latter, the output from the black box might bedazzle rather than enlighten.

In the final analysis it is the proper attention to detail—in the human designer’s mind as well as in the computer software—that causes the most complex and powerful applications to work properly. A fundamental activity of engineering and science is making promises in the form of designs and theories, so it is not fair to discredit computer software solely on the basis that it promises to be a reliable and versatile problem-solving tool or trusted machine operator. Nevertheless, users should approach all software with prudent caution and healthy skepticism, for the history of science and engineering, including the still-young history of software engineering, is littered with failed promises.


n5321 | June 19, 2025 07:03

Diss CAE


I started my career doing FE modeling and analysis with ANSYS and NASTRAN. Sometimes I miss these days. Thinking about how to simplify a real world problem so far that it is solvable with the computational means available was always fun. Then pushing quads around for hours until the mesh was good had an almost meditative effect. But I don't feel overwhelmingly eager to learn a new software or language.

Much to my surprise, it seems there hasn't been much movement there. ANSYS still seems to be the leader for general simulation and multi-physics. NASTRAN still popular. Still no viable open-source solution.

The only new player seems to be COMSOL. Has anyone experience with it? Would it be worth a try for someone who knows ANSYS and NASTRAN well?




I've used ansys daily for over a decade, and the only movement is in how they name their license tiers. It's a slow muddy death march. Every year I'm fighting the software more and more; the salesmen are clearly at the wheel.

They buy "vertical aligned" software, integrate it, then slowly let it die. They just announced they're killing off one of these next year, that they bought ten years ago, because they want to push a competitive product with 20% of the features.

I've been using nastran for half as long but it isn't much better. It's all sales.

I dabbled a bit in abaqus; that seems nice. Probably cause I just dabbled in it.

But here I'm just trying to do my work, and all these companies do is move capabilities around their license tiers and boil the frog as fast as they get away with.


I've gone Abaqus > Ansys > Abaqus/LS-DYNA over my career and hate Ansys with a fiery passion. It's the easiest one to run your first model in, but when you start applying it to real problems it's a fully adversarial relationship. The fact that you have to make a complete copy of the geometry/mesh to a new Workbench "block" to run a slightly different load case (and you can't read in orphaned results files) is just horrible.

Abaqus is more difficult to get up to speed in, but it's really nice from an advanced usability standpoint. They struggle due to cost though; it is hugely expensive and we've had to fight hard to keep it time and time again.

LS-Dyna is similar to Abaqus (though I'm not fully up in it yet), but we're all just waiting to see how Ansys ruins it, especially now that they got bought out by Synopsys.


I don't know how long ago you used ansys, and I definitely don't want to sell it, but you can share geometry/mesh between those "blocks" (by dragging blocks on top of each other), and you can read in orphaned result files.


> Still no viable open-source solution.

For the more low-level stuff there's the FEniCS project[1], for solving PDEs using fairly straightforward Python code like this[2]. When I say fairly straightforward, I mean it follows the math pretty closely; it's not exactly high-school level stuff.

[1]: https://fenicsproject.org/

[2]: https://jsdokken.com/dolfinx-tutorial/chapter2/linearelastic...
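
For a sense of how closely the code tracks the math, here is a minimal Poisson sketch in the older legacy-FEniCS (dolfin) syntax; the linked tutorial uses the newer DOLFINx API, which differs in detail, so treat this as illustrative rather than copy-paste ready.

    # Minimal Poisson demo, legacy FEniCS (dolfin) syntax.
    # Solves -laplace(u) = -6 with u = 1 + x^2 + 2 y^2 on the boundary;
    # that expression is also the exact solution on the unit square.
    from fenics import *
    import numpy as np

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "P", 1)

    u_D = Expression("1 + x[0]*x[0] + 2*x[1]*x[1]", degree=2)
    bc = DirichletBC(V, u_D, "on_boundary")

    u, v = TrialFunction(V), TestFunction(V)
    f = Constant(-6.0)
    a = dot(grad(u), grad(v)) * dx      # bilinear form, straight from the weak form
    L = f * v * dx                      # linear form

    u = Function(V)
    solve(a == L, u, bc)

    # Compare with the exact solution at the mesh vertices.
    err = np.max(np.abs(u_D.compute_vertex_values(mesh) - u.compute_vertex_values(mesh)))
    print("max vertex error:", err)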


Interesting. Please bear with me as this is going off 25 year old memories, but my memory is that the workflow for using FEA tools was: Model in some 3D modelling engineering tool (e.g. SolidWorks), ansys to run FEA, iterate if needed, prototype, iterate.

So to have anything useful, you need that entire pipeline? For hobbyists, I assume we need this stack. What are the popular modelling tools?


To get started with Fenics you can maybe use the FEATool GUI, which makes it easier to set up FEA models, and also export Python simulation scripts to learn or modify the Fenics syntax [1].

[1]: https://www.featool.com/tutorial/2017/06/16/Python-Multiphys...


Yeah not my domain so wouldn't really know. For FEniCS I know Gmsh[1] was used. There's some work[2][3] been done to integrate FEniCS with FreeCAD. It seems FreeCAD also supports[4] other FEM solvers.

But, I guess you get what you pay for in this space still.

[1]: https://gmsh.info/

[2]: https://github.com/qingfengxia/Cfd

[3]: https://github.com/qingfengxia/FenicsSolver

[4]: https://wiki.freecad.org/FEM_Solver


You can export other CAD meshes for use in it


> For hobbyists, I assume we need this stack.

Just curious what kind of hobby leads to a finite element analysis?


Electronics (when you start to care about EMI or antenna design), model airplanes (for aerodynamics), rocketry, machining (especially if you want to get into SPIF), robotics, 3-D printing (especially for topology optimization), basically anything that deals with designing solid structures in the physical world. Also, computer graphics, including video games.

Unfortunately the barrier to entry is too high for most hobbyists in these fields to use FEM right now.


There are some obvious downsides and exceptions to this sentiment, but on balance, I really appreciate how the expansive access to information via the internet has fostered this phenomenon: where an unremarkable fella with a dusty media studies degree, a well-equipped garage, and probably too much free time can engineer and construct robotic machines, implement/tweak machine vision mechanisms, microwave radio transceivers, nanometer-scale measurements using laser diodes and optical interferometry, deep-sky astrophotography, etc., etc.. Of course, with burgeoning curiosity and expanding access to surplus university science lab equipment, comes armchair experts and the potential for insufferability[0]. It’s crucial to maintain perspective and be mindful of just how little any one person (especially a person with a media studies degree) can possibly know.

[0] I’m pretty sure “insufferability” isn’t a real word. [Edit: don’t use an asterisk for footnotes.]


> comes armchair experts and the potential for insufferability

Hey, I resemble that remark! I'd be maybe a little less armchair with more surplus equipment access, but maybe no less insufferable.

By all accounts, though, a degree of insufferability is no bar to doing worthwhile work; Socrates, Galileo, Newton, Babbage, and Heaviside were all apparently quite insufferable, perhaps as much so as that homeless guy who yells at you about adrenochrome when you walk by his park encampment. (Don't fall into the trap of thinking it's an advantage, though.) Getting sidetracked by trivialities and delusions is a greater risk. Most people spend their whole lives on it.

As for how little any person can know, you can certainly know more than anyone who lived a century ago: more than Einstein, more than Edison, more than Noether, more than Tesla, more than Gauss. Any one of the hobbies you named will put you in contact with information they never had, and you can draw on a century or more of academic literature they didn't have, thanks to Libgen and Sci-Hub (and thus Bitcoin).

And it's easy to know more than an average doctorate holder; all you have to do is study, but not forget everything you study the way university students do, and not fall into traps like ancient aliens and the like. I mean, you can still do good work if you believe in ancient aliens (Newton and Tesla certainly believed dumber things) but probably not good archeological work.

Don't be discouraged by prejudice against autodidacts. Lagrange, Heaviside, and du Châtelet were autodidacts, and Ptolemy seems to have been as well. And they didn't even have Wikipedia or Debian! Nobody gets a Nobel for passing a lot of exams.


IMO, the mathematics underlying finite element methods and related subjects — finite element exterior calculus comes immediately to mind — are interesting enough to constitute a hobby in their own right.


FEniCS is mostly used by academic researchers. I used it for FEM modelling in magnetics, e.g. for the sorts of problems we wanted to solve that you can't do in a commercial package.


COMSOL's big advantage is that it ties a lot of different physics regimes together and makes it very easy to couple different physics. Want to do coupled structures/fluid? Or coupled electromagnetism/mechanical? It's probably the easiest one to use.

Each individual physics regime is not particularly good on its own - there are far better mechanical, CFD, electromagnetism, etc solvers out there - but they're all made by different vendors and don't play nicely with each other.


> The only new player seems to be COMSOL

Ouch. I kind of know Comsol because it was already taught in my engineering school 15 years ago, so that it still counts as a “new entrant” really gives an idea of how slow the field evolves.


The COMSOL company was started in 1986....


It used to be called FEMLAB :)

But they changed to COMSOL because they didn't have the trademark in Japan and FEM also gave associations to the feminine gender.


I am hoping this open source FEM library will catch on: https://www.dealii.org/. The deal in deal.II stands for Differential Equation Analysis Library.

It's written in C++, makes heavy use of templates, and has been in development since 2000. It's not meant for solid mechanics or fluid mechanics specifically, but for FEM solutions of general PDEs.

The documentation is vast, the examples are numerous and the library interfaces with other libraries like Petsc, Trilinos etc. You can output results to a variety of formats.

I believe support for triangle and tetrahedral elements has been added only recently. In spite of this, one quirk of the library is that meshes are called "triangulations".


I've worked with COMSOL (I have a smaller amount of ANSYS experience to compare to). For the most part I preferred COMSOL's UI and workflow and leveraged a lot of COMSOL's scripting capabilities which was handy for a big but procedural geometry I had (I don't know ANSYS's capabilities for that). They of course largely do the same stuff. If you have easy access to COMSOL to try it out I'd recommend it just for the experience. I've found sometimes working with other tools make me recognize some capabilities or technique that hadn't clicked for me yet.


Once you have a mesh that's "good enough", you can use any number of numeric solvers. COMSOL has a very good mesher, and a competent geometry editor. It's scriptable, and their solvers are also very good.

There might be better programs for some problems, but COMSOL is quite nice.


OpenFOAM seems like an open-source option but I have found it rather impenetrable - there are some YouTube videos and PDF tutorials, but they are quite dense and specific and don't seem to cover the entire pipeline.

Happy to hear if people have good resources!


> Still no viable open-source solution.

Wait? What? NASTRAN was originally developed by NASA and open sourced over two decades ago. Is this commercial software built on top that is closed source?

I’m astonished ANSYS and NASTRAN are still the only players in town. I remember using NASTRAN 20 years ago for FE of structures while doing aero engineering. And even then NASTRAN was almost 40 years old and ancient.


There's a bunch of open source FEM solvers, e.g. CalculiX, Code_Aster, OpenRadioss, and probably a few unmaintained forks of (NASA) NASTRAN, but I don't think there's a multiphysics package.


These are at least capable of thermomechanical analysis with fluid-structure coupling. Not all-physics, but still multi. True that things like multi-species diffusion or electromagnetics are missing, but maybe Elmer can fill the gap.


Abaqus is pretty big too. I've worked with both Ansys and Abaqus and I generally prefer the latter.


Abaqus is up there with Ansys as well, as others have mentioned.


As a recovering fe modeler, I understand completely.


I work in this field and it really is stagnant and dominated by high-priced Ansys etc. For some reason Silicon Valley's open-source culture hasn't touched it. For open source, there's CalculiX, which is full of bugs, and Code_Aster, which everybody I've heard from says is too confusing to use. CalculiX has PrePoMax as a fairly new and popular pre/post.






n5321 | June 15, 2025 23:43

Diss: Eighty Years of the Finite Element Method (2022)

Eighty Years of the Finite Element Method (2022) (springer.com)
203 points by sandwichsphinx 7 months ago | 102 comments



I've been a full-time FEM Analyst for 15 years now. It's generally a nice article, though in my opinion it paints a far rosier picture of the last couple decades than is warranted.

Actual, practical use of FEM has been stagnant for quite some time. There have been some nice stability improvements to the numerical algorithms that make highly nonlinear problems a little easier; solvers are more optimized; and hardware is of course dramatically more capable (flash storage has been a godsend).

Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems. They have some nice results on the world's simplest "laboratory" problem, but accuracy is abysmal on most real-world problems - e.g. it might give good results on a cylinder in simple tension, but fails horribly when adding bending.

There's still nothing better, but looking back I'm pretty surprised I'm still basically doing things the same way I was as an Engineer 1; and not for lack of trying. I've been on countless development projects that seem promising but just won't validate in the real world.

Industry focus has been far more on Verification and Validation (ASME V&V 10/20/40) which has done a lot to point out the various pitfalls and limitations. Academic research and the software vendors haven't been particularly keen to revisit the supposedly "solved" problems we're finding.


I'm a mechanical engineer, and I've been wanting to better understand the computational side of the tools I use every day. Do you have any recommendations for learning resources if one wanted to "relearn" FEA from a computer science perspective?


I learned it for the first time from this[0] course; part of the course covers deal.ii[1] where you program the stuff you're learning in C++.

[0]: https://open.umich.edu/find/open-educational-resources/engin...

[1]: https://www.dealii.org/


Start with FDM. Solve Bernoulli deflection of a beam
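
A minimal sketch of that suggestion, assuming a simply supported beam under uniform load so the bending moment is known from statics and only w'' has to be discretized; all numbers are made-up illustration values.

    import numpy as np

    # Finite-difference sketch: simply supported Euler-Bernoulli beam with
    # uniform load q. Statics gives M(x) = q x (L - x) / 2, and the deflection
    # follows from EI w''(x) = M(x) with w(0) = w(L) = 0.
    L, EI, q = 2.0, 1.0e4, 1.0e3        # length [m], stiffness [N m^2], load [N/m]
    n = 101
    x = np.linspace(0.0, L, n)
    h = x[1] - x[0]
    M = q * x * (L - x) / 2.0           # bending moment

    # Tridiagonal central-difference operator for w'' on the interior nodes.
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)        # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)     # super-diagonal
    A /= h * h

    w = np.zeros(n)
    w[1:-1] = np.linalg.solve(A, M[1:-1] / EI)

    # Compare midspan deflection with the textbook value 5 q L^4 / (384 EI).
    print(w[n // 2], -5.0 * q * L**4 / (384.0 * EI))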


Have a look at FEniCs to start with.


>Basically every advanced/"next generation" thing the article touts has fallen flat on its face when applied to real problems

Even Arnold's work? FEEC seemed quite promising last time I was reading about it, but never seemed to get much traction in the wider FEM world.


I kind of thought Neural Operators were slotting into some of the problem domains where FEM is used (based on recent work in weather modelling, cloth modelling, etc.) and thought there was some sort of FEM -> NO lineage. Did I completely misunderstand that whole thing?


Those are definitely up next in the flashy-new-thing pipeline and I'm not that up to speed on them yet.

Another group within my company is evaluating them right now and the early results seem to be "not very accurate, but directionally correct and very fast", so there may be some value in non-FEM experts using them to quickly tell whether A or B is a better design; but it will still need a more proper analysis in more accurate tools.

It's still early though and we're just starting to see the first non-research solvers hitting the market.


Very curious, we are getting good results with PiNN and operators, what's your domain?


I was under the impression that the linear systems that come out of FEM methods are in some cases being solved by neural networks (or partially, e.g. as a preconditioner in an iterative scheme), but I don't know the details.


Stagnant for the last 15 years??? Contact elements, bolt preload, modeling individual composite fibers, delamination and progressive ply failure, modeling layers of material to a few thousandths of an inch. Design optimization. ANSYS Workbench = FEA For Dummies. The list goes on.


Have you heard of physics informed neural nets?

It seems like a hot candidate to potentially yield better results in the future


Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere? The article would start with the simplest possible method, something that could be implemented in short C program, and would continue with a progressively more accurate and complex methods.


If you are interested in this, I'd recommend following an openfoam tutorial, c++ though.

You could do SWE with finite elements, but generally finite volumes would be your choice to handle any potential discontinuities, and they are more stable and accurate for practical problems.

Here is a tutorial. https://www.tfd.chalmers.se/~hani/kurser/OS_CFD_2010/johanPi...
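
To make the finite-volume suggestion concrete without touching the spherical-mesh question, here is a flat 1-D dam-break sketch using a first-order Lax-Friedrichs update; the grid size, end time, and initial depths are arbitrary illustration values.

    import numpy as np

    # 1-D shallow water, dam break, first-order Lax-Friedrichs finite volumes.
    # Crude and diffusive, but it captures the bore without blowing up, which
    # is the point about finite volumes and discontinuities.
    g = 9.81
    nx = 400
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]

    h = np.where(x < 0.5, 2.0, 1.0)      # initial depth: dam at x = 0.5
    hu = np.zeros(nx)                    # fluid initially at rest

    def flux(h, hu):
        u = hu / h
        return hu, hu * u + 0.5 * g * h * h

    t, t_end = 0.0, 0.05
    while t < t_end:
        u = hu / h
        dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))   # CFL condition
        f_h, f_hu = flux(h, hu)
        h_new, hu_new = h.copy(), hu.copy()                  # copy = outflow BCs
        h_new[1:-1] = 0.5 * (h[:-2] + h[2:]) - dt / (2 * dx) * (f_h[2:] - f_h[:-2])
        hu_new[1:-1] = 0.5 * (hu[:-2] + hu[2:]) - dt / (2 * dx) * (f_hu[2:] - f_hu[:-2])
        h, hu = h_new, hu_new
        t += dt

    print(h.min(), h.max())   # depth stays between the two initial states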


I'm looking for something like this, but more advanced. The common problem with such tutorials is that they stop with the simplest geometry (square) and the simplest finite difference method.

What's unclear to me is how to model the spherical geometry without exploding the complexity of the solution. I know that a fully custom mesh with a pile of formulas for something like the Laplace-Beltrami operator would work, but I want something more elegant than this. For example, can I use the Fibonacci spiral to generate a uniform spherical mesh, and then somehow compute gradients and the Laplacian?

I suspect that the stability of FE or FV methods is rooted in the fact that the FE functions slightly overlap, so computing the next step is a lot like using an implicit FD scheme, or better, a variation of the compact FD scheme. However I'm interested in how an adept in the field would solve this problem in practice. Again, I'm aware that there are methods of solving such systems (Jacobi, etc.), but those make the solution 10x more complex, buggier and slower.


Interesting that this reads almost like a ChatGPT prompt.


Lazy people have been lazy forever. I stumbled across an example of this the other day from the 1990s, I think, and was shocked how much the student emails sounded like LLM prompts: https://www.chiark.greenend.org.uk/~martinh/poems/questions....


At least those had some basic politeness. So often I'm blown away not only how people blithely write "I NEED HELP, GIMME XYZ NOW NERDS" but especially how everyone is just falling over themselves to actually help! WTF?

Basic politeness is absolutely dead; nobody has any concept of acknowledging they are asking for a favour; we just blast Instagram/TikTok reels at top volume and smoke next to children and the elderly in packed public spaces, etc. I'm 100% sure it's not rose-tinted memories of the 90s making me think this; it wasn't always like this...


It reminds me of the old joke that half of the students are below average…


Except in Lake Wobegon, where all of the children are above average


But that's not true, unless by "average" you mean the median.


Normally, it's all the same.


Only if the distribution has zero skewness.

Unless "normally" you mean the normal distribution, which indeed has zero skewness.


Yes, it was an admittedly bad pun.


> Could you write a blogpost-style article on how to model the shallow water wave equation on a sphere?

Typically, the Finite Volume Method is used for fluid flow problems. It is possible to use Finite Element Methods, but it is rare.


"As an AI language model, I am happy to comply with your request ( https://chatgpt.com/share/6727b644-b2e0-800b-b613-322072d9d3... ), but good luck finding a data set to verify it, LOL."


During my industrial PhD, I created an Object-Oriented Programming (OOP) framework for Large Scale Air-Pollution (LSAP) simulations.

The OOP framework I created was based on Petrov-Galerkin FEM. (Both proper 2D and "layered" 3D.)

Before my PhD work, the people I worked with (worked for) used spectral methods and Alternate-direction FEM (i.e. using 1D to approximate 2D.)

In some conferences and interviews certain scientists would tell me that programming FEM is easy (for LSAP). I would always kind of agree and ask how many times they had done it (for LSAP or anything else). I never got an answer from those scientists...

Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.


> Applying FEM to real-life problems can involve the resolving of quite a lot of "little" practical and theoretical gotchas, bugs, etc.

FEM at its core ends up being just a technique to find approximate solutions to problems expressed with partial differential equations.

Finding solutions to practical problems that satisfy both the boundary conditions and the domain geometry is practically impossible with analytical methods. FEM trades off correctness for an approximation that can be exact at prescribed boundary conditions but is an approximation in both how domains are expressed and in the solution, and it has nice properties such as the approximation errors converging to the exact solution as the approximation is refined. This means exponentially larger computational budgets.
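
For the model Poisson problem the chain from strong form to linear system is compact enough to write out (standard textbook material, nothing specific to the parent's LSAP work):

    \text{Strong form: } -\nabla^2 u = f \text{ in } \Omega, \qquad u = 0 \text{ on } \partial\Omega.

    \text{Weak form: find } u \in H^1_0(\Omega) \text{ such that }
    \int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f \, v \, dx
    \quad \forall\, v \in H^1_0(\Omega).

    \text{Galerkin: with } u_h = \textstyle\sum_j u_j \varphi_j \text{ this becomes } K u = F, \qquad
    K_{ij} = \int_\Omega \nabla \varphi_i \cdot \nabla \varphi_j \, dx, \qquad
    F_i = \int_\Omega f \, \varphi_i \, dx.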


I also studied FEM in undergrad and grad school. There's something very satisfying about breaking an intractably difficult real-world problem up into finite chunks of simplified, simulated reality and getting a useful, albeit explicitly imperfect, answer out of the other end. I find myself thinking about this approach often.


A 45 comment thread at the time https://news.ycombinator.com/item?id=33480799


Predicting how things evolve in space-time is a fundamental need. Finite element methods deserve the glory of a place at the top of the HN list. I opted for "orthogonal collocation" as the method of choice for my model back in the day because it was faster and more fitting to the problem at hand. A couple of my fellow researchers did use FEM. It was all the rage in the 90s for sure.


From "Chaos researchers can now predict perilous points of no return" (2022) https://news.ycombinator.com/item?id=32862414 :

FEM: Finite Element Method: https://en.wikipedia.org/wiki/Finite_element_method

>> FEM: Finite Element Method (for ~solving coupled PDEs (Partial Differential Equations))

>> FEA: Finite Element Analysis (applied FEM)

awesome-mecheng > Finite Element Analysis: https://github.com/m2n037/awesome-mecheng#fea

And also, "Learning quantum Hamiltonians at any temperature in polynomial time" (2024) https://arxiv.org/abs/2310.02243 re: the "relaxation technique" .. https://news.ycombinator.com/item?id=40396171


Interesting perspective. I just attended an academic conference on isogeometric analysis (IGA), which is briefly mentioned in this article. Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community. IGA has a lot of potential to solve many of the pain points of FEM. It has better convergence rates in general, allows for better timesteps in explicit solvers, has better methods to ensure stability in, e.g., incompressible solids, and perhaps most exciting, enables an immersed approach, where the problem of meshing is all but gone as the geometry is just immersed in a background grid that is easy to mesh. There is still a lot to be done to drive adoption in industry, but this is likely the future of FEM.


> IGA has a lot of potential to solve many of the pain points of FEM.

Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

If I recall correctly convergence rates are exactly the same, but the whole approach fails to realize that, other than boundaries, geometry and the fields of quantities of interest do not have the same spatial distributions.

IGA has been around for ages, and never materialized beyond the "let's reuse the CAD functions" trick, which ends up making the problem more complex without any tangible return when compared with plain old p-refinement. What is left in terms of potential?

> Tom Hughes, who is mentioned several times, is now the de facto leader of the IGA research community.

I recall the name Tom Hughes. I have his FEM book and he's been for years (decades) the only one pushing the concept. The reason being that the whole computational mechanics community looked at it, found it interesting, but ultimately decided it wasn't worth the trouble. There are far more interesting and promising ideas in FEM than using splines to build elements.


> Isn't IGA's shtick just replacing classical shape functions with the splines used to specify the geometry?

That's how it started, yes. The splines used to specify the geometry are trimmed surfaces, and IGA has expanded from there to the use of splines generally as the shape functions, as well as trimming of volumes, etc. This use of smooth splines as shape functions improves the accuracy per degree of freedom.

> If I recall correctly convergence rates are exactly the same

Okay, looks like I remembered wrong here. What we do definitely see is that in IGA you get the convergence rates of higher degrees without drastically increasing your degree of freedom, meaning that there is better accuracy per degree of freedom for any degree above 1. See for example Figures 16 and 18 in this paper: https://www.researchgate.net/profile/Laurens-Coox/publicatio...

> geometry and the fields of quantities of interest do not have the same spatial distributions.

Using the same shape functions doesn't automatically mean that they will have the same spatial distributions. In fact, with hierarchical refinement in splines you can refine the geometry and any single field of interest separately.

> What is left in terms of potential?

The biggest potential other than higher accuracy per degree of freedom is perhaps trimming. In FEM, trimming your shape functions makes the solution unusable. In IGA, you can immerse your model in a "brick" of smooth spline shape functions, trim off the region outside, and run the simulation while still getting optimal convergence properties. This effectively means little to no meshing required. For a company that is readying this for use in industry, take a look at https://coreform.com/ (disclosure, I used to be a software developer there).


I took a course in undergrad, and was exposed to it in grad school again, and for the life of me I still don't understand the derivations, either Galerkin or variational.


I learned from the structural engineering perspective. What are you struggling with? In my mind I have this logic flow: 1. strong form pde; 2. weak form; 3. discretized weak form; 4. compute integrals (numerically) over each element; 5. assemble the linear system; 6. solve the linear system.
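
A minimal sketch of steps 3-6 of that flow for the model 1-D problem -u'' = 1 on (0, 1) with u(0) = u(1) = 0, using linear elements (steps 1-2, the strong and weak forms, are the usual textbook ones):

    import numpy as np

    # 1-D FEM for -u''(x) = 1 on (0, 1), u(0) = u(1) = 0, linear elements.
    # Exact solution: u(x) = x (1 - x) / 2, so u(0.5) = 0.125.
    n_el = 20
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    K = np.zeros((n_el + 1, n_el + 1))
    F = np.zeros(n_el + 1)

    for e in range(n_el):               # steps 4-5: element integrals + assembly
        h = nodes[e + 1] - nodes[e]
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
        fe = (h / 2.0) * np.array([1.0, 1.0])                   # element load (f = 1)
        dofs = [e, e + 1]
        K[np.ix_(dofs, dofs)] += ke
        F[dofs] += fe

    # Dirichlet BCs u(0) = u(1) = 0: keep the interior unknowns only.
    free = np.arange(1, n_el)
    u = np.zeros(n_el + 1)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])   # step 6

    print(u[n_el // 2])                 # ~0.125 at x = 0.5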


Luckily the integrals of step 4 are already worked out in text books and research papers for all the problems people commonly use FEA for so you can almost always skip 1. 2. and 3.


Do you have any textbook recommendations for the structural engineering perspective?


For anyone interested in a contemporary implementation, SELF is a spectral element library in object-oriented fortran [1]. The devs here at Fluid Numerics have upcoming benchmarks on our MI300A system and other cool hardware.

[1] https://github.com/FluidNumerics/SELF


I have such a fondness for FEA. ANSYS and COSMOS were the ones I used, and I’ve written toy modelers and solvers (one for my HP 48g) and even tinkered with using GPUs for getting answers faster (back in the early 2000s).

Unfortunately my experience is that FEA is a blunt instrument with narrow practical applications. Where it’s needed, it is absolutely fantastic. Where it’s used when it isn’t needed, it’s quite the albatross.


My hot take is that FEM is best used as unit testing for machine design, not as the guide towards design that it's often used as. The greatest mechanical engineer I know once designed an entire mechanical wrist arm with five fingers, actuation, lots of parts, and flexible finger tendons. He never used FEM at any part of his design. He instead did it the old-fashioned way: design and fab a simple prototype, get a feel for it, use the tolerances you discovered in the next prototype, and just keep iterating quickly. If I had gone to him, told him to model the flexors of his fingers in FEM, and then given him a book on how to correctly use the FEM software so that he didn't get nonsensical results, I would have slowed him down if anything. Just build and you learn the tolerances, and the skill is in building many cheap prototypes to get the best idea of what the final expensive build will look like.


> The greatest mechanical engineer I know, [...]

And with that you wrote the best reply to your own comment. Great programmers of the past wrote amazing systems just in assembly. But you needed to be a great programmer just to get anything done at all.

Nowadays dunces like me can write reasonable software in high level languages with plenty of libraries. That's progress.

Similar for mechanical engineering.

(Doing prototypes etc might still be a good idea, of course. My argument is mainly that what works for the best engineers doesn't necessarily work for the masses.)


Also, that might work for a mechanical arm the size of an arm, but not for one the size of the Eiffel Tower.


The Eiffel Tower was built before FEM existed. In fact, I doubt they even did FEM-like calculations.


This is true, although it was notable as an early application of Euler-Bernoulli beam theory in structural engineering, which helped to prove the usefulness of that method.


I meant a mechanical arm the size of the Eiffel Tower. You don't want to iterate on physical products at that size.


Going by Boeing vs. SpaceX, iteration seems to be the most effective approach to building robotic physical products the size of the Eiffel Tower.


I'm sure they are doing plenty of calculations beforehand, too.


Unquestionably! Using FEM.


Would FEM be useful for that kind of problem? It's more for figuring out whether your structure will take the load, where stress concentrations are, and what happens with thermal expansion. FEM won't do much for figuring out what the tolerances need to be on intricate mechanisms.


To be fair, FEM is not the right tool for mechanical linkage design (if anything, you'd use rigid body dynamics).

FEM is the tool you'd use to tell when and where the mechanical linkage assembly will break.


Garbage in garbage out. If you don't fully understand the model, then small parameter changes can create wildly different results. It's always good to go back to fundamentals and hand check a simplification to get a feel for how it should behave.


If he were designing a bridge, however ...


It's wrong to assume that everyone and every project can use an iterative method with endless prototypes. If you do, I have a prototype bridge to sell you.


Good luck designing crash-resilient structures without simulating them in FEM-based software, though.


The FEM model is just a model of the crash-resistant structure. Hopefully it will behave like the actual structure, but that is not guaranteed. We use FEM because it is faster and cheaper than doing the tests on the actual thing. However, if you had the time and money to do your crash-resiliency tests on the actual product during the development phase, I expect the results would be much better.


Yes, with infinite time and budget you'd get much better results. That does not sound like an interesting proposition, though.


I'd guess most of the bridges in the US were built before FEM existed.


Anyone can design a bridge that holds up. The Romans did it millennia ago.

Engineering is designing a bridge that holds up to a certain load, with the least amount of material and/or cost. FEM gives you tighter bounds on that.


The average age of a bridge in the US is about 40-50 years, and the title of the article says "80 years of FEM".

https://www.infrastructurereportcard.org/wp-content/uploads/...

I'd posit a large fraction were designed with FEM.


FEM runs on the same math and theories those bridges were designed with on paper.


They did just fine without such tools for the majority of innovation in the last century.


Having worked on the design of safety structures with mechanical engineers for a few projects, it is far, far cheaper to do a simulation and iterate over designs and situations than do that in a lab or work it out by hand. The type of stuff you can do on paper without FEM tends to be significantly oversimplified.

It doesn't replace things like actual tests, but it makes designing and understanding testing more efficient and more effective. It is also much easier to convince reviewers you've done your job correctly with them.

I'd argue computer simulation has been an important component of a majority of mechanical engineering innovation in the last century. If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit. We did "just fine" without cars for the majority of humanity, but motorized vehicles significantly changed how we do things and changed the reach of what we can do.


> It is also much easier to convince reviewers you've done your job correctly with them.

In other words, the work that doesn't change the underlying reality of the product?

> We did "just fine" without cars for the majority of humanity

We went to the moon, invented aircraft, bridges, skyscrapers, etc, all without FEM. So that's why this is a bad comparison.

> If you asked a mechanical engineer to ignore those tools in their job they'd (rightly) throw a fit.

Of course. That's what they are accustomed to. 80/20 paper techniques that were replaced by SW were forgotten.

When tests are cheap, you make a lot of them. When they are expensive, you do a few and maximize the information you learn from them.

I'm not arguing FEM doesn't provide net benefit to the industry.


What is your actual assertion? That tools like FEA are needless frippery or that they just dumb down practitioners who could have otherwise accomplished the same things with hand methods? Something else? You're replying to a practicing mechanical engineer whose experience rings true to this aerospace engineer.

Things like modern automotive structural safety or passenger aircraft safety are leagues better today than even as recently as the 1980s because engineers can perform many high-fidelity simulations long before they get to integrated system test. When integrated system test is so expensive, you're not going to explore a lot of new ideas that way.

The argument that computational tools are eroding deep engineering understanding is long-standing, and has aspects of both truth and falsity. Yep, they designed the SR-71 without FEA, but you would never do that today because for the same inflation-adjusted budget, we'd expect a lot more out of the design. Tools like FEA are what help engineers fulfill those expectations today.


> What is your actual assertion?

That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

Now what's my opinion? FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

> passenger aircraft safety are leagues better today

Yep, but that's just restating the pros. Local iteration and testing.

> You're replying to a practicing mechanical engineer

Oh drpossum and I are getting to know each other.

I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.


Replying to finish a discussion no one will probably see, but...

> That the original comment I replied to is false: "Good luck designing crash resilient structures without simulating it on FEM based software."

In refuting the original casually-worded blanket statement, yes, you're right. You can indeed design crash resilient structures without FEA. Especially if they are terrestrial (i.e., civil engineering).

In high-performance applications like aerospace vehicles (excluding general aviation) or automobiles, you will not achieve the required performance on any kind of acceptable timeline or budget without FEA. In these kinds of high-performance applications, the original statement is valid.

> FEM raises the quality floor of engineering output overall, and more rarely the ceiling. But, excessive reliance on computer simulation often incentivizes complex, fragile, and expensive designs.

Do you have any experience in aerospace applications? Because quite often, we reliably achieve structural efficiencies, at prescribed levels of robustness, that we would not achieve sans FEA. It's a matter of making the performance bar, not a matter of simple vs. complex solutions.

> I agree with his main point. It's an essential tool for combatting certifications and reviews in the world of increasing regulatory and policy based governance.

That was one of his points, not the main one. The idea that its primary value is pandering to paper-pushing regulatory bodies and "policy based governance" is specious. Does it help with your certification case? Of course. But the real value is that analyses from these tools are the substantiation we use to determine whether the (expensive) design will meet requirements and survive all its stressing load cases before we approve building it. We then have a high likelihood that what we build, assuming it conforms to design intent, will perform as expected.


Except that everything's gotten abysmally complex. Vehicle crash test experiments are a good example of validating the FEM simulation (yes, that's the correct order, not vice versa).


How can you assert so confidently you know the cause and effect?

Certainly computers allow more complexity, so there is interplay between what they enable and what's driven by good engineering.


FEM - because we can't solve PDEs!


Is it related to Galerkin?








How to get meaningful and correct results from your finite element model


Martin Bäker
Institut für Werkstoffe, Technische Universität Braunschweig, Langer Kamp 8, D-38106 Braunschweig, martin.baeker@tu-bs.de
November 15, 2018


Abstract

This document gives guidelines to set up, run, and postprocess correct simulations with the finite element method. It is not an introduction to the method itself, but rather a list of things to check and possible mistakes to watch out for when doing a finite element simulation.


The finite element method (FEM) is probably the most-used simulation technique in engineering. Modern finite-element software makes doing FE simulations easy – too easy, perhaps. Since you have a nice graphical user interface that guides you through the process of creating, solving, and postprocessing a finite element model, it may seem as if there is no need to know much about the inner workings of a finite element program or the underlying theory. However, creating a model without understanding finite elements is similar to flying an airplane without a pilot’s license. You may even land somewhere without crashing, but probably not where you intended to.

This document is not a finite element introduction; see, for example, [3, 7, 10] for that. It is a guideline to give you some ideas on how to correctly set up, solve, and postprocess a finite element model. The techniques described here were developed working with the program Abaqus [9]; however, most of them should be easily transferable to other codes. I have not explained the theoretical basis for most of them; if you do not understand why a particular consideration is important, I recommend studying finite element theory to find out.

1. Setting up the model

1.1 General considerations

These considerations are not restricted to finite element models, but are useful for any complex simulation method.

  • 1.1-1. Even if you just need some number for your design – the main goal of an FEA is to understand the system. Always design your simulations so that you can at least qualitatively understand the results. Never believe the result of a simulation without thinking about its plausibility.

  • 1.1-2. Define the goal of the simulation as precisely as possible. Which question is to be answered? Which quantities are to be calculated? Which conclusions are you going to draw from the simulation? Probably the most common error made in FE simulations is setting up a simulation without having a clear goal in mind. Be as specific as possible. Never set up a model “to see what happens” or “to see how stresses are distributed”.

  • 1.1-3. Formulate your expectations for the simulation result beforehand and make an educated guess of what the results should be. If possible, estimate at least some quantities of your simulation using simplified assumptions. This will make it easier to spot problems later on and to improve your understanding of the system you are studying.

  • 1.1-4. Based on the answer to the previous items, consider which effects you actually have to simulate. Keep the model as simple as possible. For example, if you only need to know whether a yield stress is exceeded somewhere in a metallic component, it is much easier to perform an elastic calculation and check the von Mises stress in the postprocessor (be wary of extrapolations, see 3.2-1) than to include plasticity in the model.

  • 1.1-5. What is the required precision of your calculation? Do you need an estimate or a precise number? (See also 1.4-1 below.)

  • 1.1-6. If your model is complex, create it in several steps. Start with simple materials, assume frictionless behaviour etc. Add complications step by step. Setting up the model in steps has two advantages: (i) if errors occur, it is much easier to find out what caused them; (ii) understanding the behaviour of the system is easier this way because you understand which addition caused which change in the model behaviour. Note, however, that checks you made in an early stage (for example on the mesh density) may have to be repeated later.

  • 1.1-7. Be careful with units. Many FEM programs (like Abaqus) are inherently unit-free – they assume that all numbers you give can be combined without additional conversion factors. You cannot define your model geometry in millimetres but use SI units without prefixes everywhere else. Be especially careful in thermomechanical simulations due to the large number of different physical quantities needed there. And of course, also be careful if you use antiquated units like inch, slug, or BTU.
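A purely illustrative aid for 1.1-7 (the table reflects two commonly used conventions and is not tied to any particular program): write down the consistent unit system you are using before entering a single number, and convert everything to that column.

# Two self-consistent unit systems often used with unit-free FE codes.
UNIT_SYSTEMS = {
    #           length  mass     time  force  stress  density
    "SI":      ("m",    "kg",    "s",  "N",   "Pa",   "kg/m^3"),
    "SI(mm)":  ("mm",   "tonne", "s",  "N",   "MPa",  "tonne/mm^3"),
}

# Example: the density of steel in the two systems.
rho_si = 7850.0          # kg/m^3
rho_si_mm = 7850.0e-12   # tonne/mm^3 -- note the factor of 1e-12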

1.2 Basic model definition

  • 1.2-1. Choose the correct type of simulation (static, quasi-static, dynamic, coupled etc.). Dynamic simulations require the presence of inertial forces (elastic waves, changes in kinetic energies). If inertial forces are irrelevant, you should use static simulations.

  • 1.2-2. As a rule of thumb, a simulation is static or quasi-static if the excitation frequency is less than 1/5 of the lowest natural frequency of the structure [2] (a short numerical sketch of this check follows at the end of this list).

  • 1.2-3. In a dynamic analysis, damping may be required to avoid unrealistic multiple reflections of elastic waves that may affect the results [2].

  • 1.2-4. Explicit methods are inherently dynamic. In some cases, explicit methods may be used successfully for quasi-static problems to avoid convergence problems (see 2.1-9 below). If you use mass scaling in your explicit quasi-static analysis, carefully check that the scaling parameter does not affect your solution. Vary the scaling factor (the nominal density) to ensure that the kinetic energy in the model remains small [12].

  • 1.2-5. In a static or quasi-static analysis, make sure that all parts of the model are constrained so that no rigid-body movement is possible. (In a contact problem, special stabilization techniques may be available to ensure correct behaviour before contact is established.)

  • 1.2-6. If you are studying a coupled problem (for example thermo-mechanical) think about the correct form of coupling. If stresses and strains are affected by temperature but not the other way round, it may be more efficient to first calculate the thermal problem and then use the result to calculate thermal stresses. A full coupling of the thermal and mechanical problem is only needed if temperature affects stresses/strains (e. g., due to thermal expansion or temperature-dependent material properties) and if stresses and strains also affect the thermal problem (e. g., due to plastic heat generation or the change in shape affecting heat conduction).

  • 1.2-7. Every FE program uses discrete time steps (except for a static, linear analysis, where no time incrementation is needed). This may affect the simulation. If, for example, the temperature changes during a time increment, the material behaviour may strongly differ between the beginning and the end of the increment (this often occurs in creep problems where the properties change drastically with temperature). Try different maximal time increments and make sure that time increments are sufficiently small so that these effects are small.

  • 1.2-8. Critically check whether non-linear geometry is required. As a rule of thumb, this is almost always the case if strains exceed 5%. If loads are rotating with the structure (think of a fishing rod that is loaded in bending initially, but in tension after it has started to deform), the geometry is usually non-linear. If in doubt, critically compare a geometrically linear and non-linear simulation.
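As mentioned in 1.2-2, the rule of thumb for choosing between a (quasi-)static and a dynamic analysis is easy to script; the sketch below assumes you already have the lowest natural frequency from a modal analysis and the dominant frequency of your load (both values here are invented for the example).

def is_quasi_static(excitation_freq_hz, lowest_natural_freq_hz, factor=5.0):
    """Return True if inertial effects can plausibly be neglected (rule of thumb)."""
    return excitation_freq_hz < lowest_natural_freq_hz / factor

print(is_quasi_static(excitation_freq_hz=10.0, lowest_natural_freq_hz=120.0))  # True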

1.3 Symmetries, boundary conditions and loads

  • 1.3-1. Exploit symmetries of the model. In a plane 2D-model, think about whether plane stress, plane strain or generalized plane strain is the appropriate symmetry. (If thermal stresses are relevant, plane strain is almost always wrong because thermal expansion in the 3-direction is suppressed, causing large thermal stresses. Note that these 33-stresses may affect other stress components as well, for example, due to von Mises plasticity.) Keep in mind that the loads and the deformations must conform to the same symmetry.

  • 1.3-2. Check boundary conditions and constraints. After calculating the model, take the time to ensure that nodes were constrained in the desired way in the postprocessor.

  • 1.3-3. Point loads at single nodes may cause unrealistic stresses in the adjacent elements. Be especially careful if the material or the geometry is non-linear. If in doubt, distribute the load over several elements (using a local mesh refinement if necessary).

  • 1.3-4. If loads are changing direction during the calculation, non-linear geometry is usually required, see 1.2-8.

  • 1.3-5. The discrete time-stepping of the solution process may also be important in loading a structure. If, for example, you abruptly change the heat flux at a certain point in time, discrete time stepping may not capture the exact point at which the change occurs, see fig. 1. (Your software may use some averaging procedure to alleviate this.) Define load steps or use other methods to ensure that the time of the abrupt change actually corresponds to a time step in the simulation. This may also improve convergence because it allows you to control the increments at the moment of the abrupt change, see also 2.1-4.

1.4 Input data

  • 1.4-1. A simulation cannot be more precise than its input data allow. This is especially true for the material behaviour. Critically consider how precise your material data really are. How large are the uncertainties? If in doubt, vary material parameters to see how results are affected by the uncertainties.

  • 1.4-2. Be careful when combining material data from different sources and make sure that they are referring to identical materials. In metals, don’t forget to check the influence of heat treatment; in ceramics, powder size or the processing route may affect the properties; in polymers, the chain length or the content of plasticizers is important [13]. Carefully document your sources for material data and check for inconsistencies.

  • 1.4-3. Be careful when extrapolating material data. If data have been described using simple relations (for example a Ramberg-Osgood law for plasticity), the real behaviour may strongly deviate from this.

  • 1.4-4. Keep in mind that your finite element software usually cannot extrapolate material data beyond the values given. If plastic strains exceed the maximum value specified, usually no further hardening of the material will be considered. The same holds, for example, for thermal expansion coefficients which usually increase with temperature. Using different ranges in different materials may thus cause spurious thermal stresses. Fig. 2 shows an example.

  • 1.4-5. If material data are given as equations, be aware that parameters may not be unique. Frequently, data can be fitted using different parameters. As an illustration, plot the simple hardening law A+Bεⁿ with values (130, 100, 0.5) and (100, 130, 0.3) for (A, B, n), see fig. 3 (a short plotting sketch follows at the end of this list). Your simulation results may be insensitive to some changes in the parameters because of this.

  • 1.4-6. If it is not possible to determine the material behaviour precisely, finite element simulations may still help you understand how the material behaviour affects the system. Vary parameters within plausible ranges and study the response of the system.

  • 1.4-7. Also check the precision of external loads. If loads are not known precisely, use a conservative estimate.

  • 1.4-8. Thermal loads may be especially problematic because heat transfer coefficients or surface temperatures may be difficult to measure. Use the same considerations as for materials.

  • 1.4-9. If you vary parameters (for example the geometry of your component or the material), make sure that you correctly consider how external loads are changed by this. If, for example, you specify an external load as a pressure, increasing the surface also increases the load. If you change the thermal conductivity of your material, the total heat flux through the structure will change; you may have to specify the thermal load accordingly.

  • 1.4-10. Frictional behaviour and friction coefficients are also frequently unknown. Critically check the parameters you use and also check whether the friction law you are using is correct – not all friction follows Coulomb's law.

  • 1.4-11. If a small number of parameters are unknown, you can try to vary them until your simulation matches experimental data, possibly using a numerical optimization method. (This is the so-called inverse parameter identification [6].) Be aware that the experimental data used this way cannot be used to validate your model (see section 3.3).
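As mentioned in 1.4-5, the following short plotting sketch (parameter values taken from the text, units arbitrary) shows how two different parameter sets can produce nearly the same flow curve over a limited strain range:

import numpy as np
import matplotlib.pyplot as plt

eps = np.linspace(0.0, 0.5, 200)
for A, B, n in [(130.0, 100.0, 0.5), (100.0, 130.0, 0.3)]:
    plt.plot(eps, A + B * eps**n, label=f"A={A}, B={B}, n={n}")
plt.xlabel("plastic strain")
plt.ylabel("flow stress")
plt.legend()
plt.show()

If your experimental data only cover part of the strain range, a fit cannot distinguish between such parameter sets, and neither can your simulation.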

1.5 Choice of the element type

Warning: Choosing the element type is often the crucial step in creating a finite element model. Never accept the default choice of your program without thinking about it.¹ Carefully check which types are available and make sure you understand how a finite element simulation is affected by the choice of element type. You should understand the concepts of element order and integration points (also known as Gauß points) and know the most common errors caused by an incorrectly chosen element type (shear locking, volumetric locking, hourglassing [1,3]).

The following points give some guidelines for the correct choice:

  • 1.5-1. If your problem is linear-elastic, use second-order elements. Reduced integration may save computing time without strongly affecting the results.

  • 1.5-2. Do not use fully-integrated first order elements if bending occurs in your structure (shear locking). Incompatible mode elements may circumvent this problem, but their performance strongly depends on the element shape [7].

  • 1.5-3. If you use first-order elements with reduced integration, check for hourglassing. Keep in mind that hourglassing may occur only in the interior of a three-dimensional structure where seeing it is not easy. Exaggerating the displacements may help in visualizing hourglassing. Most programs use numerical techniques to suppress hourglass modes; however, these may also affect results due to artificial damping. Therefore, also check the energy dissipated by this artificial damping and make sure that it is small compared to other energies in the model.

  • 1.5-4. In contact problems, first-order elements may improve convergence: with second-order elements, if one corner and one edge node are in contact, the second-order interpolation of the element edge causes overlaps, see fig. 4. This may especially cause problems in a crack-propagation simulation with a node-release scheme [4, 11].

  • 1.5-5. Discontinuities in stresses or strains may be captured better with first-order elements in some circumstances.

  • 1.5-6. If elements distort strongly, first-order elements may be better than second-order elements.

  • 1.5-7. Avoid triangular or tetrahedral first-order elements since they are much too stiff, especially in bending. If you have to use these elements (which may be necessary in a large model with complex geometry), use a very fine mesh and carefully check for mesh convergence. Think about whether partitioning your model and meshing with quadrilateral/hexahedral elements (at least in critical regions) may be worth the effort. Fig. 5 shows an example where a very complex geometry has to be meshed with tetrahedral elements. Although the mesh looks reasonably fine, the system answer with linear elements is much too stiff.

  • 1.5-8. If material behaviour is incompressible or almost incompressible, use hybrid elements to avoid volumetric locking. They may also be useful if plastic deformation is large because (metal) plasticity is also volume conserving.

  • 1.5-9. Do not mix elements with different order. This can cause overlaps or gaps forming at the interface (possibly not shown by your postprocessor) even if there are no hanging nodes (see fig. 6). If you have to use different order of elements in different regions of your model, tie the interface between the regions using a surface constraint. Be aware that this interface may cause a discontinuity in the stresses and strains due to different stiffness of the element types.

  • 1.5-10. In principle, it is permissible to mix reduced and fully integrated elements of the same order. However, since they differ in stiffness, spurious stress or strain discontinuities may result.

  • 1.5-11. If you use shell or beam elements or similar, make sure to use the correct mathematical formulation. Shells and membranes look similar but behave differently, and there are a large number of different types of shell and beam elements with different behaviour.

¹The only acceptable exception may be a simple linear-elastic simulation if your program uses second-order elements. But if all you do is linear elasticity, this article is probably not for you.

1.6 Generating a mesh

  • 1.6-1. If possible, use quadrilateral/hexahedral elements. Meshing 3D-structures this way may be laborious, but it is often worth the effort (see also 1.5-7).

  • 1.6-2. A fine mesh is needed where gradients in stress and strain are large.

  • 1.6-3. A preliminary simulation with a coarse mesh may help to identify the regions where a greater mesh density is required.

  • 1.6-4. Keep in mind that the required mesh density depends on the quantities you want to extract and on the required precision. For example, displacements are often calculated more precisely than strains (or stresses) because strains involve derivatives, i.e. the differences in displacements between nodes.

  • 1.6-5. A mesh convergence study can be used to check whether the model behaves too stiffly (as is often the case for fully integrated first-order elements, see fig. 5) or too softly (which happens with reduced-integration elements); a short sketch follows at the end of this list. Be careful in evaluating this study: if your model is load-controlled, evaluate displacements or strains to check for convergence; if it is strain-controlled, evaluate forces or stresses. (Stiffness relates forces to displacements, so to check the stiffness you need to check both.) If you use, for example, displacement control, displacements are not sensitive to the actual stiffness of your model since you prescribe the displacement.

  • 1.6-6. Check the shape and size of the elements. Inner angles should not deviate too much from those of a regularly shaped element. Use the tools provided by your software to highlight critical elements. Keep in mind that critical regions may be situated inside a 3D-component and may not be directly visible. Avoid badly-shaped elements especially in regions where high gradients occur and in regions of interest.

  • 1.6-7. If you use local mesh refinement, the transition between regions of different element sizes should be smooth. As a rule of thumb, adjacent elements should not differ by more than a factor of 2–3 in their area (or volume). If the transition is too abrupt, spurious stresses may occur in this region because a region that is meshed finer is usually less stiff. Furthermore, the fine mesh may be constrained by the coarser mesh. (As an extreme case, consider a finely meshed quadratic region that is bounded by only four first-order elements – in this case, the region as a whole can only deform as a parallelogram, no matter how fine the interior mesh is.)

  • 1.6-8. Be aware that local mesh refinement may strongly affect the simulation time in an explicit simulation because the stable time increment is determined by the size of the smallest element in the structure. A single small or badly shaped element can drastically increase the simulation time.

  • 1.6-9. If elements are distorting strongly, remeshing may improve the shape of the elements and the solution quality. For this, solution variables have to be interpolated from the old to the new mesh. This interpolation may dampen strong gradients or local extrema. Make sure that this effect is sufficiently small by comparing the solution before and after the remeshing in a contour plot and at the integration points.

  • 1.6-10. Another way of dealing with strong mesh distortions is to start with a mesh that is initially distorted and becomes more regular during deformation. This method usually requires some experimentation, but it may yield good solutions without the additional effort of remeshing.
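As mentioned in 1.6-5, a mesh convergence check can be as simple as tabulating a monitored quantity against mesh density and watching the relative change settle; the numbers below are made up purely for illustration.

# Hypothetical results of the same load-controlled model on four meshes:
results = {            # mesh name -> monitored tip displacement (made-up values)
    "coarse": 1.82,
    "medium": 1.94,
    "fine":   1.97,
    "finest": 1.98,
}
values = list(results.values())
for (name, v), v_prev in zip(list(results.items())[1:], values):
    change = abs(v - v_prev) / abs(v)
    print(f"{name}: relative change {change:.1%}")
# Accept the mesh once the relative change drops below your required precision.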

1.7 Defining contact problems

  • 1.7-1. Correctly choose master and slave surfaces in a master-slave algorithm. In general, the stiffer (and more coarsely meshed) surface should be the master.

  • 1.7-2. Problems may occur if single nodes get in contact and if surfaces with corners are sliding against each other. Smoothing the surfaces may be helpful.

  • 1.7-3. Nodes of the master surface may penetrate the slave surface; again, smoothing the surfaces may reduce this, see fig. 7.

  • 1.7-4. Some discretization error is usually unavoidable if curved surfaces are in contact. With a pure master-slave algorithm, penetration and material overlap are the most common problems; with a symmetric choice (both surfaces are used as master and as slave), gaps may open between the surfaces, see fig. 8. Check for discretization errors in the postprocessor.

  • 1.7-5. Discretization errors may also affect the contact force. Consider, for example, the Hertzian contact problem of two cylinders contacting each other. If the mesh is coarse, there will be a notable change in the contact force whenever the next node comes into contact. Spurious oscillations of the force may be caused by this.

  • 1.7-6. Make sure that rigid-body motion of contact partners before the contact is established is removed either by adding appropriate constraints or by using a stabilization procedure.

  • 1.7-7. Second-order elements may cause problems in contact (see 1.5-4 and fig. 4) [4, 11]; if they do, try switching to first-order elements.

1.8 Other considerations

  • 1.8-1. If you are inexperienced in using finite elements, start with simple models. Do not try to directly set up a complex model from scratch and make sure that you understand what your program does and what different options are good for. It is almost impossible to find errors in a large and complex model if you do not have long experience and if you do not know what results you expect beforehand.

  • 1.8-2. Many parameters that are not specified by the user are set to default values in finite element programs. You should check whether these defaults are correct, especially for those parameters that directly affect the solution (like element types, material definitions etc.). If you do not know what a parameter does and whether the default is appropriate, consult the manual. For parameters that only affect the efficiency of the solution (for example, which solution scheme is used to solve matrix equations), understanding the parameters is less important because a wrongly chosen parameter will not affect the final solution, but only the CPU time or whether a solution is found at all.

  • 1.8-3. Modern finite element software is equipped with a plethora of complex special techniques (XFEM, element deletion, node separation, adaptive error-controlled mesh-refinement, mixed Eulerian-Lagrangian methods, particle based methods, fluid-structure interaction, multi-physics, user-defined subroutines etc.). If you plan to use these techniques, make sure that you understand them and test them using simple models. If possible, build up a basic model without these features first and then add the complex behaviour. Keep in mind that the impressive simulations you see in presentations were created by experts and may have been carefully selected and may not be typical for the performance.

2. Solving the model

Even if your model is solved without any convergence problems, you should still look at the log file written by the solver and check for warning messages. They may be harmless, but they may indicate some problem in defining your model.
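One way to make this check a habit is to scan the log automatically. The sketch below assumes a plain-text log called job.log whose warning lines contain the word WARNING; adapt both to whatever your solver actually writes.

from pathlib import Path

log_text = Path("job.log").read_text(errors="replace")
warnings = [line for line in log_text.splitlines() if "WARNING" in line.upper()]
print(f"{len(warnings)} warning lines found")
for line in warnings[:20]:   # show the first few
    print(line)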

Convergence problems are usually reported by the program with warning or error messages. You can also see that your model has not converged if the final time in the time step is not the end time you specified in the model definition.

There are two reasons for convergence problems: On the one hand, the solution algorithm may fail to find a solution although a solution of the problem does exist. In this case, modifying the solution algorithm may solve the problem (see section 2.2). On the other hand, the problem definition may be faulty so that the problem is unstable and does not have a solution (section 2.3).

If you are new to finite element simulations, you may be tempted to think that these errors are simply caused by specifying an incorrect option or forgetting something in the model definition. Errors of this type exist as well, but they are usually detected before calculation of your model begins (and are not discussed here). Instead, treat the non-convergence of your simulation in the same way as any other scientific problem. Formulate hypotheses why the simulation fails to converge. Modify your model to prove² or disprove these hypotheses to find the cause of the problems.

²Of course natural science is not dealing with “proofs”, but this is not the place to think about the philosophy of science. Replace “prove” with “strengthen” or “find evidence for” if you like.

2.1 General considerations

  • 2.1-1. In an implicit simulation, the size of the time increments is usually automatically controlled by the program. If convergence is difficult, the time increments are reduced.³ Usually, the program stops if the time increment is too small or if the convergence problems persist even after several cutbacks of the time increment. (In Abaqus, you get the error messages Time increment smaller than minimum or Too many attempts, respectively.) These messages themselves thus do not tell you anything about the reason for the convergence problems. To find the cause of the convergence problems, look at the solver log file in the increment(s) before the final error message. You will probably see warnings that tell you what kind of convergence problem was responsible (for example, the residual force is too large, the contact algorithm did not converge, the temperature increments were too large). If available, also look at the unconverged solution and compare it to the last, converged timestep. Frequently, large changes in some quantity may indicate the location of the problem.

  • 2.1-2. Use the postprocessor to identify the node with the largest residual force and the largest change in displacement in the final increment. Often (but not always) this tells you where the problem in the model occurs. (Apply the same logic in a thermal simulation looking at the temperature changes and heat fluxes.)

  • 2.1-3. If the first increment does not converge, set the size of the first time increment to a very small value. If the problem persists, the model itself may be unstable (missing boundary conditions, initial overlap of contacting surfaces). To find the cause of the problem, you can remove all external loads step by step or add further boundary conditions to make sure that the model is properly constrained (if you pin two nodes for each component, rigid-body movements should be suppressed – if the model converges in this case, you probably did not have sufficient boundary conditions in your original model). Alternatively or additionally, you may add numerical stabilization to the problem definition. (In numerical stabilization, artificial friction is added to the movement of nodes so that stabilizing forces are generated if nodes start to move rapidly.) However, make sure that the stabilization does not affect your results too strongly. Also check for abrupt jumps in some boundary conditions, for example a finite displacement that is defined at the beginning of a step or a sudden jump in temperature or load. If you apply a load instantaneously, cutting back the time increments does not help the solution process. If this occurs, ramp your load instead.

  • 2.1-4. Avoid rapid changes in an amplitude within a calculation step (see also 1.2-7 and 1.3-5). For example, if you hold a heat flux (or temperature or stress) for a long time and then abruptly reduce it within the same calculation step, the time increment will suddenly jump to a point where the temperature is strongly reduced. This abrupt change may cause convergence problems. Define a second step and choose small increments at the beginning of the second step where large changes in the model can be expected.

  • 2.1-5. Try the methods described in section 2.2 to see whether the problem can be resolved by changing the solution algorithm.

  • 2.1-6. Sometimes, it is the calculation of the material law at an integration point that does not converge (to calculate stresses from strains at the integration points inside the solver, another Newton algorithm is used at each integration point [3]). If this is the case, the material definition may be incorrect or problematic (for example, due to incorrectly specified material parameters or because there is extreme softening at a point).

  • 2.1-7. Simplify your model step by step to find the reason for the convergence problems. Use simpler material laws (simple plasticity instead of damage, elasticity instead of plasticity), switch off non-linear geometry, remove external loads etc. If the problem persists, try to create a minimum example – the smallest example you can find that shows the same problem. This has several advantages: the minimum example is easier to analyse, needs less computing time so that trying things is faster, and it can also be shown to others if you are looking for help (see section 4).

  • 2.1-8. If your simulation is static, switching to an implicit dynamic simulation may help because the inertial forces act as natural stabilizers. If possible, use a quasi-static option.

  • 2.1-9. Explicit simulations usually have fewer convergence problems. Frequently heard advice for solving convergence problems is therefore to switch from implicit to explicit models. I strongly recommend switching from implicit static to explicit quasi-static for convergence reasons only if you understand the reasons for the convergence problems and cannot overcome them with the techniques described here. You should also keep in mind that explicit programs may offer different functionality (for example, different element types). If your problem is static, you can only use a quasi-static explicit analysis, which may also have problems (see 1.2-4). Be aware that in an explicit simulation, elastic waves may occur that may change the stress patterns.

³The rationale behind this is that the solution from the previous increment is a better initial guess for the next increment if the change in the load is reduced.

2.2 Modifying the solution algorithm

If your solution algorithm does not converge for numerical reasons, these modifications may help. They are useless if there is a true model instability, see section 2.3.

  • 2.2-1. Finite element programs use default values to control the Newton iterations. If no convergence is reached after a fixed number of iterations, the time step is cut back (a generic sketch of this cutback logic follows at the end of this list). In strongly non-linear problems, these default values may be too tight. For example, Abaqus cuts back on the time increment if the Newton algorithm does not converge after 4 iterations; setting this number to a larger value is often sufficient to reach convergence (for example, by adding *Controls, analysis=discontinuous to the input file).

  • 2.2-2. If the Newton algorithm does not converge, the time increment is cut back. If it becomes smaller than a pre-defined minimum value, the simulation stops with an error message. This minimum size of the time increment can be adjusted. Furthermore, if a sudden loss in stability (or change in load) occurs so that time increments need to be changed by several orders of magnitude, the number of cutbacks also needs to be adapted (see next point). In this case, another option is to define a new time step (see 2.1-4) that starts at this critical point and that has a small initial increment.

  • 2.2-3. The allowed number of cutbacks per increment can also be adapted (in Abaqus, use *CONTROLS, parameters=time incrementation). This may be helpful if the simulation proceeds at first with large increments before some difficulty is reached – allowing for a larger number of cutbacks enables the program to use large timesteps at the beginning. Alternatively, you can reduce the maximum time increment (so that the size of the necessary cutback is reduced) or you can split your simulation step in two with different time incrementation settings in the step where the problem occurs (see 2.1-4).

  • 2.2-4. Be aware that the previous two points will work sometimes, but not always. There is usually no sense in allowing a smallest time increment that is ten or twenty orders of magnitude smaller than the step size or in allowing dozens of cutbacks; this only increases the CPU time.

  • 2.2-5. Depending on your finite element software, there may be many more options to tune the solution process. In Abaqus, for example, the initial guess for the solution of a time increment is calculated by extrapolation from the previous steps. Usually this improves convergence, but it may cause problems if something in the model changes abruptly. In this case, you can switch the extrapolation off (STEP, extrapolation=no). You can also add a line search algorithm that scales the calculated displacements to find a better solution (CONTROLS, parameters=line search). Consult the manual for options to improve convergence.

  • 2.2-6. While changing the iteration control (as explained in the previous points) is often needed to achieve convergence, the solution controls that are used to determine whether a solution has converged should only be changed if absolutely necessary. Only do so (in Abaqus, use *CONTROLS, parameters=field) if you know exactly what you are doing. One example where changing the controls may be necessary is when the stress is strongly concentrated in a small part of a very large structure [5]. In this case, an average nodal force that is used to determine convergence may impose too strong a constraint on the convergence of the solution, so that convergence should be based on local forces in the region of stress concentration. Be aware that since forces, not stresses, are used in determining the convergence, changing the mesh density requires changing the solution controls. Make sure that the accepted solution is indeed a solution and that your controls are sufficiently strict. Vary the controls to ensure that their value does not affect the solution.

  • 2.2-7. Contact problems sometimes do not converge due to problems in establishing which nodes are in contact (sometimes called “zig-zagging” [14]). This often happens if the first contact is made by a single node. Smoothing the contact surfaces may help.

  • 2.2-8. If available and possible, use general contact definitions where the contact surfaces are determined automatically.

  • 2.2-9. If standard contact algorithms do not converge, soft contact formulations (which implement a soft transition between “no contact” and “full contact”) may improve convergence; however, they may allow for some penetration of the surfaces and thus affect the results.
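As mentioned in 2.2-1, the controls discussed above tune an automatic time-incrementation loop with cutbacks. The generic sketch below (not Abaqus code; solve_increment stands in for whatever Newton solve your program performs) shows the logic being tuned:

def run_step(total_time, dt_initial, dt_min, max_cutbacks, solve_increment):
    t, dt = 0.0, dt_initial
    while t < total_time:
        for _ in range(max_cutbacks + 1):
            if solve_increment(t, dt):      # one converged Newton solve?
                break
            dt *= 0.25                      # cut back the increment and retry
            if dt < dt_min:
                raise RuntimeError("time increment smaller than minimum")
        else:
            raise RuntimeError("too many attempts")
        t += dt
        dt = min(dt * 1.5, total_time - t)  # grow again, but do not overshoot
    return t

# Dummy increment solver that only converges for small increments:
print(run_step(1.0, 0.2, 1e-6, 5, lambda t, dt: dt <= 0.1))

Raising the number of cutbacks or lowering the minimum increment only helps if smaller increments actually converge; if the model is unstable (section 2.3), the loop just grinds down to the minimum and stops.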

2.3 Finding model instabilities

A model is unstable if there actually is no solution to the mechanical problem.

  • 2.3-1. Instabilities are frequently due to a loss in load bearing capacity of the structure. There are several reasons for that:

    • The material definition may be incorrect. If, for example, a plastic material is defined without hardening, the load cannot increase after the component has fully plastified. Simple typos or incorrectly used units may also cause a loss in material strength.

    • Thermal softening (the reduction of strength with increasing temperature) may cause an instability in a thermo-mechanical problem.

    • Non-linear geometry may cause an instability because the cross section of a load-bearing component reduces during deformation.

    • A change in contact area, a change from sticking to sliding in a simulation with friction or a complete loss of contact between two bodies may also cause instabilities because the structure may not be able to bear an increase in the load.

  • 2.3-2. Local instabilities may cause highly distorted meshes that prevent convergence. It may be helpful to define the mesh in such a way that elements become more regular during deformation (see also 1.6-10).

  • 2.3-3. If your model is load-controlled (a force is applied), switch to a displacement-controlled loading. This avoids instabilities due to loss in load-bearing capacity.

  • 2.3-4. Artificial damping (stabilization) may be added to stabilize an unstable model. However, check carefully that the solution is not unduly affected by this. Adding artificial damping may also help to determine the cause of the instability. If your model converges with damping, you know that an instability is present.

2.4 Problems in explicit simulations

As already stated in 2.1-9, explicit simulations have fewer convergence problems than implicit simulations. However, sometimes even an explicit simulation may run into trouble.

  • 2.4-1. During simulation, elements may distort excessively. This may happen for example if a concentrated load acts on a node or if the displacement of a node becomes very large due to a loss in stability (for example in a damage model). In this case, the element shape might become invalid (crossing over of element edges, negative volumes at integration points etc.). If this happens, changing the mesh might help – elements that have a low quality (large aspect ratio, small initial volume) are especially prone to this type of problem. Note that second-order elements are often more sensitive to this problem than first-order elements.

  • 2.4-2. The stable time increment in an explicit simulation is given by the time a sound wave needs to travel through the smallest element. If elements distort strongly, they may become very thin in one direction so that the stable time increment becomes unreasonably small. In this case, changing the mesh might help.
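A back-of-the-envelope estimate of the stable increment, assuming the one-dimensional wave speed c = sqrt(E/rho) (material values below are just an example):

import math

def stable_time_increment(min_element_length, youngs_modulus, density):
    wave_speed = math.sqrt(youngs_modulus / density)   # 1D dilatational wave speed
    return min_element_length / wave_speed

# Steel (E = 210 GPa, rho = 7850 kg/m^3), smallest element 1 mm:
print(f"{stable_time_increment(1.0e-3, 210.0e9, 7850.0):.2e} s")  # roughly 2e-7 s

Halving the smallest element halves the stable increment, which is why a single tiny or badly shaped element can dominate the cost of an explicit run.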

3. Postprocessing

There are two aspects to checking that a model is correct: Verification is the process of showing that the model was correctly specified and actually does what it was created to do (loads, boundary conditions, material behaviour etc. are correct). Validation means to check the model by making an independent prediction (i. e., a prediction that was not used in specifying or calibrating the model) and checking this prediction in some other way (for example, experimentally).⁴

General advice: If you modify your model significantly (because you build up a complicated model in steps, have to correct errors or add more complex material behaviour to get agreement with experimental results etc.), you should again check the model. It is not clear that the mesh density that was sufficient for your initial model is still sufficient for the modified model. The same is true for other considerations (like the choice of element type etc.).

⁴Note that the terms “verification” and “validation” are used differently in different fields.

3.1 Checking the plausibility and verifying the model

  • 3.1-1. Check the plausibility of your results. If your simulation deviates from your intuition, continue checking until you are sure that you understand why your intuition (or the simulation) was incorrect. Never believe a result of a simulation that you do not understand and that should be different according to your intuition. Either the model or your understanding of the physical problem is incorrect – in both cases, it is important to understand all effects.

  • 3.1-2. Check your explanations for the solution, possibly with additional simulations. For example, if you assume that thermal expansion is the cause of a local stress maximum, re-run the simulation with a different or vanishing coefficient of thermal expansion. Predict the results of such a simulation and check whether your prediction was correct.

  • 3.1-3. Check all important solution variables. Even if you are only interested in, for example, displacements of a certain point, check stresses and strains throughout the model.

  • 3.1-4. In 3D-simulations, do not only look at contour plots of the component’s surface; also check the results inside the component by cutting through it.

  • 3.1-5. Make sure you understand which properties are vectors or tensors. Which components of stress or strain are relevant depends on your model, the material, and the question you are trying to answer. Default settings of the postprocessor are not always appropriate; for example, Abaqus plots the von Mises stress as the default stress variable, which is not very helpful for ceramic materials.

  • 3.1-6. Check the boundary conditions again. Are all nodes constrained in the desired manner? Exaggerating the deformation (use Common plot options in Abaqus) or picking nodes with the mouse may be helpful to check this precisely.

  • 3.1-7. Check the mesh density (see 1.6-5). If possible, calculate the model with different mesh densities (possibly for a simplified problem) and make sure that the mesh you finally use is sufficiently fine. When comparing different meshes, the variation in the mesh density should be sufficiently large to make sure that you can actually see an effect.

  • 3.1-8. Check the mesh quality again, paying special attention to regions where gradients are large. Check that the conditions explained in section 1.6 (element shapes and sizes, no strong discontinuities in the element sizes) are fulfilled and that discontinuities in the stresses are not due to a change in the numerical stiffness (due to a change in the integration scheme or element size).

  • 3.1-9. Check that stresses are continuous between elements. At interfaces between different materials, check that normal stresses and tangential strains are continuous.

  • 3.1-10. Check that the normal stress at any free surface is zero (a small checking sketch follows at the end of this list).

  • 3.1-11. Check the mesh density at contact surfaces: can the actual movement and deformation of the surfaces be represented by the mesh? For example, if a mesh is too coarse, nodes may be captured in a corner or a surface may not be able to deform correctly.

  • 3.1-12. Keep in mind that discretization errors at contact surfaces also influence stresses and strains. If you use non-standard contact definitions (2.2-9), try to evaluate how these influence the stresses (for example by comparing actual node positions with what you would expect for hard contact).

  • 3.1-13. Watch out for divergences. The stress at a sharp notch or crack tip is theoretically infinite – the value shown by your program is then solely determined by the mesh density and, if you use a contour plot, by the extrapolation used by the postprocessor (see 3.2-1).

  • 3.1-14. In dynamic simulations, elastic waves propagate through the structure. They may dominate the stress field. Watch out for reflections of elastic waves and keep in mind that, in reality, these waves are dampened.

  • 3.1-15. If you assumed linear geometry, check whether strains and deformations are sufficiently small to justify this assumption, see 1.2-8.
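As mentioned in 3.1-10, a free-surface check is easy to script once the relevant stress component has been exported along the surface; the array and tolerance below are placeholders you would replace with your own data.

import numpy as np

surface_normal_stress = np.array([0.3, -0.8, 1.2, 0.1])   # MPa, placeholder values
peak_stress_in_model = 250.0                               # MPa, placeholder value

tolerance = 0.02 * peak_stress_in_model    # e.g. 2% of the peak stress in the model
suspect = np.abs(surface_normal_stress) > tolerance
print(f"{suspect.sum()} of {suspect.size} sampled points exceed the tolerance")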

3.2 Implementation issues

  • 3.2-1. Quantities like stresses or strains are only defined at integration points. Do not rely on extreme values from a contour plot – these values are extrapolated. It strongly depends on the problem whether these extrapolated values are accurate or not. For example, in an elastic material, the extrapolation is usually reasonable, in an ideally-plastic material, extrapolated von Mises stresses may exceed the actual yield stress by a factor of 2 or more. Furthermore, the contour lines themselves may show incorrect maxima or minima, see fig. 9 for an example.

  • 3.2-2. It is often helpful to use “quilt” plots where each element is shown in a single color averaged from the integration point values (see also fig. 9).

  • 3.2-3. The frequently used rainbow color spectrum has been shown to be misleading and should not be used [8]. Gradients may be difficult to interpret because human color vision has a different sensitivity in different parts of the spectrum. Furthermore, many people have a color vision deficiency and are unable to discern reds, greens, and yellows. For variables that run from zero to a maximum value (temperature, von Mises stress), use a sequential spectrum (for example, from black to red to yellow); for variables that can be positive and negative, use a diverging spectrum with a neutral color at zero, see fig. 10 (a short plotting sketch follows at the end of this list).

  • 3.2-4. Discrete time-stepping (see 1.2-7) may also influence the post-processing of results. If you plot the stress-strain curve of a material point by connecting values measured at the discrete simulation times, the resulting curve will not coincide perfectly with the true stress-strain curve, although the data points themselves are correct.

  • 3.2-5. Complex simulation techniques (like XFEM, element deletion etc., see 1.8-3) frequently use internal parameters to control the simulation that may affect the solution process. Do not rely on default values for these parameters and check that the values do not affect the solution inappropriately.

  • 3.2-6. If you use element deletion, be aware that removing elements from the simulation is basically an unphysical process since material is removed. This may affect the energy balance or stress fields near the removed elements. For example, in models of machining processes, removing elements at the tool tip to separate the material strongly influences the residual stress field.
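The following sketch illustrates the colour-map advice in 3.2-3, using matplotlib's built-in maps as stand-ins for whatever your postprocessor offers; the two fields are invented for the example. A sequential map is used for a zero-to-maximum quantity, and a diverging map, centred on zero, for a quantity that changes sign.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
von_mises_like = np.hypot(x, y)          # illustrative non-negative field
bending_like = x * np.exp(-x**2 - y**2)  # illustrative field with both signs

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Sequential map (dark to bright) for a field running from zero to its maximum.
im1 = ax1.pcolormesh(x, y, von_mises_like, cmap="inferno")
ax1.set_title("sequential map, 0 .. max")
fig.colorbar(im1, ax=ax1)

# Diverging map with vmin/vmax symmetric about zero, so the neutral colour sits at zero.
lim = np.abs(bending_like).max()
im2 = ax2.pcolormesh(x, y, bending_like, cmap="coolwarm", vmin=-lim, vmax=lim)
ax2.set_title("diverging map, neutral at 0")
fig.colorbar(im2, ax=ax2)

plt.tight_layout()
plt.show()
```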
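A further sketch, this one for 3.2-4, uses an invented hardening law: the values written at the coarse output increments lie exactly on the true response, but connecting them with straight lines misses the curvature in between.

```python
import numpy as np
import matplotlib.pyplot as plt

strain = np.linspace(0.0, 0.05, 500)
true_stress = 200.0 * (1.0 - np.exp(-strain / 0.01))     # hypothetical hardening law, MPa

strain_out = np.linspace(0.0, 0.05, 6)                   # coarse output increments
stress_out = 200.0 * (1.0 - np.exp(-strain_out / 0.01))  # exact values at those increments

plt.plot(strain, true_stress, label="true material response")
plt.plot(strain_out, stress_out, "o--", label="output increments, connected linearly")
plt.xlabel("strain")
plt.ylabel("stress [MPa]")
plt.legend()
plt.show()
```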

3.3 Validation

  • 3.3-1. If possible, use your model to make an independent prediction that can be tested.

  • 3.3-2. If you used experimental data to adapt unknown parameters (see 1.4), correctly reproducing these data with the model does not validate it, but only verifies it.

  • 3.3-3. The previous point also holds if you made a prediction and afterwards had to change your model to get agreement with an experiment. After this model change, the experiment can no longer be considered an independent validation.

4. Getting help

If you cannot solve your problem, you can try to get help from your software vendor's support (provided you are entitled to it) or from the internet (for example on ResearchGate or iMechanica). To get helpful answers, please observe the following points:

  • 4-1. Check that you have read the relevant pages of the manual and that your question is not already answered there.

  • 4-2. Describe your problem as precisely as possible. What error occurred? What was the exact error message, and which warnings appeared? Show pictures of the model and describe it (element type, material, kind of problem – static or dynamic, explicit or implicit, etc.).

  • 4-3. If possible, provide a copy of your model or, even better, a minimal example that shows the problem (see 2.1-7).

  • 4-4. If you get answers to your request, give feedback on whether they solved your problem, especially in an internet forum or similar. People are sacrificing their time to help you and will want to know whether their advice was actually helpful and what the solution was. Providing feedback will also help others who find your post because they are facing similar problems.

Acknowledgement

Thanks to Philipp Seiler for many discussions and for reading a draft version of this manuscript, and to Axel Reichert for sharing his experience on getting models to converge.

References

[1] F. Armero. On the locking and stability of finite elements in finite deformation plane strain problems. Computers & Structures, 75(3):261–290, 2000.
[2] CAE Associates. Practical FEA simulations. https://caeai.com/blog/practical-fea-simulations?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=caeai. Accessed 31.5.2017.
[3] Martin Bäker. Numerische Methoden in der Materialwissenschaft. Fachbereich Maschinenbau der TU Braunschweig, 2002.
[4] Martin Bäker, Stefanie Reese, and Vadim V. Silberschmidt. Simulation of crack propagation under mixed-mode loading. In Siegfried Schmauder, Chuin-Shan Chen, Krishan K. Chawla, Nikhilesh Chawla, Weiqiu Chen, and Yutaka Kagawa, editors, Handbook of Mechanics of Materials. Springer Singapore, Singapore, 2018.
[5] Martin Bäker, Joachim Rösler, and Carsten Siemers. A finite element model of high speed metal cutting with adiabatic shearing. Computers & Structures, 80(5):495–513, 2002.
[6] Martin Bäker and Aviral Shrot. Inverse parameter identification with finite element simulations using knowledge-based descriptors. Computational Materials Science, 69:128–136, 2013.
[7] Klaus-Jürgen Bathe. Finite Element Procedures. Klaus-Jürgen Bathe, 2006.
[8] David Borland and Russell M. Taylor II. Rainbow color map (still) considered harmful. IEEE Computer Graphics and Applications, 27(2):14–17, 2007.
[9] Dassault Systèmes. Abaqus Manual, 2017.
[10] Guido Dhondt. The Finite Element Method for Three-Dimensional Thermomechanical Applications. Wiley, 2004.
[11] Ronald Krueger. Virtual crack closure technique: History, approach, and applications. Applied Mechanics Reviews, 57(2):109, 2004.
[12] A. M. Prior. Applications of implicit and explicit finite element techniques to metal forming. Journal of Materials Processing Technology, 45(1):649–656, 1994.
[13] Joachim Rösler, Harald Harders, and Martin Bäker. Mechanical Behaviour of Engineering Materials: Metals, Ceramics, Polymers, and Composites. Springer Science & Business Media, 2007.
[14] Peter Wriggers and Tod A. Laursen. Computational Contact Mechanics. Springer, 2006.



A Possible First Use of CAM/CAD


Norman Sanders
Cambridge Computer Lab Ring, William Gates Building, Cambridge, England
ProjX, Walnut Tree Cottage, Tattingstone Park, Ipswich, Suffolk IP9 2NF, England


Abstract

This paper is a discussion of the early days of CAM-CAD at the Boeing Company, covering the period approximately 1956 to 1965. This period saw probably the first successful industrial application of ideas that were gaining ground during the very early days of the computing era. Although the primary goal of the CAD activity was to find better ways of building the 727 airplane, this activity led quickly to the more general area of computer graphics, leading eventually to today’s picture-dominated use of computers.

Keywords: CAM, CAD, Boeing, 727 airplane, numerical-control.


1. Introduction to Computer-Aided Design and Manufacturing

Some early attempts at CAD and CAM systems occurred in the 1950s and early 1960s. We can trace the beginnings of CAD to the late 1950s, when Dr. Patrick J. Hanratty developed Pronto, the first commercial numerical-control (NC) programming system. In the early 1960s, Ivan Sutherland at MIT's Lincoln Laboratory created Sketchpad, which demonstrated the basic principles and feasibility of computer-aided technical drawing.

There seems to be no generally agreed date or place where Computer-Aided Design and Manufacturing saw the light of day as a practical tool for making things. However, I know of no earlier candidate for this role than Boeing’s 727 aircraft. Certainly the dates given in the current version of Wikipedia are woefully late; ten years or so.

So, this section is a description of what we did at Boeing from about the mid-fifties to the early sixties. It is difficult to specify precisely when this project started – as with most projects. They don’t start, but having started they can become very difficult to finish. But at least we can talk in terms of mini eras, approximate points in time when ideas began to circulate and concrete results to emerge.

Probably the first published ideas for describing physical surfaces mathematically were those in Roy Liming's Practical Analytic Geometry with Applications to Aircraft (Macmillan, 1944). His project was the Mustang fighter. However, Liming was sadly way ahead of his time; there weren't as yet any working computers or ancillary equipment to make use of his ideas. Luckily, we had a copy of the book at Boeing, which got us off to a flying start. We also had a mighty project to try our ideas on – and a team of old B-17/29 engineers who by now were running the company, rash enough to allow us to commit to an as yet unused and therefore unproven technology.

Computer-aided manufacturing (CAM) comprises the use of computer-controlled manufacturing machinery to assist engineers and machinists in manufacturing or prototyping product components, either with or without the assistance of CAD. CAM certainly preceded CAD and played a pivotal role in bringing CAD to fruition by acting as a drafting machine in the very early stages. All early CAM parts were made from the engineering drawing. The origins of CAM were so widespread that it is difficult to know whether any one group was aware of another. However, the NC machinery suppliers, Kearney & Trecker etc, certainly knew their customers and would have catalysed their knowing one another, while the Aero-Space industry traditionally collaborated at the technical level however hard they competed in the selling of airplanes.

2. Computer-Aided Manufacturing (CAM) in the Boeing Aerospace Factory in Seattle

(by Ken McKinley)

The world’s first two computers, built in Manchester and Cambridge Universities, began to function as early as 1948 and 1949 respectively, and were set to work to carry out numerical computations to support the solution of scientific problems of a mathematical nature. Little thought, if any, was entertained by the designers of these machines to using them for industrial purposes. However, only seven years later the range of applications had already spread out to supporting industry, and by 1953 Boeing was able to order a range of Numerically-Controlled machine tools, requiring computers to transform tool-makers’ instructions to machine instructions. This is a little remembered fact of the early history of computers, but it was probably the first break of computer application away from the immediate vicinity of the computer room.

The work of designing the software, the task of converting the drawing of a part to be milled to the languages of the machines, was carried out by a team of about fifteen people from Seattle and Wichita under my leadership. It was called the Boeing Parts-Programming system, the precursor to an evolutionary series of Numerical Control languages, including APT – Automatically Programmed Tooling, designed by Professor Doug Ross of MIT. The astounding historical fact here is that this was among the first ever computer compilers. It followed very closely on the heels of the first version of FORTRAN. Indeed it would be very interesting to find out what, if anything preceded it.

As early as it was in the history of the rise of computer languages, members of the team were already aficionados of two rival contenders for the job, FORTRAN on the IBM 704 in Seattle, and COBOL on the 705 in Wichita. This almost inevitably resulted in the creation of two systems (though they appeared identical to the user): Boeing and Waldo, even though ironically neither language was actually used in the implementation. Remember, we were still very early on in the development of computers and no one yet had any monopoly of wisdom in how to do anything.

The actual programming of the Boeing system was carried out in computer machine language rather than either of the higher-level languages, since the latter were aimed at a very different problem area to that of determining the requirements of machine tools.

A part of the training of the implementation team consisted of working with members of the Manufacturing Department, probably one of the first ever interdisciplinary enterprises involving computing. The computer people had to learn the language of the Manufacturing Engineer to describe aluminium parts and the milling machine processes required to produce them. The users of this new language were to be called Parts Programmers (as opposed to computer programmers).

A particularly tough part of the programming effort was to be found in the “post processors”, the detailed instructions output from the computer to the milling machine. To make life interesting there was no standardisation between the available machine tools. Each had a different physical input mechanism; magnetic tape, analog or digital, punched Mylar tape or punched cards. They also had to accommodate differences in the format of each type of data. This required lots of discussion with the machine tool manufacturers - all very typical of a new industry before standards came about.

A memorable sidelight, just to make things even more interesting, was that Boeing had one particular type of machine tool that required analog magnetic tape as input. To produce it the 704 system firstly punched the post processor data into standard cards. These were then sent from the Boeing plant to downtown Seattle for conversion to a magnetic tape, then back to the Boeing Univac 1103A for conversion from magnetic to punched tape, which was in turn sent to Wichita to produce analog magnetic tape. This made the 1103A the world’s largest, most expensive punched tape machine. As a historical footnote, anyone brought up in the world of PCs and electronic data transmission should be aware of what it was like back in the good old days!

Another sidelight was that detecting and correcting parts programming errors was a serious problem, both in time and material. The earliest solution was to do an initial cut on wood or plastic foam, or on suitable machine tools, to replace the cutter with a pen or diamond scribe to ‘draw’ the part. Thus the first ever use of an NC machine tool as a computer-controlled drafting machine, a technique vital later to the advent of Computer-Aided Design.

Meanwhile the U. S. Air Force recognised that the cost and complication of the diverse solutions provided by their many suppliers of Numerical Control equipment was a serious problem. Because of the Air Force’s association with MIT they were aware of the efforts of Professor Doug Ross to develop a standard NC computer language. Ken McKinley, as the Boeing representative, spent two weeks at the first APT (Automatically Programmed Tooling) meeting at MIT in late 1956, with representatives from many other aircraft-related companies, to agree on the basic concepts of a common system where each company would contribute a programmer to the effort for a year. Boeing committed to support mainly the ‘post processor’ area. Henry Pinter, one of their post-processor experts, was sent to San Diego for a year, where the joint effort was based. As usually happened in those pioneering days it took more like 18 months to complete the project. After that we had to implement APT in our environment at Seattle.

Concurrently with the implementation we had to sell ourselves and the users on the new system. It was a tough sell, believe me, as Norm Sanders was to discover later over at the Airplane Division. Our own system was working well after overcoming the many challenges of this new technology, which we called NC. The users of our system were not anxious to change to an unknown new language that was more complex. But upper management recognized the need to change, not least because of an important factor, the imminence of another neophytic technology called Master Dimensions.

3. Computer-Aided Design (CAD) in the Boeing Airplane Division in Renton

(by Norman Sanders)

The year was 1959. I had just joined Boeing in Renton, Washington, at a time when engineering design drawings the world over were made by hand, and had been since the beginning of time; the definition of every motorcar, aircraft, ship and mousetrap consisted of lines drawn on paper, often accompanied by mathematical calculations where necessary and possible. What is more, all animated cartoons were drawn by hand. At that time, it would have been unbelievable that what was going on in the aircraft industry would have had any effect on The Walt Disney Company or the emergence of the computer games industry. Nevertheless, it did. Hence, this is a strange fact of history that needs a bit of telling.

I was very fortunate to find myself working at Boeing during the years following the successful introduction of its 707 aircraft into the world’s airlines. It exactly coincided with the explosive spread of large computers into the industrial world. A desperate need existed for computer power and a computer manufacturer with the capacity to satisfy that need. The first two computers actually to work started productive life in 1948 and 1949; these were at the universities of Manchester and Cambridge in England. The Boeing 707 started flying five years after that, and by 1958, it was in airline service. The stage was set for the global cheap travel revolution. This took everybody by surprise, not least Boeing. However, it was not long before the company needed a shorter-takeoff airplane, namely the 727, a replacement for the Douglas DC-3. In time, Boeing developed a smaller 737, and a large capacity airplane – the 747. All this meant vast amounts of computing and as the engineers got more accustomed to using the computer there was no end to their appetite.

And it should perhaps be added that computers in those days bore little superficial similarity to today’s computers; there were certainly no screens or keyboards! Though the actual computing went at electronic speeds, the input-output was mechanical - punched cards, magnetic tape and printed paper. In the 1950s, the computer processor consisted of vacuum tubes, the memory of ferrite core, while the large-scale data storage consisted of magnetic tape drives. We had a great day if the computer system didn’t fail during a 24 hour run; the electrical and electronic components were very fragile.

We would spend an entire day preparing for a night run on the computer. The run would take a few minutes and we would spend the next day wading through reams of paper printout in search of something, sometimes searching for clues to the mistakes we had made. We produced masses of paper. You would not dare not print for fear of letting a vital number escape. An early solution to this was faster printers. About 1960 Boeing provided me with an Anelex printer. It could print one thousand lines a minute! Very soon, of course, we had a row of Anelex printers, wall to wall, as Boeing never bought one of anything. The timber needed to feed our computer printers was incalculable.

4. The Emergence of Computer Plots

With that amount of printing going on it occurred to me to ask the consumers of printout what they did with it all. One of the most frequent answers was that they plotted it. There were cases of engineers spending three months drawing curves resulting from a single night’s computer run. A flash of almost heresy then struck my digital mind. Was it possible that we could program a digital computer to draw (continuous) lines? In the computing trenches at Boeing we were not aware of the experimentation occurring at research labs in other places. Luckily, we were very fortunate at that time to have a Swiss engineer in our computer methods group who could both install hardware and write software for it, digital and analog alike. His name was Art Dietrich. I asked Art about what was to me the unaskable; to my surprise Art thought it was possible. So off he went in search of a piece of hardware that we could somehow connect to our computer that could draw lines on paper.

Art found two companies that made analog plotters that might be adaptable. One company was Electro Instruments in San Diego and the other was Electronic Associates in Long Branch, New Jersey. After yo-yoing back and forth, we chose the Electronic Associates machine. The machine could draw lines on paper 30x30 inches at about twenty inches per second. It was fast! But as yet it hadn’t been attached to a computer anywhere. It was also accurate enough for most purposes. To my knowledge, this was the first time anyone had put a plotter in the computer room and produced output directly in the form of lines. It could have happened elsewhere, though I was certainly not aware of it at the time. There was no software, of course, so I had to write it myself. The first machine ran off cards punched as output from the user programs, and I wrote a series of programs: Plot1, Plot2 etc. Encouraged by the possibility of selling another machine or two around the world, the supplier built a faster one running off magnetic tape, so I had to write a new series of programs: Tplot1, Tplot2, etc. (T for tape). In addition, the supplier bought the software from us - Boeing’s first software sale!

While all this was going on we were pioneering something else. We called it Master Dimensions. Indeed, we pioneered many computing ideas during the 1960s. At that time Boeing was probably one of the leading users of computing worldwide and it seemed that almost every program we wrote was a brave new adventure. Although North American had defined mathematically the major external surfaces of the wartime Mustang P-51 fighter, it could not use computers to do the mathematics or the construction because there were no computers. An account of this truly epochal work appears in Roy Liming’s book.

By the time the 727 project was started in 1960, however, we were able to tie the computer to the manufacturing process and actually define the airplane using the computer. We computed the definition of the outer surface of the 727 and stored it inside the computer, making all recourse to the definition via a computer run, as opposed to an engineer looking at drawings using a magnifying glass. This was truly an industrial revolution.

Indeed, when I look back on the history of industrial computing as it stood fifty years ago I cringe with fear. It should never have been allowed to happen, but it did. And the reason why it did was because we had the right man, Grant W. Erwin Jr, in the right place, and he was the only man on this planet who could have done it. Grant was a superb leader – as opposed to manager – and he knew his stuff like no other. He knew the mathematics, Numerical Analysis, and where it didn’t exist he created new methods. He was loved by his team; they would work all hours and weekends without a quibble whenever he asked them to do so. He was an elegant writer and inspiring teacher. He knew what everyone was doing; he held the plan in his head. If any single person can be regarded as the inventor of CAD it was Grant. Very sadly he died, at the age of 94, just as the ink of this chapter was drying.

When the Master Dimensions group first wrote the programs, all we could do was print numbers and draw plots on 30x30 inch paper with our novel plotter. Mind-blowing as this might have been, it did not do the whole job: it did not draw full-scale, highly accurate engineering lines. Computers could now draw, but they could not draw large pictures or accurate ones – or so we thought.

5. But CAM to the Rescue!

Now there seems to be a widely-held belief that computer-aided design (CAD) preceded computer-aided manufacturing (CAM). All mention of the topic carries the label CAD-CAM rather than the reverse, as though CAD led CAM. However, this was not the case, as comes out clearly in Ken McKinley’s section above. Since both started in the 1956-1960 period, it seems a bit late in the day now to raise an old discussion. However, there may be a few people around still with the interest and the memory to try to get the story right. The following is the Boeing version, at least, as remembered by some long retired participants.

5.1 Numerical Control Systems

The Boeing Aerospace division began to equip its factory about 1956 with NC machinery. There were several suppliers and control systems, among them Kearney & Trecker, Stromberg-Carlson and Thompson Ramo Wooldridge (TRW). Boeing used them for the production of somewhat complicated parts in aluminium, the programming being carried out by specially trained programmers. I hasten to say that these were not computer programmers; they were highly experienced machinists known as parts programmers. Their use of computers was simply to convert an engineering drawing into a series of simple steps required to make the part described. The language they used was similar in principle to basic computer languages in that it required a problem to be analyzed down to a series of simple steps; however, the similarity stopped right there. An NC language needs commands such as select tool, move tool to point (x,y), lower tool, turn on coolant. The process required a deep knowledge of cutting metal; it did not need to know about memory allocation or floating point.

It is important to recognize that individual initiative from below very much characterized the early history of computing - much more than standard top-down managerial decisions. Indeed, it took an unconscionable amount of time before the computing bill reached a level of managerial attention. It should not have been the cost but the value of computing that brought management to the punch. But it wasn’t. I think the reason for that was that we computer folk were not particularly adept at explaining to anyone beyond our own circles what it was that we were doing. We were a corporate ecological intrusion which took some years to adjust to.

5.2 Information Consolidation at Boeing

It happened that computing at Boeing started twice, once in engineering and once in finance. My guess is that neither group was particularly aware of the other at the start. It was not until 1965 or so, after a period of conflict, that Boeing amalgamated the two areas, the catalyst being the advent of the IBM 360 system that enabled both types of computing to cohabit the same hardware. The irony here was that the manufacturing area derived the earliest company tangible benefits from computing, but did not have their own computing organization; they commissioned their programs to be written by the engineering or finance departments, depending more or less on personal contacts out in the corridor.

As Ken McKinley describes above, in the factory itself there were four different control media; punched Mylar tape, 80-column punched cards, analog magnetic tape and digital magnetic tape. It was rather like biological life after the Cambrian Explosion of 570 million years ago – on a slightly smaller scale. Notwithstanding, it worked! Much investment had gone into it. By 1960, NC was a part of life in the Boeing factory and many other American factories. Manufacturing management was quite happy with the way things were and they were certainly not looking for any more innovation. ‘Leave us alone and let’s get the job done’ was their very understandable attitude. Nevertheless, modernisation was afoot, and they embraced it.

The 1950s was a period of explosive computer experimentation and development. In just one decade, we went from 1K to 32K memory, from no storage backup at all to multiple drives, each handling a 2,400-foot magnetic tape, and from binary programming to Fortran 1 and COBOL. At MIT, Professor Doug Ross, learning from the experience of the earlier NC languages, produced a definition for the Automatically Programmed Tooling (APT) language, the intention being to find a modern replacement for the already archaic languages that proliferated the 1950s landscape. How fast things were beginning to move suddenly, though it didn’t seem that way at the time.

5.3 New Beginnings

Since MIT had not actually implemented APT, the somewhat loose airframe manufacturers’ computer association got together to write an APT compiler for the IBM 7090 computers in 1961. Each company sent a single programmer to Convair in San Diego and it took about a year to do the job, including the user documentation. This was almost a miracle, and was largely due to Professor Ross’s well-thought through specification.

When our representative, Henry Pinter, returned from San Diego, I assumed the factory would jump on APT, but they didn’t. At the Thursday morning interdepartmental meetings, whenever I said, “APT is up and running folks, let’s start using it”, Don King from Manufacturing would say, “but APT don’t cut no chips”. (That’s how we talked up there in the Pacific Northwest.) He was dead against these inter-company initiatives; he daren’t commit the company to anything we didn’t have full control over. However, eventually I heard him talking. The Aerospace Division (Ed Carlberg and Ken McKinley) were testing the APT compiler but only up to the point of a printout; no chips were being cut because Aerospace did not have a project at that time. So I asked them to make me a few small parts and some chips swept up from the floor, which they kindly did. I secreted the parts in my bag and had my secretary tape the chips to a piece of cardboard labeled ‘First ever parts cut by APT’. At the end of the meeting someone brought up the question of APT. ‘APT don’t cut no chips’ came the cry, at which point I pulled out my bag from under the table and handed out the parts for inspection. Not a word was spoken - King’s last stand. (That was how we used to make decisions in those days.)

These things happened in parallel with Grant Erwin’s development of the 727-CAD system. In addition, one of the facilities of even the first version of APT was to accept interpolated data points from CAD which made it possible to tie the one system in with the other in what must have been the first ever CAM-CAD system. When I look back on this feature alone nearly fifty years later I find it nothing short of miraculous, thanks to Doug Ross’s deep understanding of what the manufacturing world would be needing. Each recourse to the surface definition was made in response to a request from the Engineering Department, and each numerical cut was given a running Master Dimensions Identifier (MDI) number. This was not today’s CAM-CAD system in action; again, no screen, no light pen, no electronic drawing. Far from it; but it worked! In the early 1960s the system was a step beyond anything that anyone else seemed to be doing - you have to start somewhere in life.

6. Developing Accurate Lines

An irony of history was that the first mechanical movements carried out by computers were not a simple matter of drawing lines; they were complicated endeavors of cutting metal. The computer-controlled equipment consisted of vast multi-ton machines spraying aluminum chips in all directions. The breakthrough was to tame the machines down from three dimensions to two, which happened in the following extraordinary way. It is perhaps one of the strangest events in the history of computing and computer graphics, though I don’t suppose anyone has ever published this story. Most engineers know about CAD; however, I do not suppose anyone outside Boeing knows how it came about.

6.1 So, from CAM to CAD

Back to square one for a moment. As soon as we got the plotter up and running, Art Dietrich showed some sample plots to the Boeing drafting department management. Was the plotting accuracy good enough for drafting purposes? The answer - a resounding No! The decision was that Boeing would continue to draft by hand until the day someone could demonstrate something that was superior to what we were able to produce. That was the challenge. However, how could we meet that challenge? Boeing would not commit money to acquiring a drafting machine (which did not exist anyway) without first subjecting its output to intense scrutiny. Additionally, no machine tool company would invest in such an expensive piece of new equipment without an order or at least a modicum of serious interest. How do you cut this Gordian knot?

In short, at that time computers could master-mind the cutting of metal with great accuracy using three-dimensional milling machines. Ironically, however, they could not draw lines on paper accurately enough for design purposes; they could do the tough job but not the easy one.

However, one day there came a blinding light from heaven. If you can cut in three dimensions, you can certainly scratch in two. Don’t do it on paper; do it on aluminium. It had the simplicity of the paper clip! Why hadn’t we thought of that before? We simply replaced the cutter head of the milling machine with a tiny diamond scribe (a sort of diamond pen) and drew lines on sheets of aluminium. Hey presto! The computer had drawn the world’s first accurate lines. This was done in 1961.

The next step was to prove to the 727 aircraft project manager that the definition that we had of the airplane was accurate, and that our programs worked. To prove it they gave us the definition of the 707, an aircraft they knew intimately, and told us to make nineteen random drawings (canted cuts) of the wing using this new idea. This we did. We trucked the inscribed sheets of aluminium from the factory to the engineering building and for a month or so engineers on their hands and knees examined the lines with microscopes. The Computer Department held its breath. To our knowledge this had never happened before. Ever! Anywhere! We ourselves could not be certain that the lines the diamond had scribed would match accurately enough the lines drawn years earlier by hand for the 707. At the end of the exercise, however, industrial history emerged at a come-to-God meeting. In a crowded theatre the chief engineer stood on his feet and said simply that the design lines that the computer had produced had been under the microscope for several weeks and were the most accurate lines ever drawn - by anybody, anywhere, at any time. We were overjoyed and the decision was made to build the 727 with the computer. That is the closest I believe anyone ever came to the birth of Computer-Aided Design. We called it Design Automation. Later, someone changed the name. I do not know who it was, but it would be fascinating to meet that person.

6.2 CAM-CAD Takes to the Air

Here are pictures of the first known application of CAM-CAD. The first picture is that of the prototype of the 727. Here you can clearly see the centre engine inlet just ahead of the tail plane. Seen from the front it is elliptical, as can be seen from the following sequence of manufacturing stages:- (Images of the manufacturing stages of the 727 engine inlet are shown here)

6.3 An Unanticipated Extra Benefit

One of the immediate, though unanticipated, benefits of CAD was transferring detailed design to subcontractors. Because of our limited manufacturing capacity, we subcontracted a lot of parts, including the rear engine nacelles (the covers) to the Rohr Aircraft Company of Chula Vista in California. When their team came up to Seattle to acquire the drawings, we instead handed them boxes of data in punched card form. We also showed them how to write the programs and feed their NC machinery. Their team leader, Nils Olestein, could not believe it. He had dreamed of the idea but he never thought he would ever see it in his lifetime: accuracy in a cardboard box! Remember that in those days we did not have email or the ability to send data in the form of electronic files.

6.4 Dynamic Changes

The cultural change to Boeing due to the new CAD systems was profound. Later on we acquired a number of drafting machines from the Gerber Company, who now knew that there was to be a market in computer-controlled drafting, and the traditional acres of drafting tables began slowly to disappear. Hand drafting had been a profession since time immemorial. Suddenly its existence was threatened, and after a number of years, it no longer existed. That also goes for architecture and almost any activity involving drawing.

Shortly afterwards, as the idea caught on, people started writing CAD systems which they marketed widely throughout the manufacturing industry as well as in architecture. Eventually our early programs vanished from the scene after being used on the 737 and 747, to be replaced by standard CAD systems marketed by specialist companies. I suppose, though, that even today’s Boeing engineers are unaware of what we did in the early 1960s; generally, corporations are not noted for their memory.

Once the possibility of drawing with the computer became known, the idea took hold all over the place. One of the most fascinating areas was to make movie frames. We already had flight simulation; Boeing ‘flew’ the Douglas DC-8 before Douglas had finished building it. We could actually experience the airplane from within. We did this with analog computers rather than digital. Now, with digital computers, we could look at an airplane from the outside. From drawing aircraft one could very easily draw other things such as motorcars and animated cartoons. At Boeing we established a Computer Graphics Department around 1962 and by 1965 they were making movies by computer. (I have a video tape made from Boeing’s first ever 16mm movie if anyone’s interested.) Although slow and simple by today’s standards, it had become an established activity. The rest is part of the explosive story of computing, leading up to today’s marvels such as computer games, Windows interfaces, computer processing of film and all the other wonders of modern life that people take for granted. From non-existent to all-pervading within a lifetime!

7. The Cosmic Dice

Part of the excitement of this computer revolution that we have brought about in these sixty years was the unexpected benefits. To be honest, a lot of what we did, especially in the early days, was pure serendipity; it looked like a good idea at the time but there was no way we could properly justify it. I think had we had to undertake a solid financial analysis most of the projects would never have got off the ground and the computer industry would not have got anywhere near today’s levels of technical sophistication or profitability. Some of the real payoffs have been a result of the cosmic dice throwing us a seven. This happened twice with the first 727 alone.

The 727 rolled out in November, 1962, on time and within budget, and flew in April, 1963. The 727 project team were, of course, dead scared that it wouldn’t. But the irony is that it would not have happened had we not used CAD. During the early period, before building the first full-scale mockup, as the computer programs were being integrated, we had a problem fitting the wing to the wing-shaped hole in the body; the wing-body join. The programmer responsible for that part of the body program was yet another Swiss by name Raoul Etter. He had what appeared to be a deep bug in his program and spent a month trying to find it. As all good programmers do, he assumed that it was his program that was at fault. But in a moment of utter despair, as life was beginning to disappear down a deep black hole, he went cap in hand to the wing project to own up. “I just can’t get the wing data to match the body data, and time is no longer on my side.” “Show us your wing data. Hey where did you get this stuff?” “From the body project.” “But they’ve given you old data; you’ve been trying to fit an old wing onto a new body.” (The best time to make a design change is before you’ve actually built the thing!) An hour later life was restored and the 727 became a single numerical entity. But how would this have been caught had we not gone numerical? I asked the project. At full-scale mockup stage, they said. In addition to the serious delay what would the remake have cost? In the region of a million dollars. Stick that in your project analysis!

The second occasion was just days prior to roll-out. The 727 has leading-edge flaps, but at installation they were found not to fit. New ones had to be produced overnight, again with the right data. But thanks to the NC machinery we managed it. Don’t hang out the flags before you’ve swept up the final chip.

8. A Fascinating Irony

This discussion is about using the computer to make better pictures of other things. At no time did any of us have the idea of using pictures to improve the way we ran computers. This had to wait for Xerox PARC, a decade or so later, to throw away our punched cards and rub our noses in a colossal missed opportunity. I suppose our only defence is that we were being paid to build airplanes, not computers.

9. Conclusion

In summary, CAM came into existence during the late 1950s, catalyzing the advent of CAD in the early 1960s. This mathematical definition of line drawing by computers then radiated out in three principal directions: (a) highly accurate engineering lines and surfaces, (b) faster and more accurate scientific plotting and (c) very high-speed animation. Indeed, the world of today’s computer user consists largely of pictures; the interface is a screen of pictures, and a large part of technology lessons at school uses computer graphics. And we must remember that the computers at that time were minuscule compared to today’s PCs in terms of memory and processing speed. We’ve come a long way from that 727 wing design.



Analysis Origins - Fluent

This article chronicles the origins of Fluent, a pioneering Computational Fluid Dynamics (CFD) code in the 1980s that became the dominant market leader by the late 90s and is today part of ANSYS Inc., one of the leading simulation software providers for engineering.

“CHAM showed the world that fluid dynamics problems could be solved on a computer. Fluent, on the other hand, proved that engineers could use this software to solve real world problems.” Attributed to Brian Spalding

Many of today’s leading software companies emerged from the vision of a single pioneer. Fluent, on the other hand, grew out of the contributions of multiple personalities. The earliest was Hasan Ferit Boysan who came to Sheffield University in the United Kingdom in 1975 for graduate work in fluid mechanics, which at this time was almost universally performed with hand calculations. Boysan met Ali Turan, another student from Turkey, who was working with the Cora3 code, one of the earliest CFD codes developed by Professor Brian Spalding of Imperial College, London to model combustion in a dump combustor. As with the other CFD codes available at this time, users created an input deck of punch cards for Cora3. Errors in the deck were often discovered only after the solver crashed. Turan asked Boysan to help use Cora3 to solve a problem he was working on for his PhD thesis. Progress was slow because every time the researchers changed the geometry or boundary conditions, they had to manually recode the input deck. It was a painful experience, but they achieved enough results for Turan to complete his thesis. Boysan went back to Turkey in 1976 with a reputation of being able to get results from a CFD code.

In 1979, Jim Swithenbank, at the time Professor of Chemical Engineering at the University of Sheffield, invited Boysan back to Sheffield to help him develop a code capable of interactively defining geometry and boundary conditions for a specific problem involving cyclone separators. The resulting software was developed with a student, Bill Ayers, as part of his final year research project and was published in the Transactions of the Institute of Chemical Engineers. With the permission of the authors, the editor of the publication added a note that readers could contact the authors to obtain a copy of the source code. Swithenbank and Boysan were surprised to receive several hundred requests for the code, alerting them to the commercial potential of an interactive CFD code.

 


Figure 1: Painting by Sheffield artist Joe Scarborough, showing locations from Fluent’s UK history.

“The picture is specific to what was the Fluent Europe entity, by local artist, Joe Scarborough, commissioned in 2000 when the company moved to its new premises next to Sheffield Airport (there was an airport) and tracking the history from Sheffield University.

The Sheffield University building on Mappin Street (top left) was where the early version of Fluent was developed. Next to that (narrow red building) is the original office on West Street when Fluent Europe was opened. It was in a few rooms above a book shop, where Ferit Boysan and Bill Ayers worked alongside a very large computer (physically, although not necessarily in terms of computational capacity). The supertram system was installed on West Street when the office was there, creating huge disruption to the center of Sheffield. The advertising hoarding for Rolls-Royce is a nod to them being the largest customer at that time.

Then on the right are the gardens and rear of Holmwood House, the building on Cortworth Road where Fluent Europe moved to in the 90s during expansion. The gardens of Holmwood House show families enjoying picnics - at that time there were a lot of people in the company starting to have families and the summer BBQ was typically a party in the garden. I haven't been able to find out anything about the greenhouse. When Holmwood House was sold, it was bought by one of the band members of the one-time popular rock combo Def Leppard.

Shown next to Holmwood House is the building at Sheffield Airport Business Park (the 'Airport' has subsequently been dropped from the name). Rather ironically the build of the new offices was delayed by delivery of the steel work – which came from Holland – not Sheffield.

The significance of the flags outside the new unit are the Turkish flag reflecting Ferit Boysan’s origins, the Union Jack obviously indicates UK input and the stars & stripes reflect the American ownership – originally Creare. Not sure about the European flag but maybe there was a contribution to the cost of the new building from the EU?

In the distance top left, beyond the somewhat displaced ocean, there is a reference to Ferit's Turkish background and on the right is the Lebanon office in New Hampshire.

Towards the bottom on the left, you can see a few local details: Jessop's Hospital (just captured at the far-left) showing an expectant couple (again a reference to the number of young families) and the Red Deer pub that was a likely source of inspiration for those at Sheffield University due to its location next to Mappin Street and later a place that Fluent staff frequented. The football pitch is either capturing the 5-a-side team (that played late 90s to early 00s) or a local acknowledgement to Hallam FC - the oldest ground in the world.

In the foreground people are shown enjoying outdoor pursuits in the Peak District (cycle/ climb/ walk) - a common interest for many staff.

We're not sure about the sports cars but suspect one (maybe both?) was Ferit's.

There is a possibility that Ferit and his wife as well as Jim Swithenbank and his wife are shown somewhere too. We suspect Joan Swithenbank is standing at the back doorway of Holmwood House.”

Innovative Code made CFD Faster, More Accessible

Boysan and Ayers, a Sheffield graduate student, wrote a general-purpose version of this software that represented a major departure from the CFD codes of that era by featuring an interactive interface that enabled users to graphically change the geometry and boundary conditions and observe the resulting effects. The software also stepped the user through pre-processing, solving and post-processing. Called Tempest, the software could solve a 400-node geometry on the university’s Perkin Elmer 3205 computer that filled a room despite having only 1 megabyte of random-access memory.

Ayers showed the code to Combustion Engineering and Battelle Laboratories in the United States and both companies bought the source code for a few thousand dollars. Swithenbank and Boysan met with the Sheffield University finance director and legal officer and asked if the university wanted to invest in commercializing the code. Searching for an example to explain the business proposition to non-technical people, they pointed to a building in Sheffield which had been designed with a decorative pool. After the building was completed, the flow of air around the building splashed water onto pedestrians and made it necessary to pave over the pool. Swithenbank and Boysan said that Tempest could calculate the flow around the building and predict such problems in advance. The university officials, however, were alarmed to hear this and envisioned the building collapsing and the university being deluged with litigation. They told the would-be entrepreneurs that the university wanted nothing to do with their software.

License with Unlimited Support Puts Fluent on Growth Fast Track

Swithenbank freelanced for a consulting company in New Hampshire called Creare and wrote to the company in late 1982 asking for help in commercializing the software. (Over the years, Creare has proven to be a fertile serial company launcher [1].) The letter was passed to Peter Rundstandler, who circulated it to the partners of the firm to ask if anyone was interested in pursuing. Everyone answered no except for Bart Patel who sent a note back to Rundstandler saying “this could be fun.” Ayers installed the code on Creare’s Digital Equipment Company PDP-11 minicomputer and showed it to Patel who liked what he saw. Boysan and Ayers formed a company called Boteb. Creare purchased commercial rights to the software from Boteb, offering a 10% royalty on sales with $25,000 guaranteed and agreed to hire Boteb for at least 1,000 hours of development and support services. Patel felt that the name Tempest sounded too complicated, so he changed it to Fluent to emphasize its ease of use.

"Several key business decisions by the founders in the early years were instrumental in helping distinguish Fluent in the emerging CFD marketplace," said Dr. S. Subbiah, one of the early employees at Fluent and subsequently a member of its executive team. Other developers of fluid dynamics software at the time sold a perpetual license and charged for support by the hour. Patel felt that users would require a lot of support and if they had to pay by the hour, they would use less support than they needed and end up not achieving results. So, he decided to instead sell an annual license that included unlimited support for a fee close to what competitors were charging for a perpetual license. This was a crucial point of distinction and it played a key role in Fluent’s eventual business success.

Another key decision was to bundle all physics models and solvers in Fluent and to offer it for a single annual lease price. At that time the market leader, CHAM, offered a menu of various modules and solvers, each at its own price. Customers found it hard to determine what solvers and modules they needed upfront, and so found Fluent's single all-in-one price attractive when considering investing in a new technology.

First Fluent Seminar Results in Sales to 80% of Attending Companies

Patel avoided a head-on assault on CHAM by focusing the initial marketing effort on combustion and particularly gas turbines. He asked Boysan and Ayers to add physical models to handle the movement of entrained droplets and particles and to integrate these models into the interactive user interface which set the new software apart.


Figure 2: Cover of invitation to first Fluent seminar

 


Figure 3: Fluent simulation results from a brochure produced in the early 1980s.

To kick off the marketing effort, in 1983 Patel invited Creare’s clients from leading combustion engineering companies to a seminar (Figure 2). A brochure was prepared on Fluent and distributed to prospective attendees (Figure 3). Patel asked attendees to submit test problems in advance and offered to present solutions during the seminar. Realizing that many of the attendees would be engineers who did not have purchasing authority, he created a video describing the capabilities of the software that attendees could show their managers. About 40 people attended the seminar. The attendees purchased $150,000 worth of software during the seminar, and 80% of the attending companies eventually bought Fluent. Patel hired the first employee, Barbara Hutchings, who handled technical support. Hutchings developed a team of customer support engineers that extended the “unlimited technical support” business model to include a sincere focus on doing whatever it took to help the customer become successful with Fluent. This approach helped develop customer loyalty and enabled management to use the support function as the "eyes and ears" of the company to understand where customers were struggling, what projects their management was lining up for them to tackle next, and what competitors were doing.

Fluent used multiple field teams, each focused on selling Fluent to a specific industry segment. These field teams were multi-disciplinary (sales, marketing, customer support and consulting) and they were managed as individual profit centers. "The early team leaders gained a lot of experience in developing a profitable business and many went on to successful management positions at Fluent and elsewhere," Subbiah said.

Boysan and Ayers remained in Sheffield and did most of the development work through the 1980s. When a problem arose, Patel called Boysan, sometimes in the middle of the night, and Boysan got up and tried to figure out what was going wrong. Boteb became the European distributor of Fluent and was eventually purchased by Fluent and re-christened Fluent Europe.

Keith Hanna described his experience in 1989 when he was a young researcher at British Steel PLC in Teesside, and the company was choosing between the then market-leading general purpose CFD code PHOENICS from CHAM, STAR-CD from Computational Dynamics in London, and Fluent. Hanna said in his blog [2]: “Even back then Brian [Spalding] was viewed as a colossus in the CFD field, and the British Steel Fluid Flow experts were in awe of him. However, PHOENICS then had a very complex multi-code structure with “planets” and “satellites”, as Brian called them, and much scripting between codes. FLUENT with its integrated geometry engine, mesher, solver and postprocessor had less technical capabilities overall but even back then such a simple thing as ease-of-use and user experience had a big impact on potential users and FLUENT was chosen. PHOENICS only worked in batch mode at the time, whereas we liked the fact that you could stop the FLUENT solver during an iteration and view the flow field!”

University Relationships Supplied Key Talent

Patel early on began offering the software at low rates to universities and establishing relationships with key professors in order to obtain their assistance in recruiting their best students. He focused on hiring multi-talented individuals who had a passion for CFD. Candidates were subjected to interviews lasting a day or more. Key early hires included Dipankar Choudhury from University of Minnesota, who is currently Vice President of Research for ANSYS Inc., Wayne Smith from Cornell University, who led the development of Fluent’s unstructured mesh CFD solver and went on to become Senior Vice President Software Development at the CD-adapco business unit of Siemens PLM Software, and Zahed Sheikh, from the University of Iowa, who led the Fluent sales force in the early years and later went on to be an executive at Flomerics.

From a Structured to an Unstructured Mesh

The original Fluent code was a Cartesian mesh program, which meant that meshes could not be applied to arbitrary computer-aided design (CAD) geometries but rather had to be stair-stepped in areas where the boundary was curved. After a few abortive efforts, Boysan, Sergio Vasquez and Kanchan Kelkar developed a boundary-fitted version of Fluent in the early 1990s.


Figure 4: Product tree showing ANSYS acquisitions in CFD space

Another limitation of early Fluent was that it utilized structured meshes which were labor intensive in terms of mesh generation and not well suited to modeling complex geometries or capturing flow physics efficiently.

"The Fluent founders made many courageous decisions," Subbiah said. “One that sticks in my mind was the decision not to pursue block-structured meshing. In the early 1990s, Fluent was a single block code while competitors were offering multi-block solutions that offered significantly greater flexibility in meshing. Although there was strong market pressure on management to develop multi-block technology in Fluent, management decided to leapfrog them by investing in automated, unstructured technology - which, at that time, was largely unproven. This calculated risk led to Fluent leading the industry with the first release of automated unstructured meshing."

Another Creare employee, Wayne Smith, received Small Business Innovation Research (SBIR) funding from NASA to develop unstructured-mesh CFD software that could adapt during the solution, for example by increasing mesh density in areas with high gradients. After completing the SBIR, Smith and his team, which included Ken Blake and Chris Morley, transferred to Patel’s group within Creare to work on a commercial version of the new software. The results of their work were released in 1991 as the TGrid tetrahedral mesher and Rampant solver which targeted high Mach number compressible flows in aerospace applications.

With Rampant being limited to a relatively narrow range of problems, the original structured mesh Fluent code remained the flagship application through the early 1990s. Several key advances were made on Rampant between 1991 and 1993, however, that were to prove vital in Fluent’s future growth. These include the introduction of the client-server Cortex architecture, domain-decomposition parallel capability and Joe Maruszewski’s implementation of a pressure-based control-volume finite element method utilizing algebraic multigrid optimized for solving incompressible flow problems. This version was introduced in the market as Fluent/UNS 1.0 in 1994. Later that year, Jayathi Murthy, who at that time led Fluent’s research and development team, and Sanjay Mathur rewrote Fluent/UNS over a matter of weeks, switching over to a more efficient finite-volume formulation well-suited for building in methods and physics for the majority of CFD applications. This version of the code was released as Fluent/UNS 3.2 in 1995. Murthy went on to an illustrious academic career and is currently Dean of Engineering at the University of California Los Angeles. Rampant and Fluent/UNS continued as concurrent codes for several years and were combined into a single code with release 5 in 1998, at which point the original structured mesh Fluent code was discontinued. With Fluent 5, all the major ingredients of a potent CFD modeling capability were together in a single offering - unstructured mesh methods for the entire range of flow regimes, and a client-server architecture with an easy-to-use interactive user interface well suited to run on parallel supercomputers or clusters of the new generation of workstations made by Silicon Graphics, Sun and Hewlett-Packard.

Aavid Thermalloy Provides Capital for Growth

Fluent spun off from Creare in 1991, with Creare retaining a substantial minority stake. Patel talked to several investment banks seeking funding to buy out Creare’s stake and take the company to the next level, but despite Fluent generating considerable cash flow, the banks were not interested in financing a leveraged buyout. Patel happened to play golf during this period with the CEO of Aavid Thermalloy, a New Hampshire company producing heat sinks for electronics applications. Aavid was also looking for capital, so Fluent merged with Aavid and the combined company issued an Initial Public Offering (IPO) in January 1996 that made it possible to buy out the Creare shareholders and fund the expansion of the business.

The IPO also made it possible to issue stock options to employees and acquire several competitive CFD software companies to obtain their technology and engineering teams. These included Fluid Dynamics International and its FIDAP general purpose CFD software, and Polyflow S.A., whose Polyflow CFD software was designed to handle laminar viscoelastic flows. In January 2000, Willis Stein & Partners, a private equity investment firm, acquired the Aavid Thermal Technologies business unit. Meanwhile, Fluent sales grew from $8 million in 1995 to $100 million in 2004.

Acquisition by ANSYS, Inc.

In May 2006, Fluent Inc. was acquired by ANSYS, Inc., a computer-aided engineering software company that up to that point had specialized in solid mechanics simulation. A product tree showing ANSYS CFD acquisitions is shown in Figure 4. When ANSYS acquired Fluent, the two companies were roughly equal in revenues, and as a result former Fluent employees had considerable influence on the operation of the combined company.

Brian Spalding, universally considered the father of the CFD industry, perhaps best defined Fluent’s influence on it. Spalding once said that the company he founded, CHAM, showed the world that fluid dynamics problems could be solved on a computer. He said that Fluent, on the other hand, proved that engineers could use this software to solve real-world problems. His statement reaffirms the success of Fluent in achieving its initial goals of providing interactive software combined with strong technical support that enabled engineers to, quoting the original 1983 Fluent brochure, “apply state-of-the-art computer simulation methods to analyze and solve practical design problems without costly, time-consuming computer programming.”



Analysis Origins - Altair OptiStruct™

In the late 1980s and early 1990s, the engineering research community began to buzz about a new concept called topology optimization. The homogenization method for topology and shape optimization was first introduced by Martin Bendsoe and Noboru Kikuchi in 1988, and it had the entire field talking.

The concept of mathematical optimization allows a user to define an objective and constraints, then let the computer run the study in a loop to find the ideal answer. Topology optimization algorithms follow this same process while optimizing the shape and topology of a structure. The resultant outputs produce parts that meet size and functional requirements within the allotted design space while using the minimum amount of material.
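
To make the "objective plus constraints, iterate to an answer" idea concrete, here is a minimal sketch using simple sizing variables rather than a full topology parameterization: SciPy searches for the lightest rectangular cross-section of a cantilever that still satisfies stress and deflection limits. The load, material values, limits, and bounds are all assumed for illustration.

```python
# Minimal sketch of design optimization: minimize mass subject to constraints,
# with the computer iterating on the design variables. Sizing variables are
# used here for brevity; topology optimization applies the same idea to
# thousands of element densities. All numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

P = 1000.0            # tip load [N]
L = 1.0               # cantilever length [m]
E = 210e9             # Young's modulus [Pa]
rho = 7850.0          # density [kg/m^3]
sigma_allow = 150e6   # allowable bending stress [Pa]
delta_allow = 5e-3    # allowable tip deflection [m]

def mass(x):
    b, h = x
    return rho * b * h * L

def stress_margin(x):        # >= 0 when the stress limit is satisfied
    b, h = x
    return sigma_allow - 6.0 * P * L / (b * h**2)

def deflection_margin(x):    # >= 0 when the stiffness limit is satisfied
    b, h = x
    return delta_allow - 4.0 * P * L**3 / (E * b * h**3)

result = minimize(
    mass, x0=[0.05, 0.05],
    bounds=[(0.005, 0.2), (0.005, 0.2)],
    constraints=[{"type": "ineq", "fun": stress_margin},
                 {"type": "ineq", "fun": deflection_margin}],
    method="SLSQP",
)
b_opt, h_opt = result.x
print(f"lightest feasible section: b = {b_opt*1e3:.1f} mm, h = {h_opt*1e3:.1f} mm, "
      f"mass = {mass(result.x):.2f} kg")
```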

Jeff Brennan, Chief Product Officer, Altair 365, was in the thick of this budding movement, first learning how to apply optimization to engineering problems early in his college career.

“Everyone in my engineering class was pulling an all-nighter to solve a mechanical dynamics problem with Fortran,” said Brennan.

“As I’m leaving to go to the computer lab, my roommate was coming into the dorm with a six-pack. I said ‘Come on Tom. You’re not going to spend the rest of the night in the computer lab like the rest of us?’ He said, ‘No man, I’ll leave that to you guys.’

I found out he had written the algorithm and made the position of each of the statements in the Fortran program a variable in an optimization loop. He hit ‘go’ on that optimization program and it would reorient the various operations until it came to the fastest answer. And, he sat that night and drank all six of those beers. Somebody figured it out. Somebody knows how to use numerical algorithms to do less work and get the best answer. And I thought ‘Bing!’ There's something here.”

Brennan went on to the University of Michigan, studying under Dr. Noboru Kikuchi, one of the fathers of topology optimization, at his mechanical engineering laboratory and applied mechanics group. One of the first applications of topology optimization came in the study of biomimicry for osteoporosis and prosthesis research. They studied the factors that encouraged bone growth in healthy individuals, with the goal of replicating the same excitations in elderly patients as well as encouraging bone growth around an implant in the body to create a stronger and more natural bond.

The underlying theory: the body grows bone in an optimal manner. It’s an extension of Wolff's law, which posits that the body responds to mechanical stresses by increasing bone material and density where it is needed to support the stress.

After graduating, Brennan interviewed with Altair, then a small engineering consulting company that had started seeing success with its HyperMesh pre-processing tool. In his interview, Brennan showed Altair the topology optimization work he was applying as a student. Seeing an opportunity to commercialize topology optimization, Altair hired Jeff Brennan.

“We were very excited by the technology,” said Jim Scapa, Founder, Chairman and CEO of Altair. “We ended up coming to an agreement with Professor Kikuchi and his partner Alejandro Dias to resell their software into the commercial market. We really believed in it and wanted to take it further.”

Landing OptiStruct’s First Customer

Brennan became OptiStruct’s first evangelist in 1992, traveling around the country, and later the world, to pitch this new technology.

“The amount of rejection that I got was tough,” said Brennan. “I was glad I was young when I was starting to sell OptiStruct because it didn't fit into people’s processes, even if they understood the concept behind it. There was no place to put it.”

Despite the early hurdles, OptiStruct started to gain recognition in the engineering community and stack up its first major wins. By 1994, OptiStruct had been recognized by Industry Week magazine as its ‘Technology of the Year’.

Also in 1994, Altair approached General Motors and pitched OptiStruct. Dr. Keith Meintjes, currently the practice manager for simulation and analysis at CIMdata, Inc., was the simulation manager of GM Powertrain at the time.

“Jeff showed up at GM Powertrain and somehow sold me the first-ever copy of OptiStruct,” said Meintjes. “He didn’t tell me until years and years later that they’d never made a commercial sale of the software before. I can pride myself on being customer 001.

The software at that point was very difficult to use, but in the hands of talented engineers, you could essentially make magic.”

Back to the Drawing Board

As Altair built on its success with OptiStruct, Scapa sought to negotiate a deal with Kikuchi and Dias, the original authors of the software, to purchase the technology. However, an eleventh-hour disagreement over ownership of the intellectual property put the deal in jeopardy. Scapa decided that without the IP, he was going to take the lab code and have Altair develop a new commercial software from the ground up.

“Dias took a hard stand, and so did I,” said Scapa. “Dias didn’t think I could do it… I didn’t really know if we could do it either, but I figured, with enough perseverance, we could.”

It was a bold step, which came with additional pressure. Altair now had a portfolio of OptiStruct customers, all waiting for the next version of their software, which it now had to essentially rebuild from scratch.

“I hired Harold Thomas and Yaw-Kang (YK) Shyy to lead the development of the next generation of OptiStruct. Somehow, they were able to put together the new software in about six months. They were brilliant, brilliant coders. They became our master coders.”

Soon after the hiring of Thomas and Shyy, Altair sought out another innovator in the research community, Ming Zhou, to help take OptiStruct to the next level.

Scapa said, “Harold came to me and said, ‘Pretty much the best optimization guy on the planet is coming out with his PhD. All the most innovative papers are coming from this guy.’ Ming joining was key because he brought this creativity to everything we were doing.”

“Ming is really a pioneer of the technology,” said Uwe Schramm, chief technology officer at Altair. “He’s been there since the whole thing started in academia. Ming, along with Harold and YK, they were the architects of this approach. They are pioneers of commercial topology optimization.”

A Modern Finite Element Analysis Solver Takes Shape

In 1997 and 1998, the development team was implementing key functionality that would shape and enable OptiStruct’s future growth. They shifted the code from the homogenization method to the density methodology and began focusing on adding finite element analysis (FEA) solver functionality and manufacturing constraints.
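
For readers unfamiliar with the terminology, the density approach treats each element's relative density as a design variable and penalizes intermediate values so the optimizer converges on crisp solid-or-void layouts. The snippet below shows the textbook SIMP-style interpolation commonly used to describe this idea; the penalty power and modulus are assumed values, and this is a sketch of the published method rather than of OptiStruct's internal implementation.

```python
# Textbook sketch of the density (SIMP-style) interpolation, not OptiStruct's
# internal code: element stiffness is scaled by rho**p with p > 1, which makes
# intermediate densities structurally inefficient and pushes the optimizer
# toward clear solid/void designs.
import numpy as np

E0, p, rho_min = 210e9, 3.0, 1e-3   # solid modulus, penalty power, density floor (assumed)

def penalized_modulus(rho):
    """Young's modulus assigned to elements with relative density rho."""
    rho = np.clip(np.asarray(rho, dtype=float), rho_min, 1.0)
    return rho**p * E0

for r in (0.25, 0.5, 1.0):
    print(f"rho = {r:.2f}: {penalized_modulus(r) / E0:.3f} of solid stiffness "
          f"for {r:.2f} of the mass")
```

At a density of 0.5, for example, an element contributes only about an eighth of the solid stiffness while still costing half the mass, which is why the optimizer abandons intermediate densities.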


Figure 1: Altair co-founder George Christ with Jeff Brennan and Jim Scapa in 1995

“If you want to do optimization, you have to do good analysis,” said Schramm. “Customers wanted to take their more complex models and run them, so we had to add features to keep them happy.”

“OptiStruct really did become both an optimization code and a super-capable solver code,” said Brennan. “Now the third generation of that solver code is handling nonlinear problems like a champ: material nonlinearity, geometric nonlinearity, all kinds of gap constraints, contacts, you name it. It's become world class in terms of the greatest implicit linear, nonlinear and optimization code on the market.”

Next-Generation Optimization Takes Flight

Having won some key automotive accounts, Altair began to set its sights on other markets, especially aerospace. One of OptiStruct’s most crucial aerospace wins was the Airbus A380 light-weighting project (Figure 2).

“The Airbus structures group was working with our consulting group in the UK and they had a real need,” said Brennan. “The A380 wing structure was way overweight and this baby was not getting off the ground. They had certain manufacturing constraints that they needed. They didn't want to have 13 different wing ribs that went from inboard to outboard with totally different topologies, totally different truss structures. That would be a nightmare for the wiring harness guys.

The OptiStruct development team developed a methodology for pattern recognition and repetition so we could basically come up with a modified solution in which each of the wing rib sets looked similar and had a similar number of holes. That really made the difference in creating a manufacturable, workable solution.

The interaction between software development and the applications was one of the core reasons why OptiStruct was successful early on. That flexibility to take customer requests right as they needed them, and sometimes even overnight, code those things, give them back, and solve the problem. That cemented Altair’s reputation as not just an innovative company but a company that delivers.”

 


Figure 2: Simulation of the Airbus A380 wing ribs

 


Figure 3: The APWorks 3D printed aluminium bike

Early adopters were winning with OptiStruct. Altair was stacking up OptiStruct wins at large OEMs, but small customers were also starting to see its potential as a competitive differentiator.

“I'll never forget going out near my hometown, Kalamazoo, Michigan, to a company called Nelson Metals,” said Brennan. “They were a small metal casting house. They had a real competitive advantage over the other casting companies because they’d show up at concept meetings and they would say, ‘We can make a part 20 percent cheaper, with better performance, 30 percent lighter,’ and people would be like ‘How did you do this?’

Because of that open mind, these early adopters had a huge advantage. I'd say some of them still have an advantage because they started adopting optimization earlier than everyone else and they've probably outpaced their competition since.”

Crossing the Chasm

Recognizing OptiStruct’s commercial potential, Jim Scapa and Altair’s management team realized the next challenge: how to scale the success.

“There’s a book called Crossing the Chasm by Geoffrey Moore [1], which fundamentally talks about technologies and product life cycles,” said Scapa. “There’s an initial upward curve for early market technologies, where innovators and early adopters start to discover the product, but then a chasm divides these early technologies from reaching a mainstream market. It’s very difficult for true technology companies to make this leap and cross into a more mature and stable market, despite many products’ early successes. Most products end up falling into this chasm.

I saw OptiStruct follow this curve and make the leap across the chasm. It actually took a long time to establish OptiStruct among the product development community at large. We had to develop all these manufacturing constraints so that you could create parts that were truly useful. And, we had to convince engineering organizations that this made sense to use in the design phase.”

In 1999, OptiStruct became part of Altair’s newly minted units-based licensing model, in which customers purchase a pool of recyclable tokens that can be applied to any of Altair’s CAE applications. There was early reluctance, especially from the sales and finance teams, who feared a loss in OptiStruct revenue.

“Our primary product was HyperMesh, but we clearly saw OptiStruct as a large opportunity,” said Scapa. “My problem at that time was ‘How do I take this new product and get a lot more traction around it?’ And that's where I first came up with this idea of Altair’s units-based licensing model. With the unit model, I could basically stop selling HyperMesh and instead start selling units, allowing customers to immediately have access to OptiStruct. That way I eliminated the friction of having to sell a second product into the account and it started to grow from there. It was huge because OptiStruct may have fallen into the chasm, quite frankly, if I hadn’t done this. In the long run, I think the unit model is a big reason why we’ve been as successful as we are as a company.”

The Additive Manufacturing Revolution

“The technology was always super-cool,” said Scapa. “But our competitors stayed away from it for a long time. The competition woke up when additive manufacturing became interesting. There was all this hype around additive and topology optimization, and the solutions that we were offering with OptiStruct were perfect for additive because you could make parts with internal voids and make all kinds of shapes that you couldn't make with castings or stamping or other traditional manufacturing processes.”

With the rise of additive manufacturing, competition began to join the marketplace. With that came challenges to OptiStruct’s topology optimization throne, but also exciting opportunities.

“There's still so much out there,” said Brennan. “The ability to tie a digital twin directly to 3D printing and be able to adjust shapes quickly, evaluate them quickly. Part replacement could be a major opportunity with topology optimized structures for things that are aging. You might not ever have to have a CAD file for a part that needs to be replaced from 1960, you’d just need to know its position, its volume, its load cases, and you could quickly generate the ideal shape, have it printed and you'll have a replacement overnight from someplace like Amazon. That's just Jetsons stuff.”

Legacy and Future of OptiStruct

Worldwide, Altair now has more than 3,000 companies using OptiStruct.

“We've been on the leading edge of where simulation is going, promoting this idea of ‘simulation-driven design’ and the possibilities offered by pervasive optimization,” said Scapa. “OptiStruct has had such a widespread impact over the last 25 years. I think it helped launch this whole light-weighting movement that's going on.”

“OptiStruct has changed companies and certainly has put them into a much more competitive position, which is tough to do in the global marketplace now,” said Brennan.

Although OptiStruct has reached many milestones that would have seemed impossible back in 1993, the Altair team continues to look forward at the potential evolution of the tool.

“Today, a lot of people are excited about lattices, new materials, mixed materials, and mixed topology with shape, which we were always trying to do but are now doing in a more seamless, integrated process,” said Scapa. “We're also adding a tremendous amount of nonlinear simulation to OptiStruct. We have displaced a lot of the traditional linear analysis solutions with OptiStruct over the years and we’re now starting to do the same with the nonlinear analysis solutions. A lot of the reason for that is because of how integrated optimization is within all our solutions.”

Machine learning and the Internet of Things (IoT) also present exciting opportunities in combination with advanced structural analysis and optimization.

“We are beginning many, many projects where we're trying to apply AI and machine learning mixed with our shape and topology optimization algorithms to be able to solve more complex, highly nonlinear problems,” said Scapa.

“The ability to take data from postproduction in the field about how it's really operating and use that to inform the digital twin has great potential,” said Brennan. “The idea that your topology or your design shape is a living thing and it gets feedback from its environment through this digital twinning is mind blowing to me, but I hope it happens.”

From humble beginnings, OptiStruct pioneered the concept of topology optimization in the commercial market. Across virtually every industry around the globe, it has not only enabled weight savings and performance improvements that were previously thought to be impossible, but fundamentally changed how designers and engineers approach product design. No one knows exactly what the future will hold, but OptiStruct is striving to adapt to advances in manufacturing, computing, and data intelligence in order to continue its legacy of innovation.



The MSC History

Source: International Directory of Company Histories, Vol. 25. St. James Press, 1999.

Company Perspectives:

Simply, we enable our customers to design and build better products faster. We do this with computer aided engineering software and services. We minimize the need for costly prototypes and time-consuming tests with computer simulations of product performance and the manufacturing process.

Company History:

The world's largest provider of mechanical computer-aided engineering (MCAE), The MacNeal-Schwendler Corporation develops software that simulates the functionality of complex engineering designs. With the software developed by MacNeal-Schwendler, engineers gained the ability to determine design flaws before embarking on the final stages of development. The company began providing such capability to engineers in 1963, when Richard MacNeal and Robert Schwendler developed design verification solutions for the aerospace industry. MacNeal-Schwendler's signature software, MSC/NASTRAN, was introduced in 1971 and was joined in the 1990s by MSC/PATRAN, a pre- and post-processor for engineering analysis. With operations in 38 countries and 50 direct sales offices, MacNeal-Schwendler marketed its products to aerospace, automotive, industrial, computer, and electronics manufacturers. The company also offered products such as geometric modeling and automatic meshing tools, which were used by engineers during a product's development stage, and a product that solved problems involving high-speed impact.

The Early Years

MacNeal-Schwendler's most senior employee during the late 1990s was the company's founder, Richard MacNeal, whose penchant for self-deprecation masked one of the pioneering minds in computer software development. Born in Warsaw, Indiana, MacNeal moved with his family at age three to Philadelphia, where the well-to-do MacNeals enrolled their son in Penn Charter, a 300-year-old private school run by the Quakers. A student at Penn Charter through the 12th grade, MacNeal distinguished himself in his studies, but by his own admission he was a failure in nearly every other pursuit. He characterized his social development as "retarded." He described himself as a "nerd." His hours away from the classroom were painfully frustrating. "I was the worst football player you have ever seen," MacNeal remarked to a reporter from Forbes magazine. "I was 17 and hadn't kissed a girl."

MacNeal's embarrassments outside the classroom did not disappear after he left Penn Charter for Harvard University. Feeling a need to distance himself from his youth, MacNeal applied to the revered Ivy League university because "Harvard was the best, and I wanted to get away from home," but the experiences in Philadelphia repeated themselves at Harvard. Intent on studying engineering, MacNeal's extracurricular foibles surfaced again, but this time in the classroom. "I had the highest glass-breaking bill in the history of the school," he remembered, referring to the glass beakers used in experiments. But as he had at Penn Charter, MacNeal excelled in the classroom, completing his studies at Harvard in three years. After graduation, he joined the Army at the height of World War II, ending up at the predecessor facility to Edwards Air Force Base, where his duties included calculating the trajectories of bombs. At the end of the war he took the $250 soldiers received to pay their way back home and used the stipend to settle in southern California, where he enrolled at California Institute of Technology (Cal Tech) to continue his studies. At Cal Tech, MacNeal earned a Ph.D. in electrical engineering and stayed at the university for a short time to teach and work for a company formed by Cal Tech students and faculty called Computer Engineering Associates.

Once MacNeal cut his ties to Cal Tech during his late 30s, he began working for Lockheed Corp., the giant aerospace company. His stint at Lockheed was brief, lasting only a year: "I was impatient," MacNeal recollected, "I just didn't fit into a big company. On my exit interview, they asked me if I wanted to know what my supervisor wrote about me. He said I was intelligent and talented and stuff like that, and then he said I was lacking in tact." After he left Lockheed, MacNeal teamed up with Robert Schwendler, and the pair formed MacNeal-Schwendler. The formation of his own company marked a turning point in MacNeal's life, a signal transition that he needed to make. "Here I was," he recalled, thinking back to the months prior to MacNeal-Schwendler's formation, "39 years old and hadn't really done anything. I wasn't satisfied."

MacNeal-Schwendler Gets Under Way in the 1960s

For MacNeal, the awkward genius unable to fit in wherever he went, a career as an entrepreneur at last provided his niche in life. The years of searching were over and personal satisfaction was at hand. Working with an initial investment of $18,000, MacNeal and his partner developed their first program in 1963, an innovation called SADSAM. SADSAM was an acronym for Structural Analysis by Digital Simulation of Analog Methods, a product they designed for the aerospace industry. As with all their programs, MacNeal and Schwendler built products that helped manufacturers build their own products faster, better, and cheaper. The concept, a revolutionary idea in the early 1960s that would become commonplace by the end of the century, centered on simulating the effectiveness of a product well before the particular product reached the final stages of its development. By gaining the ability to determine fundamental flaws related to stress, vibration, and other conditions early in the design process, manufacturers involved in complex engineering businesses could make necessary adjustments before their products reached final development stages. The result was tremendous cost savings to the manufacturer, cash that otherwise would have been earmarked for the construction of prototypes and the innumerable revisions to a product's original design. It was a method for foreseeing the problems inherent in creating sophisticated machinery that would become known as computer-aided engineering (CAE). With the introduction of SADSAM in 1963, MacNeal and Schwendler had positioned themselves as important innovators in the promising science of CAE, a field that would become an integral partner in the growth of high-technology industries during the latter half of the 20th century.

With the power of hindsight, MacNeal's and Schwendler's position in 1963 appeared poised on the brink of resounding success, but from the pair's perspective after the introduction of SADSAM, there was much to be concerned about. The partners had a product, they believed, that would serve as a valuable aid for the aerospace clientele they courted, but unless their prospective customers believed in the value of SADSAM, MacNeal and Schwendler would have little cause for celebration. Fortunately, the pivotal struggle to secure contracts received a boon when the company participated in a project sponsored by the National Aeronautic and Space Administration (NASA) in 1965. The NASA project called for the development of a unified approach to computerized structural analysis, resulting in the creation of NASTRAN, or NASA Structural Analysis Program. NASTRAN represented one of the first efforts to consolidate structural mechanics into a single computer program. It was a signal step forward. As the leading edge of CAE development moved forward, MacNeal and Schwendler were again positioned at the forefront. Their biggest contribution to the evolution of CAE occurred in the wake of the 1965 NASA-sponsored project; its arrival secured a lasting future for The MacNeal-Schwendler Corporation.

MSC/NASTRAN Debuts in 1971

Six years after MacNeal-Schwendler participated in the NASA-sponsored NASTRAN project, the company developed its proprietary version of NASTRAN, a program dubbed MSC/NASTRAN. The 1971 introduction of MSC/NASTRAN marked a momentous leap forward for the company, giving it a powerful and consistent revenue-generating engine to propel itself in the decades ahead. (The market strength of MSC/NASTRAN was represented by its longevity as a revenue producer--by 1995, MSC/NASTRAN was in its 68th release.) From the business attracted by MSC/NASTRAN, the small entrepreneurial partnership formed by MacNeal-Schwendler developed into a genuine corporation, its structure and geographic range of operations blossoming as the 1970s progressed. Two years after it began marketing MSC/NASTRAN, the company had the financial wherewithal to make its first foray into foreign markets, establishing an office in Munich, Germany in 1973. Three years after entering Europe, MacNeal-Schwendler turned its sights eastward and opened an office in Tokyo, Japan. Highlighted by these important first steps overseas and underpinned by steady and meaningful growth on the domestic front, the company matured during the 1970s, a decade that witnessed the legitimization of MacNeal-Schwendler as a recognized world leader in CAE software. The decade also brought its own significant misfortune: the death of one of its founders. In 1979 Schwendler died unexpectedly, a traumatic experience for MacNeal that left him in charge of achieving the dream the two partners had envisioned. Although the loss of Schwendler represented a severe personal blow to MacNeal, the company pressed forward with little hesitation as the 1980s began, inching toward greatness in the CAE field.

In 1983 MacNeal-Schwendler made its debut as a publicly traded company, completing an initial public offering that raised proceeds for future expansion and established the company's ticker symbol on the over-the-counter exchange. A year later, the company's stock migrated to the American Stock Exchange, a move toward greater prominence that befitted the stature of a fast-growing, industry innovator. The company's MSC/NASTRAN software by this point had grown into an industry standard. In the engineering departments of many of the leading high-technology corporations--companies that involved the aerospace, automotive, heavy machinery, and shipbuilding industries--MSC/NASTRAN was relied heavily upon to provide detailed simulation and design verification data. MacNeal-Schwendler's mainstay product had become instrumental to the success achieved by those companies undertaking projects facing complex engineering challenges. As the 1980s progressed, however, the company had to contend with its own complex challenges. Its industry was evolving rapidly, changing the dynamics of its business environment.

Technological breakthroughs during the 1980s engendered tremendous advances in interactive computer graphics and lowered the cost of producing powerful engineering workstations. These advances broadened MacNeal-Schwendler's customer base, increasing the size of the company's potential consumer community. At the same time, the technological breakthroughs also caused the general computer industry to grow explosively, which, in turn, attracted a legion of new competitors in the industry, stiffening competition. This new surge of competition coupled with defense industry cuts toward the end of the decade conspired against MacNeal-Schwendler, tarnishing the company's long record of consistent success. The problems intensified as the 1990s began, prompting one analyst to remark, "They have an excellent product and they're an extremely well-run company, but they have not anticipated the kind of competition they've gotten."

Although MacNeal-Schwendler was by no means in a precarious financial situation as the 1990s began, the superficial damage stemming from an overdependence on military spending did provoke changes at the company. Focus shifted to the commercial market, leading the company to create testing software for automobile and satellite makers, as well as for large manufacturing companies. Toward this end, the company entered into a joint marketing and development partnership with Aries Technology in 1992. The following year (its 30th year of business) the company allied itself with Aries Technology to the fullest extent by acquiring its joint venture partner and thereby widening the design engineer audience the company targeted. The company's anniversary year also marked the establishment of a subsidiary office in Moscow, where MacNeal-Schwendler hoped to take part in the massive development under way in Eastern Europe.

As the company's 30th anniversary celebrations were winding down, so too was the development work for a significant product introduction, MSC/NASTRAN for Windows. The software, introduced in 1994, made MacNeal-Schwendler's signature code--ranked as the world's most popular finite element analysis software--available to personal computer users. This achievement was followed by another encouraging development, an acquisition that swelled the company's stature to unrivaled size. In 1994 MacNeal-Schwendler acquired PDA Engineering, a producer of pre- and post-processing software. Once completed, the absorption of PDA Engineering into the company's fold made MacNeal-Schwendler the largest single provider of products to the mechanical CAE market in the world.

MacNeal-Schwendler achieved global dominance just as its pioneering leader was beginning to step aside and make room for a new generation of leadership. MacNeal, in his early 70s during the mid-1990s, vacated the presidential post in 1995, setting the stage for the appointment of Tim Curry to the office of president and chief operations officer. The following year, when the company expanded its operations in Latin America by opening a new office in Brazil, Curry was named chief executive officer, as MacNeal reduced his work schedule to three-and-a-half days a week. Under Curry's stewardship, the company moved toward the late 1990s, its position as an industry pioneer and a market leader secured by three decades of MacNeal's leadership. Hope for the future rested on the shoulders of the company's new leader and new, innovative software solutions for the engineering challenges of the 21st century.

Principal Subsidiaries: MacNeal-Schwendler GmbH.

Principal Operating Units: Aerospace; Automotive; OEM; Growth Industries.

Further Reading:

  • "Competition, Defense Industry Cuts Hurt Price of MacNeal-Schwendler Corp. Stock," Los Angeles Business Journal, June 4, 1990, p. 32.
  • Deady, Tim, "Revenge of the Nerd," Los Angeles Business Journal, April 29, 1996, p. 13.
  • "MacNeal-Schwendler Corp.," Machine Design, November 26, 1992, p. 103.
  • Teague, Paul E., "Pioneer in Engineering Analysis: Dick MacNeal Conceived One of the Most Widely Used Finite Element Analysis Codes in the World," Design News, July 10, 1995, p.50.




Analysis Origins - MSC and NASTRAN

by Dennis Nagy, Principal, BeyondCAE

Birth, Growth, Success, Stagnation, and Re-Birth

MSC’s 55-year history has been fascinating, exciting, and disappointing, all at the same time. This article attempts to highlight all those facets from the personal viewpoint of someone who was a part of it for 12 years (1985-97) and has followed MSC’s course closely since then. Like the famous cartoon of five blind men trying to describe an elephant from where each is touching it, this overview is a mixture of facts and personal opinions, and I encourage any reader to contact me at dennis.nagy@beyondcae.com with comments, more detailed back-stories, or different viewpoints.

Contrary to what many people might assume today, MSC was not founded to develop a general-purpose finite element software system (NASTRAN), much less to be a global pioneer and, for considerable time, the market leader in the business of Computer-Aided Engineering (CAE). As the main co-founder, the late Richard H. “Dick” MacNeal once said in my presence (my recollection): “I started MSC because I didn’t like working for someone else in a large company.” Of course, the origins and motivations were more complex than that, but it’s a good summary of how MSC started.


Figure 1: Richard H. MacNeal 1924-2018

MSC (called The MacNeal-Schwendler Corporation at that time) was incorporated on February 1, 1963, by Dick MacNeal and Robert “Bob” Schwendler. Caleb “Mack” McCormick was with them from the early years but didn’t get his name in the company name. Dick MacNeal was the theoretical/scientific driver for starting MSC as a company to develop analog computer technology for, among other things, helicopter rotor dynamics, and to do consulting related to that. These three pioneers and others, however, saw the coming demise of analog computing and the digital computer revolution on the near horizon. Through various aerospace industry and NASA contacts, they were able to team with Martin Baltimore and Computer Sciences Corporation (CSC) to submit a winning bid in late 1965-early 1966 to NASA to develop a very comprehensive (by mid-1960s standards) finite element analysis (FEA) software system. Hence NASTRAN (NASA Structural Analysis) was born.

The intervening years from the mid-1960s to early 1980s were filled with fascinating developments and details. Unfortunately, that was before I was part of MSC but, fortunately, Dick MacNeal chose to write an excellent book entitled “The MacNeal-Schwendler Corporation: The First Twenty Years” upon his retirement from full-time company management and on the occasion of MSC’s going public (on the American Stock Exchange) on May 5, 1983. Dick’s excellent writing style, coupled with his unequalled wealth of detailed personal recollections, produced a unique book which I cannot even begin to adequately summarize here. There is, to my knowledge, no other summary of MSC’s history shorter than that book but more detailed than this present article, so any reader interested in those fascinating years must read the book, which I strongly encourage you to try to do (if you can find a copy in print—I have one signed by Dick MacNeal on April 13, 1988, and there are some others floating around among former MSC employees). There is also one other brief historical summary, vintage late 1990s and viewing MSC’s future prospects from that vantage point [1]. I will jump quickly to the period 1985-2018 for the remainder of this article, after briefly covering the history of MSC’s legal dispute with NASA, how MSC’s pricing model sustained financial growth up to and well beyond 1985 and how MSC entered the Supercomputing era with MSC/NASTRAN on a Cray.


Figure 2: Dr. Richard MacNeal (right) and Robert Schwendler in the early 1960s working on an analog computer.

The birth of a proprietary version, called MSC/NASTRAN, occurred in 1971 because NASA had not provided for any ongoing user support, error correction, and further enhancements for the public-domain (NASA) version of NASTRAN. Since, on day one, the actual MSC/NASTRAN software was identical to what anyone could obtain from the NASA COSMIC distribution center at the University of Georgia for a modest fee of $1,750 (mostly tape-copying and shipping fee), MSC could not charge any significant price for a perpetual (paid-up) license of the MSC/NASTRAN software itself. Hence the idea of charging a monthly fee (which became known as a lease fee) for “hot line” telephone support, ongoing bug-fixing and enhancements, better documentation, and user training courses was used to start generating revenue for MSC. It proved successful enough that MSC grew well throughout the 1970s without any need for outside investment. It is worth noting that MSC held onto primarily a lease-based pricing model for decades while other software companies needed to resort to frontloaded paid-up licensing to generate enough early cashflow for growth. Much more recently, major successful software companies have introduced “subscription pricing” to replace perpetual licenses, as if this were a revolutionary new idea.

There was one legal “hiccup” with NASA during the period 1980-1982 that is worth brief mention here, although covered by a whole chapter (Chapter Ten: Dispute With NASA) in Dick MacNeal’s book. Someone (later known to MSC and Dick MacNeal but not mentioned directly by name in the book) had formally complained to NASA about MSC’s “misuse of NASA data” without prior NASA approval, and of course NASA had gotten its legal department involved. If you love to read about gory legal details, read Chapter Ten. Here I will just note that a long list of painful (for MSC) iterations and corresponding legal fees ensued (even the noted Pulitzer Prize-winning U.S. political columnist of that time, Jack Anderson, wrote about the MSC-NASA issue in February 1981). The need to reach a negotiated settlement was quickly clear to both sides.

Many of the later iterations were about what MSC should pay to NASA/the U.S. Government to settle the dispute. Initial (posturing) amounts ranged from a few thousand U.S. dollars up to US$900,000. After a lot of careful and admirably detailed calculations on both sides, MSC and NASA arrived, on October 22, 1982, at a deal of US$125,000 and the right for MSC to use the entire software it had developed, and the name MSC/NASTRAN, from that point forward. (Joe Gloudeman later referred to this settlement as a “Quit Claim Deed”.) Even at that time, and much more obvious now in retrospect, that amount was seen by many as “peanuts” compared to what MSC/NASTRAN could (and did) produce in revenues and profits over the ensuing 3+ decades. A related issue to this apparently settled dispute did, however, appear in 2001 and is summarized further below.


Figure 3: Robert Schwendler, with MSC’s first superminicomputer (VAX 11/780), late 1978, shortly before his untimely passing

In addition to entering the super-minicomputer market with an MSC/NASTRAN version on a Digital Equipment Corporation (DEC) 11/780 in late 1978, MSC’s most significant “porting” of MSC/NASTRAN was to a new class of vector-architecture supercomputers via the creation of a Cray version in 1979-81. At that time of commercial supercomputing infancy, MSC and Cray estimated that, at most, approximately 25 Cray computers would ever be sold globally by Cray for running MSC/NASTRAN. That original estimate was already exceeded by at least a factor of 10 during the next 10-15 years! The cost of Cray computers meant that only the largest MSC customers in the Aerospace and Automotive industries, as well as some NASA data centers, could afford them. MSC’s software pricing was, at that time, linked to measured usage of compute cycles (and the hardware cost of such cycles) in order for MSC to get a piece of the economic benefit from large mainframe- and supercomputer-based “time-sharing” data centers. Ironically, the Space Shuttle Challenger disaster in 1986 eventually triggered the slow unraveling of MSC during the 1990s. After the Challenger explosion, NASA’s subsequent intense MSC/NASTRAN usage (on Cray-based data centers) to help understand what caused the failure led to a significant revenue windfall for MSC due to such usage-based pricing over the next 4 years.

The Beginnings of Stagnation and Two Decades of Turmoil

That windfall, in turn, gave MSC a somewhat false sense of financial confidence (and the stock market a false sense of MSC’s growth potential) which partially led MSC to start acquiring small complementary CAE software companies with offerings in subject areas where MSC/NASTRAN was difficult to enhance. These acquisitions were not easy to absorb culturally/organizationally and hence were a drag on MSC’s bottom line. When the windfall stopped blowing in 1989, MSC’s bottom line started to suffer measurably, leading (with some other extenuating circumstances) to MSC’s Board removing Dr. Joseph “Joe” Gloudeman as CEO in September 1991 (he had been running MSC since 1984 when Dick MacNeal had retired) and bringing back Dick MacNeal to run the company. At age 67, Dick MacNeal viewed his return as a temporary move and proceeded (with the Board) to look actively for a new CEO.

In the mid-1980s MSC recognized that the 1960s-vintage foundation architecture (the Executive System [ES], for handling all module and data control) of MSC/NASTRAN was no longer an adequate base for longer-term enhancements and extensions. A complete re-write of the ES, performed during 1985-87, took longer and cost more than originally estimated but was successful enough to make MSC/NASTRAN a more viable development platform for decades to come.

As affordable graphics hardware and workstations/servers emerged, market interest broadened to require more powerful pre/post-processing tools (interactive preparation of finite element models, meshing, and visual portrayal of results). MSC struggled with its own in-house developments in this area (MSC/Grasp and then MSC/XL) from 1983-1991 before deciding to acquire Massachusetts-based Aries (privately held developer of FEA graphics and related solid modelling software) in 1993 for ~US$15M and then (because it suddenly became available) publicly-traded PDA Engineering (PDA/Patran) for ~US$60M in mid-1994. PDA/Patran and SDRC’s I-DEAS were at that time the co-leaders in FEA pre/post-processing software and both were used heavily by MSC/NASTRAN users. PDA/Patran, originally written in the late 1970s, was completely rewritten in 1992-94, ironically a reason for PDA’s earnings and stock-price collapse in early 1994, leading to their “availability” to be rescued by the Dick MacNeal-driven deep-pockets purchase.

The major acquisition of PDA in 1994 coincided with a CEO “musical chairs” that MSC experienced going forward, which in turn played an important role in MSC’s stagnation and slow unraveling. After Dick MacNeal’s 2-year return as CEO (1991-93), MSC chose Larry McArthur (CEO of acquired Aries) to succeed him in 1993, removed McArthur in 1994 (at the urging of still Board Chairman MacNeal) after the PDA acquisition (which in hindsight made the Aries acquisition appear superfluous, to this author at least, except for the money spent on acquiring it) and then chose Tom Curry (PDA’s President) to run MSC in 1996. Recognizing the precariousness of his new role, Tom, with the aid of the former PDA and now MSC Board member Frank Perna, convinced the Board to remove Dick MacNeal from the Chairmanship and the Board in 1997. This didn’t lead to the stability Tom had hoped for, though, and at the end of 1998 Frank Perna replaced Tom Curry as CEO.

A Period of Difficult Acquisitions

The Frank Perna era (1999-2004) was characterized by several ambitious acquisitions and (generally unsuccessful) attempts to broaden MSC beyond a pure FEA-based MCAE leader. In 1999, MSC acquired the two small (but annoying due to their low pricing) other NASTRAN vendors, who had built their NASTRAN products from the same public-domain source code MSC had developed for NASA. These acquisitions were to come back and bite MSC financially a few years later (see below).

Privately-held MARC, a co-leader (along with HKS’s Abaqus) in the nonlinear FEA market niche, was acquired in 1999. MSC had struggled throughout the 1990s to implement nonlinear capability in MSC/NASTRAN, which was basically still a linear-FEA architected software system. Nonlinear FEA was becoming more important to MSC’s automotive and aerospace customers because the rapid decrease in price-performance ratios for relevant processing hardware made useful nonlinear simulations much more practical in industry. At that point, in addition to MARC and ABAQUS, Ansys also had better nonlinear simulation capability than MSC/NASTRAN. Although a wise acquisition in principle, the other management/strategy problems MSC had from that point onward produced the result that MSC.MARC (somewhere in there MSC switched from / to . in its product naming convention) was only really integrated with MSC.NASTRAN 13 years later.

Attempts to broaden MSC led to the acquisition of AES (a primarily U.S. active DS/CATIA and bundled workstation hardware re-seller/implementation consulting house) for ~$100M in 2001. Many factors (too speculative to discuss here in writing) led to MSC’s eventual sell-off of the remains of AES a few years later for a small fraction of what they paid. In the same timeframe, MSC also attempted to configure and sell compute servers based on an MSC-created version of Linux, another venture with much greater start-up costs than eventual revenue, which was abandoned within 2 years.

Perhaps the best (from a complementary technology and strong, overlapping market presence viewpoint) acquisition MSC made during that period was the purchase of publicly-traded MDI (multibody dynamics [MBD] simulation called ADAMS) in 2002 for slightly more than US$120M. MDI, at ~US$60M in revenue, was by far the leader in the MBD segment, but a rumored acquisition battle with Ansys may have (again after the fact speculation) caused MSC to pay too much for MDI. The combined organization after that acquisition was ~1700 people. In hindsight, MSC.ADAMS and its dominant MBD market segment share has played a key role in retaining the loyalty of some of the same automotive customers battered by the gyrations of MSC in the “NASTRAN” market. MSC.NASTRAN was used for “bread and butter” linear FEA simulation of large automotive components and sub-systems and many customers realized that they could reliably do the same simulations with other “lesser and cheaper” NASTRANs or even with non-NASTRAN FEA (mainly Ansys, Abaqus and Altair’s Radioss more recently).

MSC was clearly stagnating and slowly unraveling in the early 2000s. To make matters worse, in 2001 the Federal Trade Commission (FTC) hit MSC with a claim (called an “administrative challenge”) of monopolistic practices in the “NASTRAN market” for acquiring the two other, much smaller NASTRAN vendors (UAI and CSAR, both Los Angeles-based) back in 1999. The challenge was rumored to have come from a complaint to the FTC by MSC’s largest aerospace customer at the time (over US$4 million in annual lease/usage payments to MSC) related to their fear of monopolistic future pricing. The FTC’s challenge was persistent despite many MCAE experts, including this author, providing testimony to the FTC that there was an FEA market, but no longer any “NASTRAN” market at that time due to the continued growth of both Ansys and HKS/Abaqus. MSC ended up spending ~$7M in legal services and still lost the challenge in 2002 and were forced to “divest of their monopoly” by offering for sale (at a “reasonable” price) a clone of MSC.NASTRAN source code and full documentation, plus allowing the chosen acquirer to hire a sufficient number of key developers to be competitive with MSC going forward. Unigraphics (UG, a major MCAD vendor with its NX-CAD environment) ended up buying the clone, productizing it as NX/NASTRAN and marketing it at a lower price than MSC.NASTRAN.

MSC’s Board removed Frank Perna as CEO in 2004 and brought in William “Bill” Weyand, the former SDRC CEO who had successfully sold SDRC to Electronic Data Systems (EDS). Outside rumor was that Bill Weyand would have four years to improve MSC’s stagnant financial performance and find a buyer to take MSC private. MSC continued to be revenue-flat and Bill Weyand left at the end of a four-year term. The MSC Board inserted a temporary placeholder CEO while directly looking for a buyer, which they found (STG: Symphony Technology Group, a private equity firm) in mid-2009 at a purchase price of US$360 M.

The Beginnings of Rebirth

STG subsequently hired CAD Industry veteran Dominic Gallello as CEO, who returned MSC to break-even and embarked upon an ambitious, aggressive development plan starting in 2011 to replace MSC.PATRAN (now 18 years old in its underlying architecture and surviving during a period of dramatic software methods improvement within the industry). MSC.APEX was the result of this ambitious project and seems to be making reasonable headway today.

As is the case with many successful private-equity companies, they have an approximately 5-year ROI horizon. STG attempted to sell MSC in mid-2014 but was unable to find a willing buyer at the price they wanted to achieve. STG attempted again in 2017 and successfully sold MSC to Swedish-based Hexagon AB, a leading global provider of industrial information technology software and services, for US$834 M. Although MSC had made a number of technology-attractive investments and acquisitions in the period 2010-2017, they had insufficient impact on MSC’s top line so that MSC’s revenue had languished in the low US$200 M range for many years prior to this successful acquisition. It is of course too early to tell, but this acquisition has the potential to get MSC back onto a more aggressive growth path with the synergy of Hexagon, a major industrial player instead of a private equity company, as its new parent.

References



Analysis Origins - ABAQUS

by Lynn Manning

In the first of this new regular series on the roots of some of the major players in the analysis world, we take a look at the origins of HKS and ABAQUS, from its beginnings in the 1970s to the present day.

An Englishman, a Swede and an American who grew up in Ecuador met in Providence, Rhode Island—sounds like a humorous opening line, but it’s actually the beginning of the story of ABAQUS software.

David Hibbitt on his Vincent Black Shadow, 1964

In 1968 the Englishman, David Hibbitt, was a couple of years into his Ph.D. in solid mechanics at Brown University in Providence when he switched advisers to work with Pedro Marcal, a young assistant professor who'd arrived from London with two boxes of punch cards containing a version of the SAP finite element program. Pedro was trying to extend the program to model nonlinear problems including plasticity, large motions and deformations; David had recently discovered Fortran programming and found that he liked writing code, as well as the challenge of applying solid mechanics to engineering design.

At the time people had already recognized the potential of finite element methods, and several commercial codes were available. “But the nonlinear area was pretty wide open,” says Hibbitt. “There was plenty of promise for useful application.”

First foray into the nonlinear realm

Hibbitt’s thesis was funded by a U.S. Navy contract that called for the development of a finite element capability to model the multi-pass welding of submarine hulls and predict the loss of performance caused by residual distortion. So he had to develop heat-transfer capabilities that could handle the latent heat effect when molten metal solidifies, use those temperature predictions to model the mechanical response of the structure, including plasticity and creep, throughout the multi-pass welding process, and then do a buckling analysis of the distorted structure.

“That was a tall order,” says Hibbitt. “And well beyond the capabilities of the computers in those days.” Brown University had one IBM 360/50 computer for the whole campus, which meant limited time parcelled out to researchers. Moore’s law tells us that a single cellphone today has the compute power of 33 million such machines. And, as you well know, Abaqus and other codes need far more processing power than a cellphone.
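
One ingredient mentioned above, accounting for the latent heat released as molten metal solidifies, is often handled with an "apparent heat capacity" that is inflated over the freezing range. The toy 1-D explicit finite-difference sketch below illustrates only that idea; the material data, geometry, and numerics are assumptions for the example and are not drawn from the thesis work.

```python
# Toy illustration of handling latent heat in transient conduction via an
# "apparent heat capacity" smeared over the freezing range. A 1-D bar starts
# molten and is chilled at one end. All values are illustrative assumptions.
import numpy as np

length, n = 0.1, 101                 # bar length [m], number of nodes
dx = length / (n - 1)
k, rho, cp = 30.0, 7800.0, 600.0     # conductivity, density, specific heat
latent = 2.7e5                       # latent heat of fusion [J/kg]
T_sol, T_liq = 1450.0, 1500.0        # solidification range [deg C]

def c_apparent(T):
    """Specific heat with latent heat spread over the freezing interval."""
    c = np.full_like(T, cp)
    freezing = (T > T_sol) & (T < T_liq)
    c[freezing] += latent / (T_liq - T_sol)
    return c

T = np.full(n, 1550.0)               # start fully molten
T[0] = 20.0                          # chilled end held at ambient temperature

dt = 0.4 * dx**2 * rho * cp / k      # stable explicit time step (worst case)
steps = 20000
for _ in range(steps):
    c = c_apparent(T)
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * k * lap / (rho * c[1:-1])
    T[0] = 20.0                      # fixed-temperature boundary
    T[-1] = T[-2]                    # insulated far end

print(f"after {steps * dt:.0f} s the mid-length temperature is {T[n // 2]:.1f} C")
```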

As the research progressed, Marcal’s academic group was increasingly receiving calls from various industries asking whether his new code (Marc) could help them. Enough business started coming in that he incorporated Marc Analysis in 1971, with Hibbitt as a minor co-owner and the first full-time employee. As the company grew, Paul Sorensen (the American in our tale, who had come to the U.S. from Ecuador as a teenager) joined the group for a time but then left to do a doctorate in fracture mechanics, after which he began working at General Motors Research Labs in Detroit.

Control Data Corporation (CDC) supplemented the group’s computer resources by letting Marc run on the mainframes in their data centres on a “pay per hour” basis. A CDC support analyst in the data centre in Stockholm, Sweden—Bengt Karlsson—started using the code, and found it intriguing enough that he asked to join the company. He was hired. When Marcal moved to California to take the business in a different direction, Hibbitt and Karlsson stayed in Rhode Island with the finite element group. But they found it difficult to satisfy the needs of engineers with design responsibility who had no time to rebuild or debug what was still really a research code for each application. “We thought the sensible way forward was to develop a robust ‘black box’ tool for engineers needing to do nonlinear calculations to solve industry problems,” says Hibbitt. But Pedro had no interest in such an investment.

Creating an all-new code

The ABAQUS Logo

So David and Bengt decided to try on their own. “Almost everyone whose advice we sought told us that we would fail,” David remembers. “There were already 22 viable FE programs out there, competing for industry business, and even the largest computers were too limited to do nonlinear calculations of practical size.” But they had enough in savings to feed their families and pay their mortgages for a year. And so the ABAQUS software was conceived. There’s a message in the company’s first logo, a stylized abacus calculator: its beads are set to the company’s official launch date of February 1, 1978 (2-1-1978).

A lot of tech startups begin in garages; Hibbitt and Karlsson had the relative luxury of the front parlour of the Englishman’s early 19th century Rhode Island farmhouse, plus the dining room table at which David’s wife Susan paid the bills and did the typing, using a rented IBM Selectric typewriter.

“We were a software company with no software, no computer, no customers or prospective customers, and almost no money,” says David. They first wrote a User’s Manual with the goal of making it easy and intuitive to put a problem definition together, then designed the code’s architecture. Hibbitt often wrote code on airplanes while commuting to teach a graduate course in plasticity as an adjunct professor at the University of Texas in Austin—a job he took to bring in some revenue to the starving company.

In line with their shared philosophy of creating practical code to solve real-world problems, Version 1 was created for a specific client. Through a chance meeting at an ASME conference, Hibbitt made a contact at a laboratory at the Hanford nuclear development site, which needed to design the mechanical restraints for the core of a prototype fast breeder reactor.

“We knew that if we didn’t deliver code to them in three months we wouldn’t get paid,” says Karlsson. “That was a big incentive!” On time, they delivered 15,000 lines of FORTRAN with just four elements—beam, gap (point contact), truss, and SPHEX (to model the deformable section of the fuel rods where they come into contact)—along with modelling of the thermal expansion, creep, and irradiation swelling of the metals in the fuel rods and restraints.

When Paul Sorensen came back to Providence for a Christmas visit with his in-laws, he visited David and Bengt and, over a weekend of discussions with them and (separately) with his wife Joan, chose to join them. So the company became HKS—an amalgam of their three surnames. Paul’s background in finite element simulation of steady-state crack growth was a plus for the work being done for the company’s first customers.

Another early client was Exxon Production Research, which needed a code for offshore piping installations and marine riser analysis; this became the predecessor to the ABAQUS/Aqua capability. “The technical difficulty was the relative slenderness of the 10-inch pipes that were hundreds of feet long,” remembers Hibbitt. “The usual stiffness-based beam elements could not handle this in the context of the large motions involved. So we investigated the use of mixed, ‘hybrid’ formulation elements.” The hybrid beam and drag chain elements that were essential to the success of this kind of analysis are still in use today.
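
As a rough, generic sketch of what a mixed (“hybrid”) formulation means here (this is the textbook idea, not the specific element equations used in the code), a Hellinger–Reissner two-field functional treats the stress, or force-resultant, field as an independent unknown alongside the displacements:

\[
\Pi_{HR}(\mathbf{u},\boldsymbol{\sigma})
  = \int_V \left[ \boldsymbol{\sigma} : \boldsymbol{\varepsilon}(\mathbf{u})
  - \tfrac{1}{2}\,\boldsymbol{\sigma} : \mathbf{C}^{-1} : \boldsymbol{\sigma} \right] dV
  - W_{\mathrm{ext}}(\mathbf{u})
\]

Requiring this functional to be stationary with respect to both fields enforces equilibrium and the constitutive relation weakly, so the axial force in a very slender member can be interpolated directly rather than recovered from nearly inextensible displacement fields, which helps avoid the ill-conditioning that a purely stiffness-based formulation suffers in that regime.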

Customer projects build up the code

ABAQUS’ capabilities continued to grow during the 1980s along with the number of HKS employees. Shell and continuum elements were added, along with simulation of plasticity, dynamics, heat transfer, and more. “Every customer was important to us and their requirements drove our development,” says Karlsson. “But we were always aware of the need to deliver useful capabilities for specific applications while keeping the code general-purpose.”

In those days HKS personnel would physically install their software at a customer’s site whenever a new license was purchased. They’d bring the source code on tape to the customer’s location, compile the program and make it work on that customer’s computer, run all the examples, and then check the printed output against microfiche copies of the results from previous versions of the code.

The advent of the first supercomputers sent punch cards to the history books as Abaqus developed a reputation as a sophisticated scientific and engineering software application that could run on these then-powerful machines. As current simulation users now know, the future would bring an explosion of high-performance computing (HPC) that fed the evolution of each new release of ABAQUS, with models leaping forward in the numbers of elements and degrees of freedom, tremendous speed-up in processing time, and increasingly realistic, 3D visualization.

Back in the 1970s, of course, FEA was still in its infancy by today’s standards. For HKS, the market for robust, “industrial-strength” nonlinear analysis was already there. Nuclear power continued to be an early source of work, with challenges such as nonlinear pipe whip and the development of “elbow” elements to model plasticity and creep in pipe bends in a computationally efficient manner (considered one of the “crown jewels” among ABAQUS’s finite elements). Other pioneering mechanics development work included special elements and material models for soils analysis, and modelling of concrete and rebar for analyses of civil engineering structures and nuclear power-plant containment buildings.

The before and after of FEA processing

As ABAQUS’ capabilities grew, it became clear that the solver couldn’t continue as a standalone product without a preprocessor to formulate the problem and a postprocessor to review the results graphically.

In the earliest days of FEA, engineers had to draw conclusions from their analyses by studying printed tables of their results. Looking towards the future, however, the ABAQUS 4.0 User's Manual suggested that "the presentation of results in graphic or pictorial form provides for greater insight, for most problems, than columns of numbers."

To meet this need for result images, the concept of the “plot output file” was born. With this mechanism, users inserted plot commands requesting contours or displacements at intervals during the analysis, and a plot file was generated and displayed on a Calcomp plotter.

A major graphics breakthrough occurred when Hibbitt had the idea of a “neutral” plot file, which was implemented during an installation at ABAQUS customer Electric Boat. This contained device-independent plot commands that were converted into device-specific commands by drivers. Drawing a simple line involved commands to select a color, position, lower, reposition, and then raise the pen. Image files were then displayed on pen plotters or Tektronix terminals; back then it took half a day for a job of just a few hundred elements to run.
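
As a rough illustration of that device-independent approach (the command names and driver interface below are invented for illustration, not the actual neutral plot file format), a driver layer might be sketched like this:

```python
# Toy "neutral plot file": a list of device-independent commands.
neutral_plot = [
    ("color", 1),        # select pen/colour 1
    ("move", 0.0, 0.0),  # reposition (pen assumed up)
    ("pen_down",),
    ("move", 5.0, 0.0),  # draw a straight line segment
    ("pen_up",),
]

class PenPlotterDriver:
    """Translates neutral commands into device-specific instructions."""
    def emit(self, cmd):
        op, *args = cmd
        if op == "color":
            print(f"SELECT PEN {args[0]}")
        elif op == "move":
            print(f"MOVE TO {args[0]:.2f}, {args[1]:.2f}")
        elif op == "pen_down":
            print("PEN DOWN")
        elif op == "pen_up":
            print("PEN UP")

driver = PenPlotterDriver()
for command in neutral_plot:
    driver.emit(command)
```

A Tektronix driver would translate the same neutral commands into screen vectors instead, which is what makes the plot file device-independent.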

In 1987, ABAQUS/Post was released as a standalone postprocessor offering command-driven input and the satisfaction of on-screen visual feedback.

Developing a preprocessor was a challenge that engaged the ABAQUS team for years. “We wanted a fully interactive code,” says Hibbitt. “We felt that a ‘clean sheet of paper’ approach was the best long-term investment for fully developing our product’s strengths.” The company devoted significant resources to ABAQUS/CAE, which was intended to be the window into the solvers, making it easy to create, manage, and visualize complex simulations and to customize ABAQUS for specific applications.

Explicit, Viewer and GUI customization

At the same time that pre- and post-processing development was underway, considerable effort was also going into a commercial explicit dynamics code, which had to have its own architecture. Remembers Hibbitt, “Our main target for ABAQUS/Explicit back then was crash analysis. But because—unlike some of the competition—we were very careful not to trade quality of mechanics for speed in computing, ABAQUS/Explicit also proved useful for events like drop test simulation, to ensure that a mobile phone could survive being dropped, for example.”

The first official release of ABAQUS/Explicit was hand-delivered to MIT in 1992. Version 0 of ABAQUS/Viewer was released as a standalone product in 1998, and the same features were made available as the Visualization module of ABAQUS/CAE in 1999. Users could now interact with ABAQUS/Viewer through a graphical user interface (GUI), selecting actions and options via icons, menus, and dialog boxes.

Once again, a customer relationship contributed to the rollout of these improved capabilities. In 1996 HKS had entered into an agreement with British Steel to deliver a customized roll pass design system. The project had a profound effect on development activities and was instrumental in driving plans and deliverables for the CAE and Explicit products. It also spurred an integrated approach to development, which shaped both the code architecture (particularly the output database) and the organizational structure, and it was the first project to take advantage of GUI customization.

One of the goals for British Steel was to capture the knowledge of their expert engineers and analysts (some of whom were close to retirement) in an automated system that could be used by designers. GUI customization created applications that provided user interfaces in terms that designers were familiar with, while hiding the details of the analyses that take place behind the scenes. This was an important part of the process automation strategy—to enable customization that allowed less-experienced users to access and apply FEA software.
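
As a rough sketch of that process-automation idea (the function name, parameters, and rules below are hypothetical, not the actual British Steel application), a thin customization layer might expose only the design terms a roll pass designer cares about and generate simplified analysis settings behind the scenes:

```python
# Hypothetical sketch of process automation via customization: the designer
# supplies familiar rolling-mill terms; the analysis setup is generated
# behind the scenes. All names and rules here are invented for illustration.

def build_roll_pass_analysis(stock_diameter_mm: float,
                             target_reduction_pct: float,
                             roll_speed_rpm: float) -> dict:
    """Translate designer-facing inputs into simplified analysis settings."""
    exit_diameter = stock_diameter_mm * (1.0 - target_reduction_pct / 100.0)
    return {
        "geometry": {"entry_diameter_mm": stock_diameter_mm,
                     "exit_diameter_mm": round(exit_diameter, 2)},
        "loading": {"roll_speed_rpm": roll_speed_rpm},
        # Mesh density, material data, and contact definitions would be filled
        # in automatically from rules captured from the expert analysts.
        "mesh": {"element_type": "explicit_brick",
                 "seed_mm": round(stock_diameter_mm / 20.0, 2)},
    }

if __name__ == "__main__":
    print(build_roll_pass_analysis(60.0, 20.0, 90.0))
```

The point of such a layer is exactly what the paragraph above describes: the designer never sees the finite element details, only the vocabulary of the design task.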

After 23 years of leadership, David Hibbitt retired in 2001; Bengt Karlsson and Paul Sorensen followed suit the following year. All three are still living in New England. In November 2002 HKS, Inc. changed its name to ABAQUS, Inc., just before the company’s 25th anniversary. Three years later, ABAQUS, Inc. was acquired by Dassault Systèmes and became the foundation of the SIMULIA brand. Some 350+ SIMULIA engineers and programmers are now based at the brand headquarters in Johnston, Rhode Island, with another 1,350 in 22 countries around the world who serve customers ranging from industry-leading OEMs to tech startups operating solely on the cloud.

Diversifying into Multiphysics

“We used to joke a few years ago that we didn’t really need to develop another finite element, as we already had about 350 of them in the code and didn’t need a 351st,” says Alan Prior, Senior Director, Technical Sales. “But we had to diversify beyond Abaqus [written in lower case letters since 2005] into wider physics, and we had to democratize simulation to a wider range of users because otherwise, we’d be self-limiting.”

Today, forty years of Abaqus FEA technology remain at the core of an expanded cornucopia of leading multiphysics capabilities. Starting with Engineous Software’s Isight in 2008, SIMULIA has added capabilities in optimization, template creation and process automation, CFD, injection molding, fatigue and durability, topology optimization, electromagnetics, vibroacoustics, and more.

“The attraction of Abaqus has brought in all these other products that have been developed with the same sort of passion as the founders of HKS,” says Prior. “We’ve acted as a magnet for other technology companies with a similar philosophy.” That shared enthusiasm for taking on the toughest simulation problems is foundational for every company that joins the SIMULIA family, he says. “But we never lose sight of the ‘why’—we always see the value for a customer, or an industry, or the engineering community as a whole.”

That “why” was reintroduced to David Hibbitt recently when Steve Levine, the leader of SIMULIA’s Living Heart Project, had the opportunity to put 3D glasses on him, so the Abaqus founder could view the company’s flagship 3D model of a beating human heart, complete with blood flow, electrical activation, cardiac cycles, and tissue response on the molecular level.

Hibbitt’s response? “This is a wonderful example of what we hoped for as a result of Dassault Systèmes’ acquisition of Abaqus: the delivery of highly sophisticated mechanics, packaged for use by someone with responsibility for a product or process who knows nothing of the complexities buried within the software. In this case that user is a cardiac surgeon, who now has a tool to plan his work specific to the needs of each patient and so dramatically improve his ability to do good… Clearly, there is plenty still to do to deliver the full value needed for this application. But how wonderful to continue spending your time on such endeavours. We wish you well as you pursue them.”


n5321 | June 15, 2025, 09:01
