Programming as Theory Building

Peter Naur, 1985

Peter Naur’s classic 1985 essay “Programming as Theory Building” argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it’s lossy, so you can’t reconstruct a program from its code.

Introduction

The present discussion is a contribution to the understanding of what programming is. It suggests that programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program and certain other texts.

Some of the background of the views presented here is to be found in certain observations of what actually happens to programs and the teams of programmers dealing with them, particularly in situations arising from unexpected and perhaps erroneous program executions or reactions, and on the occasion of modifications of programs. The difficulty of accommodating such observations in a production view of programming suggests that this view is misleading. The theory building view is presented as an alternative.

A more general background of the presentation is a conviction that it is important to have an appropriate understanding of what programming is. If our understanding is inappropriate we will misunderstand the difficulties that arise in the activity and our attempts to overcome them will give rise to conflicts and frustrations.

In the present discussion some of the crucial background experience will first be outlined. This is followed by an explanation of a theory of what programming is, denoted the Theory Building View. The subsequent sections enter into some of the consequences of the Theory Building View.

Programming and the Programmers’ Knowledge

I shall use the word programming to denote the whole activity of design and implementation of programmed solutions. What I am concerned with is the activity of matching some significant part and aspect of an activity in the real world to the formal symbol manipulation that can be done by a program running on a computer. With such a notion it follows directly that the programming activity I am talking about must include the development in time corresponding to the changes taking place in the real world activity being matched by the program execution, in other words program modifications.

One way of stating the main point I want to make is that programming in this sense primarily must be the programmers’ building up knowledge of a certain kind, knowledge taken to be basically the programmers’ immediate possession, any documentation being an auxiliary, secondary product.

As a background of the further elaboration of this view given in the following sections, the remainder of the present section will describe some real experience of dealing with large programs that has seemed to me more and more significant as I have pondered over the problems. In each case the experience is my own or has been communicated to me by persons having first-hand contact with the activity in question.

Case 1 concerns a compiler. It has been developed by a group A for a language L and worked very well on computer X. Now another group B has the task of writing a compiler for a language L + M, a modest extension of L, for computer Y. Group B decides that the compiler for L developed by group A will be a good starting point for their design, and gets a contract with group A that they will get support in the form of full documentation, including annotated program texts and much additional written design discussion, and also personal advice. The arrangement was effective and group B managed to develop the compiler they wanted. In the present context the significant issue is the importance of the personal advice from group A in the matters that concerned how to implement the extensions M to the language. During the design phase group B made suggestions for the manner in which the extensions should be accommodated and submitted them to group A for review. In several major cases it turned out that the solutions suggested by group B were found by group A to make no use of the facilities that were not only inherent in the structure of the existing compiler but were discussed at length in its documentation, and to be based instead on additions to that structure in the form of patches that effectively destroyed its power and simplicity. The members of group A were able to spot these cases instantly and could propose simple and effective solutions, framed entirely within the existing structure. This is an example of how the full program text and additional documentation are insufficient in conveying to even the highly motivated group B the deeper insight into the design, that theory which is immediately present to the members of group A.
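The kind of structural facility that group B's patches bypassed can be suggested by a small sketch. Everything below is invented for illustration, since Naur gives no details of the actual compiler: assume group A's design routes all code generation through a table of per-construct emitters, so that an extension is meant to be one more entry in the table.

```python
# Hypothetical sketch; all names and language constructs are invented.
# Assumed design: every construct of L has a registered emitter, and
# compile_node dispatches through the table. That table is the kind of
# "facility inherent in the structure" the documentation described.

EMITTERS = {}

def emitter(node_kind):
    """Register a code emitter for one language construct."""
    def register(fn):
        EMITTERS[node_kind] = fn
        return fn
    return register

@emitter("assign")
def emit_assign(node):
    return [f"LOAD {node['src']}", f"STORE {node['dst']}"]

def compile_node(node):
    return EMITTERS[node["kind"]](node)

# Extension M, done within the structure: simply register a new emitter
# for a hypothetical new construct.
@emitter("swap")
def emit_swap(node):
    return [f"LOAD {node['a']}", f"LOAD {node['b']}",
            f"STORE {node['a']}", f"STORE {node['b']}"]

print(compile_node({"kind": "swap", "a": "x", "b": "y"}))
# → ['LOAD x', 'LOAD y', 'STORE x', 'STORE y']
```

A patch that instead special-cased the new construct inside compile_node would produce the same object code while quietly abandoning the single point of extension; that is exactly the difference group A could see instantly and the written documentation failed to convey.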

In the years following these events the compiler developed by group B was taken over by other programmers of the same organization, without guidance from group A. Information obtained by a member of group A about the state of the compiler after about 10 years of further modification made it clear that at that later stage the original powerful structure was still visible, but had been made entirely ineffective by amorphous additions of many different kinds. Thus, again, the program text and its documentation have proved insufficient as carriers of some of the most important design ideas.

Case 2 concerns the installation and fault diagnosis of a large real-time system for monitoring industrial production activities. The system is marketed by its producer, each delivery of the system being adapted individually to its specific environment of sensors and display devices. The size of the program delivered in each installation is of the order of 200,000 lines. The relevant experience from the way this kind of system is handled concerns the role and manner of work of the group of installation and fault-finding programmers. The facts are, first, that these programmers have been closely concerned with the system as a full-time occupation over a period of several years, from the time the system was under design. Second, when diagnosing a fault these programmers rely almost exclusively on their ready knowledge of the system and the annotated program text, and are unable to conceive of any kind of additional documentation that would be useful to them. Third, other groups of programmers who are responsible for the operation of particular installations of the system, and thus receive documentation of the system and full guidance on its use from the producer's staff, regularly encounter difficulties that upon consultation with the producer's installation and fault-finding programmers are traced to inadequate understanding of the existing documentation, but which can be cleared up easily by the installation and fault-finding programmers.

The conclusion seems inescapable that, at least with certain kinds of large programs, the continued adaptation, modification, and correction of errors in them is essentially dependent on a certain kind of knowledge possessed by a group of programmers who are closely and continuously connected with them.

Ryle’s Notion of Theory

If it is granted that programming must involve, as the essential part, a building up of the programmers’ knowledge, the next issue is to characterize that knowledge more closely. What will be considered here is the suggestion that the programmers’ knowledge properly should be regarded as a theory, in the sense of Ryle. Very briefly, a person who has or possesses a theory in this sense knows how to do certain things and in addition can support the actual doing with explanations, justifications, and answers to queries, about the activity of concern. It may be noted that Ryle’s notion of theory appears as an example of what K. Popper calls unembodied World 3 objects and thus has a defensible philosophical standing. In the present section we shall describe Ryle’s notion of theory in more detail.

Ryle develops his notion of theory as part of his analysis of the nature of intellectual activity, particularly the manner in which intellectual activity differs from, and goes beyond, activity that is merely intelligent. In intelligent behaviour the person displays, not any particular knowledge of facts, but the ability to do certain things, such as to make and appreciate jokes, to talk grammatically, or to fish. More particularly, the intelligent performance is characterized in part by the person’s doing them well, according to certain criteria, but further displays the person’s ability to apply the criteria so as to detect and correct lapses, to learn from the examples of others, and so forth. It may be noted that this notion of intelligence does not rely on any notion that the intelligent behaviour depends on the person’s following or adhering to rules, prescriptions, or methods. On the contrary, the very act of adhering to rules can be done more or less intelligently; if the exercise of intelligence depended on following rules there would have to be rules about how to follow rules, and about how to follow the rules about following rules, etc. in an infinite regress, which is absurd.

What characterizes intellectual activity, over and beyond activity that is merely intelligent, is the person’s building and having a theory, where theory is understood as the knowledge a person must have in order not only to do certain things intelligently but also to explain them, to answer queries about them, to argue about them, and so forth. A person who has a theory is prepared to enter into such activities; while building the theory the person is trying to get it.

The notion of theory in the sense used here applies not only to the elaborate constructions of specialized fields of enquiry, but equally to activities that any person who has received education will participate in on certain occasions. Even quite unambitious activities of everyday life may give rise to people’s theorizing, for example in planning how to place furniture or how to get to some place by means of certain means of transportation.

The notion of theory employed here is explicitly not confined to what may be called the most general or abstract part of the insight. For example, to have Newton’s theory of mechanics as understood here it is not enough to understand the central laws, such as that force equals mass times acceleration. In addition, as described in more detail by Kuhn, the person having the theory must have an understanding of the manner in which the central laws apply to certain aspects of reality, so as to be able to recognize and apply the theory to other similar aspects. A person having Newton’s theory of mechanics must thus understand how it applies to the motions of pendulums and the planets, and must be able to recognize similar phenomena in the world, so as to be able to employ the mathematically expressed rules of the theory properly.
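To make the point concrete with a standard textbook application (not part of Naur's text): having Newton's theory includes being able to carry out mappings such as the following, from the central law to the motion of a pendulum.

```latex
% The tangential component of gravity on a pendulum bob of mass m,
% suspended at length L and at angle \theta from the vertical, is
% -mg\sin\theta; the tangential acceleration is L\ddot{\theta}.
% Newton's second law F = ma applied along the arc then gives
\[
  m L \ddot{\theta} = - m g \sin\theta
  \qquad\Longrightarrow\qquad
  \ddot{\theta} = -\frac{g}{L}\,\sin\theta .
\]
```

Possessing the theory, in the sense used here, is precisely the ability to recognize that the pendulum, the planet, and some new, unlisted phenomenon are all situations to which this same kind of mapping applies.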

The dependence of a theory on a grasp of certain kinds of similarity between situations and events of the real world gives the reason why the knowledge held by someone who has the theory could not, in principle, be expressed in terms of rules. In fact, the similarities in question are not, and cannot be, expressed in terms of criteria, any more than the similarities of many other kinds of objects, such as human faces, tunes, or tastes of wine, can be thus expressed.

The Theory To Be Built by the Programmer

In terms of Ryle’s notion of theory, what has to be built by the programmer is a theory of how certain affairs of the world will be handled by, or supported by, a computer program. On the Theory Building View of programming the theory built by the programmers has primacy over such other products as program texts, user documentation, and additional documentation such as specifications.

In arguing for the Theory Building View, the basic issue is to show how the knowledge possessed by the programmer by virtue of his or her having the theory necessarily, and in an essential manner, transcends that which is recorded in the documented products. The answer to this issue is that the programmer’s knowledge transcends that given in documentation in at least three essential areas:

1) The programmer having the theory of the program can explain how the solution relates to the affairs of the world that it helps to handle. Such an explanation will have to be concerned with the manner in which the affairs of the world, both in their overall characteristics and their details, are, in some sense, mapped into the program text and into any additional documentation. Thus the programmer must be able to explain, for each part of the program text and for each of its overall structural characteristics, what aspect or activity of the world is matched by it. Conversely, for any aspect or activity of the world the programmer is able to state its manner of mapping into the program text. By far the largest part of the world aspects and activities will of course lie outside the scope of the program text, being irrelevant in the context. However, the decision that a part of the world is relevant can only be made by someone who understands the whole world. This understanding must be contributed by the programmer.

2) The programmer having the theory of the program can explain why each part of the program is what it is, in other words is able to support the actual program text with a justification of some sort. The final basis of the justification is and must always remain the programmer’s direct, intuitive knowledge or estimate. This holds even where the justification makes use of reasoning, perhaps with application of design rules, quantitative estimates, comparisons with alternatives, and such like, the point being that the choice of the principles and rules, and the decision that they are relevant to the situation at hand, again must in the final analysis remain a matter of the programmer’s direct knowledge.

3) The programmer having the theory of the program is able to respond constructively to any demand for a modification of the program so as to support the affairs of the world in a new manner. Designing how a modification is best incorporated into an established program depends on the perception of the similarity of the new demand with the operational facilities already built into the program. The kind of similarity that has to be perceived is one between aspects of the world. It only makes sense to the agent who has knowledge of the world, that is to the programmer, and cannot be reduced to any limited set of criteria or rules, for reasons similar to the ones given above why the justification of the program cannot be thus reduced.

While the discussion of the present section presents some basic arguments for adopting the Theory Building View of programming, an assessment of the view should take into account to what extent it may contribute to a coherent understanding of programming and its problems. Such matters will be discussed in the following sections.

Problems and Costs of Program Modifications

A prominent reason for proposing the Theory Building View of programming is the desire to establish an insight into programming suitable for supporting a sound understanding of program modifications. This question will therefore be the first one to be taken up for analysis.

One thing seems to be agreed by everyone, that software will be modified. It is invariably the case that a program, once in operation, will be felt to be only part of the answer to the problems at hand. Also the very use of the program itself will inspire ideas for further useful services that the program ought to provide. Hence the need for ways to handle modifications.

The question of program modifications is closely tied to that of programming costs. In the face of a need for a changed manner of operation of the program, one hopes to achieve a saving of costs by making modifications of an existing program text, rather than by writing an entirely new program.

The expectation that program modifications at low cost ought to be possible is one that calls for closer analysis. First it should be noted that such an expectation cannot be supported by analogy with modifications of other complicated man-made constructions. Where modifications are occasionally put into action, for example in the case of buildings, they are well known to be expensive and in fact complete demolition of the existing building followed by new construction is often found to be preferable economically. Second, the expectation of the possibility of low cost program modifications conceivably finds support in the fact that a program is a text held in a medium allowing for easy editing. For this support to be valid it must clearly be assumed that the dominating cost is one of text manipulation. This would agree with a notion of programming as text production. On the Theory Building View this whole argument is false. This view gives no support to an expectation that program modifications at low cost are generally possible.

A further closely related issue is that of program flexibility. In including flexibility in a program we build into the program certain operational facilities that are not immediately demanded, but which are likely to turn out to be useful. Thus a flexible program is able to handle certain classes of changes of external circumstances without being modified.

It is often stated that programs should be designed to include a lot of flexibility, so as to be readily adaptable to changing circumstances. Such advice may be reasonable as far as flexibility that can be easily achieved is concerned. However, flexibility can in general only be achieved at a substantial cost. Each item of it has to be designed, including what circumstances it has to cover and by what kind of parameters it should be controlled. Then it has to be implemented, tested, and described. This cost is incurred in achieving a program feature whose usefulness depends entirely on future events. It must be obvious that built-in program flexibility is no answer to the general demand for adapting programs to the changing circumstances of the world.
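What "designing an item of flexibility" involves can be sketched briefly. All names and numbers below are invented for illustration and are not taken from Naur's cases: consider a threshold monitor whose anticipated variations are controlled by parameters.

```python
# Hypothetical sketch of designed-in flexibility. Each parameter is one
# "item" of flexibility: the circumstances it covers had to be decided,
# and it had to be implemented, tested, and described, a cost paid
# whether or not the future ever makes use of it.

from dataclasses import dataclass

@dataclass
class MonitorConfig:
    threshold: float = 100.0   # alarm level, set per installation
    hysteresis: float = 5.0    # anticipated circumstance: noisy sensors
    unit_scale: float = 1.0    # anticipated circumstance: other sensor units

def check(reading, cfg, alarm_on):
    """Return True if the alarm should be (or stay) on."""
    value = reading * cfg.unit_scale
    if alarm_on:
        # Hysteresis keeps a raised alarm on until the value drops well
        # below the threshold, absorbing sensor noise.
        return value > cfg.threshold - cfg.hysteresis
    return value > cfg.threshold

cfg = MonitorConfig()
print(check(96.0, cfg, alarm_on=False))  # below threshold: no alarm
print(check(96.0, cfg, alarm_on=True))   # within hysteresis band: stays on
```

The parameters absorb exactly the changes they were designed for; a change nobody anticipated, say alarms depending on the rate of change rather than the level, still requires modifying the program, and hence falls back on the theory behind it.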

In a program modification an existing programmed solution has to be changed so as to cater for a change in the real world activity it has to match. What is needed in a modification, first of all, is a confrontation of the existing solution with the demands called for by the desired modification. In this confrontation the degree and kind of similarity between the capabilities of the existing solution and the new demands has to be determined. This need for a determination of similarity brings out the merit of the Theory Building View. Indeed, precisely in a determination of similarity the shortcoming of any view of programming that ignores the central requirement for the direct participation of persons who possess the appropriate insight becomes evident. The point is that the kind of similarity that has to be recognized is accessible to the human beings who possess the theory of the program, although entirely outside the reach of what can be determined by rules, since even the criteria on which to judge it cannot be formulated. From the insight into the similarity between the new requirements and those already satisfied by the program, the programmer is able to design the change of the program text needed to implement the modification.

In a certain sense there can be no question of a theory modification, only of a program modification. Indeed, a person having the theory must already be prepared to respond to the kinds of questions and demands that may give rise to program modifications. This observation leads to the important conclusion that the problems of program modification arise from acting on the assumption that programming consists of program text production, instead of recognizing programming as an activity of theory building.

On the basis of the Theory Building View the decay of a program text as a result of modifications made by programmers without a proper grasp of the underlying theory becomes understandable. As a matter of fact, if viewed merely as a change of the program text and of the external behaviour of the execution, a given desired modification may usually be realized in many different ways, all correct. At the same time, if viewed in relation to the theory of the program these ways may look very different, some of them perhaps conforming to that theory or extending it in a natural way, while others may be wholly inconsistent with that theory, perhaps having the character of unintegrated patches on the main part of the program. This difference of character of various changes is one that can only make sense to the programmer who possesses the theory of the program. At the same time the character of changes made in a program text is vital to the longer term viability of the program. For a program to retain its quality it is mandatory that each modification is firmly grounded in the theory of it. Indeed, the very notion of qualities such as simplicity and good structure can only be understood in terms of the theory of the program, since they characterize the actual program text in relation to such program texts that might have been written to achieve the same execution behaviour, but which exist only as possibilities in the programmer’s understanding.
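This difference in character can be made concrete with a hypothetical sketch; all names are invented. Suppose the theory of a small pricing program is that every price adjustment is a rule, applied uniformly by the totalling loop.

```python
# Hypothetical illustration: two behaviourally identical ways to add
# the same feature. The point is only the difference in character.

# The program's theory (say): every price adjustment is a rule mapping
# an item to an adjusted item, applied uniformly to all items.
def bulk_discount(item):
    return dict(item, price=item["price"] * 0.9) if item["qty"] >= 10 else item

RULES = [bulk_discount]

def total(items):
    out = 0.0
    for item in items:
        for rule in RULES:
            item = rule(item)
        out += item["price"] * item["qty"]
    return out

# Modification A, conforming to the theory: the new demand (a member
# discount) is recognized as similar to the existing rules, so it
# becomes one more rule.
def member_discount(item):
    return dict(item, price=item["price"] * 0.95) if item.get("member") else item

RULES.append(member_discount)

# Modification B, an unintegrated patch, would obtain the same external
# behaviour by special-casing members inside total() itself. Viewed as
# text changes and execution behaviour the two are equally correct;
# only a programmer holding the theory sees that A extends the design
# naturally while B erodes it.
```

Both modifications satisfy the demand; the quality that distinguishes them exists only in relation to the theory, that is, in relation to the program texts that might have been written instead.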

Program Life, Death, and Revival

A main claim of the Theory Building View of programming is that an essential part of any program, the theory of it, is something that could not conceivably be expressed, but is inextricably bound to human beings. It follows that in describing the state of the program it is important to indicate the extent to which programmers having its theory remain in charge of it. As a way in which to emphasize this circumstance one might extend the notion of program building by notions of program life, death, and revival. The building of the program is the same as the building of the theory of it by and in the team of programmers. During the program life a programmer team possessing its theory remains in active control of the program, and in particular retains control over all modifications. The death of a program happens when the programmer team possessing its theory is dissolved. A dead program may continue to be used for execution in a computer and to produce useful results. The actual state of death becomes visible when demands for modifications of the program cannot be intelligently answered. Revival of a program is the rebuilding of its theory by a new programmer team.

The extended life of a program according to these notions depends on the taking over by new generations of programmers of the theory of the program. For a new programmer to come to possess an existing theory of a program it is insufficient that he or she has the opportunity to become familiar with the program text and other documentation. What is required is that the new programmer has the opportunity to work in close contact with the programmers who already possess the theory, so as to be able to become familiar with the place of the program in the wider context of the relevant real world situations and so as to acquire the knowledge of how the program works and how unusual program reactions and program modifications are handled within the program theory. This problem of educating new programmers in an existing theory of a program is quite similar to the educational problem in other activities where the knowledge of how to do certain things dominates over the knowledge that certain things are the case, such as writing or playing a musical instrument. The most important educational activity is the student’s doing the relevant things under suitable supervision and guidance. In the case of programming the activity should include discussions of the relation between the program and the relevant aspects and activities of the real world, and of the limits set on the real world matters dealt with by the program.

A very important consequence of the Theory Building View is that program revival, that is reestablishing the theory of a program merely from the documentation, is strictly impossible. Lest this consequence seem unreasonable, it may be noted that the need for revival of an entirely dead program probably will rarely arise, since it is hardly conceivable that the revival would be assigned to new programmers without at least some knowledge of the theory had by the original team. Even so the Theory Building View suggests strongly that program revival should only be attempted in exceptional situations and with full awareness that it is at best costly, and may lead to a revived theory that differs from the one originally had by the program authors and so may contain discrepancies with the program text.

In preference to program revival, the Theory Building View suggests, the existing program text should be discarded and the newly formed programmer team should be given the opportunity to solve the given problem afresh. Such a procedure is more likely to produce a viable program than program revival, and at no higher, and possibly lower, cost. The point is that building a theory to fit and support an existing program text is a difficult, frustrating, and time consuming activity. The new programmer is likely to feel torn between loyalty to the existing program text, with whatever obscurities and weaknesses it may contain, and the new theory that he or she has to build up, and which, for better or worse, most likely will differ from the original theory behind the program text.

Similar problems are likely to arise even when a program is kept continuously alive by an evolving team of programmers, as a result of the differences of competence and background experience of the individual programmers, particularly as the team is being kept operational by inevitable replacements of the individual members.

Method and Theory Building

Recent years have seen much interest in programming methods. In the present section some comments will be made on the relation between the Theory Building View and the notions behind programming methods.

To begin with, what is a programming method? This is not always made clear, even by authors who recommend a particular method. Here a programming method will be taken to be a set of work rules for programmers, telling what kind of things the programmers should do, in what order, which notations or languages to use, and what kinds of documents to produce at various stages.

In comparing this notion of method with the Theory Building View of programming, the most important issue is that of actions or operations and their ordering. A method implies a claim that program development can and should proceed as a sequence of actions of certain kinds, each action leading to a particular kind of documented result. In building the theory there can be no particular sequence of actions, for the reason that a theory held by a person has no inherent division into parts and no inherent ordering. Rather, the person possessing a theory will be able to produce presentations of various sorts on the basis of it, in response to questions or demands.

As to the use of particular kinds of notation or formalization, again this can only be a secondary issue since the primary item, the theory, is not, and cannot be, expressed, and so no question of the form of its expression arises.

It follows that on the Theory Building View, for the primary activity of programming there can be no right method.

This conclusion may seem to conflict with established opinion, in several ways, and might thus be taken to be an argument against the Theory Building View. Two such apparent contradictions shall be taken up here, the first relating to the importance of method in the pursuit of science, the second concerning the success of methods as actually used in software development.

The first argument is that software development should be conducted in a scientific manner, and so should employ procedures similar to scientific methods. The flaw of this argument is the assumption that there is such a thing as scientific method and that it is helpful to scientists. This question has been the subject of much debate in recent years, and the conclusion of such authors as Feyerabend, taking his illustrations from the history of physics, and Medawar, arguing as a biologist, is that the notion of scientific method as a set of guidelines for the practising scientist is mistaken.

This conclusion is not contradicted by such work as that of Polya on problem solving. This work takes its illustrations from the field of mathematics and leads to insight which is also highly relevant to programming. However, it cannot be claimed to present a method on which to proceed. Rather, it is a collection of suggestions aiming at stimulating the mental activity of the problem solver, by pointing out different modes of work that may be applied in any sequence.

The second argument that may seem to contradict the dismissal of method of the Theory Building View is that the use of particular methods has been successful, according to published reports. To this argument it may be answered that a methodically satisfactory study of the efficacy of programming methods seems never to have been made. Such a study would have to employ the well established technique of controlled experiments (cf. Brooks, 1980 or Moher and Schneider, 1982). The lack of such studies is explainable partly by the high cost that would undoubtedly be incurred in such investigations if the results were to be significant, partly by the problems of establishing in an operational fashion the concepts underlying what is called methods in the field of program development. Most published reports on such methods merely describe and recommend certain techniques and procedures, without establishing their usefulness or efficacy in any systematic way. An elaborate study of five different methods by C. Floyd and several co-workers concludes that the notion of methods as systems of rules that in an arbitrary context and mechanically will lead to good solutions is an illusion. What remains is the effect of methods in the education of programmers. This conclusion is entirely compatible with the Theory Building View of programming. Indeed, on this view the quality of the theory built by the programmer will depend to a large extent on the programmer’s familiarity with model solutions of typical problems, with techniques of description and verification, and with principles of structuring systems consisting of many parts in complicated interactions. Thus many of the items of concern of methods are relevant to theory building. Where the Theory Building View departs from that of the methodologists is on the question of which techniques to use and in what order. On the Theory Building View this must remain entirely a matter for the programmer to decide, taking into account the actual problem to be solved.

Programmers’ Status and the Theory Building View

The areas where the consequences of the Theory Building View contrast most strikingly with those of the more prevalent current views are those of the programmers’ personal contribution to the activity and of the programmers’ proper status.

The contrast between the Theory Building View and the more prevalent view of the programmers’ personal contribution is apparent in much of the common discussion of programming. As just one example, consider the study of modifiability of large software systems by Oskarsson. This study gives extensive information on a considerable number of modifications in one release of a large commercial system. The description covers the background, substance, and implementation of each modification, with particular attention to the manner in which the program changes are confined to particular program modules. However, there is no suggestion whatsoever that the implementation of the modifications might depend on the background of the 500 programmers employed on the project, such as the length of time they have been working on it, and there is no indication of the manner in which the design decisions are distributed among the 500 programmers. Even so, the significance of an underlying theory is admitted indirectly in statements such as that ‘decisions were implemented in the wrong block’ and in a reference to ‘a philosophy of AXE’. However, by the manner in which the study is conducted these admissions can only remain isolated indications.

More generally, much current discussion of programming seems to assume that programming is similar to industrial production, the programmer being regarded as a component of that production, a component that has to be controlled by rules of procedure and which can be replaced easily. Another related view is that human beings perform best if they act like machines, by following rules, with a consequent stress on formal modes of expression, which make it possible to formulate certain arguments in terms of rules of formal manipulation. Such views agree well with the notion, seemingly common among persons working with computers, that the human mind works like a computer. At the level of industrial management these views support treating programmers as workers of fairly low responsibility, and only brief education.

On the Theory Building View the primary result of the programming activity is the theory held by the programmers. Since this theory by its very nature is part of the mental possession of each programmer, it follows that the notion of the programmer as an easily replaceable component in the program production activity has to be abandoned. Instead the programmer must be regarded as a responsible developer and manager of the activity in which the computer is a part. In order to fill this position he or she must be given a permanent position, of a status similar to that of other professionals, such as engineers and lawyers, whose active contributions as employees of enterprises rest on their intellectual proficiency.

The raising of the status of programmers suggested by the Theory Building View will have to be supported by a corresponding reorientation of the programmer education. While skills such as the mastery of notations, data representations, and data processes, remain important, the primary emphasis would have to turn in the direction of furthering the understanding and talent for theory formation. To what extent this can be taught at all must remain an open question. The most hopeful approach would be to have the student work on concrete problems under guidance, in an active and constructive environment.

Conclusions

Accepting program modifications demanded by changing external circumstances to be an essential part of programming, it is argued that the primary aim of programming is to have the programmers build a theory of the way the matters at hand may be supported by the execution of a program. Such a view leads to a notion of program life that depends on the continued support of the program by programmers having its theory. Further, on this view the notion of a programming method, understood as a set of rules of procedure


n5321 | 2025-08-16 12:33

Adding a free SSL certificate

Let’s Encrypt is currently the most widely used free SSL certificate service in the world. It lets your site serve https:// traffic and earn the browser’s “secure lock 🔒” indicator.

The setup: Nginx listens on port 443 with SSL configured, and Django handles the application logic.

Step 1: Install certbot on Ubuntu

sudo apt update
sudo apt install certbot python3-certbot-nginx

Step 2: Use certbot to request a certificate automatically

sudo certbot --nginx -d autoem.net -d www.autoem.net

Step 3: Adjust the Nginx configuration for compatibility, then test and restart:

sudo nginx -t

sudo systemctl restart nginx
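After certbot edits the site configuration, the server block should look roughly like the sketch below. This is a hedged example, not the exact file certbot generates; in particular, the upstream address 127.0.0.1:8000 assumes the Django app is served there (e.g. by Gunicorn).

```nginx
# Sketch of the resulting config for autoem.net (illustrative only;
# certbot's actual edits may differ).
server {
    listen 443 ssl;
    server_name autoem.net www.autoem.net;

    # Paths created by certbot under /etc/letsencrypt/live/
    ssl_certificate     /etc/letsencrypt/live/autoem.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/autoem.net/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;          # assumed Django upstream
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme; # lets Django detect HTTPS
    }
}

server {
    listen 80;
    server_name autoem.net www.autoem.net;
    return 301 https://$host$request_uri;  # redirect plain HTTP to HTTPS
}
```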


Step 4: Set up automatic renewal (Let’s Encrypt certificates expire after 90 days). On Ubuntu, certbot installs a timer that renews certificates automatically; the dry run below verifies that renewal will work:

sudo certbot renew --dry-run


Output:


Saving debug log to /var/log/letsencrypt/letsencrypt.log

Processing /etc/letsencrypt/renewal/autoem.net.conf

Account registered.

Simulating renewal of an existing certificate for autoem.net and www.autoem.net

Congratulations, all simulated renewals succeeded: 

  /etc/letsencrypt/live/autoem.net/fullchain.pem (success)




Recommended updates to settings.py:

SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
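One caveat worth adding (my note, not in the original post): with Django behind an Nginx proxy, SECURE_SSL_REDIRECT can cause a redirect loop unless Django is told how the proxy marks HTTPS requests. Assuming Nginx sets the X-Forwarded-Proto header, the standard Django setting is:

```python
# settings.py: trust the proxy's protocol header so Django knows a
# request already arrived over HTTPS and does not redirect it again.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```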


n5321 | 2025-07-03 11:59

Computers—Work in progress

The history of computers, particularly digital ones, dates from the first quarter of the 17th century (see ref. 1 for review). The first known machine was built by Wilhelm Schickard, a professor at Tübingen and a friend of Kepler's. Interestingly, this occurred at the same time that Napier invented logarithms. The device was built, but it, as well as the copy for Kepler, and the inventor himself were destroyed by the fires and plagues of the Thirty Years' War. The next machine, copies of which still exist, was built by Pascal and was described in Diderot's Encyclopédie. This device became an important part of a desk calculator designed and constructed by Leibniz who said:

"Also the astronomers surely will not have to continue to exercise the patience which is required for computation. It is this that deters them from computing or correcting tables, from the construction of Ephemerides, from working on hypotheses, and from discussions of observations with each other. For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if machines were used."

This enunciation by Leibniz of a purpose for automatic computing is a memorable one. Science, or at least mathematical astronomy, had advanced sufficiently by his time so that calculation was a real burden and it was recognized to some extent how this burden could be lightened. Certainly Kepler, using tables of logarithms he himself calculated based on Napier's schema, did extensive calculations in order to produce his Rudolphine Tables.

The time for the digital principle, however, had still not come, and even by the early part of the 19th century the Nautical Almanac was being calculated by groups of humans all busy making separate and independent calculations with attendant errors. The situation was so bad by 1823 that Charles Babbage, one of the founders of the Royal Astronomical Society in 1820, set out to create a digital device to construct tables by a method, certainly well-known to Newton, called subtabulation. In this method, one first calculates a comparatively few entries in a table by hand a priori, and then the entries lying between them are filled in by systematic interpolation using essentially only additions and subtractions.
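The subtabulation method described above can be sketched in a few lines (my illustration, not code from the article): for a polynomial of degree n, the nth differences are constant, so after a handful of hand-computed seed values every later table entry follows from additions alone, which is exactly what Babbage's engine was to mechanize.

```python
# Babbage-style subtabulation by finite differences (illustrative sketch).

def leading_differences(values):
    """Build the forward-difference table and return the trailing
    entry of each row: the differences needed to extend the table."""
    table = [list(values)]
    while len(table[-1]) > 1:
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    return [row[-1] for row in table]

def subtabulate(seed, count):
    """Extend `seed` to `count` entries using additions only,
    as a difference engine would."""
    diffs = leading_differences(seed)
    out = list(seed)
    while len(out) < count:
        # propagate each higher-order difference downward, then emit
        for i in range(len(diffs) - 2, -1, -1):
            diffs[i] += diffs[i + 1]
        out.append(diffs[0])
    return out

# Four hand-computed entries of a cubic suffice to reproduce the rest.
f = lambda x: x**3 - 2 * x + 5
seed = [f(x) for x in range(4)]
print(subtabulate(seed, 10))  # equals [f(0), ..., f(9)], by additions alone
```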

For various reasons this was a propitious time in English history to attempt to automate computation. Trevelyan (2) tells us:

"A new civilization, not only for England but ultimately for all mankind, was implicit in the substitution of scientific machinery for handwork. The Greek scientists of the Alexandrine age, the contemporaries of Archimedes and of Euclid, had failed to apply their discoveries to industry on the big scale. Perhaps this failure was due to the contempt with which the high-souled philosophy of Hellas regarded the industrial arts conducted by slave labour; perhaps the great change was prevented by the disturbed and war-like state of the Mediterranean world during those three hundred years between Alexander and Augustus, when Greek science was in its most advanced state. In any case it was left to the peaceful, cultivated but commercially minded England of the generation that followed Newton's death, to harness philosophic thought and experiment to the commonest needs of daily life. The great English inventors negotiated the alliance of science and business."

In the process of trying to build his machine, Babbage went to the continent and saw the Jacquard loom. He wrote:

"It is known as a fact that the Jacquard loom is capable of weaving any design that the imagination of man may conceive... holes [are punched] in a set of pasteboard cards in such a manner that when these cards are placed in a Jacquard loom, it will then weave... the exact pattern designed by the artist.

"Now the manufacturer may use, for the warp and weft of his work, threads that are all of the same colour; let us suppose them to be unbleached or white threads. In that case the cloth will be woven all in one colour; but there will be a damask pattern upon it such as the artist designed.

"But the manufacturer might use the same card, and put into the warp threads of any other colour.

"The analogy of the Analytical Engine with this well-known process is nearly perfect.

"The Analytical Engine consists of two parts:

"1st. The store in which all the variables to be operated upon as well as all those quantities which have arisen from the results of other operations, are placed.

"2nd. The mill into which the quantities about to be operated upon are always brought.

"Every formula which the Analytical Engine can be required to compute consists of certain algebraical operations to be performed upon given letters, and of certain other modifications depending on the numerical value assigned to those letters.

"The Analytical Engine is therefore a machine of the most general nature. Whatever formula it is required to develop, the law of its development must be communicated to it by two sets of cards. When these have been placed, the engine is special for that particular formula.

"Every set of cards made for any formula will at any future time, recalculate that formula with whatever constants may be required."
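Babbage's store-and-mill scheme can be mimicked in a small sketch (my illustration, not from the article): the "store" holds numbered variables, the "mill" performs one operation at a time, and a fixed set of operation cards reruns the same formula for whatever constants are loaded into the store.

```python
# Toy model of the Analytical Engine's store and mill (illustrative).
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def run(cards, store):
    """Execute operation cards of the form (op, src1, src2, dest).
    The store maps variable numbers to their current values."""
    for op, a, b, dest in cards:
        store[dest] = OPS[op](store[a], store[b])  # the mill at work
    return store

# Cards for the formula v3 = (v0 + v1) * v2. As Babbage says, the same
# cards recalculate the formula "with whatever constants may be required."
cards = [("+", 0, 1, 3), ("*", 3, 2, 3)]
print(run(cards, {0: 2, 1: 3, 2: 4})[3])  # (2 + 3) * 4 = 20
print(run(cards, {0: 1, 1: 2, 2: 5})[3])  # same cards, new constants: 15
```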

We see here for the first time a clear statement of a computer similar to the later electromechanical machines built in our time. Its chief flaw lay in its speed, which was roughly that of a human. Interestingly enough, Babbage understood a good bit about programming and had, as his programmer, Ada Lady Lovelace, the beautiful and talented daughter of Byron, the poet. She actually wrote out a program for calculating the so-called Bernoulli numbers, but the machine was never completed. In fact, in 1915 a Lord Moulton wrote of Babbage:

"When he died a few years later, not only had he constructed no machine, but the verdict of a jury of kind and sympathetic scientific men who were deputed to pronounce upon what he had left behind him, either in papers or mechanism, was that everything was too incomplete to be capable of being put to any useful purpose."

When Thomson and Tait had need for numerical calculation, they correctly dismissed Babbage's ideas as impractical and too slow to be useful in physics. Instead, they, as well as Maxwell, became deeply interested in analog computers. In this connection here is what Maxwell said about computing:

"I do not here refer to the fact that all quantities, as such, are subject to the rules of arithmetic and algebra, and are therefore capable of being submitted to those dry calculations which represent, to so many minds their only idea of mathematics.

"The human mind is seldom satisfied, and is certainly never exercising its highest functions, when it is doing the work of a calculating machine. What the man of science, whether he is a mathematician or a physical inquirer, aims at is to acquire and develope clear ideas of the things he deals with. For this purpose he is willing to enter on long calculations, and to be for a season a calculating machine, if he can only at least make his ideas clearer."

Kelvin built a harmonic analyzer to analyze tidal motion, which was capable of handling eight integrals simultaneously. He said on the occasion when his machine was dedicated that he was "substituting brass for brains." He also conceived of, but was unable to build, the first differential analyzer. In the 1930s this was done by Vannevar Bush as part of an extensive program he conceived to automate computation. Unhappily, Bush and his colleagues became so immersed in the intricacies of analog devices that they overlooked or had little confidence in the beautiful simplicity of the digital approach by electronic means.

The onset of the Second World War in the United States found us with a considerable and rapidly growing knowledge of electronics because of the work in England and the United States on radar and fire control directors as well as counters of many sorts. Also at this same time there was a great demand for computation by the Ordnance Department of the U.S. Army for the production of firing and bombing tables for a wide variety of guns and bombs and by the Los Alamos Scientific Laboratory. It was this confluence of developing technology and need which led to the more or less inevitable development at the University of Pennsylvania of the first electronic computer, the ENIAC, at the beginning of 1946. This monstrously large device was capable of performing 300 multiplications per second and of storing at electronic speed 20 words of 10 digits each or 40 of 5 digits.

The speed of this machine was the stuff out of which revolutions are made: an increase in speed of more than 2 orders of magnitude over the 1 multiplication per second of the best electromechanical devices of the same period. Thus, man at long last had in hand a computing device, with the prospect for others, that totally transcended the capability of any other system. By means of this device it immediately became possible to discuss the solution of entirely new classes of problems.

At this time, the Los Alamos people, largely under the leadership and goading of von Neumann, were becoming skilled in the mathematical formulation of some of the complex problems facing the developers of the atomic and hydrogen bombs. Among other things they needed a machine that could operate at speeds such that it was practicable to solve partial differential equations numerically. The ENIAC was just such a device, and von Neumann jumped at the chance. The test problem for the ENIAC that we agreed upon was such a problem, and after much travail it ran successfully, thereby opening up a new era. This was in a sense the most obvious and primitive use for the computer: the solution of scientific (i.e., applied mathematical) problems by numerical means rather than by some species of physical experimentation.

By this I mean that much physical experimentation of that period was not concerned with the determination of physical constants but rather was a form of analog computation since the differential equations of motion could in fact be written down but not solved except by some special-purpose analog device.

It was a successful project led by von Neumann and myself at the Institute for Advanced Study that led to the design and development of what may be viewed as the prototype of the modern computer in the early 1950s. In fact, to this day, computers are often called von Neumann machines. This project was successful because it was multifaceted: there was a fine engineering group; a numerical analysis and logical design group that may safely be said to have started the modern field; a group that started the field we today call "programming" or "software"; and a numerical meteorology group that showed the scientific community of the world the importance of digital computers and computing to one important aspect of society, the weather.

The Institute computer was copied and the copies were copied. Many other universities became interested and took active parts. But in my opinion the next great step was the entrance of industry into the picture. This is what made all the difference in what followed. All those in the academic world had busily erected a tower of Babel, and no one could any longer understand his neighbor. It was the introduction by industry and, if I may be permitted "a plug," it was the introduction by IBM of the 650 and the 701 that made the difference. All at once there were large numbers of identical machines in many places so that it was possible for John Backus and his colleagues to introduce FORTRAN into the scientific community as a lingua franca in 1953 and 1954. The value of this language can be perceived from the fact that today it is still perhaps the most widely used language in the computer field.

So far I have made mention only of scientific users of the computer and have indicated that it is perhaps the simplest and most obvious such usage. We need now to consider the deeper and certainly more exciting uses that well may be more influential in our society.

There were two pioneering organizations that made early use of electronic computers for nonscientific computations: the Lyons Tea Shops, a chain of wholesale grocers in England that built its own machine (LEO), and our Census Bureau that bought one of the earliest electronic computers (UNIVAC). The large-scale use by business and government for nonscientific purposes did not occur until considerably later for three reasons: the innate lack of understanding by business people of the importance of data; the failure of the computer industry to understand that separate machines for business and science were not needed; and the lack of understanding among business people of the forthcoming shortage of humans to undertake burdensome tasks.

On the technical side I believe that the "great leap forward" to commercial or nonscientific computation came when two entirely independent events occurred. On the technological side the magnetic core was invented and almost overnight became the prime device for building much larger and more reliable memories than had previously been possible; and nearly at the same time, solid-state devices—diodes and transistors—came on the scene to provide both smaller, faster, and more reliable switches. On the programming side the work of Backus and his associates had begun to show people how to manipulate information in a way hardly attempted before. Prior to this time, computer users had primarily manipulated numerical material, and the great utility of the computer lay precisely in its speed in this manipulative process. With the advent of new programming languages, computers were used to interpret and translate sentences from one language to another so that information was for the first time being manipulated per se. I believe this ability rapidly developed to the point where it became apparent that all manner of information or data could profitably be stored, manipulated, retrieved, and altered.

One of the early and highly successful applications of this sort was the American Airlines reservation system. When this was introduced it was a most important experiment in which a considerable wealth of data could be processed very rapidly on a nationwide basis by clerks with only a modest amount of training; the result was better and faster service for the customer and a more efficient use of planes and seats by the airline. This system also served to bring to the attention of the business community the elegant tying together, for greater efficiency, of far-flung branches. Prior to this time, branch offices—for example, of banks or great sales companies—had only minimal connection to their main offices. As soon as the American Airlines experiment appeared it began to become apparent how tightly one could knit together the widespread offices of a great company.

The information field today directly contributes more than 80 billion dollars to the gross national product. This helps to show the value of information now in contrast to the days of the ENIAC. Today we live in a society in which information is used, valued, and transmitted worldwide in great quantities.

To understand the nature of these processes let us recall that in our early discussion we spoke of the numbers of multiplications per second. We introduced this figure of merit because in the scientific calculations of the 1940s and 1950s the dominant time spent during a calculation was on doing multiplications. This is no longer the case. For the computing systems of today a better figure of merit is the number of instructions performed per second and is commonly quoted in the unit of MIPS, millions of instructions per second. (Good-size machines may well have ratings of 5-10 MIPS.) It is also sometimes convenient to measure commercial performance by the number of transactions per second. A transaction is one complete interaction between a customer and clerk as carried out by a computer. Such transactions may contain as few as 10⁴ instructions or as many as 10⁶ instructions. Evidently each company needs to decide how long customers can reasonably spend on a typical transaction and it tries to procure a computer that will keep transaction times down so that each customer is served promptly and all clerks are used efficiently.
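The arithmetic behind these figures is easy to check (my illustration; the MIPS ratings and instruction counts are the ones quoted above):

```python
# Back-of-envelope check of the transaction figures above: how long
# one transaction takes on a machine of a given MIPS rating.

def transaction_seconds(instructions, mips):
    """Seconds to execute `instructions` at `mips` million instructions/s."""
    return instructions / (mips * 1_000_000)

# A light (10^4-instruction) and a heavy (10^6-instruction) transaction
# on a good-size 5-MIPS machine of the period:
print(transaction_seconds(1e4, 5))  # 0.002 s -- imperceptible to the customer
print(transaction_seconds(1e6, 5))  # 0.2 s  -- still a prompt response
```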

The technological advances on the hardware side have been truly incredible and have made possible the advances suggested above. Since the time of the ENIAC, speed of circuitry or switching speed, which used to be measured in hundred-thousandths of a second, is now measured in trillionths of a second, an increase of 7 orders of magnitude. To make use of these new speeds it has been necessary to make circuits much smaller because, in a trillionth of a second, light, and hence an electrical signal, will travel only about 0.3 mm. Thus, to maintain efficient operation of computing machine circuits it has been necessary to shrink circuit dimensions dramatically. To produce these dimensional sizes it has been necessary to resort to processes of printing circuits by means of optical and electron beams. With these compressions in size and expansions of speed have fortunately come corresponding economies of scale so that the present-day circuits cost much less than did earlier ones. This has resulted in real cost benefits to customers. It is worth noting at this point, however, that these extremely tightly packed circuits generate sizable amounts of heat whose dissipation has become a possible barrier to increased miniaturization. The solution to this problem is a key step on the path to further progress.

Not only have circuits become smaller but even the storage of information on such media as magnetic disks has improved so that today we can store about 10⁷ bits per square inch compared to a few times 10⁴ just 25 years ago.

Along with the miniaturization of circuits have come a number of exciting new prospective technologies. To avoid the heat problem mentioned earlier, techniques based upon the so-called Josephson effect are attractive. By operating at temperatures near absolute zero, switches can be made to operate at speeds as much as 10 times as fast as the best we have today. These devices will have heat dissipations of perhaps 1/1000th those of present ones.

Still another elegant technology already on the market is that of so-called magnetic bubbles—i.e., magnetic domains on a thin sheet of magnetic material that can be moved about by appropriate fields and their presence or absence detected. Potentially at least, this type of memory should be much faster than a disk; the rate at which data can be transferred is very high, but the cost is not yet low.

Happily, along with miniaturization have come great cost reductions, with corresponding large increases in the sizes of the memories and of the control circuitry available for computers. Indeed, the ENIAC can today be put upon a silicon chip 1 cm on a side. The packing density of electronic circuits is steadily increasing. It is believed that during the next 5 or 10 years it will be possible to buy commercially a memory chip containing 10⁶ bits. This has led, as we all know, to the extremely rapid development of a new subfield, the microprocessor field. That it will have far-flung applications is already quite evident. One of the most obvious changes in our society caused by this new technology is the number of shops selling "home computers."

Another impressive trend is the coalescing of the fields of telecommunications and computers. This has come about both because data are now quite valuable and because satellite technology has made it possible to offer data for sale worldwide, just like any other commodity. In this connection, the use of fiber optics is likely to lead to elegant new applications in the computer/telecommunicating field.

Spectacular as has been the development of hardware for computers, the development of programming has, if anything, been comparably exciting. It has been estimated that as much as 100 billion dollars has been spent in the field of programming in the last 30 years. It is estimated that the cost of all computers installed across the world is about the same amount. We thus see the importance of programming to our world society. This importance will probably continue to increase as we become ever more dependent upon computer-stored information of all sorts. Indeed, modern business now is so structured that the computer is no longer just a useful adjunct—it is an integral part of very many businesses and provides ever-increasing services to customers.

One of the greatest challenges facing us is the task of learning how to bring down the costs of programming in some major way. The field is unhappily highly labor-intensive and as such is an expensive business. Because the costs of this activity are going up all the time, there are and will continue to be great efforts made to automate substantial parts. To what extent and when this will materialize is a major problem of the industry.

It is dangerous to conjecture how rapidly new technologies will appear, but in the beginning of the next century the speeds of large modern computers could be 2 orders of magnitude greater than they are at present. Such speeds would imply almost necessarily very large storage capacities so that the flow of data in perhaps 25 years will be extremely large.

The problem of how to design, construct, build, and test the circuits implicit in such machines is a challenging one, which has certainly to be faced before progress can be made; it must also be realized that the development of these machines will require great advances in programming as well. Indeed, unless these problems receive satisfactory solutions there will be no great advances.


Footnote

* Presented on 21 April 1980 at the Annual Meeting of the National Academy of Sciences of the United States of America.

References

  1. Goldstine, H. H. (1972) The Computer from Pascal to von Neumann (Princeton Univ. Press, Princeton, NJ).

  2. Trevelyan, G. M. (1966) British History in the Nineteenth Century and After, 1782-1919.


n5321 | 2025-07-03 11:37

Can Digital Computers Think? (1951) — Alan Turing on AI

Once the media hypes up a concept, the notion of AI gets muddied, with much that is fantasy taken for reality. Many years ago the great man himself gave this BBC talk. So from the outset this text was addressed to laypeople, yet it comes from a founding figure of the field, which makes it suitably accessible, plain-spoken, incisive, and profound.


Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not. One mathematician has expressed the opposite point of view to me rather forcefully in the words ‘It is commonly said that these machines are not brains, but you and I know that they are.’ In this talk I shall try to explain the ideas behind the various possible points of view, though not altogether impartially. I shall give most attention to the view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains. A different point of view has already been put by Professor Hartree.

First we may consider the naive point of view of the man in the street. He hears amazing accounts of what these machines can do: most of them apparently involve intellectual feats of which he would be quite incapable. He can only explain it by supposing that the machine is a sort of brain, though he may prefer simply to disbelieve what he has heard.

The majority of scientists are contemptuous of this almost superstitious attitude. They know something of the principles on which the machines are constructed and of the way in which they are used. Their outlook was well summed up by Lady Lovelace over a hundred years ago, speaking of Babbage’s Analytical Engine. She said, as Hartree has already quoted, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ This very well describes the way in which digital computers are actually used at the present time, and in which they will probably mainly be used for many years to come. For any one calculation the whole procedure that the machine is to go through is planned out in advance by a mathematician. The less doubt there is about what is going to happen the better the mathematician is pleased. It is like planning a military operation. Under these circumstances it is fair to say that the machine doesn’t originate anything.

There is however a third point of view, which I hold myself. I agree with Lady Lovelace’s dictum as far as it goes, but I believe that its validity depends on considering how digital computers are used rather than how they could be used. In fact I believe that they could be used in such a manner that they could appropriately be described as brains. I should also say that ‘If any machine can appropriately be described as a brain, then any digital computer can be so described.’

This last statement needs some explanation. It may appear rather startling, but with some reservations it appears to be an inescapable fact. It can be shown to follow from a characteristic property of digital computers, which I will call their universality. A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. It will not replace a bulldozer or a steam-engine or a telescope, but it will replace any rival design of calculating machine, that is to say any machine into which one can feed data and which will later print out results. In order to arrange for our computer to imitate a given machine it is only necessary to programme the computer to calculate what the machine in question would do under given circumstances, and in particular what answers it would print out. The computer can then be made to print out the same answers.
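Turing's universality argument can be illustrated with a toy sketch (mine, not his): a single general simulator that, given the description of any machine in some class as data, reproduces that machine's outputs. The simulator itself never changes; only its "programme" does.

```python
# A toy illustration of universality: one fixed simulator imitates any
# finite-state transducer described purely as data.

def simulate(table, start, inputs):
    """Run any machine whose transition table maps
    (state, symbol) -> (next_state, output)."""
    state, outputs = start, []
    for sym in inputs:
        state, out = table[(state, sym)]
        outputs.append(out)
    return outputs

# One particular machine -- a running parity checker -- given as data.
parity = {
    ("even", 0): ("even", "even"),
    ("even", 1): ("odd", "odd"),
    ("odd", 0): ("odd", "odd"),
    ("odd", 1): ("even", "even"),
}
print(simulate(parity, "even", [1, 1, 0, 1]))  # ['odd', 'even', 'even', 'odd']
```

Any other machine of this class is imitated by handing `simulate` a different table; the general program need not change, just as Turing argues the computer need not.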

If now some particular machine can be described as a brain we have only to programme our digital computer to imitate it and it will also be a brain. If it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.

This argument involves several assumptions which can quite reasonably be challenged. I have already explained that the machine to be imitated must be more like a calculator than a bulldozer. This is merely a reflection of the fact that we are speaking of mechanical analogues of brains, rather than of feet or jaws. It was also necessary that this machine should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.

Another assumption was that the storage capacity of the computer used should be sufficient to carry out the prediction of the behaviour of the machine to be imitated. It should also have sufficient speed. Our present computers probably have not got the necessary storage capacity, though they may well have the speed. This means in effect that if we wish to imitate anything so complicated as the human brain we need a very much larger machine than any of the computers at present available. We probably need something at least a hundred times as large as the Manchester Computer. Alternatively of course a machine of equal size or smaller would do if sufficient progress were made in the technique of storing information.

It should be noticed that there is no need for there to be any increase in the complexity of the computers used. If we try to imitate ever more complicated machines or brains we must use larger and larger computers to do it. We do not need to use successively more complicated ones. This may appear paradoxical, but the explanation is not difficult. The imitation of a machine by a computer requires not only that we should have made the computer, but that we should have programmed it appropriately. The more complicated the machine to be imitated the more complicated must the programme be.

This may perhaps be made clearer by an analogy. Suppose two men both wanted to write their autobiographies, and that one had had an eventful life, but very little had happened to the other. There would be two difficulties troubling the man with the more eventful life more seriously than the other. He would have to spend more on paper and he would have to take more trouble over thinking what to say. The supply of paper would not be likely to be a serious difficulty, unless for instance he were on a desert island, and in any case it could only be a technical or a financial problem. The other difficulty would be more fundamental and would become more serious still if he were not writing his life but a work on something he knew nothing about, let us say about family life on Mars. Our problem of programming a computer to behave like a brain is something like trying to write this treatise on a desert island. We cannot get the storage capacity we need: in other words we cannot get enough paper to write the treatise on, and in any case we don’t know what we should write down if we had it. This is a poor state of affairs, but, to continue the analogy, it is something to know how to write, and to appreciate the fact that most knowledge can be embodied in books.

In view of this it seems that the wisest ground on which to criticise the description of digital computers as ‘mechanical brains’ or ‘electronic brains’ is that, although they might be programmed to behave like brains, we do not at present know how this should be done. With this outlook I am in full agreement. It leaves open the question as to whether we will or will not eventually succeed in finding such a programme. I, personally, am inclined to believe that such a programme will be found. I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine. I am imagining something like a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated. This only represents my opinion; there is plenty of room for others.

There are still some difficulties. To behave like a brain seems to involve free will, but the behaviour of a digital computer, when it has been programmed, is completely determined. These two facts must somehow be reconciled, but to do so seems to involve us in an age-old controversy, that of ‘free will and determinism’. There are two ways out. It may be that the feeling of free will which we all have is an illusion. Or it may be that we really have got free will, but yet there is no way of telling from our behaviour that this is so. In the latter case, however well a machine imitates a man’s behaviour it is to be regarded as a mere sham. I do not know how we can ever decide between these alternatives but whichever is the correct one it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behaviour depend on something like a roulette wheel or a supply of radium. The behaviour of these may perhaps be predictable, but if so, we do not know how to do the prediction.

It is, however, not really even necessary to do this. It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. Naturally enough the inclusion of this random element, whichever technique is used, does not solve our main problem, how to programme a machine to imitate a brain, or as we might say more briefly, if less accurately, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do. We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do.

Let us now reconsider Lady Lovelace’s dictum. ‘The machine can do whatever we know how to order it to perform.’ The sense of the rest of the passage is such that one is tempted to say that the machine can only do what we know how to order it to perform. But I think this would not be true. Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. One does not need to be able to understand how these orders lead to the machine’s subsequent behaviour, any more than one needs to understand the mechanism of germination when one puts a seed in the ground. The plant comes up whether one understands or not. If we give the machine a programme which results in its doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us.

I will not attempt to say much about how this process of ‘programming a machine to think’ is to be done. The fact is that we know very little about it, and very little research has yet been done. There are plentiful ideas, but we do not yet know which of them are of importance. As in the detective stories, at the beginning of the investigation any trifle may be of importance to the investigator. When the problem has been solved, only the essential facts need to be told to the jury. But at present we have nothing worth putting before a jury. I will only say this, that I believe the process should bear a close relation to that of teaching.

I have tried to explain what are the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe that it is for any of the reasons that I have given, or any other rational reason, but simply because they do not like the idea. One can see many features which make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat. This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety.

It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set. But I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category. The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.


n5321 | 2025-06-30 00:01

Add 404 and 500 page

A 404 page means the requested page does not exist.

A 500 page means a runtime error occurred in your backend code or a template and you did not handle it; Django then returns its default "Server Error" page by itself.

Which page is shown is controlled by DEBUG = False or True in settings.py.

Implementation:

  1. Prepare the 404 page and the 500 page.

  2. In settings.py:

    1. Change the static statement DEBUG = True into a dynamic one:
    2. DEBUG = os.environ.get('DJANGO_DEBUG', '') != 'False'
    3. Set the environment variable.

    4. # Linux/macOS
      export DJANGO_DEBUG=False
      python manage.py runserver

      # Windows CMD
      set DJANGO_DEBUG=False
      python manage.py runserver

      # Windows PowerShell
      $env:DJANGO_DEBUG = "False"
      python manage.py runserver
    5. The settings above are enough on Windows. On Ubuntu the Gunicorn configuration also needs changing:

      1. Edit Gunicorn's systemd service file.

      2. In the [Service] section, add: Environment="DJANGO_DEBUG=False"

    6. Add views.py to the project directory, containing:

      1. from django.shortcuts import render

        def custom_404(request, exception):
            return render(request, 'home/page_error_404.html', status=404)
    7. In the project's urls.py, add:

      1. handler404 = 'mysite.views.custom_404'
    8. The 500 page is even simpler: name the page 500.html and put it directly in the template directory (Django searches for it automatically).
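The environment-variable expression in step 2 can be sanity-checked on its own. A minimal sketch — the helper name `debug_from_env` is made up for illustration, only the expression itself comes from the note:

```python
def debug_from_env(env):
    # DEBUG stays True unless DJANGO_DEBUG is set to the exact string 'False'
    return env.get('DJANGO_DEBUG', '') != 'False'

print(debug_from_env({}))                         # True: variable unset, debug stays on
print(debug_from_env({'DJANGO_DEBUG': 'False'}))  # False: production
print(debug_from_env({'DJANGO_DEBUG': 'false'}))  # True: the comparison is case-sensitive
```

One pitfall: exporting DJANGO_DEBUG=false (lowercase) does not turn debug off, because the comparison is against the exact string 'False'.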


Final result:



n5321 | 2025-06-27 16:52

Accounts App

rethinking multiuser site

Tried out the signup page I wrote a while ago. Full of surprises.

1. Registration failed! I tried several usernames before one finally went through.

2. After registering, the account has to be activated by email. Activation eventually worked, but I had forgotten almost all of the logic in between.

The activation mail still looks good, though:

The multiuser question can be pushed back a little!

The test-driven design question, on the other hand, is worth thinking through first!




n5321 | 2025-06-26 16:09

temp0626

A git problem

Background:

To track requests, I split the database: the parts that track user requests went into a separate db, named track.sqlite3.

Then I added this db to .gitignore. The goal is for both the development and production environments to have a track.sqlite3 db, but without syncing the data.

Because this db used to be tracked and suddenly is not, its cache had to be removed — but I could not remember which step actually deletes that cache.

In any case, while redoing the earlier steps, I ended up restoring it!

Then, wanting to add a new homepage, I decided to try a git branch,

so I created a new branch, newHome,

and added and removed some things on the site.

It did not feel worthwhile, so I wanted to go back to master and merge newHome. After the merge the problem appeared:

track.sqlite3 was gone!

Why?!

Somewhere along the way it deleted track.sqlite3 once, and that was that!
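For the record, the usual way to stop tracking an already-committed file while keeping the local copy is `git rm --cached`. A sketch in a throwaway repo (paths and commit messages are illustrative):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"

# the db file was committed once, so git tracks it
touch track.sqlite3
git add track.sqlite3
git commit -q -m "add track.sqlite3"

# adding it to .gitignore alone does nothing for already-tracked files:
# first remove it from the index (--cached keeps the working copy)
git rm --cached -q track.sqlite3
echo "track.sqlite3" >> .gitignore
git add .gitignore
git commit -q -m "stop tracking track.sqlite3"

ls track.sqlite3                                   # still on disk
git check-ignore track.sqlite3 && echo "now ignored"
```

A plain `git rm` (without `--cached`) deletes the working copy too, which is one way a merge or cleanup step can make the file vanish.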


n5321 | 2025-06-26 00:53

fix db manage bug

Using SQLite for now.

PyCharm has a plugin: Database Tools and SQL.

It had always worked well. Then one day the db would not open and kept raising:

Driver class 'org.slf4j.LoggerFactory' not found

Baffling!

Project A was fine, Project B was not!

After most of a day of trying, the logic became clear.

The "Database Tools and SQL" plugin manages database connections through JDBC. It has three sections: Data Sources, Drivers, and DDL Mappings.

On a Data Source's General tab you can choose the driver.

Drivers is where drivers are configured; SQLite was configured there.

The problem was in the Driver Files section.

Custom JARs and a library path need to be added there.

"Driver class 'org.slf4j.LoggerFactory' not found" means JARs are missing. It actually asks for two JARs to be added:

slf4j-api-2.0.9.jar and slf4j-simple-2.0.9.jar

An earlier fix attempt had added slf4j-api-2.0.9.jar, but as a library path. The result: "Database Tools and SQL" could not manage the db!

The working version:

(screenshot)


Why does PyCharm 2023 show "closing project" on exit, with the window hanging around for a long time?

In PyCharm: Help -> Find Action -> type Registry -> disable ide.await.scope.completion


n5321 | 2025-06-24 22:59

Nginx Log

goaccess can track requests.

But I could not tell where the logged IP addresses were from.

Attempt one: add --enable-geoip --enable-geo-resolver on the command line.

Install:

sudo apt install geoip-bin geoip-database

No effect.

Wrote a script with nano:

#!/bin/bash
echo "🔍 Checking whether GoAccess is installed..."
if ! command -v goaccess &> /dev/null; then
  echo "❌ GoAccess not found, installing the packaged version..."
  sudo apt update
  sudo apt install -y goaccess
fi
echo "✅ GoAccess is installed, checking version and GeoIP support..."
goaccess --version | grep -q "GeoIP2 support"
if [ $? -eq 0 ]; then
  echo "🎉 This GoAccess already supports GeoIP2, nothing to fix."
  exit 0
else
  echo "⚠️ This GoAccess has no GeoIP support, building a GeoIP2-enabled version..."
fi
# install build dependencies
echo "📦 Installing dependencies..."
sudo apt update
sudo apt install -y build-essential libncursesw5-dev libgeoip-dev \
  libmaxminddb-dev libtokyocabinet-dev git autotools-dev automake
# clone the source
echo "📥 Downloading the latest GoAccess source..."
cd ~
rm -rf goaccess  # avoid conflicts with an old checkout
git clone https://github.com/allinurl/goaccess.git
cd goaccess
echo "🔧 Building GoAccess with GeoIP2 support..."
autoreconf -fi
./configure --enable-utf8 --enable-geoip=mmdb
make -j$(nproc)
sudo make install
# verify the build
echo "✅ Build finished, checking GeoIP2 support:"
goaccess --version | grep GeoIP2 && echo "✅ GoAccess with GeoIP2 support installed!" || echo "❌ Install failed, please check manually"
# tell the user where the database goes
echo ""
echo "📍 You need to download MaxMind's GeoLite2-City.mmdb database:"
echo "1. Visit: https://dev.maxmind.com/geoip/geolite2/"
echo "2. Register an account and download GeoLite2-City.mmdb"
echo "3. Save it to e.g. /usr/local/share/GeoIP/GeoLite2-City.mmdb"
echo ""
echo "📊 Afterwards you can run goaccess like this:"
echo "  zcat /var/log/nginx/access.log.*.gz | goaccess \\"
echo "    --log-format=COMBINED \\"
echo "    --geoip-database /usr/local/share/GeoIP/GeoLite2-City.mmdb \\"
echo "    -o report.html"

Download GeoLite2-City.mmdb:

sudo mkdir -p /usr/local/share/GeoIP
sudo cp GeoLite2-City.mmdb /usr/local/share/GeoIP/
sudo chmod 644 /usr/local/share/GeoIP/GeoLite2-City.mmdb

Done.

Fourteen days of hits!




n5321 | 2025-06-11 23:52

Books App

The goal is a platform for sharing books, tools, and so on that carry real value for engineers.

The technical framework is Django, plus a few third-party tools. The key tool is filer.

  1. Frontend issue: clicking an image should open the detail page, not display the image!
  2. Several settings inside django filer need to be changed
  3. The model currently has 3 classes: Category, Document, and Note


n5321 | 2025-05-20 22:04