Youry's Blog

Archive for the ‘Computer Science’ Category

The end of software engineering and the last methodologist

Copy from Bertrand Meyer’s technology+ blog

(Reposted from the CACM blog [*].)

Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to repent and mend your ways. Consumed (to paraphrase Mark Twain) by the haunting fear that someone, somewhere, might actually enjoy coding.

That was long ago. With a few exceptions, including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often “I always say…”. (Example from a consultant’s page: “I always tell teams: ‘I’d like a [user] story to be small, to fit in one iteration, but that isn’t always the way.’”) Dijkstra did appeal to gut feeling, but he backed it with strong conceptual arguments.

The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and width. Conferences such as ICSE and ESEC still attract a good crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously  “Artificial Intelligence, Data Mining, and Machine Learning”, which in those top-100 universities got 23% of hires.

Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own? That is crazy. Software engineering is more important than ever. The recent Atlantic “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or, for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: “this revealed a hidden problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle”).

Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to check the news in the hope that a major software-induced disaster has occurred, finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are lying out there, just waiting for the right circumstances.

So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)

What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe, we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a public auditing of any such event; to quote myself: airline transportation did not become safer by accident but by accidents. Such admonitions fall on deaf ears. As another sign of waning interest, many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?

Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.

None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats: winter is upon us.


[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.

[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.

[3] This time borrowing a phrase from James Noble.

[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This is my practice so far, and it might change since it raises obvious criticism, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles, regardless of what happens to other sites. (C) I can use my preferred style conventions.


Written by youryblog

January 25, 2018 at 4:01 PM

The value of university

 And the winners are…

Four-year non-vocational American colleges, ranked by alumni earnings above expectation

Our first-ever college rankings Oct 29th 2015, 15:41 BY D.R.


Rank %ile College State Expected earnings Median earnings Over/Under
1 99 Washington and Lee University VA
2 99 Babson College MA
3 99 Villanova University PA
4 99 Harvard University MA
5 99 Bentley University MA
6 99 Otis College of Art and Design CA
7 99 Lehigh University PA
8 99 Alderson Broaddus University WV
9 99 Texas A & M International University TX
10 99 California State University-Bakersfield CA
11 99 Holy Family University PA
12 99 University of the Pacific CA
13 99 University of Saint Joseph CT
14 99 Bucknell University PA
15 98 University of Pennsylvania PA
16 98 Georgetown University DC
17 98 Drake University IA
18 98 Rensselaer Polytechnic Institute NY
19 98 California Lutheran University CA
20 98 California State University-Stanislaus CA
Sources: US Department of Education; The Economist

American universities claim to hate the simplistic, reductive college rankings published by magazines like US News, which wield ever-growing influence over where students attend. Many have even called for an information boycott against the authors of such ratings. Among the well-founded criticisms of these popular league tables is that they do not measure how much universities help their students, but rather what type of students choose to attend each college. A well-known economics paper by Stacy Dale and Alan Krueger found that people who attended elite colleges do not make more money than do workers who were accepted to the same institutions but chose less selective ones instead—suggesting that former attendees and graduates of Harvard tend to be rich because they were already intelligent and hard-working before they entered college, not because of the education or opportunities the university provided.

On September 12th America’s Department of Education unveiled a “college scorecard” website containing a cornucopia of data about universities. The government generated the numbers by matching individuals’ student-loan applications to their subsequent tax returns, making it possible to compare pupils’ qualifications and demographic characteristics when they entered college with their salaries ten years later. That information offers the potential to disentangle student merit from university contributions, and thus to determine which colleges deliver the greatest return and why.

The Economist’s first-ever college rankings are based on a simple, if debatable, premise: the economic value of a university is equal to the gap between how much money its students subsequently earn, and how much they might have made had they studied elsewhere. Thanks to the scorecard, the first number is easily accessible. The second, however, can only be estimated. To calculate this figure, we ran the scorecard’s earnings data through a multiple regression analysis, a common method of measuring the relationships between variables.

We wanted to know how a wide range of factors would affect the median earnings in 2011 of a college’s former students. Most of the data were available directly from the scorecard: for the entering class of 2001, we used average SAT scores, sex ratio, race breakdown, college size, whether a university was public or private, and the mix of subjects students chose to study. There were 1,275 four-year, non-vocational colleges in the scorecard database with available figures in all of these categories. We complemented these inputs with information from other sources: whether a college is affiliated with the Catholic Church or a Protestant Christian denomination; the wealth of its state (using a weighted average of Maryland, Virginia and the District of Columbia for Washington) and prevailing wages in its city (with a flat value for colleges in rural areas); whether it has a ranked undergraduate business school (and is thus likely to attract business-minded students); the percentage of its students who receive federal Pell grants, given to working-class students (a measure of family income); and whether it is a liberal-arts college. Finally, to avoid penalising universities that tend to attract students who are disinclined to pursue lucrative careers, we created a “Marx and Marley index”, based on colleges’ appearances during the past 15 years on the Princeton Review’s top-20 lists for political leftism and “reefer madness”. (For technically minded readers, all of these variables were statistically significant at the 1% level, and the overall r-squared was .8538, meaning that 85% of the variation in graduate salaries between colleges was explained by these factors. We also tested the model using 2009 earnings figures rather than 2011, and for the entering class of 2003 rather than 2001, and got virtually identical results.)
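The regression approach described above can be sketched in a few lines. This is an illustrative toy only: the predictors, coefficients, and data below are invented, not the scorecard’s, and the actual model used many more variables.

```python
# Toy sketch of a "value-added" regression: predict each college's median
# earnings from observable inputs, then score over/under-performance as the
# gap between actual and predicted earnings. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1275  # number of colleges in the article's sample

# Hypothetical predictors (the article uses many more):
sat = rng.normal(1100, 120, n)        # average incoming SAT score
pell = rng.uniform(0.1, 0.6, n)       # share of Pell-grant recipients
public = rng.integers(0, 2, n)        # 1 if public, 0 if private

# Synthetic "actual" median earnings with noise.
earnings = 20000 + 30 * sat - 15000 * pell + 2000 * public \
    + rng.normal(0, 3000, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), sat, pell, public])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)

expected = X @ beta               # model's expected earnings per college
over_under = earnings - expected  # positive = college over-performs

# R-squared: share of earnings variance explained by the predictors
# (the article reports 0.8538 for its richer model).
ss_res = np.sum(over_under**2)
ss_tot = np.sum((earnings - earnings.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(round(r2, 3))
```

Ranking colleges by `over_under` then reproduces the shape of the table above: the “Expected earnings” column is `expected`, and “Over/Under” is the residual.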

After feeding this information into the regression, our statistical software produced an estimate for each college, based exclusively on these factors, of how much money its former students would make. Its upper tiers are dominated by colleges that emphasise engineering (such as Worcester Polytechnic) and attract students with high SAT scores (like Stanford). The lower extreme is populated by religious and art-focused colleges, particularly those in the South and Midwest. This number represents the benchmark against which we subsequently compare each college’s earnings figure to produce the rankings. The bar is set extremely high for universities like Caltech, which are selective, close to prosperous cities and teach mainly lucrative subjects. If their students didn’t go on to extremely high-paying careers, the college would probably be doing something gravely wrong. Conversely, a southern art school with low-scoring, working-class students, such as the Memphis College of Art, might actually be giving its pupils a modest economic boost even though they earn a paltry $26,700 a year a decade after enrolment: workers who attended a typical college with its profile would make about $1,000 less.

The sortable table above lists the key figures for all 1,275 institutions in our study that remain open. The first column contains the median post-enrolment salary that our model predicts for each college, the second its actual median earnings, and the third its over- or under-performance. Clicking on a university pops up a window that shows the three factors with the biggest effect on the model’s expectation. For example, Caltech’s forecast earnings increase by $27,114 as a result of its best-in-the-country incoming SAT scores, another $9,234 thanks to its students’ propensity to choose subjects like engineering, and a further $2,819 for its proximity to desirable employers in the Los Angeles area.

In an unexpected coincidence, it has come to our attention that the Brookings Institution, a think-tank in Washington, happens to have published its own “value-added” rankings using the scorecard data on the exact same day that we did (October 29th). Although their approach was broadly similar to ours, they looked at a much larger group of universities (including two-year colleges and vocational schools), and they appear to have used a very different set of variables. Above all, the Brookings numbers regard a college’s curriculum as a significant part of its “value add”, causing the top of its rankings to be dominated by engineering schools, and the bottom by art and religious institutions. In contrast, we treated fields of study as a reflection of student preferences, and tried to identify the colleges that offer the best odds of earning a decent living for people who do want to become artists or study in a Christian environment. Similarly, the Brookings rankings do not appear to weight SAT scores nearly as heavily as ours do, if they count them at all: colleges like Caltech and Yale, whose students subsequently earn far more money than those of an average university but significantly less than their elite test results would indicate, sit at the very bottom of The Economist’s list, whereas Brookings puts them close to the top.

It is important to clarify how our rankings should be interpreted. First, the scorecard data suffer from limitations. They only include individuals who applied for federal financial aid, restricting the sample to a highly unrepresentative subset of students that leaves out the children of most well-off parents. And they only track students’ salaries for ten years after they start college, cutting off their trajectory at an age when many eventual high earners are still in graduate school and thus excluded from the sample of incomes. A college that produces hordes of future doctors will have far lower listed earnings in the database than one that generates throngs of, say, financial advisors, even though the two groups’ incomes are likely to converge in their 30s.

Second, although we hope that our numbers do in fact represent the economic value added by each institution, there is no guarantee that this is true. Colleges whose earnings results differ vastly from the model’s expectations might be benefiting or suffering from some other characteristic of their students that we neglected to include in our regression: for example, Gallaudet University, which ranks third-to-last, is a college for the deaf (which is why we excluded it from our table in print). It is also possible that highly ranked colleges simply got lucky, and that their future students are unlikely to make as much money as the entering class of 2001 did.

Finally, maximising earnings is not the only goal of a college, and probably not even the primary one. In fact, you could easily argue that “underperforming” universities like Yale and Swarthmore are actually making a far greater contribution to American society than overperformers like Washington & Lee, if they tend to channel their supremely talented graduates towards public service rather than Wall Street. For students who want to know which colleges are likely to boost their future salaries by the greatest amount, given their qualifications and preferences regarding career and location, we hope these rankings prove helpful. They should not be used for any other purpose.

CORRECTION: An eagle-eyed commenter has alerted us that all 20 listed campuses of Pennsylvania State University appeared with the same median earnings. Other keen observers have noted irregularities regarding a handful of colleges with similar names in different states. In response, we have reviewed the scorecard database, consolidated all colleges with multiple campuses but a single listed salary figure, identified and distinguished universities with overlapping names, re-run the regression, and revised the rankings and the text of this blog post. As a result, the top and bottom ten colleges published in our print issue no longer exactly match the ones in these updated rankings. However, the vast majority of universities moved by no more than a handful of places. Additionally, we have removed references to “graduates” and “alumni”, to reflect the fact that the scorecard’s income data do not distinguish between graduates and students who enrolled but did not graduate.

Written by youryblog

April 5, 2016 at 10:35 PM

How To Get Hired — What CS Students Need to Know & A Future for Computing Education Research

  1. A Future for Computing Education Research (full text)

  2. How To Get Hired — What CS Students Need to Know, from:

I’ve hired dozens of C/C++ programmers (mostly at the entry level). To do that, I had to interview hundreds of candidates. Many of them were woefully poorly prepared for the interview. This page is my attempt to help budding software engineers get and pass programming interviews.


What Interviewers are Tired Of

A surprisingly large fraction of applicants, even those with master’s degrees and PhDs in computer science, fail during interviews when asked to carry out basic programming tasks. For example, I’ve personally interviewed graduates who can’t answer “Write a loop that counts from 1 to 10” or “What’s the number after F in hexadecimal?” Less trivially, I’ve interviewed many candidates who can’t use recursion to solve a real problem. These are basic skills; anyone who lacks them probably hasn’t done much programming.

Speaking on behalf of software engineers who have to interview prospective new hires, I can safely say that we’re tired of talking to candidates who can’t program their way out of a paper bag. If you can successfully write a loop that goes from 1 to 10 in every language on your resume, can do simple arithmetic without a calculator, and can use recursion to solve a real problem, you’re already ahead of the pack!
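For what it’s worth, the three basics above take only a few lines in any language. Here is one possible Python rendering; the nested-directory example for recursion is my own choice of “real problem”, not the essay’s.

```python
# Minimal sketches of the interview basics mentioned above.

# 1. A loop that counts from 1 to 10.
for i in range(1, 11):
    print(i)

# 2. The number after F in hexadecimal: F is 15, so the next value is 16,
#    which is written "10" in base 16.
after_f = int("F", 16) + 1
print(hex(after_f))  # 0x10

# 3. Recursion on a real problem: total size of a nested directory tree,
#    represented here as dicts (files map to sizes, dirs to sub-dicts).
def total_size(tree):
    return sum(total_size(v) if isinstance(v, dict) else v
               for v in tree.values())

fs = {"readme.txt": 120, "src": {"main.py": 800, "util": {"io.py": 300}}}
print(total_size(fs))  # 1220
```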

What Interviewers Look For

As Joel Spolsky wrote in his excellent essay The Guerrilla Guide to Interviewing:

1. Employers look for people who are Smart and Get Things Done

How can employers tell who gets things done? They go on your past record. Hence:

2. Employers look for people who Have Already Done Things

Or, as I was told once when I failed an interview:

3. You can’t get a job doing something you haven’t done before

(I was interviewing at HAL Computers for a hardware design job, and they asked me to implement a four-bit adder. I’d designed lots of things using discrete logic, but I’d always let the CPU do the math, so I didn’t know offhand. Then they asked me how to simulate a digital circuit quickly. I’d been using Verilog, so I talked about event simulation. The interviewer reminded me about RTL simulation, and then gently said the line I quoted above. I’ll never forget it.)
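For readers curious about the question in that anecdote: a four-bit adder is conventionally built as a ripple-carry chain of one-bit full adders. The sketch below is my own gate-level illustration in Python, not the answer the interviewers were looking for.

```python
# A gate-level model of a four-bit ripple-carry adder: four one-bit full
# adders chained through their carries.
def full_adder(a, b, cin):
    """One-bit full adder from XOR/AND/OR; returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def four_bit_adder(a_bits, b_bits, cin=0):
    """Add two 4-bit numbers given as [LSB..MSB] bit lists."""
    out, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry  # four sum bits plus the final carry-out

# 0b0110 (6) + 0b0111 (7) = 0b1101 (13), no carry out.
bits, cout = four_bit_adder([0, 1, 1, 0], [1, 1, 1, 0])
print(bits, cout)  # [1, 0, 1, 1] 0
```

The carry “ripples” from the least significant stage to the most significant one, which is exactly why this design is simple but slow compared with carry-lookahead adders.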

Finally, you may even find that

4. Employers Google your name to see what you’ve said and done

What This Means For You

What the above boil down to is: if you want to get a job programming, you have to do some real programming on your own first, and you have to get a public reputation, however minor, as a programmer. Don’t wait for your school to teach you how to design and program; they might never get around to it. College courses in programming are fine, probably even necessary, but most programming courses don’t provide the kind of experience that real programming gives, and real employers look for real programming experience.

Malcolm Gladwell wrote in Outliers,

… Ten thousand hours of practice is required to achieve the level of mastery associated with being a world-class expert — in anything.

Seems about right to me. I don’t know how many hours it takes to achieve the level of mastery required to program well enough to do a good job, but I bet it’s something like 500. (I think I had been programming or doing digital logic in one way or another for about 500 hours before my Dad started giving me little programming jobs in high school. During my five years of college, I racked up something like several hundred hours programming for fun, several hundred more in CS/EE lab courses, and about two thousand on paid summer jobs.)

But How Can I Get Experience Without a Job?

If you’re in college, and your school offers programming lab courses where you work on something seriously difficult for an entire term, take those courses. Take several of this kind of course if you can; each big project you design and implement will be good experience.

Whether or not you’re in college, nothing is stopping you from contributing to an existing Open Source project. One good way to start is to add unit or regression tests; nearly all projects need them, but few projects have a good set of them, so your efforts will be greatly appreciated.
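As a concrete illustration of that advice, here is the shape a small regression test might take. The `slugify` function and the bug it pins down are hypothetical, and any real project will have its own test framework and conventions.

```python
# A small regression test of the kind an open-source project might welcome.
import unittest

def slugify(title):
    # Imagined project function under test: turns a title into a URL slug.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_regression_extra_spaces(self):
        # Pins down a (hypothetical) previously reported bug: runs of
        # spaces must not produce empty slug segments like "a--b".
        self.assertEqual(slugify("a  b"), "a-b")

if __name__ == "__main__":
    unittest.main(exit=False)
```

A test like this is valuable even when it passes: it documents intended behavior and guards against the bug quietly coming back.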

I suggest starting by adding a conformance test to the Wine project. That’s great because it gives you exposure to programming both in Linux and in Windows. Also, it’s something that can be done without a huge investment of time; roughly 40 work hours should be enough for you to come up to speed, write a simple test, post it, address the feedback from the Wine developers, and repeat the last two steps until your code is accepted.

One nice benefit of getting code into an Open Source project is that when prospective employers Google you, they’ll see your code, and they’ll see that it is in use by thousands of people, which is always good for your reputation.

Quick Reality Check

If you want a quick reality check as to whether you can Get Things Done, I recommend the practice rooms at programming-contest sites. If you can complete one of their tasks in C++ or Java within an hour, and your solution actually passes all the system tests, you can definitely program your way out of a paper bag!

Here’s another good quick reality check, one closer to my heart.

Good luck!

Please let me know whether this essay was helpful. You can email me at dank at

Shameless Plug

I’m looking for a few good interns. If you live in Los Angeles, and you are looking for a C/C++ internship, please have a look at my internship page.


Written by youryblog

January 17, 2015 at 6:40 PM

Over 70% of the cost (time) of developing a program goes out after it has been released +

Thu, 1 Jan 2015

Actually I found that usually the ones that find it the most fascinating write the least legible code, because they never bother with software engineering and design.

You can get a high-school whiz kid to write the fastest code there is, but there is no way you will be able to change anything about it five minutes later.

Considering that over 70% of the cost (time) of developing a program goes out after it has been released, when changes start to be asked for, that is a problem.

Micha Feigin
Csail-related mailing list

Interesting view on students’ grades: Dear Student: No, I Won’t Change the Grade You Deserve

Written by youryblog

January 2, 2015 at 10:04 PM

The Tears of Donald Knuth

Donald Knuth

In this column I will be looking at the changing relationship between the discipline of computer science and the growing body of scholarly work on the history of computing, beginning with a recent plea made by renowned computer scientist Donald Knuth. This provides an opportunity to point you toward some interesting recent work on the history of computer science and to think more broadly about what the history of computing is, who is writing it, and for whom they are writing.

Last year historians of computing heard an odd rumor: that Knuth had given the Kailath lecture at Stanford University and spent the whole time talking about us. Its title, “Let’s Not Dumb Down the History of Computer Science,” was certainly intriguing, and its abstract confirmed that some forceful positions were being taken [a]. The online video eventually showed something remarkable: his lecture focused on a single paper, Martin Campbell-Kelly’s 2007 “The History of the History of Software” [6, b]. Reading it had deeply saddened Knuth, who “finished reading it only with great difficulty” through his tear-stained glasses.

What Knuth Said

Knuth began by announcing that, despite an aversion to confrontation, he would be “flaming” historians of computing. This, he worried “could turn out to be the biggest mistake of my life.” The bout might nevertheless be seen as a mismatch. Knuth is among the world’s most celebrated computer scientists, renowned for his ongoing project to classify and document families of algorithms in The Art of Computer Programming and for his creation of the TeX computerized typesetting system ubiquitous within computer science and mathematics. Campbell-Kelly has a similar prominence within the much smaller community of historians of computing but, even by Google Scholar’s generous definitions, the paper that saddened Knuth has been cited only nine times.

Knuth then enumerated his motivations, as a computer scientist, to read the history of science. First, reading history helped him to understand the process of discovery. Second, understanding the difficulty and false starts experienced by brilliant historical scientists in making discoveries that specialists now find obvious helped him to see what made concepts challenging to students and thus to become a “much better writer and teacher.” Third, appreciating the historical contribution of non-Western scientists helped in “celebrating the contributions of many cultures.” Fourth, history is the craft of telling stories, which is “the best way to teach, to explain something.” Fifth, the biographies of scientists teach tactics for a successful and rewarding career. Sixth, history teaches how human experience has changed over time. As humans we should care about that.

Knuth also identified some special contributions to the history of science that professionally trained historians are uniquely well placed to make. We are good at “smoking out” primary sources and putting historical activities in the context of broader timelines. He also appreciates our ability to translate papers written in languages that he cannot himself read. He finds attempts at historical analysis “probably the least interesting” aspects of our papers but appreciates lengthy quotations from primary sources.

Things then headed in a less positive direction. Knuth explained that Campbell-Kelly had centered his paper on a table of important works related to the history of software published between 1967 and 2004. It coded the predominant approaches into four categories—one of which was technical—to demonstrate the technical approach had been dominant until about 1990, dwindling thereafter and vanishing altogether after 1997. Campbell-Kelly characterized this as an “evolution” away from “technical histories” of the “low-hanging-fruit variety” written by Knuth and other “outstanding technical experts” that were “constrained, excessively technical, and lacking in breadth of vision.”

Knuth had previously viewed Campbell-Kelly as a kindred spirit but had now been granted a glimpse of “what historians say when they’re talking to historians instead of when they’re talking to people like me.” Without pausing to dry his glasses he had written to Campbell-Kelly to accuse him of having “lost faith in the notion that computer science is actually scientific.”

So why is the history of computer science not being written in the volume it deserves, or the manner favored by Knuth?

The shift described by Campbell-Kelly reflected a change in the population of scholars writing the history of computing. Many of the senior computing figures of the 1970s worked to preserve the history of the 1940s and early 1950s, starting with a number of “pioneer days” and workshops. The most important of these was held at Los Alamos National Laboratory in 1976 [15]. Most of the 90 participants included in the group photograph of attendees were computer pioneers of the 1940s. Knuth himself contributed a detailed history of the first tools for “automatic programming” (assemblers and compilers). He was one of a handful of interested younger computer scientists who entered the field in the 1950s, a group that also included Edsger Dijkstra and Brian Randell, a systems programmer turned academic who had assembled an important collection of reprinted historical documents. At the conference were only a handful of trained historians. The editorial board of Annals of the History of Computing, which began in 1979 as a publication of AFIPS, a long-defunct umbrella group for professional computing societies, had a similar makeup. As graduate students in history and history of science programs began to write dissertations on computer-related topics they eventually inverted the ratio of trained historians to computer scientists, though the journal continues to publish a significant number of papers by computer scientists and technical experts.

In his lecture Knuth worried that a “dismal trend” in historical work meant that “all we get nowadays is dumbed down” through the elimination of technical detail. According to Knuth “historians of math have always faced the fact that they won’t be able to please everybody.” He feels that other historians of science have succumbed to “the delusion that … an ordinary person can understand physics …”

I am going to tell you why Knuth’s tears were misguided, or at least misdirected, but first let me stress that historians of computing deeply appreciate his conviction that our mission is of profound importance. Indeed, one distinguished historian of computing recently asked me what he could do to get flamed by Knuth. Knuth has been engaged for decades with history. This is not one of his passionate interests outside computer science, such as his project reading verses 3:16 of different books of the Bible. Knuth’s core work on computer programming reflects a historical sensibility, as he tracks down the origin and development of algorithms and reconstructs the development of thought in specific areas. For years advertisements for IEEE Annals of the History of Computing, where Campbell-Kelly’s paper was published, relied on a quote from Knuth that it was the only publication he read from cover to cover. With the freedom to choose a vital topic for a distinguished lecture Knuth chose to focus on history rather than one of his better-known scientific enthusiasms such as literate programming or his progress with The Art of Computer Programming.

Back to Top

Computing vs. Computer Science

Here is where I part ways with Knuth’s interpretation. Campbell-Kelly’s article was “The History of the History of Software,” not “The History of the History of Computer Science.” Knuth’s complaint that historians have been led astray by fads and pursuit of a mass audience into “dumbed down” history reflects an assumption that computer science is the whole of computing, or at least the only part in which historians can find important questions about software. This conflates the history of computing with the history of computer science. Distinguished computer scientists are prone to blur their own discipline, and in particular its few dozen elite programs, with the much broader field of computing. The tools and ideas produced by computer scientists underpin all areas of IT and make possible the work carried out by network technicians, business analysts, help desk workers, and Excel programmers. That does not make those workers computer scientists. The U.S. alone is estimated to have more than 10 million “information technology workers,” about a hundred times more than the ACM’s membership. Vint Cerf has warned in Communications that even the population of “professional programmers” dwarfs the association’s membership.7 ACM’s share of the IT workforce has been in decline for a half-century, despite efforts begun back in the 1960s and 1970s by leaders such as Walter Carlson and Herb Grosch to broaden its appeal.

Computing is much bigger than computer science, and so the history of computing is much bigger than the history of computer science. Yet Knuth treated Campbell-Kelly’s book on the business history of the software industry (accurately subtitled “a history of the software industry”) and all the rest of the history of computing as part of “the history of computer science.”4 Others have written about the history of computer use in life insurance and other areas of business, the history of cybernetics, the history of the semiconductor industry, the history of punched card machines, the history of the IT workforce, the history of computer-producing companies such as IBM, the use and development of computers in particular countries, the history of the personal computer, and the history of computer usage in particular areas of scientific practice such as bio-medicine. To call such work “dumbed down” history of computer science, rather than smart history of many other things, is to misunderstand both the intentions and the accomplishments of its authors.

The truth is that regrettably little history of computer science, whether dumb or deep, has been written by trained historians even though the history of computing literature as a whole has been expanding rapidly. Consider our output between 1990 and 2010. Michael Mahoney, a historian of science and mathematics at Princeton University, worked on a narrative history of theoretical computer science but ultimately produced only a set of provocative but schematic papers.13 Mahoney was also interested in the history of software engineering, and several other historians have discussed the 1968 NATO Conference on Software Engineering at which that field was launched. Eminent sociologist of science Donald MacKenzie worked on the history of formal methods and its relationship to the development of computer technology.11,12 Two books explored the history of DARPA and its role in shaping the development of computer science and technology, though Knuth would not approve of their institutional focus.17,19 William Aspray wrote several papers on the history of NSF support for computing2 and a book on John von Neumann.1 A complete list would be longer, but not that much longer.

Back to Top

Historical Careers in Computer Science

So why is the history of computer science not being written in the volume it deserves, or the manner favored by Knuth? I am, at heart, a social historian of science and technology and so my analysis of the situation is grounded in disciplinary and institutional factors. Books of this kind would demand years of expert research and sell a few hundred copies. They would thus be authored by those not expected to support themselves with royalties, primarily academics.

Academic careers are profoundly shaped by the disciplinary communities in which they develop. Throughout their training, scholars are socialized into the culture of their field and pick up a wealth of tacit and explicit knowledge on what is expected of them. They learn how to select a research project, what kinds of work are noticed and which are ignored, what style to write in, how to structure a paper, which professors are respected, what search committees and grant review panels are looking for. This continues throughout their careers, as they aspire to prestigious awards, named chairs, or favors from the Dean. Whether they realize it or not, successful academics have internalized the rules of the game played in their particular field.

The history of computer science might be undertaken from two disciplinary base camps within academia: computer science and the history of science. Someone whose primary training is in history will naturally see the history of computing differently from someone whose disciplinary loyalty is to computer science. They will choose different topics and explore them in different ways for different audiences. For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.

Back to Top

Prospects within the History of Science

The history of science is a kind of history, which is in turn part of the humanities. Some historians of science are specialists within broad history departments, and others work in specialized programs devoted to science studies or to the history of science, technology, or medicine. In both settings, historians judge the work of prospective colleagues by the standards of history, not those of computer science. There are no faculty jobs earmarked for scholars with doctoral training in the history of computing, still less in the history of computer science. The persistently brutal state of the humanities job market means that search committees can shortlist candidates precisely fitting whatever obscure combination of geographical area, time period, and methodological approaches are desired. So a bright young scholar aspiring to a career teaching and researching the history of computer science would need to appear to a humanities search committee as an exceptionally well qualified historian of the variety being sought (perhaps a specialist in gender studies or the history of capitalism) who happens to work on topics related to computing.

This, more than anything else, explains the rise of the broad and non-technical approaches decried by Knuth. Work in the history of computing has been seen by most in the humanities as dull and provincial, excessively technical and devoid of big historical ideas. Whereas fields such as environmental history have produced widely recognized classics that convince non-specialists of their scholarly potential, historians of computing are still inching toward broad acceptance of their relevance. The roles Knuth outlined for historians would not serve them well, as they are essentially those of the research assistant: gather primary materials, translate them if necessary, and make them available to the computer scientists who will do the analysis.

Current enthusiasm for the “digital humanities” and the inescapable importance of computing to the modern world could provide opportunities. One day humanities search committees might even seek out historians of computing, but only those whose work engages with and appeals to scholars who themselves know nothing of computer science. In the meantime many scholars with doctorates in the history of computing have found work in museums or in academic employment outside both history and computer science, for example, in business schools, information schools, or specialist programs such as engineering education. These positions pose their own disciplinary challenges, but for obvious reasons provide few incentives to study the history of computer science.

Back to Top

Prospects within Computer Science

Thus the kind of historical work Knuth would like to read would have to be written by computer scientists themselves. Some disciplines support careers spent teaching history to their students and writing history for their practitioners. Knuth himself holds up the history of mathematics as an example of what the history of computing should be. It is possible to earn a Ph.D. within some mathematics departments by writing a historical thesis (euphemistically referred to as an “expository” approach). Such departments have also been known to hire, tenure, and promote scholars whose research is primarily historical. Likewise medical schools, law schools, and a few business schools have hired and trained historians. A friend involved in a history of medicine program recently told me that its Ph.D. students are helped to shape their work and market themselves differently depending on whether they are seeking jobs in medical schools or in history programs. In other words, some medical schools and mathematics departments have created a demand for scholars working on the history of their disciplines and in response a supply of such scholars has arisen.

As Knuth himself noted toward the end of his talk, computer science does not offer such possibilities. As far as I am aware no computer science department in the U.S. has ever hired as a faculty member someone who wrote a Ph.D. on a historical topic within computer science, still less someone with a Ph.D. in history. I am also not aware of anyone in the U.S. having been tenured or promoted within a computer science department on the basis of work on the history of computer science. Campbell-Kelly, now retired, did both things (earning his Ph.D. in computer science under Randell’s direction) but he worked in England where reputable computer science departments have been more open to “fuzzy” topics than their American counterparts. Neither are the review processes and presentation formats at prestigious computer conferences well suited for the presentation of historical work. Nobody can reasonably expect to build a career within computer science by researching its history.

In its early days the history of computing was studied primarily by those who had already made their careers and could afford to pursue historical interests from tenured positions or to dabble in them after retirement. Despite some worthy initiatives, such as the efforts of the ACM History Committee to encourage historical projects, the impulse to write technical history has not spread widely among younger generations of distinguished and secure computer scientists.

To summarize, the upper-right quadrant in the accompanying table is essentially empty. It reflects historical work forming the backbone of a scholarly career and intended as a contribution to computer science. I share Knuth’s regret that the technical history of computer science is greatly understudied. The main cause is that computer scientists have lost interest in preserving the intellectual heritage of their own discipline. It is not, as Knuth implies, that Campbell-Kelly is representative of a broader trend of individual researchers deciding to stop writing one kind of history and to devote a fixed pool of talent to writing another kind instead. There is no zero-sum game here. More work by professionally trained historians on social, institutional, and cultural aspects of computing does not have to mean less work by computer scientists themselves. They cannot count on history departments to do this for them, and I hope Knuth’s lament motivates a few to follow his lead in this area. Not simply because Knuth did it—few computer scientists have emulated him by procuring their own domestic pipe organs—but because his commitment to the intellectual history of computer science makes a powerful argument that historical knowledge of a particular kind is a prerequisite for deep technical understanding.

Back to Top

Reopening the Black Box

I will end on a positive note. In his paper, Campbell-Kelly offered a “biographical mea culpa” for his own early work that he now reads with a “mild flush of embarrassment.” He came to see his erstwhile enthusiasm for technical history as a youthful indiscretion and his conversion to business history as an act of redemption, paralleling his own development and that of the field in a way that relied implicitly on a rather unfashionable conceptualization of history as progress along a fixed trajectory.

Contrary both to Knuth’s despair and to Campbell-Kelly’s story of a march of progress away from technical history, some scholars with formal training in history and philosophy have been turning to topics with more direct connections to computer science over the past few years. Liesbeth De Mol and Maarten Bullynck have been working to engage the history and philosophy of mathematics with issues raised by early computing practice and to bring computer scientists into more contact with historical work.3 Working with like-minded colleagues, they helped to establish a new Commission for the History and Philosophy of Computing within the International Union of the History and Philosophy of Science. Edgar Daylight has been interviewing famous computer scientists, Knuth included, and weaving their remarks into fragments of a broader history of computer science.8 Matti Tedre has been working on the historical shaping of computer science and its development as a discipline.22 The history of Algol was a major focus of the recent European Science Foundation project Software for Europe. Algol, as its developers themselves have observed, was important not only for pioneering new capabilities such as recursive functions and block structures, but as a project bringing together a number of brilliant research-minded systems programmers from different countries at a time when computer science had yet to coalesce as a discipline.c Pierre Mounier-Kuhn has looked deeply into the institutional history of computer science in France and its relationship to the development of the computer industry.16

Stephanie Dick, who recently earned her Ph.D. from Harvard, has been exploring the history of artificial intelligence with close attention to technical aspects such as the development and significance of the linked list data structure.d Rebecca Slayton, another Harvard Ph.D., has written about the engagement of prominent computer scientists with the debate on the feasibility of the “Star Wars” missile defense system; her thesis has been published as an MIT Press book.20 At Princeton, Ksenia Tatarchenko recently completed a dissertation on the USSR’s flagship Akademgorodok Computer Center and its relationship to Western computer science.21 British researcher Mark Priestley has written a deep and careful exploration of the history of computer architecture and its relationship to ideas about computation and logic.18 I have worked with Priestly to explore the history of ENIAC, looking in great detail at the functioning and development of what we believe to be the first modern computer program ever executed.9 Our research engaged with some of the earliest historical work on computing, including Knuth’s own examination of John von Neumann’s first sketch of a modern computer program10 and Campbell-Kelly’s technical papers on early programming techniques.5

Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists. However, it does not shy away from engagement with the specifics of computer technology or the detailed workings of the computer science community, re-introducing technical analysis along with continued attention to social, cultural, and institutional factors. Some of it may confirm Campbell-Kelly’s prediction that the field will move toward “holistic” work integrating different approaches.

The history of computer science retains an important place within the diverse and growing field of the history of computing. Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it. I nevertheless hope he will continue to find much value in the work of historians and that we will rarely give him cause to reach for his handkerchief.

Back to Top


1. Aspray, W. John von Neumann and the Origins of Modern Computing. MIT Press, Cambridge, MA, 1990.

2. Aspray, W. and Williams, B.O. Arming American scientists: NSF and the provision of scientific computing facilities for universities, 1950–73. IEEE Annals of the History of Computing 16, 4 (Winter 1994), 60–74.

3. Bullynck, M. and De Mol, L. Setting-up early computer programs: D.H. Lehmer’s ENIAC computation. Archive for Mathematical Logic 49 (2010), 123–146.

4. Campbell-Kelly, M. From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. MIT Press, Cambridge, MA, 2003.

5. Campbell-Kelly, M. Programming the EDSAC: Early programming activity at the University of Cambridge. Annals of the History of Computing 2, 1 (Jan. 1980), 7–36.

6. Campbell-Kelly, M. The history of the history of software. IEEE Annals of the History of Computing 29, 4 (Oct.-Dec. 2007), 40–51.

7. Cerf, V. ACM and the professional programmer. Commun. ACM 57, 8 (Aug. 2014), 7.

8. Daylight, E.G. The Dawn of Software Engineering: From Turing to Dijkstra. Lonely Scholar, Heverlee, Belgium, 2012.

9. Haigh, T., Priestley, M., and Rope, C. Los Alamos bets on ENIAC: Nuclear Monte Carlo simulations, 1947–48. IEEE Annals of the History of Computing 36, 2 (Jan.-Mar. 2014), 42–63.

10. Knuth, D.E. Von Neumann’s first computer program. ACM Computing Surveys 2, 4 (Dec. 1970), 247–260.

11. MacKenzie, D. Knowing Machines. MIT Press, Cambridge, MA, 1998.

12. MacKenzie, D. Mechanizing Proof. MIT Press, Cambridge, MA, 2001.

13. Mahoney, M.S. and Haigh, T., Eds. Histories of Computing. Harvard University Press, Cambridge, MA, 2011.

15. Metropolis, N., Howlett, J. and Rota, G.-C. Eds. A History of Computing in the Twentieth Century: A Collection of Papers. Academic Press, New York, 1980.

16. Mounier-Kuhn, P. Logic and computing in France: A late convergence. In Proceedings of the Symposium on the History and Philosophy of Programming (Birmingham, July 2012).

17. Norberg, A.L. and O’Neill, J.E. Transforming Computer Technology: Information Processing for the Pentagon, 1962–1986. Johns Hopkins University Press, Baltimore, MD, 1996.

18. Priestley, M. A Science of Operations: Machines, Logic, and the Invention of Programming. Springer, New York, 2011.

19. Roland, A. and Shiman, P. Strategic Computing: DARPA and the Quest for Machine Intelligence. MIT Press, Cambridge, MA, 2002.

20. Slayton, R. Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012. MIT Press, Cambridge, MA, 2013.

21. Tatarchenko, K. A House With the Window to the West: The Akademgorodok Computer Center (1958–1993). Ph.D. dissertation, Princeton University, 2013.

22. Tedre, M. The Science of Computing: Shaping a Discipline. CRC Press/Taylor & Francis, 2014.

Back to Top


Thomas Haigh is an associate professor of information studies at the University of Wisconsin, Milwaukee, and immediate past chair of the SIGCIS group for historians of computing.

Back to Top


a. See

b. The video is posted at

c. IEEE Annals of the History of Computing 36, 4 (Oct.–Dec. 2014) is a special issue based on this work.

d. Dick had earlier published “AfterMath: The Work of Proof in the Age of Human-Machine Collaboration,” Isis 102, 3 (Sept. 2011), 494–505.

Written by youryblog

December 31, 2014 at 2:52 PM

Management and CS/IT Job related

leave a comment »

  1. Employees don’t leave Companies, they leave Managers (YB: good article by Brigette Hyacinth).
  2. Is B.C.’s tech sector about to hit a wall? Supply of top tier talent tight (published January 6, 2017).
  3. Very useful tools for anonymous voting: – it was pretty easy to set up. Here’s a link to some other possibilities
  4. There are apps allowing you to use your cell phones instead of actually getting the physical clicker device. is a common site used by instructors.
  5. 7 Lies Employers Use To Trick You Into Working For Them
  6. Why Canada is failing at tech Ryan Holmes | July 10, 2013 8:00 AM ET
  7. Education and Job Opportunities in STEM, 2008 Philip Levis, February 2, 2012
  8. Analysis: The exploding demand for computer science education, and why America needs to keep up
  9. Computer science enrolment low, despite opportunities
    University of Windsor student lands lucrative job with Google before graduation
  10. Give Me a Break! : Authorized leaves of absence under BC Employment Standards
  11. Thanks For Your Job Offer, but No Thanks

  12. You Only Live Once, So Do It Warren Buffett’s Way
    Posted: 08/28/2014 8:55 pm EDT Updated: 08/28/2014 9:00 pm EDT

Written by youryblog

August 2, 2013 at 12:31 AM