Youry's Blog

Archive for the ‘Research’ Category

The Tears of Donald Knuth

Donald Knuth

In this column I will be looking at the changing relationship between the discipline of computer science and the growing body of scholarly work on the history of computing, beginning with a recent plea made by renowned computer scientist Donald Knuth. This provides an opportunity to point you toward some interesting recent work on the history of computer science and to think more broadly about what the history of computing is, who is writing it, and for whom they are writing.

Last year historians of computing heard an odd rumor: that Knuth had given the Kailath lecture at Stanford University and spent the whole time talking about us. Its title, “Let’s Not Dumb Down the History of Computer Science,” was certainly intriguing, and its abstract confirmed that some forceful positions were being taken.a The online video eventually showed something remarkable: his lecture focused on a single paper, Martin Campbell-Kelly’s 2007 “The History of the History of Software.”6,b Reading it had deeply saddened Knuth, who “finished reading it only with great difficulty” through his tear-stained glasses.

What Knuth Said

Knuth began by announcing that, despite an aversion to confrontation, he would be “flaming” historians of computing. This, he worried, “could turn out to be the biggest mistake of my life.” The bout might nevertheless be seen as a mismatch. Knuth is among the world’s most celebrated computer scientists, renowned for his ongoing project to classify and document families of algorithms in The Art of Computer Programming and for his creation of the TeX computerized typesetting system ubiquitous within computer science and mathematics. Campbell-Kelly has a similar prominence within the much smaller community of historians of computing but, even by Google Scholar’s generous definitions, the paper that saddened Knuth has been cited only nine times.

Knuth then enumerated his motivations, as a computer scientist, to read the history of science. First, reading history helped him to understand the process of discovery. Second, understanding the difficulty and false starts experienced by brilliant historical scientists in making discoveries that specialists now find obvious helped him to see what made concepts challenging to students and thus to become a “much better writer and teacher.” Third, appreciating the historical contribution of non-Western scientists helped in “celebrating the contributions of many cultures.” Fourth, history is the craft of telling stories, which is “the best way to teach, to explain something.” Fifth, the biographies of scientists teach tactics for a successful and rewarding career. Sixth, history teaches how human experience has changed over time. As humans we should care about that.

Knuth also identified some special contributions to the history of science that professionally trained historians are uniquely well placed to make. We are good at “smoking out” primary sources and putting historical activities in the context of broader timelines. He also appreciates our ability to translate papers written in languages that he cannot himself read. He finds attempts at historical analysis “probably the least interesting” aspects of our papers but appreciates lengthy quotations from primary sources.

Things then headed in a less positive direction. Knuth explained that Campbell-Kelly had centered his paper on a table of important works related to the history of software published between 1967 and 2004. It coded the predominant approaches into four categories—one of which was technical—to demonstrate the technical approach had been dominant until about 1990, dwindling thereafter and vanishing altogether after 1997. Campbell-Kelly characterized this as an “evolution” away from “technical histories” of the “low-hanging-fruit variety” written by Knuth and other “outstanding technical experts” that were “constrained, excessively technical, and lacking in breadth of vision.”

Knuth had previously viewed Campbell-Kelly as a kindred spirit but had now been granted a glimpse of “what historians say when they’re talking to historians instead of when they’re talking to people like me.” Without pausing to dry his glasses he had written to Campbell-Kelly to accuse him of having “lost faith in the notion that computer science is actually scientific.”

The shift described by Campbell-Kelly reflected a change in the population of scholars writing the history of computing. Many of the senior computing figures of the 1970s worked to preserve the history of the 1940s and early 1950s, starting with a number of organized “pioneer days” and workshops. The most important of these was held at Los Alamos National Laboratory in 1976.15 Most of the 90 participants included in the group photograph of attendees were computer pioneers of the 1940s. Knuth himself contributed a detailed history of the first tools for “automatic programming” (assemblers and compilers). He was one of a handful of interested younger computer scientists who entered the field in the 1950s, a group that also included Edsger Dijkstra and Brian Randell, a systems programmer turned academic who had assembled an important collection of reprinted historical documents. Only a handful of trained historians were at the conference. The editorial board of Annals of the History of Computing, which began in 1979 as a publication of AFIPS, a long-defunct umbrella group for professional computing societies, had a similar makeup. As graduate students in history and history of science programs began to write dissertations on computer-related topics they eventually inverted the ratio of trained historians to computer scientists, though the journal continues to publish a significant number of papers by computer scientists and technical experts.

In his lecture Knuth worried that a “dismal trend” in historical work meant that “all we get nowadays is dumbed down” through the elimination of technical detail. According to Knuth “historians of math have always faced the fact that they won’t be able to please everybody.” He feels that other historians of science have succumbed to “the delusion that … an ordinary person can understand physics …”

I am going to tell you why Knuth’s tears were misguided, or at least misdirected, but first let me stress that historians of computing deeply appreciate his conviction that our mission is of profound importance. Indeed, one distinguished historian of computing recently asked me what he could do to get flamed by Knuth. Knuth has been engaged for decades with history. This is not one of his passionate interests outside computer science, such as his project reading verses 3:16 of different books of the Bible. Knuth’s core work on computer programming reflects a historical sensibility, as he tracks down the origin and development of algorithms and reconstructs the development of thought in specific areas. For years advertisements for IEEE Annals of the History of Computing, where Campbell-Kelly’s paper was published, relied on a quote from Knuth that it was the only publication he read from cover to cover. With the freedom to choose a vital topic for a distinguished lecture Knuth chose to focus on history rather than one of his better-known scientific enthusiasms such as literate programming or his progress with The Art of Computer Programming.

Computing vs. Computer Science

Here is where I part ways with Knuth’s interpretation. Campbell-Kelly’s article was “The History of the History of Software,” not “The History of the History of Computer Science.” Knuth’s complaint that historians have been led astray by fads and the pursuit of a mass audience into “dumbed down” history reflects an assumption that computer science is the whole of computing, or at least the only part in which historians can find important questions about software. This conflated the history of computing with the history of computer science. Distinguished computer scientists are prone to blur their own discipline, and in particular its few dozen elite programs, with the much broader field of computing. The tools and ideas produced by computer scientists underpin all areas of IT and make possible the work carried out by network technicians, business analysts, help desk workers, and Excel programmers. That does not make those workers computer scientists. The U.S. alone is estimated to have more than 10 million “information technology workers,” about a hundred times the ACM’s membership. Vint Cerf has warned in Communications that even the population of “professional programmers” dwarfs the association’s membership.7 ACM’s share of the IT workforce has been in decline for a half-century, despite efforts begun back in the 1960s and 1970s by leaders such as Walter Carlson and Herb Grosch to broaden its appeal.

Computing is much bigger than computer science, and so the history of computing is much bigger than the history of computer science. Yet Knuth treated Campbell-Kelly’s book on the business history of the software industry (accurately subtitled “a history of the software industry”) and all the rest of the history of computing as part of “the history of computer science.”4 Others have written about the history of computer use in life insurance and other areas of business, the history of cybernetics, the history of the semiconductor industry, the history of punched card machines, the history of the IT workforce, the history of computer-producing companies such as IBM, the use and development of computers in particular countries, the history of the personal computer, and the history of computer usage in particular areas of scientific practice such as bio-medicine. To call such work “dumbed down” history of computer science, rather than smart history of many other things, is to misunderstand both the intentions and the accomplishments of its authors.

The truth is that regrettably little history of computer science, whether dumb or deep, has been written by trained historians even though the history of computing literature as a whole has been expanding rapidly. Consider our output between 1990 and 2010. Michael Mahoney, a historian of science and mathematics at Princeton University, worked on a narrative history of theoretical computer science but ultimately produced only a set of provocative but schematic papers.13 Mahoney was also interested in the history of software engineering, and several other historians have discussed the 1968 NATO Conference on Software Engineering at which that field was launched. Eminent sociologist of science Donald MacKenzie worked on the history of formal methods and its relationship to the development of computer technology.11,12 Two books explored the history of DARPA and its role in shaping the development of computer science and technology, though Knuth would not approve of their institutional focus.17,19 William Aspray wrote several papers on the history of NSF support for computing2 and a book on John von Neumann.1 A complete list would be longer, but not that much longer.

Historical Careers in Computer Science

So why is the history of computer science not being written in the volume it deserves, or the manner favored by Knuth? I am, at heart, a social historian of science and technology and so my analysis of the situation is grounded in disciplinary and institutional factors. Books of this kind would demand years of expert research and sell a few hundred copies. They would thus be authored by those not expected to support themselves with royalties, primarily academics.

Academic careers are profoundly shaped by the disciplinary communities in which they develop. Throughout their training, scholars are socialized into the culture of their field and pick up a wealth of tacit and explicit knowledge on what is expected of them. They learn how to select a research project, what kinds of work are noticed and which are ignored, what style to write in, how to structure a paper, which professors are respected, what search committees and grant review panels are looking for. This continues throughout their careers, as they aspire to prestigious awards, named chairs, or favors from the Dean. Whether they realize it or not, successful academics have internalized the rules of the game played in their particular field.

The history of computer science might be undertaken from two disciplinary base camps within academia: computer science and the history of science. Someone whose primary training is in history will naturally see the history of computing differently from someone whose disciplinary loyalty is to computer science. They will choose different topics and explore them in different ways for different audiences. For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.

Prospects within the History of Science

The history of science is a kind of history, which is in turn part of the humanities. Some historians of science are specialists within broad history departments, and others work in specialized programs devoted to science studies or to the history of science, technology, or medicine. In both settings, historians judge the work of prospective colleagues by the standards of history, not those of computer science. There are no faculty jobs earmarked for scholars with doctoral training in the history of computing, still less in the history of computer science. The persistently brutal state of the humanities job market means that search committees can shortlist candidates precisely fitting whatever obscure combination of geographical area, time period, and methodological approaches are desired. So a bright young scholar aspiring to a career teaching and researching the history of computer science would need to appear to a humanities search committee as an exceptionally well qualified historian of the variety being sought (perhaps a specialist in gender studies or the history of capitalism) who happens to work on topics related to computing.

This, more than anything else, explains the rise of the broad and non-technical approaches decried by Knuth. Work in the history of computing has been seen by most in the humanities as dull and provincial, excessively technical and devoid of big historical ideas. Whereas fields such as environmental history have produced widely recognized classics that convince non-specialists of their scholarly potential, historians of computing are still inching toward broad acceptance of their relevance. The roles Knuth outlined for historians would not serve them well, as they are essentially those of the research assistant: gather primary materials, translate them if necessary, and make them available to computer scientists who will do the analysis.

Current enthusiasm for the “digital humanities” and the inescapable importance of computing to the modern world could provide opportunities. One day humanities search committees might even seek out historians of computing, but only those whose work engages with and appeals to scholars who themselves know nothing of computer science. In the meantime many scholars with doctorates in the history of computing have found work in museums or in academic employment outside both history and computer science, for example, in business schools, information schools, or specialist programs such as engineering education. These positions pose their own disciplinary challenges, but for obvious reasons provide few incentives to study the history of computer science.

Prospects within Computer Science

Thus the kind of historical work Knuth would like to read would have to be written by computer scientists themselves. Some disciplines support careers spent teaching history to their students and writing history for their practitioners. Knuth himself holds up the history of mathematics as an example of what the history of computing should be. It is possible to earn a Ph.D. within some mathematics departments by writing a historical thesis (euphemistically referred to as an “expository” approach). Such departments have also been known to hire, tenure, and promote scholars whose research is primarily historical. Likewise medical schools, law schools, and a few business schools have hired and trained historians. A friend involved in a history of medicine program recently told me that its Ph.D. students are helped to shape their work and market themselves differently depending on whether they are seeking jobs in medical schools or in history programs. In other words, some medical schools and mathematics departments have created a demand for scholars working on the history of their disciplines and in response a supply of such scholars has arisen.

As Knuth himself noted toward the end of his talk, computer science does not offer such possibilities. As far as I am aware no computer science department in the U.S. has ever hired as a faculty member someone who wrote a Ph.D. on a historical topic within computer science, still less someone with a Ph.D. in history. I am also not aware of anyone in the U.S. having been tenured or promoted within a computer science department on the basis of work on the history of computer science. Campbell-Kelly, now retired, did both things (earning his Ph.D. in computer science under Randell’s direction) but he worked in England where reputable computer science departments have been more open to “fuzzy” topics than their American counterparts. Neither are the review processes and presentation formats at prestigious computer conferences well suited for the presentation of historical work. Nobody can reasonably expect to build a career within computer science by researching its history.

In its early days the history of computing was studied primarily by those who had already made their careers and could afford to pursue historical interests from tenured positions or to dabble after retirement. Despite some worthy initiatives, such as the efforts of the ACM History Committee to encourage historical projects, the impulse to write technical history has not spread widely among younger generations of distinguished and secure computer scientists.

To summarize, the upper-right quadrant in the accompanying table is essentially empty. It reflects historical work forming the backbone of a scholarly career and intended as a contribution to computer science. I share Knuth’s regret that the technical history of computer science is greatly understudied. The main cause is that computer scientists have lost interest in preserving the intellectual heritage of their own discipline. It is not, as Knuth implies, that Campbell-Kelly is representative of a broader trend of individual researchers deciding to stop writing one kind of history and to devote a fixed pool of talent to writing another kind instead. There is no zero sum game here. More work by professionally trained historians on social, institutional, and cultural aspects of computing does not have to mean less work by computer scientists themselves. They cannot count on history departments to do this for them, and I hope Knuth’s lament motivates a few to follow his lead in this area. Not simply because Knuth did it—few computer scientists have emulated him by procuring their own domestic pipe organs—but because his commitment to the intellectual history of computer science makes a powerful argument that historical knowledge of a particular kind is a prerequisite for deep technical understanding.

Reopening the Black Box

I will end on a positive note. In his paper, Campbell-Kelly offered a “biographical mea culpa” for his own early work that he now reads with a “mild flush of embarrassment.” He came to see his erstwhile enthusiasm for technical history as a youthful indiscretion and his conversion to business history as an act of redemption, paralleling his own development and that of the field in a way that relied implicitly on a rather unfashionable conceptualization of history as progress along a fixed trajectory.

Contrary both to Knuth’s despair and to Campbell-Kelly’s story of a march of progress away from technical history, some scholars with formal training in history and philosophy have been turning to topics with more direct connections to computer science over the past few years. Liesbeth De Mol and Maarten Bullynck have been working to engage the history and philosophy of mathematics with issues raised by early computing practice and to bring computer scientists into more contact with historical work.3 Working with like-minded colleagues, they helped to establish a new Commission for the History and Philosophy of Computing within the International Union of the History and Philosophy of Science. Edgar Daylight has been interviewing famous computer scientists, Knuth included, and weaving their remarks into fragments of a broader history of computer science.8 Matti Tedre has been working on the historical shaping of computer science and its development as a discipline.22 The history of Algol was a major focus of the recent European Science Foundation project Software for Europe. Algol, as its developers themselves have observed, was important not only for pioneering new capabilities such as recursive functions and block structures, but as a project bringing together a number of brilliant research-minded systems programmers from different countries at a time when computer science had yet to coalesce as a discipline.c Pierre Mounier-Kuhn has looked deeply into the institutional history of computer science in France and its relationship to the development of the computer industry.16

Stephanie Dick, who recently earned her Ph.D. from Harvard, has been exploring the history of artificial intelligence with close attention to technical aspects such as the development and significance of the linked list data structure.d Rebecca Slayton, another Harvard Ph.D., has written about the engagement of prominent computer scientists with the debate on the feasibility of the “Star Wars” missile defense system; her thesis has been published as an MIT Press book.20 At Princeton, Ksenia Tatarchenko recently completed a dissertation on the USSR’s flagship Akademgorodok Computer Center and its relationship to Western computer science.21 British researcher Mark Priestley has written a deep and careful exploration of the history of computer architecture and its relationship to ideas about computation and logic.18 I have worked with Priestley to explore the history of ENIAC, looking in great detail at the functioning and development of what we believe to be the first modern computer program ever executed.9 Our research engaged with some of the earliest historical work on computing, including Knuth’s own examination of John von Neumann’s first sketch of a modern computer program10 and Campbell-Kelly’s technical papers on early programming techniques.5

Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists. However, it does not shy away from engagement with the specifics of computer technology or the detailed workings of the computer science community, re-introducing technical analysis along with continued attention to social, cultural, and institutional factors. Some of it may confirm Campbell-Kelly’s prediction that the field will move toward “holistic” work integrating different approaches.

The history of computer science retains an important place within the diverse and growing field of the history of computing. Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it. I nevertheless hope he will continue to find much value in the work of historians and that we will rarely give him cause to reach for his handkerchief.


1. Aspray, W. John von Neumann and the Origins of Modern Computing. MIT Press, Cambridge, MA, 1990.

2. Aspray, W. and Williams, B.O. Arming American scientists: NSF and the provision of scientific computing facilities for universities, 1950–73. IEEE Annals of the History of Computing 16, 4 (Winter 1994), 60–74.

3. Bullynck, M. and De Mol, L. Setting-up early computer programs: D.H. Lehmer’s ENIAC computation. Archive of Mathematical Logic 49 (2010), 123–146.

4. Campbell-Kelly, M. From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry. MIT Press, Cambridge, MA, 2003.

5. Campbell-Kelly, M. Programming the EDSAC: Early programming activity at the University of Cambridge. Annals of the History of Computing 2, 1 (Jan. 1980), 7–36.

6. Campbell-Kelly, M. The history of the history of software. IEEE Annals of the History of Computing 29, 4 (Oct.-Dec. 2007), 40–51.

7. Cerf, V. ACM and the professional programmer. Commun. ACM 57, 8 (Aug. 2014), 7.

8. Daylight, E.G. The Dawn of Software Engineering: From Turing to Dijkstra. Lonely Scholar, Heverlee, Belgium, 2012.

9. Haigh, T., Priestley, M., and Rope, C. Los Alamos bets on ENIAC: Nuclear Monte Carlo simulations, 1947–48. IEEE Annals of the History of Computing 36, 2 (Jan.-Mar. 2014), 42–63.

10. Knuth, D.E. Von Neumann’s first computer program. ACM Computing Surveys 2, 4 (Dec. 1970), 247–260.

11. MacKenzie, D. Knowing Machines. MIT Press, Cambridge, MA, 1998.

12. MacKenzie, D. Mechanizing Proof. MIT Press, Cambridge, MA, 2001.

13. Mahoney, M.S. and Haigh, T., Eds. Histories of Computing. Harvard University Press, Cambridge, MA, 2011.

15. Metropolis, N., Howlett, J. and Rota, G.-C. Eds. A History of Computing in the Twentieth Century: A Collection of Papers. Academic Press, New York, 1980.

16. Mounier-Kuhn, P. Logic and computing in France: A late convergence. In Proceedings of the Symposium on the History and Philosophy of Programming (Birmingham, July 2012).

17. Norberg, A.L. and O’Neill, J.E. Transforming Computer Technology: Information Processing for the Pentagon, 1962–1986. Johns Hopkins University Press, Baltimore, MD, 1996.

18. Priestley, M. A Science of Operations: Machines, Logic, and the Invention of Programming. Springer, New York, 2011.

19. Roland, A. and Shiman, P. Strategic Computing: DARPA and the Quest for Machine Intelligence. MIT Press, Cambridge, MA, 2002.

20. Slayton, R. Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012. MIT Press, Cambridge, MA, 2013.

21. Tatarchenko, K. A House With the Window to the West: The Akademgorodok Computer Center (1958–1993). Ph.D. dissertation, Princeton University, 2013.

22. Tedre, M. The Science of Computing: Shaping a Discipline. CRC Press/Taylor & Francis, 2014.


Thomas Haigh is an associate professor of information studies at the University of Wisconsin, Milwaukee, and immediate past chair of the SIGCIS group for historians of computing.


a. See

b. The video is posted at

c. IEEE Annals of the History of Computing 36, 4 (Oct.–Dec. 2014) is a special issue based on this work.

d. Dick had earlier published “AfterMath: The Work of Proof in the Age of Human–Machine Collaboration,” Isis 102, 3 (Sept. 2011), 494–505.

Written by youryblog

December 31, 2014 at 2:52 PM

Potential, possible, or probable predatory scholarly open-access publishers and journals

leave a comment »

My updates:

AIRCC’s International Journal of Computer Science and Information Technology (IJCSIT) (see below about AIRCC)

International Journal of Computer Science and Information Technologies see more about this journal here:

LIST OF STANDALONE JOURNALS (just for myself only). Copy from

Potential, possible, or probable predatory scholarly open-access journals

This is a list of questionable, scholarly open-access journals. We recommend that scholars read the available reviews, assessments and descriptions provided here, and then decide for themselves whether they want to submit articles, serve as editors or on editorial boards.  The criteria for determining predatory journals are here.

We hope that tenure and promotion committees can also decide for themselves how importantly or not to rate articles published in these journals in the context of their own institutional standards and/or geo-cultural locus. We emphasize that journals change in their business and editorial practices over time. This list is kept up-to-date to the best extent possible but may not reflect sudden, unreported, or unknown enhancements.

Last updated October 5, 2014

Appeals: If you are a publisher and would like to appeal your firm’s inclusion on this list, please go here.

LIST OF PUBLISHERS Beall’s List: copy from

Potential, possible, or probable predatory scholarly open-access publishers

This is a list of questionable, scholarly open-access publishers. We recommend that scholars read the available reviews, assessments and descriptions provided here, and then decide for themselves whether they want to submit articles, serve as editors or on editorial boards.  The criteria for determining predatory publishers are here.

We hope that tenure and promotion committees can also decide for themselves how importantly or not to rate articles published in these journals in the context of their own institutional standards and/or geocultural locus.  We emphasize that journal publishers and journals change in their business and editorial practices over time. This list is kept up-to-date to the best extent possible but may not reflect sudden, unreported, or unknown enhancements.

Last updated October 4, 2014

Appeals: If you are a publisher and would like to appeal your firm’s inclusion on this list, please go here.

Written by youryblog

October 5, 2014 at 5:05 PM

Posted in Conferences, Research

Ultra-Fast, the Robotic Arm Can Catch Objects on the Fly

leave a comment »

Swiss Federal Institute of Technology in Lausanne (05/12/14), Sarah Perrin (from ACM TechNews, 14 May 2014)

Researchers at the Swiss Federal Institute of Technology in Lausanne’s (EPFL) Learning Algorithms and Systems Laboratory have developed a robot that can react in real time to grasp objects with complex shapes and trajectories in less than five hundredths of a second. The arm measures about 1.5 meters in length, and the robot keeps it in an upright position. The arm has three joints and a hand with four fingers. The researchers note the ability to catch flying objects requires the integration of several parameters and reacting to unforeseen events in record time. “Today’s machines are often pre-programmed and cannot quickly assimilate data changes,” says EPFL’s Aude Billard. To overcome this limitation, the researchers developed a new technique, programming by demonstration, which does not give specific directions to the robot, but rather shows examples of possible trajectories to the robot. In the first learning phase, objects were thrown several times in the robot’s direction. The robot uses a series of cameras to create a model for the objects’ kinetics based on their trajectories, speeds, and rotational movement. The researchers then translate the data into an equation that enables the robot to position itself very quickly in the right direction.
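The summary gives no equations, but the core of such a system is fitting a motion model to a handful of tracked positions and extrapolating it forward in time. A minimal sketch of that idea, assuming a purely vertical ballistic model and clean camera samples (the function names and setup are illustrative, not EPFL's actual estimator):

```python
G = 9.81  # gravitational acceleration, m/s^2

def fit_vertical_motion(samples):
    """Least-squares fit of z(t) = z0 + v*t - 0.5*G*t^2 from (t, z) samples.

    With gravity known, moving the quadratic term to the left side
    (y = z + 0.5*G*t^2) reduces the problem to a straight-line fit
    y = z0 + v*t, which has a closed-form solution.
    """
    ts = [t for t, _ in samples]
    ys = [z + 0.5 * G * t * t for t, z in samples]
    n = len(samples)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    v = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
         / sum((t - t_mean) ** 2 for t in ts))
    z0 = y_mean - v * t_mean
    return z0, v

def predict_height(z0, v, t):
    """Predicted height at time t under the fitted model."""
    return z0 + v * t - 0.5 * G * t * t

# Noiseless samples from a throw starting at z0 = 1.0 m with v = 6.0 m/s.
samples = [(t / 10, 1.0 + 6.0 * (t / 10) - 0.5 * G * (t / 10) ** 2)
           for t in range(5)]
z0, v = fit_vertical_motion(samples)
print(round(z0, 3), round(v, 3))  # recovers 1.0 and 6.0
```

A real catcher must fit full 3D dynamics, handle rotation and irregular shapes, and refit as each new camera frame arrives within milliseconds; the closed-form line fit above is only the simplest instance of the same predict-from-observations idea.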

View Full Article

Written by youryblog

May 14, 2014 at 5:46 PM

How Gobbledygook Ended Up in Respected Scientific Journals, by Konstantin Kakaes

leave a comment »



In 2005, a group of MIT graduate students decided to goof off in a very MIT graduate student way: They created a program called SCIgen that randomly generated fake scientific papers. Thanks to SCIgen, for the last several years, computer-written gobbledygook has been routinely published in scientific journals and conference proceedings.
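SCIgen works by randomly expanding a context-free grammar into paper-shaped text. A toy version of that idea (my own tiny grammar for illustration, nothing like SCIgen’s actual rules) looks like this:

```python
import random

# Toy context-free grammar: each nonterminal maps to a list of possible
# productions; anything not in the table is a terminal word.
GRAMMAR = {
    "SENTENCE": [["NP", "VP", "."]],
    "NP": [["the", "ADJ", "NOUN"], ["our", "NOUN"]],
    "VP": [["refutes", "NP"], ["is", "ADJ"]],
    "ADJ": [["scalable"], ["ubiquitous"], ["stochastic"]],
    "NOUN": [["framework"], ["methodology"], ["algorithm"]],
}

def expand(symbol):
    """Recursively expand a symbol by picking random productions."""
    if symbol not in GRAMMAR:            # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

random.seed(0)
print(" ".join(expand("SENTENCE")))
```

Every output is grammatical-looking and completely meaningless, which is exactly why such papers should never survive even cursory peer review.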

According to Nature News, Cyril Labbé, a French computer scientist, recently informed Springer and the IEEE, two major scientific publishers, that between them, they had published more than 120 algorithmically generated articles. In 2012, Labbé had told the IEEE of another batch of 85 fake articles. He’s been playing with SCIgen for a few years—in 2010 a fake researcher he created, Ike Antkare, briefly became the 21st most highly cited scientist in Google Scholar’s database.

On the one hand, it’s impressive that computer programs are now good enough to create passable gibberish. (You can entertain yourself by trying to distinguish real science from nonsense on quiz sites like this one.) But the wide acceptance of these papers by respected journals is symptomatic of a deeper dysfunction in scientific publishing, in which quantitative measures of citation have acquired an importance that is distorting the practice of science.

The first scientific journal article, “An Account of the improvement of Optick Glasses,” was published on March 6, 1665 in Philosophical Transactions, the journal of the Royal Society of London. The purpose of the journal, according to Henry Oldenburg, the society’s first secretary, was, “to spread abroad Encouragements, Inquiries, Directions, and Patterns, that may animate.” The editors continued, explaining their purpose: “there is nothing more necessary for promoting the improvement of Philosophical Matters, than the communicating to such, as apply their Studies and Endeavours that way, such things as are discovered or put in practice by others.” They hoped communication would help scientists to share “ingenious Endeavours and Undertakings” in pursuit of “the Universal Good of Mankind.”

As the spate of nonsense papers shows, scientific publishing has strayed from these lofty goals. How did this happen?

Over the course of the second half of the 20th century, two things took place. First, academic publishing became an enormously lucrative business. And second, because administrators erroneously believed it to be a means of objective measurement, the advancement of academic careers became conditional on contributions to the business of academic publishing.

As Peter Higgs said after he won last year’s Nobel Prize in physics, “Today I wouldn’t get an academic job. It’s as simple as that. I don’t think I would be regarded as productive enough.” Jens Skou, a 1997 Nobel Laureate, put it this way in his Nobel biographical statement: today’s system puts pressure on scientists for “too fast publication, and to publish too short papers, and the evaluation process use[s] a lot of manpower. It does not give time to become absorbed in a problem as the previous system [did].”

Today, the most critical measure of an academic article’s importance is the “impact factor” of the journal it is published in. The impact factor, which was created by a librarian named Eugene Garfield in the early 1950s, measures how often articles published in a journal are cited. Creating the impact factor helped make Garfield a multimillionaire—not a normal occurrence for librarians.
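The standard two-year impact factor is simple arithmetic: citations received in a given year to items a journal published in the two preceding years, divided by the number of citable items from those years. A minimal sketch, using made-up numbers rather than any real journal’s data:

```python
def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    """Two-year impact factor: citations in year Y to items from Y-1 and Y-2,
    divided by the count of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_prev_two_years

# e.g. 320 citations in 2012 to the 150 articles published in 2010-2011
print(round(impact_factor(320, 150), 3))  # 2.133
```

The formula’s simplicity is part of the problem: a single number per journal invites exactly the gaming the rest of this piece describes.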

In 2006, the editors of PLoS Medicine, then a new journal, were miffed at the capriciousness with which Thomson Scientific (which had bought Garfield’s company in 1992) calculated their impact factor. The PLoS editors argued for “better ways of assessing papers and journals”—new quantitative methods. The blossoming field of scientometrics (with its own eponymous journal—2012 impact factor: 2.133) aims to come up with more elaborate versions of the impact factor that do a better job of assessing individual articles rather than journals as a whole.

There is an analogy here to the way Google and other search engines index Web pages. So-called search-engine optimization aims to boost the rankings of websites. To fight this, Google (and Microsoft, and others) employ armies of programmers to steadily tweak their algorithms. The arms race between the link spammers and the search-algorithm authors never ends. But no one at Thomson Reuters (or its competitors) can really formulate an idea of scientific merit on par with Google’s idea of search quality.

Link spam is forced upon even reputable authors of scientific papers. Scientists routinely add citations to papers in journals they are submitting to in the hopes of boosting chances of acceptance. They also publish more papers, as Skou said, in the hopes of being more widely cited themselves. This creates a self-defeating cycle, which tweaked algorithms cannot address. The only solution, as Colin Macilwain wrote in Nature last summer, is to “Halt the avalanche of performance metrics.”

There is some momentum behind this idea. In the past year, more than 10,000 researchers have signed the San Francisco Declaration on Research Assessment, which argues for the “need to assess research on its own merits.” This comes up most consequentially in academic hiring and tenure decisions. As Sandra Schmid, the chair of the Department of Cell Biology at the University of Texas Southwestern Medical Center, a signatory of the San Francisco Declaration, wrote, “our signatures are meaningless unless we change our hiring practices. … Our goal is to identify future colleagues who might otherwise have failed to pass through the singular artificial CV filter of high-impact journals, awards, and pedigree.”

Unless academic departments around the world follow Schmid’s example, in another couple of years, no doubt Labbé will find another few hundred fake papers haunting the databases of scientific publication. The gibberish papers (“TIC: a methodology for the construction of e-commerce”) are only the absurdist culmination of an academic evaluation and publication process set up to encourage them.

Written by youryblog

March 6, 2014 at 4:30 PM

Posted in IT, Research

Computing Innovations Abundant in CNN’s 10 Ideas List The CCC Blog, December 19

leave a comment »



CNN’s list of 10 emerging ideas that have the potential to change our world included many that were based on computer science and the latest computing research. As CNN explains, these concepts have the potential to make us healthier, to keep us safer on the highways, and to help our computers think for themselves. That should be good news to educators who are looking for new ways to sell students on a career in computer science.

One technology mentioned by CNN was the emergence of flexible display screens as a viable option for personal electronics. And once the technology is perfected, the range of possibilities gets a whole lot broader. Another technology involves autonomous, self-driving cars. Automakers are already equipping cars with sensors that know when you’re about to collide with the car in front of you and can brake accordingly. Partially automated cars could be hitting the market by the end of the decade.

Computer science is also having an impact on the future of health and science. Wearable sensors might have been considered strange a few years ago, but now we’re used to devices like FitBits and sensor-filled smartphones monitoring our movements, tallying calories, observing sleep patterns and even tracking heart rate, blood-sugar levels and other vitals. The next step will be tiny sensors under our skin, coursing through our bloodstreams and implanted in our brains to collect valuable information about our health. Finally, CNN mentioned computers that actually know how to think and apply common sense the way humans do. A team at Carnegie Mellon University is training a computer program to think for itself, starting with pictures.
Click Here to View Full Article


Written by youryblog

January 7, 2014 at 6:13 PM

Posted in Interesting, Research

Managing your Research

leave a comment »

Good paper: Managing your Research

This article is for those undertaking their first research study who face problems managing it.

Written by youryblog

December 31, 2012 at 5:45 PM

Posted in Research