L’affaire El Naschie


So I know I’m a little late to the party on this, but I couldn’t resist commenting on the strange case of M. El Naschie (I assume that this is just the German transliteration of the name English speakers would be more likely to spell al Nashi). Zoran Škoda brought him up in the comments to a post at the n-Category Cafe, and John Baez did an excellent job exposing the level of intellectual bankruptcy at the journal Chaos, Solitons and Fractals. The details are better recounted elsewhere (unfortunately the posts above have been removed; those interested in following the case can try Richard Poynder’s blog Open and Shut), but in a nutshell: El Naschie published dozens of papers in his own journal (he’s the editor-in-chief) which appear to be of no scientific or mathematical merit (this is my judgment based on excerpts and titles, and also seems to be the consensus of commenters at nCC), and which make rather grandiose claims based on rather incoherent numerology. John Baez characterized him as “worse than the Bogdanov brothers,” which is pretty high up in the food chain of physics hoaxes.

But my intent here is not to beat up on El Naschie. He’s already set to retire in shame. The people who really have egg on their faces here are those who enabled a man who is, for all intents and purposes, a crank to run a superficially prestigious journal.

Firstly, Elsevier. If you’re looking for the perfect encapsulation of why large for-profit publishers are bad for science, here it is. Elsevier was either too incompetent to notice that one of their editors-in-chief was publishing dozens of papers in his own journal (the fact that this is allowed at all, in any circumstances, is on the face of it a bit absurd. Have they never heard of conflicts of interest? Even if the editor-in-chief recused himself from decisions about his own papers, could the other editors possibly be expected to make an unbiased decision? Some powers are just too great to leave unchecked, even if most people would not abuse them), or they just didn’t care about the scientific integrity of their journals (this is the publisher of the journal Homeopathy, after all), both of which are pretty damning. I understand that the vast majority of Elsevier journals are run by editorial boards that would never consider such intellectual malpractice, but if Elsevier isn’t actually checking to see that its editorial boards are not doing so, then what exactly are we paying them for? I know that Elsevier publishes a lot of important math journals, but it reaches a point where we can’t let that fact hold us hostage and force our libraries to spend their scarce resources on bald-faced pseudoscience (CFS is not cheap, at around $4,500 a year). So I encourage you all to let the librarians at your school know if you would prefer that they found somewhere better to spend their money. I know that there are a lot of links in this chain and this is not going to result in a full-scale boycott of Elsevier (a boy can dream, can’t he?), but encouraging libraries to emphasize other priorities is a rather good idea.

Another less obvious villain here is Thomson Scientific, which indexes journals. Now, I know you might say, “Well, how were they supposed to know that this journal was publishing crap? It’s not their job to read the journals.” But if one starts thinking along those lines, it becomes a little unclear what their job is, if not to figure out which journals are real science and which are the playthings of cranks. It’s not as though they index every journal in the world; in fact, many are left out, with rather serious consequences for the people who publish in non-indexed journals (which are obviously not of the quality of most indexed journals, but probably still have more integrity than CFS), or who are cited in non-indexed journals.

For you see, the most simple-minded, stupid, and yet pervasive index of journal quality is the Impact Factor, peddled by none other than (you guessed it) Thomson Scientific, which determines the quality of a journal by the brilliant and subtle method of dividing the number of citations it receives from publications indexed by Thomson by the number of articles in the journal (in other words, the sort of thing a monkey would come up with). Mostly as a result of citations to itself, CFS has a higher impact factor than any mathematics journal, even though it is worthless pseudoscience. (Journals in other fields consistently have higher IFs than mathematics journals, since they publish more, shorter papers, and thus tend to cite each other more often.)
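For concreteness, here is a minimal sketch of that arithmetic, assuming the usual two-year window (citations received in year Y to items published in years Y−1 and Y−2, divided by the number of citable items published in those two years). The journal and all the numbers are made up, purely to illustrate how self-citations inflate the figure.

```python
# Toy two-year impact-factor arithmetic (illustrative numbers only).
# Citations from a journal to itself count exactly like citations from anywhere else,
# which is what makes the number so easy to game.

def impact_factor(citations_received, items_published):
    """citations_received: citations in year Y to the journal's items from Y-1 and Y-2.
    items_published: number of citable items the journal published in Y-1 and Y-2."""
    return citations_received / items_published

# Hypothetical journal: 400 citable items over two years,
# 300 citations from elsewhere plus 900 citations to itself.
external, self_cites, items = 300, 900, 400
print(impact_factor(external, items))               # 0.75 without self-citations
print(impact_factor(external + self_cites, items))  # 3.0 once self-citations are counted
```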

So, what have we learned today?

  1. A journal is only as good as its editorial board. Affiliation with a commercial publisher, a big price tag, good production values, indexing in the Web of Knowledge(TM): all of these are essentially meaningless.
  2. A nontransparent review process is incredibly easy to undermine. Nobody knows for sure whom El Naschie asked to referee his own papers for his own journal, or what the referees said. Obviously, there has to be some respect for the right of people to comment on and referee work anonymously, but at the same time, we should think seriously about how to bring transparency to peer review. As they say, sunlight is the best disinfectant.
  3. Impact Factor is a joke, but the sort of joke that’s too sad to laugh at. No serious thought seems to have gone into its creation, and it is wide open to gaming. It reflects nothing about the quality of a journal, and everyone possible should be educated about this fact. Unfortunately, the rise of Impact Factor isn’t just a story about clueless people trying to make impossibly simple-minded comparisons between fields (though they aren’t helping); it also reflects a deep flaw in our currently constituted journal system which we should be looking at. The only way for people not hooked into the social networks of the discipline to judge the quality of people’s work is through the quality of the journals they publish in. But of course, the only way to find out anything about the quality of journals outside the few most famous ones is to be hooked into the social networks of the discipline. Every mathematician has some loose, hazy scheme for ranking journals in his or her head, but this is based almost entirely on talking to other people and finding out about their schemes. Is it really at all surprising that people would rather just look at a number? As far as I’m concerned, the only way to fight Impact Factor is to fight fire with fire and do a better job of ranking journals, but, of course, not everyone agrees. (Eigenfactor, an alternative effort which uses an algorithm inspired by Google’s PageRank and which is harder to game, gives a much more accurate rating, placing CFS around 130th among mathematics journals, just above the Canadian Math Bulletin; see the sketch after this list for the general idea.) Still, I think it’s important to recognize that the present system was designed for a time with a much smaller mathematical community and many fewer journals, and is rather quickly becoming insupportable. I’m not sure what the solution is, but I look forward to trying to hash it out in comments.
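To show why a PageRank-style ranking is harder to game, here is a toy sketch in that spirit. This is not Eigenfactor’s actual algorithm, and the journal names and citation counts are entirely made up; the point is only that a citation is weighted by the rank of the journal it comes from, and journal self-citations are discarded.

```python
# Toy PageRank-style journal ranking on a made-up citation matrix.
import numpy as np

journals = ["A", "B", "C"]
# cites[i][j] = citations from journal i to journal j (illustrative numbers).
cites = np.array([[0,   30, 10],
                  [25,   0,  5],
                  [400,  2,  0]], dtype=float)

np.fill_diagonal(cites, 0.0)                # journal self-citations don't count
out = cites.sum(axis=1, keepdims=True)
P = cites / np.where(out > 0, out, 1.0)     # row-stochastic transition matrix

d = 0.85                                    # damping factor, as in PageRank
rank = np.full(len(journals), 1.0 / len(journals))
for _ in range(100):                        # power iteration
    rank = (1 - d) / len(journals) + d * P.T @ rank

for name, score in sorted(zip(journals, rank), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

In this toy example, journal C pours 400 citations into A, but since almost nobody cites C, its own rank stays low and those 400 citations carry only as much total weight as C itself has; a raw citation count (and hence the Impact Factor) would treat them the same as 400 citations from anywhere else.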

(EDIT: I forgot to link to a great editorial at Ars Technica on this subject.)

