Tuesday, December 23, 2008

The discovery of more knowledge (in repositories, research web sites, blogs, and the like)

In my previous post I announced the knowledge discovery 'button' that could be used to enhance any repository, science blog, or any researcher's, scientific society's, or publisher's site for that matter. Well, it is here now, available to all. Incorporating a small bit of code will equip any site that wants it with the knowledge discovery 'button' as you see it on the upper right-hand side of this blog (the orange one that says "discover more..."), and with all the functionality that comes with it, of course. Even more functionality is being developed.

It really is a small bit of code that needs to be incorporated, and the fact that I managed to do it myself in this blog should give confidence to even the least HTML-savvy person that it really is easy. This is the code:
<script type="text/javascript" src="http://conceptweblinker.wikiprofessional.org/wikibutton.js"></script>
Just cut and paste it into the code of your repository, web site or blog and enhance its ability to serve up relevant additional knowledge to its readers.
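To make the 'cut and paste' step concrete, here is a minimal sketch of a page with the snippet in place. Only the script line itself is the actual snippet; all the surrounding markup is purely illustrative and will look different in every repository, blog template or personal site.

<!-- Illustrative page skeleton only; just the <script> line is the actual snippet from this post. -->
<html>
<head>
<title>My repository, blog or lab site</title>
</head>
<body>
<h1>Recent papers</h1>
<p>Silencing of caspase-8 and caspase-3 by RNA interference prevents vascular endothelial cell injury ...</p>
<!-- The pasted snippet; exactly where in your template it goes may differ per platform. -->
<script type="text/javascript" src="http://conceptweblinker.wikiprofessional.org/wikibutton.js"></script>
</body>
</html>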

As an example of what it might look like, click the "Discover more..." button and then look at this abstract of an article by Matsuda et al., entitled Silencing of caspase-8 and caspase-3 by RNA interference prevents vascular endothelial cell injury in mice with endotoxic shock (Cardiovascular Research 2007 76(1):132-140; doi:10.1016/j.cardiores.2007.05.024).
Abstract
OBJECTIVES: Septic shock and sequential multiple organ failure remain the cause of death in septic patients. Vascular endothelial cell apoptosis may play a role in the pathogenesis of the septic syndrome. Caspase-8 is presumed to be the apex of the death receptor-mediated apoptosis pathway, whereas caspase-3 belongs to the "effector" protease in the apoptosis cascade. Synthetic small interfering RNAs (siRNAs) specifically suppress gene expression by RNA interference. Therefore, we evaluated the therapeutic efficacy of caspase-8/caspase-3 siRNAs in a murine model of polymicrobial endotoxic shock. METHODS: Polymicrobial endotoxic shock was induced by cecal ligation and puncture (CLP) in BALB/c mice. In vivo delivery of siRNAs was performed by using a transfection reagent (Lipofectamine 2000) at 10 h after CLP. As a negative control, animals received non-sense (scrambled) siRNA. RESULTS: Marked increases in caspase-8 and caspase-3 protein expression in CLP aortic tissues were strongly suppressed by treatment with caspase-8/caspase-3 siRNAs. This siRNA treatment prevented DNA ladder formation and less phosphorylation of the pro-apoptotic protein Bad seen in CLP aortic tissues. Transferase-mediated dUTP nick end labeling (TUNEL) revealed that the appearance of apoptosis in aortic endothelium after CLP was eliminated by this siRNA treatment. Although all of the control animals subjected to CLP died within 2 days, administration of caspase-8/caspase-3 siRNAs indefinitely (>7 days) improved the survival of CLP mice. CONCLUSIONS: Gene silencing of caspase-8 and caspase-3 with siRNAs provided profound protection against polymicrobial endotoxic shock. The prevention of vascular endothelial cell apoptosis appears to be, at least in part, responsible for their beneficial effects in endotoxic shock.
If you click on any of the highlighted concepts (the colours disappear after a few seconds, so that you can read the text more easily, but they can be brought back by mousing over the button), you will get a number of options to explore further. First of all, 'add to search', which automatically extends the search argument with synonyms of the concept in question. For instance, if I search further in this way with 'RNA interference', the search is automatically reformulated as "RNA Interference" OR "Post-Transcriptional Gene Silencing" OR "Posttranscriptional Gene Silencings" OR "RNA Silencing" OR "RNA Silencings" OR "Quelling" OR "RNAi" OR "cosuppression" OR "Sequence-Specific Posttranscriptional Gene Silencing".
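For those who like to see the mechanics spelled out, the expansion amounts to something like the sketch below. This is not the actual Concept Web Linker code; the function name and the synonym list are simply illustrative, taken from the example above.

// Illustrative sketch only -- not the actual Concept Web Linker code.
// A clicked concept is mapped to its synonyms and turned into an OR-ed search argument.
var synonyms = {
  "RNA interference": [
    "RNA Interference", "Post-Transcriptional Gene Silencing",
    "Posttranscriptional Gene Silencings", "RNA Silencing", "RNA Silencings",
    "Quelling", "RNAi", "cosuppression",
    "Sequence-Specific Posttranscriptional Gene Silencing"
  ]
};

function addToSearch(concept) {
  var terms = synonyms[concept] || [concept];   // fall back to the literal term
  var quoted = [];
  for (var i = 0; i < terms.length; i++) {
    quoted.push('"' + terms[i] + '"');
  }
  return quoted.join(" OR ");
}

// addToSearch("RNA interference") produces the expanded search argument shown above.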

The options of 'related authors' and 'related publications' are self-explanatory, I guess, and the option 'connected concepts' leads you to a page on which you find concepts that are connected to the concept you clicked on in one of three ways:
  1. there is a factual connection – established in a process of curation, e.g. via the peer-reviewed literature or a curated database such as SwissProt;
  2. there is a co-occurrence in the same sentence in the peer-reviewed literature; and
  3. even though neither 1 nor 2 applies, there is such an overlap between the connections that the two concepts each have that there is a strong 'predictive' association between them, strong enough to 'invite' research to establish whether the concepts are indeed factually connected.
Each factual and co-occurrence connection has an 'explain' option with links to the literature from which the connections were 'mined', and so to further discovery possibilities.

There are also links to relevant books, and even more is in the pipeline.

Go forth and multiply (the use of this button)!

Jan Velterop

Tuesday, December 02, 2008

Repositioning repositories

There are more and more repositories, and their significance for open access, as well as for the universities and institutions that operate them, is growing too. Yet many repositories have fairly basic functionality. Some don't mind, and see repositories merely as a way to provide open access or to archive the institution's output. This is a pity. Repositioning them, making repositories attractive places for researchers to come to – and to come back to – would greatly help their success. Many are already on that track. And various developers of repository software, such as MIT's DSpace, are already starting to experiment with embedding technology that helps the discovery of more knowledge, making it possible for repositories to become 'portals' of sorts.

It's easy enough. Have a look at the functionality that can soon be added to any repository (or blog, or personal site, for that matter) by clicking the 'button' on the upper right-hand side of this page that gives you the opportunity to discover more knowledge. Within weeks we hope to make the code for that button publicly and freely available, for anybody to use on any site (watch this space!). Even more functionality is in the pipeline. For now, this technology lets you discover knowledge mainly in its principal 'domain of experience', the biomedical areas.

Below, I list a few of the more than a million terms that are recognised as concepts; when you click on them, they open up a 'balloon' with links to more knowledge. I do this because you may otherwise not find many scientific concepts in this blog.

But look at these: hepatic stellate cell – immunoreactivity – squamous cell carcinoma – nonhomologous DNA end joining – monoamine oxidase type B (MAOB) – nuclear envelope – rough endoplasmic reticulum – Kupffer cells – plasma membrane – Ku70.

The first thing you can do is search further. The search will automatically include synonyms. Even something as simple as 'skin' is, when used to search further, automatically expanded into the search argument: "Skin" OR "Integument" OR "cutaneous tissue" OR "skin system" OR "Integumental system".

But you can also see authors and publications that are specifically related to the concept you're looking at. And you can see all the other concepts that are connected to this concept, and how they are connected. All connections are explained, and these explanations have links to the original source from which the connections were 'mined'.

Of course, this is just the beginning. More knowledge and information that is permanent and relevant can – and will – be added in these balloons. If there is anything you would like us to consider adding, please feel free to give feedback. We do like to hear from you! (Use the 'comments' link below or the email address at the top of this blog.)

Jan Velterop


Tuesday, October 14, 2008

Giving chance a chance, or the usefulness of serendipity

A post on the Scholarly Kitchen, entitled ‘Citation Controversy’, particularly a reference to the principle of least effort, sparked the train of thought leading to this post.

Scientific articles have references, which represent the connection of the article to other articles, and thus to other knowledge. Articles in Wikipedia often have references, too, although it is not rare to see the message “This article or section is missing citations”. The ‘Principle of least effort’ article in Wikipedia carries this message (on the date of posting this), ironically demonstrating the principle, I think. Authors are often quite parsimonious when it comes to adding references to articles. And when references have been added to an article, there isn’t often a thorough check on whether they include all, or enough, of the appropriate ones. The omission of obvious references may be picked up by reviewers, but the omission of less obvious ones is easily missed. One of the sad things about omitting references is that it may reduce serendipity.

I have a suggestion for ‘Wikipedians’ who wish to add appropriate references and links to Wikipedia articles, in particular to Wikipedia articles in the areas of health and life science, and so encourage serendipitous discovery. I advise them to go to what I informally call 'wikimore', an enhancement layer where they will find the text of Wikipedia articles enriched with highlighted concepts. By clicking on a number of those highlighted concepts and adding them to a search query, you can search for appropriate articles to refer to in, say, Google Scholar, or in Wikipedia itself, and, when found, add those references to the Wikipedia article, as a good Wikipedian would.

For instance, by clicking on the concepts ‘information seeking behavior’, ‘design’ and ‘library’, and subsequently searching in Google Scholar, I find this article:

Comparing faculty information seeking in teaching and research: Implications for the design of digital libraries, by Christine L. Borgman et al., in the Journal of the American Society for Information Science and Technology, Vol. 56, No. 6. (2005), pp. 636-657. DOI: 10.1002/asi.20154.

An interesting sentence from that article: “…faculty are more likely to encounter useful teaching resources while seeking research resources than vice versa.” In my view this demonstrates the drawback of a least effort approach (I like to call it the ‘laziness principle’), which by its very nature militates against serendipity. And yet serendipity is one of the most important routes to real breakthroughs in knowledge and understanding. A quote from an article by M.K. Stoskopf: "it should be recognized that serendipitous discoveries are of significant value in the advancement of science and often present the foundation for important intellectual leaps of understanding".

I’m not sure if the article I found (one among many others) would be a good reference to add to the Wikipedia article on the ‘principle of least effort’. But I do hope you can see that with wikimore you can, starting from a Wikipedia article, embark on a journey of serendipitous discovery even better than you already can without the enhancement layer that wikimore provides. With wikimore, i.e. the concept web enhancement as applied to Wikipedia, every concept that is recognized in the text is in itself a link to further information, a ‘reference’, if you wish.

And while you’re at it, you might want to take a look at the ‘knowlet’ of ‘information seeking behavior’, and explore the concepts with which information seeking behavior is connected in the life and medical science area.

Happy exploring!

Jan Velterop

Open Access Day

Though I haven’t posted for a while on The Parachute, today, on Open Access Day, I feel I should.

Unfettered access to scientific research results is in my view one of the ‘infrastructural’ provisions that enables science to function optimally. So why isn’t open access universal and what can be done to make it so?

After all, open access is easy. Just as I am posting this entry on a blog – open and freely available to any reader, anywhere, any time – I can post a scientific article. It is increasingly unlikely that there are many scientific researchers in the world who don’t have the possibility to publish their articles on a blog or in an open repository. And I use the word ‘publishing’ advisedly. The notion that publishing is something that happens in journals is rather outdated since the emergence of the Web. (Isn’t it interesting, by the way, that our word ‘text’ is derived from the Latin ‘textus’ which means ‘web’?)

Actually, I have to correct myself here. Journals do publish, but they are not needed for the act of publishing by itself. Publishing can easily be done by the authors. The significance of journals lies not so much in the scientific content of their articles, but in the metadata of those articles. And by metadata I mean not so much the information about volume, issue, page number, et cetera – though that is useful for unambiguous citation – but in particular the information indicating that, and when, the article has been peer-reviewed (and often enough improved) in the course of a given journal’s editorial process. The role of a journal is to formalize an article, to affix the ‘label’ of the journal to it, indicating not only that it has been peer-reviewed, but also slotting it into what might be called a ‘pecking order’ of scientific publications. One only has to consider the weight attributed to a journal’s Impact Factor to get a sense of how important that pecking order is, or is at least perceived to be.

One of the reasons we do not have universal open access yet is that we keep on confusing the two: publishing (i.e. making public) on the one hand, and formalizing (i.e. affixing a scientific ‘credibility’ label) on the other.

Journal publishers, although still called ‘publishers’, are, in the Web era, mainly in the business of organizing the latter: affixing the label. That is no sinecure, as anyone who has done it will confirm. And as long as it is deemed necessary in the scientific ego-system – in order to get recognition, tenure, funding – it needs to be done. But it should not be confused with making research results openly and freely available.

Journal publishers have been in this business for decades, maybe even centuries. In the print world, publishing and formalizing were completely interwoven, possibly without anyone realizing it. The publishers were paid for their efforts by both readers and authors, though in different ways. Readers paid for access to the information via subscriptions, and authors for affixing the journal label to their articles by transferring their copyright exclusively to the publisher. That exclusively transferred copyright was worth a lot, because it enabled publishers to sell access to their journals, since anyone who didn't hold the copyright (which after copyright transfer included the authors) was prevented from disseminating articles, at least on any significant scale.

But we live in the Web world now, no longer in the exclusively print world. The value to publishers of copyright has decreased significantly since authors either started to ignore it – no doubt encouraged by the opportunities the Web offers for wide dissemination – or were forced to limit the exclusivity of their copyright transfer, for instance because of mandates to make their articles openly available within a given period of time (within a year, for instance, in the case of the NIH mandate).

Given that open access is a great good to science and society as a whole (I treat this as an axiom), what to do?

Two options for researchers, not mutually exclusive:
  1. Publish research articles freely and openly on the Web, on blogs, in repositories, et cetera, especially in those that allow public comments, and let exposure to such public comments take the place of peer review. This option may realistically be available only to tenured, established scientists and to the very young ones with an independent and iconoclastic frame of mind.
  2. Publish in the ‘traditional’ journal system, but choose journals that accept payment for organizing the peer-review and formalization process, and then make the article in question freely available with full open access immediately upon acceptance, and back this up by depositing a copy of the article in an open repository. This option may realistically be available only to funded scientists, but those who are not able to source funding for it can always resort to option 1.
A few remarks to conclude: There are indications – so far anecdotal – that ‘informal’ publications are gradually being taken more seriously by the science community, and that helps the popularity of the first option. There are also indications that in some disciplines even the new and relevant scientific literature is becoming so overwhelming in size that proper, manageable ways need to be found to get an overview of the state of knowledge, which progresses daily. Think of it as the analogy of a dependable weather report, as opposed to just knowing the general climate and supplementing that by looking out of the window.

And lastly, isn't it fitting that this week, at the Frankfurt Book Fair, the worldwide publishers' jamboree, the inclusion of open access publishing into the mainstream of science publishing is being presented? I'm referring of course to the takeover of BioMed Central by decidedly mainstream publisher Springer.

Jan Velterop

Monday, June 09, 2008

Open Access and WikiProfessional

One of the first WikiProfessional instances is WikiProteins. An article in Genome Biology describes it in great detail. The lead author of that article, Barend Mons, reacts to the post by Euan Adie on Nature’s Nascent blog (“WikiProteins is a croc”, later changed to “WikiProteins – a more critical look”). Because it is important to understand the open access nature of the WikiProfessional project, I am reproducing Barend's reaction to the blog entry in its entirety here.

Jan Velterop
Although the rather sour blog post by Euan is quite an exception among the overall positive reactions we have received to the beta site of WikiProteins, I feel that a matter-of-fact reaction from the lead author of the article in Genome Biology that announced it is warranted. Here it is.

First of all, on authorship: Jimmy [Wales] was instrumental in making the initial contacts between me and Gerard Meijssen, who was then working on WiktionaryZ, now Omegawiki. He also gave invaluable advice on several aspects of the system and therefore deserves as much of an authorship acknowledgement as the average senior author/professor who ‘conceived of the study’. See also Gerard Meijssen’s blog about that.

On the interface etc., we all know this is beta and we struggled for a long time to make it as ‘good’ as it is. Obviously a flat file is easier than managing a relational database and therefore the interface can never be ‘really easy’. I agree with Peter Jan [one of the commentators on the Nascent blog entry] that constructive criticism would have been more useful.

Criticism of the commercial nature (as it were) of a company, on a blog made available by another commercial company – one that has made money on others’ scientific contributions for as long as we have been studying nature – is a bit peculiar as well. With the involvement of Amos Bairoch, Michael Ashburner, Mark Musen, Abel Packer, Roberto Pacheco, Matt Cockerill and many others in this process, not to mention Jan Velterop’s reputation, it seems to me that the OA nature of the projects is sufficiently safeguarded. With my personal background in malaria, working for 15 years with colleagues in developing countries, I have also built a public track record in pushing free access to information for developing countries.

The content in WikiProfessional applications is completely freely available under the Creative Commons Attribution license (we are working on making author credits more clearly visible). The Knowlets are indeed proprietary, as we create added value and apply algorithms that by themselves have by now taken several million dollars to develop. It has proven exceedingly difficult to get sufficient public funding for this project, which has been carefully discussed and prepared internationally for several years. Bill Melton and Al Berkeley are to be highly commended for taking the risk to fund the vision.

The Knowlet space, too, is open access for non-commercial use. I sincerely hope that seasoned investors like Bill and Al would be more imaginative than trying to monetize this site – and the others still to come – by ads only.

On the potential fear of competition: let me tell everyone up front that the authors of the paper have every intention of connecting all information on important concepts via WikiProfessional, not of putting it behind any barrier or competing with anyone. Some may see us as a competitor to IHOP or to Wikipedia pages on biomedical concepts, for instance, which is not true, as you will soon see.

We are planning to add locally maintained databases on genes, such as www.dmd.nl, to the appropriate concept page in WikiProteins, much more prominently placed than today (now an indirect link via SwissProt data), but also locally maintained databases on single gene mutations, such as the growing number of Leiden Open Variation Databases (LOVDs). We have a project starting to map all concepts in WikiProfessional, including all biomedical concept pages, to the corresponding pages in Wikipedia and other emerging wikis. People who find the WikiProfessional interface too difficult will soon be able to contribute to their own wiki of choice, and their contributions will be seen in WikiProfessional anyway.

We collectively ‘own’ the basic data and anyone is free to ‘add value’ to these and make that ‘added value’ freely available to all or just for public not-for-profit use. Knewco is just one of the companies that derives value from the data and has decided to make the added value available to the scientific community for free.
I cannot wait until Nature is Open Access as well, at least as far as the scientific articles are concerned. Then it will be easier to make full use of Nature content for the benefit of the scientific community.

One more point on equity and access: the collaboration with our Brazilian colleagues, with whom I co-developed and signed the Salvador Declaration on Open Access, referred to in the supplementary data of the Genome Biology paper, will soon result in crossing the language barrier to Spanish and Portuguese. The record for my beloved ‘malaria’ in Omegawiki will show you our ambition as to how many languages we would like to support for on-the-fly indexing. For free.

I hope these further explanations take away at least the worst of Euan’s fears. I see in today’s version of the blog that he not only changed the original title of the contribution, but also gave a more balanced reaction to Peter-Jan Roes.

However, Euan, if you still feel that some of your comments were justified and have not yet been properly addressed, please substantiate your claims, and in the process it would be much appreciated if you gave some constructive criticism. You would really help the community – and us – by doing that. Let’s keep discussing this project to make it better.

Friday, May 30, 2008

The meanings of 'free'

I've received questions about Knewco's WikiProfessional. How free is it? And is it free as in 'free beer' or free as in 'free speech'?

Life's never simple: it's a combination of both.

WikiProfessional's million minds approach does rely on user input. That's nothing new in science – in fact, the whole scientific knowledge edifice relies on user input. The user-generated content in WikiProfessional is indeed free as in 'free speech'. The relationship-concept matrix (the knowlet database, dynamic, relational, and constantly recalculated, reacting to any infusion of new knowledge) is also free to users, but free as in 'free beer'. It took considerable effort to develop and build it – and to maintain it – so it actually is (will be) paid for, by advertising and sponsorships we hope. The users 'pay' as in 'paying' a visit, and 'paying' attention, which we can then use to attract appropriate advertisers. (For some reason we haven't quite figured out how to survive on plain air, so we need to generate income to sustain our activities.)

It is important to distinguish the knowlet part and the wiki part of the WikiProfessional database. Knewco (the Knowledge Navigation and Expert Wiki Company) owns the first, and the knowlet is patented. In due time, there will be feeds available from the knowlet database for whoever wants them (or pays for them; this might typically be a premium service).

The wiki part of the database, on the other hand, contains publicly as well as privately available authority and community contributions. We don't 'have' those; we just use them, as anyone else can do, at least with regard to the public ones (one has to approach the 'owners', the authorities – NLM, Swissprot/Uniprot, etc. – for the authoritative databases). As for the community annotations and contributions, those are freely available under a CC-BY licence (Creative Commons Attribution Licence), and eventually we may make them available in a suitable form for downloading. There may be a potentially fruitful collaboration with Open Progress with regard to standardizing the download/exchange format.

Meanwhile, go to WikiProfessional, use the system, give us feedback, register and contribute, and work with us on spreading scientific knowledge via collaborative intelligence.

Jan Velterop

Wednesday, May 28, 2008

A rose by any other name

"Doctors often exude an air of omniscience, but in truth they are surprisingly ignorant."
Thus began an article in this week’s Economist. Harsh language, but many a doctor, or other professional, including scientists, will recognize himself or herself in these words. The article in The Economist isn’t specifically about that, but the sense of information overload is surely a major contributory factor to this 'surprising ignorance'. After all, a lot of the information one gets to digest is ambiguous, redundant, fragmented, or inconsistent, to name a few problems. As Herbert Simon, an American political scientist, once observed: “What information consumes is rather obvious: it consumes attention. Hence a wealth of information creates a poverty of attention.” The problem of the information glut in a nutshell.

Today saw the launch of an attempt to combat this abundance, redundancy, fragmentation and inconsistency: WikiProfessional.

The idea is that the combined efforts of a ‘million minds’ would be able, in a collaborative intelligence exercise, to refine a system that 'distills' the essence of established knowledge as well as points to new knowledge that has a high likelihood of being established soon. What it all entails is explained in an open access article in Genome Biology.

The concept (so to speak) is so far optimized for the life sciences and medicine, but there is no reason why it shouldn’t work in other areas as well. And in languages other than English. It is based on concepts, and those are of course valid in any language; it's just that the words or descriptions used for them differ. As Shakespeare noted in Romeo and Juliet: "What's in a name? That which we call a rose by any other name would smell as sweet."

Just imagine what that means. One of the beauties of the concept approach (as opposed to the keyword approach) is that search terms in one language could, for instance, yield search results in another. Think of Chinese researchers searching with Chinese terms for English literature (they can read English, but may find it more difficult to come up with search terms in English, in the same way that I find it sometimes easier to search with Dutch terms), yet getting served up with English search results. Things like that. Wonderful.

(I have to declare an interest: I’m running Knewco, the company behind WikiProfessional).

Jan Velterop

Sunday, May 25, 2008

Wiki temperatures

In the Chronicle of Higher Education, Jeffrey Young reports on a 'frozen' Wikipedia being more academically useful for students than the current version, which can be – and is – edited all the time, sometimes resulting in a lot of heat. There is something tremendously attractive in having unfettered editing possibilities, but also in having stable, authoritative articles in such an extremely useful web resource as Wikipedia. In an academic environment, one would ideally have both. WikiProfessional, which is specifically conceived for the academic and professional environment, actually gives both. On the one hand it presents stable, vetted and authoritative knowledge; on the other hand it gives the utterly useful and necessary option for knowledge to be supplemented and annotated in real time by anyone wishing to do so. The authoritative version and the community annotations and additions are presented side by side. Only when annotations and additions are deemed acceptable by the professional or academic community in question – peer-reviewed in one way or another – are they elevated to the level of 'received knowledge'.

For open access, WikiProfessional presents a nice additional opportunity: 'annotations' can be links to particularly appropriate and relevant articles. And if such links were made to freely available versions of the articles in question, this would give WikiProfessional some of the functionality of a federated repository, not just enhancing an article's exposure and findability, but at the same time putting it in the right context in the Concept Web. This, in turn, may well further increase the chances of such an article being cited.

Jan Velterop

Thursday, May 15, 2008

Dealing with abundance – getting more out of the science literature than you thought possible

Open access is adding to the abundance of scientific information available to us. It is to be expected that this abundance will be growing fast, with the growth of open access. This is good, because only comprehensive and unfettered access to the science literature will make it possible for us to be truly abreast of the scientific progress that's being made.

On the other hand, however, it will present us with even more challenges than we already face in terms of being able to deal with all that information. In certain disciplines, reading all the papers relevant to our research topic means digesting thousands of papers per year – enough to fill our entire working time. Without assistance from the processing capabilities and speed of computers, we cannot hope to keep up with emerging trends in our chosen fields.

Few scientists can properly cope with this mushrooming information, and were they to read all the articles relevant to them, they would find that these almost always contain a very large amount of information already known to them. That redundant information is usually provided for the sole purpose of context and readability. The amount of actual new information is often surprisingly small and could have been conveyed in one or two sentences if the context were clear. Yet the essence of the scientific discourse is captured in those few sentences. The surrounding text of articles is, if you wish, the packaging in which the essence is transported, analogous to the mass of fluffy stuff surrounding a breakable item that's being shipped: emballage.

At Knewco, the company that I now work for, we aim to provide an environment for concentrating this scientific discourse – 'distilling' it from the abundance of sources, if you wish – and make it more productive by making it computer-processable. Very few scientists can read and digest all the articles and database entries that they would need to read and digest in order to synthesize the essence of the knowledge they need. So what we do is to enable and foster collaborative intelligence between machine processing power and human brainpower. Knewco 'distills' information to the essence of knowledge content from millions of documents, enriching it in the process with linked concepts and context.

This is not the same as making it possible to locate the one right document out of the abundance available. It is identifying 'atoms' of knowledge about a given concept from the literature and combining these atoms into 'molecules' of knowledge (we call those "knowlets" – a knowlet connects facts). Just as a graph can give you in one glance the essence of an enormous array of numbers, the knowlet gives you the essence of an enormous amount of scientific literature. It's like reading a picture instead of text. And as "a picture is worth more than a thousand words", a knowlet could be said to be worth more than the text of a thousand articles. Knowledge redesigned, as it were.

Perhaps more importantly, since a knowlet is a computer artifact, it can be used to identify related information, predict trends and intersections in data (see it as a kind of topology of knowledge), be used in combination with the knowlets of other, more complex concepts, and be updated in real time to keep the information current.
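To make the idea of a knowlet as a computer artifact a little more tangible, here is a rough sketch of what such a structure might look like. The field names, concepts and numbers are entirely illustrative and my own assumption, not Knewco's actual Knowlet format; they simply mirror the kinds of connections (factual, co-occurrence, predicted) discussed elsewhere on this blog.

// Illustrative sketch only: the structure, field names and numbers below are
// made up for explanation and are not Knewco's actual Knowlet format.
// A knowlet summarises, for one source concept, its associations with other
// concepts, each scored by different kinds of evidence.
var knowlet = {
  concept: "caspase-3",
  connections: [
    { concept: "apoptosis",       factual: true,  cooccurrence: 0.92, predicted: 0.95 },
    { concept: "caspase-8",       factual: true,  cooccurrence: 0.88, predicted: 0.91 },
    { concept: "endotoxic shock", factual: false, cooccurrence: 0.41, predicted: 0.58 },
    { concept: "Ku70",            factual: false, cooccurrence: 0.03, predicted: 0.72 }
  ],
  lastUpdated: "2008-05-15"
};

// Associations that are not yet established facts and hardly co-occur in the
// literature, but score highly on predicted overlap, are the kind of
// 'invitations' to new research described in the posts above.
function hypothesisCandidates(k, threshold) {
  var candidates = [];
  for (var i = 0; i < k.connections.length; i++) {
    var c = k.connections[i];
    if (!c.factual && c.cooccurrence < 0.2 && c.predicted >= threshold) {
      candidates.push(c.concept);
    }
  }
  return candidates;
}

// hypothesisCandidates(knowlet, 0.7) returns ["Ku70"] in this made-up example.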

For technology of this kind to be optimally effective for scientific knowledge discovery, access to the literature is not sufficient by itself. It goes without saying that the source documents must be computer-readable to be optimally usable. Publishers as well as repositories may wish to take this to heart if they are serious about helping to speed up the pace of scientific progress.

Jan Velterop

Friday, March 14, 2008

Onwards from open access

As many of my readers will already know, I have recently decided to leave my position of Director of Open Access at Springer for that of CEO of Knewco Inc. Several reactions that I have since received indicate to me that my move is not necessarily understood by everyone, and I’ve even seen speculations that my leaving open access might mean that it is not going anywhere at Springer.

Let me say the following to that. First of all, OA has developed some very solid roots within Springer and I am most confident that OA is being further developed with alacrity by my successors at Springer.

Secondly, I don’t feel that I am leaving open access. Open access is not some club that one is a member of or not; it is a 'thought form' that one adheres to. And open access is only one of the ways in which the speed, efficiency and quality of scientific discovery can be enhanced.

Looking back on my career, I feel that my motives haven’t changed much. When I was working on IDEAL/APPEAL* (at Academic Press) in 1994-95 and later, I did this on the premise that there must be better ways to disseminate the research papers published in journals than just via relatively small numbers of subscriptions. The IDEAL concept (derided at first, but then imitated by just about all publishers, and often nicknamed BigDeal) was brought about by the realisation that if access to electronic journal articles could be pooled by larger numbers of institutions, then for the same publisher’s income – the same cost therefore to the academic community – the articles would be accessible to vastly more researchers. If ever the cliché win-win was appropriate, it was here.

Open access logically follows on from that. The challenge was – still is – to find appropriate economic models to sustain professional scientific publishing with open access. The recently agreed arrangements between Springer and the Max Planck Gesellschaft, the UKB (all the Dutch universities plus the Royal Library), and Göttingen University, may point to a way forward. All articles from these institutions in Springer journals are published with open access under these arrangements.

If the underlying motive is, however, to get the most out of the scientific knowledge that has been gathered, which it is in my case, then moving on from open access to the semantic web – the concept web, if you wish – feels, at least to me, an entirely logical step. Not all knowledge, after all, is captured in journal articles. There is much more besides, in databases, for instance, and in less formal web conversations. (A case can even be made that journal publishing ‘destroys’ data, for instance by reducing them to simple pixels in graphs, taking away the underlying richness of the data.) Also, the connections between knowledge fragments are not always easily made purely by reading journal articles, in many areas a problem exacerbated by the sheer number of articles published, all of them relevant. We are in a situation of overwhelming – and growing – abundance of scientific information, and methods that deal with that abundance are clearly needed. This is what Knewco people are working on, and I am very excited to join them.

Jan Velterop

*IDEAL: International Desktop Electronic Access Library – APPEAL: Academic Press Print and Electronic Access Licence



Tuesday, March 04, 2008

Charity and recycled paper

I don't think that assertions such as "...not all OA journals charge anything from either authors or readers..." or even "...the majority of OA journals do not charge anybody..." are very helpful for achieving widespread open access. One does come across them regularly. They seem to have more to do with the desire not to spend anything, or rather, to see to it that if any money is to be spent, it is spent by 'someone else'. They may be mathematically correct, though.

The trouble is, 'journal' is in many respects the wrong entity in this regard. It may be a convenient one, but that doesn't make it right. Journals come in all different sizes. They range from publishing a few articles a year to publishing thousands. The variability is such, and the tail of minuscule journals so long, that I wouldn't even be surprised if it turns out that the smallest 50% of journals altogether represent less than 10% of articles published (I didn't do the calculation, but that's my sense).

I wonder, therefore, if the assertions above hold up if one looks at modal journals (i.e. journals with a modal number of peer-reviewed articles published per year; or perhaps journals with a modal impact factor).

Even if that should be the case, there is another issue. A while ago, I publicly pondered the question of whether any of the non-charging OA journals (the ones that charge neither author nor reader) would be acceptable venues for articles that are the subject of funder mandates, such as those of the NIH or the Wellcome Trust. Not too many, I suspect. So far, I've heard or seen no answers to that question.

The non-charging OA journals are likely to operate on the fringe of scientific and scholarly publishing, and although they no doubt have their function in the landscape, drawing this kind of attention to them at best takes the focus away from the mainstay of the academic peer-reviewed literature, and at worst destroys these small journals, as there would be no way of coping with a flood of submissions without charging anyone.

It is relatively easy to sustain small fringe journals (some of them may be of very high quality, of course, though those are likely to cater to very small communities) on what the Dutch would call "charity and recycled paper" (liefdewerk oud papier). That's not scalable to the peer-reviewed literature as a whole. Open access deserves to be taken more seriously.

Jan Velterop

Monday, February 04, 2008

Survival of uncertainty, or uncertainty of survival?

On Sunday, February 3rd, Peter Suber, on his Open Access News blog, wrote in his comments to a post by Kevin Kelly, that "[new] business models aren't just good ideas, for example, to make OA possible. They are necessities for survival. For publishers, self-interest should be the primary driver for OA."

I fully agree with Peter. I have always approached open access publishing with this as my adage. My previous posting points to some of the ways in which such new business models can develop.

However, a large and prominent school of thought in OA advocacy seems to argue the opposite: namely, that publishers aren't threatened by OA. "Look at physics", they say, "and you'll see that even though almost all articles are freely available in ArXiv, and have been for more than a decade, subscriptions to physics journals survive as if nothing has happened."

Now, is OA necessary for survival, or not, since there is no threat to survival at all? Are these opposing views a sign of OA-diversity, or a kind of quantum effect like Heisenberg's uncertainty principle?

Jan Velterop

Planck cheque - max. access

The Max Planck Gesellschaft (Max Planck Society) have agreed a deal with Springer that includes immediate open access for all articles by Max Planck researchers that are accepted, after peer review, for publication in Springer journals.

This is one of a few - so far experimental - deals, similar in nature (the others are with the UKB - a consortium of the Universities and the Royal Library of The Netherlands - and with the Georg-August University of Göttingen in Germany) that aim to find a way forward in reconciling the desire for universal and immediate open access to peer-reviewed scientific journal articles with the need to ensure the economic sustainability of peer-reviewed journals.

Implicit in these arrangements is that they mix the subscription model with the author-side payment model during a transition to a fully and properly funded open access model across a whole spectrum of journals and disciplines. In the process, any differences in the ability to publish with immediate open access (the 'gold' route) between well-funded and poorly funded disciplines are evened out.

These experiments could quite conceivably see an increase in article submissions to Springer journals by authors from Max Planck Institutes, Dutch universities, and the University of Göttingen, particularly where the choice of journals for those authors is between a Springer journal which will publish with OA and a more or less equivalent journal, in terms of status, impact factor and the like, from another publisher. In fact, such an increase is expected, over time.

In any event, even without such further increases, these arrangements already entail a substantial growth in the number of high-quality peer-reviewed open access articles.

Jan Velterop

Sunday, February 03, 2008

Charcuterie de science

“Gaming the system” is something that inevitably occurs whenever the quantitative outcomes matter (such as impact factors, usage statistics, number of articles on a CV, money, et cetera). Salami-slicing, the subject of a current thread on Liblicense-l, is just one of the ways of gaming the system. I’m not completely convinced that salami-slicing (or even auto-plagiarism, though that goes rather further, of course) is all that unethical. Or rather, that it is more unethical than, say, mutual citation cliques, boosting a journal’s impact factor by publishing review articles, improving usage statistics and impact factors by publishing with open access, et cetera. In the ‘ego-system’ of science, they’re all ways of gaming the system.

The gist of the discussion on Liblicense-l is that salami-slicing is bad. The motives of salami-slicing authors are presented as suspect, and there are strong suggestions that salami-slicing is bad for science.

As always in discussions like this, the definition of what counts as salami-slicing is not clear. In other words, how thin is a slice? Even multiple publication of the same article is brought under this heading. But let’s take as a definition that salami-slicing is the practice of publishing a series of articles, in each of which just one, or a small number, of a larger array of connected contributions to knowledge is presented, when they could all have been presented in one, more substantial article. For instance, "a inhibits b" (just one finding of a set that includes "a inhibits c, and f, and n, and p, and enhances the actions of h, of k, and of z"). Is it really bad for science if these findings are salami-sliced for publication?

I’m not sure, but mining the data from such articles with small units of information may conceivably be easier than mining them from articles that present the whole lot. Or it may make no difference. In certain disciplines, where automated analysis of articles is overtaking actual reading, it may even be desirable and should be the future of science publication. Salami-slicing may come close to publishing entries one by one in a database. If peer-reviewed entries in databases were to give their authors the same sort of acknowledgement as journal articles do, and ‘the system’ (those who decide on funding, promotion, tenure, et cetera) would formally recognise such contributions to science, would we still get upset about salami-slicing?

Gaming the system is human, and it happens in all walks of life, all the time. Usually it’s the result of flaws in the system. In science it is among the survival mechanisms, an evolutionary adaptation, if you wish, to the stresses of the ego-system, and it is done in all manner of guises. Isn’t freely disseminating peer-reviewed research results that are published in journals, by depositing in open repositories, while expecting the journals to continue to be paid for via subscriptions (i.e. via mechanisms intended for and dependent on exclusivity of dissemination), also a way of gaming the system? Ideas about correcting the flaw in the system that makes this particular form of gaming it possible range from stricter copyright enforcement (i.e. abolishing ‘green’ and not publishing if copyright isn’t transferred to the publisher), to open access publishing (i.e. securing payment for the services rendered, via article processing charges, subsidies, and the like). Obviously, the second idea has my preference.

Jan Velterop

Tuesday, January 29, 2008

Open access and publishing

On 24 January, the UK Serials Group (UKSG) published The E-Resources Management Handbook. I contributed a chapter to it on Open access and publishing. It is freely available from the UKSG site.

Jan Velterop

Saturday, January 26, 2008

Plagiarise, don't let anything evade your eyes

Title taken from a song by Tom Lehrer

A commentary in Nature suggested that duplicate publication is on the increase. Mostly autoplagiarism, apparently, as it seems that the majority of these duplicates share at least one author. A few studies are referenced that suggest a relatively low number of plagiarised articles, but a much higher number of suspected duplicates with the same authors. And it is suggested that those have been published simultaneously, which is, of course, not easy to achieve for alloplagiarism ("simultaneous publication is rarely observed for duplicates that do not share authors").

The commentary also suggested that duplicate publication is bad, particularly in areas like clinical research ("Duplication, particularly of the results of patient trials, can negatively affect the practice of medicine, as it can instill a false sense of confidence regarding the efficacy and safety of new drugs and procedures"). This is no doubt true, but one wonders if this negative effect is anything other than minor, given the rather widespread publication biases when it comes to clinical trials, such as this one regarding the treatment of depression with selective serotonin reuptake inhibitors (SSRIs): "Thirty-seven studies were assessed by the FDA as positive and, with one exception, every single one of those positive trials got properly written up and published. Meanwhile, 22 studies that had negative or iffy results were simply not published at all, and 11 were written up and published in a way that described them as having a positive outcome." (Ben Goldacre in The Guardian of January 26, 2008). Judging the scientific validity of findings just by counting articles is clearly pretty primitive.

Autoplagiarism is seen as ethically questionable, to say the least. According to the authors of the Nature commentary, Mounir Errami and Harold Garner, "it not only artificially inflates an author's publication record but places an undue burden on journal editors and reviewers, and is expressly forbidden by most journal copyright rules."

This is undoubtedly true as well, but again, placed in context it may be dwarfed by the burden on journal editors and reviewers imposed by the cascading effect of the whole publication process, with its cycle of submission, rejection, submission to another journal, rejection by that other journal, and so forth, until the article is finally published somewhere, meanwhile peer-reviewed at every stage.

What if the motives of autoplagiarising authors are more benign? What if they just want to ensure wide dissemination of their work and see multiple publication as a way to achieve that? One might say that publishing in a journal that offers open access would be a better way of doing that, or self-archiving in an open repository (and I would certainly be in favour of publishing with open access). But a quick look at the various open access advocacy email lists shows that cross-posting is rife, even though the archives of such lists are completely open. That complete openness is evidently not regarded as sufficient by the cross-posting posters to get the attention they desire. Multi-publication may in essence be the same phenomenon, or at least driven by the same motives. Is it so different from having multiple versions of an article, as in one in a journal, another in a central repository, another in an institutional repository, et cetera? Sure, those should all refer to the same formally published article, so the authors can't get extra credit for them; so maybe it is very different. But hey, the scientific ego-system is a pretty cut-throat arena, and multiple publication seems among the smaller of possible misdemeanors, with at least the positive effect of wider dissemination of research results.

I am not convinced that autoplagiarism is anything other than a minor problem in science. It seems to me that non-publication of negative results is a problem of an order of magnitude greater. It is high time that this bias is addressed, and with the kind of indignation now seemingly accorded to autoplagiarism.

Interesting irony:
A Google search on 'non-publication of negative results in 2007' (search done on 26 January 2008, 16:30 GMT) shows as first result an article in the Journal of the American Medical Informatics Association, with the link:
http://linkinghub.elsevier.com/retrieve/pii/S1067502707000394 which leads to a screen saying "The article you requested is not currently available online".

Further down in the Google results is a link to an abstract that seems to be from the same article, and it is online, albeit not open. From the abstract, the reasons why studies were not published range from "results not of interest for others" (1/3 of all studies) and "publication in preparation" (1/3), via "no time for publication" (1/5), "limited scientific quality of study" (1/6) and "political or legal reasons" (1/7), to "study only conducted for internal use" (1/8).

Jan Velterop

Friday, January 18, 2008

Reviewed reviews

"Book self-archiving cannot and should not be mandated, for the contrary of much the same reasons peer-reviewed journal articles can and should be."
Stevan Harnad
18 January 2008
contribution to liblicense-l

I agree with him.

I think.

The reason I can't be entirely certain is that by peer-reviewed journal articles he may mean the same as the NIH does in its description of the types of articles that fall under the mandate, which says:
"The Policy applies to all peer-reviewed journal articles, including research reports and reviews. The Policy does not apply to non-peer-reviewed materials such as correspondence, book chapters, and editorials."
That's a mistake, in my view. Review articles belong in the second sentence, with editorials and the like; not the first. More often than not, review articles are initiated by a publisher, inviting a distinguished author to write one. More often than not the author is offered some payment for writing it. Seldom if ever is a review article the result of a funded research project.

Review articles have a lot in common with books. And if self-archiving of books "cannot and should not be mandated", the same applies, grosso modo, to review articles.

Even OA publisher par excellence, BioMed Central, requires subscriptions to access review articles, for instance in the journal Breast Cancer Research. I think they are right to do that. It will be interesting, though, to see how BMC will deal with the NIH requirement to self-archive review articles. Will the 12 months' embargo be enough? They currently make these articles freely available after two years ("freely available online to registered users", which isn't quite the same as open access, but maybe that distinction is for pedants only). They could, of course, just avoid inviting authors with NIH grants to write review articles.

Jan Velterop

Tuesday, January 08, 2008

Taking the trip without paying the ship? Episode 2

Peter Suber, on his Open Access News blog, has made several comments on my previous posting. They all warrant a response, but first I'd like to make the general point that much of what separates the OA-advocacy sphere from the publishing sphere comes down to deep-rooted and stubborn differences of perception.

Such as the idea that researchers 'give away' their papers to publishers. It certainly doesn't feel that way on the side of the publishers. There it feels like being asked to perform a service. That's why the process is known as 'submission' and not as 'donation'. Besides, if all this 'giving away' were such a bad thing, why would scientists continue to do it? They may be many things, but they're not stupid.

Or the idea that the information in articles is being 'locked up' by publishers for the sake of control. As far as publishers are concerned, any scientist is completely free to self-publish his articles on his own web sites or in repositories. What causes the 'lock-up' (at least until subscriptions are replaced by other ways of paying for publishers' services) is the requirement to publish in reputable peer-reviewed journals. Not a requirement imposed by publishers. That is not to say that it isn't a useful requirement. One of the main roles of publishers is to provide the structure for a professional, timely and efficient peer-review process to take place, on the scale necessary. Anybody can organise peer review of their own papers and decide not to bother a publisher with it, just as anybody can buy their eggs and wheat from a farmer and proceed to bake their own cake. Both happen, though most people have no time for it, find that they lack the requisite skills, or just find it downright boring. Publishers – and bakers – are there to professionalise and speed up that process, offering to take the hassle out of the hands of scientists, leaving them to spend their time where their real interests lie: doing science.

Back to Peter's comments:
"Subscription journals and mandated open access are not compatible." Jan's argument depends on the high level of OA archiving, whether that level is caused by a mandate or by a successful disciplinary culture of self-archiving. It therefore predicts that the near-100% level of OA archiving in physics would kill off subscription journals in physics. But that is not what we see when we look. On the contrary: the American Physical Society (APS) and the Institute of Physics Publishing Ltd (IOPP) have seen no cancellations to date attributable to OA archiving. In fact, both now host mirrors of arXiv and accept submissions from it. They have become symbiotic with OA archiving. We may or may not see the same symbiosis in other fields, as their levels of OA archiving rise to levels now seen in physics. But the experience in physics is enough to falsify the flat prediction that subscription journals and high-volume OA archiving are incompatible. For more on the question whether high-volume OA archiving will cause libraries to cancel subscription journals, see my article from September 2007 (esp. Sections 4-10).
First of all, my argument doesn't depend on a high level of OA-archived content to be valid. If there is a high level of OA content, then potential cancellations are the issue. At a lower level, we see an expectation – increasingly a demand – for reductions in the subscription fee. You could call that 'partial cancellation' if you wish. As for the idea that the field of high energy physics demonstrates that subscriptions and self-archiving are compatible, I do wonder why it is that the SCOAP3 initiative was taken. The compatibility that seems to exist in high energy physics is like the fluidity of supercooled water. SCOAP3, the idea of which is to abolish subscriptions altogether, will be the dropping in of the coin around which that water quickly solidifies into ice.

The incompatibility of subscriptions and OA (whether self-archived or otherwise) is as fundamental as the freezing point is to supercooled water. In exceptional circumstances, temporary unstable states can occur. I accept that, pragmatically, this unstable state of pseudo-compatibility can persist for a while, and runaway cancellations won't necessarily take place until the penny drops properly.

The second of his comments:
Jan assumes that all OA journals charge author-side publication fees. ("They don't give authors a choice and simply refuse to publish articles unless they are paid for by article processing charges....") But in fact most OA journals charge no publication fees. Last month, Bill Hooker's survey of all full-OA journals in the DOAJ found that 67% charged no publication fees. The month before, Caroline Sutton and I found that 83% of society OA journals charged no publication fees.
I'm certainly not assuming that all OA journals charge author-side fees, and I have no reason to doubt the numbers that Bill Hooker and Peter and Caroline come up with. Since the topic at hand was the NIH mandate, however, the question I have is how many of those 67 to 83% of non-fee-charging OA journals would be acceptable journals for NIH-grantees to publish in?

His third comment:
If "paying the ticket" means paying the publication fee at a fee-based OA journal, then there are two replies. First, the NIH already allows grantees to spend grant funds on such fees. Second, but the NIH does not, and should not, require grantees to publish in OA journals. There aren't yet enough peer-reviewed OA journals in biomedicine to contain the NIH output; and even if there were, such a requirement would severely limit the freedom of authors to publish in the journals of their choice. That's why all funder mandates worldwide focus on green OA, not gold OA.
The freedom of authors to publish in the journals of their choice is important. I fully agree with that. The fact that this is seen as such an important tenet of academic freedom only serves to underscore how important journals are for reasons other than just distribution. That is why I argue that all journals should offer at least the option of immediate OA, and I do take the point that there are not yet enough journals that offer it. (By the way, a journal that offers immediate OA isn’t the same as an OA journal. Journals that offer OA include ‘hybrid’ journals. As the Bethesda Statement clearly says: “Open access is a property of individual works, not necessarily journals or publishers”.)

The NIH should indeed not require publishing in OA journals or in journals that offer OA as an option. But if they are truly aiming to have, eventually, a solid and sustainable OA publishing system, they could at least advise publishing with OA and make it clearer and more widely known that they allow grantees to spend grant funds on article processing fees for immediate open access.

Peter's last comment, an extensive one:
If "paying the ticket" means paying for peer review even at TA journals, when grantees submit their work to TA journals, then the reply is somewhat different. TA journals are already compensated by subscription revenue for organizing peer review. The NIH mandate will protect their subscriptions by delaying OA for up to 12 months and by providing OA only to author manuscripts rather than to published articles. In the September 2007 article I mentioned above (Section 6), I list four incentives for libraries to continue their subscriptions even after an OA mandate. If the argument is that these protections don't suffice, and that the risk to publishers is too great, then my answer is that Congress and the NIH have to balance the interests of publishers with the interests of researchers and the public. Here's how I described that balance last August:

Publishers like to say that they add value by facilitating peer review by expert volunteers. This is accurate but one-sided. What they leave out is that the funding agency adds value as well, and that the cost of a research project is often thousands of times greater than the cost of publication. If adding value gives one a claim to control access to the result, then at least two stakeholder organizations have that claim, and one of them has a much weightier claim than the publisher. But if publishers and taxpayers both make a contribution to the value of peer-reviewed articles arising from publicly-funded research, then the right question is not which side to favor, without compromise, but which compromise to favor. So far I haven't heard a better solution than a period of exclusivity for the publisher followed by free online access for the public....Publishers who want to block OA mandates per se, rather than just negotiate the embargo period, are saying that there should be no compromise, that the public should get nothing for its investment, and that publishers should control access to research conducted by others, written up by others, and funded by taxpayers.
The first two sentences sound suspiciously like "free-riding on the bus is OK, because the bus company is already compensated by the revenue from season ticket holders". I'm pretty sure that is not what he means, but what does he mean?

His reasoning on the balance struck is also shaky. Yes, publishers do add value, but why does saying so imply that they are the only ones adding value? And they don't claim a right to control access; they have to control access as long as there is no widely accepted way, other than subscriptions, for them to charge for the value they add. That's the beauty of author-side payment: it naturally removes the need to control access that comes with the subscription model. 'Gold' -- paying for the services you ask a publisher to perform -- is so much cleaner than messing around with compromised subscriptions and embargoes. And it would result in OA immediately upon publication, not 12 months later.

Anyway, perhaps this NIH mandate is a spur for publishers and societies to accelerate moving to 'gold', at least for articles falling under these mandates.

Jan Velterop

Sunday, January 06, 2008

Taking the trip without paying the ship?

‘Twas the time of peace on Earth, making merry for some, serious contemplation for others, and infantilisation for others still, if I read the blog and list postings of the last few weeks. And combinations of all of the above, of course. Many of those who favour Open Access have reason to be happy, since the NIH mandate has passed all its hurdles in the US legislature and is becoming law. Albeit, oh irony, as a stowaway in a spending bill that allocates nigh unlimited funds to war, a small fraction of which would have made the entire academic literature published since the dawn of modern science open to anyone in the whole world. A bag of sweets hidden in a barge of poison. It is a shame the mandate couldn’t make it on its own.

What is it all about?

The mandate in the bill requires researchers, authors, to deposit the articles resulting from their NIH-funded research immediately in PubMed Central and to make them open no later than 12 months thereafter. Read thus, the whole thing is ostensibly taking place outside the purview of publishers, as it is not they who are mandated to do anything. There’s even a positive message for many of them, if they are willing to hear it. Open access is, after all, a desirable thing, politically and scientifically. And it is not just any articles resulting from their research that grantees are mandated to deposit and make open within 12 months; it is their published, peer-reviewed articles. So what publishers have to do is make sure they offer authors open access – or at least embargoed open access – to the articles for which they, the publishers, arrange peer review and then formal publication in a journal.

How they do that is the question. Most journals get ‘paid’ for their efforts by the authors’ transfer of copyright. This copyright they then ‘trans-substantiate’ into money via subscriptions. What an embargo does is simply make this ‘payment’ of copyright worth less. For some journals, an embargo of 12 months will make little difference. The time-sensitive currency of the information published in those titles means that libraries need to subscribe to get immediate access anyway. For those, the ‘value’ of copyright is not eroded. But for other journals, the ones that publish less time-sensitive material, a mandate is potentially devastating, a double whammy, removing the incentive to pay both on the part of the librarian, who judges that his or her constituency can wait 12 months for access, and on the part of the author, who, given the option, may judge that his or her readers can wait 12 months for access. Subscription journals and mandated open access are not compatible. Only journals run on entirely charitable support can survive this way.

Fully open access journals stand somewhat on the sidelines as observers of the spectacle, since they have already understood that being dependent on whatever embargo term governments may allow you in which to sell subscriptions is just too risky. They don't give authors a choice and simply refuse to publish articles unless they are paid for by article processing charges, a.k.a. author-side publication fees. Subscription-based journals and hybrid journals (those that offer paid-for open access as an option) are the ones likely to suffer, although hybrid journals can, of course, also remove the non-OA option for NIH-funded research articles and behave exactly like a full OA journal towards NIH-grantees.

Surely, the stowaway analogy doesn’t go further than the mandate simply being buried in the bowels of the bill, does it? Surely, the free-readership mandate doesn’t imply free-ridership, too, does it? Surely, the mandate doesn’t imply that NIH-funded researchers are compelled to take the trip without paying for the ticket? If it does, the bill is fundamentally a dishonest one. If it isn’t a dishonest one, surely the NIH will clearly indicate that it is entirely legitimate, and advisable, for authors to spend a small percentage of their grant money – estimates range from 1 to 2 percent – on the article processing fees for publication with immediate open access?

If the bill really is of the fundamentally dishonest variety feared, one of ‘taking the trip without paying the ship’, then this OA ‘victory’ will, alas, turn out to be a Pyrrhic one. A short-term pseudo-success at the cost of a long-term open access solution. A palliative that ultimately kills instead of a treatment that ultimately cures.

Advocates of true, immediate, and sustainable open access, as an integral part of research, may still have a long way to go.

Happy 2008!

Jan Velterop