Rule no. 5: Improving a resource through peer review.

The more a resource is refined by expert eyes, the more relevant and reliable it becomes. It’s this collaboration of specialists that we’re looking for.

This peer review process comes from science; it’s how knowledge has been built up over the last few centuries. It is fundamental to making constantly evolving knowledge reliable, and then to improving it.

Eric Raymond, in his essay The Cathedral and the Bazaar, formulates Linus’s law: “given enough eyeballs, all bugs are shallow.”

The more a piece of software is read and re-read, the higher its quality can become. This is what produces the security and robustness of open source software, which can be studied and reworked by anyone. That’s why, when adopting open source software, it’s important to analyze the size and involvement of its community in order to assess its potential quality.

There are limits to this logic, however: not all eyes are created equal, and some subjects are genuinely complex. To stay with the Linux kernel, it is practically impossible for a layperson to judge the quality of the code, as considerable expertise is required just to get started.

This logic of peer review also applies to Wikipedia. Not all articles are equal, but those of the highest quality are the ones that have been reviewed by a large number of experts, academics, specialists, enthusiasts… In fact, since the encyclopedia is public and collaborative, it is potentially the most reliable source of knowledge ever created by humankind. Where a scientific journal article is reviewed by two or three peers, a Wikipedia article can count on thousands.

The Delphi method gives a name to this phenomenon, stating that “predictions made by a structured group of experts are generally more reliable than those made by unstructured groups or individuals” (cf. Wikipedia).

To obtain a quality resource, we should seek to maximise peer review.