"Bad-Scientist XX submitts his 'p' value and his explanation to the journal editor, and the editor accepts the changes and publishs the paper. There are no rules enforcing that analysis code is submitted with papers for review, but as we know, sometimes it takes a lot of eyes to find a bug, and Bob isn't exactly motivated to debug his code in the first place. He thinks that if it produces a number, then it's right. So a paper with a serious flaw gets through peer review and is now in the permanent record. What can we do about this kind of situation?"
My answer: Well, there is a rise of open, post-publication review on Facebook and Twitter. We could also include educated patients who have taken MITx MOOCs.
MITx Answer:
"There is no one right answer here. But if you're thinking about it, you are on the right track. Here are some ideas for what we can do individually:
Treat code-writing as a craft. Always be learning, teaching, and improving.
Treat doing science as a sacred duty. Career ambitions will often conflict with your sacred call to objectivity. Stay objective.
Do not be a scientist who's "bad at coding". If you get behind the wheel of a computer and write any code, don't endanger others while doing so!
Keep your code hosted on GitHub. Learn version control. Let others see your code and help make sure it's right. Help others by reviewing their code.
Write unit tests.
Don't be embarrassed by your code. Learning to code well takes a very long time. Embrace the void of the unknown - just aim to be slowly improving, and you will be ahead of the curve in no time."
Copyright: MITx, edX.
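On the "Write unit tests" point above, here is a minimal sketch (not from the course) of what that can look like for analysis code like Bob's. The function compute_p_value and its test data are hypothetical; the example assumes scipy is available and uses scipy.stats.ttest_ind with Python's built-in unittest. Even a few sanity checks like these would have forced Bob to look at his code a second time.

```python
# A minimal, hypothetical sketch of unit-testing analysis code.
# compute_p_value and the sample data are made up for illustration.
import unittest

from scipy import stats


def compute_p_value(group_a, group_b):
    """Two-sided p-value from an independent two-sample t-test."""
    _statistic, p_value = stats.ttest_ind(group_a, group_b)
    return p_value


class TestComputePValue(unittest.TestCase):
    def test_identical_groups_give_large_p(self):
        # Identical samples should show no evidence of a difference.
        sample = [1.0, 2.0, 3.0, 4.0, 5.0]
        self.assertGreater(compute_p_value(sample, sample), 0.9)

    def test_clearly_different_groups_give_small_p(self):
        # Well-separated samples should yield a small p-value.
        low = [1.0, 1.1, 0.9, 1.2, 1.0]
        high = [10.0, 10.2, 9.8, 10.1, 9.9]
        self.assertLess(compute_p_value(low, high), 0.01)

    def test_result_is_a_probability(self):
        # A p-value must always lie in [0, 1].
        p = compute_p_value([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
        self.assertTrue(0.0 <= p <= 1.0)


if __name__ == "__main__":
    unittest.main()
```

Running the file (python test_analysis.py) executes all three checks; the same tests kept in version control alongside the analysis script let reviewers rerun them on every change.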
We certainly need a better image than this on #peerreview in a digital age. http://t.co/QcQa86NUUO … #BMCEds14 pic.twitter.com/HA1jDMuLe6
— Graham Steel (@McDawg) April 22, 2014