One thing that sucks about #PeerReview being so broken, a vector of domination rather than cooperation, is that reviews at their best can be skillshares as much as anything else. In some code reviews I have given and received, I have taught and learned how to do things that I or the other person wished they knew how to do but didn't.
That literally can't happen in the traditional model of review, where reviews are strict, terse, and noninteractive. Traditional review also happens way too late, after all the planned work is done. Collaborative, open, early review literally inverts the dreaded "damn reviewers want us to do infinity more experiments" dynamic. Instead, wouldn't it be lovely if, during or even before you run an experiment, you had a designated person who could say "hey, have you thought about doing it this way? If not, I can show you how."
The adversarial system forces you into a position where you have to defend your approach as The Correct One, and any change to your Genius Tier experimental design can only be to validate the basic findings of the original. Reviewers cannot be treated as collaborators, and thus have little incentive to review in any spirit other than "gatekeeper of science."
If instead we adopted some lessons from open source and treated some parts of review like "pull requests" - where fixing a problem is partly the responsibility of the person who thinks it should be done differently, and they then get credit for that work in the same way the original authors do - we could:
a) share techniques and knowledge between labs in a more systematic way,
b) have better outcomes by moving beyond the lone genius model of science,
c) avoid a ton of experimental waste from either unnecessary extra experiments or improperly done original experiments,
d) build a system of reviewing that actually rewards reviewers for being collegial and cooperative.