The definition of cheating is an interesting one. If tests are intended to measure competence, then the concept of a closed-book test naively seems nonsensical. After all, in any realistic situation, not only books and notes but also fellow professionals will of course be available; competence lies in correct execution.
Naively. The point of a test, of course, is the same as the point of a controlled experiment: removing other variables to simplify a problem, then correlating the results of the test with the real world. Formally, then, two factors determine the efficacy of a test: to what extent “competence” is among the variables removed, and to what extent the test correlates with the real world.
(On a side note, the SAT manages the former reasonably well, avoiding the removal of competence, but fails disastrously at the latter.)
Thus the entire point of studying history is to hold history in your mind and make inferences and analogies on the fly, so open-book tests would be completely pointless: they correlate poorly with one’s ability to perform in reality. Meanwhile, the point of most STEM fields is practical; so long as you can solve the problem, the method should not be overly penalized, and thus open-book tests are legitimate. (Of course, this only extends so far as the concepts are internalized.)
So far so good. Why, then, penalize collaboration? Other experts are available for questioning in every field. Again, in a field like history, one’s ability to readily recall large swathes of time is much of the point, but even so there are times when it is entirely appropriate to simply ask a colleague. In the STEM fields, inter-field collaborations are finding masses of low-hanging fruit that had previously gone unnoticed by monofocus researchers.
This might be a rare case where our analysis does not line up with reality: it seems sensible to test collaboration, group thinking, and synthesis, at the very least in addition to individual capability.