Models, Bias, and Necessary(?) Anonymity
- Feb 2, 2018
- 4 min read
With the start of a new semester, my social studies class switched from Public Policy to Economics, and although we've only had a few classes so far, I'm finding the concepts to be compelling in an extremely logical way. After all, economics is the study of human action under conditions of scarcity. The concept I'm finding most applicable to my own emc project, however, is the building of models. I've learned that economic theory is essentially that: it seeks to explain how the world operates by building a simplified representation of it through a body of assumptions, one that focuses on explaining certain aspects of human society. Every model, however, comes with inherent weaknesses that can manifest themselves in a variety of ways. The two we discussed in class were bias and lack of care. Bias, although an unintentional source of error, is an important one to consider. Complete objectivity is a theoretically optimal but virtually unachievable characteristic of any human determination. Lack of care, on the other hand, is rather straightforward. Despite likely being unintentional, this source of error is remedied far more easily than bias. Lack of care often arises from an insufficient degree of review or analysis, but that doesn't mean the producer of the model cannot make alterations to eliminate such errors.
But anyway, back to bias. Inescapable human bias is the reason for double-blind studies, anonymous peer review, and just about every other name-excluding practice that exists throughout our daily operations. We know enough about the way humans think, live, and interact to know that we can't always trust ourselves to be personally removed from situations and decisions. At the same time, though, this reality raises the question of whether our goal should be the elimination of bias. Can it be such a bad thing if we're humans dealing with other humans under the same conditions of bias? I'm not entirely sure what I think of that, but I digress. This morning, I came across an article from a few days ago published in the journal Science with the title "In unusual move, judge grants CrossFit’s request to unmask anonymous peer reviewers." Essentially, a paper published in 2013 by The Journal of Strength and Conditioning Research reported that 16% of athletes dropped out of the exercise program due to injury. CrossFit has claimed in court that the statistic is false and should not have been published. It went even further, blaming the journal's publisher, the National Strength and Conditioning Association (NSCA) of Colorado Springs, Colorado, for "intentionally skew[ing] the study to damage CrossFit" because the NSCA is a competitor in the fitness business. Naturally, with both sides blaming the other, the case investigation turned toward the history of revisions the paper underwent. It was shown that the paper was initially edited to reduce the number of injuries, but the journal later retracted those changes to coincide with a new protocol, despite its lack of approval by a university review board.
The result? Since CrossFit still stands by its claim that supposedly anonymous reviewers played up the number of injuries, and the NSCA has proceeded to countersue on the grounds of defamation, a California judge has permitted the unveiling of the reviewers' identities. Although I often prefer to remain rather removed from the politics of science, this specific case captured my attention because of the implications suggested by the author. Han speculates that a case like this may cause scientists to think twice before submitting their papers to previously unquestioned journals, and may even decrease their willingness to take part in peer review. As someone who's really just observing this world of science, peer review, and publication from afar, I found myself returning to the idea and impact of bias when considering the case discussed in this article. In an economic sense, it almost seems like accounting for bias would make a model more valuable, since it would imply that we are "letting in" more reality (much like the concept of weak assumptions). But simultaneously, it's extremely difficult, maybe even impossible, to quantify such an intangible and ambiguous factor. Furthermore, we must keep in mind the purpose of a theoretical model. I've discussed this probably far too many times already, but just to reiterate: a model would not be a model if it were an exact replication of the situation at hand. But once again, I digress.
It seems that the intentional project of preventing bias, an apparently unintentional source of modeling error, is not as foolproof as one would think or hope. As humans, it's extremely difficult to imagine a possible world where such a factor didn't affect the outcome of things. Once again, this reality calls into question the constant desire to lessen or remove the effect of bias. If you think about it, our environments and "nurture" are always making their own alterations to our epigenome, where certain genes can be switched on or off through methylation based on the external circumstances we are subject to throughout our lifetimes. It would then be a little naive to think that bias can be completely removed. If we are unknowingly affected by external circumstances in our very biology, how much more of an effect ought we to expect in the domain of our thinking, personality, and general conduct? The only difference, then, lies in our questioning of it.