Technoscientists occupy an interesting position. Like
engineers, many technoscientists pursue a technical field, using scientific knowledge
to further their aims. Unlike engineering, however, science is often
seen as the pursuit of more knowledge in a specific field rather than the application
of that knowledge. Learning more about the universe and furthering humanity’s
understanding of how the world works is one of the core motivations of science.
While many engineers may find their personal actions constrained by the
structure of management and corporate policy, some scientists retain more
autonomy in what they study and how they choose to fund that research. Because
they are the experts in their area of choice, their opinion and position hold
significant sway over the direction that progress in that subject will take.
As a general rule, I think we can assume that most scientists pursue
their work out of intellectual curiosity and without any direct
malicious intent. It is precisely this focus on the “purity” of the field,
however, which may make science and technoscientists vulnerable to the
alternate intents of external forces. While remaining passive and refusing to
take sides in times of controversy may continue to produce good science, the
global implications of such a removed mindset can be devastating. The process
of analyzing the ethical and socioeconomic implications of scientific research
and acting on such analysis is not trivial, but it is an activity that must be
continually undertaken and improved upon. As some of the primary producers of
innovations that affect everyday society, technoscientists must
act as the first line of regulation for research that could carry unforeseen consequences.
I will argue that there are a number of strategies to help achieve this goal,
including: recognizing the effects of outside influences on research, constantly
re-evaluating the ethicality and morality of participation in research
projects, and minimizing misunderstandings between scientists, the media, and
the public.
Recognizing the effects of outside parties on research is
not only difficult, but also somewhat of an awkward topic. Modern scientific practice
hinges on objectivity and on a process that seeks to prevent personal bias from
tainting the research results. In certain fields this process
is known as the scientific method, a simple sequence of steps for conducting
good science. While the content and complexity of this method vary between
institutions and individuals, most can agree that it covers the following steps
(Rochester.edu):
- Observation and description of a phenomenon
- Formulation of a hypothesis to explain the observed phenomenon
- Use of the hypothesis to predict other phenomena and/or the quantitative
results of new observations
- Performance of a repeatable experiment to test the
hypothesis
This step-by-step process is intended to leave little room
for the experimenter’s opinions to influence the published results. This is by design: in a perfect world,
a tested and failed hypothesis is just as valuable as a validated one. Despite
this, outside influences can still subconsciously shape experiments. A
2010 article in The New Yorker investigated this very effect, terming it “selective
reporting”. The pressure to validate
hypotheses can be enormous when financial (corporate) interests or an
individual’s career is on the line. Even when the data do not support the
hypothesis, many researchers continue analyzing them until some significant trend
can be found, and rather than conducting a new experiment designed to isolate this
trend, they publish these results directly. A similar effect was
studied by statistician Theodore Sterling in the 1950s when he noticed that 97%
of all published psychological studies with statistically significant data
found the effect they were looking for (Lehrer, 2010). While on their own
these results may not appear dangerous, when privately funded research draws
poorly supported conclusions, the risk of dangerous products reaching the
public increases. Rather than pretending that science occurs in a vacuum,
technoscientists must remain vigilant against the selective-reporting
effect.
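The statistics behind selective reporting can be illustrated with a short simulation (my own sketch, not drawn from the article): even when the true effect is zero in every experiment, roughly 5% of runs will clear the conventional p < 0.05 significance threshold by chance alone, so a literature that publishes only the "significant" runs will report an effect 100% of the time.

```python
import random
import statistics

random.seed(0)

def run_experiment(n=30):
    """Simulate one experiment where the true effect is zero:
    every measurement is pure noise from a standard normal."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # Declare "significance" if the z-score exceeds 1.96 (p < 0.05).
    return abs(mean / se) > 1.96

# Run 1000 experiments; expect roughly 5% false positives.
results = [run_experiment() for _ in range(1000)]
false_positives = sum(results)
print(f"significant results: {false_positives} of 1000")

# Selective reporting: only the significant runs get published,
# so every published study "finds" the (nonexistent) effect.
published = [r for r in results if r]
print(f"published studies reporting an effect: {sum(published)} of {len(published)}")
```

The sample size, number of runs, and significance cutoff here are illustrative choices; the point is only that filtering results after the fact guarantees a misleadingly confirmatory published record.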
Works cited:
"Introduction to the Scientific Method."
Introduction to the Scientific Method. N.p., n.d. Web. 20 Apr. 2015.
<http://teacher.nsrl.rochester.edu/phy_labs/appendixe/appendixe.html>.
LEHRER, JONAH. "The Truth Wears Off." The New
Yorker. N.p., 13 Dec. 2010. Web. 20 Apr. 2015.
<http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off>.