Monday, April 20, 2015

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 2)

When embarking on any research project, a technoscientist must evaluate whether the results of the study will be used in ways they deem ethically and politically responsible. This is a simple first step, but one that depends significantly on the judgment of the individual in question. While opinions of what is ethical vary, ensuring that every participant in the research is comfortable with the consequences is a strong start toward determining whether the study should continue. In many respects, this is similar to employment in any controversial industry. Often, though, this first step is not enough. Repeated self-inquiry is the only way for individuals to ensure they can morally continue in their topic of research.

An excellent example of this comes from the famous theoretical physicist Richard Feynman. Feynman's genius was recognized from his youth, and almost immediately after completing his Ph.D. he was recruited into the Manhattan Project. He worked at both the Los Alamos and Oak Ridge facilities, and made contributions to safety procedures that helped enable the development of the first nuclear bombs. Later in his life, Feynman expressed regret at not reconsidering his work after the defeat of Germany in World War II. While he maintained support for his initial participation, he found it difficult to justify his work past that point (atomicheritage.org). Feynman's experience perfectly illustrates the dangerous human tendency to question once and then accept the consequences from that point on, even as the situation changes. Another lesson can be learned from Feynman's mistake: there is value in looking to history for additional perspective on the present. There are countless examples of noble-minded scientists oblivious to the sometimes devastating consequences of their research. In some situations, it can be difficult to cease participation once it has begun. In these cases especially, it is essential to understand the loss of control one has over information once it is released. There is no way to enforce this personal caution among all scientists, but I believe that many would adopt it (or continue to practice it) if more critical analysis were encouraged.

One of the final methods to encourage ethically and socio-politically responsible outcomes from science is to minimize misunderstandings. While the suggestion of improved communication could hardly be amiss in almost any field, the process of communication between technoscientists, society, and the media has far-reaching consequences. This point becomes even more pertinent given the history of poor communication between these three groups. In many cases, it may be difficult for scientists to translate highly technical findings in a way that is both useful and simple enough for the media to understand. There are a multitude of pitfalls in this process. The first is the potential for a scientist to communicate poorly: an explanation of important work or results that is indecipherable, or susceptible to misinterpretation, is dangerous. Similarly, there is the danger of oversimplification. If a concept is abstracted too far, there remains no value in the media reporting it. The media holds responsibility for miscommunication too. As we discussed in lecture, media groups are typically profit-driven organizations. This can lead to an unhealthy focus on “fun” science at best, and sensationalism at worst. Moreover, time and time again, simple ignorance on the media's and society's behalf can lead to misinterpreted statistics and unfounded conclusions. To some degree, scientists need to become judges of what is most important for the media and society to know. This is a huge but necessary responsibility, and despite the reticence some may feel about participating in the process, it is scientists themselves who are in the best position to perform this arbitration. I would argue that it also becomes the responsibility of the scientist to stay current with modern news and controversy. Any individual in a position affecting so many others must consider their choices with a broader scope. This too is a challenging practice to enforce on its own.
Perhaps a mandatory follow-up period could be required for certain research: scientists who produce innovations in sensitive fields could be required to serve on a regulatory committee and/or guide the directions in which the innovation is taken. This requirement could increase the degree of personal responsibility scientists feel for how their discoveries affect society.


Scientists discover and innovate in ways that can both enhance and diminish public well-being. Application is not the only goal of science, however; some science is undertaken purely for the sake of knowledge, in the belief that knowing more about the universe is a worthwhile end in itself. Because of the massive ways in which technoscientists affect society, it becomes the responsibility of these individuals to consider the implications of their research as both professionals and members of the public. One important way this can be done is by remaining aware of the outside influences on any given experiment, and the ways these can indirectly affect the accuracy of results through selective reporting. Another essential method for maintaining ethics in science is regular critical thinking and self-inquiry about the kind of work scientists perform. Finally, the curriculum and education of young technoscientists needs to focus on communicating technical work to the public. Scientists are the ambassadors between the future and the present, and their expertise is needed not only for discovery, but also for integrating these innovations responsibly.

Works cited:

"Richard Feynman." Atomic Heritage Foundation. N.p., n.d. Web. 20 Apr. 2015. <http://www.atomicheritage.org/profile/richard-feynman>.

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 1)

Technoscientists occupy an interesting position. Like engineers, many technoscientists pursue a technical field, using scientific knowledge to further their aims. Unlike engineering, however, science is often seen as the pursuit of more knowledge in a specific field rather than the application of that knowledge. Learning more about the universe and furthering humanity's understanding of how the world works is one of the core motivations of science. While many engineers may find their personal actions constrained by the structure of management and corporate policy, some scientists retain more autonomy in what they study and how they fund that research. Because they are the experts in their chosen areas, their opinions and positions hold significant sway over the direction in which progress in a subject moves. As a general assumption, most scientists pursue their work out of intellectual curiosity and without any direct malicious intent. It is precisely this focus on the “purity” of the field, however, that may make science and technoscientists vulnerable to the alternate intents of external forces. While remaining passive and refusing to take sides in times of controversy may continue to produce good science, the global implications of such a removed mindset can be devastating. The process of analyzing the ethical and socioeconomic implications of scientific research, and acting on that analysis, is not trivial, but it is an activity that must be continually undertaken and improved upon. As some of the primary producers of innovations that affect everyday society, technoscientists must act as the first line of regulation in research that could hold unforeseen consequences.
I will argue that there are a number of strategies to help achieve this goal, including: recognizing the effects of outside influences on research, constantly re-evaluating the ethics of participation in research projects, and minimizing misunderstandings between scientists, the media, and the public.

Recognizing the effects of outside parties on research is not only difficult, but also a somewhat awkward topic. Modern scientific practice hinges on objectivity and on a process that seeks to prevent personal bias from influencing the purity of research results. This process is known as the scientific method, a simple sequence of steps for conducting good science. While the content and complexity of this method varies between institutions and individuals, most can agree it covers the following (Rochester.edu):

- Observation and description of a phenomenon
- Formulation of a hypothesis to explain the observed phenomenon
- Use of the hypothesis to predict other phenomena and/or the quantitative results of new observations
- Performance of a repeatable experiment to test the hypothesis


This step-by-step process is intended to leave little room for the opinions of the experimenter to influence the published results. This is by design, because in a perfect world a tested and rejected hypothesis is just as valuable as a validated one. Despite this, outside influences can still exert subconscious effects on experiments. A 2010 article in The New Yorker investigated this very effect, terming it “selective reporting”. The pressure to validate a hypothesis can be enormous when corporate financial interests or an individual's career is on the line. Even when the data doesn't support the hypothesis, many researchers continue analyzing it until some significant trend can be found. Rather than conducting a new experiment designed to isolate this effect, the results are instead published directly. A similar effect was studied by the statistician Theodore Sterling in the 1950s, when he noticed that 97% of published psychological studies with statistically significant data found the effect they were looking for (Lehrer, 2010). While on their own these results may not appear dangerous, when poorly supported conclusions are drawn from privately funded research, the risk of dangerous products reaching the public increases. Rather than pretending that science occurs in a vacuum, it is important for technoscientists to remain vigilant against the selective reporting effect.
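The selective reporting effect is easy to demonstrate with a toy simulation. The sketch below (my own illustration, not taken from the Lehrer article) runs many experiments on an effect that is truly zero, then “publishes” only the statistically significant ones. The published record ends up full of sizable effects even though none exist:

```python
import random
import statistics

random.seed(0)

def run_experiment(n=30, true_effect=0.0):
    """Simulate one study: n noisy measurements of an effect that is truly zero."""
    sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    z = mean / se
    significant = abs(z) > 1.96  # roughly p < 0.05
    return significant, mean

results = [run_experiment() for _ in range(1000)]
# Selective reporting: only "positive" findings get written up.
published = [mean for sig, mean in results if sig]

print(f"Experiments run:         {len(results)}")
print(f"Significant (published): {len(published)}")
print(f"Mean published |effect|: {statistics.mean(abs(m) for m in published):.2f}")
```

Roughly 5% of the null experiments clear the significance bar by chance, and every one of them reports a spuriously large effect, which is one mechanism behind Sterling's observation that almost all published significant studies found what they were looking for.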

Works cited:

"Introduction to the Scientific Method." Introduction to the Scientific Method. N.p., n.d. Web. 20 Apr. 2015. <http://teacher.nsrl.rochester.edu/phy_labs/appendixe/appendixe.html>.

Lehrer, Jonah. "The Truth Wears Off." The New Yorker. N.p., 13 Dec. 2010. Web. 20 Apr. 2015. <http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off>.

Sunday, April 12, 2015

The Benefits and Dangers of Synthetic Biology

A recent lecture and reading assignment introduced us to the rapidly expanding world of biological engineering. According to "Biology's Brave New World," this is a field of study where scientists are beginning a new era of rapid learning and progress. More concerning, however, is the lack of regulation and ethical study that has accompanied this wealth of technical innovation. Could this be another case of technology outstripping society's capacity to address it? This is a complicated subject, and much of the research being performed in this area is dual-use: capable of being exploited for a number of unintended purposes, both positive and negative. As biologists become engineers and access to genetic information grows, it will not be long before the general public has access to the tools and manufacturing services that allow new life to be designed. One concern is the development of biological warfare agents by motivated groups and individuals. Even if the most dangerous data and results were kept classified, the problem of information security would then enter the scenario. The increasing ease of access to synthetic biology information and tools carries both advantages and disadvantages for society at large.
One advantage of this lower “barrier to entry” is the ability to perform rapid and inexpensive Intelligent Trial and Error (ITE). The cost of synthetic biological research has been plummeting in recent years. Every year, the cost of sequencing a genome drops by a factor of 5 to 10. This is well ahead of the rate predicted by Moore's law, and as of January 2014, the cost had dropped below $1000 (Business Insider, 2014). Sequencing is not the only area in which costs have dropped, however. The synthetic biology competition iGEM (International Genetically Engineered Machine) has existed since 2003, and provides resources and structure that allow high school and college students to design and grow their own genetically engineered organisms. The competition promotes the development of increasingly sophisticated bacteria, with complexity rising every year. Impressively, the competition operates on an annual basis, proving that substantial design improvements and changes can be implemented on a short time frame. Speeding the process even further is the development of automated assembly processes, which could supplement or replace standard assembly and parallel assembly techniques. Fast turnaround is essential to the iteration process of ITE, and the ability of minimally funded student teams to produce work so quickly is strong evidence that professional teams could evolve designs even faster. Crucially, iGEM gives access to the Registry of Standard Biological Parts, a standardized source of common biological components that allows for rapid (and relatively simple) development of completely new genetic recipes. In the case of iGEM, many of the components are assembled as “BioBricks,” which can be used in designs and supplemented by software to increase the ease of engineering (igem.org).
With the numerous standards and technologies in place, increased speed of the bacteria assembly process, and tremendous drop in price of genetic research and components, intelligent trial and error can be performed faster and more consistently to help negate the unexpected consequences of rapid innovation.
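To get a feel for how much faster a 5x-per-year decline is than a Moore's-law pace (cost halving roughly every two years), consider the following back-of-the-envelope comparison. The starting cost and time horizon here are hypothetical figures chosen purely for illustration:

```python
# Rough comparison of the two cost-decline rates mentioned above.
start_cost = 10_000_000.0  # hypothetical starting cost, in dollars
years = 5

# Moore's-law-style decline: cost halves roughly every two years.
moore = start_cost * 0.5 ** (years / 2)

# Sequencing-style decline: ~5x cheaper per year (the conservative end of 5-10x).
sequencing = start_cost / 5 ** years

print(f"After {years} years:")
print(f"  Moore's-law pace: ${moore:,.0f}")
print(f"  Sequencing pace:  ${sequencing:,.0f}")
```

After five years the Moore's-law trajectory has cut the cost by a factor of about 6, while the sequencing trajectory has cut it by a factor of over 3,000, which is why the field crossed the $1000-genome threshold so quickly.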
While technological advances may provide some solutions, they can also come at a cost. Although increased accessibility to biological engineering may promote more testing and positive outcomes in the professional scientific community, it could also draw interest from less well-intentioned parties. The process of developing hazardous bacteria would not be particularly difficult for a terrorist group. Machinery used in automated assembly could be easily reverse engineered or purchased through illegitimate channels; in some cases, this is as simple as automated pipette and fluid-transfer robots. Not only are such robots easy to acquire, but publicly available code already exists that can be used to program them (Synthetic Biology, 2011). The secondary concern is one of information and data. Equipment for building new bacteria is only as useful as the genetic code sent to it, and it is this code that presents such a large security risk for the future. While the ability to engineer deadly biological weapons may remain out of reach for most of society, replicating existing code is simple if it becomes accessible. This is a system with no redundancy: if classified genetic code were released, it would be almost impossible to prevent the spread of that knowledge, as has been seen time and time again through “leak sites” like Wikileaks.org. Another problem presented in "Biology's Brave New World" is the potential for dangerous code to be hidden in innocuous places. If such code were unknowingly downloaded to a system with access to automated assembly machinery, the consequences could be devastating. The dangers of information security and the susceptibility of assembly machinery counter many of the advantages of biological engineering with matching disadvantages. It will be up to society and regulatory agencies to decide what rate of innovation in this fledgling field is worth the risk.

Works cited:

Garrett, Laurie. "Biology's Brave New World." Foreign Affairs. N.p., n.d. Web. 12 Apr. 2015. <http://www.foreignaffairs.com/articles/140156/laurie-garrett/biologys-brave-new-world>.
Raj, Ajai. "Soon, It Will Cost Less To Sequence A Genome Than To Flush A Toilet - And That Will Change Medicine Forever." Business Insider. Business Insider, Inc, 02 Oct. 2014. Web. 12 Apr. 2015. <http://www.businessinsider.com/super-cheap-genome-sequencing-by-2020-2014-10>.
"Main Page." Igem.org. N.p., n.d. Web. 12 Apr. 2015. <http://igem.org/Main_Page>.

Leguia, Mariana, Jennifer Brophy, Douglas Densmore, and J. Christopher Anderson. "Chapter 16." Synthetic Biology. San Diego, CA: Academic Press, 2011. N. pag. Print.