Tuesday, May 12, 2015

The controversy of military innovation

There are many arguments both for and against the continued sponsorship and development of advanced military technology. Before examining those arguments, though, it is worth briefly considering the purpose of the armed forces. The mission of the United States Department of Defense is, in abbreviated form, "to provide the military forces needed to deter war and to protect the security of our country" (http://www.defense.gov/). These are goals that I believe many Americans would support. Deterring war is a noble aim, as most reasonable individuals would not condone unneeded death and destruction. The same is true of protecting the security of our country: Americans want to safeguard their way of life, culture, and wellbeing. The Department of Defense, headquartered in the Pentagon, is also the source of many of the cutting-edge developments in military technology, making it the perfect case study for this analysis. Many opponents of continued military development (or of war in general) would claim, however, that these goals are flawed in their regulation and implementation. The United States has engaged in numerous wars over its short existence, and it seems doubtful that all of them would meet the strict criteria of deterring war or protecting the security of America. Still, the desire to keep the upper hand on the global stage is one of the most popular arguments supporting military innovation. Another important argument concerns the safety of our soldiers: the United States wants to do whatever it can to support the troops risking their lives on our behalf. While both of these points have merit, I cannot conclude that the current regulation of military spending and technology is adequate. The better approach to military development may lie somewhere between the two extremes.

Let us return to the first justification for weapons innovation: the desire to discourage war by "outgunning" or intimidating potential adversaries. This can certainly be effective, as the chances of success against an enemy that is substantially better armed are very low. No doubt some engagements have been avoided through this policy of non-violent "preventative measures." At the same time, as noted in the class text, this policy can escalate into an uncontrolled arms race between competitor nations. This was precisely the result of the nuclear weapons programs of the Cold War. Additionally, the temptation to use advanced weaponry grows with time, as its deterrent value decreases once the technology becomes outdated. Without substantial, continued funding, such a policy could quickly become ineffective. One political way this effect is avoided is through organizations such as NATO. Distributing the requirements of defense through alliances and treaties is a far more sustainable long-term solution, and the continued development of institutions like these could help reduce the need for deterrence through force in the future.

Protecting the troops that defend our nation is a necessity. Regardless of one's opinion on a given conflict, most citizens want to support the wellbeing of those who risk their lives for us. For this reason, suggestions to reduce military funding or innovation are often interpreted as having grave consequences for our troops. Having listened to the opinions of veterans in my daily life, I hear this concern repeated frequently. The worry stems from the belief that a reduction in funding would mean fewer resources for each soldier rather than fewer troops deployed to combat zones. A reduced flow of new equipment, support, and tactical intelligence would increase the likelihood of casualties for those in the field, so it is easy to see how these concerns are justified. The United States government and the higher levels of the military offer no guarantee of how cuts in funding and new technology would be handled, or of how the fallout could affect the safety of the troops. Furthermore, those in a position to make these decisions have an interest in continuing to secure funding, which makes an honest evaluation even more difficult. While a reduction in certain military technology development or funding may be justified, further research or agreements should be secured before making any radical changes.


Many of the topics touched on here hold particular interest for me and many of my peers studying engineering and science. The ethics and opinions surrounding this issue have direct consequences for us when choosing jobs and industries that may contribute to weapons development. While the potential to jump into a well-paying job in defense is tempting, it is a decision that may require greater personal reflection than other career choices. Unfortunately, the arguments in this debate are not simple, and neither extreme appears to hold a realistic answer. Important topics like this illustrate yet another reason why global awareness and an understanding of technological consequences are essential for the techno-scientists of the future.

Blogpost on leisure

This was a very difficult assignment to do at this point in the semester. Stress is hard to avoid this close to finals week, especially with the number of essays and tests packed into the last few weeks. So, to try to forget about homework and studying for an hour, I went and worked on a hobby project. I've found that working with my hands is one of the most effective ways for me to get absorbed in a task and forget about sources of stress. Unsurprisingly, though, it was hard to completely let go of all the other priorities. What started as a nagging in the back of my mind kept getting stronger as time went on. It wasn't until I took a moment to address the root causes of this nagging that I was able to better enjoy the free time.
One of the first realizations I made was that while most of the items on my to-do list were important academically, few of them had an effort limit. Tests are an excellent example. An ideal student would study test material until they feel confident in their abilities. What does confident mean, though? One could always feel more confident with more studying, but it isn't realistic to keep studying forever. Ultimately, the need to study has to be weighed against the importance of other needs. Unfortunately, this can be a difficult evaluation to make without any guidelines. The effect seemed similar to the criticism of Netflix's and the Virgin Group's so-called "unlimited vacation policies." Touched on in the chapter, these policies allow employees to take as much vacation time as they want as long as they continue to perform their duties effectively. Many experts claim, however, that this vagueness actually results in employees taking less vacation time than they did before. The effect of non-definite goals is an unbounded workload. Given the difficulty of self-assessing knowledge, many students (myself included) just study up until the test begins.
The second realization was that I had no good way to judge the value of my leisure time. I'm sure that to some degree this comes down to long-term goals. In the long term, I want to succeed in college so that I can pursue a career in a field I find interesting and have enough resources to spend my future leisure time on other activities I enjoy. How do I compare the value of long-term goals to short-term ones, though? I feel that many in our society become so ingrained in the "work hard now to be able to play hard in the future" mindset that they never actually make that final transition. Additionally, the ability to relax in the future is never guaranteed; many work hard their entire lives but still struggle financially. Surely some kind of compromise is best, in which one is able to enjoy some fulfillment throughout all parts of life? This compares well to the "treadmill effect" discussed in the text: society spends its days working toward goals that constantly change and require more effort, never actually gaining any satisfaction. There is undoubtedly some amount of social pressure involved as well. Nobody wants to fail relative to their peers, and that element of competitiveness contributes to the cycle. It would obviously be desirable to avoid this tendency. The benefits of leisure time can of course extend beyond simple enjoyment; it is practically common knowledge that lowered stress makes you more effective in other activities.

Returning to my efforts to complete the assignment: eventually I got tired enough of worrying that I stopped caring as much about the work I had to do and tried to just focus on the present. I don't know whether consciously lowering the importance of other academic tasks is a solution or just avoidance of the problem, but it was much more effective than trying to simply ignore the stress. I decided I would spend a certain amount of time working on the project I enjoyed (treating that time as a sunk cost) and then re-evaluate once it was done. Overall, I was impressed by how satisfying it was to work on something completely unrelated to school for a while. I felt fully engaged with the subject matter (something I can't always say about homework assignments or lecture), and the time passed much more rapidly than I expected. I still don't have a good way to quantify the benefits of leisure time. However, unlike many of the other tasks on my list, I can set limits on how long I choose to engage in it, which makes it easier to incorporate into everyday life.

Friday, May 8, 2015

Encouraging the Future of Assistive Technology

What does it mean to be human? This question is fundamental to the debate over the legitimacy and ethics of assistive technology and human enhancement. As innovations in nanotechnology and the biomedical and robotic fields continue to develop, substantial human enhancement (in some form) appears to be a real possibility. The class text subdivides these not-too-distant advancements into a number of categories based on the kind of problem they solve (or fail to solve) and who is most likely to benefit. These categories range from the elimination of devastating diseases all the way up to fundamental modifications of the mechanics (and definition) of the human body. For the sake of this post, I will focus primarily on non-permanent, non-medical enhancements. Advanced prostheses and assistive machinery have the potential to dramatically improve the abilities of the human body, whether the application is assisting amputees or simply enhancing the mobility of everyday users. Assistive technology and robotic enhancement development should be encouraged in the future, assuming that proper regulation can be approved and enforced.
The first and most obvious use of this technology is assisting injured or handicapped individuals. As mentioned in the text, many of the simple technologies we take for granted at present fall into this category. The glasses that allow me to focus on this computer screen enable me to accomplish much more as a student than I could unaided. On a much more serious level, robotics that restore mobility to amputees or paralyzed individuals could change the lives of people around the world. Of course, as with any benefit, there are a number of barriers that would need to be overcome before this technology could be widely adopted. One of the most common concerns is that of technology distribution, and it is a valid one: prosthetic limbs today can cost anywhere from $5,000 to $50,000 (1). As prosthetics become more advanced and capable, it stands to reason that this cost range will increase even further. Despite the inequality in technology distribution, however, there are very tangible overall benefits to innovation, even if it initially benefits only the wealthiest. For instance, the lithium-ion batteries powering the expensive electric wheelchair described in the text have (on average) almost halved in cost per kilowatt-hour in the last five years, from over 900 $/kWh to less than 500 $/kWh (2). This means that dramatically more energy-dense batteries could be enjoyed by those who were previously only able to afford a tiring manual wheelchair or a heavy lead-acid powered unit. The effect is bolstered by economies of scale and the growing knowledge base around designing with the new technology. Even though inequalities exist when a new technology is introduced, applying innovations to handicapped individuals benefits the handicapped community at large in the long run.
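
To get a feel for the pace implied by those battery figures, here is a minimal sketch of my own, using only the roughly 900 to 500 $/kWh change over five years cited above as an assumption, to estimate the average annual rate of decline:

```python
# Rough annualized decline implied by the cited lithium-ion cost figures.
# Assumption: roughly 900 $/kWh falling to roughly 500 $/kWh over five years.
start_cost = 900.0  # $/kWh, approximate starting value
end_cost = 500.0    # $/kWh, approximate value five years later
years = 5

annual_factor = (end_cost / start_cost) ** (1 / years)
annual_decline_pct = (1 - annual_factor) * 100
print(f"Implied average decline: {annual_decline_pct:.1f}% per year")
# Prints roughly 11% per year; at that pace the cost halves about every six years.
```

Even a rough rate like this suggests how a technology initially priced for the wealthiest can become broadly affordable within a decade.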

A secondary and more general use of human enhancement technology could be for entertainment, athletics, and everyday mobility. Mountaineering enthusiasts could travel farther and higher, gaining access to locations and experiences they otherwise could not reach. Workers who need to lift heavy loads or stand on their feet all day could gain a reprieve from physical pain. Even among the general public, I suspect most individuals would enjoy the ability to jump higher, walk faster, and so on. Who knows, perhaps even CO2 emissions could be lowered as the need for cars decreased. At the same time, there would doubtless be resistance to this movement. This innovation proposes tying our bodies and activities more closely to technology than perhaps ever before. At some point, concerns about how "human" society still is would begin to manifest. Those who do not wish to participate in the "technological evolution" could potentially face different treatment or opportunities than those who do. At the same time, however, I would argue that the use of assistive or enhancing technology represents a personal choice. We do not infringe on personal freedoms to get tattoos, body piercings, or non-medically approved RFID implants, despite the fact that many of these represent semi-permanent body modifications. Similarly, the definition of human is not necessarily chiseled in stone. Some of the characteristic qualities discussed in lecture included memory, culture, language, reason, questioning, measuring, representation, symbolic cognition, consciousness, empathy, appreciation of mortality, awe, beauty, and inspiration. I would go further and suggest that the practice of innovation and tool-making is a fundamental part of the human identity. Humans were not born with wings, and yet millions fly every day. This is certainly not the only barrier to wider adoption of this technology, however. Military usage, usage by terror groups, and other unanticipated exploitation could have serious consequences. It is for this reason that such a techno-future could only be possible with careful thought and the enforcement of appropriate regulations.

Works Cited:
(1)    "The Cost of a New Limb Can Add up Over a Lifetime." Hospital for Special Surgery. N.p., n.d. Web. 07 May 2015. <http://www.hss.edu/newsroom_prosthetic-leg-cost-over-lifetime.asp#.VUv0mflVhBc>.

(2)    "The EV Conundrum: Uncertain Resale Value Complicates Li-ion Battery Market." Navigant Research. N.p., 21 Jan. 2010. Web. 07 May 2015. <https://www.navigantresearch.com/blog/the-ev-conundrum-uncertain-resale-value-complicates-li-ion-battery-market>.

Monday, April 20, 2015

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 2)

When embarking on any research project, a technoscientist must evaluate whether the results of the study will be used in ways they deem ethically and politically responsible. This is a simple first step, but one that depends significantly on the opinions of the individual in question. While opinions of what is ethical vary, ensuring that every participant in the research is comfortable with the consequences is a great first step toward determining whether the study should continue. In many respects, this is similar to employment in any controversial industry. Often, though, this first step is not enough; repeated self-inquiry is the only way for individuals to ensure they can morally continue in their line of research. An excellent example comes from the famous theoretical physicist Richard Feynman. Feynman's genius was recognized from his youth, and almost directly after completing his Ph.D. he was recruited into the Manhattan Project. He worked at both the Los Alamos and Oak Ridge facilities and contributed to safety procedures that helped enable the development of the first nuclear bombs. Later in his life, Feynman expressed regret at not reconsidering his work after the defeat of Germany in World War II. While he maintained support for his initial participation, he found it difficult to justify his work past that point (atomicheritage.org). Feynman's experience perfectly illustrates the dangerous human tendency to question once and then accept the consequences beyond that point, even as the situation changes. Another lesson can also be drawn from Feynman's mistake: the value of looking to history for additional perspective on the present. There are countless examples of noble-minded scientists oblivious to the sometimes devastating consequences of their research. In some situations, it can be difficult to cease participation once it has begun. In these cases especially, it is essential to understand the loss of control one has over information once it is released. There is no way to enforce this personal caution among all scientists, but I believe that many would practice it (or already do) if more critical analysis were encouraged.

One of the final ways to encourage ethically and socio-politically responsible outcomes from science is to minimize misunderstandings. While the suggestion of improved communication could hardly be amiss in any field, the process of communication between technoscientists, society, and the media has far-reaching consequences. This point becomes even more pertinent given the history of poor communication among these three groups. In many cases, it may be difficult for scientists to translate highly technical findings in a way that is both useful and simple enough for the media to understand, and there are a multitude of pitfalls in the process. The first is the potential for a scientist to personally communicate poorly: an explanation of important work or results that is indecipherable or susceptible to misinterpretation is dangerous. Similarly, there is the danger of oversimplification; if a concept is abstracted too far, there remains no value in the media reporting it. The media also bears responsibility for miscommunication. As we discussed in lecture, media groups are typically profit-driven organizations. This can lead to an unhealthy focus on "fun" science at best, and sensationalism at worst. Moreover, time and time again, simple ignorance on the part of the media and society can lead to misinterpreted statistics and unfounded conclusions. To some degree, scientists need to become judges of what is most important for the media and society to know. This is a huge but necessary responsibility, and despite the reticence some may feel about participating in the process, scientists themselves are in the best position to perform this arbitration. I would argue that it also becomes the responsibility of the scientist to stay current with modern news and controversy; any individual in a position affecting so many others must consider their choices in a broader scope. This too is a challenging practice to enforce on its own. Perhaps a mandatory follow-up period could be required for certain research, in which scientists who produce innovations in certain fields would be required to serve on a regulatory committee and/or guide the directions in which the innovation is taken. Such a requirement could increase the degree of personal responsibility scientists feel for how their discoveries affect society.


Scientists discover and innovate in ways that can both enhance and diminish public well-being. This is not the only goal of science, however; some science is undertaken purely for the sake of knowledge, in the belief that knowing more about the universe is alone a worthwhile cause. Because of the enormous ways in which technoscientists affect society, it becomes their responsibility to consider the implications of their research as both professionals and members of the public. One important way this can be done is by being aware of the outside influences on any given experiment and the ways these can indirectly distort results through selective reporting. Another essential method for maintaining ethics in science is regular critical thinking and self-inquiry about the kind of work scientists perform. Finally, the curriculum and education of young technoscientists need to focus on communicating technical work to the public. Scientists are the ambassadors between the future and the present, and their expertise is needed not only for discovery but also for integrating these innovations responsibly.

Works cited:

"Richard Feynman." Atomic Heritage Foundation. N.p., n.d. Web. 20 Apr. 2015. <http://www.atomicheritage.org/profile/richard-feynman>.

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 1)

Technoscientists occupy an interesting position. Like engineers, many technoscientists pursue a technical field, using scientific knowledge to further their aims. Unlike engineering, however, the pursuit of science is often seen more as the pursuit of knowledge in a specific field than as an application of that knowledge. Learning more about the universe and furthering humanity's understanding of how the world works is one of the core motivations of science. While many engineers may find their personal actions constrained by management structures and corporate policy, some scientists retain more autonomy over what they study and how they choose to fund that research. Because they are the experts in their area of choice, their opinion and position hold significant sway over the direction in which progress in that subject will move. As a general assumption, I think most scientists pursue their work out of intellectual curiosity and without any direct malicious intent. It is precisely this focus on the "purity" of the field, however, that may make science and technoscientists vulnerable to the agendas of external forces. While remaining passive and refusing to take sides in times of controversy may continue to produce good science, the global implications of such a removed mindset can be devastating. The process of analyzing the ethical and socioeconomic implications of scientific research, and acting on that analysis, is not trivial, but it is an activity that must be continually undertaken and improved. As some of the primary producers of the innovations that affect everyday society, technoscientists must act as the first line of regulation for research that could hold unforeseen consequences. I will argue that there are a number of strategies to help achieve this goal, including recognizing the effects of outside influences on research, constantly re-evaluating the ethics and morality of participation in research projects, and minimizing misunderstandings between scientists, the media, and the public.

Recognizing the effects of outside parties on research is not only difficult, but also somewhat of an awkward topic. Modern scientific practice hinges on objectivity and a process that seeks to eliminate personal bias from influencing the purity of the research results. In certain fields this process is known as the scientific method, a simple sequence of steps for conducting good science. While the content and complexity of this method changes between different institutions and individuals, most can agree it covers the following (Rochester.edu): 

-Observation and description of a phenomenon
-Formulation of a hypothesis to explain the observed phenomenon
-Use of the hypothesis to predict other phenomena and/or the quantitative results of new observations
-Performance of a repeatable experiment to test the hypothesis


This step-by-step process is intended to leave little room for the opinions of the experimenter to influence the published results. This is by design, because in a perfect world a tested and failed hypothesis is just as valuable as a validated one. Despite this, outside influences can still exert subconscious effects on experiments. A 2010 article in The New Yorker investigated this very effect, terming it "selective reporting." The pressure to validate a hypothesis can be enormous when financial (corporate) interests or an individual's career is on the line. Even when the data do not support the hypothesis, many researchers continue analyzing them until some significant trend can be found. Rather than conducting a new experiment designed to isolate this effect, these results are instead published directly. A related effect was studied by the statistician Theodore Sterling in the 1950s, when he noticed that 97 percent of published psychological studies reported finding the effect they were looking for (Jonah Lehrer, 2010). While on their own these results may not appear dangerous, when poorly supported conclusions are drawn from privately funded research, the risk of dangerous products reaching the public increases. Rather than pretending that science occurs in a vacuum, it is important for technoscientists to remain vigilant against the selective-reporting effect.
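
To make the selective-reporting mechanism concrete, here is a minimal simulation of my own (not taken from the cited article): every simulated study measures pure noise with no true effect, yet if only the "significant-looking" results are written up, the published record still appears to show an effect.

```python
# Toy illustration of selective reporting / publication bias.
# Assumption: there is no true effect at all; every "finding" is noise.
import random
import statistics

random.seed(1)

def run_experiment(n=30):
    """One study comparing two groups drawn from the same population."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(0, 1) for _ in range(n)]  # no real difference
    return statistics.mean(treatment) - statistics.mean(control)

results = [run_experiment() for _ in range(1000)]

# Crude "significance" filter: only unusually large differences get written up.
published = [r for r in results if abs(r) > 0.5]

print(f"Average |effect| across all studies:   {statistics.mean(abs(r) for r in results):.3f}")
print(f"Average |effect| in published studies: {statistics.mean(abs(r) for r in published):.3f}")
print(f"Share of studies that get published:   {len(published) / len(results):.1%}")
```

The filtered subset overstates the typical effect several-fold even though nothing real is there, which mirrors the dynamic the article describes at the scale of real journals.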

Works cited:

"Introduction to the Scientific Method." Introduction to the Scientific Method. N.p., n.d. Web. 20 Apr. 2015. <http://teacher.nsrl.rochester.edu/phy_labs/appendixe/appendixe.html>.

Lehrer, Jonah. "The Truth Wears Off." The New Yorker. N.p., 13 Dec. 2010. Web. 20 Apr. 2015. <http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off>.

Sunday, April 12, 2015

The Benefits and Dangers of Synthetic Biology

A recent lecture and reading assignment introduced us to the rapidly expanding world of biological engineering. According to Biology's Brave New World, this is a field in which scientists are beginning a new era of rapid learning and progress. More concerning, however, is the lack of regulation and ethical study that has accompanied this wealth of technical innovation. Could this be another case of technology outstripping society's capacity to address it? This is a complicated subject, and much of the research being performed in this area is dual-use, meaning it can be exploited for a number of unintended purposes, both positive and negative. As biologists become engineers and access to genetic information grows, it will not be long before the general public has access to the tools and manufacturing services that allow new life to be designed. One concern is the development and threat of biological warfare agents by motivated groups and individuals. Even if the most dangerous data and results were kept classified, the problem of information security would then enter the scenario. The increasing ease of access to synthetic biology information and tools carries both advantages and disadvantages for society at large.
One advantage to this lower "barrier to entry" is the ability to perform rapid and inexpensive Intelligent Trial and Error (ITE). The cost of synthetic biological research has been plummeting in recent years. Every year, the cost of sequencing a genome drops by a factor of five to ten. This is well ahead of the rate predicted by Moore's law, and as of January 2014 the cost had dropped below $1,000 (Business Insider, 2014). Sequencing is not the only area in which costs have dropped, however. The synthetic biology competition iGEM (International Genetically Engineered Machine) has existed since 2003 and provides the resources and structure that allow high school and college students to design and grow their own genetically engineered life. The competition promotes the development of increasingly sophisticated bacteria, with complexity rising all the time. Impressively, it operates on an annual basis, proving that substantial design improvements and changes can be implemented on a short time frame. Speeding the process even further is the development of automated assembly processes, which could supplement or replace typical standard-assembly and parallel-assembly techniques. Fast turnaround is essential to the iteration process of ITE, and the ability of minimally funded student teams to produce work so quickly is strong evidence that professional teams could evolve designs even faster. Crucially, iGEM gives access to the Registry of Standard Biological Parts, a standardized source of common biological components that allows rapid (and relatively simple) development of completely new genetic recipes. In the case of iGEM, many of the components are assembled as "BioBricks," which can be used in designs and supplemented by software to increase the ease of engineering (igem.org). With the numerous standards and technologies in place, the increased speed of the bacteria assembly process, and the tremendous drop in the price of genetic research and components, intelligent trial and error can be performed faster and more consistently to help negate the unexpected consequences of rapid innovation.
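
A quick back-of-the-envelope comparison shows why a fivefold annual drop so easily outruns Moore's law. The starting price and horizon below are illustrative assumptions of mine, not cited figures; only the two rates come from the discussion above.

```python
# Back-of-the-envelope: Moore's-law halving vs. a 5x-per-year cost drop.
# The starting price is a hypothetical round number, not a cited figure.
start_price = 10_000.0  # $ per genome (illustrative starting point)

for year in range(5):
    moore = start_price * 0.5 ** (year / 2)   # halves every two years
    sequencing = start_price / 5 ** year      # drops 5x each year (low end of the cited range)
    print(f"year {year}: Moore ~${moore:,.0f}  vs  sequencing ~${sequencing:,.2f}")

# After four years the 5x-per-year curve is below $20 while the Moore's-law
# curve is still around $2,500 - hence "well ahead of Moore's law".
```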
While technological advances may provide some solutions, they can also come at a cost. Although increased accessibility to biological engineering may promote more testing and positive outcomes in the professional scientific community, it could also draw less well-intentioned interest from others. The process of developing hazardous bacteria would not be particularly difficult for a terrorist group. Machinery used in automated assembly could be easily reverse-engineered or purchased through illegitimate channels; in some cases, this is as simple as automated pipette and fluid-transfer robots. Not only are such robots easy to acquire, but publicly available code already exists that can be used to program them (Synthetic Biology, 2011). The secondary concern is one of information and data. Equipment for building new bacteria is only as useful as the genetic code sent to it, and it is this code that presents such a large security risk in the future. While the ability to engineer deadly biological weapons may remain out of reach for most of society, replicating existing code is simple once it becomes accessible. This is a system with no redundancy: if classified genetic code were released, it would be almost impossible to prevent the spread of that knowledge, as has been seen time and time again through "leak sites" like Wikileaks.org. Another problem presented by Biology's Brave New World is the potential for dangerous code to be hidden in innocuous places. If such code were unknowingly downloaded to a system with access to automated assembly machinery, the consequences could be devastating. The dangers of information security and the susceptibility of assembly machinery counter many of the advantages of biological engineering with matching disadvantages. It will be up to society and regulatory agencies to decide what rate of innovation in this fledgling field is worth the risk.

Cited Sources

"Biology's Brave New World."Foreign Affairs. 12 Apr. 2015. Web. 12 Apr. 2015. <http://www.foreignaffairs.com/articles/140156/laurie-garrett/biologys-brave-new-world>.
Raj, Ajai. "Soon, It Will Cost Less To Sequence A Genome Than To Flush A Toilet - And That Will Change Medicine Forever." Business Insider. Business Insider, Inc, 02 Oct. 2014. Web. 12 Apr. 2015. <http://www.businessinsider.com/super-cheap-genome-sequencing-by-2020-2014-10>.
"Main Page - Ung.igem.org." Main Page - Ung.igem.org. N.p., n.d. Web. 12 Apr. 2015. <http://igem.org/Main_Page>.

Leguia, Mariana, Jennifer Brophy, Douglas Densmore, and Christopher J. Anderson. "Chapter 16." Synthetic Biology. San Diego, CA: Academic, 2011. N. pag. Print.

Monday, March 30, 2015

The Obstacles Facing Internet-Based Democracy

The advent of the internet allows unprecedented communication and collaboration between people all over the world. As such, it only makes sense to use this incredible platform to improve a historically restricted arena: politics. Internet-based democracy promises to improve transparency in government, eliminate barriers between the people and their representation, and allow new voices and ideas to receive fair consideration. The system holds additional advantages. Web platforms can be constantly edited, improved, and modified to better meet the needs of users. Change can be made quickly, relatively inexpensively, and in direct response to the feedback of those who interact with it: all the necessary elements for Intelligent Trial and Error. Furthermore, the internet is already used by over two billion people in ways similar to this proposal. While legacy thinking might slow adoption of such a system, the public's inherent familiarity with its foundation puts it ahead of most radical new ideas. Yet, despite genuine hopes that such an internet-based democracy could someday exist, a number of specific obstacles remain to be overcome before immediate adoption could even be considered.
The first problem is something I'll term "the comment-section dilemma." While there are many different systems through which internet communication happens, one of the most ubiquitous is the comment section featured at the base of an article, video, blog, or product page. In smaller communities, this section can often foster intelligent, productive conversation that adds to the page's existing content or advances the ideas covered above. These communities frequently rely on self-regulation to keep conversations productive. When applied to much larger pages with heavier traffic, however, this system often breaks down: spam, joke posts, and hateful comments quickly crowd out the more productive ones, and the conversation leads nowhere. This problem is not exclusive to comment sections, either; large-scale forums and chat rooms regularly deal with these challenges as well. While some sites have been able to handle these problems to an extent, it often comes at a price. Some sites have recently eliminated comments (or selectively limit them based on how controversial the content is) or made them more difficult to access (through drop-down menus or by requiring registration). Others have relied on heavy censoring. While censoring is undoubtedly necessary in any potential internet democracy system, the magnitude and method of enforcement are extremely important questions. Automated censoring systems can deal with massive scale but struggle with intelligence: existing systems appear to handle little more than obvious spam or profanity, while identifying hateful or unproductive posts requires an actual understanding of the concepts being discussed. Furthermore, any system of censoring (automated or manual) will hold some degree of bias. Free speech is an essential component of democracy, and the use of censoring in such a forum is dangerous (and perhaps even unconstitutional). An official internet democracy site would need to handle these issues nearly flawlessly to gain public approval (especially given the tremendous size of its user base), an obstacle that we, both technologically and socially, have yet to overcome.
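
As an illustration of why automated moderation handles scale but not understanding, here is a toy keyword filter of the kind such systems rely on. The word lists and example comments are purely illustrative assumptions, not any real site's rules.

```python
# Toy keyword filter: the sort of rule an automated moderator can apply at scale.
# The word lists below are purely illustrative placeholders.
SPAM_PHRASES = {"buy now", "free money", "click here"}
PROFANITY = {"darn", "heck"}  # stand-in list

def allow_comment(text: str) -> bool:
    lowered = text.lower()
    if any(phrase in lowered for phrase in SPAM_PHRASES):
        return False
    if any(word in lowered.split() for word in PROFANITY):
        return False
    return True  # everything else passes, including subtle hate or derailment

print(allow_comment("Click here for free money!"))       # False - obvious spam is caught
print(allow_comment("People like you shouldn't vote."))  # True - hostile, but no keyword hit
```

The filter blocks the obvious spam instantly, but the second comment sails through, because nothing short of actually understanding the conversation would flag it.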
The second obstacle is another unintended consequence of scale: the problem of maintaining equality and organization. A site with a massive user base would generate far more content than any one human could read or comprehend. If perfect equality and equal attention were given to all posts, nobody would ever be heard. Furthermore, the benefit of transparency in this system begins to be lost if the content is hidden not behind closed doors but behind terabytes of other information (a much more intimidating problem). It quickly becomes clear that for any idea to be seen by enough people to gain support, some kind of ranking system must be developed. Aside from the ethical questions surrounding ranking users or ideas, there are technical challenges as well. Many sites, such as Reddit, use complicated algorithms to judge the merits of posts and users and to choose how many others will see them. Unfortunately, a perfect algorithm for identifying the best political discussion points does not exist. The disadvantage of an imperfect solution, besides not promoting the best content, is that it can be "gamed" by users attempting to reverse-engineer the algorithm. In other words, users can find ways of artificially increasing the ranking of their post that are not directly tied to its merit. The struggle with equality in internet democracy is broader than any specific implementation, though. The development or usage of any new technology represents a form of legislation that may be unequal. For instance, an internet-based system gives more political influence to those who can afford internet access and a computer; many of the poorest and most in need could become even less represented. Such a system also implies a degree of computer literacy, and there may be a number of elderly citizens who lack the skills to access and contribute to the system in the way the younger generation could. The inherent inequality and the difficulty of organizing an internet-based democracy prevent it from being put into action at the present time.
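
To show how a ranking formula can be gamed, here is a toy vote-and-time score, loosely in the spirit of the "hot" rankings such sites use. It is an illustrative formula of my own, not Reddit's actual algorithm.

```python
# Toy ranking score: popularity discounted by age.
# This is an illustrative formula, not any site's real algorithm.
import math

def hot_score(upvotes: int, downvotes: int, age_hours: float) -> float:
    net = upvotes - downvotes
    popularity = math.log10(max(abs(net), 1)) * (1 if net >= 0 else -1)
    return popularity - age_hours / 12  # older posts decay

# A thoughtful post with steady support over ten hours...
print(hot_score(upvotes=40, downvotes=5, age_hours=10))
# ...is outranked by a post whose backers pile on coordinated votes in its first hour.
print(hot_score(upvotes=60, downvotes=0, age_hours=1))
```

Because recency is weighted so heavily, a burst of coordinated early votes can beat broader but slower support, which is exactly the kind of merit-unrelated leverage described above.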

Sunday, March 15, 2015

Fighting Voter Apathy

Voter apathy in this country has been a growing problem for many years. Troubling not only among the younger generations, voter disinterest in politics and government can lead to a whole host of problems, including the tendency for technological somnambulism and legacy thinking to prevail. As students of STS, our goal is to steer the development of science and technology toward the most favorable social outcomes. A government that cannot adequately adapt and evolve to regulate or foster these new developments is likely to fail us in that goal. Therefore, it is in our best interest not only to consider the design of a better democracy, but also to examine ways we can motivate thoughtful participation that is representative of all citizens. From lecture and supplementary reading, we identified a number of possible, research-based reasons for voter apathy. These included "general disgust," remoteness, capture by the rich and powerful, and overall cynicism about the current system. I believe these are symptoms of two primary problems. The first is the corruption and inequality of the system due to a controlling elite; we have thoroughly identified the connections between large corporations and politics through channels of lobbying and "revolving doors" between industry insiders and political positions. The second is the challenge of the "small voice in the large crowd": the tendency for citizens to abstain from voting because they see their contribution as having very little value or pull among the huge number of election votes. While our lecture proposed a push toward more participatory or direct democracy to combat these effects, the merits of a sortition (or pseudo-random) democratic process appear uniquely suited to meet these needs.

Our current representative form of democracy is plagued by problems of transparency and industry lobbying. The vast majority of voters do not feel represented by their elected officials, and yet the process of removing and replacing ineffective politicians is lengthy and difficult. While, in theory, a purely representative democracy presents a highly efficient way to make decisions, many of its pitfalls appear in the logistics. The Iron Law of Oligarchy is perhaps one of the most succinct ways to describe this challenge. It states that regardless of how democratically an organization may start, it will inevitably fall prey to oligarchy, thus eliminating true democracy. This is especially pronounced in large bureaucracies, which produce hierarchies of individuals with different levels of power. Power has a tendency to corrupt, and when those in power are corrupted, it becomes extremely difficult to remove them from that position. Furthermore, those in positions of power in our current government are, by a large majority, white males, a distribution that is not representative of the population they are trying to serve. A sortition-based process eliminates much of this problem. Voting power is given randomly to a diverse group of individuals who are representative of the overall population. Seen as a responsibility or civic duty rather than a career, the position would require those holding it to put their full effort into developing and voting on good policies for the public. Since the position is only temporary and always changing, participants would not be distracted from solving issues by the temptations of campaigning or re-election to maintain their position.
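
A sortition draw is simple enough to sketch in a few lines. The voter roll and demographic labels below are hypothetical assumptions of mine, used only to show that a random panel mirrors the population by construction rather than by campaigning.

```python
# Toy sortition draw: pick a citizen panel at random from the voter roll.
# The roll and its group labels are hypothetical, for illustration only.
import random

random.seed(42)

# Hypothetical roll: (name, demographic group) pairs.
voter_roll = [(f"citizen_{i}", random.choice(["A", "B", "C", "D"])) for i in range(10_000)]

panel = random.sample(voter_roll, k=100)  # the draw itself: one line

# With a reasonably sized panel, group shares track the population automatically.
for group in "ABCD":
    share = sum(1 for _, g in panel if g == group) / len(panel)
    print(f"group {group}: {share:.0%} of panel")
```

A real draw might stratify by region or age to guarantee the match on a small panel, but even this plain random sample tracks the population far more closely than an elected body does.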

The disinterest of young voters has been well publicized, and many of us are familiar with the "Every Vote Counts" campaign, among others. Encouraging people to vote, however, does not address the underlying issue of why so many feel that their vote does not count. In 2012, approximately 1.29*10^8 people voted. Although the process is not quite so simple, in effect each vote contributed only 7.74*10^-7 percent of the decision. While not entirely logical or rational, there is a tendency to think that if a voter's opinion affects only a tiny fraction of the outcome, why would he or she put in more than an equally small fraction of effort? There is some validity to the effort argument. If everyone who voted put a week's worth of dedicated, objective research into their decision, we would probably make far better choices for the country. This is neither practical nor realistic, however: most individuals couldn't take a week off of work, and the short-term effects on the economy could be devastating. Basic economic theory tells us that when people specialize in one area of work, overall economy-wide efficiency improves, and a sortition process helps capture that efficiency. As described in one section of our reading, the coastal district of Zeguo in China has used a sortition-based process successfully for a number of years: "if the public think their voice actually matters, they'll do the hard work, really study their briefing books, ask the experts the smart questions, and then make tough decisions." When people know that their decisions make a big difference, perhaps we would get a small but diverse group of informed decisions rather than a huge pool of half-informed ones.

"Voter Turnout in the United States Presidential Elections." Wikipedia. Wikimedia Foundation, n.d. Web. 15 Mar. 2015. 
<http://en.wikipedia.org/wiki/Voter_turnout_in_the_United_States_presidential_elections>.

"Iron Law of Oligarchy." Wikipedia. Wikimedia Foundation, n.d. Web. 15 Mar. 2015. <http://en.wikipedia.org/wiki/Iron_law_of_oligarchy>.

Klein, Joe. "How Can a Democracy Solve Tough Problems?" Time. Time Inc., 02 Sept. 2010. Web. 12 Mar. 2015. <http://content.time.com/time/magazine/article/0,9171,2015790,00.html>.

Saturday, March 7, 2015

Innovation as a Basic Human Right

Both the textbook reading and the supplementary article this week demonstrated the failings and inefficiencies associated with using capitalism as a tool to guide markets toward the most socially responsible outcomes. While it is hard to argue with their premise, more interesting was the diversity of their proposed remedies. The supplemental reading strongly favored economic democracy, a system it defines as one in which public power is more directly accountable to those affected by it. This, too, took multiple forms, such as public banks and worker cooperatives. The textbook, while also exploring the concept of increased workplace democracy, noted some of the weaknesses in the ability of a purely democratic system to efficiently regulate business and development in the public interest. Several other options were presented in its place, many of which appear as logical extensions of the institution dedicated to the study and regulation of the market presented in chapter 7.

Perhaps one of the most novel solutions presented in the text is that of auctioning the right to innovate. As we have explored in the past, unintended consequences are all but impossible to avoid in full, regardless of how much planning and research is invested. When an unintended consequence does occur, the preferred strategy for addressing it is efficient and intelligent trial and error; quick identification of the problem and implementation of revisions to compensate are key to the iteration process. The downside of intelligent trial and error is the resources it requires, and this is what the concept of auctioning the right to innovate seeks to address. The basic proposal, as presented by the text, is to create a maximum quota of major innovations that can be pursued every year. The demand to own one or more of these annual rights dynamically sets their price, with corporations that believe more strongly in their innovations willing to pay more. The money raised through the sale of innovation rights then funds the analysis and monitoring of possible negative consequences created by the innovations. Additionally, using the rough relationship between the scale of an innovation and the likelihood of unintended consequences, monitoring institutions can more easily set priorities among development projects. While the merits of this proposal are substantial, certain concerns will naturally be raised. I believe these concerns tend to fall into one of three areas: questions of effectiveness, questions of implementation, and questions of ethics. Questions of effectiveness are understandable but difficult to answer without test studies or extensive research. Similarly, questions of implementation could only truly be addressed through the extensive investment of time and money into engineering the system (which is not the goal of this discussion). Questions of ethics, however, are more accessible at the present "concept stage" of the proposal.
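
To keep the mechanism itself concrete while turning to the ethics, here is a toy version of the quota-and-auction idea as I read it from the text. The quota size, the bidders, and the uniform-price rule are illustrative assumptions of mine, not details from the proposal.

```python
# Toy auction for a fixed annual quota of "innovation rights".
# Quota, bidders, and the pricing rule are illustrative assumptions.
QUOTA = 3  # major-innovation rights available this year

bids = {
    "MegaCorp":  5_000_000,
    "BioStart":  3_200_000,
    "AutoGiant": 2_750_000,
    "GadgetCo":  1_100_000,
}

ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
winners = ranked[:QUOTA]
clearing_price = winners[-1][1]  # lowest winning bid sets the price for all winners

monitoring_fund = clearing_price * QUOTA  # revenue earmarked for consequence monitoring
print("Rights awarded to:", [name for name, _ in winners])
print(f"Clearing price: ${clearing_price:,}  ->  monitoring fund: ${monitoring_fund:,}")
```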

First, it is important to clarify that this auction system is designed to target primarily large businesses. The text even emphasizes that certain exemptions could be enacted for small businesses unable to compete at the prices paid by larger organizations. Without delving too deeply into the "businesses as people" argument, I accept that many of these concerns may not fully apply at the scale at which the system would be implemented. At the same time, this proposal represents what I believe to be an unprecedented restriction on business. Historically, we can find examples of government using taxes, environmental regulations, and anti-monopoly laws to restrict the flow of innovation created by the market. This is understandable, as the activities restricted by these regulations have a direct, measurable effect on the rest of society. Trying to regulate the pursuit of an idea, however, is fundamentally different. Quite apart from the logistics of regulating ideas and degrees of pursuit, we must consider how the process of innovation is tied to humanity. Is innovation a basic human right? Perhaps not in the same sense that clean water and basic sanitation are (or should be), but it is hard to deny humanity's historical association with it. The development of basic tools and rudimentary construction is one of the achievements we consider fundamentally (though not exclusively) human, and it has helped us survive from ancient times until now (despite its clear potential to harm us). I think many would consider it at least some form of free expression, even if that right is not considered essential globally. Of course, as is the subject of much of this course, innovation can create consequences detrimental to others. Yet some of humanity's greatest inventions have been born of unlikely ideas and unexpected sources. While the point of this proposal is to eliminate or closely monitor the innovations least likely to benefit the world overall, is an outright ban on any activity or long-shot idea that cannot garner adequate funding a fair or responsible approach?

Saturday, February 28, 2015

When Intelligent Trial and Error isn't Enough

This week's lecture and reading material explored the concept of "intelligent trial and error." Intelligent trial and error (ITE) is a direct response to one of the earlier concepts covered in this class: unforeseen consequences. Whenever any technical or scientific endeavor is attempted (especially as its magnitude and complexity increase), there must be some expectation of unforeseen consequences. We live in a non-deterministic universe, and as such, even our best efforts and predictions cannot hope to account for all of the direct, secondary, and tertiary effects our actions will have in the future. With this knowledge, the objective becomes finding ways to minimize problems and prevent their escalation when they do appear. This aligns well with our overall goal for the course, and the steering of technology undoubtedly benefits from such an approach. ITE is one of the most fundamental procedures used to address these problems, yet it too has failings that limit its application and effectiveness.

At its most basic, ITE is a process humans complete in numerous small ways every day. We attempt to solve a problem, identify failings in the result, modify our procedure based on our findings, and attempt the same problem again. It is a classic closed-loop control system, and like any control system it can be improved and optimized by tuning simple factors like the amplitude and rate of change of the response. In the reading we examined the most obvious failure mode of this system: becoming an open loop, or losing the communication between the feedback and the actions, which effectively eliminates the possibility of any corrective action being taken. In the furniture factory case, employees and site inspectors repeatedly sent information about the problem back to OSHA, and yet the corrective measures were never acted upon, producing a gap. What about more subtle challenges, though, such as those complex enough to require more than one cycle of feedback? As humanity continues to push the boundaries of science and technology, we may encounter unforeseen consequences that require many, many cycles of feedback and correction before they can be adequately addressed. The iteration process draws on time and money to complete, two things that corporations, society, and governments often find in short supply. A favorite example that comes to mind is that of the Saturn V's Rocketdyne F-1 engines. During an extremely rushed and expensive development period, an unexpected combustion instability was discovered that, in most cases, escalated into violent engine destruction. It took numerous iterative tests over the course of two years to develop a precise baffle system that could damp the oscillations in such a large combustion chamber. In the F-1's case, the enormous budget from the US government made this number of test cycles possible. In other cases, however, this drawback of the ITE process could hinder or even halt the correction of unexpected problems.
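
Since ITE maps so directly onto a feedback loop, a minimal proportional-correction sketch (the target, gain, and starting values are illustrative assumptions) shows both points at once: each closed-loop cycle shrinks the error, but a hard problem with a small usable gain still needs many costly cycles, and an open loop never improves at all.

```python
# Toy closed-loop correction: each cycle measures the error and adjusts.
# Target, gain, and starting point are illustrative assumptions.
target = 100.0  # desired outcome (e.g. an emissions or defect level)
state = 180.0   # where the process starts
gain = 0.4      # how aggressively each cycle corrects (the "amplitude" of the response)

for cycle in range(1, 9):
    error = state - target
    state -= gain * error  # feedback applied; open the loop and nothing ever improves
    print(f"cycle {cycle}: state = {state:.1f}")

# The error shrinks geometrically, but a hard problem (small usable gain,
# noisy measurements) can demand many cycles - each costing time and money.
```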

Even if the time and resources for a fully ITE-ready process can be supplied, other challenges remain. Some of these were covered in class, and the pettier among them include human failings such as unwillingness to act due to legacy thinking, reputation, pride, inertia, stupidity, or downright stubbornness. As irrational as these may be, they can derail an otherwise solid ITE plan. Other causes tying back to the cost of ITE exist too, like sunk costs and political expedience. Presuming all of these challenges can be overcome, however, we still must consider the less trivial obstacle of how effectively problems and corrections can be identified. Trial and error, for the most part, does not provide guidance toward the proper type of correction or the change in procedure that must be made. In some cases the direction or type of correction is not clear, and a pure trial-and-error approach (even one taken with "intelligence") comes down to hunting in the dark. To complicate matters further, most real-world systems involve changing multiple variables simultaneously. Changing a single factor in a polluting manufacturing process, for example, may appear to slightly improve efficiency, and yet changing that factor further could produce very little benefit without the additional reduction of a different factor. The time needed to identify solutions is magnified enormously, which emphasizes the need for a structured, documented procedure for identifying root problems. Item number 4 in the Intelligent Trial and Error table in the textbook comes closest to exploring this issue with its requirement of "Active Preparation for Learning From Experience," but even this focuses on the need for error identification rather than the steps for doing it effectively.



References:
"SP-4206 Stages to Saturn." Nasa History Program Office. NASA, n.d. Web. 01 Mar. 2015. <http://history.nasa.gov/SP-4206/ch4.htm>
Harford, James J. Korolev: How One Man Masterminded the Soviet Drive to Beat America to the Moon. New York: Wiley, 1997. Print.

Saturday, February 21, 2015

The Treadmill of Progress

In what ways does technology outpace society? The text and lecture investigate a number of possibilities, stretching from the far-reaching implications of new ethical issues to the simple gut reactions of a society that feels overwhelmed by the new gadgets of the week. Over 50% of the participants in the survey we studied in lecture feel technology is moving too quickly, yet they appear to resign themselves to the inevitable grind of progress. Is this the "treadmill" effect we are seeing: a society constantly running to keep up with the very progress its own actions inspire?

This feels intuitively difficult to believe. Nature and society tend to find steady-state solutions to imbalances. A treadmill effect, with people constantly chasing the new technology they themselves are creating, is unstable: sooner or later the rate of innovation would exceed our ability to follow, or we would have to give up our attempts to do so. This manifests itself economically as well. If society truly disliked the introduction of technology, it would result in less adoption or support of new innovation; it would quickly become unprofitable to produce such innovations, which would reduce the incentive to continue the process. What if this natural pace regulation were hampered by some other effect? Perhaps there are many individuals unhappy with the rate of scientific and technological development, but the seemingly universal stigma attached to slowing progress prevents them from speaking out and communicating with one another. Certainly there are a number of smaller communities of people who publicly disapprove of the state and pace of technology, and perhaps if society were more universally aware of one another's opinions, a truer picture of the masses could be developed. This too, however, seems suspect. The survey we reviewed indicates that no small proportion of the population feels this way; if there were truly strong opposition, it would be impossible for such sentiment to remain unnoticed. The corporate world directly reflects the interests and desires of consumers. Huge amounts of money are invested annually to find the new products and developments that will be most successful. Society produces exactly the demand for innovations that it supports, and this feedback controls the true rate of research and development. As the saying goes: "the customer is always right!"

So why does a population whose actions support innovation simultaneously verbalize discomfort with it? No one would argue that the introduction of new technology does not require society to adapt. Flexibility is mandatory in the debut and integration of any new development, and humanity has repeatedly shown its capacity to adapt to a changing world. This process, however, can take a lot of effort. Learning to deal with the new challenges and obligations of technology is not trivial. At the less serious end of the scale, who among us has not felt confusion or frustration when faced with a new operating system or a complex new device for the first time? Humans often crave normality, the ability to use our existing knowledge without the risk of unforeseen consequences or the challenge of new obstacles. If technology is often a result of legacy thinking, cannot the opposite also be true, with legacy thinking shaping our resistance to technology? Other effects of the introduction of new science and technology are more serious. Ethical issues, like the development of cloning, place a great burden on society. We are forced to analyze our own long-term assumptions, reexamine the reasoning behind our beliefs, and push ourselves to reach consensus on issues that we could previously ignore. It is hard to definitively argue whether addressing these issues is to the benefit or detriment of society, but if nothing else, it allows us to make more informed decisions in the future.

The trial-and-error approach was another concept explored in the text, one that examines the underlying tendencies behind how humans learn and adapt. It is simple in theory: humans learn through mistakes, allowing them to make better decisions when presented with the same choice again. It is also a valid argument against the speed of development, for how can we iterate through the process of mistakes and corrections if we move too fast to respond? Rather than evidence of society's lack of control over the pace of innovation, though, trial and error is much more a reflection of society's inexperience in analyzing new technologies. For instance, the book's example of nuclear reactors being developed faster than waste disposal methods or operating procedures clearly represents a failure to wait long enough for signs of error before entering production. While the imperfect decisions of humans may never be completely eliminated, continued experience with the trial and error process will hopefully yield a pace of innovation that minimizes these unforeseen consequences.

Friday, February 13, 2015

2/10 Lecture Thoughts

Our area of study this week is unfairness and social justice. This, in my mind, is an essential issue to address before we can make progress toward our overall goal of finding ways to benefit humanity through the steering of technology. Humanity is a big category that (obviously) includes all human beings. Some parts of that goal remain unclear, though. Does humanity include those yet to be born, our future generations? Or (more in line with our reading this week), does this mean merely helping all humans equally, or helping those born less privileged in order to put everyone on "equal" footing? The question of fairness is one that we all see through our own ideology, and thus it is impossible to define with a single correct answer. We live in a society that answers questions of fairness in the most impartial way we can: our judicial system. The beginning of the lecture did a good job of introducing the topic with the James Baldwin quote: "ignorance, allied with power, is the most ferocious enemy justice can have". Justice systems are imperfect, and despite our attempts to preserve objectivity throughout the process, some degree of bias still affects the outcome. The lecture also provided insight into how we can attempt to improve the justice system: when trying to determine the success of the law, we need to ask those who need its protection most whether it is working.

There are other questions, however, that cannot be answered in national or local courts. In my mind, consideration of the fundamental rights to essentials is the responsibility of every individual human; it is not a question that can be settled by a single ruling or a small jury. Even the attempts of large organizations, such as the United Nations, have failed to produce an answer compelling enough to inspire global compliance. Our supplemental reading provided a small glimpse of the conditions endured in the poorer communities of India, where the struggle is for regular access to clean water and sanitation. This is, no doubt, not unique to India, nor even the worst set of conditions humans endure daily on a global scale. Most would probably consider those two resources fundamental, and yet government efforts to address the problem are minimally effective. Even if a consensus could be reached, how could humanity manage the logistics of the problem we are facing? The book mentions the suggestion of a global tax that seeks to better balance the allocation of resources toward the fundamentals. This is an interesting idea in theory, but I imagine that in reality it would face some serious obstacles. In my opinion, the challenge of distributing resources is one that needs to be addressed locally and individually. As we have read, we can't hope to understand the conditions and factors affecting every poor community, and an ignorant approach to addressing fundamental resources could produce more harm than good.

Distributing resources brings us back to the bigger question of fairness. As many of us are taught in childhood: "the world isn't a fair place". In lecture we explored the relation between access to technological benefits and privilege. Privilege can come in the form of class, race, gender, sexual preference, and an extensive range of other attributes. A key point that we touched on, however, is that most of these are granted completely at random. Furthermore, this random "life lottery" almost directly dictates access to technology (which is a strong indicator of wealth). It is important to note that in this context, technology can be as simple as clean water and sanitation or as complex as access to the latest computing systems. The Ability To Pay (ATP) method of technology distribution is what prevails in most of the world today. ATP has clear links to capitalism and simple economic theory, and tends to be the natural response of an economy in the absence of special programs. At the same time, ATP privileges those who need help least and restricts access for those who need it most. Undoubtedly, superior systems exist, and a number of these were detailed in our reading and lecture. Unfortunately, many of them also rely on the ability to quantify and distinguish levels of need and wealth. This is a challenge in its own right, as is producing a system that can identify the least privileged without the influence of corruption or personal bias. From a technocratic perspective, perhaps this could be a place to steer the focus of future technology and scientific research. The power of computational data analysis, often disliked for its intrusions into personal privacy, might be put to good use helping us produce a better model for mapping the communities where resources and technology are most needed.
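As a purely hypothetical sketch of what such a need-mapping model could look like, the indicator names, weights, and figures below are all invented; the idea is simply to combine a few deprivation measures into a composite score and rank where resources might be directed first.

```python
# Hypothetical indicators, weights, and figures, invented purely for illustration.
WEIGHTS = {"no_clean_water_pct": 0.5, "no_sanitation_pct": 0.3, "no_internet_pct": 0.2}

communities = {
    "Community A": {"no_clean_water_pct": 40, "no_sanitation_pct": 55, "no_internet_pct": 90},
    "Community B": {"no_clean_water_pct": 5,  "no_sanitation_pct": 10, "no_internet_pct": 60},
    "Community C": {"no_clean_water_pct": 25, "no_sanitation_pct": 30, "no_internet_pct": 95},
}

def need_score(indicators):
    # Weighted average of deprivation indicators; higher means greater need.
    return sum(WEIGHTS[key] * value for key, value in indicators.items())

for name, indicators in sorted(communities.items(), key=lambda kv: -need_score(kv[1])):
    print(f"{name}: need score {need_score(indicators):.1f}")
```

A real model would of course need far better data and safeguards against the very biases discussed above, but even a crude composite like this makes the allocation conversation more transparent than ability to pay alone.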


Saturday, February 7, 2015

Lecture 3 thoughts

Lecture 3 explored the concept of unintended consequences in depth. At its most basic, the notion of unintended consequences is relatively simple. All actions have consequences, no matter how small. Those consequences (whether negative or positive) may present themselves in unexpected ways, and can propagate on to create further ripples: consequences of consequences. Given that our goal in this class is to find ways to better utilize science and technology for the well-being of humanity, it makes sense that we want to steer technical resources in a way that minimizes negative outcomes. Specifically, I noted that both the frequency and severity of negative consequences are metrics that we seek to minimize. It is helpful to define bad events in this fashion, because together these two factors do a good job of covering the spectrum of unforeseen disasters. For instance, nuclear power disasters occur extremely infrequently, yet the results of an incident are devastating. Small industrial chemical spills are typically containable and addressable, but incidents and violations of waste dumping laws are very numerous. If we truly want to develop a plan to minimize unforeseen consequences, we need to put safeguards in place to reduce both the magnitude of a potential disaster and the statistical likelihood of it occurring in the first place.
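As a toy illustration of the frequency-and-severity framing (my own invented numbers, with "harm" in arbitrary units), two very different hazard profiles can carry comparable expected harm per year, which is exactly why a mitigation plan has to attack both the likelihood and the magnitude.

```python
# Invented numbers for illustration only; "harm" is in arbitrary units.
hazards = {
    "rare but catastrophic": {"events_per_year": 0.01, "harm_per_event": 5000},
    "frequent but small":    {"events_per_year": 50.0, "harm_per_event": 1},
}

for name, h in hazards.items():
    expected_harm = h["events_per_year"] * h["harm_per_event"]  # frequency x severity
    print(f"{name}: expected harm per year = {expected_harm}")
```

Both profiles come out to the same expected harm, yet focusing on only one of the two factors would leave half of the picture unaddressed.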

The second part of the lecture that I found particularly interesting surrounded the notion of "normalization" of accidents. In many fields we can both empirically and mathematically show (to a high degree of accuracy) that accidents will occur with some given frequency. While the specific modes of failure may be unknown or complex to calculate, basic statistics can take a macro view of historical events and condense it into a close estimate. Normalization asks: if we know that an event will happen with a certain frequency (even if we don't know how or precisely when), can we really call it an "accident" when it actually occurs? I would go further and ask whether we should choose to do such a thing. In the case of school shootings, like in our supplementary reading material, normalization appears to make horrific acts of violence commonplace by training students to expect them. On the other hand, we know earthquakes and natural disasters will happen regularly too, and we still feel justified in calling those "accidents". The key seems to lie with our degree of involvement in the accident. We can't cause natural disasters (at least not on a short time scale), and neither can we ever hope to eliminate all incidents and industrial accidents, no matter how carefully we try. I think that the term "accident" is still accurate when the timing of the event cannot be known with any precision, because it still entails an element of surprise. At the same time, however, society needs to stop associating the term "accident" with freedom from liability or responsibility for the consequences. Just because an accident occurs does not mean we don't have to deal with the fallout, and we need to be vigilant in identifying risks that are not worth the benefit.
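To sketch what that "macro view" might look like in practice, here is a minimal example assuming a simple Poisson model and a made-up historical count; it turns a past accident rate into a rough forward-looking probability without knowing anything about the specific failure mode or its timing.

```python
import math

events_last_decade = 3            # hypothetical count pulled from historical records
rate_per_year = events_last_decade / 10.0

# Assuming independent, randomly timed events (a Poisson model),
# the chance of at least one incident in a given year is 1 - e^(-rate).
p_at_least_one = 1 - math.exp(-rate_per_year)
print(f"Estimated chance of at least one incident next year: {p_at_least_one:.1%}")
```

This is exactly the tension normalization highlights: the estimate tells us an incident is statistically expected, even while the individual event still arrives as a surprise.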

Thursday, February 5, 2015

Pinto Madness Response

I found our latest reading assignment, "Pinto Madness", to be a very interesting article. While I have heard the story of the infamous Ford Pinto's design flaw a few times before (mostly through secondhand recounting), I appreciated the opportunity to read such a comprehensive, research-driven article. At the same time, however, I found myself discouraged by the aggressive and accusatory tone of the writing. I understand that this piece was intended to open the public's eyes to the priorities of the auto industry in America and its influence in our government, but I felt that the author's personal outrage toward the Ford Motor Company reduced the effectiveness of his arguments.

An example of this is his continual focus on the fact that Ford put a dollar value on a human life. I can count at least three places in the text where this is quoted directly, and many more where it is indirectly referenced. Economic analysis requires putting a value on a human life from the analyst's frame of reference. It is not (and could never be) a true indication of the actual value of an individual's life, but it is a necessarily crude approximation that makes it possible to quantify the idea of safety. While many might understandably argue that no value is high enough to equal a human life, mathematics and the habits of society do not support this. If we picture "safety" as a function of dollars spent that asymptotically approaches 100%, it quickly becomes impractical to keep investing huge sums of money for a tiny marginal benefit in safety. I argue this not to defend Ford, which is clearly responsible for a great deal, or to claim that it had reached the point of diminishing returns. Instead, I want to make the case that the author is trying to use this fact to appeal to readers' emotional sensibilities and inspire anger rather than a rational response. Another way in which the author tries to inspire outrage is through his follow-up on the activities of senior management officials at Ford. A strong argument with solid facts, like this article, doesn't need to try to inspire resentment toward those responsible; that will happen on its own. Resorting to an examination of Henry Ford II and Lee Iacocca's futures after Ford seems petty and off topic when the focus should be kept on the circumstances that allowed these accidents to happen.
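To illustrate the diminishing-returns point, here is a minimal sketch assuming a made-up asymptotic safety curve; the curve and every number in it are illustrative, not Ford's figures. Each additional $10 million buys less and less safety as total spending grows.

```python
import math

K = 20.0  # made-up scale: millions of dollars at which ~63% of achievable safety is reached

def safety(spend_millions):
    # Illustrative asymptotic curve: approaches 100% safety but never reaches it.
    return 1 - math.exp(-spend_millions / K)

for already_spent in (0, 20, 40, 80, 160):
    gain = safety(already_spent + 10) - safety(already_spent)
    print(f"after ${already_spent}M spent, the next $10M buys {gain:.1%} more safety")
```

Under this toy curve the first $10 million buys roughly forty percentage points of safety while the same sum spent after $160 million buys almost nothing, which is the shape of the argument, not a defense of where any particular company chose to stop.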

Saturday, January 31, 2015

Response to first blog prompt

I would argue that, in fact, military robots are a clear product of the enlightenment school of thought. I'll begin with my interpretation of the two beliefs. The technocratic notion is the simple belief that improved technology, science, and engineering are, by themselves, goals worth pursuing for their own sake. Technocratic progress sees discovery and invention as achievements in their own right, and assumes that the benefits of these advancements will naturally filter down through society to improve the human race. In my opinion, truly technocratic work almost needs to happen in a vacuum, isolated from corporate, political, and other influences. The enlightenment viewpoint, in contrast, sees careful, calculated advances in technical fields as a means of benefiting society. The enlightenment notion requires a clear goal to be defined in advance of scientific or technological developments, so that they can be used in a way that minimizes unforeseen consequences and maximizes the chances of achieving the goal (hopefully, one beneficial to humanity). Unfortunately, the decision of what most benefits humanity is made by humans, and therefore cannot be perfect.
This is where I see the first indications of the enlightenment notion in the development of militarized robotics. These robots were created with a specific purpose in mind. From the viewpoints of their creators and purchasers, these robots are most likely seen to serve the good of the public. They eliminate threats without danger to the attacker, and establish dominance over individuals seen as enemies to the attacking party's beliefs, nation, or cause. In essence, these robots were the calculated product of a predefined goal. Furthermore, the technology to create these advanced robots was not generated through a singular scientific breakthrough or feat of engineering. The individual pieces of technology needed to create military robots have existed for some time, yet it only recently became cost effective to research and produce such machines. In other words, they did not simply spring into existence, nor were they the product of a purely scientific or engineering endeavor. The decision to produce military robots was one more of business and economics than of technical feasibility.
It is also important to consider the ways in which military robots are used, and the various types of machines in use. The most infamous and ubiquitous example has to be the Predator drone. With exceptional range and a medium-high operating altitude, this flying robot often acts as a symbol of unmanned warfare. It is frequently used in both observational and offensive roles (interestingly, the Predator also contributes to society in a number of civilian applications). Yet the Predator does not make life-or-death decisions on its own. While onboard computers and avionics may guide its flight plan, control surfaces, and cameras, it is a human that remotely oversees all of these activities and has their hand on the virtual trigger. This comes back to the old phrase "it is not guns that kill people, it is people that kill people". The technocrat sees people killing people merely with a different gun. The enlightenment follower sees the gun killing people, and depending on their views of the rightfulness of the shooting, sees this as either positive or negative progress. This is a point that will certainly require more consideration if and when robots are given the power to make these crucial decisions on their own.
Defining these robots in the broad interpretation of “progress” used by both technocratic and enlightenment notions is difficult, because reality tends to fall between the clear-cut boundaries of the extremes. As with any field, robotics is a combination of the unintentional discoveries of science and the careful application of corporate or governmental research. Additionally, the opinion of one with an enlightened point of view is just that: an opinion. The very definition of enlightenment requires some judgment of the correct and incorrect courses of action, and the judgment of whether militarized robots benefit humanity is one that does not have a clear answer (although many may have strong opinions). While a pure technocrat might believe that all technology represents some form of progress, the development of combat robots would likely not historically be seen as a great advance technically (although it represents a huge development in the application of robots). The piece in question certainly seems to favor the enlightenment view, although it spends more time describing the technocratic era of history. Its focus tends to follow the path of history, from early enlightenment thinking to technocratic industrializing America to a time slightly before today (where the opinion of the public may not be quite so clear). Yet, it strongly emphasizes the contrasts between the two with a particular focus on the shortsightedness of the technocratic point of view. It is my opinion that the author (who I would assume would hold the enlightened view) would most likely reject the notion that robot combat is beneficial to society, and not see it as a form of progress.