Tuesday, May 12, 2015

The controversy of military innovation

There are many arguments both for and against the continued sponsorship and development of advanced military technology. Before examining those arguments, though, it is worth briefly considering the purpose of the armed forces. The mission of the United States Department of Defense is (in abbreviated form) "to provide the military forces needed to deter war and to protect the security of our country" (http://www.defense.gov/). These are goals that I believe many Americans would support. Deterring war is a noble aim, as most reasonable individuals would not condone unneeded death and destruction. Similar is the goal of protecting the security of our country: Americans want to safeguard their ways of life, culture, and wellbeing. The Department of Defense, headquartered in the Pentagon, is also the source of many cutting-edge developments in military technology, making it the perfect case study for this analysis. Many opponents of continued military development (or of war in general) would claim, however, that these goals are flawed in their regulation and implementation. The United States has engaged in numerous wars over its short existence, and it seems doubtful that all of them would meet the strict criteria of deterring war or protecting the security of America. The desire to keep an upper hand on the global stage is one of the most popular arguments supporting military innovation. Another important argument concerns the safety of our soldiers: the United States wants to do whatever it can to support the troops risking their lives on our behalf. While both these points have merit, I cannot conclude that the current regulation of military spending and technology is adequate. The better solution to military development may lie somewhere between the two extremes.

First, we return to the initial justification for weapons innovation: the desire to discourage war by "outgunning" or intimidating potential adversaries. This can certainly be effective, as the chances of success against an enemy that is substantially better armed are very low. No doubt some engagements have been avoided through this policy of non-violent "preventative measures." At the same time, as noted in the class text, this policy can escalate into an uncontrolled arms race between competitor nations; this was precisely the result of the nuclear weapons programs of the Cold War. Additionally, the temptation to use advanced weaponry grows over time, as its deterrent value decreases with age. Without substantial, continued funding, such a policy could quickly become ineffective. One political way this effect is avoided is through organizations such as NATO. Distributing the requirements of defense through alliances and treaties is a far more sustainable long-term solution, and the continued development of institutions like these could reduce the need for deterrence through force in the future.

Protecting the troops that defend our nation is a necessity. Regardless of one's opinion on a given conflict or the specifics of the situation, most citizens want to support the wellbeing of those who risk their lives for us. For these reasons, a suggested reduction in military funding or innovation is often interpreted as carrying grave consequences for our troops. Having listened to the opinions of veterans in my daily life, I hear this concern repeated frequently. The worry stems from the belief that a reduction in funding will reduce the resources going to each soldier, rather than reduce the total number of troops deployed in combat zones. A diminished flow of new equipment, support, and tactical intelligence would increase the likelihood of casualties for those in the field, so it is easy to see how these concerns are justified. There is no guarantee from the United States government or the higher levels of military organization about how cuts in funding and new technology would be handled, or how the fallout could affect the safety of the troops. Furthermore, those in a position to make these decisions have an interest in continuing to secure funding, making an honest evaluation even more difficult. While a reduction in the development of certain military technologies or in funding may be justified, further research or agreements should be secured before any radical changes are made.


Many of the topics touched on here hold further interest for me and many of my peers studying engineering and science. The ethics and opinions surrounding this issue have direct consequences for us when choosing jobs and industries that may contribute to weapons development. While the potential to jump into a well-paying defense job is tempting, it is a decision that may require greater personal reflection than other career choices. Unfortunately, the arguments in this debate are not simple, and neither extreme appears to hold a realistic answer. Important topics like this illustrate yet another reason why global awareness and an understanding of technological consequences are essential for the techno-scientists of the future.

Blogpost on leisure

This was a very difficult assignment to do at this point in the semester. Stress is hard to avoid this close to finals week, especially with the number of essays and tests being thrown into the last few weeks. So, to try to forget about homework and studying for an hour, I went and worked on a hobby project. I've found that working with my hands is one of the most effective ways for me to get absorbed in a task and forget about sources of stress. Unsurprisingly, though, it was hard to completely let go of all my other priorities. What started as a nagging in the back of my mind kept getting stronger as time went on. It wasn't until I took a moment to address the root causes of this nagging that I was able to better enjoy the free time.
One of my first realizations was that while most of the items on my to-do list were important academically, few of them had an effort limit. Tests are an excellent example. An ideal student would study test material until they feel confident in their abilities. But what does confident mean? One could always feel more confident with more studying, yet it isn't realistic to keep studying forever. Ultimately, the need to study has to be weighed against the importance of other needs. Unfortunately, this can be a difficult evaluation to make without any guidelines. This effect seemed similar to the criticism of Netflix's and the Virgin Group's so-called "unlimited vacation policies." Touched on in the chapter, these policies allow employees to take as much vacation time as they want as long as they continue to perform their duties effectively. Many experts claim, however, that this vagueness actually results in employees taking less vacation time than they did under conventional policies. The effect of indefinite goals is an unbounded workload. Given the difficulty of self-assessing knowledge, many students (myself included) just study up until the test begins.
The second realization was that I had no good way to judge the value of my leisure time. I'm sure that to some degree this comes down to long-term goals. In the long term, I want to succeed in college so that I can pursue a career in a field I find interesting, and have enough resources to spend my future leisure time on other activities I enjoy. But how do I compare the value of long-term goals to short-term ones? I feel that many in our society become so ingrained in the "work hard now to play hard in the future" mindset that they never actually make that final transition. Additionally, the ability to relax in the future is never guaranteed; many work hard their entire lives but still struggle financially. Surely some kind of compromise is best, where one is able to enjoy some fulfillment throughout all parts of life? This compares well to the "treadmill effect" discussed in the text: society spends its days working toward goals that constantly change and require more effort, never actually gaining any satisfaction. There is undoubtedly some amount of social pressure involved as well. Nobody wants to fail relative to their peers, and that element of competitiveness feeds the cycle. It would obviously be desirable to avoid this tendency. The benefits of leisure time can of course extend beyond simple enjoyment; it is practically common knowledge that lowered stress makes you more effective in other activities.

Returning to my efforts to complete the assignment: eventually I got tired enough of worrying that I stopped caring as much about the work I had to do and tried to just focus on the present. I don't know whether consciously lowering the importance of other academic tasks is a solution or just an avoidance of the problem, but it was much more effective than trying to simply ignore the stress. I decided I would spend a certain amount of time working on the project I enjoyed (treating that time as a sunk cost), and then re-evaluate once it was done. Overall, I was impressed by how satisfying it was to work on something completely unrelated to school for a while. I felt fully engaged with the subject matter (something I can't always say about homework assignments or lecture), and the time passed much more quickly than I expected. I still don't have a good way to quantify the benefits of leisure time. However, unlike many of the other tasks on my list, I can set limits on how long I engage in it, which makes it easier to incorporate into everyday life.

Friday, May 8, 2015

Encouraging the Future of Assistive Technology

What does it mean to be human? This question is fundamental to the debate over the legitimacy and ethics of assistive technology and human enhancement. As innovations in the nanotechnology, biomedical, and robotics fields continue to develop, substantial human enhancement (in some form) appears to be a real possibility. The class text subdivides these not-too-distant advancements into a number of categories based on the kind of problem they solve (or fail to solve) and who is most likely to benefit. These categories range from the elimination of devastating diseases all the way up to fundamental modifications of the mechanics (and definition) of the human body. For the sake of this post, I will focus primarily on non-permanent, non-medical enhancements. Advanced prostheses and assistive machinery have the potential to dramatically improve the abilities of the human body, whether by assisting amputees or simply enhancing the mobility of everyday users. Assistive technology and robotic enhancement development should be encouraged in the future, assuming that proper regulation can be approved and enforced.
The first and primary use of this technology would obviously be assisting injured or handicapped individuals. As mentioned in the text, many of the simple technologies we take for granted at present could fall into this category. The glasses that allow me to focus on this computer screen enable me to accomplish much more as a student than I could unaided. On a much more serious level, robotics that restore mobility to amputees or paralyzed individuals could change the lives of a number of people around the world. Of course, a number of barriers would need to be overcome before this technology could be widely adopted. One of the most common concerns is technology distribution, and it is a valid one: prosthetic limbs today can cost anywhere from $5,000 to $50,000 (1). As prosthetics become more advanced and capable, it stands to reason that this cost range will increase even further. Despite the inequality in technology distribution, however, innovations yield very tangible overall benefits even if they initially reach only the wealthiest. For instance, the lithium-ion batteries powering the expensive electric wheelchair described in the text have (on average) almost halved in cost in the last five years, from over 900 $/kWh to less than 500 $/kWh (2). This means that dramatically more energy-dense batteries could be enjoyed by those who were previously only able to afford a tiring manual wheelchair or a heavy lead-acid powered unit. This effect is bolstered by economies of scale and the growing knowledge base around designing with the new technology. Even when a new technology is introduced unequally, its applications benefit the handicapped community at large in the long run.
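To put the battery trend above in perspective, a quick sketch of the implied yearly rate: the 900 and 500 $/kWh endpoints come from the cited Navigant Research figures, and the rest is just compound-rate arithmetic.

```python
# Annualized decline implied by the figures above:
# ~900 -> ~500 $/kWh over five years.

def annual_decline_rate(start, end, years):
    """Constant yearly rate that takes `start` down to `end` in `years` years."""
    return 1 - (end / start) ** (1 / years)

rate = annual_decline_rate(900, 500, 5)
print(f"batteries got ~{rate:.1%} cheaper per year")  # roughly 11% per year
```

A steady ~11% yearly decline is modest in any single year, which is why the cumulative halving matters more for accessibility than any one product generation.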

A secondary and more general use of human enhancement technology could be for entertainment, athletics, and general mobility. Mountaineering enthusiasts could travel further and higher, gaining access to locations and experiences they otherwise could not. Workers who need to lift heavy loads or stand on their feet all day could gain a reprieve from physical pain. Even among the general public, I suspect most individuals would enjoy the ability to jump higher, walk faster, and so on. Who knows, perhaps CO2 emissions could even fall as the need for cars decreased. At the same time, there would doubtless be resistance to this movement. This innovation proposes tying our bodies and activities closer to technology than perhaps ever before. At some point, concerns about how "human" our society still is would begin to manifest. Those who do not wish to participate in the "technological evolution" could face different treatment or opportunities than those who do. At the same time, however, I would argue that the use of assistive or enhancing technology represents a personal choice. We do not infringe on personal freedoms to get tattoos, body piercings, or non-medically approved RFID implants, despite the fact that many of these represent semi-permanent body modifications. Similarly, the definition of human is not necessarily chiseled in stone. Some of the characteristic qualities discussed in lecture included memory, culture, language, reason, questioning, measuring, representation, symbolic cognition, consciousness, empathy, appreciation of mortality, awe, beauty, and inspiration. I might go further and suggest that the practice of innovation and tool-making is a fundamental part of the human identity. Humans were not born with wings, and yet millions fly every day. This is certainly not the only barrier to more widespread adoption of this technology, however.
Military usage, exploitation by terror groups, and other unanticipated misuse could certainly have serious consequences. It is for this reason that such a techno-future is only possible with careful thought and the enforcement of appropriate regulations.

Works Cited:
(1)    "The Cost of a New Limb Can Add up Over a Lifetime." Hospital for Special Surgery. N.p., n.d. Web. 07 May 2015. <http://www.hss.edu/newsroom_prosthetic-leg-cost-over-lifetime.asp#.VUv0mflVhBc>.

(2)    "The EV Conundrum: Uncertain Resale Value Complicates Li-ion Battery Market." Navigant Research. 21 Jan. 2010. Web. 07 May 2015. <https://www.navigantresearch.com/blog/the-ev-conundrum-uncertain-resale-value-complicates-li-ion-battery-market>.

Monday, April 20, 2015

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 2)

When embarking on any research project, a technoscientist must evaluate whether the results of the study will be used in ways they deem ethically and politically responsible. This is a simple first step, but one that depends significantly on the opinions of the individual in question. While opinions of what is ethical vary, ensuring that every participant in the research is comfortable with the consequences is a great step toward determining whether the study should continue. In many respects, this is similar to employment in any controversial industry. Often, though, this first step is not enough; repeated self-inquiry is the only way for individuals to ensure they can morally continue in their topic of research. An excellent example comes from the famous theoretical physicist Richard Feynman. Feynman's genius was recognized from his youth, and almost directly after completing his Ph.D. he was recruited into the Manhattan Project. He worked at both the Los Alamos and Oak Ridge facilities, and his contributions to safety procedures helped enable the development of the first nuclear bombs. Later in his life, Feynman expressed regret at not reconsidering his work after the defeat of Germany in World War II. While he maintained support for his initial participation, he found it difficult to justify his work past that point (atomicheritage.org). Feynman's experience perfectly illustrates the dangerous human tendency to question once and then accept the consequences from that point on, even as the situation keeps changing. Another lesson can be learned from Feynman's mistake as well: the value of looking to history for additional perspective on the present. There are countless examples of noble-minded scientists oblivious to the sometimes devastating consequences of their research. In some situations, it can be difficult to cease participation once it has begun.
In these cases especially, it is essential to understand how little control one has over information once it is released. There is no way to enforce this personal caution among all scientists, but I believe that many would exercise it (or already do) if more critical analysis were encouraged.

One final method for encouraging ethically and socio-politically responsible outcomes from science is to minimize misunderstandings. While the suggestion of improved communication could hardly be amiss in almost any field, the process of communication between technoscientists, society, and the media has far-reaching consequences. This point becomes even more pertinent given the history of poor communication between these three groups. In many cases, it may be difficult for scientists to translate highly technical findings in a way that is both useful and simple enough for the media to understand, and there are a multitude of pitfalls in the process. The first is the potential for a scientist to personally communicate poorly: explaining important work or results in terms that are indecipherable or susceptible to misinterpretation is dangerous. Similarly, there is the danger of oversimplification; if a concept is abstracted too far, there remains no value in the media reporting it. The media holds responsibility for miscommunication as well. As we discussed in lecture, media groups are typically profit-driven organizations, which can lead to an unhealthy focus on "fun" science at best and sensationalism at worst. Moreover, time and time again, simple ignorance on the part of the media and society leads to misinterpreted statistics and unfounded conclusions. To some degree, scientists need to become judges of what is most important for the media and society to know. This is a huge but necessary responsibility, and despite the reticence some may feel about participating in the process, it is the scientists themselves who are in the best position to perform this arbitration. I would argue that it is also the responsibility of the scientist to stay current with modern news and controversy; any individual in a position affecting so many others must consider their choices with a broader scope. This, too, is a challenging practice to enforce on its own.
Perhaps a mandatory follow-up period could be required for certain research: scientists who produce innovations in certain fields would be required to serve on a regulation committee and/or help guide the directions in which the innovation is taken. This requirement could increase the degree of personal responsibility scientists feel for how their discoveries affect society.


Scientists discover and innovate in ways that can both enhance and diminish public well-being. This is not the only goal of science, however; some science is undertaken purely for the sake of knowledge, in the belief that knowing more about the universe is a worthwhile cause in itself. Because of the massive ways in which technoscientists affect society, it becomes their responsibility to consider the implications of their research as both professionals and members of the public. One important way this can be done is by being aware of the outside influences on any given experiment, and of the ways these can indirectly affect the accuracy of results through selective reporting. Another essential method for maintaining ethics in science is regular critical thinking and self-inquiry about the kind of work scientists perform. Finally, the curriculum and education of young technoscientists needs to focus on communicating technical work to the public. Scientists are the ambassadors between the future and the present, and their expertise is needed not only for discovery, but also for integrating innovations responsibly.

Works cited:

"Richard Feynman." Atomic Heritage Foundation. N.p., n.d. Web. 20 Apr. 2015. <http://www.atomicheritage.org/profile/richard-feynman>.

How can technoscientists better act to maximize the ethical and socioeconomic benefits from their actions? (Part 1)

Technoscientists occupy an interesting position. Like engineers, many technoscientists pursue a technical field, using scientific knowledge to further their aims. Unlike engineering, however, science is often seen as the pursuit of more knowledge in a specific field rather than the application of that knowledge. Learning more about the universe and furthering humanity's understanding of how the world works is one of the core motivations of science. While many engineers may find their personal actions constrained by management structures and corporate policy, some scientists retain more autonomy over what they study and how they fund that research. Because they are the experts in their chosen areas, their opinions and positions hold significant sway over the direction in which progress in a subject will move. As a general assumption, I think most scientists pursue their work out of intellectual curiosity and without any direct malicious intent. It is precisely this focus on the "purity" of the field, however, that may make science and technoscientists vulnerable to the ulterior intents of external forces. While remaining passive and refusing to take sides in times of controversy may continue to produce good science, the global implications of such a removed mindset can be devastating. Analyzing the ethical and socioeconomic implications of scientific research, and acting on that analysis, is not trivial, but it is an activity that must be continually undertaken and improved upon. As some of the primary producers of the innovations that affect everyday society, technoscientists must act as the first line of regulation for research that could hold unforeseen consequences.
I will argue that there are a number of strategies to help achieve this goal, including recognizing the effects of outside influences on research, constantly re-evaluating the ethics and morality of participation in research projects, and minimizing misunderstandings between scientists, the media, and the public.

Recognizing the effects of outside parties on research is not only difficult but also a somewhat awkward topic. Modern scientific practice hinges on objectivity and on a process that seeks to keep personal bias from influencing the purity of research results. This process is known as the scientific method, a simple sequence of steps for conducting good science. While the content and complexity of this method vary between institutions and individuals, most can agree it covers the following (Rochester.edu):

-Observation and description of a phenomenon
-Formulation of a hypothesis to explain the observed phenomenon
-Use of the hypothesis to predict other phenomena and/or the quantitative results of new observations
-Performance of a repeatable experiment to test the hypothesis


This step-by-step process is intended to leave little room for the opinions of the experimenter to influence the published results. This is by design: in a perfect world, a tested and failed hypothesis is just as valuable as a validated one. Despite this, outside influences can still exert subconscious effects on experiments. A 2010 article in The New Yorker investigated this very effect, terming it "selective reporting." The pressure to validate a hypothesis can be enormous when financial (corporate) interests or an individual's career is on the line. Even when the data does not support the hypothesis, many researchers continue analyzing it until some significant trend can be found; rather than conducting a new experiment designed to isolate that effect, the results are then published directly. A similar effect was studied by statistician Theodore Sterling in the 1950s, when he noticed that 97% of all published psychological studies with statistically significant data found the effect they were looking for (Jonah Lehrer, 2010). While on their own these results may not appear dangerous, when poorly supported conclusions come out of privately funded research, the risk of dangerous products reaching the public increases. Rather than pretending that science occurs in a vacuum, it is important for technoscientists to remain vigilant against the selective reporting effect.
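The selective-reporting effect can be made concrete with a toy simulation (my own illustration, not from the article): give every simulated study a true effect of exactly zero, "publish" only the statistically significant ones, and the resulting literature still looks full of real effects.

```python
# Toy model of selective reporting: 10,000 studies of a TRUE null effect,
# where only results significant at p < 0.05 get "published".
import random

random.seed(42)
SE = 1.0          # standard error of each study's effect estimate
THRESHOLD = 1.96  # |estimate / SE| needed for p < 0.05 (two-sided)

estimates = [random.gauss(0, SE) for _ in range(10_000)]
published = [e for e in estimates if abs(e) / SE > THRESHOLD]

share = len(published) / len(estimates)
mean_effect = sum(abs(e) for e in published) / len(published)

print(f"{share:.1%} of studies reach significance purely by chance")
print(f"published studies report an average |effect| of {mean_effect:.2f} SE")
```

Roughly 5% of the null studies cross the significance bar by luck alone, and every one of those "published" results reports a sizable effect. If that filtered set is all anyone reads, the literature appears unanimous, much like Sterling's 97% figure.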

Works cited:

"Introduction to the Scientific Method." Introduction to the Scientific Method. N.p., n.d. Web. 20 Apr. 2015. <http://teacher.nsrl.rochester.edu/phy_labs/appendixe/appendixe.html>.

LEHRER, JONAH. "The Truth Wears Off." The New Yorker. N.p., 13 Dec. 2010. Web. 20 Apr. 2015. <http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off>.

Sunday, April 12, 2015

The Benefits and Dangers of Synthetic Biology

A recent lecture and reading assignment introduced us to the rapidly expanding world of biological engineering. According to Biology's Brave New World, this is a field in which scientists are beginning a new era of rapid learning and progress. More concerning, however, is the lack of regulation and ethical study that has accompanied this wealth of technical innovation. Could this be another case of technology outstripping society's capacity to address it? This is a complicated subject, and much of the research in this area is dual-use, or capable of being exploited for a number of unintended purposes, both positive and negative. As biologists become engineers and access to genetic information grows, it will not be long before the general public has access to the tools and manufacturing services that allow new life to be designed. One concern is the development of biological warfare agents by motivated parties and individuals. Even if the most dangerous data and results were kept classified, the problem of information security would then enter the picture. The increasing ease of access to synthetic biology information and tools carries both advantages and disadvantages for society at large.
One advantage of this lower "barrier to entry" is the ability to perform rapid and inexpensive Intelligent Trial and Error (ITE). The cost of synthetic biological research has been plummeting in recent years: every year, the cost of sequencing a genome drops another 5 to 10 times. This is well ahead of the rate predicted by Moore's law and, as of January 2014, the cost has fallen below $1,000 (Business Insider, 2014). Sequencing is not the only area in which costs have dropped, however. The synthetic biology competition iGEM (International Genetically Engineered Machine) has existed since 2003 and provides the resources and structure for high school and college students to design and grow their own genetically engineered organisms. The competition promotes the development of sophisticated bacteria, with the complexity increasing all the time. Impressively, it operates on an annual basis, proving that substantial design improvements and changes can be implemented on a short time frame. Speeding the process even further is the development of automated assembly, which could supplement or replace typical standard-assembly and parallel-assembly techniques. Fast turnaround is essential to the iteration process of ITE, and the ability of minimally funded student teams to produce work so quickly is strong evidence that professional teams could evolve designs even faster. Crucially, iGEM gives access to the Registry of Standard Biological Parts, a standardized source of common biological components that allows for rapid (and relatively simple) development of completely new genetic recipes. In the case of iGEM, many of the components are assembled as "BioBricks," which can be used in designs and supplemented by software to ease the engineering process (igem.org).
With the numerous standards and technologies in place, increased speed of the bacteria assembly process, and tremendous drop in price of genetic research and components, intelligent trial and error can be performed faster and more consistently to help negate the unexpected consequences of rapid innovation.
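To show how far ahead of Moore's law the cited sequencing trend runs, a quick compounding comparison: the per-year factors are the article's claim (a conservative 5x per year for sequencing; a halving roughly every two years for Moore's law), and only the compounding arithmetic is mine.

```python
# Compounded cost-reduction factors: Moore's-law pace vs. the cited
# sequencing-cost trend, over the same six-year window.

def cumulative_factor(per_year, years):
    """Total cost-reduction factor after compounding for `years` years."""
    return per_year ** years

years = 6
moore = cumulative_factor(2 ** 0.5, years)    # halving every 2 years ~= 1.41x/yr
sequencing = cumulative_factor(5, years)      # conservative end of "5 to 10 times"

print(f"Moore's law over {years} years: ~{moore:.0f}x cheaper")
print(f"Sequencing over {years} years: ~{sequencing:,.0f}x cheaper")
```

Six years of Moore's law buys roughly an 8x improvement; six years at even the low end of the sequencing trend buys over 15,000x, which is why the field's cost curve keeps surprising observers.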
While technological advances may provide some solutions, they can also come at a cost. Although increased accessibility to biological engineering may promote more testing and positive outcomes in the professional scientific community, it could also draw less well-intentioned interest. The development of hazardous bacteria would not be particularly difficult for a terrorist group. Machinery used in automated assembly could easily be reverse-engineered or purchased through illegitimate channels; in some cases, this is as simple as automated pipette and fluid-transfer robots. Not only are such robots easy to acquire, but publicly available code already exists to program them (Synthetic Biology, 2011). The secondary concern is one of information and data. Equipment for building new bacteria is only as useful as the genetic code sent to it, and it is this code that presents such a large security risk for the future. While the ability to engineer deadly biological weapons may remain out of reach for most of society, replicating existing code is simple once it becomes accessible. This is a system with no redundancy: if classified genetic code were released, it would be almost impossible to prevent the spread of that knowledge, as has been seen time and time again through "leak sites" like Wikileaks.org. Another problem presented by Biology's Brave New World is the potential for dangerous code to be hidden in innocuous places. If such code were unknowingly downloaded to a system with access to automated assembly machinery, the consequences could be devastating. The dangers of information security and the susceptibility of assembly machinery counter many of the advantages of biological engineering with matching disadvantages. It will be up to society and regulatory agencies to decide what rate of innovation in this fledgling field is worth the risk.

Cited Sources

"Biology's Brave New World." Foreign Affairs. N.p., n.d. Web. 12 Apr. 2015. <http://www.foreignaffairs.com/articles/140156/laurie-garrett/biologys-brave-new-world>.
Raj, Ajai. "Soon, It Will Cost Less To Sequence A Genome Than To Flush A Toilet - And That Will Change Medicine Forever." Business Insider. Business Insider, Inc, 02 Oct. 2014. Web. 12 Apr. 2015. <http://www.businessinsider.com/super-cheap-genome-sequencing-by-2020-2014-10>.
"Main Page - Ung.igem.org." Main Page - Ung.igem.org. N.p., n.d. Web. 12 Apr. 2015. <http://igem.org/Main_Page>.

Leguia, Mariana, Jennifer Brophy, Douglas Densmore, and Christopher J. Anderson. "Chapter 16." Synthetic Biology. San Diego, CA: Academic, 2011. Print.

Monday, March 30, 2015

The Obstacles Facing Internet-Based Democracy

The advent of the internet allows unprecedented communication and collaboration between people all over the world. As such, it only makes sense to use this incredible platform as a way to improve a historically restricted arena: politics. Internet-based democracy promises to improve transparency in government, eliminate barriers between the people and their representation, and allow new voices and ideas to receive fair consideration. The system holds additional advantages. Web platforms can be constantly edited, improved, and modified to better meet the needs of users. Change can be made quickly, relatively inexpensively, and in direct response to the feedback of those who interact with the platform: all the necessary elements for Intelligent Trial and Error. Furthermore, the internet is already used by over 2 billion people in ways similar to this proposal. While legacy thinking might slow adoption of such a system, the public’s inherent familiarity with its foundation puts it ahead of most radical new ideas. Yet, despite genuine hopes that such an internet-based democracy could someday exist, a number of specific obstacles remain to be overcome before immediate adoption could even be considered.
The first problem is something I’ll term “the comment section dilemma”. While there are many different systems through which internet communication happens, one of the most ubiquitous is the comment section featured at the base of an article, video, blog, or product page. In smaller communities, this section can often foster intelligent, productive conversation that adds to the page’s existing content or advances the ideas covered there. These communities frequently rely on self-regulation to keep conversations productive. When applied to much larger pages with heavier traffic, however, this system often breaks down: spam, joke posts, and hateful comments quickly crowd out the more productive discussion. This problem is not exclusive to comment sections, either; large-scale forums and chat rooms regularly face the same challenges. While some sites have managed these problems to an extent, it often comes at a price. Some have recently eliminated comments (or selectively disable them on controversial content) or made them harder to access (through drop-down menus or required registration). Still others rely on heavy censoring. While censoring is undoubtedly necessary in any potential internet democracy system, the magnitude and method of enforcement are extremely important questions. Automated censoring systems can handle massive scale but struggle with intelligence: existing systems appear to catch little more than obvious spam or profanity. Identifying hateful or unproductive posts (beyond simple profanity) requires an actual understanding of the concepts under discussion. Furthermore, any censoring system, automated or manual, will carry some degree of bias. Free speech is an essential component of democracy, and the use of censoring in such a forum is dangerous (and perhaps even unconstitutional).
An official internet democracy site would need to handle these issues nearly flawlessly to gain public approval (especially given the tremendous size of its user base), an obstacle that we have yet to overcome, both technologically and socially.
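To make the limits of automated censoring concrete, here is a minimal sketch of the kind of keyword-based filter many sites use as a first line of defense. The blocklist and example comments are entirely hypothetical; the point is that exact phrase matching has no understanding of context or intent.

```python
# Hypothetical blocklist for a naive keyword-based comment filter.
BLOCKED_TERMS = {"buy now", "free money", "idiot"}

def is_allowed(comment: str) -> bool:
    """Allow a comment unless it contains an exact blocked phrase."""
    text = comment.lower()
    return not any(term in text for term in BLOCKED_TERMS)

# Obvious spam is caught...
print(is_allowed("FREE MONEY!!! Click here"))                     # False
# ...but hateful content phrased without blocked words passes,
# and a civil comment that merely quotes a blocked word is rejected.
print(is_allowed("Anyone who votes that way is subhuman"))        # True
print(is_allowed("Calling your opponent an idiot helps no one"))  # False
```

Both failure modes matter for a democratic forum: the filter misses genuinely hateful speech while silencing legitimate discussion, which is exactly the intelligence gap described above.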
The second obstacle is another unintended consequence of scale: the problem of maintaining equality and organization. A site with a massive user base would generate far more content than any one human could read or comprehend. If equal attention were given to every post, nobody would ever be heard. Furthermore, the benefit of transparency in this system begins to be lost if content is hidden not behind closed doors but behind terabytes of other information (a far more intimidating barrier). It quickly becomes clear that for any idea to be seen by enough people to gain support, some kind of ranking system must be developed. Aside from the ethical questions surrounding ranking users or ideas, there are technical challenges as well. Many sites, such as Reddit, use complicated algorithms to judge the merits of posts and users and to choose how many others will see them. Unfortunately, no perfect algorithm for identifying the best political discussion points exists. An imperfect solution, besides failing to promote the best content, can be “gamed” by users attempting to reverse engineer the algorithm; in other words, users can find ways of artificially inflating a post’s ranking that have nothing to do with its merit. The struggle with equality in internet democracy is broader than any specific implementation, though. The development or usage of any new technology represents a form of legislation that may be unequal. For instance, an internet-based system gives more political influence to those who can afford internet access and a computer; many of the poorest and most in need could become even less represented. Such a system also implies a degree of computer literacy, and many elderly citizens may not have the required skills to access and contribute to the system in the way that the younger generation could.
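Reddit’s “hot” ranking, whose code was open source, illustrates the trade-offs such an algorithm must make: vote totals count logarithmically, while a linear time term steadily favors newer posts. The following is only a rough sketch of that published formula (constants approximate, not an exact reproduction):

```python
from datetime import datetime, timedelta, timezone
from math import log10

# Rough sketch of a Reddit-style "hot" score (constants approximate,
# based on Reddit's open-source ranking code): votes count
# logarithmically, while every 45000 seconds (~12.5 hours) of age is
# worth one order of magnitude of votes.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(upvotes: int, downvotes: int, posted: datetime) -> float:
    score = upvotes - downvotes
    order = log10(max(abs(score), 1))        # 10 votes ~ 1, 100 votes ~ 2
    sign = 1 if score > 0 else -1 if score < 0 else 0
    age = (posted - EPOCH).total_seconds()   # newer posts -> larger term
    return sign * order + age / 45000

# A fresh post with 10 net votes outranks a day-old post with 100:
now = datetime.now(timezone.utc)
print(hot(10, 0, now) > hot(100, 0, now - timedelta(days=1)))  # True
```

Because the time term dominates while a post is young, a small coordinated burst of early votes can outrank organically popular content, which is precisely the kind of gaming that worries critics of algorithmic ranking.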
The inherent inequality of an internet-based democracy, and the difficulty of organizing one, prevent it from being put into action at the present time.