This week's lecture and reading material explored the concept of "intelligent trial and error". Intelligent trial and error (ITE) is a direct response to one of the earlier concepts covered in this class: unforeseen consequences. Whenever any technical or scientific endeavor is attempted (especially as the magnitude and complexity increase), there must be some expectation of unforeseen consequences. We live in a non-deterministic universe, and as such, even our best efforts and predictions cannot hope to account for all of the direct, secondary, and tertiary effects that our actions will have in the future. With this knowledge, the objective becomes to find ways to minimize problems and prevent their escalation when they do appear. This aligns well with our overall goal for this course, and the steering of technology undoubtedly benefits from such a process. ITE is one of the most fundamental procedures used to address these problems, yet it too has failings that limit its application and effectiveness.
At its most basic, ITE is a process humans complete in numerous small ways every day. We attempt to solve a problem, identify failings with the result, modify our procedure based on our findings, and attempt the same problem again. It is a classic closed-loop control system, and like any control system, it can be improved and optimized by tuning simple factors like the amplitude and rate of change of the response. In the reading we examined the most obvious failure mode of this system: becoming an open loop, or losing the communication between the feedback and the actions. This effectively eliminates the possibility of any corrective action being taken. In the furniture factory case, employees and site inspectors were repeatedly sending information about the problem back to OSHA, and yet the corrective measures were never acted upon, opening a gap in the loop. What about more subtle challenges, though, such as those complex enough to require more than one cycle of feedback? As humanity continues to push the boundaries of science and technology, we may encounter unforeseen consequences that require many, many cycles of feedback and correction before they can be adequately addressed. Each iteration draws upon time and money, two resources that corporations, society, and governments often find in short supply. A favorite example that comes to mind is that of the Saturn V's Rocketdyne F-1 engines. During an extremely rushed and expensive development period, an unexpected combustion instability was discovered that, in most cases, propagated into violent engine destruction. It took numerous iterative tests over the course of two years to develop a precise baffle system that could damp the oscillations in such a large combustion chamber. In the F-1's case, the enormous budget from the US government made this number of test cycles possible. In some cases, however, this drawback of the ITE process could hinder or even halt the correction of unexpected problems.
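To make the closed-loop framing concrete, here is a minimal sketch of the ITE cycle written as a simple proportional controller in Python. Everything in it (the process, the target, and the gain) is an illustrative assumption of mine, not a model of any system from the reading; the gain plays the role of the "amplitude of the response" mentioned above.

```python
# A minimal sketch of the ITE feedback loop as a proportional controller.
# The process, target, and gain are illustrative assumptions only.

def run_ite_loop(process, target, gain=0.5, tolerance=0.01, max_cycles=100):
    """Iterate: act, measure the error, and correct until within tolerance."""
    setting = 0.0
    for cycle in range(max_cycles):
        result = process(setting)        # trial: attempt the problem
        error = target - result          # feedback: identify the failing
        if abs(error) < tolerance:
            return cycle, setting        # converged: problem adequately addressed
        setting += gain * error          # correction: modify the procedure
    return max_cycles, setting           # feedback exhausted before convergence

# Example: a hypothetical process whose output is a damped version of its input.
cycles, final = run_ite_loop(lambda x: 0.8 * x, target=1.0)
print(f"converged after {cycles} cycles at setting {final:.3f}")
```

Tuning matters here just as the paragraph suggests: a gain that is too high makes each correction overshoot and the loop oscillates or diverges, while a gain that is too low wastes cycles, and every extra cycle costs time and money.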
Even if the time and resources for a fully ITE-ready process can be supplied for a development, other challenges remain. Some of these were covered in class, and the pettier of them included human failings, such as unwillingness to act due to legacy thinking, reputation, pride, inertia, stupidity, or downright stubbornness. As irrational as these may be, they could derail an otherwise solid ITE plan. Other causes tying back to the cost of ITE exist too, like sunk costs and political expedience. Presuming all of these challenges can be overcome, however, we still must consider the less trivial obstacle of the effectiveness of problem and correction identification. Trial and error, for the most part, does not imply or provide guidance toward the proper type of correction that must be made or the change in procedure that must be taken. In some cases the direction or type of correction is not clear, and a pure trial-and-error approach (even when taken with "intelligence") comes down to hunting in the dark. To complicate matters further, most real-world systems involve changing multiple variables simultaneously. Changing a single factor in a polluting manufacturing process, for example, may appear to slightly improve efficiency, and yet changing that factor further could produce very little benefit without the additional reduction of a different factor, as the sketch below illustrates. The time needed to identify solutions becomes hugely magnified, which emphasizes the need for a system with a structured, documented procedure for identifying root problems. Item 4 in the Intelligent Trial and Error table in the textbook comes closest to exploring this issue, with its requirement of "Active Preparation for Learning From Experience", but even this focuses on the need for error identification rather than the steps to do so effectively.
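Here is a minimal sketch of that interaction problem, assuming a hypothetical cost function standing in for the polluting process above; the coefficients are invented purely to show the effect and are not taken from the text.

```python
# A minimal sketch of why one-factor-at-a-time trial and error struggles
# when variables interact. The cost function and coefficients are
# hypothetical stand-ins, not data from the reading.

def cost(a, b):
    # The loss is dominated by the *product* of the two factors, so
    # reducing one factor alone eventually hits a floor set by the other.
    return 1.0 * a * b + 0.1 * a + 0.1 * b

print(cost(1.0, 1.0))  # baseline: ~1.2
print(cost(0.5, 1.0))  # halve factor a alone: ~0.65, a real improvement
print(cost(0.0, 1.0))  # eliminate a entirely: ~0.1, floor from b remains
print(cost(0.0, 0.0))  # only reducing both factors removes the floor: 0.0
```

A blind trial-and-error search that adjusts one variable at a time would conclude that factor `a` stops mattering, when in fact the real fix requires coordinated changes, which is exactly why a structured procedure for identifying root problems is needed.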
References:
"SP-4206 Stages to Saturn." Nasa History Program Office. NASA, n.d. Web. 01 Mar. 2015. <http://history.nasa.gov/SP-4206/ch4.htm>
Harford, James J. Korolev: How One Man Masterminded the Soviet Drive to Beat America to the Moon. New York: Wiley, 1997. Print.
Saturday, February 21, 2015
The Treadmill of Progress
In what ways does technology outpace society? The text and lecture investigate a number of possibilities, stretching from the far-reaching implications of new ethical issues to the simple gut reactions of a society that feels overwhelmed by the new gadgets of the week. Over 50% of the participants in the survey we studied in lecture feel technology is moving too quickly, yet they appear to resign themselves to the inevitable grind of progress. Is this the "treadmill" effect we are seeing: a society constantly running to keep up with the very progress that its own actions inspire?
This feels intuitively difficult to believe. Nature and society tend to find steady-state solutions to imbalances. A treadmill effect, with people constantly moving toward new technology that they themselves are creating, is unstable. Sooner or later, the rate of innovation would exceed our ability to follow, or we would have to give up our attempts to do so. This manifests itself economically as well. If society truly disliked the introduction of technology, it would result in less adoption and support of new innovation. It would quickly become unprofitable to produce such innovations, which would reduce the incentive to continue the process. What if this natural pace regulation were hampered by some other effect? Perhaps there are many individuals unhappy with the rate of scientific and technological development, but the seemingly universal stigma associated with slowing progress prevents them from speaking out and communicating with one another. Certainly there are a number of smaller communities of people who publicly disapprove of the state and pace of technology. Perhaps if society were more aware of one another's opinions, a truer opinion of the masses could be developed. This too, however, seems suspect. The survey we reviewed indicates that no small proportion of the population feels this way. If there were a truly strong opposition, it would be impossible for such sentiment to remain unnoticed. The corporate world directly reflects the interests and desires of consumers. Huge amounts of money are invested annually to find the new products and developments that will be the most successful. Society produces exactly the demand for innovations that it supports, and this feedback controls the true rate of research and development. As the saying goes: "the customer is always right!"
So why does a population whose actions support innovation simultaneously verbalize discomfort with it? No one would argue that the introduction of new technology does not require society to adapt. Flexibility is mandatory in the debut and integration of any new development, and humanity has repeatedly shown its capacity to adapt to a changing world. This process, however, can take a lot of effort. Learning to deal with the new challenges and obligations of technology is not trivial. At the less serious end of the scale, who among us has not felt confusion or frustration at the introduction of a new operating system, or at operating a complex new device for the first time? Humans often crave normality: the ability to use our existing knowledge without the risk of unforeseen consequences or the challenge of new obstacles. If technology is often a result of legacy thinking, cannot the opposite also be true? Other effects of the introduction of new science and technology are more serious. Ethical issues, like the development of cloning, place a great burden on society. We are forced to analyze our own long-term assumptions, reexamine the reasoning behind our beliefs, and push ourselves to reach consensus on issues that we could previously ignore. It is hard to definitively argue whether addressing these issues is to the benefit or detriment of society, but if nothing else, it allows us to make more informed decisions in the future.
The trial-and-error approach was another concept explored in the text that examines the underlying tendencies behind how humans learn and adapt. It is simple in theory: humans learn through mistakes, allowing them to make better choices when presented with the same decision again. It is also a valid argument against the speed of development, for how can we iterate through the process of mistakes and corrections if we move too fast to respond? Rather than evidence of society's lack of control over the pace of innovation, though, trial and error is much more a reflection of society's inexperience in analyzing new technologies. For instance, the book's example of nuclear reactors being developed faster than waste-disposal and operating procedures clearly represented a failure to wait an adequate time for signs of error before entering production. While the imperfect decisions of humans may never be completely eliminated, continued experience with the trial-and-error process will hopefully yield a pace of innovation appropriate to minimize these unforeseen consequences.
Friday, February 13, 2015
2/10 Lecture Thoughts
Our area of study this week is unfairness and social justice. This, in my mind, is an essential issue to address before we can make progress toward our overall goal of finding ways to benefit humanity through the steering of technology. Humanity is a big category that (obviously) includes all human beings. Some parts of that goal remain unclear, though. Does humanity include those yet to be born, our future generations? Or (more in line with our reading this week), does this mean merely helping all humans equally, or helping those born less privileged in order to put everyone on "equal" footing? The question of fairness is one that we all see through our own ideology, and it is thus impossible to define with a single correct answer. We live in a society that answers questions of fairness in the most impartial way we can: our judicial system. The beginning of the lecture did a good job of introducing the topic with the James Baldwin quote: "ignorance, allied with power, is the most ferocious enemy justice can have". Justice systems are imperfect, and despite our attempts to preserve objectivity throughout the process, some degree of bias still affects the outcome. The lecture also provided insight into how we can attempt to improve the justice system: when trying to determine the success of the law, we need to ask those who need its protection most whether it is working.
There are other questions, however, that cannot be answered in national or local courts. In my mind, consideration of the fundamental rights to essentials is the responsibility of every individual human, yet it is not a question that can be answered by one person or a small jury. Even the attempts of large organizations, such as the United Nations, have failed to produce an answer compelling enough to inspire global compliance. Our supplemental reading provided a small glimpse of the conditions endured in the poorer communities of India. There, the struggle is for regular access to clean water and sanitation. This is, no doubt, not unique to India, nor even the worst of the conditions humans endure daily on a global scale. Most would probably consider those two resources fundamental, and yet government efforts to address the problem are minimally effective. Even if a consensus could be reached, how could humanity manage the logistics of the problem we are facing? The book mentions the suggestion of a global tax that seeks to better balance the allocation of resources toward the fundamentals. This is an interesting idea in theory, but I imagine that in reality there would be some serious obstacles associated with it. In my opinion, the challenge of distributing resources is one that needs to be addressed locally and individually. As we have read, we cannot hope to understand the conditions and factors affecting every poor community, and an ignorant approach to addressing fundamental resources could produce more harm than good.
Distributing resources brings us back to the bigger question of fairness. As many of us are taught in childhood: "the world isn't a fair place". In lecture we explored the relation between access to technological benefits and privilege. Privilege can come in the form of class, race, gender, sexual preference, and an extensive range of other attributes. A key point that we touched on, however, is that most of these are granted completely randomly. Furthermore, this random "life lottery" almost directly dictates access to technology (which is a strong indicator of wealth). It is important to note that in this context, technology can be as simple as clean water and sanitation or as complex as access to the latest computing systems. The Ability To Pay (ATP) method of technology distribution is what prevails in most of the world today. ATP has clear links to capitalism and simple economic theory, and tends to be the natural response of an economy in the absence of special programs. At the same time, ATP privileges those who need help least and restricts access for those who need it most. Undoubtedly, superior systems exist, and a number of these were detailed in our reading and lecture. Unfortunately, many of them also rely on the ability to quantify and distinguish levels of need and wealth. This is a challenge in its own right, as is producing a system that can identify the least privileged without the influence of corruption or personal bias. From a technocratic perspective, perhaps this could be a place to steer the focus of future technology and scientific research. The power of computational data analysis, often disliked for its intrusions into personal privacy, might be put to good use helping us produce a better model for mapping the communities where resources and technology are most needed.
Saturday, February 7, 2015
Lecture 3 thoughts
Lecture 3 explored the concept of unintended consequences in depth. At its most basic, the notion of unintended consequences is relatively simple. All actions have consequences, no matter how small. Those consequences (whether negative or positive) may present themselves in unexpected ways, and can propagate on to create even further ripples and consequences of consequences. Given that our goal in this class is to find ways to better utilize science and technology for the well-being of humanity, it makes sense that we want to steer technical resources in a way that minimizes negative outcomes. Specifically, I noted that both the frequency and the severity of negative consequences are metrics we seek to minimize. It is helpful to define bad events in this fashion, because together these two factors do a good job of covering the spectrum of unforeseen disasters. For instance, nuclear power disasters occur extremely infrequently, yet the results of an incident are devastating. Small industrial chemical spills are typically containable and addressable, but incidents and violations of waste-dumping laws are very numerous. If we truly want to develop a plan to minimize unforeseen consequences, we need to put safeguards in place to reduce both the magnitude of a potential disaster and the statistical likelihood of it occurring in the first place.
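One common way to combine the two metrics is to treat risk as frequency times severity. Here is a minimal sketch of that idea; the hazard names and all of the numbers are invented for illustration, not statistics from the lecture.

```python
# A minimal sketch of weighing frequency against severity. All numbers
# are hypothetical, chosen only to illustrate the comparison.

hazards = {
    # name: (expected events per year, cost per event in arbitrary units)
    "reactor meltdown": (0.0001, 1_000_000),
    "chemical spill":   (50.0,   10),
}

for name, (frequency, severity) in hazards.items():
    # Expected annual loss = frequency x severity, so a rare catastrophe
    # and a common nuisance can pose comparable (or inverted) total risk.
    print(f"{name}: expected annual loss = {frequency * severity:,.0f}")
```

In this toy example the frequent small spills actually dominate the expected loss, which supports the point that safeguards must target both the magnitude of a disaster and its likelihood.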
The second part of the lecture that I found particularly interesting surrounded the notion of "normalization" of accidents. In many fields we can both empirically and mathematically show (to a high degree of accuracy) that accidents will occur with some given frequency. While the specific modes of failure may be unknown or complex to calculate, basic statistics can take a macro view of historic events and condense it into a close estimate. Normalization asks: if we know that an event will happen with a certain frequency (even if we don't know how or precisely when), can we really call it an "accident" when it actually occurs? I would go further and ask whether we should choose to do such a thing. In the case of school shootings, like in our supplementary reading material, normalization appears to make horrific acts of violence commonplace by training students to expect them to happen. On the other hand, we know earthquakes and natural disasters will happen regularly too, and we still feel justified in calling those "accidents". The key seems to lie with our degree of involvement in the accident. We can't cause natural disasters (at least not on a short time scale), and neither can we ever hope to eliminate all incidents and industrial accidents, no matter how carefully we try. I think that the term "accident" is still accurate when the timing of the event cannot be known in any detail, because it still entails an element of surprise. At the same time, however, society needs to stop associating the term "accident" with freedom from liability or responsibility for the consequences. Just because an accident occurs does not mean we don't have to deal with the fallout, and we need to be vigilant in identifying risks that are not worth the benefit.
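To show what that "macro view" can look like, here is a minimal sketch that estimates an accident rate from historical counts under a simple Poisson assumption; the yearly counts are invented for illustration.

```python
# A minimal sketch of estimating accident frequency from history.
# The counts are hypothetical; the Poisson model is an assumption.
import math

yearly_counts = [3, 1, 4, 2, 3, 2, 5, 1, 2, 3]    # accidents per year (invented)
rate = sum(yearly_counts) / len(yearly_counts)     # average rate: 2.6 per year

# Under a Poisson model, P(at least one accident next year) = 1 - e^(-rate),
# a near-certainty here even though we cannot say how or when it will occur.
p_at_least_one = 1 - math.exp(-rate)
print(f"estimated rate: {rate:.1f}/year, P(>=1 next year) = {p_at_least_one:.3f}")
```

This is exactly the tension normalization exposes: the statistics can tell us an event is all but guaranteed next year while telling us nothing about its specific mode or timing.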
Thursday, February 5, 2015
Pinto Madness Response
I found our latest reading assignment, "Pinto Madness", a very interesting article. While I have heard the story of the infamous Ford Pinto's design flaw a few times before (mostly through secondhand recounting), I appreciated the opportunity to read such a comprehensive, research-driven article. At the same time, however, I found myself discouraged by the aggressive and accusatory tone of the writing. I understand that this piece was intended to open the public's eyes to the priorities of the auto industry in America and its influence in our government, but I felt that the author's personal outrage toward the Ford Motor Company reduced the effectiveness of his arguments.
An example of this is his continual focus on the fact that Ford put a dollar value on a human life. I can count at least three places in the text where this is quoted directly, and many more where it is indirectly referenced. Economic analysis requires putting a value on a human life from the analyst's frame of reference. It is not (and could never be) a true indication of the actual value of an individual's life, but it is a necessary, crude approximation that makes it possible to quantify the idea of safety. While many might understandably argue that no value is high enough to equal a human life, mathematics and the habits of society do not support this. If we picture "safety" as the output of a function that asymptotically approaches 100% as spending increases, it quickly becomes impractical to keep investing huge sums of money for a tiny marginal benefit in safety. I argue this not to defend Ford, which is clearly responsible for a great deal (nor to claim that it had reached the point of diminishing returns). Instead, I want to make the case that the author uses this fact to appeal to readers' emotional sensibilities and inspire anger rather than a rational response. Another way in which the author tries to inspire outrage is through his follow-up on the activities of senior management officials at Ford. A strong argument with solid facts, like this article, doesn't need to try to inspire resentment toward those responsible; that will happen on its own. Resorting to an examination of Henry Ford II and Lee Iacocca's futures after Ford seems petty and off topic when the focus should be kept on an examination of the circumstances that allowed these accidents to happen.
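To make the diminishing-returns point concrete, here is a minimal sketch assuming one hypothetical asymptotic safety curve; the exponential form and the million-dollar scale constant are my own illustrative choices, not figures from the article or from Ford's analysis.

```python
# A minimal sketch of safety as an asymptotic function of spending.
# The curve shape and scale constant are hypothetical assumptions.
import math

def safety(dollars, scale=1_000_000):
    """Fraction of risk eliminated; approaches 1.0 but never reaches it."""
    return 1 - math.exp(-dollars / scale)

for spend in (1_000_000, 2_000_000, 4_000_000, 8_000_000):
    print(f"${spend:>9,}: safety = {safety(spend):.4%}")
# Each doubling of spending buys a smaller gain in safety, which is why
# such analyses end up pricing the marginal reduction in risk.
```

Under this toy curve, doubling the budget from $4M to $8M improves safety by less than two percentage points, which is the shape of the argument, independent of whether Ford was anywhere near that regime.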