Saturday, February 28, 2015

When Intelligent Trial and Error isn't Enough

This week's lecture and reading material explored the concept of "intelligent trial and error." Intelligent trial and error (ITE) is a direct response to one of the earlier concepts covered in this class: unforeseen consequences. Whenever any technical or scientific endeavor is attempted (especially as its magnitude and complexity increase), there must be some expectation of unforeseen consequences. We live in a non-deterministic universe, and as such even our best efforts and predictions cannot hope to account for all of the direct, secondary, and tertiary effects our actions will have in the future. With this knowledge, the objective becomes finding ways to minimize problems and prevent their escalation when they do appear. This aligns well with our overall goal for this course, and the process of steering technology undoubtedly benefits from it. ITE is one of the most fundamental procedures used to address these problems, yet it too has failings that limit its application and effectiveness.

At its most basic, ITE is a process humans carry out in numerous small ways every day. We attempt to solve a problem, identify failings in the result, modify our procedure based on our findings, and attempt the same problem again. It is a classic closed-loop control system, and like any control system it can be improved and optimized by tuning simple factors like the amplitude and rate of change of the response. In the reading we examined the most obvious failure mode of this system: becoming an open loop, that is, losing the communication between the feedback and the actions. This effectively eliminates the possibility of any corrective action being taken. In the furniture factory case, employees and site inspectors repeatedly sent information about the problem back to OSHA, and yet corrective measures were never acted upon, producing exactly this gap. What about more subtle challenges, though, such as those complex enough to require more than one cycle of feedback? As humanity continues to push the boundaries of science and technology, we may encounter unforeseen consequences that require many, many cycles of feedback and correction before they can be adequately addressed. Each iteration draws on time and money, two things that corporations, society, and governments often find in short supply. A favorite example that comes to mind is that of the Saturn V's Rocketdyne F-1 engines. During an extremely rushed and expensive development period, an unexpected combustion instability was discovered that in most cases escalated into violent engine destruction. It took numerous iterative tests over the course of two years to develop a precise baffle system that could damp the oscillations in such a large combustion chamber. In the F-1's case, the enormous budget from the US government made this number of test cycles possible. In other cases, however, this drawback of the ITE process could hinder or even halt correction of unexpected problems.
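The attempt-evaluate-correct cycle described above can be sketched as a small feedback loop. This is purely an illustrative Python sketch of my own, not anything from the course material; the function names, the tolerance, the cycle cap, and the damped correction gain in the usage example are all assumptions.

```python
def intelligent_trial_and_error(attempt, evaluate, correct,
                                tolerance=0.01, max_cycles=100):
    """Repeat the attempt-evaluate-correct cycle until the error is acceptable."""
    design = attempt()
    for cycle in range(max_cycles):
        error = evaluate(design)      # feedback: measure how this trial failed
        if abs(error) <= tolerance:   # the loop only closes if feedback reaches here
            return design, cycle
        design = correct(design, error)  # corrective action informed by feedback
    return design, max_cycles         # budget exhausted before convergence

# Usage: steer a guess toward a hypothetical target of 10.0.
target = 10.0
result, cycles = intelligent_trial_and_error(
    attempt=lambda: 0.0,
    evaluate=lambda d: d - target,
    correct=lambda d, e: d - 0.5 * e,  # damped correction, like tuning a gain
)
```

Cutting the feedback wire (an `evaluate` that returns nothing useful) turns this into the open loop from the furniture factory case: `correct` still runs, but it can no longer act on reality.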

Even if the time and resources for a fully ITE-ready process can be supplied for a development, other challenges remain. Some of these were covered in class; the pettier among them are human failings, such as unwillingness to act due to legacy thinking, reputation, pride, inertia, stupidity, or outright stubbornness. As irrational as these may be, they can derail an otherwise solid ITE plan. Other causes tying back to the cost of ITE exist too, like sunk costs and political expedience. Presuming all of these challenges can be overcome, however, we must still consider the less trivial obstacle of how effectively problems and corrections are identified. Trial and error, for the most part, does not imply or provide guidance toward the proper type of correction to make or the change in procedure to take. In some cases the direction or type of correction needed is not clear, and a pure trial and error approach (even when taken with "intelligence") comes down to hunting in the dark. To complicate matters further, most real-world systems involve changing multiple variables simultaneously. Changing a single factor in a polluting manufacturing process, for example, may appear to slightly improve efficiency, and yet changing that factor further could produce very little benefit without the additional reduction of a different factor. The time needed to identify solutions grows enormously, which emphasizes the need for a structured, documented procedure for identifying root problems. Item number 4 in the Intelligent Trial and Error table in the textbook comes closest to exploring this issue, with its requirement of "Active Preparation for Learning From Experience," but even this focuses on the need for error identification rather than the steps to do so effectively.
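The coupled-factor trap described above can be made concrete with a toy cost surface. Everything here is hypothetical and of my own invention: the `emissions` function, the factor ranges, and the interaction term exist only to illustrate why one-factor-at-a-time trial and error stalls.

```python
def emissions(a, b):
    """Hypothetical pollution cost: the two factors interact, so neither
    can be optimized in isolation."""
    return (a + b - 4) ** 2 + (a - b) ** 2

# One-factor-at-a-time trial and error: sweep factor a while b stays
# frozen at its legacy setting of 0. Improvement quickly plateaus.
best_a = min(range(9), key=lambda a: emissions(a, 0))

# Searching both factors jointly finds a far lower operating point.
best_pair = min(((a, b) for a in range(9) for b in range(9)),
                key=lambda p: emissions(*p))
```

In this made-up surface the single-factor sweep bottoms out at a cost of 8, while the joint search reaches 0; the interaction term means the "right" value of one factor depends entirely on the other, which is exactly why a structured procedure for identifying root causes matters.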



References:
"SP-4206 Stages to Saturn." NASA History Program Office. NASA, n.d. Web. 1 Mar. 2015. <http://history.nasa.gov/SP-4206/ch4.htm>.
Harford, James J. Korolev: How One Man Masterminded the Soviet Drive to Beat America to the Moon. New York: Wiley, 1997. Print.
