Monday, December 21, 2009

Us Versus Them

I left my full-time job a few years ago and for 18 months I sent in monthly checks to have my medical insurance continued under the COBRA plan. As I reached that time limit, I started to investigate individual medical insurance plans. Imagine my surprise - I found none at all! I found one provider that allowed me to fill in an application on-line - don't call us, we'll call you! Two years later, that application is still in process!

I wonder, if the new Federal reform of health insurance is enacted, is my situation going to improve at all? I gather that I will be required to make monthly payments again. Do I have any basis to hope to be better served?

I don't know what is in the bill that the House passed, or what the Senate is considering, never mind what any joint committee is likely to come up with. I'm not encouraged by the snippets I hear on the radio and see on the internet. The main thing I hear is that insurance companies will stop rejecting people's claims because of pre-existing conditions, if everyone, no matter how healthy, is required to enroll.

From an insurance perspective, this makes good sense. As the insurance companies point out, why should they be required to issue a fire insurance policy on a burning house? But somehow medical insurance is not fire insurance. A medical policy with a high deductible is much closer - a policy that would pay the typical holder only once in twenty years or thereabouts. I wonder how our system has evolved to its present state, with very low deductibles, so that for many people it is routine for insurance companies to pay their medical costs. Most likely this is coupled to another peculiarity, that most medical policies are part of an employer's group plan, and largely paid for by the employer without being subject to income taxes.

From the snippets I've seen and heard, the focus of the proposed legislation is on the relationship between insurance companies and those they insure. Of course, every one of us is a potential customer of medical services. But the health care crisis is so much vaster than that one relationship.

Another crucial player is the physician, and in general the providers of medical services. This expands the network into a triangle. With the insurance company routinely managing all payments for services, the physician is stuck negotiating with the insurance company over which medical services are sufficiently cost-effective. This profoundly disempowers the patient.

Disempowerment of the patient - this may be at the core of the health care crisis. Of course, many of us find it convenient to disempower ourselves. It is easier to remain ignorant and to demand that somebody else take care of our problems, medical or otherwise. But an effective system will resist the temptation to prey on such weakness.

Behind the front line of physicians, nurses, etc., there is a whole army of suppliers, led by the pharmaceutical industry. It is far more profitable to sell cures to the sick than to provide the simple tools necessary to maintain health. Of course, preventative medicine is not a 100% solution. But one major symptom of the present crisis is that even the most basic foundations of diet and exercise are much too seldom maintained.

That a healthy lifestyle seems so difficult to achieve - this opens up the pattern to the full world of our experience. Our culture has come to honor self-indulgence rather than self-discipline. We are surrounded by images, distortions held up as ideals. These distorted images include criminals held up as heroes, or exaggerated body proportions as healthy, or lavish living as smart economics.

Somehow it seems that many of the struggles around these issues present an illusion of a zero-sum game with distinct players - insurance companies versus policy holders, etc. But in the end we are just one society - one planet. A healthy system requires harmonious relationships among healthy components. Mutual exploitation, us versus them, is the path spiraling to collapse. Can we pull ourselves out of that pattern?

Thursday, December 3, 2009

Process or Result?

I haven't read any of the climate research emails that were stolen and published, that reveal some of the ugly details at the sausage factory of science. I don't know whether the researchers were really massaging their data irresponsibly, or to what extent the emails represent good evidence of such goings-on.

As John Michael Greer discusses in his most recent post, the debate around the emails is serving to amplify the polarization of views about climate change. This polarization does not help us reach the best scientific or political decisions.

There is another polarization involved here beyond the issue of whether the weather is affected by fossil fuel combustion. Joel Salatin's Polyface Farms, discussed in Michael Pollan's Omnivore's Dilemma, comes to mind. Salatin advocates slaughter houses with glass walls. He goes so far as to invite his customers to slaughter their own chickens.

All too often, scientists want to present results to the public. It seems to enhance the prestige of the scientific community to keep the messy process of science hidden behind a curtain, and to present results as settled facts, laws, etc. But this wall between scientists and non-scientists is unstable and unhealthy. It is far too easy for a walled-off mess to fester. Eventually the general public will demand to know the source of the smell. Perhaps nothing is really rotten, but the public may well lose confidence either way.

The real answer may well be to invite the customer to slaughter a few of their own chickens, or at least let the wall of the slaughter house be transparent. This already happens quite wonderfully in astronomy - amateur astronomers regularly make important scientific contributions. The truth is, climate is one messy business. It might seem that the risks of climate change are so serious that climate scientists need to emphasize their certainty in their results, and to hide the inevitable uncertainties and caveats to do that. But the truth is, the risks are so serious that we cannot afford to have those uncertainties hidden.

Here's one idea. NASA has often invited the public to suggest experiments to be performed in outer space. Climate science often involves similarly expensive apparatus, e.g. supercomputers. Invite the public to suggest various experiments, then pick a few most promising suggestions for actual implementation. Invite the public to see and to participate in the messy process of doing climate science.

Monday, October 19, 2009

The Illusion of Peak Petroleum Production

The United States produces considerably less petroleum per year nowadays than it did back around 1970. Similarly, production is in decline in the North Sea oil fields of Great Britain and Norway. Still, though, global production is holding reasonably steady. What does the future hold? Can we expect new technology to continue to open up new resources - or perhaps it's just some yet hazily perceived geological process that will continue to bubble up new oil fields as fast as we can burn through them?

Speculation on this topic generally focuses on some supposed "peak" in production, a point where petroleum is getting harder to extract at a rate faster than our extraction technology is improving. The peak is the point where petroleum production reaches its maximum, after which production will forever be less than that peak. Knowing when that peak is to occur - or has it already occurred? - should help us plan ahead, or so one common brand of thinking goes.

It seems obvious from a mathematical perspective that petroleum production must have a peak at some point. If one accepts that the time interval over which production occurs is bounded both in the past and in the future, at the very least by the emergence of the planet earth from the debris left by the last local supernova and its disappearance in the thermonuclear fire of the next local supernova, and if we represent production as a series of numbers, perhaps the daily tallies, then some number from that finite set must be the largest. Perhaps that maximum occurs multiple times, but if the tallies are given with sufficient precision the likelihood of a repeated number diminishes to insignificance.

What we're interested in, though, is not the mere existence of that peak, but its timing. That, I want to argue here, is chimeric. At a very practical level, the time of the peak may very easily jump across decades, depending on accounting details buried deep in the footnotes. Once you start to play along, the game is so easy that you will surely be able to generate many more accounting tricks than I will suggest here and you will be able to cause the peak to jump across decades all by yourself. But let me make the first few moves, in case they're not immediately obvious.

First, the tally intervals need to be established. This involves setting both their duration and their exact start points. For example, a yearly tally might start on January 1 but perhaps some other date is preferable. Given that production is spread across the globe and undertaken by diverse organizations, the exact start and stop points of the intervals might vary by time zone or by fiscal years in use. The final tally for e.g. 2008 might be the sum of production by all the producing organizations, each in their individual fiscal 2008, many of which might be mostly in calendar year 2007.

In any year, the production rates day by day will surely not be constant. One year could have a total production less than some later year, but the first year might easily have a day whose production total exceeds that of any day in the later year.

The sequence of production tallies using one rule for intervals might have several similarly large numbers but only one true maximum. By changing the accounting rules, the numbers will all change a bit, easily by several percent. There won't be an exact correspondence between the two sequences of numbers, because they are defined over different intervals. But if there are two numbers very close in size but quite far apart in timing, a small adjustment can bring the smaller number up a few percent and the larger number down a few percent, so the maximum production tally can be shifted to a very different timing in the sequence.
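This bin-shifting game can be made concrete with a toy calculation. In the sketch below the daily production figures are entirely invented; the only point is that two tally rules differing just in their start date can assign the maximum to very different periods.

```python
# Sketch: how the timing of "peak production" can depend on tally intervals.
# The daily figures are synthetic, purely for illustration.

YEAR = 360  # a simplified 360-day year

def daily_production(day):
    """Flat baseline with two brief surges of similar size."""
    rate = 100.0
    if 330 <= day < 392:   # surge A, straddling the day-360 year boundary
        rate += 50.0
    if 800 <= day < 850:   # surge B, wholly inside the third year
        rate += 50.0
    return rate

def tally(start_offset, n_bins=3):
    """Sum daily production into year-long bins starting at start_offset."""
    return [sum(daily_production(d)
                for d in range(start_offset + i * YEAR,
                               start_offset + (i + 1) * YEAR))
            for i in range(n_bins)]

calendar = tally(0)      # bins begin on "January 1"
fiscal   = tally(180)    # bins begin half a year later

peak_calendar = calendar.index(max(calendar))
peak_fiscal   = fiscal.index(max(fiscal))
print(peak_calendar, peak_fiscal)  # the maximum lands in different bins
```

With the calendar rule, surge A is split across the year boundary and surge B's year wins; shift the bins by half a year and surge A is captured whole, so the peak jumps to a different period entirely.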

Petroleum production has ramped up to current levels over almost 200 years. It seems unlikely that production rates will double yet again from current levels. Nor does it seem likely that production will plummet in just a few years. We are almost certainly faced with decades of fluctuating production, a bumpy plateau, before a slow decline sets in. There isn't a clear line between the plateau and the decline - such a distinction can only be made precise by arbitrary choices of accounting rules and curve fitting parameters. The exact intervals over which production is tallied, that is a first such accounting rule by which such neat distinctions as peaks and plateaus can be manipulated.

Another crucial facet of production accounting is gross flow versus net flow. What is most interesting to society at large is the net output of useful petroleum products such as gasoline, kerosene, etc. that the petroleum industry makes available for end uses such as transportation, heating, etc. Gross production would be the total volume of material extracted from oil fields etc. There can be a lot of variation in the amount of end product made available from whatever fixed amount of gross production. Some crude petroleum may be lost in transportation - more will be lost, generally, from remote fields. Crude petroleum is a class of material which covers a lot of variation - not every barrel will yield the same amount of refined product.

Some really difficult accounting comes into play when considering internal use by the petroleum industry of refined product. However much gasoline etc. is used by the petroleum industry in exploring, extracting, transporting, and refining operations, that much gasoline was not provided for end use by society at large. But how can one really draw a line where the petroleum industry starts and stops? What about the refined product consumed by e.g. the steel industry in producing the tools and structures it provides to the petroleum industry? Should we deduct that petroleum to determine net petroleum production?

As petroleum gets more and more difficult to extract, the petroleum industry will consume more and more resources per barrel of oil. However the exact accounting is performed, a steady gross production will be yielding a declining net production. But the details of the accounting will generate significantly different sequences of net production tallies, which will likely give peaks at very different times.
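A toy arithmetic sketch, with invented numbers, shows how a gently rising gross series can hide a declining net series, putting the two "peaks" a decade apart:

```python
# Invented figures (million barrels/day) purely to illustrate the accounting point:
# gross production creeps up while the industry's own consumption rises faster,
# so net production declines even though gross never does.
years = list(range(2000, 2011))
gross = [85.0 + 0.3 * i for i in range(11)]     # gross still creeping upward
internal = [5.0 + 0.8 * i for i in range(11)]   # industry self-consumption, rising faster
net = [g - c for g, c in zip(gross, internal)]

peak_gross = years[gross.index(max(gross))]
peak_net = years[net.index(max(net))]
print(peak_gross, peak_net)  # the gross peak and the net peak are a decade apart
```

Whether one dates "the peak" to the gross maximum or the net maximum is exactly the kind of accounting choice that can move the answer by a decade.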

The exact timing of peak petroleum production is an artifact of arbitrary details of statistical analysis and accounting. The exact timing of the peak is useless for any kind of planning purpose. What is much more useful is a rough forecast of production, something like average production rate decade by decade for the next century. Today we are producing around 85 million barrels per day of crude. In the decade 2030-2040, is this likely to increase to 90 or 95 million or more, will it hold roughly steady, or might we see a decrease to 75 million barrels per day or even less? These are meaningful questions, however difficult to answer. Guessing the timing of peak production is a waste of time.



Wednesday, August 12, 2009

A Thousand Words


A picture might help clarify the idea about random walks in multidimensional spaces. Here A is some state of affairs in the past, and B is the present state of affairs. There is a region around B that represents the possible states of affairs some relatively short time in the future. In two (or more) dimensions, most of these next states are further from A than is B. If the direction of the next steps we actually take is purely random, we will most likely end up moving further away from A. I.e. a tendency to move further and further from the way things were in the past does not imply any consistent direction of movement.

Tuesday, August 11, 2009

Branching Possibilities

Science and technology seem to be the paradigms of progress. Whether other institutions will advance to ever greater levels of excellence and accomplishment, by whatever measure, seems open to doubt. But how can science and technology ever go backwards?

The trajectory we've traced through past time does have a directed one dimensional appearance. Surely there is just one set of events that actually happened in e.g. 1752 and some other actual events in 1753 and all the events of 1753 happened after those in 1752. Time advances forward inevitably, therefore progress is inevitable - temporal progress, at the least. The twentieth century did a good enough job teaching us that society in general needn't progress with time. But it also brought us from light bulbs to cell phones - and advances in the nineteenth century were similarly dramatic. How can the twenty-first not continue the trend?

This apparent trend is an illusion. Looking back from the great accomplishments of our day, one can find tiny seeds fifty and one hundred years ago from which they have grown. But many other seeds from those past times failed to grow. Many technologies that were thriving at that time have since disappeared. For the most part, these are technologies that would serve little purpose in our time, so their disappearance is small loss. But this kind of pattern starts to look more like a random walk, rather than movement in any consistent direction. One can look further and further back in time and see that we have moved ever greater distances from those earlier states of affairs. But... just because we are moving further and further from the past, doesn't imply that there is any correlation between the direction of the next step and the directions of any past step.

Consider the way distance works in a multidimensional space. Let A be one point, representing a past state of affairs. Let B be another point, the present state of affairs, some considerable distance from A. Now look at a small sphere or ball around B, representing possible states some relatively short time in the future. Most of that sphere or ball will be further from A than B is. The higher the dimensionality, the greater the fraction of the sphere that is further from A. That is, purely random movement will most often lead to states more and more different from past states. This appearance of a trend of increasing difference does not imply movement in any consistent direction.
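A quick Monte Carlo sketch of this geometric fact (the step size, distances, and trial count are arbitrary choices): place A at the origin and B at distance one, take a step from B in a random direction, and count how often the distance from A grows.

```python
# Monte Carlo check: a purely random step from B is more likely to
# increase the distance from A as the dimension grows.
import math
import random

def fraction_moving_away(dim, step=0.5, trials=20000, seed=1):
    """A sits at the origin, B at (1, 0, ..., 0); take random steps from B."""
    rng = random.Random(seed)
    moved_away = 0
    for _ in range(trials):
        # Random direction: a normalized Gaussian vector is uniform on the sphere.
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        # New point = B + step * (v / norm); first coordinate carries B's offset.
        new = [1 + step * v[0] / norm] + [step * x / norm for x in v[1:]]
        if math.sqrt(sum(x * x for x in new)) > 1:  # further from A than B was?
            moved_away += 1
    return moved_away / trials

print(fraction_moving_away(2))    # a bit over one half
print(fraction_moving_away(50))   # much closer to one
```

In two dimensions a random step moves away from A only somewhat more often than not; in fifty dimensions almost every random step does, because a random direction is nearly orthogonal to the line back toward A.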

Of course, movement in science and technology is hardly random. Nor is it predetermined and inevitable. How social institutions such as these steer themselves or are steered, somehow through the collective actions of large numbers of persons along with happy and sad accidents of equipment behavior etc., that is a huge tangle, surely beyond any ultimate resolution. But to realize that multiple paths are possible and that our actions are crucially consequential in determining which path we actually take, this is the central purpose of my exploration here.

The range of possibilities has a hierarchical structure. At a small scale, any individual researcher is confronted daily with choices about how to proceed - which tests to perform to diagnose some equipment malfunction, etc. At a larger scale, a research team needs to choose which research projects to pursue. At a very large scale, society needs to decide what kind of scientific and technological institutions to permit and support. In all these cases, the menu of options that presents itself is not likely to be the menu one would most like to see. There are practical steps that one can take at any given time, but they generally will bring surprising results - pleasant and unpleasant.

We seem to be caught in a trend of increasing financial constraints. Maybe this will turn itself around soon, but maybe not. What kind of science might work best on a tight budget? Maybe we need to constrain ourselves to funding the least risky projects? Or maybe this is exactly the time to be planting many small seeds. Risk evaluation is itself inherently risky! We may not want to run the risk of putting all our eggs in a basket that we think we know is safe!

Friday, July 31, 2009

Connected Crises

I'm not a fan of the automobile, but still I put on my share of miles, ferrying our teenager to the pool and back, buying groceries, participating in group Dharma events, etc. It's hard to find time to get out for a walk, with all the driving I do! But it really wouldn't be practical to walk to many of the places I go, or even to ride a bike. The pool is five miles away - sometimes we ride there, but there are some narrow busy twisty roads along the way, which are even more treacherous in the rain.

In my bachelor days I lived for years without a car, relying on walking, biking, and public transportation. But that was also in the Portland, Oregon area, where the roads and the rails and the weather make that easier than almost anywhere else in the USA. Now, trying to support the expanding horizons of a teenager etc., in a much less hospitable environment - certainly it would still be possible to live without a car, but at a much greater cost.

The automobile can stand as the common denominator between two of our major long term crises, health care and climate change.

The health care crisis is extremely complex, of course. It is clearly not sustainable to have health care costs as a growing fraction of the GDP. And the services provided don't seem to be optimally distributed - certainly many people are under served. It may seem harsh to suggest that some people might be over served, but it does seem that we need to look carefully at what we expect from the health care system. Old age, sickness, and death can be managed to some extent, but not utterly evaded. A desperate grasping at some ideal of physical health is not healthy at a more meaningful level.

Another difficult component of the health care crisis is preventative medicine, or really just healthy living practices - diet and exercise being the cornerstones. The simplest way to manage these is to incorporate healthy eating and movement as an integral dimension of one's life, rather than as some separate health care activities. A great way to get exercise is to walk to the grocery store and then carry one's groceries back home. But there needs to be a grocery store within walking distance!

One can always choose where to live with grocery store proximity as a highly ranked criterion. But there are many other important criteria, such as proximity of one's job and family, cost and availability of housing, etc.

In a way it seems so simple - if we just rearranged our culture so that all the facilities one needed in regular daily life were accessible by foot or bike, we could make huge dents in both the health care crisis and the climate crisis. But our culture is such a complex system, with so many interlocking components each holding the others in place, so that nothing can change very much without everything else changing too - the problem seems insurmountable.

Of course this is always the nature of things - liberation has always been right in the palm of our hand, yet how many of us manage to find our way clear of this vicious cycle of suffering and confusion. And yet, if we just tap our courage, keep our goal in mind and boldly take the steps at our feet, miracles do happen!


Monday, July 20, 2009

Harsh Tactics

Bureaucracy has been around since the rise of civilization, when agricultural surplus made cities and armies possible and necessary. A special feature of our modern industrial age is that fossil fuels and mechanization have moved the great majority of people off the farm and into cities. Now most of our lives are played out in the context of large organizations.

We all face difficult decisions that will carry significant consequences. How we make those decisions will have impact beyond the results of the actions involved in whichever alternative we choose. The decision process itself constitutes action. We are habit-driven organisms - any action creates a propensity for a future repetition of some similar action. Our ways of making decisions are also structured by habitual patterns, reinforced by our continuing decision processes. If we want the healthiest results of our actions, we need to practice making decisions in healthy ways, along with making healthy decisions.

Bryce Lefever has defended the use of harsh interrogation practices as part of the war on terrorism. His work as a military psychologist convinced him that torture can yield valuable information. To focus on the ethical questions, let's allow him this hypothesis for the moment. Is that sufficient reason to decide to go ahead and apply torture?

It does seem clear that sometimes the best or only way to prevent someone from doing great harm to themselves is to subject them to some lesser present harm. The notion that one should never subject anyone else to the slightest inconvenience - surely this is just too naive. But then we face a question of degree, as slight inconvenience turns to distinct unpleasantness and to serious harm. Whatever degree of harm is involved though, one could be rescuing a person from some yet greater future harm. Of course, present consequences are generally more certain than future consequences, so any kind of trade-off like this is a dangerous gamble.

The trade-offs get much more treacherous, though, when multiple people are involved. When ought it be preferred for one person to inconvenience or harm another in order to gain some greater benefit to a third party, or to many others? Perhaps such contemplated action is a response to some prior action, so some kind of justice may be looked for. It seems right that the burden of correcting some painful situation should fall principally on those who created that situation.

Lefever's ethical principle of "the most good for the most people" is clearly too simplistic. It doesn't distinguish between people based on their involvement in the historical background of whatever situation. And it doesn't give an effective rule for selecting one distribution of harm and benefit versus another. Perhaps the median benefit is implied? But this would permit unlimited harm to 49% of the population!

His further principle of "America is my client" is clearly shortsighted. The consequences of the kinds of actions he is discussing run over many generations. The great-grandchildren of an American may not be American, and the great-grandchildren of a non-American may well be American. The brothers and sisters of Americans may often be non-American. The world is much too fluid and interrelated to make such distinctions very useful, except in the most narrow short-term contexts.

While his ethical principles may be slippery, if we admit that sometimes torture provides valuable information, one can imagine situations where torture may be justifiable. But there is a more profound problem - what kind of procedure should be used to decide when to apply torture? Perhaps if some practically omniscient oracle could be consulted, an oracle with a vast historical perspective that could compute consequences and their probabilities, perhaps torture would occasionally make sense. But in a bureaucratic culture of blind routine... once a practice is admitted as legitimate, it will be applied indiscriminately.

As individuals, we need rules to live by - we need to cultivate habits that will, for the most part, guide us in a positive direction. Organizations are vastly more habit-bound than individuals. However difficult it is for an individual to develop clear awareness and discriminating wisdom, it is a hundred times more difficult for an organization. The rules that an organization lives by must be designed for blind routine application. History has taught us again and again that torture can all too easily become a blind routine, and it has never been worthwhile to fall into that pit. Leadership fails when it feels specially empowered to break the rules, and the community is thereby guided into despair. Military psychologists need to understand that they are teaching the enemy, too, how to behave.

Monday, July 13, 2009

Collapse of the Quantum Wave Function

It's a curious question, whether Buddhism is a religion or not, or whether it's a spiritual tradition. Religion and spirituality deal with God and the Soul. These are rather elusive in Buddhism. Buddhism does cultivate the realization of the ultimate nature of the universe and the self. But it's our habitual grasping at fixed concepts of these ultimates that is the root of our misery. Faith in Buddhism is the faith to let go.

Modern science is devoted to the quest for the ultimate nature of the universe, while it has discarded as meaningless the quest for the ultimate nature of the self. Given the overwhelmingly powerful role of science and technology in today's world, many who still find meaning in the self look for suitable roles for such an entity within the framework of science. The collapse of the wave function provides one popular such role.

While quantum mechanics has given scientists remarkably powerful methods to understand phenomena at the scale of molecules, atoms, and nuclei, it raises many difficult questions about the nature of the measurement process, and the nature of the phenomena being measured. When not being measured, physical systems seem to be able to explore many trajectories simultaneously through their spaces of possible configurations. But a measurement will trap the system in just one configuration, from which it will then evolve further after the measurement.

The fundamental paradox is that any measurement apparatus is itself just another physical system. The physical system being measured and the physical system performing the measurement, together form just a single larger physical system. Quantum mechanically, this larger system is just as capable of exploring multiple trajectories through its richer space of configurations. The collapse of the wave function, the pruning of possibilities down to a single actuality, is not apparently a consequence of the fundamental equations by which quantum mechanics describes the evolution of phenomena. The collapse seems to require some outside agent - which seems like a perfect role for a soul or spirit or self or consciousness.

Doubt and Certainty, by Tony Rothman and George Sudarshan, discusses various examples of the asymmetry of the direction of time - i.e. the world looks quite different when a movie is run backwards. One classic asymmetry is thermodynamic, e.g. heat flows from a hot cup of tea out into the surrounding room, until the tea and the room reach the same temperature. It never happens that a cool cup of tea in a room gradually absorbs heat from the room until the tea is much hotter than the room. Another key asymmetry is the collapse of the wave function. There is no way to unmeasure a system! Rothman and Sudarshan suggest that maybe these two asymmetries are really just one, that the collapse of the wave function is actually some kind of thermodynamic affair. The pruning of possibilities might be more like an entropic scrambling. It could be a bit like erasing a chalk board. The chalk that formed the letters is still there, but the letters disappear because the chalk is so uniformly spread around.

Where does this entropic scrambling happen? Could it be buried in some deep perceptual mechanism, some homuncular observer at the foundation of the mind? Christine Skarda has proposed an intriguing perspective on perception, where the function of perception is to chop up what is originally interconnected into discrete objects. She also proposes that much of the chopping happens right at the surface, at the sense organs themselves, rather than at any fundamental homuncular level.

The collapse of the wave function will generally occur well before even the sense organs get involved - at whatever point the isolated microsystem interacts with coarse macroscopic systems like cameras, transistors, etc. One can imagine an experimental arrangement though where that first thermodynamic interaction is with the retina of the observer's eye, with the sense organ.

Friday, July 10, 2009

Walter Willett

Hypoglycemia had its icy fingers around my neck, back when I was starting graduate school. I was trying to eat well, with a breakfast of oatmeal. But somehow by lunch I could barely crawl out to the row of lunch trucks along 33rd St. Desperate, I called my Mom, who recommended Let's Eat Right To Keep Fit by Adelle Davis. Thanks, Mom! I switched to wheat germ - now that's a substantial breakfast!

Some years later, my pleasant Saturday routine was breakfast in New Paltz followed by a hike along the Millbrook Ridge trail. That regular repetition provided a good laboratory, a way to isolate which causes led to which effects. A plain cheese omelet would have me rooting through my backpack for the rice cakes, out around mile four. A cheese omelet with broccoli and the rice cakes made it home untouched. But there's practically no nutrition in broccoli! Vitamins and minerals, sure, but that's not going to influence digestion or blood sugar, not over the course of an hour or two. Ah, fiber!

Another experiment - explore the oat spectrum, from quick oats to rolled oats, steel cut oats, whole oat groats. Groats take forever to cook, but I have never found a more solid nutritional foundation for a day. The key, at least for my system, is to eat food that digests slowly, so it keeps generating energy over hours instead of mere minutes. There are some people for whom the challenge seems to be to speed the process up so things don't sit for days. One size doesn't fit all!

My current diet bible is Walter Willett's Eat, Drink, and Be Healthy. He corrects two major over-simplifications to the standard food pyramid. Fats and oils need to be divided - trans fats should be avoided altogether, saturated fats limited, but mono- and poly-unsaturated fats can be eaten in practically unlimited quantity. Similarly, refined carbohydrates need to be limited, while whole grains are a good foundation for a diet. Willett mentions a spectrum of whole wheat flours, from finely ground to coarsely ground. This is like rolled oats versus steel cut oats. It's not just the balance of chemical constituents that matters, but the larger scale structures into which the molecules are organized.

What puts Willett's book into a class of its own is not its discussion of the properties of various foods, but its description of the scientific methods used to discover those properties. He describes laboratory studies with animal subjects and clinical trials with human subjects. But he also discusses the lifecycle of scientific knowledge, the unreliability of the latest research reports and how further study of a subject slowly sifts out the various erroneous hypotheses. It usually takes decades to work out the kinks. By the time a topic gets to the college textbooks, it's usually reliable enough for routine application - but then it is long out of the news. Nutritional science in the news and nutritional science to eat by are two very different beasts!

Thursday, July 9, 2009

The Paradox of Logic

Somebody has been launching cyber-attacks against government websites in South Korea and the United States, the newspapers report, and North Korea is the prime suspect. I can't think of any other nation versus nation cyber-attack in the past. We seem to have crossed a new frontier.

In the late 1980s, somebody in IBM sent out a clever email Christmas card. When the recipient opened the email, a little script inside the card would be activated. The script would read the recipient's list of contacts and forward the Christmas card, script and all, to all those contacts. It was only a matter of hours before the whole corporate network was flooded beyond capacity. Unintended consequences can be expensive!

Around that same time, I helped a young programmer get a phone call directly from the Rochester, Minnesota IBM site manager. The Rochester site had a large mainframe cluster that was used both for design tasks and to run a large factory. Many of the folks who pioneered the VHDL circuit design language were also from Rochester. I was part of the team in Poughkeepsie, NY, that was helping to develop a simulator for VHDL. It was a prototype simulator, where practically every line of code in the VHDL circuit design would get translated into its own little program to be compiled. We had the clever idea of farming the resulting thousands of compile jobs out to the Rochester cluster - after all, it was their language! We got a little help from a guy in San Jose who'd written a very powerful language that gave programmers control over all sorts of esoteric disk drive functions. We wanted all the thousand little separate simulation programs to get linked together when the last compile job finished. So we put a counter in a file on the cluster scratch disk. We needed to serialize access to the counter, so that each decrement would be an atomic transaction. The San Jose guy showed us how to lock the whole drive - fast and simple! Yup, that's right, we were locking the scratch disk on one of the biggest mainframe clusters in IBM, thousands of times, like an attack of a swarm of hornets. The site manager was not happy about that at all! We probably shut down the site for no more than half an hour or so. That was probably the most expensive bug in my career! Unintended consequences - at least they didn't fire me!
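The granularity mistake in that story can be sketched in modern terms. Here is a minimal, hypothetical Python illustration (the class and names are my own invention, not the original mainframe code): each finishing compile job must atomically decrement a shared counter, and the question is how much of the system to lock while doing it.

```python
import threading

class Cluster:
    """Toy stand-in for a shared scratch disk holding a job counter."""
    def __init__(self, jobs):
        self.counter = jobs
        self.disk_lock = threading.Lock()     # coarse: serializes ALL disk users
        self.counter_lock = threading.Lock()  # fine: serializes only the counter

    def job_done_coarse(self):
        # What we did: lock the whole scratch disk for one tiny update,
        # stalling every other user of the drive while we held it.
        with self.disk_lock:
            self.counter -= 1
            return self.counter

    def job_done_fine(self):
        # What we should have done: lock only the counter itself,
        # leaving the rest of the disk available to everyone else.
        with self.counter_lock:
            self.counter -= 1
            return self.counter

cluster = Cluster(jobs=1000)
while cluster.job_done_fine() > 0:
    pass  # the job that sees zero would trigger the final link step
print(cluster.counter)  # 0
```

Both versions make the decrement atomic; the difference is how much unrelated work each one blocks, which is exactly what made the site manager's phone ring.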

When I was in high school, around 1970, you could still see the idea around that somehow computers could be so smart that we could delegate crucial decisions to them, on questions like whether to launch a nuclear attack NOW! So many little bits of evidence need to be weighed to determine whether potential enemy nation X has already launched an attack on us - but we need to decide whether to launch a counterattack very promptly, before their missiles land and destroy our counterattack capability. It's hard to see how humans can be relied upon to make such decisions, but why can't the perfect reasoning power of a well oiled machine be employed?

This dream, of using reliable mechanical reasoning to circumvent the strife and corruption of human decision-making processes, goes back at least to Leibniz. Isn't it our deviation from pure logic that brings us all this misery?

The power and reliability of today's logical inference machines is stunning. Yet what we have created with that power is every bit as chaotic and corrupt as any other human institution. By now, computers have become so integrated into every facet of modern industrial living that human and machine chaos and corruption are seamlessly integrated. Somehow mechanical reasoning hasn't quite lived up to the dream of Leibniz.

Wednesday, July 8, 2009

Machine Intelligence

It's a nice puzzle to try to figure out whether dolphins or chimpanzees are actually intelligent or just clever enough to fake it. Isn't it the same thing, being intelligent or being clever enough to fake it? We might choose to use different terms for different species, like we say horses gallop but humans run, but if we want to dig below the surface to some internal reasoning mechanism - if there is a consistent pattern of logical consequences following the appearance of suitable premises... what else is required?

In the 1930s, people like Kurt Goedel, Alonzo Church, and Alan Turing had sketched out mechanisms that could automatically synthesize logical formulas that were implied by existing formulas. Such machinery is the ultimate fake - there is no mystery in it at all. Already in the 1930s these pioneers of mathematical logic had explored both the power of such automated inference mechanisms and discovered some limitations. But even human intelligence is evidently limited! Could a machine be clever enough to deserve to be called "intelligent?"

It could be a bit like asking whether synthetic diamonds are really diamonds - maybe they're just too perfect! We might want to distinguish between natural and artificial intelligence, but still accept that they share the essence of intelligence, at least if we can find a way to detect it.

Turing proposed his famous test to answer this question. Nowadays "teletype" conversation has become pervasive - text messaging by email, cell phone, or whatever other channel. What if one engaged in a text dialogue with someone, only to discover that they were in fact a machine? That's the Turing test - if a machine could carry on a text dialog in a fashion indistinguishable from a human, then the machine deserves to be called "intelligent".

One curious feature of our world today is that much of our text dialog is in fact carried on by machines - spambots, etc. The need for an effective way to distinguish between real people and machines has driven the construction of automated tests that require users to recognize squiggly letters. Turing's test involved a human judge, rather than an automated test.

But is simulated intelligence really the same as the genuine article? John Searle argued against this. The central processing unit, or CPU, inside a computer performs complex computations by reading a sequence of instructions from the computer's memory and executing the operations specified by those instructions, one by one. Each individual computation is very simple. The results of each operation become the inputs to other operations, and can also cause the reading of the instruction sequence to jump to a different place in memory - it's this pattern of interaction between operations that lets computers perform such sophisticated computations.

The CPU itself, though, is really blind to all that sophistication - it just keeps reading, interpreting, and performing simple operations one by one. A human being could just as easily perform those operations, albeit much more slowly. A human could simulate a computer! The simulated computer could be carrying on a text dialogue in the Chinese language, but the person who is simulating the CPU might well not know any Chinese at all. If a human can simulate intelligence with no comprehension at all of what is being simulated, Searle argues that simulated intelligence must therefore be different than the real thing.
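The fetch-interpret-perform cycle described above can be made concrete with a tiny sketch. This is a hypothetical instruction set of my own devising, not any real CPU: the interpreter loop blindly performs one simple operation at a time, with no grasp of the larger computation - which is exactly the position of Searle's human simulator.

```python
def run(memory, registers):
    """Blindly execute one simple instruction at a time."""
    pc = 0  # program counter
    while True:
        op, *args = memory[pc]   # fetch and decode the next instruction
        pc += 1
        if op == "load":         # put a constant into a register
            registers[args[0]] = args[1]
        elif op == "add":        # add one register into another
            registers[args[0]] += registers[args[1]]
        elif op == "jnz":        # jump if a register is nonzero
            if registers[args[0]] != 0:
                pc = args[1]
        elif op == "halt":
            return registers

# This program sums 1..5, but the interpreter has no idea that is
# what it is computing - it just performs one operation after another.
program = [
    ("load", "total", 0),
    ("load", "n", 5),
    ("add", "total", "n"),      # total += n
    ("load", "minus1", -1),
    ("add", "n", "minus1"),     # n -= 1
    ("jnz", "n", 2),            # loop back while n != 0
    ("halt",),
]
print(run(program, {})["total"])  # 15
```

A person could execute this table of rules by hand without ever knowing the program computes a sum - the sophistication lives in the program, not in the executor.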

Is Searle right? The question is in large part semantic. Useful distinctions make important differences. What difference might it make whether something is intelligent or not? Suppose someone destroys that possibly intelligent something - what sorts of legal penalties might be appropriate? Destruction of a mere machine, that would qualify as a misdemeanor like vandalism and call for a fine or maybe a night or two in jail. Destruction of an intelligent being, that would be murder and require severe penalties.

Suppose someone destroys the CPU in my computer. Generally a CPU in a computer can easily be replaced. These are generic components. Destruction of a CPU is not a serious matter.

Suppose instead someone destroys the hard disk in my computer! That's a whole different affair! With luck, maybe I have my most crucial data backed up to CD-ROM or some such media. Disks are so big, practically nobody can afford to maintain up-to-date full disk image back-ups. Loss of a hard disk is really a major catastrophe.

Whatever intelligence is in a computer is clearly not in the CPU. Searle's argument fails, because he is simulating the wrong component. He should be simulating the hard drive - as if such a thing were possible! It's a bit like saying there might be intelligence in a large library filled with rare and unique books. Not so strange, after all!

Tuesday, July 7, 2009

Stem Cell Research

Regulation of stem cell research is again in the news: http://www.nytimes.com/2009/07/07/science/07stem.html. The question of the day is how a stem cell line can properly be started if research that uses it is to be federally funded. The details, of who needs to sign what forms when, are mind numbing.
But the broader topic of stem cell research brings up a number of difficult issues.

Any sort of decision generally requires an evaluation of alternatives. Stem cell research brings up several dimensions of values, that seem to sort themselves into a hierarchy. At the top of the value hierarchy are ethical questions, of right and wrong. Scientific research should not involve criminal activity. But in drafting legislation and regulation the question becomes, which sorts of activities should be marked off as being criminal? Perhaps where these lines have been poorly drawn, there may in fact be a moral imperative to engage in illegal activities - or at least a moral imperative to avoid some activities that are wrong despite being legally permissible.

At the bottom of the value hierarchy lie issues of cost. All other things being equal, it makes good sense to pursue any activity, including scientific research, in a cost-effective fashion.

The value of knowledge gained, of new scientific laws discovered, of cures finally established for diseases etc., the value of such scientific progress seems to fall at an intermediate level, between moral imperatives and cost considerations.

Values don't really arrange themselves into such a neat hierarchy, though, at least not in any way obvious enough to insure consensus. In general it is wrong to shorten people's lives with untested medical procedures - but for every procedure, someone has to take the risk of being the first test subject. In general it is wrong to raise animals just to kill them, but if there is enough economic value or research value gained, killing animals is considered acceptable by many people and by U.S. law.

The debate over whether stem cells constitute human beings or not is not an isolated question, but deeply entangled with the debate over abortion. That's another aspect of the interdependency involved in such issues. Whether or not stem cells are human beings is already a complex question when limited to matters of petri dishes and microscopes. But a decision one way or another will enhance the political power of the winning party, and provide a precedent for decisions on related issues.

Monday, July 6, 2009

Bureaucracy

A dominant feature of modern life is that actions and consequences get channelled through extensive networks. It becomes very difficult to observe the effects of one's actions. For example, an engineer might be given the task of designing some small component of a complex system. That one engineer likely did not participate in defining the system objectives, or even the function to be performed by the component they're designing.

While most of the people in a large manufacturing enterprise may never see their products in use, they do get powerful signals back from the enterprise that steer their behavior - wages, raises, promotions, status, power. In a simple world, a farmer who wanted to drain a field might build a pump and observe directly how well the pump works. In our complex modern world, a pumping system is assembled from purchased manufactured components, each of which is in turn similarly assembled. The person repairing a valve at the chemical plant that produced the plastic from which the pump intake gasket was made - that person has no communication with the farmer. The repair person is likely working to meet corporate objectives for cost control, valve reliability, etc. The stock holders of the corporation have even less communication with the farmer and are just looking for a good return on their investment.

Science and Buddhism share the basic system control paradigm, where system parameters are tuned by observing the system outputs, so that the system can be steered to optimum behavior. The crucial issue is system scope. The modern bureaucratic system tends to narrow the scope of any individual's concern. The approach in Buddhism is to broaden one's scope. Tune your behavior not just to fatten your next pay check, not even just to make your family more comfortable next year, but to benefit all beings for countless generations into the future.
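The control paradigm itself is simple enough to sketch. Here is a minimal, purely illustrative Python example (the function names and numbers are my own assumptions, echoing the farmer's pump): observe the system's output, compare it to a target, and nudge the parameter accordingly.

```python
def tune(system, param, target, gain=0.1, steps=100):
    """Steer a parameter toward a target by observing the output."""
    for _ in range(steps):
        output = system(param)   # observe the system's output
        error = target - output  # compare against the goal
        param += gain * error    # nudge the parameter accordingly
    return param

# Toy "system": pumping output is proportional to the control setting.
pump_rate = lambda setting: 2.0 * setting

setting = tune(pump_rate, param=0.0, target=10.0)
print(round(pump_rate(setting), 2))  # 10.0 - the loop converges on the target
```

Everything interesting in the essay's argument is hidden in the choice of `system` and `target` - which outputs we bother to observe, and on whose behalf we define the goal.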

We can choose how we interpret our experience as a gradient, steering our behavior one way or another. In fact we are always choosing an interpretation, however unconsciously or driven by habits. The path to freedom is to bring those choices to awareness and courageously to risk interpretations that at least stretch the molds of habit and convention.


Sunday, July 5, 2009

Actions and Consequences

Sometimes it seems as though science and technology evolve autonomously, in a way that is disconnected from individual human actions. The purpose of this blog is to explore the ways our actions and their consequences really do make crucial differences, not just in our individual personal lives, but also in the broader patterns, networks, and institutions that constitute science and technology.

There are two extreme perspectives on science and human action, either of which seem to relieve us of responsibility for the results of our actions. One perspective, which we can call nihilism, portrays humans as complex biochemical systems, the evolutionary product of random genetic mutations and the struggle for survival and reproductive success. From this perspective, action and responsibility are illusory.

From the other extreme, the eternalist perspective, the structure of natural processes is an objective truth, which science and technology reflect ever more accurately. Individual actions may speed up or slow down the processes of discovery and exploitation, but progress is inevitable. Individual action is ultimately powerless.

The Dharma taught by the Buddha, and long cultivated by the Sangha, shows a middle way between these extremes of nihilism and eternalism. The foundation of the Buddhist path is the realization of our responsibility for the consequences of our actions.

The purpose of this blog is to look at science and technology as human activities, and to explore how the way we act shapes the world we experience. We can pursue a path into dark dungeons of misery and despair, or we can follow a way that opens up ever greater freedom and awareness. The goal here is to develop a Buddhist perspective on our modern institutions of science and technology.