Friday, December 16, 2011


However dazzling the visions science gives us, whether of cosmology, microbiology, or any other field, of at least equal importance is its practical value in helping us figure out how to live more comfortably and more successfully. Two questions then arise immediately. Is science tied to any particular measure of success? And, compared to what alternatives will science make us more successful?

For generations now our cultural trajectory has blinded us to these questions. Science has enabled progress: every generation has lived more successfully than the previous one. Our success is obvious: just look at our amazing capabilities to build machines, to steer the wealth of the world to our purposes. Slowly, though, dissatisfaction with this trajectory has been growing. Also growing has been the effect of this trajectory on the world, as our growing human population meets tightening resource constraints. Ideally our thinking could lead the way to more sustainable ways of living, but it looks more and more like resource constraints will be the driving force.

Will our belief resist this change in thinking, our belief that science will put more and more resources at our disposal with every generation? Or can science help open our eyes so that reality can inform our beliefs? We can use science to live more successfully than we will if we cling to our blindness, but that is only a possibility.

The relationship between two discussions on the web highlighted this challenge for me. Dmitri Orlov talked about the ability "to abandon who you have been and to change who you are in favor of what the moment demands," while a thread on Bike Forums puzzled over a fellow who lives on his bicycle. Is Fred a bicycle tourist or a homeless person with a bike?

Orlov talks about seasteading, living on a sailboat. (I gather he walks his talk!) In honor of Fred, I would like to introduce the term "bikesteading".

A bicycle is surely a pinnacle of scientific technology. Bikesteading might not be the perfection of sustainability, but it might be one excellent response to the reality we're facing in the coming decades.

Tuesday, December 13, 2011

Dazzling Visions

During the question and answer session of last weekend's teachings by Bardor Tulku Rinpoche at Kunzang Palchen Ling, a question came up about theories of physics presented in the Nova series Fabric of the Cosmos and how these theories relate to views expounded in Buddhism. I confess that I haven't watched the Nova episodes yet, but I expect that a general approach to such questions will likely apply. Modern science is filled with dazzling visions of the nature of reality, and certainly the Buddhadharma has similarly dazzling visions. So perhaps these might coincide somehow or perhaps some mutual adjustment is called for; it is surely a ripe field for exploration.

Buddhism and science are both magnificent traditions that present multiple visions of reality, so many visions that in either case one could never realistically expect to master but a small fraction of them. For example, in Buddhism we can see the world as composed of the six classes of beings from those trapped in hell to those soaring in the heavens, or we can see the world as composed of a network of experiential events which can be classed under five headings, from form to consciousness. In science, the notion of the selfish gene underlying Darwinian evolution gives us one grand vision of the unfolding of nature; another point of view comes from a cosmologist like Brian Greene, a view growing out of the shapes of space-time drawn by general relativity combined somehow with quantized interactions of elementary particle theory.

While Buddhism and science are similar in the way they cultivate such surfeits of dazzling visions, they differ in how they propose this surfeit should be understood. This question is not really a scientific one at all, but rather one of the philosophy of science. While science itself can boast of countless extraordinary accomplishments, the philosophical understanding of its practice has a more difficult time making similar boasts. As scientists work to harmonize their various visions, the philosophers of science seem to splinter into ever narrower factions. And this splintering has real consequences: for example, the way that biological evolution or anthropogenic climate change are sometimes dismissed as mere "theories". If there were some consensual notion of the relationship between scientific theories and scientific facts, perhaps such debates could progress toward reasonable conclusions.

But where philosophy of science is much the down-trodden Cinderella of the scientific world, the analysis of the status of the various views of Buddhism is rather the honored Princess. The various views are understood as being structured by their boundaries, by their limitations. One can move through a progression of views, each succeeding view providing a perspective from which to analyze the limits of the preceding view. But this series of perspectives is unbounded, or rather ends at a realization or wisdom that transcends any structure or view or perspective.

The value of the various views of Buddhism lies in their tendency to lead to this wisdom that transcends views. How do views support or permit or suggest this transcendence of themselves? Might scientific views, at least sometimes, also work in such ways?

One general answer follows a medical analogy. Progress along the Buddhist path is a matter of curing diseases, of eliminating confused habits. Views are medicinal: they can enable us to overcome our various patterns of self-imprisonment. But views themselves can have self-imprisoning side-effects. The best medicines let us simply let go of the treatment once their work is done. Lesser medicines may require further stages of subtler treatment to work through those side-effects. Of course the relationship to medicine is not merely one of analogy. A healthy body certainly can help provide one with better opportunities to progress along the path. At the very least, scientific views are compatible with Buddhism to the extent that they promote health and happiness.

Grand cosmological theories, such as those of Brian Greene, are rather far from application to practical human welfare. Perhaps one valuable use they might have is as a challenge: is every view really limited, or might some such grand cosmological theory really reflect the nature of reality in an unlimited way? Perhaps we need such fresh challenges if we are really to confront that question rather than treating it as an academic exercise.

Alternatively, we might observe that as scientific theories progress to encompass vaster ranges of phenomena, they bring ever more fundamental shifts in perspective. These theories drive us to let go of ever deeper assumptions. If we see that every assumption will similarly fall at one stage or another of such theoretical progress, perhaps we can infer whither this process leads, and simply let go utterly.

Wednesday, August 24, 2011

Complexity and Power

The world of electronic computing hasn't turned out much like originally envisioned. Thinkers like Leibniz and Hilbert realized that resolving deep questions requires subtle and precise logic. The human mind gets overwhelmed beyond a few layers of abstraction and too easily steps over some crucial gap, missing the details where whole legions of devils often hide. Machines can tirelessly apply inferential rules to logical expressions of daunting complexity. The surprise is that the movement this brings about tends to be horizontal rather than vertical. Indeed, still in the prehistory of computing, Gödel and Turing showed how mechanical application of rules cannot fathom the subtleties of the deepest truths. That lack of depth was the first surprise. But practical mechanical computing brought a second surprise. No one in the 1950s seems to have imagined that computers would become so universal that most folks in the developed world would routinely carry several around in various pockets, chatting with friends and checking on the stock market. Computing has become pervasive.

Parallel to these surprises, which concern the breadth of computing rather than its depth, has come another surprise: the overwhelming complexity of the software that governs computation. The insights of Gödel and Turing in the 1930s already showed the way: code and data are interchangeable. The distinction is one of perspective. Each layer of code interprets data at an outer shell of a system, and itself becomes data to be interpreted by code at the next inner shell of the system. The outer layers of the system merge into the vastness of modern life in the information age. Indeed, these layers evolve together; they drive each other's evolution. Computing pushes the complexity of life, and life in turn pushes the complexity of computing.
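The interchangeability of code and data can be made concrete with a small sketch. This is only an illustration of the general point, using Python's own built-in `eval` and `compile`; the layering here is hypothetical, not a claim about any particular system:

```python
# The same string is data to one layer and code to the next:
# Python source is just text until an interpreter treats it as code.
source = "sum(range(5))"       # data: a string we could store, copy, or edit

result = eval(source)          # code: the inner layer interprets the string
print(result)                  # 10

# The text can also be turned into another form of data, compiled
# bytecode, which yet another layer of the interpreter executes:
code_obj = compile(source, "<layer>", "eval")
print(eval(code_obj))          # 10
```

Whether a given sequence of bytes is "code" or "data" depends entirely on which shell of the system is looking at it.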

When Leibniz and Hilbert dreamt of computational solutions to profound problems, they dreamt of software with the elegant simplicity and obvious universality of Platonic ideals. The standard formula of Garbage In, Garbage Out is too kind to software. Once software gets tangled enough, it is capable of producing Garbage Out no matter the quality of the input. And the explosion in breadth of computing guarantees that software will be tangled. Indeed, a further surprise is how we are constantly discovering new species, indeed whole new phyla, of tangles: viruses, worms, phish, etc.

This exploding ecology includes more than the two niches of software and user. There is also the software developer. As software complexity has exploded, a dream has lingered that at least the software development process could be managed as a clean methodical process. But slowly the realization has emerged that software and its developer are tightly coupled and evolve together, along with the user community, or not at all. How to manage this co-evolution is not so clear. But it is far more like leading a living community of farmers and artisans and their families than like arranging a flow of subassemblies in a factory. A healthy community is full of surprises, while a well-run factory is devoid of them.

A common difficulty in any living community is where some small group gets into a reinforcing feedback loop which accelerates its growth at the expense of the rest of the system. Once this imbalance gets extreme enough, the feedback loop reaches its limits and the bubble collapses. This sort of dynamic happens in the world of software development. Software and its developers can push the complexity of the code and pull the user community along. At first the added complexity brings useful capability so the users incorporate the software more integrally into their business processes. The later levels of complexity serve more to bind the user community to the software ever more tightly, with less and less added utility. The developers can bleed the users to the point where a scary new start, and a shedding of the parasitic developers, happens despite all the efforts to maintain continuity. Often as not a new set of users springs up with the new developers.

In our information age, complex information processing technology is not developed solely by the software industry. Every institution involved in our lives of accelerating complexity is built around such technology. Medicine, automobiles and aerospace, agriculture, and finance all build devices and models of daunting complexity. Each of them is prone to the dynamic of exploiting users by binding them with chains of escalating complexity.

The recent film Inside Job poses the question: how is it that the very architects of the financial crisis of 2008 continue to hold positions of leadership in government and academia? Shouldn't the crisis they engineered disqualify them from those positions?

One good answer to this paradox may be the dynamic of binding users with escalating complexity. The financial system is vastly complex but also pervasively entangled with the rest of the economy. The developers of this system gain their awesome power through this entangled complexity.

Monday, April 25, 2011

Supporting Debate

Democracy, rule by the people, is a challenging goal. It's not practical above the village level to have an entire population gather to discuss a topic. Even at the village level… even at the individual level, gathering and analyzing evidence and tracing out the most likely consequences can push one's resources to the limits. Group dynamics tends to overwhelm any attempt at rational dispassionate consideration. Nowadays on the various internet discussion groups it has become all too obvious how quickly noise can swamp information.

Over the centuries, societies have worked out methods to filter out noise and bring information to the surface, to enable better decision making. Various forms of representative democracy and rules to structure debate yield decisions that surely remain imperfect but still in general vastly superior to mob or dictatorial rule. Representatives can be chosen by popular vote, as is usual for legislatures, or chosen more randomly, as with jury selection. I've read that some ancient Greek cities chose their legislators at random.

What interests me here are the new possibilities opened up by the internet. Certainly the internet already supports group decision making in many ways. Many retail sites give customers the space to offer and read ratings and opinions about products. Political discussion boards are of course filled with debates. The discussion behind Wikipedia articles is also fascinating: somehow that discussion needs to come to a conclusion, at least for the moment. I would like to sketch out here a rough idea for a system to support on-going debate on the internet.

The skeleton of the system is a set of propositions. These propositions should be statements about community affairs that might plausibly be true or false. The propositions themselves should be kept separate from the various arguments for or against them. However, a proposition might assert something about an argument or about a relationship between arguments, or about other propositions. A proposition should not, however, merely assert the truth of another proposition or the validity of some argument: such propositions would be redundant.

The flesh of the system is then the set of arguments. Each argument must support or oppose a proposition. Arguments can refer to various other propositions, some of which might be used as support, while others might be dismissed as invalid or irrelevant.

Users can then vote on propositions, supporting them or opposing them. These votes can be retracted or reversed at any time. Users can also maintain lists of the arguments for and against a proposition that they find most persuasive. A user can maintain such lists even without a current vote on the proposition.
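The core records of such a system, as described so far, could be sketched roughly as follows. This is a minimal illustration, not a specification; all of the class and field names are my own invention:

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    pid: int
    text: str  # a statement about community affairs, plausibly true or false

@dataclass
class Argument:
    aid: int
    proposition: int  # id of the proposition this argument addresses
    supports: bool    # True = argues for it, False = argues against it
    cites: list = field(default_factory=list)  # ids of propositions used as support

@dataclass
class User:
    name: str
    votes: dict = field(default_factory=dict)       # pid -> True (support) / False (oppose)
    persuasive: dict = field(default_factory=dict)  # pid -> list of argument ids

# Votes can be cast, reversed, or retracted at any time:
alice = User("alice")
alice.votes[1] = True        # support proposition 1
alice.votes[1] = False       # reverse the vote
del alice.votes[1]           # retract it entirely

# A user may track persuasive arguments even with no current vote:
alice.persuasive[1] = [10, 12]
```

Note that the propositions are kept separate from the arguments, as proposed above: an `Argument` points at a `Proposition` rather than containing it.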

Along with tallying votes, the system can track changes and inconsistencies in the relationship between propositions. For example, a popular argument in support of one proposition might rely on a second proposition as support. If the popularity of that second proposition declines significantly, then users could be alerted to review their support of the first proposition.

Similarly, if a widely accepted proposition asserts a logical relationship amongst the truth values of a set of other propositions, and the voting for those propositions is becoming less consistent with that logical relationship, then users could be alerted to that inconsistency.
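The first kind of alert described above, where a supporting argument rests on a proposition that has fallen out of favor, could be sketched like this. The data layout and threshold are illustrative assumptions:

```python
# Hypothetical vote tallies: proposition id -> net approval
# (supports minus opposes).
tally = {1: 40, 2: -5}

# Argument 10 supports proposition 1 but cites proposition 2 as support.
arguments = {10: {"proposition": 1, "supports": True, "cites": [2]}}

def review_alerts(tally, arguments, threshold=0):
    """Flag (proposition, argument, cited) triples where a supporting
    argument relies on a proposition whose net approval has dropped
    to the threshold or below."""
    alerts = []
    for aid, arg in arguments.items():
        if not arg["supports"]:
            continue
        for cited in arg["cites"]:
            if tally.get(cited, 0) <= threshold:
                alerts.append((arg["proposition"], aid, cited))
    return alerts

print(review_alerts(tally, arguments))  # [(1, 10, 2)]
```

Supporters of proposition 1 would then be invited to review their votes, since a pillar of a popular supporting argument has itself lost support. The second kind of alert, checking voting patterns against propositions that assert logical relationships, would follow the same pattern with a logical-consistency test in place of the threshold test.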

The system would also maintain relationships between users. Users might frequently be found on the same side of the same propositions, or on opposite sides. They might find similar arguments convincing, even if they ultimately reach different conclusions on the propositions themselves. These relationships between users can then be used to cluster users along various dimensions, using methods such as principal component analysis.

Once users have been classified, then the voting on various propositions can be analyzed on the basis of that classification. More primitive analysis is also possible; for example, users who vote for this proposition tend also to vote against that proposition, etc.
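A simple starting point for these relationships between users is pairwise agreement over shared votes. The sketch below computes a cosine-style similarity, +1 for users always on the same side and -1 for users always on opposite sides; a fuller implementation might assemble these scores into a matrix and apply principal component analysis to it. The vote data here is invented for illustration:

```python
import math

# Hypothetical votes: user -> {proposition id: +1 support, -1 oppose}
votes = {
    "alice": {1: +1, 2: -1, 3: +1},
    "bob":   {1: +1, 2: -1},
    "carol": {1: -1, 2: +1, 3: -1},
}

def similarity(u, v):
    """Agreement over the propositions both users voted on:
    +1.0 = always the same side, -1.0 = always opposite sides."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[p] * v[p] for p in shared)
    return dot / math.sqrt(len(shared) * len(shared))

print(similarity(votes["alice"], votes["bob"]))    # 1.0
print(similarity(votes["alice"], votes["carol"]))  # -1.0
```

The "more primitive analysis" mentioned below, such as noticing that supporters of one proposition tend to oppose another, amounts to running the same computation over columns (propositions) instead of rows (users).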

The group decision making mechanisms supported by the system could also be used to maintain the system. For example, redundant propositions could be combined, complex propositions could be split into simpler components, and frivolous or abusive propositions could be deleted. Each of these operations could be suggested as a proposition which then could be argued. Judgment would generally be required to move from voting results to action: for example, users with a long record of involvement should generally be given more consideration than a flood of new users who have yet to establish trustworthiness. The goal of this system is not to automate judgment, but to support it.

The idea here is to establish a system something like Wikipedia. Wikipedia attempts to present a consensus picture of the way things are. This system, in contrast, has the goal of presenting the best arguments for and against the various alternative descriptions of how things are. The goal here is not to settle debates, but to filter out the noise and to highlight the key issues.

Monday, April 11, 2011

Networks of Correspondence

My friend David sent me this link to a video on the Transition Town movement. How communities can prepare for a radical decline in energy availability is surely a question that deserves great attention. There are many types of communities, though: not all are constrained to a small geographical region.

Advances in communications technology have been at the core of the transformation of community during the industrial revolution, from the telegraph through radio and telephone to the internet. But long distance communication goes back to ancient times: the famous post office motto is a description of ancient Persian letter carriers by the ancient Greek, Herodotus. It is quite conceivable that our modern person-to-person media of telephones and internet could well collapse as the resources required to maintain their infrastructure become increasingly scarce. A worthy challenge and opportunity is to find a way to use our present infrastructure as a scaffold to reconstruct the more robust system where ideas are exchanged via the physical exchange of ink on paper, via the post office, via snail mail.

There are many reasons to exchange letters and many general types of relationships with the people one might write to. The most basic sort of exchange relationship is with people one knows face to face. Perhaps a family member has moved away, or perhaps one maintains communication with people one has met while traveling. At the other extreme, one might exchange letters with representatives of widely known institutions such as departments of the national government.

Between these extremes are networks of people with some common interest, e.g. scientific or artistic. Modern science was born with the rise of published printed journals that could broadcast ideas across large international communities. These journals grew out of networks of exchanged letters, which continued to thrive alongside and as a foundation for the printed journals up until the era of email. A beautiful vestige of this practice is the archive of Edsger Dijkstra.

Another recent technology that may well not long survive the resource peak is the photocopier. Nowadays there is a quite smooth spectrum of printing technologies, tailored for print runs of every size. Simple printing technology is good for large numbers of copies where the large set-up time can be effectively amortized, so that may well continue, as it has, for centuries. The real ferment of fresh thinking doesn't happen at that large scale, though. Vital culture can happen on a limited budget, but only with effective structures in place to make the best use of those resources.

Three practices necessary to an effective percolation of ideas through a network of correspondents are: a distributed set of address books, sufficiently cross-linked; a regular practice of letters being forwarded so that a single letter has multiple readers; and a regular practice of copying letters or extensive parts of them.

Keeping track of the locations of people has some challenges. People move from place to place at various time scales and it is not efficient to broadcast each move to every possible correspondent. There are also safety issues with broadcasting addresses too widely. To send a letter will generally involve several steps of forwarding, each step facilitated by a correspondent incrementally more intimate with the addressee. Some system is needed whereby copied and forwarded letters include enough network tracking data so responses can be sent back to the original author.
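One way to picture the routing data such a system needs: each forwarding step appends itself to the letter's record, so a reply can retrace the chain even though no single correspondent knows the original author's current address. This is a hypothetical sketch of the bookkeeping, not a proposal for any particular format:

```python
# A letter carries its own routing record as it is copied and forwarded.
letter = {"author": "original author", "body": "...", "route": []}

def forward(letter, via):
    """Record the correspondent who handled this hop of the journey."""
    letter["route"].append(via)
    return letter

# Each hop is handled by someone incrementally closer to the readers:
forward(letter, "town postmaster")
forward(letter, "cousin in the city")

# A reply travels the recorded route in reverse, each hop needing to
# know only its immediate neighbors in the chain:
reply_path = list(reversed(letter["route"]))
print(reply_path)  # ['cousin in the city', 'town postmaster']
```

The safety concern mentioned above is visible here: the route record reveals only intermediaries, not the author's own address, which stays known only to the most intimate hop.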

In the best of worlds, such a network of ideas being exchanged would already be up and running as internet and telephone systems crumble. Bits and pieces of such older systems still exist, e.g. telephone trees as an earlier form of an email distribution list. While our advanced technologies continue to function, these more primitive and resilient systems will have the form of a Creative Anachronism or some similar entertainment. But, like amateur radio or backyard gardening or bicycling, snail mail networks could very well take over as a core cultural practice, and on an almost unforeseeable timetable, as resource constraints make it difficult or impossible to recover from the various inevitable failures of the more advanced technologies. Resilient technologies have failures too, but recovery is less expensive.

Tuesday, March 15, 2011

The Human Factor

I was out on my bike yesterday, out to Boiceville and back, delivering a small package to the high school. On my way back, I was having trouble getting my right foot off my pedal. I use Time ATAC pedals which usually work perfectly. Most often my foot is off the pedal because I am on a hill too steep for me to climb without taking a break or two. So on one of my little breaks, I looked at the bottom of my shoe. These pedals couple to cleats that are bolted to the bottoms of my shoes. There should be two bolts on each shoe, but my right shoe only had one bolt remaining, and the cleat had rotated around that one bolt. I reoriented the cleat and tightened the remaining bolt and managed to get back home - with more trouble from the hills than the pedals! I stopped at my Local Bicycle Shop and sure enough they had a spare bolt of the right size and shape, so that got my shoes back in business. I still have a lot of work to do to get strong enough for our local ups and downs!

The effectiveness of my bicycle really depends on a network of spare parts, maintenance supplies, and people with the expertise to use them. This is true of most any technology. On a much grander scale, this dependence is being made clear in Japan, with the problems at the Fukushima nuclear plant. Indeed, it is an interdependence. The nuclear plant requires some power source to pump cooling water, while the plant is itself a major power source. The plant's functioning is tied to its environment in many ways. Its geological environment was the immediate source of the present catastrophe, so that relationship is all too clear. But the human environment, the social context, is perhaps the most crucial facet upon which a nuclear plant depends.

I must say, I am thinking of the current pared down staff of fifty at the Fukushima plant, and thinking of their families. This staff is putting their own lives at grave risk in order to prevent this crisis from further escalation. These people are true heroes. I am praying that they can maintain clear thinking under such extraordinary stress, and that they get the support they need to succeed in their mission, to cool those reactors down, and the used fuel. I pray also for their families, that they can soon be reunited with their loved ones, with all in good health.

In weighing our options for future use of nuclear power, we need to consider what sort of arrangement could provide the greatest safety, or at least understand clearly and weigh accurately the risks involved. Since the earthquake was the primary cause of the Fukushima catastrophe, it's easy to put as a top priority: don't put nuclear reactors on geological faults. But it is very dangerous to get too focused on the most recent failures of some technology, becoming blind to other key factors that simply haven't made themselves obvious so recently.

The human side of nuclear technology is an essential factor to consider when weighing the risks in such systems. How might the society using a reactor fail to manage that reactor safely? War or plague or famine could weaken the society so they just don't have the capability to maintain regular servicing or to respond to some minor emergency. Technology for manufacturing spare parts might have been commonplace when the reactor was built, but could become obsolete and therefore prohibitively expensive. Various types of financial and political collapse could eradicate the engineering and managerial expertise required for such a complex system.

Nuclear Guardianship recognizes that while we have a choice whether to build new reactors, we have already committed ourselves, for many generations to come, to maintain the nuclear materials we have already generated. How can we be sure that future generations will be able to manage the nuclear wastes we leave behind? Some of this waste will remain highly toxic for tens of thousands of years, i.e. longer than recorded human history. This is already a daunting task.

The most likely way that buried waste might resurface is through human intervention. People fail to do good not merely through incapacity. Violence has been part of the human condition as long as there have been humans - that is at least a plausible hypothesis. There is a lot of trouble one can create or threaten with nuclear materials. Part of the challenge with nuclear technology is how to make sure the material and the expertise don't get into the wrong hands, the hands of people that might misuse it. Of course, misuse is a curious concept. One might classify any military use as misuse. Or perhaps military use is proper use if that use is by friendly agents, and only constitutes misuse if it is by enemy agents. Who can actually decide whether and how to use nuclear technology in the future?

Geology is a difficult science - earthquakes and tsunamis are impossible to forecast with any precision. But human behavior is so much more complex and unpredictable. If we don't want to build reactors near geological fault lines, shouldn't we also avoid building them near human fault lines?

Wednesday, January 26, 2011

Designing Complex Systems

What kind of world and nation do we want? What are the realistic possibilities from which we are constrained to choose? Already these are enormously difficult questions. Perhaps foreign relations are the trickiest questions. Our present way of living is deeply intertwined with people all around the world. One simple measure of this is that we import about two trillion dollars of goods, while our total GDP is about fifteen trillion dollars. The more our vital interests are involved with other nations, the more we will find ourselves driven to protect those interests. This activity powerfully shapes our government. On the other hand, it's not as if utter isolation is a real option.

A similar spectrum appears at the national level. Perhaps the Tea Party is a revival of the Confederate ideas that sparked the nineteenth century Civil War. Our public and private institutions are so international that the boundaries between the fifty states are almost invisible. To what extent institutions could be reconnected to smaller regions, to states or even to cities... it is not clear how this could be done or what the consequences would be. The economic principle of comparative advantage is one demonstration of the more general maxim, "United we stand, divided we fall." Yet surely that maxim must have its limits.

What we have now is a way of living that is dominated by global institutions, public and private. It surely makes good sense to avoid concentration of power in any institution, but the protection of our vital interests and the regulation of interactions among such huge players surely requires government at a similar scale. In a different world a small scale government might be effective. In today's world, the only possible government is a large and complex government. The challenge is to find a way to make such a government efficient and effective.

In this project, we don't have to start with a blank sheet of paper. There are many global scale institutions and other systems of great complexity. The pragmatic approach is to study what works elsewhere and what doesn't work.

There are two aspects to building complex systems. The system itself will have some structure, e.g. incorporating hierarchy and repetition. The other aspect is the process of design. Just as there is no single system structure that can serve all purposes, the design process needs to fit the specific problem at hand. Still, there are some generally useful design methods that can be combined and adapted to fit most any problem.

Complex system design generally takes place in a series of phases. Design can be thought of as a series of decisions. The idea behind the phases is to order the decisions as well as possible. The main principle is that a decision is best made when the consequences of the alternatives are as clearly known as possible. For a trivial example, the size and shape of a container is generally best decided upon after the sizes and shapes of the contents are already known; otherwise the contents might well not fit in the container, or it could be unnecessarily large. The usual first phase of designing a complex system is determining what it will be required to do and the constraints imposed on it by external factors.

Another methodological principle is to identify and resolve risks early. The failure of a crucial component can prevent the system from performing its intended function. The less experience designers have in working with such components, the higher the risk of a design error. In such cases it may be worthwhile to build prototypes as a way to gain experience before scaling up to the full system.

Complex systems are generally designed by large teams with diverse expertise. Transparency and accountability are necessary in the design process in order for this diversity to remain a strength. For the designed system to cohere, effective communication must be maintained between the various design teams. Hidden decisions become hidden problems that can trigger catastrophic failures.

Any complex system will involve risk and novelty: each unique arrangement of components creates fresh opportunities for unforeseen interactions. To work effectively, a complex system must incorporate ways for its design to be changed as problems are detected, or as system requirements shift. The processes of design, implementation, and deployment cannot be kept completely distinct.

Separation of concerns is a principle that can apply both to system structure and to the design process. If every component of the system is tightly coupled to every other component, the system will be extremely brittle. Most likely some interactions will not be handled correctly and the system won't even work from the start. In any case, any small change in requirements will lead to updates rippling through the whole system, at prohibitive expense. The notion with separation of concerns is to reduce the degree to which system components constrain each other, to reduce the level of coupling between them. The detailed operation of each component can be largely independent of the details of other components, as long as a few basic requirements are met. Changes in the design details of one component can have very little or no impact on the requirements of other components. This lets design teams work more independently with many fewer iterations of rippling changes, and then during implementation and deployment problems can also be identified and fixed with minimal impact.
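In software, this principle is often realized by coupling components only through a small, stable interface. The following sketch illustrates the idea in Python; the names and the scenario are invented for illustration:

```python
from typing import Protocol

class Storage(Protocol):
    """The narrow interface: the only thing components agree on."""
    def save(self, key: str, value: str) -> None: ...
    def load(self, key: str) -> str: ...

class MemoryStorage:
    """One implementation; its internals can change freely
    without affecting any code that depends only on Storage."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def record_report(store: Storage, name: str, text: str) -> None:
    # This component works with any Storage. Swapping MemoryStorage
    # for a file- or network-backed variant does not ripple out here,
    # as long as the few basic interface requirements are met.
    store.save(name, text)

store = MemoryStorage()
record_report(store, "q1", "all systems nominal")
print(store.load("q1"))  # all systems nominal
```

The point is exactly the one made above: `record_report` constrains `MemoryStorage` only through two method signatures, so each can be redesigned with little or no impact on the other.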

I began thinking about government as a complex system when I started hearing about the problems with earmarks in legislation. It seems many such clauses are introduced into bills at the last minute. Legislators vote on bills whose contents they cannot effectively track. This is very much like computer software when configuration management tools are not used. Last minute changes are notoriously buggy but developers always want to get the latest fixes and features into the next product release. The software industry has worked out some effective methods to manage this problem. Perhaps the government can learn something!

Tuesday, January 11, 2011

Solar Fruit Dryer

The current process by which food is provided to most folks in the United States is one that uses large amounts of petroleum, from the farm through the distribution network to the home. As petroleum and other sources of energy get more scarce, we will need to find new ways - and return to old ways - that are less energy-intensive. These changes will be required through all the stages of growing, preserving, and distributing food.

Here is a simple design for a solar fruit dryer. The lower part is a trapezoidal solar collector. The top surface of the collector is clear glass. This covers a space for air to flow, in from the bottom, up through the collector as it is heated, then into the drying chamber above the collector. The bottom surface of this air space is a metal surface painted black, to absorb sunlight and turn it into heat. Below the metal surface is a layer of insulation, so the heat from the metal surface goes into the flowing air above it rather than the outside air below it.

The heated air rises and flows from the solar collector into the drying chamber. This air passes over several trays of drying fruit. The air is driest as it enters the chamber, so the first tray it encounters is the one with the driest fruit, i.e. the fruit closest to the end of the drying process. As the air rises, it passes over successively less dry fruit. Finally the air rises out through the top of the drying chamber and into the chimney.

The chimney is simply a vertical tube that helps the rising air accumulate lifting power, maintaining a steady movement of air in at the bottom of the solar collector, through the dryer, and out at the top of the chimney.

The drying chamber has a door allowing new trays of fresh fruit to be added at the top and trays of dried fruit to be removed from the bottom. As trays are removed from the bottom, the remaining trays should be shuffled down, creating space at the top for the fresh trays to be added.

The next stage of development of this idea will be to tune the relative sizes of the components, to allow maximum throughput of fruit with the least expense. A moderately sized unit ought to be inexpensive enough to build that this tuning can effectively be done experimentally.

The grand vision is that the dried fruit can be taken by bicycle to local farmstands to be sold, and then fresh fruit to be dried can be brought from the farmstand on the return bike trip. Mix in canoes as needed!

Sunday, January 2, 2011

Twisted Arpeggios

I took some piano lessons in 7th grade and a few guitar lessons in 8th grade, but never attained any real competence in performance. I still have no understanding of music theory at all: counterpoint, harmony, etc. It was in sophomore year of college, when I learned about the overtone series in a physics class, that my fascination with music opened into a deeper dimension. I got out my old guitar - I think I spent a year just tuning that guitar. I also started exploring the mathematics of tuning.

In the spring of junior year I wrote a computer program to search for scales made of equal sized steps. The conventional musical scale has equal steps whose size is a twelfth of an octave. Perhaps some other step size could give a better scale. What would be a good measure for the quality of a scale, or of the step size that generates that scale? With the overtone sequence as the foundation of musical intervals, the answer must be that the scale contains pitches that are close approximations to the just-tuned intervals. In my program I just used the overtone series itself: the scale should contain pitches that closely approximate the pitches in the overtone series. Clearly the lower overtones are more important. So I formed a weighted sum of differences. Each difference was the gap between an overtone pitch and the closest pitch in the scale. The weights decreased as one ascended in the overtone series, in such a way that the sum would converge as one extended the sum to include all of the infinite pitches of the overtone series.

Any fixed step size will generate a scale for which this quality metric can be computed. In my program, I computed this metric for a range of step sizes, e.g. 1.050, 1.051, 1.052,… 1.067, 1.068, 1.069, 1.070. I checked the quality metric for each step size against those of the neighboring step sizes. At the optimum step size, the quality metric will be better than those for both slightly smaller and slightly larger step sizes. I expected to see just one or two step sizes singled out as those with the best quality metric, i.e. those that would generate the best musical scales.
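The original punched-card program is long gone, so here is a modern sketch of the same search. The particular weights (1/n²), the overtone cutoff, and the scan range are my guesses, not the 1976 parameters:

```python
import math

def scale_badness(step_ratio, n_overtones=50, n_steps=100):
    """Weighted sum of gaps between overtone pitches and the nearest
    scale pitch (smaller is better).  Pitches are measured in octaves,
    i.e. log base 2 of the frequency ratio."""
    step = math.log2(step_ratio)
    scale = [k * step for k in range(n_steps)]
    total = 0.0
    for n in range(2, n_overtones + 1):
        overtone = math.log2(n)               # n-th overtone, octaves above the fundamental
        gap = min(abs(overtone - p) for p in scale)
        total += gap / n**2                   # decreasing weights keep the full sum convergent
    return total

# Scan a range of candidate step sizes and flag the local optima,
# i.e. step sizes better than both of their neighbors.
candidates = [1.050 + 0.001 * i for i in range(21)]   # 1.050 .. 1.070
badness = [scale_badness(r) for r in candidates]
optima = [candidates[i] for i in range(1, len(candidates) - 1)
          if badness[i] < badness[i - 1] and badness[i] < badness[i + 1]]
print(len(optima), "local optima among", len(candidates), "step sizes")
```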

This was 1976, so my program was punched into cards and read into the job queue of the campus mainframe. Half an hour or so later the resulting print-out was tossed into my slot. I was totally surprised - practically a third of the step sizes were the best in their little region. A graph of quality metric against step size would not show any kind of smooth approach to a few optima, but rather be very bumpy. I decreased the scan increment, so now I would explore step sizes like 1.0500, 1.0501, 1.0502, etc. Another half hour wait in the ready room for my result print-out to get dropped in my slot, and an even bigger surprise - zooming in to finer detail revealed even more local optima! This was one very bumpy curve!

I had the idea of going over to the math department to see if someone could explain to me what was going on. Such a bumpy curve didn't fit the picture I had in my head from basic calculus! But the school year was over. The summer of 1976 took me out to Grand Junction, Colorado, where I was working on historical uranium data with a couple of geology professors, Ken Deffeyes and Ian MacGregor. For a little summer reading I brought along Functional Analysis by Riesz and Sz.-Nagy. Amazing! On page 3, there was my musical quality metric, or at least an infinite sum very much like it - given as a classical example of a function that is continuous everywhere but differentiable nowhere. A very bumpy curve!

It's nice to stumble onto classical results but unfortunately that put an end to strategy number one for discovering new and improved musical scales. Basic calculus can't analyze such functions! Strategy number two then emerged, probably still in undergraduate years. Another way to understand how a good musical scale arises is by looking at commas, rational fractions very close to 1 that are constructed by multiplying and dividing small primes.

One fundamental comma, the syntonic comma 81/80, comes up in tuning a guitar. A guitar can be tuned using harmonics, plucking the strings while touching their nodal points which causes the overtones to sound clearly. Playing the fourth harmonic of the low E string along with the third harmonic of the next A string, one can adjust the string tension to bring these to the same pitch, so the E and A strings will be separated by a just tuned interval of a fourth, i.e. a frequency ratio of 4/3. Of the five intervals separating the six strings of a guitar, four are fourths. The fifth interval, from G to B, is a major third. Here the fourth and fifth harmonics can be used to bring those strings into the corresponding just tuned frequency ratio of 5/4.

This approach to tuning a guitar doesn't quite work, though. There should be two octaves from the low E string to the high E string. But when the five just tuned intervals are stacked, the four fourths and the one major third, the top of the stack doesn't end up quite two octaves above the bottom of the stack. Numerically, (4/3)(4/3)(4/3)(5/4)(4/3) = 320/81, where two octaves should be 324/81. This gap, 324/320 = 81/80, the syntonic comma, is one of the most basic differences between just tuning and equal temperament. Is just tuning right and equal temperament wrong, or what difference does this difference really make? That is a nice question to explore!
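The arithmetic above is easy to check with exact fractions:

```python
from fractions import Fraction

# Stack the five just intervals between a guitar's strings:
# four fourths (4/3) and one major third (5/4).
stack = Fraction(4, 3) ** 4 * Fraction(5, 4)
two_octaves = Fraction(4, 1)

print(stack)                 # → 320/81
print(two_octaves / stack)   # → 81/80, the syntonic comma
```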

Strategy number two involved looking for commas. I wrote a program to explore numbers whose factors were all small primes - as I recall, I limited the primes to 2, 3, 5, and 7. My program constructed these composite numbers in ascending order, and looked for pairs that were especially close. Perhaps those results from 30 years ago are still saved in some box of papers in my storage unit, but the details are long gone from my memory. The one result that I remember is that I became fascinated by a scale with equal steps that divided the interval from the fundamental to the tenth harmonic into 90 equal parts. Curiously, this scale would not have exact octaves. Still, it contained enough good approximations to important just intervals that it should support music.
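A reconstruction of that comma search might look like this. The heap-based merge and the closeness threshold are my choices; the original program's details are lost:

```python
import heapq

def smooth_numbers(primes=(2, 3, 5, 7), limit=10**6):
    """Yield numbers whose prime factors all lie in `primes`, in ascending order."""
    heap, seen = [1], {1}
    while heap:
        n = heapq.heappop(heap)
        yield n
        for p in primes:
            if n * p <= limit and n * p not in seen:
                seen.add(n * p)
                heapq.heappush(heap, n * p)

# Adjacent smooth numbers whose ratio is very close to 1 are commas.
commas = []
prev = None
for n in smooth_numbers():
    if prev and n / prev < 1.013:   # loose enough to catch the syntonic comma 81/80
        commas.append((n, prev))
    prev = n
print(commas[:5])
```

Among the pairs this finds are (81, 80), the syntonic comma, and (15625, 15552), the kleisma that shows up below.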

Inventing musical scales is not much fun until two further problems are solved: one needs a way to play notes from any proposed scale, and one needs a way to arrange those notes into musical patterns. So my next idea was a way to lay out keys on some kind of keyboard instrument to allow performance of music that uses small step sizes. If the keys of such an instrument are laid out in a line in ascending order, there can easily be so many steps in an octave that either the keys will be too narrow to play or the octave will be too wide to reach. So I came up with a two dimensional arrangement, with hexagonal keys covering a flat surface. The pitch of the keys increases to the north and decreases to the south. To the east and west are arrayed keys that have very similar pitch. The fundamental idea is that a very narrow interval is actually quite distant harmonically. The keyboard I proposed arranges the keys according to harmonic relationships. Keys very close in pitch can be quite far apart on the keyboard.

The next problem is then how to compose music in one of these proposed scales. The easiest approach to start with is just to transpose or translate existing music into the new scale. If the new scale is based on the same fundamental intervals, such as perfect fifths and major thirds, as the old music, then the transposition should be simple enough… or so I thought! I took a couple of Bach's Harmonized Chorales and attempted the transposition. For any given note in Bach's score, I needed to select which of several closely spaced microtonal notes should be played. With the harmonic relationships available on my keyboard design, the range of choices was essentially a spectrum of notes separated by syntonic commas. I decided simply to use the color spectrum to add this detail to Bach's score. All notes of a given color would be related by Pythagorean intervals, ratios built from the primes 2 and 3. Each factor of 5 in an interval would shift one color in the spectrum, from blue to green etc. The interval of a syntonic comma is denoted by a simple shift in color with no change of position of the note on the staff. The transposition of Bach's score was simply a matter of copying the score and adding the appropriate colors.

I was amazed, not just because I found the task to be impossible, but also because the attempt seemed to reveal the structure of the music. Right up to near the end, I found it easy to see which colors to use for the notes. But right near the end, I found conflicting harmonic relationships. It wasn't so much that I couldn't decide which was the right color, but more that the notes seemed to demand two colors at once. This conflict or ambiguity seemed to create a harmonic tension that was fundamental to the structure of Bach's music. Sure enough, the final notes of the compositions were again easy to color - the tension was resolved. That was my music lesson for the year!

About a decade after this, in 1991, I had the good fortune to be able to spend some time working at MIT as a visiting scientist - which also got me a library card! I found in MIT's music library a copy of Easley Blackwood's The Structure of Recognizable Diatonic Tunings. I was amazed to see in this book an exhaustive demonstration of the lesson I had learned from Bach. Blackwood attempts to transpose into just tuning a whole series of classical compositions from Bach to Mahler. He didn't use colors but simply little "+" marks to pick notes out of that spectrum of alternatives separated by syntonic commas. Blackwood shows that this transposition simply doesn't work. Music in this classical Western tradition creates a sense of tension and movement using the ambiguity introduced by temperament. The ambiguity cannot be removed without destroying the music.

I figured the book must have been the PhD dissertation of some obscure graduate student who then disappeared back into the woodwork. Another surprise! In the late 1990s I was in a CD store, probably looking for Beethoven - but right alongside, a section for Easley Blackwood's music! He is a distinguished composer as well as a theorist. His book on tunings marked a shift in his career, closing a phase of experimentation with tunings and returning to music very much in the classical tradition.

There are surely many different approaches to the construction of musical scales, and no doubt many of them can provide foundations for music making. If one wants to play in the space created by the ambiguities of temperament, the possibilities of alternative scales do not disappear. Thinking of ambiguity as a resource does, though, shift how one evaluates a scale.

My fascination for the last decade or so has been with the scale that divides an octave into 53 equal intervals. I've already outlined the logic behind 53, so I won't repeat that discussion here. This tuning gives very good approximations to just tuned thirds and fifths. But what if we want to play the ambiguity game? What new opportunities does this tuning open up?

The syntonic comma is the fundamental interval tempered by the conventional 12 step scale. Each step of the 53 step scale is just about the size of a syntonic comma, i.e. the 53 step scale accurately expresses exactly what the conventional 12 step scale blurs. The music of the 53 step tuning will be something quite different.
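The near-coincidence is easy to quantify in cents (hundredths of a conventional semitone):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents: 1200 cents per octave."""
    return 1200 * math.log2(ratio)

print(f"one step of 53-EDO:   {1200 / 53:.2f} cents")       # → 22.64
print(f"syntonic comma 81/80: {cents(81 / 80):.2f} cents")  # → 21.51
```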

What is a comma that the 53 step scale blurs? The kleisma, 15625:15552, is one fundamental such comma. A minor third is 14 steps in this tuning, while a perfect fifth is 31 steps. If six minor thirds are stacked, that makes 84 steps - exactly an octave above a perfect fifth. Curiously, this comma is not tempered in the conventional 12 step scale, where a minor third is 3 steps and a perfect fifth is 7 steps. The stack of six minor thirds is thus 18 conventional steps, one step short of an octave above a perfect fifth. This again shows that music in the 53 step tuning will be quite different than that of the conventional 12 step tuning.
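Again the arithmetic can be checked exactly:

```python
from fractions import Fraction

stack = Fraction(6, 5) ** 6    # six just minor thirds
target = Fraction(3, 1)        # an octave above a perfect fifth: 2 * (3/2)

print(target / stack)          # → 15625/15552, the kleisma
print(6 * 14 == 53 + 31)       # in 53-EDO the stack closes exactly: True
print(6 * 3 == 12 + 7)         # in 12-EDO it falls one step short: False
```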

So my project has become a kind of parallel construction: to build up compositional methods for the 53 step tuning that mimic those in the classical western tradition using the 12 step tuning. Another fundamental compositional structure in classical western music is the key signature. For the most part, a segment of music will not use all twelve notes of the full scale, but instead be centered on a subset of seven. There are different ways to think about the way these seven are selected: perhaps there is a starting note and then a stack of six perfect fifths. Or the seven can be seen as three major triads stacked. These two ways to see the structure of the key signature differ by just the syntonic comma that is blurred by the temperament. If we see the white notes of the piano as a stack of perfect fifths F-C-G-D-A-E-B, then the A is four perfect fifths from the F. Alternatively, the white notes could be the three major triads FAC-CEG-GBD, in which case the A is a major third from the F. The just tuned difference between a major third and a stack of four perfect fifths is just the syntonic comma.

Can I construct some subset of the 53 steps of the octave, with a structure based on the kleisma that parallels that of the role of the syntonic comma in the conventional 12 step scale? Here is one attempt. The syntonic comma is four perfect fifths next to a single major third. The kleisma is six minor thirds next to a perfect fourth. Why not construct a key signature based on minor thirds? Take a stack of twelve notes, with a minor third between each neighboring pair. Then each note in the bottom six can be paired with a note in the top six, with the interval of a tempered fourth between each pair. This way, the roles of fifths and thirds are just reversed relative to the conventional seven-out-of-twelve key signature. Can a person make music out of this structure?
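Here is a quick check of that construction, treating pitches as step numbers mod 53:

```python
# 53-EDO intervals in steps: minor third = 14, perfect fifth = 31,
# perfect fourth = 53 - 31 = 22 (the octave inversion of the fifth).
stack = [(k * 14) % 53 for k in range(12)]   # twelve notes a minor third apart
print(stack)   # → [0, 14, 28, 42, 3, 17, 31, 45, 6, 20, 34, 48]

for low, high in zip(stack[:6], stack[6:]):
    # Each bottom-six note sits 31 steps below its top-six partner -
    # a tempered fifth up, or equivalently a 22-step fourth down.
    assert (high - low) % 53 == 31
```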

As a first step, here is a sequence of arpeggios:

This parallels a conventional sequence like CEG DFA EGB FAC GBD ACE BDF CEG. In the conventional sequence, almost every triad includes a perfect fifth. Only the triad based on B has a flat fifth, because B is at the end of the stack of fifths. In a twisted sort of parallel way, almost every triad in this new sequence includes a minor third. There is just one triad that has a flatter third. I confess, I am just feeling around in the dark here for what might work!

In the conventional sequence, roughly half the triads have a major third from the root while the other half have minor thirds. In a twisted parallel way, this new sequence has about half the triads with fifths from the root and half with fourths.

Does this subset support music? That's not an easy question! Even more importantly, does it support making music that has something to say that couldn't be said in the conventional twelve step scale? That's an even harder question! My main hope is that I can help open up some possibilities enough that others with more compositional talent can explore more deeply.

Here, though, is my first attempt to create some music using this structure:

I should note here that very likely none of these ideas were first proposed by me. The scale with 53 steps per octave has a very long history - centuries and likely millennia. Decades ago Shohé Tanaka named the kleisma and noted its importance in the 53 step scale. Fewer decades ago Larry Hanson selected a subset of the 53 notes that was very similar to the set of twelve that I proposed. See:

I learned a lot of this history from the microtuning group on yahoo: