Tuesday, October 28, 2008

Brit Hits the Big Time

It is good to know I can tell my kids I knew him back when.

Monday, October 27, 2008

We Won't Get There from Here

AOG and I have been debating the feasibility of space travel, first to the stars, most recently closer to home.

Other than the "Mt Everest" on-account-of-it's-there argument, making the effort requires a payoff exceeding the opportunity costs. In other words, there must be resources or processes that are less expensively obtained in space than here on earth. The most obvious, if not the only, example is orbital power stations transmitting solar energy as microwaves to Earth, then converting the microwave energy into electrical energy.

The pro-arguments are powerful: sunlight 24/7, essentially no environmental impact, and no fuel costs once installed.

The counter-arguments focus largely on lift costs to geo-synchronous orbit. Absent assumptions so optimistic as to make Pollyanna blush, moving sufficient mass for an orbital power station will be many times the cost of the station itself.

However, granting Pollyanna her due requires making assumptions about what is technologically possible. In fairness, expanding technological possibility must also be granted to life here on Earth:
SOLAR power should be a cheap and simple way of making electricity, but like any technology the practicalities tend to get in the way. Even if the sun does come out the panels may not face in the right direction. Then there is the cost, which can exceed $40,000 for a household system—more than half of which is accounted for by installation.

...

[Several firms are using new materials] to produce the photovoltaic effect and building them in extremely thin layers, almost like printing on paper. As these films use less material they are cheaper to produce, not least because they can be deposited on bases like metal, glass and plastic.
The downside of these new materials is that they are less efficient than existing silicon-based photovoltaic materials.

However, efficiency is not everything. Not only are the new materials cheaper, they are also flexible. This, in turn, mitigates many of the disadvantages of terrestrial solar power, by providing the ability:
... to coat glass tubes with [the new photovoltaic materials] and encase them in another glass tube with sealed ends. They look a bit like fluorescent-lighting tubes. Forty of these tubes are then assembled into a single panel. Using tubes instead of flat panels makes it possible to capture sunlight, including diffuse light, from any direction—even if it is reflected up from a roof. And whereas traditional solar panels have to be tilted and carefully positioned so as not to shade nearby panels, tubular ones can be laid flat over the entire roof. Being lightweight and open they are also less prone to being blown away. This makes them easier and faster to fit. The cost of installation, reckons the company, should be about half that of conventional panels.
The company pioneering this approach is focusing first on the flat rooftops of commercial buildings:
Chris Gronet, [Solyndra's] chief executive, says that with some 30 billion square feet of large flat roofs in the United States alone, tubular solar cells could generate 150 gigawatts of electricity. That would be enough to power almost 16m homes.
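As a sanity check on those quoted figures, here is the implied power density and per-home share. These are derived numbers, not claims from Solyndra itself, and the per-home figure presumably refers to peak capacity rather than round-the-clock output:

```python
# Back-of-the-envelope check on the quoted Solyndra figures:
# 30 billion square feet of flat commercial roof producing 150 GW.
roof_sqft = 30e9
capacity_w = 150e9
homes = 16e6

w_per_sqft = capacity_w / roof_sqft      # implied power density
kw_per_home = capacity_w / homes / 1000  # implied share per home

print(f"{w_per_sqft:.1f} W per square foot")  # 5.0 W/sq ft
print(f"{kw_per_home:.1f} kW per home")       # 9.4 kW per home
```

Five watts per square foot, roughly 54 W per square meter, is well below what high-efficiency silicon panels deliver, which is consistent with the lower-efficiency, lower-cost thin films described above.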
What if the installed cost of these panels turns out to be $20,000 for a household rooftop installation? Assume a $200,000 house with zero down, financed at 7% fixed for thirty years: the monthly payment will be about $1,330.

Now, roll a $20,000 solar panel installation onto the mortgage. The resulting payment will be $130 more per month. Roughly speaking, about the same as that house's monthly electric bill.
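The arithmetic behind those payment figures is the standard amortization formula (the loan terms are the ones assumed above):

```python
# Standard fixed-rate amortization: a $200,000 mortgage at 7% over
# 30 years, then the same loan with $20,000 of solar rolled in.
def monthly_payment(principal, annual_rate=0.07, years=30):
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of payments
    return principal * r / (1 - (1 + r) ** -n)

base = monthly_payment(200_000)          # ~$1,331
with_solar = monthly_payment(220_000)    # ~$1,464
print(f"base ${base:.0f}, extra ${with_solar - base:.0f}/month")
```

The increment works out to about $133 a month, which is indeed in the neighborhood of a typical household electric bill.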

On many days in the lower 48, such an installation could very well end up generating more power than the house consumes during the day. That excess power can be sold back to the utility, which could then "store" it for use at night. Alternatively, battery powered vehicles capable of spanning a typical American commute look likely to appear in the not too distant future. The roof panels could be used to recharge those batteries, thereby eliminating the fuel cost of commuting.

Of course, the batteries need not be confined to the car. No reason they can't be part of the house, charged by the solar panels during the day then supplying the same house's electrical needs at night.

While I do not want to make assumptions so hopeful as to make Pollyanna a hardened skeptic by comparison, the efficiency of household devices is headed in the right direction to make an energy self-sufficient house possible. For example, LED light fixtures, at 1/3 to 1/5 the power consumption of incandescent bulbs, are becoming economically feasible, even before taking into account the far lower load their operation would impose on air conditioners.
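To put a rough dollar figure on that 1/3-to-1/5 ratio, here is a per-bulb estimate. The hours of use and electricity price are illustrative assumptions of mine, not figures from any source:

```python
# Rough annual savings per bulb if an LED draws 1/3 to 1/5 the power
# of a 60 W incandescent. Usage and rate are assumed, not measured.
incandescent_w = 60.0
led_w_high = incandescent_w / 3   # 20 W (the 1/3 case)
led_w_low = incandescent_w / 5    # 12 W (the 1/5 case)
hours_per_day = 3                 # assumed daily use
price_per_kwh = 0.12              # assumed retail electricity rate

def annual_cost(watts):
    kwh = watts * hours_per_day * 365 / 1000
    return kwh * price_per_kwh

savings_low = annual_cost(incandescent_w) - annual_cost(led_w_high)
savings_high = annual_cost(incandescent_w) - annual_cost(led_w_low)
print(f"${savings_low:.2f} to ${savings_high:.2f} saved per bulb per year")
```

A few dollars per bulb per year is modest on its own, but multiplied across a house full of fixtures, and net of the reduced air-conditioning load, it moves the self-sufficiency math in the right direction.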

Who knows whether these developments will pan out. I can well remember the hoohah surrounding the invention of high-temperature superconductors, and the funereal silence that has prevailed on that subject ever since.

However, the possibilities are far closer to eventuality than anything proposed to yield tolerable lift costs into LEO, never mind geosynchronous orbit.

And, if just two things happen -- viable rooftop solar panels and cost effective batteries -- the most obvious raison d'être for space presence has just been brought crashing to earth.

Joomla anyone?

Trying to hack HTML and CSS and meet deadlines has prompted me to look into web development tools. Does anyone have experience with Joomla, which is an open source content management system? Tips, tricks and recommendations from the peanut gallery are welcome.

Friday, October 24, 2008

Where the hell is Duck redux

Tired of narrowly missing layoffs at my consulting company, after five weeks on "the bench" without being sent out on a single client interview, I jumped at an opportunity with a new consulting company that presented itself on Tuesday. I am now working a consulting gig at a startup client going through "growing pains", which entails long hours at the office. It makes me glad to have moved from a salaried to an hourly position.

Anyway, I don't know when posting might resume; hopefully I'll squeeze one out this weekend, if I don't get called in to the weekend emergency recovery effort. Not being too technical has its advantages.

In a recession all work is good work.

Tuesday, October 21, 2008

Here are the symptoms ...

Background: Patient presented as a patently healthy 53-year-old male for a routine physical examination, conducted over approximately one hour, involving two nurses (one for vitals, the other for blood draw) and one physician's assistant.

Symptoms:
Standard lab work -- $400. (Chem panel, cholesterol, blood count, PSA, urinalysis, thyroid.)

PA and nurses: $350

Total: $750


So what is the disease? How in the heck can it possibly cost $400 to do simple lab work that probably took less than 30 minutes to accomplish? How is it that roughly an hour spent on an individual healthy nearly to the point of offensiveness cost $350?

Granted, there needs to be something extra in the pay packet for the person doing the prostate prodding, but still ...

Wednesday, October 15, 2008

I Blame Global Warming III

The nearly coldest summer on record in Alaska has already made it increasingly difficult to lie awake nights worrying about global warming. Now there is this:

Bad weather was good for Alaska Glaciers
Two hundred years of glacial shrinkage in Alaska, and then came the winter and summer of 2007-2008.

Unusually large amounts of winter snow were followed by unusually chilly temperatures in June, July and August.
"In mid-June, I was surprised to see snow still at sea level in Prince William Sound," said U.S. Geological Survey glaciologist Bruce Molnia. "On the Juneau Icefield, there was still 20 feet of new snow on the surface of the Taku Glacier in late July. At Bering Glacier, a landslide I am studying, located at about 1,500 feet elevation, did not become snow free until early August.
Never before in the history of a research project dating back to 1946 had the Juneau Icefield witnessed the kind of snow buildup that came this year. It was similar on a lot of other glaciers too.
As it happens, glaciers have been receding in Alaska since the most recent maximum in the 1700s. Since then, Alaskan glaciers have lost about 15% of their area. Why might that be?
Climate change has led to speculation they might all disappear. Molnia isn't sure what to expect. As far as glaciers go, he said, Alaska's glaciers are volatile. They live life on the edge.

"What we're talking about to (change) most of Alaska's glaciers is a small temperature change; just a small fraction-of-a-degree change makes a big difference. It's the mean annual temperature that's the big thing."
Yes, I know when it comes to glaciers I am part of the great ignoranti in comparison with Mr. Molnia. I shall risk disagreement, nonetheless.

This "fraction-of-a-degree" stuff is AGW as religion. Over the long term, whether glaciers expand or contract is solely dependent upon the difference between winter snow accumulation and summer melt. It is clearly possible that an increase in snowfall will more than offset warmer temperatures, just as cooler temperatures will not mean expanding glaciers if the local winter climate features less snow.
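The accumulation-minus-melt point can be made concrete with a toy mass-balance model. Every number here is invented for illustration; the melt factor is a crude stand-in for the degree-day methods glaciologists actually use:

```python
# Toy glacier mass balance: net change = accumulation - ablation,
# with ablation modeled as proportional to mean summer temperature
# above freezing. All parameters are illustrative, not measurements.
def mass_balance(snow_m_we, summer_temp_c, melt_factor=0.6):
    """Net annual balance in metres water-equivalent.

    snow_m_we:   winter accumulation (m water-equivalent)
    melt_factor: assumed melt (m) per degree C of mean summer
                 temperature above freezing
    """
    ablation = melt_factor * max(summer_temp_c, 0.0)
    return snow_m_we - ablation

# A warmer summer with much heavier snowfall can still mean growth...
print(mass_balance(snow_m_we=4.0, summer_temp_c=5.5))  # positive
# ...while a cooler summer with meagre snowfall still means retreat.
print(mass_balance(snow_m_we=1.0, summer_temp_c=4.5))  # negative
```

Which is precisely the point: temperature alone does not determine the sign of the balance; it is the difference between the two terms that matters.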

Fortunately, though, the balance of the article is less theological:
During the Little Ice Age -- roughly the 16th century to the 19th -- Muir Glacier filled Glacier Bay and the people of Europe struggled to survive because of difficult conditions for agriculture. Some of them fled for America in the first wave of white immigration.

The Pilgrims established the Plymouth Colony in December 1620. By spring, a bitterly cold winter had played a key role in helping kill half of them. Hindered by a chilly climate, the white colonization of North America through the 1600s and 1700s was slow.

As the climate warmed from 1800 to 1900, the United States tripled in size. The windy and cold city of Chicago grew from an outpost of fewer than 4,000 in 1800 to a thriving city of more than 1.5 million at the end of that century.

The difference in temperature between the Little Ice Age and these heady days of American expansion?

About three or four degrees, Molnia said.

The difference in temperature between this summer in Anchorage -- the third coldest on record -- and the norm?

About three degrees, according to the National Weather Service.

Does it mean anything?

Nobody knows. Climate is constantly shifting. And even if the past year was a signal of a changing future, Molnia said, it would still take decades to make itself noticeable in Alaska's glaciers.
I'm surprised this snuck by the Anchorage Daily News editorial staff and their masters at McClatchy Newspapers.

To learn more than just about anyone needs to know about Alaskan glaciers, read this.

Monday, October 13, 2008

Where to the markets?

I think there is more downside to the equity markets over the next 12 months, but I think that the worst of the panic selling is over for now. One good thing about this market crash, I think, is that it is a chance to clear out the pent-up fears over how much of a threat the mortgage and asset backed/derivatives situation presented to the economy. Once fears are realized, they become a known quantity and therefore lose much of their hold over the imagination. So now that the damage is done, we can work our way through it and get to the other side.

That working through process will take some time, and will lead to some further downside in the markets over the next 12 months. We still don't know what the future regulatory scheme will look like, both in the US and worldwide, but we do know that it will be very different from what it is today. Once the new scheme is in place, the players will evaluate how to make money in the new environment, and will restructure themselves to take advantage of it. There will be winners and losers, and the adjustments will take the markets lower as capital flows out of the losers and starts seeking the winners.

Conventional wisdom will be the biggest loser. This is one of those Duckian moments where the average amateur observer knows as much about the future as the experts, whose expertise is vaporizing along with the bubble capital. A lot of good can come of this moment, both politically and economically, but the window of opportunity will be short before a new regime of conventional wisdom reasserts itself.

Friday, October 10, 2008

Procrastination pays off

I knew if I put off dealing with my procrastination long enough it would pay dividends, and it finally has. I joined my current employer in 2005, and never set up any investment elections with the new 401k plan. Since then all my contributions have been going into their short term cash vehicle. I missed out on all those fabulous gains in the equity market between 2005 and the market peak in October of last year, but unless I had had the prescience to unload my equity holdings then, which I wouldn't have, my cash un-strategy worked perfectly for the equity market crash unfolding before our eyes.

Anyone feeling prescient about when the bottom might appear?

Tuesday, October 07, 2008

Anti-racism - the last refuge of a scoundrel

The current financial mess may be an orphan, but not for lack of fathers. One man present at its conception was Massachusetts representative Barney Frank, who as a member of the House Financial Services Committee pressed Fannie Mae and Freddie Mac to increase their mortgage lending to high-risk borrowers. Rather than admit to his role in the unfolding debacle, Frank has charged critics of his actions with, you guessed it, racism:
BOSTON (AP) - Rep. Barney Frank said Monday that Republican criticism of Democrats over the nation's housing crisis is a veiled attack on the poor that's racially motivated.
The Massachusetts Democrat, chairman of the House Financial Services Committee, said the GOP is appealing to its base by blaming the country's mortgage foreclosure problem on efforts to expand affordable housing through the Community Reinvestment Act.

He said that blame is misplaced, because those loans are issued by regulated institutions, while far more foreclosures were triggered by high-cost loans made by unregulated entities.

"They get to take things out on poor people," Frank said at a mortgage foreclosure symposium in Boston. "Let's be honest: The fact that some of the poor people are black doesn't hurt them either, from their standpoint. This is an effort, I believe, to appeal to a kind of anger in people."

Frank also dismissed charges the Democrats failed on their own or blocked Republican efforts to rein in the mortgage companies Fannie Mae and Freddie Mac. The federal government recently took control of both entities.

House Minority Leader John Boehner of Ohio called Frank's remarks "a lame, desperate attempt to divert Americans' attention away from the Democratic party's obstruction of reforms that would have reined in Fannie Mae and Freddie Mac and helped our nation avoid this economic crisis."

"Congressman Frank should retract his ridiculous statements and start taking responsibility for the role he and other top Democrats played in putting Main Street Americans in this mess," Boehner said.

Frank said Republicans controlled Congress for 12 years and passed no regulation, while Democrats passed a Bush administration Fannie and Freddie regulation package since gaining control of the House and Senate in January 2007.

Frank doesn't mention that an earlier attempt by the Bush administration to rein in Fannie and Freddie with tougher regulations following an accounting scandal was blocked by congressional Democrats, and softer rules were enacted in return for an emphasis by the mortgage giants on increased lending to lower-income and minority borrowers.

The sad irony is that in an effort to make life easier for low-income households, such government-sanctioned altruistic business practices have made life harder for them by feeding the lending frenzy that led to the real estate downturn, and to the current financial crisis which has fueled unemployment. As the adage goes, no good deed goes unpunished.

Monday, October 06, 2008

Worldwide financial crisis deepens as Pope's comments spark sell-off on Wall Street

06 Oct 2008 DDP - The world financial crisis entered dangerous territory today as the Dow Industrial Average broke below the 10,000 barrier for the first time since 2004, sparked largely by comments from Pope Benedict XVI earlier in the day that the world's monies were backed by sand. "This came as a shock to many traders, who had believed all along that the dollars they used to buy and sell stocks, bonds and commodities were backed by the full faith and credit of the US Government", bemoaned Larry Fine, a harried pit trader at the New York Stock Exchange.

In what was described by one observer as a "flight to quality", brokers and bankers lined up at the Vatican in an attempt to find a safe place to park their dwindling capital reserves. The move caught Vatican officials off guard, who assured the gathered financial executives that the Holy Mother Church no longer operates an exchange in any financial instruments or commodities. "With the coming of the Reformation there was no longer any support among European governments for a standardized indulgence contract, and liquidity dried up" explained Cardinal Livio Carrasco. "Some of our more creative theologians have explored sin default swaps and certificates of redemption, but at tops we could handle 100 million Euro at this point in time."

A visibly shaken Secretary of the Treasury Henry Paulson tried to quell rumors of the dollar's worthlessness at an emergency press conference in front of the Treasury building this afternoon. "I would like to assure all Americans as well as our international financial partners that the 'sand window' is closed. You can be assured that the Almighty Dollar will rise again from the ashes of this unfortunate misunderstanding and will wreak vengeance on all who challenge its status as a claim on the future earnings of America's children and grandchildren, bless their immortal souls!"

On a more positive note, "Mad Money" tv host Jim Cramer told his viewers that it was time for ordinary citizens to take their money out of the stock market. "I've sold all my stock, and I've gotten all of my children's trust funds out of stocks. I've heard from all my colleagues working on the street that all their dollars are out of the market, so I think it is the right time to let Joe and Jill Sixpack in on the truth. Wall Street is a house of cards. You're all screwed, get out if you can!" Knowledgeable observers say that Cramer is the biggest idiot yet to publicly call for panic selling, meaning that according to the "biggest idiot" theory a bottom to the market is very near.

In other news, the European Union Commission on Cultural Deconstruction today ruled that France is an officially designated "dunce" nation, and that all French citizens will be required to wear an appropriate dunce outfit until the entire country stops being so insufferably silly.

Saturday, October 04, 2008

Whose side are you on?

In "A Christmas Carol" Charles Dickens has a repentant Marley's ghost cry out "Mankind was my business!" It's a poignant moment that strikes a chord with almost everyone, for we all feel in our heart that this entity we call Mankind should be our highest concern.

Yet listening to proponents of the environmental movement in this article from the University of Chicago Magazine one would have to believe that there are many a future Marley's Ghost haunting about its conferences and meetings, forging their chains of future regret. Dead white males have nothing on living conservationists when it comes to devaluing the grubby brown peoples of the world. For them the world is a struggle between forests and people, and people are the bad guys:
When Susanna Hecht went to El Salvador in 1999 to help the government with long-range environmental planning, officials at the Ministry of Environment and Natural Resources told her there were no forests left in the country. To Hecht, AB’72, a professor of urban planning at UCLA and an expert on tropical development, the claim came as no surprise. El Salvador was notorious for population growth and ecological degradation. The most crowded country in Latin America, during the 1960s and ’70s it had suffered severe deforestation with the expansion of livestock and sugar-cane farming. In 1999, the same year Hecht arrived, the tropical ecologist John Terborgh declared that in El Salvador, “nature has been extinguished.”

But as she drove around the country, Hecht noticed plenty of trees. Some were remnants of old forests, but she also saw hedgerows, backyard orchards, coffee groves, trees growing along rivers and streams, cashew and palm plantations, saplings sprouting in abandoned fields, and heavily wooded grassland. Almost every village abounded with trees—“like a big jungle forest,” she said. Rather than no trees, she saw them everywhere. Nature was far from extinguished; it was thriving.

Hecht called these woodlands El Salvador’s “secret forests.” In a country only recently deforested, trees were coming back. And El Salvador was not alone. For many reasons, trees were resurgent throughout Latin America, including Honduras, Puerto Rico, Ecuador, and in parts of the Amazon. But because scientists and policy-makers were preoccupied with tropical deforestation, Hecht said, they had been slow to take notice.
In another sense, she said, they didn’t see El Salvador’s forests because of an old bias toward so-called “pristine” forests—primitive and untouched—and against “anthropogenic” forests, those created by humans or shaped by human activities like burning, grazing, farming, and logging. It was these anthropogenic landscapes, which Hecht called “peasant” or “working” forests, that were reclaiming El Salvador. They were a secret in plain view. But whether you saw them depended on how you counted.

“A great deal of it looked like forest,” Hecht recalled. “If you start saying anthropogenic forests are OK, the place goes from having no forest to tons of forest.”

I've heard of not being able to see the forest for the trees, but this is a first: not seeing the forest for the people. Yet one would think that the realization that people and trees can coexist would be greeted as a win-win, as good news. One would think wrongly:
And yet the regreening of the Sahel has attracted little notice, Reij said. He and other critics of contemporary conservation efforts say that conservation groups and many scientists have neglected forested landscapes where people live and work because they are more interested in large parks and preserves. Kathleen Morrison, an anthropologist who directs the University’s Center for International Studies and was a conference organizer, studies dry forests in southern India. She said that such forests, with drought seasons that last several months, once covered more than half the world’s tropics and subtropics but receive far less study than tropical rainforests, “perhaps in part because of their entanglements with human histories.” Her own research attempts to reconstruct that entanglement over many millennia, as forests in southern India waxed and waned in response to the rise and fall of cities and the country’s shifting culinary habits.
...
Still, “working” landscapes have become major areas of reforestation the world over, Hecht said. Often dismissed as pyromaniacs and forest clearers, peasants are now seen as creators. Understanding the social dynamics that initiate and sustain the new landscapes is critical for conservation: “Looking at these social relations gives us much more of an idea of how we can support processes that produce forest and diminish processes that don’t.”

So when did our highest goal as a species become the production of forests?
One way to see the contradictions that still cloud Western thinking about forests, Roderick Neumann suggested, is to contrast conservation philosophy and practice in Europe and Africa. Neumann began his career studying protected areas in Tanzania. Both Selous Game Preserve (opened in 1905) and Arusha National Park (opened in 1960), he argued, are typical examples of “fortress conservation” in Africa. In each case, preserving nature has meant excluding humans. Under the influence of 19th- and early 20th-century German forestry, he said, colonial and postcolonial authorities drove the local people, the Meru, out of Arusha National Park, arguing that they were “mismanaging the forest and were ignorant of its conservation value.” Acting by the same principles, Tanzanian authorities later made Selous Game Reserve the second-largest protected area in Africa, home to elephants, lions, and black rhinoceroses, by expelling 40,000 people.

“Typically these evictions are based on neo-Malthusian concerns of overpopulation and claims of irrational and unsustainable resource use,” Neumann said. “And these ideas continue in conservation initiatives today.”

If forests and people really were implacable enemies, wouldn't that argue for taking an adversarial attitude toward forests? Have you ever heard an environmentalist decry the clearcutting of people from a region? I don't find it odd that there are people who would prefer the existence of a pristine forest to a thriving community of people, but it is surprising that such a misanthropic consensus could become so ingrained in a field which the general public has placed in such high regard.

Thursday, October 02, 2008

Let's destroy "Art" to save art

Nothing is as clichéd, and as true, as the notion that "Art" (as distinct from both "art" and "the Arts") is in a sorry state. In its capitalized identity, Art represents the body of work that is produced by an artist class officially recognized as such by that loosely knit organization known as the Art Establishment. And in its singular context (Art vs the Arts) it refers specifically to painting and sculpture, as opposed to music, theater, dance and cinema, which are not widely recognized as being in a permanent state of decay and disrepute.

Roger Kimball draws an enlightening connection between Art and Religion, which perhaps provides a model for Art's inevitable decline:

The End of Art
by Roger Kimball

Copyright (c) 2008 First Things (June/July 2008).

Nearly everyone cares—or says he cares—about art. After all, art ennobles the spirit, ­elevates the mind, and educates the emotions. Or does it? In fact, tremendous irony attends our culture’s continuing investment—emotional, financial, and social—in art. We behave as if art were something special, something important, something spiritually refreshing; but, when we canvas the roster of distinguished artists today, what we generally find is far from spiritual, and certainly far from refreshing.

It is a curious situation. Traditionally, the goal of fine art was to make beautiful objects. The idea of beauty came with a lot of Platonic and Christian metaphysical baggage, some of it indifferent or even hostile to art. But art without beauty was, if not exactly a contradiction in terms, at least a description of failed art.

Nevertheless, if large precincts of the art world have jettisoned the traditional link between art and beauty, they have done nothing to disown the social prerogatives of art. Indeed, we suffer today from a peculiar form of moral anesthesia—as if being art automatically rendered all moral considerations ­gratuitous. The list of atrocities is long, familiar, and laughable. In the end, though, the effect has been ­anything but amusing; it has been a cultural disaster. By universalizing the spirit of opposition, the avant-garde’s ­project has transformed the practice of art into a purely negative enterprise, in which art is either oppositional or it is nothing. Celebrity replaces aesthetic achievement as the goal of art.
...
The Platonic tradition in Christianity invests beauty with ontological significance, trusting it to reveal the unity and proportion of what really is. Our apprehension of beauty thus betokens a recognition of and ­submission to a reality that transcends us. And yet, if beauty can use art to express truth, art can also use beauty to create charming fabrications. As Jacques Maritain put it, art is capable of establishing “a world apart, closed, limited, absolute,” an autonomous world that, at least for a moment, relieves us of the “ennui of living and willing.” Instead of directing our attention beyond sensible beauty toward its supersensible source, art can fascinate us with beauty’s apparently self-sufficient presence; it can counterfeit being in lieu of revealing it.

Considered as an end in itself, apart from God or being, beauty becomes a usurper, furnishing not a foretaste of beatitude but a humanly contrived substitute. “Art is dangerous,” as Iris Murdoch once put it, “chiefly because it apes the spiritual and subtly disguises and trivializes it.”

This helps explain why Western thinking about art has tended to oscillate between adulation and deep suspicion. “Beauty is the battlefield where God and the devil war for the soul of man,” Dostoevsky had Mitya Karamazov declare, and the battle runs deep.

When deploring the terrible state of the art world today—Tolstoy’s word perverted is not too strong—we often look back to the Renaissance as a golden age when art and religion were in harmony and all was right with the world. But for many traditional thinkers, the Renaissance was the start of the trouble. Thus Maritain charges that “the Renaissance was to drive the artist mad, and to make of him the most miserable of men . . . by revealing to him his own peculiar grandeur, and by letting loose on him the wild beast Beauty which Faith had kept enchanted and led after it, docile.”

Thus, along with the shattering of the medieval ­cosmos and the flowering of Renaissance humanism, “prodigal Art aspired to become the ultimate end of man, his Bread and Wine, the consubstantial mirror of beatific Beauty.” How seriously should we take this rhetoric that fuses the ambitions of art and religion? No doubt it is in part hyperbole. But, like most hyperbole, talk of the artist as a “second god” is exorbitant language striving to express an exorbitant claim—a claim about man’s burgeoning consciousness of himself as a free and creative being.

We have to wait for Romanticism and the flowering of the cult of genius for the completion of this discovery. But the apotheosis of artistic creativity began long before the nineteenth century. With the rise of fixed-point perspective, which Alberti’s fifteenth-century On Painting first systematized and made generally available, the artist had entered into a new consciousness of his freedom and creativity. As Erwin Panofsky pointed out, the achievement of fixed-point perspective marked not only the elevation of art to a science (a prospect that so enthused Renaissance artists) but also “an objectification of the subjective,” a subjection of the visible world to the rule of ­mathematics:

There was a curious inward correspondence between perspective and what may be called the general mental attitude of the Renaissance: the process of projecting an object on a plane in such a way that the resulting image is determined by the distance and location of a “point of view” symbolized, as it were, the Weltanschauung of a period which had inserted an historical distance—quite comparable to the perspective one—between itself and the classical past, and had assigned to the mind of man a place “in the center of the universe” just as perspective assigned to the eye a place in the center of its graphic representation.

In this sense, the perfection of one-point perspective betokened not only the mastery of a particular artistic technique but implied also a new attitude toward the world. Increasingly, nature was transformed from God’s book of human destiny to material for the play of the godlike artist.

The closer one moved toward the present time, the more blatant and unabashed became the association of the artist with God. Thus Alexander Baumgarten, writing in the mid-eighteenth century, compared the poet to a god and likened his creation to “a world”: “Hence by analogy whatever is evident to the philosophers regarding the real world, the same ought to be thought of a poem.” And Lord Shaftesbury, who exerted enormous influence on eighteenth-century aesthetics, asserted that, in the employment of his imagination, the artist becomes “a second god, a just Prometheus under Jove.” Of course, as Ernst Cassirer noted in his gloss on Shaftesbury, “the difference between man and God disappears when we consider man not simply with respect to his original immanent forming powers, not as something created, but as a creator. . . . Here man’s real Promethean nature comes to light.”
...
We do not need Nietzsche to tell us that the disintegration of the Platonic-Christian worldview, already begun in the late Middle Ages, is today a cultural given. Nor is it news that the shape of modernity—born, in large part, from man’s faith in the power of human reason and technology to remake the world in his own image—has made it increasingly difficult to hold the traditional view that ties beauty to being and truth, investing it with ontological significance. Modernity, the beneficiary of Descartes’ relocation of truth to the subject (Cogito, ergo sum), implies the autonomy of the aesthetic sphere and hence the isolation of beauty from being or truth. When human reason is made the measure of reality, beauty forfeits its ontological claim and becomes merely aesthetic—merely a matter of feeling.

At the end of his book Human Accomplishment (2004), Charles Murray argues that “religion is indispensable in igniting great accomplishment in the arts.” I have a good deal of sympathy with the intention behind Murray’s argument, but my first response to his claims for the indispensability of religion for art might be summed up by that Saul Steinberg cartoon in which a smallish yes is jetting along toward a large BUT. Murray has done a lot to insulate his argument: By religion, he doesn’t mean churchgoing or even theology, and thus he is right to say that classical Greece, though secular (one might even say pagan) in a certain sense, was nonetheless a religious powerhouse for the “mature contemplation” of “truth, beauty, and the good.”

I think that Kimball, in equating the rise of the artist as the ultimate arbiter of artistic value with the demise of the Platonic-Christian worldview, gets it exactly wrong. The artist as ultimate arbiter suggests an objectification of beauty, not its subjectification, and puts the artist, and the Arts Establishment, in the role of the medieval priesthood. The medieval church was killed by subjective spirituality, the priesthood of all believers. Religion is, and always has been, a usurpation of subjective spiritual impulses by a dream of philosophic objectification.

Likewise with Beauty and its handmaiden, Art. Beauty cannot be anything but subjective. Subjectivity is the enemy of all priesthoods, and the Art Establishment is nothing but a pathetic priesthood attempting to hold onto influence and relevance in a world where the distinction between what is objective and what is subjective is more clearly established than at any time in history. The Objective is the realm of science; everything else is philosophy and opinion.

The title to this post is a call to action, but in fact no action is really necessary. Despite its inability to produce art that has any aesthetic value, the Art Establishment is not preventing the flourishing of art in society, any more than the Catholic Church's insistence on being the one true church has prevented a flourishing of non-Catholic spirituality anywhere on the globe. The Art Establishment is an irrelevance. Capital-A Art is an irrelevance. Little-a art abounds everywhere.

I'm just wondering when Time Magazine will get the bright idea to publish an issue titled "Art is Dead".

Wednesday, October 01, 2008

Black Swans and ideological shifts

Rare and catastrophic events, so-called Black Swans, have a way of re-aligning the political landscape. 9/11 turned many a moderate into a conservative, as it did me. But the financial crisis of 2008 threatens to undo my conservative identity, though what the alternative is I do not yet know. A case in point is this line of conservative cant in defense of deregulated financial markets, which takes this year's Leonard Cohen award for tone-deafness:
This reaction is a bit like protesting against patching the hole in an ocean liner because doing so will save those who made the crucial navigational errors. Watching the navigator sink beneath the waves might be fun, but one's pleasure will be short and gurgly. So let's get real. While some unjustified enrichment is possible, for the most part, if the financial institutions survive and prosper it will not be because they have been "bailed out" but because the system has been saved. So take a deep breath and say "This is good." In the Midwesternism of my youth, "Don't cut off your nose to spite your face."

Another reason for rational restraint is that there are fewer villains in this tale than the news and the political campaigns would lead one to believe. Three basically good things - the securitization of consumer credit, the extension of credit down the economic ladder, and the invention of derivatives - have combined, and the resulting mix turned out to be explosive. Well, live and learn, and do better next time. But first, ensure there is a next time.


Bringing the world financial system to the point of collapse is not an event that can be shrugged off with "live and learn" nonchalance. This is not a matter of small miscalculations, but of systemic ignorance on the part of the entire financial community, in both the public and private sectors. If conservatives and free-market libertarians think that political confidence can be restored without any recriminations or blame-seeking, they will have ensured a statist political hegemony for the next fifty years. To paraphrase Ricky Ricardo, there is a whole lot of 'splaining to be done.

Nassim Nicholas Taleb has, I think, correctly figured out who the guilty are, and they are not the government regulators.
Statistical and applied probabilistic knowledge is the core of knowledge; statistics is what tells you if something is true, false, or merely anecdotal; it is the "logic of science"; it is the instrument of risk-taking; it is the applied tools of epistemology; you can't be a modern intellectual and not think probabilistically—but... let's not be suckers. The problem is much more complicated than it seems to the casual, mechanistic user who picked it up in graduate school. Statistics can fool you. In fact it is fooling your government right now. It can even bankrupt the system (let's face it: use of probabilistic methods for the estimation of risks did just blow up the banking system).

The current subprime crisis has been doing wonders for the reception of any ideas about probability-driven claims in science, particularly in social science, economics, and "econometrics" (quantitative economics). Clearly, with current International Monetary Fund estimates of the costs of the 2007-2008 subprime crisis, the banking system seems to have lost more on risk taking (from the failures of quantitative risk management) than every penny banks ever earned taking risks. But it was easy to see from the past that the pilot did not have the qualifications to fly the plane and was using the wrong navigation tools: The same happened in 1983 with money center banks losing cumulatively every penny ever made, and in 1991-1992 when the Savings and Loans industry became history.

It appears that financial institutions earn money on transactions (say fees on your mother-in-law's checking account) and lose everything taking risks they don't understand. I want this to stop, and stop now—the current patching by the banking establishment worldwide is akin to using the same doctor to cure the patient when the doctor has a track record of systematically killing them. And this is not limited to banking—I generalize to an entire class of random variables that do not have the structure we think they have, in which we can be suckers.

And we are beyond suckers: not only, for socio-economic and other nonlinear, complicated variables, are we riding in a bus driven by a blindfolded driver, but we refuse to acknowledge it in spite of the evidence, which to me is a pathological problem with academia. After 1998, when a "Nobel-crowned" collection of people (and the crème de la crème of the financial economics establishment) blew up Long Term Capital Management, a hedge fund, because the "scientific" methods they used misestimated the role of the rare event, such methodologies and such claims on understanding risks of rare events should have been discredited. Yet the Fed helped their bailout and exposure to rare events (and model error) patently increased exponentially (as we can see from banks' swelling portfolios of derivatives that we do not understand).
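Taleb's point about misestimating rare events is easy to demonstrate. The following is a minimal sketch of my own (not Taleb's analysis, and the "risk manager" is hypothetical): draw daily losses from a heavy-tailed Pareto distribution, let a risk manager take the worst loss in one year of history as the "worst case," and count how often the next twenty-five years blow through it.

```python
# Illustration: with heavy-tailed losses, the worst loss in a year of
# history badly understates future extremes.
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def pareto_loss(alpha: float = 1.5) -> float:
    """Pareto-distributed loss with tail index alpha via inverse-transform
    sampling (variance is infinite for alpha < 2)."""
    return random.random() ** (-1.0 / alpha)

# A hypothetical risk manager calibrates "worst case" to 250 trading days.
history = [pareto_loss() for _ in range(250)]
estimated_worst = max(history)

# The next 25 years of daily losses from the very same process.
future = [pareto_loss() for _ in range(250 * 25)]
exceedances = sum(1 for loss in future if loss > estimated_worst)
worst_future = max(future)

print(f"historical 'worst case': {estimated_worst:.1f}")
print(f"future days exceeding it: {exceedances}")
print(f"worst future loss: {worst_future:.1f}")
```

For independent draws, the expected number of future observations exceeding the historical maximum is n_future/(n_history + 1), here about 25: the "unprecedented" loss is not rare at all, which is exactly the trouble with calibrating risk models to a short history of a fat-tailed process.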

I think the first mistake with deregulation has been to pretend that the financial sector can be treated as private. The financial system is infrastructure, just like roads and bridges. It is an enabler of commerce, a resource that must be in place and must be standardized to allow any economic activity to take place.

The second mistake is to treat financial activity as wealth-producing. When the financial sector leads the economy in profit growth, it is at the expense of real wealth production. Allowing banks to engage in speculative "investment" activity, fueled by leverage, was not only a disaster in the making but a misallocation of capital away from truly wealth-generating opportunities such as energy and resource production. We have too few oil wells and far too many houses right now because we allowed our financial institutions to play the roulette wheel with leveraged bets backed by the Federal Reserve.

I'm not sure what form future regulations of the financial system should take, but I am in no doubt that the conservative pipe-dream of unregulated financial markets is as dead as Pauly Shore's Oscar ambitions. I only hope that we can direct the bulk of the regulations toward the financial sector and leave the productive sectors of our economy free enough to preserve and enhance our standard of living going forward.

Update: For those who doubt that the current moment represents a potential catastrophe, I refer you to these words from tax law professor Theodore Seto:
In addition, at least $500 billion more of teaser-rate mortgages are scheduled to reset over the next several years. In all likelihood, they too will go into default and become toxic waste. Nothing in Mr. Paulson’s original proposal was intended to do anything about this next $500 billion installment – or, indeed, to prevent lenders from making more teaser-rate mortgages in the future.

Similarly, Mr. Paulson’s proposal was not intended as a general Wall Street bail-out, although to some extent it would have had that effect. Note that the outstanding overhang of credit default swaps alone is estimated to be between $45 and $60 trillion – three to four times the size of our annual gross domestic product. The requested $700 billion, although the single biggest appropriation request in U.S. history, was minuscule when compared with the toxic waste problem as a whole. Mr. Paulson’s proposed solution was to cost just 1% of the size of the problem and was aimed only at a small part of that problem. (It is unnerving to realize that the U.S. government – the “beast” we have been starving for so long – may now lack the borrowing capacity to solve the problem as a whole. We need to get our financial house in order.)

Literature: utility or pretension?

I've asked this question in the past: what is literature's utility function? We are well aware of its disutility function, namely that it serves as a breeding ground for a parasitic critic class that seems competent only at obscuring at best, or subverting at worst, literature's true worth to society. But is there a redeeming value to literature in spite of its promoters?

Leave it to the Darwinists to root out an explanation for literature's beneficial aspect that actually makes sense:
When Brad Pitt tells Eric Bana in the 2004 film Troy that “there are no pacts between lions and men,” he is not reciting a clever line from the pen of a Hollywood screenwriter. He is speaking Achilles’ words in English as Homer wrote them in Greek more than 2,000 years ago in the Iliad. The tale of the Trojan War has captivated generations of audiences while evolving from its origins as an oral epic to written versions and, finally, to several film adaptations. The power of this story to transcend time, language and culture is clear even today, evidenced by Troy’s robust success around the world.

Popular tales do far more than entertain, however. Psychologists and neuroscientists have recently become fascinated by the human predilection for storytelling. Why does our brain seem to be wired to enjoy stories? And how do the emotional and cognitive effects of a narrative influence our beliefs and real-world decisions?

The answers to these questions seem to be rooted in our history as a social animal. We tell stories about other people and for other people. Stories help us to keep tabs on what is happening in our communities. The safe, imaginary world of a story may be a kind of training ground, where we can practice interacting with others and learn the customs and rules of society. And stories have a unique power to persuade and motivate, because they appeal to our emotions and capacity for empathy.

A Good Yarn
Storytelling is one of the few human traits that are truly universal across culture and through all of known history. Anthropologists find evidence of folktales everywhere in ancient cultures, written in Sanskrit, Latin, Greek, Chinese, Egyptian and Sumerian. People in societies of all types weave narratives, from oral storytellers in hunter-gatherer tribes to the millions of writers churning out books, television shows and movies. And when a characteristic behavior shows up in so many different societies, researchers pay attention: its roots may tell us something about our evolutionary past.

To study storytelling, scientists must first define what constitutes a story, and that can prove tricky. Because there are so many diverse forms, scholars often define story structure, known as narrative, by explaining what it is not. Exposition contrasts with narrative by being a simple, straightforward explanation, such as a list of facts or an encyclopedia entry. Another standard approach defines narrative as a series of causally linked events that unfold over time. A third definition hinges on the typical narrative’s subject matter: the interactions of intentional agents—characters with minds—who possess various motivations.

However narrative is defined, people know it when they feel it. Whether fiction or nonfiction, a narrative engages its audience through psychological realism—recognizable emotions and believable interactions among characters.

“Everyone has a natural detector for psychological realism,” says Raymond A. Mar, assistant professor of psychology at York University in Toronto. “We can tell when something rings false.”

But the best stories—those retold through generations and translated into other languages—do more than simply present a believable picture. These tales captivate their audience, whose emotions can be inextricably tied to those of the story’s characters. Such immersion is a state psychologists call “narrative transport.”

Researchers have only begun teasing out the relations among the variables that can initiate narrative transport. A 2004 study by psychologist Melanie C. Green, now at the University of North Carolina at Chapel Hill, showed that prior knowledge and life experience affected the immersive experience. Volunteers read a short story about a gay man attending his college fraternity’s reunion. Those who had friends or family members who were homosexual reported higher transportation, and they also perceived the story events, settings and characters to be more realistic. Transportation was also deeper for participants with past experiences in fraternities or sororities. “Familiarity helps, and a character to identify with helps,” Green explains.

Other research by Green has found that people who perform better on tests of empathy, or the capacity to perceive another person’s emotions, become more easily transported regardless of the story. “There seems to be a reasonable amount of variation, all the way up to people who can get swept away by a Hallmark commercial,” Green says.

In Another’s Shoes
Empathy is part of the larger ability humans have to put themselves in another person’s shoes: we can attribute mental states—awareness, intent—to another entity. Theory of mind, as this trait is known, is crucial to social interaction and communal living—and to understanding stories.

Children develop theory of mind around age four or five. A 2007 study by psychologists Daniela O’Neill and Rebecca Shultis, both at the University of Waterloo in Ontario, found that five-year-olds could follow the thoughts of an imaginary character but that three-year-olds could not. The children saw model cows in both a barn and a field, and the researchers told them that a farmer sitting in the barn was thinking of milking the cow in the field. When then asked to point to the cow the farmer wanted to milk, three-year-olds pointed to the cow in the barn—they had a hard time following the character’s thoughts to the cow in the field. Five-year-olds, however, pointed to the cow in the field, demonstrating theory of mind.

Perhaps because theory of mind is so vital to social living, once we possess it we tend to imagine minds everywhere, making stories out of everything. A classic 1944 study by Fritz Heider and Mary-Ann Simmel, then at Smith College, elegantly demonstrated this tendency. The psychologists showed people an animation of a pair of triangles and a circle moving around a square and asked the participants what was happening. The subjects described the scene as if the shapes had intentions and motivations—for example, “The circle is chasing the triangles.” Many studies since then have confirmed the human predilection to make characters and narratives out of whatever we see in the world around us.

But what could be the evolutionary advantage of being so prone to fantasy? “One might have expected natural selection to have weeded out any inclination to engage in imaginary worlds rather than the real one,” writes Steven Pinker, a Harvard University evolutionary psychologist, in the April 2007 issue of Philosophy and Literature. Pinker goes on to argue against this claim, positing that stories are an important tool for learning and for developing relationships with others in one’s social group. And most scientists are starting to agree: stories have such a powerful and universal appeal that the neurological roots of both telling tales and enjoying them are probably tied to crucial parts of our social cognition.

As our ancestors evolved to live in groups, the hypothesis goes, they had to make sense of increasingly complex social relationships. Living in a community requires keeping tabs on who the group members are and what they are doing. What better way to spread such information than through storytelling?

Indeed, to this day people spend most of their conversations telling personal stories and gossiping. A 1997 study by anthropologist and evolutionary biologist Robin Dunbar, then at the University of Liverpool in England, found that social topics accounted for 65 percent of speaking time among people in public places, regardless of age or gender.
Anthropologists note that storytelling could have also persisted in human culture because it promotes social cohesion among groups and serves as a valuable method to pass on knowledge to future generations. But some psychologists are starting to believe that stories have an important effect on individuals as well—the imaginary world may serve as a proving ground for vital social skills.

“If you’re training to be a pilot, you spend time in a flight simulator,” says Keith Oatley, a professor of applied cognitive psychology at the University of Toronto. Preliminary research by Oatley and Mar suggests that stories may act as “flight simulators” for social life. A 2006 study hinted at a connection between the enjoyment of stories and better social abilities. The researchers used both self-report and assessment tests to determine social ability and empathy among 94 students, whom they also surveyed for name recognition of authors who wrote narrative fiction and nonnarrative nonfiction. They found that students who had had more exposure to fiction tended to perform better on social ability and empathy tests. Although the results are provocative, the authors caution that the study did not probe cause and effect—exposure to stories may hone social skills as the researchers suspect, but perhaps socially inclined individuals simply seek out more narrative fiction.

In support of the idea that stories act as practice for real life are imaging studies that reveal similar brain activity during viewings of real people and animated characters. In 2007 Mar conducted a study using Waking Life, a 2001 film in which live footage of actors was traced so that the characters appear to be animated drawings. Mar used functional magnetic resonance imaging to scan volunteers’ brains as they watched matching footage of the real actors and the corresponding animated characters. During the real footage, brain activity spiked strongly in the superior temporal sulcus and the temporoparietal junction, areas associated with processing biological motion. The same areas lit up to a lesser extent for the animated footage. “This difference in brain activation could be how we distinguish between fantasy and reality,” Mar says.

I think that the metaphor of the story as a "flight simulator" is perhaps the single most useful description of literature's value that I have yet come across. But with that realization comes the painful truth that literature as promoted by the modern establishment has largely failed its constituency. By focusing on originality at the expense of meaning, and on alienation and nihilism at the expense of integration and enduring values, the critic establishment has promoted simulators that train pilots to crash their planes rather than arrive safely. The literary establishment would have us all become voyeuristic viewers of the human condition, reveling in fiery crash footage rather than ennobling stories of human achievement.

Repent! The end of the world is at hand!

Global Warming is the new Apocalypse, and environmentalists the new preachers of gloom and doom urging us to adopt sackcloth and ashes. From the Food Climate Research Network (food climate?) we learn that only by strict rationing of meat and other foods of "low nutritional value" can we hope to avoid a runaway climate catastrophe.

People will have to be rationed to four modest portions of meat and one litre of milk a week if the world is to avoid run-away climate change, a major new report warns.

The report, by the Food Climate Research Network, based at the University of Surrey, also says total food consumption should be reduced, especially "low nutritional value" treats such as alcohol, sweets and chocolates.

It urges people to return to habits their mothers or grandmothers would have been familiar with: buying locally in-season products, cooking in bulk and in pots with lids or pressure cookers, avoiding waste and walking to the shops - alongside more modern tips such as using the microwave and internet shopping.

The report goes much further than any previous advice after mounting concern about the impact of the livestock industry on greenhouse gases and rising food prices. It follows a four-year study of the impact of food on climate change and is thought to be the most thorough study of its kind.

Tara Garnett, the report's author, warned that campaigns encouraging people to change their habits voluntarily were doomed to fail and urged the government to use caps on greenhouse gas emissions and carbon pricing to ensure changes were made. "Food is important to us in a great many cultural and symbolic ways, and our food choices are affected by cost, time, habit and other influences," the report says. "Study upon study has shown that awareness-raising campaigns alone are unlikely to work, particularly when it comes to more difficult changes."

The report's findings are in line with an investigation by the October edition of the Ecologist magazine, which found that arguments for people to go vegetarian or vegan to stop climate change and reduce pressure on rising food prices were exaggerated and would damage the developing world in particular, where many people depend on animals for essential food, other products such as leather and wool, and for manure and help in tilling fields to grow other crops.

Instead, it recommended cutting meat consumption by at least half and making sure animals were fed as much as possible on grass and food waste which could not be eaten by humans.

It still amazes me that the people who envision a world government able to control social and economic behavior by fiat are the same people who forswear any coercive use of military might. Nor do they see any disconnect between globally integrated political action and populaces converted to local and regional modes of economic production and consumption.

Only highly educated minds lose the ability to connect dots, it seems.