Archive for the ‘Philosophy’ Category



(excerpted from Spiritual Snake Oil: Fads & Fallacies in Pop Culture, by Chris Edwards)

“[T]he discovery that mathematics is a good language for describing the Universe is about as significant as the discovery that English is a good language for writing plays in.”

—John Gribbin (from Schrödinger’s Kittens and the Search for Reality)

“Everything zen, everything zen; I don’t think so.”  —G.W. Bush

Robert Pirsig, author of Zen and the Art of Motorcycle Maintenance, deserves a lot of credit for getting a wide readership interested in philosophy; unfortunately he also deserves some of the blame for creating a market in which non-material philosophers and gurus thrive. After reading his book, I found myself thinking about where he went wrong, and eventually wrote an essay about his mistakes. This led me to start reading other pop philosophy and pop science books with the intent of seeing if their authors made the same mistakes as Pirsig.

During that process, I remembered having read, years before I studied logic, a critique of skepticism and science in a Michael Crichton book called Travels. At the time I first read Crichton’s speech/essay, I thought he made some good points. Upon returning to it, however, the flaws in his arguments were obvious.

Both Pirsig and Crichton are/were hyper-intelligent individuals. But that’s beside the point. Logic addresses arguments, not people, and even the hyper-intelligent make mistakes.

Robert Pirsig, author of the wildly popular and perennial bestseller, Zen and the Art of Motorcycle Maintenance, can be seen as the founding father of modern pop philosophy. Pirsig may also be the first modern writer to rework old religious fallacies into mysticism/New Ageism. Many of his errors have been repeated by modern day gurus and shamans such as Deepak Chopra. Pirsig’s book, first published in 1974, sought to undermine scientific thinking and created a cult-like audience of followers who persist in believing in Pirsig’s non-material claims.

Those who doubt Pirsig’s continuing influence might consider Mark Richardson’s recently released book, Zen and Now: On the Trail of Robert Pirsig and the Art of Motorcycle Maintenance. The author of Zen and Now, like so many of Pirsig’s devotees, traveled Pirsig’s famous motorcycle route. I too would like to follow Pirsig’s path, but with a different intention. I’d like to provide maintenance for his logic. Perhaps debunking Pirsig, even at this late date, will be helpful in addressing the claims of the many pop philosophers and gurus who have begun writing for the niche market that he created.

In the Introduction to the 1999 paperback edition of Zen and the Art of Motorcycle Maintenance, Pirsig mentioned schizophrenia. In reference to his own battles with what appears to be some version of split personality disorder, he wrote: “There is a divided personality here: two minds fighting for the same body, a condition that inspired the original meaning of ‘schizophrenia.’” The more psychologically correct definition of schizophrenia is the inability of an individual to distinguish between the images in his head and images in the world. When this condition is chronic, it is defined as a mental disorder. When it is selective, we call it faith. Pirsig’s philosophical mistakes are all schizophrenic in that he cannot always tell the difference between things that merely exist in the mind and things that exist in the world. New Age philosophers often try to distance themselves from their more dogmatic religious cousins. However, a close examination of Pirsig’s writing shows that the errors he makes are carnival-mirror distortions of those that plague religion.

In his book, which Pirsig informs us is a “Chautauqua,” a kind of long philosophical discourse told through an individual narrative, the central philosophical theme is Pirsig’s search for something that falls outside of the traditional philosophical arena. His alter ego “Phaedrus” (Pirsig’s personality before a long bout with mental illness) became consumed with the concept of “Quality” and went into a deep cavern of philosophical thought in search of what it meant.

In order to prevent his search from becoming a scientific quest, Pirsig makes a few clumsy attacks on scientific materialism, otherwise known as atheism. This brief dismissal has an outsized importance in his book. Once he has gotten those pesky rules of science out of the way, he is free to meander through the mystical and philosophical caverns until he finds his Quality—a strange trip, given the fact that he doesn’t even bother to define it.

Here’s a sample passage:

Phaedrus felt that…scientific materialism was by far the easiest to cut to ribbons. This, he knew from his earlier education, was naïve science. He went after it…using the reductio ad absurdum. This form of argument rests on the truth that if the inevitable conclusions from a set of premises are absurd then it follows logically that at least one of the premises that produced them is absurd. Let’s examine, he said, what follows from the premise that anything not composed of mass-energy is unreal or unimportant.
He used the number zero as a starter. Zero, originally a Hindu number, was introduced to the West by Arabs during the Middle Ages and was unknown to the ancient Greeks and Romans. How was that? he wondered. Had nature so subtly hidden the zero that all the Greeks and all the Romans—millions of them—couldn’t find it? One would normally think that zero is right out there in the open for everyone to see. He showed the absurdity of trying to derive zero from any form of mass-energy, and then asked, rhetorically, if that meant the number zero was ‘unscientific.’ If so, did that mean that digital computers, which function exclusively in terms of ones and zeros, should be limited to just ones for scientific work? No trouble finding the absurdity here. (297-298)

The problem with this passage is that Pirsig reduced the wrong argument to absurdity—his own.

First of all, the number zero was invented, not discovered, in the same way that Newton invented, not discovered, calculus and Darwin invented, not discovered, evolutionary theory. This does not mean that moving objects began with Newton or that evolution began with Darwin; it merely means that humanity finally created language that could describe real-world phenomena.

The notion that the Greeks and Romans could not see zero is about as significant as saying that the citizens of a landlocked country could not see a ship. In his wonderful book Zero: The Biography of a Dangerous Idea, Charles Seife pointed out that Greek mathematics concerned itself primarily with geometry because it was useful for farming and building. The Greeks could not conceive of negative landholdings, for example. The concept of zero was created sometime during the 5th or 6th century, under India’s Gupta Dynasty, when Hindu thinkers began to contemplate the infinite and the void. Gupta mathematics was impressive, and the calculations it enabled amounted to a scientific revolution.

This being said, it would not be proper to say that Indian mathematics was right and Greek mathematics was wrong. This would be like saying that the French language is right and German is wrong. What can be said is that Indian mathematics is more expressive than Greek.

The Greeks seem not to have spent much time contemplating the infinite or the void, which is why they had no names for them. The Hindus, driven by a religion that encouraged contemplation of such things, did. Similarly, Central African tribesmen could hardly be expected to have a word for snow. Yet snow, the infinite, and the void exist (or, in the case of the last, don’t exist, though the concept does). It is only when cultures become aware of things for which they have no terms that the mathematical and linguistic “names” for them are invented or borrowed. This occurs all the time. When Americans first encountered Mexican salsa, they adopted not only the sauce but the word for it as well.

If we were given a certain limited amount of sensory data—say the observation of the sun peeking over the horizon every morning—we could develop two different mathematical models, or languages, to describe this phenomenon: the Ptolemaic (Earth centered) and the Copernican (sun centered).

At first, the Ptolemaic view and the Copernican view would both suffice, and there would be no way of saying which better described the observed phenomena. However, let us say that we get a new piece of sensory data, as Galileo did when he used his telescope to see the orbital patterns of the moons of Jupiter, and that one of these models more accurately predicts and describes these new facts; then we would be able to say that one model was the better descriptor of all the facts.

The Copernican “theory” is more descriptive of sensory data and gives us a more accurate description of what is really happening in the universe. Thus, it displaced the Ptolemaic version. If we understand this we can see that Zeno’s famous paradox, for example, is not a paradox at all. (Zeno asked how, if you go half the distance to a goal, then half of that distance, then half of that distance, etc., you could ever arrive at the goal.) Zeno was simply showing the Greeks that their mathematics (devoid of zero) had no way of adequately describing movement.
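(A quick illustration of my own, using the modern machinery of limits that Greek mathematics lacked: the infinitely many ever-halving steps sum to a finite distance,

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = \sum_{n=1}^{\infty} \frac{1}{2^n} = 1, \]

so the runner does reach the goal; the “paradox” dissolves once the language can describe the limit.)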

Modern mathematics, far from being a hard objective “thing” is instead a mish-mash of concepts that arose from a process of cultural synthesis (almost entirely in Eurasia, where cultures were easily able to intermesh because of war and trade). The Greeks contributed geometry; the Gupta Indians the numbers 0-9 and the decimal system; the Muslims gave us algebra; the English gave us physics and calculus; and the Germans contributed the theory of relativity and quantum mechanics. Each time, a culture’s language was adopted and added not because it was “right,” but because it was more descriptive of objective phenomena and therefore a “better” language.

It is important to note that in his “Chautauqua,” Pirsig devotes several pages to the mathematician Poincaré (1854–1912) and the supposed mathematical crisis of his time, which involved the “discovery” that different types of mathematical language—the Euclidean geometry of the Greeks and the non-Euclidean geometries of Lobachevsky and Riemann—could each be used. Pirsig writes:

We now had two contradictory visions of unshakable scientific truth, true for all men of all ages, regardless of their individual preferences. This was the basis of the profound crisis that shattered the scientific complacency of the Gilded Age. How do we know which one of these geometries is right? If there is no basis for distinguishing between them, then you have a total mathematics which admits logical contradictions. But a mathematics which admits logical contradictions is not mathematics at all. The ultimate effect of the non-Euclidian geometries becomes nothing more than a magician’s mumbo jumbo in which belief is sustained purely by faith! (335)

We see here that Pirsig is again confused by the nature of mathematics. We cannot ask the question “which of these geometries is right” any more than we can ask whether Portuguese or Inuit is the “right” language. What we can ask is this: which is more descriptive of the sensory data we have? And, a paragraph down, Pirsig answers his own question: “According to the Theory of Relativity, Riemann geometry best describes the world we live in.” (335)

Reification is not a small mistake. Pirsig’s claim that computers run on Leibniz’s binary code, which works through a series of zeros and ones, is not helpful. Does he actually think that computers run on concepts? There are no zeros in a computer, only a series of electrical “holders” that are either switched on or off. Humans simply describe this in terms of zeros and ones. Again, this description is subjective.
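Here is a toy sketch of my own (the voltage values and the 1.5-volt threshold are invented for illustration) of how the “zeros and ones” are a description we lay over the hardware’s physical states, not something the hardware contains:

# Hypothetical readout: the hardware gives us only voltage levels.
voltages = [0.1, 3.2, 3.3, 0.0, 3.1, 0.2, 0.1, 3.3]   # volts; invented values

# "Zero" and "one" are labels we apply via an arbitrary threshold.
bits = [1 if v > 1.5 else 0 for v in voltages]         # [0, 1, 1, 0, 1, 0, 0, 1]

# Further layers of description of the same physical state:
as_integer = int("".join(str(b) for b in bits), 2)     # 105
as_character = chr(as_integer)                         # 'i'

print(bits, as_integer, as_character)

The same voltages can be read as a bit pattern, as the number 105, or as the letter “i”; none of these descriptions is inside the machine, which is exactly the point being made here about reification.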

Once this is understood, all of Pirsig’s philosophy falls apart. Consider this oft-quoted passage of a conversation between him and his son:

…the laws of physics and of logic…the number system…the principle of algebraic substitution. These are ghosts. We just believe in them so thoroughly they seem real.”
“They seem real to me,” John says.
“I don’t get it,” says Chris.
So I go on. “For example, it seems completely natural to presume that gravitation and the law of gravitation existed before Isaac Newton. It would sound nutty to think that until the seventeenth century there was no gravity.”
“Of course.”
“So when did this law start? Has it always existed?”
John is frowning and wondering what I’m getting at.
“What I’m driving at,” I say, “is the notion that before the beginning of the earth, before the sun and the stars were formed, before the primal generation of anything, the law of gravity existed.”
“Sure.”
“Sitting there, having no mass of its own, no energy of its own, not in anyone’s mind because there wasn’t anyone, not in space because there was no space either, not anywhere—this law of gravity still existed?”
Now John seems not so sure.
“If that law of gravity existed,” I say, “I honestly don’t know what a thing has to do to be nonexistent. It seems to me that the law of gravity has passed every test of nonexistence there is. You cannot think of a single attribute of nonexistence that that law of gravity didn’t have. Or a single scientific attribute of existence it did have. And yet it is still ‘common sense’ to believe that it existed.”
John says, “I guess I’d have to think about it.”
“Well, I predict that if you think about it long enough you will find yourself going round and round and round and round until you finally reach only one possible, rational, intelligent conclusion. The law of gravity and gravity itself did not exist before Isaac Newton. No other conclusion makes sense.
“And what that means,” I say before he can interrupt, “and what that means is that the law of gravity exists nowhere except in people’s heads! It’s a ghost! We are all of us very arrogant and conceited about running down other people’s ghosts but just as ignorant and barbaric and superstitious as to our own.” (41–42)

Again, Pirsig mistakes the law of gravity, a description, for a thing. Of course the law of gravity could not have existed before there was anything: without matter there would be no objects to attract each other. If we define the “law of gravity” as a description of real-world phenomena, in the same way that the word “rock” is used to describe a hunk of granite, then no, the law of gravity did not exist before Newton. However, if we define the law of gravity as the attraction that objects, depending on mass and distance, have for each other, then of course it existed—just as sound waves came from the falling tree even if no ears were around to hear them.

Pirsig might as well be saying that the word “rock” was floating around in the universe before there were ever rocks, or that poems about flowers existed before there were flowers or poets to write about them. He might as well be Plato looking at the shadows in his cave.

This fallacious thinking is what eventually leads him to this conclusion about his central conceit, which is the search for Quality:

[Q]uality is not just the result of the collision between subject and object. The very existence of subject and object themselves is deduced from the Quality event. The Quality event is the cause of the subjects and objects, which are then mistakenly presumed to be the cause of the Quality! Now he had that whole damned evil dilemma by the throat. (304)

Actually, he was just strangling a reification, holding a shadow in a headlock. Because Pirsig so often commits the philosophical sin of reification, he turns something called “Quality,” which is elusive by definition, into a kind of creator god. It existed before matter, apparently. This is like saying that the painting of a mountain created both the painter and the mountain. Quality is a subjective term in that it differs from person to person. The fact that most of us recognize Quality in the same way is not particularly remarkable given that all DNA-based humans have far more similarities than differences. Neither is it remarkable that separate human civilizations developed mathematics, language, mythologies, and religions. The mistake is reifying the descriptions of these human developments, such as when people mistake their descriptions of gods for actual gods. Pirsig’s “philosophy” is different only in degree, not in kind, from the “philosophy” of any other religion.

Understanding Pirsig’s elementary mistake—reification of descriptions—is an essential first step in understanding the fallacies of those who follow in his footsteps.


“Socrates said, ‘The unexamined life is not worth living.’ He was right. But the examined life is no bargain, either.”

–Woody Allen, Cafe Society


Stanislav Andreski

“So long as authority inspires awe, confusion and absurdity enhance conservative tendencies in society. Firstly, because clear and logical thinking leads to a cumulation of knowledge (of which the progress of the natural sciences provides the best example) and the advance of knowledge sooner or later undermines the traditional order. Confused thinking, on the other hand, leads nowhere in particular and can be indulged indefinitely without producing any impact upon the world.”

–Stanislav Andreski, Social Sciences as Sorcery, quoted by Alan Sokal and Jean Bricmont in the introduction to Fashionable Nonsense: Postmodern Intellectuals’ Abuse of Science


Neil deGrasse Tyson

“It’s wrong to say ‘You have to be good at it.’ I’d rather say, ‘You have to want to be good at it.’ And then ambition kicks in. And ambition can override whether or not your first foray was unpleasant or you didn’t do well or maybe you flunked an exam. But if you really like it you will spend time learning it. That’s what liking something means. Maybe too many of us believe that we like something because we’re good at it. And sure, there’re plenty of cases where that’s so. But why deny yourself the pleasure of a life of pursuit, of something that brings pleasure?”


Skeptic Michael Shermer


“But isn’t the history of science . . . strewn with the remains of failed theories such as phlogiston, miasma, spontaneous generation and the luminous aether? Yes, and that is how we know we are making progress. The postmodern belief that discarded ideas mean that there is no objective reality and that all theories are equal is more wrong than all the wrong [scientific] theories combined.”

–Michael Shermer (editor of Skeptic), “At the Boundary of Knowledge,” Scientific American, September 2016

* * *

For an amusing illustration of the pretentious vacuity of postmodernism, see physicist Alan Sokal’s hoax article, “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity,” which he describes as “a pastiche of left-wing cant, fawning references, grandiose quotations, and outright nonsense … structured around the silliest quotations [by postmodernists he] could find about mathematics and physics.” Sokal submitted it to a prominent — “prestigious” would be inaccurate — academic postmodernist journal, Social Text, which thought highly enough of the piece to publish it in its Spring/Summer 1996 “Science Wars” issue.

As Richard Dawkins noted in Nature:

Sokal’s paper must have seemed a gift to the editors because this was a physicist saying all the right-on things they wanted to hear, attacking the ‘post-Enlightenment hegemony’ and such uncool notions as the existence of the real world. They didn’t know that Sokal had also crammed his paper with egregious scientific howlers, of a kind that any referee with an undergraduate degree in physics would instantly have detected. It was sent to no such referee. The editors, Andrew Ross and others, were satisfied that its ideology conformed to their own, and were perhaps flattered by references to their own works. This ignominious piece of editing rightly earned them the 1996 Ig Nobel prize for literature.

For a bit of fun, Communications From Elsewhere has a postmodern text generator, and you can generate your own computer science postmodern masterpiece with the help of an online gibberish generator created by pranksters at MIT. Just fill in the names of the “authors,” and voilà: a correctly formatted “academic” paper that makes sense only occasionally and inadvertently.

I pulled up the generator, fed in the names of a few lesser known cult leaders and serial killers (yes, there is overlap) and came up with a paper titled:

Decoupling the Turing Machine from Consistent Hashing in Byzantine Fault Tolerance

Authored by
Fritz Haarmann, Michel Petiot, Charles Dederich, Ervil LeBaron and Luc Geret

Abstract
Many physicists would agree that, had it not been for linked lists, the deployment of the Ethernet might never have occurred. In this paper, we confirm the investigation of Byzantine fault tolerance. In this position paper, we concentrate our efforts on validating that public-private key pairs and Lamport clocks can agree to surmount this grand challenge.

So, there you go. Have fun with the postmodern text generator and the computer-science gibberish generator.  (Thanks to UA astronomer Jess Johnson for alerting me to the latter wonderful resource.)


(My friend Emmett Velten was murdered five years ago. The police never found his killer. As a small way of keeping his memory alive, here’s a paragraph from his essay, “Postmortem for Postmodernism.” It provides a good taste of the man and his work. The world is a poorer place for his loss.)

 * * *

Postmodern constructionism’s proponents modestly see it as a paradigm shift away from the Great Satan of the modern-era therapies. It arose when the philosophical movement, postmodernism, oozed from humanities departments into the psychotherapy and counseling realm. Various dates are said to mark the start of the modern era of Western culture, with 1900–1920 or so receiving the most votes. The modern era in which science supposedly reigned supreme, began to falter in 1976—or so postmodernists like to think—when Jacques Derrida, the godfather of postmodernism, published his incomprehensible magnum opus, Of Grammatology. Partly due to long-term resentment against logic, science, and reality, and partly because the kindred sicknesses of political correctness and multiculturalism were just beginning to incubate in premorbid professorial body cavities, humanities departments of American and European universities and colleges contracted postmodernism. Pretentious dissertations, learned papers and books, all of them unhinged and anti-science, drew attention to postmodernism and frightened normal people both in and outside the groves of academe.


(Quantum Night, by Robert J. Sawyer; Ace, 2016, 351 pp., $27.00)

reviewed by Zeke Teflon

There’s a lot to like and a lot to dislike in Canadian science fiction writer Robert Sawyer’s new novel.

On the positive side, this is the most ambitious sci-fi novel I’ve read in ages. The writing is skillful — among other things, seamlessly switching between first person and third person narration — and the primary character is believable and sympathetic, if a bit on the irritating side. Sawyer uses the novel as a platform to talk intelligently about philosophical and ethical big issues — something all too rare in contemporary science fiction: Quantum Night makes you think. As well, Sawyer obviously did a thorough job of researching the novel’s background, the supposed quantum-related nature of consciousness — an area in which I’m totally out of my depth.

On the negative side, it’s difficult to buy the political background in which Quantum Night is set, especially that in the U.S. border areas (where I live). As well, Sawyer sets up an essential (for the secondary plot) series of events (riots) for which he provides no explanation. Beyond that, from the point of view of psychology (an area in which I do know a bit), it’s very difficult to buy Sawyer’s underlying deterministic premise about the nature of consciousness and how it varies in the population. Worse, Sawyer provides the most nauseatingly graphic description of violence I’ve ever read; I found the scene so disturbing that I put down the book for several days before deciding that I really did want to see how the novel concluded.

Yet despite the gruesome violence, Sawyer adheres to the standard sci-fi bowdlerization of sexual scenes. Why? Why is sex more taboo than explicit, horrifying violence in sci-fi? (The only exceptions to that prudishness that immediately come to mind are some of the works of Walter Mosley and Richard K. Morgan.)

Quantum Night begins with a cringe-inducing series of scenes in which the protagonist, academic psychologist Jim Marchuk, a specialist in diagnosing psychopathic tendencies, learns that he has no memory of six months of his life as an undergraduate, and that he apparently did terrible things — things totally out of character — during those six months.

Marchuk shortly reconnects with his girlfriend from those lost six months, Kayla Huron, a quantum physicist who, to quote the endflap, “has made a stunning discovery about the nature of human consciousness,” and not coincidentally has developed what she considers a foolproof method of diagnosing psychopathy.

Her discovery is that the quantum state of electrons in certain portions of the brain determines whether a person is a “philosopher’s zombie” (“p-zed”: a non-self-aware being with no inner voice who merely responds to external stimuli; in Sawyer’s schema, 4/7 of the population), a psychopath (a self-aware being without empathy; according to the schema, 2/7 of the population, an astoundingly high proportion, far higher than the common estimates of 1% to 5%), or a self-aware being with empathy (1/7 of the population). I have essentially no knowledge of quantum physics or brain physiology, so I have no way to judge whether this is plausible; however, Sawyer always does his homework, so I suspect (in terms of quantum physics and brain physiology) it is, if only barely. (The breakdown of the numbers of p-zeds, psychopaths, and self-aware, empathetic people is purely arbitrary, purely a plot device.)

There are, however, nonphysiological reasons to doubt that it is plausible. If people were pure behavioral animals reacting mindlessly to external stimuli (p-zeds), they wouldn’t react radically differently to identical stimuli and wouldn’t be almost universally at least somewhat emotionally disturbed. (We’re talking about the garden varieties of emotional disturbance here, such as anxiety and depression, not trauma-induced PTSD.) Pertinently, the most effective type of psychotherapy, cognitive behavioral therapy, is, to simplify, based on the premise that what people (often subconsciously) tell themselves largely determines their emotions: change what you tell yourself — deliberately tell yourself rational instead of irrational things — and you’ll minimize your emotional disturbance. And it works. So there go your “philosopher’s zombies,” who by definition don’t tell themselves anything.

Sawyer sets all this against a backdrop of ever-worsening rioting (for no apparent reason) in both Canada and the U.S., pogroms against Mexicans in Texas (based on a law restricting legal protection — including protection against murder — to U.S. citizens), and belligerent psychopaths in both the White House and Kremlin. (What else is new?)

The unmotivated rioting is difficult to buy, the pogroms are equally difficult to buy, and it’s inconceivable that any U.S. court, no matter how reactionary, would ever declare such a law redefining murder constitutional, even in Texas. And if pogroms ever would break out down here along the border, it’s absolutely certain that there would be armed resistance; people would not meekly accept it.

The reason for this dire background is to set up a secondary plot — what can our heroes do about these things?  This is unfortunate, as the primary plot — Marchuk’s journey of discovery about what he did and why — is more than adequate, and the secondary plot seems implausible.

Even worse, much of the philosophical discussion in Quantum Night revolves around utilitarianism, the philosophy that ethical behavior is that which promotes the greatest good for the greatest number. Sawyer seems very much in favor of this concept. So far so good. However, he goes beyond this and seems to be making the case that it’s okay, in fact ethically necessary, to play god with the lives of other people as long as you consider it necessary to the “greater good.”  In other words, the ends justify the means. (My apologies to Sawyer if I’m misreading him, but I don’t think I am.)

This is a horrendous belief, one that is an integral part of the foundation of some of the worst forms of totalitarianism. Leninism, a conspicuously utilitarian political philosophy (which is supposed to produce the greatest good for the greatest number), is the example par excellence, and its terrible results when imposed are too well known to enumerate here. Suffice it to say that a very large number of human problems, both individual and societal, are a direct result of those (such as Sawyer’s protagonist) who consider themselves more enlightened than the great unwashed masses and play god with the lives of others — for their “own good,” of course.

Still, despite its warts, Quantum Night is well worth reading. The writing is first rate, Sawyer provides much thought-provoking discussion of philosophical and ethical problems (mostly in chapter introductions recounting Marchuk’s class lectures), the characters are believable and somewhat sympathetic, and the plot will have you on the edge of your seat throughout most of the book.

Recommended.

* * *


Zeke Teflon is the author of Free Radicals: A Novel of Utopia and Dystopia. He’s currently working on the sequel and on an unrelated sci-fi novel in his copious free time.

The first six chapters of Free Radicals are available here in PDF form.