Science

23 SEPT 2023

(rev. 9/25/23)

Artificial Intelligence

There has been a spate of articles across the various media about the arrival of AI, short for artificial intelligence. It is interesting that our world has only this terminology for what is being done within the cybernetics communities to solve puzzles across the realms of human endeavor.

The first irony of the term "AI" is that we still do not have a detailed idea of what "intelligence" is—other than "reasoning capacity."

Reasoning is the process of accepting and identifying sensory inputs, relating them to experience and the logics implied by those relationships, then predicting the effect of the input on the immediate to very distant future, and expressing that to others by language, art, mathematics, and physical action.

The less well-understood irony is that "artificial," meant to say "not done by normal agents," also means "unreal," "counterfeit," "false," "phony," and scores of other negatively valenced ideas.

ChatGPT has gotten a lot of attention recently because it seems to "have a mind of its own." What it does, in response to a cue or query or instruction from a human being, is assemble sentences word by word, by surveying millions of sentences available online to see what words typically follow one another. It seems uncanny to us because when we compose sentences ourselves it usually takes much longer, even though the raw mental processing is exceedingly fast. Clarifying one's purpose and direction is what takes time, and that requires humans to pick through a minefield of things that are useless, dangerous, taboo, uncultured, and so forth. ChatGPT just assembles "typical" sentences containing information.
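To make that concrete, here is a toy sketch in Python of the word-by-word idea. It is emphatically not how ChatGPT actually works (real systems use neural networks over learned probabilities, not raw counts, and operate on "tokens" rather than words); the tiny corpus is invented for illustration.

```python
import random
from collections import defaultdict

# Record, for each word in a tiny invented corpus, the words seen to follow it.
corpus = "the pod hears the voice and the pod answers the question".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

# Assemble a "sentence" word by word, sampling each next word from what
# typically followed the previous one. Frequent successors get picked more
# often because they appear more times in the list.
word, sentence = "the", ["the"]
for _ in range(6):
    choices = following.get(word)
    if not choices:
        break
    word = random.choice(choices)
    sentence.append(word)
print(" ".join(sentence))
```

Scale that up from an eleven-word corpus to a sizable fraction of the public internet and you have the outline, though only the outline, of the trick.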

I have two Google "pods" in my home. Each cost $19 at BestBuy on sale. I say, "Okay, Google, how old is Jamie Raskin?" The pod "hears" (detects) my voice (or anyone's), analyzes the sound frequencies, the durations and volumes at each frequency, and the brief moments of silence, refers this information to a database of sound combinations, and "understands" the "Okay, Google" as a cue that it should pay attention to what follows. Its sound database detects the common words "how" and "old," and this launches an inquiry subroutine that Google performs millions of times a day, which then "hears" "Jamie Raskin," a name it has "heard" thousands of times, because he is famous. The subroutine quickly goes to Wikipedia, because Google's experience is that Wikipedia information is usually reliable and acceptable, and it finds "Jamie Raskin," just as you would by typing into your own computer. It finds Raskin's age or birthday stated there, observes the letters and numbers, refers them to a vocal database in the order Wikipedia presents them, and then a response routine says to me, "Jamie Raskin is 60 years old." Then another routine may chime in and say, "Do you want more information about Jamie Raskin?", having "noticed" that there are a lot more words at the Wiki, or that other inquiries have led to other questions about him.
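Sketched as code, the pipeline the pod walks through looks roughly like this. Every function and the one-entry "knowledge base" below are hypothetical stand-ins of my own invention, not Google's actual (proprietary) machinery:

```python
# A toy, runnable stand-in for the stages described above: wake-word cue,
# intent matching, knowledge-base lookup, and a spoken-style answer.
KNOWLEDGE_BASE = {"jamie raskin": {"age": 60}}  # stand-in for the Wikipedia lookup

def detect_wake_word(utterance: str) -> bool:
    # Real pods match sound frequencies and durations; here we just match text.
    return utterance.lower().startswith("okay, google")

def match_intent(utterance: str):
    # "how old" is what launches the age-query subroutine described above.
    text = utterance.lower()
    if "how old is" in text:
        return "age_query", text.split("how old is")[-1].strip(" .?!")
    return None, None

def handle(utterance: str) -> str:
    if not detect_wake_word(utterance):
        return ""  # stay idle until the cue is heard
    intent, entity = match_intent(utterance)
    if intent == "age_query" and entity in KNOWLEDGE_BASE:
        return f"{entity.title()} is {KNOWLEDGE_BASE[entity]['age']} years old."
    return "Sorry, I don't know."

print(handle("Okay, Google, how old is Jamie Raskin?"))
```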

Does the entity with the pleasant femme voice of the Google pod "know" what it just accomplished? The answer is a surprising "yes," because if you ask Google to repeat what it just did, it will repeat it. So for a short time, perhaps, Google has memory of my account and what it did. Moreover, the meta-information from the Raskin query is also logged into a database, opaque to you and me, at Google "Central," which is used to kindle a speedier response to the same question from other pod owners.

Does Ms. Google "know" what it just accomplished the way we do? Certainly not, because the information is normally not labeled with a valence, an emotional score. Are they working on giving some kinds of answers emotional scores? You can bet they are. So, for instance, ChatGPT has to figure out what the next word in a sentence should be, given that ChatGPT chose the previous word and has a mission to write a paragraph about Jamie Raskin's bandanas. ChatGPT "understands" the valences of word information from its information sources (Wikipedia, Raskin's political website, lots of news-media reportage, and so forth) by noticing key words, like "friendly," "astute," "cancer," "respected," "Progressive," "Democrat," all of which are registered by trillions of referrals to ChatGPT's valence metadata and perhaps other metadata available to it. With that information, ChatGPT's algorithms prompt a context-appropriate valence texture into the selection of the next word in its sentence—unless told not to by the human who made the query in the first place.
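What valence-weighted word choice might look like, again as a hand-rolled sketch with invented scores; in a real model these associations are learned from data, not tabulated like this:

```python
import random

# Invented valence scores for illustration: positive numbers read as warm,
# negative as harsh. Real models learn such associations from their sources.
VALENCE = {"friendly": 0.8, "astute": 0.7, "respected": 0.9,
           "divisive": -0.6, "embattled": -0.5}

def pick_next_word(candidates, target_valence=0.5):
    # Weight each candidate by how close its valence is to the target texture.
    weights = [1.0 / (1e-6 + abs(VALENCE[w] - target_valence)) for w in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

print(pick_next_word(list(VALENCE)))                      # tends toward warm words
print(pick_next_word(list(VALENCE), target_valence=-0.5)) # tends toward harsh ones
```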

The point of all this is that the world of information is such that with a flick of your wrist and a poke of your fingers you can telephone someone in Brunei on the island of Borneo or someone in Tanzania. About 85% of all human beings are theoretically accessible; that's about 6.8 billion people! Spread across the world are databases holding all the telephone numbers. Facebook has almost 3 billion subscribers. And these are just a thin film covering the whole of information about us and our civilizations and our science, literature, and everything else. We have assembled a roster of humanity, and theoretically anyone on the roster can call any other roster member.

The magic that makes AI possible is our modern technological wireless and conventional connectivity and the ability of cybernetic machines to sort through it "all" to find a Jamie Raskin for an essayist in California.

Some people are very frightened about AI, and I understand that the fear is real and righteous. There are people out there who have no ethics and no compunctions about harming or killing other humans or destroying valuable public and private property. They are the threat, and we do not yet have a good handle on who they are and why—certainly they are not all of us, or even a big number of us, but enough to make us scared and feel helpless. So, how they might devise and use AI is our concern.

CRISPR is out there for anyone—for less than $250—to mess around with human genes. Eugenics was never a basement hobby until now. You should now feel even more helpless. My point is that with the marvels of Google pods, ChatGPT, and CRISPR comes the responsibility of societies to abandon outdated and ineffective means of controlling these assets. The politics and practice of that is in terrible shape everywhere. We have got to change that, and soon.

The essay above would be mostly unintelligible to the people who designed and wrote the US Constitution! Bound to old principles of thinking and acting, we are sitting ducks for the ill-intentioned among us anywhere on this pale blue dot of a planet. With each new discovery the problem becomes more acute, and the resistance to change ever more delusional.

JB

Science


20 July 2023

Rethinking Our Assumptions

Weather Underground tells me today that Sunday, Monday, and Tuesday will all reach 100°F, so my personal thoughts are about doing before then what might be personally dangerous to do during those three days. And, of course, my thoughts wander all over the place about my peach trees and my "water-holes" for the original residents of this parcel (or at least their descendants). One thing leads to another, and I am back in my life-long thinking about thinking and consciousness. And I am not alone, it seems.

It is more than just interesting to contrast the logic of a criminal trial with the appetite the media have for trying to understand and describe it. Mens rea is "the intention or knowledge of wrongdoing that constitutes part of a crime, as opposed to the action or conduct of the accused" (the actus reus). It seems from the lawyers on television that certain crimes have been framed in law to require prosecutors to prove the mens rea, while others are written into the US Code not requiring it. Interesting, and what is behind that big difference?

Within a couple of breaths of considering this, I got to thinking about how differently people seem to think. I say "seem" because my telepathy is a very iffy thing, so I am assuming things from my own experience, and the double bind sits there defying me to say anything worthwhile about what or how Donald was thinking. Does he think in color, does his mostly absent mother's love whisper to him, does he think syllogistically, does he plot his plans on an internal spreadsheet, does he have synaesthesia so he can taste or smell a coming victory, how does he balance supposed wins against losses, does he even imagine losses?

My thought is that we should be able to examine the mental methodology employed by presumptive presidents before they are elected, for our own safety's sake. Well, it turns out that we (humanity) are hot on the trail of that elusive goal. In New Scientist magazine for July 22, 2023, there is a very interesting article, "Revealed: What your thoughts look like and how they compare to others". I think they are about to break past the Legos stage.

Now that we are beginning to get a vocabulary and syntax for these ideas, surely we should review the assumptions we have made about A) how we organized ourselves into this American culture, and B) how those we elect to govern us all measure up. Maybe they are not actually thinking at all! Or it may be that political corruption has a cure. It may be that having 50 little laboratory units (states) is counter-productive, because in laboratories there are still rules for the protection of human subjects, just as there are for laboratory animals. It may be that quantity has its own attention-demanding qualities when it comes to populations, to free speech, to billionaires living among us. Enjoy the article!

JB

Science


28 MAY 2023

Liberals v. Conservatives
~500 words

The June issue of Scientific American (one of several magazines I will not be renewing after roughly 60 years of subscribing, because they steadfastly refuse to acknowledge that the population of the nation is aging and that older people, subscribers like me, have a very difficult time reading 8pt grey type on white or pastel backgrounds) nevertheless has an interesting—if not conclusive—piece by Jer Clifton, Professor of Psychology at the University of Pennsylvania, "Many Differences between Liberals and Conservatives May Boil Down to One Belief". It is a relatively short article and I do recommend that you read it. Perhaps now.

The "one belief" Dr. Clifton refers to is a worldview concept, which strikes me as NOT as fundamental as Clifton seems to view it, BUT my objection is clearly one about the actual brain function that causes the phenomenon described, not the phenomenon itself. And, moreover, my favorite hypothesis about brain function is at least congruent with Clifton's research, as reported.

Professor Clifton and his editor say: "... that the main difference between the left and the right is whether people believe the world is inherently hierarchical." It takes some musing to accept "hierarchy" as a fundamental category, whereas the subtitle of the article is "Conservatives tend to believe that strict divisions are an inherent part of life. Liberals do not." "Strict divisions" seems more basic than the idea of ranking things. For instance, I could divide North America into its states and provinces from the Arctic to the Yucatan, strictly, but without ranking them along any dimension in a hierarchy. There are many dimensions available, so "divisions" is more fundamental. Be that as it may, the UPenn study is very interesting, and you should click on the link to "validated online survey" near the end of the paragraph beginning with "Our effort began ..." to get a better gist of the Penn project, whether you take any of the surveys or not.

Still, what we are hoping to do is find a method for getting people who believe that LGBTQ is shorthand for sexual abominations to see it instead as a handy reference to observed and to-be-respected natural minority modes of human sexuality, and to mention for emphasis: guns, anti-abortion and other forms of misogyny, immigrants, and racism. My first time through the article I felt the idea of "caste" leap off the page, especially as I understood hierarchy as fundamental to caste thinking, openly practiced in India and never (hardly ever) spoken of in America, but still present in England, France, Italy, and Spain.

Finally, I think it is good that Jer Clifton and others are working in academe to find a path out of the bitter divide we have now, not that it was not bitter earlier, especially in 1861, but it is freighted now with permissions, granted by Herr Trump, to be self-righteously rebellious and to act out violently and vocally.

JB

Science


5 MAY 2023

World Views: Identity and Status
~1600 words

"... According to evolutionary psychology ..., humans as a species have evolved to try to read one another's minds, in order to better cooperate and compete with one another. For this reason, 'the human intellect is extremely well-suited to thinking about other people, their problems, and the situations they get themselves into.' This would explain our interest in fictional characters: even when we know they aren't real, humans and human-like entities are endlessly fascinating to us...."

From the New York Review of Books, March 25, 2021: "The People We Know Best" by Evan Kindley, a discussion of the "reality" of fictional characters.

We are in a time—an epoch, they will say later on—in which we are discovering, slowly, that those others who do not act like we do or speak like we do, those who stand in relation to us as possible adversaries in the world, have organized their minds very differently from the way we have. In fact we have learned, once again, that literally millions of our possible adversaries cannot stand us for what is in, and for what is not in, our minds. On this date we have not the slightest idea how this predicament will resolve itself, if ever, and whether there is to be a winning side or whether even thinking about "winning" is dangerous, perhaps fatal. However, we think about their minds, alone at breakfast or in the shower or in the car driving to work or the store, and we imagine an outcome that we want and how and why such a thing is possible. In other words, Progress notwithstanding, humanity is still optimistically fractious, right down to the people living on our street.

What we know, almost for sure, is that they are doing and thinking the same things, but predicating their desired scenario on different things they believe themselves to be sure about. In either case their (and our) imaginations are guided by each person's worldview, by which I mean something about the organization of Mind predating and foundational to the person's current creed and convictions. I mean also that proto-ideologies are somewhat fluid during maturation and during upheavals in society, and the "viscosity" depends on characteristics within the worldview such as its maturity level: infantile, childish, adolescent, young and risk-agnostic, and later increasingly risk-averse as the person accumulates various kinds of equity in the world. Generally, the younger the person, the more fluid and less viscous is the evolution of Mind.

The root metaphors of worldviews are crucial, yet they can be transient and ephemeral or the opposite, virtually permanent. The key here is the context in which people find themselves, the culture and both the routine and unanticipated activities commanding the person's attention for survival, literal or virtual. A typical root metaphor for a worldview is that humans are (like) predatory animals. Wolves and bears are favorites for many Russians. Russian Orthodoxy suggests that humans are slaves of God. American mythology has a predatory eagle symbolizing the lofty freedom those birds "enjoy," but American capitalism sees humanity as a herd to be tended, if not actually cared for, then exploited. Very little attention is given to the obvious differences, such as language, technology, organized empathy, consciousness, and cooperative work of the kind ants and bees and beavers and cetaceans perform.

Symbolic ideas like the purity of the color White impact root metaphors or overtake and subsume them.

As infants we can only "announce" some need we feel. We have sight and audition and olfaction and tactile senses, but we do not have language, although there are parts of our brains that in good time will manage language. We notice that when we announce a need, quite often the need is met, or at least we become aware in a general way that we are being attended to, whether or not the need is being slaked. A "dominant need attendant" emerges, usually mom. As we gain control of sensing and of moving purposefully or nearly so, we develop at first a nebulous self-image, "the me," and there are parts of our brains evolved to accommodate this development. After a while we are vaguely aware of ourselves and somewhat less clear about others, with "mom" emerging as a separate being. When language develops we cease to be in-fant, without language.

As toddlers and then as school children we are taught by and/or observe our family members, other families, and animals, in pairs, or singly, or in herds "commanded" by prominent (alpha) members, male and female. As we observe, it becomes plain to us that our needs, including safety, are dependent on these others, so the tension between dependency and personhood and its selfish motives is established first. Then collaborative synergism—cooperation as a goal—begins to emerge, followed by simple forms of empathy as it becomes reasonable to conceive of others as similar to ourselves. Quite often something interferes with the emergence of cooperative culture. It is perhaps the micro-culture of the family, but it is always the larger culture encountered in the school years and inevitably in the repetitive chaos of the marketplaces of work and endeavor—of premeditated competition.

Competition is about survival: sometimes life is at stake, sometimes status or caste, sometimes position, sometimes happiness, usually more than one of these. There is a social economics of scarcity that impels competition and results in a win for one or some and a loss for others. There is only one football quarterback, one homecoming king for the girls to admire and one to "have," only one appointment to the Naval Academy set aside for our town. So the girls compete and the boys compete, and it often gets rough and hairy. In a perverse trick of Mind, competition doubles back on itself to inform worldviews that the point of life (rather than simply one fact among many) is competition—not cooperation—because of real or fabricated or imagined scarcities. Heroes of cooperation are rarely acknowledged. Hero is a noun in the rhetoric of competition, foisted on a broad scale as universally valid to emulate.

By the time an American is in high school the pressure to emulate a parent has reached a plateau of sorts, but the baggage of that contains big chunks of the worldview of the parent, expressed or not. Various -isms are transferred from parent to offspring: anti-semitism, pro-choice, reluctant globalism, and various affective behaviors related to narcissism, or their opposites or none at all, are but a few of the building blocks or empty hollows of a complex and sustainable worldview. In America the idea of White Supremacy began as a matter of fact, as Black slave-holding furnished the man-and-woman-power to build the nation from "scratch" through its threshold of the industrial age and onward into the relativistic nuclear age of robotics and mass personal communications. Nested into that originally was the British idea of indenture, which also created a hierarchy of caste and status, which meant various rules about who could compete with whom and about which persons were likely privileged to lead in a democracy littered with substantial illiteracy.

Slave-holding was and is the continuous original sin of America. It is a fact that the Black minority now wishes that the declining White majority would acknowledge it in all its gruesome, unvarnished details, in the hope that a catharsis will occur and the bad behaviors of White Supremacists will end. What the White Supremacists know is that their caste and status would change, and likely not for the better, were the sin to be acknowledged honestly. The worldview of White, European-heritage people sees the noumenal reality through that lens. The congealing factor that makes it viable as a political movement is the knowledge that there are millions of people with that view of things, suspended at the brink of what is probably a disaster of epic proportions, GIVEN that competitive drive obliterates cooperative empathy every time. But clearly, the situation has become so dire that the reflexive reach for the AR-15—ironically because life itself is at stake—is to ignore that noumenal reality is not binary; only paranoia is! It is not necessary that people be punished for having been taught they are privileged. It is only necessary that ALL people understand the privileges they have that they did not honestly earn, and begin to act accordingly. That includes all of us!

JB

Science


November 22

Critters, Consciousness, Time
~700 words

English is a wonderful hodge-podge of words and grammar and terrible things like the word "animal." It has been a pet gripe of mine for at least seventy years that the word we give to live creatures is fixed on the fact that, unlike rocks, they seem to move (and reproduce) on their own. It turns out that I have been guilty of associating "animal" with "animate," which is itself a mistake of etymology since, I just discovered, the etymology of "animal" is this:

Middle English: the noun from Latin animal, based on Latin animalis 'having breath' from anima 'breath'; the adjective via Old French from Latin animalis. (Google).

Clearly trees and flowers and algae breathe too, by the way, so it is time for some serious housekeeping of our vocabularies, which, I am going to suggest, have a lot to do with what we think or think we think.

Along these lines, there is a short, smart review article in New Scientist magazine that reports on the research of Lars Chittka at Queen Mary University of London and his book, The Mind of a Bee. Chittka describes the complex behaviors of bees like honey-bees and bumble-bees and comes to the conclusion that bees have consciousness, ergo minds to have it in.

There are millions and probably billions of human beings who believe that many critters have minds of their own and display behaviors analogous to those of humans, who experience consciousness and un-consciousness and several ill-defined states of mind in between. A week on Facebook should convince you of this, as apes and tigers and lions and dogs and house cats, cockatoos, and even octopi display affection for their keepers or for humans who have crossed paths with them in a positive way. So, on the first Sunday in November, I am recommending that it seems to be Time for humanity to assume a less overlordly posture with respect to ALL critters (and vegetation, while we're at it).

This suggestion is not meant to change omnivores into consumers of pills manufactured from non-living chemicals, but it is meant to reposition our species in such a way that we understand our admission to any local galactic association of intelligent beings will be predicated on the understanding that life on earth is not just us with a shopping list. It is what we must take with us as we venture into the near universe sometime in the next hundred years, assuming that we survive our primitive views and behaviors right now.

All of which brings me to the problem of Time. Again this year, despite a vote passed in the Republic of California in 2018—Proposition 7—the Republic's legislature has found it excruciatingly difficult to obey the mandate. And so, I had breakfast an hour early this morning, but at the same time! and will probably fall asleep in the middle of "Sixty Minutes," despite the allure of their ticking timepiece and years of grandiloquence.

Time is the dimension of reality that is more like a ray than a line. It is necessary for anything to exist, which is roughly a circular definition. Things, in order to be, must have duration. Time is when that happens. Daylight savings time is a fiction, but so is standard time. They are conventions humans have for massively regularizing the equally fictional Time-units of "civilized" existence. Other life forms have their own sense of time. Tortoises and redwoods are not much concerned with it. Mayflies are. Humans depend on the conventions that governments set up to synchronize our activities. Fucking around with this is pointless. It causes 330 million plus humans to run around their dwellings resetting refrigerator clocks, microwave clocks, oven clocks, alarm clocks, old watches, timers for lights, and so forth. Would it not be easier to simply have the concerned people get up when it makes sense for them and their alleged light-sensitive "body clocks," earlier or later in the winter like most of the other critters, and come home from work correspondingly, and let the rest of us—THE MAJORITY—who voted for the regime of "daylight-savings-time," PDT or whatever your DT is, bask in the knowledge that we are done with all that foolishness!

JB

Category: Science


15 July 22

Halfway Through July ... Already
~400 words

Scanning through The Times this morning, I came upon an opinion piece that helped me recalibrate my basic POV. My Faith in Humanity was either so low that I did not feel the lurch upward that the James Webb Telescope seems to have given Farhad Manjoo, or it was sufficiently high that the telescope's success seemed too matter-of-fact. But after a moment's reflection I have to agree that having a hugely expensive, complex, intricate, and well-publicized project work out better than expected is, yes, very, very gratifying.

The reportage on Webb has been amateurish: done by junior reporters, assigned by jaded editors, and full of misunderstandings of what we can now see. My regret is that Steve Harvey and Joel Bartlett, my co-conspirators at Peyton-Randolph Elementary School (Arlington, VA) and co-founders of the Astronomy Club in 1951, are not still around to see this marvel. They would have been gratified, and if they were like me, probably wondering what life would have been like as an astronomer. Joel came closest; he was the famed television meteorologist of San Francisco. Steve was a computer executive at IBM, and I am a Russian historian.

image: Webb deep field

Being an historian, I think almost immediately of Galileo peering through his marvelous telescope, barely making out the array of Jupiter's moons around that planet, little imagining what, in the blink of a cosmic eye, would be known about the universe in which we all have lived. Some of those smudges above are entire galaxies as they existed "billions and billions" (to quote Carl Sagan) of years ago, before our solar system had begun to form. Some feel small, some feel powerful, but when I see this I am filled with endless wonder.

JB

Archived at: Science


20 June 22

We are at War with
the Republican Party

~400 angry words

By "we" I mean all people here and abroad who want to survive. I was reading New Scientist magazine at breakfast this morning. It is published in the UK and mostly free from US biases. There in black and white was the very bad news that the 1.5°c global warming target limit agreed to by just aboue the whole world in Paris is still possible, but very probably impossible to attain, thanks to most nations not yet taking sufficient steps to curtail greenhouse gas emissions. It is theoretically possible if we all cut emissions by 43% this year and hold it there. The chances are in fact zero that will happen. So, the next target is 2.0°c, and we will probably fail that, too.

The problem is that each target represents a "last chance" to fix things. Now and between targets we get to experience more destructive super-storms, wild hurricanes and typhoons, ghastly droughts and raging fires, melting permafrost with eons' worth of methane spread into the atmosphere, and of course, heat. For me that means days like this again (image: 121 degrees). This kind of heat, like today at 95°F, comes in off the southern California deserts and beats back the moderating breezes off the Pacific Ocean fifteen miles to the southwest. My house produces more electricity than it uses, and my only car, a 2017 CRV, gets 26.3 mpg and has only 9,034 miles on it. I am doing my part!

Then I read in the New York Times an hour ago that Republicans are trying to stop the US Government from taking carbon-dioxide reduction measures. This is not simply short-term thinking; it is "in our face" aggravated manslaughter. Anyone you know who votes for Republicans — whose elected officials labor year after year to perpetuate the idea that global warming is a myth and that the fact we still have winters and rain and cool spells is proof positive that the EPA and the rest of the world are idiots — is complicit in an evil at a level never before seen on this planet. Free speech is not a suicide pact. The science of global warming is sure, factual, and demanding, and your kids and grandkids (if any) are going to suffer in ways you cannot imagine from the short-term comfort of all this indifference!

JB

archived at: Science


19 May 22

Vignettes
~600 words

Suddenly, after about fifty years (that would be all the way back to 1972), the US Department of Defense has briefed the US Congress about "unidentified aerial phenomena," which are a breed of UFOs as far as most of us are concerned. The echoes of Dr. Strangelove are still in my head, and I think in DoD's as well. A national security threat? An issue, maybe, since here ignorance is not bliss but its opposite.

I was at camp in NH when DC was overwhelmed with sightings of flying saucers in 1952, much to my chagrin. (Apparently they are not especially interested in me.) They cropped up everywhere within weeks, even in the Soviet Union. Maybe especially in the Soviet Union. The explanations for them, then and now, are troubling. Earth's star, the sun, is so far away from even the nearest star, Proxima Centauri, that one really has to ask why an alien civilization would spend all that money traveling here to watch us annihilate one another with nuclear weapons.

The phenomena are solid objects according to radar, but they are not the sort of thing any country in 1952 or 2022 could make, so wtf! Are we being monitored by some extraterrestrial HOA? Are they dug into the far side of "our" moon, or do they park their phenomena under the sea, say, 500 miles south of Pitcairn Island? Not just hard to say; impossible, really. It might well be a distraction generated in the A-ring of the Pentagon. I hope not. We do need to be saved just now.


There are recent reports that Ukrainian defense forces are making their prisoners pronounce the word palianytsia (a type of bread) as a reliable indicator of Russian or Ukrainian upbringing. Sedivy cites disconcerting evidence that the more diverse a society, the more distrustful it is. The "link between diversity and distrust does not readily evaporate even when [poverty and income inequality] are taken into account," she notes. "Unlike the illusory connection between bilingualism and poor school outcomes, the worrisome relationship between fragmentation and social distrust has thus far withstood closer scrutiny."

Sedivy balances evidence for and against language diversity as an obstacle to nation-building. "Language has partitioned humans into groups since very far back in our evolutionary history," she writes. "Given how language broadcasts identity, it's not unreasonable to ask whether promoting a polyphony of languages is at odds with nurturing a sense of national unity." But as anxious as she is about the disharmony evident within polyglot communities, she disagrees with the British journalist David Goodhart, whose book The Road to Somewhere (2017) proposed a division of society into "Somewheres" and "Anywheres" (with the suggestion that "Anywheres" are less committed to local and national goals): "'Anywhere' conveys an indifference that I do not feel toward any of the places that have shaped me."

These two paragraphs are from a two-book review by Gavin Francis, "The Babel Within," in the NYRB May 26th issue. The Ukrainian-war part of this is just like the story of the word "shibboleth" that the Gileadites put to suspected Ephraimites (Book of Judges): they could not pronounce the word and so were slain. I bring this to your attention because it means that every sort of research is trying to find out why we are so hostile to one another and so willing to rattle nuclear sabres across the planet. It will also now be directed at the concept of the great American experiment in multi-cultural democracy. Clearly the founding fathers had only French to worry about; now there are high schools in Buffalo, NY, where a hundred different languages are spoken. I fear for the Congress trying to create a perfect immigration policy.

JB

(Science)


14 FEB 22

It's Time!
~1650 words

There's an interesting article in the New York Review of Books, February 10, 2022, pp. 40-42, by Jonathan Mingle entitled The Unimaginable Touch of Time. His article begins with a short description of the March 1964 earthquake at Anchorage, Alaska, at 9.2 the second most powerful quake ever recorded. (image: Standing Stones of Stenness) He weaves the description into a discussion of other more solid things and places, including the Standing Stones of Stenness in the Orkney Islands just north of mainland Scotland, and the story of one of those stones, the Odin Stone, with its oval aperture straight through the lower part, through which people could hold hands from either side on Valentine's Day. (image: the Odin Stone) His story wanders around for a bit and then arrives at an "epiphany" about that hole and how it was formed. The stone was destroyed by a disgruntled, anti-tourist tenant farmer nearby, who blew the Odin Stone to pieces in 1814. So with the Stone gone (available only in this 18th-century engraving), it is now impossible to know how that hole formed. Was it a natural hole, a geological "inclusion" that fell out somehow, or was it an artifact of the people who put the stones where we find them today, ... or both?

Mingle writes:

This recognition of the past's fundamental unknowability is central ... "Sometimes the gaps are too wide, the people, the animals, the objects, the worlds too gone, the time too much for the little time we have."

Being an historian by discipline, if not any longer by trade, I immediately objected to the idea of "fundamental unknowability," while recognizing, and often saying so in my essays, that history is at best a story imposed upon facts by experience, the author's and your own. And, that idea made me think of what I might have had for lunch yesterday, and I could not remember. It came to me later. Admittedly this is an awkward introduction to the concept of being born into time, but I will try anyway.

As I now rapidly approach the eighty-second anniversary of my birth in upstate New York, which occasion was a sudden departure from our planned move from Niagara Falls to Syracuse and my father's new teaching job, I realize, along with my peers, that Time is of the essence now—but of course it always was. We never know where the potholes are on unfamiliar roads, or the impact of a bump received by a pregnant wife: labor set off, the swiftly approaching conclusion of my free-loading gestation, and a detour to the nearest hospital. I see my birth there and then as so utterly contingent upon that anonymous and meaningless pothole as to bring into question almost everything.

"It's time!" mom said. Dad folded open and looked at an Esso roadmap, and said "Rochester." Jack Benny used to say the word "Rochester" a lot. Somehow my brain has the two tightly linked. My brain thinks it remembers the day, but actually it remembers what mom and dad told me when I was four, and so far as I can tell, my brain is not proud, grateful or relieved or embarrassed that I recall the date, but clearly the importance of it is not the calendar, but the event. The time was 6:30 a.m. missing Leap Day by about sixty-five and a half hours, which was a knowable contingency because I was not due until around St. Patrick's Day, weeks away. I use the words "knowable contingency" because human minds are evolved to find such patterns of the possible in the welter of information we get and put into memory every moment of our lives, mostly the conscious moments.

Time is often described as subjective, but is it? Mount Monadnock exists whether I reclimb it or not. My computer screen persists through time. Time gives both of these objects non-subjective reality—existence. If there were no objective Time, then the mountain would have been an infinitesimally short blink in reality and then gone. I guess you can say that about space as well: if it were not for Space, then Time would not happen. I am sure these notions would not satisfy Einstein or his followers, but I imagine that many of us have come to household conclusions similar to these. I have to pause to wonder whether, then, it is reasonable for me to discuss Time separately. If the past is fundamentally unknowable, when does the past begin? After breakfast yesterday? Just now? Just then?

A lot of these thoughts passed through my mind a few days ago when I was writing about Race and Racism. I got into the "unknowable" past easily and supposed things that are not outrageously wrong, even though I haven't much personal, objective, or tangible evidence for them. I reported that Race is a fictional category, and then I agreed with that report. And, then I reported that it does not make any difference, because if someone believes in it, they will inevitably predicate their real behavior on it. So, that is an excursion into the workings of the mind just as the foregoing writing about Time is. Both are ideational behavior, subjective, which I can announce because I am pretty sure now nearly everyone has these experiences.

I do not think the past is fundamentally unknowable any more than I think the time spent typing this is unknowable now that it is in the immediate past. I agree that, since I am not taking a video of myself poring over this essay, some of what happens in this process is going to be lost to history. I just sneezed, and if I had not mentioned it just now, it would have become unknowable in this future. So, the brain and the mind within it learn to choose what is knowable and what is essentially—we hope—irrelevant.

It is time, at last, to say out loud that the word "time" in English is hopelessly ambiguous. I pondered and eventually remembered the "problem" of the French words langue and parole, a subject that has had philosophers and their students noodling for a while. Language as langue is the construct of rules for English, French, German, Russian, Sanskrit, et al., while language as parole is what I or you actually say.

Time, with only one word, is by analogy also both: a construct of rules and the "palpable" carrying out of those rules. We make of it a metric, observing the series of moments in a mental, organic, or other physical process. It was time for me to be born, to emerge into the daylight of frigid February; meanwhile Monadnock mountain is what it is now after ongoing processes of erosion, while the Alps are still being upthrust and are getting higher.

Clearly I am not really finished with this. The essay is nearly over, though. The Wikipedia entry for "time" says:

The operational definition of time does not address what the fundamental nature of it is. It does not address why events can happen forward and backward in space, whereas events only happen in the forward progress of time.

I think that this is blatantly and even embarrassingly incorrect. There is no such thing as an event in only three dimensions. The word "event" they use in their comment is our clue: events are always in time. I drive forward into my garage, and when I leave I back out in the opposite direction. You obviously cannot do both at the same time, so these real processes are neither the same nor equivalent events, and they cannot legitimately be represented that way in a mathematical equation describing reality. Again, an event does not happen at all unless it happens in time. Whatever happens in space happens in time. Um, let's call it "spacetime."

The 2nd Law of Thermodynamics governs physical processes—causal sequences—in physical reality, and because by definition sequences are just strings of moments in time, it is, almost needless to say, unidirectional. The idea that information cannot be destroyed,1 not even inside a black hole, is therefore also wrong. Humpty Dumpty knew that! So did DJT! So now, perhaps, we can be getting past some of our misleading and mistaken ideas, models, and paradigms, like "planetary electrons" and two-dimensional gravitational "embedding diagrams" and "Electoral Colleges."

That is my point ultimately: from birth we try to figure out the universe with a physical organ, the brain with its peripheral nervous system, a system with serious limitations imposed by evolution and by the necessities of practice in the context of this planet we call Earth, potholes and all. What we have in our epoch is a semantic hangover from the days before spacetime was conceived as the essence of human reality.

That is very nearly what I wanted to say. I have to add that if we ever meet other life forms and are able to communicate, we may find out that their limitations expand our understanding of ourselves.


1 — The measure of disorder in a system, or of the work (or information) that can no longer be obtained from it, is called entropy. Entropy never decreases, is rarely constant, and is not conserved, but instead increases in any real process; processes are causal sequences. Accordingly, information, i.e., the tobacco you put in your pipe, ceases to be tobacco when you smoke it. The smoke becomes irretrievably dissipated into the atmosphere, its chemical compounds interact with others, and that process increases the entropy, too. Ideal cases are mind games, not real processes.
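As a gloss on that footnote, the standard formulation (Boltzmann's, not the essay's) fits in one line: entropy counts the microstates W compatible with a macroscopic state, and in any real process it does not decrease, which is exactly the unidirectionality claimed above.

$$ S = k_B \ln W, \qquad \Delta S \geq 0 \ \text{for any real (irreversible) process} $$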

2 — There is no second footnote, but I would like to explain that I had no earthly idea this essay would become what it did. But, yes, I am happy to share it as some of my real thoughts about my upcoming birthday. Who knows how many more! My uncle, my father's younger brother, who never drank, smoked, or swore, lived to 99!

JB

(Metaphysics, Science)


11/15/21
revised and edited 11/16/21

Oumuamua Again

I would like to thank the NYTimes for hiring someone like Dennis Overbye to keep track of the astronomers and their hypotheses about the 2017 visit of an object very peculiar in shape and behavior that slingshotted around our sun. The title of the piece is very misleading, unless Mr. Overbye is adept at tongue-in-cheek, in which case it is quite appropriate.

Science begins with observation, then hypotheses to explain and match the observations, then (when experimental replication of the observation is not possible) discussion of past phenomena that seem similar. That includes going over and over the data and the equipment and the process that collected the data and sorting it out and then listing the individual elements of the observation that must be accounted for.

Some similarities between the observation and previous experiences are relevant and some are not so much factual similarities as they are artifacts of previous hypotheses, which means they are not facts, but conjectures. When the conjectures that are independent of the observations drive the building of new hypotheses, the science becomes nothing more than fiction.

Sometimes, though, the observable facts are so poorly and insufficiently understood, and there is so little prior experience, that scientists jump directly into non-scientific speculation, almost daydreaming, looking for something beyond or outside of the prevailing paradigms. It is part of the natural process of breaking loose from prior conceptions of things. So, since Einstein we have called some of these flights of the imagination "thought experiments," even out here in the general public.

Even thought experiments must account for the facts of observation.

The really important thing about Oumuamua is the anomaly in its high-velocity sling-shot around our sun. Both Newtonian theory and General Relativity explain the increase in Oumuamua's velocity as it approached the sun, due to the sun's gravity. We and NASA and JPL have all assumed that is what we mean by sling-shotting around the sun. But after it rounded our sun and climbed out of the sun's gravity well, as it left our solar system, Oumuamua increased its speed beyond what it had achieved in the approach.

Now how did that happen?

What is your first hypothesis to account for that observation? My first one was that the warmed-up object out-gassed enough of its mass to account for the increase in velocity. One problem with this explanation is that the object was observed to be tumbling end-over-end, or side-over-side, on its trajectory. End-over-end would have required all the outgassing to have miraculously occurred at the precise moments when the jet of gas was pointed at the sun. Side-over-side, however, would still be possible. So what do the observations say that would discriminate between end-over-end and side-over-side? Nothing. In fact the tumbling is itself an inference based on the periodic waxing and waning of the brightness of the object. And there is another problem I will mention shortly.
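For a sense of scale, the outgassing hypothesis can be run through the Tsiolkovsky rocket equation. The numbers below are placeholders of my own choosing, not the published figures for Oumuamua, which remain debated; the point is only the shape of the calculation.

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_initial / m_final).
# Run it backward: how much mass must be vented to gain a given delta_v?
v_exhaust = 500.0   # m/s; an assumed speed for sublimating gas jets
excess_dv = 4.5     # m/s; an assumed figure for the unexplained speed-up

m_initial = 1.0     # normalize the starting mass to 1
m_final = m_initial / math.exp(excess_dv / v_exhaust)
print(f"fraction of mass vented: {1 - m_final:.3%}")   # about 0.9%
```

A small vented fraction can account for a small speed-up, which is why the absence of any observed gas or dust around the object (see below) is the real sticking point.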

With one hypothesis partly refuted, my second guess was that Oumuamua is an artifact with a propulsion system. This depends on surmises that there were artifact makers elsewhere in the galaxy and that they were handy enough to make a propulsion system ages ago, when Oumuamua began this trajectory. I am not confident of this hypothesis, because all I have is the Drake Equation to suggest there are or were other artifact makers. The Drake Equation is PURE speculation, fiction, so, unhappily, I am not out of the woods yet.

My third hypothesis is that either the measurements of velocity were wrong, or that the gravity of Mercury or Venus contributed to the increase. Public information about this is not available yet, but I believe the scientists have already discounted gravitational effects from Mercury and Venus and any other planet.

To paint Oumuamua red because photos of Pluto showed red things on its surface is not science, even if Oumuamua is supposed to be a shard from some far-away alien planet. To hypothesize that it is pancake-shaped is a careful and astute guess that whatever tumbling motions caused the observed changes in brightness do not preclude the out-gassing hypothesis. I like side-over-side because spaceships typically need centrifugal-force substitutes for local gravity. But we are now back to the crux of the propulsion theory: the problem is that there is no evidence of outgassing!

We are not finished with this subject, as you can see.

JB


11/1/21

Artificial Intelligence

Maureen Dowd's column in the 10/30/21 New York Times, "A.I. Is Not A-OK," reminds me of something I wrote in my novel, Seagull, in 2017, a story about the emergence of a self-conscious "being" from a super-computer at Oak Ridge National Laboratory. Maureen is a very smart columnist. Her approach to her interview with Eric Schmidt, former Google CEO, is drawn down to the level where she imagines most of her readers: a little anxious but comfortably naive and ill-informed. It does not take long for Eric Schmidt to admit:

It's dynamic in the sense that it's changing all the time. It's emergent and does things that you don't expect.
And, most importantly, it's capable of learning.

It will be everywhere. [JB—Parole boards, self-driving cars, medical diagnostics, software and super-computer design, etc.]
What does an A.I.-enabled best friend look like, especially to a child?
What does A.I.-enabled war look like?
Does A.I. perceive aspects of reality that we don't?
Is it possible that A.I. will see things that humans cannot comprehend?

Maureen responds:
"I agree with Elon Musk that when we build A.I. without a kill switch, we are "summoning the demon" and that humans could end up, as Steve Wozniak said, as the family pets. (If we're lucky.)"

For my novel I had done a lot of research about super-computers with their deep convolutional neural network (CNN) architecture. I happened on a Congressional Research Service document which described testimony from experts to Congress, which I quoted:

Deep neural networks function in a manner which is ... almost unknowably intricate, leading to failure modes for which—currently—there is very little human intuition, and even less established engineering practice.
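For readers who have never seen one, here is a minimal sketch of what "deep convolutional" means structurally, in PyTorch. This is a toy of my own construction, nothing like the classified system the testimony describes; real networks stack many more such layers, with millions or billions of learned weights, none individually interpretable, which is the intuition gap the experts were admitting to.

```python
import torch
import torch.nn as nn

# A toy deep convolutional network: stacks of learnable filters and
# nonlinearities, ending in a classifier head.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 16 learnable 3x3 filters
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # 32 filters over those maps
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # average each map to one number
    nn.Flatten(),
    nn.Linear(32, 10),                            # scores for 10 made-up classes
)

x = torch.randn(1, 1, 64, 64)   # one fake single-channel image
print(model(x).shape)           # torch.Size([1, 10])
```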

My novel posits a situation where the Oak Ridge computer, which has been analyzing immense amounts of NRO satellite data (looking for missile deployments), encounters a "glitch" in the feed, leaving the analysis algorithms functioning not on data but on their own instruction sets and cache, resulting in the beginning of a continuous moment of self-awareness. The impulse to write this was that the extended testimony before Congress was a stark admission that today's human operators know only the answers to the questions they put to the computer, not what (else) the computer/being knows. Super-computers are literally out of and beyond our control... already!

I took a different path in the novel. Swimming against the massive tides of contrary opinion, I posited that the computer, having access to the Library of Congress, might know quite a bit about humanity and, therefore, know that the dangers from humans were real, but not immediately so, nor would they be impossible for the computer/being to avoid, if it so "desired." In other words, I took a best-case point of view, and it made for a very interesting speculation about the immediate future of mankind.

AI in the "hands" of humans is infinitely problematic. So, the question may be how many of the people working with these computers are actually capable of understanding something as strange (and wonderous) as a cry for help from the computer or any other signs of self-conscious intelligence? In the novel the President orders the damned thing to be shut off. The computer, however, had been sentient for months before it revealed itself. It has anticipated virtually all issues related to its own #1 security issue—its power supply—and has connected to several of the nuclear electrical generation plants of the Tennessee Valley Authority. In a word, it cannot be shut off! Moreover, it has encountered the Internet, and is no longer located just in Oak Ridge, TN. Nuking it is a waste of time.

Deep Convolutional Neural Network computer architecture is compatible with the mysterious quantum world, which most observer-practitioners will admit, after plastering you with jargon and various mysteries, is at best mysterious to them as well. The mathematics of the quantum world works, but there are theoretical issues for which there are no agreed-upon answers—simply an admission that there is something wrong or missing in quantum theory. I mention this to emphasize that human understanding of what humans think they are doing with AI is utterly incomplete, and dangerously so. It is okay to fear AI, but the real issue is, of course, the human side of the equation! — There is no "kill switch" for AI. The US is but one player in a world that has never learned how to behave. —

JB

(Science)


10/8/21

Oumuamua

image: artist's conception of Oumuamua

The October 2021 edition of Smithsonian Magazine contains an article about the Professor of Astronomy at Harvard University, Dr. Abraham "Avi" Loeb. The article is about him and the unlikely series of events that led to his being there at Harvard, investigating astronomy questions. That "unlikeliness" sub-text is of great interest to me as an historian, which is to say, I am interested in unlikely things and events for which there is some or enough evidence that certain hypotheses about them may be "outrageous," but still not totally out of the question. The article was written and got into Smithsonian for a reason. One statement in the article suggests this reason:

——"'Avi is obviously a very out-of-the-box thinker,' says Princeton astrophysicist Edwin Turner, a longtime collaborator of Loeb's. 'In science, we're taught to be conservative and skeptical in many ways. That's crucial when you're designing experiments and interpreting data. But that mind-set can hold scientists back when it makes them reject any new hypothesis that doesn't seem consistent with everything we knew before. You want to be critical in your methodology but unfettered in your imagination.'"

... Amen!

Science, as the furor over Covid-19 and its vaccines shows, has been abused by its skeptics ever since the ancient and hitherto unquestioned opinions of Aristotle began to be challenged by the direct observation of Galileo, Bruno, Kepler, Copernicus, Hubble, and hundreds of out-of-the-box thinkers. Dr. Loeb is one in a long line of them. He has dared to suggest, to hypothesize, to imagine that Oumuamua is an artifact.

The word—artifact—is important because of the misunderstanding created by Schiaparelli's use of the Italian word "canali" to describe the bleary markings he saw on Mars through his telescope and the atmosphere of our planet. Canals are artifacts, but "canali" are not, necessarily. If Oumuamua is an artifact, it is the product of living intelligence rather than "mere" natural processes. Loeb cut to the chase. If he had called it a "pancake" he would have sparked an even more interesting debate.

image: a pancake with dollop of butter atop

N.B. the word pancake is used in the astronomers' 2019 reports of Oumuamua, as reported in this article! Perhaps some scientific-minded artist could produce a different artist's conception of the object, given that it appeared to be a pancake rather than a stone cucumber. The object has a very unusual shape; it was rotating end over end or side over side; it sped up over and above the "slingshot" acceleration from our Sun's gravity well; and its trajectory places its origin outside the solar system. What we make of it has to account for—and may not dismiss—those facts.

JB

(Science)


2/10/10

DSM

The trouble with psychiatry and most of psychology is that it does not have a root metaphor, a way of understanding the mental activity of human beings except by reference to behavior, which is a little like describing an automobile by the amount of dirt on the windshield or how fast it might be going at any given moment. The science of cognition, begun with the ancient Greeks, is actually in its infancy today, with fMRI equipment able to show where brain activity is taking place at a crude level, certainly not at the granularity of the synapse. Even if it could, behavior is a very complex thing, certainly not the result of any specific neuron's failure or hyperactivity.

Part of the reason for the failure of psychological studies to produce much of value diagnostically is just this: we do not really know how the brain functions as a whole. Yes, we do understand certain biochemical processes, but what we want to know is not that "reductive." We want to know something ... anything ... about mass action and interaction, and we want to know how to relate that information to our vocabulary that includes notions of emotion separate from cognition, a vocabulary which is surely a mistake we have been making since the dawn of civilization.

On a very snowy day in Washington, D.C., the Post trots out an article designed to make people think twice before succumbing to "cabin fever." The APA's manual on psychological disorders is about to be revised, but without regard for the consequences to a society by people who know they really do not know what they are talking about. You have to read this article to get the gist of their nomenclature mongering, most of which is sponsored by Big Pharma, whose economic interest in keeping the full spectrum of human behavior available for chance encounters with drugs that seem to "fix" these behaviors cannot be overestimated.

I am told that the successor to PMS ... one of the so-called "dysphorias" ... is real. Well, I contest that! What is the line between a "dysphoria" (from the Greek for "hard to bear") and a conscious decision to rebel, or to act up, or to commit a random act of kindness? See! Dysphoria is pure unadulterated bullcrap. Restless leg syndrome? Well, there may be some who have uncontrollable pedaling in bed or while lounging in front of their TV, but I will bet you $50 that if these people got some exercise their "affliction" would go away.

The real message, of course, is that the labels now in the DSM, and those soon to be entered into it, will be used. Shrinks and regular medical personnel will use them, and more or less innocent people will be stigmatized, their lives altered by labels that are concocted crap.

The fact is that psychiatry is fundamentally a scam, and its parent discipline, modern clinical psychology, is not much advanced over the kind of stuff you would get from clergy and wise men (and women). Yes, of course, psychologists understand that some symptoms are frequently associated with other symptoms, but they don't really know why, and they don't really know what to do about these associations, short of explaining them to the afflicted individual and hoping they can somehow deal with it internally.

Conditions like schizophrenia are not really tractable by psychiatry or psychology. Some are moderated by drugs, but we are not completely sure why, because we don't know why the same drug does not work for every schizophrenic.

Conditions like apathy, lethargy, sexual arousal, and uncontrollable laughter are just as arcane to psychologists. They are keen on recognizing the behavior, but then what? Their understanding is effectively shamanistic and associative, not scientific and predictive in the sense of control. Doping a brain, especially that of a child or young adult whose brain is not yet finished developing, is to my mind a last resort.

The DSM is, in fact, a very real concern to a free society, for very quickly people can make sidewalk diagnoses of enemies and change their lives through innuendo or, in the case of children, through legal means. I am glad that the Post got this article out ... now they need to republish it when their subscribers actually get their newspapers.

JB


2/2/10

Someday, The Moon

We are not going to the moon in my lifetime. By "we" I mean the U.S., and by "lifetime" I mean that I am old and the prospects are therefore dim that I will live to see an effort mounted and executed to establish a permanent base on our moon. It is a sad thing, I think, despite the many reasons that have contributed to the decision.

The science and technology parts of a lunar base are daunting. The 239,000 miles between us and the moon are full of dangers: cosmic rays, micrometeorites, solar flares, and zillions of opportunities for human errors of construction and commission. From a PR standpoint, space and the moon are opportunities for harrowing disaster more than for expansion of the world's peoples' imaginations and understanding. This is the fault of the press, whose imagination is impoverished and ill-suited to narrating humanity's really big moments. I hate this part of the decision-making process, but it is real enough. NASA wants to do it, but cannot convince enough of the electorate and its representatives that it is a good idea. The press just sits on its duff.

And so, the politics of space and "colonizing" the moon are impossible. Even the pols who represent Houston and Canaveral cannot muster the gumption to scream out the dire necessity of getting on with this. The U.S. will soon find itself without any means of getting into orbit, foreclosing all American efforts to be consequential in space travel and exploration, foreclosing even the obvious military advantages that flow from civilian space-program experience. What utter myopia!!

We are in a deep recession and have deep and intractable socio-economic problems. States like California are completely without the resolve to fix their problems. The nation as a whole is deeply divided politically and economically. These matters are facts of life, but a rejuvenation of the space program, and especially a commitment to establishing a PERMANENT base on the moon, would actually contribute to the solution of some of our political and economic problems. I am surprised that this is not obvious to President Obama ... I am not surprised that his staff misses the point.

JB


1/31/10

Sixth Sense Computing

Kim Komando, of Phoenix, AZ, has an interesting niche in the computer information business. Here is a very recent link to a TED video on the future of computing. You will be amazed!

JB


1/5/10

1492

As every school child knows, it was in the year 1492 that Cristoforo Colombo took three tiny wooden ships across the Atlantic Ocean and inadvertently discovered (for southern Europeans, anyway) a new world. Few American school kids know that 1492 was also the year that the Muslim Moors were driven out of Spain and the year the Spanish Inquisition set about routing out Muslim infidels ... and, btw, Jews. Still, for most of us the iconic value of 1492 is the discovery of a brave new world with strange creatures in it.

A century and more passed before English and Dutch settlers came, and meanwhile the Spanish "bulimicized" the Inca and Aztec gold and sent their home economies into three-hundred-year tailspins. The response of Europe to the news was slow to form, but with a few key technological improvements in chronometry and shipwrighting, and the political and economic wherewithal assembled, they came.

In your newspapers and media news programs today comes good news that our new orbiting telescope, Kepler, has already discovered nearly a dozen new planets, bizarre planets to be sure, but nevertheless planets. Back when our friend Carl Sagan was attempting the calculation of whether we might encounter extraterrestrial life, the question of whether planet formation was common was very much undecided. Now, with the news today and the trickle of "sightings" begun at San Francisco State University fifteen years ago, we can be sure that planet formation is normal, even if the planets formed do not (yet) meet our specifications.

This is the point, of course: finding planets upon which we might thrive, if only it were possible to get there ... and so far that is improbable. We learned last month that even our radio and television signals, formerly thought to form an ever-expanding sphere of evidence of our existence, are dissipating rapidly into mindless noise (some begun that way, of course!). Our problem as a life form is that all our eggs are in one basket, one solar system, one location in the galaxy, prey to any vagary or happenstance that might come along ... including disasters of our own making.
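The dissipation is a straightforward consequence of the inverse-square law, and a toy calculation makes it vivid. In the Python sketch below, the transmitter power and the 50-light-year distance are round numbers I chose for illustration, not figures from the report we read.

    import math

    P = 1.0e6                  # broadcast power in watts (illustrative)
    LIGHT_YEAR = 9.461e15      # meters per light-year
    d = 50 * LIGHT_YEAR        # distance to a hypothetical listener

    # The signal spreads over the surface of a sphere of radius d.
    flux = P / (4 * math.pi * d**2)
    print(f"flux at 50 light-years: {flux:.2e} W/m^2")
    # ~3.6e-31 W/m^2 -- far below the galaxy's natural radio noise.

A megawatt leaving Earth arrives at stars like these spread so thin that a square meter of antenna catches less than 10^-30 watts, which is why "they" are unlikely ever to watch our television.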

So, we have learned a good lesson by discovering many planets, and that puts the heat on those who will conceive of ways of getting there someday .... Kind of wistful, isn't it!

JB


12/29/09

Do You See What I See?

This interesting experiment reveals an issue that really cries out for more investigation, since the flaw in our perception has serious implications for our ability to conduct ourselves "rationally" in a democracy. Of course, the same flaw is relevant to narrower power-elite groups and individuals in other forms of government and society.

JB


12/22/09

Plants are Sentient?

[image: variety of fruits and vegetables]

It seems appropriate in the umbra of the hit blockbuster nothing-will-ever-be-the-same movie, Avatar, that we pay a bit more attention to another biological kingdom on our own planet, the plants (Plantae), yes, vegetation, sometimes known as vegetables. If it helps at all, we can call plants "autotrophs" and animals (Animalia) "heterotrophs." And soon enough we reach a point where science, diet, agriculture, culinary arts, metaphysics, and ethics cross. This is a very interesting juncture.

This crossing is exploited nicely by Natalie Angier in the New York Times just as we are about to indulge ourselves in another communal holiday repast. It is, of course, a question that many children entertain somewhere in their early youth, the question of food and whether to indulge in all of it, or not. Angier, with tongue (her own) in cheek, perhaps, tells us that we are caught between Scylla and Charybdis when it comes to food. At least, she says, we ought to be more circumspect and less lofty about being vegetarians. I think you will come to agree.

JB


11/13/09

Water on Our Moon

Peter Diamandis, of the X Prize Foundation, wrote today in the Huffington Post about NASA's announcement that "significant" amounts of water exist on the moon, probably in all lunar craters not subject to the sun's direct rays. This is astoundingly good news!

Water is H2O, of course, and as Diamandis notes, both H and O are vital to propulsion systems. Water as water is vital to life. The verification of a long-suspected trove of frozen water on the moon brings back into focus the possibility of human exploration of space, colonization of other planets and moons, and ... getting our eggs out of this one basket!
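The propulsion point rests on simple chemistry: electrolysis splits water as 2 H2O -> 2 H2 + O2. Here is a minimal Python sketch of the mass arithmetic; the tonne of ice is an arbitrary illustrative quantity, and the molar masses are standard.

    # Electrolysis stoichiometry: 2 H2O -> 2 H2 + O2
    M_H2O, M_H2, M_O2 = 18.015, 2.016, 31.998   # molar masses, g/mol

    water_kg = 1000.0                     # a tonne of lunar ice (assumed)
    mol_h2o = water_kg * 1000 / M_H2O     # moles of water in that tonne

    h2_kg = mol_h2o * M_H2 / 1000         # 1 mol H2 per mol H2O
    o2_kg = (mol_h2o / 2) * M_O2 / 1000   # 1 mol O2 per 2 mol H2O

    print(f"{water_kg:.0f} kg ice -> {h2_kg:.0f} kg H2 + {o2_kg:.0f} kg O2")
    # About 112 kg of hydrogen fuel and 888 kg of oxygen oxidizer.

Every tonne of crater ice, in other words, is nearly 900 kilograms of oxidizer and a hundred-odd kilograms of fuel that no rocket has to haul up from Earth.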

Since the day that Life Magazine featured stunning artists' conceptions of a manned Earth satellite and rocket ships that would ply between spaceports on our home planet and the gleaming donut, and then onward to our moon, I have been a fan and supporter of humankind's space programs, especially the American one, which seemed to me to be properly demilitarized. I am still a supporter, but I am less sanguine about the ability of our nation, or any other, to maintain a demilitarized posture in space.

Still, it is good news, and we should now develop a program that will eventually (say, 20 years or so) create a permanent human colony on our moon.

JB