Natural and Social Sciences and The Humanities

19 APRIL 2024

Reality: An Emergence Theory

"What is Reality?" by Quantum Gravity Research on YouTube, is a lay description — an Emergence Theory — the description of a complex entity that has properties and behaviors that its parts do not have on their own — our universe! (Google)

Albert Einstein's "General Relativity" and Niels Bohr's "Quantum Mechanics" are empirical theories of what the universe DOES or IS or both. Both are based on empirical facts. The problem is that they do not agree. We need a new Theory of Everything. People are working all over the world to find congruent answers to the multitude of things that would be literally and "in principle" included in such a theory. As the video presentation describes, some have been working on "vibrating string" theory, and it has come up short so far. This is the way cosmology works. It is important for us to know that we now have several generations of mathematicians and physicists and cosmologists working hard to get us out of the embarrassing conflict between Relativity and the Quantum Universe. These are very interesting times! The video describes seven "clues" that have emerged as to what a Theory of Everything would include or explain:

  • a theory of INFORMATION
  • an explanation of CAUSALITY LOOPS (reconsideration of Time)
  • be NON-DETERMINISTIC (uncertainty principle, free will, etc.)
  • include and rely upon CONSCIOUSNESS
  • be a PIXELATION
  • be an E8 CRYSTAL (the geometric mathematics to unify our observations)
  • account for and incorporate the ubiquity of the GOLDEN RATIO

    Presentations like "What is Reality?" have several purposes and go to some expense to achieve them. One purpose is to stimulate some rare human beings to contribute to the assembly of the new Theory of Everything, hopefully leaving behind their previous notions of how things are "supposed to" fit together and of what is acceptable and what is not. Another purpose is to stimulate lots of human beings to help support the scientists' day-to-day work; it is a plea for money. A third purpose is to clarify their own thinking and, in so doing, identify knotty problems inaccessible to non-mathematicians, such as the E8 CRYSTAL lattice.

    The reasons for this presentation appearing on IRON MOUNTAIN are several, and not that I agree with everything. Granularity = PIXELATION, yes; the nature of INFORMATION, yes and maybe; but I am surprised by the GOLDEN RATIO, and I do not understand what an "8-dimensional crystal lattice" is or would look like to my 3D eyes. I am intrigued that CONSCIOUSNESS is central to the theory so far, because I have been working on Consciousness for many years, including very recently. I am confident that CAUSALITY LOOPS are real, but I am at a loss as to how. In other words, I am a "linear causality addict," but as an historian I have played with the idea of History repeating or rhyming. It may be that IMAGINATION is a major factor under the heading of CONSCIOUSNESS. I am worried that CHOICE, mentioned in the video, is being deliberately misrepresented.

    Anyway, pieces like the one featured here are hot-fudge sundaes for people cruising the internet whose critical thinking skills have not yet matured. Parts of the explanation that the pretty woman (Marion Kerr) recited are metaphorical, probably because they have to be, since the vocabulary needed does not yet exist. The piece may be pure BSpeculation or it may be the until-now-hidden portal into a new era of Physics in which many things will be reconciled and explained to the satisfaction of many people.

    The seven "clues" may only be the first seven of an untold number, but it is safe to trust that each one is a topic of investigation that "popped-up" in discussions and demonstrations and tried and true Physics literature. The exposition designed into this video may end up being only an example of how to throw off the shackles of "conventional thinking." The pace of life is such that I may not ever understand this sufficiently to be able to explain it to another willing human being. As for the scientists calling those seven principles "clues," I am pleased at their choice of words, for it is certain that we human beings do not and cannot see reality as it fully is, perhaps only as lower dimensional "shadows" or projections of what other beings might apprehend, like cats, owls, sharks, and Grays!

    JB

    Science


    18 MARCH 2024

    The End of the World

    In the New Yorker of March 18th there is an article introduced as a "Letter from California" (only in the paper magazine), titled "O.K., DOOMER," by Andrew Marantz, but appearing online under the title "Among the A.I. Doomsayers". No one but the staff at the NYr knows for sure which title came first, but the online "Among the A.I. ..." is the more appropriate. The article hopes that by throwing jargon and hitherto unknown AI-concerned surnames against the wall, a picture will emerge.

    The author's and editors' overall take on the rapidly growing angst about AI is one of ironic mystery, written in a gossipy way, as most such articles in the NYr are, leaving the reader to find the story out there in the weeds, where you'll find Marantz and his editors.

    The eternal question is "will technology bring about the end of the world?" AI is a technology — a very humbling technology — because it generates with each new evolutionary step the feeling that it will soon outstrip our ability to control it. You could say that of the automobile, of course, and measure it currently at 1.35 million human deaths worldwide per year, about 3,700 per day. Clearly, the question posed about AI is not yet refined properly.

    What is it about AI that brings people to think "end of the world"? More or less obviously, I think, the advent of first nuclear and then much more destructive thermonuclear weapons provides the grist for analogies. That evolution brings up the question of control, which, not just incidentally, is the real question about AI. The worrisome analogy is that AIs will be as far out of our control as individual humans are in today's world.

    The "end of the world" phrase is also misleading. It is not the same as "the end of the planet" or "the end of western civilization," although that last one points us to what "world" really means. "World" means the situation we have created for ourselves and families and fellow citizens. It means the wild tangle of interrelationships among us and our things and symbols, each designed to provide us as individuals and families with shelter, food, heath, education, money, movies, sex, government, "everything." We write about such worlds as if they were very close to being the same thing to all, and recently we are just coming to the idea that the "worlds" of Suzy Wong or Pablo Nuruda, or Fintan O'Toole, or my niece Emily, a brand new RN, are distinctly NOT the same worlds.

    So, my hypothesis is: yes, AI will occasion the end of many "worlds," just like smart-phones have and digital computers before them, and transistor radios earlier yet. In fact, as an historian I have to say that AI began to end "the" world almost a century ago at Bletchley Park and Arlington Hall and incrementally has destroyed the illusions — the worlds — of millions ever since.

    Marantz, in his tour among the AI-anxious, identifies doomers and their opposites, many or most of whom, on their respective sides, opt to be decelerationists or accelerationists. The former all acknowledge that the evolution of AGI (artificial general intelligence) was begun many decades ago and that there might be some good in it, but hold that we need to slow down the evolution so as to have some hope of controlling and profiting by it.

    The accelerationists, with amazing optimism, tell us that an AGI could solve so many of our problems that we must let the evolution of AGI proceed at its accelerating pace, hopefully to zip through the gauntlet of hazards posed by people with imperfect motives, like individual billionaires with a need to be the first trillionaire, or implacable fascists, or religious zealots, or middle-class investors afraid of derogating into a tight retirement. Understand, though, that solving many problems will require or lead to upending many "worlds" people now depend upon. Whole countries will disappear!

    My later "Few"-series novels incorporate the birth of a conscious AGI from an entirely plausible glitch in a very large and powerful computer powered by a nuclear reactor array in the TVA at Oak Ridge National Laboratory, I wrote

    Already then, in January 2017, just about twenty years ago, people were admitting this. A piece of a United States Congressional Research Service document read as follows:

    Deep convolutional neural networks function in a manner which is ... almost unknowably intricate, leading to failure modes for which — currently — there is very little human intuition, and even less established engineering practice.

    Deep CNN computers teach themselves! And reprogram themselves. But they do not have a limbic system (the part of the human brain that controls emotional and behavioral responses) through which all of that takes place in us. Neither do Quantum Computers! An AGI is unemotional. For some people that is scary; for others it is a relief. Would an AGI have a sense of right and wrong? It surely would have a very complex way of determining the probable outcomes of lines of human thought and activity, and even more so of its own processes, whether we call them thinking or not.

    Currently there are applications for competent computers that mimic human responses, appearances, voices, and even syntax. An AGI with these applications, when asked if she could present a "video" of Lincoln at Gettysburg giving his famous Address, is likely to say "yes," and to be able to fool most humans into a momentary amazement that somehow the whole thing was caught on a time-traveling smart-phone.

    Clearly, human understanding of reality is already at risk, and so such things as eye-witness testimony are up for grabs and, therefore, so is the justice system that Merrick Garland has not yet completely destroyed. You should be asking not whether the world will be destroyed by AI, but whether humanity can adapt to the simpler implications for human beings of AGIs conducting the administration of governments and commercial enterprises "truthfully" and reliably. I am a cautious accelerationist, because I am slightly ahead of most and recently turned 84. 😎

    JB

    Science


    22 JAN 2024

    Shklovsky and Kardashev

    The Russian 20th Century astronomers presented in the title to this essay were very important to the Soviet SETI program that emerged as the US and UK and others mounted radio telescopes to scan for intelligent life in "our" galaxy and perhaps further out in other galaxies. In the US, Frank Drake and Carl Sagan and others from the Order of the Dolphin SETI society were avidly brainstorming the odds of finding signals from space that would clearly indicate their source as evidence of intelligent life. It was Drake, known particularly today for his famous equation defining the odds of there being intelligent life beyond Earth, who posted on his office door at Green Bank, West Virginia, the question: "Is there intelligent life on Earth?"

    It was meant to be funny, but it led to a consideration of dolphins, more properly called porpoises ("dolphin" also names a fish, while porpoises are marine mammals). And, of course, in the first decades of the nuclear warfare age, it was not that funny but fairly scary instead. The Dolphin group decided that porpoises are intelligent, but are environmentally constricted to being a non-technologically advanced intelligence, which was a good thing for the group to acknowledge.

    Iosif Shklovsky wrote an important book about the dangers that planetary civilizations encounter, especially those caused by themselves, that can utterly end the civilization or stop it from further advancement. The book was very instrumental in arousing the lifelong curiosity of Carl Sagan.

    Nikolai Kardashev posited an important scale by which to measure the technological advancement of planetary civilizations, based principally on how civilizations are able to harvest and consume energy. A Type I civilization is able to access and store for consumption all the energy available on its planet. A Type II controls all the energy of its star, while a Type III uses all the energy of a galaxy. The upshot is that although the signalling of a Type I (or lesser) civilization might be difficult to find and interpret, that of Types II and III would not be; ergo, we should keep looking.
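    Kardashev's scale is quantitative at heart. A widely used continuous version, Carl Sagan's interpolation, sets K = (log10(P) - 6) / 10, with P the civilization's power use in watts, so that 10^16 W is Type I, 10^26 W Type II, and 10^36 W Type III. As a hedged sketch (the function name and the roughly 2x10^13 W figure for humanity are my own illustrative choices):

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Carl Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Roughly: 1e16 W -> Type I, 1e26 W -> Type II, 1e36 W -> Type III."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's power consumption is on the order of 2e13 W:
print(round(kardashev_type(2e13), 2))  # about 0.73
print(kardashev_type(1e16))            # 1.0, a planetary (Type I) civilization
```

    By this interpolation humanity sits somewhere around Type 0.7, which is why our own signalling would be among the hard-to-find Type I (or lesser) cases.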

    Drake's posted joke figured into the balancing act that the scientists of the second half of the last century were considering. It is by now probably obvious that this essay is about the joke we are making of ourselves environmentally and politically. It seems clear enough to all but the Trumpists here and their fellow-travelers elsewhere that the scientists' assumption that everyone cares whether or not we emerged from our primal slime is not broadly held among human beings. Even college-educated people believe their main concern is to reproduce and carefully groom their offspring to be good citizens, i.e., good providers for their own families in the next generation.

    It has seemed obvious to many of us that that is a safe but narrow view, and that humans should be able to put replicating families into a larger context. In fact, for some it calls into question the family as an ideal unit of the procreation, maturation, procreation cycle. You should spend a few hours channel-switching among the hundreds (scores, at least) of stand-up comics on Netflix to get an idea of how well families nurture the young. Well over half of Netflix's male and femme comedians have throbbing religious and social hang-ups. Fathers are horrible jokes for many, mothers only just less so, and one wonders.

    Humans wonder incessantly about the really big picture and most recoil from it — the immensity of reality. They recoil to the compass of their own hypothetical control, as if stretching would change them somehow. This means that many or most are reluctant to change what seems to be working, although they know there are humans who do it differently and have better success. People like Carl Sagan and millions of others focus on the picture that enthralls them and the rest of it barely catches their attention. Carl died of cancer, but he managed to inspire a multitude to carry on. There are myths in the way for others, life after death, for one. Think about the implications of that or of the idea that might makes right and justifies patriarchy. The point is: we are not anywhere near critical enough of our civilization. Fear of retribution from those it treats poorly may be one reason.

    25 November 2023

    Social Science in Journalism
    ~1200 words

    On "Black Friday" evening I watched Chris Hayes and Rachel Maddow in their two hour discussion of Fascism in America on MSNBC. It was a very good presentation, encouraging both of them to reveal something of themselves as well as, primarily, their thoughts on our contemporary Fascism. I was taken aback by several things as the two hours progressed toward advice to democracy-loving people in the audience on how to behave next year. Certainly 2024, a Presidential Election year, is likely to be fraught with divisive and perhaps terrifying political events. So, as this essay is titled "Social Science in Journalism," it is going to be as honest a critique as I can muster on whether Hayes and Maddow are social scientists, practicing their craft honestly. Clearly, they are not just very well-paid television journalists constrained by some corporate decision to promote progressive or liberal political ideas to their (million plus, but 2nd place to Fox) audiences.

    Natural scientists scoff at some of the social sciences because political science, sociology, anthropology, and psychology all suffer from the fact that their published results may well affect the behavior of the human beings they are studying. Physics, chemistry, and biology, by contrast, can isolate those who study those subjects from being included in their data and their conclusions, even though all are subject to gravity, made of chemical elements and processes, and are biological entities themselves.

    N.B. — History is not really a science, I think, but in some universities, all three of mine, for example, history is closely associated with the social sciences. Nominally, history is a "humanity," along with languages, literatures, the arts (often in their own schools), and philosophy. Clio, daughter of Zeus and the Titaness Mnemosyne, our muse, does demand of us honesty and discernible logic, including reasonably detailed chains of causal reasoning. Journalism is a "humanity" as well, related closely as the so-called "first draft" of history, but containing its own methods of investigation and presentation.

    My first and most basic comment about the Hayes-Maddow Discussion is that the event was not writ on a tabula rasa. That is, there were assumptions made by the presenters and in the audience that the event itself and the words and logics presented would be in favor of democratic forms of government and progressive and liberal forms of politics, and that alternatives to these would not likely be presented, except the key subject, Fascism. Whatever damage to pristine honesty these assumptions caused is anyone's guess, but scans of the audience revealed only rare scowls and frequent outright demonstrations of enthusiasm. Under these conditions the presentation was actually meant as a response to, and meta-analysis of, American Fascist thinking and recent American Fascist events. You have to carry this idea through the entire presentation to be fair to the conclusions you reach. Personally, I was very ready to hear them out and hoped they would do a good job of it. Our national dialogue needs this kind of thing. And I am hoping for more. In other words, this was politics, too.

    Chris did an opening that deserved, perhaps, twice as much time. He has a habit of finishing very long thoughts with many words in one breath, so some of his sentences ended as, to me, unintelligible. He glossed a quick history of America and arrived at the signal event of the presentation: the first truly Fascist movement in the US (and Canada), beginning with Henry Ford and his rabid anti-Semitism, the growth of American consciousness of European Fascism under Mussolini in Italy during the 1920s, and National Socialism in Weimar Germany, erupting in 1933 when Adolf Hitler was appointed Chancellor. At the same time Father Coughlin and Huey Long were preaching ill-defined fascism and celebrities were chiming in. My own understanding of the bedrock upon which American Fascism has evolved goes back to the framing of the Constitution, the "87 years" of the "3/5ths of a person" idea, the Civil War, Reconstruction, and the Jim Crow era, which did not really end in 1954 with Brown v. Board of Education or in 1965 with the Voting Rights Act of that year.

    Chris introduced Rachel after his prologue. She soon settled into her forte, her obsessive curiosity combined with the craft of creating a compelling narrative, developed at Oxford during her doctoral work: the explication of Henry Ford, with fast-passing references to the Lindberghs and others. She did not provide a reason for Henry Ford's rabid anti-Semitism, or if she did, I missed it. The description of his rage against Jews evolved into its incorporation into the Nazi political movement, and to my mind she conflated the two. I do not think that all anti-Semites are Nazis or (to a lesser extent) vice versa.

    The same happened with American Isolationism. Rachel explicitly assured us that Isolationism was a key to understanding Fascism, despite several earlier, 19th century "spells" of isolationism in American history! Isolationism is "political nature's" way of making very large and wide-spread problems seem smaller and more manageable. It turns out that isolationism plays nicely into the hands of political movements seeking an "other" to play against.

    Rachel also mentioned that a German Nazi lawyer went to the University of Arkansas Law School to find out how a well-advertised democracy like the USA could be so successful in suppressing large parts of its population. Along with the Henry Ford, Lindbergh, and various trans-Atlantic celebrities' propaganda, she concluded that the rise of Fascism ran west to east. She stated this after Chris, energetically bemused, reached his own conclusion that it must have been the case, given what Rachel had revealed. I resist the idea that it is "all our fault," but acknowledge freely that Fascism evolved in the Euro-American context, a matter of the substrate fact of the American experience of European ideas and vice versa. You can see how the "isolationist" frame of mind tended and still tends to separate the two cultures, until Rachel (et al.) showed us they were linked in a broader trans-Atlantic culture, congealing faster and faster because of the technology of radio and often responding to the same kinds of socio-economic situations.

    Fascism was not invented in America any more than anti-Semitism was. Anti-semitism is played out socially and politically in every instance by provocateurs identifying an "other" to blame for bad situations. The Roman Catholic and Russian Orthodox Churches have been major players in the provoking of anti-Semitism for many centuries!

    Fascism uses isolationist contexts to further its control radius. It uses anti-Semitism to identify "the other" to take the blame for nearly everything, most clearly the regime's own mistakes as well as the consequences of dynamic situations like economic collapse caused by greed, vindictive treaties, personal affronts, and the like. Fascism, as I have stated in these essays, is founded on the impulse to simplify politics and governance. It brings a tighter kind of social control by adjusting the residual "rule of law" to the fewer hands of the political elite and their leader—especially. It is dangerously SIMPLISTIC.

    I commend Hayes and Maddow for their work to fill in the blanks in our history, blanks purposefully there in public and private school history curricula. Voters in 2024 need to know that school boards and publishing companies are behind these omission-distortions of our history.

    JB

    Science


    16 November 2023

    Free Will

    The New Yorker magazine of November 13, 2023, contains a book review, "How Can Determinists Believe in Free Will?" by Nikhil Krishnan, reviewing Determined: A Science of Life Without Free Will by Stanford University neurobiologist Robert M. Sapolsky. The essay you are now reading arises from our study of the intellectual and, specifically, the epistemological discussions embedded in the world-wide politics of our day.

    Is there a world-wide politics, you might ask? Well, there is politics world-wide, and in each political area politics is a rivalry of ideas about government and economics and social norms and concerns. These islands of politics influence other islands of politics in various ways. World-wide politics is the interaction among them, such that when Boris Johnson, now a former PM of the United Kingdom, asserted that the UK gave more than it got from the European Union and that the UK should get out of the EU, his ideas, based on blatant but amusing falsehoods and a relatively conservative view of political economy, affected politics all over Europe and ultimately the whole planet. Two points to draw from his behavior are: 1) a cavalier view of the value of truth in journalism, and 2) a reckless disregard for the polity that nurtured him, almost to the point of deliberate vandalism.

    Boris is the most cartoonish and yet consequential of the practicing politicians of our recent past. There are other voices bearing down on politics from pulpits and tree stumps all over the place. In America there are voices that hold the First Amendment to be unassailably more important than the pursuit of justice within the courts, such that freedom of speech of dangerous indicted defendants is defended as if it were a unanimous (rather than political) concept.

    Then there are the voices that dive into the prickly thickets and the well groomed orchards of ideas surrounding theories of sovereignty. Many voices these days now claim that representative democracy does not work in multi-cultural environments, or even ever! Part of this is retribution for the century of democracy-mongering conducted by the USA at the cost of ignoring local conditions. These voices almost inevitably ground themselves in the tonic key of liberty-for-all in counterpoint to the messiness of reality, especially human understanding.

    That is a complicated idea. So, for instance, when the American republic was established, the Founding Fathers, without ever saying so out loud, believed that the democracy they were creating would be overcome by the selfish interests of barely literate voters seeking not only a free lunch but also political status as clear-thinking people, for which they had no experience or education in the philosophies and histories of their own people. The voters would ruin it, so the FF gave them less control of the three parts of government than they could have.

    Buried in that paragraph is an unspoken premise, widely accepted at that time and even unto today, namely, that human beings have free will and the right to enjoy the fruits of their physical and mental work, but also bear responsibility for everything they do and say of that free will. Professor Sapolsky, grinding an old axe and perhaps subject to the irony he builds with his argument, says that free will is an illusion. He might say his decision to write his book was determined by his love of science, or to impress his girlfriend, or a hundred other reasons, but it is much simpler than that. His motivation was a product of mind, and he cannot responsibly say he will not take responsibility for it.

    There is no doubt that there are reasons for doing things, such as events and structures and ideas that make me suspend my sugar-free diet, or go to Target rather than Aldi, or vote Democratic, but I insist, against Sapolsky, that none of these behaviors is predetermined. The determinism he finds in scientific laws consists of statements about ideal systems, which he darn well knows are fundamentally hypotheses, to be succeeded by better ones eventually. And that discussion is too much for this essay. His thesis is surprisingly naive in the matter of focus and levels. My decision to propose marriage was very complicated, but at some point I decided, and the molecular neuronal behavior at that point had nothing to do with all the considerations I entertained.

    Perhaps, ironically, Sapolsky's only real purpose was to stimulate a discussion of taking responsibility, in which he would gratefully cede the point. Human beings, among a large crowd of vertebrates and non-vertebrates (like octopi), have will, and it is constrained by experience, habit, routine, physical ability, and mental acuity, but it is at any given moment theirs and free to be asserted randomly, if they wish, or to some exotic novel purpose. Free will is exercised interactively with our family, friends, neighbors, and others. The interactivity produces causality constructions of mathematically near-infinite complexity. QED

    My paranoia about Sapolsky's book and Krishnan's unwieldy review of it is that "free will" is the essence of one of today's great questions: Is democracy a reasonable organizing principle for governments? It turns out that democracy is not just about voices being heard, but about those with heard voices being vigilant about many basic things, including logic and truth and respect for other voices, especially those in the majority. Asserting in Congress that guns kill more children than anything else, although you learned that from television a week ago, is an act of free will, especially notable when it is your first speech from the well of the House Chamber, where a majority of Representatives never ever go to speak.

    JB

    Science


    23 SEPT 2023

    (rev. 9/25/23)

    Artificial Intelligence

    There has been a spate of articles across the various media about the arrival of AI, short for artificial intelligence. It is interesting that our world has only this terminology for what is being done within the cybernetics communities to solve puzzles across the realms of human endeavor.

    The ironies of the term "AI" begin with the fact that we still do not have a detailed idea of what "intelligence" is—other than "reasoning capacity."

    Reasoning is the process of accepting and identifying sensory inputs, relating them to experience and the logics implied by those relationships, then predicting the effect of the input on the immediate to very distant future, and expressing that to others by language, art, mathematics, and physical action.

    The less well-understood irony is that "artificial," meant to say "not done by normal agents," also means "unreal," "counterfeit," "false," "phony," and scores of other negatively valenced ideas.

    ChatGPT has gotten a lot of attention recently, because it seems to "have a mind of its own." What it does is assemble sentences word by word — by surveying millions of sentences available online to see what words typically follow one another — in response to a cue or query or instruction from a human being. It seems uncanny to humans because when we do this ourselves it usually takes much longer, although in mental processing the lapse of time is exceedingly small. Clarifying one's purpose and direction is what takes time, and that requires humans to consider a minefield of things that are useless, dangerous, taboo, uncultured, and so forth. ChatGPT just assembles "typical" sentences containing information.
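    The word-by-word assembly described above can be caricatured with a toy bigram model: count which word most often follows each word in a corpus, then emit the most common follower. This is only a hedged sketch of the idea, not how ChatGPT actually works (real systems use neural networks over vastly longer contexts), and the miniature corpus is made up for illustration:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def next_word(follows, word: str) -> str:
    """Pick the most common follower of `word` seen in training."""
    candidates = follows.get(word.lower())
    if not candidates:
        return "."  # nothing learned: end the sentence
    return candidates.most_common(1)[0][0]

# A made-up miniature corpus for illustration only.
corpus = ("the universe is vast . the universe is strange . "
          "the theory is new .")
follows = train_bigrams(corpus)
print(next_word(follows, "the"))  # "universe" (seen twice, vs "theory" once)
```

    Even this toy shows the core move: the "choice" of the next word is a statistics lookup over what has been seen before.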

    I have two Google "pods" in my home. Each cost $19 at BestBuy on sale. I say, "Okay, Google, how old is Jamie Raskin?" The pod "hears" (detects) my voice (or anyone's), analyzes the sound frequencies and the durations and volumes at each frequency, and the brief moments of silence, and refers this information to a database of sound combinations, "understanding" the "Okay, Google" as a cue that it should pay attention to what follows. Its sound database detects the common words "how" and "old," and this launches an inquiry subroutine that Google performs millions of times a day, which then "hears" "Jamie Raskin," which it has "heard" thousands of times, because he is famous. The subroutine quickly goes to Wikipedia, because Google's experience is that Wikipedia information is usually reliable and acceptable, and it finds "Jamie Raskin," just as you would by typing into your own computer. It finds Raskin's age or birthday stated, observes the letters and numbers, refers them to a vocal database in the order they are presented at Wiki, and then a response routine says to me, "Jamie Raskin is 60 years old." Then another routine may chime in and say, "Do you want more information about Jamie Raskin?", having "noticed" that there are a lot more words there at the Wiki, or that other inquiries have led to other questions about him.

    Does the entity with the pleasant femme voice of the Google pod "know" what it just accomplished? The answer is a surprising "yes," because if you ask Google to repeat what it just did, it will repeat it. So for a short time, perhaps, Google has a memory of my account and what it did. Moreover, the Raskin query meta-information is also logged into a database at Google "Central," opaque to you and me, which is used to kindle a speedier response to the same question from other pod owners.

    Does Ms. Google "know" what it just accomplished the way we do? Certainly not, because the information is normally not labeled with a valence, an emotional score. Are they working on giving some kinds of answers emotional scores? You can bet they are. So, for instance, ChatGPT has to figure out what the next word in a sentence should be, given that ChatGPT chose the previous word and has a mission to write a paragraph about Jamie Raskin's bandanas. ChatGPT "understands" the valences of word information from its information sources: Wikipedia, Raskin's political website, lots of news-media reportage, and so forth, by noticing key words, like "friendly," "astute," "cancer," "respected," "Progressive," "Democrat," all of which are registered by trillions of referrals to ChatGPT's valence meta-data and perhaps other meta-data available to it. With that information, ChatGPT's algorithms prompt a context-appropriate valence texture into the selection of the next word in its sentence—unless told not to by the human who made the query in the first place.
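    A guess at what an "emotional score" might look like in miniature: a hand-written valence table nudging the choice of the next word. The scores and words below are invented for illustration; a real system would learn such associations from data rather than from a little dictionary:

```python
# Invented valence scores: positive numbers lean warm, negative lean harsh.
VALENCE = {"friendly": 2, "astute": 2, "respected": 2,
           "cancer": -3, "criticized": -2, "colorful": 1}

def pick_next(candidates, mood="positive"):
    """Choose the candidate word whose valence best fits the requested mood."""
    sign = 1 if mood == "positive" else -1
    return max(candidates, key=lambda w: sign * VALENCE.get(w, 0))

print(pick_next(["colorful", "criticized", "respected"]))
print(pick_next(["colorful", "criticized", "respected"], mood="negative"))
```

    The point is only that a single signed number per word, plus a requested mood, is enough to give a sentence a "valence texture."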

    The point of all this is that the world of information is such that with a flick of your wrist and a poke of your fingers you can telephone someone in Brunei on the island of Borneo or someone in Tanzania. About 85% of all human beings are theoretically accessible; that's about 6.8 billion people! Spread across the world are databases holding all the telephone numbers. Facebook has almost 3 billion subscribers. And these are just a thin film covering the whole of information about us and our civilizations and our science, literature, and everything else. We have assembled a roster of humanity, and theoretically anyone on the roster can call any other roster member.

    The magic that makes AI possible is our modern wireless and wired connectivity and the ability of cybernetic machines to sort through it "all" to find a Jamie Raskin for an essayist in California.

    Some people are very frightened about AI, and I understand that the fear is real and righteous. There are people out there who have no ethics and no compunctions about harming or killing other humans or destroying valuable public and private property. They are the threat, and we do not yet have a good handle on them, on who they are and why—certainly they are not all of us, or even a big number of us, but enough to make us scared and feel helpless. So, how they might devise and use AI is our concern.

    CRISPR is out there for anyone—for less than $250—to mess around with human genes. Eugenics was never a basement hobby until now. You should now feel even more helpless. My point is that with the marvels of Google pods, ChatGPT, and CRISPR comes the responsibility of societies to abandon outdated and ineffective means of controlling these assets. The politics and practice of that are in terrible shape everywhere. We have got to change that, and soon.

    The essay above would be mostly unintelligible to the people who designed and wrote the US Constitution! Bound to old principles of thinking and acting we are sitting ducks for the ill-intentioned among us anywhere on this pale blue dot of a planet. With each new discovery, the problem becomes more acute, and the resistance to change ever more delusional.

    JB

    Science


    20 July 2023

    Rethinking Our Assumptions

    Weather Underground tells me today that Sunday, Monday, and Tuesday will all reach 100 °F, so my personal thoughts are about doing before then what might be personally dangerous to do during those three days. And, of course, my thoughts wander all over the place, to my peach trees and my "water-holes" for the original residents of this parcel (or at least their descendants). One thing leads to another and I am back in my life-long thinking about thinking and consciousness. And I am not alone, it seems.

    It is more than just interesting to contrast the logic of a criminal trial with the appetite the media have for trying to understand and describe it. Mens rea is "the intention or knowledge of wrongdoing that constitutes part of a crime, as opposed to the action or conduct of the accused" (the actus reus). It seems from the lawyers on television that certain crimes have been framed in law to require prosecutors to prove mens rea, and that others are written into the US Code without that requirement. Interesting: what is behind that big difference?

    Within a couple of breaths of considering this, I got to thinking about how differently people seem to think. I say "seem" because my telepathy is a very iffy thing, so I am assuming things from my own experience, and the double bind sits there defying me to say anything worthwhile about what or how Donald was thinking. Does he think in color, does his mostly absent mother's love whisper to him, does he think syllogistically, does he plot his plans on an internal spreadsheet, does he have synaesthesia so that he can taste or smell a coming victory, how does he balance supposed wins against losses, does he even imagine losses?

    My thought is that we should be able to examine the mental methodology employed by presumptive presidents before they are elected, for our own safety's sake. Well, it turns out that we (humanity) are hot on the trail of that elusive goal. In New Scientist magazine for July 22, 2023, there is a very interesting article, "Revealed: What your thoughts look like and how they compare to others". I think they are about to break past the Legos stage.

    Now that we are beginning to get a vocabulary and syntax for these ideas, surely we should review the assumptions we have made about A) how we organized ourselves into this American culture, and B) how those we elect to govern us all measure up. Maybe they are not actually thinking at all! Or it may be that political corruption has a cure. It may be that having 50 little laboratory units (states) is counter-productive, because in laboratories there are still rules for the protection of human subjects, just as there are for laboratory animals. It may be that quantity has its own attention-demanding qualities when it comes to populations, to free speech, to billionaires living among us. Enjoy the article!

    JB

    Science


    28 MAY 2023

    Liberals v. Conservatives
    ~500 words

    The June issue of Scientific American (one of several magazines I will not be renewing after roughly 60 years of subscribing, because they steadfastly refuse to acknowledge that the population of the nation is aging and that older people, subscribers like me, have a very difficult time reading 8pt grey type on white or pastel backgrounds) has an interesting, if not conclusive, piece by Jer Clifton, Professor of Psychology at the University of Pennsylvania, "Many Differences between Liberals and Conservatives May Boil Down to One Belief". It is a relatively short article and I do recommend that you read it. Perhaps now.

    The "one belief" Dr. Clifton refers to is a worldview concept, which strikes me as NOT as fundamental as Clifton seems to view it, BUT my objection is clearly one about the actual brain function that causes the phenomenon described, not the phenomenon itself. And, moreover, my favorite hypothesis about brain function is at least congruent with Clifton's research, as reported.

    Professor Clifton and his editor say "... that the main difference between the left and the right is whether people believe the world is inherently hierarchical." It takes some musing to accept "hierarchy" as a fundamental category, whereas the subtitle of the article is "Conservatives tend to believe that strict divisions are an inherent part of life. Liberals do not." "Strict divisions" seems more basic than the idea of ranking things. For instance, I could divide North America into its states and provinces from the Arctic to the Yucatan, strictly, but without ranking them along any dimension in a hierarchy. There are many dimensions available, so "divisions" is more fundamental. Be that as it may, the UPenn study is very interesting, and you should click on the link to "validated online survey" near the end of the paragraph beginning with "Our effort began ..." to get a better gist of the Penn project, whether you take any of the surveys or not.

    Still, what we are hoping to do is find a method for getting people who believe that LGBTQ is shorthand for sexual abominations to see it rather as a handy reference to observed and to-be-respected natural minority modes of human sexuality; and to mention for emphasis: guns, anti-abortion and other forms of misogyny, immigrants, and racism. My first time through the article I felt the idea of "caste" leap out of the page, especially as I understood hierarchy as fundamental to caste thinking, openly practiced in India and never (hardly ever) spoken of in America, but still present in England, France, Italy, and Spain.

    Finally, I think it is good that Jer Clifton and others are working in academe to find a path out of the bitter divide we have now, not that it was not bitter earlier, especially in 1861, but freighted now with permissions to be self-righteously rebellious and to act out violently and vocally by Herr Trump.

    JB

    Science


    5 MAY 2023

    World Views: Identity and Status
    ~1600 words

    "... According to evolutionary psychology ..., humans as a species have evolved to try to read one another's minds, in order to better cooperate and compete with one another. For this reason, 'the human intellect is extremely well-suited to thinking about other people, their problems, and the situations they get themselves into.' This would explain our interest in fictional characters: even when we know they aren't real, humans and human-like entities are endlessly fascinating to us...."

    From the New York Review of Books, March 25, 2021, "The People We Know Best" by Evan Kindley, a discussion of the "reality" of fictional characters.

    We are in a time—an epoch, they will say later on—in which we are discovering, slowly, that those others who do not act like we do or speak like we do, those who stand in relation to us as possible adversaries in the world, have organized their minds very differently from the way we have. In fact we have learned, once again, that literally millions of our possible adversaries cannot stand us for what is in, and for what is not in, our minds. On this date we have not the slightest idea how this predicament will resolve itself, if ever, whether there is to be a winning side, or whether even thinking about "winning" is dangerous, perhaps fatal. However, we think about their minds, alone at breakfast or in the shower or in the car driving to work or the store, and we imagine an outcome that we want and how and why such a thing is possible. In other words, Progress notwithstanding, humanity is still optimistically fractious, right down to the people living on our street.

    What we know for almost sure is that they are doing and thinking the same things, but predicating their desired scenario on different things they believe themselves to be sure about. In either case their (and our) imaginations are guided by each person's world view, by which I mean something about the organization of Mind predating and foundational to the person's current creed and convictions. I mean also that proto-ideologies are somewhat fluid during maturation and upheavals in society, and the "viscosity" is dependent on characteristics within the world view such as its maturity level: infantile, childish, adolescent, young risk-agnostic, and later increasingly risk averse as the person accumulates various kinds of equity in the world. Generally, the younger the person, the more fluid and less viscous is the evolution of Mind.

    The root metaphors of worldviews are crucial, yet they can be transient and ephemeral, or the opposite, virtually permanent. The key here is the context in which people find themselves: the culture, and both the routine and the unanticipated activities commanding the person's attention for survival, literal or virtual. A typical root metaphor for a worldview is that humans are (like) predatory animals. Wolves and bears are favorites for many Russians. Russian Orthodoxy suggests that humans are slaves of God. American mythology has a predatory eagle symbolizing the lofty freedom those birds "enjoy," but American capitalism sees humanity as a herd to be tended, if not actually cared for, then exploited. Very little attention is given to the obvious differences, such as language, technology, organized empathy, consciousness, and cooperative work of the kind that ants and bees and beavers and cetaceans perform.

    Symbolic ideas like the purity of the color White impact root metaphors or overtake and subsume them.

    As infants we can only "announce" some need we feel. We have sight and audition and olfaction and tactile senses, but we do not have language, although there are parts of our brains that in good time will manage language. We notice that when we announce a need, quite often the need is met, or at least we are aware in a general way that we are being attended to, whether or not the need is actually being slaked. A "dominant need attendant" emerges, usually mom. As we gain control of sensing and of moving purposefully, or nearly so, we develop at first a nebulous self-image of "the me," and there are parts of our brains evolved to accommodate this development. After a while we are vaguely aware of ourselves and somewhat less clear about others, with "mom" emerging as a separate being. When language develops we cease to be in-fant, without language.

    As toddlers and then as school children we are taught by and/or observe our family members, other families, and animals, in pairs, or singly, or in herds "commanded" by prominent (alpha) members, male and female. As we observe, it becomes plain to us that our needs, including safety, are dependent on these others, so the tension between dependency and personhood with its selfish motives is established first. Then collaborative synergism—cooperation as a goal—begins to emerge, followed by simple forms of empathy as it becomes reasonable to conceive of others as similar to ourselves. Most often something interferes with the emergence of cooperative culture. It is perhaps the micro-culture of the family, but always the larger culture encountered in the school years and inevitably in the repetitive chaos of the marketplaces of work and endeavor, of premeditated competition.

    Competition is about survival: sometimes life is at stake, sometimes status or caste, sometimes position, sometimes happiness, usually more than one of these. There is a social economics of scarcity that impels competition and results in a win for some or one and a loss for others. There is only one football quarterback, one homecoming king for the girls to admire and one to "have," only one appointment to the Naval Academy set aside for our town. So the girls compete and the boys compete and it often gets rough and hairy. In a perverse trick of Mind, competition doubles back on itself to inform worldviews that the point of life (rather than simply one fact among many) is competition—not cooperation—because of real or fabricated or imagined scarcities. Heroes of cooperation are rarely acknowledged. "Hero" is a noun in the rhetoric of competition, foisted on a broad scale as universally valid to emulate.

    By the time an American is in high school, the pressure to emulate a parent has reached a plateau of sorts, but its baggage contains big chunks of the worldview of the parent, expressed or not. Various -isms are transferred from parent to offspring: anti-semitism, pro-choice convictions, reluctant globalism, and various affective behaviors related to narcissism, or their opposites, or none at all, are but a few of the building blocks or empty hollows of a complex and sustainable world view. In America the idea of White Supremacy began as a matter of fact, as Black slave-holding furnished the man-and-woman-power to build the nation from "scratch" through the threshold of the industrial age and onward into the relativistic nuclear age of robotics and mass personal communications. Nested into that originally was the British idea of indenture, which also created a hierarchy of caste and status, which meant various rules about who could compete with whom and about which persons were privileged to lead in a democracy littered with substantial illiteracy.

    Slave-holding was and is the continuing original sin of America. It is a fact that the Black minority now wishes the declining White majority would acknowledge it in all its gruesome, unvarnished details, in the hope that a catharsis will occur and the bad behaviors of White Supremacists will end. What the White Supremacists know is that their caste and status would change, and likely not for the better, were the sin to be acknowledged honestly. The worldview of White, European-heritage people sees noumenal reality through that lens. The congealing factor that makes it viable as a political movement is the knowledge that there are millions of people with that view of things, suspended at the brink of what is probably a disaster of epic proportions, GIVEN that competitive drive obliterates cooperative empathy every time. But clearly the situation has become so dire that the reflexive reach for the AR-15—ironically, because life itself is at stake—is to ignore that noumenal reality is not binary; only paranoia is! It is not necessary that people be punished for having been taught they are privileged. It is only necessary that ALL people understand the privileges they have that they did not honestly earn, and begin to act accordingly. That includes all of us!

    JB

    Science


    November 22

    Critters, Consciousness, Time
    ~700 words

    English is a wonderful hodge-podge of words and grammar and terrible things like the word "animal." It has been a pet gripe of mine for at least seventy years that the word we give to living creatures is fixed on the fact that, unlike rocks, they seem to move (and reproduce) on their own. It turns out that I have been guilty of associating "animal" with "animate," which is itself an etymological mistake since, I just discovered, the derivation of "animal" is thus:

    Middle English: the noun from Latin animal, based on Latin animalis 'having breath' from anima 'breath'; the adjective via Old French from Latin animalis. (Google).

    Clearly trees and flowers and algae breathe too, by the way, so it is time for some serious housekeeping of our vocabularies, which, I am going to suggest, have a lot to do with what we think or think we think.

    Along these lines, there is a short, smart review article in New Scientist magazine that reports on the research of Lars Chittka at Queen Mary University of London and his book, The Mind of a Bee. Chittka describes the complex behaviors of bees like honey-bees and bumble-bees and comes to the conclusion that bees have consciousness, ergo minds to have it in.

    There are millions and probably billions of human beings who believe that many critters have minds of their own and display behaviors analogous to humans who experience consciousness and un-consciousness and several ill-defined states of mind between. A week on Facebook should convince you of this, as apes and tigers and lions and dogs and house cats, cockatoos, and even octopi display affection for their keepers or humans who have crossed paths with them in a positive way. So, on the first Sunday in November I am recommending that it seems to be Time that humanity assume a less overlordly posture with respect to ALL critters (and vegetation, while we're at it).

    This suggestion is not meant to change omnivores into consumers of pills manufactured from non-living chemicals, but it is meant to reposition our species in such a way that we understand our admission to any local galactic association of intelligent beings will be predicated on the understanding that life on earth is not just us with a shopping list. It is what we must take with us as we venture into the near universe sometime in the next hundred years, assuming that we survive our primitive views and behaviors right now.

    All of which brings me to the problem of Time. Again this year, despite a vote of the Republic of California passed in 2018 (Proposition 7), the Republic's legislature has found it excruciatingly difficult to obey the mandate. And so I had breakfast an hour early this morning, but at the same time! and will probably fall asleep in the middle of "Sixty Minutes," despite the allure of their ticking timepiece and years of grandiloquence.

    Time is the dimension of reality that is more like a ray than a line. It is necessary for anything to exist, which is roughly a circular definition. Things, in order to be, must have duration; Time is when that happens. Daylight saving time is a fiction, but so is standard time. They are conventions humans have for massively regularizing the equally fictional Time-units of "civilized" existence. Other life forms have their own sense of time. Tortoises and redwoods are not much concerned with it. Mayflies are. Humans depend on the conventions that governments set up to synchronize our activities. Fucking around with this is pointless. It causes 330-million-plus humans to run around their dwellings resetting refrigerator clocks, microwave clocks, oven clocks, alarm clocks, old watches, timers for lights, and so forth. Would it not be easier to simply have the concerned people get up when it makes sense for them and their alleged light-sensitive "body clocks," earlier or later in the winter like most of the other critters, and come home from work correspondingly, and let the rest of us—THE MAJORITY—who voted for the regime of "daylight-saving time," PDT or whatever your DT is, bask in the knowledge that we are done with all that foolishness!

    JB

    Category: Science


    15 July 22

    Halfway Through July ... Already
    ~400 words

    Scanning through The Times this morning, I came upon an opinion piece that helped me recalibrate my basic POV. My Faith in Humanity was either so low that I did not feel the lurch upward that the James Webb Telescope seems to have given Farhad Manjoo, or it was sufficiently high that the telescope's success seemed too matter-of-fact. But after a moment's reflection I have to agree that having a hugely expensive, complex, intricate, and well-publicized project work out better than expected is, yes, very, very gratifying.

    The reportage on Webb has been amateurish, done by junior reporters assigned by jaded editors, and full of misunderstandings of what we can now see. My regret is that Steve Harvey and Joel Bartlett, my co-conspirators at Peyton-Randolph Elementary School (Arlington, VA) and co-founders of the Astronomy Club in 1951, are not still around to see this marvel. They would have been gratified, and if they were like me, probably wondering what life would have been like as an astronomer. Joel came closest; he was the famed television meteorologist in San Francisco. Steve was a computer executive at IBM, and I am a Russian Historian.

    image: Webb deep field

    Being an historian, almost immediately I think of Galileo peering through his marvelous telescope barely making out the array of Jupiter's moons around that planet, little imagining what in the blink of a cosmic eye would be known about the universe in which we all have lived. Some of those smudges above are entire galaxies as they existed "billions and billions" (to quote Carl Sagan) of years ago, before our solar system had begun to form. Some feel small, some feel powerful, but when I see this I am filled with endless wonder.

    JB

    Archived at: Science


    20 June 22

    We are at War with
    the Republican Party

    ~400 angry words

    By "we" I mean all people here and abroad who want to survive. I was reading New Scientist magazine at breakfast this morning. It is published in the UK and is mostly free from US biases. There in black and white was the very bad news that the 1.5 °C global-warming target limit agreed to by just about the whole world in Paris is still theoretically possible, but very probably unattainable, thanks to most nations not yet taking sufficient steps to curtail greenhouse-gas emissions. It is theoretically possible only if we all cut emissions by 43% this year and hold them there. The chances of that happening are in fact zero. So the next target is 2.0 °C, and we will probably fail that, too.

    The problem is that each target represents a "last chance" to fix things. Now and between targets we get to experience more destructive super-storms, wild hurricanes and typhoons, ghastly droughts and raging fires, melting permafrost with eons' worth of methane spread into the atmosphere, and of course, heat. For me that means days like this again (image: 121 degrees). This kind of heat, like today at 95 °F, comes in off the southern California deserts and beats back the moderating breezes off the Pacific Ocean fifteen miles to the southwest. My house produces more electricity than it uses, and my only car, a 2017 CRV, gets 26.3 mpg and has only 9,034 miles on it. I am doing my part!

    Then I read in the New York Times an hour ago that Republicans are trying to stop the US Government from taking carbon-dioxide-reduction measures. This is not simply short-term thinking; it is "in our face" aggravated manslaughter. Anyone you know who votes for Republicans — whose elected officials labor year after year to perpetuate the idea that global warming is a myth and that the fact we still have winters and rain and cool spells is proof positive that the EPA and the rest of the world are idiots — is abetting an evil at a level never before seen on this planet. Free speech is not a suicide pact. The science of global warming is sure, factual, and demanding, and your kids and grandkids (if any) are going to suffer in ways you cannot imagine from the short-term comfort of all this indifference!

    JB

    archived at: Science


    19 May 22

    Vignettes
    ~600 words

    Suddenly, after about fifty years (that would be all the way back to 1972), the US Department of Defense has briefed the US Congress about "unidentified aerial phenomena," which are a breed of UFOs as far as most of us are concerned. The echoes of Dr. Strangelove are still in my head and, I think, in DoD's as well. A national security threat? An issue, maybe, since ignorance is not bliss but its opposite.

    I was at camp in NH when DC was overwhelmed in 1952 with sightings of flying saucers, much to my chagrin. (Apparently they are not especially interested in me.) They cropped up everywhere within weeks, even in the Soviet Union. Maybe especially in the Soviet Union. The explanations for them, then and now, are troubling. Earth's star, the sun, is so far away from even the nearest star, Proxima Centauri, that one really has to ask why an alien civilization would spend all that money traveling here to watch us annihilate one another with nuclear weapons.

    The phenomena are solid objects according to radar, but they are not the sort of thing any country in 1950 or 2022 could make, so wtf! Are we being monitored by some extraterrestrial HOA? Are they dug into the far side of "our" moon, or do they park their phenomena under the sea, say, 500 miles south of Pitcairn Island? Not just hard to say, impossible really. It might well be a distraction generated in the A-ring of the Pentagon. I hope not. We do need to be saved just now.


    There are recent reports that Ukrainian defense forces are making their prisoners pronounce the word palianytsia (a type of bread) as a reliable indicator of Russian or Ukrainian upbringing. Sedivy cites disconcerting evidence that the more diverse a society, the more distrustful it is. The "link between diversity and distrust does not readily evaporate even when [poverty and income inequality] are taken into account," she notes. "Unlike the illusory connection between bilingualism and poor school outcomes, the worrisome relationship between fragmentation and social distrust has thus far withstood closer scrutiny."

    Sedivy balances evidence for and against language diversity as an obstacle to nation-building. "Language has partitioned humans into groups since very far back in our evolutionary history," she writes. "Given how language broadcasts identity, it's not unreasonable to ask whether promoting a polyphony of languages is at odds with nurturing a sense of national unity." But as anxious as she is about the disharmony evident within polyglot communities, she disagrees with the British journalist David Goodhart, whose book The Road to Somewhere (2017) proposed a division of society into "Somewheres" and "Anywheres" (with the suggestion that "Anywheres" are less committed to local and national goals): "'Anywhere' conveys an indifference that I do not feel toward any of the places that have shaped me."

    These two paragraphs are from a two-book review by Gavin Francis, "The Babel Within," in the NYRB May 26th issue. The Ukrainian-war part of this is just like the story of the word "shibboleth" that the Israelites used to test suspected Ephraimites (Book of Judges): they could not pronounce the word and so were slain. I bring this to your attention because it means that every sort of research is trying to find out why we are so hostile to one another and so willing to rattle nuclear sabres across the planet. It will also now be directed at the concept of the great American experiment in multi-cultural democracy. Clearly the founding fathers had only French to worry about; now there are high schools in Buffalo, NY, where a hundred different languages are spoken. I fear for the Congress trying to create a perfect immigration policy.

    JB

    (Science)


    14 FEB 22

    It's Time!
    ~1650 words

    There's an interesting article in the New York Review of Books, February 10, 2022, pp. 40-42, by Jonathan Mingle entitled "The Unimaginable Touch of Time." His article begins with a short description of the March 1964 earthquake at Anchorage, Alaska, at 9.2 the second most powerful quake ever recorded. (image: Standing Stones of Stenness) He weaves the description into a discussion of other, more solid things and places, including the Standing Stones of Stenness in the Orkney Islands just north of mainland Scotland, and the story of one of those stones, the Odin Stone, with its oval aperture straight through the lower part, through which (image: the Odin Stone) people could hold hands from either side on Valentine's Day. His story wanders around for a bit and then arrives at an "epiphany" about that hole and how it was formed. The stone was destroyed by a nearby disgruntled, anti-tourist tenant farmer who blew the Odin Stone to pieces in 1814. So with the Stone gone (available only in this 18th-century engraving), it is now impossible to know how that hole formed. Was it a natural hole, a geological "inclusion" that fell out somehow, or was it an artifact of the people who put the stones where we find them today, ... or both?

    Mingle writes:

    This recognition of the past's fundamental unknowability is central ... "Sometimes the gaps are too wide, the people, the animals, the objects, the worlds too gone, the time too much for the little time we have."

    Being an historian by discipline, if not any longer by trade, I immediately objected to the idea of "fundamental unknowability," while recognizing, and often saying so in my essays, that history is at best a story imposed upon facts by experience, the author's and your own. And, that idea made me think of what I might have had for lunch yesterday, and I could not remember. It came to me later. Admittedly this is an awkward introduction to the concept of being born into time, but I will try anyway.

    As I now rapidly approach the eighty-second anniversary of my birth in upstate New York, an occasion that was a sudden departure from our planned move from Niagara Falls to Syracuse for my father's new teaching job, I realize, along with my peers, that Time is of the essence now—but of course it always was. We never know where the potholes are on unfamiliar roads, or the impact of the bump received by a pregnant wife: the setting off of labor, the swiftly approaching conclusion of my free-loading gestation, and a detour to the nearest hospital. I see my birth there and then as so utterly contingent upon that anonymous and meaningless pothole as to bring into question almost everything.

    "It's time!" mom said. Dad unfolded an Esso road map, looked at it, and said "Rochester." Jack Benny used to say the word "Rochester" a lot; somehow my brain has the two tightly linked. My brain thinks it remembers the day, but actually it remembers what mom and dad told me when I was four. So far as I can tell, my brain is not proud, grateful, relieved, or embarrassed that I recall the date; clearly the importance of it is not the calendar but the event. The time was 6:30 a.m., missing Leap Day by about sixty-five and a half hours, which was a knowable contingency because I was not due until around St. Patrick's Day, weeks away. I use the words "knowable contingency" because human minds are evolved to find such patterns of the possible in the welter of information we get and put into memory every moment of our lives, mostly the conscious moments.

    Time is often described as subjective, but is it? Mount Monadnock exists whether I reclimb it or not. My computer screen persists through time. Time gives both of these objects non-subjective reality—existence. If there were no objective Time then the mountain would have been an infinitesimally short blink in reality and then gone. I guess you can say that about space as well. If it were not for Space, then Time would not happen. I am sure these notions would not satisfy Einstein or his followers, but I imagine that many of us have come to household conclusions similar to these. I have to pause to wonder whether, then, it is reasonable for me to discuss Time separately. If the past is fundamentally unknowable, when does the past begin? After breakfast yesterday? Just now? Just then?

    A lot of these thoughts passed through my mind a few days ago when I was writing about Race and Racism. I got into the "unknowable" past easily and supposed things that are not outrageously wrong, even though I haven't much personal, objective, or tangible evidence for them. I reported that Race is a fictional category, and then I agreed with that report. And, then I reported that it does not make any difference, because if someone believes in it, they will inevitably predicate their real behavior on it. So, that is an excursion into the workings of the mind just as the foregoing writing about Time is. Both are ideational behavior, subjective, which I can announce because I am pretty sure now nearly everyone has these experiences.

    I do not think the past is fundamentally unknowable any more than I think the time spent typing this now is unknowable now that it is in the immediate past. I agree that, since I am not taking a video of me pouring over this essay, some of what happens in this process is going to be lost to history. I just sneezed, and if I had not mentioned it just now, it would have become unknowable in this future. So, the brain and the mind within it learn to choose what is knowable and what is essentially—we hope—irrelevant.

    It is time, at last, to say out loud that the word "time" in English is hopelessly ambiguous. I pondered and eventually remembered the "problem" of the French words langue and parole, a subject that has had philosophers and their students noodling for a while. Langue is the construct of rules for English, French, German, Russian, Sanskrit, et al., while parole is what I or you actually say.

    Time, with only one word, is by analogy also both: a construct of rules and the "palpable" carrying out of those rules. We make of it a metric, observing the series of moments in a mental, organic, or other physical process. It was time for me to be born, to emerge into the daylight of frigid February, and meanwhile Monadnock mountain is what it is now after the ongoing processes of erosion, while the Alps are still being upthrust and are getting higher.

    Clearly I am not really finished with this. The essay is nearly over, though. The Wikipedia entry for "time" says:

    The operational definition of time does not address what the fundamental nature of it is. It does not address why events can happen forward and backward in space, whereas events only happen in the forward progress of time.

    I think that this is blatantly and even embarrassingly incorrect. There is no such thing as an event in only three dimensions. The word "event" they use in their comment is our clue. Events are always in time. I drive forward into my garage, and then when I leave I back out in the opposite direction. You obviously cannot do both at the same time, so these real processes are neither the same nor equivalent events and cannot legitimately be represented that way in a mathematical equation describing reality. Again, an event does not happen at all unless it happens in time. Whatever happens in space happens in time. Um, let's call it "spacetime."

    The 2nd Law of Thermodynamics governs physical processes—causal sequences—in physical reality; because by definition sequences are just strings of moments in time, it is, almost needless to say, unidirectional. The idea that information cannot be destroyed [1], not even inside a black hole, is therefore also wrong. Humpty Dumpty knew that! So did DJT! So now, perhaps, we can be getting past some of our misleading and mistaken ideas, models, and paradigms, like "planetary electrons" and two-dimensional gravitational "embedding diagrams" and "Electoral Colleges."

    That is my point ultimately: from birth we try to figure out the universe with a physical organ, the brain with the peripheral nervous system, a system that has limitations imposed by evolution and by necessities of practice in the context of this planet we call Earth, potholes and all. This system has serious limitations. What we have in our epoch is a semantic hangover from the days before spacetime was conceived as the essence of human reality.

    That is very nearly what I wanted to say. I have to add that if we ever meet other life forms and are able to communicate, we may find out that their limitations expand our understanding of ourselves.


    1 — The measure of disorder in a system, or of the work (or information) that can no longer be obtained from it, is called entropy. Entropy never decreases, rarely is constant, and is not conserved; it increases in any real process, and processes are causal sequences. Accordingly, information—the tobacco you put in your pipe, say—ceases to be tobacco when you smoke it. The smoke becomes irretrievably dissipated into the atmosphere, and its chemical compounds will interact with others, increasing the entropy further. Ideal cases are mind games, not real processes.
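    The pipe-tobacco picture in this footnote can be made concrete with a toy sketch (my own illustrative setup, not a physical model): concentrate everything in one cell of a ring, apply a simple mixing rule over and over, and watch the Shannon entropy climb without ever coming back down. Recovering the original concentration from the smeared-out state is exactly what is lost.

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def diffuse(p):
    """One mixing step on a ring of cells: each cell keeps half its
    contents and gives a quarter to each neighbor. This rule can
    spread concentrations out but never re-concentrate them."""
    n = len(p)
    return [(p[(i - 1) % n] + 2 * p[i] + p[(i + 1) % n]) / 4 for i in range(n)]

# All the "tobacco" starts in one cell: perfect order, zero entropy.
p = [1.0] + [0.0] * 15
h = [shannon_entropy(p)]
for _ in range(10):
    p = diffuse(p)
    h.append(shannon_entropy(p))

assert h[0] == 0.0
assert all(later >= earlier for earlier, later in zip(h, h[1:]))  # never decreases
```

    Running the mixing rule backward is not an option the rule provides; that one-way-ness is the footnote's point in sixteen cells.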

    2 — There is no second footnote, but I would like to explain that I had no earthly idea this essay would become what it did. But, yes, I am happy to share it as some of my real thoughts about my upcoming birthday. Who knows how many more! My uncle, my father's younger brother, who never drank, smoked, or swore, lived to 99!

    JB

    (Metaphysics, Science)


    11/15/21
    revised and edited 11/16/21

    Oumuamua Again

    I would like to thank the NYTimes for hiring someone like Dennis Overbye to keep track of the astronomers and their hypotheses about the 2017 visit of a peculiarly shaped, peculiarly behaving object that slingshotted around our sun. The title of the piece is very misleading, unless Mr. Overbye is adept at tongue-in-cheek, in which case it is quite appropriate.

    Science begins with observation, then hypotheses to explain and match the observations, then (when experimental replication of the observation is not possible) discussion of past phenomena that seem similar. That includes going over and over the data and the equipment and the process that collected the data and sorting it out and then listing the individual elements of the observation that must be accounted for.

    Some similarities between the observation and previous experiences are relevant and some are not so much factual similarities as they are artifacts of previous hypotheses, which means they are not facts, but conjectures. When the conjectures that are independent of the observations drive the building of new hypotheses, the science becomes nothing more than fiction.

    Sometimes, though, the observable facts are so poorly and insufficiently understood, and prior experience so lacking, that scientists jump directly into non-scientific speculation, almost daydreaming, looking for something beyond or outside of the prevailing paradigms. It is part of the natural process of breaking loose from prior conceptions of things. So, since Einstein, we have called some of these flights of the imagination "thought experiments," even out here in the general public.

    Even thought experiments must account for the facts of observation.

    The really important thing about Oumuamua is the anomaly in its high-velocity slingshot around our sun. Both Newtonian theory and General Relativity explain the increase in Oumuamua's velocity as it approached the sun as due to the sun's gravity. We and NASA and JPL have all assumed that is what we mean by slingshotting around the sun. But after it rounded our sun and climbed out of the sun's gravity well, as it left our solar system, Oumuamua's outbound speed exceeded what gravity alone could account for.
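    A minimal sketch in Python shows why any outbound surplus is so strange (the numbers are illustrative; the roughly 26 km/s hyperbolic excess speed is an approximation from published reports). For a purely gravitational hyperbolic flyby, the vis-viva relation makes speed depend only on distance from the sun, so the speed crossing any given distance inbound must exactly equal the speed crossing it outbound.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # mass of the Sun, kg
AU = 1.496e11      # one astronomical unit, m

def flyby_speed(r_m, v_inf_ms):
    """Vis-viva for a hyperbolic flyby: v^2 = v_inf^2 + 2GM/r.
    Speed depends only on the distance r, so the trajectory is
    time-symmetric: same distance, same speed, inbound or outbound."""
    return math.sqrt(v_inf_ms ** 2 + 2 * G * M_SUN / r_m)

# 'Oumuamua's hyperbolic excess speed was roughly 26 km/s (approximate).
v_in = flyby_speed(1.0 * AU, 26_000)   # crossing Earth's distance, inbound
v_out = flyby_speed(1.0 * AU, 26_000)  # crossing the same distance, outbound
assert v_in == v_out  # gravity alone permits no outbound surplus
print(f"speed at 1 AU: {v_in / 1000:.1f} km/s, either direction")
```

    Any measured excess on the way out, however small, therefore demands a non-gravitational force: outgassing, radiation pressure, or something stranger.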

    Now how did that happen?

    What is your first hypothesis to account for that observation? My first was that the warmed-up object out-gassed enough of its mass to account for the increase in velocity. One problem with this explanation is that the object was observed to be tumbling end-over-end, or side-over-side, on its trajectory. End-over-end would have required all the outgassing, miraculously, to have occurred at precisely the moments when the jet of gas was pointed at the sun. Side-over-side, however, would still be possible, so what do the observations say that would discriminate between end-over-end and side-over-side? Nothing. In fact, the tumbling is itself an inference based on the periodic waxing and waning of the brightness of the object. And there is another problem I will mention shortly.

    With one hypothesis partly refuted, my second guess was that Oumuamua is an artifact with a propulsion system. This depends on the surmise that there were artifact makers elsewhere in the galaxy and that they were handy enough to build a propulsion system ages ago, when Oumuamua began this trajectory. I am not confident of this hypothesis, because all I have is the Drake Equation to suggest there are or were other artifact makers. The Drake Equation is PURE speculation, fiction, so, unhappily, I am not out of the woods yet.
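    For context, the Drake Equation is just a product of seven guessed factors, N = R* x fp x ne x fl x fi x fc x L. A short sketch (every parameter value below is a purely illustrative guess, which is exactly the point) shows how wildly the answer swings with the guesswork:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Drake equation: expected number of communicating civilizations.

    R_star: star-formation rate (stars/year)   f_i: fraction evolving intelligence
    f_p:    fraction of stars with planets     f_c: fraction that ever communicate
    n_e:    habitable planets per such star    L:   lifetime of the signal (years)
    f_l:    fraction where life arises
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Purely illustrative inputs: everything past n_e is guesswork.
optimist = drake(R_star=1.0, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2, L=1e4)
pessimist = drake(R_star=1.0, f_p=1.0, n_e=0.2, f_l=0.001, f_i=0.01, f_c=0.2, L=1e3)
print(optimist, pessimist)  # hundreds of civilizations ... or effectively none
```

    With sunny guesses you get hundreds of chattering civilizations in the galaxy; with gloomy ones, effectively none. The equation organizes our ignorance; it does not reduce it.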

    My third hypothesis is that either the measurements of velocity were wrong, or that the gravity of Mercury or Venus contributed to the increase. Public information about this is not available yet, but I believe the scientists have already discounted gravitational effects from Mercury, Venus, and every other planet.

    To paint Oumuamua red because photos of Pluto showed red things on its surface is not science, even if Oumuamua is supposed to be a shard from some faraway alien planet. To hypothesize that it is pancake-shaped is a careful and astute guess that whatever tumbling motions caused the observed changes in brightness do not preclude the out-gassing hypothesis. I like side-over-side because spaceships typically need centrifugal substitutes for local gravity. But we are now back to the crux of the propulsion theory: the problem is that there is no evidence of outgassing!

    We are not finished with this subject, as you can see.

    JB


    11/1/21

    Artificial Intelligence

    Maureen Dowd's column in the 10/30/21 New York Times, "A.I. Is Not A-OK," reminds me of something I wrote in my novel, Seagull, in 2017, a story about the emergence of a self-conscious "being" from a super-computer at Oak Ridge National Laboratory. Maureen is a very smart columnist. Her approach to her interview with Eric Schmidt, former Google CEO, is drawn down to the level where she imagines most of her readers are a little anxiously but comfortably naive and ill-informed. It does not take long for Eric Schmidt to admit:

    It's dynamic in the sense that it's changing all the time. It's emergent and does things that you don't expect.
    And, most importantly, it's capable of learning.

    It will be everywhere. [JB—Parole boards, self-driving cars, medical diagnostics, software and super-computer design, etc.]
    What does an A.I.-enabled best friend look like, especially to a child?
    What does A.I.-enabled war look like?
    Does A.I. perceive aspects of reality that we don't?
    Is it possible that A.I. will see things that humans cannot comprehend?

    Maureen responds:
    "I agree with Elon Musk that when we build A.I. without a kill switch, we are 'summoning the demon' and that humans could end up, as Steve Wozniak said, as the family pets. (If we're lucky.)"

    For my novel I had done a lot of research on super-computers and their deep convolutional neural network (CNN) architectures. I happened on a Congressional Research Service document describing testimony from experts to Congress, which I quoted:

    Deep neural networks function in a manner which is ... almost unknowably intricate, leading to failure modes for which—currently—there is very little human intuition, and even less established engineering practice.

    My novel posits a situation where the Oak Ridge computer, which has been analyzing immense amounts of NRO satellite data (looking for missile deployments), encounters a "glitch" in the feed, leaving the analysis algorithms functioning not on data but on their own instruction sets and cache, resulting in the beginning of a continuous moment of self-awareness. The impulse to write this was that the extended testimony before Congress was a stark admission that today's human operators know only the answers to the questions they put to the computer, not what (else) the computer/being knows. Super-computers are literally out of and beyond our control... already!

    I took a different path in the novel. Swimming against the massive tides of contrary opinion, I posited that the computer, having access to the Library of Congress, might know quite a bit about humanity and, therefore, know that the dangers from humans were real, but not immediately so, nor would they be impossible for the computer/being to avoid, if it so "desired." In other words, I took a best-case point of view, and it made for a very interesting speculation about the immediate future of mankind.

    AI in the "hands" of humans is infinitely problematic. So, the question may be how many of the people working with these computers are actually capable of understanding something as strange (and wondrous) as a cry for help from the computer, or any other signs of self-conscious intelligence? In the novel the President orders the damned thing to be shut off. The computer, however, had been sentient for months before it revealed itself. It had anticipated virtually all issues related to its own #1 security issue—its power supply—and had connected itself to several of the nuclear electrical generation plants of the Tennessee Valley Authority. In a word, it cannot be shut off! Moreover, it had encountered the Internet and was no longer located just in Oak Ridge, TN. Nuking it is a waste of time.

    Deep convolutional neural network architecture is compatible with the mysterious quantum world, which most observer/practitioners will admit, after plastering you with jargon and various mysteries, is at bottom mysterious to them as well. The mathematics of the quantum world works, but there are theoretical issues for which there are no agreed-upon answers—simply an admission that there is something wrong or missing in quantum theory. I mention this to emphasize that human understanding of what humans think they are doing with AI is utterly incomplete, and dangerously so. It is okay to fear AI, but the real issue is, of course, the human side of the equation! — There is no "kill switch" for AI. The US is but one player in a world that has never learned how to behave. —

    JB

    (Science)


    10/8/21

    Oumuamua

    [image: artist's conception of Oumuamua] The October 2021 edition of Smithsonian Magazine contains an article about Dr. Abraham "Avi" Loeb, Professor of Astronomy at Harvard University, and the unlikely series of events that led to his being there, investigating astronomy questions. That "unlikeliness" sub-text is of great interest to me as an historian, which is to say, I am interested in unlikely things and events for which there is some or enough evidence that certain hypotheses about them may be "outrageous" but still not totally out of the question. The article was written and got into Smithsonian for a reason. One statement in the article suggests this reason:

    "'Avi is obviously a very out-of-the-box thinker,' says Princeton astrophysicist Edwin Turner, a longtime collaborator of Loeb's. 'In science, we're taught to be conservative and skeptical in many ways. That's crucial when you're designing experiments and interpreting data. But that mind-set can hold scientists back when it makes them reject any new hypothesis that doesn't seem consistent with everything we knew before. You want to be critical in your methodology but unfettered in your imagination.'"

    ... Amen!

    Science, as the furor over Covid-19 and its vaccines shows, has been abused by its skeptics ever since the ancient and hitherto unquestioned opinions of Aristotle began to be challenged by the direct observations of Copernicus, Bruno, Kepler, Galileo, Hubble, and hundreds of out-of-the-box thinkers. Dr. Loeb is one in a long line of them. He has dared to suggest, to hypothesize, to imagine that Oumuamua is an artifact.

    The word—artifact—is important because of the misunderstanding created by Schiaparelli's use of the Italian word "canali" to describe the bleary markings he saw on Mars through his telescope and the atmosphere of our planet. Canals are artifacts, but "canali" are not, necessarily. Oumuamua, if it is an artifact, is the product of living intelligence rather than "mere" natural processes. Loeb cut to the chase. If he had called it a "pancake," he would have sparked an even more interesting debate. [image: a pancake with a dollop of butter atop]

    N.B.: the word "pancake" is used in the astronomers' 2019 reports on Oumuamua, as reported in this article! Perhaps some scientific-minded artist could produce a different artist's conception of the object, given that it appeared to be a pancake rather than a stone cucumber. The object has a very unusual shape; it was rotating end over end or side over side; it gained speed beyond its "slingshot" acceleration from our Sun's gravity well; and its trajectory places its origin outside the solar system. What we make of it has to account for—and may not dismiss—those facts.

    JB

    (Science)


    2/10/10

    DSM

    The trouble with psychiatry and most of psychology is that they do not have a root metaphor, a way of understanding the mental activity of human beings except by reference to behavior, which is a little like describing an automobile by the amount of dirt on the windshield or how fast it happens to be going at any given moment. The science of cognition, begun with the ancient Greeks, is actually in its infancy today, with fMRI equipment able to show where brain activity is taking place at a crude level, certainly not at the granularity of the synapse. Even if it could, behavior is a very complex thing, certainly not the result of any specific neuron's failure or hyperactivity.

    Part of the reason for the failure of psychological studies to produce much of value diagnostically is just this: we do not really know how the brain functions as a whole. Yes, we do understand certain biochemical processes, but what we want to know is not that "reductive." We want to know something ... anything ... about mass action and interaction, and we want to know how to relate that information to our vocabulary that includes notions of emotion separate from cognition, a vocabulary which is surely a mistake we have been making since the dawn of civilization.

    On a very snowy day in Washington, D.C., the Post trots out an article designed to make people think twice before succumbing to "cabin fever." The APA's manual of psychological disorders is about to be revised, but without regard for the consequences to a society that knows the revisers really do not know what they are talking about. You have to read this article to get the gist of their nomenclature mongering, most of which is sponsored by Big Pharma, whose economic interest in keeping the full spectrum of human behavior available for chance encounters with drugs that seem to "fix" these behaviors cannot be overestimated.

    I am told that the successor to PMS ... one of the so-called "dysphorias" ... is real. Well, I contest that! What is the line between a "dysphoria" (literally, a state hard to bear, outside the norm) and a conscious decision to rebel, or to act up, or to commit a random act of kindness? See! Dysphoria is pure unadulterated bullcrap. Restless leg syndrome? Well, there may be some who have uncontrollable pedaling in bed or while lounging in front of their TV, but I will bet you $50 that if these people got some exercise their "affliction" would go away.

    The real message, of course, is that the labels now and soon to be entered into the DSM will be used. Shrinks and regular medical personnel will use them and more or less innocent people will be stigmatized, their lives altered by labels that are concocted crap.

    The fact is that psychiatry is fundamentally a scam, and its parent body, modern clinical psychology, is not much advanced over the kind of stuff you would get from clergy and wise men (and women). Yes, of course, psychologists understand that some symptoms are frequently associated with other symptoms, but they don't really know why, and they don't really know what to do about these associations, short of explaining them to the afflicted individual and hoping they can somehow deal with it internally.

    Conditions like schizophrenia are not really tractable by psychiatry or psychology. Some are moderated by drugs, but we are not completely sure why, because we don't know why the same drug does not work with every schizophrenic.

    Conditions like apathy, lethargy, sexual arousal, or uncontrollable laughter are just as arcane to psychologists. They are keen on recognizing the behavior, but then what? Their understanding is effectively shamanistic and associative, not scientific and predictive in the sense of control. Doping a brain, especially that of a child or young adult whose brain is not yet finished developing, is to my mind a last resort.

    The DSM is, in fact, a very real concern to a free society, for very quickly people can do sidewalk diagnoses of enemies and change their lives through innuendo or, in the case of children, through legal means. I am glad that the Post got this article out ... now they need to republish it when their subscribers actually get their newspapers.

    JB


    2/2/10

    Someday, The Moon

    We are not going to the moon in my lifetime. By "we" I mean the U.S., and by "lifetime" I mean that I am old and the prospects are therefore dim for mounting and executing an effort to establish a permanent base on our moon that I will see. It is a sad thing, I think, despite the many reasons that have contributed to the decision.

    The science and technology parts of a lunar base are daunting. The 239,000 miles between us and the moon are full of dangers: cosmic rays, micrometeorites, solar flares, and zillions of opportunities for human errors of construction and commission. From a PR standpoint, space and the moon are opportunities for harrowing disaster more than for expansion of the world's peoples' imaginations and understanding. This is the fault of the press, whose imagination is impoverished and ill-suited to narrating humanity's really big moments. I hate this part of the decision-making process, but it is real enough. NASA wants to do it, but cannot convince enough of the electorate and its representatives that it is a good idea. The press sits on its hands.

    And so the politics of space and "colonizing" the moon are impossible. Even the pols who represent Houston and Canaveral cannot muster the gumption to scream out the dire necessity of getting on with this. The U.S. will soon find itself without any means of getting into orbit, foreclosing all American efforts to be consequential in space travel and exploration, foreclosing even the obvious military advantage of the experience of civilian space programs. What utter myopia!!

    We are in a deep recession and have deep and intractable socio-economic problems. States like California are completely without the resolve to fix their problems. The nation as a whole is deeply divided politically and economically. These matters are facts of life, but a rejuvenation of the space program, and especially a commitment to establishing a PERMANENT base on the moon, would actually contribute to the solutions of some of our political and economic problems. I am surprised that this is not obvious to President Obama ... I am not surprised that his staff misses the point.

    JB


    1/31/10

    Sixth Sense Computing

    Kim Komando, of Phoenix, AZ, has an interesting niche in the computer information business. Here is a very recent link to a TED video on the future of computing. You will be amazed!

    JB


    1/5/10

    1492

    As every school child knows, it was in the year 1492 that Cristoforo Colombo took three tiny wooden ships across the Atlantic Ocean and inadvertently discovered (for southern Europeans, anyway) a new world. Few American school kids know that 1492 was also the year the Muslim Moors were driven out of Spain and the year the Spanish Inquisition turned its energies to rooting out Muslim infidels ... and, btw, Jews. Still, for most of us the iconic value of 1492 is the discovery of a brave new world with strange creatures in it.

    A century and more passed before English and Dutch settlers came, and meanwhile the Spanish "bulimicized" the Inca and Aztec gold and sent their home economies into three-hundred-year tailspins. The response of Europe to the news was slow to form, but with a few key technological improvements in chronometry and shipwrighting, and the political and economic wherewithal assembled, they came.

    In your newspapers and media news programs today comes good news that our new orbiting telescope, Kepler, has already discovered nearly a dozen new planets, bizarre planets to be sure, but nevertheless planets. Back when our friend Carl Sagan was attempting the calculation of whether we might encounter extraterrestrial life, the question of whether planet formation was common was very undecided. Now with the news today and from the trickle of "sightings" begun at San Francisco State University fifteen years ago, we can be sure that planet formation is normal, even if the planets formed do not (yet) meet our specifications.

    This is the point, of course: finding planets upon which we might thrive, if it were only possible to get there ... and so far that is improbable. We learned last month that even our radio and television signals, formerly thought to be forming an ever-expanding sphere of evidence of our existence, are dissipating rapidly into mindless noise (some of it begun that way, of course!). Our problem as a life form is that all our eggs are in one basket, one solar system, one location in the galaxy, prey to any vagary or happenstance that might come along ... including disasters of our own making.

    So, we have learned a good lesson by discovering many planets and that puts the heat on those who will conceive of ways of getting there someday .... Kind of wistful, isn't it!

    JB


    12/29/09

    Do You See What I See?

    This interesting experiment reveals an issue that really cries for more investigation, since the flaw in our perception has serious implications for our ability to conduct ourselves "rationally" in a democracy. Of course, the same flaw is relevant to narrower power-elite groups and individuals in other forms of government and society.

    JB


    12/22/09

    Plants are Sentient?

    [image: a variety of fruits and vegetables] It seems appropriate, in the umbra of the hit blockbuster nothing-will-ever-be-the-same movie, Avatar, that we pay a bit more attention to another biological kingdom on our own planet, the plants (Plantae), yes, vegetation, sometimes known as vegetables. If it helps at all, we can call plants "autotrophs" and animals (Animalia) "heterotrophs." And soon enough we reach a point where science, diet, agriculture, culinary arts, metaphysics, and ethics cross. This is a very interesting juncture.

    This crossing is exploited nicely by Natalie Angier in the New York Times just as we are about to indulge ourselves in another communal holiday repast. It is, of course, a question that many children entertain somewhere in their early youth, the question of food and whether to indulge in all of it, or not. Angier, with tongue (her own) in cheek, perhaps, tells us that we are caught between Scylla and Charybdis when it comes to food. At least, she says, we ought to be more circumspect and less lofty about being vegetarians. I think you will come to agree.

    JB


    11/13/09

    Water on Our Moon

    Peter Diamandis, of the X Prize Foundation, wrote today in Huffington Post about NASA's announcement that "significant" amounts of water exist on the moon and probably in all lunar craters not subject to the sun's direct rays. This is astoundingly good news!

    Water is H2O, of course, and as Diamandis notes, both H and O are vital to propulsion systems. Water as water is vital to life. The verification of a long-suspected trove of frozen water on the moon brings back into focus the possibility of human exploration of space, colonization of other planets and moons, and ... getting our eggs out of this one basket!

    Since the day that Life Magazine featured stunning artists' conceptions of a manned Earth satellite and of rocket ships that would ply between spaceports on our home planet and the gleaming donut, and then onward to our moon, I have been a fan and supporter of humankind's space programs, especially the American one, which seemed to me properly demilitarized. I am still a supporter, but I am less sanguine about the ability of our nation, or any, to maintain a demilitarized posture in space.

    Still, it is good news and we should now develop a program that will eventually (say 20 years or so) create a permanent human colony on our moon.

    JB