TV news just reported that New York City schools will permit students to skip school for a day to protest climate change. The story reminded me of one historical protest I read about when I was actually “in” school: when his pontoon bridge across the Hellespont was destroyed by waves, the Persian King Xerxes supposedly had his soldiers shout at and whip the sea. Yeah. That didn’t do much. Students taking a day off to protest climate change? Yeah. You draw the conclusion.
Our age is a bit obsessed with robots, artificial intelligence, and simulations. Do we have some sort of built-in God Complex that drives us to create in our own image?
Here’s where robot technology coupled with 3D printing and responsive polymers has taken us: materials that actuate themselves in response to different stimuli. Essentially, some clever humans at Caltech 3D-printed a flat polymer sheet, hinges included, that responds to heat (about 200 degrees Celsius) by becoming a folding and unfolding pentagonal wheel that moves itself. In other words, the gizmo can roll along a surface in response to heating. The pentagonal wheel, called “Rollbot,” paves the way for “untethered soft robots,” shades of one of those Terminator movies.* Think of an auto-origami making itself auto-mobile, a shape-shifting inorganic actor responding to its environment.

Well, we’re not quite at the Terminator-2 stage yet, not quite making a T-1000 to battle Arnold Schwarzenegger’s older model T-800; nevertheless, we’re on the verge of making robots that can physically respond to external stimuli by changing shape. And it’s not just the self-folding Rollbot that the Caltech group produced. One of the self-actuating robots they made appears to be a flat sheet until it is placed in water, whereupon it takes on a bug-like shape. It’s a real shape-shifter. When someone figures out how to add some neurocircuitry, we’re going to have competition as planetary rulers, or at least that was the fear of the late Stephen Hawking. Is there a limiting principle that will prevent robots from competing or from establishing superiority in the hierarchy of intelligence? I think there is, but I’ll name it below.

I can imagine such “soft” robots serving both art and technology. One could make sculptures that change shapes in a manner that previous, more rigid kinetic art could not. And unlike your computer’s screen-saving fractals, such art would be three-dimensional and solid. Furniture, also. Think of the advantage there. “Honey, the door is too narrow to get the couch into the house.” “No, dear. Remember we bought a self-assembling couch from IKEA. It is a thin sheet until you wedge some Cheez Doodles into the slot where the pillows will form.”

And what about such soft devices sent into places where humans would be in danger or where changing shape would be necessary for access, such as caves, collapsed tunnels, maybe even reactors? Soft robots could make their way into tight quarters and then, acting like some surrogate for an “as seen on TV” product, unfold themselves to seal a leak. Maybe all our robotic creations are designed as helpers that also decrease danger, robot soldiers, for example. But, then, what if robot soldiers turned on us, as so many sci-fi stories project?

“Why,” I ask myself, “would a Caltech story about a soft robot not only catch my attention, but also keep it?” Is there a lesson to be learned here? Was Hawking right to warn us about our impending slavery to machines that think? Will our projected downfall be our own fault? Remember what Zeus said when the gods asked him to take sides during the Trojan War: “Man has only himself to blame if his miseries are worse than they ought to be.” Yeah, the biblical “Man,” Neil Armstrong’s “Man”: the word means, of course, “humanity” or “humans,” and it includes both genders, the male and female brains, the Earth-bound “Martians” and “Venusians.” We’ve long been in the business of trying to create that which mimics what we are: thinking, responding, and anticipating beings. Statues are nice, but they lack life and only imitate shape.
Disney World’s “animatronic characters” imitate shape and movement, but move only to the extent that they are preprogrammed. Factory robots are very practical inventions; car bumpers are no longer misaligned, and engines are built with finer tolerances than they were on the pre-robot assembly line. But robots are at this time incapable of responding to unexpected stimuli on the order of human response, though they are very good at responding to stimuli we anticipate for them (Did Pavlov know what was to come?). Characters in computer games do things humans can’t, defying the physical laws of the “real” world; they are also two-dimensional and, for all their abilities, can’t move a feather. Artificial intelligences incarnate in robot bodies are the product of piece-by-piece additions of sensors. At this time, we have some robots doing what we can do here or there, but not, as we do, everywhere, with everything, at any time. That’s the “responding and anticipating” I just noted.

This new soft robot technology is a further step toward our making a being in our own image. In creating intelligent, responsive robots, we are like the mythical sculptor Pygmalion, whose statue Galatea came to life. And, now that I come to think of Pygmalion, maybe our goal outside utilitarian technology is to make companions-to-order, male or female “Stepford wives” that carry none of our human unpredictability. Shades of “sexbots”! (Do we create with the ultimate goal of satisfying our hormonal drives?)

As neurologist Oliver Sacks, author of the popular book The Man Who Mistook His Wife for a Hat: And Other Clinical Tales, noted, humans are a “miracle of integration.” That integration results in a sense of self; otherwise, we are merely more sophisticated robots that respond to our environments as biological computers. In our attempt to recreate ourselves in the form of robots, we fail to consider the principle of integration because it is too difficult to imitate.

And there’s something else beyond the principle of integration: the Principle of Gender Difference. As Louann Brizendine explains in The Female Brain, there are in general identifiable differences in “Man” that Zeus, from his perch on Olympus, glossed over in his comment. Female brains, according to Brizendine, contain on average more neurons (11% more) devoted to language and hearing than men’s, and they have a slightly larger hippocampus. Female brains also pack the same number of neurons more densely into an on-average smaller head than men’s on-average larger (more voluminous) ones—yes, guys, you have bigger heads, but you incorporate more “emptiness.” There’s a difference, even if an on-average difference. And that difference will probably keep us a step ahead of the robots we create, even soft ones imbued with some very fast processors.

Creating a “human-like robot” is difficult, but, were we to do so, therein lies the danger that Stephen Hawking and others—many of them science fiction writers—tell us to avoid. When Rollbot can both change shape and employ AI while responding to different stimuli, we’re in at least a little danger of losing our place at the head of the hierarchy. Rollbot’s successors will have none of our frailties unless we, the creators, build them into their makeup. But gender is a problem for robot creators.
Toshiba Corp.’s Aiko Chihira, the Japanese-speaking and very female-looking robot that directed customers at the Bank of Tokyo Mitsubishi UFJ and the Mitsukoshi store in Nihonbashi, Tokyo, was a long way from being “human.” That Aiko Chihira looked human was simply a capitulation to our need to think “she” was human. A robot in the shape of a box could give the same information as the robot with human features and multiple human-like expressions.** Aiko (if I may be so bold as to refer to “her” by her first name) is “humanoid” at best, more appearance than substance, with remarkably life-like features but without “life.” “She” has no sense of self, and we cannot assume that she “thinks like a woman.” If you look at Aiko and other robots, you see in them our limitations in making beings in our own image.

Of course, the goal of robotics engineers is to make a robot that can sense its environment as we do and respond to stimuli. To do so means they would have to program robots for the unexpected vicissitudes of the real-world environment; and when the engineers fail to anticipate what the robot surrogate will encounter, the robot will fail. But notice that engineers design by stereotype. “Human-like” and “humanoid” images shape their engineering designs. Yet there are those subtle differences in “Man,” and those differences are embedded in the brains of two genders, making what we all are as humans very difficult to mimic. Pygmalion’s Galatea both looked and acted like a human female. That ancient Greek myth contains a number of assumptions, not the least of which is that a statue can become “woman-like” in the real world of marriage, birth, and motherhood (Galatea’s child was Paphos).

In contrast, Watson, the IBM Jeopardy champion (against Ken and Brad in 2011), had no human physical features but outperformed the two greatest trivia champs.*** Watson had to deal only with language and historically recorded knowledge, and it seemed capable of making inferences to reach correct answers. But could Watson “feel”? Did it reach its answers in a linear, single-plane, though rapid, set of logical steps while Ken and Brad acquired their answers as aggregates that just popped into their consciousness through multi-planar thinking? You might argue that as physicists and engineers approach quantum computing, soft robots will become more human-like. But there’s still going to be a difference. The cold machine computing on a quantum level will always be different from the hot brain computing that operates on as yet not fully understood multiple levels of interconnections, with a central command that chooses what to attend to and what to ignore. Think of a computer’s processes as occurring on a single plane and a human’s processes as occurring on multiple planes, many of them intersecting. True, the computer can process faster than the brain, but that doesn’t make it more complex or capable of “seeing” what the brain sees in every situation. Plus, humans aren’t really good at predicting on the basis of mathematical computation, so they assess differently from computers, which are good at processing probabilities. Give me an intuitive robot, and you’ll give me a more human-like robot. Is our robot-goal, then, to make an error-free human?

So, which “image” of “Man” is the model for “Man”-like robots?
The brain in general and all the physical properties associated with it? Just the brain’s processing? Or the processing and behaviors associated with and connected to stimuli and the brains of others? Here’s my own “Turing Test”: make an angry robot. Or a loving one. The motion picture I Am Mother, written by Michael Lloyd Green and Grant Sputore, puts a “loving,” caring robot in charge of rebooting the human race. Mother is even willing to act as a mother would (I won’t give a spoiler, but will recommend the film). But think about it: to make a human, you really have to make a human. All our efforts to create in our own image will fall short, regardless of what the science fiction writers tell us and the engineers promise to make.

And what about those sheets of polymers that seem to respond to the environment? Well, you, too, have polymers. Teamed up with Stuart Hameroff, Roger Penrose explained in his Shadows of the Mind that consciousness derives from superposed quantum states, each acting in its own spacetime geometry. Somehow, they propose, the human brain operates on a level of quantum gravity in tubular polymer structures found inside neurons. Regardless of the somewhat unprovable nature of that explanation, one has to acknowledge that we don’t operate on purely logical grounds as a machine would. Look inside; there’s much going on, too much to keep track of, and, anyway, no one has yet been able to keep track of the components of the quantum microworld. That means that the replication or duplication of a thinking, responding, and anticipating being we know as “Man” is beyond our present and future technologies. Why go out of our way to train inorganic polymers to do what those tubular polymer structures in our neurons already do? Again, is it because we have a God Complex?

And so my answer to that question comes back to a theme I’ve hit on before. Remember the Adam and Eve story? If you reduce it to its essence, it’s about pride. They wanted to be like God. And we, in creating robot after robot, refining AI as much as we can, seem to have inherited that ancient human desire. “I want to be like God, and making a thinking, responding, and anticipating being is my path to apotheosis.” Short of creating some robot in our own image, we take on the task of shaping others, our children, as best we can to make them in our own image. But, as in creating the perfect human substitute in a robot, creating the perfect replica of us in our children is doomed to failure. In part, that lack of success lies in our thinking of individuals in terms of “Man,” that is, the “general human.” As every parent discovers, there are just too many variables to manipulate, to control, to anticipate. And if Penrose is correct, then we’ll never get a handle on the illogical nature of our quantum selves, where probability, and not predictability, rules.

Of course, the course is set. Engineers will continue to create robot after robot after robot. No one is going to change that unless our technological civilization fails. Ironically, there are those among us who would, in fact, destroy civilization and their very hope of achieving such godlike status, and many of them—as terrorism of the last twenty years has shown—seek to destroy in the name of their deity. That self-destructiveness is part of “Man’s” potential, and its presence means that to truly create in our own image, we would have to incorporate it into our robot progeny.
We have two kinds of brains (in general), unpredictability, and an integration of Self that make up individuals and that make a creation of “Man” impossible to achieve no matter how much our desire to be godlike drives us in our creations.

*Caltech. Self-folding “Rollbot” paves the way for fully untethered soft robots. August 21, 2019. https://www.caltech.edu/about/news/self-folding-rollbot-paves-the-way-for-fully-untethered-soft-robots
**Hongo, Jun. Robotic Customer Service? In This Japanese Store, That’s the Point. The Wall Street Journal online, April 16, 2015. https://blogs.wsj.com/japanrealtime/2015/04/16/toshiba-humanoid-robot-to-debut-in-tokyo-department-store/
***https://www.youtube.com/watch?v=P18EdAKuC1U

Ah! Humans. A prideful lot, really. And among their number are those termed archaeologists, a special group of people devoted to ancient times. Put their devotion to their craft in an environment of professional journals, academic departments, and sensational news reporting, and you get competition: for funding, publication, and recognition. I mean, who wouldn’t want to be recognized as the person who discovered El Dorado, Atlantis, or some antique writing that explains why people drew lines on the Plain of Nazca?
And so, in the Americas, there’s been a problem for archaeologists. How could they compete with those who discovered Homo erectus, Homo habilis, and the like? How could they find something that put people in the Americas before the Bering Land Bridge lay above sea level? I mean, everyone knew about the Clovis people. What’re they? Inhabitants about nine or ten millennia ago, maybe a few millennia earlier at best? And then, relentless diggers that they are, a few archaeologists discovered evidence of older habitation: Meadowcroft Rock Shelter in western Pennsylvania, for example, and the Topper site along the Savannah River.

Like so many of our species, archaeologists have emotional attachments to what they know. That’s been the story of scientists left behind as new discoveries displace older ones. And all the paradigm-changers face the same doubt, disbelief, and, sad to say, disgust from adherents of previous knowledge. I mean, who likes to have what he has built a career on overturned at a moment’s notice by a chance discovery of an unknown phenomenon? But, of course, you know, as I know, that archaeologists aren’t the only humans who hold onto that in which they have invested time. And you also know about intellectual inertia. It took a number of years for Einstein’s groundbreaking 1905 physics papers to work their way into a new physics paradigm. It took decades for Alfred Wegener’s continental drift to be understood as seafloor spreading. And it has taken a number of years to get past the Clovis-as-first-North-Americans mantra among archaeologists. More recently, the Topper site along the Savannah River, uncovered by a group from the U. of South Carolina led by Dr. Albert Goodyear III, has encountered the same resistance.

Now don’t get me wrong. Doubt is the scientific method of choice. I witnessed that in person when I attended a seminar at the Rosenstiel School of Marine and Atmospheric Sciences at the U. of Miami in 1980. Cesare Emiliani had returned from a visit to the Alvarez father-son team, who, with Asaro and Michel, studied the ash layer at Gubbio, Italy, and concluded that a worldwide disaster had occurred when an asteroid or comet hit the planet 65 million years ago (goodbye, dinosaurs). After Emiliani explained what the Alvarezes had discovered and postulated, the seminar attendees showed healthy skepticism. After all, where was the crater made by this impact? No one in attendance at the time seemed to know of a discovery by Camargo and Penfield that further confirmed a Yucatan Peninsula site as a large crater.* In a world with so many scouring Earth’s surface for discovery, scientists can become quite isolated—especially so in the largely pre-Web era of 1980. Anyway, at the time, I was willing to cast doubt aside in favor of the hypothesis backed by an iridium-laced ash layer. I thought, these guys are onto something that might have been a big contributor to the extinction at the end of the Cretaceous. I understood that most scientists at the seminar seemed to respond emotionally rather than rationally, though they also knew that other extinction mechanisms, such as a pandemic, could have been at work. Yet their doubt seemed to me to be a kind of protective doubt, a mechanism to save “secure” knowledge.

Back to Topper. I’m not an archaeologist.
But anyone who looks even briefly at the discoveries of American archaeologists during the late twentieth and early twenty-first centuries will notice the reluctance of older archaeologists to relinquish their hold on “established” hypotheses like the primacy of the Clovis culture in North America. They don’t want their research to be topped. There are reasons for the reluctance to change, as I wrote above: pride being one; funding being another.

And that seems to be applicable to the current intellectual climate of climate studies. There are a good many scientists supported by government funding for work on “climate change.” If you read through the government-supported studies that lend support to the IPCC’s main theme, you’ll see articles that lie on the periphery yet appear to be central to “scientific belief.” When we hear often enough that a bolide of some sort killed the dinosaurs, we make it part of our worldview. When we hear often enough that the earliest people in the Americas were the Clovis, we make it part of our worldview. And when we hear that we’re all doomed because the climates continue to exhibit vicissitudes, we make it part of our worldview. And why shouldn’t we believe what we constantly hear? Isn’t everyone saying it? And that applies to panic among the uneducated who want to turn common weather events, such as droughts and storms, into “evidence” for their beliefs. “But aren’t those fires in California and Brazil evidence of climate change?”

And that is what makes science unscientific. That is what keeps not only the majority of the laity in the secure darkness of isolated thinking, but also scientists who should doubt both the conclusions of others and their own conclusions. It seems to me that we’ve arrived at an era of “finished science,” but then, maybe every era since the ancient Greeks has been so. Aristotle might hold the Guinness World Record for the endurance of a “theory” with his explanation of why objects move. It took almost two millennia until Galileo and Newton overturned his thinking. It then took hundreds of years to alter Newton’s understanding of gravity. And then Einstein topped Newton. Remember phrenologists? They could explain personality by bumps on the head. Think of astrologers. They explain personality by positions in the zodiac. You might say, “Well, those are just extreme beliefs, debunked by science. But who can debunk climate science?”

Think of what the reporters have repeatedly told their TV audiences when a major hurricane hits the Caribbean islands or the coast of the USA. The sky is falling. The climate is dying. People are doomed. “We have the established science, and all the scientists—save some oddball deniers—agree.” But wasn’t the Clovis culture supposed to be the oldest in North America? Wasn’t the Bering Land Bridge supposed to be the route from Asia into America? What the heck are we to do with the Topper archaeological site and the Meadowcroft Rock Shelter site? Are the carbon-14 dates for Clovis legitimate but the carbon-14 dates for Topper and Meadowcroft illegitimate? Anyway, how did those Topper and Meadowcroft people get to America? “It’s not the same. It’s not the same. Climate science is established. And we’re going to have more and more studies that prove it’s established. We’re all doomed by rising temperatures. And no one can top that with a contradictory study.” Hold on a moment. “Doomed”? Oh! Right! I recently heard that we have just 12 years until climate doomsday (2031?).
So, following the advice to shut down all fossil fuel use in North America to save the world, we might make a dent of less than a tenth of a percent in the supposed temperature rise while China and India continue to spew carbon dioxide into the atmosphere unabated.

The discoveries at Topper and Meadowcroft appear to top the previous discoveries. They were made by tenacious archaeologists who bucked the trend. Their discoveries, however, haven’t been fully accepted by the establishment because of intellectual and professional inertia. Climate deniers face an even more difficult uphill climb. The world is settling on the hypothesis by consensus, and that, in science, is tantamount to making a theory. And the theory means inevitable doom. The masses appear to have abandoned doubt, and in doing so, have made topping the climate story only a matter of producing more of the same. Imagine. Someone identifies Clovis. Someone else finds a better Clovis site. And so on, and so on. All within the context of the belief that the Clovis culture was primary. Who among those committed to Clovis will yield to the discoverers of Topper and Meadowcroft? That’s the way it is with climate science today. The only topping is to do more of the same. That’s where the grant money is. That’s where the pride lies.

Unfortunately for the masses, the repeated becomes the true. And the repeated “truth” has inertia. It’s not going anywhere because more and more of us are becoming comfortable with it. There’s a kind of Higgs Field for ideas: the repeated “truths” take on such mass that no countering fact or intellectual endeavor can move them. So, the panicked masses will believe Earth has “12 years,” and will then, 12 years from now, hear that “Earth has just 12 years.” And on and on, not much different, you realize, from the Y2K scare, the numerous doomsday dates that have come and gone, such as the Heaven’s Gate mass suicide date, and others, like the one centered on syzygy. How you goin’ to top ‘em? You’re not. The isolated voice of reason isn’t, either. When everyone is on the same intellectual plane, there’s no high ground, there’s no “topping.” Every climate study will lie on the same contour line. I find it a bit interesting that in order to top previous studies and hypotheses in archaeology, one has to dig a hole. I have a feeling that climate science lies at the bottom of a very large hole. Every acceptable climate study appears to derive from a dig at the established “Clovis-equivalent Climate level.”

*Pemex, the company that funded Penfield’s research, didn’t let an earlier researcher, Robert Baltosser, publish similar findings about the Chicxulub crater. Baltosser found the crater a decade earlier than Penfield, but the Alvarezes had no access to that finding.

With regard to alcohol, the non-addicted simply ask, “Why can’t people just have more self-control?” And they might be heard to say, “I like a drink or two, but I’m not dependent upon it for a good time, social identity, or physical need. Heck, if I have a problem, I deal with it without alcohol or drugs.” Easy for the non-addicted to say, right? I suppose the same can be said for those who comment on the use of opioids and the proliferating numbers of overdoses during the second decade of this century.
Would that self-control were an easy solution. It isn’t, of course, because the chemistry of the body takes control, powerfully so for many. True, peer pressures and individual life histories play roles in initiating addiction, but once the addicting chemical takes root, it grows like yeast, checked only by outside intervention or death.

Humans share a chemistry with yeast, the stuff that makes wine and beer what they are. That shared chemistry lies in the protein Cytochrome-C, composed of roughly 100 amino acids. You and I are the same kind of organism by virtue of our common Cytochrome-C. Other animals are recognizably different by variations in the molecule. Change Cytochrome-C by about 6%, and you’re a chimp; 15%, and you’re a horse. Change it by 25%, and you’re a bee. Change it by 70% or so, and you’re probably beer yeast. Maybe our related Cytochrome-C molecules are the reason we have developed a taste for beer and wine.

And that’s where this self-control thing comes into play. Yeasts turn sugar into alcohol under anaerobic conditions. But the growth of the fermenting yeast slows and then stops when the alcoholic content in wine reaches 13 to 14%. That’s why you don’t buy 100-proof wine; the yeast just can’t grow in increasingly alcoholic surroundings. Producing alcohol is a self-defeating process for yeast. There’s a natural self-control built into the operation by the toxic nature of ethyl alcohol (CH3CH2OH). For beer, the shutoff for yeast growth is about six percent.

Unfortunately, humans, regardless of their distant relationship to yeast, have only a couple of options when it comes to shutting down alcohol: personal self-control or enforced control by the deadly nature of the alcohol itself. And there are similar options for drugs. Either individuals shut down their use or the chemicals act to shut themselves down. Keep in mind that if yeast produces too much alcohol in the fermentation process, the increased concentration of alcohol shuts down yeast growth. Maybe someday some researcher will discover a truly safe analog of the yeast-alcohol process for drugs, something beyond an emergency administration of naloxone, something that can be incorporated into cells themselves and that acts as a safety valve shutting down any runaway effect just as alcohol shuts down its own production in wine and beer. Until that time comes, however, individuals have only one safe process: self-control. And that means the best way to slow the addiction rate is by teaching self-control before addiction occurs. In a permissive, narcissistic society, self-control isn’t an easy lesson to teach.

All of us have some kind of addiction, even if in mild or slight form: coffee for breakfast, an NFL team, fishing, a favorite pillow. They give us recognizable patterns in a complex and seemingly chaotic world. They ensure that we have something on which we can rely. That some people have more severe forms of addiction is a matter of both kind and degree. But if you are one to say, “Why can’t people just have more self-control?” then prove it by giving up cold turkey that coffee, that NFL team, fishing, or that favorite pillow. Otherwise, like wine and beer yeast, you will continue to do what you do until the very doing undoes itself, just as producing alcohol through fermentation quashes the growth of the fermenters.
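For anyone who wants that self-limiting fermentation spelled out, here is a minimal toy simulation, my own sketch rather than anything from a fermentation text: the function name, growth rate, yield, and time scale are all assumed numbers, chosen only so that the alcohol level plateaus near the wine-like 14 percent cutoff described above (set the cap to about 6 for a beer-like case).

def simulate_fermentation(hours=300.0, dt=0.1, alcohol_cap=14.0):
    """Toy model: yeast biomass grows and produces ethanol; growth is
    throttled toward zero as ethanol approaches a toxic cap (percent ABV)."""
    yeast = 0.1            # arbitrary starting biomass units
    alcohol = 0.0          # percent alcohol by volume
    growth_rate = 0.08     # assumed per-hour growth rate
    abv_per_biomass = 0.9  # assumed % ABV added per unit of new biomass

    for _ in range(int(hours / dt)):
        # Inhibition factor: 1.0 with no alcohol, 0.0 at the toxic cap.
        inhibition = max(0.0, 1.0 - alcohol / alcohol_cap)
        new_biomass = growth_rate * yeast * inhibition * dt
        yeast += new_biomass
        alcohol += abv_per_biomass * new_biomass
    return yeast, alcohol

biomass, abv = simulate_fermentation()
print(f"final biomass: {biomass:.2f}, final alcohol: {abv:.1f}% ABV")

The point is only the shape of the curve: production rises and then chokes itself off, the natural self-control that we humans, alas, have to supply for ourselves.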
Consider these headlines: “’Sense of urgency’, as top tech players seek AI ethical rules,”* “Fake news model in staged release but two researchers fire up replication,”** and “US to use fake social media to check people entering country.”*** They center on what appears to be our latest ethical problem: fake news.
Now consider how little any individual knows about what goes on in a complex world with 7+ billion people, competing entities of all sorts, and governments with opposing agendas. Knowledge is power, we often hear. Fake knowledge appears to be just as powerful. But the latter isn’t new. People have been spreading rumors since they first gathered as extended units of families and neighbors. False information is a proven method of controlling crowdthink.**** Obviously, fake information can be dangerous. The “witch trials” in 1692 Salem, Massachusetts, make the point. But innocent people have always suffered from malicious fake stories. And populations have been manipulated by them, even highly educated populations. Take the story of Japanese medical researcher Yoshihiro Sato, a guy who published numerous—over 200—papers on bone research that led others to further research involving thousands of patients. He fabricated data for clinical trials published in international journals.***** It took Alison Avenell thousands of hours to uncover the extent of the fake medical news that had impacted so many.

That people have maliciously made up stories about other humans is part of human history, but the phenomenon has always been within the realm of our experience and understanding. Now, however, we have to consider that such stories might be the product of Artificial Intelligence. In our ingenuity, we have created another layer of falsehood that will inevitably have dire consequences for someone or some group. Is it time for us to become skeptical about all news? If so, have we come to the point of no return on trustworthiness? I’m giving Ringo Starr the last word on this: “I don’t ask for much; I only want your trust/And you know it don’t come easy.”******

*Staff report. Tech Xplore. September 2, 2019. https://techxplore.com/news/2019-09-urgency-tech-players-ai-ethical.html
**Cohen, Nancy. Tech Xplore. August 31, 2019. https://techxplore.com/news/2019-08-fake-news-staged-replication.html
***Abdollah, Tami. Tech Xplore. August 31, 2019. https://techxplore.com/news/2019-08-fake-social-media-people-country.html
****Americans were opposed to entering WWII in Europe, for example, but were persuaded to change their minds through the efforts of Canadian William Stephenson and other spies. Stephenson was Fleming’s model for James Bond. Think also of the “Russian collusion” stories that the Mueller Report debunked. Half of America bought into the fake story, and some, regardless of the Mueller Report, still believe it.
*****Kupferschmidt, Kai. Researcher at the center of an epic fraud remains an enigma to those who exposed him. Science. August 17, 2018. This extensive article reads like a mystery story as it details how Sato’s fake studies were uncovered. https://www.sciencemag.org/news/2018/08/researcher-center-epic-fraud-remains-enigma-those-who-exposed-him
******Starr, Ringo, and George Harrison. 1971. “It Don’t Come Easy.” Starr released a “single” of the song after the Beatles disbanded.

Ultimately, everyone is alone, even in the midst of others. True, we can turn outside, look for meaning, and actually find it, at least temporarily, in the thinking, actions, and emotional support of others. But suppressing our thoughts, especially our innermost thoughts, is virtually impossible; even in our sleep, dreams reveal our individuality and our isolation. In our mental solitude we think, remember, conclude, classify, and believe. Are we all just like parallel lines that never meet on any point?
The late prolific writer Isaac Asimov addressed many subjects in his books and essays. Among those subjects were Euclid’s axioms. In an essay called “Euclid’s Fifth,” Asimov partly addresses the concepts of truth and proof.* A point he makes is that Euclid’s five axioms and five postulates, particularly the Fifth Postulate, ultimately rely on some unquestioning faith and on assumptions about assumptions. Euclid’s axioms stood virtually unchallenged for well over a thousand years, though Nasir Eddin al-Tusi tried to rework the Fifth, and, building on his work, Girolamo Saccheri also tried to rethink the postulate in the early eighteenth century. Eventually, all of Euclid’s foundations of geometry developed some cracks as new geometries arose in the nineteenth and twentieth centuries. But for over 2,000 years, and even in today’s high school geometry textbooks, Euclid’s axioms have been accepted as “incapable of contradiction” and, as Asimov argues, representative of “…absolute truth. They seem something a person could seize upon as soon as he had evolved the light of reason. Without ever sensing the universe in any way, but living only in the luminous darkness of his own mind, he would see that things equal to the same thing are equal to one another (one of the axioms) and all the rest [of the axioms]” (143).

To further make his point about truth, proof, and assumption, Asimov uses Socrates’ discussion in Meno with an “uneducated” slave. Under the assumption that the slave can reach a valid conclusion because he is imbued with knowledge a priori, Socrates draws truths about a geometric figure from the slave’s mind, evidence, we should think, that there lie within us some a priori truths that are self-evident, the truths in question here being Euclid’s axioms. But are there truly self-evident truths available to any rational, even uneducated, mind? Are there any absolutes, such as Euclid’s axioms? And if there aren’t, are we required to ask Pilate’s question indefinitely: “What is truth?” Does that mean that in the dark recesses of each mind there is some sort of flashlight all of us can turn on to illuminate what we alone—in our loneliness—can see?

The term that catches my eye (and maybe yours, also) is Asimov’s oxymoronic “luminous darkness of…mind.” Aside from the relevance of the phrase to Asimov’s point, “luminous darkness of…mind” encapsulates a fault that lies in me and probably in you. It’s easy for us to assume our “common sense” and our experiences are sufficient for us to know through reason the essence of truth—our truth that we assume is THE TRUTH. In our isolation, do we bring to light Socrates’ a priori knowledge? Is there a priori knowledge that we can discover on our own? That brings us back to what I wrote at the beginning of this little discussion, that we are ultimately isolated and that in our isolation, the dark mind self-illuminates and then determines what is meaningful, at least what is meaningful to us. So, then, what do we do when someone like Euclid comes along and offers us a set of principles that appear so self-evident that for thousands of years no one can think of a valid challenge to them? Have all those mathematicians of the past two-plus millennia simply taken Euclid’s axioms on faith, and if they have, does that mean that even the brightest of us are locked by faith into axiomatic thinking? Does it also mean that in order to accept assumptions, we have to make further assumptions?
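Since the whole discussion turns on it, it may help to have the postulate itself in front of us. What follows is my own paraphrase of the standard statements, not Asimov’s wording. Euclid’s Fifth Postulate says that if a line crossing two other lines makes interior angles α and β on the same side such that

\[
\alpha + \beta < \pi \quad \text{(less than two right angles),}
\]

then the two lines, extended far enough, meet on that side. Playfair’s later, equivalent form is the one most of us met in school: through a point not on a given line there passes exactly one line parallel to that line. Grant the assumption and parallel lines never meet; deny it and you get the non-Euclidean geometries that put those nineteenth- and twentieth-century cracks in Euclid’s foundations.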
And given the penchant we have to simplify in light of our darkness, will we forever be closed off from discovering how our own truths might fall and others’ truths might rise? You might say this is all hogwash, some drivel from a prattler. But think of what has occurred in our halls of enlightenment, universities where speakers have recently been subjected to censorship because the self-supposed enlightened do not wish to hear any challenge to their truths. The dark truth is that in censoring free discussion and rational debate, a feeble light of “truth” will shine only internally. No challenges to axioms will illuminate the validity or invalidity of postulates that lie in the darkness of isolated minds.** And like Euclid's parallel lines that remain isolated forever, those with opposing views never meet on any point.

*Asimov, Isaac. "Euclid's Fifth," in The Left Hand of the Electron. New York: Dell Publishing Co., Inc., 1974, pp. 140-153. By the way, don't confuse "Euclid's Fifth" with Beethoven's (Da-da-da daaaaaaaaaa; da-da-da daaaaaaaa).
**No debate is, well, no debate.