Thursday, May 25, 2023

The Cost of Excellence

“Excellence is never an accident. It is always the result of high intention, sincere effort, and intelligent execution; it represents the wise choice of many alternatives – choice, not chance, determines your destiny.” ― Aristotle

“The best is the mortal enemy of the good.” ― Montesquieu

Photo: Pixabay

Bias Part III

In the relentless pursuit of quality standards, and in competing to express them, we automatically show our bias against anything but best-in-class.  If we pursue the top nominee for “Best cat breeds for catching mice,” we must discriminate against less talented mousers.  If we look only at top colleges, we ignore all other options.  We also daydream about absolute top quality in marriage partners, homes, careers, and cars – the top big-ticket decisions in a lifetime.  It would be rare for anyone to achieve top-quality results in all these categories; even the very successful can’t manage to pull that off.

While working toward or waiting for ideal opportunities, we face many more decisions that are fated to yield less-than-stellar outcomes.  Rarely do all big-ticket criteria align for the perfect world we hold in our heads.  Aristotle championed the excellent while also promoting the Golden Mean as the avenue for avoiding the extremes of the excellent and the abysmal.

In practice, of course, people can’t perform at their best, or meet top-ten criteria, in everything from driving to cooking, singing, organizing, playing bridge, managing a portfolio, or giving presentations.  We operate below our best most of the time, and that has consequences across the board for quality of life and reputation. “Anything worth doing is worth doing well.”  True, but we don’t always choose to pay for that option.  The costs of operating at that level are too high, or we must concentrate on one area of life at the expense of others.  The cognitive strain exacted by excellence means we apply high effort only selectively.  On his site, Robert Ferguson notes that for the Forbes 500, Excellence is the third most popular core value, after Integrity and Respect.

Social scientist Herbert Simon articulated the cognitive limits on effort and focus in studying complex, high-demand problems.  When things get too complex or hard to evaluate, we default to “satisficing”: making an effort good enough for the situation and its goals to get the job done, even if the outcome is not top-ranking.  Satisficing sees that the job is taken care of but imposes no mandate for excellence.  This departs from the classical Rational Man theory of economics, which assumes people know what they want and the logical price they are willing to pay for any given choice—like college.  Too often we are dealing with incomplete information, limited resources, and limited energy.  In everyday situations, entropy rules over excellence.
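Simon's distinction can be made concrete in a few lines of code.  This is only an illustrative sketch (the college names and scores are invented): a satisficer stops at the first option that clears an aspiration level, while the classical Rational Man evaluates every option in search of the maximum.

```python
def satisfice(options, score, aspiration):
    """Herbert Simon's satisficing: accept the first option that
    clears the aspiration level instead of searching for the best."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing was "good enough"

def maximize(options, score):
    """The Rational Man alternative: evaluate everything, take the top."""
    return max(options, key=score)

# Invented example: colleges scored on some quality measure.
quality = {"State U": 7, "Nearby College": 8, "Elite U": 10}
print(satisfice(quality, quality.get, aspiration=7))  # State U
print(maximize(quality, quality.get))                 # Elite U
```

The satisficer pays a search cost only until the aspiration level is met; the maximizer pays the full cost of complete information, which Simon argued we rarely have.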

In economics and engineering, this situation is described by the “theory of second best.”  No system operates at top efficiency in all its parts and dynamics all the time, and any part that isn’t fully operational affects every other part of the system, as in welfare economics.  There are countless errors to make and few ways to be top-notch: hundreds or thousands of chances to fall short for every chance to excel.  A basic problem is that the human brain is effectively two brains: we make decisions and take action on both the rational and the non-rational sides—the reason cognitive economics began to study both, venturing beyond Rational Man theory.

Diversity programs in all sectors of society are dedicated to breaking down the hierarchy of success by insisting that the successful better represent subset groups within the culture.  To defuse class envy and inequality, Santa Monica High School in California has closed down its honors program in English, a radical move against excellence based on merit.  As amazing as this sounds as a solution within an academic institution devoted to developing minds to their fullest extent, it is a logical step under the assumption that the top ranks of students express privilege based on unequal advantages, such as educated parents in homes full of books.  SAMO’s home page declares its mission as “Extraordinary achievement for all students while simultaneously closing the achievement gap.”  This noble confusion might be rephrased as “Get great, but not too great to be unequal.”

On another front, Congress is debating a “Worst Passengers” list, a nationwide no-fly blacklist to bar unruly fliers.  “But in a perfect world, who else would be prevented from flying?  Chatty or entitled passengers? Babies?” (Elliott Advocacy).  The no-fly list is of cultural interest because it reflects our collective ideas about profiling bad actors.  Close quarters at high altitude make such profiling more critical than it is on the ground.  One would think that suspected terrorists would come first, followed by anger-management failures, then the merely unruly.  Alcoholics, drug addicts, the mentally ill, maybe even the anxious and depressed could follow.  Babies and their behavior included.  Comfort animals other than dogs.  And yes, hygiene-compromised passengers as well.  This could become a long and inclusive list.  Any condition that promotes “disruptive” behavior would be eligible, and that, when you think about it, is a widely distributed trait: anyone who fails to fit “normal” parameters.  Exactly like high achievers, just at the other end of the scale.

Excellence, and the competition for virtuosity it breeds, is the root cause of inequality.  Any effort to separate people based on merited achievement creates an obvious rift: the top 1% versus everyone else, as in the extreme wealth curve.  Sifting for criteria, whether competence- or character-based, is a discriminatory act.  This happens constantly at all levels of behavior, within our own actions and in the way we think about and judge others and their origin groups.  How are we to reconcile Excellence with Equity?

Monday, May 15, 2023

Ranking: Perils and potentials

“Without changing our patterns of thought, we will not be able to solve the problems that we created with our current patterns of thought.”     --Albert Einstein

Bias Part II 


Compare these two stacked curves.  Which is longer?  

This is a classic optical illusion from the nineteenth century: in fact, the two curves are identical.  The illusion vanishes when the figure is rotated upright to the vertical.  The human brain is automatically comparing everything it sees.

Ranking is a human proclivity, and it is all around us.  SEO (search engine optimization) ratings, US News Best Colleges, The Olympics, pro sports and amateur sports, Amazon product reviews, happiness rankings of countries worldwide, employee job applications, political candidates’ approval ratings, reputation polls.  In fact, it is impossible for anyone to examine two objects within the same category without ranking them in some way on some feature.  These can include reputation, performance, brand, cost, design, range of uses, aesthetics, color, size, speed, efficiency, and dozens of other basic aspects.  Think about the time and energy we all expend in comparing ourselves to others.  We compare along these lines and beyond – without having any way of confirming these ratings except a general anxiety about the need to do so.  Our social media scores are a simple example.


Top Ten lists are everywhere and cover everything imaginable, including longest-reigning monarchs, youngest state leaders, no-hitter record pitchers, highest jumpers, most innovative countries, winning tips for college-level essays, video game characters, famous astronauts, hang-gliding champions, chess minds, Nobel Peace Prize winners, teams with the largest stadiums, quickest female Paralympians, and, of course, Best Top Ten lists.  The recent obituary of singer Harry Belafonte ranks him as the first Black Emmy and Tony Award winner, as well as the first artist of any race to sell one million copies of an album (“Calypso,” in 1956). (The Week, May 12, 2023)

Our hourly ruminations consist of searching for clues to our standing compared to others.  Talent, wealth, perception, power, influence, trustworthiness, and romantic interest are all rankings we seek to compete and excel in.  They form dominance hierarchies in every society, and they serve a purpose.  As systems expert Peter Erdi puts it in his book Ranking, “Dominance hierarchies are very efficient structures at very different levels of evolution.  They have a major role in reducing conflict and maintaining social stability…to regulate access to these resources [food and mates].”

Dominance ranking is a great mechanism to maintain the status quo, so that people (and animals in general) have a good idea of where they stand, and where they would like to stand in the future. Dominance goes beyond power, leadership, and authority to include influence, expertise, competence (toward virtuosity), and trustworthiness (a brand of social equity).  Think of writers, athletes, musicians, artists, and inventors and their role as models of prestige.

Emergent properties

Ranking and valuing have their uses.  But what are the emergent properties, the unanticipated outcomes, of ranking competitions?  There are costs.  They begin with the constant need to measure and judge, ending often enough in an ongoing critical evaluation of self as never good enough.  Constant comparison is the essential activity of social media worldwide.

The Zoom screen affords the opportunity, and compulsion, to see oneself alongside others.  The self-critical appraisal of our appearance against others in the screen meeting is one reason remote meetings are as stressful as they are, regardless of the business at hand.  And while we are comparing ourselves to others on dozens of scales, they are doing the same.  No one entirely knows what their score is, but everyone acts as if they do.  Billionaire investor Charlie Munger (Warren Buffett’s business partner) declared, “The world is not driven by greed. It's driven by envy.”

The obsession with determining the best of everything is a form of “virtue bias,” the directive we all share to find a way to agree on rankings for everything from colleges to cars to cappuccinos.  So we curate “best of” lists for everything.  Whatever their standards, and whether those standards rest on tangible and provable truths, these lists take on a life of their own, reinforcing themselves in a self-fulfilling prophecy as the most-cited become the most-desired and best-selling.
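That self-fulfilling dynamic can be sketched as a toy rich-get-richer simulation (my own illustration, not drawn from any cited source): each new buyer picks an item with probability proportional to its past popularity, so an early lead compounds regardless of underlying quality.

```python
import random

def popularity_contest(n_items=3, rounds=10_000, seed=42):
    """Toy rich-get-richer model: each pick is weighted by past picks,
    so whichever item gets cited early tends to snowball into the
    'best-seller' independent of merit."""
    rng = random.Random(seed)
    counts = [1] * n_items  # every item starts with one citation
    for _ in range(rounds):
        winner = rng.choices(range(n_items), weights=counts)[0]
        counts[winner] += 1  # visibility breeds more visibility
    return counts

print(popularity_contest())  # typically one item ends far ahead
```

Run it with different seeds and a different item usually dominates, which is the point: the ranking reflects the history of the contest as much as the quality of the contestants.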

The cost of competition is then passed along to those beneath the top ranks—the second-place to mediocre to loser class.  Which, because so few of us are winners (on one scale, let alone several), means that we are all tarred by the bias against “second-best,” or as a colleague once phrased it, “First Loser.”  That’s not a great-sounding placement, considering all the effort put out to make something of our lives and our reputations.  Just a reminder that talent is not equally distributed.  Neither is the work ethic necessary to maximize that talent.  This is why equality is such a tricky concept to pin down and engineer.  The social contest is not a level playing field, and some of that levelling is under our own control, while the start-points—family, location, culture, ethnicity, wealth, class—are more steeply slanted as well as harder to equalize later in life.

These contests, in operation in all domains of life, are one way to find information useful in making choices and investments of our time, money, and attention.  To this end, we seek out the best possible in schools--including preschools--for our children, politicians who will represent our interests, cars we can rely on to confer status as well as deliver performance, books that will reward the time investment in reading them.  We seek out friends who will enhance our efforts by reinforcing our values, making them worth the precious time invested in socializing.  We hope for college roommates whose good character and work habits will encourage our own school success (as important, some studies show, as the quality of the school attended).  President-to-be Franklin Pierce had such a roommate at Bowdoin College, one who fired up his ambition and work habits. Homes in the most advantaged parts of town we can afford in order to enjoy quality neighbors.  Colleagues to match our interests and our goals and lifestyles. Marriage partner, ditto.  Such preferences are quality-control devices, deployed as systematic bias protection against making poor judgments by our social group.

Ratings are supposed to help us distinguish between good and less effective uses of our resources: time, wealth, energy, reputation.  Life is largely an efficiency game, one we seek to win as often as possible by aiming to win each time.

Outcomes and correctives

When recorded music became available on record and radio, everything else started to sound amateurish, homegrown, or less than professional (John Philip Sousa, consummate composer in many genres, predicted this effect of technology).  The music on the ground, as it migrated onstage, created its own recording traditions that nationalized genres like folk, country, blues, and jazz, leading to their own “best-of” listings.  Belafonte’s signature “Banana Boat Song (Day-O),” a Jamaican work song out of the colonial island fields massaged by studio technologies, headed the charts in 1956.  Songwriters led by “the father of American music,” Stephen Foster, could be rewarded for their talents thanks to copyright and printing advances.

In the workplace, to compensate for the seller’s market in computer talent, companies are starting to adopt “skills-based hiring” to get around degree-based ranking of job applicants.  Applied computer skill doesn’t require the traditional four-year degree or professional title, and can be developed online and at the associate level.  The focus is the distinction between certification and performance, opting for evidence of performance over degree awards.  By the same mentality, merit-based admissions values achievement over race-based pro-bias in college admissions.  Affirmative action continues to be an ongoing debate that pits achievement against adjustment in the cause of balance and fairness.  To erase competition for recognition, Santa Monica High School in California has done away with its Honors program in English as an enabler of inequality—not without concern over lost opportunity for bright students who forfeit this resume benefit.

Even bat-flipping in professional baseball, the practice of tossing the bat in the air to celebrate a home run, is a point of debate.  The practice was long labelled disrespectful to the opposing team and the game itself.  More recently, flipping the bat is being viewed as simple celebratory exhilaration rather than an insult, realigning expectations and allowing for a more expressive game.  Even the slightest ritual carries a bias-based value.

All bias depends on expectations and context, as a culturally constructed virtue or vice.  Nonetheless, from the birth of human society, physical height has been positively correlated with leadership potential and dominance in pecking orders.  Erdi notes that “the desire to achieve a higher social rank appears to be a universal, a driving force for all human beings.”


Saturday, April 15, 2023


Bias, Pro and Anti      

“[Mr. Palmer’s] temper might perhaps be a little soured by finding, like many others of his sex, that through some unaccountable bias in favour of beauty, he was the husband of a very silly woman.” – Jane Austen, Sense and Sensibility (1811)   
Part I  

Expected distortions

Look at the above image of the St. Louis Gateway Arch.  The Gateway is the world’s tallest arch, at 630 feet from ground to apex.  But it is equally wide, 630 feet from base to base.  What we see with our own eyes, however, is its height, not its width, because the brain is preconditioned to this bias, shaped by factors lying below conscious awareness.  These factors systematically bias how and what we think we understand about anything we are looking at.  Including how tall it is.

The arch appears much taller than it is wide because the human brain is systematically biased toward the vertical, seeing lines going upward as longer than horizontals.  This bias rules our common-sense perception across many estimating situations.  It is inbuilt, the kind we should know about from perception studies in order to recalibrate the judgments we make about things in the world.  Determining how things actually are, as well as how they are most likely to end up over time, is also swayed by our human tendency to be wishful rather than wise (James Reason, Human Error).  We must apply conscious attention and evaluation to understand and correct for our natural misperceptions as they distort the real state of the world.

In the same way, culture determines how we view our moral and social world by establishing a long-lived set of values.  Consider another well-known optical illusion: the Shepard tables (source: Wikipedia).

This predictable perceptual bias activates “size-constancy expansion,” the illusory expansion of space with implied distance.  In reality, the tables are the same size, but our unconscious rules of thumb say otherwise.  We have to apply conscious reasoning to understand and correct for our mental distortions—our naturally biased thinking.  It is one of several size-and-distance errors.


Beyond spatial illusions, we think about bias as unfair judgment—aimed improperly or maliciously at people or groups—that results in social injustice and discrimination, and therefore is unjustified and abusive.

However, it is harder to claim bias damage when the same negative, disfavoring bias targets terrorists, pedophiles, mass murderers, fraudsters, criminals, or Nazis (a group that has well and truly been dehumanized).  Can anyone really be blamed for holding negative bias against such bad actors, or accused of injustice for it?  What about bias against cruelty and malfeasance toward animals?  Or the newly identified misuse of wild animals, trees, or the environment in general?

In everyday usage, the term describes an attitude toward people, things, situations, and moral reasoning.  Systematic bias is an overall mental and emotional valence driving decision-making and action, creating outcomes that shape our further decisions and behavior.  Bias is seen as an intolerant and pejorative assessment of others for their behavior and its effects.


Cognitive science, however, gives bias a more general and neutral meaning, with a direction either positive or negative.  The examples begin in the 1970s with Amos Tversky and Daniel Kahneman, who first identified heuristics, or rules of thumb (anchoring, availability, and representativeness), and the thinking biases that drive each one.  “Heuristics and biases” explain why human judgment is consistently less than rational—Herbert Simon’s “bounded rationality.”  Judgment, planning, and action stem from the emotional Automatic System rather than the rational Reflective System, a dialectic proposed by Thaler and Sunstein in 2008.

Positive bias 

So bias is simply a leaning in one direction at the expense of another, a leaning that directs thinking and action, designed to achieve a desired state and avoid an undesired one.

A bias toward waking early to get things done, and against waiting until late in the day, is therefore an achievement technique.  The pro bias is a way to avoid procrastinating and leaving the work schedule too open to interruptions; it implies an aversion to situations that make working toward goals more difficult and less certain of success.  That aversion bias is the natural correlative of the pro bias.  It militates against leaving tasks to later in the day or evening, when energy and willpower tend to lag (dinner and wine being enemies of focused productivity).  The pro bias supplies the original impetus and the anti or aversion bias its counterpart, so the two work in tandem, each reciprocating the other.  The anti-bias has to be understood not alone but as the corollary, the byproduct or outcome, of its pro version.

Choice Architecture is the way our decision-making is framed.  Good choices rely on reliable and solid truth assessment—yet our thinking is systematically shaped, or biased, in directions that favor ideals or images of ourselves (and less favorably of others).  One example is the planning fallacy, familiar to all project managers: the bias toward over-optimism about the time and money a given project will require.  This is a positive bias leading to costly overruns in schedule and budget; even a small home improvement can involve it.  Drivers rate themselves as above average.  Teachers and students inflate their own performance and potential achievements.  Newlyweds believe their marriage will defy the divorce rate of around one in two (Thaler & Sunstein, 2008).  Entrepreneurs, likewise, put their success odds near 90%, whereas half of new businesses fail within five years (BLS).  From the 1950s, psychologists began to acknowledge the futility of assuming that consumers know exactly what they want and the price they should pay for it.

These are illusions, wishful thinking driven by positive bias that leads us to underestimate risk as we overestimate chance and luck in forecasting rewards rather than financial and competitive pitfalls.

Negative Bias

Think of racism, Islamophobic thinking, provincialism, ableism, class prejudice, religious bigotry, gender politics, and ageism.  These don’t flourish in a vacuum, but are natural outcomes of our human tendency to favor and select for ourselves and our home group—blood ties and extended family--over other groups (“Charity begins at home,” one of my favorite aphorisms).  This emotional edict is at the heart of all group cohesion.  G. K. Chesterton reflected that “The true soldier fights not because he hates the soldier in front of him, but that he loves the country behind him.” 

What we think of as negatively directed bias is the flip side of a positive approach, a preference for the ideal state of things – the “should” of a cultural outlook.  This emotional valence is a preference for the safety and familiarity of the known social universe, hard-wired over time.  It is an example of the “bounded rationality” proposed by Herbert Simon – the cognitive limitations imposed by context, the brain, experience, information access, and memory, all invested with strong emotional biases based on big values.  This concept can explain why, despite the mandate of DEI diversity programs, we don’t actively seek out the diverse or aberrant in our search for family, friends, and colleagues, preferring the control of private spaces to public ones.

Our home-base preference, rather than any active antipathy toward others unlike ourselves, gives rise to what looks like anti-bias.  It helps to recall how much time and attention the maintenance of simple socializing with family, coworkers, and friends requires, leaving little for people unrelated to us by these roles.  When we go on vacation trips in-country or abroad, to see new sights, dine on new foods, and people-watch, our close family circle travels with us.  And consider the ever-increasing pressures on our scarce available time that make even family time ever more difficult to find.  One survey reports that the average family spends 37 minutes of “quality time” together on weekdays, one reason families must break out of their routines for time together on vacation.

Understanding these preferred states helps profile our “bad” biases as the consequence of the “good” or virtuous bias that makes us human—a shared thinking style that defines our culture as the main influencer of the daily choices we make about who gets our care and attention.  This approach redefines bias away from rational fallibility or moral failing, seeing it instead as the outcome of our evolution as highly social creatures—creatures who are also highly territorial about social as well as physical and mental space.


Sunday, March 12, 2023

What Is Digital Literacy?

Photo by Pixabay

 “It’s not computer literacy that we should be working on, but sort of human-literacy.  Computers have to become human-literate.” 

--Nicholas Negroponte

   Architect, MIT Media Lab founder


I can recall how submissions to journals worked before the internet era.  The author would submit by mail (or rarely, fax), the text was read and evaluated, and you were either in, out, or in for a revision.  Then there is citation style, of which academic writing has several: APA, MLA, Chicago, and others.  Each has a hefty style guide, and each can take years to truly learn for fluent use.

But these matters were taken care of in-house by the editorial staff, who were clear on what they wanted to see for the final stages.  Digital intelligence is now allowing—make that demanding—that we feed information to programs specialized in resumes, Social Security, tax filing, remote learning, mortgages, and publishing.  In publishing, authors are seeing a major energy transfer to these programs.  The digital effect is layering on an entire new set of skills to the heavy labor of writing and to finally getting manuscripts accepted.  

Move up to the current practice, which is to require the author to fill in a very detailed series of files and boxes, shifting many editorial tasks back to the hopeful submitter.  I sense that this means a work transfer, or mission creep, to the writer, who slowly but surely is taking on this job.  After all, the author needs the publisher much more than vice versa—which has always been the case.  Except that now there is a way to draw the work from author time and attention, away from the desks of whatever in-house editors remain active.  The system expects me to become, without training, part of the editorial process, all without benefit of any consultation with the in-house team.  In effect, I am preparing my own material for review, revising from the review results, then checking dozens of boxes just to meet the digital standard for publication.

For example, because of the required on-screen formats, I had to stop the process many times to rewrite several sections to comply with word counts, formatting, the style manual, file renaming, or other content, such as figure captions, calling for revisions.  One of these was the abstract, the most difficult job on the list for any article, presentation, or dissertation.  While a previous instruction had called for “a short abstract,” when the time arrived to upload it, the limit was no longer my 250 words but a narrower 100.  This news called for a total rewrite, taking several hours, and a list of similar changes consumed several more hours over more than three days.  Quite a lot to ask for a “single-use” task.  Then there is style-handbook compliance – in this case Modern Language Association (MLA) style, 9th edition, a tome 367 pages long, governing documentation both within the text and as notes at the end of the article.  But MLA is not my normal citation style, so add that learning curve (and time burn) into the equation.

The same skill demands of automation explain why a CV must now be completely dismantled and reassembled for each customized job application, including course titles and exact dates (day and month as well as year) for certificates of graduation, instructors, grades, locations, and other data that can reach back many decades, proving difficult and time-consuming to reconstruct or validate.  Even the thought of reformulating a resume dozens or hundreds of times must pose a major demotivator to job-hunting.  This outsourcing of finding and entering information is not optional; the incentive is to comply or lose out.

The stakes couldn’t be higher. Digital competence is an assumed skill—but for some, it’s not self-evident how to acquire this toolkit in order to practice it.  And what exactly is the standard of practice?  And how, when, and why does this expectation determine what is demanded, and in which arenas?  In sum, how can this skill be measured?

UNESCO defines a world-wide standard for digital literacy as “The ability to access, manage, understand, integrate, communicate, evaluate, and create information safely and appropriately through digital technologies for employment, decent jobs, and entrepreneurship.”  The best way to understand this enlarged view of literacy is to compare it to the functional version: “The ability to read a newspaper, sign a check, and write a postcard.”  This is now merely the baseline for the digital-age literacy test.  New challenges are always emerging, in an endless learning curve.  This makes literacy a constantly moving target, even for the highest elite.        

National digital illiteracy rates persist.  The US Department of Education reports that computer literacy is another basis of unequal opportunity across ethnic divides, with 11% of White, 22% of Black, and 35% of Hispanic adults less than fluent in digital media.  Even 5% of those with associate degrees aren’t digitally literate, rising to 41% of those without high school diplomas.  The digital divide still blocks universal access (Rockefeller Institute of Government, July 2022).

Moving on to the “blind review” process, I had to “anonymize” most of the content, a strange ritual of removing anything linked to my name from anything linked to my work, to shield it from reviewers’ eyes.  This was a skill I didn’t have and hadn’t needed—until now.  It meant completely omitting key content that would have given away my identity.  There was no way of working around these statements—they had to go.  Yet the deleted passages would have explained why I was submitting to this particular journal rather than any other, a key point of the rationale for the article: that it is a follow-up to my previous, now widely cited article, published in the past century. *

In effect, the uploading task amounts to learning new software for a single operation.  The same goes for thesis and dissertation projects, which impose a high demand for mastery of a documentation system that too often gets applied just once—yet must be handled skillfully enough to pass and graduate.  The uploading operation itself is a self-taught process with no real way of knowing what will be asked for, or why.  All this effort is applied atop the already “sunk cost” (a term from economics) of months or even years of writing and research.  It’s distressing to wonder whether this submission process reduces the chances of the less digitally literate being published.  From my own experience, there is no question that this dynamic actively favors the tech-literate.  And as a colleague in the data world puts it, what’s being tested for is compliance over competence.

Seeking out an equalizer, I was able to recruit a long-time colleague, an excellent “explainer” and recently retired software engineer.  “I’m sure if you had cast your annoyance aside momentarily you could have easily done the same [anonymizing a document],” he noted.  In fact, there is a relatively simple set of steps to remove “Author” from the Track Changes program.  You just need to know where to look.   
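For the curious, the scrub can even be done without Word's menus.  The sketch below is my own illustration (the function name is invented), using only the Python standard library: a .docx file is simply a zip archive of XML parts, and the visible author fields live in docProps/core.xml.  A full anonymization would also need to blank the w:author attributes that tracked changes and comments leave in word/document.xml.

```python
import re
import zipfile

def anonymize_docx(src, dst):
    """Copy a .docx, blanking the author metadata in docProps/core.xml.
    src and dst may be file paths or file-like objects."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "docProps/core.xml":
                text = data.decode("utf-8")
                # Empty out the creator and last-modified-by elements.
                for tag in ("dc:creator", "cp:lastModifiedBy"):
                    text = re.sub(rf"(<{tag}>).*?(</{tag}>)", r"\1\2", text)
                data = text.encode("utf-8")
            zout.writestr(item, data)  # copy every other part unchanged
```

The ritual turns out to be mechanical once you know where the metadata lives, which was exactly my colleague's point.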

Like productivity expert David Allen, who admits to being “semi-literate” in his classic Getting Things Done, I must concede that this status is just not enough anymore. David Herlich, my coach that night, agrees, up to a point.  He created a personal consultation service that aims to explain the complexities of sports to brand-new participants.  He told me I was just like many of the people he has met and hopes to serve.  “I didn’t really do anything,” he says, “except help you see what you could already do.”  It is the frame of mind, not knowledge, that blocks performance.  That insight fuels learning as the discovery of one’s own powers. 

And yes, the Internet helps.  But there is always more than one answer to any question, which raises the problem of distinguishing among the answers to pick the one to go with.  You never really know if you got that right without an explainer with an expert perspective.  


*“Disneyland and Walt Disney World: Traditional Values in Futuristic Form,” Journal of Popular Culture, Summer 1981. 

Friday, February 10, 2023

The Emotional Journey of Uncertainty

“Knowledge is an unending adventure at the edge of uncertainty.”

                                                --Jacob Bronowski, Polish-British mathematician 

Earth vs. Venus

Position from sun:       2nd (Earth)      3rd (Venus)
Length of day:           24 hours         5,832 hours
Length of year:          365 days         225 days
Moons:                   1                0
Average temperature:     59 F             864 F
Diameter:                7,926 miles      7,520 miles

“Destination Venus,” Nat Geo Kids, Feb. 2023, p. 20                    Photo: Pixabay

I have read National Geographic, and the Kids edition, for years. I find the children’s editions of more than one periodical to be fun, direct, timely, and a quick index to what is going on in popular culture.  Grade-school textbooks work on the same principle: they need to get to concepts and themes quickly and can’t do the kind of context-building and nuance that adults can tolerate.  So they are a better guideline in several ways.  And usually factual.  But not always.

Primates (that’s us) are primarily creatures of emotion.  We are emotional beings first, rational only second.  This is why emotion needs to be “untaught”: as children we learn to restrain and hide our feelings.  Rational skills (writing, math, spelling, science, accounting, engineering, bridge) are trained; if they were intuitive, we’d all be whizzes at them.  And we don’t understand our own emotional lives all that well, just well enough to make social judgments about what’s appropriate when, where, and with whom.  This is the point Daniel Goleman makes in his book Emotional Intelligence.  Dale Carnegie put it this way: “When dealing with people, remember that you are not dealing with creatures of logic but with creatures of emotion--creatures bristling with prejudice and motivated by pride and vanity.” 

And creatures whose rational faculties are far more limited than their emotional ones.  So I observed while reading an otherwise great article about the planet Venus written for kids.  I saw something curious on the chart comparing Earth to Venus: “Position from the sun: Earth 2nd, Venus 3rd.”  I read this statement again, then once more.  Thus began my Journey into Uncertainty.  Isn’t earth the “third planet from the sun”?  I began to think about this.  But isn’t National Geographic among the most trusted sources on earth?  Could the planets, without my knowledge, have somehow changed positions?  The article also notes that any visitor to Venus would burst into flame at an average temperature of 864 degrees F, or be crushed by the planet’s intense pressure.  Or maybe the Venusian orbit had distorted to move outside earth’s? 

The Uncertainty Journey       Case Study: “Destination Venus,” National Geographic Kids, February 2023, pp. 20-21

Questioning:  Is this true – is Venus really third planet from the sun, and earth second? I certainly thought it was the other way around.  For my entire lifetime.

Denial:  This can’t be true.  We’d all be fried or crushed.

More questioning:  Would we?  Did the planets trade places because of some orbital switch-out?

Sense-making:  This just makes no sense; it doesn’t line up with anything else I know.

Investigation:  I’ll look this up online, then send off a query to the magazine. 

Outcome:  National Geographic:  Oh, you’re right!  We messed up that fact. Thanks for reading so closely. 

Further questioning:  How did this happen?  And my favorite question as a former editor: “How many people looked this over at the editorial offices?”  And then my next-favorite: “What else did they miss?”  This is a relatively wide error: about 26 million miles off, measured against earth’s 93 million miles from the sun.  Astronomy’s measures are based on the AU, the astronomical unit, defined as earth’s average distance from the sun.  Moving earth to the #2 orbit, as this error does, would change the very base value of the AU, with a long train of side errors following from it. 

I couldn’t find how the second and third planets got switched.  So I contacted NGeoKids.  Here is what I asked the editors: “Isn’t earth the third planet, not the second, from the sun?  Has the usual order changed for some reason?  What is the effect of this change on the AU basis of astronomy—the astronomical unit?”  

The editors readily admitted the mistake. Here’s what they had to say: “We did indeed accidentally swap the sun positions for the planets.  Thank you for reaching out and for reading NGeoKids so carefully!”   Wow.  So the universe has been restored.   Does this make anything better, though?  Does this mean National Geo is depending on its readers for fact-checking?  This isn’t really reassurance – just one more piece of evidence that in the search for truth, constant vigilance must be the rule.  

Perhaps this points to two operating uncertainty principles. 1) We are slow to question information that looks self-assured and authoritative, even when we feel fairly sure it is in error; 2) Perhaps if we questioned factual statements more often, it would serve to keep facts on track and lend some confidence to the knowledge we rely on.  However, we can’t constantly be questioning the truth of every statement.  To operate day-to-day, we assume that 99% of factoids are reliable.  That’s because we can’t live in a world we don’t trust.  This is Uncertainty Avoidance. 

Human beings don’t like uncertainty because we don’t know what to think about uncertain situations, nor how to make decisions and act on them.  This is why we make up stories, “facts” to fill in the gaps.  We just can’t leave unsure things alone.  Not for more than a minute or two.  Consider this headline about a P-51 Mustang pilot in The Week (not the Kids’ version) (Feb. 10, 2023, p. 35): “The Tuskegee Airman Who Escaped a Lynching.”  My initial take was that this obituary for Harold Brown, age 98 and one of the last of his unit, was going to be about racial prejudice in the American South.  Wrong.  Reading the copy, I learned the lynch mob was in fact Austrian, confronting Brown in the last months of WWII after he was shot down there.  Another surprise: it was a police officer who saved Brown, who was then “sent to a prison camp—his first experience of integration.”  The truth filled in because I kept reading.

The nice thing about knowledge is that errors of fact can be corrected by digging deeper when the red flags appear.  Vancouver, Canada isn’t the capital of anything: it may be the primary city of British Columbia, but the provincial capital is Victoria, on Vancouver Island.  That was a wrong answer I was part of making, a victim of team groupthink, on a pub quiz question, and I was just returning from a week’s trip there; the shame of it still haunts me.  Here is another: a worldwide count of married people that ends with an odd number?  Not sure about that one, but it could reflect multiple husbands or wives; check whether the figure counts couples or individuals.  Then, on entering a medical office last week, I was handed a fill-in form in English; the small lady beside me was handed another in Chinese, without being asked.  Her reaction was amused (it could well have been otherwise) as she explained she was Vietnamese. 

Venus does have the most volcanoes in our solar system: something over 1,600.  Its rotation runs in the opposite direction from ours, and from most planets’, called retrograde motion.  NASA’s VERITAS mission in 2028 will orbit the planet and map its terrain using radar.  The European Space Agency’s EnVision mission in 2032 will map the sub-surface.  And perhaps both will confirm its position at 67 million miles from the sun, compared to ours of 143 million miles…. Did I say 143?  I meant 93, of course. 143 million is the average distance for Mars, as everyone knows, the 4th planet from the sun.  It’s easy to get confused.  That’s why every person needs to be their own fact-checker.  And that is often a research-project-level demand.  But I could not resist restoring the solar system to its usual and correct order: the one I know and love. 
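The mileages above also invite a quick fact-check of their own.  Converting them into astronomical units takes one line of arithmetic apiece; this little sketch uses only the round figures quoted in this post, not precise ephemeris values:

```python
# Average distances from the sun, in miles, as quoted in the text.
# 1 AU (astronomical unit) is defined by earth's average distance.
EARTH_MILES = 93_000_000
planets = {"Venus": 67_000_000, "Earth": 93_000_000, "Mars": 143_000_000}

for name, miles in planets.items():
    print(f"{name}: {miles / EARTH_MILES:.2f} AU from the sun")
# Venus: 0.72 AU from the sun
# Earth: 1.00 AU from the sun
# Mars: 1.54 AU from the sun
```

The ratios land close to the familiar textbook values (Venus about 0.72 AU, Mars about 1.5 AU), which is exactly the kind of cross-check that would have flagged the swapped positions in the magazine.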

Tuesday, January 17, 2023

Building a Brain

Now routinely cited as the father of modern computing, Alan Turing was always focused on the interplay between human thought processes and machine programming.  In the early 1940s he was talking with colleagues about “building a brain” (Alan Cowell, AI, NYTimes, 2020).  In 1950 he proposed the Turing test, in which a machine tries to answer questions so convincingly that a human judge cannot tell its responses from a person’s.  Approaching that standard required deep learning, with humans tutoring computers on how to think like us, millions of hours per day, in computer centers all over the planet.  The idea is to take computers to a level where, like humans, they become self-teaching entities.  The hope is that they can also learn to reason, perhaps better than we do. 

In a recent Atlantic piece, Adam Kirsch examines developments in brain research that propose the potential of uploading the complete human mind.  Such an operation would involve a brain scanner able to detect and record an information pattern called the “connectome,” the mapping of the brain’s neural connections through the synapses across all its many levels and modes.  All human cognition is created by these dynamic interactions.  This map, the wiring diagram of the brain’s workings, is analogous to the human genome.  This would be an artificial reality for thought, emotion, and reasoning that could replicate the thinking / feeling / experience of a total brain – almost more real than real—or at least a resource to connect human sense-making with machine learning.

An uploaded mind won’t dwell in the same environment as we do, but that’s not necessarily a disadvantage.  On the contrary, because a virtual environment is much more malleable than a physical one, an uploaded mind could have experiences and adventures we can only dream of, like living in a movie or a video game (“The End of Us,” Atlantic, Jan/Feb 2023, pp. 64-65).

This complete artificial intelligence, using every affordance of human thinking, is capable of a powerful merging of human with machine intelligence.  In the investment world, AI has disclosed the potential of computer intelligence that is superior to human hunches about the market and tracking its movements.  This intelligence is based on projecting the past, in fine-grained detail, into the future, incorporating multiple factors beyond the ability of even the best investors to recognize and trace.  “The challenge facing the investment world is that the human mind has not become any better than it was a hundred years ago …the time will come when no human investment manager will be able to beat the computer” (Institutional Investor’s Alpha, online journal for the hedge fund industry).

However, the brain is organic, and its structures and dynamics are not computer programs.  While a computer can win against the best human players at chess, Go, and even Jeopardy, we have yet to see computer programs perfect self-driving cars, body mobility, long-term planning, or hiring decisions. Herbert Simon, the political scientist who coined the term “bounded rationality” (1957), did so to counter the economics model of the completely rational brain (“rational man”) making purely rational money decisions.  But Simon’s term can also describe the limitations of machines in achieving artificial general intelligence: as machines, they are severely limited in replicating human cultural and common sense, cause and effect, symbolic recognition, implication-finding, future projection, and decision making.   This is the reason the simplest ideal image of enhanced human thinking is a human being using a calculator.  The interactive power of the digital plus the neural appears to offer the best promise of enhanced decision making, based on what each does best.

A few facts about the brain here.  One of the problems: no grand unified theory of intelligence yet exists, and it requires mega-computing power even to approach simulating many of the general-intelligence scenarios we take for granted, such as meeting new people, learning a new language, telling jokes, handling a crisis (mental or physical), and dealing with the unknown outcomes of a new decision.  Any change or experience involves thousands of neurons from our store of 86 billion in the brain, with a potential of 100 trillion interconnections.  The European Union launched the Human Brain Project in 2013 with the goal of a complete simulation of the entire human brain by 2023 (Epstein, “The Empty Brain,” Aeon, 2016).  This has yet to be achieved. 

That is because the human cognition system is not just an information processor but far more layered and interactive as a sophisticated universe of connected thinking and emotion.  This includes informal logic, seeing the viewpoints of others (theory of mind), understanding implications, nuance, multiple interacting variables, modes and layers of reality, and hyperreality.  Even a three-year-old’s cognition outstrips the capacity of sophisticated computer programs to read cultural reality. 

Notes Cade Metz, writing on the use of AI in medicine (AI, 2020) on current state-of-the-art issues: “Able to recognize patterns in data that humans could never identify on their own, [computer] neural networks can be enormously powerful in the right situation.  But even experts have difficulty understanding why such networks make particular decisions and how they teach themselves.”

No computer program has yet been able to replicate the activity and accomplishments of human neural networks: the thousands of neurons involved in change, experience, and memory that humans engage instinctively but that must be taught to computers (by humans) as deep learning.  Computers operate by fixed focus on well-defined tasks; at the other end of the scale, humans use the whole brain (the model that Whole Brain Emulation in machines tries to follow) to deal with change, adaptability, and problems we’ve never encountered before, in situations that are also unique, with incomplete information and unknowable outcomes.  Ever since we first emerged as homo sapiens, we’ve been trying to find ways to understand our own intelligence and the brain that centers it.

Speech engines are one such attempt, a means to understanding natural language through voice recognition and translation. Language is a complex program in itself, like the brain, with multiple modes, levels, rules, and styles depending on purpose, context (both written and spoken), and the social relations involved.  Because of this complexity, interpreting what a speaker intends requires a breadth of expression and meaning that stalls the computer, while the nimble brain fills in all the gaps creatively.  Deep neural networks are now showing greater sophistication in facing down the complexity of language analysis through machine learning.


Image from Pixabay

Saturday, December 17, 2022

Easy Languages

 Photo from Pixabay

Follow-up to “Hard Languages,” November 13, last month’s topic.

What is an easy language for English speakers to approach and immerse in?  Since language is such a basic key to culture, familiarity or fluency has a great enabling effect, opening up an entire cultural dimension, whether in one country or across a wide cultural empire (as Spanish, French, Portuguese, or Arabic provide).  The issue is the time and exposure needed to achieve the needed level of comfort and speed in sending and receiving, that is, speaking and decoding.  The artificial intelligence revolution itself was jump-started by the US government’s goal of developing a machine program that could learn to translate and transcribe natural language.

Babies under six months can distinguish the speech sounds of any language in the world.  But the brain soon begins to focus on a single language and its sound contrasts, and starts to ignore distinctions less important in that language.  Young children can learn two languages equally well.  The window for learning any language seems to be about 12 years; beyond that, language acquisition doesn’t map well onto the maturing brain as its patterns become set (Linden, The Accidental Mind, 2007). Acquiring everyday facility in a language is one thing.  Mastering its nuances, its cultural structures, is quite another, involving a long process of immersion and practice in context.  That is the principle behind the shibboleth, a difference in pronunciation that separates native insiders from outsiders.

Languages within the same language family are typically the easiest to learn because of familiar cognates (roots in common), grammar, written form (Latin alphabet), conjugation rules, tonality, and pronunciation. For English, that is West Germanic. This branch includes English, German, Dutch, Afrikaans, and Yiddish. 80% of the most-used English vocabulary, and the grammar, is Germanic.  The larger family grouping is Indo-European, spoken by the largest percentage of speakers worldwide—close to half.  English worldwide has 1.5 billion speakers, of which just under 400 million (about a quarter) call it their native language.  And for non-native speakers from other language families, English is not an easy acquisition.  

Selecting a new language also depends on its useful cultural position:  where the language is spoken, how widely distributed, and its global media influence.   Non-European languages that use the Latin alphabet, like Malay and Swahili, are cases in point.  Malay is the lingua franca across several southeast Asian countries; Swahili is the trading language of East Africa (as a second language) with a rich Arabic vocabulary, sharing our Latin alphabet.  Indonesian also uses Latin script and has a simple grammar. 

Swahili developed as the lingua franca of trade beginning around the first century BCE and today serves some 50 million people as the national language of Kenya and Tanzania.  Influenced by Arabic (Swahili means “coastal”), it is widely used in Uganda, Burundi, the DRC, and the islands of Zanzibar and Comoros; the standard version is based in Zanzibar City.  Because its pronunciation is regular and its alphabet Roman, Swahili is one of the easier “exotic” languages to approach and acquire.  It has a wide range across several cultures and a long history. It can also be heard in southern Ethiopia and Somalia, northern Zambia and Mozambique, and even Madagascar (Lonely Planet phrasebook, 2008).

Then there is the cultural aspect:  what does the language afford as access to the richness of history, literature, religion, art traditions, and connections with other cultures within the language and beyond?  French and English have been historically important in the West because of their status and portability in diplomacy.  As the world turns increasingly toward the Eastern cultural dimension (India, China, Japan), that balance is shifting from the Atlantic to the Pacific. 

Proximity to English is one index of easiness.  Frisian is the most similar to English but has just a half-million speakers in northwest Europe.  Spanish, however, has over 534 million speakers worldwide and is the official language of 21 countries.  English speakers already have the greatest range, since English serves as the language of business, science, and world politics in the form of “Globish,” basically the universal auxiliary language.  A legacy of the British Empire, it is already the official language of 29 countries.  Considering the time-intensive demands of learning a completely new tongue, there is little incentive to acquire one. From an English-speaking perspective, most Romance and other Indo-European languages take about 600+ hours to learn, while tonal languages or those from the Sino-Tibetan family can take 2,000+ hours (ScienceABC). 

Unless language links you to your family’s heritage. Our research director has become an Italian “citizen living abroad” (in the US for now) through his mother’s ancestry, an option that several other countries (like Ireland and Mexico) are introducing with the goal of attracting Americans, and their incomes, back to the mother country.  The European Union opens the borders dramatically, since citizens of one member country can live and work in any of the current 27. 

Some of these languages are close to English (like the Germanic family members Frisian, Dutch, Norwegian, and Swedish); others seem far afield (Romanian, Afrikaans, Indonesian) (FSI source).  Of course there are also constructed languages and ancient languages that are mostly academic, unspoken, or extinct, like Gothic.  These open out onto other cultural worlds, peoples, and histories, a kind of hyperreality across time. Some ancient languages are still spoken or written today, or are direct ancestors of languages spoken today.  Modern Greek is the easiest to learn with a non-Latin script (one already familiar through science) and a basic medium of Western Civ.  A more familiar example is modern Hebrew, based on the ancient model but updated.



The US Foreign Service Institute, which began its mission of language training after WWII to prepare in-country staff, has been a good source of language manuals and tapes, available free online.