Sunday, March 12, 2023

What Is Digital Literacy?

Photo by Pixabay

 “It’s not computer literacy that we should be working on, but sort of human-literacy.  Computers have to become human-literate.” 

--Nicholas Negroponte

   Architect, MIT Media Lab founder


I can recall how submissions to journals worked before the internet era.  The author would submit by mail (or rarely, fax), the text was read and evaluated, and you were either in, out, or in for a revision.  Then there are the citation styles, of which there are several in academic writing: APA, MLA, Chicago, and others.  Each has a hefty style guide, and each can take years to learn well enough for fluent use.

But these matters were once taken care of in-house by the editorial staff, who were clear on what they wanted to see at the final stages.  Digital intelligence is now allowing—make that demanding—that we feed information to programs specialized in resumes, Social Security, tax filing, remote learning, mortgages, and publishing.  In publishing, authors are seeing a major transfer of labor to these programs.  The digital effect is layering an entire new set of skills onto the heavy labor of writing and of finally getting a manuscript accepted.

Move up to the current practice, which is to require the author to fill in a very detailed series of files and boxes, shifting many editorial tasks back to the hopeful submitter.  I sense that this amounts to a work transfer, or mission creep, onto the writer, who slowly but surely is taking on the job.  After all, the author needs the publisher much more than vice versa—which has always been the case.  Except that now there is a way to draw the work from the author’s time and attention, away from the desks of whatever in-house editors remain.  The process expects me to become, without training, part of the editorial operation, without benefit of any consultation with the in-house team.  In effect, I am preparing my own material for review, revising from the review results, then checking dozens of boxes just to meet the digital standard for publication.

For example, because of the required on-screen formats, I had to stop the process many times to rewrite several sections to comply with word counts, formatting, the style manual, file renaming, or other content calling for revisions, like the figure captions.  One of these was the abstract, the most difficult job on the list for any article, presentation, or dissertation.  While a previous instruction had called for “a short abstract,” when the time came to upload it, the limit was no longer my 250 words but a narrower 100.  This news called for a total rewrite, taking several hours.  A list of similar changes in the process consumed several more hours over more than three days.  Quite a lot to ask for a “single-use” task.  Then there is style handbook compliance—in this case the Modern Language Association’s MLA Handbook, 9th edition, a tome 367 pages long, governing documentation both within the text and in the notes at the end of the article.  But MLA is not my normal citation style, so add that learning curve (and time burn) into the equation.

This kind of automation-driven skill demand is also why a CV must now be completely dismantled and reassembled for each customized job application, including course titles and exact dates (day and month as well as year) for certificates of graduation, instructors, grades, locations, and other data that can reach back many decades, proving difficult and time-consuming to reconstruct or validate.  Even the thought of reformulating a resume dozens or hundreds of times must pose a major demotivator to job-hunting.  This outsourcing of finding and entering information is not optional; the incentive is to comply or lose out.

The stakes couldn’t be higher. Digital competence is an assumed skill—but for some, it’s not self-evident how to acquire this toolkit in order to practice it.  And what exactly is the standard of practice?  And how, when, and why does this expectation determine what is demanded, and in which arenas?  In sum, how can this skill be measured?

UNESCO defines a world-wide standard for digital literacy as “The ability to access, manage, understand, integrate, communicate, evaluate, and create information safely and appropriately through digital technologies for employment, decent jobs, and entrepreneurship.”  The best way to understand this enlarged view of literacy is to compare it to the functional version: “The ability to read a newspaper, sign a check, and write a postcard.”  This is now merely the baseline for the digital-age literacy test.  New challenges are always emerging, in an endless learning curve.  This makes literacy a constantly moving target, even for the highest elite.        

National digital illiteracy rates persist.  The US Department of Education reports that across ethnic divides, computer literacy is another basis of unequal opportunity, with 11% of White, 22% of Black, and 35% of Hispanic adults less than fluent in digital media.  Even 5% of adults with associate degrees aren’t digitally literate, rising to 41% of those without high school diplomas.  The digital divide still blocks universal access (Rockefeller Institute of Government, July 2022).

Moving on to the “blind review” process, I had to “anonymize” most of the content, a strange ritual of removing anything linked to my name from anything linked to my work, to shield my identity from reviewers’ eyes.  This was a skill I didn’t have and hadn’t needed—until now.  It meant completely omitting key content that would have given away who I am.  There was no way of working around these statements—they had to go.  Yet the deleted passages would have explained why I was submitting to this particular journal rather than any other, a key point of the rationale for the article: that it is a follow-up to my previous one, now widely cited, published in the past century. *

In effect, the uploading task amounts to learning new software—for a single operation.  The same goes for thesis and dissertation projects.  They impose a high demand for mastery of a documentation system that too often gets applied just once—and the student must be skilled enough at it to pass and graduate with the degree.  The uploading operation itself is a self-taught process with no real way of knowing what will be asked for—or why.  All this effort is applied atop the already “sunk cost” (a term from economics) of months or even years of writing and research.  It’s distressing to wonder whether this submission process reduces the chances that the less digitally literate will be published.  From my own experience, there is no question that this dynamic actively favors the tech literate.  And as a colleague in the data world puts it, what’s being tested for is compliance over competence.

Seeking an equalizer, I recruited a long-time colleague, an excellent “explainer” and recently retired software engineer.  “I’m sure if you had cast your annoyance aside momentarily you could have easily done the same [anonymizing a document],” he noted.  In fact, there is a relatively simple set of steps for removing the author name from Word’s Track Changes.  You just need to know where to look.
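For readers inclined to peek under the hood, the same scrubbing can even be scripted.  A .docx file is just a zip archive, and inside it the file word/document.xml tags every tracked change and comment with a w:author attribute.  A minimal Python sketch—the function name and placeholder are mine, not part of any standard tool—shows how mechanical the job is once you know where the names live:

```python
import re

def scrub_authors(xml_text, placeholder="Anonymous"):
    # Replace every w:author="..." attribute (used by Word's Track
    # Changes and comments) with a neutral placeholder name.
    return re.sub(r'w:author="[^"]*"',
                  'w:author="' + placeholder + '"', xml_text)

# A fragment of the kind found inside a .docx's word/document.xml:
sample = '<w:ins w:id="1" w:author="Jane Doe" w:date="2023-03-01T00:00:00Z">'
print(scrub_authors(sample))
# <w:ins w:id="1" w:author="Anonymous" w:date="2023-03-01T00:00:00Z">
```

Word itself offers the same result through its own review settings, of course; the point is only that anonymizing is a lookup problem, not a mystery.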

Like productivity expert David Allen, who admits in his classic Getting Things Done to being “semi-literate,” I must concede this status is just not enough anymore.  David Herlich, my coach that night, agrees, up to a point.  He created a personal consultation service that aims to explain the complexities of sports to brand-new participants.  He told me I was just like many of the people he has met and hopes to serve.  “I didn’t really do anything,” he says, “except to help you see what you could already do.”  It is the frame of mind, not knowledge, that blocks performance.  That insight fuels learning as the discovery of one’s own powers.

And yes, the Internet helps.  But I’ve noticed that there is always more than one answer to any question, raising the problem of deciding which answer to go with.  You really never know if you got that right—without an explainer with an expert perspective.


*“Disneyland and Walt Disney World: Traditional Values in Futuristic Form,” Journal of Popular Culture, Summer 1981.

Friday, February 10, 2023

The Emotional Journey of Uncertainty

“Knowledge is an unending adventure at the edge of uncertainty.”

                                                --Jacob Bronowski, Polish-British mathematician 

             Earth                     vs.                     Venus

             2nd               position from sun               3rd
             24 hours          length of day                   5,832 hours
             365 days          length of year                  225 days
             1                 moons                           0
             59 F              average temperature             864 F
             7,926 miles       diameter                        7,520 miles

“Destination Venus,” Nat Geo Kids, Feb. 2023, p. 20                                                                   Photo: Pixabay

I have read National Geographic, and the Kids edition, for years. I find the children’s edition of more than one periodical to be fun, direct, timely, and a quick index to what is going on in popular culture.  Grade-school textbooks are a good example of this principle.  They need to get to concepts and themes quickly and can’t do the kind of context-building and nuance that adults can tolerate.  So they are a better guideline in several ways.  And usually, factual.  But not always.

Primates—that’s us—are primarily creatures of emotion.  We are first emotional beings, only secondarily rational.  This is why emotion has to be “untaught”: as children we learn to restrain and hide our feelings.  Rational skills—writing, math, spelling, science, accounting, engineering, bridge—are trained; otherwise they would be intuitive, and we’d all be whizzes at them.  And we don’t understand our own emotional lives all that well, beyond making social judgments about what’s appropriate when and where and with which other people.  This is the point Daniel Goleman makes in his book Emotional Intelligence.  Dale Carnegie put it this way: “When dealing with people, remember that you are not dealing with creatures of logic but with creatures of emotion--creatures bristling with prejudice and motivated by pride and vanity.”

And creatures whose rational faculties are far more limited than their emotional ones.  So I observed while reading an otherwise great article about the planet Venus written for kids.  I saw something curious on the chart comparing Earth to Venus: “Position from the sun—Earth 2nd, Venus 3rd.”  I read this statement again, then once more.  Thus began my Journey into Uncertainty.  Isn’t earth the “third planet from the sun”?  I began to think about this.  But isn’t National Geographic among the most trusted sources on earth?  Could the planets, without my knowledge, have somehow changed positions?  The article also notes that any visitor to Venus would burst into flame at an average temperature of 864 degrees F or be crushed by the planet’s intense pressure.  Or maybe the Venusian orbit had distorted to move outside earth’s?

The Uncertainty Journey       Case Study: “Destination Venus,” National Geographic Kids, February 2023, pp. 20-21

Questioning:  Is this true – is Venus really third planet from the sun, and earth second? I certainly thought it was the other way around.  For my entire lifetime.

Denial:  This can’t be true.  We’d all be fried or crushed.

More questioning:  Would we?  Did the planets trade places because of some orbital switch-out?

Sense-making:  This just makes no sense; it doesn’t line up with anything else I know.

Investigation:  I’ll look this up online, then send off a query to the magazine. 

Outcome:  National Geographic:  Oh, you’re right!  We messed up that fact. Thanks for reading so closely. 

Further questioning:  How did this happen?  And my favorite question as a former editor: “How many people looked this over at the editorial offices?”  And then my next-favorite question: “What else did they miss?”  This is a relatively wide error—about 26 million miles off (compared to earth’s 93 million).  Measures in astronomy are based on the AU, the astronomical unit, defined as earth’s distance from the sun.  Switching earth to the #2 orbit—as this error does—would change the very base value of the AU, with a cascade of side errors that would surface downstream.
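The size of the slip is easy to check with a few lines of arithmetic (the figures here are the standard rounded averages, 93 million miles for earth and 67 million for Venus):

```python
EARTH_MILES = 93_000_000   # average earth-sun distance; the basis of the AU
VENUS_MILES = 67_000_000   # average Venus-sun distance

# How far off the swapped chart is, in miles and in AU terms
error_miles = EARTH_MILES - VENUS_MILES
venus_in_au = round(VENUS_MILES / EARTH_MILES, 2)
print(error_miles)   # 26000000
print(venus_in_au)   # 0.72 -- Venus sits at about 0.72 AU, well inside our orbit
```

A 26-million-mile misplacement is not a typo in a decimal point; it reorders the inner solar system.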

I couldn’t find how the second and third planets got switched.  So I contacted NGeoKids.  Here is what I asked the editors: “Isn’t earth the third planet, not the second, from the sun?  Has the usual order changed for some reason?  What is the effect of this change on the AU basis of astronomy—the astronomical unit?”  

The editors readily admitted the mistake. Here’s what they had to say: “We did indeed accidentally swap the sun positions for the planets.  Thank you for reaching out and for reading NGeoKids so carefully!”   Wow.  So the universe has been restored.   Does this make anything better, though?  Does this mean National Geo is depending on its readers for fact-checking?  This isn’t really reassurance – just one more piece of evidence that in the search for truth, constant vigilance must be the rule.  

Perhaps this points to two operating uncertainty principles. 1) We are slow to question information that looks self-assured and authoritative, even when we feel fairly sure it is in error; 2) Perhaps if we questioned factual statements more often, it would serve to keep facts on track and lend some confidence to the knowledge we rely on.  However, we can’t constantly be questioning the truth of every statement.  To operate day-to-day, we assume that 99% of factoids are reliable.  That’s because we can’t live in a world we don’t trust.  This is Uncertainty Avoidance. 

Human beings don’t like uncertainty because we don’t know what to think about uncertain situations, nor how to make decisions and act on them.  This is why we make up stories, “facts” to fill in the gaps.  We just can’t leave unsure things alone.  Not for more than a minute or two.  Consider this headline about a P-51 Mustang pilot in The Week (not a Kids’ version) (Feb. 10, 2023, p. 35): “The Tuskegee Airman Who Escaped a Lynching.”  My initial take was that this obit for Harold Brown, age 98 and one of the last of his unit, was going to be about racial prejudice in the American South.  Wrong.  On reading the copy, I learned the lynch mob was in fact Austrian, in the last months of WWII, when he was shot down there.  Another surprise—it was a police officer who saved Brown, who was then “sent to a prison camp—his first experience of integration.”  The truth filled in because I kept reading.

The nice thing about knowledge is that errors of fact can be corrected by digging deeper when the red flags appear.  Vancouver, Canada isn’t the capital of anything—it may be the primary city of British Columbia, but it’s Victoria, on Vancouver Island, that is the provincial capital—a wrong answer I was part of giving, a victim of team groupthink, to a pub quiz question.  And I was just back from a week’s trip there—the shame of it still haunts me.  Here is another: a worldwide count of married people that ends in an odd number?  Before rejecting it, consider that it could reflect multiple husbands or wives, and check whether the figure counts couples or individuals.  Then, on entering a medical office last week, I was handed a fill-in form in English; the small lady beside me was handed one in Chinese, without being asked.  Her reaction was amused (it could well have been otherwise) as she explained she was Vietnamese.

Venus does have the most volcanoes in our solar system: something over 1,600.  Its rotation runs in the opposite direction from ours, and from most planets’, a pattern called retrograde motion.  NASA’s VERITAS mission, planned for 2028, will orbit the planet and map its terrain using radar.  The European Space Agency’s EnVision mission, planned for 2032, will map the sub-surface.  And perhaps both will confirm its position at 67 million miles from the sun, compared to ours of 143 million miles…. Did I say 143?  I meant 93, of course.  143 million is the average distance for Mars, as everyone knows, the 4th planet from the sun.  It’s easy to get confused.  That’s why every person needs to be their own fact-checker.  And that is often a research-project-level demand.  But I could not resist restoring the solar system to its usual and correct order: the one I know and love.

Tuesday, January 17, 2023

Building a Brain

Now routinely cited as the father of modern computing, Alan Turing was always focused on the interplay between human processes and machine programming.  In the early 1940s he was talking with colleagues about “building a brain” (Alan Cowell, AI, NYTimes, 2020).  In 1950 he proposed the Turing test, in which a machine tries to answer questions so convincingly that a human judge cannot tell its responses from a person’s.  Deep learning came later, with humans tutoring computers to think like us, millions of hours per day, in computer centers all over the planet.  The idea is to take computers to a level where, like humans, they become self-teaching entities.  The hope is that they can also learn to reason—perhaps better than we do.

In a recent Atlantic piece, Adam Kirsch examines developments in brain research that propose the potential of uploading the complete human mind.  Such an operation would involve a brain scanner able to detect and record an information pattern called the “connectome,” the mapping of the brain’s neural connections through the synapses across all its many levels and modes.  All human cognition is created by these dynamic interactions.  This map, the wiring diagram of the brain’s workings, is analogous to the human genome.  This would be an artificial reality for thought, emotion, and reasoning that could replicate the thinking / feeling / experience of a total brain – almost more real than real—or at least a resource to connect human sense-making with machine learning.

An uploaded mind won’t dwell in the same environment as we do, but that’s not necessarily a disadvantage.  On the contrary, because a virtual environment is much more malleable than a physical one, an uploaded mind could have experiences and adventures we can only dream of, like living in a movie or a video game (“The End of Us,” Atlantic, Jan/Feb 2023, pp. 64-65).

This complete artificial intelligence, using every affordance of human thinking, would be capable of a powerful merging of human and machine intelligence.  In the investment world, AI has disclosed the potential of computer intelligence superior to human hunches in reading the market and tracking its movements.  This intelligence is based on projecting the past, in fine-grained detail, into the future, incorporating multiple factors beyond the ability of even the best investors to recognize and trace.  “The challenge facing the investment world is that the human mind has not become any better than it was a hundred years ago …the time will come when no human investment manager will be able to beat the computer” (Institutional Investor’s Alpha, an online journal for the hedge fund industry).

However, the brain is organic, and its structures and dynamics are not computer programs.  While a computer can beat the best human players at chess, Go, and even Jeopardy, we have yet to see computer programs perfect self-driving cars, body mobility, long-term planning, or hiring decisions.  Herbert Simon, the political scientist who coined the term “bounded rationality” (1957), did so to counter the economics model of the completely rational brain (“rational man”) making purely rational money decisions.  But Simon’s term can also describe the limitations of machines in achieving artificial general intelligence—as machines, they are severely limited in replicating human culture and common sense, cause and effect, symbolic recognition, implication finding, future projection, and decision making.  This is the reason the simple ideal image of enhanced human thinking is a human being--using a calculator.  The interactive power of the digital plus the neural appears to offer the best promise of enhanced decision making based on what each does best.

A few facts about the brain: no grand unified theory of intelligence yet exists, and it takes mega-computing power even to approach simulating the general-intelligence scenarios we take for granted, such as meeting new people, learning a new language, telling jokes, handling a crisis (mental or physical), and dealing with unknown outcomes of a new decision.  Change and experience involve thousands of neurons out of our store of 86 billion, meaning a potential of 100 trillion interconnections.  The European Union launched the Human Brain Project in 2013 with the goal of a complete simulation of the entire human brain by 2023 (Epstein, “The Empty Brain,” Aeon, 2016).  This has yet to be achieved.
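Those two figures imply a striking average, worth a one-line sanity check (86 billion neurons and 100 trillion connections are the commonly cited estimates, not precise measurements):

```python
NEURONS = 86_000_000_000            # commonly cited human neuron count
CONNECTIONS = 100_000_000_000_000   # commonly cited synapse count

# Average connections per neuron, rounded down
print(CONNECTIONS // NEURONS)   # 1162
```

Over a thousand connections per neuron, on average, is the scale any whole-brain simulation has to contend with.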

That is because the human cognition system is not just an information processor but far more layered and interactive as a sophisticated universe of connected thinking and emotion.  This includes informal logic, seeing the viewpoints of others (theory of mind), understanding implications, nuance, multiple interacting variables, modes and layers of reality, and hyperreality.  Even a three-year-old’s cognition outstrips the capacity of sophisticated computer programs to read cultural reality. 

Notes Cade Metz, writing on the use of AI in medicine (AI, 2020) on current state-of-the-art issues: “Able to recognize patterns in data that humans could never identify on their own, [computer] neural networks can be enormously powerful in the right situation.  But even experts have difficulty understanding why such networks make particular decisions and how they teach themselves.”

No computer program has yet been able to replicate the activity and accomplishments of human neural networks—the thousands of neurons involved in change, experience, and memory—which humans use instinctively but which must be taught (by humans) to computers as deep learning.  Computers operate by fixed focus on well-defined tasks; at the other end of the scale, humans use the whole brain (the model that Whole Brain Emulation machines aim to follow) to deal with change, adaptability, and problems we’ve never encountered before, in situations that are also unique—with incomplete information and unknowable outcomes.  Ever since we first emerged as Homo sapiens, we’ve been trying to find ways to understand our own intelligence and the brain that centers it.

Speech engines are one example of the effort to understand natural language, as in voice recognition and translation.  Language is a complex program in itself, like the brain, with multiple modes, levels, rules, and styles, depending on purpose, context (both written and spoken), and the social relations involved.  Because of this complexity, interpreting a speaker’s intent requires a breadth of judgment about expression and meaning that stalls out the computer, while the nimble brain fills in all the gaps creatively.  Deep neural networks are now showing greater sophistication in facing down this complexity in language analysis.


Image from Pixabay