Sunday, March 12, 2023

What Is Digital Literacy?


 “It’s not computer literacy that we should be working on, but sort of human-literacy.  Computers have to become human-literate.” 

--Nicholas Negroponte

   Architect, MIT Media Lab founder


I can recall how journal submissions worked before the internet era.  The author would submit by mail (or rarely, fax), the text was read and evaluated, and you were either in, out, or in for a revision.  Then there is citation style, of which academic writing has several: APA, MLA, Chicago, and others.  Each has a hefty style guide, and each can take years to truly learn for fluent use.

But these matters were taken care of in-house by the editorial staff, who were clear on what they wanted for the final stages.  Digital intelligence is now allowing—make that demanding—that we feed information to programs specialized in resumes, Social Security, tax filing, remote learning, mortgages, and publishing.  In publishing, authors are seeing a major transfer of effort to these programs.  The digital effect is layering an entirely new set of skills onto the heavy labor of writing and of finally getting manuscripts accepted.

Move up to the current practice, which is to require the author to fill in a very detailed series of files and boxes, shifting many editorial tasks back to the hopeful submitter.  I sense a work transfer, or mission creep, onto the writer, who slowly but surely is taking on this job.  After all, the author needs the publisher much more than vice versa, which has always been the case.  Except that now there is a way to draw the work from the author's time and attention, away from the desks of whatever in-house editors remain.  It's a process that expects me to become part of the editorial operation, without training and without the benefit of any consultation with the in-house team.  In effect, I am preparing my own material for review, revising from the review results, then checking dozens of boxes just to meet the digital standard for publication.

For example, because of the required on-screen formats, I had to stop the process many times to rewrite several sections to comply with word counts, formatting, the style manual, file renaming, or other content, like the figure captions, calling for revisions.  One of these was the abstract, the most difficult job on the list for any article, presentation, or dissertation.  While a previous instruction had called for "a short abstract," when the time arrived to upload it, the limit was no longer my 250 words but a narrower 100.  This news called for a total rewrite, taking several hours.  Encountering a list of similar changes consumed several more hours over more than three days.  Quite a lot to ask for a "single-use" task.  Then there is style handbook compliance: in this case the Modern Language Association's MLA Handbook, 9th edition, a tome 367 pages long, governing documentation both within the text and as notes at the end of the article.  But MLA is not my normal citation style, so add that learning curve (and time burn) into the equation.

The same skill demand for automation is why a CV must be completely dismantled and reassembled for each customized job application, including course titles and exact dates (day and month as well as year) for certificates of graduation, instructors, grades, locations, and other data that can reach back many decades, proving difficult and time-consuming to reconstruct or validate.  Even the thought of reformulating a resume dozens or hundreds of times must pose a major demotivator to job-hunting.  This outsourcing of finding and entering information is not optional: the strong incentive is to comply or lose out.

The stakes couldn’t be higher. Digital competence is an assumed skill—but for some, it’s not self-evident how to acquire this toolkit in order to practice it.  And what exactly is the standard of practice?  And how, when, and why does this expectation determine what is demanded, and in which arenas?  In sum, how can this skill be measured?

UNESCO defines a world-wide standard for digital literacy as “The ability to access, manage, understand, integrate, communicate, evaluate, and create information safely and appropriately through digital technologies for employment, decent jobs, and entrepreneurship.”  The best way to understand this enlarged view of literacy is to compare it to the functional version: “The ability to read a newspaper, sign a check, and write a postcard.”  This is now merely the baseline for the digital-age literacy test.  New challenges are always emerging, in an endless learning curve.  This makes literacy a constantly moving target, even for the highest elite.        

National digital illiteracy rates persist.  The US Department of Education reports that computer literacy is another basis of unequal opportunity across ethnic lines, with 11% of White, 22% of Black, and 35% of Hispanic adults less than fluent in digital media.  Even 5% of adults with associate degrees aren't digitally literate, a figure that rises to 41% among those without high school diplomas.  The digital divide still blocks universal access (Rockefeller Institute of Government, July 2022).

Moving forward, for the "blind review" process I had to "anonymize" most of the content, a strange ritual of removing from my work anything linked to my name, shielding it from reviewers' eyes.  This was a skill I didn't have and hadn't needed until now.  It meant completely omitting key content that would have given away my identity; there was no way to work around those statements, so they had to go.  Yet those deletions were the very passages explaining why I was submitting to this particular journal rather than any other, a key point of the rationale for selling the article: that it is a follow-up to my previous one, now widely cited, published in the past century.*

In effect, the uploading task amounts to learning new software for a single operation.  The same goes for thesis and dissertation projects: they impose a high demand for mastery of a documentation system that too often gets applied just once, while the writer must at the same time be skilled enough to pass and graduate with the degree.  The uploading operation itself is a self-taught process with no real way of knowing what will be asked for, or why.  All this effort is applied atop the already "sunk cost" (to borrow a term from economics) of months or even years of writing and research.  It's distressing to think about whether this submission process reduces the chances that the less digitally literate will be published.  From my own experience, there is no question that this dynamic actively favors the tech literate.  And as a colleague in the data world puts it, what's being tested for is compliance over competence.

Seeking out an equalizer, I was able to recruit a long-time colleague, an excellent “explainer” and recently retired software engineer.  “I’m sure if you had cast your annoyance aside momentarily you could have easily done the same [anonymizing a document],” he noted.  In fact, there is a relatively simple set of steps to remove “Author” from the Track Changes program.  You just need to know where to look.   
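For the technically inclined, the same end can be reached outside Word entirely.  This is a minimal sketch, not my colleague's actual steps: a .docx file is just a zip archive of XML, and the Track Changes and comment author names live in w:author attributes inside it.  The function name and the "Anonymous" placeholder are my own choices for illustration.

```python
import re
import zipfile

def anonymize_docx(src, dst, placeholder="Anonymous"):
    """Copy a .docx file, replacing every w:author="..." attribute
    (Track Changes edits and comments) with a neutral placeholder."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename.endswith(".xml"):
                text = data.decode("utf-8")
                text = re.sub(r'w:author="[^"]*"',
                              f'w:author="{placeholder}"', text)
                data = text.encode("utf-8")
            zout.writestr(item, data)
```

Word's own route is the Document Inspector; the point is simply that the identifying metadata is ordinary text once you know where to look.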

Like productivity expert David Allen, who has admitted in his classic Getting Things Done to being "semi-literate," I must concede this status is just not enough anymore.  David Herlich, my coach that night, agrees, up to a point.  He created a personal consultation service that aims to explain the complexities of sports to brand-new participants.  He told me I was just like many of the people he has met and hopes to serve.  "I didn't really do anything," he says, "except to help you see what you could already do."  It is the frame of mind, not knowledge, that blocks performance.  This insight certainly fuels learning as a discovery of one's own powers.

And yes, the Internet helps.  But what I've noticed is that there is always more than one answer to any question, raising the problem of deciding which answer to go with.  You never really know if you got that right without an explainer who has an expert perspective.


*“Disneyland and Walt Disney World: Traditional Values in Futuristic Form,” Journal of Popular Culture, Summer 1981.

Friday, February 10, 2023

The Emotional Journey of Uncertainty

“Knowledge is an unending adventure at the edge of uncertainty.”

                                                --Jacob Bronowski, Polish-British mathematician 

                      Earth           Venus

position from sun     2nd             3rd
length of day         24 hours        5,832 hours
length of year        365 days        225 days
moons                 1               0
average temperature   59 F            864 F
diameter              7,926 miles     7,520 miles

“Destination Venus,” Nat Geo Kids, Feb. 2023, p. 20

I have read National Geographic, and the Kids edition, for years. I find the children’s edition of more than one periodical to be fun, direct, timely, and a quick index to what is going on in popular culture.  Grade-school textbooks are a good example of this principle.  They need to get to concepts and themes quickly and can’t do the kind of context-building and nuance that adults can tolerate.  So they are a better guideline in several ways.  And usually, factual.  But not always.

Primates—that's us—are primarily creatures of emotion: first emotional beings, only secondarily rational.  This is why emotion has to be "untaught": as children we learn to restrain and hide our feelings.  Rational skills—writing, math, spelling, science, accounting, engineering, bridge—are trained; if they were intuitive, we'd all be whizzes at them.  And we don't understand our own emotional lives all that well, beyond making social judgments about what's appropriate when, where, and with whom.  This is the point Daniel Goleman makes in his book Emotional Intelligence.  Dale Carnegie put it this way: "When dealing with people, remember that you are not dealing with creatures of logic but with creatures of emotion--creatures bristling with prejudice and motivated by pride and vanity."

And creatures whose rational faculties are far more limited than their emotional ones.  So I observed in reading an otherwise great article about the planet Venus written for kids.  I saw something curious on the chart comparing Earth to Venus: "Position from the sun—Earth 2nd, Venus 3rd."  I read this statement again, then once more.  Thus began my Journey into Uncertainty.  Isn't Earth the "third planet from the sun"?  I began to think about this.  But isn't National Geographic among the most trusted sources on earth?  Could the planets, without my knowledge, have somehow changed positions?  The article also notes that any visitor to Venus would burst into flame at an average temperature of 864 degrees F or be crushed by the planet's intense pressure.  Or maybe the Venusian orbit had distorted to move outside Earth's?

The Uncertainty Journey       Case Study: “Destination Venus,” National Geographic Kids, February 2023, pp. 20-21

Questioning:  Is this true – is Venus really third planet from the sun, and earth second? I certainly thought it was the other way around.  For my entire lifetime.

Denial:  This can’t be true.  We’d all be fried or crushed.

More questioning:  Would we?  Did the planets trade places because of some orbital switch-out?

Sense-making:  This just makes no sense; it doesn’t line up with anything else I know.

Investigation:  I’ll look this up online, then send off a query to the magazine. 

Outcome:  National Geographic:  Oh, you’re right!  We messed up that fact. Thanks for reading so closely. 

Further questioning:  How did this happen?  And my favorite question as a former editor: "How many people looked this over at the editorial offices?"  And then my next-favorite question: "What else did they miss?"  This is a relatively wide error: about 26 million miles off, against Earth's 93 million from the sun.  Astronomical measures are based on the AU, the astronomical unit, defined as Earth's distance from the sun.  Moving Earth to the #2 orbit, as this error does, would change the very base value of the AU, with a long chain of downstream errors that come into focus the instant they surface.
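To put the size of the slip in perspective, the arithmetic is quick (a sketch using the rounded average distances cited here):

```python
AU_MILES = 93_000_000     # Earth's average distance from the sun, ~1 AU
VENUS_MILES = 67_000_000  # Venus's average distance from the sun

# Venus orbits at roughly 0.72 AU, squarely inside Earth's orbit.
venus_au = VENUS_MILES / AU_MILES
print(f"Venus: {venus_au:.2f} AU")

# The swap misplaces each planet by the full gap between the two orbits.
error_miles = AU_MILES - VENUS_MILES
print(f"Size of the error: {error_miles:,} miles")
```

Nothing deep, but it shows why the magazine's swap is not a rounding quibble: it moves a planet by a quarter of an astronomical unit.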

I couldn’t find how the second and third planets got switched.  So I contacted NGeoKids.  Here is what I asked the editors: “Isn’t earth the third planet, not the second, from the sun?  Has the usual order changed for some reason?  What is the effect of this change on the AU basis of astronomy—the astronomical unit?”  

The editors readily admitted the mistake. Here’s what they had to say: “We did indeed accidentally swap the sun positions for the planets.  Thank you for reaching out and for reading NGeoKids so carefully!”   Wow.  So the universe has been restored.   Does this make anything better, though?  Does this mean National Geo is depending on its readers for fact-checking?  This isn’t really reassurance – just one more piece of evidence that in the search for truth, constant vigilance must be the rule.  

Perhaps this points to two operating uncertainty principles. 1) We are slow to question information that looks self-assured and authoritative, even when we feel fairly sure it is in error; 2) Perhaps if we questioned factual statements more often, it would serve to keep facts on track and lend some confidence to the knowledge we rely on.  However, we can’t constantly be questioning the truth of every statement.  To operate day-to-day, we assume that 99% of factoids are reliable.  That’s because we can’t live in a world we don’t trust.  This is Uncertainty Avoidance. 

Human beings don't like uncertainty because we don't know what to think about uncertain situations, nor how to make decisions and act on them.  This is why we make up stories, "facts" to fill in the gaps.  We just can't leave unsure things alone, not for more than a minute or two.  Consider this headline about a P-51 Mustang pilot in The Week (not a Kids' version) (Feb. 10, 2023, p. 35): "The Tuskegee Airman Who Escaped a Lynching."  My initial take was that this obit for Harold Brown, age 98 and one of the last of his unit, was going to be about racial prejudice in the American South.  Wrong.  On reading the copy, I learned the lynch mob was in fact Austrian, in the last months of WWII, after he was shot down there.  Another surprise: it was a police officer who saved Brown, who was then "sent to a prison camp—his first experience of integration."  The truth filled in because I kept reading.

The nice thing about knowledge is that errors of fact can be corrected by digging deeper when the red flags appear.  Vancouver, Canada isn't the capital of anything; it may be the primary city of British Columbia, but the capital is Victoria, on Vancouver Island: a wrong answer I was part of giving, a victim of team groupthink, at a pub quiz.  And I was just returning from a week's trip there—the shame of it still haunts me.  Here is another red flag: a worldwide count of married people that ends in an odd number.  Not necessarily wrong, since it could reflect multiple husbands or wives, but check whether the figure counts couples or individuals.  Then, on entering a medical office last week, I was handed a fill-in form in English; the woman beside me was handed one in Chinese, without being asked.  Her reaction was amused (it could well have been otherwise) as she explained she was Vietnamese.

Venus does have the most volcanoes in our solar system: something over 1,600.  Its rotation is opposite to ours and to that of most planets, a pattern called retrograde motion.  NASA's VERITAS mission in 2028 will orbit the planet and map its terrain using radar.  The European Space Agency's EnVision mission in 2032 will map the sub-surface.  And perhaps both will confirm its position at 67 million miles from the sun, compared to ours of 143 million miles…. Did I say 143?  I meant 93, of course.  143 is the average distance for Mars, as everyone knows, the 4th planet from the sun.  It's easy to get confused.  That's why every person needs to be their own fact-checker.  And that is often a research-project-level demand.  But I could not resist restoring the solar system to its usual and correct order: the one I know and love.

Tuesday, January 17, 2023

Building a Brain

Now routinely cited as the father of modern computing, Alan Turing was always focused on the interplay between human processes and machine programming.  In the early 1940s he was talking with colleagues about "building a brain" (Alan Cowell, AI, NYTimes, 2020).  In 1950 he proposed the Turing test, in which a machine tries to answer questions so convincingly that a human judge cannot tell its answers from a person's.  Getting there required deep learning, in which humans tutor computers to think like us, millions of hours per day, in computer centers all over the planet.  The idea is to take computers to a level where, like humans, they become self-teaching entities.  The hope is that they can also learn to reason—perhaps better than we do.

In a recent Atlantic piece, Adam Kirsch examines developments in brain research that propose the potential of uploading the complete human mind.  Such an operation would involve a brain scanner able to detect and record an information pattern called the “connectome,” the mapping of the brain’s neural connections through the synapses across all its many levels and modes.  All human cognition is created by these dynamic interactions.  This map, the wiring diagram of the brain’s workings, is analogous to the human genome.  This would be an artificial reality for thought, emotion, and reasoning that could replicate the thinking / feeling / experience of a total brain – almost more real than real—or at least a resource to connect human sense-making with machine learning.

An uploaded mind won’t dwell in the same environment as we do, but that’s not necessarily a disadvantage.  On the contrary, because a virtual environment is much more malleable than a physical one, an uploaded mind could have experiences and adventures we can only dream of, like living in a movie or a video game (“The End of Us,” Atlantic, Jan/Feb 2023, pp. 64-65).

This complete artificial intelligence, using every affordance of human thinking, is capable of a powerful merging of human with machine intelligence.  In the investment world, AI has disclosed the potential of computer intelligence that is superior to human hunches about the market and tracking its movements.  This intelligence is based on projecting the past, in fine-grained detail, into the future, incorporating multiple factors beyond the ability of even the best investors to recognize and trace.  “The challenge facing the investment world is that the human mind has not become any better than it was a hundred years ago …the time will come when no human investment manager will be able to beat the computer” (Institutional Investor’s Alpha, online journal for the hedge fund industry).

However, the brain is organic; its structures and dynamics are not computer programs.  While a computer can win against the best human players at chess, Go, and even Jeopardy, we have yet to see computer programs perfect self-driving cars, body mobility, long-term planning, or hiring decisions.  Herbert Simon, the political scientist who coined the term "bounded rationality" (1957), did so to counter the economic model of the completely rational brain ("rational man") making purely rational money decisions.  But Simon's term can also describe the limits of machines in achieving artificial general intelligence: as machines, they are severely limited in replicating human culture and common sense, cause and effect, symbolic recognition, implication finding, future projection, and decision making.  This is why the simple ideal image of enhanced human thinking is a human being using a calculator.  The interactive power of the digital plus the neural appears to offer the best promise of enhanced decision making, based on what each does best.

A few facts about the brain here.  One of the problems: no grand unified theory of intelligence yet exists, and it takes mega-computing power to even approach simulating the general-intelligence scenarios we take for granted, such as meeting new people, learning a new language, telling jokes, handling a crisis (mental or physical), and dealing with the unknown outcomes of a new decision.  Any change or experience involves thousands of neurons out of our store of 86 billion, with a potential of 100 trillion interconnections.  The European Union launched the Human Brain Project in 2013 with the goal of a complete simulation of the entire human brain by 2023 (Epstein, "The Empty Brain," Aeon, 2016).  This has yet to be achieved.

That is because the human cognition system is not just an information processor but far more layered and interactive as a sophisticated universe of connected thinking and emotion.  This includes informal logic, seeing the viewpoints of others (theory of mind), understanding implications, nuance, multiple interacting variables, modes and layers of reality, and hyperreality.  Even a three-year-old’s cognition outstrips the capacity of sophisticated computer programs to read cultural reality. 

Notes Cade Metz, writing on the use of AI in medicine (AI, 2020) on current state-of-the-art issues: “Able to recognize patterns in data that humans could never identify on their own, [computer] neural networks can be enormously powerful in the right situation.  But even experts have difficulty understanding why such networks make particular decisions and how they teach themselves.”

No computer program has yet replicated the activity and accomplishments of human neural networks—the thousands of neurons involved in change, experience, and memory—which humans achieve instinctively but computers must be taught (by humans) as deep learning.  Computers operate by fixed focus on well-defined tasks; at the other end of the scale, humans use the whole brain (the model that Whole Brain Emulation machines aim to follow) to deal with change, adaptability, and problems we've never encountered before, in situations that are also unique, with incomplete information and unknowable outcomes.  Ever since we first emerged as Homo sapiens, we've been trying to understand our own intelligence and the brain that centers it.

Speech engines are one example: a means to understanding natural language, as in voice recognition and translation.  Language is a complex system in itself, like the brain, with multiple modes, levels, rules, and styles depending on purpose, context (both text and spoken), and the social relations involved.  Because of this complexity, interpreting the intent behind language requires a breadth of expression and meaning that stalls the computer, while the nimble brain fills in the gaps creatively.  Deep neural networks are now showing greater sophistication in facing down this complexity in machine language analysis.



Saturday, December 17, 2022

Easy Languages


Follow-up to “Hard Languages,” November 13, last month’s topic.

What is an easy language for English speakers to approach and immerse in?  Since language is such a basic key to culture, familiarity or fluency has a great enabling effect, opening up an entire cultural dimension, whether in one country or across a wide cultural empire (as Spanish, French, Portuguese, or Arabic provide).  The issue is the time and exposure needed to achieve comfort and speed in sending and receiving, that is, speaking and decoding.  The artificial intelligence revolution itself was jump-started by the US government's goal of developing a machine program that could learn to translate and transcribe natural language.

Babies under six months can distinguish speech sounds from any language in the world.  But the brain soon begins to focus on a single language and its sound contrasts, and starts to ignore distinctions less important in that language.  Young children can learn two languages equally well.  The window to learn any language seems to be about 12 years; beyond that, language acquisition doesn't map well to the maturing brain as its patterns become set (Linden, The Accidental Mind, 2007).  Acquiring everyday facility in a language is one thing; mastering its nuances and cultural structures is quite another, involving a long process of immersion and practice in context.  That is the principle behind the shibboleth: a difference in pronunciation that separates native insiders from outsiders.

Languages within the same family are typically the easiest to learn because of familiar cognates (roots in common), grammar, written form (the Latin alphabet), conjugation rules, tonality, and pronunciation.  For English, that branch is West Germanic, which includes English, German, Dutch, Afrikaans, and Yiddish; 80% of the most-used English vocabulary, and the grammar, is Germanic.  The larger family grouping is Indo-European, spoken by the largest share of speakers worldwide—close to half.  English has 1.5 billion speakers worldwide, of whom just under 400 million (about a quarter) call it their native language.  And for non-native speakers from other language families, English is not an easy acquisition.

Selecting a new language also depends on its useful cultural position:  where the language is spoken, how widely distributed, and its global media influence.   Non-European languages that use the Latin alphabet, like Malay and Swahili, are cases in point.  Malay is the lingua franca across several southeast Asian countries; Swahili is the trading language of East Africa (as a second language) with a rich Arabic vocabulary, sharing our Latin alphabet.  Indonesian also uses Latin script and has a simple grammar. 

Swahili developed as the lingua franca of trade along the East African coast (Swahili means "coastal") beginning around the first century BCE, influenced by Arabic.  Today it serves some 50 million people as the national language of Kenya and Tanzania and is widely used in Uganda, Burundi, the DRC, and the islands of Zanzibar and Comoros; the standard version is based in Zanzibar City.  Because its pronunciation is regular and its alphabet Roman, Swahili is one of the easier "exotic" languages to approach and acquire.  It has a wide range across several cultures and a long history, and can also be heard in southern Ethiopia and Somalia, northern Zambia and Mozambique, and even Madagascar (Lonely Planet phrasebook, 2008).

Then there is the cultural aspect:  what does the language afford as access to the richness of history, literature, religion, art traditions, and connections with other cultures within the language and beyond?  French and English have been historically important in the West because of their status and portability in diplomacy.  As the world turns increasingly toward the Eastern cultural dimension (India, China, Japan) this ratio is shifting from Atlantic to Pacific Ocean. 

Proximity to English is one index of easiness.  Frisian is the most similar to English but has just a half-million speakers in northwest Europe.  Spanish, however, has over 534 million speakers worldwide and is the official language of 21 countries.  English speakers already have the greatest range, since English serves as the language of business, science, and world politics in the form of "Globish," basically the universal auxiliary language.  A legacy of the British Empire, it is already the official language of 29 countries.  Considering the time-intensive demands of learning a completely new tongue, there is little incentive to acquire one.  From an English-speaking perspective, most Romance and other Indo-European languages take about 600+ hours to learn, while tonal languages and those of the Sino-Tibetan family can take 2,000+ hours (ScienceABC).
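Put those hour counts against a realistic schedule and the disincentive becomes concrete.  A back-of-the-envelope sketch, assuming a committed five hours of study per week (the pace is my own assumption, not ScienceABC's):

```python
HOURS_PER_WEEK = 5  # assumed part-time study pace

for family, total_hours in [("Romance / Indo-European", 600),
                            ("Tonal / Sino-Tibetan", 2000)]:
    weeks = total_hours / HOURS_PER_WEEK
    print(f"{family}: {total_hours} hours is about {weeks / 52:.1f} years")
```

At that pace, 600 hours is over two years of steady work, and 2,000 hours closer to eight.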

Unless a language links you to your family's heritage.  Our research director has become an Italian "citizen living abroad" (in the US, for now) through his mother's ancestry, an option several other countries (like Ireland and Mexico) are introducing with the goal of attracting descendants back to the mother country to live, bringing their incomes with them.  The European Union opens the borders dramatically, since citizens of one member country can live and work in any of the current 27.

Some of these languages are close to English (like the Germanic family members Frisian, Dutch, Norwegian, and Swedish); others seem far afield (Romanian, Afrikaans, Indonesian) (FSI source).  Of course there are also constructed languages and ancient languages that are mostly academic, not spoken, or extinct, like Gothic.  These open onto other cultural worlds, peoples, and histories: a kind of hyperreality across time.  Some ancient languages are still spoken or written today, or are direct ancestors of living ones, like modern Greek, the easiest to learn with a non-Latin script (already familiar through science) and a basic medium of Western Civ.  A more familiar example is modern Hebrew, based on the ancient model but updated.



The US Foreign Service Institute, which began its language-training mission after WWII for in-country staffing, has been a good source of language manuals and tapes, available free online.

Sunday, November 13, 2022

Hard Languages

  “The limits of my language are the limits of my world.”
– Philosopher Ludwig Wittgenstein

Arcane tribal languages in remote settings (South America, Asia, Africa) would be the most daunting for English speakers.  Isolated from the mainstream languages of more populated areas, they have little in common with the familiar Roman and Greek roots of the Indo-European tradition.

If you want to try your hand at a south/eastern European language, try Romanian.  It is the only Latin-based language in that geography and shows many cognate commonalities with English.  And although Danish is a close cognate of English, its 27 phonetically distinct vowels make it much harder to understand and master than Swedish or Norwegian.  Also complicating Danish are its varied glottal stops, hard both to hear and to pronounce for non-native speakers.  While Danes can pick up both of these Nordic systems, the two other speaking groups have more trouble with Danish, and its pronunciation is exacting (Jens Lund, Ph.D., folklorist and native Danish speaker).

Are you up for a real language challenge that will allow you to speak to under 100 other specialists after years of work?  Then consider the constructed language (conlang) domain. Klingon would have to be one of the most challenging.  This language was invented for Star Trek III (1984) as a formal integrated speech for the Klingons in the Trek universe.  After a dictionary was published, many people dabbled in its difficult spelling and pronunciation, but only a handful (under 100 estimated) have become skilled speakers able to converse with each other and understand the film tracks.  In addition, since Klingon speech focuses largely on spacecraft and warfare, it has limited use for day-to-day conversation. It is popular with linguists for its creative aspects played out within the general principles of language.  

Of the natural languages (as opposed to constructed ones), Mandarin Chinese is the hardest for English speakers to learn—because of the thousands of ideographs necessary for written comprehension, as well as a four-tone scale for meaning.  Yet it is also the most widely spoken global language (besides Globish, basic English spoken as an auxiliary tongue).  Arabic, Polish, and Russian follow, the first also forcing a totally unfamiliar writing system.  The US Foreign Service Institute groups languages by difficulty from Category 1 through 4, with the "Super-hard" Category 4 including Arabic, Chinese (Cantonese and Mandarin), Japanese, and Korean.

Classical languages are more difficult simply because of their restricted lives in religious and academic contexts, though they span a range of difficulty.  "Classical Greek and Sanskrit are extremely difficult because they are so highly inflected--hundreds of forms of the verb and numerous case endings.   Late Greek (koine) simplifies the grammar and thus is much easier to read and not particularly difficult. Egyptian grammar and vocabulary are very simple.  Its only real difficulty is mastering the hieroglyphs, which are very few compared to Chinese" (Prof. Robert Littman, Classical studies, University of Hawaii at Manoa).

How about learning a tribal language?  Many are still active around the globe, with the greatest number in New Guinea (around 850), the most linguistically diverse area known. The Khoisan languages of South Africa are among the world's oldest, with roots going back some 60,000 years.  Closer to home, the three leading tribal languages still spoken in the US are Navajo (by far the largest and hardest) in Arizona, Yupik in Central Alaska, and Sioux in the upper Midwest and Canada. Navajo was famously employed as an unbreakable talking code by native-speaking marines in WWII. All are difficult, made more so by their roots in ancient and unfamiliar cultures whose vocabularies are hard to learn and relate to--and they have only in modern times enjoyed a written format.  Hawaii is the only US state with two official languages--Hawaiian and English--as of 1978.  Along the range of tribal language difficulty, Hawaiian is among the easiest.

So there are many "hardest languages" out there to appreciate, if not to master as a fluent speaker, and each has a rich cultural component.  Klingon was born from the constructed science fiction of the Star Trek universe, so it does have a soundtrack, but also a steep learning curve for pronunciation, structure, and its symbol alphabet.  Klingon was designed to look and sound truly alien, which it does thanks to its weirdly off-center profile without cognates.  (However, Duolingo now actually offers a course.) The most widely spoken constructed lingua franca, Esperanto, has an estimated million speakers worldwide but little cultural baggage (literature, history, religion, film, cuisine).  However, fluency can be reached in one-tenth the time of natural languages, and Esperanto in itself offers a quickly effective base for language-learning capability. 


Next blog:  “Easy” Languages

Saturday, November 5, 2022

Acing the College Essay


I answered a call for experts on the college essay by the New York Post.  Here are my answers to their questions about this high-value writing challenge: the personal profile, an essay that can make (or even break) the candidate’s chances. 

We are at the starting line for college applications. The early-decision deadline for many colleges is November 1st.  Between November and February, upwards of 5 million college applicants—including 65% of high-school graduates--will be struggling to compose an essay of 250 to 650 words in their “authentic voice.” The goal is to portray themselves as uniquely interesting college material for selective schools across the country.

Here are a few heuristics--rules of thumb--applicants need to know for the essay portion, the personal statement, of their application.  An effective essay matters because it has a job all its own: to focus, or refocus, the whole application, by putting a face and voice to the facts of student grades, activities, and awards, or by tempering a less-than-stellar record with insight, values, and clear expression.  As essay coach Alan Gelb puts it in his book Conquering the College Admissions Essay in 10 Steps, "…admissions office counsellors name the essay as the single most important 'tip factor'—that is, the thing that can tip your application in your favor, all other factors being equal." 

Q: What is your experience in writing and education?  As an academic editor and dissertation project manager, for several decades I have been an admissions essay coach, as well as Faculty Reader of the Advanced Placement test in English for the College Board.   

Q: Why is the college essay such an important part of the application process? The college personal essay is quite possibly the most important piece of writing you will ever undertake.  While it sits outside the main application proper, it can be a high card in your hand if handled well.  "It can turn around the way the committee looks at your other achievements, acting as the catalyst that can channel positive attention on to acceptance," says Steve Goodman, admissions strategist and author of the results-based College Admissions Together.  Individual schools, and the central Common App, issue specific "prompts" (which can change) to set the focus, including "Describe a person you admire," "Personal growth," "Learning from obstacles," "Solving a problem," and "What captivates you?" (Princeton Review)

Q: Can you point to a leading thing not to do in the essay?  Don't select your essay topic without the reader in mind--the admissions officer, who will give you under ten minutes to impress them.  (In-person interviews are increasingly rare.)  The best topic might not even be your intuitive first pick of what's most important about your character and experience.  Think of something unique to you, your family, community, or values.  Example from a student client's first draft: "I am unique.  You will never meet anyone like me." My edit:  Everyone is unique; it's what we do with that position that counts.  Here's the question:  How did you mobilize your unique qualities to make a difference for yourself and others?

We then revised the initial statement to read "I realized that I could use my special talents to create value not just for myself but for others, from my family out to school and community."  Then describe how.  Avoid topics that many others will gravitate to:  my trip to Israel (or European or Asian tour), gender or religious conversion, why I hate / love / admire my parent or stepparent, and political opinions, unless you are involved in political work.  Think of something either off-beat or seemingly ordinary to signal an important principle you learned, then applied.

The goal of the personal essay is to show off your insight, self-awareness, ability to derive value and meaning from any situation (family business, volunteering, off-brand sports, assignments, reading, challenges from family, peers, authority figures).  Showcase your own specialized perception, talents, expertise, ideas, even hopes and fears, and the doubts you have struggled with—showing how you coped, managed, or overcame them, and how you were able to surmount resistance with resilience. 

Q:  What about other best practices?  This would be obvious to experienced applicants:  no texting spellings (e.g., "I xpect 2hav evn mor xper"); use a translation program if you need one.  Don't rely on your own judgment about how well you write; show your "finished" draft around to your English teacher, an online editor, your parents (assuming they are literate types), or other seasoned writers.  Your own peers, unless they are highly qualified, probably don't make the grade here.

But here’s a warning:  admissions experts know immediately when an essay looks “cooked”: written over 50% by an expert.  It can’t be a world-class essay when your grades are Bs and Cs.  If it’s 85% mechanically correct, and the ideas are solid, that level will be fine.  Students tend to put off the essay until last, but it’s important to work on it over time, starting slowly the summer before the due date. (Yes! This means draft after draft as you discover yourself in the text.)  This is the critical piece you spend the most time building up through multiple drafts, a much-encouraged method, and each stage deserves close attention.  No matter how skilled a writer you believe you are, this is no midnight-the-night-before task.  As a reward, this experience will greatly strengthen your essay writing in all school subjects.

Q: What can make the essay shine?  Seek originality and insight-finding moments to describe and analyze.  This means going beneath the surface of people, incidents, and circumstances to discover what’s important and perception-shifting about them.  Dedicate the time to focus, mind-map, then gather a good number of thinking pieces as paragraphs you can pull together to construct your essay (noting any word limits).  Find the unexpected insight, the extraordinary embedded within “ordinary” scenarios. 

The idea is to show perception wedded to knowledge (weaving in references to school reading), which is especially impressive to your admissions readers.  One of my clients wrote a winning essay about watching Bill Cosby as TV’s Dr. Huxtable for his medical school application; another covered her job mowing lawns with her father when the family economy got tight, for a business school placement.  Responding to “most impressive historical event,” another wrote about the explorations and innovations of the Phoenicians as key to civilization-building—a personal view.  

Exploring the many concepts implicit in ordinary experience, and connecting them to larger themes of human experience, is the key to an intelligent take on the world (the same skill that marks great literature, in fact).  It signals that you are perceptive acceptance material who will prove an asset to the incoming class.  

Saturday, October 1, 2022

Engineering Bias


“As fuel was consumed, the ship got lighter and the acceleration more pronounced.  Rising at this exponential rate, the craft quickly reached maximum acceleration, a limit defined not by the ship’s power, but by the delicate human bodies inside.”    -- Andy Weir, The Martian

“Engineering:  The discipline of applying technical and scientific knowledge and physical resources to design and produce materials, machines, devices, systems, and processes that meet a desired objective and specified criteria.”  -- New World Encyclopedia


“Objective and specified criteria” sounds highly rational and technical.  But this demand set starts out with human factors – the controller, driver, or user of whatever is design-engineered.  Here are two examples: 

Case 1:  Climate control: Many female workers report office climates as chilly, whereas men feel quite comfortable.  Why is this?  Office-building algorithms for temperature regulation date back to the 60s, targeted to a 154-pound male.  The smaller bodies and lower muscle mass of women make them more susceptible to cold.  Unless climate control is updated to reflect this difference, and the growing numbers of women in the office workforce, this male-bias design problem will persist.  “Minor” design aspects like this set-point exert a major impact.  Temperature affects not just comfort but productivity (like keyboarding performance).  The gender pay gap could be just one outcome of off-balance climate control.

Case 2:  Ergonomic seating:  On the other hand, a product made to solve a niche-ability problem became a major bestseller by virtue of its appeal across the board—inclusive of all body types and positions. In 1994 Herman Miller contracted engineers to design a versatile office seat that would accommodate any person, in any posture, at a range of seated tasks—largely computer-based.  Not only could the chair angle well from upright to reclined, it was “lined” with an elastic polymer mesh first developed to prevent sores in bedridden patients.  The Aeron chair—the “dot-com throne”--quickly became one of the most popular high-end office chairs ever made. 


The design, building, and use of engines, machines, and structures begins with the physiology and mentality of human beings (biology and psychology), moving from that base out into cultural values (how people, things, and experiences are defined, weighted, and ranked across groups).  This means that people, not devices, are the central core of design thinking.  These human factors introduce a powerful bias into the “neutral” processes based on math and physics.  UX—user experience—experts understand product users and their experiences—including thought conventions, emotional feedback, intuitive assumptions, decision-making, task procedure, and options for action.  Human Factors Engineering is now a subspecialty, but all engineering projects must, ideally from the outset of the design process, define, test, and evaluate the fit between design and user (Goddard).

Medical devices are typical of projects well understood in this way.  Less well understood are chronic-care pharmaceutical regimens, where compliance with use rules (adherence) is only around 50% (lower for males than females) and decreases over time (US Pharmacist).  Side-effects, dosage schedules, and the low effectiveness of any given medication are the main causes of non-adherence.  And yet these counter-productive factors are not fully recognized or acknowledged by the medical profession as obstacles to patient compliance that interfere with the engineering of desired drug outcomes.


The first bias going in is that the designer looks at and uses the device the same way the user would – but the first is an expert, whereas the second, the typical user, is often encountering it for the first time.  Don Norman, human-centered design expert, puts it this way in the opening of his human factors book: “You are designing for people the way you would like them to be, not the way they really are” (The Design of Everyday Things, p. 7). (When searching the topic of bias in engineering, you will see plenty of articles on bias in the hiring of women and minorities.  While diversity and inclusion aren’t under discussion here, male dominance in the profession plays out in design outcomes as bias toward male users.)

But bias is also an outcome of the human factors embedded in the design assumptions of the (usually male) engineer.  Men and male bodies dominate medical testing, with female subjects excluded from medical trials as too complex and variable—and at special risk for any adverse after-effects of testing. Differing male and female physiology produces differing responses to drug type and dosage.  In parallel, in the design of credit ratings, males are given higher credit and spending limits—based on assumptions about long-term earnings and employment.  Microsoft vision systems fail to recognize darker-skinned figures, and the recognition systems of self-driving cars are less attuned to dark skin tones as well.

In AI, male voices are easier for voice programs to recognize and interact with.  Critics of this bias have noted that most of the voice-activated home programs (like Siri and Alexa) use the female assistant model of the young articulate admin with a compliant and faintly flirtatious edge.  It can also be that female voices signal trust and reassurance—as advertisers are aware in healthcare, beauty, and hospitality, versus the more authoritative male voice (ESB Advertising).

This can result in critical situations in automotive design and safety, as Carol Reiley writes in “When bias in product design means life or death” (TechCrunch, Nov. 16, 2016).  She points out that test dummies are modeled on the average male body, so that females are almost half-again as likely to be injured in a crash.  The first female crash dummies entered the design process in 2011, and since then Toyota and Volvo have developed test programs dedicated to the smaller-scale female body as well as pregnant ones.

Self-centric design

Designers unconsciously use people like themselves (male, white, US-based) as models for the majority of products and programs.  This is no surprise but an outcome of everyone’s natural homophily—the tendency to relate best to those who look, think, and act like ourselves.  In a study by the Geena Davis Institute on Gender in Media, white men over-perceived the presence of women and minorities: groups that were just 17% women were seen as a 50/50 ratio to men, and groups that were 33% women were seen as majority-female.  This is an irony in view of the fact that women make three-quarters of all consumer buying decisions. 

And consider just designing for the brain itself, which is complex but runs best on programs and input that are first of all intuitive.  Few people except the technically inclined even bother to read a manual—and a complex tech manual is an even greater obstacle to operation.  Don Norman points to an early digital watch, the Junghans Mega 1000 Digital Radio Controlled.  With five buttons along the top, bottom, and side for operation, the following questions arise:  “What is each button for?  How would you set the time? There is no way to tell—no evident relationship between the operating controls and the functions, no constraints, no apparent mappings.  Moreover, the buttons have multiple ways of being used” (TDOET pp. 27-28).  And as much as Norman likes the watch itself, even he (an expert in device design) can’t recall these functions, or how to deploy them, in order to fully enjoy the watch’s features.

Undiscovered bias makes engineering design much harder to define or shape to the right purposes.  Bias skews the problem definition away from the problem that actually needs to be solved (not necessarily the one presented by the client) for the right array of users, who should be able to operate the device through the maps, concepts, and symbols already in their heads.  Ignoring or failing to identify these factors leads to protracted corrections and changes of direction as the team works on the wrong or misstated problem (Norman).  Finding out what the actual issues are at their root is the mandate of cultural analysis, based on human biological, brain, and cultural motives.