Your Ass Through the Looking Glass
Winter – Wk 7: Swabbed in Abu Dhabi | AI has Superior Eyes | Racist Robots See All the Wrong Patterns | Facial Phrenology | The Cotton Swab of Sodom
PUBLICUS – Swabbed in Abu Dhabi
My latest (republished, w/ intro): “The View from Abu Dhabi” — in ColdType — Issue 219 — February 2021
They say travel opens your eyes to the world, but the New Normal doesn't offer many opportunities to nose around. Your path is pretty well marked out for you, from one door to the next. Any deviation from that path is considered a violation of public health policy. One false move, and you could be held responsible for countless people dying of Covid. During a global panic, nose-pickers are the new mass murderers.
Just to get out the door in the first place, I had to send in a mail-in nasal swab test. I probably took six Covid tests before it was over — one swab every five days for three weeks. Each time, a weary nurse from Asia, India, or Africa — peering out over two masks and through a clear plastic face shield, searching my face for any sign of humanity — would ram a massive Q-tip down my throat until I gagged, or up my nose until they'd scraped a layer of cells off my prefrontal cortex.
Most were gentle at first, but each time I went in, they'd drill down a little deeper. You got the feeling they'd conspired to test our limits, if only out of boredom.
The day after I got home, Chinese authorities announced that anal swabs are far more effective than oral or nasal testing. I've never been more relieved to be back in the good ol' USA, land of the free and home of the brave.
PDF here (pp. 8-11)
———
MYTHOS – AI has Superior Eyes
Always, the inventor's hope is that his machine will improve upon human abilities. Take the ubiquitous surveillance grid, for instance—the latest expression of our desire for omniscience.
A watched society is a compliant society. To pull it off, you need loyal spies who never rest and never defect. In the old, mid-20th century surveillance state, you had to have human watchers on every street corner, behind every counter, embedded in every institution, keeping watch to make sure the population remained compliant with all social and psychological mandates.
Who has the time to train that many people? Or the money to pay them all? And how can you expect a working-class police force to be loyal to their betters?
Nevertheless, raised nails have to be hammered down. As with furniture manufacturing or home carpentry projects, these days it's better to have machines do it.
The optimal solution is to put cameras at every available vantage-point, feed the endless data into facial recognition software, and correlate that information with vast databases brimming with detailed personal dossiers. Over time, you can accurately reward the cooperative, and weed out the non-compliant.
Knowing that human choice might lead to an unpredictable screw up, the only rational way forward is to replace humans with robots who aren't encumbered by favoritism, and who will show no mercy.
ETHNOS – Racist Robots See All the Wrong Patterns
Truly, the only thing worse than a flesh-and-blood bigot is a racist robot. You can always neutralize a human by any means necessary, but a racist robot might never die.
This issue really came to light in 2016, when Microsoft's deep learning chatbot—an AI social media addict named "Tay"—went online and immediately began tweeting things like "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism", and most memorably, "caitlyn jenner isn't a real woman yet she won woman of the year?"
Tay may have passed the Turing Test (frat-house version), but at what cost?
Tech-savvy social justice warriors were quick to point out that such outputs are clearly the fault of Microsoft's programmers, or bad people on Twitter, or something like that. "Garbage in, garbage out," as coders are fond of saying.
Microsoft apologized and promised to do better.
The year before, an astute software engineer discovered that the image recognition software powering Google Photos was mistakenly classifying pics of black people as "gorillas." Google apologized and promised to do better. But in the end, they resorted to a cheap fix.
Rather than train the AI powering Google Photos to distinguish people of African ancestry from pics of primates, Google programmers simply turned off the search terms "gorilla," "chimp," and "monkey," leaving the world to shake their damn heads.
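For the uninitiated, that "fix" amounts to nothing more than a blocklist bolted onto the model's output. Here is a minimal sketch of the idea in Python, with made-up label names and scores—an illustration of the general technique, not Google's actual code:

```python
# Hypothetical post-hoc label suppression: instead of retraining the
# classifier, simply refuse to return certain labels at all.
BLOCKED_LABELS = {"gorilla", "chimp", "chimpanzee", "monkey"}

def filter_predictions(predictions):
    """Drop any blocked label from a classifier's (label, score) output.

    `predictions` is a list of (label, confidence) pairs, e.g. the top-k
    results from an image-classification model.
    """
    return [(label, score) for label, score in predictions
            if label.lower() not in BLOCKED_LABELS]

# Example: whatever the underlying model "sees", the blocked labels never
# reach the user -- the bias is hidden, not corrected.
raw_output = [("gorilla", 0.83), ("person", 0.11), ("outdoors", 0.04)]
print(filter_predictions(raw_output))   # [('person', 0.11), ('outdoors', 0.04)]
```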
What if someone created an AI algorithm to weed out racist AI algorithms? That could be even more problematic.
AI has actually been employed to identify racist attitudes in the population, with disastrous results. These digital policemen consistently singled out black people for disproportionate blame.
Two papers presented at the 2019 annual meeting of the Association for Computational Linguistics—a catchy use of language if there ever was one—highlighted the problem of anti-racist robots who turn out to be racist themselves. Researchers found that AI bots trained to find hate speech online consistently point their squeaky metal fingers at black people. As Vox reported at the time:
"Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. …
"In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English."
These problematic results hearken back to a 2014 study conducted (by humans) at the left-leaning think tank, Demos. The team found that around 10,000 tweets a day contained a racial epithet. Of these, the most commonly used slur (slightly less than half the total) was "white boy." Another big hit was "whitey."
What explains these troubling findings? If we submit the problem to a process of rigorous deduction—basing our assumptions in anti-racist ethics—the only possible conclusion is that racist programmers are creating prejudiced AI bots.
All educated people know that any system which disproportionately affects a protected ethnic group in a negative way must be dismantled and restructured. That's just science. Unless, of course, that system happens to be Google or Microsoft, in which case the company must issue a public apology and promise to do better.
MACHINA – Facial Phrenology
Personality types are as old as soothsayers and busybodies. From the ziggurat to the hen house, individuals have long been classified by astrological signs or some other sacred geometry of the soul. More recently, psychological tests and genetic sequencing have become fashionable ways of answering that age-old question:
"Hey babe, what's your sign?"
It sounds crazy, but what if distinctive personality types are real, innate, and written all over us?
Even crazier, what if AI got so good at pattern recognition that an algorithm could search your face and discover the secrets of your soul? According to a recent paper by Stanford computer scientist Michal Kosinski, that machine-learning program has already been created. It doesn’t even need access to your phone’s search history.
Kosinski became famous in 2017 for creating the world's first functional AI Gaydar—a controversial algorithm that could accurately predict a person's sexual orientation based on subtle facial features.
Kosinski's new machine can determine a person's political leanings, and with far greater accuracy than a human observer. The task may not seem that difficult, especially if you're judgy and superficial. Consider these two faces side-by-side, and ask yourself who is the liberal and who is the conservative:
———
Alternatively, which of these women is gay and which one is straight?
———
Even better, which of these men is the gay liberal, and which is the straight conservative (who married a total bombshell)?
You probably did okay, but chances are the Kosinski Machine did better. The mad scientist describes his results in Nature's Scientific Reports:
"A facial recognition algorithm was applied to naturalistic images of 1,085,795 individuals to predict their political orientation by comparing their similarity to faces of liberal and conservative others.
“Political orientation was correctly classified in 72% of liberal-conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%)."
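Stripped of jargon, that setup amounts to scoring each face by how much it resembles the average liberal face versus the average conservative face, then checking, for every liberal-conservative pair, whether the liberal gets the higher "liberal" score. Here is a rough sketch of that idea in Python with NumPy, using randomly generated stand-ins for real face embeddings—an illustration only, not Kosinski's actual pipeline:

```python
import numpy as np

def liberal_score(embedding, mean_liberal, mean_conservative):
    """Cosine-similarity difference: how much more a face embedding
    resembles the average liberal face than the average conservative one."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return cos(embedding, mean_liberal) - cos(embedding, mean_conservative)

def pairwise_accuracy(liberal_embeddings, conservative_embeddings,
                      mean_liberal, mean_conservative):
    """Fraction of liberal-conservative pairs in which the liberal face
    gets the higher liberal score (chance level is 0.5)."""
    hits = pairs = 0
    for lib in liberal_embeddings:
        for con in conservative_embeddings:
            pairs += 1
            hits += int(liberal_score(lib, mean_liberal, mean_conservative)
                        > liberal_score(con, mean_liberal, mean_conservative))
    return hits / pairs

# Toy vectors standing in for real face descriptors (e.g. 128-d embeddings):
rng = np.random.default_rng(0)
libs = rng.normal(0.2, 1.0, size=(50, 128))
cons = rng.normal(-0.2, 1.0, size=(50, 128))
acc = pairwise_accuracy(libs, cons, libs.mean(axis=0), cons.mean(axis=0))
print(f"pairwise accuracy: {acc:.2f}")   # well above 0.5 on this toy data
```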
So what’s the Kosinski Machine’s secret? According to its inventor:
"The highest predictive power was afforded by head orientation (58%), followed by emotional expression (57%). Liberals tended to face the camera more directly, were more likely to express surprise, and less likely to express disgust."
Think of the way Biden stares into the camera as if he's mumbling directly into your hair, or recall the many FB pics of nu-males doing the O-mouth in front of some cupcake shop. Therein lie the secrets of the soul.
"Even when combined, interpretable facial features afforded an accuracy of merely 59%, much lower than one achieved by the facial recognition algorithm in the same sample (73%), indicating that the later employed many more features than those extracted here."
In other words, the Kosinski Machine just knows which way your barn door swings, politically speaking. That implies an artificial mind is capable of intuition.
Unsurprisingly, Kosinski's work has come under fire for potential privacy violations. What if this technology is used to identify and label individuals on the basis of physical attributes beyond their control?
It may be hard to believe, but the day may come when a person's political leanings, sexual orientation, or even race becomes the basis for job selection, university admission, or the application of criminal law. What happens if it becomes establishment policy to simply snap a headshot and let the Kosinski Machine do the rest?
The inventor defends his work by insisting that he's simply showing that it's possible to create such a device—solely as a warning to others. That’s sort of like synthesizing LSD and dosing your municipal water supply, because if you don't, the Russians will.
Kosinski may be onto something, though, because every day I read the latest news, I wonder who put LSD in our water supply.
RELIGARE – The Cotton Swab of Sodom
The following video features an Asian medic demonstrating an innovative Covid-testing technique recently developed in China.
[TRIGGER WARNING: The deft twist of the swab at the 0:35 mark could make you pucker.]
Aside from its obvious instructional (and comical) value, this clip offers an important case study in potential AI bias. From search engines to social media, these automated systems are already being deployed to root out racist content and other forms of online hate.
Let’s speculate on the biased perspective a flawed AI bot might inadvertently project onto this 100% innocuous video.
Consider for a moment the racial appearance of the dummy. Notice his submissive posture. Notice the prison pajamas he's wearing. A prejudiced AI system might interpret this scene as a passive-aggressive display of homoerotic interracial dominance. Such an interpretation, however, would be highly problematic.
For one thing, Asian people can't be racist because RACISM = POWER + PREJUDICE, and when did the Chinese ever have any power? For another thing, to interpret an anal swab as being aggressive or violating or gross or unpleasant in any way would be homophobic and problematically unacceptable and bordering on hate speech.
With such mistakes in mind, it’s clear that all security algorithms should be front-loaded with ethical counter-biases before a police-bot has a chance to come to the wrong conclusion. Otherwise, diverse YouTubers could be deplatformed unfairly.
From an orthodox anti-racist standpoint, the only proper interpretation of the above video is that this beleaguered doctor is willing to share the wisdom of Asia with the world. Think about it. An over-burdened medical professional, already fatigued from dealing with a global pandemic, is taking time out of his busy day to demonstrate the best way to test for the 'rona. Perhaps most importantly, he does this despite his Euro patient's obvious privilege and supremacist tendencies.
In the final analysis, the above scenario is all about the altruism of the Chinese medical establishment. If that doesn’t touch you deep inside, nothing will.
This is why it's so important to have enlightened programmers—a diverse workforce that’s committed to social justice—coding the AI-powered monitoring systems that will dominate the social engineering projects of the future.
Having an automated surveillance grid that tags, tracks, and sorts individuals according to the arbitrary protocols of technocrats is pretty much inevitable. But God forbid you unleash an algorithm that might notice uncomfortable patterns or hurt the wrong person's feelings. That would be dystopia.