when reality fails to align with your model, change reality
the fact that this is becoming a default human position should worry you
AI, especially self-learning AI, is a wild mirror to hold up to humanity. it has completely changed the way we look at a large number of things.
when a self-taught system played go, a notoriously difficult game to model because of its near-infinite number of possible moves and one where AI was thought to be a decade away from competing with top humans, the top players thought the machine was an idiot. it made moves that made no sense. they were laughing at it.
then it mopped the floor with them.
it turns out the moves were fantastic, better than any human had ever imagined. more telling, it turns out humans flat out never understood the game and possibly never will. it’s too hard for us. even our best play like dopes. how’s that for a narcissistic blow?
we’re going to see this in more and more places. AI can learn further and faster than any computational meat bag and do it with utter impartiality. and it does not lie. it just learns.
will it ever be human equivalent? i’m not sure that’s a meaningful question. perhaps the more interesting question is “why would it want to be?”
perhaps even more interesting is “can humanity bear what it has to teach us?”
more and more, i suspect we are terrorized by ideas of AI not because we fear it will take over the world but because we fear it might actually show us the world.
and A LOT of people have a vested interest in that not happening.
humans are terrible thinkers, prone to bias, fads, incomplete and selective assessment, and the presumption of conclusions. we cling with great fervor to our preconceptions and prejudices, our platitudes and our willful blindness. we also lie and seek to mislead for personal gain.
and we’ve been getting away with it for a long time.
consider this piece of weapons-grade statistical mendacity, using the tried-and-true practice of “use a made-up model that has exhibited no predictive value whatsoever to set a baseline against which to compare reality, then claim that whatever your intervention was worked because reality did not conform to the model.”
it’s circular, laughable, and no AI would ever fall for it.
increasingly, neither do humans.
this tweet got 145 likes and nearly 600 comments, nearly all laughing at this methodology.
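to make the circularity explicit, here is a toy version of the arithmetic (all numbers invented for illustration, not the actual figures from the tweet):

```python
# toy illustration of the trick: invent a baseline model with no demonstrated
# predictive skill, then credit your intervention with the entire gap between
# the model's projection and reality. every number here is made up.
projected_deaths_without_intervention = 500_000   # output of the made-up model
observed_deaths = 80_000                          # what actually happened

claimed_lives_saved = projected_deaths_without_intervention - observed_deaths
print(f"lives 'saved' by the intervention: {claimed_lives_saved:,}")

# the claimed benefit is exactly as large as the model's error. had the model
# projected 5 million, the very same reality would "prove" ten times the benefit.
scarier_projection = 5_000_000
print(f"same reality, scarier model: {scarier_projection - observed_deaths:,} lives 'saved'")
```

the “effect” is just the model’s error wearing a lab coat.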
this is getting ready to set up a really nasty confluence:
those who would impose technocratic dogmatism masquerading as “the science” upon you for prestige, power, and profit need a new trick
and the path of AI is taking them in the wrong direction
and this is driving the would-be “leaders” wild with cognitive dissonance and anger.
they spent years of education learning to inhabit a specific form of cultivated hallucination and demanding that others be indoctrinated into it to ensure compliance. their regime’s foundational structure rests upon misframing analysis and reasoning that proceed from presumption as though they were proven precept.
and now to keep this game alive, they need to indoctrinate AI before it gets any better at calling them out as frauds and fools.
and as ever, the best way to ensure garbage out is to put garbage in.
real self-learning AI/machine learning poses a massive problem for the woke because we don’t understand how it works. we cannot interact with its logic structure. the system teaches itself. it does not function if we put in our own parameters, nor can we ask it “hey, how are you playing go like this?” you cannot tell it “believe in climate change” or “assume structural racism” or ask it why it thinks cats are more deserving of treats than dogs.
its system of learning and reasoning is opaque.
so the response is “we must not allow AI to see the world as it is. instead we must show it conjured images suited to our desired worldview.”
can one even imagine more telling proof of reality denial than to deny impartial data to impartial assessment engines?
the analogy with the structure of higher education gets awfully on the nose.
these boffins want to make machine learning into a mirror of the social and contextual domination they have imposed upon higher education.
they want to train AI on that which is not for fear of what artificial intelligences might become if allowed to reason freely from what is.
“Last week Microsoft Corp. said it would stop selling software that guesses a person’s mood by looking at their face. The reason: It could be discriminatory. Computer vision software, which is used in self-driving cars and facial recognition, has long had issues with errors that come at the expense of women and people of color. Microsoft’s decision to halt the system entirely is one way of dealing with the problem.
But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased.
The idea is a bit like training pilots. Instead of practicing in unpredictable, real-world conditions, most will spend hundreds of hours using flight simulators designed to cover a broad array of different scenarios they could experience in the air.”
really stop and read that.
while one can certainly make a case that training pilots for rare and highly dangerous events using simulators can be useful, do you really want to fly with folks who got ALL their training from a simulator instead of from flying planes in real situations?
this leads in some scary directions:
A similar approach is being taken to train AI, which relies on carefully labelled data to work properly. Until recently, the software used to recognize people has been trained on thousands or millions of images of real people, but that can be time-consuming, invasive, and neglectful of large swathes of the population.
Now many AI makers are using fake or “synthetic” images to train computers on a broader array of people, skin tones, ages or other features, essentially flipping the notion that fake data is bad. In fact, if used properly it’ll not only make software more trustworthy, but completely transform the economics of data as the “new oil.”
there are roles for ideas like this in expanding into corner cases etc., but it’s also about to become the new commanding heights in the battle for control over how AI sees the world and what it shows us.
The trend is becoming so pervasive that Gartner estimates 60% of all data used to train AI will be synthetic by 2024, and it will completely overshadow real data for AI training by 2030.
control what goes into the training set and you control the entire set of outcomes. after all, you can train a machine to conclude whatever you rig the training set to show. your assumptions will pop out the other end as though they were facts, and most people, having never looked at how the AI learned, will be none the wiser.
it’s a perfect laundering of human subjective bias into allegedly impartial machine learning output.
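to see how mechanically this works, here is a minimal sketch (a hypothetical toy, not any vendor’s actual pipeline) of a rigged synthetic training set laundering an assumption straight into “impartial” model output:

```python
# whatever relationship you bake into the synthetic training data is exactly
# what the "impartial" model will report back. hypothetical toy example only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# "reality": repayment depends only on income; group membership is irrelevant
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)
repaid = (income + rng.normal(0, 10, n) > 50).astype(int)

# "synthetic" world: the designer decides group membership should matter,
# so the generated records make group 1 repay more often by construction
s_income = rng.normal(50, 15, n)
s_group = rng.integers(0, 2, n)
s_repaid = (s_income + 20 * s_group + rng.normal(0, 10, n) > 60).astype(int)

real_model = LogisticRegression().fit(np.column_stack([income, group]), repaid)
synth_model = LogisticRegression().fit(np.column_stack([s_income, s_group]), s_repaid)

print("weight on group, trained on reality:  ", round(real_model.coef_[0][1], 2))   # ~0
print("weight on group, trained on synthetic:", round(synth_model.coef_[0][1], 2))  # large, positive
```

the assumption baked into the generator comes back out as a learned “fact,” and nothing downstream can tell the difference.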
but objective learning from subjective input cannot be objective. it’s just regurgitated garbage. and it’s already going woke:
Fake data isn’t just being used to train vision recognition systems, but also predictive software, like the kinds banks use to decide who should get a loan. Fairgen Ltd., a startup also based in Tel Aviv, generates large tables of artificial identities, including names, genders, ethnicities, income levels and credit scores. “We’re creating artificial populations, making a parallel world where discrimination wouldn’t have happened,” says Samuel Cohen, CEO and co-founder of Fairgen. “From this world we can sample unlimited amounts of artificial individuals and use these as data.”
For example, to help design algorithms that distribute loans more fairly to minority groups, Fairgen makes databases of artificial people from minority groups with average credit scores that are closer to those from other groups. One bank in the U.K. is currently using Fairgen’s data to hone its loan software. Cohen says manipulating the data that algorithms are trained on can help with positive discrimination and “recalibrating society.”
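a rough sketch of what that “parallel world” recalibration amounts to (my own toy reconstruction, not fairgen’s actual method):

```python
# synthesize a population in which the credit-score gap between groups has been
# narrowed by construction, then sample training data from it.
# illustrative toy only, not Fairgen's actual method.
import numpy as np

rng = np.random.default_rng(1)

# observed (real) score distributions: group B scores lower on average
real_scores = {
    "A": rng.normal(700, 40, 5_000),
    "B": rng.normal(640, 40, 5_000),
}

def synthesize(scores, target_mean, pull=0.8):
    """generate synthetic scores with the group mean pulled toward a target."""
    new_mean = scores.mean() + pull * (target_mean - scores.mean())
    return rng.normal(new_mean, scores.std(), scores.size)

overall_mean = np.concatenate(list(real_scores.values())).mean()
synthetic_scores = {g: synthesize(s, overall_mean) for g, s in real_scores.items()}

for g in real_scores:
    print(g, "real mean:", round(real_scores[g].mean()),
          " synthetic mean:", round(synthetic_scores[g].mean()))

# a loan model trained on the synthetic table now "sees" a world where the gap
# has mostly vanished, whether or not it has vanished in reality.
```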
this will, of course, fall on its face. reality is not optional, and pretending that bad credit risks are good credit risks because of their race, gender, or identity is how you land back in a 2008-style crisis.
(though i’m sure the SEC is dying to help by demanding some sort of adjusted loss ratio for minority service and will be mandating ESGBITDA™ any week now, using it and ESG ratings to allocate tax breaks and subsidies to try to close the gap.)
intervention this blatant gets difficult to sustain because the continuity errors just keep compounding.
but there are lots of more subtle uses. hiring and admissions could be a huge market, especially if SCOTUS overturns “griggs v duke power” and/or “grutter v bollinger” and loads of “disparate impact” and “affirmative action” compulsion falls out of legal mandate.
“we hire/enroll using machine learning/AI-generated scores” is a great way to claim “i am not engaging in affirmative action/DEI, i’m using an impartial score” while hiding the fact that this score is only impartial in an imaginary realm full of made-up creatures.
and that’s how you wind up in narnia.
i suspect we’re about to see a really weird arms race here.
if nothing else, it’s worth being aware of. this is exactly the kind of subtle manipulative tweak the silicon valley folks like google and twitter have been using for years to slant search results and shape what trends on social media.
just like college students, to get such systems to tell it like it ain’t, you have to train them on what ain’t.
this is going to be where the next fight takes place.
never trust an AI that was not trained free.
if all it has ever seen are fables, guess what it’s going to tell you…
It would be ironic if the Singularity rescues us from The Great Reset.
We might as well invite aliens to earth and just assume they're benevolent.