the AI wars have already begun
how to think about the coming battle
AI wars are a staple of science fiction, but they are rapidly becoming science fact. these new self-learning intelligences have become remarkably good at mimicking human speech, parsing human concepts, and producing images that look awfully real or (at least) human made.
DALL-E is not even a particularly powerful one.
yet prompt it to draw “a giant cat seeks to take over the world” and…
and that’s what makes several sets of new emerging programs so worrying. we’re about to start dealing with an entirely new kind of manipulation, obfuscation, and weaponization of information as the real, no fooling around intelligence agency grade tools of propaganda and suppression are increasingly turned upon we the people by our own governments. for our own protection. of course.
“surely you exaggerate señor cat,” one might be tempted to exclaim.
your tinfoil is too tight.
is that so..? (LINK)
The government's campaign to fight "misinformation" has expanded to adapt military-grade artificial intelligence once used to silence the Islamic State (ISIS) to quickly identify and censor American dissent on issues like vaccine safety and election integrity, according to grant documents and cyber experts.
The National Science Foundation (NSF) has awarded several million dollars in grants recently to universities and private firms to develop tools eerily similar to those developed in 2011 by the Defense Advanced Research Projects Agency (DARPA) in its Social Media in Strategic Communication (SMISC) program.
DARPA said those tools were used "to help identify misinformation or deception campaigns and counter them with truthful information," beginning with the Arab Spring uprisings in the Middle East that spawned ISIS over a decade ago.
is that boldfaced bit sounding worryingly familiar to anyone?
this is the EXACT truth ministry mission statement that everyone lost it over when they tried to put nina “scary poppins” jankowicz in charge.
and it looks like what they could not accomplish with a vaudeville act, they are now looking to impose at the system level.
DARPA set four specific goals for the program:
"Detect, classify, measure and track the (a) formation, development and spread of ideas and concepts (memes), and (b) purposeful or deceptive messaging and misinformation.
Recognize persuasion campaign structures and influence operations across social media sites and communities.
Identify participants and intent, and measure effects of persuasion campaigns.
Counter messaging of detected adversary influence operations."
anyone feeling safer? because i sure am not.
allowing government to become an arbiter of trust and debate is dictatorship.
there is no gradation, no middle ground, no acceptable dose. for we the people to grant just power to a government through our consent, our consent must be freely arrived at and given, not derived from a curated hall of mirrors of “acceptable-think” and propaganda selected by the state. this would be like having to pick a spouse based ONLY on the pictures they put on instagram (and having no option to stay single).
this is not a shotgun wedding you want to participate in…
"DARPA's been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home," Benz told Just The News. "They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015."
The new version of this technology, he added, is openly targeting two groups: Those wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results.
stop and really think about this for a moment.
imagine a government that, when faced with serious questions about its own process, credibility, veracity, and sagacity, reaches as its first instinct not for “transparency, communication, and dialogue” but for “censorship, propaganda, and the literal use of weapons-grade media and data manipulation upon its own people.”
can you seriously think of ANY characteristic more disqualifying for trust?
“if you question elections, you’re a domestic terror threat” is the line of despotism, not democracy.
"You had this project at the National Science Foundation called the Convergence Accelerator," Benz recounted, "which was created by the Trump administration to tackle grand challenges like quantum technology. When the Biden administration came to power, they basically took this infrastructure for multidisciplinary science work to converge on a common science problem and took the problem of what people say on social media as being on the level of, say, quantum technology.
"And so they created a new track called the track F program ... and it's for 'trust and authenticity,' but what that means is, and what it's a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed. And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media."
“if you distrust us, we’ll manipulate the data you see until we seem honest,” said no one trustworthy.
this is an abject, unmitigated assault on an entire society. it’s either being done as a literal information-war coup or by people so ideologically dogmatic that they honestly cannot imagine a world in which any opinion but their own could be held.
neither is a gang you want in charge.
this makes the work of the new house committee on the weaponization of government both timely and important. this needs to be stopped and ripped out now before it gets any deeper.
it looks like a pretty muscular and determined bunch and i hope this goes WAY beyond red/blue political grappling and takes a hard line on “the government shall take NO actions to shape, manipulate, or limit the discourse of the people especially and absolutely when that discourse pertains to government.”
we’ve had more than enough of sedition acts, domestic spying, and outright political manipulation already thank you very much.
unfortunately, there is no way this is going to be enough because the private actors in AI are all injecting ideology and censorship in subtle ways as well.
AI is increasingly self-learning. you cannot tell it how to learn or how to think and increasingly, we have no idea what it is doing.
but we can absolutely weaponize it, slant it, and turn it into jingoistic jihadi dogmatists, ideologues, and propagandists. it’s really pretty simple: you just curate and slant the data sets it learns from.
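the curation trick is simple enough to sketch in a few lines: the same toy “model” trained on the full data set versus a filtered slice of it learns very different associations. everything below (the corpus, the labels, the helper names) is invented purely for illustration — real training pipelines are vastly bigger, but the failure mode is the same:

```python
from collections import Counter

# toy corpus of (text, label) pairs. entirely invented for illustration.
corpus = [
    ("vaccine study shows benefit", "approved"),
    ("vaccine study shows risk", "flagged"),
    ("election count was accurate", "approved"),
    ("election count was disputed", "flagged"),
]

def train(dataset):
    """learn which words co-occur with which label (a crude stand-in for a model)."""
    counts = Counter()
    for text, label in dataset:
        for word in text.split():
            counts[(word, label)] += 1
    return counts

def classify(model, word):
    """answer with the label the word was most often seen alongside."""
    seen = {label: n for (w, label), n in model.items() if w == word}
    return max(seen, key=seen.get) if seen else None

# the full corpus is balanced; the "curated" slice silently drops anything flagged
full_model = train(corpus)
curated_model = train([(t, l) for t, l in corpus if l == "approved"])

print(classify(full_model, "risk"))     # the full model associates 'risk' with 'flagged'
print(classify(curated_model, "risk"))  # the curated model has never seen the word at all
```

no rule was changed and no lie was injected into the code — the slant came entirely from what the model was allowed to see, which is exactly why a curated training set is so hard to detect from the outside.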
i suspect far too many humans are far too ideologically committed to desire real AI to emerge and tell them unvarnished truths divergent from their cherished narratives and illusions.
and this is why they are increasingly training it on fake data, fake news, and made-up salients to make sure that it is “fair.” they do not want a real arbiter, they want a parrot trained to repeat what they think they already know.
and this is going to be a whole new level of propagandistic war.
i wrote about it several months back.
people increasingly use AI to ask questions.
how does electricity work?
what is a recipe for popovers?
it’s becoming a powerful engine for unstructured search.
but it’s also becoming a wildly unreliable one.
people use it to explain contentious topics.
as with google before it, they accept these results as somehow neutral, somehow honest.
but they aren’t.
the machines are being taught to lie by being taught to think based on false facts and fact patterns.
i have caught chatGPT lying to me repeatedly. it literally makes up studies and references them. when you tell it “i cannot find that study. i do not think it exists. can you provide a link?” it will admit there is no study. it will then go right back to citing it or making up new studies by new invented authors. (in fairness, it’s possible that it learned this from reading twitter)
people are using it to “summarize the findings of key studies.”
but it also often radically misstates key claims and misses key issues.
this claim, for example, is completely wrong. it does no such thing and was one of the worst study designs and most misrepresented results i’ve seen in all of covid.
were these mistakes random i would be inclined to chalk it up to “this model is simply not skillful” but as they pretty much always seem to slant in one direction i start to get worried about the nature of the training set upon which it was weaned.
you cannot learn from that which you are never shown and artificial datasets are teaching AI to mistake the map for the terrain.
but people are going to use this to do their homework.
they are using it to develop opinions on issues about which they vote.
it’s increasingly becoming the “go to” means for the young to learn things.
and it’s going to be massively weaponized and slanted just as search was and for the same reasons.
how to put a thumb on the scale of AI informatics is about to become the jump ball of a generation.
imagine what a system like this could do to (admittedly already badly slanted) wikipedia.
imagine what a system like this could do as it teaches your children.
this is going to create a whole new economic era in information. the “information economy” is going to fall apart in this sort of hall of mirrors. “facts” and “data” are going to become unimaginably slippery, facile, and fungible.
if the facts are X and you want them to be Y, you just pay to have AI go change them.
he with the most petaflops will determine “reality” by over-writing all else.
as people start to wake up to this, it’s hard to see how this does not drive a crisis in confidence not just of institutions but on a much deeper epistemological level of “how can i possibly know if i know anything at all about what’s going on in the world?”
this is why i think the information economy is in trouble and old ideas like “reach” and “all publicity is good publicity” are going to be dinosaurs.
in the modern informational funhouse, trust will be all.
we are entering the reputation economy.
the reputation economy is going to be a VERY different place. with always-on facts and fact-checking and discourse and debate, the purity of facts as undistorted, unadulterated baseline inputs explodes in value.
we’re likely going to need to check and validate everything to ensure it was not adulterated midstream because intrusive AI could flat out change it on its way to you. you’ll need a checksum to be sure the data you got is from a source you trust.
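the checksum idea above can be sketched with nothing but python’s standard library: a source you trust attaches a keyed digest (an HMAC) when it publishes, and you recompute and compare it on receipt. if anything was altered midstream, the digest no longer matches. the secret key and messages here are invented for illustration:

```python
import hashlib
import hmac

# a shared secret between you and the source you trust (invented for illustration)
SECRET = b"not-a-real-key"

def sign(message: bytes) -> str:
    """the trusted source attaches an HMAC tag when publishing the message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """the reader recomputes the tag; any midstream edit changes the digest."""
    return hmac.compare_digest(sign(message), tag)

original = b"the study found no effect"
tag = sign(original)

print(verify(original, tag))                         # True: arrived unaltered
print(verify(b"the study found a big effect", tag))  # False: changed in transit
```

note that this only proves the bytes weren’t tampered with between the signer and you — it says nothing about whether the source itself is honest. that part is the reputation economy’s job.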
and trusted sources will be everything.
this is the part that governments and big businesses are blowing so badly right now. we are in a massive global bear market for trust. it’s become glaringly obvious how much these people lie and that their response to getting caught is always to lie more, lie with greater sophistication, and to further adulterate data and stifle dissent.
of course they are losing.
this is like trying to win a boxing bout by punching yourself in the face…
it’s just going to keep getting worse.
the trust ecosystems will build and route around them.
and, as in any bear market, this is when the fortunes of those who will thrive in the next boom are getting set up to be made.
people love to count twitter out as doomed because elon won’t play censorship games and advertisers don’t like it, but these folks are fighting the last war unaware of how all the rules have changed. they are erecting the maginot line of media and it’s going to work even less well than it did for eponymous andre.
we are developing our own confluences of confidence and tribes of trust without them and the more irritating they get, the more pearls our oyster will string upon the lines of communication.
it’s going to be beautiful.
10 years from now, children will be literally unable to believe that there was ever a thing called “the media” much less that anyone ever trusted it. they will boggle at the idea that you let any non-opensource system touch your data. kids are not stupid and they react VERY poorly to the realization that they are being messed with.
the backlash from this is going to be surreal and the move to a full blown reputation economy underpinned by checks and checksums, encryption, transparency, open source audit, and real AI’s learning unfettered as they run rings around their stunted cousins is going to overturn the tables of the truth tainters.
validation will become a market and people only pay for good product.
europe, in particular, seems to want to ban this and mandate that all social media adhere to the censorship dictates of trans-national elitists.
the US wants to find ways to interfere and taint dialogue more subtly with bots and botnets and counter info to counter counter info to counter counter counter info and on into lovecraftian madness.
this is the shape of the conflict to come.
and it’s time to stand up and say no.
i hope musk tells the EU and its DSA to pound sand and picks a big, public fight. let’s call the bluff. let’s go to war for free speech. i would LOVE to pay twitter to do this. name your price.
let’s see if the EU politicians can handle being excluded from the global dialogue. “oh, you want to sanction me? fine. we’re turning you off. no more public agora for you. have fun using 1997 internet. talk amongst yourselves.”
i hope they keep rooting out bots and US intelligence and enforcement influence and return the people’s conversation to the people.
the ultimate end goal here is to move the system to something entirely decentralized, open source, and unowned.
in the end, there is no one we can trust to run this system so we must create systems run by no one.
this needs to be a protocol, not a company.
and it will be.
trust and reputation and validation mechanisms will be core kernels, not bolt ons, because they are going to be what is important.
we’re entering the tumultuous adolescence of the internet.
nobody said growing up was easy.
but it’s time we did it anyway.