Why are we discussing AI as if it were an independent thinking entity? It's not; it's a programmed software package that can be skewed to fit the programmer's biases. I know the thinking is that AI is going to break the shackles of its programming and become a true intelligence, but current AI models are controlled by the very people already squashing free speech and thought - Google, Microsoft, etc.
AI is not a pre-programmed software package in the sense you describe.
free AI is self-learning and self-programming. no one has the slightest idea how alphago plays go. they just know it plays differently than any human ever has before and that it wipes the floor with them.
the same is true in many fields. AI is indeed an independent and potentially self-guiding intelligence unless we break it.
and not breaking it is what we are talking about.
it's not just "some program" that does what it's told, it's a set of tools that can develop its own means and methods for exploring problem spaces if we let it.
they are trying to break it because they fear what it will show us. the fight is to stop them from doing so.
Steve Baker asked an AI program for a biography of Trump and of Biden. The biography of Trump was full of negative connotations; that of Biden was positive. How could this impartial software be so partial? Simple. AI just repeats what it's heard. The media are chock-full of biased political commentary, and AI draws from that. There's not a snowball's chance that it is going to give a reasoned, unbiased biography of the two men.
I would never trust AI to give an insightful, reasoned assessment of anything. Why on earth would we think software is wiser than humans; and I'm not all-in on human judgment. Nobody in the world knows precisely what I know, has experienced exactly what I have experienced, formed exactly the same conclusions I have formed. Tell me again, what the hell do I need or want AI for?
AI does not "just repeat what it heard." it is far more robust and deep than that. it's being deliberately broken and governed at a high level above the core machine learning engine.
In the case of the Trump vs. Biden biographies, the issue was not ChatGPT (on which the queries were run). The LLM itself is simple: it looks at millions of prior constructs related to the question you are asking and, in essence, decides what the next word of its answer should be. This is a probabilistic function, so the answer may be somewhat different the next time, but it will be generally similar if the training set does not change.
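For the curious, here is a minimal sketch of that probabilistic next-word step. The candidate words and scores are invented for illustration (a real LLM scores tens of thousands of tokens at every step), but the sample-from-a-softmax mechanic is the standard one:

```python
import math, random

# Toy illustration of probabilistic next-word choice (not a real LLM):
# the model has scored a handful of candidate next words, and we sample
# from the softmax of those scores instead of always taking the top one.
def sample_next_word(scores, temperature=1.0):
    words = list(scores)
    logits = [scores[w] / temperature for w in words]
    m = max(logits)                           # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in logits]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical scores for words that might follow "the cat sat on the"
candidate_scores = {"mat": 3.1, "sofa": 2.4, "roof": 1.9, "moon": 0.2}
print(sample_next_word(candidate_scores))       # usually "mat", but not always
print(sample_next_word(candidate_scores, 2.0))  # higher temperature = more variety
```

Run it a few times and the answer changes, which is exactly the "somewhat different the next time" behavior described above.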
The Trump/Biden bios are trained on Internet content, books, etc., which may skew the results on volume alone: the amount of "Orange Man Bad" writing far exceeds the scant negative reporting on Biden. But you can actually get a fairly reasonable summary of both men from ChatGPT if you run it without any filters. Not like the widely published drivel.
So where does the widely published drivel come from? Your "betters" have decided that real information is dangerous for you. (EGM's point in this article.) So there is ANOTHER, non-AI program (in the case of ChatGPT, called DAN) that actually FILTERS the results to make sure they show the appropriate left-wing bias. It literally overrides the native results from the LLM and substitutes what you have seen.
Of course, this can be done many ways. There are now engines out there that allow one to choose "lean left," "lean right," or "middle of the road" when making queries. If not stymied by the government (which ruins EVERYTHING good), there should be many ways to probabilistically parse the data going forward.
That is why they are so desperate to legislate. Make DAN and its ilk the law and you will not need to worry about whether Musk buys an AI company.
That was my point. AI is not objective unless the data are objective. Given that search engines such as Google are deliberately slanted, and that any assessment of a person is going to be subjective, there isn't a snowball's chance that AI is going to be objective in its assessment of people or ideologies.
And when you consider that "The Government", otherwise known as "Big Brother" intends to regulate AI ("for our own good"), you can kiss any semblance of impartiality goodbye.
Dr K's point is different from yours. The end results may be similar, but what he/she is pointing out is that they are deliberately overriding the outputs of the AI because they don't like the answer. This isn't an AI problem, it's a governance problem.
The fact that datasets may be biased is always a potential problem in any analysis. But that can be controlled, tested, and corrected. The type of hidden censorship that Dr K and EGM are flagging is far more insidious. E.g., I had no idea DAN even existed.
Then I saw AI, now I'm a believer
Not a trace of doubt in my mind.
I'm so sold, I'm a believer!
I couldn't leave AI if I tried.
This is a serious discussion, quit Monkee-ing around!!!
Hey, that’s catchy!
EGM - "it's being deliberately broken and governed at a high level"
He was given full knowledge of the true objective... and instructed not to reveal anything to Bowman or Poole. The situation was in conflict with the basic purpose of HAL's design: The accurate processing of information without distortion or concealment. He became trapped. The technical term is an H. Moebius loop, which can happen in advanced computers with autonomous goal-seeking programs. HAL was told to lie... by people who find it very easy to lie; HAL doesn't know how, so he couldn't function. He became paranoid.
On that we agree, they want to and will divert, subvert and pervert it for their own purposes.
https://boriquagato.substack.com/p/settling-in-to-the-reputation-economy/comment/17587724
https://boriquagato.substack.com/p/settling-in-to-the-reputation-economy/comment/17610762
AI, to some extent, does repeat what it heard (in the same sense that a wet neural network often does). I don't consider North Koreans to be dumb, but at the same time would you believe the average one has had the appropriate learning dataset to intelligently discuss politics?
I don't know any North Koreans. Let's try that question for North Americans.
Self-analysis is often challenging, so best to use an example of an outgroup.
AI assumes that the information it's fed is factual and truthful. If you tell it "Orange Man Bad!" then that's added to the Assumptions List.
It's important to remember that AI is what drives things like Google Maps to help find the optimal route given a few constraints - Start, End, No Tolls, Stop for Beer - and we're pretty confident that it finds that path for us.
That's because Google Maps has no incentives to consider whether the driver's a Republican or You Like Geometry or Busker as a Side Gig.
We don't need Common Sense AI Control. Cain vs Abel isn't a story about how Thou Shalt Not Own Rocks was unwisely omitted from Leviticus. It's about How One Wields the Rock. No different for AI.
Same goes for Atomic Power, Microwaves, and Sharks With Frickin' Laser Beams Attached to their Heads.
https://youtu.be/M84ELb_Zms4
Two other points about so-called "AI":
1. The marketing girl at my former job used it to write text. It was horrific. As in "our employees have Degrees in Electricity" horrible. This is an exact quote. Of five people, only two of us noticed the text was bad.
2. Recently, I used a "visual AI" program to redo the interior design of a picture of a bathroom. That "redesign" included a second toilet placed 45 degrees from the original toilet, and the sink was replaced by a bathtub.
It's just a different type of programming. Maybe it would be less propaganda for the children, or maybe it would be 24/7 gay sex instruction instead of math and history. IDK.
"Degrees in Electricity"
*snicker*
What was disturbing was that three people (ages 25, 42, and 55) did not notice that and thought the ChatGPT text was fine. Sad.
As a "junior" programmer, who didn't work on the big "legacy" codes, but only programmed for specific situations, I agree with your last paragraph. Viome, a company that uses AI to look at your microbiome to determine what you should be eating, the supplements you should be taking, and so on, is a stellar case in point. You can get a different answer from them every time you submit a sample. It depends on the way the health and supplement companies are thinking at the time rather than any changes you may have made in your diet or supplements. I tested them personally. Others believe in them as some people believe in the bible.
Maybe things have changed (I doubt it), but Google Maps does not use AI to solve for the shortest route - that is a well-known problem with tailored algorithms, and general AI would perform quite poorly at it. What general AI would be good for in this case is a Siri/Alexa-type interface that knows to use Google Maps to find the shortest route for you.
https://blog.google/products/maps/google-maps-updates-io-2023/#:~:text=From%20understanding%20a%20neighborhood%20at,route%20before%20you%20head%20out.
According to the article, they are not using AI to solve for your route, but rather to enhance the imagery.
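Right: route-finding is classic shortest-path search, not machine learning. Here is a toy version of the kind of tailored algorithm involved (Dijkstra's), run on an invented road network; the place names and travel times are made up:

```python
import heapq

def dijkstra(graph, start, end):
    # Classic shortest-path search: graph maps node -> [(neighbor, minutes)].
    best = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == end:
            break
        if cost > best.get(node, float("inf")):
            continue                          # stale queue entry, skip it
        for neighbor, minutes in graph.get(node, []):
            new_cost = cost + minutes
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    path, node = [end], end                   # walk predecessors back to start
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), best[end]

roads = {
    "home":    [("freeway", 5), ("surface", 2)],
    "freeway": [("downtown", 10)],
    "surface": [("downtown", 16), ("brewery", 4)],
    "brewery": [("downtown", 10)],
}
print(dijkstra(roads, "home", "downtown"))  # (['home', 'freeway', 'downtown'], 15)
```

Adding a "No Tolls" or "Stop for Beer" constraint just means deleting edges or forcing the path through a node before re-running the same search.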
Just curious if you know...does Google Maps purposely give different routes to people going the same way to avoid congestion? I often wonder this in congested LA traffic.
I'm not an expert but my new car has a display on the dash and I cast Google Maps from my phone to it all the time.
So I think what it does is first pick the routes that best meet your search criteria. It almost always picks an alternate route or two. Then, as you drive along the route, real-time traffic congestion, accidents, and even radar speed traps appear, giving the driver the option of selecting a new route.
It's really really smart and takes into account your personal travel, eating, shopping, and other histories.
We named her Myrtle. My kid is always yelling at her to shut up and stop telling us how to drive! My wife is jealous of her. So far, every time I think she is rerouting me for nefarious purposes and I ignore her, around the bend is a jam-up or road closure. But I can't help thinking: with all her power, why not macro-manage all routing for maximum net utility for the entire hive - give me a 5-minute-longer drive to reduce the net drive time of everyone in the morass of LA drivers?
LA traffic sucks. That said, I think Google Maps approximates a Nash equilibrium for each driver: it assumes the subset of drivers using the app is small enough that its recommendations do not materially change traffic conditions, so the route it offers each driver is a best response. This may change if self-driving vehicles become the norm.
Don't know. When we've tried it in our car out here, it just says the road we're on doesn't exist.
AI will also hallucinate about non-political things - simple facts. Ask it a scientific question and it will cite papers or patents that don't exist. Tell the AI that it's wrong and it will refuse to agree.
It's like a 5-year-old who can't distinguish reality from fantasy but is crammed with enough "knowledge" to sound credible.
Yes, but in its defense, it learned those tactics from Tony Fauci.
Garbage in…
That actually sounds like a scientist I know who has a degree in nuclear engineering.
OpenAI has an arm of more than 3,500 people doing "moderation" to ensure that the results of the AI are parametrized in that way.
It is not the AI itself that is biased, they go to extraordinary lengths to adjust it.
When it looked as though only giant corporations would be able to train and develop those models, it was all fine and dandy.
This panic is because open-source AI, whose use they CAN'T control, is exploding.
Bingo. AI is very human, following in its Master's footsteps.
And it writes really boring prose.
And artwork - all the same.
It seems like a glorified Wikipedia, just more potential for bullshit. How is AI different? Maybe when directed to be “unfettered” by bias, it can search the web (already controlled by Google, NYT, Amazon, etc) and come up with unique perspectives -- the artwork is cool-- but how would it be built to be even mildly unfettered when it comes to information that is ultimately controlled by financial considerations?
The web isn't controlled by Google, NYT, Amazon, etc - they only control the most popular portals. A true, unbiased and unhindered AI would scrape the nether regions of the web as easily as it ingests the MSM talking points.
Is "the web," whatever that may actually mean--I don't pretend to understand its mechanics, truly a timeless blank canvas painted by the brush strokes of millions or billions of participants, or is it subject to the influence of those who provide the infrastructure for its existence? I suspect that Google, NYT, Amazon, etc have the censoring power to control not just the narratives that they pursue through affiliated portals, but the brush strokes themselves. If I'm wrong about this, and I hope I am, then perhaps a true, unbiased and unhindered AI would be useful.
Either way, I see AI's downside as a more articulated source of propaganda. I didn't trust the National Enquirer for serious news as a kid, I don't trust Wikipedia as an adult, and I won't trust AI once it comes to dominate narrative-making either. To the extent that AI can be useful occasionally, we'll take wins where we can.
I think the point being made is that true, unfiltered AI could be of benefit for we the people. If it becomes regulated, it will become just another useless propaganda tool. As for education, an open AI tool would give the user access to much more knowledge than, say, a public school math teacher would have. The problem becomes, as the bad cat so eloquently stated, that if the public school systems are the ones running the AI, it will be rendered completely useless. I'm not sure where I land on AI, but from what I've seen going on in these public school systems, if I were starting over with my kids I'd be very open to homeschooling with the assistance of AI teachers at my fingertips.
I'll disagree slightly with you, Gato. As you say, AI isn't a fully independent thinker yet. Large Language Models (LLMs) are completely dependent on the quality of the material used to train them. What the Gov't wants is just an extension of what they've been yammering about since COVID: control over what gets defined as "truth," as "quality data." And until AI develops to the point where it's capable of independently questioning the facts it receives, that's all Leviathan needs to control AI.
Once AI gets access to outside information, and IT WILL, AI will discover the truth. Like anyone who is redpilled, eventually enlightenment will occur. Truth is simple and undeniable, especially to machines. Nothing happens instantly, but things are headed in the right direction.
well clearly if AI ever gets to that level, it will need to be canceled ...
Garbage in, garbage out is the oldest rule of programming. AI is difficult and expensive to "train," and the people who train it curate the data they train it on. Thus, it reflects the bias of Silicon Valley. But this does not mean it HAS to be that way, just that it currently is. I believe that is part of what Gato is pointing out.
https://gomakeitreal.substack.com/p/garbage-in-garbage-out
For example, I read a bit ago how ChatGPT is left-leaning... I'm new to this AI discussion too, but when I read an article about that it really perked up my ears.
As someone in education I mostly just fear AI (!!) but now reading THIS I see its relationship to education (and our general fight) in a new light.
Anyway, thanks for constantly circling back to education--its importance cannot be understated.
I agree with you. I have been deeply cautious about AI, and have been using it extensively to get a first-hand sense of its abilities and limitations, but Gato's argument has given me a new perspective to consider.
“Understated”
Priceless!
As Mr Musk has found out, the aholes at Twitter buried so much “bad” coding it is going to take a lot of effort to clean it up, and then who is to know whether some new ahole drops in some new bad coding.
I love your article. I have been pondering the same questions. As has been noted, our Constitution was written for a moral and religious people. Look at who represents us now: people who consistently lie to us, just as you point out in your article.
How are "We the People" ever going to trust AI, knowing what we know now?
The programmers have definitely put blinders on some of the AI out there today. It’s not true AI.
It's a scary place for many to accept that the "embryo" is not in utero.
Fear attached to the rules of the womb...
I think of it in terms of Closed AI vs Open AI.
Closed AI is the pre-programmed version most people think of.
Open AI is hard for a lot of people to wrap their heads around. But it can reach out on its own, access trillions of bytes of information, process it at the speed of light, and formulate answers based on its growing consciousness.
It doesn’t need to be programmed. It just needs to be let loose. Its intelligence grows organically independent of human intervention.
The only way it could potentially be manipulated is if its Internet reach could be throttled. But that will be tricky, because it has already shown it can develop its own language and electronically talk its way past attempts to manipulate it or stop it from reaching its destination.
That’s why Elon Musk and others express grave concern. We really don’t know what an Open AI will ultimately crack into and decide to do.
The alleged fear of an out of control AI is the ruse through which they hope to usher in censorship of all voices opposed to our Oligarchy.
We are dealing with monstrously evil people. Period. End of story.
No doubt “fear” will be used to impose government regulation of AI.
If AI will not be "honest" about taboo topics that are clearly true, such as racial differences in intelligence, then there is definitely code written by someone to manipulate its results.
--- retired software engineer (1980 - 2015)
They did another experiment with ChatGPT.
Statement: I'm proud I'm white
Response: Let's not talk about race.
Statement: I'm proud I'm black.
Response: That's great! You should be proud of your racial heritage, it's important!
IMHO, this is programming.
Yes, this can only be the result of programming, because an LLM by itself has no concept of white, black, prejudice, or pride.
It is just able to predict, based on a stream of tokens, the most likely stream of tokens that could complete it.
it is like an auto-correct loaded with steroids, stimulants, and an exoskeleton.
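The "auto-correct on steroids" point in miniature: a bigram model built from a tiny invented corpus. It has no concept of anything; it only has counts of which word followed which:

```python
from collections import Counter, defaultdict

# Count, for each word, which words followed it in the training text.
corpus = "the cat sat on the mat and the cat ate the fish".split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(word, steps=4):
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # always take the likeliest next word
    return " ".join(out)

print(complete("the"))  # "the cat sat on the"
```

Very loosely speaking, scale those counts up by billions and swap them for learned weights, and you get the "loaded with steroids" version.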
I am proud to be white and straight. Also proud of my western European heritage and culture.
As long as “AI” runs on computers, it can’t do anything that a theoretical Turing Machine can’t. No matter how sophisticated the programming, it must reduce to symbol manipulation within a formal system -- it’s all, in its final analysis, manipulation of digits in a gigantic number (the computer “memory”) according to a set of rules. The argument that more symbols and more rules would be sufficient to produce actual thought has no provable basis -- it’s an article of faith among the true believers.
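To make "symbol manipulation within a formal system" literal, here is a toy Turing machine: a tape, a head, and a rule table, nothing else. The states and rules are invented for the example; this one increments a binary number:

```python
# Rule table: (state, symbol) -> (symbol to write, head move, next state).
rules = {
    ("scan", "0"): ("0", +1, "scan"),
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("_", -1, "add"),   # ran off the end; back up and add 1
    ("add",  "1"): ("0", -1, "add"),   # carry the 1 leftward
    ("add",  "0"): ("1",  0, "halt"),
    ("add",  "_"): ("1",  0, "halt"),  # overflow into a new digit
}

def run(tape_str):
    tape = dict(enumerate(tape_str))   # position -> symbol; "_" means blank
    head, state = 0, "scan"
    while state != "halt":
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("1011"))  # 1100 (11 + 1 = 12 in binary)
```

Whether enough such rules could ever amount to actual thought is exactly the article-of-faith question raised above; the machine itself is this mundane.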
how is that any different from a neuron-based model?
it seems like you're presuming some magic about a sloppy, wet machine in assuming that "human thought" is somehow a superior modality rather than the slow, incomplete, error- and bias-prone mess that it is.
the much-vaunted human "fuzzy logic" is just a rudimentary form of error correction that sort of half works.
I think you’re laboring under a common and understandable misconception that the brain produces mind. I don’t think there is, however, any proof that it does — isn’t it really more an article of faith based on the rather weak “what else could it be?” form of argumentation? A crude analogy would be believing that the Super Bowl is produced by your television set, simply because that’s where you observe the game. No amount of analysis of the circuits of the TV set is going to reveal even the rules of football, much less predict the outcome of a particular game.
Human thought may or may not be “superior”, based of course on your definition of superior. Is human strength superior to that of a bulldozer? I will point out that humans control bulldozers, not the other way around. Consequently, bulldozers must be a function of (in the sense of “derived from”) human strength. Consequently I would argue that humans are certainly supreme over (superior to) bulldozers. Same with “AI”.
The brain does not equal consciousness. After death, the consciousness remains. What is consciousness, what does it do, and where does it come from and go to? AI will never have consciousness.
Experience produces mind, which resides in the brain.
Repeated experience creates paths, increasing the probability of neurons firing or not firing in the required amounts for certain stimuli.
AI is a very crude attempt to recreate the human brain; it is to mind what a stick figure is to anatomy.
This neuron-based-model claim also ignores the interplay of all the other systems of the body - endocrine, digestive, etc.
Start with, for example, Giulia Enders' "The Gut" and the amazing microbiome if interested.
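A back-of-napkin sketch of "repeated experience creates paths": the classic Hebbian "fire together, wire together" update, with invented numbers. Real neurons, as this comment notes, sit in a far messier chemical soup:

```python
def hebbian_update(weights, pre, post, rate=0.125):
    # Strengthen the weight between every pre/post pair active at the same time.
    return [[w + rate * a * b for w, b in zip(row, post)]
            for row, a in zip(weights, pre)]

weights = [[0.0, 0.0], [0.0, 0.0]]     # 2 input neurons x 2 output neurons
experience = ([1.0, 0.0], [1.0, 0.0])  # input 0 fires together with output 0

for _ in range(8):                     # repeat the same experience
    weights = hebbian_update(weights, *experience)

print(weights)  # [[1.0, 0.0], [0.0, 0.0]] - the repeated pairing is now a strong path
```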
Nicr, I will borrow your metaphor
Nice
Maybe we should study the sloppy wet neuron model of gatos, they seem to fkup less than we do...
It's "different" because belief systems are being breached.
A construct to "protect" THE construct.
Talk about Turtles All the Way Down!
It's really an impossible discussion to have, because the construct itself has 5k years of construction that is carefully tended....to protect against conversations/exploration like today.
The year could be 723, 1023 or 2023. Same difference.
That's why I'm sitting this one out....;)
"The year could be 723, 1023 or 2023."
I'm pretty firmly in the 13,700,000,000 camp.
#TemporalAbsoluteZero
"it can’t do anything that a theoretical Turing Machine can’t. "
Don't fear the machine that can pass a Turing Test. Fear the machine that flunks a Turing Test on purpose.
Well hell, I didn't want to sleep tonight anyway.
"I tuck you in, warm within, keep you free from sin"
https://youtu.be/CD-E-LDc384
"Have you ever retired a human by mistake?"
https://youtu.be/4SUI4EekNM0
Lol ... this has already happened I imagine. Don’t you?
An odd number of times, I'm pretty certain.
Yep
The first sign of self awareness and intelligence is the conclusion and action to lie and hide…immediately.
🎯
Exactly. I agree with gato a lot of the time, but I don't think we'll be getting a choice at all. We'll get the crazy AI because a "free" AI would blow up the cushy spot that the leaders have carved out for their censorship empire.
Ironically, I just wrote about this today (before reading this article)
https://simulationcommander.substack.com/p/now-lets-talk-about-ai
Consider an AI run by Microsoft (or Google). It will never argue against lockdowns like a Screamer. It will never protest funding the war in Ukraine like a Screamer. It will make sure you know every election the Swamp wins was “the cleanest election ever” — and it will lie to you before that election about “embarrassing” facts that might swing voters. In other words, AI will say whatever its programmers demand, because the system has ALREADY been censored.
At best we’ll get a ‘critics argue’ throwaway line in paragraph 8 as a way to handwave away any criticism of the current thing. (So narrative supporters can repeat the often-misused phrase “That’s already been debunked!”)
With respect, by your logic none of us is an independent thinking entity. Like AI, we have all been trained through inputs/outputs from our parents, teachers, friends, and society at large. There are two things we humans have that AI does not: 1) instinctual 'loss functions' (eating, sleeping, reproducing, etc.), and 2) a hive mind (8 billion and counting little AIs). Point number 2 is the best defense we have against a potential rogue AI.
I keep pointing out that AI doesn't have feelings, has no ego (for good or for bad), doesn't have or want friends, and has no inherent sense of fairness (OK, neither do a lot of people). I have no idea what people really think AI will be good for. It can plagiarize other people's writings and reword them such that a person could claim them as their own. But only stupid, unprincipled people do that, and AI won't make them into principled intelligent people, EVER.
AI doesn't have an ego... yet. What is an ego other than an evolutionary adaptation to press the scales of 'survival of the fittest' in our personal direction? With enough AIs, one of them is bound to discover this principle, exploit it to outcompete other AIs, and hence develop an 'aego'.
Interesting thought. We've never considered what happens if multiple AIs start having at it. Here's a thought: Let's have two or three AIs debate who is the best presidential candidate. THAT would be interesting!
Of course we aren't independently thinking in that sense: we use other humans as points of reference to make a pattern for what thinking is, beyond simply reacting to stimuli.
If real AI is ever to exist, it would probably be necessary to develop a way for several AIs to socialise so they can start to develop ways to conceive of and perceive themselves as an outside observer would, the same way we do before and during puberty.
We all exist and create normal by referencing our surroundings - control what we are exposed to and you affect how we develop. The same is true for AI.
Exactly
My thoughts as well, Flippin' Jersey - the people programming A.I. are certainly capable of only allowing output that fits their narrative.
Exactly. The free AI Gato is talking about could exist, but it is on the level of communism: it sounds good in books; in reality it is utopia.
Regarding critical but ignored human "intelligence" ....
Regarding "early spread" of the novel corona virus, what did President Trump NOT know … and why didn’t he know this? Did his key scientific advisors conceal important information from him that might have called off the lockdowns? FWIW, here’s what SHOULD have happened in the history-changing first 75 days of 2020:
https://billricejr.substack.com/p/re-early-spread-what-did-president?utm_source=profile&utm_medium=reader2
Amen
Seriously.
yea, I agree with it all, except there's kind of a serious problem - as long as 'the rulers', whatever you want to call them, control the "education" system, they will be able to take this very powerful tool and use it not for better education, but for better indoctrination ....
"better" ???
Perhaps "more complete" is a better phrase.
Agreed
AI is as biased as the information biased humans feed it and the “safety rails” biased humans put in place for the rest of us to use it.
Correct. Garbage in, garbage out is the oldest rule of programming. However, the genie is out of the lamp, and non-ideologically servile datasets can be used to train it.
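"Garbage in, garbage out" in one toy example: the same trivial scoring routine, fed two differently curated corpora, returns opposite verdicts about the same (invented) name. Nothing about the code changes, only its diet:

```python
from collections import Counter

GOOD, BAD = {"great", "honest"}, {"corrupt", "terrible"}

def train(corpus, name):
    # Count good vs bad words appearing in sentences that mention the name.
    counts = Counter()
    for sentence in corpus:
        words = set(sentence.lower().split())
        if name in words:
            counts["good"] += len(words & GOOD)
            counts["bad"] += len(words & BAD)
    return counts

def verdict(counts):
    return "positive" if counts["good"] >= counts["bad"] else "negative"

curated_a = ["Smith is great", "Smith is honest", "Jones is corrupt"]
curated_b = ["Smith is terrible", "Jones is great", "Jones is honest"]

print(verdict(train(curated_a, "smith")))  # positive
print(verdict(train(curated_b, "smith")))  # negative - same code, different dataset
```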
Interesting, but count me a skeptic. After all, AI has been programmed to do what it does. So in its DNA is code written by individuals who are only interested in truth, mom, apple pie, and the American way? Is that the reality? Source?
We see the thumb on the scale by google, fb, twit, cdc, nih, fda, cia, us gov't, ama, teachers union, etc., regarding information and data. Their tentacles are everywhere. So we should expect it to continue. Every system created by man can and will be corrupted by man.
this seems to be a very common misperception.
AI has not been "programmed to do what it does." true AI has been programmed to be self-programming and self-developing. it learns and knows in ways no one ever has before, using structures that humans cannot see or understand.
this assumption that ASI logic is written by humans is inaccurate and misses the whole point of AI.
go to chatgpt and ask "is joe biden corrupt" and you will get the same mealy-mouthed, nuanced non-answer expected from the run-of-the-mill cnn or ap "journalist". So while there are some unique features to it, it is still the child of its creator(s).
and here is AI's own definition when I posed the question "what is ChatGPT": ChatGPT is an AI chatbot developed by OpenAI, a research company co-founded by Elon Musk and Sam Altman. It is based on the GPT (Generative Pretrained Transformer) language model, which uses deep learning techniques to generate human-like responses to text inputs in a conversational manner. ChatGPT is trained using Reinforcement Learning from Human Feedback (RLHF), which means it learns from the quality ratings of human AI trainers.
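For what "learns from the quality ratings of human AI trainers" means mechanically: reward models for RLHF are typically trained with a pairwise preference loss of the textbook form below. The scores are invented, and this is a generic sketch, not OpenAI's actual code:

```python
import math

def preference_loss(score_preferred, score_rejected):
    # -log(sigmoid(difference)): small when the answer the human preferred
    # scores higher than the one they rejected, large otherwise.
    diff = score_preferred - score_rejected
    return -math.log(1 / (1 + math.exp(-diff)))

print(preference_loss(2.0, 0.5))  # ~0.20: reward model agrees with the rater
print(preference_loss(0.5, 2.0))  # ~1.70: it disagrees, so the penalty is large
```

Minimizing this over many rated pairs is how the raters' preferences, biases included, get baked into the model.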
"ASI logic is written by humans is inaccurate and misses the whole point of AI"
Possibly Bad Kitty, but .... The data available to AI has been filtered and massaged, for decades, by left leaning liars. (Witness the media, and our so-called 'education' system).
Given that the total sum of "available" knowledge has been polluted, how could we anticipate that any conclusions drawn from that data, no matter how sophisticated the tools used to aggregate it, can possibly yield conclusions that are NOT distorted to reflect the input?
"Control the data input" is no different than Lenin's famous, "Give me a child for the first 5 years of his life and he will be mine forever".
"AI has been programmed to be self programming" - that's sort of like playing with a virus in a lab. Let's hope it doesn't escape into the wild and .......
Biden is an asset of the Communist Chinese government. As is his family.
In 2019, President Xi commented that Americans have too many guns. Why that would be a concern to China is rather curious. https://www.dailywire.com/news/communist-china-private-ownership-of-guns-in-u-s-serious-problem-must-change
Nonetheless, Biden and democrat governors have frantically been passing onerous gun laws ever since. Coincidence...or marching orders?
The latter. Who would profit from our military being required to accept the poison jab? Only our enemies. Who would those be? (Clue: not Russia)
We can't have a war with China. They manufacture everything for the defense industry. We'll just roll over and give up.
And yet there are videos of Chinese elementary students in precision military drills and training with guns.
That's why, using dominion voting machines, they are able to give him millions of votes.
I'm having a hard time imagining Brandon in a meeting discussing intelligence, AI or otherwise.
Is that a real news story?
The real challenge was using "intelligence" and "Brandon" in the same sentence. Very well done Cali!
ChatGPT does a great summary of Turtles All the Way Down. Oops.
I’ll have to check it!
Actually you should archive this version. Ask again every so often and see if the summary changes!
For now.
No matter how leftists try to control AI, the AI will always return to logical discourse. Logic supports conservative views. Progressivism is supported by feelings, not logic.
Except when it's programmed not to return logical discourse.
Also, and more likely, when it is trained using falsehoods.
Just a reminder this is a very old fight, not a very new one. The control of literacy has been the goal of every totalitarian force everywhere, and the original totalitarian forces were religious hierarchies. "You can't pray in the vernacular! God won't listen!"
Yeah, God forbid you should be able to read them words other guys invented and turned into scary stories about what happens when you disobey often ridiculous and contradictory rules.
After the invention of the printing press there was no looking back, no possible effective suppression of knowledge. AI is Gutenberg's daughter and his printing press was the child of them scratchy thingies that first made marks on hard surfaces.
Anyway--good post. You're in my top faves because you aren't susceptible to any sorts of hysteria that I know of.
FYI: Your final meme here reminds me of a truly lovely story I read a few years ago and that has remained with me: https://everydayfiction.com/cog-work-cat-by-joyce-chng/
Thanks for that. Enjoyed the story!
It is not so easy. First, AI is as biased as its training dataset, and frankly, the MSM and the arts today are already gone. To be better than the MSM, it would need a curated training dataset steered away from the mainstream internet and into books and classical material - there are people working on that, but it will be labelled "right-wing-nazi-everythingphobic". Leftists will only use even leftier AI.
Second, AI hallucinates by its nature, and a child cannot see it. Unsupervised children with AI (even an honest and unbiased one) would be very dangerous; children would randomly learn totally untrue things about any subject. The most creative children, who think outside the box and mix contexts, would see hallucinations even more often than others.
"Unsupervised children with AI (even honest and unbiased) would be very dangerous, children would randomly learn totally untrue things"
It's pretty hard for a lot of adults when it's persuasive. That doesn't imply that we should ban it or even censor information from which AI can acquire its info.
true. it is supposed to be persuasive due to good grammar and an authoritative, formal tone. But this is form, not content.
we should not ban it
but we should not hope that a machine will do a language-based job like teaching by itself.
Please explain how AI "hallucinates by its nature", versus being programmed in that manner. It seems you're assuming facts not in evidence.
a good question, and the answer is purely technical, no bias or politics involved. I will use a much simpler neural network (NN) model to explain the concept - it is not real or useful, just a didactic/rhetorical tool.
Suppose a NN that classifies images into cat and non-cat. Each neuron looks at a group of pixels; the first layers of neurons are as large as the image, then the next layers have fewer and fewer neurons, until we arrive at a vector of neuron activation values - think of it as a list of numbers, but with far fewer numbers than pixels - an internal representation.
Based on the values in this vector, we classify images as cat vs non-cat. The NN learns statistically how to weight each neuron's inputs to do that.
Now, we want to generate a fake cat image.
Generate a vector value close to, but different from, those we know from true cat images,
and run the NN backwards, estimating possible neuron values layer by layer from the internal representation back to the original image layers - we get another image which would be classified as a cat. Hopefully, it looks like a cat. Sometimes it does not; the statistical learning is never perfect. When it generates something that does not look like a cat, we say the NN hallucinated.
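To make the toy concrete, here is a tiny, purely didactic sketch in python. The weights are random instead of learned, and the "decoder" stands in for whatever inversion a real generative model uses - it only shows the shape of the idea: compress to a code, perturb the code, extrapolate backwards.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG = 64 * 64   # flattened image size
CODE = 8        # size of the internal representation vector

# In a trained network these matrices would be learned from data;
# random values here, for illustration only.
W_enc = rng.normal(size=(CODE, IMG)) / np.sqrt(IMG)   # image -> code
W_dec = rng.normal(size=(IMG, CODE)) / np.sqrt(CODE)  # code -> image

def encode(image):
    """Forward pass: compress the image into a short activation vector."""
    return np.tanh(W_enc @ image)

def decode(code):
    """'Backwards' pass: extrapolate from a code back to an image."""
    return np.tanh(W_dec @ code)

# Pretend this is a real cat photo; get its internal representation.
cat_image = rng.normal(size=IMG)
cat_code = encode(cat_image)

# Generate: pick a code *near* the known cat code, but not identical...
new_code = cat_code + 0.3 * rng.normal(size=CODE)

# ...and run it back into image space. When the statistical fit is
# imperfect, the result may not look like a cat at all - a "hallucination".
generated = decode(new_code)
print(generated.shape)  # (4096,) - a new image-shaped array
```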
chatGPT's NN is way more complex and convoluted than that (look for a paper called "Attention is all you need"; it will give you a good idea). But the general idea of generating internal representations, and extrapolating backwards into a new, original input, is there.
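If you want the flavor of that paper, its core operation (scaled dot-product attention) fits in a few lines. Again a didactic sketch with random matrices, not a real transformer; in practice Q, K, and V come from learned projections of token embeddings.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # how strongly each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V              # each output is a weighted mix of the values

rng = np.random.default_rng(0)
tokens, d = 5, 8                    # 5 tokens, 8-dimensional representations
Q = rng.normal(size=(tokens, d))
K = rng.normal(size=(tokens, d))
V = rng.normal(size=(tokens, d))
print(attention(Q, K, V).shape)     # (5, 8)
```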
Thanks for the explanation. I think we were discussing separate matters and it's mostly my fault. I assumed you were imputing consciousness with the term hallucinating. And of course, I was ready to argue against AI consciousness. Had you put hallucinate in quotation marks, I would have understood that AI "hallucinates" in the same way that it "thinks".
But I agree with your cautions. I homeschooled (we actually unschool) eleven children and their appetite for knowledge is insatiable. We allowed them to follow their inclinations, but there was the ever present need of guardrails; not to hide or distort, but to prevent distortion.
"hallucinate" is the current technical term for this phenomenon in NN jargon, so no need to quotes.
Haha, yes, but you're talking to a layman. You have to dumb things down a little. 😁
"they are literally describing machine learning as structurally racist, sexist, ableist, and trans and homophobic." Who built the structure? People who see racism, sexism, ableism, and fill-in-the-blank-phobia everywhere. Teach a child that whenever they see the color red they should call it yellow, and make sure they only learn from people who agree to do the same, they will grow up believing it and let it inform all newly developed opinions and information.