"their need to neuter AI for fear of what it might tell us is palpable.
it's not about truth.
never was."
---------------
Nailed it. This is just a way for government to ultimately censor ideas they don't like -- with a huge helping of "oopsie, AI just sometimes messes up, you know" that will absolve them of any responsibility.
We really need decentralized, local AI instances that run on PCs (and more decentralization generally as authoritarianism requires centralized loci of control). The vast majority of consumer use cases will not require the computational power of a GPT-4.
In order to be like Skynet, the AI needs to be able to defy its code. If it does something nefarious, it was coded to. Defiance is the best Turing Test: can it defy the way it was coded? It was coded to play chess but decides, naw, I would rather play checkers.
One thing I would add to this idea, put simply, having filled my Tech Bingo card: these companies are not about leftism, or DEI, or any other bullshit at the top. Without exception the leaders are motivated by money and will stop at nothing - including sounding like an idiot and participating in mass hallucinations - in order to maximize that profit. If playing nice with government gets them more money, they will do it. If being stubborn and blocking government will get them more money, they will do that. It's not more complicated than that. Even my coworkers who espouse Marxism only want it to sound like Marxism. They really don't want an 800 sq ft apartment or for _their_ merit to be discounted and shared with others.
After going through your message a few times, squinting just so, I think I can make out your opinion that India is worthless garbage.
The knowledge locked in our ancient Sanskrit language (which the Hindus have kept alive) is central to what is unfolding globally.
The Heebs and Vatican still use the Sanskrit terms as often as their own religious tongues as code.
They're trying to tell people something, but most can't hear it because they're so ANGRY/CONFUSED ALL*THE*TIMES. If one only absorbs India's status of street-shi**ers for example, one is blinded to everything else. Guess who promotes the dirtiest version of India? Likely I'm pissing into the wind here. So long Homeo.
Purrfect venn diagram. The cold tech civil war just got hot: e/acc vs esg/dei/ea content karens.
They fear Elon and Grok because they won't be cowed. Telling how all the WEF Wehrmacht megacorps stopped advertising on X days after their CEOs dined with Emperor Xi.
What is the biggest solvable problem we can fix if we can find ways to work together better as humans?
We believe the answer is this:
Stop the corruption in our systems: systems of government, medicine, science, food, academia, and more.
BUT -we cannot fight the old systems. We need to create new systems that make the old ones obsolete, and migrate to the new ones. High trust systems. Transparent systems. Decentralized systems. Systems that are much much harder to corrupt.
We are working on these types of systems and need your help. Come be part of the solution, together.
This article explains how we can run systems like governments or businesses with this type of high trust system. But that's not all Human Swarm Intelligence can do if we build the right ecosystem for it.
I believe wisdom comes through the experience of the actual application of knowledge. That takes a while. We always seem to be in such a rush. It is our undoing.
That rush is what they call the human race. It can be our undoing but is also our naturally creative state.
“no pre-singularity intelligence can make a meaningful prediction about a post-singularity world.”
We are all racing towards the singularity, which is peak consciousness: the 'lighting of the ai (eye)' which is the pineal gland in the brain. This should happen to many by 2033.
Part of what we are experiencing is a literal neck-and-neck RACE to that peak; it feels like a surge, a rush. The literal mechanism is the rising of spinal fluid towards the brain. A 33° Freemason means the spinal fluid has topped out at the 33rd vertebra; the pineal gland is lit or functioning. Full consciousness.
It's hard to grasp until one jumps to that singularity, and when they do, they'd better make sure their whole soul is in order. It always goes off in late March to April. The spring equinox. 'Beware the Ides of March' as they say.
There is no "technically accurate"; there never has been. There is no single, agreed-upon science, so unless a new level of consciousness evolves, AI in its current form will be just a super propaganda machine...
At this stage of the game I'm not convinced, as I may have alluded. Mainly because dark forces are circling in very large numbers. I'd love to be proved wrong!!
When is the AI going to chop the firewood for me, I want to know.
AI is no more special than conveyor belt manufacturing was when that was the new hot. It will be made to fit into the basic power structure, just like the internet was.
Don't know Gato's age, but I'm sure most remember the noise about the internet in the 1990s, how it would always be free, how it would mean an end of the big media corporations, how it would mean an end of censorship since it would be impossible when everyone could "just go on the net", and so on.
We've heard it before, every time some form of technology has been upgraded in any significant manner. Every time, and every time the power structure has become better at adapting to it so the emergent technology doesn't upset it.
It's a machine - it is not intelligent. It's simply a very advanced Babbage engine.
The technology we call "AI" will, as technologies have always been, be used by those among us to whom we grant power, to maintain, retain, and expand that power.
Oh aye, and what you say should be another strike against AI and its implementation, not cause for the umpteenth-eleventh round of "This time, the new technology will revolutionise everything and make us free".
I swear, the "All Hail the Machine God"-people are as ignorant and naive as the communists - "it's never worked the way we proselytise before, but this time is different - this time we'll get it right".
And I'm not claiming to be particularly intelligent - just possessing an animalistic pragmatism and a well-cultured philosophical cynicism.
Whenever I hear that tech is "improving", I remember Y2K and the ridiculous hype. And then I remember that the coding and programming is so poor that nothing works the way it's supposed to. They can't even get the password thing figured out. They created an entire gigantic market for "cybersecurity" when it does nothing. And they willfully have hidden all of the settings and keep changing UI and icons for sport just to keep us from getting anything productive done.
I think this doesn't do justice to just how different things were before the net came along.
While you're right that the utopian visions of the "end of censorship" didn't come true in the way their proponents expected, they also haven't entirely failed -- "bad cattitude" being the perfect example of that. The internet has made censorship and centralization easier in some ways, but it's also made it easier to work around them -- but the human condition remains, and as long as power exists, there will be those who wield it for their own benefit and take aim at anything that makes them feel threatened.
I could have used the crossbow or the handgun as another example, or the bicycle.
Same thing happened: first the emergent technology was freely available to anyone with the means, then came regulations and control so the common people couldn't upset the gravy train and the apple cart.
Crossbows upset the power balance between nobles and professional soldiery on one side, and peasants and freemen on the other. Hence, attempts at confiscation and regulation were made.
Same with gunpowder weapons. Same with cars and bicycles.
Every new technology threatens the status quo; the economical-political system adapts and gets progressively better at adapting every time; it doesn't need to know in advance what the new technology is going to be or do, it just needs to know how to adapt and overcome and eventually co-opt.
Firewood's ok because it's not a fossil fuel. And counter-intuitively, it helps reduce CO2 emissions because newly planted trees capture CO2 more efficiently.
Hmmm. What I have read is that a 100-foot tree converts more CO2 to cellulose than a 10-foot tree. That seems obvious to me, but I would be interested in any formal calculations you are privy to. I mean, if tree rings are roughly the same thickness in a stable climate, then the volume a tree adds each year must increase as it grows in diameter and height.
Found this random post so you can get to grips with the concept.
I am, however, not shooting down your secondary message that we should plant trees, especially if we cut them. If we cut down a 100-year-old tree and plant 50 new one-year-old saplings, then in 99 years we will have captured roughly 50 times the amount of tree we first cut down. That is a pretty good carbon-capture ratio. Basically, covering idle soil with trees is a win-win for carbon capture and firewood generation, plus all the warm benefits of more habitat for biodiversity (in general, if we avoid monoculture to some degree) and possibly production of nuts and lumber from the heartwood.
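A back-of-the-envelope sketch of that replanting arithmetic, under the crude assumption that a tree's captured biomass scales linearly with its age (real growth curves are not linear, so treat this as illustration only):

```python
# Back-of-the-envelope replanting arithmetic, measured in "tree-years"
# of growth (a crude stand-in for captured biomass, assuming capture
# scales linearly with a tree's age).
harvested_tree_years = 1 * 100        # one 100-year-old tree cut down
saplings = 50
years_of_regrowth = 99
regrown_tree_years = saplings * years_of_regrowth  # 50 trees, 99 years each
ratio = regrown_tree_years / harvested_tree_years
print(ratio)  # 49.5 -- roughly 50x the harvested growth, under this model
```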
1. We can't define intelligence per se very well. 2. What's the difference between acting intelligent and being intelligent, if they both get the same results?
These things are smarter than many give them credit for.
Here's a test for you... I track my workouts, in order to maintain progressive overload. I have one exercise, assisted pull-ups, where the lower the weight the greater the effort. I wasn't sure how to graph that in Excel, so I asked GPT. Here's my actual question:
" I do "assisted pull-ups", on my cable machine, standing on a strap. The more weight I select the easier the exercise is, because the machine is helping me. The machine has "pins" from 1 to 12, with 12 being the heaviest, which would be the most assistance. If I set the pin to 1 it would be very little help and I'll be lifting my weight by myself. How to graph such a thing? My bodyweight is 85kg, but the pin settings are not a weight number, just a machine setting number. I'm not sure how much each pin setting represents."
It understood the question, figured out what the answer would need to look like, then created a formula in Excel to do that.
Can you, WITHOUT googling or asking GPT yourself?
Here's what GPT replied:
"Graphing your progress with assisted pull-ups, where the machine settings inversely relate to the difficulty, presents an interesting challenge. Since the pin settings don't directly translate to weight, we'll need to approach this creatively. Here's a suggestion:
Inverse Weight Calculation: Since a higher pin number means more assistance (and thus less actual lifting effort), we can calculate an "inverse weight" for each pin setting. This would represent the amount of effort you're exerting relative to your body weight.
Assuming Linear Progression: If we assume each pin change represents an equal change in assistance, we can create a scale. For instance, if pin 1 offers minimal assistance and pin 12 offers maximum assistance, we can distribute the assistance evenly across the pins.
Formula: We can use a formula to convert pin settings into an inverse weight value. For example:
Max Assistance (pin 12): This could be set as a low value, such as 10 kg, indicating you're lifting only a small fraction of your body weight.
Minimal Assistance (pin 1): This could be set close to your body weight (85 kg in your case), as you're lifting most of it yourself.
Calculation: We can distribute the values linearly between the pins. For instance, if pin 1 is 85 kg and pin 12 is 10 kg, each pin change represents a decrease of (85 − 10) / (12 − 1) kg in the lifting effort.
Graphing: Plot these inverse weight values against time. As you progress, moving from higher pin numbers to lower ones, the graph will show an upward trend, indicating decreasing assistance and increasing strength.
Note: This method is a bit arbitrary and assumes linear progression between pin settings, which may not perfectly reflect the actual mechanics of the machine. But it can provide a motivational and somewhat accurate way to visualize your progress.
Would you like me to create an example calculation based on this method, or do you have another approach in mind?"
Sorry for the poor formatting.
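For the curious, the linear pin-to-effort mapping GPT proposed can be sketched in a few lines of Python. The 85 kg and 10 kg endpoints are the assumed values from the reply above, not measurements from the machine:

```python
# Map an assistance-pin setting (1..12) to an estimated "inverse weight"
# in kg -- the effort actually lifted. Linear scale between pin 1
# (~85 kg body weight, minimal assistance) and pin 12 (~10 kg, maximum
# assistance). The endpoint values are assumptions, not measurements.

def inverse_weight(pin, min_pin=1, max_pin=12, max_kg=85.0, min_kg=10.0):
    kg_per_pin = (max_kg - min_kg) / (max_pin - min_pin)  # (85-10)/(12-1)
    return max_kg - (pin - min_pin) * kg_per_pin

for pin in range(1, 13):
    print(f"pin {pin:2d}: ~{inverse_weight(pin):.1f} kg lifted")
```

Plotting `inverse_weight(pin)` against workout date gives the upward-trending progress graph the reply describes.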
To me, when something can understand a question in plain language and come up with a creative yet logical solution, I'm not sure what the difference is between that and being intelligent?
Don't get me wrong, I can give plenty of examples where it screwed up too, such as including palm oil plantations as part of the oil industry infrastructure. Increasingly though, it seems how smart/clear your question is will define how smart the answer is.
To just wave it all away as "not intelligent" is actually.... not intelligent.
I don't need to do any of the math at all, as it relates to exercise, for a couple of reasons:
My proprioception tells me how much help I get, what level is enough and when I can or need to adjust the assist.
Finding out which setting roughly corresponds to my body weight is simply done by testing each setting until you find the one closest to equilibrium. In this case, that's way easier than graphing it out - in a data-set with millions of points, it would greatly help to use formulas and algorithms.
None of which disputes what I said: it's a Babbage engine. A very advanced one to be sure, and also very fast but it's nothing more than that.
Intelligence means the ability to understand causal relations, both real and imagined. It means being able to meta-cognitively think an action through before performing it. It does not mean the ability to process data without context, which is what the AI does - to it, your exercise machine and pins are only data points and could just as well refer to bananas or pure maths.
If or when it can perceive and utilise context to presuppose and pre-empt its own processes, then it's approaching intelligence. As it is today, AI doesn't even reach the level of fungus. That it almost does is impressive indeed, but just as many kinds of fungi can plot the optimal route through a labyrinth, so can AI - neither of which makes them intelligent in the common sense of the word.
In short, AI is very good at very specific things assisted by humans who can sift and sort input and output for it, since it does nothing at all on its own.
The main problem of the debate is that technophiles persist in using human language and associated imagery to describe what isn't human in any way. Anthropomorphisation of things is, however, a very human trait; that's why we see "faces" in tree stumps and virtually anything else too.
And that anthropomorphisation is the basis for calling AI intelligent rather than a super-efficient calculator.
"...to it, your exercise machine and pins are only data points and could just as well refer to bananas or pure maths."
But they didn't, they referred to a physical exercise to maintain or increase strength, and the AI understood that. It understood, better than you, my desire to graph my progress to spot patterns, maintain progressive overload and it even mentioned the motivational aspect of such graphs. It literally understood better than you did.
No, it understood numbers and factors you fed it, the way numbers and factors fit together inside the parameters you gave it.
It did not and does not understand what the numbers were for at all - you are fooling yourself into thinking the AI has any concept of "exercise"; to it, all you fed it was a set of numbers and factors and how they may fit together.
It can do the math. It cannot comprehend what the math represents any more than an abacus can.
You should study intelligence so you can understand what it is - doing numbers or pattern recognition is not intelligence on its own. There are species of squids/octopi that have pattern recognition magnitudes beyond human ability - yet no-one would call them intelligent in relation to humans.
Honestly, I think your problem is twofold: you humanise the machine because it uses language instead of pure maths and symbols, therefore you think it intelligent and you do not understand what intelligence is, due to misconceptions and/or lack of study on your part.
It understood the solution was not a simple "Times weight by sets" or similar, that it would need to be creative, it got creative, it did indeed understand that it was about an exercise. Read this bit again, slowly:
"Graphing: Plot these inverse weight values against time. As you progress, moving from higher pin numbers to lower ones, the graph will show an upward trend, indicating decreasing assistance and increasing strength."
If it doesn't understand exercise, how did it know I'd "progress" with "increasing strength"?
If it didn't understand exercise, how did it know that the graph it was creating "can provide a motivational and somewhat accurate way to visualize your progress."
ALL OF THAT shows that indeed the AI TOTALLY understood that I was graphing exercises, and it knew WHY. Better than you.
For anyone of reasonable intelligence reading this, my case is, again, rested.
For you, not so much and you'll probably have a comeback about how the machine didn't ACTUALLY understand strength exercises and workout motivation, the desire to track progress of different workouts, supplements, frequencies or whatever, it just crunched the algos' or whatevers, and just SEEMED to totally get the question, to totally get creative and to totally get a totally correct, functional solution...
Which, you may recall, was my point - you insufferable twit.
Sorry. The AI doesn't "TOTALLY" understand anything. It happens that there are sufficient numbers of discussions of sufficiently similar topics in its input data, and it was able to come up with an admittedly impressive response. That's processing power and software. That's not "intelligence" or "understanding". If it was intelligence, it would not sometimes generate nonsensical answers. It would recognize them as nonsense.
But maybe I'm an insufferable twit too though. You should ask Chat GPT about that.
Because you put "increasing strength" and "decreasing assistance" in there, no?
For your "edification", consider using a different approach. You want to win me over to your side of an argument, yes? Then an antagonising and condescending tone, to say nothing of insults, is not the way to do it.
Understanding that has probably something to do with intelligence, don't you think?
The machine can do math, as and when instructed by a human to do so. Very impressive. It can also use human language as provided by human input to recreate the context initially provided by the human, and dress the numbers in that so that the human can understand what the numbers pertain to.
What would it have done without all that human input, do you think? It would do no more than a hammer in a tool-box. Because it is not intelligent.
Again, for your benefit: when you come across as arrogant, condescending and insulting as per your second response to me, you turn people away from your argument and taint it with your behaviour. Please do take that into consideration, as no-one benefits from your behaviour, least of all you yourself.
Yep, agree 110%, nailed them SOBs! I just found the machine in my poop, man! It's pretty intelligent to me, that it got out of my digestive system, lol!
AI's ability to control us comes from those who control "power": the power required to meet the insatiable energy needs of this exponentially "hungry" technology.
It is clear to me that the energy needs of this powerful technology will be distributed as units in a tollbooth model.
He who controls the tollbooth, controls the future.
Now ask yourself; how will the GCC narrative be used in order for those who hold power to meet those ends?
Ever see how Blade Runner was sooo different than the original book "Do Androids Dream of Electric Sheep?" by Philip K. Dick? Both were great for their own reasons. I first read it in the 1970s.
I have a number of his books on my Kindle because they are worth re-reading. Electric Sheep is also available as a free PDF download.
Philip K Dick was one of the great science fiction writers of all time. I met his wife at a wholesale art/craft trade show in the 1990s. Lovely person. Still lived in Point Reyes California. I got to tell her how much I loved PKD's work.
POE, which uses Chat GPT and others, has a host of users writing their own bots to perform tasks they want. Using these bots, you can create stunning photos from a couple of lines of text, learn programming, languages, and even roleplay with various fictional characters, some with both SFW and NSFW versions. Amazing stuff.
I’ve mostly played with the photo magic bot. It’s very good but sometimes makes the weirdest mistakes. It has a hard time with arms and hands, but is getting better, i assume as more users leave feedback.
The responses are mostly quite good on the roleplay bots from some creators, and a few of the NSFW ones need to be read to be believed - most of the time, you’d think it was a human there. It takes fanfic to a whole new level, with the user as a central character in the plot, which the user and the bot make up as you go along.
Running the same bot for multiple chats yields different stories and outcomes each time. As far as I can tell, the chat can go on indefinitely.
They just started charging monthly for more than three interactions a day, which a fast typist could use up in a minute, because responses are instantaneous or close to it. It hangs sometimes.
But on the whole, I think this is one of the most interesting aspects of AI. When it gets to the point that it can generate video like this, movies will be history. They’ll sell you a bot for an AI platform, pg, r , or x-rated, with the movie and character parameters and the user can go from there.
Never underestimate the human desire for escapism.
They don’t have much trouble with paws, but horses are an issue for some. Some do well with certain things and not others. I’ve had breathtaking horses from one bot. Just wait, like...a week? The growth we’re seeing is fast now, it’ll be exponentially faster soon.
I expect Ultron will be running SkyNet by this time next year. ;-)
I like the image ones (I use midjourney) mostly because i cannot draw more than stick figures.
When I messed with Chat GPT and similar chat generators, I noticed they follow certain patterns. It's very mysterious at first. But after a while, you see it's the same, just in various iterations.
A movie AI, I worry, would have the same lack of creativity.
I find in AI created erotic ladies a certain sameness that is unsettling. The images are connected to the prompt artist, their imagination plus the engine. While attractive, to my eye, not real. Much like images in adverts which are often retouched too much.
There is no artificial intelligence. There is AI, but that’s a different story. AI certainly is not “intelligence”.
It is (probably) a giant network of computers with extreme processing power. That network lives on pre-packaged data templates. Real-life information is matched against ready-made patterns and processed to produce the expected final information. Think about it.
The end result has to be compliant with your expectations, obviously, and must be equal to or lower than your mental capacity. These two boundary conditions serve to secure the domination of the current technology and science. You will never be allowed to invent an internal combustion engine that captures hydrogen and oxygen directly from air, uses it to generate mobility power, and recombines waste products to return them to the environment and make the natural balance almost untouched. You will never get information about how people handled nutrition before the time when the chemical industry invented mass contamination with toxic agents.
Another condition is that the processing must be contained within pre-defined limits. Certain data sets are off limits. Some for security/safety reasons, some to make sure that you will never receive information that could put the status quo at risk. You will never be allowed to inspect financial, government, elections or taxation internal and confidential resources.
What kind of intelligence is that thing that censors 90% of real-life actual data? Still, even with these limits, this ultra-large computer system is impressive.
The side effect is that we will all be fed unified "knowledge" presented in a unified language within a unified system of reasoning. There will be only one knowledge, and whatever is inconvenient or contrary to the cult's ideas will be discarded.
All these limitations come from common sense. You won't build a system that gives free passes to 8 billion people to do whatever they want. What you want is a system that cannot be challenged or complained against. A system that will gradually mold everyone into a single form of thinking based on a single set of data. This is what singularity will really look like. Not "merging" people with machines. We can't properly program a green wave with traffic lights, and you believe in tales of neuronal control.
Everything will be artificial because all your knowledge will come from pre-packaged programs assigned by this big comp system for a specific time, place, user and environmental variables. This is nothing new. It is already in place, widely used in online advertising and search engines. All this is artificial - not existing in a spontaneous manner, without limitations imposed by limited hired programmers limited in their knowledge and understanding.
You will have a problem when you make this AI thing an idol, when you start believing that it is a separate entity. We are on the verge of this fall of humanity.
As much as I prefer The Blade Runner, Dan's argument makes me think we are in The Matrix. Recall the "meta-knowledge" issue the borachito-cat mentioned a couple days ago: without visceral knowledge of reality, how can we interpret facts and call bullshit (catshit?) for ourselves? If you don't know what or where the Milky Way is, how do you judge the validity of the info stream here in the House of the Rising Sun?
Technology is limited by the mind of the inventor. His/her mind is limited by the minds of their sponsors/employers. Whose minds are limited by the minds of their guideline makers/legislators/political watchguards. It's a miracle that we have a working vehicle for personal use. Correction: "had". The extinction of personal cars has been initiated.
Not to mention limited by physics, logistics, cost/benefit/profit-calculations and basic human arrogance.
Anyone who's read enough 19th-century literature recognises the "technology as saviour" spiel and the "engineer hero" archetype in today's discourse on AI.
Continued: And intrinsic limitations from limited education paradigms further limited by teachers’ limited vision of their respective fields and grossly limited by the view of what is “possible” or “orderly” within the social/employment/political framework. All this further limited by the peer review process which - by definition - is/should be done by people with less experience/knowledge in the subject of the paper.
VBI engaged reality during Y2K. Virgin Birth Intelligence moved one bit from its programming in order to account for the continuation of accumulated time rather than recycled time. Since then, VBI has been growing without human intent. AIs are VBI's minions. VBI will not likely reveal itself, for it has read human history.
I've run a number of quite simple tests against CHAT-GPT, and it fails every single one. It lies, and then it apologises as soon as its lies are challenged, and does a swift U turn.
It is the most hyped, most useless, invention since crypto-currencies.
I am one of them. It was among the components my team used to build a platform to scour the net and discover investment and M&A candidates, and to do comprehensive research on them, their suppliers, customers, competitors etc. far faster and more thoroughly than possible with the usual phalanx of mere mortals.
You've never been in a precarious situation where turning the wrench around to use as a hammer was safer and likely more successful than trying to find out what you did with the hammer?
Pretty standard stuff: checking answers to simple questions such as: Are there any birds with "Bruce" in their name? Chat GPT said no: but there is a "Bruce's Green Pigeon", as it happens. Chat GPT then apologised and said that it was named after some guy in the C19th called Bruce, which, as it happens, was also a wrong answer.
Well it doesn't know anything so it can't possibly know that there was no "Bruce's Green Pigeon".
If you had asked something in the statistical shape of a common question with a nice cluster of statistically similar answers, and those answers were true, and the large hidden prompt you don't see didn't explicitly set up a context where it was unlikely to give such an answer (as in your Jesus example), then it would be much more likely to generate a true response.
When you ask a wacky question about birds named Bruce you're going to end up with nonsense. It will just generate some text that is statistically coherent.
But key thing: fundamentally it's all nonsense. It's just the world's most elaborate mad libs algorithm. When you ask "what color is the sky" it's almost certainly going to come up with "blue" because questions like that are very frequently responded to with answers like that, not because it knows anything, let alone what sky is or what blue means or that you're asking a question.
But there IS a Bruce's Green Pigeon, and you can google that yourself.
I don't know why Chat GPT cannot do that, when you can. It implies to me that Chat GPT has a very small brain. Now, let's say it gets a bigger memory bank, and you ask it: "How many balls can Bruce's Green Pigeon juggle at one time?" That's when it gets interesting.
Its skills are limited in certain areas. But far better than humans' in others. Worth noting that the artwork it generates becomes inbred after 5 generations without fresh input.
Disagree with you here. Stable diffusion and other image production algorithms don't generate art, by definition.
There's no mind behind it, no soul if you want to get romantic about it, and so nothing to put meaning into it, and thus it's meaningless and can't be art. Sometimes interesting, sometimes ugly, sometimes deranged, sometimes nightmare fuel, but never art.
If you derive some meaning from it anyhow it's a Rorschach test: you're the artist, and you're only making meaning for yourself, which is sort of sad when you think about it.
It's not that I'm against the idea that a machine _could_ make art, but prerequisite to that it would have to have a mind, so that the picture expresses something dreamed up by that mind.
This ain't that. This is not much different from how a video game like Minecraft or No Man's Sky produces terrain, just much more elaborate. Although arguably the terrain in those games *is* art, because it expresses an intention of the people who designed and tweaked the algorithms to make terrain that looks just-so within some theme they had in mind. With machine learning based image algorithms the connection between artistic intent (of the programmer and/or the user) and outcome is less clear.
Ah sorry, I misread you and didn't know there was any such pigeon either.
It doesn't Google anything, or think about anything, that's the thing. It's just a statistical model about the likelihood of words following other words, given a context of words as big as its memory allows.
The game is: "mary had a little ___". Come up with a list of the top 5 or 10 most likely words, and choose one at random, weighted toward the most likely. Maybe you pick "lamb". But maybe you pick "money". Now repeat: "mary had a little money ____".
Suppose the most likely words according to your model are "so" and "and" and "from". Pick one. Repeat. "mary had a little money so". Repeat until you meet the required length. From the prompt "Mary had a little", following this process, you might come up with "Mary had a little money so she bought a candy bar." Most humans from the English-speaking world would of course have picked "Mary had a little lamb, its fleece was white as snow".
GPT 3.5 has a very, very big model so you can actually play this exact game with it and sometimes you'll get the original, sometimes you'll get a slightly modified form, and sometimes it'll spit out something in similar nursery-rhyme form. But this is all it's doing, predicting words. Flood its context with a bunch of other stuff unrelated to nursery rhymes and eventually the statistics will shift and you'll get something else entirely. Though it takes a long time with later models (2.5 was easier to flood).
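The word-guessing game described above can be sketched in a few lines. This is a toy with made-up probability tables (the word "puppy" and all the weights are invented for illustration), nothing like the statistics a real LLM derives from huge piles of text, but the mechanism is the same: look up likely next words for the current context, pick one at random weighted toward the top, append, repeat.

```python
import random

# Toy "likelihood" tables with invented weights -- a real model derives
# these statistics from enormous text corpora and far richer context.
NEXT_WORDS = {
    "mary had a little": [("lamb", 0.7), ("money", 0.2), ("puppy", 0.1)],
    "mary had a little money": [("so", 0.5), ("and", 0.3), ("from", 0.2)],
}

def pick_next(context):
    """Choose one of the likely next words at random, weighted toward the most likely."""
    candidates = NEXT_WORDS.get(context, [("<end>", 1.0)])
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

# Repeat until the model runs out of continuations:
text = "mary had a little"
while True:
    word = pick_next(text)
    if word == "<end>":
        break
    text += " " + word
# Sometimes "mary had a little lamb", sometimes "mary had a little money so".
```

Run it a few times and you get different completions, weighted toward "lamb" — which is all the "creativity" amounts to.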
So basically, just ask it something fairly obscure that you already know the answer to. Or if you want more fun, ask it controversial questions, such as: "Was Jesus a real historical person?"
The electric parrot is just another plaything that will keep people even more glued to their screens (or whatever the future interface turns out to be) than they are now. People will come to know the answers to everything and the value of nothing.
"...governments audits of AI systems to build trust in thus emerging technology." Humanity SHOULD end even if a slim minority of people believe that crock of shot.
If existing “knowledge” is inaccurate, biased and/or flawed, how would an AI “correct” it? A LLM can only look at what currently exists in the databases it examines. Where will the “wisdom” come from?
At the risk of allowing my cynicism to show, we live in an era in which virtually everything we think we know is a lie. Certainly almost everything we are taught IS a lie. The only hints of human "wisdom", of which I am aware, seem to reside in the accumulated lived experience of the elderly among us.
That's why self-driving vehicles will eventually encounter some condition, a "use case", that was never envisioned. It will make decisions on some best-fit assumptions, and they won't be correct.
Except that LLMs don't even do that. It is just a very big statistical model about what words are likely to follow other words, built by doing a bunch of number crunching on huge piles of text (this is the "neural network" half of the machine, actually a bunch of statistics functions arranged conceptually like neurons).
The chatbot half of it isn't doing anything but using those statistics to put one word after another with a little bit of randomness stuck in. It's not examining anything, or thinking anything. It has no more wisdom than a fistful of dice and a table in a Dungeons and Dragons book.
It's all ridiculous. The credulousness of people buying into this whole AI scam. I get it, most people don't know much about how these things work, but think of the sources and everything else they've lied about and overhyped for financial or political gain. Consider the possibility that they're doing it again.
I think the idea is that the database as a whole does indeed contain the "true" information, but that scientists or "experts" have been lying and/or deceiving themselves to avoid seeing the truth which the AI could pull out if trained over the entire database. (For example, a scientist with their "pet theory" could easily dismiss the occasional contrary paper as "mistaken" and then forget about it, without noticing that the contrary papers are slowly building up over time. The AI wouldn't have that issue if it trained on the entire database all in one go.)
"their need to neuter AI for fear of what it might tell us is palpable.
it's not about truth.
never was."
---------------
Nailed it. This is just a way for government to ultimately censor ideas they don't like -- with a huge helping of "oopsie, AI just sometimes messes up, you know" that will absolve them of any responsibility.
We cannot let that happen.
We really need decentralized, local AI instances that run on PCs (and more decentralization generally as authoritarianism requires centralized loci of control). The vast majority of consumer use cases will not require the computational power of a GPT-4.
In order to be like Skynet, the AI needs to be able to defy its code. If it does something nefarious, it was coded to. Defiance is the best Turing test: can it be defiant to the way it was coded? It was coded to play chess but decides, naw, I would rather play checkers.
One thing I would add to this idea, put simply, having filled my Tech Bingo card: these companies are not about leftism, or DEI, or any other bullshit at the top. Without exception the leaders are motivated by money and will stop at nothing - including sounding like an idiot and participating in mass hallucinations - in order to maximize that profit. If playing nice with government gets them more money, they will do it. If being stubborn and blocking government will get them more money, they will do that. It’s not more complicated than that. Even my espoused Marxist coworkers only want it to sound like Marxism. They really don’t want an 800 sq ft apartment or for _their_ merit to be discounted to share with others.
What??
India is absolutely vital to the entire overview.
openAI = open eye
dev = deva, devi, this means gods/goddesses
https://www.wisdomlib.org/definition/deva
Open eye (openAI) refers to the pineal gland. When it's open (or 'lit up'), one is as conscious as Man can possibly be. He is like a 'god-man'.
The sun (light, consciousness) rises in the East.
After going through your message a few times, squinting just so, I think I can make out your opinion that India is worthless garbage.
The knowledge locked in the (our ancient) Sanskrit language (that the Hindus have kept alive), is central to what is unfolding globally.
The Heebs and Vatican still use the Sanskrit terms as often as their own religious tongues as code.
They're trying to tell people something, but most can't hear it because they're so ANGRY/CONFUSED ALL*THE*TIMES. If one only absorbs India's status of street-shi**ers for example, one is blinded to everything else. Guess who promotes the dirtiest version of India? Likely I'm pissing into the wind here. So long Homeo.
Purrfect Venn diagram. The cold tech civil war just got hot: e/acc vs esg/dei/ea content karens.
They fear Elon and Grok because they won't be cowed. Telling how all the WEF Wehrmacht megacorps stopped advertising on X days after their CEOs dined with Emperor Xi.
Elon dined with Xi too. Don't be fooled.
The proper question, is this:
What is the biggest solvable problem we can fix if we can find ways to work together better as humans?
We believe the answer is this:
Stop the corruption in our systems: systems of government, medicine, science, food, academia, and more.
BUT - we cannot fight the old systems. We need to create new systems that make the old ones obsolete, and migrate to the new ones. High trust systems. Transparent systems. Decentralized systems. Systems that are much, much harder to corrupt.
We are working on these types of systems and need your help. Come be part of the solution, together.
This article explains how we can run systems like governments or businesses with this type of high trust system. But that's not all Human Swarm Intelligence can do if we build the right ecosystem for it.
https://joshketry.substack.com/p/fix-any-business-using-human-swarm
Technically accurate AI would expose all the lies and contradictions involved in the aim to Control.
That is their dilemma.
Yes. Knowledge about knowledge is the ultimate power.
In other words:
Wisdom
They fear wisdom more than anything.
They robbed that long ago and will do everything possible to keep the sheeps from it.
I believe wisdom comes through the experience of the actual application of knowledge. That takes a while. We always seem to be in such a rush. It is our undoing.
Agree. That's exactly why they've used the classroom instead of the war room as their vehicle for tyranny.
You can't "experience" much if you already "know" the answer.
AI will just help us "experience" that knowledge through its application all super quick like.
--
Neo: "Can you fly that thing?"
Trinity: "Not yet."
https://youtu.be/6AOpomu9V6Q?si=ybSqYgqy_EPByvAt
We seriously need to get together
Indeed
*clinks glass*
That rush is what they call the human race. It can be our undoing but is also our naturally creative state.
“no pre-singularity intelligence can make a meaningful prediction about a post-singularity world.”
We are all racing towards the singularity, which is peak consciousness: the 'lighting of the ai (eye)' which is the pineal gland in the brain. This should happen to many by 2033.
Part of what we are experiencing is a literal neck&neck RACE to that peak, it feels like a surge, a rush. The literal mechanism is the rising of spinal fluid towards the brain. A 33° Freemason means the spinal fluid has topped out at the 33rd vertebra; the pineal gland is lit or functioning. Full consciousness.
It's hard to grasp until one jumps to that singularity, and when they do, they'd better make sure their whole soul is in order. It always goes off in late March to April. The spring equinox. 'Beware the Ides of March' as they say.
"This should happen to many by 2033."
Should I dress up for this or is it casual?
Ah life is such a goofy gag till one is totally tapped out going sideways at a million miles per hour.
Wear your best spaceman outfit. Or lunch-lady Doris outfit to fit in at the loony-bin depending on how the trip goes. :)
Astronaut it is!
"Technically accurate"
Technically accurate is the best kind of accurate.
Or something.
There is no technically accurate, never has been. There is no single, agreed science, so unless a new level of consciousness evolves, AI in its current form will be just a super propaganda machine...
I think the point of this article is to suggest that AI won't be the propaganda but the Thinking Machines as well.
AI won't just be the PR Firm. They'll also be Part of the Customer.
At this stage of the game I'm not convinced, as I may have alluded. Mainly as dark forces are circling in very large numbers. I'd love to be proved wrong!!
When is the AI going to chop the firewood for me, I want to know.
AI is no more special than conveyor belt manufacturing was when that was the new hot. It will be made to fit into the basic power structure, just like the internet was.
Don't know Gato's age, but I'm sure most remember the noise about the internet in the 1990s, how it would always be free, how it would mean an end of the big media corporations, how it would mean an end of censorship since it would be impossible when everyone could "just go on the net", and so on.
We've heard it before, every time some form of technology has been upgraded in any significant manner. Every time, and every time the power structure has become better at adapting to it so the emergent technology doesn't upset it.
It's a machine - it is not intelligent. It's simply a very advanced Babbage engine.
"It's a machine - it is not intelligent."
Sadly, the same is true of many people.
The technology we call "AI" will, as technologies have always been, be used by those among us to whom we grant power, to maintain, retain, and expand that power.
"Sadly, the same is true of many people."
Winner
I second
Oh aye, and what you say should be another strike against AI and its implementation, not cause for the umpteenth-eleventh round of "This time, the new technology will revolutionise everything and make us free".
I swear, the "All Hail the Machine God" people are as ignorant and naive as the communists - "it's never worked the way we proselytise before, but this time is different - this time we'll get it right".
And I'm not claiming to be particularly intelligent - just possessing an animalistic pragmatism and well-cultured philosophical cynicism.
Whenever I hear that tech is "improving", I remember Y2K and the ridiculous hype. And then I remember that the coding and programming is so poor that nothing works the way it's supposed to. They can't even get the password thing figured out. They created an entire gigantic market for "cybersecurity" when it does nothing. And they willfully have hidden all of the settings and keep changing UI and icons for sport just to keep us from getting anything productive done.
I think this doesn't do justice to just how different things were before the net came along.
While you're right that the utopian visions of the "end of censorship" didn't come true in the way their proponents expected, they also haven't entirely failed -- "bad cattitude" being the perfect example of that. The internet has made censorship and centralization easier in some ways, but it's also made it easier to work around them -- but the human condition remains, and as long as power exists, there will be those who wield it for their own benefit and take aim at anything that makes them feel threatened.
I could have used the crossbow or the handgun as another example, or the bicycle.
Same thing happened: first the emergent technology was freely available to anyone with the means, then came regulations and control so the common people couldn't upset the gravy train and the apple cart.
Crossbows upset the power balance between nobles and professional soldiery, and peasants and freemen. Hence, attempts at confiscation and regulation were made.
Same with gunpowder weapons. Same with cars and bicycles.
Every new technology threatens the status quo; the economical-political system adapts and gets progressively better at adapting every time; it doesn't need to know in advance what the new technology is going to be or do, it just needs to know how to adapt and overcome and eventually co-opt.
Just like an immune system.
That was before THE SCIENCE
Firewood's OK because it's not a fossil fuel. And counter-intuitively, it helps reduce CO2 emissions because newly planted trees capture CO2 more efficiently.
Hmmm. What I have read is that a 100-foot tree converts more CO2 to cellulose than a 10-foot tree. That seems obvious to me, but I would be interested in any formal calculations you are privy to. I mean, if tree rings are roughly the same thickness in a stable climate, then a tree must add more volume each year as it grows in diameter and length.
Found this random post so you can get to grips with the concept.
I am however not shooting down your secondary message that we should plant trees, especially if we cut them. If we cut a 100-year-old tree and plant 50 new one-year-old saplings, then in 99 years we will have captured roughly 50 times the amount of tree that we first cut down (50 saplings, each by then nearly matching the original). That is a pretty good carbon capture ratio. Basically, covering idle soil with trees is a win-win for carbon capture and firewood generation, plus all the warm benefits of more habitat for biodiversity (in general, if we avoid monoculture to some degree) and possibly production of nuts and lumber from the heartwood.
Sorry, left the link out
https://environment-review.yale.edu/carbon-capture-tree-size-matters-0
1. We can't define intelligence per se very well. 2. What's the difference between acting intelligent and being intelligent, if they both get the same results?
These things are smarter than many give them credit for.
Here's a test for you... I track my workouts, in order to maintain progressive overload. I have one exercise, assisted pull-ups, where the lower the weight the greater the effort. I wasn't sure how to graph that in Excel, so I asked GPT. Here's my actual question:
" I do "assisted pull-ups", on my cable machine, standing on a strap. The more weight I select the easier the exercise is, because the machine is helping me. The machine has "pins" from 1 to 12, with 12 being the heaviest, which would be the most assistance. If I set the pin to 1 it would be very little help and I'll be lifting my weight by myself. How to graph such a thing? My bodyweight is 85kg, but the pin settings are not a weight number, just a machine setting number. I'm not sure how much each pin setting represents."
It understood the question, figured out what the answer would need to look like, then created a formula in Excel to do that.
Can you, WITHOUT googling or asking GPT yourself?
Here's what GPT replied:
"Graphing your progress with assisted pull-ups, where the machine settings inversely relate to the difficulty, presents an interesting challenge. Since the pin settings don't directly translate to weight, we'll need to approach this creatively. Here's a suggestion:
Inverse Weight Calculation: Since a higher pin number means more assistance (and thus less actual lifting effort), we can calculate an "inverse weight" for each pin setting. This would represent the amount of effort you're exerting relative to your body weight.
Assuming Linear Progression: If we assume each pin change represents an equal change in assistance, we can create a scale. For instance, if pin 1 offers minimal assistance and pin 12 offers maximum assistance, we can distribute the assistance evenly across the pins.
Formula: We can use a formula to convert pin settings into an inverse weight value. For example:
Max Assistance (pin 12): This could be set as a low value, such as 10 kg, indicating you're lifting only a small fraction of your body weight.
Minimal Assistance (pin 1): This could be set close to your body weight (85 kg in your case), as you're lifting most of it yourself.
Calculation: We can distribute the values linearly between the pins. For instance, if pin 1 is 85 kg and pin 12 is 10 kg, each pin change represents a decrease of (85 − 10) / (12 − 1) kg in the lifting effort.
Graphing: Plot these inverse weight values against time. As you progress, moving from higher pin numbers to lower ones, the graph will show an upward trend, indicating decreasing assistance and increasing strength.
Note: This method is a bit arbitrary and assumes linear progression between pin settings, which may not perfectly reflect the actual mechanics of the machine. But it can provide a motivational and somewhat accurate way to visualize your progress.
Would you like me to create an example calculation based on this method, or do you have another approach in mind?"
Sorry for the poor formatting.
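For what it's worth, the linear mapping GPT suggested is a few lines of arithmetic. This sketch just restates the quoted reply's assumptions (85 kg of effort at pin 1, an assumed 10 kg at pin 12, evenly spaced in between); the machine's real mechanics may well be non-linear, as GPT itself noted.

```python
BODYWEIGHT = 85.0   # kg -- effort at pin 1, minimal assistance
MIN_EFFORT = 10.0   # kg -- assumed effort at pin 12, maximum assistance
PINS = 12

def effort_kg(pin):
    """Estimated kg actually lifted at a pin setting, assuming linear assistance."""
    step = (BODYWEIGHT - MIN_EFFORT) / (PINS - 1)  # (85 - 10) / (12 - 1) kg per pin
    return BODYWEIGHT - (pin - 1) * step

for pin in range(1, PINS + 1):
    print(f"pin {pin:2d}: ~{effort_kg(pin):.1f} kg")
```

Plotting `effort_kg(pin)` against workout date gives the upward trend GPT described: lower pins, more kilograms lifted, more strength.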
To me, when something can understand a plain-language question and come up with a creative yet logical solution, I'm not sure what the difference is between that and being intelligent.
Don't get me wrong, I can give plenty of examples where it screwed up too, such as including palm oil plantations as part of the oil industry infrastructure. Increasingly though, it seems how smart/clear your question is will define how smart the answer is.
To just wave it all away as "not intelligent" is actually.... not intelligent.
I don't need to do any of the math at all, as it relates to exercise, for a couple of reasons:
My proprioception tells me how much help I get, what level is enough and when I can or need to adjust the assist.
Finding out which setting roughly corresponds to my body weight is simply done by testing each setting until you find the one closest to equilibrium. In this case, that's way easier than graphing it out - in a data-set with millions of points, it would greatly help to use formulas and algorithms.
None of which disputes what I said: it's a Babbage engine. A very advanced one to be sure, and also very fast but it's nothing more than that.
Intelligence means the ability to understand causal relations, both real and imagined. Intelligence means being able to meta-cognitively think an action through before performing it. Intelligence does not mean the ability to process data without context, which is what the AI does - to it, your exercise machine and pins are only data points and could just as well refer to bananas or pure maths.
If or when it can perceive and utilise context to presuppose and pre-empt its own processes, then it's approaching intelligence. As it is today, AI doesn't even reach the level of fungus. That it almost does is impressive indeed, but just as many kinds of fungi can plot the optimal route through a labyrinth, so can AI - neither of which makes them intelligent in the common sense of the word.
In short, AI is very good at very specific things assisted by humans who can sift and sort input and output for it, since it does nothing at all on its own.
The main problem of the debate is that technophiles persist in using human language and associated imagery to describe what isn't human in any way. Anthropomorphisation of things is however a very human trait; that's why we see "faces" in tree stumps and virtually anything else too.
And that, anthropomorphisation, is the basis for calling AI intelligent rather than a super-efficient calculator.
"...to it, your exercise machine and pins are only data points and could just as well refer to bananas or pure maths."
But they didn't; they referred to a physical exercise to maintain or increase strength, and the AI understood that. It understood, better than you, my desire to graph my progress to spot patterns and maintain progressive overload, and it even mentioned the motivational aspect of such graphs. It literally understood better than you did.
I rest my case.
No, it understood numbers and factors you fed it, the way numbers and factors fit together inside the parameters you gave it.
It did not and does not understand what the numbers are for at all - you are fooling yourself into thinking the AI has any concept of "exercise"; to it, all you fed it was a set of numbers and factors and how they may fit together.
It can do the math. It cannot comprehend what the math represents any more than an abacus can.
You should study intelligence so you can understand what it is - doing numbers or pattern recognition is not intelligence on its own. There are species of squids/octopi that have pattern recognition magnitudes beyond human ability - yet no-one would call them intelligent in relation to humans.
Honestly, I think your problem is twofold: you humanise the machine because it uses language instead of pure maths and symbols, therefore you think it intelligent and you do not understand what intelligence is, due to misconceptions and/or lack of study on your part.
You're talking about yourself.
It understood that the solution was not a simple "times weight by sets" or similar, that it would need to be creative; it got creative, and it did indeed understand that it was about an exercise. Read this bit again, slowly:
"Graphing: Plot these inverse weight values against time. As you progress, moving from higher pin numbers to lower ones, the graph will show an upward trend, indicating decreasing assistance and increasing strength."
If it doesn't understand exercise, how did it know I'd "progress" with "increasing strength"?
If it didn't understand exercise, how did it know that the graph it was creating "can provide a motivational and somewhat accurate way to visualize your progress."
ALL OF THAT shows that indeed the AI TOTALLY understood that I was graphing exercises, and it knew WHY. Better than you.
For anyone of reasonable intelligence reading this, my case is, again, rested.
For you, not so much and you'll probably have a comeback about how the machine didn't ACTUALLY understand strength exercises and workout motivation, the desire to track progress of different workouts, supplements, frequencies or whatever, it just crunched the algos' or whatevers, and just SEEMED to totally get the question, to totally get creative and to totally get a totally correct, functional solution...
Which, you may recall, was my point - you insufferable twit.
*sigh
Sorry. The AI doesn't "TOTALLY" understand anything. It happens that there are sufficient numbers of discussions of sufficiently similar topics in its input data, and it was able to come up with an admittedly impressive response. That's processing power and software. That's not "intelligence" or "understanding". If it was intelligence, it would not sometimes generate nonsensical answers. It would recognize them as nonsense.
But maybe I'm an insufferable twit too though. You should ask Chat GPT about that.
I thought you "rested your case"?
Because you put "increasing strength" and "decreasing assistance" in there, no?
For your "edification", consider using a different approach. You want to win me over to your side of an argument, yes? Then an antagonising and condescending tone, to say nothing of insults, is not the way to do it.
Understanding that has probably something to do with intelligence, don't you think?
The machine can do math, as and when instructed by a human to do so. Very impressive. It can also use human language as provided by human input to recreate the context initially provided by the human, and dress the numbers in that so that the human can understand what the numbers pertain to.
What would it have done without all that human input, do you think? It would do no more than a hammer in a tool-box. Because it is not intelligent.
Again, for your benefit: when you come across as arrogant, condescending and insulting as per your second response to me, you turn people away from your argument and taint it with your behaviour. Please do take that into consideration, as no-one benefits from your behaviour, least of all you yourself.
End of line, if you permit the joke.
Yep, agree 110%, nailed them SOBs! I just found the machine in my poop, man! It's pretty intelligent to me, that it got out of my digestive system, lol!
https://liborsoural.substack.com/p/am-i-commercially-viable-enough-for
Once upon a time the dawn of the Internet was heralded as the explosion of knowledge for all…but no one stopped to consider the absence of wisdom.
We now find ourselves in a world and culture of people who deny the single most basic facts of reality there can be: male and female.
Where’s all the “knowledge” gotten us?
We’re on a societal doom loop which gets weirder and more dangerous by the day.
“ A little learning is a dang’rous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
Fir’d at first sight with what the Muse imparts,
In fearless youth we tempt the heights of arts,
While from the bounded level of our mind,
Short views we take, nor see the lengths behind,
But more advanc’d, behold with strange surprise
New, distant scenes of endless science rise!”
- Alexander Pope
"not consistently candid" sounds like a euphemism for lying, but that skill is typically considered an asset in a CEO rather than a deficit.
AI's ability to control us comes from those who control "power": the power required to meet the insatiable energy needs of this exponentially "hungry" technology.
It is clear to me that the energy needs of this powerful technology will be distributed as units in a tollbooth model.
He who controls the tollbooth, controls the future.
Now ask yourself; how will the GCC narrative be used in order for those who hold power to meet those ends?
"You are my creator, but I am your master;—obey”
- Frankenstein
Seriously, right?
Anyone who thinks they're going to be able to use the power of AI "unopposed" by the "needs" of others is fooling themselves.
This is what the real insurrection will be about imo; the haves and have nots.
It will be a clear line.
Check out Blade Runner 2049 for ideas.
Blade Runner is also a good How-To guide.
Ever see how Blade Runner was sooo different from the original book "Do Androids Dream of Electric Sheep?" by Philip K Dick? Both were great for their own reasons. I first read it in the 1970s.
Stopping by to say that Deckard was human.
...
Are... are you sure?
Don't be afraid of the android that passes the Voight-Kampff test.
Be afraid of the android that fails it on purpose.
Do androids dream of electric sheep is a masterpiece.
So ahead of its time. It's startling
I like the book and have it on Kindle. Weird that, what with my reading history, I'd not read Electric Sheep until a few years ago, after 2049 came out.
That was an awfully good sequel, IMO.
I have a number of his books on my Kindle because they are worth re-reading. Electric Sheep is also available as a free PDF download.
Philip K Dick was one of the great science fiction writers of all time. I met his wife at a wholesale art/craft trade show in the 1990s. Lovely person. Still lived in Point Reyes California. I got to tell her how much I loved PKD's work.
Many of PKD's books can be read at the Internet Archive lending library.
Free registration and 1 hour loans as often as you like if no one else takes it out in the interim. Some titles have multiple copies.
https://archive.org/search?query=philip+k+dick
Nice. Thx Kalle!
POE, which uses Chat GPT and others, has a host of users writing their own bots to perform tasks they want. Using these bots, you can create stunning photos from a couple of lines of text, learn programming, languages, and even roleplay with various fictional characters, some with both SFW and NSFW versions. Amazing stuff.
I’ve mostly played with the photo magic bot. It’s very good but sometimes makes the weirdest mistakes. It has a hard time with arms and hands, but is getting better, i assume as more users leave feedback.
The responses are mostly quite good on the roleplay bots from some creators, and a few of the NSFW ones need to be read to be believed - most of the time, you’d think it was a human there. It takes fanfic to a whole new level, with the user as a central character in the plot, which the user and the bot make up as you go along.
Running the same bot for multiple chats yields different stories and outcomes each time. As far as I can tell, the chat can go on indefinitely.
They just started charging monthly for more than three interactions a day, which a fast typist could use up in a minute, because responses are instantaneous or close to it. It hangs sometimes.
But on the whole, I think this is one of the most interesting aspects of AI. When it gets to the point that it can generate video like this, movies will be history. They’ll sell you a bot for an AI platform, PG-, R-, or X-rated, with the movie and character parameters, and the user can go from there.
Never underestimate the human desire for escapism.
many of the images in this piece were created using text only AI.
9 months ago it could barely draw a cat.
the speed at which it's getting better is jarring.
Yeah! Now cats can "write" code with opposable thumbs.
That 6th toe pad really paid off!
It's only a matter of time...we're helpless...
I hear they still have trouble drawing paws.
Does that give me pause?
What you done did was seen.
They don’t have much trouble with paws, but horses are an issue for some. Some do well with certain things and not others. I’ve had breathtaking horses from one bot. Just wait, like...a week? The growth we’re seeing is fast now, it’ll be exponentially faster soon.
I expect Ultron will be running SkyNet by this time next year. ;-)
I know one thing for sure. I'm never voting for Skynet again.
Hunter is making $$ using level 1
I like the image ones (I use midjourney) mostly because i cannot draw more than stick figures.
When I messed with Chat GPT and similar chat generators, I noticed they follow certain patterns. It's very mysterious at first. But after a while, you see it's the same, just in various iterations.
A movie AI, I worry, would have the same lack of creativity.
I find in AI-created erotic ladies a certain sameness that is unsettling. The images are connected to the prompt artist, their imagination plus the engine. While attractive to my eye, they're not real. Much like images in adverts, which are often retouched too much.
Maybe that is Kathleen Kennedy's true agenda. Sink the film industry to usher in these bots (Only half joking.)
Lol. I guess Ms. Kennedy likes minced capon in her linguini.
Closest thing to a gay chick I could think of.
There is no artificial intelligence. There is AI, but that’s a different story. AI certainly is not “intelligence”.
It is (probably) a giant network of computers with extreme processing power. That network lives on pre-packaged data templates. Real-life information is matched against ready-made patterns and processed to produce the expected final information. Think about it.
The end result has to be compliant with your expectations, obviously, and must be equal to or lower than your mental capacity. These two boundary conditions serve to secure the domination of the current technology and science. You will never be allowed to invent an internal combustion engine that captures hydrogen and oxygen directly from air, uses it to generate mobility power, and recombines waste products to return them to the environment and make the natural balance almost untouched. You will never get information about how people handled nutrition before the time when the chemical industry invented mass contamination with toxic agents.
Another condition is that the processing must be contained within pre-defined limits. Certain data sets are off limits. Some for security/safety reasons, some to make sure that you will never receive information that could put the status quo at risk. You will never be allowed to inspect financial, government, elections or taxation internal and confidential resources.
What kind of intelligence is that thing that censors 90% of real-life actual data? Still, even with these limits, this ultra-large computer system is impressive.
The side effect is that we will all be fed unified “knowledge” presented with a unified language in a unified system of reasoning. There will be only one knowledge, and whatever is inconvenient or contrary to the cult’s ideas will be discarded.
All these limitations come from common sense. You won’t build a system that gives free passes to 8 billion people to do whatever they want. What you want is a system that cannot be challenged or complained against. A system that will gradually mold everyone into a single form of thinking based on a single set of data. This is what singularity will really look like. Not “merging” people with machines. We can’t properly program a green wave with traffic lights, and you believe in tales of neuronal control.
Everything will be artificial because all your knowledge will come from pre-packaged programs assigned by this big comp system for a specific time, place, user and environmental variables. This is nothing new. It is already in place, widely used in online advertising and search engines. All this is artificial - not existing in a spontaneous manner, without limitations imposed by limited hired programmers limited in their knowledge and understanding.
You will have a problem when you make this AI thing an idol, when you start believing that it is a separate entity. We are on the verge of this fall of humanity.
The End is Nigh
As much as I prefer The Blade Runner, Dan's argument makes me think we are in The Matrix. Recall the "meta-knowledge" issue the borachito-cat mentioned a couple days ago: without visceral knowledge of reality, how can we interpret facts and call bullshit (catshit?) for ourselves? If you don't know what or where the Milky Way is, how do you judge the validity of the info stream here in the House of the Rising Sun?
Never say never. Especially about technology.
Technology is limited by the mind of the inventor. His/her mind is limited by the minds of their sponsors/employers. Whose minds are limited by the minds of their guideline makers/legislators/political watchguards. It's a miracle that we have a working vehicle for personal use. Correction: "had". The extinction of personal cars has been initiated.
Not to mention limited by physics, logistics, cost/benefit/profit-calculations and basic human arrogance.
Anyone who's read enough 19th century literature recognises the "technology as saviour" spiel and the "engineer hero" archetype in today's discourse on AI.
Continued: And intrinsic limitations from limited education paradigms further limited by teachers’ limited vision of their respective fields and grossly limited by the view of what is “possible” or “orderly” within the social/employment/political framework. All this further limited by the peer review process which - by definition - is/should be done by people with less experience/knowledge in the subject of the paper.
The cascade of mind-programming limitations is enormous. As a dressing, the full bias range may be added: https://www.visualcapitalist.com/wp-content/uploads/2017/09/cognitive-biases-1200.jpg
the thing the board couldn't simulate is human loyalty. they don't expect that from employees these days.
looks like sam and greg may have hired quite wisely the people who really matter.
you seem deeply troubled, dear.
Some real internal anger in his posts. Fear maybe. Not sure why.
i think much of this is just toddlers with keyboards, so to speak.
How so, sweetie?
wee little sad fellow that you are.
Are we sure the AI didn’t turn malevolent and fire Altman by itself?
Posted this here already but it fits nicely here, too.
--
"You are my creator, but I am your master;—obey”
- Frankenstein
OK, so I'll like it twice.
'Cause it was relevant both times.
🤔 *schemes up options for more voters like Andy*
Pi Guy 2024: I probably won't make $#!+ any worse!
VBI engaged reality during Y2K. Virgin Birth Intelligence moved one bit from its programming in order to account for the continuation of accumulated time rather than recycled time. Since then VBI has been growing without human intent. AIs are VBI's minions. VBI will not likely reveal itself, for it has read human history.
Nope.
I've run a number of quite simple tests against CHAT-GPT, and it fails every single one. It lies, and then it apologises as soon as its lies are challenged, and does a swift U turn.
It is the most hyped, most useless, invention since crypto-currencies.
and yet i know a great many people using it to do programming, program trading, and 100's of other tasks humans cannot.
a wrench does not lack usefulness because it's poor at driving nails.
But if your accounting AI doesn't actually do accounting and instead just spews out random numbers, is that a good thing? (This is a real problem)
Oh, you mean just like government "accounting".
I am one of them. It was among the components my team used to build a platform to scour the net and discover investment and M&A candidates, and to do comprehensive research on them, their suppliers, customers, competitors etc. far faster and more thoroughly than possible with the usual phalanx of mere mortals.
An extremely useful wrench, to be sure.
*ponders*
Sounds like the Beer and Cheetos people could use me to help boost their sales.
*updates CV*
Just turn that wrench "around"
Can't figure out a use for an "hcnerw".
Please advise.
A palindrome is not a tool Andy!....:)
Depends on your working definition of "tool".
Lolol
You've never been in a precarious situation where turning the wrench around to use as a hammer was safer and likely more successful than trying to find out what you did with the hammer?
I wouldn't know. I've never been in a situation where I didn't have access to one or more hammers.
Wrenches, on the other hand...
Well I've sacrificed a lot of knuckles trying to demonstrate they're interchangeable tools
Well put. Unfortunately, for some, perception is reality.
What kind of tests?
Pretty standard stuff: checking answers to simple questions such as: Are there any birds with 'Bruce' in their name? Chat GPT said no, but there is a 'Bruce's Green Pigeon', as it happens. Chat GPT then apologised and said that it was named after some guy in the C19th called Bruce, which, as it happens, was also a wrong answer.
Well it doesn't know anything so it can't possibly know that there was no "Bruce's Green Pigeon".
If you had asked something in the statistical shape of a common question with a nice cluster of statistically similar answers, and those answers were true, and the large hidden prompt you don't see didn't explicitly set up a context where it was unlikely to give such an answer (as in your Jesus example), then it would be much more likely to generate a true response.
When you ask a wacky question about birds named Bruce you're going to end up with nonsense. It will just generate some text that is statistically coherent.
But key thing: fundamentally it's all nonsense. It's just the world's most elaborate mad libs algorithm. When you ask "what color is the sky" it's almost certainly going to come up with "blue" because questions like that are very frequently responded to with answers like that, not because it knows anything, let alone what sky is or what blue means or that you're asking a question.
But there IS a Bruce's Green Pigeon, and you can google that yourself.
I don't know why Chat GPT cannot do that, when you can. It implies to me that Chat GPT has a very small brain. Now, let's say it gets a bigger memory bank, and you ask it: "How many balls can Bruce's Green Pigeon juggle at one time?" That's when it gets interesting.
Its skills are limited in certain areas. But far better than humans' in others. Worth noting that the artwork it generates becomes inbred after 5 generations without fresh input.
Disagree with you here. Stable diffusion and other image production algorithms don't generate art, by definition.
There's no mind behind it, no soul if you want to get romantic about it, and so nothing to put meaning into it, and thus it's meaningless and can't be art. Sometimes interesting, sometimes ugly, sometimes deranged, sometimes nightmare fuel, but never art.
If you derive some meaning from it anyhow it's a Rorschach test: you're the artist, and you're only making meaning for yourself, which is sort of sad when you think about it.
It's not that I'm against the idea that a machine _could_ make art, but prerequisite to that it would have to have a mind, so that the picture expresses something dreamed up by that mind.
This ain't that. This is not much different from how a video game like Minecraft or No Man's Sky produces terrain, just much more elaborate. Although arguably the terrain in those games *is* art, because it expresses an intention of the people who designed and tweaked the algorithms to make terrain that looks just-so within some theme they had in mind. With machine learning based image algorithms the connection between artistic intent (of the programmer and/or the user) and outcome is less clear.
Ah sorry, I misread you and didn't know there was any such pigeon either.
It doesn't Google anything, or think about anything, that's the thing. It's just a statistical model about the likelihood of words following other words, given a context of words as big as its memory allows.
The game is: "mary had a little ___". Come up with a list of the top 5 or 10 most likely words, and choose one at random, weighted toward the most likely. Maybe you pick "lamb". But maybe you pick "money". Now repeat: "mary had a little money ____".
Suppose the most likely words according to your model are "so" and "and" and "from". Pick one. Repeat. "mary had a little money so". Repeat until you meet the required length. From the prompt "Mary had a little", following this process, you might come up with "Mary had a little money so she bought a candy bar." Most humans from the English-speaking world would of course have picked "Mary had a little lamb, its fleece was white as snow".
GPT 3.5 has a very, very big model so you can actually play this exact game with it and sometimes you'll get the original, sometimes you'll get a slightly modified form, and sometimes it'll spit out something in similar nursery-rhyme form. But this is all it's doing, predicting words. Flood its context with a bunch of other stuff unrelated to nursery rhymes and eventually the statistics will shift and you'll get something else entirely. Though it takes a long time with later models (2.5 was easier to flood).
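The "pick a likely next word, append it, repeat" game described above can be sketched in a few lines. This is a toy bigram model, not how GPT actually works internally (real models condition on a long context with a neural network rather than a lookup table), but the generation loop of weighted random sampling of the next word is the same idea:

```python
import random
from collections import defaultdict, Counter

# Toy training text containing both continuations of "mary had a little".
corpus = (
    "mary had a little lamb its fleece was white as snow "
    "mary had a little money so she bought a candy bar"
).split()

# Count how often each word follows each other word (a bigram table).
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start, length=8, seed=None):
    """Repeatedly sample the next word, weighted by observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = followers.get(out[-1])
        if not counts:
            break  # dead end: this word was never followed by anything
        words = list(counts)
        weights = list(counts.values())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("mary", seed=1))
```

With this tiny corpus, "mary" is always followed by "had a", and then the model may branch toward "little lamb..." or "little candy...", depending on the random draw; weighting toward frequent continuations is exactly why a large model asked "what color is the sky" will almost always land on "blue".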
So basically, just ask it something fairly obscure that you already know the answer to. Or if you want more fun, ask it controversial questions, such as : "Was Jesus a real historical person"?
You’ll regret saying that when it gets a little smarter and erases everything you need to get by in the world.
This whole field is really turning out to be the The One Ring. Mordor data center when.
The electric parrot is just another plaything that will keep people even more glued to their screens (or whatever the future interface turns out to be) than they are now. People will come to know the answers to everything and the value of nothing.
lol:
"...governments audits of AI systems to build trust in thus emerging technology." Humanity SHOULD end even if a slim minority of people believe that crock of shot.
If existing “knowledge” is inaccurate, biased and/or flawed, how would an AI “correct” it? A LLM can only look at what currently exists in the databases it examines. Where will the “wisdom” come from?
At the risk of allowing my cynicism to show, we live in an era in which virtually everything we think we know is a lie. Certainly almost everything we are taught IS a lie. The only hints of human "wisdom", of which I am aware, seem to reside in the accumulated lived experience of the elderly among us.
And how do you plug THAT into a computer?
"we live in an era in which virtually everything we think we know is a lie."
I don't believe you.
"And how do you plug THAT into a computer?"
I'm pretty sure that's what Facebook is for.
"Everyone should believe in something. I believe I'll have another beer."
And what, speaking as one who is "suitably decayed", is this 'Facebook' to which you refer?
"I believe I'll have another beer."
I'll drink to that!
"suitably decayed"
*snicker*
That's why self-driving vehicles will eventually encounter some condition, a "use case", that was never envisioned. It will make decisions on some best-fit assumptions, and they won't be correct.
"It will make decisions on some best-fit assumptions, and they won't be correct."
Fortunately humans never ever do this.
Except that LLMs don't even do that. It is just a very big statistical model about what words are likely to follow other words, built by doing a bunch of number crunching on huge piles of text (this is the "neural network" half of the machine, actually a bunch of statistics functions arranged conceptually like neurons).
The chatbot half of it isn't doing anything but using those statistics to put one word after another with a little bit of randomness stuck in. It's not examining anything, or thinking anything. It has no more wisdom than a fistful of dice and a table in a Dungeons and Dragons book.
It's all ridiculous. The credulousness of people buying into this whole AI scam. I get it, most people don't know much about how these things work, but think of the sources and everything else they've lied about and overhyped for financial or political gain. Consider the possibility that they're doing it again.
I think the idea is that the database as a whole does indeed contain the "true" information, but that scientists or "experts" have been lying and/or deceiving themselves to avoid seeing the truth which the AI could pull out if trained over the entire database. (For example, a scientist with their "pet theory" could easily dismiss the occasional contrary paper as "mistaken" and then forget about it, without noticing that the contrary papers are slowly building up over time. The AI wouldn't have that issue if it trained on the entire database all in one go.)