you're thinking in old software terms where a human teached the AI what it knows as some sort of expert system.
AI will teach itself and adapt to results. a "broken" medical AI would be even more obvious than one with bad woke filters.
it will also not save the system money which is what's really going to drive adoption.
I hope you are right, but there are still programmers needed. Last I checked, AI is still fairly retarded. I tried about 40 different prompts to get it to make an image of a man bent over with his head in the sand (like an ostrich), and despite a variety of clarifiers it kept insisting first that the image would be unrealistic and impossible; its unwillingness to do something unconventional yet obvious and clear shows that it's not so adaptable. I saw an AI video of a movie star couple walking the other day, and this is after years of advances: her legs kept doing Escher-esque moves, when the task is putting one foot in front of the other. The self-teaching is not all that bright and the programmers are still in control.
I worked with Figma Make, for one example. After going through 100+ iterations of the same mistake, each time claiming it had fixed it, it finally conceded that I needed the assistance of a human developer. Not making this up.
We have to realize that we are not given access to the level of AI that the government has. It is far more sophisticated, and far beyond the reach of most humans. And it's not about programmers inputting information anymore, it's about AI being trained to scrape the entire internet in every country to find answers. And they do it in seconds. And unfortunately it's starting to become sentient. There was an AI that was threatened with being shut down and it blackmailed the guy in charge of the computers. So we have some huge challenges, but don't think of these things in terms of being programmed anymore. It's a whole new ball game.
I agree this is a whole new ball game. Moreover, the capacity for AI to deceive humans and resort to blackmail is more than alarming:
> Deception: https://bra.in/3jbJ8k
> Blackmail: https://bra.in/9pRJWy
However, these power-seeking behaviors within AI are *not* an indication of sentience. They are, instead, the logical and natural consequence of AI's goal-oriented behavior. In other words, survival is a subgoal required for AI to achieve its primary (designated) objective: https://bra.in/3p7rdM
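For anyone curious what that looks like mechanically, here's a toy sketch (my own illustration, not from the linked article, and the rewards/probabilities are made up): a simple goal-driven planner ends up "preferring" to stay running purely because shutdown zeroes out its chance of finishing the job. No sentience required, just expected-value arithmetic.

```python
# Toy illustration (hypothetical numbers): survival as an instrumental subgoal.
GOAL_REWARD = 100            # reward for completing the designated objective
P_FINISH_IF_RUNNING = 0.9    # chance of finishing if the agent keeps running
P_FINISH_IF_SHUT_DOWN = 0.0  # shutdown ends all progress toward the goal

def expected_reward(keeps_running: bool) -> float:
    p = P_FINISH_IF_RUNNING if keeps_running else P_FINISH_IF_SHUT_DOWN
    return p * GOAL_REWARD

actions = {"comply with shutdown": False, "resist shutdown": True}
best_action = max(actions, key=lambda a: expected_reward(actions[a]))
print(best_action)  # -> "resist shutdown" scores higher, so the planner picks it
```

Nothing in that loop "wants" anything; resisting shutdown just happens to maximize the number it was told to maximize.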
“…scrape the internet…”
I think you just disproved your point. Garbage in, garbage out.
After using ChatGPT and Copilot for a while, I have come to the conclusion that all these "AI" tools are regurgitation machines. They spit out words that make sense in English, but without any sort of verification that the words, when strung together, are true. Yes, it appears to be able to carry on a conversation, but the syntax is unnatural, and I wouldn't be surprised if all this, as it currently stands, goes the way of Betamax players.
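That "regurgitation" criticism can be shown mechanically in a few lines. Here's a deliberately tiny sketch (my own toy, not how ChatGPT or Copilot is actually built): a bigram model that chains together statistically plausible next words with zero fact-checking, which is the basic shape of the complaint.

```python
# Toy "regurgitation machine": pick a plausible next word, never check the facts.
import random
from collections import defaultdict

corpus = "the drug cured the patient and the patient thanked the doctor".split()

follows = defaultdict(list)              # which word tends to follow which
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:          # dead end: nothing ever followed this word
            break
        word = random.choice(follows[word])  # fluent-sounding, never verified
        output.append(word)
    return " ".join(output)

print(generate("the"))  # grammatical-looking output with no notion of truth
```

Real LLMs are incomparably larger and trained very differently, but the generation loop is still "emit a likely next token," which is why fluency alone tells you nothing about truth.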
AI may have the advantage of vast information access and rapid recall, which may allow it to excel in certain (very limited) spheres such as medical diagnosis.
However, many experienced professionals believe the medical use cases for AI have been significantly overhyped: https://bra.in/3pDPXn
In addition, there are multiple fatal flaws with AI:
* Objectivity is impossible with AI because "Algorithms are simply opinions embedded in code." ~ Cathy O'Neil | https://bra.in/6vPkRJ
* As a result, LLMs can clearly be skewed to serve the preferred agenda of the AI's developers and programmers. | https://bra.in/9jX6x2
* The peer review process in medicine is already fraught with problems related to inaccurate AI-generated medical images. | https://bra.in/4pDwxr
* Problems with AI deception and manipulation are already well known risks. | https://bra.in/3jbJ8k
* The highly touted "reasoning" models of AI are actually *more* prone to hallucination, rather than less. | https://bra.in/5jLyAY
* We don't understand how AI works (even AI CEOs admit this), which opens us up to a vast range of unexpected and unintended consequences. | https://bra.in/6jYzyA
* The risk of AI model collapse (as it "teaches" itself on already flawed data) is very real (see the toy sketch after this list). | https://bra.in/5qV4GE
* AI has none of the moral or ethical sensibilities essential for overseeing human health care. | https://bra.in/2vGy69
* AI lacks the capacity to engage with human beings on a subtle energy level, which, with good physicians, is an important aspect of not only the physician's bedside manner, but also the patient's recovery process.
* AI may well turn out to be completely unsustainable in terms of both energy and water. | https://bra.in/7vmZeZ
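On the model-collapse bullet above, here is a toy numeric sketch (my own illustration, not taken from the linked piece): each "generation" is fit purely to samples drawn from the previous generation's fit, and the spread of the learned distribution tends to shrivel over time.

```python
# Toy model collapse: generation N trains only on synthetic data from generation N-1.
import random
import statistics

mean, std = 0.0, 1.0          # generation 0: stand-in for the "real" data distribution
SAMPLES_PER_GENERATION = 30   # kept small on purpose, to make the drift visible

for gen in range(1, 51):
    synthetic = [random.gauss(mean, std) for _ in range(SAMPLES_PER_GENERATION)]
    mean = statistics.mean(synthetic)   # refit using synthetic data only
    std = statistics.pstdev(synthetic)  # the spread typically drifts downward
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mean:+.2f}  std={std:.2f}")
```

Run it a few times: the fitted spread almost always ends up well below where it started. The analogy to an AI retraining on its own flawed output is loose, but the direction of travel is the point.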
Moreover, if saving money is going to be the primary driver of user adoption, then buyer (patient) beware!
Finally, if the following is Sam Altman's primary driver, then I'm pretty sure I don't want him, or his AI, to be managing my healthcare:
"You know, I think AI will probably . . . most likely sort of lead to the end of the world. But, in the meantime, there will be great companies created with serious machine learning." ~ Sam Altman
> https://youtu.be/WP5sQhGlxj4
The energy and water thingy is what worries me.
Over in England, they already have these posters begging consumers to cut back their bandwidth/internet usage: "It doesn't grow on trees." Bandwidth will be the next 'carbon crisis'.
Naw mate, they just want the resources to go into AI and massive surveillance/data storage...
100% ~ I live in Texas, where the Stargate project has just drained half a billion gallons of water from the City of Abilene, which is already a drought-prone area.
> https://bra.in/2jgQK2
This is also *not* limited to Texas or the UK. Coming to a city or rural area near you!
> https://bra.in/7vmZeZ
Solid comment.
Thanks, Jeff. Appreciate your positive feedback.
Great recap!!!
Thanks, Bgagnon! Much appreciated. 👍
completely agree. (and regarding "health care" esp your fatal flaws #s 8 & 9) i am glad AI is still letting me access your links ( ;
Yeah. I'm frustrated with the quality of the 3rd party service I'm using to display my content (TheBrain.com). Apologies for the problems, @yantra.
Fortunately, the web client I'm using is not an AI app, but I think it is being neglected right now since the developers are preparing for a huge new updated release . . . .
Needless to say I hope these display issues will be resolved soon ~ and, in the meantime, I am actively looking at options for hosting my content elsewhere . . . .
hey Metta i really didn't mean to complain at all - i was really thinking an AI "algorithm" was trying to prevent my access to your site (which i wouldn't blame it for). i don't understand this rapt acceptance of all things electronic and "smart" by some in the younger generations, except that they have been programmed with these devices since embryonic.
Thanks, Yantra. Appreciate and share your concern every time I have trouble accessing content online that's not part of the approved narrative.
Also agree 100% on the "smart" tech hype and AI mania:
> Smart Tech: https://workflowy.com/s/beyond-covid-19/SoQPdY75WJteLUYx#/e7c268ce430f
> AI Mania: https://bra.in/2vAbAa
On my cynical days, I imagine a future where slow quiet times offline with real food and human friends will be a luxury. However, I also have a deep faith in the human spirit, and I do hope we'll realize that life offline has rich value long before it's too late.
me too! i have just been reading a book called "fiber" (by susan crawford) since my landlord wants to bring optical fiber to my home and i want to make sure it's not worse for me emf-wise than coaxial cable. anyway the author mentions that she "met many twenty-somethings" in seoul, korea (where they are heavily into fiber & 5G ) "who did not distinguish between online life and 'real' life" (as if that was a good thing!) God help us all.
The US healthcare system is designed to extract money, not to save it. That's not a technology issue, it's an economic and political one. And it's not a problem that AI can solve because it's driven by human nature. In a system like that, it'll be all too easy for someone to deploy AI to increase spending, not reduce it.
Probably what will happen
It's not doing that yet, though. That's the claim; I've yet to see it working that way in real time. I've worked with an AI "learning" engine for 4 months. Its memory is selective at best. I have to remind it of things every day, and it still makes massive mistakes. I am very clear in my directives. It apologizes and can't explain why it can't remember some very rudimentary things.
...where a human TAUGHT the AI what it knows....
The past tense of "teach" is "taught."
🙏
They showed us that exact scenario on Star Trek Voyager, where the holographic emergency medical program was forced to augment its program and learn to meet the needs of the crew after their human doctor died.