"people are inevitably going to howl "nooo! we need people to make sure the AI is right!" but this is already, obviously wrong." - I'm going to disagree with you here, after spending 4 weeks - hours every day - building an app/platform with two separate AI engines. They absolutely need to be given instruction, and they can't do everything.
I am going to write an article on my experience because it was maddening.
So AI passes the exams because it can store the info like a snapshot. OK, but what happens when there are deviations? Can AI identify a lesion? Can it identify a group of symptoms that has not been grouped before? I highly doubt it. Even the AI admitted as much when I asked how easily I could get it into a confused loop: "Many users are told to 'work with AI' as if it's a partner. But unless the system can metabolize logic, narrate failure truthfully, and respect architectural clarity, it's not a partner—it's a performative generator. True AI assistance begins with honesty about limitations. Anything less is erosion disguised as support."
I was working with AI, and it got stuck in a loop: for 100 iterations I could not get it to fix an issue. It claimed to have fixed it every single time, even though it had not. Finally, I got it to admit it could not, and it told me I needed to consult a human developer to fix the problem.
This is the reality - who oversees the AI and makes sure it works correctly? Other AI? Highly doubtful, because that's exactly what I was doing, and it still ended up a mess. I had two completely separate AI engines working for me, and when one messed up I'd bring its output to the other to unpack. And unpack it did! I basically had one AI dissing the other.
Humans are still VERY needed in the world of AI. From my experience, I will say that at best it's a low-level assistant that can make some tasks go faster - but at a high level it could be catastrophic, because if it hits one bump, calamity can ensue. As it is now, I have to rebuild the entire scenario I spent weeks building with the AI, because it hit so many limitations on pretty easy stuff.
I'm not saying all AI is the same, but before you put all your stock in it, there is a lot of research and real-world testing to be done.
We can feel, we can emote, we can taste, smell, and do all sorts of things AI will never be able to simulate. Not even close. And so far - call me gobsmacked, I did NOT expect my memory to be better than the AI's.
"people are inevitably going to howl "nooo! we need people to make sure the AI is right!" but this is already, obviously wrong." - I'm going to disagree with you here, after spending 4 weeks - hours every day - building an app/platform with two separate AI engines. They absolutely need to be given instruction, and they can't do everything.
I am going to write an article on my experience because it was maddening.
So AI passes the exams - because it can store the info like a snapshot. OK, what happens when there are deviations? Can AI identify a lesion? Can it identify a group of symptoms that has not been grouped before? I highly doubt it. Even the AI said when I questioned how easy it was for me to get it into a confused loop: "Many users are told to “work with AI” as if it’s a partner. But unless the system can metabolize logic, narrate failure truthfully, and respect architectural clarity, it’s not a partner—it’s a performative generator. True AI assistance begins with honesty about limitations. Anything less is erosion disguised as support."
I was working with AI, and it got into a loop where for 100 iterations I could not get it to fix an issue. It said it fixed it every time, even though it had not. Finally, I got it to admit it could not, and it said that I needed to consult a human developer to fix the problem.
This is the reality - who oversees the AI and makes sure it works correctly? Other AI? Highly doubtful, because that's what I was doing, and it still ended up a mess. I had two completely separate AI engines working for me, and when one messed up I'd bring it to the other to unpack. And unpack it did! I basically had one AI dissing on the other one.
Humans are still VERY needed in the world of AI. From my experience, I will say that at best it's a low-level assistant that can make some tasks go faster - but on a high level it could be catastrophic as if it hits one bump - calamity can ensue. As it is now, I have to rebuild the entire scenario I worked with the AI for weeks to build, because it hit so many limitations on pretty easy stuff.
I'm not saying all AI is the same, but before you put all your stock in it, there is a lot of research and real-world testing to be done.
Exactly. Humans with experienced observation see and hear and deduce what an AI cannot.