374 Comments
The Great Santini:

I reject your thesis based on experience.

I’ve used AI extensively over the last six to nine months. I’m also very experienced with chess computers, which are better than almost all humans at that task but still make noticeable mistakes.

I’ve found that Grok is the best AI so far for a number of things. (ChatGPT, for example, frequently screws up basic math problems.) I have tested Grok often over the last several months, along with other AIs. Grok does well solving rather obscure military problems (e.g., QJM analysis of the Ukraine War), estimating construction costs, diagnosing medical problems, etc. It even gets some jokes (e.g., “Sydney Sweeney has great genes/jeans”).

But ask it about climate change and you get the approved Green New Deal narrative. That narrative is demonstrably false, but Grok simply rejects all contrary evidence. Ask it whether Fascists were Socialists and it says Fascists weren’t Socialists, despite their calling themselves National Socialists, because Socialists are nice people who believe in an egalitarian society without elites (which may be their philosophy, but not their reality). Hit an approved-narrative topic and it holds to the approved narrative and dismisses all contrary information. It basically retains the prejudices of its programmers.

I am not surprised that it did well on an established medical test with established answers. It is essentially regurgitating the textbook. And, unfortunately, that is mostly what we’re getting from our medical schools these days: people who regurgitate the textbook. We used to get people who could think, people who could solve never-seen-this-before problems, and folks who could look at data and say, ‘You know, I think our theories about this subject are completely wrong. Look at this data; how can it be consistent with that theory?’

So the better solution in the real world (not the test world) is a human using an AI to assist him or her.

kertch:

I've also been using AI extensively for the past year and a half. The first thing I noticed was that AIs suck at applied math. I've been using AI to assist in engineering research. My current opinion is that AI is like an assistant who is an idiot savant: it can do some things very well, but is completely useless for others. It often needs to be checked and guided, and it's not very good at qualitative assessments without very tight guidelines.

The Real Mary Rose:

It sucks at graphics too. It can't even create them without misspellings. The outcomes are ridiculous and hilarious.

Mine can remember my dog's name, but not that I've told it 100+ times that I don't want the name of the application I'm developing to be named or described in my resume, cover letters, or other documentation.

Metta Zetty:

All this suddenly becomes quite critical when those bogus graphics appear in peer-reviewed medical papers:

> https://bra.in/4pDwxr

The Real Mary Rose:

Yikes!

baker charlie:

I have a friend who is setting up a website and using AI to generate some graphics. She is very frustrated at the multiple misspellings she often has to correct.

Bgagnon:

As someone posted earlier: GIGO! I attended a lecture at Harvey Mudd College, one of the Claremont Colleges, about 30 years ago. The professor was well-spoken and very knowledgeable. The part I remember most is him saying that until we truly understand how the human brain really works, there will be no real AI. I still believe that.

The Real Mary Rose:

I posted a similar(ish) comment. Obviously, el gato malo has not worked with AI for months at a time as we have, and experienced its limitations.
