There are several things wrong with your conclusion:
1. Not understanding the difference between programming and training -- there might still be a human to blame, of course, with regard to who curates the training material, but it's not some programmer putting in a biased algorithm.
2. Going from 'Grok' to 'AI' as if all LLM engines were the same, all trained on exactly the same material, and so on. Drawing a broad, generic conclusion from a single specific example isn't exactly wise.
“There might still be a human to blame, of course, with regard to who curates the training material.”
Exactly. Do you think the trainers will be unbiased? Or, as Gato said in an earlier comment, the AI will teach itself -- but that assumes it will then be able to correct for all the biases already embedded in the available training material.