Please enlighten me. None of what EGM has said persuades me that AI is a reliable, independent source of information and ideas. AI can only draw from available information. So, my first thought is, who is making, and withholding, that information? What is THEIR bias? Because whatever their bias is, that's what AI's bias is.
There's an old expression from computer programming: "Garbage in, garbage out."
AI is a pretty broad field and I'm by no means an AI expert. However, I have some practical experience: a company I used to work for used machine learning models to analyze online consumer behavior on behalf of advertisers. What we found is that the models consistently identified customer segments no one had ever thought of. For example, one flagged a subspecialty of scientists as a target for travel companies. That made no sense to any of the human analysts at first glance and was sidelined until someone got curious and followed up. It turned out the model had picked up on a major academic event being held that particular year, which was generating demand for travel.
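For the curious, here's a minimal sketch of the kind of unsupervised segmentation I'm describing, with synthetic data standing in for the real clickstream features. The actual system was proprietary and far more elaborate than a single k-means pass, and every column name here is made up:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Stand-in for a table of per-user browsing features; the real pipeline
# derived these from ad-impression and clickstream logs.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pages_per_session":      rng.poisson(6, 5_000),
    "travel_page_visits":     rng.poisson(2, 5_000),
    "conference_site_visits": rng.poisson(1, 5_000),
    "avg_order_value":        rng.gamma(2.0, 40.0, 5_000),
})

# Scale the features so no single column dominates the distance metric.
X = StandardScaler().fit_transform(df)

# Cluster users into candidate segments; k is a tuning choice.
df["segment"] = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Per-segment averages: an odd cluster (say, heavy conference-site
# traffic plus travel intent) is exactly the kind of segment no human
# analyst would have gone looking for on their own.
print(df.groupby("segment").mean())
```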
Another example I have secondhand familiarity with is the use of AI to detect fraudulent transactions in financial services. The sheer volume of transactions per second makes this impossible to handle manually. Most banks have had this type of software in production for a while now.
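I only know that space secondhand, so treat this as a toy illustration of the general idea rather than how any bank actually does it: an anomaly detector trained on "normal" history scores each incoming transaction in microseconds, which is what makes the per-second volume tractable at all.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated "normal" history: amount ($), hour of day, distance from
# home (km). Real systems use far richer features and labeled fraud data.
rng = np.random.default_rng(0)
history = rng.normal(loc=[60.0, 14.0, 5.0], scale=[30.0, 4.0, 10.0],
                     size=(10_000, 3))
model = IsolationForest(contamination=0.001, random_state=0).fit(history)

# Score incoming transactions; -1 flags a likely anomaly for review.
incoming = np.array([
    [45.0,   13.0,    2.0],   # routine afternoon purchase
    [9500.0,  3.0, 8000.0],   # huge amount, 3 a.m., far from home
])
print(model.predict(incoming))  # expect the second row flagged (-1)
```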
The GIGO problem you refer to exists for any type of analysis, not just AI-based ones. But as I mentioned before, we already know how to handle it: examining the dataset, validating its quality, and running sanity checks were lessons I was taught in Stats 101. All of that remains valid whether you're fitting a simple linear regression or deploying a neural net.

The real threat El Gato is highlighting is that the government, under the guise of "protecting" the public from "AI risks", will impose controls and filters on outputs from that technology that don't align with its political preferences. It's the pandemic playbook, imposed on AI instead of people.
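Coming back to the GIGO point, here's the kind of pre-modeling hygiene I mean, made concrete. The data and column names are invented; the checks are the point, and they apply before any model sees the data:

```python
import pandas as pd

# Toy stand-in for whatever dataset is about to be modeled.
df = pd.DataFrame({
    "amount":    [12.5, 80.0, 5.0, 95.0],
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02",
                                 "2024-01-03", "2024-01-04"]),
})

# 1. Missing values: know what fraction of each column is absent.
print(df.isna().mean())

# 2. Range checks: values that parse fine but are nonsense.
assert (df["amount"] >= 0).all(), "negative transaction amounts"
assert df["timestamp"].is_monotonic_increasing, "records out of order"

# 3. Duplicates: double-counted rows bias any downstream fit,
#    whether it's a linear regression or a neural net.
assert not df.duplicated().any(), "duplicated rows"
```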