Hence my position is that if the US government intends to 'establish strong guardrails to ensure its use of AI keeps people safe and doesn’t violate their rights' then it has to make it illegal and impossible for AI to control anything in the real world. But I suspect neither the US government nor the legislature is going to do that.
Sorry for the snip, I needed room to be wordy too...
So, I have no problem with what you've asserted in the part that I kept, in fact I agree totally. But I think there's a tendency to misunderstand what AI is primarily used for. As someone who uses it on a daily basis, let me see if I can help.
If you think that AI is a total replacement for humans in 2024, it's not. In most cases AI is not in control - I say most cases because I don't know everything in the universe...lol...but I've never seen a case where AI is totally in control. Rather, it augments human capacity. For example, I am in the process of creating content for a law firm's website. I could spend 20 hours doing that, or I could use AI to create my drafts in a couple of minutes, spend an hour or two editing, and move on to the next project. Instead of having 20 employees, I can get the same amount of work done myself, with the help of AI. But at the end of the day, I'm still making the content decisions.
Like every technology, there is potential for problems and misuse, which is why I totally support your position, and I think it's something important for everyone to think about. There absolutely should be checks and balances, because as the technology improves and AI can do more and more, these considerations become more relevant. At the moment AI merely reflects the intelligence that creates it, and given the topic, it appears that those checks and balances are in place - Google took it down and said it didn't meet the mark of what they were trying to do. That's actually an example of a responsible corporation, despite the fact that the "consequences" weren't that severe.
To answer your question:
Do you think such biased results would play well in the worlds of 'healthcare, transportation, the environment, and benefits delivery'?
I'm not sure how bias enters into the equation here. Perhaps part of the problem is that so many mundane, transactional things have become politicized. None of these issues should be impacted by partisanship, or, put another way, I do not know how supporting one side or the other would impact these very practical topics.
But it would appear that your concerns are already being addressed:
The federal government is also establishing strong guardrails to ensure its use of AI keeps people safe and doesn’t violate their rights.
It is also important to note that there is no one "AI" mega entity. The AI software Google was trialling is not the same as the one the government uses. So, just because bias exists on one platform doesn't mean it exists in another. And, given that Google is a business, not a government agency bound to serve and represent everyone, I expect that if they had wanted to keep that bias in their system they very well could have, as a matter of freedom of speech. Just like it's on us to evaluate the bias in the news we read, so shall it be on us to evaluate the bias in the AI tools we use. Some users / audiences will welcome the bias - look at how well Breitbart and Huff Post do / did (sorry, with so many media giants going down these days, I was suddenly unsure if these venues were still in business...lol).
Basically, it's a complicated issue, and I think it's good that people are calling things out and that companies are listening to that feedback. But I think it's important to start by understanding what to actually be concerned about, based on what AI is today. This Google thing appears to be part of the knee-jerk reaction cycle that happens any time either side sees something that doesn't pass their personal sniff test - nope, I don't like it, it's the devil, burn it with fire... lol. But there's a lot of good AI can do, and I feel like we all benefit by being a little bit informed, so as not to throw the baby out with the bathwater.