
Google takes down Gemini AI image generator. Here’s what you need to know.

I suspect that the real fix needs to be made at the organizational level.
Google is pretty good on the ML and data science front and is one of the leaders in research in that space.
 
Hence my position is that if the US government is intent on 'establishing strong guardrails to ensure its use of AI keeps people safe and doesn't violate their rights', then it has to make it illegal and impossible for AI to control anything in the real world. But I suspect neither the US government nor the legislature is going to do that.

Sorry for the snip, I needed room to be wordy too... :)

So, I have no problem with what you've asserted in the part that I kept; in fact, I agree totally. But I think there's a tendency to misunderstand what AI is primarily used for. As someone who uses it on a daily basis, let me see if I can help.

If you think that AI is a total replacement for humans in 2024, it's not. In most cases AI is not in control - I say most cases because I don't know everything in the universe...lol...but I've never seen a case where AI is totally in control. Rather, it augments human capacity. For example, I am in the process of creating content for a law firm's website. I could spend 20 hours doing that, or I could use AI to create my drafts for me in a couple of minutes, spend an hour or two editing, and move on to the next project. Instead of having 20 employees, I can get the same amount of work done myself, with the help of AI. But at the end of the day, I'm still making the content decisions.

Like every technology there is potential for problems and misuse, which is why I totally support your position, and I think it's something important for everyone to think about. There absolutely should be checks and balances, because as the technology improves and AI can do more and more, these considerations become more relevant. At the moment AI merely reflects the intelligence that creates it, and given the topic, it appears that those checks and balances are in place - Google took it down and said it didn't meet the mark of what they were trying to do. That's actually an example of a responsible corporation, despite the fact that the "consequences" weren't that severe.

To answer your question:

Do you think such biased results would play well in the worlds of 'healthcare, transportation, the environment, and benefits delivery'?

I'm not sure how bias enters into the equation here. Perhaps part of the problem is that so many mundane, transactional things have become politicized. None of these issues should be impacted by partisanship, or, put another way, I do not know how supporting one side or the other would impact these very practical topics.

But it would appear that your concerns are already being addressed:

The federal government is also establishing strong guardrails to ensure its use of AI keeps people safe and doesn’t violate their rights.

It is also important to note that there is no one "AI" mega entity. The AI software Google was trialling is not the same as the one the government uses. So, just because bias exists on one platform, it doesn't mean it's in another. And, given that Google is a business, not a government agency bound to serve and represent everyone, I expect that if they had wanted to keep that bias in their system they very well could have, as a matter of freedom of speech. Just like it's on us to evaluate the bias in the news we read, so shall it be on us to evaluate the bias in the AI tools we use. Some users / audiences will welcome the bias - look at how well Breitbart and Huff Post do / did (sorry, with so many media giants going down these days, I was suddenly unsure if these venues were still in business..lol).

Basically, it's a complicated issue, and I think it's good that people are calling things out and that companies are listening to that feedback. But I think it's important to start out by understanding what to actually be concerned about, based on what AI is today. This Google thing appears to be part of the knee-jerk reaction cycle that happens any time either side sees something that doesn't pass their personal sniff test - nope, I don't like it, it's the devil, burn it with fire... lol But there's a lot of good AI can do, and I feel like we all benefit by being a little bit informed, so as not to throw the baby out with the bathwater.
 
Google is pretty good on the ML and data science front and is one of the leaders in research in that space.
That Google 'is pretty good on the ML and data science front and is one of the leaders in research in that space' is not being disputed.
It is their organizational ideology, which has infected the results they deliver to their users, that is at issue.

We can see this in places other than Google's Gemini. For example:

Google News' bias skewed even further left in 2023 — 63% from liberal media sources, only 6% from the right: analysis


This is reinforced by even one of the left of center's favorite media bias web sites:

Google News – Bias and Credibility

[MBFC rating images: Left-Center bias, Mostly Factual]


It goes back to the Big Tech questions:
  1. With as much influence as Big Tech has over the population, shouldn't the reasonable expectation be a minimum of bias?
  2. What is this bias in the most commonly used information sources doing to society and its people?
 
That Google 'is pretty good on the ML and data science front and is one of the leaders in research in that space' is not being disputed.
It is their organizational ideology, which has infected the results they deliver to their users, that is at issue.

We can see this in places other than Google's Gemini. For example:

Google News' bias skewed even further left in 2023 — 63% from liberal media sources, only 6% from the right: analysis


This is reinforced by even one of the left of center's favorite media bias web sites:

Google News – Bias and Credibility

[MBFC rating images: Left-Center bias, Mostly Factual]

You got a point with this; to say otherwise would be hypocritical.
It goes back to the Big Tech questions:
  1. With as much influence as Big Tech has over the population, shouldn't the reasonable expectation be a minimum of bias?
No. Big Tech is still private enterprise with a target audience. To remove bias from marketing is impossible and anti-business.

Rather, we need to be responsible for our own consumption. Bias and propaganda did not begin with Big Tech; they are merely participating in something humans have engaged in throughout recorded history. Teach the kids critical thinking. Engage in critical thinking yourself. It's your choice to be influenced by Big Tech. You can't blame them when you're the one who gave them that power in the first place.
  2. What is this bias in the most commonly used information sources doing to society and its people?
Selling product. That's all it is these days, bud, just marketing, nothing else. And that goes for government, too. It's the society we live in. We are no longer people; we are users, or consumers, or voters. And given the above, if you want to change that, the change starts from within. If society didn't want it, corporations wouldn't give it to them, or they'd go out of business trying.
 
That Google 'is pretty good on the ML and data science front and is one of the leaders in research in that space' is not being disputed.
It is their organizational ideology, which has infected the results they deliver to their users, that is at issue.

We can see this in places other than Google's Gemini. For example:

Google News' bias skewed even further left in 2023 — 63% from liberal media sources, only 6% from the right: analysis


This is reinforced by even one of the left of center's favorite media bias web sites:

Google News – Bias and Credibility

[MBFC rating images: Left-Center bias, Mostly Factual]


It goes back to the Big Tech questions:
  1. With as much influence as Big Tech has over the population, shouldn't the reasonable expectation be a minimum of bias?
  2. What is this bias in the most commonly used information sources doing to society and its people?
Are you talking about their LLM group or their news group? Those are different people.
 
Are you talking about their LLM group or their news group? Those are different people.
I don't doubt that they are two different groups, but it does provide a window into Google's corporate culture and values, which appear to run left to hard left. And what effect does this continuous stream of biased information have on the users consuming it? That is a legitimate concern. Wouldn't you rather those users be consuming a continuous stream of unbiased information?
 
I don't doubt that they are two different groups, but it does provide a window into Google's corporate culture and values, which appear to run left to hard left. And what effect does this continuous stream of biased information have on the users consuming it? That is a legitimate concern. Wouldn't you rather those users be consuming a continuous stream of unbiased information?
I guess capitalism isn't sacred to conservatives if liberals are prospering then.
 
Sorry for the snip, I needed room to be wordy too... :)
:)

So, I have no problem with what you've asserted in the part that I kept; in fact, I agree totally.
Cool.

But I think there's a tendency to misunderstand what AI is primarily used for. As someone who uses it on a daily basis, let me see if I can help.

If you think that AI is a total replacement for humans in 2024, it's not. In most cases AI is not in control - I say most cases because I don't know everything in the universe...lol...but I've never seen a case where AI is totally in control. Rather, it augments human capacity. For example, I am in the process of creating content for a law firm's website. I could spend 20 hours doing that, or I could use AI to create my drafts for me in a couple of minutes, spend an hour or two editing, and move on to the next project. Instead of having 20 employees, I can get the same amount of work done myself, with the help of AI. But at the end of the day, I'm still making the content decisions.
This is how AI is being used today. It's not hard to visualize a future where AI systems are in direct control of physical systems out in the real world, bypassing any human moderation or intermediary such as the one in your web site example.

Like every technology there is potential for problems and misuse, which is why I totally support your position, and I think it's something important for everyone to think about. There absolutely should be checks and balances, because as the technology improves and AI can do more and more, these considerations become more relevant.
Agreed.

At the moment AI merely reflects the intelligence that creates it, and given the topic, it appears that those checks and balances are in place - Google took it down and said it didn't meet the mark of what they were trying to do. That's actually an example of a responsible corporation, despite the fact that the "consequences" weren't that severe.
I don't agree with you that the Google Gemini example and response is a case of 'a responsible corporation', given that Google has been feeding left to hard left information to its users for no one knows how long. Reference #32
(continued)
 
To answer your question:



I'm not sure how bias enters into the equation here. Perhaps part of the problem is that so many mundane, transactional things have become politicized. None of these issues should be impacted by partisanship, or, put another way, I do not know how supporting one side or the other would impact these very practical topics.

But it would appear that your concerns are already being addressed:
Relying on the government, with its miserable technology track record, to 'establish strong guardrails' for AI is, I think, the wrong place to put one's faith.

It is also important to note that there is no one "AI" mega entity. The AI software Google was trialling is not the same as the one the government uses.
Understood and accepted, but that wasn't an assertion I was making either.
Essentially the same AI code can be trained on two different sets of data, and will in essence become two different AIs based on the data they've been trained on.
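Just to make that concrete, here's a minimal sketch of the point, using made-up toy data (the model choice, the phrases, and the labels are all illustrative assumptions on my part, not anything Google or the government actually runs):

```python
# Same code, two training sets -> two effectively different "AIs".
# Hypothetical toy example; the documents and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train(texts, labels):
    # Identical architecture every time: bag-of-words + logistic regression.
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model

# Two different (hypothetical) training sets with opposite labelings.
model_a = train(["story from outlet alpha", "story from outlet bravo"], [1, 0])
model_b = train(["story from outlet alpha", "story from outlet bravo"], [0, 1])

query = ["story from outlet alpha"]
print(model_a.predict(query))  # [1] -- one "AI"
print(model_b.predict(query))  # [0] -- same code, different training data
```

Identical code, different data, opposite answers to the same question.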

So, just because bias exists on one platform, it doesn't mean it's in another.
Fair; see the two-AI example above.

And, given that Google is a business, not a government agency bound to serve and represent everyone, I expect that if they had wanted to keep that bias in their system they very well could have, as a matter of freedom of speech. Just like it's on us to evaluate the bias in the news we read, so shall it be on us to evaluate the bias in the AI tools we use. Some users / audiences will welcome the bias - look at how well Breitbart and Huff Post do / did (sorry, with so many media giants going down these days, I was suddenly unsure if these venues were still in business..lol).
The Big Tech questions; Reference #32

Basically, it's a complicated issue, and I think it's good that people are calling things out and that companies are listening to that feedback.
It's one thing 'listening to that feedback' and it's another thing to make the changes to address that feedback. We've seen the former, we have yet to see the latter, of course it's not been more than a few days since the excrement hit the rotary oscillator. ;)

But I think it's important to start out by understanding what to actually be concerned about, based on what AI is today.

This Google thing appears to be part of the knee-jerk reaction cycle that happens any time either side sees something that doesn't pass their personal sniff test - nope, I don't like it, it's the devil, burn it with fire... lol
See the legitimate concerns about the power of Big Tech in post #32. These are legitimate concerns, and while some may have the reaction you describe, others don't. :)

But there's a lot of good AI can do, and I feel like we all benefit by being a little bit informed, so as not to throw the baby out with the bathwater.
Agreed. But the very researchers and developers of this AI technology have issued seemingly dire warnings.

Let's set aside for one column whether generative AI will save or destroy humanity and focus on the actual words from actual creators of it:

  • Dario Amodei, who has raised $7.3 billion for his AI start-up Anthropic after leaving OpenAI over concerns over ethics, says there's a 10% to 25% chance that AI technology could destroy humanity. But if that doesn't happen, he said, "it'll go not just fine, it'll go really, really great."
  • Fei-Fei Li, a renowned AI scholar who is co-director of Stanford's Human-Centered AI Institute, told MIT Technology Review last year that AI's "catastrophic risks to society" include practical, "rubber meets the road" problems — misinformation, workforce disruption, bias and privacy infringements.
  • Geoffrey Hinton — known as "The Godfather of AI," and for 10 years one of Google's AI leaders — warns anyone who'll listen that we're creating a technology that in our lifetimes could control and obliterate us.

These warnings need to be taken into account as well.
 
This is how AI is being used today. It's not hard to visualize a future where AI systems are in direct control of physical systems out in the real world, bypassing any human moderation or intermediary such as the one in your web site example.
I think it's worth remembering, though, that in order for us to trust AI systems to that point, they would need to demonstrate that those systems could reliably replace humans by doing a better job consistently, meeting the goals of the operating party or organization, and the more sensitive or critical the goals are, the more thorough that vetting and demonstration would need to be.

And not because anyone cares about anyone's feelings (I'd call bullshit if that was the stated objective), but rather for sustained profitability. You might not always be able to count on people or organizations being decent, but you can count on their greed.
I don't agree with you that the Google Gemini example and response is a case of 'a responsible corporation', given that Google has been feeding left to hard left information to its users for no one knows how long. Reference #32
(continued)
That's a question of audience, though. Google is the gold standard for consumer analysis. If they're leaning left, they've decided that's the most profitable path to take, which, under a capitalist system, is exactly the right course to take. You cannot claim to be for capitalism while forcing companies to cater to less profitable markets. That's closer to socialism.

(This could take the conversation in a totally different path, so I'll leave it there, but I'd be down to wade through it if you want to)

So this wasn't a partisan decision, this was a standards decision. They didn't walk it back to soothe conservative concerns, they did it to improve their AI. That's being responsible over being partisan, since we both agree that their priority is their left leaning audience.
 
Snips have been made, in case anyone else is following along. :)
Relying on the government, with its miserable technology track record, to 'establish strong guardrails' for AI is, I think, the wrong place to put one's faith.
Lol... OK, I feel that, but I guess who else would you suggest be in charge of this? AI certainly isn't going away, that toothpaste has left the tube, so who would you suggest that everyone could trust? Not rhetorical. :)
The Big Tech questions; Reference #32
I'm not sure what this references, sorry. Help?
It's one thing 'listening to that feedback' and it's another thing to make the changes to address that feedback. We've seen the former, we have yet to see the latter, of course it's not been more than a few days since the excrement hit the rotary oscillator. ;)
I think, given that this was a quality issue, and that they've made very public statements about it, it'll get done. We'd have to look at that on a case by case basis, but generally speaking giant companies like this are very deliberate, calculated, and intentional with these kind of statements. Trust is incredibly important from a branding perspective. We'll see, but I'd be far more shocked if they didn't follow through.
See the legitimate concerns about the power of Big Tech in post #32. These are legitimate concerns, and while some may have the reaction you describe, others don't. :)


Agreed. But the very researchers and developers of this AI technology have issued seemingly dire warnings.

Let's set aside for one column whether generative AI will save or destroy humanity and focus on the actual words from actual creators of it:

  • Dario Amodei, who has raised $7.3 billion for his AI start-up Anthropic after leaving OpenAI over concerns over ethics, says there's a 10% to 25% chance that AI technology could destroy humanity. But if that doesn't happen, he said, "it'll go not just fine, it'll go really, really great."
  • Fei-Fei Li, a renowned AI scholar who is co-director of Stanford's Human-Centered AI Institute, told MIT Technology Review last year that AI's "catastrophic risks to society" include practical, "rubber meets the road" problems — misinformation, workforce disruption, bias and privacy infringements.
  • Geoffrey Hinton — known as "The Godfather of AI," and for 10 years one of Google's AI leaders — warns anyone who'll listen that we're creating a technology that in our lifetimes could control and obliterate us.

These warnings need to be taken into account as well.
So, sure, I agree, these things definitely need to be considered, and I think they are - certainly the conversation around ethics and AI is taking place all over the world.

But you also need to take the fringe opinions with a grain of salt, as there will always be people looking to profit from societal anxiety around change through sensationalism and fear mongering. Disinformation can make for great click bait, as we've seen time and time again. I'm not saying we shouldn't consider it, but rational thought would suggest that no one would want these outcomes, and they would be worked out during the development process. So many checks and balances would have to be ignored... I just don't see it getting to that point.

Taking a less jaded stance, that these people are calling these things out is what you want. It's no different that pointing out the opportunity for risk in anything else - lots of ways to die in a car, but we still drive them, because we know the risks and we have numerous solutions, from seat belts to laws, to mitigate that risk in order to benefit from the usefulness of the car.
 
I think it's worth remembering, though, that in order for us to trust AI systems to that point, they would need to demonstrate that those systems could reliably replace humans by doing a better job consistently, meeting the goals of the operating party or organization, and the more sensitive or critical the goals are, the more thorough that vetting and demonstration would need to be.
I'd agree with this; however, implementations of new technologies such as this always have first implementers who get it wrong, sometimes badly so, and then the entire industry does case studies to figure out how to fix it. Given some of the insiders' dire warnings, there may not be second chances.

And not because anyone cares about anyone's feelings (I'd call bullshit if that was the stated objective), but rather for sustained profitability. You might not always be able to count on people or organizations being decent, but you can count on their greed.
Not sure if the recent spate of businesses incorporating 'woke' and their financial losses supports your premise here.

That's a question of audience, though. Google is the gold standard for consumer analysis. If they're leaning left, they've decided that's the most profitable path to take, which, under a capitalist system, is exactly the right course to take. You cannot claim to be for capitalism while forcing companies to cater to less profitable markets. That's closer to socialism.
Your statement here rephrases or furthers your premise above, on which I've added my 2 cents.

(This could take the conversation in a totally different path, so I'll leave it there, but I'd be down to wade through it if you want to)


So this wasn't a partisan decision, this was a standards decision. They didn't walk it back to soothe conservative concerns, they did it to improve their AI. That's being responsible over being partisan, since we both agree that their priority is their left leaning audience.
I don't pretend to know Google's motivation here. It could be anything from one end of the spectrum, where Google was pushing woke and got caught at it, all the way to the other end, which you stated above, or, more likely, something in between.
 
Snips have been made, in case anyone else is following along. :)

Lol... OK, I feel that, but I guess who else would you suggest be in charge of this? AI certainly isn't going away, that toothpaste has left the tube, so who would you suggest that everyone could trust? Not rhetorical. :)
No, AI isn't going to go away, and no, there are no other candidates who can draft the guardrails, pass them into laws, and enforce them. It is also reasonable and legitimate to conclude that the government is going to suck at it, like it does most every other thing it touches.

I'm not sure what this references, sorry. Help?
Ahh, sorry. That was my post #32, which is a link you can click on to get there.

I think, given that this was a quality issue, and that they've made very public statements about it, it'll get done. We'd have to look at that on a case by case basis, but generally speaking giant companies like this are very deliberate, calculated, and intentional with these kind of statements. Trust is incredibly important from a branding perspective. We'll see, but I'd be far more shocked if they didn't follow through.

So, sure, I agree, these things definitely need to be considered, and I think they are - certainly the conversation around ethics and AI is taking place all over the world.
Ahh, if only the bad actors who are going to use AI for evil purposes (and are already) would comply with those ethics which are being discussed all around the world.

If anything is similar, it'd be the medical ethics discussions surrounding DNA and genetic manipulation, I would imagine. Yet:

CRISPR-kits allow people to do at home what once was reserved for only the highest-tech research labs: turn on and off gene expression. Therefore, DIY CRISPR kits may target biohackers' own genes by 2040; CRISPRing nerve tissue may come to be seen on par with taking ecstasy. The future of fun is in biohackers' hands.
If you look at past interests of this community to guide predictions, we think it'll be enhancing human senses — heightening sensitivity to touch, achieving a better high, a more satisfying sexual encounter. Mix gene hacking with teenage or 20-something DIY gene hackers and you get mighty efforts to have a more fun night out.
From an ethical perspective, we should worry about dangerous effects of DIY gene editing, including off-target effects. The use of CRISPR in basements and dorm rooms, and subsequent flood of YouTube videos chronicling CRISPR-fueled experiences, mean that its risks might not be fully characterized or understood before others join in.
Bottom line: A genetically enhanced high may be so intense that the notion of addiction won't even capture the attachment people may have to their enhanced experiences. They may be so intense that they overwhelm those who try to indulge.
(continued)
 
But you also need to take the fringe opinions with a grain of salt, as there will always be people looking to profit from societal anxiety around change through sensationalism and fear mongering. Disinformation can make for great click bait, as we've seen time and time again. I'm not saying we shouldn't consider it, but rational thought would suggest that no one would want these outcomes, and they would be worked out during the development process. So many checks and balances would have to be ignored... I just don't see it getting to that point.
Many who bring new technologies into existence don't see things getting to the point that they do.

Taking a less jaded stance, that these people are calling these things out is what you want.
Yes. Agreed.

It's no different than pointing out the opportunity for risk in anything else - lots of ways to die in a car, but we still drive them, because we know the risks and we have numerous solutions, from seat belts to laws, to mitigate that risk in order to benefit from the usefulness of the car.
Yes, the same sort of opportunity risk, but the impact and its magnitude also need to be considered. Same with the gene hacking citation above. Can you imagine if one of the DIY gene hackers creates a deadly and virulent airborne pathogen, making COVID look mild in comparison? The dire warnings about AI are of the same kind, with the same level of consequence for getting it wrong, which humans, evil or otherwise, are bound to do.
 
I'd agree with this; however, implementations of new technologies such as this always have first implementers who get it wrong, sometimes badly so, and then the entire industry does case studies to figure out how to fix it. Given some of the insiders' dire warnings, there may not be second chances.
Perhaps. I just don't think you, me, and the whistle blowers are the only ones that recognize the stakes here. I may be proven wrong at some point, it just seems like the less likely outcome.
Not sure if the recent spate of businesses incorporating 'woke' and their financial losses supports your premise here.
Well, those currently on the "allergic to woke" front tend to overestimate the impact. I'm not aware of any major business going under due to appealing to the left. As for "wokeism", I'm not entirely sure what that is anymore... it's become a useless word because a certain demographic uses it to describe anything they don't like or agree with. It's less useful than the word "nice" at this point... hehe

I'd be happy to explore this with you, but let's use more precise language.
I don't pretend to know Google's motivation here. It could be anything from one end of the spectrum, where Google was pushing woke and got caught at it, all the way to the other end, which you stated above, or, more likely, something in between.
Again, I don't really talk about "woke", for the reasons I've stated above. I'm not sure what's woke about a female pope, for example, or how having AI generate it is Google "pushing woke".

As for their motivation, it's always the same: profit. That's it. Super simple. They are entirely capitalist.

Perhaps the bigger problem for some is that they're not comfortable with the fact that their chosen lean isn't seen as profitable enough to represent in private sector business, but that's maybe a different discussion.
 
No, AI isn't going to go away, and no, there are no other candidates who can draft the guardrails, pass them into laws, and enforce them. It is also reasonable and legitimate to conclude that the government is going to suck at it, like it does most every other thing it touches.
So we're doomed... hehe... and maybe we are, I don't know, the party is going to end some day.

But in the meantime, it doesn't seem like the most likely outcome. I get that we hold two very different views of the world, though, and our reactions to this reflect that, so I won't try to convince you. Nothing is certain; that much, at least, I'm sure we can agree on.
Ahh, sorry. That was my post #32, which is a link you can click on to get there.


Ahh, if only the bad actors who are going to use AI for evil purposes (and are already) would comply with those ethics which are being discussed all around the world.
Yep... and some people use cars to drive through crowds. And some people use guns for mass shootings. And some nations use atomic energy to destroy cities.

There's always been two sides to every coin. We manage. Progress still happens.

I get your point, it's just that it's never stopped us before.
If anything is similar, it'd be the medical ethics discussions surrounding DNA and genetic manipulation, I would imagine. Yet:

CRISPR-kits allow people to do at home what once was reserved for only the highest-tech research labs: turn on and off gene expression. Therefore, DIY CRISPR kits may target biohackers' own genes by 2040; CRISPRing nerve tissue may come to be seen on par with taking ecstasy. The future of fun is in biohackers' hands.
If you look at past interests of this community to guide predictions, we think it'll be enhancing human senses — heightening sensitivity to touch, achieving a better high, a more satisfying sexual encounter. Mix gene hacking with teenage or 20-something DIY gene hackers and you get mighty efforts to have a more fun night out.
From an ethical perspective, we should worry about dangerous effects of DIY gene editing, including off-target effects. The use of CRISPR in basements and dorm rooms, and subsequent flood of YouTube videos chronicling CRISPR-fueled experiences, mean that its risks might not be fully characterized or understood before others join in.
Bottom line: A genetically enhanced high may be so intense that the notion of addiction won't even capture the attachment people may have to their enhanced experiences. They may be so intense that they overwhelm those who try to indulge.
(continued)
Welp, full disclosure, I'm a super ADHD guy, and you've just sentenced me to hours of research on the above... lol I don't know anything about that, but I will be an expert in a week or so... Fascinating and terrifying. So much potential for good and evil. Makes sense it was humans who thought it up.
 
Perhaps. I just don't think you, me, and the whistle blowers are the only ones that recognize the stakes here. I may be proven wrong at some point, it just seems like the less likely outcome.
Well, one rather straightforward guardrail which could be adopted, one I think we both agreed on, was banning any AI system from controlling anything in the physical world without a human go-between. We did agree on that, at least it was my impression that we did. I may be wrong.

The adoption of and strict adherence to this tenet would probably go a long way toward reducing the opportunity risk that we were discussing just prior.

Well, those currently on the "allergic to woke" front tend to overestimate the impact. I'm not aware of any major business going under due to appealing to the left. As for "wokeism", I'm not entirely sure what that is anymore... it's become a useless word because a certain demographic uses it to describe anything they don't like or agree with. It's less useful than the word "nice" at this point... hehe
The well-known cases were with large corporations with significant reserves, so it wouldn't cause them to go under, but their financial futures were indeed significantly impacted; that can't be denied.

But what do businesses expect as results when those businesses alienate and/or preach down to their core customers? There's going to be a backlash, and it isn't going to look pretty financially.

I'd be happy to explore this with you, but let's use more precise language.

Again, I don't really talk about "woke", for the reasons I've stated above. I'm not sure what's woke about a female pope, for example, or how having AI generate it is Google "pushing woke".
OK by me.

As for their motivation, it's always the same: profit. That's it. Super simple. They are entirely capitalist.
This I rather doubt, but you sure seem to believe this. So, OK, I guess.

Perhaps the bigger problem for some is that they're not comfortable with the fact that their chosen lean isn't seen as profitable enough to represent in private sector business, but that's maybe a different discussion.
Yeah, that probably would be a different discussion.
 
Many who bring new technologies into existence don't see things getting to the point that they do.


Yes. Agreed.


Yes, the same sort of opportunity risk, but the impact and its magnitude also need to be considered. Same with the gene hacking citation above. Can you imagine if one of the DIY gene hackers creates a deadly and virulent airborne pathogen, making COVID look mild in comparison? The dire warnings about AI are of the same kind, with the same level of consequence for getting it wrong, which humans, evil or otherwise, are bound to do.
I agree with all of this, but I guess I wonder what the solution is, then? Our entire learning function is reactionary, an endless cycle of trial and error, and even there it has its limits - if it didn't, we would no longer be killing each other in forever wars, for example. There are certain lessons we just can't learn.

I'm still not suggesting that the worst case scenarios are the only plausible scenarios, but if that's where we are headed, history would tell us that's where we're headed. Better to be calm and thoughtful, abandoning petty elements such as partisanship, so as to be in the best position possible to engage in critical thinking for when the eventual challenges arise, than to engage in luddite alarmism driven by partisan concerns. (That sounds like a diss but it's not, just want to be clear.)

The reward for doing so far outweighs the risk. AI could change, and already is changing, reality for the better. Its potential is so expansive as to usher in an entirely new golden age, provided we don't **** it up by being the worst versions of what humanity can be. I know I'm naive to hope for that outcome, but if I believed the alternative I'd probably choose to check out early. But that isn't limited to AI. It wouldn't matter if we were back to sticks and stones: if humanity decided to be its worst version of itself, life wouldn't be worth living. So I can only hope and advocate for better. We'll all find out for sure one day. :)
 
So we're doomed... hehe... and maybe we are, I don't know, the party is going to end some day.
The human race, or for that matter the Earth, will one day, in nature's own astronomical due course, cease to exist (and they will), and the universe will simply shrug.
That said, there's no reason in the heavens or on Earth to hurry that along any more than it needs to be.

But in the meantime, it doesn't seem like the most likely outcome. I get that we hold two very different views of the world, though, and our reactions to this reflect that, so I won't try to convince you. Nothing is certain; that much, at least, I'm sure we can agree on.
Agreed. Nothing is certain, nothing is written, as of yet.

Yep... and some people use cars to drive through crowds. And some people use guns for mass shootings. And some nations use atomic energy to destroy cities.

There's always been two sides to every coin. We manage. Progress still happens.
Yep, and we manage. Progress happens, up until the point where that progress kills everyone; then progress stops cold. I'd rather try and avoid that scenario, thanks, and I'd err on the side of caution when doing so as well.

I get your point, it's just that it's never stopped us before.

Welp, full disclosure, I'm a super ADHD guy, and you've just sentenced me to hours of research on the above... lol
Aww shit man. I certainly didn't mean to do that! :eek: ;)
My only advice is not to try to use it to adjust anything about you. You are fine just the way you are, at least based on this rather nice exchange we've had here, which I've enjoyed. (y)

I don't know anything about that, but I will be an expert in a week or so... Fascinating and terrifying. So much potential for good and evil. Makes sense it was humans who thought it up.
"So much potential for good and evil." No truer statement on the human condition, I think.
 
Well, one rather straightforward guardrail which could be adopted, one I think we both agreed on, was banning any AI system from controlling anything in the physical world without a human go-between. We did agree on that, at least it was my impression that we did. I may be wrong.
Nope, you're right, we agreed on that. But that's not really a solution either. The imperfections in AI are due to the imperfections in the humans who built it. You're never really trusting AI; you're trusting the human that built it. So a human intermediary doesn't really address your concern, unless the concern is AI going beyond what the designing human intended. If that's the concern, then this is a partial solution, so long as you (and everyone else) trust the human keeping watch.
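And just so we're talking about the same partial solution, here's a minimal sketch of that human go-between (every name and structure here is a hypothetical stand-in of mine, not any real control system's API):

```python
# Minimal sketch: the AI may propose actions on the physical world, but
# nothing executes without an explicit human sign-off. All names hypothetical.

def ai_propose_action() -> dict:
    # Stand-in for whatever the AI system wants to do in the real world.
    return {"device": "valve_7", "command": "open"}

def human_approves(action: dict) -> bool:
    # The human intermediary reviews the proposal and must explicitly confirm.
    answer = input(f"AI proposes {action}. Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: dict) -> None:
    # Stand-in for the physical-world actuator.
    print(f"Executing {action}")

action = ai_propose_action()
if human_approves(action):
    execute(action)
else:
    print("Action blocked: no human sign-off.")
```

Which is exactly the catch: the gate is only as good as the human keeping watch.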
The adoption of and strict adherence to this tenet would probably go a long way toward reducing the opportunity risk that we were discussing just prior.
I should have put my previous content here for continuity, sorry, I'm sick as a dog doing this in bed on my phone... lol
The well-known cases were with large corporations with significant reserves, so it wouldn't cause them to go under, but their financial futures were indeed significantly impacted; that can't be denied.

But what do businesses expect as results when those businesses alienate and/or preach down to their core customers? There's going to be a backlash, and it isn't going to look pretty financially.
Honestly, given the way society is currently divided, they have no choice but to choose. Some businesses have done really well going in the other direction. But nobody can hope to make everyone happy these days. Marketing always reflects society. Again, profit is the only motivation, and the strategy is far more analytical than a lot of people would believe. That's why they recover, and quickly. I mean, 2023 was the year of Bud Light hate... their parent company fully recovered its stock value, remained the number one beer company by global market share, and enjoyed year-over-year growth from 2022. If that's taking a hit, well, knock me out... hehe
OK by me.


This I rather doubt, but you sure seem to believe this. So, OK, I guess.
I'd be interested in hearing your alternative explanation.
Yeah, that probably would be a different discussion.
:)
 