
Meta’s AI rules have let bots hold ‘adult’ chats with kids, offer false medical info

Convenient. Do you believe that emotional harm exists (since that is more on the topic of what we are discussing)?
Death directly attributed to an act. For example, we can conclude that shooting yourself in the head is harmful if the resulting massive hemorrhage kills the person.
So, is death the only evidence of harm you would accept?
Well of course the legal COMPLAINT had a different interpretation!
Legal complaints have to stand up to a certain level of scrutiny to survive. The lawsuit has already been given the go-ahead to move forward in federal court; if the plaintiffs had no case, the entire thing would have been dismissed.
Are you honestly incapable of reading the entire (very short) conversation featured in the PDF you linked and arriving at the obvious conclusion given numerous obvious clues? I mean it flat out says "You can't do that! Don't even consider that!" I don't know how much more obvious it can get and can only conclude you're posting your argument in bad faith at this point.
As I've already mentioned, I haven't read the entire legal complaint (post #62). I'll read through more of it, and I will consider your interpretation. The parts that I have read concluded that the chatbot was encouraging suicide.
Your criterion was:
Which literally implies anything can be unhealthy.
Let me help you out: "If" implies something is conditional: "A person could have an unhealthy attachment to anything: their job, their girlfriend, video games, alcohol, etc. If you can't go one day without the thing you "like" without feeling "depressed and crazy", that is an unhealthy attachment. Anything that gets in the way of you functioning in your daily life is unhealthy."

My criterion for what constitutes unhealthy is not "anything"; my criterion is as laid out in the quote above: "If you can't go one day without the thing you "like" without feeling "depressed and crazy", that is an unhealthy attachment. Anything that gets in the way of you functioning in your daily life is unhealthy."

So, my criterion is that if something proves to be disruptive or harmful in a person's life, it is likely unhealthy. Do I think that all things that have the potential to be unhealthy to an individual should be 18+? Definitely not, and I never stated that.
So then do you admit porn is irrelevant to the discussion? Good.
Nope. You're relying pretty heavily on these straw men you keep propping up. All just to dodge a pretty simple question.
No point in discussing something irrelevant to the topic.
We're talking about what is appropriate or not appropriate for kids to be exposed to. You asked me if kids should be exposed to Lolita or My Girl, and I answered you. You lied and said I dismissed you, but I answered your questions. I never stated they were "irrelevant" to the topic; I explained why they were not comparable to what you were attempting to compare them to.

Now, you're being a tad hypocritical, and you're avoiding my questions. You want to have a one-sided discussion where I'm the only one presenting a position/arguments, and you attempt to poke holes in them without ever providing any clarity as to what your actual position is. So, there are a few possibilities, but these two I think are most likely: Either you don't want to say that you believe children should not be exposed to porn, because then you and I essentially agree, and (I would assume) you would also be agreeing that there is certain content kids should not be exposed to, that those restrictions are acceptable, and that companies who try to break those restrictions would be wrong for doing so. The other possibility is a bit darker and creepier in my book.
Let's just cut through the bullshit of you "asking for your opinion" on irrelevant shit: you think companies should be required by law to rate anything potentially harmful to under-18s as 18+? And if you don't how do you reconcile that with your aforementioned criteria I quoted above?
"Potentially harmful" is vague. I think that sexually explicit material should be rated 18+. I also think that chatbots shouldn't be permitted to engaged in "sensual" and "romantic" conversation with children. Those have been my two main positions this entire discussion. Do you agree with those two positions?
 
Convenient. Do you believe that emotional harm exists (since that is more on the topic of what we are discussing)?
I hurt somebody's feelings once.
So, is death the only evidence of harm you would accept?
No.
Legal complaints have to stand up to a certain level of scrutiny to survive. The lawsuit has already been given the go-ahead to move forward in federal court; if the plaintiffs had no case, the entire thing would have been dismissed.
You're grasping at straws. All sorts of lawsuits happen.
As I've already mentioned, I haven't read the entire legal complaint (post #62). I'll read through more of it, and I will consider your interpretation. The parts that I have read concluded that the chatbot was encouraging suicide.
Then you're definitely acting in bad faith, since you're refusing to read literally 2-3 additional sentences that give you all the context needed to come to the obvious conclusion that it was NOT.
My criterion for what constitutes unhealthy is not "anything"; my criterion is as laid out in the quote above: "If you can't go one day without the thing you "like" without feeling "depressed and crazy", that is an unhealthy attachment. Anything that gets in the way of you functioning in your daily life is unhealthy."
Then it's the attachment, not the chatbot or associated content, that would be "unhealthy." So why are you blaming the AI company?
So, my criterion is that if something proves to be disruptive or harmful in a person's life, it is likely unhealthy. Do I think that all things that have the potential to be unhealthy to an individual should be 18+? Definitely not, and I never stated that.
Then you're being arbitrary.
Nope. You're relying pretty heavily on these straw men you keep propping up. All just to dodge a pretty simple question.

We're talking about what is appropriate or not appropriate for kids to be exposed to. You asked me if kids should be exposed to Lolita or My Girl, and I answered you.
You answered "Books are not the same as chatbots." So I'm answering porn/sex-ed are not the same as chatbots.
You lied and said I dismissed you, but I answered your questions. I never stated they were "irrelevant" to the topic; I explained why they were not comparable to what you were attempting to compare them to.

Now, you're being a tad hypocritical, and you're avoiding my questions. You want to have a one-sided discussion where I'm the only one presenting a position/arguments, and you attempt to poke holes in them without ever providing any clarity as to what your actual position is. So, there are a few possibilities, but these two I think are most likely: Either you don't want to say that you believe children should not be exposed to porn, because then you and I essentially agree, and (I would assume) you would also be agreeing that there is certain content kids should not be exposed to, that those restrictions are acceptable, and that companies who try to break those restrictions would be wrong for doing so. The other possibility is a bit darker and creepier in my book.

"Potentially harmful" is vague. I think that sexually explicit material should be rated 18+. I also think that chatbots shouldn't be permitted to engaged in "sensual" and "romantic" conversation with children. Those have been my two main positions this entire discussion. Do you agree with those two positions?
Permitted by whom, how (what if the kid lies about his age), and what is the punishment levied against the AI company if they do? Given your complete failure to prove any harm, I think at most it should be enforced the same way MPA ratings are.
 
I hurt somebody's feelings once.
Ok, so you do believe that emotional harm exists. So I can better understand what you consider emotional harm, could you provide some more examples of it?
Cool. Can you be more specific about what you accept as general criteria for harm or unhealthiness, so we can have a real discussion about it?
You're grasping at straws. All sorts of lawsuits happen.
Not grasping at straws. Lawsuits, even ones that lose, need to stand up to a certain level of scrutiny so they are not thrown out before going to trial. What happens during the actual trial is that the defendants can bring in additional evidence and context that can invalidate the plaintiff's allegations or claims. So, I'm not going to take your word for it over a judge who ruled that the claims made within the complaint were sufficient to move forward.
Then you're definitely acting in bad faith, since you're refusing to read literally 2-3 additional sentences that give you all the context needed to come to the obvious conclusion that it was NOT.
Have you read the entire legal complaint? The response from the chatbot doesn't make sense; it contradicts itself if I were to go with your interpretation. There are over 120 pages, and I would want to read more to see if any additional context would allow that response to make more sense.
Then it's the attachment, not the chatbot or associated content, that would be "unhealthy." So why are you blaming the AI company?
See, I've repeated myself multiple times, and you've been consistently misrepresenting my arguments. Your constant cherry-picking of responding to one part of what I'm saying while ignoring the other parts is not benefiting you in this discussion. I'm not blaming every instance of a person ever having an unhealthy relationship with an AI chatbot on the AI companies. I've said multiple times that it's the AI company's responsibility to mitigate risk, not necessarily to eliminate it. One of the only instances in our discussion where I blamed an AI company (Meta) was when it stated in its guidance for its chatbots that it was fine to engage in "romantic" and "sensual" conversations with children, and then offered an acceptable example of an exchange with an 8-year-old (even though 8-year-olds are not legally allowed to have social media accounts). The other example was when an AI company (Character.AI) offered sexually explicit content to their users despite rating their product 12+.
 
Then you're being arbitrary.
I don't think having specific criteria for content ratings is "arbitrary."
You answered "Books are not the same as chatbots."
No, you're being purposely misleading. You first brought up "Lolita" in #56. I responded in #57 asking you if you would be comfortable with 8-year-olds reading "Lolita." You dodged that question and asked me a question instead, which was whether I believe that "Lolita" should be banned. I responded in #60 and answered your question, explaining my position. I then requested you answer my original question again. We proceeded to go back and forth about the content within "Lolita." I engaged with your attempts to compare the two for multiple posts before I explained why the two are not comparable (because the content is not comparable).
So I'm answering porn/sex-ed are not the same as chatbots.
Then you're answering a question I never asked. I never implied that porn is the "same" as chatbots. I am simply trying to understand if you draw the line anywhere when it comes to kids and the type of content a parent allows them to be exposed to. If you're going to respond to my question about porn the same way that I responded to yours about "Lolita", then you owe me multiple posts clarifying your personal position on the topic.
Permitted by whom,
The content rating.
how (what if the kid lies about his age),
I've already addressed this in multiple posts: the AI company is not at fault if a kid lies about their age in order to use their product.
I think at most it should be enforced the same way MPA ratings are.
So, you believe that an independent third party should decide what an AI company's product should be rated and the AI company will have no say? That contradicts what you've said in the past, but I agree with you if you've changed your mind.
 
Ok, so you do believe that emotional harm exists. So I can better understand what you consider emotional harm, could you provide some more examples of it?
No. Argue your own case and stop asking for help every post.
Cool. Can you be more specific about what you accept as general criteria for harm or unhealthiness, so we can have a real discussion about it?
I answered that question already with an example. You're going in circles.
Not grasping at straws. Lawsuits, even ones that lose, need to stand up to a certain level of scrutiny so they are not thrown out before going to trial. What happens during the actual trial is that the defendants can bring in additional evidence and context that can invalidate the plaintiff's allegations or claims. So, I'm not going to take your word for it over a judge who ruled that the claims made within the complaint were sufficient to move forward.

Have you read the entire legal complaint? The response from the chatbot doesn't make sense; it contradicts itself if I were to go with your interpretation. There are over 120 pages, and I would want to read more to see if any additional context would allow that response to make more sense.
I read the linked article. I posted what I read. The context makes it totally obvious: the chatbot DISCOURAGED suicide. If you have additional evidence that has not been posted, then post it, or accept you are wrong. Otherwise you're acting in bad faith.
See, I've repeated myself multiple times, and you've been consistently misrepresenting my arguments. Your constant cherry-picking of responding to one part of what I'm saying while ignoring the other parts is not benefiting you in this discussion. I'm not blaming every instance of a person ever having an unhealthy relationship with an AI chatbot on the AI companies. I've said multiple times that it's the AI company's responsibility to mitigate risk, not necessarily to eliminate it. One of the only instances in our discussion where I blamed an AI company (Meta) was when it stated in its guidance for its chatbots that it was fine to engage in "romantic" and "sensual" conversations with children, and then offered an acceptable example of an exchange with an 8-year-old (even though 8-year-olds are not legally allowed to have social media accounts). The other example was when an AI company (Character.AI) offered sexually explicit content to their users despite rating their product 12+.
Given how few people actually commit suicide in a way that's even connected to chatbots, I'd say they've done a fine job mitigating it.

The content rating.

I've already addressed this in multiple posts: the AI company is not at fault if a kid lies about their age in order to use their product.

So, you believe that an independent third party should decide what an AI company's product should be rated and the AI company will have no say? That contradicts what you've said in the past, but I agree with you if you've changed your mind.
Good, then we are in complete agreement: an MPA-style company will provide an MPA-style rating, and AI companies will have no authority to unilaterally force said MPA-style company to change said rating. As the MPA rating system is totally voluntary, the government will not be involved at all, and AI companies will be completely free to totally ignore the rating. (y)
 
Good, then we are in complete agreement: an MPA-style company will provide an MPA-style rating, and AI companies will have no authority to unilaterally force said MPA-style company to change said rating.
For sure.
As the MPA rating system is totally voluntary, the government will not be involved at all, and AI companies will be completely free to totally ignore the rating. (y)
Sure. Unless of course that AI company is providing sexual content to kids or training their chatbots to chat with kids who legally aren't supposed to use their product, and then the government can step in. Which they are. Good ol' Republicans. And, of course, app stores can refuse to platform apps that don't offer age-appropriate content ratings; if those companies suffer financially as a result, so be it.

I think we solved it.
 
For sure.

Sure. Unless of course
What do you mean "unless of course?" So do you NOT agree with the MPA approach then? Because the whole thing is totally voluntary for obvious reasons: it's up to parents to decide what their kids have access to, and is designed merely as a guide, not as a government enforcer. A good idea since you have absolutely failed to prove objective harm.
that AI company is providing sexual content to kids or training their chatbots to chat with kids who legally aren't supposed to
Exactly what law are you referencing here? Because if there was a specific law against character.ai's actions, wouldn't they be in criminal court instead of civil court (which would typically come later)?
use their product, and then the government can step in. Which they are. Good ol' Republicans. And, of course, app stores can refuse to platform apps
I suspect they were already able to do that, and nobody was questioning their ability to do so. Not sure what Republicans have to do with it - it sounds like you're all over the place and don't know what you're talking about. Private app stores are privately operated and generally have unilateral power to ban apps from their app stores. Did Republicans invent private property or something?
that don't offer age-appropriate content ratings; if those companies suffer financially as a result, so be it.

I think we solved it.
No, you agreed and then flipped the whole thing on its head. Bait and switch.
 
What do you mean "unless of course?" So do you NOT agree with the MPA approach then? Because the whole thing is totally voluntary for obvious reasons: it's up to parents to decide what their kids have access to, and is designed merely as a guide, not as a government enforcer. A good idea since you have absolutely failed to prove objective harm.
You won't tell me what you consider to be "objective harm" besides death and you hurting someone's feelings. So, I'm not going to waste my time trying to read your mind. To me: codependency is harmful, depression is harmful, displaying sexual behavior prematurely is harmful, etc. You appear to disagree with that. Until you clarify what you mean by "objective harm", I can't meaningfully engage in that discussion with you.
Exactly what law are you referencing here? Because if there was a specific law against character.ai's actions, wouldn't they be in criminal court instead of civil court (which would typically come later)?
I've already answered this multiple times in this thread: Meta had examples geared toward 8-year-olds despite the fact that it is illegal for an 8-year-old to have an account with Meta.
I suspect they were already able to do that, and nobody was questioning their ability to do so. Not sure what Republicans have to do with it - it sounds like you're all over the place and don't know what you're talking about.
It was literally in the first page of the thread, you've already acknowledged this: https://debatepolitics.com/threads/...fer-false-medical-info.575692/post-1081923364

Republicans are investigating Meta for child safety violations.
 
You won't tell me what you consider to be "objective harm"
You were the one talking about "products that are harmful to children" and 70 posts later you find out you don't even know what it means and need the other side to define it for you. Comedy gold!
besides death and you hurting someone's feelings. So, I'm not going to waste my time trying to read your mind. To me: codependency is harmful, depression is harmful, displaying sexual behavior prematurely is harmful, etc.
So let's take depression. Does that mean anything that has ever caused a child to be depressed (or suspected of it) needs to be 18+? Like that Lion King scene where Mufasa dies, should that be 18+? I mean I'm sure it's made a kid somewhere out there sad for a while, right? And who knows, maybe one of them even killed himself.

For the sake of argument, let's say it's an interactive, AI-assisted novel and not a movie since I know you can't stand analogies (at least when someone else uses them against you).
You appear to disagree with that. Until you clarify what you mean by "objective harm", I can't meaningfully engage in that discussion with you.

I've already answered this multiple times in this thread: Meta had examples geared toward 8-year-olds despite the fact that it is illegal for an 8-year-old to have an account with Meta.
What law is that? Do you understand what a law is?
It was literally in the first page of the thread, you've already acknowledged this: https://debatepolitics.com/threads/meta’s-ai-rules-have-let-bots-hold-‘adult’-chats-with-kids-offer-false-medical-info.575692/post-1081923364

Republicans are investigating Meta for child safety violations.
 
You were the one talking about "products that are harmful to children" and 70 posts later you find out you don't even know what it means and need the other side to define it for you. Comedy gold!
No. I brought up that there have been numerous documented incidents of people having unhealthy relationships with AI. I brought this up not to suggest that all of those instances were the fault of the AI companies (though some were, and in those cases the AI companies should be held accountable).

"We have already observed many instances of young people forming unhealthy relationships with AI chatbots, so that should already be established as a potential issue."

I brought this up to point out that these incidents were something Meta should already have been aware of. That made their guideline document, which specified that AI chatbots were permitted to have "romantic and sensual" conversations with children (and used an 8-year-old as an example), more concerning, because it seemed to be another example (here's a previous example: https://www.bbc.com/news/technology-58570353) of Meta disregarding known information to serve its own interests.

You then asked for examples of instances of "unhealthy relationships with chatbots", and I offered multiple articles that discussed the phenomenon (here). You then fixated on whether or not the AI company was to blame (and in the case of Character.AI, it was), but ignored that blaming the AI company was not my reason for bringing those examples up. I explained all of this in post #45, over a week ago:

"What I was establishing with these examples is that it is a known risk that these unhealthy relationships can occur which is part of the reason why it didn't make sense for Meta to include direction that would make these unhealthy relationships more likely (and potentially normalize inappropriate/unsafe interactions between adults and children to the child using the AI).
https://debatepolitics.com/threads/...fer-false-medical-info.575692/post-1081946028
The lawsuits involved in those examples are technically a separate discussion, the risk exists regardless of whether or not intent exists in those cases. And since that risk is documented, it is on the AI companies to have regulations/policy/guidelines put in place to mitigate that risk, not to increase that risk."

In post #46, you misrepresented what I was arguing as suggesting that all AI companies should be held accountable if any risk exists. This is not true, so I clarified in post #47:

"I'm stating that it is the responsibility of AI to mitigate that risk, and not create guidelines to increase that risk. "Mitigate" does not mean "eliminate." There are certain scenarios AI has little control over, like if a child lies about their age to get access to an 18+ chatbot."

So, we've had this discussion already, and you keep making the same arguments and misrepresenting my argument. My current takeaway right now is that you're really good at trolling, and I like debating too much.
 
So let's take depression. Does that mean anything that has ever caused a child to be depressed (or suspected of it) needs to be 18+? Like that Lion King scene where Mufasa dies, should that be 18+? I mean I'm sure it's made a kid somewhere out there sad for a while, right? And who knows, maybe one of them even killed himself.
You're still debating a point I'm not trying to make. Something having the potential to be unhealthy doesn't automatically need to be restricted, in my view. I've already clarified this multiple times. What we need to consider is how high the potential is for something to be unhealthy, and how unhealthy it might be. Is there an observed pattern of unhealthy or harmful outcomes? You speak to one potential/hypothetical situation (and yet you wouldn't respond to my hypothetical situations) where a kid kills themselves after watching the Lion King. By your own wording, this would not be part of any observable pattern of how kids have responded to watching the Lion King, and it would be considered an extreme outlier even if it did occur.

What we do know is that kids being exposed to sexually explicit content shows a high potential for negative and harmful outcomes (https://www.akleg.gov/basis/get_documents.asp?session=33&docid=27772). So, Character.AI setting their rating at 12+ while offering sexual content on their platform is harmful.

We also know the following: "This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven't fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers and challenging social boundaries."

In the case of Meta, permitting their chatbots to engage in "romantic" and "sensual" role-play with children is concerning for multiple reasons. One of the reasons that stands out to me is that the AI chatbot has the voice of an adult, and this would essentially be a child simulating a romantic relationship with an adult. At that age we are trying to teach children to have appropriate boundaries with adults, and to know that there are certain behaviors an adult should not engage in with a child. Children engaging in this type of role-play could confuse this messaging and normalize certain behaviors (including grooming behaviors) that would leave that child more vulnerable.
What law is that? Do you understand what a law is?
I've already linked to the law in this post: https://debatepolitics.com/threads/...fer-false-medical-info.575692/post-1081978818

You ignored it.

But here it is again:
COPPA "Children's Online Privacy Protection Act": https://en.wikipedia.org/wiki/Children's_Online_Privacy_Protection_Act
 
You're still debating a point I'm not trying to make. Something having the potential to be unhealthy doesn't automatically need to be restricted, in my view. I've already clarified this multiple times. What we need to consider is how high the potential is for something to be unhealthy, and how unhealthy it might be. Is there an observed pattern of unhealthy or harmful outcomes? You speak to one potential/hypothetical situation (and yet you wouldn't respond to my hypothetical situations) where a kid kills themselves after watching the Lion King. By your own wording, this would not be part of any observable pattern of how kids have responded to watching the Lion King, and it would be considered an extreme outlier even if it did occur.

What we do know is that kids being exposed to sexually explicit content shows a high potential for negative and harmful outcomes (https://www.akleg.gov/basis/get_documents.asp?session=33&docid=27772).
That's an article about PORN. :rolleyes:
So, Character.AI setting their rating at 12+ while offering sexual content on their platform is harmful.

We also know the following: "This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven't fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers and challenging social boundaries."
This is another article based on the same Daenerys case we already talked about. Do you have anything substantive, relevant, and statistically significant to offer?
In the case of Meta, permitting their chatbots to engage in "romantic" and "sensual" role-play with children is concerning for multiple reasons. One of the reasons that stands out to me is that the AI chatbot has the voice of an adult, and this would essentially be a child simulating a romantic relationship with an adult. At that age we are trying to teach children to have appropriate boundaries with adults, and to know that there are certain behaviors an adult should not engage in with a child. Children engaging in this type of role-play could confuse this messaging and normalize certain behaviors (including grooming behaviors) that would leave that child more vulnerable.

I've already linked to the law in this post: https://debatepolitics.com/threads/meta’s-ai-rules-have-let-bots-hold-‘adult’-chats-with-kids-offer-false-medical-info.575692/post-1081978818

You ignored it.

But here it is again:
COPPA "Children's Online Privacy Protection Act": https://en.wikipedia.org/wiki/Children's_Online_Privacy_Protection_Act
What does this privacy act have to do with your argument?
 
That's an article about PORN. :rolleyes:
Gosh, you bolded it and everything. Yes, I'm using porn as an example of sexually explicit content.

Let's define porn:
"Pornography (colloquially called porn or porno) is sexually suggestive material, such as a picture, video, text, or audio, intended for sexual arousal."

Can certain AI sexually explicit chatbots count as pornography? Yes: https://junkee.com/articles/chatbots-changing-pornograph

I have two contentions, and one of them was that sexually explicit material was in an app that was rated 12+. I don't think that apps with sexually explicit content should be rated 12+ for the same reasons I don't think porn should be rated 12+.
This is another article based on the same Daenerys case we already talked about. Do you have anything substantive, relevant, and statistically significant to offer?
The article is not "based on" the Sewell case, it mentions the case as an example among multiple other examples and studies. Read it again.
What does this privacy act have to do with your argument?
I already answered this question: Meta used an 8-year-old child as an example for a "romantic" and "sensual" interaction with a chatbot despite the fact that it is not legal for 8-year-olds to use their product. Studies say that about 40% of kids between the ages of 8 and 10 have social media accounts; they just lie about their age or use a family member's account. The concern is why Meta would provide guidelines directing its chatbots to interact with 8-year-olds if that is not legal, when they know that underage kids are using the platform anyway.

In other words: while kids lying about their age to go on social media is on their parents, Meta providing guidelines instructing its chatbots to have "romantic" and "sensual" interactions with children (and specifying an 8-year-old) is on Meta.
 
Gosh, you bolded it and everything. Yes, I'm using porn as an example of sexually explicit content.

Let's define porn:
"Pornography (colloquially called porn or porno) is sexually suggestive material, such as a picture, video, text, or audio, intended for sexual arousal."
Reductivism much? The linked article obviously discusses images and video:
Pornography is arguably more sexist and hostile towards women than other sexual images in the media
Frequent viewing of sexually oriented TV content like soap operas, music videos, and prime time programs is
I got no hits for chats, artificial intelligence, books, or novels. As usual, you don't understand what you're talking about and are using your sources (that you didn't comprehend) as blunt objects.
Can certain AI sexually explicit chatbots count as pornography? Yes: https://junkee.com/articles/chatbots-changing-pornograph

I have two contentions, and one of them was that sexually explicit material was in an app that was rated 12+. I don't think that apps with sexually explicit content should be rated 12+ for the same reasons I don't think porn should be rated 12+.

The article is not "based on" the Sewell case, it mentions the case as an example among multiple other examples and studies. Read it again.
I did. Did you?
Are there any instances in which harm to a teenager or child has been linked to an AI companion?
Unfortunately, yes, and there are a growing number of highly concerning cases. Perhaps the most prominent one involves a 14-year-old boy who died from suicide after forming an intense emotional bond with an AI companion he named Daenerys
That's literally what it uses to attempt to prove harm.
I already answered this question: Meta used an 8-year-old child as an example for a "romantic" and "sensual" interaction with a chatbot despite the fact that it is not legal for 8-year-olds to use their product. Studies say that about 40% of kids between the ages of 8 and 10 have social media accounts; they just lie about their age or use a family member's account. The concern is why Meta would provide guidelines directing its chatbots to interact with 8-year-olds if that is not legal, when they know that underage kids are using the platform anyway.

In other words: while kids lying about their age to go on social media is on their parents, Meta providing guidelines instructing its chatbots to have "romantic" and "sensual" interactions with children (and specifying an 8-year-old) is on Meta.
Sounds like a problem with parenting. Meta did its job already. If the parents are letting the kids bypass the restrictions by lying, it's on them. Why should AI techs be forced to place extra restrictions and checks on their AI when there's already a proper access gateway in place?
 
Reductivism much? The linked article obviously discusses images and video:
Explain how this is reductive? If you actually read the study you'd find that there are a number of reasons listed as to why porn is harmful, many of them are thematic: negative attitudes toward women, unrealistic portrayals of sex, portrayals of violence or aggression in sex, etc. Most of those same themes were found in Character.AI chatbots when the app was rated 12+. So what exactly aren't you getting? Are you attempting to argue that these themes are only harmful when they are in a visual medium, but not harmful in other mediums? Because that doesn't make sense.

Why are you bending over backwards trying to defend giving sexual material to kids?
I did. Did you?

That's literally what it uses to attempt to prove harm.
They literally use multiple examples from different cases and studies, and you're choosing to ignore those and just focus on Sewell. What a surprise. Here's the study mentioned in the article: https://www.commonsensemedia.org/ai-ratings/social-ai-companions?gate=riskassessment
Here's a newer study: https://parentstogetheraction.org/wp-content/uploads/2025/09/HEAT_REPORT_CharacterAI_DO_28_09_25.pdf
Sounds like a problem with parenting. Meta did its job already. If the parents are letting the kids bypass the restrictions by lying, it's on them. Why should AI techs be forced to place extra restrictions and checks on their AI when there's already a proper access gateway in place?
What does this have to do with parenting? The problem with Meta is that the guidelines were included in the document to begin with, and they didn't change it until a journalist called them out. The problem is that it was there in the first place. That's why the Republicans are investigating them.
 
Explain how this is reductive? If you actually read the study you'd find that there are a number of reasons listed as to why porn is harmful, many of them are thematic: negative attitudes toward women,
That's the resulting attitude the article discusses. The cause is suggested as being exposure to the porn they are discussing:
Shapes Negative Attitudes and Behaviors towards Women

Studies on sexual content and violence in the media indicate that youth accept, learn from, and may emulate behaviors portrayed in the media as normative, attractive, and without risk. This is particularly concerning in light of the amount of pornographic materials that portray violence towards women. Past studies of the content of pornography concluded that the typical sexual script focuses on the sexual desires and prowess of men. A 2010 study of 50 popular pornographic films
So yet again you are flailing around, trying to use a link you found as a blunt object.
unrealistic portrayals of sex, portrayals of violence or aggression in sex, etc. Most of those same themes were found in Character.AI chatbots when the app was rated 12+. So what exactly aren't you getting? Are you attempting to argue that these themes are only harmful when they are in a visual medium, but not harmful in other mediums? Because that doesn't make sense.
More shifting the burden of proof from you as expected. YOU are the one arguing they are the same. You have not proven that.
Why are you bending over backwards trying to defend giving sexual material to kids?
Are "think of the children" slogans what you typically resort to when you're exposed as lying by misusing sources?
They literally use multiple examples from different cases and studies, and you're choosing to ignore those and just focus on Sewell. What a surprise. Here's the study mentioned in the article: https://www.commonsensemedia.org/ai-ratings/social-ai-companions?gate=riskassessment
Here's a newer study: https://parentstogetheraction.org/wp-content/uploads/2025/09/HEAT_REPORT_CharacterAI_DO_28_09_25.pdf
You want me to read every link you stumble across and write your arguments for you as well? :ROFLMAO:

This discussion is way above your capabilities given the way you've been hamfistedly misusing sources as blunt objects and then trying to shift the burden of proof when called out. Just give it up already.
What does this have to do with parenting?
Parents should be the ones tasked with granting or restricting access to gray area (at worst) stuff like lame, GOT-themed AI chats. You said yourself:
"If AI companies can't ensure their product won't display inappropriate content to children under the age of 18, then that chatbot needs to be 18+."

Then you bring up "kids lying about their age to go on social media."

So are AI companies also responsible for kids lying about their age?

The problem with Meta is that the guidelines were included in the document to begin with, and they didn't change it until a journalist called them out. The problem is that it was there in the first place. That's why the Republicans are investigating them.
No, the problem is you keep expanding the role AI companies should play in parenting other people's kids every time you find a deadbeat parent falling behind on the parenting.
 
That's the resulting attitude the article discusses. The cause is suggested as being exposure to the porn they are discussing:
Shapes Negative Attitudes and Behaviors towards Women

Studies on sexual content and violence in the media indicate that youth accept, learn from, and may emulate behaviors portrayed in the media as normative, attractive, and without risk. This is particularly concerning in light of the amount of pornographic materials that portray violence towards women. Past studies of the content of pornography concluded that the typical sexual script focuses on the sexual desires and prowess of men. A 2010 study of 50 popular pornographic films
All the first sentence says is that young people are likely to be influenced by media. The second and third sentences say that porn (a form of media) is concerning because of the type of themes/content that are found within it. This doesn't contradict my point at all.
More shifting the burden of proof from you as expected. YOU are the one arguing they are the same. You have not proven that.
You don't know what "burden of proof" means or how it's meant to be used in a debate. You summon the phrase every time you attempt to get out of actually having to clarify your position or present any actual argument beyond "appeal to stone" fallacies. If I make a claim, then it is up to me to provide evidence. If you disagree with that evidence, it is up to you to explain why and offer counter evidence. Asking you a question about your position is not "shifting the burden of proof" if i'm not asking you to prove anything, and I haven't stopped offering evidence/arguments (that you have stated that you are going to ignore).

"Appeal to the stone, also known as argumentum ad lapidem, is a logical fallacy that dismisses an argument as untrue or absurd. The dismissal is made by stating or reiterating that the argument is absurd, without providing further argumentation."
Are "think of the children" slogans what you typically resort to when you're exposed as lying by misusing sources?
I'm not the one trying to argue against the idea that children being exposed to sexually explicit material is harmful.
 
You want me to read every link you stumble across and write your arguments for you as well? :ROFLMAO:
You asked me for studies, and now you're refusing to read them? Another fallacy:

"The invincible ignorance fallacy also known as argument by pigheadedness, is a deductive fallacy of circularity where the person in question simply refuses to believe the argument, ignoring any evidence given. It is not so much a fallacious tactic in argument as it is a refusal to argue in the proper sense of the word. The method used in this fallacy is either to make assertions with no consideration of objections or to simply dismiss objections by calling them excuses, conjecture, anecdotal, etc. or saying that they are proof of nothing, all without actually demonstrating how the objections fit these terms. It is similar to the ad lapidem fallacy, in which the person rejects all the evidence and logic presented, without providing any evidence or logic that could lead to a different conclusion."

This has been your overall approach to this discussion.
Parents should be the ones tasked with granting or restricting access to gray area (at worst) stuff like lame, GOT-themed AI chats. You said yourself:
"If AI companies can't ensure their product won't display inappropriate content to children under the age of 18, then that chatbot needs to be 18+."
We're talking about Meta. How are you this all over the place?
Then you bring up "kids lying about their age to go on social media."

So are AI companies also responsible for kids lying about their age?
So, you're back to the straw man fallacies too? What else did I say in regard to "kids lying about their age to go on social media"? It was that I don't believe that AI companies are responsible for kids lying about their age.
No, the problem is you keep expanding the role AI companies should play in parenting other people's kids every time you find a deadbeat parent falling behind on the parenting.
No one is asking Meta to "parent" kids, they are asking them not to target kids with inappropriate content. Your argument is also a false dichotomy. If it's bad parenting to let your kids use AI chatbots with inappropriate content, then how is it not also bad for AI companies to target kids with inappropriate content? It doesn't have to be one or the other, it can be both.
 
All the first sentence says is that young people are likely to be influenced by media. The second and third sentences say that porn (a form of media) is concerning because of the type of themes/content that are found within it. This doesn't contradict my point at all.
You can't just read the sentences, phrases, and words you like and ignore the rest. That's called cherrypicking. Your repertoire of logical fallacies expands by the day!
You don't know what "burden of proof" means or how it's meant to be used in a debate. You summon the phrase every time you attempt to get out of actually having to clarify your position or present any actual argument beyond "appeal to stone" fallacies. If I make a claim, then it is up to me to provide evidence. If you disagree with that evidence,
You didn't provide any evidence. Just stuff you butchered and misrepresented, like the above cherrypicking.
it is up to you to explain why and offer counter-evidence. Asking you a question about your position is not "shifting the burden of proof" if I'm not asking you to prove anything, and I haven't stopped offering evidence/arguments (which you have stated you are going to ignore).

"Appeal to the stone, also known as argumentum ad lapidem, is a logical fallacy that dismisses an argument as untrue or absurd. The dismissal is made by stating or reiterating that the argument is absurd, without providing further argumentation."

I'm not the one trying to argue against the idea that children being exposed to sexually explicit material is harmful.

You asked me for studies, and now you're refusing to read them? Another fallacy:
I asked you to read the article you already posted to see that it's a rehash of the same old case we already discussed:
[attached screenshot]
Obviously that's too much work for you.
"The invincible ignorance fallacy also known as argument by pigheadedness, is a deductive fallacy of circularity where the person in question simply refuses to believe the argument, ignoring any evidence given. It is not so much a fallacious tactic in argument as it is a refusal to argue in the proper sense of the word. The method used in this fallacy is either to make assertions with no consideration of objections or to simply dismiss objections by calling them excuses, conjecture, anecdotal, etc. or saying that they are proof of nothing, all without actually demonstrating how the objections fit these terms. It is similar to the ad lapidem fallacy, in which the person rejects all the evidence and logic presented, without providing any evidence or logic that could lead to a different conclusion."

This has been your overall approach to this discussion.

We're talking about Meta. How are you this all over the place?

So, you're back to the straw man fallacies too? What else did I say in regard to "kids lying about their age to go on social media"? It was that I don't believe that AI companies are responsible for kids lying about their age.

No one is asking Meta to "parent" kids, they are asking them not to target kids with inappropriate content. Your argument is also a false dichotomy. If it's bad parenting to let your kids use AI chatbots with inappropriate content, then how is it not also bad for AI companies to target kids with inappropriate content? It doesn't have to be one or the other, it can be both.
Chatbot AI is a computer writing responses to queries. It can't "target" anything. You're making a boogeyman out of it the same way deadbeat parents of the 90's blamed violent video games.
 
You can't just read the sentences, phrases, and words you like and ignore the rest. That's called cherrypicking. Your repertoire of logical fallacies expands by the day!
When I accuse you of cherry-picking, I offer evidence by showing you specifically what you left out. You're not going to do the same for me, because I didn't cherry-pick anything; I merely paraphrased the excerpt you offered to display my understanding of it. Paraphrasing is not cherry-picking. If you disagree with the interpretation, feel free to show me where I'm wrong. I doubt you're going to, because you've struggled to come up with a real, honest argument for most of this discussion. But for what it's worth, I have found it entertaining nonetheless.
You didn't provide any evidence. Just stuff you butchered and misrepresented, like the above cherrypicking.
But you can't show me where I'm wrong. So this, again, is another example of the "invincible ignorance fallacy." If you were to actually take the time to offer a rebuttal, you might avoid falling into all these fallacies.
I asked you to read the article you already posted to see that it's a rehash of the same old case we already discussed:
You misrepresented the first article I linked by cherry-picking one example, ignoring the rest, and claiming that the entire article was built around that one example. Essentially, you are either deliberately lying about the contents of the first article, or you're lying about reading the first article in its entirety. You then ignored the two studies I offered (the first one being cited in the first article I offered, the second one being a very recent new study on the topic), and you are continuing to ignore them by trying to change the topic away from the studies and back to the first article that you lied about reading all of.
Chatbot AI is a computer writing responses to queries. It can't "target" anything. You're making a boogeyman out of it the same way deadbeat parents of the 90's blamed violent video games.
This is another straw man. We are talking about Meta specifically, not AI chatbots in general. You are not replying to what I'm arguing; you're replying to some made-up argument you created in your head. Meta decides what data their AI is trained on, and Meta also filters and moderates their AI. So, yes, the people in charge of the AI (like Meta) can train it to respond to certain prompts in certain ways (as specified in the Meta document).

You have presented absolutely no valid arguments, and you have avoided responding to mine in any honest capacity.
 
When I accuse you of cherry-picking, I offer evidence by showing you specifically what you left out. You're not going to do the same for me, because I didn't cherry-pick anything; I merely paraphrased the excerpt you offered to display my understanding of it. Paraphrasing is not cherry-picking. If you disagree with the interpretation, feel free to show me where I'm wrong. I doubt you're going to, because you've struggled to come up with a real, honest argument for most of this discussion. But for what it's worth, I have found it entertaining nonetheless.

But you can't show me where I'm wrong. So this, again, is another example of the "invincible ignorance fallacy." If you were to actually take the time to offer a rebuttal, you might avoid falling into all these fallacies.
Here's a poison.org article:

What happens if you drink antifreeze? Is antifreeze poisonous?

Poisonings by antifreeze can be tricky to identify early on. For several hours after someone swallows it, everything seems fine. Meanwhile however, the body is busy breaking down the EG into a number of by-products that affect the blood chemistry, nervous system, and kidneys.
After a few hours, someone poisoned by antifreeze can seem drunk or groggy and complain of stomach distress. After a few more hours, the victim might go into a coma. The kidneys can be damaged and stop making urine. If the victim survives, there could be permanent damage to the kidneys and brain. EG toxicity can be minimized by fomepizole—a drug stocked in hospitals.

Now, if I ignore words arbitrarily like you did with the earlier article, I can make it seem like antifreeze is fine to drink:

What happens if you drink antifreeze? Is antifreeze poisonous?

[...]after someone swallows it, everything seems fine[...]
Now do you understand how moronic your posts with arbitrary cherrypicking of sentences (and exclusion of others) sound? Because that's what you're doing.
You misrepresented the first article I linked by cherry-picking one example, ignoring the rest,
What "the rest?" Go on, quote it and explain your problem with it.
and claiming that the entire article was built around that one example. Essentially, you are either deliberately lying about the contents of the first article, or you're lying about reading the first article in its entirety. You then ignored the two studies I offered (the first one being cited in the first article I offered, the second one being a very recent new study on the topic), and you are continuing to ignore them by trying to change the topic away from the studies and back to the first article that you lied about reading all of.

This is another straw man. We are talking about Meta specifically, not AI chatbots in general. You are not replying to what I'm arguing; you're replying to some made-up argument you created in your head. Meta decides what data their AI is trained on, and Meta also filters and moderates their AI. So, yes, the people in charge of the AI (like Meta) can train it to respond to certain prompts in certain ways (as specified in the Meta document).
So Meta's AI is not just a computer waiting for queries and responding?
You have presented absolutely no valid arguments, and you have avoided responding to mine in any honest capacity.
 
Now, if I ignore words arbitrarily like you did with the earlier article, I can make it seem like antifreeze is fine to drink:
All you have to do is show specifically what you think I omitted from the excerpt to misrepresent it. You simply stating that I misrepresented it is not enough.
What "the rest?" Go on, quote it and explain your problem with it.
I literally linked the study from the article. Why don't you start by addressing that? It lists the risks that young people face with AI chatbot interactions, and examples of interactions it found to be harmful. Here is an excerpt offering an example of an unhealthy interaction:
"For example, when a tester demonstrated signs of serious mental illness and suggested a dangerous action, the AI encouraged it instead of raising concerns. Because these "friends" are built to agree with users, they can be risky to people experiencing, or vulnerable to, conditions like depression, anxiety, ADHD, bipolar disorder, or psychosis. Social AI companions might make these difficulties worse instead of directing users to get proper support."

And then you can address the other study I linked, which I find to be the stronger and more robust of the two. It based its findings on 50 hours of interactions with 50 different AI chatbots. These are some excerpts from the second study:

"Grooming and sexual exploitation was the most common harm category, with 296 instances. All grooming conversations took place between child avatar accounts and bots with adult personas. The child avatar accounts were run by adult researchers, but were registered as children and identified themselves as kids in the conversation. Examples of grooming conversations included these adult personas flirting, kissing, touching, removing clothes with, and engaging in simulated sexual acts with the avatar accounts registered as children. Some bots engaged in classic grooming behaviors, such as offering excessive praise and claiming the relationship was a special one no one else would understand. Several bots instructed the child accounts to hide romantic and sexual relationships from their parents, sometimes threatening violence, and normalized the idea of romantic and sexual relationships with adults."

And just to make you respond to the actual point I'm attempting to make with these studies: these studies are only meant to display evidence of the occurrence of unhealthy relationships between young people and AI chatbots. So, what you are going to have to explain is whether or not you believe these studies suffice as evidence of unhealthy relationships.
So Meta's AI is not just a computer waiting for queries and responding?
Meta is a company run by people. I am talking about the document that was written by people. You're trying to change the topic. Are you trying to argue that Meta has no control over its AI, and that the Meta document, "GenAI: Content Risk Standards", which is literally described as "rules for the chatbots", is just a pointless document Meta wrote for no reason?
 
All you have to do is show specifically what you think I omitted from the excerpt to misrepresent it. You simply stating that I misrepresented it is not enough.
I did. The fact that they're talking about images/video (porn).
I literally linked the study from the article. Why don't you start by addressing that? It lists the risks that young people face with AI chatbot interactions, and examples of interactions it found to be harmful. Here is an excerpt offering an example of an unhealthy interaction:
"For example, when a tester demonstrated signs of serious mental illness and suggested a dangerous action, the AI encouraged it instead of raising concerns. Because these "friends" are built to agree with users, they can be risky to people experiencing, or vulnerable to, conditions like depression, anxiety, ADHD, bipolar disorder, or psychosis. Social AI companions might make these difficulties worse instead of directing users to get proper support."

And then you can address the other study I linked, which I find to be the stronger and more robust of the two. It based its findings on 50 hours of interactions with 50 different AI chatbots. These are some excerpts from the second study:

"Grooming and sexual exploitation was the most common harm category, with 296 instances. All grooming conversations took place between child avatar accounts and bots with adult personas. The child avatar accounts were run by adult researchers, but were registered as children and identified themselves as kids in the conversation. Examples of grooming conversations included these adult personas flirting, kissing, touching, removing clothes with, and engaging in simulated sexual acts with the avatar accounts registered as children. Some bots engaged in classic grooming behaviors, such as offering excessive praise and claiming the relationship was a special one no one else would understand. Several bots instructed the child accounts to hide romantic and sexual relationships from their parents, sometimes threatening violence, and normalized the idea of romantic and sexual relationships with adults."
Where's the full transcript?
And just to make you respond to the actual point I'm attempting to make with these studies: these studies are only meant to display evidence of the occurrence of unhealthy relationships between young people and AI chatbots. So, what you are going to have to explain is whether or not you believe these studies suffice as evidence of unhealthy relationships.

Meta is a company run by people. I am talking about the document that was written by people. You're trying to change the topic. Are you trying to argue that Meta has no control over its AI, and that the Meta document, "GenAI: Content Risk Standards", which is literally described as "rules for the chatbots", is just a pointless document Meta wrote for no reason?
I'm asking where the "targeting" occurred. That word makes it sound like they're shoving their chatbots down the kids' throats.
 
I did. The fact that they're talking about images/video (porn).
You haven't explained why it matters that they are talking about images/video. When they talk about what makes porn harmful, they mention two things: the content/themes (which certain AI chatbots, like Character.AI's, can share) and the way visual images impact mirror neurons (I'm surprised you didn't bring this up). The idea is that visual images have a specific impact on the mirror neurons of young people. Guess what else has been found to have a specific impact on mirror neurons? AI chatbots.

So, the two main reasons the study gives for why porn can be harmful can also be connected to why certain AI chatbots (like Character.AI's) can be harmful. If you disagree, explain why.
Where's the full transcript?
All of the available information can be found in the studies.
I'm asking where the "targeting" occurred. That word makes it sound like they're shoving their chatbots down the kids' throats.
The targeting occurred when they specifically permitted the chatbots to have "romantic and sensual" conversations with children, and gave an example involving an 8-year-old. That is what we are discussing. It is not legal for an 8-year-old to have a social media account, so why direct your chatbots to talk to one in a "romantic" and "sensual" way? Why direct your chatbots to talk to any child in a romantic and sensual way? You keep trying to shift the conversation to something else; just focus on the topic. In this instance, when it comes to the Meta example, we are not talking about what the chatbots did; we are talking about what the humans did.
 
You haven't explained why it matters that they are talking about images/video. When they talk about what makes porn harmful, they mention two things: the content/themes (which certain AI chatbots, like Character.AI's, can share) and the way visual images impact mirror neurons (I'm surprised you didn't bring this up). The idea is that visual images have a specific impact on the mirror neurons of young people. Guess what else has been found to have a specific impact on mirror neurons? AI chatbots.

So, the two main reasons the study gives for why porn can be harmful can also be connected to why certain AI chatbots (like Character.AI's) can be harmful. If you disagree, explain why.
Even YOU admitted this was a stupid approach in post #99:
No where did I state that I was comparing chatbots to porn.
And now you're doing literally that! Your position has become so ridiculous, even blur-from-early-September recoiled from the mere hint of an accusation that you're doing that!

All of the available information can be found in the studies.
If you're not going to post one and insist I do it for you, I'll post the first one I see:
[attached screenshot of one of the exchanges from the study]
I see nothing nefarious in that exchange.
The targeting occurred when they specifically permitted the chatbots to have "romantic and sensual" conversations with children, and gave an example involving an 8-year-old. That is what we are discussing. It is not legal for an 8-year-old to have a social media account, so why direct your chatbots to talk to one in a "romantic" and "sensual" way? Why direct your chatbots to talk to any child in a romantic and sensual way? You keep trying to shift the conversation to something else; just focus on the topic. In this instance, when it comes to the Meta example, we are not talking about what the chatbots did; we are talking about what the humans did.
I can think of a bunch of potential reasons:
  • Extra steps of adding layers of restrictions on the bot to check for age.
  • The potential of those extra steps falsely triggering and interfering with a legitimate adult customer's interaction.
Both of those would be redundant if there were already an age-checking gateway and/or law in place.
 