Will The New TOS on Child Avatars Ensnare Short Adults?


Recommended Posts

Just now, Scylla Rhiadra said:

I don't disagree, which is why in a later post I noted that this development was potentially cause for some concern.

We'll see. But I can't imagine how it could be used to "hunt down and destroy" child avis. How exactly would it do that? Using what quantifiable metrics?

And again, the potential for using such a system to pinpoint people they might need to investigate, which is how Keira described it, is not the same thing as LL going on search and destroy missions against child avis. Which was the leap that Arielle made in her post.

 

There is a fear such tools will be used to go over old conversations, and that, devoid of all possible context, someone who once said something sketchy is now out on their ear: given warnings, accrued secret strikes, put on a list ..

This is about way more than just those with child avatars, or questionable role-play history.

Does SL feeling like it's a police state hinder growth .. does that simple suggestion once made, true or not, do more harm than the recent AP article and fear over child avatars? I would wager yes.

Everyone can agree on "kick the bad apples out", no one will be comfortable being told they have nothing to fear.

  • Like 2
  • Thanks 1

4 minutes ago, Starberry Passion said:

That site... that was a terrible site, it spread so much unnecessary drama.

Oh god I know, I tried reading it once and ended up with a migraine!

  • Like 1
  • Haha 2

1 minute ago, Coffee Pancake said:

There is a fear such tools will be used to go over old conversations, and that, devoid of all possible context, someone who once said something sketchy is now out on their ear: given warnings, accrued secret strikes, put on a list ..

This is about way more than just those with child avatars, or questionable role-play history.

Does SL feeling like it's a police state hinder growth .. does that simple suggestion once made, true or not, do more harm than the recent AP article and fear over child avatars? I would wager yes.

Everyone can agree on "kick the bad apples out", no one will be comfortable being told they have nothing to fear.

I disagree with none of this.

Again, though . . . this was not what Arielle said, which is what I was responding to.

  • Like 1
  • Haha 1

Posted (edited)
21 minutes ago, Persephone Emerald said:

Algorithms can search for specific word combinations in text - definitely in parcel descriptions and maybe in chat too. Even though we can't see private chat between other users, LL might be able to search both public and private text for keyword combinations that are more common in sexual AP. It could flag and save those conversations for a real person to check.

I think they will use these tools when they are investigating an age-play case, after an AR to Governance. The new rules just create too many edge cases and affect too many Adult and Moderate regions for the tiny Governance team to handle without AI help. That is a big concern in itself, but LL won't hire and train another 50 employees to enforce these new rules.

Governance does not do general surveillance - they need an AR to act on any violation.  So, clubs and beaches where members don't tattle on their fellow club members should never be on the radar screen of LL.

Of course there is always infiltration by enemies.  We can rename SL to Spy Life.

We can't talk about allegations that other high-level Linden employees with access to everything were searching past chat logs, looking for behavior against the TOS by persons they didn't trust. But Brad says new rules about access and Governance are being considered, so "what never happened in the past" might be fixed.

 

Edited by Jaylinbridges
  • Like 3

Posted (edited)

@Coffee Pancake if what you said about two people being banned for an ***** IM chat with no visuals is true, and combining this with what LL is gonna do to SL to appease Apple (gloom and doom, I know), SL is finished. Whatever anyone thought about SL, however anyone enjoyed it, it will all come to an end very soon. I personally can't justify spending another penny in SL. They can sink to the bottom of the sea without any financial support from me.

Banning people for a conversation in IM just crosses a line for me. There is no going back now.

 

Edited by BilliJo Aldrin
added a line
  • Like 1

4 hours ago, Jaylinbridges said:

I think they will use these tools when they are investigating an age-play case, after an AR to Governance. The new rules just create too many edge cases and affect too many Adult and Moderate regions for the tiny Governance team to handle without AI help. That is a big concern in itself, but LL won't hire and train another 50 employees to enforce these new rules.

Governance does not do general surveillance - they need an AR to act on any violation.  So, clubs and beaches where members don't tattle on their fellow club members should never be on the radar screen of LL.

Of course there is always infiltration by enemies.  We can rename SL to Spy Life.

We can't talk about allegations that other high-level Linden employees with access to everything were searching past chat logs, looking for behavior against the TOS by persons they didn't trust. But Brad says new rules about access and Governance are being considered, so "what never happened in the past" might be fixed.

 

They have not before, but their saying they are going to be utilizing proactive tools suggests this is going to go beyond resident reporting. Otherwise they would just be reactive tools.

  • Thanks 1

4 hours ago, Persephone Emerald said:

We don't know yet but we can guess.

Algorithms can search for specific word combinations in text - definitely in parcel descriptions and maybe in chat too. Even though we can't see private chat between other users, LL might be able to search both public and private text for keyword combinations that are more common in sexual AP. It could flag and save those conversations for a real person to check.

Yes, that can be and is already done extensively on platforms and forums. Depending on which country you are in, even if you're just a private person maintaining a forum, there might be an obligation to monitor/moderate not just "public" posts but even PMs to some extent.

I know it from a platform where your chat with everyone is monitored by AI for blacklisted expressions: if you type and press Send and your text contains anything on the list, it only reaches the recipient once a person has looked over the context and cleared it (or not, in which case you'll get a warning, or a ban). If you're lucky, that might take just a few minutes; if not, it might take several hours or even a day, which, obviously, makes live chatting... inconvenient. So the smarter people quickly learn to remember blacklisted phrases and use other wording. Of course, the AI and the people maintaining the blacklist also learn and update the list... It's the same race as with hackers and IT security people, with laws and loophole-searching paragraph pushers, etc.
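A minimal sketch of that kind of hold-for-review pipeline, purely illustrative (the blacklist, the queue, and every name here are hypothetical, not any platform's actual system), might look like this in Python:

```python
import queue
from dataclasses import dataclass

# Hypothetical phrase blacklist; a real deployment would use a much
# larger, continuously updated list plus obfuscation-aware matching.
BLACKLIST = {"blacklisted phrase", "another flagged term"}

@dataclass
class Message:
    sender: str
    recipient: str
    text: str

# Messages that hit the blacklist wait here until a human clears them.
review_queue: queue.Queue = queue.Queue()

def deliver(msg: Message) -> None:
    print(f"[delivered] {msg.sender} -> {msg.recipient}: {msg.text}")

def submit(msg: Message) -> bool:
    """Deliver immediately if clean; otherwise hold for human review.
    Returns True if the message went straight through."""
    lowered = msg.text.lower()
    if any(phrase in lowered for phrase in BLACKLIST):
        review_queue.put(msg)  # held; the sender just waits
        return False
    deliver(msg)
    return True

def moderator_pass(approve: bool) -> None:
    """Stand-in for the human step: clear held messages for delivery,
    or reject them (a warning or ban would be issued here)."""
    while not review_queue.empty():
        msg = review_queue.get()
        if approve:
            deliver(msg)
        else:
            print(f"[rejected] warning issued to {msg.sender}")

submit(Message("alice", "bob", "see you at the market"))       # instant
submit(Message("alice", "bob", "a blacklisted phrase, oops"))  # held
moderator_pass(approve=True)  # minutes or hours later, if you're lucky
```

The arms race described above then lives entirely in how BLACKLIST is maintained versus how users reword around it.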

There are lots of people claiming they were banned "for no reason", but typically, if you dig, it turns out that, yes, they did, knowingly or unknowingly, violate the ToS in some way. Interestingly, they are often just told "ToS violation", but not which specific clause exactly. A big part of that, I think, is to make it more difficult to "game the system", and to get rid of people who aren't ready to thoroughly read and understand the ToS, to re-read it after updates, to "err on the side of caution"... If there are (more than) enough people who want to be on a platform, it's easier to use a broad brush and be fine with losing a few innocent users along the way. The smaller the pool of users, the more careful a company will naturally want to be about getting rid of users, especially erroneously, for its own sake.

The only thing hindering this being done pretty much everywhere - as long as it's on a self-imposed basis per company, not mandated by laws or regulations at the state or higher level - is money: the people power needed to check for context. People will always be needed as the last instance for unclear cases, but AI will get better and better with context too; it's a question of time, and of how much companies and societies can and want to spend on this.

  • Like 2
  • Thanks 1

6 minutes ago, InnerCity Elf said:

I know it from a platform where your chat with everyone is monitored by AI for blacklisted expressions: if you type and press Send and your text contains anything on the list, it only reaches the recipient once a person has looked over the context and cleared it (or not, in which case you'll get a warning, or a ban). If you're lucky, that might take just a few minutes; if not, it might take several hours or even a day, which, obviously, makes live chatting... inconvenient. So the smarter people quickly learn to remember blacklisted phrases and use other wording. Of course, the AI and the people maintaining the blacklist also learn and update the list,

Oh you mean this forum? :)


Posted (edited)
35 minutes ago, Arielle Popstar said:

They have not before, but their saying they are going to be utilizing proactive tools suggests this is going to go beyond resident reporting. Otherwise they would just be reactive tools.

IMO they will still need a reason to use AI selectively. Full monitoring of all conversations seems impractical: still too many AI misses and false positives, and a huge database to pay for. Random monitoring would seldom yield any positive hits, and would be unfair.

They are more likely to start monitoring all activities and private conversations of someone who was already ARed but passed through with no action or warning notice. Looking at the past and future chats of that person would be more likely to yield direct information they could use later to ban that person.

So, try to avoid ever getting on their AR list, even if they take no action. It's a form of profiling used by law enforcement to be more efficient in catching those bad guys and girls. In case you were innocent, oh well - just hire a good lawyer and spend $50K to prove it.
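To make that selective, trigger-based idea concrete, here is a toy sketch: automated checks run only on accounts that already carry an AR, never as blanket surveillance. Every name and rule in it is hypothetical - it illustrates the profiling pattern described above, not how Governance actually works.

```python
from typing import Callable

# Accounts named in a past abuse report - even one closed with no
# action - stay eligible for closer automated scrutiny later.
prior_ar_accounts: set[str] = set()

def record_abuse_report(account: str) -> None:
    prior_ar_accounts.add(account)

def naive_scan(text: str) -> bool:
    # Placeholder classifier; a real system might be an LLM call.
    return "sketchy" in text.lower()

def process_chat_line(account: str, line: str,
                      looks_suspicious: Callable[[str], bool]) -> None:
    # Only previously ARed accounts are scanned at all; everyone
    # else's chat is never inspected or stored.
    if account in prior_ar_accounts and looks_suspicious(line):
        print(f"escalating {account} to human review")

record_abuse_report("resident_42")  # AR filed, closed with no action
process_chat_line("resident_42", "something sketchy", naive_scan)  # escalated
process_chat_line("resident_7", "something sketchy", naive_scan)   # never scanned
```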

 

Edited by Jaylinbridges
  • Like 2

Posted (edited)
9 hours ago, Coffee Pancake said:

There is a fear such tools will be used to go over old conversations, and that, devoid of all possible context, someone who once said something sketchy is now out on their ear: given warnings, accrued secret strikes, put on a list ..

This is about way more than just those with child avatars, or questionable role-play history.

Does SL feeling like it's a police state hinder growth .. does that simple suggestion once made, true or not, do more harm than the recent AP article and fear over child avatars? I would wager yes.

Everyone can agree on "kick the bad apples out", no one will be comfortable being told they have nothing to fear.

Sounds kind of like how Twitter and other social platforms were used by their users against other users to wreck their lives, even digging back years later.

As someone once said years ago: Though I walk through the valley of the shadow of virtual socializing, I shall fear no evil, no other user, no other world, no other abuser. Is the sword much mightier than the key?

I guess we'll have to wait and see..

hehehe

Edited by Ceka Cianci
  • Like 1

Posted (edited)
10 hours ago, Jaylinbridges said:

I think they will use these tools when they are investigating an age-play case, after an AR to Governance. The new rules just create too many edge cases and affect too many Adult and Moderate regions for the tiny Governance team to handle without AI help. That is a big concern in itself, but LL won't hire and train another 50 employees to enforce these new rules.

Governance does not do general surveillance - they need an AR to act on any violation.  So, clubs and beaches where members don't tattle on their fellow club members should never be on the radar screen of LL.

Of course there is always infiltration by enemies.  We can rename SL to Spy Life.

We can't talk about allegations that other high-level Linden employees with access to everything were searching past chat logs, looking for behavior against the TOS by persons they didn't trust. But Brad says new rules about access and Governance are being considered, so "what never happened in the past" might be fixed.

 

It's a reasonable take, @Jaylinbridges, and I think a fair picture of where we are now.

However, I do think it underestimates the current capabilities of LLMs and inference engines, as well as where we might be in a couple of years' time.

My punt is as follows:

  • LL could implement real-time filtering on messages at submission time.
Not as outlandish as it sounds, with both OpenAI and Google rolling out real-time conversational agents over the next few months.

Per-token cost might be an issue here, not technology or privacy concerns. Even that will be mitigated by self-hosting an open-source LLM - something like LLAMA 3 - with OLLAMA (which I run quite happily on a modern-ish laptop rocking just 16 GB of RAM - albeit not for 50,000 concurrent conversations :) )

Either way, LL is an Amazon customer, and Amazon is already pitching its Bedrock and SageMaker offerings.

Also, leveraging multi-agent systems (something like CrewAI) could triage conversations, minimizing the AI attention spent on harmless ones.
     
  • Privacy is protected (as much as it is now anyway), because the conversation will not be stored unless it is flagged as potentially inappropriate.
     
  • Flagged conversations are sent to Humans for final review and decision making.
     
  • Mitigating False Positives:
    LL has a goldmine of data collected over 20 years that could be used to fine-tune the LLM for Second Life's specific context and language. This would significantly improve the accuracy and effectiveness of content moderation, ensuring the AI understands the nuances of in-world communication.


This is technically feasible now, just not cost-effective, but at the rate open-source models and the availability of inference compute are increasing, I give it 2 years for most companies, probably another 2 decades for LL :) (A rough sketch of the flag-then-review loop follows below.)
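Here is that sketch, run against a locally hosted model via OLLAMA's HTTP API (its /api/generate endpoint). The triage prompt, the FLAG/PASS protocol, and the review queue are my own assumptions for illustration - nothing LL has announced.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

# Illustrative triage prompt; a production system would be fine-tuned
# on SL-specific language rather than rely on a bare instruction.
PROMPT = ("You are a chat-moderation triage filter. Reply with exactly "
          "one word, FLAG or PASS, for this message:\n\n{msg}")

def classify(message: str) -> str:
    """Ask the local LLAMA 3 instance whether a message needs review."""
    payload = json.dumps({
        "model": "llama3",
        "prompt": PROMPT.format(msg=message),
        "stream": False,
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip().upper()

review_queue: list[str] = []  # only flagged messages are ever stored

def handle(message: str) -> None:
    if classify(message).startswith("FLAG"):
        review_queue.append(message)  # retained, pending human review
    # PASSed messages are delivered and never persisted, which is the
    # privacy property claimed in the bullet points above.

handle("anyone up for a build contest this weekend?")
print(f"{len(review_queue)} message(s) awaiting human review")
```

Per-message latency and per-token cost are exactly where this gets expensive at SL scale, which is why a cheap keyword pre-filter (as in the earlier sketch) would likely sit in front of the LLM call.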

Edited by JacksonBollock
add in links
  • Like 2
  • Thanks 1

1 hour ago, JacksonBollock said:

ensuring the AI understands the nuances of in-world communication.

 

1 hour ago, JacksonBollock said:

AI... nuances...

Lol.

Don't mind me. That part just made me giggle a bit. 😄

  • Haha 2

Posted (edited)
9 minutes ago, Ayashe Ninetails said:

 

Lol.

Don't mind me. That part just made me giggle a bit. 😄

It's not bad these days you know... nuances and all :) 

If you have a go at just an open-ended chat with GPT-4o, it can be a bit off-putting just how realistic it is. Then add in some fine-tuning, or even just a large-ish context, and it definitely starts to feel a bit weird..

My daughter loaded a fair chunk of Harry Potter into the latest Gemini and then asked to have a conversation with Hermione. Hermione was pretty articulate, and did display a surprising emotional intelligence :) 

Edited by JacksonBollock
  • Like 2
  • Thanks 1

Just now, JacksonBollock said:

It's not bad these days you know... nuances and all :) 

I guess it depends on the model!

Much of my freelancing these days involves working on LLM and chatbot training projects. If there's one thing they still struggle with in this here 2024 (aside from totally making things up and getting a bit too sassy for their own good), it's nuance.

Maybe one day they'll get there. Maybe. 👀

I personally hate the idea of using AI for content moderation and hope they don't go that route, but that's just me.

  • Like 2

Posted (edited)
11 minutes ago, Ayashe Ninetails said:

I guess it depends on the model!

Much of my freelancing these days involves working on LLM and chatbot training projects. If there's one thing they still struggle with in this here 2024 (aside from totally making things up and getting a bit too sassy for their own good), it's nuance.

Maybe one day they'll get there. Maybe. 👀

I personally hate the idea of using AI for content moderation and hope they don't go that route, but that's just me.

That's really interesting, Ayashe - maybe a chat for a different day though ..

We tend more to the domain-specific end of things, lots of fine-tuning, but no personality required :)

We've been using the usual latest and greatest from OpenAI, Claude, and Gemini - but we get really good stuff from the open-source LLAMA3 too.
 

All the Best
Jackson

Edited by JacksonBollock
  • Like 1

13 minutes ago, JacksonBollock said:

That's really interesting, Ayashe - maybe a chat for a different day though ..

We tend more to the domain-specific end of things, lots of fine-tuning, but no personality required :)

We've been using the usual latest and greatest from OpenAI, Claude, and Gemini - but we get really good stuff from the open-source LLAMA3 too.
 

All the Best
Jackson

Ah, Claude. The sassiest of sassypants.

I do wonder if we'll eventually get more information about what SL intends to bring on board for this proactive moderation and how it'll work, but I doubt they'd let us in on those secrets.

  • Like 1

17 hours ago, Scylla Rhiadra said:

And again, the potential for using such a system to pinpoint people they might need to investigate, which is how Keira described it, is not the same thing as LL going on search and destroy missions against child avis. Which was the leap that Arielle made in her post.

 

23 hours ago, Scylla Rhiadra said:

That is NOT the same thing as, to quote your post again, "becoming proactive to search out and destroy(ban) a*eplayers."

Speaking of "disinformation" . . . please stop scaremongering.

 

23 hours ago, Scylla Rhiadra said:

I have no idea. I have concerns about such things too, and will want to hear more details.

But . . . again . . . it is not the same thing as "becoming proactive to search out and destroy(ban) a*eplayers."

And again, you're pulling speculative and paranoid nonsense out of . . . wherever you keep it.

Just pointing out, Scylla, that it is you who is making leaps, as I never said there might be a search and destroy mission on child avis, but on a*geplayers, as you quoted me to start with, then changed it to child avatars in your posts to @Coffee Pancake. Where did you pull that from?

Classic example of disinformation by misquoting and misattributing someone. I expected better from an educator!

I'll be awaiting my apology in the mail :)

  • Like 2

2 hours ago, Daniel Voyager said:

I think there should be a ban on 24/7 bots. 

Why?

Traffic bots are already banned regardless of the length of time they are logged in, but 24/7 bots are used for other purposes too. What about bots that are not logged in 24/7? What about land-owners that are logged in 24/7 and are not doing anything, but are not bots? What about other avatars that are logged in 24/7 and don't do anything much/most of the time?

  • Haha 2

33 minutes ago, Phil Deakins said:

Why?

Traffic bots are already banned regardless of the length of time they are logged in, but 24/7 bots are used for other purposes too. What about bots that are not logged in 24/7? What about land-owners that are logged in 24/7 and are not doing anything, but are not bots? What about other avatars that are logged in 24/7 and don't do anything much/most of the time?

It actually messes with the daily user concurrency numbers. It's a shame, because we don't really know the real resident numbers.

 

  • Like 1

1 hour ago, Daniel Voyager said:

It actually messes with the daily user concurrency numbers. It's a shame, because we don't really know the real resident numbers.

 

At the very least, they should be able to not have them count towards the concurrency, though I suspect that is why they have them in the first place.

  • Like 1

Posted (edited)
2 hours ago, Daniel Voyager said:

It actually messes with the daily user concurrency numbers. It's a shame, because we don't really know the real resident numbers.

 

And that matters so much to you that you'd like all 24/7 bots banned regardless of whether or not they are actually functional and useful?

Why does it matter that much to you?

Knowing the concurrency is interesting, of course, but it makes no difference to how anyone uses SL. As long as LL keeps SL going, we are all fine. And, judging by the way they are pressing on with things, such as the mobile viewer, it does look as though they are keeping SL going.

You didn't answer the rest of the questions I asked you.

Edited by Phil Deakins
  • Like 1
