
Why is SL being left out of the current metaverse hype?


Oct Oyen


Recommended Posts

In the early days of SL, when it was all so truly amazing and wondrous, I'm sure I am not the only one who was told about a "beautiful farm where you could feed the animals, ride them around, brush their coats" and actually do RL farm chores.
And you were like "oh wow cool, I'd love to go there sometime" *accepts LM*.

Then one day you're a bit tired of making clothes etc, (and this was after adult regions went away or whatever happened with them) and *ding OMG the beautiful farm! I'm going there now.....
rezz... ^~  rezz... 👀  rezz... 🥴 rezz.. 😵  rezz..😲 rezz.. 😬 rezz..🤢 rezz..🤮 rezz..🤬

Message LM giver... yooouuuu sicko, that's disgusting! Wth are you playing at!?

And after all this time I concluded it was quite a popular prank for a lot of people to play on others.
I didn't realize the full implications of it all for metaverses etc. now that it's 2022.
I also did not imagine some people might "spend a bit of time there". 

SeeNo.gif

  • Like 1

7 hours ago, Sammy Huntsman said:

Okay, but the whole thing is, there is a human behind that avatar. I am not saying the behaviour is okay. That, and it's not a real horse.

And a child or teen playing an FPS game shooting realistic-looking characters or cartoon avatars is just a game. Yet studies have proven that it can have a psychological effect on that child or teen, where they develop apathy toward guns, violence and murder, as well as other issues.

Whilst, as you say, there is a human behind that avatar (whatever the age), it doesn't mean that those sorts of acts are acceptable in Second Life, especially when those acts are illegal in RL (for good reason, not just the effect on the animal), as it could be a precursor to a person 'liking' it and then acting it out in RL.

Additionally, one could act out their RL desires in SL without any form of repercussions and still develop psychological effects from it. There is a proven link between people engaging in B*stiality or its counterpart Z*ph*lia (interest/attraction in) and other adult interpersonal crime, including child ab*se, p*dophilia, etc. Arrest and Prosecution of .... Offenders in the United States.

Giving them an outlet in SL to commit what would be a crime may not be the best choice for LL to take.

Besides all of that, it also comes down to the impact of allowing such behaviour to occur in SL when a potential user/investor etc. reads a review saying it happens in SL. It is an instant turn-off.

Edited by Drayke Newall
  • Haha 1

44 minutes ago, Drayke Newall said:

And a child or teen playing an FPS game shooting realistic-looking characters or cartoon avatars is just a game. Yet studies have proven that it can have a psychological effect on that child or teen, where they develop apathy toward guns, violence and murder, as well as other issues.

And yet the vast majority of other peer-reviewed and more rigorously tested, scientifically valid studies, over a period of a decade, by some very well-known organizations and institutions (University of Nevada, Mayo, Indiana, Villanova, Oxford University etc.), have repeatedly shown the exact opposite. There is no connection between violent video games and RL aggressive acts by the players. None. It's a trope hauled out by hand-wringing politicians on both sides of the spectrum and a persistent myth that gets traction because no one bothers lobbying the truth.

I've been fine with a lot of the debate here but facts are facts, and that statement is plain incorrect. Sorry.

Edited by Katherine Heartsong
  • Like 4
  • Thanks 2

44 minutes ago, Drayke Newall said:

And a child or teen playing an FPS game shooting realistic-looking characters or cartoon avatars is just a game. Yet studies have proven that it can have a psychological effect on that child or teen, where they develop apathy toward guns, violence and murder, as well as other issues.

[…]

Sure, but the common-sense thing is to explain to the child or teen the difference between right and wrong, and between real and not real. I mean, it is literally down to the parents to teach the kids the difference. But back to the whole thing: there are also studies proving that people with schizophrenia have a difficult time distinguishing between what's real and not real. But not everyone in SL has a mental affliction that makes it hard to determine what is real and fake. I can see what you are saying. But I think the proper word should be animal play.


1 hour ago, Katherine Heartsong said:

I've been fine with a lot of the debate here but facts are facts, and that statement is plain incorrect. Sorry.

Uh-huh 🙄. For every one you find that says no, I can find just as many (peer-reviewed/published) well-known organisations and institutions saying it does, all within the same decade. For example, the University of Innsbruck 2017, University of Potsdam 2014, Southwest University 2019, Iowa State University 2014; then there is this one 2012, and this one 2012, and this one 2019, then this one 2019.

I could go on and on with other published/reviewed and scientifically valid studies backing up that there is a correlation, and even disproving studies saying there isn't. The fact is that it is still a hotly debated topic, with some saying they are wrong and others agreeing.

I stated in my post that studies have proven there is a correlation between the two, which is true. I linked them for you in this post, many recent just like the Oxford study, and I can find just as many more. So whose facts are correct, your peer-reviewed studies or mine?

Also be careful of news articles with headlines like "recent study shows no evidence of ...". For example, if you read the latest 2021 study that examined kids over a 10-year period (the longest yet), which the media has claimed states there is no increased aggression whilst playing games, it doesn't say that at all and actually agrees in part with other studies. I'll quote from the published journal article:

"For outcomes, there were no differences in prosocial behavior, depression, or anxiety at the final wave. However, “Moderates” showed significantly higher levels of aggression than “High Initial Violence” 

"Nevertheless, the current study provides evidence that of multiple violent video game trajectories, with moderate and relatively consistent play being the most likely related to increased aggressive behavior over time."

Despite all that, discussion of VVGE is off topic, and I was using it as an analogy, which you clearly missed.

1 hour ago, Sammy Huntsman said:

 But I think the proper word should be animal play.

No matter how you word it, it is still a detriment to Second Life's position in the current metaverse hype. You can see that just by the reactions of users in this thread to it. So, imagine what the reaction of someone who doesn't play SL and is researching whether to play it will be.

Edited by Drayke Newall
  • Haha 2

The major difference being, you have to actually seek it out in SL to find it. It's not an integral part of the game as violence is in video games. I've been in SL a long time and have never accidentally run into anyone engaging in b*******. And I go to many adult places. On purpose. Can't say I've ever seen anyone in those places doing those things. I'm sure I could actively search for it and find it, but otherwise, I don't see it.

I do agree that it shouldn't be allowed. Not even sure why it is, as even depictions of it are illegal in many places.

  • Like 3

It isn't allowed really but if no one sees it, how are they going to report it? You can't report what you haven't seen. If you do, and do it often enough, you risk losing your own account. The people having to deal with the reports aren't stupid. They know where and how to look to see who did what. Filing a bunch of false reports gives LL a reason to boot you off the grid.

  • Like 3

Rowan is absolutely right: if you aren't looking for stuff that is either highly sketchy, or actually in violation of the ToS, the odds are very good you won't see it. I visit all kinds of sims with the full range of ratings (although not generally ones that are devoted to sex), and I almost never trip over people having sex, let alone people having sex with goats.

And that fact, which is owed in part to the overall effectiveness of the ratings system, is one of the reasons why LL has generally been able to ignore a lot of what is happening at these places. Most residents in SL have at least some sense that this stuff is going on (although some certainly are oblivious), but it's not in their faces, and so they quietly carry on enjoying their own SL without paying much mind to it.

But . . . if you are actually looking for this stuff, it is not hard to find. And I speak from experience on this. And honestly, if LL wanted to "clean up the grid," they could probably get rid of the bulk of the most visible stuff in a day or two with a dedicated team. For that matter, IF it was what I did (and it's not), I could hit an awful lot on a single 8-hour AR spree all by myself. More if I turned loose a small posse of friends to reinforce me.

But I am really unconvinced that bans and ARs are effective. Age pl*y is "banned," but it's still happening, just barely disguised in a lot of cases. Bans just tend to drive stuff underground. And there is the problem of where to draw the line. Strictly enforced rules are going to lead to a lot of pretty innocent content and people getting caught up in the net, and some residents as well tend to weaponize these rules. Code is a really blunt instrument too: it can't begin to account for the shades and nuances that characterize actual human behaviour.

Rules (and, in civil society, laws) have their uses, but you can't legislate behaviours and attitudes. What is required is a shift in our cultural attitudes. And it begins with educating ourselves, not by gathering pitchforks and torches.

Edited by Scylla Rhiadra
  • Like 2

2 hours ago, Rowan Amore said:

The major difference being, you have to actually seek it out in SL to find it.

True, however, as I have said, when people type in "Second Life Reviews" because they are thinking of playing Second Life and want to find out about it, they don't have to look for it, as it is there in plain sight in negative user reviews.

1 hour ago, Silent Mistwalker said:

It isn't allowed really but if no one sees it, how are they going to report it? 

The difference is that when you flag things to LL, even with undeniable evidence, they do nothing about it. Take MP, for instance: I have flagged numerous items in the past, such as the ones Scylla mentioned earlier, and they are still up. They were not false reports; it is plain to see what they are for, yet nothing happens.

The more worrying thing is that Linden Lab knows full well there are problems and monitors them (with such issues increasing every year) but is actively choosing to ignore them by saying things like you have: "it's against our ToS and if we catch them we will act against them".

I may as well copy this from my still-hidden post, as I don't know when it will be unhidden. If that were the case, then can someone explain this quote from the court case (Second Life Is Plagued by Security Flaws, Ex-Employee Says | WIRED) mentioned in my still-hidden post in response to @Innula Zenovka:

"According to the lawsuit, in 2018 the manager of Linden Lab’s fraud team “presented information to Linden board members in quarterly fraud reports that acknowledged a high number of such Age pl*y [sic] violations were actually occurring on a regular basis each quarter.” The suit says Pearlman “was concerned that Linden Lab was apparently allowing the users to violate age pl*y rules, by not implementing appropriate procedures to prevent violations from repeating at the same levels each quarter.”

The lawsuit claims that Scott Butler, Linden Lab’s former chief compliance officer, wrote a memo to other executives in June 2018 “urging compliance with cybersecurity laws consistent with Pearlman’s repeated concerns” … A former high-level Linden Lab employee confirmed the contents of the memo. The former employee said the memo “indicated that there should be more scrutiny on the ‘skill gaming program,’” and recommended Linden Lab adopt a suggestion from Pearlman to determine why it “had not been able to prevent the seedy population of ‘age-pl**ers’ from returning to Second Life, time and again.”

As for the former high-level Linden Lab employee I bolded, I have no idea who it is, but from the article it seems he was contacted by the journalist outside of the court case and was not part of the case (I may be wrong though).

The article and court case also suggest that LL was not complying with anti-money-laundering rules; a year after she was fired, LL introduced such compliance measures. This suggests that in this case she was correct, which makes one wonder what else she was correct about.

Another thing to highlight is this: "Pearlman urged Linden Lab to review its age verification and consent review process, as she was worried the company could be erroneously collecting data on minors and enabling children to use the platform without the consent of a parent or guardian".

Now, whilst they dismissed this notion of enabling children to use the platform with their usual ToS spiel of "we don't need to comply with x, y, z", it is disturbing to see that kids under 13 and under 16 have reviewed SL here, claiming to have played and seen content they shouldn't have, and in some cases to have encountered borderline stalking. Most are old, but there are some from 2017-18, when Pearlman was employed, and some even newer, from 2019-2021.

These reviews by kids well underage would (if they are to be believed, and I see no reason why they shouldn't be) appear to confirm that Pearlman's concerns regarding age verification were factual, despite LL winning the case on the grounds that she was toxic and inept.

:EDIT:

This also shows that ARs are not effective (if LL acts on them at all, despite what some users have said otherwise in this thread), as LL is literally conducting quarterly reports showing no change in the number of violations. Meaning that, more than likely, offenders get banned, then create an alt and start again.

I also don't think it is a matter of it going underground; it is more a matter of a lack of procedures to prevent it, as Pearlman suggests in the court case.

Edited by Drayke Newall
  • Haha 1

In the context of this discussion, Philip Rosedale's comments on moderation in virtual worlds and the metaverse during his Twitter Spaces talk today are interesting. Inara has the recordings on her blog, along with a much fuller account of the full talk, but here is her bullet-point summary of what was said on the subject of moderation by Rosedale, and by Avi Bar-Zeev.

Note that Rosedale, from what I've seen here and in his comments on Twitter, is more interested in the relationship between users and user communities than in the minutiae of what should be permitted and how this should be policed, but his overall comments on things like self-moderation by communities are very relevant.

 

On Moderating Virtual Spaces

  • Sees moderation of virtual spaces / virtual worlds as something that still needs to be fully addressed.
  • Believes the approaches to moderation taken by social media platforms and across the Internet as a whole are insufficient for immersive spaces utilising avatars – simply put, a single standard of rules applied from above by a single company will not work.
  • In particular, sees a top-down approach to moderation as troublesome for a number of reasons, including:
    • Those utilising Meta’s suggested approach of recording interactions so that in the event of a dispute / reported abuse, the last 10-15 minutes can be attached to an abuse report, could use the gathered data to also help drive any advert / content-based revenue generation model they might also use.
    • Top-down approaches risk utilising a “one size fits all” approach to disputes in order to minimise the costs involved in managing moderation activities, thus removing the opportunity for subtlety of approach or for taking into consideration the uniqueness of any given situation / group, potentially alienating groups or activities.
  • Instead, believes that there should be a more fluid approach to moderation more in keeping with the physical world, and adjusted by circumstance / situation, and that:
    • Companies need to look at how spaces within their platforms are used and what is deemed as acceptable behaviour by the people operating  / using them.
    • Enable the communities / groups using spaces to be able to self-moderate through the provision of the means for them to do so (e.g. provide their own guidelines backed by the ability for them to remove troublemakers).
    •  Recognise the fact that the majority of people will adjust their behaviour to suit the environment they are within and self-moderate according to expectations of that environment.
  • Toward the end of the session, notes that there is a risk associated with some aspects of decentralisation of moderation / control. Within Second Life, for example, decentralisation of land ownership brought with it issues of anti-social behaviour and griefing – ad farms, intentionally being abusive towards neighbours through the use of large billboards, sounds, etc., whilst making the land too expensive for it to be reasonably purchased.

From Avi Bar-Zeev

  • Also notes that there is an inherent danger in how a company could use the recording / surveillance approach to moderation to profile users and to assist their ad / targeting revenue model.
  • However, he thinks the larger issue is that given the review of recordings associated with abuse reports that may be coming in by the thousand in a large-scale system is going to be human-intensive, then the use of AI systems to manage this process and minimise costs is likely inevitable. But:
    • How do we know the AI isn’t, by its very nature, pre-disposed to “find bad behaviour”, and to do so without consideration of a wider context (per Philip Rosedale’s warning)?
    • How can we be sure AI programming is sufficient for a system to correctly recognise some behaviour types as being abusive?
    • Is dealing with incidents in retrospect and with limited supporting data (e.g. just 10 minutes of audio) actually the best method of handling incidents?
  • As such, he also believes it is better to design systems wherein people are innately aware that they are dealing with other people across the screen from them, and so they self-moderate their behaviour (as most of us naturally do most of the time when engaging with others), and that there are ramifications if we then choose to be directly abusive towards others. In short, virtual spaces should “re-humanise” our interactions with others.
  • Like 1
  • Thanks 3

at the moment the penalties for breaking the rules are goto jail (login blocked for X days) or death (account banned)

i think people are more likely to moderate their behaviour when they get fined in game currency (L$). Same as people do in the real world when they get fined

egregious people with throwaway accounts will end up with a negative L$ account on the first offence

as soon as the account balance goes negative then goto jail (login blocked) til the full sum of the fine(s) is paid.  The system calculates how much US$ to charge the account payment method to get the L$. No or insufficient payment method then oh! well, have to stay in jail

sometimes people will want to appeal their fine, protest their innocence. JudgeAI Linden will just reply: Sorry you guilty, there is no appeal
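The fine-then-jail flow described above can be sketched roughly like this. This is a minimal hypothetical illustration: the `Account` fields, the top-up behaviour, and the idea of charging a payment method automatically are all assumptions for the sake of the sketch, not anything LL actually implements.

```python
# Hypothetical sketch of the L$ fine idea: fine first, and "jail" (block login)
# only when the balance goes negative and can't be covered from a payment method.

class Account:
    def __init__(self, balance_l=0, has_payment_method=False):
        self.balance_l = balance_l            # L$ balance (may go negative)
        self.has_payment_method = has_payment_method
        self.jailed = False                   # True = login blocked

def fine(account, amount_l):
    """Deduct a fine in L$; block login while the shortfall is unpaid."""
    account.balance_l -= amount_l
    if account.balance_l < 0:
        if account.has_payment_method:
            # Charge the payment method just enough US$ to cover the shortfall.
            account.balance_l = 0
        else:
            account.jailed = True             # stay in jail until the fine is paid
    return account
```

Note the throwaway-account case falls out naturally: an alt with no payment method and an empty balance goes straight to "jail" on the first fine it can't cover.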

  • Like 2

5 hours ago, Scylla Rhiadra said:

In the context of this discussion, Philip Rosedale's comments on moderation in virtual worlds and the metaverse during his Twitter Spaces talk today are interesting. Inara has the recordings on her blog,

As always, we're indebted to Inara Pey for all the hard work of reporting this—from which I lift further below.

What struck me from that talk was the discussion of "accessibility", which apparently refers to a metaversal learning curve:

Quote

[...]

  • Ergo, the first step in accessibility is moving things to a point where people are comfortable with the idea of using avatars and a virtual presence. Only when this has been addressed, and people are comfortable with the idea, can the wider issues of moderation, world-building, economics, etc., be tackled.
  • Believes the way to do this is to make avatars more visually expressive – which is itself a tough proposition [see, for one thing, the issue of the Uncanny Valley], and towards the end of the video expresses how this could be done by using webcams on laptops, mobile devices to capture facial expressions and have the back-end software then translate these onto avatar faces [an approach LL have indicated they plan to develop in 2022].

This strikes me as applicable to extroverts accustomed to regarding themselves at length in mirrors.

Some of us with different Jungian predispositions find the prospect of an avatar that watches and mimics our RL expressions to be terrifying.

  • Like 5

9 hours ago, Drayke Newall said:

True, however, as I have said, when people type in "Second Life Reviews" because they are thinking of playing Second Life and want to find out about it, they don't have to look for it, as it is there in plain sight in negative user reviews.

The difference is that when you flag things to LL, even with undeniable evidence, they do nothing about it. Take MP, for instance: I have flagged numerous items in the past, such as the ones Scylla mentioned earlier, and they are still up. They were not false reports; it is plain to see what they are for, yet nothing happens.

[…]

 

All of that tl;dr did nothing to address what you quoted.

I never mentioned the MP. I was talking about ARs! 

You moved the goal posts, so I quit. And no, that does not make you the winner. Everyone loses.

It's a civil suit, not criminal, for wrongful discharge, nothing else.

https://www.plainsite.org/dockets/40sd3ihux/superior-court-of-california-county-of-san-francisco/kavyanjali-pearlman-v-linden-research-inc-a-california-corporation-et-al/

Edited by Silent Mistwalker

While on the topic of A, M, and G: since joining back this year, I find it a disappointment that M is considered nudity-allowed (though in the past it always meant this). There is no middle ground between M and G; I'd almost say you need another rating, PG, that allows cursing in public chat (that could be filtered!) and NO nudity whatsoever. Then perhaps in M you allow nudity, but only behind closed walls or in parcels/regions that permit nudity.

I just feel the adult aspect of the game is what turns off the majority of new players; they feel Second Life is a sex game when it's much more than that.

Without proper regulation of other issues in game, such as the child models, which should be regulated similarly to gambling, I do not think we will see people adopt the game as a leading metaverse, even if it clearly is one.

On upload you should be able to mark the object itself as A, M, or G (as in, what its ORIGINAL upload state might be considered; further modification in game could change the rating, and there might be a manual option to change this setting). This way you also prevent models from rendering when entering/exiting A, M, or G regions. For example, if a model is nude, the model would be marked M or A; when the model walks into a G region it would be invisible, and all viewers on the other side of the sim boundary, looking from G into M, would also be unable to render the M content. Such a system would force developers to come up with auto fallbacks, though... perhaps allow models to be uploaded in sets of three: an A-rated set of model LODs, an M-rated set, and a G-rated set. I hope the migration to the cloud can afford these types of changes >.<
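The rating-fallback idea above can be sketched like this. It's a hypothetical illustration only: the G < M < A ordering matches SL's region maturity levels, but the per-rating model sets and the lookup function are assumptions, not how the SL viewer actually works.

```python
# Hypothetical sketch: pick the most mature model set a region's rating allows,
# falling back to a lower-rated set, or rendering nothing if none qualifies.

RATING_ORDER = {"G": 0, "M": 1, "A": 2}

def visible_model(model_sets, region_rating):
    """model_sets maps a rating ("G"/"M"/"A") to that rating's model LODs."""
    allowed = RATING_ORDER[region_rating]
    # Try the most mature uploaded set first, then fall back to milder ones.
    for rating in sorted(model_sets, key=RATING_ORDER.get, reverse=True):
        if RATING_ORDER[rating] <= allowed:
            return model_sets[rating]
    return None  # no suitable fallback uploaded: the model stays invisible
```

So an avatar uploaded with only an A-rated set would become invisible in a G region, while one uploaded with both A and G sets would silently swap to the G set, which is the auto-fallback behaviour the post describes.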

 

  • Haha 2

12 minutes ago, NardweBones said:


On upload you should be able to mark the object itself as A, M, or G (as in, what its ORIGINAL upload state might be considered; further modification in game could change the rating, and there might be a manual option to change this setting). This way you also prevent models from rendering when entering/exiting A, M, or G regions. For example, if a model is nude, the model would be marked M or A; when the model walks into a G region it would be invisible, and all viewers on the other side of the sim boundary, looking from G into M, would also be unable to render the M content. Such a system would force developers to come up with auto fallbacks, though... perhaps allow models to be uploaded in sets of three: an A-rated set of model LODs, an M-rated set, and a G-rated set. I hope the migration to the cloud can afford these types of changes >.<

 

Assigning maturity ratings on an object that hasn't even been scripted yet is putting the cart before the horse. Not to mention the coding for doing what you want would literally cripple SL.

Please people, stop trying to kill the only platform out there that is worth half a damn.

  • Like 4

1 minute ago, Silent Mistwalker said:

Assigning maturity ratings on an object that hasn't even been scripted yet is putting the cart before the horse. Not to mention the coding for doing what you want would literally cripple SL.

Please people, stop trying to kill the only platform out there that is worth half a damn.

I agree so much with this. All this talk about changing SL to attract the metaverse-hype people, while no one really knows what will happen to the metaverse, whether it will work as a platform, or what type of moderation issues it will have.

  • Like 4
Link to comment
Share on other sites

24 minutes ago, Silent Mistwalker said:

Assigning maturity ratings on an object that hasn't even been scripted yet is putting the cart before the horse. Not to mention the coding for doing what you want would literally cripple SL.

Please people, stop trying to kill the only platform out there that is worth half a damn.

I do not think that this would cripple the game, nor would it be hard to program. Second Life has gone too long without moderation, and we are seeing the SAME issues Second Life has (or had) in other metaverse-like platforms such as VRChat, whose moderation problems are no different.

Someone needs to take a stance on their platform: not a HUGE Big Brother type of stance, but enough to keep the garbage out so new users feel attracted to the platform.

Perhaps Second Life won't be the industry leader in the metaverse, but as it stands right now it appears to be just that. So I just hope that whatever they do, they do it with confidence and make wise decisions based on the analytics they have acquired over the years.

No one here wants to kill Second Life; we just want common ground on what a metaverse should be. Perhaps you are right and all the features I mention would ruin the game, but it's one of thousands of suggestions that have been made, some of which may also benefit the game.

 

Edit: Let's respect each other's opinions, as they are like *** holes: we all have one, brother.

Edited by NardweBones
  • Haha 2
Link to comment
Share on other sites

7 minutes ago, StarlanderGoods said:

I agree so much with this. All this talk about changing SL to attract the metaverse-hype people, while no one really knows what will happen to the metaverse, whether it will work as a platform, or what kind of moderation issues it will have.

And if these fabled metaversians have money, they can buy their own estates and impose whatever additional restrictions they want, regardless of "maturity" rating.

Sure, they shouldn't have to wade through domestic terrorism, pedo-porn and anti-vax propaganda—nobody should—but if they can't handle nudity they better stick to the virtual churchyard.

  • Like 2
  • Thanks 2
Link to comment
Share on other sites

3 minutes ago, NardweBones said:

I do not think that this would cripple the game, nor would it be hard to program. Second Life has gone too long without moderation, and we are seeing the SAME issues Second Life has (or had) in other metaverse-like platforms such as VRChat, whose moderation problems are no different.

Someone needs to take a stance on their platform: not a HUGE Big Brother type of stance, but enough to keep the garbage out so new users feel attracted to the platform.

Perhaps Second Life won't be the industry leader in the metaverse, but as it stands right now it appears to be just that. So I just hope that whatever they do, they do it with confidence and make wise decisions based on the analytics they have acquired over the years.

No one here wants to kill Second Life; we just want common ground on what a metaverse should be. Perhaps you are right and all the features I mention would ruin the game, but it's one of thousands of suggestions that have been made, some of which may also benefit the game.

Your proposal is not doable. Most SL items work by being combined: you buy a body from creator X, a skin from creator Y, clothes from creator Z, and naughty bits from creator Z2. Even if you make a PG-rated body, you can't control for every other attachment. How would an AI check whether a skin TEXTURE is adult or not? Who is going to rate my slightly revealing outfit?

Where does other, non-sexual adult content fit into this, such as violent content, weapons, gestures, and combat HUDs?

Is the whole Marketplace, along with the already existing in-world stock of items, going to be rated retroactively? What about all the things I already bought?

  • Like 3
Link to comment
Share on other sites
