
Bot Scams: Linden Labs Should Do More About These Scams


Aster Rae



I have been seeing a lot more bots begging for money lately. The usual text is something like:

"Can someone lend me 100L? Please help me complete 1250L for pay my rent, I promise pay back tomorrow."

They use accounts that have been stolen, so they are unlikely to be traced to the actual person scamming other users. The scam always seems to be about someone who needs money to pay their rent, and I realize that sending Linden Labs an abuse report is not going to do much. The account will be removed, or given back to the original creator, but the scammer will just use another account.

There are only two ways that I can see to stop the scam entirely. The first involves notifying everyone who has a Second Life (SL) account to be more careful when logging into their SL account. Make sure that you're logging into secondlife.com and not a phishing website. The other way involves Linden Labs (LL) forcing all bots to be registered by the owner of a group before being able to comment on that group.

I would just like LL, if possible, to force bot registration (by a group owner) before allowing bots to comment. Though I suppose that will only force scammers to use non-bot text. And if there are enough scammers, then they can still cause the same problems. Perhaps it would slow them down a bit, at least.



Aster Rae wrote:

 

There are only two ways that I can see to stop the scam entirely. The first involves notifying everyone who has a Second Life (SL) account to be more careful when logging into their SL account. Make sure that you're logging into secondlife.com and not a phishing website. The other way involves Linden Labs (LL) forcing all bots to be registered by the owner of a group before being able to comment on that group.

I would just like LL, if possible, to force bot registration (by a group owner) before allowing bots to comment. Though I suppose that will only force scammers to use non-bot text. And if there are enough scammers, then they can still cause the same problems. Perhaps it would slow them down a bit, at least.

Guess what?... this is exactly what LL has already been doing for years, but residents are sometimes a bit less clever than the warnings assume... passwords... look for https... don't click links, don't respond to password questions... and what??

THEY STILL GIVE IT OUT...

We have a saying here: you can lead a thirsty horse to water, but you cannot force it to drink...

LL cannot prevent sign-ups, or all new residents would find the entrance blocked. There is a bot registration... there are warnings everywhere...

For now it's on the group owners: ban, eject, and stop with those open enrollments...

 

For a very recent thread about this, look here:

https://community.secondlife.com/t5/Abuse-and-Griefing/Should-you-file-abuse-reports-against-the-quot-Can-someone-lend/qaq-p/3069372

 

 



Sassy Romano wrote:

Implement stronger authentication in the first place and prevent the hijacking of accounts.

Right. Two-factor would solve this 100% of the time. (I imagine you know this, Sassy.)

Currently the login form is just two text fields (name, password), a design that's easily copied and requires no special attention from the user. No one even reads the page, let alone the URL.

The solution is to create an Authenticator, as used by many popular web services of equivalent size, and increasingly standard. This generates a fixed-digit code derived from the current timestamp and an encrypted hash of your password, so the elements on the login form would be (name, authentication code). If a phisher copies the form, they capture only the hashed code (not the base password), and because the code is also tied to the current timestamp, it expires within a couple of minutes, preventing delayed or persistent exploitation. Combine this with requiring the full password (or a fresh code) when buying Lindens or changing your Secret Answer, Recovery Email or Password, and you have a bulletproof system.
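(A minimal sketch of how such a time-based code is typically generated, in the style of TOTP per RFC 6238. Note that standard authenticators derive the code from a shared secret provisioned at setup rather than from the account password; the snippet below is illustrative only.)

# Minimal illustrative sketch of a time-based one-time code (TOTP, RFC 6238 style).
# Assumption: the code is derived from a shared secret provisioned when the
# Authenticator is set up (standard TOTP does not use the account password).
import hmac, hashlib, struct, time

def totp(secret: bytes, digits: int = 6, step: int = 30) -> str:
    counter = int(time.time()) // step           # 30-second time window
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-secret-from-setup"))         # e.g. "492039", valid for ~30 seconds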

It is entirely predictable that our userbase is being exploited in this way. We're an old service using old security methods, and our service operator shies away from spending resources on new development and is well known for not actively policing its users: the perfect target for phishers.



Freya Mokusei wrote:


Sassy Romano wrote:

Implement stronger authentication in the first place and prevent the hijacking of accounts.

Right. Two-factor would solve this 100% of the time.

I would love TFA to be added to the main website, because that's all that needs to be done to stop account takeovers.

Mind you, adding it to the marketplace, forums, and viewer itself is also ok!

 

But, sadly, it's not an impressive shiny for most people.


I don't know what evidence there is that they "use accounts that have been stolen". Maybe they do, but maybe they just use day-old alts or even mains because there are no repercussions. 

There is nothing in the TOS about soliciting money from other residents. Given that all kinds of people have their hands out for things that seem legitimate, like helping cancer research, how could you police this effectively?

All groups now have a function that enables you to ban members yet keep your group open for membership from the general public. So the key is to moderate your group if it is plagued with these problems and ban the miscreants. Not everyone has time or money for this sort of policing, so those annoyed by these discussions -- which usually consist far more of people expressing disgust than of the original beggar -- can just X the window and close group chat.

I'm not persuaded these are "bots" because I don't see how a scripted agent could join a group. Group joining requires clicking on a link in chat, and I'm not sure bots can do that.



Prokofy Neva wrote:

 

I'm not persuaded these are "bots" because I don't see how a scripted agent could join a group. Group joining requires clicking on a link in chat, and I'm not sure bots can do that.

https://www.mysmartbots.com/blog/2012/09/new-bot-api-functions-to-join-and-leave-groups/

Just like that.

You should remember that all the functionality of SL that the viewer offers is the result of APIs; the only reason a link needs "clicking" in chat is because that's how the viewer presents it to a human. There's no reason a non-interactive viewer can't just call the API and interact appropriately.

Think of a web page: software can invoke actions and follow links, but by your reasoning that couldn't be done because a link on a web page needs "clicking". It's all just software; just about anything is possible.
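As a purely hypothetical illustration (the endpoint, action and parameter names below are invented for the example, not the actual SmartBots or Second Life API), "joining a group" from a bot's point of view is nothing more than a programmatic request:

# Hypothetical sketch only: endpoint and parameter names are invented to show
# the idea, not the real SmartBots/Second Life API.
import urllib.parse, urllib.request

params = urllib.parse.urlencode({
    "action": "join_group",                                  # invented action name
    "apikey": "YOUR-API-KEY",                                # placeholder credential
    "botname": "Example Resident",                           # placeholder bot account
    "groupuuid": "00000000-0000-0000-0000-000000000000",     # placeholder group UUID
}).encode()

# No clicking involved: the bot simply sends the same intent a human expresses
# by clicking a group link in chat.
with urllib.request.urlopen("https://bots.example.invalid/api", params) as resp:
    print(resp.read().decode())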

