
SL genocide vs. "if it ain't broke, don't fix it". What will SL be in 2 or 3 years?


BellaDonna Mocha


I don't mean that people can't be consistently stupid.

I mean that stupidity doesn't produce as consistent a set of results as we're seeing from LL in terms of discouraging people from continuing off-world commerce (the bear didn't encourage anyone, it's just supposed to appear to try to).

To get zero on a multiple-choice test, for example, doesn't take an idiot, it takes a genius. 

Since the beginning of Xstreet, has LL really done even one thing that didn't ultimately move them closer to seeming to need to shut down off-world commerce?

 



Josh Susanto wrote:

I don't mean that people can't be consistently stupid.

I mean that stupidity doesn't produce as consistent a set of results as we're seeing from LL in terms of discouraging people from continuing off-world commerce (the bear didn't encourage anyone, it's just supposed to appear to try to).

To get zero on a multiple-choice test, for example, doesn't take an idiot, it takes a genius.

Since the beginning of Xstreet, has LL really done even one thing that didn't ultimately move them closer to seeming to need to shut down off-world commerce?

(I like your term "Off-World Commerce" ... I'm gonna steal that if you don't mind. *smile*)

Yes, they have done something good. They linked in-world L$ account balances with the Marketplace. That was a stroke of genius in that it accomplished two goals:

 

  1. It erected a hurdle too big for any competitors to challenge and 
  2. It made the process of purchasing from the Marketplace incredibly easy for anyone.

 


3) It made it more difficult to detect, document or prove if you're getting paid in-world but not for transactions shown in the SLM.

Thus, despite a partial upside for users, it's more properly understood as one facet of LL's metrics eradication program.



Paladin Pinion wrote:

The QA was very poorly done.

I'm revising my opinion about this. If it's true that all the problem listings are within a certain numerical range, then no amount of QA would have ever found it. All test items would have received new numbers and would be unaffected. The problem would only appear when the changes went live with the real database.



Josh Susanto wrote:

>then no amount of QA would have ever found it.

Were people with those numbers not on any of the tests?

Why not?

 

Of course not. You never test with real production data, it would be a disaster. Tests are intended to find errors, and the full expectation is that errors exist. You never, ever experiment with your real database.


>You never test with real production data, it would be a disaster.

Meaning what, in reality, this time?

That they chose to test with everybody's production data on 21 March rather than risk some much smaller constrained set of volunteered production data before that?

Sorry, I still don't see the logic.



Paladin Pinion wrote:


Paladin Pinion wrote:

The QA was very poorly done.

I'm revising my opinion about this. If it's true that all the problem listings are within a certain numerical range, then no amount of QA would have ever found it. All test items would have received new numbers and would be unaffected. The problem would only appear when the changes went live with the real database.

Not out to contradict you, just catching up here and there.

The problem was still obvious during that timespan (assuming this is what they were calling a "search" problem), although I have no idea when that range of products was affected.

Doubtful the numbers themselves had anything to do with it, but rather the bug(s) causing those problems happened around that timeframe, and whatever was causing it was disabled, fixed (marketplace can be tweaked live without going down) or otherwise halted in its tracks.

In all likelihood the problem would have been caught by QA, because it was still there to be plainly seen when viewed manually. It obviously wasn't completely global, but it wasn't rare enough to be anything near an edge case that could have only been caught afterwards in production.

Automated tests of a simulated 50k merchants, 50k shoppers and 100k products with tests to check listing integrity would be what you might expect as a part of the process. Even the associated images can be tested automatically.
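Rough sketch of the kind of listing-integrity pass I mean, not LL's actual pipeline; the data model and every name in it are made up for illustration:

```python
# Sketch only: build a simulated catalogue (merchants, products, listings)
# and run basic listing-integrity checks over it. All names are hypothetical.
def make_catalogue(n_merchants=50_000, products_per_merchant=2):
    merchants = set(range(n_merchants))
    products = {}    # product_id -> owning merchant
    listings = {}    # listing_id -> (merchant_id, product_id, image)
    pid = lid = 0
    for m in merchants:
        for _ in range(products_per_merchant):
            products[pid] = m
            listings[lid] = (m, pid, f"image-{pid}.png")
            pid += 1
            lid += 1
    return merchants, products, listings

def check_integrity(merchants, products, listings):
    errors = []
    listed_products = set()
    for lid, (m, pid, image) in listings.items():
        if m not in merchants:
            errors.append((lid, "merchant does not exist"))
        if pid not in products:
            errors.append((lid, "product does not exist"))
        elif products[pid] != m:
            errors.append((lid, "listing cross-linked to another merchant's product"))
        if pid in listed_products:
            errors.append((lid, "duplicate listing for the same product"))
        listed_products.add(pid)
        if not image.endswith(".png"):
            errors.append((lid, "image reference looks wrong"))
    return errors

if __name__ == "__main__":
    merchants, products, listings = make_catalogue()
    problems = check_integrity(merchants, products, listings)
    print(f"{len(problems)} problems in {len(listings)} simulated listings")
```

Invariant checks like the cross-link test in the middle are cheap enough to run over a few hundred thousand simulated records in seconds.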

Going with your first response on both conspiracy and QA ;)


But what would it have hurt to ask people to refrain from using the actual deployment until the beta people had checked it as an actual deployment? 

That still might not have caught everything, but it could have constrained at least some of the Day 1 problems to a small group of people who would have at least some idea what they were seeing.

Instead: a green bear telling people - no, STILL telling people - that they need to migrate immediately and stop using their boxes. 

Which idea is stupider?

Mine or the bear?


Doubtful the numbers themselves had anything to do with it, but rather the bug(s) causing those problems happened around that timeframe, and whatever was causing it was disabled, fixed (marketplace can be tweaked live without going down) or otherwise halted in its tracks.

 

That sounds about right. We still don't know exactly what the problem was, but if we're going to speculate then it does seem to have happened during a particular time frame. I'm not a database guru, but I know there are ways to verify the data as a whole (but I'm not sure with such a large database if verification would really scan every single record; I don't think it does.) So yeah, they should have taken a closer look at the data before messing with it. On the other hand, if they didn't know a segment of the database was damaged then they'd have no real reason to do that if the general verification looked good. There are millions of records in there, so I'm sure they relied on some kind of automated process. Manual inspection would take years.
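For what it's worth, one way to "take a closer look" without scanning every single record is to spot-check a random sample from each ID range. A minimal sketch, assuming a generic SQL listings table; the table and column names are invented, not the real SLM schema:

```python
# Sampled verification sketch: spot-check a random slice of each listing-ID
# range for obviously broken rows. Table and column names are hypothetical.
import sqlite3

RANGE_SIZE = 100_000      # bucket listings by ID range
SAMPLE_PER_RANGE = 500    # rows to spot-check in each bucket

def verify_sample(conn: sqlite3.Connection):
    cur = conn.cursor()
    cur.execute("SELECT MIN(id), MAX(id) FROM listings")
    lo, hi = cur.fetchone()
    if lo is None:
        return []                      # empty table, nothing to check
    suspects = []
    for start in range(lo, hi + 1, RANGE_SIZE):
        cur.execute(
            """SELECT id, product_id, merchant_id FROM listings
               WHERE id BETWEEN ? AND ? ORDER BY RANDOM() LIMIT ?""",
            (start, start + RANGE_SIZE - 1, SAMPLE_PER_RANGE),
        )
        for lid, product_id, merchant_id in cur.fetchall():
            if product_id is None or merchant_id is None:
                suspects.append((lid, "orphaned listing"))
    return suspects
```

A pass like this only flags the kinds of damage you thought to look for, though, which is the catch: if the general verification looks clean, there's nothing prompting anyone to dig deeper.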

 

In all likelihood the problem would have been caught by QA, because it was still there to be plainly seen when viewed manually. It obviously wasn't completely global, but it wasn't rare enough to be anything near an edge case that could have only been caught afterwards in production.

That's what we don't know, and it would depend on how widespread it was. If the damage showed up a lot then yes, they should have seen it. If it was only in a small segment among millions of transactions, then it would not be so noticeable.

 

Automated tests of a simulated 50k merchants, 50k shoppers and 100k products with tests to check listing integrity would be what you might expect as a part of the process. Even the associated images can be tested automatically.

That's true, and since many of us haven't had any trouble with listings, at least some of it seems to work. But they still would never have found this particular problem if it really is due to a damaged subset of data. Any simulated data, as well as any generated data from the test grid, would create all new transactions, storefronts and listings. Results would be based on a system that was already fixed.

I don't know why cross linking happens. Is it related only to the damaged set of data, or is it a more widespread thing? Ann asked if it is still happening; is it?

 

(Sorry for the bolded quoting, this forum doesn't make partial quoting easy.)



Josh Susanto wrote:

But what would it have hurt to ask people to refrain from using the actual deployment until the beta people had checked it as an actual deployment? 


What would be the odds that they'd happen to test with this particular subset of damaged data during a random sampling?



Josh Susanto wrote:

But what would it have hurt to ask people to refrain from using the actual deployment until the beta people had checked it as an actual deployment? 

That still might not have caught everything, but it could have constrained at least some of the Day 1 problems to a small group of people who would have at least some idea what they were seeing.

Instead: a green bear telling people - no, STILL telling people - that they need to migrate immediately and stop using their boxes. 

Which idea is stupider?

Mine or the bear?

No, that actually would have been good.

Often when a company tests (and especially when they don't foist the beta work on their customers) there are a few stages of testing. Unit tests for their code, automated tests in a simulated environment and then something like you mentioned, where you do the final tests on a copy of the production data in a sandbox.

Kind of like how the beta grid is set up with a copy of your real inventory.

The latter could have caught every production entry at the time the snapshot was taken. Less likely with live testers, but in this case, scanning the appearance of the actual listings manually is the first thing beta testers probably would have gone for. If the problem was happening, it'd be visible in the production sandbox and they probably would have caught it, so yes ... you might be right on that front..

Guessing here that the marketplace is small enough to set up a sandbox copy for development or testing on a single decent machine. In this case those tests would take some days to a week to run, but obviously worth it to prevent problems like this.
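To make the sandbox idea concrete, a minimal sketch of the kind of before/after sweep you could run over a production snapshot; the file names and columns here are made up for illustration:

```python
# Sketch: compare a pre-migration dump of the production listings against the
# migrated sandbox copy. File and column names are hypothetical.
import csv

def load_dump(path):
    with open(path, newline="") as f:
        return {row["listing_id"]: row for row in csv.DictReader(f)}

def diff_dumps(before_path, after_path):
    before = load_dump(before_path)
    after = load_dump(after_path)
    problems = []
    for lid, old in before.items():
        new = after.get(lid)
        if new is None:
            problems.append((lid, "missing after migration"))
        elif (new["product_id"], new["merchant_id"]) != (old["product_id"], old["merchant_id"]):
            problems.append((lid, "now points at a different product/merchant"))
    return problems

if __name__ == "__main__":
    for lid, why in diff_dumps("listings_before.csv", "listings_sandbox.csv"):
        print(lid, why)
```

Unlike simulated data, a sweep over a snapshot carries the existing listing numbers with it, which is exactly why it could catch a problem confined to a particular range.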

Not disagreeing with Paladin here, because in the past they didn't have these particular problems and probably didn't think they needed that level of testing. Sloppy, but there it is. I think the only thing we differ on is our level of forgiveness when guessing at what could have happened ... to quote the song, my give a damn is busted when it comes to why the problems happened.

I was watching that other thread where they discovered the number range where the problem occurred. Awesome research, useful info and great teamwork. Unfortunately LL already knew "when" the problem happened, because they stopped it from happening at the end of that range.

So on top of the lack of testing, and asking their users to do what they should be paying for themselves, they're pissing their customers' efforts away while they go dark on the details.

At this stage it only matters that it's back to what passed for operational before, but some heads need to roll and changes need to be made before they go much further, because there are sequels in the making if not.

On the bear? OK, the bear is far more stupid than extreme views on what might be going on. The bear was like doing a victory dance, only to discover you scored a touchdown on the wrong side of the field.



Dartagan Shepherd wrote:

 

The latter could have caught every production entry at the time the snapshot was taken. Less likely with live testers, but in this case, scanning the appearance of the actual listings manually is the first thing beta testers probably would have gone for. If the problem was happening, it'd be visible in the production sandbox and they probably would have caught it, so yes ... you might be right on that front..

 

That's where I think it went off the tracks. It didn't show up in the sandbox.

 


Not disagreeing with Paladin here, because in the past they didn't have these particular problems and probably didn't think they needed that level of testing. Sloppy, but there it is. I think the only thing we differ on is our level of forgiveness when guessing at what could have happened

It's true. I do tend to be forgiving because I've been caught out myself. All the developers I know are passionate about their work and want to do it well. Those who don't feel that way don't last.

So my first reaction isn't that errors were malicious, or even (usually) incompetence, but that they were honest mistakes. Maybe I'm projecting. But I can't get as angry as some do because I know how easily a single typo can hose software and how bad I feel when it happens. I once made a mistake in a client project that was not only embarrassing for its stupidity but affected his customers. I felt awful.

This error is much worse than mine was but I'm not in any position to throw stones.

An aside, not directed to you Dartagan, but in general: it bothers me how little consideration the forumites give to the engineers. I don't think I could read these forums if I were working at SL; it would be too disheartening. Someone on another thread tried to say something nice and got completely shot down. It's like it's against the rules to be kind.

ETA: My comments in no way absolve LL of responsibility. They need to fix the problem and sort out the finances. No one should lose any money, and accounts need to be rectified. And they need to do it fast. It's only the name-calling that I think is wrong.


True, I hate saying some of these things and it's not personal and they're probably scrambling.

And there's of course a chance that it's not their fault. I mean the best manager in the world can only throw so much work on too few people.

Which might be another problem, when you've got too many managers with too much time, they start to "think stuff up".

What I'd really like is to have it go up the chain instead of at this team.

Got a problem with their ethic, which isn't this team's fault. The overpricing and convolution, depreciating the product for the price, blah. Cute move with Land Impact, resulting in fewer resources available to users. They gained a tad in sinks and offer fewer polygons per "prim count". Yes, mesh is great, but it didn't have to cost "more" than existing prims in resources, and throwing size and scripts under the bus to boot is just a cheap cheat. As you grow, the idea is to offer more, more cheaply. Small-time stuff. And their games with cash-out limits.

Other than that if I could tell their board/management one thing, it'd be to get your people across the board to "stop thinking stuff up".

When you grow up and actually have a product on your hands, you lose the experimental, lab-style "thinking stuff up" phase and leave that to managers who can balance what the customers want and need with the direction of the company. When you're declining, you need to drop every semblance of "thinking stuff up" when it creates emotional distress and the loss of more users.

And this is a really, really good product. It's not the first virtual world, but it's the best one yet and it's a beautiful thing. Uptime is awesome, etc. But that's all you have, LL. Just a virtual world. Not a social-media-experimental-whatever-you-think-up kind of thing. It's pretty simple stuff, crack that whip, heads down and make it a better virtual world.

Your customers need an abstract framework, that's all.

An example: what is a team responsible for building a marketplace doing playing around with "how" sales are made, optimizing those sales, spending thousands of hours researching how Amazon does it, how eBay does it and god knows who else does it? What are you doing playing with Google, Facebook and search beyond the bare basics of finding a product, when delivery was the priority in the first place?

They created a team of people too busy "thinking stuff up" to focus on the bare basics of commerce.

I call it clipboard-itis. Give someone the title and they'll be walking around with that clipboard in no time, getting half the work done.

Heads down, Fast, Easy and Fun was it? Your World?

If you want a startup, go do something revolutionary like a global bathroom locator that stops millions of people from wetting themselves daily and cycles millions of dollars around in a circle, until people realize they can just look for the bathroom signs or ask someone, and that by the time they've used your mobile social bathroom app, the accident is almost upon them.


>What would be the odds that they'd happen to test with this particular subset of damaged data during a random sampling?

A lot better than none, which is what they chose.

Especially if they had bothered to design the tests to consider different parts of the total data set. 

They had to find it eventually anyway; they just got a bunch of people totally pissed off by deciding to find it the hard way.
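To put a rough number on "better than none" (purely illustrative; nobody outside LL knows the real damage rate): if a fraction p of the listings in the affected set are broken and you spot-check n listings at random, the chance of hitting at least one bad one is 1 - (1 - p)^n.

```python
# Back-of-envelope odds that a random spot-check hits at least one damaged
# listing. The damage rates below are assumptions, not known figures.
def chance_of_catching(damage_rate, sample_size):
    return 1 - (1 - damage_rate) ** sample_size

for p in (0.001, 0.01, 0.05):        # assumed fraction of damaged listings
    for n in (100, 1_000, 10_000):   # number of listings spot-checked
        print(f"p={p}, n={n}: {chance_of_catching(p, n):.1%}")
```

Even at a 0.1% damage rate, a 1,000-listing sample has a better-than-even chance (about 63%) of hitting something, and a 10,000-listing sample is all but certain to.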


>Guessing here that the marketplace is small enough to set up a sandbox copy for development or testing on a single decent machine.

In terms of just catching order processing problems, they wouldn't even have needed to do that.

If a bunch of penguins know better than to jump in all at once in case there's a sea lion, SLM users should have been encouraged to apply the same principle.

Instead, the penguins were told "quick, jump in - there's a shark hiding behind that snow drift."

Who would tell them to do that?

Why, the sea lion in the water, of course. 


>Ugg, no. In cartoons they always turn green when they eat something they shouldn't have.

According to Wikipedia the bear could either be:

Do Your Best Bear

or

Oopsie Bear

The color is more like Do Your Best Bear, but having a Linden logo instead of a belly badge is more consistent with Oopsie Bear, who doesn't otherwise have a belly badge.

The preponderance of the evidentiary weight suggests that Oopsie got a job at Linden in order to use the Linden logo as a belly badge.

Mystery solved?



>Which might be another problem, when you've got too many managers with too much time, they start to "think stuff up".

If they have that much time, maybe they can think up where to find subordinates who know how to legitimately close a JIRA.

>What I'd really like is to have it go up the chain instead of at this team.

It needs to go up to at least the person suggesting the release dates, which seem to be calculated to produce maximum utility disruption per user dollar invested. 

>Your customers need an abstract framework, that's all.

Good point. That Xstreet produced the magic boxes before Linden did is a pretty good example of what the users can do if LL will just get the f### out of the way.

But LL's "creative" people also need to express themselves, right?

"I know - let's do something really interesting, like... make the boxes not work.!"


  • 7 months later...

All true. The single most important thing is that they need to beef up the virtual economy so that people can create viable micro-businesses and earn real money in SL. By expanding their land base they wiped out the virtual real estate market. They also need to lower tiers and offer a progressive tier discount as you acquire more land. I'd love to hear other people's suggestions on how to get the SL economy moving again!

