Tools and Technology


Entries in this blog

Linden Lab

Puppetry User Group Meeting on September 22, 2022


Tools & Technology

The Puppetry User Group is a bi-weekly meeting to discuss an experimental new feature that allows hardware and other software to connect to Second Life and provide animation data for an avatar. The purpose of these meetings is to connect with developers and coders who might want to experiment with it.

Learn more about Puppetry in our blog post.

Join the team on Thursday at 13:00 PT on the Aditi beta grid, Castelet region
 

Linden Lab

Puppetry User Group Meeting on September 8, 2022


Tools & Technology

The Puppetry User Group is a bi-weekly meeting to discuss an experimental new feature that allows hardware and other software to connect to Second Life and provide animation data for an avatar. The purpose of these meetings is to connect with developers and coders who might want to experiment with it.

Learn more about Puppetry in our blog post.

Join the team on Thursday at 13:00 PT on the Aditi beta grid, Castelet region
 

Linden Lab

Viewer Profiles: Everything old is new again!


Tools & Technology

With our latest release we’d like to welcome back Viewer Profiles! The Second Life viewer has made the transition back to an integrated Profile floater. This is the place to find more information about any Resident including a bio, picks, and more!


Residents may still use web-based profiles in a web browser. However, this change brings back a more responsive experience in the Viewer and puts the information you’re most interested in seeing on the first page.

As part of this change, Groups now move to the main tab. Hiding Groups no longer requires opening and viewing each one: just click the eye icon next to a group to hide or show it.

A big thank you to the Firestorm Viewer team for your contribution!

How things have changed

The viewer Profile, as opposed to the web Profile, shows all of your Second Life Bio, Feed, Picks, Classifieds and Real Life bio when viewed by other residents logged in to Second Life.

The Feed will remain a web property displayed in the Profile floater in its own tab.

Interests are accessible on the old web Profiles but will no longer be included in the viewer Profile.

Linden Lab

Introducing Second Life Puppetry


Tools & Technology

Photo by Alexa Linden

The idea

Wouldn’t it be cool if you could animate your avatar in real time?  What if you could wave your arm and your avatar could mimic your motions?  Or imagine if your avatar could reach out and touch something inworld or perform animations?  Linden Lab is exploring these possibilities with an experimental feature called “Puppetry.”

We have been working on this feature for some time and now we are ready to open it up to the Second Life community for further development and to find out what amazing things our creators will do with this new technology.

The codebase is alpha level and contains its share of rough edges that need refinement; however, the project is functionally complete, and the scripters and creators of Second Life can start trying it out.

See the section below “How to participate” to learn how to use Puppetry yourself.

Take a Look

We have some basic things working with a webcam and Second Life but there's more to do before it's as animated as we want.


Puppetry Technology

Puppetry accepts target transforms for avatar skeleton bones and uses inverse kinematics (IK) to place the connecting bones so that the specified bones reach their targets.  For example, the position and orientation “goal” of the hand could be specified, and IK would be used to compute how the forearm, elbow, upper arm, and shoulder should be positioned to achieve it. The IK calculation can be tricky to get right and is a work in progress.
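To make the two-bone case concrete, here is a minimal Python sketch of that kind of IK solve. This is not the viewer's actual solver, and the 2D setup and bone lengths are invented purely for illustration: given a target for the "hand", the elbow angle falls out of the law of cosines and the shoulder angle comes from aiming at the target.

# Minimal 2D two-bone IK sketch -- not the viewer's solver; lengths are made up.
# Given a target for the "hand", compute shoulder and elbow angles for an
# upper arm and forearm of fixed lengths.
import math

def two_bone_ik(target_x, target_y, upper_len=0.3, fore_len=0.25):
    """Return (shoulder, elbow) angles in radians for a reachable or clamped target."""
    dist = math.hypot(target_x, target_y)
    # Clamp to the reachable range so the arm simply stretches toward far targets.
    dist = max(abs(upper_len - fore_len), min(dist, upper_len + fore_len))
    dist = max(dist, 1e-9)  # avoid dividing by zero for a target at the shoulder
    # Law of cosines gives the bend at the elbow...
    cos_elbow = (upper_len**2 + fore_len**2 - dist**2) / (2 * upper_len * fore_len)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # ...and the shoulder aims at the target, offset by the triangle's interior angle.
    cos_shoulder = (upper_len**2 + dist**2 - fore_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow

shoulder, elbow = two_bone_ik(0.4, 0.2)
print(f"shoulder={math.degrees(shoulder):.1f} deg, elbow bend={math.degrees(elbow):.1f} deg")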

The target data is supplied by a plug-in that runs as a separate process and communicates with the viewer through the LLSD Event API Plug-in (LEAP) system.  This is a lesser known functionality of the Viewer which has been around for a while but has, until now, only been used for automated test and update purposes.

The Viewer transmits the Puppetry data to the region server, which broadcasts it to other Puppetry capable Viewers nearby.  The receiving Viewers use the same IK calculations to animate avatars in view.

For more details about the Puppetry technology, take a look at the Knowledge Base article Puppetry: How it Works.

Uses and Possibilities

We are excited about Puppetry’s potential to change the way we interact inside Second Life.  For example, using a webcam to track your face and hands could allow your avatar to mimic your face animations and finger movement, or more natural positioning of the avatar’s hands and feet against in-world objects might also be possible.  Alternative hardware could be used to feed information into Second Life to animate your avatar - a game controller or mocap equipment.  There's a lot to explore and try, and we invite the Second Life community to be involved in exploring the direction of this feature.

How to participate

The Puppetry feature requires a project viewer and can only be used on supporting Regions.  Download the project Viewer at the Alternate Viewers page.  Regions with Puppetry support exist on the  Second Life Preview Grid and are named: Bunraku, Marionette, and Castelet.

When using the Puppetry Viewer in one of those regions, if someone there is sending Puppetry data you should see their avatar animated accordingly.  To control your own avatar with Puppetry it's a bit more work to set up the system.  You need: a working Python3 installation, a plug-in script to run, and any Python modules it requires.  If you are interested and adventurous, please give it a try.  More detailed instructions can be found on the Puppetry Development page.
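For a rough feel of what a plug-in does, here is a heavily hedged Python sketch of a LEAP-style output loop: each message written to stdout is a length prefix, a colon, and a serialized LLSD map. The pump name and payload fields below are placeholders, not the real Puppetry schema, and the llsd package from PyPI is assumed; treat the Puppetry Development page as the authority on the actual messages.

# Hypothetical LEAP-style plug-in sketch. The length-prefixed "len:<llsd>"
# framing follows the LEAP convention, but the pump name and payload below
# are PLACEHOLDERS -- see the Puppetry Development page for the real schema.
import math
import sys
import time

import llsd  # assumed: the llsd package from PyPI


def send(pump, data):
    # One outbound message: byte length, a colon, then the LLSD notation body.
    body = llsd.format_notation({"pump": pump, "data": data})
    sys.stdout.buffer.write(str(len(body)).encode() + b":" + body)
    sys.stdout.buffer.flush()


start = time.time()
while True:
    t = time.time() - start
    # Wave an imaginary joint target back and forth about ten times per second.
    send("puppetry.example",                         # placeholder pump name
         {"example_joint": {"angle": math.sin(t)}})  # placeholder payload
    time.sleep(0.1)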

What's next

We look forward to seeing what our creators do with the new Puppetry technology. Compared to other features we have introduced, it’s quite experimental and rough around the edges, so please be patient!  We will keep refining it, but before we go further we wanted to get our residents’ thoughts.

We will be hosting an open discussion inworld on Thursday, Sept 8 at 1:00 PM SLT at the Bunraku, Marionette, and Castelet regions on the Preview Grid.  We're also happy to talk about this at the upcoming Server User Group or Content Creator meetings.  Come by, let us know what you think, and hear about our future plans!

Linden Lab

Increase in Server Crashes: June 30 to July 8


Tools & Technology

You may have noticed -- or fallen victim to -- the unfortunate increase in server crashes last week. We apologize to everyone affected!

Here’s what happened:

  • On Thursday, June 30, just before the July 4 holiday weekend in the U.S., we released a change to my.secondlife.com.
  • Early on the following Tuesday, we noticed and started investigating a significant increase in server crashes. We also started receiving Support reports about these.
  • We determined that the web update to my.secondlife.com included a library update which, unknown to us, dropped support for the texture format our servers expect.
  • This left Viewers unable to handle newly updated textures, and under some circumstances the server tried harder than it should have to open those textures - and crashed. This is why some residents saw problems with their profile photos.

On Tuesday July 5, after we investigated and confirmed what was happening, the team quickly rolled back the web site change to prevent the creation of more broken textures. The web team identified these textures, then ran a script to convert them. This was completed by Thursday morning, July 7.

The server team fast-tracked a fix to prevent servers from crashing. We were able to QA and release the fix to RC channels on the afternoon of Thursday, July 7 and then to the main SLS + Events channels on Friday, July 8.

Lessons:

  • Second Life is big and complicated and fails in unexpected ways (we knew that!)
  • Library upgrades may need additional testing for dependencies we couldn’t imagine.
  • We work really well together when Second Life fails in unexpected ways                                                    

Again, we apologize to everyone affected!

Linden Lab

Coming soon: Login improvements


Tools & Technology

We will be performing an update to the Second Life Login service on Monday, April 11th beginning at approximately 9:00 AM PDT and expected to complete before 10:00 AM PDT. Viewer logins will be unavailable for a portion of this maintenance period.

This update includes a fix to new device email notifications for newly created accounts, performance improvements for the viewer login handshake, and improvements to metrics and diagnostics for our internal tools. Most of these changes are behind the scenes and should not change the login experience for Residents once the maintenance is completed.

If you experience viewer login issues during the deployment window, please wait and try again later. Keep an eye on the Second Life Status Blog for updates. Thank you for your patience!

UPDATE: The maintenance is now completed.

Linden Lab

Recent Outages


Tools & Technology

Hello Residents!

Well, this is not the way we wanted the past week to go -- multiple outages in the same week! Not fun for anyone. Here’s what happened.

Login and Inventory Issues: Monday, 1/24 - Friday, 1/28
On Monday, January 24, residents’ inventories stopped responding to requests. Some residents were unable to log in as a result. We identified the affected accounts, addressed the problem and restored services to full operation in a little under 3 hours. 

Beginning on Wednesday of this week, we received reports of intermittent inventory issues – failures to rez, unpack and delete inventory items; and intermittent errors upon login.

When the number of reports increased, it became clear that this was an outage. Several Lindens jumped in to diagnose the reports. After some digging, we discovered our back-end infrastructure was being overloaded. Once this was resolved, the positive impact was almost immediate. The data now looks good on the back end, and we are no longer receiving reports of inventory issues.

We’re taking steps, including a deploy late last week, to prevent these issues in the future - and we have already seen progress in making the service more robust.

Weekly Rolling Restarts: Tuesday, 1/25
Every week we restart the simulator servers to keep them running smoothly. We only restart a certain number at the same time, allow them to finish, then start another batch. That’s why we call them “rolling restarts.” But on Tuesday a recent upgrade to the simulators meant that the usual number of simultaneous restarts was too much for the system to handle. The result was load spikes and numerous regions going down. Ultimately more than 12,000 regions were stuck in restart mode, which is 40% of the grid. Not good!

The team came together quickly to bring simulators back up in smaller batches, and then manually fixed blocks of regions. After some trials and monitoring, we found a smaller number of concurrent restarts that worked better. By 1:00 PM all regions were restored and operating normally. We apologize for this lengthy outage!

Advance… Retreat!: Wednesday, 1/26
We had planned a deploy to the production grid that would help us gather information on group chat performance. Unfortunately, the procedure used during testing did not work in production. We figured -- OK, we’ll just roll back. But then the rollback itself had problems!  Residents experienced issues with login, group chat, and presence information. The team was able to isolate the problem and complete the rollback, getting a few more gray hairs in the process. We’re going to take what we’ve learned and do a better deploy that will give us the information to improve group chat in the long run.

Last week was no fun for a lot of people at the Lab. Thank you for your patience. We really don’t like interrupting your enjoyment of Second Life. Here’s hoping we don’t have to come back with another one of these blog posts for quite some time! 

With love, 
The Lindens
 

Linden Lab

Hello Residents!

We’ve just released an update to our previous Maintenance viewer (here). We introduced a media playing bug last time and this update fixes that.

More information on the previous bug here.

Work around for the previous bug here.

Please feel free to try it out and let us know of specific problems by filing a Jira.

Thank you for your patience as we worked through this problem! 
 

Linden Lab

Upcoming Security Improvements to Second Life


Tools & Technology

Hello Residents!

If you run an inworld service that logs in as a scripted agent (aka, a bot), or maintain a Third Party Viewer, please pay attention. This blog post is for you. :)

We are making some changes to improve the security of Second Life!

On November 1st, 2021 we are going to be discontinuing the use of two older security protocols, TLS 1.0 and TLS 1.1, on our login services. We’re doing this to increase the security of everyone on the grid.

In March of 2021 the Internet Engineering Task Force (IETF) officially deprecated these two older protocols, and now we’re gonna do the same. TLS 1.0 was released in 1999, and TLS 1.1 was released in 2006, and while they’ve had a good run, it’s time for them to enjoy a nice retirement into Internet history.

On Wednesday afternoon this week we inadvertently turned off TLS 1.0 and TLS 1.1, and we received reports that several inworld services (such as older bots and some very old Third Party Viewers) were unable to log in. Since we hadn’t given Residents any warning this was going to happen, we turned them back on this morning. We want to make sure folks have a chance to update their services before we turn them off again permanently on November 1st, 2021.
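If you operate a scripted agent and want to confirm what your local TLS stack negotiates before the cutoff, a quick check along these lines can help. This is only a sketch: the hostname is a placeholder for whatever HTTPS endpoint your service actually talks to.

# Minimal sketch: print the TLS version your local Python/OpenSSL stack
# negotiates. The hostname is a placeholder -- substitute the endpoint your
# bot or service actually connects to.
import socket
import ssl

HOST = "secondlife.com"  # placeholder endpoint
context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version())  # e.g. 'TLSv1.2' or 'TLSv1.3'
        # Anything older than TLSv1.2 means your local runtime needs
        # updating before November 1st, 2021.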

If you were impacted by the sudden removal of these older security types this week, we’re sorry they were turned off without warning. We should have communicated it better. We also want to thank you for taking the time to improve the security of your service! The grid will be safer for us all as a result.

For everyone else, you have nothing to do! Our viewer, and almost all of the popular Third Party Viewers have been using the latest versions of TLS for years. You’re all set!

Looking forward to a more secure Second Life,
April Linden, Gridbun

Linden Lab

Introducing the 360 Snapshot Tool


Tools & Technology

Elvion - see the full 360 image by Alexa Linden on Flickr

You can now take a 360-degree snapshot of a location in Second Life. Better than a simple panoramic image, the 360 snapshot covers 360 degrees in all three dimensions, allowing you to see everything above and below your avatar as well, as though the image were projected on the inside of a sphere. Technically speaking, the snapshot is an equirectangular image projection. Better than a static image, the 360 snapshot can be clicked and dragged so you can move your ‘camera’ anywhere in the snapshot, at any angle, as if you are inside it.
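For the curious, “equirectangular” just means the image’s horizontal axis maps linearly to longitude and its vertical axis to latitude. The sketch below shows the standard pixel-to-direction math for such an image; it is generic projection code, not anything from the viewer.

# Standard equirectangular mapping: pixel (u, v) in a W x H image -> a unit
# direction on the sphere. Generic projection math, not viewer code.
import math

def pixel_to_direction(u, v, width, height):
    lon = (u / width) * 2.0 * math.pi - math.pi    # -pi .. +pi, left to right
    lat = math.pi / 2.0 - (v / height) * math.pi   # +pi/2 (up) .. -pi/2 (down)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# The top row of pixels maps to "straight up", the middle row to the horizon,
# and the bottom row to "straight down" -- which is why dragging the image
# feels like looking around from inside a sphere.
print(pixel_to_direction(0, 0, 4096, 2048))        # top-left: straight up
print(pixel_to_direction(2048, 1024, 4096, 2048))  # center: on the horizon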

When opened, the tool quickly creates a low resolution preview taken from the location of your in-world camera. You can click and drag on the preview image in all directions to see if you like it. Re-frame your shot by moving your camera in-world and creating another snapshot. 
 

Speed and quality

Snapshot creation speed depends on your computer’s graphics capabilities. It’s not necessary to use more than preview quality while you are composing your snapshot; if a higher quality level is slow on your computer, save it for the final snapshot. While a snapshot is being created, Second Life may appear to freeze. Just give it some more time to finish.

The quality of the snapshot is also affected by your graphics preferences (Me > Preferences > Graphics). You may have turned off features such as shadows and water reflections to make Second Life run faster, but for your 360 snapshot you may want to change some settings. The Hide Avatars checkbox is good for when you want to take a landscape shot but you don’t want to wait until no one is at that location.
 

How to use the saved snapshot

The snapshot will be saved in .jpeg format, which does not require a special viewer. All web browsers can display it, so you can add it to your social media feed or your favorite photo-sharing service. Check out these 360 snapshots from Alexa Linden -- and don’t forget to click and drag on them! 

We hope you enjoy the new 360 Snapshot feature and we can’t wait to see what you create with it!
 

Known Issues

We are aware of some issues in this first Project Viewer which we will address later:

  • It is possible that some things in the world will be missing from the snapshot. Make sure everything in-world loads before you take the snapshot. Rotating your avatar will ensure everything around you loads.
  • Snapshots may be corrupted on some systems with older or less powerful graphics cards, or if your graphics settings are turned down low. Try changing your graphics settings in Preferences.
  • Higher quality snapshots may take a long time to create. We’ll continue to work on increasing the speed.

 

Download the Viewer here. UPDATE: The 360 Snapshot feature is now in the official Second Life viewer, download the latest version now!

For more technical input, please file a Jira
 

Linden Lab

New Voice Improvements!


Tools & Technology

Happy Wednesday everyone!  

Today we released the latest Maintenance Viewer Build.  Among a slew of fixes and improvements is a key feature we want to highlight: the ability to tweak your Voice Activity Detection settings. Lately we’ve had increasing problems with voice cutting out at events or just while talking to friends nearby.  With these changes you will have a way to easily manage the problem. It’s been working seamlessly in our internal testing, including large events.

Voice Activity Detection
This Viewer exposes 3 VIVOX VAD (Voice Activity Detection) variables via Debug Settings and disables the (previously enabled) automatic mode. By making changes to these variables, we should be able to come up with a collection of settings that we can base new default values on in settings.xml.

The Debug Settings are:

  • VivoxVadAuto
    • Enable (1) or disable (0) automatic VAD - you will almost certainly want this set to 0 [off] and change things using the other settings
  • VivoxVadHangover
    • The time (in milliseconds) that it takes for the VAD to switch back to silence from speech mode after the last speech frame has been detected.
  • VivoxVadNoiseFloor
    • A dimensionless value between 0 and 20000 (default 576) that controls the maximum level at which the noise floor may be set by the VAD’s noise tracking. Too low a value will make noise tracking ineffective (a value of 0 disables noise tracking, and the VAD then relies purely on the sensitivity property). Too high a value will make long speech classifiable as noise.
  • VivoxVadSensitivity
    • A dimensionless value between 0 and 100, indicating the ‘sensitivity of the VAD’. Increasing this value corresponds to decreasing the sensitivity of the VAD (i.e. ‘0’ is most sensitive, while 100 is ‘least sensitive’)

The default values (updated) are (using VIVOX names):

  • VivoxVadAuto: 0 (disabled)
  • VadHangover(s): 2000 (Valid values are 1 - 60000 milliseconds)
  • VadSensitivity: 0 (Was 43 - valid values are 0 - 100)
  • VadNoiseFloor: 576 (Valid values are 0 - 20000)

Early testing suggests that VivoxVadNoiseFloor can only be changed by restarting the Viewer or teleporting away and coming back (it needs a new voice connection), but the other two settings take effect in real time as you change them. After some initial testing with VIVOX, we have settled on starting from VivoxVadSensitivity set to 0. This will result in no dropouts because the microphone is sending everything to the voice channel. However, in a noisy environment (talking in the background, a vacuum cleaner, the TV on, etc.) it will transmit that too. With modern microphones with built-in noise cancellation, sending everything may be a good thing, as the microphone may have done all the heavy lifting of noise cancellation first.

Please try it out and let us know what you think!  If you’re experiencing any bugs, please let us know!

Linden Lab

A Light in the Cloud: A Migration Update


Tools & Technology

A Light in the Cloud - A Migration Update.png

Hi Residents!

I’ve come to ask for a favor.

We’re in a really exciting time in the history of Second Life. We’re in the home stretch on moving the grid to the cloud. We hit a fun milestone a few days ago, and now there are over 1,000 regions running in the cloud!

Everyone in the Lab is working hard on this project, and we’re moving very quickly. I just got out of a leadership meeting where we went over what’s currently in flight, and there’s so many things moving that I lost track of them all. It’s amazing!

The favor I’ve come to ask you for is your patience.

We’re doing our very best to fix things that come up as we go. This means that we might need to restart regions more often than you’re used to, and things may break just a little more often than we’ve all been accustomed to.

In order to get this project done as fast as possible and minimize the time (and resulting bugs) we have to spend with one foot in our datacenter and the other in the cloud, we don’t want to limit ourselves to restarting regions just once a week. We’re ready to get this project done! We’ve seen how much better Second Life runs in the cloud, and we’re ready to have everyone on the grid experience it.

I’m sorry that things might be a little rough over the next few weeks. It’s our goal to finish the cloud migration by the holidays, so that everyone, Resident and Linden alike, can have a nice quiet holiday with our friends and families.

We can’t promise we’ll make it by then, but we’re sure giving it all we’ve got. The mood around the Lab is really positive right now, and we’re all working hard together to make it happen. I’m really proud to be a part of the team that’s transforming Second Life as we know it.

Thanks so much for hanging in there with us. We know it’s frustrating at times, but it won’t last for too long, and there’s a better future on the other side of this. We truly appreciate your understanding and patience as we finish up this project.

Thanks everyone. 💜

April Linden,
Second Life Operations Manager
 

Oz Linden

Uplift Update


Tools & Technology

We've been working hard on the Uplift of Second Life. If you have not been following this project, that's what we're calling the migration of our Second Life simulators, services, and websites from a private datacenter to hosting in The Cloud (Amazon Web Services). It's a massive, complicated project that I've previously compared to converting a steam-driven railroad to a maglev monorail -- without ever stopping the train. This undertaking has at times been smooth sailing, at other times a very bumpy ride. We wanted to share some more of the story with you.

Our goal has been to move SL incrementally to give ourselves the best chance of minimizing awareness among the residents that these changes were happening. We feel we’ve done better than we expected, but of course it’s the bumps in the road that are most noticeable to our residents. We apologize for recent service disruptions, although what’s perhaps not apparent is the progress we’ve made -- and the improvements in performance that have quietly taken place.

First, the rough spots:

  • Region Crossings
    One of the first troubles we found was that region crossings were significantly worse between a cloud region and a datacenter region. We did a deep dive into the code for objects (boats, cars, planes, etc) and produced an improvement that made them significantly faster and more reliable even within the datacenter. This has been applied to all regions already and was a good step forward.
  • Group Chat stalls
    Many users have reported that they are not able to get messages in some of their groups; we're very much aware of the problem. The start of those problems does coincide with when the chat service was uplifted; unfortunately the problems did not become clear until moving that service back to the datacenter was not an option. We haven’t been able to get that fixed as quickly as we would like, but the good news is that we have some changes nearly ready that we think may improve the service and will certainly provide us with better information to diagnose it if it isn't fixed. Those changes are live on the Beta grid now and should move to the main grid very soon.
  • Bake Failures
    Wednesday and especially Thursday of this past week were bad days for avatar appearance, and we're very much aware of how important that is. The avatar bake service has actually been uplifted for some time - it wasn't moving it that caused the problem, but another change to a related service. The good news is that thanks to a great cross-team effort during those two days we were able to determine why an apparently unrelated simulator update triggered the problem and got a fix deployed Thursday night. 
  • Increased Teleport Failures
    We have seen a slight increase in the frequency of teleport failures. I know that if it's happened to you it probably doesn't feel like a "slight" problem, especially since it appears to be true that if it's happened to someone once, it tends to keep happening for a while. Measured over the entire grid, it's just under two percentage points, but even that is unacceptable. We're less sure of the specific causes for this (including whether or not it's Uplift related), but are improving our ability to collect data on it and are very much focused on finding and fixing the problem whatever it is.
  • Marketplace & Stipend Glitches
    We've had some challenges related to uplift for both the Marketplace and the service that pays Premium Stipends. Marketplace had to be returned to the datacenter yesterday, but we'll correct the problems that required the rollback and get it done soon. The Stipends issues were both good and bad for users; there were some delays, but on the other hand we sent some users extra stipends (our fault, you win - we aren't taking them back); those problems are, we believe, solved now.

Perhaps the above makes it sound as though Uplift is in trouble. While this week in particular has seen some bumps in the road, it's actually going well overall. Lots of the infrastructure you don't interact with directly, and some you do, has been uplifted and has worked smoothly.

For a few weeks, almost all of the regions on the Beta grid have been running in the cloud, and over the last couple of weeks we've uplifted around a hundred regions on the main grid. Performance of those regions has been very very good, and stability has been excellent. We expect to be uplifting more regions in the next few working days (if you own a region you'd like included, submit a Support Ticket and we'll make it happen). Uplift of the Release Candidate regions, which will bring the count into the thousands, will begin soon. When we're confident that uplifted regions are working well at that larger scale, we'll be in a position to resume region sales, so if you've been waiting - the wait is almost over.

Overall, the Uplift project is on track to be complete or very nearly so by the end of this year (yes, 2020… I know I've said "fall" before and people have noted that I didn't say what year 🙂 ; the leaves haven't finished falling at my house yet…). It's likely that there will be other (hopefully small) temporary disruptions during this process, but we promise we'll do all we can to avoid them and fix them as fast as we can. This migration sets the stage for some significant improvements to Second Life and positions us to be able to grow the world well into the future.
 

Linden Lab

Explaining the Downtime

Hi Residents!

A lot of people had trouble connecting to Second Life yesterday (Aug 30th, 2020), and I want to explain a bit about what happened, and why we didn’t post on the Second Life Status page right away.

Early in the morning (US time) yesterday one of the major Internet providers had an issue that impacted a lot of the Internet as a whole, including Second Life. Several other Internet companies have written their own really good blog posts about what happened. We got caught up in the same thing.

The reason we were slow to post on the Second Life Status page is because, from our monitoring systems’ point of view, Second Life itself and all our systems were functioning normally. That means our operations team wasn’t getting alerted. Of course, from the Resident point of view, Second Life was effectively down in some parts of the world, and that’s really what matters.

To help us react quicker in the future we’ve made a few changes.

Yesterday evening we added a new monitoring service that checks on some of Second Life’s core systems from all around the globe. It’s a service that a lot of other companies use too, so we’ll get alerted better in the future. When Internet-scale events like yesterday happen there’s not a lot we can do about it, but we can post on the status page quicker to let our Residents know we’re aware things aren’t right.

We’re sorry for the lack of communication yesterday. We know how important Second Life is to our Residents, and we’re taking steps to increase our visibility into issues outside of our servers. It’s our hope that these steps will enable us to communicate better with y’all in the future.

See you inworld!

April Linden
Second Life Operations Manager
 

Linden Lab


Hi Residents!

We rolled out some changes to how region crossings work this week, and I want to explain a little about why and what we changed.

Please note that this blog post is gonna be a bunch more technical than my normal posts, because this is a tricky technical thing. Here’s a really quick summary, however: As part of moving regions to the cloud we discovered that region crossings between the cloud and our data center were terrible. Since we now had an easy way to reproduce the issue, we dug into it, and were able to find some really old bugs, and fixed them. Hooray! :)


And now for a fuller description!

The process of crossing from one region to another when you’re riding in a vehicle is pretty involved. The region you’re leaving needs to tell the region where you’re arriving everything it knows about the vehicle, and it has to do it really quickly. That includes all of the scripts in the vehicle, everything that’s attached to it, the direction and how fast it’s going, and lots of other stuff.

To make this happen quickly, early on in Second Life’s history we made some assumptions about our network, including things like how big a packet can be. Those assumptions generally worked okay on our own network, but not outside it.

When you crossed from one region to another, the regions were putting a lot of information into large packets and sending them across our network. This was usually okay because our network was purposefully built to run Second Life. Then, as soon as we tried to do this on someone else’s network (in the cloud), things didn’t work quite right. The problem was most noticeable when crossing from a region in our data center to one in the cloud.

The first thing our engineering teams tried was breaking those large packets up into smaller ones, but that actually made the problem worse. Rather than send one big packet and wait for the other side to say it received the data, with smaller packets it had to repeat that exchange a bunch of times, once for each packet. (Send, get an acknowledgement, send another piece, get an acknowledgement, etc.) It was still mostly okay across our network, but way worse when a region in our network was talking to one in the cloud. We now knew this code would never work well, so we needed a different approach.
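A back-of-the-envelope illustration of why splitting the data up made things worse (the numbers below are invented, not measurements): with a stop-and-wait exchange, the total time grows with the number of packets multiplied by the round-trip time, so a higher-latency path to the cloud magnifies the cost of many small acknowledged packets.

# Invented numbers, just to show the shape of the problem: sending N packets
# one at a time, each waiting for an acknowledgement, costs roughly N round
# trips, so higher round-trip time hurts far more once the data is split up.
def stop_and_wait_ms(total_bytes, packet_bytes, rtt_ms):
    packets = -(-total_bytes // packet_bytes)  # ceiling division
    return packets * rtt_ms

for rtt in (1, 20):  # e.g. inside one datacenter vs. datacenter <-> cloud
    print(f"RTT {rtt:>2} ms: one large packet ~{stop_and_wait_ms(60_000, 60_000, rtt)} ms, "
          f"fifty small packets ~{stop_and_wait_ms(60_000, 1_200, rtt)} ms")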

Next, our engineering team decided to use another way to send the data across the network, using the same protocol and method we use for other types of data. Most importantly, it is faster and more reliable. That did the trick! We’re still collecting statistics on the impact this change has, but things are looking very positive.

Once this new code was written, the performance when going from region to region got a lot better, and it worked between our data center and the cloud! The improvement was so dramatic that we decided not to make our Residents wait for uplifted simulators, and rolled the changes out right away. That code is what rolled out to the grid this week.

It’s really exciting that the cloud migration is helping us find really old bugs and make Second Life better as we go.

A gridbun that’s really ready to hop among the clouds,
April Linden
 

Linden Lab

UPDATE: Well, it looks like that may have been a quicker fix than we'd expected. Things should be back on track. Please let us know if you run into any issues using the submission form.

 

Hello all!

Our Destination Guide is a great source for exploring places in Second Life. In fact, you may have noticed we regularly feature some of these spots in our ongoing Destination Guide Video series.

We believe that the content created by our communities is some of the most compelling, creative, and interesting content under the sun (virtually and atomically!). That is why we are always looking for new places and events to add to the Destination Guide - or DG, as we have come to call it here at the Lab.

There are usually two ways to submit your spots for consideration - a web form and email. Unfortunately, it was recently discovered that the form is not working 100% of the time. We have identified the issue and put it on the radar for fixing - but our folks are working really hard on a few other priority things - things you all have been eagerly awaiting - and it may be a bit longer before that form gets fixed. Fear not though - we’ve got the editor@lindenlab.com email for this specific reason. In fact, we have been taking submissions via that address for many years, and we’re glad to have it available for you to send us your cool Regions, spaces, spots, events, and experiences.

When emailing editor@lindenlab.com with your submission, please provide the same information that you would through the form:

  • Title
  • Description
  • Image (657 x 394 pixels)

More information about submission guideline specifics can be found in the Knowledge Base here. We will be sure to update once the form is back in full swing. Share what you’re creating with us!

Linden Lab

Hi Residents!

This morning we had an issue on the Second Life Marketplace which caused a mix-up in what information was shown to Residents who were using the Marketplace at the time. I would like to explain a bit about what happened and provide more details.

The issue started this morning (Nov 4th) around 9:30am PST/SLT and ended around 12:30pm PST/SLT. That’s about three hours total.

During this time, if you were logged into the Second Life Marketplace and went to the user account page, you may have seen a page for another Resident that was currently logged in at the same time as you. The user account page shows your Second Life account name, L$ balance, a small portion of your past Marketplace activity, wish lists, received gifts, and an obfuscated version of your email address. (For the email address, it shows the first letter of the username and domain name, plus the top-level domain. So, secondlifefan@example.com appears as s****@e****.com. It’s just enough information to let you confirm an address you already know, but not enough that someone else could use it.)
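As a small illustration of that masking rule (keep the first letter of the username and of the domain, plus the top-level domain), the transformation looks roughly like the Python sketch below; this is an illustration only, not the Marketplace's actual code.

# Sketch of the obfuscation described above -- not the Marketplace's code.
def obfuscate_email(address):
    user, _, host = address.partition("@")
    domain, _, tld = host.rpartition(".")
    return f"{user[:1]}****@{domain[:1]}****.{tld}"

print(obfuscate_email("secondlifefan@example.com"))  # -> s****@e****.com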

We estimate that no more than 500 Residents visited the account page during this time, and not all of those would have been mixed up. You could not pick which Resident’s information you saw if you didn’t see your own. Instead, you’d get a page from another more-or-less random Resident that had also pulled up the account information page during this time.

It wasn’t possible to make purchases using someone else’s account, and you couldn’t have made changes to someone else’s account.

So how did this happen?
We’ve been working to make the Second Life Marketplace more robust and handle higher numbers of page views at once. Due to a change made this morning, the user account page got cached when we didn’t mean for it to be. Once we realized what had happened, we rolled back the changes immediately and deleted all of our caches. No other parts of Second Life were impacted.
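The Marketplace is not built on the framework shown below; this is just a generic sketch of the kind of response header that keeps a per-user page out of shared caches, which is the category of safeguard the rollback restored.

# Generic illustration (using Flask, not the Marketplace's actual stack) of
# marking a per-user page as uncacheable.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/account")
def account_page():
    response = make_response("per-user account details go here")
    # 'private' keeps shared caches (CDNs, proxies) from storing the page;
    # 'no-store' tells every cache not to keep a copy at all.
    response.headers["Cache-Control"] = "private, no-store"
    return response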

Our engineering teams are now working with our QA (quality assurance) team to make sure we develop better testing for this in the future. We want to make sure we catch something like this long before it makes it out into the hands of Residents.

We’d like to extend a really big thank you to everyone who reported the issue to us the moment they saw it! Because of your vigilance we were able to react really quickly and limit the time that this misconfiguration was live. (Seriously, y’all rock! 💜)

We’re sorry this issue happened this morning. We’re working to make sure it never happens again, and developing better test procedures for use in the future.

Your coffee-loving GridBun that chugged way too much coffee this morning,
April Linden
Second Life Operations Manager
 

Linden Lab

Hi Residents!

This past Sunday wasn’t very fun. Second Life had issues for a bunch of hours. I want to explain what happened.

The trouble started back on Thursday. We had some pretty bad problems talking to key services (packet loss) on one of our Internet links. It didn’t impact everything, but the stuff it hit was pretty important. It started for a few hours on Thursday, but just magically went away on its own. It’s never good when a problem just magically fixes itself because it’s pretty likely it’s gonna happen again.

The same thing happened again on Friday, but once again it went away on its own before we were able to debug what actually happened.

We were nervous going into the weekend, and sure enough, it bit us again on Sunday. We started having more Internet communication problems, but this time, it didn’t just go away.

Now that we had it in a bad state we started to troubleshoot and figured out really fast that it wasn’t our equipment. Our stuff was (and still is) working just fine, but we were getting intermittent errors and delays on traffic that was routed through one of our providers. We quickly opened a ticket with the network provider and started engaging with them. That’s never a fun thing to do because these are times when we’re waiting on hold on the phone with a vendor while Second Life isn’t running as well as it usually does.

After several hours trying to troubleshoot with the vendor, we decided to swing a bigger hammer and adjust our Internet routing. It took a few attempts, but we finally got it, and we were able to route around the problematic network. We’re still trying to troubleshoot with the vendor, but Second Life is back to normal again.

While we were troubleshooting, I happened to look at the forums a few times and noticed people asking if it was related to the power outages we’re having here in California. As much as I want to say that it is, we have no reason to believe that's the case. The people on my team did get hit by them, however! One of our engineers was working in the dark in a house without power, with his laptop being powered by a long string of extension cords that ran to a generator outside.

We are actively working on moving some services around to make us more resilient to incidents like what happened this weekend. It’s our top priority right now.

We’re really sorry that this past Sunday wasn’t very fun. The weekend before Halloween is a really fun time to be Inworld, and it was a frustrating day all the way around. (I personally love the way our Residents really get into Halloween in a way that’s only possible in Second Life!) Knowing that it wasn't as awesome as it could have been makes me sad, and we’re working to make it better in the future.

If you are having problems which you believe began during this outage, Support is ready to help.

April Linden,
Second Life Operations Manager
 

Linden Lab

As part of our ongoing efforts to improve script performance, we recently made changes to how scripts are scheduled and events are delivered.  Unfortunately, we found that those changes caused some widely-used scripts to break, which led to the grid rollback last Saturday. (We were apparently unlucky in how few of those scripts were on the Release Candidate regions the previous week). We have now made further improvements that should prevent most of those problems, but even with those fixes there will be some changes in the timing and order of how scripts are run. On the whole, those changes will improve performance, but there are some scripting best practices which you should be using.  These will help you avoid being dependent on any particular ordering or timing of event delivery and script execution.

One common cause of problems is communication between objects immediately after one creates the other. When an object rezzes another object inworld using llRezObject or llRezAtRoot, the two objects frequently want to communicate, such as through calls to llRegionSayTo or llGiveInventory. The parent object receives an object_rez() event when the new object has been created, but it is never safe to assume that scripts in the new object have had a chance to run when the object_rez event is delivered. This means that the new object may not have initialized its listen() event or called llAllowInventoryDrop, so any attempt to send it messages or inventory could fail. The parent object should not begin sending messages or giving inventory from the object_rez() event, or even rely on waiting some time after that event. Instead, the parent (rezzer) and the child (rezzee) should perform a handshake to confirm that both sides are ready for any transfer.

The sequence for this process is:

  1. The parent registers a listen event using llListen on a channel.
  2. The parent calls llRezObject and passes that channel number to the new object in the start_param.
  3. The child creates a listen event using llListen in their on_rez handler on the channel passed as the start_param.
  4. The child performs any other setup required to be ready for whatever communication it will need with the parent.
  5. The child sends a “ready” message using llRegionSayTo() on the channel to the parent.
  6. The parent transfers inventory or sends setup commands via llRegionSayTo to the child.
  7. The parent sends a “done” message to the child and may now shut down the communications channel.
  8. The child receives the “done” message and may now tear down any setup it did to enable configuration (such as calling llAllowInventoryDrop(FALSE)).

You can find sample code for both the parent and the child below.

It's worth noting that this communication pattern has always been the best way to write your scripts. Even without the scheduler changes, the ordering of when scripts execute in the new object and when the object_rez event was delivered to the rezzer was not deterministic. It does seem to be true that making the scheduler faster has made this race condition somewhat less predictable, but making all scripts run with less scheduling overhead is worth the ordering being slightly less predictable, especially since it wasn't assured before anyway. We hope these new changes help everyone’s world run just a little smoother! To share your thoughts on this, please use this forum post.

Rezzer.lsl
////////////////////////
// Rezzer script.
//	Rezzes an object from inventory, establishes a communication channel, and
//  gives the rezzed object inventory.

integer COM_CHANNEL=-17974594; // chat channel used to coordinate between rezzer and rezzee
string 	REZZEE_NAME="Rezzee";

string CMD_REZZEE_READY = "REZZEE_READY";
string CMD_REZZER_DONE = "REZZER_DONE";

key 			rezzee_key;

default 
{
	//...
	touch_start(integer count)
	{
		state configure_child;
	}
	//...
}

// rez and configure a child
state configure_child
{
	state_entry()
	{
		// where to rez
		vector position = llGetPos() + <0.0, 0.0, 1.0>;	
		// establish rezzer's listen on the command channel
		llListen( COM_CHANNEL, "", "", "" );	
		// rez the object from inventory.  Note that we are passing the 
		// communication channel as the rez parameter.
		llRezObject(REZZEE_NAME, position, ZERO_VECTOR, ZERO_ROTATION, COM_CHANNEL);	
	}

	object_rez(key id)
	{	// the object has been rezzed in world.  It may not have successfully 
		// established its communication yet or done anything that it needs to 
		// in order to be ready for config. Don't do anything till we get the signal
		rezzee_key = id;
	}
	
	listen(integer channel, string name, key id, string message)
	{
		if (message == CMD_REZZEE_READY)
		{	// the rezzee has told us that they are ready to be configured.  
			// we can sanity check id == rezzee_key, but in this trivial case that is
			// not necessary.
			integer count = llGetInventoryNumber(INVENTORY_NOTECARD);
			// give all note cards in our inventory to the rezzee (we could 
			// do scripts, objects, or animations here too)
			while(count)
			{
				string name = llGetInventoryName(INVENTORY_NOTECARD, --count);
				llGiveInventory(id, name);
			}
			// And now tell the rezzee that we have finished giving it everything.
			llRegionSayTo(id, COM_CHANNEL, CMD_REZZER_DONE);
			// And we can leave configure child mode.
			state default;
		}
	}
}
          
__________________________________________________________________________________________________________________________________________

Rezzee.lsl
// Rezzee

integer com_channel = 0;
key parent_key = NULL_KEY;

string CMD_REZZEE_READY = "REZZEE_READY";
string CMD_REZZER_DONE = "REZZER_DONE";

default 
{
	//...
	on_rez(integer start_param)
	{
		com_channel = start_param;
		state configure;
	}
	//...
}

state configure
{
	state_entry()
	{	
		// Get the key of the object that rezzed us
		list details = llGetObjectDetails( llGetKey(), [ OBJECT_REZZER_KEY ] );
		parent_key = llList2Key(details, 0);	
		
		// establish our command channel and only listen to the object that rezzed us
		llListen(com_channel, "", parent_key, "");
		// Our rezzer will be giving us inventory.
		llAllowInventoryDrop(TRUE);	
		// finally tell our rezzer that we are ready
		llRegionSayTo( parent_key, com_channel, CMD_REZZEE_READY );
	}
	
	 listen( integer channel, string name, key id, string message )
	 { // in a more complex example you could check that the id and channel 
		// match but for this example we can take it on faith.
		if (message == CMD_REZZER_DONE)
		{	// the parent has told this script that it is done we can go back to 
			// our normal state.
			state default;
		}
	 }
	 
	 state_exit()
	 {	// turn off inventory drop.  
		llAllowInventoryDrop(FALSE);	
		// We don't need to clean up the listen since that will be done automatically 
		// when we leave this state.
	 }
}

 

Linden Lab

Recently, experience creators have been dealing with an issue where some experience-enabled scripts stopped being associated with their experience. We have traced the problem to a loss of data in one of our internal systems. 

This data loss was due to human error rather than any change to server software. Why do we think this is good news? Because we can now easily prevent it from happening in the future. 

We have engaged in a first pass of recovery efforts which have yielded the restoration of the experience association for a number of scripts, and we are testing a server-based fix which will automatically correct most others. That fix is working its way through QA, and we will highlight this in the server release notes when it becomes available. In the meantime, there’s a workaround for any experience creators or contributors who want to fix the issue sooner:

  • Open the script in an object in-world or attached to you  
  • Make sure the bottom widgets have your experience selected 
  • Save

This will get the experience-enabled scripts running again.

We’re sorry this happened. We’ve enacted changes to fix our process so this doesn’t happen again. And we appreciate everyone's patience as we investigated this issue and now attempt to repair the damage. We are committed to enabling Second Life’s content creators of all stripes and we recognize that incidents like this damage that commitment, which is something none of us ever set out to do. 



 

Linden Lab

For many years, we have introduced changes to the Region simulators by deploying updates first to one or more of our Release Candidate (RC) channels, and rolling them on the Wednesday following the main channel roll on Tuesday. We evaluate the performance and stability in those RCs before making the changes to the rest of the Grid. This is an essential element of evolving Second Life because the size and variability of our virtual world are so great that there is no way we can test (or even know about) all the ways in which you're using it. 

We're working on a series of changes to this process designed to provide us with better data on the reliability and performance of each server update. These process changes have already begun internally with better tracking and monitoring of server performance and stability. Starting with this week's rolls, some of the externally visible changes will begin. The first change you'll be able to see is that the channel name will no longer be obvious: when this change is fully deployed (it will only be on one or more RCs this week), the channel name displayed by the viewer or available to LSL will always be the main channel name ("Second Life Server"). This is simply to avoid spurious associations with the RC names ("BlueSteel", "LeTigre", "Magnum", and occasionally other smaller ones); we hear interesting but incorrect assumptions made about those channel names, such as that one channel runs on better (or worse) hardware than another one does. For now at least, you'll still be able to determine that your Region is on some RC by the fact that it rolls on Wednesday rather than Tuesday (it would be nice to get all rolls onto just one day or otherwise disassociate roll days from whether or not a Region is on an RC … we'd like to get there eventually), and by comparing the simulator version strings (which are getting a small format change with this version) to the versions in the release notes. What's really important is the simulator version, so be sure to report that with any problem; reporting the channel name alone just means that we have to figure out when you were reporting and look up the version you had at that time, since it can change.

Speaking of release notes, the server release notes will soon be moving from the wiki to the new releasenotes.secondlife.com site; that site has been used for viewer releases for some time now. The process which creates the notes on the new site more accurately reports when we fix a bug you reported or a feature you requested by using the externally visible BUG ids.

Future improvements will make each RC channel a better model of the Grid as a whole. Support will continue to be able to accommodate Region owners’ requests that a Region be in the RC for a particular feature or fix they want as soon as possible, or that it be excluded from any RC. It is generally better if Region owners do allow us to select Regions for RCs because it improves the chances that we'll detect problems early - if your Region is unusual in some way, it may be the best place for us to detect a problem and avoid sending it to the entire Grid. The RC sandbox Regions will, of course, stay in the RCs, so you'll always have somewhere to test the latest changes.

 

Linden Lab

Some of you know me as Soft Linden. I’m the information security manager at Linden Lab.

A large number of you attended the Tilia Town Hall  last week. Aside from the many questions you had about how Tilia affects Second Life L$ and monetary activity, privacy was a common concern. Grumpity asked if I would answer a few of the questions about Tilia privacy and security which surfaced in the town hall and in our forums. This has been a busy time for everybody who has worked on Tilia, but I’m glad I can take a few moments to share some information.
 

Where did the Tilia team come from? And why should I trust Tilia with my personal information?
 

The Tilia team is made up of people you previously knew as Linden Lab employees. We’re part of this team because we are passionate about privacy and security. Tilia includes employees who use Second Life alts in our free time. We know many of you as friends and creators in Second Life. So not only are our practices aimed at complying with an ever expanding list of U.S. regulations and laws, but we strive to go above and beyond. We want to protect the best interests of ourselves, our friends, and the countless Residents who support the world we love. We fully believe that Second Life wouldn’t be possible without working to earn your trust.

For example, we don’t like the way many other companies resell customer information. Because we disagree with those practices, the information you store with Tilia is never provided to third parties for purposes such as marketing. We want you to feel confident that you can play, experiment, and explore in Second Life without outside strangers learning anything about you which you have not shared under your own initiative.

We won’t even provide that information to the US government unless we are compelled to do so through a legal process such as a subpoena or a search warrant. 

But the privacy and security story goes much, much further.


Does Tilia change how my information is secured?
 

Yes! This project began years ago. Quite a bit of the work we do to improve Second Life is "behind the scenes" - things that users cannot directly interact with. Often it's not even possible for users to detect that something has changed. This is one such case.

A few years ago, we looked at Second Life, and how information security has evolved in the time since Second Life was created. We asked ourselves how we could better protect our most sensitive customer information.

Our engineers created a new “personal information vault” project. This vault uses modern algorithms to encrypt sensitive information in a way that would require both enormous computing power and an enormous amount of memory for an attacker to crack… if they could even get a copy of the encrypted data. These algorithms are specifically tuned to defeat expensive decryption acceleration hardware. And all of this new encryption is wrapped around the encryption we already used - encryption which was the industry standard at the time. These are entire new layers using encryption technologies which didn’t exist when Second Life was new.

Even after all of these changes, the old protection remains in place at the bottom of that stack. Figuratively speaking, we locked the old vault inside a bigger, stronger vault. We chose an approach where we didn’t need to decrypt information in order to enhance your protection.
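To make the "expensive to crack, layered around the older encryption" idea concrete, here is a generic Python sketch; it is emphatically not Tilia's actual design, algorithms, or parameters. A memory-hard key derivation (scrypt) produces the outer key, and the outer layer is wrapped around a blob that is assumed to already be encrypted by the older system.

# Generic illustration only -- NOT Tilia's design, algorithms, or parameters.
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # assumed: the 'cryptography' package

def derive_outer_key(passphrase, salt):
    # scrypt's cost parameters force large memory use, which is what makes
    # dedicated decryption acceleration hardware far less effective.
    raw = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64 32-byte key

legacy_ciphertext = b"...already encrypted by the older system..."
salt = os.urandom(16)
outer = Fernet(derive_outer_key(b"example passphrase", salt))
double_wrapped = outer.encrypt(legacy_ciphertext)   # new layer around the old one
assert outer.decrypt(double_wrapped) == legacy_ciphertext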

There is another key part of this project: Our storage mechanisms for sensitive customer information are now isolated from Second Life. The information isn’t stored at the same physical location anymore, and hasn’t been for a while. But the difference is more than physical.

Second Life’s servers do not have direct access to Tilia information that isn’t required for daily Second Life usage. Even developers who have worked at the company for a dozen years - developers who have full access to every last Second Life server - do not have access to the servers that store and protect the most sensitive information. A policy of least privilege means fewer opportunities for mistakes.

Even within Tilia, key information is further segmented. This means that compromising one database inside of Tilia is insufficient to decrypt and correlate sensitive data without compromising a different service. We have deployed numerous commercial products which help monitor for access, abuse, or data copying attempts for data that is made available to Tilia employees. This means that even an attacker with all employee access credentials, access to employee multifactor authentication tokens, and all Tilia access permissions would still face some challenges in avoiding early detection.

That was a lot to explain. But it is all important, because this is the technical foundation of Tilia. It’s a core piece of the Tilia story, and it is something we have worked on for years. Tilia was created in large part because we saw an opportunity to share these technologies with other businesses.

These technologies are in place today for all of the information you entrust Tilia to handle. 

I am proud of what our engineers have accomplished. These same technologies are only in the planning stages at other companies and institutions. Many of the bigger businesses that already handle sensitive data like credit reports and medical records are still working to complete similar projects. But we have this protection in place today.
 

It sounds like a lot has changed at once. Aren’t large changes risky?
 

Tilia was designed with security and privacy as its primary considerations. These considerations apply not only to what we create, but also to how we create it and how we validate ongoing changes to what we create.

For Tilia, we chose a newer security-focused programming language over Python and C++, the older languages that make up much of Second Life. It’s more difficult to make security errors in modern security-focused languages, but it’s not impossible. This is why we have created thousands of automated tests that exercise nearly every aspect of Tilia. Every change to Tilia triggers the execution of these tests, and the change is rejected if it causes nonconformant behavior.
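
To give a flavor of what such an automated check might look like, here is a pytest-style sketch. The mask_card_number helper is hypothetical and not part of any Tilia API; it simply shows how a test can encode a security requirement that every change must continue to satisfy before it is accepted.

```python
# A hypothetical example of a security-focused automated test. In a CI
# pipeline, any change that breaks a test like this is rejected before it
# can reach production.


def mask_card_number(card_number: str) -> str:
    """Hypothetical helper: reveal only the last four digits."""
    return "*" * (len(card_number) - 4) + card_number[-4:]


def test_card_number_is_masked_except_last_four():
    masked = mask_card_number("4111111111111111")
    assert masked.endswith("1111")
    assert masked.count("*") == 12       # all leading digits are hidden
    assert "4111" not in masked          # the issuer prefix must never leak
```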

The Tilia team also pays a security testing company to attempt to hack Tilia and perform routine vulnerability assessments. Any Tilia service that is exposed to Second Life users is also exposed to outside security testers. These testers evaluate changes in a staging environment before they are ever presented to Second Life users.

We enlisted outside specialists to review some of our key privacy and security practices and procedures. We then invited a team from Amazon Web Services to sit in our offices with us and review every aspect of our service deployment and hosting infrastructure.

Every step we have taken has been cautious. When it comes to privacy and security, the Tilia engineering team believes that the tortoise wins the race.
 

What does Tilia mean for Second Life privacy and security in the future?
 

We have many plans for Tilia. Additional work is already under way.

While we have already moved regulated information out of Second Life and into Tilia, we are actively migrating additional forms of information. Now that we have a new privacy and security foundation, we can extend the amount of information that enjoys this level of protection. If information pertains to your real-life identity, we want it to have Tilia’s protection wherever possible.

Tilia will enable future Second Life projects as well. We designed Tilia to support additional business customers, so we are able to justify larger privacy and security projects to benefit new business customers and existing Second Life Residents alike.

Aside from ensuring compliance with upcoming privacy and security regulations, our early goals are largely driven by Second Life. These goals include the option for users to select stronger authentication mechanisms, better mechanisms for our team to identify callers who request account help, and additional tools which support our fraud protection team.

As to Second Life itself, by relieving the team of many of the heaviest privacy and security burdens, we believe we can help them be even more effective in developing the virtual world we all love.

Stay tuned to see what we can do.

Soft Linden

Linden Lab

Hi Residents!

We had one of the longest periods of downtime in recent memory this week (roughly four hours!), and I want to explain what happened.

This week we were doing much needed maintenance on the network that powers Second Life. The core routers that connect our data center to the Internet were nearing their end-of-life, and needed to be upgraded to make our cloud migration more robust.

Replacing the core routers on a production system that’s in very active use is really tricky to get right. We were determined to do it correctly, so we spent over a month planning all of the things we were going to do, and in what order, including full rollback plans at each step. We even hired a very experienced network consultant to work with us to make sure we had a really good plan in place, all with the goal of interrupting Second Life as little as we could while improving it.

This past Monday was the big day. A few of our engineers (including our network consultant) and I (the team manager) arrived in the data center, ready to go. We were going to be the eyes, ears, and hands on the ground for a different group of engineers working remotely, who would carefully follow the plan we’d laid out. It was my job to communicate what was happening at every step along the way to my fellow Lindens back at the Lab, and also to Residents via the status blog. I did this to allow the engineering team to focus on the task at hand.

Everything started out great. We got the first new core router in place and taking traffic without any impact at all to the grid. When we started working on the second core router, however, it all went wrong.

As part of the process of shifting traffic over to the second router, one of our engineers moved a cable to its new home. We knew there’d be a few seconds of impact and were expecting that, but it quickly became clear that something, somewhere, didn’t work right. There was a moment of sheer horror in the data center when we realized that all traffic out of Second Life had stopped flowing, and we didn’t know why.

After the shock had worn off, we quickly decided to roll back the step that failed, but it was too late. Everyone who was logged into Second Life at the time had been logged out all at once. Concurrency across the grid fell almost instantly to zero. We decided to disable logins grid-wide and restore network connectivity to Second Life as quickly as we could.

At this point we had a quick meeting with the various stakeholders, and agreed that since we were down already, the right thing to do was to press on and figure out what had happened so that we could avoid it happening again. We got hold of a few other folks to communicate with Residents via the status blog, social media, and forums, and I kept up with the internal communication within the Lab while the engineers debugged the issue.

This is why logins were disabled for several hours. We were determined to figure out what had happened and fix the issue, because we very much did not want it to happen again. We’ve engineered our network so that any single piece can fail without loss of connectivity, so we needed to dig into this failure to understand exactly what went wrong.

After almost four very intense hours of debugging, the team figured out what went wrong, worked around it, and finished up the migration to the new network gear. We reopened logins, monitored the grid as Residents returned, and went home in the middle of the night completely wiped out.

We’ve spent the rest of this week working with the manufacturer of our network gear to correct the problem, and doing lots of testing. We’ve been able to replicate the conditions that led to the network outage, and tested our equipment to make sure it won’t happen again. (Even they were perplexed at first! It was a very tricky issue.) As of the middle of the week, we’ve been able to run a full set of tests, including deliberately disconnecting and shutting down a router, with no impact to the grid at all.

Second Life is a really complex distributed system, and it never fails to surprise me. This week was certainly no exception.

I also want to answer a question that’s been asked several times on the forums and other places this week. That question is “why didn’t LL tell us exactly when this maintenance was going to happen?”

As I’ve had to blog about several times in the past, the sad reality is that there are people out there who would use that information with ill intent. For example, we’re usually really good at handling DDoSes, but doing so requires our full capacity to be online. A DDoS hitting at the same time our network maintenance was in progress would have made the downtime much longer than it already was.

We always want what’s best for Second Life. We love SL, too. We have to make careful decisions, even if it comes at the expense of being vague at times. I wish this wasn’t the case, but sadly, it very much is.

We’re really sorry about this week’s downtime. We did everything we possibly could to avoid it, and yet it still happened. I feel terrible about that.

The week was pretty awful, but it does have a great silver lining. Second Life is now up and running with new core routers that are much more powerful than anything we’ve had before, and we’ve had a chance to do a lot of failure testing. It’s been a rough week, but the grid is in better shape as a result.

Thanks for your patience as we recovered from this unexpected event. It’s been really encouraging to see the support some folks have been giving us since the outage. Thank you, you’ve really helped cheer a lot of us up. ❤️
 

Until the next time,
April Linden
Second Life Operations Manager

 

Linden Lab

Over seven years ago, I posted my first set of Viewer release notes to the Second Life Wiki, where we have kept all of our release notes to this day. Over the years, we’ve made some minor tweaks to their appearance and how we generate them, but for the most part they have remained the same.

While the wiki has served us well for release notes, it’s time to improve their readability and browsability. We’ve been putting the finishing touches on a new website dedicated solely to release notes, with a new look and feel that makes individual pages easier to find and easier to read - take a look!

Previous release notes will still be archived on the wiki; however, new release notes will be published on the new website.

Our goal is to improve overall accessibility and make release notes easier to browse and review. I’m personally excited about the dedicated new website, and I hope you are too!

Steven Linden  

Linden Lab

Due to continued changes in the Facebook API, as of today the Second Life viewer will no longer be able to support Facebook Connect for sharing your inworld photos and posts.  We apologize for this inconvenience and will be removing the UI from the viewer shortly. We will, of course, be happy to see your SL posts on Facebook going forward, and you can always say hello and check out what’s happening on our official page: https://www.facebook.com/secondlife
