Second Life Puppetry


You are about to reply to a thread that has been inactive for 218 days.

Please take a moment to consider if this thread is worth bumping.

6 hours ago, Qie Niangao said:

I'm very sure even the irrelevant Rosedale knows perfectly well what the primary use case for this stuff will be.

No. He wants to have ZOOM calls but with avatars.

Really.

 

There will never be a high-level meeting inside LL to discuss the need to put extra time into interpersonal interactions. Why would avatars touch?!

Edited by Coffee Pancake

It's really interesting, the negative reaction this particular forum is having to this announcement vs. other congregations of users and their approval - like the Reddit announcement's comments and the genuine excitement in Discord (in the furry mod community, for the avatar in the preview video) about the possibility of using your avatar like a VTuber's to stream.

Edited by Blaise Glendevon

12 minutes ago, Blaise Glendevon said:

It's really interesting, the negative reaction this particular forum is having to this announcement vs. other congregations of users and their approval - like the Reddit announcement's comments and the genuine excitement in Discord (in the furry mod community, for the avatar in the preview video) about the possibility of using your avatar like a VTuber's to stream.

I've been looking at the same. It really is a bit of a wonder. I suspect some are just not looking at the possibilities that puppeteering will provide. Maybe there's too much focus on adult activities, which I really don't think will be a big thing for puppeteering - but even if it is, SL is a thing exactly because of that. Another benefit I noticed this morning while sitting in various seats: my slightly non-default avatar tends to have arms cutting through breasts or intersecting with parts of the chair. To me, this new feature will allow me to reposition those so that doesn't happen. Win-win.


I'm not reading the reaction here as exactly "negative", @Blaise Glendevon, perhaps "cynical" would be more accurate. There's lots of positives to this if it can be done right, it's just that many of us are rather used to a SL feature implementation being, well, not right.

In this case, though, I - personally - am moderating my cynicism. The approach seems a little different this time. I'm looking forward to what develops.


39 minutes ago, Da5id Weatherwax said:

I'm not reading the reaction here as exactly "negative", @Blaise Glendevon, perhaps "cynical" would be more accurate. There's lots of positives to this if it can be done right, it's just that many of us are rather used to a SL feature implementation being, well, not right.

In this case, though, I - personally - am moderating my cynicism. The approach seems a little different this time. I'm looking forward to what develops.

I hope I am "wary" rather than "cynical." I think that if this is done well, it might be a net positive. If it isn't, it's just going to be another metric that doesn't make SL look good next to things like VRChat, and will actually have a net negative impact.

I also still don't, personally, see a whole lot of use for this for most SL residents. It seems pretty niche to me, still. Personally, I'd rather have an in-viewer poser (à la Black Dragon) - but I recognize that the needs of SL photographers are also pretty niche.

It will be interesting to see if this brings in a new type of user. I am, again, a little wary about how that might impact SL's overall culture, but at worst it would likely lead to the kind of fragmentation/disconnection that currently exists between those who favour voice and those who hate it.


8 hours ago, Theresa Tennyson said:

The SL skeleton already has collision bones - they're what allow fitted mesh to work. Animations didn't recognize them before, but the information is already part of your avatar.

Recognizing collisions is step one. Doing something visually acceptable to address them is step two.

The LL announcement Nalates linked doesn't get into much detail, so I don't know if/how collisions will be handled. It does seem that the focus is on real-time mocap, which I am not personally interested in. It might also be that collisions are to be handled by the external mocap system, which could mean that body sliders do not affect animations (and therefore can't be used to prevent collisions), or would have to be exported from SL to inform the external mocap system.

We'll see how it goes when we see how it goes.


1 hour ago, Blaise Glendevon said:

It's really interesting, the negative reaction this particular forum is having to this announcement vs. other congregations of users and their approval - like the Reddit announcement's comments and the genuine excitement in Discord (in the furry mod community, for the avatar in the preview video) about the possibility of using your avatar like a VTuber's to stream.

This is interesting. Are there particular reasons why the furry community might be especially interested in and/or excited by this feature? Are they generally more engaged in Vtubing?


I forgot the main point I wanted to make in my previous post. I very much look forward to handling of collisions, even if only in the animation of individual avatars. I can't tell whether this is something that will be handled within SL, or if the expectation is that the external capture systems will clean up collisions before sending the data to the system. I'd love to see (if it's even remotely possible) the ability to massage existing poses and animations in a way that eliminates the "voodoo surgery"* we all do on ourselves.

I've less hope of LL addressing the surgery we do on each other.

 

 

*the removal of internal organs, simply by reaching a hand into the body cavity

Edited by Madelaine McMasters

36 minutes ago, Scylla Rhiadra said:

This is interesting. Are there particular reasons why the furry community might be especially interested in and/or excited by this feature? Are they generally more engaged in Vtubing?

I think there's an overlap between the furry community and the gamer community in a larger proportion than in the general SL userbase. But the main consideration is the price of a custom VTuber avatar and equipment - it's a huge outlay for someone who can't be sure their particular personality or gaming skill set will take off as a social media personality. Being able to animate your Second Life avatar in a comparable way for a much lower price would be a good way to get your foot in the door. And while Twitch still bans Second Life, YouTube's live stream capabilities have gotten huge improvements in the last two years.


4 hours ago, Blaise Glendevon said:

It's really interesting, the negative reaction this particular forum is having to this announcement vs. other congregations of users and their approval - like the Reddit announcement's comments and the genuine excitement in Discord (in the furry mod community, for the avatar in the preview video) about the possibility of using your avatar like a VTuber's to stream.

I think that's down to a difference in expectations. What we have seen so far is a world away from where the industry is right now.

 

 


11 hours ago, Madelaine McMasters said:

collision bones

I don't think "collision bones" actually collide in SL. To collide, you need some kind of model wrapped around the collision bone, to say how big the "meat" attached to it is. Those models collide. That's what I was discussing earlier - a minimal definition of limb dimensions so the IK system can prevent body parts going through each other. VRChat has something like that.


I'm beginning to get excited about this project, it has lots of potential. Face tracking and motion capture can really ramp up presence and immersion without all the VR stuff getting in the way. Waving at someone and having them smile back at you is one of the best feelings I've ever had in a virtual world.

Chaser Zacks spent an age helping me fix my install and get it all working. I made copious notes and have tried to make a bit of an idiot's guide to getting it installed and working.

https://judasshuffle.blogspot.com/2022/09/second-life-has-just-put-out-test.html

If I have missed anything out, let me know.


9 hours ago, animats said:

I don't think "collision bones" actually collide in SL. To collide, you need some kind of model wrapped around the collision bone, to say how big the "meat" attached to it is. Those models collide. That's what I was discussing earlier - a minimal definition of limb dimensions so the IK system can prevent body parts going through each other. VRChat has something like that.

I don't think SL has the concept of "collision bones" as I've described them (a definition I learned 30 years ago when in grad school, and which might no longer be in use). As I learned it, "bones" are lines (coded by two end points), representing the simplest description of a solid link in the animation armature/skeleton. In recognition of the physical limits of joints in a real body, the virtual joints have some limits on their motion. That prevents obviously impossible bone positions (backwards knee bends) but doesn't prevent collisions. I think this is the current state of the SL avatar skeleton. It's a collection of lines, connected by hinges having motion limits. (I know there are limits in the various posing systems we use. I don't know if there are limits in the SL animation system that reads the pose data.)
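The joint-limit scheme just described is simple enough to sketch. This is a hypothetical illustration, not SL's actual data model; the knee flexion range is an assumed value.

```python
import math

# Hypothetical sketch of joint motion limits: a hinge joint's angle is
# clamped into an allowed range, which rules out impossible poses (a
# backwards knee bend) but does nothing to stop one limb passing
# through another. The flexion range below is an assumed value.

KNEE_FLEXION = (0.0, math.radians(150.0))  # assumed limits, in radians

def clamp_joint(angle, limits):
    """Clamp a joint angle into its allowed motion range."""
    lo, hi = limits
    return max(lo, min(hi, angle))

# A pose asking the knee to bend 20 degrees backwards gets clamped to 0.
print(clamp_joint(math.radians(-20.0), KNEE_FLEXION))  # -> 0.0
```

Note that this is purely per-joint: every joint can be within its limits while the arms still pass straight through the torso.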

The first step in collision prevention is to give the bones some crude, rigid shape. Those were (and maybe still are) called "collision bones". They're approximations, but require only modest computation to prevent the most egregious self-clipping. A physical realization of a collision bone system would be the wooden 3-D mannequin you showed earlier in the thread.
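A minimal sketch of that "collision bone" idea: approximate each bone as a capsule (a line segment plus a radius), and flag a collision when the shortest distance between two segments is less than the sum of the radii. All positions and radii here are invented for illustration; this is not SL's avatar data.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def segment_distance(p1, q1, p2, q2):
    """Shortest distance between 3D segments p1-q1 and p2-q2
    (the classic closest-point algorithm from Ericson's
    Real-Time Collision Detection)."""
    clamp = lambda v: max(0.0, min(1.0, v))
    d1 = tuple(b - a for a, b in zip(p1, q1))
    d2 = tuple(b - a for a, b in zip(p2, q2))
    r = tuple(a - b for a, b in zip(p1, p2))
    a, e, f = _dot(d1, d1), _dot(d2, d2), _dot(d2, r)
    if a < 1e-12 and e < 1e-12:      # both segments degenerate to points
        s = t = 0.0
    elif a < 1e-12:                  # first segment is a point
        s, t = 0.0, clamp(f / e)
    else:
        c = _dot(d1, r)
        if e < 1e-12:                # second segment is a point
            s, t = clamp(-c / a), 0.0
        else:
            b = _dot(d1, d2)
            denom = a * e - b * b    # zero when segments are parallel
            s = clamp((b * f - c * e) / denom) if denom > 1e-12 else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                s, t = clamp(-c / a), 0.0
            elif t > 1.0:
                s, t = clamp((b - c) / a), 1.0
    c1 = tuple(p + s * d for p, d in zip(p1, d1))
    c2 = tuple(p + t * d for p, d in zip(p2, d2))
    return math.dist(c1, c2)

def capsules_collide(bone_a, bone_b):
    """bone = (start, end, radius); True when the two capsules overlap."""
    (p1, q1, r1), (p2, q2, r2) = bone_a, bone_b
    return segment_distance(p1, q1, p2, q2) < r1 + r2

# A horizontal "forearm" passing 2 cm from a vertical "torso" capsule:
forearm = ((0.0, 0.0, 0.0), (0.3, 0.0, 0.0), 0.05)
torso = ((0.1, 0.02, -0.3), (0.1, 0.02, 0.3), 0.12)
print(capsules_collide(forearm, torso))  # -> True
```

A handful of capsules per avatar keeps the cost per frame tiny, which is exactly why this level of approximation was attractive for real-time work.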

The next step above that was "collision mesh/skin". Instead of collision bones, the animation system uses a simple rigged mesh wrapped around the skeleton, to more accurately reflect the visible shape of the avatar and the soft tissue deformations that occur as a result of joint movement, but ignoring deformation due to collision. Such systems do a better job of preventing clipping, but don't model the skin deformations that actually occur during a collision, such as dimpling of fleshy areas when poked by a finger or a rigid object.

The next step above that is to compute deformations of the collision mesh by contact with other portions of the mesh (or other meshes). At this point, computation complexity soars, but soft tissues will dimple under "pressure" from a colliding object.

As the realism of the collision modeling system improves, it becomes increasingly important to incorporate avatar-specific modifications, such as the full character skin mesh and the geometry of clothing and attachments. Ultimately, collision systems understand the behavior of skin, clothing, hair, worn objects, and the entirety of the world the avatar moves through (solid objects, flexible objects, clouds of smoke, ad infinitum).

I'm not familiar with VRChat, but you've described something that's at least at the "collision bone" level.

I think SL would benefit from a system as simple as "collision bones". Given the wide variety of even just human avatar shapes, the system might want to query at least the shape slider settings to adjust the geometry of collision bones. The further an avatar deviates from human proportions, the worse this will work. I'm more excited about puppeteering bringing some form of collision handling than I am about live mocap control of avatars. We'll all benefit from some collision handling.
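Querying the shape sliders to size the collision geometry could look something like this. The slider names and the metre ranges below are invented for illustration; SL exposes no such table today, so this is only a sketch of the idea.

```python
# Hypothetical mapping from 0-100 shape slider values to collision
# capsule radii. The slider names and linear metre ranges are invented;
# a real system would derive them from the avatar's actual geometry.

SLIDER_TO_RADIUS = {  # slider name -> (radius at 0, radius at 100), metres
    "Torso Muscles": (0.14, 0.26),
    "Arm Thickness": (0.04, 0.09),
}

def capsule_radius(slider_name, slider_value):
    """Linearly interpolate a capsule radius from a 0-100 slider value."""
    r_min, r_max = SLIDER_TO_RADIUS[slider_name]
    return r_min + (r_max - r_min) * (slider_value / 100.0)

print(round(capsule_radius("Arm Thickness", 50), 3))  # -> 0.065
```

A linear fit is the crudest possible choice; it degrades gracefully for near-human shapes and, as noted above, gets worse the further an avatar deviates from human proportions.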

 

Edited by Madelaine McMasters
Grammar, it's not just for dinner anymore.

This is so confused.

We all know people call OpenSimulator, an open-source re-implementation of much of what makes Second Life work, "OpenSim".  Many of us know that OpenSim is a product of The National Center for Simulation in Rehabilitation Research (NCSRR).

OpenSim:  https://opensim.stanford.edu/

OpenSimulator:  http://opensimulator.org/wiki/Main_Page

Second Life:  https://secondlife.com/

This conversation had me wondering if OpenSim could be used to make models for Second Life and OpenSimulator.  So, I started searching and found this jewel of a page.

https://www.softwaretestinghelp.com/opensim-tutorial/

I cannot stop chuckling about it.  It's like somebody tossed the three into a baking bag, shook it up and told an AI to write an article about it.

Edited by Ardy Lay

1 hour ago, Ardy Lay said:

This is so confused.

We all know people call OpenSimulator, an open-source re-implementation of much of what makes Second Life work, "OpenSim".  Many of us know that OpenSim is a product of The National Center for Simulation in Rehabilitation Research (NCSRR).

OpenSim:  https://opensim.stanford.edu/

OpenSimulator:  http://opensimulator.org/wiki/Main_Page

Second Life:  https://secondlife.com/

This conversation had me wondering if OpenSim could be used to make models for Second Life and OpenSimulator.  So, I started searching and found this jewel of a page.

https://www.softwaretestinghelp.com/opensim-tutorial/

I cannot stop chuckling about it.  It's like somebody tossed the three into a baking bag, shook it up and told an AI to write an article about it.

Would be nice. Looks easier than Blender.



8 hours ago, Madelaine McMasters said:

I think SL would benefit from a system as simple as "collision bones". Given the wide variety of even just human avatar shapes, the system might want to query at least the shape slider settings to adjust the geometry of collision bones. The further an avatar deviates from human proportions, the worse this will work. I'm more excited about puppeteering bringing some form of collision handling than I am about live mocap control of avatars. We'll all benefit from some collision handling.

 

 

They're already there. They already respond to the sliders. Most of them have always been there and a few new ones were added when "fitted mesh" came out. In order to see your collision volumes go to "Develop" - "Avatar" - "Show Collision Skeleton."


I'm not a techy kind of person and I know nothing about collision bones, but I think this sounds like fun!  I love my VR headset.  I will most likely give it a try when it's a bit further along in development.  I hope my computer can handle it.  Would the movements be finely tuned enough for sign language?

I bet some creative people could make some inworld games.  I dance in my chair now when I'm listening to music. Oh, fudge... ear worm. "I get up, I get down and I'm jumping around, and the rumpus and ruckus is comfortable now..."  I might join a chair aerobics class if someone had one - just plant myself on a stationary bike and do the arm motions and chat with others.  I hope it doesn't pick up every smirk and eye roll, though.  

I would like to see a speech to text and text to speech option added even if it sounds a bit monotone.


6 hours ago, Theresa Tennyson said:

They're already there. They already respond to the sliders. Most of them have always been there and a few new ones were added when "fitted mesh" came out. In order to see your collision volumes go to "Develop" - "Avatar" - "Show Collision Skeleton."

Do they actually work, overriding animations that would have an avatar intersecting itself?


11 hours ago, Ardy Lay said:

This conversation had me wondering if OpenSim could be used to make models for Second Life and OpenSimulator.  So, I started searching and found this jewel of a page.

https://www.softwaretestinghelp.com/opensim-tutorial/

I cannot stop chuckling about it.  It's like somebody tossed the three into a baking bag, shook it up and told an AI to write an article about it.

That's how people sometimes monetize - or used to monetize - the internet. Write articles about basically anything, and sell them as content for generic websites.


3 hours ago, Madelaine McMasters said:

Do they actually work, overriding animations that would have an avatar intersecting itself?

No; animations just whip around bones willy-nilly and I think all collisions are calculated from your "bounding box." Probably for performance reasons when the basic structure of SL was set up. However, there is something that could be detected if the work necessary to detect things is done. It was either a rare example of future-proofing in Second Life or just pure dumb luck.

 


Thinking about this development some more, I think a valuable addition would be some way to expose to scripts the current position of the IK targets (the REAL position - anims plus puppeteering combined), relative to the avatar center. I make guitars (replicating my RL instruments for SL, and doing the same for other performers from time to time just because I can) and I'd love to have some way to make 'em "attach here and orient the guitar neck towards this hand, wherever it is", rather than making a playing anim and then laboriously tweaking that and the instrument's attachment rotation so that they match up for an individual avatar. Or dusting off my old spinning wheel project and having a way for the yarn to follow the av's hand without them needing a separate invisible attachment there to target particles on.

Given the client-side nature of anims I'm not sure how that could work, but the advent of this project makes me at least hope it would be possible down the road.
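If script access to live IK-target positions ever happened, the "orient the neck towards this hand" step would just be a look-at calculation. Here is a Python sketch of the geometry; the function, the coordinate frame, and the numbers are all invented for illustration (in LSL this would presumably be an llRotBetween-style rotation instead).

```python
import math

# Hypothetical look-at calculation for the guitar example: given the
# attachment point and the live hand position (both relative to the
# avatar centre), return the yaw and pitch that aim the neck at the
# hand. Axis conventions here are invented; nothing is a real SL API.

def aim_at(attach_pos, hand_pos):
    """Yaw/pitch (radians) pointing from attach_pos towards hand_pos."""
    dx, dy, dz = (h - a for h, a in zip(hand_pos, attach_pos))
    yaw = math.atan2(dy, dx)                    # heading about the vertical axis
    pitch = math.atan2(dz, math.hypot(dx, dy))  # elevation towards the hand
    return yaw, pitch

# Hand half a metre ahead of the attachment point and slightly above it:
yaw, pitch = aim_at((0.0, 0.0, 0.0), (0.5, 0.0, 0.1))
print(round(math.degrees(pitch), 1))  # -> 11.3
```

Re-run each frame as the puppeteered hand moves and the neck tracks the hand, regardless of the individual avatar's proportions.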


Uses for this tracking technology.

Posing for photos, for anyone tired of searching for that perfect pose.

Performers will love it - think Shakespeare in the park.

Imagine if you could record the performance and then play it back in world, but with the actual avatars replaying their movements and speech. The video of the future.

Going to a club in SL to dance would become a workout.

It just seems like fun; people wouldn't be compelled to use it, but bringing some of your personality into your avatar whilst retaining your privacy has an appeal.

Having tried this out for a few days I can see there's a lot of work still to be done to it, and it throws up many questions about what will be added into it - full face and finger tracking etc., and potential performance hits with many people using this at once - but I suppose that's what they are testing.

