
Second Life Puppetry



Recommended Posts

12 minutes ago, Scylla Rhiadra said:

What kinds of situation are SL residents likely to find themselves in, where this will be a useful or worthwhile extension of our current abilities?

People who want an audience. Vtubers. DJs. Strawberry Linden.


27 minutes ago, animats said:

People who want an audience. Vtubers. DJs. Strawberry Linden.

Ok, so, basically people who are making videos of themselves talking . . . and DJs.

Those are pretty niche, surely? And, maybe I'm not going to the right clubs, but I'm pretty sure that the people at the sets I'm going to are not visually focusing on the DJ. In fact, probably half the time, the DJ is dancing on the floor.

There's got to be more to this? I suppose one can argue that VTubers showcasing the technology will help attract new users . . . who then will probably not find themselves in contexts where they themselves are going to use it.

Honestly, I'm not "against" this. I don't see any reason to be, except insofar as it diverts resources away from other things that will have a much broader impact.

But I'm also not seeing a whole lot of point to it.

 


8 hours ago, Coffee Pancake said:

This needs to be as good and as polished as VRChat, even going as far as being able to make use of motion trackers in addition to a camera.

LL need to look at https://freemocap.org/

 

If this is not as good as VRC, there is no point as we're just going to get dunked on for trying.

I was just recently looking into the VR Viewer code and thinking of implementing it... with the twist that you can actually move your head and hands... now with THIS I could make these sync to others. This ACTUALLY gives me a reason to do this. Niran experimentation time incoming.

8 hours ago, Scylla Rhiadra said:

Darn. For a moment, I thought this was about adding a poser to the viewer.

Work on this is somewhat predictable, given the return of Rosedale to a role here. I'd think it's going to be a very niche affordance, assuming it ever makes it into the main viewer.

Fret not, as far as I understand I can absolutely abuse this system to sync the Poser in exactly the way I always wanted them to add to begin with. Now they are LITERALLY giving me what I want, except not with the goal of supporting the Poser, but I can make the Poser use this system.

7 hours ago, Paul Hexem said:

Theoretically it could allow us to write scripts to do exactly that with it, potentially without clunky "move each bone one at a time" HUDs.

Theoretically I can not only abuse this for my Poser, but I can also abuse this for LIVE animation sync. Guess it IS time after all to implement the animation editor I've always wanted to make.

 

If I figure out how to abuse this server-side sync for my stuff, expect me to vanish for a while; I'll be sitting in my basement creating dastardly evil things while laughing maniacally.


28 minutes ago, NiranV Dean said:

I was just recently looking into the VR Viewer code and thinking of implementing it... with the twist that you can actually move your head and hands... now with THIS I could make these sync to others. This ACTUALLY gives me a reason to do this. Niran experimentation time incoming.

Fret not, as far as I understand I can absolutely abuse this system to sync the Poser in exactly the way I always wanted them to add to begin with. Now they are LITERALLY giving me what I want, except not with the goal of supporting the Poser, but I can make the Poser use this system.

Theoretically I can not only abuse this for my Poser, but I can also abuse this for LIVE animation sync. Guess it IS time after all to implement the animation editor I've always wanted to make.

 

If I figure out how to abuse this server-side sync for my stuff, expect me to vanish for a while; I'll be sitting in my basement creating dastardly evil things while laughing maniacally.

I'm assuming you mean "abuse" in only the nicest possible way, right?

🙂


I just realized this reminds me of the Xbox Kinect. It uses multiple cameras and sensors which can detect and track two separate people. There were many games that could be played using body tracking such as bowling, tennis, volleyball, and all sorts of non-sports games.  


12 minutes ago, Bree Giffen said:

I just realized this reminds me of the Xbox Kinect. It uses multiple cameras and sensors which can detect and track two separate people. There were many games that could be played using body tracking such as bowling, tennis, volleyball, and all sorts of non-sports games.  

The Kinect, especially version 2, with a LIDAR, was a very clever device. It was a bit too early. It wasn't that useful for non-VR games, and was discontinued. Some people still use it with VRchat.

The target market for tracking technology is people who want to be looked at. Watch some VRchat videos. It's all about me, Me, ME! VRchat is for extroverts. In VRchat, extroverts look good. In SL, all we have are overacted gestures, which are merely annoying. VRchat looks alive, where SL looks dead.

This matters. If this makes SL fun for more people, it's a huge win. As I've commented in the past, there are technical solutions available for most of SL's annoyances, but what's really needed is to make it more fun. Yes, some existing SL users won't like an influx of extroverts. Don't worry. SL is so huge and has so much unpopulated space that it won't be a problem.

This is going to be an interesting ride. I see lots of technical and social problems ahead, but they are worth overcoming.


2 hours ago, Scylla Rhiadra said:

I'm assuming you mean "abuse" in only the nicest possible way, right?

🙂

What? No. If I figure this ***** out you can be sure that I'll smack LL on the forehead a couple of times. This is essentially what I've wanted for the Poser for 5 years now, and what's amazing is that everyone can see it, and it looks like you can use it for ANY bone in the skeleton. This means the Poser is a GO. Live posing, here we come! You can be sure that I'll not just abuse this system for the Poser. I'll revisit my headlook improvements; with this I can make them not only play on top of other animations for yourself but also be visible to everyone, so avatars are finally going to look more alive. I can extend this to mouselook so you can see your head movement looking around even while other animations are playing. The new IK system might also be useful to make something better that also plays on top of the current animations. Heck, we could use this to go all the way and make our hands PHYSICAL so they can't click through things: with surface raycasting we can check if something is in the way, have the hand stop on the surface, then sync it to others... my head explodes just thinking of all the ways I could use this...


11 hours ago, Nalates Urriah said:

Today at the Server and Scripting User Group, Rider Linden announced a new feature for SL: Puppetry, built into the viewer. Just before the meeting, the official announcement popped up on the SL Blog: SL Puppetry

For now this is an alpha stage development, meaning you have to download the project viewer and travel to the Preview Grid to see it work.

I am not going to explain or go into detail, as the blog and other sources will be describing this in more detail. I do have a transcript of the meeting chat. If enough people ask for it, I'll clean it up and post it.

There will be a group that includes residents to help develop the feature further. Check out the blog article for more details.

Well, great... now everyone can see how bad of a dancer I really am. Oh, but the possibilities... for... you {wiggles eyebrows} know...

I do wish sometimes I could just make my avatar do something whilst using another animation. Are these saved in a BVH file, or just spontaneously applied and then forgotten?


This puppetry software will need features like, "disable just my left arm / hand", and let me use an animation for that instead. 

ETA: for smokers, people who just can't put down their phone, people with a paralyzed / broken arm, etc. 


Of course pr0n is always at the leading edge of tech, so this—in SL!—won't be an exception.

There's nothing new under the sun, so VRChat and other platforms have surely taken it in every possible direction by now, but what comes immediately to mind is "mirror" IK, where the motion frame of reference is reversed, in two possible ways:

  1. Targets on the puppeteer's body are transposed to an in-world participant's avatar. 
    ("Reaching out, touching me, touching you" thank you Neil Diamond)
  2. An in-world participant's avatar is controlled by the puppeteer, rather than their own avatar.
    Could be just RLV kink, especially if it's one-way—or, if mutually swapped control, potentially kind of profound
5 hours ago, animats said:
  • Viewer->Python->TCP->sim server->TCP->Python->Viewer may introduce too much latency and jitter. We'll have to see. SL, unfortunately, outsources voice to Vivox, so this stuff can't go on the voice data path, which would be convenient. The voice data path goes to the same people who need to see the animations, and voice and gestures need to be in sync. Measurements will be needed. Useful first test: log in from two machines side by side, and puppeteer on one while watching the other. See how bad the lag is.

What I find most interesting about the part I emphasized is how voice can be limited to specific chat participants, and how that could help reduce lag in the puppeteered animation stream, compared to the standard "everybody gets to watch" approach (that I assume is kind of locked-in to the current design). Some privacy advantages there, too, given my apparent pr0n puppetry predilection.



So I spent the evening getting the puppetry code merged over to a Firestorm fork just so I could run it on Linux; it was easier than trying to fix LL's viewer to compile on there...

Got it to work without having to change any of the Python code, which was a shock since I don't think they tested this on anything other than Windows. I'm thinking that part is all down to OpenCV "just working", which it did quite well. The previewer of the OpenCV output was working well even with my terrible lighting and noisy background. Some caveats, though: the plug-in scripts need executable perms, and none of this works with a conda env. Might look into fixing those in the future.
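For anyone curious what that previewer boils down to, here's a rough, untested sketch of the kind of OpenCV capture loop involved. This is just my own illustration, not the actual LL plug-in code; it assumes opencv-python is installed (outside any conda env) and a webcam at index 0.

```python
# Minimal webcam preview loop, roughly what the plug-in's previewer does.
# Sketch only -- not the LL puppetry plug-in itself.
import cv2

cap = cv2.VideoCapture(0)                  # default camera
if not cap.isOpened():
    raise RuntimeError("No camera found at index 0")

while True:
    ok, frame = cap.read()                 # grab one BGR frame
    if not ok:
        break
    frame = cv2.flip(frame, 1)             # mirror so the preview feels natural
    cv2.imshow("puppetry preview (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```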

What wasn't working too well was the actual puppetry part. Was able to test with somebody I ran into on the test region who was running the project viewer on Windows; we were both having similar cases of virtual boneitis. At least my merge doesn't seem to have broken anything. Likely a "need more info on how this should be set up" sort of thing.

This is a pretty good start though. :3

Hopefully we can get Leap Motion working on this at some point later on; I've known some people over the years who have wanted better ASL support.

https://cdn.discordapp.com/attachments/319806009814155264/1014470334084304896/2022-08-31_02-38-48.mp4

https://github.com/Kadah/phoenix-firestorm_puppertry_exp

Edit: Camera tracking is cool and all, but the really neat part is that the inputs on this aren't specific to camera tracking and are pretty much wide open to anything somebody wants to code.


29 minutes ago, Prokofy Neva said:

Puppetry is the result of the LL intelligentsia being exiled from SL, then returning. 

I'm waiting to hear back from @SecondLife as to whether surgeons could do virtual surgery with the new puppetry.

If I understand your and @Kadah Coba's posts, that would be surgery for "virtual boneitis", correct?


8 hours ago, animats said:

The Kinect, especially version 2, with a LIDAR, was a very clever device. It was a bit too early. It wasn't that useful for non-VR games, and was discontinued. Some people still use it with VRchat.

The target market for tracking technology is people who want to be looked at. Watch some VRchat videos. It's all about me, Me, ME! VRchat is for extroverts. In VRchat, extroverts look good. In SL, all we have are overacted gestures, which are merely annoying. VRchat looks alive, where SL looks dead.

This matters. If this makes SL fun for more people, it's a huge win. As I've commented in the past, there are technical solutions available for most of SL's annoyances, but what's really needed is to make it more fun. Yes, some existing SL users won't like an influx of extroverts. Don't worry. SL is so huge and has so much unpopulated space that it won't be a problem.

This is going to be an interesting ride. I see lots of technical and social problems ahead, but they are worth overcoming.

That's a good point about tracking being for extroverts. I tried going into something called Meta venues, which is like going to a movie or a concert. It has a lobby where people gather and decide where to go, and is essentially a place full of avatars talking in a room. The moment I went into that place, I experienced intense social anxiety and went full introvert. It's what I'd normally feel in the same real-life situation. Even with the cartoony legless avatars, seeing them move and talk in VR makes it very immersive.


11 hours ago, Scylla Rhiadra said:

Very genuine question, because I honestly don't know the answer.

What are the contexts in which this application is likely to be used? How is it used in VRC, for instance?

What kinds of situation are SL residents likely to find themselves in, where this will be a useful or worthwhile extension of our current abilities?

Hi Scylla, were you responding to Rowan's question copied below? It is hard for me to keep track sometimes, due to low comprehension (my usual and quite serious excuse).

17 hours ago, Rowan Amore said:

Just can't wait to see all the people who will think it's funny to pick their nose so we can all watch! Good times ahead! What could go wrong?!

 


6 hours ago, Kadah Coba said:

What wasn't working too well was the actual puppetry part.

Great that the basic machinery is working, especially on Linux.

Getting puppetry calibrated so that it looks good is hard. VRchat has solved this, though. They got it right in what they call "IK 2.0". There are two levels of setup. First is getting the tracker calibrated so that the tracking program is accurately following what your face and body are doing. You do that once per user. Second is getting the avatar to follow the tracking info without the avatar getting messed up. You do that once per avatar. Those are separate problems.
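To make that two-step setup concrete, here's a toy sketch (my own guess at the general shape of it, not VRChat's IK 2.0 or anything LL has published): step one measures the user once from a calibration pose, step two turns that into a per-avatar scale factor that the IK layer would apply to every tracked offset.

```python
# Toy calibration/retargeting sketch. All numbers and names are made up
# for illustration; a real system tracks many more measurements than this.
def calibrate_user(shoulder_l, shoulder_r):
    """Once per user: real shoulder width, measured from one T-pose frame."""
    dx = shoulder_r[0] - shoulder_l[0]
    dy = shoulder_r[1] - shoulder_l[1]
    dz = shoulder_r[2] - shoulder_l[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5

def retarget_factor(user_shoulder_width, avatar_shoulder_width):
    """Once per avatar: ratio used to scale tracked offsets onto the rig."""
    return avatar_shoulder_width / max(user_shoulder_width, 1e-6)

# Example: tracked shoulder positions in metres -> scale for a 0.38 m rig.
user_width = calibrate_user((-0.21, 1.45, 0.0), (0.20, 1.45, 0.0))
scale = retarget_factor(user_width, avatar_shoulder_width=0.38)
```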

Here's an overview of VRChat's IK 2.0, which went live earlier this year.

This is how it works when it's done right.

More on the technology later. For now, it's enough to note that good solutions to this exist. If you get this wrong, movement looks creepy. Elbows go through the body, stuff like that. Breaks immersion.

 

 


20 minutes ago, animats said:

Elbows go through the body, stuff like that. Breaks immersion.

SL code does not, at the moment, have a way of measuring the "outer limit" of mesh, does it? By that I mean we don't have any form of collision detection, do we? At least as regards things outside of the basic avatar bones?

If true, how can they prevent an elbow clipping your body? And account for different avatar shapes? If one has a more "curvy" body, for instance, the same animation that works fine on a slender avatar is going to produce clipping, would it not?

I am sure there is a great deal here I'm missing, but I don't see how this works without clipping, without rejigging everything with the addition of collision detection.


13 hours ago, Scylla Rhiadra said:

I'm assuming you mean "abuse" in only the nicest possible way, right?

As in the way we "torture" prims!

13 hours ago, Scylla Rhiadra said:

Ok, so, basically people who are making videos of themselves talking . . . and DJs.

Those are pretty niche, surely? And, maybe I'm not going to the right clubs, but I'm pretty sure that the people at the sets I'm going to are not visually focusing on the DJ. In fact, probably half the time, the DJ is dancing on the floor.

There's got to be more to this? I suppose one can argue that VTubers showcasing the technology will help attract new users . . . who then will probably not find themselves in contexts where they themselves are going to use it.

Honestly, I'm not "against" this. I don't see any reason to be, except insofar as it diverts resources away from other things that will have a much broader impact.

But I'm also not seeing a whole lot of point to it.

I don't imagine many people will want to actually perform the RL motions they wish to see performed by their SL avatars. If I "waves to Scylla!" in public chat, most everybody sees that. If my avatar waves at you, almost nobody sees it. Conversely, if we all use mocap to puppeteer, SL will look like 2008, with people typing everywhere.

That said, though we don't currently focus on the DJ (or other entertainers), I can imagine a future in which we will, because their motions are part of the entertainment. The extreme version of this would be live solo dance performances, in which puppeteering is the enabling technology. I love the idea of SL supporting displays of physical intelligence.

23 minutes ago, Scylla Rhiadra said:

SL code does not, at the moment, have a way of measuring the "outer limit" of mesh, does it? By that I mean we don't have any form of collision detection, do we? At least as regards things outside of the basic avatar bones?

If true, how can they prevent an elbow clipping your body? And account for different avatar shapes? If one has a more "curvy" body, for instance, the same animation that works fine on a slender avatar is going to produce clipping, would it not?

I am sure there is a great deal here I'm missing, but I don't see how this works without clipping, without rejigging everything with the addition of collision detection.

We don't need perfection, just something less obviously broken than the current animation system. An avatar model should be able to do some rudimentary collision avoidance by reading the sliders and computing approximate limits for bone motion, with some overlap allowance to crudely fake deformation of RL bodies. Collisions with clothing would be more difficult. I expect that my little devil will continue to stab herself with her trident and burn herself with her torch.

I'd love SL to reach the holy grail of full collision detection between avatars. It's somewhat challenging to invent new explanations, while dancing or cuddling, for our intrusions into each other's bodies. I think I've driven my voodoo witch doctor shtick into the ground.

I know it seems odd to hear from me, but "stealing someone's heart" really works better as a metaphor.


9 minutes ago, Madelaine McMasters said:

The extreme version of this would be live solo dance performances, in which puppeteering is the enabling technology. I love the idea of SL supporting displays of physical intelligence.

This TOTALLY makes sense to me.

Other in-world use cases? Not so much.

Part of this may be generational: older users such as myself don't focus much on avatar movement because there was seldom any real reason to do so. I suppose a new, younger generation of residents, used to VRChat and other such platforms, might?

But . . . an example. I dance with the same guy every Sunday night (and sometimes on Friday nights as well -- you know who). And when we dance, his avatar is invariably staring off somewhere into the middle distance over my shoulder, like he's lost in thought and totally unconnected to me or the dancing. So, I explained to him how to focus his avatar's face and direction of sight on his dance partner -- as I always do when we dance, or even just talk. (I do this when talking to you in-world too.) "Look at my face: you'll see that I'm looking up into yours. Isn't that nicer, and more intimate?" And his answer was, essentially, "I don't look at us or you while we are dancing." And, by god, his avatar makes that clear -- he still doesn't bother "looking" at me while we dance.

It DOES make a difference in terms of "connection." But he, and I suspect a majority of older residents, really don't think of SL visually that way.


13 minutes ago, Scylla Rhiadra said:

SL code does not, at the moment, have a way of measuring the "outer limit" of mesh, does it?

No, it doesn't. And without that, you can't map body motion tracking to an avatar properly.

This is going to get kind of technical.

While full body collision detection would be nice, something much simpler could work. What's the minimum amount of information the IK system needs to keep arms, hands, and elbows outside the body? Probably something no more complex than this.

[Image: wooden artist's pose mannequin]

Wooden pose mannequin. Standard artist's tool. That's a good guide to how much info is needed.

A pose mannequin is spheres around the joints and simple forms, cylinders or ellipsoids, around the body parts. That's all the info we need. In terms of numbers, the minimum values are a radius for each joint and the maximum cross-section of each body part, in two axes. So, three additional numbers per joint. A simple collision detection system is needed. For puppeteering, it's needed only for the joints that are being tracked. For face tracking only, it's not needed. If face and hands are being tracked, the arm, torso, shoulder, and head dimensions are needed. For full-body tracking, you need the rest of the joints. If hand tracking is good enough to follow finger moves, all the fingers of a Bento avatar need those dimensions.
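As a rough sketch of what "three numbers per joint" could look like in practice (my own guess, not anything LL has specified), here's a minimal per-joint record plus a clearance test against the limb segment it covers. Negative clearance means a tracked point has wandered inside the proxy.

```python
# Per-joint collision proxy: a sphere at the joint plus two cross-section
# half-widths for the limb segment. Values below are hypothetical; a viewer
# would derive them from the shape sliders rather than hard-code them.
from dataclasses import dataclass
import math

@dataclass
class JointProxy:
    joint_radius: float   # sphere around the joint itself (metres)
    half_width_x: float   # limb cross-section half-width, local X
    half_width_y: float   # limb cross-section half-width, local Y

UPPER_ARM = JointProxy(joint_radius=0.06, half_width_x=0.05, half_width_y=0.045)

def clearance(point, seg_start, seg_end, proxy):
    """Distance from a point to the limb's core segment, minus the larger
    half-width. Negative means the point is inside the crude capsule."""
    px, py, pz = point
    sx, sy, sz = seg_start
    ex, ey, ez = seg_end
    dx, dy, dz = ex - sx, ey - sy, ez - sz
    seg_len2 = dx * dx + dy * dy + dz * dz
    t = 0.0
    if seg_len2 > 0.0:
        t = ((px - sx) * dx + (py - sy) * dy + (pz - sz) * dz) / seg_len2
        t = max(0.0, min(1.0, t))
    cx, cy, cz = sx + t * dx, sy + t * dy, sz + t * dz
    dist = math.sqrt((px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2)
    return dist - max(proxy.half_width_x, proxy.half_width_y)
```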

This gives a modern IK system, such as VRChat's IK 2.0, enough info to get the body parts to look sane. Most tracking systems know where the hands are, but aren't sure about the elbows. The IK system has to place the elbows. To get an intuitive sense of this, while seated, grab some fixed object such as a table or chair arm firmly. Now move your elbow around. Humans have enough joints that you can do that. So, if you only have hand position and shoulder position, you have to guess the elbow position. This is called an under-determined inverse kinematics problem. The IK system has to come up with a solution that doesn't cause a collision or a jerky move. There are many such solutions, just as you saw when you moved your elbow around. The usual goal is the one that takes the least movement to get to from the previous position. This is a classic problem in robotics, and was solved a few decades ago.
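Here's a toy version of that least-movement idea (my own illustration, nothing like a production solver): given shoulder and wrist positions, the reachable elbow positions form a circle; sample that circle, discard samples that land inside a crude torso sphere, and keep the one closest to last frame's elbow.

```python
# Under-determined two-bone IK, brute force. Sketch only; a real solver
# works analytically and uses proper body proxies instead of one sphere.
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def mul(a, s): return (a[0] * s, a[1] * s, a[2] * s)
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0])
def norm(a):
    l = math.sqrt(dot(a, a))
    return mul(a, 1.0 / l) if l > 1e-9 else (0.0, 0.0, 1.0)
def dist(a, b): return math.sqrt(dot(sub(a, b), sub(a, b)))

def place_elbow(shoulder, wrist, upper_len, fore_len, prev_elbow,
                torso_center, torso_radius, samples=64):
    axis = sub(wrist, shoulder)
    d = max(1e-6, min(math.sqrt(dot(axis, axis)), upper_len + fore_len - 1e-6))
    n = norm(axis)
    # The elbow lies on a circle perpendicular to the shoulder-wrist axis.
    a = (upper_len ** 2 - fore_len ** 2 + d ** 2) / (2.0 * d)
    r = math.sqrt(max(0.0, upper_len ** 2 - a ** 2))
    center = add(shoulder, mul(n, a))
    ref = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (1.0, 0.0, 0.0)
    u = norm(cross(n, ref))
    v = cross(n, u)
    best, best_cost = None, float("inf")
    for i in range(samples):
        theta = 2.0 * math.pi * i / samples
        e = add(center, add(mul(u, r * math.cos(theta)), mul(v, r * math.sin(theta))))
        if dist(e, torso_center) < torso_radius:   # elbow inside the body: reject
            continue
        cost = dist(e, prev_elbow)                 # least movement from last frame
        if cost < best_cost:
            best, best_cost = e, cost
    return best if best is not None else prev_elbow  # nothing clear: hold still
```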

Once we have those numbers, they have other uses. If AVSitter had that info, and the skeleton dimensions, it could put avatars in chairs without any need for user adjustment. So both viewer and LSL need access to that info.

End of technical section.

So, after all that, some fun. Here's VRChat's IK 2.0 in action.

A demo of a good tracker being used by a good performer. This is fun to watch. (Admittedly, it's promotional, but ignore that and watch the movement.)

This is someone who's very expressive with her hands and body. VRchat is able to track her movements well, without breaking immersion. The subtle hard cases work. She brings her hands alongside her hips - no problem. She moves her arms in front of her body - no problem. She touches thumb and forefinger in a gesture - no problem. She brings one hand behind her body - no problem. And no visible lag.

So that's what it looks like when done right. Getting SL up to that level may be tough, but if LL wants to play in this league, they have to get that good.

This could be a lot of fun in SL. I'm encouraged to see LL taking on a hard problem.


Hmm...this does make me wonder...

As a person who does not own or want a webcam, has no motion trackers, and has little space to perform backflips and gymnastics around my home - is there anything in this new tech I can even use?

It sounds like it's solely for people who perform live and stream and/or record to Twitch/YouTube, which means I'll likely never even see it in action, either. I mean, the temptation definitely IS there to start random twerking in the middle of a clothing store (which I do tend to do on a regular basis...don't judge me), but not if it requires equipment and physical space around my PC to make it happen. My lazy self is far more likely to just activate one of my many twerk animations from my inventory and call it a day.


2 minutes ago, Ayashe Ninetails said:

As a person who does not own or want a webcam, has no motion trackers, and has little space to perform backflips and gymnastics around my home - is there anything in this new tech I can even use?

You just gave me an idea!

What if we could use the "puppetry" feature to capture our RL pet's movements instead of our own?

Or just capture an RL "dummy" for posing purposes, etc.? (If no "dummy" is available, one's RL spouse should suffice.)


2 minutes ago, Love Zhaoying said:

You just gave me an idea!

What if we could use the "puppetry" feature to capture our RL pet's movements instead of our own?

Or just capture an RL "dummy" for posing purposes, etc.? (If no "dummy" is available, one's RL spouse should suffice.)

Might work. Still requires equipment, though.

I will say, I do already have dances that are fully mocap - but if this new feature can provide me with even MORE absolutely ridiculous mocap dances from a wider range of stores, then by all means - animators with motion trackers...get twerkin'!!!!!! 🤣


10 minutes ago, animats said:

This is going to get kind of technical.

Yep, but I get how this would work, I think. Essentially, it fakes true collision detection by measuring the dimensions of one part of the body, and sort of adding that to the basic calculation of how far away another part of the body is?

I have no idea how complicated this might be, but presumably this is something that High Fidelity had already addressed. The trick, I suppose, would be translating that into SL.

(On a side note, it was interesting to see how often her virtual clothing clipped through her body in that video. It made me feel less judgmental about SL!)

