
Second Life Puppetry



Recommended Posts

10 minutes ago, Ayashe Ninetails said:

I mean, the temptation definitely IS there to start random twerking in the middle of a clothing store (which I do tend to do on a regular basis...don't judge me), but not if it requires equipment and physical space around my PC to make it happen. My lazy self is far more likely to just activate one of my many twerk animations from my inventory and call it a day.

I am attracted to the idea of being able to dance in RL, and have my dancing translated into SL . . . but how often would I do this? Probably not very. And yes, the space and cost would be an issue (not to mention not wanting to look like a TOTAL dork to my bemused and amused RL partner).

PS. I AM a better dancer than Elaine Benes. But still . . .

  • Like 1

Just now, Coffee Pancake said:

Gonna keep saying... if we can't hold a candle to VRChat, we shouldn't attempt to do RL->SL motion; we should find ways to do it better in SL.

Latency.

Smoothness.

Realism.

Fluidity.

Accuracy.

 

So far, we have NONE of these.

I sort of suspect that part of the push for this (beyond Rosedale's apparent obsession with it) is the "Keeping up with the Joneses" (or VRChat) thing. If they have it, and we want to be viewed as serious competition, then we have to have it too.

And if that's the case, you may be right: it might actually be counterproductive to do it poorly.

  • Like 2

1 minute ago, Ayashe Ninetails said:

Hmm...this does make me wonder...

As a person who does not own or want a webcam, has no motion trackers, and has little space to perform backflips and gymnastics around my home - is there anything in this new tech I can even use?

It sounds like it's solely for people who perform live and stream and/or record to Twitch/YouTube, which means I'll likely never even see it in action, either. I mean, the temptation definitely IS there to start random twerking in the middle of a clothing store (which I do tend to do on a regular basis...don't judge me), but not if it requires equipment and physical space around my PC to make it happen. My lazy self is far more likely to just activate one of my many twerk animations from my inventory and call it a day.

As I see it, and as evidenced by the demonstrations Animats has posted here, mocap puppeteering will all but require voice. You can't type while dancing or scratching tunes.

There's an unintended consequence in this. For those with physical disabilities who currently "escape" via SL, mocap risks exposing them again. SL's crude and indirect (click a button, channel the grace of a mocap dancer) approximation of reality, driven primarily by human fingers, has acted as an equalizer. I hope the VR community is sensitive to this, and works to maintain inclusivity.

  • Like 6
  • Thanks 2

1 minute ago, Madelaine McMasters said:

As I see it, and as evidenced by the demonstrations Animats has posted here, mocap puppeteering will all but require voice. You can't type while dancing or scratching tunes.

There's an unintended consequence in this. For those with physical disabilities who currently "escape" via SL, mocap risks exposing them again. SL's crude and indirect (click a button, channel the grace of a mocap dancer) approximation of reality, driven primarily by human fingers, has acted as an equalizer. I hope the VR community is sensitive to this, and works to maintain inclusivity.

Really excellent points, Maddy. Well caught.

  • Like 2

3 minutes ago, Scylla Rhiadra said:

I am attracted to the idea of being able to dance in RL, and have my dancing translated into SL . . . but how often would I do this? Probably not very. And yes, the space and cost would be an issue (not to mention not wanting to look like a TOTAL dork to my bemused and amused RL partner).

PS. I AM a better dancer than Elaine Benes. But still . . .

Totally agree. It would be fun to do, but I'm not about to buy a bunch of new equipment solely to do it.

Now, if it means that others will be able to use their own equipment to do it, and then bring it to us in the form of better, smoother, more optimized animations we can then use in our AOs and dance machines and whatnot, that's a whole different thing to look forward to! But I initially got the impression that this is more for "real-time execution," for lack of a better phrase. I guess we'll see!

And Maddy, you make a GREAT point, too.

  • Like 4

3 hours ago, Madelaine McMasters said:

There's an unintended consequence in this. For those with physical disabilities who currently "escape" via SL, mocap risks exposing them again. SL's crude and indirect (click a button, channel the grace of a mocap dancer) approximation of reality, driven primarily by human fingers, has acted as an equalizer. I hope the VR community is sensitive to this, and works to maintain inclusivity.

What solution do you suggest?

@Madelaine McMasters, Puppetry could also enhance the possibilities for those with disabilities in Second Life. Example: ASL via puppetry!

Edited by Love Zhaoying
  • Like 2

1 hour ago, Madelaine McMasters said:

As I see it, and as evidenced by the demonstrations Animats has posted here, mocap puppeteering will all but require voice. You can't type while dancing or scratching tunes.

There's an unintended consequence in this. For those with physical disabilities who currently "escape" via SL, mocap risks exposing them again. SL's crude and indirect (click a button, channel the grace of a mocap dancer) approximation of reality, driven primarily by human fingers, has acted as an equalizer. I hope the VR community is sensitive to this, and works to maintain inclusivity.

So, thinking about this, in my resolutely ignorant and non-geeky way . . .

One obvious answer to one of the issues you raise here is the proper integration of voice-to-text software into the viewers. They should do that anyway, and it's sort of bizarre (to me) that we have voice morphers, and not such a fundamental and well-established technology incorporated directly into the platform.

As for the disadvantage that this might place those who have mobility issues and the like . . . one very imperfect and partial way to address that is through the implementation of what you might call "dynamic AOs" that react to particular text inputs using dedicated shorter animations. A bit like "gestures," but much better and more sophisticated. So, suppose I input the phrase "You see . . ." into chat, either through voice-to-text (or just voice), or by typing it. The dynamic AO might respond by playing an animation that raises a hand, with one finger pointing up. Or something like that. It would, at the very least, be responsive to what we are saying, and it would break up the monotony of AOs that simply cycle through the usual group of non-responsive animations. ("I think I'm in love with you," she said. And his avatar bent over to brush off his pants.)
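Something like this, maybe? A very rough LSL sketch of the idea, with the trigger phrases and animation names entirely invented (an actual scripter would doubtless do this more elegantly):

```lsl
// Rough sketch of a "dynamic AO" attachment: listen to the wearer's
// local chat and overlay a short animation when a trigger phrase appears.
// Trigger phrases and animation names are invented placeholders.
integer gCanAnimate = FALSE;

default
{
    state_entry()
    {
        // Ask the wearer for permission to animate them.
        llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
        // Channel 0 is local chat; filter to the wearer's own messages.
        llListen(0, "", llGetOwner(), "");
    }

    run_time_permissions(integer perm)
    {
        gCanAnimate = (perm & PERMISSION_TRIGGER_ANIMATION) != 0;
    }

    listen(integer channel, string name, key id, string message)
    {
        if (!gCanAnimate) return;
        string text = llToLower(message);
        if (llSubStringIndex(text, "you see") != -1)
            llStartAnimation("point_up");          // placeholder animation
        else if (llSubStringIndex(text, "i think i'm in love") != -1)
            llStartAnimation("hand_on_heart");     // placeholder animation
    }
}
```

Played at a higher animation priority than the regular AO, the short overlays wouldn't fight the idle cycle.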

ETA: I forgot to say . . . the larger issue here is discrimination. And that's going to be a problem. There is already a pretty marked divide between those who use voice a lot, and those who hate it -- and both sides can often be seen discriminating against the other. I've known people on voice to simply ignore text chat directed at them. And I've known texters who absolutely refuse to turn voice on, almost out of principle.

So, yeah. We're going to need to learn to be more sensitive and understanding of different approaches, and acknowledge that sometimes these aren't even deliberate choices.

ETA Again: And we need voice-to-text to be capable of translating OTHER people's voices into text, for the hearing impaired. A taller order, but the tech exists now.

Edited by Scylla Rhiadra
  • Like 4
  • Haha 1

Dancing the night away is easy in SL, much easier than in RL.
The moment people have to make these moves themselves...
Same with walking elegantly through the sim, while in RL you have to share the room with your partner or the kids watching TV...
Smiling or looking sad or waving your hand are the easy parts.

Dead in the water for SL, if you ask me.
So better don't ask me.

Edited by Sid Nagy
  • Thanks 1

1 hour ago, Scylla Rhiadra said:

I am attracted to the idea of being able to dance in RL, and have my dancing translated into SL . . . but how often would I do this? Probably not very. And yes, the space and cost would be an issue (not to mention not wanting to look like a TOTAL dork to my bemused and amused RL partner).

PS. I AM a better dancer than Elaine Benes. But still . . .

For a non-nerd you've been asking really good questions. Animats is providing good answers and you seem to have understood them. Keep it up.

 

Here we have tended to look at just what the Lab is offering. NiranV is looking past what the Lab is doing to how it might be used for their own ideas. I think Niran is going for the UberPoser.

Remember what happened with mesh. The Lab thought mesh would be for building THINGS. While several tried to convince them the primary use was going to be for avatar bodies and clothes, the Lindens continued on to build mesh for THINGS. I think those still at the Lab now, who were around at the time, learned something. The development of features since then, like Bento, was a very different experience. It was way more inclusive of resident input. I am hoping that will be the case with puppeteering.

With mesh we got The Deformer (user-driven and coded, never completed), Liquid Mesh (a user hack of the existing system), and finally the Lab's capitulation and development of Fitted Mesh. My point is we often see SL developers use features in ways never intended or imagined by the Lab's engineers. I expect to see some interesting directions taken by third-party viewer devs.

We do lots of photos in SL that appear all over the web. I think it would be great if I could have an IK Poser that would let me put hands exactly where I want them. Photos similar to this shower snap (NSFW - Shower Ecstacy), trying to capture the heat, are difficult even with good poses and Poser help. I still had to use PS to clean up arms and legs passing through bodies.

If a new IK Poser can pose other avatars, that would be great. Even if each person has to pose their own avatar, everyone being able to see the pose would be awesome. As it is now, I can see BD's Poser moving me, but no one else can see those changes.

So putting in a pipeline that allows IK posing to be sent through the servers to others would be way awesome. Third-party devs will find many interesting ways to use it.

 

And then... there is that "what is Philip up to" thinking. We have little speculation on where SL is going in the face of Meta. The puppeteer feature doesn't really seem like a BIG factor for that competition. Hopefully Philip learned something playing with VR. But what? I think most can agree that VR was once again overhyped. But VR and virtual worlds and computer games in general are in large part about visual appearance. Why would the Lab not be reworking the render engine to move up to Vulkan? With ray-traced rendering being the THING, why stay with OpenGL?

So what is Philip thinking and planning? While I know the Lab does not talk about their plans, they do have them. Puppetry seems like a building block on a road map. Or maybe... it is THE THING, which I would find a bit disappointing, even if I do think it has the potential for many neat new features.

  • Like 3

5 hours ago, Scylla Rhiadra said:

SL code does not, at the moment, have a way of measuring the "outer limit" of mesh, does it? By that I mean we don't have any form of collision detection, do we? At least as regards things outside of the basic avatar bones?

If true, how can they prevent an elbow clipping your body? And account for different avatar shapes? If one has a more "curvy" body, for instance, the same animation that works fine on a slender avatar is going to produce clipping, would it not?

I am sure there is a great deal here I'm missing, but I don't see how this works without clipping, without rejigging everything with the addition of collision detection.

I'm wondering if it puts choreographers, who work from MoCap animations or clips of movements they purchase and assemble, out of business. Somehow, I think not.

  • Like 1

15 minutes ago, Prokofy Neva said:

I'm wondering if it puts choreographers, who work from MoCap animations or clips of movements they purchase and assemble, out of business. Somehow, I think not.

I very much doubt it, at least for most day-to-day purposes. Dance is very big in SL, but most people don't treat it as a mode of self-expression. It's something fun to watch while you're at a club listening to music, flirting, or connecting. This isn't going to change that.

And there are a lot of dance animations out there that are better than I am in RL. 😏

  • Like 2

2 hours ago, Nalates Urriah said:

And then... there is that "what is Philip up to" thinking. We have little speculation on where SL is going in the face of Meta. The puppeteer feature doesn't really seem like a BIG factor for that competition. Hopefully Philip learned something playing with VR. But what? I think most can agree that VR was once again overhyped. But VR and virtual worlds and computer games in general are in large part about visual appearance. Why would the Lab not be reworking the render engine to move up to Vulkan? With ray-traced rendering being the THING, why stay with OpenGL?

So what is Philip thinking and planning? While I know the Lab does not talk about their plans, they do have them. Puppetry seems like a building block on a road map. Or maybe... it is THE THING, which I would find a bit disappointing, even if I do think it has the potential for many neat new features.

I've seen a video of Philip collaborating remotely on some sort of biomedical project wherein the 4-5 participants were using VR and puppeteering to discuss, and each individually manipulate, a hovering biomedical object between them. The control they had of it, and the ability to point directly at some part, was quite interesting. Even just watching them do it was very immersive. It was also easy to see that this is obviously the way things need to go, and that he was playing a part in the ability to do it. VR hype was maybe a little premature, but it is definitely coming. It has to keep people engaged.

12 minutes ago, Prokofy Neva said:

I'm wondering if it puts choreographers, who work from MoCap animations or clips of movements they purchase and assemble, out of business. Somehow, I think not.

It is my opinion that it was that potential that had the Lab shelve the puppeteering project back in 2008. Sure, there will still be a need for more automated and repetitive animations (the MoCap dances and twerking that would be too tiresome to control specifically), aside from having the ability to control and manipulate arm/hand/leg/head movements. That shouldn't, however, cancel this puppeteering project now.

  • Like 1
  • Haha 1

Though I mentioned the disabled as potentially suffering from the introduction of puppeteering, I don't actually have much concern about that happening. I'm able-bodied, yet prefer using a keyboard to drive SL. I have no interest in my avatar reflecting my RL movement, even just facial expressions. I am not alone.

I would be interested in a system that efficiently converts explicit commands (voice, keyboard/mouse, or some easy-to-use interface) into facial expressions and movements, smoothly transitioning and merging with libraries of animations. I want this system to understand context, so I'm not burdened with endless detailed specification in my commands.

I don't see puppeteering affecting the animations market. Animation creators have been using mocap for many years. They'll take advantage of any improvements in the SL avatar animation system to create better products we can use pretty much as we always have. Far into the future, I can imagine AI "assistants" gathering information from us with the intent to infer animations from whatever input we give the system, whether text, voice, mocap, disabled access devices or other affordances. Such a system might see me type (or hear me say) "set @animats on fire" and instantly animate my avatar launching a fireball at him while creating a matching chat emote.
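Even without the AI, the last link in that chain (command in, animation plus matching emote out) is scriptable today. A toy LSL sketch; the command word, channel, animation name, and emote text are all made up, and the hard inference part is precisely what it leaves out:

```lsl
// Toy command-to-animation dispatcher on a private channel.
// Everything named here is a placeholder: say "/5 fireball" in chat
// to trigger a (hypothetical) "throw_fireball" animation plus an emote.
integer CMD_CHANNEL = 5;

default
{
    state_entry()
    {
        llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
        llListen(CMD_CHANNEL, "", llGetOwner(), "");
    }

    listen(integer channel, string name, key id, string message)
    {
        if (llToLower(message) == "fireball")
        {
            // Assumes the permission request above succeeded.
            llStartAnimation("throw_fireball");  // placeholder animation
            // Note: object chat, so the emote is attributed to the
            // attachment's name rather than the avatar's.
            llSay(0, "/me hurls a fireball!");
        }
    }
}
```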

If there is a peril here, it's that some users will puppeteer their avatars live, in some way that distinguishes them from the rest of us. This will start with performers, but spread to enthusiasts. If they seek each other out, this might begin to feel as voice regions currently do to typists like me.

Edited by Madelaine McMasters
  • Like 3

6 hours ago, Scylla Rhiadra said:

I sort of suspect that part of the push for this (beyond Rosedale's apparent obsession with it) is the "Keeping up with the Joneses" (or VRChat) thing. If they have it, and we want to be viewed as serious competition, then we have to have it too.

And if that's the case, you may be right: it might actually be counterproductive to do it poorly.

It's going to end up like every major feature LL has implemented over the last 12 years: Mission Accomplished™, following zero or dated industry standards.

  • Like 2
  • Thanks 3
  • Sad 1

4 hours ago, Love Zhaoying said:

Puppetry could also enhance the possibilities for those with disabilities in Second Life. Example: ASL via puppetry!

I think that text communication is a much better solution, though, and we already have that. In fact, before voice was an option, text was the great leveler between abled and hearing-impaired users.

Maddy and Animats have hit the nail on the head. A full-body (or even partial-body) avatar puppeteering solution is fundamentally incompatible with SL's current system of keyboard-and-mouse avatar control and communication.

Many times, I've wanted to be able to say to someone, "Look over there!" and point to what I'm talking about. But I would not care to turn SL into VRChat merely to gain that ability.

Another thing about a VR-powered virtual world: it almost demands the use of a first-person (Mouselook) viewpoint. While this has some definite advantages, it has one major drawback, at least from my perspective: I can no longer see "myself". I've gotten used to that, over the years, and if I use Mouselook now, I actually LOSE some of my sense of immersion.

  • Like 5

7 hours ago, Scylla Rhiadra said:

Yep, but I get how this would work, I think. Essentially, it fakes true collision detection by measuring the dimensions of the part of the body, and sort of adding that to the basic calculation of how far away the other part of the body is?

Yep. In primitive skeleton animation systems, the bones are just lines, which can't really collide because they have no volume. The SL skeleton has some limits on joint angles, derived from basic human limits, like not being able to bend our legs backwards at the knee without a trip to the ER. Those limits don't prevent setting several joint angles in such a way that body parts collide.

By giving the bones simply shaped collision volumes (which would respond to the sliders, so you can adjust for thin/thick people), it's possible to compute approximately when they collide. You'd allow overlap in some cases (elbow creases) but not others (fingertips against anything), simply because humans are squishy, but you don't have to go through the horrific calculations of deforming a complex mesh to simulate contact. We're not looking for perfection, we're just trying to avoid entire hands vanishing into our (or someone else's) abdomen. Inherent in such a system is some knowledge of human motion that allows a collision to be "unwound" in a visually acceptable way. If a pose puts my hand through my chest, I don't want the system to fold my hand backwards at a gasp-inducing angle to fix that. I want it to move at least my upper and lower arms as well, in some way that looks plausible.
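To make that concrete: the overlap test itself is cheap. Treat each bone as a capsule (a line segment with a radius) and call it a collision when the distance between two segments drops below the sum of the radii. A rough sketch, written in LSL syntax purely for illustration; a viewer would do this in C++, and the radii would be derived from the shape sliders:

```lsl
// Bones as capsules: segment + radius. Two capsules overlap when the
// closest distance between their segments is less than r1 + r2.

float clamp01(float x)
{
    if (x < 0.0) return 0.0;
    if (x > 1.0) return 1.0;
    return x;
}

// Closest distance between segments (p1,q1) and (p2,q2), assuming
// neither segment is degenerate (zero length).
float segSegDistance(vector p1, vector q1, vector p2, vector q2)
{
    vector d1 = q1 - p1;          // direction of segment 1
    vector d2 = q2 - p2;          // direction of segment 2
    vector r  = p1 - p2;
    float a = d1 * d1;            // "*" on vectors is the dot product
    float e = d2 * d2;
    float f = d2 * r;
    float c = d1 * r;
    float b = d1 * d2;
    float denom = a * e - b * b;  // zero when segments are parallel
    float s = 0.0;
    if (denom != 0.0) s = clamp01((b * f - c * e) / denom);
    float t = (b * s + f) / e;
    // Clamp t to segment 2, then recompute s against segment 1.
    if (t < 0.0)      { t = 0.0; s = clamp01(-c / a); }
    else if (t > 1.0) { t = 1.0; s = clamp01((b - c) / a); }
    return llVecDist(p1 + d1 * s, p2 + d2 * t);
}

integer capsulesCollide(vector p1, vector q1, float r1,
                        vector p2, vector q2, float r2)
{
    return segSegDistance(p1, q1, p2, q2) < (r1 + r2);
}
```

Detecting the overlap is just that test run over pairs of bones each frame; the "unwinding" is the genuinely hard part.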

Edited by Madelaine McMasters
  • Like 2

4 hours ago, Lucia Nightfire said:

It's going to end up like every major feature LL has implemented over the last 12 years: Mission Accomplished™, following zero or dated industry standards.

This is my fear. Resurrecting a long-dead project to try and convince the kids we can be hip and trendy too.

If it follows the standard operating procedure, it will be too little, too late, and beset by usability problems & jank so severe that even the technically minded and able-bodied will struggle to make it work.

The project will be destroyed from within as, rather than focus on delivering a winning experience to existing end users that meets current standards and expectations, it will end up overtaken by some evergreen nonsense fixated on developing business value and intellectual property by people who wouldn't know which way up a set of motion trackers might go (and can't even acknowledge the primary use case might just be interpersonal intimacy rather than waving).

I'm sorry if this sounds jaded and depressed; we need a win so very badly. I want this to be good.

  • Like 2

1 hour ago, Coffee Pancake said:

It will end up overtaken by some evergreen nonsense fixated on developing business value and intellectual property by people who wouldn't know which way up a set of motion trackers might go (and can't even acknowledge the primary use case might just be interpersonal intimacy rather than waving).

I'm very sure even the irrelevant Rosedale knows perfectly well what the primary use case for this stuff will be. Would we expect their sample videos to show horny seniors puppeteering their avatars doing the nasty?

They appear to believe it's table stakes for the platform, and that may not be wrong. This really will be pretty core to many users' choice of social virtual world. Sure, SL can be the one without it for a while, but it might end up being as much a market-limiting factor as lacking voice.

Of course, some of us still won't use SL voice. I eventually realized that voice is a prerequisite to making any practical use of puppeteering, so I doubt I'll ever do more than test it out, despite at first being all excited about the tech and how open the Lab is making it for developers.

  • Like 1
  • Haha 1

15 hours ago, Scylla Rhiadra said:

SL code does not, at the moment, have a way of measuring the "outer limit" of mesh, does it? By that I mean we don't have any form of collision detection, do we? At least as regards things outside of the basic avatar bones?

If true, how can they prevent an elbow clipping your body? And account for different avatar shapes? If one has a more "curvy" body, for instance, the same animation that works fine on a slender avatar is going to produce clipping, would it not?

I am sure there is a great deal here I'm missing, but I don't see how this works without clipping, without rejigging everything with the addition of collision detection.

The SL skeleton already has collision bones - they're what allow fitted mesh to work. Animations didn't recognize them before, but the information is already part of your avatar.

  • Like 3

On 8/30/2022 at 11:03 PM, animats said:

I'm encouraged to see LL doing something ambitious.

I'm 100% with you on this, but two things ...

I wish they'd tackle the massive tech debt they have with such enthusiasm. And I spend my time in SL sitting down at a desk, moving through the world from there. Why do I need motion tracking puppetry to track myself typing on a keyboard? The need for this is soooo limited.

Edited by Katherine Heartsong
  • Like 3
