Everything posted by Madelaine McMasters

  1. Recognizing collisions is step one. Doing something visually acceptable to address them is step two. The LL announcement Nalates linked doesn't get into much detail, so I don't know if/how collisions will be handled. It does seem that the focus is on real-time mocap, which I am not personally interested in. It might also be that collisions are to be handled by the external mocap system, which could mean that body sliders do not affect animations (and therefore can't be used to prevent collisions), or that they would have to be exported from SL to inform the external mocap system. We'll see how it goes when we see how it goes.
  2. Yep. In primitive skeleton animation systems, the bones are just lines, which can't really collide because they have no volume. The SL skeleton has some limits on joint angles, derived from basic human limits, like not being able to bend our legs backwards at the knee without a trip to the ER. Those limits don't prevent setting several joint angles in such a way that body parts collide. By giving the bones simply-shaped collision volumes (which would respond to the sliders, so you can adjust for thin/thick people), it's possible to compute approximately when they collide (a rough sketch of such a test appears after the last post below). You'd allow overlap in some cases (elbow creases) but not others (fingertips against anything), simply because humans are squishy, but you don't have to go through the horrific calculations of deforming a complex mesh to simulate contact. We're not looking for perfection, we're just trying to avoid entire hands vanishing into our (or someone else's) abdomens. Inherent in such a system is some knowledge of human motion that allows a collision to be "unwound" in a visually acceptable way. If a pose puts my hand through my chest, I don't want the system to fold my hand backwards at a gasp-inducing angle to fix that. I want it to move at least my upper and lower arms as well, in some way that looks plausible.
  3. Though I mentioned the disabled as potentially suffering from the introduction of puppeteering, I don't actually have much concern about that happening. I'm able-bodied, yet prefer using a keyboard to drive SL. I have no interest in my avatar reflecting my RL movement, even just facial expressions. I am not alone. I would be interested in a system that efficiently converts explicit commands (voice, keyboard/mouse, or some easy-to-use interface) into facial expressions and movements, smoothly transitioning and merging with libraries of animations. I want this system to understand context, so I'm not burdened with endless detailed specification in my commands. I don't see puppeteering affecting the animations market. Animation creators have been using mocap for many years. They'll take advantage of any improvements in the SL avatar animation system to create better products we can use pretty much as we always have. Far into the future, I can imagine AI "assistants" gathering information from us with the intent to infer animations from whatever input we give the system, whether text, voice, mocap, disabled-access devices or other affordances. Such a system might see me type (or hear me say) "set @animats on fire" and instantly animate my avatar launching a fireball at him while creating a matching chat emote. If there is a peril here, it's that some users will puppeteer their avatars live, in some way that distinguishes them from the rest of us. This will start with performers, but spread to enthusiasts. If they seek each other out, this might begin to feel as voice regions currently do to typists like me.
  4. As I see it, and as evidenced by the demonstrations Animats has posted here, mocap puppeteering will practically require voice. You can't type while dancing or scratching tunes. There's an unintended consequence in this. For those who currently "escape" their physical disabilities via SL, mocap risks exposing them again. SL's crude and indirect (click a button, channel the grace of a mocap dancer) approximation of reality, driven primarily by human fingers, has acted as an equalizer. I hope the VR community is sensitive to this, and works to maintain inclusivity.
  5. As in the way we "torture" prims! I don't imagine many people will want to actually perform the RL motions they wish to see performed by their SL avatars. If I "waves to Scylla!" in public chat, most everybody sees that. If my avatar waves at you, almost nobody sees it. Conversely, if we all use mocap to puppeteer, SL will look like 2008, with people typing everywhere. That said, though we don't currently focus on the DJ (or other entertainers), I can imagine a future in which we will, because their motions are part of the entertainment. The extreme version of this would be live solo dance performances, in which puppeteering is the enabling technology. I love the idea of SL supporting displays of physical intelligence. We don't need perfection, just something less obviously broken than the current animation system. An avatar model should be able to do some rudimentary collision avoidance by reading the sliders and computing approximate limits for bone motion, with some overlap allowance to crudely fake the deformation of RL bodies (a toy version of that computation appears after the last post below). Collisions with clothing would be more difficult. I expect that my little devil will continue to stab herself with her trident and burn herself with her torch. I'd love SL to reach the holy grail of full collision detection between avatars. It's somewhat challenging to invent new explanations, while dancing or cuddling, for our intrusions into each other's bodies. I think I've driven my voodoo witch doctor shtick into the ground. I know it seems odd to hear from me, but "stealing someone's heart" really works better as a metaphor.
  6. ...bats her eyelashes at you, then hands you the chicken.
  7. If you are hidden (and you are), you will see yourself in the list, greyed out. Nobody else will see you.
  8. I don't think it would for me -- most of us flit about so much. Knowing that you (for instance) had posted somewhere might make a difference. Unless there is a central location from which to see where people are, finding potential responses under construction could require significant effort. Finding existing posts is easy, as is finding the most recent posts from any individual. I see no personal value in paying more attention to less (possibly no) information. I think Arielle is pointing out a shining example of the psychology at work in these new affordances. They are designed to increase engagement, to draw more of our attention. Instituting forum guidelines to limit engagement seems at cross purposes with adding new mechanisms to promote it. ETA: I should add that by "engagement", I mean "profit". If you think of it that way, it's not difficult to discern the reasoning behind anything that happens here.
  9. I entertained exactly the same idea, using Snugs. She had to keep her window active, so I couldn't push her into the background. Your method, using a spare computer, might work, but I'd not be surprised if there was a timeout mechanism in place. It does seem I can leave Snugs (in the background) in a forum indefinitely (an hour, at least), peering down on everyone else in the thread. I could also hide RootBeerDrinkingLampshade and park her somewhere else, to make an ominous anonymous overlady.
  10. To be fair, I don't have enough energy to judge everyone I see like that, but I am well aware that everything has the potential for being a point of judgment! I use "judge" somewhat differently than many, who use the judgy/judgmental definition in which judgments are seen as unreasonable, unwarranted, or excessive. I just use it to mean judge, and yes, we all do it, all the time, mostly un/subconsciously. That's why we're at the top of the evolutionary ladder. For me, the question is whether I know all the judgments I make and whether they are reasonable. Do I blindly follow unseen breadcrumbs? If there are people who make a living out of exciting the nosy neighbor I harbor, what's she seeing that I don't? What's she having me do about that? To be contrary, I do have enough mental energy to judge everyone I see at some level. To be honest, I'm just like Nekomimis, and haven't the conscious energy to think much about that... ...unless I'm posting here.
  11. How do you know it doesn't matter? You aren't the only person harboring a nosy gremlin. Your disclosure has produced a judgment on my part, which is reflected in my reaction to it.
  12. This is probably the intention? The intention is almost always engagement. Chat is more engaging than the forums. I wish I'd bookmarked/saved the article, but years ago I read that Microsoft had game psychologists and designers on the MS Office team. Their goal was to increase the amount of time users spent using Office by offering endorphin-producing incentives throughout the suite. I presume the users of Office have different goals. Hell yes. I sense manipulation all over the place as it elicits emotional reactions I don't want. It takes time for me to assess the manipulations I detect and determine what to do with or about them. I also detect manipulation after the fact, when I begin to wonder why I'm making the choices I do. Yep, I have examples of my own. Some years ago, at the Milwaukee Film Festival, I sat through a screening of "The Giant Spider Invasion", a craptastic low-budget sci-fi flick that was filmed in Stevens Point, WI. Bill Rebane, the director, gave a talk before and answered questions after. He explained that the studio's "market research" had a heavy hand in the film's creation. The film was laden with tropes so threadbare that we in the audience could anticipate them. I think that's what makes opening-run flops into cult classics. We eventually revel in our ability to spot the manipulation and pat our sophisticated selves on the back. The last laugh, of course, goes to the manipulators. After the screening, Mr. Rebane thanked us for continuing to fund his retirement.
  13. Don't look at the world from your perspective, Han, it's only yours. You need only Google "typing indicator" to see the countless articles explaining why they exist. Most of those articles are sales pitches from tool providers. Here's one that isn't... https://medium.com/swlh/the-loss-of-micro-privacy-baa088f0660d From that article... In order to set expectations and make conversations feel more engaging, the team introduced what they called the typing indicator. Every time users started writing a message, it sent a signal to the server that would in turn inform the person on the other end that the user was typing. This was a massive technical bet considering the cost of server space. Around 95% of all MSN traffic was not the content of the messages itself, but simple bits of data that would trigger the iconic dots to show up and disappear! From an engagement model perspective, the typing indicator flipped all the right behavioral switches that got people hooked. Every time someone started typing, it created anticipation followed by a variable reward. Today, this is a well-researched area in psychology that serves as a foundation for anyone attempting to build addictive products. (A bare-bones sketch of that typing signal appears after the last post below.) I'm part of a human population that's being studied endlessly by another part of the human population. I take little comfort from believing I understand the psychology being exploited by online social tools, or that such understanding might somehow vaccinate me. Even if I am unaffected (I have significant doubts), I live in a world full of people who have proven they are.
  14. Honestly, do you really think Invision (market cap €37 million) are interested in implementing functions that nobody is interested in?
  15. I wondered why everybody was crowded around the throne.
  16. PBR = Pabst Blue Ribbon (beer). Though Schlitz was the beer that made Milwaukee Famous, it was PBR that made my classmates stupid.
  17. My region doesn't have much on the ground. It's filled with auto-rezzing skyboxes that vanish when their occupants depart, leaving only their objects floating eerily in space. Now that you mention it, I do recall seeing clouds when I use a skybox scene that's open to the sky. I agree that lag seems related to "stuff".
  18. Things may have changed over the years, but I think the viewer stops rendering terrain, water, and clouds above a certain elevation. I've flown up and down the region I live on and don't see any significant change in FPS with altitude. I should fly up above 4096 and see what happens.
  19. Be grateful, Coffee. That's so much better than Joni Mitchell's "Don't it always seem to go, that you don't know what you've got till it's gone."* *Which really should read "you don't know what you had till it's gone". Damned poets and their licenses.
  20. I'm working from ancient foggy memory here, but enable the Advanced Menu, then uncheck "Limit select distance" and check "Disable camera constraints".
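A rough sketch of the bone-collision test described in post 2 above, assuming capsule-shaped collision volumes whose radii scale with a hypothetical body-slider factor. This is not LL's actual system; the class names, the slider hook, and the per-pair overlap allowance are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def scale(a: Vec3, s: float) -> Vec3:
    return (a[0] * s, a[1] * s, a[2] * s)

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def clamp(x: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, x))

@dataclass
class BoneCapsule:
    start: Vec3                # joint at one end of the bone
    end: Vec3                  # joint at the other end
    base_radius: float         # rest thickness of the limb
    slider_scale: float = 1.0  # hypothetical body-slider factor (1.0 = default shape)

    @property
    def radius(self) -> float:
        return self.base_radius * self.slider_scale

def segment_distance(p1: Vec3, q1: Vec3, p2: Vec3, q2: Vec3) -> float:
    """Minimum distance between segments p1-q1 and p2-q2 (standard closest-point test)."""
    EPS = 1e-9
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e, f = dot(d1, d1), dot(d2, d2), dot(d2, r)
    if a <= EPS and e <= EPS:                # both bones degenerate to points
        return math.sqrt(dot(r, r))
    if a <= EPS:
        s, t = 0.0, clamp(f / e, 0.0, 1.0)
    else:
        c = dot(d1, r)
        if e <= EPS:
            s, t = clamp(-c / a, 0.0, 1.0), 0.0
        else:
            b = dot(d1, d2)
            denom = a * e - b * b            # zero when the bones are parallel
            s = clamp((b * f - c * e) / denom, 0.0, 1.0) if denom > EPS else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                s, t = clamp(-c / a, 0.0, 1.0), 0.0
            elif t > 1.0:
                s, t = clamp((b - c) / a, 0.0, 1.0), 1.0
    diff = sub(add(p1, scale(d1, s)), add(p2, scale(d2, t)))
    return math.sqrt(dot(diff, diff))

def bones_collide(b1: BoneCapsule, b2: BoneCapsule, overlap_allowance: float = 0.0) -> bool:
    """True when the capsules sink into each other deeper than the allowance permits.

    Squishy pairings (an elbow crease against the forearm) get a generous
    allowance; fingertips against anything get 0.0.
    """
    gap = segment_distance(b1.start, b1.end, b2.start, b2.end)
    return gap < (b1.radius + b2.radius) - overlap_allowance

# A hand bone grazing a chest bone: a collision with no allowance, but a
# 5 cm allowance lets the near-graze pass as plausible soft contact.
hand = BoneCapsule((0.15, 0.0, 1.2), (0.15, 0.1, 1.2), 0.04)
chest = BoneCapsule((0.0, 0.05, 1.0), (0.0, 0.05, 1.4), 0.12, slider_scale=1.1)
print(bones_collide(hand, chest))                          # True
print(bones_collide(hand, chest, overlap_allowance=0.05))  # False
```

Capsules keep the whole test down to one segment-to-segment distance computation per bone pair, which is exactly why you'd pick them over the horrific mesh-deformation math the post mentions.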
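And a toy version of the slider-reading limit computation mentioned in post 5: a wider torso leaves less room for a hanging arm to swing inward before it sinks into the chest, so the allowed joint range shrinks as a (hypothetical) torso-width slider grows. The flat geometry and every parameter name here are my own simplifications.

```python
import math

def max_arm_adduction_deg(arm_length: float,
                          shoulder_offset: float,
                          torso_half_width: float,
                          arm_radius: float,
                          overlap_allowance: float = 0.01) -> float:
    """Largest inward swing (degrees from vertical) before a hanging arm,
    modeled as a line at horizontal distance shoulder_offset from the torso's
    vertical axis, sinks into the torso deeper than the allowance permits."""
    clearance = shoulder_offset - (torso_half_width + arm_radius - overlap_allowance)
    if clearance <= 0.0:
        return 0.0  # the arm already touches the torso at rest
    # The hand moves arm_length * sin(theta) inward for a swing of theta.
    return math.degrees(math.asin(min(clearance / arm_length, 1.0)))

# Push the torso-width slider up and the allowed range shrinks:
print(round(max_arm_adduction_deg(0.55, 0.20, 0.12, 0.04), 1))  # slim torso  -> 5.2
print(round(max_arm_adduction_deg(0.55, 0.20, 0.15, 0.04), 1))  # broad torso -> 2.1
```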
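Finally, a bare-bones sketch of the typing-indicator mechanism described in the article quoted in post 13: the client sends a tiny, contentless event whenever the user starts typing, and the receiving client shows the dots until the signal goes stale. The field names and the five-second timeout are invented, not any particular messenger's protocol.

```python
import json
import time

TYPING_TTL = 5.0  # seconds the dots stay visible after the last signal

def typing_event(sender: str) -> bytes:
    """The whole payload: a few bytes of metadata, no message content."""
    return json.dumps({"type": "typing", "from": sender, "ts": time.time()}).encode()

class TypingIndicator:
    """Receiver-side state: show the dots while the peer's signal is fresh."""
    def __init__(self) -> None:
        self.last_signal = 0.0

    def on_event(self, payload: bytes) -> None:
        event = json.loads(payload)
        if event.get("type") == "typing":
            self.last_signal = time.time()

    def dots_visible(self) -> bool:
        return (time.time() - self.last_signal) < TYPING_TTL

indicator = TypingIndicator()
indicator.on_event(typing_event("maddy"))
print(indicator.dots_visible())  # True: anticipation, then the variable reward
```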