
Dahlia Bloodrose

Resident
  • Posts: 8
  • Joined
  • Last visited

Reputation: 5 Neutral
  1. Because they have them at all? Because having them attached where they aren't relevant is pointless lag for others? Or because they have adult body parts attached but hidden in non-adult spaces? Are things like the Catwa Tears add-on a bad sign? I'm asking because I'm trying to refine my understanding of this etiquette. In RL, my non-visible body decorations are strictly private. I never considered that giving my avatar the same decorations would have a social impact. I know you can list another avatar's attachments, but I'd always thought about that in terms of people being curious about where to get an item. I'm enough of an introvert, and sufficiently cautious, that I'd never approach someone and just add-friend. But there are a bunch of SL social rules in here that I'd like to understand more deeply than "Don't do/be this."
  2. How does the Lelutka Axis face posing HUD work? It appears to use more bones than are available for animation. It's strictly for static poses, but perhaps that technique can be used for comparatively slow micro-expressions. Not what you really want, but still. On a tangent, it would be really nice if dancing actually matched the music. Not just the (almost) same BPM, but properly synchronized on 2 & 4 (at least for 4/4).
  3. My apologies for not making it clear I wasn't criticizing you for suggesting them. Their marketing is crappy, not you. I don't think you did or said anything wrong.
  4. Their stuff looks nice, but the description of the product is pretty hateful with respect to people and avatars that aren't traditionally masculine men and traditionally feminine women. I get that they have limited resources and that body shape variation is a huge challenge in SL animation, so I'm not salty about the limits on what they support. But they say "MF means MF, no gender theory here, a male is strong, behind, with flat feet, a female is cute, sensual, I won't adapt animations for a reversed use." What they support doesn't create a problem for me, as I used the ASA reference shape. But I'll be damned if I'll buy from someone who feels compelled to trash my friends (1L and SL) like that.
  5. I can't thank you enough for this answer. It brought me to a much better understanding of what's really going on with body mesh(es), skins, BoM, and appliers. I had misled myself about several things by overfocusing on the Maitreya HUD. That, and having a reasonable understanding of 3D graphics without recognizing that a platform in production for 20 years inevitably accumulates extra complexity. Dahlia
  6. I have a tattoo that I love, but it encroaches more than a little on the area used by Bento female bits. I have a Maitreya Lara body and have experimented with the bits made by Sensations and the bits made by Sessions & ASA. Apologies for being cutesy; I'm just trying to keep this far away from being unambiguously adult. The tattoo has both Maitreya BoM and Omega Appliers. Except for the part that overlaps, both appliers work fine. Is there a way to make either of them (or another one) pick up the part of the tattoo that overlaps? I've tried different orders of application (and applying to different layers in the Maitreya body), but I can't seem to make it work. Am I missing something?
  7. You could automate the machinima consent requirement in the recorder. It would know which avatars were present and request their affirmative consent before proceeding (or simply not record their avatars). It could also automatically verify that landowner/region permission was in place. This would provide substantially more control than screen-capture approaches offer. Linden could also build the recording side of this as a monthly service for real money. They've pretty much abdicated the high-quality tools side of things, so the playback function would have to live somewhere else. Also, you'd really want high-end GPUs. I hear you on the asset-ripper front, but isn't that happening already? With the viewer code being open source, it's just too easy to run a nefariously patched viewer. (A rough sketch of what the consent gate might look like is below, after this list.)
  8. What I'm proposing is not original. It's essentially how high-quality in-engine cut-scenes are created in many video game engines. The core lesson from those engines is to decouple capturing the data for rendering from rendering itself. It occurs to me that it should be possible to modify an existing Second Life viewer to be optimal for ultra-high-quality cinematic recording by recording the message stream from Linden for subsequent playback and rendering.

     In cinematic capture mode, the viewer would let you log in as a bot that automatically followed another avatar. The avatar could be a floating cinematic camera (or invisible, I suppose, but that's creepy). The key thing is that cinematic capture mode would not have to do any rendering at all. No processor cycles would be spent on anything other than capturing and logging the traffic required for rendering. You would still be subject to server-side and network lag (more on that later), but client lag could be eliminated entirely. Note: you'd have to block recording of voice chat message packets to avoid running afoul of wiretapping laws.

     In playback mode it would let you pick time slices and render them based on the captured stream. After-the-fact camera movement would be entirely possible (you wouldn't be locked into the original camera angles). The key thing here is that the render would not have to be done on a one-for-one time basis. You could specify unreasonably high rendering quality if you were willing to have it take 5 hours to render a 30-minute sequence. (A minimal sketch of the capture/playback split is at the end of this list.)

     As a later improvement, you could minimize the impact of network lag by automatically fixing up the recorded stream after the fact to deal with lagging mesh & texture data (scan forward into the stream to grab it). This would also help some with server lag. The capture component could even run on a Linux VM in AWS, so recordings made on private regions would have virtually no network lag and dropout. And it would avoid AWS egress fees for Linden, I might add. Conceivably, you could have a streaming mode that did rendering on a one-for-one time basis with a time delay (you always want a time delay).

     I would donate a significant amount of money to have this incorporated into one of the viable third-party viewers (one that has a low risk of abandonment). Standardization of the capture format could make this work across viewers, I imagine. Incidentally, rendering of captured sequences would be a great debugging tool for rendering issues.
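
Below is a minimal sketch, in Python, of the automated consent gate proposed in item 7. Nothing here comes from the actual Second Life viewer or server APIs; every name (Avatar, RecordingSession, consent_prompt, and so on) is a hypothetical stand-in used only to illustrate the flow: enumerate the avatars present, collect affirmative consent, verify landowner permission, and exclude anyone who declines.

```python
"""Sketch of an automated machinima-consent gate (all names are hypothetical)."""

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Avatar:
    avatar_id: str
    display_name: str


@dataclass
class RecordingSession:
    region: str
    landowner_permission: bool                 # would be checked against parcel/estate settings
    consent_prompt: Callable[[Avatar], bool]   # stand-in for an in-viewer consent dialog
    approved: list[Avatar] = field(default_factory=list)
    excluded: list[Avatar] = field(default_factory=list)

    def gather_consent(self, present: list[Avatar]) -> None:
        """Ask every avatar in the region; only affirmative answers are recorded."""
        for avatar in present:
            if self.consent_prompt(avatar):
                self.approved.append(avatar)
            else:
                self.excluded.append(avatar)   # this avatar simply would not be captured

    def may_record(self) -> bool:
        """Recording proceeds only with landowner permission and at least one consenting avatar."""
        return self.landowner_permission and bool(self.approved)


if __name__ == "__main__":
    # Simulated responses; a real gate would wait on asynchronous dialog replies.
    answers = {"dahlia": True, "visitor": False}
    session = RecordingSession(
        region="Example Region",
        landowner_permission=True,
        consent_prompt=lambda a: answers.get(a.avatar_id, False),
    )
    session.gather_consent([Avatar("dahlia", "Dahlia"), Avatar("visitor", "A Visitor")])
    print("record?", session.may_record(),
          "| excluded:", [a.display_name for a in session.excluded])
```

In a real viewer the prompt would be an asynchronous dialog rather than a synchronous callback; the point is only that the recorder, not the filmmaker, enforces the policy.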
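
And here is a similarly hedged sketch of the capture/playback split from item 8, again with invented names (SimMessage, capture, playback; the msg_type strings are illustrative, not real Linden message names). Capture mode does no rendering: it filters out voice packets and appends timestamped messages to a log. Playback mode reads back a chosen time slice so an offline renderer can consume it at whatever speed the desired quality demands.

```python
"""Sketch of decoupling message-stream capture from rendering (hypothetical names)."""

import json
from dataclasses import dataclass, asdict
from pathlib import Path
from typing import Iterable, Iterator

VOICE_MESSAGE_TYPES = {"VoiceChat"}   # never logged, per the wiretapping concern above


@dataclass
class SimMessage:
    timestamp: float   # seconds since capture start
    msg_type: str      # e.g. "ObjectUpdate", "AvatarAnimation" (illustrative only)
    payload: dict


def capture(messages: Iterable[SimMessage], log_path: Path) -> None:
    """Capture mode: no rendering at all, just append every non-voice message to a log."""
    with log_path.open("w") as log:
        for msg in messages:
            if msg.msg_type in VOICE_MESSAGE_TYPES:
                continue
            log.write(json.dumps(asdict(msg)) + "\n")


def playback(log_path: Path, start: float, end: float) -> Iterator[SimMessage]:
    """Playback mode: yield the messages for a chosen time slice.
    A renderer can consume these as slowly as it likes (5 hours for a 30-minute sequence)."""
    with log_path.open() as log:
        for line in log:
            record = json.loads(line)
            if start <= record["timestamp"] <= end:
                yield SimMessage(**record)


if __name__ == "__main__":
    fake_stream = [
        SimMessage(0.0, "ObjectUpdate", {"id": 1}),
        SimMessage(0.5, "VoiceChat", {"who": "someone"}),   # dropped at capture time
        SimMessage(1.2, "AvatarAnimation", {"id": 2}),
    ]
    path = Path("capture.log")
    capture(fake_stream, path)
    print([m.msg_type for m in playback(path, 0.0, 2.0)])
```

A real implementation would log the binary viewer/server traffic rather than JSON lines, but the decoupling is the same: nothing about playback has to run in real time.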