
Everything posted by Elrik Merlin

  1. The question to answer is how you would get the audio/video feed into the Gear VR unit. It essentially needs to be streamed to the phone, so it's not going to be as straightforward as a Rift connected directly to a graphics card. My expectation is that you would have to use a streaming system that could deliver the SL Viewer output to the phone. I wonder if Bright Canopy could do this, for example. They would obviously have to support the Gear VR format, but in theory at least you are using a web browser to see a remote Viewer screen, so I would have thought it would be possible - if Bright Canopy want to support it in that way.
  2. When we are shooting in-world, we use Skype for the audio on our shows. However, we also use SL Voice to make avatar lips appear to move in time with their speech. A significant change in the way "lip-flap" is derived from SL Voice levels may cause issues for machinimatographers who use this method to achieve "lip sync" with characters in-world. I would be most grateful if you could assist in bringing this to LL's attention.

     The amount of lip movement (and green "speech waves") you see on an avatar when you have avatar lip movement enabled has traditionally been derived from the pre-fade SL Voice audio level fed to the audio mixer. As a result, your audio mixer voice fader setting had no influence on the amount of lip movement you perceived in another avatar who was speaking via SL Voice. The intensity of green waves and the amount of "lip-flap" were determined solely by the Voice settings of the avatar speaking, not by the mixer settings of the person looking at them.

     In the latest Viewers (e.g. FS 4.5.1 or Second Life 3.6.11) this appears to have been changed so that the source is post-fade. This means that the amount of "lip-flap" you perceive in another avatar depends on your own voice slider setting. Importantly for machinimatographers, it also means that if you mute SL Voice in the audio mixer or reduce the fader level to zero, no lip-flap is visible at all. This change is ill-advised. Conceptually, the amount of lip-flap should be based on the source level and should not depend on the fader setting of the person watching - in RL, if you put your fingers in your ears you may not hear a person shouting so easily, but their mouth movements are unchanged. This video shows the difference between pre- and post-fade lip movement sourcing: http://www.screencast.com/t/T8VISU4cy3 (a minimal sketch of this pre-/post-fade signal flow also appears below these posts).

     Apart from being illogical, this is also potentially a major problem for machinima makers who use SL Voice for lip-flap but another audio comms medium, such as Skype, for audio. In this case SL Voice is used to speak but is not listened to; Skype audio is used instead. If the machinimatographer wishes to capture in-world sounds, the other audio channels in the mixer must be recorded - but if the Voice fader is muted or turned down to avoid picking up SL Voice audio, there is no lip-flap. A theoretical work-around is to set the machinimatographer's SL Voice output device to "No Device", but this does not fully work: there is leakage in the virtual audio mixer, producing low-level but audible SL Voice crosstalk, and because the latencies of SL Voice and Skype (for example) differ, this results in an echo.

     I would be most grateful if machinimatographers who are potentially affected could bring this to the attention of Linden Lab. The problem results from a change in the Vivox Voice Server 4.5.0009.17865, so ALL Viewers that use this code are affected. A change has to come from LL or from Vivox; I would therefore be grateful for your support in bringing this issue to LL's attention, and request that the amount of lip-flap once again be derived from the pre-fade audio level instead of the new post-fade setting. The following Firestorm JIRA entry refers: http://jira.phoenixviewer.com/browse/FIRE-12283 - there are also equivalent SL Viewer bug reports, but you probably cannot access them. Thank you! --Elrik Merlin
  3. I'm not sure how good "going mobile" is proving for Blue Mars. Anyone got any stats? They seem to have sunk out of sight by and large. I'm looking for immersion. Phones are fine for comms and perhaps for some limited types of gaming but a tiny handheld is not exactly immersive.
  4. I make it a rule never to take at face value a graph that doesn't have zero as the origin of the Y axis. Looks to me as if this so-called dramatic drop-off is little more than 5%, especially when you check the median values - see the rough numbers sketched below.
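
On the lip-flap change described in post 2 above: a minimal Python sketch of the pre- vs post-fade signal flow, assuming for illustration that lip-flap intensity is simply proportional to the metered level. All function and variable names here are hypothetical and do not come from the SL Viewer or Vivox code.

    # Minimal sketch of pre- vs post-fade lip-flap metering.
    # Hypothetical names only; this just illustrates the signal flow
    # described in post 2, not the actual Viewer/Vivox implementation.

    def lip_flap_pre_fade(source_level):
        """Lip-flap driven by the speaker's source level only (old behaviour)."""
        return min(1.0, max(0.0, source_level))

    def lip_flap_post_fade(source_level, listener_fader_gain):
        """Lip-flap driven by the level after the listener's voice fader (new behaviour)."""
        return min(1.0, max(0.0, source_level * listener_fader_gain))

    source = 0.8   # the speaker is talking at a healthy level
    print(lip_flap_pre_fade(source))        # 0.8 -> lips move regardless of the listener's fader
    print(lip_flap_post_fade(source, 0.0))  # 0.0 -> fader muted, so no lip movement at all

Under the post-fade scheme, muting the local Voice fader drives the metered level - and therefore the lip movement - to zero, which is exactly the machinima problem described above.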
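
And on the Y-axis point in the last post: a quick arithmetic sketch with made-up numbers (not taken from the graph in question) showing how a truncated Y axis can make a roughly 5% drop fill most of the plotted height.

    # Hypothetical numbers, purely to illustrate axis truncation.
    old_value = 1_050_000
    new_value = 1_000_000

    drop = (old_value - new_value) / old_value
    print(f"Actual drop: {drop:.1%}")                              # 4.8%

    # Share of the plotted height the drop occupies when the Y axis starts at zero...
    full_axis_share = (old_value - new_value) / (old_value - 0)
    # ...versus when the axis is truncated to run from 990,000 to 1,060,000.
    truncated_axis_share = (old_value - new_value) / (1_060_000 - 990_000)
    print(f"Share of full axis:      {full_axis_share:.0%}")       # 5%
    print(f"Share of truncated axis: {truncated_axis_share:.0%}")  # 71%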