Further Monigal

Resident

  1. Well, let me explain, and perhaps you can draw logical conclusions that line up with my own. Start with a basic stream: it is simply data being sent to a program, an interpreter (in this case your SL client). There is a degree of latency at the start of a broadcast that will cause desynchronization if, say, you pressed the "on" button on 100 computers at exactly the same time, because of a number of hardware, environmental, and software factors. I'm sure you're aware of this, but for explanation's sake I'll go through it step by step so there is no room for confusion. From that point forward there is no further desynchronization, provided conditions are ideal (i.e., no programmatic glitches that pause the stream momentarily and desync that individual machine from the other 100). That does happen, but the data itself won't be interpreted in a way that slows or speeds up the stream relative to how the other 100 machines interpret it. Yes, there are other possibilities that can desynchronize things, but they're more obscure, and listing them all is counterproductive.

So, returning to the setup: assuming all 100 computers started the stream at the same time, they'd all be perfectly in sync with each other, and for at least some period of time we can expect them to stay that way. Multiple types of information are transmitted with the stream, not just the song data but also technical metadata about the song, like its title and artist. That metadata could include something as simple as the keyboard character "X", and given the nature of digital media, the position at which the streamer (not the SL client, but the source of the stream) places that "X" in the stream is fixed; otherwise we encounter data corruption, and at that point we have a much bigger issue. So it is possible to tell the streamer to do something (for instance, manually updating the song information on the Enter keystroke), and as soon as you press Enter that information is carried along the data stream, picked up by an interpreter (your SL client), and displayed as text on the interpreter's interface. Using both together, it is possible to exploit this relationship for the purposes I've explained.

Knowing that the data will not be delayed, at the very least between the streamer and the interpreter (the SL client), we know that every one of our 100 model computers would do the same thing at the same time under ideal conditions (i.e., if we could start them all with no latency-causing factors). In a real-world setting, the computers would inevitably start their streams within microseconds (or sometimes seconds or more) of each other rather than simultaneously, but each machine would still interpret the data at the same rate (barring nuisance conditions like a player that slows the playback rate). For example: if Daniel eats one piece of pie every hour starting at 1:00 pm, and Sarah eats one piece of pie every hour starting at 1:05 pm, are they eating pie at the same rate? Yes; they may not be eating pie together, but they're still eating the same way. So even though the 100 computers aren't interpreting at the same point on the 24:00 clock, they're still interpreting at the same rate. So now we focus on the individual level.
Since every machine is, at the very least, interpreting at the same rate, all we have to focus on is maintaining low latency between an SL client's "listening" objects and that same client's stream (rather than all 100 computers' clients, streams, and objects interacting with each other), which is very simple. You simply tell the SL client that whenever it sees the "X", or whenever new information is presented, it should perform an action (like a color change), and since those responses are handled locally within the program, they're fast and indiscernibly accurate even if they're off by nanoseconds.

Now, to the animation examples and why they drift so badly: it's because there's no consistency in the handling of information. With the streamer, a computer HAS to interpret and play the data one way and include every piece of information at the moment the data stream demands it, and an object HAS to read the "X" when we tell it to read the "X", so everything happens in a rigid order that provides structure and no room for dilation. Gesture- and sound-effect-based animations, conversely, don't have this structure. Two examples of how they drift:

If you're playing the gesture: your SL client will say "start gesture" and "start sound clip" at the same time, but the animation clip might take a second longer than the sound clip to actually start, putting it off on your machine.

If you're receiving the gesture: your SL client first has to download the necessary data from the server, then say "start gesture" and "start sound clip" at the same time, and the faults of the first example inevitably return on top of that.

With this in mind, you can see why it drifts for EVERYONE. If SL had a built-in mechanism that could link information at a point in a song to the next part of an animation, you wouldn't have that problem. The two examples, fixed and revisited:

If you're playing the gesture: your SL client will say "start gesture" and "start sound clip" at the same time, but the gesture isn't allowed to begin until the first "action point" (remember the "X") is reached, and it will not move on to the next gesture part until the next action point in the song arrives, forcing the animations to stay with the audio.

If you're receiving the gesture: your SL client first downloads the necessary data from the server, then says "start gesture" and "start sound clip" at the same time, but under the new conditions we maintain synchronization. (There's a rough LSL sketch of this gating idea after this list.)

In other words, you're forcing your computer to dance to the music instead of blindly Elaine-Benes dancing! Music is just as inherently mathematical as everything else, and in fact more so. People don't start dancing, or keep dancing, to a part of a song that hasn't occurred yet; they stretch and slow down, or move more quickly, to stay in rhythm. In our case we don't have to worry about an animation moving at the wrong rate, but we still have to manage the relationship between the song and the action, and we correct it by not starting a movement section until the music commands it. If you're familiar with a 4/4 time signature, you can easily see how to break your animations down to work in tempo and apply the previous concept. Does this make sense?

Addendum: there were three replies between this one and the one I originally wanted to reply to, but this should address what all the replies have said so far.
Most notably, it is possible to send information to SL without the referenced function, and you are correct that your particular methodology would fail (i.e., having multiple different information requests read at different times), but mine guarantees synchronization at the important points.
  2. Well, the precision needed for dancing is greater than the precision needed for a simple moving light, so there's a bigger window for latency, and I'm not yet convinced the method is useless. (Addendum: your dances would drift much faster because of their complexity, but a light or simple movement has a low impact on frame rate, and more importantly, even if a previous pulse were out of sync, the server is constantly feeding the light new information, so it is constantly re-synced. Perhaps you could implement a similar system for animations that checks synchronization, comparing the current part of the song against the current part of the dance, and forces the animation ahead only at the proper points, or, for more fluidity but less precision, nudges the animation backwards or forwards whenever it loses sync. There's a sketch of this self-correcting idea after this list.) That being said, I was thinking about it and I've improved upon the concept a little. Since, as you said, there's a degree of latency between all viewers, the only option is to make each listening object respond at the moment the data is received. That means it is imperative that the detection system transmit its data coupled with the audio stream, and not over a separate system, guaranteeing that at least on each individual's viewer there will be synchronization; there may be degrees of global desynchronization, but since you can't see everyone else's machine, it wouldn't be noticeable. If I'm not mistaken, current methods already handle this: stream data is constantly being interpreted and executed, so this will already happen naturally in real time as long as we make sure the data is transmitted with the proper system. Let me know if I'm missing something, but I'm thoroughly convinced this is entirely doable.
  3. New idea, then: if my Shoutcast server can detect the kick drum or the bassline beat of a song, couldn't I use that built-in detection to output some form of data recognizable by SL (in this case most likely text) in response to the server-side software detecting the beat? For instance, for every beat detected, the server outputs the text "X", which can be read in local chat or on whatever channel I want, and in response the objects I want listen on that channel and react in some way to every occurrence of the "X": most suitably bouncing, moving, color-changing lights for a club. (A minimal listener along these lines is sketched after this list.) Point of clarification: I mean Shared Media audio, or perhaps streaming audio, but I find it odd that there are no LSL functions for voice when there are in-world voice gestures that activate when people speak. And yes, I am aware of latency issues, but it's a thought.
  4. Okay then, a second question, as I am unfamiliar with LSL's limitations: is it possible to design your own function for this purpose? Does LSL have access to the audio sources at all?
  5. Are there any LSL functions available for detecting audio? I am interested in designing something that reacts to audio in SL, but I haven't been able to find anything in the LSL library. The best I've been able to find are the lip-sync gestures that move in response to voice, but those only work with SL Voice. Any suggestions?
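Sketch 1, for the action-point gating in post 1: a minimal LSL sketch of a worn attachment that only advances the dance when a marker arrives, assuming some relay already says "X" in-world on a channel at each action point. The channel number and animation names are placeholders, and none of this is a built-in SL feature.

integer SYNC_CHANNEL = -73737;  // placeholder channel the markers arrive on
list SEGMENTS = ["dance_a", "dance_b", "dance_c"];  // placeholder animations in inventory
integer index = -1;             // -1 = nothing playing yet

default
{
    state_entry()
    {
        // Worn as an attachment so it may animate its owner
        llListen(SYNC_CHANNEL, "", NULL_KEY, "X");
        llRequestPermissions(llGetOwner(), PERMISSION_TRIGGER_ANIMATION);
    }

    listen(integer channel, string name, key id, string message)
    {
        // Markers are ignored until the animation permission is granted
        if (llGetPermissions() & PERMISSION_TRIGGER_ANIMATION)
        {
            // The marker, not a local clock, decides when to advance,
            // so the dance can never run ahead of the music
            if (index >= 0)
                llStopAnimation(llList2String(SEGMENTS, index));
            index = (index + 1) % llGetListLength(SEGMENTS);
            llStartAnimation(llList2String(SEGMENTS, index));
        }
    }
}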
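Sketch 2, for the self-correcting light in post 2: each incoming marker re-anchors a local timer, so even if one pulse drifts, the next "X" pulls the light back into step and error never accumulates. The channel is again a placeholder, and the half-beat dimming is just one way to fill the time between markers.

integer BEAT_CHANNEL = -73737;  // placeholder channel
float beatGap = 0.5;            // seconds between beats; re-measured live
float lastBeat = 0.0;

default
{
    state_entry()
    {
        llListen(BEAT_CHANNEL, "", NULL_KEY, "X");
        llResetTime();
    }

    listen(integer channel, string name, key id, string message)
    {
        float now = llGetTime();
        if (lastBeat > 0.0)
            beatGap = now - lastBeat;           // measure the actual tempo
        lastBeat = now;
        llSetColor(<1.0, 1.0, 1.0>, ALL_SIDES); // flash on the beat
        llSetTimerEvent(beatGap * 0.5);         // dim again at the half-beat
    }

    timer()
    {
        llSetTimerEvent(0.0);
        llSetColor(<0.2, 0.2, 0.2>, ALL_SIDES);
    }
}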
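Sketch 3, for the original club-light idea in post 3: the simplest possible listener. Whatever bridges the Shoutcast-side beat detection into in-world chat is the assumed part (LSL itself cannot hear the stream); it only has to say "X" on the channel, and every light flips to a random color per beat.

integer BEAT_CHANNEL = -73737;  // placeholder channel the bridge chats on

default
{
    state_entry()
    {
        // Filter on the message itself so only "X" wakes the script
        llListen(BEAT_CHANNEL, "", NULL_KEY, "X");
    }

    listen(integer channel, string name, key id, string message)
    {
        // One random color per detected beat
        llSetColor(<llFrand(1.0), llFrand(1.0), llFrand(1.0)>, ALL_SIDES);
    }
}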