Want: Cinematic Optimized Viewer


You are about to reply to a thread that has been inactive for 101 days.

Please take a moment to consider if this thread is worth bumping.


What I'm proposing is not original. It's essentially how high-quality in-engine cut-scenes are created in many video game engines. The core lesson from those engines is to decouple capturing the data needed for rendering from the rendering itself.

It occurs to me that it should be possible to modify an existing Second Life viewer to be optimal for ultra high quality cinematic style recording by recording the message stream from Linden for subsequent playback and rendering.  In cinematic capture mode, the viewer would let you log in as a bot that automatically followed another avatar.  The avatar could be a floating cinematic camera (or invisible, I suppose but that's creepy).  The key thing is that cinematic capture mode would not have to do any rendering, at all.  No processor cycles would be spent on anything other than capturing and logging the traffic required for rendering.  You would still be subject to server side and network lag (more on that later), but client lag could be eliminated entirely.  

Note: You'd have to block recording of voice chat message packets to avoid running afoul of wiretapping laws.
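To make the capture side concrete, here's a rough sketch in Python. Everything in it is invented for illustration (the message names, the payload shape); the real SL protocol uses different message types. The point is just that capture is pure logging, with voice traffic filtered out and no rendering work at all:

```python
import json
import time

# Hypothetical message names; the real SL protocol uses different types.
VOICE_MESSAGES = {"VoiceChatData", "ProvisionVoiceAccount"}

def capture(stream, log):
    """Timestamp and log every non-voice message; no rendering happens here.

    `stream` is any iterable of (message_name, payload) pairs.
    """
    for name, payload in stream:
        if name in VOICE_MESSAGES:
            continue  # never record voice traffic
        log.append({"t": time.monotonic(), "msg": name, "data": payload})
    return log

# Toy usage with a fake stream:
recording = capture([("ObjectUpdate", {"id": 1}),
                     ("VoiceChatData", {"id": 2})], [])
print(json.dumps([e["msg"] for e in recording]))  # prints ["ObjectUpdate"]
```

Since the loop does nothing but append dictionaries, the client-side cost per message is essentially the serialization, which is the whole appeal of the mode.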

In playback mode, it would let you pick time slices and render them based on the captured stream. After-the-fact camera movement would be entirely possible (you wouldn't be entirely locked into the original camera angles). The key thing here is that the render would not have to be done on a one-for-one time basis. You could specify unreasonably high rendering quality if you were willing to have it take 5 hours to render a 30-minute sequence.
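A sketch of the playback side, assuming a hypothetical recording format of timestamped entries: pick a time slice, then step through it on a simulated clock, so each frame can take as long as it needs to render:

```python
def slice_recording(log, t0, t1):
    """Return the entries whose capture timestamp falls in [t0, t1)."""
    return [e for e in log if t0 <= e["t"] < t1]

def render_offline(entries, frame_dt, render_frame):
    """Replay a slice on a simulated clock, decoupled from wall time.

    Each simulated frame advances by `frame_dt` seconds of recorded
    time; the `render_frame` callback may take arbitrarily long, which
    is what allows 5 hours of rendering for a 30-minute sequence.
    """
    if not entries:
        return 0
    t, end, i, frames = entries[0]["t"], entries[-1]["t"], 0, 0
    while t <= end:
        batch = []
        while i < len(entries) and entries[i]["t"] <= t:
            batch.append(entries[i])
            i += 1
        render_frame(t, batch)  # hypothetical renderer hook
        t += frame_dt
        frames += 1
    return frames
```

Because the clock is simulated, `frame_dt` also doubles as a quality dial: a smaller step means more rendered frames per second of recorded time, paid for in wall-clock render time rather than dropped frames.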

As a later improvement, you could minimize the impact of network lag by automatically fixing up the recorded stream after the fact to deal with lagging mesh & texture data (scan forward into the stream to grab it). This would also help some with server lag. The capture component could even be run on a Linux VM in AWS, so recordings made on private regions would have virtually no network lag and dropout. It would also avoid AWS egress fees for Linden, I might add.
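That fix-up pass could be a simple reordering over the captured stream: for every late-arriving asset payload, hoist it ahead of the first message that references it. A sketch, again with an invented schema ("AssetRef" names an asset, "AssetData" carries it; the real message types would differ):

```python
def backfill_assets(log):
    """Hoist each asset payload ahead of its first reference, so playback
    never stalls waiting on a texture or mesh that arrived late."""
    first_ref = {}
    for i, e in enumerate(log):
        if e["msg"] == "AssetRef":
            first_ref.setdefault(e["data"]["id"], i)

    def sort_key(pair):
        i, e = pair
        if e["msg"] == "AssetData":
            # Place the data just before its earliest reference.
            return (min(i, first_ref.get(e["data"]["id"], i)), 0)
        return (i, 1)

    return [e for _, e in sorted(enumerate(log), key=sort_key)]
```

Because the sort is stable and keyed on original positions, everything except the hoisted asset payloads stays in its recorded order.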

Conceivably, you could have a streaming mode that did rendering on a one for one time basis with a time delay (you always want a time delay).  

I would donate a significant amount of money to have this incorporated into one of the viable third party viewers (one that has a low risk of abandonment).  Standardization of the capture format could make this work across viewers, I imagine.  

Incidentally, rendering of captured sequences would be a great debugging tool for rendering issues.

 

 


This breaks shared experience rules and comes perilously close to a region / bulk asset ripper.

Suggest you file a feature request JIRA with LL; it's unlikely any reputable TPV would be prepared to put in the work required without LL signing off on the whole concept.


You could automate the machinima consent requirement in the recorder. It would know which avatars were present and request their affirmative consent before proceeding (or simply not record their avatars). It could automatically ensure that landowner/region permission was present. This would provide substantially more control than is possible with screen capture approaches.
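Consent automation is straightforward to sketch: partition the avatars present by an affirmative-consent prompt, then redact non-consenting avatars from the capture. All names here are made up, and a real implementation would also have to check the landowner's machinima policy:

```python
def consent_filter(present_avatars, ask_consent):
    """Split avatars into recordable and excluded via a consent callback."""
    allowed, excluded = set(), set()
    for av in present_avatars:
        (allowed if ask_consent(av) else excluded).add(av)
    return allowed, excluded

def redact(log, excluded):
    """Drop avatar-update messages for anyone who declined consent.

    Assumes a hypothetical 'AvatarUpdate' message carrying the avatar
    name in its payload; the real schema would differ.
    """
    return [e for e in log
            if not (e["msg"] == "AvatarUpdate" and e["data"]["av"] in excluded)]
```

Running `redact` at capture time, rather than at playback, is what makes the guarantee meaningful: declined avatars never enter the recording at all.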

Linden could also build the recording side of this as a monthly service for real money. They've pretty much abdicated the high quality tools side of things, so the playback function would have to be somewhere else. Also, you'd really want high-end GPUs.

I hear you on the asset ripper front, but isn't that happening already?   With the viewer code being open source, it's just too easy to run a nefariously patched viewer.

 


16 hours ago, Dahlia Bloodrose said:

In cinematic capture mode, the viewer would let you log in as a bot that automatically followed another avatar.  The avatar could be a floating cinematic camera (or invisible, I suppose but that's creepy).  The key thing is that cinematic capture mode would not have to do any rendering, at all.  No processor cycles would be spent on anything other than capturing and logging the traffic required for rendering.

.../...

In playback mode, it would let you pick time slices and render them based on the captured stream. After-the-fact camera movement would be entirely possible (you wouldn't be entirely locked into the original camera angles).

Problems would have to be solved first for the "capturing and logging the traffic required for rendering" step: you must understand that the viewer does not fetch (nor get sent) everything around your avatar, but only what it needs to render the scene; this depends on your avatar's position, the configured draw distance, and the camera angle and focus point. It means that your wish to allow camera movements on replay is not really feasible "as is" (though one could imagine continuously rotating the camera in capture mode to trigger "interest list" updates covering all surrounding objects, obtaining a 360° field of view).

You would also need to store permanently (as part of the captured data) all the textures, meshes and animation data, but also particle system parameters, environment data changes, etc. I'm not sure how this would be considered from a legal point of view, but it might be perceived as a form of content ripping/copy-botting (and could certainly be abused in such a way)...

I'm skeptical about the feasibility of such a project (not so much on the strictly technical aspect as on the legal one).

As for the benefits in "lag" terms on replay, you might be disappointed in the end, for the replay viewer would still need to decode textures, meshes and object data at the proper LOD, run animations, etc., meaning the main loop won't be any faster than what happens after you have everything downloaded and cached with a normal viewer (at which point the replay viewer and a normal viewer would spend exactly the same amount of time rendering the scene)...

Edited by Henri Beauchamp

17 hours ago, Coffee Pancake said:

This breaks shared experience rules and comes perilously close to a region / bulk asset ripper.

Suggest you file a feature request JIRA with LL; it's unlikely any reputable TPV would be prepared to put in the work required without LL signing off on the whole concept.

It doesn't, at least if implemented properly.

1 hour ago, Henri Beauchamp said:

Problems would have to be solved first for the "capturing and logging the traffic required for rendering" step: you must understand that the viewer does not fetch (nor get sent) everything around your avatar, but only what it needs to render the scene; this depends on your avatar's position, the configured draw distance, and the camera angle and focus point. It means that your wish to allow camera movements on replay is not really feasible "as is" (though one could imagine continuously rotating the camera in capture mode to trigger "interest list" updates covering all surrounding objects, obtaining a 360° field of view).

You would also need to store permanently (as part of the captured data) all the textures, meshes and animation data, but also particle system parameters, environment data changes, etc. I'm not sure how this would be considered from a legal point of view, but it might be perceived as a form of content ripping/copy-botting (and could certainly be abused in such a way)...

I'm skeptical about the feasibility of such a project (not so much on the strictly technical aspect as on the legal one).

As for the benefits in "lag" terms on replay, you might be disappointed in the end, for the replay viewer would still need to decode textures, meshes and object data at the proper LOD, run animations, etc., meaning the main loop won't be any faster than what happens after you have everything downloaded and cached with a normal viewer (at which point the replay viewer and a normal viewer would spend exactly the same amount of time rendering the scene)...

This specifically is very important.

You'd be recording a playback file, basically an action history of what happens when and where, with what and who, and then playing it back. As Henri said, you'd still be bound to what the viewer does for the most part, except there would be no more network traffic (which does improve performance a good bunch; see what happens when you Freeze Time). Since, to play it back in the viewer, you'd still have to first download and prepare all the needed content, there's at least some waiting time before you can even do anything.
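That waiting time can be modeled directly: before playback starts, walk the whole recording, collect every asset it references, and fetch them all up front. A sketch under the same caveat as before, that the message schema here ("AssetRef" carrying an asset id) is invented:

```python
def prepare_then_play(log, fetch_asset, play):
    """Fetch every asset the recording references before playback begins.

    This models the unavoidable up-front wait: nothing plays until the
    cache is fully populated. `fetch_asset` and `play` are hypothetical
    callbacks for the downloader and the player.
    """
    needed = {e["data"]["id"] for e in log if e["msg"] == "AssetRef"}
    cache = {aid: fetch_asset(aid) for aid in sorted(needed)}
    return play(log, cache)
```

The upside of paying the whole cost up front is that playback itself then runs entirely from the local cache, with no mid-scene texture or mesh pop-in.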

I'm not even sure you could manipulate server communication in a way that lets you request things you are not somehow physically part of, such as the region itself.

But putting the prerequisites aside and assuming you can get the entire scene prepared, you could have a viewer do such a playback, although it would look exactly the same as it would in any viewer, since all the packets you record are the same ones a client would receive. So if an object lags or jumps around, it will be recorded and played back like that, unless you could manipulate that... but then we're stepping into possibly abusive territory.

Edited by NiranV Dean

1 minute ago, NiranV Dean said:

It doesn't, at least if implemented properly.

If implemented to the degree needed to accomplish the OP's stated intent, it's the copybot king.

Rip entire region. Fetch all possible content. Store it all locally and then allow a disconnected viewer to rebuild and render the content.


When asked how I "bullet time" scenes I record in our dance club, I had to think a bit about what they were asking. I simply tell the viewer to play all animations at reduced speed; it's in the menus. This, of course, only affects avatars. It might affect animesh entities; I haven't tried. Slowing animations has no effect on physical or scripted objects, or on clouds or particles. Those items, if you want to slow them down so you seem to have extremely high frame rate rendering and recording, will require script adjustments and environment adjustments.

I know this is NOT what the OP requested, but it's 100% legal.

Edited by Ardy Lay

7 hours ago, Coffee Pancake said:

If implemented to the degree needed to accomplish the OP's stated intent, it's the copybot king.

Rip entire region. Fetch all possible content. Store it all locally and then allow a disconnected viewer to rebuild and render the content.

Nowhere do I see where she specified this. It was specified that the message stream (action/packet history) would be recorded for later playback; I don't see anything being said about locally saving things.


6 minutes ago, NiranV Dean said:

Nowhere do I see where she specified this. It was specified that the message stream (action/packet history) would be recorded for later playback; I don't see anything being said about locally saving things.

Read the OP's post again and find the intent: what is the OP hoping to achieve as an end goal, and what would that require? Ignore the technical suggestions.

The OP does not want a simple record log of data intercepts to record or play back SL exactly as it happened.


When the bot is logged in to SL and acting as a server conduit to a local viewer, it should pass muster.

There would be a delay (of some length) between what the local viewer user sees and what other users see on their screens using a standard viewer in real time, but I don't think that would be a deal breaker for creating cinematic views for things like vids, short films, etc. You would also be able to scroll back and locally edit the objects in the scene.


14 hours ago, Coffee Pancake said:

Read the OP's post again and find the intent: what is the OP hoping to achieve as an end goal, and what would that require? Ignore the technical suggestions.

The OP does not want a simple record log of data intercepts to record or play back SL exactly as it happened.

I think you are reading too much into the bad wording. You make it sound like this is solely and exclusively intended to be a copybot that functions in a grey area of LL's ToS and mangles definitions of things so much that LL doesn't see the copybot attempt.

Also, as I said before, whatever the secret intentions of the OP are, what the actual thing does in the end is still up to the implementation; as long as it does exactly what it's supposed to while not overstepping the ToS, I see no issue here. And as long as it were implemented as a simple action/timestamp history of what happened, there's no way it could be used for anything malicious that other viewers can't be used for either, since the viewer would just play back said history, requiring everything a normal viewer would do too (downloading and caching the assets, decoding textures, etc.).


