
Real VR inside Second Life with Google Cardboard and Trinus Gyre


Nimra Tomsen

You are about to reply to a thread that has been inactive for 2826 days.

Please take a moment to consider if this thread is worth bumping.

Recommended Posts

Hello all,

I would like to share a very interesting experience I had today: real VR immersion in SL, possible without any expensive and/or not-yet-available gadgets like the Oculus Rift.

1) Google Cardboard

A very simple cardboard device, sold via Amazon for a few bucks. You need an Android or iOS smartphone with a display of 4 to 5 inches; inserted into the cardboard, it plays the role of the display. Check out https://www.google.com/get/cardboard/ - this is only the construction principle, and you may build one as a DIY project, but some sellers offer the thing neatly prefabricated. The cardboard is a blast in itself, independent of SL. Check out the amazing apps available (my favourites: "Tuscany Dive" and "Lost in the Kismet"). The higher the display resolution, the better. I use it with a Sony Xperia SP at 1280 x 720 btw, which is perfectly fine.

2) Trinus Gyre

A little piece of software, which

a) streams the content of your PC monitor to your smartphone display (by USB or wifi),
b) emulates the SBS (side-by-side) effect necessary for the Cardboard (and all other HMD devices), and
c) feeds the position changes of the Cardboard, detected by the smartphone's sensors, back to your computer.

http://oddsheepgames.com/trinus/?download. Gyre is still somewhat experimental, but the effect is astounding when it works: you can use this not only for SL but for virtually all games and software, to get the video stream and some 3D effect into your Cardboard. I tried it with Anno 1404 - my cathedral never looked that good before!
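Part (c) of the list above, the head-tracking feedback, amounts to translating the phone's orientation changes into mouse movement on the PC. Here's a minimal Python sketch of that idea; Trinus Gyre's actual protocol and settings aren't documented here, so the function name and the sensitivity constant are purely illustrative:

```python
# Illustrative sketch only: how head-orientation deltas from a phone's
# sensors could map to mouse movement. Not Trinus Gyre's real code.

def orientation_to_mouse(yaw_deg_delta, pitch_deg_delta, pixels_per_degree=10.0):
    """Map a change in head yaw/pitch (degrees) to mouse deltas (pixels).

    Turning the head right moves the pointer right; looking up moves it
    up, which in screen coordinates is a negative y delta.
    """
    dx = yaw_deg_delta * pixels_per_degree
    dy = -pitch_deg_delta * pixels_per_degree
    return round(dx), round(dy)
```

In mouselook this is exactly why turning your head turns the avatar's view: the viewer only ever sees ordinary mouse input.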

3) Second Life

Just start the normal viewer in windowed mode and reduce the window to 640 x 720 pixels, or whatever your smartphone display can handle. As there are two separate pictures, you need half the smartphone's horizontal resolution on the screen.
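The halving rule above can be written down as a tiny helper; a hypothetical sketch, not part of any viewer or of Trinus:

```python
# The rule of thumb from the post: with side-by-side stereo, each eye
# gets half the phone's horizontal resolution, so the viewer window
# should be half the phone width. Illustrative helper only.

def sbs_window_size(phone_width, phone_height):
    """Return (width, height) to size the viewer window for SBS mode."""
    return phone_width // 2, phone_height

# e.g. a 1280 x 720 phone (as in the post) -> a 640 x 720 viewer window
```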

4) Result

Now you can hold the Cardboard to your eyes and the SL landscape will fill your whole field of view. Your mouse pointer moves according to the turns of your head. In normal mode this is not very impressive, but try mouselook! You can turn around on the spot, watching all 360 degrees of scenery. And if you keep a finger on the forward key on the keyboard, you can move. I enjoyed walking through some jungles immensely, looking left and right all the time.

Of course this is a cheap solution ($10 or so) and not comparable with high-end devices. But it still gives you a way into SL unknown before. I am not really a computer guy or VR specialist, just a normal user. But it seems that SL isn't really being used with the Cardboard yet. So hopefully this posting stirs some experiments - I'm curious about your experiences, opinions and proposals for further improvements.

Now I have to leave. Can't wait to visit the next beach in VR...

Have fun!

 

Armin

 


The 3D method is very old and it will only work with apps that are designed for it. What is needed is 2 views looking at the same thing but from slightly different angles - the width of the eyes apart. That's the only way to get the 3D effect.

SL only allows one view per avatar but I'm wondering if a viewer could be produced to give 2 views. I don't think so.


But isn't the "2 views looking at the same thing but from slightly different angles" exactly what the Oculus Rift does? And there's been a way to use SL with the Rift dev kits for a long time now. Google Cardboard is more or less the same as Oculus Rift, give or take, so I'm not surprised this works, and this is the first guide I've seen for setting it up (but surely there must be something that tells the viewer to display in side-by-side offset-cam mode, right?).

(Incidentally, for those who will surely object that the Rift is vastly different from Cardboard: Yeah, well, no. The HoloLens is different, maybe, although nothing like as different as what we can expect to emerge from labs in the next few years. That said, HoloLens is an important change in desktop computing and display technology: No more monitors, just imagine a display wherever convenient, as many as you like.)



Qie Niangao wrote:

But isn't the "2 views looking at the same thing but from slightly different angles" exactly what the Oculus Rift does? And there's been a way to use SL with the Rift dev kits for a long time now. Google Cardboard is more or less the same as Oculus Rift, give or take, so I'm not surprised this works, and this is the first guide I've seen for setting it up (but surely there must be something that tells the viewer to display in side-by-side offset-cam mode, right?).

(Incidentally, for those who will surely object that the Rift is vastly different from Cardboard: Yeah, well, no. The HoloLens is different, maybe, although nothing like as different as what we can expect to emerge from labs in the next few years. That said, HoloLens is an important change in desktop computing and display technology: No more monitors, just imagine a display wherever convenient, as many as you like.)

All but one of the 3D methods that I know about use the same 'slightly different views', whether or not the views are side-by-side. The one method that I've never understood was used in a short Dr. Who special, made for the purpose of demonstrating it. It worked on normal TV screens, and without the need of any special glasses. But it only worked when the camera was on the move, so they had the camera moving round and round a couple of people who stood still while chatting.

For SL, though, it's really a question of whether or not a current viewer can be modified to show two different views simultaneously. LL can do it, because they can cause the server to send two different views when a particular viewer is being used, but I don't think a third party would be able to achieve it. I may be wrong, but that's what I think.



Phil Deakins wrote:

For SL, though, it's really a question of whether or not a current viewer can be modified to show two different views simultaneously. LL can do it, because they can cause the server to send two different views when a particular viewer is being used, but I don't think a third party would be able to achieve it. I may be wrong, but that's what I think.

But the thing is, they've already done that with the LL viewer, quite a while ago (see this tag-specific archive of Nalates' blog for some of the history), and there's at least one TPV that specializes in it.

At the other extreme, the most exotic 3D I've ever seen was a volumetric display that one could walk around and see a (very low resolution) image generated with mirrors mounted on something akin to a loudspeaker dome. (This was at a SIGGRAPH convention decades ago.)


The blog said nothing, and was inaccurate in a couple of points that have nothing to do with SL, so I won't go into them.

The question about LL's Oculus Rift viewer is: is the code available to viewer modifiers? If it isn't, then I still don't think that TPVs can do it. Either way, it raises another couple of questions...

(1) Is LL's Oculus Rift-friendly viewer an actual 3D implementation, with two slightly different views? Or is it merely feeding the same view to each eye, so that turning the head with the Oculus Rift just changes which way the user is looking? If it's the latter, then it isn't 3D, of course.

(2) Will the cardboard thing work with LL's OR viewer? It won't, because there's nothing in it. It's merely a way of separating two views. Turning the head won't change the direction of view either.


As far as I understood the technology used by Gyre and the Cardboard, the 3D effect is emulated (or faked) by a slight differentiation of the two pictures for the eyes. The perceived depth of the visible world is limited, but the immersion effect still works well, first because it fills the complete view, and second because of the interaction of moving the head around.

As I said: This is not a NASA device, but it enables a quite different experience with SL.



Phil Deakins wrote:

(1) Is LL's Oculus Rift-friendly viewer an actual 3D implementation, with two slightly different views? Or is it merely feeding the same view to each eye, so that turning the head with the Oculus Rift just changes which way the user is looking? If it's the latter, then it isn't 3D, of course.

http://community.secondlife.com/t5/Featured-News/Using-the-Oculus-Rift-with-Second-Life/ba-p/2728824

 

"You can use the Oculus Rift for an immersive 3D experience anywhere and everywhere in Second Life."

 

https://www.oculus.com/rift/

"The Oculus Rift creates a stereoscopic 3D view with excellent depth, scale, and parallax. Unlike 3D on a television or in a movie, this is achieved by presenting unique and parallel images for each eye. This is the same way your eyes perceive images in the real world, creating a much more natural and comfortable experience."

 



Phil Deakins wrote:


For SL, though, it's really a question of whether or not a current viewer can be modified to show two different views simultaneously. LL can do it, because they can cause the server to send two different views when a particular viewer is being used, but I don't think a third party would be able to achieve it. I may be wrong, but that's what I think.

The server doesn't send "views", it only sends the locations and descriptions of the objects/avatars, how they're moving, what happens when one object bumps against another, etc. It's the viewer's job to take this information and draw what your avatar (or more accurately, the camera attached to your avatar) "sees" given this information. There's already enough information sent to allow a viewer to draw the same scene twice from slightly different angles/locations if the viewer itself has the smarts to know how to do this.
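Theresa's point can be sketched in a few lines: given the single camera position the server's data already implies, a viewer could derive two eye positions by shifting half the interpupillary distance along the camera's right vector and rendering the scene once per eye. A hedged illustration in plain Python (no SL viewer API involved; the 65 mm figure is the usual average):

```python
# Sketch of stereo rendering from one camera: derive two eye positions
# by offsetting along the camera's right vector. Vector math only.

IPD = 0.065  # metres; average human interpupillary distance (assumed)

def eye_positions(cam_pos, right_vec, ipd=IPD):
    """Return (left_eye, right_eye) positions for a stereo render pass."""
    half = ipd / 2.0
    left = tuple(c - half * r for c, r in zip(cam_pos, right_vec))
    right = tuple(c + half * r for c, r in zip(cam_pos, right_vec))
    return left, right
```

The viewer would then render the same scene data twice, once from each position, which is all "two views" requires.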



Theresa Tennyson wrote:


Phil Deakins wrote:


For SL, though, it's really a question of whether or not a current viewer can be modified to show two different views simultaneously. LL can do it, because they can cause the server to send two different views when a particular viewer is being used, but I don't think a third party would be able to achieve it. I may be wrong, but that's what I think.

The server doesn't send "views", it only sends the locations and descriptions of the objects/avatars, how they're moving, what happens when one object bumps against another, etc. It's the viewer's job to take this information and draw what your avatar (or more accurately, the camera attached to your avatar) "sees" given this information. There's already enough information sent to allow a viewer to draw the same scene twice from slightly different angles/locations if the viewer itself has the smarts to know how to do this.

Of course. That was stupid of me. It's not as though I didn't know. I did know, but my brain wasn't functioning properly when I wrote that.



Coby Foden wrote:


Phil Deakins wrote:

(1) Is LL's Oculus Rift-friendly viewer an actual 3D implementation, with two slightly different views? Or is it merely feeding the same view to each eye, so that turning the head with the Oculus Rift just changes which way the user is looking? If it's the latter, then it isn't 3D, of course.

 

"You can use the Oculus Rift for an immersive 3D experience anywhere and everywhere in Second Life."

 

"The Oculus Rift creates a stereoscopic 3D view with excellent depth, scale, and parallax. Unlike 3D on a television or in a movie, this is achieved by presenting unique and parallel images for each eye. This is the same way your eyes perceive images in the real world, creating a much more natural and comfortable experience."

The use of the word 'parallel' in that last paragraph is either wrong, or, if it's right, then it doesn't work the way the eyes work, because the eyes don't receive parallel images. If they did, we wouldn't see in 3D.

To see SL in 3D, the viewer must create the data for 2 different images (2 different camera positions), one sent to each eye.

It's academic, anyway, since it's been posted that the cardboard thing doesn't do that. No doubt it produces an interesting effect, which may be better than viewing with a monitor. Anything that changes the view with head movement is bound to be better - usually :)



Phil Deakins wrote:

To see SL in 3D, the viewer must create the data for 2 different images (2 different camera positions), one sent to each eye.

That's exactly what the viewer does; it creates two unique images - with 2 different camera positions - one for each eye.

The "parallel images" in Oculus Rift means that there are two unique images displayed on the Oculus Rift screen, side by side:

• On the left side of the screen there is an image for the left eye

• On the right side of the screen there is an image for the right eye

The lenses in the Oculus Rift will show:

• Only the left side of the screen for the left eye

• Only the right side of the screen for the right eye

So each eye will see only its unique image, and the brain will interpret this as a 3D view, just like looking at the real world.

 

(PS. Perhaps you are confusing these "parallel images" with each eye's line of sight when the eyes are focused on some object in the scene? Naturally the lines of sight are not parallel when the eyes are focused on an object.)
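The side-by-side layout described above is easy to picture in code: take a frame, treat the left half of each row as the left-eye image and the right half as the right-eye image. A toy sketch using nested lists in place of pixels:

```python
# Toy illustration of the side-by-side (SBS) layout: the lenses in the
# HMD effectively perform this split optically; here we do it in data.

def split_sbs(frame):
    """Split a side-by-side frame (list of rows) into (left, right) halves."""
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right
```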



Phil Deakins wrote:


Coby Foden wrote:


Phil Deakins wrote:

(1) Is LL's Oculus Rift-friendly viewer an actual 3D implementation, with two slightly different views? Or is it merely feeding the same view to each eye, so that turning the head with the Oculus Rift just changes which way the user is looking? If it's the latter, then it isn't 3D, of course.

 

"You can use the Oculus Rift for an immersive 3D experience anywhere and everywhere in Second Life."

 

"The Oculus Rift creates a stereoscopic 3D view with excellent depth, scale, and parallax. Unlike 3D on a television or in a movie, this is achieved by presenting unique and parallel images for each eye. This is the same way your eyes perceive images in the real world, creating a much more natural and comfortable experience."

The use of the word 'parallel' in that last paragraph is either wrong, or, if it's right, then it doesn't work the way the eyes work, because the eyes don't receive parallel images. If they did, we wouldn't see in 3D.

To see SL in 3D, the viewer must create the data for 2 different images (2 different camera positions), one sent to each eye.

It's academic, anyway, since it's been posted that the cardboard thing doesn't do that. No doubt it produces an interesting effect, which may be better than viewing with a monitor. Anything that changes the view with head movement is bound to be better - usually :)

I think you're misunderstanding these devices. The Oculus Rift and Google Cardboard do the same thing, which is to present stereoscopic images to the eyes. This is the area of the 3D display technology market that is getting crowded now; indeed Cardboard kits were originally handed out at a Google I/O conference as a kind of goof on Facebook for having spent so much to buy Oculus, compared to some Googler's twenty-percent project which does remarkably close to the same rather simple thing.

In contrast, head- and eye-tracking aren't necessary for VR applications, but become crucial for Augmented Reality, and that's why Microsoft's HoloLens is rather more of a breakthrough than the Rift / Cardboard stuff. Magic Leap is another current AR player, with Google fundage.


Until such time as avatar movements are replicated from body motion and objects can be interacted with via natural hand motion, my interest in these things, which are merely a 3D viewer of canned animations, remains low.

 

One can hardly say "here I am dancing in a virtual world" when you're sitting on a chair in the real world but have just clicked "play animation". Only when your RL movements are rendered virtually does this all start to stick together, and this won't happen on the existing SL platform.

 

 



Coby Foden wrote:


Phil Deakins wrote:

To see SL in 3D, the viewer must create the data for 2 different images (2 different camera positions), one sent to each eye.

That's exactly what the viewer does; it creates two unique images - with 2 different camera positions - one for each eye.

The "parallel images" in Oculus Rift means that there are two unique images displayed on the Oculus Rift screen, side by side:

• On the left side of the screen there is an image for the left eye

• On the right side of the screen there is an image for the right eye

The lenses in the Oculus Rift will show:

• Only the left side of the screen for the left eye

• Only the right side of the screen for the right eye

So each eye will see only its unique image, and the brain will interpret this as a 3D view, just like looking at the real world.

 

(PS. Perhaps you are confusing these "parallel images" with each eye's line of sight when the eyes are focused on some object in the scene? Naturally the lines of sight are not parallel when the eyes are focused on an object.)

I read it as meaning lines of sight, so, yes, it was confusing.


Just adding a bit more info on this matter... :matte-motes-big-grin:

There are different methods to create a 3D image pair, such as:

Parallel cameras: the camera axes are parallel
Toed-in cameras: the camera axes intersect at a certain distance
Off-axis cameras: the camera aperture is off-axis (not centered on the lens axis)

All the above have the lenses separated by the same distance as human eyes. With parallel cameras and off-axis cameras the image pair will have their scene planes aligned, so there is no distortion in the 3D image. With toed-in cameras the image pairs will not have their scene planes aligned with each other nor with the display screen; this results in keystone distortion and also "depth plane curvature" - flat planes can appear bowed in the centre toward the camera.

The easiest, most hassle-free method is to use parallel cameras. Just snap the photo; no post-processing is needed to correct distortions, as there are none.
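For the curious, the off-axis method can be made concrete with the standard asymmetric-frustum calculation used in off-axis stereo rendering: the image planes stay parallel (hence no keystone distortion), and only the frustum bounds shift per eye. A sketch with illustrative parameter names:

```python
# Standard asymmetric ("off-axis") frustum for one eye of a stereo pair.
# Parameter names are illustrative; the maths is the usual similar-
# triangles scaling of the screen edges down to the near plane.

def off_axis_frustum(eye_offset, screen_half_width, screen_dist, near):
    """Return (left, right) frustum bounds at the near plane for one eye.

    eye_offset: +half-IPD for the right eye, -half-IPD for the left eye.
    The projection plane stays parallel to the screen for both eyes,
    which is why this method avoids toed-in keystone distortion.
    """
    scale = near / screen_dist
    left = (-screen_half_width - eye_offset) * scale
    right = (screen_half_width - eye_offset) * scale
    return left, right
```

With `eye_offset = 0` this collapses to an ordinary symmetric frustum; a nonzero offset slides the frustum sideways without rotating it.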


It is possible to make a parallel-camera 3D image pair with the single SL camera. Take a snapshot, then move the camera 6 centimeters to the side (along the Y-axis) and take another snapshot. Put those two images side by side and there you have your 3D image pair. You can view the 3D result by the "crossed eye method".
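The recipe above (two snapshots about 6 cm apart, with the right-camera image placed on the left for crossed-eye viewing) can be sketched like this; nested lists stand in for the snapshots, and a real image library would work the same way:

```python
# Toy sketch of assembling a cross-eyed stereo pair from two snapshots.
# For cross-eyed free-viewing, the RIGHT-camera image goes on the LEFT.

def cross_eyed_pair(left_img, right_img):
    """Concatenate two equal-height images row by row, right image first."""
    if len(left_img) != len(right_img):
        raise ValueError("snapshots must have the same height")
    return [r + l for l, r in zip(left_img, right_img)]
```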

Help: How to Free-View the Stereo Pairs

Here's a sample image pair that I made. When you look at the resulting 3D image, the 3D effect is pretty convincing.

Stereo-image-pair-with-SL-Camera.jpg




PS. Even if the image pair is made with parallel cameras, you will actually look at it just like a real-life scene; you're not looking at it with your eyes parallel. Here is what happens:

Let's suppose that there is an object in the center of the scene. We place our camera so that the left lens is pointing directly at it (i.e. perpendicular). We snap our photo. In the left image the object will be in the center of the image; in the right image the object is a bit to the left of center.

When you look at the resulting 3D image and stare at the object, your left eye will look straight ahead, perpendicular to the image plane. However, when your right eye looks at the object, its line of sight is not perpendicular to the scene plane, because the object in the right-eye image is toward the left, not directly in front of your eye. So you're looking at the virtual 3D object just as you would look at an object in the real world.


I don't know what you mean by 'off-axis'.

Parallel lines of sight don't produce 3D images when viewed with the system we're talking about. To see it in 3D the eyes need to adjust, so that's not an option for regular 3D viewing. It might also have a negative effect on the eyes if done a lot. Many years ago, I was very interested in 3D and I played with the 'crossed eye' method by drawing lines on paper and adjusting my eyes to make them appear as one line sticking up out of the paper. Yes, it can be done, but it's not a valid 3D method.


Because there appears to be some confusion about "what is what", I'll try to give some explanations here.

1. Methods to create a 3D image pair.

The image pairs can be generated either by real physical cameras or by using virtual 3D software cameras. The same basic methods will be used in both cases.

A) Parallel axis camera method

3D-Camera_Parallel.jpg

B) Toed-in (aka converged) axis camera method

3D-Camera_Toed-in.jpg

C) Off-axis aperture camera method

3D-Camera_Off-axis.jpg


One more picture about the three above methods:

3D-Cameras_Off-Axis_Converge_Parallel.jpg

Many sources state that the off-axis aperture method produces the best 3D image pair. The toed-in method is the worst method of the above three because it causes distortion in the images (the distortion is due to the fact that the image planes are not aligned with each other nor with the image viewing plane).

The image pairs produced by the above methods can be 'free-viewed' (without any devices), and they can also be viewed with viewing devices. Both approaches will produce exactly the same 3D image. However, HMDs (Head Mounted Displays) have an advantage over other methods (monitors / TVs / printed pictures) because they can create a wider field of view.


2. How to 'free-view' (without any viewing devices) 3D image pairs?

There are two methods:

A) Cross-eyed viewing
B) Divergent (aka parallel) viewing

3D-Image-free-viewing-methods.jpg

• Cross-eyed viewing
-- in cross-eyed viewing the left eye looks at the right image and the right eye looks at the left image
-- cross-eyed viewing is practical for all image sizes, as most people can cross their eyes to a great degree

• Divergent viewing [sometimes also called "parallel viewing"]
-- in divergent viewing the left eye looks at the left image and the right eye looks at the right image
-- divergent viewing is mostly limited to viewing small images (up to the width of the eye separation, which averages 65 mm), because most people cannot diverge their eyes enough to view large images
-- divergent viewing is also called "parallel viewing" because the lines of sight are very close to parallel, especially so with very small images. However, the lines of sight are never exactly parallel, because any single detail in each eye's image is in a slightly different location (when those slightly different images are combined in the brain, the result is the 3D view).
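The size limit for divergent viewing is simple arithmetic: the centres of the two images must be no farther apart than the eye separation. An illustrative check (the 65 mm average is from the text; the function name is made up):

```python
# Illustrative rule of thumb: a side-by-side pair is comfortably
# free-viewable with the divergent method only if the image centres are
# no farther apart than the eye separation.

EYE_SEPARATION_MM = 65  # average, per the text

def divergent_viewable(image_width_mm, gap_mm=0):
    """True if a side-by-side pair fits within the divergent-viewing limit.

    Two adjacent images of width w separated by gap g have centres
    (w + g) apart.
    """
    centre_distance = image_width_mm + gap_mm
    return centre_distance <= EYE_SEPARATION_MM
```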


Below is a sample image for cross-eyed viewing.
If this image is viewed with the divergent viewing method, the 3D image will not look right.

3D-Cross-Eyed-viewing-image.jpg


And below is a sample image for divergent viewing.
If this image is viewed with the cross-eyed method, the 3D image will not look right.

3D-Divergent-viewing-image.jpg

 

Hopefully this brief introduction to 3D clarified the matters.

 


You are getting confused by your research, Coby. I don't particularly understand most of what you copied and pasted. I haven't really tried, because none of it is a simple explanation, and it's not really an explanation of what we're discussing. But when I got down to #2 - with the tops of the heads - I can see where your confusion comes from. The parallel view there refers to eyes looking at 2 images. That's the 'parallel lines of sight' that you mentioned, but that's not what we were talking about. We were talking about the taking of images - not the looking at images. Parallel lines of sight, when taking the images, cannot produce a 3D image.

There are very few 3D methods, none of which incorporate parallel lines of sight when taking the images - not even for cross-eyed viewing. You were confusing looking at images that were taken for 3D viewing with the taking of images for 3D viewing. It's the taking of images for 3D viewing that this discussion is about, and that can't be achieved with parallel views.



Phil Deakins wrote:

You are getting confused by your research, Coby. I don't particularly understand most of what you copied and pasted. I haven't really tried, because none of it is a simple explanation, and it's not really an explanation of what we're discussing. But when I got down to #2 - with the tops of the heads - I can see where your confusion comes from. The parallel view there refers to eyes looking at 2 images. That's the 'parallel lines of sight' that you mentioned, but that's not what we were talking about. We were talking about the taking of images - not the looking at images.
Parallel lines of sight, when taking the images, cannot produce a 3D image.

There are very few 3D methods, none of which incorporate parallel lines of sight when taking the images - not even for cross-eyed viewing. You were confusing looking at images that were taken for 3D viewing with the taking of images for 3D viewing.
It's the taking of images for 3D viewing that this discussion is about, and that can't be achieved with parallel views.

Incorrect. (not necessarily the entire post, but specifically the highlighted text)

Using parallel cameras (virtual or actual) will give differing images for each eye, which is what you need for 3D (or strictly speaking, stereoscopic) viewing. Whether that is the best set-up for creating a stereoscopic image is a matter of hot debate. Skewed cameras create distortion in the final image. Parallel cameras place the convergence point at infinity, which has an effect on where everything appears on the plane of depth - effectively 'in front of the screen'.

Parallel vs Converged (pdf doc)

Camera Converged or Parallel (cinema forum debate)

I'm not sure how this would translate to images rendered so close to the eyes, but as there is a debate as to whether skewed or parallel is better, it's patently obvious that a parallel camera set-up can and does create a stereoscopic image.


Lol Phil. Of course you don't understand, because you don't care to read carefully what has been clearly explained in the simplest possible terms, and with clarifying images. This is not "rocket science". It's not too difficult to understand for anybody who cares to understand.

First I showed how 3D image pairs can be produced by any of the three methods:

• Parallel axis camera method
• Toed-in (aka converged) camera method
• Off-axis aperture camera method

Then I showed how the produced images can be viewed without any device:

• Cross-eyed method
• Divergent (aka parallel) method

A special note (for Phil):
I definitely don't confuse the three image pair producing methods with the two image pair viewing methods.


Phil, you wrote:
"There are very few 3D methods, none of which incorporate parallel lines of sight when taking the images - not even for cross-eyed viewing. You were confusing looking at images that were taken for 3D viewing with the taking of images for 3D viewing. It's the taking of images for 3D viewing that this discussion is about, and that can't be achieved with parallel views."

As I said already earlier above, I will repeat here: I am not confusing "looking at images that were taken for 3D viewing, with the taking of images for 3D viewing" like you claim. That's your misinterpretation of what I have said. It's just your brain making you read something into my explanation that is not there. Your hasty reading without thought has played a trick on you.

What are we talking about here?

This discussion started with Google Cardboard, right? What does it do? It's a simple HMD which enables one to view image pairs, producing a full single view. If the image pair has two identical images, then the view in Google Cardboard is 2D. If the image pair is a 3D image pair (produced by any of the aforementioned three image-producing methods), then the view in Google Cardboard is 3D.

I have talked about producing the images and viewing the images. Maybe that has totally confused you, because you have been thinking that this is only about producing the images. Why limit the discussion only to producing (or "taking", in your terminology) the image pair? The Google Cardboard can without doubt view the images too. I haven't checked whether the Google Cardboard can produce 3D image pairs or not. Whether it can or cannot, it would change nothing about what I have explained here about producing and viewing the images.


PS.
Phil, you have one very strange statement:
"There are very few 3D methods, none of which incorporate parallel lines of sight when taking the images - not even for cross-eyed viewing."

I guess it's beneficial to clear that up too here, because the statement is very confusing.
(Again that "parallel lines of sight", which has nothing to do with producing the images.)

That implies that you think there is a separate image-pair-producing method for cross-eyed viewing and another for divergent [aka parallel] viewing. Perhaps a different method for producing an image pair for viewing in a virtual world, eh?

Well, the fact is that there is no special image-producing method which can be used only with one viewing method. All images created with any of the three (parallel, toed-in, off-axis) image-creation methods can be viewed with either of the two viewing methods. The image-creation methods are not viewing-method specific - they are universal as far as the viewing methods are concerned.


What determines which viewing method to use in "free-viewing" (i.e. without viewing devices)?


The location of each eye's image determines by which viewing method the images will be viewed, not the method by which the image pair was produced.

- Cross-eyed: camera's right-lens image on the left, camera's left-lens image on the right
  (thus cross-eyed viewing; each eye sees the image meant for it)

- Divergent (aka parallel): camera's left-lens image on the left, camera's right-lens image on the right
  (thus divergent viewing; each eye sees the image meant for it)
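Since the two layouts differ only in which half-image sits where, converting a pair from one free-viewing method to the other is just swapping the halves, as a toy sketch shows (nested lists standing in for pixels):

```python
# Swapping the halves of a side-by-side frame converts a divergent-
# viewing pair into a cross-eyed one and vice versa. Illustrative only.

def swap_layout(sbs_frame):
    """Swap the left and right halves of each row of a side-by-side frame."""
    half = len(sbs_frame[0]) // 2
    return [row[half:] + row[:half] for row in sbs_frame]
```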

3D-Image-free-viewing-methods.jpg

Another special note (for Phil):
- The above image has nothing to do with producing the image pair.
- It is only about 'free-viewing' the image pair.


[EDIT]
Corrected some spellings and added some colour coding for producing and viewing,
in an effort to make it easier to understand what (producing or viewing) something was said about.  :smileyvery-happy:

 



Kelli May wrote:


Phil Deakins wrote:

Parallel lines of sight, when taking the images, cannot produce a 3D image.

Incorrect. (not necessarily the entire post, but specifically the highlighted text)

...

Using parallel cameras (virtual or actual) will give differing images for each eye, which is what you need for 3D (or strictly speaking, stereoscopic) viewing.

Yes, a camera setup where the two lenses' axes are parallel with each other can definitely produce an image pair for 3D viewing. I wonder why Phil insists on debating against the generally established knowledge of how image pairs for 3D viewing are made? (Or it might be that he hasn't understood anything of what I have explained, and thus thinks that I am confused about "how stuff works".) :smileytongue:

I have taken such image-pair photos. I even made such a snapshot image pair with SL's single camera by moving the camera an eye distance (60 mm) sideways between the snapshots. I posted the image pair in one of my earlier posts. Each image is slightly different and thus produces a 3D view. All I can say is that it works as expected.

Phil's terminology is rather confusing, as he keeps talking about "parallel lines of sight" in reference to image taking. Physical cameras don't have "sight", nor do virtual cameras in 3D applications. It's not clear what he means by the word "sight" here. Is it a camera "sight"? Or is it human eyesight (which has nothing to do with producing the image pair)?

The confusing terminology makes me go like  :smileyindifferent:  :smileysurprised:  :smileyfrustrated:  :smileysad:  aaarghh...

 

{ :smileywink: }

 

