Meshroom Photogrammetry


Ulfilas Graves


Recommended Posts

Considering how many shots you're going to need for good results, if you're only working with a single camera (meaning you have to move between each shot), it might not be very feasible. For "live subjects" that might move, you'd ideally have multiple cameras set up to all take a picture at the same time, to minimize any motion. Of course, this would be expensive. The mesh generated by the program will also be very dense; it's definitely not a quick process to scan a head into Bento. You'd definitely need to remesh the whole head by hand afterwards.


16 hours ago, Wulfie Reanimator said:

The mesh generated by the program will also be very dense; it's definitely not a quick process to scan a head into Bento. You'd definitely need to remesh the whole head by hand afterwards.

This software generates highly dense meshes by default, but its node-based approach also includes an optimized low-poly mesh generation step. The process is included in their own official demo video.

Edited by OptimoMaximo

I would guess it should be reasonably easy to use photogrammetry to scan in a head shot. I would set up a camera on a tripod at head level. Then I would have a helper snap photos as I slowly spin around on a swiveling bar stool. Then do the same with the camera set higher on the tripod and pointing down toward your head. You should not have to be perfectly still, since the software can handle shots from various angles. I believe the difficult part will be lighting. You don't want any reflections or shiny spots on your face or head, because that often confuses photogrammetry software. The background should be as simple as possible, like a plain gray wall.

As said above, you will want to do some serious cleanup of the resulting mesh; it will be much too messy and complicated for SL as is. It will also be a lot of work getting the Bento details applied and working well. But it sounds like fun; I may do it myself.

There is a pretty good Meshroom tutorial on YouTube here: [embedded video]

~Sean Heavy


6 hours ago, Sean Heavy said:

Then I would have a helper snap photos as I slowly spin around on a swiveling bar stool.

That would change the lighting conditions on the surface, making the resulting texture a huge mess and probably confusing the scanner, since it uses surface details as anchor points. You should do it the other way around: you stay still on the stool, and the tripod gets placed at different positions around you on a marked circle on the floor, at head level, above, and below.

Edited by OptimoMaximo

7 hours ago, Sean Heavy said:

As said above, you will want to do some serious cleanup of the resulting mesh; it will be much too messy and complicated for SL as is.

As noted in my previous post, this software provides a node for a reduced version of the highly dense mesh it generates by default, and the method is shown in their own video tutorial on their official website. Sure, it won't be perfect for animation. Plus, it's a surface scan, so the mouth bag will need to be modeled from scratch, along with the tongue and teeth.
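If you'd rather do that first-pass reduction outside Meshroom's own node graph, Blender's Decimate modifier can be scripted as well. A minimal sketch, assuming the scan was imported as an object named "ScanMesh" (the name and the ratio are made up for illustration, and this is no substitute for real retopology):

    import bpy

    # Rough first-pass reduction of a dense photogrammetry mesh with
    # Blender's Decimate modifier (collapse mode).
    obj = bpy.data.objects["ScanMesh"]  # assumed object name
    mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
    mod.ratio = 0.01  # keep roughly 1% of the original triangles
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)
    print(len(obj.data.polygons), "faces after decimation")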

 


  • 1 year later...
15 minutes ago, IvyTechEngineer said:

Just tried Meshroom on a set of images I captured of a mountain I have in SL. It failed because "focal length could not be determined" for the pictures. The metadata is also missing. Maybe this info could be added to an SL image?

Take a look at this: https://sketchfab.com/3d-models/gaustatoppen-3071abf95b7044a78767da96fec3a073

Lovely isn't it?

Now check the triangle count: 1.9 megatris. That translates into about 100,000 LI, and as I told you in another thread, no LOD model reduction is going to make any difference to the land impact for items of this size.

Look at it in wireframe (click on "More model information" and select "Inspect the 3D model"). It looks almost solid.

This is what you get with photogrammetry. There is no way you can use a mesh like that in any game or virtual world without spending so much time retopoing it that it'd be quicker to make it from scratch.

The textures photogrammetry generates aren't useful either. The one in the link above isn't downloadable, so here's another one:

https://sketchfab.com/3d-models/medieval-arch-13th-century-rawscan-62b3b295fc164f32b190b1fb67b98a72

And here's its texture, scaled down from 8192x8192 to 512x512, but you see what I mean:

[attached image: the arch's texture map]

Bits and pieces of the texture surface broken up and scattered randomly across the UV map. Try to make that fit your painstakingly retopoed version of the mesh.

Sorry to disappoint everybody, but as amazing as photogrammetry is, there is no place for it in games or virtual worlds.

Amazingly I know of one occasion where somebody made the effort to convert a photogrammetry model into something that could have been used for a game.

Here's the original scan:

https://sketchfab.com/3d-models/betsey-trotwood-pub-e9c1824a218d4befaa6acef9a2a64ce0

and here's the low poly version:

https://sketchfab.com/3d-models/paper-model-betsey-trotwood-pub-254208d462be4ca384b7c8468a285fe0

I'm not sure if the guy deserves a medal or a room at a quiet institution where they keep their guests away from sharp objects for it. Possibly both. In any case, I'm willing to bet he'll never do it again.

Let this be a warning to us all.


7 hours ago, ChinRey said:

Bits and pieces of the texture surface broken up and scattered randomly across the UV map. Try to make that fit your painstakingly retopoed version of the mesh.

One word: Reprojection.

It can be used to easily bake one set of textures to another UV map. You can find tons of tutorials for it (specifically for photogram. models) and it's been used to convert textures between mesh bodies in SL.

Edited by Wulfie Reanimator
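For anyone wanting to try it, here is a minimal sketch of reprojection as a selected-to-active bake in Blender's Python console. It assumes two made-up object names: "ScanMesh" still carries the photogrammetry texture on its original messy UVs, and "RetopoMesh" has the clean UV map, with an image texture node selected in its material to receive the bake:

    import bpy

    scan = bpy.data.objects["ScanMesh"]      # source: original texture, messy UVs
    retopo = bpy.data.objects["RetopoMesh"]  # target: clean UVs, active image node

    bpy.context.scene.render.engine = 'CYCLES'  # baking requires Cycles
    scan.select_set(True)
    retopo.select_set(True)
    bpy.context.view_layer.objects.active = retopo

    # Bake the scan's surface color onto the retopo mesh's UV layout.
    bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'},
                        use_selected_to_active=True, cage_extrusion=0.02)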

8 hours ago, ChinRey said:

Sorry to disappoint everybody, but as amazing as photogrammetry is, there is no place for it in games or virtual worlds.

Photogrammetry has already been used to generate some of the assets for several games including titles like Call of Duty: Modern Warfare, Star Wars Battlefront and Red Dead Redemption 2.


2 hours ago, Wulfie Reanimator said:

One word: Reprojection.

 

1 hour ago, Fluffy Sharkfin said:

Photogrammetry has already been used to generate some of the assets for several games including titles like Call of Duty: Modern Warfare, Star Wars Battlefront and Red Dead Redemption 2.

Oh. Maybe it isn't as hard as I thought then. But even so, I had a look at one of the YT tutorials, this one: https://www.youtube.com/watch?v=Kfxol3rprA0

I don't know how representative it is but the end result there was still 100,000+ tris for a fairly simple object and the texture was still a mess - only a mess that fitted the slightly reduced version rather than the original. Still not something that could be used in a dynamic 3D environment.

I don't know about those games Fluffy mentioned, but I will hazard a guess that those photogrammetry objects were based on photos of isolated objects on a neutral background, with controlled light, and taken with top-notch professional (and maybe most important, precisely positioned) cameras. That would be a completely different story of course.


4 hours ago, ChinRey said:

Oh. Maybe it isn't as hard as I thought then. But even so, I had a look at one of the YT tutorials, this one: https://www.youtube.com/watch?v=Kfxol3rprA0

I don't know how representative it is but the end result there was still 100,000+ tris for a fairly simple object and the texture was still a mess - only a mess that fitted the slightly reduced version rather than the original. Still not something that could be used in a dynamic 3D environment.

It's hard to see, but he reduced it down to 7637 tris. The smart UV unwrap he did was even worse than the original UV map. 🙃 But hey, this was just for demonstration purposes I guess.

That whole model looked like a total mess. Poor Yoda. 😏 However, modeling this mess by hand would be a challenge, so I think the result he got is quite good, actually.


I think we're likely to see much wider use of photogrammetry in games and virtual worlds as time goes on. The technology required to take the source images and the software used to process them into 3D models are continually improving, and the use of extremely high poly models is already widely considered an essential part of the workflow for creating 3D assets for real-time environments, thanks to software like ZBrush and Marvelous Designer.

For example, here's a sculpt I created in 3D Coat a while back, which consists of 5,828,398 triangles:

[attached image: the 5.8 million triangle sculpt]

After a little retopology, the resulting low poly model (which was used as the highest LOD when uploading to SL) weighs in at just 2,663 triangles:

[attached image: the retopologized low poly model]

And here's the result once the details have been transferred over from the high poly mesh to the normal map of the retopologized model and some basic texturing has been applied:

[attached image: the final textured model]

 

Of course, the source mesh in this instance was a digital sculpture, but regardless of whether the high poly mesh originates from a sculpt, a cloth simulation, or photogrammetry, the process of retopologizing and projecting details from a high poly model onto an optimized version will be the same.
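That detail-transfer step can be sketched in Blender's Python API along the same lines as the reprojection bake earlier in the thread, just baking normals instead of color. "HighPoly" and "LowPoly" are assumed object names, and the low poly object is assumed to already have a material:

    import bpy

    # Create an image to receive the baked normal map.
    img = bpy.data.images.new("baked_normals", 2048, 2048)

    low = bpy.data.objects["LowPoly"]    # retopologized model, clean UVs
    high = bpy.data.objects["HighPoly"]  # multi-million triangle sculpt

    # Point the active image node of the low poly material at the target.
    nodes = low.active_material.node_tree.nodes
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = img
    nodes.active = tex

    bpy.context.scene.render.engine = 'CYCLES'
    high.select_set(True)
    low.select_set(True)
    bpy.context.view_layer.objects.active = low
    bpy.ops.object.bake(type='NORMAL', use_selected_to_active=True,
                        cage_extrusion=0.02)
    img.save_render("baked_normals.png")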


4 minutes ago, Fluffy Sharkfin said:

the process of retopologizing and projecting details from a high poly model onto an optimized version will be the same.

It's not quite the same, because a high poly model created on a computer, whether manually or automatically, tends to have clearly defined edge loops and lines, and often a well-defined pattern of quads rather than tris. Those are much easier for both a human and a computer algorithm to handle than the seemingly random triangle soup we typically get from photogrammetry. Most of the time you don't really need retopoing as such; just select and delete the superfluous edge loops and ninety percent of the job is done in seconds.
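In Blender terms, that "delete the superfluous edge loops" step is roughly the Un-Subdivide option of the decimate tools. A tiny sketch, assuming a dense but regular quad-grid sculpt imported as "QuadSculpt" (a made-up name); this is exactly the kind of mesh it works on, and photogrammetry triangle soup is exactly the kind it doesn't:

    import bpy

    obj = bpy.data.objects["QuadSculpt"]  # assumed: dense but regular quad grid
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    # Each iteration reverses roughly one subdivision level, removing
    # alternating edge loops across the grid.
    bpy.ops.mesh.unsubdivide(iterations=2)
    bpy.ops.object.mode_set(mode='OBJECT')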


5 hours ago, ChinRey said:

I don't know about those games Fluffy mentioned, but I will hazard a guess that those photogrammetry objects were based on photos of isolated objects on a neutral background, with controlled light, and taken with top-notch professional (and maybe most important, precisely positioned) cameras. That would be a completely different story of course.

Apparently all three of the games that I mentioned make use of Quixel's Megascans library.  According to this article 

This little-known company helped make Red Dead Redemption 2 the most realistic game ever

they use handheld scanners for smaller objects and drones for anything "larger than a bus". The fact that they're using specially designed hardware to capture the source images is probably why their results are far superior to those using a regular camera and some open-source software.


15 minutes ago, ChinRey said:

It's not quite the same, because a high poly model created on a computer, whether manually or automatically, tends to have clearly defined edge loops and lines, and often a well-defined pattern of quads rather than tris. Those are much easier for both a human and a computer algorithm to handle than the seemingly random triangle soup we typically get from photogrammetry. Most of the time you don't really need retopoing as such; just select and delete the superfluous edge loops and ninety percent of the job is done in seconds.

That's actually not the case when it comes to 3D Coat; when dealing with voxels, it creates geometry like this:

[attached image: 3D Coat voxel wireframe]

Edited by Fluffy Sharkfin

27 minutes ago, ChinRey said:

It's not quite the same, because a high poly model created on a computer, whether manually or automatically, tends to have clearly defined edge loops and lines, and often a well-defined pattern of quads rather than tris. Those are much easier for both a human and a computer algorithm to handle than the seemingly random triangle soup we typically get from photogrammetry. Most of the time you don't really need retopoing as such; just select and delete the superfluous edge loops and ninety percent of the job is done in seconds.

While a lot of programs do have a setting to export as quads instead, we know that how the quads are generated can dramatically change the form of the object.

On one hand you might get smooth curves, on the other you'll get sharp jaggies.

That aside, handling data as triangles is computationally cheaper, so most simulation (cloth/fluid) or raw-data-style programs (photogram.) will output triangles, and they can rarely be directly converted into quads without hand-fixing it afterwards.

Edited by Wulfie Reanimator
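As a quick illustration of why that conversion needs hand-fixing afterwards: Blender's tri-to-quad operator only merges pairs of triangles that stay under its angle thresholds, so the long thin triangles typical of scan output mostly survive it. A sketch with a made-up object name and illustrative threshold values:

    import bpy

    obj = bpy.data.objects["ScanMesh"]  # assumed name for an imported scan
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    # Thresholds are in radians; a pair of triangles is only merged into
    # a quad when the face angle and resulting shape stay under them.
    bpy.ops.mesh.tris_convert_to_quads(face_threshold=0.7, shape_threshold=0.7)
    bpy.ops.object.mode_set(mode='OBJECT')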

26 minutes ago, Wulfie Reanimator said:

While a lot of programs do have a setting to export as quads instead, we know that how the quads are generated can dramatically change the form of the object.

On one hand you might get smooth curves, on the other you'll get sharp jaggies.

That aside, handling data as triangles is computationally cheaper, so most simulation (cloth/fluid) or raw-data-style programs (photogram.) will output triangles, and they can rarely be directly converted into quads without hand-fixing it afterwards.

Yes, but I was talking about how you handle the mesh during the editing process. It has to be split into triangles eventually, of course.


3 minutes ago, ChinRey said:

Yes, but I was talking about how you handle the mesh during the editing process. It has to be split into triangles eventually, of course.

The retopology of a high poly model with millions of triangles really doesn't involve any deleting of superfluous edge loops; it's typically done by creating an entirely new model, using the high poly version as a reference/guide.

 


15 minutes ago, ChinRey said:

Yes, but I was talking about how you handle the mesh during the editing process. It has to be split into triangles eventually, of course.

When you said "a high poly model created on a computer, whether manually or automatically, tends to have clearly defined edge loops," I was thinking you meant a model that has come in from another program (Marvelous Designer) or is hand-sculpted (ZBrush/Blender).

I probably phrased my post weirdly, but I was explaining why most programs like that will export models as triangles by default (or as the only option) rather than as quads, like you said.

Edited by Wulfie Reanimator

2 hours ago, Fluffy Sharkfin said:

 

they use handheld scanners for smaller objects and drones for anything "larger than a bus". The fact that they're using specially designed hardware to capture the source images is probably why their results are far superior to those using a regular camera and some open-source software.

Actually, any studio that can afford Nuke can do that from a video. I participated in a production some time ago that used it to generate a mesh from the point cloud of a video of an object taken from all possible angles, and exported the mesh with its texture. They ended up using just the model for a shadow catcher material in a series of shots where the CG creature needed to cast its shadow onto that object.

Oh, and I forgot to mention that the mesh was also processed with a mesh reduction node, so it was feasible to use, and the UV was not a mess. It needed work, but it was usable regardless.

Edited by OptimoMaximo

Somewhere buried in this thread is a question about faking out Meshroom by using images captured in SL and adding fake photo metadata. I have watched a few videos on fixing photos, but I am not sure exactly what metadata is needed, or whether a screen-captured image can be converted into a "photo". Any suggestions?
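One thing worth trying: Meshroom reads the focal length from the image's EXIF tags, which a plain screen capture doesn't have, and the piexif Python library can stamp them onto a JPEG. This is only a sketch; the filename, make/model strings, and 35 mm focal length are all made up, and there's no guarantee this is everything Meshroom wants, but it targets the "focal length could not be determined" error quoted earlier:

    import piexif

    filename = "sl_capture_001.jpg"  # must be a JPEG; a plain PNG screenshot won't do here

    exif_dict = {
        "0th": {
            piexif.ImageIFD.Make: b"SecondLife",      # made-up make/model pair; tools fall
            piexif.ImageIFD.Model: b"Viewer Camera",  # back to defaults if it's unknown
        },
        "Exif": {
            piexif.ExifIFD.FocalLength: (35, 1),       # stored as a rational: 35/1 mm
            piexif.ExifIFD.FocalLengthIn35mmFilm: 35,  # some tools can use this directly
        },
    }
    piexif.insert(piexif.dump(exif_dict), filename)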

 


