Lexie Linden

Texture Optimization

29 posts in this topic

Here's a copy of the guide I wrote on texture memory, file formats, texture sizes, and how to choose the right size for the job. It was stickied at the top of the old 1.0 forums, and was subsequently published in Brian White's book, "Second Life: A Guide to Your Virtual World".  There doesn't seem to be a link to access that old archive anymore -- the current forum archive only shows stuff from the last version of the forums, not the one before that -- so I figured I'd repost it here.

There's obviously a lot more to say about optimization than just this, but the information in this post is THE place to start.  In order to be able to optimize at all, you have to know a bit about how image size and bit depth affect performance.  This post provides a nice, easy overview of the concepts involved.  It's a must-read for any budding texture artist.

 

File Size vs. Texture Memory

It's not uncommon for those new to texturing to assume they should use highly compressed formats like JPEG, out of the mistaken belief that keeping file sizes small will increase performance.  People usually come by this presumption through everyday experience with the internet, where it's easy to see that web pages load faster with smaller image files than with larger ones.  For the novice, expecting this same behavior to apply equally to graphics applications is logical, but it's incorrect.

In truth, this perceived correlation between file size and speed on the web is merely an illusion, which has nothing to do with graphics processing.  Where the internet is concerned, speed is primarily determined by the rate at which files can be sent from computer to computer.  Since smaller files have less information to deliver, of course they get delivered faster.

The speed at which graphical images are processed on screen actually has nothing whatsoever to do with file size.  Graphics processing is all about texture memory, not about storage space.

Texture memory is always determined by the number and depth of the actual pixels in the image, not the bits and bytes in which the file is stored.  While any given image's file size can vary depending on the format in which it is saved, its actual texture memory consumption will always be the same.  The number of pixels in an image times the number of bits in each pixel will always equal the amount of texture memory the image uses, no matter what.

Why that's important to the SL texture artist is pretty simple.  Knowing the rudimentary mathematical principles of how images affect performance enables you to optimize your textures so you can ensure your creations are as lag free as possible while also being as high in quality as they can be.  The key to success in any real time graphics application is always finding the right balance between detail and speed.  Make your textures too big, and you use too much memory, slowing down your frame rate (and everyone else's).  Make them too small, and while your system performance will be relatively good, your imagery might look terrible.

File size is not part of the equation in this context.  What's relevant to performance and visual quality are the actual images, not the files that store them on hard drives.  Graphics performance is affected not at all by file size, but by texture memory consumption.  Texture memory is determined by the number of pixels that make up each image, and the number of bits in each pixel.  That's it.

For more on bits per pixel, see the Transparency Guide, stickied at the top of the Second Life Texturing Tips forum.

 

How to Calculate Texture Memory

Determining how much texture memory an image will consume is fairly straightforward.  It's basically the total number of pixels in the image, multiplied by the number of bits in each pixel.

RGB color images without transparency have 24 bits per pixel, and those with transparency have 32 bits per pixel (see the Transparency Guide for more on this).  So, for example, if you've got a non-transparent color image that is 1024x1024 pixels, here's how the math would break down:

 

  • 1024x1024 = 1,048,576 total pixels
  • 1,048,576 x 24 bits in each pixel = 25,165,824 total bits in the image
  • 25,165,824 bits / 8 bits in every byte = 3,145,728 bytes, or precisely 3 megabytes

Pretty simple math.  A 1024x1024 image (sans transparency) will always use exactly 3 megabytes of texture memory.  That's regardless of whether or not the file is compressed for storage.  As far as the graphics card is concerned, an image is just a collection of pixels to be drawn, not a file to be saved.
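The arithmetic above can be wrapped in a few lines of Python, as a minimal sketch (the 24 and 32 bits-per-pixel figures come straight from the guide):

```python
def texture_memory_bytes(width, height, has_alpha=False):
    """Pixels x bits-per-pixel / 8: 24 bpp for RGB, 32 bpp with an alpha channel."""
    bits_per_pixel = 32 if has_alpha else 24
    return width * height * bits_per_pixel // 8

print(texture_memory_bytes(1024, 1024))        # 3145728 -> exactly 3 MB
print(texture_memory_bytes(1024, 1024, True))  # 4194304 -> 4 MB with alpha
```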

Just for informational purposes, here's a quick breakdown of all the texture sizes SL allows, and their corresponding texture memory requirements:

  Texture Size    24-bit (no alpha)    32-bit (with alpha)
  32x32           3 KB                 4 KB
  64x64           12 KB                16 KB
  128x128         48 KB                64 KB
  256x256         192 KB               256 KB
  512x512         768 KB               1 MB
  1024x1024       3 MB                 4 MB

Notice how much memory the larger textures demand.  It doesn't take all that many to overwhelm a 256MB or 128MB video card.  The biggest reason SL operates as slowly as it does is poor texture management on the part of resident content creators.  Many people use textures that are simply way too big, and as a result, video cards choke.

 

The average video card can only process a few hundred megabytes' worth of textures at a time.  Professional game artists are well aware of this, and so they make sure to optimize all their textures to keep them as small as possible.  SL, as a mostly amateur-created environment, does not tend to benefit from the same professional wisdom, and as a result, an average busy scene in SL can have literally gigabytes' worth of textures on display.  Obviously, that's not a formula for effective real-time performance.

For everyone's sake, always keep all textures as small as they can be.  I usually suggest as a rule of thumb that about 80% of textures should be 256x256 or smaller, about 15% should be 512x512, and about 5% should be 1024x1024.  SL is extremely good at displaying small textures on objects at full screen size, better than just about any other program I've ever seen, in fact.  It's not often that there's a legitimate reason to go much larger than 256x256.
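As a rough sketch of why that budget matters, here's the same pixel-count arithmetic applied to a whole scene of square 24-bit textures. The 80/15/5 split is just the rule of thumb above applied to a hypothetical 100 textures:

```python
def scene_texture_megabytes(counts):
    """Total texture memory in MB for square 24-bit textures,
    given a mapping of {edge_size: how_many}."""
    total_bytes = sum(size * size * 3 * n for size, n in counts.items())
    return total_bytes / (1024 * 1024)

# 100 textures split per the 80/15/5 rule of thumb...
mixed = scene_texture_megabytes({256: 80, 512: 15, 1024: 5})
# ...versus the same 100 textures all made 1024x1024
all_1024 = scene_texture_megabytes({1024: 100})
print(mixed)      # 41.25 MB
print(all_1024)   # 300.0 MB -- more than an entire 256MB card can hold
```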

 

Choosing the Best Texture Size For the Job

The two most important factors in determining what size to make a texture are how much screen real estate that texture is likely to occupy, and how much fine detail it really needs relative to its size.

For example, if you're doing a life size replica of the Sistine Chapel, with giant ceiling murals that are likely to fill the entire screen, and with lots of fine details that people are likely to zoom in on and study, go with 1024's by all means.  However, for the parts that aren't likely to fill much of the screen, like your little donation box next to the front door that no one's gonna look at, use something MUCH smaller.

Of course, just because something will fill the screen doesn't automatically mean it demands a large texture.  A brick wall, for example, is just a repeating pattern.  The pattern itself doesn't need to be very big to be effective.  You could paint just a few bricks in exquisite detail at maybe 128x128, and then repeat the texture across the wall surface many times via the repeat and offset settings inside SL. 

If the wall needs other details embedded into it like windows, doorways, creeping vines, etc, then it's no longer just a repeating pattern; it's a whole painting.  In that case, you'll need to go larger with the texture to fit all those things in.

What you want to avoid is doing things like slapping a 1024x1024 on a little 2-word sign that no one's ever gonna zoom in on.  For that, something as small as a 64x64 would probably be plenty.  Always make every texture only as large as it needs to be, not one pixel more.

Again, it's all about finding the optimum balance between texture memory and texture detail.  That means using good, sound judgment.  Choose your texture sizes carefully.  Make appropriate decisions.

 

Usable Source File Formats For SL Textures

SL allows you to use any of four different image formats as your source files for uploading: TGA, BMP, JPEG, and PNG.  Here's a bit of brief info on each:

TGA - TARGA File Format

Advantages:

  • High quality
  • Entirely lossless
  • Supports transparency
  • Entirely predictable file size
  • Simple bitmap formatting is easy for all computers to read
  • Industry standard format for textures, also used extensively in video applications

Disadvantages:

  • Large file size
  • Not readable by low-end graphics programs or by programs not intended for serious graphics work (you can't view a TGA in Windows Picture Viewer, for example)
  • Not well suited for printing

History:

  • Invented in 1984 by Truevision for TARGA graphics cards
  • Upgraded to its current version in 1989
  • Stands for Truevision Advanced Raster Graphics Adapter
  • Was the first file format to support truecolor on IBM-compatible PCs

 

BMP -  Windows Bitmap Format

Advantages:

  • High quality
  • Entirely lossless
  • Entirely predictable file size
  • Simple bitmap formatting is easy for all computers to read

Disadvantages:                

  • Large file size
  • Does not support transparency
  • Not well suited for printing
  • Not a common format of choice in the graphics industry

History:

  • Developed in the 1980s by Microsoft as an image standard for the Windows operating system
  • Today, it's used for simple things like desktop wallpaper images, and not much else

 

JPEG - Joint Photographic Experts Group Format

Advantages:

  • Small file size
  • Viewable in almost all programs

Disadvantages:                 

  • Lossy compression
  • Image quality degrades every time it's saved
  • Does not support transparency
  • Not well suited for printing

History:                   

  • Developed by the Joint Photographic Experts Group, which was founded in 1986; the standard itself was released in 1992
  • Today, it's the most commonly used image format on the web, the last place where file size is still more important than image quality.
  • It's also commonly used in digital cameras, but that's changing fast.

 

PNG – Portable Network Graphics Format

Advantages:

  • Small file size
  • Lossless compression
  • Supports transparency

Disadvantages:                  

  • While arguably superior, PNG is not as widely supported as JPEG or GIF
  • Not well suited for printing

History:                   

  • Invented in 1995
  • Originally intended as a license-free, visually superior alternative to GIF
  • PNG is sometimes interpreted to stand for the recursive “PNG’s not GIF”

 

Note, when you upload an image to SL, it gets stored on the server in JPEG2000 format, an optionally lossless type of compressed file.  Your original source image file never actually leaves your own hard drive.  When you hit the Upload button, SL makes a JPEG2000 copy of your image, and it is that copy that gets uploaded.

I won't bother posting much information about JPEG2000 itself, since SL's implementation of it is outside our control as users.  If you're curious, you can read all about it at http://www.jpeg.org/jpeg2000/index.

For best results in SL, I recommend always using TGA as your source file format.  It's long been an industry standard for texturing and other on-screen graphics work.

I recommend never using JPEG for texturing.  It's great for web pages, but it's not well suited for 3D work.  Since the SL servers will store your images as JPEG2000 anyway, there's no need to be concerned about file size, which nullifies JPEG's one and only real advantage, leaving you with nothing but the disadvantages.  It's a lossy, low-quality format.

Also, since SL's implementation of JPEG2000 is a bit lossy, using a JPEG as your source image just compounds the problem.  When you source your upload from a lossless TGA, you only compress once, and only lose quality once.  When you source from a lossy JPEG, you compress twice, and lose quality twice.  It's akin to the "copy of a copy" effect.

If you're a web designer, use JPEG all day long, by all means, but if you're a 3D texture artist, steer clear of it.  For texturing work, like I always say, use TGA, every day, TGA all the way.



Excellent advice here. I would add a few points. I constantly run into new builders who go with the rule "1024 is bad, go with 512," and then throw caution to the wind and slap a new 512 texture on every prim, sometimes a new 512 texture on every face of every prim. We know a 1024 texture uses 4 times the memory of a 512, *but* 8 unnecessary 512 textures use twice the memory of a single 1024.
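A quick sketch of the arithmetic behind that point, using the bytes-per-texture math from the guide above:

```python
def mem_bytes(size):
    """Memory of one square 24-bit texture: size^2 pixels x 3 bytes."""
    return size * size * 3

one_1024 = mem_bytes(1024)       # 3,145,728 bytes (3 MB)
eight_512 = 8 * mem_bytes(512)   # 6,291,456 bytes: twice the single 1024
four_512 = 4 * mem_bytes(512)    # 3,145,728 bytes: the break-even point
print(eight_512 / one_1024)      # 2.0
```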

Personally, I believe a combination of video card RAM overload and JPEG2000 decoding problems deserves most of the blame for images not appearing, rather than bandwidth, since SL is not a real-time scene loader in the sense of, say, online multiplayer gaming. We are all used to just standing there while things 'rez'. JPEG2000 is computationally heavy (the price you pay for its good stuff), and sometimes when taking its second or third pass at resolving a texture (progressive transmission?), it just plain gets stuck.

Our builds at Empyreal Dreams make heavy use, as a percentage of the build, of 1024 textures. We get away with it because of one simple fact: we block all view outside of the builds. This means your graphics card only has to deal with the build itself and the avatars in it. People should think carefully, when applying textures to prims, about where those prims are going to live. If the new art gallery you just built is overlooking the virtual hanging gardens of Babylon, your poor graphics card has to chew on your prims and the hanging gardens at the same time. Anyone can test this easily: TP to a sim, fly high up to the edge of the map, then drop down to ground level while facing outward, away from the sim. Now turn around at ground level and face in, and listen to your computer go nuts trying to chew through all the data.

The best advice I give is this: imagine all the objects you see are real but made of plywood, and you are the set decorator on a movie. You have to paint or wallpaper every wooden surface. Which is going to be the bigger pain to do, a scene in a single room, or one involving an entire street half a mile long? Be wary of long views; your graphics card has to eat everything in them, up to the draw distance limit. Even if your draw distance is at the average 128m, at ground level that is still a lot of prims to process.


Wow, Chosen, great guide.

 

I am curious about what I'd read some time ago: that using four 512's combined in a 1024 (as an example) makes for quicker loading because there is only one texture to read, rather than four.  Is this accurate?


 


Eidolon Aeon wrote:

I am curious about what I'd read some time ago: that using four 512's combined in a 1024 (as an example) makes for quicker loading because there is only one texture to read, rather than four.  Is this accurate?

 

Yes, and no. 

The yes part is that if you have four 512x512's combined on a single 1024x1024 canvas, you'll cut down on network load, because you only have to request one asset instead of four.  This may or may not translate to faster load time, though, since there's usually no particular rhyme or reason to the order in which things load. The only thing guaranteed is that when all four panels are part of the same texture, then all four will load at the same time as each other (since they're really all just one).  If the 1024 happens to be the first thing to load in the scene, then all four of its 512x512 panels will rez right away.  But if the 1024 happens to be the last thing to load, then those four panels will all seem to load slowly.

The no part is that it doesn't actually have to do with "reading the texture", in the sense that you were probably thinking.  The main benefit, again, is cutting down on network overhead.  That doesn't really speak to how the graphical data is "read" after the image file has been downloaded.

Bottom line, it's generally best practice to combine textures wherever and whenever you can.  Say you've got eight different 256x256 textures, and two 512x512's that are all going to be in the same build.  In that case, it makes sense to put all of them onto a single 1024x1024.  One asset is way more efficient than ten.
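As a sketch of the bookkeeping involved in displaying one panel of a combined sheet, here's how the repeat and offset values for a cell of a square atlas might be computed, assuming SL-style centered texture offsets with rows counted from the top. The function name and conventions here are illustrative, not an SL API:

```python
def atlas_cell(n, row, col):
    """Repeat and offset values to display one cell of an n x n texture
    atlas on a face, assuming centered offsets and row 0 at the top."""
    repeat = 1.0 / n
    offset_u = (col + 0.5) / n - 0.5
    offset_v = 0.5 - (row + 0.5) / n
    return repeat, offset_u, offset_v

# Top-left panel of four 512s packed onto one 1024:
print(atlas_cell(2, 0, 0))   # (0.5, -0.25, 0.25)
```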

But there are cases in which it might be best to keep them separated.  If you want to be able to swap textures in and out with relative ease, or if you need a texture to be able to repeat more than once across a surface, then having everything combined could be problematic.

So, as with anything else, weigh all the options, and make the best decision you can for each case.


My most important observation is that it's usually worth the effort of cropping natural textures to dimensions that SL actually uses for textures.

If a stone surface image (for example) is 800 on one side, I'll tend to cut it down to 768, since SL is just going to squish the pixels into each other anyway.

Usually there's some corner of the picture that doesn't contribute much value to the total image, so cropping the edges that converge on that corner down to 1024, 768, 512, 384, etc. is often an intrinsic improvement, on top of mitigating data drift.


 


Josh Susanto wrote:

...768 ..384...


Just to be clear, so nobody gets confused, 768 and 384 are not directly usable texture sizes.  Only powers of two can be utilized in SL.  If you upload a 768, SL will downsize it to 512, and if you upload a 384, SL will downsize it to 256, the nearest power of two down in each case.  1.5 times a power of two is not a power of two.

Photoshop and other similar programs will almost always do a better job of resizing imagery than the SL uploader will.  For best results, ALWAYS size your images properly to a power of two, prior to upload.
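A minimal sketch of that pre-sizing step, assuming the round-down uploader behavior described in this thread; snapping to the genuinely nearest power of two yourself avoids surprises:

```python
def snap_pow2(n, max_size=1024):
    """Nearest power-of-two dimension, clamped to SL's 1024 maximum.

    Note: per this thread, SL's own uploader always rounds DOWN
    (768 -> 512); resizing yourself lets you pick the genuinely
    nearest size instead.  Ties here round down to match.
    """
    p = 1
    while p * 2 <= min(n, max_size):  # largest power of two <= n
        p *= 2
    lower, upper = p, min(p * 2, max_size)
    return upper if (n - lower) > (upper - n) else lower

print(snap_pow2(800))   # 1024 (288 pixels from 512, only 224 from 1024)
print(snap_pow2(768))   # 512  (equidistant; ties round down)
```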

I never ever want to just let SL "squish pixels into each other".  I want to control how my textures turn out.


I find I have real issues with blurriness if I upload anything other than square images to SL, even if I pre-squish them in Photoshop to a power of two.  Is there something I could be doing better, or is it just best to work with square images?


Instead of pre-"squishing", pre-expand... it'll up the pixel density for the final product, giving it more detail.

ETA:
The point being that you lose information when you downsize an image, but when stretching it, you are actually adding information (although most of the addition will be lost visually on your object... of course, if you are working with huge images to begin with, perhaps choosing the other dimension for working to the closest ratio for downsizing may help).


>Just to be clear, so nobody gets confused, 768 and 384 are not directly usable texture sizes.

I tested that.

Files saved at 768x768 actually look clearer than files reduced to 512x512.

Also, they load back out at 768 with more of the original data left to manipulate.

People have also told me it's stupid to load sculpt maps as 128, but I tell them they can always shrink it to 32 and then enlarge back to 64 if that's something they think is really necessary to optimize.

Otherwise, I can always load my 128's back out and cut them into several smaller sculpts, etc.

It's a lot easier to subtract excess data than to replace lost data.


 


Josh Susanto wrote:

Files saved at 768x768 actually look clearer than files reduced to 512x512.

Then you are using the wrong method to reduce them... the default SL method (pixel resize, IIRC) can do OK for some images, but in general a bilinear or bicubic resample (or fractal resample) will do far better for most images.

 

 


Josh Susanto wrote:

Also, they load back out at 768 with more of the original data left to manipulate.

Not only untrue, but not physically possible... they export at 512 (what SL shrunk them to on upload), with 512^2 pixels' worth of data at 24 bits of color data each, plus 8 bits of transparency data (if an alpha channel was included).

 


 


Void Singer wrote:

Instead of pre-"squishing", pre-expand... it'll up the pixel density for the final product, giving it more detail.

ETA:

The point being that you lose information when you downsize an image, but when stretching it, you are actually adding information (although most of the addition will be lost visually on your object... of course, if you are working with huge images to begin with, perhaps choosing the other dimension for working to the closest ratio for downsizing may help).

Very much agreed. Never squish your texture, expand it.

 

 


Checking again, it seems you're correct.

Temp images on Imprudence load as 768 and load out as 768; not so for permanent SL data.

This doesn't explain, though, why the images uploaded to SL as 768 should be any clearer than those uploaded as 512.

If an image is already 512, there shouldn't be anything to compress... am I right?


Not quite.  Regardless of what file format you use on your local machine (TGA, PNG, BMP, JPG), your file is converted to JPEG2000 format when it's uploaded.  Even if it isn't also resized at that point, the file conversion results in compression, so you do lose some information.  That said, I'd be surprised if you could usually see much difference in clarity between an image that was uploaded at 768x768 and one that was uploaded at 512x512, since both will end up as 512x512 JPEG2000 images. (That is, assuming that your 512x512 was originally created as a larger image and then downsized on your own machine, so we're comparing images with similar sizing histories.)  Generally speaking, it's safer to make any size adjustments yourself rather than trusting SL to do it for you, especially if your image dimensions aren't powers of 2, so I'd always prefer to upload at 512x512 myself.


Wasn't SL at some point doing 384 and 768?

It seems to me that it was, back when I was still using Emerald, at least.

In any case, I absolutely concur about making one's own adjustments as much as possible.

In principle, images with prime numbers of pixels on each side should distort the most... correct?

 


 


Josh Susanto wrote:

[...] In principle, images with prime numbers of pixels on each side should distort the most... correct?

 

Theoretically, larger non-Mersenne primes should, but realistically it depends on the actual content of the image.

 

And yeah, temp uploads don't actually go on the server; they are served from your local machine's cache (the original trick used the avatar skin backgrounds).

 

At one point SL did support up to 2048^2, and even upsized images that were closer to the larger of two power-of-two values... but other than that, it's never supported a non-power-of-two size since at least mid-2005 (before that I never really took notice).


 


Josh Susanto wrote:

Wasn't SL at some point doing 384 and 768?

 

 

Nope.  I actually made the same mistake of thinking that, myself, several years ago.  I can't remember why I thought it was true, but I ended up incorrectly arguing the case right here on the forums.  After someone suggested I go in and take a look at the actual texel dimensions in-world, I then came back and posted about how wrong I'd been.  A friendly Linden even joined the thread (which should tell you how long ago this actually was, since Lindens don't generally do that anymore), to confirm that at no time had SL ever had the capability for non-power-of-two textures.

The reason for the power-of-two restriction, by the way, is that it used to be a requirement within OpenGL.  From what I understand, modern versions of OpenGL can now support arbitrary sizes, but not all graphics cards are OK with it.  Some crash instantly upon encountering "odd" sized textures, in fact.  Therefore, most OpenGL applications, SL included, still maintain the restriction.

 

 


Josh Susanto wrote:

It seems to me that it was, back when I was still using Emerald, at least.

 

 

I've never used Emerald, so I can't speak intelligently on it.  My guess is that if it did allow it, it was probably just for temporary images.

Technically, one could hack a viewer to handle arbitrary sizes, if one were so inclined.  But I would hope there are server side safeguards in place to prevent upload of non-power-of-two images, since "normal" viewers might crash upon trying to display them.  And some hardware configurations certainly would crash, even if the viewer itself were OK with it.

 

 


Josh Susanto wrote:

In principle, images with prime numbers of pixels on each side should distort the most... correct?

 

 

I can see why one might make that assumption, but I'm not sure it's actually true.  Perhaps someone with a better understanding of the actual algorithms involved could chime in with a definitive answer, but in the meantime, allow me to theorize for a few moments.

It's easy as a human, who's used to thinking of everything in terms of decimals, to assume that prime numbers would probably be the worst candidates for resizing cleanly to powers of two (or to any other numbers besides themselves).  But in practice, I don't know that scaling 521 down to 512 is going to look any worse than scaling 520 or 522 to 512.  None of those numbers resolves cleanly into 512, or into 256, or into any other power of two.  Massive re-interpolation of the image is likely going to have to take place in all three cases.

It's equally easy, as a (non-computer-scientist) human, to assume that something like 768 might divide very easily into 512.  But again, in practice, it might not actually be true.  As a human, removing a third is an easy concept to deal with, certainly much easier than removing, say, nine 512ths.  But that kind of thinking doesn't necessarily speak to how the math is actually done.  Computers don't conceptualize like people do.

Here's something to think about.  For the sake of argument, imagine the math were to be done decimally.  Thirds would then be a nightmare to deal with.  After all, a third, by definition, has an infinite repeat in decimal math, and computers don't tend to deal very well with infinity.  (Just ask Redjac from Star Trek!)  There's going to have to be a round somewhere, and with rounds come artifacts, always.

Now, I highly doubt that decimals play a role at all in what really happens, so please no one get too caught up in arguing about that specific example.  My point is simply that many of the concepts we humans think of as simple and clean can very quickly become quite complicated and messy when you try to apply various mathematical models to them.  If even the most familiar of mathematics (decimals) can get thrown for a loop by a concept as simple as a third, imagine how convoluted things could get when you factor in the far more complex math that computers actually do use for these kinds of operations.

With that in mind, how do prime numbers stack up against any other type?  I'm afraid I'm not qualified to answer that question for sure.  In the absence of more definitive information, the best I can say for now is this: "It doesn't matter anyway, so quit making my brain hurt!  You're gonna use powers of two, young man, and you're gonna like it.  Them's the rules."  (And I mean that in the nicest possible way. :D )


I'm trained as a music theorist, so I actually do a lot of thinking in base 12 rather than base 10.

I only mention prime numbers (my favorite numbers) as being an obvious problem in order to encourage further elucidation here. It also occurs to me that powers of 3 should be about equally problematic, for basically the same reason; such numbers, as a collection, are maximally dissimilar from powers of 2, as a collection.

If it's a question of distributing one pixel of color data across a line, the Fermat primes might even work out better than any powers of 3 which are in nonsuperparticular relation to the next power of 2.

Does the algorithm happen to work in such a way that it simply favors numbers whose factors include fewer numbers other than 2?


The OpenGL requirement is based on memory management and block allocations... power-of-two dimensions are just faster to calculate than arbitrary ones for memory management and basic manipulations of size.

As for resizing algorithms, it depends on which one... and it really comes down to the details within a texture, rather than the absolute texture size, to determine the amount of distortion... and even then, the details' shape and angle can have an impact. Some algorithms are better than others, but are more intensive to use.


I'm most grateful for this, and similar blog posts by Penny Patton. But there is one point that's still a mystery to me; namely, whether repeating a texture has any effect on performance. Given that “graphics performance is affected not by file size, but by texture memory consumption,” do the following cost the same in terms of performance?

512x512 Texture. UVW map not scaled.


512x512 Texture. UVW map scaled x times.



In terms of memory usage, they're exactly the same.

With regard to processing, there may be a slight amount of increased overhead from tiling the texture, depending on the particular implementation.  But even when that does happen, it's nowhere near as much of a performance hit as you'd get from increasing the actual texture size to repeat the image the same amount of times.



Thank you so very much Chosen *Shakes Hands* It may surprise you that before troubling you, I tried finding even a simple explanation by searching the web, but got next to no results! Your explanations, and help, are as always invaluable and most appreciated.


Excellent Excellent Excellent!

This probably should be the first thing that anyone joining SL sees when they log in. Maybe every time they log in? :-).


But... if you were to upload at 1023x1023, which is going to be downscaled to 512x512, wouldn't that 512 texture look better than if you had uploaded a 512x512? I know the more info you start out with, the better the quality you get when compressing to JPG in Photoshop, etc. The same thing happens when uploading to YouTube... the more data they have, the better the compressed file. Would be an interesting experiment?

