bi-curious? You should be - or why your assumed wisdom may not be correct.


Beq Janus


We've seen a number of questions already arising about a "debug setting that miraculously improves texture quality". I'd like to explain the background and the underlying facts.

Firstly though, let's establish a couple of facts.

  1. There is no magic button or debug setting to improve the resolution or quality of all textures. 
  2. There is no way to display textures of a resolution greater than 1024x1024 in Second Life.

So what is all this muttering about and is there any substance to it?

The "muttering" stems from some investigation by @Frenchbloke Vanmoer that was published by @Hamlet Au on his New World Notes blog with the title "How To Display Extremely High-Res Textures In SL's Firestorm Viewer" and in spite of the headline's conflict with the facts listed at the head of this post, yes there is substance to this news as it happens.

I'll keep this post relatively short. If you want to see more rambling on how and why @Frenchbloke Vanmoer hit upon something interesting, you can read about my subsequent investigation in my blog post, "compression depression - tales of the unexpected".

The bottom line is that, whether by luck or by judgement, the Second Life viewer uses a bilinear resampling algorithm when it resizes images. Until yesterday I, like many others, and I would suspect most of you reading this, had somewhat slavishly followed the generally accepted advice that bicubic resampling gives better results, more specifically that bicubic-sharper is the ultimate "best for reduction" choice. The evidence that Frenchbloke stumbled upon runs contrary to that advice and, in all my tests so far, for the purpose of texturing in Second Life, where you typically want to retain high-contrast details, bilinear gives better results.

I should re-assert here: you do not need ANY debug setting. The original article used an obscure debug setting, but that was only a means to an end; you are in general far better off, and have far more flexibility, if you use your photo tools as you always have.

So what are bilinear and bicubic and why do we care?

When you downsize an image, information (detail) gets discarded; deciding which information to keep and which to lose is what these choices are about.

All resampling methods try to decide which data to keep, or how to blend the data into some kind of average value that will please most people. Put simply, a bilinear sample takes the 4 nearest points to the current pixel and produces a weighted average of those as the new value for the resulting output pixel. Bicubic takes this further, using 16 adjacent points to form its result. By virtue of the larger sample you get a smoother average, which ultimately is why it fails us when we want to preserve detail. The flip side is that for smooth gradients you may find more "banding" with bilinear sampling.
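If you prefer to try it rather than read about it, here is a minimal sketch of the comparison using Python and Pillow. The file names are placeholders, and Pillow's filters are not byte-for-byte identical to Photoshop's, so treat it as an illustration only:

```python
# Illustrative only: downscale the same source with both filters and compare.
# "original_4096.png" is a placeholder for your own high-resolution source.
from PIL import Image

src = Image.open("original_4096.png")

bilinear = src.resize((1024, 1024), Image.BILINEAR)
bicubic = src.resize((1024, 1024), Image.BICUBIC)

# Save losslessly so the comparison is about resampling, not JPEG compression.
bilinear.save("texture_bilinear_1024.png")
bicubic.save("texture_bicubic_1024.png")
```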

Why should we not use the debug setting?

Firstly, as a general rule, debug settings are not a good thing to go playing with. They can frequently have side-effects that you do not realise, and we often find that people tweak some random setting because "XYZ person recommended it". Perhaps it achieves their goal at the time or, as is often the case, it seems to fix things but doesn't really. In any case, they forget the changes and move on. A week or so later they are furious because things don't work anymore, having of course forgotten all about the debug changes.

More importantly in this case, if you use the max_dimension setting to force the viewer to rescale for you, then you will only see the benefit in 1024x1024 images. 1024x1024 is appropriate for large texture surfaces but not so much for smaller objects. If you can use a 512x512, you use a quarter of the memory of a 1024x1024. That can make quite a difference to the performance of a scene.
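As a rough back-of-the-envelope illustration (assuming an uncompressed 4 bytes per pixel once the texture is decoded, and ignoring the mipmap overhead discussed later in the thread):

```python
# Approximate decoded size of a square RGBA texture, mipmaps not included.
for side in (1024, 512, 256):
    mib = side * side * 4 / 2**20
    print(f"{side}x{side}: {mib:.2f} MiB")
# 1024x1024: 4.00 MiB, 512x512: 1.00 MiB, 256x256: 0.25 MiB
```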

Many people remark that using a 1024 is the only way to get the detail they feel they need. I urge you all to take the lesson here as an opportunity to increase the clarity and sharpness of lower-resolution textures by resizing from the full-size originals directly to the target size in your photo tool of choice.

Don't forget, and this may sound obvious, you need to have high-resolution images to start with. You cannot create something from nothing, and whatever you do, don't save the resized image to disk as JPEG before uploading; use TGA or PNG, both of which are lossless.

Give it a try today, and raise a glass to Frenchbloke while you marvel at the increased detail.

A quick example

My blog post above shows a worked example, but I thought I would show you another on a natural scene.

Here, an original high-resolution image has been resampled down to an SL-friendly 1024x1024 using both methods (entirely within Photoshop, to avoid all doubt around various other compression factors).

First is the bilinear

https://gyazo.com/545a21efb514ed16051f791ea9d527c4

Second I give you the bicubic

https://gyazo.com/a969ec986746ced323e6f2f0ddbda0e8

On their own, they don't look that different, but the bilinear shows a lot more detail, which is most noticeable in areas of high contrast such as the steps on the hillside.


That's really good advice. I mentioned it briefly in a previous thread here but it's something we really ought to talk about more.

It's not true that bilinear is always better than bicubic, though. There is a reason why image editors give you a choice between both, and usually other methods too. So, if you want the best possible texture quality, always

Scale before uploading!

I've added a post at my own blog with a few more details.


Thank you for the help in understanding texture definition for SL use. These are the kind of existential questions I ask myself, since I create assets for SL ^^ I also considered bicubic to be "better" than bilinear. Bilinear seems to add an unwanted change to the "soft" look of the texture or image. (Sorry for the vocabulary, English is not my native language.)


7 hours ago, Pierre Ceriano said:

I also considered bicubic to be "better" than bilinear. Bilinear seems to add an unwanted change to the "soft" look of the texture or image.

It does depend on how soft you want the image of course.

I have to admit I haven't really kept track of how often I use the various scaling methods. I think I usually end up with Fant most of the time and bicubic only on a few rare occasions (usually when scaling up rather than down). I can't remember if I've ever used straight bilinear, but I may well have. In theory bilinear shouldn't have any advantages over Fant. But then again, in theory it shouldn't have any advantages over bicubic either, but as Beq's illustrations show, it often does.

Edited by ChinRey

8 hours ago, Beq Janus said:

Don't forget, and this may sound obvious, you need to have high-resolution images to start with. You cannot create something from nothing [...]

But but but... they do it on TV all the time!!!!!

Seriously though, what you said in the quote here is something I have tried to tell people for AGES and not only in the context of graphics.
I have known a few people who would download low-quality MP3 files (128kb/s and less), then transcode them to 320kb/s in the belief that it would magically increase the audio quality.
As far as pics and textures go, I learned to resize before upload when I first got myself a website in the 90s. You want the pics to load fast, not to have to wait 5 minutes on a 14k4 modem connection (about 5kb/s at best) before that 800x600 pic loads, which you are displaying at 120x90 anyway.
From there I learned that when you do the resizing yourself, and don't rely on the browser (or in SL's case, the viewer) to adjust the size of your pics/textures, you can control the image quality by choosing the resampling algorithm and saving the image at the dimensions you expect it to be used at.


My main reason for looking into this was curiosity. I had discovered a few years ago that Firestorm's default was 2048x2048, when previously I had been scaling everything down to 1024s. I did some tests with materials to put an end to the myth, bandied about by some eejits when materials first appeared in SL, that sticking lo-res normal and spec layers on works with higher-res textures. For your eyes' sake, please don't do this. Anyway, I noticed some improvement in quality in the 2K upload tests (I did 256, 512, 1024 and 2048), which I have stuck to ever since. Poking about the debug settings (who doesn't?), I was looking to turn off the texture upload confirmation window - you know, the "you paid L$10" one - turned that off (nothing worse when uploading a lot of textures in bulk) and then went looking for anything else texture related.

The pictures of the face used in the NWN article are in the wrong order, so chances are that you mistook the 8K one for the 1K one.

I was looking to see how fine a detail you could get on a mesh head, as heads, as we all know, contain (and should contain) very fine details, wrinkles and knobbly bits. I deliberately made a normal map for it with hairline cracks and some larger ones, to see which would appear and which wouldn't. The area near the mouth and cheek has a very fine "crack" that radiates in a fork upwards and down. In the 1K it's clearly visible and nothing like the original. In the 8K it's very faint - some of it isn't actually visible, but what was visible made me realise that facial wrinkles, creases, pores, lines, stubble and, importantly, eyebrows that aren't a mass of pixellated blergh are technically possible for the right person with the right skills.
https://www.flickr.com/photos/galleriedufromage/shares/T8Y29f

The same Flickr has several earlier tests of higher-res uploads.

The audio comparison above is quite right - you can't make a lossless audio file by transcoding a 128kbps MP3. Well, you can, but it'll sound awful.
The old rule of Better Quality In = Better Quality Out applies.

 




 


4 hours ago, Frenchbloke Vanmoer said:

Poking about the debug settings (who doesn't?), I was looking to turn off the texture upload confirmation window - you know, the "you paid L$10" one - turned that off (nothing worse when uploading a lot of textures in bulk) and then went looking for anything else texture related.

In case you didn't find it (though I would guess you have by now), there is no need to poke around in the debug settings to turn off the upload confirmation. Use the preferences search and type "upload"; you'll find the relevant option highlighted. 😉


One thing has been confusing me.

I use The GIMP for graphics creation and editing, and that uses "Linear" and "Cubic" to label the interpolation methods used when scaling an image. I had to do a bit of digging to find out whether they were the same as "bilinear" and "bicubic". (This is why I get so picky about multiple labels for the same thing - "download weight" or "streaming weight", guys?) The reason for the apparent sharpness is that bilinear interpolation adds an artefact similar to what an edge-enhancement filter does.

Problem 1: What happens when multiple texture pixels are contributing to the same screen pixel? A 1024 texture is tall enough to fill your viewer display vertically. Some objects use the UV mapping to put several views of an object onto one texture. I did that with an ISO shipping container, which means each side is using a 256-pixel high block, but how often do you see it from close enough for that to matter?

This can also be where anti-aliasing comes into play, which is a sort of blurring. But the precise vertical lines in text don't need to be blurred to avoid the step patterns you get on diagonals and curves. So what do you do? And does it matter if you scale the image before or after doing the anti-aliasing?

Problem 2: Different parts of the image respond differently to the same tool: some can look worse and some can look better.

This is one reason to use layers. It's fairly easy to anti-alias a layer carrying text but not the rest of the image, though it gets more complicated if you want to use different forms of interpolation for scaling. Also, scaling the whole image while it is still split into layers can be less than ideal.

Problem 3: Sometimes you just have to try the alternatives, and see what works best. At least I can use the Local Textures option in Firestorm, because the Viewer and the nature of the object can have an effect on it all. As with a mesh and the smoothing, I am not sure there is any way you can reliably see what happens without using a Viewer.

Truth be told, there are huge numbers of textures being used in Second Life which don't need to be 1024 pixels across. How close do you have to be for somebody's eye to be that many screen pixels? Most of the time I work on a texture at 2048 size, and scale it down to a 512 for upload and use. And why do people still leave an alpha channel in?

Yes, there are reasons to use an alpha channel in a specular or normal map, but in a diffuse map, set to 100% opaque for the whole image, it's just a huge lump of unused data.
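If you want to check for, and strip, a redundant alpha channel before uploading a diffuse map, a quick Pillow sketch along these lines does the job (the file names are placeholders):

```python
from PIL import Image

img = Image.open("diffuse_with_alpha.png")   # placeholder name
if img.mode == "RGBA":
    # The alpha here is assumed to be fully opaque, so dropping it loses nothing.
    img = img.convert("RGB")
img.save("diffuse_rgb.png")
```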


1 hour ago, arabellajones said:

 

Problem 1: What happens when multiple texture pixels are contributing to the same screen pixel? A 1024 texture is tall enough to fill your viewer display vertically. Some objects use the UV mapping to put several views of an object onto one texture. I did that with an ISO shipping container, which means each side is using a 256-pixel high block, but how often do you see it from close enough for that to matter?

 

Truth be told, there are huge numbers of textures being used in Second Life which don't need to be 1024 pixels across. How close do you have to be for somebody's eye to be that many screen pixels? Most of the time I work on a texture at 2048 size, and scale it down to a 512 for upload and use. And why do people still leave an alpha channel in?

 

This is where Texel Density comes into play. 
In games, what you see up close and personal gets the high-resolution textures; the things you don't interact with - say, the underside of a car, the tops of tall buildings you can't otherwise gain access to, things waaay over there - that kind of thing should be lower.

This may be Maya-specific but the general rules apply: https://80.lv/articles/textel-density-tutorial/ ; another is http://forums.joinsquad.com/topic/23545-3d2d-setting-up-your-texel-density/
I'd say more or less anyone who has slapped a texture on a prim has been guilty of this. 
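For anyone who wants to put numbers on it, texel density is just texture pixels divided by the real-world size of the surface they cover; the figures below are made up purely for illustration:

```python
def texel_density(texture_px: int, face_metres: float) -> float:
    """Texels per metre for a square texture stretched across a face."""
    return texture_px / face_metres

print(texel_density(1024, 2))    # 512 px/m  - plenty for close-up viewing
print(texel_density(1024, 20))   # 51.2 px/m - visibly soft from up close
```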

There's nothing worse than seeing a pic on Flickr of a high-res avatar, dressed in all sorts of finery that someone has taken weeks to create, standing in front of a texture that looks like it hadn't rezzed completely, only to discover that this is how the creator of that object intended it to look. It's not pretty. Of course, it may be only me that gets annoyed and develops a twitch when confronted with terrible use of textures: lazy mirrored texture jobs, materials layers that look like they were made for 256x256 textures covering 20 metres or so, or worse still, shiny everything (which is just as bad as full bright on a no-mod item) and materials that look as if 6 inches of tar was coating everything (maybe it's a trendy thing, I dunno).

There is an entirely different argument for photographic backgrounds having high-res textures, solely because they tend to be temporarily rezzed items: people rez them, strike a pose, save the snapshot and put them away. They are intended to be looked at every which way.

SL is kind of a wild west for standards, or the lack thereof. It's up to the users to fix it, which may be like setting fire to the stables after all the horses have run away, or something.
Sadly we just accept these things - the shiny everything (but it's mod, so we can fix it), the wonky texture job that is salvageable, and so on and so forth - and don't call the creators out when they make a mistake or blatantly think they can get away with it.

Something fun to try is seeing which of the big clothing creators forgot to turn shine off on something that's no-mod and not meant to be shiny (it happens from time to time, and usually by accident).






 


1 hour ago, Frenchbloke Vanmoer said:

This is where Texel Density comes into play. 
 

Thanks for the sources on that. It's pretty much what I try to do, though there are elements of SL which seem poorly documented. I'm pretty sure that some sort of mip-map system is used, with different-sized textures sent to the viewer for objects at different distances, but where is it described? Your example points up one reason: I'm not using the right bit of jargon. The only evidence I have is what happens when downloads are sluggish and I see the low-res texture switch to a higher resolution. And how does it relate to how LOD works on the mesh?

Yeah, lower texel density on such things as the underside of a vehicle is a good move. For mesh, you have to start with the UV mapping. I can think of a couple of projects I have where that could be done. 


2 hours ago, arabellajones said:

Thanks for the sources on that. It's pretty much what I try to do, though there are elements of SL which seem poorly documented. I'm pretty sure that some sort of mip-map system is used, with different-sized textures sent to the viewer for objects at different distances, but where is it described? Your example points up one reason: I'm not using the right bit of jargon. The only evidence I have is what happens when downloads are sluggish and I see the low-res texture switch to a higher resolution. And how does it relate to how LOD works on the mesh?

Yeah, lower texel density on such things as the underside of a vehicle is a good move. For mesh, you have to start with the UV mapping. I can think of a couple of projects I have where that could be done. 

All images in Second Life carry 33% overhead baggage for the discard levels (effectively CPU-side mipmaps). Which discard level is shown depends upon the screen-space resolution of the texture: discard 0 is full size, discard 1 half, discard 2 quarter, and so on... On the whole the viewer does an OK job at that - OK, but not better than OK.

It's briefly mentioned here http://wiki.secondlife.com/wiki/Image_System
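For the curious, that 33% figure is just the geometric series of ever-smaller levels. A quick sketch in plain Python (not viewer code, and assuming an uncompressed 4 bytes per pixel):

```python
# Each discard level halves the resolution, so the extra memory is
# 1/4 + 1/16 + 1/64 + ... which converges to 1/3 (about 33%).
base = 1024
bytes_per_pixel = 4

levels = []
side = base
while side >= 1:
    levels.append(side * side * bytes_per_pixel)
    side //= 2

full = levels[0]
print(f"discard 0 alone: {full / 2**20:.2f} MiB")
print(f"all levels:      {sum(levels) / 2**20:.2f} MiB "
      f"(overhead {sum(levels) / full - 1:.0%})")
```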

 


Yep, very standard stuff. It's an explicit file format in Kerbal Space Program, but all the textures are already on your hard drive.

Looks like the way I've been thinking of textures, just big enough for a 1-to-1 mapping of texture pixels to screen pixels, which is not the same as LOD switching. But that means some of those 1024 textures will never get used at the full size.


7 hours ago, arabellajones said:

I use The GIMP for graphics creation and editing, and that uses "Linear" and "Cubic" to label the interpolation methods used when scaling an image. I had to do a bit of digging to find out whether they were the same as "bilinear" and "bicubic"

Linear and bi-linear are not supposed to be the same. But I can't imagine GIMP actually uses plain linear interpolation; it's more likely whoever made the UI was a bit lazy. ;)

Cubic as distinct from bi-cubic doesn't seem to make any sense at all - unless somebody has managed to create a one-dimensional square recently, that is. But yes, it does exist. Don't ask me how it works.

Edit:

Btw, I think I have a vague idea what Fant interpolation is. It seems to be very poorly documented, and most of the hits on Google are message board posts by others who are also desperately trying to figure it out. But if I understand right, it's essentially bi-linear and bi-cubic on top of each other, with more weight given to the linear than the cubic aspect. All I really know is that it tends to work very well for SL textures.

Edited by ChinRey

2 hours ago, ChinRey said:

Linear and bi-linear are not supposed to be the same. But I can't imagine GIMP actually uses plain linear interpolation; it's more likely whoever made the UI was a bit lazy. ;)

The mathematical basis is linear; "bi" represents the dimensionality. A bi-linear filter is a combination of two linear functions, in the same way that tri-linear is applied in 3D space. When dealing with raster images, bilinear makes the most sense and is most familiar to people, but perhaps they deliberately avoided the term in case of silliness from lawyers (can you patent-troll maths?) or to deal with pedantic users making 1-pixel-high images 🙂

Compare that with Lanczos, which is only ever referred to as Lanczos, not "bi-Lanczos", hence my *shrug*. You could, therefore, argue that citing the underlying mathematics is more consistent. That said, I don't know of Lanczos being used outside of video image processing, though I am sure it is somewhere.

Calling it bi-cubic makes sense in the same way that bi-linear does, because both are the result of combining two one-dimensional functions. An example of a cubic function in one dimension is when a curve is fitted to a set of data points by creating a continuous mathematical function for that data; a (cubic) Bezier curve is one example. In the bicubic case the curves are fitted in two dimensions, allowing the points to be interpolated, whereas in the linear case a far more rudimentary weighted average is used, giving a simple gradient.
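For anyone who prefers code to prose, here is a minimal NumPy sketch of that "two linear functions combined" idea (purely illustrative, not the viewer's implementation):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a 2-D array at fractional (x, y): two linear interpolations
    along x (top and bottom rows), combined by one more along y."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0

    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom

# A 2x2 patch sampled at its centre is just the average of all four pixels.
patch = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
print(bilinear_sample(patch, 0.5, 0.5))   # 1.5
```

Bicubic does the analogous thing, but fits cubic curves through a 4x4 neighbourhood instead of drawing straight lines through a 2x2 one.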

 


17 hours ago, Beq Janus said:

The mathematical basis is linear; "bi" represents the dimensionality. A bi-linear filter is a combination of two linear functions, in the same way that tri-linear is applied in 3D space.

Yes, but one-dimensional linear interpolation is actually used for scaling images. I can't imagine GIMP actually uses it, though; it's the quick-and-dirty solution you go for when (conversion) speed is essential.

 

17 hours ago, Beq Janus said:

That said, I don't know of Lanczos being used outside of video image processing, though I am sure it is somewhere.

Lanczos is not generally suitable for still images since it tends to add artifacts, but there's no rule without an exception.

 

17 hours ago, Beq Janus said:

...

A Bezier curve is

...

(etc.)

I've said it before and I'll say it again: SL and LL need more mathematicians!

Good content creation is soooo much based on algorithms and much of the maths is at a level a bit beyond what it's reasonable to expect programmers and content creators to be comfortable with. We really need more experts to sort it out and create good formulas for us.

Edited by ChinRey
