Chosen Few

Everything posted by Chosen Few

  1. Jo, you're trying to assign undue rhyme and reason to anecdotal evidence. You could drive yourself mad trying these kinds of experiments until the end of time, and you still wouldn't glean any conclusive results. This is because you're operating from a false premise. Your expectations of how it SHOULD work are not in line with how it actually DOES work. The "flicker" you're referring to is properly called "alpha sorting", or more specifically, "the alpha sorting glitch". It has a lot more to do with how your graphics hardware works than with anything SL is or isn't doing. It's an inherent part of the nature of how blended transparency works in any realtime 3D simulation. There's no changing it.

Understand that depending on your graphics card, your drivers, and a few other factors, the precise order of operations that goes into drawing the actual pixels in each frame on your display may vary slightly. Therefore, it's entirely possible that a texture that looks like it's "in back" to you at any given moment might well look "in front" to someone else viewing it from the exact same angle and distance.

The bottom line is this. When you overlap two or more 32-bit images in 3D space, you're going to have sorting problems, period. Understand, the computer does not think of concepts like "in front" and "in back" in the same way a human being does. When it's confronted with the task of determining how to blend color values together in order to make it look like you're seeing one partially transparent surface through another, it considers a few mathematical variables in order to best guess at how to display both. The primary factors are the camera's distance from the center of each object, and the difference between the camera vector and the surface normal vector (the angle of view). There are others, but those are the two biggies. Very generally speaking, whichever object's center is closer to the camera will be drawn as "in front". When the two centers are equidistant from the camera, then whichever one is most directly facing the camera will be "in front", again very generally speaking. There are a multitude of other factors which also affect the draw order in various ways.

No amount of tweaking the opacity values of various pixels in any of the imagery itself is going to change the principles involved. The fact is this: if both surfaces have 8-bit transparency, whether it's 1% in 1 pixel, or 100% in 27 pixels, or 66% in all pixels, or any other combination of values you could possibly think of, sorting is sorting is sorting. From certain angles, and from certain distances, "back" textures will appear to jump to the "front", and vice versa.

It's important to understand that this has absolutely nothing to do with SL itself. The alpha sorting glitch happens in every single 3D simulation on Earth, from freebies like SL to commercial video games to high-end 3D modeling platforms that cost thousands of dollars. Realtime 3D + 8-bit transparency = alpha sorting, and with the sorting comes the glitch. There's no escaping it. The only reason you don't see it happening in games very often is because professional game artists are well aware of the issue, and we go to pains to design around it. We simply don't build situations in which multiple 32-bit images would ever overlap (except when we want to actively take advantage of the glitch for simplifying the geometry of certain things like trees, fire, chandeliers, etc.). SL doesn't provide for that kind of design control.
First, even if you do all the right things, your neighbor might not, and then it's back to square one. There's no way to reliably prevent your neighbor's alpha textures from interfering with yours. Second, unlike in games, there's no way to control people's angle of view in SL. Everyone has complete 360-degree freedom of movement for their cameras. It can be very challenging to design an environment that does contain 32-bit textures, but in which no two of them will ever appear to overlap, when every aspect of the scene can be approached from an infinite number of viewing angles and distances. Third, there's no controlling what's attached to avatars. Say you do manage to design an ever-so-carefully-constructed build that ensures no 32-bit textures can ever overlap, from any viewing angle. Well, as soon as someone shows up sporting a big old pair of giant butterfly wings, and hoochie hair from hell, and all manner of translucent bling, then all bets are off. Their crap is going to interfere with your windows and your plants and anything else of yours that has transparency.

My advice: don't drive yourself crazy trying to fix the unfixable. The sorting glitch happens. Live with it. That's all we can do.
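For anyone who wants to see the center-distance sorting idea from the post above in concrete terms, here is a minimal sketch in Python. It is not SL's actual renderer code, just an illustration of the general principle: blended transparent surfaces are typically sorted back-to-front by the distance from the camera to each object's center, so a small camera move can flip which surface appears "in front". The object names and coordinates are made up for the example.

```python
# A minimal sketch (hypothetical, not SL's renderer) of center-distance alpha sorting.
from dataclasses import dataclass

@dataclass
class TransparentSurface:
    name: str
    center: tuple  # (x, y, z) of the object's center

def draw_order(surfaces, camera_pos):
    """Sort back-to-front: farthest centers draw first, nearer ones blend over them."""
    def dist_sq(s):
        return sum((c - p) ** 2 for c, p in zip(s.center, camera_pos))
    return sorted(surfaces, key=dist_sq, reverse=True)

window = TransparentSurface("window", (0.0, 0.0, 0.0))
curtain = TransparentSurface("curtain", (0.1, 0.0, 0.2))

# Two nearly identical camera positions; the resulting order (and thus what looks
# "in front") flips between them.
for cam in [(0.0, 5.0, 0.05), (0.0, 5.0, 0.3)]:
    print(cam, "->", [s.name for s in draw_order([window, curtain], cam)])
```

Per-pixel depth never enters into it, which is why no amount of editing the alpha values inside the textures themselves changes the outcome.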
  2. From that screenshot, it looks like everything is fine. Is it possible you're just not working in a mesh-enabled area in-world? I noticed just last night that when I was on an island belonging to one of my clients, I couldn't upload meshes there. I could rez them from inventory, but I couldn't upload them. The option wasn't even there in the upload menu. I had to jump back to the mainland to do my uploading. It would seem that not every region on the grid is running the server version that allows for mesh upload. Try heading over to Cordova Sandbox or something, and see if it works there.
  3. A general rule of thumb in game art is that characters should be around 6000 tris. But as it sounds like you already know, there can't be any hard rules on stuff like this. One issue with "replacing" the default avatar mesh with a custom one is that the default one will always be there. You can hide it with an alpha skin, but that doesn't actually make it go away. Those triangles will still be there in the rendering process, just as with all transparent objects. So, right off the bat, there's no way to actually stay under the 6000-triangle magic number. As with all "how many should I use" questions, the only real answer is as few as you possibly can. Don't expend hundreds of extra faces on smoothing out a curve when just softening an edge normal will simulate the same effect. Don't make a 24x48-sided sphere for an eyeball when a 3x8-sided half sphere will do the job just fine. Don't model the geometry of every tiny detail that nobody will ever zoom in on when a good texture will suggest all the same details just as convincingly from typical viewing distance. In short, just be smart about it. Strive for efficiency at all times, and you won't go wrong. If you're new to character modeling, it's tempting to think, "Just give me a total number to stay inside of, and I'll make it work." But really, it's best to go the other way around. Do a few of them, and you start to build a sense of how many you'll need when approaching any given problem. Work from the bottom up, rather than the top down, if you know what I mean. Chances are your first one will be WAY overblown on the poly count. When you do it a second time, you'll probably be able to cut the count in half. By the time you've made a bunch of character models, you'll have your technique down, so that you'll probably just naturally end up around that 6000 number, without even consciously trying. And remember, texturing goes a long way, especially with organic forms. You'll be surprised how little geometry you actually need when the paint job is right.
  4. There are two possibilities I can think of. One is that your browser cache is messed up. The other is that something's wrong with billing. Regarding the latter, there's nothing on the status page right now about any problems, but that doesn't necessarily mean there isn't one. If you haven't done so already, try clearing your browser cache, restart, and see if it properly updates. Also, of course, restart SL if it's currently running. If that doesn't work, the next thing I'd suggest is to wait a few hours and see if anything gets fixed on LL's end. If it's still not working later, file a support ticket.
  5. Great idea, Drongle! Voted. Of course, we can't assume that the average user is going to have any idea of what distance values are best for any given mesh. But even if they get it all wrong, it can't make anything worse than it is now. It can only help. For those people who would be able to make good sense of it, it would really be a great tool.
  6. Rolig Loon wrote: Yes, I considered that possibility myself, Chosen, but I couldn't figure out what qualifies as a "new" sculpty. If I rez a sculpty from inventory, or shift-drag it, is it new? Or if I create a new prim, declare it to be a sculpty and apply an "old" sculpty UV map? Or only if I use a newly-uploaded UV map? This feels like the -- heaven forbid -- arguments about when life begins. What's a "new" sculpty?

I'd base it on the date the sculpt map was uploaded.

Rolig Loon wrote: Absolutely. There's no question about that, Medhue. You and I both know, however, that a distressingly high number of landowners don't understand these things at all. Recall, for example, how poorly many landowners understand the effect that scripts have (or don't have) on sim performance. I am not confident that many landowners can get past two ground-level concerns: (1) I have to stay within a prim allocation and (2) I want to keep costs down. They do fret about lag all the time, but that's harder to understand. So,.... I build a house that has four large pillars, each with a nice base and capital. I make it as a sculpty and its PE is 1.0, no matter how big I make it. If I understand the rules right -- I don't, so please correct me if I'm wrong -- the bigger a mesh pillar is, the higher its PE. A potential customer looks at my house with sculpty pillars and compares it to your identical one with mesh pillars and says (1) "Geez, his is gonna eat more of my prims." and (2) "You only had to pay L$10 to upload your sculpty map, so you can sell your build to me for less than he can afford to." Please understand .... I'm not trying to pick an argument by asking these things. I am really trying to sort out matters in my own mind as a small-time content creator. Setting aside temporary issues like the fact that a lot of residents can't even see mesh items yet, I'm searching for a good reason to stop making sculpty components. Despite the real lag advantages of mesh, I see economic reasons to stay with sculpties .... at least for the near future.

There will always be some items that lend themselves better to sculpties' characteristics than others. A simple column that needs to be able to be infinitely resized without affecting its PE may indeed fit that particular bill. Objects that will be more statically sized will usually have better PE as meshes, when you compare the amount of detail you can include for any given PE number. For the heck of it, I just uploaded a fluted Greek column I made. If it were made of sculpties, I'd likely need six or seven of them to include all the same details that are in the mesh. When I size the column 8 meters tall, its PE is 7. So, as long as I don't need the column to be taller than that, it matches or beats the sculpties' PE. If I stretch the mesh to 64 meters tall, the PE jumps to 42. If it needs to be that big, sculpties are probably the better choice.

Is there a fixed point at which prim count becomes more important than poly count? I'd have to say no, there cannot be any one universally applicable rule for things like this. The tradeoff point will vary, depending on the goals of each build. All we can do is try to apply good judgment, on a case by case basis. If you're concerned that people might or might not buy the mesh version of a model for various reasons, or that they might not buy the sculpty version for various other reasons, I'd suggest offering both. It's more work, but it's also likely to translate to more sales, so it's probably worth the extra effort.
  7. Rolig Loon wrote: OK, so then let me continue the thought in my last post. If you are right that sculpties will never cost more than one prim, where's the buyer's incentive to purchase an equivalent mesh object that has a higher PE? ( I don't mean the flashy objects that can only be made in mesh. I mean objects that could be made just as easily as either mesh or sculpties. ) If a buyer has a choice of taking home either a lovely mesh object that has a PE of 10 or a sculpty object that looks the same and has a PE of 1, why not choose the lower-PE one? Unless there's a good reason for lots of people to choose the mesh object, why should content creators follow your example and convert their sculpty items to mesh? For me, this is not a theoretical question. I create fairly simple sculpty components (ribbons, pillars, cushions, ...) as sculpties now and I would seriously consider making them as mesh instead, except that if I do, their PE will probably be higher. People on a tight prim budget will be less likely to buy them than if I stick with sculpties. So .... unless LL does apply a PE penalty to sculpties, why should I switch?

Here's the thing. Very rarely does a given object need all 2048 polygons that a sculpty has to have. Something like a ribbon, even a complicated one, could be made from just a very tiny fraction of that. Unless they've changed the rules since the last time I checked, such low-poly objects can have a PE of less than one. The mesh model could very well beat the sculpty model handily in PE, as well as in performance. Worst case scenario, if the lower limit is 1, then at the very least the mesh would have the same PE as the sculpty, and the consumer simply wouldn't know the difference. Either way, there would be little if any reason not to make the switch. The only time using a sculpty would remain advantageous would be in the exceedingly rare circumstance in which you absolutely need all 2048 polygons, and you've got good reason to only want to invest one prim into it. I'll agree that simply CONVERTING sculpties to mesh models would be silly, if all you're doing is a 1:1 conversion. But if you're replacing the sculpty with a far more efficient mesh, it's almost always going to be a win.
  8. Thanks for the warm welcome back. Rolig, Ceera, I'm glad to know you're both still here and active. Medhue, I'm not sure if we've met before, so it's nice to make your acquaintance. That pesky thing called RL kept me mostly away from SL for the past few months. Feels good to be back.

Rolig, you're certainly right that we can't erase the history, no do-overs. And yes, sculpties will always be with us. Everyone has raised good points about how any retroactive change to sculpty PE would mess up existing builds, and could potentially damage some existing business practices. Were a change actually to happen, there's an easy way to prevent that. Just grandfather pre-existing sculpties into the one-prim scheme, and apply new PE numbering only to newer ones. Historically, megaprims (loosely) serve as precedent for this type of model, at least in principle. During the time when megaprims could be created, a 100x100x100 cube was a one-prim object. When creation of them was then disallowed, any newly created cube of that size had to be at least a 542-prim object (under the old 10M prim size limits). But pre-existing one-prim cubes of that size were allowed to remain. Granted, the megaprim situation was quite a bit different from the sculpty situation in several key ways. However, the principle still fits. I see no reason not to apply this logic to sculpties. Old ones stay one prim, while new ones get more realistic PE. I'd say that would yield the best of both worlds for all concerned. That's really the way all changes should be made whenever possible, anyway. Preserve the pre-existing; apply the new rules to new items.

As for the question of converting existing sculpties to mesh models, I'd have to give the same answer I always gave whenever people used to ask how to 'convert' existing mesh models to sculpties. Don't bother trying. It's better to build a brand new model, for a multitude of reasons. Will this be inconvenient? Sure. But progress of any sort is rarely convenient during the transitional stage. Often we have to take a step back in order to make a leap forward. Not everyone is going to have the stomach for that, of course, but it is what it is. We didn't decide not to put the telephone to use just because it was going to take a lot of work to place the lines. We didn't reject the invention of the locomotive just because the tracks weren't laid yet. We didn't dismiss the light bulb just because most houses didn't yet have electricity. All of these things took a lot of peripheral work to get going. But once they did, the world never looked back. It's likewise going to take a lot of work to wean SL off the sculpty, and onto the mesh. But if you recall, we had a very similar discussion about prims vs. sculpties when sculpties first hit the grid. Every time a better way to do things comes along, there's going to be a natural period of uncomfortable transition. But after that, the world does move on, and the "newfangled" becomes the commonplace.
  9. This may be just an academic discussion if, as Marton pointed out, the change in question hasn't actually happened. But either way, I think there are some interesting points worth talking about.

Abigayle Bracken wrote: From what I read posted by Jeremy Linden, this is supposed to give regular prim builds and mesh builds equal weight. How does this affect sculpties? Well, most of them will have a higher prim count now.

I know this isn't going to be a popular response among the "squeeze as much as you can out of every resource" types that make up certain populations in SL, but if LL were to make sculpties cost more than one prim, I for one would be in favor of it. LL's done its share of dumb things over the years, but this wouldn't be one of them. I'll explain.

Consider that every (regular) sculpty has 2048 polygons in it, while the average prim by usage has only about 188*. This means that a sculpty on average is about 11 times more costly than a prim, when it comes to rendering overhead and processing. Yet historically, each sculpty has only counted as one prim. There's a tremendous imbalance there, to say the least. But that's not necessarily a problem. What IS a problem is this: while sculpties offer a huge "bang for the buck" in terms of the number of polygons per prim that you get to use, their geometric structural constraints almost always result in the use of far more polygons per actual model shape than would otherwise be necessary. This directly slows down rendering, often dramatically.

Sculpties have been one of the most over-abused resources in SL since the day they first hit the grid. The infamous Luskwood Tree, for those old enough to remember, was an early demonstration of this. For anyone who doesn't know, it was a gigantic plant, constructed from several hundred pillow-shaped sculpties, with a total poly count well into the millions. It lagged the hell out of every single person within a 2-sim radius. Abuses like that continue to this day. People routinely pack builds to the brim with sculpties, with 512x512 or 1024x1024 textures on each one, and then they have the gall to complain that SL is laggy. As we all know from visiting certain island builds that somehow magically yield killer high frame rates, even though they're chock full of great looking content, the fact is SL works quite well when builders are mindful to do the right things. But unfortunately, most users have no idea how to self-manage their resource usage. So, if LL were to finally do something to help manage it from the outside, that would be fantastic! We should all applaud that kind of move (assuming they could find a way to keep existing content from being auto-returned unfairly, of course).

It's also worth keeping in mind, by the way, that sculpties were never intended to be anything other than a stop-gap measure while mesh support was still on the drawing board. Qarl, being the genius that he was (and still is), realized that since SL was already set up to import textures, and already set up to render polygons, it wouldn't take much extra tinkering to get one to drive the other. The invention of the sculpty was simply a clever way of getting SL to be able to do SOMETHING more than simple prims, without having to alter the fundamental capabilities of the system. But even Qarl would be the first to admit that in the grander scheme of how 3D modeling and rendering are supposed to work, sculpties are a very clumsy and extremely inefficient way of getting the job done.
That's why they've never existed anywhere else but SL. Everyone should understand that from a cost/benefit perspective, sculpties have never been winners in any category except prim count (and that's just because prim count was artificially imposed). When you look at every resource that makes a real impact on how the management of any 3D simulation actually works (things like rendering time, load time, poly count, texture usage per model, etc.), sculpties have always been tremendous lag machines, and always will be. That's just the nature of what they are, and there's no way around that. Now that we can use arbitrary meshes, there's zero need to keep using sculpties at all. Traditional mesh models are not only MUCH more efficient, they're far easier to make, and they have capabilities that sculpties could never touch (multiple textures per object, custom UV layout, rigging, etc.). What's more, they can lower the prim count of any given model much further than sculpties ever could, even if sculpties were to retain their old one-prim-per-sculpty PE counts.

Abigayle Bracken wrote: Does this really save so much strain on the servers, when mesh objects take up so many more resources?

The servers have nothing to do with it. The primary concern is rendering efficiency. And as we just discussed, sculpties are supremely inefficient in that department. If you ever want to get the kinds of framerates in SL that you get in regular video games, or even anything in the same ballpark, efficiency has to be encouraged. The prim count, as a simplistic means of resource management, made some sense way back in SL's beginnings. But it never made any sense to stick with that and only that for so many years. People should have been encouraged to think about things that really make a direct difference to performance, like poly counts and texture memory, all along.

Abigayle Bracken wrote: Do we get higher prim counts available on our land to compensate us?

You wouldn't need higher prim counts. Just use more efficiently made models, and you've got nothing to worry about. Those sculpty horses should be replaced with mesh versions. They'll have lower PE counts than even the old sculpty ones, and they'll look a hundred times better.

Abigayle Bracken wrote: After four years of faithful membership (most of that being paid premium membership), I am giving serious thought to chucking it all.

It never ceases to amuse me how every time a significant improvement comes to SL, a certain percentage of people threaten to quit over it. Anyone remember how many prim builders threatened to quit when sculpties first arrived? The forums were filled with complaint after complaint after complaint, from those who wanted to preserve the status quo. Oh, and remember when they made changes to the way permissions worked on textures, and there were so many posts from texture artists threatening to quit over it? My personal favorite was when copybot first got out, and content creators of all types said they were gonna close up shop because of it. Uh, let's see, you think you MIGHT potentially lose some money by getting ripped off, so instead you're gonna make certain that you make no money at all ever again, by not even trying. I still have to actively contain my laughter when I think about that one. Some people are geniuses.

I'd strongly encourage you to do yourself a favor. If sculpty prim counts were to go up, embrace the change. See it for the net gain that it actually would be for everyone.
Anything that would cut down on abuse of real resources would be a good move. *My figure of 188 polygons in an "average prim" was arrived at by adding up the total amount of polygons in a bunch of random builds around the grid, and dividing by the number of prims in each build. The number is relatively low, because simpler prims tend to be used more often than complex ones. For example, cubes, which have only 108 polys each, are used far more often in builds than toruses, which have 1152. Cylinders, which have 192 polygons, tend to be used more often than tubes, which have 672. Etc., etc., etc. Because whole prims tend to be used so much more often than cut and/or twisted ones, I did not consider the added polys from twists or the subtracted polys from cuts to be statistically significant. Attachments, like prim hair, are a whole other animal. I did not factor those in. I'm just talking about builds here.
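As a rough illustration of the arithmetic behind the figures in the post above (the 2048-polygon sculpty versus the roughly 188-polygon "average prim"), here is a tiny sketch of the averaging method the footnote describes. The per-prim polygon counts are the ones quoted in the footnote; the tally of prims is made-up sample data, not the actual survey.

```python
# Hypothetical illustration of the averaging method described in the footnote:
# total polygons across sampled prims, divided by the total number of prims.
PRIM_POLYS = {"cube": 108, "cylinder": 192, "tube": 672, "torus": 1152}  # counts quoted above

# Made-up tally of prims found in a handful of sampled builds (NOT real survey data)
sample = {"cube": 600, "cylinder": 250, "tube": 40, "torus": 20}

total_polys = sum(PRIM_POLYS[kind] * count for kind, count in sample.items())
total_prims = sum(sample.values())
avg_polys_per_prim = total_polys / total_prims

print(round(avg_polys_per_prim))          # lands in the same ballpark as the quoted ~188
print(round(2048 / avg_polys_per_prim))   # roughly how many "average prims" one sculpty costs to render
```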
  10. Josh Susanto wrote: 1) I'm not particularly interested in the difference between channels and layers.

I'm sorry to hear that, Josh. If you were to take an interest in the difference, you'd find it would aid you greatly in all your graphical endeavors. Without at least a cursory understanding of the fundamental characteristics and application of each, you're really flying blind in your approach to image creation and graphics manipulation. That unfortunately translates directly to a tremendous inefficiency in terms of time and energy expended in the creation of each image. A little understanding goes a long way toward making things MUCH easier and more time-effective.

Josh Susanto wrote: But the "channel", in any case, IS a "layer" inasmuch as the data behind it is still there and masked by it.

No, a channel is NOT a layer, and a layer is NOT a channel. They are two entirely different things. I cannot overstate how absolutely fundamental the differences between them are. For an analogy, I usually put it like this. If all the layers in an image were to get together to play football, the channels wouldn't be players in the game. They'd be the force of gravity keeping the players' feet on the ground, the warmth of sunlight illuminating the field, the push of the wind affecting the course of the ball. Channels are the fundamental forces of the graphics universe. You can have an image without layers in it, just as you can have an empty football field with no players. But you cannot have an image without any channels in it, any more than you could have a football game without gravity. As for what you said about masking, the fact that layer masks and channels operate on similar principles doesn't mean they're the same thing. Masks are not layers. Masks are not channels. Masks are masks.

Josh Susanto wrote: It is every bit as much a layer as the "color layers", which, actually being integrated to a single image, are not layers at all and should probably best be referred to as "channels".

No, the layers in an image should not be referred to as channels, ever. By making such a statement, you're only further underscoring your own admitted lack of understanding of the definition of each term. They are NOT interchangeable in any way.

Josh Susanto wrote: So, yeah... I'm sorry I sometimes get momentarily confused by technical terminology that is totally ass-backwards to begin with.

They're not "ass-backwards". Each term has its own distinct meaning. There's no overlap, no reversal, no interchangeability. I'm sorry you find it confusing. I'm guessing you simply weren't taught this stuff before, and there's certainly no shame in that. I do find it a little troubling that, despite your having "no interest in it", you see fit to write about it anyway, thus injecting incorrect information into the discussion, which can potentially hinder others who are legitimately trying to learn the subject for the first time. If you truly have no interest in it, I'd ask that you refrain from commenting on it. That would only make sense, don't you think? If, however, you would like to take an interest in it, GREAT! I'd be happy to help you learn, as I'm sure would many others here as well.

Josh Susanto wrote: I don't export TGA from Photoshop, because 99% of the time, I'm not even using Photoshop. Photoshop, not being free, is also not readily portable. If I'm going to buy Photoshop just to do conversions, maybe I should also just hire someone else to press the button for me.
OK, then, export from GIMP, or Paint.NET, if you prefer free applications. As for portability, you can easily throw any of those programs on a laptop, and take it wherever you want.

Josh Susanto wrote: 3) The transparent component that appears after conversion to TGA is already present in the PNG. Otherwise it would not appear. So this crap about PNG not even supporting such data is... well... crap.

I never said PNG didn't support transparency data. In fact, I said just the opposite. It supports multiple forms of transparency, which is why it's so prone to accidental inclusion of transparency. What I did say PNG does not support is layers. That is a fact. PNG is a flat format, which means it's inherently layerless. The same is true of TGA, BMP, JPG, JPEG2000, and many others. Some of the most commonly used raster formats that do support layers are PSD, TIFF, XCF, and PSP. It should be noted that Second Life cannot directly make use of any of these. They're meant to be used as working documents within source applications like Photoshop, GIMP, Paintshop Pro, etc. The above mentioned flat formats are considered "output formats", to be imported into target platforms like SL.

As for the transparency "appearing" or "not appearing", that's irrelevant. You're right that one method of creating an alpha channel is to first create an image with visual transparency in it, and then convert that transparency to alpha channel data. But that's only one method among thousands, and it's hardly the most powerful, most efficient, or fastest way to do it. It's actually among the more clumsy and unreliable methods one could possibly choose. Now that you've said Lunapic is your tool of choice, I understand why your thinking is the way it is. Because Lunapic is a web app, its functionality is pretty limited. While I do have to give it props for being pretty decent at doing the few things it does, the fact remains that it simply can't do the hundreds of other things that desktop graphics apps do. It's just not in the same league as professional, or even semi-professional, texturing tools. Lunapic doesn't offer access to channels, which I'm guessing is the reason you appear to be having such a hard time understanding what channels actually are, and how to use them. It also doesn't support layers, which probably explains why you also appear to be a little unclear on what constitutes a layered vs. flat format, and how layer masks work. I'll spare you the technical details about these things for now (unless you do want to read them, in which case I'll be happy to steer you toward articles that I and others have written on the subjects), but suffice it to say your notion that the transparency couldn't appear if it weren't visually present in the working document is 100% false. We don't need to actually see the transparency visually in order to map it in a channel. As a professional texture artist who makes images every single day, I can promise you, I almost never work with visual transparency. I map my transparency in a channel, and that's that. I do this because it's infinitely faster, and way more flexible than doing it the WYSIWYG way.

The logic is very simple. If we color a pixel white in the alpha channel, that pixel will be fully opaque in the assembled image. If we color a pixel black in the alpha channel, then that pixel will be fully transparent in the image.
And if we color a pixel any of the other 254 available shades of gray that can exist in an 8-bit channel, then that pixel will be translucent in the image. The darker the gray, the more transparent. The lighter the gray, the more opaque. That's really all we need to know. This data mapping model is an extremely powerful tool (even if you couldn't care less about the actual math involved) which saves an incredible amount of time, if you understand how to use it. For example, let's say you want your image to grade smoothly from opaque around the edges to transparent in the middle, like a pane of stained glass. If you tried to paint such an image just by visually looking at the transparency itself, it could take hours or days to get it right. But by working directly on the alpha channel, you can be done in literally two clicks. Simply apply a black-to-white radial gradient, that's it and that's all, done, in less than a second. And that's just for one pane, by the way. Imagine if you had an entire stained glass window to create, with hundreds of individual panes. With the WYSIWYG work flow you described, it could take months to make it look good. With alpha channel work flow, it would take a couple of minutes at most.

Josh Susanto wrote: Try it yourself if you like. Here's Lunapic: http://www.lunapic.com/editor/ Here's Convert Hub: http://www.converthub.com/ Conversion to TGA is necessary ONLY because SL does not read the transparency in the PNG, and will only read it once converted to TGA.

I'm quite familiar with Lunapic, and its limitations. The reason SL "won't read the transparency in the PNG" is not because SL can't actually do that with PNG's. It's because Lunapic doesn't give you the option to save the file with the correct color mode. This is a serious flaw in Lunapic, which has nothing to do with SL.
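For anyone who'd like to see the grayscale-to-transparency mapping from the post above expressed concretely, here is a minimal sketch in Python using Pillow and NumPy (my own choice of tools for the illustration, not anything the post prescribes). It builds the single stained-glass pane example: a radial gradient in the alpha channel, black (transparent) in the middle and white (opaque) at the edges, saved as a 32-bit TGA.

```python
# Minimal sketch: alpha mapping with a radial gradient (black = transparent, white = opaque)
import numpy as np
from PIL import Image

size = 512
y, x = np.mgrid[0:size, 0:size]
cx = cy = (size - 1) / 2.0

# Normalized distance from the center: 0.0 in the middle, ~1.0 at the edges
dist = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / (size / 2.0)
alpha = (np.clip(dist, 0.0, 1.0) * 255).astype(np.uint8)  # transparent center, opaque edges

rgb = np.full((size, size, 3), (40, 90, 160), dtype=np.uint8)  # any flat "glass" color
rgba = np.dstack([rgb, alpha])

Image.fromarray(rgba, mode="RGBA").save("pane.tga")  # 32-bit TGA with the alpha channel intact
```

Doing the same thing the WYSIWYG way, by hand-erasing pixels at varying opacities, is exactly the hours-long slog the post describes.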
  11. Lyra Foxclaw wrote: Okay, so after fussing with this for a good hour, I cannot figure out what the issue is. I have a clothing texture I've been working on, and in order to keep from having a white line around the edges, I use an alpha channel in Photoshop and save it as a 32 bit TGA and still keep the transparency I want. This has always worked in the past. However, for some reason, this time it refuses to upload with the transparency. Yes, the alpha channel is checked, yes I'm saving it as 32 bit and not 24 bit TGA. No, the alpha channel is not reversed from what it should be. If anyone can offer help it would be much appreciated.

Here are a couple of common alpha channel related pitfalls:

1. What version of Photoshop are you using? If it's 7.0, then that's your problem right there. That version does not handle alpha channels properly. Also, if you've ever installed any kind of automatic alpha channel generator, chances are it's just a repackaged version of that same borked TGA saver from 7.0, and it WILL ruin your current version of Photoshop. If so, the only fix is to completely uninstall PS, and reinstall. I'll never understand why people feel the need for that auto-alpha garbage. Making a real alpha channel by hand takes all of two clicks. That automated stuff never makes anything any faster, and only causes problems.

2. How many channels are in the working PSD? Is it four, and ONLY four? If you've got any extra channels in there, you'll end up with an all-white alpha in the exported TGA. Delete any extraneous channels, and your TGA will come out just fine.

If neither of those issues is the problem, then as others have suggested, I'd need to see the file before I could determine what exactly is going wrong with it.

Chelsea Malibu wrote: You cant go to 32 bit for these. it will not import the alpha also try a PNG and not a TGA.

Not sure where you might have gotten that information, Chelsea. SL has always been able to import 32-bit TGA files. In fact, before PNG was added as an option a couple of years ago, 32-bit TGA was the only way to do texture transparency in SL. That's how we all did it for years. PNG was only added after an enterprising SL resident donated the import code to LL. Nowadays, PNG works, of course, but there is a catch. Because the PNG format supports multiple forms of transparency, there's a fairly wide margin for user error. It's very common for one to end up with a 32-bit texture where one's intention was 24-bit. The workflow utilized when TGA is the intended output lends far more control to the user. That's one of the reasons TGA has remained an industry standard for texturing for decades.

Josh Susanto wrote: Nothing I've ever tried to import as PNG has ever come through with a proper alpha layer

Considering that there's no such thing as an "alpha layer", and that the PNG format doesn't even support layers in the first place, that's hardly surprising. It's alpha CHANNEL, people. There's a world of difference between layers and channels. In any case, I'm not sure why you've had such trouble importing PNG's with transparency to SL. Usually, when there's an error, it goes the other way around. Unintended transparency in PNG-sourced textures abounds.

Josh Susanto wrote: so I convert everything to TGA first and that seems to work when I use Converthub or Photoshop, but not when I use Irfanview.

I'm curious why you'd use a converter at all. Why not just output a TGA in the first place, directly from your working document in Photoshop?
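If you ever want to double-check what your exporter actually wrote, independent of what Photoshop showed you, a quick inspection of the output file can rule out the pitfalls above. Here is a small sketch using Python and Pillow (my own assumption for the illustration; the thread itself is about Photoshop, and the file name is hypothetical).

```python
# Quick check: does the exported file actually carry an alpha channel, and is that
# alpha something other than solid white (i.e. will it show up as transparency in SL)?
from PIL import Image

img = Image.open("clothing_texture.tga")   # hypothetical file name; works for PNG too
print(img.mode)                            # "RGBA" = 32-bit with alpha, "RGB" = 24-bit, no alpha

if img.mode == "RGBA":
    lo, hi = img.getchannel("A").getextrema()
    if (lo, hi) == (255, 255):
        print("Alpha channel present but solid white: the texture will upload fully opaque.")
    else:
        print(f"Alpha values range from {lo} to {hi}: transparency data is there.")
else:
    print("No alpha channel at all: this exported as a 24-bit image.")
```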
  12. Textures will bake considerably faster from poly surfaces than from NURBS surfaces. Generally speaking, that's really the only practical difference, from a user perspective. (Not counting the differences in modeling techniques, UV considerations, scene setup, etc. that are inherent to the two different media.) When you're baking from NURBS, the renderer first has to tessellate the surface to polygons before it can bake a thing. Depending on your settings, the tessellation is likely to be in the millions of polys, in order to preserve the closest possible approximation of the NURBS surface shape. That slows the process down a lot. When you're baking from a surface that is already polygonal, that re-tessellation doesn't have to happen. It only has to deal with the actual polygons you've put there yourself.

As for why it's crashing on you, there's no way to determine that without knowing about your system, your software versions, what other plugins you've got running in Maya, your drivers, the scene itself, your render settings, etc. There are a million possible factors. If the model you're trying to bake isn't properly set up for baking (no impossible geometry, no collapsed polygons, no double-sided polygons, no overlapping UV's, no collapsed UV's, etc.), that will obviously cause problems. If your scene is overly complex, the render process could be overwhelming your machine. If your graphics hardware and drivers aren't up to snuff, all bets are off. I could go on all day thinking of possible causes. Does it crash every time you bake anything polygonal, or just a certain scene?

I'd suggest baby-stepping your way through the troubleshooting process. Try to bake a very simple object, like a default cube, with simple lighting, and with very basic render settings. Then try a more complex model (but make sure it doesn't have any of the above mentioned problems), under the same settings and lighting conditions. If that works, step up the render settings and bake each model again. If that worked too, then step up the settings again, and bake again. Keep rinsing & repeating until you find whatever setting is causing the problem. Once you step all the way up to the settings levels that you were previously using, and if it still doesn't crash, then you can be fairly certain the problem was with your original scene. If it crashes right away with just the cube and the minimal settings, then I might suggest uninstalling Maya and Turtle, and reinstalling from scratch. Ditto for your graphics drivers. If it still doesn't work after all that, contact Autodesk. Assuming your software is legit, they have good tech support. The Autodesk forums are a good resource as well.

On a side note, just so you know, there's no such thing as "a nurb". NURBS is an acronym, which stands for Non-Uniform Rational B-Splines. The logic goes like this: "I've got a spline. What kind of spline is it, you ask? Why, it's a B-spline. And you want to know what kind of B-spline it is? It's the rational kind. It's a rational B-spline. Oh, and you also want to know if it's uniform or non-uniform? It's non-uniform. So, it's a non-uniform rational B-spline." If you remove the S, you take out the only one of those words that actually means anything on its own. All the other letters stand for words that merely describe the spline. The spline itself is the only actual noun in the whole thing. Rather than "a nurb", the correct terminology would be "a NURBS surface" or "a NURBS curve". There's no other way to say it.
  13. JubJub Forder wrote: So lemme get this right - your recommending more clicks for a TGA file in one post, then in another saying don't resize cause it makes for more clicks if you change your mind?

You misunderstand. Working with alpha channels isn't about "more clicks". It's about avoiding all the hundreds of extra steps that are necessary throughout the entire image creation process if you're not using them. It's about making things easier, not harder. Say I want to make a simple image that grades from transparent to opaque. If I've never been taught about alpha mapping, I'm going to have to spend hours with the eraser, subtly decreasing the opacity, line by line by painstaking line. On the other hand, if someone was kind enough to explain the (exceedingly simple) logic behind how alpha mapping works, so that it has become an inherent, ingrained part of my basic thought process when approaching the entire subject of image creation, I'm just going to know instinctively that all I have to do is apply a simple black-white gradient to the alpha channel, and I'm done in less than a second. You really want to tell me WYSIWYG is the easier method? Again, there's a big difference between what's immediately obvious, and what's easy.

JubJub Forder wrote: We're trying to help a beginner with simple tips.

Exactly. I've been helping beginners with this for years and years and years. I can promise you from LOOOONG experience that teaching people good non-destructive work habits right from the start makes things infinitely easier on them. By treating alpha channels as some mysterious thing that only advanced users would ever touch, you make things so much harder on the student, it's almost criminal. I teach people about alpha mapping logic in their very first lesson, and from there, they never have to look back. I treat it like what it is, the most fundamental part of how graphics works. When the student isn't taught to see it as a big deal, they never become afraid of it, never think of it as hard or foreign. They just use it, period.

Since simplicity is your goal, consider this. It actually takes several times more chemical reactions in the brain to un-form a habit than to form one. It's literally orders of magnitude harder for us to forget than to remember. So, if we're presented with two options, ordered as "the beginner's way" and "the advanced way", most of us will just stick with the former forever, even if the latter is actually easier. Once we've formed a habit, everything else becomes "hard", whether it actually is or not. I take the subject of habit-forming very seriously in my teachings. I don't want any budding texture artist ever to have to struggle to re-learn something that rightly should have been taught to them in the beginning. There's simply no such thing as "beginners' methods" and "advanced methods". If you yourself were not taught about alpha channels right in the beginning, then it's possible that no amount of arguing from me will ever change your perception that they somehow have to be hard for newbies to understand. But if that's indeed the case, it only underscores my point, as it means your own habits are firmly embedded, and can't be easily broken.

Here's something you might want to consider. On the old, old SL forums (yes, there were old ones, and old old ones before them; these are the new ones), the question of how to create and work with alpha channels used to come up literally several times a day.
It was the number one most frequently asked about subject, literally more common than all the other FAQ's combined. So, I took it upon myself to write a "transparency guide", which explained what alphas are, what they're for, how they're used, and how to make them, in plain English, very newbie-friendly terms. That guide became the very first thread ever stickied in those old old content creation forums. Within three days of its appearance at the top of the texturing forum, virtually all the questions about alphas stopped. The frequency dropped from several times a day to once or twice every couple of months. From then on, the answer was always just to ask, "Did you see the transparency guide at the top of the forum?" to which the reply would almost always be, "Oh, I didn't notice it. Thanks for pointing it out," and then they'd get it. To this day, I still get thank you notes from successful in-world designers (then newbies) who say they never would have learned to do the things they do had it not been for that guide. I mention this not to toot my own horn, but simply to demonstrate just one example of how presenting people with the right information in the very beginning yields success. When the old old forums were replaced with the old forums, stickies were no longer supported, so the guide disappeared into the archives. Now that the new forums are here, and they do support stickies, perhaps it's time to resurrect it. Clearly the need for it hasn't gone away. It could use some updating, though, so I'll get on that as soon as I can.

JubJub Forder wrote: PNG is a simpler format to use because it has less clicks for process. A format that you can easily see - no alpha channels hidden from view to trip a beginner.

I couldn't disagree more. What if I'm a beginner, and I want to make something with varying levels of transparency, like a stained glass window? It's seemingly simple in concept, but in actual practice, it's a somewhat challenging item for a beginner to create successfully. If I haven't been taught about alpha channels, it's going to take me hours upon hours to paint/erase each of those little panels with their graded opacities. The image as a whole could take days to create. Further, say I want to do three different versions of the same window, in three different color schemes. If I'm working with WYSIWYG methodology, I'm going to have to repeat the entire thing three times. I'm looking at potentially weeks of work, just for three little windows. Man, texturing is hard!!! Maybe I shouldn't even bother trying.

Now let's say a kindly instructor happens by, and informs me about how alpha mapping works, in terms that I as a newbie can easily understand. Wow, now all of a sudden this project is easy. It's only going to take me a few minutes to create all those different transparency levels, since I can just paint them into place, as shades of gray on the alpha channel. Then I can copy that same alpha to each of the other images, and all I have to do to create the other versions is just change the colors. The transparency levels are already taken care of. Wow, this is exciting! I can't wait to see what else I've been missing out on up until now!

I can't tell you how many times I've rescued people from utter despair by showing them how alphas can save the day so easily. I make sure my own students never have to experience that kind of frustration, because as I said, they just don't know any different. Alpha mapping is part of how they're taught to think about imagery, right off the bat.
I've said it a thousand times before, and I'll repeat it once again. Just because something isn't necessarily immediately obvious doesn't mean it's not easy. I promise you, WYSIWYG is almost always harder than alpha-map work flow. As for your notion that alpha channels are somehow "hidden from view", I'm not sure where you're getting that. In programs like Paintshop Pro, and PS Elements, maybe, since those programs lack a channels palette. But in Photoshop, everything's right there. For people using a sans-channels-palette program, I recommend putting all the layers in a group, applying a mask to the group, and using that as a proxy for the alpha channel. When everything is done, simply save the mask as a channel. This method works perfectly well in Photoshop as well, if anyone feels they need to see the transparency as they work, rather than just trusting the grayscale values to do their job. So, nothing need be "hidden from view", ever.

JubJub Forder wrote: You may recommend a beginner not avoid 'issues' and spend more time learning your 'proper way' before they get results - I don't.

Again, you misunderstand. I don't recommend doing anything at all BEFORE getting results. I recommend getting results right from the start, by learning the most universally applicable ways to get them, in the very beginning. I don't ever want to put someone in a position of forming a habit they'll later have to struggle to break.

JubJub Forder wrote: I have 23 years experience with Pshop much of it fulltime and I still don't know every aspect of it - I don't pretend to... but i certainly don't try to help people learn by recommending harder ways, or things they don't actually need to know. It's hard enough to learn this stuff as it is.

It's great that you've accumulated so much Photoshop experience. But how much of it has been in teaching budding texture artists? From the way you talk about it, I would venture to guess that the majority of your experience in using Photoshop has been in doing things like photo editing. Photographers tend to be taught this stuff in a very different order from how texture artists are taught. I teach everyone the same way, including photographers. But knowing how most photographers traditionally get presented with this stuff, I can appreciate why you're having a hard time seeing what I'm advocating as anything other than bass ackwards. I can assure you it's not.

Web designers tend to have it worst of all, in my experience, by the way. They tend to be taught very linear methodologies, which aren't always the most practical when artistry is the goal. I had an interesting discussion with one just the other day, in fact, when he happened to stop by. I was editing photos for one of my real estate clients (yes, I do more than just 3D work), and he was pretty dumbstruck by my methods. The guy's been a photo editor for 20+ years, and he'd never been shown 99% of this stuff. Like you, he thought alphas were for "advanced" people when he started. End result, two decades later, he's still never embraced them. When I think of all the over-billing to his clients, since he's racking up the hours by doing it the hard way with WYSIWYG, it's almost criminal.

As for teaching people things they don't need to know, there's no such thing. It's ALL need-to-know. That's not to say there's no order to how this stuff is best absorbed. There certainly is. But the notion that alphas can't come first is patently absurd.
JubJub Forder wrote: A long time accepted, and general rule of thumb, is to start with a larger size and size down... you can argue the exceptions all ya want - its still a general rule of thumb thats useful for beginners and for most situations. And i point to the pictures above to prove our learner already has useful/better results.

Not sure where you're going with this paragraph. I stated as much, myself. OF COURSE working at a large size, and then downscaling afterward, is almost always a good idea. I never disputed that. Quite the opposite, I encouraged it. I just don't like when people present it as a crutch to work around problems that should never exist in the first place.

JubJub Forder wrote: Perhaps you've had to write this stuff "thousands of times over the years" because you're over complicating things and people don't get it?

No, I've had to write it thousands of times because there are always people out there who haven't read it yet. That's just common sense. This subject appears and reappears all over the place, all the time, and I was just venting about that in what I thought was an obviously tongue-in-cheek manner. Perhaps the tone of the statement didn't come across as clearly as I'd thought it would. I certainly never meant to imply that it's the same people asking the same questions every time. That would be pretty silly, wouldn't it?

I've never had anyone not get it. I've had people refuse to even try to learn it, of course. Unfortunately, there's no changing that in some people. But I've never had anyone actually sit down with the material and go, "Uh, I can't understand this." It's simple, and really easy to absorb, for anyone who wants to try. Your stubborn insistence that this is "overcomplicated" is frankly insulting to the intelligence of every newbie out there. You're presenting it as unnecessarily scary. Your argument is troublingly pessimistic, and arguably detrimental to the open-minded learning process of anyone who might be seeking to learn this stuff for the first time. There's absolutely no reason people can't learn to use alphas right from day one. To insist otherwise, just because you yourself maybe didn't have that opportunity, helps no one.
  14. Rolig Loon wrote: However, anyone can cam in from a neighboring sim and see your screen with no problem unless your sim is totally surrounded by water. :smileywink:

That's true, but they wouldn't be able to see or hear the media stream unless they were physically in the parcel. Camming in from outside, they would just see the static media texture, not the movie.
  15. JubJub Forder wrote: Another hint: Use .PNG instead of .TGA - you won't have trouble with white edges, managing alpha channel, etc.

Oh, how my heart goes out to people who refer to alpha channels and halo-prevention as "troublesome". I've written so extensively on this topic, so many thousands of times over the years, on this very forum and elsewhere, but still, it seems there will always be those who just assume that using alpha channels must be difficult, simply because it's not immediately obvious. People end up making things so much harder on themselves this way, it's so sad. All it takes is literally an hour or two of learning at most in the beginning to save countless thousands of hours over the course of your texturing activities.

The fact is it takes all of two clicks to create an alpha channel, once you know how. And depending on how smart you work while you're creating the image itself, it's anywhere from zero to three clicks to make sure there's no haloing going on. If you're using the WYSIWYG work flow, you're wasting so much time, and sacrificing so much control over what you're doing, it would make your head spin to witness even 1% of the benefits you've been missing out on. There's nothing faster, easier, or more reliable than alpha-map work flow. That's why it has been THE staple of the entire graphics industry for the past 40 years, and why it will likely continue as such for countless decades to come. And if we really want to do the logic behind it justice, it's worth mentioning that the very same kinds of processes were utilized in analog photography long before the first computer was ever even dreamed of. Trust me, if photographers in primitive homemade darkrooms could composite imagery by using masks, so can you. There's really nothing to it. If you need help, ask. Don't just dismiss what you don't yet understand.
  16. Josh Susanto wrote: So far, does anyone have a reason NOT to resize things larger in order to get smoother curved edges?

Good question. I can give several reasons:

1. It's completely unnecessary. If you work properly, right from the start, to ensure that all your lines are anti-aliased as soon as you create them, then you'll never have to worry about this issue. Resizing an entire image, just for the sake of smoothing a line that should have been smooth in the first place, is nothing more than a waste of time.

2. It sacrifices control. As I said in my earlier post, I want to directly control where each and every pixel in my images ends up. I don't want to rely on automation for things like this, ever. Uncertainty is simply not in my job description, and it shouldn't be in yours either.

3. It can be unnecessarily repetitive. If after resizing, the line doesn't look quite how you might have wanted, you have to do the whole damned thing over again to make it right. And if it's still not right, then it's rinse and repeat, and repeat, and repeat... This can be tremendously time consuming, especially if you're working with a complex texture, or with a series of many textures.

4. It's semi-unpredictable. So far, we've been discussing this as if there will only be one line to smooth in the image. Consider that a good game-quality texture is very likely to have dozens, if not hundreds, of individual layers in it, each of which will have elements that have their own individual edges. Do you really want to have every line look jagged while you're working, so you have no idea what your final result is truly going to look like until after you've finished? Not only would that be a maddening, tear-your-hair-out experience throughout the entire work process, what happens if you then, upon downsizing the image, discover that it looks like crap (which it likely will, if you've been working so blindly all along)? Do you really want to have to start over again from scratch, only to repeat the same kinds of mistakes, all because you never bothered to learn how to prevent them? With that approach, an image that should have taken a few hours to create could take days or even weeks. Not cool.

5. It's destructive. As I mentioned earlier, I always want to work as non-destructively as possible. I want infinite freedom to go back and change things as many times as I want, without making any sacrifices to quality and without any unnecessary addition of time. Further, I always want to make sure that as many image elements as possible are reusable for other images. For example, if I want two or more garments to have the same neck line, it would be a waste of time to have to create it more than once. I'd rather just copy the line itself from one to the next. The vector path/layer mask method I outlined is infinitely re-editable, and infinitely transferable from image to image.

6. It can be overkill. What if I've got an image in which I want smooth lines and jagged lines, both? If I apply any process that smooths out everything, I'm screwed. To borrow an expression from modern politics, we want to apply the scalpel here, not the machete. Again, I want direct control over the smoothness and jaggedness of each of my lines, in real time. The last thing I'd ever want to do is allow any automated process, be it a resizing algorithm or anything else, take away my powers of decision.

7. Forgive the expression, but it's a "dummy's" way to work. I can never sanction simply covering up a mistake with a band-aid.
While it happens that it will often appear to be effective for the very simplistic kinds of imagery we've been discussing in this thread, that's really as far as it goes. This kind of cover-up approach simply won't work for everything. If you don't learn how to create smooth lines from the get-go, you will fall flat on your face when you run into a circumstance in which resizing alone won't fix your mistakes. I really hate to see people experience that kind of frustration. Therefore, the only methods I will ever teach or recommend are those that are universally applicable. The smart way to work, always, is to prevent a given problem from occurring in the first place, rather than just covering it up after the fact. I could probably go on all day listing more reasons. Here's something that might drive the point home, succinctly. Try this. Take a step back from everything you think you know about making textures, forget whatever habits you've picked up that are likely coloring your outlook about what is and isn't possible, and just think about the topic in the simplest possible terms for a moment. Does it make any immediate logical sense at all that a line that is supposed to be smooth shouldn't just be smooth, right from the start? I'm sure you'd agree, the only possible answer to that question is a resounding no, it doesn't make sense. In simplest terms, if something is supposed to be smooth, it should just be smooth, period. So, with this in mind, the only question worth asking becomes: how do we make a line look smooth, right from the start? The question of how to force a jagged line to become smooth really isn't even relevant. If things are smooth in the first place, there's simply no need to even go there. Prevent problems, and you'll never have to worry about covering them up. Josh Susanto wrote: I use Lunapic ... The tools I have don't even work with TGA. Josh, I can't stress this enough. Do yourself a favor; get better tools! Lunapic is a severely limited online photo editor, not a texture creation tool. If you don't want to spend the money on Photoshop, that's understandable. But GIMP and Paint.Net are both free, and Paintshop Pro is only $99. All three of those options are full-featured image creation programs, more than suitable for high quality texturing at great speed. Lunapic simply isn't. The amount of time you've been costing yourself, and the limitations you've imposed on yourself, by using such an under-capable tool as Lunapic are staggering to think about. It almost makes me weep for you. If the likes of Lunapic are what your texturing experience has been limited to thus far, then I have to say it makes sense why you'd be looking to options like resizing to solve your aliasing problems. 99.99% of the tools we'd normally talk about for preventing such problems (as well as for preventing tons upon tons of other problems) simply don't exist in applications like Lunapic. Just so you know, I've been posting on this forum almost daily for the past seven years, and you're the first person I've ever seen say they've been using Lunapic for texturing. I'm not even sure I've ever seen it mentioned at all, come to think of it. Unless you're a very dedicated masochist, get yourself something proper. Josh Susanto wrote: For file conversion, I use the free download from Converthub, which only does one file at a time, but is at least free. Irfanview is also free, and it does batch conversions.
If a stand-alone converter is what you want, Irfanview is considered by many to be the best one out there. Just about any full-featured image editor will, of course, also do batch conversion, including the aforementioned GIMP and Paint.Net, which are both free. Josh Susanto wrote: A tip for anyone doing partially transparent PNG's is to convert them to TGA before you load them, or you'll tend to get black where you expect the transparency checkerboard. I've never seen that happen before. It's more than likely a symptom of the tremendous limitations of the specific software you've been using. It sounds like the transparency in your PNG's isn't being generated in a way that the SL uploader fully understands. The PNG format supports multiple forms of transparency (a full 8-bit alpha channel, palette-based transparency, and so on), and a limited editor may not be writing the kind the uploader expects.
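If you'd rather not click through files one at a time at all, the PNG-to-TGA step can also be scripted. Here is a minimal, hedged sketch using Python's Pillow library; the folder names are hypothetical, and the script simply forces every image to RGBA so the saved TGAs come out as 32-bit files with their transparency intact.

```python
# Minimal batch PNG -> 32-bit TGA conversion sketch using Pillow.
# The folder names are hypothetical; point them at your own directories.
from pathlib import Path
from PIL import Image

src = Path("textures_png")   # hypothetical input folder of PNGs
dst = Path("textures_tga")   # hypothetical output folder for TGAs
dst.mkdir(exist_ok=True)

for png in src.glob("*.png"):
    img = Image.open(png).convert("RGBA")  # normalize any PNG transparency to a full alpha channel
    img.save(dst / (png.stem + ".tga"))    # Pillow writes RGBA images as 32-bit TGA
```

The `convert("RGBA")` step is the useful part here: whatever flavor of transparency the PNG happened to be saved with gets turned into a plain 8-bit alpha channel before the TGA is written.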
  17. Neelah Xue wrote: So, since no one has anything useful to say on these "new" forums, Excuse me, what?! No one has anything useful to say? No one at all? Wow, way to insult the ENTIRE community that you're trying to ask for help from. On behalf of everyone who's ever posted anything on this site, I take great offense to the whole of your opening statement. I think we'd all appreciate it if you'd edit your post, and delete that part, especially since the fact that your question was answered in less than 40 minutes absolutely 100% proves your comments false. Look, some of us volunteer SERIOUS time here, answering people's questions, providing TONS of helpful information for anyone and everyone who asks, often for not even so much as a thank you. If you can't bring yourself to acknowledge that, well, that's your loss. But kindly don't go around outwardly pretending the opposite is true, calling our efforts "not useful". That's a disservice to everybody here, yourself most of all. The fact is everything posted here is useful. Thousands of people benefit from what's written on these forums, every single day. If you don't want to believe that, again, that's your loss, and you're welcome not to count yourself among them. Just don't disrespect the rest of us by insisting nobody's getting use from the things that clearly ARE important to people other than you. Next time, how about just asking your question, and leaving out the insulting rant? Neelah Xue wrote: Also, as a note, Second Life, regardless of the viewer, DOES use a pie wheel for anything involving right click. Not that it really matters, but just so you know, that's not entirely true. Only 1.x-based viewers use the pie menu. Viewer 2 uses a standard linear context menu, just like you'd find in almost any other program on earth. Of course, if you prefer not to use Viewer 2, that's up to you. But whether you do or you don't, it's not helpful to pretend it doesn't exist. On a side note, for what it's worth, I would encourage everyone to give Viewer 2 a whirl, if you haven't played with it in a while, or if you've never used it at all. It's true that it was all kinds of disastrous when it first came out, but since then, it's come an awfully long way. It really is a fine viewer these days. It's got its share of annoyances, for sure, but then so does every 1.x-based viewer. They're just different annoyances is all. Neelah Xue wrote: It is rather surprising, since more people use Phoenix Viewer than any other viewer in Second Life, even the official one. Where are you getting that statistic? Was it just wishful thinking by a Phoenix fan, or do you actually have a link to published demographics on viewer usage? If it's the latter, I'd love to look at the stats. In the absence of any hard statistics, I'd have to assume that the vast majority of SL users use whatever is most current on the signup page, since most SL users are new. For every one of us old timers who make informed decisions from experience, there are probably at least several hundred newbies who just signed up yesterday. It's also worth noting that there's a sizable percentage of old SL'ers like me who won't touch any third party viewer with a ten foot pole. I was part of the SL Views team that LL consulted with when they were making the decision to open-source the client. I knew long before the general public that third party viewers were to become a reality, and I was extremely cognizant of the potential dangers that could come from their arrival.
We've seen quite a few malicious viewers come and go, including the one that was formerly the most popular third party viewer ever. Call me paranoid if you want (and no doubt some of you will), but I don't want my login info going anywhere except to LL. Those of you who are braver than I in this regard, I wish you the best of luck, and I sincerely hope my fears are unjustified. For my part, I don't think a handful of features that I can easily live without could ever be worth the risk.
  18. Neelah Xue wrote: Don't use filters I simply can't imagine why anyone would suggest that filters should not be used. They're incredibly useful, for all kinds of things. Filters are a tremendously important, arguably vital, part of the modern image creation process. If you haven't been using them, you've been missing out. If you need help understanding what filters do, what they're for, how they work, and/or how to use them for specific texturing challenges, ask away. There are plenty of us here who would be happy to walk you through it. Neelah Xue wrote: what was told to you about "liquefy". That's a bunch of crap, It's not a "bunch of crap". Just because you don't understand someone else's technique doesn't mean it's invalid. Obviously the person who posted the advice about the liquefy filter has been getting success from it, or she wouldn't have posted it. Nobody's here to mislead anyone. That said, I do agree with you that the liquefy filter wouldn't be my first choice for dealing with aliasing. It's not that it wouldn't work; it's just that there are much more efficient ways to prevent the problem in the first place. Neelah Xue wrote: all that will do is blur edges, and still keep pixels visible. That doesn't have to be true. The liquefy filter is no different from any other tool, in the sense that it can be used literally hundreds of thousands of different ways, and for literally millions of different purposes. You MIGHT end up with the results you describe, if you use it a certain way. But you can also end up with all kinds of other results, by using it other ways. My guess is that the person who suggested it has been using it to 'fold' the aliased edges into the filled areas, effectively removing the aliased pixels (jagged edges) from view. There's no reason that wouldn't work. Again, I'd rather prevent the problem in the first place, so that the whole liquefy step would be unnecessary, but that's personal preference. If someone prefers doing it the harder way, then so be it. Bottom line, there are a gazillion ways to skin a cat in Photoshop. Don't EVER suggest that something can't work, just because you haven't personally experienced success with it, especially when someone else so obviously has. Otherwise, your comments only serve to underscore your own lack of experience at best, and your own closed-mindedness at worst. Neelah Xue wrote: Put on a white skin and wear your clothing if you want to see what I mean. I fail to see how just looking at results in isolation, whether good or bad, would explain "what you mean". I could use the liquefy filter right now to smooth out aliased edges, apply the clothing over a white skin, and completely invalidate everything you've said here. Or I could botch the job, and seemingly confirm your claims. All we can say with any certainty is that there are lots and lots of usable techniques. For countless reasons, some techniques will jibe with any given person, while others won't. Please don't do anyone the disservice of insisting that your particular way is the only way. We're here to discuss these things, and learn from each other, not bash each other for daring to use foreign techniques that we ourselves have yet to master. Neelah Xue wrote: Just increase the resolution of your image when you paint it. I.E. paint it at 3000 x 3000 when you are using your template. Working at a large size is almost always good policy, yes. But I would NEVER go with an arbitrary number like 3000x3000. For texturing, stick with powers of two, always.
The image is going to end up at 512x512 when baked into an in-world avatar outfit, no matter what. By sticking with powers of two, from start to finish, you eliminate a huge amount of potential artifacting from uneven divisions in the down-sizing algorithms. I usually work at 1024x1024, or 2048x2048. There's not much point in going bigger than that, but if you really want to, the next step up would be 4096x4096. In any case, I take issue with your use of the word "just" in this context. You make it sound as if starting big is the only way to deal with the issue at hand. You appear to be suggesting working around the problem, rather than tackling it head-on. I can promise you, I can make a neck line at the native 512x512 that would look as good or better than any that could be made by "just" upsizing and then downsizing. Again, my preference is to make sure the edges are anti-aliased properly in the first place, rather than rely on scaling to hide the problem after the fact. There are a number of ways to do this (of course). I usually work with the vector path tools, myself. They're resolution-independent, so they provide a good means to create clean-looking lines at just about any size. Further, they're procedurally transferable from image to image. And, of course, there's always good old-fashioned hand-painting. Use a mildly soft brush, and a Wacom tablet, and you can get super clean lines each and every time, with no fuss to speak of (assuming you're good at drawing/painting in the first place). Neelah Xue wrote: SL will transition everything down to the right size while retaining a much higher detail edge. I don't ever want to surrender control of my imagery's appearance to SL, or to any other automated system. I always want to directly control how lines are anti-aliased, myself. I choose what kind of resizing algorithms are in play. SL doesn't get a say in that. I make sure each and every pixel is how I want it, not how SL thinks I might want it, before I upload anything. The purpose of working at a large size is not so that downsizing will clean up your edges. They should be clean right from the start. The reason to go big is so that you have more freedom of movement while you're working. That's it. You also get the side benefit of having more margin for error with things like seam matching. If you're at normal scale, and a seam is off by one pixel, the mistake will be visible. If you're at 4 times normal size (1024x1024), and you're off by one pixel, chances are very good that the mismatch will get averaged out when you downsize. Now, don't get me wrong here. I'm not advocating that such mistakes are OK, or that anyone should be cavalier about them. You should always make every effort to make sure everything matches 100% perfectly, right from the start. But the reality is we're all human, and such mistakes do happen. Working at large scale increases tolerances. Here's my general advice for how to create clean neck lines (and clean everything else): 1. Work at 1024x1024, or 2048x2048. 2. Draw the lines, exactly how you want them to be, with the pen tool. (And be sure to save the path, in case you want to use it again.) 3. Fill the entire layer with the base color or base pattern you plan on using for the garment. 4. Ctrl-click the path to form a selection from it, and then create a layer mask from the selection. The mask will "cut out" your edges. (The reason I prefer masks is because they're non-destructive.
If you want to change the neck line later, you won't have to repaint anything on the layer itself. Only the mask itself will need to be changed.) You should now have a totally clean-looking neck line, without having to downsize first, and without having to go back in with the liquefy filter (or any other tool) to clean up your mistakes. If it's not shaped quite how you want it, simply edit the mask accordingly. You can do this by altering the vector path and repeating the process to make a mask from it, or simply by painting on the existing mask itself. When everything is done, and looking exactly how you want it, then and only then should you downsize. For best results, export to TGA at full size, and then open the TGA, and resize it to 512x512. Keep your layered PSD at full size. There's never any reason to shrink that.
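For that final step, if you ever want to script the downsize instead of doing it by hand, here is a minimal sketch using Python's Pillow library; the file names and the 2048x2048 working size are hypothetical examples, but the point is the same: stay on powers of two and use a high-quality resampling filter.

```python
# Minimal sketch (Pillow): downsize a full-size working export to the 512x512
# upload size, keeping everything on powers of two. File names are hypothetical.
from PIL import Image

full = Image.open("shirt_2048.tga")             # power-of-two working size (2048x2048)
small = full.resize((512, 512), Image.LANCZOS)  # clean 4:1 reduction with a high-quality filter
small.save("shirt_512.tga")                     # upload this; keep the layered PSD at full size
```

Because 2048 and 512 are both powers of two, every output pixel averages a whole 4x4 block of source pixels, which is exactly the kind of even division that keeps resizing artifacts out of the result.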
  19. Romano Bianco wrote: Image for this post question: ( link) The linked image is just a thumbnail, way too small to see how it relates to the post. Can you post a bigger image? Romano Bianco wrote: the painting on the 3D model is easy to do, the only problem is to make the result be saved as useful UV texture, how do I do that? Photoshop's handling of 3D models and textures can be somewhat confusing, if you don't yet understand what the program is actually doing. One thing that might help you get your head around it is to remember that Photoshop is not a true 3D application. It was never designed for that. It's therefore not capable of organizing a scene the same way a 3D modeling program, or even a true 3D paint program, would. Organizationally, all Photoshop really knows how to do is stack layers together, and composite them to form 2D images. So, Photoshop treats 3D models the same way it does any other image element. It displays the model on a layer, which we call a "3D layer", and all of the model's textures are then grouped as sub-layers, under the umbrella of the 3D layer. To put it in common 3D scene hierarchy terms, the 3D layer is the parent, and the textures are its children. The textures themselves are images, of course, and they can be accessed as such. Once you open up a texture PSB file, it'll behave just like any other PSD in Photoshop. You can give it multiple layers, effects, filters, adjustments, anything and everything you'd normally use. To export a 2D texture from out of a 3D layer, simply do the following: 1. In the layer stack in the main image, notice the 3D layer has a subsection underneath it called "Textures", and underneath that are listed all textures that are on your model. Double-click on the name of the texture you want. It will open up in a new document window (or a new tab, depending on your Photoshop preferences), as a PSB file. 2. Make sure the window with the texture PSB file in it is the active window, and then just hit File -> Save As... Name it whatever you want, and put it wherever you want. That's it. It's really very simple, once you've gotten used to how PS is organizing everything. Romano Bianco wrote: My second question is this: I read this (link) and then noticed that when I double-click a texture in the Layers panel to open it for editing two things can happen: 1) if the UV texture is a bitmap 2D image I get that thing for drop shadow, inner glow, etc. and the 3D > Create UV Overlays is greyed out; When you double-click the texture, it will open as a PSB file, in its own window. Make sure that window is the active one when you go to activate the UV overlay. If any other window is active, even if it's the one with the 3D model in it, the option will be grayed out. Remember, the option is there in order to show you how the UV map fits over the 2D texture canvas. It's not there for displaying UV's on the model itself. If you want to see the edges and/or vertices on the model, you'll find those options under 3D -> Render Settings (in the window that is showing the model, of course). Note, the UV Overlays option will also be grayed out if your model has no UV's. So make sure (in your 3D modeling program) that the UV's are there before you import the model into PS. Romano Bianco wrote: 2) if the UV texture is part of a 3D file I imported to PS I get a 3D editing panel and the 3D > Create UV Overlays is still greyed out. I'm not sure I understand the question. The texture should ALWAYS be part of the 3D model.
You have to assign materials to the model, in your 3D modeling program, BEFORE you import the model into Photoshop. Otherwise, it just won't appear to have any textures on it at all. By the way, just to make sure this has been said, don't ever change the UV mapping of a sculpty. Sculpties require a perfectly uniform UV grid, spanning the entire 1:1 UV canvas. If you change that in any way, you'll bork your sculpty. So don't use the "reparameterize" function in PS CS5. Paint diffuse textures all day long, but leave those UV's alone. Ditto for the avatar. If you change the mannequin's UV's, the clothing and skins you make won't fit the in-world avatar. Don't do that.
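Just to make the "perfectly uniform UV grid" idea concrete, here is a minimal Python sketch. The 33x33 grid size is only a hypothetical example; the point is simply what evenly spaced UVs spanning the full 0-1 canvas look like, and why any hand-editing of them breaks that regularity.

```python
# Minimal sketch: generate a perfectly uniform UV grid covering the whole
# 0..1 canvas, the kind of layout a sculpty depends on. Grid size is hypothetical.
rows, cols = 33, 33  # hypothetical vertex grid

uvs = [(c / (cols - 1), r / (rows - 1))  # evenly spaced, edge to edge
       for r in range(rows)
       for c in range(cols)]

print(uvs[0], uvs[-1])  # (0.0, 0.0) ... (1.0, 1.0): every step is the same size
```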
  20. Ejjarufaf OHare wrote: How do I make it a line instead of a point? To collapse the vertices to a point, you would have had to scale along two axes. If you scale along just one axis, you'll collapse half the points to meet the other half, thus forming a line instead of a point.
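To illustrate the geometry, here is a minimal NumPy sketch; the square of four vertices is just a hypothetical example. Scaling to zero along two axes drops every vertex onto a single point, while scaling to zero along only one axis leaves the vertices spread along a line.

```python
# Minimal sketch (NumPy): collapsing vertices by scaling axes to zero.
# The four corner vertices are a hypothetical example.
import numpy as np

verts = np.array([[ 1.0,  1.0, 0.0],
                  [-1.0,  1.0, 0.0],
                  [-1.0, -1.0, 0.0],
                  [ 1.0, -1.0, 0.0]])

point = verts * np.array([0.0, 0.0, 1.0])  # X and Y scaled to zero -> every vertex at the origin
line  = verts * np.array([1.0, 0.0, 1.0])  # only Y scaled to zero -> vertices pair up along X

print(point)  # all rows identical: a single point
print(line)   # rows keep their X values: a line
```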
  21. Cool video. Looks like Deep Paint 3D has indeed had quite a few bells & whistles added to it since I last tried it. I like the technique the artist was using (although the end result did look a bit creepy). Mudbox approaches projection painting slightly differently, but you can definitely do the same kind of thing. Allow me to respond to your last comment: sounds Turner wrote: If the program's any good, i shouldn't have to go back to editing in 2d. I have to take issue with that notion, for two reasons: First, if your intended purpose in getting a 3D paint program is to avoid 2D work, you're going to be sorely disappointed. You're always going to have to do at least some 2D editing, no matter what. How you should be looking at 3D painting is as an enhancement to your existing 2D techniques, not an outright replacement of them. Notice the video you posted was chock full of examples where the artist went back and forth between 2D and 3D views, to paint in both. That's how a 3D paint program is meant to be used. No matter what you're texturing, there will always be lots and lots of points throughout the process where working in 3D just isn't efficient or practical. 2D work remains essential, always. To put it bluntly, if you don't want to work in 2D at all, then don't be a texture artist. I wish there were a gentler way to say that, but there really isn't. Second, I'm a little troubled by your choice of wording, with "if the program is any good". In this particular context it's not about how good or bad the program is. It's about how good you as an artist are at using whatever tools you've got. I could hand you the be-all-end-all of 3D paint programs right now, but if YOU are not (yet) good at texture painting in both 3D and 2D, it's not going to yield good results for you. Conversely, I could hand you a total piece of crap, like MS Paint, and if you're a good enough texture artist, you'll be able to make fantastic skins with it. It'll take you a lot longer with worse tools than with better ones, of course, but the end results will be the same either way. Consider this. 5000 years ago, Egyptian stone masons were producing some of the finest works this planet has ever seen (before or since), with nothing more at their disposal than hammer stones, and the occasional copper chisel for the very lucky. Today, sphinxes and ka statues can be turned out by the truckload, carved en masse by CNC machines. But are they better sphinxes? Better ka statues? Absolutely not. The modern tools merely increase the speed of the work. What used to take 40 years now takes 40 minutes. The quality of the results is an entirely different matter, which has little if anything to do with the specific tools. I doubt there are a whole lot of people who could tell the difference between a hand carved statue and a machine carved one. It's the same thing with whatever 3D paint program you end up choosing. It'll speed up various aspects of your work, vs. doing everything in 2D. But it won't make your textures look any better, all by itself, and it certainly won't eliminate the need for 2D work, no matter how good its 3D capabilities. Again, what it really comes down to is you, not the program. If you're good at 2D painting, chances are you'll be good at 3D painting, too, whichever program you use. But if you're not yet as good at the one as you'd like to be, the other isn't going to do a whole lot for you, again regardless of which specific program we're talking about.
It may well turn out that the excitement of having a new fun tool at your disposal will prompt you to practice more, and the practice will make you better. But understand, it's you doing the practicing, not the program. Artwork is 99% about the artist, and 1% about the tools, always. Add those two points up, and here's the better way to have phrased your last sentence. "If the program is good, and I find I enjoy using it, then once I get really good at 3D painting, I won't have to do all that much 2D painting." I hope that makes sense.
  22. I've read through this thread three times now, trying to figure out where exactly the "snipes" were, as it was put, that served to spark the, uh, 'bleep war'. I just can't see it. I don't know if someone was just bored, and looking for a fight, or if there's some history between the two participants, of which the rest of us are unaware. (And please, don't anyone recount that history, whatever it might be, if indeed it exists. We really don't need to know. It's none of our business.) Whatever's actually going on, I'm gonna suggest you both let this one go, guys. As a reader, I can promise you, the argument appears to make absolutely no sense. It just seems to have popped up, totally out of left field, for no particular reason at all. I haven't a clue what you two are on about, and I very much doubt many other readers can tell either. Whatever it is, it's a discussion best had in private, or not at all. The rest of us really don't need to hear it. Not to be a stone caster, I'll of course readily admit I've been involved in my share of equally inexplicable battles, myself, on occasions when someone or other has decided to fly off the handle for no reason I can fathom. I'm always bewildered when that happens, but unfortunately, I'm not always able to resist the temptation to fire back when I feel falsely and unfairly accused of having had malicious intent for a post. In such cases, it often takes a random good Samaritan to jump in and say, "Both of you, stop it!", in order for me to be reminded I probably should have had the good sense just not to have responded in the first place. There's no way to win in these situations, after all. So, call me the random good Samaritan this time. I mean this in the nicest possible way: "Both of you, stop it!" By the way, the link in question, the one that just said "Clothing & Skin Templates", now says "Higher Resolution Clothing & Skin Templates". Everyone happy now?
  23. This is one of those "ask a hundred people, get a hundred answers" questions. The only really true answer is this. The best program to use is whichever one you have access to, and that you know how to use. As Luc said, they're all complicated. You're not going to get good with any of them overnight. It's going to take time, no matter which one you choose. To put it bluntly, either commit to the long haul, or just don't start. The journey's a lot of fun, and very rewarding, but it's no small investment of time. I love Maya, which makes it the best program for ME, for a lot of things. But there are plenty of other programs out there that are just as capable at doing the things I use Maya for. If you find Maya "speaks your language" like it speaks mine, then it'll be the best choice for you as well. Or if some other option, like Max or Blender or Lightwave or any of the dozens of other modeling programs that are out there, fits your way of thinking, then that'll be best. The reason I use Maya for sculpties is not because it is inherently any better at making them than any other program. It just happened to have been the program I was already using when sculpties were invented. I use it every day in my work. Now, as Gaia mentioned, Maya does have THE best included help documentation of any program in existence, hands down. Because of that, it's relatively easy to learn, compared with some of the other choices. Also, the entire program is based upon a singular underlying logic. Every part of it works the same way as every other part. So, once you get into the groove with it, you can intuit your way around quite effectively. Start with the Getting Started tutorials in the help file. That's where everyone, from Oscar-winning Hollywood animators to casual hobbyists, begins. Go through the whole series, from start to finish, and you'll gain a really solid mastery of Maya's basics within anywhere from a few days to a few weeks, depending on how much time you want to put into it each day. From there, you'll be in a position to approach pretty much whatever else you want to do with the program, with relatively little struggle. Whatever you do, don't ever start any program with "I want to make _____ " in mind, whether the blank happens to be sculpties or anything else. If you put the cart before the horse, you'll only experience frustration. The only way to proceed that always works is to learn the program itself first, and then apply that knowledge to whatever it is you want to make. Master the basics that apply to everything, and then specialize from there. Sculpties in particular actually compound the problem, if you try to learn them before you're ready, by the way. They're total oddballs, unique to SL, and the techniques used in making them aren't completely applicable to anything else. Learning to make sculpties won't help you become a good 3D modeler, but learning to be a good 3D modeler first will absolutely prepare you to be good at making sculpties, if you follow my meaning. Again, it's generalize first, then specialize. That's the only surefire way to success. Regarding Blender, check out Gaia's and the Machinimatrix team's tutorials at http://blog.machinimatrix.org/3d-creation/video-tutorials/ , if you haven't already. They're superb, and they totally changed my outlook on Blender.
  24. I haven't used Deep Paint 3D in several years. I remember I had a mixed review of it at the time. While it was certainly adequate to the task of applying color to a surface, it was lacking in a lot of ways, mostly in terms of its then quite limited tool set. Everything I made with it, I had to enhance or refine in Photoshop afterward. But 3D paint programs have come a very long way since then, with Zbrush and Mudbox leading the charge. I'd be surprised if Deep Paint 3D hasn't learned a trick or two in the time since I last used it. Even back then, if all you wanted to do with it was paint skins and clothing for SL, it was sufficient. If you've already committed to Deep Paint 3D, then I'd say keep using it. No doubt you'll develop your technique with it, and do just great. But if you're still shopping around, I highly recommend Mudbox. It's hands down the best 3D paint tool I've ever used. I absolutely love it. Most other 3D painters I've tried get all kinds of confused whenever the UV layout is particularly sloppy (like it is on many parts of the SL avatar), but Mudbox just doesn't ever seem to care. The paint goes where you tell it to go, and that's that. You can also paint in 2D and 3D at the same time, which is great. Give it a whirl.
  25. Your render settings look right. I might suggest turning on backface culling, as that will improve performance a bit, but that's obviously a separate issue. If the problem is not happening in older versions of PS, then you can be reasonably sure it has nothing to do with your computer. It's almost certainly an issue within PS CS5 itself. Do you have all the latest updates from Adobe? I know there were (and maybe still are) quite a few bugs in CS5, which prompted many a user to go back to CS4 for certain things. That's one of the reasons (along with lack of money) that I haven't upgraded myself, yet. I'm still using CS4. Have you tried asking about this on the Adobe forums?