Everything posted by Chosen Few

  1. Porky Gorky wrote: One trick I have used a few times now is to repeat the texture in SL, then layer another plane right on top with a baked alpha shadow. You retain resolution from the repeats in SL and also get the shadowing, however it obviously pushes the LI up slightly. Good tip, Porky. I've also been doing that for years. It's really the only solution in a lot of cases, given SL's extreme limitations. Hopefully, SL will get lightmap support some day, and we won't have to use those kinds of hacks anymore. The three things on my wishlist for the materials project were normal maps, spec maps, and lightmaps. Looks like we're getting the first two, but not the third. That's a shame. While normal and spec will allow for tremendous improvements in the overall look of models in SL, which will be wonderful, lightmaps would not only contribute just as much, but judicious use of them could also cut SL's texture overhead by well over 90%, which is arguably far more important.
  2. Sae Luan wrote: Both of these items were automatically mapped, then adjust using the layout options once in the UV map to save time on my end due to a lot of RL being busy and SL event deadlines. Just so you know, because Maya's automatic mapping tends to create a ton of individual shells (islands), you end up with much higher download costs in SL than you otherwise would. Each UV point counts as a vertex. Say three shells have a UV point in common: that point now counts as three vertices, where a single merged shell would have needed only one. Multiply that by the sheer number of shells the automatic mapping tends to create, and it can really add up fast. Your land impact can go through the roof on a model that would otherwise be really low. When I use automatic mapping, I consider it just a starting point. The very next thing I do is go in with the "Move and Sew UV Edges" tool, to very quickly combine as many shells together as I reasonably can. On a typical model, you can cut the number of shells down by at least 80%, in just a minute or two. That said, I rarely take that approach anymore, because a few years ago, I discovered the magic of Unwrella. It's an incredible time-saving plug-in for UV mapping that works far more intelligently, a whole lot more easily, and way faster than Maya's built-in automatic mapping. Just define where you want your seams to be, and Unwrella does the rest. It automatically calculates the optimal UV layout, ensuring uniform texel density across the entire surface with the least possible texture stretching. To do the same thing by hand can take anywhere from 10 to 100 times as long, and the results are rarely as uniform. Give the free trial a whirl. You'll be amazed. When you do decide to buy it, which I'm sure you will, you'll be pleased to find that Unwrella isn't even pricey, at just under $200. With the amount of time it saves, it pays for itself in the first job.
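To make the vertex-count point concrete, here is a small Python sketch (not Maya code; the data layout is hypothetical and purely illustrative): a GPU-style vertex is split wherever the same 3D position carries more than one distinct UV coordinate, so every extra shell boundary duplicates vertices.

```python
# Illustration of why extra UV shells inflate the uploaded vertex count.
# A renderer splits a vertex wherever one 3D position has several UVs.

def gpu_vertex_count(position_to_uvs):
    """position_to_uvs maps a 3D position name to the set of distinct
    UV coordinates assigned to it across all shells."""
    return sum(len(uvs) for uvs in position_to_uvs.values())

# One shared position sitting on the border of three separate shells
# carries three distinct UVs, so it uploads as three vertices, not one.
merged = {"p0": {(0.1, 0.1)}, "p1": {(0.2, 0.1)}}
split = {"p0": {(0.1, 0.1), (0.5, 0.1), (0.1, 0.5)}, "p1": {(0.2, 0.1)}}
print(gpu_vertex_count(merged))  # 2
print(gpu_vertex_count(split))   # 4
```

Sewing shells together collapses the duplicate UVs back into one, which is why merging shells drops the download cost.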
  3. If you're asking about how to rotate the objects by hand, the answer is simple. Just select them all, and rotate the entire selection via the on-screen manipulator. Don't try to use the numbers in the editor, as that will only affect the last selected object, not the whole selection. If you're asking how to get the whole thing to rotate continuously on its own, like a revolving restaurant or something, you'll have to use some more advanced scripting. I'm guessing you used TargetOmega to rotate the Rez-Free box? If so, the reason that only rotated the box is because it's just a client-side visual effect. The server doesn't know it's happening, so the other objects never get the message to move. To make it work, you'll need the rotating box to announce its orientation, at regular intervals, so the other objects can know about it, and respond accordingly. The results may or may not synchronize all that well, depending on network traffic conditions, and other factors. You could also try using pathfinding options, to make the individual objects move along a circular path. I really haven't played with the pathfinding tools myself yet, so I can't speak to how well it works, or what the pitfalls might be. Synchronicity may well be a problem with that, too. I would suggest you ask about this over in the scripting forum. There are likely plenty of people over there who could offer better guidance than we can here. ETA: Your post brings back memories for me, by the way. One of the first things I ever built in SL, way back in the day (over nine years ago), was a giant pyramid. Its footprint was a half sim x a half sim. While not exactly gargantuan by today's standards, it was one of the very largest buildings in the world at the time. (The entire grid was only about a hundred regions, back then.) Inside the pyramid were gardens, pools, a nightclub, apartments, and a whole bunch of other stuff I don't quite remember, on multiple levels. Nothing rotated, though.
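The "announce its orientation" approach boils down to each listening object computing a new position on the same circle around the hub. This Python sketch shows the math only (in SL you'd do this in LSL; the function name and 2D simplification are mine):

```python
import math

def rotate_about(center, positions, angle_deg):
    """Rotate XY positions around a shared center, the 'revolving
    restaurant' idea: every object moves along its own circle."""
    a = math.radians(angle_deg)
    out = []
    for x, y in positions:
        dx, dy = x - center[0], y - center[1]
        out.append((center[0] + dx * math.cos(a) - dy * math.sin(a),
                    center[1] + dx * math.sin(a) + dy * math.cos(a)))
    return out

# An object 4 m east of the hub, rotated 90 degrees, ends up 4 m north.
print(rotate_about((0, 0), [(4, 0)], 90))
```

Each object would apply the broadcast angle to its own stored offset, so they all stay in formation even if a message arrives late.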
  4. You're welcome, Nacy. Let me respond to your additional questions. Nacy Nightfire wrote: why is this not also the case when I repeat the texture right in SL? When you repeat the texture, whether it be in SL, or in Blender, or anywhere else, you're not doing any resampling of the image. Each instance of the image remains at its full resolution. Again, if you take a texture that is 1024 texels wide, and you repeat it 10 times across the surface, you end up with a surface that is 10,240 texels wide. Image-wise, it's no different than if you applied the texture to a plane, and then rezzed 10 copies of the plane, side by side. Nacy Nightfire wrote: I mean also in Photoshop before you commit the texture to 1024... are both SL and PS showing the texture as 1024 x the number of repeats in actual pixels? Short answer: yes. In my Photoshop example, you'd be creating an image 10,240 pixels wide, and putting 10 instances of a 1024-pixel-wide image in it. It's no different when you repeat a texture in a 3D program. The surface has no way of limiting how many texels are on it. If you repeat the texture 10 times, you just get 10 times as many texels. Each individual repeat remains at its original resolution. Nacy Nightfire wrote: I think that's what you said, but I would have thought SL reduced the texture to 1024 after you asked it to repeat it x number of times. It sounds like you were thrown off by the fact that SL's texture image size limit is 1024. That only applies to the actual image files themselves. It says nothing about what happens when an image gets applied to a surface. Again, the surface has no way to know or care how many texels in total are on it, and the texture has no way to know or care how many times it gets repeated. The texture itself remains the exact same image, whether it's repeated once, or a hundred times, or 1/100 of a time. 
If it actually did work the way you were thinking, a complete re-bake to a brand new file would be required every time you change the number of repeats. That would require an enormous amount of extra processing and storage, and would be awfully slow, not to mention it would largely defeat the visual purpose of using repeats in the first place. Nacy Nightfire wrote: RE-editing to add: I realize I didn't understand the difference between texels and pixels (actually I never heard of a texel before). So I'll study up on the topic and I'm sure your explaination will become much clearer to me (and now I admit I thought I understood what you wrote, but clearly I didn't because I'm missing some important technical info here) "Texel" is just a term of clarification, to distinguish between the pixels that make up a texture, and the pixels that make up other things, like your screen. If we only used the word "pixel" every time, it would be harder to keep it straight.
  5. Think about the mechanics of what's happening. Say you tile that source texture 10 times across the surface. The surface is now 10,240 texels wide, meaning each texel (texture pixel) is really tiny. So, of course, it still looks great when you zoom in on it. Now, you bake a new texture from the surface, and the end result is only 1024 pixels wide. You've reduced the resolution by 90%. Each new texel is an amalgam of 100 others. So, of course the new texture will be blurry, compared to the tiled original. It's the same thing that would happen if you tiled the image 10 times in Photoshop, to produce a picture 10 times as wide, and then you reduced the size of the whole thing by 90%. It would end up relatively blurry in that case, too.
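The arithmetic in that post can be sketched in a few lines of Python (the function name and per-axis framing are mine, but the numbers are the ones from the post):

```python
def bake_resolution_loss(source_px, repeats, bake_px):
    """How many tiled source texels get averaged into one baked
    texel, per axis, when a tiled surface is baked back down."""
    tiled_px = source_px * repeats  # effective width of the tiled surface
    return tiled_px / bake_px       # source texels per baked texel

# Ten repeats of a 1024-wide texture, baked back to 1024 wide:
# each baked texel averages 10 source texels per axis (100 in 2D),
# which is exactly the blur described above.
print(bake_resolution_loss(1024, 10, 1024))  # 10.0
```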
  6. Asset server delivery hiccup, maybe? As you probably know, every texture in SL contains 4 progressive LOD's. If the highest level isn't trickling in, it could conceivably cause this issue, assuming the peripheral areas are more intrusive in those versions. If that's the case, then a larger bleed could help hide the problem. You can either set it a lot higher in Maya, or you could bake with alpha enabled, and then use a solidify filter in Photoshop, for 100% bleed coverage. To clear up the view on the existing textures, I'd suggest you tell those people to clear cache before relog, if you haven't already.
  7. Could you define "doesn't seem to work"? What exactly is the problem you're seeing? You should be able to open your Maya 2012 (or older) files in 2013 without any trouble. The exception would be any scenes that are heavily dependent on version-specific scripting. Opening one of those in a different version could cause some things in the scene to break. But even in such cases, scene elements that are commonplace, such as geometry, bones, etc., should still work.
  8. Min Barzane wrote: Uploader isnt confused at all,colada exporter triangulates mesh thus making ewery quad face in to 2 triangles! Its just way it is! SL mesh isnt quad based but tris based! Simple as that. Min, I'm afraid I don't understand how your reply relates to the issue the OP was talking about. Would you mind explaining further? While it's of course true that every quad is made of two triangles, I'm not sure what that fact has to do with the OP's reported problem. He said normals were reversed on some of the objects he uploaded. Simply tessellating quads into tris won't cause that to happen. What exactly did you mean?
  9. Syle Devin wrote: I forgot to say this isn't a view distance issue because it doesn't matter how close to them in viewer three you are Are you going by how close the camera is, or how close the avatar is? And have you gone through the settings in both viewers yet, to see what is different?
  10. Hi Rhys. The answer is simpler than you might think. Make the extra triangle really small, and use the Analyze feature in the uploader. As you probably know, Analyze is usually not a good idea, but for a case like this, it serves as a good hack. It tries to streamline the physics shape, so ignoring a tiny stray triangle is right up its alley. I went ahead and double checked, while I was writing this post, to make sure this little trick still works. Indeed it does. I whipped up a quick bowling pin in Maya, with a teeny tiny little triangle a couple meters above it, and uploaded it. Without Analyze, it triggers an invalid asset error, every time. I assume this is because that triangle is so small, SL has no idea what to do with it for physics. With Analyze, it uploads just fine, and the physics work perfectly. A side benefit, by the way, of making the triangle super small, is you don't have to waste a material to make it invisible. It's virtually always smaller than a screen pixel, so it never gets drawn. As you can see in the attached screenshot, I had to blow the pin up over 15 meters tall, just to be able to kind of sort of see the triangle, and even then, it was only visible by its selection outlines. See that little yellow pixel, near the top of the image? It might take you a while, but if you really study the screen, it's there. When the model is not selected, the triangle is impossible to see, unless you happen to fly right up to it, and even then, it's pretty tough. If you try to zoom in on it, it gets to about 10 pixels wide, before the camera's near-clip culls it. And that's at 15 meters tall. At regular size, there's no way anyone would ever have a prayer of seeing it.
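The "smaller than a screen pixel" claim can be checked with a rough pinhole-camera estimate. This is a back-of-envelope sketch with assumed screen and FOV numbers, not SL's actual renderer:

```python
import math

def projected_pixels(size_m, distance_m, screen_h_px=1080, fov_deg=60):
    """Rough pinhole estimate of how many screen pixels an object of a
    given size covers at a given distance (illustrative numbers only)."""
    focal_px = screen_h_px / (2 * math.tan(math.radians(fov_deg) / 2))
    return size_m / distance_m * focal_px

# A 1 cm stray triangle viewed from 20 m covers well under one pixel,
# so it effectively never gets drawn.
print(projected_pixels(0.01, 20) < 1)  # True
```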
  11. For specific questions about a particular product, you really should be talking to the creator of that product. It's that person's job to provide you with customer service, after all, not ours. That said, if you're referring to the maps on this website (which I just found through Google, since you did not see fit to provide a link yourself), then the answer is yes, they are UV maps. That would be why the link at the bottom of the page is entitled "Viss uv map released!" and why the name of each file has the word "uvmap" in it.
  12. Sounds like the creator does not expect you to retexture it, then. You could try contacting the creator, and asking for UV maps.
  13. Del Harrop wrote: With a sculpty you rez a prim > change it to sculpty default form > add a UV map = object You mean sculpt map, not UV map. Huge difference. Del Harrop wrote: But what on earth do you do with a Mesh?. If you want to use the same kind of 'equation' you used above, it would look like this: Create mesh model in a 3D modeling program -> export to COLLADA format -> upload to SL = object. It's not like with sculpties, where you're simply deforming a pre-existing object. With mesh the model is just the model, period. You don't have to start it out as one thing, and then turn it into something else. It already is the thing it needs to be. The shape it appears to be is the shape it actually is. Remember, sculpties are a kluge, nothing more. They were invented as a clever way to get SL to appear to be able to create arbitrary shapes, within the very narrow confines of the closed-ended architecture it had at the time. They're quite bizarre, and should not be thought of as any kind of example of how anybody in their right mind would choose to go about 3D modeling, outside of those confines. Mesh modeling is how it's always been done everywhere else. From the rest of the 3D modeling world's point of view, it's almost comical that there are people in existence who feel that prims and sculpties are normal, and that mesh modeling is foreign. If LL had it to do all over again, they most certainly would have included mesh support from the very beginning, and sculpties never would have had to exist at all. Del Harrop wrote: I have `blindly` bought a mesh version of a furniture item and am searching the web for help, as I was told they are not made as you would a sculpt item. What exactly are you looking to do with that piece of furniture? Del Harrop wrote: This thread is the nearest I have found but basics not explained The reason you haven't been able to find the basics of mesh modeling explained in any forum thread is because it's not possible to do. 
It's just too big a subject, and too broad a question. Forums are good for answering very specific how-to questions, but they're not suited for teaching entire subjects from A to Z. You can no more expect to learn 3D modeling from a forum than you could expect to learn to play the violin from a forum. It's something you have to actually do in order to understand, and something you have to practice in order to get good at. You can't just read a few paragraphs about it and go. If you want to become a mesh modeler, here's how. Pick a modeling program, pick a good introductory tutorial series for it, and follow along religiously. Complete the entire series, from start to finish. Do not try to skip around, or cherry pick just what you think you need to know. Let each lesson build upon the last, as designed. When you're done, you'll have a working knowledge of the basics. From there, it's practice, practice, practice, and more practice. And when you're done with that, it's time for more practice, and then some more. Also, just about every college in the civilized world offers classes on 3D modeling these days, and there are tons of places to take online courses, as well. If you don't want to do that much work to learn to make mesh models, that's fine; don't be a mesh modeler, and find something else to do with your time. If you do want to be a mesh modeler, understand and accept that it's going to be a long learning process. The good news is it's a fun and rewarding process, as long as you enjoy it for what it is, and don't try to rush things. It only gets frustrating or tedious when you try to jump into things you're not yet ready for. So take it slow, and enjoy.
  14. I never heard of "Bunny Puzzle Viss" until this moment, but upon Googling it, I see it appears to feature a scripted system for swapping visibility on various included mesh shapes and textures, to create different expressions and facial animations. If the creator provided instructions for replacing the textures with your own, follow them. If not, you'd probably have to dive in deep, to figure out how the thing was put together, in order to avoid breaking anything. It might not be designed to accept arbitrary textures. Also, as with any model, you'd need to know the UV layout in order to be sure your textures will fit. Did the creator provide UV maps?
  15. LisaMarie McWinnie wrote: I managed to weight paint it,and I am happy with the results,specially for my first time. Good. I told you it wouldn't be very hard. Glad you agree. LisaMarie McWinnie wrote: The right sleeve was a bit tricky,and don't look as good as the left one though. If the model is symmetrical, you can mirror the weights from one side to the other. I don't know the exact command for this in Blender, but I know it does have the capability. LisaMarie McWinnie wrote: I think something is wrong,in the uploading panel,it looks like this: What does it look like with the Skin Weights checkbox checked?
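Conceptually, weight mirroring just copies each vertex's weights to the vertex at the X-mirrored position. Here's a naive Python sketch of that idea (the function and data layout are mine; Blender has this built in, so you'd use its mirror tool rather than roll your own):

```python
def mirror_weights(vertices, weights, tol=1e-5):
    """Copy each +X vertex's weight to the vertex at the mirrored
    position (-x, y, z), assuming the mesh is symmetrical about X."""
    mirrored = dict(weights)
    for i, (x, y, z) in enumerate(vertices):
        if x <= 0:
            continue  # only copy from the +X side to the -X side
        for j, (x2, y2, z2) in enumerate(vertices):
            if abs(x2 + x) < tol and abs(y2 - y) < tol and abs(z2 - z) < tol:
                mirrored[j] = weights[i]
    return mirrored

# The badly painted right-side vertex picks up the left side's weight.
verts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)]
w = {0: 0.8, 1: 0.1}
print(mirror_weights(verts, w))  # {0: 0.8, 1: 0.8}
```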
  16. I thought you'd be pleased with the interface tip. Isn't it cool when something so simple makes what had felt like a huge problem just go away? As for the weight painting, yes, you'll absolutely need to study up on that. As I recently said in another thread on the subject, expecting to be able to rig successfully without becoming an accomplished weight painter is like expecting to be able to cook without first knowing how to boil water, or expecting to be able to read and write without first learning the alphabet. Yes, it's that basic, and that crucial. There's absolutely no way around it (nor should there be). Again, the good news is there's nothing difficult about it. Despite the fact that just about everybody tends to be intimidated by it at first, you'll find it's really one of the easiest things you'll learn to do in the 3D modeling world. As long as your model is well topologized for animation, it almost paints itself.
  17. JackRipper666 wrote: Also I noticed some things that might help others with mesh, maya has some weird issues with importing a piece of hair it's as if the scale is all bunched together, I tried deleting history, freezing transforms but if it's not rigged most likely it will bunch up and not be correct scale. So I usually import the mesh to blender and rexport just the parts I'm testing back out of blender using the collada from there. Once I do that an upload it the scale comes out correctly in world. Not sure why it does this something else that puzzles me lol. Jack, could you post a picture of what you're talking about? I'm afraid "scale is all bunched together" isn't a terribly narrow description. I'm picturing about 10,000 different things that that might mean. Whatever the problem is, I'm sure it's solvable. Just need to make sure we're both talking about the same thing.
  18. Most likely you've got different settings in the two different viewers. As SID said, the first thing to check is your draw distance, in each. If it's set too short in one viewer, then the pins would end up culled fairly quickly in that viewer, well before other objects in the room, since they're so much smaller than the others. If it's set longer in the other viewer, you'd have to get farther away before the pins would be culled. Another setting to take a look at is LOD factor. If it's too low in one viewer, then the pins will drop to lower LOD's over shorter distances, in that viewer. If the pins are mesh models or sculpties, and if they're not well built to be "LOD-proof", then they could certainly seem to disappear when the lower LOD's are triggered, if the low LOD models are tiny enough. If it's set higher in the other viewer, you'd again have to get farther away before you'd see the effect. Set everything the same in both viewers, and you should see the same kind of behavior in both. That said, do keep in mind that everybody uses different viewer settings. So, even if you solve the problem on YOUR machine by changing YOUR settings, that's no guarantee that other people won't have problems. One solution that should work for everyone is simply to make the pins bigger. As you probably know, you can do that without making them look bigger. Here are a few suggestions. If the pins are mesh models, then it's super easy. Simply add an invisible triangle, a good distance away from the rest of the surface. If that triangle is, say, three meters away, the pin will now be considered three meters larger. Keep the physics model the same as it was, and the extra triangle will not affect the way the pins fall. (That assumes you're using physics to knock them over. If you're not using physics, then it's even less of a concern.) You can also make sure to put enough detail into the low LOD models, to ensure they still look like pins, even when viewed from quite far away. 
If the pins are prims or sculpties, link each one to a larger transparent prim. This will affect physics, though, so if that's a concern, you really should use mesh. If you're not using physics, then it doesn't matter, of course. Another possibility is that your cache for one of the viewers got borked. To check for that, simply clear the cache, and relog.
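The reason the far-away triangle works can be sketched with a simplified LOD model. This is an assumption for illustration, not SL's exact formula: lower LODs kick in when the object's apparent size (bounding radius scaled by the LOD factor, divided by distance) drops below some threshold, so the switch distance scales with both object size and the viewer setting.

```python
def lod_switch_distance(bounding_radius_m, lod_factor, threshold=0.24):
    """Simplified LOD model (assumed, not SL's exact math): the switch
    distance grows linearly with bounding radius and LOD factor."""
    return bounding_radius_m * lod_factor / threshold

# Enlarging the bounding box, e.g. with a distant invisible triangle,
# pushes the LOD switch proportionally farther out.
small = lod_switch_distance(0.2, 2.0)   # a bare 20 cm pin
large = lod_switch_distance(3.2, 2.0)   # same pin with a 3 m outrigger
print(round(large / small))  # 16
```

Whatever the real threshold is, the linear scaling is the point: a bigger bounding box means the high-detail model survives to a much greater distance.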
  19. LisaMarie McWinnie wrote: You see how the skirt and top are moving with the arm?How can I fix it,and make a decent rigging? The areas that are misbehaving are doing so because they've got weight from the arm bones that they shouldn't have. Most likely, the shoulder is affecting the rib area, and the shoulder and elbow are both affecting the wide hip area. This is a very common occurrence when you assign weights automatically, by proximity. Since the shoulder joint is physically closer than the chest joint is to that side rib area, the shoulder is what was assigned the most influence over it. You as a human artist of course know that the chest joint is what should have the most weight on that area, but the computer has no way of knowing that until you tell it. Ditto for that very wide hip area. The elbow and shoulder joints are closest to it, so the automation had to assume that those are what should be pulling on it the most. You, the human, know that the chest, and/or pelvis, and/or hips, are what should be controlling that area, but again, the computer can't know that until you tell it. In short, when you use a proximity bind, the only factor the computer can go by is proximity. The good news is that the kinds of issues created by that are really easy to fix, assuming you know how to use your weight painting tools. You simply need to weight the right areas to the right bones. (More on this in a minute.) As you may be starting to see by now, it's crucial to understand that any automated process you use for rigging is always going to be just a first step in what must always be a multi-step journey. The next step is to go in and clean up the inevitable mistakes that the automation will always have made. It's totally normal, so don't think of it as a problem in any way. It's just how the process works. This subject seems to be coming up a lot lately. Below are some basic general instructions for the next steps you should take. 
I've pasted these into several other threads on this topic, since I first wrote them a while back. They apply equally to all programs, Blender included. In most cases, you'll find that the weighting process works best when you start from the extremities, and work your way inward. For example, start by painting a hand to be 100% weighted to the wrist joint. You'll inevitably bleed a little onto the wrist skin area of the forearm. Just let that happen. It's a good thing. Now, paint over the whole forearm, additively, to weight it to the elbow joint. You'll add elbow weight to the parts of the forearm that were already weighted to the wrist, and a little bit of that bleeding from the hand will remain. That's exactly what you want. If you did it right, you'll now have a perfectly functioning wrist. If the wrist area distorts badly as the wrist bends, that's a sign that you haven't yet weighted the area strongly enough to the elbow, so just add a bit more paint. (Those wide sleeve cuffs in your picture, by the way, should likely be weighted 100% to the elbows.) Repeat the process, working up the chain, from wrist to elbow, from elbow to shoulder, from shoulder to spine, and you'll have a well rigged arm. Do the same for a leg, starting at the toe, then working to the ankle, to the knee, to the hip, to the pelvis. Finally, do the head, then the neck, then each spine joint, all the way to the pelvis. You'll find that by working this way, from the outside in, you'll get good results, fairly quickly. I do not recommend trying the opposite, working from the inside out, as you'll end up having to subtract weight instead of adding it, and then you lose a lot of control. You can end up spending all day playing whack-a-mole with stray vertices that won't cooperate. As soon as you squash one subtractively, another pops up to misbehave somewhere else. By working from the outside in, entirely additively, you'll never encounter that kind of trouble. 
A rig that might have taken you a whole day or more to do subtractively can be done additively in an hour or two, or in many cases, just a few minutes. To put it in terms of hierarchy, it's always more effective to add your way up from the bottom of the chain, than to try to subtract your way down from the top of the chain. If any of the above does not make sense to you yet, that's OK. It's just a sign that you're at the beginning of the learning process. It will make total sense, once you've gotten a little more experience. I'm afraid I won't be able to dive into specific how-to's for Blender, since I'm not an active Blender user (Maya is my weapon of choice). There are, however, plenty of tutorials on the Web for weight painting in Blender, and there are lots of Blender users here on the forum, who can help you with the program specifics. Again, the concepts described apply equally to all modeling/rigging programs. LisaMarie McWinnie wrote: Also,the interface of the file I have saved with the avatar and skeleton have a really different interface,how do I keep the file,and change the interface to default? I do know enough about Blender to answer this one. There are several options, but the easiest one is this. When you click File -> Open, notice there's a checkbox in the dialog for "Load UI". Simply uncheck that, and you'll retain your current UI settings, rather than override them with the ones used by the person who made the file.
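The additive workflow described above can be sketched numerically. This is a conceptual Python model (the function and numbers are mine, not any program's API): painting a bone's weight onto a vertex, then renormalizing so all influences still sum to 1.

```python
def paint_additive(weights, bone, amount):
    """Add weight for one bone to a vertex, then renormalize so the
    vertex's influences still sum to 1, as skinning requires."""
    weights = dict(weights)
    weights[bone] = weights.get(bone, 0.0) + amount
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

# A forearm vertex starts 100% weighted to the wrist (bleed from painting
# the hand). Painting over it additively with elbow weight shrinks the
# wrist influence but leaves a little behind: the falloff you want.
v = {"wrist": 1.0}
v = paint_additive(v, "elbow", 3.0)
print(v)  # {'wrist': 0.25, 'elbow': 0.75}
```

Working subtractively, by contrast, means directly zeroing entries while keeping the sum at 1, which is exactly the whack-a-mole the post warns about.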
  20. Ah, thanks for clearing that up. I believe there is a way to use attachment points, as well as collision volumes, as pseudo-bones, for rigging. It's complicated to set up, though, from what I've read. Given your description of how the Avatar Center behaves, I'm not sure it's the same kind of attachment point as the others. It seems more likely it falls above the rest of the avatar in the scene hierarchy. If that's true, then it would be unlikely that you could use it for rigging. I'm just guessing, though. Hopefully, someone who knows more about it will pop up.
  21. SL has no way of understanding that kind of hierarchy. Remember, your modeling program is a completely different type of environment from any destination platform you may be modeling for, whether it be SL or any other. There are all kinds of things that work in the modeling environment that will not work elsewhere, including the one you just discovered. All SL knows about are bone influences (weights) on the rigged model's surface (skin). What causes deformation of any part of the skin is weight applied from more than one bone. Therefore, if you don't want those buttons to deform, you have to make sure they're each influenced by only one bone. This will somewhat limit the places you can put them, and the range of animations you can use. If they're in less than ideal locations, and/or if you use animations that were not designed with those rigid buttons in mind, the buttons could end up disappearing into the body, lifting away from the body, or traveling across the body, as the avatar animates. In other words, they'll act just like any other rigid attachment. There's no perfect solution for this. You're going to need to pick your poison. I'd suggest that in most cases, allowing the buttons to deform a little is less of a sin than allowing them to move separately from the rest of the skin, but obviously that answer will be a little different for each model.
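The "only one bone" rule is easy to check programmatically. Here's a small Python sketch (data layout and names are hypothetical) that flags which vertices will stay rigid:

```python
def rigid_vertices(vertex_weights, eps=1e-6):
    """Return the vertices influenced by exactly one bone; only those
    stay rigid (undeformed) as the skeleton animates."""
    return [v for v, weights in vertex_weights.items()
            if sum(1 for w in weights.values() if w > eps) == 1]

buttons = {
    "button_a": {"chest": 1.0},                  # rigid: one influence
    "button_b": {"chest": 0.7, "shoulder": 0.3}  # will deform at the seam
}
print(rigid_vertices(buttons))  # ['button_a']
```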
  22. I'm not sure what you mean by "Avatar Center". The way you've got it capitalized, it sounds like a place, as in, "Hey man, have you been to Avatar Center lately? I hear they've got some cool stuff over there." I take it you meant to say "the avatar's center", as in the middle of the avatar's body? I'm still not sure exactly what that might mean, either, though, from how you worded the rest of the question. The center of the avatar, just like the center of a real human being, is the pelvis. If you're asking if you can weight part or all of a bound skin to the pelvis bone, the answer, of course, is yes. Anything so weighted would remain rigid and unmoving, relative to the rest of the avatar body, as the various body parts animate. The only thing that could cause it to move would be animations that translate or rotate the body as a whole, like lying down, or doing a cartwheel, or falling over, for example. I'm having trouble picturing how your balloon scenario would work. You could certainly make the balloon stationary, as described above, and weight the other end of the string to the hand, as you suggested. That would cause the string to respond to the avatar's hand movements, but the effect would not in any way resemble how a real string responds to a real hand, nor how a real balloon responds to a real string. Pulling on the string would merely cause it to stretch, as the balloon would remain in its fixed position, relative to the avatar. The balloon would not be able to bob up and down, or travel laterally, in response to the string pull. It would look pretty ridiculous. Also, if the avatar were to lie down, or fall over, or do anything else that rotates the pelvis, the balloon would rotate with it. Turn the avatar upside down, and the balloon would be below it, instead of above it. It would not be able to stay upright, relative to the world. Maybe you meant something else? If so, please explain.
  23. It would help if you said what modeling program you're using. In the picture, it looks like it's probably Blender, judging by the orange selection lines, but really, it could be anything. As for the disappearing faces, I can think of three possibilities: 1. It could be you have too many materials. SL can only support eight per surface. If there are more than that, the faces the extra ones are applied to can end up being ignored by the uploader. -OR- 2. It could be you've got your normals reversed on those faces. If that's the case, then the faces are still there; you just can't see them from the outside, since they're pointing the wrong direction. To prevent this, always work with backface culling enabled in your modeling program, so you can see right away from which side your faces are visible. To fix it, simply select the faces that are wrong, and reverse the normals on them. -OR- 3. Perhaps something went wonky in the way you combined so many different meshes together. It's not generally the best idea to combine more than two at a time. If your program utilizes construction history, you should delete history from each object before you combine them, and delete it again afterward. And of course, you should merge any duplicate vertices, if your joining process doesn't do it automatically. If it's not any of those, I'm out of ideas, for now. No doubt others will chime in with additional suggestions.
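For anyone curious what "reversed normals" actually means under the hood: a face normal is the cross product of two of its edges, and its direction depends entirely on the order the vertices are listed in (the winding). Flip the winding, and the face points the other way. A minimal Python sketch:

```python
def face_normal(a, b, c):
    """Cross product of two triangle edges; the result's direction
    depends on vertex winding order, which is what gets 'reversed'."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

tri = ([0, 0, 0], [1, 0, 0], [0, 1, 0])
print(face_normal(*tri))                    # [0, 0, 1], faces +Z
print(face_normal(tri[0], tri[2], tri[1]))  # [0, 0, -1], reversed
```

This is also why backface culling makes the problem obvious: the renderer simply skips faces whose normals point away from the camera.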
  24. Sounds like you're pretty well married to your NURBS-to-poly procedures, so I'll let that subject drop. Here's a tip that may help you with your alpha issues. This may well have occurred to you already, since I know you're already practiced in how to work around the alpha sorting glitch. But just in case it hasn't...

Use two materials, rather than just one, and create two versions of your hair texture, one with alpha, and one without. Apply the 24-bit version to most of the length of the hair, where you don't need transparency, and put the 32-bit version only on the ends, where you do.

As for the invisiprim-like bug, I'm still unable to recreate it, so I'm afraid I can't offer any advice there. Seems it's as true now as it was a year ago that it only affects some people, and not others. I wasn't able to make it happen then, either.

Oh, and just so nobody reading gets confused, the person who said the bug only affects PNG textures could not possibly have been correct. I take it from Avant's "oh well" that he's well aware of this. To be clear, SL does not and cannot have any way of knowing what a "PNG texture" is, or a "TGA texture", or anything else along those lines. When you upload an image to SL, it gets copied to JPEG2000 format, as a first step, BEFORE the actual upload process begins. What gets uploaded is the JPEG2000 copy, NOT the source file. The source image never leaves your local hard drive. The JPEG2000 save process is NOT capable of differentiating between input formats in any way. It physically CANNOT make an image sourced from a PNG come out different than one sourced from a TGA, not in any way at all. So, the end result is exactly the same, no matter what the source format happened to be. It simply cannot be otherwise.

Further, even if there were a difference in the files (which I repeat, there is NOT), it still wouldn't make any difference to how the image behaves on screen. The renderer doesn't know anything about files. By the time the data gets to that point in the graphics pipeline, it's basically just pixels, nothing more than a collection of color and transparency information to be processed and drawn, no longer a file to be saved. There's absolutely no way that the file format can affect the drawing process at all. People who insist there is any kind of difference are victims of their own ignorance, nothing more. As I so often try to remind everyone, this stuff doesn't run on magic. The same laws of physics and computer science apply to SL as to any other program.
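You can demonstrate the "pixels are just pixels" point directly. This sketch (using the third-party Pillow library, which is my assumption here, not anything SL itself uses) round-trips the same image through PNG and TGA and compares the decoded pixel data.

```python
# Round-trip one RGBA image through PNG and TGA and show the decoded
# pixels are identical: the container format carries no extra meaning.
from io import BytesIO
from PIL import Image

# A small RGBA test image with partial transparency.
source = Image.new("RGBA", (8, 8), (200, 100, 50, 128))

# Save the same pixels as PNG and as TGA, then decode both back.
png_buffer, tga_buffer = BytesIO(), BytesIO()
source.save(png_buffer, format="PNG")
source.save(tga_buffer, format="TGA")
png_buffer.seek(0)
tga_buffer.seek(0)
from_png = Image.open(png_buffer).convert("RGBA")
from_tga = Image.open(tga_buffer).convert("RGBA")

# Byte-for-byte identical, so any downstream encoder (like SL's
# JPEG2000 conversion step) receives exactly the same input either way.
print(from_png.tobytes() == from_tga.tobytes())  # True
```

Whatever causes the invisiprim-like bug, it has to live downstream of decoding, where the source format no longer exists.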
  25. Avant Scofield wrote: I personally work much faster with NURBS

If that's the case, then I have to assume it's just because you happen to have had more practice with NURBS modeling than poly modeling? So you know, if you really want to, you can perform virtually all the same maneuvers with poly surfaces as with NURBS surfaces, by using various deformers and/or proxies. But you can also do countless other things with polys that you could never do in a million years with NURBS. Once you're well experienced with poly modeling, you'll kick yourself for ever having thought NURBS modeling was faster. For most things, it's quite the opposite (with sculpties remaining a notable exception).

NURBS were first developed for the industrial design world, and were semi-adopted in the film world for a while, both fields in which the advantage of infinite resolution trumps the inherent drawbacks of a NURBS-centric workflow. For game artists, NURBS don't really do a whole lot. Where SL is concerned, Maya users may have an unusual attachment to NURBS, since they were the original surface source type for sculpties. There were some good reasons for that at the time, but we're well past all that now. For mesh modeling, the best source is, well, mesh modeling.

Avant Scofield wrote: I know the converter can produce very similar results, minus having to delete some faces after the conversion, which I'm guessing you must of had to do as well anyway?

I'll leave it to Raster to explain his/her techniques, regarding his/her own creations. I will say, though, that adding and deleting faces during the poly modeling process is as natural as breathing. It doesn't have to be like with NURBS, where most of the time you're manipulating an existing surface. With poly modeling, you can (and do) actively create and destroy as you go.

Avant Scofield wrote: Also correct me if I'm wrong, but I think this hair wasn't made for SL.
Again, I'll leave it to Raster to explain what his/her creation was or wasn't made for, but I have to say that that hair model looks perfectly viable for SL, to me. It lacks that overly tubular "sculpty-hair" look that so many SL hairpieces have, but I think that's precisely the point. Arbitrary mesh can look so much better than sculpties, or sculpty-style models. Consider that in RL, locks of hair do not tend to be so cylindrical. Often, they're quite flat. So why limit them to cylinder-based construction in a model? Raster's model looks to be far more plane-based than cylinder-based, which is the way it's typically done. It not only looks better that way, it uses far fewer resources. Total win-win.

That's not to say yours looks bad, by any means, so please don't take any of this the wrong way. It looks as good as any sculpty hair I've ever seen. It's just not (yet) taking advantage of what arbitrary mesh modeling can do.

Avant Scofield wrote: Another reason I forgot to mention that my hair might be a little higher on the polys is because of SL's alpha rendering bugs out when we layer alpha textures. In order to get a nice messy hair look I'm having to layer pieces on top of eachother. I have worked around this in some areas.

All the same alpha sorting problems will happen in any real-time environment. SL actually handles it better than a lot of game engines do. While it's true that were it not for the alpha sorting glitch, one could conceivably use less geometry for a hair piece, the difference really isn't as great as you might think. When I create messy hair pieces for characters, they're typically 2000-8000 polys, depending on length, regardless of the presence or absence of alpha.

Avant Scofield wrote: My main issue at the moment is hidden faces. It's quite tedious to go in and delete all faces that are overlapping and Maya's "Clean up" tool only works for lamina faces, which are directly crashing.
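To put a rough number on why plane-based locks are so much cheaper than tube-based ones, here's a back-of-the-envelope triangle count. The lock and segment counts are hypothetical, chosen only to illustrate the ratio.

```python
# Triangle counts for plane-based vs. tube-based hair locks.
# Each ring of quads contributes 2 triangles per cross-section side.
def lock_triangles(segments, cross_section_sides, closed):
    """Triangles in one lock built as a strip (open) or tube (closed)."""
    sides = cross_section_sides if closed else cross_section_sides - 1
    return segments * sides * 2

locks = 60     # hypothetical number of locks in a hairpiece
segments = 8   # length-wise subdivisions per lock

plane = lock_triangles(segments, 2, closed=False)  # flat strip, 1 quad wide
tube = lock_triangles(segments, 6, closed=True)    # 6-sided sculpty-style tube

print(plane * locks)  # 960 triangles for the plane-based hairpiece
print(tube * locks)   # 5760 triangles for the tube-based one
```

Same silhouette, same lock count, roughly a sixfold difference in geometry, which is why a plane-based messy hairpiece can land comfortably in that 2000-8000 poly range.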
If I could find a script that deleted all overlapping faces I think my current hair would be a lot closer to 8k! I've used the avatar head to delete parts I had extended into the scalp. If anyone has any suggestions for these overlapping faces that would be much appreciated

I gotta defer to Mr. Miyagi on this one (and I mean the real Mr. Miyagi, not the remake wannabe!): "Best block is no be there." In other words, the very best way to deal with hidden faces is just not to create them in the first place. By working with polys from start to finish, instead of beginning with NURBS surfaces, you can create arbitrary shapes, rather than limiting yourself to just rectangular topology. Those arbitrary shapes can preclude hidden faces having to exist at all.

Learning to work without creating a lot of hidden faces is pretty much a rite of passage for all new modelers. People almost invariably create a ton of them at first. Eventually, something clicks, and you start seeing pathways around the problem as you work.

That said, in instances where you do have hidden faces, if it's tedious to go in and remove them by hand, so what? Artwork is a dirty job. To do it well, you have to be willing to stick your hands in it, and squish them around. The more you look to broad-sweeping tools like converters and cleaners and whatnot to do the work, the more you only hold yourself back. Automation will only get you so far.
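For what it's worth, the lamina-face case that Maya's Cleanup does catch is the easy one to script, since lamina faces share the exact same vertices. Here's a hedged sketch of that idea; it only catches exact duplicates, and faces that merely *nearly* overlap would need real geometric intersection tests, which is why there's no quick universal script for them.

```python
# Sketch of lamina-face removal: keep the first face for each unique
# vertex set, and drop any later face built on the same vertices.
def remove_lamina_faces(faces):
    """Return faces with exact duplicates (any winding order) removed."""
    seen = set()
    kept = []
    for face in faces:
        key = frozenset(face)  # winding/order doesn't matter for lamina
        if key not in seen:
            seen.add(key)
            kept.append(face)
    return kept

faces = [(0, 1, 2), (2, 1, 0),     # same triangle, opposite winding
         (1, 2, 3)]
print(remove_lamina_faces(faces))  # [(0, 1, 2), (1, 2, 3)]
```

Anything beyond this (faces overlapping in space but not sharing vertices) is exactly the hand work described above.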