Everything posted by Chosen Few

  1. I just took a quick look at the Photosynth website, and it looks similar to Quicktime VR. I don't think it's creating 3D models, but rather just generating a spherical projection of the panoramic image.
  2. Dora Gustafson wrote: Primstar shows an alpha layer too Showing an "alpha layer" would be a neat trick, considering there's no such thing as an alpha layer. It's an alpha CHANNEL. Layers and channels are entirely different things. This forum is the only place in the known universe where some people insist on using the two words synonymously. It makes it really confusing for people who are trying to learn graphics for the first time. I make it a point to try to correct the error whenever I see it. Terminology is important.
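For anyone learning this for the first time, the channel/layer distinction is easy to see in code. Here's a minimal sketch in plain Python; the list-of-tuples image representation is purely for illustration, not how any real graphics program stores pixels:

```python
# An RGBA pixel stores four CHANNELS: red, green, blue, and alpha.
pixel = (255, 128, 0, 200)  # the 4th value is the alpha channel
alpha = pixel[3]

# A "layer", by contrast, is a whole separate image stacked in a document.
# Each layer is itself a full grid of RGBA pixels:
layer_background = [[(255, 255, 255, 255)]]  # 1x1 white, fully opaque
layer_foreground = [[(255, 0, 0, 128)]]      # 1x1 red, half transparent

# Extracting the alpha CHANNEL of one layer yields a single grayscale grid:
alpha_channel = [[px[3] for px in row] for row in layer_foreground]
print(alpha_channel)  # [[128]]
```

So every layer has (or can have) an alpha channel, but an alpha channel is never itself a layer.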
  3. Robin has Blender tutorials? Since when? Got any links, Morgaine? I'd be interested to see her take on it. All I see on her site are the Photoshop, LightWave, and SL tutorials that have been there forever, plus some Bryce ones I hadn't noticed before. Personally, I really like her other tutorials, but I can see how they wouldn't be perfect for everybody. The video you linked looks quite good, as well. Neither is really how I would approach the subject, so sooner or later, I'll have to do a series of my own. (I still need to write that Photoshop book I've been back-burning for years, too.) As for what you said about PS's tools being "not that fab", I can't disagree when it comes to the camera controls and model manipulation controls. They're pretty silly. However, the fact that you've got all of Photoshop's other painting and editing tools at your disposal is nothing to sneeze at. No other 3D paint program's tools can equal them. It's just a shame that Adobe got the 3D interface so wrong. For what it's worth, I use a combination of Mudbox and Photoshop for all of my 3D painting these days. Mudbox has the superior 3D interface, and is much faster to use. However, Photoshop is still much better for things like cloning and healing, and also has better anti-aliasing. If I absolutely had to pick just one 3D paint program, Mudbox would win by a mile. But since I don't have to do that, the two of them together fill in each other's weak points really well.
  4. If I remember correctly, you need to do the test for each grid. If you've only done it for the main grid, you'll need to do it again for the beta grid. If you've already done it for both, you might need to file a support ticket.
  5. You may have misunderstood my meaning of "neat trick". I wasn't trying to say placing it in front of a different background would produce a different result. The phrase was just a mildly tongue-in-cheek attempt at expressing that there's no way to use something that does not exist.
  6. The reason it's acting like an invisiprim is because it IS an invisiprim. Notice in the first pic you linked, the bottle is both shiny and transparent. The only way to create that effect is with an invisiprim. SL's rendering engine has undergone several changes since way back when "Twisted" was made. Invisiprims no longer work the same way they used to. I don't know if the shine trick still works on any of them. If it does, consider it dumb luck. Invisiprims were never an official feature. They were a bug that was left deliberately unfixed for a long time, because the effect happened to be considered useful. Now that avatars can be made transparent through other means, there's no longer much incentive to preserve invisiprim functionality. If the bottle is not currently set to be shiny, try applying a high shine to it, and it MIGHT start to look like its old self. If it's already set to shiny, and it's still not working, then there's probably nothing you can do. By the way, placing it in front of an "alpha layer" would be a neat trick, considering there's no such thing as an alpha layer. It's an alpha CHANNEL. Layers and channels are entirely different things. This forum is the only place in the known universe where some people insist on using the two words synonymously. It makes it really confusing for people who are trying to learn graphics for the first time. I make it a point to try to correct the error whenever I see it. Terminology is important.
  7. Before I begin the job description, let me first state that this is a RL job, for RL pay, consistent with what any RL freelance digital artist should expect to earn for his or her considerable skill set. It's a full time contract gig, for at least several weeks, with possibility of long term. This is NOT a hobbyist activity for L$ micropayments. Please only apply if you're serious about what you do, and you expect to be paid what you're worth. A video game project I'm on has an immediate opening for a 3D Artist who can specialize in hair. The work includes content creation for DAZ Studio, SL, OpenSim, and some other platforms. The character development pipeline for the game relies heavily on DAZ Studio, so the bulk of the work will be for use in that program. The SL and OpenSim components are secondary companion products to the game, purposed toward building additional social experiences for the player community, beyond what's possible inside the game itself. The game has tons of characters, and the client wants a wider range of hair styles than what we've presently got in our library. Our current stable of artists, myself included, have way too much on our plates to add this to the top, so we need to hire a specialist. Prior experience with content creation for DAZ Studio is preferred, but not required. If you're a skilled 3D artist who's good at making hair in any standard 3D modeling program (Maya, Max, Blender, etc.), we can bring you up to speed on how to make your content DS-compatible pretty quickly. It's a quirky piece of software, for sure, but training on it is relatively easy. Most of us on this project are long-time SL residents, so if you're on this forum and reading this, chances are excellent that you'll get along with our team very well. The client pays well, and pays regularly (twice monthly). It's a great gig for the right person. If you or anyone you know has the availability and the skills, let me know ASAP. We need someone right away.
  8. http://www.joecartoon.com/index.php/episodes/frog-in-a-blender/ When I saw the title of this thread, I couldn't resist.
  9. Forgive me if this sounds too simple, but have you tried rebooting your machine? I had the same problem for a few days last week, in three different programs (Maya, Mudbox, and DAZ Studio). Every time I would try to export a selection, I'd end up with the whole scene exported. I figured it must have been a memory glitch or something along those lines, since only something really foundational could affect three different programs in the same way. I was somewhat irrationally reluctant to reboot, though, as I had a mountain of work on my plate, and a really tight deadline in front of me. Strange as it seems in retrospect, deleting the extra stuff after each export actually felt like less of an annoyance at the time than just stopping for a minute to reboot. Funny how priorities get out of whack when you're stressed. Eventually, I had no choice but to reboot. Photoshop also started malfunctioning. It refused to save changes to 3D paint files, and it also began throwing occasional errors at me, claiming my graphics card doesn't support hardware acceleration (which of course, it does). A quick reboot made all the problems go away, and the world was right again. So, if you haven't tried that yet, give it a whirl.
  10. The best and fastest renderer for what you're looking to do is Turtle. If your copy of Maya is part of one of the Creation Suites, Turtle will have come with it. If it was just Maya by itself, then unfortunately, you won't have Turtle. (When Autodesk bought Illuminate Labs, maker of Turtle and Beast, they discontinued Turtle as a stand-alone product, a decision that made absolutely no sense to anyone. That's one of the biggest reasons I still have not upgraded from Maya 2009.) If you don't have Turtle, you can get just as good results from Mental Ray, but it will take longer. Mental Ray has never been in any danger of winning any "world's fastest renderer" awards, but it can produce fantastic looking results, as long as you're patient with it. That said, there's no reason it should take half an hour to bake out a 512x512 from a flat plane. Something in your scene must be borked. You've asked a lot of questions about lights, but I notice you don't seem to be factoring your materials into the equation. If you're using POV-dependent features that do not translate well to baking, like sub-surface scattering, volumetrics, deep reflections, refractions, etc., you can slow things down to a crawl pretty easily. I'd look to that sort of thing as your bottleneck, before I'd even think about your lights. As for questions as worded: King Bright wrote: how do u set up your lights? how many? and wich kind of light? and wich settings you use for the lights and the mental ray render settings like fg / gi / ao.... There's no way to provide a blanket answer to any of these questions. How I set up my lights depends on what's in my scene, and what effects I'm looking to create. There can be no such thing as "how to light your scene for baking in 10 easy steps." If you've got a specific scene, and a specific effect you're trying to create within it, that's something we can answer how-to's about. But just broadly asking, "How do you set up your lights?" doesn't work.
It's like asking, "How do you cook your food?" There's no way to provide a worthwhile answer, since the question is obviously way too broad. But if you were to ask something specific, like "How do you cook a spinach souffle?" you could get tons of useful answers about that particular subject. King Bright wrote: oh and when i try to rotate the sunlight, it just turns to black ^^ dont know why this happens... but its confusing :-( Please explain what you mean by "sunlight" in this context. Are you using Mental Ray's sun and sky system? Or is this a light you created yourself? Something else? Also, what turns black, the scene, or the light itself? King Bright wrote: oh and which is the best way to bake textures? should i bake ambient occlusion and an light only map to merge them both in photoshop? cause thats what i often read... Once again, it depends on the specific needs of your project, and also on your own personal style. Some people prefer to render each surface attribute in a separate pass, and then combine them in Photoshop afterward, because that approach affords an extra degree of control. You can adjust the intensity of each element via layer opacity in Photoshop, and if you decide you want to change a particular element, you can re-render just that element, vs. having to do the whole thing over again. Also, if you're creating for a platform with a better graphics environment than SL's, you may be able to connect each image to a separate channel in the in-game shader system, for much better effect than just the fully merged bakes to the diffuse channel that SL can handle. On the flip side, some people prefer to get everything exactly the way they want it in Maya, and then render everything out to a combined texture, all in one go. That can be much faster and less labor intensive, as long as you're sure you've got it right the first time. I do either or both, depending on the project.
King Bright wrote: but theres still the problem with the light only baking, cause it take a lot of time but still has a very poor output... By "light only" I assume you mean you're just rendering light and shadow, with no color or other attributes. That should be super quick, even if you've got a complex lighting setup in the scene. If that's really taking a long time, then I'd again have to suspect something funky is going on with your shaders. If that's not what you meant, please explain. King Bright wrote: and how can i get the textures i added in maya to my baked maps? or do i have to use the bake light & color option? If you're not baking color, you're not going to see anything from the color channels of your shaders. King Bright wrote: oh and before i forget to say: i have the trial of maya cause i get my students full version on 3rd march... i've read something like the trial is only 32bit but will it reall change so much in render time with the full version? I don't see why they wouldn't let you try the 64-bit version. I haven't had cause to download a trial version in almost a decade, though, so I couldn't say for sure. If indeed you've been using a 32-bit version, that could go a long way toward explaining why things are taking so long. By definition, 32-bit applications can only use 4GB of memory, including graphics RAM and system RAM combined. If that HD6780 has 1GB of video memory, that only leaves 3GB of system RAM available to the program. The 32-bit version of Maya requires 2GB, just to run, which means you've only got 1GB of overhead. That's not much. 64-bit applications, on the other hand, can use as much RAM as you can throw at them. The limit is a hair over 17 billion gigabytes. To build a machine with that much RAM, you'd need about 2.2 billion 8GB RAM sticks. Considering that they tend to be spaced at roughly a half inch on center in most motherboards, you'd need a motherboard about 16,000 miles tall, to fit that many sticks in it.
So, the top of your computer tower would be in outer space. So, yeah, there's just a wee bit of a difference between 32-bit and 64-bit. What it means in the machine you've got is that Maya would be able to use the full 8GB of RAM, instead of just the 2-3GB it's currently stuck with. That would likely translate to a huge difference in render times, not to mention every other area of performance. Now here's the bad news. The student version of Maya puts a watermark in every rendering, including bakes. You won't be able to output any usable imagery from it. If you want to use Maya for anything besides student projects, you're going to need to buy it. If you can't afford it, I'd suggest you start learning Blender.
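The "render each attribute in a separate pass, then combine them" workflow mentioned above can be sketched in a few lines of Python. This is a hypothetical, simplified stand-in for layer blending in Photoshop, not any program's actual algorithm: each baked pass (AO, light) is applied as a multiply over the diffuse, and an "opacity" value lerps each pass back toward having no effect, the way lowering a layer's opacity would:

```python
def composite_passes(diffuse, ao, light, ao_opacity=1.0, light_opacity=1.0):
    """Combine per-texel baked passes (values in 0..1) via multiply
    blending. Opacity 1.0 applies a pass fully; opacity 0.0 lerps its
    multiplier back to 1.0, i.e. no effect, mimicking layer opacity."""
    out = []
    for d, a, l in zip(diffuse, ao, light):
        a_eff = 1.0 + (a - 1.0) * ao_opacity
        l_eff = 1.0 + (l - 1.0) * light_opacity
        out.append(d * a_eff * l_eff)
    return out

diffuse = [0.8, 0.5, 0.9]   # per-texel brightness of the color bake
ao      = [1.0, 0.6, 0.3]   # ambient occlusion pass
light   = [1.0, 1.0, 0.5]   # light/shadow pass

full = composite_passes(diffuse, ao, light)                   # passes at full strength
soft = composite_passes(diffuse, ao, light, ao_opacity=0.5)   # AO dialed back to 50%
```

The point of the extra control is visible here: re-baking just the AO list, or nudging `ao_opacity`, changes the result without redoing the other passes.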
  11. If it's already set to Y, set it to Z. It really sounds like the scene's orientation is at odds with your global orientation.
  12. My guess is you've got your world orientation in your Maya preferences set differently from that of the scene you're trying to open. That can sometimes put the camera into gimbal lock. Maya scenes can be either Y-up or Z-up; if your preferences are set to Z-up and the scene you're opening is Y-up, that may be your issue. If that's indeed the case, then the fix is simple. Get out of the scene you're in by creating a new one, and then click Window -> Settings/Preferences -> Preferences. In the Preferences dialog, click on Settings, in the left hand column. Now, on the right, under World Coordinate System, change the up axis to Y, and save. Again, get out of the scene by creating a new one, and then open the scene that had been giving you trouble. If the world orientation was the problem, it should be solved now. If that wasn't the issue, please explain the problem a little more clearly. What exactly happens when you try to move the camera? Also, I'm not sure exactly what you mean by "pivot point" in this context. A camera's pivot point can be different from its focus point. To change the focus point of the camera, simply select an object (or a component of an object) and press F. The camera will now orbit around the item when you alt-drag. To change the actual pivot point of a camera, you'd need to select the camera (while looking at it through another camera), press Insert to show the pivot manipulator, move the pivot to where you want it, and then press Insert again to return the manipulator to normal. Ditto for moving the pivot of any other object (or component of an object). Is one of those things what you were talking about, or did you mean something else?
  13. 3 different facial expressions would be my guess, but it might also be three different appearance options of any kind. Whatever it is, there's probably a scripted system in there, to swap visibility between the three.
  14. Last I looked at Sketchup, it had no UV mapping tools. It automatically creates the UV layout as you go, which makes texturing a nightmare. (It also automates topology, which sends your poly count through the roof, but that's another subject.) To fix the problem, bring the model into a proper modeling program (Blender, Maya, Max, etc.) or a stand-alone UV mapping program (DeepUV, Unfold3D, UVLayout, etc.), and give it a proper UV map, so your textures will fit.
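To make "UV map" concrete for anyone following along: a UV map is just a 2D texture coordinate assigned to every vertex. Here's a toy planar projection in Python; the function name and the tuple-based mesh representation are invented for illustration, and real UV tools offer far more sophisticated projections and unfolds:

```python
def planar_uvs(vertices):
    """Assign UVs by projecting each (x, y, z) vertex onto the XY plane
    and normalizing the result into the 0..1 UV square."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0  # avoid division by zero on degenerate meshes
    span_y = (max(ys) - min_y) or 1.0
    return [((v[0] - min_x) / span_x, (v[1] - min_y) / span_y) for v in vertices]

# A 2x1 quad lying in the XY plane:
quad = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
print(planar_uvs(quad))  # [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

The difference between this kind of deliberate layout and an auto-generated one is exactly what determines whether a texture "fits" the model or not.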
  15. I'm only able to budget a few minutes of time to the forums today, so I'll have to make this fast. I'm up against a tight deadline on a game project, so I won't be able to go into my usual level of detail on the how-to's this time, sorry. Effects like that padded leather are best done with a normal map and/or a displacement map, depending on the topology, and the capabilities of whatever platform you're creating the model for. Your best bet on the fabric wrap is modeling by hand. You can create wrinkles like that pretty easily with sculpt brushes (assuming your modeling program has them). Otherwise, a displacement map could do the job, assuming the model's base topology is well suited for it. Obviously, these solutions will not translate directly to SL (yet), so you'll need to bake the effects, if SL is your target platform.
  16. Thanks so much for your help, Medhue. The transfer utility did the trick for me, since all the assets in this phase of the project are humanoid. Just transfer from the Genesis figure each time, and it's gold. While it's obviously nowhere near as elegant as simply importing the model with its existing rigging, it'll do, and I think we're in good shape on our deadline at this point. Just so you know, nobody on the DAZ forums was able to come up with this suggestion, and neither was anyone at DAZ itself, when I called them. You saved the day, man.
  17. Thanks, Medhue. I'll take a look at that transfer tool. That might do the trick. For now, there's no animation work needed, but for the next phase, there might be. The trouble is the next phase won't happen if the aforementioned super tight deadline is not met.
  18. OK, this is not an SL question, but I've spent the last 9+ years answering enough SL questions that hopefully that can be forgiven. I know there are DAZ Studio users here, and I urgently need your help. Before I explain the question, let me first say I have already asked over at the DAZ forums, but so far nobody there has been able to provide an answer. I'm hoping someone here might be able to step up. :) OK, here's the situation. I’ve been hired at the 11th hour to create a ton of content for a project already far along in development. For reasons I really can’t get into here, the project is married to DAZ Studio, as the hub of its animation and rendering pipeline. If I could work entirely in Maya, as I normally do, there would be no problem, of course. But the fact that I need to bring everything into DS is proving to be crippling. I've hit a wall. When I try to import a rigged model (made in Maya) into DS, I get an error, saying: “Rigging limitation: bones without root skeleton”. The skeleton, although fully listed in the scene pane, is invisible in the viewport, and does not deform the object when bones are rotated. If I export an existing DAZ character to FBX from within DAZ Studio, open it in Maya, re-export to FBX, and then re-import to DS, it works just fine, so it’s clear that there’s no direct lack of Maya/DS interoperability. However, if I bind anything else to that same character’s skeleton in Maya (such as a pair of pants I made, for example), I end up with the error in DS. I’ll be damned if I can figure out what’s different about DAZ’s pre-existing rigging from my own rigging. Within Maya, everything appears to be rigged identically. What am I missing? The people on the DAZ forum have tried to steer me towards tutorials on how to rig within DS itself. Nobody seems to know how to bring in an already rigged model. 
The bottom line on this is I don’t have time to re-rig a million models internally inside DS, when they’re already rigged in the standard manner to be compatible with nearly every other 3D modeling platform, animation program, and game engine on this planet. I have to believe it can be done. Frankly, it would be beyond ridiculous if the program can’t understand something so utterly standard and simple as a basic skeletal bind. There’s got to be a way. Thanks in advance for any help.
  19. This is something many of us have been asking for as a feature for quite some time, and now we find out it's doable because of a bug? Wow. I don't know if I can call that good news, exactly, but I'll take what I can get. Assuming it never gets 'fixed', that is. Typically, this sort of thing is done with multiple UV sets, and a layered shader. In this case, it might just boil down to some doctoring of the DAE file, though. I would hope Kitsune will eventually explain what she did. If not, I'll be curious to take a look at the file, and see if I can figure out the technique. I won't be able to get to it for a while, though, as I'm buried in work projects for the next 10 days. I'm sure lots of people will be all over this, now that it's been demonstrated. No doubt someone will tutorialize it fairly quickly.
  20. What does the UV map look like for each of the LOD models?
  21. Desktop sharing via Adobe Connect, or even Skype, might be a viable option here. I know it's less than ideal, but it could work.
  22. Kwak, you've well accounted for the fact that there's really nothing a bump map can do that a normal map can't. However, your argument doesn't seem to have considered that there's a lot a normal map can do that a bump map can't. Say I want a single texel to face sideways, without raising or lowering any of its neighbors. There's no way to do that with a bump map. A normal map, on the other hand, can do it with ease. Granted, that's a purely academic example, since I probably would never have actual cause to tilt just one texel. The result wouldn't even be representative of anything possible in the real world. But when you extrapolate the principle across the whole of a textured surface, the possibilities are huge. Take a look at the two images below. They're both the same flat plane, with the same map applied to them. The one on the left has the map applied as a 3-channel normal map, and the one on the right as a single-channel bump map (using just the height channel from the normal map). You can see the difference is rather extreme. On the left, every angle is fully expressed, from the subtle curvature of the slight depressions in the sky area around the worm, to the ramped sides of the worm itself, to the rounded edge along the crest of its height above the surface. On the right, it's a totally different story. Because the lateral/angular information cannot be conveyed by the bump map, only the height of each texel, the curvature just isn't there. The concave parts of the sky are flattened out. The sides of the worm are unable to slope nearly as much, since they can only step from height value to height value, rather than tilt. The crest flattens out, rather than forming that steep rounded edge. Further, the grass area looks a lot more uniform in the right hand image. In the left image, it appears to sweep rightward a little, as if there's a gentle breeze blowing on it. In the right image, it just doesn't do that.
Plus, much of the subtlety of the various shapes within the grass is lost on the right. Without the length and width vectors, and with only the height vector present, there just isn't enough information to convey the composition as intended. (Not that it's a particularly good composition in the first place, but hey, for the whole 30 seconds I spent on it, it's a masterpiece, dammit!) If I really wanted to take the time, could I beat that bump map into submission, to get the result on the surface to match that of the normal map? Perhaps. But keep in mind, it's a really simplistic example, and it would take an enormous amount of work, just for that. Imagine what it would take for something really complex, like a character skin. It could be days' worth of tweaking, if it could even be done at all.
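The information gap between the two map types can be made concrete in code. Below is a rough Python sketch (my own toy version, not any particular program's algorithm) of how a normal map is commonly derived from a height map: the bump map stores a single value per texel, while the derived normal map stores a full 3D direction per texel, computed here from local height gradients. Going in the other direction, from normal map to bump map, throws the two lateral components away, which is exactly the loss visible in the comparison above:

```python
import math

def height_to_normals(height):
    """Derive per-texel unit normals from a 2D height (bump) map,
    using central differences for the local gradients."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # gradient via central differences, clamped at the edges
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            n = (-dx, -dy, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            row.append(tuple(c / length for c in n))
        normals.append(row)
    return normals

# A ramp: height rises steadily along x, so the derived normals tilt in -x.
ramp = [[0.0, 0.5, 1.0] for _ in range(3)]
n = height_to_normals(ramp)[1][1]  # center texel: a unit vector tilted away from +z
```

Each texel in the output is a direction, not just a height, which is the "staircase vs. ramp" difference in a nutshell.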
  23. Kwakkelde Kwak wrote: Still with a bump map you can get a real 3D effect by using gradients and you can even bake them into normal maps. I thought the big difference was the way and moment they were calculated in the rendering process. Bump maps using less storage space (and therefor streaming time), normal maps using less render time. I think you might have missed my meaning of "one-dimensional effect". The point is there's no way with a bump map to get a single texel on a surface to face any particular direction. All you can do is raise it or lower it, along the surface normal. With a normal map, the texel can not only be raised, but also rotated. There's a huge difference there. You're right, of course, that with a gradient, you can create the appearance of a slant in a group of texels, by increasing the rise incrementally from one to the next. But still, each individual texel faces the same direction as the surface normal. So, it's basically a staircase, rather than a ramp. Assuming the resolution is high enough, the steps can appear smoothed out, but since no single texel can be turned to reflect along a tangent like it can with a normal map, the results won't be as realistic as they could be. As for normal maps using less render time, I'm not completely sure on that one, but it makes sense. Theoretically, the more information is precalculated, the less has to happen in real time. Kwakkelde Kwak wrote: For realtime rendering displacement maps are pretty much useless, no argument there. One small addition though, in the current 3ds Max materials (so I bet also in Maya and other programs), you don't need the dense geometry for a displacement map. This is a material feature though, not a modifier, that does need the actual geometry. Since in the final render (with the displacement material) surfaces are moved, I suspect to the renderer it looks like there's extra geometry. I'm not exactly sure how that works. Yeah, Maya's had that capability for a while now.
It can even create geometry directly from a displacement map, which makes for a really fast way of modeling complex surfaces. It's all useless for real-time, of course, since obviously the game engine can't use Maya's internal features.
  24. Syle Devin wrote: No matter what I try I always get a white background when trying to bake anything alpha. The background color is irrelevant. Color is just color; it says nothing about transparency. The transparency map is the alpha channel, so what matters is whether or not an alpha channel is present in the image. Is there one?
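A quick way to reason about this: transparency lives in a fourth per-pixel component, entirely separate from the color values. Here's a toy check in Python; the list-of-tuples pixel representation is purely illustrative, not how any image editor actually stores data:

```python
def has_alpha_channel(pixels):
    """An image carries transparency only if each pixel has a 4th
    (alpha) component; the RGB color values say nothing about it."""
    return all(len(px) == 4 for row in pixels for px in row)

rgb_image  = [[(255, 255, 255)]]     # white background, NO transparency info at all
rgba_image = [[(255, 255, 255, 0)]]  # also white, but fully transparent via alpha

print(has_alpha_channel(rgb_image))   # False
print(has_alpha_channel(rgba_image))  # True
```

Both images above are "white" in their color channels; only the presence of the alpha channel makes one of them transparent.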
  25. Nacy Nightfire wrote: Will the normal maps be influencing bump or actual displacement? Neither. Bump maps, displacement maps, and normal maps are three different things. Bump is a one-dimensional effect. It just simulates height (protrusions or indentations in the surface). Normal mapping is a fully three-dimensional effect. It not only simulates the height of surface details, but also the direction they're facing in 3D space, for a much more convincing look. You can think of a normal map as a bump map on steroids, if you like. Displacement maps are different. They move the actual geometry of the model. Sculpt maps are displacement maps, for example, as are some of the channels in RAW terrain maps. If you have enough polygon density in a surface, you can use a displacement map to do the same thing a normal map can do on a much lower-poly model. But for real-time purposes, that wouldn't be terribly useful. The point in using a normal map is to simulate the look of a high-poly model, while actually using a low-poly one. Nacy Nightfire wrote: Re: bump, I suspect people frequently resort to a greyscale version of their diffuse image, but that can't possibly be accurate, since not all white in a texture is meant to be "up" and the reverse with black. So getting good results from this kind of map takes some real skill, I'm guessing. Correct. A grayscale version of the diffuse texture can sometimes work, but certainly not always. Not only do the lights and darks in the diffuse coloring not always correspond perfectly to the height of the surface details, it's often the case that they simply don't correspond at all. The keyboard that I'm typing on right now, for example, is white with black letters on it, and it has matted surfacing. The black letters are not indented, but under bump-map rules, they would be, so that correlation is out, right away.
Further, the white and black areas all share the same level of bumpiness, as the whole thing has the same kind of matting all over it. If I were modeling this keyboard, and using a bump map on it, the bump map would just be a noise map, which would have nothing at all to do with the diffuse color map. You're right that it takes skill to create and use the various kinds of maps effectively. There are, however, tools out there that can help simplify the process. CrazyBump, for example, is a fantastic tool. A lot of the existing filters available for FilterForge include channels for matching bump, spec, and normal maps, and the software also allows you to create your own. 3D paint programs like Zbrush and Mudbox allow you to paint bump, spec, normal, and other kinds of maps, right as you paint the diffuse coloring. None of these are "for dummies" tools, though. Time-saving as they are, it still does take brain power to operate them. Those in search of the ever-elusive "make it easy" button will remain out of luck.
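The "moves the actual geometry" point is the key difference between displacement and the other two map types, and it's easy to show in code. A toy Python sketch (a hypothetical function, not any real program's API): each vertex is pushed along its normal by the corresponding displacement value, producing genuinely new geometry rather than a shading trick:

```python
def displace(vertices, normals, dmap, scale=1.0):
    """Unlike bump and normal maps, a displacement map MOVES geometry:
    each (x, y, z) vertex is offset along its unit normal by the
    per-vertex displacement value times a global scale."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), d in zip(vertices, normals, dmap):
        out.append((vx + nx * d * scale,
                    vy + ny * d * scale,
                    vz + nz * d * scale))
    return out

# Three vertices of a flat strip facing +z; displacing the middle one
# creates a real ridge in the mesh, visible in silhouette.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 3
dmap  = [0.0, 0.5, 0.0]
moved = displace(verts, norms, dmap)  # the middle vertex is now raised
```

Bump and normal maps only change how light is shaded across the original flat vertices; this actually changes where the vertices are, which is why it needs dense geometry to look good.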