Everything posted by RipleyVonD

  1. Kitsune Shan wrote: I would also suggest, to those that think that adding extra bones for translation simulation, to try to do some animations with that or to build a whole rig with them. I can tell in advance that won't be an easy task :matte-motes-whistle: You're right, Kitsune; adding extra bones to the face just to mimic what simple bone translation can achieve would make things more difficult for animators. For the face, "rotations only" is difficult to work with properly, if at all. I agree bone translation is the way to go.
  2. A simple way to put it: some bones on the face need to be double or even triple jointed to provide the range of animation freedom that bone translation provides. It's a question of what is more difficult to implement, be it because of the amount of code or performance needed: more bones in the face, or bone translation. For clarity, only some bones need extra joints, not all the face bones. Animating the face with only rotations, as they currently are, is like animating a foot that is attached directly to a knee joint: you can only move that foot up, down, left, and right. If you add an extra joint to that leg, now you can also move it front and back. Then there are animal avatars, where animating long snouts would be very awkward, and impossible without adding extra joints to the mouth bones. The extra mouth joints would make joint offsets possible, to animate the face properly with rotations only. I imagine rotations only is important because that's how the avatar currently works? Being an animator myself I prefer bone translation, but if "rotations only" provides better performance or is better for overall avatar shapes, then the above is needed. I've never filed a JIRA before. lol I suppose a video would make this easier to explain, but I'm not sure how quickly I can make it.
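To make the geometry behind this concrete, here is a toy 2D forward-kinematics sketch (plain Python, not SL animation code; the bone lengths and angles are made up for illustration). With one rotating joint, the tip of a bone is pinned to a circle around its root; splitting the same bone into two joints lets the chain fold and the tip move toward the root, which is roughly the extra freedom that direct bone translation would give:

```python
import math

def fk_chain(lengths, angles):
    """Forward kinematics for a 2D bone chain: each bone rotates
    relative to its parent; returns the tip position."""
    x = y = 0.0
    total = 0.0
    for length, angle in zip(lengths, angles):
        total += angle
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

# One bone of length 2: rotation alone keeps the tip on a circle of
# radius 2 around the root -- the tip can never move closer to it.
tip = fk_chain([2.0], [math.pi / 4])
print(math.hypot(*tip))  # 2.0, whatever the angle

# Split the bone into two joints of length 1: bending the second
# joint folds the chain, so the tip can also approach the root.
tip2 = fk_chain([1.0, 1.0], [math.pi / 4, math.pi / 2])
print(math.hypot(*tip2))  # about 1.414, less than 2
```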
  3. I think to animate face bones with rotations only, some bones on the face would need an extra bone/joint at the tip to compensate for angle, like the lips for instance and maybe the eyebrows. It does mean a few extra bones, but it would help. This isn't the preferred way to animate face bones, and animators know this, but it seems "rotations only" is important to the Lab.
  4. It seems like that's the bug; it can be confusing without seeing this kind of glitch in an image. But basically it happens when building one half of a mesh model, copying and mirroring it to make the opposite symmetrical half, and using the same Normal map.
  5. Christhiana wrote: A big thank you to everyone who took the time to respond with all these useful tips and examples! You're welcome, Christhiana. :smileyhappy: Drongle McMahon wrote: In case anyone is prepared to put up with the tapering/pinching effect, it is also possible to use the selected-to-active bake to transform the prebaked map. In this case, both meshes are identical except for their UV maps. Just duplicate the low poly from before and move one copy up a bit so you can select them in the right order ("from", "to"). Then edit the UV map of the "from" mesh so that it fits the parallel planks. There is one very important step. If you leave the meshes as they are, the remapping will produce nasty effects because of the unequal triangulation. To avoid that, add three levels of subdivision modifier - simple, not Catmull-Clark - and don't apply it. BUT don't apply the modifier on export - otherwise you will have huge LI. Either delete it before exporting the mesh or uncheck the Apply modifiers option of the exporter. Here are the pictures. I hope it's clear what they all are. Inworld at the bottom... Your baked Normal map came out just right, Drongle. I'm regularly impressed by Blender's modern builds.
  6. If you adapt the UVs to the Normal map, it's not necessary to rebake the Normal map, in this example. With more complex geometry this doesn't apply. If you adapt the mesh UVs to the Normal map, the Normal map will curve with the mesh.
  7. Thank You. :smileyhappy: In this case I would take the points of the UVs of your corner mesh and align them to the beams of the Normal map. The simplest way is to drag the UV points to match the beams on the Normal map. Instead of adapting the Normal map to the mesh, you adapt the mesh UVs to the Normal map. I hope that makes sense.
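A rough sketch of that alignment idea in Python (illustrative only; the beam count and UV values are made up, and in practice you would drag the points by hand in Blender's UV editor): snap each UV point's horizontal coordinate to the nearest boundary of evenly spaced planks on the baked map:

```python
def snap_uvs_to_beams(uv_points, beam_count):
    """Snap each UV u-coordinate to the nearest beam boundary of a
    map containing `beam_count` evenly spaced vertical planks.
    Boundaries sit at 0, 1/n, 2/n, ..., 1 across the map."""
    snapped = []
    for u, v in uv_points:
        nearest = round(u * beam_count) / beam_count
        # Clamp to the 0..1 UV range just in case.
        snapped.append((min(max(nearest, 0.0), 1.0), v))
    return snapped

# Two slightly misaligned UV points snapped onto a 4-plank map.
print(snap_uvs_to_beams([(0.23, 0.5), (0.74, 0.1)], 4))
# -> [(0.25, 0.5), (0.75, 0.1)]
```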
  8. I thought this was a bug that was fixed a while back. It's possible to use Normal maps on mirrored UVs, but it depends on whether a 3D rendering engine supports it or not.
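For anyone curious why engine support matters here: tangent-space Normal maps on mirrored UVs only shade correctly if the engine flips the tangent basis on the mirrored half. A minimal sketch of the idea (plain Python, not actual viewer code; it assumes the common 0..255 to -1..1 channel encoding):

```python
def decode_normal(rgb):
    """Decode an 8-bit tangent-space normal-map texel into a vector
    using the usual 0..255 -> -1..1 mapping per channel."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

def sample_normal(rgb, mirrored):
    """Engines that support mirrored UVs flip the tangent direction
    (negating X here) for faces whose UV winding is reversed; engines
    that skip this show a lighting seam down the mirror line."""
    x, y, z = decode_normal(rgb)
    return (-x, y, z) if mirrored else (x, y, z)

# A texel leaning fully toward +X shades as leaning toward -X
# on the mirrored half, keeping the lighting symmetrical.
print(sample_normal((255, 128, 255), mirrored=True))
```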
  9. I'm thinking of a possible solution but it would help to see how the original Normal map looks.
  10. The FBX improvements make Blender a more complete modeling suite. I second Gaia; I think Blender does a very good job of creating things for SL. I do appreciate your info on the FBX exporter, Astrid. Thanks.
  11. It looks like you have two hand meshes occupying the exact same space. Check your Blender scene for duplicate hand meshes; you might also have one mesh with duplicate triangles.
  12. For someone just starting out, I want to say that yes, it does become fun once you learn the basics. The other important thing I want to say is: YOU are your biggest obstacle. Everyone is born with an instinct to run away from an intimidating situation. If it overwhelms you enough, you'll likely end up quitting or running away from it. It's not our fault; it's just the way we are made. Fighting against your urge to quit is key in learning how to 3D model, because Rolig is right: the learning curve for 3D modeling can be fairly steep. But it can be the most rewarding decision you make for yourself. I started learning back in 2003 on 3ds Max 6; I took what I learned there and then learned Cinema 4D, and when I tried Blender it wasn't a problem learning it. The skills you gain from learning to 3D model will open doors to an enormous amount of money-making opportunities for you, and new ones are popping up every day. One more thing: be sure to ASK questions. There are shortcuts for doing complicated things in 3D modeling that make it a lot easier.
  13. Medhue Simoni wrote: Like I said, not all 3D programs do morphs like Maya. As far as I know, Maya is the only program where you have to create multiple instances of the mesh to create a blend shape. The Cloud Party people used Maya. That is why they think that way, and also why they had so many issues with the avatar skeleton. An FBX file with morphs does not contain multiple instances of the mesh, as far as I know. The Second Life avatar uses morphs, and we are not downloading a new mesh every time we change it. All the expressions and hand poses are morphs also, and are played like an animation. You don't see things freeze or hesitate when playing hand morphs. I just think you are making assumptions that you can't really make. I just did a quick test with my snake. I exported 1 FBX with all the blend shapes, then I exported another without blend shapes. The 1 with no blend shapes was about 1mb. The 1 with 9 blend shapes was 1.26mb. I re-read my posts and I am not seeing how what I wrote can get confused as me stating the Second Life avatar works a certain way, or works the way my hypothetical Gatcha situation explained. 3ds Max and Cinema 4D also need multiple meshes for blend shapes. Thank you, Gaia, for your explanation.
  14. Medhue Simoni wrote: Well, I really think you are wrong about how the morphs get displayed. It's all in the 1 file. There is nothing else to download. You mean for the Second Life default avatars? The Gatcha example was hypothetical, and one approach to implementing them. It seemed to me dynamically downloading morph targets would relieve the burden of downloading multiple meshes contained in one big file at first download, since they're not all used at once. If a way exists to implement morph targets without duplicating the original mesh, then I'm not aware of it, but I've always known that's how they work. I love morph targets, but I don't know of a way for them not to place a burden on download cost for an online world like Second Life or any other online world. I remember when someone asked the Cloud Party developers if they would be implementing blend shapes at some point, and they responded by saying they would never be implemented because you would need to download multiple versions of the same mesh. I would think they would know of a way to safely implement them, but they didn't.
  15. Medhue Simoni wrote: RipleyVonD wrote: If a system like this was implemented every morph that is not used will add to the download weight of a clothing item. This is not true, at least as far as I know. Yes, in some programs, like Maya, in order to create a morph or blend shape, you need to basically have another instance of the mesh with the vert changes. This is not what the FBX format saves tho, again as far as I know. It doesn't save different instances of the mesh itself. I've imported FBX files with morphs into Unity and the meshes are not more verts. Plus, even if it did, you are only rendering 1 mesh in SL, so the download cost or LI would not be affected. In Blender, you don't need multiple instances of the meshes at all to make morphs. A problematic situation I immediately imagine would be an event like Gatcha, where you have lots of avatars adjusting what they're wearing at the same time, and clothing with morph shapes having become commonplace; your viewer would need to download every new morph shape that these avatars decide to use. It's true a viewer wouldn't need to download all morph targets at once, just dynamically when a new one is used. Having a compressed format while a morph target downloads to a viewer would be a way to ease the download burden. But we still have the problem of careless creators who don't care about triangle counts; the only solution to this problem is putting a system in place that doesn't render items above a certain amount of triangles, like Cloud Party did. 3D editors do have nice ways of doing things, but the environment is different; they get to have that freedom. I thought it was way cool when Unity added support for blend shapes. Never really looked into how it was implemented, though. If a resident one day decides they want to be a tall reptilian lady, a morph target is not gonna pull the dress over her tail. lol I know it's a silly example, but it shows how morph targets wouldn't be a solution.
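For context on the file-size question in this exchange: blend shapes are commonly stored as per-vertex deltas from the base mesh rather than as full mesh copies, which would explain why an FBX with 9 morphs grows only modestly. A minimal sketch of that storage model in Python (made-up data for illustration, not the actual FBX format):

```python
def apply_morphs(base_verts, morphs, weights):
    """Blend-shape evaluation: each morph stores only the deltas for
    the vertices it moves, applied to the base mesh scaled by a 0..1
    weight. Sparse deltas are why morphs can cost far less to store
    and transmit than N full copies of the mesh."""
    result = [list(v) for v in base_verts]
    for name, deltas in morphs.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue
        for idx, (dx, dy, dz) in deltas.items():
            result[idx][0] += w * dx
            result[idx][1] += w * dy
            result[idx][2] += w * dz
    return [tuple(v) for v in result]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
morphs = {"smile": {1: (0.0, 0.5, 0.0)}}  # only vertex 1 moves
print(apply_morphs(base, morphs, {"smile": 0.5}))
# -> vertex 1 becomes (1.0, 0.25, 0.0); the other verts are unchanged
```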
  16. Right, but the problem with morphs is that they add download weight, because every morph target is a duplicate of the original in vertex count, not shape of course. This is why the Lab hasn't allowed this freedom to creators; allowing morph targets when creators have the habit of creating meshes with huge triangle counts would be extremely problematic. Morph targets aren't practical, not just because a person's viewer would end up re-downloading a person's worn clothing every time they adjust their worn item and use a different morph target, but because people don't care about creating efficient meshes for 3D rendering engines. Morph targets for the current avatars don't pose a problem because those morph targets are shared with everybody in the viewer. In real life, customers like to pull and adjust what they're wearing to make it fit. It's in their hands; a similar system inworld doesn't have to be as complex as it sounds. Plus the big bonus is creators get to relax a bit about whether their clothing will fit or not. Giving customers the ability to push and pull at an article of clothing is not expecting them to do the morphing; it's giving them the freedom to do so.
  17. Medhue Simoni wrote: It's also worth pointing out that if LL implements the ability to include morphs in the FBX format for the New World, then we don't need inworld tools, as the creator will just create as many morph options as possible. If a system like this was implemented, every morph that is not used will add to the download weight of a clothing item. Adding inworld tools gives a resident control over how a clothing item will fit, and there will be no more guessing whether an item will fit. As a creator myself, creating as many morph options as possible sounds like a bit of a burden that can be avoided with inworld tools. Weight painting isn't only related to weight painting for bones; it's also commonly used for creating weight maps, and I felt it was the best way to describe tools that work similarly to weight painting for bones. Yes, it is confusing, and I apologise for that. The approach for a simple push and pull system is best left to the engineers of SL 2.0, and it's also true that it wouldn't work for weight painting for the avatar's bones. Also, it's only a name; it's currently very confusing for new creators learning the difference between what "Materials" means to Linden Lab and what "Materials" means to the 3D modeling world. In its simplest form, a system like this would be a resident pushing and pulling at vertices. And yes, Morph tools would be a better name. Thanks for pointing that out.
  18. The idea would be that a creator weight paints the clothing item professionally to move with the avatar; then a person buys that clothing item and takes it into the weight paint room to push and pull vertices to custom fit it to their unique avatar. An SL 2.0 resident wouldn't need to start from scratch with weight painting the clothing item they just bought; that wouldn't be practical. I think a good approach to start for something like this would be Karl's deformer, but using weight paint brushes to adjust its influence on the mesh. It's just an idea; maybe there's a better way. As for the tools, Blender, ZBrush, and 3D Coat are examples of software with 3D brushes that push and pull at vertices and can serve as inspiration. Also, I imagine Torley Linden releasing a video explaining how simple weight painting for your avatar works.
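A toy sketch of the push/pull brush idea (plain Python with made-up numbers, not code for any actual SL tool): displace each vertex along a chosen direction, scaled by a smooth falloff from the brush center, the same shape of influence a weight paint brush has:

```python
import math

def brush_pull(verts, center, radius, strength, direction):
    """Displace vertices along `direction`, scaled by a smooth
    cosine falloff from the brush center (1 at the center, 0 at the
    brush edge); vertices outside the radius are left untouched."""
    out = []
    for v in verts:
        d = math.dist(v, center)
        if d >= radius:
            out.append(v)
            continue
        falloff = 0.5 * (1.0 + math.cos(math.pi * d / radius))
        out.append(tuple(c + strength * falloff * u
                         for c, u in zip(v, direction)))
    return out

verts = [(0.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
# Pull outward along +Z with a brush of radius 1 centered on the
# first vertex; the far vertex is outside the brush and stays put.
print(brush_pull(verts, center=(0.0, 0.0, 0.0), radius=1.0,
                 strength=0.2, direction=(0.0, 0.0, 1.0)))
```

A mirror mode, as suggested below, would just apply the same displacement at the brush center reflected across the symmetry plane.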
  19. Right! That's exactly the kind of system I had in mind, Cathy. It would also be helpful to have a mirror mode for brushes. A simple push and pull system to start hopefully wouldn't require too much code, but it would help immensely.
  20. It seems by now that having weight paint tools in-world is a necessity. There were two technologies proposed for Second Life to help residents fit their clothing after purchasing it, and neither was a perfect solution; in my personal opinion, relying on a perfectly coded automatic system is either not possible or would be too performance intensive on a viewer or a person's PC. I always think of mobile devices when I think about Second Life creation, because it seems at some point a large portion of residents are gonna be using mobile devices to log in. Having weight paint tools, or a weight paint room where you can view your purchased clothing and influence its shape, would be the solution for this problem. It doesn't need to be anything fancy to get the job done, and it will accomplish what an automatic system will not: custom tailoring clothing (you could say). Another thing that would complement this tool-set would be painting your alpha maps onto your meshes while in the viewer, sort of like a live spray paint session. The "it" feature for Second Life at its beginning was prims (in my opinion). The "it" feature for Second Life 2.0 can be weight and alpha paint tools for clothing. Well, this is my suggestion for SL 2.0; here's hoping. I realize this might not be the simplest thing to implement, but it would help and take some burden off of Second Life creators.
  21. Drongle McMahon wrote: Good philosophy. I hope they see it that way too. :matte-motes-smile: Yeah, let's hope so. Meanwhile, Blender's future is looking better than ever; Epic's interest in supporting Blender is a really great turn of events! http://www.cgchannel.com/2014/07/epic-games-funds-blender-development/
  22. Ohooo, that's right, FBX is owned by Autodesk; I hadn't thought of that. I can't imagine Autodesk is happy about developers using a free app like Blender instead of Max or Maya to develop for Unreal Engine 4. If Autodesk decided to cut effective access to FBX for Blender because it competes with Max and Maya, it would not only be unwise where PR is concerned, it would also cut off A LOT of potential future Max and Maya customers that currently use Blender. A scenario I can think of is developers starting with Blender, learning how to 3D model and animate while developing for UE4, and then deciding to move on to Max or Maya because they need to for the company that just hired them. They learn to use Max or Maya, find they are comfortable with the workflow, and decide to stick with it, thanks to the skills they managed to learn from a free app like Blender. And as a plus, they are already used to the FBX workflow. Providing full access to FBX, or at least useful access to FBX, for Blender would be smart for Autodesk. Blender users might have a need for other Autodesk software, not necessarily Max or Maya, and having FBX available to them would make Blender users potential Autodesk customers too. It's a win-win.
  23. I've been seeing a ton of videos on YouTube showing mesh and rigged characters being exported from Blender into Unreal Engine 4, and I think that's great. The moment I saw this happening, I knew Blender had found its perfect companion software. With a free copy of Blender and a $20 subscription to UE4, any indie developer has the tools they need to create whatever they want. The visual fidelity of UE4 makes it a perfect substitute for expensive render farms, making it perfect for aspiring animated film creators. I was telling my coworker the other day that if I had the opportunities for realizing my dreams and aspirations at a young age, like kids have available to them today, I would be a millionaire by now. lol
  24. Thank you, that's good to know.
  25. I'm currently waiting for Materials scripting to be finalized, and I'm curious about a few things. I've noticed some residents regularly visit developer meetings and hopefully have updated info they'd be kind enough to share. Thanks for anyone's help. When Materials scripting is ready for the main grid, do we need to update our viewers, or will the new scripting functions just work in our scripts? Is Materials scripting close to being finalized and ready for use?