Everything posted by OptimoMaximo

  1. Oh yes, I was talking about the optimization feature that 3dsmax applies on export, which Arton explained earlier, and why it would do that instead of just removing doubles while keeping the face in place. However, you can easily select the geometry in such a situation by using UV shell/UV island selection; I'm just not sure whether Blender can do that in the 3D viewport or only on the UV map.
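For reference, here's a rough sketch of that selection in Blender Python (bpy), assuming a 2.7x/2.8x-era API; in newer builds "Remove Doubles" is labelled "Merge by Distance", but the operator below should behave the same:

```python
import bpy

# select one face of the island first, then expand the selection to the whole
# UV island (selection growth delimited by UV boundaries)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_linked(delimit={'UV'})

# merge coincident vertices only within the current selection
bpy.ops.mesh.remove_doubles(threshold=0.0001)
```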
  2. Kyrah, I didn't say you were implying anything about my competence. What I did was show how the small differences lie only in the nomenclature of the tools used in production, while the main concept behind it all is the same. Analyzing features and pipelines should not rely on terminology; it should be done on methodology. You can call a feature "JohnFoe" and still do the same thing another engine calls "Godzilla" or "DistanceBlindFolder", for that matter. See? I just went on explaining along with the info source you pointed out, so you could consume it and relate it to methodology rather than terminology. I understand that I might sound serious or disappointed while getting the point across, but that's the nature of text communication, which tends to be misinterpreted even when it hides the best intentions =)
  3. Too many images, new post: and finally the distance culling bypassing the other triggers here, to let the player look into the distance. Faster loading just by using LoDs (hint hint). If you wish, I can continue like this for the whole rest of the video =) but I hope I gave you an idea of the fact that I really do work on real game engines too and I seriously know what I'm talking about. SL is my free-time social 3D personal enjoyment and a little cash help at the end of the month.
  4. I've just watched the video you pointed out, and I can find only a difference in terminology from what I listed above: it's developed in Unity, while I used a more UnrealEngine4 type of nomenclature for the exact same features. I'll attach relevant screenshots from the video and relate them to the nomenclature I used; I made sure to capture the time in the video so that you can listen to her explanations. They call them sectors to work on patches of the world and avoid bad merging; this is a pipeline method to create assets in parallel without overwriting content. VolumeCulling they call nearby loading: essentially they make the engine render only a specific part of a volume. This is the AreaCulling, which they call DoorandPortalStreaming, essentially the objects that make something load after an object passes through them and triggers the next area. I made an example about placing them around a corner but, of course, as you pointed out for Quake2, doorways are the classic case. I called the following "splitting the world in cells"; they call them sectors again, to facilitate world streaming. She admits there are no doors or volumes, so they had to be "extra smart"... ...so they added these toggle-switch "walls" to load and unload the areas the player enters/leaves. Note that this is not what I called AreaCulling; this is the "distance threshold from adjacent world patches" used to start loading a specific cell/sector. But what they did is really something that Unity, being sub-industrial-standard crap, can't quite manage the way UnrealEngine4 does (do I show my bias towards UE4? ) and indeed they had to rely on a third-party plug-in for this system to work, which made a sort of hybrid with an AreaCulling (in my terminology from UE4) object serving as a distance-threshold detector for each sector:
  5. You have to use Avastar's "use bind pose" feature, which, quite a long time ago back when I used Blender for SL, I asked @Gaia Clary to implement. It's a common feature in many 3D packages, but Blender's native architecture doesn't handle it by itself. I'm tagging her here hoping she can give you more detailed explanations on how to use this scripted solution of hers, as I'm not that up to date on Blender anymore.
  6. Removing doubles in such a situation would create what are called lamina faces, which is as bad as non-manifold geometry. No wonder 3DSMax does an optimization during export by removing the already existing vertices. As Arton noted above, there's no actual reason to bridge your meshes through Blender; I use Maya and export from there as well, with no upload problems, exactly like Arton does from 3dsmax. While I'm at it, here's a quick definition of the two geometry monsters, lamina faces and non-manifold geometry, so you know what to avoid in general. Lamina faces: faces that share ALL vertices (like the overlapping surface you made with inverted normals). Non-manifold geometry: a mesh object that has connected geometry running through the volume of the object.
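If you want to hunt for both monsters via script rather than by eye, here's a minimal Blender Python/bmesh sketch (the lamina check is just a heuristic I'm improvising here: flag faces built from the exact same vertex set; the non-manifold part uses Blender's built-in selector):

```python
import bpy, bmesh

obj = bpy.context.edit_object            # assumes you're in Edit Mode on the mesh
bm = bmesh.from_edit_mesh(obj.data)

# Lamina face candidates: two or more faces built from the exact same set of vertices.
seen = {}
lamina = []
for f in bm.faces:
    key = frozenset(v.index for v in f.verts)
    if key in seen:
        lamina.extend([seen[key], f])
    else:
        seen[key] = f
print("lamina face candidates:", len(set(lamina)))

# Non-manifold geometry: Blender ships a selector for it.
bpy.ops.mesh.select_mode(type='EDGE')
bpy.ops.mesh.select_all(action='DESELECT')
bpy.ops.mesh.select_non_manifold()
```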
  7. Kyrah, please read my post above, in the section about terrains and how they're split into cells: it's exactly what you're describing for Firewatch. The fact is that whoever works, or has worked, on a game engine using a huge terrain has to go through this procedure. Area occlusion objects are covered in my post above about culling tools (I myself have worked on commercial titles as well as freelance and indie game jobs, extensively using game engines since the time of CryEngine2, which I believe was called that because it makes anyone cry when it comes to setting up terrains).
  8. Rigging and animation assumptions differ a lot; you can't simply animate the same rig you used to do the skinning and export it. Please read the instructions and run the skeleton generator button to be able to export animations. MyAniMATE produces an animation rig scaled up to meters in order to comply with the animation standards, which differ by a LOT from the rigging standard. I know it sounds clunky, but it's how LL has built these systems. You may want to follow this video of mine to set up a character: MyAniMATE - How to set up your character. Even if it's a human character in the video, everything still applies the same way. You can delete any joints you won't use, like you are doing; the only one that is mandatory to keep is mPelvis (AKA hip).
  9. If you bake colors and do not do a full illumination bake, the texture comes out black because that's what a diffuse texture for metal is: black. Metals have no diffuse component and their color is represented in the specular color map. I don't use C4D, I'm a Maya user, but I would suggest looking for a bake type called something along the lines of "Full Illumination", "Diffuse Lighting" or "Render to Texture" to achieve your bakes. Hope that helps.
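To illustrate why the diffuse bake comes out black, here's the usual metal/rough convention as a tiny worked example (generic PBR math, not tied to C4D or any specific baker):

```python
def split_metal_rough(albedo, metallic):
    """Return (diffuse_color, specular_color) per the common metal/rough convention."""
    dielectric_f0 = 0.04  # ~4% reflectance for non-metals
    diffuse = tuple(c * (1.0 - metallic) for c in albedo)
    specular = tuple(dielectric_f0 * (1.0 - metallic) + c * metallic for c in albedo)
    return diffuse, specular

gold = (1.0, 0.77, 0.34)
print(split_metal_rough(gold, metallic=1.0))
# diffuse comes out (0.0, 0.0, 0.0): a pure metal's "color" lives entirely in the
# specular term, which is why a color-only bake is black.
```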
  10. Alright, then I'll correct myself about it: the addition of working mip mapping, as I can't see any resolution degradation happening as distance increases, except for LindenWater. Which reminds me that refraction is available for that, but no refraction is possible on any other content. That's a matter of shaders, though, and I don't believe LL is going to tackle anything in the materials code in the foreseeable future. EDIT: I was missing a part: that's done when texture memory is very scarce. This is not the typical behavior of mip mapping as it is intended in general applications: mip levels are meant to switch exactly like mesh LoDs, whereas what you're describing is only a faint resemblance of true mip mapping.
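For anyone curious about the cost side, here's the generic mip-chain arithmetic (nothing SL-specific): each level halves the resolution, and the whole chain adds roughly a third of extra texture memory.

```python
def mip_chain(w, h):
    """List the (width, height) of every mip level down to 1x1."""
    levels = []
    while True:
        levels.append((w, h))
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return levels

chain = mip_chain(1024, 1024)
base = 1024 * 1024
total = sum(w * h for w, h in chain)
print(len(chain), "levels, overhead:", round((total / base - 1) * 100, 1), "%")
# -> 11 levels, ~33.3% extra memory: the renderer swaps to smaller levels with
#    distance, much like mesh LoDs, instead of degrading only when memory runs out.
```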
  11. YOU buy that stuff; I make my own avatars, while the onion-skinned bodies could easily go away by allowing a second UV set instead of that bake-on-mesh necro feature crap, as I asked billions of times back when I used to waste my time at the content creator meetings. Actually, game engines do a LightMap for AO, then they have a light volume within which illumination is calculated in realtime. Unity perhaps still doesn't, being as sub-industrial-standard as Sansar is, but Unreal Engine 4 showcases that really well. Nope, that's the culling and it is calculated in realtime; there's no precalculation of ALL possible camera and/or player positions whatsoever, or the loading screen would take forever. Read my post above about culling objects. The object-to-object occlusion (the "don't render this item if it's behind another") is done on camera raycast, again at runtime. Not to mention that you can have multiple camera layers, but that's another story (post processing). This is truly correct. Add also less, yet better optimized, stuff.
  12. That can happen if you use more than 8 materials; otherwise SL displays both sides when you follow the procedure you described. I would make sure the "behind" parts have normals pointing in the correct direction before you export.
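A quick way to do that check/fix via script, as a Blender Python sketch (same as "Recalculate Outside" in the UI):

```python
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)  # point all face normals outward
bpy.ops.object.mode_set(mode='OBJECT')
```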
  13. As I see the issue now, I would prefer the LoD system to work consistently with what it is supposed to do, at the cost of "breaking" existing content (a greatly deserved punishment for those whining c***s who bypass and/or circumvent rules and limitations with their "design choice" of having no LoDs), with the addition of mip mapping to handle the excess texture load BEFORE jellydolls come into play, with the latter showing up as a signal to the user saying "this thing shouldn't have been uploaded at all in the first place".
  14. Like Klytyna pointed out, this simply is not realistic. The problem is, if you're playing a videogame, then everything you see on screen was created by professionals working very hard to create an experience with consistent FPS. They have resource budgets for various aspects of the game (the game should never use more than X amount of texture memory, or exceed Y polygons), and the game engines come with various tools to help make that happen (such as culling elements you don't see, pre-baked lighting, fixed camera positions, or otherwise restricting the camera to where the developers want you to be able to see). On top of that, game devs try to be smart about how areas of a game are put together. Level maps will be broken into segments with hidden loading areas, where the area you left is removed from memory and the area you're approaching takes its place. In large, open-world games you still have multiple separate maps. The outdoor area will be a series of maps with hidden loading areas. Interior environments will be their own separate maps, separated completely from the outdoor environment.
Once again, I point to modding resources from games, so that everyone can access this stuff and see what is going on in there to compare with SL. In the following text, I will explain how Skyrim and UnrealEngine4 manage a few of these things, the first one for the sole purpose of being easily and readily available to look at in its working, finished stage. Aside from systems like mip maps (the texture's own LoD equivalent, for texture memory saving), there are quite a few different assumptions to begin with.
Huge terrains obey LoDs too, in the first place, and only then are they divided into "cells" of about 64/96-meter squares (depending on the area; devs have control over this cell sizing). Your graphics card will render only the cell where you stand and, once you reach a certain threshold distance from the cell's border, the adjacent one gets rendered. Quite easy to accomplish in an open world where rocks and trees (with PROPER LODS) do a lot of the hiding job. All other cells keep simulating mathematically, storing changes as data only. Graphics assets belonging to an area aren't put in the scene at all unless the player enters the cell. Cameras aren't free like in SL, so the sick idea that everything should be smooth and at max texture/polygon resolution everywhere on the surface is removed altogether.
All object types get 2 different UV channels, one for the texturing work, the second one for the sole purpose of baking lighting, most often (if not always) grouping tens of objects into one single texture using the same LightMap UV, multiplied over the main texture UV set. As an example, in Skyrim all house types are made of separate meshes, each one with its own set of textures on the main UV set (diffuse, normal and specular), but ALL of them within a specific building style share the same LightMap UV set. This makes it easier to use tileable textures while keeping the advantage of pre-rendered lighting on top, regardless of the tiling values set on the main textures.
Most of the interiors (as Penny pointed out already above) require a loading screen, as they're not part of the open world itself. In Skyrim's modding tools you can see that when you load a dungeon, it's just a floating cave in the void. When you play the game, you can notice that distant rooms are NEVER in a straight line in front of the entrance, and if there is a long distance in a straight line, chances are you can see particle fogs or mists to confuse the view and make LoD degradation far less noticeable. Particle systems are volume based: those are cubes you can stretch to your liking, but they contain the "fluid simulation" within them only; it's precalculated and looping continuously, with the container box being keyframe animated to hide or confuse the obvious looping.
Culling tools like AreaCulling (a "wall" that culls whatever is beyond it until the player gets there, usually put around corners) and VolumeCulling (a "cube" that culls everything within it until the player walks into the cube, which then culls whatever is outside the cube's volume) objects are essential for heavily loaded almost-open areas. Dynamic lighting (casting shadows) is limited, on a per-scene or per-culling-tool basis, to an established number; the rest of the lights are point lights until the player enters/leaves a culling area or volume.
And the most important thing of all, because it applies to any graphics content: LODs! Everything has its LoDs! Textures have their LoDs too, called mip maps! Therefore I ask a question: how can one claim "modern standards" when none of these features are available in SL, except for the LoDs? There is absolutely NO ground for comparison. Changing settings viewer side on an automated basis is most likely to be more resource consuming than the scene it is trying to save resources on.
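To make the cell/threshold idea above concrete, here's an engine-agnostic sketch of the logic (the names, cell size and margin are made up for illustration, not any engine's actual API):

```python
CELL_SIZE = 64.0     # metres per cell, as in the 64/96 m example above
LOAD_MARGIN = 16.0   # start streaming a neighbour when the player is this close to its border

def cells_to_load(px, py):
    """Return the set of (cx, cy) cells that should be resident for a player at (px, py)."""
    cx, cy = int(px // CELL_SIZE), int(py // CELL_SIZE)
    wanted = {(cx, cy)}                      # the cell the player stands in is always loaded
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nx, ny = cx + dx, cy + dy
            # distance from the player to the nearest point of the neighbouring cell
            nearest_x = min(max(px, nx * CELL_SIZE), (nx + 1) * CELL_SIZE)
            nearest_y = min(max(py, ny * CELL_SIZE), (ny + 1) * CELL_SIZE)
            if ((px - nearest_x) ** 2 + (py - nearest_y) ** 2) ** 0.5 <= LOAD_MARGIN:
                wanted.add((nx, ny))
    return wanted

print(cells_to_load(60.0, 10.0))  # near a border: the adjacent cells get pulled in too
```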
  15. If you aren't using Avastar, the model is oriented the wrong way: it needs to face +X, and your pic shows it facing -Y. That's the first thing that will break it. Secondly, if the mirroring took place with the model in another orientation, the exporter will NOT apply rotations and/or the modifier, and therefore the mirroring either doesn't get exported or, if it is exported, the mirrored half most likely overlaps the original side. What you can do: remove the mirror modifier first if it's still in the modifier stack (there is another method, but this one is easier and less prone to user error); select the skeleton and rotate it by 90 degrees on the Z axis so that the avatar faces towards +X; now select both the skeleton AND the mesh, hit CTRL-A and choose "Apply Rotation"; put the mirror modifier back (if it was there in the first place); and export using the stock Collada exporter. Note that if you're using collision volume bones for fitted mesh, Blender doesn't handle bind poses the way the SL avatar assumes and, if that is the case, your mesh will look squished towards the bones, as if it were vacuum-shrunk or withered. There is no easy solution to this in Blender without Avastar; the only thing you could do is go to each single collision volume bone and create custom properties for the scale, according to the scale and rotation factors you can get from the skeleton definition.
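If you prefer to script those steps, here's a rough Blender Python sketch of the same procedure (2.8x-style API; "Armature" and "Body" are placeholder object names, so substitute your own):

```python
import bpy, math

arm = bpy.data.objects["Armature"]   # placeholder name for your skeleton
mesh = bpy.data.objects["Body"]      # placeholder name for your mesh

# rotate the skeleton 90 degrees around Z so the avatar faces +X
arm.rotation_euler[2] += math.radians(90)

# select both objects and apply the rotation (same as CTRL-A > Apply Rotation)
bpy.ops.object.select_all(action='DESELECT')
arm.select_set(True)
mesh.select_set(True)
bpy.context.view_layer.objects.active = arm
bpy.ops.object.transform_apply(location=False, rotation=True, scale=False)

# export with the stock Collada exporter
bpy.ops.wm.collada_export(filepath="//avatar.dae")
```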
  16. The feature limits the FPS that the viewer tries to achieve, because it already tries really hard to push it to the maximum possible. The problem with this approach is fluctuation due to the rendering processes "falling behind" on some aspects, so framerates go lower while trying to catch up. You can see that this feature is correctly called "Limit Framerate": it works as a capping tool that tells the viewer not to even try to go above this number. This way, the viewer won't attempt to run things too fast, only to find itself overloaded to the point of being forced to lower the framerate to recover processes that fell behind in the meantime. My machine is capable of staying quite stable between 50 and 60 fps, but why? What benefit would that give me? It's just more taxing on my GPU, while physics simulation and everything related to it isn't going to benefit at all. I prefer better responsiveness of the overall viewer rather than a fluctuating performance that may well drop below acceptable standards in order to try and keep up with the framerate. Hopefully that makes sense. EDIT: I guess I'll explain this in other words for simplicity. Say you're a great typist and you can type 600 letters per minute as your top performance. If you go ahead and try to do that constantly for a certain amount of time, you will eventually have to come back and correct punctuation and mistypes here and there, which leads to some loss of time to fix the mistakes. However, since you're capable of that feat, if you stay at 300 letters per minute you will still be quite fast at typing, but chances are you'll be in less of a stressful hurry and, given your full ability, that typing rate will leave you more relaxed and less inclined to mistypes and punctuation mistakes because, well, you're typing "slow". Something similar happens to the viewer.
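The back-of-the-envelope numbers behind that reasoning, if it helps:

```python
# Frame budget at various caps: a lower cap leaves headroom for the occasional heavy frame,
# instead of forcing the viewer to "catch up" after it.
for cap in (60, 45, 30):
    budget_ms = 1000.0 / cap
    print(f"cap {cap} fps -> {budget_ms:.1f} ms of budget per frame")
# cap 60 fps -> 16.7 ms ; cap 45 fps -> 22.2 ms ; cap 30 fps -> 33.3 ms
```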
  17. I'm sorry Klytyna, but that hasn't been the case since at least 2011. Take a look at Skyrim/Fallout modding and you'll see that they actually have bodies under the clothing, and it's not a full swap when you switch garments. However, they also had to implement a system to allow such a thing on ALL characters, player and NPC, called dismemberment. This method was put in place so that clothing could be worn on top of the bodies AND heads and arms could be chopped off and roll away during the kill-cam actions, while at the same time the skin cluster deformations wouldn't cause too much of a strain on the system. How? By reassembling the full skeleton at runtime from the dismemberment sections, so that it's never one full single mesh on one single skeleton, minimizing the amount of data that each mesh carries AND allowing the "drop" of entire body parts. However, their engine requirements still demand that a polygon budget be respected. Making the best use of built-in textures and dedicated normal and specular maps, and sparing as many polygons as possible, is still a thing to keep in mind there too, even though it looks like "free highpoly modeling".
  18. I guess I missed this post from you. I am one of the Builders Brewery instructors (Maya, formerly Blender) and, at the time of this writing, there is no Blender class. There was a Blender study group last winter, though; maybe it could come back soon. However, there are also dedicated BB groups for both Maya and Blender on the group joining board; the easiest one to spot is at the sandbox entrance. I often hang out in there, so feel free to ask questions if you see me there =)
  19. I will add my 2 cents: to get "proper training", what you need is time and your software's manual. The latter will help you understand the topics better and at your own pace, while reducing the amount of time needed to learn. However, I also have to quote Rolig Loon in her post above: this can be true, but there is a flaw in those few statements. Most of the courses you can find in RL focus on rendering for motion pictures, where you get huge freedom compared to game asset creation. I myself went through an almost-3-year-long Master's in Maya and VFX, during which I had to fight endlessly to be shown, or just pointed in the right direction towards, the tools used in the game industry, sometimes (if not most of the time) with no luck: brushed aside for irrelevancy to the class topic, for the instructor's lack of knowledge, for it being deemed a useless feature, or for a lack of will to share that "elite information" altogether, admittedly or not. So it took my own personal research to extract what I needed from the "common professionals' knowledge". I studied twice as hard to acquire knowledge of both fields, and it is all thanks to the manuals I bought. Motion picture modeling, texturing and rendering is built for speed, which drops when creating game assets, but I can say it greatly improved the quality of my motion-picture-aimed models, although each and every platform may have different standards. I can work with legacy shaders as well as VRay's or Arnold's or MaxwellRenderer or Renderman or FurryBall, because I looked into how each interprets light's physical behavior so I could produce maps accordingly, while integrating the game asset knowledge I was gathering along the way.
So, IMHO, whoever claims to be a professional and blames a platform for not being the way they're used to, or wish it to be, is not a professional, for a simple reason: 3D art is about finding a solution to get the desired result on the platform while complying with its requirements, rather than getting the asset in anyway and tricking around it to mask its flaws with regard to the platform's behavior/look/requirements. It doesn't take a big effort to look at the tech specs to roughly work out how a certain aspect should be handled. You're given a spec or a limit and you check against it while you're working. Therefore the whole process of learning is as hard for you in SL as it is for the "professionals", with one little difference: "professionals" have practiced a lot at modeling and think they've reached the goal and no further learning is needed "because it's SL that has to adapt to the TRUE standards", while the SL "amateur" has to learn modeling and texturing in a very limited environment, knowing full well (hopefully) all the internal specs, standards and limitations.
So, to conclude: learn from the manual(s) to get the basics as commonly taught for motion picture production; do the homework projects for still renders and animations even if they're going to be totally useless for SL. Then start moving onto a game engine like Unity (Unreal is overly complicated in asset management for a beginner, so I wouldn't suggest trying that at first), see how content is made in there, using the previously acquired knowledge to analyze the workflow and learn more about the specific tools used in this other type of production. That will give you a better grasp of the main topics (like polycount and material maps) and finally let you move onto your desired platform, SL in your case. It all boils down to the time spent learning and understanding the principles behind production, and a manual is the best starting point to minimize that time and maximize the amount of absorbed data.
  20. There are deformers in all 3D tools. In my opinion, using the Lattice deformer in a few steps doesn't destroy the detail and texture. Never use models straight out of MD or sculpting tools for game-type assets: do a retopology on them, extract the maps for your texturing work and reshape the thing on each body using the lattice deformer. As I said, don't do it all in one shot; do it in a few passes with increasingly higher-resolution Lattice boxes and you're good to go. The weighting is best redone from scratch by copying the weights from each individual body.
  21. Penny is absolutely right about maintaining an average FPS to save on resources during realtime rendering. I do the same myself: I capped my Firestorm to 30 FPS, which is more than enough to run animations and simulations. A lower, stable FPS brings an overall smoother experience than chasing a higher FPS, and for what? Physics is capped at 45 anyway, which applies to raycast calculations too; hence, for the gaming experience with shooting and whatnot, a higher FPS is relatively useless, unless you're looking at your camera movement (machinima may benefit from higher FPS. Is the whole SL userbase machinima makers?). The avatars are moved around using physics, and animations are encoded in such a way that the original FPS doesn't count when decoded and played back, as the animation curves are rebuilt from time steps relative to the start and end of the animation file, not from the viewer fps: animation stuttering is either due to the animation design itself, or because the viewer can't quite keep up with the whole rendering process, which treats animation playback as one of the lowest priorities and so is likely to fall behind. As the scene to be rendered increases in complexity, chances are that the lower the FPS you set on your viewer, the better it performs. There is a hard limit to this, of course: under 15 FPS the human eye can catch the frame changes. That's a rendering issue though: animations smoothly made at 5 FPS look smooth inworld too, at 30 or 60 fps, because of the encoding/decoding/curve-rebuilding system, which accounts for animation TIME, not frames. So the problem sits in the TIME it takes to render a scene relative to the scene complexity, which includes scenery and avatars. It doesn't matter what engine or simulator runs the game, or what data streaming is required: if content is unoptimized (unnecessarily highpoly) and a scene is loaded with such content, there's no way one can get a stable, very high FPS when the drawing distance (AKA culling distance), antialiasing, anisotropic filtering, shadow quality, objects, water and terrain details are all set to the max. SL doesn't have any of the culling tools that other engines use to help with this problem: those were introduced to work around the limits that a simple culling distance (like SL's) has. It's like re-making a triple-A game without all of those and then wondering why it performs badly. Those games can afford the "modern standards" because of these "tricks", while in SL the "trick" is to optimize content as much as possible.
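To show what "rebuilt from time steps" means in practice, here's a toy sketch (plain Python, not the viewer's actual code): keys are stored as (time, value) pairs, so the playback framerate only decides how often the curve gets sampled, not how smooth the motion data is.

```python
def sample(curve, t):
    """Linearly interpolate a sorted list of (time, value) keys at time t."""
    if t <= curve[0][0]:
        return curve[0][1]
    if t >= curve[-1][0]:
        return curve[-1][1]
    for (t0, v0), (t1, v1) in zip(curve, curve[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# authored at 5 fps: one key every 0.2 s
curve = [(0.0, 0.0), (0.2, 10.0), (0.4, 25.0), (0.6, 30.0)]

# the viewer samples whenever it renders a frame, at ~60 fps or ~23 fps alike
print(sample(curve, 1 / 60), sample(curve, 0.3))
```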
  22. To some extent, it does. It uses it to calculate the initial size (for display only) and the subsequent rescale factor in the uploader's last tab. The problem is what Klytyna points out about arbitrary software units, in this case applied in reverse: the conversion into a binary format doesn't account for any linear unit, with all the multiples and submultiples it can imply. The established 16-bit (2-byte) integer conversion of vertex locations implies an arbitrary unit derived, as I was saying earlier, from the need to make the model fit into a box, in which you get 65536 (integer) subdivisions per axis and that's it; the actual metric size isn't accounted for there because that's the job of the transform node that contains the geometry node. It's similar to what happens in Blender when you scale the geometry up/down only in Edit Mode: no scaling shows up in Object Mode.
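A worked example of that 65536-steps-per-axis idea (purely illustrative, not the actual uploader code):

```python
def quantize_axis(x, lo, hi):
    """Map a coordinate inside [lo, hi] to a 16-bit integer step count (0..65535)."""
    return round((x - lo) / (hi - lo) * 65535)

def dequantize_axis(q, lo, hi):
    """Restore an approximate coordinate from its step count and the bounding box."""
    return lo + q / 65535 * (hi - lo)

lo, hi = -1.25, 1.25          # one axis of the model's bounding box, in whatever unit
q = quantize_axis(0.3, lo, hi)
print(q, dequantize_axis(q, lo, hi))
# The stored value is just an integer step inside the box; the metric size lives in the
# transform that scales the box back up, exactly like scaling geometry in Edit Mode
# leaves the Object Mode scale untouched.
```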
  23. It looks like you have an Empty object in your character's skeleton hierarchy? This might have been included in the bind pose and, obviously, not recognized by SL, resulting in "heat weighting: failed to find solution for one or more bones". Moreover, SL wants the avatar facing the +X axis: Avastar handles most of that for you, but the stock Collada exporter doesn't, so supplying a rotated avatar misplaces all the joints inworld even when the uploader isn't complaining about the heatmap solution.
  24. Switch the Blender units to meters and the grid doesn't change size, unlike with centimeters or any other unit available in there. For small detailing and close-ups, the camera clipping distance is the way to go, not the scene scale.
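And if you want to set the viewport clipping from script instead of the N panel, here's a small Blender Python sketch (the values are just examples):

```python
import bpy

# tighten the 3D viewport clipping range so close-up work doesn't get cut off
for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':
        for space in area.spaces:
            if space.type == 'VIEW_3D':
                space.clip_start = 0.001   # in scene units (meters here)
                space.clip_end = 500.0
```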
  25. Actually, they still are. That's why Python was integrated into Maya (whose embedded scripting language can't convert to any other C struct) and 3dsmax got the C struct conversion in Maxscript as well. Edit: 32-bit doubles, not integers, my bad.