
FinnfinnLost


Everything posted by FinnfinnLost

  1. I've seen Blender's voxel remesher work for simple shapes with an acceptable tri count. For the time being, though, yes, manual retopology is going to be better for a real-time rendering environment. I'm well aware
  2. To elaborate, this is very common in videogames in general. Some games offer LOD settings directly, some fold them into the "model quality" settings, but handling LOD is one of the biggest factors in performance. In fact, a few games allow the player to render ONLY the lower LODs even at close range. It looks terrible, but that's how they can run on a potato. This is why content creators should never rely on "eh, low LODs won't be visible anyway". While lower LODs do not have to look good, the object and any states it might be in should stay identifiable for as long as the object is within rendering distance. Which is another aspect: yes, I'm sure your fancy dress/object looks very nice when I get up close, but why should I approach your mass of mangled triangles in the first place? And, in case someone doesn't render high LODs due to system limitations, why did the owner of that pub pin that same mass to the wall?
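As a sketch of the mechanism described above: most engines pick a LOD per object from camera distance. The thresholds, function names and the "potato mode" clamp below are made up for illustration; SL and other engines use their own switch distances.

```python
# Hypothetical distance-based LOD selection. Thresholds are illustrative,
# not real SL switch distances.

def select_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return the LOD index (0 = highest detail) for a camera distance.

    Anything past the last threshold gets the lowest LOD, which is why
    the lowest LOD must still be identifiable, not just cheap.
    """
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # lowest LOD, but still rendered

def select_lod_potato(distance, thresholds=(10.0, 30.0, 80.0), min_lod=2):
    """A "potato mode" that only ever renders lower LODs, even up close."""
    return max(select_lod(distance, thresholds), min_lod)
```

The clamp in the second function is exactly why "low LODs won't be visible anyway" fails: for some players, the lowest LODs are all they ever see.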
  3. Weights looked fine, but with a topology like that (edge loops wrapping from belly to the bottom of one pant leg), there's just no way for a rig to properly deform the mesh. Thank you for the input though.
  4. Oh, I would never request or even accept payment for a failed project. Not trying to convince you here, but I wouldn't want anyone to consider me that kind of guy. I hope you get the rig you're looking for.
  5. We figured it out. Unfortunately, ZBrush's remesher produced topology entirely unsuitable for proper deformation in the crotch area, with edge loops going all over the place. Manual retopology is required.
  6. If we need an isolated testing environment for stuff like that, someone could create a "Benchmark Island". Although I'd like to throw in that we don't know whether the netcode influences the "effective FPS", as in frames that are actually drawn to the screen. Sure, this might sound silly, but to this very day we still get games that suspend drawing when the network connection acts up. While I don't think LL's developers are bad, SL's old age makes remnants of old practices likely.
  7. Finnfinn#1222 Hit me up, we can try to fix it
  8. Thing is, optimization doesn't pay. People look at screenshots, not at specs. You chuck your sculpt up there, set LODs to minimum and post screenshots with, if applicable, an emphasis on the low LI. Besides, the stuff you don't see might be terrible, but some of those models are simply gorgeous, and if you don't know anything about the more intricate stuff, that's all you'll care about. Heck, some creators of beautiful models might not know what they're doing wrong themselves. Tools like ZBrush, Marvelous Designer and others I'm surely forgetting make it easy to create without ever touching the more in-depth topics. I'm not saying they're not to blame, but it's something to think about. I don't blame LL for being reluctant about it. They're not a charity, and a large part of their income comes from people churning out pretty things that others want badly enough to buy L$ and/or keep a premium subscription. But complete inaction will make things worse in the long term.
  9. I suspect it's less about the rigged mesh and more about the number of triangles rigged to it. But that's rather pedantic at this point, and yes, enforced LOD targets (with enforced custom LODs probably being a good start) would solve it well. Inspired by this thread, I've been investigating how other games handle modular characters. You know, characters with clothes and equipment and whatnot. No, SL was not one of them. I put special focus on one particular MMORPG, since that genre tends to have a LOT of player characters on screen which, as opposed to NPCs, are not easily baked into one single mesh. Here are some particularly interesting aspects in the context of this thread:
- Clothes and weapons are indeed separate objects. No surprise there. I could not extract the rig or weights since I didn't reverse engineer any game files, but the clothes deform with the character's movements, which really only leaves rigged clothes. Live physics would murder performance due to the sheer number of suspected animation layers.
- In some contexts, which I could not discern (because, again, no reverse engineering took place), every piece of visible clothing is present twice. While this could be due to technical limitations in the method used, the pieces are placed to prevent z-fighting. I don't know what this is about, but it increases rendering load.
- With about 30-50 people taking part in some engagements simultaneously, and omitting characters from rendering not being feasible due to gameplay mechanics (although several generic models exist), this clocks in at ~8 layers of rigged equipment per character, plus equipment that is attached but probably merely parented to a bone. While SL characters can go beyond that, if properly modelled their load should not be anywhere near that of said MMORPG.
So, what can we take from that? There are two possibilities why avatars can bog down performance this badly. Either the SL engine is so incredibly terrible that a few characters are enough to bog it down while other engines can handle 50 and remain at least playable. I am absolutely positive that this is not the case. Unless I missed something, this leaves but one possibility: a boatload of avatars and their clothes and attachments are TERRIBLY optimized. As mean as it sounds, it's indeed the fault of lots of creators, which is unacceptable. I'm aspiring to create content and am struggling with LODs and LI, and I'm sure this will make it harder for me as well, but enforcing a minimum of knowledge in the LOD and complexity departments is absolutely necessary. As mean as it may sound to some. I hope some of you found this interesting.
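A back-of-envelope calculation of the conclusion above: with the thread's numbers (up to 50 characters, ~8 rigged layers each), the per-layer triangle count decides everything. The per-layer figures below are assumptions for illustration, not measurements from SL or the MMORPG.

```python
# Rough scene-load estimate for the scenario in the post: 30-50 characters
# with ~8 layers of rigged equipment each. Per-layer tri counts are assumed.

def scene_triangles(characters, layers_per_character, tris_per_layer):
    """Total rigged triangles in view, ignoring LOD switching entirely."""
    return characters * layers_per_character * tris_per_layer

# A disciplined budget, e.g. ~2,000 tris per rigged layer:
modest = scene_triangles(50, 8, 2_000)    # 800,000 tris

# An unoptimized crowd, e.g. ~40,000-tri sculpt-grade layers:
bloated = scene_triangles(50, 8, 40_000)  # 16,000,000 tris

print(bloated // modest)  # 20x the work for the exact same scene
```

The engine is identical in both rows; only creator discipline differs, which is the point of the post.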
  10. Avatars might actually be the bigger problem overall. Rezzed furniture and such is predictable; its performance won't change unless something is removed, replaced or added. If an area is bogged down with horrible sculpts, at least you know and can avoid it. However, a 10,000-tri hair avatar, or ten of them, may appear within render distance at a moment's notice and ruin performance for everyone involved, no matter how clean and optimized the rezzed objects in the vicinity are. In addition, I'm getting the feeling that the "low LI" measure of quality is applied to every object offered on the MP, with "low" not being a variable based on complexity but some absolute value expected by customers. I'm currently creating a neon sign outlining a human. I've been optimizing the crap out of it, retopologizing multiple times and creating custom LODs. Since the shapes are complex and curved, I can only do so much, and it clocks in at a minimum of 17 LI while still somewhat resembling the shapes. I can probably shave off one or two more, but the loss of detail would increase drastically. My point is, I see how people are tempted to just zero out every LOD to get the LI as low as humanly possible. It just looks good in the description, and that generates sales. How would we even go about tackling that psychological aspect? FAST MESH is a decent strategy to begin with, but would it be enough?
  11. I might be misunderstanding you here, and if that's the case, I apologize. Either way, here's my opinion on the whole topic. The thing is, most best practices discussed here are not difficult for an amateur to learn. If you're going to create stuff for any domain (be it your own game or a mod/content for an existing one), you just have to read up on it a bit so you can create decent stuff. It's like basic carving: shaving flakes off your workpiece is not a difficult task to learn, but you'll have to know how wood behaves and how to work it, or you'll end up with scrap.
If you want to create a model for a videogame, you should be willing to learn how to model for a videogame. That includes designing with conventional tools (as opposed to sculpting the end result with a ridiculous number of polygons and just uploading it), creating LODs (breaking your model down to its essentials) and normal mapping (to add detail without taxing other people's hardware). Negatively impacting other people's experience with inefficient work and then having the audacity to tell them to dial up their graphics settings, lest they have to look at a scrambled mess, is insulting. My point is not that high detail is a bad thing, but as a content creator it's your responsibility to make sure your content looks good for the majority of people. It's not their job to buy a new graphics card because the creator couldn't be bothered to optimize their stuff even to a basic degree.
I'm not a professional modeller. In fact, this is merely a creative outlet for me, so I qualify as an amateur. Guess what? Before creating models for real-time rendering and game engines, I spent a few hours reading up on best practices. And if content creation is your passion, you surely can invest those hours as well, right? If you take your time and create well-optimized content, you should be rewarded. If you continue to learn and optimize your future work even more, you should be rewarded. If you churn out hastily sculpted topology that makes even forgiving renderers like Blender's Eevee cry, you should be penalized. Harshly.
To add a little rant, I watched numerous tutorials on YouTube about modelling hair for SL. There are many well-done tutorials out there, but one of them seriously found it feasible to upload a single hairdo with a whopping 5,300 tris. I'm working on a character model right now; the head (excluding hair), torso, legs and feet clock in at 5,800 combined, and optimization is still underway. We don't have to cater to these creators, do we? @Coffee Pancake I disagreed with you earlier, but having looked at some more products and tutorials... you're correct.
  12. Can you take a screenshot of the topology or upload the model? I'll try to help.
  13. First of all, don't rely on automatically generated LOD models. Second, I ASSUME it comes down to how they decimate the geometry. You might have had an n-gon somewhere in your model, perhaps a large one, which could have confused their decimation algorithm. That's not a criticism of Linden; decimating while preserving volume (read: mangling your model as little as possible) is a complicated task. Even when it does work, the resulting topology usually looks like crap. That goes double for n-gons.
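To illustrate the n-gon guess above: a naive exporter or decimator often fan-triangulates n-gons, turning one large face into n - 2 thin triangles radiating from a single corner, which is poor input for any volume-preserving decimation. This sketch is hypothetical, not LL's actual algorithm.

```python
def fan_triangulate(polygon):
    """Naively triangulate an n-gon as a fan from its first vertex.

    A simple pipeline step might do exactly this: an n-gon becomes
    n - 2 triangles all sharing one corner, with no regard for the
    polygon's shape. Large n-gons therefore yield long, thin slivers
    that decimators handle badly.
    """
    v0 = polygon[0]
    return [(v0, polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

hexagon = ["a", "b", "c", "d", "e", "f"]
tris = fan_triangulate(hexagon)
# 6-gon -> 4 triangles, every one anchored at vertex "a"
```

Manual retopology avoids the problem entirely by never handing the pipeline an n-gon in the first place.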
  14. Cleverly done and impressive! Thank you for the insight. Yeah, what I attempted was basically the other way round. I didn't use tracking to translate an object attached to the tracks; in my case, the camera motion was inferred from the translation (and scaling) of points in the scene. If it works, it offers greater flexibility; however, it requires more accurately trackable points, and you need to block out the scenery to really make it worth the extra effort. If you want to check it out in more depth, Ian Hubert has a few videos on his YouTube channel that explain it quite well.
  15. Oh, nice. Clean solve right there. Any advice on placing tracking points? There's little difference in color, and I'm wondering how you tracked it. It couldn't have been automatic detection... or could it?
  16. Howdy folks! Blender's motion tracker received a pretty big update recently, so I wanted to play around with it again. Since motion tracking might offer a wide variety of possibilities for SL machinima (adding props and surroundings beyond SL's capabilities, limited changes to lighting or, given an ideal setup, even adding new features to characters), I wanted to check how well the tracker copes with SL's graphics and camera movement. https://streamable.com/grx015 The above doesn't look like much, seeing how I merely placed a cube in the environment, but it still serves to demonstrate what worked well and what didn't.
First, a short explanation for those wondering what the heck they're looking at, without too much detail: I recorded a video in SL, nothing fancy, just my avatar standing around and me zooming out. I threw it at Blender and fired up the motion tracker, had it detect notable features and try to work out the camera movement (which is away from the character). Blender applied its best "guess" to its own camera. I projected the video onto a shape very roughly matching the SL surroundings and inserted a cube for testing. Note how my avatar doesn't change size compared to the rest of the scene, especially the cube and the plane that plays back the footage. Really, you could stick anything in there and it would stay roughly in place next to the avatar. That's because Blender's camera now moves away from the scene just like the SL camera did.
There are a few caveats though, especially visible in the cube sliding around a bit. Blender has difficulty detecting and tracking features due to the rather muddy textures. This makes the camera solve somewhat imprecise, although that is largely owed to the lack of distinct features on the ground and the hedge. It probably works better on sharper textures or, better yet, a scene with points added specifically for Blender to track.
Given these disadvantages, I still find the performance quite impressive. So what do we do with this? A few ideas:
- Cut out a window in an interior scene and add a huge city with details that would not be possible within LI and space limitations!
- Land a spaceship next to a character for him or her to board!
- Add complex animations of animals, monsters or machinery!
- Correct the camera motion in post-production (it's way more forgiving than you'd think)!
- Have elements added in Blender cast light on SL scenery and vice versa!
- Add high-detail environmental effects like dust particles, smoke and fog!
- Render it all with raytracing and stress test your PC in the process!
Either way, this was hugely fun and, given the area I was chilling in, I got better results than expected. I welcome feedback and, since this is not at all common knowledge or easy to understand, will try to answer any questions you might have. If you wonder whether you could use this on footage for your hot new machinima, I'll be happy to play around with that as well. Thanks for reading!
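A toy version of one ingredient of the camera solve described above: fitting a uniform scale and translation to tracked 2D points between two frames. If the fitted scale is below 1, the points are shrinking on screen, i.e. the camera is pulling away. Real solvers (Blender's included) recover full 3D pose and rotation; the function below, including its name, is an illustrative sketch only.

```python
def fit_scale_translation(before, after):
    """Least-squares fit of 'after ~= s * before + t' for 2D tracks.

    before/after are lists of (x, y) marker positions in two frames.
    Centering both sets reduces the fit to a one-parameter scale
    estimate plus a translation of the centroids.
    """
    n = len(before)
    pmx = sum(p[0] for p in before) / n
    pmy = sum(p[1] for p in before) / n
    qmx = sum(q[0] for q in after) / n
    qmy = sum(q[1] for q in after) / n
    num = den = 0.0
    for (px, py), (qx, qy) in zip(before, after):
        dpx, dpy = px - pmx, py - pmy
        num += dpx * (qx - qmx) + dpy * (qy - qmy)
        den += dpx * dpx + dpy * dpy
    s = num / den
    t = (qmx - s * pmx, qmy - s * pmy)
    return s, t

# Markers halving their spread around (100, 100): camera moving away.
before = [(80.0, 80.0), (120.0, 80.0), (120.0, 120.0), (80.0, 120.0)]
after = [(90.0, 90.0), (110.0, 90.0), (110.0, 110.0), (90.0, 110.0)]
s, t = fit_scale_translation(before, after)  # s == 0.5
```

Muddy textures hurt precisely because the marker positions feeding a fit like this get noisy, which is why distinct trackable features matter so much.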
  17. That screenshot is from Blender 2.7; it looks different now. Check if ANY part of your mesh has weights for bones you're not using. If the weight painting looks funky, parent with empty groups and paint the weights again, manually. That is very often better, or even necessary, with meshes this complex. Moving bones, if you're exporting the armature, makes things a bit difficult to troubleshoot. Maybe leave them where they are by default and just empty their weights. Before exporting, select your mesh in Object Mode, press Ctrl+A and apply rotation and location, and make sure ONLY YOUR MESH is selected. Try the following settings, sorted by tab:
- Main: tick Selection Only and Copy UVs, and only use the currently selected map.
- Geom: have Blender triangulate before export (I believe the importer checks triangulation as well), and apply all modifiers.
- Arm: tick both boxes, especially the one about OpenSim and SL (duh).
- Anim: include all actions and include animations with sample keys.
- Extra: Blender profile and sort by object name.
These are Blender's operator presets for rigged OpenSim and SL export; I don't know if 2.93 still has them, but they're there in 2.92. Giving us the .blend file so we can check things on our end might still be the better option.
  18. I asked for the .blend file, not a .dae file. With the former, I can check on your rigging and mesh work. Paste the log to a pastebin next time, it keeps the topic nice and clean. I'm a newbie to Second Life, yes. But the issue could very well be with your model or rigging work, so I offered a second pair of eyes to see if you have overlooked something. I find "you're a newbie to SL, so you can't help me with rigging and modelling" to be insulting. Cut that stuff out.
  19. Hi there! I can take a look at your model if you upload the .blend file to https://pasteall.org/blend/.
  20. I can give it a shot if you want. I can't promise anything though. Drop me a message ingame if that sounds alright with you.
  21. When talking about clothing, though, most (not all) clothes shouldn't have true thickness on the inside anyway. "Tucking in" that edge sells the illusion and costs far fewer tris either way. Back on topic: automatically generating LODs is very much situational, and I've yet to see a detailed model for which a simple decimation works for the lower levels. I'm told UE4 generates great LODs most of the time, but that's just a side note. Encouraging manual creation of LODs is something I support, but removing generation altogether might take it too far, if you ask me.
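For anyone planning the manual LODs encouraged above: a common rule of thumb (an assumption here, not an SL constant) is to roughly quarter the triangle count per LOD step, deviating wherever the silhouette demands it.

```python
def lod_budgets(high_tris, ratio=0.25, levels=4):
    """Suggest per-LOD triangle budgets from the high-LOD count.

    Quartering per step is a rule of thumb, not an SL-mandated ratio;
    hand-made LODs can and should deviate where the silhouette or an
    important state of the object needs more geometry.
    """
    budgets = []
    tris = float(high_tris)
    for _ in range(levels):
        budgets.append(max(int(tris), 2))  # floor so the lowest LOD keeps some faces
        tris *= ratio
    return budgets

lod_budgets(8000)  # [8000, 2000, 500, 125]
```

A budget list like this gives a decimation pass (or a manual retopology session) concrete targets instead of "make it smaller", which is where most auto-generated LODs go wrong.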
  22. Hi, I'm Finnfinn! Completely new to Second Life (my account has been active for a few days), so I'm still very much learning the ropes. SL attracted me mainly as an opportunity to chat with other creators, look at their stuff and see my own in action. 3D modelling is the one creative hobby that has managed to really capture me, and I like to think I'm not bad at it either. So, if you want to chat about Blender or modelling in general, feel free to drop me a message. I don't bite.