
NaomiLocket

Resident
  • Content Count

    37
  • Joined

  • Last visited

Everything posted by NaomiLocket

  1. Alternatively: Nvidia has long since published titles like GPU Gems. :B Complete with both OpenGL and DirectX shaders for every example. Dynamically moving hair is as old as Voodoo3 cards, if I remember right at all, but for a time only Tomb Raider and a few others ever seemed interested in implementing it.
  2. Thing is, they never had to look very far. There have been several forums dedicated to artists in the industry for as long as Second Life has existed. The problem is more whether they can be assed to go looking for an artist with the goods to be a consultant. Hell, some of those artists have long since written custom shaders for the viewports of popular 3D modelling programs. It wasn't unusual for a studio to make plugins for Maya or 3ds Max as a tool directly in their pipeline for making a game, and now and then they would share their more private side projects with the community. The challenge for them is finding someone available at those places at the right time. The challenge for us is making sure they are actively recruiting instead of hoping someone will bother to come to their site and request a job. When the usual response to people talking to outsiders about Second Life is the running joke "that is still around?", my optimism that any of them would bother chasing down the Lab for a job stays fairly low.
  3. Triangle strips are mostly about consolidating the index into the shortest size in as few draw calls as possible. It doesn't mean we don't still have an index of vertices, however. There may be other reasons why strips are not working presently, or appear not to optimise like they are supposed to. I still feel it is a grave error that they seemingly do not work or are not used. (Perhaps the sorting is the first place to look.)
  4. Yet it might mean something, because the sides of a generated prim are also completely flat. But the setting visually references the wrong indexes in the list for the strip, and crosses over a whole row to form a triangle, causing conflicts, on top of the issue of every second triangle having an inverted normal. So maybe it is actually a thing. (Also, the point of forming degenerate triangle strips is that submeshes that are normally split are no longer split, and instead form a single triangle strip. Meaning it makes sense to form strips wherever and on whatever it is possible.)
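The strip and degenerate-triangle behaviour described in the two posts above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Second Life viewer code; the function names are made up.

```python
# Sketch: expanding a triangle strip into triangles, and stitching two
# strips into one with degenerate (zero-area) triangles at the seam.
def strip_to_triangles(indices):
    """Expand strip indices into triangles, flipping winding on odd
    triangles and dropping degenerates (repeated indices)."""
    tris = []
    for i in range(len(indices) - 2):
        a, b, c = indices[i], indices[i + 1], indices[i + 2]
        if a == b or b == c or a == c:
            continue  # degenerate: zero area, discarded by the rasteriser
        # every second triangle in a strip has reversed winding
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

def join_strips(s1, s2):
    """Join two strips by repeating the last index of s1 and the first
    of s2, which creates degenerate triangles at the seam."""
    return s1 + [s1[-1], s2[0]] + s2

quad_a = [0, 1, 2, 3]   # one quad as a 2-triangle strip
quad_b = [4, 5, 6, 7]   # a second, disconnected quad
merged = join_strips(quad_a, quad_b)
# merged = [0,1,2,3,3,4,4,5,6,7]: one strip, one draw call,
# but still only 4 visible triangles after the degenerates are culled.
```

Note the two extra seam indices also keep the even/odd winding parity intact, which is why pairs of repeated indices are the usual way to merge strips.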
  5. Game assets are a bit fluid; they don't all subscribe to the same theory. As an aside on lists and stats of figures used: I gave you two figures used in heavier titles that appear in other people's lists and articles. In one of the art forums (heaven help me to find it, just like you and your list) was a specialised shader that made use of extensive UV splits to benefit from a critically small memory footprint in the actual texture. Go figure: heavy splitting would normally mean a bloated vertex count, but the UV soup actually worked. It was based on a Square Enix publication about Deus Ex, and I imagine the shader was built to handle it. I wouldn't recommend causing splits without a desired reason, but modularity and repeats on custom geometry have a distinct appeal. (ETA: of course we could use the technique as a middle part of the workflow and bake down to a final publish, but that would lose the small texture size.)
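On the point above about UV splits normally bloating the vertex count: a GPU indexes a single vertex stream, so a position shared by two UV islands has to be duplicated, once per distinct (position, UV) pair. A hedged toy illustration (made-up helper, not any particular engine's code):

```python
# Sketch: why UV splits inflate the GPU-side vertex count.
def gpu_vertex_count(corners):
    """corners: one (position_index, uv) tuple per triangle corner.
    The GPU needs one vertex per distinct (position, uv) pair."""
    return len(set(corners))

# Two triangles sharing an edge, UVs welded -> 4 unique vertices.
welded = [(0, (0, 0)), (1, (1, 0)), (2, (0, 1)),
          (1, (1, 0)), (3, (1, 1)), (2, (0, 1))]

# Same geometry, but the shared edge is split in UV space -> 6 vertices.
split = [(0, (0, 0)), (1, (1, 0)), (2, (0, 1)),
         (1, (0.2, 0)), (3, (1, 1)), (2, (0.2, 1))]

print(gpu_vertex_count(welded))  # 4
print(gpu_vertex_count(split))   # 6
```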
  6. Well, maybe someone can. I heard a rumour someone supposedly compiled under Vulkan and got twice the frame rate. But when I saw that happen with that setting I figured that, if the rumour was true, chances are they did more than simply compile with a different library. I speculated maybe they fixed that problem for starters. Well, maybe someone familiar with the codebase can sort it out, or help us understand what exactly is going on and where we are at. Anyway, thanks for at least taking a look at it seriously, Wulfie.
  7. I'm glad you liked it. I liked it too; I find it fascinating, even. I take it with a grain of salt, though, because every studio team is different on a vast timeline, and they all develop differently. Consider sorting algorithms and a Google engineer interview. I picked this one because it felt partially dated but still modern enough, and the concepts are fairly solid; concepts age better than some other things. Yes, historically there was one time I went off the rails on purpose in a particular thread to see how far some usual suspects would go. And I raised that point myself about people creating for the context of the asset, not the location. But that is a natural thing here. It doesn't negate the responsibility the Lab has to us in code; 99% of everything is at the mercy of it. It doesn't change the fact that artists are at the very end of the blame spectrum, which Eric subtly pointed out: he was a technical artist in the middle of artists and programmers, and told the artists to ask the level designer, because the level designer outlined the requirements. The merchants have customers, and from casual observation I imagine there is sometimes as much trouble talking to merchants as customers as there is with artists talking to programmers. But not always; some of them are actually kinda nice! Second Life is, well, a very dysfunctional studio of hybrid level designers and artists. But if people give up on getting the Lab to treat content management, rendering pipelines, and their data structures as an artform in itself, we will never see it improve the way gamers have seen theirs improve constantly. And it isn't like the Lab hasn't ever done work; they talk about it now and then when they have something significant. But it has been sixteen years. I mean, really.
  8. Yeah, exactly. I turned that debug setting on from false (its default) and my display of both mesh and generated prims became corrupted in terrible ways. Ways that it shouldn't. If you can explain any condition I can adjust myself that could be the cause, that would be appreciated. I don't doubt OpenGL; I know it is a library. I know it supports features and handles some things natively. And I also know that an application vendor actually has to make use of it, not just compile it or just write GLSL; they have to implement some application-level code (which I know you know). Seeing as you've just linked a feature in a previous build, I'll just tell you now without a screenshot: I turned on the option and noticed the "problem", so I wanted confirmation. This suggests it did not strip, and favoured lists.
  9. There is literally no reason why I should have to provide sources in a sea of people wanting to debate optimisation. They should have already researched it and either be able to provide additional information or simply accept it. But as you wish: here are the opinionated helpful tips of one dude from one studio, which happens to be linked from the Polycount forums. Polycount happens to be an active, industry-focused art community that participated in the old Dominance War competitions that were meant to scout up-and-coming talent, and was also among the groups that were into the limited, sub-150-triangle mini speed competitions. IMO small triangle targets are back in fashion mainly for three things: 1) practice, 2) mobile and ARM, 3) stylistic reasons. Eric talks about draw calls, texture memory, and transform-bound vs fill-bound workloads, plus some blame shifting you should take with a grain of salt, though there is some truth in it. Artists talk among themselves. Also a link to an Nvidia white paper, because industry.
  10. That matters when the engine works as it should. Bending the work to suit an engine that runs like sand instead of fuel in the tank just means we all suffer the performance and the lack of enjoyment, always. It won't improve for as long as people pretend there isn't a problem with it and bend content to those problems. As a contrast, Digital Extremes reworked their IP and built their own particle engine from the ground up when they didn't like the direction a vendor went for their purposes. On the same hardware they not only delivered twice the graphical goodies but also twice the performance. SL can barely handle thousands of particles, but they can do millions. On the same hardware. I am just going to double-stress that point.
  11. The latter, the wiki link. The basic fact has everything to do with it. The card manufacturers, Nvidia in particular, along with the studios over the years, have explicitly gone in that direction. That is why Crysis boasts 3 million triangles for a scene and handles fine for its target hardware. That is also why the larger ships of Star Citizen were rumoured to be ~7 million triangles on their own. They don't care about the triangle count; they care about how the triangles are used. Which is also why a lot of factors cull the number of triangles processed anyway. Triangle stripping makes the draw calls efficient: it batches the work. The more triangles that exist and are put in the strip, the greater the efficiency; that is where the optimisation is, not in the total number of them. Shaving off triangles for 4 fps is a waste of effort and time, and that is why they go more for things that have a greater impact - in code.
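The batching argument above can be put in rough numbers: an indexed triangle list spends three indices per triangle, while one long strip spends roughly one. A back-of-the-envelope sketch (illustrative arithmetic only, not viewer code):

```python
# Index counts for n triangles: indexed triangle list vs a single strip.
def list_indices(n_triangles):
    return 3 * n_triangles       # three indices per triangle

def strip_indices(n_triangles):
    return n_triangles + 2       # first triangle costs 3, each one after costs 1

n = 1_000_000
print(list_indices(n))   # 3000000
print(strip_indices(n))  # 1000002 -> roughly 3x fewer indices to fetch
```

The win is in index bandwidth and vertex-cache reuse, which is exactly why the total triangle count matters less than how the triangles are submitted.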
  12. Also, if I remember from previous readings on complexity and display metrics, there was no mention of it actually representing the work "your" card is doing. Just that it uses this feature and that feature, and comes up with a "figure". Our rendering load changes with the wind of our camera direction, which is more a level designer's concern than an individual asset's. I guess Penny does partially imply that in the OP, at least.
  13. Before anyone keeps beating the optimisation horse, I'd like confirmation of whether triangle stripping actually works in Second Life like it was supposed to since 2005. If it doesn't, I don't care for any opinion on triangle counts and optimisation. Triangles are the base unit that all modern cards are made to handle best in great quantity; the industry went to great lengths to make it that way. They are more concerned about draw calls, and about memory when it is a texture-bound issue (but not always). So: does Second Life triangle-strip effectively and properly, including using degenerate triangles for the same material?
  14. Ah, I am guessing that very impediment, and the lack of it, sits behind the differences in cutting then. Being able to keep that leftover geometry and manipulate or correct it is an intermediary step. I always liked the feeling of being able to cut anywhere and make a face anywhere, at any time.
  15. Did you point him in the direction of the graphics preferences, the shaders, the state of the OpenGL implementation, and the asset design principles SL banked on? If he is a professional, he'll answer his own question in less than a minute.
  16. I don't know how much things have changed by now, but 3ds Max typically had better poly-cutting tools out of the box, ones that impeded you less, too.
  17. Oh, there has been, just not as well disseminated, but that isn't your point. Thank you for the clearer picture. I am well aware of our limited ability to affect the code where it counts, but that also plays into why I still push for the code. If we get so efficient at compromises, and at taking forever to do things as efficiently as possible, we eliminate any reason for the code to change. We've gone through these many years with selection highlighting unable to cope with the system's sheer volume of vertex data in some cases (and so needing to turn it off to restore performance). Certain things have not changed regardless of whether content is efficient or not. I am pretty sure there has been an ancient article somewhere on how to engineer application code to use shaders more efficiently for exactly the purpose of handling selection feedback, vaguely. While we do not control that, we do have the needs, and there are benefits to be had.
I don't agree with the practice of those trees in principle, however, and I understand your efforts and need. I wasn't too keen, way back when clothing tools were being discussed, simply from seeing the resulting topology and knowing how the system tends to behave in general. While I do stand somewhat on the clumsy-logician side of these topics, I would still keep to "reasonable" application and freedom of design, partially because of the rigidity of the system itself. Outside of some aspects of the system's rigidity, yeah, sure, I agree. Perhaps somewhere down the line, a more relatable example and demonstration: something that solves a particular constraint (or couple of constraints) of the system, that may itself influence the decisions and principles in its construction, and demonstrates a wider performance gap in general. Though I'd like clarification on what the FPS counter is actually counting; I heard it wasn't strictly rendering performance, and included most if not all of the application's activities.
Your example in the other thread, making purposeful dual use of the shadow plane, is actually a decent point to keep people thinking about the hows and whys of doing something a particular way (even if ALM comes with its own shadows).
  18. By and large, yes. This is why I tend to push for optimisation and performance through code first, before we push for people to be too picky with their triangles. Once there is a better foundation and a reasonable offering to work and build with in the first place, being fussy about triangles matters more. Not just for performance considerations; the responsibility becomes more mutually shared in general, and facilitating good practices wastes less effort in promoting them. We know full well people are up against three levels of shader programs, of radically different appearances, that every asset is bound to. Given that actual uploads outside the test grid are fixed copies, monetising one-shot database entries almost WordPress-publish style (where have we heard that recently), that we cannot define multiple geometries based on graphics preference (you need more triangles for vertex shadow painting), and that the upload process is generally painful, people will always take shortcuts, even if they can or know how to do better. It's a time-management and sanity thing. I was actually happy to hear animesh was happening, instantly knowing there would (hopefully) be less reason to alpha/mask flip, instantly reducing the number of links and geometry needed for given effects. A step in the right direction. Sadly, it should have been in the design before SL went public, given UT 03/04 existed at the same time with generally shared features that were largely pre-existing concepts.
  19. You do have a point there and there was certainly some interest.
  20. It might be worth noting the specific contexts in which this is actually true and relevant, or rather underlining and stressing them. It is nice that you've taken the time to do a basic test and find something out, but to be realistic, it is missing some fundamental points. A stripped-down basic flat shader will render hundreds of thousands to millions of triangles without sneezing, on old, outdated hardware. Though if the controlling program or shader is written particularly poorly, it might not; but that is a hard one to get wrong. In terms of SL, different windlight settings will impact the same viewed content, even exponentially. The content is clearly not the problem in those cases. That is to say, the same content that renders fine at 30-60 fps can, by changing scene settings, crawl to a snail's pace. We can observe that out in the wild. Other teams and software packages set the bar for the now-outdated "nextgen" at 10k triangles per character, years ago.
You're talking 4.4 frames per second. No one realistically tolerates that; the pain threshold is somewhere under fifteen frames per second, though some may argue twenty. I've stubbornly played an FPS game averaging 8 frames per second; I know from experience when and what that kind of pain is. The number of triangles you are talking about in your test is fundamentally tiny, and doesn't relate to the issues of using high-poly content that was never designed for real-time rendering, which is probably the more significant topic.
Textures, being images, are just a table of data; a shader need never use all of it. I doubt the underlying library even does, explicitly, all the time. OpenGL has had many revisions over the years and is supported and implemented by Nvidia. That doesn't necessarily mean that SL keeps up; I haven't actually looked at the code to be sure. It will come down to how it is written and used. In that sense, texture size for those of us who do not touch the viewer's rendering pipeline is mainly a time-to-download concern.
Circle of control and circle of concern. Of course, most reasonable creators are not totally suicidal, and if it doesn't work on their own viewer and machine, it clearly doesn't work. Just a few things to consider. It is worth being reasonable and wise with triangle counts, but it is not imperative or necessary to stress over them.
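For the fps figures in the post above, it helps to think in frame time, the reciprocal of frame rate: 60 fps is about 16.7 ms per frame, while 4.4 fps is about 227 ms. A one-liner to check the arithmetic:

```python
# Frame rate vs frame time: the reciprocal relationship behind the
# 4.4 fps and 8 fps figures discussed above.
def frame_time_ms(fps):
    """Milliseconds the renderer has per frame at a given frame rate."""
    return 1000.0 / fps

print(round(frame_time_ms(60), 1))   # 16.7
print(round(frame_time_ms(4.4)))     # 227
print(round(frame_time_ms(8)))       # 125
```

Framed this way, "shaving triangles for 4 fps" means recovering a few milliseconds per frame, which is why it only pays off when the frame is actually geometry-bound.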
  21. Some of the decisions people make are based on personal convenience, or on solving for a particular thing. I have my doubts that something like this is as hugely technical as many felt, or that it marks its makers as good or bad in a particular way based on someone's opinion. Though it does suggest they are not trimming as much as possible at the end of the workflow. The basics of SL's mesh implementation dictate that the bounding box will be stretched and filled to; or at least I am pretty sure that was mentioned as a caveat when doing custom physics shapes, and why they may not match the visual geometry. A person's preference for what object to pad the geometry's ratio with will likely be based on their selection tastes in the tool they are using. A tiny triangle that may be backface-culled is more difficult to work with and place than a cube. SL's rendering of non-masked transparencies would cover the entire surface, or may impact click-through ability in some conditions. So limiting the surface area and making the object humanly convenient to work with at speed seems a likely factor. And I left out points of origin and snaps used for translation; well, until now, I just mentioned it. Though the previously explored possibilities may still weigh into it. My instincts immediately go to workflow, human sanity, a hybrid between tool and target, and time, before anything deeply technical and mathematical.
  22. Why Mesh?

    Build with all the tools. Preferably with the LOD/Object detail setting at 2. But build with all the tools as often as you can muster; they all have their purposes and strengths. Unfortunately there was maybe some politics in their implementation along the way, but push on with them all, and do it with little or no apology. Even the Unreal Engine, for a long time if not still, had its own dynamic building blocks; even though you can build everything there with static mesh, it isn't recommended. Same thing with Second Life.
    The prims are a mutable construct created procedurally (as far as I can tell) from a defined set of parameters. They are not always optimised. Sculpties are an image-data-packed 3D lattice array: basically a subdivided plane, which is synonymous with the terrain, but as a modifiable object. It was a pretty obvious stroke of genius back in the day, given that Second Life's typical upload and asset-database use was textures (images). People used it for as much as they could. And for whatever reason, its implementation never grew to solve the typical uses of actual static mesh, such as physics, multiple faces, and rigging. It needed a specific shader to create the representing image from other software packages. It is probably the single thing I know of that Second Life really pioneered deeply. I've defended sculpties in debate with some of my friends before, within limitations.
    Mesh is a love-hate thing. It is the opportunity for a custom prim, but it is also static. The implementation isn't really ideal, and the costing is weird in some places. The whole prim-equivalence debate was probably the worst thing for it. I've had the importer dialog suggest to me that a UV mapping point was an extra vertex, which just undermines and penalises attempts to "optimise" mesh anyway. Still, there is a lot of freedom that comes with "mesh".
    It will just take you a while to get to grips with "acceptable costs" and getting around the tax issues. Project Bento finally opened up new opportunities for Second Life to grow a bit, at least on the avatar front. It will simply take time to find out what each item is best at. My suggestion is to play with them all. Don't worry too much about optimisation and lag; just be level-headed about it. There are times and places where optimising for a system that is far from optimised becomes a bit silly. Learn, practice, and have fun. Find what works, and what does not.
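The sculpty idea in the post above, geometry packed into image data, can be sketched as follows. This assumes the commonly described scheme of 8-bit RGB channels mapping roughly to XYZ inside the object's bounding box; the helper names are hypothetical, not viewer code.

```python
# Sketch: decoding a sculpt-map-style image into a vertex lattice.
# Each pixel's RGB channels encode one vertex position in the
# object's unit bounding box (assumed 8-bit channels).
def texel_to_vertex(r, g, b):
    """Map one 8-bit RGB texel to a position in [-0.5, 0.5]^3."""
    return (r / 255.0 - 0.5, g / 255.0 - 0.5, b / 255.0 - 0.5)

def decode_sculpt(pixels):
    """pixels: rows x cols of (r, g, b) tuples -> lattice of 3D vertices,
    i.e. the 'subdivided plane' the post describes."""
    return [[texel_to_vertex(*p) for p in row] for row in pixels]

grid = [[(0, 0, 0), (255, 0, 0)],
        [(0, 255, 0), (255, 255, 255)]]
verts = decode_sculpt(grid)
# verts[0][0] -> (-0.5, -0.5, -0.5), verts[1][1] -> (0.5, 0.5, 0.5)
```

This also makes the limitation in the post concrete: the output is always a regular grid of positions, so physics shapes, multiple faces, and rigging have nowhere to live in the format.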
  23. I alternate between Sublime Text, Atom, and VS Code.