Fluffy Sharkfin (Resident) · 916 posts

Everything posted by Fluffy Sharkfin

  1. That's correct! Probably the best solution is to block out the boat you want to drape netting over using prims, take note of the dimensions and relative positions of each prim, then recreate them in a 3D app and use cloth simulation to drape a sheet over them. You can then optimize the resulting object, upload it, add a net texture and position it over the rowing boat.
  2. I'm fairly certain that it is; flexi-prims are heavily penalized when calculating complexity. It's possible that the reason you suffer from less lag in a sim full of tinies, despite their complexity being higher, is that, unlike tiny avatars, human avatars have a huge selection of mesh bodies, clothing, skin textures, jewellery, makeup, etc., which translates to a far wider variety of objects and textures which have to be loaded and stored in memory. If you have 50 avatars all wearing the same basic mesh body and a small selection of objects and textures that are re-used on multiple avatars, then you'll most likely see a marked improvement in performance compared to 50 avatars each wearing a completely unique ensemble of high resolution textures on high poly mesh.
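The texture-reuse point above can be sketched with a toy calculation. All names and numbers here are illustrative, not actual SL figures; the point is only that shared assets are loaded once while unique ones scale with avatar count:

```python
def texture_memory_mb(num_avatars, textures_per_avatar, mb_per_texture, shared):
    """Rough texture-memory footprint: shared textures are loaded once
    and re-used across avatars; unique textures load once per avatar."""
    loads = textures_per_avatar if shared else num_avatars * textures_per_avatar
    return loads * mb_per_texture

# 50 avatars, 20 textures each, ~2 MB per texture (illustrative numbers):
print(texture_memory_mb(50, 20, 2, shared=True))   # → 40
print(texture_memory_mb(50, 20, 2, shared=False))  # → 2000
```

A 50x difference in texture memory from reuse alone, before even counting the geometry savings of a shared mesh body.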
  3. Well I was (mostly) joking, although the idea of creating a giant sim-sized bathroom or bedroom and creating a Micro Machines style race track does sound kind of fun (I've always loved games/sims where all the environment is huge and the players are teeny tiny by comparison). While there are a few advantages to scaling down your avatar and environment, it's not really the ideal solution for poor performance. I'm not sure why you're seeing such a drastic contrast in complexity when changing avatars, but it most likely has something to do with the items worn on each avatar rather than their comparative size.
  4. Seems like a good opportunity for a compromise. People could just use tiny avatars and equally tiny boats and planes. If you're 1/8th the size you can have a 128m draw distance and it'll feel like 1024m, it would be like playing Micro Machines in SL!
  5. Yes, I posted a link (a few pages back) to a blog that has a comparison of PBR Metallic vs PBR Specular workflows and outlines the purpose of each map. My point was that, since the specular exponent map is essentially the equivalent of a glossiness map, inverting it would give you the approximate equivalent of a roughness map. As you point out, the colour component of the existing RGB specular maps will be redundant, since the colour of the reflections will be dictated by the base texture, so the RGB channels of existing specular maps can effectively be discarded. As for the environment intensity map currently stored in the alpha channel of 32-bit specular maps, wouldn't that be a closer approximation to a metalness map, since it dictates how much of the environment is reflected on the surface, i.e. how reflective/metallic it is? (ETA: The environment intensity map would, like the specular exponent map, have to be inverted, since in metallic maps black indicates non-metallic surfaces while white indicates metallic/reflective surfaces.)
  6. As I understand it, the RGB specular maps will essentially be completely redundant and the only requirement will be a roughness map which, since it's basically the polar opposite of a glossiness map, could be approximated by inverting the specular exponent in the alpha channel of the normal map.
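The inversion described in the two posts above can be sketched in Python (hypothetical helper name, operating on plain lists of 8-bit channel values):

```python
def gloss_to_roughness(gloss):
    """Approximate a PBR roughness channel by inverting an 8-bit
    glossiness (specular exponent) channel: shiny (255) -> smooth (0)."""
    return [255 - v for v in gloss]

# A fully glossy pixel becomes fully smooth, and vice versa:
print(gloss_to_roughness([0, 128, 255]))  # → [255, 127, 0]
```

The same per-pixel inversion would apply to the environment intensity map if it were repurposed as a metalness approximation.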
  7. You're 100% right! ... that doesn't make sense! But I got what you meant anyway
  8. Agreed! While I may be a technophile at heart, in reality I'm currently using a refurbished office mini-PC that goes for around $300 on Amazon and has a power supply so weak that I had to opt for a bottom-of-the-range graphics card that doesn't even have its own cooling fan, so I certainly don't want to see LL completely abandon those with low-end hardware. At the same time, I know just enough about the technical aspects of the current forward renderer and ALM, and the theory behind the new PBR system, to know that the upcoming changes could potentially make a huge difference to the visual quality of SL. I say "potentially" because some of the new features are going to rely on more than just how well LL has implemented them; they're also going to be heavily reliant on how residents use them when creating content! As a note from one of the meetings concerning reflection probes suggests, I think this is one area in which LL has consistently underperformed over the years, and in regards to the new reflection probes, PBR materials system and viewer performance on low-end hardware, I suspect that investing time to provide creators with proper instruction on how to use the new features correctly to minimize the polygon count and texel density necessary to create detailed content would probably be quite beneficial. As has been pointed out in the past by various forumites, the poor performance of the SL viewer has less to do with the limitations of certain hardware than with the way in which certain features are implemented in the software. That aforementioned cheap graphics card I currently have (average price $120) is also what I use when working in 3D Coat; it tends to get a little choppy if I venture over 20 million polygons on screen, but other than that it works just fine.
Even low-end modern hardware is capable of displaying detailed, high quality graphics depending on how the software is written, so until we actually see the performance of this new viewer I really don't see any reason to panic.
  9. So far all we really know about this project (in regards to performance) is that they're swapping to a less memory-intensive system than the one currently being used and removing the old forward-rendering option. I'm certainly not a fan of the "get a better computer!" approach either, and if LL were to announce that they intended to exclude a significant portion of their user base then I'd most likely be among the first to denounce that decision since, quite frankly, they need every user they can get. However, as you said yourself, some compromise is necessary. LL are trying to update their aging platform to conform with modern standards and provide a more unified and (arguably) superior visual experience which will potentially benefit the vast majority of existing and future users and, while I respect that everyone has their own uses for SL and therefore varying requirements in order to enjoy their in-world experience, the sacrifices made have to impact both sides or they aren't really a compromise at all. LL have already made it clear that they're taking into consideration the effect this will have on the majority of users and will only implement it if "ALM runs roughly as well as non-ALM for those using the latter", and yet we have some folk acting as if LL are an evil landlord who's just turned up at 3am and thrown them and their entire family out onto the street, all based on conjecture about a new, unfinished feature and viewer that we don't even have access to yet.
  10. Most 3D software capable of generating normal maps at least has the option to use MikkTSpace (if it isn't already the default). In fact I suspect the majority of creators using normal maps aren't even aware that there are different types/standards available and are most likely already uploading maps generated using MikkTSpace, which would mean the new PBR system will make the majority of existing content with normal maps applied to it look better rather than worse.
  11. We have no idea what the performance of the future PBR viewer will be like; comparing framerates of the current viewer with and without ALM enabled doesn't really provide any indication of how well the new viewer will run. While I don't normally like to accuse people of hysteria, considering how you started and ended your post I think it's fair to say you're being a little alarmist!
  12. I'm starting to reconsider my position on AI-generated images not being art; these seem like very expressive pieces that appear to be a commentary on the average non-resident's perception of SL's graphical capabilities.
  13. I was reading an article about GET3D a few days ago and, while the results may seem a little crude and unoptimized at the moment, I think this could eventually be a game changer for developers of virtual worlds. I've always said that one of the main stumbling blocks for any platform attempting to compete with SL has been the initial lack of content, but if AI is capable of populating these virtual worlds with an unlimited variety of characters and objects then that hurdle will no longer exist, and SL's main advantage over its competitors will disappear along with it.
  14. Well, capitalism may not have a place in the utopian best-case-scenario of a Metaverse to which you're referring, but according to Neal Stephenson (the guy who coined the phrase "metaverse" in the 1992 novel Snow Crash) it seems that he envisioned capitalism being as rampant there as it is in RL... He even refers to the social stigma associated with wearing off-the-shelf avatars and using low end equipment on a few occasions, referring to those who use pay-terminals as "black-and-whites"... If it's any consolation the sad truth is that once you've learned how to use all those creative tools then, rather than feeling your creativity is stymied by lack of expertise, it will instead be tempered (and perhaps even hampered) by the knowledge of what isn't possible and a deeper understanding of the vast amount of work involved in creating what is. On a more positive note, there's no reason why your particular brand of ASCII-based creativity can't live on inside the wider metaverse, since typing is still a thing (at least for now)!
  15. Sounds peculiar! If the whales were falling rather than flying away (and were inexplicably accompanied by a bowl of petunias) then I'd say perhaps you'd discovered Douglas Adams' Whale of Magrathea?
  16. You're close but, as usual, it's a wee bit more complicated. At the moment we can toggle between using the alpha channel in the main diffuse texture for either transparency or emissiveness (glow), so essentially we already have an 8-bit (greyscale) emissive map but it's being changed to a separate full colour texture. They are including a new type of map, the Occlusion map, which will be in the red channel of the Metallic-Roughness map (which will actually be three separate greyscale textures combined into the red, green and blue channels of a single image). The materials system we have now is basically the equivalent of a Specular PBR workflow while the new system will be a Metallic PBR workflow. You can find a fairly straightforward explanation of the differences between the two and a list of pros and cons for both in this blog post PBR Textures Metallic vs Specular Workflow but essentially this quote sums it up pretty well...
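The channel packing described above (occlusion in red, roughness in green, metallic in blue, as in the glTF occlusionTexture/metallicRoughnessTexture layout) can be sketched as follows; the helper name is hypothetical and the maps are represented as flat lists of 8-bit values:

```python
def pack_orm(occlusion, roughness, metallic):
    """Combine three greyscale maps (equal-length lists of 8-bit values)
    into (R, G, B) pixel tuples: R = occlusion, G = roughness, B = metallic."""
    assert len(occlusion) == len(roughness) == len(metallic)
    return list(zip(occlusion, roughness, metallic))

# Two example pixels packed into a single RGB image:
print(pack_orm([255, 10], [128, 20], [0, 30]))  # → [(255, 128, 0), (10, 20, 30)]
```

Packing three greyscale maps into one RGB texture this way means one texture fetch instead of three, which is part of why the layout became an industry convention.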
  17. The PBR compatible maps they're implementing are pretty much industry standard at this point so will probably be easier to work with than the current SL materials for those using 3D software that supports PBR workflows. The reference to Unreal Engine is only in regards to the upcoming support for the .gltf format, it's certainly not going to be a requirement for any aspect of creating content for SL.
  18. Thanks, I had a look through a few of the meeting notes and linked the blog post that had information relevant to impact on performance, but there's definitely a ton of additional information in subsequent posts that's worth reviewing. I'm quite excited to see the old SL environment/reflection map getting replaced, hopefully it won't take long for people to start utilizing them once the feature is released.
  19. I haven't had a chance to try it yet but from what I read in some of the previous meeting notes here it sounds like the majority of the heavy lifting is dependent on the number of probes sampled. I'd assume an additional consideration would be the rate at which probes are updated which according to the blog post is currently...
  20. @ral61 Whichever approach you take it's going to require a lot of work if you want something other than a completely static mesh object since it will need to be rigged. You may be better off trying to find an existing mesh head of approximately the right shape then adjusting it with the appearance sliders and creating a custom skin for it?
  21. Retopology and creating LODs is certainly a tedious and frustrating art form, it's definitely one stage of the creative process in which I'd like to see AI implemented so I could just drop in a digital sculpt with a few million polygons and type in "make low poly model good now please thank you!" and go make a coffee instead!
  22. Between that and learning how to use reflection probes they're going to be pretty busy.
  23. According to the article, a flat plane for now, but later on a proper placeholder object
  24. When you say "shiny and full bright" are you referring to the old legacy shiny and fullbright settings, or the newer materials-based equivalent? I think the point Qie and others are trying to make is that, while adding additional detail to a diffuse texture can be used (very effectively) to simulate depth and shininess on objects in certain circumstances, it can't replicate the effect created by using a normal map, since it's still just a static diffuse colour texture whereas a normal map is literally bending light in realtime. To illustrate, here's a quick auto-retopo I did of a doodle/practice sculpt I was playing with (it's just under 2000 triangles, so about the same polycount as a sculpted prim, and has a blank grey diffuse texture)... and here is the same object with different lighting... As you can see, the placement and colour of any highlights and shadows on the surface of the object that are simulated by the normal map will change based on the direction and colour of the lighting under which it is viewed, and that's just not something that can be achieved with a static diffuse map.
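The "normal map bends light in realtime" point can be illustrated with a minimal Lambert-style shading sketch. The decode convention (8-bit [0, 255] mapped to [-1, 1], with (128, 128, 255) as the "flat" normal) is the common one; function names are hypothetical:

```python
import math

def decode_normal(texel):
    """Map an 8-bit normal-map texel (r, g, b) from [0, 255]
    to a direction vector with components in [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in texel)

def lambert(texel, light_dir):
    """Diffuse intensity: clamped dot product of the decoded normal and the
    normalized direction toward the light. Because light_dir changes per
    frame, the same static texel shades differently as the light moves,
    which a baked highlight in a diffuse texture cannot do."""
    n = decode_normal(texel)
    mag = math.sqrt(sum(c * c for c in light_dir))
    l = [c / mag for c in light_dir]
    return max(0.0, sum(a * b for a, b in zip(n, l)))

flat = (128, 128, 255)           # "flat" normal pointing straight out (+Z)
print(lambert(flat, (0, 0, 1)))  # lit head-on: full intensity
print(lambert(flat, (1, 0, 0)))  # lit edge-on: nearly dark
```

Real renderers do this per pixel in tangent space on the GPU, but the principle is the same: the normal map supplies the `n` term, and the lighting supplies a fresh `l` every frame.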
  25. A question was asked and subsequently answered, the only reason this thread is 4 pages long is because some folks decided that they needed to express their opinions on something they didn't even see. Had nobody tried to offer their opinion on an incident they didn't witness or have any specific information about and had instead stayed on topic by answering the question asked in the OP then there would be no drama at all. Honestly, the only drama in this thread has been created by those insisting that this thread is about inciting drama... and the reason these so-called "firecrackers" didn't crack that well is because they are a figment of your imagination!