
Biggay Koray


Posts posted by Biggay Koray

  1. 6 hours ago, Fritigern Gothly said:

    Would Vulkan be a possible avenue of exploration in this case?

     

    I literally started learning OpenGL just a few months ago. To put things in perspective: rendering a single triangle in OpenGL takes maybe 100 lines of code, while rendering a triangle in Vulkan usually takes 500+. So nooo.

    • Like 2
  2. Just a mild poke.

    https://i.gyazo.com/75df3048684a095ab5d2b6a8eb3957d3.mp4

    https://i.gyazo.com/462d2db1effc304b09b0adfc9614ed5b.mp4

    I have been reading about image-based lighting and the split-sum approximations used for certain lighting models, and I managed to add Second Life's pre-integrated indirect lighting to my reflectance model. As a side note, this whole project has kind of ballooned, and I have been working on my own fork of Firestorm. Currently my goal is to gut anything pre-OpenGL 4.0 and move towards a more modern pipeline; testing so far has shown promising results.

    I really would like to add some sort of global illumination system back in, like bounce lighting and the baked GI lightmaps that Second Life had years ago. However, I am intrigued by the results of screen-space directional occlusion as a way to achieve radiosity. Additionally, I've been working on sorting out and modernizing the shaders: using layout qualifiers to make them more deterministic, doing full optimization passes utilizing fused multiply-add functions, and gutting anything that looks for fixed-function checks. I'm also hoping to fix up the way the current transformation matrices are handled, because oh man is it god awful.
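    To give a flavour of what I mean by layout qualifiers and FMA (a minimal sketch with made-up names, not the actual viewer shaders):

      #version 400 core
      // Explicit attribute locations make the vertex layout deterministic
      // instead of depending on whatever slots the linker happens to assign.
      layout (location = 0) in vec3 in_position;
      layout (location = 1) in vec3 in_normal;
      layout (location = 2) in vec2 in_texcoord;

      uniform mat4 u_modelview_proj;

      out vec3 v_normal;
      out vec2 v_texcoord;

      void main()
      {
          v_normal = in_normal;
          // fma() compiles down to a single fused multiply-add on GL 4.x
          // hardware; here it remaps UVs from [0,1] to [-1,1] in one op each.
          v_texcoord = vec2(fma(in_texcoord.x, 2.0, -1.0),
                            fma(in_texcoord.y, 2.0, -1.0));
          gl_Position = u_modelview_proj * vec4(in_position, 1.0);
      }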

    • Like 3
  3. On 6/17/2020 at 2:18 PM, Ansariel Hiller said:

    Apple is only deprecating OpenGL because they haven't been able to get a proper implementation working for ages and have been stuck at OpenGL 2.0 or something like that; now they're using the "old-tech" argument to distract from their own incompetence. Apart from that, forcing developers to use Metal comes with some nice lock-in effects, and we all know how Apple likes to lock in their users.

    The only positive effect of switching away from OpenGL to Vulkan (if they do; they are just about to start evaluating how many users are actually capable of using it) would be that we might finally get away from 2004 rendering methods - at the likely cost of (un)expected borkage of existing content.

    This. I didn't even know what to think when I saw them doing the shiny fragment calculations in their vertex shaders.

    Additionally, the Lindens seem insistent on supporting 16-year-old hardware and are still doing checks for OpenGL below 3.0 in llgl.cpp. Even their hardware query for VRAM, OpenGL version, and device driver information is still handled like it's 2002. Plus there are snippets of code for ATI compatibility floating around, which is ludicrous given those GPUs haven't existed for 15 years. Frankly, if people are running 15-year-old GPUs, they likely are not the ones spending money on the game. Linden Labs needs to get a grip and up the minimum OpenGL spec to at least 3.3 or 4.

     

    As a side note, I've seen your work in Firestorm. Nice.

  4. 5 hours ago, KjartanEno said:

    A performance improvement would be good. I've played around with alpha channels in normal maps using the information in the SL Wiki, http://wiki.secondlife.com/wiki/Material_Data

    Is the information here up to date? http://wiki.secondlife.com/wiki/Second_Life's_light_model_for_materials

    It seems to be. I just think the way it's explained is a bit technical for the layman lol. I really wish they would add some pictures of how to build the extra alpha channels in Photoshop or something using the colour channels, as that would probably help a lot of people; shader-side, the idea looks something like the sketch below. One of my jobs when I was working at a tech company was technical writing and documentation, and man oh man, Linden Labs needs to hire some interns or something and just get them to document stuff.
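    For the curious, here is roughly what a shader does with that normal-map alpha (a hedged sketch with invented names; the exponent range is my assumption, and the legacy model is Blinn-Phong-ish per the wiki page above):

      #version 400 core
      // The normal map's alpha channel carries per-pixel glossiness:
      // 0 = broad dull highlight, 1 = tight sharp highlight.
      uniform sampler2D u_normal_map;   // RGB = tangent normal, A = gloss
      uniform sampler2D u_diffuse_map;
      uniform vec3 u_light_dir;         // world space, normalized

      in vec2 v_texcoord;
      in vec3 v_normal;                 // skipping the TBN transform for brevity
      in vec3 v_view_dir;               // surface-to-eye, normalized
      layout (location = 0) out vec4 out_color;

      void main()
      {
          float gloss = texture(u_normal_map, v_texcoord).a;
          // Remap gloss to a Blinn-Phong specular exponent (range assumed).
          float spec_exp = mix(1.0, 512.0, gloss);

          vec3  N = normalize(v_normal);
          vec3  H = normalize(u_light_dir + normalize(v_view_dir));
          float highlight = pow(max(dot(N, H), 0.0), spec_exp);

          vec3 base = texture(u_diffuse_map, v_texcoord).rgb;
          out_color = vec4(base * max(dot(N, u_light_dir), 0.0) + highlight, 1.0);
      }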

     

    Hint hint, I'm available, Linden Labs, as long as the pay is decent for part-time work!

  5. 5 hours ago, OptimoMaximo said:

    I've been trying to advocate for a better use of glossiness and environment maps since materials were introduced. Nobody ever listened.

    Anyway, since the Cook-Torrance model is NOT the industry standard, in order to get decent metals from the base shader model it is built on (Blinn, aka old gen), the diffuse component should be full black and the colour should be transferred over to the specular map. The environment map you can use as a metalness map is useful for achieving the black diffuse if you use it as a subtracting map to alter the base diffuse.

    Why isn't the Cook-Torrance model the industry standard? Because it is and has always been a flavour of the Blinn model. Blender has always had that shader since its inception, back when there was the Blender Render engine, waaaay before Cycles was ever even thought of. The fact that Blender still uses it as the base model on which the current BRDF models build doesn't make it an industry standard. Indeed, those that use it (Unity and Sansar, for example) are usually subpar in comparison to the others using the actual standard model (Unreal, Redshift, Arnold, to name a few) originally developed by Disney for their RenderMan render engine.

    Firstly, the UE4 devs looked at the Disney BRDF, but nowhere did they implement it, and I can provide proof in the form of the SIGGRAPH paper they wrote on the subject. That said, while the Disney BRDF is more commonly used overall, it is used in many more applications, like visualization and real-time rendering outside of games. In regard to the industry-standard comment, I was speaking with respect to modern game engines, as Second Life is a game, not a 3D modelling package running on an expensive workstation card. You will find that implementing the subsurface scattering, anisotropy, clearcoat, and sheen terms, which the UE4 devs passed up on, is overkill for most games, and what they end up with is something closer to the aforementioned Cook-Torrance. Moreover, if you look at Guerrilla Games, Frostbite, and yes, Unity, you will find many using something a lot like Cook-Torrance there too. I can provide papers on the subject if you would like.
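    For reference, the specular term everyone here is arguing about is the standard microfacet form (this much is textbook, not my invention):

      \[
        f_{\text{spec}}(l, v) = \frac{D(h)\, F(v, h)\, G(l, v, h)}{4\,(n \cdot l)(n \cdot v)}
      \]

    where D is the normal distribution function, F the Fresnel term, G the geometry/shadowing term, and h the half vector between l and v. Different engines swap in different D/F/G choices (GGX, Schlick, Smith, etc.), which is why "Cook-Torrance" really covers a whole family of shaders.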

    As a tidbit, my borked environmental reflections using some jury-rigged image-based lighting are actually modified from the stuff discussed in the Unreal paper, and my other go-to resources have been LearnOpenGL, Real-Time Rendering (Fourth Edition), Physically Based Shader Development for Unity, and Physically Based Rendering: From Theory to Implementation.

  6. So, hello everyone.

    I have been mucking around with Second Life's lighting model, and I thought I would share some of my results.

    Current Iteration of the Cook-Torrance Physically Based BRDF

    Video of it in Action

    A PBR rock texture with scan data, pulled straight from some website into SL.

    Video of an older Revision

    My goal has been to slowly work towards unraveling the demon that is the rendering engine and to improve what we get out of this beast. Currently, I find it very difficult to produce good-looking metals and glass in Second Life due to the abysmal environmental reflections. Additionally, I have noticed that many people do not exploit the additional specular exponent/gloss map that can be encoded in the normal map's alpha channel, so part of my motivation was to raise awareness with this project and really push the limits of what Second Life can do.

    I have been primarily recoding and modifying the shaders found in the app_settings folder of the client, and I have managed to port the lighting model used in spotlights, point lights, and environment lighting over to Cook-Torrance, since it is one of the industry's de facto standards nowadays. At the moment, I am still trudging through learning OpenGL, but my current goals are to add support for screen-space reflections and screen-space directional occlusion, and to recycle the diffuse alpha channel (when it is not used for emissive masking or transparency) for displacement maps driving parallax occlusion mapping.

    For those curious about how this is all implemented (rough shader sketch after the list):

    • Diffuse is used as albedo.
    • Specular alpha (environmental reflection intensity) is used as metallic.
    • Normal alpha (specular exponent/glossiness) is inverted and used as roughness for the Cook-Torrance BRDF.
    • Specular RGB is mixed with the specular output of the BRDF to preserve specular tinting.
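    Putting those four bullets together, roughly (a sketch with invented names and a GGX/Schlick/Smith flavour of Cook-Torrance; my actual shaders differ):

      #version 400 core
      // Channel remapping from the list above, driving a microfacet BRDF.
      uniform sampler2D u_diffuse_map;   // RGB = albedo
      uniform sampler2D u_specular_map;  // RGB = spec tint, A = env intensity
      uniform sampler2D u_normal_map;    // RGB = normal, A = glossiness

      uniform vec3 u_light_dir;   // world space, normalized
      uniform vec3 u_light_color;

      in vec2 v_texcoord;
      in vec3 v_normal;    // skipping the TBN transform for brevity
      in vec3 v_view_dir;  // surface-to-eye

      layout (location = 0) out vec4 out_color;

      const float PI = 3.14159265359;

      float d_ggx(float NdotH, float rough)
      {
          float a2 = rough * rough * rough * rough;
          float d  = fma(NdotH * NdotH, a2 - 1.0, 1.0);
          return a2 / (PI * d * d);
      }

      float g_smith(float NdotV, float NdotL, float rough)
      {
          float k  = (rough + 1.0) * (rough + 1.0) / 8.0;
          float gv = NdotV / fma(NdotV, 1.0 - k, k);
          float gl = NdotL / fma(NdotL, 1.0 - k, k);
          return gv * gl;
      }

      vec3 f_schlick(float VdotH, vec3 F0)
      {
          return F0 + (1.0 - F0) * pow(1.0 - VdotH, 5.0);
      }

      void main()
      {
          vec3  albedo    = texture(u_diffuse_map,  v_texcoord).rgb;
          vec4  spec      = texture(u_specular_map, v_texcoord);
          float metallic  = spec.a;                                     // env intensity -> metallic
          float roughness = 1.0 - texture(u_normal_map, v_texcoord).a;  // inverted gloss

          vec3 N = normalize(v_normal);
          vec3 V = normalize(v_view_dir);
          vec3 L = normalize(u_light_dir);
          vec3 H = normalize(V + L);

          float NdotL = max(dot(N, L), 0.0);
          float NdotV = max(dot(N, V), 1e-4);

          // Dielectrics get ~4% F0; metals take their tint from albedo.
          vec3 F0 = mix(vec3(0.04), albedo, metallic);

          float D = d_ggx(max(dot(N, H), 0.0), roughness);
          float G = g_smith(NdotV, NdotL, roughness);
          vec3  F = f_schlick(max(dot(V, H), 0.0), F0);

          vec3 specular = (D * G * F) / (4.0 * NdotV * NdotL + 1e-4);
          specular *= spec.rgb;                        // preserve specular tinting
          vec3 diffuse  = albedo * (1.0 - metallic) / PI;

          out_color = vec4((diffuse + specular) * u_light_color * NdotL, 1.0);
      }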

    As of right now, I have not noticed any broken content from these shader changes. Additionally, I have ditched a significant number of branching statements in my version of the shaders, so there is a mild performance uplift.

    • Like 3
  7. 9 hours ago, CoffeeDujour said:

    Give SL's render engine a lighting boost and people will still favor the content we have now simply because they have no idea how to effectively light anything. Lighting in SL is either massively neglected (to the point people make things full bright - because it's dark), used so badly it's painful to look at, or abused to make up for cube map deficiencies.

    Backward compatibility is not something LL are going to compromise on, at least up until such time as they decide maintaining something is no longer feasible.

    Sculpts could easily be converted to regular mesh; the blocking issue is LI accounting, and while most sculpts would come out fine, there are edge cases (like giant off-region landscaping) that won't. SL is a rat's nest with another edge-case gotcha around every corner.

     

    SL without the legacy content or design decisions .. that's supposed to be Sansar.

    I am reinvigorated from sleeping and am ready to spit some fire about this here rendering engine. 
     

    Alright, first point there. We already have a deferred lighting engine, kind of sort of, with Advanced Lighting.

    • Full bright, or shadeless, can work in deferred lighting systems just *****ing fine, and already does with Advanced Lighting on, which, as previously stated, is a franken-hybrid deferred system. Nearly every major game engine has been doing this for years - see unlit shaders in Unreal and Unity, or light-path shadeless materials in Blender.

     

    • Traditionally, full bright materials are treated as "emissive": they don't light other objects and appear unshaded. This can be done via a light-path check using a ray and the current camera position before shaders are calculated, or, if you want to get really lazy, by making the object emissive like a standard glow object and cranking down the funky bias and fuzzy glow effect around it. If you want to get really basic, have a default fallback shader that is unlit and just applies a uniform amount of light to every visible pixel on a full bright object (sketch after this list). All of this could be implemented without breaking the current full bright content in the game, and likely is to some degree already - I would need to look at the source some more. Again, this is one of the reasons fully deferred systems can be so advantageous: they scale well, because after your geometry pass, the amount of shader work and data you need to fetch and move can be heavily attenuated and threaded accordingly.

     

    • Don't give me this argument that people are too dumb to figure out how to light a scene. If I can explain to a biology major who has never worked with lighting nodes before, let alone a game engine, what, when, and how lighting nodes are used in UE4, the same can apply to Second Life. A big reason Second Life's lighting and scene setup is so horrible is the painful user experience and the bad practices that come from using and interacting with the game. People simply don't know what to learn or don't fully comprehend the effects of their actions, and as much as I love Torley and his cheerful videos, this game needs much better documentation; I think the UE4 dev docs would be a good place to start for inspiration. Additionally, a stronger emphasis should be placed on rewarding good content creation practices and explaining in-world content creation and manipulation in as many ways as possible - not all people coming to this game are highly technical or possess computer-savvy backgrounds, and they need context. (Don't get me *****ing started on coders or engineers who can't write instructions for regular human beings or add some *****ing pictures. The number of amazing CS students and engineers I have met that can't make a pitch to a VC or explain their ***** to a regular layman is maddening. The budding cognitive scientist in me is screaming: embrace multi-aptness and elegance.)
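    Here's how simple the unlit fallback mentioned above can be (a hedged sketch with invented names; the point is that a fullbright pixel skips all lighting math):

      #version 400 core
      // Fullbright/unlit fallback: emit the texel as-is. In a deferred
      // pipeline these pixels get tagged (e.g. a G-buffer flag or stencil
      // bit) so the lighting pass leaves them alone.
      uniform sampler2D u_diffuse_map;

      in vec2 v_texcoord;
      layout (location = 0) out vec4 out_color;

      void main()
      {
          out_color = texture(u_diffuse_map, v_texcoord);
      }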

     

    Phew. Alright, backwards compatibility and sculpts.

    Don't give me this horse *****. It is costing their company more money in server and bandwidth upkeep costs. Change land impact and innovate a little. Hell, you are in god damn California; grab some interns and pay them in pizzas or something, or grab some bored 80s hot shots that were fired over their age. (I literally got a job at a quantum computing lab once by breaking into it, walking into the director's office, and asking for a position; then my mother began dying, I was stressed as ***** at school, and then I tried to kill myself - have some courage. If someone tells you it's impossible or has too many gotchas, ask them why, and keep asking them why.)

    Here are some ideas:

    • Do some data mining and get a sense of how complex and texture-heavy something should be with respect to its size in the game world. (Both from a hardware and a desired-fidelity standpoint.)
    • Punish or "encourage" content creators to adhere to this. (Favoured advertisement and promotion, reduced land impact, or some exclusive features or bells and whistles.)
    • Look into some kind of dynamic cache management that, along with the current mipmapping techniques, can generate and store lower-res assets locally and pull them first, so the engine has to work a little less.
    • Geometry instancing! (Not sure if this exists already, but it could be huge for sim surrounds and highly tileable, repeating geometry - see the sketch after this list.)
    • Embrace heterogeneous computing. This game is data-heavy and dynamic as hell. Some form of virtual memory system, or the ability to swap ***** more efficiently back and forth between the CPU and GPU or use both to varying degrees, can make a big difference. If I were to make a case for Vulkan, this would be one of the big reasons, as OpenCL and OpenCL-like mixed computing can be done much more readily within it. Depending on load, what's going on in the sim, and hardware availability, preference could be given to whichever processor is fastest and most appropriate for the task, or simply available.
    • FOR THE LOVE OF GOD, SOME BETTER POST PROCESSING. THROW SOME SHARPENING FILTERS IN THERE, OR A FEW LIGHTWEIGHT AND SCALABLE IMAGE PROCESSING HEURISTICS. (If people can click a button to make something look sharper or better, they will likely do that over adding more geometry or more texture resolution, especially if they are punished for doing so.)
    • Better and more procedural content. Take a look at what nodes can do in UE4 or Substance Designer. (This can extend beyond textures and can be combined with ***** like geometry instancing.)
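    For the instancing bullet, here's the shader half of it (a sketch; the attribute locations and names are my own, and the C++ side would set glVertexAttribDivisor on locations 4-7 and call glDrawElementsInstanced):

      #version 400 core
      // One draw call renders many copies of the mesh; each copy reads its
      // own model matrix from an instanced attribute. A mat4 attribute
      // consumes four consecutive locations (4, 5, 6, 7 here).
      layout (location = 0) in vec3 in_position;
      layout (location = 4) in mat4 in_instance_matrix;

      uniform mat4 u_view_proj;

      void main()
      {
          gl_Position = u_view_proj * in_instance_matrix * vec4(in_position, 1.0);
      }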

     

     

    • Like 1
    • Haha 2
  8. 4 hours ago, animats said:

    Ray tracing for worlds as complex as SL's is a ways off. But there is a way.

    The whole graphics system of SL needs modernization, but that's a huge job. One new hire isn't enough. The next new hire will probably be stuck trying to make the viewer use Vulkan (Windows) and Metal (Apple). Until recently, you could just use OpenGL on all platforms, but that's changing. Vulkan seems to be the future. There's Vulkan for Windows, for Mac (via an open source adapter called "MoltenVK", if that works), and for Linux.

    This is a job I would consider "hard", "not fun", and "write once, debug everywhere". This is far harder than EEP, and look what a mess that turned into.

    Maybe, if we're really lucky, we get physically based rendering out of this. Principled BSDF, which is a Disney/Pixar standard and which Blender understands, is probably the way to go. It's more texture layers. Right now, SL has diffuse, emissive, specular, and normal ("bump") textures. This adds more textures to the mix:

    [image: Blender's Principled BSDF shader node inputs]

    Putting all those layers together is something modern GPUs can do fast. Much faster than ray tracing. The main use cases are for skin, for which the "subsurface" layer contributes to realism, and automotive paint jobs, where "clearcoat" and "clearcoat roughness" matter. Most of the time, you don't use all those textures at the same time.

    From an SL perspective, it's almost all viewer side. The server just has to tell the viewer the UUIDs and URLs from which to get the texture images, and some numbers associated with how they're assembled. There's no wiring up of shader plumbing, as with Cycles render. It's close to the way SL represents materials now. So LL might be able to pull this off.

    At lower graphics settings, the viewer would skip some of the more subtle layers. Think of this as "Advanced Lighting Model, Boss Level". You'll need a good GPU. Time moves on and GPUs get better and cheaper.

    What do the graphics people think of this? It's worrisome that, with all the tutorials on "Principled BSDF" for Blender, very few show photorealistic humans. Lots of shiny things, garden gnomes, etc., but not many humans.

     


    PBR could be doable, but I would really, really emphasize that they move to a fully fledged deferred rendering engine and ditch the legacy crap. Also, OpenGL is hardly dead, and though Vulkan would be a huge improvement, they would likely have to up the minimum system spec considerably, and porting from OpenGL to Vulkan is not trivial. The last time I peeked over the rendering code, they were still using a ton of early-ass OpenGL crap that they had extended with GL ARB calls to maintain backwards compatibility. Also, to clarify, PBR materials are more than just adding more texture channels: parts of the shaders are reworked to handle energy preservation approximations, which factor into specular highlights, Fresnel, IOR, etc. Additionally, it looks like this was started, or at least attempted, in the way parts of the environmental shading system and time of day were implemented, if I'm understanding things correctly. The other thing to take into account is that one need not be limited to the texture channels strictly provided by conventional PBR workflows; you can implement stuff like parallax occlusion mapping, subsurface scattering, etc. Almost all modern triple-A game engines use custom-spun PBR rendering engines with extra perks thrown in, like fancy subsurface scattering for skin and translucency, or parallax and height info for fancy depth details.
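    To unpack the Fresnel/IOR point a little (standard textbook material, not specific to SL): most real-time PBR uses the Schlick approximation,

      \[
        F(\theta) \approx F_0 + (1 - F_0)(1 - \cos\theta)^5,
        \qquad
        F_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2
      \]

    where F_0 is the reflectance at normal incidence, derived from the indices of refraction of the two media. This is why PBR shaders get away with a single F0 colour (about 0.04 for most dielectrics) instead of a full IOR input.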

    An example of PBR workflow taken to 11 in UE4 without ray tracing.

    Blender PBR realtime engine.

     

    General Rant Time About the Engine
    A big gripe of mine is how a lot of assets are handled, which seems to encourage poor content design practices and likely costs LL a good chunk of change. From a data storage and streaming standpoint, they punish players for using lots of geometry, which is relatively inexpensive even by today's standards, while there is no penalty for plastering 1k texture maps over everything. Then they turn around and make geometry mandatory that usually entails extra colour data or preassigned textures, which players seldom remove. This makes me shake my head for several reasons when compared to a modern graphics engine, or anything else that handles lots of streamed data - those being:

    1. PNGs for 1k maps are typically on the order of several megabytes, compared to a few dozen kilobytes for meshes, yet people are punished with land impact based primarily on geometry count rather than VRAM usage. This is puzzling to me since it ends up being more expensive for LL, and it can result in people producing content with lower-resolution meshes, though not always, that still end up wearing ridiculously oversized textures. (Yes, modern GPUs have more VRAM on them, but I will get back to this.)

    2. With the above point in mind, LL partially encourages this behaviour by providing a single UV channel, which results in people prebaking lighting into the diffuse textures they want to look pretty, so they use huge resolutions. To avoid this, modern engines use tileable, procedural, and conventional texture solutions in the first UV channel, and a secondary UV channel for lower-resolution lightmapped data when it's needed. Though you increase the texture calls with this approach, you are still streaming and loading less data onto the GPU, and if people get clever with their lightmapping or texture tiling methods, you can still optimize away some of the additional texture calls.

    3. Alright, getting to VRAM usage and people with thicc boi HBM and GDDR6 video cards. Yes, modern GPUs can handle more textures; however, by having to stream or decompress all these damn things, especially if you are sim hopping or people are loading in, you are introducing a bottleneck into the render pipeline. The textures first have to be downloaded, decompressed, and then sent to the graphics processor. That is a lot of god damn waiting and intermediary steps, and the juicier your textures are, the longer you are going to end up waiting. Additionally, there has to be some kind of scheduling or polling system that checks or notifies the client when there is new content to load, which also slows crap down. Now, before all the smart asses with fast internet connections attack me, I will clarify that every draw call, every intermediary step, and every texture that has to load is going to have an impact on frame time, and render engines really don't give a rat's ass if you have gigabit internet when they need to go and fetch something, because every millisecond or fraction of a millisecond counts. Don't believe me? Go load up Rage and see how megatextures turned out in their first incarnation, and now imagine that in a more dynamic environment.

    4. We need to move to a fully deferred lighting engine and ditch this legacy ***** once and for all. There is so much prebaked bull***** in this game that tries to emulate richer and more complex lighting environments, and it drives me god damn crazy, because everybody wants to bake that ***** into 1k diffuse maps or alphas all over the place. What does a deferred lighting system do? It allows you to have assloads more light sources, especially static ones, which could totally be implemented alongside the dynamic ones, calculated after your geometry data is written out. This lets you basically only care about what geometry is on screen and only do the lighting calculations necessary for what you as the player would actually see, and the engine can ignore all the other bull***** - from what I could understand of the current engine, SL kind of does this already, but it's half-baked (see the sketch after this list). This would also dramatically reduce dev time, because you wouldn't have to maintain what is more or less two render engines.

    5. LL needs to not be afraid of *****ing breaking some functionality. Day and night cycle and environmental effects? ***** it. Focus on one god damn thing and do it well. Looking at this code is insane. I have sat down with notebooks and *****ing dependency chart graphing tools, and I still can't fully figure out where this *****ing mess starts or ends. Get us a stripped-down client with just a simple skybox, some geometry, and some textures to put on that geometry. Build the engine up from there.

    6. Get rid of sculpts and figure out a way to convert the preexisting ones to mesh. If someone has managed to convert prims, which I suspect are some freaky BSP thing, to mesh, then there should be a way to retroactively convert all those sculpt maps to mesh - go grab Moy or something. It will save you storage on your servers and reduce the amount of insanity that the client has to handle. You could then merge all that crap into the existing system that handles conventional mesh and streamline the engine a bit. Less ***** to maintain and worry about, and easier to improve.

    7. Decals and nodes. Now, I can see why lighting and particles are attached to in-world objects, for obvious reasons like script control, dynamic stuff, etc. However, especially in the case of lighting, having this be the only option is *****ing dumb. Let's break down what it takes for a player to place a static light in a scene:

    • Hmm, I want a light source here.
    • I need to rez an object, which contains positional data; likely colour and texture data, because I'm lazy or have no idea what I'm doing; and data specifying who it belongs to and permissions. (Note: we have several vectors and possibly megabytes worth of array data here for something that might not even be visible - this is god damn insane.)
    • I now need to choose what kind of light I will have, projector or standard, and all of its properties. More vectors and a potential array.

    Now I ask: if the person owns the sim or has some kind of elevated land-editing permission, why do they need to waste so much damn data on a static object? All they need is position, permissions and ownership, and the properties of the light source. The same god damn thing applies to particle effects, and to decals, which are usually implemented as alphas on prims that waste unnecessary geometry and texture calls. LL could save so much additional processing and bandwidth, and boost performance a great deal, by implementing some kind of empties or node system for crap like this. It could also have added benefits down the line for things like reflection capture sources or cube maps, as well as simple in-world scripts and sensors that don't need geometry.
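    Since I keep banging on about deferred rendering (point 4 above), here is the shape of the geometry pass (a generic sketch with invented names, nothing to do with the actual viewer code):

      #version 400 core
      // Deferred geometry pass: write surface attributes into the G-buffer
      // and do zero lighting here. The separate lighting pass then reads
      // these render targets and shades only the pixels each light touches,
      // which is what makes "assloads more lights" affordable.
      uniform sampler2D u_diffuse_map;

      in vec2 v_texcoord;
      in vec3 v_world_normal;
      in vec3 v_world_pos;

      layout (location = 0) out vec4 gbuf_albedo;    // RGB albedo
      layout (location = 1) out vec4 gbuf_normal;    // world normal, packed to 0..1
      layout (location = 2) out vec4 gbuf_position;  // world position

      void main()
      {
          gbuf_albedo   = vec4(texture(u_diffuse_map, v_texcoord).rgb, 1.0);
          gbuf_normal   = vec4(normalize(v_world_normal) * 0.5 + 0.5, 0.0);
          gbuf_position = vec4(v_world_pos, 1.0);
      }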

    I am going to stop here because I have stomach flu or something and need to rest, but I could continue. Disclaimer: I am not an expert and have limited graphics programming experience. In fact, I am a huge nooby, and my *****ing C is rusty as *****. However, I may have had the pleasure of working for a certain company heavily involved in games and graphics for a year very, very recently, and perhaps witnessed how development, testing, and optimization worked for applications ranging from games to 3D suites to self-driving cars. Although I commend Linden Labs for the amazing work they have done and how amazing this crazy-ass game is, the rendering system needs some serious love.

    1. Really map the thing out - figure out how it works.
    2. Cut down as much unnecessary crap as possible and streamline - make your life easier.
    3. Begin building from there - start fixing and improving it.

    (Side grumbles about possible OpenCL or GPU-accelerated particles and multithreading.)

     

    • Like 1
    • Thanks 1
  9. On 5/17/2019 at 9:33 PM, Vixus Snowpaw said:

    Prepare to be absolutely blown away then.
    There are plenty of content creators and gaming enthusiasts who also play Second Life.
    I can name 8 people, 8 who just happen to be people I hang out with inworld.
    If that's any statistic to go by, then I'd be relatively certain in saying that your metric is off by a landslide.


    Oh and OP: The community will deliver. There are plenty of us enthusiasts around to make the dream come true, especially since a few viewer devs (including myself) have a vested interest in making SL look as pretty as possible.

    The same content creators that fill their products with unnecessarily large textures, geometry, and bloated scripts, in a game that penalizes people more for geometry count than for texture data, which is orders of magnitude more expensive to store and stream? The same content creators that either rehash work by Linden Labs or give up and say that the rendering engine is too big and scary and the code too obfuscated?

    Anyway, you can get ray tracing working with Second Life; it's just a bit of a pain in the ass and is limited to depth-buffer/screen-space effects for the moment, at least with ReShade.

    https://i.gyazo.com/c918495088980042e9de2c70fe974c19.mp4

    https://i.gyazo.com/ae671903e97e5ca0a3fabbe5598a936c.mp4

    https://i.gyazo.com/f3a1a73bdd075bd7df8b48b80e97263c.mp4

    https://i.gyazo.com/418304ccea6490f3988578e0fdd5018e.mp4

    I was exploring the idea of allowing the dynamically generated cubemap that Windlight settings create, which is later sampled by SL's environmental reflection engine, to be overridden. I managed to ***** up the sampling and texture scaling on it, so it ended up pulling from garbage areas of memory. My plan of attack was going to revolve around using stb_image and shunting in a custom cubemap for the environmental reflections, but I had to give up due to work and school *****. I'd imagine it could also be possible to sample full-perm textures or UUIDs from in-game and somehow feed those into the engine - a kind of kludgy cube map or reflection capture node or something.
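    Shader-side, the lookup itself is trivial (a generic sketch, not SL's actual env shader); the hard part is all in building and binding the right cubemap:

      #version 400 core
      // Generic environment reflection: bounce the view vector off the
      // surface normal and fetch the cubemap texel along that direction.
      // Swapping in a custom cubemap just means binding a different texture.
      uniform samplerCube u_env_map;

      in vec3 v_world_normal;
      in vec3 v_view_dir;  // surface-to-eye, world space

      layout (location = 0) out vec4 out_color;

      void main()
      {
          vec3 R = reflect(-normalize(v_view_dir), normalize(v_world_normal));
          out_color = texture(u_env_map, R);
      }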
     

    https://i.gyazo.com/c1a78312b6e96e5f9e4a59827806192c.mp4
     


  10. Thanks for the words of encouragement; I'm not usually this much of an **bleep**. It's just really irritating when you have a really nice model all done and ready, and then you can't even figure out how to get the material face ID thingy working and are losing hope fast. I've basically spent the last 3 days trying to figure this crap out, so I'm sorry, everyone, for screaming at you and getting rather grouchy.

  11. Here's a bit of clarification of my understanding,

    ordered in steps:

    1. Model your object.

    2. Select your object face IDs.

    3. Make UV maps.

    4. Use UV map templates to design textures for your object.

    5. Once your textures are done, open them up in the material tool using a blank material and load them in as a bitmap in the diffuse layer.

    6. Use a multi-sub-object material, load in the textures you made from the other 2 materials, match them accordingly to your face ID presets, and apply it to the object.

    7. Export by going to File and exporting in the .DAE file format.

    8. Upload into Second Life and wonder why I can't select just one face instead of the entire thing.

  12. <library_materials>
      <material id="_02 - Default" name="_02 - Default">
        <instance_effect url="#_02 - Default-fx"/>
      </material>
      <material id="_03 - Default" name="_03 - Default">
        <instance_effect url="#_03 - Default-fx"/>
      </material>
    </library_materials>

    is basically what mine is coming up with. Furthermore, can someone explain to me whether I should be using the multi-sub-object material method or just materials set onto faces?

  13. I was pissed last night, and I still am, but this is my problem. I need to be able to have an object in Second Life, made from mesh, that has selectable faces. Real god damn simple, right? I apply the materials to the faces of the object and export it in DAE format - I don't give a damn about textures anymore - but when I go to load it into Second Life, THE OBJECT STILL HAS ONLY ONE SELECTABLE FACE. What the hell is going on? Now, I've had a person send me a DAE from Blender that has selectable faces in Second Life. When I load that crap into Autodesk and do nothing to it except export it as an object named something other than the original, the same thing happens, and I am left with an object with only one selectable face. I have tried everything here, I have read nearly every god damn tutorial for this crap, I've practically slayed a god damn dragon here, and I cannot get this crap running. Now, one more thing: if you're going to go all grammar nazi on me, you can shove those god damn similes, apostrophes, exclamation marks, euphemisms, capitalized letters, metaphors, hyphens, commas, periods, semicolons, and question marks up where the sun don't shine.

  14. Ok, I am seriously frustrated, confused, and lost with this. I have a decent low poly of a pedestal that I want to upload with textures, but I cannot seem to figure out how to do this. I have tried to apply the material selection in 3ds Max using bitmap and diffuse layers. An important note I would like to state here: I have tried multi-sub-object materials with IDs, and I have tried manually selecting each polygon and applying the right material to it, but when I go to export it and load it into Second Life, IT HAS NO FREAKING TEXTURES. My goal here is simple: I want to upload a mesh into Second Life with textures from 3ds Max, but with faces that can still be retextured in-game by someone else should they choose to. All help appreciated.

    I'm running 3ds Max 2012 Service Pack 2 and the most recent version of the FBX plugin.

     
