OptimoMaximo

Resident
  • Content Count: 1,209
  • Joined
  • Last visited
  • Days Won: 3

OptimoMaximo last won the day on February 15 2019

OptimoMaximo had the most liked content!

Community Reputation: 1,364 Excellent

About OptimoMaximo
  • Rank: Maya MahaDeva
  1. She's learning the general workflow, and on a test item like Suzanne it doesn't matter how bad the UV layout is. SL and game engines in general don't support UDIMs (yet), since that workflow assumes one material whose shader reads the texture tile numbering. However, if you assign a different material to the geometry corresponding to the UVs within each UDIM tile, that works in SL.
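A minimal sketch of that per-tile material idea (hypothetical face/UV data structures, not SL's or any engine's actual API): work out which UDIM tile a face's UVs fall in, then bucket faces per tile so each bucket can be given its own material.

```python
import math

def udim_tile(u, v):
    """Return the UDIM tile number for a UV coordinate.

    Tile 1001 covers the 0-1 square; numbering advances by 1 per
    tile in U (10 tiles per row) and by 10 per row in V.
    """
    return 1001 + math.floor(u) + 10 * math.floor(v)

def faces_by_tile(face_uvs):
    """Group face indices by the tile of their first UV corner.

    `face_uvs` is a hypothetical list of faces, each a list of
    (u, v) tuples; assumes no face straddles a tile boundary.
    """
    buckets = {}
    for face_index, uvs in enumerate(face_uvs):
        u, v = uvs[0]
        buckets.setdefault(udim_tile(u, v), []).append(face_index)
    return buckets

# Two faces land in tile 1001, one in tile 1002:
faces = [[(0.2, 0.3)], [(0.8, 0.9)], [(1.5, 0.5)]]
faces_by_tile(faces)  # -> {1001: [0, 1], 1002: [2]}
```

Each bucket then maps to one material slot, which is exactly the "one material per tile" workaround the post describes for SL.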
  2. The hole doesn't have edges connecting it to other vertices, and the automatic triangulation is probably creating problematic geometry to resolve it. Try reducing the number of edges within the hole and connecting the outer vertices to other vertices on the triangle's surface using the knife tool. Right now that face is a massive ngon, and it might be the culprit.
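To see why that ngon is a problem: a naive fan triangulation turns an n-sided face into n - 2 triangles, and on a concave outline (like a face with a hole in it) those fan triangles can overlap or degenerate. A toy sketch over plain vertex indices:

```python
def fan_triangulate(indices):
    """Naive fan triangulation: connect vertex 0 to every other edge.

    Any n-sided polygon yields n - 2 triangles. On a convex face
    this is fine; on a concave face (a surface with a hole cut in
    it) a naive fan produces overlapping or degenerate triangles,
    which is the kind of geometry that trips up automatic
    triangulation on export.
    """
    return [(indices[0], indices[i], indices[i + 1])
            for i in range(1, len(indices) - 1)]

fan_triangulate([0, 1, 2, 3, 4])  # -> [(0, 1, 2), (0, 2, 3), (0, 3, 4)]
```

Connecting the hole's vertices to the outer boundary with the knife tool hands the triangulator small convex pieces instead of one big concave ngon.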
  3. In Blender 2.8 the operator panel has been moved to the top right corner of the export window and is shown by pressing the little cog icon. In there you should be able to find the export presets Kyrah is talking about; you need to select the SL static preset, hopefully ensuring a correct data ordering like Avastar does (that exporter is maintained by the Avastar devs).
  4. It's basically telling you that the collada .dae file was written incorrectly and an attribute can't be found. We need a bit more info on what you did from the beginning of your export procedure, starting from which 3D software you used down to the options you turned on/off in the exporter. Screenshots are useful.
  5. Only the first of the two options you list is meaningful to an environment such as SL. The second would really have to scale down the UVs to fit into the regular UV tile, and that also assumes the UDIMs were laid out accordingly (in a set of tiles that form a square). The UDIM workflow assumes that in the U direction you can "only" have 10 tiles (1001-1010) before numbering restarts on the next row above (1011-1020), with no obligation to fill a row before moving to a new vertical tile and virtually no vertical limit. While perfectly doable, such a setup defeats the purpose of UDIM as a whole, losing resolution and increasing the chance of wasted texture space. See the example below: suppose you laid out the UDIMs within the 9 tiles marked with the big square, and you then want a single texture; all of them need to be shrunk down to fit the single tile in the bottom left corner. Instead, after working on the textures so that each tile has its own, you can assign each tile to a different material, split the 9th into its own geometry object and give it another material (in this example there are 9 tiles; this step isn't needed if you worked your way down to 8), and on upload it all works as intended.
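The 10-tiles-per-row numbering described above can be written out explicitly; a small sketch converting a UDIM tile number to its zero-based (column, row) position and back:

```python
def udim_to_colrow(udim):
    """Map a UDIM tile number to a zero-based (column, row) pair.

    1001 -> (0, 0), 1010 -> (9, 0), 1011 -> (0, 1): the U direction
    holds only 10 tiles before numbering restarts on the row above.
    """
    index = udim - 1001
    return index % 10, index // 10

def colrow_to_udim(col, row):
    """Inverse mapping; columns past 9 don't exist in UDIM."""
    if not 0 <= col <= 9:
        raise ValueError("UDIM rows hold only 10 tiles (columns 0-9)")
    return 1001 + col + 10 * row

udim_to_colrow(1011)  # -> (0, 1): first tile of the second row
```

There is no upper bound on `row`, which matches the "virtually no vertical limit" point in the post.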
  6. @ChinRey You're missing the point of UDIM as a whole. While it is a feature that automates the assignment of different textures on a UV-space basis without using multiple materials, as a general workflow it can also ease the selection of an object's parts by UV clumping. In game production, pure UDIM is not supported yet; however, in a multi-material asset such as those we can have in SL, material separation becomes easier on objects that need multiple texturable faces, without overlapping UVs, helping procedures like map baking. Its primary intent is to provide room for more textures, and therefore higher resolutions; on the other hand it can also be used to avoid overlapping UVs when duplication or mirroring of geometry is needed, saving time during texture baking by avoiding cumbersome and time-consuming setups. An example here: this is the UV map for a character I've made for a game. Parts that could use mirroring without being detrimental to the general look have been moved to the next UDIM tile, in the same relative position. UV space doesn't end with the usual square one might be used to. From the square labels you can see two different nomenclatures: U1V1 followed by 1001, U2V1 followed by 1002, and so on. The first nomenclature is pure labeling based on the grid; the second is the UDIM label. With this particular setup, not only did I avoid wasting texture space on the main tile (1001), ensuring more pixels for the whole packing, it also helped with texture baking from the high-res model, which wasn't receiving interference from overlapping UVs of the mirrored parts, in one go, without the need to separate the parts, bake and combine them back later, merge the overlapping vertices, etc. One side automatically reads the texture from the main tile on the other side.
Moreover, if I wanted to get rid of the mirrored effect, I could make a variation of the texture and easily apply it as another material to the mirrored part without much fuss or mistakes, just selecting the relevant geometry through the UVs on tile 1002 and quickly assigning the material. This latter approach isn't really optimal, but by arranging the layout differently with another working method in mind (more texturable faces), you can see how much easier it is to manage a multi-material mesh by packing the relevant UVs on their own tile.
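The two nomenclatures mentioned above map to each other arithmetically; a quick sketch of the conversion between the 1-based U#V# grid labels and UDIM numbers:

```python
def label_to_udim(u, v):
    """Convert a 1-based U#V# grid label to its UDIM number.

    U1V1 -> 1001, U2V1 -> 1002, U1V2 -> 1011.
    """
    return 1000 + u + 10 * (v - 1)

def udim_to_label(udim):
    """Inverse: a UDIM number back to its U#V# grid label."""
    index = udim - 1001
    return "U%dV%d" % (index % 10 + 1, index // 10 + 1)

label_to_udim(2, 1)  # -> 1002, matching "U2V1 followed by 1002"
```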
  7. Aside from the VR hype, which didn't appeal to me personally, here's what I think. Next-gen content must come with tools that are up to par. They failed. The point @entity0x makes is right: retention is achievable only when content is replayable and engaging, but the given tools are laughable to say the least. During the closed beta (actually a pre-alpha by any other company's standards) I was there, and I gave feedback on the feature content needed most at the time: a proper material editor. Not only to switch textures after a model upload, but to actually composite a material. And I wasn't even advocating for a system as complex as UE's material editor or Unity's Shader Graph; something like a precompiled shader with four texture-channel-driven layers, independently tileable, would have helped avoid endless test uploads and the basic look everything has in there. Guess what? They announced a "material editor": a one-time window available on upload, to do what? Basic crap. Scripting: the choice of language might even have been OK, despite the quirks of a made-up language, with tons of libraries to use and plenty of documentation, but the functionality to make things interactive came too slowly and too late. Character controllers were left in a castrated state for too long, unable to run, sit or jump. Animation control is also lacking: on a platform where individuality is so badly wanted, there must be a way to swap the state machine's animations for different ones. All of the key points from user feedback were neglected in favor of side aspects nobody really wanted or needed, but LL's mentality showed itself there as it does in SL: least amount of work and effort, procrastinate as much as possible, then glue a patch over it with spit.
  8. I'd say that anyone who has or had any RL professional or even amateur experience in 3D asset creation would not call themselves a "mesher", but rather a "modeler" or "3D artist". The moment someone calls themselves a mesher, I instantly know they are self-taught and began in SL with no RL background in the field. So most likely these people don't really have the experience to carry a custom project through proficiently, ensuring a delivery date, a delivery at all, and optimization in geometry and texture use. The latter I understand, because of the size limit, materials rendering AND the customer trend that demands high res on everything cammed up close, even the smallest detail (which, BTW, needs to be modeled with separate geometry or it's not good enough), to ensure consistent sales rather than occasional transactions from those who are reasonable enough to appreciate the overall model and how it works in a scene or on an avatar. Why a RL 3D modeler wouldn't take work in SL when they happen to be users, like in my case (in my opinion, feel free to jump all over me, I don't care): 1. I do enough modeling already at work, with no creative freedom: I'm given a piece of concept art that I have to replicate one to one, with freedom of speech about what I think might not work for animation and ideas for possible solutions, a case that happens very seldom, as concept artists by now usually have a grasp of problematic accessory placements. Once in SL, if I feel like modeling something, it's for my own enjoyment. What I sell under a brand is what I use for myself, with adaptations where and if needed (like multiple bodies support). 2. Tedious workflow. To deliver something up to expectations, the workflow is just way longer and prone to mistakes that need rework. All the inquiries I've received, despite my profile stating that I don't do custom work, focused on having detailed (high res) baked textures. 3. Unreasonable requests.
I can't count how many times I've received requests for a custom body or head that included 1. compatibility with the standard SL avatar UV mapping, 2. higher-res textures than the commercially available ones, and 3. a "life-like" look as the main, non-negotiable delivery points, plus a slicing system finer than double the slices available on most other commercial bodies, with a HUD, and compatibility with brand X body clothing as semi-mandatory points, with the additional request of a scripter for the HUD if I couldn't do it. Aside from SL UV mapping compatibility, which is fine, this conflicts with point 2: the limit is standard, and to go beyond that resolution limit, compatibility needs to be broken with a custom UV that takes more texturable faces. But the third point is what really makes me angry. Life-like to me means "as seen in high-end AAA game cinematics", and that is not conceivably achievable in SL (as I stated in the unpopular opinion thread, your high res, realistic avatar looks cartoonish regardless of how much effort is put into achieving otherwise). Edit to add: I forgot to mention the slicing, which is absurd in its concept already, let alone the adaptive slicing shape that should run along main clothing lines, requiring a lot of work to get right in the first place; the requests were also about finer control (thinner slices to alpha out millimeters of skin), making the overall polygon count unreasonable, with dirty topology and a crazy amount of separate objects. Speak of optimization, then...
  9. I do, and in both engines things are way easier to make look good and work well. But I am also tech savvy, used to nodes and scripting in general, so you could say I'm somewhat biased toward these methods.
  10. I've just read the article. If the project shuts down, it doesn't mean the platform does as well instantly. As noted, already-scheduled events are keeping them from doing so, but eventually they will. How quickly that happens will determine the size of the final financial loss, and I hope it happens as quickly as possible. Hopefully, without that money-sink failure, LL can focus more on improving SL with new meaningful projects, without milking the economy for nothing as they've been doing while the Sansar project was up.
  11. I've been pushing it up in my case, because it was never high res enough even if I pack UVs extremely close, down to almost zero pixel waste. Nope, you've been in the business for longer than me. It's years for me now, but before I started, I was only teaching.
  12. What you describe is something done in film production, and it's animation/shader driven; in the general games pipeline it isn't even remotely contemplated. There are basically two specific blending modes for normal maps, detail and inverse detail, plus no blending and overlay. For the use BoM makes of them, the same approach you describe for specular maps works just fine in most cases; beyond that, an overlay method would cover most of the remaining minor cases. It's not an obscure or secret blending mode, and it can be tied to a checkbox. Seeking the extreme, assigning such a property by the "flexibility" of a material (and maybe changing it in real time as one moves), is simply unrealistic. Simply put, it's something AAA games don't do because it isn't feasible in a realtime environment, at least not yet. Alpha-cutting the normal map with no blending does the job in most cases, and the remaining cases work fine by adding the two normals together with an overlay blend mode, so that an upper, more flexible material picks up the underlying material's detail. Specularity would make the difference.
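As an illustration of the "add the two normals together" idea: one common detail-blend for tangent-space normal maps (the "UDN" style; an assumption for illustration, not necessarily the exact mode a viewer would implement) adds the detail map's x/y perturbation to the base normal and renormalizes:

```python
import math

def udn_blend(base, detail):
    """UDN-style detail blend of two tangent-space normals.

    `base` and `detail` are unit vectors as (x, y, z) tuples with z
    pointing out of the surface. The detail map only contributes its
    x/y perturbation; the result is renormalized to unit length.
    """
    x = base[0] + detail[0]
    y = base[1] + detail[1]
    z = base[2]
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A flat detail normal (0, 0, 1) leaves the base normal unchanged:
udn_blend((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # -> (0.0, 0.0, 1.0)
```

This is cheap enough to run per-pixel in a realtime shader, which is why a simple fixed blend (togglable with a checkbox, as the post suggests) is practical where per-material "flexibility" logic is not.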
  13. That's what I do, but if the strings are just "painted on", people complain; they want the 3D eyelet for those strings, which have to be a long continuous tube, and I've also read and seen people compliment someone for having modeled the stitching too, calling that "highest quality". And no, that's not close enough: I've got screenshots of closeups on details much closer to the surface, at the limit of the clipping plane, to point out one pixel that ruined the seamless look from a UV shell to its neighbor. To my question "can you notice it at normal distance?" the answer was "no, but it's not lifelike and it ruins the immersion when I zoom up close"... Again, I do agree with you and with the article posted above. I'm just pointing out what is perceived as "quality" in Second Life, from what I've gathered over my years of creation and customers. The items that sell 75-80% of the time after a demo is delivered are those where I deliberately modeled useless geometry that a texture would have sufficed for. Where I went for a texture, demos turn into an actual sale 5-10% of the time.
  14. There's a reason for this. No materials support, no use. Arguable, to say the least.