
OptimoMaximo

Resident
  • Posts

    1,809
  • Joined

  • Last visited

  • Days Won

    3

Posts posted by OptimoMaximo

  1. It's written on the physical copy: all rights reserved. Even if the game in question was downloaded to run on an emulator, this statement is most likely found in many other files in the game. The fact that a person might have 1) bought the game in the past or 2) acquired it as shareware, since it's a quite old game, doesn't imply any legal right over the component contents of that acquisition. Trying to find a way to convert a proprietary format into an object to upload to SL doesn't just show complete ignorance of the rules that govern IP protection and the SL ToS; it highlights a general mindset in which the work put into making a model is of no value, nor worthy of any intellectual property respect, not to mention money... except what they can get by selling other people's work.

    • Like 4
  2. 13 hours ago, Alwin Alcott said:
    13 hours ago, jimmythepony said:

    ok ok sorry i dont know how the risk was

    yes you do , otherwise you wouldn't pass the upload test.

    This guy never stated he did; he probably would have, AFTER finding out how to convert the file he's talking about... quickly answering all the questions correctly while ignoring the fact that those questions were pointing at the very illegal action he was about to take, as in "ah well, it doesn't apply to me/this case because it's me".

    • Like 2
  3. The best way to preserve your high-res texture is to work with multiple UVSets, where the first is a high-res image with all UVs laid out within the UV space, while the second uses the same UVs split by material. This way you can work at high resolution on a single image using one UVSet, then bake onto the second UVSet's UVs. The two sets of UVs stay separate and unrelated to each other in regard to the editing you do, so you could, for example, also have your second UVSet done as 4 lightmaps. The baking will do the map conversion for you.

    Here's an example: a cylinder, UVs covering from 0 to 1, 1 UV tile

    [image: Screenshot_1.png]

    This belongs to the default UVSet; now I will copy this UVSet into a new one and make some adjustments

    [image: Screenshot_3.png]

    [image: Screenshot_2.png]

    When baking, you can designate any texture mapped onto the first UVSet to bake its shaded results onto any UVSet you want. Once you've got it all done, you can delete the first UVSet and all of your transferred/remapped textures work exactly as they did on the original. Cutting more shells or rearranging the UVs entirely will NOT affect the transfer, and seams will be preserved. So, say you had a 2K texture for the first UVSet: you can reasonably let the software do its job on multiple images in order to retain the number of pixels that each UV shell had in the original texture.

    [image: Screenshot_4.png]

    [image: Screenshot_5.png]

    In this case, since the original UVs on the cylinder were badly made, the texel density is now higher on the second UVSet. If I wanted 100% texel density preservation, I would have sampled it at the 2048 resolution, then set all shells in the second UVSet to that same value, sampling down to a 1024 texture size. Cutting and slicing would most likely be needed to split exceedingly large shells across multiple UV ranges (which I like to call "tiles" for simplicity's sake). After the image baking is done, I later use the UVs in these tiles to assign materials, so SL splits the texturable faces on the imported model. Nothing prevents you from using tileable textures as the bake input; your second UVSet will output a matching-resolution baked image.
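    As a rough sketch of the texel density math at play (plain Python arithmetic; the function names and the power-of-two rounding are my own illustration, not Maya's actual sampler):

```python
import math

def texel_density(texture_px, uv_area, surface_area):
    """Texels per linear world-space unit: pixels covering the
    shell divided by the 3D surface area it maps to."""
    shell_pixels = (texture_px ** 2) * uv_area  # pixels the shell occupies
    return math.sqrt(shell_pixels) / math.sqrt(surface_area)

def texture_size_for_density(target_density, uv_area, surface_area):
    """Smallest power-of-two texture that preserves target_density
    for a shell occupying uv_area of the 0..1 UV square."""
    needed_px = target_density * math.sqrt(surface_area) / math.sqrt(uv_area)
    size = 1
    while size < needed_px:
        size *= 2
    return size

# A shell covering 25% of a 2048 texture, mapped onto 4 units^2 of surface:
d = texel_density(2048, 0.25, 4.0)            # -> 512.0 texels per unit
# After re-layout the same shell fills the whole UV square,
# so a 1024 texture keeps the same density:
print(texture_size_for_density(d, 1.0, 4.0))  # -> 1024
```

    This is why the re-laid-out second UVSet can often drop to a smaller texture without losing detail: the shells simply use the UV space more efficiently.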

    • Like 2
    • Thanks 2
  4. I forgot to mention another thing, regarding the standardization of weight maps. This can definitely be done using, again, the UVs. Most software can export weight maps in the form of images (Maya does it in greyscale) or in other formats (Maya has a plug-in called DoraSkinWeights which stores the weights as text for a specific mesh, keyed by vertex order, UVs or XYZ world position). With the exception of text-based weight exports like DoraSkinWeights, these means aren't very reliable or precise, but they might be an option.

     

    Edit: that's what I proposed at the beginning of the Bento Project in the creators meeting, by the way... a revamp of the classic avatar with everything improved, from the foul UVs to the weights. "An Avatar 2.0 is out of scope for this project" was the answer...
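    The text-based idea can be sketched in a few lines of Python (a toy format of my own for illustration, not DoraSkinWeights' actual file layout; joint names are made up):

```python
import json

def export_weights_by_uv(uvs, weights):
    """Store per-vertex joint weights keyed by UV coordinate, so the
    mapping survives vertex-order changes: the core idea behind
    text-based exporters like DoraSkinWeights."""
    return json.dumps([
        {"uv": [round(u, 6), round(v, 6)], "weights": w}
        for (u, v), w in zip(uvs, weights)
    ])

def import_weights_by_uv(text):
    """Rebuild a lookup table from UV coordinate to joint weights."""
    return {tuple(e["uv"]): e["weights"] for e in json.loads(text)}

uvs = [(0.1, 0.2), (0.5, 0.9)]
weights = [{"mPelvis": 1.0}, {"mTorso": 0.6, "mChest": 0.4}]
table = import_weights_by_uv(export_weights_by_uv(uvs, weights))
print(table[(0.1, 0.2)])  # -> {'mPelvis': 1.0}
```

    Because the key is the UV coordinate rather than the vertex index, any mesh sharing the same UV layout can read the table back, regardless of its topology.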

  5. 11 hours ago, ChinRey said:

    Does that mean that if somebody came up with and published a standardized weight scheme and it became reasonably common, new and smaller mesh body creators could use it and be sure there would be some mesh clothing that fit their bodies? Or is it more complicated than that? I notice that at least some of the brands with multiple mesh bodies on the market use different weights for them, and I assume they have a reason.

    If a mesh has the same UVs, weights can be transferred using UV coordinates as well as in world space. The problem arises when the base body shape is completely different (imagine the difference between the variations of Belleza's bodies) and the weight map needs to adjust accordingly. Making clothing fit is easy enough: make a copy and lattice-deform it to the new shape; a wise use of weight copy tools does the rest. Typically, when a piece of clothing I've worked on is complete with all textures, I start the rigging process: 15 to 30 minutes to fit and rig one body type. In a matter of 2-3 hours, all six bodies I support are complete: both Slink bodies, the three Belleza bodies (one variation each, though) and Maitreya. Recently I also added Altamura, a pretty nice and apparently quite lightweight mesh body. All of these have nice weighting, and the devkits are very accurate to the inworld body, in both shape and weights. What I find unworkable is Aesthetic, because both the mesh and the skin weights are WAY off from the inworld mesh body.
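    A minimal sketch of UV-based weight transfer (nearest-sample only; real weight copy tools interpolate and smooth across several samples, so this just shows the core lookup):

```python
def transfer_weights_by_uv(source, target_uvs):
    """Copy joint weights from source vertices to target vertices by
    nearest UV coordinate. source is a list of ((u, v), weights) pairs."""
    def nearest(uv):
        # squared UV-space distance is enough for picking the closest sample
        return min(source, key=lambda s: (s[0][0] - uv[0]) ** 2
                                       + (s[0][1] - uv[1]) ** 2)
    return [nearest(uv)[1] for uv in target_uvs]

# Hypothetical source samples using SL joint names:
source = [((0.0, 0.0), {"mPelvis": 1.0}),
          ((1.0, 1.0), {"mChest": 1.0})]
print(transfer_weights_by_uv(source, [(0.1, 0.2), (0.8, 0.9)]))
# -> [{'mPelvis': 1.0}, {'mChest': 1.0}]
```

    As long as two meshes share the same UV layout, this works no matter how different their topology or world-space shape is, which is exactly why the shared-UV case is the easy one.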

  6. 8 hours ago, ChinRey said:

    Is it possible to make distinctively different mesh bodies using the same weights, btw? I suppose the answer is no but I'm not sure.

    Yes, it is possible. Some software's weight copy tools actually work 100% accurately between meshes of different topology, given the right settings.

  7. Assuming that your character is facing the correct orientation (+X) when you're trying this *without* Avastar, and that all weights are correct and haven't leaked onto the shoulder area, I think you might be able to solve this issue by applying the new rest pose. It would certainly be easier if you showed a picture of the deformation you report, though. 

    After editing the bone positions, you should apply the pose as rest pose before exporting: in Pose Mode, select all bones and hit Ctrl+A, Apply Pose as Rest Pose. If you did the editing in Edit Mode this shouldn't be necessary, as it is if you did the editing in Pose Mode.

    In Avastar, if you edited the green bones, either in Edit or Pose Mode, it's a good idea to use the bone snap tools available in the left side panel (toggle open/close with "T"): Snap base bones to animation bones, or a similar name.

  8. Aside from the fact that this post belongs in the commerce and job offers section, I doubt you've found anyone for the first point. First, because a programmer writes code; making models and textures is not their job. Second, the proposed compensation relative to the scale of the projects described is ridiculous. For custom, to-scale, commercial work you can't reason on a Linden-based pay scale. Do the conversion into real currency: who would build a custom villa for a total of 8 to 20 real dollars (the 2000/5000 Lindens), while a multi-storey condo, presumably with specific requirements, pays between 40 and 160 dollars? And subject to your satisfaction, on top of that. I can already picture all the deliveries being good, BUT not good enough to warrant paying the maximum amount. I say this because I'm sure every designer who read your ad had this same thought. But the best part is the maintenance hire: 300 Lindens a month, basically pocket change, to be on call for you, with a 3-day deadline or you fire them. Rather, would you explain to me what a maintainer is supposed to do? Nothing breaks in SL the way things break in RL. Or maybe we should call it your private designer on retainer for less than the cost of one espresso per month. Then I read the other listed jobs and they're better paid than the actual work in question? You'd better get your ideas in order...

    • Like 1
  9. 1 minute ago, ChinRey said:
    6 minutes ago, OptimoMaximo said:

    oh no wait, the committee says they want Python as the scripting language. And C#, just to be no less than Sansar

    But Ruby is the programming language with the prettiest name!

    Hm, here the roleplayers' representatives in the committee say they want Rust too, because it sounds rough and tough enough

    • Like 2
  10. 31 minutes ago, Rolig Loon said:

    As usual, students are saved from the nastiest questions by the fact that the faculty have to answer them first, before they can grade the exam.  B|

    In the meantime you might want to develop an LSL script to change the up axis server-side in one click from a HUD... oh no wait, the committee says they want Python as the scripting language. And C#, just to be no less than Sansar

    • Like 1
  11. 4 minutes ago, ChinRey said:

    Interesting. The uploader is supposed to handle those issues.

    It does, but when that happened to me with my Maya scenes, freezing transforms solved the issue, as well as fixing the scene scale. Maya defaults to centimeters everywhere, and internally keeps thinking in centimeters even if you set your scene to meters. In the past, with previous Maya versions, I was forced to keep the scene in centimeters to keep the scale right, because switching to meters always gave me a centimeter-scale object, very very small: at export it defaulted the scale to cm, writing the meter-scene unit values down as they were instead of recomputing them for a meters-to-cm conversion. Now I work in centimeters by habit; if I ever need a different scale I just change the grid settings, but for me a 3 meter length will always display as 300 units (cm). I haven't tested the scene scale again since then (2011). But working in cm has its advantages, at least for me.
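    The failure mode I'm describing boils down to a missing unit conversion on export. A minimal sketch (generic Python, not Maya's exporter; the function and table are my own illustration):

```python
# Conversion factors from scene unit to meters.
UNIT_TO_METERS = {"centimeter": 0.01, "meter": 1.0}

def export_positions(positions, scene_unit, target_unit="meter"):
    """Rescale positions on export. The bug described above amounts to
    skipping this step: values authored in cm get written out as if
    they were already meters, so the model comes in 100x too small."""
    factor = UNIT_TO_METERS[scene_unit] / UNIT_TO_METERS[target_unit]
    return [tuple(c * factor for c in p) for p in positions]

# A 300-unit length in a centimeter scene should come out as 3 m:
print(export_positions([(300.0, 0.0, 0.0)], "centimeter"))
```

    Working in cm and letting the exporter do this division is exactly why my 300-unit objects still land at the right 3 m size in SL.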

  12. 1 minute ago, ChinRey said:

    That was the group assignment. Something like that needs to be designed by a committee to make sure all bases are covered and all points of view are respected

    That's right! I didn't think of it; of course we need a committee. It's unheard of that the option to switch the world axis to Y-up instead of Z isn't there for me! And I want it server-side.

    • Like 1
  13. 1 minute ago, ChinRey said:

    I thought we were going to ask them to write a brand new and better cross platform viewer (with a dae optimizing precompiler) in machine code? Or was that next week's homework?

    Oh, I thought we were going to ask them to develop and distribute an SL-optimized Collada exporter for every single 3D program, ready within the next two hours, including all possible software like Wings3D and ArtOfIllusion of course, plus a new viewer written directly in binary code so it would lag less client-side, also to be released in the next two hours. But I took too long to finish writing this post, so they've got 1 hour and 50 minutes left from now

    • Like 1
  14. Hi Kate and welcome =)

    To me, your issue could be due to the scene scale in Max; it should be set to meters by default, so check whether that's the case. If all is good, you could double-check whether your objects have a non-uniform xform: translate and rotate values should all be zero and scale should be 1 for all objects. I just checked, and it has the same name as it does in Maya: Freeze Transformations https://forums.autodesk.com/t5/3ds-max-forum/freeze-transforma-tions/td-p/5475442

    1 hour ago, ChinRey said:

    Check for loose vertices. The uploader will automatically scale all LoD models and the physics model to the same overall size as the main one. But when it calculates the size of each model, it takes everything in the dae file into account, including any isolated vertices that may have accidentally ended up far away from the actual model.

    That would make the bounding box bigger; from the picture, instead, I gather that her physics models are being shifted away and scaled smaller than intended. Moreover, both Maya and 3DS Max don't allow floating vertices (unconnected to a shape) the way Blender does, so that couldn't be the case anyway. The minimum for some geometry to be unconnected to a shape is a triangle.
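    For illustration, the mechanism in the quote, where one stray vertex inflates the computed size, can be sketched like this (toy code, not the uploader's actual implementation):

```python
def bounding_box_size(vertices):
    """Axis-aligned bounding box extents, computed over *every*
    vertex in the file, the way the quoted uploader behaviour would."""
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    return tuple(maxs[i] - mins[i] for i in range(3))

cube = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
print(bounding_box_size(cube))                 # -> (1, 1, 1)
# One stray vertex 50 units out inflates one axis 50x:
print(bounding_box_size(cube + [(50, 0, 0)]))  # -> (50, 1, 1)
```

    Since the uploader rescales every LoD and physics model to the main model's overall size, a blown-up bounding box on any one of them throws the whole fit off.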

    • Like 1
  15. 54 minutes ago, ChinRey said:
    6 hours ago, OptimoMaximo said:

    I should mention that most of my builds are organic/rounded models. Structures made of sharp-edged cubes aren't a useful example, as those are quite straightforward to make well-working LoDs for.

    Maybe but even then you have to take into account the lack of precise control GLOD gives you.

    Here I meant that making my own LoD models works fine when it's for non-rounded shapes.

     

    59 minutes ago, ChinRey said:

    On a side note, I haven't really tested or checked it and I may well be wrong, but I have the impression that the poly count is far more significant to the DL than the vertex count is.

    The two things are tied together, in my opinion: vertices represent the actual data, while the surfaces (faces) go to render cost. For download, it seems to me more relevant to count the vertices, since those are the ones carrying position values in 3D and UV space.
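    My reasoning can be put into rough numbers (back-of-the-envelope only; the per-element byte counts are my own assumptions, not SL's actual asset layout, which quantizes and compresses the data):

```python
def vertex_bytes(n_vertices, floats_per_vertex=8, bytes_per_float=4):
    """Rough uncompressed vertex payload: position (3 floats),
    normal (3 floats) and UV (2 floats) per vertex."""
    return n_vertices * floats_per_vertex * bytes_per_float

def index_bytes(n_triangles, bytes_per_index=2):
    """Triangle-list payload: three indices per face."""
    return n_triangles * 3 * bytes_per_index

# A closed mesh has roughly twice as many triangles as vertices,
# yet the vertices still dominate the raw payload:
print(vertex_bytes(1000))  # -> 32000
print(index_bytes(2000))   # -> 12000
```

    Under these assumptions, the per-vertex attributes outweigh the face indices even when the triangle count is double the vertex count, which is why I'd expect vertices to matter more for download weight.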

     

    1 hour ago, ChinRey said:

    That is very odd indeed. I don't know about Maya but with Blender it is nearly always better to export with as little triangulation as possible and leave the rest to the uploader.

    In some cases it's advisable to triangulate the model yourself in Maya before the export: sometimes a few triangles on the High LoD go missing, and triangulating with Maya's own Triangulate, NOT the FBX export triangulation (which works unreliably in comparison), fixes that. Otherwise, exporting with quads and as few triangles as possible works in Maya as well. But this makes no difference in my case. 

     

    At this point I guess the only thing that may lead to a more optimal upload through Collada might be to find a specific parsing order that results in better gzip compression. As I pointed out in an earlier post, there is some flexibility in the scene unit section, where data is fed like string = float in the MeshStudio (lighter DL) Collada, as opposed to Blender's version, where it's like unit = meter scale = 1, and this might make a difference when converted to binary and then compressed. If it happens for one attribute, it may well happen in others.
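    The underlying gzip intuition is easy to verify in isolation (a generic demo on text, not actual Collada or mesh data):

```python
import gzip
import random

random.seed(0)  # deterministic "unstructured" data

# Roughly the same byte budget, written two ways: one highly
# repetitive, one with no repeating byte patterns at all.
repeated = b"0.000000 1.000000 0.000000 " * 500
varied = " ".join(f"{random.random():.6f}" for _ in range(1500)).encode()

# gzip thrives on repetition, so the repetitive stream shrinks
# to a tiny fraction while the random one barely compresses:
print(len(repeated), len(gzip.compress(repeated, 9)))
print(len(varied), len(gzip.compress(varied, 9)))
```

    So if a different attribute ordering makes the serialized mesh data more self-similar, smaller compressed sizes, and therefore a lighter DL, are at least plausible.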

  16. I just found the time to read the docs I linked yesterday, which Beq has already summarized. The only problem I see here is the generation of data that is currently handled at upload, like the creator name and the rendering cost assignment; one is part of the header and the other is the last block in the file. So there would have to be an uploader that handles these chunks of data, while the user feeds it the rest from their software.

    6 hours ago, ChinRey said:
      5 hours ago, Beq Janus said:

    When the material meshes are made they are individually compressed using an equivalent of "gzip -9". Zip compression works on repeated patterns and some data is more compressible than others. in some objects it is plausible that a mesh rotated by 90 degrees is more easily compressed.

    So basically, you're saying that the more repeatable patterns we can create, the easier the data compresses, and therefore the lighter the weight? For example, if I managed to have all my materials contain the same number of vertices, so that all submeshes are equally sized, would that be the case?

     

    6 hours ago, ChinRey said:
    16 hours ago, OptimoMaximo said:

    For what matters, .anim format is also proprietary to LL, no other company uses this specific type of .anim.

    Oh, I thought it was the same as the anim format used elsewhere.

    I wish it were! The original .anim format back in the day was the internal Maya animation exchange format, but the specs were different; first off, it's not binary. Unity also uses .anim for the animations created and saved within the editor itself; that one is a binary format, but of course the encoding is totally proprietary, made to work in Unity with the specific content you created it from/for. File extensions really can be as arbitrary as heck.

    6 hours ago, ChinRey said:
    7 hours ago, Beq Janus said:

    The size differences can come from a number of sources, most typically they arise from people using generated LOD models. The GLOD library that is used to produce the simplified meshes has some random seed in it (for reasons I have never understood, nor really investigated) and as a result, the generated LOD models can vary a bit with each upload.

    LoD is not relevant to my examples here since they were all done with full LoD, that is with all levels set to "Use LoD above".

    @ChinRey LoDs always keep some relevance. Guess what LI and DL you'd have gotten with proper LoDs. Sure, they wouldn't be as visually stable as they are in your example, I know.

    @Beq Janus why is it that when I feed in my own LoDs, regardless of the methods I've tried so far, the resulting LI and DL are always higher than with generated LoDs? Even if slight, the difference always favors the generated LoDs. Considering that the uploader doesn't really care about retaining UV/mesh material borders, when I make my LoDs I keep them as intact as I possibly can, to avoid holes in the mesh or UVs ending up outside of a UV shell. I tried a few methods: keeping a quad-based mesh, both manually and with reduction tools, and with free triangulation from the reduction tool. 

    So far, oddly enough, the one that gave me the best results was triangulation with NO symmetry, against: 

    triangulation WITH symmetry

    keeping quads both with and without symmetry preservation

    Hence my previous question; but it's still seen by the uploader as heavier than a generated LoD with a higher vertex count. In my last test I managed to get my LoDs to around 10-15% fewer vertices than those autogenerated in the uploader, and the final LI and DL weights were still higher. I should mention that most of my builds are organic/rounded models. Structures made of sharp-edged cubes aren't a useful example, as those are quite straightforward to make well-working LoDs for. 

    I also ran a sort of benchmark/test to see how these models' LoDs behaved in other game engines; I tried both Unity and UnrealEngine4, of which the latter is the pickier about geometry. Both engines accepted my models and their custom LoDs no problem, showing a drawing resource reduction of around 120% at each LoD during runtime profiling (profiled in an empty scene, running on the model alone). Unity is more forgiving, but UnrealEngine also didn't complain: no warnings were thrown about inconsistent geometry materials/UVs or vertex orders/normals. The Skyrim mod tools didn't complain about them either; they passed through the NIF tools consistently and I could inject my models into the game no problem. I don't understand why my lower-poly-than-generated LoDs result in a higher DL and LI than those crappy LoDs the uploader makes. It's not a BIG difference, like 1 or 2 LI, but it still puzzles me why this happens. There must be some condition to be met for better "compatibility" of my LoDs with what the uploader would generate and expect, in order to get an optimal LI.

  17. 1 minute ago, ChinRey said:

    I'm not absolutely sure but I believe it's a proprietary file format because it has too many quirks to fit any open format I know of.

    For what it's worth, the .anim format is also proprietary to LL; no other company uses this specific type of .anim. It is a hardcoded summary of a text file like BVH, which wants the data read in a specific order. It's just a matter of taking the data in the wanted order and writing it down with the values arranged as per LL's binary compression specs (LLSD, if I'm not mistaken). Right now the viewer repository online https://bitbucket.org/lindenlab/viewer-release/src/8579cefad3049e139efaa1b40a94f0357fcd0274/indra/ appears to be unavailable at the time of this writing; however, they have docs about the mesh format out there http://wiki.secondlife.com/wiki/Mesh/Mesh_Asset_Format What is missing is an uploader that accepts outsourced content in that format
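    "Taking the data in the wanted order and writing it down" is essentially fixed-order binary packing. A heavily hedged sketch (the field names, order and float encoding here are my own illustration; the real .anim spec quantizes rotations and positions to unsigned 16-bit integers, so don't take this as the actual layout):

```python
import struct

def pack_joint_frame(time, rot_xyz, pos_xyz):
    """Pack one hypothetical keyframe as seven little-endian
    32-bit floats in a fixed field order: time, rotation, position."""
    return struct.pack("<7f", time, *rot_xyz, *pos_xyz)

frame = pack_joint_frame(0.5, (0.0, 0.0, 1.0), (0.0, 0.0, 0.1))
print(len(frame))  # -> 28 bytes: seven 4-byte floats
# Reading it back only works because both sides agree on the order:
time, *rest = struct.unpack("<7f", frame)
```

    The point is that a binary format like this is trivial to emit once the field order and encodings are documented, which is why an exporter for it is very doable.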

  18. 1 hour ago, ChinRey said:

    Yes, that would be the ideal solution. I doubt it's going to happen though.

    Why not? I mean, before the ability to upload .anim files, only BVH could be used. Back then there was no interest; growing interest might change that. And it would be much easier (for me at least) to write an exporter for that than to manage the Collada text format. Plus, they would ensure that content is uploaded properly, as clean and performant as it possibly can be
