Everything posted by OptimoMaximo

  1. Imagine how outraged I was at the time: enough to make me switch to Blender for my SL activities.
  2. I have had instances with lots of trouble: really LONG export times (my personal hall of records stands at 22 minutes for a single export, and it wasn't even a rigged item), ridiculous LI even after hammering the mesh LoDs to death, parsing errors, and LoDs not generating in the uploader. That is my experience with FBX_DAE (how the plug-in reads among the options for Collada export, because it's part of the FBX plug-in). But this was long ago, with Maya 2011, which was supposedly the version of Maya LL used to develop mesh at the time; I don't recall where I read that.
     I later found out about exporting to FBX and converting it to Collada with fbxConverter to get a perfectly functional file with no mistakes. That changed the whole experience of importing the same objects that previously had an unjustifiably high LI. I never tried the native Collada exporter again; by now the conversion is part of my workflow, and it's easy to use: just drag and drop multiple files into the input window, choose Collada and click export. Flawless WYSIWYG Maya -> SL results in far less time. Rigged meshes still take a little longer, and with a high vertex count the export time increases dramatically, but it's still better than waiting overnight for the native Maya Collada export, in my opinion, especially considering the uncertainty of a good import from my previous experience.
  3. What I'm thinking is that LL might want to do what they did with the animation formats: let us upload a mesh in SL's native binary format directly from our software. That would be the really ONLY way to ensure an output that is the slimmest possible version of the required data.
  4. I'm sorry to add to the confusion, I need to phrase this better. I meant: the fact that I'm using Maya doesn't imply that I use fCollada, precisely because that's Maya's default. OpenCollada is based on MayaCollada with slight differences, AFAIK. To avoid using that, I export to FBX and then convert with FBX Converter.
     When importing a mesh with certain properties into Maya through Collada, those properties get baked into TRS (transforms), and as a result the export procedure outputs the same file structure it got as input, regardless of Maya's native structure, as long as the file format doesn't change (and switching to FBX and back to Collada apparently isn't a file format change as far as these data structures are concerned). I witnessed this behavior while testing the mesh "layers" bug, trying to reproduce it from Maya: a Collada file carrying the working bug, imported and re-exported, worked as intended; making it anew in Maya didn't, until I found the node combination and structure that caused said bug to be exported as a correctly working bug from Maya.
  5. I know it doesn't affect anything, I was just thinking how messily their assets were created. The faulty axis definition is what triggers the rotation conversion internally, however: it's a reference to the input coordinate system, so the uploader knows what to convert in the matrix, and how, to match the expected orientation. When I make a static mesh with Y up, it uploads just fine and rezzes just fine, but when I select it, it shows a rotation of 90 degrees on the X axis.
     Another thing, which in terms of syntax shouldn't matter, but in terms of binary data writing makes the file slightly bigger:
     <unit name="meter" meter="1"/> (this is Blender)
     <unit meter="1.000000"/> (this is MeshStudio)
     Same type of data in a flexible parsing pattern that respects a syntax. In terms of code they're both accepted, of course, but to encode strings in binary you've got to get a representation like this (Python 2.7 code):
     (bytes(bytearray(joint, 'utf8')), 0)  # where the trailing zero means LL encodes strings with a null character at the end
     In the C structs each piece of data gets a slot, and there I can see two strings to encode instead of one. Spread this behavior across the whole file and it ends up with unnecessary byte slots occupied that could be summarized in fewer slots. Here you can see the byte equivalencies between regular scripting data types and C struct data types: https://docs.python.org/2/library/struct.html Note that by default every 3D software internally encodes floats as type double (8 bytes) as opposed to the simpler float (4 bytes); the size of integers can also vary between 2 and 4 bytes. Hence I suppose that:
     - introducing a specific order in which this data is fed, and
     - introducing a better representation of these values as strings in the Collada text (for both strings and values; you said Blender uses integers where MeshStudio uses floats), so the uploader parses the values as it expects and avoids conversions or silent equivalencies
     should reduce the amount of compatibility work spent on parsing orders and syntax equivalencies.
     I see what you meant about flexible syntax, but that's not real flexibility: delete a < or > and the file doesn't validate anymore (see the Collada validation docs). Indentation doesn't count, which is what makes it a syntax-flexible file type; make a wrong indentation in Python and it won't work. The raw text you input, though, MUST follow a rigid syntax. Again, remove quotes or punctuation and the file goes in the trash.
     About the Collada version you were talking about earlier: I'm on Maya, but I'm not using fCollada. I don't use the Collada exporter there because it takes ages to export (at least rigged stuff does), the LI and DL are higher in my case, and it doesn't always work for Second Life. What I do is export to FBX version 7.1 (2011), the same version LL used when they implemented mesh, then convert it with another piece of software called FBX Converter (from Autodesk, free), which ensures the official Collada conversion. Otherwise I could install the OpenCollada plug-in, but then every time I upgrade to a new Maya version I'd have to wait for the Khronos Group to update the plug-in too. With the conversion method from FBX 7.1 I always get the lowest LI and DL, tested across different FBX versions converted to Collada from the same model.
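     To make the byte sizes concrete, here's a minimal Python sketch of what I mean (illustrative only, this is not LL's actual code; "mPelvis" is just a sample joint name):

        import struct

        # A float written as C type "f" takes 4 bytes, as "d" (double) 8 bytes:
        print(len(struct.pack('<f', 1.0)))  # 4
        print(len(struct.pack('<d', 1.0)))  # 8

        # Integers can likewise take 2 or 4 bytes ("H" = U16, "I" = U32):
        print(len(struct.pack('<H', 42)))   # 2
        print(len(struct.pack('<I', 42)))   # 4

        # A null-terminated string, the way I read LL's string encoding:
        joint = 'mPelvis'
        data = bytes(bytearray(joint, 'utf8')) + struct.pack('B', 0)
        print(len(data))                    # len(joint) + 1 byte for the trailing zero

     Every extra attribute name is one more of those null-terminated strings to store, which is my point about the two <unit/> spellings above.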
  6. One thing I notice as a main difference is the up axis definition: in the Blender version it's set to Z, in the mesh generator's version it's set to Y. It shouldn't make a difference; perhaps your linkset comes in sideways or rotated, I don't know. I wouldn't be surprised if LL created the prims using the Y axis as up, considering the already well known scene orientation and scale discrepancies between the rigging skeleton and the animation skeleton.
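     For reference, the up axis lives in the asset block of the Collada file; this is Blender's version, while the mesh generator writes Y_UP instead:

        <asset>
          ...
          <up_axis>Z_UP</up_axis>
        </asset>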
  7. It has some correlation and affinity with compression ratios, because each data type needs to be encoded as a C struct data type (U16, F32 and the like) so that a number can be written as a value represented with 2 or 4 bytes, depending on the type. This is done primarily for data compression, so you're right on that side.
     The missing link, to me, is how LL managed to have a binary file (with fixed packet sizes and a fixed order to maintain) that still allows a random parsing order. When dumping data to a binary file while respecting a given parsing order, data needs to be detected/read and dumped immediately as it is collected; it is basically not collected, reorganized as needed and then written out. Hence my idea (just speculation, again) of those "partitions", as I have temporarily named them while awaiting a better definition. The format was designed with a particular set of data entries in mind, in a specific order the devs were given, so they based the internal mesh format on that standard. Then, allowing the same data block to be created a second/third/Nth time to permit different parsing orders leaves entire chunks of the file full of zeroes. Which still counts as data as far as file size and load go: the file needs to be fully parsed for the viewer to render it. Reading a lot of zeroes you could have avoided does add to the file size (download) and to the computation time spent reading the file. It's fractions of a millisecond, maybe, but that's the time scale a binary file is meant to work at.
     For BVH animations at least, that's how it works. I'm not sure the mesh uploader does the same, but both are text files and I'm assuming the file reading happens similarly. I will dig into the viewer code, as I did for the animation exporter, when I have some more spare time.
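     A rough Python sketch of the two dumping strategies I mean (pure illustration, nothing to do with LL's actual code; the chunk tags are made up):

        import struct

        # Strategy 1: dump each value the moment it is parsed, so the output
        # order simply mirrors whatever order the input arrived in.
        def dump_streaming(out, parsed_chunks):
            for tag, value in parsed_chunks:          # e.g. ('POS', 1.0), ('NRM', 0.5)
                out.write(struct.pack('<f', value))

        # Strategy 2: collect everything first, reorder to one fixed layout,
        # then write; the output order no longer depends on the input order.
        def dump_reordered(out, parsed_chunks, layout=('POS', 'NRM', 'UV')):
            collected = dict(parsed_chunks)
            for tag in layout:
                out.write(struct.pack('<f', collected[tag]))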
  8. Not exactly, actually... OK, the improvements to the base mesh confirm they have it on their servers, where they expanded the base library.
  9. A big file size is almost always the cause of a higher load. What follows is just speculation on my part, but here's what comes to mind (speculation ahead, I don't really know these details): what if the encoded model can accept data types in, say, partitions, and the messier the order is, the bigger the file becomes, to allow both a free order in the Collada file and a strict encoding order in their internal binary format? Missing chunks might be added later in a separate partition, when the next related chunk is found, leading to fragmentation. This fragmentation, when it comes to binary data dumps, MUST be of fixed byte size, and if a chunk of data doesn't fill its partition completely, the remainder is empty but still there. The next chunk found opens a new partition. The same happens if a chunk exceeds the partition size: the remainder continues in the partition right after it, but only for a few lines, and again the remaining space contains zeroes. That's another reason for me to think that a specific parsing order is the most optimal, rather than a random one.
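     Here's how that zero padding would look, in a quick Python sketch (speculation again; the 16-byte partition size is a number I made up):

        PARTITION_SIZE = 16  # hypothetical fixed partition size in bytes

        def write_partition(out, chunk):
            # Write the chunk, then fill the rest of the partition with zeroes.
            out.write(chunk)
            remainder = (-len(chunk)) % PARTITION_SIZE
            out.write(b'\x00' * remainder)

        # A 5-byte chunk still occupies a full 16-byte partition: 11 zero bytes
        # that the reader must nonetheless download and skip over.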
  10. It's this type of fragmentation, permitted by Collada's format definition, that to me causes the different LI: the uploader has to do the conversion, and it accepts all data it can convert in any order. It's when the object needs to be read back that the difference between the fed parsing order and the native one can matter. If the description of the object is very messy, it may take longer to retrieve all the necessary data in the necessary order for the viewer: therefore a higher computational load and, consequently, a higher rendering cost.
  11. I was phrasing it incorrectly, I guess; my previous post addresses the method. I'm not saying it takes everything from your cache to make the conversion: it takes the data from SL, and the mesh construction happens outside, using the same meshes you would have had in your cache.
  12. I'll expand on this a bit more. If you get the encoded prim meshes from the assets and use them as a set of prefabs in your externally hosted application, your SL scripts just need to pack the prim parameters and the relative transformations (size, rotation and position) and send it all over to the server, which has all the files needed to construct a Collada from LL's native mesh descriptions. Whoever obtained these prim meshes converted them from LL's binary mesh encoding, which describes the mesh in all the states it can take through the torture parameters, and then applied the transforms.
     It's not much different from the complex sculpty build reconstructor that Machinimatrix's PrimStar offered via LSL script. You had to put a prim in the inventory, along with sculpt maps. Always using that same prim, the script rezzed as many copies of it as there were sculpt maps in the inventory, changed the parameters first, then the transforms, to reconstruct the multi-sculpt build you had. Doing the reverse is similar to exporting prims with the viewer: collect the prim data (torture parameters and the like), then the transforms, and rebuild from premade meshes.
     Now, if Mesh Studio does the cleanup and still uploads at a low LI, it's because whoever made that system made sure the cleaned-up mesh conformed to how a prim is built in terms of parsing order, matching them as closely as possible.
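     Conceptually, the record one prim would contribute could look like this (a made-up layout in Python, just to show the idea; I don't know Mesh Studio's real protocol):

        # Hypothetical per-prim record: torture parameters plus the transform
        # relative to the root prim.
        prim_record = {
            'type': 'box',                  # which prefab mesh to start from
            'params': {                     # the "torture" parameters
                'taper': (0.0, 0.0),
                'twist': (0.0, 0.0),
                'hollow': 0.25,
            },
            'size': (0.5, 0.5, 0.5),
            'rot': (0.0, 0.0, 0.0, 1.0),    # quaternion, relative to the root
            'pos': (0.0, 0.0, 1.0),         # position, relative to the root
        }
        # The server holds the prefab meshes, applies the parameters, then the
        # transforms, and assembles the Collada from the reconstructed pieces.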
  13. I don't know if this is for testing purposes only, but from what I can observe inworld, prims dynamically get different LoDs depending on the prim size and viewing distance.
  14. Yes, they do. Since prims are strictly parametrized, encoded meshes, all you need is to get their parameters, apply them, and create a mesh from the prim definition, which is what's encoded in the internal mesh format. Take that and apply the necessary transforms to get a copy of your prim model. It wouldn't surprise me if a mesh from a viewer export came with the root prim's center at the center of the 3D software's scene: prim positions are taken relative to the root prim, which conveniently translates to the center of a 3D scene for reconstruction. Try loading an XML prim build file after changing the order of its parameters while keeping the syntax correct, and see what happens when you try to re-upload it.
     Sure, they had this affinity: XML is a convenient file format for easy binary-text-binary conversions. Consider that, for all their fondness for XML, the internal animation files are binary encoded, and all meshes also get encoded into binary formats internally. XML meant that someone could easily output a text file to translate data with Maya, which at the time supported only 32-bit signed data values. Actually it's MEL (Maya's native scripting language) that still has this limitation to date; indeed, I had to use Python to write my anim exporter, but Python wasn't integrated into Maya at the time.
     What I do know is that if the parser expects some data to be fed in with a set of parameters and some are missing or incomplete, parsing won't occur, or at best it will occur incorrectly. That is plain fact. Indeed, you can get a parsing error when the order is too messy for the uploader, or an "XXX block missing" error (where the Xs stand in for an acronym; the most commonly seen is MAV, when materials have issues). The parsing error explains itself. The "XXX block missing" is a syntax error, where a block of data is expected and isn't found anywhere. It may well be in there, but if the construction of such a block misses the "marker" for the end of the previous statement, it's seen as part of that statement and not recognized, therefore not found and reported missing. Pretty much like LSL when a semicolon is missing and you get a syntax error: it sees the next command as part of the previous one, creating wrong syntax.
     Also, exporting a binary FBX doesn't mean it's humanly unreadable. FBX is Autodesk's take on evolving Collada. What the binary and ASCII options actually control is the type of C struct used for values. ASCII is just plain text for all types of data, leaving the software to figure out the data type from its appearance and from the attribute it's attached to. The binary one instead also carries a trace of the data type. Indeed, in Maya it's best to save scenes containing objects as .mb (binary), while shading networks are best suited to the ASCII file (.ma), because all you need there is names of connections, files and their paths, and the material values, which are established in the software, so a string representation works fine.
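     To show what I mean by a "trace of the data type", a minimal Python sketch (not the actual FBX layout; the 'D' type tag is only an example):

        import struct

        value = 1.0

        # ASCII: just the characters "1.0"; the reader must infer the type
        # from how the text looks and from the attribute it belongs to.
        ascii_repr = str(value).encode('ascii')        # b'1.0', 3 bytes

        # Binary: a type tag followed by a fixed-size 8-byte double, so the
        # reader knows exactly what it is and how many bytes to consume.
        binary_repr = b'D' + struct.pack('<d', value)  # 9 bytes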
  15. "It has to be a little bit of both. The big difference between the second and third of my examples is almost certainly caused by the parsing order, but I can't see how that can explain that small difference between my first two examples."
     For the "little bit of both", please read my previous post. It's certainly a matter of parsing order, which the uploader accepts in different arrangements as per Collada's definition. Now, the order in which this data comes in might make the parser struggle more or less, depending on what the basic .llm encoding was primarily built to treat as the optimal order. One hint of this is that the scripts in these converters take their data from what SL sends out for your screen to render, which is the native order; therefore a Collada generated back from there MUST be the most efficient version of the file. As a parallel, I can tell you about my experience dumping a .anim file, which uses the same type of binary data dump: with all data correct and in place, the order in which I fed this data to the file writer affected whether my animation file worked. I won't go into detail here; IM me inworld if you're interested.
     Yeah, don't buy that: there are tools usable for SL work that Blender doesn't have. You have to get Avastar to have a Component Editor for vertex weight editing, just to name one.
     "That's not really relevant for Mesh Studio since it doesn't depend on the SL software to create the mesh. What The Black Box did when he made Mesh Studio was reverse engineer the entire prim system and make his own custom software to handle it. The only thing the inworld Mesh Studio script does is read as much of the prim properties as a script possibly can and send them to a server far away, where the real magic takes place."
     I partially answered this above, since it DOES start from the mesh that the viewer unpacks, which was created in Maya. Leave the XML export alone, as it is a raw binary-to-text conversion of the internal mesh format. But exactly like you did while explaining the details of a Collada file to me, I got carried away with some history there.
     I'm now wondering: are the sizes consistent? Blender's default cube is 2 meters on all 3 axes, regular cubes in SL are half a meter. I'm sure you're keeping this consistent, but you haven't mentioned it, so... just asking.
  16. Oh, and another thing: syntax is by definition rigid. Again, it's the parsing order that can be flexible. If syntax is not respected, a syntax error occurs, because a command or attribute can't be "flexible": its arguments need to be respected for the file to be constructed correctly, or the parser won't recognize what some text means when given a differently ordered or incomplete set of arguments. The order in which this data gets dumped to the file, on the other hand, is the parsing order (for when the file needs to be read). And I'm even overlooking the remainder of your sentence.
  17. "But I can."
     Please expand on this concept, because as it stands, what I'm getting from your statement is that you could optimize my models more, or better, than I do? In that case my answer would be "I strongly doubt it", or worse, "you wish"... However, reading your other posts, you don't seem that arrogant, so I doubt my own reading of it first.
     I know quite well how LL manages to encode data, since I'm the one who released the only .anim exporter for Maya so far, plus a rebuilding tool from a Maya scene to SL (which, admittedly, doesn't involve much encoding knowledge). The whole mesh feature was built around Collada 1.4.1, as resulting from a binary FBX 7.1 (2011) conversion using FBX Converter. That's the only way to ensure the cleanest Collada possible for SL from Maya, in my opinion. Tested many times: other FBX versions work, BUT the Collada file came out a little different with every version I tried, and in the uploader, the lowest LI for the same model came with the export using the 2011 version.
     Collada per se is a dinosaur of a file format, with lots of flaws. Ever noticed how Collada files aren't that widely used, except for SL? It can't handle things like groups (not Blender groups; a container object) or geometry parenting, which is very much required of an exchange format. But as the wiki states about it (paraphrasing, not quoting), it's an animation exchange format: you animate, BAKE the animation into the file and export the animated result, stripping away everything else. If you use FBX Converter, you will also see how FBX files on the order of kilobytes end up as Collada files on the order of megabytes.
     With this said, what you're talking about is probably a matter of parsing order, not syntax. And if something like that becomes a problem, in Maya you just recalculate the vertex order and the model gets a cleaner, better organized one, aimed at better binary compressibility, since Maya saves binary (.mb) and ASCII (.ma) files.
     If it wasn't for @Gaia Clary, who also maintains the default Collada exporter, Blender wouldn't even have a Collada exporter working for SL (at best one working as a standard Collada, which didn't work for SL). It looks like (not saying that you are) you're assuming that content creation and file format standards revolve around what's available in Blender, while the truth is that Blender users struggled quite a lot to get things working in SL before Gaia and the Machinimatrix people took this burden onto their backs. This truth also includes the fact that everything inside SL was made in Maya, and things like Collision Volume bones rely on features that Maya has and Blender doesn't support: each Collision Volume bone has specific rotation and scale settings, which go with free-bind-pose skinning. In Blender you can't successfully attach something to an armature unless every single bone is set to zero rotation and scale 1; see what happens if you try without Avastar to handle the custom bind pose. But I'm digressing.
     A file's parsing order can also depend on the architecture of the 3D software it was output from, i.e. how the data is stored and retrieved internally by the software. Blender notoriously uses a quite unique architecture.
  18. On a side note, the "go away" part of this thread's title seems to have sparked more interest than the first title did, LOL.
  19. Which is exactly what you just built. Consider that each imported prim also has every single face disconnected, and would require a merge along the edges (Remove Doubles, in Blender).
     Well, that's exactly how prims were made and how they really are. All prims are the result of Maya NURBS. In Maya, NURBS and polygons come with UVs from creation, and NURBS always have a square UV map filling the full UV space. A cube was made of 6 NURBS planes snapped (not stitched) together. This means it's MeshStudio and the others that do the cleanup you report.
     Thanks, but this can't change my mind. I accept the lowest LI I can get for my models because I know how much I optimized them, and at upload time I'm sure I can't go any lower. Plus, most of my detailing goes into material textures. Using imported prims as a mock-up isn't my style either; I know how to build something to scale directly in my software. It's just how I trained myself to work.
  20. @ChinRey The OP was asking which prim-to-mesh toolkit is best, considering the various prices they sport, and also what there is to Mesh Studio that makes it worth its higher price, other than a couple of features I can't remember. Fan of the sterile grid floor here, btw =P
  21. I wish such a technology existed! However, I guess by "appliers" you're referring to the different versions for different avatars. That's up to the designer, and you may contact them to ask whether they're willing to update the product to include the default avatar too. Many designers, however, make fatpacks with a texture-change HUD, which is a sort of applier. Again, this is at the sole discretion of the designer.
  22. This change of topic title reminds me of another thread about rudeness...
  23. Alpha masking is required for a reason: not every dress shape can receive a weight copy so perfect that no skin pokes through. Plus, the classic avatar's skin weights aren't exceptionally well made, in my opinion.
     As angeoco said, when shapes get too extreme you may notice "cracks" along the mesh in bending areas, where one side sharply overlaps another. An example is the back of the shoulder, where the arm bends: when the shoulders get too narrow, you see those cracks appear. Conversely, the same area on avatars with very broad shoulders shows strong stretching, all this IF some skin doesn't start poking through in the meantime.
     If you look around the net at game modding communities, you will see: deletion of the parts of the body covered by clothing, marking the areas somehow (vertex colors, or joint-based dismemberment/partitioning), or alpha textures. SL uses textures. It's an inevitable process, because clothing most likely has different geometry than the base body, so the deformation data can't transfer exactly. Hence the two geometries (body and overlaid clothing) will never deform precisely the same way; very, very close, but not the same. That's why clothing is built keeping a small "grace distance" from the body (in SL, as of now, the same goes for tattoo layers on mesh bodies; in the future we'll have the texture baking service on meshes too).
     Most clothing perhaps works on all shapes, basically, but it may pick up distortions at some point during shape inflation/deflation that look bad and require a separate starting shape to keep looking nice and undistorted. In my opinion you were very lucky to find a dress that fits you so well, and there's no guarantee the designer did that on purpose rather than it happening by chance. You should test this for yourself: keep buying from this person and see how consistent this feature is across all of their products.
  24. I don't use mesh generators or prim-to-mesh conversions of any sort, but if your concern is the price, there's one you can try for free in the Firestorm viewer: right click --> More (a couple of times, if I recall correctly) --> Save As --> Collada. This feature doesn't generate LoDs or physics shape files for you, though; also, depending on the distance you're viewing the exported objects from, the currently displayed LoD is the one that gets exported (or so it did the last time I checked this feature, which was quite a long time ago).
  25. Reading the release notes here: https://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.79
     There is a compatibility warning for rigs created with Rigify, hence I suppose something has changed in how rigs are handled internally. There might be dependencies on the Rigify procedures that affect Avastar too. The Python API has also had a couple of changes that may affect how meshes and materials are handled. But if you just imported a skeleton that has no dependency on Avastar or the Rigify add-on, it should work.
     I'm sure you've got all the export settings right, as there's plenty of information about that. A single detail may have been overlooked, though: if you're using plain Blender, the first rule of Second Life compatibility is that your avatar must face the +X direction. Blender's default expectation, with the character facing -Y and the arms spanning the X axis, doesn't impede the upload, but everything comes out deformed as a result. Blender has a limitation in its skeleton mirroring functions, where you can only get a mirroring effect across the X axis, so you can work as much as you like in the default orientation; but when it's time to export, make sure your avatar rig faces +X, Apply Rotation (it might be needed on both rig and mesh) and then export.
     Please come back and tell us whether you got it working or not!
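     If you'd rather script that last step, here's a minimal sketch for Blender 2.79's Python API (assuming the rig object is named "Armature"; repeat the Apply step for the mesh object if needed):

        import bpy
        import math

        rig = bpy.data.objects['Armature']    # assumed object name

        # Blender's default character orientation faces -Y; rotating +90
        # degrees around Z turns it to face +X, as SL expects.
        rig.rotation_euler[2] += math.radians(90)

        # Select the rig and apply the rotation (2.79-style API).
        bpy.context.scene.objects.active = rig
        rig.select = True
        bpy.ops.object.transform_apply(rotation=True)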