
Runitai Linden

Everything posted by Runitai Linden

  1. I wouldn't recommend selling an .SLM file. The SLM file is not guaranteed to be compatible with future SL versions, so if someone bought the SLM file there's no guarantee they'd be able to use it. We do plan on supporting direct SLM upload, but it's very low priority. There is no creator or uploader information in the SLM, so anyone who uploads from an SLM file will be listed as the creator.
  2. It's a feature -- rigged attachments use the avatar skeleton bounding box when determining LoD. This is because a rigged attachment's non-rigged size as known by the CPU can vary wildly from the rigged size as known by the GPU. Synchronizing between the two takes a lot of cycles, so we just use the avatar skeleton bounding box as a best guess. (A small illustrative sketch of this follows the list below.)
  3. Mesh assets are as secure as textures, which is to say they are not secure. Selling a .dae is like selling a Photoshop .psd file, and selling an .slm is like selling a jpeg. If you want the "creator ID" to always be your ID, never sell or share an .slm or .dae file. Mesh is vulnerable to copying just as textures are, which is one of the reasons we require payment information on file to upload a mesh. When you upload a mesh, the simulator embeds your agent ID and the date the mesh was uploaded inside the mesh asset, which makes it much easier to resolve disputes of who uploaded what first.
  4. Two things:
     1) We're aware this is an issue, and decided it wasn't a blocker for initial release because:
        - you can fairly easily hit similar numbers by attaching 8000 sculpted prims
        - as you've seen, the viewer just stops rendering triangles on attachments once the attachment gets insanely heavy
        - if you "mute" someone, you stop rendering their attachments
     2) The plan is to make the visual "muting" automatic based on client preferences -- if you wear incredibly laggy content, others will see you as they see muted avatars. Unfortunately, doing the analysis on an avatar to determine if said avatar is laggy actually creates lag (you may have noticed reduced framerates when displaying ARC), so the two main blockers are to a) define "laggy", and b) make determination of "lagginess" not laggy.
  5. We actually kicked this around a bit and settled on "Mesh" because at the core platform level, what you're uploading *is* a mesh. In COLLADA terms: in the importer, each "geometry" node in the scene is turned into a single distinct mesh asset, and each geometry_instance in the visual_scene becomes a Second Life object that references that mesh asset. You get a "face" or "texture entry" for each "material" entry for the "geometry" node in the COLLADA file. (A rough sketch of this mapping follows the list below.) You're correct that the importer is dealing with "scenes," and the "Model..." button was in fact labeled "Scene..." early in the beta, but since a "Model" is technically a collection of meshes, it's just as accurate to say "Model" and it's more obvious what the button does. In terms of what you get after import, it's exactly the same scene and object management as the old platform, except that the geometry that makes up an individual object is pulled from a mesh asset instead of a set of primitive parameters. Of course, the details of making that work efficiently, securely, and in a way that's scalable are what's kept us busy these past two years, but there it is in a nutshell. So there you go: "mesh" import is what's happening, and we import "scenes" by importing lots of "meshes." Also, we *only* support mesh import, so claiming full compatibility with COLLADA beyond meshes would be incorrect (we don't support NURBS, for example).
  6. The importer is misleading you -- you need to go to the "modifiers" tab and check "skin weights" and "joint positions." The gears drop-down just controls what's rendered in the preview, not what's imported.
  7. Looks like he popped down to a lower LoD. This has been happening to me, too, on occasion, but I didn't notice the correlation with region crossing. Thanks for the heads up; this information should help track down the cause.
  8. We'll suss out a better placeholder graphic in future releases, but since we don't have a time machine and can't inject code into legacy viewers, this is what meshes will look like for anyone who doesn't update their viewer.
  9. Short version: Update to build 236082 or later if you're seeing prims where you used to see meshes. 236082 is building now (2011-7-18 21:12 SLT) and might not be available yet. Installer links here:
     Windows -- http://automated-builds-secondlife-com.s3.amazonaws.com/hg/repo/mesh-development/rev/236082/arch/CYGWIN/index.html
     Mac -- http://automated-builds-secondlife-com.s3.amazonaws.com/hg/repo/mesh-development/rev/236082/arch/Darwin/index.html
     Linux -- http://automated-builds-secondlife-com.s3.amazonaws.com/hg/repo/mesh-development/rev/236084/arch/Linux/index.html
     Long version: In an attempt to make some non-mesh-enabled viewers (i.e. viewers based on the 2.7.2-2.7.5 releases) display something meaningful (and not crash) when encountering mesh content, we're testing out a protocol change that is not backwards compatible with old mesh viewers. A simulator using this protocol has been deployed to the Mesh Experimental channel, so you'll need to update to the latest mesh viewer to view mesh content on the following regions: Mesh City 2; Mesh Sandbox 1, 20, 21, 27, 28, and 32; MeshHQ 3. The new viewer *is* backwards compatible with old sims, so you can use it everywhere, not just on Mesh Experimental sims. If you're wondering how we pick which prim to show for which mesh, it's based on the number of texture entries in the mesh asset (the prim you see has the same number of texture entries as your mesh). Your mesh object will also appear as this prim if you take it to a region that is not mesh enabled, and this is the prim that people with any legacy viewer will see in place of your mesh.
  10. Ann Otoole wrote: Runitai Linden wrote: ... The current budget of 250 thousand was used by examining the triangle count of various inworld locations and looking at performance characteristics and capabilities of target systems (which are slower than you might think). ... What are you "targeting"? Mobile systems? Tablets? iPhones?
      This is a fair question that we definitely haven't answered fully. Here are some statistics I'd like to share. As you may know, the viewer categorizes machines into 4 classes defined here: http://wiki.secondlife.com/wiki/GPU_and_Feature_Tables#GPU_Class What you probably don't know is what percentage of residents (your customers) fall into each class:
      Class 0 - 34.86%
      Class 1 - 17.08%
      Class 2 - 14.25%
      Class 3 - 26.95%
      The remaining 6.87% fall into an "unknown" or "unsupported" category. To give you an idea of what qualifies as "Class 0", as of the last report, these were the top 10 chips in that class:
      1. Intel Bear Lake
      2. Nvidia GeForce 6100
      3. Intel 965
      4. Intel 945G
      5. NVIDIA PCI
      6. Intel 945GM
      7. ATI Mobility Radeon
      8. ATI Radeon X1xxx
      9. Intel Cantiga
      10. NVIDIA GeForce 7000
      We're basically talking about $400 laptops, so that's the target. If you've got a beefier machine, keeping the scene lean will let you crank up your mesh detail so you never see those ugly low LoDs, push your draw distance out, and turn on effects like shadows and water reflections.
  11. Drongle McMahon wrote: The present max_area effectively sets the draw distance of the low-end user, who the limit is designed for, to 181m, when the medium graphics setting that is supposed to be the target is 96m. The effect of this inconsistency is about a doubling of PE, and a disproportionate increase in PE for larger meshes, but I don't think that is its intention. I think it is just a mistake.
      You keep bringing this up, but look at how the math works out and you'll see that the larger max_area is, the lower the streaming cost is, because the streaming cost is based on the average number of bytes visible over max_area. Effectively, the average number of triangles visible over max_area approaches the number of triangles in the lowest LoD as max_area approaches infinity.
  12. The streaming cost component of PE is based on the average number of bytes visible within the area of a circle that circumscribes a region. Larger objects LoD sooner, increasing the average number of bytes visible over that area, making them use more bandwidth. While it's true that the actual displayed LoD depends on user settings, the streaming cost calculation is based on a constant LoD ratio and is therefore independent of viewer settings. The target triangle budget is 250 thousand triangles visible from the center of a maxed-out region, but you can increase that budget for your viewer by increasing your detail settings in preferences or increasing your draw distance. The current budget of 250 thousand was chosen by examining the triangle count of various inworld locations and looking at the performance characteristics and capabilities of target systems (which are slower than you might think). The budget is also biased a bit low for the initial release because it will be easier to raise the limit later than to lower it. (A rough numerical sketch of this averaging follows the list below.)
  13. Failed Inventor wrote: They say don't compare mesh cost to prim cost; well, honestly, follow your own words, LL. Forget prims; give parcels a triangle/vert limit, and the same with avatars too. Why? *Waits for the Linden fanboys with their torches and pitchforks to burn me at the stake for expressing non-pro-Linden views.
      That's exactly what the "streaming cost" PE does; it just presents the number in terms of "prims" because that's what the existing accounting system is set up to handle. It does exactly what you're talking about -- it just converts the number to the same units the system is used to dealing with.
  14. OK, getting this very helpful, insightful thread back on the rails -- two quick things and one long thing.
      1) We were using diameter instead of radius; fixing that now, and the error margin seems to be getting smaller because of it -- thank you.
      2) The new total area of 102k is the area of the circle that circumscribes a single region -- I found that using the area of the region (65536) tended to give the lowest LoD less weight when determining the average and resulted in a higher-than-accurate streaming cost. (The arithmetic behind the 102k figure is worked out after this list.)
      The long thing: realistic budgets. I'm absolutely thrilled that the discussion is turning towards triangle budget and framerate analysis instead of comparison with individual prims, but there seems to be some misunderstanding of what "scene statistics" reports. The "Render Info" display describes the currently rendered frame and includes information about everything in your current view. The "Scene Statistics" console describes everything in the region you're currently in that the viewer is aware of, so it could be missing small objects that are far away, and it definitely excludes content from neighboring regions. What this means is that the triangle budget of 250K is for a single region at medium mesh detail settings. If you have a machine that can handle more than that, you can spend the extra power bumping up your detail settings, pushing out your draw distance, and enabling effects that require multiple render passes (like shadows and water reflections). Someone asked for a report on graphics hardware. I'll see what I can dig up.
  15. Drongle McMahon wrote: I have wondered about the W before, but have no real idea what it does. I did read something about it being used in transformations to give the texture a perspective transformation. I would guess it is ignored, but I really don't know. Runitai, are you there? Help! It is ignored -- actually, I think if it shows up in the .dae file, UVs won't parse with our importer. Anybody got a .dae with 3D texture coordinates to test?
  16. Drongle McMahon wrote: Almost, except that there are only 6 different face normals in the smoothed cube. There are 8 vertex normals (each of which is the sum of the three adjoining quad face normals at a vertex). Face normals are not stored (or used by OpenGL, as far as I know). The faces of a flat-shaded triangle just have three identical vertex normals. I need to check one other thing with Runitai -- there do seem to be separate vertex lists for each LOD, but does the 64K limit (necessary for 16-bit indices in triangle lists) apply separately to each, or to the sum of all of them? I also left out the vertex weighting and the Havok private and physics cost info, but I think I will leave it that way so that it's just for static meshes and vaguely comprehensible data.
      The 64k limit applies to each submesh separately. Also, vertex weight data is in the LoD block -- the "skin" data block contains a list of joint names and bind matrices to match up to joint indices in the weight data. (There's a short note on why the limit is 64k after this list.)
  17. Gearsawe Stonecutter wrote: So basically from what I understand now, you would have to turn your draw distance up to 256, stand in the center of the sim, and slowly turn around 360 degrees to make sure everything changes to its correct LOD. Take the number of visible triangles, then multiply that by (view angle / 360) to get an average number of triangles in the view from that given point. Default view angle is 120 degrees (Ctrl-9). I tried an area select but that did not work out too well. Or "visible" simply means the number of triangles in a 360-degree view, not just visibly in front of you.
      Bingo -- "visible" in the render info display and statistics console means the total number of triangles at the currently calculated LoD for all objects the viewer is aware of. Since the viewer only calculates LoD for objects that are onscreen, you'll need to do a full 360 turn. There's also some error introduced by the interest list (the simulator will tell the viewer to unload some objects at certain distances based on size), but I'm ignoring that for now because the same error exists with prim builds, which is what the triangle budget is based on, so it should be a wash.
  18. Drongle McMahon wrote: As far as effects on streaming cost are concerned, it is clear that the extra data required for faceting and UV fragmentation increase the download data size, and thus the cost. Whether it increases the GPU workload, I don't really know enough to speculate. As far as I know, you still have to feed the same amount of data for each triangle into OpenGL, and if anything you might think that rendering flat faces would be faster because it doesn't need interpolation of the normals. Maybe someone who knows more about OpenGL and the internals of GPUs could enlighten us here? * normal = the direction in space that is at right angles to the surface.
      Faceted meshes are more expensive to render than smooth-shaded ones due to cache misses in the post-transform cache. Basically, if a vertex is used more than once and we did a good job cache-optimizing the index buffer, it only needs to be run through the vertex pipeline once, no matter how many triangles it is part of, and the fragment (pixel) pipeline uses the cached transformed data. If the mesh is faceted, a vertex is effectively not shared between triangles because the normal differs, so its values have to be processed separately for each triangle. (A small cache simulation illustrating this follows the list below.)
  19. Dain Shan wrote: Glad to be helpful. I'm still trying to hammer some of your points into my head, especially the line: "Not all triangles are equal -- the streaming cost assumes an average of 10 bytes per triangle, but on low poly or faceted objects, many vertices are used in only one triangle, so the average triangle size (in bytes) for those objects is often much more than 10 bytes, not to mention the overhead of the mesh header and LLSD tags, which can outweigh the size of a tiny mesh. It's fair to charge more for these for a few reasons:" I'm a bit baffled. How can a triangle have more than 3 vertices? By the definition of a triangle, that's simply not possible? The only explanation I could come up with is creating an irregular mesh, where one corner of a polygon sits somewhere on an edge of an adjacent triangle. But even then... that vertex has nothing to do with this triangle at all? The other possibility would be a subdivision of an edge, but even then the uploader would turn that subdivision into another additional triangle. Maybe you can explain a bit further what is meant here? Dain. As always, I'm sorry about my bad English...
      Meshes are stored as "indexed triangle lists." If a vertex is shared between two triangles, the position, normal, and texture coordinate are stored once, and that data is referenced by an index array. For smooth-shaded meshes, any vertex that shares a position tends to share a normal as well, so the vertex data only needs to be stored once. For a faceted cube, there are actually 3 vertices at each corner, each with a different normal, which increases the data size. If the cube were smooth shaded, only 8 vertices would need to be stored in total, but since it is flat shaded, 24 vertices must be stored. 3D modelling programs tend to use separate arrays for position, normal, and texture coordinate data, but this results in bloated index arrays. Since the vertex data in a Second Life mesh asset is compressed, it's better to save on indices than vertices. (A byte-count sketch of the cube example follows the list below.)
  20. Rusalka Writer wrote: This has probably been asked and answered fifty times, but so have most of my questions— is streaming cost (or prim cost, whatever) something that is assigned at the time of upload, or does it change as the calculations change? If I upload something now that has a nominal cost of 50, and the cost is later recalculated, will my object's cost stay at 50 or will it adjust to the new cost? Don't want to spend a lot of time uploading if I'll have to do it again and again and again. The cost is calculated post upload and adjusted dynamically as the calculations change. At this point, anything you upload WILL NOT need to be uploaded again. The format is finalized and any future changes must be backwards compatible.
  21. Gearsawe Stonecutter wrote: Soooo, the way making an object scripted/physical increases its cost will be done away with? I can understand some of the reasoning behind streaming cost and rendering cost, but this part makes no sense.
      No, the "Prim Equivalence" will still be simulation cost or streaming cost, whichever is greater. This conversation is just focusing on streaming cost. I'll leave physics and script cost to some other discussion.
  22. Ashasekayi Ra wrote: Thanks for that explanation. So, if an LOD has 3 times fewer triangles than the one above it, the object will cost fewer prims than, say, an LOD that has only 2 times fewer?
      Exactly. If you graph it, the cost gets exponentially lower until you hit a ratio of around 4x.
  23. arton Rotaru wrote: I have a question about the testing case you described. You say to fill the region up to its prim limit with our meshes. Currently the region counts the number shown in the edit floater against the parcel/region prim limit. That means I can copy about twice as many (or more) meshes onto the region than I could if I took the number shown in "Selection Streaming Cost" for the given mesh. So which number should we take to count up to the 15000 prim limit, to make the budget tests you mentioned above? I apologize in advance if I misunderstood something here, but I'm not a native English speaker, and I probably have a harder time reading and understanding all this.
      Use the "streaming cost" number in show render info and make it add up to 15000 for now -- hopefully we'll get a sim out soon that agrees with the viewer as far as streaming cost is concerned.
  24. Gaia Clary wrote: While experimenting a bit further I came across this issue: Both kettles are mesh objects. Both kettles have exactly the same appearance and the same LOD decimation; everything is equal, except that I joined the 2 objects into one mesh object (I did that for each LOD separately) and I simplified the physics mesh. Here is the result:
                              optimized (left side)    original (right side)
      type                    mesh                     mesh
      objects                 1                        2
      faces                   1280                     1024 + 256
      physical objects        1                        2
      faces of phys. obj.     16                       45
      complex hulls           1                        2
      size                    0.49, 0.47, 0.32         0.46, 0.46, 0.31 (kettle); 0.48, 0.25, 0.16 (handle)
      prim cost               5                        3
      streaming cost          7.2                      3.2 + 4.4
      upload cost             100L$                    110L$
      Maybe that needs either a fix or another explanation. I am also not at all sure about what "upload cost" means here (Develop -> Show Info -> Show Upload Cost). This probably is also not fixed yet?
      The "upload cost" debug display is kinda useless right now -- I wouldn't worry about that yet -- and the "prim cost" not matching "streaming cost" is a bug; go with "streaming cost" for now.
  25. Gaia Clary wrote: Now here are some questions arising:
      1. So this means that meshes with few triangles will result in higher streaming costs (assuming that on each LOD we must display at least one single triangle)... is this correct or wrong?
      2. And if the former is true, then meshes with more triangles can become more cost efficient because we can reduce them better. Is this correct or wrong?
      3. If we were to reduce the number of triangles by a power of 3 for each level, then from the streaming cost presumptions, meshes with 512, 64, 8, 1 faces on LOD 3, 2, 1, 0 should be (on average) the most efficient meshes we can have? Is this right or wrong?
      4. The resource costs displayed during upload are meant as "this is the fraction of your 15000 budget that will be needed to rez this object with the given scaling", correct or wrong?
      5. Sorry, I am so curious, but I really want to know if we can have more than 15000 mesh objects on a region. From what Nyx told us last Monday, this should be possible (he told us he could place 24000 spheres into a region).
      1 & 2 -- In general, any mesh that can be reduced to a lower number of triangles will be cheaper than a mesh that cannot be reduced, depending on size and reuse of vertices.
      3. If by "efficient" you mean the most triangles in the high LoD for the lowest cost, then yes, that would be very efficient.
      4. Yes, and if everything's working correctly, the number in the upload floater should match the number in the build floater after first rez -- it currently does not, which is a bug.
      5. Yes, you can have more than 15000 objects per region provided you use the new physics options appropriately (link non-physical meshes together and set their physics shape type to "none"). However, this is unlikely and probably undesirable for meshes, as one of the benefits of mesh is less scene fragmentation than we get with prims. Ideally you'd want to find a balance between keeping the number of objects low, keeping the number of visible triangles low, and keeping the overall aesthetic quality high. It's all very yin and yang.
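
A minimal sketch of the rigged-attachment LoD behaviour described in post 2: the LoD is picked from the avatar skeleton's bounding-box radius rather than the attachment's own (unrigged) bounds. The function name, thresholds, and detail factor below are illustrative assumptions, not the viewer's actual code or constants.

```python
def select_lod(bbox_radius, distance_to_camera, detail_factor=1.125):
    """Pick a LoD index, 3 (high) down to 0 (lowest), from apparent size.

    Thresholds and detail_factor are illustrative, not the viewer's constants.
    """
    if distance_to_camera <= 0.0:
        return 3
    apparent_size = bbox_radius * detail_factor / distance_to_camera
    if apparent_size > 0.24:
        return 3   # high
    if apparent_size > 0.06:
        return 2   # medium
    if apparent_size > 0.03:
        return 1   # low
    return 0       # lowest

# For a rigged attachment, pass the avatar skeleton's bounding-box radius,
# not the attachment's own unrigged bounds, so the chosen LoD tracks the
# size the GPU will actually render it at.
print(select_lod(bbox_radius=1.2, distance_to_camera=20.0))
```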
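
A rough sketch of the COLLADA mapping described in post 5, using Python's standard library to walk a .dae file: each <geometry> becomes one mesh asset, each geometry instance referenced from the visual scene becomes one object, and each material binding on a geometry becomes one face/texture entry. This is an illustration of the mapping, not the importer's actual code; the element names are standard COLLADA 1.4 (instance_geometry is the element the post calls a "geometry_instance").

```python
import xml.etree.ElementTree as ET

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

def summarize_dae(path):
    root = ET.parse(path).getroot()
    geometries = root.findall(".//c:library_geometries/c:geometry", NS)
    instances = root.findall(".//c:library_visual_scenes//c:instance_geometry", NS)
    print(f"{len(geometries)} mesh assets (one per <geometry>)")
    print(f"{len(instances)} objects (one per <instance_geometry> in the scene)")
    for geom in geometries:
        # One face/texture entry per material bound on the geometry's primitives.
        prims = geom.findall(".//c:triangles", NS) + geom.findall(".//c:polylist", NS)
        materials = {p.get("material") for p in prims}
        print(f"  {geom.get('id')}: {len(materials)} faces")

# summarize_dae("model.dae")
```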
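
A back-of-the-envelope sketch of the streaming-cost averaging described in posts 11 and 12, using only numbers mentioned in this thread (a ~102k m^2 circumscribing-circle area, ~10 bytes per triangle, and 250k triangles spread over a 15000-prim region). The LoD switch factors and the final conversion to prim equivalents are illustrative assumptions; the real viewer/simulator constants may differ.

```python
import math

MAX_AREA = 102944.0        # area of a circle circumscribing a 256 m region, m^2
BYTES_PER_TRIANGLE = 10.0  # average assumed by the streaming cost (post 19 quote)
PRIM_BUDGET = 15000        # prims in a maxed-out region
TRIANGLE_BUDGET = 250000   # target visible triangles per region (post 12)

def streaming_cost(radius, lod_bytes, switch_factors=(0.24, 0.06, 0.03)):
    """lod_bytes = (high, mid, low, lowest) sizes in bytes; radius in metres."""
    # The distances at which the object drops to mid/low/lowest LoD scale with
    # its radius, so large objects hold heavy LoDs over a larger area.
    d_mid, d_low, d_lowest = (radius / f for f in switch_factors)
    a_mid, a_low, a_lowest = (min(math.pi * d * d, MAX_AREA)
                              for d in (d_mid, d_low, d_lowest))
    weights = (a_mid,                        # area over which the high LoD shows
               max(a_low - a_mid, 0.0),      # ... the mid LoD
               max(a_lowest - a_low, 0.0),   # ... the low LoD
               max(MAX_AREA - a_lowest, 0.0))  # ... the lowest LoD
    avg_bytes = sum(b * w for b, w in zip(lod_bytes, weights)) / MAX_AREA
    est_triangles = avg_bytes / BYTES_PER_TRIANGLE
    return est_triangles * PRIM_BUDGET / TRIANGLE_BUDGET

# Same LoD data, two sizes: the larger object costs more because it shows its
# heavier LoDs over a much bigger share of the region.
lods = (20000, 8000, 2000, 500)   # bytes per LoD, illustrative
print(streaming_cost(0.5, lods), streaming_cost(10.0, lods))
```

Pushing MAX_AREA up in this sketch drives the weighted average toward the lowest LoD's size, which is exactly the limiting behaviour argued in post 11.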
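
For reference, the arithmetic behind the 102k figure in post 14, assuming a standard 256 m x 256 m region (this is also where the 181 m distance quoted in post 11 comes from):

$$r = \tfrac{256\sqrt{2}}{2} \approx 181.02\ \text{m}, \qquad A = \pi r^2 = \pi \cdot 32768 \approx 102{,}944\ \text{m}^2 \approx 102\text{k}\ \text{m}^2.$$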
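
A short note on the 64k limit from post 16: triangle indices are stored as 16-bit values, so a single index can only address 2^16 = 65,536 distinct vertices, and because each material face (submesh) carries its own vertex list and index list, the limit applies per submesh. The helper below is illustrative, not the uploader's code.

```python
import struct

MAX_VERTS_PER_SUBMESH = 2 ** 16   # 65536 -- the most a 16-bit index can address

def pack_indices(indices):
    """Pack triangle indices as little-endian unsigned 16-bit values."""
    if max(indices) >= MAX_VERTS_PER_SUBMESH:
        raise ValueError("submesh exceeds the 16-bit index range; split it")
    return struct.pack("<%dH" % len(indices), *indices)

print(len(pack_indices([0, 1, 2, 0, 2, 3])), "bytes for two triangles")
```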
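
A toy model of the post-transform cache effect from post 18: count how many vertices must be pushed through the vertex stage for a smooth-shaded cube (corners shared between faces) versus a faceted one (each face duplicates its corners to get its own normal). The FIFO cache and its size are assumptions; real GPU caches and the viewer's actual index ordering differ, but the sharing-versus-duplication ratio is the point.

```python
from collections import deque

def vertex_shader_runs(indices, cache_size=32):
    """Count vertex-stage invocations with a simple FIFO post-transform cache."""
    cache, runs = deque(maxlen=cache_size), 0
    for i in indices:
        if i not in cache:   # cache miss: the vertex must be transformed again
            runs += 1
            cache.append(i)
    return runs

# Smooth cube: 8 shared corner vertices, 12 triangles (36 indices).
quads = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
         (2, 3, 7, 6), (0, 3, 7, 4), (1, 2, 6, 5)]
smooth_idx = [i for (a, b, c, d) in quads for i in (a, b, c, a, c, d)]

# Faceted cube: every face gets its own 4 vertices (24 total), so nothing is
# shared between faces and the cache can't help across them.
faceted_idx = [i for f in range(6)
               for i in (4*f, 4*f + 1, 4*f + 2, 4*f, 4*f + 2, 4*f + 3)]

print(vertex_shader_runs(smooth_idx), "transforms smooth vs",
      vertex_shader_runs(faceted_idx), "faceted")
```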
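
Rough byte counts for the cube example in post 19, assuming an uncompressed layout of position + normal + UV per vertex (32 bytes) and 16-bit indices. The real mesh asset compresses the vertex data (as post 19 notes), so absolute numbers differ, but the ratio shows why faceting and UV splits bloat the data the streaming cost has to account for.

```python
VERTEX_BYTES = 12 + 12 + 8   # float3 position + float3 normal + float2 UV
INDEX_BYTES = 2              # 16-bit index

def mesh_bytes(num_vertices, num_indices):
    return num_vertices * VERTEX_BYTES + num_indices * INDEX_BYTES

smooth = mesh_bytes(8, 36)    # shared corners: one vertex per cube corner
faceted = mesh_bytes(24, 36)  # 3 normals per corner -> 3 vertices per corner
print(smooth, "bytes smooth vs", faceted, "bytes faceted")
```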