
Kathrine Jansma

Resident
  • Posts

    189
  • Joined

  • Last visited

Everything posted by Kathrine Jansma

  1. I did recompile them (both as LSL and as Mono), but it didn't change the behaviour.
  2. I dusted off some older AO (Sassy Ponygirl AO) in my inventory and now encounter instant "Stack-Heap Collision" errors on init/rezzing it in some regions. In some regions memory on init shows 7% free, in others it's just 1% and it crashes. I would expect it to be the same and not vary by region. Did something change with script memory handling in recent updates?
  3. Might be easier to answer if you tell us a bit about what you usually do in SL. Most viewers have some niche where they shine. Some are better for taking pictures, others for managing your inventory. And so on...
  4. Ah, you're aiming too low. 😉 More buzzwords! Blockchain-based tool penetration attestation by AI vision processors using zero-knowledge proofs linked to pseudonymized biometric ID! Maybe add some smart contract buzzword stuff.
  5. Digital sex toys have all kinds of hilariously bad security issues. DEF CON had a presentation a few years ago with the lovely title "Adventures In Smart Buttplug Penetration Testing" that talked about some Lovense stuff too. There was also a presentation at the CCC conference called "Internet of Dongs".
  6. They can do that with the "Toy Event API". It can generate WebSocket events for buttons pressed on the devices.
  7. Lovense is a bit weak on (public) API documentation for the toys, so you get other implementations like buttplug.io that allow control without any cloud service leaking data around.

     The current QR code/numerical code linking looks a bit like an OAuth2 device flow (https://oauth.net/2/device-flow/) used to access the cloud service that connects to the app running on a phone or desktop machine. Not sure about the details, as I haven't found any detailed docs on how to register the client; it probably needs registration (maybe hidden somewhere in here: https://developer.lovense.com/docs/standard-solutions/standard-api.html ).

     So sure, a viewer could implement a scripting bridge to this API, but that already works somewhat okayish with the available LSL HUD-based solutions. Those probably just use the HTTP-out capabilities in LSL to call some REST API with the given access tokens. It's a bit like the Firestorm viewer-side AO: it can all be done with an LSL script, but it might work better with integration into the viewer. For example, there is some noticeable lag with the HUD-based solutions. So if any TPV developer wanted to integrate a Lovense-style bridge to mimic some in-world HUDs, that's basically doable and not even hard. There also seems to be a WebSocket-based 'Lovense toy event' API that would not be implementable in a HUD, so maybe their viewer supports that?
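The QR-code linking described above can be sketched as an OAuth2-style device flow. This is a minimal illustration, not the actual Lovense protocol: the endpoint URLs, parameter names, and client ID below are all invented, and the HTTP helper is injected so the polling logic can be shown without a network.

```python
import time

# Hypothetical endpoints -- the real service URLs are not publicly
# documented, so these are illustrative only.
DEVICE_AUTH_URL = "https://example-cloud/api/device/authorize"
TOKEN_URL = "https://example-cloud/api/device/token"

def link_device(post, interval=5.0, timeout=60.0, sleep=time.sleep):
    """Sketch of an OAuth2 device flow: fetch a user-facing code (the bit
    shown as a QR/numerical code), then poll until the user approves.

    `post(url, data)` is an injected HTTP helper returning a parsed dict,
    so this logic can be exercised without real network access."""
    # Step 1: ask the service for a device code plus a code to show the user.
    auth = post(DEVICE_AUTH_URL, {"client_id": "my-viewer"})
    print("Show this code to the user:", auth["user_code"])

    # Step 2: poll the token endpoint until approved or timed out.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = post(TOKEN_URL, {"device_code": auth["device_code"]})
        if "access_token" in resp:
            return resp["access_token"]   # linked: use this for REST calls
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error", "unknown error"))
        sleep(interval)                   # respect the poll interval
    raise TimeoutError("user never approved the device")
```

An LSL HUD doing the same thing would just fire llHTTPRequest calls with the stored access token instead of polling from the viewer side.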
  8. Ten minutes sounds excessive. With enough threads thrown at the problem, even a busy club should rez much faster. But we might be comparing apples & oranges. Which viewers did you try? I would recommend you give Henri's Cool VL Viewer a spin, especially with the 'boost texture fetch/decode after TP' features. I tend to TP into fairly busy places and the crowd usually looks proper after less than a minute, and that's on not really super modern hardware. And make sure you have an AV exception for your viewer cache; it makes a huge difference.
  9. You should revisit your image of "open source developers". Henri works on his viewer in his spare time, gets no money for it, and scratches his own itches and the stuff that annoys him. Henri publishes all the source code for his viewer openly, so if LL wanted it, they could use it. And they actually do get some good patches and comments from Henri. There is no reason for him to try to persuade LL to use his better code. Or well, sometimes he tries anyway, when he feels like it.

     If you think Firestorm or LL's viewer could use the improvements, you could (in theory) take the code, ask Henri to allow dual licensing it as GPL/LGPL, and work with Firestorm's or Linden Lab's official process to get such patches integrated. Like e.g. Firestorm adopted my multi-threaded texture decoding code from Cool VL Viewer (which has since been superseded by a new threadpool-based approach in the Linden PBR viewer).

     In the end, you are both right:
     - Texture compression is a shoddy solution; the problem is better fixed with smarter code, as done in Cool VL Viewer.
     - Sometimes texture compression is better than nothing, even with hiccups and bad FPS effects.
     - If you lack VRAM, you have to make compromises.
  10. And what about the graphics drivers? Both the newer AMD and the NVIDIA graphics drivers are multi-threaded even for OpenGL by now.

      It is true that OpenGL has quite some limits for multi-threaded use and that the SL viewers' main render loop is still single-threaded (especially the shadow code, which hurts). So yes, the viewer dies a slow death of a thousand cuts when it is running the main loop and has to issue a gazillion draw calls due to badly optimized content. But at least extra cores help to free up the main thread to do the rendering and not get context-switched away all the time.

      BUT:
      1. Texture decoding and OpenGL binding of textures use extra threads, so if you need to decode textures, more cores help (in Firestorm, Cool VL Viewer, maybe others, and in the modern SL viewer too). See: https://github.com/secondlife/viewer/blob/a592292242e29d0379ee72572a434359e1e892d1/indra/llimage/llimageworker.cpp#L64
      2. Cache operations and filesystem I/O get pushed to the threadpool as well.
      3. A few other things may get pushed to threads too.

      In a mostly stationary scene, the extra cores do not help much (besides the driver stuff). But when moving around or shuffling lots of textures in and out, they help quite a bit.

      Some extra points:
      - AMD iGPUs aren't that bad. Sure, a modern 200+ $ GPU will run circles around one (burning 3-10x the energy) and usually has maybe 100% more FPS. But an iGPU is quite okay for most cases, unless you really want to visit that busy club or need ultra-high resolution or draw distance.
      - Notebook APUs tend to have fast soldered-on memory, and memory speed is pretty important for APUs.
      - The AMD (Windows) drivers improved massively in OpenGL performance over the last few years, especially if the GPU allows ReBAR/Smart Access Memory.
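The threadpool texture decoding mentioned in point 1 can be sketched in a few lines. This is only an illustration of the pattern, not the viewer's actual C++ code: the "decode" here is a stand-in byte transformation, and the real viewers additionally prioritize work items and hand results back to the main thread for the OpenGL upload.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_texture(raw):
    # Stand-in for the expensive per-texture JPEG2000 decode a viewer
    # performs; here bytes are just inverted to simulate work.
    return bytes(b ^ 0xFF for b in raw)

def decode_all(textures, workers=4):
    # Fan the decode jobs out over a thread pool so the (single-threaded)
    # render loop is never blocked on decoding. map() preserves input
    # order, so results line up with the requests.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(decode_texture, textures))
```

More cores let more of these decode jobs run concurrently, which is exactly why extra cores pay off when shuffling lots of textures in and out.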
  11. Well, technically it is a good thing to use all available memory if there is a good use for it. It is only trouble when things like the disk cache and other RAM-hungry consumers battle for it and there isn't enough in total.
  12. For AMD dedicated GPUs you should also try to have "Smart Access Memory" enabled; it can make a significant difference. It is the AMD name for the PCIe Resizable BAR feature.
  13. The problem would still be whether LI is a proper metric for render cost/lag. Probably it isn't, just like the ARC numbers are useless. So instead of ban systems kicking users for "too many scripts", you would have more or less random bans for "too much metric xyz", and still might miss the worst offenders. Often the basic mesh body is probably already the worst offense to start with. How many land owners would ban people wearing some of the more popular bodies due to the atrocious load from all the submeshes used for hiding segments instead of using BOM alphas?
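A toy sketch of why single-metric ban rules misfire; every number here is invented for illustration. A parcel security system keyed to script count alone ejects a harmless heavy scripter while waving through an avatar whose segmented mesh body is the real render-cost offender.

```python
# Invented thresholds and avatar stats, purely for illustration.
def allowed(avatar, max_scripts=60):
    # The ban rule only looks at one metric: script count.
    return avatar["scripts"] <= max_scripts

gadget_fan = {"scripts": 80, "body_submeshes": 20}   # many scripts, cheap body
heavy_body = {"scripts": 30, "body_submeshes": 900}  # few scripts, submesh-heavy body

assert not allowed(gadget_fan)  # ejected
assert allowed(heavy_body)      # passes, despite the far heavier render load
```

Swapping `scripts` for LI or ARC just moves the blind spot, which is the point of the post above.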
  14. A newer Firestorm is able to use more of the CPU/GPU resources due to some multi-threading optimizations that have been added, so it is expected to go a little harder on your machine than an older version. And using the CPU/GPU more draws more power, generates more heat, and makes the fans spin up. Try playing with some of the tunable settings in https://wiki.firestormviewer.org/preferences_graphics_tab#hardware_settings ; this may reduce the load a bit. But the MBP and charger should still be able to handle that without failing.
  15. GitHub does offer a nice view of contributions to at least the Second Life viewer, which is developed more or less openly: Contributors to secondlife/viewer · GitHub
  16. That's slightly optimistic. Not even Linus Torvalds managed to do that. It is usually true if you compile everything yourself, but once you start shipping binary packages, it becomes a mess. That said, the mainstream distros usually work okay when you stay away from very old or very new versions.
  17. For people who care about the Python issue that causes this: After installation on Windows7, 64bit Python 3.9.0b5 reports "api-ms-win-core-path-l1-1-0.dll" missing and doesn't start · Issue #85584 · python/cpython · GitHub

      Basically, Microsoft restructured their Win32 runtime in the effort to modernize it as part of the UWP project, and some functions got moved to new DLLs like the one mentioned. You can see a list here: APIs present on all Windows devices - Windows UWP applications | Microsoft Learn

      This means a viewer built against the newer set of Windows 10 headers & libraries will no longer find those functions at their long-outdated old locations. That's pretty normal when things move on, just like a TP to an old landmark in SL leads you nowhere.
  18. It is important, but not for the reasons you seem to assume here. Apple is going to kill OpenGL in one of the next OS X releases; at least it has deprecated OpenGL support since 10.14 (but surprisingly added newer support for the M1, see https://unlimited3d.wordpress.com/2020/12/09/opengl-on-apple-m1/ ). So once Apple goes forward and removes the OpenGL APIs from the system, there will be no way to get the current viewer to run on OS X. So it is not "would be nice to use the power" but "move to new tech or you're out" when it comes to the OS X/Apple ecosystem.
  19. I doubt it makes any real sense to compare CPUs for this when also switching the GPU around. Comparing the CPUs with the same GPU, drivers, OS, and amount of RAM might be useful, but with a setup like this you introduce too many differences that have nothing to do with the CPU itself. You have vast system differences:
      - DDR4 vs DDR5 memory
      - 1 TB SSD vs 2 TB SSD (probably also NVMe vs SATA, or different PCIe levels)
      - Different OS
      - Different GPU

      So basically you compare systems, not CPUs.
  20. Up to now, the ARM offerings besides Apple's severely lacked in the GPU and memory department. And Apple has basically abandoned OpenGL support, so viewers do not run at the full potential the hardware offers. So they're not really good targets to run SL viewers on.

      BUT if you have a nice powerful ARM desktop running e.g. Linux on ARM, you can compile a viewer for it and run it. Henri Beauchamp has/had an ARM build of the Cool VL Viewer at some point ( http://sldev.free.fr/forum/viewtopic.php?f=10&t=2212&start=40 ). You could probably get the current viewer to compile on Windows on ARM too, maybe with minor changes, but it lacks prebuilt ARM versions of the dependent libraries, so it would take some effort to get it to compile.
  21. GDPR compliance is not automatically transitive, especially not when something like the complex US<->EU regulation with Safe Harbour/Standard Contractual Clauses, or whatever it's called today, comes into play. You consent not to "data transfer" but to "data transfer for a certain purpose to a certain processor/party". So bringing a new entity or purpose for data processing to the GDPR party usually adds trouble. And using Discord for group chat is surely either a new purpose or a new processor involved. That's a problem on top of any issues with TOS differences between Discord and SL.
  22. That is mostly FUD. The same is true for any third-party packages, even open source ones. If the developers of an open source driver have basically abandoned it, or do not keep up with kernel development or compiler/library changes, it breaks after some time. The main point is you need active maintainers.

      You may have problems if you are on a rolling-release/bleeding-edge distro and have some bleeding-edge or some vintage hardware. That's totally true, but it's no different from the Windows experience, actually. Immature drivers are common on both. The only bonus you get from "in-tree" open source drivers is the convenient fixing by the kernel people when they break the ABI again or push more stuff behind the GPL curtain.

      Once you have a working system, it tends to stay working for quite some time. Sure, if you intend to use hardware for 5-10 years, aiming for ultra-stable in-kernel drivers might be a thing. But 5-10 years of service life for "gaming" systems is not really common, and 2-3 years is usually far less of a problem.
  23. The drivers are much, much better than they were a few years ago. They are still worse than NVIDIA's in a lot of cases, and they do crash in some games. Part of the reason for that is crappy games testing heavily only with the NVIDIA drivers; part is buggy drivers.

      Basically, especially with Vulkan and modern DirectX, the driver takes a much more hands-off approach and lets the developer micromanage a lot of details for speed. But those APIs also remove the safety net and verification that OpenGL provided (which slowed things down). So if a game developer/engine designer makes a mistake, it leads to hard crashes now. And as NVIDIA usually gets more testing resources due to market share (see e.g. the Steam Hardware Survey, https://store.steampowered.com/hwsurvey/videocard/ ), more of those bugs get squashed.
  24. I wonder why viewers do not offer to "wear" one of those, if they are detected as missing after some short timeout? Or is this not detectable by a viewer?