
Kathrine Jansma

Everything posted by Kathrine Jansma

  1. TextureTest2 looks useful for tuning the pipeline. But I guess I need a bigger network pipe, as it can saturate my 100 Mbit/s link but not the CPU when running with 16 texture decode threads in the Cool VL Viewer and forcing full-size texture loads.
  2. The hard part with benchmarks is usually coming up with useful metrics AND making the measurement halfway reliable and repeatable. With around 500 functions, which ones do you care about? Sometimes performance is a little surprising. E.g. I tried to script a system to detect objects based on proximity, wind etc. (basically a primitive smell simulation with a bit of Monte Carlo sampling thrown in). I assumed it would be much faster to detect statically named objects in range via llSensor(), as that's basically a trivial geometry query with a static condition, which should be super fast if backed by any kind of database with a spatial index (e.g. https://en.wikipedia.org/wiki/SpatiaLite ). It turns out llSensor(fixed_name, ...) is so high on latency that it is around 10x more efficient (latency-wise) to drop a script with a listener into every single target instead, send out static pings and listen for the echoes. Which is sad: script spamming for no good reason. Other parts of the code create random numbers for a Monte Carlo simulation and get starved on llFrand() performance (and would love a CSPRNG like /dev/urandom), so they would see improvements there. Another part does vector manipulation and distance calculations for many objects, which would really benefit from SIMD optimizations; if the server JITed those loops to use AVX it would be awesome. But that is a totally different performance metric again. I'm surprised the script works at all, but it is super lag/script-performance sensitive.
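For reference, the kind of CSPRNG-backed Monte Carlo loop I mean looks roughly like this outside of LSL (a Python sketch; `secrets` plays the role of /dev/urandom here, and the pi estimate is just a stand-in for the sampling workload, not anything from the actual script):

```python
import secrets

def csprng_uniform() -> float:
    """Uniform float in [0, 1) drawn from the OS CSPRNG (like /dev/urandom)."""
    return secrets.randbits(53) / (1 << 53)

def monte_carlo_pi(samples: int = 100_000) -> float:
    """Toy Monte Carlo estimate of pi: the kind of tight sampling loop
    that gets starved when each random draw is expensive."""
    hits = sum(
        1
        for _ in range(samples)
        if csprng_uniform() ** 2 + csprng_uniform() ** 2 < 1.0
    )
    return 4.0 * hits / samples
```

With 100,000 samples the estimate typically lands within a couple of percent of pi; the point is that the RNG call dominates the loop, which is exactly where a fast CSPRNG would help.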
  3. TextureTest2 seems inaccessible. TextureTest works, MeshTest2 too, but I could not TP to the TextureTest2 region on the Beta grid.
  4. No shadows = faster. Enable Vertex Buffer Objects is on for all of them; disabling it makes things slower. That is with just my AV in sight. The 9/20 fps figures are the maximums in a club with around 20 AVs wearing lots of rigged mesh attachments.
  5. I'm all ears if you can tell me why it is that bad. In benchmarking tools the card hits the expected range, so it's not fully broken. To compare FPS I did something minimal: just logging in at my home location, default view, with quite similar settings in the current Performance Viewer, Firestorm 6.4.21 and the current Cool VL Viewer, all on Windows 10 Pro with the latest AMD drivers for the card, a Ryzen 2700X without overclocking, at 2560x1440 resolution. The results varied quite a bit: the Performance Viewer got around 15-20 fps in that setting, Firestorm clocked in at around 75 fps, and the Cool VL Viewer around 80 fps.

Performance Viewer:
Second Life Project Viewer Performance Improvement 6.4.24.565324 (64bit)
Release Notes
You are at 55.1, 47.0, 211.2 [...]
Second Life Server 2021-10-25.565008
Release Notes
CPU: AMD Ryzen 7 2700X Eight-Core Processor (3699.99 MHz)
Memory: 32694 MB
OS Version: Microsoft Windows 10 64-bit (Build 19042.1288)
Graphics Card Vendor: ATI Technologies Inc.
Graphics Card: Radeon RX Vega
Windows Graphics Driver Version: 30.0.13033.1000
OpenGL Version: 4.2.14761 Core Profile Context 21.11.1 30.0.13033.1000
Window size: 2560x1377
Font Size Adjustment: 96pt
UI Scaling: 1
Draw distance: 128m
Bandwidth: 9000kbit/s
LOD factor: 1.125
Render quality: 5
Advanced Lighting Model: Enabled
Texture memory: 512MB
VFS (cache) creation time: November 09 2021 09:37:23
J2C Decoder Version: KDU v7.10.4
Audio Driver Version: FMOD Studio 2.01.07
Dullahan: 1.12.2.202109230751
CEF: 91.1.21+g9dd45fe+chromium-91.0.4472.114
Chromium: 91.0.4472.114
LibVLC Version: 3.0.9
Voice Server Version: Vivox 4.10.0000.32327
Packets Lost: 11/6,681 (0.2%)
November 09 2021 09:47:31

Firestorm:
Firestorm 6.4.21 (64531) Jul 21 2021 21:00:53 (64bit / SSE2) (Firestorm-Releasex64) with Havok support
Release Notes
You are at 55,1, 47,0, 211,2 in [...]
Second Life Server 2021-10-25.565008
Release Notes
CPU: AMD Ryzen 7 2700X Eight-Core Processor (3699.99 MHz)
Memory: 32694 MB
Concurrency: 16
OS Version: Microsoft Windows 10 64-bit (Build 19042.1288)
Graphics Card Vendor: ATI Technologies Inc.
Graphics Card: Radeon RX Vega
Graphics Card Memory: 8176 MB
Windows Graphics Driver Version: 30.00.13033.1000
OpenGL Version: 4.6.14761 Compatibility Profile Context 21.11.1 30.0.13033.1000
RestrainedLove API: RLV v3.4.3 / RLVa v2.4.1.64531
libcurl Version: libcurl/7.54.1 OpenSSL/1.0.2l zlib/1.2.8 nghttp2/1.40.0
J2C Decoder Version: KDU v8.1
Audio Driver Version: FMOD Studio 2.01.09
Dullahan: 1.8.0.202011211324
CEF: 81.3.10+gb223419+chromium-81.0.4044.138
Chromium: 81.0.4044.138
LibVLC Version: 2.2.8
Voice Server Version: Vivox 4.10.0000.32327
Settings mode: Firestorm
Viewer Skin: Firestorm (Grey)
Window size: 2560x1377 px
Font Used: Deja Vu (96 dpi)
Font Size Adjustment: 0 pt
UI Scaling: 1
Draw distance: 128 m
Bandwidth: 500 kbit/s
LOD factor: 2
Render quality: High-Ultra (6/7)
Advanced Lighting Model: Yes
Texture memory: 2048 MB (1)
Disk cache: Max size 2048.0 MB (2.5% used)
Built with MSVC version 1916
Packets Lost: 1/2.917 (0,0%)
November 09 2021 09:52:43 SLT

And the Cool VL Viewer (self-built):
Cool VL Viewer v1.28.2.46, 64 bits, Nov 6 2021 15:29:31
RestrainedLove viewer v2.09.29.24
Release notes
You are at [...]
Second Life Server 2021-10-25.565008
Release notes
CPU: AMD Ryzen 7 2700X Eight-Core Processor (3699.99 MHz)
Memory: 32694MB
OS version: Microsoft Windows 10 64 bits v10.0 (build 10586.1288)
Memory manager: OS native
Graphics card vendor: ATI Technologies Inc.
Graphics card: Radeon RX Vega
Windows graphics driver version: 30.00.13033.1000
OpenGL version: 4.2.14761 Compatibility Profile Context 21.11.1 30.0.13033.1000
Detected VRAM: 24467MB
J2C decoder: OpenJPEG: 1.4.0.635f
Audio driver: FMOD Studio v2.02.03
Networking backend: libcurl 7.64.1/OpenSSL 1.1.1l/zlib 1.2.11.zlib-ng/nghttp2 1.43.0
Embedded browser: Dullahan 1.12.3/CEF 91.1.21/Chromium 91.0.4472.114
Packets lost: 13/5669 (0.2%)
Built with: MSVC v1916
Compiler-generated maths: AVX2.
Compile flags used for this build: /O2 /Oi /DNDEBUG /D_SECURE_SCL=0 /D_HAS_ITERATOR_DEBUGGING=0 /guard:cf /GS /Qpar /GL /DWIN32 /D_WINDOWS /W3 /GR /EHsc /std:c++14 /EHs /arch:AVX2 /fp:fast /MP /TP /W2 /c /nologo /GS /Zc:threadSafeInit- /DLL_WINDOWS=1 /DUNICODE /D_UNICODE /DWINVER=0x0601 /D_WIN32_WINNT=0x0601 /DXML_STATIC /DLL_PHMAP=1 /DLL_IOURING=1 /DBOOST_ALL_NO_LIB /DLL_FMOD=1 /DAPR_DECLARE_STATIC /DAPU_DECLARE_STATIC /DCURL_STATICLIB=1 /DLL_NDOF=1
  6. A small part of the fixes to rigged mesh rendering was ported to the Cool VL Viewer, and the effect was similar for my lackluster AMD RX Vega 56 on Windows: FPS in a club with lots of people doubled from 9 fps to 17-20 fps. A massive quality-of-life improvement. But Henri got something like 70-80 fps in the same club with an NVIDIA 1070 Ti and the same settings on Linux, which shows how abysmally bad the AMD OpenGL drivers are.
  7. While it might seem obvious that prepping and optimizing upfront has huge benefits, does that hold up when you try it? With a modern I/O pipeline and multiple cores, you should be able to stream the necessary data into the rendering pipeline in real time (at least on higher-spec systems), e.g. by having io_uring or similar APIs feed a ton of parallel requests to the SSD that usually sits idle due to inefficient serialized request patterns. That obviously doesn't help with occlusion maps etc., but how about JITting that part? E.g. have one viewer compute it and offer it to the server for caching, with other viewers opting in to just download and use it. Do you have any numbers on how many MB/s or objects/s one needs from the I/O and texture-decoding subsystems for a smooth experience?
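The "keep many requests in flight instead of serializing them" idea can be sketched without io_uring; here a plain thread pool stands in for it (a Python sketch under that assumption; `read_asset` and `parallel_fetch` are hypothetical names, not viewer code):

```python
from concurrent.futures import ThreadPoolExecutor

def read_asset(path: str) -> bytes:
    """One blocking asset read; with a pool, many of these overlap."""
    with open(path, "rb") as f:
        return f.read()

def parallel_fetch(paths, workers: int = 16):
    """Keep up to `workers` reads in flight so the SSD's queue never
    drains, instead of issuing one serialized request at a time.
    Results come back in the order of `paths`."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_asset, paths))
```

On an NVMe SSD the difference between queue depth 1 and queue depth 16 is often the difference between a few hundred MB/s and saturating the drive, which is the point of the io_uring-style pipeline.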
  8. The authentication isn't Microsoft Auth or something proprietary but the TOTP standard at work: https://en.wikipedia.org/wiki/Time-based_One-Time_Password or, for a more graphic explanation, try https://www.allthingsauth.com/2018/04/05/totp-way-more-secure-than-sms-but-more-annoying-than-push/ While it would be nice to have other, additional options, TOTP is technically one of the safer options that does not require any privacy-invading apps, phone numbers or similar. WebAuthn would be another step up in security. If your country has a nice, working eID implementation, that's cool (and sadly still rare worldwide). But LL probably does not target all the fragmented eID ecosystems in all the countries SL works in, especially not smaller countries as a priority. In addition, some countries have hilariously bad national eID programs. Take Germany with its citizen card, the "Personalausweis", which is technically very advanced and really good for eID, but hampered by stupidly complex requirements for anyone trying to use it. If you, as a provider, wanted to use the German eID functionality, you would have to register with the federal office, buy an expensive license, set up a specialized service etc., all amounting to costs of at least 50.000 €/year or more just to offer it at all, according to some estimates. So if other countries have similar programs, it would be quite expensive to offer such a specialized service for every country SL works in.
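The TOTP algorithm itself is tiny; RFC 6238 boils down to an HMAC-SHA1 over the current 30-second time step plus dynamic truncation (a minimal Python sketch of the standard, not LL's implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    then RFC 4226 dynamic truncation to a short decimal code."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59 this yields "287082", matching the published test vectors, which is why any standard authenticator app can generate the same codes.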
  9. It makes obvious sense when you think about it: the number of draw calls might scale linearly, but the visible effect doesn't.
  10. Nice as well, but I'm trained in physics, so not scared of graphs and statistics. That's quite a significant FPS drop in your intro text. Does it scale linearly, like 2x Maitreya users => 81 FPS, 3x => 69, etc.? Otherwise it looks pretty conclusive from the numbers and graphs.
  11. You could probably get it to work with liberal use of things like NTFS reparse points (junctions), but performance over network drives tends to be atrocious (unless you are talking about very low-latency 40 Gbit/s Mellanox cards and good SANs), so it's just not worth it. If you still have spinning rust (aka HDDs), just get any SSD. Always better.
  12. More memory is always pretty nice. I might also wish for some efficient shared memory between scripts in the same object, e.g. allowing one script to "export" a read-only list to the other scripts of a set, to avoid tons of messages and duplication back and forth. That could simplify some things even if the 64 KB limit of writeable memory per script survives.
  13. Excellent read; really curious about the numbers, even if all benchmarks are advanced exercises in lying with statistics. I guess there is no real chance to get the current render pipeline to use something like glMultiDrawElementsIndirect() to optimize the batches for those bodies, short of rewriting the whole stack for Vulkan? But I guess the macOS (non-)support for anything OpenGL means a solid no, as it stops at OpenGL 4.1.
  14. If you have a Windows 11 or Apple macOS box, the viewer could simply do some FIDO/WebAuthn authentication based on the device TPM and you are mostly done for 2FA, without complex extra user input. Throw in TOTP tokens (aka Google Authenticator) for people without phones, or email codes and app push/SMS for the people that really want to use a phone because they are used to it. But using MFA/2FA for simple logins is probably total overkill anyway. The tech part is easy; the hard part is doing proper tech/user support and preventing the two horrible scenarios of 2FA: attackers fast-talking support into doing a password/2FA reset with just a single factor left, and being locked out of the account due to failing hardware tokens or a lost phone number/email address without any reasonable way to claim it back.
  15. @Monty Linden Any plans to run the CDN with QUIC/HTTP/2/3 instead of old HTTP/1.1 pipelining? It does not serve https://, so the default HTTP/2 negotiation obviously does not work yet. With the newish multithreaded texture decoders the fetching can be a bottleneck, so some more concurrency without head-of-line blocking would be nice to have.
  16. In most viewers a single thread fetches and decodes textures. But some viewers have a multithreaded decoder by now (the Cool VL Viewer, and I saw the change in the Firestorm repos too, though not yet released), which can speed up texture loading & rezzing quite a bit. Those viewers have settings to adjust the number of threads used for texture decoding.
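Conceptually the decode side is just a worker pool with a configurable thread count (a Python sketch; zlib stands in for the real JPEG2000 decoder, and none of this is actual viewer code):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decode_texture(blob: bytes) -> bytes:
    """Stand-in for a JPEG2000 decode. zlib releases the GIL during
    decompression, so Python threads actually run it in parallel."""
    return zlib.decompress(blob)

def decode_all(blobs, threads: int = 4):
    """Decode fetched texture blobs on a pool whose size is the
    'number of texture decode threads' knob those viewers expose."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(decode_texture, blobs))
```

The single-threaded baseline is `threads=1`; raising the knob helps until either the CPU or the network fetch becomes the bottleneck, which matches what the TextureTest experiments above show.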
  17. It's called Tor. And if you look at the shenanigans that come out of the browser customization needed to keep it safe, it is pretty much an uphill battle if you want to enable interactive stuff like JavaScript and external resources. The other way you could do it would be an app-store-style walled garden with compiled apps that simply lack the APIs to get close to anything identifying. And watching the whole supercookie, canvas fingerprinting and HSTS fingerprinting efforts of the ad industry, you can basically forget that as well: you do not need the IP to identify a machine/user. Once you have device-dependent timings, you have lost. So, if one really wanted it, one could do a kind of "streaming texture", where you attach something like a VNC/RDP data stream to a prim texture and have the browser run headless on some common cloud service, rendering its output to the VNC display (think Google Stadia/GeForce Now or similar concepts). But that has obvious costs attached, as the VNC setup would need to be run by a trusted third party.
  18. It would have helped me back in the day, when I spent around 2 hours trying to unpack my first bought outfit. So a modern help system would be okay. It's kind of adding a little of what makes Electron apps attractive to people: use web tech for the boring UI tasks and have it look pretty. And lock it down to just a few allow-listed origins.
  19. Looking at the OpenSSL 1.1 patch I really wonder a bit about the compile options (and really hope it is 1.1.1, as plain 1.1 is out of support as well...):

# disable idea cypher per Phoenix's patent concerns (DEV-22827)
perl Configure "$targetname" no-asm no-idea zlib threads -DNO_WINDOWS_BRAINDEATH \
--with-zlib-include="$(cygpath -w "$stage/packages/include/zlib")" \
--with-zlib-lib="$(cygpath -w "$stage/packages/lib/release/zlib.lib")"

zlib here is kind of "YES, give me CRIME attacks". Most Linux distros disable TLS compression and LL should too; it comes with a nice CVE (https://nvd.nist.gov/vuln/detail/CVE-2012-4929 ), so better build with no-comp. Even if TLS compression works around the lack of compression support on the HTTP layer, textures should be highly compressed anyway, and you want HTTP/2 with its efficient header compression in the end. no-asm is also not really a smart thing to do, as it disables AES-NI support, which costs a lot of CPU time (see https://www.openssl.org/docs/faq.html under "Does OpenSSL use AES-NI or other speedups?"). LL could also be way more liberal in disabling dead algorithms and unused stuff (e.g. Triple-DES, RC4 and a ton of others you do not need). No one needs SSLv3 anymore, and while at it, kill the TLS 1.0 & TLS 1.1 defaults as well (https://datatracker.ietf.org/doc/html/rfc8996).
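Put together, a hardened variant of that Configure call might look like this (a sketch only, assuming OpenSSL 1.1.1; whether the viewer build still needs the zlib hooks elsewhere, and whether every disabled cipher is truly unused, would have to be checked against the actual build scripts):

```shell
# Sketch: keep no-idea, drop no-asm (re-enables AES-NI and the other asm
# speedups), add no-comp (disables TLS compression, i.e. CRIME/CVE-2012-4929),
# and disable SSLv3, TLS 1.0/1.1 and dead ciphers per RFC 8996.
perl Configure "$targetname" no-idea no-comp no-ssl3 no-tls1 no-tls1_1 \
    no-des no-rc4 no-weak-ssl-ciphers threads -DNO_WINDOWS_BRAINDEATH
```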
  20. Well, WebAuthn claims to be phishing resistant, and it actually helps a bit for people that have some amount of common sense left. There are still some gullible people that fall for ANY phishing attempt, but that's unfixable. https://i.blackhat.com/USA-19/Thursday/us-19-Brand-WebAuthn-101-Demystifying-WebAuthn.pdf
  21. Dual passwords are basically longer passwords. So why not simply require longer passwords?
  22. Doing anything with the username is worthless for other reasons too. Let's imagine you use email instead, like all the other sites. Great! You just enabled trivial password-spraying attacks. So the only benefit of username = in-world name is that attacks on a specific user's account get a tiny bit harder. But such an attack fails anyway if the password is strong enough, so there is no real benefit, which is basically https://en.wikipedia.org/wiki/Kerckhoffs's_principle And if your password is weak, no username trickery will save anything for long.
  23. Actually, current best practice is moving away from password complexity rules towards other measures: use a long and unique password for each service and do not enforce any complexity rules. That's at least the recommendation of the US NIST, the German BSI and the UK's NCSC. Mandatory XKCD: https://xkcd.com/936/
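The arithmetic behind that recommendation is simple: entropy grows linearly with length but only logarithmically with the symbol pool, so length wins (a Python sketch; the 95-symbol printable-ASCII pool and 2048-word list are illustrative choices, the latter being the one the xkcd comic uses):

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for `length` independent random choices
    from a pool of `pool_size` symbols: length * log2(pool_size)."""
    return length * math.log2(pool_size)

complex_pw  = entropy_bits(95, 8)    # random 8 printable chars: ~52.6 bits
passphrase  = entropy_bits(2048, 4)  # 4 words from a 2048-word list: 44.0 bits
long_phrase = entropy_bits(2048, 6)  # 6 words: 66.0 bits, beats the 8-char case
```

The catch the comic points out is that humans do not pick those 8 characters randomly (predictable patterns drop the real-world figure far below 52 bits), while adding two more words to a passphrase reliably adds 22 bits.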
  24. 2FA done right is a good thing, no doubts there. But most 2FA is done in a terrible way. A few of the really common pitfalls:
  • Offer SMS only (or worse: demand SMS to a mobile phone only).
  • Offer a push-TAN app on certain proprietary, non-jailbroken mobile phones only.
  • Use TOTP from RFC 6238 but modify it slightly so only your own app can generate the codes.
  • Have a support hotline that ignores 2FA and just asks some trivial knowledge-based questions to recover credentials.
  • Have a password recovery process that just asks for the 2FA key and common knowledge (turning 2FA into a single factor again).
  • Try to do WebAuthn and have users run away from the complexity of setting things up properly.
  • Allow only a single hardware token per user for 2FA (you need at least two, or the password recovery process ends up as 1FA again; a single token only works in a corporate environment where IT can establish identity out of band).
  • Have super-aggressive timeouts like 5 minutes between 2FA prompts (e.g. shop on the marketplace and get asked for a 2FA code at login and again 5 minutes later at checkout), which is what some banks do due to the PSD2 regulations in the EU.
  • Ask the payment industry to do it (you end up with junk like Verified by Visa, https://www.cl.cam.ac.uk/~rja14/Papers/fc10vbvsecurecode.pdf ).
  • Ask the government to do it (you end up with hyper-complex junk like the German eID system: safe but unusable).
  • Ask the smart card industry to do it (you end up with a smart-card-based system that needs new cards every year as a source of income).

If I were to decide on 2FA or auth system options, I would probably go with ALL of the following:
  • Offer TOTP as a good, secure baseline system: good enough, works everywhere.
  • Offer SMS or some app for the people that really want it, because people know it.
  • Offer WebAuthn for the technology-savvy people that want hardware tokens or have high values in their account. Good tech, strange crypto, still a bit of a high entry barrier.
  • Allow linking accounts to external ID providers via OpenID Connect or social logins/services in order to have a third factor for password/2FA recovery.
  • Do some basic risk-based assessment of when to ask for 2FA (e.g. logging in, buying/selling RL currency, large transactions > 25.000 L$, land changes, many account data changes).

The hard part is not really the tech. The hard parts are ease of use/UX and password recovery.