
gwynchisholm

Resident
  • Posts: 296
  • Joined

Everything posted by gwynchisholm

  1. Enjoying Christmas-themed dance music and discovering new bugs. Found that if you take a snapshot, don't save it, and just close that window, it'll leave SL stuttering every second, with sub-1fps lag spikes. That's fixed by taking another screenshot and saving it. I was able to replicate this a bunch of times, but Nic could only do it once.
  2. There are a lot of problems with the mobile version of the forum in my experience, like getting stuck in quote boxes and not being able to edit outside of them, or not being able to "select all." We also don't have a trash button like other Invision forums do on mobile, which means that since you can't select all, can't tap out of quote boxes, and can't delete everything, if you need to delete a quote you have to refresh the page.
  3. There just aren't many of them; they're definitely not a popular GPU. Oddly enough I see more used A380s than 770s. I think a lot of people buy the A380 because on paper it's the perfect GPU for an older PC: low wattage, low-profile options, 6gb of VRAM, and it usually matches the GTX 1060 6gb in performance, so it seems like a good pairing for an old office PC. Except the A380 relies on PCIe 4.0 and ReBAR, and if your system doesn't have those, the performance is cut in half (rough bandwidth math at the end of this post). That means it's not viable for pre-11th-gen Intel systems, which make up 99% of the old budget PC market. RIP.

     I have an A380 on my GPU shelves because I got it on launch day just to see what it's all about, and on that launch day the one thing it did exceedingly well was Minecraft; nothing else even loaded. It's hiding back there next to the Eclaro.

     Battlemage will be interesting. They've put a lot of consistent effort into the drivers, and the card is far more capable than it was at launch; in some specific scenarios it hits well above its tier. I get 120fps in 4K at max settings in Halo MCC across every title except Halo 4. A recent driver update claimed "up to a 750% uplift in performance in halo MCC" and they were absolutely not lying: it tripled my average framerate and removed all stutter. For reference, that's higher framerates than my 3070 Ti gets at the same settings. That's not a 3060-tier card in that scenario; it's well above it. But then the inverse happens: it gets 60fps at 4K max settings in GTA V, which is about on par with a 3060. In CS2 it matches the 3060 Ti, and in some picky older games like GTA IV or whichever Skyrim remastered edition, it does worse than a 3060.
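     For anyone who wants the napkin math behind the "cut in half" claim, here's a rough sketch. The per-lane speeds are standard PCIe figures; the main assumption doing the work is the A380's x8 link (ReBAR is a separate factor I'm not trying to model here):

```python
# Napkin math: link bandwidth for an x8 card like the A380 on PCIe 3.0 vs 4.0.
# Per-lane numbers are approximate usable throughput, not raw line rate.
GB_PER_S_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969}
LANES = 8  # assumption: the A380 exposes an x8 link, so a 3.0 board halves its bandwidth

for gen, per_lane in GB_PER_S_PER_LANE.items():
    print(f"{gen} x{LANES}: ~{per_lane * LANES:.1f} GB/s")

# PCIe 3.0 x8: ~7.9 GB/s
# PCIe 4.0 x8: ~15.8 GB/s
```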
  4. That's in 4K, at absolute max settings in some of those shots. The preset slider doesn't update based on what the rest of the settings are. Arc definitely doesn't understand "power efficiency" in the slightest, but that performance is pretty much in line with its equivalents like the 3060 Ti and RX 6600 XT if you were to set everything to their max settings in 4K.

     Dropping shadows to 1x scaling helps a lot, and not playing in 4K to begin with can also help depending on the area. I've found some places don't care about resolution at all, while in others going from 1080p to 4K can cut the framerate by three quarters (rough pixel math below).
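     That framerate drop tracks roughly with raw pixel count. Nothing SL-specific here, just resolution arithmetic:

```python
# 4K pushes ~4x the pixels of 1080p, which lines up with a framerate cut
# of roughly three quarters in areas that are fill-rate bound.
pixels_1080p = 1920 * 1080   # 2,073,600
pixels_4k = 3840 * 2160      # 8,294,400
print(pixels_4k / pixels_1080p)  # 4.0
```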
  5. For others finding this later, regarding what happened above: using a RAM disk for SL's cache is such an absurdly overkill method for fast storage that it's almost difficult to explain. This is like saying "my Corolla has a bad time on the expressway, better buy a fighter jet." Any modern SSD is more than fast enough for SL's cache; anything faster is very far into the land of diminishing returns (rough throughput comparison below). What kind of M.2 SSD is pulling sub-600 MB/s? Is that an mSATA drive or something?
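     To put some rough numbers behind "diminishing returns": here's how long it would take to read a hypothetical 10 GB cache sequentially. The cache size and the drive speeds are illustrative ballpark assumptions, not measurements, and real cache access is small random reads, which makes the RAM disk advantage matter even less:

```python
# Time to read a hypothetical 10 GB cache sequentially at class-typical speeds.
CACHE_GB = 10  # illustrative assumption, not SL's actual cache size
typical_read_gb_per_s = {
    "SATA / mSATA SSD": 0.55,
    "NVMe Gen3 SSD": 3.0,
    "NVMe Gen4 SSD": 7.0,
    "RAM disk": 20.0,
}
for drive, speed in typical_read_gb_per_s.items():
    print(f"{drive}: ~{CACHE_GB / speed:.1f} s to read the whole cache")
```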
  6. LL does not have the time, money, staffing, or hardware to port this entire game over to UE5 without losing content. So while other game engines are extremely capable, SL is kinda stuck where it is due to the nature of the game: it's locked into supporting legacy content because it's all legacy content.
  7. It's also worth mentioning how much people impact VRAM. I don't normally go here, but London City always has a bajillion people in it; I got Nic to hop on for a second to add to it, and yeah, you can zip past 12gb of VRAM like this without even trying.

     I don't know why you would play like this, though. The higher-detail shadows look great in photos, but for just playing normally I usually keep it at 1x. If you were playing like this for whatever reason and you had a 12gb card, compressing the textures or limiting them to 512x512 would potentially reduce your VRAM usage enough to not run out (which tanks performance), but even then I'd just turn the shadows down and cut the VRAM roughly in half that way (rough texture math below).

     It's still worth noting that render distance doesn't really change VRAM usage much, though it does tank framerate in other ways, whether at 1024m or 32m.
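     For a sense of why capping textures at 512x512 helps, here's some back-of-envelope texture memory math. The uncompressed RGBA and DXT-class compressed sizes are standard figures; the per-scene texture count at the end is purely an illustrative assumption:

```python
# Rough GPU memory for one square texture with a full mipmap chain.
# Mipmaps add about 1/3 on top of the base level.
def texture_mb(side, bytes_per_px):
    base = side * side * bytes_per_px
    return base * (4 / 3) / (1024 ** 2)

for side in (1024, 512):
    uncompressed = texture_mb(side, 4)   # RGBA8, 4 bytes per pixel
    compressed = texture_mb(side, 1)     # DXT5-class, ~1 byte per pixel
    print(f"{side}x{side}: ~{uncompressed:.1f} MB raw, ~{compressed:.1f} MB compressed")

# 1024x1024: ~5.3 MB raw, ~1.3 MB compressed
# 512x512:   ~1.3 MB raw, ~0.3 MB compressed
# A hypothetical scene with 500 unique 1024x1024 textures is ~2.6 GB raw;
# capping them at 512x512 cuts that to ~0.65 GB.
```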
  8. This definitely would help if you were tapping out your VRAM, so I would recommend the same to people running out of video memory, but if you have more VRAM than that I don't think you need to worry about it.

     Another point worth mentioning: yes, you can easily tap 12gb of VRAM if you're playing in 4K in some places, and most of it has to do with shadows and such. Examples:

     • 4K max settings, shadows at 1x detail scaling: 4.3gb of VRAM used. Nothing crazy, looks nice though.
     • 4K max settings, shadows at 4x detail scaling: over 10gb of VRAM. It really only looks better up close, imo.
     • 4K max settings, 4x shadows, but with lossy texture compression: 9.5gb of VRAM. There's definitely less being used, so if you needed to save 0.5 to 1gb of VRAM I could see this being helpful, and considering I'm most commonly seeing my GPU at 9-11gb anyway, for 12gb cards this could be very beneficial.

     I have a 16gb A770, so I don't really hit the limit of my VRAM, but I've definitely seen it get over 12gb in some locations.

     Here's another cool bit I didn't actually expect: lowering render distance doesn't really impact VRAM usage much at all; it's mostly using VRAM for the shadows and lighting. Here's 32m render distance after a client restart and a cleared cache, just for posterity's sake: still 9.5gb of VRAM. That's roughly in line with what I expected, that the textures aren't actually using that much video memory, although compressing them can still help. It's not the textures using 8gb+ of video memory, they're maybe using 2gb at worst; most of it is lighting (a rough shadow-buffer sketch below).
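     A very rough sketch of why the shadow detail multiplier dominates those numbers. I'm assuming the shadow buffers scale with the square of the detail multiplier, and the base buffer size is a made-up placeholder rather than a measured SL value, so treat the growth rate as the point and the 1x-vs-4x measurements above as the real data:

```python
# Toy model: if shadow buffer memory scales with the square of the detail
# multiplier, a 4x setting costs 16x the memory of 1x.
BASE_SHADOW_GB = 0.4  # hypothetical shadow buffer memory at 1x, not measured

for scale in (1, 2, 4):
    print(f"{scale}x detail scaling: ~{BASE_SHADOW_GB * scale**2:.1f} GB of shadow buffers")

# 1x: ~0.4 GB, 2x: ~1.6 GB, 4x: ~6.4 GB -- quadratic growth is roughly consistent
# with the jump from ~4.3 GB to 10+ GB total in the examples above.
```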
  9. Christmas-related places and events, because it's December and I can only run from Mariah Carey for so long before she inevitably catches up with me after defrosting.
  10. This is a fun thread to see necrobumped, but I'll hop in anyway. This is something I've tested a little, ages ago, and in my experience it's wildly different depending on the hardware you use to make the comparison.

     Most of what makes a Quadro a professional card is that its drivers and the hardware itself are certified for stability. It's not that it's inherently more capable; you're paying more for the name because NVIDIA is putting their certification behind it and telling you it will work with 100% reliability in certain applications. A GTX 980 will have no problems rendering in Blender. But a Quadro M6000, along with being a bit more suited for it with more video memory (more on that in a moment), is guaranteed to work without any problems in Blender. For big companies that's important, and that's what they pay for. It's why you'll usually see consumer equivalents offered alongside professional cards when speccing out something like a prebuilt workstation. They're asking "do you need or want certification?" more than they're asking whether you want a different performance tier.

     And as I mentioned, they tend to have more video memory, because most professional applications will, on average, use more of it. Something at the tier of a GTX 980, in the context of gaming, probably doesn't need more than 6gb of video memory, because most games weren't using VRAM like that at the time. But its Quadro equivalent came in 12 or 24gb variants, because even if the GPU wasn't that much faster on a comparative scale, the applications it would be used for could use that kind of video memory. Right tool for the right job.

     But SL does still see benefits from workstation GPUs. I think a lot of it has to do with how it uses video memory and, as you mentioned, how it shares a lot of attributes with 3D software rather than games. The difference is small and variable in my limited testing, which is why I kind of leave it at that. I don't have the money to be tossing around at modern high-end workstation cards, and I don't recommend anyone else do that either, because unless you have some other primary use for it, spending thousands of dollars on a GPU for SL is absurd.

     My testing was limited to a few generations: Tesla (the architecture), the Fermi refresh, and early Maxwell. Tesla's comparison was the consumer 8800 series vs the Quadro FX line; the Quadros performed better by a thin margin, except the FX 5600 wiped the floor with everything else for one crucial factor: 1.5gb of VRAM vs 768/896mb. That nullified the test at the time, since VRAM was a limit and one card simply had a higher limit. For Fermi, it was a Tesla C2050, a Quadro 5000, and a GTX 570. Once again a similar story: the Tesla beat the rest because it had 3gb of VRAM, and this time by a massive margin (relatively speaking) of 10-15fps higher on average in almost every scenario. But then we get to Maxwell, where VRAM was no longer a limiting factor at 1080p at least, and the differences dwindled: the GTX 980 vs the Titan X vs the M6000. The Titan X was just faster overall, but the M6000 and the 980 were within margin of error of each other. And that was a 6gb 980 vs a 24gb M6000; neither tapped out its VRAM, so having so much didn't matter. They performed very similarly.

     A lot of that comes down to this still being an OpenGL game. There's only really so much that can be done with it, and hardware optimization for OpenGL is just a built-in, taken-for-granted feature of any GPU made in the last 20 years. There aren't really any optimizations for OpenGL that haven't already been done and aren't already in effect. It's why my Intel Arc A770 plays SL fine, and about the same as it did on launch day: while Intel has been freaking out trying to get DX9/10/11 working smoothly on it, and optimizing Vulkan and such, OpenGL has just been fine and stayed the same. Swapping the hardware won't change that, regardless of driver and hardware optimizations, which is what workstation cards are really selling.
  11. Any wearable object that's larger than maybe a 10m cube. Or at least they should be hidden by default unless you choose "show oversized attachments" or something. Making the mistake of standing at a social island for more than 3 minutes before someone goes nuts with a fullbright white block the size of the entire pavilion and starts it spinning.
  12. I posted this I think just this Sunday, a screenshot from Saturday night. In a virtual world you can watch the sunrise or sunset at any time of day, which is convenient for me because I work 2-10 and you won't catch me awake early enough to watch the sunrise IRL.
  13. SL is actually what got me into internet social worlds and culture ages ago. Around 2008ish I was just a dumb kid who shouldn't have been here, but that year I discovered a ton of different social games at once. And while I didn't stick with SL because my computer was a toaster, SL was the first place I tried, and I really liked it and wanted to do more of whatever that was. I ended up playing SmallWorlds, Roblox, Habbo, and RuneScape, and those are what I know best from that era. But from that same time there's presumably an ancient Second Life account with an hour of playtime out there somewhere, maybe with a chatlog where I talked to some strangers on the internet about MST3K before my laptop thermal throttled itself into crashing the client.

     I've come back to and left SL on and off again over the years, but it's definitely still the first. In my eyes it's kind of the baseline, the always-been-there-and-always-will-be example of a social game. Kind of how people compare any roll-and-stall dark fantasy RPG to Dark Souls: it's the example, the reference you compare to. SL is that to me for any other social game, even if there are others just as old or older that have existed alongside it.
  14. That font on the dark theme that is physically painful to read: dark purple, italic, fancy font on the dark-mode background. Light mode is not any better.
  15. I don't know how to describe how much this song means to me; discovering it, I think, will be remembered as a kind of life landmark and a turning point. The vocals are by Bernard Sumner of New Order.
  16. That reminds me: click-to-interact/emote tails. I always forget they're a thing and never disable click-to-touch, and then I'll go somewhere and some random person will practically take an autoclicker to my tail and spam up chat. Just begone with the entire concept; if you want click emotes, fine, but they shouldn't be such a prevalent default.
  17. After a new battery is installed, you basically want to fully charge and then fully drain it a few times in a row, so the system learns the battery's various charge states.

     Between the CPU and the heatsink is thermal paste, a thermally conductive compound that allows for better heat transfer to the heatsink. The CPU and heatsink contact surfaces are both imperfect; the thermal paste fills in the microscopic air gaps. It's typically a metal-oxide or ceramic-filled paste, and it tends to dry out over time; dry thermal paste makes for a poor thermal conductor. Thermal pads are the same concept but for lower heat and wider gaps: a foam or fiber pad impregnated with thermal compound. They will usually be on particularly hot power delivery components, memory, and sometimes the integrated graphics portion of CPUs with a dual-die design. They can also dry up with age and perform worse.

     So my guess is that something is getting too hot, and the system is limiting power delivery to avoid damaging itself. It may be the CPU, it may be the power delivery components around the CPU, or it may be the battery or the power delivery to or from the battery.
  18. Got my i5 one out for reference: same generation, different layout, but the idea is the same. These are not hard MacBooks to get into, at least to figure out what's going on; no weird clip tabs or glue like later models, at least on the board half. But if you're not comfortable with the idea of doing it yourself, it's a pretty quick and easy job for any computer repair center, and it's a lot friendlier of a device to work on than, say, the M1/M2/M3 MacBooks are.
  19. You may have a faulty battery, or that battery may not have been conditioned properly after it was installed. Under load, the system is going to draw as much power as it normally can; SL in any viewer will absolutely max out a Crystalwell quad-core i7 and draw as much power as the system will let it. Getting an error that your system can't charge faster than it's discharging means power delivery degradation or battery degradation. Or it may be throttling its power delivery due to being too hot; that can be caused by a variety of things, but redoing the thermal paste/pads on the CPU and power delivery components can help.

     This is a hardware issue; no viewer or settings will fix it. You need to get your laptop serviced if you're not down to do that kind of work yourself, though the 2015 MBPs are fairly easy to work on as far as MacBooks go. I have a 2015 i5 Retina MBP, and it's absolutely a little toaster oven if the paste has dried up or if there's a contact issue.