Unexpected Cache Location Changes (Maybe Windows assumptions affecting Linux viewers)


Every so often, for no apparent reason, my cache location shifts from my SSD to the conventional HDD which my viewer is installed on.

Since I am running Linux I don't expect any good answers here, but I have noticed, a couple of other times, odd behaviour which could be related to assumptions about how Windows does things buried in the program code. One instance of this is the Linux version of Firestorm ignoring my OS-level settings for cursor size.

Since my programming limit is shell scripts ("batch files", for Windows users), the suggestion that I submit a patch is rather useless.

Once or twice, it's possible that the switch of location is connected to a more-complicated-than-usual system restart, but I rarely start my viewer immediately after a system restart.

Anyway, I am assuming that, buried in the viewer code, there is a check that the cache location is working, and an automatic reversion to the default location if the check fails. On my machine, that default location is on the same drive partition that Firestorm loads and runs from. I admit I would rather have the program stop loading, with an error message, than have a change like this force a large cache to be cleared. What might be tolerable with the default cache size is questionable with the cache set to the maximum size. A lot of new textures, not currently cached, slows the frame rate. That may be a consequence of the limited multi-threading capacity of viewers: I know Firestorm shows no sign of generating a significant multi-core load on my computer.

I can think of several people who might be able to shed some light on this. And several whose replies will evoke Shakespeare: "It is a tale told by an idiot, full of sound and fury, signifying nothing." I got a lot of Shakespeare at school, but the teachers fitted that quote rather too well.


1 hour ago, arabellajones said:

One instance of this is the Linux version of Firestorm ignoring my OS-level settings for cursor size.

This will not be specific to Firestorm: the viewer cursors are actually sprites, which do not get resized according to your system (Linux desktop) cursor settings.


1 hour ago, arabellajones said:

Anyway, I am assuming that, buried in the viewer code, there is a check that the cache location is working, and an automatic reversion to the default location if the check fails.

Looking at the viewer log whenever such an occurrence happens would likely tell you (and us) what went wrong: failures to find a folder or write to it would be logged as a warning. This would allow you to pinpoint the cause of the cache folder relocation. Note that this is in no way a "Windows" coding specificity; under any OS, when the viewer cannot find the configured cache folder, it creates a new one in the default location...
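
For example, assuming the default Firestorm per-user directory on Linux (~/.firestorm; the path may differ on your install), something like this would show any cache-related warnings from the latest session:

# Hypothetical log path; adjust to wherever your viewer writes its log.
grep -n "WARNING" ~/.firestorm/logs/Firestorm.log | grep -i cache

Any line about a missing or unwritable cache folder would show up there.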


1 hour ago, arabellajones said:

A lot of new textures, not currently cached, slows the frame rate. That may be a consequence of the limited multi-threading capacity of viewers: I know Firestorm shows no sign of generating a significant multi-core load on my computer.

Try the Cool VL Viewer: while rezzing (and provided your network can keep up, i.e. you have high bandwidth), it will happily load all your CPU cores with its multi-threaded texture decoder, providing a blazingly fast rezzing experience... It is also primarily a Linux viewer, so the Linux code is especially well tailored and optimized.


3 hours ago, Henri Beauchamp said:

Looking at the viewer log whenever such an occurrence happens would likely tell you (and us) what went wrong: failures to find a folder or write to it would be logged as a warning. This would allow you to pinpoint the cause of the cache folder relocation. Note that this is in no way a "Windows" coding specificity; under any OS, when the viewer cannot find the configured cache folder, it creates a new one in the default location...

I see only the two most recent logfiles stored, so I have missed my chance this time, but I now have a good idea of what to search for. The only checks I have seen logged are one for the sound cache folder, followed by some sort of check on the files in the texture cache. There's a lot about the CPU which comes before this. This is the first mention of the SSD partition I use. The preceding run was almost the same: no WARNING line, and all the preceding lines have the same timestamp.

I don't keep the sound cache between runs (a default option in preferences), and I think that could be put in a completely different partition/location.

2022-07-07T12:30:44Z INFO #InitInfo# newview/llappviewer.cpp(1147) init : Hardware test initialization done.
2022-07-07T12:30:44Z INFO #AppInit#Directories# llfilesystem/lldir.cpp(1189) setSoundCacheDir : Setting sound cache directory: /media/dave/120GB-SSD/Dave/Firestorm_Cache
2022-07-07T12:30:45Z INFO # llfilesystem/lldiskcache.cpp(150) purge : Purging cache to a maximum of 10468982784 bytes
2022-07-07T12:30:45Z INFO #TextureCache# newview/lltexturecache.cpp(1062) initCache : Headers: 1048576 Textures size: 8344 MB
2022-07-07T12:30:45Z INFO #THREAD# llcommon/llthread.cpp(145) threadRun : Started thread PurgeDiskCacheThread
2022-07-07T12:30:45Z INFO # newview/lltexturecache.cpp(1801) purgeTextures : TEXTURE CACHE: Purging.
2022-07-07T12:30:45Z WARNING #TextureCache# newview/lltexturecache.cpp(1868) purgeTextures : TEXTURE CACHE BODY HAS BAD SIZE: 392334 != 5544/media/dave/120GB-SSD/Dave/Firestorm_Cache/texturecache/7/7c78f30b-2c3f-31df-b2a6-bcf7d92c79e6.texture
2022-07-07T12:30:45Z INFO #TextureCache# newview/lltexturecache.cpp(1895) purgeTextures : TEXTURE CACHE: PURGED: 1 ENTRIES: 42476 CACHE SIZE: 2485 MB
2022-07-07T12:30:45Z INFO #InitInfo# newview/llappviewer.cpp(1165) init : Cache initialization is done.


I get that too at times. It has to do with how the SSD is mounted. If the system thinks it is a removable device, it's not automatically mounted at boot. Accessing the SSD with the system's file manager or an open dialog seems to force a mount, but the SL viewers don't seem to do that. So if the very first use of the SSD after boot is as a viewer's SL cache, the viewer may not find it. It then falls back to the standard cache location.

If you got a small SSD to speed up SL, and only use it for SL, this is likely to happen.

If the SSD isn't physically removable, it can be added to the list of file systems always mounted at boot time, which is in /etc/fstab. This requires Linux system administration skills. See this discussion on Serverfault.


12 hours ago, animats said:

If the SSD isn't physically removable, it can be added to the list of file systems always mounted at boot time, which is in /etc/fstab. This requires Linux system administration skills. See this discussion on Serverfault.

Thanks, I had a couple of drives showing similar behaviour. I had to use blkid to get the UUID, but no big problems. I haven't tested Firestorm yet, but the other drive works OK. That ServerFault reference focused on CentOS, and some of the differences in the Ubuntu family may matter, but the basic idea looks solid.


21 hours ago, Love Zhaoying said:

Wow, that's a lot of bytes!

That's set to the maximum. If you have the space, even on a spinning metal drive, and modern disk drives are pretty big, you might as well use it. I usually run with a 32 m draw distance, as that's the range for talking, but Firestorm makes it easy to change, since that is a bit short for driving vehicles.


8 hours ago, arabellajones said:

Thanks, I had a couple of drives showing similar behaviour. I had to use blkid to get the UUID, but no big problems. I haven't tested Firestorm yet, but the other drive works OK. That ServerFault reference focused on CentOS, and some of the differences in the Ubuntu family may matter, but the basic idea looks solid.

I can now confirm that the entries in /etc/fstab have fixed things.

The instructions are nearly five years old, and I urge you to check the man page for fstab. You are now strongly recommended to use the UUID to reference the drive, as device names like /dev/sda1 can change between boots.

The old device-name form can be copied from /etc/mtab:

/dev/sda1 / ext4 rw,relatime,errors=remount-ro 0 0

It's now preferred to use the following form instead. The UUID belongs to the filesystem itself rather than to a device name, and doesn't change. Other identifiers can change: unlikely, but there are no guarantees.

UUID=6cd980aa-9eda-4021-84dc-174155175022	/	ext4	errors=remount-ro	0	1

It is also possible to use a partition UUID, "PARTUUID", a more specific identifier. The blkid command lists the UUIDs active on your machine, and you can copy-and-paste from a terminal window into whatever editor you use.
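
For example (the device name and values below are just placeholders for whatever blkid reports on your machine):

$ sudo blkid /dev/sdb1
/dev/sdb1: UUID="6cd980aa-9eda-4021-84dc-174155175022" TYPE="ext4" PARTUUID="0c3a8b5d-01"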

I knew I was having this problem with another, USB-connected drive, and I kept dead-ending on the very obvious idea of checking that the drive was active in a shell script. This is an instance of the solution hitting the problem from a non-obvious direction. The SSD gave the same problem with an internal SATA connection. How it ended up classed as removable I have no idea. Every drive is removable, with the right screwdriver.
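
For anyone who does still want the shell-script route, a minimal wrapper might look like this (the mount point and viewer path are examples; adjust them for your own setup):

#!/bin/bash
# Refuse to start the viewer if the cache partition is not mounted,
# rather than letting the viewer silently fall back to the default cache.
CACHE_MOUNT="/media/dave/120GB-SSD"   # example mount point from the log above
if ! mountpoint -q "$CACHE_MOUNT"; then
    echo "Cache drive not mounted at $CACHE_MOUNT - aborting." >&2
    exit 1
fi
exec /path/to/firestorm/firestorm "$@"   # hypothetical install path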


40 minutes ago, arabellajones said:

The old device-name form can be copied from /etc/mtab:

/dev/sda1 / ext4 rw,relatime,errors=remount-ro 0 0

For an SSD, "lazytime,noatime" is best, since it induces even fewer writes than "relatime". You will also want to add "discard", so as to improve the drive's performance (the driver will then auto-TRIM released blocks on the drive).
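
Put together, that gives a line such as this one (with your own UUID and mount point, of course):

UUID=6cd980aa-9eda-4021-84dc-174155175022	/media/dave/120GB-SSD	ext4	lazytime,noatime,discard	0	2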

It should also be noted that for cache usage, and on the condition that you have enough RAM, it is best to use a ramdisk (the caches are written to very often with small amounts of data, which is probably one of the worst loads as far as wearing out an SSD is concerned).
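
If you just want to try this before committing anything to /etc/fstab, a one-off tmpfs mount is enough (2 GB here; pick a size matching your viewer cache setting):

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G,nodev tmpfs /mnt/ramdisk

Then point the viewer cache at /mnt/ramdisk in the preferences. The contents are of course lost on unmount or reboot unless you save and restore them.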


44 minutes ago, arabellajones said:

How it ended up classed as removable I have no idea.

This could be a bug in systemd (since the distros you cite are using it), or a race condition in the drive initialization (if it initializes late in the boot process, it might be considered freshly connected and thus removable).

Being the old fart I am, I do not trust "modern" (and most of the time, flawed) ways of doing things, so I use a distro with a good old SysV init, I write /etc/fstab manually, I install the boot loader manually, I compile my own kernels and drivers, etc... If something fails on my system, I at least know who is responsible for the failure and how to fix it! 😛


10,468,982,784 bytes of RAM disk is... okay. Would you save that to HDD on shutdown of the SL viewer, then read it back into RAM before running the SL viewer?

I just cheat and don't buy garbage-class SSDs like I used to.


37 minutes ago, Ardy Lay said:

10,468,982,784 bytes of RAM disk is... okay. Would you save that to HDD on shutdown of the SL viewer, then read it back into RAM before running the SL viewer?

No, you simply save the RAM disk on shutdown of the computer, and restore it on boot. With 'tar' (and no compression, for speed), this only takes a couple of seconds...

Here is my (SysV init) ramdisk script:

#!/bin/bash
#
# ramdisk          Start/Stop the ramdisk.
#
# chkconfig: 2345 01 99
# description: starts and stops the RAM-disk, restoring and saving its \
#              contents.

### BEGIN INIT INFO
# Provides: ramdisk
# Required-Start: $local_fs
# Required-Stop: $local_fs
# Default-Start:  2 3 4 5
# Default-Stop: $named
# Short-Description: starts ramdisk
# Description: starts and stops the ram-disk, restoring and saving its \
#              contents.
### END INIT INFO

. /etc/rc.d/init.d/functions

BACKUP_FILE="/var/cache/ramdisk-backup.tar"
MOUNT_POINT="/mnt/ramdisk"
LOCK_FILE="/var/lock/subsys/ramdisk"

case "$1" in
	start)
		mkdir -p $MOUNT_POINT
		action "Mounting ramdisk" mount $MOUNT_POINT
		if [ -f $BACKUP_FILE ] ; then
			cd $MOUNT_POINT
			action "Ramdisk contents restoring" tar xf $BACKUP_FILE
		fi
		touch $LOCK_FILE
		;;
	stop)
		if mount | grep $MOUNT_POINT &>/dev/null ; then
			if [ -d $MOUNT_POINT/tmp ] ; then
				rm -rf $MOUNT_POINT/tmp/*
			fi
			rm -f $BACKUP_FILE &>/dev/null
			cd $MOUNT_POINT
			action "Ramdisk contents saving" tar cf $BACKUP_FILE .
			cd /
			action "Ramdisk unmounting" umount $MOUNT_POINT
		fi
		rm -f $LOCK_FILE
		;;
	save)
		if mount | grep $MOUNT_POINT &>/dev/null ; then
			rm -f $BACKUP_FILE &>/dev/null
			cd $MOUNT_POINT
			action "Ramdisk contents saving" tar cf $BACKUP_FILE .
		fi
		;;
	load)
		if mount | grep $MOUNT_POINT &>/dev/null ; then
			if [ -f $BACKUP_FILE ] ; then
				cd $MOUNT_POINT
				action "Ramdisk contents loading" tar xf $BACKUP_FILE
			fi
		fi
		;;
	*)
		gprintf "Usage: %s {start|stop|save|load}\n" "$0"
		exit 2
esac

exit $?

With the following line added to /etc/fstab for the ramdisk (here I use 16GB of my 64GB installed RAM):

none	/mnt/ramdisk	tmpfs	noauto,size=16G,nodev	0 0

Note that 16GB is a lot more than I use for the viewer cache, because I also compile the Cool VL Viewer (which can happen many times per day when working on it) in the RAM disk to avoid wasting write cycles on my SSDs.


3 hours ago, Henri Beauchamp said:

save the RAM disk on shutdown of the computer, and restore it on boot. With 'tar' (and no compression, for speed), this only takes a couple of seconds...

Seems sensible.  


3 hours ago, Henri Beauchamp said:

Note that 16GB is a lot more than I use for the viewer cache, because I also compile the Cool VL Viewer

Yeah, two good reasons to have lots of RAM in the computer to have a 'ramdisk' in.

Thanks for sharing!


On 7/8/2022 at 6:13 PM, Henri Beauchamp said:

It should also be noted that for cache usage, and on the condition that you have enough RAM, it is best to use a ramdisk (the caches are written to very often with small amounts of data, which is probably one of the worst loads as far as wearing out an SSD is concerned).

You do have a lot of RAM in your system. I would have to think hard about that option. I have a good-quality SSD as a spare, but a RAM disk may not be a viable option for me. I suppose I could drop the total cache size, but my system is, in some ways, rather old. A RAM disk seems more like something to budget for in a new, current system.


1 hour ago, arabellajones said:

You do have a lot of RAM in your system. I would have to think hard about that option. I have a good-quality SSD as a spare, but a RAM disk may not be a viable option for me. I suppose I could drop the total cache size, but my system is, in some ways, rather old. A RAM disk seems more like something to budget for in a new, current system.

I have a lot of RAM because I run VMs on my Linux system (e.g. the Cool VL Viewer Windows builds are built using a VM).

You do not need a big cache for SL... 2 GB is more than enough. The cache won't speed up your frame rates; it merely improves the rezzing speed (provided you have already visited the same place recently), especially when your network bandwidth is poor (though with today's 1 Gbps FTTH, the cache is hardly faster, especially when held on HDD or even SSD).


I have been looking over the options for caching, and as a Linux user there are several system defaults which come into play. Caching in RAM may not be so helpful.

The RAM not specifically allocated is used for write caching for all disks, and I certainly don't have an internet connection that comes close to what Henri has. My internet connection speed would max out long before the SSD write speed.

I did a few tests, and I do have enough RAM, but the ramdisk solutions I can find all seem rather elderly, or specific to high-bandwidth server systems.
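
For what it's worth, checking the available headroom before sizing a tmpfs is a one-liner (the figures below are made up for illustration):

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            15Gi       4.2Gi       1.1Gi       310Mi        10Gi        10Gi

As long as the "available" column comfortably exceeds the tmpfs size plus what the viewer itself needs, a RAM disk should be safe to try.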

