
Using llSleep to reduce Script Time


Vulpinus



Preemptive TL;DR: Is it worth using llSleep to reduce Script Time while waiting for a touch event to happen? How is it best to do this, if so?

 

This is what I'm up to:

In an attempt to minimise the Script Time of my scripts, I'm experimenting with llSleep. Any input on this would be most welcome; I've googled and searched here but not found anything conclusive on the matter. Since I use quite a few objects with these scripts in them, a little saving would add up.

I've been playing with two very simple scripts that I wrote. One is a light switch, the other a fireplace switch. They are very similar (one was made by modifying the other): both detect a quick touch to turn the device on/off, and a longer touch to access a settings menu for things like light parameters, fire particle size, etc. The only real difference is that the fire control sets off a couple of particle effects, as well as light and glow, and it has an extra global list with some 'constant' parameters and associated message text.

I'm using a separate prim with a simple script using llGetObjectDetails to report on Script Time for the above objects.
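
It's roughly along these lines - a simplified sketch rather than my exact script, with the target key left as a placeholder for the UUID of the object being watched (OBJECT_SCRIPT_TIME comes back in seconds, hence the conversion to microseconds):

// Simplified monitor sketch: polls OBJECT_SCRIPT_TIME for one target
// object and hovers the result, converted to microseconds, above the prim.
key gTarget = NULL_KEY;   // <-- paste the watched object's UUID here

default
{
    state_entry()
    {
        llSetTimerEvent(5.0);   // update every 5 seconds
    }

    timer()
    {
        list details = llGetObjectDetails(gTarget, [OBJECT_SCRIPT_TIME]);
        float seconds = llList2Float(details, 0);   // reported in seconds
        llSetText((string)(seconds * 1000000.0) + " us", <1.0, 1.0, 1.0>, 1.0);
    }
}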

The light script, when idle waiting for a touch event, runs at around 1.4us Script Time. The fire script runs at around 2.3us. Seems to be a big difference there for no difference in (idle) operation. I've tested on several rezzed instances of the objects and the times are reasonably consistent.

By adding a 0.24 second llSleep on a 0.25 second timer event while waiting for the touch event, the fire script Script Time drops to around 1.4us - saving about 0.9us. Operation of the fireplace seems barely affected by this potential delay.

The light script with the above change only shows an uncertain 0.1μs drop in Script Time. ETA: I made a mistake copying the code in - the saving is now consistently about 0.2μs with the correction.

Is there a better way to do this, or am I wasting my time (pun intended) in the first place?

 

This is the relevant portion of the code:

state Running {
	state_entry() {
		gSleeping=1;
		llSetTimerEvent(0.25);
	}

	touch_start(integer total_number) {
		gSleeping=0;
		llSetTimerEvent(0.0);
		kToucher=llDetectedKey(0);
		llResetTime();
	}

	touch_end(integer total_number)    {
		if (llDetectedKey(0)==kToucher && IsAllowed(kToucher)) {
			if (llGetTime()<.5) {
				gOn=!gOn;
				SetFire();
			}
			else {
				llListenRemove(gListener);
				gListener=llListen(gChannel,"","","");
				llDialog(kToucher,"Fireplace Control",gButtons,gChannel);
				llSetTimerEvent(30.0);
			}
		}
		else {
			gSleeping=1;
			llSetTimerEvent(0.25);
		}
	}

	listen(integer channel, string name, key id, string m) {
		llSetTimerEvent(0.0);
// Lots of menu stuff removed for brevity
		if (m!="Exit") {
			llDialog(kToucher,"Fireplace Control",gButtons,gChannel);
			llSetTimerEvent(30.0);
		}
		else {
			gSleeping=1;
			llSetTimerEvent(0.25);
		}
	}

	timer() {
		if (gSleeping) {
			llSleep(0.24);
		}
		else {
			gSleeping=1;
			llSetTimerEvent(0.25);
			llListenRemove(gListener);
		}
	}
}

 



Vulpinus wrote:

 

Is there a better way to do this, or am I wasting my time (pun intended) in the first place?


You're wasting your time picking over such small differences.  One avatar arriving somewhere else in the region can easily bring with them attached objects consuming several hundred milliseconds of script time.  All of a sudden, your half a microsecond doesn't seem significant.

Script scheduling priority has changed over the years; we don't tend to see anything like the same issues that we did several years ago.



Vulpinus wrote:

The light script, when idle waiting for a touch event, runs at around 1.4us Script Time. The fire script runs at around 2.3us. 

Something else is going on, I'm just not sure what. This is the kind of result one would expect if the fire script had an open listen or some other event handler that was causing the script to burn a little CPU between touch events, but I'm guessing that's not what's happening here. 

There are places to use llSleep() to reduce script lag, but I've never seen it used like this.

The hypothetical case of the listen handler using time while idle would only arise in a sim with a lot of chatter (and it would certainly be worse if the chatter were on the same channel as the listen). So, even though it isn't a listen problem this time, I'd suggest also testing in a sandbox if you haven't already, because I know your sim has some not-yet-identified performance challenges that may be making the results hard to interpret.

Also, I think you'll need to test by adding the inter-script differences one function-call at a time. I can't imagine a particle system adding script time, but obviously something is.



Qie Niangao wrote:

Something else is going on, I'm just not sure what. This is the kind of result one would expect if the fire script had an open listen or some other event handler that was causing the script to burn a little CPU between touch events, but I'm guessing that's not what's happening here.


It's microseconds, not milliseconds, right? If so, an open listen would add a lot more than this, so that can't be the cause. Any idle time below 3μs - that is, 0.003 ms - should be acceptable really. If scripts with idle times as low as that are causing problems, it's the number of them, not the load of each, that counts.


Qie Niangao wrote:

I'd suggest also testing in a sandbox if you haven't already, because I know your sim has some not yet identified performance challenges that may be making the results hard to interpret.


Wasn't there even a neighbour in that sim with a griefer object permanently rezzed on his plot?


Makes no sense. If you are waiting for a touch, the script is doing nothing and uses the minimum possible CPU time. Any activity will increase the value.

If that is not the case, then your listen event is listening on a busy channel and firing all the time. If you burn time with the timer and llSleep, the listen has less time to fire, so you get a lower CPU time.

Get a better channel, or reconsider whether you need the listen at all.

 


Hmmm it does seem odd. I didn't show the scripts' default state startup but it is practically identical in both scripts - just sets the allowed users list from a notecard, the initial light/fire settings and variables like gChannel.

The listener is removed except when the menu is active, on a short timer to kill it again if the menu is not used. The 'sleeping' version of the code also then reactivates the sleep, as can be seen in my excerpt.

The listening channel is a random, high-negative number, so I doubt it is busy.
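
Something along these lines, for illustration (not my exact code):

// Illustrative only: pick a dialog channel well away from 0 and the
// commonly-used positive channels, somewhere around -1,000,000 to -2,000,000.
integer gChannel;

default
{
    state_entry()
    {
        gChannel = -1000000 - (integer)llFrand(1000000.0);
    }
}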

I must admit it surprised me that actually doing something in a timer to put the script to sleep used less Script Time than just waiting for an event. The saving is definitely consistent though.

Oh, the 'unsure 0.1μs' in the light version of the script is now a consistent roughly 0.2μs saving - I had made a mistake when copying the code into SL the first time around and it wasn't going back to sleep.

You are right about testing in a sandbox - didn't think of that. D'oh! I'll try pulling the script apart and adding things back a bit at a time to narrow down where the increase in time happens. I know we're only talking a fraction of a microsecond, but I'm curious now!

 


It makes sense that it makes little sense, but my measurements are consistent. Adding the timer to set off the sleep reduces the Script Time compared to just waiting for a touch event. If you take out the lines in my excerpt that clearly relate to gSleeping, that's the version without it.

The listener is on a high-negative random channel, and is only active if the menu is accessed by the 'long touch'. It is shut down again, either by exiting the llDialog menu or by the timer.


Yeah, μs, not ms.

The scripts I'm working on, I might have twenty or more of them in total, so it is a case of numbers. I'm looking at combining some into linksets (like my windows) with a single controller script (already have for some) but for a lot it gets a bit unwieldy.

Griefer object??? Hmmm - I'll have a look around! Certainly something isn't good in the sim.


I'm interested, so I've put your script in a prim (with the necessary changes to run it).

0.002 to 0.003 ms.

Commented out the sleep : 0.004 to 0.005 ms

But your timer is not switched off in all cases, so I added llSetTimerEvent(0) as the first line of the timer event.

Script time is 0.001 ms now (without the sleep; with the sleep it's higher, as expected).

Looks like a running timer (even at 30s) is much worse than a running listen. So make sure that it is really switched off if you don't need it.
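
Something like this - a stripped-down sketch that compiles on its own, not the exact test script:

// Kill the timer as the first thing in the handler, then re-arm it
// only for as long as it is actually needed.
integer gSleeping = TRUE;
integer gListener;

default
{
    state_entry()
    {
        llSetTimerEvent(0.25);
    }

    timer()
    {
        llSetTimerEvent(0.0);          // switched off first, in all cases
        if (gSleeping)
        {
            llSleep(0.24);
            llSetTimerEvent(0.25);     // re-arm only while the sleep loop is wanted
        }
        else
        {
            // the 30s menu-timeout case from the excerpt: clean up, go back to idle
            llListenRemove(gListener);
            gSleeping = TRUE;
            llSetTimerEvent(0.25);
        }
    }
}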

 


*#!**! There's a mistake in the code isn't there? I was just making a simplified version to test, and spotted it.

It's not a case of not switching off the sleep timer, but not switching it back on when a short click happens, so it never goes back to sleep in that case. Testing now...

(funny but I'm sure I did have that in before - must have messed up somehow.)
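
For clarity, this is the shape of the fix, boiled down to a toy that compiles on its own (the toggle just stands in for SetFire()):

integer gSleeping = TRUE;
integer gOn;

default
{
    state_entry()
    {
        llSetTimerEvent(0.25);
    }

    touch_start(integer total_number)
    {
        gSleeping = FALSE;
        llSetTimerEvent(0.0);      // stop the sleep loop while handling the touch
    }

    touch_end(integer total_number)
    {
        gOn = !gOn;                // stand-in for toggling the fire/light
        gSleeping = TRUE;          // this is the part I'd left out...
        llSetTimerEvent(0.25);     // ...so it never went back to sleep
    }

    timer()
    {
        if (gSleeping) llSleep(0.24);
    }
}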

...

ETA:

OK, that killed it. You're right, it doesn't work. I sort of knew it shouldn't, but it seemed to.

What is now annoying me is that I've just rezzed two new copies of this fireplace, and put the fixed sleeping version of the script in one with the original non-sleeping version in the other. The sleeping version is now running at about 3.5μs, longer than before now that the sleep is active all the time as I intended. So that's rubbish. But...

The non-sleeping version is running at 1.5μs (like the broken sleeping version was), while the fireplace I used initially to test this, which is just another rezzed copy of the exact same fireplace with the identical script, is still running at about 2.5μs. LL is just trying to confuse me.

I give up. I'm going to work on my window script.



Vulpinus wrote:

Sassy, you are so right, lol. I'm going to work on more significant matters now...

Sometimes, my obsession to optimise things is my undoing!

I remember programming Dad's PDP-11/73 and being amazed that it could multiply two (floating point!) numbers in a couple microseconds. I spent the next two decades counting those microseconds to make sure my code would always run in its allotted time. Like you, most of my work was on microcontrollers in embedded real-time systems. As the processors got faster, I started counting nanoseconds. Now we're looking for ways to vectorize problems because the processors can't chew any faster, but we can get a thousand of them chewing in parallel.

We live in a different world than IT people.

;-).


sometimes obsession with even these tiny things is a good thing

over the street I was in a chat onetime with Void Singer about optimisations. I mentioned was a good thing to do this as our scripts were working in a collaborative environment with other scripts. So anything we could do by way of optimisation is helpful

Void then made the really astute point that in this environment our scripts are also in competition with scripts written by other scripters. That when we do optimise our own scripts so that they perform faster and more robustly than other scripts that functionally do the same thing then it gives us a commercial advantage over them

that when top scripters are competing then it does come down to milliseconds. That is not just about one script. Is about multiple scripts as a body of work. That when our scripts outperform other scripters scripts consistently, script after script after script, then the commercial advantage is realised

was a pretty astute observation this



irihapeti wrote:

sometimes obsession with even these tiny things is a good thing

over the street I was in a chat onetime with Void Singer about optimisations. I mentioned was a good thing to do this as our scripts were working in a collaborative environment with other scripts. So anything we could do by way of optimisation is helpful

Void then made the really astute point that in this environment our scripts are also in competition with scripts written by other scripters. That when we do optimise our own scripts so that they perform faster and more robustly than other scripts that functionally do the same thing then it gives us a commercial advantage over them

that when top scripters are competing then it does come down to milliseconds. That is not just about one script. Is about multiple scripts as a body of work. That when our scripts outperform other scripters scripts consistently, script after script after script, then the commercial advantage is realised

was a pretty astute observation this

We don't call it optimization for nuthin'!

That said, it's important to know what you're optimizing for.



irihapeti wrote:

that when top scripters are competing then it does come down to milliseconds. That is not just about one script. Is about multiple scripts as a body of work. That when our scripts outperform other scripters scripts consistently, script after script after script, then the commercial advantage is realised

was a pretty astute observation this

It's all relative though. I'd agree if the only measure of success were one script outperforming another, but that's very rarely the commercial world.  The speed at which a HUD can change a texture on the belt of a dress is usually irrelevant.  The amount of memory consumed is usually irrelevant... to making the overall product a commercial success.

You could say "ah, but if I optimise then... la la la", but the project manager will point out that there's no time left in the project budget, because any ROI on making the texture change 0.1 ms faster or use 20 bytes less memory simply won't make a difference.

Don't get me wrong, I'm not suggesting that it's not relevant. I've poked directly at video memory to display things, written a 2-byte .com file to warm-reset the machine, 5 bytes for a far jump to the machine initialisation vector, and so on - but now it takes a few hundred MB to say "hello world" because of library upon library.

We have exactly the same scenario with mesh optimisation too.  Commercial success is completely different from the most optimised mesh.  Having the right product doing the right thing for the right price, within budget, will pretty much always beat shaving off a tiny bit of something through optimisation.

I once had a manager tell me that I needed to be more commercially aware.  I've ticked that box long ago ;)

 


i dont disagree that if a person is making a range of products: and doing model, texture and script. and is also doing marketing and support as well then is some time tradeoffs/ roi they do need to make

was a chat between scripters we were having. Void and me. When is optimisation appropriate. our consensus was all the time. being coders. (:

that we take this approach to our work all the time. So that it becomes first nature. Learn how to write tight fast small-footprint code. And keep doing it

simple example. divide by zero check

 

float div(float p, float q)
{
    if (q == 0.0)
        return q;
    else
        return p / q;
}

q = div(p, q);


 the above style is often found in library code. the function code does a check on the input (q) to prevent a hard fail. The function code is handholding the calling code

optimise:

 

float div(float p, float q)
{
    return p / q;
}

if (q != 0.0)
    q = div(p, q);

 
the optimisation saves a function call when q == 0.0. Is no handholding


But optimising for what?

That's a simple example of course, but if the range checking is performed once, in a library function that's typically going to be called by others - let's say in a bigger project - then the function is safe, validated and reliable once unit tested.

By "optimising" like that, you're transferring the risk to the caller of the function, each and every time, in that they must validate a possible failure.  They may not even be party to all the failure modes of the function if it's a library call.

Next, let's say it's a more complex script where that function is called multiple times.  The conditional IF now has to be inlined every time, thus increasing byte code.  Now you've got another issue: which matters more to the overall result - byte code size, reliability, or simplicity?  And make the mistake of including that error-condition check before calling the function just once, and you have the potential to fail.
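
To illustrate the point, a throwaway sketch (the names are arbitrary):

// Throwaway illustration: the same guard repeated at every call site,
// which is what ends up inlined into the byte code each time.
float div(float p, float q)
{
    return p / q;
}

default
{
    state_entry()
    {
        float a = 10.0;
        float b = llFrand(2.0);

        if (b != 0.0) a = div(a, b);   // call site 1: guard repeated
        if (b != 0.0) a = div(a, b);   // call site 2: guard repeated
        if (b != 0.0) a = div(a, b);   // call site 3: guard repeated
        // ...versus checking q once, inside div(), and calling it plainly.
        llOwnerSay((string)a);
    }
}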

Optimisation, reliability, budget etc.

Fun! :)



Madelaine McMasters wrote:

I remember programming Dad's PDP-11/73 and being amazed that it could multiply two (floating point!) numbers in a couple microseconds. I spent the next two decades counting those microseconds to make sure my code would always run in its allotted time. Like you, most of my work was on microcontrollers in embedded real-time systems. As the processors got faster, I started counting nanoseconds. Now we're looking for ways to vectorize problems because the processors can't chew any faster, but we can get a thousand of them chewing in parallel.

We live in a different world than IT people.

;-).

That sounds familiar. "How many machine cycles can I have?" Squeezing code to make it fit in 100 words of program memory. Different worlds indeed!

I would love to do development with big FPGAs. I used CPLDs in my projects for years, replacing the entire digital guts of a device with a single chip, and have done some simple things with FPGAs and soft cores. The big ones are incredible though. If I were to start again, I think I would go there.

Similar to CPUs though - the more powerful they get, the more we are practically forced to move away from coding them at the machine level, and develop instead using high-level languages that remove the fine control and optimisation that we could do better ourselves. My CPLD designs were mostly done by schematic and placed manually - I avoided the compiler whenever I could.

...

Yeah, there's a big difference between most commercial development and what we might like to do, or once have had to do, to 'improve' things. Like Sassy mentioned, I learned long ago when to stop developing something, and just make it saleable.

It still sometimes makes me itch though, and most things that I do for myself I can't help but to push a little further on; it is my obsession. Nanoseconds matter! I once was picked to work on a project to develop a geological analysis method working at the picogram level - I wonder why they asked me???

...

You might all find these blasts from the past amusing. Mel certainly knew how to optimise code:

Real Programmers Don't Use Pascal

The story of Mel

 



irihapeti wrote:

i dont disagree that if a person is making a range of products: and doing model, texture and script. and is also doing marketing and support as well then is some time tradeoffs/ roi they do need to make

was a chat between scripters we were having. Void and me. When is optimisation appropriate. our consensus was all the time. being coders. (:

that we take this approach to our work all the time. So that it becomes first nature. Learn how to write tight fast small-footprint code. And keep doing it

simple example. divide by zero check

 
float div(float p, float q)
{
    if (q == 0.0)
        return q;
    else
        return p / q;
}

q = div(p, q);

 

 the above style is often found in library code. the function code does a check on the input (q) to prevent a hard fail. The function code is handholding the calling code

optimise:

 
float div(float p, float q)
{
    return p / q;
}

if (q != 0.0)
    q = div(p, q);

 

the optimisation saves a function call when q == 0.0. Is no handholding

The first example replaces one kind of error with another. Both can be deadly.

The second example is not an optimization so much as the beginning of an attempt to handle the error.

Again, it all comes down to understanding what you are trying to optimize.

And when did holding hands become something to be avoided?

;-).



Vulpinus wrote:

I would love to do development with big FPGAs.


The last FPGA thing I did was a graphics controller in a Spartan 4. I spent more time wrestling with VHDL than the actual design. All of my previous FPGA designs were schematic driven. Although I actually preferred VHDL, there's still gotta be a better way.

I'm retired now, and will probably never touch another FPGA, but there are so many things that would be fun to try with one. I visited MIT once, and saw a demonstration of a phased array (of 256 transducers) ultrasound emitter that could steer a beam of audio around like a laser (your ear detects the envelope of the AM modulated ultrasound, not the ultrasound itself). That seemed like a perfect application for an FPGA with lots of internal memory.

I just purchased a laser "tape measure". It's accurate to 1/16" over 100ft. That's a timing resolution of 5ps. A Mac Pro can do 35 floating point operations in that time.

I was born at the wrong time, things are exciting today. Of course I would say that regardless of when I'd been born.

;-).


is quite interesting this as well

like how far do we take hand-holding?

example:

 

f(p)
{
    int m = length(p);
    if (m <> 6)
        return error;
    else
    {
        ... do stuff ...
        return ok;
    }
}

try
{
    f(p);
}
catch
{
    handle error;
}

 

alternative

 

f(p, m)
{
    ... do stuff ...
}

f(p, 6);


 

 


yes agree. Size/space is a material factor. A hard limit. We optimise for speed within the space available

for example:

we have 64K space allocated to the job script. And our script consumes 30K of it as wrote

our speed optimisations will consume 10K extra for 40K space used. When so thats what we do. Consume the space which has already been allocated for this job

the time penalty for the extra 10K space is paid once. The script loading time at startup

+

the other time penalty factor which you did mention is also relevant. How much time does it take the person to write the script? Senior coders take less time than junior coders. (senior meaning more skilled and knowledgeable)

the test of it is when seniors are competing with seniors

example:

is Monday today. You got til Wednesday noon to get this job completed and out the door. The senior who wins this competition when both seniors do get it out the door, is the senior whose script performs best

if we dont get it out the door then we not a senior

bc the person who set the Wednesday noon deadline is a senior who knows how long it takes to do this job. If they didnt then they arent a senior person either. And who put them in charge of the work schedule (: Not that this person has to be a coder. They just have to be a person who is senior at scheduling coders



irihapeti wrote:

is quite interesting this as well

like how far do we take hand-holding? 

Again, how far you take hand-holding depends on the context. I had a career designing medical instrumentation. If my stuff didn't work, people could be harmed, or at least not helped. So, I tried to anticipate every possible error that could occur, in sensors, in interfaces, by users, etc. And I also tried to anticipate errors in understanding my code, either by others or by me at a later date, so I wrote lots of comments. In tricky areas, my comments might be 10x larger than the code they explained. I also wrote "theory of operation" manuals for my designs, some of which were as thick as textbooks, and included the reasoning behind the overall architecture of the thing.

I was known for my offbeat design sensibilities, but only by those who hadn't read my documentation. Those who did became converts to my way of designing. Given the lack of comments in your examples, I'd reject them both.

Even though I'm a "no public displays of affection" gal, I'm not averse to a little hand holding, particularly if I'm dangling over the edge of a cliff. Both of your examples leave me for dead.

;-).


yes. Can understand what you mean about context

my own experience is that the org I work for provides social/health/edu services. Is the field for which I am qualified. As a org we have to deliver/report on outcomes electronically, using devices to capture data out in the field, in ways that are easily done and understood by the staff. They are social workers. Not ICT workers

I ended up in the ICT department bc the boss decided that I can be the audit/liaison between the software provider contractors and the org

most of the software we use is tailored to the contracts we have with the funders. Is lots of contracts with lots of different requirements, and lots (heaps) of different software packages deployed. Is also quite a lot of custom work goes into the software bc the inputs and outputs/outcomes of the social service contracted for differs quite markedly between services. For example: men who have been in prison for a long time, have been released and are integrating back into the community and workforce. vs: a service for solo teenage girls with little or no family support who are pregnant and wish to raise the baby. vs a service for mums and children who need refuge. We provide all of these kinds of services and others as best we can. Once get beyond the basics like name, address, etc the datapoints are highly dissimilar

the conditions of the funding contracts are quite rigorous. So while we have lots of well-documented specs is my job to ensure that the code does help us deliver what the funder has contracted us for. Do the specs align with the contract requirements of the funder. Does the code actual do what the coders comments says it does, etc

what I have learned thru this work is that sometimes even when the documentation does align the delivered code itself doesnt, despite all the documentation and commentary

+

since I been in the ICT I have had to obtain all kinds of ICT certs. Is a condition of the funding contracts as well. That personnel are appropriately qualified. I am not the best at this kinda work. Is quite a few people I know who are way better than me at it. So I learn off them as best I can. The one thing I have learned from them is dont trust the documentation. Read the code. The job is to audit the software. Not to audit the documents that accompany the software as if they were the software

+

i think also that what is contextual is that we as a org dont hire or train programmers. We buy code that performs well when installed on our systems, and audit what we buy. And bc auditor then dont trust the paperwork. Just make sure the paperwork does align with the code which aligns with the contract. Not the contract between us and the code provider. The contract between us and the funder

+

also bc I am now in a position where I am a determining factor in which code we do buy then the contractors have come to understand my maxim. I am not interested in buying code that holds the hands of their employees when that hand-holding is detrimental to code performance. Any hands that are to be held are those of our staff. And the best hand-holding for our staff is that their devices and systems perform well and dont break. That the data can be entered, extracted and reported on in realtime in ways that staff need it to when out in the field, and that the funders reports are accurate and timely

most software providers/contractors get this. Others dont. That its about our staff and not theirs. We buy off them who do get it. We have approx. $65 million in funding contracts at any given time. We have approx. 83,000 separate datapoint types that the funders require us to report on regularly. So I am quite picky about who our contractors are

+

i just finish with another own experience. Is quite rare (at least where I live) to find a person who is good at both coding and documentation, and able to keep them aligned as they work



irihapeti wrote:

So while we have lots of well-documented specs is my job to ensure that the code does help us deliver what the funder has contracted us for. 

 

I'm not a fan of specifications. I've yet to see one that's correct. And it sounds like you may not have either. And that's why your job is to ensure the code is helpful to your goals. The company I once worked for was regulated by the FDA, so we had rules to follow in the design, documentation and production of things. And those rules made no sense to me.

They sorta said:

  1. Show exactly what you are going to design (the specification).
  2. Design it.
  3. Show that you designed exactly what you said you would.

That's nonsense. What really happens is:

  1. You have an idea of what you want to design.
  2. You go to work and find out you can't do it like you thought, so you innovate.
  3. You finish within sight of the original goal, but possibly not within arm's reach.

The reality of the real step 3 makes it nearly impossible to meet the FDA's step 3. The FDA sequence does not allow for innovation during the design. All the innovation must be done before you start working on the thing!

So, the way we documented it was:

  1. Show "exactly" what we're going to design, but so nebulously that we've covered anything we're likely to make.
  2. Design the best thing we could, a messy process at which we were very good.
  3. Amend the original document (which is allowed) to reflect exactly what we'd actually designed.
  4. Show that we'd designed exactly what we said we would (this was now very easy).

The problem with most specifications is that they don't anticipate all the nuances of a thing, nor your incomplete understanding of the problem to be solved. This is particularly true of large software projects. There's a book, "Serious Play: How the World's Best Companies Simulate to Innovate", that delves into this. The author details a large software system done by two teams, one generating a specification from customer interviews and coding to it, the other team constantly soliciting "tell me your problem" stories from the customer, then iterating prototypes in concert with them. The iterating team produced a system that scored highly with the customer and came in on time and within budget; the spec-driven system was a mess.

The iterated design required less customer training than the spec-driven design because the users had been participating in the design from the start and were already familiar with its operation by the time it was delivered. They'd experienced prototype systems from the second week of the project (which took much longer) and were able to fix things they didn't like as things progressed. In a sense, the coders and customers held hands as they played with the prototypes.

The spec driven system was not at all like the customer expected, not because the design team didn't follow the spec, but because the customer had no idea what using the system they wanted would really be like. They had to imagine it to specify it, and their imaginations, like mine, are limited and fallible. Be careful what you specify, you may get it.

 

