I have been investigating rez-to-script-execution delays ever since they started happening in early 2020. Here is what I have observed. When a script rezzes an object out of its parent prim's inventory, there is a delay between the llRezObject() call and the moment the script in the rezzed object begins to run. Before early 2020, this delay was almost always very short: if you rezzed a projectile from a launcher, its script got control nearly instantaneously. In early 2020, delays began to appear, and their pattern was distinctly odd.
I built a test object which rezzes an object and performs a handshake: the rezzed object sends the time its script got control back to the parent, which prints the delay. What I found is that when you run this test in a given region at a given time, the region will always be "fast" or "slow". By "fast", I mean less than 100 milliseconds (usually around 70 milliseconds prior to the uplift, and more like 30 milliseconds after the migration to AWS). A "slow" region will always have a rez-to-script-execution time of around two seconds, before or after the uplift. These figures are utterly bimodal: you will hardly ever see a number between them, or one much greater than two seconds.
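The classification rule this implies can be stated in a few lines. This is a minimal sketch in Python rather than LSL; the thresholds come from the measurements described above, and the function name and the "indeterminate" fallback are mine, not part of the actual test object.

```python
# Classify a region from a batch of measured rez-to-script delays
# (in seconds), using the bimodal clusters observed in practice:
# "fast" means well under 100 ms, "slow" means clustered near 2 s.
def classify(delays):
    mean = sum(delays) / len(delays)
    if mean < 0.1:
        return "fast"
    if 1.9 <= mean <= 2.2:
        return "slow"
    return "indeterminate"   # essentially never seen in testing

print(classify([0.0296, 0.0263]))   # fast
print(classify([2.0227, 2.0228]))   # slow
```

The striking point is that the third branch is almost never taken: every region lands cleanly in one cluster or the other.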
If you test repeatedly over a number of hours, a "fast" region will remain "fast" and a "slow" region will remain "slow", with two exceptions: a "fast" region may become slow and stay that way, and a "slow" region may become fast after it is restarted (but may become slow again a few days later).
Here is a test I ran across 25 regions on Christmas Eve, showing the mean delay (in seconds), the number of trials, the standard deviation, and the uptime (in days) of each region, using my "Gridmark" automated benchmark, which will soon be available for free/full perm so anybody can run their own experiments.
2020-12-24 13:59 UTC
Region Delay n Std. dev Uptime Type
---------------------------- ------ --- -------- ------ ----
Fourmilab 0.0296 5 0.0102 12.7 ER
Sandbox Pristina 0.0263 5 0.0011 12.6 ER
Sandbox Exemplar 2.0227 5 0.0006 13.8 ER Slow
Sandbox Verenda 2.0228 5 0.0006 13.8 ER Slow
Sandbox Formonsa 0.0269 5 0.0010 12.7 ER
Sandbox Amoena 0.0283 5 0.0010 13.8 ER
Sandbox Artifex 2.0267 5 0.0104 12.6 ER Slow
Sandbox Mirificatio 0.0262 5 0.0003 4.2 ER
London City Brittany 0.0358 5 0.0098 2.2 EH
Debug1 2.0222 5 0.0004 12.6 ER Slow
Mauve 2.0325 5 0.0052 12.6 MR Slow
Devolin Mal 2.0310 5 0.0056 13.8 MR Slow
Limia 0.0299 5 0.0096 9.1 EH
Arowana 0.0250 5 0.0009 9.1 EH
Orville 0.0262 5 0.0009 12.6 ER
Woodbine 0.0440 5 0.0006 4.5 MH
Lapara 0.0366 5 0.0071 13.8 MR
Caledon Oxbridge 0.0609 5 0.0090 2.3 EH
Babbage Palisade 0.0374 5 0.0180 12.7 ER
Maryport 2.0224 5 0.0104 12.7 MR Slow
Combat (sandbox) Rausch 0.0266 5 0.0006 2.1 MR
Langdale 2.0309 5 0.0095 12.7 MR Slow
Sandbox Bricker 2.0209 5 0.0013 12.7 ER Slow
Vallone 2.0333 5 0.0130 12.6 MR Slow
Regions: 25, 15 fast, 10 slow.
Mean uptime (days): Fast regions 8.9, Slow regions 13.0
Total test time 66.1 minutes, 27 teleports.
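The summary means can be recomputed directly from the uptime column of the table above, grouped by result (a quick cross-check in Python; the lists below are simply the table's uptime figures for the fast and slow rows):

```python
# Uptime in days for the regions in the table, grouped by result.
fast = [12.7, 12.6, 12.7, 13.8, 4.2, 2.2, 9.1, 9.1, 12.6, 4.5,
        13.8, 2.3, 12.7, 2.1]
slow = [13.8, 13.8, 12.6, 12.6, 12.6, 13.8, 12.7, 12.7, 12.7, 12.6]

def mean(xs):
    return round(sum(xs) / len(xs), 1)

print(mean(fast), mean(slow))   # 8.9 13.0
```

The slow regions are, on average, several days longer past their last restart than the fast ones, which fits the restart behaviour described above.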
Some of the regions which tested "slow" in this run (for example, Mauve) have since been restarted and, the last time I tried, tested "fast". Because my test requires the ability to rez an object, I can only run it in sandboxes and rez zones, hence the odd selection of regions.
To my ancient programmer's eyes, this looks like a timeout situation: something that is supposed to happen when starting the scripts of a rezzed object sometimes fails, and is then retried successfully after a timeout set to two seconds. Why this failure appears to manifest more frequently the longer a simulator has run since its last restart is a mystery which remains opaque to observers outside the simulation code.
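The hypothesis can be sketched as a toy model. This is emphatically not simulator code, just an illustration of why a fail-then-retry mechanism with a fixed two-second timer would produce exactly the bimodal pattern measured; the p_fail parameter is an assumed per-region failure probability with no counterpart in anything observable from outside.

```python
import random

# Toy model of the hypothesised mechanism: script start-up either
# succeeds immediately (base latency, ~30 ms post-uplift) or misses
# a step and is rescued by a retry timer fixed at 2 seconds.
def rez_delay(base=0.03, timeout=2.0, p_fail=0.0):
    if random.random() < p_fail:
        return timeout + base   # recovered by the 2 s retry timeout
    return base                 # normal fast path
```

A "fast" region corresponds to p_fail near 0 and a "slow" one to p_fail near 1; every sample lands in one of two tight clusters, with nothing in between and nothing much beyond two seconds, matching the measurements.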