The core of any API (application programming interface) is a set of runtime guarantees. Lately, I’ve been asked a fair bit about LSL’s llSetTimerEvent() API function and its interaction with the new region-idling model, where regions are shoved down to less frequent updates when not in certain kinds of active use.

Obviously that’s going to have some sort of effect on timers, but does it break llSetTimerEvent()’s runtime guarantees? No, it doesn’t.

The key guarantee of llSetTimerEvent() – and you can test it yourself – has always been that it never triggers a timer event sooner than the appointed interval. The timer event always fires either at or after the specified interval. This is consistent with the way timer events are handled in pretty much all multi-user systems: you specify the earliest time to be notified, and you get your notification at that time or (more commonly) some short time later.

If it then matters to you just how much time has really elapsed, you check the clock.

Traditionally, LSL has been fairly good about not being more than a second or two late in generating timer events (and often less), unless the time-dilation of a region has been particularly severe. Unless you’ve stopped to actually measure it, though, you’re probably not aware of quite how much latitude (or ‘slop’) is normal in Second Life’s timer event system. It’s more than you’d think.

So, region-idling adding more or longer delays to timer events doesn’t break llSetTimerEvent()’s runtime guarantees at all. If determining how much time has actually passed between timer events matters to you, you are already (or should be) checking llGetTime().
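As a minimal sketch of that pattern (the interval and chat output are illustrative choices, not anything prescribed by the API): request timer events no sooner than every five seconds, then use llGetTime() and llResetTime() to measure how much time really elapsed between them.

```lsl
default
{
    state_entry()
    {
        llResetTime();          // zero the script's clock
        llSetTimerEvent(5.0);   // timer events no sooner than every 5.0 seconds
    }

    timer()
    {
        // How much time *really* passed since the last reset:
        float elapsed = llGetTime();
        llResetTime();

        // elapsed will always be >= 5.0; under region idling
        // (or heavy time-dilation) it may be noticeably more.
        llOwnerSay("Elapsed since last timer: " + (string)elapsed + "s");
    }
}
```

The point of the sketch is that the script never trusts the nominal 5.0-second interval for timekeeping; it treats the timer event purely as a “wake up no earlier than this” signal and reads the clock itself.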


Tags: API / Application Programming Interface, llGetTime, llSetTimerEvent, LSL, Region Idling, Scripting, Second Life, Virtual Environments and Virtual Worlds