Can LabVIEW threads sleep in increments less than a millisecond?

Post by chilly cha » Mon, 06 Dec 2004 07:40:06


Hi Tarheel!

Maybe you should get an idea of the kind of timing accuracy you can reach when using a loop. Use the attached VI, which repeatedly runs a For Loop (10 iterations) that reads the time, then calculates the average and standard deviation of the time difference between loop iterations.

On my PC (P4, 2.6 GHz, W2K), I get a standard deviation of about 8 ms, which appears to be independent of the sleep duration I ask for. Same thing with a timed loop. Under Mac OS X (PowerBook, 1.5 GHz), the SD falls to 0.4 ms. I tried disabling most of the background processes running on my PC, but I could not get better resolution. It seems the issue is not in LV but in the way the OS manages its internal reference clock.

Since you are a Java aficionado, maybe you could produce something equivalent? A proof that nanosecond resolution is available on a PC could be of great help to NI. Why bother with costly timers on DAQ cards?

By the way, it took me about one minute to create the attached VI. I would like to have an idea of the time required to do the same thing in Java.

Tempus fugit...

CC


Timing precision.zip:
http://www.yqcomputer.com/
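[Editor's note: for readers without LabVIEW, the measurement CC describes can be sketched in Python. This is an illustrative analogue of the attached VI, not a reconstruction of it; it uses the standard `time` and `statistics` modules.]

```python
import statistics
import time

def loop_timing(requested_sleep_s, iterations=50):
    """Run a loop that sleeps each iteration; return the mean and standard
    deviation (jitter) of the observed inter-iteration intervals, in seconds."""
    stamps = []
    for _ in range(iterations):
        stamps.append(time.perf_counter())
        time.sleep(requested_sleep_s)
    intervals = [b - a for a, b in zip(stamps, stamps[1:])]
    return statistics.mean(intervals), statistics.stdev(intervals)

mean_s, sd_s = loop_timing(0.001)  # request a 1 ms sleep per iteration
print(f"mean interval: {mean_s * 1000:.3f} ms, jitter (SD): {sd_s * 1000:.3f} ms")
```

The absolute numbers depend on the OS scheduler and timer resolution, which is exactly the point of the experiment: the jitter reflects the platform, not the language.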
 
 
 


Post by tarheel_ha » Mon, 06 Dec 2004 09:40:09

Look, you guys are all thinking in terms of polling in an RTOS, which is fine, and which might very well need a high degree of accuracy.

But all I want is some way to put the thread to sleep and get it off the CPU so that the scheduler can give some other thread a chance to accomplish something. For all I care, the thread can be ordered to sleep for any epsilon greater than zero, just so long as it goes to sleep and releases its hold on the processor.

Here's another example of a need for a very short sleep interval: when you stress test an application, it helps to overwhelm it by a factor of a thousand, or a million. Loops that might normally run ten times get run a million times; arrays that might have ten elements get expanded to a million elements; sleep times of (1/10)th of a second are sped up to a mere (1/1000000)th of a second.

OOPS - can't do that last one: if your app has sleep times, there's no way that it can ever be sped up faster than that (1/1000)th of a second minimum. What's worse, if I tell an empty For Loop to put itself to sleep for a millisecond, I get only about 500 [not 1000] iterations of the loop each second.

So that's my theoretical upper bound in any app with sleep times: the guts of the app can iterate no more than 500 times a second.

Again, in this day and age of 3 GHz processors, it just seems like there's something wrong with an upper bound of 500 iterations per second.

Of course, at this point, I almost expect some LabVIEW guru to speak up and say, "Oh, you don't have to worry about putting your threads to sleep so as to free up the CPU - LabVIEW does all that for you behind the scenes."

Or maybe "sleep" isn't the correct terminology in LabVIEW; maybe there's some "relinquish the processor" command which I don't know about.
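[Editor's note: the "upper bound" argument above is easy to check empirically in any language. A minimal Python sketch, using `time.sleep` as the sleep primitive and `time.perf_counter` as the clock, counts how many iterations a sleeping loop completes in a fixed wall-clock window. In CPython, `time.sleep(0)` is the "relinquish the processor" analogue being asked for: it yields the remainder of the time slice without imposing a minimum delay.]

```python
import time

def iterations_in(window_s, sleep_s):
    """Count loop iterations completed in a fixed wall-clock window when
    each iteration sleeps for sleep_s seconds (0 means 'just yield')."""
    deadline = time.perf_counter() + window_s
    count = 0
    while time.perf_counter() < deadline:
        time.sleep(sleep_s)  # sleep(0) yields the CPU with no minimum delay
        count += 1
    return count

# A 1 ms sleep caps the loop at 1000 iterations/second at best; scheduler
# granularity typically pushes the real figure lower, as described above.
# A 0 s sleep removes the cap while still giving other threads a chance.
print(iterations_in(0.25, 0.001), "iterations in 0.25 s with 1 ms sleeps")
print(iterations_in(0.25, 0.0), "iterations in 0.25 s with sleep(0)")
```

The first count is bounded by the sleep duration; the second is bounded only by loop overhead, which is the behavior being asked for here.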

 
 
 


Post by altenbac » Mon, 06 Dec 2004 16:10:09


 
 
 


Post by DFGra » Tue, 07 Dec 2004 23:40:34

The major problem with waits under Windows, especially NT-based versions, is that the OS timeslice for an application is 20 ms by default. What this means is that when your process (LabVIEW) is swapped out so another process can run (e.g. your e-mail client gets mail, your mp3 player is decoding mp3s, the system clock ticks, the keyboard gets data), there is a minimum of 20 ms before LabVIEW gets the processor back. This is an automatic delay of 20 ms+ whenever it happens. As a result, delays of less than 20 ms are not particularly accurate and are generally only useful for throttling CPU use.

You can change this 20 ms to 1 ms, but it will result in a lot more thread-switching overhead for the OS. This is probably not much of a problem with a fast processor, but it is something to consider.

Take-home message: this is a problem with the operating system, not LabVIEW. Windows operating systems are nowhere near real time. If you really need to sleep for 10 microseconds, you need to use LabVIEW RT, where the function exists. Note that this is usually not necessary. Creative use of your hardware timers and buffering of data will handle most problems. For those it won't (and they do exist), there is LabVIEW RT and LabVIEW FPGA.
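[Editor's note: the quantization described above can be observed from any language, since the floor belongs to the OS scheduler, not to LabVIEW or to the language runtime. A Python sketch comparing requested vs. delivered sleep times; the exact figures are OS-dependent:]

```python
import time

def actual_sleep_ms(requested_ms, trials=20):
    """Average wall-clock milliseconds actually spent in a sleep call."""
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        total += time.perf_counter() - start
    return total * 1000.0 / trials

# Below the OS timer resolution, requested and actual durations diverge:
# very short requests are rounded up to whatever the scheduler can deliver.
for requested in (20.0, 1.0, 0.1):
    print(f"requested {requested:5.1f} ms -> actual {actual_sleep_ms(requested):7.3f} ms")
```

On a stock desktop OS the smallest requests typically come back inflated, which is the practical meaning of "delays of less than the timeslice are only useful for throttling CPU use."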
 
 
 


Post by tarheel_ha » Thu, 09 Dec 2004 13:40:07

DFGra wrote:
> The major problem with waits under Windows, especially NT-based versions, is that the OS timeslice for an application is 20 ms by default. What this means is that when your process (LabVIEW) is swapped out so another process can run (e.g. your e-mail gets mail, your mp3 player is decoding mp3s, the system clock ticks, the keyboard gets data), there is a minimum of 20 ms before LabVIEW gets the processor back. This is an automatic delay of 20 ms+ whenever it happens. As a result, delays of less than 20 ms are not particularly accurate and are generally only useful for throttling CPU use.
> You can change this 20 ms to 1 ms, but it will result in a lot more thread-switching overhead for the OS. This is probably not much of a problem with a fast processor, but something to consider.

Three questions: does "20ms" mean 20 MILLIseconds, or 20 MICROseconds?

Also, do you know which registry settings need to be changed in order to bring this time down to 1 ms?

Finally, do you know whether the server versions [W2K Server & 2003 Server] are tuned a little better than the desktop versions [XP & the "Professional" editions]?

Again, in this day and age [of 3 GHz processors], it's really hard for me to believe that a context switch automatically [and necessarily] results in a 20-millisecond time penalty to the thread that does the relinquishing.

PS: I did a little googling, and I found an article from July of 2002 where a guy at IBM did some performance testing of the Windows vs. Linux kernels, especially as regards context switching. He found that if you use system-specific calls [rather than generic, POSIX-ish calls], then Windows was about twice as fast as Linux [although I think he may have been using the 2.4 kernel, which has a different scheduler than the 2.6 kernel]:

RunTime: Context switching, Part 1 - High-performance programming techniques on Linux and Windows
http://www.yqcomputer.com/ ,lnxw03=RTCS

RunTime: Context switching, Part 2 - High-performance programming techniques on Linux and Windows
http://www.yqcomputer.com/ ,lnxw01=ConSwiP2
 
 
 


Post by DFGra » Thu, 09 Dec 2004 23:10:11

20ms means 20 milliseconds (20 microseconds would be written 20 µs). I also did a bit of research and found out I was wrong. By default, background processes have a time slice of 10 ms and foreground processes have a time slice of 30 ms on WinXP Pro. You can change these using the performance options of the OS. I don't know exactly how to do it, but search your Windows help for "time slice" or "performance options" and you should find it. It is buried pretty deep.

Server versions do have a different time slice allocation by default. However, the "quantum", the smallest possible time slice, is 3.333 ms on XP (three quanta equal 10 ms).

This all gets back to my original statement. Windows is not a real-time operating system (well, one could argue CE is). Linux is not any better. If you really need one, use one designed for the task (LabVIEW RT, VxWorks, QNX, etc.).
 
 
 


Post by tarheel_ha » Fri, 10 Dec 2004 12:10:57

I wrote a little VI that allows you to run empty For Loops that do nothing at all except sleep for prescribed intervals [and there's even an option to run the For Loop with no sleep whatsoever].

On a 2.8 GHz Dell, running Windows XP & LabVIEW 7.0, 1000 iterations of both "1ms" sleeps and "2ms" sleeps give an elapsed time of about 2 seconds, so it seems to me that on this box the smallest increment of time is about 2 ms, rather than 20 ms.

The "0ms" sleeps do what I needed, however: on this box, you need to run the loop about 100,000 iterations before you get any noticeable elapsed time [of course, that could be a function of the granularity of the timer, but, at this point, I'm just happy to have found something that does what I need, so I'm not gonna worry about that].

If there's anyone from NI following this thread, it would be nice if we could get this "0ms" sleep trick into the official documentation, so that this "feature" doesn't get deprecated in the future.


Empty For Loops with Sleep.vi:
http://www.yqcomputer.com/