experience with floating point numbers in Matlab. I guess it
will be the same even if we use C or any other language. Actually,
2.0 was represented as 1.9999999999..., so the phase-selection
logic advanced by one step while the previous value was picked
for the data computation.

Does anyone have similar experiences with floating point handling?
Is there a good document on the subject?

Regards
Bharat

There are many good references. Many will come up if you put
"floating point" into Google. (Or even "flaoting point".)

The one called "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" is a good starting point.

-- glen

Thanks Glen for the wonderful reference.

Regards
Bharat

There are three cardinal rules that I find helpful in using floating
point. Nearly all the documents that I see relating to it are
elaborations of these three -- so I'll just give you the rules, and let
you extrapolate from there:

Rule 1: Floating point is a resource hog. This isn't an issue if
you're using a PC or other 'big iron' machine, because floating point is
already built in and hogging resources even when you're not using it.
But if you're designing a small system with a fixed-point processor,
or a system built around an FPGA, it costs you either low speed, large
silicon area, or some combination thereof.

Rule 2: Floating point cannot be counted on to be exact. Never, ever
put something in your code that boils down to
if (this_float == that_float)
Do this, and you will get lots and lots of false negatives. Most of the
time that I do this, I find that I can get away with
if (this_float < that_float)
-- and then I need to be strict about doing just one test. The rest of
the time you have to use
if (fabs(this_float - that_float) < some_tolerance)
-- and choosing the tolerance can be a bear.
(Note that this is what you seem to have run afoul of).

Rule 3: Floating point isn't as precise as you think. If you have a 32
bit floating point number, then you really only have 24 effective bits
of precision in the mantissa (23 stored, plus one implicit). Nearly
always you can fix this by just using double precision (which Scilab,
and I think Matlab, use as a matter of routine)
-- but on some machines this forks you right back to Rule 1. Of course,
if you're working in C or C++ you have to explicitly use 'double'.
Moreover, in the embedded world I've run across at least one compiler
(TI's Code Composter for the 28xx processor) that treated the 'double'
keyword as 'please *** me over by using single precision here when I
really need double' (they did this because of Rule 1, and their
processor's 'sorta floating point' architecture that was fast with
32-bit floating point, and _not_ fast with 64-bit).

--

Tim Wescott
Wescott Design Services
http://www.yqcomputer.com/

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.yqcomputer.com/

Bharat:

You must have had something else going on. Matlab uses the IEEE 754
double precision floating point format, which is able to represent
the integer 2 exactly.
--
Randy Yates % "My Shangri-la has gone away, fading like
Digital Signal Labs % the Beatles on 'Hey Jude'"
XXXX@XXXXX.COM %
http://www.yqcomputer.com/ % 'Shangri-La', *A New World Record*, ELO

two = (7.0 / 3.0) - (1.0 / 3.0).

Or

two = exp(log(2.0)).

or

two = 1 / (cos(%pi / 4)^2)

etc.

--

Tim Wescott

Of course. I presumed from his text that he was utilizing a floating
point variable as an integer index. Perhaps I was mistaken.
--
Randy Yates

Need to ask the OP. The correct type-safe way to do this would be to
use an integer type, but that's not the most natural way to do it
under Scilab.

And yes, Scilab is exceedingly safe with floating point if you're doing
'integer' math with it.

--

Tim Wescott

It's Matlab - one reason folks use it is so they don't HAVE to worry about "type safety."

And why are you talking about Scilab?
--
Randy Yates

Because I forgot the OP was working in Matlab and not Scilab! Although
both statements work the same in Scilab or Matlab.

And you don't _have_ to worry about type safety in Scilab or Matlab
until you trip over it while carrying a platter full of precious crystal.

And yes, on my bad days I _am_ a bit of a code nazi.

--

Tim Wescott

>Rule 1: Floating point is a resource hog. This isn't an issue if

What about the case where one needs a very large dynamic range but not
necessarily a lot of precision? With fixed point arithmetic a large
dynamic range forces huge bitwidths. With floating point you have some
freedom to trade precision for dynamic range assuming you can create your
own floating point implementation. I do agree that IEEE 754 is a hog in
FPGAs.

Oh and yes I have had this happen to me in Matlab. I had a variable that
looked like an integer and Matlab even displayed the value as an integer
when I printed it to the screen. I had to change Matlab's formatting to
long e (format long e) to see that the value was not an integer. I'm
assuming you had several floating point calculations that should have
resulted in the integer "2" but didn't due to the quantization error that
may occur during every floating point operation.

This is true. But many problems don't have all that large a dynamic
range, and people are sometimes lazy. Part of it is that very few
programming languages support scaled fixed point: one is expected
to do the scaling oneself, with no help for I/O.

Very true. The pre/post normalization for an adder is much bigger
than the adder itself. In many cases, bigger than a floating point
multiplier!

-- glen