## 32-bit Number * 32-bit Number = 64-bit result

### 32-bit Number * 32-bit Number = 64-bit result

I need to do multiplication that mirrors how C handles integer
multiplication. That is, one 32-bit number times another 32-bit number
equals the least significant 32-bits of the 64-bit result.

In theory, I could mimic that behavior like so:

    (x * x) & 0xFFFFFFFF

But this doesn't work quite right, because the initial multiplication
can produce a result so large that JavaScript loses precision.

Any thoughts? Ideas?
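To see the problem concretely, here is a sketch comparing the naive expression against exact integer arithmetic. BigInt postdates this thread and is used here only as an oracle:

```javascript
// The naive approach: multiply as doubles, then mask to 32 bits.
var x = 0xFFFFFFFF;
var naive = (x * x) & 0xFFFFFFFF;

// Exact arithmetic: (2^32 - 1)^2 = 2^64 - 2^33 + 1,
// whose low 32 bits are 1.
var exact = Number(BigInt(x) * BigInt(x) & 0xFFFFFFFFn);

console.log(naive); // 0 -- the double rounds away the low bits first
console.log(exact); // 1
```

The product needs 64 bits, but a double carries only 53 significant bits, so the low bits are already gone before the mask is applied.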

### 32-bit Number * 32-bit Number = 64-bit result

"Jeff.M" < XXXX@XXXXX.COM > writes:

or (x * x) | 0, since bitwise operations truncate to 32 bits ...

That is a problem, yes.

Split the numbers into smaller parts and do the multiplication yourself.
Something like:

    function mul32(n, m) {
      n = n | 0;
      m = m | 0;
      var nlo = n & 0xffff;
      var nhi = n >> 16; // Sign extending.
      var res = ((nlo * m) + (((nhi * m) & 0xffff) << 16)) | 0;
      return res;
    }

(NOT tested thoroughly!)
/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'
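For what it's worth, the mul32 above can be spot-checked against exact arithmetic. This sketch uses BigInt.asIntN as an oracle (an anachronism for this thread, but convenient):

```javascript
// mul32 as posted above, unchanged.
function mul32(n, m) {
  n = n | 0;
  m = m | 0;
  var nlo = n & 0xffff;
  var nhi = n >> 16; // Sign extending.
  return ((nlo * m) + (((nhi * m) & 0xffff) << 16)) | 0;
}

// Oracle: exact product, truncated to a signed 32-bit value,
// matching what the trailing | 0 produces.
function mul32Exact(n, m) {
  return Number(BigInt.asIntN(32, BigInt(n | 0) * BigInt(m | 0)));
}

[[0xFFFFFFFF, 0xFFFFFFFF], [0x12345678, 0x9ABCDEF0],
 [123456789, 987654321], [-5, 7]].forEach(function (pair) {
  console.log(mul32(pair[0], pair[1]) === mul32Exact(pair[0], pair[1]));
});
```

The key point is that neither partial product exceeds 48 significant bits (16-bit half times 32-bit operand), so every intermediate value stays exact in a double.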

### 32-bit Number * 32-bit Number = 64-bit result

On May 2, 3:54 pm, Lasse Reichstein Nielsen < XXXX@XXXXX.COM > wrote:

> n = n | 0;
> m = m | 0;
> var nlo = n & 0xffff;
> var nhi = n >> 16; // Sign extending.
> var res = ((nlo * m) + (((nhi * m) & 0xffff) << 16)) | 0;
> return res;
>
> (NOT tested thoroughly!)
> Lasse Reichstein Holst Nielsen
> 'Javascript frameworks is a disruptive technology'

Beautiful. Thanks.

### 32-bit Number * 32-bit Number = 64-bit result

Dr J R Stockton < XXXX@XXXXX.COM > writes:

That's well spotted - there's no need to make the numbers small, it only
matters to keep the number of significant digits below 53.
With that in mind, the last line can be reduced (slightly) to:

    var res = (((nhi * m) << 16) + nlo * m) | 0;

(the first summand has 32 bits of precision, the second 48, and they
overlap, so it should be fine).

/L
--
Lasse Reichstein Holst Nielsen
'Javascript frameworks is a disruptive technology'
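Reading that parenthetical back into code: the & 0xffff can be dropped because << 16 already truncates to 32 bits. A sketch of the shortened function (my reconstruction, not verbatim from the post):

```javascript
function mul32short(n, m) {
  n = n | 0;
  m = m | 0;
  var nlo = n & 0xffff;
  var nhi = n >> 16; // Sign extending.
  // (nhi * m) << 16 has at most 32 significant bits, nlo * m at most 48;
  // their sum stays well below 2^53, so the addition is exact.
  return (((nhi * m) << 16) + nlo * m) | 0;
}
```

For example, mul32short(0xFFFFFFFF, 0xFFFFFFFF) still returns 1, the correct low 32 bits of (2^32 - 1)^2 interpreted as a signed integer.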