Unlike many other languages, JavaScript does not distinguish between integer and floating-point values. All numbers in JavaScript, including integers, are represented as floating-point values, using the IEEE 754 standard's 64-bit floating-point format.

All integers between -2^53 and 2^53, inclusive, can be represented exactly in the JavaScript number format. If you use integer values larger than this, you may lose precision in the trailing digits. Note, however, that certain JavaScript operations (such as the bitwise operators) work with 32-bit integers.
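This limit can be demonstrated directly in the console; `Number.MAX_SAFE_INTEGER` (2^53 - 1) names the largest integer whose neighbours are still distinguishable:

```javascript
// 2^53 - 1 is the largest "safe" integer; above 2^53, adjacent
// integers collapse onto the same floating-point value.
console.log(Number.MAX_SAFE_INTEGER);       // 9007199254740991
console.log(2 ** 53 === 2 ** 53 + 1);       // true — 2^53 + 1 cannot be represented
console.log(Number.isSafeInteger(2 ** 53)); // false
```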

A numeric literal is a number that appears directly in a JavaScript program. As discussed below, JavaScript allows numeric literals in a variety of formats. Any numeric literal can be preceded by a minus sign (-) to make the value negative.
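As a quick sketch (the variable names are illustrative), the same value can be written as a decimal or a hexadecimal literal, and a minus sign turns either into a negative value:

```javascript
let dec = 255;    // decimal literal
let hex = 0xff;   // hexadecimal literal — the same value
let neg = -255;   // minus sign applied to a numeric literal
console.log(dec === hex); // true
console.log(neg);         // -255
```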

## Numbers in JavaScript

### Octal Literals

An octal literal starts with the digit 0 and continues with a sequence of digits from 0 to 7. Because some implementations support octal literals and others do not, you should never write an integer literal with a leading zero: you cannot be sure whether it will be interpreted as octal or decimal. Octal literals are explicitly forbidden in ECMAScript 5's strict mode.

In the example below, the leading zero causes 075 to be interpreted as an octal literal, so the variable holds the decimal value 61:

```javascript
let n = 075; // octal literal (non-strict mode only)
console.log(n);
```

#### Output

61
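If you do need an octal value, ES2015 introduced the unambiguous `0o` prefix, which is also legal in strict mode:

```javascript
"use strict";
let n = 0o75;   // ES2015 octal prefix, allowed in strict mode
console.log(n); // 61
```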

### Floating Point Literals

Floating-point literals use the familiar syntax for real numbers: an integer part followed by a decimal point and a fractional part. Floating-point literals can also be written in exponential notation: a real number followed by the letter e (or E), an optional plus or minus sign, and an integer exponent. This notation represents the real number multiplied by 10 to the power of the exponent. The following is an example:

```javascript
let qty = 10;
let price = 29.30;
let amount = qty * price;
console.log(amount);
```

#### Output

293
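The exponential notation described above can be sketched like this (the variable names are illustrative):

```javascript
let billion = 1e9;   // 1 × 10^9
let micro = 3.2E-6;  // 3.2 × 10^-6; upper-case E works too
console.log(billion); // 1000000000
console.log(micro);   // 0.0000032
```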