In C programming, the int keyword is used to declare or define integer variables. An integer in C is a whole number: it can hold zero, positive, and negative values, but not decimal (fractional) values.
But what happens if we assign a real number (a decimal value) to an integer variable?
Such an assignment does not raise any kind of error. If you assign a value of one numeric type (integer or floating-point) to a variable of another numeric type, the value is implicitly converted to the target type; when a floating-point value is converted to an integer, the fractional part is discarded.
There are two size qualifiers, short and long, that can be applied to integers in C to provide integers of different lengths.
A short integer can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine.
In C, it is denoted by the short keyword. A short is required to be at least 16 bits, and is often smaller than a standard int, but this is not required.
A long integer can represent a whole number that may take more storage, and whose range is at least as large as that of a standard int (often twice as many bits). In C, it is denoted by the long keyword.
Long integer literals are written with the suffix 'l' or 'L'. Although both lowercase 'l' and uppercase 'L' are allowed, it is strongly recommended to always use 'L', because lowercase 'l' is easily mistaken for the digit 1.
These integer types, whether short, long, or plain int, are further divided into two categories: signed and unsigned.
Signed int variables (whether short, long, or plain int) can hold positive as well as negative values.
The range of short and long signed integers varies from compiler to compiler in C. But for a signed type that uses n bits, the range runs from -2^(n-1) to 2^(n-1) - 1. For example, a 16-bit signed short ranges from -32,768 to 32,767, and a 32-bit signed int or long ranges from -2,147,483,648 to 2,147,483,647.
Integers in C are signed by default, which means they can hold negative values, positive values, and zero.
Unsigned variables are those which can hold zero and positive values only.
The range of short and long unsigned integers also varies from compiler to compiler in C. As we saw, for a typical 32-bit compiler the size of an unsigned short is 2 bytes and the size of an unsigned long is 4 bytes.
So, for an unsigned type that uses n bits, the range runs from 0 to 2^n - 1. A 16-bit unsigned short therefore ranges from 0 to 65,535, and a 32-bit unsigned long from 0 to 4,294,967,295.