Typecasting is simply a mechanism by which we can tell the compiler to treat a value as a data type different from the one it was originally declared with. When an expression is typecast to a different type, the compiler converts its value and treats the result as the new data type; the variable itself keeps its declared type.
#include <stdio.h>

int main(void)
{
    int a = 5, b = 8;
    float c = 0, d = 0;

    /* Integer division: the fractional part is discarded
       before the result is assigned to 'c'. */
    c = a / b;
    printf("\n [%f] \n", c);

    /* Both operands are cast to float, so the division
       is performed in floating point. */
    d = (float)a / (float)b;
    printf("\n [%f] \n", d);

    return 0;
}
In the above example, we first divide 'a' by 'b' without any typecasting. In the second attempt, we typecast both 'a' and 'b' to float and then divide them.
In the first attempt, the compiler knows both 'a' and 'b' are integers, so it performs an integer division: the fractional part of the result is discarded, and the truncated result of a/b is then assigned to 'c'. So despite 'c' being a float, the value it finally receives is an integer, i.e. '0' in this case.
In the second attempt, both 'a' and 'b' are individually typecast to 'float', so the compiler performs the division in floating point, retains the fractional part of the result and assigns the float value to 'd'.
The output of the above program:
[0.000000]
[0.625000]
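Two related points are worth noting. Because of C's usual arithmetic conversions, casting just one of the two operands is enough to force a floating-point division. And the cast converts only the value used in the expression; the variable itself stays an int. Here is a minimal sketch of both points (the variable 'e' is introduced here just for illustration, it is not part of the program above):

#include <stdio.h>

int main(void)
{
    int a = 5, b = 8;

    /* Casting only one operand is enough: the other operand is
       promoted to float by the usual arithmetic conversions,
       so the division is done in floating point. */
    float e = (float)a / b;
    printf("\n [%f] \n", e);       /* prints 0.625000 */

    /* The cast above did not change 'a' itself; it is still an
       int, and plain integer division still truncates. */
    printf("\n [%d] \n", a / b);   /* prints 0 */

    return 0;
}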