Convert a given floating point binary number to its decimal equivalent
There are three questions. The second and third questions must be done in MATLAB. Thank you.
1. (i) What is a floating point number? Why is it “floating point”?
(ii) Explain the use of the mantissa and exponent for its representation on a computer. Why is it represented this way?
(iii) Explain the terms “normalization”, “hidden bit” and “bias” in the context of floating point representation.
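As a concrete illustration of the mantissa/exponent split asked about in (ii): although the later questions must be answered in MATLAB, Python's standard `math.frexp` makes the decomposition easy to see, since it splits any finite float into a fraction and a power-of-two exponent.

```python
import math

# Every finite float x can be written as m * 2**e with 0.5 <= |m| < 1.
# math.frexp returns that (m, e) pair; math.ldexp rebuilds x from it.
m, e = math.frexp(6.0)
print(m, e)              # 0.75 3, since 6.0 == 0.75 * 2**3
print(math.ldexp(m, e))  # 6.0
```

Note that `frexp` normalizes the mantissa into [0.5, 1), which is one common normalization convention; the hidden-bit representation in question 2 uses a significand in [1, 2) instead.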
2. Write a program that converts a given floating point binary number with a 24-bit normalized
mantissa and an 8-bit exponent to its decimal (i.e. base 10) equivalent. For the mantissa, use the
representation that has a hidden bit, and for the exponent use a bias of 127 instead of a sign bit. Note that your program must also handle negative numbers in the mantissa.
Use your program to answer the following questions:
(a) Mantissa: 11110010 11000101 01101010, exponent: 01011000. What is the base-10 number?
(b) What is the largest number (in base 10) the system can represent?
(c) What is the smallest non-zero positive base-10 number the system can represent?
(d) What is the smallest difference between two such numbers? Give your answer in base 10.
(e) How many significant base-10 digits can we trust using such a representation?
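The assignment asks for MATLAB, but a minimal sketch of the converter in Python illustrates the decoding logic. It assumes one reasonable reading of the spec: the first of the 24 mantissa bits is the sign, the remaining 23 bits are the fraction f with a hidden leading 1 (significand 1.f), and the exponent is stored excess-127. The name `fp_to_decimal` is invented for this sketch.

```python
def fp_to_decimal(mantissa_bits, exponent_bits):
    """Decode a 24-bit mantissa and an 8-bit exponent to base 10.

    Assumed conventions: first mantissa bit = sign, remaining 23 bits =
    fraction f with a hidden leading 1 (significand 1.f), exponent
    stored with a bias of 127.
    """
    m = mantissa_bits.replace(" ", "")
    e = exponent_bits.replace(" ", "")
    sign = -1.0 if m[0] == "1" else 1.0
    fraction = int(m[1:], 2) / 2**23   # 23 stored fraction bits
    exponent = int(e, 2) - 127         # undo the bias
    return sign * (1.0 + fraction) * 2.0**exponent

# Part (a): roughly -3.45e-12 under these conventions.
print(fp_to_decimal("11110010 11000101 01101010", "01011000"))
```

Under these assumptions the value formula coincides with IEEE 754 single precision, which gives a handy cross-check; the extreme-value questions (b)–(d) then follow by feeding the converter all-ones and all-zeros bit patterns, though the exact answers depend on which conventions your program adopts.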
The third question was in the attachment.