Commits

Wojtek Czekalski committed 1ed9da86db2
Fix precision of float to string to max significant decimals

The default precision used when converting floating point numbers to strings leads to many confusing results. Take a Float32 value 1.00000000 and a value 1.00000012 of the same type: these two are obviously not equal, yet if we log them, both are displayed as the same value. A much more helpful display uses 9 decimal digits: [1.00000000 != 1.00000012], showing that the two values are in fact different. (Example taken from: http://www.boost.org/doc/libs/1_59_0/libs/test/doc/html/boost_test/test_output/log_floating_points.html)

I'm by no means a floating point expert, but having investigated this issue I found numerous sources saying that the "magic" numbers 9 and 17, for 32-bit and 64-bit values respectively, are the correct precisions. They represent the maximum number of decimal digits needed for a value to round-trip: as far as their floating-point representations are concerned, 0.100000000000000005 and 0.1000000000000000 are the same number.
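
To see the effect concretely, here is a minimal, self-contained C++ sketch (illustrative only, not code from this commit) using std::numeric_limits<T>::max_digits10, which is exactly 9 for float and 17 for double:

```cpp
#include <iomanip>
#include <iostream>
#include <limits>

int main() {
    float a = 1.00000000f;
    float b = 1.00000012f;  // one representable step above 1.0f

    // Default stream precision (6 significant digits): both print as "1".
    std::cout << a << " != " << b << '\n';

    // 9 significant digits (max_digits10 for float): the difference shows.
    std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
              << a << " != " << b << '\n';  // 1 != 1.00000012

    // 17 significant digits (max_digits10 for double) round-trip any double.
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << 0.1 << '\n';  // 0.10000000000000001
}
```

Printing with max_digits10 guarantees that the decimal string parses back to the exact same binary value, which is why 9 and 17 are the right choices for 32-bit and 64-bit floats.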