When printing floating-point numbers, print full precision by default.

To keep debug output readable, we still use the shorter 6-digit precision
where possible, but only when that representation round-trips back to the
same value.

This way, when a test fails due to a very small difference in floating-point
numbers, users will have enough digits to see the difference.
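
For illustration, here is a minimal sketch of the round-trip idea described
above (not googletest's actual implementation; the PrintWithRoundTripPrecision
name is hypothetical): print with the default 6-digit precision first, and
fall back to max_digits10 digits only when the short form does not parse back
to the same value.

#include <iostream>
#include <limits>
#include <sstream>
#include <string>

// Hypothetical helper (not the googletest implementation): prefer the shorter
// 6-digit form when it round-trips, otherwise print max_digits10 significant
// digits so the full precision is visible in failure messages.
template <typename FloatType>
std::string PrintWithRoundTripPrecision(FloatType value) {
  std::ostringstream short_form;
  short_form.precision(6);  // the default ostream precision
  short_form << value;

  std::istringstream parser(short_form.str());
  FloatType reparsed = FloatType();
  parser >> reparsed;
  if (reparsed == value) {
    return short_form.str();  // the readable 6-digit form is exact
  }

  std::ostringstream full_form;
  full_form.precision(std::numeric_limits<FloatType>::max_digits10);
  full_form << value;
  return full_form.str();
}

int main() {
  std::cout << PrintWithRoundTripPrecision(1.10000002f) << "\n";  // 1.1
  std::cout << PrintWithRoundTripPrecision(1.10000014f) << "\n";  // 1.10000014
  std::cout << PrintWithRoundTripPrecision(-2.5) << "\n";         // -2.5
  return 0;
}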

PiperOrigin-RevId: 488958311
Change-Id: Ibcac43f48a97006d89217530c69386cc4fa2735c
Author: Abseil Team
Date: 2022-11-16 09:17:58 -08:00
Committed by: Copybara-Service
Parent: 4408a0288b
Commit: 9c332145b7
2 changed files with 84 additions and 1 deletion


@@ -458,7 +458,15 @@ TEST(PrintBuiltInTypeTest, Int128) {
 
 // Floating-points.
 TEST(PrintBuiltInTypeTest, FloatingPoints) {
-  EXPECT_EQ("1.5", Print(1.5f));   // float
+  // float (32-bit precision)
+  EXPECT_EQ("1.5", Print(1.5f));
+
+  EXPECT_EQ("1.0999999", Print(1.09999990f));
+  EXPECT_EQ("1.1", Print(1.10000002f));
+  EXPECT_EQ("1.10000014", Print(1.10000014f));
+  EXPECT_EQ("9e+09", Print(9e9f));
+
+  // double
   EXPECT_EQ("-2.5", Print(-2.5));  // double
 }
 