12/19/2023

Overflow error and roundoff error

So ideally whatever adjusts the flight path to account for things like wind is more than enough to account for any floating-point roundoff error. Even the earth's gravitational field varies by a few permille depending on where you are on the surface. And, really, I figure these systems would generally rely on some sort of feedback loop anyway, since there's no way the physical system is directly controllable to 52 bits of precision. I know of no reason to assume that avoiding floats would make it materially easier.

That said, numeric errors have caused real failures. One engineering issue with the Therac-25 was setting a flag by incrementing it, resulting in occasional overflows that bypassed safety checks. Ironically, the Ariane 5's infamous first test flight failed because of truncation to a 16-bit integer, and rounding error can send a missile off course.

Rust has your back when it comes to pointer handling, but it doesn't prevent you from shooting yourself in the foot with bad floating-point handling, so you have to either learn the arcana or stay away (or be comfortable with your results possibly being nonsense). I don't intend to make it sound hopeless, but it's definitely a tool with a lot of sharp edges, and I would think twice before introducing approximations into any application. The analogy with memory safety seems apt: you will get crazy results unless you follow some rules that very few people understand well, and in many cases nothing will tell you that you've broken the rules.

That's where floats are most insidious: unless you know what every computer scientist should know about floating point and take the proper precautions, you will get numbers out that have nothing to do with the value you wanted to calculate. Of course, most people don't bother to go through the effort of doing this, or lack the knowledge to do so.

It's a nontrivial verification condition, but you can justify it in a lot of real-world cases.
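As a minimal sketch of the Therac-25 failure mode mentioned above: if a one-byte flag is set by incrementing rather than by assignment, every 256th increment wraps it back to zero, and a safety check that tests the flag for nonzero is silently skipped on that pass. The function name and 8-bit width here are hypothetical, for illustration only.

```python
# Hypothetical sketch: an 8-bit flag that is *incremented* (rather than
# set to 1) wraps to 0 every 256th time, so a safety check that tests
# "flag != 0" is silently bypassed on that pass.

def set_flag_by_increment(flag: int) -> int:
    """Emulate an 8-bit counter: increment, wrapping at 256."""
    return (flag + 1) & 0xFF

flag = 0
skipped_checks = 0
for _ in range(256):
    flag = set_flag_by_increment(flag)
    if flag == 0:          # overflow: the "check needed" signal was lost
        skipped_checks += 1

print(skipped_checks)      # the 256th increment wraps to 0
```

The fix in the real system was the obvious one: assign a constant to the flag instead of incrementing it, so it can never wrap.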
Maybe in your application that's okay because you are doing at most N operations, or maybe it's okay because you renormalize periodically, or maybe it's okay because you can't get outside of the game arena (or at least, you hope you can't). Most operations make the values a bit larger than they were to start with, so whatever your bound, you will sooner or later exceed it. If you want to say "these values have to be bounded near 1.0" then okay, that's one problem solved(ish) in return for another one: making sure your values are always bounded near 1.0, which may or may not be easy depending on what you are doing.

You can argue rhetorically that C is awful, but people can still use it and be correct in doing so. What you're arguing is merely rhetorical, while alankharp is giving practical advice. In some domains there aren't alternatives, and you see people using C because they have to; there are many more alternatives to C than there are to floats. The C specification is indeed a mess, but there's plenty of evidence that you can do useful work with it.

Every number system will act unexpectedly at some point, and the raison d'ĂȘtre of floats is that you can coast a long way before they behave unexpectedly, as long as a reasonable amount of error is tolerable. There really is no way for a novice to expect everything. Unexpected overflow errors happen much more readily with integers. Unexpected performance degradation happens much more readily with rationals. Unexpected rounding errors happen much more readily with fixed point, which also makes your precision depend on the magnitude of a number, so that the choice of units has an impact that may be unexpected.

Nobody is saying floats will solve all your problems, or that they don't have trade-offs, only that they are very efficient in terms of what you get for the trade-offs you make. That's why they are so compelling at the hardware level.
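The point about fixed point tying precision to the choice of units can be sketched directly. Assuming a hypothetical Q16.16 format (values stored as integer multiples of 1/65536), the same physical length survives fine in one unit and rounds away to nothing in another:

```python
# Hypothetical Q16.16 fixed-point format: values are stored as an
# integer count of a fixed quantum of 1/65536.
SCALE = 2**16

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def from_fixed(n: int) -> float:
    return n / SCALE

# The same length, 1.5 millimeters, stored in meters vs. kilometers:
in_meters = from_fixed(to_fixed(0.0015))        # small relative error
in_kilometers = from_fixed(to_fixed(0.0000015)) # rounds to exactly 0.0

print(in_meters, in_kilometers)
```

With floats, by contrast, relative precision is (nearly) independent of magnitude, which is precisely why a novice can coast further before the unit choice bites.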
If you're saying you'll take a 2x performance hit to avoid scaling your inputs one time, well, why would you do that? Isn't it easier to pay the one-time cost of learning how to use floats correctly than to suffer a performance penalty in perpetuity? That's the problem I'm having with this discussion: you keep saying "in general" and "independent of the values" as though that's the problem you have to solve. Yes, you have to design algorithms so that your important calculations are unit-scaled, but that's not a major challenge; you just have to scale your inputs. The error is bounded by how far away from 1.0 your operands are. You're making it sound like any float operation is liable to explode with unbounded error. That's not so much a "problem" as just the way floats work. Also, there are no "correctness bounds of float operations" in the sense of a bound on the error independent of the values being operated on; that's the problem.
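One way to see what "scale your inputs" buys you, as a sketch: for IEEE 754 doubles, the relative error of each rounded operation is bounded by machine epsilon (about 2.2e-16), but the absolute error is not, so an increment that is exact near 1.0 is absorbed entirely at 1e16:

```python
import sys

# Relative error per rounded operation is bounded by machine epsilon...
print(sys.float_info.epsilon)   # 2**-52 for IEEE 754 doubles

# ...but absolute error depends on the operands' magnitude.
# Near 1.0 this addition is exact:
print((1.0 + 0.5) - 1.0)        # 0.5
# At 1e16 the very same increment is absorbed completely:
print((1e16 + 0.5) - 1e16)      # 0.0
```

So both sides of the argument above are right in a sense: there is a value-independent bound on the *relative* error of each operation, and there is no value-independent bound on the *absolute* error, which is exactly why keeping the important intermediate values near unit scale matters.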