this post was submitted on 18 Aug 2023

From Russ Cox:

Lumping both non-portable and buggy code into the same category was a mistake. As time has gone on, the way compilers treat undefined behavior has led to more and more unexpectedly broken programs, to the point where it is becoming difficult to tell whether any program will compile to the meaning in the original source. This post looks at a few examples and then tries to make some general observations. In particular, today's C and C++ prioritize performance to the clear detriment of correctness.

I am not claiming that anything should change about C and C++. I just want people to recognize that the current versions of these languages sacrifice correctness for performance. To some extent, all languages do this: there is almost always a tradeoff between performance and slower, safer implementations. Go has data races in part for performance reasons: we could have done everything by message copying or with a single global lock instead, but the performance wins of shared memory were too large to pass up. For C and C++, though, it seems no performance win is too small to trade against correctness.
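As a concrete illustration of the breakage Cox describes, here is a minimal C sketch (an illustrative example, not one taken from his post; the function and its name are invented): because the pointer is dereferenced before the null test, a conforming optimizer is entitled to assume it is non-null and delete the check, typically at -O2 in GCC or Clang.

```c
/* Illustrative sketch, not from Cox's post: the dereference of p
 * happens before the null test, so the optimizer may assume
 * p != NULL and remove the check entirely. */
int value_or_zero(int *p) {
    int v = *p;        /* undefined behaviour if p == NULL */
    if (p == NULL)     /* provably "dead" to the optimizer */
        return 0;      /* often compiled away at -O2 */
    return v;
}
```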

mrkite@programming.dev | 1 point | 1 year ago (edited)

My problem with C/C++ is that the people behind the spec have sacrificed our sanity in the name of "compiler optimization". Signed overflow behaves the same on every CPU on the planet, so why is it undefined behaviour? Even more insane: they specify that intN_t must be implemented via two's complement... but signed overflow is still undefined, because compilers want to pretend they run on pixie dust instead of real hardware.
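A minimal sketch of what that comment is getting at (illustrative code, assuming GCC or Clang; the function name is invented):

```c
#include <limits.h>
#include <stdio.h>

/* Signed overflow is undefined behaviour, so the optimizer may
 * assume x + 1 > x always holds for signed int and fold this
 * function to "return 1". */
static int always_greater(int x) {
    return x + 1 > x;
}

int main(void) {
    /* Typically prints 1 at -O2, even though INT_MAX + 1 wraps to
     * INT_MIN on two's-complement hardware; compiling with -fwrapv
     * (wraparound defined) makes it print 0 instead. */
    printf("%d\n", always_greater(INT_MAX));
    return 0;
}
```

The -fwrapv flag exists precisely because of this gap between the C abstract machine and the two's-complement hardware everything actually runs on.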