• 0 Posts
  • 71 Comments
Joined 1 year ago
Cake day: February 1st, 2024

  • Your initial claim was that they couldn’t be measured that way. You’re right that they aren’t stored as bits, but that’s irrelevant to whether you can measure them using bits as the unit of information size.

    Think of it like this: in the 1980s there were breathless articles about CD-ROM technology and how, in the future, “the entire Encyclopaedia Britannica could be stored on one disc”. How was that possible to know? Encyclopedias were not digitally stored! You can’t measure them in bits!

    It’s possible because you could define a hypothetical analog-to-digital encoder, and then quantify how many bits coming off that encoder would be needed to store the entire corpus.

    This is the same thing. You can run anything through an ADC, and the spec on your ADC defines the bitrate you need to store the stream coming off it… in bits (per second).
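    The arithmetic behind that last point is simple enough to sketch. Here’s a minimal example (plain Python; the CD-audio numbers are just an assumed ADC spec, not anything from the thread):

    ```python
    # Estimate the bitrate needed to store an ADC's raw output stream.
    # bitrate = sample rate * bits per sample * number of channels

    def adc_bitrate(sample_rate_hz: int, bits_per_sample: int, channels: int = 1) -> int:
        """Bits per second required to store the raw digitized stream."""
        return sample_rate_hz * bits_per_sample * channels

    # Assumed spec: CD-audio-style ADC (44.1 kHz, 16 bits/sample, stereo).
    cd_audio = adc_bitrate(44_100, 16, channels=2)
    print(cd_audio)  # 1411200 bits/s, i.e. ~1.4 Mbit/s uncompressed
    ```

    Once you fix the encoder’s spec, any analog source has a well-defined size in bits, which is the whole point.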




  • Now, because this article got a little long, as per a friend’s suggestion, here’s a table of contents:

    Optimization gives us optimal programs

    Branch weights and the CPU’s branch predictor

    -O3 produces much faster code than -O2

    JavaScript interpreters JIT at runtime because they don’t know which paths are hot ahead of time

    If you have a compiler, you don’t need an interpreter

    The middle-end is target/platform-independent

    The compiler optimizes for data locality

    -O0 gives you fast compilation

    Templates are slow to compile

    Separate compilation is always worth it

    Why does link-time optimization (LTO) happen at link-time?

    Inlining is useful primarily because it eliminates a call instruction

    The role of the inline keyword

    The best production compiler to study is LLVM

    Undefined behavior only enables optimizations

    The compiler can “simply” define undefined behavior

    99% correctness rate is ok