On quirks in Ruby, leaving code better than you found it, VS Code shortcuts, and graph neural networks. Happy Thanksgiving, for those in the US!

1.5 is the midpoint between 0 and infinity in Ruby by Peter Zhu talks about a curious artifact in Ruby: when binary searching from 0 to infinity, the first value checked is … 1.5. This happens because of everyone’s favorite IEEE-754 floating point standard.

These steps:

  • taking 0 and infinity as floating point numbers
  • reading those bits as integers
  • taking the midpoint of those integers
  • treating that midpoint as bits
  • reading those bits back as a float

lead you to 1.5. Love it.
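
Ruby’s internals aside, the bit trick itself is easy to reproduce; here’s a minimal sketch in Python using the struct module (the helper names are my own):

```python
import struct

def float_to_bits(x: float) -> int:
    # Reinterpret the 8 bytes of an IEEE-754 double as an unsigned 64-bit int.
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def bits_to_float(b: int) -> float:
    # The reverse: reinterpret a 64-bit integer's bytes as a double.
    return struct.unpack("<d", struct.pack("<Q", b))[0]

lo = float_to_bits(0.0)            # 0x0000000000000000
hi = float_to_bits(float("inf"))   # 0x7FF0000000000000
mid = (lo + hi) // 2               # 0x3FF8000000000000
print(bits_to_float(mid))          # => 1.5
```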


Always leave the code better than you found it from the Letters to a New Developer blog makes the case for improving code when you fix bugs or answer questions. Document what something is doing; write or improve a test; refactor if you feel comfortable enough.

All of these actions not only help others because they improve the quality of the code, they also provide examples to other developers on how to do so. For example, it is far easier to write the second test in a suite than the first. You can cut and paste a lot of the setup code and tweak only what is different. The first bit of documentation will inspire more.
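
To make the second-test point concrete, here’s a hypothetical sketch in Python with pytest; the Cart class and the tests are made up for illustration:

```python
# Hypothetical example: the first test carries the setup cost; the second
# is mostly copy, paste, and tweak. All names here are invented.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float):
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_total_of_single_item():
    # The first test establishes the setup pattern.
    cart = Cart()
    cart.add("book", 12.50)
    assert cart.total() == 12.50

def test_total_of_multiple_items():
    # The second test reuses that setup nearly verbatim, changing only the data.
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 2.50)
    assert cart.total() == 15.00
```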

The comments on Reddit were pretty polarized. A lot of folks disagreed, saying that changes that didn’t immediately address a bug should be left to a separate PR. I love the idealism, but in my experience that would mean the changes simply never get made.


Learning all VSCode shortcuts evolved my developing habits by Thomas Kainrad recounts the author’s experience learning all 149 documented keyboard shortcuts in VS Code. Oh my!

Often keyboard shortcuts are seen as a way to do things faster, compared to using the mouse. While that is true, I think there is an even more [important] aspect to shortcut usage in IDEs: Many features are not used at all, if you don’t know the shortcut!

I love the spirit of the post, that learning your tools well is important, though learning all 149 shortcuts seems impractical. Learning Sublime well, back when I used it heavily, made me more productive with it. These days, I use Sublime shortcuts in VS Code instead of the native ones. I think I’ll make more of an effort to learn them now.


Deep learning on graphs: successes, challenges, and next steps by Michael Bronstein gives a nice overview of the state of graph deep learning and what the future looks like.

We have not witnessed so far anything close to the smashing success convolutional networks have had in computer vision.

Why not?

  • no standardized benchmarks, like ImageNet for computer vision
  • lack of software libraries for GNNs; PyTorch Geometric and Deep Graph Library exist, but aren’t nearly as popular as PyTorch itself or TensorFlow
  • scalability is challenging (and dedicated hardware doesn’t exist; see /papers/hardware_lottery_hooker.html)
  • dynamic graphs that evolve in real time are really hard
  • higher-order structure, beyond nodes and edges, is not yet incorporated into graph neural network models
  • GNNs are not well understood theoretically
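
For readers who haven’t met a GNN before, a single graph-convolution layer is less exotic than it sounds. Here’s a minimal sketch in plain NumPy of the GCN-style propagation rule from Kipf & Welling (the graph, features, and weights are made up for illustration):

```python
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # One GCN layer: H' = relu(D^{-1/2} (A + I) D^{-1/2} H W).
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    deg = a_hat.sum(axis=1)                     # node degrees of A + I
    d_inv_sqrt = np.diag(deg ** -0.5)           # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0)  # aggregate, transform, ReLU

# A 4-node path graph with 3-dim node features, projected down to 2 dims.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
weights = rng.normal(size=(3, 2))
print(gcn_layer(adj, features, weights))
```

Each node’s new representation is a weighted average of its neighbors’ (and its own) features, pushed through a linear layer and a nonlinearity; stacking layers lets information flow further across the graph.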

This is a great example of how a blog post can be useful academic output. Sure, with more depth and rigor this could have been published as a survey paper, but this reaches an entirely new audience in a far more accessible way.