CUDA Tile Open Sourced

(github.com)

92 points | by JonChesterfield 6 days ago

5 comments

  • xmorse 1 hour ago
    Writing this in Mojo would have been so much easier
    • 3abiton 59 minutes ago
      It's barely gaining adoption though. The lack of buzz is a chicken-and-egg issue for Mojo. I fiddled with it briefly (mainly to get some of my Python scripts working with it), and it was surprisingly easy. It'll shoot up one day for sure if Lattner doesn't give up on it early.
    • pjmlp 5 minutes ago
      It would help if they were not so macOS- and Linux-focused.

      Julia and Python GPU JITs work great on Windows, and many people only get Windows systems by default at work.

    • bigyabai 28 minutes ago
      Use-cases like this are why Mojo isn't used in production, ever. What does Nvidia gain from switching to a proprietary frontend for a compiler backend they're already using? It's a legal headache.

      Second-rate libraries like OpenCL had industry buy-in because they were open. They went through standards committees and cooperated with the rest of the industry (even Nvidia) to hear out everyone's needs. Lattner gave up on appealing to that crowd the moment he told Khronos to pound sand. Nobody should be wondering why Apple or Nvidia won't touch Mojo with a thirty-nine and a half foot pole.

  • boywitharupee 51 minutes ago
    shouldn't the title be "CUDA Tile IR Open Sourced"?
  • toolboxg1x0 43 minutes ago
    NVIDIA tensor core units, where the second column in kernel optimization is producing a test suite.
  • CamperBob2 1 hour ago
    Fun game: see how many clicks it takes you to learn what MLIR stands for.

    I lost count at five or six. Define your acronyms on first use, people.

    • roughly 43 minutes ago
      The ol’ TMA problem.
    • fragmede 1 hour ago
      I did it in three. I selected it in your comment, and then had to hit "more" to get to the menu to ask Google about it, which brought me to https://www.google.com/search?q=MLIR which says: MLIR is an open-source compiler infrastructure project developed as a sub-project of the LLVM project. Hopefully that helps.

      Get better at computers and stop needing to be spoon-fed information, people!

      • iaebsdfsh 12 minutes ago
        From Wikipedia: The name "Multi-Level Intermediate Representation" reflects the system’s ability to model computations at various abstraction levels and progressively lower them toward machine code.
      • reactordev 57 minutes ago
        In this day and age, asking questions about what something is is a minefield of “just ask AI” and “You should know this”. Let’s stop putting down people who ask questions and root out those that have shitty answers.
        • ThrowawayTestr 12 minutes ago
          Google is nearly 30 years old
          • pjmlp 3 minutes ago
            And we are not counting Yahoo, AltaVista, Ask Jeeves, MSN, ...
        • fragmede 10 minutes ago
          I get why it feels frustrating when someone snaps "just google it." Nobody likes feeling dumb. That said, there’s a meaningful difference between asking a genuine question and demanding that every discussion be padded to accommodate readers who won’t even type four letters into a search bar. Expecting complete spoon-feeding in technical threads isn’t curiosity; it’s a refusal to engage. Learning requires participation.
          • CamperBob2 6 minutes ago
            You're posting a spirited defense of substandard technical writing. Just curious -- why is that?
      • poita66 54 minutes ago
        And yet you didn’t tell us what it stands for, just what it is. The person you’re responding to was specifically talking about finding out what it stands for
    • piskov 41 minutes ago
      If only there was a chat-based app that you could ask questions to.
  • jauntywundrkind 2 hours ago
    Will be interesting to see if Nvidia and others have any interest & energy in getting this adopted more widely, and if there actually is an ecosystem forming around it.

    Google leading XLA & IREE, with awesome intermediate representations used by lots of hardware platforms, backing really excellent JAX & PyTorch implementations, and giving folks shared tools for layout & optimization (rough sketch below): they've really built an amazing community.

    There's still so much room for planning/scheduling, so much hardware we have yet to target. RISC-V has really interesting vector instructions, for example, and it seems like there's so much exploration / work to do to better leverage that.

    Nvidia has partners everywhere now. NVLink is used by Intel, AWS Trainium, others. And yesterday, the exclusive Groq license that Nvidia paid for?! Seeing how and when CUDA Tiles emerges will be interesting. Moving from fabric partnerships, up, up, up the stack.
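
    On the shared-IR point: a rough sketch of what that looks like from the JAX side, assuming a recent JAX release (the lowered StableHLO text is the portable IR that XLA consumes and that IREE can also ingest; names here are just illustrative):

      import jax
      import jax.numpy as jnp

      def saxpy(a, x, y):
          return a * x + y                       # plain array math, no kernel code

      x = jnp.ones((1024,), dtype=jnp.float32)
      y = jnp.ones((1024,), dtype=jnp.float32)

      lowered = jax.jit(saxpy).lower(2.0, x, y)  # trace + lower, without executing
      print(lowered.as_text())                   # StableHLO module text handed to the backend compiler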

    • pjmlp 1 hour ago
      For NVidia it suffices that this is a Python JIT that lets you program CUDA compute kernels directly in Python instead of C++ (rough sketch of the style below); yet another way Intel and AMD, alongside the Khronos APIs, lag behind on developer experience for GPU compute programming.

      Ah, and Nsight also supports debugging Python CUDA Tiles.

      https://developer.nvidia.com/blog/simplify-gpu-programming-w...
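
      A minimal sketch of that kernel-in-Python style, using Numba's cuda.jit purely as a stand-in (the CUDA Tile Python API itself isn't shown in this thread, so treat the details as illustrative):

        import numpy as np
        from numba import cuda

        @cuda.jit
        def saxpy(a, x, y, out):
            i = cuda.grid(1)                     # absolute thread index
            if i < x.shape[0]:
                out[i] = a * x[i] + y[i]

        n = 1 << 20
        x = np.random.rand(n).astype(np.float32)
        y = np.random.rand(n).astype(np.float32)
        out = np.zeros_like(x)
        threads = 256
        blocks = (n + threads - 1) // threads
        # JIT-compiles the kernel on first launch; host arrays are copied to/from the GPU automatically
        saxpy[blocks, threads](np.float32(2.0), x, y, out)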

      • Q6T46nT668w6i3m 1 hour ago
        Slang is a fantastic developer experience.
        • pjmlp 11 minutes ago
          Especially when using the tooling from NVIDIA, who created it before offering it to Khronos as a GLSL replacement.
    • Moosdijk 1 hour ago
      > There's still so much room for planning/scheduling, so much hardware we have yet to target

      this is nicely illustrated by this recent article:

      https://news.ycombinator.com/item?id=46366998

    • turtletontine 2 hours ago
      On the RISC-V vector instructions, could you elaborate? Are the vector extensions substantially different from those in ARM or x86?
      • adgjlsfhk1 1 hour ago
        it's fairly similar to Arm's SVE2, but very different from the x86 side in that the vector length is variable rather than fixed