4 comments

  • prodigycorp 1 hour ago
    The best part about this is that you know the type of people/companies using langchain are likely the type that are not going to patch this in a timely manner.
    • wilkystyle 1 hour ago
      Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user.
      • prodigycorp 20 minutes ago
        No dig at you, but I take the average langchain user to be someone who either a) is using it because their C-suite heard about it at some AI conference and foisted it upon them, or b) doesn't care about software quality in general.

        I've talked to many people who regret building on top of it but they're in too deep.

        I think you may come to the same conclusions over time.

        • inlustra 0 minutes ago
          Great insight that you wouldn’t get without HN, thank you! What would you and your peers recommend?
      • XCSme 58 minutes ago
        I'm not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually adds more code than simply writing my own direct LLM API calls.

        Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.
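[The "direct API call" approach described above can be sketched in a few lines of standard-library Python. The endpoint, model name, and payload fields follow the public OpenAI chat-completions API, but treat the specifics as illustrative rather than a recommendation.]

```python
# Minimal sketch of a direct chat-completions call, stdlib only.
# Endpoint/model are the public OpenAI API; any compatible provider works.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build the HTTP request for a single-turn chat completion."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
    )

def complete(prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```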

        • teruakohatu 16 minutes ago
          JSON Structured Output from OpenAI was released a year after the first LangChain release.

          I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework.
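[A toy illustration of the point above: once the model is asked for structured JSON output, a small schema check replaces much of what a prompt framework does. Stdlib only; the schema fields and the sample reply are made-up placeholders, and real code would typically use a full JSON Schema validator or pydantic instead of this hand-rolled check.]

```python
# Hand-rolled "schema validation" for a structured model reply.
# SCHEMA maps required field names to their expected Python types.
import json

SCHEMA = {
    "required": {"title": str, "severity": str, "affected_versions": list},
}

def validate(reply_text: str) -> dict:
    """Parse a model reply and enforce the expected shape."""
    data = json.loads(reply_text)
    for key, typ in SCHEMA["required"].items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], typ):
            raise ValueError(f"field {key!r} should be {typ.__name__}")
    return data

# A well-formed (made-up) reply passes; malformed ones raise ValueError.
reply = '{"title": "Example finding", "severity": "high", "affected_versions": ["1.0", "1.1"]}'
print(validate(reply)["severity"])  # prints: high
```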

  • threecheese 1 hour ago
    Cheers to all the teams on sev1 calls on their holidays, we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure)
  • shahartal 4 hours ago
    CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.
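[For readers unfamiliar with the bug class named above, here is a generic toy illustration of how "object confusion" during serialization can leak secrets. This is NOT the actual langchain-core code or the CVE's mechanism; class names and fields are invented for the sketch.]

```python
# A class that marks some fields as secret so the serializer skips them.
class ChatModel:
    lc_secret_fields = ("api_key",)  # fields that must never be serialized

    def __init__(self, api_key: str, temperature: float = 0.0):
        self.api_key = api_key
        self.temperature = temperature

def safe_dump(obj) -> dict:
    """Serialize, honoring the secret-field marker on the object's class."""
    secrets = getattr(type(obj), "lc_secret_fields", ())
    return {k: v for k, v in vars(obj).items() if k not in secrets}

def confused_dump(obj) -> dict:
    """Object confusion: the object is treated as a plain record,
    so the secret-field filter is never consulted and the key leaks."""
    return dict(vars(obj))

model = ChatModel(api_key="sk-live-example", temperature=0.2)
print(safe_dump(model))      # secret filtered out: {'temperature': 0.2}
print(confused_dump(model))  # api_key leaks into the serialized output
```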
  • nubg 2 hours ago
    WHY on earth did the author of the CVE feel the need to feed the description text through an LLM? I get dizzy when I see this AI slop style.

    I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.

    • iamacyborg 2 hours ago
      > WHY on earth did the author of the CVE feel the need to feed the description text through an LLm?

      Not everyone speaks English natively.

      Not everyone has taste when it comes to written English.

      • crote 35 minutes ago
        I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!

        If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?

        • iinnPP 25 minutes ago
          I prefer reading the LLM output for accessibility reasons.

          More importantly though, the sheer amount of this complaint on HN has become a great reason not to show up.

          • crote 3 minutes ago
            > I prefer reading the LLM output for accessibility reasons.

            And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all CVE descriptions you're interested in through an LLM.

            However, having it processed by an LLM is essentially a one-way operation. If some people prefer the original and others prefer the LLM output, the obvious move is to share the original with the world and let LLM-preferring readers do the processing on their end. That way everyone gets the format they prefer. Sounds like a win-win, no?

          • roywiggins 19 minutes ago
            Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.
      • nubg 2 hours ago
        If I want to clean up, summarize, translate, make more formal, or make funnier some incoming text by sending it through an LLM, I can do it myself.
    • dorianmariecom 1 hour ago
      you can use chatgpt to reverse the prompt
      • XCSme 56 minutes ago
        Not sure if it's a joke, but I don't think an LLM is a bijective function.
      • small_scombrus 35 minutes ago
        ChatGPT can generate you a sentence that plausibly looks like the prompt