The problem is that PCB design is hard. Writing a "cost function" for placement is basically impossible when later design steps are going to introduce hard constraints, and earlier design steps are actually extremely flexible.
For example, the general rule of thumb is to place one 100 nF decoupling capacitor per power pin. But in practice there isn't always space for that. Do you suboptimally route your critical high-speed traces to make room for one? Do you add board layers for it? Do you switch to a smaller (and more expensive to manufacture) capacitor package? Do you move it further away from the chip, making it significantly less effective? Do you make two power pins share a single capacitor? Do you switch to a different IC package, or even a completely different chip with an easier pinout?
What is the impact of your choice on manufacturing requirements, manufacturing cost, part cost, part availability, testability, repairability, EMC/FCC/whatever certification?
Every option could literally be free, cost tens of millions, or anything in between. Parts documentation is already woefully incomplete as it is; trying to automate routing by requiring people to provide data describing basically the entire world just isn't realistic.
I do think autorouting is largely a UI problem for this reason. Specifying the constraints is very difficult, especially when it's also tied into assessing things like power distribution, where the 100 nF rule of thumb is almost certainly suboptimal: proximity probably matters less than you would think, and you can wind up with too much capacitance. But actually evaluating what matters is so much more complex that, unless it's really critical, any analysis tends to be not much better than a blind guess that works well enough most of the time. I really would like to figure out a better set of rules for this: something less heavyweight than a full simulation but still at least vaguely quantitative about the tradeoffs.
To me innovation in autorouting means being able to 'have a conversation' with it: being able to easily adjust things and see the results and map out the tradeoffs would be very useful, but it doesn't seem like this is an area that's being pushed too hard.
For the traditional "100 nF per pin" problem, there is an actual constraint-based solution. What you _really_ want is an impedance and cross-impedance constraint on current loops through power pins. That's, ultimately, what matters: not some rule of thumb, but actual physics that attempts to quantify the board's response to the chip's changing load.
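A rough sketch of what such a constraint looks like in practice: derive a target impedance from the rail's ripple budget and worst-case load step, then check a lumped model of the decoupling network against it. All component values and the operating point below are illustrative assumptions, not from any datasheet.

```python
import math

def target_impedance(v_rail, ripple_pct, transient_current):
    """Z_target = allowed ripple voltage / worst-case current step."""
    return (v_rail * ripple_pct) / transient_current

def cap_impedance(f, c, esr, esl):
    """Lumped series R-L-C model of a decoupling network at frequency f."""
    w = 2 * math.pi * f
    reactance = w * esl - 1 / (w * c)
    return math.sqrt(esr**2 + reactance**2)

# Example: 1.8 V rail, 3% ripple budget, 2 A load step -> Z_target = 27 mOhm
z_t = target_impedance(1.8, 0.03, 2.0)

# Check whether 4 identical 100 nF caps in parallel (15 mOhm ESR,
# 0.5 nH ESL each; values assumed) stay under Z_target at 10 MHz.
n = 4
z_net = cap_impedance(10e6, n * 100e-9, 15e-3 / n, 0.5e-9 / n)
print(f"Z_target = {z_t*1e3:.1f} mOhm, network = {z_net*1e3:.1f} mOhm")
```

With these particular numbers the four-capacitor network misses the target at 10 MHz, which is exactly the kind of quantified answer a "one cap per pin" rule can't give you.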
Interestingly, Qualcomm actually gives you these, but I haven't seen many (any?) other chip manufacturers do that. I wish that'd become common practice.
Yeah, you can do it, but it's quite a painful process and as you noted it's quite hard to actually get the required information: you can predict the impedance at the chip's pads across frequency, but only with a full-fledged simulation of the PCB, and then you don't actually know what counts as good enough in most cases. What I'd like is something that's a little easier to analyse and visualise even if a little less precise. It feels like there should be a much simpler model which gives you a view of how the impedance changes as you move away from the capacitor so that you can evaluate the tradeoffs without needing to set up and wait for a whole simulation.
(especially because as I understand it, distance tends to matter a lot less than people expect, especially because once you're up at frequencies where it might matter, it's not so much the capacitors providing the decoupling as the power planes themselves anyhow)
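The "much simpler model" could be as crude as adding a mounting inductance that grows with trace length to the capacitor's own parasitics. The inductance-per-mm figure below is a rough back-of-envelope assumption, not measured data, but it is enough to visualise the distance tradeoff without a field solver:

```python
import math

# Crude model: the capacitor's effective series inductance grows with the
# loop formed by its mounting traces/vias. ~0.7 nH per mm is a rough
# rule-of-thumb figure (assumption, not measured data).

def mounted_cap_impedance(f, c, esr, esl_body, dist_mm, nh_per_mm=0.7):
    esl = esl_body + dist_mm * nh_per_mm * 1e-9
    w = 2 * math.pi * f
    return math.sqrt(esr**2 + (w * esl - 1 / (w * c))**2)

# 100 nF cap at 100 MHz: mounted 1 mm from the pin vs 10 mm away.
close = mounted_cap_impedance(100e6, 100e-9, 15e-3, 0.5e-9, dist_mm=1)
far = mounted_cap_impedance(100e6, 100e-9, 15e-3, 0.5e-9, dist_mm=10)
print(f"1 mm: {close:.2f} Ohm, 10 mm: {far:.2f} Ohm")
```

Notably, at 100 MHz even the close-mounted capacitor is dominated by inductance rather than capacitance in this model, which is consistent with the point above: up at those frequencies the planes do the work, not the capacitors.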
> To me innovation in autorouting means being able to 'have a conversation' with it: being able to easily adjust things and see the results and map out the tradeoffs would be very useful
author here: This is basically our philosophy. LLMs can churn out constraints/code very quickly to pull out the specific requirements for a design or the chips you're using. When people use tscircuit (or any electronics-as-code framework) they can talk to an LLM and just keep yelling at it, the same way you yell at an LLM to fix a web page. The success of web pages and LLMs is built from small constraint algorithms like flexbox and CSS grid; this article is just one constraint algorithm that helps LLMs approximate a solution without specifying a bunch of XY coordinates, which would challenge their spatial understanding.
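To make the flexbox analogy concrete, here is a hypothetical sketch of what such a small constraint algorithm looks like: fixed-size parts plus weighted "flex" gaps that absorb leftover board width in one dimension. This is an illustration of the idea, not tscircuit's actual layout code.

```python
# Hypothetical flexbox-style 1-D placement pass: fixed-width parts plus
# "gap" items whose weights divide up the leftover board width.

def flex_place(board_width, items):
    """items: list of ("part", width_mm) or ("gap", flex_weight)."""
    fixed = sum(w for kind, w in items if kind == "part")
    total_flex = sum(w for kind, w in items if kind == "gap") or 1
    leftover = max(board_width - fixed, 0)
    x, positions = 0.0, []
    for kind, w in items:
        if kind == "part":
            positions.append(x)  # left edge of this part
            x += w
        else:
            x += leftover * w / total_flex
    return positions

# Three parts of widths 10/5/8 mm on a 40 mm board, equal gaps between.
pos = flex_place(40, [("part", 10), ("gap", 1), ("part", 5),
                      ("gap", 1), ("part", 8)])
print(pos)  # -> [0.0, 18.5, 32.0]
```

An LLM never has to emit raw coordinates here; it only chooses part order and gap weights, and the algorithm turns that into positions deterministically.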
I think using the vision decoder baked into modern LLMs is the way to go. Have the LLM iterate; make sure it can assert placement qualities and understands the hard requirements. I think it can be done.
Don't know about LLMs, but AI in general isn't as stupid an idea as one might think, and the Chinese are particularly well positioned to take advantage.
Take, for example, something like XinZhiZao (XZZ), ZXW, Wuxinji, or diyfixtool. They have huge databases with pictures, diagrams, and boardviews of pretty much every phone, laptop, and graphics card. With all this data you could build an AI system ripping of^^^^^ "suggesting" routing for your design based on similarity to stole^^^training data. That way you start with a layout that worked in devices shipped by the millions.
This could be built in stages, starting with a much weaker system trained on just PCB pictures + layer count. That should be enough to suggest a near-optimal initial chip placement for a classical autorouter.
author here: I think synthetic data, generated by ~brute-force iteration with LLMs, with every DRC analysis imaginable and more, will yield a more consistent/usable/larger dataset than any existing dataset. It's a mistake to put too much weight on anyone's existing data. This is why we work hard to make algorithms that LLMs can use, because they have emerging spatial capabilities that excel when coupled with detailed analysis.