This is directionally right. The HBM-vs-SRAM tradeoff in accelerator design was clear years ago, and those who picked HBM are now in a queue behind Nvidia and Google. Good luck with that. More broadly, LLM decode patterns favor SRAM: decode is memory-bandwidth-bound, since each generated token streams the full weight set (plus KV cache), and on-chip SRAM offers far more bandwidth than HBM (see the sketch below). But unlike Gavin, I think this creates a lane for even more heterogeneous silicon supporting AI models in the future, not less. I suspect the two axes that matter are accuracy and speed; if you can design a focused solution for a specific AI use case along those axes, there will be a market.
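A minimal back-of-envelope sketch of the bandwidth-bound argument. All numbers here are illustrative assumptions, not vendor specs: a hypothetical 70B-parameter fp16 model, ~3.35 TB/s for an HBM3-class part, and ~80 TB/s for aggregate on-chip SRAM. The helper `decode_tokens_per_sec` is mine, and it deliberately ignores KV-cache traffic, batching, and compute time.

```python
# Back-of-envelope: why single-stream decode is bandwidth-bound,
# and why SRAM bandwidth changes the ceiling.
# All figures below are illustrative assumptions, not vendor specs.

def decode_tokens_per_sec(params: float, bytes_per_param: float,
                          mem_bw_bytes_per_sec: float) -> float:
    """Upper bound on batch-1 decode throughput when every generated
    token must stream the full weight set from memory (KV-cache
    traffic and compute time ignored)."""
    bytes_per_token = params * bytes_per_param
    return mem_bw_bytes_per_sec / bytes_per_token

P = 70e9   # assumed 70B-parameter model
B = 2.0    # fp16: 2 bytes per weight

# Decode does ~2 FLOPs per weight read, so arithmetic intensity is
# ~2/B FLOPs per byte -- far below any accelerator's roofline knee,
# i.e. the memory system, not the math units, sets the speed limit.
intensity = 2 * P / (P * B)

hbm  = decode_tokens_per_sec(P, B, 3.35e12)  # ~3.35 TB/s, HBM3-class (assumed)
sram = decode_tokens_per_sec(P, B, 80e12)    # ~80 TB/s, aggregate SRAM-class (assumed)

print(f"arithmetic intensity ~ {intensity:.1f} FLOPs/byte (bandwidth-bound)")
print(f"HBM-class bound:  {hbm:,.0f} tokens/s")
print(f"SRAM-class bound: {sram:,.0f} tokens/s")
```

Under these assumptions the SRAM-class part is bounded at roughly 20x the tokens/s of the HBM-class part for the same model, which is the whole appeal, with the catch that SRAM capacity is small, so the model has to fit across many chips.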