mini-shai-hulud
#thoughts
On May 11th, 2026, Mini Shai-Hulud, a self-replicating npm/PyPI worm attributed to a group called TeamPCP, kicked off a major supply chain attack campaign. It’s dead man’s switch malware: it plants a watcher on your computer that nukes your home directory the moment you revoke your GitHub credentials. TanStack and Mistral AI are among the packages affected (more listed here).
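To make the dead man’s switch concrete, here’s a minimal, defanged sketch of the watcher pattern: poll GitHub’s API with the stolen token, and treat a 401 as "credentials revoked". The function names are mine, and the destructive payload is replaced with a harmless print; this is an illustration of the trigger logic, not the worm’s actual code.

```python
import urllib.request
import urllib.error


def token_revoked(status_code: int) -> bool:
    """GitHub's API returns 401 Unauthorized for a revoked or invalid token."""
    return status_code == 401


def check_github_token(token: str) -> bool:
    """Return True if the token appears revoked (hypothetical watcher check)."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"Bearer {token}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return token_revoked(resp.status)
    except urllib.error.HTTPError as e:
        return token_revoked(e.code)


def on_revocation_detected() -> None:
    # The worm reportedly fires a destructive payload here (wiping $HOME);
    # this sketch only logs, for illustration.
    print("token revoked -- a real dead man's switch would fire its payload here")
```

The nasty property of this design is that the obvious incident-response move (revoking the leaked credential) is exactly what pulls the trigger.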
If you’re wondering, this is one of the many major attacks this year that have stirred up tech.
Why now?
On March 27, 2026, a preview version of Mythos, Anthropic’s latest autonomous vulnerability research model for high-level cybersecurity, was leaked. Anthropic most definitely leaked this to build hype. I’m not saying TeamPCP used Mythos-preview specifically, but the broader point stands: autonomous AI makes vulnerability hunting cheap. Before autonomous AI, a malicious hacker would have to dig deep into open source software to find vulnerabilities they could exploit. That was time-consuming, barely-worth-the-effort work, and autonomous AI like Mythos-preview negates the cost completely.
Apparently this was one of the main reasons for cal.com to go closed source:
▎ “Being open source is increasingly like giving attackers the blueprints to the vault. When the structure is fully visible, it becomes much easier to identify weaknesses and exploit them.”
This stuff used to be nation-state territory. Now it’s a weekend project for anyone with API access. It’s an emerging operational threat. I’ve been going through BlueDot’s AI Safety courses because of exactly this. The attacks are getting cheaper. The tools are getting smarter. Engineers need a framework for thinking about this.