Reading List

A study by Anthropic finds that as few as 250 malicious documents can produce a "backdoor" vulnerability in an LLM, regardless of model size or training data volume (via Techmeme RSS feed).