Reading List

A study from Meta researchers suggests that training LLMs to predict multiple tokens at once, instead of only the next token, yields better and faster models (Ben Dickson, VentureBeat; via the Techmeme RSS feed).
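The core idea can be sketched as a shared model trunk with several output heads, where head *k* predicts the token *k* steps ahead and the losses are summed. The sketch below is a minimal, hypothetical illustration of that loss structure using random numpy tensors; the shapes, head count, and names are assumptions, not Meta's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab, d_model = 50, 16
n_heads = 4        # number of future tokens predicted per position (assumed)
T = 8              # sequence length (assumed)

# Stand-in for the shared trunk's hidden states, one vector per position
hidden = rng.normal(size=(T, d_model))

# One independent unembedding head per future offset
heads = [rng.normal(size=(d_model, vocab)) for _ in range(n_heads)]

# Target token ids; targets extend n_heads steps past the sequence
tokens = rng.integers(0, vocab, size=T + n_heads)

def softmax_xent(logits, target):
    # numerically stable cross-entropy for a single position
    logits = logits - logits.max()
    logp = logits - np.log(np.exp(logits).sum())
    return -logp[target]

# Multi-token loss: at position t, head k predicts token t + k + 1
loss = 0.0
for t in range(T):
    for k, W in enumerate(heads):
        loss += softmax_xent(hidden[t] @ W, tokens[t + k + 1])
loss /= T * n_heads
print(float(loss))
```

At inference time the extra heads can either be dropped (keeping only the next-token head) or used to draft several tokens at once for faster decoding; the training-time loss above is what changes the learned representations.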