Show HN: Z80-μLM, a ‘Conversational AI’ That Fits in 40KB
via news.ycombinator.com
Short excerpt below. Read at the original source.
How small can a language model be while still doing something useful? I wanted to find out, and I had some spare time over the holidays. Z80-μLM is a character-level language model with 2-bit quantized weights ({-2, -1, 0, +1}) that runs on a Z80 with 64KB of RAM. The entire thing (inference, weights, chat UI) fits in […]
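As a rough illustration of how a 2-bit weight set like {-2, -1, 0, +1} keeps the model tiny, here is a minimal C sketch (not the author's code, and the packing layout of four weights per byte is an assumption) of unpacking those weights and taking an integer dot product with 8-bit activations:

    /*
     * Hypothetical sketch: 2-bit weights with the value set {-2,-1,0,+1},
     * packed four per byte, used in an all-integer dot product.
     * The code-to-value mapping and layout are assumptions for illustration.
     */
    #include <stdint.h>

    /* Map a 2-bit code 0..3 to a weight value in {-2,-1,0,+1}. */
    static const int8_t WEIGHT_LUT[4] = { -2, -1, 0, +1 };

    /* Dot product of n packed 2-bit weights with n int8 activations.
     * `packed` holds 4 weights per byte, lowest 2 bits first.
     * 16-bit accumulator; widen it for long rows. */
    int16_t dot_2bit(const uint8_t *packed, const int8_t *act, int n)
    {
        int16_t acc = 0;
        for (int i = 0; i < n; i++) {
            uint8_t byte = packed[i >> 2];               /* which byte      */
            uint8_t code = (byte >> ((i & 3) * 2)) & 3;  /* which 2-bit slot */
            acc += (int16_t)WEIGHT_LUT[code] * act[i];
        }
        return acc;
    }

Packing four weights per byte means a layer of, say, 16K parameters needs only 4KB of weight storage, which is the kind of budget that leaves room for inference code and a chat UI in 40KB.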