• 0 Posts
  • 3 Comments
Joined 2 months ago
Cake day: September 8th, 2025

  • Re: your last paragraph:

    I think the future is likely going to be more task-specific, targeted models. I don’t have the research handy, but small, targeted LLMs can outperform massive general-purpose LLMs on their niche at a tiny fraction of the compute cost to train and run, and they can run on much more modest hardware to boot.

    Like, an LLM that is targeted only at:

    • teaching writing and reading skills
    • teaching English writing to English Language Learners
    • writing business emails and documents
    • writing/editing only resumes and cover letters
    • summarizing text
    • summarizing fiction texts
    • writing & analyzing poetry
    • analyzing poetry only (not even writing poetry)
    • a counselor
    • an ADHD counselor
    • a depression counselor

    The more specific the task, the smaller the LLM that can do it “well” can be.



  • Can’t believe I had to scroll down this far to find this:

    Here’s the gut-punch for the typical living room, however. If you’re sitting the average 2.5 meters away from a 44-inch set, a simple Quad HD (QHD) display already packs more detail than your eye can possibly distinguish. The scientists made it crystal clear: once your setup hits that threshold, any further increase in pixel count, like moving from 4K to an 8K model of the same size and distance, hits the law of diminishing returns because your eye simply can’t detect the added detail.

    On a computer monitor the difference is easily apparent, because you’re not sitting 2+ m away; and in a living room, 44″ is tiny by recent standards.
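
    You can sanity-check the article’s claim with a little geometry. The sketch below computes the angular pixel density (pixels per degree) a viewer actually sees, assuming a 16:9 panel and the oft-cited ~60 ppd threshold for 20/20 vision (1 arcminute per pixel); the article’s researchers may use a different cutoff, so treat the 60 ppd figure as a rule-of-thumb assumption, not their number.

    ```python
    import math

    def pixels_per_degree(diagonal_in, horiz_px, distance_m, aspect=(16, 9)):
        """Angular pixel density seen by the viewer, in pixels per degree."""
        w, h = aspect
        width_m = diagonal_in * 0.0254 * w / math.hypot(w, h)  # physical screen width
        pitch_m = width_m / horiz_px                           # width of one pixel
        # angle subtended by a single pixel at the viewing distance
        deg_per_px = math.degrees(math.atan(pitch_m / distance_m))
        return 1.0 / deg_per_px

    # 44-inch set viewed from 2.5 m, per the article's scenario;
    # ~60 ppd is a common 20/20 (1 arcminute) acuity rule of thumb
    for name, px in [("QHD", 2560), ("4K", 3840)]:
        print(f"{name}: {pixels_per_degree(44, px, 2.5):.0f} ppd")
    ```

    QHD already lands well above 60 ppd in that setup, which is exactly the diminishing-returns point the quote makes: extra pixels beyond that are angular detail the eye can’t resolve.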