

The joke is that stopping discussion is healthy (which was obviously wrong). So I said I was stopping you, and thus the discussion, and then showed it was healthy with a salad.


I get it. You can’t get by with “Ai iS slOp” top-level comments anymore. I get that kind of ending because I would add it too… but then I also don’t mind collecting downvotes, so YMMV I guess.


I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.
A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.
In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), then uses an embedding model to translate those tokens into vectors of semantic “meaning” that minimize loss in a (probably bidirectional) encoding. That base model is then trained against a rubric for one or more topic-area questions, retrained for instruction-following and explainability, retrained with reinforcement learning from human feedback to provide guardrails, and retrained again to make use of supplemental materials not part of the original training corpus (retrieval-augmented generation). Then it’s distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever), and maybe THEN made available to people to use. There are generally even more parts to curriculum learning than that, but it’s a representative-ish start.
My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word most likely to follow ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that, and Grok is terrible.
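For contrast, here’s roughly what an actual “statistical text prediction machine” like the phone-keyboard suggester looks like. This is a hypothetical toy sketch (the corpus and the `predict_next` helper are made up for illustration), and nothing in the pipeline described above reduces to it:

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the text a phone keyboard learns from.
corpus = "i really like coffee . i really like tea . i really enjoy tea ."
tokens = corpus.split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its empirical probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# "like" follows "really" in 2 of the 3 occurrences of "really".
print(predict_next("really"))
```

That really is just “with X% probability, this word follows that one”: pure frequency counting, no embeddings, no attention, no retraining stages. That’s the machine the “it’s just statistics” framing describes, and it is not what an LLM is.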
Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.


A little personal flourish doesn’t invalidate the rest IMO. Humans get aggravated and humans are aggravating.


20 line commit: 5 issues and suggestions.
500 line commit: “looks good to me!”


I’m just gonna stop you right there.
🥗
I heard the nav is shit for the self-driving, though. Basically always just takes you home.
Holy shit let’s get someone to make them! We will be rich!
I mean sure there’s all of that… But consider this:
A horse doesn’t play music, or pick up where I left off in my audiobook.
They don’t have mirrors or a seatbelt, so how safe can a horse REALLY be? 🤔


You should be all the way joking, because giving this sort of agency to an LLM shows an all-the-way misunderstanding of what they are and how they work.
You’re not alone in these feelings, but just like the title of the article, they are fundamentally misguided.


The difference is that when an LLM tells you, it’s news.
Besides, what are you gonna do if you ask AI how many rocks to eat? NOT eat rocks? People can’t handle responsibility like that.


Bc GUI is bloat ofc. And then you have to use the mouse, the greatest of workflow antipatterns!


Supposing the prices they charge are still less than what you would pay for the convenience of purchasing a product with no extra effort, why would you switch?
I myself have had aspirations to buy fewer things from Amazon. However, even including stuff like this, I am happy to pay $10 extra to not have to dick around.
I hope Amazon has to pay money for this and that it hurts their business model, but as a customer they are still scratching my itch 2 times out of 3.


I would like to remove the word “slop” from common speech for overuse. Sorry for your jack stand experience.
That’s not true. There are probably like at least 11, but they’ve got so many chads helping them already you’ll never meet them.
Please reach out if you have trouble. Make a buddy if you don’t have one!
So I find it to actually be a really helpful “barometer” of language skill. When I’m in France, if I go into a store and conduct a full conversation in French, I know my accent, word choice, and general language skill are good. If halfway through the exchange we switch to English, I know I either made an egregious language error or I started sounding like an American. If the conversation switches to English right away, I either made a critical language mistake OR I just happened across a very competent English speaker.
And I gotta say I really appreciate it!