dear diary today on Lemmy i saw a guy who was angry at a job title
Doing the Lord’s work in the Devil’s basement
I’m not sure what you mean. Integrating LLMs into a codebase requires the use of specific tools. Sure, it’s not as established and wide-ranging as “frontend”, but it’s still something you need to learn and work at if you want to do it well. Kind of like, you know, a skill.
yeah and we all know there is only the one type of software engineer. It would be ridiculous to separate them into categories depending on whether they work on frontend, backend, embedded lmao
I… I don’t know, man!
I think some time around the year 10000 humanity solved most of their problems and the only remaining scarcity was “good living”. Like, cultures that had a sophisticated way of enjoying life through good food, good drink and good companionship suddenly came at a high premium. The people from SW France became insanely wealthy very quickly, and a sort of federation was struck between the Gascons, the Basques and the Bretons. It was really the only possible counter-power to the more colonialist and military-minded Italians.
Boar religion could be described as Albigensian Catharism, except in space. Their freedom-loving ways are despised by the Italian Catholic church, but the galaxy is so vast that religious wars never really break out, it’s just local skirmishes.
I haven’t yet determined what animal the Italians have morphed into, really glad to hear any suggestions.
Oh and here’s a picture of the Assembly of the Perfecti, held annually at Baiona Station:
I think they have adapted to life in hard vacuum, they just wear those helmets cause they look pretty
Since Gen art became a thing i’ve been using it episodically to create images of a civilization of space-faring boars, representing the future of my glorious South-Western France civilization. They raise ducks and make wine in space, and the lore is getting a lot deeper than i first thought. It’s so fucking fun man.
If there’s one thing artists don’t do, it’s try to build a picket fence around Art to separate it from Not Art. Duchamp was 100 years ago; i think the point that “Art can be anything and everything” has been abundantly made during the 20th century.
Remember when corporations tried to claim that money you didn’t spend on their product was theft? This way of thinking has been recycled by the anti-AI bros.
Turns out all the money you don’t spend on struggling artists is not only theft, but also class warfare. You stinking bougie you.
Except it’s not used as a job title to describe people prompting Midjourney lol. A prompt engineer is a software engineer who specifically deals with LLM workflows.
It’s especially frustrating as the whole point of the Google search page was that it was designed to get you out on your way as fast as possible. The concept was so mind-blowing at the time, and now they’re just like never mind, let’s default to shitty
This comment shows you have no idea of what is going on. Have fun in your little bubble, son.
If I understand these things correctly, the context window only affects how much text the model can “keep in mind” at any one time. It should not affect task performance outside of this factor.
Yeah, i did some looking up in the meantime and indeed you’re gonna have a context size issue. That’s why it’s only summarizing the last few thousand characters of the text: that’s the size of its attention window.
There are some models fine-tuned to an 8K-token context window, some even to 16K like this Mistral brew. If you have a GPU with 8 GB of VRAM you should be able to run it, using one of the quantized versions (Q4 or Q5 should be fine). Summarization should still be reasonably good.
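To see why a 7B model at Q4 fits in 8 GB, here’s a back-of-envelope sketch. The formula (params × bits per weight / 8, plus some overhead for the KV cache and activations) is a common rule of thumb, not an exact figure; the 4.5 effective bits per weight for Q4-style quants and the 20% overhead are assumptions.

```python
def approx_vram_gb(params_billions: float, bits_per_weight: float,
                   overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB for a quantized model.

    overhead=1.2 is an assumed ~20% margin for KV cache and activations.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total * overhead / 1e9

# A 7B model at ~4.5 effective bits/weight (Q4-style quant):
print(round(approx_vram_gb(7, 4.5), 1))  # ~4.7 GB, well under 8 GB
```

Longer context windows eat into that margin too, since the KV cache grows with context length.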
If 16K isn’t enough for you, then that’s probably not something you can run locally. However, you can still run a larger model privately in the cloud. Hugging Face, for example, lets you rent GPUs by the minute and run inference on them; it should only cost you a few dollars. As far as i know this approach should still be compatible with Open WebUI.
There are not that many use cases where fine tuning a local model will yield significantly better task performance.
My advice would be to choose a model with a large context window and just throw the whole text you want summarized into the prompt (which is basically what a RAG pipeline would do anyway).
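A quick way to sanity-check whether a text will fit before throwing it in the prompt: estimate tokens from character count. The `fits_in_context` helper is hypothetical, and the ~4 characters per token ratio is a rough English-text heuristic (real tokenizers vary a lot by language and content).

```python
def fits_in_context(text: str, context_tokens: int,
                    chars_per_token: float = 4.0,
                    reserve_for_output: int = 512) -> bool:
    """Estimate whether `text` fits a model's context window,
    keeping some tokens free for the generated summary."""
    est_tokens = len(text) / chars_per_token
    return est_tokens + reserve_for_output <= context_tokens

# ~40k characters is roughly 10k tokens: too big for 8K, fine for 16K.
doc = "x" * 40_000
print(fits_in_context(doc, 8_192))   # False
print(fits_in_context(doc, 16_384))  # True
```

If this returns False for your largest documents, that’s the signal to look at a bigger-context model or the cloud option above.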
some myths are hard to kill honestly
I mean, it is also true for crypto. BTC, the most energy-hungry blockchain, is estimated to burn ~150 TWh/year, compared to a global consumption of about 180,000 TWh/year.
Now, is that consumption useless? Yes, it is completely wasted. But it is a drop in the bucket. One shouldn’t underestimate the astounding energy consumption of legacy industries: as a whole, the tech industry is estimated to represent just a few percent of the global energy budget.
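Spelling out the arithmetic behind “a drop in the bucket”, using the figures quoted above:

```python
btc_twh = 150         # estimated BTC consumption, TWh/year
global_twh = 180_000  # estimated global energy consumption, TWh/year

share = btc_twh / global_twh * 100
print(f"{share:.3f}%")  # about 0.083% of global energy use
```

So even the worst-case blockchain is under a tenth of a percent of the global total.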
To clarify: AI is NOT a major driver of CO2 emissions. Even the most pessimistic estimates place it at a fraction of a percent of global energy consumption by 2030.
You’d be surprised! We already had banks, insurance companies, newspapers and other kinds of information businesses. They employed a huge number of secretaries.
Ok that one is hilarious
I had the same feeling with planet crafter. After a while you learn to run around with just enough materials to build a room and a door and bam, the whole oxygen management mechanic is neutralized.