Just a quick check: is this location-based or something, or is the meme just very old? Not to say that these things don’t happen anymore, but I can access this one specifically just fine.
I’m also curious. A quick search came up with these. Not sure which one is the most reliable/up to date.
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did a disservice to the analysis by not including a link to the report (and code) that outlines the methodology and how the distribution of similarities looks. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
You should try asking the same question with xAI/Grok if possible. It may also be worth asking ChatGPT about Altman.
I believe experiments like these should move slower and with more scrutiny. As in, more animal testing before moving on to humans, especially given the controversies surrounding Neuralink’s past animal experiments.
re: your last point, AFAIK the TLDR bot is also not an AI/LLM; it uses more classical NLP methods for summarization.
Is there a database tracking companies that start out with good intentions and then eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes for them to turn.
Thanks for the suggestions! I’m actually also looking into LlamaIndex for more conceptual comparison, though I haven’t gotten around to building an app yet.
Any general suggestions for a locally hosted LLM with LlamaIndex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are fairly abstract, so the similarity will be conceptual: for example, “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose to get a more literal comparison?
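To clarify what I mean by “literal”: something closer to surface-level word overlap, like a plain Jaccard score (just a rough illustration, not a real embedding model):

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Literal, surface-level similarity: overlap of lowercased word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not (sa | sb):
        return 1.0
    return len(sa & sb) / len(sa | sb)

# Conceptually similar but lexically different sentences score low here,
# unlike with an abstract embedding model:
print(jaccard_similarity("I have 3 large dogs",
                         "There are three canines that I own"))  # low
print(jaccard_similarity("I have 3 large dogs",
                         "I have 3 small dogs"))                 # much higher
```

That kind of behavior is what I’m after, just ideally from an embedding model rather than raw token overlap.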
That aside, like you indicated, there are some issues. One of them involves length. I’m hoping to find something that builds up iteratively from similar sentences to find similar paragraphs. I can take a stab at coding it up, but I was wondering whether there are existing frameworks out there that I could model it after.
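Roughly what I have in mind (a sketch only; the word-overlap function here is a placeholder for whatever sentence-level scorer ends up being used, and the naive split on “.” would need a real sentence splitter):

```python
def word_overlap(a: str, b: str) -> float:
    """Placeholder sentence similarity; swap in an embedding-based scorer."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def paragraph_similarity(p1: str, p2: str) -> float:
    """Build up from sentences to paragraphs: for each sentence in p1,
    take its best match in p2, then average those best scores."""
    sents1 = [s.strip() for s in p1.split(".") if s.strip()]
    sents2 = [s.strip() for s in p2.split(".") if s.strip()]
    if not sents1 or not sents2:
        return 0.0
    best = [max(word_overlap(s1, s2) for s2 in sents2) for s1 in sents1]
    return sum(best) / len(best)

a = "The cat sat on the mat. It was a sunny day."
b = "The cat sat on the mat. Rain fell all night."
c = "Stocks dropped sharply today. Investors are worried."
print(paragraph_similarity(a, b) > paragraph_similarity(a, c))  # True
```

Basically best-match aggregation, sentence level up to paragraph level; if a framework already does this kind of hierarchical matching, I’d rather reuse it than reinvent it.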
yeah agreed with your sentiment. I think it’s good to have an intuition about something, but it’s much better when there’s data to back it up.
Cuz then they can do the same with others, say YouTube or other streaming services, and start comparing the numbers: % of ads, what types of ads, how long the ads are relative to content, how many of these ads are political, how many of these ads may be harmful, …
Having these numbers could be quite handy for other researchers and regulators looking into these issues more concretely, rather than just saying, “as your brothers and sisters already know, TikTok serves ads.”
I think many have also been wondering about version control for legislation/law documents for some time. But I’ve never understood why it hasn’t been realized yet.
This is straight out of the movie “The Congress”
maybe even integration with uBlock if possible?
lol what’s the context here?
Wonder how the survey was sent out and whether that affected sampling.
Regardless, with ~3–4k responses, that’s disappointing, if not concerning.
I only have a more personal sense of Lemmy. Do you have a source on Lemmy’s gender diversity?
Anyway, what do you think are the underlying issues? And what would be some suggestions to the community to address them?