• 0 Posts
  • 66 Comments
Joined 2 years ago
Cake day: June 5th, 2023


  • The text is translated to English, yes, but the original art was drawn for Japanese text, which usually flows top to bottom, right to left. The entire visual design of a manga or comic book is structured around the reading direction of the language it was originally written in. When adding translations, you can’t just move the speech bubbles, since they’re almost always incorporated directly into the artwork.

    With the above in mind, you effectively have two options with manga: flip the artwork before adding the English translation so the bubbles flow left-to-right, or leave it alone and just explain the reading direction differences. There are often artistic, logistical, and financial reasons for the latter approach, so it tends to be more common.

    On physical paper, most manga volumes are also read by flipping the pages right to left, and most include a note on the last page explaining this to English-language readers who try to read them the “normal” way.


  • The problem, like with many things in life, is that people want to place clear delineations on things for the sake of clarity and peace of mind, when things actually exist on a very fuzzy spectrum. I’d argue you do gamble a tiny percent chance of getting in a wreck every time you drive, in exchange for getting places much faster. Likewise, were you to walk instead, there are unique risks and payoffs associated with that choice too.

    Whether the risks are well known, or whether someone consciously chose to take on more risk, is a little beside the point. There are plenty of people addicted to gambling who genuinely believe they’ll hit it big and retire one day, and that the payout is inevitable even when it clearly isn’t.


  • Risk management is at the core of both investing and gambling. The riskier your investment, the closer it comes, in practice, to putting the money on a roulette position. There are plenty of portfolios that slowly hemorrhage money and/or eat up any would-be growth via fees: those are your 51-49 splits. And a 51-49 edge doesn’t matter much if you go all in and it goes belly up, however you slice it (rough sketch below).

    If you do risky shit with money, it’s a gamble whether it pays off. Maybe I’m misunderstanding the point you’re trying to make?
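
    To put rough numbers on the fee and all-in points above, here’s a toy sketch with made-up figures (not real market data, and certainly not advice):

    ```python
    # Toy illustration only: made-up numbers, not financial advice or real data.

    def compound(principal: float, annual_return: float, annual_fee: float, years: int) -> float:
        """Compound `principal` for `years`, with a flat fee skimmed off each year."""
        value = principal
        for _ in range(years):
            value *= 1 + annual_return - annual_fee
        return value

    # A steady 2% fee quietly eats a large share of a 7% return over 30 years.
    print(f"no fee: {compound(10_000, 0.07, 0.00, 30):,.0f}")   # ~76,000
    print(f"2% fee: {compound(10_000, 0.07, 0.02, 30):,.0f}")   # ~43,000

    # A 51-49 split has a small positive expected value per even-money bet...
    expected_value = 0.51 * 1 + 0.49 * -1
    print(f"edge per unit wagered: {expected_value:+.2f}")

    # ...but that edge is irrelevant if a single all-in loss (49% of the time) wipes you out.
    print(f"chance of ruin on one all-in bet: {0.49:.0%}")
    ```

    None of that is precise; the point is just that a steady structural drag, or one all-in loss, can dominate the outcome no matter how favorable the odds look on paper.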


  • People developing local models generally have to know what they’re doing on some level, and I’d hope they understand what their model is and isn’t appropriate for by the time they have it up and running.

    Don’t get me wrong, I think LLMs can be useful in some scenarios and can be a worthwhile jumping-off point for someone who doesn’t know where to start. My concern is with the cultural issues and the expectations/hype surrounding “AI”. Given how the tech is marketed, it’s pretty clear the end goal is for people to use the product as a virtual-assistant endpoint for as much information (and interaction) as can possibly be shoehorned through it.

    Addendum: local models can help with this issue, since they run on one’s own hardware, but they still need to be deployed and used with reasonable expectations: they are fallible aggregation tools, not to be taken as an authority in any way, shape, or form.


  • On the whole, maybe LLMs do make these subjects more accessible in a way that’s a net positive, but there are a lot of monied interests that make positive, transparent design choices unlikely. The companies that create and tweak these generalized models want to make a return in the long run. Consequently, they have deliberately made their products speak in authoritative, neutral tones to make them seem more correct, unbiased, and trustworthy to people.

    The problem is that LLMs ‘hallucinate’ details as an unavoidable consequence of their design. People can tell untruths as well, but if a person lies or misspeaks about a scientific study, they can be called out on it. An LLM cannot be held accountable in the same way, since it’s essentially a complex statistical prediction algorithm. Non-savvy users can be fed misinfo straight from the tap, and bad actors can easily generate correct-sounding misinformation to deliberately sway others.

    ChatGPT completely fabricating authors, titles, and even (fake) links to studies is a known problem. Far too often, unsuspecting users take its output at face value and believe it to be correct because it sounds correct. This is bad, and part of the issue is marketing these models as though they’re intelligent. They’re very good at generating plausible responses, but this should never be construed as them being good at generating correct ones.