• 0 Posts
  • 329 Comments
Joined 3 years ago
Cake day: June 15th, 2023

  • The article only calls out the “Acting” and “Writing” categories, and the language suggests they are mainly concerned with a human doing the actual substantive work. So in this case, stunt work that is duly credited will probably still be eligible, even if they alter it as you suggest. The whole point of stunt work is to have a stand-in do it, but have it look like the main character in the final product.

    Even before AI ate everything, a lot of visual effects have been created with CGI, and they still gave out Oscars for visual effects.


  • So, now, when I see senior developers (which I am not) vibe code greenfield projects, I am just astounded as to how they manage the architecture + understanding + optimization + maintenance context.

    My experience is that they’re not managing it at all. Like the article says, they are just focused on MOAR and not on the quality of the output. It may take years for the unmaintainable code to cause problems, and they may have already been laid off by the time that happens, anyway.

    I don’t write much code anymore, but when I did, there was a fair amount of embedded code, where fixing a bug is more costly than just pushing out a build to a production server. I actively sought out automation back then, but the purpose of the automation was to help cover edge cases and better test the embedded code for flaws that traced through multiple layers of code.

    Whenever I start a new software project, it usually starts with a short period of experimentation when I try out several things. Then, I coalesce on an architecture in my head (and eventually document it), and once I do that I can add more structure to the code.

    Given the state of the AI tools today, I can see myself using them to accelerate all the little fiddly parts of this (especially if I can give them a coding standard and have them stick to it). But I wouldn’t trust them more than that. I would always keep the architecture separate, because I don’t trust the AI tools not to change it on me for no good reason.

  • “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through and actually understand what it was trying to say,” Lichtman says. But now he and Tao have shortened the proof so that it better distills the LLM’s key insight.

    This tracks with what I have seen regarding AI. It looks superficially awesome, but when you start to analyze its output it has a lot of holes that require someone trained in the art to fix. You know, someone with years of experience, and who got that experience without the benefit of AI shortcuts.

    What happens 10 or 15 years from now, when all the current crop of experts are retired and all the experts who could have curated the AI output had to spend all that time as baristas instead because the AI took all of their entry level jobs?

  • The author misses a few key points about the American model:

    First, in exchange for the local territorial monopoly, the providers are supposed to be heavily regulated by the local (or state) government, with controls in place to prevent abuse of the monopoly and to promote the interests of its residents. Of course, we all know how business interests influence government to make business-friendly regulations. Governments have the ability to enforce more user-friendly practices, if they choose to do so.

    But the more important point is that in the US, we hand out different monopolies based on the connection type. For instance, where I live we have one company that owns the twisted-pair POTS landlines, a different company that owns the coaxial cable TV service, and another company that owns the direct fiber to the home. Three companies, three connections to each home, all three (theoretically) capable of delivering the same services, since there is no longer any real differentiation between voice, video, and data service: it’s all just bits.

    We got our FTTH provider only recently. Before that, our choices were the cable company or the telco’s astonishingly slow DSL. So I subscribed to the cable company, and their pricing model tried to force you into a bundle for the other services. Their speeds were also quite slow for broadband, until the fiber company started digging. Then I got all sorts of emails saying “we’re increasing your speed – for free!” And sure enough, I was getting better bandwidth. But all that did was piss me off. These losers could have given me that better service all along, but didn’t bother until they were forced to.

    So I’m on the fiber now. But I know how it works: this service will be awesome at first, but once this company finishes its buildout it won’t add any new capacity, and the service will gradually get shittier over time. It’s the American Way!

    (And I still pay the local telco way too much money for a POTS landline. What can I say, I’m an old.)

  • Normally, I am all for Techdirt’s takes. But I think this one is off the mark a bit, because I legitimately think that infinite scroll and auto play are insidious, and actually harmful enough to be treated as a dangerous design decision.

    The whole point of Section 230 is that communications companies can’t be held responsible for harmful things that people transmit on their networks, because it’s the people transmitting those harmful things who are actually at fault. And that was reasonable in the initial stages of the Internet, when people posted on bulletin boards (or even early social media) and harmful content had a much smaller reach. People had to “opt in”, essentially, to be exposed to this content, and if they stumbled on something they found objectionable, they could easily change their focus.

    But the purpose of infinite scroll and autoplay is to get people hooked on content. The algorithms exist to maximize engagement, regardless of the value of that engagement. I think the comparison to cigarettes is particularly apt. They are looking to hook people into actively harmful behaviors, for profit. And the algorithms don’t really differentiate between good engagement and harmful engagement. Anything that attracts the user’s attention is fair game.

    The author’s points regarding how these rulings can be abused are correct, but that doesn’t negate how fundamentally harmful these addictive practices are. It will be up to lawmakers to make sure the laws are drafted in such a way that they can be applied equitably… (So maybe we’re screwed after all…)