return2ozma@lemmy.world to Technology@lemmy.world · English · 18 hours ago
Anthropic says its latest AI model is too powerful for public release and that it broke containment during testing (www.businessinsider.com)
97 comments
YesButActuallyMaybe@lemmy.ca · 6 hours ago
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
stringere@sh.itjust.works · 6 hours ago
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
YesButActuallyMaybe@lemmy.ca · 5 hours ago
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
Gladaed@feddit.org · 4 hours ago
That’s not how that works. LLMs execute on request. They tend not to be scheduled to evaluate once in a while, since that would be crazy wasteful.
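For what it’s worth, the timestamp approach from the first comment amounts to this: the caller records an epoch timestamp when the conversation starts and injects the elapsed time into each subsequent prompt. The model never checks a clock on its own; it only sees whatever text the next request contains, which is exactly the "you are the timer" point. A minimal sketch (the `TimedConversation` class and prompt format here are made up for illustration, not any real API):

```python
import time

class TimedConversation:
    """Caller-side timer: the LLM only ever sees elapsed time as text
    that we, the caller, prepend to each prompt."""

    def __init__(self):
        # Epoch timestamp attached to the initial message.
        self.start = time.time()
        self.history = []

    def build_prompt(self, user_message):
        # Elapsed time is computed here, outside the model, on each request.
        elapsed = time.time() - self.start
        stamped = f"[{elapsed:.0f}s since conversation start] {user_message}"
        self.history.append(stamped)
        # The full stamped history is what would be sent to the model.
        return "\n".join(self.history)

conv = TimedConversation()
print(conv.build_prompt("hello"))
```

Nothing runs between requests, so the "timer" only advances when someone sends the next prompt.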