It’s a set of inputs that generates an output, once per execution.
Integrating it into an infrastructure that lets it start external programs and handle scheduling really isn’t on the LLM.
You cannot start a timer without having a timer, too.
And LLMs aren’t beings who exist continuously like you and me, so time exists on a different, foreign dimension for an LLM.
But can it start a timer?
How would it do that?
It’s a joke referencing how Sam Altman said OpenAI would need about a year to get ChatGPT able to start a timer.
https://gizmodo.com/sam-altman-says-itll-take-another-year-before-chatgpt-can-start-a-timer-2000743487
You attach an epoch timestamp to the initial message and then you see how much time has passed since then. Does this sound like rocket surgery?
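The timestamp idea can be sketched in a few lines of Python (the function names and message shape here are illustrative, not any real API):

```python
import time

def make_initial_message(text):
    # Attach the current epoch timestamp to the message
    # so a later turn can see when the conversation started.
    return {"text": text, "sent_at": time.time()}

def elapsed_seconds(message):
    # On a later prompt, compare the stored timestamp against "now".
    return time.time() - message["sent_at"]

msg = make_initial_message("start a 5-minute timer")
# ... some time passes between prompts ...
if elapsed_seconds(msg) >= 5 * 60:
    print("Timer done")
```

Note the catch: `elapsed_seconds` only runs when something calls it, which is exactly the point of the reply below.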
How does the LLM check the timestamps without a prompt? By continually prompting? In which case, you are the timer.
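To make the objection concrete: for the model to "notice" elapsed time, some external loop has to keep prompting it, and that loop is the thing doing the timing. A minimal sketch, with `check_timer` standing in for an LLM call (hypothetical, not a real endpoint):

```python
import time

def check_timer(message, duration_s):
    # Stand-in for an LLM call: the model can only compare timestamps
    # it is handed inside the prompt; it cannot wake itself up.
    return (time.time() - message["sent_at"]) >= duration_s

message = {"text": "short timer", "sent_at": time.time()}
# The while loop below, not the model, is what actually keeps time.
while not check_timer(message, duration_s=0.01):
    time.sleep(0.005)  # the caller polls; the caller is the timer
print("done")
```

Remove the loop and nothing ever fires, which is the whole argument in two lines.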
It’s running in memory… I’m not going to explain it, just ask an AI if it exists when you don’t prompt it
That’s not how that works.
LLMs execute on request. They tend not to be scheduled to evaluate periodically, since that would be wildly wasteful.