
We do both:

We compress tool outputs at each step, so the prompt cache isn't invalidated mid-run. Once we hit 85% of the context window, we preemptively trigger a summarization step and swap the summary in when the window actually fills up.
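
For illustration, here's a minimal Python sketch of that loop. Everything in it is a hypothetical stand-in, not our actual implementation; the 85% threshold is the only number taken from the description above.

    CONTEXT_LIMIT = 128_000   # example context window, in tokens
    SUMMARIZE_AT = 0.85       # preemptive summarization threshold

    full_outputs: dict[int, str] = {}   # id -> full tool output
    pending = {"summary": None}         # built at 85%, loaded when full

    def count_tokens(messages: list[dict]) -> int:
        # Rough stand-in; in practice use the model's own tokenizer.
        return sum(len(m["content"]) // 4 for m in messages)

    def compress_output(raw: str) -> str:
        # Keep a short head inline and stash the full text under an id
        # the model can later reference.
        ref = len(full_outputs)
        full_outputs[ref] = raw
        return f"[compressed #{ref}] {raw[:500]}"

    def on_tool_result(messages: list[dict], raw: str, summarize) -> None:
        # Append-only compression: earlier messages are never rewritten
        # mid-run, so the prompt cache stays valid.
        messages.append({"role": "tool", "content": compress_output(raw)})
        used = count_tokens(messages)
        if pending["summary"] is None and used > SUMMARIZE_AT * CONTEXT_LIMIT:
            # Past 85%: preemptively build the summary before the window fills.
            pending["summary"] = summarize(messages)
        if used >= CONTEXT_LIMIT and pending["summary"] is not None:
            # Window is full: only now swap the summary in for the history.
            messages[:] = [{"role": "system", "content": pending["summary"]}]
            pending["summary"] = None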


That's why we give the model the chance to call expand() in case it needs more context. We know it's counterintuitive, so we'll add benchmarks to the repo soon.

In our observations, performance depends on the task and on the model itself; the effect is most visible on long-running tasks.
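
Continuing the hypothetical sketch above: expand() is just a tool handler that maps the id embedded in a "[compressed #N]" placeholder back to the full text stashed at compression time (the id-based storage is illustrative, not our exact scheme).

    def expand(ref: int) -> str:
        # Return the full tool output stored by compress_output() under
        # the id the model read from a "[compressed #N]" placeholder.
        return full_outputs.get(ref, f"no stored output for #{ref}")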


How does the model know it needs more context?

Presumably in much the same way it knows it needs to make tool calls to reach its objective.
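
i.e. from the tool's own description in its context. A hypothetical OpenAI-style function schema (names and wording purely illustrative) might read:

    expand_tool = {
        "type": "function",
        "function": {
            "name": "expand",
            "description": (
                "Retrieve the full, uncompressed text of an earlier tool "
                "output. Call this when a '[compressed #N]' placeholder "
                "lacks the detail needed to finish the task."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "ref": {
                        "type": "integer",
                        "description": "The N from the placeholder.",
                    },
                },
                "required": ["ref"],
            },
        },
    }

The model weighs that description against the rest of the context exactly as it does for any other tool.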
