Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Well, for one, by eliminating external tool calling, the model gains an amount of security. This occurs because the tools being called by an LLM can be corrupted, and in this scenario corrupted tools would not be called.
 help



Prompt injection is still a possibility, so while it improves the security posture, not by much.

Prompt injection will always be a possibility, it's a direct consequence of the fundamental nature being a fully general tool.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: