Hacker News
bsenftner
4 days ago
on: Executing programs inside transformers with expone...
Well, for one, by eliminating external tool calling, the model gains a measure of security: the tools an LLM calls can be corrupted, and in this scenario no corrupted tools would ever be invoked.
Oranguru
3 days ago
Prompt injection is still a possibility, so while this improves the security posture, it doesn't improve it by much.
TeMPOraL
2 hours ago
Prompt injection will always be a possibility; it's a direct consequence of the model's fundamental nature as a fully general tool.