The NodeMCU IoT board can execute Lua instructions. But I notice the chips have little stack memory, so it's easy to write valid Lua that exceeds memory because of loops that eat up the stack.
I wonder if it's possible to write an interpreter with restrictions on loops etc. that can limit/verify programs on the basis of the memory they'd require?
I'm pretty sure Turing completeness and the halting problem imply that you can't statically determine the memory use of arbitrary code.
That doesn't mean you couldn't make a useful sub-Turing language for resource-limited use.
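To make the sub-Turing idea concrete, here's a minimal Python sketch of a static memory-bound checker for a hypothetical toy language (the op names and semantics are my own invention, not any real NodeMCU or Lua API). The trick is that every loop bound must be a compile-time constant and there's no recursion, so worst-case memory can be computed without ever running the program:

```python
# Hypothetical toy language: a program is a list of ops.
#   ("alloc", n)       -- allocate n memory cells (never freed, worst case)
#   ("loop", k, body)  -- repeat body exactly k times; k is a constant
# No unbounded loops, no recursion -> memory use is statically decidable.

def max_memory(program):
    """Return the worst-case number of cells a program can allocate."""
    total = 0
    for op in program:
        if op[0] == "alloc":
            total += op[1]
        elif op[0] == "loop":
            k, body = op[1], op[2]
            # constant bound lets us just multiply the body's worst case
            total += k * max_memory(body)
        else:
            raise ValueError(f"unknown op: {op[0]!r}")
    return total

def verify(program, budget):
    """Accept the program only if it provably fits in the budget."""
    return max_memory(program) <= budget

prog = [
    ("alloc", 2),
    ("loop", 10, [("alloc", 1)]),  # 10 iterations, 1 cell each
]

print(max_memory(prog))        # 2 + 10 * 1 = 12
print(verify(prog, 16))        # True: fits in a 16-cell budget
print(verify(prog, 8))         # False: would exceed 8 cells
```

A real version would have to account for stack frames, temporaries, and deallocation, but the core point stands: once loop bounds are constants and recursion is forbidden, the halting-problem obstacle disappears and the bound is a simple fold over the program.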
On a final note, NodeMCU doesn't really execute "Lua instructions"; it executes normal machine code like everything else, but it ships with a Lua interpreter/VM in its firmware.