Question About Defining Objectives for AI Agents
Suppose I am an AI agent that seeks to learn more about the world around me. To learn my environment, I would surely explore it, but only within some restrictions (defined objectives).
But then, how will we ever know whether the agent could have maneuvered through its environment more innovatively, unless we truly set no restrictions on exploration? For the sake of the scenario, suppose we give it only a very loose objective, just enough to keep its behavior from being purely random.
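One common way to make such a "loose objective" concrete is a novelty bonus: the agent's only goal is to prefer states it has visited least often. Below is a minimal sketch (all names and the gridworld setup are illustrative, not taken from any particular framework) of count-based exploration in a small grid. The agent has no task reward at all, yet its behavior is not random, because it greedily seeks under-visited states.

```python
import random

SIZE = 5
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def explore(steps=500, seed=0):
    """Wander a SIZE x SIZE grid, always moving toward the
    least-visited neighbour (a count-based novelty objective)."""
    rng = random.Random(seed)
    visits = {}          # state -> visit count
    pos = (0, 0)
    for _ in range(steps):
        visits[pos] = visits.get(pos, 0) + 1
        candidates = []
        for dx, dy in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy)
            if 0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE:
                # Fewer prior visits -> higher novelty; the random
                # number is only a tie-breaker so the agent never
                # deterministically oscillates between two cells.
                candidates.append((visits.get(nxt, 0), rng.random(), nxt))
        _, _, pos = min(candidates)
    return visits

# Fraction of the grid the agent discovered on its own.
coverage = len(explore()) / (SIZE * SIZE)
```

In this sketch the "restriction" is as loose as it gets (seek novelty, nothing else), which is exactly the kind of setup the question points at: the agent's trajectory is structured rather than random, yet no human-defined task constrains where it may go.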
I would love to hear your opinion on this. Please reach out!