What Would It Mean for a Machine to Have a Self?
Julian De Freitas, Ahmet Kaan Uğuralp, Zeliha Uğuralp, Laurie Paul, Joshua Tenenbaum, Tomer D. Ullman
What would it mean for autonomous AI agents to have a ‘self’? One proposal for a minimal notion of self is a representation of one’s body spatio-temporally located in the world, with a tag of that representation as the agent taking actions in the world. This turns self-representation into a constructive inference process of self-orienting, and raises a challenging computational problem that any agent must solve continually. Here we construct a series of novel ‘self-finding’ tasks modeled on simple video games—in which players must identify themselves when there are multiple self-candidates—and show through quantitative behavioral testing that humans are near optimal at self-orienting. In contrast, well-known Deep Reinforcement Learning algorithms, which excel at learning much more complex video games, are far from optimal. We suggest that self-orienting allows humans to navigate new settings, and that this is a crucial target for engineers wishing to develop flexible agents.
Year of publication: 2022
Authors: De Freitas, Julian; Uğuralp, Ahmet Kaan; Uğuralp, Zeliha; Paul, L. A.; Tenenbaum, Joshua B.; Ullman, Tomer D.
Publisher: [S.l.] : SSRN
freely available
Similar items by person
- Chatbots and mental health: insights into the safety of generative AI. De Freitas, Julian (2022)
- Bayesian Models of Conceptual Development: Learning as Building Models of the World. Ullman, Tomer D. (2020)
- Lessons from an app update at Replika AI: identity discontinuity in human-AI relationships. De Freitas, Julian (2024)