Social and Emotional Interactions for AI
DOI: https://doi.org/10.24234/wisdom.v29i1.1112

Keywords: Artificial intelligence, robotics, large language models, communication, empathy, sensory systems, emotional intelligence, social schools, human-AI interaction, experiential learning

Abstract
Advancements in artificial intelligence (AI) and robotics are ushering in systems capable of meaningful, human-like interactions. This article explores the integration of large language models (LLMs) into humanoid robots and highlights technologies that allow robots to mimic human sensory systems (vision, hearing, touch, and smell) and analyze stimuli to generate emotionally resonant responses. A central focus is placed on the role of communication in fostering empathy. Drawing on philosophical insights and technological innovations, we propose that AI systems can enhance their intellectual and emotional capabilities through experiential learning. Embedding robots in “social schools” or “kindergartens,” where they observe and practice body language, cultural norms, and emotional expressions, is suggested as a pathway to developing empathy and understanding. This approach is not just about programming intelligence but about nurturing it, ensuring these systems embody human-like emotional depth and cultural awareness. By fostering communication-driven development, AI can evolve into companions capable of meaningful, empathetic relationships, advancing human-machine integration while upholding ethical considerations.
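
As an illustrative sketch of the sensor-to-language pipeline the abstract describes, the Python example below folds multimodal sensory readings (vision, hearing, touch, smell) into a prompt that conditions a language model's reply on the person's perceived emotional state. The names used here (SensoryReading, infer_affect, compose_prompt) are hypothetical and not taken from the article; a deployed system would replace the keyword-based affect guess with trained perception models for each modality and send the composed prompt to an actual LLM.

```python
# Hypothetical sketch (not from the article): turning multimodal sensor
# readings into an emotion-aware prompt for a language model.
from dataclasses import dataclass
from typing import Dict


@dataclass
class SensoryReading:
    vision: str   # e.g. "person smiling, open posture"
    hearing: str  # e.g. "soft, slow speech"
    touch: str    # e.g. "light handshake"
    smell: str    # e.g. "neutral ambient odor"


def infer_affect(reading: SensoryReading) -> str:
    """Very coarse affect guess from keyword cues; a real system would use
    trained perception models for each modality."""
    cues: Dict[str, str] = {
        "smiling": "friendly", "frown": "distressed",
        "soft": "calm", "shout": "agitated",
    }
    text = " ".join([reading.vision, reading.hearing, reading.touch, reading.smell])
    for cue, affect in cues.items():
        if cue in text:
            return affect
    return "neutral"


def compose_prompt(reading: SensoryReading, utterance: str) -> str:
    """Fold the perceived context and affect into the prompt the LLM receives,
    so its reply can be emotionally attuned rather than purely textual."""
    affect = infer_affect(reading)
    return (
        f"You are a companion robot. The person appears {affect}.\n"
        f"Observed: sight={reading.vision}; sound={reading.hearing}; "
        f"touch={reading.touch}; smell={reading.smell}.\n"
        f"They said: \"{utterance}\"\n"
        "Reply with warmth appropriate to their emotional state."
    )


if __name__ == "__main__":
    reading = SensoryReading("person smiling, open posture",
                             "soft, slow speech", "light handshake",
                             "neutral ambient odor")
    # The resulting prompt would then be passed to an LLM of choice to
    # generate the robot's spoken reply.
    print(compose_prompt(reading, "I had a long day."))
```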
License
Copyright (c) 2024 Mariam Davtyan

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Creative Commons Attribution-NonCommercial (CC BY-NC). CC BY-NC allows users to copy and distribute the article, provided this is not done for commercial purposes. Users may adapt the material (remix, transform, and build upon it), provided they give appropriate credit and include a link to the license. The full details of the license are available at https://creativecommons.org/licenses/by-nc/4.0/.