Reaction to Prof. Chen’s Talk
Generative AI has become ubiquitous, with people of all ages, from children to the elderly, using it to make life easier. In today’s speech, the lecturer highlighted an innovative technological creation known as “Kebbi.” Kebbi is an educational robot that changes children’s approach to language acquisition by breaking down language barriers through its “Robot & Tangible Object System.” This system offers highly useful functions designed to make learning more convenient, providing learners with three modes of interaction: the educational robot, tangible objects, and digital content.
The most impressive part was the video demonstration of the robot’s proficiency in narrating stories and translating between various languages. Additionally, it can project images onto a screen and change its facial expressions in accordance with the narrative context. These functions contribute to language learning; we all know that language acquisition is more effective when initiated during early childhood. The interactions provided by the R & T system, such as oral conversation and physical contact, allow individuals to engage actively with the material. This heightened interactivity may help draw children’s attention away from electronic devices and foster independent language learning. I believe this innovation is particularly advantageous for contemporary parents, many of whom feel exhausted after working all day, leaving little energy for recreational activities or bedtime storytelling with their children.
In my opinion, AI-generated texts or dialogues may not be consistently accurate during interactions with chatbots, leading to potential misunderstandings. One example involves using the ChatGPT chatbot to search for information. While the system can respond accurately to general queries about books or movies, searching for more detailed information may yield imprecise results. Consequently, the accuracy of AI’s responses should be examined. Users should have a basic understanding of the subject they ask a chatbot about; otherwise, they may be unable to judge the authenticity of AI-generated content. Another concern is that children may be unexpectedly exposed to negative texts or dialogues during the learning process. The content in the learning robot’s database depends on what its engineers input, so if someone with malicious intentions inserts harmful information, children may unwittingly encounter it while learning. Therefore, there is a potential risk in using AI-generated text or dialogues.
In conclusion, generative AI is already widespread, and various AI services and products have deeply influenced our lives. Keeping up with new technologies is indispensable for our generation; however, it is crucial to use them responsibly. For instance, students may attempt to cheat by using ChatGPT to complete their reports, leading to disputes in schools; this exemplifies the need for prudence. Moreover, the potential for AI robots to generate misleading or even harmful content, particularly content unsuitable for children, underscores the necessity of regulatory policies. To address these challenges effectively, governments should formulate comprehensive policies to standardize the use of generative AI products, or society should reach a consensus on how to handle the problems raised by AI technology.