LLMs Boost Social and Emotional Learning in Higher Education
- New framework links 19 LLM chatbot features to 15 social-emotional competencies
- Study identifies critical lack of ethical safeguards in educational AI applications
- Research highlights significant imbalance in how chatbots develop student emotional intelligence
A comprehensive systematic review has established a first-of-its-kind structured framework connecting the technical capabilities (affordances) of Large Language Model (LLM) chatbots to social and emotional learning (SEL) outcomes. By analyzing how chatbots interact with students, the researchers identified 19 specific AI features—such as personalized feedback and virtual companionship—that directly influence 15 core emotional competencies. This mapping offers educators a roadmap for integrating conversational AI into higher education curricula beyond simple academic tutoring.
Despite the potential of these tools to enhance student motivation and self-regulation, the review warns of a significant developmental imbalance: current classroom implementations of AI focus heavily on cognitive gains while neglecting the nuanced interpersonal skills required for holistic emotional growth. The analysis also reveals a widespread absence of essential ethical safeguards, raising concerns about how these automated systems handle sensitive student data and emotional vulnerabilities.
The findings suggest that while LLM chatbots can act as virtual companions that reduce anxiety and foster learning autonomy, they are not a silver bullet. The researchers advocate for a more systemic approach to Education 4.0, where pedagogical design must intentionally incorporate SEL frameworks to prevent AI from becoming a purely transactional tool. As AI becomes a staple in the university experience, balancing technical efficiency with human-centric emotional support remains the primary challenge for future educational policy.