The Penn Center for Learning Analytics uses a blend of large-scale and small-scale research methods, from analytics and data mining to ethnography and field observation, to study learning and engagement. We conduct research on both state-of-the-art online learning environments and traditional classroom instruction, investigating which approaches and methods can best enhance students' long-term outcomes.

Changing Learning Environments

The dichotomy between traditional classrooms and online courses is blurring, with increasing numbers of students working in blended learning environments. These students may be using intelligent tutoring systems (ITSs), simulations, games, or microworlds. Even in fully online experiences like Massive Open Online Courses (MOOCs), students' interactions with each other matter as much as their interactions with the course materials. At the PCLA, our goal is to enhance educational experiences for all learners by studying how new learning environments work for the full diversity of students. We study the environments and the learners using them, partnering with groups in both academia and industry. At the PCLA, we do not create new learning environments; we study how to make them better.

Researchers at the PCLA have developed a number of tools and models for better understanding learning at scale, usually within the context of online learning systems. These include new techniques for the analysis of system log files and new protocols for field observations.

We have pioneered new methods for modeling student engagement online, for inferring complex learning and inquiry skill, for studying learning as it occurs moment-by-moment, and for linking what is happening in a learner’s experience now to their career and life outcomes years later. We have also pioneered new methods for combining technology with human judgment, to leverage the strengths of each. For example, the BROMP method, which was developed to study student engagement with educational software, allows us to study students’ emotional and behavioral experiences without intrusive sensors or video equipment. Similarly, our use of text replays, a method for analyzing the log files from educational software, has helped to improve our understanding of what students do when they are gaming the system, rather than engaging with educational material.
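To illustrate the text replay idea, one step is slicing a log file into short time windows and rendering each window as readable text for a human coder to label. The sketch below is illustrative only, with invented log records and field names, not the Center's actual tooling:

```python
from datetime import datetime, timedelta

# Hypothetical log records: (timestamp, action, detail) tuples.
log = [
    (datetime(2024, 1, 1, 9, 0, 0), "attempt", "wrong answer"),
    (datetime(2024, 1, 1, 9, 0, 4), "hint", "requested hint 1"),
    (datetime(2024, 1, 1, 9, 0, 6), "hint", "requested hint 2"),
    (datetime(2024, 1, 1, 9, 0, 8), "attempt", "entered bottom-out hint"),
    (datetime(2024, 1, 1, 9, 0, 40), "attempt", "correct answer"),
]

def text_replay(log, start, window=timedelta(seconds=20)):
    """Render the actions inside one time window as a text clip
    that a human coder can read and label."""
    clip = [rec for rec in log if start <= rec[0] < start + window]
    lines = [f"+{(t - start).seconds:2d}s  {action}: {detail}"
             for t, action, detail in clip]
    return "\n".join(lines)

clip = text_replay(log, datetime(2024, 1, 1, 9, 0, 0))
print(clip)
```

A coder reading this clip would then apply a label such as "gaming the system" or "not gaming," and those labels can train automated detectors that scale the judgment to full datasets.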

Making (Online) Learning Systems More Human

Online learning systems can offer advantages over traditional classrooms with large student/teacher ratios. By automatically detecting errors and misconceptions, these systems can provide timely support to keep students from going off task or having to unlearn poorly understood material. Whether students are in a classroom full of peers or working independently through web-based applications, they all deserve an educational experience where they are treated as individuals.

Our goal is to use these methods to make educational experiences better for students by allowing teachers to focus on what they do best—teaching. In a class with many students, automated scaffolding allows students to work at their own pace. It also allows teachers to focus on students at a more individual level—helping students who have deeper misconceptions rather than minor surface-level errors, or making meaningful connections with students who are having problems that aren't related to the learning material.
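The triage idea above—surface errors go to automated scaffolding, while deeper misconceptions and non-content struggles go to the teacher—can be sketched as a simple prioritization. This is purely illustrative; the per-student detector outputs below are invented:

```python
# Hypothetical detector outputs per student: estimated probability of a
# deep misconception, rate of minor surface-level slips, and a flag for
# struggles unrelated to the learning material (e.g., disengagement).
students = [
    {"name": "A", "misconception_prob": 0.9, "surface_error_rate": 0.1, "off_task": False},
    {"name": "B", "misconception_prob": 0.2, "surface_error_rate": 0.8, "off_task": False},
    {"name": "C", "misconception_prob": 0.3, "surface_error_rate": 0.2, "off_task": True},
]

def teacher_priority(s):
    # Non-content struggles and deep misconceptions outrank surface
    # errors, which automated scaffolding can already handle.
    return (s["off_task"], s["misconception_prob"])

queue = sorted(students, key=teacher_priority, reverse=True)
print([s["name"] for s in queue])  # → ['C', 'A', 'B']
```

Here student B, who mostly makes minor slips, is left to the software's hints, freeing the teacher for students C and A.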

We believe that software can do more than simply offer hints and automatic corrections, and we’re working to show people how. By matching classroom observations of students’ emotions and behaviors to student log files, we can train software to recognize when students are bored or confused or frustrated. This, in turn, can be used to make students’ learning experiences more engaging and positive—either by changing the level of difficulty or by alerting educators to struggles that might otherwise be missed.
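A minimal sketch of this training loop, assuming features have already been extracted from log-file windows and labels come from matched classroom observations: the feature values, labels, and the nearest-centroid classifier below are all simplifications for illustration, not the detectors the Center actually builds.

```python
# Each row: (hints_in_window, mean_seconds_between_actions), observed label.
training = [
    ((0, 25.0), "bored"),       # few actions, long pauses
    ((1, 30.0), "bored"),
    ((4, 3.0), "frustrated"),   # rapid-fire hint requests
    ((5, 2.5), "frustrated"),
    ((1, 8.0), "engaged"),
    ((2, 7.0), "engaged"),
]

def centroids(rows):
    """Average the feature vectors for each observed label."""
    sums, counts = {}, {}
    for (h, gap), label in rows:
        sh, sg = sums.get(label, (0.0, 0.0))
        sums[label] = (sh + h, sg + gap)
        counts[label] = counts.get(label, 0) + 1
    return {lab: (sh / counts[lab], sg / counts[lab])
            for lab, (sh, sg) in sums.items()}

def predict(model, features):
    """Classify one log window by its nearest label centroid."""
    h, gap = features
    return min(model, key=lambda lab: (model[lab][0] - h) ** 2
                                      + (model[lab][1] - gap) ** 2)

model = centroids(training)
print(predict(model, (4, 3.5)))   # rapid hinting → "frustrated"
```

Once trained, a detector like this can run on live log data, flagging windows where a student appears bored or frustrated so the system or the teacher can respond.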
