In this blog we’ll be recapping a session from our recent User Group, presented by Susan Van Patten, Professor and Director of Faculty Development, and Charley Cosmato, Director for Innovative Teaching and Learning, both at Radford University. The session, titled Letting the Data Tell its Own Story: What to Do When Your Category Assumptions are “Wrong,” focused on uncategorized and unstructured data in Activity Insight, and what to do with it.
Radford University is a public university located in Southwest Virginia, with a current enrollment of over 9,400 undergraduate and graduate students, and about 500 full-time faculty. The university offers 67 undergraduate degree programs, 22 master’s programs and three doctoral programs. Radford University has received a number of honors and accolades, based on the strength of its academic programs as well as its sustainability initiatives.
Susan and Charley shared that about a year ago, their focus was on engaging faculty and promoting use of Activity Insight across campus. A year later, they had a significant quantity of data in the system, but they also discovered what they cleverly termed “monsters”: an unexpectedly large amount of data classified as “other.”
From a big-picture point of view, the abundance of “other” data created a reporting challenge. Because each “other” response included a written explanation, it was incredibly hard to aggregate all that data meaningfully.
The “other” data was also a potential problem for faculty. It required extra effort from each faculty member to evaluate the available options, determine that none was a fit, and then write a free-text response under “other.” And when a faculty member’s activity wasn’t represented in the list of choices, having to classify it as “other” could leave them feeling excluded or minimized.
Susan and Charley began addressing their challenge by running reports to collect all the “other” data in Activity Insight. They first realized that the “other” data fell mostly into three buckets:
- Data that didn’t answer the question asked
- Data that was misclassified (it actually had a category)
- Data that legitimately was “other”
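The triage into those three buckets can be sketched as a short script. This is a hypothetical illustration only: the category names and keyword rules below are invented, and the actual review Susan and Charley performed was a manual, human reading of each response.

```python
# Hypothetical triage sketch: sort exported "other" responses into the
# three buckets described above. Keyword rules are stand-ins for the
# manual review the presenters actually did.

# Sample dropdown options (invented for illustration)
EXISTING_CATEGORIES = {"workshop", "committee", "peer review"}

def triage(response: str) -> str:
    text = response.lower().strip()
    # Bucket 1: the response doesn't answer the question at all
    if not text or text in {"n/a", "none"}:
        return "did not answer the question"
    # Bucket 2: an existing category actually fits
    if any(cat in text for cat in EXISTING_CATEGORIES):
        return "misclassified (existing category fits)"
    # Bucket 3: genuinely uncategorized activity
    return "legitimately other"

responses = [
    "N/A",
    "Served on the curriculum committee",
    "Built an open-source tool for local nonprofits",
]
for r in responses:
    print(f"{r!r} -> {triage(r)}")
```

In practice, a rule-based pass like this would only pre-sort responses for human review, not replace it.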
Their next step was to use text analysis tools to dive deeper into the data from a qualitative standpoint. This allowed them to study the text of all the “other” responses and uncover themes and trends. Once themes were identified, the two decided how to incorporate each “other” response into Activity Insight, whether by adding an option to a dropdown, adding a new screen, combining screens or making another modification.
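The session didn’t name the specific text analysis tools used, but the theme-finding step can be approximated with a simple term-frequency count, a minimal sketch assuming the “other” responses have been exported as a list of strings (the sample responses and stopword list below are invented):

```python
# Minimal theme-discovery sketch: count frequent words across "other"
# responses to surface candidate themes. A stand-in for whatever text
# analysis tooling the presenters actually used.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "for", "in", "on", "to", "with"}

def top_terms(responses, n=5):
    """Return the n most common non-stopword terms across all responses."""
    words = []
    for r in responses:
        words += [w for w in re.findall(r"[a-z']+", r.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# Invented sample of "other" responses
others = [
    "Mentored undergraduate research students",
    "Mentored new faculty in the department",
    "Community outreach with local schools",
]
print(top_terms(others, 3))  # "mentored" surfaces as a candidate theme
```

A recurring term like “mentored” would then prompt a decision such as adding a mentoring option to a dropdown or creating a dedicated screen.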
The Path Ahead
Susan and Charley saw a significant decrease in the number of “other” responses after implementing the changes discussed above. They can now conduct more accurate reporting, and more meaningfully describe the activity of the faculty on a large scale.
In addition, they are now taking the opportunity to evaluate whether the changes they made were well-received. Their goal is to make the system reflect the data faculty want to enter, so they are evaluating usage of their new categories and screens.
Advice for Other Universities
Susan and Charley shared that this definitely isn’t a year-one project. Although the obvious goal is to create all the correct categories and screens during implementation, changes will inevitably be needed after real data is entered. Their advice was to use faculty input during implementation, then revisit data quality a year later.
However, they also cautioned that this might not be the best project to repeat annually, or to undertake six or seven years into using Activity Insight. Changing categories and screens affects reporting: it makes historical reporting difficult and creates a need to reclassify past information. This is certainly a useful exercise for any university with “other” monsters like those Susan and Charley described, but it should be done thoughtfully, with all variables taken into account.
Interested in doing a similar type of data project with Activity Insight? Contact your Solution Specialist to get started.