How do you deal with not enough qualitative data?
So this was a question that arose during one of the SoTL Network meetings, and I just realised that there isn’t an awful lot out there that addresses the issue. Let’s go back one step to explore this issue in more detail.

Let’s talk qualitative answers in questionnaires
One of the stumbling blocks I come across regularly when reviewing papers is how qualitative answers from questionnaires are handled. These answers are raw data and need to be analysed; simply choosing some quotes to support your argument is not appropriate. A relatively simple way of undertaking such analysis is Thematic Analysis*. The purpose is to establish whether there are any patterns in these open-ended questions. If for some reason (no judgement!) handing out a questionnaire with open-ended questions was the easiest way to obtain some feedback, even from a small cohort of learners, it will be more difficult to establish patterns than with a larger cohort. So far so good.
But what do we actually do with this data? Can we identify some common threads?
There are different ways to go about coding the open answers, and the literature describes this in detail with worked examples**. In general this is an iterative process and can be time consuming with large data sets. So, once you have analysed the answers you might find that, say, 15 out of 20 students indicated, in one way or another, that they preferred MS Teams over Moodle forums because they can use it like a chat program on their mobile. We have a pattern. It's not conclusive. One would be hard pressed to generalise. And the data might be coincidental to the make-up of the cohort.
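If it helps to see the tallying step concretely: once each response has been assigned a code, counting how often each code occurs is mechanical. A minimal sketch in Python, where the codes and the 15-of-20 split are entirely made up to mirror the example above:

```python
from collections import Counter

# Hypothetical codes assigned to 20 open-ended responses during analysis.
# In a real project these labels would emerge from your own coding process;
# for simplicity each response carries exactly one code here.
coded_responses = (
    ["prefers_teams_chat"] * 15    # e.g. "Teams feels like a chat app on my phone"
    + ["prefers_moodle_forum"] * 3
    + ["no_preference"] * 2
)

counts = Counter(coded_responses)
total = len(coded_responses)

# Report raw counts against the cohort size (not percentages --
# with 20 participants the counts themselves are the honest summary).
for code, n in counts.most_common():
    print(f"{code}: {n} of {total}")
```

The point of the sketch is only the last step: the analytical work sits in assigning the codes, not in counting them.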
However, it is still enough to notice a pattern and reflect on the question: is there a 'there there'? The next step would be to explore the literature, to see whether others have identified this pattern at a larger scale, or whether there are enough case studies with smaller cohorts that together create a larger picture. It might also be something to follow up with a bigger cohort of students, or to run over several semesters with different smaller student cohorts. Establishing patterns like these is a good start. Just be careful about the claims you make from this! More on this later.
What if there is not enough data?
So, it might happen that you collect data in a questionnaire or focus group and, for some reason, the results do not hold much information. It could even be that you were running a pilot, trying to establish whether there was a pattern, and could not identify one. Either there weren't enough responses, or the answers were all over the place, not providing you with a nice shiny pattern like in the example above. Maybe the participants shared information that you did not expect at all. Or maybe the data was simply underwhelming and didn't tell you much.
If you wanted to publish the project, and you have applied for ethics approval, there might be considerations about how ethical it is not to use the data. But what do you actually do with it? If it is inconclusive and there is no takeaway whatsoever, you might have to go back to the drawing board. Maybe the research tool was not the most appropriate to use? Can you identify a method that would yield better data? If so, put in an ethics amendment explaining your situation.
What if there is not much data, but a handful of unexpected comments have piqued your interest and you want to share them? This might be salvageable, and so might data that shows a pattern but is not enough to generalise from.
What can you do?
Now, while this data might not provide you and others with a conclusive finding, the learning from your project might be worth sharing. My suggestion would be to tell a story. Use the idea of writing as a form of inquiry (Richardson & Adams St Pierre, 2015) to talk about your project. Writing as a method of inquiry can be very powerful, as there is learning in reflection and in linking experience with existing literature. So your piece of output might not (or not yet) be an evaluation or educational inquiry as such, but it might become a piece of reflective practice (with some data), thus offering a different form of learning. Questions worth considering could be:
- Where did the curveball comments come from?
- How are you going to approach these?
- What made them so interesting?
- Is there any literature that addresses this already or is it something new?
- What is the reason for not having enough data?
- Is there something to learn from this?
- Was your method or even methodology the wrong way to go about this type of inquiry?
- Have others been in your situation?
- If yes, what have they done?
On that note
Try not to shoehorn your findings into a specific structure just because you think this is what it should be. Using percentages and graphs with small numbers, and splitting your writing into findings and discussion, can come across as inauthentic and overblown. In qualitative research there is not necessarily an expectation to separate findings from discussion, as this often does not make a lot of sense in smaller inquiries. A more convincing way of presenting such results is to create a coherent storyline of what happened, how you went about analysing the results, and how these met the literature and your expectations (or not), reflecting on the how and why of these issues and the learning to take forward.
I hope this makes some sort of sense.
Summary
I basically just wanted to say that not enough data isn't an outright 'fail', and to offer some ways of addressing the issue. To avoid something like this happening, attention needs to be paid to methodology: your research plan and design. What methods have you chosen? Why? What do you expect the data from these methods to look like? How will the data be able to answer your question? What have others done to explore this topic? Too often people jump straight from research questions to methods, either without any methodological considerations at all, or with only what I call procedural rather than qualifying considerations. But that is a story for a different blog post.
Some Resources
Hands down this is the best introduction to Thematic Analysis I have found so far:
* Braun, V., Clarke, V., Hayfield, N., & Terry, G. (2019). Thematic Analysis. In P. Liamputtong (Ed.), Handbook of Research Methods in Health Social Sciences (pp. 844–860). Springer Nature Singapore Pte Ltd. https://doi.org/10.1007/978-981-10-5251-4_103
** Maher, C., Hadfield, M., Hutchings, M., & de Eyto, A. (2018). Ensuring Rigor in Qualitative Data Analysis. International Journal of Qualitative Methods, 17(1), 160940691878636. https://doi.org/10.1177/1609406918786362
Sage has uploaded an open access book chapter on qualitative data: https://www.sagepub.com/sites/default/files/upm-binaries/43144_12.pdf
These are a couple of articles on data saturation:
Guest, G., Namey, E., & Chen, M. (2020). A simple method to assess and report thematic saturation in qualitative research. PLoS ONE, 15(5), e0232076. https://doi.org/10.1371/journal.pone.0232076
Fusch, P. I., & Ness, L. R. (2015). Are We There Yet? Data Saturation in Qualitative Research. The Qualitative Report, 20(9), 1408-1416. https://doi.org/10.46743/2160-3715/2015.2281
Richardson, L., & Adams St Pierre, E. (2015). Writing: A method of inquiry. In The Blackwell Encyclopedia of Sociology (pp. 1410–1444). Wiley Online Library. https://doi.org/10.1002/9781405165518.wbeosw029.pub2