Using NVivo in Process Research

By Jane Lê, University of Sydney, Sydney, Australia

Computer Assisted Qualitative Data Analysis Software (CAQDAS) like NVivo is becoming increasingly popular in organizational research. This is partly because the analysis of rich, deep qualitative data makes important contributions to the management field (Bansal & Corley, 2011; Gephart, 2004; Lee, 2001; Rynes, 2007) but is difficult and time consuming. CAQDAS packages were introduced to help simplify this process, both in terms of data management and in terms of data analysis support. However, CAQDAS should be employed with an important caveat: While they may assist the analytic process by acting as a database, providing a structured framework for analysis, and offering some analytic tools, the software cannot actually do any analysis. Thus, the very name “qualitative data analysis software” is deceptive. You still do the analysis; the software only provides tools that may support the process. In other words, the software cannot take the all-important “creative leap” for you (Langley, 1999).

Having said that, software is useful in certain ways. I’ve been using NVivo for about ten years. I started using it during my PhD because I found it difficult to manage the data coding and analysis process using hardcopies. I was working with 110 interviews and dozens of codes. While the process was initially easy to do manually, it became more complex as codes shifted and merged, and as I tried to capture more complex relationships. (If you’ve ever tried to change the colour of highlighter pen on paper, or to make multiple annotations on the same sheet of paper, you’ll know what I mean.) I thus reluctantly turned to software. I’m not technologically savvy, but I found the software intuitive and facilitative, and I haven’t used hardcopy raw data since.

What I want to do in this “musing” is to unveil some of the assumptions inherent in CAQDAS, some of the benefits of using it for process research, and some of the problems and potential solutions that I have come across. I hope it helps other researchers as they embark on or continue their process journey. In this musing I will draw on my experience with NVivo. I also acknowledge that some of what I write will sound familiar to those attending the annual AoM CAQDAS PDW; both draw on my experience with the software. The increasing convergence of software packages also means that those working with other CAQDAS suites, such as ATLAS.ti, MAXQDA, or QDA Miner, are likely to see some functions they recognize. For more information, visit the University of Surrey CAQDAS webpages. They are wonderful.

Assumptions

Before we discuss how NVivo can be used in process research, we should consider the background and assumptions inherent in the software. The precursor to NVivo (NUD*IST) was originally developed by Tom Richards at La Trobe University in Melbourne. He specifically designed the software to assist his wife, Lyn, in her work as a sociologist. The colloquial version of the story is that Tom got bored of having giant post-it notes, posters, and paper cut-outs all over his house, and so sought a simpler, more contained way for Lyn to do her analysis. I don’t know how much of that story is true, but I do know that NVivo is now the most popular CAQDAS used in organization studies. Part of the reason for its popularity is the fact that it was designed around the principles of grounded theory and thus supports both in-vivo and pre-existing codes.

Functionality & Uses

NVivo can be used in a number of different ways: simply as a database, i.e. to store and retrieve data; to code or cut data, i.e. to make the first cut of the data and separate it into manageable chunks; or throughout the analytical process, i.e. to do advanced coding and relationship modelling. Most researchers probably rely on the software most heavily for the first two of these uses. The uses are supported by a number of different functions:

  • Sorting / Grouping
  • Linking
  • Coding
  • Querying / Searching
  • Writing / Annotating
  • Mapping / Visualising

This means that researchers are able to rely on the software to do many monotonous tasks (e.g. counting, searching, auto-coding) and can thus focus their attention on the actual coding and analysis.
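To make the “counting” point concrete, here is a minimal stand-in for the kind of tally CAQDAS automates. This is plain Python, not an NVivo feature, and the file name is hypothetical:

```python
# Minimal sketch of a monotonous task CAQDAS automates for you:
# counting how often each term appears in a transcript.
# "interview_01.txt" is a hypothetical file name.
from collections import Counter
import pathlib
import re

text = pathlib.Path("interview_01.txt").read_text(encoding="utf-8")
words = re.findall(r"[a-z']+", text.lower())
for word, count in Counter(words).most_common(20):
    print(f"{word:>15}  {count}")
```

In NVivo itself this is simply a word frequency query; the point is that bookkeeping of this sort is exactly what should be delegated to the machine.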

Working with Process Data

Process data and analysis are unique, as they require us to work with both absolute and relative time. Consequently, there are challenges in process research that CAQDAS have not yet been able to adequately address. I go through some of the more common ones below:

Data chronology

There is currently no way to adequately order the data temporally; in other words, there is no “date and time stamp”, particularly after you import the files into NVivo. There are a few ways to get around this. The most effective way that I have found is to label the data sources / documents based on their date (e.g. 2013-09-01 Strategy Meeting; a year-month-day label sorts alphabetically into chronological order, which is what makes this format work). This makes sure that all documents are listed and recalled chronologically, both in the data view and if you choose to do data or coding searches. This is extremely helpful, as it means that you do not have to reorder the data or the timeline you are constructing, but can rely on the software to order it for you. However, doing this requires all data pieces to be entered separately and assumes that data was collected, or at least produced, in real time. Depending on the interval at which you collect or want to analyse data, another way to bring in chronology is to use folders or sets. What you do here is group different pieces of data together on the basis of their chronological occurrence, for instance by week, month, or year (e.g. creating sets like “2013-09 Data”, “2013-08 Data”, etc.). However, this will not maintain the fine-grained order or sequence unless you combine it with the labelling technique above. It also requires real time data.
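If you have many files to relabel, the prefixing can be scripted before import. Below is a minimal sketch in plain Python (nothing NVivo-specific; the folder names are hypothetical, and it assumes each file’s modification time reflects when the data was produced, which you should verify or replace with your own date source). It prefixes each file with a year-month-day date so that an alphabetical listing is also a chronological one, and copies the files into one folder per month, mirroring the set-per-month idea above:

```python
# Minimal sketch: prefix raw data files with a YYYY-MM-DD date so that
# alphabetical order equals chronological order once imported into NVivo.
# Assumption: each file's modification time reflects when the data was
# produced; swap out date_for() if your dates live elsewhere.
import datetime
import pathlib
import shutil

RAW = pathlib.Path("raw_data")        # hypothetical source folder
PREPPED = pathlib.Path("for_import")  # hypothetical destination folder

def date_for(path: pathlib.Path) -> datetime.date:
    """Best guess at the date a document was produced."""
    return datetime.date.fromtimestamp(path.stat().st_mtime)

for f in sorted(RAW.iterdir()):
    if not f.is_file():
        continue
    d = date_for(f)
    # e.g. "2013-09-01 Strategy Meeting.docx"
    new_name = f"{d.isoformat()} {f.stem}{f.suffix}"
    # One folder per month (e.g. "2013-09 Data"), ready to import as a set.
    month_dir = PREPPED / f"{d:%Y-%m} Data"
    month_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, month_dir / new_name)
```

Copying rather than renaming keeps the originals untouched, so the script can be rerun safely if the date rule changes.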

Organizational or interpretive chronology

Sometimes we don’t have the luxury of working with real time data but instead have to rely on retrospective reports or interviews that may cut across several time periods that we may wish to separate out. In this case, there are several helpful techniques you can use.

First, you should again categorize your data, using folders or sets. This time I suggest that you categorize your data based on type, e.g. meetings, interviews, field observations, minutes, reports, etc. That way you can filter out temporal references made in real time from those made retrospectively. It also allows you to take account of the different quality and type of data you may be working with.

Second, you will want to use nodes to code data to different time periods or dates. If you know what these time periods or points are ahead of time, you can use tree nodes (e.g. time period 1; you could even have ‘child nodes’ or sub-codes for different months or occasions). If you don’t know what these time periods or points are and need to identify them in your analysis, you might want to use free nodes (e.g. references to time, important events, etc.). You can use the same technique for interpretive chronology, e.g. the importance of time, the impact of a specific period or event, etc. The key is to be as accurate in coding as possible, to facilitate comparison within and across codes.

A final way to code for references to time is to run queries or searches in your database. Here you enter keywords, such as “January”, “event”, “2012”, “time”, or “deadline”, and the software returns a set of results. The results can be very good, but should always be approached with caution, because the search engine can only identify the exact wording you enter; it cannot account for typos, dates presented in numbers instead of words, and so on. In practice, that means going through the search results carefully, disregarding anything irrelevant, and then going through the documents manually to ensure nothing was missed. It is thus important to know what you’re looking for before you start the process.
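That exact-wording limitation is easy to see, and partly to work around, outside NVivo as well. The sketch below (plain Python again; the keyword list, date pattern, and folder name are my own illustrative assumptions) flags lines in transcript files that mention either a temporal keyword or a numeric date form that a literal keyword query would miss:

```python
# Minimal sketch: flag possible temporal references in transcripts,
# including numeric date forms (e.g. "01/09/2013") that an exact
# keyword query would miss. Keywords and patterns are illustrative.
import pathlib
import re

KEYWORDS = ["january", "deadline", "last year", "before", "after"]
# Matches 01/09/2013, 2013-09-01, or a bare year such as 1999 or 2012.
DATE_PATTERN = re.compile(
    r"\b(?:\d{1,2}[/-]\d{1,2}[/-]\d{2,4}|\d{4}-\d{2}-\d{2}|(?:19|20)\d{2})\b"
)

def temporal_hits(text: str):
    """Yield (line_number, line) pairs that may refer to time."""
    for i, line in enumerate(text.splitlines(), start=1):
        lower = line.lower()
        if any(k in lower for k in KEYWORDS) or DATE_PATTERN.search(line):
            yield i, line.strip()

# "transcripts" is a hypothetical folder of plain-text interview files.
for path in sorted(pathlib.Path("transcripts").glob("*.txt")):
    for lineno, line in temporal_hits(path.read_text(encoding="utf-8")):
        print(f"{path.name}:{lineno}: {line}")
```

As with NVivo’s own queries, the output is a list of candidates to review, not a finished set of codes; typos and oblique references (“the week it all went wrong”) will still slip through, which is why the manual pass remains necessary.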

All research, and all software use, is individual, so there will be many other approaches and solutions. I would be happy to discuss this topic further, so if you would like to continue this conversation, please contact me at Jane.Le at sydney.edu.au!

References

Bansal, T., & Corley, K. (2011). The coming of age for qualitative research: Embracing the diversity of qualitative methods. Academy of Management Journal, 54(2), 233-237.

Gephart, R. P. (2004). From the editors: Qualitative research and the Academy of Management Journal. Academy of Management Journal, 47(4), 454-462.

Langley, A. (1999). Strategies for theorizing from process data. Academy of Management Review, 24(4), 691-710.

Lee, T. W. (2001). From the editors. Academy of Management Journal, 44(2), 215-216.

Rynes, S. L. (2007). Academy of Management editors’ forum on rich research: Editor’s foreword. Academy of Management Journal, 50(1), 13.


© 2013 J. Lê. All rights reserved