Rather than quibble over the distinctions and priority of basic research over applied research, is it not possible to conceive of institutional research offices as social scientific research agencies, dispersed geographically and culturally, that are capable of being partners in a collaborative enterprise with a focused and balanced portfolio of scientific research in higher education? Rather than generalize knowledge that explicitly positions applied research as categorically different and subordinate to basic research, is it not possible to organize institutional research offices into a very large array of coordinated measurements of higher education phenomena to generate research findings that accumulate over time, provide important insights into the policies and practices of higher education, and push the boundaries of what is known about institutional effectiveness? Institutional research, no less than education research, is an abstraction that is constituted, administered, and pursued as a social science to produce generalizable knowledge. If institutional research exists in higher education institutions at all, it does so only as the study of a multitude of phenomena and the even more multitudinous relationships of phenomena to each other – not the study of an object, a singularity, or an event horizon from which generalizable knowledge by institutional research does not escape. Institutional Research’s Metropolis, Jul. 5, 2015
“Any component of the college or university may have a responsibility for institutional research” (AIR, Functions of Institutional Research, 1990)
Beginning in the 1990s, J. Fredericks Volkwein categorized institutional research offices into one of four stages of “development”: craft offices, small adhocracies, professional bureaucracies, and “elaborate profusions throughout the institution…”[1] The “development” of IR offices scaled with the size of institutions and offices, such that the casual observer most likely finds professional bureaucracies and elaborate profusions in the larger state institutions and research universities. While the professional bureaucracies produced the active contributors to “organizations such as the AIR [Association for Institutional Research] and NEAIR [Northeast Association for Institutional Research],” Volkwein identified the “elaborate profusion,” in which “different offices reporting to different parts of the administrative hierarchy” perform institutional research, as the most advanced development. In a 2008 review, Volkwein affirmed the lasting value of his schema, though he cautioned readers to take care “in generalizing about the practice of institutional research, because we know that organizational arrangements are highly variable from campus to campus and state to state.”[2]
The variability of institutional research offices — and the inequality in institutional research capacity in Volkwein’s “developmental” schema — traces its origin to the dominant paradigm in institutional research, which we label the Michigan State School of Institutional Research (MSSIR).
Paul L. Dressel and Joe L. Saupe, working in Michigan State University’s IR office in the 1960s and 1970s, played a prominent role in drafting the early documents of the Association for Institutional Research (AIR), including the first Handbook and an official memo on the Nature and Role of Institutional Research.[3] Their writings set down four fundamental tenets that have shaped the practice and organization of IR offices for the past forty-plus years. First, institutional research is “a function” (Dressel and Saupe). Second, “institutional research is different from the research of faculty members” (Dressel), and readers are warned not “to confuse institutional research, as we view it, with the more basic research on higher education” (Saupe). Third, the purpose of institutional research is “to probe deeply into the workings of [a single] institution” (Dressel); thus institutional research is “specific and applied… and should not be expected to produce knowledge of pervasive and lasting significance…” (Saupe). Fourth, as “a function,” institutional research may be performed by non-specialists and faculty committees (Dressel) and, generally, “carried on in institutions whether or not individuals or organizational units are specifically assigned to institutional research” (Saupe).
The AIR endorsed anew the fundamentals of the MSSIR paradigm twenty years later, in a document updated by Joe L. Saupe that restates the four basic tenets of the organization’s official position:
In 1990, in a piece entitled “Functions of Institutional Research,” Joe L. Saupe seized the opportunity to define institutional research as a function distinct from other functional units in higher education institutions. He defines the “nature and purpose of institutional research” such that it may be “distinguished from research on postsecondary education which has as its purpose the advancement of knowledge about and practice in postsecondary education generally.” The subject of institutional research is the “individual college, university, or system.” Saupe relegates the “translation of… various items of data into information useful to planners and decision makers” to the domain of “management information” (i.e., not generalizable knowledge), and therefore institutional research may be dispersed throughout the institution: “Any component of the college or university may have a responsibility for institutional research.”
The National Association of System Heads (NASH) recently surveyed public colleges and universities to measure institutional research capacities in the nation’s public higher education systems (report, 2015).[4] It was precisely institutions with “elaborate profusions” of institutional research functions that the survey encountered most often. As the authors of the report assert, “The analytic functions in most systems and campuses remain topically stove-piped, with the named ‘IR’ office focused primarily on student and student related research, with reporting and any research in other topical areas (resource use, efficiency and effectiveness, and personnel) handled by the budget and human relations offices” (n.p.). Overall, as Volkwein’s research from the prior 25 years shows, the IR offices within systems vary considerably in their capacity. Simple questions of fact, like data definitions, differ between campuses, limiting the “capacity for either system or campus decision makers to compare performance across campuses or systems, to understand the reasons for differences and to use data to drive improvements.”
Consequently, the NASH survey of institutional research in the state systems and institutional offices offers an indictment of the organization of institutional research as defined by the four tenets of the MSSIR, the official documents of the AIR from the past fifty years, and the designation of “elaborate profusion” as the most advanced development of IR in Volkwein’s schema. The variability in the quality of IR functions reflects the long-held MSSIR and AIR recommendations that institutional research root itself in particular institutions and pursue its practice without regard for contributions to generalizable knowledge. As we noted last year, the NASH survey serves to “illuminate how virtual offices or elaborate profusions of institutional research squander the wealth of institutions – both in terms of finances and knowledge…” The surprising fact is that it took more than forty years before chief executives of higher education sponsored an independent investigation into the soundness of IR scholarship (MSSIR) and the official positions of the AIR.
In response to the NASH survey, the AIR’s recent release, A New Vision for Institutional Research, unfortunately repackages the association’s fifty-year-old, failed vision for institutional research.
As is evident, the official position of the AIR for its entire existence has been that institutional research is 1) a function, 2) divorced from scholarship on higher education, 3) circumscribed in a particular institution, and 4) effectively decentralized in an “elaborate profusion” of local units and decision-makers. In its “new” vision, the AIR once again recapitulates the same four tenets at the heart of its stewardship of the profession, with few semantic differences:
“The move to an institutional research function—via a federated network model or matrix network model—is needed to assure that informed decisions routinely occur across an organization with the speed and flexibility required for real-world management of modern postsecondary education… [I]nstitutional researchers should be counted on to know and use the discoveries of others in forming a blended view of higher education relevant to real-world, locally-centered problems and opportunities. It is unlikely that basic research or traditional scholarly research will account for more than a minor advisory role in the future function of institutional research.” (9)
What, then, has changed in the AIR’s new vision? The historical record of institutional research:
Offices of institutional research once held the uncontested right as the ‘one source of the truth’ because of the special skills needed to access institutional data and use sophisticated analytic tools. (New Vision for Institutional Research, 7)
With this assertion, the AIR blithely dismisses a long trail of scholarship in its own annals and rewrites the history of institutional research. Volkwein must wonder what has happened to his life’s work on the diversity of institutional research offices, as there is no evidence in his scholarship that IR offices once held “the uncontested right as the ‘one source of the truth’…” In lieu of Volkwein’s sophisticated, if flawed, research on the variability of institutional research offices, the AIR offers an unsubstantiated revision of the scholarship on institutional research, a straw man that facilitates the “renewal” of its questionable advocacy for the elaborate profusion of institutional research functions: “A federated network model signals that the institutional research function operates as an organization-wide resource.” (8)
What substantive difference exists between the AIR’s old vision, circa 1970 and 1990, and its new vision of 2016?
As we described in Institutional Research’s Metropolis, higher education institutions in the United States have operated “virtual offices,” or “elaborate profusions,” of institutional research functions for over fifty years now. These virtual offices of decentralized institutional research functions deplete the wealth of higher education institutions and, according to a recent survey by NASH, result in IR offices that are incapable of addressing “issues affecting many of the cross-cutting issues of the day — such as the connections between resource use and student success…”
Unfortunately, the AIR chooses not to engage substantively with the NASH findings. Instead of critical self-reflection on the “development” of IR offices and the elaborate profusion of institutional research, the AIR grossly oversimplifies the past practice of institutional research (“Offices of institutional research once held the uncontested right as the ‘one source of the truth’…”) and effectively evades responsibility for the current state of the profession. More regrettably, the AIR offers a “new vision for institutional research” that perpetuates the same four tenets for the practice of institutional research that the association has espoused since its earliest years, tenets that have resulted in the current state of the profession. Perversely, in its failure to break with its past, the AIR validates the conclusion drawn by NASH: institutional research is “a field that is at best unevenly positioned to support change.”
“IR is not the first field of higher education to experience disruptive innovation.” (AIR, Institutional Research Capacity: Foundations of Federal Data Quality, 2016)
In a policy paper delivered to the Institute for Higher Education Policy as part of its series, Envisioning the National Postsecondary Data Infrastructure in the 21st Century, the AIR suggests IR offices should take their “lessons from prior disruptive innovations” from print shops: “the printing field changed quickly when desktop publishing turned personal computers into personal printing presses.” According to the AIR, the savvy “print shop manager… understood that some decline in professionalism was overcome by the quantity of communications that institutions were able to create.” Likewise, mainframe computing gave way to network computing, reducing “top-down control…” From both of these examples, institutions are to learn to consider IR as a network of decentralized functions in which “far more production and greater distribution of data for decision support will far exceed the negative reactions to some data studies that do not represent best practices.”
To be clear, rather than standardizing and improving the quality of analytic functions in order to bring institutional research out of the “stove-pipes” and address the issue of institutional comparability, as NASH urges, the AIR recommends that colleges and universities further erode their ability to conduct research that permits comparisons within systems and further distribute the functions of institutional research within each institution as “data studies” with no controls for best practices.
IR is neither the first field nor the only field of higher education to experience disruptive innovation. It may be, however, the first and only field of higher education whose association actively works to deprofessionalize its membership as a response to disruptive innovation. To our understanding, the American Council on Education does not seek to deprofessionalize the nation’s presidents in the face of disruptive innovation. The National Association of College and University Business Officers does not seek to deprofessionalize its CFOs to service disruptive innovation. The National Association for College Admission Counseling does not seek to deprofessionalize its enrollment management officers in deference to disruptive innovation. Lastly, the Association for the Study of Higher Education does not seek to deprofessionalize faculty to usher in disruptive innovation in their classrooms. The AIR stands alone in its efforts to deprofessionalize the higher education professionals it represents and to redistribute the functions of IR offices to other areas of higher education outside of the profession.
What is most alarming, however, is that disruptive innovation and the proliferation of data access have clearly increased the demand for multi-institutional comparisons and for the professionalization of institutional research analytics, at the very time the AIR wishes to take the profession in the opposite direction.
In our weekly News Items and several past essays, we have tracked the competition among data science vendors to build multi-institutional databases of student records in order to “probe deeply” into student success across higher education in general, rather than within a single institution. The PAR Framework, for example, prior to its acquisition by Hobsons, was “a student success collaborative with more than two dozen member institutions…” Likewise, Civitas Learning continues to attract “advocates for higher education transformation” and to acquire other edtech startups in order to leverage its network of “more than 880 campuses reaching more than 3.2 million students worldwide.”
Despite our misgivings about proprietary student success solutions, Civitas Learning should be lauded for its support of a community “learning space” in which partners are able to build collaboratively toward a better understanding of student needs and initiative outcomes. Rather than lowering the bar on data studies and student success, Civitas Learning raises expectations that the standards of analytics will evolve from current practices of descriptive and diagnostic analytics (i.e., fact books and correlation testing) to predictive and prescriptive analytics (what Civitas labels “What is the best that can happen and how can we make it so?”). In an effort to warn institutional leaders about the rising bar on student success, Civitas Learning makes clear that “most institutions now lag behind the rising standards of analytics best practices…”
Beyond for-profit ventures to deliver the quality analytics that NASH leaders seek, non-profit organizations aim to provide better guidance on how to effectuate student success mechanisms through the lens of established social scientific practices. ideas42, a non-profit “with unique expertise and experience at the forefront of behavioral science,” released a comprehensive report on 16 different projects to improve student success at a number of partner institutions. While not quite as groundbreaking as the press release suggests, ideas42 serves as a coordinating agent to direct the study of student success initiatives and to report the results of the interventions independently. Notably, the non-profit does not report results on one intervention because it was “unable to run [a] randomized controlled trial for this intervention.” In addition to these seemingly rigorous standards for results testing, the report includes precise baseline results that can be referenced if the group replicates its studies on other campuses.
Lastly, a group of administrators and faculty members convened at an event, Asilomar II: Student Data and Records in the Digital Era, to discuss the ethical use of student data in the new era of prescriptive analytics and the application of behavioral science to student success. The resources posted on the conference site record the trend toward the development of more professional standards for research using the vast amounts of data available through learning environments as well as the ethical use of student data to advance student success. As one organizer states, “There is a value to having a sectorwide conversation and coming up with a set of standards.” Although the attendees did not reach consensus, Inside Higher Ed reports that the participants “made progress on articulating a handful of ideas that participants said should guide the responsible use of student data.”
Thus, while the AIR insists that disruptive innovation in higher education portends greater decentralization for institutional research functions and the diminished need for rigorous and valid research studies, the recent trend among for-profit student success vendors, behavioral science NGOs, and the academic leadership at large is toward greater professionalization and standardization with regard to the deployment of institutional research for student success. Moreover, the leaders in disruptive innovation and higher education leadership look to the professional standards of social scientific scholarship as the preferred model for both the rigor and ethical use of research. In this new climate in which higher education leaders both inside and outside the academy recognize the demand for greater validity and reliability from institutional research, the AIR effectively sidelines the profession from the discussion: “It is unlikely that basic research or traditional scholarly research will account for more than a minor advisory role in the future function of institutional research.”
“… this new Wild, Wild West of independent data and information brokers…” (AIR, A New Vision for Institutional Research, 2016)
The National Association of System Heads (NASH) faulted the unequal capacities of institutional research offices within a single state system as well as the “stove-piping” of institutional research throughout an institution. The profuse or networked arrangement of institutional research functions constrained IR offices to the reporting of student statistics and distributed “other topical areas to… the budget and human relations offices.” Furthermore, the distribution of institutional research functions throughout the institutions in a single system both produced redundancy and undermined comparisons on basic institutional statistics:
While gaps exist in data governance and infrastructure among systems and their campuses, there is also a redundancy in reporting between system and campus, perhaps necessitated by different audiences for the different levels of work. This contributes to confusion about basic measures and metrics, and also gets in the way of potential efforts for greater sharing of work between campuses and systems in order to free up staff to do other things.
The AIR’s New Vision for Institutional Research reverses the findings and conclusions of NASH.
The AIR ignores its own fifty-year history of advocacy for “elaborate profusions” of institutional research functions, the very advocacy that led to the current condition of IR in the nation’s public institutions of higher education. The AIR offers a semantic revision that claims “federated networks” of institutional research throughout the college and university (AIR, New Vision, 2016) are far superior to the “elaborate profusions” of institutional research “in any component of the college or university” (AIR, Old Vision, 1970, 1990). If anything, its metaphor, “this new Wild, Wild West of independent data and information brokers…,” evokes images of lawlessness and rugged individuality, seemingly advocating a further erosion of data standards and a vast proliferation of IR “stove-pipes” that retard “the potential for greater sharing of work between campuses and systems…” (NASH). Lastly, the AIR shows little regard for the ethical use of student data, both literally (the word “ethical” is never mentioned) and metaphorically (“the new Wild, Wild West”). With this terminology and framework, the AIR’s vision for institutional research seemingly sanctions “data studies” such as Mount St. Mary’s University’s retention program.
In short, the AIR’s vision will continue to advance the inequality of institutional research capacities, to vex higher education leadership, and to retard national, system-wide, and institutional efforts to improve student success.
The trend toward greater professionalization and standardization of institutional research on student success is already well-established and rapidly moving. No doubt, the states’ system heads and college presidents will have to heed their own conclusions and turn to external organizations to professionalize research on student success in order to acquire generalizable knowledge and enable standardized comparisons between institutions. Whether institutional researchers welcome a modernizing era of scientific research and inter-institutional data standards, or choose to retreat into fables about the great American West, will determine the reputation and role of the profession in higher education of the future.
- J. Fredericks Volkwein, “The Diversity of Institutional Research Structures and Tasks,” in Organizing Effective Institutional Research Offices, New Directions for Institutional Research No. 66 (San Francisco, Summer 1990).
- J. Fredericks Volkwein, “The Foundations and Evolution of Institutional Research,” in Institutional Research: More than Just Data, ed. Dawn Geronimo Terkla, New Directions for Higher Education No. 141 (Spring 2008), 9.
- Paul L. Dressel, ed., Institutional Research in the University: A Handbook (San Francisco, 1971), 38; and Joe L. Saupe and James R. Montgomery, The Nature and Role of Institutional Research … Memo to a College or University (Association for Institutional Research, Nov. 1970), 8; accessed at http://files.eric.ed.gov/fulltext/ED049672.pdf on July 21, 2015.
- National Association of System Heads, “Meeting Demand for Improvements in Public System Institutional Research: Progress Report on the NASH Project in IR” (March 2014), accessed at http://www.nashonline.org/sites/default/files/nash-ir-report_1.pdf.