Cloud over Institutional Research
The typical higher education institution in the United States may spend $1,000,000 per year on institutional research – without adequate controls for its transparency or accountability, and without any assessment of its contribution to the mission of the college or university. As studies of institutional research offices revealed over 25 years ago, higher education institutions often have no office of institutional research or only a one-person office.[1] Despite the absence of substantial personnel, institutional research functions will be found distributed across an array of units and their budgets, forming an informal “virtual office” that meets the demands of compliance, the curiosity of decision-makers, and the expected assessment of student outcomes.[2] Each individual engaged in the enterprise likely works in isolation, without any specialized research background, loosely coordinated via emailed “requests for data,” and under the supervision of a functional unit leader. While no study has sought to definitively estimate the distributed costs of institutional research at institutions without IR offices, or with one-person offices, a thought experiment can provide a baseline estimate of what empirical research could better measure.[3]
Whether there is no office or a one-person office, the institution will likely have an information technology professional, most likely a database programmer with SQL and SQR skills, dedicated to extracting data from the student information system to the specifications of the submissions (estimate = $80,000). The IT professional, however, must rely on the record management and verifications of staff distributed across the institution to support the submission of data on admissions, enrollments (Registrar), finance, human resources, degree completion (schools), libraries, and athletics – to name the most common. While the mundane task of reporting may be handled by low-level professionals, the oversight of institutional reporting by a myriad of division leaders and program directors interested in managing data on their own performance easily amplifies costs (estimate = $30,000 per functional unit, or $210,000). The accredited academic programs carry growing obligations for both annual submissions and periodic self-study, obligations that entail standard remissions for faculty chairs and program directors and/or remissions for special purposes (estimate = $25,000 per accredited program, or $175,000 [seven programs]). Accrediting agencies expect institutions to participate in a number of benchmark study surveys each year and, under incremental budgeting, the functional units will carry the budgets for these student surveys every year regardless of whether the survey is administered. In addition, student survey research projects require both participation fees and local administrative costs, again hidden in the personnel costs of the functional units (estimate = $80,000). That is roughly one half million dollars for institutional reporting and accreditation compliance alone.
The institutional research needs of educational leadership, however, do not end with institutional reporting, and the institution will seek more complex research that cannot be supported by a virtual office of non-researchers fulfilling data reporting requirements as a matter of compliance. The need to address very real and pressing organizational issues spawns a need for research expertise that will only be found among external consultants and vendors who, more often than not, deliver one and only one specialized research service entailing advanced statistical methods (tuition discounting, space utilization, salary studies, etc.). The typical cost for the specialized research services of external experts easily reaches $100,000 per project (i.e., per strategic directive of the institution), creating a significant multiplier in the costs to the institution lacking an institutional research office. To be generous, presume that the institution scales back its ambitions for operational improvements, not the available resources for operations, and pursues only three substantial improvement projects per year (estimate = $300,000). Regional accreditors require the assessment of student learning outcomes, leading to additional stipends and remissions for faculty dedicating hours to administrative work (estimate = $60,000). Accreditation self-studies, strategic projects, and student learning outcome assessments are rarely supported by the annual routines of the dedicated IT professional, which compounds costs for database programmers in the IT budget in order to extract more data (estimate = $40,000). Add to the preceding the facile efforts of institutions that create institutional research offices by hiring a single individual who does no more than coordinate the inefficient enterprise of the virtual IR office (estimate = $85,000).
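The line items of the thought experiment can be tallied in a few lines. This is only a sketch of the essay's own hedged estimates (none of these figures are measured costs, and the labels are illustrative):

```python
# Hedged, illustrative line items from the thought experiment above.
# All figures are rough estimates, not measured costs.
reporting_costs = {
    "dedicated IT professional (SQL/SQR)": 80_000,
    "reporting oversight, 7 functional units (7 x $30,000)": 210_000,
    "accreditation remissions, 7 programs (7 x $25,000)": 175_000,
    "benchmark surveys: fees and local administration": 80_000,
}
research_costs = {
    "external consultants, 3 projects (3 x $100,000)": 300_000,
    "student learning outcomes stipends and remissions": 60_000,
    "additional database programming": 40_000,
    "one-person 'coordinating' IR office": 85_000,
}

compliance_total = sum(reporting_costs.values())  # 545,000
research_total = sum(research_costs.values())     # 485,000

print(f"Reporting and compliance: ${compliance_total:,}")
print(f"Strategic and assessment: ${research_total:,}")
print(f"Estimated annual total:   ${compliance_total + research_total:,}")
```

The total lands at $1,030,000 – close enough to the round $1,000,000 figure used throughout this essay.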
In major universities, which typically have IR offices, these inefficiencies are compounded by “elaborate profusion” wherein “institutional analysis is decentralized, if not fragmented and [again] only loosely coordinated.”[4] Consequently, the costs of institutional research functions easily approach $1,000,000 at small colleges and likely reach much, much more at major universities.
In return for $1,000,000 in expenditures, the institution gains fractured and uncoordinated procedures for compliance with external submissions that provide little added value for the administration of the institution, a data feedback report from the federal government, shelves full of self-study reports on program performance and student learning outcomes, three potentially insightful recommendations for strategic planning or institutional effectiveness wholly unrelated to each other by unit and/or vendor, and a few generalized prescriptions about “best practices” or “high impact practices” from the academic literature on higher education that quicken institutional isomorphism.[5] Institutional research, even where no office exists, clearly plays a significant role in the decision-making of higher education administration and contributes to the generalized knowledge of higher education research. And yet, a profession dedicated to the systematic performance of institutional research continues its search for order and relevance in decision-making and the higher education literature after 50 years. Institutional research’s murky horizon between administrative practice and academic purpose is perhaps the most immediate sign of its transformational potential for the academy. Everything about its annual output, including its persistent interest in studies of faculty salaries,[6] attests to the fact that the functions originated as faculty-directed research[7] with the potential to be organized and pursued with social scientific rigor. Its constitution as an administrative function, however, somehow bound institutional research to the institution like a prisoner in a prison house – whereas, to the contrary, the institution is institutional research’s metropolis.
To Be a Science
Before the science, there was the research. Go back just over a century and a half in the discipline that became physics, and the familiar distinction between natural philosophy (or science) and the mechanic arts – a distinction similar to that between basic research and applied research – vanishes. Go far enough back, and the distinction between basic research and applied research disappears entirely in the discoveries (Saturn’s rings) and inventions (improvements in the optics of telescopes) of Christiaan Huygens. Even within the past 100 years, Carl Sagan recounts in his original documentary, Cosmos, how “a mule-team driver who never went beyond the eighth grade,” Milton Humason, demonstrated an incomparable aptitude for making the “fine adjustments” to the calibrations of the large, mechanical telescope at Mount Wilson Observatory. He became astronomer Edwin Hubble’s assistant for the “painstaking observations” of the redshift of distant galaxies that established the expansion of the universe, and subsequently theories of the origin of the universe, the big bang, and its age. Humason was welcomed into the community of astronomers for his contributions and awarded an honorary doctorate for thirty years of research at the Mount Wilson Observatory.
Fast forward one hundred years, and one finds an abstruse science with transnational collaborative organizations, funded by national governments, that are extraordinarily dedicated to the painstaking measurement of one and only one phenomenon – the mass of the Higgs boson, a particle “predicted by the Standard Model.” Per “A Layperson’s Guide to the Higgs Boson” posted by the University of Edinburgh, the experiment requires the collision of protons to recreate “the conditions that existed just after the big bang when Higgs bosons may have first existed.” One aspect I found fascinating: “Out of one billion of these proton collisions [physicists] expect to make just 10 Higgs bosons.” Per a rule of thumb for the most pragmatic research, that meant to me that physicists had to effect 40 billion collisions in order to produce 400 measurable Higgs bosons and a 95% confidence interval for the mass. Of course, physicists demand more certainty in their measurements, and the scientists collected data “from about 4000 trillion proton-proton collisions at the Large Hadron Collider.” The scientific endeavor to measure the Higgs boson, as estimated by Forbes magazine, “ran about $13.25 billion,” including $4.75 billion to build the technology.
As the authors of Scientific Research in Education state, no “meaningful distinction [can] be made among social, physical, and life science research and scientific research in education… [T]he set of guiding principles that apply to scientific inquiry in education are the same set of principles [that] can be found across the full range of scientific inquiry.”[8] Based on this principle, the scientific inquiry to measure the Higgs boson permits an analogy. Imagine for a moment that the scientific advancement of higher education rested on the discovery of a Higgs boson theorized in a Standard Model for institutional effectiveness in student learning outcomes. Consider also that the Higgs boson of institutional effectiveness was equally difficult to measure and required a minimum of 400 measurements for practical conclusions – or 40 billion tests. If the Higgs boson of institutional effectiveness is a phenomenon of the institution and is only measurable on an annual basis (as the annual work of institutional research is currently structured), the institutional researcher in solitary confinement will have to work for at least 40 billion years to measure educational cohorts one year at a time (and 100 million years to measure the first Higgs boson of institutional effectiveness).
The industrious institutional researcher will reach out to other institutional researchers to form “data exchanges,” through which a consortium of three institutions could narrow the work to under 14 billion years (the current age of the universe) or a consortium of forty to around 1 billion years (just in time to beat the warming of the sun that ends complex life on Earth). The education researchers leading the National Survey of Student Engagement, with a little over 700 participating institutions annually, could measure the Higgs boson of institutional effectiveness and issue the definitive list of high impact practices in a little under 57 million years. Lastly, the scientists at the National Center for Education Statistics in the U.S. federal government, with 7,500 participating institutions each year, will be the most efficient, measuring one Higgs boson of institutional effectiveness every 13,333 years and completing the scientific enterprise in a little over 5.3 million years.[9]
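The consortium timelines above follow from simple division. A short sketch of the essay's own back-of-the-envelope arithmetic (the institution counts and detection rates are the essay's figures, not empirical data):

```python
# Each institution yields one "measurement" per year; detections occur
# at 10 per billion trials; 400 detections are needed for practical
# conclusions -- hence 40 billion institution-years of work to divide.
DETECTIONS_PER_BILLION = 10
DETECTIONS_NEEDED = 400
trials_needed = DETECTIONS_NEEDED * 1_000_000_000 // DETECTIONS_PER_BILLION

consortia = {
    "solitary researcher": 1,
    "three-institution data exchange": 3,
    "forty-institution consortium": 40,
    "NSSE (~700 institutions)": 700,
    "NCES (7,500 institutions)": 7_500,
}
for label, institutions in consortia.items():
    years = trials_needed // institutions
    print(f"{label}: about {years:,} years")

# Years until the first detection (1 per 100 million trials) at NCES scale:
print(f"First detection at NCES scale: {100_000_000 // 7_500:,} years")
```

The division confirms the figures in the text: roughly 13.3 billion years for three institutions, 1 billion for forty, 57 million for NSSE, and 5.3 million for NCES, with NCES producing its first detection after about 13,333 years.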
Fortunately, considering the time frames necessary for the discovery of the Higgs boson of institutional effectiveness, colleges and universities in the United States are spending nowhere near $13.25 billion to discover the elusive particle of institutional effectiveness and student learning outcomes. Well, perhaps. At the typical estimate of $1,000,000 per year, the direct costs for institutional research at the 4,700 postsecondary Title IV degree-granting institutions in the United States[10] amount to $4.7 billion per year. To that, a comprehensive accounting of the total expenditures on institutional research at each institution would have to add the total cost for academic programs and research that purport to advance the practice of institutional research in higher education (no estimate). The total costs to publish educational statistics by the federal government and its data tools for higher education must also be factored into the costs of institutional research (no estimate). Then, to complement the numerous federal grant programs, a legion of foundations annually dole out grant dollars for the implementation and study of effective practices in higher education (no estimate). In addition, the ever-growing class of commodities known as rankings and ratings, produced by the publishing, financial, and job search industries utilizing freely distributed higher education data, certainly contributes to the total costs of institutional research in the United States as well (no estimate on costs, but one assumes from the longevity of higher education rankings that the practice is profitable).
In sum, it seems quite possible that the United States spends $13.25 billion bi-annually on the institutional research of higher education – an entire enterprise hardly, if at all, organized by any substantive theory, rigorous systematic practice, or commitment to scientific discovery – that cannot “tell when a college is circling the drain” (subscription required as of July 1, 2015). Every year, higher education institutions spend an amount equivalent to the total cost to build the Large Hadron Collider used to discover the Higgs boson and, yet, there is no corresponding community of scientists, no standard theories of a research program, and no significant record of scientific discovery in higher education and institutional effectiveness that makes a quantum step toward the achievements of particle physicists.
How is it possible?
From Self-Study to Social Science
Writing in 1962, Loring M. Thompson, in one of the first statements on the historical origins of institutional research, states that the profession “may be traced back to self-studies of academic programs by faculties and faculty committees,” but that the purview broadened to include “enrollments and prospective enrollments” in response to the generation that became known as the “baby boomers,” who followed “a temporary valley [in enrollments] because of the small number of births during the depression eighteen years earlier.”[11] In the decades after the advent of the personal computer, the pace and proliferation of technological innovations portended that “[t]he institutional research offices of the twenty-first century will be very different from those of today.”[12] Not one of these three conditions for institutional research precludes its identification with social science or education research. The National Research Council’s report on scientific research in education makes clear that a site of practice, an institution, may be the subject of a scientific study, at the very least an ethnographic study, and “generate systematic observations about the focal group or site, and patterns in results may be generalizable to other similar groups or sites or for the future.” As mentioned previously, the report acknowledges that education research is applied research to support decision-making and notes, “we also believe the distinction between basic and applied science has outlived its usefulness… [and] often served to denigrate applied work.” Lastly, as in the practice of astronomy, “Embedded in their practice, scientists also engage in the development of objects (e.g., instruments or practices); thus scientific knowledge is a by-product of both technological activities and analytical activities.” Ultimately, not even the multiple disciplinary origins of institutional research matters: “[I]t is the scientific community that enables scientific progress, not… adherence to any one scientific method.”[13]
Why, then, does the profuse and costly enterprise of institutional research in the United States differ from scholarly research in education?
In part, as the prior brief suggests, the higher education literature largely ignores institutional research as an executive function and imposes an order and vocabulary on the purview of institutional research that renders its practice as secondary research or applied research (in the pejorative sense) that best surrenders its statistical findings as data (“the given”) to be appropriated and exchanged by external users, independently from the questions posed by an institutionalized system of inquiry. For instance, rather than conceptualize technological and analytical activities as the potent source of scientific knowledge in education, Patrick Terenzini warned in 1995, “The danger of being preoccupied with technology is that institutional researchers may increasingly be seen as technicians, good at what they do but with a limited perspective and understanding of important academic and administrative issues.” Two decades after counseling against preoccupation with technology in a publication on “preparing for the informational needs of the twenty-first century,” he reiterated his suspicion of technology: “I fear the same outcome, perhaps more keenly now than 20 years ago.”[14] The partnership of technology and science, exemplified by the research of Humason and Hubble and adopted by the National Research Council for scientific research in education ten years earlier, does not resonate in the cautionary counsel regarding technology for institutional researchers. And the order and vocabulary of unscientific endeavors extend much deeper.
As applied research “within an institution of higher education to provide the information which supports institutional planning, policy formation, and decision making,” institutional research fared no better. In 1990, in a piece entitled “Functions of Institutional Research” – which posed an opportunity to define functions in higher education institutions distinct from other functional units – Joe L. Saupe defines the “nature and purpose of institutional research” such that it may be “distinguished from research on postsecondary education which has as its purpose the advancement of knowledge about and practice in postsecondary education generally.” The subject of institutional research is the “individual college, university, or system.” Saupe relegates the “translation of… various items of data into information useful to planners and decision makers” to the domain of “management information” (i.e., not generalizable knowledge), and therefore institutional research may be dispersed throughout the institution: “Any component of the college or university may have a responsibility for institutional research.” In one final statement on the subordination of institutional research, as applied research, to education research, the basic research, Saupe asserts, “the purposes of institutional research and research on higher education differ….” While the two may contribute to each other, “The problems, methodology, and results of the general research can be applied and particularized in institutional research,” whereas “the findings of institutional research may merit generalization through broader studies” by academics. Thus, the bonds of administrative institutional research to its particular institution can only be broken by the liberating generalization of academic research by higher education scholars.
At the same time, researchers on postsecondary education devised systems to categorize the diversity of institutional research structures and tasks that unmistakably elevated the activity of institutional researchers in large research universities. J. Fredericks Volkwein employs the literature on organizational life cycles to characterize institutional research offices in one of four stages: infancy and childhood, adolescence, adulthood, and maturity. At the infantile stage, institutions establish one-person “craft” offices that are “burdened by the demands of routine reporting and a modest amount of number-crunching for the institution.” At the childhood stage, offices of up to three persons formed “small adhocracies” with “a flat hierarchy, simple structure, and little specialization… [and] responsive to their administrative hosts.” The adult stage brings the “professional bureaucracy… [c]onsisting of at least four (but usually more) professionals” who perform “a number of sophisticated research projects each year.” Volkwein characterizes staff from such offices as the active contributors “to organizations such as the AIR and NEAIR.” In the final stage, “an elaborate profusion… [of] institutional research activities and expertise proliferate throughout the institution…” to service “an analytical environment” with such complexity that institutional research is conducted by “different offices reporting to different parts of the administrative hierarchy.” The theory locks the development of institutional research offices into an ontogeny in which the later stages have already experienced and mastered everything the earlier stages undergo. Knowledge therefore flows unidirectionally, from the later stages to the earlier stages – or from larger offices to smaller offices – and reinforces the distinction of basic research on higher education as opposed to applied research by institutional research.
In addition, the generalizable knowledge that may be gained from the immersion of an institutional researcher in a site of practice (ethnographic potential) fails to gain consideration.[15] Notably, Volkwein’s schema scales with the size of the institution. Craft offices are found on campuses of fewer than 5,000 students, small adhocracies on campuses of 5,000 to 10,000, and professional bureaucracies and elaborate profusions at doctoral and research universities. The largeness of the institution becomes, in and of itself, what is inherently valuable for institutional research.
The pejorative characterization of institutional research as dangerously technological applied research largely performed by immature administrative offices at small colleges reached an inflection point in the drafting of a code of ethics in 1992. In a dispute on the use of the terms “profession” or “craft,” the committee charged with drafting the code of ethics noted that professional “occupations share several important characteristics: (1) a well-defined body of knowledge that is rigorously taught, (2) a gatekeeping function to define who may practice the occupation, so as to protect the public from improperly trained practitioners and untrained pretenders, and (3) an enforced code of behavior and practice. Institutional research does not qualify on any of these criteria.” In its rationalization of the statement of competence, after citing a short list of common if not universal institutional research functions, the committee lapses back into institutional particularity and concludes: “what constitutes competence in institutional research at one institution may be quite different from the criteria of another.” Deriving “historically” from the supposition, the committee adds, “There is no agreed-upon theoretical definition of institutional research and consequently no consensus about the content of the body of knowledge that one ought, in the abstract, to acquire in order to practice institutional research.” Lastly, in the explanation of institutional researchers’ relationship to “the craft,” the report of the committee states:
One of our correspondents inquired whether… the institutional researcher should not, in general, engage in research “with broader applications, which is, presumably, research leading to generalizable truth about human and institutional behavior. We by no means discourage that kind of research. But note that it properly belongs to the domain of the scholar, typically must meet the tests of scholarly protocols and reproducibility, and is normally reported and subjected to peer criticism in refereed journals (my emphasis).”[16]
In substance, in their relationship to the profession, institutional researchers were not regarded as empowered to develop a “well-defined body of knowledge that is rigorously taught” because that responsibility “properly belongs to the domain of the scholar” (i.e., Saupe’s division). The spirit of scientific research exemplified by astronomers a century ago, and codified in the National Research Council’s report on Scientific Research in Education in 2002, is fundamentally lacking in several of the grounding concepts about institutional research proffered by education researchers in the past. Unsurprisingly, little progress has been made in professionalizing institutional research over the course of fifty years. There are signs of potential change, such as the code of ethics adopted by AIR (updated in 2013), which states that “the institutional researcher should contribute to the knowledge base and share with colleagues knowledge related to practice, research, and ethics.” The Handbook of Institutional Research (2012) provides immensely helpful history, citations of primary sources, and consideration of institutional research as a “theoretical” and “operational” activity. Yet the updated code of ethics does not clearly define whether institutional researchers contribute to “generalizable knowledge” about the profession – so key a phrase in debates of the past – and the handbook remains only a handbook, designed for both hands of an individual institutional researcher, working in isolation, to apply piecemeal at the local institution.[17] And, in the final analysis, to this day, the scholars who “properly” own the domain of basic research propose no underlying theory, no school of thought, and no focused or balanced portfolio of research for a community of scientists qua institutional researchers.
Geographies of Institutional Research
Rather than quibble over the distinctions and priority of basic research over applied research, is it not possible to conceive of institutional research offices as social scientific research agencies, dispersed geographically and culturally, that are capable of being partners in a collaborative enterprise with a focused and balanced portfolio of scientific research in higher education? Rather than generalize knowledge that explicitly positions applied research as categorically different from and subordinate to basic research, is it not possible to organize institutional research offices into a very large array of coordinated measurements of higher education phenomena to generate research findings that accumulate over time, provide important insights into the policies and practices of higher education, and push the boundaries of what is known about institutional effectiveness?[18] Institutional research, no less than education research, is an abstraction that is constituted, administrated, and pursued as a social science to produce generalizable knowledge. If institutional research exists in higher education institutions at all, it does so only as the study of a multitude of phenomena and the even more multitudinous relationships of phenomena to each other – not the study of an object, a singularity, or an event horizon from which generalizable knowledge by institutional research does not escape.
In 1962, in the collection of articles edited by L. J. Lins for The Journal of Experimental Education, Philip H. Tyrrell of the Rensselaer Polytechnic Institute proposed a definition of institutional research as “methodological study… of a college or university… [that] includes not only traditional data gathering, processes, and interpretation and the study of operational procedures, but also it includes those activities usually called ‘education research,’ a term whose many acceptable definitions all pre-suppose disciplined, scholarly inquiry into the processes of teaching and learning.” The distinction between basic and applied research, or the proper domains of scholars and institutional researchers, finds no quarter in Tyrrell’s formula, and he envisions the multi-faceted activity of the discipline accordingly: “there is not a spectrum of institutional research; there are spectra.” His description of an office of institutional research, the organ of administrative and applied research, figures the functions as “a microcosm that facilitates, through its special kind of research, the activities of the macrocosmic, purposeful, life-liberating assembly of scholars who undertake those individual and collective tasks necessary to the ‘creation of the future.'” The office of institutional research, “with this image of itself… both serves and is part of the intellectual community… in the study and implementation of findings concerning an institution’s human and physical networks, e.g., in networks of characteristics which facilitate or impede individual or collective creativity.” While there is much to unpack in Tyrrell’s succinct platform for offices of institutional research, he avoids the pernicious distinction between institutional research and educational research that forestalls an institutionalized system of inquiry, a scientific enterprise, to understand and explain institutional effectiveness.[19]
More profoundly, Tyrrell entertained no illusion about the particularity of an institution of higher education, “the center of confluence for many social forces.” In Tyrrell’s nascent theory, the institution is most definitely not an object, a prison house of institutional research, but a field that unites cultural and material phenomena at the intersection of past, present, and future: “Into its [an institution’s] human and physical networks are swept all known human history as well as the turbulent discoveries of the moment, and from these dynamic networks of interaction flow the hypotheses that help shape tomorrow.” In contradistinction to the black hole of particularity assumed by each individual college or university, Tyrrell proposes a theory of higher education institutions in terms that squarely place them in the domain of generalizable knowledge and envision how institutional research offices may adopt a program, or portfolio of research, for “the manipulation of events so as to attain ever-evolving objectives.” His theory of social, material, and temporal confluence at institutions of higher education “preclude[s] the possibility of existence [for an institution] outside the main stream of history…”; for the office of institutional research, it likewise precludes existence as an impotent bystander in an institutional world of dynamic interaction encompassing ethics, communications, talents, politics, and the enhancement of systematized institutional knowledge – the elementary particles of decision-making as well as the foundations for scientific research in education by institutional researchers.[20]
Had Tyrrell’s pregnant notions about the confluence of social forces at higher education institutions, and his research program of the “unknown” and “the future” for institutional research offices, taken hold in education research in the United States, the profession of institutional research would likely have at least one well-defined body of knowledge and several known paths of possibility for its future today. There are enough suppositions in his unusually rich call to direct action and independent programming by institutional researchers to sustain scientific inquiry in higher education for years to come. Tyrrell undoubtedly understood the college or university campus as institutional research’s metropolis: an established or commonly accepted educational setting, involving normal educational practices, manifesting the phenomena of the larger social and historical forces subject to an institutionalized system of inquiry for generalizable knowledge. Moreover, foreseeing the self-incurred tutelage of the profession to others – a condition that unfortunately resonates today in the regimen of institutional reporting, and in the prospect that the future aspirations of the profession may be determined by the sponsored initiatives of external foundations and vendors whose product maps have no direct relationship to the code of ethics and mission of the profession – Tyrrell advised:
Institutional research, throughout American higher education, is now at a point, where a re-examination of concepts, assumptions, and strategy is necessary. Institutional researchers should take this re-examination upon themselves, lest others do it for them.
Regrettably, ten years after Tyrrell’s publication, the literature on institutional research set course for the dismal suppositions of the early 1990s. In “Institutional Research: A Review of the Literature to 1972,” the authors neglect to cite or quote Tyrrell’s radical vision, and begin their definition of institutional research with a recitation of “[t]he distinction between institutional research and research on higher education… [R]esearch on higher education is a more comprehensive concept than is institutional research.” After a summary review of the connection of institutional research to self-study, the authors conclude, “Thus, institutional research is research about a particular institution of higher learning by those within the institution for the express purpose of aiding in the administration of the institution… [I]nstitutional research differs from scholarly research. It can only be justified if it aids the institution and those who are responsible for its operation.”[21] For the next forty years, the scholastic niceties regarding the differences between scholarly research and institutional research, between basic research and applied research, have informed the order and vocabulary of research on higher education and institutional effectiveness. Is it not time to ask: Have education research’s distinctions between institutional research and scholarly research justified their aid to any higher education institution, or any nation, responsible for the delivery of postsecondary education to aspiring college graduates?
While We Were Reporting…
To summarize the preceding in reverse order…
The scholars of education research, who assumed for themselves the responsibilities to define generalizable knowledge about institutional research and to produce a well-defined body of knowledge that may be rigorously taught to the practitioners of institutional research, have failed to deliver on the promise of the profession for over 50 years. At the origins of institutional research, all of the conditions existed to form a scientific community dedicated to professional research on higher education in a manner that would lead to the advancement of institutional research and education research together. Instead, within five years of the first assembly of the Association for Institutional Research, higher education researchers began to obsess about the distinction between institutional research and (their own) scholarly research. The paradigm for higher education research, as it relates to institutional research, sets as its paramount achievement a classification regarding the differences between the knowledge generated by institutional research and by scholarship. To this day, the distinction has circumscribed the domain and findings of institutional research to the particularity of the institutions studied and frustrated the formation of a scientific community for the study of higher education effectiveness as exemplified by the practices of physicists and codified by the National Research Council in the publication, Scientific Research in Education, over ten years ago.
Either out of deference to or a lack of understanding of the scholarship and paradigm offered by educational researchers, institutional researchers accepted the denigrated role and adopted institutional particularity in its mythic form as a singular object or event horizon for generalizable knowledge about institutional effectiveness, a concept that lures institutional research into the order of a prison house. A managerial vocabulary deeply tied to “data” and “decision-making” elevated the modest landscape of common sense as a halfway house between contributions to scientific knowledge and the mindless drudgery of external submissions. Succored by the promise of making an impact on the everyday life of the campus, institutional research discovered a simple joy akin to basic research and recovered a sense of purpose derived from service to decisions made by the functional units of higher education. Enticed by this fable, institutional research further settled into the particularity of the institution which, for all of its problems, is still its home. Today, with nihilistic hopes of effecting an “elaborate profusion,” institutional research seems to aspire to dissipation in functional units and campus constituencies, in order to re-form “virtual offices” under the direction of institutional decision makers, more broadly defined so as to include students as well as functional units.
The reluctance of institutional researchers to take it upon themselves to re-examine “concepts, assumptions, and strategy” for higher education, and the absence of a scientific community on higher education with qualities similar to the astronomical community one hundred years ago, created conditions favorable to the externally-driven regiment of institutional reporting. External agencies and “requesters” multiplied their own systems of generalizable knowledge and accountability for higher education in the absence of leadership and scholarship from higher education scholars. In an ironic twist, then, the grotesque national agenda to commodify the generalizable knowledge of American higher education as “rankings,” “ratings,” “scores,” etc., for resale in publications to the families of college-goers forced the entire industry of higher education to reallocate, annually, billions and billions (h/t Carl) of operational dollars to the measurement of institutional effectiveness at colleges and universities.
As an upshot of institutional particularity in its factual form, one may suppose, the higher education industry absorbed most of these costs directly, to the tune of $1,000,000 per year per institution on average in the estimates above. And, thus, the higher education industry at both the local and national level has the financial resources to meet the fifth and sixth design principles of scientific institutional research in higher education specified by the National Research Council: 5) adequately fund the departments and 6) invest in research infrastructure.[22] The United States and its degree-granting colleges and universities need only direct those financial resources to the advancement of scientific research in higher education under the direction of institutional researchers.
While he was not able to foresee the cloud of unknowing that descended on institutional research in the fifty years following his visionary statement for a scholarly program for institutional research, Philip H. Tyrrell’s enthusiasm for the future may once again be within reach:
The opportunity for honest inquiry, humbly pursued, may never have been greater for education; it certainly has never had the resources which exist or have been promised and which must be nurtured, paradoxically, with boldness and with care.
If institutional researchers achieve effective programming, they will facilitate immeasurably the activities of the assembly of scholars whom they serve and of whom they are a part. Concurrently, the total institution as a creative enterprise will be enabled to contemplate without despair a future…[23]
We, the institutional researchers, must first form a worthy scientific community for the study of higher education.
Note: This post is subject to unrecorded edits through July 31, 2015.
- J. Fredericks Volkwein, “The Diversity of Institutional Research Structures and Tasks,” Organizing Effective Institutional Research Offices, New Directions for Institutional Research No. 66 (San Francisco, Summer 1990), 7.
- Ronald P. Matross, “The Virtual Office: An Organizational Paradigm for Institutional Research in the 90’s. AIR 1993 Annual Forum Paper,” Paper presented at the Annual Forum of the Association for Institutional Research (1993). ERIC Number: ED360919 accessed at http://eric.ed.gov/?id=ED360919 on June 30, 2015.
- The following are observations from recent experience as an institutional researcher in the United States and attendance in sessions on office organization and purpose at AIR Forums during the years 2010-15. Cost estimates are as of 2015 in current dollars. The estimates are averaged, despite personal experience that suggests expenditures on certain areas of distributed institutional research (human resources, finance and administration, athletics) tend to exceed expenditures on the core functions of admissions and enrollments due to the more liberal access to records for Information Technology and Registrar personnel.
- Volkwein, 25.
- Paul J. DiMaggio and Walter W. Powell, “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields,” American Sociological Review Vol. 48 no. 2 (1983), 147-160; A. C. Dowd and V. P. Tong, “Accountability, Assessment, and the Scholarship of ‘Best Practice,’” Research in Higher Education, Vol. XXII (2007).
- “Consider, for example, how a study of faculty salaries can, and should, be placed in the context of putting students first by assuring the retention of outstanding faculty because of their critical importance in the student experience.” Association for Institutional Research, “A Brief Summary of Statements of Aspirational Practice for Institutional Research,” n.d. Accessed at http://admin.airweb.org/Resources/ImprovingAndTransformingPostsecondaryEducation/Documents/AIR%20Statements%20of%20Aspirational%20Practice%20for%20IR%20-%20Summary.pdf on June 30, 2015.
- Loring M. Thompson, “Institutional Research, Planning, and Politics,” The Journal of Experimental Education Vol. 31, no. 2 (Dec. 1962), 89.
- National Research Council, Scientific Research in Education (Washington, DC: 2002), 51-2. To add, the subsequent analogy should not be interpreted as “physics envy” (13).
- The same economies of scale apply to the assessment of student learning outcomes in which general education and disciplinary programs are organized in institutional isolation and assessments are performed with reference to the unique qualities and particularities of the academic programming.
- U.S. Department of Education, National Center for Education Statistics. (2015). Digest of Education Statistics, 2013 (NCES 2015-011), Table 105.50. Accessed at https://nces.ed.gov/fastfacts/display.asp?id=84 on July 1, 2015.
- Thompson, 89. That in a nutshell may encapsulate the origins of institutional research as both academic and administrative social research. In the Handbook of Institutional Research (2012), Donald J. Reichard provides a comprehensive review of the origins of institutional research and cites three academic or faculty-directed origins: self-study, survey research conducted by academics, and research bureaus in large public universities (3). Its institutionalization as an administrative office, he reports, resulted from the advocacy of the American Council on Education and regional compacts.
- Charles R. Thomas, “Harnessing Information Technology in the Twenty-First Century,” Preparing for the Information Needs of the Twenty-First Century, New Directions for Institutional Research no. 85 (spring 1995), 72.
- NRC, 106, 20, 57, and 19, respectively.
- Patrick Terenzini, “From the Series Editor,” Preparing for the Information Needs of the Twenty-First Century, New Directions for Institutional Research no. 85 (spring 1995), 91; Patrick Terenzini, “‘On the Nature of Institutional Research’ Revisited: Plus ça Change… ?” Research in Higher Education Vol. 54, Iss. 2 (March 2013), 139.
- Volkwein, 23-25. The tortuous, if not incoherent, article that immediately follows Volkwein’s article in the same number strains to fit into Volkwein’s schema but ultimately fails. The article initially proposes to locate institutional research in ancillary organizational functions called the “technostructure” and “support staff” that form outside of the “strategic apex” of the executive leadership and the “operating core” of the faculty. Near the end, however, the author acknowledges, “In [a] small college, an institutional research office can function effectively at the strategic apex as part of the president’s office.” As such, would not the offices in the smaller institutions be the most mature? Alton L. Taylor, “Options for Location in the Organizational Structure,” 27-34.
- Michael E. Schiltz, ed., Ethics and Standards in Institutional Research, New Directions for Institutional Research no. 73 (spring 1992), 7-8, 18, 40. The rationalizations for the designation of institutional research as a craft raise impossible standards for any profession. As a trained historian, I can say with confidence that every Department of History differs from all others and there is no [one] agreed upon theoretical definition of history, and certainly no consensus on what one ought to learn to qualify as a historian. There are schools of thought, fields of study, historiographies, etc. Professions do not form wholesale; they begin retail, when a small collective of professionals join together to form a school of thought and nurture a body of knowledge. Other collectives then form under the same umbrella term, and in opposition to the theory and content of other collectives. Consequently, a unified theory and consensus over content does not exist in any scientific field with a vibrant community dedicated to the test and critique of existing knowledge in the discipline.
- In 2012, Reichard’s review of the debate on whether institutional research is theoretical or operational research ends with the so-called “middle ground” proposed by Paul Dressel that “[t]he basic purposes of institutional research is to probe deeply into all aspects of an [i.e., a particular] institution…” (12).
- Several of these clauses paraphrase or incorporate the National Research Council’s statement from the chapter, “Accumulation of Scientific Knowledge,” 29.
- Philip H. Tyrrell, “Programming the Unknown: Guidelines for the Conduct of Institutional Research,” The Journal of Experimental Education Vol. 31, no. 2 (Dec. 1962), 92.
- Ibid., 93-94.
- Jack Testerman and others, “Institutional Research: A Review of Literature to 1972” (Aug. 1972), 4-5.
- See the prior brief, The Second Research, Part II, which restates the design principles for a federal educational research agency by the National Research Council as design principles for institutional research that service the professional agenda for Scientific Research in Education (p. 9).
- Tyrrell, 94.