So you’ve implemented your technology… Now what?

Let’s imagine that you performed the rare task of selecting and implementing a technology system that satisfied all stakeholder requirements. It even fit within the so-called “triple constraints” of being delivered within an acceptable timeframe, scope, and budget and resulted in a well-loved and well-functioning system. You would deservedly receive congratulations and pats on the back from your colleagues for a job well done. What if, however, the provost asked you to show her what benefits, such as helping to improve student retention rates, this technology was providing? Would this be an easy question for you to answer? In our experience, most likely not. Many, if not most, higher education institutions do not include some notion of “efficacy,” or the degree to which technology provides actual benefit, in their definition of “successful technology selection and implementation.” Therefore, all too many institutions cannot be sure any technology, regardless of whether it has been well-selected and well-implemented, is providing overall value in terms of tangible and measurable benefits.

Understanding Technology Efficacy in the Complex World of Higher Education

Higher education institutions are complex, with sometimes competing priorities, technology needs, and stakeholder groups. Because of this complexity, determining efficacy may require a more nuanced approach than the financial, management, or risk-based methods commonly discussed in the literature, such as Return on Investment (ROI), Return on Management (ROM), and the Delphi Method. The healthcare industry seems to provide one such nuanced approach, which is not surprising given that industry's historical focus on the efficacy of care and medication. Efficacy, in this context, can be determined by taking four factors into account: the desired benefit, the problem the technology is seeking to solve, the people impacted by the problem, and the conditions under which people will use the technology. This approach, it seems, may also fit the complex world of higher education and may provide the best way to determine the efficacy of technology selection and implementation.

Two Approaches to Solve the Efficacy Problem

Two organizations, Complete College America (CCA) and the Jefferson Education Accelerator, are undertaking exciting efforts to solve the problem of technology efficacy but differ in the methodologies they use. CCA's approach, the GPS Seal of Approval, is to review and award products against its Guided Pathways to Success (GPS), a technology-supported approach to improving higher education completion through step-by-step roadmaps and guidance. The Jefferson Education Accelerator, on the other hand, determines efficacy by providing research-based benchmarking and validation of vendor products through a network of school systems and higher education institutions. Both organizations plan to publish their results and sets of recommended best practices for institutions to consider when reviewing technology products.

5 Steps to Evaluate Efficacy of Technology Selection and Implementation

If you are going it alone, however, we recommend that you take the following steps to determine the efficacy of technology selection and implementation:
  1. Define the problem domain and impacted groups. It is important to understand why a given piece of technology is needed and which groups it will impact to ensure a correct determination of efficacy. For example, identifying the problem of “lost revenue due to falling retention rates” would allow an institution to judge a constituent relationship management (CRM) system by the benefits it provides to the entire institution, whereas the problem of “inefficient student engagement” would allow the institution to judge the CRM by the benefits it provides to students.
  2. Define benefits and map them to outcomes. Stated institutional benefits, like “improving student retention,” are often too high-level to allow for the evaluation of the efficacy of the technology. Therefore, it is helpful to map goals like these onto supporting outcomes, such as “develop early warning system” or “increase student engagement with advisors.”
  3. Map outcomes to capabilities. Once you have mapped the benefits onto outcomes, the next step is to determine what the technology would have to do (i.e., capabilities) to achieve them. For example, to achieve the outcome of “develop early warning system,” an institution may need the technological capability of “data mining” or “holistic view of student data.”
  4. Map capabilities to functionalities. Furthermore, these capabilities should then be themselves mapped onto what activities users will have to do (i.e., functionality) with the intended technology. An example of this would be the capability of “data mining,” which might need the technological functionality of combining data from multiple sources or performing complex queries on data for users.
  5. Develop KPIs. Finally, once the mapping is complete, you should then determine what appropriate metrics you would apply to your benefits, such as “10 percent increase in student retention within a year.” These metrics, or key performance indicators (KPIs), can be used to measure the efficacy of the technology selection and implementation in an ongoing manner and help identify any gaps in your mapping, such as a missing outcome for a given benefit.
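The five steps above can be sketched as a simple set of mappings plus a gap check. This is a minimal illustration only; all of the benefit, outcome, capability, and KPI names are examples drawn from or modeled on this article, not prescriptions for any particular institution:

```python
# Sketch of the benefit -> outcome -> capability -> functionality mapping,
# with a KPI attached to each benefit. One capability is deliberately left
# unmapped to show how the gap check in step 5 surfaces holes in the mapping.

benefit_to_outcomes = {
    "improve student retention": [
        "develop early warning system",
        "increase student engagement with advisors",
    ],
}

outcome_to_capabilities = {
    "develop early warning system": [
        "data mining",
        "holistic view of student data",
    ],
    "increase student engagement with advisors": ["advisor scheduling"],
}

capability_to_functionalities = {
    "data mining": ["combine data from multiple sources", "run complex queries"],
    "holistic view of student data": ["unified student dashboard"],
    # "advisor scheduling" intentionally unmapped: the gap check should flag it.
}

benefit_to_kpi = {
    "improve student retention": "10 percent increase in student retention within a year",
}

def find_gaps():
    """Return entries that lack a next-level mapping or a KPI."""
    gaps = []
    for benefit, outcomes in benefit_to_outcomes.items():
        if benefit not in benefit_to_kpi:
            gaps.append(f"benefit without KPI: {benefit}")
        for outcome in outcomes:
            if outcome not in outcome_to_capabilities:
                gaps.append(f"outcome without capabilities: {outcome}")
                continue
            for capability in outcome_to_capabilities[outcome]:
                if capability not in capability_to_functionalities:
                    gaps.append(f"capability without functionality: {capability}")
    return gaps

print(find_gaps())
```

Running the check flags “advisor scheduling” as a capability with no supporting functionality, the kind of mapping gap step 5 is meant to catch before KPIs are finalized.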
