Like most seven-year-olds, I was fascinated by dinosaurs—perhaps even more fascinated by their sudden demise at the end of the Mesozoic era 65 million years ago. It was in the context of this conundrum that I forged my understanding of the word impact.

According to the dictionary, impact implies “the striking of one thing against another; forceful contact; collision.” It can also refer to one thing impinging on another, to an influence or effect, and to a “force exerted by a new idea, concept, technology, or ideology.”

Something can thus have an impact in a catastrophically evident way or in a subtle way, as a catalyst for change. In the case of the dinosaurs, the prevailing but not uncontested theory posited by Luis Alvarez is that the impact of an asteroid, comet, or meteorite was responsible for the demise of the dinosaurs. This was not a case of direct cause and effect. The asteroid in itself did not cause the extinction of the dinosaurs; according to the theory, it was a necessary but not sufficient condition. Other stuff had to happen, and the chain of events took a while [1]. And so it is with much cause and effect and all stories of credit and blame: Event at Time 1 (plus a bunch of other stuff) leads to Effect at (potentially much later) Time 2. It takes a theory to create the narrative chain, positing a precipitating moment and/or agent(s) as the source of something that happens downstream, possibly far into the future.

Given the momentousness of the event through which impact derived its meaning for me, you can understand that the word carries some weight: It implies a big thing, but also a chain of events, of causes and consequences that may take time, in which all the active ingredients may not be obvious or clear-cut. In my life I hear discourses on impact in at least two worlds: the world of HCI education and the world of commercial technology design and development. Both worlds ask of me and of my colleagues: What impact are your creations having and, by implication, what impact are you having? The asteroid was never asked to audit and account for itself as an arbiter of change—we usually are.

At the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW), Judy Olson from the University of California, Irvine gave me some food for thought. She dedicated her speech to considering how we, as socially oriented technologists and educators and as designers, developers, and evaluators of communication technologies, have impact on the world [2]. She invited us to consider who is affected by our work—calling out students, developers, consultants, and users, who may fall into specific populations or be the general public—and on what scale. A class that recruited one student to a career in the field? A technology that changed a thousand people’s practices? A policy or standard that affected millions of people? And in what time frame are our insights and innovations intended to have impact? Now? In one to three years? In 20 to 30 years? In 40 to 50 years? Over millennia? These are all great questions. She also asked us to consider what is produced and how it can have direct and indirect effects. From her own decades of experience as an academic, she listed:

• theories;

• assessment tools and methods;

• technologies and technological innovations;

• guidelines, templates, patterns, toolkits, and standards; and

• policies.

There are others, of course, and the list likely depends on one’s role and career context. For example, I often talk about the various products of work that may have an impact. In a deliberately perverse inversion of the statistical p-value, I call these the “value-p’s”: papers, presentations, prototypes, products, and patents.

While listening to Olson’s talk, I reflected that no item in this list is an impact in itself. These are things that stand in for impact: They are typically associated with activities believed to have led, or expected to lead, to impact; they are indicators of likely impact. But to assess whether and how something actually has impact, one needs to understand three things: inputs (what was done, what happened), outputs (what resulted, and perhaps in what form), and outcomes (the “and this happened” part). Inputs are the resources invested, outputs are the direct and tangible products of one’s activity, and outcomes are the changes resulting from the activity. Finally, impact equals the outcome(s) minus an estimate of what would have happened anyway. To assess impact is effectively to posit a theory of change. And that means you really need to know what the world was like before the activity and then see what changed.
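The input–output–outcome–impact distinction can be sketched in a few lines of code. This is only an illustrative toy, not a real evaluation method, and all the numbers in it are hypothetical:

```python
def estimate_impact(observed_outcome: float, counterfactual: float) -> float:
    """Impact = the observed outcome minus an estimate of what
    would have happened anyway (the counterfactual)."""
    return observed_outcome - counterfactual

# Hypothetical example: a program after which 180 students entered
# the field, where we estimate 150 would have done so anyway.
impact = estimate_impact(observed_outcome=180, counterfactual=150)
print(impact)  # prints 30
```

The hard part, of course, is not the subtraction but the counterfactual: knowing what the world was like before, and what it would have become without the activity.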

Further, I note that when assessing impact, we often resort to counting the number of Xs a person or a program spat out and the number of Ys presumed to have been affected. However, some things lend themselves more readily to being evaluated and reduced, to being counted and quantified, than others. When I exercise and eat well, I lose fat and gain muscle, quantifiably so. When I design a technology feature and users spend more time using my technology and use it more often, that’s easily measurable. However, the joy I get from experiencing art or the way a smooth pebble feels in my hand, the improvement in my quality of life when flowers and trees are planted along the street I live on, the consternation I feel when trying to create yet another new password because mine is expiring again, or the curious conniptions my mind goes through when I try to solve a puzzle—these experiences have an impact on the way I think about the world and how I navigate it, but their precise effect is hard to quantify. So we resort to proxies that we can measure.

To step back for a moment, it is clear that humans love to assess, evaluate, categorize, and count. This passion seems to run deep in our brains and veins, and there are all kinds of evolutionary theories about when, how, and why this propensity developed. W.H. Auden said we live in societies wherein “the study of that which can be weighed and measured is a consuming love.” Witness Lord Kelvin, who said in 1883, “I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.” Even Plato and Aristotle shied away from believing that everything is quantifiable; Plato recommended turning away from the material world because it is “always becoming” and turning toward “that which is always and has no becoming.” Alfred W. Crosby, author of The Measure of Reality, describes the Western bringing together of mathematics and measurement in the task of making sense of all perceivable reality as a “shotgun marriage.”

Beyond the love of categories and counting, though, comes a deeper problem, and that is the need for comparison—the need for a lingua franca, which can of course lead to the love of a particular number and value system. Though numbers appear to provide a lingua franca when collections of two things can be compared (e.g., discrete objects like apples and pears), when things are less tangible, it all gets a little fuzzier. Once a single abstract system of value is placed upon objects, their value starts to shift according to ideological systems, culture, design, taste, fashion, and so on. Lewis Mumford, in his 1934 text Technics and Civilization, reminds us that “every culture lives within its dream.” And so it is with measuring impact: Each domain has its own indicators or proxies for likely impact and its own attempts to measure actual impact; how impact is assessed depends on the dreams and ideologies of the culture(s) within which we exist and operate. This can make work that crosses, for example, industry, consultancy, or academic boundaries quite hard to assess. Not everyone agrees on what the useful indicators are, nor can people always agree on what the world was like before and what constitutes a useful way of working out how it is now.

Also, dreams and ideologies change—sometimes slowly, sometimes not. Who has not witnessed an organization in which priorities and thus impact proxies were “realigned,” “reorganized,” or “redefined” in short order? Thus, impact proxies can also shift. Here is an example many of us are familiar with: The impact factor (IF) in publishing was devised by Eugene Garfield, who founded the Institute for Scientific Information (ISI), now part of Thomson Reuters. The IF reflects the number of citations to articles published in science and social science journals [3]. First, one has to believe that citations are a good proxy for impact—for importance—and only then can one take the leap to say that one journal in a field is better than another—which, of course, is self-reinforcing as more well-known, resourced, and/or skilled scholars submit to those journals so as to achieve more…impact. But not everyone agrees this measurement is the right one. In 2006, Google’s PageRank algorithm was applied to assess the impact of scientific publications, with, some say, better results. There are also other measures—proxies—for impact that are aligned to emerging publishing platforms where impact is measured not by citation but by attention: Altmetrics (i.e., alternative metrics) include measures such as ‘views’ of content or mentions in social media. A main point to be underscored here is that these impact measures are often contested, with new variations appearing every few years. Further, acceptance can take time. Indeed, communities can be cleaved by differences in their beliefs around what constitutes an appropriate impact factor.
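Garfield's two-year impact factor is itself just a small piece of arithmetic: the citations a journal's recent articles receive, divided by the number of those articles. A minimal sketch, with all the counts hypothetical:

```python
def two_year_impact_factor(citations_to_prev_two_years: int,
                           items_prev_two_years: int) -> float:
    """IF for year Y = citations received in year Y to items published
    in years Y-1 and Y-2, divided by the number of citable items
    published in Y-1 and Y-2."""
    return citations_to_prev_two_years / items_prev_two_years

# Hypothetical journal: 400 citations received in 2012 to the
# 160 articles it published in 2010 and 2011.
print(two_year_impact_factor(400, 160))  # prints 2.5
```

Every contested judgment—what counts as a citable item, why a two-year window, why citations at all—is hidden inside those two innocuous-looking inputs; the proxy's simplicity is precisely what makes it so portable, and so arguable.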

Processes and people are often invisible in stories of impact. A theory that leads us to focus on one set of factors can lead to our missing other factors. Key agents may be (dis)missed from the dominant narrative(s). Here is a concrete example. Cycling back to the dinosaurs for a minute, a 12-year-old girl called Mary Anning was fossil hunting in the early 1800s on the cliffs of Lyme Regis, England, when she found what turned out to be an ichthyosaur skeleton. Until this discovery, animal extinction had scarcely been contemplated. However, because of her gender and her social class, Anning was prevented from participating in the scientific community of 19th-century Britain. Undeterred, she trained herself and built a career as a fossil hunter anyway. She went unrecognized until Stephen Jay Gould lifted her from obscurity, stating that Anning was “probably the most important unsung (or inadequately sung) collecting force in the history of paleontology.” Today, her work is recognized as the catalyst for a fundamental shift in scientific thinking about prehistoric life in the early 19th century. Her work and her ‘outputs’ had impact. But it took a champion living in a world whose value system had changed before her impact could be recognized.

I do believe that assessing whether or not something has had an impact is important; it can lead to more effective planning, more programmatic action, and the seeding of improved downstream activities. What is in question here is not the desire to assess impact itself, but the way in which it is measured, how intangibles are rendered countable (or if that fails, elided), how abstract values are placed upon tangible outcomes that are then weighted, how the change itself is assessed, and who gets to theorize about the causal chain and apply credit and blame. We have to consider who cares and how they understand impact: I may believe I have had an impact—indeed, I may get intellectual and emotional satisfaction from it and be recognized by some others for my impact—but how do I prove it to you? If you hold the power, the burden of proof falls to me and becomes particularly onerous if your stance is that of the unbeliever, waiting to be persuaded—doubly so, if your notion of what constitutes valid proof does not match mine, if my proxies for impact do not align to yours. If cause and effect cannot be easily demonstrated, if the tokens of measurement are not shared, this can become an impasse. If the power dynamics are such that your measures are not negotiable yet my paradigm does not easily yield to your measures, whatever impact I have had may be rendered invisible, irrespective of how important it is. If multiple stakeholders are involved, there may be multiple measures, some of which conflict.

So I conclude that the word impact often disguises more than it illuminates. The word needs to be carefully examined and negotiated. Existing impact measures need to be interrogated. We need to be clear on whence they derive and whom they serve. Whose epistemology is represented and whose, if any, is elided? Are the people defining the measures best placed to judge which measures are appropriate? And if not, how can they be persuaded to consider other measures? How do we address systematic and institutional blindness to impacts, and how do we do better at giving credit where it is due for the things we value? It may be that, in the end, these investigations lead one to conclude that everything is just fine, that the current indicators and measures are the right ones. But I believe the thought experiment will still have been worthwhile. And if we find current measures wanting, we need to take responsibility for identifying appropriate markers of impact in our domains. We need to be part of the conversation in which indicators are crafted, counted, and weighted. We need to take seriously the task of assessing the state of the world before an intervention or innovation is introduced as well as after. I invite you to take a moment to consider how impact is measured in your world: what it is and how to identify it, what it means to ask how much, and how who, how, and why are construed in the narrative chain.


1. The theory goes that after the collision, there was so much dust in the atmosphere that the entire planet was completely dark for one to three months, and in these conditions, cold-blooded creatures simply could not survive.

2. Slides for the talk are available on the CSCW 2012 site:

