Data Aware Design

In the past few years, I have sifted through more “trace data” than I care to remember. By “trace data” I mean logs of actions taken by users on Internet sites–mostly aggregated data from many users, but sometimes single-user logs, such as search queries.

Why all this immersion in activity logs and trace data? When designing and developing any kind of technology, data are useful for many reasons, some of which are:

  • evaluation and understanding using summative methods–asking: did feature X or service Y get used (and, if so, what happened just before or just after, so one can get a sense of someone’s experience as they interact with a page or an application, or move between pages and applications), and, if it did not get used, can we determine why not;
  • iterative design and data collection for formative and generative design–asking: did people do what we expected or desired them to do, did the user seem to be thwarted and, if so, do we have a way to understand what would have worked for them;
  • theorizing about fundamentals of human behavior–asking: what are the gross patterns we can discern and what, if anything, do they tell us about humans and technology; do they contribute to social science understandings of human psychology and/or social behavior, and in turn, do they offer any insights that may contribute to generative design efforts;
  • modeling and designing adaptive systems–asking: can we infer from a user’s action(s) what they are likely to want to do next, so we can present some model-based interface and/or interaction adaptation to improve their experience or move them through some pre-scripted sequence of experiences (e.g., moving up a gaming level); and
  • business relevance and prediction–asking: do the activity data within and across applications and interactions offer any insights that are business-relevant.

These of course represent a small fraction of the activities that are part of the current worldwide obsession with “data”, most publicly manifest in discussions of the global and societal implications of “Big Data”. Indeed, in the world of Internet innovation, you can’t walk ten feet without someone talking about how much data they’ve got, and it seems calling oneself a “data scientist” is like saying you can lay golden eggs. As discussed by authors from Microsoft in the May/June issue of this magazine[1], there are many challenges facing data analysts. This is set to get worse as more and more data are collected from wireless sensor networks, mobile devices, software logs, cameras and RFID readers; we have our work cut out for us designing better analytics tools and services.

For the purposes of this column, sitting conceptually somewhere between daily data analytics and societal concerns about “Big Data”, I want to offer a few personal observations and cautionary remarks about the frenzy over “data” and data analytics, and share a couple of ruminations about where practitioners and researchers in HCI can–and must–weigh in.

First, while most of us in design-oriented research areas are very aware of the value of qualitative methods and data, I note that many discussions where the word “data” is used tend to focus almost entirely on quantitative data, with little acknowledgement that behind every quantitative measure and every metric is a host of qualitative judgements and technological limitations: what should we measure, what can we measure, of what is the metric constituted, and what assumptions are embedded within it. Choices about what is and is not measured are choices, and they have weight. In their 1999 text, Sorting Things Out: Classification and Its Consequences, Geof Bowker and Susan Leigh Star look at many instances of classification and remind us that “information scientists work every day on the design, delegation and choice of classification systems and standards” and that “each standard and each category valorizes some point of view and silences another.” They note: “this is not inherently a bad thing – indeed it is inescapable. But it is an ethical choice, and as such it is dangerous – not bad, but dangerous.” [2]

Second, there are a couple of problems with numbers. Numbers impress us, and sometimes even intimidate us. There is a great deal of fear and fetishism around numbers, and more than a touch of zealous reverence. This facility for intimidation means invoking numbers can be a powerful persuasion technique: afraid to show their own ignorance, recipients of “factiful” arguments (that is, arguments full of fanciful facts), especially data-laden ones, can be hoodwinked into accepting invalid conclusions. Charles Seife, in his book “Proofiness: The Dark Arts of Mathematical Deception”, has some compelling examples of “the art of using bogus mathematical arguments to prove something that you know in your heart is true — even when it’s not.”[3][4]

Third, in part due to the aforementioned reverence, and nicely fueled by the arcane nature of most data analysis tools, trace data analysis can seem more like a dark art than a science. Although mathematical notation and clean code look like precise languages, there is a real art and craft involved here. Data analysts sometimes take on the aura (and mantle!) of shamans, invested with oracular power, issuing ritual incantations to the murky unknown. Indeed, it was sifting through these logged traces that led me to declare a few years ago that doing trace and log data analysis was like a séance – sifting through the traces of the dearly beloved and departed in the hope you’d find some clue as to how to make them come back. Before you dismiss this analogy, give it some thought: in a séance you have people you wish to contact; tools that allow you to make contact with the other side (a Ouija board, candles…); a medium; and the departed, who offer a shadowy, ill-formed presence and sometimes offer insights into why they left, how they feel about their departure, ruminations on their current setting and general advice for the enquirers. Data analysts can sometimes play up the role of data whisperer.

Finally, even the highly numerate amongst us need to pose questions about how data were collected. There is much to do in the world of system, application and service instrumentation that focuses on gathering data to reflect user experience. Inadequate or inappropriate instrumentation affects data quality and its fitness for the purpose we have in mind. The interface we design affects what data it is possible to capture, and therefore what it is possible to aggregate, summarise and base one’s models on. The interface is in some ways a conversation with a user about a task or experience. A singular focus on the data without a consideration of the circumstances may lead us to miss the point and/or be overly general in our conclusions. For example, the “Like” button may be capturing something fundamental about human communication, but I am not sure I could conclude much beyond the fact that it is available to be clicked. This has been characterized as the “garbage in, garbage out” problem, which, loosely speaking, translates to “if your input data are garbage, your output results are also garbage”. If you want to derive a model of user experience or user behavior, you need to instrument to gather the data needed to generate that understanding – you need to fully engage in “experience instrumentation” designed for “experience mining”, not sift through data that were collected for some other purpose and hope they will suffice. Nor should we make assertions about invariants of human social action based on analyses of activities that take place in single-case, specific settings (e.g., on a particular social network) without explicitly acknowledging what that specific interface, interaction, application, service or company brings to the table and how that affects adoption and use.
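To make the instrumentation point concrete, here is a minimal, purely illustrative sketch in Python. Every field name below is my own invention, not any platform’s actual logging schema; it simply contrasts recording a bare “Like” click with recording the same click together with enough of its circumstances to support later questions about experience.

    import json
    import time

    def log_bare_click(user_id, target):
        """Bare event logging: records only that a click happened."""
        return {"user": user_id, "event": "like_click", "target": target,
                "ts": time.time()}

    def log_experience_event(user_id, target, context):
        """Experience instrumentation (hypothetical): the same click, plus
        the circumstances a later model of user experience would need."""
        return {
            "user": user_id,
            "event": "like_click",
            "target": target,
            "ts": time.time(),
            # Invented contextual fields: what was on screen, how the user
            # arrived here, and how long they dwelt before acting.
            "visible_items": context.get("visible_items"),
            "referrer": context.get("referrer"),
            "dwell_seconds": context.get("dwell_seconds"),
            "ui_variant": context.get("ui_variant"),
        }

    if __name__ == "__main__":
        ctx = {"visible_items": ["post_123", "post_124"],
               "referrer": "home_feed",
               "dwell_seconds": 4.2,
               "ui_variant": "like_button_v2"}
        print(json.dumps(log_bare_click("u42", "post_123"), indent=2))
        print(json.dumps(log_experience_event("u42", "post_123", ctx), indent=2))

The first record can tell us only that the button was clicked; something like the second gives an analyst at least a chance of asking what the click meant within an experience.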

These are exciting times! Let’s bring a creative and yet critical eye to the collection–the design–of data to complement the focus on the analysis of data. Of course, those trained in experimental and survey methodology are continually designing data by designing experiments and instruments to address core science questions. However, I do not see this kind of thinking commonly applied when designing interfaces and interactions–I do see a deep commitment to discoverability, usability, the support of tasks and activity flows, and aesthetic appeal, but not a critical lens on the data consequences of design choices at the interface. I’d like to see more of what Tim Brown, among others, has called “design thinking”[5] applied to data capture (including application, service and system instrumentation), to data management (including collation and summarization), and to user/use models that utilize machine learning techniques, as well as to data visualization and analysis (including interpretation). Brown says that “design thinking” is neither art nor science nor religion; it is the capacity, ultimately, for “integrative thinking.” In Brown’s view, a design paradigm requires that the solution is “not locked away somewhere waiting to be discovered” and embraces “incongruous details” rather than smoothing them over or removing them. In the incongruous details lie the insights.

I want to underscore: I am not saying that there are no design thinkers doing data analytics, nor am I saying that there are no data analysts who are design thinkers; I am just saying we need more. Ultimately, my observations are part of what I have been calling a need for Data Aware Design (and Innovation–but I am trying to avoid the acronym DADI here) within HCI. I am intentionally playing with the word “aware”. I want to contrast “data aware” with “data driven”. Firstly, data-driven is a meaningless claim; all design is data driven in some sense, but those data may be informal and/or underspecified, and thus offer no metrics for determining cause and effect, nor assessments of success/failure/learning with regard to the original design intent. Secondly, “driven” seems overly deterministic; a lot comes into play when designing a feature, application or service, not simply what the data tell us. I also intend, with the notion of “data aware”, that the data themselves be “aware”. I am not invoking an anthropomorphic notion of “aware”, but rather the notion of reflective data systems: systematic ways in which gaps, errors, elisions and abstractions are noted and reported alongside carefully presented, “clean” stories of results.
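As one illustration of what such reflective, “aware” data might look like in practice, here is a minimal sketch in Python. The class name, fields and figures are invented for illustration rather than proposed as any standard; the idea is simply that a metric object carries its own gaps and caveats wherever the “clean” number travels.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AwareMetric:
        """A metric that travels with its own caveats: the 'clean' number is
        reported together with notes on gaps, errors, elisions and abstractions."""
        name: str
        value: float
        caveats: List[str] = field(default_factory=list)

        def report(self) -> str:
            lines = [f"{self.name}: {self.value}"]
            lines += [f"  caveat: {c}" for c in self.caveats]
            return "\n".join(lines)

    # Hypothetical usage: a daily-active-users figure that discloses how it was made.
    dau = AwareMetric(
        name="daily_active_users",
        value=1_204_331,
        caveats=[
            "logging outage 02:00-03:30 UTC; events in that window are missing",
            "bot-filtering heuristic applied; some residual bot traffic remains",
            "'active' here means any logged event, not task completion",
        ],
    )
    print(dau.report())

A consumer of such a figure sees the elisions alongside the number, rather than a tidy result whose history has been smoothed away.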

In sum, given that the charter of Human-Computer Interaction is to address how humans interact with and through computers, as an HCI researcher or practitioner one needs to be part of the conversation that addresses what and how quantitative, trace data are collected (what is instrumented and how), how data are represented, extracted (sampled) and/or aggregated, what questions are asked of data, what processes and practices are enacted as results are generated, and how data thus extracted are understood. It is our responsibility to engage with the deeper epistemological question: how do we come to know what (we think) we know about people and their interactions with and through technologies? Data are a design problem.

 



[1] Danyel Fisher, Rob DeLine, Mary Czerwinski, and Steven Drucker. Interactions with Big Data Analytics. ACM Interactions, May/June 2012.

[2] Geoffrey Bowker and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. MIT Press, 2000.

[3] SIDEBAR: Some examples from Seife’s book, Proofiness:

“Disestimation” is assigning too much meaning to a measurement while ignoring any uncertainties about the measurement and/or errors that could be present. In the 2008 Minnesota Senate race between Norm Coleman and Al Franken, errors in counting the votes were much larger than the number of votes that separated the candidates (estimated to be between 200 and 300). Seife concludes that flipping a coin would have been better than assuming any veracity in the measure–the number of votes–given these errors.

“Potemkin numbers” are statistics based on erroneous numbers and/or nonexistent calculations. Seife cites Justice Scalia’s statement that 0.027 percent of convicted felons are wrongly imprisoned. This turned out to be based on an informal calculation, with rigorous studies suggesting that the actual number is between 3 and 5 percent.

Other “fruit salad” examples include “comparing apples and oranges”, “cherry-picking” data for rhetorical effect, and “apple polishing”.

[4] Charles Seife. Proofiness: The Dark Arts of Mathematical Deception. Viking Press, 2010. Another classic is Darrell Huff’s How to Lie with Statistics, W. W. Norton & Company, 1993.

[5] Tim Brown. Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. HarperCollins, 2009.

