Last year we were lucky to have some fantastic guest posts from Paul Graham Raven, Scott Smith and Christina Agapakis. Continuing the tradition into our second year, I am thrilled to welcome Alexis Lloyd, Creative Director of R&D at The New York Times, to our blog with a great essay. When I met Alexis last year, it was clear that there were crossovers in our work, and we are grateful that she agreed to write for us, brilliantly exploring a space that we are currently preoccupied with in the studio. Over to Alexis.
IN THE LOOP: DESIGNING CONVERSATIONS WITH ALGORITHMS
Earlier this year, I saw a video from the Consumer Electronics Show in which Whirlpool gave a demonstration of their new line of connected appliances: appliances which would purportedly engage in tightly choreographed routines in order to respond easily and seamlessly to the consumer’s every need. As I watched, it struck me how similar the notions were to the “kitchen of the future” touted by Walter Cronkite in this 1967 video. I began to wonder: was that future vision from nearly fifty years ago particularly prescient? Or, perhaps, are we continuing to model technological innovation on a set of values that hasn’t changed in decades?
When we look closely at the implicit values embedded in the vast majority of new consumer technologies, they speak to a particular kind of relationship we are expected to have with computational systems, a relationship that harkens back to mid-20th century visions of robot servants. These relationships are defined by efficiency, optimization, and apparent magic. Products and systems are designed to relieve users of a variety of everyday “burdens” — problems that are often prioritized according to what technology can solve rather than their significance or impact. And those systems are then assumed to “just work”, in the famous words of Apple. They are black boxes in which the consumer should never feel the need to look under the hood, to see or examine a system’s process, because it should be smart enough to always anticipate your needs.
So what’s wrong with this vision? Why wouldn’t I want things doing work for me? Why would I care to understand more about a system’s process when it just makes the right decisions for me?
The problem is that these systems are making decisions on my behalf and those decisions are not always optimal: they can be based on wrong assumptions, incomplete understanding, or erroneous input. And as those systems become more pervasive, getting it wrong becomes increasingly problematic. We are starting to realize that black boxes are insufficient, because these systems are never smart enough to do what I expect all the time, or I want them to do something that wasn't explicitly designed into the system, or one “smart” thing disagrees with another “smart” thing. And the decisions they make are not trivial. Algorithmic systems record and influence an ever-increasing number of facets of our lives: the media we consume, through recommendation algorithms and personalized search; what my health insurance knows about my physical status; the kinds of places I’m exposed to (or not exposed to) as I navigate through the world; whether I’m approved for loans or hired for jobs; and whom I may date or marry.
As algorithmic systems become more prevalent, I’ve begun to notice a variety of emergent behaviors evolving to work around these constraints, to deal with the insufficiency of these black box systems. These behaviors point to a growing dissatisfaction with the predominant design principles, and imply a new posture towards our relationships with machines.
The first behavior is adaptation. These are situations where I bend to the system’s will. For example, adaptations to the shortcomings of voice UI systems — mispronouncing a friend’s name to get my phone to call them; overenunciating; or speaking in a different accent because of the cultural assumptions built into voice recognition. We see people contort their behavior to perform for the system so that it responds optimally. This is compliance, an acknowledgement that we understand how a system listens, even when it’s not doing what we expect. We know that it isn’t flexible or responsive enough, so we shape ourselves to it. If this is the way we move forward, do half of us end up with Google accents and the other half with Apple accents? How much of our culture ends up being an adaptation to systems we can’t communicate well with?
The second type of behavior we’re seeing is negotiation — strategies for engaging with a system to operate within it in more nuanced ways. One example of this is Ghostery, a browser extension that allows one to see what data is being tracked from one’s web browsing and limit it or shape it according to one’s desires. This represents a middle ground: a system that is intended to be opaque is being probed in order to see what it does and try and work with it better. In these negotiations, users force a system to be more visible and flexible so that they can better converse with it.
We also see this kind of probing of algorithms becoming a new and critical role in journalism, as newsrooms take it upon themselves to independently investigate systems through impulse response modeling and reverse engineering, whether it's looking at the words that search engines censor from their autocomplete suggestions, how online retailers dynamically target different prices to different users, or how political campaigns generate fundraising emails.
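The probing described above can be sketched in miniature. The example below is purely hypothetical: `quote_price` stands in for an opaque pricing system (it is not any real retailer's algorithm), and the `audit` function shows the basic pattern such investigations use: hold everything constant, vary one attribute of the simulated user at a time, and compare what the black box returns.

```python
# A toy illustration of algorithmic auditing: probe a black-box
# system with controlled inputs and compare its outputs.

def quote_price(profile: dict) -> float:
    """A hypothetical opaque system under audit: prices vary by
    attributes the user never explicitly provided."""
    base = 100.0
    if profile.get("region") == "suburb":
        base *= 0.9   # hidden discount for some users
    if profile.get("device") == "mobile":
        base *= 1.05  # hidden surcharge for others
    return round(base, 2)

def audit(system, profiles):
    """Query the system with systematically varied profiles and
    record its responses, plus the spread between extremes."""
    quotes = {name: system(p) for name, p in profiles.items()}
    spread = max(quotes.values()) - min(quotes.values())
    return quotes, spread

profiles = {
    "city-desktop":   {"region": "city",   "device": "desktop"},
    "suburb-desktop": {"region": "suburb", "device": "desktop"},
    "city-mobile":    {"region": "city",   "device": "mobile"},
}

quotes, spread = audit(quote_price, profiles)
print(quotes)   # differing quotes reveal the hidden targeting
print(spread)
```

Against a real system the profiles would be real browser sessions and the quotes scraped responses, but the logic is the same: the auditor never sees inside the box, only its behavior under controlled variation.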
Third, rather than bending to the system or trying to better converse with it, some take an antagonistic stance: they break the system to assert their will. Adam Harvey’s CV Dazzle is one example of this approach, where people hack their hair and makeup in order to foil computer vision and opt out of participating in facial recognition systems. What’s interesting is that, while the attitude here is antagonistic, it is also an extreme acknowledgement of a system’s power — understanding that one must alter one’s identity and appearance simply to exert free will in an interaction.
Rather than simply seeing these behaviors as a series of exploits or hacks, I see them as signals of a changing posture towards computational systems. Culturally, we are now familiar enough with computational logic that we can conceive of the computer as a subject, an actor with a controlled set of perceptions and decision processes. And so we are beginning to create relationships where we form mental models of the system’s subjective experience and we respond to that in various ways. Rather than seeing those systems as tools, or servants, or invisible masters, we have begun to understand them as empowered actors in a flat ontology of people, devices, software, and data, where our voice is one signal in a complex network of operations. And we are not at the center of this network. Sensing and computational algorithms are continuously running in the background of our lives. We tap into them as needed, but they are not there purely in service of the end user, but also in service of corporate goals, group needs, civic order, black markets, advertising, and more. People are becoming human nodes on a heterogeneous, ubiquitous and distributed network. This fundamentally changes our relationship with technology and information.
However, interactions and user interfaces are still designed so that users see themselves at the center of the network and the underlying complexity is abstracted away. In this process of simplification, we are abstracting ourselves out of many important conversations and in doing so, are disenfranchising ourselves.
Julian Oliver states this problem well, saying: “Our inability to describe and understand [technological infrastructure] reduces our critical reach, leaving us both disempowered and, quite often, vulnerable. Infrastructure must not be a ghost. Nor should we have only mythic imagination at our disposal in attempts to describe it. 'The Cloud' is a good example of a dangerous simplification at work, akin to a children's book.”
So, what I advocate is designing interactions that acknowledge the peer-like status these systems now have in our lives. Interactions where we don't shield ourselves from complexity but actively engage with it. And in order to engage with it, the conduits for those negotiations need to be accessible not only to experts and hackers but to the average user as well. We need to give our users more respect and provide them with more information so that they can start to have empowered dialogues with the pervasive systems around them.
This is obviously not a simple proposition, so we start with: what are the counterpart values? What’s the alternative to the black box, what’s the alternative to “it just works”? What design principles should we be building into new interactions?
The first is transparency. In order to be able to engage in a fruitful interaction with a system, I need to be able to understand something about its decision-making process. And I want to be clear that transparency doesn’t mean complete visibility, it doesn’t mean showing me every data packet sent or every decision tree. I say that because, in many discussions about algorithmic transparency, people have a tendency to throw their hands up, claiming that algorithmic systems have become so complex that we don’t even fully understand what they’re doing, so of course we can’t explain them to the user. I find this argument reductive and think it misunderstands what transparency entails in the context of interaction design.
As an analogy, when I have a conversation with a friend, I don’t know his whole psychological history or every factor that goes into his responses, let alone what’s happening at a neurological or chemical level, but I understand something about who he is and how he operates. I have enough signals to participate and give feedback — and more importantly, I trust that he will share information that is necessary and relevant to our conversation. Between us, we have the tools to delve into the places where our communication breaks down, identify those problems and recalibrate our interaction. Transparency is necessary to facilitate this kind of conversational relationship with algorithms. It serves to establish trust that a system is showing me what I need to know, that it is not doing anything I don’t want it to with my participation or data, and that it is giving me the necessary knowledge and input to correct the system when it’s wrong.
We’re starting to see some very nascent examples of this, like the functionality that both Amazon and Netflix have, where I can see the assumptions that are being made by a recommendation system and I am offered a way to give negative feedback; to tell Amazon when it’s wrong and why. It definitely still feels clunky — it’s not a very complex or nuanced conversation yet, but it’s a step in the right direction.
More broadly, the challenge we’re facing has a lot to do with the shift from mechanical systems to digital ones. Mechanical systems have a degree of transparency in that their form necessarily reveals their function and gives us signals about what they’re doing. Digital systems don’t implicitly reveal their processes, so designers now bear the relatively new burden of making those processes visible and available to interrogate.
The second principle here is agency, meaning that a system’s design should not only empower users to accomplish tasks, but should also convey a sense that they are in control of their participation with a system at any moment. And I want to be clear that agency is different from absolute and granular control.
This interface, for example, gives us an enormous amount of precise control but, for anyone other than an expert, probably not much sense of agency.
A car, on the other hand, is a good illustration of agency. There’s plenty of “smart” stuff that the car is doing for me, that I can’t directly adjust — I can’t control how electricity is routed or which piston fires when, but I can intervene at any time to control my experience. I have clear inputs to steer, stop, speed up, or slow down and I generally feel that the car is working at my behest.
The last principle, virtuosity, is something that usually comes as a result of systems that support agency and transparency well. And when I say virtuosity, what I mean is the ability to use a technology expressively.
A technology allows for virtuosity when it contains affordances for all kinds of skilled techniques that can become deeply embedded into processes and cultures. It’s not just about being able to adapt something to one’s needs, but to “play” a system with skill and expressiveness. This is what I think we should aspire to. While it’s wonderful if technology makes our lives easier or more efficient, at its best it is far more than that. It gives us new superpowers, new channels for expression and communication that can be far more than utilitarian — they can allow for true eloquence. We need to design interactions that allow us to converse across complex networks, where we can understand and engage in informed and thoughtful ways, and the systems around us can respond with equal nuance.
These values deeply inform the work we do in The New York Times R&D Lab, whether we are exploring new kinds of environmental computing interfaces that respond across multiple systems, creating wearables that punctuate offline conversations with one’s online interests, or developing best practices for how we manage and apply our readers’ data. By doing research to understand the technological and behavioral signals of change around us, we can then build and imagine futures that best serve our users, our company, and our industry.
About the Author: Alexis Lloyd is the Creative Director of the Research and Development Lab at the New York Times, where she investigates technology trends and prototypes future concepts for content delivery. Follow her on Twitter @alexislloyd.
Over the last few weeks we have been working on a very exciting project with the Future Cities Catapult called 'A Family Day Out Programme'. The project seeks to work with partially sighted and blind people to help identify the characteristics of future cities that will enrich their experience of them, and to develop potential cityscapes that would inspire them to make journeys into and around cities. The critical objective of this research project is to identify areas of innovation around integrated city systems relating to city navigation by partially sighted people, and to inspire innovation around design techniques that enrich the city experience for partially sighted and blind people.
An important aspect of this project is to engage with a diverse range of participants to create tangible instantiations of various future visions. For this collaborative visioning process, we are conducting a one-day workshop with a series of different key stakeholders: partially sighted and blind people, urban designers and planners, technology developers and funders, product designers, government agencies, transport providers and financiers. It will be an opportunity to create high-level future worlds that include the end user’s perspective alongside that of experts. This workshop will culminate in a series of early-stage future cityscapes that are inclusive and empathetic - visions that include the voices, challenges and aspirations of a large group. In the workshop we will use processes of co-creation, world-building and storytelling to collate fragments of an unevenly distributed futurity, which has previously manifested in the form of cityscape prototypes, psychogeographic narratives and artefacts.
If you are a designer, city planner, technologist, policy planner, architect, urban designer or anyone involved in shaping our built environment, and are interested in the workshop, it would be great if you could join us on the 18th of March. If you would like to join us, drop us a line telling us who you are and why you want to participate.
We are thrilled to welcome the very talented Philipp Ronnenberg as the creative technologist for the IoTAcademy project. Philipp will be working with us to develop the project's mockups, quick experiments and various workshops as part of our current work with Nominet Trust. He will also liaise with expert technologists to create a robust, expandable platform. Here's a little introduction:
Philipp Ronnenberg studied Digital Media at the University of the Arts in Bremen, Germany before graduating with an M.A. in Design Interactions from the Royal College of Art. He is passionate about democratizing technology, open-source phenomena, making-hacking culture and digital protest. Philipp's work investigates the relationship between technology and society, using various programming languages, electronics, software-hardware prototyping, graphics and animations. He explores past, recent and future technologies through design, developing new perspectives on the interaction between humans and technology. As a designer and software developer he works in the fields of interaction and concept design, and speculative and critical design, researching and prototyping concepts for future interactive systems, applications and products in alternative realities and at the intersection between reality and speculation. The outcome of his work has been published in various magazines, newspapers and online media, and has been shown in exhibitions. You can follow him on Twitter @PRonnenberg.
As the studio gets busier, it becomes increasingly difficult to pause and reflect on our work, our process, our ambitions and aspirations. So as the madness of pre-Christmas deadlines settles, we felt it would be a good time to share a few of the many highlights that made 2013 one of our best and busiest years yet. So here it is: a quick little document of our process, our studio conversations and travels, a glimpse of things stored away on phones and Instagram. A somewhat candid look at what has been keeping us busy. A cathartic exercise for us to reflect and take stock of what this year meant, and how our thinking has evolved.
We would also like to take this chance to thank the new clients and commissioners who approached us with trust and confidence, and those who came back again with fantastic new opportunities. We worked with the BBC, the Government of Dubai, Future Foundation, Sony and Suncorp amongst others, on a range of projects, from speculative design and foresight, to product strategy, invention and interaction design. We led and facilitated workshops, wrote reports, made films, created scenarios, built prototypes and designed new experiences. But most importantly, we found new audiences and made new friends. We learnt that our work and approach have gained traction within industries and organisations we would never have considered as potential clients when we first started out.
An intense week in Dubai followed by a few more intense weeks, working with some brilliant minds to develop concepts for a (NDA-ed) project, which will be made public in early 2014.
Screen grabs from our scoping report and film on IoTA: Internet of Things Academy for Sony and Forum for the Future at the start of the year, which then led to more exciting stuff.
Stills from our design futurescaping workshops with the BBC where we created detailed cardboard scenarios which were then built upon further by the participants. We enjoyed every minute of it, and will treasure being some of the last people to wander the old Television Centre.
On the Lab front, our ongoing research project exploring the future of personalised genomics and synthetic biology in the context of healthcare found its first manifestation in the form of a court case: Dynamic Genetics v Mann, which was exhibited at this year's Ars Electronica. Tobias also showed his brilliant project 'Into Your Hands Are They Delivered' in the same exhibition. Following IoTA's scoping exercise with Sony, we were thrilled to receive funding from the Nominet Trust as one of the ten winners of their Social Tech Social Change Challenge, in partnership with Forum for the Future. We'll be sharing most of our activities through IoTA's twitter account in case you want to follow. We are in the final stages of wrapping up the second stage of the Song of the Machine project with the University of Newcastle, creating a series of functional prototypes and apps for optogenetic retinal prosthesis. The last Lab project this year was Open Informant, commissioned by the Wearable Futures Conference. It was a great start to a theme we will be exploring a lot more in the studio over the next few months. And finally, we are delighted to have won a Grants for the Arts award from Arts Council England to create a pretty spectacular project at the V&A next year, so stay tuned!
Discussions from our first Open Day for IoTA at the studio.
The spitkit from Dynamic Genetics vs Mann.
Jon presenting Dynamic Genetics vs Mann at Ars Electronica.
Tobias presenting Into Your Hands Are They Delivered at Ars Electronica.
Open Informant exhibited at the Wearable Futures Conference.
Yosuke wearing the Open Informant Badge.
Lea's drawings for our Grants for the Arts project.
Patrick's photographs of the prosthesis for Song of the Machine Part 2.
The Synbio Tarot Reader being exhibited at Salone Internazionale del Mobile, Milan.
Apart from Studio projects, we gave a lot of talks this year, developing and refining our own research agenda with each presentation. These include 'Design for the New Normal' at Next Berlin, Keynote at the Open Institute, London Launch, Keynote at the Vivid Festival Sydney, Australia, Talk at Futurefest, NESTA, Lecture at Fabrica, and 'Staying with the Trouble' at this year's rather brilliant Poptech. All our talks are now online here.
This slide became a leitmotif in our presentations this year, ending up on some t-shirts. Next year, it will be different.
Presenting at NESTA's Futurefest, curated by Pat Kane.
Enjoyed being on a panel at the Design Museum with the former and present RCA Rectors, Sir Chris Frayling and Dr. Paul Thompson.
Some of the press features of this year include the Economist's Intelligent Life, the Sunday Observer, Weave Magazine and WIRED amongst others. We published an essay for the DREAD book edited by Juha van 't Zelfde, wrote texts for the Design Academy Eindhoven's upcoming book and contributed our work to Anthony Dunne and Fiona Raby's new book Speculative Everything.
But the best part of this year has been about working with some absolutely brilliant people, our team members, collaborators and associates. Tobias continues collaborating with us on a range of exciting projects, Yosuke Ushigome, who interned with us earlier this year, is now an associate, Minsung Wang and Lea Bardin were the most fantastic interns, Elvira Grob has joined us as our new studio manager, Gyorgyi Galik has joined us to work on the IoTA project, and a creative technologist (yet to be announced) will be joining us in January.
And finally, we are chuffed to find a new home for our practice, a studio in the corner of the Biscuit Factory, overlooking London's seductive, yet fragile cityscape like a little weather station. We are surrounded by our team, friends and associates, people we enjoy working and drinking with. We hope we can welcome many of you to our space next year.
Looking back at the year, Jon and I have spent a lot of time thinking about how to grow, balancing ambition with scale, which is always a challenge, but we are learning that the scale-at-speed method that used to be a measure of success does not necessarily hold. Whilst we have at times questioned the logic of running a research lab within a studio of our size, it's the Lab projects that have helped us keep a progressive design agenda, allowing us to explore possibilities and opportunities that keep us intellectually and creatively sustained, but most importantly, enabling us to bring vision and freshness to our client work. Ultimately, not everything is about speed and scale. Maintaining a sense of pace and resisting the urge to grow too quickly has actually helped us build a business model that has structural and economic resilience.
Here's to a very Happy New Year!
That face? Well, it pretty much sums up how we are feeling at the moment. Absolutely delighted! We'd like to welcome two fabulous new people to the studio, Elvira Grob and Gyorgyi Galik.
Elvira Grob is our Studio Manager, working with us to create bespoke systems for organising, planning and supporting our growing consulting and lab projects.
She is a designer and researcher, with a BA in Process Design/Interaction Management and an MA in Design & Environment from Goldsmiths, University of London. Keen to pursue her design management interests, she is working with us to craft organisational and project management systems that will allow us to grow in ways that support the studio's ambitions and further our interests.
Elvira's own design work and research also overlap with the studio's work. During her MA, she explored concepts where nature becomes culture or vice versa - such as technonature, future animal biomonitors, or hyperobjects. She has also been working as a visiting lecturer in critical design and as a creative strategist, and has a special interest in working in bizarre places, including a waste incinerator and an operating theatre. When she is not working, she is mainly occupied with trialling anything pickled and sour. Her personal work can be found here, and you can follow her on Twitter @grobli.
Gyorgyi Galik is our Project Manager for the IoTA project, working with us to shape the project as it grows into an independent platform.
Gyorgyi Galik is a London-based designer and researcher. Her practice focuses on voluntary social change, and more specifically how we can transform socio-ecological systems and our collective relationship towards the environmental commons to address and respond to contemporary societal and environmental challenges.
She has worked frequently in collaboration and in cross-disciplinary teams in labs and design studios including: Baltan Laboratories (Eindhoven), Kin Design & Research (London), Sackler Centre, Victoria & Albert Museum (London), PAN Studio (London), Natalie Jeremijenko and the Environmental Health Clinic (New York), Hexagram Research Lab - Concordia (Montreal), CECI (Montreal), Kitchen Budapest Art & Tech Lab (Budapest).
Gyorgyi is a tutor in Contexts in Design and Communication on the Graphic Communication Design Programme at Central Saint Martins College of Art and Design, University of the Arts London. She recently started her PhD in Cultural Studies at Goldsmiths' College, University of London under the supervision of Professor Matthew Fuller (UK) and Professor Natalie Jeremijenko (NY, US).
*DEADLINE EXTENDED TO 3RD DECEMBER*
We are embarking on the development of an exciting project - IoTA: Internet of Things Academy - for which we are seeking a creative technologist to work with us on a contract basis, starting immediately.
We are looking for a passionate and ambitious creative technologist who has experience in building IoT projects, is an active member of the maker community, and is well informed with recent developments in the technology. We welcome applicants who want to push the boundaries of the technology, but are also excited about challenging assumptions within the IoT space, and want to join us in testing those assumptions by building prototypes of varying fidelity that participants in workshops will use, and break.
We are looking to work with someone who is looking for a flexible position, initially for a period of five months on a part time basis, but with the potential of a longer term contract or regular employment. We are happy to discuss a working arrangement that suits the right applicant, and arrange time commitments and salary accordingly.
Applicants should send us an email explaining why they are interested in working on this project with us, along with their CV, GitHub profile and links to work samples.
Closing date: 5pm on Tuesday, 3rd December 2013.
Interviews with selected candidates: Thursday, 5th December 2013
We are thrilled to announce that our project IoTA: Internet of Things Academy is one of the winners in the Nominet Trust's Social Tech Social Change challenge. The £1m fund will support ten organisations that use technology to tackle social challenges in the UK and beyond. Each company will receive £50,000 as well as mentorship from some of the world’s leading tech entrepreneurs to develop their early-stage ideas into profitable, scalable social tech ventures.
We are excited to be working with our long-term project partners, Forum for the Future, to develop the experiment further by building experience prototypes and conducting workshops with a diverse group of people over the next few months. Here's a film showing early sketches of this web platform.
As members of an increasingly technologically mediated society, we need to develop new kinds of critical socio-technical literacies. So making is very important, but so is thinking about what we make. As stated earlier, IoTA is an experiment and an opportunity for experts, non-experts, curators, challenge seekers, people, and more people to experiment with the technology and data in inventive, playful and ingenious ways. Data, however big and plentiful, does not necessarily lead to better or more rational decisions. Through IoTA we are not interested so much in how data is made public, but more in how the public make data, build their own hypotheses and make their own decisions.
We would like to thank Nominet Trust for their invaluable support in helping us take this work forward. We would also like to thank Hugh Knowles and Louise Armstrong from the Forum for the Future, who are key partners in this project. And finally we'd like to thank Esther Maughan Mclachlan, Emily Nicoll and Chris Clifton who initiated the Futurescapes project at Sony, which led to the IoTA concept. If you are interested in collaborating or participating please do drop us a line.
(As a note to those who have asked, IoTA or the Internet of Things Academy is a placeholder name, and as the project will evolve and take shape we will think about renaming it appropriately.)
Continuing with our series of guest posts on the blog, we invited Scott Smith to share his thoughts on the notion of 'superdensity', something he has talked about in the past. Scott kindly agreed, and today we are delighted to share his brilliant response.
It’s the Future. Take an Umbrella.
About two and a half years ago, I wrote a blog post titled "The Future is Here Today, and It's Superdense". The phrasing was a reference to the apocryphal William Gibson phrase that's a frequent crutch for people speaking prospectively in public fora: "the future is here today, it's just not evenly distributed." The trigger for the post was a cascade of world events that made "normal" a fairly useless construction—the Arab Spring was unfolding, the Euro crisis was in full swing, and oh, Japan had been laid low by a triple-whammy of earthquake, tsunami and nuclear crisis.
My intent in describing it as superdense, something typically used to talk about neutron stars or quantum information theory, was to find a way to describe how the typical Gibsonian loose distribution of future drivers and emergent trends was momentarily compacting into a tightly clustered ball of WTF. What we think of as the future, in particular bits of dystopia and chaos, wasn’t hiding in bits and pieces under this bush or over in that desert, but was all happening at once, or so it felt.
I also wanted to get across the sense of condensation—of various threads and elements, some connected, some not, coming together in a fairly knotty but spectacular way. While the tragedies in Japan were in some sense of a chain of causation (earthquake causing tsunami causing reactor damage), the events in the Arab world and the Euro crisis were in some ways quite connected via the sensitivities of the economic markets, political weaknesses and so on.
One could say—to keep piling on metaphors—a variety of chickens were coming home to roost. Others have talked about this period of protracted superdensity as a New Normal, where the general social, technological, economic, political and environmental conditions we had previously taken for granted no longer seem to pertain. In this period of deep flux, new power structures are emergent.
So far, so good. We’ve found various bits of language to describe the state we feel we’re in, but we don’t have a good system for coding and signaling the changes in state we experience, particularly as it applies to us as individuals, or to where we live or frame our existence (to our communities, economies, networks, etc). How fast is x changing in relation to me? To others? How strong is a particular driver, trend or state at this moment, and will it change? One person’s weird may be another’s normal. From Chittagong in Bangladesh, for example, a hurricane and technological blackout in the New York metropolitan area might seem like a more normal distribution (though certainly not one wished upon others).
Occasionally, when trying to characterize the dynamic, often changeable nature of the future, I’ve resorted, unscripted, to meteorological metaphors, describing what we think of as “the future” as a phenomenon that washes over us from time to time like a storm front, full of pressure changes, turbulence, and with occasional destructive force. We talk about trends as parts of particular futures, as “building,” “gaining strength” or “rising,” for example. Fans of “Game of Thrones” speak cryptically online about how “winter is coming” as a means of characterizing what they see as a long-term shift toward instability or stagnation. The New Normal is, in effect, a kind of climate change metaphor, conveying an expectation that the conditions under which we’ve made assumptions and decisions in the past—or even the whole physics model of our reality—have altered in a fundamental way. Temperature, precipitation, humidity are all out of whack in our decision-making models.
As I sit thinking about this problem, a familiar sound comes on the streamed radio station to which I’m listening: the audio cue that tells me it’s time for the Shipping Forecast. If you aren’t familiar with it, the Shipping Forecast is generated by Britain’s Met Office and broadcast on BBC Radio 4 at four intervals during the day. The Forecast splits the seas surrounding the UK and Ireland into 31 areas, reaching as far northwest as Iceland, east to Norway and Denmark, and south along the Continent to Spain and Portugal, and provides updated weather and sea conditions in these zones to guide both commercial and private shipping as it makes its way to and fro within the area. Other countries use comparable forecast frameworks.
Many people, sailors and civilians alike, speak about the Shipping Forecast as having a sort of mythical quality—with evocative if slightly opaque names for the regions like Fastnet, Forties, Rockall and German Bight conjuring up something otherworldly, recognized but exotic. Announcers delivering the broadcast read out a standard format of information from each region: wind speeds and direction, air pressure and tracking, precipitation, and so on. While the data sounds almost like a numbers station, it’s meaningful to those who use it, and from it one can create a very precise map of pressure across thousands of square miles of sea. The Shipping Forecast is a powerful shorthand that lets navigators know what to expect, how fast change is occurring, and in which direction it is moving.
Image credit: http://simonholliday.com/shippingforecast/trends
Would something like this be desirable as a means of navigating the New Normal? For understanding how to anticipate superdensity, and even to ride its kinetic energy? I wonder if what we need is a Shipping Forecast for futures—sliced into topical regions, with key forces identified, metrics described, and possible trajectories plotted? “Solar energy, veering 6 to 7, backing 3 later based on pending regulation, sporadic innovation, moderate to good.” “Surveillance, severe gale 9 to violent storm 11, hacking, squalls later, poor, becoming moderate later.” “Bioprinting, 3 to 4, fog, clearing later.”
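For the sake of illustration only, a bulletin like the ones imagined above could even be modelled as structured data. Everything in this sketch, from the field names to the vocabulary, is hypothetical, loosely borrowed from the Met Office's forecast shorthand:

```python
from dataclasses import dataclass

# A playful sketch of a "Shipping Forecast for futures" entry.
# All fields and vocabulary are invented for illustration.
@dataclass
class FuturesForecast:
    topic: str       # the "sea area", e.g. "Solar energy"
    force: str       # Beaufort-style strength, e.g. "6 to 7"
    trend: str       # direction of change, e.g. "backing 3 later"
    conditions: str  # qualitative outlook, e.g. "moderate to good"

    def bulletin(self) -> str:
        """Render the entry in Shipping Forecast-style shorthand."""
        return f"{self.topic}, {self.force}, {self.trend}, {self.conditions}."

solar = FuturesForecast(
    "Solar energy", "6 to 7",
    "backing 3 later based on pending regulation",
    "moderate to good",
)
print(solar.bulletin())
```

The point of the shorthand, as with the real Forecast, is that a fixed structure and a shared vocabulary make rapid comparison across regions possible.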
As with many forecasts, the data is similar but the outcomes vary based on your position relative to the forces at play. Are you in a big or small craft, so to speak? Vulnerable, or protected? Is turbulence your friend or enemy? The standard language of the Shipping Forecast is interpretable by all, but its value is variable depending on who or what you are, and where you stand, sit or sail, much like the security warnings we’ve grown weary of in recent years, with their orange/yellow/reds.
So, I make a modest proposal: let’s develop a Shipping Forecast for the sort of weird, New Normal futures we increasingly encounter. I’m sure we can come up with 30-odd social and economic issues, emerging technologies or environmental trends that we can all agree need tracking. Monitored by an appointed body (a Future Measurement Agency?), these factors can be reduced to publicly digestible metrics, and delivered in a daily report via print, radio and Internet.
Wondering whether Iran’s opening to the West is about to set off a chain reaction of international political reconfigurations? Want to know whether that new biotech product is an immediate gamechanger or just a slow burn? Is a new pandemic something to be concerned about? Tune in each night before bed, get a snapshot view of the future through the glow of your tablet, or a rip-and-read ticker tape via your mini-printer.
I’ll admit, it sounds a little strange, and yet we’ve spent far, far more time, money and effort developing sophisticated social media analytics, high-powered dashboards that give financial traders at-a-glance views of market microturbulence, and, as we’ve found out recently, all-consuming social graphs of all of our interactions and connections. Why not, then, provide such metamaps of “future-weather” as a public good? Widespread knowledge of imminent turbulence and (dare I re-appropriate the word) actual disruption might go a long way toward connecting our actions and reactions to wider conditions.
Unlike the actual Shipping Forecast, to which sailors and ship captains can only respond in a reactive fashion, the forecasting model I propose is actually a feedback loop of sorts—a sort of Quantified Self for society. No, we can’t control (all) earthquakes, but there is a lot of the near-future that is in our control—if we can reconnect our conscious lives to causation. We may choose not to shape the waves coming at us—which is always an option in the decision-making process—but if we are going to apply so much of our time and effort to collecting data and crafting visualisations, surely this little experiment isn’t asking too much.
Recently I gave a short presentation and participated in a panel titled: "The Internet of Things, Data and the Citizen" at the Re:work Technology Summit in London. Here's the video from that talk.
The audio is not entirely clear in the video, so I'd like to share the transcript of the short talk, which sets out our vision for the IoTA platform. I think it's important, given that we are now about to start building the project, to open this discussion around IoT, 'smart citizens', the maker movement and 'technological empowerment' in an attempt to refine our vision for this experiment. (Note: The name IoTA is a placeholder, and as we develop the experiment we will have a better understanding of how to name it.)
It is estimated that by 2020, there will be more than 50 billion connected devices, adding to the 2.5 quintillion bytes of data we are already producing daily today. There are 16 billion photos on Instagram, 350 million photos are uploaded to Facebook daily, and 100 hours of video are uploaded to YouTube every minute. And that's just digital data, not the connected devices we envision forming the IoT world: tables, chairs, bikes, and bridges, or even cows, cats and dogs.
This explosion in data means that we are witnessing an abundance of data spectatorship, and a lack of understanding of how to turn data into knowledge we can think with. And use. That lack of understanding makes us weak and vulnerable, essentially powerless in the face of a certain vision of the future.
You can see the rise of maker culture countering this, as hundreds of thousands of initiatives teaching people to tinker with cheap, accessible technologies are growing, perhaps a clear sign of technological empowerment. But alongside this genuine, infectious enthusiasm, we also see tons and tons of rhetoric.
As members of an increasingly technologically mediated society we need to develop new kinds of critical socio-technical literacies. So making is very important, but also thinking about what we make. (Think) Make. (Think) Do.
So IoTA, an experiment that we are building, is an opportunity for experts, non-experts, curators, challenge seekers, people, and more people to experiment with technology and data in inventive, playful and ingenious ways. Data, however big and plentiful, does not necessarily lead to better or more rational decisions. Through IoTA we are not interested so much in how data is made public, but more in how the public make data, build their own hypotheses and make their own decisions. Here's a film showing early sketches of this web platform.
Today we are in the process of building; this is a live experiment. We don't have the answers yet. We hope that IoTA can help nurture a socio-technically literate population, who will gain the conceptual tools needed to parse the implications of the work that they do, something our schools do not currently nurture.
We recently finished a project Dynamic Genetics vs. Mann exploring the implications of synthetic biology and genomics in the context of future healthcare. We are thrilled to have Christina Agapakis reflect on the project in the context of genomic prediction, privacy, and piracy.
What if personalized medicine never happens? What if the promised therapies tailored to our unique genomes just never materialize? Although it seems inevitable, there is no guarantee that we will be able to precisely match treatments to individuals. For complex diseases with many associated genes interacting in changing environments, the statistical power to make therapeutic predictions currently remains elusive. What if we sequence the genome of every single person on earth and the data is still not big enough?
In such a future, will we still believe in genomic promises? Perhaps, unable to let go of the hope that our genes can predict our future health, we continue to demand access to our largely uninformative genetic code. Unable to find strong associations for complex and chronic diseases but still desperate for determinism, we might look for answers not only in the genes of our own cells but the genes of our microbial symbionts.
This hope might remain part of medical rituals, a statistical placebo for the post-genomic checkup. The doctor takes samples of your secretions and sends them to a genome sequencing company, the costs barely a blip on otherwise ballooning medical bills. You talk about your fears of aging, cancer, neurodegenerative diseases, antibiotic resistant bacteria. You discuss your parents and grandparents’ medical history. Your blood pressure, cholesterol, and blood sugar are measured. Risks are calculated. You should probably lose some weight, eat more vegetables, walk more. You should smoke less, eat less processed food, less sugar. You should take better care of yourself. You probably should have done this anyway. You go home with a reassuring list of percentages that put a number on the fundamental uncertainty about your future.
The sequencing company analyzes your DNA, bills your insurance company, and stores your data in the cloud. Your demographic information and health records are linked to your unique set of sequence variations. Associations are identified, risk percentages are modified. Sequences are patented. Progress (money) is made.
You continue to be anxious about privacy. You think, “if a company is telling me that my DNA data is me, then why should that company have so much access to me?” We are told that in our dangerous world we must give up some privacy for increased safety. For increased health we must give up some of our expectations about genetic privacy.
“Crimes of a Genetic Nature”
DNA is good for telling stories about the future. DNA as machomolecule, in control of our genetic destiny. DNA as code, programmable, controllable, readable, re-writable. Like other data-driven futures, DNA-based stories are stories about probability, risk, and control: risk of developing certain medical conditions and the control that DNA has over our biological characteristics. Risk that genetic information will be used to discriminate against us, risk that our DNA will be used to control what we are and what we can be.
Superflux is good at telling stories about the future, stories that help us connect with the abstractions of probabilities and the weirdness of our unevenly distributed futures. With Dynamic Genetics vs. Mann, Superflux tells a story about DNA, risk, and control, not with percentages and promises but through the carefully crafted evidence of a fictional patent infringement trial.
The story is set in Britain in the near future, when the UK’s National Health Service (NHS) has been privatized and transformed into National Health Insurance (NHI). The trial’s defendant, Arnold Mann, faced with unmanageable NHI premiums due to undetermined genetic risk factors, turns to black market gene therapy, replacing his risky genes with healthy sequences patented by the fictional biotech giant Dynamic Genetics. With these new genes, his insurance costs are decreased, but he is prosecuted for the DNA sequences that he now holds in his cells, sequences that he didn’t pay the right people for.
At first glance, DG v Mann seems to be a very familiar kind of future, especially for people who don’t live in the UK and don’t have an NHS. For many Americans, a story about an insurance company trying to use anything and everything to screw you over is not an unfamiliar fiction but an everyday fact of life. The idea that an insurance company could one day use your DNA sequences to justify increasing your premiums or deny you coverage is such a pervasive story in the American debates about gene sequencing that it was codified into law, outlawed by the 2008 Genetic Information Nondiscrimination Act. If anything, DG vs. Mann might give its first shock of weirdness with the notion that it could be weird for such corporate shenanigans to exist in the first place. Imagine a future where Americans think that privatized insurance is a frightening and ridiculous scenario!
This is one way that design fiction could begin to help us “bypass the established narratives about the present and future,” challenge us to see the present world from a new perspective, and teach us to challenge our assumptions about what is and what might be possible–both technologically and politically. Design fictions show technologies at the edge of speculation and reality, inviting us to imagine, question, and debate the applications and implications of new science and technology in a cultural context. Exploring genetic technologies in relation to government programs, the business of health care, and the ongoing debates about piracy and intellectual property allows for discussion not just about the function of the technology itself, but its inextricable relationships with power, politics, economics, and society.
Fictions give life to these complex relationships and give us a vocabulary to debate the kind of future we want (think of how often Gattaca used to come up in conversations about DNA sequencing). But while such stories are good at challenging our assumptions about how a technology might be used, rarely do they challenge the deeper assumptions about technological power and control.
What does the world look like when we bypass the established narratives of DNA as the master of our readable and rewritable future? What if DG vs. Mann is actually a story about genetic indeterminacy?
“Good Source of 6 Vitamins & Minerals”
The DNA evidence in DG vs. Mann is not human readable. Strips of paper with tiny, indecipherable A’s, T’s, C’s and G’s highlight the regions of Arnold Mann’s genome that are infringing on Dynamic Genetics’ patents. Looking at these strips, we don’t know what diseases he was at risk for, how much of a burden he would one day be on the insurance pool, or even if the pirated gene therapy has actually changed his odds of developing the disease.
It’s possible that Mann’s risky sequences are part of the relatively small set of gene variants that are known to directly cause devastating diseases. But if Mann is an otherwise healthy adult, it’s much more likely that the NHI actuaries are looking for common gene variants that have been statistically associated with very common and very expensive diseases: type II diabetes, cancers, and cardiovascular disease.
What does it mean if you have, for example, a diabetes-associated sequence in your genome? In terms of real world health outcomes, the small changes in risk associated with any one such variant probably don’t mean much, especially compared to the big effects that environment and diet can have.
Indeed, it’s harder to imagine what these numbers might mean for your health than what they could mean for your health insurance. These associations provide an “objective” justification to what the insurance company wanted to do all along: get more money. As long as people still believe that DNA is in control of our biological destiny, these associations don’t actually have to be biologically meaningful in order to have a big effect.
What does it mean then to use gene therapy to change these risky gene sequences? Considering that for most health outcomes, zip code is a better predictor than genetic code, probably not much. But if an insurance company can use DNA sequences to justify charging more, then altering gene sequences isn’t necessarily about being healthier but simply appearing healthier to the risk calculators. The new variants are the genetic equivalent of sugary breakfast pastries fortified with vitamins and minerals, an unknown risk with a quantifiable veneer of “healthiness.”
Unlike Pop-Tarts, however, when it comes to deciding who gets affordable insurance coverage, such genetic spoofing might ironically be enough to translate to better health in the real world, where access to health care is much more important than DNA. For Arnold Mann, the potential dangers—medical and legal—of undergoing back-alley gene therapy is worth the risk in order to get affordable insurance. People have done weirder things for health care.
Polarized debates about the desirability of a new technology and its potential implications often oscillate between cheerful utopia and horrific dystopia. We discuss the promises and perils, the risks and rewards—opposite ends of a speculative spectrum. The real future, of course, is not simply one side or the other, happening instead somewhere in the messy in-betweens, neither world-saving nor civilization-destroying.
But whether proposing utopia or dystopia, both sides of such debates grant technologies an unexamined power to solve or create problems, what anthropologist Georgina Born calls an “unproblematic effectivity.” For debates about the future of biotechnologies, the power of DNA always remains at the center. When speculating about the future of a technology, it is worth asking: what if it just doesn’t work that way?
Stories about the future can open up new possibilities, new avenues for debate, breaking free from the “half-pipe of doom” between utopia and dystopia. We can imagine more complex, weird, ambivalent futures—stories where technological promises come unraveled, their technical underpinnings explored, their cultural appeal examined.
We want to know the future. We want to know that in the future we will be able to know more than we do now. We want our futures populated with competent scientists, always in control, able to fully understand and accurately predict. We want DNA to be able to justify inequalities in health, we want DNA to give us answers, to tell our future.
DNA is obviously an important molecule, but too many of our social problems and technological dreams rely on the false promise of genetic determinism. DNA is not all-powerful. Data is not enough. Health is biological, but also social, political, economic. Biology is complex. Biology is messy. For better health, we need less sequencing and more support. For better technological promises, we need less control and messier futures.
About the Author: Christina Agapakis is a biological designer who blogs about biology, engineering, engineering biology, and biologically inspired engineering. Follow on Twitter @thisischristina.