January 30, 2019

John Allsopp: the consequences of the web

John Allsopp

Author & conference organizer

John Allsopp is an author, developer, and conference organizer. He’s the co-founder of the Web Directions conference and co-chair of the Open Web Education Alliance (W3C-XG). We spoke with John during Design Exchange Sydney in 2019.

Aarron Walter:   I’m excited to talk to you today because not only do you have your finger on the pulse of what’s happening on the web right now, but you’ve been [part of it] from the very beginning. 

John Allsopp: That’s right, I started working on my first digital product, an application for the Mac platform, in ’93. I came along as Mosaic started to get a little bit of traction. You could get an internet connection that allowed you to use a text mode browser like Lynx for maybe a few hundred bucks a year. But it was very hard to even get an ISP—certainly in Australia. I think you paid something like $10 an hour to get a connection that allowed you to use a web browser like Mosaic.  

In the last few years, I’ve had the privilege of working with CERN (the European Organization for Nuclear Research) on recreating early web browsers, including the very first one, in celebration of the web’s 30th anniversary, based on Tim Berners-Lee’s original paper envisioning what the web would become.

AW: You influenced responsive design with your article “A Dao of Web Design.” What year was that published?

JA: 2000, so it’ll be 20 years old in April.

AW: That influenced our mutual friend Ethan Marcotte, who came up with the core tenets and methodologies for responsive design.

JA: That’s right. I used the term “adaptive design” when I first wrote it. It was very theoretical back then. We didn’t have much diversity in screen sizes: just 800×600, maybe 1024×768. Macs tended to have a richer color palette than a lot of the Windows devices at the time, too.

But—strictly speaking—we didn’t have any sort of mobile web, either. Ethan’s work came just after the introduction of the iPad. That was the impetus to ask, “How do we target a broad range of form factors?” Now, that’s only multiplied dramatically.

Just to change the subject slightly: I think screens are going away in a lot of ways. Not only in AirPod-type interactions, but obviously the Alexas, Google Homes, and other devices that are everywhere. There’s enormous opportunity there, and a lot of those principles still apply. But should we be building new voice-based experiences that are separate from those that are screen-based? There are some really interesting design challenges to think about.

AW:  How do the principles of a graphical user interface map over to natural user interfaces like these?

JA: I built and presented a framework at one of O’Reilly’s conferences a couple of years ago. One of the advantages of being involved [in the web] for a very long time is that you get a sense of how history repeats.

There are three quite distinct periods of computing history: mainframe, mini, and personal computing. (I knew about the first two from studying computer science in the 1980s, and lived the third from the very beginning as a computer nerd—I even had the great privilege of having a little Tandy TRS-80-type device.)

In the mainframe era, computing was hidden away, batch processing, the domain of experts. There were a whole bunch of geographical constraints on the device, which constrained how we interacted with it, and how much it cost to interact with it. But the minicomputer revolution made it an order of magnitude less expensive. That allowed computing to start being applied outside of aerospace and government, into small and medium enterprises.

Then the personal computer arrives. Since the early 1980s, you could argue we’ve essentially been part of the same computing trajectory, because the mobile device is still almost entirely text-based. The devices are dumb in the sense that they sit there and wait for our input: I click a button, tap the screen, or type something.

What is really interesting, however, is what’s happening next: It’s screens going away. You can’t use screens in a whole bunch of places. They’re highly problematic socially, though people still use them in the car, walking down the street. They create challenges like car accidents or pedestrians being run over. 

I feel like we’ve extended the limits of where screens are useful, up to a certain point. So what happens next? Is that it? Is that everywhere we can compute? I don’t think so. 

I feel there’s a two-fold aspect: how we interact with them—I think a lot will be voice and hearing based—and [how they will react to us] simply living our daily lives, devices reacting to stimulus. There’ll be all these sensors on devices, like gyroscopes. My watch right here is constantly recording my pulse. And I saw a story the other day where a man had a terrible bike accident with his Apple Watch on. It was smashed up, but it recognized that something had gone seriously wrong, and essentially called 911.

AW: Apple’s recent keynote where they announced the new version of the Apple Watch began with a video of how the device’s presence [helped people], like an older man who fell, his watch immediately notifying an EMT.

JA: We are only beginning to think about the design opportunities and possibilities of these devices. We’re in the really, really, really early days of it—the DOS or CP/M edition of that. We’re surrounded by genius technology all the time and feel like, “Oh, it’s all happened. We’ve missed the opportunity. We’ve reached a stasis point.”

But there’s a real point of inflection here: The opportunity to think about these devices, and the data they make available. There’s also a bunch of ethical and other challenges around it. It’s a very interesting opportunity we’re only just beginning to explore.

AW: We see it borne out in the economics of the big four, big five companies. Like Apple’s earnings reports, where the majority of their growth is coming from wearables. That’s their fastest-growing segment. AirPods alone are 15% of their revenue, which is mind-blowing.

JA: I’ve got a pair. I generally buy stuff that gets some traction because I feel like I should really check out how it works. To me, it’s transformative: The friction of listening to something is so low. I’ve heard people make the observation that the resurgence in podcasting is in no small part due to AirPods and similar devices. It’s so easy to slip them in when you’re in transit or even walking on the street.

AW: That’s an interesting way to look at it. You talked about phones being sort of dumb things waiting for our inputs. What feels magical today is when our computers are connected to one another, not just in a single point of interaction, but in multiple points. It’s a seamless customer experience or user journey.

AW: To bring us full circle, I can’t help but think of Tim Berners-Lee’s original paper where he describes his elderly mother needing care and [imagined] using different systems for calendaring, healthcare, and so forth. All of these agents, as he described them, would seamlessly interact together in a transparent way, multiple systems talking to each other behind the scenes without asking for input. Recognizing an interaction is happening and putting things into action.

JA: We’re not even nearly there yet, and that’s an easy thing. When it does work, like when you get an email in Gmail and it automatically gets put in your Google calendar, it feels like magic. But we’re living in these little siloed worlds. I think it’s the very beginning of what can potentially be taken off our hands.

You feel like so much of your life is spinning these little plates. I think the single hardest thing in most people’s lives is taking care of the minutiae.

It takes as much energy to look up, “When was that date to get my teeth cleaned?” as it does “When was my friend’s wedding anniversary?” The biggest things and the smallest things are still these single points of focus that require this amount of energy. If we could offload the minutiae of our lives, I think that would free us up to focus on the less trivial, but still important aspects of our lives.

AW: When I think about Tim Berners-Lee’s original vision now, a couple of decades later, it mostly scares me because of the various dystopian things that are transpiring in the world. How do you reconcile those two things?

JA: It’s a really, really significant challenge. When Twitter first emerged, I think those who used it when it was really young considered it to be this tremendous boon to their lives. Probably one of the reasons you and I have maintained a friendship across thousands of kilometers for over a decade is something like Twitter. You can just check in with someone, see their circle of friends. I think Facebook probably played that role more formally, on a broader scale—and what could be wrong with that?

It was a utopian belief that it was going to make the world a better place. We were going to be with people, together. I think there was a tremendous naivety that a lot of us had. It’s very interesting that the hippie movement and the personal computer were revolutions that emerged simultaneously.

There’s this fundamental ethos that you and I refer to. It’s shaped by people we personally know on the web, rooted in New York, Boston, Philadelphia, that part of the world. And then there’s this other ethos, which is very much “What does the web enable?” “How can we maximize revenues?” A much more business-minded approach. That tends to be more rooted, I think, in the Bay Area.

So there’s something very interesting going on there: that utopianism, whether it was a more liberal or a more libertarian utopianism, informed a lot of these decisions for quite different reasons. I think people of both political persuasions thought that technology was essentially a force for good. But that’s hard to argue if you look at the history of technology. Often, it’s both founded for and invested in by the military, all the way back to Leonardo da Vinci, who made all of his money designing weapons systems and fortifications for the Medici family and others.

That’s why I refer to this [utopianism] as being quite naive. I shared that naivety for a very long time, until relatively recently. Now, we’re aware of the consequences. I think there are 70 countries in the world now where there are demonstrated, systematic foreign attempts to unsettle electoral and political processes using technologies like Facebook and so on.

We can’t hide from the fact that it’s happening. Now we have to think, “Well, what’s my role in this? What do I do about it?” I don’t think it’s simply a matter of professional ethics. I think a lot of this probably has to be legislative and regulatory.

I don’t think it’s a trivial challenge, both in terms of trying to solve it and in terms of potential impacts. It’s potentially a civilizational level of challenge. 

It’s only in the last handful of years that we’ve seen what goes wrong when we don’t take these things seriously.

We’ll continue the interview in a second conversation with John, where we’ll discuss the state of technology in Australia and Southeast Asia.

designbetter conversations