Mourning the Loss of Physical Interfaces

There was a time before screens, when everything had a dedicated, physical interface. I suppose the move to screens wasn’t the first move; it was preceded by a transition from levers and knobs to buttons. But buttons are still physical, and it is the disappearance of buttons and knobs and levers altogether that is disconcerting.

It’s not that I’m a luddite, a technophobe, or simply an aging person who would prefer things don’t change—though I can’t deny these influences entirely. I don’t own a mobile phone, don’t want one, and I reject change for change’s sake. Perhaps some of my dislikes fall into these categories: the Start button on new cars, rather than a key, is frustratingly different for me. The key has always been a part of driving to me, part of a long, complex but fluid motion built from years of practice:

Starting my car

As I approach the car, I anticipate needing keys and pull them from my pocket or purse. There’s a one-handed thing to flare them out, then a finger motion to select the automobile key. Slip the key in the door lock, twist and back, out; pull the door open, slip my backpack or purse down my shoulder and place it on the passenger’s side or rear as I slide into the driver’s seat. My feet find the pedals and depress the brake and clutch, my left hand grabs the seatbelt and pulls it forward while the right hand inserts the key and turns over the engine. When it engages, I jiggle the gearshift to make sure I’m in neutral and release the clutch; with the seatbelt already pulled forward by my left hand, I grab the clip and insert it with my right, then tug on the belt, a habit formed because my old Saturn’s clip was unreliable. Brake, clutch, then disengage the parking brake; shift into first or reverse, do the brake-gas pedal timing as I slip the clutch and I’m off.

To be fair, I’m not quite sure that’s right. It’s something I do, not something I think about; as I write I’m envisioning it, but conscious belief and muscle memory may not match precisely. The move to keyless entry wasn’t bad; I replaced the key insertion, twist, and withdrawal with locating the keypad, pressing the button, shifting the key in my hand, and releasing the switchblade key in anticipation of inserting it into the ignition.

But with the press-button entry and ignition, the key is uninvolved. It sits in my pocket (where it’s in the way, uncomfortable when I sit down) or remains in my backpack or purse. I often automatically go for it to open the door, but then there’s no place to put it without an ignition slot. So I fumble and it ends up clipped on a belt loop, or in my purse, or in the cup holder. Keys used to have a spot: the ignition.

Perhaps in time I’ll adapt, but I’ll miss the flow. It is subconsciously part of driving a car, and I’m annoyed that the motion has become awkward. But I know it’s arbitrary, and it doesn’t matter.

Tuning the Radio

What does matter are new radio interfaces. In the past there was a tangibility, a concreteness: one button, one action. The tone was adjusted with some little sliders. At most, a few of the knobs were dual-action, like a push-on/push-off volume knob.

I had to look when I first bought my Saturn, but that quickly became unnecessary. My body learned where the buttons were, and a fluency of use developed: I thought what I wanted, and my hands did it. At most, I glanced as my fingers approached to get the precision right. The analog nature of each control was perfect for what it did, whether pressing a button, sliding the EQ settings, or turning the volume knob. I could do it without thinking about it.

Then in 2011 I replaced my Saturn’s stereo with an aftermarket unit so I could hook up my iPod. I never became fluent with the replacement. True, it had fewer buttons and knobs than the Saturn’s. But instead of many buttons, one action each, there was a single knob that drove a menu. Depressing the knob activated and selected; rotating moved through choices. To go back, there was a return button; it was tiny because the screen took up a sizable chunk of the radio’s dashboard real estate. There were a half-dozen other buttons, equally tiny, necessitating looking—not glancing—at the thing to target the right one accurately. And adjusting anything other than the volume (the default thing the knob did) was an accident waiting to happen: balance, fade, tone, all previously given their own knobs, were now buried in menus that required reading the screen to navigate.

The sound was good, and I could attach my iPod, but I really didn’t feel safe using it, and doing everything through the stereo’s limited buttons was clumsy. I finally replaced my old-style iPod with a touchscreen model; a mode switch on the stereo allowed the iPod to retain touchscreen control. That made it usable, but it wasn’t like a dedicated interface.

Video Editing

The first time I mourned the loss of a dedicated interface was when I encountered the Video Toaster in the early 1990s. It was a revolutionary breakthrough in video technology, and the system I used had attached VCRs and software to control them.

But despite the Toaster’s ability to do snazzy things affordably, the VCR interface was clumsy. With the mouse, one selected a VCR on the screen, then tapped the middle mouse button to activate VCR-mouse mode; sliding the mouse back and forth adjusted the VCR’s review/cue speed. To control the mouse pointer again you had to click out of that mode, but then the VCR deck kept doing whatever you left it doing. There were a few screen buttons to pause or cue/review, but they required targeting and clicking, and offered limited control.

Compare that to the old analog equipment: two jog/shuttle knobs, one for each deck. Grab a knob and turn; both VCRs could be adjusted at the same time, and response was instantaneous. Each knob set its deck’s speed: it rotated 180º, with a little grab-spot in the middle indicating neutral; left of this was backward, right was forward, the speed adjustable by position. When in neutral, the jog wheel in the center portion of the knob activated; rotating it slid through the tape one frame at a time for precise positioning.

After a little editing on the analog equipment, when I wanted something to happen, my body grabbed the knobs and made things happen. “The next sequence is from earlier on the source tape,” I thought, and the tape rewound; “It’s coming up here,” and the tape slowed and stopped at the right spot. My hands worked from training, not from thinking; my actions became fluid. This fluency never occurred with the generic interface on the Toaster.

Look at an image of a live-broadcast control room: there are no mice or generic computers. The video people have control boards with buttons and levers and joysticks; the audio people have mixers with sliders and knobs. As events unfold, there’s no time for them to fiddle with mice, open widgets, move windows, or dismiss alerts.

Touch Typing

As I write, I think words and they appear in this document on the screen. I don’t think about pressing the keys; that’s wired into my body, like forming sounds for speech.

Could I touch type on a touchscreen? Not on a phone-size device, but on a tablet-size device, if I were looking at it, yes. But I could not transcribe a document or look away for long: the bumps on the F and J keys let me reposition my hands correctly without looking—without even thinking about it. Without them, I’d lose the home row and end up typing gibberish.

Shortcomings of Generic Interfaces

Context sensitivity. A screen UI behaves differently depending on what’s going on; a window or message can pop up at any moment, obligating the user to pay attention and splitting that attention between what they’re trying to achieve and the actions necessary to achieve it. This rarely happens with dedicated interfaces.

Overloaded controls. On-screen controls serve different purposes at different times. A dedicated control always behaves the same way, and it has a specific spot in relation to other physical objects. In time, the physical motions to operate it move into muscle memory.

Size. A screen has limited size, especially as we go for portability. As things get more complex, the buttons and controls on that screen have to get smaller to fit them all, necessitating attention to hit the correct one. (Alternately, some can move to pop-ups and slide-out widgets, but that adds to the context-sensitivity problem.) Dedicated interfaces are built with buttons and controls sized for ease of use within the device’s space limits.

Lack of tactile feedback. Generic interfaces are designed for broad use, not serviceability in any one use. Well-designed physical controls let the user “feel” what they are doing without looking. A familiar TV remote can be operated by feel whilst focusing on the TV—something you can’t do with an all-in-one remote-control application on your smartphone.


Advancing technology is providing us with powerful new generic devices, and with them, we are inventing new ways to interact with machines. As cool as the technology is, it is often clumsier than the older, dedicated interfaces it displaces—interfaces whose physical components were prohibitively expensive to reproduce. There is a distinctiveness to physical interfaces that is lacking in the generic UI elements of a tablet, computer, or device built to the current gadgetty ideal.

It is swell that technology enables us to do things previously reserved for those with the money to buy expensive equipment. But it is sad—and sometimes dangerous—when distinctive mechanisms for doing the work, refined over time toward perfection, are replaced with clumsy skeuomorphic replicas and interfaces based on video screens.

At present, it seems the fad (at least for consumer equipment) is to put a touchscreen and display panel on anything and everything. I hope it is just the zeitgeist, and that we will come to appreciate again the ease of use, intuitive behavior, and simplicity of dedicated, tactile, physical interfaces.