It was Steve Jobs who married technology with usability and created the first two breakthroughs in how we interact with devices. Now that he is gone, I wonder if there will ever be a third paradigm shift. He knew the art of packaging technology and usability together, and more than once he succeeded in creating waves of technological adoption.

Before we get into how our interactions with devices may change in the future, let us look at how Steve Jobs successfully reinvented interaction models twice before.

First Shift: GUI-based interaction on PCs

Douglas Engelbart of Stanford Research Institute may have thought up the mouse-based graphical user interface in the 1960s, but he didn't take it mainstream. Even tech giants such as Xerox and IBM, which built early GUI-based machines, failed to make them stick.

Steve Jobs was the first to adopt this technology and offer it to people in a way they could understand and use. Apple's Macintosh (launched in 1984) was the first GUI-based desktop to go mass market, spawning a range of similar desktops from competitors, including Microsoft, over the years.

Second Shift: Touchscreen-based interaction on mobiles

Touchscreens have been in use for a long time now. E.A. Johnson was to touchscreens what Douglas Engelbart was to the mouse-based GUI: he developed the first touchscreen in 1965 for air traffic control systems in the UK.

Over time, touchscreen technology found its way into the first touch tablet, built by Nimish Mehta (1982), the HP-150, the first touchscreen computer (1983), the Casio AT-550 watch (1984), and the Simon Personal Communicator phone (1993), among other devices.

Unfortunately, none of these devices became mainstream.

Steve Jobs didn't invent the touchscreen, but he came to the rescue again by making sure it went mainstream. He innovated on top of it to create what we know today as the iPhone, the first mainstream touchscreen device, with more than 500 million units sold to date.

Back in 2007, when Steve Jobs launched the iPhone, he didn't just have to wow the audience; he had to educate them too. If you remember his keynote, he showed how to swipe and scroll through photos and how to rotate the phone to view them in portrait or landscape. How can one forget the "whoa!" from the crowd when he pinched a photograph to zoom in?

Once again, he brought to the fore something he hadn’t invented.

Third Shift: What will it be and who will bring it?

Considering that the last interaction-model breakthrough came in 2007, when the first iPhone launched, it is time for the third shift to happen. It could come from several directions:

Voice
Voice is a natural choice: we all start talking before we start reading or writing, so it has a head start. Unfortunately, speech recognition remains a problem; what one says into the device is often misunderstood, not to mention the interference from background noise. The next big problem with voice is privacy: you don't want the stranger standing next to you to hear what you say into your phone. Still, I wouldn't rule out a voice-based interaction model completely, if only because Google Glass is the front-runner for it.

Motion Sensing
Motion sensing has been around for some time now; game-console makers have invested in it and seen plenty of success. The technology tracks movements of the fingers, hands, arms, eyes and so on, and translates them into actions on your device. The first application of motion sensing outside gaming consoles has been in smart TVs. I have one, and I can assure you that gestures aren't the best way to operate a TV.

Two-way interaction
What if you didn't have to touch the screen (for the most part) and the device did the work for you intelligently? Think of algorithms so smart that they rarely go wrong and bring up the right content at the right time and place. You may still make the occasional touch, but it is mostly touchless. Why call it a two-way interaction? Because without knowing about you and your likes and dislikes, this model can't work. Imagine a third-generation 'Google Now' two years from now. It may sound early, but I think it is a definite possibility.

Fluid keys on your touchscreen
This is not so much a new interaction model as an improvement over the existing touchscreen. Lately, several companies have started asking: what if the touchscreen didn't have to be flat? What if buttons could rise out of the screen and retreat back into it? From a layman's perspective this may seem like a small thing, but physical keys let us take our eyes off our phones, freeing us up for other tasks. Imagine playing games on your phone with fluid hard keys that vanish the moment you close the game.

These are just the top four models I could think of, and I am sure there are many more. I wonder if there is somebody out there who will pick up the various technologies available, evaluate them, package the best of them with a pinch of usability, and offer it to everyday people in a way that makes it mainstream, just like Steve Jobs did the last two times.

I don't see anybody just yet, but I am willing to wait and watch. If you do, I would love to hear about it.