For the most part, current cloud computing implementations require a computer for the end user; even though there may be services provided through the Internet, there still is the need for the end user to have a machine with sufficient computing power to execute at least a browser and the application. But suppose the application were delivered without the need for the end user to have such a computer? What if the entire application experience were delivered over the Internet and required no application processing on the user's end, not even a browser?
This certainly isn't a new idea. Mainframes have been delivering a complete application experience to dumb terminals for decades. For an end-user-friendly experience, however, a full, responsive GUI with rich audio and video must be delivered to a variety of client configurations:
- in your office or home office: a 24" monitor, some speakers, a keyboard, a mouse, a printer
- in your family room: a 50" plasma display with a remote, surround-sound speakers, game controllers, possibly wireless keyboards/mice
- in your kitchen: a touch-screen flat display hanging on the wall
- in your favorite Internet cafe: a 20" screen with a keyboard and game controllers (and cup holder)
- in your pocket: a phone or other portable device with a small touch screen, built-in keyboard/game buttons
In all cases the "dumb terminal" client device would contain embedded software (probably on a specialized chip in the monitor) that manages a connection to an application provider, sends input, and receives and decompresses audio/video frame updates. But that is all it does - this embedded software is dumb in that it is not processing the user input or running the application. It only needs to know how to pass information both directions: sending input to the cloud, and receiving output from the cloud.
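That embedded client loop can be sketched in a few lines. Python is used here purely for illustration (the real logic would live in firmware on the specialized chip), and the provider, event format, and frame format are all hypothetical stand-ins:

```python
import zlib

def provider(event: bytes) -> bytes:
    """Hypothetical stand-in for the server farm: runs the application
    logic and returns the next audio/video frame update, compressed."""
    return zlib.compress(b"frame-after-" + event)

def dumb_terminal(input_events, send_to_cloud):
    """The embedded client: forwards input upstream and decompresses the
    frames coming back. It never interprets input or runs the application."""
    displayed = []
    for event in input_events:
        compressed = send_to_cloud(event)              # pass input to the cloud
        displayed.append(zlib.decompress(compressed))  # display what returns
    return displayed

frames = dumb_terminal([b"click", b"keypress"], provider)
```

The point of the sketch is what is absent: the client has no application state at all, only a transport in each direction and a decompressor.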
Standards would likely emerge among consumer electronics manufacturers and application providers so one device could connect to any of a number of providers. Conversely, you as the consumer could sign on to your application desktop from any of a number of client devices (office, family room, kitchen, portable, etc.). It is likely that as the functionality in the specialized chip became ubiquitous it would be inexpensive to produce and thus not add much to the cost of the device. As for application processing, the application provider does the heavy lifting, establishing powerful server farms for accepting client connections, processing application input, and sending back audio/video updates.
Assuming application providers can deliver these experiences profitably and at a price consumers find reasonable, imagine what the consumer no longer has to worry about:
- Purchasing/repairing/frequently updating expensive computing hardware
- Troubleshooting hardware/software conflicts
- Workstation administration duties (file backups, applying operating system patches, virus and malware protection)
- Managing multiple workstations, between job and home or multiple computers in the home
- Maintaining a library of software and data disks
- Keeping important documents and data at hand (no more "the data I need is on the wrong computer...")
- Document security (handled now by the provider)
- The horrible customer assistance experience of being bounced back and forth between different hardware and software companies, each saying the problem is the other's fault
But is this vision of delivering a full application experience with a rich audio/video interface through the Internet really feasible?
The technical challenge that makes or breaks it boils down to this: can input be transmitted over the Internet and processed in a server farm, and the resulting output be transmitted back, decompressed, and displayed fast enough to appear to the end user as if he or she controlled the action? This is the critical measure for a positive cloud computing experience from the end user's perspective - the applications must appear responsive.
So how fast is "fast enough"? Robert B. Miller's "Response Time in Man-Computer Conversational Transactions," published by the Association for Computing Machinery in 1968, remains a useful reference for assessing response delay. I'll use Miller's Topic 1, "Response to control activation," as the yardstick for "fast enough" here. Miller suggested that an action such as the click of a typewriter key should be met with a response that appears "immediate" to the user - "perceived as a part of the mechanical action induced by the operator" [p. 271] - and that a delay of no more than 0.1 second is perceived by the end user as a simultaneous response.
For example, if the user clicks a mouse on a spreadsheet cell and the visual display of highlighting that cell appears to the user within 0.1 second, the user's perception is that he or she controlled the action - he or she made the cell highlight. If the delay between user input and the resulting recognizable effect is greater than that, the user begins to feel more like the computer controlled the action - like he or she submitted a command that the computer processed, rather than he or she directly highlighted the cell. The user no longer feels in control.
This model of cloud computing is possible, then, if transmitting input over the Internet, processing it in a server farm, and transmitting and displaying the resulting output takes at most around 0.1 second, or 100 milliseconds. A would-be application provider would look to the following to hold the delay to 100 milliseconds or less:
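To make that budget concrete, here is one hypothetical way the 100 milliseconds might be allocated across the round trip. Every figure below is an illustrative assumption, not a measurement:

```python
# Hypothetical allocation of the 100 ms end-to-end budget
# (all figures are illustrative assumptions, not measurements).
budget_ms = {
    "capture input and send from client": 5,
    "network uplink to server farm": 20,
    "server-farm processing and rendering": 30,
    "audio/video compression": 10,
    "network downlink to client": 20,
    "client decompression and display": 15,
}
total_ms = sum(budget_ms.values())  # must stay at or under 100 ms
```

Laid out this way, the budget shows why every item on the provider's list below matters: shaving the network legs, the server processing, and the codec each buys back milliseconds from a total that is already fully spent.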
- A faster Internet. Better Internet bandwidth, all the way to the typical home or office; faster wireless as well.
- Well-constructed, powerful server farms. Combine the most powerful hardware available with the fastest grid-style operating software for managing connections, so that processing user input takes less time. Locate several throughout the country (world?) so that one is always close to the consumer.
- Exceptional audio/video compression and decompression. Reduce the amount of data being sent back to the clients, thus requiring less bandwidth, and reduce the time taken to decompress and display the video.
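A back-of-the-envelope calculation shows why the compression item dominates. The resolution, frame rate, and compression ratio below are illustrative assumptions, not figures from any provider:

```python
# Uncompressed 720p video at 30 frames per second, 3 bytes per pixel.
width, height, bytes_per_pixel, fps = 1280, 720, 3, 30
raw_mbps = width * height * bytes_per_pixel * fps * 8 / 1_000_000  # megabits/s

# Assume the provider's codec achieves roughly 130:1 on this material
# (an assumed ratio for illustration, not a published figure) - the
# stream then fits within a typical broadband connection.
compressed_mbps = raw_mbps / 130
```

Raw 720p works out to roughly 664 megabits per second, far beyond any consumer connection; at the assumed ratio the stream drops to around 5 megabits per second. The entire model lives or dies on that division.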
The current commercial effort that most closely matches this vision of cloud computing is that of the company OnLive, which had something of a coming-out party at the Game Developers Conference in March 2009. OnLive is forging a potentially industry-shaking distribution model for streaming high-end video games, one with clear benefits for game publishers. OnLive claims to have enabled their streaming model by, among other things, developing a technological breakthrough in video compression. That claim will be put to the test in late 2009 and 2010 as their system scales up with actual subscribers. Whether or not OnLive succeeds, they have sparked the imagination for what cloud computing can ultimately mean. And even if they don't succeed as a distributor of video games, if they have accomplished their stated compression breakthrough, others will certainly license the technology or mimic it.
And should OnLive succeed in providing a great gaming experience through their model, overcoming the 100-millisecond challenge with pricing that is reasonable for consumers, haven't they effectively proven that this cloud computing model is the future for personal computing? After all, video games are just software applications. In fact, the highest-end video games are particularly complex software applications requiring a great deal of computing power. If OnLive succeeds technically and commercially with the toughest of applications in video games, the model can certainly work with a word processor or spreadsheet.
Isn't it just a matter of time before we see significant improvements in the areas that would concern providers? Faster Internet to the home, improved bandwidth, specialized server farm components developed cheaply, improvements in compression... we'll see positive steps if not leaps in all these areas over the coming years, the combination of which will enable a completely new paradigm for executing applications.