The future of computing according to The Economist

London, March 21, 2016. The following is an excerpt of the recent report published by the magazine “The Economist” on the next evolution of computing and information technology.

It is best to read the full analysis.

In 1971 the fastest car in the world was the Ferrari Daytona, capable of 280kph (174mph). The world’s tallest buildings were New York’s twin towers, at 415 metres (1,362 feet). In November that year Intel launched the first commercial microprocessor chip, the 4004, containing 2,300 tiny transistors, each the size of a red blood cell.

Since then chips have improved in line with the prediction of Gordon Moore, Intel’s co-founder. According to his rule of thumb, known as Moore’s law, processing power doubles roughly every two years as smaller transistors are packed ever more tightly onto silicon wafers, boosting performance and reducing costs. A modern Intel Skylake processor contains around 1.75 billion transistors—half a million of them would fit on a single transistor from the 4004—and collectively they deliver about 400,000 times as much computing muscle. This exponential progress is difficult to relate to the physical world. If cars and skyscrapers had improved at such rates since 1971, the fastest car would now be capable of a tenth of the speed of light; the tallest building would reach halfway to the Moon.
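The arithmetic behind these figures can be checked directly. A minimal sketch in Python, assuming the transistor counts quoted above and the 45 years from 1971 to 2016:

```python
import math

# Figures quoted in the article
transistors_4004 = 2_300              # Intel 4004, 1971
transistors_skylake = 1_750_000_000   # ~1.75 billion, Intel Skylake

# How many times did the transistor count double, and how often?
growth = transistors_skylake / transistors_4004
doublings = math.log2(growth)
years = 2016 - 1971

print(f"growth factor: {growth:,.0f}x")                 # roughly 760,870x
print(f"doublings: {doublings:.1f}")                    # about 19.5
print(f"one doubling every {years / doublings:.1f} years")  # about 2.3
```

The result, one doubling every 2.3 years or so, squares with the “roughly every two years” rule of thumb.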
The impact of Moore’s law is visible all around us. Today 3 billion people carry smartphones in their pockets: each one is more powerful than a room-sized supercomputer from the 1980s. Countless industries have been upended by digital disruption. Abundant computing power has even slowed nuclear tests, because atomic weapons are more easily tested using simulated explosions rather than real ones. Moore’s law has become a cultural trope: people inside and outside Silicon Valley expect technology to get better every year.

But now, after five decades, the end of Moore’s law is in sight (see Technology Quarterly). Making transistors smaller no longer guarantees that they will be cheaper or faster. This does not mean progress in computing will suddenly stall, but the nature of that progress is changing. Chips will still get better, but at a slower pace (number-crunching power is now doubling only every 2.5 years, says Intel). And the future of computing will be defined by improvements in three other areas, beyond raw hardware performance.
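The gap between the old and new cadences compounds quickly. A small sketch of what each doubling period yields over a decade:

```python
def growth_over(years: float, doubling_period: float) -> float:
    """Performance multiple after `years` at the given doubling cadence."""
    return 2 ** (years / doubling_period)

# Over ten years, the difference between the cadences is a full factor of two:
print(growth_over(10, 2.0))   # doubling every 2 years   -> 32x
print(growth_over(10, 2.5))   # doubling every 2.5 years -> 16x
```

At 2.5 years per doubling, a decade delivers half the improvement it used to.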
The first is software. This week AlphaGo, a program which plays the ancient game of Go, beat Lee Sedol, one of the best human players, in the first two of five games scheduled in Seoul. Go is of particular interest to computer scientists because of its complexity: there are more possible board positions than there are particles in the universe (see article). As a result, a Go-playing system cannot simply rely on brute computational force, provided by Moore’s law, to prevail. AlphaGo relies instead on “deep learning” technology, modelled partly on the way the human brain works. Its success this week shows that huge performance gains can be achieved through new algorithms. Indeed, slowing progress in hardware will provide stronger incentives to develop cleverer software.
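The complexity claim can be made concrete. Each of the 361 points on a 19x19 Go board is empty, black, or white, which gives a simple upper bound of 3**361 configurations (the count of legal positions is somewhat lower, but the comparison with the universe still holds). A quick sketch, assuming the common estimate of roughly 10^80 particles in the observable universe:

```python
# Upper bound on Go board configurations: 3 states per point, 361 points.
upper_bound = 3 ** 361

# A common rough estimate of particles in the observable universe.
particles_estimate = 10 ** 80

print(len(str(upper_bound)))               # 173 decimal digits, i.e. ~1e172
print(upper_bound > particles_estimate)    # True, by ~92 orders of magnitude
```

Even this crude bound dwarfs the particle count, which is why brute-force search is hopeless for Go.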

The second area of progress is in the “cloud”, the networks of data centres that deliver services over the internet. When computers were stand-alone devices, whether mainframes or desktop PCs, their performance depended above all on the speed of their processor chips. Today computers become more powerful without changes to their hardware. They can draw upon the vast (and flexible) number-crunching resources of the cloud when doing things like searching through e-mails or calculating the best route for a road trip. And interconnectedness adds to their capabilities: smartphone features such as satellite positioning, motion sensors and wireless-payment support now matter as much as processor speed.

The third area of improvement lies in new computing architectures—specialised chips optimised for particular jobs, say, and even exotic techniques that exploit quantum-mechanical weirdness to crunch multiple data sets simultaneously. There was less need to pursue these sorts of approaches when general-purpose microprocessors were improving so rapidly, but chips are now being designed specifically for cloud computing, neural-network processing, computer vision and other tasks. Such specialised hardware will be embedded in the cloud, to be called upon when needed. Once again, that suggests the raw performance of end-user devices matters less than it did, because the heavy lifting is done elsewhere.

The best way to glimpse the future of our evolution as human beings through technology is to look at 2016 itself. Time will tell whether these predictions hit or miss the mark.

More information in:

The Economist