A few years ago, my uncle spoke to an engineer from Intel. He complained that Intel was lagging behind and the biggest problem was that the processor drained the battery too fast. The engineer assured him that with their next generation of chips, they would drastically reduce the problem.
Well, the next generation did come, and it did solve some problems. For instance, whereas in 2006 a laptop battery drained after a couple of hours, the current generation of machines from companies like ASUS can run 8-9 hours depending on how you use them. But there are still problems. Sometimes laptops heat up so much that you can't actually put them on your lap. My wife's HP laptop (a few years old now) needs to be placed on a cool surface; otherwise it heats up so much that the processor visibly slows down as it throttles itself.
So what happened? The amount of energy expended in an operation on a computer chip drops as the size of the transistors in it drops. You may have heard of Moore's Law, which observes that the number of transistors on a chip doubles roughly every two years; this has held for about 45 years. A related effect (known as Dennard scaling) is that the power consumed per operation (say, an addition) also drops as the transistors shrink. So computers should be lasting longer and heating up less, right? Not quite.
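As a back-of-the-envelope check, here is what that compounding works out to (the two-year doubling period and 45-year span are the rough figures quoted above, not measured data):

```python
# Rough, illustrative arithmetic: doubling about every two years
# for about 45 years compounds to a multi-million-fold increase.
years = 45
doubling_period = 2  # years per doubling, per the usual statement of Moore's Law

doublings = years / doubling_period   # ~22.5 doublings
transistor_growth = 2 ** doublings    # total growth factor

print(f"{transistor_growth:,.0f}x more transistors")  # ~5,931,642x
```

Roughly a six-million-fold increase in transistor count over a working lifetime, which is why the per-operation energy savings have been so dramatic.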
Then, in the late 1990s and early 2000s, there was an explosion in clock rates. We went from processor speeds measured in tens or hundreds of megahertz (MHz) to gigahertz (GHz) within a few years (remember the Pentium days?). This meant that whereas earlier computers did a few hundred million operations every second, the newer generation did a few billion. So even though the energy used (or heat generated) per operation kept halving, the number of operations per second more than doubled, eating up all the saved energy.
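To see why the savings evaporated, here is a toy calculation (the numbers are made up for illustration, not measurements from any real chip):

```python
# Toy numbers, purely illustrative: energy per operation halves,
# but the clock rate more than doubles.
energy_per_op_old = 1.0        # arbitrary energy units per operation
ops_per_sec_old = 500e6        # a ~500 MHz chip

energy_per_op_new = energy_per_op_old / 2    # each operation now costs half
ops_per_sec_new = ops_per_sec_old * 2.5      # but we do 2.5x as many per second

power_old = energy_per_op_old * ops_per_sec_old
power_new = energy_per_op_new * ops_per_sec_new

print(power_new / power_old)   # 1.25 -- total power draw went UP by 25%
```

Swap in any numbers you like: as long as the growth in operations per second outpaces the per-operation energy savings, the battery drains faster, not slower.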
Intel yesterday (Pentium 4) and today (i7)
After clock speeds plateaued in the mid-2000s, engineers turned to multi-core architectures, where multiple operations are carried out at the same time on separate cores, again increasing the total number of operations per second.
But this raises a basic question: why has the number of operations increased so much? Mainly because software has kept pace with hardware. Newer software releases assume they will run on more powerful processors and use all the available power to run fancier features.
I started my career working in an assembly language where I had to struggle to find every extra byte of memory. Today, I can allocate megabytes without having to think twice about it.
The problem is that most users will never see most of those features. For instance, a researcher from Microsoft once told me about all the cool audio-processing features they had built into Windows 10. But had I not met him, I would never have heard of those features, and even now I am not sure where I would use them.
Explore the options in your laptop's control panel sometime; you may be surprised by some of the things you can do to improve the performance of your speakers and microphone. You may as well; after all, you paid for all those features when you bought the machine.
A 1980s VCR
I remember when I was a kid in the 80s, we had a Japanese VCR with loads of cool buttons. I think we used seven of them over the entire life of the product: eject, play, stop, pause, rewind, fast forward and record. All the timers and multi-channel capability went unused. How much electronics and engineering effort went into those "key differentiator" features, I don't know.
With gigabytes of storage, free app downloads and high-speed wireless internet, we tend not to worry about the cost of software. But we are paying for every app we download: in battery life (drained by useless background activity), in reduced performance, and in the more expensive, higher-end hardware we buy to compensate.
So why has this trend lasted so long? Why don't companies save their R&D money and stop packing in bloated, unnecessary software? Mainly because companies will build anything they can charge you for, and the psychological cost of downgrading is too high.
Suppose tomorrow Microsoft announced a free, community-supported download of Windows 2000 and its associated Office products. Probably 90% of their customers don't need anything more. But how many would switch to the free option? Hardly anyone, I'd guess, even though the battery life of a machine running it could be measured in days. The look and feel would seem odd, and there would be the odd 5% of features that users have gotten so used to that moving back would feel like a trial.
You would also find that only older versions of third-party software still work for you, since most developers moved on to the latest platform long ago. So no more bells and whistles.
I was quite happy using my 2013 Nexus 7 tablet to access the internet and play games until newer versions of Android made the machine too slow to use.
So the trend continues: tech companies will keep making improvements, only a small fraction of which will be useful to any given user, and we will keep paying for the rest even though we will never use them.