LTE Advanced deployment is expected to start in 2013 in South Korea, with Norway, Hong Kong, and Japan also leading the way to faster communication speeds.
A shift to cognitive radio and dynamic spectrum allocation would enable fuller utilization of the available local spectrum. Most of the wireless spectrum sits idle at any given moment; if devices could expand into licensed spectrum that no one is actually using at the time, communication speeds could go up 10-100 times.
Cognitive radio is a paradigm for wireless communication in which either a network or a wireless node changes its transmission or reception parameters to communicate efficiently while avoiding interference with licensed or unlicensed users. This alteration of parameters is based on active monitoring of several factors in the external and internal radio environment, such as the radio frequency spectrum, user behaviour and network state. The first phone call over a cognitive radio network was made on Monday 11 January 2010 at the Centre for Wireless Communications at the University of Oulu, using CWC's cognitive radio network CRAMNET (Cognitive Radio Assisted Mobile Ad Hoc Network), which was developed solely by CWC researchers.
Regulatory bodies in various countries (including the Federal Communications Commission in the United States, and Ofcom in the United Kingdom) found that most of the radio frequency spectrum was inefficiently utilized. For example, cellular network bands are overloaded in most parts of the world, but many other frequency bands, such as military, amateur radio and paging frequencies are not. Independent studies performed in some countries confirmed that observation and concluded that spectrum utilization depends strongly on time and place.
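The spectrum-sensing step at the heart of cognitive radio can be sketched with a simple energy detector: measure the energy in a band and compare it against the known noise floor to decide whether a licensed user is transmitting. This is a minimal illustration with made-up signal and threshold values, not a production sensing algorithm.

```python
import math
import random

random.seed(0)

def energy_detect(samples, noise_power, threshold_factor=2.0):
    """Crude spectrum sensing: flag a band as occupied when its
    average sample energy clearly exceeds the known noise floor."""
    energy = sum(x * x for x in samples) / len(samples)
    return energy > threshold_factor * noise_power

noise_power = 1.0
n = 4096

# A band holding only receiver noise: should look free.
noise_only = [random.gauss(0.0, math.sqrt(noise_power)) for _ in range(n)]

# The same band while a strong licensed signal is transmitting.
busy = [x + 3.0 * math.sin(2 * math.pi * 0.05 * k)
        for k, x in enumerate(noise_only)]

print(energy_detect(noise_only, noise_power))  # False: band is free
print(energy_detect(busy, noise_power))        # True: primary user present
```

Real detectors must also handle noise-power uncertainty and weak signals, which is why standards work on cognitive radio favours cooperative sensing across multiple nodes.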
More cost-efficient optical fiber architectures and continued deployment of last-mile optical fiber should eventually enable full optical network deployment at multi-gigabit to terabit-per-second communication speeds.
Exaflop (10^18 operations per second) class supercomputers should arrive by 2015-2017. Distributed computing systems such as SETI@home should achieve exaflop levels around 2012-2013.
Memristors will help with faster memory and computing, and possibly with AI, because a memristor appears to be a better analog model of a synapse. Two memristors can serve as an analog simulation of one synapse, so at roughly 10^14 synapses in the human brain, about 200 trillion memristors would make a human-level synapse system. Memristors can be scaled down to 1-5 nanometers in the form of nanowires, so hundreds of trillions of memristors could be packed compactly.
Memristors could be the key to making analog synapse networks at human-brain scale, or simply a way to build a near-nanoscale 3D computing structure with localized computing and memory.
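The memristor-as-synapse idea can be sketched with the HP linear ion-drift memristor model: applied voltage moves a doped-region boundary, which changes the device's resistance, and that resistance (conductance) persists as a stored "synaptic weight". All parameter values below are illustrative assumptions, not measurements of a real device.

```python
# Sketch of a memristor as an analog synapse (HP linear ion-drift model).
R_ON = 100.0     # ohm, resistance when fully doped (assumed)
R_OFF = 16000.0  # ohm, resistance when undoped (assumed)
D = 10e-9        # m, device thickness, nanoscale
MU = 1e-14       # m^2/(V*s), dopant drift mobility (assumed)

def step(w, v, dt):
    """Advance the doped-region width w under voltage v for dt seconds."""
    R = R_ON * (w / D) + R_OFF * (1.0 - w / D)  # total resistance
    i = v / R                                   # current through the device
    w = w + MU * (R_ON / D) * i * dt            # linear boundary drift
    return min(max(w, 0.0), D), R               # clamp w inside the device

# Repeated positive pulses "potentiate" the device: resistance falls,
# i.e. the synaptic weight (conductance) grows, and the new state
# persists when the drive is removed - memory and compute in one element.
w = 0.1 * D
for _ in range(1000):
    w, R = step(w, 1.0, 1e-3)
print(R)  # resistance has fallen from ~14.4 kohm toward R_ON
```

Pairing two such devices lets a circuit represent both positive and negative weights, which is the basis of the two-memristors-per-synapse count above.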
Memristors and optical computing could take us to the limits of irreversible computing and help us take advantage of reversible computing.
Irreversible computing seems to be limited to about 3.5 × 10^20 bit operations per joule (i.e., operations per second per watt). Physicists have analyzed a wide variety of proposed reversible device technologies, with theoretical power-performance up to 10-12 orders of magnitude better than today's CMOS. A likely minimum reversible computing capability for an advanced civilization is 10^29 operations per watt.
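The 3.5 × 10^20 figure follows directly from the Landauer limit: erasing one bit must dissipate at least kT ln 2 joules, so at room temperature the reciprocal bounds irreversible bit operations per joule. A quick check of the arithmetic:

```python
import math

# Landauer limit: erasing one bit dissipates at least k*T*ln(2) joules.
K_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, room temperature

energy_per_bit = K_B * T * math.log(2)   # ~2.87e-21 J per erased bit
ops_per_joule = 1.0 / energy_per_bit     # ~3.5e20 ops/J = ops/s per watt
print(f"{ops_per_joule:.2e}")            # ~3.48e+20
```

Reversible logic sidesteps this bound because, in principle, it erases no information, which is why its theoretical power-performance can be many orders of magnitude better.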
Yottaflop-scale computing (10^24 operations per second) seems reachable using memristors together with highly efficient optical computing and on-chip communication. Memristors could also be used for three-dimensional computing systems.
Scott Aaronson (MIT) gave new evidence that quantum computers, even rudimentary ones built entirely out of linear-optical elements, cannot be efficiently simulated by classical computers. Such quantum computers could do calculations beyond classical computers.
In particular, we define a model of computation in which identical photons are generated, sent through a linear-optical network, then nonadaptively measured to count the number of photons in each mode. This model is not known or believed to be universal for quantum computation, and indeed, we discuss the prospects for realizing the model using current technology. On the other hand, we prove that the model is able to solve sampling problems and search problems that are classically intractable under plausible assumptions.
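The classical hardness in this model traces back to the matrix permanent: each photon-counting outcome has probability proportional to |Perm(A)|^2 for a submatrix A of the interferometer's unitary, and unlike the determinant, the permanent is #P-hard, with the best known exact classical algorithms (such as Ryser's inclusion-exclusion formula) still taking exponential time. A minimal sketch of that formula:

```python
from itertools import combinations

def permanent(M):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula. No known classical algorithm improves on this O(2^n)
    scaling by more than polynomial factors."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 1], [1, 1]]))  # 2.0 (sums over the 2! permutations)
print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10.0
```

A linear-optical device effectively samples from a distribution defined by these permanents, so an efficient classical simulation would have surprising complexity-theoretic consequences.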