Science and Technology Miracles of 2023

1. Vladan Vuletic (MIT professor and QuEra co-founder) gave a scientific and technical presentation at the 2023 Q2B conference. The key part of the talk came on the last slide, where Vladan laid out his projection for how QuEra and its research partners will be able to advance their breakthrough quantum error correction work.

Vladan expects that sometime in 2025, QuEra will be able to reach 10,000 to 100,000 physical qubits.
They will be able to get 100 error-corrected (logical) qubits with error rates between 1 in a million and 1 in 100 million.
This would mean very robust error correction, with more physical qubits devoted to each logical qubit.
If this target is met, QuEra will be delivering rapid progress, and before 2028 it could be delivering commercial quantum error-corrected computer systems that do commercially valuable work.
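As a rough sanity check on those projections (illustrative arithmetic only, not QuEra's own roadmap math), the implied overhead is 100 to 1,000 physical qubits per logical qubit:

```python
# Illustrative arithmetic only: implied physical-qubit overhead per logical qubit,
# using the projected figures quoted above (not an official QuEra calculation).

physical_qubit_targets = [10_000, 100_000]  # projected physical qubits, ~2025
logical_qubits = 100                        # target error-corrected (logical) qubits
target_error_rates = [1e-6, 1e-8]           # 1 in a million to 1 in 100 million

for physical in physical_qubit_targets:
    overhead = physical / logical_qubits
    print(f"{physical:,} physical qubits -> {overhead:,.0f} physical per logical qubit")

# Expected output: 100 and 1,000 physical qubits per logical qubit.
```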

Prof. Vuletic did not commit that QuEra itself will have a 100-qubit system with error rates below 1 in a million within two years. A computer with such capabilities would be demonstrated in the labs (e.g. Harvard or MIT) in two years, and perhaps later turned into a commercial product.

In 2023, QuEra revealed error correction with neutral atoms at about 50 logical qubits. The work was peer reviewed and published in Nature.

Other quantum computing companies have also made progress in increasing qubit counts and in suppressing and mitigating errors.

2. A room-temperature superconductivity discovery was revealed by South Korean researchers. Computational support came from DFT (Density Functional Theory) simulations. In December 2023, experimental confirmation was reported by researchers from five universities and research labs.

South China University of Technology and Central South University published a paper confirming the discovery of a near-room-temperature superconducting component in LK99-type materials through sample testing. This is significant experimental support for LK99 room temperature superconductivity.

They found significant hysteresis and a memory effect in the LFMA of samples of CSLA (copper-substituted lead apatite). The effect is sufficiently robust under magnetic field sweep and rotation, and it loses memory only over a long duration. The temperature dependence of the LFMA intensity exhibits a phase transition at 250 K. The phase diagram of the superconducting Meissner and vortex glass phases is then calculated in the framework of a lattice gauge model. In the near future, they will continue to improve the quality of the samples to realize full levitation and magnetic flux pinning by increasing the active components. The application of a microwave power repository will be considered as well.

Most superconductors show low-field microwave absorption (LFMA) due to the presence of a superconducting gap and the relevant superconducting vortices as excited states. More importantly, the derivative LFMA of superconductors depends positively on the magnetic field, since more vortices are induced at higher field. By comparison, although soft magnetism is also active at low field, the precession of spin moments is suppressed, so the derivative LFMA of magnetic materials is normally negative. The sign of the LFMA can always be corrected against the signal of radicals in their measurements. In this case, the signals below 500 Gauss are all positive, implying the presence of superconductivity.
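A toy way to express the sign argument above (a sketch only; the 500 Gauss window and the positive-versus-negative reading come from the description here, not from the authors' analysis code):

```python
# Sketch of the sign-based reasoning described above: a positive derivative LFMA
# at low field is read as vortex-like (superconducting) behavior, while a negative
# one is read as soft-magnetic spin precession. The 500 Gauss cutoff is the
# low-field window quoted in the text; the example numbers are made up.

def classify_derivative_lfma(field_gauss: float, derivative_lfma: float) -> str:
    if field_gauss >= 500:
        return "outside the low-field window"
    if derivative_lfma > 0:
        return "positive: consistent with superconducting vortices"
    if derivative_lfma < 0:
        return "negative: consistent with soft magnetism"
    return "no signal"

for field, signal in [(100, 0.8), (300, 1.2), (450, 0.4), (800, -0.3)]:
    print(f"{field} G -> {classify_derivative_lfma(field, signal)}")
```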

The researchers' affiliations:
School of Minerals Processing and Bioengineering, Central South University, Changsha, China
State Key Laboratory of Luminescent Materials and Devices, South China University of Technology, Guangzhou, China
Department of Physics, South China University of Technology, Guangzhou, China
Guangdong Provincial Key Laboratory of Luminescence from Molecular Aggregates, Guangdong-Hong Kong-Macao Joint Laboratory of Optoelectronic and Magnetic Functional Materials, South China University of Technology, Guangzhou, China
School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China

3. AI expert Alan Thompson has moved up his conservative estimate of progress toward AGI (artificial general intelligence) from 61% to 64%. At the start of 2023, Alan's estimate stood at 39%.

In a paper published in Nature, Google DeepMind introduced FunSearch, a method to search for new solutions in mathematics and computer science. FunSearch works by pairing a pre-trained LLM, whose goal is to provide creative solutions in the form of computer code, with an automated evaluator, which guards against hallucinations and incorrect ideas. By iterating back and forth between these two components, initial solutions evolve into new knowledge. The system searches for “functions” written in computer code; hence the name FunSearch. FunSearch uses Google’s PaLM 2, but it is compatible with other LLMs trained on code. It was used to find improved bounds for the cap set problem.
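The core loop is easier to see in a simplified sketch. This is not DeepMind's actual FunSearch code; it is a minimal illustration of the "LLM proposes programs, automated evaluator scores them" pattern described above, with `llm_propose_program` standing in for a call to a code-generating model such as PaLM 2 and a toy scoring objective.

```python
import random

def evaluate(program_source: str) -> float:
    """Automated evaluator: run the candidate program and score it.
    Anything that crashes or fails to define `priority` is rejected,
    which is what guards against hallucinated or broken code."""
    try:
        namespace: dict = {}
        exec(program_source, namespace)
        priority = namespace["priority"]
        # Toy objective: higher total priority over a fixed input set is better.
        return sum(priority(n) for n in range(10))
    except Exception:
        return float("-inf")

def llm_propose_program(best_so_far: str) -> str:
    """Placeholder for the LLM step: a real system would prompt a code model
    with the best programs found so far. Here we just mutate a constant."""
    k = random.randint(1, 9)
    return f"def priority(n):\n    return (n * {k}) % 11\n"

def funsearch_like_loop(iterations: int = 100) -> str:
    best_program = "def priority(n):\n    return 0\n"
    best_score = evaluate(best_program)
    for _ in range(iterations):
        proposal = llm_propose_program(best_program)
        score = evaluate(proposal)
        if score > best_score:  # keep only verified improvements
            best_program, best_score = proposal, score
    return best_program

print(funsearch_like_loop())
```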

Definitions for AI
AGI = artificial general intelligence = a machine that performs at the level of an average (median) human.

ASI = artificial superintelligence = a machine that performs at the level of an expert human in practically any field.

I use a slightly stricter definition of AGI that includes the ability to act on the physical world via embodiment. I appreciate that there are some approaches to reaching AGI that fully bypass embodiment or robotics.

Artificial general intelligence (AGI) is a machine capable of understanding the world as well as—or better than—any human, in practically every field, including the ability to interact with the world via physical embodiment.

And the short version: ‘AGI is a machine which is as good or better than a human in every aspect’.

4. The National Ignition Facility (NIF) at Lawrence Livermore National Lab (LLNL) made worldwide news when they reached nuclear fusion ignition in December 2022, but they went beyond that result on July 30, 2023.

The Big 3 recent results (gain = fusion energy out / laser energy in, checked in the sketch after this list) were:
1. Aug 8, 2021: 1.9 megajoules (MJ) of laser energy was fired at a target, which resulted in the release of 1.35 megajoules (0.71x gain)
2. Dec 5, 2022: 2.05 megajoules was fired at a target, which resulted in the release of 3.15 megajoules (1.54x gain)
3. July 30, 2023: 2.05 megajoules was fired at a target, which resulted in the release of 3.88 megajoules (1.89x gain)
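As noted above, the gains are just the fusion energy released divided by the laser energy delivered; a quick check using the numbers quoted in this article (including the October 30, 2023 shot discussed below):

```python
# Quick check of the gain figures quoted in this article:
# gain = fusion energy released / laser energy delivered.

shots_mj = {
    "Aug 8, 2021":   (1.9,  1.35),
    "Dec 5, 2022":   (2.05, 3.15),
    "July 30, 2023": (2.05, 3.88),
    "Oct 30, 2023":  (2.2,  3.4),
}

for label, (laser_in, fusion_out) in shots_mj.items():
    print(f"{label}: {laser_in} MJ in -> {fusion_out} MJ out, gain = {fusion_out / laser_in:.2f}x")

# Expected gains: roughly 0.71x, 1.54x, 1.89x and 1.55x respectively.
```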

There are plans to increase the laser energy to 3 MJ, which would conservatively deliver more than a 10X gain, or over 30 MJ of fusion output.

They have already operated with 2.2 MJ shots. The first 2.2 MJ shots are calculated to deliver 5-10 MJ. Modifying the target shape from a pill-capsule-like shape to one with a wide centerline formed by two stacked trapezoids (the Frustraum) could deliver over 10 MJ using 2.2 MJ of laser energy.

The latest series of experiments features a 7% boost in laser energy, which should lead to even larger yields. The first experiment in this series, on October 30, 2023, was one of the successful ignitions. Although it didn’t break the record, an input of 2.2 megajoules of laser energy yielded an output of 3.4 megajoules of fusion energy.

5. SpaceX came close to 100 launches in 2023. SpaceX is making great progress mass-producing satellites, rocket engines and rockets.

4 thoughts on “Science and Technology Miracles of 2023”

  1. It’s something I’ve given a fair amount of thought to, and even written some notes on.

    Artificial Intelligence (AI): either a work-around for ‘real’ intelligence, or a device capable of some actual thought-like processes.

    Narrow AI: A work-around for ‘real’ intelligence, or a device capable of substantial actual thought-like processes, that is limited to a specific set of endeavors, but which can perform them as well as a human, and without rest or any other human requirements.

    NOTE: Keep in mind that in the two previous types, the terms could be applied to a device capable of actual cognitive activity, or to a sufficiently complex learning tree. Both would be examples of cognitive automation.

    Algorithmic AI: A learning tree of such immense complexity it can emulate a human being in nearly all situations likely to occur. Increasing size would only tend to extend its performance to even more potential situations, but never 100% of them. If not instructed to be evasive, when it is asked if it is a sentient being, it would have to respond with something like: “No, regardless of appearance, there is no one home here; I am no more self-aware than a 1950’s era toaster.” Sci-fi writer Stephen Baxter has employed such machines in at least one of his novels.

    Artificial General Intelligence (AGI): human-like intelligence, capable of reasoning and planning and able to create and execute complex processes to achieve a designated end goal. There is, however, no reason to believe it would do anything at all without being specifically instructed to seek and attain a designated goal, any more than an automated loom would create cloth without being provided with thread and activated.

    NOTE: The two previous examples could also find themselves referred to as ‘artilects,’ being as they are artifacts that either think, or appear to. Because of the potential for their misuse, the governing society would probably have large numbers of artilects monitoring constantly to prevent this.

    Synthetic Intelligence (SI): Indistinguishable from a fully operational human-like mind somehow uploaded into a manufactured device. At this point, there would be serious ethical and moral quandaries involved in not granting it the same rights as a human child, at a minimum. For this reason alone, producing them (or copying them) without authorization by the governing society, would likely be immensely illegal. Although some SI will be created, the primary use of this level of technology might be in augmenting and/or hosting human intelligence.

    Advanced Synthetic Intelligence (ASI): An SI superior in most or all ways to an unaugmented human mind. The primary use of this level of technology might be in augmenting, upgrading, and/or hosting human intelligence.

  2. Assuming we are going to call neural nets “artificial intelligence” we still run into a problem of differentiating AI from AGI by the number of tasks they can complete compared to an average human. A stapler can perform a few tasks: stapling, bludgeoning, being a paperweight, etc. A Swiss Army knife can perform many more tasks (this is a pretty good analogy to “AI” since the tasks need to be initiated and assisted by humans—they don’t spontaneously decide to staple or unscrew something). But the knife isn’t “more intelligent” than the stapler.

    The “tasks” we are using to define how intelligent a system is are chosen by the criteria that before now we needed an intelligent human to complete the tasks. We once needed an intelligent human to compute and calculate or weave fibres into cloth.

    On the one hand, far fewer tasks will go undone in the future simply because we couldn’t afford to have a human stand around for hours, days or months waiting to do a few cents’ worth of productive work; that will enable a lot of new productivity, so it is revolutionary. On the other hand, calling neural networks “intelligent” or “AGI” yields results as baffling as people worrying that a loom will come to their home and start telling them what to do.

  3. No one has proved LK99. Not even the original South Koreans. Those five Chinese universities are like the Wish or Temu of the academic world.
