Effective Sagacity and effective AGI improvement in science and technology

Michael Anissimov has posted an interesting article, AI and Effective Sagacity, by Mitchell Howe:

I believe that a large part of the surprisingly common discord between IQ scores and societal significance can be explained by my simple theory of 'Effective Sagacity'.

The amount previously invested and currently spent in highest-level
thought combine to form one’s “Effective Sagacity.” In the end, this is the
*only* measurement of mental capacity an AI researcher ought to be
interested in.
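Howe leaves "combine" unspecified. Read literally, the simplest formalization (my assumption, not Howe's) is a weighted sum of past investment and current spending:

ES = a * (cumulative time invested in highest-level thought) + b * (current rate of highest-level thought)

with the weights a and b left open.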

I agree that the Effective Sagacity measure is what is relevant.

The most important part for me is:

the truly Sagacious AI could also effectively find its way out of this cul-de-sac of human thought. It could do so the same way outstanding scientists do today: by identifying the limits of current understanding and coming up with the right questions to ask in order to expand those limits. The AI could either come up with great experiments to advance human knowledge, or, more efficiently in the software field, create and perform experiments on its own. Even if the AI is *merely* capable of directing humans in bold new experiments, it has already done something truly significant.

I am concerned with actual productivity gains and with the amount and timing of technological improvement.

I think a rough drilldown is possible: which areas of science and technology would be most amenable to improvement with little or no construction of devices and experimentation, and how much of an impact even vastly superior Sagacity would have. How much recursive improvement would be possible before architectural limitations, or the problems and limitations of the initial imperfect AGI design, surface?
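As a rough illustration of that question (a toy sketch under made-up assumptions, not a model of any real AGI), here is a simulation in which capability compounds each cycle but the per-cycle gain shrinks as the system approaches a ceiling set by its initial architecture. The starting capability, gain rate, and ceiling are all invented numbers chosen only to show the qualitative shape.

```python
# Toy model of recursive self-improvement hitting an architectural ceiling.
# All numbers are illustrative assumptions, not estimates: the initial
# capability, per-cycle gain, and ceiling are made up to show how gains
# shrink as the limits of the initial imperfect design start to bind.

def run_cycles(capability=1.0, ceiling=50.0, gain=0.5, cycles=20):
    """Each cycle, capability grows by `gain`, scaled down by how close
    the system already is to the ceiling set by its initial architecture."""
    history = [capability]
    for _ in range(cycles):
        headroom = 1.0 - capability / ceiling   # fraction of ceiling left
        capability *= 1.0 + gain * headroom     # diminishing multiplicative gain
        history.append(capability)
    return history

if __name__ == "__main__":
    for i, c in enumerate(run_cycles()):
        print(f"cycle {i:2d}: capability {c:6.2f}")
```

In a model like this the early cycles look explosive, but growth flattens long before the ceiling; getting past the ceiling means changing the architecture itself, not just iterating on it.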

I think that for really amazing work and improvement, the AGI needs to arrange to get things built: better tools, better computers, and so on.