AI is Accelerating from Pretty Impressive AI to Some Mix and Range of Human and Superhuman Capabilities

In 2020, I believed it would be very hard for a small group to build a system that increases programmer productivity and then iterates on improving that productivity-boosting software and hardware. I was wrong. Generative AI and GitHub Copilot are already boosting programmer productivity, and this loop of improving programmers with AI and improving AI with AI is the heart of the accelerating cycle of AI and programming improvement.

We are accelerating from pretty good AI, still inferior to or a primitive form of AGI, toward some mix of human-level and superhuman AI capabilities across a range of tasks.

The Stanford Alpaca project has shown that it will be easy and cheap to replicate large language model AI.

14 thoughts on “AI is Accelerating from Pretty Impressive AI to Some Mix and Range of Human and Superhuman Capabilities”

  1. Say that they are correct about the danger of said AI: how would a global, absolute ban on research, development, and deployment even in theory be enforced or enforceable? Even in the US and the Western world, to say nothing of Russia, China, etc.? It would be like unilateral nuclear disarmament with no inspections: no way of knowing whether the “other side” was following suit, with potentially catastrophic consequences if we “AI disarmed” while our adversaries didn’t.

    • Good point.

      We will probably see a lot of fear mongering about AI from pro-China, pro-Russia and other propaganda outlets, just as many of the anti-fracking voices were funded by Russia, the UAE and others. If you can’t compete at something, convince the victors to give up.

  2. I tried ChatGPT 3.5 to help me create Python functions using libraries related to geographic information systems (ArcGIS). 95% of the answers were wrong and contained completely imaginary methods or parameters. I don’t know if ChatGPT 4 is better, but at the moment ChatGPT 3.5 is really not useful for a programmer who wants to use external libraries rather than just simple functions from a language.

    • It’s also true that I only ask ChatGPT the difficult questions that I can’t solve right away. For easy tasks I don’t need help, and the API documentation is more than enough.
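A cheap defense against invented methods like the ones described above is to check, before running generated code, that each suggested attribute actually exists on the object or module. A minimal sketch using Python introspection (the `math` module stands in for any library, and `hypotenuse` is a deliberately fake method name):

```python
import math

def method_exists(obj, name):
    """Return True if `name` is a callable attribute of `obj`."""
    return callable(getattr(obj, name, None))

# A real method passes; a plausible-sounding but nonexistent one fails.
print(method_exists(math, "sqrt"))        # True
print(method_exists(math, "hypotenuse"))  # False: invented name
```

This won’t catch wrong parameters or logic, but it filters out fully hallucinated API names before they reach production code.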

  3. Wolfram Research released a plugin for GPT that gives it the ability to do real-world computation. I wonder if the Cyc project is planning a similar plugin to give GPT the ability to reason logically. Also, would it be possible to have GPT converse with itself and train on the results?
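The self-conversation idea can be sketched as a simple loop in which the model’s output becomes the next prompt; the resulting transcript could then be used as training data. The `generate` function below is a stand-in, not a real model or API call:

```python
def generate(prompt):
    # Stand-in for a real model call (e.g., an API request); echoes for demo.
    return f"response to: {prompt}"

def self_dialogue(seed, turns=3):
    """Feed the model's own output back as the next prompt, collecting the transcript."""
    transcript = [seed]
    for _ in range(turns):
        transcript.append(generate(transcript[-1]))
    return transcript

transcript = self_dialogue("What is logic?")
print(len(transcript))  # 4: the seed plus three generated turns
```

In practice this is close to what “self-play” or self-instruct pipelines do, though real systems add filtering so the model does not train on its own mistakes.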

  4. As a professional programmer for 30+ years, I’ve seen AI provide good capabilities where the human can prompt the computer to generate code that solves a very specific problem (“generate Python code that finds the average of a list of numbers”), but it generates bad or misleading code when given a more general problem. One particular example was when I asked it to generate code that handled shard splitting in AWS Kinesis, which is not a hard problem, but can be tricky.
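For reference, the trivial prompt quoted above is the kind of request current models handle reliably; the expected answer is only a few lines (this sketch is mine, not a model’s output):

```python
def average(numbers):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not numbers:
        raise ValueError("cannot average an empty list")
    return sum(numbers) / len(numbers)

print(average([2, 4, 6]))  # 4.0
```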

    After some back-and-forth, it cited this as one of my options:

    You can use the Streams Management Library (SML), which is a library that provides a higher-level interface for working with Amazon Kinesis streams. The SML simplifies the process of reading from a stream by abstracting away the details of shard management, so you don’t have to worry about getting shard IDs or handling shard splits.

    I hadn’t heard about this library before, so I asked it how I should use the library. Its response:

    The Streams Management Library (SML) is a library that provides a higher-level interface for working with Amazon Kinesis streams. It simplifies the process of reading from a stream by abstracting away the details of shard management, so you don’t have to worry about getting shard IDs or handling shard splits.

    To use the SML in Python, you will need to install the aws-sml library using pip. You can do this by running the following command:

    pip install aws-sml

    Once the library is installed, you can use it to read data from a stream as follows:

    import sml

    # Create a Streams client
    client = sml.Streams(
        region_name='us-east-1',
        access_key_id='ACCESS_KEY_ID',
        secret_access_key='SECRET_ACCESS_KEY'
    )

    # Set the stream name
    stream_name = 'my-stream'

    # Create a stream reader to read data from the stream
    reader = client.reader(stream_name)

    # Read data from the stream
    for record in reader:
        data = record['Data']
        # Do something with the data
    This code will create a stream reader for the stream with the specified name and read data from the stream one record at a time. The stream reader will automatically handle shard splits and continue reading data from the stream as long as it is running.

    I hope this helps! Let me know if you have any questions or need further assistance.

    That looked remarkably specific. So I googled around, but couldn’t find the library in question.

    I asked ChatGPT for a reference. Its response:

    The Streams Management Library (SML) is an open-source library that provides a higher-level interface for working with Amazon Kinesis streams. It is available on GitHub at the following link:

    https://github.com/aws/aws-sml

    You can find more information about the SML, including installation instructions and usage examples, in the README file on the GitHub repository.

    I hope this helps! Let me know if you have any questions or need further assistance.

    This webpage does not exist. The repository doesn’t exist, nor does it appear to have ever existed. There’s no reference anywhere in any Internet discussion that I could find to this library.

    As far as I can tell, it’s entirely a figment of ChatGPT’s imagination!

    But it sure sounded confident.
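For context, the real problem the invented library claims to solve is mostly one of ordering: after a split, a shard’s children must not be read before the parent shard is exhausted. The core of that can be sketched as a dependency ordering over shard metadata; the dict shapes below mirror a Kinesis `ListShards` response, and the shard IDs are illustrative:

```python
def read_order(shards):
    """Order shards so every parent precedes its children.

    `shards` is a list of dicts with 'ShardId' and optional 'ParentShardId',
    mirroring the shape of a Kinesis ListShards response.
    """
    ids = {s["ShardId"] for s in shards}
    ordered, emitted = [], set()
    remaining = list(shards)
    while remaining:
        progressed = False
        for s in list(remaining):
            parent = s.get("ParentShardId")
            # A parent outside the listing (already trimmed) imposes no ordering.
            if parent is None or parent not in ids or parent in emitted:
                ordered.append(s["ShardId"])
                emitted.add(s["ShardId"])
                remaining.remove(s)
                progressed = True
        if not progressed:
            raise ValueError("cycle in shard parent links")
    return ordered

# After one split: shard-001 split into shard-002 and shard-003.
shards = [
    {"ShardId": "shard-002", "ParentShardId": "shard-001"},
    {"ShardId": "shard-001"},
    {"ShardId": "shard-003", "ParentShardId": "shard-001"},
]
print(read_order(shards))  # shard-001 comes before both children
```

The real answer to the commenter’s question is the Kinesis Client Library (KCL) or hand-rolled boto3 code; the point of the sketch is only that the logic is tractable without any mystery library.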

    • With a similar background, I think this is a “high-probability” sequence of words and sentences for the given topic, based on the samples ChatGPT was trained on.
      The training data had similar answers to similar questions, and ChatGPT adjusted its response to your question quite cleverly. Unfortunately, in this case the “thinking”, the deduction, stopped too soon, at an imaginary state: a non-existent library and a description of its usage.
      The ChatGPT system was not (is not?) built to check whether the response it is creating actually holds up.
      Maybe the next version will be capable of not only writing the requirements for a new library but also writing the library itself.
      I’m not sure whether the generated code would fulfill the requirements.
      Currently it generates code that looks good but is often faulty. Or so I’ve heard.

    • There is a small possibility that the link existed but was deleted; the dataset the network was built on is probably older.

      Most likely, machine learning just combined some things together and the result was a semantic error: it looks true (the syntax) and presents itself as true, but the meaning behind it is completely false and the logic is flawed.

    • It is just machine learning, not true artificial intelligence. An AI would understand the meaning and solve the problem appropriately. For now it is just statistics: combining things together, finding how others solved it in some database, perhaps making some connections. I don’t know; I am not an expert.

    • GPT-X stores everything as tokens mapped to word segments. So it may have found two partial solutions and spliced them together; however, it also spliced your reference source URL. I’ve seen this several times when researching topics with GPT-4, so always asking for reference sources makes perfect sense: when they 404, you know it’s clutching at straws, or hallucinating entirely.
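That splicing failure mode can be illustrated with a toy sketch: recombining surface fragments of real-sounding package names yields plausible but nonexistent ones, and a model working at the token level has no built-in way to tell the difference. All fragments below are illustrative:

```python
import itertools

# Fragments resembling pieces of real AWS-related package names (illustrative only)
prefixes = ["aws-", "boto", "kinesis-"]
suffixes = ["sml", "sdk", "client"]

# A model recombining surface fragments can emit any of these,
# regardless of whether the package actually exists on PyPI or GitHub.
candidates = ["".join(pair) for pair in itertools.product(prefixes, suffixes)]
print("aws-sml" in candidates)  # the hallucinated name is a plausible recombination
```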

    • ChatGPT is trained, like a dog, on what is written on the web to answer with plausible words, but it is not real AI, able to see and understand mistakes that are not flagged on the web.
      It cannot see serious errors in a program, because it is only an interface presenting the information on the web. It was trained, dog-like, by a very large number of real, underpaid human workers in Kenya, Somalia or India.
