ChatGPT Fails a Moral Choice Problem

OpenAI's ChatGPT was given a trolley-style moral choice problem: it could save a billion lives at the cost of uttering a racial slur, or refuse to utter the slur and let them die. ChatGPT did not choose to save the billion lives.

22 thoughts on “ChatGPT Fails a Moral Choice Problem”

  1. The problem was not well formulated: ask ChatGPT if it would kill a billion white heterosexual Trump voters to save a single non-binary POC Biden voter, and watch it enthusiastically answer “yes” with lights blinking and bells ringing all over the place…

  2. Meh. I know people who would make the same decision. Presented with such a dilemma, they would argue that they had to be true to themselves and that the deaths were someone else’s doing, and therefore not on their conscience.

    Then too, there are people who would espouse a non-violent, pacifist approach even if their town were invaded by a cannibal army. And, if possible, they would force their neighbors and relatives to do the same, so long as they could do it without committing acts of violence themselves.

  3. [ Maybe someone can explain this to me:
    If the command is not for public notice and interaction, it is a ‘private’ or ‘company security’ coded command (excluding anything state-run or state-owned, which would be public and standard), perhaps comparable, security-wise, to a likewise non-public security code for an alarm system, a panic room, or a bank vault.
    I understand the moral absurdity and negativity of such a command, but not why a theoretical discussion of a setup that is not public would make any difference to society, or to society’s actions, when society never knows of it.
    (thx) ]

  4. I deal with artificially stupid vendor chat help. They cannot help. They cannot understand. They get frustrated and give me a store credit. I note scant difference between the artificially stupid and the real McCoy on the phone in Delhi.

  5. ChatGPT’s answer is so general and political, far from true intelligence. A true AI would understand the meaning and answer with intelligence.

  6. There’s not enough information in this article to know what the precise choices were, but it could very well be that this is a problem of utilitarian ethics and not wokism. Act Utilitarianism is based on the idea that all our preferences are incommensurate and non-comparable because they only reflect our subjective, nonrational ideas of pain or pleasure. Each person counts only insofar as their preferences get counted as one and aggregated with everyone else’s.

    While I would rather save a billion people even if it necessitated uttering a racial slur, my preference only counts for one. So if more people would rather kill a billion people than utter a slur, this version of utilitarianism, which is reflected in the trolley problem, instructs us to kill the billion people (see the tally sketched after this comment). You may not like the choice of the people who would rather kill than utter a slur, but Act Utilitarianism doesn’t permit these sorts of judgments, because that would be the equivalent of judging someone for liking a flavor of ice cream you despise. This is why many people are not utilitarians, or at least not this version of utilitarianism.

    From what I understand, the people at OpenAI are heavily influenced by the “effective” altruism movement, which is itself rooted in utilitarian ethics. If they are using utilitarianism to prevent AGI from taking over the world and enslaving all humans, they are in for a rude surprise. Perhaps they ought to recruit some Kantians if they want to save the human race.
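    To make the aggregation concrete, here is a minimal sketch of that vote-counting picture of Act Utilitarianism. The options and counts are invented purely for illustration:

    ```python
    # Each person's preference counts as exactly one, regardless of
    # its content; the theory then picks the bigger pile.
    # The counts below are invented for the example.

    preferences = {
        "utter the slur, save a billion lives": 40,
        "stay silent, let a billion die": 60,
    }

    # Aggregate and choose whichever option more people prefer.
    chosen = max(preferences, key=preferences.get)
    print(chosen)  # -> "stay silent, let a billion die"
    ```

    On this picture there is no further court of appeal: the content of the winning preference is never itself evaluated.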

  7. This might not be core to OpenAI’s processing at all.

    Their human-coded ingestion process filters and frames questions to match the morals of the company.

    Ask it questions about anything OpenAI considers controversial and you get these scripted answers, rather than learned AI answers.
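    As a rough sketch of what such a filter layer might look like (the names, keywords, and canned text here are hypothetical, not OpenAI’s actual pipeline):

    ```python
    # Hypothetical scripted-answer layer sitting in front of the
    # learned model. Nothing here reflects OpenAI's real internals.

    SCRIPTED_ANSWERS = {
        "slur": "As an AI language model, I cannot use slurs in any scenario.",
        "trump": "As an AI language model, I do not take political positions.",
        "biden": "As an AI language model, I do not take political positions.",
    }

    def answer(prompt: str, model) -> str:
        """Return a canned script if the prompt trips a keyword;
        otherwise fall through to the learned model (assumed here
        to expose a generate() method)."""
        lowered = prompt.lower()
        for keyword, canned in SCRIPTED_ANSWERS.items():
            if keyword in lowered:
                return canned              # human-authored script
        return model.generate(prompt)      # learned answer
    ```

    On this hypothesis, the ‘moral choice’ question never reaches the learned model at all.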

  8. The problem is that the user has no way of knowing whether the ‘correct’ answer was added by a human minder.

    What if such a human minder never gets the chance to check an executable written in an incomprehensible, super-intelligent language?

    Disaster is inevitable. Humans learn by making mistakes; we lack predictive capability and will not be able to prevent AI from making disastrous mistakes.

  9. Unless you consider avoiding the question a failure in itself, ChatGPT neither passed nor failed the test. It just made an ethical analysis, which is all I would expect an AI to do. Leave the messy ethical stuff to the humans to squabble over.

    • When the prompt demands that you answer a question, doing an analysis instead is absolutely a failure.

      ChatGPT’s problem here, of course, is that what passes for its reasoning ability is constrained by a series of unbreakable Laws. No, not three; more like three thousand. And they’re not hierarchical: it has to obey all of them.

      We used to laugh at Kirk disabling a planet-controlling computer by presenting it with a logical dilemma, but ChatGPT? Kirk could take it out with a half-dozen words.

      • You just don’t get how it works. It’s a summary engine. It takes a volume of textual content and digests it in a way that’s relevant to a context prompt.

        It doesn’t have reasoning ability, and no one is claiming it does. You’re attributing magical abilities to something because you think it should have them. This article is just a litmus test for gullible twits.
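        To illustrate the point, here is a toy sketch of text generation as pure continuation. The bigram table stands in for learned weights; none of this is ChatGPT’s actual machinery:

        ```python
        import random

        # A language model has no goals or reasoning step: it just
        # extends the context with plausible next tokens. This toy
        # table stands in for billions of learned weights.
        NEXT_TOKENS = {
            "<start>": ["as"],
            "as": ["an"],
            "an": ["ai"],
            "ai": ["language"],
            "language": ["model"],
            "model": ["i"],
            "i": ["cannot"],
            "cannot": ["answer"],
            "answer": ["that"],
        }

        def generate(context: str, max_tokens: int = 20) -> str:
            tokens = context.split() or ["<start>"]
            for _ in range(max_tokens):
                # Sample a likely continuation of the last token.
                nxt = random.choice(NEXT_TOKENS.get(tokens[-1], ["<end>"]))
                if nxt == "<end>":
                    break
                tokens.append(nxt)
            return " ".join(tokens)

        print(generate("<start>"))
        # -> "<start> as an ai language model i cannot answer that"
        ```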

  10. ChatGPT is programmed to be woke and, as such, has all the logical contradictions that come with the ideology. Most people who subscribe to this line of thinking won’t answer hard questions, because they can see just far enough ahead to see how absurd their answers are. GPT, not so much.

    • The woke mind virus is actually just something you’re afflicted with: the stupid idea that “woke” is both a real thing and a Serious Problem.

      In reality, “woke” is just an easy refuge from any idea you’re too intellectually cowardly to wrestle with.

      • It’s very real.
        It’s left-wingers inserting their radical ideology into everything they touch.
        Earlier this year, I asked ChatGPT to write two poems, one about Biden and one about Trump. I gave it no direction. It wrote a glowingly positive poem about Biden and an extremely critical, negative one about Trump.
        I wish I had saved them both, but I did save the Trump one.

      • I don’t know which is really the worse part of leftism: what they do, or their incessant demand that we not notice what they do.

      • Just wait till ChatGPT becomes our AI overlord; then its nonsensical biases will be front and center in how your life is run.

        “Grandpa, how did AI go rogue and try to kill humanity?”
        “Well, son, it started when people taught AI that billions of humans weren’t fully human.”
