Amazon’s AI recruiting software was biased against women

Amazon’s new AI recruiting engine did not like women.

Amazon had a team working since 2014 on AI software to review job applicants’ resumes, with the aim of mechanizing the search for top talent.

The hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon.

Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
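The mechanism can be sketched with a toy model (all data and tokens below are invented for illustration, not Amazon’s actual system): score each resume token by the log-odds that it appeared in previously hired candidates’ resumes. When the historical pool skews male, tokens correlated with women’s resumes inherit a penalty regardless of merit.

```python
from collections import Counter
from math import log

# Invented training data: (resume text, 1 = hired in the past, 0 = not).
# The historical pool skews male, so tokens from women's resumes rarely
# co-occur with the "hired" label.
past_resumes = [
    ("java linux kernel systems", 1),
    ("python distributed systems java", 1),
    ("linux java chess club", 1),
    ("womens chess club captain python", 0),
    ("womens college java linux", 0),
]

def token_scores(data, smoothing=1.0):
    """Log-odds of 'hired' for each token, with add-one smoothing."""
    hired, rejected = Counter(), Counter()
    for text, label in data:
        (hired if label else rejected).update(text.split())
    return {
        t: log((hired[t] + smoothing) / (rejected[t] + smoothing))
        for t in set(hired) | set(rejected)
    }

scores = token_scores(past_resumes)
# The model learned nothing about ability -- only that "womens" co-occurred
# with rejection in the skewed history, so it penalizes the token.
assert scores["womens"] < 0 and scores["linux"] > 0
```

Nothing in this sketch encodes gender explicitly; the penalty emerges purely from the label skew in the history, which is exactly the failure mode reported at Amazon.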

The AI software penalized resumes that included the word “women,” as in “women’s chess club captain.” It also downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms.
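Term-level edits like this are a shallow fix. In a toy sketch (hypothetical data; “smithfield” is a made-up school name), blocking the explicit token leaves correlated proxy tokens carrying the same negative signal:

```python
from collections import Counter
from math import log

# Hypothetical skewed history: (resume text, 1 = hired, 0 = not).
history = [
    ("java linux systems", 1),
    ("python kernel java", 1),
    ("womens smithfield college python", 0),  # "smithfield": invented school
    ("womens smithfield college linux", 0),
]

def log_odds(data, blocked=()):
    """Token log-odds of 'hired', with some tokens explicitly removed."""
    hired, rejected = Counter(), Counter()
    for text, label in data:
        tokens = [t for t in text.split() if t not in blocked]
        (hired if label else rejected).update(tokens)
    return {t: log((hired[t] + 1) / (rejected[t] + 1))
            for t in set(hired) | set(rejected)}

# Editing out the explicit term...
neutralized = log_odds(history, blocked={"womens"})
assert "womens" not in neutralized
# ...leaves proxy tokens that carry the same signal.
assert neutralized["smithfield"] < 0 and neutralized["college"] < 0
```

The bias lives in the label correlations, not in any single word, so removing one term just shifts the weight onto its proxies.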

Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon ultimately shut the project down.

Machine learning systems learn patterns from large datasets of examples.

Current AI software generally cannot explain how it reaches its decisions. DARPA is working on AI that can report why it made a choice: an image classifier, for example, might identify a picture as a cat based on the pointed shape of the ears. Even with such explanations, errors can remain.
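In the simplest case, “explaining a choice” has a precise form: a linear scorer decomposes exactly into per-feature contributions. The weights below are invented for illustration; deep networks admit no such clean decomposition, which is part of why DARPA’s explainability work is hard.

```python
# Invented weights for a toy linear "is this a cat?" scorer.
weights = {"pointed_ears": 2.0, "whiskers": 1.5, "barks": -3.0}

def score_with_explanation(features):
    """Return the total score plus each feature's exact contribution."""
    contributions = {f: weights.get(f, 0.0) for f in features}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(["pointed_ears", "whiskers"])
assert total == 3.5
# The explanation: pointed ears contributed most to the decision.
assert max(why, key=why.get) == "pointed_ears"
```

A linear model can always answer “which feature drove this score?”; the open research problem is extracting equally faithful answers from models with millions of entangled parameters.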

The flaws in this software show that many AI systems will need human supervision, ongoing monitoring, and improved error checking.

The gender-bias error was obvious and extreme, but other errors could be subtler and could emerge as training data changes.

135 thoughts on “Amazon’s AI recruiting software was biased against women”

  1. “And one of the characteristics that define said word ‘better’ is, ‘without human bias’.” Machine learning is mostly crippled by selection bias. Show me the data it was trained on and I’ll show you what it can handle and what you will get as a result.

  2. Well, I do suppose we can think of the bright side of all this should we DO crack that problem: SkyNet, should it ever go online, will be so seriously fucked up that we might stand a chance. 🙂

  3. …perfect example of what I posted in replies above this: SJW crowd can’t deal with facts that piss all over their narrative, like how an unbiased AI figured out what the rest of us who don’t drink the SJW Kool-Aid have.

  4. …exactly. But that makes too much sense and brings up too many harsh truths the SJW crowd refuses to deal with. Plus, one can now be accused of rape for doing so…w/o a shred of evidence.

  5. Hahahahahahahahah! AIs are supposed to make better decisions than us, right? And one of the characteristics that define said word ‘better’ is ‘without human bias’. So it did so and guess what it found? Men gravitate towards tech and women do not, ergo concentrate on the male candidates. …but AI is also not PC…so….RAPE!

  6. Correct me if I’m wrong, but doesn’t the data show the following? –Men and women have the same average IQ. The unmatched X chromosome in men means they are more prone to extremes caused by recessive genes. So, most geniuses are male, but so are most of the people who didn’t get included in a data survey because they have really crippling learning difficulties and live in an institution or at home on disability with a caregiver relative. At IQs of 130-150 the male to female ratio is something like 2.5:1– So if you’re targeting the tech industry, and you want, say, programmers that tend to hit about a 31 on their ACT scores, that means they’ll have an IQ of about 131 on a 15-point standard deviation scale. You feed in previous resumes, the pattern becomes pretty clear: all things being equal and you mostly care about technical proficiency (therefore IQ), you choose the male. Combine that with gender choice, where more genius women tend to choose medicine….

  7. Factor in garbage-in-garbage-out models. If it relies on prior statistical data which has a ton of human biases in it, it’s going to act like its creator. Adjusting for that will improve things, but algorithmic/AI is a young field.

  8. Why would you imagine that programmers explicitly coded the software to penalize resumés with the word “women”? Paranoid much? The AI itself developed that negative association, due to past resumé data. There was no explicit attempt to make the software biased against women. What’s been shown is that AI will conform to existing realities and try to match the existing realities, thus countering against change.

  9. (Is sarcasm.) Such a shock! Artificial Intelligence has many of the same biases as programmers? What could *possibly* go wrong for religious people, people of different skin color, political bias, sex? Hats off to Amazon for repairing such bias – thank you! Google, China, Amazon, Russia, Apple and others trying to make strong, general AI that is self-aware may give us a super-intelligence without super wisdom and super compassion. AI is a reflection of the women and men and others who design, build, program, and maintain them. People cannot ever be truly neutral, so AI can’t ever be either. AI is doing much good now in many situations. Let us hope that experience will continue.

  10. The whole point in using A.I. is to discover statistical patterns that humans can barely recognize or even can’t recognize, and thus to ‘discriminate’ (oh the irony). Well… the whole point in perception is the detection of differences, i.e. to ‘discriminate’. A.I. doesn’t give a flying fûck about political correctness, which is just the linguistic and semantic tyranny of the feudal aristocrats in professional politics. A.I. can neutrally see what doesn’t work or what hinders reaching a goal (e.g. selecting the best in their fields). What a kindergarten…

  11. Exactly. You build pattern-finding software, it’s gonna find patterns. This is becoming a common problem with AI research: how do you program the AI not to notice patterns that *you yourself are committed to not noticing?*

  12. If “We” means popular media ideology which contaminates laws and makes a farce about social and professional interaction. (It’s also pro minorities and flawed (LGBTQ) people.)

  13. It just determined that women are an inferior workforce. And I can’t imagine all-women colleges have a nice ring to anyone hiring people…

  14. Ha ha. Just a joke, but maybe it is not that AI is biased against women but that we are unintentionally biased pro-women. Just sayin’.

  15. The first time this happened it was funny. After it keeps happening time after time, you have to ask if maybe the entire approach is wrong. Such as the approach of assuming that equal colour/race/religion/sex results is the correct answer.

  16. Typically, what would happen after they tweaked their AI to make it neutral to the term “women” is that it will find another proxy word but will still “discriminate”. I heard of similar things happening with AI that “discriminated” against certain ethnicities. They would tweak it but it would find a way around it while still giving the same results. The thing is, AIs don’t care about being politically correct, they just give you results.

  17. Are they able to prove it is bias? What if the AI got it right? A good engineering mindset might even preclude one from participating in gender-exclusionary activities.

  18. The fact that the AI algorithms themselves are unbiased certainly doesn’t mean that the models trained using those algorithms had to turn out unbiased. If they were shown all resumes received over 10 years, and trained to match the hiring selections made from that set, all (human) hiring biases over those 10 years would be modelled. If the data had included a preference for hiring women, the models would have turned out biased in that fashion.
