Amazon’s AI recruiting software was biased against women

Amazon’s new AI recruiting engine did not like women.

Amazon had a team working since 2014 on AI software to review job applicants’ resumes, with the aim of mechanizing the search for top talent.

The hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars – much like shoppers rate products on Amazon.

Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
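The mechanism is easy to reproduce in miniature. The sketch below is a toy word-scoring model, not Amazon’s actual system; the data and scoring rule are invented for illustration. It learns word scores from a hypothetical, male-skewed hiring history, and the penalty for a female-associated term emerges with no explicit rule about gender:

```python
from collections import Counter
from math import log

# Hypothetical toy history: (resume text, was the candidate hired?).
# The pool is mostly male, mirroring the male-dominated resumes Amazon
# trained on, so female-associated terms rarely co-occur with "hired".
history = [
    ("chess club captain python developer", True),
    ("python developer systems programmer", True),
    ("systems programmer chess club captain", True),
    ("womens chess club captain python developer", False),
    ("retail associate", False),
]

hired_words, rejected_words = Counter(), Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def word_score(word):
    # Smoothed log-odds of the word appearing in hired vs. rejected resumes.
    return log((hired_words[word] + 1) / (rejected_words[word] + 1))

def resume_score(text):
    return sum(word_score(w) for w in text.split())

# Two resumes identical except for one word: the skewed history alone
# produces the penalty; no rule about gender was ever written.
print(resume_score("chess club captain python developer"))
print(resume_score("womens chess club captain python developer"))
```

The second resume scores lower purely because “womens” appeared only among rejected candidates in the training set, which is the same dynamic Reuters described at much larger scale.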

The AI software penalized resumes that included the word “women,” as in “women’s chess club captain.” It also downgraded graduates of two all-women’s colleges, according to people familiar with the matter, who did not specify the names of the schools.

Amazon edited the programs to make them neutral to these particular terms.
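That kind of patch can be sketched as a simple term filter (hypothetical code, not Amazon’s): flagged words are stripped before scoring. Its weakness is that correlated proxy terms, such as the name of an all-women’s college, can carry the same signal straight past the filter.

```python
# Hypothetical sketch of term neutralization: strip flagged words before
# the model ever sees them. Correlated proxy terms are left untouched.
BLOCKED_TERMS = {"women", "womens"}

def neutralize(text):
    return " ".join(w for w in text.lower().split() if w not in BLOCKED_TERMS)

print(neutralize("Womens chess club captain"))  # chess club captain
```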

Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon ultimately abandoned the project.

Machine learning systems learn their behavior from large datasets of examples, so whatever patterns and biases those examples contain are carried into the trained model.

Current AI software generally cannot explain how it reaches its decisions. DARPA is working on explainable AI that can justify its choices; for example, an image classifier might report that it identified pictures of cats based on the pointed shape of the ears. Even with such explanations, errors can remain.
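One simple form such an explanation can take is a report of which features contributed most to a decision. The weights below are invented for illustration; a real model would learn them from data:

```python
# A crude "explanation": report which features contributed most to a score.
# The weights are hypothetical, standing in for a trained model's parameters.
weights = {"pointed_ears": 2.1, "whiskers": 1.7, "collar": 0.3, "leash": -1.9}

def classify(features):
    score = sum(weights.get(f, 0.0) for f in features)
    # Rank the features by the magnitude of their contribution.
    contributions = sorted(
        ((f, weights.get(f, 0.0)) for f in features),
        key=lambda kv: -abs(kv[1]),
    )
    label = "cat" if score > 0 else "not a cat"
    return label, contributions

label, why = classify(["pointed_ears", "whiskers", "leash"])
print(label)   # cat
print(why[0])  # ('pointed_ears', 2.1): the dominant reason
```

Even a correct-looking explanation like this can mask errors, since the reported features may be spurious correlates rather than real evidence.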

The flaws in this software show that many AI applications will need varying levels of human supervision and monitoring, along with improved error checking.

The bias against women was obvious and extreme, but other errors could be more subtle and could emerge as training data changes.

135 thoughts on “Amazon’s AI recruiting software was biased against women”

  1. If anything this is indicative of how our “future AI overlords” will work: they will be merciless accountants.

  2. “And one of the characteristics that define said word ‘better’ is, ‘without human bias’.”

    Machine learning is mostly crippled by selection bias.

    Show me that data it was trained on and i’ll show you what it can handle and what you will get as a result.

  3. Well, I do suppose we can think of the bright side of all this should we DO crack that problem:

    SkyNet, should it ever go online, will be so seriously fucked up that we might stand a chance. 🙂

  4. …perfect example of what I posted in replies above this: SJW crowd can’t deal with facts that piss all over their narrative, like how an unbiased AI figured out what the rest of us who don’t drink the SJW Kool-Aid have.

  5. …exactly. But that makes too much sense and brings up too many harsh truths the SJW crowd refuses to deal with. Plus, one can now be accuses of rape for doing so…w/o a shred of evidence.

  6. Hahahahahahahahah!

    AIs are supposed to make better decisions than us, right? And one of the characteristics that define said word ‘better’ is, ‘without human bias’.

    So it did so and guess what it found? Men gravitate towards tech and women do not, ergo concentrate on the male candidates.

    …but AI is also not PC…so….RAPE!

  7. Correct me if I’m wrong but doesn’t the data shows the following paragraph?

    –Men and women have the same average IQ. The unmatched X chromosome in men means they are more prone to extremes caused by recessive genes. So, most geniuses are male, but so are must of the people who didn’t get included in a data survey because they have really crippling learning difficulties and live in an institution or at home on disability with a caregiver relative. at IQs of 130-150 the male to female ratio is something like 2.5:1–

    So if you’re targeting the tech industry, and you want say programmers that tend to hit about a 31 on their ACT scores that means they’ll have an IQ of about 131 on a 15 point standard deviation scale. You feed in previous resumes, the pattern becomes pretty clear – all things being equal and you mostly care about technical proficiency (therefore IQ), you chose the male.

    Combine that with gender choice, where more genius women tend to chose medicine….

  8. Factor in garbage in garbage out models. If it relies on prior statistical data which has a ton of human biases in it its going to act like its creator. Adjusting for that will improve things but algorithmic / AI is a young field.

  9. Why would you imagine that programmers explicitly coded the software to penalize resumés with the word “women”? Paranoid much?
    The AI itself developed that negative association, due to past resumé data. There was no explicit attempt to make the software biased against women. What’s been shown is that AI will conform to existing realities and try to match the existing realities, thus countering against change.

  10. (Is sarcasm). Such a shock! Artificial Intelligence has many of the same biases as programers? What could *possibly* go wrong for religious people, people of different skin color, political bias, sex? Hats off to Amazon for repairing such bias – thank you!
    Google, China, Amazon, Russia, Apple and others trying to make strong, general AI that is self-aware may give us a super-intelligence without super wisdom and super compassion. AI is a reflection of the women and men and others who design, build, program, and maintain them. People cannot ever be truly neutral, so AI can’t ever be either. AI is doing much good now in many situations. Let us hope that experience will continue.

  11. The whole point in using A.I. is to discover statistical patterns, that humans can barely recognize or even can’t recognize and thus to ‘discriminate’ (oh the irony). Well.. the whole point in perception is the detection of differences, i.e. to ‘discriminate’. A.I. doesn’t give a flying fûck about political correctness, which is just the linguistic and semantic tyranny of the feudal aristocrats in professional politics. A.I. can neutrally see what doesn’t work or which hinders reaching a goal (e.g. selecting the best in their fields). What a kindergarten.. .

  12. Exactly. You build pattern finding software, it’s gonna find patterns. This is becoming a common problem with AI research: How do you program the AI not to notice patterns that *you yourself are committed to not noticing?*

  13. If “We” means popular media ideology which contaminates laws and makes a farce about social and professional interaction. (It’s also pro minorities and flawed (LGBTQ) people.)

  14. It just determined that women are an inferior workforce. And I can’t imagine all-women colleges have a nice ring to anyone hiring people…

  15. Ha ha Just a joke but maybe it is not that AI is biased against women but that we are unintentionally biased pro-women Just sayin

  16. The first time this happened it was funny. After it keeps happening time after time you have to ask if maybe the entire approach is wrong.
    Such as the approach of assuming that equal colour/race/religion/sex results is the correct answer.

  17. Typically, what would happen after they tweaked their AI to make it neutral to the term “women”, is that it will find another proxy word but will still “discriminate”.

    I heard of similar things happening with AI that “discriminated” against certain ethnicities. They would tweak it but it would find a way around it while still giving the same results.

    The thing is, AIs don’t care about being politically correct, they just give you results.

  18. Are they able to prove it is bias? What if the AI got it right? A good engineering mindset might even preclude one from participating in gender exclusionary activities.

  19. The fact that the AI algorithms themselves are unbiased certainly doesn’t mean that the models trained using those algorithms had to turn out unbiased.

    If they were shown all resumes received over 10 years, and trained to match the hiring selections made from that set, all (human) hiring biases over those 10 years would be modelled. If the data had included a preference for hiring women, the models would have turned out biased in that fashion.

  20. Sarah Connor: [in a motel room] Kyle, the women in your time, what are they like? Kyle Reese: Good fighters.
Comments are closed.