
Gender Bias: It Is Worse Than You Think

March 28, 2017

Science meets bias and diversity

Deborah Belle is a psychology professor at Boston University (BU) who is interested in gender differences in social behavior. She has reported a shocking result about bias.

Today I thought I would discuss the issue of gender bias and also the related issue of the advantages of diversity.

Lately at Tech we have had a long email discussion on implicit bias and how we might do a better job of avoiding it in the future. My usual inclination is to think about such issues and see if there is some science behind our assumptions. One colleague stated:

The importance of diversity is beyond reasonable doubt, isn’t it?

I agree. But I am always looking for “proof.”

Do not get me wrong. I have always been for diversity. I helped hire the first female assistant professor to engineering at Princeton decades ago. And I have always felt that it is important to have more diversity in all aspects of computer science. But is there some science behind this belief? Or is it just axiomatic—something that we believe and needs no argument—that it is “beyond reasonable doubt?”

This is how I found Deborah Belle, while looking on the web for “proof.” I will just quote the BU Today article on her work:

Here’s an old riddle. If you haven’t heard it, give yourself time to answer before reading past this paragraph: a father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital; just as he’s about to go under the knife, the surgeon says, “I can’t operate—that boy is my son!” Explain …

If you guessed that the surgeon is the boy’s gay, second father, you get a point for enlightenment… But did you also guess the surgeon could be the boy’s mother? If not, you’re part of a surprising majority.

In research conducted by Mikaela Wapman […] and Deborah Belle […], even young people and self-described feminists tended to overlook the possibility that the surgeon in the riddle was a she. The researchers ran the riddle by two groups: 197 BU psychology students and 103 children, ages 7 to 17, from Brookline summer camps.

In both groups, only a small minority of subjects—15 percent of the children and 14 percent of the BU students—came up with the mom’s-the-surgeon answer. Curiously, life experiences that might [prompt] the ‘mom’ answer “had no association with how one performed on the riddle,” Wapman says. For example, the BU student cohort, where women outnumbered men two-to-one, typically had mothers who were employed or were doctors—“and yet they had so much difficulty with this riddle,” says Belle. Self-described feminists did better, she says, but even so, 78 percent did not say the surgeon was the mother.

This shocked me. It seems I have known this riddle forever, but I was surprised to see that it is still an issue. Ken recalls from his time in England in the 1980s that surgeons were elevated from being addressed as “Doctor X” to the title “Mister X.” No mention of any “Miss/Mrs/Ms” possibility then, but this is now. I think this demonstrates in a pretty stark manner how important it is to be aware of implicit bias. My word, things are worse than I ever thought.

Bias In Diversity Studies

I looked some more and discovered that there was, I believe, bias even in studies of bias. This may be even more shocking: top researchers into the importance of diversity have made implicit bias errors of their own. At least that is how I view their research.

Again I will quote an article, this time from Stanford:

In 2006 Margaret Neale of Stanford University, Gregory Northcraft of the University of Illinois at Urbana-Champaign and I set out to examine the impact of racial diversity on small decision-making groups in an experiment where sharing information was a requirement for success. Our subjects were undergraduate students taking business courses at the University of Illinois. We put together three-person groups—some consisting of all white members, others with two whites and one nonwhite member—and had them perform a murder mystery exercise. We made sure that all group members shared a common set of information, but we also gave each member important clues that only he or she knew. To find out who committed the murder, the group members would have to share all the information they collectively possessed during discussion. The groups with racial diversity significantly outperformed the groups with no racial diversity. Being with similar others leads us to think we all hold the same information and share the same perspective. This perspective, which stopped the all-white groups from effectively processing the information, is what hinders creativity and innovation.

Nice study. But why choose to study only all-white groups and groups of two whites and one nonwhite? What about the other two possibilities: all nonwhite, and two nonwhites and one white? Did this not even occur to the researchers? I could imagine that the all-nonwhite groups do the best, or that two nonwhites and one white do the worst. Who knows. The sin here seems to be not even considering all four combinations.

It Is Even Worse

Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai have a recent paper in NIPS with the wonderful title, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings.”

Again we will simply quote the paper:

The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors, which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.

Here is one of their examples. Suppose we want to fill X in the analogy, “he is to doctor as she is to X.” A typical embedding prior to their algorithm may return X = nurse. Their hard-debiasing algorithm finds X = physician. Yet it recognizes cases where gender distinctions should be preserved, e.g., given “she is to ovarian cancer as he is to Y,” it fills in Y = prostate cancer. Their results show that their hard-debiasing algorithm performs significantly better than a “soft-debiasing” approach and performs as well or nearly as well on benchmarks apart from gender bias.
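The geometric idea can be made concrete with a toy sketch: take the difference between “he” and “she” vectors as a gender direction, then neutralize a word by projecting out its component along that direction. The two-dimensional vectors below are made up purely for illustration and are not real embeddings, and this is only the neutralization step of the paper's hard-debiasing algorithm, not the full method.

```python
# Toy illustration of the geometric idea in Bolukbasi et al.:
# a "gender direction" in an embedding space, and neutralizing a
# word vector by removing its component along that direction.
# All vectors here are invented for illustration, not real embeddings.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def neutralize(w, g):
    """Remove the component of w along direction g (the neutralization step)."""
    coef = dot(w, g) / dot(g, g)
    return [a - coef * b for a, b in zip(w, g)]

# Hypothetical 2-D "embeddings": axis 0 ~ gender, axis 1 ~ occupation-ness.
he    = [ 1.0, 0.0]
she   = [-1.0, 0.0]
nurse = [-0.8, 0.9]   # biased: leans toward the "she" side

g = sub(he, she)            # gender direction, here [2.0, 0.0]
nurse_fixed = neutralize(nurse, g)

print(nurse_fixed)  # gender component removed -> [0.0, 0.9]
```

After neutralization the word is orthogonal to the gender direction, so analogies along that direction no longer pull it toward either gender, while its other coordinates (here the “occupation” axis) are untouched.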

Overall, however, many have noted that machine learning algorithms are inhaling the bias that exists in lexical sources they data-mine. ProPublica has a whole series on this, including the article, “Breaking the Black Box: How Machines Learn to be Racist.” And sexist, we can add. The examples are not just linguistic—they include real policy decisions and actions that are biased.

How to Balance Bias?

Ken wonders whether aiming for parity in language will ever be effective in offsetting bias. Putting more weight in the center doesn’t achieve balance when all the other weight is on one side.

The e-mail thread among my colleagues centered on the recent magazine cover story in The Atlantic, “Why is Silicon Valley so Awful to Women?” The story includes this anecdote:

When [Tracy] Chou discovered a significant flaw in [her] company’s code and pointed it out, her engineering team dismissed her concerns, saying that they had been using the code for long enough that any problems would have been discovered. Chou persisted, saying she could demonstrate the conditions under which the bug was triggered. Finally, a male co-worker saw that she was right and raised the alarm, whereupon people in the office began to listen. Chou told her team that she knew how to fix the flaw; skeptical, they told her to have two other engineers review the changes and sign off on them, an unusual precaution.

One of my colleagues went on to ascribe the ‘horribleness’ of many computer systems in everyday use to the “brusque masculinism” of their creation. This leads me to wonder: can we find the “proof” I want by making a study of the possibility that “men are buggier”—or more solidly put, that gender diversity improves software development?

Recall Ken wrote a post on themes connected to his department’s Distinguished Speaker series for attracting women into computing. The series includes our own Ellen Zegura on April 22. The post features Margaret Hamilton and her work for NASA’s Apollo missions, including the iconic photo of the stack of her code standing taller than she is. Arguments over the extent of Hamilton’s role can perhaps be resolved from sources listed here and here, but there is primary confirmation of her strong hand in code that had to be bug-free before deployment.

We recently posted about our amazement at the large-scale consequences of bugs in code written at the undergraduate level, such as overflowing a buffer. Perhaps one could do a study of gender and project bugs from college or business applications where large data could be made available. The closest large-scale study we’ve found analyzed acceptance rates of coding suggestions (“pull requests”) from over 1.4 million users of GitHub (news summary), but this is not the same thing as analyzing design thoroughness and bug rates. Nor is anything like this getting at the benefits of having men and women teamed together on projects, or at least in a mutual consulting capacity.

It is easy to find sources from a year ago hailing that study in terms like “Women are Better Coders Than Men…” Ordinarily that kind of “hype” repulses Ken and me, but Ken says maybe this lever has a rock to stand on. What if we ‘think different’ and embrace gender difference by positing that women approach software in significantly different ways, where having such differences is demonstrably helpful?

Open Problems

What would constitute “proof” that gender diversity is concretely helpful?

37 Comments
  1. March 28, 2017 11:45 pm

    I love that wordpress is recommending your post “Bias in the primes” as a related post.

  2. Dana Randall permalink
    March 29, 2017 12:19 am

    Dick, your own university has been studying issues of diversity and implicit bias for years. In 2014, the ADVANCE Professors began the Equity, Diversity and Excellence initiative to help educate the campus on some of the ways bias enters basic processes such as hiring and promotion and tenure. In fact, well aware of the skepticism and resistance likely among GT professors, we focused on data from studies that are proof based. If you want to continue your exploration, please start here: .

    You can also find a nice PNAS article offering at least partial solution to your open problem: .

    Perhaps a blog on diversity may have benefitted from more diversity?

    • March 29, 2017 8:39 am

      Dana, thank you for supplying some more diversity :-). The abstract of the PNAS article indeed concludes, “We find that when selecting a problem-solving team from a diverse population of intelligent agents, a team of randomly selected agents outperforms a team comprised of the best-performing agents. This result relies on the intuition that, as the initial pool of problem solvers becomes large, the best-performing agents necessarily become similar in the space of problem solvers. Their relatively greater ability is more than offset by their lack of problem-solving diversity.” But at the end of our post we are trying out the prospects of making a more affirmative case for female software engineers than that.

      • Dana Randall permalink
        March 29, 2017 10:38 am

        Men are buggier? Don’t you think there are more bugs when input from any half of a population is more readily dismissed, especially when the remaining group has a large commonality of experience?
        Maybe the studies should stop looking at female performance and should instead look at performance by individuals who are stubborn enough to persist despite facing all sorts of bias and discrimination — surely these qualities are more likely correlated with fastidious code than chromosomes.

  3. March 29, 2017 2:22 am

    Female surgeons in England are referred to as ‘Miss’. Always sounds a bit strange.

  4. Peter Gerdes permalink
    March 29, 2017 2:51 am

    The riddle about the surgeon doesn’t have the implications for sexism that many people assume because many people have exactly the same confusion if you say a mother and daughter are in a horrible car crash that kills the mother.

    By both asking “how can this be” and only mentioning one parent the story discourages us from thinking about the other parent. It’s much akin to asking “Who is buried in Grant’s tomb” and using the fact that many people say “I don’t know” to conclude we don’t understand how possessives work.

    The relevant information is the DIFFERENCE in responses when one says mother and daughter are in a car crash versus father and son!

  5. Micha permalink
    March 29, 2017 2:53 am

    It’s worth noting, in a purely numerical way, that only 10% of surgeons are women, and that 14% of the interviewees answered the riddle correctly. The fact that it did not occur to a lot of people that the surgeon could be the mom may be based, subconsciously, on the factual unlikelihood of having a female surgeon. The riddle could probably be flipped with such jobs where women are over-represented. Similarly, “he is to doctor as she is to nurse” may be more factually true; both jobs exhibit a lack of balance in representation.

    All in all, I believe the article failed to actually address its first question, but added to it another part; it becomes “Can you prove that we should strive for diversity, and that doing so should be done through forcefully blinding ourselves from factual, numerical inequalities?” (I’m not saying it is a bad or obviously wrong question; it is worth asking.)

    • Peter Gerdes permalink
      March 29, 2017 3:53 am

      Also, the computer was giving the CORRECT answer for the analogy “he is to doctor as she is to blank.” Implicit in any analogy question of this form is that the difference between the first elements (i.e., he and she) is salient, and as such the relationship one should infer in this case is that of the stereotypical job for that gender in medicine. Just as the analogy “Natural Number is to Integer as Mersenne Prime is to __” has the proper answer Prime, not Integer.

      So if one accepts that some analogies can turn on the relation of stereotypical role then the debiasing algorithm does degrade performance. Now maybe you don’t want the algorithm to recognize these sorts of analogies but that’s a different matter.

  6. Peter Gerdes permalink
    March 29, 2017 3:36 am

    The evidence about the benefits of racial/gender diversity is mixed, and much of the work that claims to show benefit suffers from the same fatal flaw you point out of not considering all possible explanations (e.g., blacks were simply better at the task in that study). This, together with the HUGE publication bias and the general replication issues in the social sciences, makes it almost impossible to reach any firm conclusions from bias/diversity research.

    This is a shame because actual knowledge about the effects of bias and diversity would be quite valuable.

    However, one should be prepared for results showing that gender diversity is harmful in some contexts. After all a mixed gender team has greater likelihood of gender/sexual based conflict not to mention inefficiencies caused by implicit bias (e.g. ignoring female coder’s contributions/concerns). Even without bias it’s certainly possible that certain endeavors are more successful when performed by single gendered groups as a number of studies suggest in the education context.

  7. cero permalink
    March 29, 2017 3:51 am

    It would be interesting to do the same riddle replacing “boy and his dad” by “girl and her dad”, “girl and her mom” and “boy and his mom”.

    I’m pretty sure the effect just by replacing “boy” with “girl” will be surprising, since you change from an all-male-context to a mixed context.

  8. Peter Gerdes permalink
    March 29, 2017 4:06 am

    “What if we ‘think different’ and embrace gender bias by positing that women approach software in significantly different ways—?—where having such differences is demonstrably helpful.”

    If one assumes that plausibly women approach software in ways that are significantly different enough that it substantially affects the product produced then intellectual honesty forces you to take the possibility that women approach software in ways that turn out to be (statistically) inferior.

    I think this is the fundamental problem with intellectually honest inquiry into these issues. You can’t take gender differences seriously but insist that they only result in positive socially acceptable facts. Differences like this will mean at least in some circumstances the way women tend to approach a problem will be worse and, even though the costs and benefits may average out in the big picture, no one is comfortable openly acknowledging those cases.

    • Peter Gerdes permalink
      March 29, 2017 4:08 am

      Should have read:

      possibility that women approach software in ways that turn out to be (statistically) inferior *seriously*.

      Forgot seriously.

    • March 29, 2017 8:35 am

      Peter, there is a possibility that women in isolation might be “statistically inferior” but the combination may benefit. That’s what we are properly asking when we talk about the benefit of “diversity,” not just “more women.” Dick’s point about the black+white murder-mystery study is related to this.

      • Peter Gerdes permalink
        April 2, 2017 10:13 pm

        I wasn’t so much considering the issue of women on their own being inferior (I take that to be unlikely once ability/accomplishment is conditioned on) but including women on a team might be net harmful.

        In particular, all the distractions of gender interaction and the inefficiencies induced by team member gender bias all work against a mixed gender team. Is it POSSIBLE that women add some benefit that is so valuable that even when their contributions are ignored (as in your anecdote) they contribute more than the inefficiency this induces? Yes, it’s possible.

        However, given that we are pretty confident of many of the harms that result from a mixed gender team but only speculate about the benefits we have to be open to the possibility we will find out that adding women to an existing team hurts productivity (not necessarily through any fault of the women involved).

        So what do we do then? If we think we are always morally required to hire the best candidates (regardless of how they will affect overall team performance) this whole discussion is really kinda useless. On the other hand if we are willing to prefer adding team members whose presence adds most to the overall team productivity (even if they are less able than other candidates) what do we do IF we find out that mixed gendered teams perform worse?

        In short I don’t see any coherent rule that would allow us to make use of the fact that adding women might increase productivity that wouldn’t yield unacceptable (not merely socially but IMO morally) results if it turns out that adding them to a previously all male team decreases productivity for reasons beyond their control.

      • Peter Gerdes permalink
        April 2, 2017 10:25 pm

        To be clear the problem is that CS is so male dominated. If it wasn’t a result that mixed gender teams did worse would simply suggest the unproblematic solution of single gendered teams. However, given that CS is male dominated it seems troubling to factor in overall team productivity if that ends up favoring single gender teams.

      • Peter Gerdes permalink
        April 2, 2017 10:34 pm

        Sorry, I think my reply here wasn’t super clear.

        Yes, in the original post I was referring to the fact that one can’t entertain the idea that women approach software in a sufficiently different way as to offer some extra benefit when added to male-dominated teams without taking seriously the possibility that those ways they approach software are substantially less effective.

        My complaint there was that it’s not intellectually coherent to only seriously entertain substantial gender differences when it implicates ways we might empower women. Intellectual honesty means that if we really think men and women approach software so much differently we need to be willing to consider outcomes like men and women have incompatible approaches (and thus diversity is harmful) or that women simply have a worse one.

        I personally believe that substantial differences in either approach or ability are probably pretty minimal once accomplishments are conditioned on but for those who don’t are they willing to accept the potential implications of their view about gender even when it turns out socially unacceptable?

  9. March 29, 2017 5:48 am

    Here’s a relevant paper on diversity —

    The Dynamics of Vertical and Horizontal Diversity in Organization and Society

  10. March 29, 2017 6:09 am

    Just wanted to point out that among the “undergraduate students taking business courses at the University of Illinois” there might have been a lot more white than nonwhite subjects, so the reason why only all-white and two-white/one-nonwhite groups were used was probably to maximize the number of mixed groups.

  11. March 29, 2017 11:31 am

    Scott E. Page, U. Michigan, offers a complexity-based argument in his book “The Difference” and 24 lectures in the Great Courses series.

    Clearly, computer science has a diversity *management* problem, but is it fixable? reversible?

    Further diversity arguments go back to Professor Virginia Valian, 20 years ago. When did the first male utter the words “implicit bias”?

    What’s new?

  12. Dana Randall permalink
    March 29, 2017 11:33 am

    My own snapshot of the surgeon problem: When I started teaching, a student from an u.g. math class came to my office hours and wanted to know if he could ask a personal question. He was confused by my once saying in class, “When I was a graduate student at Berkeley…,” and wondered how that could be. Dumbfounded, I told the four students in my office to figure out what I might have meant by themselves, much like I would have done if they were stumped on a math question. Think…think…think…”Oh! You got your masters there and are getting your PhD here?” “No.” Think…think…think…”You transferred?” “No.” Eventually I had to explain I was an Assistant Professor. Apparently this was a far more challenging brain teaser than I had anticipated. (And before you ask, the guy down the hall who looked like he was 15 never had this experience.)

    • rjlipton permalink*
      March 29, 2017 12:39 pm


      Thanks for this example. Well, I wish you did not have this example to tell. But I guess there is a fundamental issue if math students cannot even imagine all the cases of a simple real-world problem.

    • Keren Censor-Hillel permalink
      March 30, 2017 5:58 am

      Here’s another one in the same spirit (I have more): A few months ago someone knocks on my (always open) office door, asking to speak with the professor. I reply something such as “sure, what about?” and the person looks completely baffled. The rest is: Person: “W…wait, are you the professor?”. Me: “Yes.”. Person takes a step back to read the sign on my door and only then starts speaking.

  13. gepect permalink
    March 29, 2017 3:44 pm

    The Neale et al. study actually doesn’t show a significantly better performance for mixed groups (Table 1 gives r=-0.15 which at this sample size isn’t even significant at 10%). Then again, with this sample size only pretty strong effects could have been detected, so unfortunately overall we don’t learn a lot from this study. Incidentally, I find it quite unfair to criticize the authors for not considering more mixed groups; I’m sure they would have loved to do that (or increase the sample size), but time and money are unfortunately quite limited for these studies.
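    The significance claim above can be checked with the standard t-test for a Pearson correlation, t = r·√(n−2)/√(1−r²). The comment does not state the number of groups, so the sample sizes below are hypothetical:

```python
# Is r = -0.15 significant at the 10% level? Compute the t-statistic
# for a Pearson correlation at several hypothetical sample sizes
# (the actual group count is not given in the comment).
import math

def t_stat(r, n):
    """t-statistic for testing a Pearson correlation r with n samples."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

r = -0.15
for n in (30, 50, 100):          # hypothetical group counts
    print(f"n={n}: |t| = {abs(t_stat(r, n)):.2f}")
# The two-sided 10% critical value is roughly 1.7 at these sample
# sizes, so |t| would need to exceed about 1.7 to be significant.
```

    Even with 100 groups, |t| stays below 1.7 for r = −0.15, consistent with the comment's point that only fairly strong effects could have been detected.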

  14. March 29, 2017 4:48 pm

    The issue is not whether female coders are better, or even different, than male ones: that is really missing the point. (Although I agree with Dana that groups of people do better when they’re not all competitive jerks, and when some of them have had to develop fantastic coping and communication skills to get where they are!)

    The issue is that science and engineering is currently limiting itself, through insufficient investment, stupid cultural behaviors, unforced errors, and general boneheadedness, to about half the population – and when you consider race, class, and global North/South boundaries, far less than that.

    This is bad for two reasons: 1) everyone, female, male, and other, should feel that science and engineering belongs to them, and that these careers are available to them a priori; and 2) when we look at the scientific challenges facing us, artificially limiting ourselves to a particular subpopulation – chosen for bizarre historical reasons – is something we can’t afford.

    I question whether any additional “proof” is necessary.

    p.s. I appreciate your highlighting of Bolukbasi et al. For the most part, it seems that machine learning, applied to the current corpus of human behavior, tends to amplify our worst tendencies. It’s nice to hear that it might be able to reverse them to some extent.

  15. SteveB permalink
    March 30, 2017 12:46 pm

    Here’s a small paradox, would love if someone took a shot at explaining it:
    Four systems I am familiar with: 1. a Nordic country, 2. a developed Western European country, 3. Eastern Europe 15 years ago, 4. a large Middle Eastern country.
    The ordering is obviously the same in terms of progressiveness on gender issues, certainly in terms of the amount of ink used on the topic.
    Ratio of women in the CS program, approximate: 1. 15%, 2. 20%, 3. 40%, 4. 55%.

    • Jeff permalink
      March 31, 2017 8:05 am

      I don’t know the specifics of this example, but tendency to think “equal education = gender equality” does not always pertain. For example, it is possible that women need more education in order to achieve income/opportunity parity with men of the same age.

  16. John permalink
    March 30, 2017 1:27 pm

    I’m not sure how to ask this and not come off as snarky. I’m legitimately curious. Why do many people hoping to reduce bias almost immediately suggest there are real differences we should leverage?

    If there are real differences, I would not call that bias. In my life experiences, I see no reason to believe coding ability correlates with gender. But if, on average, men really are worse at programming bug free code, why wouldn’t you explicitly _want_ gender inequality, for in that case a selection of good programmers should statistically have more women as a matter of fact, not bias.

    • william e emba permalink
      March 31, 2017 11:16 am

      Your question assumes that managers engage in objective-performance based evaluations. Unfortunately, most people, including self-declared rationalists, rely quite strongly on their internal biases.

      I once was a consultant for NASA, hired to provide outside scrutiny of some engineering software. At one point I found a severe mathematical probability flaw in their simulation algorithms, one that was easy to make if you thought naively about the model, but one that was obviously incorrect if you thought in terms of Haar measure. It took me a week to come up with the correct solution. I sat on it at first, as it took me a week to come up with a way to explain what I found that was suitable for the engineers. (I came up with an elementary 2D example, where the naive thinking leads to something that even first year calculus students would understand was wrong.) Had I just barged in with the “you’re wrong, higher math is right” explanation, they would certainly have ignored me. But once they were convinced I knew what I was doing, they didn’t even bother to doublecheck my actual solution.

      Thinking is only partially about rationality. It’s also partly gut-instincts in action. I made sure the engineers’ gut-instincts were that I knew this material really really well, and based on the 2D example, that they too understood the material, even when they didn’t. Our hosts are similarly interested in similar gut-instinct retraining at the social engineering level, a much more difficult task.

  17. March 31, 2017 12:48 am

    Dear all, here is a related “Test your intuition” question.

  18. Colin permalink
    March 31, 2017 7:02 pm

    On the subject of an evidence-based approach to diversity, I see that Cathy O’Neil (mathbabe) has a book out on how ‘big data’ ends up entrenching discrimination and polarization:

    People’s implicit/subconscious biases are bad enough, but we could end up in an even worse situation if a poorly-designed cost-benefit analysis of diversity just leads to prejudiced beliefs getting reinforced with the illusion of statistical rigour.

    • April 1, 2017 12:23 am

      Indeed, we link her “mathbabe” blog, and this book has been in our background thinking for a few recent posts.

  19. Anonymous permalink
    April 4, 2017 4:21 pm

    I agree with Dana, that there is in fact plenty of science on this subject, and it doesn’t require looking very far.

    I’ll provide one more anecdote: the title of your post, “Gender Bias: It Is Worse Than You Think,” implicitly assumes your readers are male, as most women who have advanced to a graduate-level background in computer science probably have a good intuition for how bad the gender bias really is (as evidenced by the comments here).

    The question, particularly on a theory blog, shouldn’t be whether it’s “good business” to include women, but whether we believe that any talented, curious person should be shut out of the ultimate quest for scientific knowledge by an exclusionary culture.

    • April 5, 2017 7:27 pm

      “The question, particularly on a theory blog, shouldn’t be whether it’s “good business” to include women, but whether we believe that any talented, curious person should be shut out of the ultimate quest for scientific knowledge by an exclusionary culture.” Right on.

      It’s an imperfect analogy, but I wonder what would have happened if, during the Civil Rights era, well-meaning scientists had studied whether racial diversity would improve decision making because African-Americans think differently. Clearly that would be entirely beside the point, and if anything would feed into racist narratives that there are fundamental differences based on race.

      I don’t give a damn if women are better or different at coding. I don’t even care if mixed-gender teams do better than homogeneous teams – or even if they do worse because of the men being jerks, in which case the solution is to kick the sexist men off the team no matter how good they are at coding with their bros. The issue is not maximizing the speed with which we produce new apps – the issue is making women feel welcome in our field.

      I’m tired of women being made uncomfortable, attacked by trolls on the internet, harassed in tech workplaces, and all the other things that we do to exclude them. If we believe in what we do, it’s absurd to think that it’s ok to only let half the population do it.

      [Dick, Ken: I’m responding more to Peter Gerdes here than to you. But I still think that your asking for “proof” is not the point.]


