A summary of NPR’s “When Women Stopped Coding”

A friend shared with me this 15-minute Planet Money segment called “When Women Stopped Coding”. From their intro:

For decades, the number of women in computer science was growing.  But in 1984, something changed. The number of women in computer science flattened, and then plunged. Today on the show, what was going on in 1984 that made so many women give up on computer science? We unravel a modern mystery in the U.S. labor force.

There’s no transcript for the show, so I typed out some notes as I listened. Here’s my summary of the show:

The rise of personal computers in the 70s and early 80s meant that people could come to college with exposure to computers generally and experience with programming specifically. People who didn’t have that exposure were getting discouraged – and sometimes actively being discouraged by professors and peers – and dropping out.

Why was there a gender difference in exposure to computers? For one thing, computer ads were super male-focused. The show plays several clips of ads aimed at men and boys, one of which does have a woman in it – jumping into a pool in her bikini. The hosts interview Jane Margolis (Unlocking the Clubhouse) about her research into childhood and adolescent experiences with computers. She talks about how computers were marketed as games for boys, how families bought into this by doing things like placing computers in their sons’ rooms, even when their daughters were more interested in computing, and how this was reinforced culturally by movies like Weird Science that celebrate male nerds and objectify/denigrate women.

The show emphasizes that the women dropping computer science majors were not doing so just because they were “behind”. Many were actually excelling – but they still didn’t feel comfortable or welcome in this highly gendered culture. Fixing the culture, and welcoming students who are new to computer science, can be done: Carnegie Mellon, where Margolis did her research, made changes and now has 40% women in their undergraduate computer science programs. Harvey Mudd and the University of Washington have made similar improvements.

Trust is also hard

Building on my post about verification a few months ago, I want to talk about quantifying trust.  As full verification of all knowledge by a single individual is as impossible as drinking the sea, we are faced with two options: paranoia or trust.

Last year my friend Madeleine tried to persuade me that learning about gene patenting was important.  I agreed that gene patenting was important, but told her, “I don’t have time to learn about it and besides, I trust you. Whatever you tell me about gene patenting I will believe.”  I have said similar things to friends about a variety of topics, and I engage in a similar calculus when I trust a blog’s critique of a journal article or friend’s explanation of how a program on my computer works.  I could verify any given claim, but it would take days, weeks, months or even years that I don’t want to spend.  So I trust.

There have already been many efforts to quantify trust.  In a way, Google does this: PageRank measures not just which websites are popular, but which are popular among popular websites.  One can view that as “trusting” the popular pages.  Less abstractly, sites like Yelp, Amazon, AirBnB, and many more monetize trust by making digestible our collective opinions of goods and services.  Some of these sites, like Yelp, have a social networking aspect, highlighting reviews from friends.  All of the sites allow some type of verification, in the form of reading reviews.  It’s not surprising to me that most efforts towards quantifying trust online are commercial – I would expect nothing less from our hypercapitalist culture – but it does lead to some ironies.  Thus TrustCloud, “a real time curated global positive reputation data service”, boasts that “our proprietary algorithms look for behaviors like responsiveness, consistency and longevity in online behavior”.  By keeping their methods proprietary they are, of course, saying “trust us”.
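The core idea behind PageRank – that endorsements from trusted sources count for more – can be sketched in a few lines. This is an illustrative toy version of the classic power-iteration approach, not Google’s actual implementation:

```python
# Minimal PageRank sketch (illustrative only): a page's score depends on
# the scores of the pages linking to it, so a link from a popular page
# counts for more than a link from an obscure one.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a small baseline, plus shares of the rank
        # of each page that links to it.
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A tiny web: two pages link to "hub", so "hub" ends up most "trusted".
web = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
scores = pagerank(web)
```

In this toy web, "hub" ends up with the highest score because it collects links from both other pages, and "a" outranks "b" because it in turn receives a link from the well-ranked "hub".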

If commercial efforts towards quantifying trust are hamstrung by the assumed need for secrecy, what about non-commercial efforts?  There are many examples of implicit trust networks facilitating non-commercial projects.  The efforts of Debian Developers and Cochrane Collaborators spring to mind.  And yet, as I said, this trust is implicit and therefore difficult to leverage. We believe, holistically, that a Cochrane review is reliable and we can verify, with much effort, that a review has been done rigorously, but there is no automated way for us to share that verification with others nor to build up a track record of whose work is consistently verified.  Unsurprisingly, the Debian community automates and quantifies a great deal more than Cochrane. Debian, a free software operating system, relies on individuals to manage sub-parts of the project, known as packages.  A number of statistics about these packages are compiled, including how often they’re used, how many predictable errors and warnings they provoke, and the number of reported bugs over the package’s lifetime.  But it’s not clear how these tools compare with implicit social trust networks in the success – or even continued functioning – of the project.

If a non-commercial project (or creatively and transparently commercial project) were to quantify trust, what would that look like?  I’d argue that it would have the following characteristics:

  • The methodology would be entirely transparent, even if some or all of the data being processed was kept private.
  • Such a system would facilitate verification to the fullest extent possible, while utilizing partial and incomplete verification.
  • This system would also enable snap judgments, distilling a multi-faceted trust network down to a single number or rating. Users could then choose what amount of engagement, from “snap judgment” to “full verification”, they care to perform.
  • It would follow the lead of social networking sites and recognize personal connections as a key source of trust.
  • It would allow individuals to disclose self-distrust by incorporating measures of confidence in their own statements, claims, verification processes, etc.
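To make the last two points concrete – weighting personal connections and self-reported confidence – here is a toy scoring function. The structure, inputs, and weights are all invented for the sake of illustration; a real system would need far more care:

```python
# Toy trust score (purely illustrative; every weight here is invented).
# A claim's score blends the claimant's verification track record, the
# reader's social distance from them, and the claimant's own stated
# confidence in this particular claim.

def trust_score(verified_claims, total_claims, social_distance, self_confidence):
    """
    verified_claims / total_claims: how often this person's past claims
        held up under verification.
    social_distance: 1 = direct friend, 2 = friend-of-a-friend, etc.
    self_confidence: the claimant's own 0-1 confidence in this claim,
        letting them disclose self-distrust.
    """
    # With no track record, fall back to a neutral 0.5.
    track_record = verified_claims / total_claims if total_claims else 0.5
    proximity = 1.0 / social_distance
    return track_record * proximity * self_confidence

# A direct friend with a strong record who is fairly sure of their claim:
score = trust_score(verified_claims=18, total_claims=20,
                    social_distance=1, self_confidence=0.9)
```

The single number this returns is the “snap judgment”; a fuller interface would let the reader drill down into each factor, all the way to the underlying verifications.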

That’s just a first stab.  Already I can see problems, the most obvious being how terribly sad I would feel if I had a low trust score.  I will continue reading and thinking so that I can refine these ideas in a future blog post.

Thanks to Daf for answering some questions about Debian.

Meanwhile, at Grace Hopper…

Picture by Stefana Muller.

In between weddings both this weekend and last, I made it out to Phoenix for the Grace Hopper Celebration of Women in Computing. I gave a talk at Open Source Day about starting in open source. Rikki Endsley of Red Hat wrote up a summary of my talk, and Serena Larson of ReadWrite wrote an article based on the talk and a brief interview we did beforehand. My slides are here.

There’s a lot I could say about the male allies panel and Satya Nadella’s keynote but others have said it better than I could.  

I’ll limit myself to two remarks. First, I am not against the idea of including allies at GHC or most other feminist or women-centered spaces, but that inclusion has to be done thoughtfully, with real allies and not just corporate sponsors posing for publicity. Second, it’s worth noting that GoDaddy is planning to IPO sometime in the next year. If they’re successful, perhaps because the public no longer thinks of them as misogynists, GoDaddy founder Bob Parsons stands to make even more money for his hard work creating a financially troubled company and, oh yeah, blatantly objectifying women.  Not a bad deal if you can get it.

List

Things I’ve accomplished recently, and future plans, offered up as justification for the lack of a “real” post:

  • I spoke at Alterconf and Software Freedom Day.  The slides for Alterconf are here, though there should be video coming out soon.  I used a walkthrough for SFD, which can be found here.
  • I’ll be speaking next week at the Grace Hopper Celebration of Women in Computing, and also hosting a lunch table on contributing to free and open source software.  If you’re going to GHC, hit me up!
  • I’m organizing, with the help of Thea Atwood, an open science event at UMass Amherst.  Details here.  I think this will be great and it will hopefully encourage more open science efforts in the valley.
  • I’ll be hosting a Pycon poster proposal brainstorming session on IRC next week – date to be announced, once I’ve got my Wise Python Elders confirmed.  To be followed up with a critiquing and editing session before the deadline.
  • I wrote a post, “What we talk about when we talk about replication”, for the Open Science Collaboration blog.
  • We’re getting to the heart of events season at OpenHatch.  If you’re in Chicago, Victoria, Lewisburg PA, LA, or Long Island and want to volunteer at an event this month, let me know!

Not enough for you?  Relax and watch some time lapse photography of fireflies:

When everything looks like a nail

A hammer hangs on a nail sticking out of a wooden fence.

Hammer by Jerry Swiatek, CC BY 2.0

I’ve long known the adage “When you have a hammer, everything looks like a nail” but I’ve only recently come to appreciate its truth.

As I’ve mentioned on this blog before, my main client for the last year and a half has been OpenHatch, for whom I’ve been organizing events and event series. (I do many other things for them but the event series are my biggest focus.) A few months ago I was chatting with science librarian Thea Atwood about the disappointing lack of interest in open science at most of the Five College schools, especially my alma mater, Hampshire. We decided to throw together an event in mid-October to address that.

Event-planning has become a “hammer” for me: a tool for addressing problems that feels easy and natural. When I see an issue that could plausibly be fixed by throwing an event, I instinctively think about doing so.

I have several other hammers. The first one I picked up was writing stories. I write for fun, yes, but I also write to fix problems: my first novel is a way of articulating flaws in libertarianism, my children’s book is a response to the way society was gendering my best friends’ children, and my current project is meant to encourage girls to go into technology.

In college, I gained another hammer: doing experiments. When I have a question, I often design in my head the process that would allow me to answer it, whether that’s observation, controlled manipulation, or analysis of pre-existing data. Sometimes I even get to carry out these experiments, though of course that was more common when I worked at a lab.

When I learned to program, I added the hammer of “make a website!” though my grip can be somewhat shaky. I tend to brainstorm static websites, simple apps, or sites that use basic MySQL-style databases because that’s what I feel comfortable creating. There’s still a great deal of web development that I’m unfamiliar with and therefore don’t think of when faced with a problem.

Which brings me to my worry: with so many tools in my belt, do I have a false sense of security that I’m choosing the best method for approaching a particular problem? There are still so many approaches I can’t take. My response to a problem is almost never “Make a business!” or “Create a physical object!” or “Write a song!” or “Check/change the law!”, because those aren’t things that I know how to do.

The next time I think of a story, an experiment, a website or an event as the best answer to a given problem, I want to take a step back and think about what the best solution really is. I want to force myself to come up with an approach that is outside my comfort zone. And I want to keep improving my toolset.

What are your hammers, and what hammers do you wish you had?