A Brief History of Humankind, briefer

A few months back, a friend recommended watching the videos from a Coursera course, A Brief History of Humankind.  I recently finished, and I second the recommendation.  The lectures are engaging and encompassing without being overly shallow, and I like Yuval Harari’s approach of describing theories while continuously pointing out their contradictions and uncertain nature.

To make the course go even faster, I have two suggestions.  First, watch the videos at 1.5x speed or higher. Dr. Harari speaks painfully slowly, and I often went up to 2.5x.  Second, I have compiled a list of the segments I got the most out of.  Watching only these at 2x speed, you should get through the course in just four or five hours.  As I said, I do recommend the whole thing, but here were the highlights for me:

  • Lesson 2: The Cognitive Revolution, Segment 1: An easy-to-understand overview of what the cognitive revolution was, when it happened, and what the consequences were.  If this piques your interest, watch the four subsequent segments of this lesson.
  • Lesson 4: The Human Flood, Segment 2: The impact of the cognitive revolution on earth ecosystems.
  • Lesson 5: History’s Biggest Fraud, Segment 1: The agricultural revolution and its downsides.  The three subsequent segments in this lesson are also quite good.
  • Lesson 6: Building Pyramids, Segment 3: The rise of information too plentiful for human brains, and the invention of writing and math to help us handle it. I’ll recommend any segment that covers Sumer.
  • Lesson 7: There is No Justice in History, Segment 2 & Segment 4: Talks about how hierarchies formed, focusing on race/caste (Segment 2) and patriarchy (Segment 4). The value of these segments probably depends on the extent to which you’ve previously thought and learned about race and gender.
  • Lesson 8: The Direction of History, Segment 3: The history of trust. Possibly my favorite segment of the whole course – I found myself thinking about the characterization of money as a form of quantified trust long after the segment was over.
  • Lesson 10: The Law of Religion, Segment 2: An overview of theistic religions, with a focus on how polytheism and dualism influenced monotheism.
  • Lesson 11: The Discovery of Ignorance, Segment 1: How science and imperialism grew with each other.  Can be summed up with this line: “The real aim of modern science is not truth, it is power.”
  • Lesson 13: The Capitalist Creed, Segment 2 & Segment 4: Segment 2 is as good an introduction to capitalism as I’ve found anywhere.  Segment 4 is short and very much worth watching.  It talks about unregulated capitalism, monopolies, and indifference, using the Atlantic slave trade as an example.
  • Lesson 14: The Industrial Revolution, Segment 3: This one is actually an anti-recommendation.  An introduction to consumerism, it has a shallow and frankly offensive take on obesity.  I found it a useful reminder that all histories are narratives and all narrators are fallible.  (Another reminder: a not-too-tactful reference to transgender experiences in Lesson 17, Segment 1.)
  • Lesson 15: A Permanent Revolution, Segment 1 & Segment 4: Segment 1 covers our changing approaches to time.  If you like this segment, I recommend reading A Geography of Time by Robert Levine.  Segment 4 talks about the surprising peacefulness of our time.

I will leave you with the last few sentences from the course:

So I hope that you leave this course with more questions than you had when you entered it, and that you leave this course with a desire, with a wish to study and to learn more about our history. In addition, I hope that you leave this course feeling a bit more uneasy than when you started it. Uneasy about the many questions to which we humans have no clear answer yet. Uneasy about the many problematic events that happened in the past, and uneasy about the direction history may be taking us in the future.

Money is hard, too, but you knew that

I received a number of comments regarding my recent posts (1,2) about how abstraction/quantification of trust leads to negative consequences such as competition, social judgment, and “gaming the system”. This does not surprise me, because my model for quantifying trust is money. And it is hard to find a technology more profoundly impactful and more deeply flawed than money.

This video segment from a Coursera class I’ve been taking does a good job of explaining the invention of money as a way of abstracting trust. From the transcript:

Why are people willing to work for an entire month, doing, many times, things they don’t really like, just in order to get, at the end of the month, a few colorful pieces of paper? People are willing to do such things when they trust the figments of the collective imagination: these cowrie shells, or these colorful pieces of paper. Trust is the real raw material from which all types of money in history have been minted.

When a wealthy farmer, say in ancient China, sold all his possessions in exchange for a sack of cowrie shells and then travelled with them to another province, he trusted – this ancient Chinese farmer – that when he reached his destination, other people, complete strangers he had never met before, would be willing to sell him rice, to sell him a house, to sell him fields in exchange for his cowrie shells.

Money is accordingly a system of mutual trust. And not just any system of mutual trust. Money is the most universal and most efficient system of mutual trust ever devised by human beings.

Money is an abstraction that allows us to trust that we will get our physical needs met without having to do the work ourselves of building our own houses, growing our own food, making our own medicines. With money, we were able to expand our resource networks to vast numbers of strangers.  What I’m imagining is something similar, but in the realm of information.  With so many facts, claims, data points, anecdotes, and opinions constantly surrounding us, we end up making sense of things through something very primitive: gossip from our friends, and faith in people who look like us and talk like us.  I don’t think we’re wrong to do so.  But it’s not a very efficient system.

What if we could create a system, an abstraction, that allows us to trust that the knowledge and statements that are proposed to us are true, with a specific and transparent level of confidence?

Of course, we already have this in some respects. Most notably, we have a large and quite profitable academic and industrial system whose sole purpose is to create, collect, and verify knowledge. But this system functions largely on implicit rather than explicit trust. You trust the judgments of journal editors and peer reviewers, of grant-makers and tenure-granters, and you trust these judgments because the people making them belong to institutions with good reputations, or because their own work is frequently cited, or because you’ve read their work before and haven’t found any flaws. Some of these reasons are flimsier than others, but all are understandable, because verification is hard, trust is also hard, and there’s no other game in town.

Which brings us around to the criticisms.  Let’s say we could make a much more efficient system, the equivalent of money but for knowledge.  Should we?

After all, money is the cause of so many problems in our society.  It causes corruption in our political systems, resentment and conflict in our personal lives, and suffering and death for many who do not have a lot of it.  Why on earth would we want to make anything more like money?

I don’t have a pat answer to that, but I do have two responses that I think are worth exploring.

First: for all the awful downsides to money, one can argue that we owe the last several thousand years of social and technical advancements to money.  Could we live the life we do now, with our cell phones and low infant mortality and space missions and chemotherapy treatments and takeout dinners and musicals, without money? This is a pretty epic counterfactual, so I’m not expecting immediate agreement, but I tend to believe that money has done more good than harm.

And second: money is very simple.  It was invented five millennia ago, and that shows.  Sure, Wall Street sharks and shysters like to create complex financial instruments, but they certainly aren’t doing so to benefit society.  But we have computers now – most of us carry them around in our pockets, and sleep with them by our bedsides – and they keep way better track of value than cuneiform tablets.  For some ideas about how we could improve money, I recommend reading Charles Eisenstein’s Sacred Economics, especially the chapters Currencies of the Commons and Negative-Interest Economics.  Bitcoin, of course, is an attempt at improving money through technology as well.  Unfortunately, money’s got a lot of baggage.  A trust abstraction system for information, if devised, could be structured to mitigate harms from the beginning.

That sounds great in theory, but what might it look like in practice?  I’ll sketch out some ideas in future blog posts.


Certainty

I have a habit of qualifying my statements with estimated likelihoods and error bars. “I think about ten people are coming, plus or minus two.” “I’m, like, eighty percent sure that it runs on Windows.” I worry that it comes off as an affectation, but I also worry that I’m not conveying my level of certainty effectively. I know how certain I am, and it pains me when that information gets lost to the ambiguities and inefficiencies of the English language. (I’m told that qualification by certainty is built into Lojban, which I believe with 99% certainty.)

When I send my female friends cover letters and grant proposals, they strike out words like “I think” and “I believe” and “probably” and “try”. I let them do it – we all know that there’s a confidence gap that disadvantages women – but it chafes. Aside from dangly earrings, uncertainty is the aspect of femininity I am most comfortable with.

Perhaps too comfortable.

There is one line in particular from this thoroughly memorable poem that I cannot get out of my head:

I asked five questions in genetics class today and all of them started with the word “sorry”.

My least favorite thing about blogging is the authoritative tone that nearly all bloggers adopt. To avoid it, I center myself and my experiences, about which I am actually the authority. Then my writing seems self-involved. (A lot of women essayists and authors are ridiculed as self-involved. But on what other topics are they allowed to speak with authority?)

I would rather just qualify everything I say. I’ve thought about using hover text for this purpose, or color-coding my sentences to reflect just how confident I am in them:

My least favorite thing about blogging is the authoritative tone that nearly all bloggers adopt. To avoid it, I center myself and my experiences, about which I am actually the authority. Then my writing seems self-involved. (A lot of women essayists and authors are ridiculed as self-involved. But on what other topics are they allowed to speak with authority?)

But you can’t color-code the spoken word – you can’t color-code most of your life – and you can’t shrink down when somebody else is waiting to take your space.  David Dunning writes that “I don’t know” should be “an enviable success, a crucial signpost that shows us we are traveling in the right direction toward the truth”, but for now it is disregarded, devalued, and feminized.

What can we do about it?  I don’t know.  I don’t know I don’t know I don’t know I don’t know.

Which leaves me here, with my self-involved writing and my error bars, trying to be taken seriously, but not with certainty.

A summary of NPR’s “When Women Stopped Coding”

A friend shared with me this 15-minute Planet Money segment called “When Women Stopped Coding”. From their intro:

For decades, the number of women in computer science was growing.  But in 1984, something changed. The number of women in computer science flattened, and then plunged. Today on the show, what was going on in 1984 that made so many women give up on computer science? We unravel a modern mystery in the U.S. labor force.

There’s no transcript for the show, so I typed out some notes as I listened. Here’s my summary of the show:

The rise of personal computers in the 70s and early 80s meant that people could come to college with exposure to computers generally and experience with programming specifically. People who didn’t have that exposure were getting discouraged – and sometimes being actively discouraged by professors and peers – and dropping out.

Why was there a gender difference in exposure to computers? For one thing, computer ads were super male-focused. The show plays several clips of ads aimed at men and boys, one of which does have a woman in it – jumping into a pool in her bikini. The hosts interview Jane Margolis (Unlocking the Clubhouse) about her research into childhood and adolescent experiences with computers. She talks about how computers were marketed as games for boys, how families bought into this by doing things like placing computers in their sons’ rooms, even when their daughters were more interested in computing, and how this was reinforced culturally by movies like Weird Science that celebrate male nerds and objectify/denigrate women.

The show emphasizes that the women dropping computer science majors were not doing so just because they were “behind”. Many were actually excelling – but they still didn’t feel comfortable or welcome in this highly gendered culture. Fixing the culture, and welcoming students who are new to computer science, can be done: Carnegie Mellon, where Margolis did her research, made changes and now has 40% women in its undergraduate computer science program. Harvey Mudd and the University of Washington have made similar improvements.

Trust is also hard

Building on my post about verification a few months ago, I want to talk about quantifying trust.  As full verification of all knowledge by a single individual is as impossible as drinking the sea, we are faced with two options: paranoia or trust.

Last year my friend Madeleine tried to persuade me that learning about gene patenting was important.  I agreed that gene patenting was important, but told her, “I don’t have time to learn about it and besides, I trust you. Whatever you tell me about gene patenting I will believe.”  I have said similar things to friends about a variety of topics, and I engage in a similar calculus when I trust a blog’s critique of a journal article or a friend’s explanation of how a program on my computer works.  I could verify any given claim, but it would take days, weeks, months, or even years that I don’t want to spend.  So I trust.

There have already been many efforts to quantify trust.  In a way, Google does this: PageRank measures not just which websites are popular, but which are popular among popular websites.  One can view that as “trusting” the popular pages.  Less abstractly, sites like Yelp, Amazon, Airbnb, and many more monetize trust by making digestible our collective opinions of goods and services.  Some of these sites, like Yelp, have a social networking aspect, highlighting reviews from friends.  All of the sites allow some type of verification, in the form of reading reviews.  It’s not surprising to me that most efforts towards quantifying trust online are commercial – I would expect nothing less from our hypercapitalist culture – but it does lead to some ironies.  Thus TrustCloud, “a real time curated global positive reputation data service”, boasts that “our proprietary algorithms look for behaviors like responsiveness, consistency and longevity in online behavior”.  By keeping their methods proprietary they are, of course, saying “trust us”.
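
Since PageRank is the clearest published example, it’s worth seeing how little machinery the core idea needs.  Below is a toy power-iteration sketch of the basic calculation described in the original PageRank paper – not Google’s production system, and the example graph is made up:

    def pagerank(links, damping=0.85, iterations=50):
        """Toy PageRank: links maps each page to the pages it links to."""
        pages = list(links)
        rank = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            # every page keeps a small baseline score...
            new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
            # ...and hands the rest of its score out along its outgoing links,
            # so an endorsement from a highly ranked page is worth more
            for page, outgoing in links.items():
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
            rank = new_rank
        return rank

    # b and c both link to a, so a ends up most "trusted": popular among
    # the popular.  (A real implementation would also handle pages with
    # no outgoing links; this sketch quietly ignores that case.)
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))

The striking thing is that trust propagates: a link from a trusted page confers more trust than a link from an obscure one, which is exactly the kind of leverage that implicit trust networks lack.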

If commercial efforts towards quantifying trust are hamstrung by the assumed need for secrecy, what about non-commercial efforts?  There are many examples of implicit trust networks facilitating non-commercial projects.  The efforts of Debian Developers and Cochrane Collaborators spring to mind.  And yet, as I said, this trust is implicit and therefore difficult to leverage. We believe, holistically, that a Cochrane review is reliable, and we can verify, with much effort, that a review has been done rigorously, but there is no automated way for us to share that verification with others, nor to build up a track record of whose work is consistently verified.  Unsurprisingly, the Debian community automates and quantifies a great deal more than Cochrane. Debian, a free software operating system, relies on individuals to manage sub-parts of the project, known as packages.  A number of statistics about these packages are compiled, including how often they’re used, how many predictable errors and warnings they provoke, and the number of bugs reported over each package’s lifetime.  But it’s not clear how much these tools, as opposed to implicit social trust networks, contribute to the success – or even the continued functioning – of the project.

If a non-commercial project (or a creatively and transparently commercial project) were to quantify trust, what would that look like?  I’d argue that it would have the following characteristics (there’s a rough sketch in code after the list):

  • The methodology would be entirely transparent, even if some or all of the data being processed was kept private.
  • Such a system would facilitate verification to the fullest extent possible, while utilizing partial and incomplete verification.
  • This system would also enable snap judgments, distilling a multi-faceted trust network down to a single number or rating. Users could then choose what amount of engagement, from “snap judgment” to “full verification”, they care to perform.
  • It would follow the lead of social networking sites and recognize personal connections as a key source of trust.
  • It would allow individuals to disclose self-distrust by incorporating measures of confidence in their own statements, claims, verification processes, etc.
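
To make that list more concrete, here is a minimal sketch of what such a system’s core objects might look like.  Everything in it – the names, the weights, the scoring rule – is hypothetical, an illustration of the characteristics above rather than an actual design:

    from dataclasses import dataclass, field

    @dataclass
    class Claim:
        author: str
        statement: str
        self_confidence: float  # the author's own confidence, 0.0 to 1.0
        # (verifier, confidence) pairs from people who checked the claim
        verifications: list = field(default_factory=list)

    def snap_judgment(claim, my_friends):
        """Distill a claim's trust data into a single number.

        Transparent by construction: the rule is just a weighted average
        of verifiers' confidence, with friends counted double.  Anyone who
        wants more than a snap judgment can read the verifications instead.
        """
        if not claim.verifications:
            return claim.self_confidence
        total = weight = 0.0
        for verifier, confidence in claim.verifications:
            w = 2.0 if verifier in my_friends else 1.0  # personal connections count extra
            total += w * confidence
            weight += w
        return total / weight

    claim = Claim("madeleine", "Gene patenting restricts research.", 0.9,
                  [("alice", 0.8), ("bob", 0.6)])
    print(snap_judgment(claim, my_friends={"alice"}))  # 0.733...

The shape is what matters here: the scoring rule is public, self-reported confidence is a first-class input, personal connections carry extra weight, and a user can engage anywhere along the spectrum from the single number to the full list of verifications.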

That’s just a first stab.  Already I can see problems, the most obvious being how terribly sad I would feel if I had a low trust score.  I will continue reading and thinking so that I can refine these ideas in a future blog post.

Thanks to Daf for answering some questions about Debian.