This past week, I talked to 10 candidates for our first round of summer intern interviews. It was awesome to see the enthusiasm and energy in the applicants, as well as the range of people who applied. On the job listing, we stated that “experience is preferred, but not required”—so we got a lot of MBA students, young professionals, recent graduates… and, to my surprise, one rising sophomore.
This young man, let’s call him Carl, realized there was no downside to having a conversation about the role. Because if he got picked to have a conversation, he could pitch why, despite how young he is, he’s qualified for the role. And if he impressed (which he did), he could put himself in a position to land future roles or be referred to other opportunities. He would also make a connection in the venture industry that he wouldn’t otherwise have had. Having spoken to him, I can put a face to the name, and I have a positive impression.
Because we had such fantastic candidates, Carl didn’t move on to the next round. But, he stands out in my mind for having the chutzpah and confidence to put himself out there. At Spero, we are giving some serious consideration to having ongoing, in-term interns, and Carl will be on the very short list of people I’d call if that opportunity opens up.
This story connects to the broader issue of how requirements are written and what they actually mean. Most of the time, a requirement is a proxy for a skill that people want to see, one they believe you need to be successful in the role.
For example, in a job description:
An MBA is a proxy for analytical skills, basic financial skills, modeling basics, and an ability to evaluate businesses.
Experience in a specific role/industry is a proxy for the ability to hit the ground running versus needing people to spend time explaining how the industry works.
Years of experience is a proxy for maturity, the ability to collaborate with people, and the ability to handle the kinds of decisions the role will require.
And when raising venture investment, $1M revenue is often a proxy for product-market fit. There may be other ways to show PMF without being at exactly $1M.
This is not to say that every prerequisite is a proxy, or that you should fudge the facts. If you’re 2 credits short of your MBA, don’t say you have an MBA — but don’t automatically assume you’ll be rejected, either.
In our summer intern posting, we didn’t specify a single hard requirement, not even a college degree. I believe the whole professional world is moving towards tours of duty, which don’t depend on credentials like that. One of the smartest product designers I know doesn’t have an undergraduate degree. He is awesome and has accomplished a lot at some of the very best tech companies in Silicon Valley.
Next time you see one of those proxies, if you feel you’ve demonstrated what the proxy is asking for, don’t hesitate to apply. I’m fairly confident that Carl will do something interesting with his life. He recognizes that he is the asset, and despite his youth, he understands the concept of proxies. He took the chance to put himself out there, and in nearly every situation, that’s better than not applying.
We’ve all seen the experiment where children are asked to draw scientists, right? 75% of 16-year-old girls draw scientists as men. And while this has improved markedly over the decades, it saddens me that certain professions still seem to have a default gender.
“Programmer” is one of the professions that is default male. But those who are aware of tech history know that in the 1940s, many of the first programmers were women. So why do we now think of programmers as male by default?
As I recently discovered in Marie Hicks’ book Programmed Inequality, this was not by happenstance—at least not in Britain. There, during and after WWII, authorities made deliberate decisions to put women at the bottom of the technological totem pole.
Here’s a representative anecdote that starts the book and made me wince at its unfairness:
In 1959, a computer operator embarked on an extremely hectic year, tasked with programming and testing several of the new electronic computers on which the British government was becoming increasingly reliant. In addition, this operator had to train two new hires with no computing experience for a critical long-term project in the government’s central computing installation. After being trained, the new hires quickly stepped into management roles, while their trainer, who was described as having “a good brain and a special flair” for computer work, was demoted to an assistantship below them. This situation seems to make little sense until you learn that the trainer was a woman, and the newly hired trainees were men.
To understand what happened, rewind to the 1940s, when women were actively recruited to help with the war, and in particular, to staff the code-breaking effort at Bletchley Park, which was critical to the Allies in World War II.
Women were responsible for setting up and running the machines. They also took charge of the significant amount of manual operation needed to make the system work, which included reading the codes, transferring them to punched tape, and operating the machines. As the war progressed, the workload increased, and the women worked around the clock, in three shifts, to ensure the machines were always in operation. All of this critical work relied entirely on women.
At the end of the war, the women were forbidden to talk about what they were doing at Bletchley because the work was top-secret codebreaking stuff. This led to women being effectively erased from the earliest association with computers.
After the war, between 1946 and 1955, Britain deployed computers to process vast quantities of data. Women were relied upon to keep the computers operational, but both the private and public sectors took advantage of sexist policies and laws to keep women at the lowest tier of the emerging industry. For example, companies used the “marriage bar” (which was still on the books but had gone unenforced during the war) to fire a woman once she got married. This policy had no benefit to the employer, who lost a valuable, trained worker, but it kept women’s salaries low and reinforced the cultural norm of keeping women dependent. It also led a number of professional women to keep their marriages secret. The Civil Service actually created a whole new job grade to ensure that women could not rise past a certain tier and were denied promotions. The contortions went so far as to designate certain (lower) roles as women-exclusive, where men could not even apply.
Mary Lee Berners-Lee, the mother of Tim Berners-Lee, worked at Ferranti in the 1950s. She recalled that women programmers performing the same work as men were paid less because “Ferranti was a paternal firm” that believed “men would have to support a wife and children so they needed more money.”
On top of all that, these women were treated as menial workers, which they were not. In the media and in recruiting ads, they were portrayed as being part of the machine, merely helping the machine perform well. Many of the “programmer” roles were deliberately made to look unappealing.
What the authorities did not anticipate was that computing would become the most revolutionary sector, and that by making it so unattractive they would shoot themselves in the foot by repelling male candidates. The continued suppression of women, at times to the detriment of the industry and the country, fostered a belief that anything involving machines didn’t require real intellect, so men did not want these jobs. The narrative designed to keep women down was about to backfire.
In the mid-sixties, with government support, the field started to become exciting. The income potential of the industry increased. In order to appeal to men, while still keeping women boxed into the lower tiers, the language started to change, drawing a line between feminine and masculine, menial and intellectual, operators and programmers. Advertising suggested that the computers were “operated by a typist, not highly paid programmers and controllers.” The typists were, of course, women.
In 1970, Britain passed the Equal Pay Act, which eliminated different salaries for men and women. But the decades of structural inequality would still take their toll. While male employment in the industry expanded in the higher-paid tiers, women’s employment increased only in the lower tiers.
Britain, which was the most technologically advanced nation during the Second World War, did a number of things to shackle itself and fritter this advantage away. Instead of espousing free market policies, they chose socialist and protectionist ones. Instead of letting private enterprises thrive, the government formed International Computers Limited (ICL), a computer company that was intended to compete with the best in the world. And instead of letting the best programmers have successful and unshackled careers, Britain let decades of gender discrimination mire it in constant labor shortages and high turnover.
It might soothe our nerves to think all of this is history. However, in 2012, the London Science Museum held an exhibit on wartime codebreaking where the women operators were erased, yet again. The exhibit stated that Bletchley’s “machines operated around the clock,” but never mentioned that they did so only because women worked three shifts. And “the two [females] in the only surviving picture of a Colossus being operated are not named in any of the exhibits at the UK’s Bletchley Park Historical Site and National Computing Museum, although their identities are known.”
In 2012!
I read Programmed Inequality by Marie Hicks in order to understand what we could learn from it and how we could identify this kind of systemic, cultural, and biased decision making when we see it.
The biggest takeaway is captured by Hicks:
These women’s experiences also elucidate the power dynamics behind how technology often heightens existing power differences.
Technology is built within the existing legal and cultural norms of society. We must pay attention to this, because even as the tech industry makes progress, it can reinforce unfair prevailing norms instead of alleviating them. It can strengthen the disparity instead of ensuring a level playing field.
Tech alone cannot cure all of society’s ills, but it can play an important role. To do so, tech leaders need to think about how technology can enable access rather than reinforce existing power dynamics.
Because while opportunity is not equally distributed, talent certainly is. And in order not to repeat the mistakes of the past, we must learn from history and from the first women in tech.
Many years ago, I had a junior person on my product team who was:
Constantly questioning why she “had” to be at meetings
Pushing back against tasks and responsibilities assigned to her, saying they were a waste of time
Constantly questioning long-established company culture
And, by pushing and questioning, being annoying.
Knowing only those things, a lot of people would say, “That person doesn’t seem like a good culture fit; you should get rid of her.”
But this same person was also:
Hardworking beyond belief
A brilliant product manager
A flawless executor
Beloved by engineers and designers.
She was (and is) awesome. She made the whole company better by launching an incredibly important product that the community loved, and which drove real revenue. She had a massive impact on all the organizations where she worked. And on top of that, she went on to become a dear friend.
People like that—I call them challengers—can often be the smartest people in the room. They question because they want to understand the logic and thinking behind ingrained cultural habits. They want to use their time well, and they want to make a big impact.
These are the people who push organizations to be better. Organizations that don’t have these challengers don’t succeed in the long term, because being challenged is what leads to growth. Every organization needs at least a handful of these people. They don’t have to win every fight, but they shouldn’t lose every fight either.
Yes, they can also be incredibly annoying, require a fair amount of time to manage, and drive you to distraction. That’s the price the organization pays, in terms of friction and time, for having dissenters in the ranks. But those dissenters are worth their weight in gold. If every employee is drinking the Kool-Aid, the company won’t question long-held beliefs, challenge itself, or stretch in new and interesting ways.
So when you recruit, look beyond the pre-converted. Hire the people who have some doubts or reservations. When you do have a challenger on your team, don’t crush their spirit. They will point out that maybe you aren’t the best thing since sliced bread, and the outside world may view your “brilliant and fair” idea through a different lens. Challengers teach you to be open to questioning yourself and changing your mind, and that’s how you grow.
To tie this in to current events: big systems, and societies, need challengers, too. Instead of seeing the challengers as annoyances who are not worth your time, try to listen to their point of view. Regardless of which “side” they may be on, there may be times when you agree with them. And if you don’t entirely agree, if you are willing to listen, you might be able to arrive at a solution that incorporates good ideas from different constituents.
Twitter took a (somewhat) principled stand last week. They exhibited some understanding of the fact that product leadership extends beyond creating a product: it also entails product stewardship. By “product stewardship” I mean taking a stand on what people should and shouldn’t be able to do with your product.
Jony Ive put it well when he was interviewed about the iOS feature Screen Time: “If you’re creating something new, it is inevitable there will be consequences that were not foreseen,” he said. “It’s part of the culture at Apple to believe that there is a responsibility that doesn’t end when you ship a product.”
This is an idea I’ve been thinking about since I was at eBay. As I wrote a couple of years ago:
When I led product at eBay, we wanted to be “a well-lit place to trade.” The company’s mission was “to empower people by connecting millions of buyers and sellers around the world and creating economic opportunity.” That was the intention. But as we scaled, people began to use eBay in ways we hadn’t predicted. At one point people began trading disturbing items, including Nazi memorabilia. As we thought about how to solve it, we asked ourselves a few questions: Who are we? What do we believe? Why did we create this product? Once we framed it in terms of core values, the decision about what to do became clear. The company decided to ban all hate-related propaganda, including Nazi memorabilia.
This week, yet another black man was murdered in broad daylight because of the color of his skin. It makes me ill. And it makes stewardship even more relevant.
Twitter, much like eBay in the past, asked themselves who they were, what they believed in, and why they created the product. But unlike eBay, they were not bold. Instead, they took a baby step forward last week and showed that they were finally willing to face the consequences of taking a (little) stand.
You may not threaten violence against an individual or a group of people. We also prohibit the glorification of violence.
You may not threaten or promote terrorism or violent extremism.
You may not engage in the targeted harassment of someone, or incite other people to do so.
You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
And until last week, Twitter had declined to enforce these policies when it came to one user: Donald Trump. Twitter’s reasoning was that as the President, his tweets were of public interest, and different rules applied to him. And so for years, Twitter allowed the most powerful man to:
threaten violence
threaten or promote extremism
engage in harassment
promote violence and threaten harassment on the basis of race
Twitter allowed the product they had built with love and care to be abused and defiled. And then, last week, Donald Trump went too far, even for Twitter.
First, Trump tweeted falsehoods about mail-in ballots. This violated Twitter’s Election Misinformation policies which have existed since 2018. So, Twitter enforced their policies and added a misinformation label to his tweet.
Since Trump is also a child (and a dictator wannabe), he went after a Twitter policy employee, who received numerous death threats as a result. He also threatened to revoke Section 230.
Second, Trump used language that included a threat of violence, which he has done before. This time, finally, Twitter hid the tweet behind a warning.
This immediately divided the Twitterverse. Many, who felt this small step was long overdue, were glad. Others felt that this was a huge violation of free speech.
I disagree with the second group. Twitter is a private company, which means the First Amendment does not apply. The First Amendment applies to the government and how it deals with citizens’ First Amendment rights. A private company can say “No shirt, No service”, or a restaurant can ask someone to leave if they start cursing loudly. In addition, Twitter actually has rules and policies that they have not been enforcing with Trump. Those rules aren’t new. Starting to enforce the existing rules fairly, and applying them to all users, is not a restriction. If Trump were not President, he’d have been suspended a long time ago.
So, at least they did something. They decided to enforce a version of the rules. Sort of lame. But better than nothing (our new low standard).
Meanwhile the other big platform in tech land is, of course, Facebook. They also have rules and they decided that the rules do not apply to Trump. They gave the racist-in-chief carte blanche. They effectively and fully caved. And with that, they picked a side.
As product leaders, we all want people to love what we create. But people often use our products in ways we never could have predicted. Once we release something into the world, it belongs to the users — and sometimes they use our products in unexpected and negative ways. We can’t be held responsible for what they do with it… right?
We have become painfully aware of what can happen when the tools we use encourage our worst instincts and amplify the most virulent voices. In the past few months, there have been several violent incidents in which the suspects had been vocal about their beliefs on social media. Do the platforms really have no control over the ways in which their products are used? That feels both naive and untrue.
My friend Ashita Achuthan, who used to work at Twitter, said this: “Technology’s ethics mirrors society’s ethics. As technologists we apply a set of trade offs to the design decisions we make. While we are responsible for thinking through the second and third order effects of our choices, it is impossible to predict every use of our products. However, once a new reality emerges, it is our responsibility to ask who has the power to fix things. And then fix them.”
To be clear, these decisions are not easy because these are complex problems. There are legal considerations, there are social considerations, there are moral and ethical considerations. When platforms are used globally, these decisions are hard to rush. Policy teams, business teams, and product teams agonize over where to draw the line and the unintended consequences of these decisions. As I tweeted on Thursday night, regardless of the decision, people will criticize it. But at the end of the day, these decisions have to be made. That is the job when you run a company like this.
In the past week, two white male CEOs—Jack Dorsey and Mark Zuckerberg—made two different choices. They have shown us who they are. It is now up to us (the users, the employees, the investors) to decide if we want to support that.
What can one do? If I use a platform that puts the most vulnerable at risk, I can stop using the platform. If I work at a company, I can evaluate whether my values and those of leadership match—my time matters, where I work and who I enrich with my work matters. If our values are aligned, and it’s a disagreement on tactics, I can try to convince the leadership to see the logic of my argument. If my values and those of the leadership are not aligned (and if I am in a position to do so) I might consider leaving to work at a company which is more aligned with my values. And if I am an investor in a company that doesn’t share my values, I can sell my shares.
The United States of America was built on abusing black men, women, and children (in addition, of course, to Native Americans). That some of our fellow citizens live in fear and get killed on a whim is not acceptable. We should not allow this to keep happening. If you live in America, whether you were born here or came here later in life (like I did), everything we have is built on top of black bodies.
It is our job, as human beings and as leaders, to stand up for what’s right. And it is not right that the biggest bully in our country can use the platforms that were built to connect people to threaten and intimidate the most vulnerable. The leaders of these companies need to be stewards of their platforms and make sure the platforms are not used to harm people in the real world.
Thank you to Ashita Achuthan for reading drafts of this post. Here are the related post and video on Product Leadership is Product Stewardship.
It was the summer of 2012, and most of the class was on draft 63 of their soon-to-be-perfect first feature script. But first, we each planned to submit draft 79 to all the prestigious film labs. There, we would get input from auteurs we admired. Then, we’d make the perfect film, it would open to acclaim at the perfect festival, and get acquired and released nationwide. That was the plan.
That same summer, Charles and Sarah-Violet (SV) had a very different plan. Instead of perfection, they decided to create immediately. They cranked out a feature script. They each borrowed $40K through student loans. Knowing they were on a tight budget, they wrote about a world they knew (deep Brooklyn), with only a small handful of locations (all in NY), and very few characters. They didn’t submit the script to any labs. They didn’t apply for any grants. They did not wait.
They planned the shoot. They cast fantastic actors, some of whom they’d known for years. One of our classmates was the cinematographer.
They shot their feature. They edited their feature.
They did it all on a total of $110K. Tiny, even by indie standards.
One year later, they submitted it to festivals. The movie, Fort Tilden, premiered at SXSW. It won SXSW. And that set SV and Charles on a different trajectory. They were writers on the Netflix show Wet Hot American Summer and now have their own, very successful show on TBS, Search Party.
I share this story to show the power of ignoring gatekeepers. There are a few big steps in making a feature film: write a script, prep and plan the shoot, shoot, edit, release. Every step depends on funding. You could wait for funding at each stage, basically asking for permission from someone else to make your film. Or you could do what SV and Charles did: make the best movie within the constraints they faced and the funds they were able to access. No waiting, no permission needed.
Don’t get me wrong: this is definitely not an easy or guaranteed path. I spoke with SV recently about her story, and she said, “(Taking out those loans) was still a huge insane risk I wouldn’t exactly recommend for everyone. But it felt right. So I’m always very careful to say, ‘Look, this is how we did it, and it worked out for us. I have some success but I also still have student debt.’ That said, I do NOT regret it. Not everyone would be comfortable with the position I put myself in, but it was right for me. I had a lot of clarity in the process and risking the money didn’t scare me. Waiting years and years to find funding or someone to approve of my voice was a much scarier fate.”
If you follow the SV & Charles model, you will have a real, live product. A product which people can see and enjoy. A product that people can evaluate and say “hey, they won SXSW on a tiny budget.”
Given the choice between being constrained, but still making something, versus waiting for the “ideal” situation, what would you pick? While most of the class was dreaming of the perfect first feature, SV and Charles made their first feature. That was enough to launch them into a world that is very hard to break into.
Breaking into tech is easier because angels and early funders (the gatekeepers) are willing to fund first-time founders. But it’s not always easy to raise your angel or pre-seed round.
Look at the funds and skills that you have. Decide how much risk you want to take — each person has their own comfort level and you should be the one that decides what is best for you. And then, design and build something using your skills and your budget. If you build something people love, you will have a little success. And that little success can propel you onto your next opportunity. And then onto the next opportunity. And each project or startup could get better. The gatekeepers will then come to you (and I say that as a venture investor).
In my film school class, every single person had ambition, and most had a great idea. But SV and Charles just did it. And they went from strength to strength. You can, too.