Hardware as a Moat

Photo credit: Peloton

When investors look at companies, one of the things we try to understand is whether the company can build a moat. In tech, a moat is a conscious business design choice that allows a company to develop and maintain a sustained competitive advantage over other players in the space.

Economies of scale and network effects are two of the best-known moats, but there are others, like deep tech. Today, I want to propose a new one: hardware. I’m talking about products like Peloton, where the hardware is central to the full experience.

In the wellness space, software solves real problems, and the app ecosystem is thriving. These apps help with things like meditation, period tracking, sleep tracking, and exercise of all kinds. But in a crowded market like that, investors ask, “How does a company win?”

Now, when you have “just” apps in these spaces, with no connected hardware, they are fungible. To move from one app to another, you just… click. If you have paid an annual subscription, you might wait until it expires, or you might just eat the cost and try a new app regardless. With these apps, there may be UI differences or instructor differences, but the experience is largely similar. The customer choice is made based on relatively small differences.

As my colleagues and I have written, in a software-only space where differences are minimal, a product-led community can be a great moat. Another moat could be sales to enterprises, where a company buys the app/service as a benefit for employees. If you lock up as many companies as possible, the employees of those companies quickly become your customers. But this kind of lockup, while valuable, may not be driven by passion or love for the product.

So, a moat that could be longer-lasting is hardware. Yes, hardware often scares VCs because it can be complicated. You need to create processes around mailing things out, dealing with returns, managing production challenges, and more. The list of issues can seem endless.

However, hardware has changed over the past few years, becoming easier to manage than it was 10 years ago. A whole ecosystem of companies has emerged to support hardware companies with many of the logistical issues. And engineers, product people, designers, and marketers who have built and shipped multiple versions of good consumer hardware are available in the market.

In a crowded software market, hardware can be your moat.

The most successful of these companies is Peloton. If you buy a bike (or presumably a Tread), you have to pay Peloton a $39-a-month subscription just to get your hardware to turn on. And while your parents’ generation ended up using exercise equipment mainly as a clothes horse, these new devices are a joy. They are well-designed, they are connected, they often have community elements, and they have a rich user experience that’s compelling and powerful. So, sure, much like your parents, you could turn your Peloton into a dumping ground for your shorts and t-shirts, but once you’ve bought a device that costs several thousand dollars, the additional monthly pinch of $39 likely ensures that you actually use the device. Their numbers back this up: their S-1 claims 95% 12-month retention for those who buy their hardware. Even if the real number is a bit lower, as some analysts argue, it’s still incredibly strong. And just yesterday, validating this point of view, Lululemon bought Mirror, a connected exercise mirror, for $500M.

Photo Credit: Core

It is even better when the hardware gives you unique data. For example, our portfolio company Core provides your heart rate, your HRV, your minutes of calm, and your minutes of focus for each session. Because you are holding a device that vibrates to give you a point of focus, and that same device gives you all this data, once you’ve bought a Core, you’re unlikely to want to use anything else to meditate. The same goes for Peloton.

In addition, unlike the other apps on your phone, which all live behind the black glass screen, hardware is visible. You can see your Peloton bike in your house, your Core trainer on your desk or bedside table. By being visible, and by being well-designed, these devices beckon you to use them and enjoy their tactile, live user experience. They exist in the real world.

These are still the early days of connected hardware, but between the physical presence, the cost paid to acquire the device (which could be a proxy for customer commitment), the tactile experience, and the data generated, these devices can serve as a moat in crowded software markets. Over the next few years, companies will evolve their models and create fantastic new experiences for consumers. Both as a user and as an investor, I am excited to see where this world goes, and I’m excited to use these beautifully designed products and apps to achieve my goals.

Ignore the Requirements, Check the Proxies

Photo by Free To Use Sounds on Unsplash

This past week, I talked to 10 candidates for our first round of summer intern interviews. It was awesome to see the enthusiasm and energy in the applicants, as well as the range of people who applied. On the job listing, we stated that “experience is preferred, but not required”—so we got a lot of MBA students, young professionals, recent graduates… and, to my surprise, one rising sophomore.

This young man, let’s call him Carl, realized there was no downside to having a conversation about the role. If he got picked for a conversation, he could pitch why, despite how young he is, he’s qualified for the role. And if he impressed (which he did), he could put himself in a position to land future roles or be referred to other opportunities. He would also make a connection in the venture industry that he wouldn’t otherwise have had. Having spoken to him, I can put a face to the name, and I have a positive impression.

Because we had such fantastic candidates, Carl didn’t move on to the next round. But he stands out in my mind for having the chutzpah and confidence to put himself out there. At Spero, we are giving some serious consideration to having ongoing, in-term interns, and Carl will be on the very short list of people I’d call if that opportunity opens up.

This story connects to the broader issue of how requirements are written and what they actually mean. Most of the time, a requirement is a proxy for a skill that people want to see, one they believe you need to be successful in the role.

For example, in a job description:

An MBA is a proxy for analytical skills, basic financial skills, modeling basics, and an ability to evaluate businesses.

Experience in a specific role/industry is a proxy for the ability to hit the ground running versus needing people to spend time explaining how the industry works.

Years of experience is a proxy for maturity, the ability to collaborate with people, and the ability to handle the kinds of decisions the role will require.

And when raising venture investment, $1M revenue is often a proxy for product-market fit. There may be other ways to show PMF without being at exactly $1M.

This is not to say that every prerequisite is a proxy, or that you should fudge the facts. If you’re 2 credits short of your MBA, don’t say you have an MBA — but don’t automatically assume you’ll be rejected, either.

In our summer intern posting, we didn’t specify a single hard requirement, not even a college degree. I believe the whole professional world is moving towards tours of duty, which don’t depend on credentials like that. One of the smartest product designers I know doesn’t have an undergraduate degree. He is awesome and has accomplished a lot at some of the very best tech companies in Silicon Valley.

Next time you see one of those proxies, if you feel you’ve demonstrated what the proxy is asking for, don’t hesitate to apply. I’m fairly confident that Carl will do something interesting with his life. He recognizes that he is the asset and, despite his youth, he understands the concept of proxies. He took the chance to put himself out there, and in nearly every situation, that’s better than not applying.

What We Can Learn from the First Women in Tech

A child’s drawing of a scientist, from a Draw-a-Scientist study (VASILIA CHRISTIDOU)

We’ve all seen the experiment where children are asked to draw scientists, right? 75% of 16-year-old girls draw scientists as men. And while this has improved markedly over the decades, it saddens me that certain professions still seem to have a default gender.

“Programmer” is one of the professions that is default male. But those who are aware of tech history know that in the 1940s, many of the first programmers were women. So why do we now think of programmers as male by default?

As I recently discovered in Marie Hicks’ book Programmed Inequality, this was not by happenstance—at least not in Britain. There, during and after WWII, authorities made deliberate decisions to put women at the bottom of the technological totem pole.

Here’s a representative anecdote that starts the book and made me wince at its unfairness:

In 1959, a computer operator embarked on an extremely hectic year, tasked with programming and testing several of the new electronic computers on which the British government was becoming increasingly reliant. In addition, this operator had to train two new hires with no computing experience for a critical long-term project in the government’s central computing installation. After being trained, the new hires quickly stepped into management roles, while their trainer, who was described as having “a good brain and a special flair” for computer work, was demoted to an assistantship below them. This situation seems to make little sense until you learn that the trainer was a woman, and the newly hired trainees were men.

To understand what happened, rewind to the 1940s, when women were actively recruited to help with the war, and in particular, to staff the code-breaking effort at Bletchley Park, which was critical to the Allies in World War II.

Women were responsible for setting up and running the machines. They also took charge of the significant amount of manual operation needed to make the system work, which included reading the codes, transferring them to punched tape, and operating the machines. As the war progressed, the workload increased, and the women worked around the clock, in three shifts, to ensure the machines were always in operation. All of this significant and important work relied entirely on women.

At the end of the war, the women were forbidden to talk about what they had done at Bletchley because the work was top-secret codebreaking stuff. As a result, women were effectively erased from the earliest history of computing.

After the war, between 1946 and 1955, Britain deployed computers to enable the country to process vast quantities of data. Women were used to keep the computers operational, but both the private and public sectors took advantage of sexist policies and laws to keep women at the lowest tier of the emerging industry. For example, companies used the “marriage bar” (which was still on the books but had gone unenforced during the war) to fire a woman once she got married. This policy had no benefit to the employer, who lost a valuable and trained worker, but it kept women’s salaries low and reinforced the cultural norm of keeping women dependent. It also led a number of professional women to keep their marriages secret. The Civil Service went so far as to create a whole new job grade to ensure that women could not rise past a certain tier and were denied promotions. The contortions even extended to designating certain (lower) roles as women-exclusive, so that men could not even apply.

Mary Lee Berners-Lee, the mother of Tim Berners-Lee, worked at Ferranti in the 1950s. She recalled that women programmers performing the same work as men were paid less because “Ferranti was a paternal firm” that believed “men would have to support a wife and children so they needed more money.”

On top of all that, these women were portrayed as menial workers, which they were not. In the media and in recruiting ads, they were presented as mere parts of the machine, there only to keep it performing well. In other words, a lot of the “programmer” roles were made to look unappealing on purpose.

What the authorities did not anticipate was that computing would become the most revolutionary sector, and that by making it look so unattractive, they would shoot themselves in the foot by repelling male candidates. The continued suppression of women, at times to the detriment of the industry and the country, fostered a belief that anything involving machines didn’t require real intellect, so men did not want these jobs. The narrative designed to keep women down was about to backfire.

In the mid-sixties, with the support of the government, the field started to become exciting, and the income potential of the industry increased. In order to appeal to men while still keeping women boxed into the lower tiers, the language started to change, to indicate a segregation between feminine and masculine, menial and intellectual, operators and programmers. Advertising suggested that the computers were “operated by a typist, not highly paid programmers and controllers.” The typists were, of course, women.

In 1970, Britain passed the Equal Pay Act, which eliminated different salaries for men and women doing the same work. But the decades of structural inequality would still take their toll. While male employment in the industry expanded in the higher-paid tiers, women’s employment only increased in the lower tiers.

Britain, which was the most technologically advanced nation during the Second World War, did a number of things to shackle itself and fritter this advantage away. Instead of espousing free-market policies, it chose socialist and protectionist ones. Instead of letting private enterprises thrive, the government formed International Computers Limited (ICL), a computer company intended to compete with the best in the world. And instead of letting the best programmers have successful and unshackled careers, Britain let decades of gender discrimination mire it in constant labor shortages and high turnover.

It might soothe our nerves to think all of this is history. However, in 2012, the London Science Museum held an exhibit on wartime codebreaking in which the women operators were erased yet again. The exhibit stated that Bletchley’s “machines operated around the clock,” but never mentioned that they did so only because of the women who worked three shifts. And “the two [females] in the only surviving picture of a Colossus being operated are not named in any of the exhibits at the UK’s Bletchley Park Historical Site and National Computing Museum, although their identities are known.”

In 2012!

I read Programmed Inequality by Marie Hicks in order to understand what we could learn from it and how we could identify this kind of systemic, cultural, and biased decision-making when we see it.

The biggest takeaway is captured by Hicks:

These women’s experiences also elucidate the power dynamics behind how technology often heightens existing power differences.

Technology is built within the existing legal and cultural norms of society. We must pay attention to this, because even as the tech industry makes progress, it can reinforce unfair prevailing norms instead of alleviating them. It can strengthen the disparity instead of ensuring a level playing field.

Tech alone cannot cure all of society’s ills, but it can play an important role. To do so, tech leaders need to think about how technology can enable access rather than reinforce existing power dynamics.

Because while opportunity is not equally distributed, talent certainly is. And in order not to repeat the mistakes of the past, we must learn from history and from the first women in tech.

Seek the Challengers

Photo by Shelagh Murphy on Unsplash

Many years ago, I had a junior person on my product team who was:

  • Constantly questioning why she “had” to be at meetings
  • Pushing back against tasks and responsibilities assigned to her, saying they were a waste of time
  • Constantly questioning long-established company culture
  • And, by pushing and questioning, being annoying.

Knowing only those things, a lot of people would say, “That person doesn’t seem like a good culture fit; you should get rid of her.”

But this same person was also:

  • Hardworking beyond belief
  • A brilliant product manager
  • A flawless executor
  • Beloved by engineers and designers.

She was (and is) awesome. She made the whole company better by launching an incredibly important product that the community loved, and which drove real revenue. She had a massive impact on all the organizations where she worked. And on top of that, she went on to become a dear friend.

People like that—I call them challengers—can often be the smartest people in the room. They question because they want to understand the logic and thinking behind ingrained cultural habits. They want to use their time well, and they want to make a big impact.

These are the people who push organizations to be better. Organizations that don’t have these challengers don’t succeed in the long term, because being challenged is what leads to growth. Every organization needs at least a handful of these people. They don’t have to win every fight, but they shouldn’t lose every fight either.

Yes, they can also be incredibly annoying, require a fair amount of time to manage, and drive you to distraction. That’s the price the organization pays, in terms of friction and time, for having dissenters in the ranks. But those dissenters are worth their weight in gold. If every employee is drinking the Kool-Aid, the company won’t question long-held beliefs, challenge itself, or stretch in new and interesting ways.

So when you recruit, look beyond the pre-converted. Hire the people who have some doubts or reservations. When you do have a challenger on your team, don’t crush their spirit. They will point out that maybe you aren’t the best thing since sliced bread, and the outside world may view your “brilliant and fair” idea through a different lens. Challengers teach you to be open to questioning yourself and changing your mind, and that’s how you grow.

To tie this in to current events: big systems and societies need challengers, too. Instead of seeing challengers as annoyances who are not worth your time, try to listen to their point of view. Regardless of which “side” they may be on, there may be times when you agree with them. And if you don’t entirely agree, but you are willing to listen, you might be able to arrive at a solution that incorporates good ideas from different constituents.

Twitter is Less Wrong. Facebook is More Wrong. Values Matter.

Photo by Luis Quintero on Unsplash. Icons added by me.

Twitter took a (somewhat) principled stand last week. They exhibited some understanding of the fact that product leadership extends beyond creating a product: it also entails product stewardship. By “product stewardship” I mean taking a stand on what people should and shouldn’t be able to do with your product.

Jony Ive put it well when he was interviewed about the iOS feature Screen Time: “If you’re creating something new, it is inevitable there will be consequences that were not foreseen,” he said. “It’s part of the culture at Apple to believe that there is a responsibility that doesn’t end when you ship a product.”

This is an idea I’ve been thinking about since I was at eBay. As I wrote a couple of years ago:

When I led product at eBay, we wanted to be “a well-lit place to trade.” The company’s mission was “to empower people by connecting millions of buyers and sellers around the world and creating economic opportunity.” That was the intention. But as we scaled, people began to use eBay in ways we hadn’t predicted. At one point people began trading disturbing items, including Nazi memorabilia. As we thought about how to solve it, we asked ourselves a few questions: Who are we? What do we believe? Why did we create this product? Once we framed it in terms of core values, the decision about what to do became clear. The company decided to ban all hate-related propaganda, including Nazi memorabilia.

This week, yet another black man was murdered in broad daylight because of the color of his skin. It makes me ill. And it makes stewardship even more relevant.

Twitter, much like eBay in the past, asked themselves who they were, what they believed in, and why they created the product. But unlike eBay, they were not bold. Instead, they took a baby step forward last week and showed that they were finally willing to face the consequences of taking a (little) stand.

Twitter has a set of rules and policies. They include:

  • You may not threaten violence against an individual or a group of people. We also prohibit the glorification of violence.
  • You may not threaten or promote terrorism or violent extremism.
  • You may not engage in the targeted harassment of someone, or incite other people to do so.
  • You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.

And until last week, Twitter had declined to enforce these policies when it came to one user: Donald Trump. Twitter’s reasoning was that, because he was the President, his tweets were of public interest and different rules applied to him. And so for years, Twitter allowed the most powerful man to:

  • threaten violence
  • threaten or promote extremism
  • engage in harassment
  • promote violence and threaten harassment on the basis of race

Twitter allowed the product they had built with love and care to be abused and defiled. And then, last week, Donald Trump went too far, even for Twitter.

First, Trump tweeted falsehoods about mail-in ballots. This violated Twitter’s Election Misinformation policies, which have existed since 2018. So, Twitter enforced their policies and added a misinformation label to his tweet.

Since Trump is also a child (and a dictator wannabe), he went after a Twitter policy employee, who received numerous death threats as a result. He also threatened to revoke Section 230.

Second, Trump used language that included a threat of violence, which he has done before. This time, finally, Twitter hid the tweet behind a warning.

This immediately divided the Twitterverse. Many, who felt this small step was long overdue, were glad. Others felt that this was a huge violation of free speech.

I disagree with the second group. Twitter is a private company, which means the First Amendment does not apply: it constrains the government, not private actors. A private company can say “No shirt, no service,” and a restaurant can ask someone to leave if they start cursing loudly. In addition, Twitter actually has rules and policies that they have not been enforcing with Trump. Those rules aren’t new. Starting to enforce the existing rules fairly, and applying them to all users, is not a restriction. If Trump were not President, he’d have been suspended a long time ago.

So, at least they did something. They decided to enforce a version of the rules. Sort of lame. But better than nothing (our new low standard).

Meanwhile, the other big platform in tech land is, of course, Facebook. They also have rules, and they decided that those rules do not apply to Trump. They gave the racist-in-chief carte blanche. They effectively and fully caved. And with that, they picked a side.

As I said in my piece:

As product leaders, we all want people to love what we create. But people often use our products in ways we never could have predicted. Once we release something into the world, it belongs to the users — and sometimes they use our products in unexpected and negative ways. We can’t be held responsible for what they do with it… right?

We have become painfully aware of what can happen when the tools we use encourage our worst instincts and amplify the most virulent voices. In the past few months, there have been several violent attacks in which the suspects had been vocal about their beliefs on social media. Do the platforms really have no control over the ways in which their products are used? That feels both naive and untrue.

My friend Ashita Achuthan, who used to work at Twitter, said this: “Technology’s ethics mirrors society’s ethics. As technologists we apply a set of trade offs to the design decisions we make. While we are responsible for thinking through the second and third order effects of our choices, it is impossible to predict every use of our products. However, once a new reality emerges, it is our responsibility to ask who has the power to fix things. And then fix them.”

To be clear, these decisions are not easy because these are complex problems. There are legal considerations, there are social considerations, there are moral and ethical considerations. When platforms are used globally, these decisions are hard to rush. Policy teams, business teams, and product teams agonize over where to draw the line and the unintended consequences of these decisions. As I tweeted on Thursday night, regardless of the decision, people will criticize it. But at the end of the day, these decisions have to be made. That is the job when you run a company like this.

In the past week, two white male CEOs—Jack Dorsey and Mark Zuckerberg—made two different choices. They have shown us who they are. It is now up to us (the users, the employees, the investors) to decide if we want to support that.

What can one do? If I use a platform that puts the most vulnerable at risk, I can stop using the platform. If I work at a company, I can evaluate whether my values and those of leadership match—my time matters, and where I work and who I enrich with my work matter. If our values are aligned and it’s a disagreement on tactics, I can try to convince the leadership to see the logic of my argument. If my values and those of the leadership are not aligned (and if I am in a position to do so), I might consider leaving to work at a company that is more aligned with my values. And if I am an investor in a company that doesn’t share my values, I can sell my shares.

The United States of America was built on abusing black men, women, and children (in addition, of course, to Native Americans). That some of our fellow citizens live in fear and get killed on a whim is not acceptable. We should not allow this to keep happening. If you live in America, whether you were born here or came here later in life (like I did), everything we have is built on top of black bodies.

It is our job, as human beings and as leaders, to stand up for what’s right. And it is not right that the biggest bully in our country can use the platforms that were built to connect people to threaten and intimidate the most vulnerable. The leaders of these companies need to be stewards of their platforms and make sure the platforms are not used to harm people in the real world.

Thank you to Ashita Achuthan for reading drafts of this post.
Here are the post and video on “Product Leadership is Product Stewardship.”