
Measuring Impact of small changes to the product

I had written a tweet storm (linked below) on this topic earlier. This blog is a longer version of the same and addresses some questions people raised.

One problem that stumps many new PMs is measuring the impact of small features.

Eg: how do you justify changes to the orders screen, slight tweaks to the text in some obscure corner of the product, or purely informational changes?

What if you added small delight features like a fancy error message or a funny wait timer?

(Think of customer.io's loading animation, with its mascot Ami spinning in a circle, the many clever 404 error pages from real websites, or the Calm app's 404 screen.)


While you would LOVE to A/B test everything and see if these changes are positively affecting the core metric, unless you have a large number of users hitting those scenarios, you would not be able to get meaningful statistically significant (stats sig) results.

Companies with billions of users can test even the minutest of things: Google allegedly tested 41 shades of blue (source), and Uber can test its custom fonts. But it's unlikely that many companies will have the scale necessary to observe a stats sig result.
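To get a sense of the scale problem, here is a minimal sketch (plain Python, standard library only) of the usual two-proportion sample-size estimate. The baseline rate, lift, significance, and power below are made-up numbers for illustration, not a recommendation:

```python
from math import ceil, sqrt
from statistics import NormalDist

def users_needed_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Rough sample size per arm to detect a shift in a conversion rate from p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# A big feature lifting conversion from 5% to 6% needs a modest sample per group...
print(users_needed_per_group(0.05, 0.06))      # ~8,200 users per group
# ...a tiny "paper cut" lifting it from 5.00% to 5.05% needs millions per group.
print(users_needed_per_group(0.050, 0.0505))   # ~3,000,000 users per group
```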


So how do you make a case for these changes, and if you do make them, how do you measure their impact?

One way, of course, is to simply not measure the success (or measure but not expect much). Think of these as paper cuts: they are annoying, but one won't really kill you. The idea is that you go with what you deem right and don't get into analysis-paralysis mode. You see a paper cut and you fix it.

But relying too much on gut has two major disadvantages:

  1. Experience: Unfortunately, developing a good "gut reaction" is a tricky problem and requires a lot of past experience to rely on. "Gut" is but a culmination of a lot of data you have already seen.
  2. What is not measured is not rewarded: Unless you are running your own startup or have a significant stake in one, you want to make sure you get recognised and rewarded for your work. If there is no way to measure something, there is no real way to recognise and reward it. All it may get you is a pat on the back. It is also deeply unsatisfying because you have NO idea if you are actually adding value.

One simple way of measuring these changes is “constant holdback”

Whenever you roll out such an experiment, instead of rolling it out to 100% of users, roll it out to only 95% of your users. Do the same for all other small changes, with the same 5% always excluded.
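A minimal sketch of how such a holdback could be assigned, using a deterministic hash of a hypothetical user ID so the same 5% stays excluded across every small rollout (the salt and bucket count are illustrative assumptions, not any specific product's implementation):

```python
import hashlib

HOLDBACK_SALT = "paper-cuts-holdback"  # hypothetical constant; keep it fixed so the group never changes

def in_constant_holdback(user_id: str, holdback_pct: float = 5.0) -> bool:
    """True if this user belongs to the permanent holdback group."""
    digest = hashlib.sha256(f"{HOLDBACK_SALT}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000            # 10,000 stable buckets per user
    return bucket < holdback_pct * 100           # first 500 buckets = 5%

def should_see_small_change(user_id: str) -> bool:
    # Every paper-cut fix checks the same gate, so the holdback group
    # accumulates none of the small improvements.
    return not in_constant_holdback(user_id)
```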

As experiments pile up, the effect of multiple paper cuts being fixed would start showing up.

You would potentially be able to see the holdback group having meaningfully different (worse) metrics than the rest of the group.
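Once enough users have accumulated in both groups, a plain two-proportion comparison of a core metric between the holdback and everyone else is enough to see the cumulative effect. A minimal sketch, with placeholder numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_holdback: int, n_holdback: int, conv_rest: int, n_rest: int):
    """Z statistic and two-sided p-value comparing holdback vs everyone else."""
    p_h, p_r = conv_holdback / n_holdback, conv_rest / n_rest
    pooled = (conv_holdback + conv_rest) / (n_holdback + n_rest)
    se = sqrt(pooled * (1 - pooled) * (1 / n_holdback + 1 / n_rest))
    z = (p_r - p_h) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Placeholder numbers: the 5% holdback vs the 95% who got all the paper-cut fixes.
z, p = two_proportion_z(conv_holdback=2_300, n_holdback=50_000,
                        conv_rest=45_600, n_rest=950_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small but real cumulative lift shows up here
```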

But how do you know which experiment worked?

That is the point: you potentially will not. You will see cumulative effects, not specific ones.

What if some experiments are actually harmful?

The idea is not to have only super positive experiments but to know if you have been directionally right. The trick is to pick mundane, obvious changes that you need to do rather than using this for large features. If you are directionally right, you could eventually see a nice bump.

You need more right experiments than wrong

Can I do this for large new features?

If you can do a proper control vs treatment A/B test for a feature, I would advise you not to use this holdback method as the primary means of testing it. You do not want large experiments in the mix that can completely change the general direction of the overall result.

But I would argue that this holdback is useful even for large experiments once you have tested them. After your A/B experiment, you can think about a holdback for large features as well. The reason is that not all changes are plainly additive. Eg: if conversions go up by 2% via one feature and 3% via another, in the long term it is not necessary that the overall lift will be 5%. A constant holdback tells you the cumulative effect of all the large changes you have made.

Do not forget to delete the holdback eventually so that all your users can see the same improved experience.


There is nothing shady about Twitter’s mDAU metric

Before reading this article please read “Twitter did not claim only 5% users are SPAM”

"Why is Twitter using a non-standard metric like mDAU (Monetisable Daily Active Users) rather than 'Daily Active Users' like everyone else?"

"Is Twitter hiding something?"
"Why is Twitter using such convoluted ways of reporting?"

I have been hearing a version of these questions a lot. It's there on Twitter and sometimes even in Musk's filings. There are three premises to this argument that I hope to address in this article:

Premise 1: Twitter is using a non-standard metric

Premise 2: It is a bad metric

Premise 3: Twitter is not measuring it right

Premise 1: Twitter is using a non-standard metric

Let's be clear: there is NO standard usage metric that you are supposed to report. The government mandates that public companies disclose financial data in a certain format, but not how a company measures usage.
Every company decides what the most important measures are for it and reports them. Even a basic metric like what is defined as "Active" can vary from company to company based on its footprint.

Eg: Snapchat only looks at people who opened its app (Annual report):

We define a DAU as a registered Snapchat user who opens the Snapchat application at least once during a defined 24-hour period.

From Snapchat's annual report

Whereas Pinterest (Annual report) accounts for all kinds of actions, including web visits:

We define a monthly active user as an authenticated Pinterest user who visits our website, opens our mobile application or interacts with Pinterest through one of our browser or site extensions, such as the Save button, at least once during the 30-day period ending on the date of measurement.

From Pinterest's annual report

Companies, especially large ones, routinely create their own combination metrics that make the most sense to them.

Eg: You won't care much about how many times the Uber app was opened; you care about how many users actually took a trip.

Uber has its own metric called Monthly Active Platform Consumers:

Monthly Active Platform Consumers. MAPCs is the number of unique consumers who completed a Mobility or New Mobility ride or received a Delivery order on our platform at least once in a given month, averaged over each month in the quarter. While a unique consumer can use multiple product offerings on our platform in a given month, that unique consumer is counted as only one MAPC. We use MAPCs to assess the adoption of our platform and frequency of transactions, which are key factors in our penetration of the countries in which we operate.

From Uber’s annual report

Similarly, Facebook, which is a direct competitor of Twitter, is also introducing a new metric called Daily Active People (Annual report):

Family metrics represent our estimates of the number of unique people using at least one of Facebook, Instagram, Messenger, and WhatsApp

From Facebook’s annual report

Premise 2: It is a BAD metric

In short: Twitter's mDAU metric is the number of people it can show an ad to. Twitter removes suspected bots, spam accounts, and also accounts posting only via APIs etc. from the mDAU count (see more details).

This is an ABSOLUTELY GOLD metric. If you are in the business of selling ads, making sure you only show ads to real humans is an extremely important measure.

Every company would have some measure of this. Twitter just chose to disclose this and make this their key metric. It is a strong signal that they are in the business of selling Ads.

Twitter can absolutely disclose and measure overall spam accounts, but that does not take away the validity of the mDAU metric.

I also keep hearing that Twitter removes bots and SPAM from its calculation of mDAU. Do you know who else does that? Pinterest. Here is a direct quote from their annual report:

We regularly deactivate false, spam and malicious automation accounts that violate our terms of service, and exclude these users from the calculation of our MAU metrics;

From Pinterest’s annual report

Pinterest's Daily Active User seems functionally equivalent to Twitter's Monetisable Daily Active User. I see nothing inherently bad in this metric.

Another point to note: Facebook and Snapchat seem not to take out spam accounts from their Daily Active User counts. Facebook does mention how many active users it suspects to be spam, but I did not find anything related to that in Snapchat's filings.

There is no consistency or rule on what counts as a "Daily Active User".

Premise 3: Twitter is not measuring it right

Here is the process Twitter follows: it has a bunch of AI/ML algorithms that automatically remove as many accounts as possible that it suspects are spam. Twitter then samples 100 accounts per day (9,000 per quarter) from the remaining accounts and has manual reviewers rate whether these accounts are SPAM or not (triple-checked, I read somewhere).

They do this every day to get a trend and find that the approximate share of spam users that pass through their filters is <5%.

There are three objections I hear in this regard

Objection 1: The Sample size is too small

The precision of an estimate depends less on how big the population is and far more on how the sample is selected. A well-chosen, modest sample can give representative results for a very large population, and Twitter is sampling 9,000 accounts over a quarter.

Eg: CNN's 2020 presidential election exit poll had a sample size of just 15,590.
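As a rough illustration of why 9,000 sampled accounts is plenty for a population-level estimate (provided the sample is random and unbiased, which is the genuinely hard part), here is a minimal sketch of the standard margin-of-error calculation for a proportion:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p_hat: float, n: int, confidence: float = 0.95) -> float:
    """Half-width of the normal-approximation confidence interval for a proportion."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sqrt(p_hat * (1 - p_hat) / n)

# If ~5% of 9,000 sampled accounts look like spam, the estimate is 5% +/- ~0.45%.
print(f"{margin_of_error(0.05, 9_000):.4f}")  # ~0.0045
# With only 100 sampled accounts, the same estimate would be 5% +/- ~4.3%.
print(f"{margin_of_error(0.05, 100):.4f}")    # ~0.0427
```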

Objection 2: They do manual review

Of course they do. They already used their AI/ML algorithms to filter out all the spam they could, and now manual review is the last step. In fact, even Facebook uses manual reviewers to tag spam.

Facebook defines them as "violating accounts" (another non-standardised name):


We define “violating” accounts as accounts which we believe are intended to be used for purposes that violate our terms of service, including bots and spam.

From Meta's annual report

It goes on to explain how they determine if an account is violating

Such estimation is based on an internal review of a limited sample of accounts, and we apply significant judgment in making this determination. For example, we look for account information and behaviors associated with Facebook and Instagram accounts that appear to be inauthentic to the reviewers

From Meta's annual report

Objection 3: It can't be 5%

Do you know how much TOTAL spam Facebook claims it has? 3%. No kidding.

we estimated that approximately 3% of our worldwide MAP consisted solely of violating accounts

From Meta's annual report

Assuming there is no lying, if Facebook can have only 3% of all its users as SPAM, Twitter's 5% SPAM after its SPAM filters does not seem off.

I do suspect, though, that I may have missed something and Facebook also removes suspected SPAM accounts before calculating "violating accounts", which would make it potentially functionally equivalent to Twitter's mDAU.

Also remember, mDAU is NOT a user-facing metric. Your own experience is immaterial. Twitter can have 20% SPAM overall and still have only 5% SPAM in mDAU.

I will leave you with this diagram to chew on

Monetised SPAM accounts are 5% of the total green rectangle

So NO, mDAU is not really that non-standard, nor is it specifically bad, nor does it seem Twitter's revealed methodology is anything shady.

There can obviously be something deeply wrong with Twitter's count if they are hiding something, but I am unable to see any info about that.

Want to know about a real BAD metric that’s super popular and is also stated in annual reports? Read why NPS scores are useless


No, Twitter did not say only 5% of users are SPAM

Ever since Elon Musk raised concerns about spam accounts on Twitter, tonnes of Twitter experts, tech media, and "social media analyst companies" have been talking about how Twitter's claim in its filing that less than 5% of its users are spam is wrong, and how their estimates are much, much higher.

Only problem: Twitter did not exactly make that claim, and as usual the tech media decided to ignore that, deliberately I believe (more on that later in the article).

The Claim

Let's first look at the filing that everyone keeps referring to. Here is the exact line from the filing.

Twitter's actual claim:

Average of false or spam accounts during the fourth quarter of 2021 represented fewer than 5% of our mDAU during the quarter.

Let's define the terms:

DAU: Daily Active User
mDAU: Monetizable Daily Active User

The "m" is super important. So what is the difference? While I would love to think that it's possibly industry-specific terminology that most people do not get, Twitter in its annual report actually defines it for anyone who bothers to read:

We define mDAU as people, organizations, or other accounts who logged in or were otherwise authenticated and accessed Twitter on any given day through twitter.com, Twitter applications that are able to show ads, or paid Twitter products, including subscriptions

So what Twitter is saying is that of the people it could have shown ads to, only 5% were SPAM as per its estimates.

This implies you have to remove any accounts that tweet using surfaces where no ads can be shown.


For example, it's likely that you would not see an ad if you used an API to post a tweet, and this may extend to third-party clients which allow you to post. Eg: I sometimes use Roam (my notes app) to post directly.

Twitter APIs allow you to post 200 tweets in a span of 15 minutes.


This changes a lot:

  • Lots of bots and spammers would be using automated scripts and APIs to post. They would never be on a surface where they can be shown ads, hence non-monetisable. They are not counted.
  • Real users tweeting using certain clients (or automated scripts like IFTTT) may not be counted.
  • Any account which is spam or even likely spam may be tagged by the ad engine as such, removed from potential monetisation, and hence not counted. Twitter even mentions that in its filing in the same paragraph as the 5% claim:

After we determine an account is spam, malicious automation, or fake, we stop counting it in our mDAU, or other related metrics

So possibly a large swath of accounts that may be labeled as potentially spam or fake never get to see an ad, and hence are not counted.

Not every potential spam account is deleted, possibly because there can be a lot of false positives. A lot of real people behave like bots, and the ad engine may have stricter rules.

Fun thought exercise: If you behave like a bot, do you get ad free twitter?

A good visualisation of this would be something like this

Monetised SPAM accounts are 5% of the total green rectangle

So fake accounts on Twitter could be 20% or even 50%; if they are not being monetised, they are not counted.

The main claim, in some sense, is: if an advertiser spends money to reach users on Twitter, only 5% of those users would be fake.


This is an advertiser-facing metric and not a user-facing one. Your own experience is not what is being measured.

Now coming back to how it gets reported. Take the Reuters article (linked below): in the subheading they do make that distinction, indicating that they know this difference but chose NOT to talk about it in the main heading.

This is repeated across many articles on various tech media sites. Either they ignore it and assume mDAU = DAU (which is incompetence), or they hide it in the text, which I think is not very ethical.

This distinction is so important that it needs to be called out in the MAIN heading

Also read how other tech companies have similar metrics:

There is nothing shady about twitter’s mDAU metric

Some examples of Media reporting:

Reuters

Bloomberg

Forbes

Business Insider

So does Twitter not have a bot problem?

Not exactly. "There are 5% fake users on the platform" is very different from "of the people who can be shown ads, only 5% are fake".
To verify this claim, you would need to:

  • Know who was monetised
  • Take a sample of these monetised users
  • Define and agree on the principles of what counts as a SPAM/fake account
  • See what %age of these users fit that definition

This is why it's almost impossible to verify this claim without having access to Twitter's internal systems.

The overall percentage of SPAM accounts severely affects users and has a negative effect on user experience. That absolutely needs to be addressed, but the claim Twitter is making is not about a user-facing metric but rather an advertiser-facing one.

The big question that needs an answer is: what percentage of Twitter's daily active users are in the monetisable bucket? But even that is not exactly relevant to the 5% claim.

It's very much possible that Twitter is lying, or maybe they count every DAU as monetisable, or maybe their SPAM engines are too lenient, but we need internal data to know that.

I, for one, do not suspect Twitter of doing anything shady.

So do not blindly believe the headlines, and develop a lot more skepticism

//Update on Aug 24 2022//

Looks like Twitter's ex-Head of Security became a whistleblower (Source) and revealed a lot of details about its security practices and also its spam accounting.

Keeping the security bits aside, it seems that even the whistleblower, who typically would be very antagonistic to Twitter, more or less confirmed that what Twitter was reporting all along was correct.


SPAM in mDAU counts ONLY the users who slip through their existing spam filters.


Twitter, Zatko’s disclosure claims, actually considers bots to be a part of a category of millions of “non-monetizable” users that it does not report. The 5% bots figure that Twitter shares publicly is essentially an estimate, based on human review, of the number of bots that slip through into the company’s automated count of monetizable daily active users, the disclosure states. So while Twitter’s 5% of mDAU bots figure may be useful in indicating to advertisers the number of fake accounts that might see but be unable to interact with their ads, the disclosure alleges that it does not reflect the full scope of fake and spam accounts on the platform.


Executives are incentivized to avoid counting spam bots as mDAU, because mDAU is reported to advertisers, and advertisers use it to calculate the effectiveness of ads. If mDAU includes spam bots that do not click through ads to buy products, then advertisers conclude the ads are less effective, and might shift their ad spending away from Twitter to other platforms with higher perceived effectiveness.

However there are many millions of active accounts that are not considered “mDAU,” either because they are spam bots, or because Twitter does not believe it can monetize them. These millions of non-mDAU accounts are part of the median user’s experience on the platform. And for this vast set of non-mDAU active accounts, Musk is correct: Twitter executives have little or no personal incentive to accurately “detect” or measure the prevalence of spam bots.

Twitter announced a new, proprietary, opaque metric they called "mDAU" or "Monetizable Daily Active Users," defined as valid user accounts that might click through ads and actually buy a product. From Twitter's perspective, "mDAU" was an improvement because it could internally define the mDAU formula, and thereby report numbers that would reassure shareholders and advertisers. Executives' bonuses (which can exceed $10 million) are tied to growing mDAU.

Unless you’re a Twitter engineer responsible for calculating mDAU, you probably wouldn’t know what Agrawal is talking about. He is not saying that fewer than 5% of all accounts on the platform are spam. He’s saying, more or less, that Twitter starts with all the accounts on the platform, tries to automatically put all the human accounts that could be convinced by advertisers to buy products (but no spam accounts) into mDAU, and then uses humans to estimate the error rate of spam accounts that nevertheless slip through into mDAU. And naturally, Twitter “can’t share” its special sauce for determining mDAU.

Even though it's written in a very antagonistic fashion, what Zatko is saying should be music to advertisers and Twitter BD teams.

It says that Twitter took great care in making sure ads were not shown to suspected fake users and voluntarily removed them from its monetizable pool. It further claims that Exec comp was tied to increasing this specific metric rather than the “Vanity metric” DAU.

This is a GOOD thing. Anyone who works in Ad tech or marketing would tell you that.

Sure, Twitter can do more to fight spam, and sure, spam makes the user experience worse, but there is currently no evidence that Twitter lied in its SEC filing.



My US stock Investments [Updated July 29 2021]

I sometimes share my holdings in US stocks on Twitter.

Before you ask: I use Vested to invest in US stocks from India (my referral link).

It’s a fun exercise, and people’s comments help me learn as well. I am also a strong believer in putting your money where your mouth is, hence when I say I like company X, it carries more conviction if I actually hold that stock.

This is obviously NOT investment advice of any sort. It's just me tracking how my personal portfolio evolves over time. There will probably be NO financial estimates.

This also does not include my employee stock options from Google and Microsoft (I haven't sold a single share of MSFT ever). This is only my Vested holding.


July 29 2021

US stock holdings
  • Palantir remains a big bet as before. I think with China US tensions escalating, Palantir has a chance of becoming an even more important company
  • Moderna growth is primarily due to growth in stock value itself. Bought it as soon as they had vaccinations available. MRNA is a watershed moment in vaccine development and it was an almost mindless decision to double down on the pioneer.
  • Twitter as usual remains an all-time favorite. I think they are extremely undervalued, but they seem to have recently been shipping at an incredible pace. It's a pure product company, something I hope I understand :), and I like what I see. (Twitter as identity and social capital management)
  • Sold most of Snapchat after stellar earnings. Will enter again
  • Sold a lot of Clover to take money off the table during the short squeeze, entered again when the price dropped. Will hold long now
  • My cash holdings are down to <2% primarily because I did some short term investments when the market went significantly down
  • Small wild bets:
    • Didi , because why not.
    • Still hold a bit of AMC
  • Plan: More cash holding . Target ~15%

June 11 2021

US stock holdings
  • Biggest change: CASH: <1% → 17%
  • Clover got short squeezed, so YAAY
  • Uber FTW as always
  • Palantir , Moderna, Twitter conviction still stands
  • Added Snapchat

Feb 26 2021

US stock holdings
  • Doubled down on Twitter and it is now my BIGGEST holding
  • Doubled down on Palantir, it's becoming an extremely important company
  • Exited most of Apple. It feels like it may be a while before M1 sales show up. I also have mutual funds in India that invest in Apple, so I may not need such a significant bet. Not to mention, despite great results the stock didn't show any excitement. I obviously don't understand the stock market.
  • More Clover added. It looks like a sound company

Jan 25 2021

US stock holdings
  • Reduced my stake in Tesla; I wanted to book some profits. I had opened a US stock account primarily because I wanted to buy Tesla stock 🙂. That was a good decision in hindsight and paid off handsomely
  • Sold most of Uber to book some profits at 42 (should have held on, but I had bought a bunch at 16, 30, 38)
  • Added Moderna. It's an almost mindless investment, not just because of Covid vaccines but because mRNA is possibly the future of vaccination. It's like buying the Amazon of the future. That's how vaccines will be designed
  • I am hoping Apple earnings will surprise everyone 🙂. Their new processor is a game changer
  • Risky bet: Clover Health. In Chamath we trust... sometimes 🙂

June 30 2020

This was my holding last year when I started posting.

Largest holding first

US stock holdings
  • Norwegian Cruise Line was a wild bet, primarily because it seemed like the most stable cruise line with deep pockets. Big pandemic-recovery bet 🙂
  • Slack and Tesla, long-time favorites
  • Entered Twitter
  • Uber stock still held in my Morgan Stanley employee account. Not sold a single share


A short note on developing skepticism

Adaptation of my Tweet thread

They say the key is to not drink the Kool-Aid.

While I speak from a product manager's perspective, this is generally true for everyone who wants to be intellectually curious.

It's easy to be a believer; it gives you a sense of purpose, a sense of comfort. But if you really want to succeed, you need to be a bit of a skeptic.

And skepticism can be taught. Here is a simple trick I suggest

Developing a habit of skepticism

Whenever you read or listen to something interesting, something that catches your eye, especially something that makes claims: Think about one small fact that you can verify. I typically add a small note “really?” in my personal notes.

Go ahead and try to see if you can substantiate that. It could be a very simple thing. Eg: someone says that a new study shows the Covid vaccine is very effective; you can just check if the study exists and makes that claim. It could be even simpler than that. Eg: a startup says that its market is all taxpayers in India and that is Y million people. Just try and find that data: is it accurate?

Slowly you start moving to questioning the interpretation of those facts. Eg: in the "Covid vaccine is effective" case above, you could now try and substantiate what "effective" means. The research report may say 80% efficacy. Is 80% good? How does it compare to other vaccines? Are there any specific things that have been missed? Which age group, which demographic, which variant?

Eventually you start questioning the whole premise of the argument itself.

At the highest level you move to the very motivations driving the argument.

Most things you evaluate would be correct, but that is not the point. You are not trying to find malice, but just building a muscle for questioning.

With enough practice you would start seeing a pattern. You will get a “gut” for understanding what to check and verify.

Every industry, and also every individual, has a pattern of what they tend to overlook or exaggerate. It could be personal bias or just a generally accepted "industry practice" (see the Uselessness of NPS Score article as an example of how a generally accepted industry practice is not necessarily accurate).

With this you are not trying to be cynical, just skeptical.

A very good exercise might even be to treat this very article with skepticism

  • Have I defined skepticism right?
  • Can skepticism be taught, or is it just genetic?
  • Who said PMs need to be skeptics? Is there some kind of qualitative or quantitative evidence to support that claim?

After writing this article, a perfect opportunity to demonstrate this came about. I started seeing WhatsApp forwards and tweets talking about how up to 40% of Apple workers intend to leave over the lack of full remote, or how 90% of Apple employees want indefinite remote work.

While I am all for flexibility and do believe that full remote is here to stay, the 40% / 90% number seemed way too high.

So I decided to look a bit deeper. Thankfully, one of the newspapers posted all the details, including their own skepticism:

  • This data does exist. It was collected from an employee survey done at Apple
  • It was done by Apple employees themselves
  • It was done in a Slack group specifically meant for people invested in remote work. DUH!!!
Screenshot of the Apple employee survey results

Without even trying too hard, you can see that only 36% of employees in a group specifically about remote work spoke about resigning.

If I was teaching a class on bias, I would use this as a perfect example.

You can draw absolutely NO conclusion about what Apple employees want in aggregate from this. While I do give the media sites points for publishing the survey details, I hold them accountable for publishing it in the first place, knowing full well that this data has no validity.

It leads to absurd headlines and unnecessary conclusions amongst people who trust them.

Headline claiming 90% of Apple employees say flexibility is important
Headline claiming 90% of Apple employees want indefinite remote

It's like me doing a survey in a "board game lovers" internal group about how important it is to have board games in the break room; I may get 90% Yes. That does not mean 90% of people in my company want board games in the break room.

NO company is going to lose 40% of its employees just because it does not offer full remote work. Ironically, the most accurate data about the pulse of the organisation might be with the company itself. The company's survival depends on it. No one would risk losing 40% of employees.

You can obviously dig further and look at specific questions and see if these questions had inherent bias already. Surveys are not that easy and results can vary widely based on how you ask a question.

Motivation: Now you can ask, why did these news sites publish these results, knowing full well that they are wildly inaccurate and biased? What is the motivation?

  • They are judged by clicks
  • Appealing to bias of people leads to more sharing
  • Apple is the current favorite punching bag


The Uselessness of Net Promoter Score

Introduction

Some time back I tweeted a thread about how I dislike the NPS score as a measure of success. It led to a fair amount of discussion and debate. Hence, I decided to dig a bit deeper and write a longish post about it. While I have tried giving it some structure, each section is more or less self-contained. You can jump directly into a specific section if you are already familiar with the basics. I have tried to give enough context wherever possible.

I have also linked to sources wherever applicable, so feel free to follow them and do your own due diligence when in doubt.

Disclaimer: I may add more details and address any specific questions and criticisms that may come my way. Please do let me know if I misrepresented or missed something


What is Net Promoter Score AKA NPS

Net Promoter Score is a widely used measure of customer loyalty today. Its claim to fame is its utter simplicity: it claims to measure customer loyalty with just one question.

To calculate Net Promoter Score, you ask a statistically significant number of your customers to answer a single question:

How likely is it that you would recommend [brand or company X] to a friend or colleague?

  • Ask them to select from a scale of 1-10 (some orgs use a slightly different scale, but 1-10 is the most widely used), where 1 means not likely at all, 10 means very likely, and 5 means neutral.
  • Anyone who chooses 1-6 is considered a Detractor
  • Anyone who chooses 9-10 is a Promoter
  • Net Promoter Score = %age of Promoters - %age of Detractors (see the sketch below)
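A minimal sketch of that calculation, assuming a plain list of 1-10 responses (anyone answering 7-8 is a "passive" and simply dilutes both percentages):

```python
def net_promoter_score(responses: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (1-6), on a -100 to 100 scale."""
    if not responses:
        raise ValueError("no survey responses")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100 * (promoters - detractors) / len(responses)

# Example: 40% promoters, 40% passives, 20% detractors -> NPS of 20.
print(net_promoter_score([9, 10, 9, 10, 7, 8, 7, 8, 3, 6]))  # 20.0
```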

The basic claim of NPS is that it can reliably measure customer loyalty, and that if a company focuses its efforts on increasing NPS, it will see healthier growth.

These two specific claims are important to keep in mind:
1) It reliably measures loyalty (better than other scores)
2) It is correlated to company growth (see more details in the NPS origin story section)


NPS origin story

While the internet is filled with how to use NPS, when to use NPS, and why to use NPS, before we get to all those questions it is necessary to understand how NPS even came into the picture.

The origin story of anything reveals a lot about the motivations without the burden of muddled history between then and now.

The NPS score was invented by Fred Reichheld, who was a consultant at Bain and Company. It was introduced to the world in this HBR article.

The basic idea came about when they looked at a car rental company using a very simple method to increase customer loyalty. The company, Enterprise Rent-A-Car, simply asked people two questions:
– Quality of the rental experience
– Likelihood you would rent again

The company then counted only those customers who gave it the highest scores on both the questions. All their outlets were then asked to optimize for this specific score. It was believed that this would inspire the sales agents to be better and increase customer loyalty.

Fred wanted to make this system much simpler and see if it could be brought down to just one question.

The interesting point to note here is that the intent was not to find a great predictor of company growth or loyalty, but rather to find one question. The aim itself was simplicity.

Process of finding the One Question

  1. 20 questions were created for the Loyalty Acid Test survey
  2. The test was administered to customers in the following industries:
    • Financial
    • Cable and telephony
    • Personal Computers
    • ecommerce
    • auto insurance
    • Internet service providers
  3. Then they asked each participant to describe a specific instance when they actually referred the company to someone. If this was not available, they waited 6-12 months and asked again. This data from about 4,000 users was enough to create 14 case studies, which established a link between survey response and actual referral.
  4. Result
    • The top-ranking question was far and away the most effective across industries:
      • How likely is it that you would recommend [company X] to a friend or colleague?
    • Two questions were effective predictors in certain industries:
      • How strongly do you agree that [company X] deserves your loyalty?
      • How likely is it that you will continue to purchase products/services from [company X]?
    • Other questions, while useful in a particular industry, had little general applicability:
      • How strongly do you agree that [company X] sets the standard for excellence in its industry?
      • How strongly do you agree that [company X] makes it easy for you to do business with it?
      • If you were selecting a similar provider for the first time, how likely is it that you would choose [company X]?
      • How strongly do you agree that [company X] creates innovative solutions that make your life easier?
      • How satisfied are you with [company X’s] overall performance?

Link between NPS score and company growth

Then they tried to find the correlation between customers' NPS scores and actual company growth:

  • In airlines, a strong correlation existed between the "would recommend" question and average company growth
  • Similar results existed in the rental car business
  • "Would recommend" was irrelevant for database software or computer systems, as people had limited choice and the senior execs who made the choice were not among the people surveyed. For such industries, "sets standards of excellence" and "deserves your loyalty" were far more predictive
  • NPS was also not a predictor for local telephone and cable TV company growth because they were near monopolies. Their growth was determined by how fast the population in their area increased

Who uses NPS today

Pretty much everyone. As of 2020, two-thirds of Fortune 1000 companies seem to use a version of NPS. One simple experiment: search for the term "How likely" in your inbox.

An ecommerce startup's NPS email after delivering an order

Good things about NPS

It is very simple to measure and benchmark.

  • It's a single question and is used by multiple industry players to benchmark against the competition and internally
  • It's easy to digest at almost all levels of abstraction

High completion rate

With users being inundated by all kinds of brands seeking their attention, it is much easier to get them to answer one question rather than multiple. In fact, in the paper "Assessing treatment outcomes using a single question", where they ran NPS with patients, they found that the NPS question consistently had the highest completion rate (96.5%). I would also now assume that it has become so common that users almost expect this question and are willing to answer it.

It defines loyalty in an interesting way:

While loyalty may traditionally be defined by retention, LTV, and other metrics, these can miss out on word of mouth. NPS attempts to target that specifically in a somewhat loose but interesting way.

Customer Loyalty Definition in Original NPS system

  • Customer loyalty can be defined as a customer's willingness to stick with a certain provider even if they are not providing the best possible rate in a particular transaction. Think of it like: "Sure, you may be charging me more today, but I know you have done great work in the past and generally give me a good rate, so I will stick with you even though cheaper options may be available."
  • Customer loyalty is also more than just retention, because some people may be retained only because they cannot move out due to inertia or exit barriers. Eg: monopoly players, or prepaid plans
  • Loyal customers may also not be repeat purchasers, eg when they have outgrown the service. Eg: you may no longer buy a Pulsar bike because you no longer ride a bike, but you would recommend it to your nephew when he is considering one.

NPS claims and how they measure up

NPS is a slightly obtuse metric because instead of asking if people are satisfied with the product or service, we are asking if they would recommend it to someone else. It's not exactly a measure of a customer's own experience with the brand.

If you are introducing a new kind of measurement it needs to be better at something than the existing systems. It either helps you uncover a specific issue, or measure something unique.

Survey metrics are also predictors/proxies of tangible business outcomes such as churn, growth, complaints, etc. A metric with no business outcome is plainly a vanity metric.

So let's deep dive into whether NPS measures up.

NPS as a better predictor of growth

Let’s look at the claims made about NPS in its original research. There are multiple leaps of faith in it. The way I read the original article is:

  • The answer to the NPS question seems to be the most highly correlated, among the questions tested, with actual referrals in some industries
  • Higher NPS seems to be correlated with a higher growth rate irrespective of company size

Using the above methodology, claims have been made that NPS is the best predictor of company growth. The big issue with this is that even in the original article there was no real comparison of the correlation of company growth with NPS vs other survey methods.


Also, even though the question seems to talk about loyalty in a very loopy fashion, it does not actually make a claim about it. There may or may not be any correlation between NPS and user retention.

This research was not even reproducible

It is not reproducible

This is perhaps the BIGGEST issue with Net Promoter Score. The biggest claim about NPS was that it is the single best predictor of growth, but this 2007 paper found no support for that claim when it tried to replicate the same study that Reichheld did.

Not surprisingly, they found that NPS performed as well or as poorly as using the customer satisfaction index to predict growth

Comparison of NPS and Customer satisfaction index as a predictor of company growth. Source

There seems to be no real statistical backing to NPS, and as per the paper, even Reichheld acknowledges that

Reichheld’s statement acknowledging the issues with NPS

NPS as a tool to benchmark competition

A lot of the literature out there talks about using NPS as a benchmark against competition, between different departments, different franchises, etc. A lot of fanfare is made about how a company's NPS is through the roof, which company in a specific industry has the highest NPS, etc.


The problem is that this question has so many variables that it's unfair to compare. It can never be an apples-to-apples comparison.

Instead of simply asking if customers are happy with the service, we ask “Would you recommend X to your friends and coworkers”. There are so many more variables to consider when trying to answer this question

  • Do I think it’s worth it for my friend: Hobbies, cost, personal interest, my own closeness to the friend
  • Do I even discuss this with my friends and coworkers
  • I hate it, but this specific friend may like it
  • etc etc

More variables = more errors.

Eg: when the NHS introduced NPS, it found that only about 40% of the variation in NPS scores was explained by overall satisfaction, whereas the rest was explained by other factors, such as whether the patient underwent a hip replacement or a knee replacement.

Even the NPS difference between patients who underwent a hip replacement (71) and a knee replacement (49) was stark, making it impossible to benchmark them.

A bad action item for the hospital would be to target the same NPS across all services.

If a hospital cannot even benchmark within its own departments, it is useless to try and benchmark against other hospitals.

It's also very dependent on the services availed and the demographics of the user. Eg: when people compare the NPS of Uber vs Ola, they fail to consider whether it's the exact same mix of users or not. Did the users take a similar number of trips, the same kind of vehicles, pool vs non-pool, etc.? Without that, comparing the brand NPS of Uber vs Ola is not really a benchmark, and is potentially worse than a plain satisfaction survey. It's not a complication worth introducing. This is why brand NPS scores are not really benchmarks.

For internal benchmarks, some companies, especially in ecommerce, go overboard and try to compute an NPS linked to each product and service, which again seems unnecessary. It is no longer a measure of loyalty but just feedback on the product, which may be better gathered directly via ratings and reviews.

Eg: see below Myntra trying to do NPS linked to a specific product. But even if I rate it low, they may have no action item, because they do not control the product itself. Many more questions need to be asked to even understand my response. A better question would simply have been "Do you like the product?", which would not only be direct but also feed their rating system.

NPS as a better loyalty metric

This is another way some people use NPS. This is potentially because of the nature of the question, which talks about referrals. Loyalty here is defined as a user's willingness to recommend.

NPS tries to force-fit people into specific boxes of promoters and detractors, ignoring any reasoning behind the user's response.

There are many reasons you would not recommend a specific product to someone. The original NPS article itself mentioned that a loyal customer may not be a repeat customer because they outgrew the product, yet would happily recommend it to someone who had not.

Using the same logic, someone who is a loyal customer may not recommend the product to friends who are not the target customers. NPS would classify these loyal customers as detractors.

Loyalty is very fluid, and detractor and promoter are not the rigid buckets NPS tries to force people into.

You personally may hate the product, but if you were to suggest a product to someone, you would play a matchmaker role and take their individual needs and circumstances into consideration.

This 2019 survey found that 52% of users who actively discouraged others from using a brand also actively recommended it. You can be a promoter and a detractor at the same time, based on who you are talking to or how your last experience with the brand went.

To make matters even more complicated, NPS is not even an accurate predictor of users' own measurable behaviour, such as repeat purchases, churn rates, etc.

The purest measure of loyalty in my opinion is customers actually spending money to buy your product. I criticise my bank a lot, but despite many many alternatives I have stuck with them for 15 yrs. By every definition, I am a loyal customer who they would want.

Research done by the University of Cambridge with an asset-heavy company in the UK over three years found that NPS and actual user behaviour did not match.

NPS as a more "accurate" measure

Another argument for the use of NPS is that it's less susceptible to manipulation, but NPS has the same pitfalls, and anyone who owns an NPS goal can use the same old tactics to improve it.

Just like satisfaction surveys, NPS can be manipulated. Simple techniques include:

  • Asking the user to rate you after a good interaction. Eg: as soon as an order is delivered, or a ticket is resolved. At this point you are no longer trying to find and fix issues, you are simply trying to get that score. While this technique makes sense for Play Store reviews and YouTube videos, where ratings and likes are a social signal to other users, it is counterproductive for NPS, unless the NPS is being collected as a vanity metric (eg for a pitch deck or a presentation to leadership).
  • Incentivising the user. Eg: give your software product away as a free trial and watch almost every customer give you high NPS ratings. It is meaningless and may have no correlation to your growth.

I also did a very unscientific survey on Twitter and LinkedIn to find out whether companies set NPS targets, and whether the person responsible for the target also controlled things like when the NPS survey was sent and how to pacify the user. Here are the results.

As per the survey above, ~30% of respondents said their company has NPS goals and the owner of the goal optimises things like when to send the survey, and in some cases even customer incentives. You get what you optimise for, and in this case my hypothesis is that the system is designed to make the NPS go up, not necessarily the loyalty.

You get what you optimise for

This may be the reason why companies with really high NPS also go bankrupt.

NPS and other big misses

It's arbitrary and ignores all cultural nuances

There seems to be no clarity on why 6 and 7 land in different buckets while 5 and 6 do not. There is also no clarity on why focusing only on the difference between promoters and detractors matters. What if we just tried increasing the average score?

It also hides real improvements. Eg: the movement of a large chunk of users from 1 to 5 has no effect on the NPS score.

It also ignores all cultural nuances. Eg: if you take Ubers in the US vs India, you may see a huge difference in your ratings. Anecdotally, I have seen my ratings drop in India and rise in the US. I presume there is a cultural difference here: in India a low rating is 1*, whereas in the US it's 4*.

I read a comment on some blog that put it well: NPS is just lots of numbers disguised as maths.

It is more noise than signal

What do you do with NPS? One common theme is that you work towards increasing it by using it as a north star, but that is not a good reason to ask this question in the first place.
There is no evidence that it is better than working to optimise other tangible metrics.

It’s not a single question

While the whole USP is the single question, you invariably need more information as soon as users rate you below 7, defeating the whole purpose of simplicity.

Loyalty is multidimensional

While NPS seems to acknowledge that loyalty is multi-dimensional, it tries to collapse it into the single dimension of word of mouth.

It's probably not for your industry

This is less of an NPS issue and more about marketers abusing NPS because of its perceived simplicity.

In the original paper, NPS was not found to be a predictor of growth in industries such as computer databases.

Remember, the ONLY thing it was supposed to do was predict whether you will grow; without that correlation, the score is more or less useless.

Sales is complex, and in any industry with high inertia, top-down decision making, and monopolistic players, NPS is not even applicable. This makes me wonder why so many startups are obsessed with it.

It’s also possible that NPS should not even be a goal.

Eg: in the NHS paper I referred to, the difference in NPS among patients was not driven by actual patient care and recovery. Perhaps NPS is not even a relevant measure for hospitals.


Final Thoughts

NPS seems to be an arbitrary score with little statistical backing. It may not even be valid for many industries.
While it can be used as one tool among many other signals, over-reliance on it for making decisions is not prudent.

NPS is popular perhaps because it is simple, but this reminds me of the phenomenon of bikeshedding.

Bikeshedding: if a committee were to design a nuclear power plant, they might spend far more time than necessary discussing the bike shed, its colour, its position, and its capacity. The reason is that bike sheds are easy to design and everyone can have an opinion on them.
In corporate life we sometimes spend a lot of time on bikeshedding activities just because our minds automatically go towards simplicity first.


NPS to me sometimes sounds like the Bikeshed of the user research world

As a startup/company, I would be more worried about actual referrals, customer churn/retention, and cost of acquisition than about NPS.

A low NPS may be a sign of something wrong, but that something is likely also showing up in other survey questions. NPS may not be adding any value.


NPS may be simple, but not necessarily useful

As a product manager, I become very suspicious when some startup or product touts high NPS scores with little else to back it up.

As an investor, I would ideally ignore the NPS score, or give it very little weight unless backed by actual metrics. It is easy to manipulate, and if it's rewarded, it would be in any company's best interest to figure out how to get better scores.


Curious Case of Chingari app’s high Views and Likes

This was a difficult blog to write.

As some of you who follow me on Twitter know, I am a fan of short-form videos. Recently I decided to give the short video apps popular in India a try. I tried Roposo, MX TakaTak, Moj, and Chingari.
After a few initial hiccups, I was able to use all the platforms.

I am bullish about the Bharat/India story and believe we are primed for India-first innovations.

My plan was simple:
Upload the same video on all platforms and get a sense of the onboarding process, the speed, the community, India-specific features, etc. I did not expect any major traffic. I also set my language as Hindi because I wanted to experience the whole deal.

The video I used was me reciting Shiv Kumar Batalvi's Punjabi poetry. It had no music, was not following any trends, and was in Punjabi. I had low expectations.

I uploaded my video everywhere and waited. Every short video platform typically shows your video to some users and, based on their interaction, might decide to show it to more people. Since the user base IMO is fairly similar across all the apps, I was not surprised to see more or less the same response on all of them: a few hundred views and a few likes on every platform except Chingari. On Chingari I had 10,000+ views and hundreds of likes, and curiously, 0 comments.

After the initial euphoria died down (hey, maybe I went viral), the 0 comments started to bother me. How are these views counted? How are these likes assigned?

How views are counted is super critical and can make or break a platform. It can cause real damage. Remember when Facebook was overcounting video views? It was a big scandal.

To test my hypothesis that these views were not really earned by me, I uploaded a completely blank video. To my utter surprise, it followed the same trajectory: a similar number of likes and views after almost the same time.
I tried uploading more blank or nonsensical videos, just to see if some video deviated from the pattern, but failed to find any. I literally had a video with a 5-second black screen. I found no way to not get views and likes.

To my annoyance, my empty videos had slightly higher views and likes than my poetry. Am I net negative? 🙂

Videos eventually settled at ~250K views and ~17K likes. It was super uncanny. I also hit 1M views. Am I a genius influencer?

Chingari App UI

So I did an experiment. I created another account on a different phone and uploaded a similar video on both my primary account and the new account. Same title, no effects, no music, nothing, and I took a screenshot every few hours for 2 days. I took a total of 8 readings and here are the results. The videos were literally a few-second shots of my pillow, titled "bilkul kuch nahi" (translation: absolutely nothing).

Video 1 direct Link

Video 2 direct Link

Note: I cannot see the time of upload, so the first reading is assumed to be at 1 hour based on memory. All readings were taken at the same times for both videos. Even if the first reading time is a bit off, it won't change the results.

Just to make it clearer, let's plot the above values on a chart. The line chart was so close for both videos that one line hid the other, so I had to make a bar chart.

View trend of two videos
Like trend of two videos

Unfortunately, I could not take readings on an hourly basis, because work and life 🙂

Since views do seem to flatten around 250K, I am sure that if I kept going it would not remain such a straight line and there would be a decay.

I then plonked these values into a linear regression analyser (even though on a longer scale it may not be linear) and here are the results. It was not a perfect match, but it was close.

Note: Chingari does not tell you exactly how many likes or views a video has; it rounds them off, eg 2.5K instead of 2,543. This makes regression difficult.

Regression analyser
Final formulas. T = time elapsed in minutes

I was also curious to see how likes vs views were behaving. Looks like they were a perfect match

The formula was:

Likes on a video = 71.71084*Views in thousands + 3.34218
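For anyone who wants to replicate this, here is a minimal sketch of the same kind of fit using ordinary least squares. The readings below are placeholders standing in for my screenshots, not the actual data:

```python
import numpy as np

# Placeholder readings: (views in thousands, likes) taken at a few points in time.
views_k = np.array([2.5, 25.0, 60.0, 110.0, 160.0, 200.0, 230.0, 250.0])
likes   = np.array([180, 1_800, 4_300, 7_900, 11_500, 14_300, 16_500, 17_900])

# Fit likes = slope * views_k + intercept (a degree-1 polynomial fit is ordinary least squares).
slope, intercept = np.polyfit(views_k, likes, deg=1)
print(f"likes ~= {slope:.2f} * views_in_thousands + {intercept:.2f}")

# Predict likes for a video showing 130K views.
print(f"predicted likes at 130K views: {slope * 130 + intercept:,.0f}")
```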

To see if this formula holds up, I looked at all the videos on my account and tried to predict the number of likes based on number of views. It was pretty close

Views vs likes
Note: Chingari rounds off the like counts, so 13,270 likes would be written as 13.2K or 13.3K based on which side they round off to.
If I were to round off predicted likes to the lower bound, it would be:
13.2 vs 13.2
17.4 vs 17.3
17.4 vs 17.9
17.7 vs 18.1
17.5 vs 17.6
17.3 vs 17.2


Unless they change something, this seems like a replicable phenomenon. If you are a creator, I would like to know if you see the same numbers for dummy videos (good videos would of course have real engagement).

Why is this important

A lot of Bharat users are using short videos as a source of fame and may decide to pursue a career as an influencer based on the traction they are getting.

They may also be using this to decide whether they should spend more time creating or not. Knowing if their fame is real is an important question.
A sudden burst of views may also make people rate the app better, as evident from this comment on the Play Store.

Play Store review of the Chingari app

Views and engagement are also key metrics for any advertiser and media company. A very high number may encourage a company to invest heavily in that platform and ignore others. This happened in the case of CollegeHumor, which pivoted to Facebook-native video because of the insane engagement it was getting. The engagement numbers turned out to be overestimated.


But Why?

Now, I am NOT exactly sure why this is happening. Is this a new-user boost? Perhaps a quirk of their targeting algorithm? Maybe there are some users who like everything? Is this a normal trajectory for every video that has no engagement?

There are potentially alternate explanations and I would love to hear them.

What I can say is that reaching millions of views on the Chingari app at the moment seems to be child's play. Just upload almost anything and it might grow as per the formulas I presented before. Even my own channel with useless videos has 1M+ views in a few days.

Do apps boost new users?

In the startup and product management world we call it a first-run experience, and considerable effort is spent on making it as pleasant as possible.

Some companies do end up doing something special for the new user. This may include a boost in views by showing their content to more people, but I have not seen this level of boosting.
This is also assuming that the numbers are all real, i.e. actual people saw the videos and liked them.
If the numbers are not real, it is unethical and there is no excuse.

//update :
This does not seem to be a new-user phenomenon. On my first account, which already had 1M views, I uploaded 9 blank videos last night. ALL of them have similar views and likes, and the progression is still happening as predicted. Unless there is a different definition of a new user, this phenomenon does not seem to be limited to new users.

Chingari app showing the exact same views and likes for videos uploaded at the same time

//end update

Things I did not check

When does the graph flatten: I presume the flattening starts around day 3

Is there a similar trend in follower count

Categories
Product

Why I am hopeful about NFTs

Why I am excited about NFTs: A small thread (lots of flex ahead)

About a decade ago I started recording my poetry and mixing it with music. I had some success and was semi-famous for a short while. I racked up about 1M views on YouTube and many more unofficially on other platforms.

Honors for my nostalgia ode to college life poem
YouTube screenshot of stats for my most famous poem
Honors for the Madhur Chadha YouTube channel on poetry
YouTube screenshot of my channel stats

I went up to 10K subs on YouTube and 5K fans on Facebook, and was consistently getting traction on my poetry website. One of my poems was also amongst the top viewed videos on YouTube for the day/week in India. My work was also shared by a few famous people from TV and media.

So I should have made some money, right? I did try. I became a YouTube partner, I tried Spotify, I tried selling on Bandcamp, and I even tried sponsored posts on my website. My work was also available on Amazon Music, iTunes and what not.

I made a few hundred dollars from YouTube, sold 2 copies on Bandcamp, had a few iTunes sales, and got Spotify listens too small to justify the yearly cost of $15. I made enough money to afford a cup of coffee, once a month 🙂

Youtube Lifetime earnings

It was not that I had no backers. But in that old world, every backer was the same. They could buy my music on iTunes, listen on Spotify, or share on Facebook. But there was no clear, easy way of “owning it”, especially if they were not in India.

NFTs bring not only a method, but also a culture of trying to OWN the art you like in a borderless fashion. I would have absolutely tried NFTs at that time, and I would have needed just one influential backer to make a decent payout. And that backer could have been anywhere in the world.

And this is what excites me about the artists of today and tomorrow. You no longer need a very large audience or need to get “discovered”. You need only a small set of backers, without having to worry about so many logistics. We had Patreon and Substack; NFTs are just the next level.

Yes there are flaws, but I am absolutely bullish on the tech and rooting for its success.

Categories
Product Product Management

Twitter as Identity and Social Capital management

Recent Updates:

As some of you already know, I am LONG Twitter and feel that it has only just started to scratch the surface of what is possible. Recently Twitter announced a slew of new features, including

  • Spaces: Clubhouse competitor
  • Subscriptions: Subscribe for
    • Exclusive tweets / content
    • Newsletter
    • Badge
    • Community

In January Twitter also acquired Revue, a newsletter company, to let Twitter users start newsletters.

They also announced that they are reopening “Verification” in early 2021 and that users will be able to request it.

Where is it going

I believe Twitter is unleashing the beast that is its platform and could eventually go into identity and social capital management. Here are some of my early thoughts.

Twitter as Identity Management

Once Twitter starts mass-verifying users, it would become perhaps one of the largest consumer-facing identity management platforms in the world. It would be bigger than many governments and would come with added social capital.

When someone says they are verified, you not only know who they are, but can also go and check out their profile.
There is a massive need for user identity verification, especially in situations involving person-to-person contact. E.g. Uber users in Brazil need to validate via CPF (national ID) if they want to pay with cash. This helps establish trust. Now think about using Twitter for this globally.

Other use cases could be

  • Is the person sending me this email who they claim to be?
  • Is the person bidding for my furniture real?

A lot of startups bootstrap on Twitter’s network (e.g. Substack); imagine what could be done with high-trust authentication.


Twitter as Social Capital Management

Whenever an influencer or content creator opens up another channel (e.g. a new newsletter, a YouTube channel, a blog), they end up recreating their entire network again. They need to gain the trust of not just the followers but also the platform.

While there surely are massive advantages to an existing audience that follows the creator, there is always a huge leakage, and the creator is subject to very different rules on each platform.

E.g. even if you are extremely popular on Twitter, you cannot take that social capital to, say, Mailchimp to send your newsletter. You need to “warm up” the system, hope your audience opens your emails, pray that spam filters don’t flag you, and, if you are on a shared IP, hope all the other senders are not doing something shady.

Mailchimp has no context of who you are or how you built your audience. There is no social capital associated with you.

But with Twitter this could change. Twitter could use your social capital on one mode (say tweets) to jump-start you on another (e.g. a newsletter). It can use your social capital for more than just a growth hack.

Will it matter to small-time creators

While a small amount of social capital on one mode of communication may not matter much, a multi-modal platform may matter in aggregate.

The biggest advantage of this is that as people start deriving more value from the system, the incentive for bad behaviour goes down. You also become super cautious anywhere you verify using Twitter, because you don’t want to mess with your social credit.

Gotchas to watch out for
It needs to be debated whether it would be wise for a single private party to wield such power. Would your Twitter identity become more important than your local identity?

Some interesting ideas I think Twitter would/should pursue

  • More control on tweet embedding: Let me generate an ID that any media agency that wants to embed my tweet needs. I can then charge for my tweets
  • Transfer tweet ownership: Currently, if your tweet becomes popular, people reach out to you to add their product / business as a reply. This helps people make some money. It would be interesting to see if I could just pass on the tweet ownership to someone else so they would be free to add any reply to the main thread
  • NFT: While some startups are attempting this, every tweet could be an NFT, and that could drive the tweet embedding and ownership ideas as well.
  • Tweet Ads: Every time a tweet is embedded on an external site, Twitter could show an ad and share revenue with the tweeter directly
  • Mail client: While Twitter knows about your social capital, it is still hindered by the spam rules of other mail providers. I suspect they may start their own
  • Influencer management / tracking / smart contracts: Imagine you hire an influencer to reach X million people. The entire deal is done on Twitter, where Twitter is responsible for measuring success (which they already do). As soon as the success criteria are met, the influencer is paid. This operation could be controlled via Ethereum smart contracts, making the entire system seamless and painless (a toy sketch of this flow follows below)
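To make the last idea a bit more concrete, here is a toy sketch of that escrow-style flow in Python. None of this is a real Twitter or Ethereum API; the class and names are made up purely to illustrate how a measurement-triggered payout could work.

```python
# Toy illustration of the influencer-escrow idea. This is NOT a real Twitter or
# Ethereum API; the class and names are made up to show the flow only.
from dataclasses import dataclass

@dataclass
class InfluencerDeal:
    influencer: str         # e.g. the influencer's handle
    target_reach: int       # reach agreed in the deal (e.g. X million impressions)
    escrowed_amount: float  # payment locked up-front by the brand
    paid_out: bool = False

    def report_reach(self, measured_reach: int) -> None:
        """Called by the measurement oracle (Twitter, in this idea)."""
        if not self.paid_out and measured_reach >= self.target_reach:
            self.paid_out = True  # on-chain this would transfer the escrowed funds
            print(f"Releasing {self.escrowed_amount} to {self.influencer}")

# Usage: the oracle posts periodic measurements; the payout triggers exactly once.
deal = InfluencerDeal("@creator_handle", target_reach=2_000_000, escrowed_amount=5000.0)
deal.report_reach(1_500_000)  # below target: nothing happens
deal.report_reach(2_100_000)  # target met: escrow released
```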

You may have noticed I did not talk about tweet edits. Read why I don’t think Twitter would ever, or should ever, allow tweet edits.

Categories
Others Product Stuff

My Work From Home setup

While it has been a year since I started working from home due to covid, I had been investing in a good WFH setup much earlier than that. The productivity improvement from just having a dedicated space at home to sit and do your work is highly underrated.

So here are the details of the stuff that helps me work better.

PS: Some links below are referral links and I make a small commission on qualified purchases.


The Sitting Desk: This is perhaps one of my best investments yet. I got it made a few years ago.
My main specifications were:

  • It should be longer than normal (I like to have books and reading material on the table too)
  • A specific design (see the whites)
  • No annoying leg support, which most desks in the market had. They are a nuisance for slightly taller people

The standing desk setup: This is new. I figured the best way to get any standing time is to take some meetings standing. Unfortunately most standing desks are either super expensive or just not ergonomic. I also did not want to give up my existing desk.

The key to a good standing desk is to have the monitor at your eye level and your arms at 90 degrees, very similar to how it should be when you are sitting.

My standing desk comprises:
  • A standing desk converter (I use this one): I keep this always raised. It turned out to be much sturdier than I thought, and the flexible height means I can adjust it just right.
  • On top of it, a stepper of the kind used in gyms (I once exercised 🙁 )
  • A laptop stand (I use this one)

While this looks like a lot, it gives me a comfortable standing posture. I can stand straight and type without straining my neck or my back.


My Monitors
I primarily work on my monitors; my laptop is only used when I am away from my desk or taking a standing meeting. I love keeping one monitor vertical as it helps me read long documents better, or open multiple windows together.
I use Samsung’s curved monitor for my primary work. I love the curved monitor so much that I don’t think I would go back to flat ones any time soon. It also seems curved monitors are better for your eyes.



I also use Amazon Basics monitor arms that connect to my table without any drilling. I can shift them relatively easily to any side of the table. Monitor arms not only save space, they also allow me to put one of my monitors in a vertical position.

Monitor Arms

Chair: I use a gaming chair by Greensoul. I assumed that gaming chairs should be super comfortable because gamers spend a lot of time sitting, but after using it for a year I don’t think that’s the case. It is a decent starter chair, but I am thinking of upgrading. Suggestions welcome. (Update: It seems they have updated the lumbar support in the new model.)


VC setup
My entire setup is fixed and I do not move the laptop around much; as a result, I cannot use my laptop for my calls.
Also, if you are using a monitor and stare at it during a video call, it can be super distracting or annoying for the other person.
As a product manager, making at least some personal connection is super critical, and hence investing in a decent VC setup (within budget) can go a long way.

Video: A Logitech portable webcam, attached to my primary monitor.

Logitech camera


Microphone: I use an Ahuja MTP-20 wired unidirectional collar (lavalier) mic, attached directly to my MacBook. While the Logitech webcam comes with an inbuilt microphone, its voice capture is not that great. A cheap lavalier microphone can work wonders; this unidirectional one cuts out background sound such as a running fan. Also, since it is wired, it’s pretty economical.

Lavalier Microphone



Sound: MacBook Pro built-in

A note on using Bluetooth devices
I did experiment with a Bluetooth setup for both microphone and speakers, but there was always a slight lag that made conversations unnatural. I recommend against using any Bluetooth device for audio in or out.


Mesh Wi-Fi: My life has been different pre and post mesh Wi-Fi. It has been the single best upgrade to my home office setup. I use the Deco M4.
See below the speed tests before and after mesh:
Directly from the router’s Wi-Fi: 22 Mbps
On mesh: 160 Mbps

Wifi Speed before and after using Mesh Wifi Deco M4
My Mesh wifi. It looks super cool

Deco M4


Keyboard and Mouse: Apple original keyboard and Magic Mouse. If you are using a Mac and can afford it (or your company pays for it), I recommend just closing your eyes and going with Apple originals. They not only work well, but also support all kinds of Apple-specific gestures, especially when it comes to the mouse.

Apple keyboard and mouse

Other stuff

Wire organisers, lots of them, to keep the wires in place and mostly out of sight. I even created a charging station.


Amazon Basics lamp: Not only does it look cool, it’s great when you want to work in yellow light at night. It’s rechargeable, so there is no need to keep it plugged in. Really handy if you want to read a book at the desk.

Amazon basic lamp

Blackout Curtains: My desk faces the window, and while it is great to look out, on sunny days the brightness was way too much. My eyes started hurting trying to adjust to two different light sources.
I use these ones by Armenia Hague. Mine are fairly economical and work really well. I barely notice even if it’s super bright outside. (See images below)


A smart bulb directly on top of my desk, connected to my Google Mini. It’s fun to ask the assistant to switch off the lights. I use a TP-Link smart light.

Night time

