Back to talking about product! Last week, I talked about the importance of product prioritization and the product roadmap. I’ve also shared takeaways from a talk by Des Traynor of Intercom on product development – specifically, how to drive adoption and engagement. Today, here’s a framework for product prioritization – RICE.
  • Reach – How many customers or prospects would a feature or development engage?
  • Impact – What is the outcome of a build? Would this drive engagement – feature/user adoption? Drive revenue?
  • Confidence – How likely is it that the development will have the expected effect on reach and impact?
  • Effort – How many total person-hours/weeks/months would this development take? Remember to include hours for each resource across functions (i.e. product, front-end, back-end).

Intercom suggests teams add bands of scores to quantify each factor as best as possible. For example, to understand the factor of Impact, scoring can follow: “3 for ‘massive impact’, 2 for ‘high’, 1 for ‘medium’, 0.5 for ‘low’, and finally 0.25 for ‘minimal’”.

Combine the scores into a single value: multiply Reach, Impact, and Confidence, then divide by Effort – (Reach × Impact × Confidence) / Effort.

The idea of RICE is to measure each potential development (feature, build) objectively to drive the most value – benefit vs. resources.
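A minimal sketch of the RICE arithmetic – the candidate features and every score below are invented purely for illustration:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE: multiply Reach, Impact, and Confidence, then divide by Effort."""
    return (reach * impact * confidence) / effort

# Hypothetical backlog items – all numbers here are illustrative only
candidates = {
    "onboarding_wizard": rice_score(reach=500, impact=2, confidence=0.8, effort=4),
    "dark_mode": rice_score(reach=900, impact=0.5, confidence=1.0, effort=2),
}

# Highest score = most value for the resources spent
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Here “dark_mode” wins (225 vs. 200) despite its lower impact, because it reaches more people for less effort – exactly the benefit-vs.-resources trade-off RICE is meant to surface.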

All the brainstorming and hypotheses about a new product or feature mean nothing until it’s in the hands of users (customers). They’re all ideas, but ideas don’t build great companies – execution does.
I catch myself being quiet in a lot of brainstorming sessions for new products and features. I start out hot, speaking off whatever thoughts I have, before quickly going into silent mode. I’ll speak up when something runs strongly counter to what I believe, but otherwise, I find myself quiet.
I’ve noticed this a lot, but was never sure why my default mode is quiet absorption. I always thought I just had to think more to myself – until I read this passage from SPIN Selling.

“I remember going to a product launch in Acapulco some years ago. The event was splendiferous. Big names from the entertainment world had been hired at unbelievable cost, and the place swarmed with public relations people, media specialists, communications consultants, and a variety of similarly expensive people. The salespeople, eagerly awaiting the great event, filed into the main hall to hear one of the most spectacular and costly Feature dumps of the decade. I was depressed at the enormous expense my client had gone to in order to make the sales force communicate the new product ineffectively, so I decided to wait outside until all the fuss and spectacle subsided. As I sat by the pool, I noticed two other people who had slipped out of the same presentation. Talking with them, I found that they were both very experienced high performers. ‘It’s just another product,’ said one. ‘When the fuss dies down, I’ll go back in and figure out which customers need it.’” (Rackham, Neil. SPIN Selling. New York: McGraw-Hill, 1988)

It clicked for me that I’m quietly deliberating how this would be valuable in the hands of customers. I’m often close to prospects and customers, so I’ll start out sharing what I know. Then I go quiet to think and absorb because, for the most part, new products and features must be put in the hands of those who will use them. Until then, I won’t know for sure. Focus groups and interviews only go so far; it takes real usage to test real-world value.
Consider that for a moment. How do you speak of new products and features today? For yourself? For your customers?
I was back with a startup I’m advising this past weekend discussing the importance of metrics. Or rather, more specifically, what you get with event tracking apps like MixPanel vs. Google Analytics.
So let’s start with an example from my office – tours at Atlanta Tech Village.

You’ve taken the tour before, but you’re here with 2 friends who have never been. Meanwhile, there are seven other tour guests, of whom two say they’ve taken the tour before – they loved it so much they’re doing it again. 

The tour starts downstairs in the lobby before entering the rec room with game consoles, a ping pong table, shuffle board, and a kitchen. 

Then, the tour goes through the “hot desk” area before re-entering the lobby and going up a flight of stairs to the second floor with dedicated desks and some smaller offices. 

The group then goes up to one of the 3rd, 4th, or 5th floors to see the larger office spaces. The tour guide shows the group the kitchen and maybe introduces a startup that happens to have its door open. 

Then, the group goes up to the roof to check out the sweet rooftop patio. 

The group may also then head all the way to the basement where the gym is. Then, the tour ends going back to the first floor – lobby.

Got the flow? Awesome. Let’s talk about this from Google Analytics and event-tracking with MixPanel (for example).
If tour visitors were tracked in a Google Analytics-like way:
  • You + the two other visitors who said they’ve taken the tour before would be Returning Visitors while the other seven are New Visitors.
  • How everyone came to take the tour – maybe a referral from a friend like me, an ad they saw at Farm Burger, or maybe they just walked by on Piedmont Road.
  • What floors and rooms the group visited.
  • It’s great at telling you that most everyone came in through the lobby floor – the front door or the parking garage door. It can tell you some high-level flows of how the group traversed the building.
  • Perhaps a couple tour members got side-tracked and skipped a floor and met back up with the group. Or maybe a couple of them left mid-way through – exits.

Event-tracking with MixPanel would show you…

  • Three of us tour members are returning.
  • I’ve been to ATV dozens of times, and I took the tour three times.
  • You know I’m Daryl and have some of my contact information because I filled out a user-information form the first time I entered the building.
  • Each room we enter (like Google Analytics).
  • Four of us on the tour played ping pong for 3 minutes before going to the next room. This is more detailed than Google Analytics, which might only say we were in the room for 5 minutes – event tracking tells you what four of us actually did there.
  • When we got to the second floor, two of us stopped to talk to a startup, and we also grabbed a couple drinks from the kitchen. Perhaps we also sat in a room so we could test out what it’d feel like to be a startup at ATV.
  • Two people who left the tour mid-way stopped by the bathroom for two seconds, and left the tour. (Maybe the bathroom was horrendously dirty.)
  • It was my friend, John, who hit the elevator button before we all went down to the gym.

… are you starting to see what event-tracking is? In this case, MixPanel is telling me more details of what happened on the tour – events. MixPanel is able to tie in user data because a few of us signed in from the beginning.

Google Analytics is able to quickly and automatically track much about our tour group’s visit. However, it’s still pretty high-level, and though it can be exhaustively tweaked to track a lot of events, it would take a lot of work to get the data and make sense of it all. Event-tracking apps like MixPanel are made for this stuff.
Google Analytics is a powerful tool that does a lot out of the box, and should be one tool for marketing insights. An app like MixPanel allows far greater insights for customer engagement, product roadmap, and yes, marketing.
Ah, and another difference between Google Analytics and MixPanel… Google Analytics relies on cookies to associate site visitors as new or returning. If I stayed silent at the beginning of the tour, that’s like hiding my identity or starting a new “cookie” session – Google Analytics would count me as a new visitor. MixPanel could still recognize me as a returning visitor because I would sign back in as part of the tour.
Google Analytics. Easy set-up. Automatic high-level aggregate tracking. Insights via cookies.
MixPanel. Much more work to set up. Tracks the details (events) of visits and engagement at the individual level. Easier to build and determine funnels (drop-offs) of users as they move.
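The tour analogy maps neatly to a funnel over an event log. Here’s a toy sketch – the user IDs and event names are invented, and real tools like MixPanel do this bookkeeping for you (ordering of events is ignored for simplicity):

```python
from collections import defaultdict

# Invented event log: (user_id, event_name) pairs, the way an
# event tracker records individual interactions
events = [
    ("daryl", "tour_started"), ("daryl", "entered_rec_room"),
    ("daryl", "played_ping_pong"), ("daryl", "reached_rooftop"),
    ("guest1", "tour_started"), ("guest1", "entered_rec_room"),
    ("guest2", "tour_started"),
]

# Which unique users triggered each event?
users_by_event = defaultdict(set)
for user, event in events:
    users_by_event[event].add(user)

# Funnel: how many users made it through each step
funnel = ["tour_started", "entered_rec_room", "reached_rooftop"]
remaining = users_by_event[funnel[0]]
for step in funnel:
    remaining = remaining & users_by_event[step]
    print(f"{step}: {len(remaining)} users")
```

Three users start the tour, two make the rec room, one makes the rooftop – that shrinking set at each step is exactly the drop-off funnel event-tracking apps are built to show.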
Hope this was helpful. Get trackin’! 
I want to continue my post from Tuesday about the importance and value of instrumentation. Today, I want to share SaaS metrics that can be answered with proper instrumentation (operational and business).
  1. Cost of Acquisition. This is the cost to acquire an unaffiliated buyer. You need to know all the costs associated with closing the opportunity, including marketing costs, engineering support, etc. For this metric, it’s important to track the flow and behaviors of a customer through websites, sales touches, etc.
  2. D1, D7, D14, D30 Retention. Here, D stands for “day” and the number refers to the number of days since a user first entered the system. This metric tracks the percentage of users who return to the service after D days – a gauge of “stickiness”.
  3. Open and Click-Through Rates of Emails. Many products these days have email engagement and nurture campaigns. Here, companies measure whether users are opening these emails and, if applicable, clicking through to the destination the company intends.
  4. Drop-off During Sign-up. Many products have multi-stage sign-ups, which can annoy users and deter them from completing sign-up. By measuring here, the company can quickly ascertain whether the sign-up process needs to be simplified or made compelling enough to motivate completion. If users never enter, they’ll never see the great product! (This, by the way, is why so many apps use Facebook, Twitter, or Google login… plus, companies get personal data shared from those platforms.)
  5. In-App Engagement. This is a big bucket covering which pages, tabs, profiles, and features are viewed and used. You want to understand how users interact with the product – are they finding pages useful? Are features cumbersome?
  6. Customer Lifetime Value. The revenue (or net profit) from a customer extrapolated across the number of times that customer buys (subscription renewals, multiple products, etc.).
  7. Churn. That is, what percentage of customers stop buying annually? Good annual churn for SaaS businesses, according to Sixteen Ventures, is 5-7%. High churn may point to poor value, mismanaged expectations, or an inherent problem in the product.
  8. Average Revenue per Customer. To be explicit, it’s total revenue divided by the number of customers. The average is naturally weighted toward wherever most of the revenue comes from.
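Several of these metrics reduce to simple arithmetic once instrumentation captures the underlying counts. A sketch – every number below is invented for illustration:

```python
def dn_retention(cohort_size, returning_on_day_n):
    """Share of a signup cohort that comes back N days later."""
    return returning_on_day_n / cohort_size

def annual_churn(customers_at_start, customers_lost):
    """Fraction of customers who stopped buying over the year."""
    return customers_lost / customers_at_start

def avg_revenue_per_customer(total_revenue, num_customers):
    """Total revenue divided by the number of customers."""
    return total_revenue / num_customers

# Illustrative numbers only
print(dn_retention(1000, 400))                # D7 retention: 0.4
print(annual_churn(200, 12))                  # 0.06 – inside the 5-7% band above
print(avg_revenue_per_customer(50_000, 200))  # 250.0
```

The hard part isn’t the math – it’s the instrumentation that reliably produces cohort sizes, return counts, and per-customer revenue in the first place.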

I get excited when a company properly instruments their products and services. It demonstrates tremendous maturity and understanding to recognize engagement data will drive confirmation (or rejection) of hypotheses, and thus, enables smarter business decisions.

What are some other metrics you find useful? How would you measure success in your company?
One of the chapters of my book is all about measuring engagement and implementing analytics – a chapter aptly dubbed, “What gets measured gets improved.”
The lesson is to “instrument” an application to gather data points of how users interact with the product. Quickly, you can assess if users are traversing all the steps of a Getting Started Wizard, getting stuck while building out a team, or exiting a page with above-average frequency.
We implemented Google Analytics at Body Boss, which delivered anonymized data at an aggregate level – useful, but it doesn’t give a finer perspective into engagement. More powerful instrumentation would include capturing “events” for every actionable UI element (i.e. button), page, etc. For example, in Twitter, events would include when a user navigates to another user’s profile, scrolls down, Likes a tweet, and then searches for another user.
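As a toy sketch of what capturing those events looks like, here’s a tracker that records named interactions per user. The class, method, and event names are hypothetical, though real tools expose a similar track() call:

```python
import time
from collections import defaultdict

class EventTracker:
    """Toy instrumentation: stores (event, properties, timestamp) per user."""
    def __init__(self):
        self.log = defaultdict(list)

    def track(self, user_id, event, **properties):
        # Each interaction becomes a timestamped record tied to the user
        self.log[user_id].append((event, properties, time.time()))

tracker = EventTracker()
tracker.track("u42", "viewed_profile", profile="jack")
tracker.track("u42", "liked_tweet", tweet_id=123)
tracker.track("u42", "searched_user", query="jill")

print([name for name, _, _ in tracker.log["u42"]])
# → ['viewed_profile', 'liked_tweet', 'searched_user']
```

Because each record carries a user ID, event name, and arbitrary properties, the same log can later answer aggregate questions (how many Likes?) and individual ones (what did this user do before churning?).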
There are many fascinating tools for instrumentation on web pages, in-app uses, and beyond. Here are a few I’ve recently got to play with:
  • Fullstory – records behind the scenes how users interact with a site or app so sessions can be replayed later. You can see where a user hovers the mouse, scrolls, stays on some block of text, etc.
  • Unbounce – takes A/B testing to multivariate testing for landing pages. Quickly set up a landing page with multiple variants, and Unbounce automatically directs visitors and tracks conversions.
  • Pardot – easily send automated mass messages personalized to recipients based on where they are in the sales funnel. Tracks users from their first site visit and beyond for nurture campaigns.
  • MixPanel/ Intercom – very different in how each operates, but the feature I liked most was being able to trigger specific messages (more granular than Pardot) based on user interactions. A high degree of control from building out event-driven rules and triggered notifications.
  • Kevy – marketing automation for ecommerce stores. A slick tool for understanding consumer behavior that enables stores to better market to consumers by offering coupons, messages, and the like based on rules and triggers.

In short, there are lots of tools available for instrumentation, with overlapping features. It’s fun learning about these tools now and dreaming of how great they would have been at Body Boss. Then again, several of these tools didn’t exist three years ago… inherent problems don’t change, but solutions do.

What are your thoughts on instrumentation? What tools do you use?
My last post about usability testing talked about set up and guidelines for testing. Following up, here are some outcomes and lessons from a couple testing sessions so far:
  • You can get great feedback. It’d be great if founders could build a product purely from their own experience and customers would use it seamlessly. However, founders’ view of the world can be skewed. Usability testing solicits feedback directly from the target audience.
  • Find users who will provide candid feedback. Operative words are “provide” and “candid”. Usability testing can be awkward for testers not used to giving honest feedback. It’s important for the company and customers that users provide critical feedback knowing no answer is wrong – benefits go around for everyone.
  • Bugs can screw it all up. The goal of usability testing is to assess how users interact with the product. Sometimes, you’ll want users to perform specific tasks. However, product bugs can quickly halt testing and prevent constructive feedback – feedback shifts to the bugs rather than usability.
  • Early testing can tell you everything. Stop. Iterate. Continue later. That is, early users may surface glaring issues during tests. You’ll know it when it happens. In that case, discontinue testing until these headaches are resolved; otherwise, you’ll get the same feedback from every user. Then go again.
  • Watch for unspoken signs. In presentations and demos, I scan faces in the room for emotions and reactions. In usability testing, I do the same. Do they seem delighted or uninterested? Is the user focused? Confused? What’s left unsaid can be most telling.

Usability testing has been a great process for us so far. We identified key gaps, and emerged focusing efforts on select features while reducing clutter. Before, we hypothesized what users wanted. Testing has provided real data and insight.

What outcomes or lessons have you learned from usability testing as a test-giver? Any input as a test-taker? How has usability testing helped your company’s product development and roadmap?
I was recently introduced to a wantrepreneur building a platform with an experiential method of consuming media and an ecommerce side to it. I’m skeptical of the experiential component. Then again, I’m skeptical of a lot. Instead, show me the numbers (user engagement, traction, and any revenue numbers). However, she has none to show, and isn’t in a position to provide any.
She has a great v1.0 already that can be marketed to test traction and gather feedback, but she’s reluctant, opting for a feature-full release. After months with v1.0, progress is on hold as she seeks funding to build her “needed” features.
ZERO users. ZERO revenue. Ideas on business model, but that’s it. Trying to raise six figures. That’ll be tough.
Some thoughts:
  • Seeking funding takes TIME! My friend underestimates the efforts to raise funds — prospecting potential investors, setting up meetings, creating pitch decks, etc.
  • With or without funding, what’s happening? Her “startup” is stagnant. There’s no feedback from users (there are none anyway). No product development. Each day that passes, the market evolves and a competitor entrenches itself in the market.
  • Life happens. How do you cope? My friend’s early partners have left due to life complications. This happens. However, she’s stuck, unsure how to proceed beyond hoping a good, cheap developer falls into her lap.
  • Raising funds with no traction in a difficult-to-defend market?! Startups and entrepreneurship are today’s “it” thing, so there’s lots of noise from those seeking money. Investors mitigate some of that risk by looking for startups with traction.

I like TechCrunch’s “Wasting Time with the Joneses” article, which calls funding “hyperdrive, not a joy ride”. That is, “If you lay in the proper course, it will take you far. If you haven’t, you’ll just be way off the mark and beyond the reach of anyone to save you.”

What are the traps of seeking funding while still in the early stages of product development? How could entrepreneurs be successful in raising capital without traction?