Mobile Web Onboarding

Hired

 

Company Overview

Our main product is a live marketplace platform that connects companies with the best tech talent actively looking for new roles in our market(s) every day. We cater to Software Engineering, Product Management, Design, Data Analytics, IT, and Project Management.

Our core philosophy is that great tech talent should be in the driver's seat with an amazing job-searching experience (companies apply to them), since finding a job is typically a horrible, untransparent process.

The way Hired works is that tech talent (or candidates) apply to go "live" in the marketplace only when they're ready to start interviewing for new opportunities. The approval process is based on their location, skills, and intent to find a new job. Once accepted on Hired, your profile becomes available to employers for up to 6 weeks, and if they're interested, they'll apply to you (with salary upfront) to interview with them. FYI - your profile is hidden from your current employer if they happen to be on Hired too.

hired_bkg.png
hired home.png

Project Team 

Our ‘Acquisition’ squad includes a Sr Growth PM, a handful of Software Engineers, an Engineering Manager, and a Sr Product Designer. This team primarily focuses on creating and optimizing our funnels as they relate to: 1) tech job seekers (or “candidates”) and 2) employers.

In both of these funnels, you’re either visiting Hired’s site and (potentially) signing up, or you’re “reactivating” a previous account to get back on. Optimizing these is crucial because it directly increases the “liquidity” in our marketplace, meaning more candidates and more employers with jobs, which spurs more successful hires (or “placements”) since we monetize through subscription hiring plans.

Problem Space

When our Sr Growth PM joined Hired (early 2019), he learned very quickly that our mobile web onboarding flow for new ‘candidate’ sign-ups badly needed updating. Not only was this sign-up experience off-brand and three long pages of scrolling, we also learned that:

  1. ~40% of our total candidate signup traffic came through Mobile

  2. Converting these signups from landing to successfully being approved and going “live” in our marketplace was ~40% lower than on Desktop

  3. The onboarding flow itself took ~17 mins to complete

How many great candidates are we missing out on? How much cost can we reduce if we converted a higher % of them? How many more placements could have been made?

Project Scope

Rollout

Our Acquisition team rolled out our amazing “new” flow the week of 9/9 (see our 2-min demo video). The first rollout was 10% of all mobile traffic in the U.S. (Canada and EU out of scope, for now) and did not include any candidates that were targeted by our Marketing team to “reactivate” an old profile of theirs. 

“Old Flow” vs “New Flow”

Screen Shot 2019-12-15 at 11.04.35 PM.png

I was brought in right after this initial rollout. My objective, alongside my Sr Growth PM, was to optimize this new candidate sign-up flow and reduce as much of the drop-off throughout the process as possible. Our hope was that it would perform better than our “old” flow, especially since that one was very outdated… but if it didn’t win, we’d have to stay impartial and revert to our old flow.

Metrics

The main metrics we set out to move in order to determine whether we’d stick with our new flow (or not) were:

  1. Increase the % of candidates that land on our “sign up” page into qualified onboards (QOBs) -- this means they’ve reached the very end and submitted their profile for approval on Hired

  2. Reduce the time it takes to complete the entire flow

  3. Increase the quality of candidates getting approved on Hired -- this is measured by our “auto-curation” (AC) algorithms that determine the odds of a candidate getting hired by their initial profile information (bonus #1)

  4. Increase the number of candidates that get approved from our new flow AND receive 3+ interview requests from employers (bonus #2)

Our “bonus” metrics here are just a couple of others we wanted to keep a pulse on, but they were not going to be crucial factors in determining the success of our project.

metrics.png
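To make the primary metric concrete, here's a minimal sketch of how landing-to-QOB conversion and relative lift over a control flow can be computed. The counts below are hypothetical placeholders, not our real figures:

```python
def qob_conversion(landings: int, qobs: int) -> float:
    """Share of candidates landing on the sign-up page who submit a profile (QOB)."""
    return qobs / landings if landings else 0.0

# Hypothetical weekly counts for illustration only
old_flow = {"landings": 10_000, "qobs": 900}
new_flow = {"landings": 10_000, "qobs": 1_150}

old_rate = qob_conversion(**old_flow)
new_rate = qob_conversion(**new_flow)

# Relative lift of the new flow over the "control" (old) flow
lift = (new_rate - old_rate) / old_rate
print(f"old: {old_rate:.1%}  new: {new_rate:.1%}  lift: {lift:+.1%}")
```

With these made-up numbers, a jump from 9.0% to 11.5% is a ~28% relative lift, which is the kind of comparison we watched between the two flows.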

Execution

Day-to-day tracking

My Sr. PM and I built and equipped ourselves with every report and dashboard (we use Salesforce as our CRM and Looker as our data analytics tool) related to candidates signing up: where they drop off, what sources they’re coming from, how many are moving forward, etc. Both of our morning routines included combing through these to make sure we had a pulse on everything.

looker reports.png

Initial Results

Two weeks after our initial rollout, thankfully (and purposely), we could measure drop-off at every single step of the new flow. With the help of our Data team, we pre-built Looker reports to track users from the start of the candidate sign-up process to the page (or step) where they decided to drop off. We also used Google Analytics to track where most of our traffic was coming from; for this project, we wanted to make sure there weren’t any irregularities or spikes in sign-ups from a particular source throughout the months of rollout.

micro funnel.png

A simple data pull from our handy Looker reports for our first couple of weeks into a Google Sheet, a quick pass into some visually appealing bar graphs… voila.
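The per-step drop-off behind those bar graphs is simple arithmetic; a minimal sketch with hypothetical step counts (not our real Looker data):

```python
# Hypothetical counts of users reaching each step of the flow, in funnel order
funnel = [
    ("Splash", 5000),
    ("Sign-up", 3200),
    ("Primary Role", 2100),
    ("Work History", 1500),
    ("Online Presence", 1200),
    ("Preview", 1000),
]

def step_dropoff(funnel):
    """Return (step, drop-off rate vs. the previous step) for each step after the first."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((name, 1 - n / prev_n))
    return rates

for step, rate in step_dropoff(funnel):
    print(f"{step:<16} {rate:.0%} drop-off")
```

Charting these per-step rates (rather than raw counts) is what made the worst pages jump out in the Google Sheet.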

First Areas for Iteration

  1. Primary Role — picking your tech role(s) respectively

  2. Work History — your current or most recent work experience

  3. Online Presence — adding links such as LinkedIn, GitHub, Stack Overflow, or Dribbble

  4. Preview — Your profile right before you click “submit” for approval

We decided not to iterate on our Splash and Sign-up pages since these tend to naturally have high drop-offs. While we could likely make these pages more engaging, we were aiming for our lowest-hanging optimizations (low effort, high impact). The other highlighted section we didn’t target was ‘work history lists’, which covers adding any additional work experiences. We figured that improving #2 above should cover it, for now.

Brainstorming Hypotheses & Solutions

Without any QA Engineers or UX Researchers, we had to do our own dry runs to find bugs and sift out areas for improvement. We also use Hotjar to capture user session recordings, so I watched hundreds of those and ran countless dry runs to spark hypotheses about why users were dropping off at each section.

recordings.png

My Sr PM, Product Designer, Eng Manager, and I tasked ourselves with brainstorming as many hypotheses as we could for each of the 4 pages we selected to iterate on, organized in a Google Sheet. We then played a little game of “spend ten”: each of us was given a fake “$10” and could anonymously put a dollar amount next to each hypothesis based on what we thought was the likeliest reason for drop-off (if you were really confident, you could put $7 on one hypothesis, for example). FYI, this is a great way of involving several team members and having everyone voice their input.
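Tallying the "spend ten" votes is just summing dollar allocations per hypothesis; a minimal sketch with made-up hypotheses and allocations (ours lived in a Google Sheet):

```python
from collections import Counter

# Each teammate anonymously spends a fake $10 across hypotheses (illustrative data)
votes = [
    {"CTA unclear": 4, "Form too long": 6},
    {"Form too long": 7, "Page loads slowly": 3},
    {"CTA unclear": 2, "Form too long": 5, "Page loads slowly": 3},
]

totals = Counter()
for allocation in votes:
    assert sum(allocation.values()) == 10, "each person spends exactly $10"
    totals.update(allocation)

# Highest-valued ($) hypothesis rises to the top
for hypothesis, dollars in totals.most_common():
    print(f"${dollars:>2}  {hypothesis}")
```

The anonymity matters more than the math: nobody anchors on the loudest voice, and the totals still surface a clear winner.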

Once we determined our highest-valued ($) hypotheses for each step, we set out to brainstorm solutions so we could start shipping quick iterations. Once we had a few potential solutions beside each hypothesis, we gathered and collectively agreed on a final solution (our Sr PM had the final say… he’s super smart and keeps us grounded). Now we could finally start building.

spend ten.png

Shipping our iterations

We use Jira for project management and agile methodologies for shipping features. Our sprints are 2 weeks long, and we write tickets as user stories in our backlog first, e.g. “[Mobile Web Optimization] As a candidate, see my character count as I’m typing up my work description”. In this case, we wrote up all of our tickets for ‘[Mobile Web Optimization]’ and added them to the backlog; during grooming, our Engineers “pointed” each ticket (attributing a score from 1-3 depending on how many days it might take them) or ran discovery to further “chunk” tickets into additional ones.

Screen Shot 2019-12-15 at 8.28.34 PM.png

Eventually, our Eng Manager distributed all the tickets over a couple of sprints to our individual contributor Engineers, and toward early-to-mid October we had our first few iterations complete. Here are a couple of the quick, small fixes/iterations:

P.R. iteration.png
W.E. iteration.png

And then… 30-45 day results…

Starting in early October, we noticed that our “new flow” was failing to hit our target metrics and beginning to underperform our “control,” which came as a shock since we thought our new flow was BY FAR better.

Results analysis.png
looker table.png

To my Sr PM and me, this instantly became the highest priority: figuring out why our new flow was underperforming. After drawing up an insane number of hypotheses from data reports and dashboards, we worked through the likeliest reasons and ultimately settled on one:

1 - Was there a specific step causing drop-off? Nope. Drop-off across all steps was very consistent with when we first rolled everything out in early September.

2 - Were fewer users completing the onboarding overall? Nope. No seasonality or drops in landings compared to other months.

3 - Did a specific channel have a spike in traffic that performed worse? Nope. According to Google Analytics, there were no spikes in traffic from a specific source.

4 - Did we ship a change that hurt conversion? Bingo. Ouch, but yep, that was it. The timeline lined up with our first ‘Work Experience’ page fixes. We saw a 32.8% spike in candidates signing up under ‘Other’ compared to our “old flow,” vs. a 0.6% difference before we iterated.

other analysis.png

It turns out that after our first iteration, users found it more enticing to select the ‘Other’ role and type in their title instead of spending time scrolling and searching for the fixed job category we support. The problem: if you sign up under ‘Other’ and make it through the process, our algorithm can’t detect whether your role is a fit, so it’ll “reject” you by default before a “curator” (a person on our team who can manually approve people who are accidentally rejected) catches your profile.
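This kind of shift is easy to flag once you track the share of sign-ups choosing 'Other'; a minimal sketch, where the counts and the 5-point alert threshold are hypothetical, not our real figures or alerting rule:

```python
def other_share(by_role: dict) -> float:
    """Fraction of sign-ups that chose the 'Other' role category."""
    total = sum(by_role.values())
    return by_role.get("Other", 0) / total if total else 0.0

# Illustrative weekly counts before and after the 'Work Experience' iteration
before = {"Software Engineering": 700, "Design": 150, "Other": 12}
after  = {"Software Engineering": 520, "Design": 120, "Other": 310}

shift = other_share(after) - other_share(before)
if shift > 0.05:  # 5-point threshold is an assumption for the sketch
    print(f"'Other' share jumped by {shift:.1%} - check the role picker UX")
```

Watching segment shares like this, and not just top-line conversion, is what let us tie the regression to a specific UI change.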

Pretty quickly, we went back to the drawing board, revamped only this section of the sign-up process, and rebranded it properly. Soon after, we saw some pretty dramatic results...

60-75 day results…

results hurray.png

At this point, we had nearly hit all of our targets and went ahead to roll out to 50% of our mobile traffic (US only). As we worked on new iterations, our results stayed consistent, and eventually we approached our 90-day mark…

slack post.png

100% roll out!

Looking forward

In the making of this case study, and for the months to come, the most significant considerations in our continued rollout are ensuring that we are GDPR-compliant so we can serve our European markets (London & Paris, plus any potential future cities) and that the flow is generally built for internationalization.

internationalization.png