This post focuses on sales ops: how to make sure the data captured in a sales process, right from the lead gen stage, flows efficiently across the various parts of that process. Beyond avoiding data leakage, it also looks at how capturing as much data as possible early on helps analytics, even when some pieces seem irrelevant at first. In this first post, I take the example of a sales ops process built around 4 services/technologies and explain the 4 groups of data worth capturing and maintaining for KPI and reporting purposes. The second post will then explain how to put all the pieces together.
A usual issue with a sales process is the prospecting part. I used to find it the difficult part, though I have since worked out it is a fascinating and actually not-so-hard part of the sales process. I have also realised that making sure data related to the prospect can flow across the various parts of a process is hard but very important, hence the need to think it through when building watertight sales ops. There is a lot of value in capturing as much data as possible for future analysis, not least the ability to implement clear leading or lagging sales KPIs such as "lead time from first contact to close", the vertical where traction is most potent, the number of interactions with a prospect (and hence COS), etc.
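To make this concrete, here is a minimal sketch of how such KPIs become one-liners once the data has flowed into the CRM. The lead records and their values are hypothetical, invented purely for illustration:

```python
from datetime import date

# Hypothetical lead records as they might sit in the CRM once the data
# has flowed through the process without leakage.
leads = [
    {"vertical": "fintech", "first_contact": date(2017, 3, 1),
     "closed": date(2017, 4, 12), "interactions": 9},
    {"vertical": "retail", "first_contact": date(2017, 3, 6),
     "closed": date(2017, 5, 2), "interactions": 14},
]

# "Lead time from first contact to close", averaged across closed deals.
lead_times = [(l["closed"] - l["first_contact"]).days for l in leads]
avg_lead_time = sum(lead_times) / len(lead_times)

# Average number of interactions per closed deal.
avg_interactions = sum(l["interactions"] for l in leads) / len(leads)

print(f"Average lead time (days): {avg_lead_time:.1f}")
print(f"Average interactions per closed deal: {avg_interactions:.1f}")
```

The point is not the arithmetic but that none of it is possible if the dates and interaction counts leak out of the process before reaching the CRM.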
What are the technologies to use in sales ops?
Let’s look at some sales ops technologies that can help make sure data flows, and then at the categories of data that can flow across the sales process. I mentioned here that the top of the funnel is filled by a range of activities that need to run constantly. This post focuses on outbound activities.
The lead research: Firstly, there is a need to have some research done. It is possible to use a lead gen product; rightly or wrongly, I am not a big fan, for two reasons. Firstly, I want contextual data to personalise my outreach as much as possible, and this goes way beyond a simple name, email address and company name: I ideally need 6 or 7 data points, which a researcher can collect. Secondly, a manual process at the research stage is probably more cost efficient. I am not entirely sure of the cost per lead for a lead gen product, but CPL comes out at around $0.25 to $0.40 when done manually via Upwork (word of warning: there is a need to separate the wheat from the chaff on Upwork, and good researchers are, in my experience, the exception rather than the norm).
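A quick back-of-the-envelope check of those manual-research economics. The monthly lead volume below is an assumption for illustration; only the $0.25 to $0.40 CPL range comes from my Upwork experience:

```python
# Hypothetical outbound volume (assumption, not from the process itself).
leads_per_month = 500

# Manual CPL range observed via Upwork.
cpl_low, cpl_high = 0.25, 0.40

cost_low = leads_per_month * cpl_low
cost_high = leads_per_month * cpl_high
print(f"Monthly research cost: ${cost_low:.2f} to ${cost_high:.2f}")
```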
The prospecting engine: There are loads of options out there for outreach; sales ops has become a busy industry, but Reply.io will crush the market. I previously mentioned how Reply.io can be used for various purposes in a sales process (here). The focus of this post is on avoiding data leakage, so I’ll explain the role Reply.io plays in this shortly.
The CRM: Again, plenty of options out there. For this tailored sales stack, and again keeping just the “no data leakage” angle, I recommend Prosperworks. As Prosperworks is built on the Google suite, it helps avoid data leakage (detailed in the second post).
The mail client: Gmail, what else? Well, I work in the tech industry and with young, innovative companies, so I rarely come across organisations using something else. I am admittedly a little bit partial.
Sales ops mantra? Gimme the data!
I use four categories of data, listed below from the obvious to the more complex. I won’t cover data such as names, companies, email addresses, phone numbers, etc.: these are the obvious ones and not really useful from a sales ops analytics point of view.
1- The “raison d’être” of the lead. Beyond making a sale, there are various reasons to research a lead (I wrote some details here). It can be to start a conversation to capture feedback and tailor a value proposition. Or there are buying signals that can be captured and used to open a conversation. Or the lead operates in a specific industry segment that is the key focus/target, because tier-one references can be leveraged. The list is endless.
2- The personalisation variables. These are crucial. For example, you may focus on people tweeting a specific keyword, or on those who attended a conference and used the conference hashtag. Of course, it is possible to capture far more sophisticated variables depending on the product or the maturity of the sales process. Capturing all the details in the CRM can be useful for future use (e.g. the date of the tweet and the link). It may sound basic, but keeping these personalisation variables is very useful: they will help reset a context in the future.
3- The segmentation variables. These are also very important, especially from a KPI point of view. The fact that the targeted company is in industry segment X, located in country Y, with Z people working for it will come in very handy for analysis and planning purposes. I’ve learned, sometimes the hard way, that it is an error to discard data that could otherwise be easily captured.
4- The traction/analytics variables. These are for a somewhat more advanced stage, but it is possible via API to keep track of email opens, pages viewed, downloads, etc.
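The four groups above can be pictured as one lead record. Here is a sketch of how they might be structured; all field names and values are illustrative, not actual Prosperworks or Reply.io fields:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Lead:
    # Obvious identifiers: needed operationally, not for analytics.
    name: str
    company: str
    email: str
    # 1- Raison d'être: why this lead was researched at all.
    reason: str
    # 2- Personalisation variables: context to tailor the outreach,
    # kept so the context can be reset later.
    personalisation: Dict[str, str] = field(default_factory=dict)
    # 3- Segmentation variables: for KPI slicing and planning.
    industry: str = ""
    country: str = ""
    headcount: int = 0
    # 4- Traction/analytics variables: collected later, e.g. via API.
    email_opens: int = 0
    pages_viewed: List[str] = field(default_factory=list)


# A hypothetical lead built from the examples in the post.
lead = Lead(
    name="Jane Doe", company="Acme", email="jane@acme.example",
    reason="used the conference hashtag",
    personalisation={"tweet_date": "2017-05-01",
                     "tweet_link": "https://twitter.com/janedoe/status/1"},
    industry="SaaS", country="UK", headcount=45,
)
print(lead.reason)
```

The traction fields start empty and get filled in as the prospect interacts; the first three groups are what the researcher and the prospecting engine must hand over without leakage.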
Separate note: as mentioned, this applies in the context of an outbound effort. For a prospect coming from an inbound source, the 4 categories might not all be applicable and would need to be tweaked.
Now the technology is in place and the data categorised. In the next post, I will share, step by step, how to put it all together.