The way Intercom tracks user data is simple: install a code snippet in your product, and Intercom's record of a user is updated as soon as they log in. This is how Intercom worked for its first two years.
However, there’s a flaw in this system: it relies on users logging in before they start appearing in Intercom's records. Customers with an existing user base may have to wait a while before their active users are tracked in Intercom. This limits the usefulness of the Engage product, which customers may want to use to re-engage existing users who are inactive on their product.
Collaborating with the Growth team (which owns the sign-up, purchase, and onboarding flows), we committed to a two-week design sprint to tackle this problem. The main idea is simple: we’d build a feature that lets customers import their existing user base into Intercom, so they can start messaging those users without waiting for them to log in.
To get to the root of the problem, I needed to familiarise myself with how users are usually exported from other tools, how the exports are formatted, and how imports work in other tools.
User lists are typically exported as a CSV file containing one big data table. Each row represents a user, and each column contains a piece of data about that user.
In other tools, the import process typically involves uploading the CSV, followed by a manual “mapping” step where people specify which column of user data maps to which attribute stored in the product.
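To make the mapping step concrete, here is a minimal sketch of what an importer does once the customer has chosen a mapping. The CSV contents, column names, and attribute names are illustrative assumptions, not Intercom's actual schema.

```python
import csv
import io

# A hypothetical CSV export from another tool (columns and values are made up).
exported = """Full Name,Work Email,Signed Up
Jane Doe,jane@example.com,2017-03-01
John Roe,john@example.com,2017-04-15
"""

# The manual "mapping" step: the customer specifies which CSV column
# corresponds to which attribute stored in the product.
column_mapping = {
    "Work Email": "email",
    "Full Name": "name",
    "Signed Up": "signed_up_at",
}

def import_users(csv_text, mapping):
    """Translate each CSV row into a user record using the chosen mapping."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {attribute: row[column] for column, attribute in mapping.items()}
        for row in reader
    ]

users = import_users(exported, column_mapping)
print(users[0]["email"])  # jane@example.com
```

The core operation is just a column rename; the design challenge in the rest of this piece is how people build `column_mapping` without getting lost.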
In some tools, this process can get pretty hard to understand.
Within the given timeline, we weren’t aiming to reinvent the wheel. The general process of importing user information stays the same: upload the CSV, map the user data, and begin the import. In terms of execution, however, I wanted to make this process as seamless as possible.
This exploration aims to simplify the process by matching the user’s mental model. It shows a preview of the CSV the customer uploaded and retains the structure of the data table (a row for each user, a column for each piece of data), so that they can easily recognise the data they’ve uploaded and pick the correct attribute to map each column to.
However, after a quick run-through of the design prototype, it still didn’t feel seamless. This design asks users to go through every column of the data table and map it to an Intercom attribute. If the uploaded CSV contains many columns, this process could take a long time to complete.
This design optimises the flow by only asking people to map the four key attributes we believe matter most for getting value out of targeting users in Intercom. Mapping other data is optional and can be done later.
It also gives better affordance for what to do when you first land in the flow, showing a simple, clear question (e.g. “Which column in your CSV contains your users’ email addresses?”) that guides customers to select a data column.
This iteration feels much more effortless than the first round, and it became the version we decided to ship.
In retrospect, we could have done more by automatically mapping data using keyword matching. But what we had was a good enough first step that would provide value.
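For the curious, the keyword-matching idea we didn't ship could look something like this sketch: guess an attribute from the words in a column header, and fall back to manual mapping when nothing matches. The keyword lists and attribute names are my own illustrative assumptions.

```python
# Hypothetical keyword lists for guessing which attribute a column maps to.
KEYWORDS = {
    "email": ["email", "e-mail", "mail"],
    "name": ["name"],
    "user_id": ["user id", "userid", "uid"],
    "signed_up_at": ["signed up", "created", "joined", "registered"],
}

def guess_attribute(header):
    """Return the attribute whose keywords appear in the column header, if any."""
    normalized = header.strip().lower()
    for attribute, keywords in KEYWORDS.items():
        if any(keyword in normalized for keyword in keywords):
            return attribute
    return None  # no confident guess: ask the customer to map it manually

def suggest_mapping(headers):
    """Pre-fill the mapping UI with a best guess for each CSV column."""
    return {header: guess_attribute(header) for header in headers}

print(suggest_mapping(["Work Email", "Full Name", "Signed Up", "Plan"]))
# {'Work Email': 'email', 'Full Name': 'name', 'Signed Up': 'signed_up_at', 'Plan': None}
```

Even a crude matcher like this would let most customers confirm pre-filled guesses instead of building the mapping from scratch.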
The import flow was implemented within a few weeks, and it immediately helped Intercom customers reach a wider set of end users than they could easily have reached before. And since Intercom’s pricing is based on the number of users, this feature had a direct impact on revenue growth.