The data science company Intent has its tag live on hundreds of websites and web applications, modifying the customer experience in real time across billions of pageviews a month. The tool for setting the configurations that dictate how each page load is modified is called Chameleon, and it is used by engineers, product managers, and integration engineers.
UX Team Manager
Research, Synthesis, Ideation, Wireframes, Technical Liaison
The internal tool, Chameleon, managed configurations and multivariate tests for billions of pageviews per month across hundreds of enterprise partner sites. The tech that enabled this was robust and scalable, but the UI that allowed PMs and engineers to configure it was confusing, riddled with gotchas, and full of patch jobs that rendered it barely usable by anyone except a few veteran employees. This frequently led to collisions between configurations, or to misconfigurations with unintended effects and expensive repercussions.
A team was formed to assess how the tool was truly being used by stakeholders, and to redesign the frontend from the ground up to eliminate user frustration and user error.
The new product, brilliantly dubbed Chameleon 2.0, would need to maintain all the functionality and flexibility of the existing tool while better guiding users away from configuration errors that could break partners' websites. The entire frontend would be rewritten, so we were free to choose whatever libraries we wanted, but the new UI would need to be backwards compatible, since hundreds of configurations would still be live when it was switched over. The product was designed for desktop only, since no user would ever access it from a tablet or phone.
The tool was used internally by a small group: engineers testing configurations locally, integration engineers setting up configs for new partners or troubleshooting broken ones, and product managers (myself included) setting up multivariate tests on partners' sites. We would need to interview a few people in each role to fully capture how the tool was being used and where it fell short of user needs.
We set up one-hour interviews with the three stakeholder types who regularly used the product, aiming for two to three users in each category.
We developed a standard set of questions for each stakeholder type in conjunction with the PM who owned the internal tool. We also reserved 30 minutes of each interview for the user to walk us through a typical use case in the existing tool. During this phase we asked clarifying questions, watched for habits adopted to work around bugs or confusing parts of the tool, and proposed edge-case scenarios to see how users would handle them. For some of the "power users" we scheduled additional shadowing sessions, observing them perform real tasks and asking questions as they completed them.
Notes from the research sessions were imported into Dovetail, a research tagging tool, and tagged by stakeholder type and user need, with special tags for bugs, points of confusion, and inefficient flows.
These tags allowed us to easily surface common needs, frequently encountered pain points, and sort requirements by stakeholder type.
With tagging complete, we were able to identify the tasks each user type needed to complete and start mapping them out as flows that would eventually become features in the UI. While the talented designer on my team carried out much of the synthesis, I played an integral role as technical liaison, helping to interpret some of the denser technical concepts raised in interviews so they could be broken down into manageable "How might we" (HMW) statements. My weekly use of the tool to set up multivariate tests gave me a strong mental model of the underlying data structure. Combined with an intimate knowledge of the product vision for how we wanted to use the tool in the future, this let me help guide my team's designs toward a viable solution.
With so many flows and corresponding features, we needed a method to prioritize the feature work. I encouraged the team to plot user benefit against relative effort in a prioritization matrix. The engineers and PM on the team weighed in on the estimates and helped decide where to draw the line for this release. We now had our feature set and could start on the UI work.
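A benefit-versus-effort matrix can be reduced to a simple ranking by ratio, which is roughly how the cut line gets drawn. The sketch below is purely illustrative; the feature names and scores are hypothetical, not the team's actual estimates.

```python
# Hypothetical sketch of benefit/effort prioritization.
# Feature names and scores are invented for illustration.

def prioritize(features):
    """Sort features by benefit-to-effort ratio, highest first."""
    return sorted(features, key=lambda f: f["benefit"] / f["effort"], reverse=True)

features = [
    {"name": "variant builder", "benefit": 9, "effort": 5},
    {"name": "scoping UI",      "benefit": 8, "effort": 3},
    {"name": "config diffing",  "benefit": 4, "effort": 6},
]

for f in prioritize(features):
    print(f["name"], round(f["benefit"] / f["effort"], 2))
```

In practice the line for a release is drawn after discussion rather than at a fixed ratio, but a ranking like this makes the trade-offs visible to everyone in the room.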
All prioritized user flows were eventually consolidated into the following flowchart. This asset proved valuable for communicating design intent to PMs and tech leads before starting actual UI work, and it also helped them compartmentalize and prioritize feature work.
Scoping a configuration correctly was one of the biggest challenges users faced in Chameleon. Notable issues with the legacy version of the tool included:
Below are two of the screens supporting scoping that were developed from sketches into wireframes, with some of the specific UI fixes and improvements highlighted.
Creating variants for multivariate tests was the most frequent user task in Chameleon, and by far the most confusing. Setting up a test required jumping between three or four screens, refreshing pages (and potentially losing input data), and carefully excluding variants that weren't useful. It was common to make a mistake and have to start over once test data began coming in incorrectly, costing the company time, money, and client trust.
In the example shown here, the user wants to manipulate two variables, each with two possible values. This generates four possible variants, but the user only wants to test two of them and so must carefully select which two to exclude. That workflow was far from intuitive and prone to mistakes.
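The combinatorics behind this pain point can be sketched as a Cartesian product: every combination of variable values is a potential variant, and the legacy tool made users prune the unwanted ones by hand. The variable names and values below are hypothetical, chosen only to mirror the two-by-two example above.

```python
from itertools import product

# Hypothetical variables: two variables with two values each yield four variants.
variables = {
    "button_color": ["blue", "green"],
    "headline":     ["short", "long"],
}

# Every combination of values is a potential variant.
variants = [dict(zip(variables, combo)) for combo in product(*variables.values())]

# The legacy tool required manually excluding the unwanted combinations;
# forgetting one silently polluted test data. Here the user keeps only two:
excluded = [
    {"button_color": "blue",  "headline": "long"},
    {"button_color": "green", "headline": "short"},
]
active = [v for v in variants if v not in excluded]
```

With more variables the variant count grows multiplicatively, which is why a UI that surfaces the full grid and lets users select inclusions directly (rather than exclusions) reduces mistakes.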
Several layout iterations were sketched and wireframed until a clearer UI pattern emerged that presented this complicated concept intuitively.
With all the wireframes in Figma, we could easily build realistic prototypes. We recruited the same stakeholders we had interviewed during the research phase and had them run through common task flows using the prototypes, identifying points of confusion, misunderstanding, and further opportunities to improve the designs. Several rounds of iteration were completed before arriving at the final wireframes seen above. One of the final Figma prototypes is available here.
I was not deeply involved in mockup creation, but I have included one sample here for completeness. We decided to leverage Material Design for speed and quality, and for its widespread availability across JS frameworks (the tool was built in Vue JS). From the mockups, engineers were able to quickly place components and tie them into the view model, allowing the tool to be built rapidly and with excellent error handling out of the box.
Thanks to some clever separation of backend and frontend, plus the high-level flow diagram presented earlier in this study, the engineering team was able to release different sections of the tool selectively. Today the whole company is using the new version of Chameleon. It is faster, far less error-prone, and accessible to a wider range of internal users.