Powering Up Cross-Channel for Faster Ad Measurement & Reporting
Sep 2024 – Present
My role
Researcher
Facilitator
Interaction Designer
Visual Designer
Team
4 Product Managers
3 Developers
2 Directors
Platform
Web
LiveRamp Motif
My Role
Within the first 30 days, I identified research opportunities. In 60 days, I created strategic collaborations across 4 teams. In 90 days, I defined the MVP to create a self-service measurement experience.
Background
Brands like Albertsons and Abbvie spend millions of advertising dollars per year to reach millions of customers. At that scale, a lot of money is wasted showing the wrong ads to the wrong people.
Due to privacy and compliance regulations, advertising dollars cannot be targeted as intelligently as spend in other channels.
LiveRamp’s managed service across channels typically requires a data scientist and months of background work, weekly meetings, and reviewing 10,000 CSV rows manually. The space is ripe for innovation.
Strategy
The plan was to connect 4 platforms: data connections, asset management, measurement tools, and reporting all into one simple experience. However, that meant 4 product teams and a lot of communication.
There was also a cultural challenge: across 3 acquisitions, LiveRamp had failed to deliver the cross-product integrations needed to create seamless tools.
Being brand new to measurement, I had to ramp up quickly on all 4 products. With the holiday season coming in 2 short months, customers would limit their availability, partners would be on vacation, and the team could lose momentum.
Each product team expected the product to reside in their platform. Fortunately only API tools had been built, and I had the flexibility of creating a product that could potentially live anywhere across the ecosystem.
Expectation Setting
With the PM and back-end dev in tow, I started by building bridges and connecting with product and design partners across LiveRamp’s ecosystem.
I aligned with product and design directors early to understand prior challenges and the company's direction. LiveRamp is on a trajectory to unify multiple platforms.
I built out an early vision of the product that lived in a neutral space, setting the expectation that the campaign manager, our target customer, would decide where the product would live. This removed the burden of ownership from all teams and reduced potential friction.
I was also able to demonstrate my ability to understand complex problems quickly, as well as my communication style in getting feedback from peers on rough, incomplete concepts.
Research
I scheduled time with customers as early as possible. Building a 0→1 product is high risk, so I took as many steps as possible to reduce failure.
My favorite research technique is iterative: continuous discovery. Over the next 8 weeks I met with 3 customers 6 times. I also held separate sessions with 2 internal customers, meeting 8 times to understand the problem space, perform usability tests, and whiteboard solutions.
I began by mapping the existing service journey with data scientists, who pointed out where clients struggled most to understand the product, which anchors slowed down the process, and where service teams spent the most time revising and fixing customer issues.
The Guided Interface
Early prototypes shared with customers had many intentional steps. I initially hypothesized that campaign managers would need a lot of assistance getting set up, and wouldn’t know where to find their data.
The stepper replaced the checklist, guiding customers through campaign setup in a sheet.
After speaking with CVS, Pepsi, and Honda, it was obvious that campaign managers were not the sole executors of measurement. There are entire teams managing multiple campaigns, similar to a product team.
Campaign managers also felt like this was a lot of work, and that the steps were in the wrong order.
Measurement setup wasn't going to happen overnight. I would need to create an interface that reduced the customer's burden, enabled collaboration between multiple users, and supported review.
Principles
Don’t make me think
Campaign managers struggled with the overloaded interface. Contextual clues and a long process slowed customers down rather than helping them.
Collaborative QA
Campaign managers don’t work alone. Data is cross-checked by partners, and the UI should support checkpoints before moving forward.
Avoid Expensive Runs
Measurement runs could cost $10,000–$20,000 if performed wastefully. The system should be smart enough to guide the customer through troubleshooting mistakes.
Campaign-driven naming
In the existing world, hundreds of campaigns per brand are tracked manually using IDs. By enumerating campaign IDs in the product, we could create a system that matches the customer's world.
Data is Timely and Aware
After understanding where customers’ internal collaboration happens, I ideated solutions that allowed customers to collaborate with internal advertising teams. Creating checkpoints provided a visual space for teams to review data asynchronously, and understand the information in front of them before running costly reports.
This iteration tested well with users. By collaborating with engineering teams, I identified that data could be pre-loaded into a campaign, reducing the customer's steps to:
1) shaping the data using filters and
2) selecting exposures to build their map
You may have noticed that the navigation changes multiple times. I used these sessions to determine the ideal places for reports and plans for easier discoverability.
I was also able to learn the customer's language around labeling, which helped me communicate information architecture and data-centric terms to marketing managers in a way they could understand.
MVP
When moving toward the MVP, I removed exploratory (but much loved) components for the sake of delivering quickly and validating the software first. Final usability tests through handoff took only 1 week because of the tight-knit collaboration between the product, development, and design teams. We understood each other's strengths and weaknesses, and were honest about our initial approach, our expectations of the MVP, and what the first version was trying to accomplish.
Looking Back
Unfortunately, my team alone was not able to push for a campaign-driven architecture for advertising assets. This is a major flaw in LiveRamp’s system as a whole because data in LiveRamp does not naturally map to the customer’s real world.
Working with my manager, I am gathering customer evidence that supports using Campaign IDs to organize assets. Additionally, by keeping the design leadership team aware of the asset management feedback, we ran the first object-mapping workshop to create consistent naming conventions and definitions of objects across products. Finally, we are exploring how to move away from generic names like tables and views toward more descriptive names, such as exposures and audiences, so customers can more easily identify their data.