
A Framework for Testing and Experimentation


Building Speed, Efficiency, and Confidence Without Breaking Trust

Most organizations believe they have an experimentation culture. In practice, many are still operating under rules that made sense a decade ago but quietly collapse under modern conditions.

The classic model is familiar. Run an A/B test. Wait for statistical significance. Declare a winner. Move on. That approach assumed clean user-level tracking, stable channels, and patient stakeholders. None of those assumptions reliably hold anymore. Channels fragment. Privacy constraints erode signal fidelity. Product, marketing, and data systems are tightly coupled in ways they were not before.

The result is predictable. Experimentation either slows to a crawl because no one trusts the data, or it speeds up in the wrong direction, with teams over-interpreting weak signals and shipping changes that do not reproduce. Both outcomes undermine confidence. Over time, experimentation stops being a decision engine and turns into performance theater.

The framework I’ve outlined here exists to break that cycle. It comes from building growth and experimentation capabilities inside real organizations, not idealized ones. Again and again, the issue was not ambition or tooling. It was the inability to align process, analytics, and decision-making with how experimentation actually functions inside companies.

At the core is a simple but often ignored truth: not all tests deserve the same process, the same resourcing, or the same definition of confidence. Treating them as interchangeable is one of the primary reasons experimentation programs stall.

Why legacy experimentation models fail in practice

Most experimentation failures are not caused by a lack of ideas. They are caused by habits formed in a simpler measurement era. Teams still operate under implicit assumptions about clean attribution, stable platform behavior, and linear decision-making. Worse, there is often a belief that enough calibration or methodological rigor will eventually “clean” fundamentally noisy data.

In practice, experiments are routinely compromised before they finish, often by well-intentioned behavior. Teams peek early. Metrics shift mid-test. Timelines are extended or shortened until something looks acceptable. Each action feels reasonable in isolation. Collectively, they inflate false positives and create a backlog of changes that feel successful but do not hold up over time.
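The inflation from peeking is easy to demonstrate. The sketch below uses assumed parameters (a 5% base conversion rate, a significance check every 200 users) and runs A/A tests in which no true effect exists, so every "significant" result is by construction a false positive:

```python
import math
import random

def z_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def aa_test(n_per_arm=4_000, base_rate=0.05, peek_every=0, seed=0):
    """One simulated A/A test: both arms are identical, so any
    'significant' result is a false positive."""
    rng = random.Random(seed)
    conv_a = conv_b = 0
    for i in range(1, n_per_arm + 1):
        conv_a += rng.random() < base_rate
        conv_b += rng.random() < base_rate
        # The peeking team checks significance every peek_every users
        # and stops the moment p < 0.05 -- declaring a 'winner'.
        if peek_every and i % peek_every == 0 and z_pvalue(conv_a, i, conv_b, i) < 0.05:
            return True
    # The disciplined team looks exactly once, at the planned sample size.
    return z_pvalue(conv_a, n_per_arm, conv_b, n_per_arm) < 0.05

trials = 400
peeking = sum(aa_test(peek_every=200, seed=s) for s in range(trials)) / trials
one_look = sum(aa_test(seed=s) for s in range(trials)) / trials
print(f"False positives with peeking every 200 users: {peeking:.1%}")
print(f"False positives with a single planned look:  {one_look:.1%}")
```

Because the peeking team gets twenty chances to cross p < 0.05 instead of one, its false positive rate lands well above the nominal 5%, even though each individual look feels reasonable in isolation.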

Leadership notices. Not because leaders are statisticians, but because outcomes stop compounding. Trust erodes even when results appear directionally correct.

At the same time, modern data stacks introduce failure modes older playbooks never anticipated. Sample ratio mismatch, identity loss across devices, platform-side filtering, and logging gaps quietly distort outcomes. When data integrity is not treated as a prerequisite, organizations end up debating conclusions that were never reliable to begin with.
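Sample ratio mismatch, at least, is cheap to catch before anyone debates conclusions. A minimal sketch, assuming an intended 50/50 split: a chi-square goodness-of-fit test on the observed arm counts, flagged when the statistic exceeds the df=1 critical value at p = 0.05.

```python
def srm_check(observed_a, observed_b, expected_ratio=0.5, critical_value=3.841):
    """Chi-square goodness-of-fit test for sample ratio mismatch.

    expected_ratio: intended share of traffic in arm A (50/50 assumed here).
    critical_value: chi-square threshold for df=1 at p = 0.05.
    """
    total = observed_a + observed_b
    expected_a = total * expected_ratio
    expected_b = total * (1 - expected_ratio)
    chi2 = ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)
    return chi2, chi2 > critical_value

# A 50/50 split that quietly drifted: 50,000 vs 48,500 assigned users.
chi2, mismatch = srm_check(50_000, 48_500)
print(f"chi-square = {chi2:.1f}, SRM detected: {mismatch}")
```

A gap of 1,500 users on nearly 100,000 assignments looks small on a dashboard, but the test flags it decisively; results from such a test should be triaged for logging or assignment bugs before anyone reads the lift numbers.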

The final failure is organizational rather than technical. Teams run isolated tests without shared hypotheses, comparable metrics, or agreed confidence thresholds. Learning does not compound. Experimentation becomes a series of anecdotes instead of a system that builds institutional knowledge.

This framework addresses these failures by forcing clarity upfront. What kind of test is this? What rigor does it deserve? And how should results be interpreted before anyone sees a chart?

The part most frameworks avoid: experimentation is political

There is another reason experimentation breaks down that most frameworks avoid acknowledging. Belief inside organizations is not purely rational. It is political.

Experiments do not exist in a vacuum. They exist inside power structures, incentive systems, career risk, and narrative momentum. Data does not simply inform decisions. It is used to justify them.

This is why some experiments are allowed to “fail fast” while others are endlessly scrutinized. Results that align with existing strategy are accepted on weaker evidence. Results that challenge it face higher confidence bars, deeper analysis, and longer delays. The same organization applies different standards without ever stating them explicitly.

Ignoring this reality does not make experimentation more objective. It makes it more fragile.

The goal of a modern framework is not to eliminate politics. It is to constrain its influence by setting expectations before results exist.

The four-quadrant model for modern experimentation

The framework organizes experimentation into four quadrants based on potential impact and investment depth. The purpose is not categorization for its own sake. It is alignment. Different kinds of work require different rules of engagement.

Feature Rich experiments sit at the high-impact, high-investment end of the spectrum. These are not incremental optimizations. They are ambitious initiatives designed to change how the business works. Product experience, pricing, onboarding, messaging, and operations often move together under a single hypothesis. These experiments are meant to swing for the fences.

Because of that ambition, Feature Rich work requires coordinated investment across product, engineering, design, data, marketing, and leadership. These are strategic bets, not routine tests. They demand upfront alignment on scope, success criteria, and failure thresholds, along with explicit agreement on how long the organization is willing to learn before deciding. Their value is not just in winning, but in shaping future roadmaps and experimentation priorities.

Iterative Testing plays a different role. This quadrant exists to isolate and refine variables surfaced by Feature Rich initiatives or introduced as net-new ideas that do not require full organizational mobilization. These tests are designed to answer precise questions quickly and clearly.

Iterative Testing is intentionally lighter-weight. The goal is learning efficiency. Teams should be able to run these tests frequently, stack incremental improvements, and build confidence in causal relationships without long planning cycles or executive gating. This is where experimentation earns velocity and credibility.

Channel Specific testing is narrower by design. These experiments focus on optimizing behavior within a single environment such as paid search, social platforms, CRM, SEO, or affiliates. Their value comes from control and clarity, not breadth.

Channel tests require fewer dependencies and should move quickly. Treating them as if they deserve the same governance as major product changes creates friction without increasing insight. This is where many organizations slow themselves down unnecessarily.

Adopt and Go completes the framework. This quadrant exists to prevent wasted effort by leveraging ideas that have already worked in lookalike contexts. Another brand. Another market. Another segment. The goal is not invention, but translation.

Adopt and Go relies on staged validation rather than blind replication. Even proven ideas can fail when context shifts. The discipline is knowing when enough confidence exists to scale and when adaptation is required. Organizations that lack this muscle either over-test obvious wins or roll them out recklessly.

Deterministic and probabilistic analytics as an operating reality

A critical insight behind this framework is that analytics is not monolithic. Speed, efficiency, and confidence depend on using deterministic and probabilistic methods intentionally, not interchangeably.

Deterministic analytics relies on explicit linkage through known identifiers such as authenticated users, order IDs, or server-side event joins. It is essential for validating instrumentation, diagnosing funnel mechanics, and establishing causal relationships when identity coverage is strong. Deterministic measurement provides operational truth.

Probabilistic analytics exists because deterministic coverage is often incomplete or intentionally constrained. Privacy limits, cross-device behavior, and platform opacity make inference unavoidable at scale. Probabilistic methods estimate impact when user-level paths are fragmented.

The failure mode is arguing which method is “right” after results appear. The correct method is the one agreed upon before the test launches, based on the quadrant and the decision at hand.

Feature Rich experiments require deterministic validation of implementation and downstream behavior, but often need probabilistic or incrementality-minded approaches to assess whether observed lift is truly net new once the system adapts.

Iterative Testing should rely primarily on deterministic analytics. This quadrant exists for causal clarity. If integrity cannot be established here, the test should not ship.

Channel Specific testing often lives at the boundary. Deterministic measurement works when first-party signals are strong. When they are not, probabilistic interpretation is the reality. Confidence comes from repetition and triangulation, not a single dashboard.

Adopt and Go uses deterministic analytics to confirm correct implementation and comparable behavior, while probabilistic methods help assess whether expected performance transfers across contexts. The goal is risk reduction, not novelty detection.

Confidence, governance, and decision-making

The most important principle across all four quadrants is that confidence should scale with consequence. High-impact, high-investment decisions deserve deeper validation and slower calls. Low-impact, low-investment decisions deserve speed and autonomy.

When organizations invert this logic, experimentation becomes either painfully slow or dangerously noisy. This framework gives leaders a shared language to avoid both extremes. It does not promise certainty. It promises alignment.
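As an illustration only, pre-agreed rules can be encoded as a simple launch gate. The quadrant names follow the framework, but every threshold and sign-off below is an assumption an organization would calibrate for itself:

```python
# Illustrative defaults only -- the alphas, runtimes, and sign-offs are
# assumptions, not prescriptions from the framework.
QUADRANT_RULES = {
    "feature_rich":     {"alpha": 0.01, "min_runtime_days": 28, "signoff": "exec"},
    "iterative":        {"alpha": 0.05, "min_runtime_days": 14, "signoff": "team"},
    "channel_specific": {"alpha": 0.10, "min_runtime_days": 7,  "signoff": "channel owner"},
    "adopt_and_go":     {"alpha": 0.10, "min_runtime_days": 7,  "signoff": "team"},
}

def can_ship(quadrant: str, p_value: float, runtime_days: int) -> bool:
    """Gate a launch decision on the quadrant's pre-agreed rules."""
    rules = QUADRANT_RULES[quadrant]
    return p_value < rules["alpha"] and runtime_days >= rules["min_runtime_days"]

print(can_ship("iterative", p_value=0.03, runtime_days=15))     # meets both gates
print(can_ship("feature_rich", p_value=0.03, runtime_days=30))  # fails the stricter alpha
```

The point of writing the rules down is not the code; it is that the same p-value clears an Iterative test and fails a Feature Rich one, and everyone agreed to that before the chart existed.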

The goal is not more experiments. It is better decisions made at the right speed, with confidence levels that match the stakes. That is how experimentation becomes a durable advantage rather than a recurring source of friction.

Meta’s Instagram Reels Skippable Ads: The Quiet Revolution



In October 2025, Adweek confirmed Meta began testing Instagram Reels skippable ads, mirroring YouTube’s in-stream ad format. Users can now skip ads after a few seconds and jump back to their video feed.

At first glance, it might look like a simple UX experiment. But beneath it lies a major shift in how attention, intent, and creative performance could be measured across social platforms.

The Test and Its Context

According to Adweek reporter Trishla Ostwal, Meta is running a limited test to understand whether this format helps users discover businesses more effectively.
Unlike YouTube, Meta is not sharing revenue with creators during the pilot phase.

The timing is deliberate.
Gartner’s 2025 CMO Spend Survey shows marketers allocating 30.6% of total budgets to paid media, a 10% year-over-year increase. Social channels are now the second-largest digital spend category. And among them, Instagram has a higher purchase intent than both Facebook and YouTube.

For Meta, the equation is simple: if users tolerate skippable ads without hurting engagement, Reels could become a more efficient revenue engine.

(Source: Adweek, “Meta Is Testing Skippable Ads on Instagram Reels, Borrowing From YouTube’s Playbook,” Oct 17, 2025)

Why It Matters

Skippable ads change the game for both performance marketers and creative teams.

Historically, non-skippable ads were priced at a premium because they guaranteed full viewership. But forced attention rarely equals real engagement. Skippable formats, on the other hand, turn the moment of attention into a user decision and a behavioral signal that can separate curiosity from disinterest.

When someone doesn’t skip, that’s intent data.
When they do, it’s still valuable, just different. It tells the algorithm what not to show, and it tells you which creative failed to earn a second chance.

What Growth Marketers Should Do Now

1. Treat skips as data, not failure

Every skip is a signal. Over time, Meta’s delivery models will likely use skip behavior to refine audience targeting. Track skip rates, watch-through rates, and conversions together to uncover patterns that explain quality, not just reach.
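As a sketch of what tracking these signals "together" can mean, the function below rolls three of them into one view of a creative. The field names and numbers are illustrative assumptions, not Meta's reporting schema:

```python
def reels_ad_quality(impressions, skips, completions, conversions):
    """Combine skip, watch-through, and conversion signals for one creative.

    All field names and sample numbers are illustrative assumptions.
    """
    engaged_views = impressions - skips  # viewers who chose not to skip
    return {
        "skip_rate": skips / impressions,
        "watch_through_rate": completions / impressions,
        # Conversion rate among non-skippers: a rough proxy for how well
        # the creative converts the attention it actually earned.
        "cvr_engaged": conversions / engaged_views if engaged_views else 0.0,
    }

# Two hypothetical creatives with identical reach:
a = reels_ad_quality(impressions=100_000, skips=62_000, completions=21_000, conversions=950)
b = reels_ad_quality(impressions=100_000, skips=48_000, completions=19_000, conversions=900)
print("Creative A:", {k: round(v, 4) for k, v in a.items()})
print("Creative B:", {k: round(v, 4) for k, v in b.items()})
# A loses more viewers up front but converts the ones it keeps at a
# higher rate -- the kind of pattern raw reach metrics hide.
```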

2. Redesign creative for “voluntary attention”

The first five seconds of a Reels ad now matter more than ever. Borrow the YouTube hook architecture: open with story, motion, or emotion instead of a logo. Reward attention early, and you’ll earn more of it later.

3. Build sequential storytelling

Skippable formats open the door for creative sequencing:

  • Those who skip can be retargeted with shorter, sharper hooks.
  • Those who watch can be served longer or higher-intent creative, like testimonials or offers.

It’s a subtle shift from one-shot persuasion to multi-step storytelling.

4. Expect pricing and auction evolution

Meta could eventually roll out cost-per-view (CPV) or hybrid pricing models. Prepare by modeling CPV vs CPM efficiency and by aligning ROI measurement around view-through conversions.
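The modeling itself is simple arithmetic: under CPV pricing, the effective cost per 1,000 impressions is the view price times the share of impressions that become billable views. The prices and view rate below are hypothetical:

```python
def effective_cpm_from_cpv(cpv, view_rate):
    """Convert a cost-per-view price into an effective CPM.

    view_rate: share of impressions that become billable views
    (i.e., the user does not skip before the billing threshold).
    """
    return cpv * view_rate * 1000

# Hypothetical numbers for illustration only:
cpm_price = 8.00   # $8.00 per 1,000 impressions under CPM
cpv_price = 0.02   # $0.02 per billable (non-skipped) view
view_rate = 0.35   # 35% of impressions become billable views

ecpm = effective_cpm_from_cpv(cpv_price, view_rate)
print(f"Effective CPM under CPV pricing: ${ecpm:.2f}")
print("CPV is cheaper per impression" if ecpm < cpm_price
      else "CPM is cheaper per impression")
```

The crossover is sensitive to view rate: better hooks raise the view rate, which raises the effective CPM you pay under CPV pricing, so creative quality and buying strategy become directly coupled.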

What to Watch For

  • Measurement Noise: Skip data may complicate attribution and depress the signal-to-noise ratio. Algorithms will need time to interpret skips as quality indicators rather than negatives.
  • Creative Strategy: Skippable formats may usher in a new short-form storytelling discipline. Teams will need to master earned attention rather than relying on forced impressions.
  • Auction Dynamics: CPMs could temporarily rise as the system learns. Early adopters should isolate budgets to prevent blended CPM distortion.
  • Attribution Clarity: View-based engagement may dilute click-based signals. Marketers must redefine how success is measured beyond CTR.
  • Creator Ecosystem: There is no revenue share yet for creators, which could limit long-term adoption unless Meta adjusts its monetization models.

Staying Ahead of Platform Changes

The speed at which ad platforms evolve means marketers can’t wait for quarterly updates. Teams should build a lightweight “Ad Product Watchlist” to stay current.

Here’s a simple playbook:

  • AI Alerts: Use Feedly, Perplexity, or Google Alerts for “Meta test,” “Reels ad format,” and “auction update.”
  • Weekly Syncs: Align creative and media teams to share what’s changing and test hypotheses early.
  • Rapid Test Protocol: Treat every new ad format like a product feature — run 14-day experiments, report learnings, and scale what works.
  • Partner Engagement: Encourage Meta Partner Managers to include you in closed betas. Early learnings compound over time.

The Takeaway

Skippable Reels ads are not just a UX tweak; they’re Meta’s signal that attention is moving from captive to voluntary.

The best growth marketers will recognize this for what it is:
A chance to design for curiosity, not captivity.
To read attention like a behavioral dataset, not a vanity metric.
And to build creative that doesn’t demand attention but earns it.

What gives in OOH advertising spend?

With 2023 nearly here, all of us returning to the office, and business travel picking back up, I’ve been bombarded by out-of-home (OOH) ads. Per Wikipedia, "Out-of-home (OOH) advertising, also called outdoor advertising, outdoor media, and out-of-home media, is advertising experienced outside of the home. This includes billboards, wallscapes, and posters seen while "on the go". It also includes place-based media seen in places such as convenience stores, medical centers, salons, and other brick-and-mortar venues. OOH advertising formats fall into four main categories: billboards, street furniture, transit, and alternative."

Looking at advertising spend in OOH, it’s intuitive that it should grow, particularly digital OOH (dOOH). Yet traditional OOH still accounts for an overwhelming 70.8% of total spend, while dOOH sits at roughly 30% (see the Insider Intelligence link below for more data).

Furthermore, there’s a growing programmatic dOOH segment, with dOOH expected to grow nearly 40% by 2026 (source: Out of Home Advertising Association of America). Curious to hear from performance marketers and #DTC advertising leaders: is dOOH a focus for you in 2023 and beyond?

Some key concepts are taking shape in 2023 within the general OOH space, with dOOH making things more interesting:

Storytelling - Not surprising, but this follows broader performance marketing trends. Advertisers will focus on bringing a cohesive advertising campaign to life in a physical setting, understanding the audience segments tied to each OOH placement.

Integration - OOH placements are beacons, not just billboards rendered in print or on digital LCDs. We’ll see smart dOOH advertising tactics that geo-fence and target opted-in consumers to extend the experience in more vivid detail, whether that’s continuing the story or activating an attractive offer.

Measurement - It goes without saying, but any good performance marketer or advertiser will bring in ways to understand sales lift, direct or indirect, to effectively measure the ROI and profitability of a dOOH campaign.

If dOOH is not part of your advertising strategy, why not? If it is, how are you using it? Which platforms are primed to support programmatic dOOH? Reply to me directly; I’d love to hear how your organization is handling dOOH in the coming months and years.

Here's the latest from Insider Intelligence, How OOH ad spend is evolving - Insider Intelligence Trends, Forecasts & Statistics

Facebook and Cambridge Analytica concerns that may impact social and digital marketing

Sorry about the ominous title, but there is real concern around what’s happening in the digital world, specifically relating to social media. It’s getting more attention due to the “Trump Bump,” and it’s not good: it isn’t just about stocks anymore, it has expanded into just about anything related to Trump, and recent news has a lot of people talking about Facebook. If you’re like me, own Facebook stock, and recently watched $60 billion get wiped off the books while tech stocks got hammered, it’s important to understand what is going on. Here’s my super-simplified FAQ, which sources and covers as much of the media as possible while bringing in my own experience to help you understand just what’s going on.

Who is Cambridge Analytica?

Founded in 2013, Cambridge Analytica (CA) is a privately held data research and marketing company created as a commercial solution with the goal of supporting US politics. It’s partly owned by the Robert Mercer family, who also happen to be backers of Breitbart.com and Trump. Cambridge Analytica has been involved in over 50 US political races since 2014 and, to my knowledge, has primarily supported hyper-conservative candidates.

How did Facebook and Cambridge Analytica start working together?

The relationship began with Ted Cruz, who used Cambridge Analytica’s services during the 2015 Republican primaries and, of course, lost to Trump. Cruz was backed by Robert Mercer at the time, and given Mercer’s large stake in Cambridge Analytica, it shouldn’t be a surprise that after Cruz’s loss, Trump’s campaign team, also backed by Mercer, went with Cambridge Analytica. From there, according to Christopher Wylie, the firm’s director of research, Steve Bannon wanted “weapons for a culture war,” and Facebook would be the platform from which that cultural influence could begin.
So Steve Bannon, Jared Kushner, and Brad Parscale, Trump’s Director of Digital, went to Facebook and partnered with them to create a digital command-and-control data operation with Facebook’s organic and advertising products at its fingertips.

Why was Facebook working with Cambridge Analytica?

This part is unclear as of yet, but the idea of supporting a major candidate, the potential commercial arrangement, and the learning involved would certainly be a fantastic incentive for any large-scale platform.

Where did they get the personally identifiable information?

First, information about us can be gathered from all of the breadcrumbs we leave as we “surf the web” and hop between mobile and desktop in fragmented ways; this is nothing new, and we marketers know it. Facebook is an environment where you can conceivably track all of that web 1.0 information, but add the interest, behavioral, psychographic, and social graphs and you essentially have a fantastic opportunity to personalize information that triggers an action or transaction. Typically the personalization is grounded in an advertiser’s first-party data, such as a first name, last name, email address, phone number, or in some cases a social security number, which Facebook actually collects. Facebook then allows, under its terms of service, matching that first-party data against the Facebook community to create custom audiences or lookalikes that can be marketed to. Lookalike modeling is standard-fare digital marketing on Facebook, but it doesn’t have the accuracy of custom audiences, so it is a bit like throwing darts at a dartboard blindfolded. Any smart advertiser knows that simply using Facebook’s audiences to build lookalike models isn’t as effective, takes a long time to analyze, and is extremely costly. In Cambridge Analytica’s case, they have admitted they used first-party data, and a Facebook executive even claimed they used custom audiences.
That’s all well and good; however, there is a serious question around how they procured their database of “first-party data” without the knowledge of consumers. There is also a big question surrounding how they could have extracted that information from Facebook and stored it in a database without Facebook’s knowledge. This is a highly dangerous situation, whether you are in the US, Canada, or the EU. Both Facebook and Cambridge Analytica may be in legal jeopardy depending on how the database information was harvested.

Based on media reports, Dr. Aleksandr Kogan and his firm GSR (Global Science Research) paid people through Amazon’s Mechanical Turk to take a “personality assessment” on Facebook. The code used in the assessment app exposed the personal information of approximately 270,000 people, along with that of their entire social graphs. GSR then created a database from that information and shared it with Cambridge Analytica, which was able to expand it to approximately 50 million US Facebook users. Dr. Kogan has insisted that he had very clear terms of service allowing him to legally obtain personally identifiable information from Facebook users. The issue here, as we marketers know, is that even large-scale advertisers spending billions of dollars a year cannot extract information from Facebook; it is push-and-match only.

Was Facebook aware of the harvesting of personal information?

As of March 20th, according to an ex-Facebook insider reported by the Guardian, data harvesting by third parties was rampant, but executives pretended it didn’t happen. The full picture is still unclear, but as advertisers we need to be diligent about how we collect and track information. We should be watching Facebook’s actions closely and asking whether they resolve the many concerns advertisers and their customers would have.
Imagine if a competitor created a simple assessment app, targeted a range of people, and was able to ascertain your customer set and conquest them privately. If there is a platform hole, whether technological or Terms of Service (ToS) related, it needs to be opened for everyone or closed entirely; it cannot exist for selective entities.

How will this impact digital marketing and social media?

Facebook has seen roughly a $60B drop in market value because of the decline in its share price. We’re also seeing discussions about scaling back advertising spend while the current issue blows over. While I haven’t surveyed many consumers, we do know that, unrelated to this, many celebrities such as Jim Carrey have begun to shut down their Facebook presence. This could create an inflationary effect on the cost of advertising across Facebook’s ecosystem, something we won’t know for a few weeks. We do know that the agency and DSP worlds have been inundated with transparency, data privacy, and fraud issues, many of which have been addressed. However, there is no doubt that many brands will question every decision related to the usage of Facebook. As a result, I believe this will have a chilling effect on our industry unless Facebook, and potentially a third-party investigative body, gets to the bottom of it. Our industry bodies, the Association of National Advertisers (ANA), the Interactive Advertising Bureau (IAB), and the International Advertising Association (IAA), should take a proactive step in working together.

What steps are being taken to address this from an agency and brand standpoint?

At this stage, Facebook has suspended Cambridge Analytica’s accounts, and the British government has procured a warrant to seize any available data, software, and hardware to assess further damage and forensically investigate the situation. In the US, the Massachusetts Attorney General has opened an investigation into Cambridge Analytica.
The Federal Trade Commission is also investigating whether Facebook violated any previous privacy orders. Yet we have not heard from Mark Zuckerberg or Sheryl Sandberg on this matter, nor from other platforms such as Twitter, Google, and Snapchat. The good news is that many of the advertising associations are in the process of providing “rules of engagement” regarding Facebook and a universal checklist, essentially a simplified version of the GDPR process, which should alleviate delays and limit the impact on brands, their customers, and the agencies supporting them. The issue is of course limited to the procurement, handling, and usage of customer data. The story is unfolding and there are many things not yet known, but such is our business! If anyone has any questions or would like to add any corrections to this piece, feel free to let me know.

Sources:
  • The Guardian https://www.theguardian.com/news/2018/mar/17/data-war-whistleblower-christopher-wylie-faceook-nix-bannon-trump
  • Digital Guardian: https://digitalguardian.com/blog/what-gdpr-general-data-protection-regulation-understanding-and-complying-gdpr-data-protection
  • The Guardian: https://www.theguardian.com/news/2018/mar/20/facebook-data-cambridge-analytica-sandy-parakilas?CMP=Share_AndroidApp_Tweet
  • The New York Times: https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html
  • Wired: https://www.wired.com/story/the-noisy-fallacies-of-psychographic-targeting/
  • Facebook ad policy: https://business.facebook.com/policies/ads
  • Money Watch: https://www.cbsnews.com/live-news/facebook-under-fire-the-latest-on-cambridge-analytica-scandal-live-updates/
