Experiments

A/B test paywall variations to optimize conversion

Run A/B tests within campaigns to find the highest-converting paywall. Define variants, allocate traffic, and measure results -- all without shipping a new app update.

What is an experiment?

An experiment is an A/B test attached to a campaign. It splits users into groups (variants) and shows each group a different experience. By comparing conversion rates across variants, you can identify which design, copy, or pricing performs best.

Experiments are authored in the Studio as experiment actions within your flow interactions. When you publish, Nuxie creates or updates the experiment definition and syncs it for delivery.

Variants

Each experiment has 2 to 5 variants. A variant defines a distinct path through your flow -- for example, "Control" shows the original paywall while "Test A" shows a redesigned version.

Every variant has:

  • A name -- A label for display in the dashboard (e.g., "Control", "Test A").
  • A traffic percentage -- What share of users see this variant. Percentages must sum to 100.
  • An action path -- The screens and interactions the user experiences.

Holdout groups

A holdout is a special variant that shows nothing. Users assigned to the holdout see no paywall at all, giving you a true baseline to measure the incremental impact of your campaign.

Mark a variant as a holdout when creating the experiment in the Studio. Holdout variants cannot contain any actions.
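To make the shape of a variant concrete, here is a minimal sketch of how one might be modeled, holdout included. The `ExperimentVariant` type and its field names are illustrative assumptions, not Nuxie's actual schema.

```swift
// Hypothetical model of a variant definition -- illustrative only,
// not Nuxie's actual schema.
struct ExperimentVariant {
    let name: String            // Display label, e.g. "Control" or "Test A"
    let trafficPercentage: Int  // Share of users; all variants must sum to 100
    let isHoldout: Bool         // Holdouts show nothing and carry no actions
    let actionPath: [String]    // Screen/interaction identifiers; empty for holdouts
}

// Example: a 3-way split with a 10% holdout baseline.
let variants = [
    ExperimentVariant(name: "Control", trafficPercentage: 45, isHoldout: false, actionPath: ["paywall_original"]),
    ExperimentVariant(name: "Test A",  trafficPercentage: 45, isHoldout: false, actionPath: ["paywall_redesign"]),
    ExperimentVariant(name: "Holdout", trafficPercentage: 10, isHoldout: true,  actionPath: []),
]
assert(variants.map(\.trafficPercentage).reduce(0, +) == 100)
```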

Traffic allocation

Traffic is split by percentage across variants. Assignment is server-authoritative: the server computes which variant a user sees and delivers the assignment as part of the user profile. The SDK never makes assignment decisions locally.

Assignments are deterministic per user. A user always sees the same variant for a given experiment, even across app sessions and devices. Assignment is keyed to the user's identity, so identify users consistently to keep experiment behavior stable across devices.
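As an illustration of how deterministic, identity-keyed assignment can work (a sketch of the general technique, not Nuxie's actual server algorithm): hash the user identity together with the experiment ID into a stable bucket in the range 0-99, then walk the cumulative traffic percentages.

```swift
// Sketch of deterministic bucketing. FNV-1a is used because it is stable
// across processes and devices (unlike Swift's per-process-seeded Hasher).
func fnv1a(_ text: String) -> UInt64 {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in text.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return hash
}

// Maps a user to a variant index. Same user + same experiment -> same variant,
// across sessions and devices, because the input is the stable user identity.
func assignVariant(userId: String, experimentId: String, percentages: [Int]) -> Int {
    let bucket = Int(fnv1a("\(experimentId):\(userId)") % 100) // 0..99
    var cumulative = 0
    for (index, share) in percentages.enumerated() {
        cumulative += share
        if bucket < cumulative { return index }
    }
    return percentages.count - 1 // Guard against rounding; unreachable if shares sum to 100
}

// A user lands in the same variant every time.
let first = assignVariant(userId: "user_42", experimentId: "exp_1", percentages: [45, 45, 10])
let again = assignVariant(userId: "user_42", experimentId: "exp_1", percentages: [45, 45, 10])
assert(first == again)
```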

Experiment lifecycle

Experiments move through four statuses:

  • Draft -- The experiment is defined but not running. No assignments are made. Users see the default variant path.
  • Running -- Traffic is split across variants. Assignments and exposures are tracked.
  • Paused -- No new assignments are made. Existing assignments are preserved, but no exposures are tracked.
  • Concluded -- A winner has been chosen. All users see the winning variant. No exposures are tracked.
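The rules above amount to a small state machine. A sketch of how they might be encoded (the type and property names are assumptions for illustration, not Nuxie's API):

```swift
// Hypothetical encoding of the lifecycle rules above -- names are illustrative.
enum ExperimentStatus {
    case draft, running, paused, concluded

    // Only a running experiment creates new assignments.
    var makesAssignments: Bool { self == .running }

    // Exposures are tracked only while running; draft, paused, and
    // concluded experiments do not track exposures.
    var tracksExposures: Bool { self == .running }
}
```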

Starting an experiment

From the Experiments tab in the campaign detail view, click Start on a draft experiment. The experiment begins assigning users to variants and tracking exposures immediately.

Pausing and resuming

Pause an experiment to temporarily stop new assignments without losing data. Resume to continue the test from where it left off.

Concluding an experiment

When you have enough data, conclude the experiment by picking a winner:

  1. Open the Experiments tab and find the experiment.
  2. Review the results (see below).
  3. Click Conclude and select the winning variant.

After concluding, all users see the winning variant. Historical results remain visible in the dashboard.

Exposure and assignment

Nuxie separates assignment from exposure:

  • Assignment happens when the server computes which variant a user will see. This is delivered in the profile.
  • Exposure happens when the user actually reaches the experiment action in the flow and executes a variant path.

Exposures are tracked only when the experiment is running and the user reaches the experiment point in the flow. This prevents inflated metrics from users who were assigned but never saw the experiment.
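A sketch of how this split might look from the client's perspective, using hypothetical names (`Profile`, `trackExposure`) rather than Nuxie's real API. The key point: the assignment exists before the flow runs, while the exposure fires only at the experiment action itself.

```swift
// Hypothetical illustration of assignment vs. exposure -- not Nuxie's real API.
struct Profile {
    // Assignment: computed server-side and delivered with the profile,
    // whether or not the user ever reaches the experiment.
    let assignments: [String: String] // experimentId -> variantName
}

func runExperimentAction(experimentId: String, profile: Profile, isRunning: Bool) {
    guard let variant = profile.assignments[experimentId] else { return }

    // Exposure: tracked only when the user actually reaches this point
    // in the flow while the experiment is running.
    if isRunning {
        trackExposure(experimentId: experimentId, variant: variant)
    }
    executeVariantPath(named: variant)
}

func trackExposure(experimentId: String, variant: String) {
    print("exposure: \(experimentId) -> \(variant)")
}

func executeVariantPath(named name: String) {
    print("showing variant path: \(name)")
}
```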

Reading results

The Experiments tab displays results for each experiment:

  • Impressions -- Unique users exposed to each variant.
  • Conversions -- Users who met the campaign goal after being exposed.
  • Conversion rate -- Conversions divided by impressions for each variant.

Conversions are attributed to the experiment by journey matching: when a user is exposed to a variant and later meets the campaign goal within the same journey, the conversion is counted for that variant.

Results update as new events are processed and are available as soon as the experiment starts receiving traffic.
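As a quick worked example of the conversion-rate arithmetic, with made-up numbers:

```swift
import Foundation

// Conversion rate = conversions / impressions, per variant.
// The numbers below are hypothetical.
let results = [
    (variant: "Control", impressions: 1_200, conversions: 48),  // 4.0%
    (variant: "Test A",  impressions: 1_180, conversions: 71),  // ~6.0%
]
for r in results {
    let rate = Double(r.conversions) / Double(r.impressions) * 100
    print("\(r.variant): \(String(format: "%.1f", rate))% conversion")
}
```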

Overrides

Override the variant assignment for specific users. This is useful for QA testing or for ensuring a particular user sees a specific experience.

To set an override:

  1. Open the experiment in the Experiments tab.
  2. Find the overrides section.
  3. Add the customer and select the variant they should see.

Overrides apply only while the experiment is running. They are respected in both the profile response and on-device evaluation.

Restart and clone

If you need to run a fresh test after making changes, use the restart or clone actions:

  • Restart -- Generates new experiment and variant identifiers, soft-deletes the current experiment, and publishes a new flow version. All previous data remains in the campaign history.
  • Clone -- Same as restart, with the option to give the new experiment a different name. Results are never merged between experiments.

Both actions update the source flow in the Studio and trigger an automatic publish.

Authoring experiments in the Studio

Experiments are created in the Studio by adding an experiment action to a screen interaction:

  1. Select a button or interaction point on your screen.
  2. Change the action from Navigate to Experiment.
  3. Add variant paths, each wired to a different screen or action sequence.
  4. Set traffic percentages and name the experiment.

When you publish, Nuxie preserves experiment identifiers across versions. This means republishing a flow does not reset a running experiment -- your data continues to accumulate. If you make a breaking change (adding or removing variants, or changing traffic splits), Nuxie prompts you to confirm before publishing.

Next steps

  • Triggers & Goals -- Define the goal that experiments measure against.
  • Journeys -- Understand how experiment exposures relate to journey progression.
  • A/B Test a Paywall -- Step-by-step guide to setting up your first experiment.