A/B Test a Paywall

Run an experiment to find the highest-converting paywall variant

Set up an experiment with multiple paywall variants, allocate traffic, and measure which version converts best. No app update required.

Prerequisites:

  • A Nuxie account with a published campaign containing a paywall (see Your First Paywall)
  • The iOS SDK installed with user identity configured (consistent experiment assignment requires identified users)
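
If identity isn't wired up yet, the sketch below shows the general shape of configuring the SDK and identifying a user at app launch. The NuxieSDK module name and the Nuxie.configure / Nuxie.identify calls are assumptions for illustration only -- use the exact calls from the SDK installation guide.

```swift
import UIKit
import NuxieSDK  // assumed module name -- match whatever the install guide imports

class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Hypothetical call and parameter names; the real SDK surface may differ.
        Nuxie.configure(apiKey: "<your-public-api-key>")

        // Identify with a stable ID from your own auth system so the same
        // person gets the same experiment assignment on every device.
        Nuxie.identify(userId: "user_123")
        return true
    }
}
```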

Step 1: Create paywall variants in the Studio

Open the flow that contains your paywall. Design the variants you want to test. For example:

  • Control -- Your existing paywall with monthly and annual pricing
  • Test A -- A redesigned paywall emphasizing the annual plan with a savings badge

Create each variant as a separate screen in your flow. Use the chat panel to generate variations: "Create a version of this paywall that highlights the annual plan with a 40% savings badge."

Step 2: Add an experiment action

Select the interaction point where users enter the paywall -- typically a button on a preceding screen, or the starting point of your flow.

Change the action from Navigate to Experiment. This opens the experiment editor where you configure:

  1. Experiment name -- Give it a descriptive name like "Annual emphasis test"
  2. Variants -- Add two or more variant paths:
    • Control (50%) -- wired to your original paywall screen
    • Test A (50%) -- wired to the redesigned paywall screen
  3. Traffic split -- Set the percentage for each variant. Percentages must sum to 100.

Each variant path points to a different screen. When the experiment runs, users are assigned to a variant and see only that screen.

Add a holdout group (optional)

Mark one variant as a holdout to measure the incremental impact of showing any paywall at all. Holdout users see nothing -- the flow skips the paywall entirely. This gives you a true baseline for conversion lift.
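
For example, if holdout users convert at 1% organically while paywall viewers convert at 4%, you know the paywall itself accounts for roughly 3 points of conversion.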

Step 3: Publish with experiment definitions

Click Publish to create a new version. Nuxie preserves experiment identifiers across publishes, so if you previously published this flow, your experiment data carries forward.

If you changed the variants or traffic split since the last publish, Nuxie shows a confirmation dialog:

  • Continue -- Publish with existing experiment IDs. Running experiments keep their current assignments.
  • Replace experiments and publish -- Generate new experiment IDs. This resets all assignments and starts fresh.

Choose "Replace" only if you want clean data from scratch.

Step 4: Start the experiment

Navigate to the campaign detail page in the dashboard. Open the Experiments tab. Your experiment appears in Draft status.

Click Start to begin the experiment. The server immediately begins assigning users to variants. Assignment is deterministic per user -- a user always sees the same variant across sessions and devices.
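
You don't implement assignment yourself -- the server handles it -- but the sketch below illustrates what deterministic bucketing generally looks like: hash the user ID together with the experiment ID into a bucket from 0 to 99, then pick the variant whose cumulative traffic range contains that bucket. The hashing scheme shown is illustrative, not Nuxie's actual algorithm.

```swift
import CryptoKit
import Foundation

struct Variant {
    let name: String
    let percent: Int  // traffic share, 0-100
}

/// Deterministically assign a user to a variant.
/// The same (userId, experimentId) pair always yields the same variant,
/// which is what keeps assignment stable across sessions and devices.
func assignVariant(userId: String, experimentId: String, variants: [Variant]) -> Variant {
    precondition(variants.map(\.percent).reduce(0, +) == 100,
                 "Traffic split must sum to 100")

    // Hash user + experiment together so the same user can land in
    // different buckets for different experiments.
    let digest = SHA256.hash(data: Data("\(experimentId):\(userId)".utf8))
    let value = Array(digest.prefix(8)).reduce(UInt64(0)) { ($0 << 8) | UInt64($1) }
    let bucket = Int(value % 100)  // 0...99

    // Walk the cumulative traffic ranges: Control 0-49, Test A 50-99, etc.
    var cumulative = 0
    for variant in variants {
        cumulative += variant.percent
        if bucket < cumulative { return variant }
    }
    return variants[variants.count - 1]  // unreachable when percentages sum to 100
}

// Example: 50/50 split between Control and Test A
let variant = assignVariant(
    userId: "user_123",
    experimentId: "annual-emphasis-test",
    variants: [Variant(name: "Control", percent: 50),
               Variant(name: "Test A", percent: 50)]
)
```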

Experiment statuses:

  • Draft -- No assignments. All users see the default variant path.
  • Running -- Traffic is split across variants. Assignments and exposures are tracked.
  • Paused -- No new assignments. Existing assignments are preserved.
  • Concluded -- A winner is chosen. All users see the winning variant.

Step 5: Monitor results

The Experiments tab displays results as traffic flows in:

  • Impressions -- Unique users exposed to each variant
  • Conversions -- Users who met the campaign goal (typically a purchase) after seeing a variant
  • Conversion rate -- Conversions divided by impressions for each variant

Conversions are attributed to the variant the user saw. Only users who actually reached the experiment point in the flow and were shown a variant count as impressions -- users who were assigned but never triggered the flow are excluded.
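
The dashboard computes these metrics for you; the snippet below just makes the arithmetic concrete with hypothetical per-variant totals.

```swift
import Foundation

// Illustrative numbers only -- not real data.
struct VariantResult {
    let name: String
    let impressions: Int   // unique users exposed to the variant
    let conversions: Int   // users who met the campaign goal after exposure
}

let results = [
    VariantResult(name: "Control", impressions: 1_200, conversions: 48),
    VariantResult(name: "Test A",  impressions: 1_180, conversions: 71),
]

for result in results {
    let rate = Double(result.conversions) / Double(result.impressions)
    print("\(result.name): \(String(format: "%.1f%%", rate * 100)) conversion rate")
}
// Control: 4.0% conversion rate
// Test A: 6.0% conversion rate
```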

Results update as new events are processed. Check back after accumulating enough traffic for meaningful comparisons.

Step 6: Conclude the experiment

When you have enough data to make a decision:

  1. Open the Experiments tab on the campaign detail page
  2. Review the conversion rates for each variant
  3. Click Conclude and select the winning variant

After concluding, all users see the winning variant. Historical results remain visible in the dashboard.

Step 7: Iterate or restart

If results are inconclusive or you want to test new ideas:

  • Restart -- Generates new experiment and variant IDs, preserving the previous data in campaign history. Triggers an automatic republish.
  • Clone -- Same as Restart, but lets you give the new experiment a different name.

Both actions update the source flow in the Studio and publish a new version. Previous experiment data is never merged with new experiments.

Testing during development

Override the variant assignment for specific users to test each experience without waiting for random assignment:

  1. Open the experiment in the Experiments tab
  2. Find the overrides section
  3. Add the customer and select the variant they should see

Overrides apply only while the experiment is running and are respected across sessions.

Next steps