Developer Guide

How to implement Pleras experiment code in your A/B testing platform. For guidance on running experiments effectively, see the Experiment Setup & Monitoring guide.

What you're working with

Each experiment is a self-contained JavaScript file that modifies the DOM of a live page. The code:

  • Is wrapped in a self-executing function so that its variables and functions are completely isolated — nothing leaks out to interfere with your site's own JavaScript or other experiments running on the same page
  • Waits for target elements to appear before executing
  • Injects new HTML elements and inline CSS
  • Fires tracking events via window.dataLayer

No external dependencies are required. Each file is independent and can be deployed on its own.

Code structure

Every experiment follows the same pattern:

(function() {
  'use strict';

  var CONFIG = { ... };        // Experiment name, selectors, timeouts

  function waitForElement() {} // Polls for a DOM element
  function trackExposure() {}  // Fires a dataLayer event on activation
  function runExperiment() {}  // The actual DOM manipulation

  // Initialization
  if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', runExperiment);
  } else {
    runExperiment();
  }
})();

CONFIG block

At the top of each file you'll find a CONFIG object containing:

Key                  Purpose
experimentName       Unique identifier used in tracking events
(selector keys)      CSS selectors the experiment targets (e.g. priceSelector, formSelector)
maxWaitTime          How long (ms) to poll for elements before giving up. Default: 2000
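A typical CONFIG block looks roughly like this. The key names and values below are illustrative, not taken from a specific experiment:

```javascript
// Illustrative CONFIG block; actual keys vary per experiment.
var CONFIG = {
  experimentName: 'exp-001-delivery-banner', // unique identifier used in tracking events
  priceSelector: '.product-price',           // CSS selectors the experiment targets
  formSelector: '#signup-form',
  maxWaitTime: 2000                          // ms to poll for elements before giving up
};
```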

Element polling (waitForElement)

Experiments don't assume the DOM is ready. They use a waitForElement pattern that polls using requestAnimationFrame until the target element appears or maxWaitTime is exceeded.

If the element isn't found, a warning is logged to the console:

[experiment-name] Element not found: .some-selector

This is intentional — the experiment fails silently for users rather than throwing errors.
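A minimal sketch of the pattern, simplified from what ships in real experiments (the warning format and poll details vary):

```javascript
// Simplified sketch of the polling pattern; document and
// requestAnimationFrame are browser globals.
function waitForElement(selector, onFound, maxWaitTime) {
  var start = Date.now();
  (function poll() {
    var el = document.querySelector(selector);
    if (el) {
      onFound(el); // element appeared: run the experiment against it
    } else if (Date.now() - start >= maxWaitTime) {
      // give up silently for the user; log a warning for developers
      console.warn('[experiment-name] Element not found: ' + selector);
    } else {
      requestAnimationFrame(poll); // try again on the next frame
    }
  })();
}
```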

Duplicate injection prevention

Every experiment checks whether its elements already exist before injecting:

if (document.querySelector('.exp-delivery-banner')) return;

This prevents the experiment from running twice if your platform re-evaluates the code (e.g. on SPA navigation or late-firing triggers). You should not need to add your own guards.
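In context, the guard sits at the top of the experiment's main function, before any DOM is injected. A sketch (the class name and copy are illustrative):

```javascript
// Sketch: the duplicate-injection guard runs before any DOM changes.
function runExperiment() {
  if (document.querySelector('.exp-delivery-banner')) return; // already injected

  var banner = document.createElement('div');
  banner.className = 'exp-delivery-banner';
  banner.textContent = 'Free delivery on orders over £50'; // illustrative copy
  document.body.appendChild(banner);
}
```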

Selectors and stability

Selectors are chosen for stability — they're verified against your live site at the time of generation and designed to survive routine site updates. Auto-generated class names (e.g. .css-1a2b3c, .sc-dkPtyc) are never used, as these change on every build.

That said, selectors can break over time. Before deploying, check that each experiment's selectors still work:

  1. Open the target URL in your browser
  2. Open DevTools (F12 or Cmd+Shift+I)
  3. Run the selectors from the CONFIG block in the console: document.querySelector('.your-selector')
  4. If null, the selector is broken and needs updating
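To check several selectors in one go, you can paste a small helper into the console. The selector values below are placeholders; substitute the ones from the experiment's CONFIG block:

```javascript
// Paste into the DevTools console on the target URL.
function checkSelectors(selectors) {
  var results = {};
  Object.keys(selectors).forEach(function (key) {
    // null from querySelector means the selector no longer matches anything
    results[key] = document.querySelector(selectors[key]) ? 'OK' : 'BROKEN';
  });
  return results;
}

// Example call (replace with the selectors from the CONFIG block):
// checkSelectors({ priceSelector: '.product-price', formSelector: '#signup-form' });
```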

Things that can break selectors:

  • Site redesigns or template changes — class names, IDs, or DOM hierarchy may shift
  • Copy changes — some experiments locate elements by their text content. Changing that text will break the selector.
  • A/B tests on the same page — another experiment may alter the DOM before this one runs
  • Framework upgrades — a React/Next.js version bump can change the component tree structure

If a selector is broken, get in touch and we'll update the experiment for you.

Injected styles and brand compliance

Experiments inject CSS via a <style> element appended to <head>. All class names are namespaced to the experiment (e.g. .exp-001-running-costs, .exp004-calculator) to avoid collisions with your existing styles.
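The injection itself follows a standard pattern, roughly as below. The id and CSS are illustrative; the id doubles as a duplicate-injection guard:

```javascript
// Sketch of namespaced style injection into <head>.
function injectStyles(css, styleId) {
  if (document.getElementById(styleId)) return; // already injected
  var style = document.createElement('style');
  style.id = styleId;
  style.textContent = css;
  document.head.appendChild(style);
}

// Usage (illustrative):
// injectStyles('.exp-001-banner { background: #f5f5f5; }', 'exp-001-styles');
```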

Each experiment is built against two brand guides generated from your site:

  • Visual style guide — a comprehensive reference extracted from your live site, documenting your visual identity at a granular level. Every design decision in an experiment — from how a new element is styled to how it sits alongside your existing components — is informed by this guide. The goal is that experiment elements look native to your site, not bolted on.
  • Tone of voice guide — a practical reference for how your brand communicates, extracted from copy across your entire site. Every piece of experiment copy — headlines, body text, calls to action, microcopy — is written to sound like it came from your team, not a template.

Both guides are available in your Pleras dashboard. If anything looks off — for example, if you've recently rebranded or updated your site — let us know and we'll update the guides so that current and future experiments stay on-brand. If you need changes to specific experiments, get in touch and we'll handle it.

If your site uses a strict Content Security Policy (CSP), you may need to allowlist inline styles. See Content Security Policy.

Tracking

Experiments push events to window.dataLayer (initialised if it doesn't exist). Two event types are used:

Exposure — fired once when the experiment modifies the page:

{
  event: 'experiment_exposure',
  experiment_name: 'experiment-id',
  variation: 'treatment'
}

Interaction — fired on user actions within the experiment (clicks, toggles, etc.):

{
  event: 'experiment_interaction',
  experiment_name: 'experiment-id',
  action: 'delivery_link_click'
}
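As a sketch, the helpers that push these events look roughly like this. The fallback to a local array is only for illustration outside a browser; in the browser, window.dataLayer is used and initialised if absent:

```javascript
// Sketch of the tracking helpers used by experiments.
var dataLayer = (typeof window !== 'undefined')
  ? (window.dataLayer = window.dataLayer || [])
  : []; // illustrative fallback for non-browser environments

function trackExposure(experimentName) {
  dataLayer.push({
    event: 'experiment_exposure',
    experiment_name: experimentName,
    variation: 'treatment'
  });
}

function trackInteraction(experimentName, action) {
  dataLayer.push({
    event: 'experiment_interaction',
    experiment_name: experimentName,
    action: action
  });
}
```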

To use these events in your analytics:

  • Google Analytics / GA4: If you're using Google Tag Manager, create triggers based on the experiment_exposure and experiment_interaction custom events. Map experiment_name, variation, and action to event parameters.
  • Other analytics platforms: If you use a different analytics tool and want experiment events forwarded there, you can set up listeners in your tag manager that pick up these dataLayer events and send them on. Alternatively, let us know what platform you use and we'll update the experiments to integrate with it directly.

Quizzes, email capture, and data integrations

Some experiments include interactive elements like product recommendation quizzes, calculators, or email capture forms. These work out of the box — you can deploy them immediately and run your conversion tests. The quiz logic, recommendations, and UI are all self-contained in the experiment code.

By default, interactive experiments track user engagement through dataLayer events (e.g. quiz_started, quiz_completed, quiz_email_captured). This means you can measure completion rates, drop-off points, and conversion impact from day one without any additional setup.

Where an experiment includes a form — such as an email input on a quiz — the form UI is fully functional, but the entered data (e.g. the email address) is not stored or sent anywhere. The dataLayer event records that a submission happened, not what was submitted. This means you can test whether an email capture step improves conversion without needing any backend setup.
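As a sketch, the submit handler on such a form records that the event happened but never reads the input value (the event name and structure here are illustrative):

```javascript
// Sketch: record that a submission happened without capturing what was submitted.
function attachCaptureTracking(form, experimentName) {
  form.addEventListener('submit', function (e) {
    e.preventDefault(); // no backend to submit to
    (window.dataLayer = window.dataLayer || []).push({
      event: 'quiz_email_captured',
      experiment_name: experimentName
      // the email value itself is deliberately not read or included
    });
  });
}
```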

If you want submitted data to reach your email platform or CRM (Klaviyo, HubSpot, Intercom, etc.), just let us know what tools you use. We'll set it up on the relevant experiments and make sure future ones work the same way. All we need from you is the platform and a few details like which list or audience the data should go to.

Content Security Policy

If your site enforces a CSP, experiments may be blocked from:

  1. Injecting inline styles — the <style> elements appended to <head> require style-src 'unsafe-inline' or a nonce/hash
  2. Running inline scripts — depending on how your A/B platform injects the code, you may need script-src rules for the platform's domain

Most A/B testing platforms handle this through their own CSP guidance. Check your platform's documentation if you see CSP violations in the console.
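For illustration, a policy that permits the injected inline styles and a platform script might include directives like the following. Extend your existing policy rather than replacing it; platform.example.com is a placeholder for your A/B platform's domain:

```
Content-Security-Policy: style-src 'self' 'unsafe-inline'; script-src 'self' https://platform.example.com
```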

Accessibility

Experiments include basic accessibility support: ARIA attributes on injected elements, keyboard-accessible interactive components, and contrast-aware styling. If your site has specific accessibility requirements or standards you need to meet, let us know and we'll review the experiments against them.

Anti-flicker and element polling

Most A/B testing platforms offer an "anti-flicker" snippet that briefly hides the page while experiment code runs. This prevents users from seeing the original page flash before the variant loads. It's worth understanding how this interacts with the experiment code.

Each experiment polls for its target DOM element for up to 2 seconds (maxWaitTime: 2000 in the CONFIG block). The polling is asynchronous — it doesn't block the page from rendering or hold anything up. The script evaluates instantly (it just sets up the poll and returns), and the browser carries on as normal while the poll runs in the background.

The 2-second window is deliberately tight. Most target elements are available within milliseconds of page load, and the short timeout ensures experiments either apply quickly — within a typical anti-flicker window — or don't apply at all. This avoids the worst outcome: the anti-flicker releasing, the user seeing the original page, and then the experiment kicking in seconds later causing a visible layout shift.

If you're seeing experiments not applying consistently, or noticing flicker or layout shifts, get in touch. We'll review what's happening on your site and work out the best approach with you.

Single Page Applications (SPAs)

If your site is an SPA (React, Vue, Next.js, etc.), page navigations don't trigger full page reloads — the framework swaps content without the browser firing a new page load event. This means experiment code that ran on the initial page load won't re-run when a user navigates to a different route.

If an experiment targets a page the user navigates to after landing, it won't apply unless your A/B platform re-evaluates experiment code on route changes. Most modern platforms support this, but it needs to be configured. The experiments themselves handle re-injection safely — duplicate injection guards prevent double-rendering if the code fires more than once.

Some experiments also include additional logic to watch for dynamic DOM changes and re-apply themselves when content is added or replaced on the page. If an experiment includes this, it will handle dynamic content automatically without any extra configuration on your part.
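Where present, that logic is typically built on a MutationObserver, roughly like this (a sketch; real implementations vary in what they watch and how they debounce):

```javascript
// Sketch: re-apply the experiment when the framework swaps page content.
// The duplicate-injection guard inside applyExperiment makes re-runs safe.
function observeDynamicContent(applyExperiment) {
  var observer = new MutationObserver(function () {
    applyExperiment();
  });
  observer.observe(document.body, { childList: true, subtree: true });
  return observer; // caller can call disconnect() when no longer needed
}
```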

What we can help with

If you need any changes to experiments, get in touch and we'll handle it:

  • Broken selectors or experiments not applying — if your site has changed and an experiment isn't working, we'll update it for you.
  • Copy or design adjustments — if you need changes to experiment messaging or styling, we can update them.
  • Data capture integrations — if experiments collect user data (emails, quiz responses, etc.) and you want that flowing into your CRM or email platform (Klaviyo, HubSpot, Intercom, etc.), we'll set it up and ensure future experiments follow the same approach.
  • Analytics platform integration — if you use an analytics tool other than Google Tag Manager and want experiment events sent there directly, let us know and we'll update the tracking.

Roadmap

  • Automatic selector repair — when your site changes and a selector breaks, experiments will detect this and automatically update their selectors to match the new DOM structure. No manual intervention or support request needed.
  • Self-serve experiment editing — make changes to experiment copy, styling, and configuration directly, without needing to get in touch.
  • Automatic experiment stacking — when new experiments need to build on previous winners that are still running, the system will automatically account for the existing changes without manual intervention.