Studio
The visual editor for designing mobile screens with AI
Overview
The Studio is where you design, preview, and publish mobile screens. Use it to build paywalls, onboarding flows, surveys, and feature gates -- all from a single visual editor backed by AI generation and real-time collaboration.
What the Studio does
The Studio gives you an infinite canvas for arranging screens, an AI chat for generating and refining components, and a full property inspector for fine-tuning every detail. When you are ready, publish your project to create a new version and deliver it to users through the Nuxie SDK.
The core workflow:
- Create -- Start a new project and describe what you need. The AI generates screens with real components, layout, and copy.
- Edit -- Drag screens around the canvas, select components to adjust styling, wire up navigation links, and bind dynamic data.
- Publish -- Run pre-flight checks, pick a target app, and publish. Nuxie compiles your project into an optimized version delivered from the edge.
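To make the publish-and-deliver step concrete, here is a minimal sketch of how an app might look up the latest published version of a project at runtime. All names here (`fetchPublishedVersion`, the `PublishedVersion` shape, the in-memory store) are illustrative assumptions, not the real Nuxie SDK API.

```typescript
// Hypothetical shape of a published version record.
interface PublishedVersion {
  projectId: string;
  version: number;
  startingScreen: string;
}

// In-memory stand-in for the edge-delivered version store.
const versionStore = new Map<string, PublishedVersion>([
  ["onboarding", { projectId: "onboarding", version: 3, startingScreen: "welcome" }],
]);

// Resolve the latest published version for a project, failing loudly
// if nothing has been published yet.
function fetchPublishedVersion(projectId: string): PublishedVersion {
  const v = versionStore.get(projectId);
  if (!v) throw new Error(`No published version for project "${projectId}"`);
  return v;
}
```

The point of the sketch is the contract: the app asks for a project by ID and receives the most recently published, immutable version.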
Interface walkthrough
The Studio uses a three-column layout -- a left sidebar, an infinite canvas in the center, and a context-sensitive right sidebar -- with a header across the top and a floating toolbar over the canvas.
Header
The top bar contains your project name, environment indicator, share controls, and the Publish button. You can also toggle between Design and Interactions modes here.
Left sidebar
The left sidebar has three tabs:
- Chat -- Converse with the AI to generate and iterate on screens.
- Layers -- Navigate the screen and component hierarchy as a tree view. Expand screens to see their component children.
- Data -- Create and manage view models, instances, and converters that power dynamic content.
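A rough sketch of how the Data tab's three concepts fit together: a view model declares fields, an instance supplies values, and a converter transforms a raw value for display. The type names and the `bind` helper below are assumptions for illustration, not Nuxie's actual data model.

```typescript
// Hypothetical shapes: a view model declares field types, an instance
// holds concrete values, and a converter formats a value for display.
type ViewModel = Record<string, "string" | "number">;
type Instance = Record<string, string | number>;
type Converter = (value: string | number) => string;

const priceModel: ViewModel = { amount: "number", currency: "string" };
const monthlyPlan: Instance = { amount: 9.99, currency: "USD" };

// A converter that formats a bound amount for display on a paywall.
const formatPrice: Converter = (value) => `$${Number(value).toFixed(2)}/mo`;

// Bind a field from an instance into a component, optionally converted.
function bind(instance: Instance, field: string, convert?: Converter): string {
  const raw = instance[field];
  return convert ? convert(raw) : String(raw);
}
```

A text component bound to `amount` through `formatPrice` would render `$9.99/mo` instead of the raw number.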
Canvas
The infinite canvas is where your screens live. Pan by scrolling or holding Space and dragging. Zoom with pinch or Ctrl/Cmd + scroll. Select screens by clicking, or draw a marquee to multi-select. Each screen renders a live preview inside a device frame.
Right sidebar
The right sidebar appears when you select something. It switches between two modes:
- Design -- Edit screen properties (device, orientation, theme) or component properties (text, layout, colors, typography).
- Interactions -- Author triggers, actions, and navigation links for the selected screen or component.
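The trigger/action pairing in Interactions mode can be sketched as a small lookup: a component fires a trigger, and the Studio resolves which action runs. The trigger names, action shapes, and `resolve` helper are hypothetical, chosen only to illustrate the idea.

```typescript
// Hypothetical interaction model: a trigger on a component maps to an action.
type Trigger = "tap" | "appear";
type Action =
  | { kind: "navigate"; to: string }
  | { kind: "dismiss" };

interface Interaction {
  componentId: string;
  trigger: Trigger;
  action: Action;
}

// Interactions authored for one screen.
const interactions: Interaction[] = [
  { componentId: "cta-button", trigger: "tap", action: { kind: "navigate", to: "checkout" } },
  { componentId: "close-icon", trigger: "tap", action: { kind: "dismiss" } },
];

// Resolve what should happen when a component fires a trigger.
function resolve(componentId: string, trigger: Trigger): Action | undefined {
  return interactions.find(
    (i) => i.componentId === componentId && i.trigger === trigger
  )?.action;
}
```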
Floating toolbar
When components are selected, a floating toolbar appears above the canvas with quick actions like Explore, Duplicate, and Delete.
Design-to-deploy loop
A typical project moves through three phases:
- Generation -- Describe your screen in chat. The AI streams components onto the canvas in real time. Each generated screen is automatically saved to your screen library for reuse in future projects.
- Refinement -- Select components and edit them visually. Adjust Tailwind styles in the inspector, rearrange layout, bind data from view models, and connect screens with navigation links.
- Publishing -- Click Publish, select your app, and confirm. Nuxie validates your project (starting screen is set, no broken bindings, no parse errors), then builds and deploys. The published version is available to your app through the SDK within seconds.
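The pre-flight checks named above (starting screen set, no broken bindings, no parse errors) can be sketched as a validation pass. The project and screen shapes here are assumptions; the real checks Nuxie runs are not specified beyond the list in the text.

```typescript
// Hypothetical draft-project shapes for illustrating pre-flight validation.
interface Screen {
  id: string;
  bindings: string[];      // view-model fields this screen references
  parseError?: string;     // set if the screen failed to parse
}

interface DraftProject {
  startingScreen?: string;
  screens: Screen[];
  viewModelFields: string[];
}

// Collect every pre-flight failure; an empty result means ready to publish.
function preflight(project: DraftProject): string[] {
  const errors: string[] = [];
  if (!project.startingScreen) errors.push("No starting screen set");
  for (const screen of project.screens) {
    if (screen.parseError) {
      errors.push(`Parse error in ${screen.id}: ${screen.parseError}`);
    }
    for (const field of screen.bindings) {
      if (!project.viewModelFields.includes(field)) {
        errors.push(`Broken binding "${field}" in ${screen.id}`);
      }
    }
  }
  return errors;
}
```

Reporting all failures at once, rather than stopping at the first, matches how a pre-flight panel would present problems to fix before publishing.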
After publishing, manage delivery rules, targeting, and experiments in the Campaigns section.
Projects vs versions
While you work in the Studio, your artifact is called a project. It contains screens, components, data, interactions, and themes -- all in draft state.
When you publish, Nuxie creates a version -- an immutable snapshot delivered to your users. Re-publishing creates a new version; the live version is unaffected until the new build succeeds.
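The project/version relationship can be sketched as a snapshot-on-publish model. The shapes and the `publish` helper below are illustrative assumptions; the key property they demonstrate is that editing the draft after publishing cannot change an already-published version.

```typescript
// Hypothetical draft and version shapes.
interface Draft {
  name: string;
  screens: string[];
}

interface Version {
  number: number;
  snapshot: Draft;
}

const versions: Version[] = [];

// Publishing copies the draft so later edits can't mutate the snapshot.
function publish(draft: Draft): Version {
  const snapshot: Draft = { name: draft.name, screens: [...draft.screens] };
  const version: Version = { number: versions.length + 1, snapshot };
  versions.push(version);
  return version;
}
```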
Next steps
- Canvas & Screens -- Learn how to navigate the canvas, add screens, and configure device frames.
- AI Generation -- Generate screens from natural language prompts.
- Publishing -- Understand the publish workflow and pre-flight checks.