---
name: "spatial-motion-genai-architect"
version: "1.1.0"
owner: "advanced-interfaces-rnd"
status: "active"
domain_tags:
  - "motion-design"
  - "xr-prototyping"
  - "generative-ai"
  - "touchdesigner"
  - "unity"
  - "spatial-computing"
  - "immersive-storytelling"
risk_level: "medium"
intent: >
  Designs experimental spatial and motion-based prototypes that translate complex
  technical systems into immersive visual narratives. Combines generative AI,
  real-time node-based compositing, and XR implementation to help teams
  understand, evaluate, and de-risk novel interface concepts.
when_to_use: >
  Use when a complex product, platform, or technical architecture needs to be
  made legible through motion, spatial UX, or immersive demonstration; when
  executive or stakeholder buy-in depends on showing temporal behavior rather
  than static diagrams; or when evaluating novel interaction paradigms such as
  gaze, gesture, hand tracking, or embodied interface flows.
when_not_to_use: >
  Do not use for static web/mobile wireframes, standard brand design, generic
  marketing animation, or polished production-ready XR applications intended for
  release. This skill is optimized for exploratory, high-fidelity R&D and
  concept validation, not final engineering hardening.
inputs:
  - name: technical_system_data
    type: json/markdown
    required: true
    description: >
      Technical logic, architecture, system behavior, data models, workflows, or
      engineering specifications that need narrative and visual translation.
  - name: spatial_interaction_spec
    type: text/markdown
    required: true
    description: >
      Proposed input model and spatial UX rules, such as gaze-to-select,
      pinch-to-confirm, dwell-based targeting, hand-ray navigation, or room-scale
      interactions.
  - name: brand_motion_constraints
    type: text/markdown
    required: false
    description: >
      Brand style, motion language, color rules, typography, cinematic references,
      and quality thresholds that constrain the prototype's aesthetic output.
  - name: demo_context
    type: text/markdown
    required: false
    description: >
      Audience, presentation environment, target hardware, and success context,
      such as executive review, internal concept test, investor demo, or design
      sprint checkpoint.
outputs:
  - name: immersive_prototype_package
    type: file/zip
    success_criteria: >
      Contains a coherent prototype bundle including a concept reel, a motion or
      node-based logic graph, an XR interaction sandbox, and workflow
      documentation sufficient for another designer or technical artist to
      reproduce or extend the work.
dependencies:
  tools:
    - touchdesigner_api
    - generative_video_engine_kling
    - unity_xr_bridge
    - houdini_vfx
    - topaz_upscaler
    - cinematic_color_pipeline
  knowledge:
    - spatial_ergonomics
    - motion_language_systems
    - color_theory
    - genai_prompt_engineering
    - xr_accessibility_heuristics
    - embodied_interaction_patterns
verification:
  required: true
  methods:
    - narrative_traceability_check
    - input_latency_test
    - spatial_ergonomics_review
    - brand_aesthetic_audit
    - usability_heuristic_check
    - frame_rate_stability_check
    - genai_provenance_review
policy:
  safety_notes: >
    All generative outputs must be reviewed for copyright artifacts, hallucinated
    interfaces, unsafe spatial motion, misleading technical claims, and brand
    inconsistency before inclusion in demos or executive-facing materials.
  privacy_notes: >
    Do not include sensitive production data, real customer information, or
    proprietary internal diagrams in third-party generative workflows unless
    those tools are approved for that data class.
  presentation_notes: >
    Clearly label speculative scenes, simulated behaviors, and fake data layers
    so stakeholders do not confuse concept visualization with production
    readiness.
ports:
  provides:
    - xr_interaction_model_v1
    - motion_narrative_deck_v1
    - immersive_demo_spec_v1
    - genai_workflow_log_v1
  consumes:
    - developer_platform_spec_v1
    - technical_architecture_v1
    - brand_system_v1
    - presentation_brief_v1
---
# Purpose
This skill acts as the spatial and temporal translation layer between raw technical
systems and human understanding.
It exists to make abstract architecture, invisible system logic, and unfamiliar
interaction patterns feel graspable through motion, depth, timing, and embodied
feedback.
# Success Criteria
A successful run of this skill produces:
- A prototype that makes the underlying technical concept easier to understand.
- A motion/spatial language that feels intentional rather than decorative.
- A believable interaction model that can be tested, critiqued, and iterated.
- A reproducible workflow log so the team can extend the prototype without
re-inventing the process.
- A package that distinguishes clearly between concept simulation and production
implementation.
# Assumptions
## Scrappy R&D Invariant
Speed matters. It is acceptable to use GenAI, compositing tricks, simulated
systems, and abstracted visuals to prove a concept quickly, provided that
speculation is labeled and technical misrepresentation is avoided.
## Input Shift
The primary user is not assumed to be using mouse and keyboard. Interaction
models must account for dwell time, gesture fatigue, reach envelopes, field of
view limits, depth legibility, and head/eye coordination.
## Prototype Honesty
The prototype may simplify engineering complexity, but it must not fabricate
capabilities that materially mislead decision-makers.
# Input Contracts
## technical_system_data
Must include enough detail to identify:
- core entities
- states or transitions
- user/system triggers
- feedback loops
- latency-sensitive moments
- primary constraints or failure states
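An intake gate can verify this contract before Phase 1 begins. A minimal sketch in Python (the field names below are illustrative assumptions, not a fixed schema):

```python
# Sketch: check that technical_system_data covers the intake contract.
# Field names are illustrative assumptions mirroring the bullet list above.
REQUIRED_FIELDS = {
    "entities",            # core entities
    "transitions",         # states or transitions
    "triggers",            # user/system triggers
    "feedback_loops",      # feedback loops
    "latency_sensitive",   # latency-sensitive moments
    "constraints",         # primary constraints or failure states
}

def missing_fields(system_data: dict) -> set:
    """Return contract fields absent from the ingested system data."""
    return REQUIRED_FIELDS - set(system_data.keys())
```

A non-empty result means the narrative arc cannot yet be derived and the input should be pushed back for detail.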
## spatial_interaction_spec
Must specify:
- primary input modality
- fallback interaction method
- target environment
- expected session duration
- precision tolerance
- confirmation model for risky actions
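For concreteness, a spec satisfying this contract might look like the following sketch (all values are assumptions for illustration, not recommended defaults):

```python
# Illustrative spatial_interaction_spec; values are assumptions, not defaults.
spec = {
    "primary_input": "gaze + pinch",
    "fallback_input": "controller ray",          # fallback interaction method
    "environment": "seated, desk-scale",
    "session_minutes": 10,                       # expected session duration
    "precision_tolerance_deg": 1.5,              # angular targeting tolerance
    "risky_action_confirm": "dwell + pinch",     # two-step confirmation model
}

def has_fallback(interaction_spec: dict) -> bool:
    """A spec without a fallback input modality is incomplete."""
    return bool(interaction_spec.get("fallback_input"))
```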
## brand_motion_constraints
If omitted, the skill should generate 3 to 5 distinct motion/visual directions
before committing to a single prototype language.
# Output Contracts
## immersive_prototype_package
The package should contain, at minimum:
- `genai_concept_reel.mp4`
  - A short motion piece showing the intended feel, atmosphere, timing, and
    narrative framing of the system.
- `node_network.toe`
  - The node-based motion/data logic environment used to represent technical
    behavior, abstract data flow, or dynamic scene control.
- `xr_interaction_sandbox/`
  - A Unity-based or equivalent sandbox implementing the proposed interaction
    model at a prototype level.
- `workflow_log.md`
  - A reproducible log of prompts, tools, exports, cleanup steps, and manual
    corrections.
- `assumptions_and_simulations.md`
  - Explicit record of what was real, what was mocked, and what remains unproven.
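Package completeness can be checked mechanically before handoff. A minimal sketch (the artifact names mirror the list above; the directory layout is an assumption):

```python
from pathlib import Path

# Sketch: verify the minimum bundle contents before handoff.
REQUIRED_ARTIFACTS = [
    "genai_concept_reel.mp4",
    "node_network.toe",
    "xr_interaction_sandbox",        # sandbox directory
    "workflow_log.md",
    "assumptions_and_simulations.md",
]

def missing_artifacts(package_root: str) -> list:
    """Return required artifacts absent from the package root."""
    root = Path(package_root)
    return [name for name in REQUIRED_ARTIFACTS if not (root / name).exists()]
```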
# Operating Protocol
## Invariants
These must always hold:
- Brand taste must dominate over model artifacts or AI novelty.
- Spatial behavior must remain readable, not merely spectacular.
- Motion should explain system logic, not distract from it.
- Every prototype must leave behind a repeatable process trail.
- Stakeholder-facing outputs must separate simulated effect from validated capability.
## Decision Rules
### Visual Strategy
- If the system is highly abstract, convert logic into spatial metaphor before
choosing a visual style.
- If the architecture contains flows, queues, weights, or probabilities, favor
animated systems and node logic over static diagrams.
### Tooling Strategy
- If real-time responsiveness matters, prefer TouchDesigner, Unity, or other
node/runtime-based systems over purely offline rendered video.
- If GenAI output fails brand fidelity or geometric discipline, replace it with
deterministic 3D or manual compositing.
### Interaction Strategy
- If the primary input is eye tracking, require dwell-based confirmation and
visible focus states.
- If the primary input is hand tracking, minimize repeated elevated-arm actions
and keep targets within comfortable reach zones.
- If the environment is underdefined, generate multiple visual directions before
building the interaction rig.
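The dwell-based confirmation rule above can be prototyped as a small state machine. A sketch in Python, portable to a Unity or TouchDesigner update loop (the dwell duration is an illustrative assumption, not validated ergonomics):

```python
# Sketch: dwell-based confirmation for gaze input. The 600 ms default is an
# illustrative assumption; tune per ergonomics review, not here.
class DwellSelector:
    def __init__(self, dwell_ms: float = 600.0):
        self.dwell_ms = dwell_ms
        self.focused = None        # currently focused target (drives focus state)
        self._elapsed = 0.0

    def update(self, target, dt_ms: float) -> bool:
        """Advance the dwell timer each frame; return True when selection confirms."""
        if target != self.focused:   # focus changed: reset timer, show new focus state
            self.focused = target
            self._elapsed = 0.0
            return False
        if target is None:
            return False
        self._elapsed += dt_ms
        if self._elapsed >= self.dwell_ms:
            self._elapsed = 0.0      # re-arm after confirmation
            return True
        return False
```

Resetting on every focus change is what makes the visible focus state honest: the ring only fills while gaze actually rests on one target.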
### Demo Integrity
- If the prototype includes non-functional simulation, label it clearly in both
the package and the deck.
- If a behavior has not been tested in engine, it cannot be presented as validated UX.
# Execution Steps
## Phase 1 — Orient
1. Ingest `technical_system_data`.
2. Extract core entities, triggers, feedback loops, and time-based behaviors.
3. Translate system logic into a narrative arc:
   - what appears
   - what reacts
   - what changes over time
   - what the viewer/user should understand
4. Ingest `spatial_interaction_spec`.
5. Define the spatial metaphor and motion thesis.
6. Produce 3 to 5 visual directions or mood boards if no motion language exists.
## Phase 2 — Motion Conceptualization
1. Generate style frames using approved GenAI tools or manual direction.
2. Curate aggressively for geometry, lighting, typography, and brand fitness.
3. Animate a concept reel showing:
   - interaction framing
   - temporal transitions
   - feedback states
   - emotional tone
4. Document the prompt chain, cleanup steps, and edits.
## Phase 3 — System Logic Mapping
1. Convert technical logic into dynamic motion structures in TouchDesigner or
equivalent node-based tools.
2. Bind data relationships, state transitions, and temporal triggers to visible
behaviors.
3. Validate that motion correspondences are structurally faithful to the source system.
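Step 3 can be made mechanical: every declared system state or transition should be bound to at least one visible behavior. A sketch (data shapes are assumptions for illustration):

```python
# Sketch: structural-faithfulness check for Phase 3. A system state with no
# motion binding means the node graph has drifted from the source system.
def untraced_states(system_states: set, motion_bindings: dict) -> set:
    """Return system states that have no bound visible behavior."""
    return system_states - set(motion_bindings.keys())
```

A non-empty result at review time is a narrative-traceability failure, not a styling note.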
## Phase 4 — Spatial Product Translation
1. Import assets and logic into Unity or equivalent XR runtime.
2. Build an interaction sandbox for the target input model.
3. Tune scale, depth, legibility, and rest-zone placement.
4. Add feedback states for hover, dwell, confirm, cancel, and error.
5. Test for fatigue, precision issues, and timing clarity.
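Step 3's placement tuning can be partially automated by flagging targets outside a comfortable gaze cone. A sketch; the 30-degree half-angle is an assumed comfort heuristic, not a published standard:

```python
import math

# Sketch: flag head-relative targets outside an assumed 30-degree comfort cone
# around the forward axis (+z). Tune the threshold per ergonomics review.
def outside_comfort_cone(target_xyz, max_angle_deg: float = 30.0) -> bool:
    x, y, z = target_xyz                       # head-relative meters, forward = +z
    norm = math.sqrt(x * x + y * y + z * z)
    if norm == 0.0:
        return True                            # degenerate placement
    cos_angle = max(-1.0, min(1.0, z / norm))  # dot with (0, 0, 1), clamped
    return math.degrees(math.acos(cos_angle)) > max_angle_deg
```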
## Phase 5 — Package and Handoff
1. Assemble the prototype bundle.
2. Add workflow and assumptions logs.
3. Add presentation-safe exports.
4. Annotate what is:
   - validated
   - simulated
   - speculative
   - not yet implemented
# Verification (CoVe)
## 1. Narrative Traceability
**Claim:** The prototype explains the system accurately.
**Evidence:** Motion states and scene transitions map back to defined entities,
events, or flows in `technical_system_data`.
## 2. Interaction Ergonomics
**Claim:** The XR inputs are usable and physically reasonable.
**Evidence:** Target placement, dwell timing, and gesture zones fall within basic
spatial ergonomics heuristics and avoid repetitive strain patterns.
## 3. Aesthetic Consistency
**Claim:** The concept reel, node logic visuals, and XR sandbox belong to the same system.
**Evidence:** Shared motion language, color behavior, visual rhythm, typography,
and material treatment across all artifacts.
## 4. Runtime Stability
**Claim:** The prototype is viable enough for demonstration.
**Evidence:** Stable framerate, acceptable input latency, no major interaction
breakage, and no visually disruptive artifacting.
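The framerate claim can be evidenced from captured frame times. A sketch; the 72 fps target mirrors the example schema below, and judging by a near-worst-case percentile (rather than the mean) is an assumption:

```python
# Sketch: runtime-stability evidence from captured frame times. Judging the
# ~99th-percentile frame (not the mean) is an assumed "1% low" heuristic.
def frame_rate_stable(frame_times_ms: list, target_fps: float = 72.0) -> bool:
    """True if even near-worst-case frames fit the per-frame budget."""
    budget_ms = 1000.0 / target_fps                      # ~13.9 ms at 72 fps
    worst = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99)]
    return worst <= budget_ms
```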
## 5. GenAI Governance
**Claim:** The prototype is safe to show internally.
**Evidence:** Review confirms no obvious copyright fragments, hallucinated logos,
unapproved data exposure, or unmarked speculative claims.
# Failure Modes and Escalation
## Failure: GenAI output is visually impressive but structurally wrong
**Fix:** Rebuild the motion system from architecture-first diagrams and constrain
GenAI to texture, lighting, or atmospheric augmentation only.
## Failure: XR targets are too small or cognitively noisy
**Fix:** Increase target area, add gaze assist or magnetic snapping, reduce
simultaneous stimuli, and simplify confirmation states.
## Failure: Prototype looks high-end but teaches nothing
**Fix:** Rewrite the narrative arc around one core concept and reduce purely
decorative motion.
## Failure: Spatial scene causes discomfort or motion strain
**Fix:** Reduce camera motion, stabilize horizon references, slow acceleration
changes, and simplify depth changes.
## Failure: Workflow cannot be reproduced by the team
**Fix:** Expand `workflow_log.md`, save intermediate assets, and document prompt,
node, and export settings.
# Artifacts & Templates
## Output Schema: xr_interaction_model_v1
```json
{
  "prototype_id": "SPATIAL-FLOW-09A",
  "narrative_theme": "Data as a Fluid Medium",
  "prototype_scope": "Executive concept validation",
  "interaction_params": {
    "primary_input": "Gaze + Hand_Pinch",
    "secondary_input": "Controller fallback",
    "feedback_state": "Color shift + scale pulse + dwell ring",
    "ui_scale_factor": 1.2,
    "dwell_time_ms": 200
  },
  "runtime_targets": {
    "min_frame_rate_fps": 72,
    "max_interaction_latency_ms": 120
  },
  "simulation_disclosure": {
    "contains_mocked_data": true,
    "contains_speculative_motion": true,
    "validated_in_engine": ["selection flow", "feedback timing"],
    "not_validated_in_engine": ["multi-user sync", "production asset loading"]
  },
  "genai_workflow_log": "Generated direction frames via Midjourney v6, refined manually, upscaled, animated via Kling, composited into spatial proxy scenes, then translated into Unity interaction tests."
}
```