Cecilia Soto

Design System Foundations

6 min read

This was an individual initiative: the SaaS platform I worked with did not have a formalized design system, so spacing, font formatting, and color meanings were not consistent throughout the platform, creating overhead during design and development as teams repeatedly reinvented the wheel. The project spanned three months (worked on between other projects). I acted as the sole designer and worked alongside C-suite stakeholders representing product and engineering. Deliverables included JIRA wiki documentation, a Figma component and style library, and a presentation to promote the new design guide to the team.

Audit Process:

  • Visual Inventory: Captured and organized screenshots of all 20 tabs and 34 modals in a master Figma file, annotating each with measurements for spacing (px values), colors (hex codes), and typography (size/weight/family).
  • Pattern Analysis (Days 6-10): Built a tracking spreadsheet cataloging every unique instance of buttons (found 8 variations), input fields (12 variations), and spacing values (15+ different measurements).
  • Stakeholder Validation (Days 11-14): Conducted three 45-minute sessions with product managers and two with senior engineers to distinguish intentional variations from drift and identify workflow friction points.

The audit surfaced 47 distinct inconsistencies requiring standardization.

Context and problem

The product had a recognizable look and feel but no formal design system, so patterns varied by team and screen. The color palette lacked semantic roles (no clear success/warning/info/interactive cues), typography was inconsistent (arbitrary header sizes, too many grays), and there were accessibility issues such as low text–foreground contrast and overly complex labels. In Figma there was no unified library, leading to duplicate components and uneven implementations.

Without a single source of truth (tokens or shared components), parallel teams often rebuilt UI elements, making it hard to standardize states and behaviors; product and engineering routinely “worked double.” This slowed delivery, raised QA churn, eroded consistency (and trust/reliability), and introduced accessibility risk. Most importantly, it didn’t scale — every new feature multiplied inconsistencies instead of compounding quality.

Goals and success metrics

  • Settle inconsistencies in the current UI with stakeholders
  • Decrease time to prototype common UI (simple forms) by >30%
  • Create a referenceable document to serve as a source of truth for settling small design decisions

Measurement methods: self-timed comparison of design time for form interfaces pre- and post-implementation (from start of UI work to first draft), wiki page views, and colleague feedback.

Discovery and audit

A platform scan identified 20+ tabs and 34 complex modals with client-facing UI. There was also an official color-palette document listing 8 colors intended for marketing use. Product managers' qualitative input was also used to get a sense of copywriting styles.

Findings

  • Multiple instances of varied spacing, particularly between label/field pairs in forms and content padding inside modals
  • Some words were inconsistently capitalized, like using "the Platform" vs "the platform" to refer to the product (because the product was white-labeled we could not use the official name)
  • Dropdown menu naming conventions varied by product pod: some used a verb, some used camel-case capitalization
  • Multiple variations of grays and sometimes other colors for titles
  • Consistent use of colors for different statuses (red for errors and required actions, yellow for paused or disabled items, blues for emphasis, etc.)

Design strategy

Here's my thought process alongside some examples. All visuals are reconstructed to protect client confidentiality; they reflect architecture, naming, and decisions, not proprietary UI.

Token taxonomy

Built a three-tier token system to balance design flexibility with governance:

Tier 1: Primitives (Raw Values)

  • Colors: blue.100 through blue.900, gray.100–900, red/yellow/green scales
  • Typography: body (12px), title1 (14px), title2 (16px), title3 (20px)

Tier 2: Semantic (Purpose-Driven)

  • color.primary → blue.600 (branding, interactive elements)
  • color.secondary → blue.200 (branding, complementary interactive elements, text links)

This structure let designers reference meaning ('use color.error for validation') without memorizing hex values, while enabling future rebranding by updating only Tier 1.
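The two-tier lookup above can be sketched in code. This is a minimal, hypothetical illustration, not the production token pipeline: the token names follow the examples in this write-up, while the hex values are placeholders.

```python
# Minimal sketch of the semantic -> primitive token resolution.
# Hex values below are placeholders, not the real palette.

PRIMITIVES = {
    "blue.600": "#1D4ED8",
    "blue.200": "#BFDBFE",
    "red.600": "#DC2626",
}

SEMANTIC = {
    "color.primary": "blue.600",
    "color.secondary": "blue.200",
    "color.error": "red.600",
}

def resolve(token: str) -> str:
    """Follow a semantic token down to its raw primitive value."""
    name = SEMANTIC.get(token, token)  # Tier 2 alias -> Tier 1 name
    return PRIMITIVES[name]            # Tier 1 name -> raw hex
```

Because semantic names only point at primitives, a rebrand touches `PRIMITIVES` alone; every `color.*` reference stays stable.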

Component architecture

  • Table cell: Base (shape/spacing) → Variants (numeric/string) → States (default/hover/disabled/pressed).
  • Inputs: Field + Label + Help + Validation + error messages.
  • Link to Figma demo file.
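The Base → Variants → States layering for the table cell can be modeled as successive overrides. A toy sketch, with hypothetical property names and values:

```python
# Illustrative Base -> Variant -> State layering for the table cell.
# Property names and values here are hypothetical.

BASE = {"padding": 8, "height": 32}  # shared shape/spacing
VARIANTS = {
    "numeric": {"align": "right"},
    "string": {"align": "left"},
}
STATES = {
    "default": {},
    "hover": {"bg": "gray.100"},
    "disabled": {"opacity": 0.5},
    "pressed": {"bg": "gray.200"},
}

def cell_style(variant: str, state: str = "default") -> dict:
    """Merge layers in order: base, then variant, then state overrides."""
    return {**BASE, **VARIANTS[variant], **STATES[state]}
```

Later layers win on conflicts, so a state can override a variant without duplicating the base specs.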

Accessibility guardrails

  • Color contrast (WCAG targets for text and foreground states).
  • Voice and tone guidelines.
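The contrast guardrail is checkable mechanically with the WCAG 2.1 relative-luminance formula. A small sketch of that check (the formula is standard WCAG; the function names are mine):

```python
def srgb_to_linear(c: float) -> float:
    """Linearize an sRGB channel in [0, 1] per WCAG 2.1."""
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a '#RRGGBB' color."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio (lighter + 0.05) / (darker + 0.05), always >= 1."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

WCAG AA requires at least 4.5:1 for body text and 3:1 for large text, which is the bar the audit backlog items were measured against.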

Documentation

  • List all recommendations and design decisions (from stakeholder meetings).
  • Information architecture based on feedback from the product and engineering teams, derived via a similarity matrix and hierarchical clustering, using FigJam (unmoderated closed-tree sorting) and R (data synthesis and graphs). Figure 1 is split in two panes. On the left is an R script that loads a tab-separated text file and assigns row and column names for 27 documentation topics: checkboxes, empty states, steppers, tooltips, typography, tabs, icons, finders, buttons, lists, text input, forms, layouts, warnings, loading, error messages, date fields, color palette, chips, static text, modals, radio buttons, sections, writing, dropdowns, and info modals. On the right is the resulting 27×27 numeric grid: the diagonal cells are 0, and off-diagonal cells range roughly 1–13, counting how often participants grouped two topics together (higher numbers indicate stronger similarity). The matrix was later converted to distances and used with multidimensional scaling (MDS) to reveal the clusters that informed the documentation IA, for example Foundations, Inputs, Feedback, and Navigation. Figure 1: R code and a 27×27 similarity matrix from a card-sorting study used to build the design-system information architecture.
  • Waiting-time indicator guidelines for waits under 1500 ms, between 1500 ms and 2000 ms, and over 2000 ms
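The waiting-time rule can be sketched as a simple three-band decision. This is a hypothetical illustration: the 1500 ms lower cutoff comes from the guideline, but the 2000 ms upper cutoff and the indicator names are my assumptions, since the source text is ambiguous about both.

```python
def wait_indicator(elapsed_ms: int, short: int = 1500, long: int = 2000) -> str:
    """Pick a waiting indicator for an expected wait of elapsed_ms.

    Cutoffs are parameters; the defaults and indicator names here are
    illustrative assumptions, not the documented spec.
    """
    if elapsed_ms < short:
        return "none"          # fast enough: no indicator needed
    if elapsed_ms <= long:
        return "spinner"       # brief wait: indeterminate spinner
    return "progress-bar"      # long wait: show determinate progress
```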

The card-sorting study with 8 participants (4 product managers, 4 engineers) using closed-tree sorting revealed four natural clusters via hierarchical clustering:
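The analysis step above can be reconstructed in miniature: co-grouping counts become distances, and an agglomerative pass merges the closest topics. This toy uses four topics, made-up counts, and single-linkage clustering in plain Python; the real study used R on the full 27×27 matrix.

```python
# Toy reconstruction: similarity counts -> distances -> single-linkage
# clusters. Topic names and counts below are illustrative, not the data.

TOPICS = ["text input", "dropdowns", "tabs", "steppers"]
SIM = [  # times each pair was grouped together (max possible = 13)
    [0, 12, 2, 1],
    [12, 0, 3, 2],
    [2, 3, 0, 11],
    [1, 2, 11, 0],
]

MAX_SIM = 13
DIST = [[0 if i == j else MAX_SIM - SIM[i][j] for j in range(len(SIM))]
        for i in range(len(SIM))]

def single_linkage(dist, names, k):
    """Repeatedly merge the two closest clusters until k remain."""
    clusters = [[i] for i in range(len(names))]
    while len(clusters) > k:
        a, b = min(
            ((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
            key=lambda p: min(dist[i][j]
                              for i in clusters[p[0]]
                              for j in clusters[p[1]]),
        )
        clusters[a] += clusters.pop(b)  # b > a, so pop is safe
    return [sorted(names[i] for i in c) for c in clusters]

# With k=2, the input-like pair and the navigation-like pair separate:
# [['dropdowns', 'text input'], ['steppers', 'tabs']]
```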

Resulting Documentation Structure:

  1. Foundations (Entry point for all users) • Color Palette & Semantic Tokens • Typography & Text Hierarchy • Spacing Scale & Layout Grid • Icons & Iconography → Teams needed these basics before implementing components
  2. Inputs & Forms (Highest co-occurrence) • Text Input, Dropdowns, Radio Buttons, Checkboxes • Date Fields & Specialized Inputs • Form Layouts & Validation Patterns → Similarity scores 10-13 indicated teams mentally group these during data-entry tasks
  3. Feedback & Communication • Error Messages, Warnings, Loading States • Tooltips, Modals, Info Modals, Empty States → All system-to-user communication patterns
  4. Navigation & Organization • Tabs, Steppers, Lists • Sections, Finders • Buttons (also cross-referenced in Inputs due to dual use in forms/navigation) → Elements for structuring and moving through content

Outcomes and impact

  • JIRA wiki
  • List of all screens with accessibility inconsistencies for UX backlog
  • Presentation to engineering and product teams
  • Documentation queries showed steady growth of ~3 visits/day for 4 weeks straight, 2.2× the total number of people on the product and engineering teams

Prototyping Efficiency: 60% Time Reduction

Measured across standard data-entry forms (baseline: ~8 input fields, 2 dropdowns, 1 date picker, submit/cancel buttons):

Before System (2 projects):

  • Average: 3.0 hours per form (range: 2.5–3.5h)
  • Breakdown: Finding existing patterns (45 min), recreating components (1.5h), checking spacing/colors against other screens (45 min)
  • Outcome: Inconsistent field heights and label positioning across attempts

After System (3 projects):

  • Average: 1.2 hours per form (range: 1.0–1.5h)
  • Breakdown: Checking documentation (10 min), placing library components (40 min), arranging layout (30 min)
  • Outcome: Pixel-perfect consistency, accessible by default

Key Efficiency Gains:

  • Eliminated spacing guesswork (saved 15-20 min per form)
  • Pre-built validation states (error/success) saved 30 min
  • Standardized field heights enabled faster auto-layout alignment

What Drove Documentation Traffic:

Though JIRA analytics didn't provide per-user metrics, the total view pattern and qualitative feedback revealed three growth drivers:

  1. Launch Presentation: Live walkthrough for 15 team members demonstrated common lookups ('Which button variant for secondary actions?', 'Modal padding specs?')
  2. Search Discoverability: The wiki appeared in JIRA search results for queries like 'form validation' or 'button specs.'
  3. Self-Service Culture Shift: Product managers began referencing the wiki in design conversations. One PM reported using it to validate their own mockups before review

The steady growth to ~3 visits/day (2.2x the 15-person team size) suggested repeat consultation rather than one-time visits, validated by colleague feedback that the wiki became their 'first stop' for design questions.

Possible improvements

  • Implement a changelog and versioning for the design system
  • Define and name spacing tokens
  • Coordinate with engineering so code components and tokens mirror the design system