GenAI Integration Across Infor Birst

Designing prompt-driven AI features into the Birst analytics platform during the company-wide GenAI integration push, contributing to roughly ten features and helping define the interaction patterns for how AI lives alongside manual input.

Role: Design Lead on specific features within a collaborative effort
Timeline: 2023 through 2026
Scope: Direct design ownership of approximately 10 GenAI features across the Birst platform, plus collaborative work establishing the interaction patterns the Birst design team standardized for AI use
Team: Three Birst design team leads (myself plus two peers), supported by four additional designers in India; partnered with Infor's central design team, who provided AI-specific UI components
Status: Several features shipped; others designed and approved for production at the time of my departure, currently in development


The context

Infor's GenAI integration was top-down. Executives and product leadership across the company pushed for AI features to be incorporated into existing functionality and into every new feature in the pipeline. The mandate hit Birst in 2023, and the team responded with an all-out integration push. PMs requested AI elements wherever they thought one might add value, and design tickets started flowing to our team weekly.

Infor's central design team owned the AI visual layer. They provided the generate buttons, the AI disclaimer copywriting, the GenAI icon, and the differentiated colors that signaled AI-driven actions throughout the product. What they didn't provide was the interaction architecture for how those components fit together in real flows.

That gap is where the Birst design team did its most consequential work.


How the work was distributed

Each week, the three of us at the team-lead level (myself and my two designer peers) met with the four additional designers in India to triage incoming AI tickets and assign them based on complexity and each designer's proficiency. The lead designers took the more complex features; the remaining work was distributed across the team based on familiarity with the affected product area.

Across the GenAI integration as a whole, I directly designed approximately 10 AI features end-to-end, contributed to others as a reviewer and pattern-setter, and mentored the associate designer on his AI work. The features I owned varied widely in scope, from a chart generator that lived inside a single dashboard modal to a prompt-to-code feature that fundamentally changed who could write Birst's custom query language.


Filling the interaction-pattern gap

Infor's design system provided the visual components for AI features but left the interaction patterns largely undefined. The Birst design team filled that gap. The most consequential pattern we landed on, and the one I'm proudest of, was the unified prompt-and-edit interface.

The problem we kept running into: a user might want to generate something with AI, but they might also want to write it manually, or generate something and then edit the result. Designing three separate UIs for those three states would have cluttered the product and forced users to context-switch between modes. Instead, we designed a single input box that handled all three, controlled by a segmented control above the field.

The user could choose Manual to type directly into the field. Or they could choose AI to type a prompt and click Generate. The AI's output appeared in the same field, replacing the prompt with the generated formula or query. From there, the user had three paths: keep the result as-is, click an Undo icon to return to the prompt and refine it, or switch the segmented control to Manual to edit the AI's output by hand.

This pattern preserved AI as an opt-in helper rather than a forced experience, gave users full editorial control over AI output, and kept the interface stable so users always knew where they were. Once we'd nailed the flow, we applied it across multiple features, and the same-box pattern became one of the contributions the Birst design team standardized internally for use across the product.

[VISUAL: Annotated diagram of the segmented-controller prompt interface, showing the Manual / AI toggle, the prompt input state, the generated output state, and the Undo icon to return to the prompt.]
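
To make the state logic concrete, here is a minimal TypeScript sketch of the unified input. All names are hypothetical illustrations of the pattern, not Birst's actual implementation; the stubbed generate() stands in for the model call.

    // A minimal sketch of the unified prompt-and-edit input's state logic.
    // All names are hypothetical; generate() is a stand-in for the model call.

    type Mode = "manual" | "ai";

    interface PromptEditState {
      mode: Mode;                // segmented control position
      fieldValue: string;        // what the single input box shows right now
      lastPrompt: string | null; // retained so Undo can restore the prompt
    }

    // Stand-in for the model call.
    async function generate(prompt: string): Promise<string> {
      return `/* formula generated from: ${prompt} */`;
    }

    async function onGenerate(state: PromptEditState): Promise<PromptEditState> {
      // In AI mode the box holds a prompt; generation replaces it with output.
      const output = await generate(state.fieldValue);
      return { ...state, lastPrompt: state.fieldValue, fieldValue: output };
    }

    function onUndo(state: PromptEditState): PromptEditState {
      // Undo returns the user to the prompt so they can refine and regenerate.
      return state.lastPrompt === null
        ? state
        : { ...state, fieldValue: state.lastPrompt, lastPrompt: null };
    }

    function onModeSwitch(state: PromptEditState, mode: Mode): PromptEditState {
      // Switching to Manual keeps generated output in place for hand-editing.
      return { ...state, mode };
    }

The key design decision lives in onGenerate: the prompt is retained rather than discarded, which is what makes Undo a cheap, safe action instead of a destructive one.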


Features I designed

Generate chart from a prompt

A user opens a dashboard, enters edit mode, and clicks the plus icon to add a new chart. Instead of opening the Visualizer chart-builder, they choose the "Generate chart" option, which brings up a modal with a prompt input at the top and a blank canvas below where the generated chart will render. The user types what they want, clicks Generate, sees the chart render in the preview area, and can either insert the chart onto the dashboard or refine the prompt and regenerate.

The design problem here was making AI feel like a natural alternative to opening the full Visualizer product, not a separate workflow added on. The user shouldn't have to learn a new skill just because they're using AI. The flow mirrors the manual chart-creation flow but compresses it to one prompt and one preview.

[VISUAL: Generate chart modal with prompt at the top, generated chart in the preview area, and the action buttons.]
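
For a sense of how thin the flow is compared to the full Visualizer, here is a hedged sketch. ChartSpec and generateChartSpec() are assumptions for illustration, not Birst's actual API.

    // Hypothetical sketch of the generate-chart modal: one prompt in, one
    // preview out. ChartSpec and generateChartSpec() are illustrative
    // assumptions, not Birst's actual API.

    interface ChartSpec {
      type: "bar" | "line" | "pie";
      measures: string[];   // e.g. ["Revenue"]
      dimensions: string[]; // e.g. ["Region", "Quarter"]
    }

    type ModalState =
      | { step: "prompting"; prompt: string }
      | { step: "generating"; prompt: string }
      | { step: "preview"; prompt: string; spec: ChartSpec };

    // Stand-in for the model call.
    async function generateChartSpec(prompt: string): Promise<ChartSpec> {
      return { type: "bar", measures: ["Revenue"], dimensions: ["Region"] };
    }

    async function onGenerateClick(prompt: string): Promise<ModalState> {
      // Refining just re-runs this step; inserting places the spec on the dashboard.
      const spec = await generateChartSpec(prompt);
      return { step: "preview", prompt, spec };
    }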


Prompt-to-BQL

BQL is Birst's custom query language, similar to SQL but with affordances that make queries easier to write than raw SQL: built-in formatting, automatic dimension and attribute specification, bracket helpers, and more. Even with those affordances, BQL still required some technical proficiency, and the barrier to entry was real for casual Visualizer users.

Prompt-to-BQL eliminated that barrier. A user could type what they wanted in plain English ("filter to all bedroom and living room furniture, within the furniture category, combined into a single category"), click Generate, and BQL code would appear in the editor. The user could use the result as-is to create a custom column, measure, attribute, or filter, or they could edit the generated code by hand. The feature lived inside Visualizer, where custom expressions are created.

The interaction used the segmented-controller pattern described above. The same box that displayed BQL code also accepted a natural-language prompt when the user toggled to AI mode. Generated code populated a field below the prompt, since the editor had plenty of real estate to fit both. The user could refine the prompt or edit the result manually with the standard BQL editor.

The accuracy was strong on simple queries. Complex queries occasionally produced ordering or combination issues that a user could correct manually before applying. The design philosophy was always to assume the user might need to edit the result, never to lock them into the generated output.


AI-generated descriptions

Saved expressions, saved reports, and saved filters could all benefit from a description so other users on the team understood what they were looking at. Manual descriptions were often skipped because writing them was tedious, and a missing description made shared assets harder to use.

I worked with the associate designer to design a description generator that used the existing context of the saved item (its formula, its inputs, its applied filters) to produce a description automatically. The user could click Generate to populate the description field, edit it manually, regenerate it, or skip it entirely. When viewing a saved expression in the library, the description appeared inline on the cell with a description icon, expanding into a tooltip on hover. Other surfaces used an information panel or a modal.

This was one of the features where I mentored the associate designer rather than designing alone. My contribution was establishing the flow patterns, reviewing his work against the team's standards for AI components, and ensuring the description generator followed the same prompt-and-edit philosophy as the rest of our AI features.
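
A sketch of the idea that made one-click generation possible: the prompt is assembled from the saved item's own metadata, so the user never has to write it. The SavedItem shape and field names below are assumptions for illustration.

    // Hypothetical sketch of how a saved item's existing context feeds the
    // description generator. The SavedItem shape is an assumption.

    interface SavedItem {
      name: string;
      formula?: string;   // e.g. a saved expression's BQL
      inputs?: string[];  // columns or measures the item references
      filters?: string[]; // applied filters, if any
    }

    function buildDescriptionPrompt(item: SavedItem): string {
      // The user never writes this prompt; it is assembled from the item
      // itself, which is why generation can be a single click.
      return [
        `Write a one-sentence description of the saved item "${item.name}".`,
        item.formula ? `Formula: ${item.formula}` : "",
        item.inputs?.length ? `Inputs: ${item.inputs.join(", ")}` : "",
        item.filters?.length ? `Filters: ${item.filters.join(", ")}` : "",
      ].filter(Boolean).join("\n");
    }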


Impact Flow's AI chatbot

The Impact Flow data-lineage feature included an AI chatbot that let users ask natural-language questions about the connections in their data. (I cover this feature in depth in the Impact Flow case study.)


Design principles the Birst team established

Across these features, the Birst design team converged on a handful of principles for how AI should live in the product:

  • Opt-in, never forced. Manual input was always the default. Users had to choose to use AI. We never replaced functioning manual flows with AI-only ones.

  • Editorial control. AI output was always editable by hand. The user could regenerate, undo, or take over directly.

  • Disclaimers everywhere AI shipped. Every generation surfaced a disclaimer reminding users that AI output could be wrong and to verify before committing.

  • Admin-level guardrails. Admin users could provide a base context that every AI prompt would consider, narrowing the model's output to the customer's data conventions or the organization's preferred patterns (see the sketch after this list).

  • Prompt suggestions for new users. Each AI feature surfaced about six example prompts the user could click to populate the input. This helped users learn how to write effective prompts without facing a blank field.

  • Loading states. A consistent loading animation in the prompt field indicated that generation was in progress, which mattered because AI responses could take a few seconds.

These principles were never written up as formal documentation, but they formed a shared standard our team applied consistently as new tickets landed.
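
As a sketch of how the admin guardrail and the prompt suggestions fit together mechanically (all names hypothetical): the base context travels with every request, and suggestions simply pre-fill the user's prompt.

    // Hypothetical sketch of the admin guardrail. The admin sets baseContext
    // once; it is considered alongside every prompt. Names are illustrative.

    interface AiRequest {
      baseContext: string; // admin-provided data conventions and preferences
      userPrompt: string;  // typed by the user, or clicked from a suggestion
    }

    // Roughly six clickable starters per feature greeted the empty state;
    // these two are invented examples, not Birst's actual suggestions.
    const exampleSuggestions: string[] = [
      "Total revenue by region for the last four quarters",
      "Top 10 products by units sold",
    ];

    function buildRequest(baseContext: string, userPrompt: string): AiRequest {
      // The base context narrows output to the org's conventions without the
      // user having to restate them in every prompt.
      return { baseContext, userPrompt };
    }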


Outcome

Several of the AI features I designed have shipped. Others, including the Generate chart from a prompt feature and the saved-expression library work, were designed and PM-approved at the time of my departure and are now in development. Birst's release cadence runs longer than is typical at most software companies, and the GenAI pipeline is substantial.

The most durable outcome of this work is the prompt-and-edit interaction pattern itself. The segmented-controller pattern for combining manual input with AI generation became a Birst design team standard, applied across multiple features and incorporated into the team's internal pattern library for ongoing use.


Reflection

What I learned from designing AI features at scale is that AI works best as a selectively deployed helper, not a blanket layer over the entire product. The strongest AI features in Birst are the ones that take a specific tedious task (writing BQL, captioning a saved expression, building a chart from scratch) and offer AI as a faster alternative to a manual flow that still works perfectly well on its own.

I also learned to prioritize integrity in interaction patterns over speed of delivery. The prompt-and-edit approach took longer to design than a simpler "click Generate, accept result" flow would have, but it preserved user control in a way that simpler flows didn't. As AI capabilities mature and customer expectations evolve, that user-control principle is going to age well.

The thing I'd do differently is push earlier for conversational AI in the chart builder specifically. The current generate-chart feature accepts a single prompt and produces a single result. A conversational version, where users could ask for tweaks ("make the bars green," "filter to last quarter only," "switch this to a line chart") and watch the chart update incrementally, would be more useful. I designed it as a single-prompt feature in the first pass, but I'd build the conversational capability earlier next time. That's the version of this work I'd most want to revisit.
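
A rough sketch of what that conversational loop might look like, reusing the hypothetical ChartSpec shape from the earlier sketch; refineChartSpec() is an assumption, not anything I shipped.

    // Hypothetical sketch of the conversational chart builder I'd push for
    // earlier next time. refineChartSpec() is an illustrative assumption.

    interface ChartSpec {
      type: "bar" | "line" | "pie";
      measures: string[];
      dimensions: string[];
      style?: Record<string, string>; // e.g. { barColor: "green" }
    }

    // Stand-in for a model call that edits the current spec.
    async function refineChartSpec(current: ChartSpec, tweak: string): Promise<ChartSpec> {
      return { ...current }; // a real version would apply the tweak
    }

    async function applyTweaks(initial: ChartSpec, tweaks: string[]): Promise<ChartSpec> {
      // Each tweak refines the current spec instead of regenerating from
      // scratch, so the user watches the chart update incrementally.
      let spec = initial;
      for (const tweak of tweaks) {
        spec = await refineChartSpec(spec, tweak);
      }
      return spec;
    }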