Big Room is an AI-driven live video production platform that uses machine learning to automatically frame, track, and cut between shots from static cameras, largely for live performance events. The platform is deployed at major venues such as The Apollo in Harlem and The Wiltern in Los Angeles, but before summer 2021 it was set up only for internal remote operation by Big Room's developers.
I was brought on to design a full overhaul of this back-end UX and UI to prepare it for independent client use. My process included:
Analyzing the current software and its information architecture
Researching existing and emerging video production and editing user experience models
Understanding the preferences and needs of its new users
Identifying how to translate technically complex functionality into effective user-facing interactions
UX for a new type of AI-driven live video production software
3 months, part-time
UX/UI Design, Prototyping, Research
Big Room is a new type of AI-driven live video production software. Big Room's platform is used by famous venues to film and produce live event videos, but the company needed to overhaul its entire interface, moving from a barebones back-end version to one that end clients could use independently. I was brought in to design an entirely new UX and UI, working with clients to develop the concepts and with developers to implement them.
The result was a series of mockups, prototypes, and assets that became the face of the newest version of the software, which Big Room's major clients have called a major and exciting improvement.
The UX and UI that existed when I joined the team hadn't been created for anyone outside the group that developed the software, and was built on the assumption that no client would ever see the interface. The interactions were cluttered, inaccessible, and overly technical.
The controls were rife with nested menus, hidden settings dialogs, and interaction languages (buttons, icons) that would be indecipherable to anyone not in the know. In addition, there were a host of features that were no longer used or required no active user interaction.
I compiled an information architecture spreadsheet cataloging every feature and control in the software, vetted with the team what needed to be seen by whom, and established the hierarchy of needs that would guide the disclosure of these features in future versions.
Setup Wizard and New Features
After analyzing both existing live video production software and emerging video editing/processing UX, I wireframed a series of proposed screens: not only a reimagined monitor screen, but also drastically more visual interactions for setup (including a guided wizard) and camera preference access.
In both cases, my primary goals were moving nested menu actions into more readable and accessible interactions, and progressively disclosing information that might not be immediately obvious to users with less technical familiarity than Big Room's own developers.
Through feedback and iteration on this stage, I developed higher-fidelity clickable prototypes in Adobe XD that established a new visual direction and demonstrated a number of new interactions, including:
Camera mode dropdowns
Modular persistent control panels
Live camera setup edit, mirroring the setup
Login and setup creation
Mirrored Camera Setting Setup and Live Edit
Big Room is implementing the results of these prototypes and my work with their developers, and the current version includes many of the features, layouts, visual assets, and interactions introduced through my work. This version is being used by clients including the largest live event company in the country, at some of its most notable venues.
The end users for this version are primarily live video production professionals, so the user experience reflects layouts and interactions familiar to their workflows and prioritizes their favored features, while still bringing a new approach for a new kind of tool entering the space.
One of the more complicated tasks completed toward the end of my engagement was the creation of a toggleable Director Mode that enables a host of new features. In this mode, users get fine-grained control over camera operations and over the inputs to the remaining AI control, which in essence necessitated an on-demand switch to an entirely different layout and interaction hierarchy.