Digital Product: Control and automation for industrial operations.

Background:
In 2015, Kelvin was in the midst of a pivot. We had developed a motion-classification system for consumer gaming and were shifting our focus to industrial operations, a severely antiquated and underserved sector. When I started as Head of Design, we immediately jumped into ethnographic research to better understand the pain points of industries like oil and gas. The key pain points were workforce management and remote monitoring and maintenance.


Role: Head of Design
My main responsibility was to direct UX research and product design. As the first designer, I helped the company develop the value proposition, conduct field research to better understand the target domain, and design the MVP cross-platform app. We grew the design team from 1 to 7, including content strategy and research.

Our family of products included customer-facing applications as well as internal prototypes and developer tools.

Initiatives
Research strategy
UX, UI, IxD for software and hardware
Team development
Design System
Knowledge Base

Team:
Content Strategy, Product Design

Timeline: 3.5 years

Audience:

  1. Control Operations Engineer
This person is a subject-matter expert, capable of single-handedly connecting hardware, setting up systems, creating analysis values, mapping registers, and building logistical models. They want to build basic rules-based control models with nested and looping logic. They care about the ability to self-serve and the integrity of the entire control solution.

  2. Data Science Engineer
This person’s main goal is to develop, deploy, and optimize statistical models that control and automate physical systems. They may work in the command line and in high-level programming languages, or in a GUI, to transform data. They are mathematicians and statisticians. They do not have deep domain experience, although they need to learn the specifics of operations and the desired goals in order to build reinforcement learning models. Their main responsibility is to transform data in order to inform the optimization of models and therefore assets. They care about automating repetitive tasks and having room to experiment.

  3. Operations Manager
    This person manages the connection, configuration, and maintenance of hardware systems for large-scale automation and control solutions. They are focused on monitoring systems and making sure systems remain online. They have a clear understanding of the design and capabilities of all hardware connected to a machine and can troubleshoot failures which may affect data integrity. They care about hardware integrity, predicting failures and the cost efficiency of their team. 

  4. Field Technician
This person is responsible for site visits, installation, and the physical maintenance of hardware. They understand the physical layout of a site, its capabilities and limitations, and manage the integrity of connectivity and componentry. They care about time efficiency and setting up hardware for optimal maintenance.

  5. Technical Domain Analyst
    This person tunes and diagnoses control model performance and reports opportunities for improvement to the data science team. They can create basic if-then models from scratch when given the right tools. They care about model completeness and flexibility, and about how models can account for different machine personalities.

Notable Interface components:

  • Customizable tables

  • Line charts and bar charts

  • Multi-select and single-select menus

  • Sharing

  • Filter and sort

  • Threads, commenting and mentions

  • Attachments and links

  • Panels

  • Date picker

  • Cards and lists

  • Activity feeds

  • Search

  • Maps

  • Labels and tags

  • Drawers

  • Alerts

  • Surveys

  • Forms

Features:

  • View real-time, detailed machine performance from anywhere.

  • Run reports, save and share them with your team.

  • Assign work tickets to your team, reference communication history and track progress.

  • Pin urgent and informative tickets to a unit for visibility.

  • Activate and deactivate models, and adjust set points.

  • Receive notifications when units need immediate attention.

  • Reference and adjust settings for remote terminal units.

  • Reference machine anatomy and location.

  • Manage users, permissions and teams.

  • Manage preferences.

  • Available on iOS, Android and Web.

  • Dark Mode.


Initially, we solved for capturing accurate time-series data and visualizing machine state and performance. Tables and charts were the industry standard for data analysis, with summaries of important metrics.

Table view with filtered unit type.

Through consistent user testing, we prioritized expanded charting capabilities for different use cases and made our features more customizable, personalized, and shareable.

Line charts for time series data with chart editing and multi-chart features.

Bar charts for comparing states over time.

One of the key insights from initial research was workforce management, so we architected a ticketing system designed to let users track work and deploy teams. It enabled the different user types to communicate around issues and record maintenance history for a more comprehensive machine record.

Inbox welcome screen

Ticket inbox with details and discussion thread.

Unit log with activity drawer open.

Unit log with ticket open in activity drawer.

Data labeling became one of the most sought-after ticket types for data scientists. Labeling the data gave them a tool to better identify patterns and a reason to engage with a GUI.

Data labeling feature with confidence input and commenting.

Because we knew our audience lived in email, it was important to create an intelligent notification system they could engage with both in and out of the app. Email and push notifications launched first, with SMS planned as a follow-up.

Notifications menu with filtering and read/unread indicators.

Preferences

The real game changer came when we began deploying algorithms to control machines remotely. As our clients' internal understanding of artificial intelligence and data science matured, they looked to Kelvin to guide their teams. Our partners needed machine-learning experts to build statistical models, as well as tools to monitor, adjust, and deploy those models onto their own control systems. To stay ahead and compete in the market, we developed proprietary hardware with edge computing, resulting in a complete hardware-and-software offering with seamless connectivity, monitoring, and more accurate data streams.

Model management on machine.

With such a complex, systemic set of features, managing users, permissions, and settings became a very real initiative. We developed both customer-facing controls and internal-only instance-management tools.

Team management with autofill feature.

Kelvin is still under development. The core capabilities of the tools remain, with a focus on data science and drilling solutions for oil and gas.

Key Learnings

  • Clients have different operational, data and workforce structures, making it difficult to create a core solution without customizations.

  • Clients are able, and often encouraged, to build internal solutions, making competition steep, especially given their domain expertise.

  • Without data integrity, the audience will devalue and distrust the tools.

  • A well designed interface is a differentiator in the space. Few UI and UX best practices are being applied to competitive products.

  • Edge computing is necessary for data integrity, speed and accuracy of model deployment.

  • A ticketing system cannot be successful without a cross-channel notification system and advanced preferences.

  • Instance management tools should have been prioritized earlier with an emphasis on a client-ready solution so partners can self-manage.

  • Data scientists don't often use GUIs; they are accustomed to working in code or in highly customizable tools built for their field. Convincing them to work in a tool that integrates across the business requires offering features they do not already have.

  • We started mobile-first, but that was shortsighted: the majority of the audience was habitually on desktop. We learned in field research that mobile was desired, but to fit alongside their other tools, we needed to be web-first.