BAO Analytics Platform
My role
As a Product Designer at BAO Systems, I was responsible for designing the application from scratch: gathering requirements, creating prototypes and visuals, communicating with managers and developers, and testing the results.
Product Designer (me)
Product Manager
Business Development Manager
Backend Engineer x 2
Frontend Engineer
Data Analyst
October 2019 - Ongoing
March 2020: MVP version was released and the first client added

The challenge

Develop a solution that allows ingesting and aligning multiple datasets from various sources, so the data becomes accessible to popular machine learning and business intelligence tools.
BAO Systems' previous product development was usually driven by client demand, but this project was different. At the beginning of the process we didn't have a paying client, so we had no real users or real user needs, just an assumption: an idea to test, to see whether our clients would be interested in the potential product.

All the insights about BAO Analytics Platform features came from the business development and analytics teams. The process began by gathering information from our in-house analytics, development, and management teams. We tried to come up with the simplest concept by mapping the user problems and possible solutions for the MVP version.
Our user is a data analyst who works at or for non-profit organizations, mostly on healthcare programs around the world.

They have access to data stored in different formats, provided to them by their organization or drawn from publicly accessible resources. They use the data to create reports and dashboards in BI tools, to check for data anomalies, or to generate predictions.

Most likely they are already BAO Systems clients and use its DHIS2 hosting and management tools.
User problems
01. Ingesting data from multiple sources
Users have data in different storage formats, like CSV files, MySQL databases, and DHIS2 instances, that needs to be analyzed. There are some built-in or external connectors for BI tools, like the DHIS2-to-Tableau connector, but they don't cover all the data formats and require dealing with lots of different interfaces.

02. Organizing and selecting relevant data
Users deal with many data sources of different sizes and content. They don't always know whether the data is relevant to their report, or they know the data but need only part of it. Large datasets result in time-consuming data loads into the BI tool.
01. Ingesting data from multiple sources
For the MVP version of the application, we decided on a semi-automatic approach, meaning that the initial part of setting up a client's profile was done manually. So the main focus was on creating a flow for connecting and listing datasets. I defined the main flow stages and mapped out a high-level overview of the user interface.
After defining the main flow, it was time to work on the details. The main UI challenge was designing a smooth flow for filling in the dataset connection form, which had many fields determined by the preselected dataset type. I explored complex user flows in products like Shopify, Google Tag Manager, Google Analytics, and the DOMO platform, curious about how they navigate and group information. This resulted in discussing several options and choosing the most suitable solution for us.
02. Organizing and selecting relevant data
Organizing and selecting relevant data leads to a set of features. Almost none of these features offers the best possible user experience. These are the compromises we made to cut development time and cost, so the product could bring value to potential clients even without the best possible solutions.

The features were designed to cover basic needs, with the intent of adding better functionality once the product has more clients.

One of these features is the data structure and data preview for a connected dataset. The next step for this feature would be displaying aggregate data in the preview, which would show right away whether there is relevant data, say, for a particular period or on a particular subject.
Another is SQL Transformations and Views, which allow transforming data within a single dataset or by combining several connected datasets. SQL is not the most user-friendly way for data analysts to work with data, but cutting development time doesn't have to mean cutting small features.
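As an illustration of the kind of SQL view a user might define, here is a minimal sketch. The dataset and column names (facility_visits, facility_info) are invented for illustration, and an in-memory SQLite database stands in for the actual Redshift warehouse:

```python
# Hypothetical example: a SQL view combining two connected datasets so a
# BI tool can read the joined, aggregated result as a single table.
# Table names and data are invented; SQLite stands in for Redshift here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facility_visits (facility_id INTEGER, period TEXT, visits INTEGER);
CREATE TABLE facility_info (facility_id INTEGER, region TEXT);
INSERT INTO facility_visits VALUES (1, '2020-Q1', 120), (2, '2020-Q1', 80);
INSERT INTO facility_info VALUES (1, 'North'), (2, 'South');

-- A view joining two connected datasets, aggregated by region and period
CREATE VIEW visits_by_region AS
SELECT i.region, v.period, SUM(v.visits) AS total_visits
FROM facility_visits v
JOIN facility_info i ON v.facility_id = i.facility_id
GROUP BY i.region, v.period;
""")

rows = conn.execute(
    "SELECT region, period, total_visits FROM visits_by_region ORDER BY region"
).fetchall()
print(rows)  # [('North', '2020-Q1', 120), ('South', '2020-Q1', 80)]
```

A view like this lets the analyst do the join and aggregation once, inside the platform, instead of repeating it in every BI dashboard.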
Schemas allow grouping datasets so they are displayed properly in BI tools; the structure was borrowed from how Redshift and BI tools organize displayed data. Tags allow grouping the data so it is easier to search for relevant data.
We created an application that connects multiple data sources to a Redshift warehouse. Using a single output connection (the Redshift warehouse), and allowing data preparation and manipulation inside the application, made it easier to ingest the data into BI tools like Power BI and Tableau.
The MVP release attracted interest from our clients
After releasing the basic functionality, we gained our first paying client. We decided to continue working on and growing the product's functionality, and by October 2020 we had a second paying client.
Meanwhile, the design system started with the DHIS2-to-Tableau Connector application grew immensely. In August 2020, BAO Systems went through a rebranding, which led to the rebranding of its products, and thanks to the design system it was quite an easy process.
Establishing design and product process
Releasing the MVP version, and several releases that followed, showed the potential of the product as well as the lack of processes in design and development. Having a second client motivated proper discussions and concrete steps toward establishing those processes.