Build To Solve: 2025 Power BI Competition

An image of the three Support dashboards created during this competition.

Overview

Problem: The Kahua Support team lacked clear visibility into key aspects of their process, including the volume of tickets per application category and the number of Service Level Agreement (SLA) response and resolution violations over time.

Solution: Drawing on interviews with four members of support, my team and I built a comprehensive, multi-page dashboard, each page centered on a research question. We investigated the relationships between resolution time and ticket type, between category and ticket volume, and the influx of tickets over the past three years. This allows the Support team to measure the effectiveness of their current approach and make a case for improvements across the company (such as enhanced documentation and support materials).

Role: UX Research, UI Design, Data Analysis
Team: 3; Jennifer Tran, Sofia Torres Moreta, and myself
Tools: Power BI, FigJam, Microsoft Teams
Duration: July 9th - July 24th
Note: The data displayed in these dashboards is synthetic and does not reflect real Kahua Support statistics.

Project

We divided our project into four phases:

  1. Preparation & Planning
  2. Research
  3. ETL (Extract, Transform, Load) & Mockup
  4. Development

These are explained in detail below.

Phase 1: Preparation & Planning

Throughout this project we used the ETL (extract, transform, load) process to pull and display our data. In addition, we conducted loosely structured interviews with key stakeholders. We wanted our stakeholders not only to comprehend and use our dashboards easily, but also to gain valuable information from them. These interviews therefore served as the foundation of our project and informed every decision we made about the design and development of our dashboards.

To keep our notes and questions organized, our tasks on schedule, and our eyes on the goal, we set up a FigJam board. It was an invaluable tool in our collaboration.

Phase 2: Research

The research phase was by far the largest in this effort, making up more than half of our time working on the project.

We began by conducting introductory research on our own. Determined to make use of every resource at our disposal, we reviewed the links and documents provided in the competition kickoff slides. This gave us a good overview of the support and ticketing process, as well as the functions of the existing analytics, and it allowed us to investigate during our interviews whether those analytics were addressing present needs.

We also studied the dashboard examples from these slides. These gave us insight into ways in which we could present the data.

As we looked over these documents and images, we set up two meetings with members of support. The first was with Emerson Wade.

In preparation for our meeting with Emerson, we brainstormed questions to ask him. We each drafted three to four questions, informed by the documents we had read. Then, together, we ranked these by importance and breadth.

In the end, we compiled eleven main questions, not counting follow-ups.

Here are the top four:

  1. How does your team use the data currently available to you? How does it benefit or not benefit your team?
  2. What tools, if any, do you currently use to track tickets (resolution times, response rates, status, etc.)? Are these tools effective or ineffective for your current workflow?
  3. How do you currently determine which tickets or issues need attention first? Can you walk me through that process?
  4. What does a successful resolution look like for your team, and how do you measure that today?

Emerson answered every one of these questions in detail and even offered us a glimpse into the existing analytics he considered useful. To uncover key patterns, we created an affinity map of our meeting notes, and multiple patterns indeed emerged from the information he had given us.

The insights we gained from this meeting were instrumental in the creation of our research questions. They also guided the process of brainstorming questions for our next interview — our first meeting with Morgan Chen, Skylar Brooks, and Riley Nguyen.

We came into this next meeting armed with the information Emerson had generously provided, as well as the competition judges' feedback on our research questions. Thus, we homed in on the topics we needed more context on in order to answer our research questions.

Furthermore, when creating and ranking our questions, we accounted for the fact that Morgan, Skylar, and Riley hold different roles from Emerson's and would therefore offer different knowledge. As such, we tried to investigate the differences in the problems they face.

While some points overlapped between the two meetings, others differed. We created an affinity map of this meeting’s notes. Then, we compared these new patterns to those we discovered from our meeting with Emerson. Overall trends began to emerge. Namely, we discovered that the Support team was interested in delving into...

With this knowledge, we revisited our research questions and the feedback provided on them. We decided we could pare down the original six to the following four:

  1. How can monitoring SLA violations contribute to improved performance within the department?
  2. In what ways can analyzing volume variations by status, category, subcategory, and user type (Owner, General Contractor, Subcontractor) enhance departmental preparation and documentation practices?
  3. How has ticket resolution time changed over time?
  4. What is the average number of touchpoints per ticket, and what does this indicate about the efficiency of the resolution process?

We selected these due to their focus on the key areas of interest for the Support team. 
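
To give a sense of how these questions translate into measurable metrics, here is a minimal sketch (in Python, outside of Power BI) of how SLA violations and average touchpoints per ticket might be computed from a ticket export. The file name and column names (created_at, first_response_at, resolved_at, touchpoints, sla_response_hours, sla_resolution_hours, category) are hypothetical placeholders, not the actual fields we modeled in Power BI.

  import pandas as pd

  # Hypothetical ticket export; file and column names are placeholders, not the real schema.
  tickets = pd.read_csv(
      "support_tickets.csv",
      parse_dates=["created_at", "first_response_at", "resolved_at"],
  )

  # Hours from creation to first response and to resolution.
  response_hours = (tickets["first_response_at"] - tickets["created_at"]).dt.total_seconds() / 3600
  resolution_hours = (tickets["resolved_at"] - tickets["created_at"]).dt.total_seconds() / 3600

  # Research question 1: SLA violations per month, for both response and resolution targets.
  tickets["response_violation"] = response_hours > tickets["sla_response_hours"]
  tickets["resolution_violation"] = resolution_hours > tickets["sla_resolution_hours"]
  violations_by_month = tickets.groupby(tickets["created_at"].dt.to_period("M"))[
      ["response_violation", "resolution_violation"]
  ].sum()

  # Research question 4: average touchpoints per ticket, split by category.
  avg_touchpoints = tickets.groupby("category")["touchpoints"].mean()

  print(violations_by_month)
  print(avg_touchpoints)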

By this point we were aware that the needs we sought to address could not be met with a single dashboard page; with too much content, it would quickly become cluttered. So, with the intent of creating the maximum of three pages, we began the next phase: extracting, transforming, and analyzing the data, and mocking up our dashboards.

Phase 3: Extract, Transform, Analyze (& Mockup)

Jennifer began transforming the data, ensuring its cleanliness and quality. This informed how we would present it in order to avoid skew and account for outliers. During this time, we decided on our method of data extraction.

Initially, we planned to extract the data through the API, as this would allow the dashboards to stay current. Dynamic, up-to-date views of the data were far preferable to static CSV files. However, when we later extracted and viewed the data in both forms, we discovered that the API data was of poor quality: several essential fields were missing from it. Thus, we reluctantly decided to use the CSV file instead.
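
To illustrate the kind of check that drove this decision, the sketch below compares the columns available in each extract against the fields we needed; the file names and the list of essential fields are assumptions for the example rather than the actual Kahua schema.

  import pandas as pd

  # Placeholder file names for the two extracts we compared.
  api_df = pd.read_json("tickets_api_export.json")
  csv_df = pd.read_csv("tickets_export.csv")

  # Fields we considered essential for answering our research questions (illustrative).
  essential_fields = {"category", "subcategory", "status", "user_type", "created_at", "resolved_at"}

  # Any essential field absent from an extract disqualifies it as our primary source.
  print("Missing from API extract:", essential_fields - set(api_df.columns))
  print("Missing from CSV extract:", essential_fields - set(csv_df.columns))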

We met to look over the available data together and began deciding how best to measure and present it.

As this was solidified, the three of us began sketching out our mockups. I assigned each dashboard the research question(s) it should answer. With these questions at the forefront of our minds, we compared our mockups and selected visuals from each that would present the data in a comprehensive and straightforward manner.

We then created a new set of mockups, solidifying the placement of our visuals. We accounted for space on the page, volume of data, usability, and visual appeal when choosing our graphs and tables. We aimed to present the data in a way that directly answered our research questions, thus meeting the Support team’s needs to the best of our ability.

Once we had agreed on the layout of our dashboards, we scheduled a follow-up meeting with Morgan, Skylar, and Riley to get their feedback. They provided ample insight into how managers might use the different visuals we presented. With this in mind, we changed some of the visuals we used and how we presented the data. Then, we were ready to start developing.

Phase 4: Development

The development phase was our last and fastest. Though constructing dashboards takes time and finesse, they cannot serve their purpose without concrete research behind them. To stay consistent and grounded, we continued asking the Support team for clarification on the support process. As we learned more, we updated our design to ensure we answered our research questions and portrayed the data in unique, helpful, and readable ways.

Finally, we met with our advisor for final feedback before submitting our dashboards. Taking her thoughts into consideration, we changed a few visuals and added some filters for further clarity. 

Conclusion

The competition judges applauded the visual design and usability of our dashboards. However, it was the research foundation that impressed them most, and it was a primary factor in our first-place award.

Interviews with stakeholders kept our vision aligned with our users' goals and points of view throughout the process. Feedback from our advisor offered a view into the conventions of making data presentable and comprehensible. Thus, we could not only start strong but also continue to improve our product along the way.

Takeaways