Gregor Brunner

Optimizing User Experience: Cleaning and Improving Power BI Reports and Models

Updated: Apr 10


Unlocking the Full Potential: Optimizing Your Power BI Reports and Models for Superior User Experience

Until now, optimizing a Power BI semantic model has been an immense task: it takes dozens of hours and requires both advanced technical know-how and knowledge of the business.

Just as BIBB helps with generating themes for Power BI, other developers have built external tools to make our lives easier when dealing with data models.

Today, I would like to show you how, with the help of one of these external tools, we can improve our experience in building and maintaining Power BI reports and models.

As we dive deeper into optimizing user experience in Power BI reports and models, 'Measure Killer' stands out in the community. Its version 2 is equipped with features that not only clean up our models and reports but also significantly enhance user interaction with data.

Size Matters: The Impact of Data Model Size on UX and Development in Power BI

Optimizing the size of your Power BI semantic models can benefit both user and developer experiences. Here’s why smaller, cleaner and well-structured models make a difference:

Boosted Performance. Smaller data models translate to quicker report loading times without frustrating waits, thereby enhancing the overall user experience.

Streamlined Interaction. Optimized models provide a smoother, more intuitive experience for users and developers. Building new reports on top of these models is easier and faster. Other developers also benefit because well-organized models are easier to navigate.

Easier Maintenance and Development. For developers, streamlined models mean less complexity in maintenance and bug fixes, allowing for quicker changes and enhancements. This benefits the development cycle, ensuring the timely delivery of new features and insights.

Introducing the Measure Killer 🔪

Measure Killer is an external tool for Microsoft Power BI that analyzes your reports and semantic models for unused measures and columns. It can also run in the Power BI Service against multiple shared models simultaneously and perform a Tenant Analysis. Like other external tools, it has to be downloaded and installed on your computer.

We have designed Measure Killer so that when it runs on a data model, we can see how much of our data (by size) is used anywhere in a report (visuals, conditional formatting, filters, etc.), in the model (relationships, measures, calculated columns, etc.), or even in Power Query (joins, references, aggregations, etc.).
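To make the Power Query part concrete, here is a minimal, hypothetical query (the server, table, and column names are made up for illustration) in which a column is consumed only as a join key inside Power Query. Such a column still counts as used, even though it never shows up in a visual, measure, or relationship:

    let
        // hypothetical source and table names, for illustration only
        Source = Sql.Database("myserver", "movies"),
        Titles = Source{[Schema = "dbo", Item = "Titles"]}[Data],
        Ratings = Source{[Schema = "dbo", Item = "Ratings"]}[Data],
        // [titleId] is only ever used here, as a join key
        Joined = Table.NestedJoin(Titles, {"titleId"}, Ratings, {"titleId"}, "RatingRows", JoinKind.LeftOuter),
        // keep only the rating column we actually report on
        Result = Table.ExpandTableColumn(Joined, "RatingRows", {"averageRating"})
    in
        Result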

The lower-left section gives us a quick summary of how much (as a percentage of the total model size) is used anywhere and, maybe more interestingly, how much is completely unused and can therefore be removed.

It can even run an analysis across multiple reports that are live-connected to one dataset (a shared/golden dataset).

In the main table, we can see which columns, measures, and calculated columns are used and where exactly they are used.

Where my columns and measures are used

Now, if we do not really care about this and only want to quickly reduce the size of our model (as the title of this article suggests), we can go to the "killing" tabs.

We can automatically remove unused measures and calculated columns with one click.

It is good to keep a backup, but there is also a built-in restore feature for everything we killed before.

But what about our "normal" data columns (the ones coming in via Power Query)?

Well, there is no automatic feature for these; instead, we get new M code to paste into the Advanced Editor in Power Query.

We only need to copy the new M code and replace the old one in Power Query.

Once we hit Done, we can see that one step was added to the transformations in Power Query, in which the tool removes our unused columns.
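The exact code the tool generates may differ, but conceptually the added step is simply a Table.RemoveColumns call appended to the end of the query. A minimal sketch, with made-up file and column names:

    let
        Source = Csv.Document(File.Contents("C:\data\titles.csv"), [Delimiter = ",", Encoding = 65001]),
        Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
        // the appended step: drop the columns that were flagged as unused
        #"Removed unused columns" = Table.RemoveColumns(Promoted, {"originalTitle", "isAdult", "endYear"})
    in
        #"Removed unused columns"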

There is some manual work involved here because I need to do this copy-paste job for every table with unused columns.

The good thing, however, is that whenever I want to add those columns back, I can either remove this new step or click the little icon on its right, where I can select which columns to keep one by one.
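In M terms, bringing a column back is just a matter of deleting its name from the list in that generated step. Continuing the hypothetical sketch from above, restoring [originalTitle] would look like this:

    let
        Source = Csv.Document(File.Contents("C:\data\titles.csv"), [Delimiter = ",", Encoding = 65001]),
        Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
        // [originalTitle] has been taken out of the removal list, so it is loaded again
        #"Removed unused columns" = Table.RemoveColumns(Promoted, {"isAdult", "endYear"})
    in
        #"Removed unused columns"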

When I am done, I can hit close and apply to trigger a data refresh.

Once this is done, I can "run" Measure Killer again to get an updated output of my model.

As the summary shows, we use 100% of our model since we successfully removed all unused artifacts. This is where we should end up.

There is room for improvement even here, though. Let me give you two examples:

  1. Just because something is used somewhere does not necessarily mean it is actually needed. Think about a table where all columns are unused except one, and that column is only used in a relationship. What is the purpose of this relationship if the rest of the table is unused? It is possible that I need this table for the filter flow in my model (it would then be a bridge table). In most cases, though, the relationship is not needed and I might be able to remove it entirely.

  2. Some columns or measures might only be used on a hidden page kept for testing. The tool can show me more in the "What if analysis" (a button at the top of the main window).

When I open this, I can see which columns and measures are only used in a single report (if I analyze multiple reports connected to a model) or only on a certain page of a report. Let us take a look:

In our case, the hidden page of the "best movies" report shows no Potential savings, meaning that removing it would not shrink the model. But I can see the potential savings for all the other pages of the reports I analysed. If I ever wanted to get rid of page 1 of the "best movies" report, I already know that I would be able to cut the size of my model by 42.6 MB.

The 42.6 MB is the size of our [originalTitle] column since that is only used on that page.

Summarizing what we have learned:

Tools like Measure Killer help us pinpoint and eliminate superfluous elements within our models, thus enhancing performance and clarity. This rigorous cleaning process not only trims down data models for optimal operation but also reveals the essential structure of our data, opening avenues for innovative data structuring and management strategies.

Precision Optimization: Streamlines models by removing unused components, boosting performance.

Data Hygiene: Regular cleanup maintains model efficiency, akin to preventative health for data.

Insightful Discoveries: Unearths potential for innovation in data modelling and usage.

Strategic Management: Informs data governance and infrastructure planning decisions.

Adopting a systematic approach to data model optimization offers dual benefits: it ensures our models run at peak efficiency and fosters a culture of continuous improvement.
