Power BI is powerful, but it's not magic. I've seen reports that take 30 seconds to load, visuals that time out, and users who abandon dashboards because "it's too slow." In almost every case, the root cause isn't Power BI itself: it's how the data model is designed, how DAX is written, or how visuals are configured.
This guide is written from seven years of real-world Power BI consulting. I've worked with manufacturers dealing with millions of transaction rows, retailers with complex product hierarchies, and services companies struggling with month-end reporting. The patterns and fixes I'll share are the ones that consistently deliver results in production.
By the end of this guide, you'll have a systematic approach to Power BI performance optimisation. You'll understand how to diagnose bottlenecks, fix slow DAX, design efficient data models, optimise Power Query, and make informed decisions about Import vs DirectQuery.
- Why Power BI Reports Become Slow
- Quick Wins You Can Implement in 30 Minutes
- Data Model Design for Performance
- DAX Performance Optimisation
- Power Query and Refresh Performance
- Visual Design and Report Performance
- Import vs DirectQuery: A Practical Framework
- Performance Diagnostic Checklist
- 10 Common Performance Mistakes
- Frequently Asked Questions
1. Why Power BI Reports Become Slow
Before fixing performance problems, you need to understand what's actually happening when a report loads or a visual renders. Power BI performance issues typically fall into three categories:
Data transfer
Power BI moves data from your source system into the data model (Import mode) or queries the source in real time (DirectQuery). Large datasets, unnecessary columns, and transformations that break query folding can all slow this down. In Import mode, the cost shows up as long refresh times. In DirectQuery, every visual interaction triggers a database query.
Data model evaluation
When you add a measure or filter a visual, Power BI's engine evaluates the DAX against your data model. A poorly designed model with unnecessary relationships, high-cardinality columns, or bidirectional filters forces the engine to do more work. Complex measures with iterators or nested calculations compound this.
Visual rendering
Even with efficient data and DAX, some visuals are inherently expensive. Scatter plots with 50,000 data points, slicers with 10,000 items, or cards with complex measures all take time to render. The more visuals on a page, the more rendering work Power BI must do.
Based on my experience, roughly 80% of performance issues trace back to a small set of causes: bad data model design (especially flat tables and unnecessary relationships), inefficient DAX (particularly iterators over large tables), and visuals that return too much data. Fix these three areas first before optimising anything else.
2. Quick Wins You Can Implement in 30 Minutes
Not every performance fix requires rebuilding your data model. Here are high-impact, low-effort changes you can make immediately:
Remove unused columns and tables
Every column in your model consumes memory and processing power. I've audited models with 200 columns where users only needed 30. In Power BI Desktop, go to Model view and remove any column that isn't used in measures, visuals, or relationships. This is especially important for high-cardinality columns like GUIDs, timestamps, or long text fields.
Disable auto-date-table
Power BI's Auto Date/Time feature creates hidden date tables for every date column. These waste memory and cause confusion. Disable it under File > Options and settings > Options > Current File > Data Load > Time Intelligence. Build your own Date Table instead.
Set Sort By Column properly
Month names sorting alphabetically (April, August, December...) is a classic sign that Sort By Column isn't set. This doesn't directly affect performance, but it causes user confusion and manual workarounds. Always set Sort By Column for text fields that have a natural order.
Use slicers sparingly
Every slicer adds a filter context that Power BI must evaluate. I've seen report pages with 10+ slicers, each with hundreds or thousands of items. Consolidate slicers where possible, use hierarchies to drill down instead of separate slicers, and avoid slicers on high-cardinality fields like Customer ID or Transaction Number.
Turn off background refresh during development
If your dataset is set to auto-refresh in the Service, background refreshes during development can slow your work. Temporarily disable scheduled refreshes while making significant changes. Re-enable once you're ready to deploy.
Check for "Show items with no data"
Visuals have a "Show items with no data" option for category fields (right-click the field in the visual's field well). This forces Power BI to evaluate all possible category combinations, which can dramatically slow rendering. Only enable it if you explicitly need to see empty categories.
Reduce visual count per page
I've seen report pages with 30+ visuals. Each visual triggers DAX evaluation. More importantly, users struggle to process that much information. Aim for 8-12 high-impact visuals per page. Use drillthrough and tooltips to provide detail without cluttering the main view.
Quick wins help, but they're band-aids. If your data model is fundamentally flawed (flat files, circular relationships, or bi-directional filters everywhere), quick wins won't fix underlying performance issues. Use these as immediate improvements while planning deeper structural changes.
3. Data Model Design for Performance
Data model design is the single biggest factor in Power BI performance. A well-designed star schema will perform well even with millions of rows. A poorly-designed model will struggle with a few hundred thousand rows.
Star schema vs flat tables
A star schema separates data into fact tables (transactions, metrics, events) and dimension tables (descriptive attributes like products, customers, dates). Facts are the "many" side of relationships; dimensions are the "one" side. Dimension tables are intentionally denormalised: they carry redundant descriptive attributes so the engine avoids extra joins during query evaluation.
Flat tables, massive wide tables that combine facts and dimensions, are appealing because they're easy to create. Power BI will even auto-build them from Excel files. But flat tables are performance killers: they duplicate data, inflate file sizes, and make the model harder to maintain. I've migrated flat table models to star schemas and seen 10x performance improvements.
Cardinality and relationship direction
Every relationship in Power BI has a cardinality (One-to-many, Many-to-many, or One-to-one) and a cross-filter direction (Single, Both, or Automatic). For performance:
- Use One-to-many wherever possible. This is the most efficient relationship type. The one side (the dimension) filters the many side (the fact).
- Avoid Many-to-many relationships. Power BI handles these by generating a temporary intermediate table, which is expensive. If you need M:M, use a bridging table instead.
- Use Single cross-filter direction. Filters flow from dimension to fact, not the reverse. Bi-directional filtering (Both) creates complex filter propagation that slows evaluation and can cause ambiguous results.
High-cardinality columns
Cardinality refers to the number of unique values in a column. High-cardinality columns like TransactionID, GUID, or exact timestamps have many unique values. These are expensive in terms of memory and processing. Avoid using high-cardinality columns in relationships, slicers, or axis/grouping fields. If you need date/time analysis, create derived columns (Year, Month, Day) instead of using exact timestamps.
Reduced cardinality with buckets
For continuous numeric fields with many unique values, create buckets or bins. Instead of slicing by exact Age, create Age Groups (18-24, 25-34, etc.). Instead of filtering by exact TransactionAmount, create Amount Bands. This reduces cardinality and improves visual performance.
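As a sketch, assuming a Customers table with an Age column, a calculated column built at refresh time can assign each row to a band, and visuals then group by the low-cardinality band instead of the raw value:
Age Group =
SWITCH(
    TRUE(),
    Customers[Age] <= 24, "18-24",
    Customers[Age] <= 34, "25-34",
    Customers[Age] <= 49, "35-49",
    Customers[Age] <= 64, "50-64",
    "65+"
)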
Date table design
A proper Date Table is non-negotiable for time intelligence. It should be continuous (no gaps), marked as a Date Table, and contain useful columns like Year, Month, Quarter, Week, and fiscal year fields if needed. The Date Table is a dimension, and your fact tables relate to it via a date column. This pattern enables efficient time intelligence without expensive custom calculations.
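A minimal sketch of such a table as a DAX calculated table; the date range and column names are assumptions to adapt to your own calendar:
Date =
ADDCOLUMNS(
    CALENDAR(DATE(2020, 1, 1), DATE(2026, 12, 31)),
    "Year", YEAR([Date]),
    "Month Number", MONTH([Date]),
    "Month", FORMAT([Date], "MMM"),
    "Quarter", "Q" & QUARTER([Date])
)
Remember to mark it as a Date Table and set the Month column to Sort By Month Number.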
Role-playing dimensions
Sometimes a fact table has multiple dates—OrderDate, ShipDate, DeliveryDate. Don't create separate Date tables for each. Use role-playing dimensions: create one Date table, then create multiple relationships from your fact table to it. Only one relationship can be active; activate the others with USERELATIONSHIP in measures. This pattern is efficient and avoids data model bloat.
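As a sketch, assuming the ShipDate relationship already exists in the model as an inactive relationship to the Date table:
Sales by Ship Date =
CALCULATE(
    [Total Sales],
    USERELATIONSHIP(Sales[ShipDate], 'Date'[Date])
)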
Every performance audit should check: (1) Is this a star schema? (2) Are all relationships One-to-many with Single filter direction? (3) Is there a proper Date table marked as such? (4) Are there any unused columns or tables? (5) Are any high-cardinality columns used unnecessarily? If you can answer these positively, your data model foundation is solid.
4. DAX Performance Optimisation
Well-designed data models can still be undermined by inefficient DAX. DAX is a functional language—how you write measures affects execution speed dramatically.
Measures vs calculated columns
Calculated columns are computed during data refresh and stored in the model. Measures are computed at query time (when a visual renders). The rule: use calculated columns for slicers, axes, or filters—anything that needs to be pre-calculated. Use measures for values that change based on user selection (totals, ratios, comparisons). I've seen models with expensive calculations in calculated columns that should have been measures, causing unnecessarily large file sizes and slow refreshes.
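A small illustration of the split, using hypothetical OrderDate and DeliveryDate columns on the Sales table:
// Calculated column: computed once at refresh and stored, so it can sit on a slicer or axis
Delivery Days =
DATEDIFF(Sales[OrderDate], Sales[DeliveryDate], DAY)

// Measure: computed at query time, so it responds to whatever the user has selected
Average Delivery Days =
AVERAGE(Sales[Delivery Days])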
Understanding iterators
Iterator functions like SUMX, AVERAGEX, COUNTX, and FILTER evaluate row-by-row. They're powerful but expensive. SUMX(Sales, Sales[Quantity] * Sales[UnitPrice]) iterates the Sales table, performing the calculation for each row. Whenever possible, use aggregate functions (SUM, AVERAGE, COUNT) instead of iterators. Aggregate functions are optimised to run at storage engine level; iterators run at formula engine level, which is slower.
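For example, if the source already provides (or Power Query can add) a line-level amount column, a plain SUM does the same job with less query-time work; LineAmount here is a hypothetical column name, not something assumed to exist in your model:
// Iterator: evaluates Quantity * UnitPrice for every row of Sales at query time
Total Sales (Iterator) =
SUMX(Sales, Sales[Quantity] * Sales[UnitPrice])

// Aggregate: scans a stored column directly
Total Sales (Aggregate) =
SUM(Sales[LineAmount])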
FILTER vs CALCULATE
Both FILTER and CALCULATE can modify filter context, but they work differently. FILTER is an iterator: it evaluates every row of the table you pass it. CALCULATE can apply a simple boolean filter without iterating the whole table. For example, CALCULATE([Total Sales], Sales[Region] = "North") is more efficient than CALCULATE([Total Sales], FILTER(ALL(Sales), Sales[Region] = "North")), because the boolean form filters only the Region column rather than iterating the entire Sales table.
Variables for readability and performance
Variables (VAR) in DAX improve readability and can improve performance. Power BI evaluates variables once and reuses the result. Without variables, complex expressions might be evaluated multiple times. Use variables to break down complex measures into logical steps.
// Without variables: the tax rate is a magic number buried in the expression
Total Sales with Tax =
SUMX(Sales, Sales[Amount] * 1.2)

// With variables: clearer, and the value is evaluated once and reused
Total Sales with Tax =
VAR TaxRate = 1.2
RETURN
    SUMX(Sales, Sales[Amount] * TaxRate)
Understanding filter context
Filter context is the set of filters applied to a calculation based on user selections (slicers, filters, visual-level filters). Context transition occurs when a row context (from an iterator or a calculated column) is converted into a filter context. This is what lets a measure referenced inside a calculated column or an iterator respect the current row, but it's expensive. Minimise context transitions where possible.
DIVIDE vs division operator
Use DIVIDE([Numerator], [Denominator]) instead of [Numerator] / [Denominator]. DIVIDE handles division by zero gracefully (returns blank) and is slightly more efficient because it's optimised for this common pattern.
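For example, assuming a [Total Orders] measure exists:
// Returns BLANK() instead of an error when [Total Orders] is zero
Average Order Value =
DIVIDE([Total Sales], [Total Orders])

// An optional third argument supplies an alternative result for division by zero
Average Order Value (Zero) =
DIVIDE([Total Sales], [Total Orders], 0)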
Avoid bi-directional relationships
As mentioned in the data model section, bi-directional relationships cause complex filter propagation that slows DAX evaluation. If you need to filter from fact to dimension, use a measure with CROSSFILTER or a separate dimension table instead.
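A sketch of the measure-level alternative: keep the relationship single-direction in the model and turn on both-direction filtering only inside the measure that needs it (table and column names are illustrative):
Customers With Sales =
CALCULATE(
    DISTINCTCOUNT(Customer[CustomerKey]),
    CROSSFILTER(Sales[CustomerKey], Customer[CustomerKey], Both)
)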
Use HASONEVALUE for single-value checks
When you need to check whether a column has a single selected value, use HASONEVALUE instead of checking COUNTROWS(VALUES(Column)) = 1. HASONEVALUE is more efficient and clearer in intent.
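For example, a label measure that only shows a name when exactly one region is selected (the column name is an assumption):
Selected Region Label =
IF(
    HASONEVALUE(Region[RegionName]),
    VALUES(Region[RegionName]),
    "Multiple regions selected"
)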
Be careful with RELATEDTABLE
RELATEDTABLE, used on the one side of a relationship, returns the related rows from the many side. It's useful, but it performs a context transition, so it can be expensive inside iterators or over large tables. Consider whether you can achieve the same result with CALCULATE and filter modification.
// Common performance anti-pattern: FILTER over an entire large table
Slow Measure =
CALCULATE(
    [Total Sales],
    FILTER(ALL(Sales), Sales[Quantity] > 10)
)

// More efficient: use a boolean filter in CALCULATE
Faster Measure =
CALCULATE(
    [Total Sales],
    Sales[Quantity] > 10
)
Transaction vs grain considerations
If your fact table is at transaction level (every individual sale), some measures will iterate millions of rows. Consider whether you can pre-aggregate to a higher grain (daily totals, product-category summaries) for specific visuals. This is a trade-off between granularity and performance, and the right answer depends on your reporting requirements.
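One way to sketch pre-aggregation inside the model is a calculated table at a daily, per-product grain; doing the same aggregation in Power Query or a source view is often better still, and the column names here are assumptions:
Daily Product Sales =
SUMMARIZECOLUMNS(
    Sales[OrderDate],
    Sales[ProductKey],
    "Quantity", SUM(Sales[Quantity]),
    "Sales Amount", SUM(Sales[Amount])
)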
Power BI Desktop includes Performance Analyzer under the View ribbon. It records exactly how long each visual takes to load and breaks down time spent on queries vs rendering. Use this to identify slow measures—the ones taking longest to evaluate are your optimisation priorities.
5. Power Query and Refresh Performance
Slow refresh times frustrate users and can cause missed schedules. Power Query optimisation focuses on reducing data transfer and leveraging query folding.
Understanding query folding
Query folding is Power Query's ability to generate a single SQL query from your M steps. Instead of pulling all data into Power BI and then transforming it, the transformation happens in the source database. This is dramatically faster. Query folding only works with supported data sources (SQL Server, Azure SQL, Snowflake, etc.) and specific M functions.
To check if query folding is happening: right-click a step in Applied Steps and select "View Native Query." If the option is greyed out, folding isn't occurring for that step.
Patterns that prevent query folding
Some Power Query operations break query folding and force data to be pulled into Power Query for processing:
- Power Query functions without SQL equivalents: Text.PositionOf, Text.AfterDelimiter, List.Distinct with custom comparers, and many others don't fold.
- Merging with non-foldable sources: If you merge a SQL query with an Excel file, folding breaks for the merge.
- Complex conditional logic: Nested if statements or switch expressions may not fold.
- Pivoting and unpivoting: These operations often break folding, depending on the source.
Optimisation strategies for query folding
Use View Native Query to verify folding: After each transformation step, check if the native query is still being generated. If not, consider whether you can rewrite the transformation to use folding-friendly functions.
Push complex logic to the database: If your data source supports it, create database views for complex transformations. Then Power Query just selects from the view, and all the work happens in the database.
Do non-foldable steps last: Structure your query so that all foldable operations (filtering, column selection, simple aggregations) happen first, reducing data volume before you apply non-foldable transformations.
Remove columns early: Use "Choose Columns" or remove unused columns as early as possible in your query. Less data transferred means faster refreshes.
Reduce column count
Every column you import increases data transfer and memory usage. I've seen Power Query queries pulling 200 columns when reports only used 20. Be selective: only import columns you need for measures, filters, or relationships. If you need a column for a one-time transformation, remove it after the transformation step.
Filter early, not late
Filtering rows in the source database is far more efficient than filtering in Power Query or DAX. Always apply source filters in Power Query's Source step or as early as possible. If your report only needs the current fiscal year, filter at the source rather than pulling all historical data.
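A Power Query M sketch of both ideas together: filter rows and choose columns immediately after the source step so the work folds back to the database. The server, database, table, and column names are assumptions:
let
    Source = Sql.Database("sql-server-name", "SalesDW"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // Filter rows first: folds to a WHERE clause in the native query
    CurrentData = Table.SelectRows(FactSales, each [OrderDate] >= #date(2024, 1, 1)),
    // Keep only the columns the model needs: folds to the SELECT list
    KeptColumns = Table.SelectColumns(CurrentData, {"OrderDate", "ProductKey", "CustomerKey", "Quantity", "Amount"})
in
    KeptColumns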
Incremental refresh for large datasets
For datasets with millions of rows, consider incremental refresh. Instead of refreshing the entire dataset, Power BI only refreshes recent data (e.g., last 30 days) and combines it with historical data that hasn't changed. This dramatically reduces refresh times. Incremental refresh requires Power BI Pro or Premium and a date column to partition data.
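Incremental refresh is configured per table in Power BI Desktop, but it depends on two reserved DateTime parameters, RangeStart and RangeEnd, being used to filter the source query, roughly like this (names other than the parameters are assumptions):
let
    Source = Sql.Database("sql-server-name", "SalesDW"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // One comparison inclusive, the other exclusive, so rows aren't duplicated across partitions
    FilteredRows = Table.SelectRows(FactSales, each [OrderDateTime] >= RangeStart and [OrderDateTime] < RangeEnd)
in
    FilteredRows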
Avoid Power Query data duplication
I've seen models where the same fact table is imported three times with slightly different filters. Each duplicate consumes memory and slows refresh. Instead, import the table once and use DAX measures to create filtered variations. If you genuinely need multiple versions, consider whether you can use parameter tables or calculation groups instead.
Disable privacy warnings for trusted sources
Power Query's privacy firewall can cause evaluation to be slower or prevent folding. For trusted internal data sources, you can set privacy levels to Organizational to bypass these checks. Be careful—only do this for sources you control and trust.
Use staging queries carefully
Staging queries (intermediate queries that other queries reference) improve organisation, but referencing does not mean the work runs only once: Power BI evaluates each query that loads to the model separately, so staging steps may be executed multiple times. If you have complex staging logic, consider whether you can consolidate into fewer queries.
Sometimes you must choose between query folding and data model complexity. For example, doing a transformation in Power Query might break folding but reduce column count in the model. Test both approaches: measure refresh time with and without the transformation to see which is faster for your specific scenario.
6. Visual Design and Report Performance
Even with an efficient data model and fast DAX, poor visual design can make reports feel slow. Visual design optimisation is about reducing the work Power BI must do to render each page.
Limit visuals per page
Every visual triggers DAX queries. More visuals mean more queries, more evaluation, and more rendering. I've seen report pages with 40+ visuals that take 20+ seconds to load. Aim for 8-12 visuals per page. If you need more, consider breaking the report into multiple pages with drillthrough navigation.
Avoid high-cardinality visuals
Scatter plots with thousands of data points, tables with millions of rows, or slicers with tens of thousands of items are performance killers. For scatter plots, aggregate or sample data. For tables, use pagination or limit row count. For slicers, use search or hierarchies to reduce options.
Card visuals with complex measures
Card visuals are simple but they still evaluate measures. If you have 10 cards on a page, each with a complex CALCULATE expression involving iterators, that's 10 separate queries. Consolidate related measures into a single visual (like a multi-row card or table) when appropriate.
Visual interactions
By default, clicking a visual filters all other visuals on the page, which triggers re-evaluation for every one of them. Using Edit interactions (on the Format ribbon, with a visual selected), you can disable filtering for specific combinations. If two visuals don't need to interact, turn the interaction off to avoid unnecessary queries.
Slicer selection
Certain slicer types are more expensive than others. Dropdown and date range slicers are generally efficient. List slicers with thousands of items are slow. The "Before" date slicer option can be expensive if used carelessly, because it creates an open-ended continuous date range. Hierarchical slicers are efficient for drilling down through levels.
Drillthrough instead of detail pages
Instead of creating separate detail pages with hundreds of visuals, use drillthrough. Users drill from a summary visual to a focused detail page with fewer, more relevant visuals. This reduces initial page load time and focuses user attention.
Tooltips vs additional visuals
Tooltips are hover-only visuals that show additional context. Instead of cluttering a page with 20 visuals, use tooltips to provide detail on demand. Tooltips only render when hovered, so they don't impact initial page load.
Conditional formatting with rules
Conditional formatting is powerful but can add overhead. Using rules based on field values (e.g., "if value > X") is more efficient than rules based on other measures or calculations. Keep conditional formatting simple—complex rules for every cell in a table visual add up quickly.
Bookmarks and buttons
Bookmarks capture a visual state (filter selections, page visibility, etc.). Buttons and bookmarks create interactive reports but can cause confusion if overused. Each bookmark switch may trigger visual re-rendering. Use bookmarks intentionally to guide users through a story, not to hide poor design.
Theme and formatting
Custom themes and extensive formatting don't directly affect performance, but they affect perceived performance. A clean, well-designed report feels faster than a cluttered one. Use consistent formatting, avoid visual noise, and guide user attention to the most important metrics.
7. Import vs DirectQuery: A Practical Framework
Power BI offers three connectivity modes: Import (data is loaded into Power BI's in-memory engine), DirectQuery (data remains in the source system, queried in real-time), and Composite (a mix of both). Choosing the right mode is critical for performance.
Import mode: When to use it
Import mode is the default and recommended choice for most scenarios. Data is compressed and loaded into Power BI's VertiPaq engine, which is extremely fast. Use Import when:
- Data volume is manageable: Up to a few hundred million rows is fine with proper modelling. Beyond that, consider archiving or aggregation strategies.
- Refresh frequency is predictable: Daily, hourly, or even near-real-time refreshes are viable with Import. Real-time (per-click) requirements suggest DirectQuery.
- You need the full Power BI feature set: Some features (calculation groups, certain DAX functions, complex time intelligence) work better or only work in Import mode.
- Source system can handle scheduled queries: Scheduled refreshes will run queries against your source. Ensure this doesn't impact operational systems.
DirectQuery mode: When to use it
DirectQuery keeps data in the source system and queries it in real-time. Use DirectQuery when:
- Data volume is too large for Import: If you have billions of rows that can't be feasibly loaded into Power BI, DirectQuery may be necessary.
- Real-time data is required: If users need to see data as of the current moment (not last refresh), DirectQuery provides this.
- Data governance requires centralised queries: Some organisations require that all queries go through a central database with row-level security or audit logging.
- Underlying database is powerful and optimised: DirectQuery's performance depends entirely on the source database. A well-indexed, well-resourced data warehouse can perform well; an overburdened transactional system will not.
DirectQuery performance challenges
DirectQuery has inherent performance limitations:
- Every visual interaction triggers SQL queries: Clicking a slicer sends a query to the database. More visuals = more queries.
- Some DAX functions are slow or unsupported: Iterators and complex calculations may generate inefficient SQL.
- No data model compression: All the work happens in the source database, which may not be as optimised as VertiPaq.
- Concurrent user load affects the source: Many users interacting with a DirectQuery report means many concurrent queries against your database.
Composite models: Best of both?
Composite models let you mix Import and DirectQuery in the same report. A common pattern: Import dimension tables (fast, small) and DirectQuery a large fact table. This can work well, but be careful: aggregations across Import-DirectQuery boundaries can be slow, and the complexity increases significantly.
A practical decision framework
1. Is real-time (per-click) data required?
   YES → Consider DirectQuery
   NO → Proceed to question 2
2. Is the data volume more than 500M rows?
   YES → Consider DirectQuery or an aggregation strategy
   NO → Proceed to question 3
3. Is data that is 15 or more minutes old acceptable to users?
   YES → Import mode is likely best
   NO → Consider DirectQuery or a streaming dataset
4. Can the source system handle concurrent report queries, ideally with a performance SLA?
   YES → DirectQuery is viable
   NO → Use Import with scheduled refresh
Hybrid tables (Premium-only)
For Power BI Premium customers, hybrid tables allow Import mode for recent, frequently-accessed data and DirectQuery for historical data. This balances performance and data recency. For example, keep the last 90 days in Import mode for speed, and reach back to older data through DirectQuery when needed.
If you must use DirectQuery: (1) Reduce visual count per page, (2) Use simple measures (avoid iterators), (3) Ensure source database is indexed appropriately, (4) Use aggregated tables in Import mode when possible, (5) Limit slicer complexity, (6) Test with realistic concurrent user loads.
8. Performance Diagnostic Checklist
When a user reports a slow Power BI report, follow this systematic diagnostic approach. Don't guess—measure and prioritise fixes based on data.
Step 1: Reproduce the issue
Open the report in Power BI Desktop. Is it slow in Desktop, or only in the Service? Desktop-only issues suggest DAX or model problems. Service-only issues suggest resource constraints (capacity limits), gateway performance, or network latency.
Step 2: Use Performance Analyzer
Enable Performance Analyzer under the View ribbon. Refresh the report and examine the timeline. Which visual took longest? Was the time in DAX queries or visual rendering? Focus optimisation efforts on the worst offenders first.
Step 3: Check data model size
File size correlates with memory usage. A 500MB PBIX file is on the larger side. In the Service, check dataset size under Settings. If you're approaching capacity limits (1GB for Pro, 100GB+ for Premium depending on SKU), you need to reduce data volume or optimise the model.
Step 4: Examine relationships
Open Model view. Count relationships. More than 15-20 suggests a complex model that may benefit from simplification. Check for bi-directional filters—these should be rare. Look for circular dependency warnings (yellow ! icon). Ensure all relationships are active (solid line) unless intentionally inactive.
Step 5: Review measure complexity
Review your measures in the Data pane (or with an external tool such as Tabular Editor). Look for measures with complex CALCULATE statements, nested iterators, or multiple RELATEDTABLE calls. These are candidates for optimisation. Use DAX Studio to analyse query plans for the worst-performing measures.
Step 6: Check Power Query query count
In Power Query Editor, count your queries. More than 20-30 suggests complexity that could be reduced. Look for duplicated queries, unused staging queries, or opportunities to consolidate. Check Applied Steps length—queries with 50+ steps may be over-engineered.
Step 7: Verify refresh times
Check refresh history in the Service. Is refresh time increasing over time? This suggests data growth or inefficient incremental refresh patterns. Are refreshes failing with timeout errors? This may indicate gateway or source database performance issues.
Step 8: Test with simplified measures
Create a test page with a simple card visual using a basic SUM measure. Is this fast? If yes, your data model is fundamentally sound, and complex measures are the bottleneck. If no, data model or data volume is the issue.
Step 9: Check for visual-level filters
Visual-level filters (filters on individual visuals rather than page-level slicers) add complexity. Review each visual's filters—are there multiple filters on the same visual? Can these be consolidated into slicers or measure logic?
Step 10: Review user patterns
How are users actually interacting with the report? If everyone goes to page 3 immediately, make page 3 the first page. If a certain slicer is used by 90% of users, make it more prominent. Optimise for real usage, not hypothetical scenarios.
9. 10 Common Power BI Performance Mistakes
In seven years of consulting, I've seen the same performance mistakes repeatedly. These aren't theoretical—they cause real problems in production reports.
Mistake #1: Flat table imports from Excel
Impact: 5-10x slower performance, 3-5x larger file sizes.
Fix: Transform flat tables into star schema. Create separate dimension tables for categories, dates, products, etc. The one-time effort pays dividends in report performance and maintainability.
Mistake #2: Bi-directional filter relationships everywhere
Impact: 2-4x slower DAX evaluation, ambiguous results.
Fix: Use Single filter direction for 95% of relationships. Only use Both when you have a specific, documented requirement and understand the implications. Use CALCULATE and CROSSFILTER as alternatives.
Mistake #3: Measures using FILTER over entire tables
Impact: 3-10x slower measure evaluation.
Fix: Use boolean filter expressions in CALCULATE instead. CALCULATE([Measure], Table[Column] = value) is dramatically faster than FILTER(ALL(Table), Table[Column] = value).
Mistake #4: High-cardinality columns in slicers
Impact: 2-5x slower page loads, overwhelming UI.
Fix: Don't put CustomerID, TransactionID, or exact timestamps in slicers. Use derived columns (Month, Year, Amount Bands) or search-based dropdowns instead.
Mistake #5: Every column imported "just in case"
Impact: 2-3x larger file sizes, slower refreshes.
Fix: Be selective. Only import columns used in measures, filters, relationships, or visuals. Remove the rest. If you need a column later, adding it back takes minutes; importing everything "just in case" costs you on every refresh.
Mistake #6: DAX iterators over large tables
Impact: 5-20x slower measure evaluation.
Fix: Use aggregate functions (SUM, AVERAGE, COUNT) instead of iterators (SUMX, AVERAGEX, COUNTX) where possible. If you must iterate, filter the table first to reduce row count.
Mistake #7: No Date Table, relying on Auto Date/Time
Impact: Memory waste, confusing time intelligence, compatibility issues.
Fix: Create a proper Date Table, mark it as such, and use it for all time intelligence. Disable Auto Date/Time globally in options.
Mistake #8: Too many visuals per page
Impact: 3-5x slower page loads, user overwhelm.
Fix: Aim for 8-12 visuals per page. Use drillthrough, tooltips, and additional pages to provide detail without clutter. Focus on the metrics that matter most.
Mistake #9: Power Query transformations that break query folding
Impact: 5-10x slower refreshes.
Fix: Use View Native Query to verify folding. Push transformations to the database via views or SQL where possible. Do non-foldable steps last, after reducing data volume.
Mistake #10: DirectQuery for datasets that should be Import
Impact: 3-10x slower visual interactions, source database load.
Fix: Use Import mode unless you have a specific requirement for real-time data or data volume too large for Import. Most organisations over-estimate their real-time needs.
Power BI Performance Audit
If your Power BI reports are slow and you're not sure where to start, a Power BI audit can identify bottlenecks and provide a prioritised remediation plan. I'll review your data model, DAX measures, and visual design, then deliver actionable recommendations.
Book a Power BI Audit
Related Services
Performance optimisation is one part of a robust Power BI implementation. Digital Adaption provides comprehensive Power BI consultancy services for UK organisations, including data modelling, dashboard development, and team training.
Need ongoing Power BI support? Learn about retainer options to keep your dashboards running smoothly as your business evolves.
Frequently Asked Questions
Why is my Power BI report slow, and how do I fix it?
Start with Performance Analyzer under the View tab to identify which visuals are slow. Common fixes: reduce visual count per page, optimise DAX measures (remove FILTER over large tables), ensure star schema data model design, and remove unused columns. For Import mode, reduce data volume with filters or aggregation. For DirectQuery, ensure source database is indexed and consider aggregations.
How do I optimise slow DAX measures?
Slow DAX measures typically involve iterators (SUMX, FILTER) over large tables, complex CALCULATE statements with multiple filter modifications, or context transitions in calculated columns. Use Performance Analyzer to identify slow measures, then rewrite using aggregate functions instead of iterators, use boolean filters in CALCULATE instead of FILTER, and add variables to avoid repeated calculation.
How do I reduce my Power BI file size?
Reduce file size by removing unused columns and tables, implementing star schema to eliminate duplicate data, reducing data volume (filter to only necessary date ranges, archive historical data), and removing or bucketing high-cardinality columns. In Power Query, remove columns early and use Choose Columns instead of loading everything then removing. For large datasets, consider incremental refresh or aggregation.
What is query folding, and why does it matter?
Query folding is Power Query's ability to generate a single SQL query from your M transformations, pushing work to the source database instead of processing in Power BI. This dramatically improves refresh performance. Check if folding is happening by right-clicking a step and selecting "View Native Query." Avoid transformations that break folding (complex text functions, certain merges, pivoting) or do them last after reducing data volume.
Should I use Import or DirectQuery mode?
Use Import mode for most scenarios—it's faster and supports the full Power BI feature set. Use DirectQuery when you need real-time data (not scheduled refresh), have data volume too large for Import (billions of rows), or have data governance requirements for centralised querying. Import mode handles hundreds of millions of rows efficiently with proper modelling; DirectQuery should be a deliberate choice for specific requirements.
How do I optimise my Power BI data model?
Optimise the data model by implementing star schema (separate fact and dimension tables), using One-to-many relationships with Single filter direction, avoiding bi-directional filters, removing unused columns and tables, creating a proper Date Table marked as such, and using role-playing dimensions for multiple date fields. High-cardinality columns should be removed or aggregated. A well-designed star schema will perform well even with large datasets.
Why is my Power BI refresh so slow?
Slow refresh is typically caused by importing unnecessary columns or rows, Power Query transformations that break query folding, complex data shaping logic that should be in source views, gateway or network bottlenecks, or source database performance issues. Use Power Query's View Native Query to verify folding, remove unused columns, filter data at source, and consider incremental refresh for large datasets.
How many rows can Power BI handle?
Power BI can handle hundreds of millions of rows efficiently in Import mode with proper data modelling. The limit isn't strictly row count but memory usage (1GB for Pro, more for Premium). For datasets beyond this, use DirectQuery, incremental refresh with historical archiving, or aggregated tables. Focus on row count at the grain you analyse—a 100M row transaction table might aggregate to 100K rows of monthly summaries, which is trivial for Power BI.
What are the key Power BI performance best practices?
Power BI performance best practices: use star schema data models, avoid bi-directional relationships, use aggregate functions instead of iterators, remove unused columns, limit visuals to 8-12 per page, avoid high-cardinality slicers, enable query folding in Power Query, use Import mode unless DirectQuery is required, create a proper Date Table, and use Performance Analyzer to identify bottlenecks. Focus optimisation on the worst offenders first.
How do I use Performance Analyzer?
Enable Performance Analyzer from the View tab in Power BI Desktop. Click "Start recording" then refresh your report. Performance Analyzer shows a timeline of each visual's load time, broken down into DAX query time vs rendering time. Use this to identify slow visuals—focus optimisation on the worst-performing ones first. Click on a visual's bar to see the underlying DAX queries, which you can paste into DAX Studio for deeper analysis.
Need Help With Your Power BI Performance?
Whether you're dealing with slow reports, large datasets, or complex DAX that needs optimisation, I work with UK organisations to diagnose and fix Power BI performance issues.
Book a Free 30-Minute Consultation