I LOVE Pivot Tables – they are (in my view) **the best aggregation tool available to the data analyst today**. However, sometimes when you aggregate your data you find yourself at a point where you can’t see the trees for the forest. If this happens, **you really need to start to disaggregate your data into the component parts that are driving the overall result.** In my post today I take a typical business scenario, looking at the change in sales $ turnover vs prior year, and then answer the question “What is driving the change in my sales $ turnover vs last year?”. This technique can also be used to compare the change in profit (aka margin $).

# Sales are up – that’s good, right?

I have used a modified version of the Contoso database as my test data (shown below). As you can see in the pivot table below, there has been a change in sales $ turnover vs last year of around $428m across 8 different categories – not the most realistic data, but you can still see what is going on. Each category in the pivot table has performed somewhat better or worse, ultimately combining to deliver the overall result.

But the question I really want to answer is, **“What is driving the change in sales $?”** If you think about this question, you can actually break it down into 2 lower level drivers of the change in sales $ turnover*.

- A change in volume (ie a change in the number of units sold)
- A change in price per unit (ie a change in the rate you charge).

* In fact you can go on to disaggregate the drivers of your sales turnover in many different ways, and to a much lower level. I did this earlier in the year with my entry in the Power BI Report contest – you can read about that here if you are interested.
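The two-way split is exact, not an approximation. Writing Q for units sold, P for average price per unit, and TY/LY for this year/last year, the algebra behind the measures later in this post is:

```latex
\Delta\text{Sales} = Q_{TY}P_{TY} - Q_{LY}P_{LY}
                   = \underbrace{(Q_{TY}-Q_{LY})\,P_{LY}}_{\text{due to volume}}
                   + \underbrace{Q_{TY}\,(P_{TY}-P_{LY})}_{\text{due to price}}
```

The first term values the extra (or fewer) units at last year’s average price; the second values this year’s units at the change in average price, and together they recover the total change.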

# Writing the DAX

The DAX formulas to disaggregate the total sales $ number are actually quite easy to write. As usual, **it is easiest to write a number of interim measures along the way** (each of these has its own intrinsic value anyway). So to start with, I have written independent measures for quantity (aka volume) and sales value, before writing the final measures I need for this task.

## Volume Measures

Total Sales Qty := SUM ( Sales[SalesQuantity] )

Total Sales Qty LY := CALCULATE ( [Total Sales Qty], FILTER ( ALL ( Calendar ), Calendar[CalendarYear] = MAX ( Calendar[CalendarYear] ) - 1 ) )

Change in sales units TY vs LY := IF ( AND ( [Total Sales Qty LY] <> 0, [Total Sales Qty] <> 0 ), [Total Sales Qty] - [Total Sales Qty LY] )

## Value Measures

Total Sales := SUMX ( Sales, Sales[SalesQuantity] * Sales[UnitPrice] )

Total Sales LY := CALCULATE ( [Total Sales], FILTER ( ALL ( Calendar ), Calendar[CalendarYear] = MAX ( Calendar[CalendarYear] ) - 1 ) )

Chg in Sales $ TY v LY := IF ( AND ( [Total Sales LY] <> 0, [Total Sales] <> 0 ), [Total Sales] - [Total Sales LY] )

## Average Sell Prices

Avg Price per Unit := DIVIDE ( [Total Sales], [Total Sales Qty] )

Avg Price per Unit LY := CALCULATE ( [Avg Price per Unit], FILTER ( ALL ( Calendar ), Calendar[CalendarYear] = MAX ( Calendar[CalendarYear] ) - 1 ) )

Once you have these measures, you can then move on to write the measures that disaggregate the movement in volume from the movement in price as follows.

## Variance Due to Volume

In a business sense, the measure I have written below is saying “If I sold the extra units this year at the same price I got last year, how much extra (or less) money would I have”? **This is therefore the variance in sales $ driven by a change in volume**.

Chg in Sales $ due to volume :=[Change in sales units TY vs LY] * [Avg Price per Unit LY]

## Variance Due to Price

The last measure is the easiest to write, as it is simply the total variance less the variance due to volume.

Chg in Sales $ due to price := [Chg in Sales $ TY v LY] - [Chg in Sales $ due to volume]
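If you want to reassure yourself that the disaggregation reconciles, a simple check measure (a hypothetical name, not part of the set above) is just the sum of the two parts. Wherever both years have sales, it should return exactly the same value as [Chg in Sales $ TY v LY] in every cell of the pivot table:

```dax
-- Sanity check: the two components should add back to the total variance
Chg in Sales $ check :=
    [Chg in Sales $ due to volume] + [Chg in Sales $ due to price]
```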

# Charting the component parts

As you can see below, TV and Video total sales were up $178m vs prior year, **yet within that there was also a decline in sales of $79m caused by lower sell prices**. Cameras and camcorders, on the other hand, actually had an increase in sales due to sell price, and that drove the total result higher than it otherwise would have been. Of course, there is normally an inverse relationship between price and volume (the lower the price, the more you sell). The trick is to maximise sales (or, more correctly, margin $).

Why not try this technique on your own business data? This test data is not very realistic, so the insights are not that realistic either. However, analysis like this on real business data is bound to help you find things you didn’t know. It is good to know whether the increase (hopefully an increase) in sales turnover has been driven by price increases, volume increases, or both.


Very nice, though you need to be careful with this type of analysis, as it can hide effects at lower levels since we are using average prices. Ideally the analysis would be done at the lowest product level, so as not to hide price effects that average out at the total level.

Hi Konrad. Yes of course. This is test data, and you need to bring your own business context to the exercise. The formulas work at any level, so if category is too high a level (and hence the averages don’t make sense) then use a lower level in your product hierarchy. Just change the pivot table.

Is there a way to have the grand totals equal the sum of the lowest hierarchy items, rather than being its own calculation? Drilled down to the lowest level, my analysis is excellent. However, when I move up a level, the variances become significant in opposite directions. Together they mostly cancel each other out, but they still appear to be large swings when in reality the sum of the parts is rather small.

Thank you for the information. We are working on this subject right now and this is very useful.

What I have difficulty explaining is a change in Margin %.

This can be driven by changes at the customer level for:

1. Avg. Revenue (if everything else remains constant, the Margin % will improve if Avg. Revenue improves; the amount this impacts the Margin % depends on their total revenue).

2. Avg. Cost (if everything else remains constant, the Margin % will improve if Avg. Cost improves; the amount this impacts the Margin % depends on their total revenue).

3. Frequency (if everything else remains constant, the Margin % can shift up or down, depending on how this customer’s margin % compares to the average customer’s).

Ex. If a customer has a 20% margin, and the overall average is 10%, an increase in this customer’s activity will increase the overall average margin %.

It is when all three of these change for a customer (Ex. Avg. Revenue goes up, Avg. Cost goes down, Frequency goes down) that I have difficulty quantifying which customers are driving the change.

Should I just use your method to determine what is driving the change in Total Revenue and the change in Total Costs? I suppose if I figure that out, it would give me four categories that drive the change.

Change in Revenue due to Price

Change in Revenue due to Activity

Change in Cost due to Price

Change in Cost due to Activity

Funny, I was thinking about this overnight too. I think your items are a good place to start, but I think there are just 3: change in margin $ due to volume change, cost movement and sell price movement. You could try these as %, but I am unclear how that will work given the last one has a sell price movement too. Sounds like you have already looked at it, but that is where I would start.
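Following the same pattern as the measures in the post, that three-way split could be sketched as below. This is only a sketch: [Avg Cost per Unit] and [Avg Cost per Unit LY] are assumed measures you would build in the same style as the average price measures, from a unit cost column in your Sales table.

```dax
-- Extra units valued at last year's average margin per unit
Chg in Margin $ due to volume :=
    [Change in sales units TY vs LY] * ( [Avg Price per Unit LY] - [Avg Cost per Unit LY] )

-- This year's units valued at the change in average sell price
Chg in Margin $ due to price :=
    ( [Avg Price per Unit] - [Avg Price per Unit LY] ) * [Total Sales Qty]

-- This year's units valued at the change in average cost (a cost increase reduces margin)
Chg in Margin $ due to cost :=
    - ( [Avg Cost per Unit] - [Avg Cost per Unit LY] ) * [Total Sales Qty]
```

By the same algebra as the volume/price split in the post, the three parts add back to the total change in margin $.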

Hello, my remark is in line with what Konrad said earlier… the danger of the average! I haven’t yet gone really into detail… just had a quick read… but to me it seems to work properly only when the underlying/detailed data is “homogeneous” – I mean, for example, that the actual prices per unit do not have a wide range within one category.

Also, will this work fine (give logical results) if you suddenly had a new article sold (which didn’t exist last year) within an existing category, with a high unit price? Would this not highly influence the average unit price calculation?

Once again, I didn’t read it in detail, but I have a feeling that in real business scenarios this will not be enough…

Of course, you should only use any technique where it makes sense to do so.

Matt, thank you for writing this! It’s exactly the real world type scenario I will use in my job and actually describes a project I’ve been wanting to do on real data. Thank you!

Hi Matt, it’s so good to have such valuable insights from the above analysis. I am just wondering how to include portfolio change. Within the same brand there are new launches / product expiries and so on – how can we bring that into such an analysis? I guess it would be a factor under volume change. Do you have any idea?

For this I personally would be checking for cannibalisation. You would be hoping for/expecting total unit sales, total $ sales and total margin $ to be up overall for the entire portfolio/group of like products. Ideally you would be able to track competitors too, but that may not be possible. The flip side of cannibalisation is “what happens if we don’t launch this product extension and the competitor does instead?” The cannibalisation could be the same, but this time you don’t get the NPD sales as they go to the competitor instead. 🙁

IMO there are four variations of sales increase/decrease:

* Price increase/ decrease.

* Volume increase/decrease.

* New products.

* Phased out products.

I would also like to add currency effects.

For non-homogeneous product groups, one has to further break down the “Volume Variance” into two components:

1) Sales Quantity Variance

2) Sales Mix Variance

1) is calculated as ( Actual Quantity - Standard Quantity ) * Weighted Average Standard Price @ Standard Mix (WASPSM)

2) is calculated as ( Weighted Average Standard Price @ Actual Mix [WASPAM] - WASPSM ) * Actual Quantity

See detailed calculation examples here: http://www.hocktraining.com/download/samples/CMA_Text_Sample.pdf

(Even though the example calculations are for Manufacturing Input cost, same applies to Sales.)
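A small made-up example of the split above: two products, A at a standard price of $10 and B at $20, standard mix 50/50, standard quantity 100 units; actual quantity 120 units at an actual mix of 25% A / 75% B.

```latex
\begin{aligned}
\text{WASPSM} &= 0.5(10) + 0.5(20) = 15\\
\text{WASPAM} &= 0.25(10) + 0.75(20) = 17.50\\
\text{Quantity variance} &= (120 - 100) \times 15 = 300\\
\text{Mix variance} &= (17.50 - 15) \times 120 = 300
\end{aligned}
```

Together the two pieces equal the full volume variance at standard prices: (30×10 + 90×20) − (50×10 + 50×20) = 2100 − 1500 = 600.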

Thanks for the post Matt,

In general, would the accuracy of the results improve if we compared shorter time periods, e.g. TY-Q1 vs LY-Q1? Especially if prices have changed dramatically during sale seasons. Thanks

I don’t think the accuracy will change, but the relevance may change. There is no substitute for your own experience and understanding of your own business. So if it makes more sense to your business to analyse your data over shorter periods, then you should do that.

Yet another problem I see a lot is discounts. At my current customer, different types of discounts are given over the total sum of sales.

And Currency is yet another dimension in the variance analysis.

Should we apply the same formula if we want to get the figures month by month?

This is a disaggregation approach for any comparison periods. So if you are comparing this month with last month, or this month with the same month last year, it is still the same method.

Matt, plz plz plz, can you help me with this problem,

http://stackoverflow.com/questions/38489829/complicated-powerpivot-issue

Hi,

when I was trying to apply the Total Sales Units LY measure, the calculation sums up the total for every market. Is there any way to show each market separately?

It might sound confusing, but I was looking for a way to get total sales units LY with the different markets on Rows. When I put Market on Rows, the sum of total sales across all markets is displayed for every market.

Read the “totals don’t add up” section half way down the article on my blog

http://exceleratorbi.com.au/sum-vs-sumx-in-dax/
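The pattern that article describes can be sketched against this post’s measures. This is only a sketch – Product[ProductName] is an assumption about your model, so substitute whatever column sits at the lowest level you trust (e.g. a market column for the question above):

```dax
-- Force the total row to be the sum of the leaf-level results
Chg in Sales $ due to price (additive) :=
SUMX (
    VALUES ( Product[ProductName] ),  -- one row per product visible in the current filter context
    [Chg in Sales $ due to price]     -- evaluate the measure product by product, then add up
)
```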

Hi,

Awesome article! A good example of how to translate business requirements into meaningful data analysis using the beauty of DAX.

Actually I have a question: what about using a variable which stores the value of the year selected by the user, minus one?

The measure could be :

Total Sales $ LY =
VAR PreviousY = MAX ( Calendar[CalendarYear] ) - 1
RETURN
    CALCULATE ( [Total Sales], Calendar[CalendarYear] = PreviousY )

It returns the same results because the variable is evaluated once, in the current filter context. Its value is fixed, so there is no need to use the FILTER function anymore!

The pros I see :

(i) The formula is smaller and easier to read and maintain. Users can better understand the role of each component (especially which part relates to the user’s choice).

(ii) It prevents the classic error of forgetting the ALL function in the table argument of your FILTER function.

Another pro I see (not in this case) is when you want to filter a column based on the value of a measure but you don’t want context transition to apply. You first store the value of your measure in a variable… which becomes fixed and allows you to filter without context transition happening!

What do you think ? And what would be the pros & cons that you identify in my suggestion ?

Cheers,

Tristan