


Guest Post by David Churchward

PART 1 and PART 2 of this series on Profit and Loss posts covered the basic layout of the P&L together with some time intelligence and filtering to display relevant numbers to cover actual, budget and prior year for both a selected period and the equivalent year to date.  This was all based around the core measure referred to as Cascade_Value_All.

In this post, I’ll go on to refine this report further by adding Return On Sales (ROS) percentages and variance calculations as well as tune up the appearance of the report.

Return on Sales (ROS)

Return on Sales (or ROS for short) is the percentage that a number on our report represents as a proportion of the equivalent sales value.  Most people will be familiar with the term Gross Margin Percentage, and this is one type of ROS: gross margin value divided by sales value.  This is often dissected by individual product lines, as the gross margin values are analysed by the same product categories as sales.  Operating Profit Percentage is another ROS measurement, but this is not normally sub-analysed (although it can be if your dataset lends itself to that approach).

You’ve probably deduced that the financial calculation is therefore:

ROS % = Analysed Value / Equivalent Sales Value * 100

For example, a gross margin of 40 on sales of 200 gives a ROS of 20%.

We’ve already got the value to be analysed in our model.  This is the Cascade_Value set of measures.  Therefore, we simply need to create the Equivalent Sales Value measure.  I’ve called this measure Sales_Compare.

Calculating the Equivalent Sales Value – Sales Compare

In PART 2, we ended up with 6 measures (Cascade_Month_Actual, Cascade_Month_Budget, Cascade_Month_Actual_PY and the equivalents for YTD).  We can use these measures as the basis for a new set of Sales_Compare measures.  I’ll run this example on the Cascade_Month_Actual measure.  The same methodology applies for the other Sales Compare measures that we’ll use for each of our final report headings.

My Cascade_Month_Actual carefully sums the underlying dataset for each of my report headings (Report Heading 1 contains my main groups and Report Heading 2 contains my sub groups).  I need to maintain my Report Heading 2 categorisation but remove the link from my Report Heading 1 categorisation.  This is done using an ALL() function within a CALCULATE function.
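Sketched in DAX, that measure looks something like the following (an illustration only; the column name DIM_Heading1[Heading1_Name] is assumed here, by analogy with DIM_Heading2[Heading2_Name] used later in this post):

```dax
Sales_Compare :=
CALCULATE (
    [Cascade_Month_Actual],
    ALL ( DIM_Heading1 ),
    DIM_Heading1[Heading1_Name] = "Sales"
)
```

The ALL() removes any Heading 1 filter coming from the report rows, and the filter argument then re-applies Heading 1 = “Sales”, so the sales value repeats in every section.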







You’ll notice that I use my Cascade_Month_Actual measure, remove any association that it may have with Heading 1 and then direct it to filter the measure based on Heading 1 being equal to “Sales”.  This way, I essentially repeat my sales value in each section of my P&L report.


It should be noted that this is an interim calculation.  It doesn’t necessarily make sense as it stands in the report above, but it will prove useful in further calculations.  Having said that, I would prefer to only see the values in the sections where I want to display ROS percentages.  In order to do this, I need to tell my dataset where I want to see ROS percentages displayed.  To do this, I go to my DIM_Heading1 table and create a field where I can mark the headings where I want ROS calculations to be performed.  I’ve called this field Heading1_PCT and mark the headings where I want to drive this calculation with a 1.
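For illustration, the marked-up DIM_Heading1 table might look like this (the flagged headings follow the report sections discussed below; any headings not flagged are simply left blank):

```
Heading1_Name       Heading1_PCT
Sales
Gross Margin        1
Operating Profit    1
```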


I can now reference the Heading1_PCT field to determine when the Sales Compare measure should return a result.  We’ll call this measure SalesComp_Month_Actual.


= IF(
    COUNTROWS(VALUES(DIM_Heading1[Heading1_PCT])) = 1,
    IF(
        VALUES(DIM_Heading1[Heading1_PCT]) = 1,
        CALCULATE(
            [Cascade_Month_Actual],
            ALL(DIM_Heading1),
            DIM_Heading1[Heading1_Name] = "Sales"
        ),
        BLANK()
    ),
    BLANK()
)
The CALCULATE function is exactly as per our previous measure, but I’m only running it when Heading1_PCT is equal to 1.  I can only conduct this evaluation when there is a single value of Heading1_PCT in context, so I have to use COUNTROWS and VALUES to determine this first.


Calculating ROS

We now have the two key measures to derive our ROS: Cascade_Month_Actual and SalesComp_Month_Actual.  Surely I could simply divide one by the other?  Essentially, that is correct, but as explained at length in Profit & Loss – The Art of Cascading Subtotals, we need the measure to behave slightly differently in different sections of the report.  You’ll notice above that I have SalesComp_Month_Actual detail values for Operating Profit and Gross Margin.  I only want my ROS calculation to evaluate at the total level for Operating Profit but at the detail level for Gross Margin.  I can therefore use the evaluation that I used in my Cascade_Value_All measure and join it up with my simple division.


= IF(
    MAX(DIM_Heading1[Heading1_Summary]) = 1
        && (MAX(DIM_Heading1[Heading1_Show_Detail]) = 1
            || COUNTROWS(VALUES(DIM_Heading2[Heading2_Name])) > 1),
    IF(
        [SalesComp_Month_Actual] = 0,
        BLANK(),
        [Cascade_Month_Actual] / [SalesComp_Month_Actual]
    ),
    BLANK()
)

My logical test is exactly the same as the one we saw in Cascade_Value_All in Profit & Loss – The Art of Cascading Subtotals.  After that, it’s a simple division.


This same method is used for each associated column measure that we wish to calculate an ROS for.  If I also remove my interim measures and apply some formatting, I get the layout below for month actual, month budget and month prior year.  I should take this opportunity to apologise for not having been very inventive when I made this dataset up, having used a flat set of percentages!



To complete the picture, we need to add some variance calculations.  Because of the way that we’ve built up our measures, this is now very simple.  Our actual versus budget variance is


= [Cascade_Month_Actual] - [Cascade_Month_Budget]

I’m not going to insult your intelligence by writing out the equivalent for actual versus prior year.

Tidy Up

My report is going to display data for a selected month and the year to date position for that month.  I prefer to split the two sections with a blank column.  I don’t think we’re going to register on Rob’s spicy scale for this one!

Blank = BLANK()

Headings and CUBEVALUE

To give my measure names meaning and to allow for their careful use, they’ve become somewhat ugly.  You can of course provide a title for the column that is a lot more user friendly.  However, I prefer to write my own because I can make it look tidier and, more importantly, I can reference the period that has been selected on my slicer.

I use an Excel formula in my heading that uses a CUBEVALUE function to call the period number that I’ve selected.  You could also do the same with year selections.

="Period – "&CUBEVALUE("PowerPivot Data","[Measures].[Maximum of Period]",Slicer_Period)

This appends the period number onto the text “Period – “ by referencing the MAX value for Period when brought into the context of my slicer called Slicer_Period.  I do the same for YTD, add a few more headings to describe the type of data and then hide my pivot table headings!
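The YTD heading is built the same way; the label text below is an assumption on my part, but the CUBEVALUE call reuses the same Maximum of Period measure and Slicer_Period slicer:

```
="YTD Period – "&CUBEVALUE("PowerPivot Data","[Measures].[Maximum of Period]",Slicer_Period)
```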

When It All Comes Together

When you put all of these elements together (PART1, PART2, Sales Compare, ROS, Variances, Blank and Headings) you get something like this….



This Post Has 17 Comments
  1. Hello:
    I was following along with your downloaded Financial analysis example and wanted to try out the period slicer to see the changes to the model. When I changed the value from Period 3 to Period 4, I received an error message that stated “Data could not be retrieved from external source. Error message returned from external data source: The operation has been cancelled due to memory pressure.”
    Am I doing something wrong, or is the model not operating correctly?
    Thank you. I am relatively new to PowerPivot, so I may not be using the model correctly.

    1. Hi Stan
      Are you using the file that was made available with this post (part 3)? I’ve just downloaded it and it appears to be working fine but that’s probably obvious if I created it!

      To run some checks, can you please advise the following:
      1 – Powerpivot window loads and data tables are visible without any errors
      2 – How much RAM are you running?
      3 – If you use Data > Refresh All – do you get any errors?
      4 – Do you have other powerpivot workbooks that work OK?
      5 – Have you tried closing other applications and then running? If you’re short of RAM, I guess this could have an effect although this is a small workbook / dataset so I would be surprised.


    2. Yes, I agree with David. This isn’t something wrong with the workbook, and you aren’t doing anything wrong either Stan. You are just running out of RAM. If you maybe close a bunch of other applications, or add more RAM, you should be fine.

  2. Gentlemen:

    Thank you for the responses. David,the answers to your questions are as follows:
    1- Yes it runs with no visible errors.
    2- 4GB
    3- No errors noted after refresh.
    4- I have only encountered this error with the Stage1 and Stage2 downloaded files. All other PP files I have created or downloaded have been flawless on this machine.
    5 – Yes, I have tried the files with no other applications running. Both Stage1 and Stage2 files returned the error messages.

    I will see if I can add additional RAM….

    Thank you for your excellent site. I have found it very useful in getting up to speed on PP!

    1. Hi Stan

      The file was created on a machine with 4GB of RAM (64 bit) so that probably isn’t the issue. Just tried it with someone running 2GB RAM on 32 Bit and everything works fine. We’ve managed to recreate the issue on one of our machines here (8GB RAM 64 Bit). Looking into it to try and find out why. It looks like a memory leak is happening and I suspect that it’s something to do with the linked tables but I don’t know why yet. Watch this space.

      In the meantime, can you run task manager at the same time as clicking a slicer operation and see what the Excel process memory does? It reaches 100mb on my PC and others that work OK but the PC that crashes tops out at over 6GB.


  3. David:
    I ran the Task Manager in parallel and the slicer operation, when initiated, pegs the total physical memory to its limit and then displays the error message.
    BTW – my machine has 3GB RAM, not 4GB as I indicated before.
    I am relieved that you could replicate the condition on another machine.

    1. Quick update Stan – unfortunately, we still haven’t isolated the problem. We’ve been carefully uninstalling programs that may be conflicting and adjusting settings everywhere but to no avail as yet. We’re going to rebuild the machine to see if this cures the problem, but the problem with that is that we won’t be able to isolate the cause.

      I’ll keep you updated as and when we find out more. We are still working on it.


  4. Hi David,

    I just upgraded my 64bit win 7 machine to 8gb of memory and I get the error Stan mentioned above about memory pressure. This seems like a power pivot bug as the workbook worked fine on a 64bit win 7 machine with 4gb of memory.

    I enabled a trace and reviewed the results in Profiler but there is no obvious error.

    I checked the memory after clicking a slicer and it does indeed jump to 6GB – hence your suggestion of a memory leak is probably right, although I think it is not related to the linked tables as these update in the PP window okay. It’s only querying via the pivot table that causes the error.

    Well done on finding the bug.


  5. Hi Again,

    I tried the file on a machine with CTP3 of V2 of PowerPivot and I no longer see the error after the file is updated to work with CTP3. However, there is still a problem present, as any updates via the Pivot Table (i.e. selecting a period or expanding a level) take around 40 seconds to complete. During this time the memory usage goes from 250mb to 2.5gb. Although, we have to remember this is a CTP version of the product.

    It’s very strange that this problem isn’t present on a 64bit machine with 4gb of memory.

    1. Hi Lee

      Thanks for the info. This whole thing gets more curious.

      We’ve isolated where the problem is happening but I have no idea why it is happening at this stage. The issue appears to emanate from the ROS measures, where an IF statement is used to determine when the measure should execute and a further IF statement is used on the denominator of the resulting divide. Somehow, I suspect that the formula is evaluating to infinity before it realises that it doesn’t need to evaluate. I’m guessing, but this might suggest that the more RAM you throw at this problem, the quicker it runs measure elements that it doesn’t need to.

      I’ve written a measure that gets around this problem and I’ll send details when I’ve tidied it up. However, I think we still have a memory leak problem in certain circumstances.

      I’ll be in touch with further details shortly.


  6. Hi David,

    I’m happy to test the revised workbook on both of my machines that exhibit the problem.

    I think you should submit this bug to Connect as I imagine we’ll come across this problem more and more. I’m currently developing a financial model using similar techniques and if I hit the same problem I’ll raise a Connect bug too.


  7. Did anybody sort this out?

    I would like to use the technique, but if it’s this slow on such small amounts of data, it’s a non-starter.

  8. I know this is an older post, but one simple solution would be to turn off cross filtering on your slicers, which improved performance quite a bit. Thanks again David for the wonderful tool.

