I often have inputs I know today and want to “run them out” into the future applying certain parameters along the way to create a projection. I’m in finance, so my needs will almost always have a financial planning bias, but there’s no reason why this wouldn’t have general mathematical, statistical, or scientific utility as well. Heck, I’ve gleaned lots from non-financial posts and applied them to my work. Inputs could be virtually anything in this DAX pattern. If you use a bottom-up approach to financial planning and analysis (FP&A), it should be especially easy to relate to this post. To keep things relatively straightforward, I’ll show you how to produce a labor cost projection using a project start date, hourly rate, and employee start and end dates.

Please click here to download the workbook to follow along.

**First, the setup**.

I created three tables in my “data model” (using quotes here because the tables are disconnected). I won’t spend too much time talking about getting data into the data model; however, I have been experimenting and becoming more comfortable with Power Query. Some P3 guru—maybe Ryan Sullivan—recently gave me a pointed piece of advice: “If you can put it in Power Query, do it.” Even though Power Query has been out of my comfort zone, after receiving this advice, I decided it was time to get uncomfortable. Boy, has it paid off!

Here’s a summary of what I did to set up these three tables:

**Calendar:** used List.Dates and set parameters to create a one-year table. In my work, which involves lots of projects, I use an approach I learned from Ken Puls’ blog to create a dynamic parameter so I can change the start/end dates of the projects and Power Query makes a table between those exact dates. It’s awesome.
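For readers who want to try the List.Dates approach, here's a minimal M sketch. The hard-coded `StartDate`/`EndDate` values are stand-ins for the Ken Puls-style dynamic parameters mentioned above; swap them for your parameter query as needed.

```m
let
    // Stand-ins for dynamic start/end date parameters
    StartDate = #date(2017, 1, 1),
    EndDate   = #date(2017, 12, 31),
    DayCount  = Duration.Days(EndDate - StartDate) + 1,
    // One date per day, stepping by 1 day
    Dates     = List.Dates(StartDate, DayCount, #duration(1, 0, 0, 0)),
    AsTable   = Table.FromList(Dates, Splitter.SplitByNothing(), {"Date"}),
    Typed     = Table.TransformColumnTypes(AsTable, {{"Date", type date}})
in
    Typed
```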

**Project Start Date:** Excel table sucked into the data model via Power Query.

**Data Input:** Excel table sucked into the data model via Power Query.

So, I lied. I’m breaking the “if you can put it into Power Query, do it” rule in the name of connecting with you more on DAX calculated columns than M, which is beyond where I want to go with this post. In my **calendar table**, I created the following columns:

```dax
Day Number of Week = WEEKDAY ( [Date] )

Work Day = SWITCH ( [Day Number of Week], 1, 0, 7, 0, 1 )

Month = MONTH ( [Date] )

Year = YEAR ( [Date] )

Year Month = [Year] & "-" & [Month]

Year Month Sort = ( [Year] * 100 ) + [Month]
```

The combination of “Day Number of Week” and “Work Day” gives me similar functionality to NETWORKDAYS in Excel, so I’m able to determine the exact number of work days per month. My company works in the Middle East where weekends fall on Fri/Sat instead of Sat/Sun, which has an impact on the number of workdays. In addition to giving me the NETWORKDAYS functionality, it also gives me the opportunity to move my workable days around to accommodate different schedules.
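As one sketch of that flexibility: assuming WEEKDAY's default return type (Sunday = 1 through Saturday = 7, so Friday = 6), a Fri/Sat weekend could be flagged with a variant like this; the column name is hypothetical.

```dax
-- Friday (6) and Saturday (7) treated as the weekend
Work Day FriSat = SWITCH ( [Day Number of Week], 6, 0, 7, 0, 1 )
```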

The rest of the calculated columns are not part of the pattern per se; they’re just part of my “visualization” requirements. I’ll leave you to your own. There is no shortage of information on how to create calendar tables; here is just one guide Matt Allington penned that I often reference.

For **project start date**, I just need the date in a measure, so I use:

```dax
Project Start Date := MAX ( [Value] )
```

Finally, on the **data input** tab, I’ve created a table with a row ID (because every good table has a row ID—you’ll never know when you’ll need it!), employee name, hourly rate, and the start and end dates of the employee’s engagement on this project. I’ve given you a variety of start and end dates (i.e., different durations and start/end points within a month) to show you the accuracy of this pattern.

**Here is a look at that data input table:**

**Second, calculating the projection.**

I’ll list the three measures that make this calculation work and then go into detail on the other side:

1. Num of Working Days:

```dax
Num of Working Days :=
CALCULATE (
    SUMX ( 'Calendar', 'Calendar'[Work Day] ),
    FILTER (
        'Calendar',
        'Calendar'[Date] >= MAX ( DataInput[Start Date] )
            && 'Calendar'[Date] <= MAX ( DataInput[End Date] )
    )
)
```

2. Total Working Days:

```dax
Total Working Days :=
CALCULATE ( SUMX ( DataInput, [Num of Working Days] ) )
```

3. Labor Cost:

```dax
Labor Cost :=
CALCULATE ( SUMX ( DataInput, DataInput[Hourly Cost] * 8 * [Total Working Days] ) )
```

The first measure calculates the number of working days. The second iterates the result of #1 over the DataInput table and is also the measure used in our first report, which shows total days worked.

The third measure calculates the total cost by multiplying the hourly cost by an eight-hour day and by the total working days calculated in #2 (for example, a $50 hourly cost over 22 working days yields 50 × 8 × 22 = $8,800). “Labor Cost” is the measure used in our second report.

**Where you can go from here…**

I’ve given you the basic pattern; its application is limited only by your imagination. I’ve used it to calculate labor cost as above, to apply a revenue calculation (or gross margin %), and within my first post to create a project forecast (i.e., ITD invoiced plus projection vs. project budget). I’ve also used it to take lump-sum amounts (say, revenue forecasts from various streams) and project them out over time to better understand how we might collect on them in the future. Finally, I’ve used it as an on-the-fly calculator answering the question: “if I know X, Y, and Z, I wonder how that will look over time?” My answer to that: “Wait a sec.” Usually it’s a dataset with hundreds of lines of inputs, so any other way of calculating a result quickly pales in comparison.

# Forget bending spoons with your mind – there’s no money in it.

It takes a special kind of mindset to “bend” data (and software!) to the human will. As this article demonstrates, we at PowerPivotPro can twist Power BI into a pretzel if that’s what an organization needs. (A robust, trustworthy, industrial-strength pretzel of course).

The data-oriented challenges facing your business require BOTH a nimble toolset like Power BI AND a nimble mindset to go with it. And as Val Kilmer / Doc Holliday once said, we’re your huckleberry.

**Subscribe to PowerPivotPro!**

I like the use of

```dax
Work Day = SWITCH ( [Day Number of Week], 1, 0, 7, 0, 1 )
```

and understand that it is for the Middle East, but why the 7? Do they work compressed hours that day?

Thanks Stephen. This formula is used in conjunction with [Day Number of Week] to figure out weekends similar to NETWORKDAYS or NETWORKDAYS.INTL in Excel. In some countries, weekends are Fri/Sat, which can affect the number of billable days.

I’ve been waiting for the day to see this kind of simulation mindset leak into Power BI.

I thought it held a lot of promise.

Unfortunately, by design, I don’t think this solution is scalable. Imagine having, instead of 1 year, 3, 5, or maybe 10 years, and hundreds of assumptions that feed into measures built upon measures. And of course, if you have to give users the choice to see how sensitive the results are to those assumptions, then you need to wrap those assumptions in what-if scenario parameters.

Because measures are calculated at runtime, this becomes a very costly operation. Good luck changing any assumption parameters to other values, or even applying different filters.

Another way of avoiding this issue is to create intermediate calculated tables (only available in Power BI and not Excel, though workarounds with Power Query can work). The problem with this is that slicers can’t affect calculated tables because they are not evaluated at run time. The only possible way to have a calculated table hold all your results for a simulation that is also adjustable by parameter is to literally calculate every possible outcome from all variables. As you can imagine, these tables become exponentially big: 1 row for every day in your date table * every possible value for parameter 1 * every possible value for parameter 2, etc.
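To illustrate the blow-up being described, a what-if grid built as a calculated table might be sketched like this (the table and column names, and the ten-value parameter ranges, are hypothetical). With a 365-row calendar and ten possible values per parameter, this already materializes 365 × 10 × 10 = 36,500 rows, and each additional parameter multiplies that again.

```dax
ScenarioGrid =
CROSSJOIN (
    'Calendar',
    -- every possible value for parameter 1
    SELECTCOLUMNS ( GENERATESERIES ( 1, 10, 1 ), "Param1 Value", [Value] ),
    -- every possible value for parameter 2
    SELECTCOLUMNS ( GENERATESERIES ( 1, 10, 1 ), "Param2 Value", [Value] )
)
```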

I tried building an entire financial model once inside PowerPivot. Suffered from performance issues ultimately and had to revert to classic excel models.

I’m not saying it’s not doable. But it’s a long ways away from achieving interactive, complex, scalable solutions.

Unless the technology changes… (dynamic table generation for example)

Bob, thanks for this! It’s one of the most useful comments I’ve ever received on DAX. I too really (really!) want this approach to be more scalable. I’m currently piloting a project using four or five variables (and multiple years) and the performance is passable–I’m still playing with it. In the course of developing that project, I have used this on the fly and for much simpler analysis, so it’s still proved extremely useful to me.

I’m interested in a couple comments you made if you’re willing to elaborate:

1. Can you tell me what you mean “by design”?

2. Can you elaborate on how you use intermediate calculated tables and also the PQ “work-arounds”?

3. Have you ever found a hybrid Excel-PQ/DAX model useful? I’m thinking about going this way to take care of the more costly calculations.

Hi Matthew –

Very cool post, I like that you’re flexing your comfort zone with Power Query! I didn’t expect a fully disconnected data model, but you’ve done some awesome things with it.

From a performance standpoint, you may want to be careful using SUMX() in each of your measures. It’s hard to test performance in DAX Studio on your data model (because it’s so small), but looking at your measures from a 10,000 foot view, here’s what I found:

[Num of Working Days] iterates over the entire calendar table…twice. Once because FILTER() will scan each row to determine whether it passes the Start Date / End Date condition (which it needs to evaluate for each row), and once more because SUMX() scans each of those passing rows. With a 1-year table, this means scanning 365 rows. It will happen very quickly, but it’s going to compound, as we’ll see.

[Total Working Days] iterates over the DataInput table, and each time it does so it calls [Num of Working Days]…which scans the Calendar table twice. A 10 row table is then going to scan the Calendar 10 times.

I understand the need to have row specific start and end dates that feed into the [Num of Working Days] measure, so I think SUMX() is unavoidable here.

[Labor Cost] iterates over the DataInput table, and each time it does so it calls [Total Working Days], which iterates over the DataInput table again, and in turn will iterate over the Calendar table twice… Now we’re scanning DataInput 100 times and scanning the Calendar table 100 times.

I know that the most recent versions of the Vertipaq Engine have made some efficiencies to both SUMX() and FILTER() to cache certain queries to improve performance, but I would hesitate to use these measures as is in a production environment. At scale, these measures will not perform very well.

A more efficient version of the first measure would be:

```dax
Number of Working Days :=
VAR StartDate = MAX ( DataInput[Start Date] )
VAR EndDate = MAX ( DataInput[End Date] )
RETURN
    CALCULATE (
        SUM ( 'Calendar'[Work Day] ),
        'Calendar'[Date] >= StartDate,
        'Calendar'[Date] <= EndDate
    )
```

This stores the Start Date and End Date values as variables that are only calculated once, allowing you to remove the FILTER() argument (since you can check a column against a scalar value using just CALCULATE()). Since you're only summing up 1 column, SUM() can replace SUMX().

I would leave the [Total Working Days] measure as is.

Your third measure could be optimized as follows:

```dax
Labor Cost :=
CALCULATE (
    SUMX ( DataInput, DataInput[Hourly Cost] * 8 * [Number of Working Days] )
)
```

Since the [Total Working Days] measure just sums up all working days for each row of the DataInput table, we can just call that [Number of Working Days] measure inside of the same row context and we'll get the same result, without having to nest SUMX()'s.

I'm interested to hear some more details of how you used this for forecasting in models! Maybe a followup post with more details on some of your use cases?

Cheers,

~ Chris

Thanks for this comment, Chris–very helpful. I’m trying to educate myself on the Vertipaq engine so I can more quickly recognize efficient vs. costly approaches. Agreed on X functions–I always use them with trepidation. Marco Russo provided some commentary on my first post that helped me personally understand them better.

Also, Chris, thanks for the feedback on VAR specifically. Not using VAR is an old habit that I’ve been trying to break (successfully, even). I can’t remember if I left it out for simplicity’s sake or because the workbook I used as a template was developed before the habit was broken. At any rate, thanks for bringing it up!

No worries Matt, just want to make sure your awesome models run smoothly! Excited for more posts from you.

Hi Matthew. Thanks for putting this post together. Some great stuff to help a comparative newcomer get to grips with DAX (and M).

I just had one query about the [Num of Working Days] measure: shouldn’t the first filter expression use MIN instead of MAX? Otherwise you’re counting days from the latest start date, which is OK when the overall filter context returns a single row, but not when multiple rows are returned.

Regards, Ian

Thanks for the comment, Ian. Did you notice an issue in the result?

On a second look at your comment and the model, I see your point. If you create a pivot table and drop in [Num of Working Days], I think one sees the result. This measure is meant to be a stepping stone for [Total Working Days], which “fixes” the issue by iterating [Num of Working Days] over the DataInput table (line by line).

Can you explain why the total working days measure is needed? It seems as though the first measure, number of working days is returning the same result.

If you create a pivot table, drop in [Num of Working Days], and compare it to [Total Working Days], there is a different result on the total line. The [Num of Working Days] measure is meant to be a stepping stone for [Total Working Days], which “fixes” the issue by iterating [Num of Working Days] over the DataInput table (line by line).