Well folks, I haven't had much time to play with the V2 Beta (aka CTP3) yet. A lot has been going on over the past week:
- Authoring a brand new suite of models and reports for our Duane Reade DR-Direct solution
- A nasty crashing bug in PowerPivot for SharePoint that we’ve now learned to catch and repair
- I submitted a PowerPivot article to CIMA Insight magazine (for August)
- Running refresh tests with PowerPivot servers in Chicago pulling data from SQL in Dallas (over Internet VPN – more on this later) for the HostedPowerPivot offering
So in the meantime, David Hager has a short article to share on the new TOPN function. I haven’t had a chance to try it yet, and I’m a little surprised that it doesn’t have special handling for ties (as he’s discovered).
David Hager on TOPN
Using the New PowerPivot Version 2 TOPN Function: Up, Down and Between
One of the new DAX functions included in the Denali PowerPivot CTP3 release is the TOPN function. This function returns a table of the top n rows, which can then be aggregated. Prior to the release of this function, workarounds were available to achieve the same results, mainly through DAX ranking formulas (and Denali PowerPivot now has native functions for ranking too!). So, a typical DAX formula which uses the TOPN function is:
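For example, a measure along these lines sums the top 10 values (the measure name and the choice of 10 for n are illustrative; the table and column names come from the description below):

```dax
Top10Sales :=
SUMX (
    TOPN ( 10, SalesTable, SalesTable[SalesTotal] ),
    SalesTable[SalesTotal]
)
```

TOPN sorts descending by default, so this returns the rows with the 10 largest SalesTotal values, and SUMX adds them up.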
where SalesTable is the table and SalesTotal is the column in that table containing the top n results to be analyzed.
Well, that was easy enough. However, there is a systemic problem here, and it's mainly data-related. In order to understand why the TOPN function doesn't work the way you might expect, read the following excerpt from the Denali PowerPivot CTP3 help documentation on TOPN:
“If there is a tie, in order_by values, at the Nth row of the table, then all tied rows are returned. Then, when there are ties at the Nth row the function might return more than n rows.”
In English, that means if there are duplicate values in your data, the TOPN function will return the WRONG answer (more rows than you asked for) whenever the Nth value is tied with duplicates. In order to get a meaningful result, then, the data must be converted so that each data point has a unique value. The question is, how do you do that and still retain the integrity of the data? An Excel trick can be used here as a workaround. This method increments each value by a very small but unique amount. The best implementation of this comes in the form of a calculated column with the following formula:
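One plausible form for this [UniqueIncremValue] column (the 0.000000001 multiplier is an assumption you must tune to the scale of your own data, as discussed next):

```dax
=SalesTable[SalesTotal] + ( RAND() - 0.5 ) * 0.000000001
```

The ( RAND() - 0.5 ) term centers the increments around zero, so in aggregate they add up to essentially nothing.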
The goal of this formula is to create a column of unique increments that add up, basically, to zero. There is a balance required in this formula between affecting the data and ensuring that all values in the dataset are unique. The RAND function returns a 15-place decimal, which is the limit in Excel. However, used at full magnitude, the random increment would be large enough to disturb the actual data values. By multiplying the random numbers by a very small number, the increments will not interfere with the individual values, provided those values are not themselves tiny. However, if the multiplier is made too small, not all of the resulting numbers may be unique, depending on the size of the dataset. So, if you are going to use this workaround, you need to test [UniqueIncremValue] to see whether all of its values are unique. The best way to do this is to compare its distinct count with the distinct count of a primary key column in the table. If they are equal, TOPN will work for any value of N.
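That check might look like the following measure, where SalesTable[SalesID] stands in for whatever your table's primary key column actually is:

```dax
UniqueCheck :=
IF (
    DISTINCTCOUNT ( SalesTable[UniqueIncremValue] )
        = DISTINCTCOUNT ( SalesTable[SalesID] ),
    "All unique",
    "Collisions - try a larger multiplier"
)
```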
Now, the original formula will work fine if the new calculated column is used in place of SalesTable[SalesTotal].
It should also be noted that the TOPN function can be used in a DAX formula like the one shown below to produce a BOTTOMN result. Note that this formula takes advantage of the new DISTINCTCOUNT function.
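One way such a formula could be constructed (a sketch, not necessarily the exact version; the measure name and the value 10 for n are assumptions): use DISTINCTCOUNT of the now-unique column to get the row count, take the top (rowcount - n) rows, and subtract their sum from the grand total, leaving the bottom n:

```dax
Bottom10Sales :=
SUM ( SalesTable[SalesTotal] )
    - SUMX (
        TOPN (
            DISTINCTCOUNT ( SalesTable[UniqueIncremValue] ) - 10,
            SalesTable,
            SalesTable[UniqueIncremValue]
        ),
        SalesTable[SalesTotal]
    )
```

Alternatively, TOPN's optional fourth argument (1 for ascending order, versus the default 0 for descending) returns the bottom n rows directly.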
I think you can see that the TOPN function is a valuable new item for your DAX toolbox. Have fun!