by Matt Allington

I am well under way in my career as a Power Pivot Consultant and Trainer.  And I have to say (now that I have delivered a number of Rob’s “PowerPivot for Excel” training classes) that I am finding delivering training to be one of the most rewarding things I do.

It occurred to me recently that people like me (and also Rob, Avi and Scott) who train users in PowerPivot are able to glean useful insights into the way people learn (and incorrectly learn) PowerPivot.  Today I am going to share with you 5 common mistakes that I have personally observed – maybe you will identify yourself in some of them, or maybe you will confirm that you are doing just great.  Either way, it is worth a read to either discover a gap or confirm your skill.

But first: I have trained 2 general types of students

I have found there are 2 groups of students that sign up for my PowerPivot for Excel classes.  There are students who are very new, very green (see what I did there – green, get it?) and are using the class to get started.  They come in with very little knowledge about PowerPivot, but enough to know that this could be something awesome.  In the second group are students who have a reasonable amount of PowerPivot experience under their belt but realise there is more to it.  What counts as a “reasonable amount” can of course vary, but I would classify these people as “active” for 6-26 weeks, with a total number of “invested hours” in the vicinity of 10-60 hours or so.  Often they are struggling to move forward, and this post covers the main reasons why.

I always ask my students to rank their knowledge on a scale of 0 to 10, and they normally rank themselves 0 (newbies) or about 3 or 4.

[Chart: the 2 types of students plotted on a skill-over-time line, with a marker labelled “Italy this way” pointing off the top of the scale]
What has really (pleasantly) surprised me is that both groups of students seem to walk away with a similar positive level of satisfaction with what they have learned.  This is a bit curious, because I am teaching the same content to people with no experience with PowerPivot and to those with many hours of experience.  But what I have observed is that self-taught students in my courses have often missed some of the fundamental principles, or not fully understood certain key concepts.  And the newbies come with an open mind like a sponge to learn, and they generally walk away with a solid foundation from which to continue to learn, but clearly not the experience of the more experienced students.  The biggest risk for the newbies is failure to practice what they have learnt.  I don’t see this in the more experienced group, because they tend to want to get back to work immediately and apply their newfound knowledge to solve problems they have known about but couldn’t solve before.  I have seen the light bulb turn on many times, with a student saying “I just realised I’ve been doing it all wrong!”

The 5 Common Mistakes in the Self-Taught Group

One feature of self-taught learning is that you often don’t get all the right information the first time.  Don’t get me wrong – I think it is fantastic that so many people are teaching themselves.  And if you buy a good book (Rob/Avi’s book or my book) and apply the learnings, then you are off to a great start.  But many people find that supplementing self-learning with instructor-led coaching and tuition can pay big dividends.

Here are the 5 most common mistakes I observe with self-taught students.  If you recognise yourself in any of these, then do yourself a favour and seek out some new knowledge to help yourself become more powerful and effective with PowerPivot.

1. Too many custom columns in Data tables

This is by far the most common sin – there is daylight between this one and the next one.  I think the issue is that so many (most?) of us come from an Excel world; certainly the self-service clients that I am engaged with in my business do.  In the past, we Excel professionals always had to bring all our data into a single monolithic table prior to creating a pivot table, normally using VLOOKUP or INDEX/MATCH as our tool of choice.  And it is a hard habit to break, because we feel comfortable in the world of creating formulae in a table (i.e. custom columns).

Custom Columns are a LOT like Excel, and measures are NOT! So we gravitate to custom columns like a magnet.


If this is you, then please do yourself a favour and stop already!  My best advice to Excel professionals learning PowerPivot is don’t ever ever use a custom column in a data table unless it is technically not possible to do the same thing as a measure/calculated field.  Seek out the skills to create the measures you need instead.  If you need to put the results of your DAX on the Rows, Columns, Filter or Slicer, then calculated columns are the only choice (but it is still better to bring them in from a source DB if possible).  Calculated columns are generally OK in your lookup tables too; however, 99.9% of data table DAX can/should be done as a measure/calculated field.  Alright, I am exaggerating, it is more like 99.8%, but you get the point.
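
To make the contrast concrete, here is a minimal sketch – the Sales table, column and measure names are hypothetical.  Both versions produce the same numbers in a pivot table, but the measure is evaluated at query time rather than being calculated and stored for every row of the model:

    -- the Excel habit: a calculated column, computed and stored row by row in the data table
    =Sales[Quantity] * Sales[Price]

    -- the better way: a measure/calculated field, written once and evaluated in the pivot table's filter context
    Total Sales Amount := SUMX ( Sales, Sales[Quantity] * Sales[Price] )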

If you are not writing these DAX formulae in a ratio (Measures to Calculated Columns) of 999 to 1, then you are a candidate for some remedial “calculated column” behavioural modification!


So the point is this:  If you are coming from an Excel background and you are writing your data table DAX in calculated columns, you are most likely on the wrong track.  I am yet to find a single calculated column in a data table from this group that was the right approach – not 1 single column.

2. Thinking you can incrementally learn off an Excel base

In short – you can’t.  Don’t fall into the trap of thinking “PowerPivot sounds like Pivot Tables, it must be the same”.  I truly believe that anyone who is competent in Excel can become competent in PowerPivot, but I don’t think you can incrementally learn off your Excel heritage.  PowerPivot IS incrementally learnable, but only once you get to base camp.  PowerPivot is not the same as traditional Excel – it is a completely new piece of software that just happens to come bundled with Excel.  You need to invest some time to learn the basics properly so that you have a solid foundation from which you can THEN learn incrementally.  Self-taught is fine, but make sure you get a good book (at a minimum) and do it right.  Just because you can drive a car doesn’t mean you can operate a boat without learning some new fundamental skills – it is a bit like that.

3. Building PowerPivot Data Models like you would a Relational Database

This is a common problem I see with people who have made the journey from Excel to Access at some time in their career (but also in students that come from a relational DB background).  They learnt (in a pre-PowerPivot world) that Access could do things they couldn’t do in Excel.  Those skills are above what the average Excel user has in their arsenal, but they can become a crutch if you don’t learn how and why PowerPivot is different.  I came from this camp, and I wasted quite some time not understanding how relationships work in PowerPivot, and why they are different to Access and other relational database products.  Key mistakes include importing the full snowflake schema from the DB, and missing the opportunity to structure the model around (and understand) the different roles and behaviour of “data tables” and “lookup tables”.

4. Trying to build big DAX formulas in a single step

I have read a lot of books by John Walkenbach – as I am sure many of you have too.  I learnt from John’s books a technique called “mega formulas”, where you create individual parts of the formula piece by piece, and in the end you bring it all together into 1 “mega formula”.  Rob also teaches that you should build your measures up piece by piece.  I actually believe there is no need to take the final Walkenbach step and build a mega formula; instead you can leave your complex formulae referring to the individual intermediate measures.  It is a bit like moving the elephant on your doorstep – the easiest way to tackle it is to chop it up into manageable pieces.


The trouble with DAX – particularly when you are learning – is that there is a lot that can go wrong with a formula.  If your formulae are too big and complex, it can be difficult to even find all the problems so you can make it work.  On the other hand, if you chop the problem up into manageable pieces, you can incrementally build the measure one piece at a time, and finally assemble the finished masterpiece at the end.
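
To illustrate (with hypothetical table and measure names), a year-on-year growth measure might be built and tested as three small pieces rather than one monolithic formula:

    Total Sales := SUM ( Sales[Amount] )

    Total Sales LY := CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Calendar'[Date] ) )

    Sales Growth % := DIVIDE ( [Total Sales] - [Total Sales LY], [Total Sales LY] )

Each interim measure can be dropped into a pivot table and checked before the next one is layered on top – and unlike the Walkenbach approach, there is no need to fold them back into one giant formula at the end.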

5. Not using tried and tested best practices

There are quite a few best practices that I see communicated in all the right places (here and SQLBI.com, to name just two).  I believe the 2 most important best practices for Excel users are:

1. Use proper naming conventions.  Always add the table name before a column name, and never add a table name before a measure name/calculated field (see the short sketch after this list).

  • Column reference:  Table[Column Name]
  • Measure Reference:  [measure name]

2. The second is how you lay out your tables in diagram view.  I will go as far as to say that if you have a SQL RDBMS heritage, then you can set up your tables however you like.  But if you come from the world of Excel (like me and many other new disciples of PowerPivot), then you are well advised to do it the way Rob teaches.  Lookup tables go at the top, data tables go at the bottom.  When you lay out your tables this way, we Excel folk can “visualise” the filters flowing downhill across the relationship from the one side to the many side – not the other way around.
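
And here is the promised sketch of rule 1 (the table, column and measure names are hypothetical – it is only the referencing convention that matters):

    Total Sales := SUM ( Sales[Amount] )    -- Sales[Amount] is a column, so it gets the table prefix
    Total Cost := SUM ( Sales[Cost] )
    Margin % := DIVIDE ( [Total Sales] - [Total Cost], [Total Sales] )    -- [Total Sales] and [Total Cost] are measures, so no table prefix

Written this way, anyone reading a formula can tell at a glance whether each reference is a column or a measure.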

5. (again) Thinking you can passively learn DAX

Edit 11 Nov:  I am adding another big one that I missed.  I can’t call it number 6, so here is 5 again!  This is another mistake I made early on, and I know others are doing the same thing.  You simply cannot learn DAX “passively”.  By that I mean reading a book in bed, watching Rob’s videos etc without touching the keyboard – this is simply not going to work.  You need to practice, practice, practice if you want to make anything you learn stick.  You can practice using your own data, or you can download one of the free databases on the web like Adventure Works and use that as your data source.  But you must use something and practice, practice, practice.

_____________________

So do you identify with any of these common learning mistakes?  If so, all is not lost.  If you move from “don’t know what you don’t know” to “I know what I don’t know”, then you are on the journey to greatness.  If you don’t make these mistakes, then you are already doing just great, so go you good thing (Melbourne Cup on Tuesday, you see).

A question for you: What lessons did you learn “down the track” that you wish you had learnt earlier in your DAX journey?


Matt Allington is a professional Self Service BI expert, consultant and trainer based in Sydney, Australia.


Matt Allington

Matt Allington is a Microsoft MVP specialising in Power Pivot, Power BI and Power Query consulting and training, based in Sydney, Australia. Visit Matt's blog here.

This Post Has 40 Comments

  1. I find #1 interesting; in our shop we have almost the exact opposite problem – not enough calculated columns, which hurts DAX performance – allow me to explain. The VertiPaq engine does not multi-thread particularly well, so in a simple example where we have CALCULATE ( Measure, Filter1, Filter2, Filter3 ), when we run Profiler and watch the DAX run, it runs Filter1, then Filter2, then Filter3, then Measure. If however we unified those filters into a calculated column, it becomes CALCULATE ( Measure, Filter ) and it runs significantly faster.

    Now, we are working on tens or hundreds of millions of rows on SSAS Tabular, but the same design principle holds true when working in PowerPivot – are you RAM constrained or CPU constrained? For us, performance-wise it is almost universally a CPU constraint, and so grouping filters in calculated columns allows for a sort of pre-aggregation which vastly improves DAX performance.
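
    A minimal sketch of that pattern (the table and column names here are invented for illustration):

    -- before: three separate filter arguments, each evaluated in turn
    CALCULATE ( [Measure], Sales[Region] = "West", Sales[Channel] = "Retail", Sales[Status] = "Open" )

    -- a calculated column (call it IsTargetSegment) that pre-combines the conditions into one TRUE/FALSE flag
    =Sales[Region] = "West" && Sales[Channel] = "Retail" && Sales[Status] = "Open"

    -- after: a single filter argument over a 2-value column
    CALCULATE ( [Measure], Sales[IsTargetSegment] = TRUE )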

    I think for learning DAX, and for small-ish workbooks, not using calculated columns makes sense – we often only use them when we are optimizing DAX – but to go so far as to say “don’t ever ever use a custom column unless it is technically not possible to do the same thing as a measure/calculated field” is way too far. Calculated columns are an invaluable tool to have in the toolbox.

    1. Hey Trevor, you raise some good points; however, my post is not giving advice to people who know how to use SQL Profiler and SSAS. I am talking about self-taught people who predominantly come from the Excel world (Rob’s course is PowerPivot for Excel), do not run SSAS Tabular, and do not have the technical capability to run the profiler over their queries – let alone understand the difference between a calculated column and a measure. More often than not they don’t have a clear understanding of the difference between a data table and a lookup table either.

      If you don’t know the difference, I recommend you stick to measures (in your data tables) until you do know the difference. If you know SSAS, and how to use profiling tools to optimise your queries, then you can and should do whatever works.

      What I didn’t make as clear as I should have (and I suspect this is your case) is that it is fine to use calc columns on your lookup tables. I will make a quick edit on that :-).

      1. I am trying to think of the ratio of “times I saw too many calc columns” to “times adding a calc column would improve perf”… but there would be like, a divide by zero error or something. I’d be interested to hear an example from Trevor.

        1. you should be using =DIVIDE() 🙂
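
          (For anyone following along: DIVIDE returns BLANK – or an optional third-argument alternate result – when the denominator is zero, e.g. Ratio := DIVIDE ( [Calc Columns Seen], [Perf Wins], 0 ). Measure names invented, obviously.)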

          I think Trevor’s point was relating to the lookup tables, and certainly my post was less clear on that before I read Trevor’s comments and added some clarifications that I was talking about the data tables.

          I agree with Trevor (and others) that we should not be frightened of calc columns in lookup tables, and clearly the SSAS professionals have tools to understand exactly what works and what doesn’t. I actually found Trevor’s example very interesting and I will be storing that in my ever-increasing knowledge base.

    2. Trevor, the golden rule is lowering the number of unique values to compare in a column. If a calculated column does that, it makes sense; otherwise it doesn’t. A calculated column is usually helpful to filter data, not to split a column.
      So if you have CALCULATE ( [m], t[a] > 100 ) and t[a] has 100,000 unique values (or more, and most of them greater than 100), then a column t[b] = t[a] > 100 will improve the speed of CALCULATE ( [m], t[b] = TRUE ) because you reduce the filter to 1 iteration. So in your example the problem is not really the number of filters that are evaluated in a sequential way, but the overall number of values that result from the filter, which you can reduce if the calculated column already contains the result of the filter condition.
      Too many words to say this: lower the number of unique values you obtain in a filter. Not by units, but by orders of magnitude; otherwise the result is not worth the effort.
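
      Written out as a sketch in DAX, the same idea looks like this:

      -- calculated column in table t: only 2 distinct values, TRUE and FALSE
      t[b] = t[a] > 100

      -- the filter argument now iterates 2 distinct values instead of 100,000
      [m filtered] := CALCULATE ( [m], t[b] = TRUE )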

      1. Marco,

        Thank you for this helpful clarification.

        I have three questions:

        [1] Are you saying that if the Calculated Column reduces the number of unique values, then it has the potential of a “formula calculating speed” advantage because the filter will have to iterate over a smaller number of values in the fact table?

        [2] Does this “formula calculating speed” advantage come from the fact that the columnar database only stores the unique values in the column object? (Column object is probably not the right term here, but I mean the single column that the database uses to store the unique values.)

        [3] In a few cases where I had “flat” Excel tables (about 500,000 rows) with columns containing a very small number of unique values, the file size was dramatically reduced when I imported them into PowerPivot. When I took the same set of flat data, broke it apart into dimension tables and then imported it into PowerPivot, the file size was not much different. In this case, is the effort to transform the “flat” table not worth it, because the columnar database is, in a way, doing that for me when it stores only unique values?

        1. 1) Yes

          2) Yes

          3) Yes from the compression point of view, but with a flat table you cannot do many things you can do with a regular star schema (for example, comparing budget and sales from two different tables by time – sharing a time table is the right thing to do in that case). And from a usability point of view, it is better to have attributes organized in tables instead of a flat list of attributes in a single table.

  2. @Matt,
    Great article… I especially relate to Point 2… For 12 months I thought I would eventually get “DAX” just as I eventually learnt every other thing in Excel… But it was only when I bought the “Book” that the bulb glowed… and it was only after I attended the online course that the bulb glowed brightly!!

    But could you elaborate on Point 3… “Key mistakes include importing the full snowflake schema from the DB”… Not sure what you mean.

  3. Using the Adventure Works database as an example, there is a product table, a product subcategory table and a product category table in the database schema. The data table connects first to the product table, and then there are two further connections to the other two tables. This type of chained connection is generally less efficient in PowerPivot than having the required fields as extra columns in the product table, and it is also less intuitive for users (those who use the field list but didn’t build the model), who have to work out that subcategory and category attributes live in different tables. This is a relational database storage optimisation concept, not an SSAS Tabular or business object concept.

    If Excel users can’t write a SQL view/Access query that simplifies the lookup tables into a single flat lookup table (they don’t know how, don’t have access, or can’t get help from a DB professional), then it is generally better to use calculated columns to bring the required columns into the product table once it is in PowerPivot, and then hide the subcategory and category tables.
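
    As a sketch, the calculated columns in the product table would look something like this (the exact table and column names vary between Adventure Works versions):

    =RELATED ( ProductSubcategory[ProductSubcategoryName] )
    =RELATED ( ProductCategory[ProductCategoryName] )

    RELATED follows the chain of many-to-one relationships, so the category column works even though the category table is two relationship hops away from the product table.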

  4. Thank you for the article. I definitely agree with the skill timeline. Having read many of their articles, “Italy” is off the charts. I would also have to agree with Trevor that we need to be careful about speaking in absolutes to students with respect to calculated columns. To quote what “Italy” has remarked in many articles: “It depends”.

  5. Matt: I can confirm and subscribe to every part of this blog post. Number 2 is the biggest wall for many Excel users. I think it’s an important point to highlight, and I probably didn’t do that enough, taking for granted that people would realise it automatically. Clarifying that upfront could be much better.

    Matthes: rule #1 – the first answer to any question is: “It depends”. Following rules: “It depends”

  6. Matt,
    Just loved your article. I have experienced at least a few of these myself at some point in my own journey and have seen other new users struggle with the same. But very succinctly put! One I can relate to is #3 – with my SQL knowledge I expected the Power Pivot relationships to kinda work like an inner join – boy, was I in for some befuddlement 🙂
    Good thing I now know the magic tricks to make relationships behave that way if I needed to 🙂

  7. Actually there is a big one that I missed. “Thinking you can get good at DAX without writing DAX”. You can’t. I made this mistake myself when I was at Coke, thinking I could read Rob’s book or passively watch Rob’s videos and not get my hands on the keyboard and practice. If you are not writing DAX, you will never be good at DAX.

  8. +1 on the above comments. Having attended the course presented by Matt I can confirm these mistakes were highlighted and best practice approaches provided. Definite lightbulb moments, and hmm, better go back to scratch with the models I had built.

  9. #1 is really an issue for self-trained PowerPivot users like me. It feels much easier to write DAX for calculated columns in the beginning, as you do not need to know much about PowerPivot for that. Writing DAX for measures needs some understanding of how the filters flow, how a single cell in the table uses those filters, and that the subtotal/total is calculated by the DAX formula and is not a sum of the row/column. That’s quite difficult to understand if nobody tells you. I am still struggling sometimes with measures that show me the expected values in rows/columns but not in totals/subtotals.

    1. Yes, your situation is very common and you have described the challenge very well. The problem you are having with totals can be caused by a lot of things, e.g. if you are counting customers that have purchased, with years on Rows: a customer gets counted in each year that they purchased, but they only get counted once in the total, so clearly the numbers won’t add up if even 1 customer has purchased in multiple years. Probably one of the easiest fixes is just to rename the Grand Total to something more meaningful (just click the cell with the Grand Total label and type over it with a new name). In this case, you could rename it “Total All Time” and it would then probably make sense. There are many other things you can do, of course.
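
      To make that concrete, a measure like this (with hypothetical names):

      Customers Who Purchased := DISTINCTCOUNT ( Sales[CustomerKey] )

      counts a customer once in every year row in which they bought something, but still only once at the Grand Total level, so the year rows can legitimately add up to more than the total.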

  10. #5 (again) is my favourite point.

    I had a huge desire to see everything PowerPivot can do, and rushed through the videos / book (“wow, can’t wait to do that!”). I gave in to that desire.

    It wasn’t until I did Matt’s live course a few weeks ago that I realised how many of the concepts I only vaguely ‘knew’ about, and how much I sucked at actually doing them. Because, you know, part of the live course is actually being forced to write DAX yourself. There’s no hiding behind a good cup of coffee. (I hope I haven’t given away too many of your top secret training techniques).

    I have come up with a cunning, novel solution though… go back and do it all over again properly. Do it over and over and over again. If a technique hasn’t become boring and tedious, if it’s still interesting and slightly challenging, if I can’t do it at light speed, then I’d say it’s not truly learnt. I’m aiming for boredom. Boredom is my friend.

  11. Matt,
    Thanks for your blog on the 5 common mistakes. I struggle with mistake #4 most often, but not for the reason you would expect. I find the GUI list of measures (bottom half of the screen in 2013) very annoying as I build formulas. I cannot group or move measures, which makes it difficult as I build an “elephant”. With a long list of measures I will inevitably duplicate or misplace a measure. Are there techniques that I should be using? Better yet, is this something we can change? A more flexible environment to manage measures would go a long way for novice DAX developers.

    Ron

    1. Yes Ron, the UI is horrible. Microsoft, please take note! Apart from waiting for Microsoft to improve the experience, there are 2 things you can do. Firstly, get used to a good naming convention, e.g. start all your aggregate measures with Total Sales…, Total Margin…, etc. Consistency is key, and you should go back and fix it if you stray off track.

      The second thing you can do (in conjunction with the first) is download and install DAX Studio. If you follow the instructions here https://powerpivotpro.com/2012/10/other-better-ways-to-get-all-measures-as-text/ you can create an export of all your measures into Excel. This will help you understand what you have got, and also help you go back and fix ones that have not followed your naming convention.

  12. I just finished writing a “mega” formula. OK, not that mega.
    I built it in pieces using the steps suggested by the Italians for understanding M2M
    – you know… COUNTROWS, not just tables, as filters.
    I tested each step to make sure I understood what was going on.
    I have already built many M2M measures so I am familiar with the concept.

    Once done, the measure was in multiple pieces and worked… slowly.
    I reworked it into a mega formula without all the COUNTROWS and the performance increased by an order of magnitude.

    My main bridge table has less than 1M rows and my main fact table has less than 0.5M rows.
    Small stuff, relatively.

    But the single-line formula was much more efficient in this case. I was a bit surprised myself.
    I can post the steps and pattern if anyone cares.

    All of my ideas are stolen from the work of others on these posts and the texts.
    The patterns work, and I combined them with the idea of duplicate dimension tables and inactive relationships. Those last bits needed a step-by-step approach.

    But… the final formula was a simple mega formula and was much faster.

    1. That is indeed curious. When you say faster, are you comparing 1 formula in a single pivot table at a time, or are you comparing 1 mega formula against a pivot table that contains all the interim measures plus the final formula?

      1. Here is the mega formula. Formatting courtesy of the Italians.

        Bookings Linked MEGA :=
        CALCULATE (
            [Bookings All Time],
            USERELATIONSHIP ( Bookings[Sales Order Num], 'Sales Order Number Alt'[Sales Order Num] ),
            CALCULATETABLE (
                'Sales Order Number Alt',
                CALCULATETABLE (
                    'SO Number Linked',
                    CALCULATETABLE ( 'Sales Order Number', 'Bookings' )
                )
            )
        )

      2. Here are multiple formulas. You may have to copy and reformat.
        I could not figure out how to format correctly in reply.

        Bookings Linked :=
        CALCULATE (
            [Bookings All Time],
            USERELATIONSHIP ( Bookings[Sales Order Num], 'Sales Order Number Alt'[Sales Order Num] ),
            FILTER ( 'Sales Order Number Alt', [Prior Order Count] )
        )

        Prior Order Count :=
        COUNTROWS (
            CALCULATETABLE (
                'Sales Order Number Alt',
                FILTER ( 'SO Number Linked', [SO Linked Count] )
            )
        )

        SO Linked Count :=
        CALCULATE (
            COUNTROWS ( 'SO Number Linked' ),
            FILTER ( 'Sales Order Number', [SOA1_cnt] )
        )

        SOA1_cnt :=
        CALCULATE (
            COUNTROWS ( CALCULATETABLE ( 'Sales Order Number', Bookings ) )
        )

  13. I build too many calculated columns because of my Excel heritage.
    I also have a performance profiling heritage .. but the code I used to write is not compiled into more code 🙂 It gets compiled into hardware so cost = transistors. It literally costs you more to build.

    I built some calculated columns to improve performance especially when cardinality was low.
    Sometimes it helps performance, sometimes the optimizer is smart and a measure is just as fast.
    I think a calculated column is OK for beginners when they don’t understand a measure or the X functions. Getting the right answer with less coding effort is a benefit… as long as performance is not practically impacted. BTW, I learned Excel first, then DAX… and then SQL. I eventually had to learn to speak database, but I know limited dialects. I can speak to my IT department technically after some reading.

  14. Even better than calculated columns …. but not for newbies.
    Use SQL to do your heavy lifting in ETL. I use the word ETL loosely …
    I think Alberto used it this way in a presentation.

    I prebuild tables using SQL instead of DAX. The language is not important.
    The point is the table is prebuilt instead of on the fly.

    One recent example, the table takes almost 50s from cold cache.
    I update this table once a day by rerunning the query.
    This is a simple piece of SQL but creating it on the fly may be too much in a tabular model that needs to be responsive.

    When I get time, I will use SUMMARIZE and GENERATE to recode the SQL into DAX, but the amount of time to translate the code… I am not sure it is worth it just to have an all-DAX solution.
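
    Something like this DAX query shape is what I have in mind (a sketch only – the table, column and measure names are placeholders – run from DAX Studio or a linked-back table):

    EVALUATE
    SUMMARIZE (
        Sales,
        'Date'[Year],
        Product[Category],
        "Total Sales", SUM ( Sales[Amount] )
    )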

    The SQL is not optimized, but it is simple and brain dead. 50s once a day… may be good enough.
    I suspect that the DAX will be fast if I code it carefully.

  15. I have 1 PowerPivot model with a half dozen tables.
    I coded measures using COUNTROWS to traverse backwards from the fact table to the lookup table. Each of these COUNTROWS creates an intermediate measure that gets used in a filter for a FILTER or CALCULATETABLE ( FILTER ).

    I built the individual measures and then tested them in intermediate tables.
    The final combination worked!

    The mega formula was just several (3?) nested CALCULATETABLE calls.
    It would have taken more time to debug all those nested tables.

    The mega formula gives an answer as soon as I touch the model.
    The multi-step formula makes you look at your watch… 3s is the critical number, right?

  16. Matt, this was my attempt to reply to your question.
    Not sure I answered it. I did not mean to create a new post.

  17. The most important is #5 (again). Also, Adventure Works cannot help you enough until you use your own data to solve problems – at least in my case. My journey in PowerPivot & other Power BI tools went like this:

    I began by reading this blog and applying some easy DAX formulas. Afterwards I read Rob’s book, and I believed I had a good understanding of DAX and PowerPivot, as most common reports are based on simple formulas.

    I live in Europe, unfortunately (& fortunately – read on), so I could only have Rob’s video lessons. So I took the 1st video course. “Oooh, DAX can do all that?” (my thought). That was more than enough for most cases. I could even apply the DAX patterns (the Italians’) and make some adjustments. I was really happy and believed I was really good. In reality, on your scale I was at 7.

    (Fortunately now) I live in Europe, so I had the chance to attend Marco’s Advanced DAX course…
    Revolution in my mind… how the DAX engine works… context transition, table materialisation & filter propagation – filters flow everywhere – and plenty more.

    It fundamentally (after “it depends”, I believe, Marco’s most used word 🙂) changed the way I understand DAX… Now I am on the “Italians this way” path in your graph.

    Conclusion – I totally agree that attending a course in person is the best thing to do… but, as you said, you need to get your hands dirty writing DAX.

    P.S. The most important lesson was watching Marco passionately solve a business scenario example live in the course, not giving up until he found the right formula, even though it was only a question he could have avoided… Then you realise why people like the Italians, you (Matt), Rob, Scott, Avi & others are great at what they do.
    PASSION & LOVE for your job & DAX.

  18. I love the diagram that says “Italy this way.” The Italians (Ferrari and Russo) are in fact “off the charts” in terms of what they can do with PowerPivot. I bought one of their books on data modeling, and I am impressed to say the least.

    1. Hi Mike. Thanks for mentioning that book. I provided a link for Rob’s book but not the one you referred to, which is this one: http://xcbs.com.au/TheItalians  This is definitely the second book I would recommend for Excel users. In my opinion it can be the first book for SQL professionals, but it should be the second book for Excel people. I have learnt so much from this book!

  19. Hi,

    I installed DAX Studio 2.5.0.

    I did not get an Add-Ins menu for DAX Studio on the Excel ribbon.

    I tried to add it through COM Add-Ins in Excel, but still no result.

    I launched DAX Studio directly, but I am getting only the Tabular Server connect option and nothing else.

    Please give me the solution 🙂 to get the DAX Studio add-in on the Excel ribbon, and to get the Power BI Designer option in the DAX Studio connect window.

    I am using Excel 2013.
