
An Excel Pro’s Two Year Journey in Database Land

In a post last week I mentioned that when you use a real database as a source for PowerPivot, a number of unexpected doors open up.  As a longtime Excel pro who has spent the past two years working closely with a database team, I can tell you that it’s been an eye-opening experience.

So I have been advocating to Excel/PowerPivot pros for quite some time that they cozy up to their database teams (if they have them).  Of course, this works both ways, and I have also been advocating to Database/BI pros that they embrace and cooperate with Excel/PowerPivot pros.

Awareness, Flexibility, and Sometimes…  Outright Resistance

It’s an interesting ambassadorship.  On one hand, Excel pros are often accustomed to doing everything on their own, with no external help whatsoever, and so it’s mostly a matter of opening their eyes to benefits they’ve never imagined. 

But the other camp often doesn’t expect to learn anything new and relevant about the role of Excel, that unruly little brother of “real” BI.  In conversation, that camp splits into two subgroups rather quickly – one that is willing to consider new ideas and possibilities, and another that regards Excel as the devil.  Interactions with that second group can get ugly.  I also tend to fear for them a bit.

Today, I’m going to focus on the Excel crowd and explain why a db pro can make a huge difference in their lives.  While I have danced around the topic for two years on the blog, this will be my attempt to provide a definitive list of the benefits.

And you die-hard Excel haters out there, don’t worry, I’m coming back around to you soon.  ;)

Benefit #1:  Data Shaping is Easier in Databases.  MUCH Easier.

Hey, Excel is just a collection of individual cells when you come right down to it.  Cells go into formulas, and cells come out.  But turning a short, wide table into a tall, skinny table (or vice versa), for instance, can in some cases consume the better part of an Excel pro’s day.  That same operation might take a db pro five minutes or less.  And even better, the next time it might take them no time at all…
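To make that concrete, here’s a minimal sketch of the kind of “unpivot” a db pro might write.  The table and column names are made up for illustration, and I’m using SQLite syntax (via Python, just so it’s runnable); the exact flavor varies by database – SQL Server folks would likely reach for UNPIVOT instead.

```python
import sqlite3

# A short, wide table: one row per product, one column per month.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE WideSales (Product TEXT, Jan REAL, Feb REAL, Mar REAL)")
conn.execute("INSERT INTO WideSales VALUES ('Widget', 100, 120, 90)")
conn.execute("INSERT INTO WideSales VALUES ('Gadget', 80, 75, 110)")

# The "unpivot": one UNION ALL branch per month column turns the
# wide table into a tall, skinny one -- one row per (Product, Month).
tall = conn.execute("""
    SELECT Product, 'Jan' AS Month, Jan AS Amount FROM WideSales
    UNION ALL
    SELECT Product, 'Feb', Feb FROM WideSales
    UNION ALL
    SELECT Product, 'Mar', Mar FROM WideSales
    ORDER BY Product, Month
""").fetchall()

for row in tall:
    print(row)
```

Two products and three month columns become six rows.  The same few lines of SQL handle two columns or two hundred, which is exactly the kind of reshaping that eats an afternoon in raw Excel.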

Benefit #2:  Auto-Refresh!

Once the db pro has a script in place (or query or view or sproc or whatever it is that they deem best), it can now be run automatically in response to your refresh request from PowerPivot. 

If you are running PowerPivot on your desktop, hey, now it’s just one click (refresh) to pull in the latest, properly-shaped data.  But even better, if you have access to a PowerPivot-enabled SharePoint server, you can put your workbook up there and schedule it to refresh itself!

Even if the manual shaping that you do today only takes you a few minutes each time, it’s still worth it to outsource it to a db.  The SharePoint option means you don’t have to do anything each day to merely update the reports, and the difference between “small” and “nothing” is huge.  Like, “you can go on vacation without repercussions” huge.  Or “you don’t have to come in early each day” huge.
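As a sketch of what “a view in place” buys you (again in runnable SQLite-via-Python form, with hypothetical names): the shaping logic lives in the database, written once, and the query the workbook refreshes against never changes – new rows simply flow through.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE WideSales (Product TEXT, Jan REAL, Feb REAL, Mar REAL)")
conn.execute("INSERT INTO WideSales VALUES ('Widget', 100, 120, 90)")

# The db pro writes the shaping logic ONCE, as a view.
conn.execute("""
    CREATE VIEW TallSales AS
    SELECT Product, 'Jan' AS Month, Jan AS Amount FROM WideSales
    UNION ALL
    SELECT Product, 'Feb', Feb FROM WideSales
    UNION ALL
    SELECT Product, 'Mar', Mar FROM WideSales
""")

# Later, new source data arrives...
conn.execute("INSERT INTO WideSales VALUES ('Gadget', 80, 75, 110)")

# ...and the refresh query is still just "SELECT * FROM TallSales".
# No manual re-shaping, no edited formulas -- the view does it all.
rows = conn.execute("SELECT * FROM TallSales").fetchall()
print(len(rows))  # 6 rows: 2 products x 3 months
```

In real life that SELECT is what PowerPivot (or the scheduled SharePoint refresh) runs against your server; the point is that the human does nothing per-refresh.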

Benefit #3:  Quality

Did you know there are entire conferences devoted to the topic of spreadsheet errors?  Spreadsheet errors are a fact of life in traditional spreadsheets, but they are primarily due to specific problems that a good PowerPivot “ecosystem” eliminates.  One is the lack of convenient named references – “what did D$14 refer to again?” crosses the inner monologues of Excel pros worldwide millions of times a day.  PowerPivot fixes that – everything is referenced by table/column/measure name.

But raw repetition is the real killer.  If you perform the same spreadsheet task every day for a year, pure statistics tells us you will make mistakes.  And if the task is tedious, you will make even more.

The lack of “portable formulas” in traditional spreadsheets is an underappreciated source of repetition.  Even a single iteration of a reporting task explodes into repetitious subtasks and provides lots of opportunity for error.  (PowerPivot fixes that one too).

So we are left with raw repetition – performing the same task every day – as our primary source of error.  Once you have PowerPivot, data shaping is the biggest source of repetition.  Outsourcing that shaping logic into a database, then, doesn’t just save time.  It prevents mistakes. 

Why?  Because the db logic is written only once, and it doesn’t change no matter how many times you run it.  Even if mistakes were made in the db logic, you will catch them sooner or later (usually immediately), and once you fix them, they stay fixed.

Come back Thursday for part two, with items four through seven.

Rob Collie


One of the original engineering leaders behind Power BI and Power Pivot during his 14-year career at Microsoft, Rob Collie founded a consulting company in 2013 that is 100% devoted to “the new way forward” made possible by Power BI and its related technologies. Since 2013, PowerPivotPro has rapidly grown to become the leading firm in the industry, pioneering an agile, results-first methodology never before seen in the Business Intelligence space. A sought-after public speaker and author of the #1-selling Power BI book, Rob and his team would like to help you revolutionize your business and your career.

This Post Has 8 Comments
  1. For your fact tables in powerpivot, is it better to connect to database tables or a view that’s already done the table joins for you?

    1. Hi Karl, I am also very interested in the answer to that question. Can Access DB help those of us who do not have access to sharepoint, sql etc.

  2. @ Avi. What does that mean exactly? What does a view that’s already joined for you mean, and what does it mean not to flatten? Thanks.

  3. I was being more cryptic than I intended. Alex, “not joined” means do not do a SQL join between the Data and Lookup tables (okay, Fact and Dimension tables if that naming works for you). Not flatten means the same thing – not to flatten all Data/Lookup tables into a single wide table. For more try

    Or in our book
    Chapter 10: Thinking in Multiple Tables

    Power On!

  4. Avi,
    How would you handle a .csv file with 98,000 rows and about 1950 columns? (This is a real file generated by the US Dept of Ed). I used Power Query to split the csv into 2 tables, loaded both tables into Data Model Power Pivot, Now I want to import and manipulate them in SQL Server (community version). I successfully imported the 2 tables using the import/export tool in SSDT. Is the data now stored in SQL Server? Does SSDT “sit on top” of SQL Server, or is SSDT still referring to Data Model Power Pivot?
