SQLPam's Blog

April 18, 2012

SQL Saturday #111 recap

Filed under: Community,SQL Saturday — sqlpam @ 2:50 pm

I was very honored to be accepted to speak at SQL Saturday 111 in Atlanta. There were so many submissions and such well-known names that I was grateful to be included in such company.

I took Friday off – since this was supposed to be a shoestring trip. I left early and drove up, stopping to visit a longtime family friend in Buford – about 45 minutes from the event. I had a nice visit and changed into SQL Saturday gear for the speaker dinner. My GPS got me there with no mishaps.

I got to the dinner and there were so many familiar faces it was hard to know where to start. I won’t even try to ID them all here. I got my hugs from Karla, Stacia, Jessica, and Aaron. The room read like a who’s who list for the SQL community. I sat down to a delicious dinner sharing the company of Stuart and Laura – a volunteer for the event. We discussed the ins and outs of organizing a SQL Saturday and family while enjoying a sumptuous dinner. I had prime rib – but it was a tough decision.

After eating, I touched base with a number of my SQL Family – so many I usually only see at Summit. It had been a very long day. Although the party moved on to parts untold here – I opted to go straight to the hotel. I still had to pull together last minute stuff for my presentation.

The next morning, I arrived a little late, checked in and decided to check out the sponsors. Some were old friends, and a few became new friends before the day was over. As a SQL Saturday organizer, I know sponsors are very important to cultivate. They are the backbone of the event. So I make sure they know I appreciate the efforts they are making. Atlanta had a wonderful setup for the sponsors – they had 2 rooms with snacks to draw the traffic and room to move around. Have I said yet that I was envious of their venue? For the record, I would love to have one like it here in Tampa. I got tied up meeting with sponsors and new faces that went with well-known names, so I ended up missing the second session. But there was so much great SQL Family time going on that I was OK with that.

I had lunch with some friends and got everything I needed together so I was ready for the next presentation and then mine. I sat in on Andy Leonard’s session on his SSIS Framework. He blows me away with his ability to make it seem simple while all the time showing humility. He is one of the few in my SQL Family that I really admire and look up to. Thanks Andy for setting a wonderful example.

Then it was time for my session. It had been discussed earlier that this was the dreaded time slot. Everyone was settling into a stupor with lunch hitting bottom and the previous All Star time slot finishing up. I had prepared. I had even presented the session twice before. Both times I felt good after the presentation was over. But this time nothing flowed easily. The timing seemed awkward. The material was presented – I got a number of thanks afterward. Several who had seemed bored in the session actually let me know the parts that were aha moments for them. It might have been that I was so tired – whatever it was – I was glad it was over when it was over. I am going to have to spend some real time with this session to make it more comfortable.

Moving on – I was glad to attend Jessica Moss’ presentation on Report Parts. I learned a lot. I have not done any work with Report Builder, but it is something I need to get familiar with so it is an option I can offer my clients. I am impressed with Jessica. We both took a class on presenting way back when – prior to SQL Saturday #1. I have watched her grow and far exceed any level I aspire to achieve. She engaged the room and knew her material inside and out. It was a pleasure to attend her session.

That was the last session so we all moved outside. The sponsors pulled raffle tickets. Many of us were very thrilled to see so many volunteer shirts running up to claim prizes. They deserve more than most to take something tangible home from the event.

We moved to the after event. I was pleasantly surprised at the number of people that showed at the after party. Thinking back, the die-hard partiers were those that had to head back to the hotels. I think Aaron was the only local to hang out at the next location – what a trooper!

In all it was a wonderful day. I learned a lot. I met many new people – some I have followed or seen on Twitter – while others were brand new to me. I got to connect with cherished friends and SQL Family I usually don’t see outside of Summit.

A special thanks to Audrey Hammonds for heading up the organization of SQL Saturday 111, as well as the entire team of people who worked to make this happen. Thank you one and all.



April 11, 2012

Monster Reports – Part III – Disparate Controls

Filed under: Presentations,SSRS — sqlpam @ 8:29 pm

This – as the name states – is Part III in a series on My New Presentation – Taking the Scary out of Monster Reports. Previously, we discussed that there are 2 types of monster reports: Disparate Data and Disparate Controls. This article will discuss Disparate Control reports, some of their challenges and some ways to get around them.

With this type of report, I usually start by defining the components that will go on the report. If the user has supplied a detailed version of what they want I know how to proceed. If not, I need to ask myself and sometimes the user, some of the following questions:

  • Is the report portrait or landscape?
  • Will I need more than one page for the report?
  • How many columns are needed across a page?
  • Can the different components be sized close to the same width? There might be different column widths on a page, allowing you to group the components into the columns by width.
  • Do the different components have a relatively fixed height, or will they grow with data? Top X reports are relatively fixed – as are set aggregates. A list based on parameters may grow beyond your page.

You may need to work with the user to come up with a clean design. Knowing the column widths up front is critical for a clean report before you start the next step of building the individual components.

I do recommend building the different components individually. This does three things:

  1. It allows me to work in a more controlled environment without worrying about messing up something else.
  2. It adds to the catalog of reports I can offer my users.
  3. It builds a “library” of reports I can pull onto my Monster reports.

Item one is really big on my list. It really pains me when I have a report close to finished and then mess it up by adding something that doesn’t work. By breaking the components out individually, I get the luxury of a clean sand box. I know that what I am attempting is possible. Remember that some things are easier to accomplish than others. This is where we find that out.

Item two tends to be high on my clients’ list. In addition to the Monster report, my user often wants to be able to print the individual components. I will often place these in a subfolder and adjust the parameter settings to enhance the user experience. If I am implementing the components on the Monster report as sub reports, these user-facing copies are not the deployments the Monster report accesses. The sub reports it uses are usually deployed so they are not visible to the average user. This allows the user to share a small portion of the Monster report without exposing everything. It is a nice bonus for them.

Item three is my primary reason for breaking these out into individual reports. Basically, I am creating the Lego blocks I will be using to build my report. If I need a red block – I can easily grab it and implement. It has my data sets and layouts to make my life so much easier. Most importantly – I now know they actually work.

Now that I know what components I need, I need to determine how I am going to fit them together. I have primarily used two methods: sub reports called from a table on the Monster report, or including everything on the base layout.

A table holding sub reports is usually the easiest method. There are two things to consider. First, if the heights of the controls are not consistent, there will be gaps when one control takes up more vertical space than the other controls on a row. Remember that SSRS does not allow you the luxury of spanning rows. But let’s assume that all your components are the same size. Second, we need to handle the parameters. When calling sub reports, the parameters are expected to come from the table’s defined dataset. In our case, we need to create a data set that defines our parameters as the columns of the main grid’s data set. Now we are set for a fast build. I usually define one sub report and copy it to the other cells. I then make changes based on each component’s parameter needs. This makes pulling it together very fast and very easy. It just needed a little prep.
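As a minimal sketch of that main-grid dataset, here is what the query might look like. All table, column, and parameter names here are illustrative – the point is that each row carries every value the sub reports in that row will need as parameters:

```sql
-- Hypothetical dataset for the main grid: one row per table row,
-- with the values each sub report needs exposed as columns.
SELECT  r.RegionID,                -- maps to the sub report's @RegionID parameter
        r.RegionName,
        @StartDate AS StartDate,   -- report-level parameters repeated on every row
        @EndDate   AS EndDate      -- so the sub report cells can reference them
FROM    dbo.Region AS r            -- assumed table; substitute your own source
WHERE   r.IsActive = 1;
```

In each sub report placeholder, the parameter mapping then just points at the matching field, e.g. `=Fields!RegionID.Value` or `=Fields!StartDate.Value`.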

The other method is a little more solid – less exposed blank space. I start by adding each individual dataset to the report. If I am reusing a component with different parameters, I need multiple datasets. At this point I move to the layout, where I can start dropping copies of the original components onto the report.

If I drop the components directly onto the base layout, I will encounter the same blank space issue as I had with the table. However, if I place rectangles on the base report and drop a copy of the component into the rectangle I have fewer issues with blank space. The rectangles allow each “column” to grow vertically independent of the contents of the other rectangles. The rectangles can also help set the page breaks if needed.
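To make the rectangle approach concrete, here is a rough sketch of what one such rectangle looks like in the underlying RDL (2008 schema); the name, position, and sizes are purely illustrative:

```xml
<!-- One "column" rectangle on the base layout; each rectangle grows
     vertically on its own without disturbing its neighbors. -->
<Rectangle Name="rectLeftColumn">
  <ReportItems>
    <!-- drop the copied component (tablix, chart, sub report) here -->
  </ReportItems>
  <KeepTogether>true</KeepTogether>
  <PageBreak>
    <BreakLocation>End</BreakLocation>  <!-- optional: force a break after this column -->
  </PageBreak>
  <Top>0in</Top>
  <Left>0in</Left>
  <Height>2in</Height>
  <Width>3.5in</Width>
</Rectangle>
```

You would normally build this by dragging rectangles in the designer rather than editing the RDL by hand, but seeing the structure helps explain why the contents stay independent.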

Don’t get me wrong, Monster reports are seldom simple. But with the previously outlined steps, I hope you find them less “Scary”.

You can find the files associated with this presentation at: http://sdrv.ms/II1Lcn.

April 1, 2012

Monster Reports – Part II – Disparate Data

Filed under: Presentations,SSRS — sqlpam @ 2:23 pm

This – as the name states – is Part II in a series on My New Presentation – Taking the Scary out of Monster Reports.  Previously, we discussed that there are 2 types of monster reports: Disparate Data and Disparate Controls.  This article will discuss Disparate Data reports, some of their challenges and some ways to get around them.

When I get a request for one of these reports, I envision someone sitting there with stacks of reports, a pair of scissors and lots of tape.  More likely – it was tons of Excel spreadsheets and utilization of the cut and paste options – but the effect is the same – they took the totals from a lot of reports and smashed them together into a single report.  I can understand why they want them – but it is the expectation that these smash-up reports will run as fast as one of the original reports and not tax the system in the least that usually gets to me.

So let’s state the obvious – in a perfect world – the data for these reports would all come from cubes and we would be able to easily pull this data using MDX.  That world does exist for the rare few – but not for most of us – so we get over it and roll up our sleeves.

The trouble here is that now we have to run the queries that created each of those spreadsheets we saw earlier.  If we are lucky – we have the resulting spreadsheet to work from.  It holds the clue to how our report will work.  If not, it might save some time to build that spreadsheet – not really the data – but the layout.  It will tell us what groupings we need, the row layout for each grouping, as well as the formatting of each line.

So for disparate data reports – it is all about the data – more than anything else.  So we need – for the most part – a single stored procedure to return all the data for the main grid on the report.  I usually have to play with how this will happen at each client’s site – because everyone’s data is different.  If the data is not massive, CTEs may be the best solution available; others might need table variables or even temp tables.  The old standby – “it depends” – is very much in play here.  But the need is the same – accumulate the data into a single query-able source.
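A skeleton of such a “monster” procedure, using a temp table as the single query-able source, might look like the following.  Every table, column, and key name here is made up for illustration – each INSERT stands in for one of the original spreadsheet queries:

```sql
-- Sketch only: accumulate disparate aggregates into one result set.
CREATE PROCEDURE dbo.rpt_MonsterSummary
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #ReportData
    (
        DataKey  varchar(30)    NOT NULL,  -- short key linking rows to the control table
        RowLabel varchar(100)   NOT NULL,
        Amount   decimal(18, 2) NULL
    );

    -- one of the original "spreadsheet" queries
    INSERT INTO #ReportData (DataKey, RowLabel, Amount)
    SELECT 'SalesByRegion', r.RegionName, SUM(s.Amount)
    FROM   dbo.Sales AS s
           JOIN dbo.Region AS r ON r.RegionID = s.RegionID
    WHERE  s.SaleDate BETWEEN @StartDate AND @EndDate
    GROUP BY r.RegionName;

    -- another, completely unrelated, query feeding the same source
    INSERT INTO #ReportData (DataKey, RowLabel, Amount)
    SELECT 'OpenOrders', c.CustomerName, COUNT(*)
    FROM   dbo.Orders AS o
           JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID
    WHERE  o.OrderStatus = 'Open'
    GROUP BY c.CustomerName;

    SELECT DataKey, RowLabel, Amount
    FROM   #ReportData;
END;
```

Whether each block is a CTE, a table variable, or a temp table insert is the “it depends” part – the shape of the final single source is what matters.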

It never fails – I get everything into the ONE SOURCE when the client comes back to me and says, “I have a few minor changes; I need to change the order on some of this and add the following…”.  Let’s just plan on this happening up front – because it will.  Let’s add a “control table” to the mix.  This is a table that contains data about the different data areas that will be displayed on the report and the attributes of each of these areas.  This is my blueprint for the report, and it is usually a work in progress throughout the life of the report.

For my demo, to promote self-containment, my control table is actually a CTE.  I find a table infinitely easier to work with – but the demo has other needs.  The control table will serve a multitude of purposes.  It will control the order of the data being displayed and the format it is displayed in, and it serves as documentation for anyone coming behind me on the report.
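As a sketch of the CTE form used in the demo – with invented keys and attributes – the control data can be spelled out inline like this:

```sql
-- Illustrative control "table" as a CTE; in production a real table
-- is easier to maintain, since changes don't require redeploying code.
WITH ReportControl (DataKey, DisplayName, SortOrder, FormatString, ToolTip) AS
(
    SELECT 'SalesByRegion', 'Sales by Region',         10, 'C2', 'Gross sales for the period'
    UNION ALL
    SELECT 'OpenOrders',    'Open Orders by Customer', 20, 'N0', 'Orders not yet shipped'
)
SELECT DataKey, DisplayName, SortOrder, FormatString, ToolTip
FROM   ReportControl;
```

Reordering the report, renaming a section, or changing a number format then becomes a one-line edit here rather than a change to the report layout.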

In the monster stored procedure, for each queried bit of data, I include a short description of the data being collected.  This is in verbiage – because it is easier to relate to than numbers – but again, I keep it short.  This is going to be my link between the stored procedure data and the control table.  In the control table, I will have a more verbose description that gets displayed to the end user – that can be easily changed on a whim.  I will have a number that represents the order in which the data is displayed.  From here the sky is the limit – anything I need to track can be included here.  My format is usually a must, but I often include source data information and tool tips.  Note that multiple fields may be required for verbose tool tips, because tool tips do not wrap.

The final SELECT statement in the monster stored procedure joins the collected data to the control table and imposes order while adding the elements needed for the report.
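Carrying on with the illustrative names from above (a `#ReportData` temp table and control data in `dbo.ReportControl` – or the demo’s CTE), that final SELECT is a plain join:

```sql
-- Join the accumulated data to the control table: the control table
-- imposes the order and supplies the display attributes for each row.
SELECT  c.SortOrder,
        c.DisplayName,
        c.FormatString,
        c.ToolTip,
        d.RowLabel,
        d.Amount
FROM    #ReportData AS d
        JOIN dbo.ReportControl AS c ON c.DataKey = d.DataKey
ORDER BY c.SortOrder, d.RowLabel;
```

The report never sees the short `DataKey` values – it consumes the friendly names, sort order, and formats the control table attaches.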

In the report itself, the source data set will be based on our monster stored procedure.  We will add the groupings with headers and footers as needed.  We may need multiple detail/header/footer rows to accommodate the multiple layouts involved.  The control table will provide the key as to which will be visible for each row displayed.  A critical note – if this is being exported to Excel prior to 2008 R2 – the multiple layout rows need to be kept to a minimum as each row is visible in the Excel export.  The control table can also provide the extras such as tool tip verbiage.
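To show how the control table can drive which layout row is visible, here is one way to wire it up, assuming the dataset carries an invented `LayoutType` field from the control table join.  These are SSRS property expressions, not standalone code:

```
' Hidden property of one of the detail layout rows: show this row
' only when the control table flags it with this layout type.
=IIF(Fields!LayoutType.Value = "Detail2", False, True)

' Format property of the value textbox, driven by the control table
' (FormatString is an assumed field, e.g. "C2" or "N0").
=Fields!FormatString.Value

' ToolTip property, also fed from the control table.
=Fields!ToolTip.Value
```

Each extra layout row gets its own Hidden expression keyed to its layout type – which is exactly why keeping the number of layout rows small matters for older Excel exports.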

So for me – the control table is a must have when dealing with disparate data reports.

The next in the series will be dealing with Disparate Controls.
