Archive for February, 2010

Shared Data Sets – The Game Changer

February 15, 2010

Nothing like a snappy headline to get your attention. A nice surprise I stumbled upon while fooling around with the November CTP (of SQL Server 2008 R2) is something called Shared Data Sets. I now believe it is a feature I will use a lot in the future, as it has many advantages. But first, what is it?

 

Quite simply, it is a dataset that you can create independently of a report and then deploy to the Report Server. Once that is done, the following interesting scenarios become available to us:

 

· You can quickly create reports without worrying about the query, as long as the dataset serves the needs of multiple reports (see the sketch after this list).

· The dataset can be cached to provide better performance, while at the same time reducing the load on the SQL Server across the execution of multiple reports.

· It leads to interesting development scenarios, such as datasets being set up and managed by developers, and report writers simply using them in Report Builder without having to worry about the back end of a report. This scenario further builds on the “self-service” concept that Microsoft is pushing.
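
To make the first point concrete, here is a minimal sketch of the kind of query that lends itself to sharing. The table and column names (dbo.Sales and friends) are made up for illustration; the idea is one parameterised result set that several reports can slice differently with their own filters and groupings.

-- Hypothetical query behind a shared data set. Several reports can
-- consume this one result set and apply their own filters/groupings.
SELECT
    s.OrderDate,
    s.Region,
    s.ProductName,
    s.Quantity,
    s.SalesAmount
FROM dbo.Sales AS s
WHERE s.OrderDate >= @StartDate
  AND s.OrderDate <  @EndDate;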

 

Creating a Shared Data Set

The first step is to fire up Report Builder 3. This can be done from the Report Manager website. You will then see a new screen asking what you would like to create in Report Builder. On the left, select Shared Data Set.

On the next screen, you will be asked to choose a data source. Hence we see that to use a Shared Data Set, you will first have to deploy a Shared Data Source (something we could always do). Select one that you have (or create one quickly). You may be asked to supply credentials to connect to the source.


On the next screen you will see, depending on the type of source, a query builder for either SQL Server or Analysis Services (I haven't tested this with other data sources yet). You can even call a stored procedure from your Shared Data Set. Once you have defined the query, you can click a button called Set Options to define fields, parameters and filters for the data set. At any time you can change the data source connection by hitting Select.
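
As a sketch of the stored procedure route (the procedure and parameter names here are invented for illustration), the shared data set's query could simply be EXEC dbo.usp_SalesByRegion @Region, with the procedure defined along these lines:

-- Hypothetical procedure wrapped by a shared data set.
CREATE PROCEDURE dbo.usp_SalesByRegion
    @Region nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderDate, ProductName, Quantity, SalesAmount
    FROM dbo.Sales
    WHERE Region = @Region;
END;

The @Region query parameter should then surface as a data set parameter on the Set Options screen.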


That’s it for creating it. You can then choose to save it on the Report Server itself. I like to create a folder called Data Sets and keep them there.

 

Now you can create some reports that use the shared data set. Create a report as normal, but instead of first defining a data source, you can now jump straight to creating a data set. Select the radio button called “Use a Shared Data Set” and you will be able to hit Browse and choose one from the server. Hit Refresh Fields and you’re good to go!


After deploying your report, you will see that it works as normal. If you navigate back to the dataset (in Report Manager), you can click on it and view all the properties. One of these is the Caching option.

 

This allows you to cache the dataset so that no matter how many report requests there are (across multiple reports), the data will be served from the cache and there will be no activity on the source database itself. The difference compared to caching individual reports is that you only manage the cache on the data set, not on each report (once again, assuming you can create a dataset that can serve multiple reports). A quick way to confirm that the cache is actually being hit follows after the list below.

The other option you will notice there is the Security tab, which allows you to manage access to the data sets. With that in mind, watch out for the following issues:

 

· Granting users access to a report but not the dataset.

· If you’re using “pass-through” authentication (i.e. Windows Authentication), the credentials will now be passed from Report Manager to the report, to the data set, to the data source and finally to the database.
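
Back to the caching side for a moment: to confirm that requests are really being served from the cache, one approach is to query the execution log in the report server catalog database. This assumes the default ReportServer database name and the ExecutionLog2 view that ships with SQL Server 2008, so treat it as a sketch:

-- Recent executions: Source shows 'Live', 'Cache', 'Snapshot', etc.
-- 'Cache' means the request did not touch the source database.
SELECT TOP (20)
    ReportPath,
    UserName,
    TimeStart,
    Source
FROM ReportServer.dbo.ExecutionLog2
ORDER BY TimeStart DESC;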

 

Overall, though, I’m quite positive about the benefits. Even if you cannot cache your report data because only small latencies are allowable, the benefit of editing a single data set instead of multiple reports when the back end changes will be great in large deployments.

Categories: Reporting Services

SSIS Woes

February 3, 2010

I spent the day editing some very glitchy SSIS packages, and I must say I can understand why some people stay away from SSIS. A particular annoyance I’ve found in 2008 comes up when using the Derived Column transformation. When you create a derived column and enter the expression, it will automatically set the data type for you. You cannot change this here like you could in SSIS 2005. Why the backward step?
 
 
In case you were wondering whether you are stuck with the system-selected data type: you can use the Advanced Editor to change it. Close the Derived Column setup window, then in the data flow right-click the Derived Column box and select the Advanced Editor. Go to the Input and Output Properties tab, and under Derived Column Output you can change it.
 

Another thing that has really caused me frustration on many an occasion is loading large CSV files into a database via SSIS, in particular files with a large number of columns (150+) and lots of data. When you create the data source object, it will automatically detect the data types of the columns. You can even hit a button called Suggest Types and it will sample a number of records and try to figure them out.

Unfortunately it sometimes gets it wrong, and then you have to go through 200 columns manually and edit the types. And then you get issues converting between Unicode and non-Unicode strings. It’s a pain. Let’s hope these little things get fixed soon.
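
One workaround I’ll mention, my own habit rather than an official fix: skip the type guessing entirely by defining every flat file column as a wide string, landing the rows in a staging table, and doing the conversions in T-SQL, where you control the error handling. A minimal sketch with made-up names:

-- Staging table: everything lands as strings first.
CREATE TABLE dbo.Staging_Sales
(
    OrderDate   varchar(50),
    Quantity    varchar(50),
    SalesAmount varchar(50)
);

-- After the SSIS load, convert explicitly into the typed target.
INSERT INTO dbo.Sales (OrderDate, Quantity, SalesAmount)
SELECT
    CONVERT(datetime, OrderDate, 103),  -- 103 = dd/mm/yyyy; match the file
    CONVERT(int, Quantity),
    CONVERT(money, SalesAmount)
FROM dbo.Staging_Sales;

Using string columns throughout the data flow also sidesteps the Unicode/non-Unicode conversion errors until the final, explicit step.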

I must say, though, that once I’ve tested and deployed packages, they have worked reliably without needing much attention. Are there any other SSIS bugs and quick fixes you’d like to share? Drop us some comments…

Categories: SQL Server