Displaying multi-select parameters inside a report
RSWebparts.cab download
Report Parts: Making life easy
As I mentioned previously, I'll try to run through all the nice new features of Reporting Services 2008 R2. We've covered Sparklines and Shared Data Sets, so let's now focus on something called Report Parts.
The image above shows a folder on Report Manager where Report Parts have been published. You get a nice "puzzle" icon with it.
Now this really is a feature that will drive ad-hoc reporting amongst business users. It's easy to set up, and it makes report creation laughably easy, almost too easy. Basically, you now have the ability to publish each part of your report as an individual object. A person using Report Builder can then browse the report parts available on a report server and simply drag them onto their report. Nothing needs to be set up or configured; it will create any related objects for you, like data sources or data sets. You simply "drag and run". All formatting is maintained as well. First, however, somebody has to create the Report Parts themselves. This isn't too difficult.
Once you’ve created a report in Report Builder 3, you can select the "Publish Report Parts" option from the main drop-down menu at the top left. In Visual Studio, you would click Report – "Publish Report Parts". In the report that I’ve created below, each gauge, table and chart will be published as an individual report part.
Shared Data Sets – The Game Changer
Nothing like a snappy headline to get your attention. A nice surprise I stumbled upon while fooling around with the November CTP is something called Shared Data Sets. I now believe it is a feature I will use a lot in the future, as it has many advantages. But first, what is it?
Quite simply, it is a dataset that you can create independently of a report and then deploy to the Report Server. Once that is done, the following interesting scenarios are available to us:
· You can quickly create reports without worrying about the query as long as the dataset serves the needs of multiple reports.
· The dataset can be cached to provide better performance while reducing the load on the SQL Server across the execution of multiple reports.
· It leads to interesting development scenarios, such as datasets being set up and managed by developers, with report writers simply using them in Report Builder without having to worry about the back-end side of a report. This further builds on the “self-service” concept that Microsoft is pushing.
Creating a Shared Data Set
The first step is to fire up Report Builder 3. This can be done from the Report Manager website. You will then see a new screen asking what you would like to create in Report Builder. On the left, select Shared Data Set.
On the next screen, you will be asked to choose a data source. Hence we see that to use a Shared Data Set, you first have to deploy a Shared Data Source (something we could always do). Select one that you have (or create one quickly). You may be asked to supply credentials to connect to the source.
On the next screen you will see, depending on the type of source, a query builder for either SQL Server or Analysis Services (I haven't tested this with other databases yet). You can even call a stored procedure from your Shared Data Set. Once you have defined the query, you can click a button called Set Options to define fields, parameters and filters for the data set. At any time you can change the data source connection by hitting Select.
That’s it for creating it. You can then save it on the Report Server itself. I like to create a folder called Data Sets and put them there.
Now you can create some reports that use the shared data set. Create a report as normal, but instead of first defining a data source, you can jump straight to creating a data set. Select the radio button called “Use a Shared Data Set” and you will be able to hit Browse and choose one from the server. Hit Refresh Fields and you’re good to go!
After deploying your report, you will see that it works as normal. If you navigate back to the dataset (in Report Manager), you can click on it and view all its properties. One of these is the Caching option.
This allows you to cache the dataset so that no matter how many report requests there are (across multiple reports), the data will be served from the cache and there will be no activity on the source database itself. The difference compared to caching individual reports is that you only manage the cache on the data set, not on each report itself (once again, assuming you can create a dataset that serves multiple reports). The other option you will notice there is the Security tab, which allows you to manage access to the data sets. With that in mind, watch out for the following issues:
· Granting users access to a report but not the dataset.
· If you’re using “pass-through” authentication (i.e. Windows Authentication), the credentials will now be passed from Report Manager to the report, to the data set, to the data source and finally to the database.
Overall, though, I’m quite positive about the benefits. Even if you cannot cache your report data because only a small latency is allowable, the benefit of editing a single data set instead of multiple reports when the back end changes will be great in large deployments.
SQL R2: Sparklines and Data Bars
New Data Pane
Indicators
Using Dundas Maps in Reporting Services 2008 R2
You’ll then see a screen asking what type of map you would like to use. The default comes with various maps of the USA, and you can download more to add to the gallery. A popular option is to use an ESRI shape file. Select the Map Gallery button and choose "USA by State Inset". This will allow us to create a report of the USA showing data by state.
The next screen has some interesting options. You can adjust the resolution (quality) of the map, which will affect the amount of data sent over the network. Using the slider on the left, you can zoom in and crop the map. And the nicest option is the checkbox that says "Add a Bing Maps background". You can choose Road, Aerial or Hybrid, and this will really up the look and feel of your report.
On the next screen you can choose the type of representation you want on the map itself. I haven’t used the basic map yet, as it seems to be the most involved to set up. What you can choose is either the "Color [sic] Analytical Map", which represents the data in varying shades of colour, or the Bubble Map, which represents the data in varying sizes of bubble. Let's try the Colour Analytical Map first.
On the next screen you can create or browse to a data source connection. This is simple to set up – click Next when you’re done. The next screen allows you to use the wizard to build a SQL query. I have a query off the AdventureWorksDW2008 database that returns the sales amount and order quantity per customer and state. This can be used to get a rolled-up value of sales per state. We’ll have to define the query and then link it to fields available on the map object itself. First, the query …
SELECT Fact.SalesAmount, Fact.OrderQuantity, Cust.LastName, Geog.StateProvinceName
FROM AdventureWorksDW2008.dbo.FactInternetSales Fact WITH (NOLOCK)
INNER JOIN dbo.DimCustomer Cust ON Fact.CustomerKey = Cust.CustomerKey
INNER JOIN dbo.DimGeography Geog ON Cust.GeographyKey = Geog.GeographyKey
WHERE Geog.EnglishCountryRegionName = 'United States'
This will return the following result:
The StateProvinceName column is important here – we will use it to link to the State Name column on the map, so that the map object knows which state the data is for. To do this, click "Edit as Text" to bypass the wizard, then paste in the query. Hit the exclamation button to make sure it works and then click "Next". On the next screen, select the checkbox next to "StateName" and, in the drop-down box to the right, select "StateProvinceName". This will link the two.
The next screen is also important. Under the "Field to Visualize" section you need to select which field will be represented as the data on the map. Choose [Sum(Fields!OrderQuantity.Value)] – in other words, the Order Quantity field. Change the "Color Rule" drop-down to [Light – Dark] and check Display labels. The data field will be the same.
Click Finish, and then Run, and you should see your map. The states on the west coast should be darker, as the majority of sales occurred there.
Reporting Services 2008: Axis labels not displaying on chart
Right, so as you can see, the labels on the bottom axis are missing a few months. Correcting this is actually quite simple.
- Right click on the axis in Design View and select "Axis Properties"
- Change the "Interval" setting from "Auto" to 1.
That should do it. You’ll end up with the following result.
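If you prefer editing the report XML directly, this is just the Interval element on the chart's category axis. A rough fragment of what the RDL looks like – the element names here are recalled from memory, so verify them against your own report's XML:

```xml
<ChartCategoryAxes>
  <ChartAxis Name="Primary">
    <!-- Force a label at every category instead of letting the chart decide -->
    <Interval>1</Interval>
  </ChartAxis>
</ChartCategoryAxes>
```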
Also play with the other settings, like "Interval Type", "Arrow Style" etc. These Dundas charts really do give you options!
Where is the Reporting Services 2008 Web Service?
http://SERVER/reportserver/reportservice2005.asmx
Replace "SERVER" with the hostname or IP Address of your Repporting Services server. And you can add "?WSDL" to the end of the URL to get the WSDL. Right , sorted ? If you haven’t investigated the web service yet you really should. It is, in my opinion, one of the best features of reporting services and allows you to really customize your report integration.
Data mining in Excel
Custom Code in Reports – Part 1
One of the nicer, although I suppose under-utilised, features of Reporting Services is the ability to enhance your reports with a bit of custom code. Why do this? Well, you could implement some difficult piece of business logic in .NET code, and your report fields, parameters and filters could interact with it to take advantage of that functionality. There are two ways to implement custom code:
- Use embedded code in your report. This is simple to implement and the code is deployed in your RDL.
- Create a custom assembly which is called by the report. A bit more work, I suppose, but very powerful.
Let's start off with the embedded code. A simple example – I have here a sales report off AdventureWorks. It uses the same query that I've shown previously on this blog and is a matrix with a product category grouping on the rows and the year on the columns.
Right, so let's say we would like to calculate the mean and standard deviation of these figures and stick them at the end of each row. One of the nice features of the tablix is that you can click on the cell of a column and select Insert Column – Outside Group – Right. That will create a parallel column at the end of the tablix.
To work out the mean value, the formula is simply ∑x / n (the sum of x divided by the number of values). A simple function that could return that for us would be:
Public Function Mean(ByVal Sum As Decimal, ByVal N As Integer) As Decimal
    Return Sum / N
End Function
To implement the function in your report, right click anywhere outside the design area, select Report Properties and go to the Code tab. Paste the above function into the code area – you can have more than one function here.
Now, to call the function from a cell we use the Code object, so in this example it would be:
=Code.Mean(Sum(Fields!SalesAmount.Value),CountDistinct(Fields!OrderYear.Value,"DataSet1"))
As you can see, I’m supplying the Sum of my sales value and a CountDistinct of the year values (across the entire dataset) as the two arguments to the function. This is simple enough and results in the following output:
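If you want to sanity-check the report's figures outside Report Builder, the same mean – and the standard deviation mentioned at the start – take only a few lines of Python. The sales figures below are made up purely for illustration:

```python
import math

# Hypothetical yearly sales figures for one product category row
sales = [120000.0, 135000.0, 98000.0, 150000.0]

# Same arithmetic as the report's Mean function: Sum / N
mean = sum(sales) / len(sales)

# Population standard deviation: sqrt of the mean squared deviation
variance = sum((x - mean) ** 2 for x in sales) / len(sales)
std_dev = math.sqrt(variance)

print(round(mean, 2))     # 125750.0
print(round(std_dev, 2))
```

A standard-deviation function for the report itself would take the same shape as Mean above, just with the deviation arithmetic inside it.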
Now I know what you’re thinking: we didn’t need a function for that simple calculation. Well, the real value comes into play for something a bit more complex, something you don’t want to write from scratch in that little cell. And what if you need the same calculation in many cells of the same report? You create one function and call it from many places. It’s quite useful, although be careful about writing too many of these functions in your reports – maintenance can quickly become a nightmare, especially if the functions are hidden in there and not documented. In part 2, we’ll do something a lot more interesting, that is, call a custom component from a report.