
Archive for the ‘SQL Server’ Category

Tech Summit Cape Town is happening in Feb 2018!

October 25, 2017

Many people still have fond memories of the old Tech-Ed events that were held initially at Sun City, and then for a few years in Durban.

While we have no idea if Tech-Ed will ever return, the exciting news is that Tech Summit, a free 2-day event, will happen in February 2018. This year we had a successful event in Johannesburg, and the good news is that for 2018 we will host Tech Summit in Cape Town!!

This is a free event where you register to enter – first come, first served. Registration will open on the 15th of November, so make sure to save the link below!!

https://www.microsoft.com/en-za/techsummit/cape-town

[Image: Tech Summit banner]

Categories: SQL Server

Installing SQL Server on Linux

September 29, 2017

SQL on Linux is here, and it's a brand new experience for us SQL guys.

I decided to give installing it a shot, and the installation process is certainly different from what we're used to. Since I'm more familiar with Ubuntu than Red Hat, I spun up a VM with Ubuntu Server 16.04 LTS. To make things easier, I also decided to install the Ubuntu desktop environment, using the command

sudo apt-get install ubuntu-desktop

Little did I realize that installing the desktop isn't needed for SQL on Linux. You set SQL up via the command line and use Management Studio from a Windows machine to connect, so you can skip that step if you don't need a desktop on your Linux VM. Once Ubuntu is up and running (which is fairly quick), the next steps are:

1. Import the Public Repository keys

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

2. Register the SQL Server Ubuntu repository

sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list)"

3. Install SQL

sudo apt-get install -y mssql-server
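
A quick note on the sequence above: between registering the repository (step 2) and the install (step 3), you'll want to refresh the package lists so that apt can actually see the newly registered repository:

sudo apt-get update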

And that's it – you don't insert a disk (or ISO image) as you would on Windows. SQL is installed like any other Linux package, and the entire thing is downloaded and installed.

After the install was successful, I immediately checked to see if the service was running, and got my first error.

Failed to start Microsoft(R) SQL Server(R) Database Engine.

The reason is that I didn't run the configuration command! This is a very important step, and it is where you:

  1. Select your edition (and enter a license key if needed)
  2. Select components
  3. Set the SA password

Run the following command to start the setup; you will need to elevate:

sudo /opt/mssql/bin/mssql-conf setup

[Image: SQL Server setup on Linux]

Once this process is done, you can check that the service is running with:

sudo systemctl status mssql-server

If all is well, you should see the following:

[Image: mssql-server service running on Ubuntu]
If you're still getting an error and the service isn't starting, one other thing to check is whether you've allocated enough RAM to your VM. SQL 2017 needs at least 3.5 GB of RAM, and the service won't start if it can't find enough.

Now you're ready to connect! So how do we do this without a GUI? You simply install the new Management Studio on a Windows machine and connect from there.

[Image: SQL Server Management Studio 17]
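
As an aside, if you'd rather test the connection from the Linux box itself, you can also install Microsoft's command-line tools (mssql-tools) and use sqlcmd. A rough sketch, based on the documented package names for Ubuntu 16.04:

# register Microsoft's "prod" repository (same pattern as the server repo), then install the tools
sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list)"
sudo apt-get update
sudo apt-get install -y mssql-tools unixodbc-dev

# quick sanity check – this prompts for the SA password you set during mssql-conf setup
sqlcmd -S localhost -U SA -Q "SELECT @@VERSION"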

Once that's working, how do we get data in? You can actually restore a backup from SQL on Windows to SQL on Linux. In terms of getting the backup onto the VM, there are various ways of doing this, but what I've tried recently (and liked) is the Windows Subsystem for Linux on Windows 10, where using bash commands like scp I could easily move files onto the VM.
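
For example, from a bash prompt on the Windows 10 machine, the copy is a one-liner – the user, host name and paths here are made up for illustration:

# Windows drives are mounted under /mnt in the subsystem, so C:\Backups appears as /mnt/c/Backups
scp /mnt/c/Backups/SalesDW.bak myuser@sqlvm01:/tmp/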

Once the .bak file is there, I've found that the restore process via Management Studio is quite straightforward as well.

[Image: restoring the database in Management Studio]
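
If you'd rather script the restore than click through the dialog, something along these lines should work as well – the database name, logical file names and paths are all hypothetical, and note the Linux-style paths pointing at SQL's default data directory:

RESTORE DATABASE SalesDW
FROM DISK = '/tmp/SalesDW.bak'
WITH MOVE 'SalesDW_Data' TO '/var/opt/mssql/data/SalesDW.mdf',
     MOVE 'SalesDW_Log' TO '/var/opt/mssql/data/SalesDW_log.ldf';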
With that, you're good to go. My first area of interest now is determining whether there is any performance advantage on the Linux platform. So far I've run smaller queries and it's pretty much even, but I'll be testing with larger, more complex queries next.

I’d love to hear your thoughts on SQL 2017 on Linux thus far.

Categories: SQL Server

SQL Server was unable to communicate with the LaunchPad service

July 26, 2016

OK, so SQL 2016 is here, and I'm sure we're all already playing with the new features.

I’ve decided to start learning R, and did a fresh installation of SQL 2016 with R Services installed.

While attempting to run my “Hello World” script, I encountered the error:

Msg 39011, Level 16, State 1, Line 1
SQL Server was unable to communicate with the LaunchPad service. Please verify the configuration of the service.

OK, so the first thing you have to do is enable SQL Server to run external scripts. Run:

EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;
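
For reference, the kind of “Hello World” script that triggers this is just a call to sp_execute_external_script:

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'print("Hello World")';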

However, also make sure that the new “Launchpad” service is running – I simply had to start it to solve the error on my VM.

[Image: SQL Server Launchpad service]

Categories: SQL Server, Uncategorized

SQL 2014 ColumnStore – The best of both worlds!

September 11, 2014

I'm sure that many of you are familiar with the Column Store index feature that was launched with SQL 2012. Using Microsoft's in-memory technology, the Column Store index delivered a massive performance boost to data warehouse queries. However, I've found that many people were put off the feature because the fact table becomes read-only once you implement Column Store.

Well, you've probably heard the big news (I'm writing this post months later than I originally wanted to), and for those of you who haven't: the Column Store feature in SQL 2014 is updateable. That's right, no more read-only fact table. However, that isn't the reason for this post – I have something else here that might interest you. Space, the final frontier…

Firstly, you actually have two options with SQL 2014. You can create the Non-Clustered Column Store as per 2012 (which is NOT updateable), or you can create the new Clustered Column Store, which IS updateable. The Clustered Column Store has some other interesting features:

1) It becomes the primary storage mechanism for the table.
2) It eliminates the need for other covering indexes.
3) It dramatically reduces the storage requirements for the table – and that's a fact.

It's point 3 that I want to talk more about. In the test DW that I built for demo purposes, I have a fact table with about 20 million rows. Let's take a look at the storage statistics of that table.

[Image: storage statistics – regular table]

This table is currently taking up about 2.3 GB of space. We could apply Page Compression to this table (which I believe has also been improved in 2014), and we would get the following result.

[Image: storage statistics – regular table with Page Compression]

Not bad – a reduction of the space used to less than 25% of the original size. However, we haven't built any indexes yet to boost performance. Prior to applying the Page Compression, I created a copy of the table, called FactSales2. Let's apply the Non-Clustered Column Store index to that table, to give us that performance boost, and see what happens to the storage.

[Image: storage statistics – table with Non-Clustered Column Store]

The storage space of the table increases, and we can see that there is a 242 MB overhead for the index. Now, we could implement Page Compression and then the Non-Clustered Column Store index, but then your table would still be read-only. In addition to that, you would probably need to implement more indexes, which take up more space. In SQL 2014 we have a better solution: implement the Clustered Column Store index. What about the Page Compression?…

[Image: storage statistics – Clustered Column Store]

… it simply isn't needed. The Clustered Column Store delivers better compression than Page Compression, reducing the size of the table and the index together to a mere 200 MB. Technically, the index is the table. That is astonishing – less than a tenth of the storage space required by the regular table with the Non-Clustered Column Store. Is the performance the same? I've written a typical data warehouse reporting query with joins to the dimensions, and executed it on all 4 table types. The results are as follows:

Type                                       Avg. Execution Time   Notes
Regular table                              32 seconds            No indexes
Regular table with Page Compression       23 seconds            No indexes
Table with Non-Clustered Column Store      5 seconds             No compression (2.3 GB table), read-only
Clustered Column Store table               5 seconds             Table is 200 MB!
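
For reference, the three variations above boil down to statements like these – FactSales2 is the copy mentioned earlier, while the original table name and the column list are just placeholders standing in for my demo schema:

-- Page Compression on the regular table
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Non-Clustered Column Store, as per SQL 2012 (table becomes read-only)
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales2
ON dbo.FactSales2 (DateKey, ProductKey, OrderQuantity, SalesAmount);

-- Clustered Column Store, new in SQL 2014 (updateable – the index IS the table)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales;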

The only thing left to say is that, if you had reservations about using the Non-Clustered Column Store previously, you would be mad not to use the Clustered Column Store on fact tables in SQL 2014. The benefits with regards to storage and performance are astounding. It remains to be seen if there is any impact on insert performance, but that's a topic for another day.

Categories: SQL Server

SQL 2014 launched

April 8, 2014

This is the big one!! SQL 2014, with the in-memory OLTP capability, has officially launched as of 1 April 2014. You'll find everything that you need to know at the link below:

Microsoft Official SQL Page

So far, I've been experimenting with the in-memory tables, and I've definitely seen some big performance improvements. I'll do an upcoming post detailing how you create these tables.
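
Until then, here's the basic shape of one as a sketch – it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table name and bucket count are made up:

-- a durable in-memory table; the hash index requires an estimated bucket count
CREATE TABLE dbo.ShoppingCart
(
    CartId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);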

Before that, I want to talk about dev and test environments. Many of you will want to try out SQL 2014, but perhaps don't have the space available to spin up a VM. What you can do in this situation, if you have an MSDN subscription, is spin up a VM in Azure at no cost and try out SQL 2014 there. To do so:

1) Log into the main MSDN page (http://www.MSDN.com)
2) Click on “Cloud”
3) Under “Start your free trial”, click on sign up
4) Scroll down to the bottom of that page. You'll see a heading called “MSDN Subscriber?”. Click on “Activate Now”
5) You are basically entitled to $150 per month of free Azure services.

Once you have sorted that out, you'll see that SQL 2014 is already available in Azure. There are preconfigured VM templates with various options.

[Image: preconfigured SQL 2014 VM templates in Azure]

It takes about 5 minutes to spin up a new virtual machine, with the OS and SQL 2014 already installed!! Once that's done, you can remote desktop into it and work on it as normal.

Categories: SQL Server

SQL 2014!!

June 5, 2013

Bet you didn't see that one coming! While some people are still getting to grips with the excellent SQL 2012, Microsoft has officially announced SQL 2014 at TechEd North America.

Your first stop should be Microsoft's website, where you can sign up to be notified once the trial version is available. There are also 3 PDF documents at the bottom of that page that I highly recommend downloading.

Next, what exactly was announced in terms of features? Just some of them:

1) Hekaton! I've blogged about this before. It's basically in-memory technology to give SQL Server OLTP applications a massive performance boost.

2) Writeable Column Store indexes!! – A popular feature, and now your fact tables won't be read-only once you create the index.

3) SSD caching – extend memory by caching on an SSD. Also a performance-boosting enhancement.

4) Extended online operations – more operations supported without taking the database offline, which extends uptime.

I'm sure that there are a whole lot more that we'll learn about in the upcoming months. In the meantime, I recommend that you download those whitepapers at the Microsoft site and get reading!!

Categories: SQL Server

What the heck is Hekaton?

November 23, 2012

As you will have heard by now, some exciting announcements were made at SQL PASS. One of those was Hekaton, the exciting new in-memory database platform for SQL Server. I've found some more information on Hekaton.

Before I move on to that though, remember that xVelocity ColumnStore debuted in SQL 2012. I did a blog post on ColumnStore here, and the performance results were very impressive.

But that currently only applies to data warehouse workloads. Now we have “Hekaton”, which promises to bring the xVelocity technology to OLTP workloads. What we've heard at PASS about Hekaton:

  • Will ship with the next major version of SQL Server
  • Will be integrated into SQL Server – no separate interface or programming model
  • Basically allows you to place your most frequently used OLTP tables in memory, in order to boost reads and writes (that's a very simplified explanation of it)
  • You can keep your other OLTP tables on normal storage, so a database application can be designed to run with a hybrid storage mechanism

I would imagine that at some interval, the in-memory table data is written to disk. According to Microsoft, some customers on TAP are testing this at the moment, and are seeing anywhere between 5 and 50 times throughput gains. While I don't like repeating such claims, we did see the evidence of such speed boosts with ColumnStore, so this is very exciting.

In the meantime, start ordering more RAM for your servers… 🙂

Categories: SQL Server