
Author Archive

Personal A.I

October 26, 2017

Engaging with customers daily can sometimes be fascinating. With so many new ideas and innovations in the tech world every year, the eagerness to talk to customers and share all of it is great. It can be humbling then, astonishing even, when you visit a customer to talk about a topic like Machine Learning, only to see that not only have they embraced your technology, but they are already pushing the boundaries in ways you haven’t seen before. And when the person giving you a detailed walkthrough of what they’re doing is barely out of college, it does fill you with some excitement and optimism. The point is that terms like Machine Learning and A.I have certainly entered the mainstream lexicon.

The cloud has really enabled us to both speed up development in the A.I space (note how this field has progressed post-cloud) and bring A.I technologies to a wider audience through the cloud’s consumption-based model. A big advantage of the cloud is gaining access to hardware platforms that you perhaps wouldn’t have invested in previously in the on-premises world. Azure, for example, gives you access to GPU-based VMs, on which you can build HPC and AI solutions, scaling quickly to even thousands of GPUs.

In August 2017, Microsoft revealed Project Brainwave, a platform built on FPGAs (Field Programmable Gate Arrays) in partnership with Intel. This is a leap over simply chaining GPUs together: the entire architecture is optimized for Deep Learning. At the unveiling of Brainwave, the Intel Stratix 10 FPGA, built on a 14nm process, demonstrated a sustained performance of 39.5 teraflops.

Project Brainwave

The above developments in the cloud are expected to continue at breakneck pace. A performance of 39 teraflops may not be so impressive in a year’s time. This is an example of what is possible with the cloud.

To complete the picture, however, we need to move from the cloud to the edge…

The Intelligent Edge

In the world of IoT a new concept is becoming prevalent – the Intelligent Edge. Edge computing refers to placing data processing power at the edge of a network instead of holding all of that processing power in the cloud.

Now this may seem odd to some of you; however, the importance of the cloud does not diminish in this scenario. The Intelligent Edge recognizes that to deliver what businesses require, data processing and intelligence need to be applied at the edge before data is synced into the cloud.

Intelligent Edge

I’ve spoken previously about the concept of having tremendous processing power on you, enough to power intelligence, as I’ve always been fascinated by artificial intelligence. In 2010, way before the current wave, I spoke on my blog of the concept of a “Local AI” – tremendous computing and analytical power in your pocket (and on your wrist) that changes your daily life. Looking back now, the term seems clumsy, so I am renaming the concept to Personal AI. The concept of the Intelligent Edge will extend to the device you carry, not just industrial sensors, and deliver Personal AI to you.

Now, in the present, to build massive neural networks you need the scale of the cloud. To train your models, you need vast amounts of data, as well as massive compute for things like backpropagation. You’re not going to do that on a mobile device. However, you could run pre-trained models on your mobile device with the right custom chips. You thought that chips don’t matter anymore?

Personal AI

In 2017, we are starting to see Personal AI form.

For many years it was felt that CPUs were becoming a commodity, and that the real innovation lay elsewhere. As an Engineering graduate who loved chips, I was dismayed by that. I still get excited when the latest desktop CPUs are announced, and was pleased to see the CPU wars stronger than ever in 2017, with AMD Ryzen launching and then Intel fighting back.

One company decided a few years back that central to its differentiating strategy would be a return to designing its own custom chips to give itself a massive advantage – going against what the market was saying. Can you guess the company?

It was Apple.

Their custom “Ax” series of System-on-a-Chip (SoC) designs are, according to Apple, key to the smooth and fast experience on their devices. In September 2017, Apple announced the A11 Bionic, their latest custom SoC, and according to them it includes a neural engine.

https://www.extremetech.com/mobile/255780-apple-neural-engine-a11-bionic-soc

Huawei has already responded…

http://www.eweek.com/mobile/huawei-new-mate-10-smartphones-include-ai-chips-to-boost-performance

Personal AI is happening, just as I thought it would.

This is the first mainstream example of pushing Machine Learning to the Edge, and already we can see why. Apart from the personal assistant utilizing this capability, there are other use cases – the new iPhone uses facial recognition to unlock the device, and that action needs to work even when you’re offline.

Intel, however, will not readily give up any advantage in an exciting space like this. Also in September 2017, Intel announced a chip that simulates a neural network (neuromorphic computing), called Loihi. Expect this chip to find its way into smart devices that can then process more data on the Edge before sending it to the cloud.

Lastly, there is the PCIe card by BrainChip that plugs into a PC. Why would you need a neural network running there? One application is processing multiple simultaneous video feeds and doing facial recognition on the fly.

https://hothardware.com/news/brainchip-pcie-accelerator-card-neuromorphic-computing

Back to Personal AI: there are so many uses for increased intelligence on the Edge that it will soon be taken for granted. By 2020, I see most pocket computers (what you still call mobile phones) having advanced ML capability as standard to perform a whole host of tasks. The entry point into the capability will be the personal assistant, which will rapidly become much smarter. Speech recognition will improve as it will be done on the device. Instead of the personal assistant being just a front end to a search engine in the cloud (as it mostly is today), the Personal AI will have real capability to process and understand, using the search engine to reference data.

I see advanced scenarios such as the following:

1) Realtime Health Analysis – currently your smartwatch monitors your heart rate and steps and sends the data to your phone, which sends it into the cloud. In the near future, your Personal AI will read this data and analyze it in real time, with the ability to alert you as early as possible should you be at risk of a heart attack or stroke, for example. The number of complex sensors built into your smartwatch will increase in order to enable this. Advancement in sensor technology will be key to delivering on the capability of Artificial Intelligence. We need more capable sensors, all small enough to fit on a wristwatch.

2) Environmental Analysis – another scenario that’s easy to predict is using the capability on your pocket computer to perform analysis of the environment around you. Air quality is a problem in many parts of the world – imagine taking out your device, having some basic readings taken, and then machine learning kicking in to advise you whether the air is safe to breathe in that location, and what the risks are. This is especially useful for travelers. Once again, the development of better sensors is critical. Imagine a form of sensor that could analyze water in a glass and tell you if it is safe to drink – very useful in certain countries.

3) Realtime Language Translation – this is already happening with products like Skype, but could be augmented with the power available on the Edge device. I would imagine that a future version of Skype could take advantage of Personal AI to improve real-time translation, with the already-translated speech being sent to the cloud and on to the other end.

4) Custom Apps – once you have the processing power (and the sensors) at the Edge, you will see all kinds of custom apps being built to take advantage of this. Environmental sensors could be used to create an app for workers in dangerous environments, mines for example. Apart from the raw sensor readings, it is the Personal AI engine that would add real value, delivering insight in near real time. It would also assist in sending more relevant data to a bigger engine in the cloud, with the knock-on effect of helping train more accurate models.

The development of Personal A.I is an area of technological advancement that I believe will have a very personal effect on your life. Just as the smartphone and social media did a decade back, the ability to interact with a smarter personal assistant, and to access life-changing services like those listed above, will change our daily lives. Perhaps a more natural interaction with technology will even stop us all from staring at a screen all day – we can only hope.

Categories: AI, Futurism, Tech

Tech Summit Cape Town is happening in Feb 2018!

October 25, 2017

Many people still have fond memories of the old Tech-Ed events that were held initially at Sun City, and then for a few years in Durban.

While we have no idea if Tech-Ed will ever return, the exciting news is that Tech Summit, a free two-day event, will happen in February 2018. This year we had a successful event in Johannesburg, and the good news is that for 2018 we will host Tech Summit in Cape Town!

This is a free event where you register to enter – first come, first served. Registration opens on the 15th of November, so make sure to save the link below!

https://www.microsoft.com/en-za/techsummit/cape-town

 

TechSummit

 

Categories: SQL Server

Installing SQL Server on Linux

September 29, 2017

SQL on Linux is here, and it’s a brand new experience for us SQL guys.

I decided to give installing it a shot, and the installation process is certainly different from what we’re used to. Since I’m more familiar with Ubuntu than Red Hat, I spun up a VM with Ubuntu Server 16.04 LTS. To make things easier, I also decided to install the Ubuntu desktop environment, using the command

sudo apt-get install ubuntu-desktop

Little did I realize that installing the desktop isn’t needed for SQL on Linux. You set up SQL via the command line, and use Management Studio from a Windows machine to connect. So you can skip that step if you don’t need a desktop on your Linux VM. Once Ubuntu is up and running (which is fairly quick), the next steps are:

1. Import the Public Repository keys

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

2. Register the SQL Server Ubuntu repository

sudo add-apt-repository "$(curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list)"

3. Install SQL

sudo apt-get update
sudo apt-get install -y mssql-server

That’s right – you don’t insert a disk (or ISO image) as you would do in Windows. SQL is installed like other Linux packages, and the entire thing is downloaded and installed.

After the install was successful, I immediately checked to see if the service was running, and got my first error.

Failed to start Microsoft(R) SQL Server(R) Database Engine.

The reason for this is that I hadn’t run the configuration command! This is a very important step, and this is where you:

  1. Select your edition (and enter a license key if needed)
  2. Select components
  3. Set up the SA password

Run the following command to start the setup; you will need to elevate:

sudo /opt/mssql/bin/mssql-conf setup

SqlLinuxSetup

Once this process is done, you can check that the service is running with:

sudo systemctl status mssql-server

If all is well, you should see the following:

SQLonUbuntu
Now if you’re still getting an error where the service isn’t starting, one thing you can also check is whether you’ve allocated enough RAM to your VM. SQL Server 2017 needs at least 3.5 GB of RAM, and the service won’t start if it can’t find enough.
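
If you want to dig a little deeper from the shell, two quick checks are the VM’s memory and the SQL Server error log (the path below is the default for SQL on Linux):

free -h
sudo tail -n 50 /var/opt/mssql/log/errorlog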

Now you’re ready to connect! So how do we do this without a GUI? You simply install the new Management Studio on a Windows machine and connect from there.

SSMS17
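
If you’d rather do a quick sanity check from the VM itself, you can also connect from the shell – this assumes you’ve installed the mssql-tools package (which provides sqlcmd) from the Microsoft repository. You’ll be prompted for the SA password you set during configuration:

sqlcmd -S localhost -U SA -Q "SELECT @@VERSION"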

Once that’s working, how do we get data in? You can actually restore a backup from SQL on Windows to SQL on Linux. In terms of getting the backup onto the VM, there are various ways of doing this, but what I’ve tried recently (and liked) is the Windows Subsystem for Linux on Windows 10, where, using commands like scp, I could easily move files onto the VM.
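
As an illustration, a copy from the WSL bash prompt might look something like this (the file name, user and host are hypothetical):

scp /mnt/c/Backups/AdventureWorks.bak youruser@your-linux-vm:/tmp/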

Once the .bak file is there, I’ve found that the restore process via Management Studio is quite straightforward as well.

Restore
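
If you prefer T-SQL over the GUI, a rough sketch of the equivalent restore looks like this – the database name and logical file names are hypothetical (you can list the real logical names with RESTORE FILELISTONLY):

RESTORE DATABASE AdventureWorks
FROM DISK = N'/tmp/AdventureWorks.bak'
WITH MOVE 'AdventureWorks_Data' TO '/var/opt/mssql/data/AdventureWorks.mdf',
     MOVE 'AdventureWorks_Log' TO '/var/opt/mssql/data/AdventureWorks_log.ldf';
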
With that you’re good to go. My first area of interest now is determining whether there is any performance advantage on the Linux platform. So far I’ve run smaller queries and it’s pretty much even, but I’ll be testing with larger, more complex queries next.

I’d love to hear your thoughts on SQL 2017 on Linux thus far.

Categories: SQL Server

Demystifying A.I.

August 21, 2017

 

TheMatrix

With all the hype around “AI” and Machine Learning, I thought that I’d dabble in unpacking some of the key concepts. I find that most of my reading time now is spent in this area.

Firstly, Artificial Intelligence isn’t really something brand new. It has been written about for decades, most famously by Alan Turing in 1950. In his article “Computing Machinery and Intelligence” he spoke of the “imitation game”, which later came to be known as the Turing Test. There was a movie in 2014 called “The Imitation Game” which details Turing’s life, and how he cracked the Enigma code in WW2 (starring the usually excellent Benedict Cumberbatch). Highly recommended.

In terms of actually experiencing a simple AI, the easiest way to do this is to play a videogame. Even in the ’80s with “Pac-Man”, as the player you were trying to outwit the four enemies on screen – the algorithms behind the enemy players could be considered an early implementation of simple AI. Modern games have more complex implementations of “AI” that may surprise you with their intelligence.

 

Pac Man

Was Blinky an A.I.?

 

So why is AI becoming more prominent now? Very simply, we have now reached a tipping point where Big Data, software advances and cloud computing can be leveraged together to add value to businesses and society. Even though we are still many years from creating a sentient AI like HAL 9000, we can implement things like Machine Learning today to bring about improvements and efficiencies.

Machine Learning

Machine Learning is a subset of A.I, not really A.I in totality. Way back in 1959, Arthur Samuel defined Machine Learning as the ability to learn “without being explicitly programmed”. Basically, this field entails creating algorithms that can find patterns or anomalies in data and make predictions based on those learnings. Once “trained” on a set of data, these algorithms can very reliably (if chosen correctly) find those patterns and make predictions. Using tools like Azure Machine Learning, you can deploy a trained model as a web service to automate the prediction process, and also run these predictions as a batch.

Now I personally became exposed to similar concepts around 2006 when working with SQL 2005. That product release included a lot of “data mining” functionality. Data Mining basically involved using algorithms (many built into SSAS 2005) to find patterns in datasets, a precursor to Machine Learning today.

I was really excited by the possibilities of Data Mining, and tried to show it to as many customers as possible; however, the market was just not ready. Many customers told me that they just wanted to run their data infrastructure as cheaply as possible and didn’t need any of this “fancy stuff”. Of course, the tools today are a lot easier to use and we now include support for R and Python, but I think what was missing back in 2007 was industry hype. Industry hype, coupled with fear of competitors overtaking them, is possibly forcing some of those old I.T managers to take a look at Machine Learning now, while we also have a new breed of more dynamic I.T management (not to mention business users who need intelligent platforms) adopting this technology.

Machine Learning today has evolved a lot from those Data Mining tools, and the cloud actually makes using these tools very feasible. If you feel unsure about the value Machine Learning will bring to your business, you can simply create some test experiments in the cloud to evaluate without making any investment into tools and infrastructure, and I’m seeing customers today embrace this thinking.

Deep Learning

Deep Learning can be considered a particular type of Machine Learning. The difference is that Deep Learning relies on the use of Neural Networks, a construct that simulates the human brain. We sometimes refer to Deep Learning as Deep Neural Networks, i.e. Neural Networks with many, many layers. The scale of Deep Learning is much greater than that of traditional Machine Learning.

Neural Networks

Neural Networks have been around for decades. As mentioned, this is a construct that mirrors the human brain. In the past, we could build neural networks but simply didn’t have the processing power to get quick results from them. The rise of GPU computing has given Neural Networks and Deep Learning a boost. There is a fast-widening gap in the number of floating point operations per second (FLOPS) possible with GPUs compared to traditional CPUs. In Azure, you can now spin up GPU-based VMs and build neural networks (where you might not have invested in such resources in the old on-premises world).

NVIDIA

CPU vs GPU FLOPs (image courtesy NVIDIA) 

Edit 23/8/2017: A day after publishing this, Microsoft announced a Deep Learning acceleration platform built on FPGA (Field Programmable Gate Array) technology – more here: https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/

While Machine Learning works well with repetitive tasks (e.g. finding a pattern in a set of data), a neural network is better for performing tasks that a human is good at (e.g. recognizing a face within a picture).

Narrow AI vs General AI

All of the above would typically fall under Narrow AI (or Weak AI). Narrow AI refers to non-sentient AI that is designed for a singular purpose (e.g. a Machine Learning model designed to analyze various factors and predict when customers are likely to default on a payment, or when a mechanical failure will occur). Narrow AI can be utilized today in hundreds of scenarios, and with tools like Azure ML, it’s very easy to get up and running.

General AI (or Strong AI) refers to a sentient AI that can mimic a human being (like HAL). This is what most people think of when they hear the words “Artificial Intelligence”. We are still many years away from this type of AI, although many feel that we could get there by 2030. If I had to predict how we get there, I would say perhaps a very large scale neural network built on quantum computing, with software breakthroughs being made as well. This is the type of AI that many are fearful of, as it will bypass human intelligence very quickly and there’s no telling what the machine will do.

Why would we need a Strong AI? One obvious use case would be putting it onboard a ship for a long space journey – it is essentially a crew member that does not require food or oxygen and can work 24/7. On Earth, the AI would augment our capabilities and be the catalyst for rapid technological advancement. Consider this: we may not know where the AI will add the most value; however, once we build it, it will tell us where.

The good news is that you don’t need General AI (a HAL 9000) to improve businesses in the world today. We are currently under-utilizing Narrow AI, and there is tremendous opportunity in this space. I encourage you to investigate what’s out there today and you will be amazed at the possibilities.

Image used via Creative Commons license
Categories: AI, Azure, Futurism, Tech

Announcing SQL Server 2012 Service Pack 4

July 25, 2017

In case you missed it, it looks like SP4 will be the final big release for SQL 2012. You really need to move to SQL 2016 though – the improvements to the engine alone are worth it.

https://blogs.msdn.microsoft.com/sqlreleaseservices/announcing-sql-server-2012-service-pack-4/

 

Categories: Uncategorized

SQL Server was unable to communicate with the LaunchPad service

July 26, 2016

OK, so SQL 2016 is here, and I’m sure we’re all already playing with the new features.

I’ve decided to start learning R, and did a fresh installation of SQL 2016 with R Services installed.

While attempting to run my “Hello World” script, I encountered the error:

Msg 39011, Level 16, State 1, Line 1
SQL Server was unable to communicate with the LaunchPad service. Please verify the configuration of the service.

OK, so the first thing you have to do is enable SQL to run external scripts. Run:

Exec sp_configure 'external scripts enabled', 1
Reconfigure with override
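
You may need to restart the SQL Server service for the setting to take effect. Once it’s enabled, a minimal “Hello World” style external script looks something like this (just a sketch – it passes a one-row result set through R and back):

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet',
    @input_data_1 = N'SELECT 1 AS HelloWorld'
WITH RESULT SETS ((HelloWorld INT));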

However, make sure that the new “Launchpad” service is running – I simply had to start it to solve the error on my VM.

LaunchPad

Categories: SQL Server, Uncategorized

SQL 2016 requires… Oracle?

August 6, 2015

OK, that’s a click-bait headline… but it’s not totally untrue.

When I installed SQL 2016 for the first time on a VM, the install initially failed the pre-installation checks due to two items. One was an update for Windows Server that wasn’t installed, but the other was an update for Oracle Java (JRE). Now this was puzzling to me – since when does SQL Server require Java?

The answer lies in one of SQL 2016’s most interesting features – PolyBase.

PolyBase is Microsoft’s “SQL-over-Hadoop” solution, a layer that allows you to write T-SQL and query both relational and non-relational data. It originally launched with the APS appliance, so its inclusion in SQL 2016 is a milestone. So why Java? Remember that MapReduce jobs are typically written in Java.

SQL 2016 allows you to query data from a connected Hadoop or HDInsight instance – a true sign of the Big Data times that we live in.
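
To give a feel for it, here’s a rough sketch of querying Hadoop data through PolyBase – the data source location, file format and table definition below are all hypothetical:

CREATE EXTERNAL DATA SOURCE MyHadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://10.0.0.10:8020');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

CREATE EXTERNAL TABLE dbo.WebLogs
(
    LogDate DATETIME2,
    Url     NVARCHAR(400),
    Hits    INT
)
WITH (LOCATION = '/weblogs/', DATA_SOURCE = MyHadoopCluster, FILE_FORMAT = CsvFormat);

-- an external table is queried like any other table, and can be joined to relational data
SELECT TOP 10 Url, SUM(Hits) AS TotalHits
FROM dbo.WebLogs
GROUP BY Url
ORDER BY TotalHits DESC;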

Categories: Uncategorized