Wednesday, January 17, 2018

Operationalize models from AML workbench into Azure - Part 1

Hi All,

One of the primary tasks for any machine learning practitioner or data scientist is to operationalize machine learning models into a serving environment. This first blog post shows how to operationalize models using Azure Machine Learning (AML) Workbench as a step toward deploying models as real-time web services in Azure.

Once your model is ready to be operationalized, you need to create and set the right execution environment in Azure for your model to be deployed to.

A) Creating a machine learning model management service using CLI in Azure

First, you need to provision an ML model management service in Azure that acts as a serving environment for your models. You can create one from the Azure portal or by executing this command in the CLI from your computer:

az ml account modelmanagement create --location westcentralus -n mymodelmanagement -g myresourcegroup

-n: the name of the service when it is provisioned in Azure.
-g: the name of the resource group that you would like this model management account assigned to.

Checkpoint #1: Make sure an ML model management account is provisioned in Azure before going forward with the rest of this article.

B) Configure the execution environment using Azure Machine Learning Workbench using CLI

Once you have a model management account, you will not be able to deploy models from the Workbench tool into Azure as Docker container images unless you provision an execution environment. The steps below walk through how to accomplish this.


Follow these steps:

1) From AML workbench, open the command line window from File --> Open Command prompt.
2) Log in to your Azure account by typing the following command:

     az login

3) Follow the on-screen instructions to log in to your Azure subscription.
4) Make sure you are using the right subscription. The first command below lists all subscriptions you have; the second sets the subscription you intend to use.

      az account list -o table
      az account set -s <subscriptionId>

5) Verify that your current account is set correctly by executing the following command:

       az account show

6) Create an environment, which provisions a set of cloud assets (blob storage, a container registry, and more) in a resource group to host the Docker images for your models in the cloud:

      az ml env setup -n mymldevenv --location westcentralus

Please note that we created the environment in the same location as the model management service, West Central US.

7) Set the model management account

     az ml account modelmanagement set -n mymodelmanagement -g myresourcegroup

 Please note that this points the CLI at the model management account we created in step A, using that account's name and resource group.

Checkpoint #2: Make sure an AML Workbench environment and a model management account are provisioned before moving forward with the rest of this article.

8) Now, we are going to set the active execution environment in the CLI to connect to the provisioned environment and, through it, the model management service in Azure.

    a) To see available environments, run:
   
         az ml env list

    b) In my case, I have a few of these for different projects. To set the active environment, run:

         az ml env set -n mymldevenv -g mymldevenvrg

     Please note that the environment setup in step 6 created a resource group named after the environment with an "rg" suffix (mymldevenvrg), which is the resource group used here.

     c) Once you run this command, you should see this confirmation message in the console window:

         Compute set to mymldevenv.

9) Every time you re-open AML Workbench, make sure to set the right environment before deploying models as web services to the Azure model management service.


In the second part, I will take the next step of publishing models as real-time web services to the Azure model management service.


Enjoy ML :-)

- Mostafa


Tuesday, November 14, 2017

How to convert epoch time to datetime in Pandas

Hi,

While working with IoT data, I wanted to convert UNIX epoch timestamps (in milliseconds, in my case) into human-readable datetimes rather than raw offsets from the 1970 reference date.

I have my data in pandas dataframe, below screen shot shows "createdTime" column in epoch time in seconds:



Here is the code segment that converts UNIX epoch time into a datetime (dividing by 1e3 converts milliseconds into the seconds that fromtimestamp() expects):

import datetime

convert = lambda x: datetime.datetime.fromtimestamp(x / 1e3)
ds['ts'] = ds['createdTime'].apply(convert)
ds.head()


This code generates the expected output, below screen shot shows the output:



Hope this helps!
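As a postscript: pandas can also do this conversion in a single vectorized call. A minimal sketch, assuming a millisecond-resolution createdTime column like the one above (the sample value here is made up):

```python
import pandas as pd

# sample frame with a hypothetical epoch timestamp in milliseconds
ds = pd.DataFrame({"createdTime": [1508258340299]})

# unit="ms" tells pandas the integers are milliseconds since the 1970 epoch
ds["ts"] = pd.to_datetime(ds["createdTime"], unit="ms")
```

Note that pd.to_datetime returns timezone-naive UTC timestamps, while datetime.fromtimestamp uses local time, so the two approaches can differ by your UTC offset.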

Enable Jupyter notebook in Anaconda Navigator

Hi,

After I installed the latest conda runtime (the Anaconda 3 x64 distribution) on Windows 64-bit, I noticed that when I click on a target environment in Anaconda Navigator, the "Open with IPython" and "Open with Jupyter Notebook" options are not available.

The question is how to enable them. I found out how to install the Jupyter notebook package into a conda environment so that it becomes accessible through the Navigator tool.

Follow these steps:

1) Select any of the available environments, then click "Open terminal".
2) Type the following command:

conda install nb_conda

3) This installs the notebook packages for Jupyter; once it completes, Jupyter Notebook will be available for all environments.




4) To verify this works properly, click the Jupyter link, which will open the notebook.

5) Write some code to make sure this works with no issues.

import pandas as pd
s = pd.Series([1508258340299])
pd.to_datetime(s)

The code executed with no issues:



Hope this helps!


Thursday, November 09, 2017

Error downloading files from secure sites in .NET apps 4.6.2

Hi All,

I was working on upgrading a .NET application runtime from version 4.5 to the latest, 4.6.2. After I did that, the application threw an error in the step that downloads a zip file from a secure site.

The app threw the following error:

{"The request was aborted: Could not create SSL/TLS secure channel."} when trying to download file

The code snippet that was throwing the error uses the DownloadFile method of the .NET WebClient class:

C# code:

using (WebClient wc = new WebClient())
{
    wc.DownloadFile(ssl_url, downloadedFilePath);
}


The destination is a secure site that uses SSL. After searching and trying different options, I found the solution: enable TLS 1.1 and TLS 1.2 before calling the download method.

Here is the modified code snippet:


System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
using (WebClient wc = new WebClient())
{
    wc.DownloadFile(ssl_url, downloadedFilePath);
}


Hope this helps!


Tuesday, October 31, 2017

Load datasets from azure blob storage into Pandas dataframe

Hi,

In this post, I am sharing how to load data sets that are stored in Azure blob storage into a Pandas data frame.

I have the full code posted in Azure Notebooks. This code snippet is useful in any Jupyter notebook as part of your data pipeline while developing machine learning models.

I exported a data set to a csv file and stored it in Azure blob storage so I can use it in my notebooks.



Python code snippet:

import pandas as pd
import time
# import azure sdk packages
from azure.storage.blob import BlobService

def readBlobIntoDF(storageAccountName, storageAccountKey, containerName, blobName, localFileName):
    # get an instance of the blob service
    blob_service = BlobService(account_name=storageAccountName, account_key=storageAccountKey)
    # save the blob content into a local file
    blob_service.get_blob_to_path(containerName, blobName, localFileName)
    # load the local csv file into a dataframe
    dataframe_blobdata = pd.read_csv(localFileName, header=0)

    return dataframe_blobdata

STORAGEACCOUNTNAME= 'STORAGE_ACCOUNT_NAME'
STORAGEACCOUNTKEY= 'STORAGE_KEY'    
CONTAINERNAME= 'CONTAINER_NAME'
BLOBNAME= 'BLOB_NAME.csv'
LOCALFILENAME = 'FILE_NAME-csv-local'

# load blob file into pandas dataframe
tmp = readBlobIntoDF(STORAGEACCOUNTNAME,STORAGEACCOUNTKEY,CONTAINERNAME,BLOBNAME, LOCALFILENAME)
tmp.head()


The full code snippet is posted in Azure Notebook here.
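If you would rather skip the intermediate local file, the blob content can be parsed directly in memory. A minimal sketch; the helper name and sample bytes below are hypothetical, standing in for what blob_service.get_blob_to_bytes would return:

```python
import io

import pandas as pd

def bytesToDF(csv_bytes):
    # parse raw CSV bytes (e.g. from blob_service.get_blob_to_bytes) without a temp file
    return pd.read_csv(io.BytesIO(csv_bytes), header=0)

# hypothetical CSV payload standing in for real blob content
sample = b"createdTime,value\n1508258340299,42\n"
df = bytesToDF(sample)
```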

Enjoy!

Tuesday, June 06, 2017

Setup Remote Desktop for Raspberry Pi with no need for an external display

Hi,

If you are wondering how to set up remote desktop access to a Raspberry Pi, this article is for you. I will walk through installing the required packages so that you can remote desktop into the Pi from your Windows machine or any other remote machine.

Steps to configure remote desktop on raspberry pi:

1) Connect to your Raspberry Pi using Putty.
2) Open a terminal window.
3) We are going to install the XRDP package to enable RDP on the Pi. Before installing xrdp, we must first install the tightvncserver package. Installing tightvncserver also removes the RealVNC server software that ships with newer versions of Raspbian, since xrdp will not work while RealVNC is installed.

$ sudo apt install -y tightvncserver
$ sudo apt install -y xrdp

4) Now, install the Samba package, which lets your Windows machine resolve the Pi by its hostname when connecting over RDP:

$ sudo apt install -y samba

5) Open the Remote Desktop tool on Windows (or your host OS), enter the name or IP address of your Pi, and hit Connect.



With that, you can connect to any remote Pi or Linux-based IoT device from your computer, so there is no need to attach the device to an external display.

Enjoy!

Tuesday, April 25, 2017

Linear Regression Algorithms in Scikit-Learn

Hi,

While working with different regression algorithms in the scikit-learn library, I would like to share some important tips for differentiating between the major linear regression algorithms in the Machine Learning space.

Below is a comparison table to compare among four linear regression algorithms:


The general idea of Gradient Descent (GD) is to tweak parameters iteratively in order to minimize a cost function.

Batch vs. Stochastic Gradient Descent: at each step, Batch GD computes the gradients over the full training dataset, while Stochastic GD computes them from just one randomly picked instance.

Mini-Batch Gradient Descent sits in between: it computes the gradients on small random subsets of instances called mini-batches.
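The difference between the update rules can be sketched from scratch. A minimal example, assuming a tiny made-up dataset generated from y = 2x + 1:

```python
import random

# toy dataset generated from y = 2x + 1 (made up for illustration)
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

def batch_gd_step(w, b, data, lr):
    # Batch GD: average the squared-error gradient over the FULL dataset
    n = len(data)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
    return w - lr * grad_w, b - lr * grad_b

def sgd_step(w, b, data, lr):
    # Stochastic GD: gradient from ONE randomly picked instance
    x, y = random.choice(data)
    grad_w = 2 * (w * x + b - y) * x
    grad_b = 2 * (w * x + b - y)
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
for _ in range(2000):
    w, b = batch_gd_step(w, b, data, lr=0.05)
```

After enough steps, w and b approach the true values 2 and 1. Swapping batch_gd_step for sgd_step gives noisier but much cheaper updates, and averaging the gradient over a small random slice of the data instead gives mini-batch GD.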


There are more linear regression algorithms in sklearn that are not covered in this blog post; you can find them here: http://scikit-learn.org/stable/modules/sgd.html#regression


Hope this helps!

Sunday, April 23, 2017

What is the difference between estimators vs transformers vs predictors in sklearn?

Hi All,

While working on Machine Learning projects using the scikit-learn library, I would like to highlight important and fundamental concepts that every ML practitioner needs to be aware of. In this post I highlight a few concepts to differentiate estimators vs. transformers vs. predictors when building machine learning solutions using sklearn.


1) Estimators: Any object that can estimate some parameters based on a dataset is called an estimator. The estimation itself is performed by calling the fit() method.
This method takes one parameter (or two in the case of supervised learning algorithms). Any other parameter needed to guide the estimation process is called a hyperparameter and must be set as an instance variable.

For example: estimating the mean, median, or most frequent value of a column in my dataset.


This is a cheat sheet of sklearn estimators; you can find the up-to-date version here.





2) Transformers: transform a dataset. A transformer transforms a dataset by calling the transform() method, which returns the transformed dataset. Some estimators can also transform a dataset.

For example: the Imputer class in sklearn is both an estimator and a transformer. You can call the fit_transform() method, which estimates and transforms a dataset in one step.

Python code: 

from sklearn.preprocessing import Imputer

imputer = Imputer(strategy="mean")  # estimate the mean value of each dataset column

imputer.fit(mydataset)            # Imputer as an estimator

imputer.fit_transform(mydataset)  # Imputer as estimator and transformer (two steps combined)
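To make the protocol concrete, here is a from-scratch toy imputer (not sklearn's implementation) that follows the same fit()/transform()/fit_transform() convention, filling missing values (None) with the column mean:

```python
class MeanImputer:
    """Toy imputer following the sklearn-style fit/transform protocol."""

    def fit(self, column):
        # estimator step: learn the mean of the observed (non-missing) values
        observed = [v for v in column if v is not None]
        self.mean_ = sum(observed) / len(observed)
        return self

    def transform(self, column):
        # transformer step: replace missing values with the learned mean
        return [self.mean_ if v is None else v for v in column]

    def fit_transform(self, column):
        # both steps combined, as sklearn estimators offer
        return self.fit(column).transform(column)

filled = MeanImputer().fit_transform([1.0, None, 3.0])
```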




3) Predictors: make predictions for a given dataset. A predictor class has a predict() method that takes new instances of a dataset and returns a dataset with the corresponding predictions. It also has a score() method that measures the quality of the predictions on a given test dataset.

For example: LinearRegression, SVM, Decision Trees, etc. are predictors.


You can combine the building blocks of estimators, transformers, and predictors as a pipeline in sklearn. This allows developers to chain a sequence of transformers followed by a final estimator or predictor. This concept is called composition.
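The composition idea can also be sketched from scratch. This is a minimal stand-in for sklearn's Pipeline with made-up toy steps, not the real implementation:

```python
class ScaleByMax:
    # toy transformer: scale values into [0, 1] by the max seen at fit time
    def fit(self, xs, ys=None):
        self.max_ = max(xs)
        return self

    def transform(self, xs):
        return [x / self.max_ for x in xs]

class MeanPredictor:
    # toy predictor: always predicts the mean target seen at fit time
    def fit(self, xs, ys):
        self.mean_ = sum(ys) / len(ys)
        return self

    def predict(self, xs):
        return [self.mean_ for _ in xs]

class ToyPipeline:
    # chain transformers, ending with a predictor, as sklearn's Pipeline does
    def __init__(self, steps):
        self.steps = steps

    def fit(self, xs, ys):
        for step in self.steps[:-1]:
            xs = step.fit(xs, ys).transform(xs)
        self.steps[-1].fit(xs, ys)
        return self

    def predict(self, xs):
        for step in self.steps[:-1]:
            xs = step.transform(xs)
        return self.steps[-1].predict(xs)

pipe = ToyPipeline([ScaleByMax(), MeanPredictor()]).fit([1.0, 2.0, 4.0], [10.0, 20.0, 30.0])
preds = pipe.predict([2.0])
```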


Hope this helps