import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
%matplotlib inline
!wget -O FuelConsumption.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
--2019-04-15 10:39:01--  https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv
Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 126.96.36.199
Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|188.8.131.52|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72629 (71K) [text/csv]
Saving to: ‘FuelConsumption.csv’

FuelConsumption.csv 100%[=====================>]  70.93K  --.-KB/s    in 0.04s

2019-04-15 10:39:01 (1.65 MB/s) - ‘FuelConsumption.csv’ saved [72629/72629]
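If wget is not available in your environment, the same file can be loaded with pandas directly from the URL. This is just an optional sketch, assuming the course URL above is still reachable; otherwise point it at a local copy of the file.

import pandas as pd

# Assumption: the course URL above still serves the CSV.
url = ("https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/"
       "CognitiveClass/ML0101ENv3/labs/FuelConsumptionCo2.csv")
df = pd.read_csv(url)
df.to_csv("FuelConsumption.csv", index=False)  # keep a local copy for the rest of the lab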
We have downloaded a fuel consumption dataset, FuelConsumption.csv, which contains model-specific fuel consumption ratings and estimated carbon dioxide emissions for new light-duty vehicles for retail sale in Canada. Dataset source
df = pd.read_csv("FuelConsumption.csv")

# take a look at the dataset
df.head()
| | MODELYEAR | MAKE | MODEL | VEHICLECLASS | ENGINESIZE | CYLINDERS | TRANSMISSION | FUELTYPE | FUELCONSUMPTION_CITY | FUELCONSUMPTION_HWY | FUELCONSUMPTION_COMB | FUELCONSUMPTION_COMB_MPG | CO2EMISSIONS |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 2014 | ACURA | MDX 4WD | SUV - SMALL | 3.5 | 6 | AS6 | Z | 12.7 | 9.1 | 11.1 | 25 | 255 |
| 4 | 2014 | ACURA | RDX AWD | SUV - SMALL | 3.5 | 6 | AS6 | Z | 12.1 | 8.7 | 10.6 | 27 | 244 |
# summarize the data
df.describe()
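Beyond the summary statistics, it can be worth confirming that the columns we plan to use are numeric and have no missing values. This is an optional check, not part of the original lab, using only the DataFrame loaded above:

# optional sanity checks (assumes df from the cell above)
print(df.dtypes)          # confirm the feature columns are numeric
print(df.isnull().sum())  # count missing values per column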
Let's select some features to explore further.
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
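As a quick optional check (not part of the original lab), the pairwise correlations of these columns give a first hint of how strongly each feature tracks the emissions:

# optional: pairwise correlations between the selected features (assumes cdf from above)
cdf.corr()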
We can plot a histogram of each of these features:
viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]
viz.hist()
plt.show()
Now, let's plot each of these features against the emissions to see how linear their relationship is:
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
# write your code here
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
Train/Test Split involves splitting the dataset into training and testing sets that are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data used to train the model, which makes it more realistic for real-world problems.

This means that we know the outcome of each data point in the testing set, making it great to test with! And since this data has not been used to train the model, the model has no knowledge of these outcomes, so it is truly out-of-sample testing.

Let's split our dataset into training and test sets: 80% of the data for training and 20% for testing. We create a mask to select random rows using the np.random.rand() function:
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
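An equivalent 80/20 split can also be done with scikit-learn's train_test_split. This is just an alternative sketch; the rest of the lab continues with the train and test sets defined above.

from sklearn.model_selection import train_test_split

# alternative 80/20 split; random_state is an arbitrary choice for reproducibility
train_alt, test_alt = train_test_split(cdf, test_size=0.2, random_state=42)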
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
from sklearn import linear_model

regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(train_x, train_y)

# The coefficients
print('Coefficients: ', regr.coef_)
print('Intercept: ', regr.intercept_)
Coefficients: [[39.26170872]] Intercept: [124.82174745]
As mentioned before, the coefficient and intercept in simple linear regression are the parameters of the fitted line. Given that it is a simple linear regression with only two parameters, and knowing that those parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters.
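To make the interpretation concrete, the fitted parameters can be used by hand to predict the emissions for a hypothetical engine size. This small check is not part of the original lab, and the 3.0 L value is just an illustrative input:

# predict CO2 emissions for a hypothetical 3.0 L engine using y = intercept + slope * x
engine_size = 3.0
co2_hat = regr.intercept_[0] + regr.coef_[0][0] * engine_size
print("Predicted CO2 for a %.1f L engine: %.1f" % (engine_size, co2_hat))
# this should match regr.predict([[engine_size]])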
We can plot the fitted line over the data:
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_*train_x + regr.intercept_, '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
We compare the actual and predicted values to calculate the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into the areas that require improvement.

There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model on the test set:
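For reference, the metrics printed below are: Mean Absolute Error, MAE = mean(|y − ŷ|); Mean Squared Error, MSE = mean((y − ŷ)²); and the R²-score, R² = 1 − (sum of squared residuals) / (total sum of squares), where ŷ is the model's prediction. A higher R² (closer to 1) indicates a better fit, while lower MAE and MSE indicate smaller errors.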
from sklearn.metrics import r2_score

test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_hat = regr.predict(test_x)

print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_hat - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_hat - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_hat))  # true values first, predictions second
Mean absolute error: 22.99
Residual sum of squares (MSE): 916.41
R2-score: 0.70
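The same numbers can also be obtained from scikit-learn's built-in metric functions; this is just an equivalent sketch using the arrays computed above:

from sklearn.metrics import mean_absolute_error, mean_squared_error

# equivalent metrics via sklearn (assumes test_y and test_y_hat from the cell above)
print("MAE: %.2f" % mean_absolute_error(test_y, test_y_hat))
print("MSE: %.2f" % mean_squared_error(test_y, test_y_hat))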
IBM SPSS Modeler is a comprehensive analytics platform with many machine learning algorithms. It is designed to bring predictive intelligence to decisions made by individuals, groups, systems, and your enterprise as a whole. A free trial is available through this course, available here: SPSS Modeler
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at Watson Studio
Saeed Aghabozorgi, PhD, is a Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods, such as machine learning and statistical modelling, on large datasets.