Machine Learning for Robotics

Neil Chen

Hi, I'm Neil

Motivation

Autonomous delivery

Autonomous driving

FIRST

FRC

What is ML?

  • Teaching algorithms to make decisions without being explicitly programmed to do so
  • Data mining, optimization, & statistics

How is ML useful for robotics?

Learning to walk

Formulation

  • Types of learning
    • Supervised, unsupervised
  • Methodology
    • Data collection
    • Training, validation, testing
  • Algorithms
    • Linear models
    • Nonlinear models
    • Neural networks

Types of learning
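Supervised learning fits a model to labeled examples; unsupervised learning looks for structure in unlabeled data. The cell below is a minimal illustrative sketch (not from the original notebook), assuming scikit-learn and a made-up two-group dataset.

In [ ]:
# Illustrative sketch (not from the original notebook): supervised vs. unsupervised
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Two small groups of 2-D points
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])

# Supervised: labels y are given, and the classifier learns the mapping X -> y
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1, 0.1], [1.0, 1.0]]))   # should recover the two classes

# Unsupervised: only X is given; k-means groups the points into clusters on its own
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                               # two clusters (label names arbitrary)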

Methodology

Data collection

Training, validation, testing
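A common recipe: fit on the training set, tune choices against a validation set, and report final performance on a test set that is touched only once. The cell below is an illustrative sketch (not from the original notebook) of a 60/20/20 split on dummy data with scikit-learn.

In [ ]:
# Illustrative sketch (not from the original notebook): train / validation / test split
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 5)           # 100 samples, 5 features (dummy data)
y = np.random.randint(0, 2, 100)     # dummy binary labels

# Carve out 20% as the final test set ...
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# ... then split the remainder into training and validation (0.25 of 80% = 20% overall)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 60 20 20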

Algorithms

In [ ]:
# Plotting and scikit-learn utilities used throughout
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
sns.set()  # use seaborn's default plot styling
In [75]:
# Linear models: fit ordinary least squares to a small toy dataset
from sklearn.linear_model import LinearRegression

# Targets follow y = 1*x1 + 2*x2 + 3 exactly, so the fit should recover
# coefficients [1, 2] and intercept 3
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3
reg = LinearRegression().fit(X, y)
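The outline also lists nonlinear models and neural networks. The polynomial-kernel SVM in the MNIST example below is one nonlinear model; as a sketch of a small neural network (an addition, not from the original notebook), scikit-learn's MLPRegressor exposes the same fit/predict interface:

In [ ]:
# Illustrative sketch (not from the original notebook): a small neural network
# on the same kind of toy regression data, via scikit-learn's MLPRegressor
from sklearn.neural_network import MLPRegressor

X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]], dtype=float)
y = np.dot(X, np.array([1, 2])) + 3

# One hidden layer of 16 units; max_iter raised so training on 4 points settles down
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
mlp.fit(X, y)
print(mlp.predict([[1.5, 2.0]]))   # true function gives 1*1.5 + 2*2.0 + 3 = 8.5,
                                   # though 4 samples is far too little data to fit well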

Example

MNIST handwritten digit classification

In [19]:
# Load the MNIST training CSV (label in column 0, 784 pixel values after it),
# keeping every 10th row so the SVM trains quickly
data = np.genfromtxt('mnist_train.csv', delimiter=',')[::10]
In [34]:
# Split features from labels, then hold out 20% for testing,
# stratified so every digit keeps the same proportion in both splits
X, y = data[:, 1:], data[:, 0]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=30, test_size=0.2, stratify=y)
In [35]:
sns.countplot(y)
plt.title('Distribution of labels')
plt.xlabel('digit')
Out[35]:
Text(0.5, 0, 'digit')
In [64]:
# Pipeline: standardize the pixel values, then classify with a polynomial-kernel SVM
steps = [('scaler', StandardScaler()), ('SVM', SVC(kernel='poly'))]
pipeline = Pipeline(steps)
In [65]:
pipeline.fit(X_train, y_train)
Out[65]:
Pipeline(memory=None,
         steps=[('scaler',
                 StandardScaler(copy=True, with_mean=True, with_std=True)),
                ('SVM',
                 SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
                     decision_function_shape='ovr', degree=3,
                     gamma='auto_deprecated', kernel='poly', max_iter=-1,
                     probability=False, random_state=None, shrinking=True,
                     tol=0.001, verbose=False))],
         verbose=False)
In [66]:
pipeline.score(X_test, y_test)
Out[66]:
0.785
In [67]:
preds = pipeline.predict(X_test)
In [72]:
# Show nine random test images with the model's predictions as titles
fig, ax = plt.subplots(figsize=(20, 20), nrows=3, ncols=3)
for i, j in enumerate(np.random.randint(0, y_test.shape[0], 9)):
    # imshow autoscales float data, so the raw 28x28 pixel values can be shown directly
    img = np.reshape(X_test[j], (28, 28))
    ax.flat[i].set_title('Model predicts: %d' % preds[j], fontsize='xx-large')
    ax.flat[i].imshow(img, cmap='gray')
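One way to dig into where the roughly 78.5% accuracy breaks down (an addition, not part of the original notebook) is a confusion matrix over the same test-set predictions:

In [ ]:
# Illustrative addition (not from the original notebook): confusion matrix showing
# which digits the polynomial-kernel SVM mistakes for which
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, preds)
plt.figure(figsize=(8, 8))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('predicted digit')
plt.ylabel('true digit')
plt.title('Confusion matrix on the held-out test set')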