NLP with Deeplearning4j Training Course

Duration

14 hours (usually 2 days including breaks)

Requirements

Knowledge of Deep Learning, and one of the following languages:

  • Java
  • Scala

and the following software:

  • Java Development Kit (JDK) 1.7 or later (only 64-bit versions supported)
  • Apache Maven
  • IntelliJ IDEA or Eclipse
  • Git

Overview

Deeplearning4j is an open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs.

Word2Vec is a method of computing vector representations of words introduced by a team of researchers at Google led by Tomas Mikolov.

Audience

This course is directed at researchers, engineers and developers seeking to utilize Deeplearning4J to construct Word2Vec models.

Course Outline

Getting Started

  • DL4J Examples in a Few Easy Steps
  • Using DL4J In Your Own Projects: Configuring the POM.xml File

Word2Vec

  • Introduction
  • Neural Word Embeddings
  • Amusing Word2vec Results
  • The Code
  • Anatomy of Word2Vec
  • Setup, Load and Train
  • A Code Example
  • Troubleshooting & Tuning Word2Vec
  • Word2vec Use Cases
  • Foreign Languages
  • GloVe (Global Vectors) & Doc2Vec

Deep Learning for NLP (Natural Language Processing) Training Course

Duration

28 hours (usually 4 days including breaks)

Requirements

  • An understanding of Python programming
  • An understanding of Python libraries in general

Audience

  • Programmers with interest in linguistics
  • Programmers who seek an understanding of NLP (Natural Language Processing) 

Overview

DL (Deep Learning) is a subset of ML (Machine Learning).

Python is a popular programming language that contains libraries for Deep Learning for NLP.

Deep Learning for NLP (Natural Language Processing) allows a machine to learn simple to complex language processing. Among the tasks currently possible are language translation and caption generation for photos.

In this instructor-led, live training, participants will learn to use Python libraries for NLP as they create an application that processes a set of pictures and generates captions. 

By the end of this training, participants will be able to:

  • Design and code DL for NLP using Python libraries.
  • Create Python code that reads a large collection of pictures and generates keywords.
  • Create Python code that generates captions from the detected keywords (a minimal sketch follows this list).
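
To make the keywords-to-captions idea concrete, here is a minimal sketch of that pipeline. It is not the course's own code: the file name photo.jpg is a placeholder, a pretrained ImageNet classifier (ResNet50 via Keras) stands in for the keyword detector, and the caption comes from a simple template rather than a trained sequence decoder.

```python
# A sketch of the pictures -> keywords -> caption pipeline (not the course's code).
# Assumptions: TensorFlow/Keras is installed, ImageNet weights can be downloaded,
# and "photo.jpg" is a placeholder file; a real captioning model would replace the
# template below with a trained sequence decoder.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")  # pretrained image classifier

img = image.load_img("photo.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# The top-3 ImageNet labels act as the detected "keywords" for this picture.
preds = decode_predictions(model.predict(x), top=3)[0]
keywords = [label.replace("_", " ") for (_, label, _) in preds]

# Generate a caption from the keywords with a simple template.
caption = f"A photo showing {', '.join(keywords[:-1])} and {keywords[-1]}."
print(keywords)
print(caption)
```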

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Course Outline

Introduction to Deep Learning for NLP

Differentiating between the various types of DL models

Using pre-trained vs trained models

Using word embeddings and sentiment analysis to extract meaning from text 

How Unsupervised Deep Learning works

Installing and Setting Up Python Deep Learning libraries

Using the Keras DL library on top of TensorFlow to create captions in Python

Working with Theano and TensorFlow (both numerical computation libraries) as extended DL backends for the purpose of creating captions

Using Keras on top of TensorFlow or Theano to quickly experiment with Deep Learning

Creating a simple Deep Learning application in TensorFlow to add captions to a collection of pictures

Troubleshooting

A word on other (specialized) DL frameworks

Deploying your DL application

Using GPUs to accelerate DL

Closing remarks

Natural Language Processing (NLP) with TensorFlow Training Course

Duration

35 hours (usually 5 days including breaks)

Requirements

Working knowledge of Python

Overview

TensorFlow™ is an open source software library for numerical computation using data flow graphs.

SyntaxNet is a neural-network Natural Language Processing framework for TensorFlow.

Word2Vec is used for learning vector representations of words, called “word embeddings”. Word2vec is a particularly computationally-efficient predictive model for learning word embeddings from raw text. It comes in two flavors, the Continuous Bag-of-Words model (CBOW) and the Skip-Gram model (Sections 3.1 and 3.2 in Mikolov et al.).

Used in tandem, SyntaxNet and Word2Vec allow users to generate learned embedding models from natural language input.
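
As a concrete illustration of the skip-gram model described above, here is a minimal TensorFlow 2.x sketch. It is not the course's own material: the toy corpus, window size, and embedding dimension are illustrative, and a single shared embedding with a plain sigmoid loss stands in for the noise-contrastive estimation used by the real Word2Vec implementation.

```python
# Minimal skip-gram sketch (illustrative corpus and hyper-parameters; a shared
# embedding and sigmoid loss stand in for Word2Vec's noise-contrastive training).
import numpy as np
import tensorflow as tf

corpus = [
    "the king rules the kingdom",
    "the queen rules the kingdom",
    "dogs and cats are friendly animals",
]

tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(corpus)
sequences = tokenizer.texts_to_sequences(corpus)
vocab_size = len(tokenizer.word_index) + 1

# (target, context) pairs: label 1 for real neighbours, 0 for sampled noise words.
pairs, labels = [], []
for seq in sequences:
    p, l = tf.keras.preprocessing.sequence.skipgrams(
        seq, vocabulary_size=vocab_size, window_size=2, negative_samples=1.0)
    pairs.extend(p)
    labels.extend(l)
pairs = np.array(pairs)
labels = np.array(labels, dtype="float32").reshape(-1, 1)
target, context = pairs[:, 0], pairs[:, 1]

# Score each pair with the dot product of the two word vectors.
embedding_dim = 16
t_in = tf.keras.Input(shape=(), dtype="int32")
c_in = tf.keras.Input(shape=(), dtype="int32")
embed = tf.keras.layers.Embedding(vocab_size, embedding_dim, name="word_embedding")
score = tf.keras.layers.Dot(axes=1)([embed(t_in), embed(c_in)])
prob = tf.keras.layers.Activation("sigmoid")(score)

model = tf.keras.Model([t_in, c_in], prob)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit([target, context], labels, epochs=10, verbose=0)

# Each row of this matrix is the learned vector ("embedding") of one word.
word_vectors = model.get_layer("word_embedding").get_weights()[0]
print(word_vectors.shape)  # (vocab_size, embedding_dim)
```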

Audience

This course is targeted at developers and engineers who intend to work with SyntaxNet and Word2Vec models in their TensorFlow graphs.

After completing this course, delegates will:

  • understand TensorFlow’s structure and deployment mechanisms
  • be able to carry out installation, production environment setup, architecture tasks, and configuration
  • be able to assess code quality, perform debugging, and carry out monitoring
  • be able to implement advanced production tasks such as training models, embedding terms, building graphs, and logging

Course Outline

Getting Started

  • Setup and Installation

TensorFlow Basics

  • Creating, Initializing, Saving, and Restoring TensorFlow Variables
  • Feeding, Reading and Preloading TensorFlow Data
  • How to use TensorFlow infrastructure to train models at scale
  • Visualizing and Evaluating models with TensorBoard

TensorFlow Mechanics 101

  • Prepare the Data
    • Download
    • Inputs and Placeholders
  • Build the Graph
    • Inference
    • Loss
    • Training
  • Train the Model
    • The Graph
    • The Session
    • Train Loop
  • Evaluate the Model
    • Build the Eval Graph
    • Eval Output

Advanced Usage

  • Threading and Queues
  • Distributed TensorFlow
  • Writing Documentation and Sharing your Model
  • Customizing Data Readers
  • Using GPUs
  • Manipulating TensorFlow Model Files

TensorFlow Serving

  • Introduction
  • Basic Serving Tutorial
  • Advanced Serving Tutorial
  • Serving Inception Model Tutorial

Getting Started with SyntaxNet

  • Parsing from Standard Input
  • Annotating a Corpus
  • Configuring the Python Scripts

Building an NLP Pipeline with SyntaxNet

  • Obtaining Data
  • Part-of-Speech Tagging
  • Training the SyntaxNet POS Tagger
  • Preprocessing with the Tagger
  • Dependency Parsing: Transition-Based Parsing
  • Training a Parser Step 1: Local Pretraining
  • Training a Parser Step 2: Global Training

Vector Representations of Words

  • Motivation: Why Learn word embeddings?
  • Scaling up with Noise-Contrastive Training
  • The Skip-gram Model
  • Building the Graph
  • Training the Model
  • Visualizing the Learned Embeddings
  • Evaluating Embeddings: Analogical Reasoning
  • Optimizing the Implementation

Natural Language Processing (NLP) with Python Training Course

Duration

28 hours (usually 4 days including breaks)

Requirements

Basic Knowledge of Python

Overview

This course introduces linguists or programmers to NLP in Python. During this course we will mostly use NLTK (the Natural Language Toolkit, nltk.org), along with other libraries relevant and useful for NLP. At the moment we can conduct this course in Python 2.x or Python 3.x. Examples are in English or Mandarin (普通话). Other languages can also be made available if agreed before booking.
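
To give a flavour of the NLTK workflow covered in the outline below, here is a minimal sketch; the sample sentence is illustrative, and it assumes the punkt and averaged_perceptron_tagger resources can be downloaded.

```python
# A minimal NLTK sketch (the sample sentence is illustrative).
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer models
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger model

text = "Natural language processing with Python makes text analysis practical."

tokens = nltk.word_tokenize(text)      # split the text into words
fdist = nltk.FreqDist(tokens)          # frequency distribution over tokens
bigrams = list(nltk.bigrams(tokens))   # adjacent word pairs
tagged = nltk.pos_tag(tokens)          # part-of-speech tags

print(fdist.most_common(3))
print(bigrams[:3])
print(tagged[:3])
```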

Course Outline

Overview of Python packages related to NLP

Introduction to NLP (examples in Python of course)

  1. Simple Text Manipulation
    1. Searching Text
    2. Counting Words
    3. Splitting Texts into Words
    4. Lexical dispersion
  2. Processing complex structures
    1. Representing text in Lists
    2. Indexing Lists
    3. Collocations
    4. Bigrams
    5. Frequency Distributions
    6. Conditionals with Words
    7. Comparing Words (startswith, endswith, islower, isalpha, etc…)
  3. Natural Language Understanding
    1. Word Sense Disambiguation
    2. Pronoun Resolution
  4. Machine translations (statistical, rule based, literal, etc…)
  5. Exercises

NLP in Python in examples

  1. Accessing Text Corpora and Lexical Resources
    1. Common sources for corpora
    2. Conditional Frequency Distributions
    3. Counting Words by Genre
    4. Creating own corpus
    5. Pronouncing Dictionary
    6. Shoebox and Toolbox Lexicons
    7. Senses and Synonyms
    8. Hierarchies
    9. Lexical Relations: Meronyms, Holonyms
    10. Semantic Similarity
  2. Processing Raw Text
    1. Printing
    2. Truncating
    3. Extracting parts of strings
    4. Accessing individual characters
    5. Searching, replacing, splitting, joining, indexing, etc…
    6. Using regular expressions
    7. Detecting word patterns
    8. Stemming
    9. Tokenization
    10. Normalization of text
    11. Word Segmentation (especially in Chinese)
  3. Categorizing and Tagging Words
    1. Tagged Corpora
    2. Tagged Tokens
    3. Part-of-Speech Tagset
    4. Python Dictionaries
    5. Words to Properties mapping
    6. Automatic Tagging
    7. Determining the Category of a Word (Morphological, Syntactic, Semantic)
  4. Text Classification (Machine Learning)
    1. Supervised Classification (see the sketch after this outline)
    2. Sentence Segmentation
    3. Cross Validation
    4. Decision Trees
  5. Extracting Information from Text
    1. Chunking
    2. Chinking
    3. Tags vs Trees
  6. Analyzing Sentence Structure
    1. Context Free Grammar
    2. Parsers
  7. Building Feature Based Grammars
    1. Grammatical Features
    2. Processing Feature Structures
  8. Analyzing the Meaning of Sentences
    1. Semantics and Logic
    2. Propositional Logic
    3. First-Order Logic
    4. Discourse Semantics
  9. Managing Linguistic Data
    1. Data Formats (Lexicon vs Text)
    2. Metadata
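
As referenced in the Supervised Classification item above, here is a minimal text-classification sketch with NLTK's NaiveBayesClassifier; the tiny labeled sentences and the bag-of-words feature extractor are illustrative, not course data.

```python
# A minimal supervised-classification sketch with NLTK's NaiveBayesClassifier.
# The labeled sentences and the bag-of-words features are illustrative only.
import nltk

nltk.download("punkt", quiet=True)

def features(sentence):
    # Simple word-presence features.
    return {f"contains({w.lower()})": True for w in nltk.word_tokenize(sentence)}

train = [
    ("I love this film", "pos"),
    ("What a wonderful story", "pos"),
    ("This was a terrible movie", "neg"),
    ("I hate the ending", "neg"),
]
train_set = [(features(sentence), label) for sentence, label in train]

classifier = nltk.NaiveBayesClassifier.train(train_set)
print(classifier.classify(features("a wonderful film")))  # expected: 'pos'
classifier.show_most_informative_features(3)
```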

NLP: Natural Language Processing with R Training Course

Duration

21 hours (usually 3 days including breaks)

Requirements

  • Some familiarity with programming.

Audience

  • Linguists and programmers

Overview

It is estimated that unstructured data accounts for more than 90 percent of all data, much of it in the form of text. Blog posts, tweets, social media, and other digital publications continuously add to this growing body of data.

This instructor-led, live course centers around extracting insights and meaning from this data. Utilizing the R Language and Natural Language Processing (NLP) libraries, we combine concepts and techniques from computer science, artificial intelligence, and computational linguistics to algorithmically understand the meaning behind text data. Data samples are available in various languages per customer requirements.

By the end of this training, participants will be able to prepare data sets (large and small) from disparate sources, then apply the right algorithms to analyze and report on their significance.

Format of the Course

  • Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

Course Outline

Introduction

  • NLP and R vs Python

Installing and Configuring R Studio

Installing R Packages Related to Natural Language Processing (NLP)

An Overview of R’s Text Manipulation Capabilities

Getting Started with an NLP Project in R

Reading and Importing Data Files into R

Text Manipulation with R

Document Clustering in R

Parts of Speech Tagging in R

Sentence Parsing in R

Working with Regular Expressions in R

Named-Entity Recognition in R

Topic Modeling in R

Text Classification in R

Working with Very Large Data Sets

Visualizing Your Results

Optimization

Integrating R with Other Languages (Java, Python, etc.)

Summary and Conclusion

Vertex AI Training Course

Duration

7 hours (usually 1 day including breaks)

Requirements

  • Knowledge of machine learning

Audience

  • Software engineers
  • Machine learning enthusiasts

Overview

Vertex AI is a Google Cloud environment for completing machine learning tasks from experimentation, to deployment, to managing and monitoring models. It is a scalable infrastructure that provides user management capabilities and security controls over machine learning projects.

This instructor-led, live training (online or onsite) is aimed at beginner to intermediate-level software engineers or anyone who wishes to learn how to use Vertex AI to perform and complete machine learning activities.

By the end of this training, participants will be able to:

  • Understand how Vertex AI works and use it as a machine learning platform.
  • Learn about machine learning and NLP concepts.
  • Know how to train and deploy machine learning models using Vertex AI.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • To request a customized training for this course, please contact us to arrange.

Course Outline

Introduction

Overview of Vertex AI

Understanding AI Concepts

Setting up the Vertex AI Environment

Regression and Classification Concepts in Vertex AI

NLP Concepts in Vertex AI

Setting up Containerized Training Code

Running a Training Job on Vertex AI

Deploying a Model Endpoint
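
As a rough illustration of the containerized-training and deployment steps above, here is a hedged sketch with the google-cloud-aiplatform Python SDK. Every name below (project, bucket, script, container images) is a placeholder and the exact arguments can vary by SDK version; treat it as a sketch, not the course's own code.

```python
# A hedged sketch with the google-cloud-aiplatform SDK. All names (project,
# bucket, script, container images) are placeholders, and argument names may
# vary between SDK versions; this is not the course's own code.
from google.cloud import aiplatform

aiplatform.init(
    project="my-gcp-project",                 # placeholder project ID
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",  # placeholder bucket
)

# Containerized training: the local script runs inside a prebuilt container.
job = aiplatform.CustomTrainingJob(
    display_name="nlp-classifier-training",
    script_path="task.py",                    # placeholder training script
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
    model_serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest"),
)

model = job.run(
    model_display_name="nlp-classifier",
    replica_count=1,
    machine_type="n1-standard-4",
)

# Deploy the trained model to an endpoint and request an online prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[0.1, 0.2, 0.3]]))  # placeholder instance
```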

Troubleshooting

Summary and Next Steps

Build a Web Application with Python/Flask and NLP

Share the joy of famous quotes with a cloud-based web app using natural language processing to hit the right mood!

Requirements

  • Basic knowledge of Python, HTML, and Jupyter notebooks
  • Ability to install Python libraries as required per course (pip install or however you do it on your OS)

Description

Let’s share the wonderful joy of famous quotes with the world through a quoting machine web application that uses natural language sentiment to tailor the right quote for the user.

The class will teach you how to take your Python ideas and extend them to the web as real web applications so the world can enjoy your work.

In this class, we will:

  • develop our ideas in a local Jupyter notebook
  • gather data (famous quotes)
  • use the VADER NLP sentiment algorithm
  • tune our models and dispensing mechanisms locally
  • design the look and feel
  • get graphics
  • extend responsive HTML templates
  • port to the web using PythonAnywhere
  • enjoy great quotes in tune with our moods 24/7

Above all, you will understand how you can port your own Python ideas to the web as fully interactive web applications so the world can enjoy your work!
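
Here is a minimal local sketch of the idea, assuming Flask and NLTK are installed; the route name, quote list, and sentiment thresholds are illustrative choices, not the app built in the course.

```python
# A minimal local sketch (Flask + NLTK's VADER); the quotes, route, and
# thresholds are illustrative, not the app built in the course.
import nltk
from flask import Flask, request
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # VADER's sentiment lexicon

app = Flask(__name__)
analyzer = SentimentIntensityAnalyzer()

QUOTES = {
    "positive": "Keep your face always toward the sunshine. - Walt Whitman",
    "neutral": "The unexamined life is not worth living. - Socrates",
    "negative": "Tough times never last, but tough people do. - Robert Schuller",
}

@app.route("/quote")
def quote():
    # The visitor describes their mood, e.g. /quote?mood=I+had+a+rough+day
    mood = request.args.get("mood", "")
    compound = analyzer.polarity_scores(mood)["compound"]  # -1 .. +1
    if compound >= 0.05:
        bucket = "positive"
    elif compound <= -0.05:
        bucket = "negative"
    else:
        bucket = "neutral"
    return QUOTES[bucket]

if __name__ == "__main__":
    app.run(debug=True)  # on PythonAnywhere the WSGI config imports `app` instead
```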

Who this course is for:

  • Anybody wanting to extend their programmatic reach
  • Anybody wanting to share their work to the entire world by porting to the web

Course content

1 section • 12 lectures • 1h 59m total length

NLP with vector spaces

Advance your knowledge of NLP

Understand NLP

Advance your knowledge of DL

Understand DL

Requirements

  • Motivation
  • Interest
  • Mathematical approach

Description

I am Nitsan Soffair, a Deep RL researcher at BGU.

In this course you will learn NLP with vector spaces.

You will

  1. Get knowledge of
    1. Sentiment analysis with logistic regression
    2. Sentiment analysis with naive bayes
    3. Vector space models
    4. Machine translation and document search
  2. Validate knowledge by answering a quiz at the end of each lecture
  3. Be able to complete the course in ~2 hours.

Syllabus

  1. Sentiment analysis with logistic regression
    1. Supervised ML
    2. Feature extraction
    3. Logistic regression
  2. Sentiment analysis with naive bayes
    1. Bayes rule
    2. Laplacian smoothing
  3. Vector space models
    1. Euclidean distance
    2. Cosine similarity
    3. PCA
  4. Machine translation and document search
    1. Word vectors
    2. K-nearest neighbours
    3. Approximate nearest neighbours
  5. Additional content
    1. GPT-3
    2. DALL-E
    3. CLIP

Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers (such as index terms). It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System.
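
A minimal sketch of the vector space model in Python, using scikit-learn (an assumed library choice; the three toy documents are illustrative): each document becomes a vector of weighted index terms, and cosine similarity ranks how related two documents are.

```python
# A minimal vector space model sketch with scikit-learn (assumed library choice;
# the three documents are illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a cat chased a mouse",
    "stocks fell sharply on Monday",
]

# Each document becomes a vector of weighted index terms (tf-idf).
X = TfidfVectorizer().fit_transform(docs)

# Cosine similarity between document vectors: higher means more related.
print(cosine_similarity(X[0], X[1]))  # the two cat documents
print(cosine_similarity(X[0], X[2]))  # cat document vs. finance document
```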

Supervised learning (SL) is the machine learning task of learning a function that maps an input to an output based on example input-output pairs. It infers a function from labeled training data consisting of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a “reasonable” way (see inductive bias). This statistical quality of an algorithm is measured through the so-called generalization error.

The parallel task in human and animal psychology is often referred to as concept learning.
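
To tie the supervised-learning description above to the first syllabus item, here is a minimal sentiment-analysis-with-logistic-regression sketch using scikit-learn; the handful of labeled sentences are illustrative, not course data.

```python
# A minimal sentiment-analysis-with-logistic-regression sketch (scikit-learn);
# the labeled sentences are illustrative, not course data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I loved this movie",
    "what a great performance",
    "this film was awful",
    "I really hated the plot",
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative (the supervisory signal)

# Feature extraction (bag of words) followed by logistic regression.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["a great movie"]))        # expected: [1]
print(model.predict_proba(["an awful plot"]))  # class probabilities
```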

Resources

  • Wikipedia
  • Coursera

Who this course is for:

  • Anyone interested in NLP
  • Anyone interested in AI

Modern NLP using Deep Learning

Advance your knowledge of modern NLP

Understand modern NLP techniques

Advance your knowledge of modern DL

Understand modern DL techniques

Requirements

  • Motivation
  • Interest
  • Mathematical approach

Description

You will learn the newest state-of-the-art Deep Learning approaches to Natural Language Processing (NLP).

You will

  1. Get state-of-the-art knowledge regarding
    1. NMT
    2. Text summarization
    3. QA
    4. Chatbot
  2. Validate your knowledge by answering a short and very easy 3-question quiz at the end of each lecture
  3. Be able to complete the course in ~2 hours.

Syllabus

  1. Neural machine translation (NMT)
    1. Seq2seq
      A family of machine learning approaches used for natural language processing.
    2. Attention
      A technique that mimics cognitive attention (a minimal sketch of this operation follows the syllabus).
    3. NMT
      An approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modelling entire sentences in a single integrated model.
    4. Teacher-forcing
      An algorithm for training the weights of recurrent neural networks (RNNs).
    5. BLEU
      An algorithm for evaluating the quality of text which has been machine-translated from one natural language to another.
    6. Beam search
      A heuristic search algorithm that explores a graph by expanding the most promising node in a limited set.
  2. Text summarization
    1. Transformer
      A deep learning model that adopts the mechanism of self-attention, differentially weighting the significance of each part of the input data.
  3. Question Answering
    1. GPT-3
      An autoregressive language model that uses deep learning to produce human-like text.
    2. BERT
      A transformer-based machine learning technique for natural language processing (NLP) pre-training developed by Google.
  4. Chatbot
    1. LSH
      An algorithmic technique that hashes similar input items into the same “buckets” with high probability.
    2. RevNet
      A variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s.
    3. Reformer
      Introduces two techniques to improve the efficiency of Transformers.
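
As referenced in the Attention item above, here is a minimal NumPy sketch of scaled dot-product self-attention, the operation the Transformer entries describe; the toy shapes and random projections are illustrative.

```python
# A minimal NumPy sketch of scaled dot-product self-attention (toy shapes,
# random data; real Transformers add multiple heads, masking, and projections
# learned by training).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # attention weights sum to 1 per query
    return weights @ V, weights         # weighted sum of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                 # 4 tokens, 8-dimensional representations
X = rng.normal(size=(seq_len, d_model))

# Self-attention: queries, keys, and values are projections of the same input.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, weights = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape, weights.shape)         # (4, 8) (4, 4)
```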

Resources

  • Wikipedia
  • Coursera

Who this course is for:

  • Anyone interested in NLP
  • Anyone interested in AI

What is machine learning?

Editor’s Note: 

This report is part of “A Blueprint for the Future of AI,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term “artificial intelligence” to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations—to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what’s known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s—one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as “machine learning”—it wasn’t until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.

The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it’s not possible to tease out the implications of AI without understanding how machine learning works—as well as how it doesn’t.

HOW DOES MACHINE LEARNING WORK?

The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy—each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don’t always realize that that’s what we’re doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: Computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on—and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they’re relying on an insight that is over sixty years old.

The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today’s neural networks are a bit more complex, the main idea is still the same: The best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a “neuron.” Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face. (For an illustration of how deep neural networks learn hierarchical feature representations, see here.)

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it “learns” what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
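
For readers who want to see the layered structure and repeated training passes described above in code, here is a minimal Keras sketch; the random synthetic images stand in for a real face/no-face dataset, so this particular model learns nothing meaningful.

```python
# A minimal Keras sketch of a layered network trained over repeated passes.
# The random images below stand in for a real face/no-face dataset, so this
# particular model learns nothing meaningful; it only shows the mechanics.
import numpy as np
import tensorflow as tf

x = np.random.rand(64, 32, 32, 1).astype("float32")  # 64 synthetic 32x32 "images"
y = np.random.randint(0, 2, size=(64,))              # random face / no-face labels

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),  # simple shapes
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),                          # compound shapes
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16, activation="relu"),                              # higher-level patterns
    tf.keras.layers.Dense(1, activation="sigmoid"),                            # "face" probability
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Each epoch is one pass through the data, after which every weight is adjusted.
model.fit(x, y, epochs=3, verbose=0)
print(model.predict(x[:1]))  # estimated probability that the first image is a "face"
```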

What’s remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too—albeit with clunkier names, like gradient boosting machines—none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they’re often also the best at mimicking intelligence too.

Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.

MACHINE LEARNING APPLICATIONS

To glimpse how the strengths and weaknesses of AI will play out in the real-world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation in regard to speech recognition, image recognition, robotics, and reasoning in general.

Speech recognition 

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple’s Siri, Amazon’s Alexa, and Google’s Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting-edge of the field.

The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all share in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don’t understand language in the way humans do; they can recognize that the sequence of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie “Her,” which was capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.

Image recognition

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google’s new augmented-reality microscope detects cancer in real-time, it’s because of a deep learning algorithm.

Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don’t try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun’s early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind “deep fake” videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they’ll need to become much more robust.

Robotics 

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: Picking up an object like a shirt isn’t just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt’s shape and distribution of mass will change as you lift it up. A human does this trivially and easily. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can’t account for all possible permutations of language, there also is no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
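
The trial-and-error loop described above can be made concrete with a toy example. The sketch below uses tabular Q-learning on a made-up five-state corridor (deep reinforcement learning replaces the table with a neural network); the states, rewards, and hyper-parameters are all illustrative.

```python
# A toy tabular Q-learning sketch of the trial-and-error loop described above.
# The five-state corridor, rewards, and hyper-parameters are made up; deep
# reinforcement learning replaces the Q table with a neural network.
import numpy as np

n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))        # value estimates, learned from experience
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    state = 0
    while state != n_states - 1:           # an episode ends on "success"
        # Mostly exploit the best-known action, occasionally explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(Q[state].argmax())
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else -0.01  # reward success, penalize delay
        # The Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))  # learned policy: move right (1) in every non-terminal state
```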

Compared with prior research, OpenAI’s breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn’t actually “feel” the cube at all, but instead relied on a camera. For an object like a cube, which doesn’t change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.

Reasoning

When Arthur Samuel coined the term “machine learning,” he wasn’t researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.

From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was, “very little.” After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo’s brilliance, you’ll note that Google didn’t then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold ‘Em, a poker game where making the most of limited information is key. Meanwhile, OpenAI’s Dota 2 player, which coupled reinforcement learning with what’s called a Long Short-Term Memory (LSTM) algorithm, has made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of information would be especially valuable for commanders in military settings, who sometimes have to make decisions without having comprehensive information.

Yet there’s still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to answer such higher-order strategic questions, because, ultimately, they are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won’t be taking over the situation room and automating complex tradeoffs any time soon.

WHAT’S NEXT FOR MACHINE LEARNING?

From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus or Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it’s very unclear whether that’s the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it’s hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.

Yet the debate over machine learning’s long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called “Cambrian explosion,” when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.