Make Teaching Easier with Artificial Intelligence (ChatGPT)

Learn different types of AI and how to use them as a teacher

Save time each week on administrative tasks

Plan for the implications of ChatGPT on student assessment

Increase your creativity and professional writing ability as a teacher


  • No prior experience with artificial intelligence required.


Welcome to our course on using modern artificial intelligence to automate teacher administrative tasks. Are you tired of spending countless hours on lesson planning, creating test questions, providing student feedback, and marking assignments? In this course, we will show you how to use tools like OpenAI’s ChatGPT to drastically reduce your workload and free up your time for more important tasks.

We will explore a variety of AI-powered solutions that can assist with various administrative tasks, including lesson planning, writing emails to parents, creating test questions (both multiple-choice and extended-answer/essay), writing cover letters, writing emails to colleagues, differentiating your lessons by providing scaffolded activities, writing summaries of professional reading sources for your own growth and development, providing personalised feedback to students, and even marking extended responses. This course is designed to be practical and hands-on, so you can start implementing these tools in your own teaching practice right away.

By the end of this course, you will have a comprehensive understanding of how to use artificial intelligence to streamline your teacher admin tasks and save time (I use it to save 4 hours each week!). I’ve also included a 70+ page course companion PDF that contains all of the tips and tricks taught throughout this course. Don’t miss this opportunity to join us on this exciting pedagogy-pivoting journey and transform your teaching practice with the power of AI.

Who this course is for:

  • Educators, Teachers (Primary, Secondary and Tertiary), Early-Career Teachers, Pre-Service Teachers

Course content

ChatGPT Important terminologies for Beginners in 1 hour

Students will understand what ChatGPT is

Students will know about ChatGPT’s possibilities

They will know about the various OpenAI API models, like ada, babbage, curie, and davinci, and DALL·E for image generation

Students will know about basic terminologies like tokens, prompts, the top_p setting, the temperature setting, etc.


  • No prior knowledge required; anyone can take this course


  1. Introduction to ChatGPT: In this video, you’ll learn about ChatGPT, a powerful language model created by OpenAI. We’ll explore what GPT stands for and what makes ChatGPT unique. You’ll also get an overview of the various features and applications of ChatGPT, such as generating text, summarizing paragraphs, and translating languages.
  2. All the Basic Terminologies of the ChatGPT API: In this video, we’ll dive into the basic terminologies of the ChatGPT API. We’ll explore key concepts like tokens, prompts, and responses. You’ll learn how to use these terms to interact with the ChatGPT API and generate text in a variety of formats.
  3. Conversions example: In this video, we’ll explore how to use ChatGPT to generate text based on a given prompt. Specifically, we’ll focus on the “conversions” feature, which allows ChatGPT to generate text in a variety of formats. We’ll walk through some examples of how to use this feature, and discuss best practices for getting the most out of it.
  4. Translations example: In this video, we’ll explore how to use ChatGPT to translate text from one language to another. We’ll walk through some examples of how to use the translation feature, and discuss key considerations like accuracy and speed. You’ll learn how to use ChatGPT to translate text in real time, as well as how to batch-translate large amounts of text.
  5. Generations example: In this video, we’ll explore how to use ChatGPT to generate text based on a given prompt. We’ll walk through some examples of how to use the “generations” feature, which allows ChatGPT to generate text in a variety of styles and formats. You’ll learn how to use this feature to generate creative writing, social media posts, and much more.
  6. Creating horror stories: In this video, we’ll explore how to use ChatGPT to generate horror stories. We’ll walk through some examples of how to use the “generations” feature to create spooky and suspenseful stories. You’ll learn how to use prompts and other features to customize your horror stories and make them truly terrifying.
  7. Converting from one language to another: In this video, we’ll explore how to use ChatGPT to convert text from one language to another. We’ll walk through some examples of how to use the translation feature to convert text in real time or in batches. You’ll learn about the strengths and weaknesses of different translation methods, and how to get the most accurate results.
  8. Summarizing large paragraphs: In this video, we’ll explore how to use ChatGPT to summarize large paragraphs of text. We’ll discuss the key concepts behind summarization, and walk through some examples of how to use ChatGPT to generate concise summaries of long texts. You’ll learn how to customize your summaries based on your needs and preferences, and how to evaluate the accuracy and effectiveness of your summaries.
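
The terminology above maps directly onto the parameters of an API request. As a minimal sketch (the model name and the send step are illustrative; an actual call requires the `openai` client library and an API key), here is how a request combining a prompt, `temperature`, `top_p`, and a token limit might be assembled:

```python
# Sketch: assembling a ChatGPT API request payload.
# The model name and the commented-out send step are illustrative;
# a real call needs the `openai` package and a valid API key.

def build_chat_request(prompt, temperature=0.7, top_p=1.0, max_tokens=256):
    """Collect the parameters for a chat completion request.

    - prompt: the user's instruction (e.g. a translation or summary task)
    - temperature: higher values make the output more random
    - top_p: nucleus sampling; restrict choices to the most probable tokens
    - max_tokens: cap on the length of the generated response
    """
    return {
        "model": "gpt-3.5-turbo",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    }

# Example prompts for the use cases covered in this section:
translate = build_chat_request("Translate to French: Good morning, class.")
summarise = build_chat_request(
    "Summarize the following paragraph in one sentence: ...",
    temperature=0.2,  # lower temperature for more deterministic summaries
)

# Sending would look roughly like this (requires `pip install openai`):
# client.chat.completions.create(**translate)
```

Lower temperatures suit tasks with one correct answer (translation, summarization); higher temperatures suit open-ended generation such as the horror-story examples.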

Who this course is for:

  • Anybody who wants to know about ChatGPT

Course content

ChatGPT Crash Course: ChatGPT for Everyone & ChatGPT Basics

What is ChatGPT

How to use ChatGPT

How to use prompts in ChatGPT

ChatGPT Basics and ChatGPT Essentials


  • No previous knowledge or experience required
  • An open mind and a willingness to learn


Are you ready to harness the power of the most revolutionary AI (Artificial Intelligence) language model? Dive into our beginner-friendly online course, “ChatGPT Crash Course,” and unlock the full potential of ChatGPT for yourself and your business.

ChatGPT, developed by OpenAI, is an advanced language model that has transformed the AI landscape. It can generate human-like text, answer questions, provide suggestions, and even engage in meaningful conversations. By understanding and utilizing ChatGPT, you can unlock limitless possibilities for personal, professional, and creative applications.

Why should you choose this ChatGPT Crash Course?

  • Comprehensive and Beginner-Friendly: Our course is designed for learners with no prior knowledge of ChatGPT or AI. Start with the basics and build your skills step-by-step with our easy-to-follow modules.
  • Real-World Applications: Discover how ChatGPT can be utilized for various purposes, such as writing, content creation, learning, business, solving problems and more!
  • Expert Instructor: Learn from industry professionals with extensive experience in AI and ChatGPT, ensuring you receive top-notch education and practical insights.
  • Interactive Learning: Engage in hands-on projects, group discussions, and quizzes that solidify your understanding of ChatGPT and its potential use cases.
  • Exclusive Resources: Get access to curated resources, templates, and tools to help you integrate ChatGPT seamlessly into your work or personal projects.
  • Ongoing Support: Receive continued guidance even after completing the course, ensuring your success with ChatGPT in real-life applications.

So what are you waiting for? Enroll now in this ChatGPT Crash Course and embark on a journey to transform your understanding of AI, enhance your skills, and stay ahead of the curve in the rapidly evolving world of technology. Don’t miss your chance to become a ChatGPT expert and leverage its power for unlimited potential!

What is ChatGPT?

ChatGPT is an AI chatbot that can generate human-like responses to a wide range of questions and prompts. It is a language model that uses deep learning to generate text that is similar to human writing. You can use ChatGPT for various purposes such as automated customer service, virtual assistants, and more.

What are the benefits or advantages of using ChatGPT?

The advantages of using ChatGPT are that it can save time and money by automating tasks that would otherwise require human intervention. It can also provide quick and accurate responses to customer inquiries, which can improve customer satisfaction. Additionally, ChatGPT can be used to generate content such as articles, summaries, and more.

How is ChatGPT different from Google?

ChatGPT and Google serve different purposes and function in distinct ways, even though they both utilize artificial intelligence and natural language processing. Here are some key differences between the two:

1. Purpose:

– ChatGPT is an AI language model developed by OpenAI, designed to generate human-like text, answer questions, and engage in conversations. It is primarily focused on understanding and generating text-based content.

– Google is a search engine that indexes and retrieves information from the internet. It helps users find websites, images, videos, and other content relevant to their search queries.

2. Functionality:

– ChatGPT processes and generates text based on the input it receives, attempting to provide relevant and coherent responses. It can be used for tasks such as content creation, programming assistance, customer support, and more.

– Google, on the other hand, uses complex algorithms to analyze and rank webpages based on their relevance to a user’s search query. It does not generate content but rather directs users to existing sources of information.

3. User Interaction:

– With ChatGPT, users engage in a more interactive, conversation-like experience. The AI model attempts to provide context-sensitive and coherent responses to user inputs.

– Google provides a list of search results based on the user’s query, and users must click on individual links to access the information they seek.

4. Data Source:

– ChatGPT is trained on a large dataset of text from various sources, and its knowledge is limited to the information available in its training data.

– Google constantly crawls and indexes new information on the internet, making its search results more up-to-date and expansive compared to ChatGPT’s knowledge base.

In summary, ChatGPT is an AI language model designed for text generation and conversation, while Google is a search engine that helps users find information on the internet. They serve different purposes and offer unique functionalities, with ChatGPT focusing on content generation and Google on information retrieval.

Who this course is for:

  • Beginners
  • People interested in learning about ChatGPT & how to use it
  • Content Creators
  • Writers and Authors
  • Small Business Owners
  • Entrepreneurs

Course content

What is ChatGPT? Will it replace Salesforce Developers?

Define ChatGPT

Understand the capabilities of ChatGPT/OpenAI and know the implications for Salesforce developers

Know the Limitations of the tool

Will developers be replaced by it?


  • No prerequisites


OpenAI is a research organization that focuses on developing artificial intelligence technology and promoting the responsible use of AI. One of its main areas of focus is natural language processing, which involves developing AI systems that can understand and generate human-like language. The ultimate goal of this course is to give you some ideas about how ChatGPT is taking over the market and who will benefit from it. People from both technical and non-technical backgrounds expect to benefit, as it can be a great assistant in their day-to-day activities.

The goal of this course is to show developers what they can do with ChatGPT and to make its limitations clear as well. Every tool, technique, and technology has limitations, and this one is no exception. As an AI system built on a natural language processing model, it captures the intent of a question and sometimes even offers insights beyond it. Answers are given in order and paint a clear picture of each question and its response. For developers, it is something well worth getting familiar with. To dig deeper, you need to go through the course.

Any constructive reviews and feedback are welcome!

Thank you!

Who this course is for:

  • Beginners to advanced Developers
  • Salesforce Developers
  • AI Enthusiasts

Course content

Build a ChatGPT app in SwiftUI for iOS 16

Use GPT3 API to build a fully functional app using Xcode and SwiftUI

Knowledge of data management using SwiftUI

Create modules such as “random concept” and “article” generation in the app

@EnvironmentObject and @ObservedObject property wrappers

@Published, @AppStorage, and @FocusState property wrappers

Design an attractive and user-friendly interface that supports both dark and light mode


  • Mac computer with Xcode installed
  • Familiarity with Xcode
  • Familiarity with Swift and SwiftUI
  • Passion for learning and building apps


Learn how to build a GPT-powered iOS app with Xcode and SwiftUI. You will learn how to build a fully functional app that interacts with OpenAI’s GPT3 API. You will be able to use the GPT3 API to craft modules such as “random concept” and “article” generation. This course will teach you the skills you need to create a GPT3 iOS app such as SwiftUI, OpenAISwift, and data management. Whether you are a beginner or an experienced developer, this course is perfect for anyone looking to add new skills to their toolbox. Enroll now and start building your own GPT-powered app today!

What will you learn in this course?

  • Use Xcode and SwiftUI to build a fully functional app
  • Integrate OpenAI’s GPT3 API into your app
  • How to create modules such as “random concept” and “article” generation
  • How to manage data in your app using SwiftUI
  • How to design a user interface with SwiftUI and support both dark and light mode

By the end of the course, you will have an understanding of how to build your own GPT-powered app and have the skills to create other similar applications. I hope you enjoy taking this course and find it to be a useful learning experience.

Who this course is for:

  • Intermediate SwiftUI developers
  • Developers with some knowledge of SwiftUI

Course content

ChatGPT API – Send Text Messages To ChatGPT From Anywhere

Send text messages to ChatGPT

Learn about the Twilio SMS API

Learn about the OpenAI ChatGPT API

Deploy a Simple Python Web Application with Flask and Heroku

Text ChatGPT From Anywhere


  • No programming experience needed. You will learn everything you need to know.


Updated for the new ChatGPT 3.5 Turbo API in 2023!

This course has been revamped to include the latest ChatGPT Turbo API on March 1st, 2023! The Turbo API is optimized for Chat and is far less expensive than its previous counterpart.

Are you interested in taking your Python skills to the next level?

Python is the future of software development. ChatGPT is the future of Artificial Intelligence. Combine both of these in this course and learn how to build a text message bot.

We will look at the latest version of the ChatGPT API and make it easy to text with it from anywhere. Don’t waste time logging on to a desktop computer, interact with ChatGPT on the go!
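
The pipeline the course builds wires three pieces together: Twilio receives the SMS, a small Flask app relays it to the ChatGPT API, and the reply goes back to Twilio as TwiML (its XML response format). As a minimal sketch (the route name and `ask_chatgpt` helper are hypothetical; a real deployment needs the `flask`, `twilio`, and `openai` packages), the TwiML reply Twilio expects can be built like this:

```python
# Sketch: the TwiML XML a Twilio SMS webhook must return.
# In a real app the `twilio` package's MessagingResponse builds this;
# here we construct it with the standard library to show the shape.
import xml.etree.ElementTree as ET

def build_twiml_reply(reply_text):
    """Wrap a ChatGPT reply in the <Response><Message> TwiML envelope."""
    response = ET.Element("Response")
    message = ET.SubElement(response, "Message")
    message.text = reply_text
    return ET.tostring(response, encoding="unicode")

# A Flask webhook (illustrative route and helper names) would return it:
#
# @app.route("/sms", methods=["POST"])
# def sms_webhook():
#     incoming = request.form["Body"]   # the text the user sent
#     reply = ask_chatgpt(incoming)     # hypothetical ChatGPT API call
#     return build_twiml_reply(reply)

print(build_twiml_reply("Hello from ChatGPT!"))
```

Deploying the Flask app to a host like Heroku gives Twilio a public URL to call whenever a text message arrives.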

Hear what Students From My Other Courses Have To Say

“Great course! I love Python and Crypto and this makes perfect combination! Please make more courses similar to this!”

“Great course for those interested in Python and/or Crypto”

Who is this for?

This course is for anyone who wants to take their skills to the next level. Python is a programming language that many believe to be the future of software development, and ChatGPT is revolutionizing AI and the internet as we know it.

Who this course is for:

  • Anyone Interested in Making a Bot to Send Text Messages to ChatGPT with Python

Course content

Become an AI-Powered Engineer: ChatGPT, Github Copilot

What you’ll learn

Overview of AI tools for developers and their impact on software development

Introduction to ChatGPT, a language model developed by OpenAI for code completion and generation

Integration of ChatGPT with popular IDEs and text editors

Hands-on exercises and projects to practice using ChatGPT to generate code snippets and complete code blocks

Introduction to GitHub Copilot, an AI-powered code assistant that suggests code as developers type

Setup and configuration of GitHub Copilot with popular programming languages

Practical applications of GitHub Copilot in software development

Introduction to Copilot Labs, an open-source initiative that creates powerful AI tools for developers

Contributions to Copilot Labs and use of the tools developed by the community

Hands-on exercises and projects to practice using AI tools to streamline the development workflow

Understanding of how AI works and its impact on software development

Knowledge and skills to increase productivity, build innovative applications, and further career in software development


  • Students will need a computer/laptop to do the practical implementation.
  • Basic programming knowledge is required.


In today’s world, Artificial Intelligence (AI) is revolutionizing the way we work and live. One area where AI has made a significant impact is in software development, where it has enabled developers to create innovative applications and systems faster than ever before.

This comprehensive course, “Become an AI-Powered Engineer: ChatGPT, Github Copilot, and Copilot Labs” is designed to help developers learn how to leverage AI tools to streamline their development process. The course covers three popular AI tools: ChatGPT, GitHub Copilot, and Copilot Labs.

First, students will learn how to use ChatGPT, a language model developed by OpenAI, to generate high-quality code snippets and complete code blocks. They will learn how to integrate ChatGPT with their favorite IDEs and text editors, such as VS Code and Sublime Text.

Next, the course covers GitHub Copilot, an AI-powered code assistant that uses machine learning to suggest code as developers type. Students will learn how to set up and use GitHub Copilot with popular programming languages such as Python, JavaScript, and C#.

Finally, the course covers Copilot Labs, an open-source initiative that aims to create powerful AI tools for developers. Students will learn how to contribute to Copilot Labs and use the tools developed by the community.

Throughout the course, students will work on real-world projects and learn how to integrate AI tools into their development workflow. They will gain an understanding of how AI can be used to improve software development.

By the end of the course, students will have the skills and knowledge needed to use ChatGPT, GitHub Copilot, and Copilot Labs to streamline their development process, increase productivity, and build innovative applications faster than ever before.

Who this course is for:

  • Developers who want to learn about the latest AI-powered tools for code completion, debugging, and more
  • Software engineers looking to improve their productivity and efficiency with advanced development tools
  • Anyone interested in exploring the intersection of AI and software development

Course content

The 7 Must-Know Deep Learning Algorithms

The field of artificial intelligence (AI) has grown rapidly in recent times, leading to the development of deep learning algorithms. With the launch of AI tools such as OpenAI’s DALL-E and ChatGPT, deep learning has emerged as a key area of research. However, with an abundance of available algorithms, it can be difficult to know which ones are the most crucial to understand.

Dive into the fascinating world of deep learning and explore the top, must-know algorithms crucial to understanding artificial intelligence.

1. Convolutional Neural Networks (CNNs)

Convolutional Neural Networks (CNNs), also known as ConvNets, are neural networks that excel at object detection, image recognition, and segmentation. They use multiple layers to extract features from the available data. CNNs mainly consist of four layers:

  1. Convolution layer
  2. Rectified Linear Unit (ReLU)
  3. Pooling Layer
  4. Fully Connected Layer

These four layers provide the working mechanism of the network. The convolution layer comes first and extracts features from the data. Then, ReLU applies a non-linear activation so the network can be trained. The pooling layer downsamples the resulting feature maps, and the fully connected layer flattens them into a linear vector used as the input for detecting images or other data types.
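
Two of these layers are easy to illustrate without a deep learning framework. The sketch below (plain Python, toy 4×4 feature map) applies ReLU and then 2×2 max pooling, the downsampling step described above:

```python
# Toy illustration of the ReLU and pooling layers of a CNN,
# using plain Python lists instead of a deep learning framework.

def relu(feature_map):
    """ReLU activation: replace negative values with zero."""
    return [[max(0, v) for v in row] for row in feature_map]

def max_pool_2x2(feature_map):
    """2x2 max pooling: keep the largest value in each 2x2 block."""
    pooled = []
    for i in range(0, len(feature_map), 2):
        pooled.append([
            max(feature_map[i][j], feature_map[i][j + 1],
                feature_map[i + 1][j], feature_map[i + 1][j + 1])
            for j in range(0, len(feature_map[0]), 2)
        ])
    return pooled

features = [[-1, 2, 0, 3],
            [ 4, -5, 1, 1],
            [ 0, 1, -2, 2],
            [ 3, 0, 1, -4]]

activated = relu(features)        # negatives become 0
pooled = max_pool_2x2(activated)  # 4x4 shrinks to 2x2
print(pooled)                     # [[4, 3], [3, 2]]
```

In a real CNN these operations run on feature maps produced by learned convolution filters; here the input values are arbitrary.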

2. Deep Belief Networks

Deep Belief Networks (DBNs) are another popular architecture for deep learning that allows the network to learn patterns in data with artificial intelligence features. They are ideal for tasks such as face recognition software and image feature detection.

The DBN mechanism involves stacked layers of Restricted Boltzmann Machines (RBMs), artificial neural networks that help in learning and recognizing patterns. The layers of a DBN follow a top-down approach, allowing communication throughout the system, and the RBM layers provide a robust structure that can classify data into different categories.

3. Recurrent Neural Networks (RNNs)

The Recurrent Neural Network (RNN) is a popular deep learning algorithm with a wide range of applications. The network is best known for its ability to process sequential data and power language models. It can learn patterns and predict outcomes without being explicitly programmed to do so. For example, the Google search engine uses RNNs to auto-complete searches by predicting relevant queries.

The network works with interconnected node layers that help memorize and process input sequences. It can then work through those sequences to automatically predict possible outcomes. Additionally, RNNs can learn from prior inputs, allowing them to evolve with more exposure. Therefore, RNNs are ideal for language modeling and sequential modeling.
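
The recurrence described above can be written in a few lines. In this sketch (scalar weights chosen arbitrarily for illustration), each step mixes the current input with the previous hidden state, so earlier inputs influence later outputs:

```python
import math

# Toy RNN step: the hidden state carries information from earlier
# inputs forward, which is what lets the network "memorize" sequences.
W_IN, W_REC = 0.5, 0.9  # arbitrary scalar weights for illustration

def rnn_step(x, h_prev):
    """One recurrent update: combine input x with the previous state."""
    return math.tanh(W_IN * x + W_REC * h_prev)

def run_sequence(inputs):
    h = 0.0
    states = []
    for x in inputs:
        h = rnn_step(x, h)
        states.append(h)
    return states

# The same input value produces different states depending on history:
states = run_sequence([1.0, 1.0, 1.0])
print(states)  # each state differs because it depends on the one before
```

A trained RNN learns the weights (here fixed constants) and usually operates on vectors rather than scalars, but the dependence on prior inputs is exactly this loop.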

4. Long Short Term Memory Networks (LSTMs)

Long Short-Term Memory networks (LSTMs) are a type of Recurrent Neural Network (RNN) that differs from others in its ability to work with long-term data. They have exceptional memory and predictive capabilities, making LSTMs ideal for applications like time series prediction, natural language processing (NLP), speech recognition, and music composition.

LSTM networks consist of memory blocks arranged in a chain-like structure. These blocks store relevant information and data that may inform the network in the future while removing any unnecessary data to remain efficient.

During data processing, the LSTM changes cell states. First, it removes irrelevant data through the sigmoid layer. Then it processes new data, evaluates necessary parts, and replaces the previous irrelevant data with the new data. Finally, it determines the output based on the current cell state that has filtered data.

The ability to handle long-term data sets LSTMs apart from other RNNs, making them ideal for applications that require such capabilities.
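
The gating sequence described above (drop irrelevant data, admit new data, emit the output) can be sketched with scalar gates. All weights below are arbitrary illustrative values, not a trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev):
    """One LSTM cell update with arbitrary illustrative scalar weights.

    The gates mirror the stages described above: forget irrelevant
    data, admit new data, then produce the output from the cell state.
    """
    f = sigmoid(0.5 * x + 0.5 * h_prev)  # forget gate: drop old info
    i = sigmoid(0.6 * x + 0.4 * h_prev)  # input gate: admit new info
    o = sigmoid(0.7 * x + 0.3 * h_prev)  # output gate
    c_candidate = math.tanh(0.8 * x + 0.2 * h_prev)
    c = f * c_prev + i * c_candidate     # updated cell state
    h = o * math.tanh(c)                 # new hidden state
    return h, c

h, c = 0.0, 0.0
for x in [1.0, -1.0, 1.0]:
    h, c = lstm_step(x, h, c)
print(h, c)
```

The cell state `c` is the chain-like memory block: the forget gate scales down what is no longer needed while the input gate writes in what is.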

5. Generative Adversarial Networks

Generative Adversarial Networks (GANs) are a type of deep learning algorithm that supports generative AI. They are capable of unsupervised learning and can generate results on their own by training through specific datasets to create new data instances.

The GAN model consists of two key elements: a generator and a discriminator. The generator is trained to create fake data based on what it has learned. In contrast, the discriminator is trained to distinguish the generator’s fake data from real data, and its feedback is used to correct the generator.

GANs are widely used for image generation, such as enhancing the graphics quality in video games. They are also useful for enhancing astronomical images, simulating gravitational lenses, and generating videos. GANs remain a popular research topic in the AI community, as their potential applications are vast and varied.
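
The adversarial loop can be shown in miniature. In this toy sketch the "discriminator" is a fixed critic that scores a sample by its closeness to the real data (a real GAN learns this critic jointly with the generator), and the generator follows the critic's feedback:

```python
# Toy illustration of the generator/discriminator feedback loop.
# The "discriminator" here is a fixed critic that scores a sample by
# how close it is to the real data; a real GAN learns this critic too.

REAL_VALUE = 2.0  # the "real data" our generator tries to imitate

def discriminator_score(x):
    """Higher score = looks more like real data (toy stand-in)."""
    return -(x - REAL_VALUE) ** 2

def discriminator_gradient(x):
    """Gradient of the score w.r.t. the sample; drives the generator."""
    return -2.0 * (x - REAL_VALUE)

theta = 0.0  # generator's single parameter: the value it outputs
for _ in range(100):
    fake = theta                                  # generator makes a sample
    theta += 0.1 * discriminator_gradient(fake)   # follow critic's feedback

print(round(theta, 3))  # approaches 2.0, the real data value
```

The essential point survives the simplification: the generator never sees the real data directly; it improves only through the discriminator's scoring.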

6. Multilayer Perceptrons

The Multilayer Perceptron (MLP) is another deep learning algorithm: a neural network with interconnected nodes in multiple layers. An MLP maintains a single direction of data flow from input to output, which is known as feedforward. It is commonly used for object classification and regression tasks.

The structure of MLP involves multiple input and output layers, along with several hidden layers, to perform filtering tasks. Each layer contains multiple neurons that are interconnected with each other, even across layers. The data is initially fed to the input layer, from where it progresses through the network.

The hidden layers play a significant role by applying activation functions like ReLU, sigmoid, and tanh. They process the data and pass the result to the output layer, which generates the final output.

This simple yet effective model is useful for speech and video recognition and translation software. MLPs have gained popularity due to their straightforward design and ease of implementation in various domains.
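
A minimal feedforward pass through one hidden layer (all weights below are arbitrary illustrative values, not a trained model) shows the input → hidden → output flow described above:

```python
import math

def relu(v):
    return max(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def dense(inputs, weights, biases, activation):
    """One fully connected layer: each neuron takes a weighted sum
    of all inputs, adds its bias, and applies the activation."""
    return [
        activation(sum(w * x for w, x in zip(neuron_w, inputs)) + b)
        for neuron_w, b in zip(weights, biases)
    ]

# Arbitrary illustrative weights: 2 inputs -> 3 hidden neurons -> 1 output
hidden_w = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.0, -1.0, 0.5]]
output_b = [0.0]

x = [1.0, 2.0]
hidden = dense(x, hidden_w, hidden_b, relu)          # hidden layer (ReLU)
output = dense(hidden, output_w, output_b, sigmoid)  # output layer
print(output)  # a single value between 0 and 1
```

Training would adjust `hidden_w`, `hidden_b`, `output_w`, and `output_b` by backpropagation; the forward flow itself is just these two `dense` calls chained together.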

7. Autoencoders

Autoencoders are a type of deep learning algorithm used for unsupervised learning. They are feedforward models with a one-directional data flow, similar to MLPs. Autoencoders take an input and transform it to create an output, which can be useful for language translation and image processing.

The model consists of three components: the encoder, the code, and the decoder. The encoder compresses the input into a smaller representation (the code), and the decoder expands that code to generate the output. This algorithm can be applied in various fields, such as computer vision, natural language processing, and recommendation systems.
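
The three-part structure can be sketched with fixed toy mappings: this "encoder" averages adjacent values and the "decoder" repeats each code value. A real autoencoder learns both mappings so that the reconstruction error below is minimized:

```python
# Structural sketch of an autoencoder's encode -> code -> decode flow.
# A trained autoencoder learns these mappings; here they are fixed toy
# functions that halve and then restore the input's length.

def encode(inputs):
    """Compress: average each adjacent pair into one code value."""
    return [(inputs[i] + inputs[i + 1]) / 2 for i in range(0, len(inputs), 2)]

def decode(code):
    """Reconstruct: expand each code value back into two values."""
    out = []
    for v in code:
        out.extend([v, v])
    return out

original = [1.0, 3.0, 5.0, 5.0]
code = encode(original)        # [2.0, 5.0] -- the compressed "code"
reconstruction = decode(code)  # [2.0, 2.0, 5.0, 5.0]
error = sum((a - b) ** 2 for a, b in zip(original, reconstruction))
print(code, reconstruction, error)  # training would minimize this error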

Choosing the Right Deep Learning Algorithm

To select the appropriate deep learning approach, it is crucial to consider the nature of the data, the problem at hand, and the desired outcome. By understanding each algorithm’s fundamental principles and capabilities, you can make informed decisions.

Choosing the right algorithm can make all the difference in the success of a project. It is an essential step toward building effective deep learning models.

What is machine learning?

Editor’s Note: 

This report is part of “A Blueprint for the Future of AI,” a series from the Brookings Institution that analyzes the new challenges and potential policy solutions introduced by artificial intelligence and other emerging technologies.

In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term “artificial intelligence” to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations—to demonstrate, that is, an innate intelligence.

The question was how to achieve that goal. Early efforts focused primarily on what’s known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s—one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as “machine learning”—it wasn’t until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI or DeepMind announcing stunning new advances seemingly every week.


The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it’s not possible to tease out the implications of AI without understanding how machine learning works—as well as how it doesn’t.


The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy—each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don’t always realize that that’s what we’re doing.

Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: Computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on—and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they’re relying on an insight that is over sixty years old.


The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today’s neural networks are a bit more complex, the main idea is still the same: The best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a “neuron.” Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.

To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face.

The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it “learns” what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
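
The guess-and-update loop described above can be shown at its smallest scale: a single neuron with one weight. In this toy sketch (input, target, and learning rate are arbitrary illustrative values), the neuron repeatedly guesses, measures how wrong it was, and nudges its weight accordingly:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# A single "neuron" learning by repeated guess-and-update: the training
# loop described above, scaled down to one weight and one example.
x, target = 1.0, 0.9  # input, and the probability we want the neuron to emit
weight = 0.0          # initial guess

for _ in range(2000):
    prediction = sigmoid(weight * x)  # educated guess
    error = prediction - target       # how wrong the guess was
    # Gradient-descent update on the squared error:
    weight -= 0.5 * error * prediction * (1 - prediction) * x

print(round(sigmoid(weight * x), 3))  # close to 0.9 after many passes
```

A deep network runs the same loop over millions of weights at once, with each layer's guesses feeding the next, but the mechanism (guess, measure the miss, update) is identical.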

What’s remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too—albeit with clunkier names, like gradient boosting machines—none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job at estimating how likely something is. As a result, they’re often also the best at mimicking intelligence too.


Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.


To glimpse how the strengths and weaknesses of AI will play out in the real world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the situation with regard to speech recognition, image recognition, robotics, and reasoning in general.

Speech recognition 

Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple’s Siri, Amazon’s Alexa, and Google’s Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.

The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all share in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.

Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don’t understand language in the way humans do; they can recognize that the sequences of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie “Her,” which was capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.
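The point about k-i-n-g and q-u-e-e-n can be made concrete with word vectors. The three-dimensional vectors below are invented for illustration (real models learn hundreds of dimensions from co-occurrence statistics); the model can measure that “king” and “queen” are statistically close without knowing anything about royalty:

```python
import math

# Hypothetical 3-D word vectors; the numbers are made up for illustration
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.0, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(f"king vs queen: {cosine(vectors['king'], vectors['queen']):.2f}")  # high
print(f"king vs apple: {cosine(vectors['king'], vectors['apple']):.2f}")  # low
```

The similarity score is pure geometry over learned statistics; nothing in it encodes what a king actually is.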

Image recognition

When Rosenblatt first implemented his neural network in 1958, he initially set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google’s new augmented-reality microscope detects cancer in real-time, it’s because of a deep learning algorithm.

Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don’t try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun’s early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind “deep fake” videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
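The local scanning a convolutional layer performs can be sketched in a few lines: slide one small filter over an image and record how strongly each region matches it. The 3x3 filter below is a standard vertical-edge detector; the tiny “image” is made up for illustration:

```python
# A 4x4 "image" with a bright vertical stripe in the middle (values are made up)
image = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
]
# Classic vertical-edge filter: responds to dark-to-light transitions
filter_ = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(image, filt):
    """Slide the filter over every local region and sum the elementwise products."""
    k = len(filt)
    out = len(image) - k + 1
    return [
        [
            sum(image[i + r][j + c] * filt[r][c] for r in range(k) for c in range(k))
            for j in range(out)
        ]
        for i in range(out)
    ]

feature_map = convolve(image, filter_)
print(feature_map)  # positive at the dark-to-light edge, negative at light-to-dark
```

A CNN stacks many such filters, and learns their values from data rather than hand-coding them, but each one scans locally in exactly this way.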

As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they’ll need to become much more robust.


Robotics

What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.

The reason: Picking up an object like a shirt isn’t just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt’s shape and distribution of mass will change as you lift it up. A human does all this effortlessly. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.
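The compounding the paragraph describes is easy to put numbers on. Assuming, purely for illustration, that each sub-estimate is right 90 percent of the time, the chance that every one is right falls off quickly:

```python
# Back-of-the-envelope arithmetic for compounding uncertainty.
# The step list and 90% per-step reliability are assumptions for illustration.
steps = ["recognize shirt", "estimate weight", "estimate friction",
         "choose grasp points", "choose grip force"]
per_step_accuracy = 0.9

overall = per_step_accuracy ** len(steps)
print(f"chance all {len(steps)} estimates are right: {overall:.0%}")  # about 59%
```

Five individually reliable estimates still leave the robot fumbling two grasps in five, which is why grasping remains so hard.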

Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can’t account for all possible permutations of language, there also is no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.

The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
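The reward-and-punish loop can be sketched with Q-learning on a toy problem: an agent on a five-cell line learns, by trial and error, to walk toward a goal. Real robotic grasping involves continuous states and actions and is vastly harder; this shows only the bare idea.

```python
import random

random.seed(1)
GOAL, ACTIONS = 4, (-1, 1)
# Learned value of taking each action (step left or right) in each state
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != GOAL:
        # Mostly exploit what has been learned so far; sometimes explore
        if random.random() < 0.2:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else -0.01   # success rewarded, wandering penalized
        future = 0.0 if nxt == GOAL else max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the value estimate toward reward plus discounted future value
        q[(state, action)] += 0.5 * (reward + 0.9 * future - q[(state, action)])
        state = nxt

policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
print(policy)  # the learned rule of thumb: step right in every state
```

Nothing tells the agent that “right” is correct; it simply keeps the moves that led to reward, which is the same principle OpenAI scaled up, across thousands of simulated hands, to manipulate a cube.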

Compared with prior research, OpenAI’s breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn’t actually “feel” the cube at all, but instead relied on a camera. For an object like a cube, which doesn’t change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a ways away.


Reasoning

When Arthur Samuel coined the term “machine learning,” he wasn’t researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.

As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.

From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was, “very little.” After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo’s brilliance, you’ll note that Google didn’t then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.

Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold ’em, a poker game where making the most of limited information is key. Meanwhile, OpenAI’s Dota 2 player, which coupled reinforcement learning with what’s called a Long Short-Term Memory (LSTM) algorithm, has made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of information would be especially valuable for commanders in military settings, who sometimes have to make decisions without having comprehensive information.

Yet there’s still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to resolve such higher-order strategic questions, because, ultimately, they are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won’t be taking over the situation room and automating complex tradeoffs any time soon.


From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus or Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is being presented, or that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.

Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it’s very unclear whether that’s the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it’s hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.

Yet the debate over machine learning’s long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called “Cambrian explosion,” when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.

What is AI? Stark County professor talks ChatGPT, rise of technology

Artificial intelligence has been making headlines in recent weeks as major tech companies like Google and Microsoft have announced new tools powered by the technology.

Experts say the rapid growth of AI could affect manufacturing, health care and other industries.

The Canton Repository spoke to Shawn Campbell, an assistant professor of computer science and cybersecurity at Malone University, about the rise of AI technology and what it means for the future.

What is artificial intelligence?

Artificial intelligence is the ability of computers to perform functions typically done by humans, including decision-making and speech or visual recognition. The field has been studied since the 1950s, according to an article from Harvard University.

Campbell said one type of AI commonly used in the medical field is expert systems. This technology uses knowledge databases to offer advice or make decisions regarding medical diagnoses.

How are Microsoft and Google using ChatGPT?

A developer called OpenAI launched an AI chatbot in November 2022 known as ChatGPT. Users can interact with the chatbot and receive conversational responses.

Campbell said the rise of this technology has created competition between Microsoft and Google. Microsoft plans to invest billions into ChatGPT, and recently announced AI upgrades to its search engine, Bing. Google, meanwhile, has introduced new AI features in Gmail and Google Docs that create text.

The major tech companies are in an arms race, Campbell said, to see who can develop the best AI technology.

Will the growth of AI affect job opportunities in different industries?

There is some concern that AI technology will replace jobs traditionally held by humans. In some cases, it’s already happened. For example, fast-food chain White Castle started installing hamburger-flipping robots in some of its locations in 2020 to reduce human contact with food during the cooking process.

Campbell said it’s possible that AI will result in fewer employees doing certain tasks.

“If you have a product line that had 100 people on it, and they get a new type of machine in and kind of redesign the process, then they do it with 80 people or 60 people, … I do think you’re going to find more of the same jobs being done by fewer people, and those people being freed up to do other tasks, essentially,” Campbell said.

Some worry that trucking jobs will disappear if developers figure out self-driving technology, but Campbell doesn’t expect that to happen anytime soon.

What kind of changes will AI like ChatGPT have on daily life?

Campbell said he expects AI to become a tool that makes life easier, noting that AI already powers everyday technology such as adaptive cruise control and camera tools that let users remove unwanted objects from photo backgrounds.

“I think that’s really the progression it will follow. … It’s being used as a tool, and it’s making people’s jobs easier, or making a long drive more enjoyable and safer as well,” he said.

One of the biggest changes Campbell expects is for data science and analytics to be emphasized more in education. Some employers are already paying for their employees to receive data analytics and literacy training, he said, and Malone University recently added a data analytics major.

Campbell predicted these skills will become important in the job market and that educational institutions may start incorporating data analytics into the general curriculum, as they do with writing and public speaking.