What Is Machine Learning? Types and Examples

Simple Definition of Machine Learning

Machine learning involves enabling computers to learn from data without being explicitly programmed for each task. In this way, the machine does the learning, discovering pertinent patterns in the data instead of someone having to spell them out.

Machine learning plays a central role in the development of artificial intelligence (AI), deep learning, and neural networks—all of which involve machine learning’s pattern-recognition capabilities.

How Machine Learning Evolved

Modern machine learning has its roots in Boolean logic. George Boole devised an algebra in which all values reduce to one of two binary states. Thanks to this logic, the binary systems modern computing is based on can be applied to complex, nuanced problems.

Then, in 1952, Arthur Samuel wrote a program that enabled an IBM computer to improve at checkers the more it played. Fast forward to 1985, when Terry Sejnowski and Charles Rosenberg created a neural network that could teach itself how to pronounce words properly: 20,000 of them in a single week. In 2016, LipNet, a visual speech recognition AI, was able to read lips in video accurately 93.4% of the time.

Machine learning has come a long way, and its applications impact the daily lives of nearly everyone, especially those concerned with cybersecurity.

Machine Learning Definition: Important Terminologies in Machine Learning

All types of machine learning, including machine learning in cybersecurity, depend on a common set of terminology. Machine learning, as discussed in this article, will refer to the following terms.


Model

A model, also referred to as a hypothesis, is an algorithmic representation of a real-world process, learned from data.


Feature

A feature is a parameter or property within the dataset that can be measured.

Feature Vector

A feature vector is a set of multiple numerical features. It is entered into the machine-learning model as an input to train the system and to generate predictions.
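As a small sketch, here is what a feature vector might look like in code. The dataset, feature names, and numbers are purely illustrative (a hypothetical house-pricing example), not from the article:

```python
import numpy as np

# A hypothetical house-pricing dataset: each feature is a measurable
# property, and one sample's features are stacked into a single vector.
feature_names = ["sqft", "bedrooms", "age_years"]
feature_vector = np.array([1450.0, 3.0, 22.0])   # one sample

# A dataset is then a stack of feature vectors, one row per sample,
# ready to be fed to a model for training or prediction.
dataset = np.array([
    [1450.0, 3.0, 22.0],
    [2100.0, 4.0, 5.0],
    [900.0, 2.0, 40.0],
])
```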


Training

When an algorithm examines a set of data and finds patterns, the system is being “trained,” and the resulting output is the machine-learning model.


Prediction

After the machine-learning model has been trained, it can receive an input and provide a prediction of the output.

Target (Label)

The target is the value the machine-learning model is charged with predicting.


Overfitting

When a machine-learning model fits its training data too closely, it learns the noise and inaccuracies in the data along with the underlying trend. This is called “overfitting,” and it makes the model perform poorly on new data.


Underfitting

In an underfitting situation, the machine-learning model is too simple to capture the underlying trend of the input data. This makes the machine-learning model inaccurate.

Machine Learning Meaning: Types of Machine Learning

There are a few different types of machine learning, including supervised, unsupervised, semi-supervised, and reinforcement learning.

Supervised Learning

With supervised learning, the datasets are labeled, and the labels train the algorithms, enabling them to classify the data they come across accurately and predict outcomes better. Because the datasets have already been categorized, the model can measure its predictions against the known answers and correct its errors during training.
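A minimal sketch of supervised learning, using a toy 1-nearest-neighbour classifier on invented "spam"/"ham" feature vectors (the data and labels are hypothetical):

```python
import math

# Toy labeled dataset: each example is a (feature vector, label) pair.
# The labels are the "supervision" the algorithm learns from.
training_data = [
    ((1.0, 1.2), "spam"),
    ((0.9, 1.0), "spam"),
    ((0.2, 0.1), "ham"),
    ((0.1, 0.3), "ham"),
]

def predict(x):
    """1-nearest-neighbour: return the label of the closest training example."""
    return min(training_data, key=lambda pair: math.dist(x, pair[0]))[1]
```

For instance, `predict((1.1, 1.1))` lands near the labeled "spam" examples and is classified accordingly.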

Unsupervised Learning

In unsupervised learning, the algorithms cluster and analyze datasets without labels. They then use this clustering to discover patterns in the data without any human help.
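Clustering without labels can be sketched with a bare-bones k-means implementation on synthetic data (a simplified illustration, not a production algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two unlabeled blobs of 2-D points; no labels are given, so the
# algorithm must discover the grouping on its own.
data = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
                  rng.normal(3.0, 0.3, size=(20, 2))])

def kmeans(points, k, iters=10):
    # Spread the initial centers across the dataset.
    centers = points[:: len(points) // k][:k].copy()
    for _ in range(iters):
        # Assign every point to its nearest center...
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        # ...then move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(data, k=2)
```

The algorithm recovers the two blobs without ever being told which point belongs where.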

Semi-supervised Learning

In semi-supervised learning, a smaller set of labeled data is input into the system, and the algorithms then use these to find patterns in a larger dataset. This is useful when there is not enough labeled data because even a reduced amount of data can still be used to train the system.
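One simple semi-supervised strategy is self-training: start from the few labeled points and iteratively label the nearest unlabeled point. This is a toy sketch on invented two-cluster data, not a full semi-supervised algorithm:

```python
import math

# Hypothetical two-cluster data: only four points carry labels; the
# rest are unlabeled.
labeled = [((0.1, 0.2), "A"), ((0.2, 0.1), "A"),
           ((3.0, 3.1), "B"), ((3.1, 2.9), "B")]
unlabeled = [(0.15, 0.15), (0.3, 0.25), (2.9, 3.0), (3.2, 3.1)]

# Self-training sketch: repeatedly find the unlabeled point closest to
# any labeled point, give it that neighbour's label, and repeat until
# every point is labeled.
while unlabeled:
    point, (_, label) = min(
        ((u, l) for u in unlabeled for l in labeled),
        key=lambda pair: math.dist(pair[0], pair[1][0]))
    labeled.append((point, label))
    unlabeled.remove(point)

assigned = dict(labeled)
```

The small labeled seed propagates outward, so every point in each cluster ends up with that cluster's label.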

Reinforcement Learning

In reinforcement machine learning, the algorithm learns as it goes using trial and error. The system is provided with input regarding whether an outcome was successful or unsuccessful.
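Trial-and-error learning can be sketched with tabular Q-learning on a hypothetical five-cell corridor (the environment, rewards, and hyperparameters are invented for illustration):

```python
import random

random.seed(0)
# A hypothetical corridor of five cells: the agent starts in cell 0 and
# receives a reward of 1 only upon reaching cell 4.
n_states, actions = 5, (-1, +1)           # step left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):                       # episodes of trial and error
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit what worked, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: q[(s, b)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == 4 else 0.0
        best_next = max(q[(s2, b)] for b in actions)
        # Update the action's value toward reward + discounted future value.
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy: the highest-valued action in each non-goal state.
policy = {s: max(actions, key=lambda b: q[(s, b)]) for s in range(4)}
```

The system is never told the right answer; it only receives input on whether an outcome was successful (the reward), yet it learns to always step toward the goal.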

Machine Learning Explained: How Machine Learning Works

Machine learning is based on the discovery of patterns and makes use of the following processes:

Decision Process

The decision process involves the machine-learning model making a classification or prediction based on input data. The output is an estimate of a pattern found in the data.

Error Determination

With error determination, an error function assesses how accurate the model is. The error function compares the model’s predictions with known examples and can thus judge whether the algorithms are finding the right patterns.

Model Optimization Process

In the model optimization process, the model’s predictions are compared to the points in a dataset, and the algorithm’s weights are adjusted based on how closely the output matches the data. Repeating this cycle hones the model’s predictive abilities.
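The three processes can be sketched as one training loop: gradient descent for a simple linear model y = w·x + b on invented data (a minimal illustration, not a production training procedure):

```python
# Hypothetical data that roughly follows y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.0, 6.9, 9.1]
w, b, lr = 0.0, 0.0, 0.01

for _ in range(5000):
    # Decision process: the model predicts outputs from the inputs.
    preds = [w * x + b for x in xs]
    # Error determination: compare predictions with the known examples.
    errors = [p - y for p, y in zip(preds, ys)]
    # Model optimization: nudge the weights to shrink the error.
    w -= lr * 2 * sum(e * x for e, x in zip(errors, xs)) / len(xs)
    b -= lr * 2 * sum(errors) / len(xs)
```

After repeated predict-compare-adjust cycles, the weights converge near the underlying trend (w close to 2, b close to 1).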

Role of Machine Learning in Cybersecurity

Machine learning is already playing an important role in cybersecurity. Its predictive and pattern-recognition capabilities make it ideal for addressing several cybersecurity challenges. It can collect, structure, and organize data and then find patterns that can be used to better inform decisions. 

For example, a machine-learning model can take a stream of data from a factory floor and use it to predict when assembly line components may fail. It can also predict the likelihood of certain errors happening in the finished product. An engineer can then use this information to adjust the settings of the machines on the factory floor to enhance the likelihood the finished product will come out as desired.

Machine learning can also help decision-makers figure out which questions to ask as they seek to improve processes. For example, sales managers may be investing time in figuring out what sales reps should be saying to potential customers. However, machine learning may identify a completely different parameter, such as the color scheme of an item or its position within a display, that has a greater impact on the rates of sales. Given the right datasets, a machine-learning model can make these and other predictions that may escape human notice.

Real-world Applications of Machine Learning

Machine learning is already playing a significant role in the lives of everyday people. Even so, many of its capabilities remain relatively untapped.

Speech Recognition

Speech recognition is used when a computer transcribes speech into text or tries to understand verbal inputs by users. Speech recognition analyzes speech patterns and uses feedback as to whether or not the output is accurate. In this way, a speech recognition machine-learning model can tell the difference between similar sounds, such as those associated with “f” and “s.” 

For example, when someone asks Siri a question, Siri uses speech recognition to decipher the query. In many cases, it can tell the difference between words like “sell” and “fell,” thanks to its machine-learning-based speech recognition. Speech recognition also plays a role in the development of natural language processing (NLP) models, which help computers interact with humans.

Customer Services

Customer service bots have become increasingly common, and these depend on machine learning. For example, even if you do not type in a query perfectly accurately when asking a customer service bot a question, it can still recognize the general purpose of your query, thanks to data from machine-learning pattern recognition.

Computer Vision

Computers are able to “look” at things and categorize them. They can then use these categories to make decisions. Using machine vision, a computer can, for example, see a small boy crossing the street, identify what it sees as a person, and force a car to stop. Similarly, a machine-learning model can distinguish an object in its view, such as a guardrail, from a line running parallel to a highway. It can then use that information to steer a vehicle.

Recommendation Engines

Recommendation engines can analyze past datasets and then make recommendations accordingly. This machine-learning application depends on regression models. A regression model uses a set of data to predict what will happen in the future. 

For example, a company invested $20,000 in advertising every year for five years. Each year, sales went up by 10%. With all other factors being equal, a regression model may indicate that a $20,000 investment in the following year may also produce a 10% increase in sales.
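The ad-spend scenario above can be sketched as a regression in code. Sales that grow 10% per year are linear in log space, so a log-linear least-squares fit recovers the trend and projects the next year (the revenue figures are an illustrative index, not real data):

```python
import math

years = [1, 2, 3, 4, 5]
sales = [100 * 1.10 ** (y - 1) for y in years]   # 10% growth each year

# Least-squares fit of log(sales) = a * year + c.
n = len(years)
logs = [math.log(s) for s in sales]
mean_y = sum(years) / n
mean_l = sum(logs) / n
a = (sum((y - mean_y) * (l - mean_l) for y, l in zip(years, logs))
     / sum((y - mean_y) ** 2 for y in years))
c = mean_l - a * mean_y

growth_rate = math.exp(a) - 1            # fitted year-over-year growth
year6_forecast = math.exp(a * 6 + c)     # projection for the next year
```

With all other factors being equal, the fitted model recovers the 10% growth rate and projects a similar increase for year six.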

Automated Stock Trading

With the help of AI, automated stock traders can make millions of trades in one day. The systems use data from the markets to decide which trades are most likely to be profitable. They can then execute trades in less than a second.

Challenges Ahead in the Machine-learning Arena

Machine learning, like most technologies, comes with significant challenges. Some of these impact the day-to-day lives of people, while others have a more tangible effect on the world of cybersecurity.

Impact on the Jobs Market

Many people are concerned that machine learning may perform human jobs so well that machines ultimately supplant humans in several job sectors. In some ways, this has already happened, although the effect has been relatively limited.

For example, the car industry has robots on assembly lines that use machine learning to properly assemble components. In some cases, these robots perform tasks that humans could do if given the opportunity. However, the fallibility of human judgment and physical movement makes machine-learning-guided robots a better and safer alternative.

Also, a machine-learning model does not have to sleep or take lunch breaks. It also will not call in sick or get into disputes with others. Some manufacturers have capitalized on this to replace humans with machine learning algorithms.

However, the fear may be somewhat overblown. While machine-learning can do things humans cannot, it also does jobs that humans would rather not do. The same human resources that machine learning “replaced” can, in many cases, be used to accomplish other tasks—tasks that machines cannot do. These include making managerial decisions on the fly, and serving as mentors, teachers, artists, and other jobs where human discretion is paramount.

Technological Singularity

Technological singularity refers to the concept that machines may eventually learn to outperform humans in the vast majority of thinking-dependent tasks, including those involving scientific discovery and creative thinking. This is the premise behind cinematic inventions such as “Skynet” in the Terminator movies.

However, not only is this possibility a long way off, but it may also be slowed by the ways in which people limit the use of machine-learning technologies. The ability to make situation-sensitive decisions that factor in human emotions, imagination, and social skills is still not on the horizon. Further, as machine learning takes center stage in some day-to-day activities, such as driving, people are constantly looking for ways to limit the amount of “freedom” given to machines.

Because these debates happen not only in people’s kitchens but also on legislative floors and within courtrooms, it is unlikely that machines will be given free rein even when it comes to autonomous vehicles. Even if cars that completely drove themselves, with no human inside, became commonplace, machine-learning technology would still be many years away from organizing revolts against humans, overthrowing governments, or attacking important societal institutions.

Privacy Issues

Since machine learning can analyze objects and people’s faces, the machines that collect and store this data can invade people’s privacy, including data that pertains to their belongings and the objects within their homes.

For example, if machine learning is used to find a criminal through facial recognition technology, the faces of other people may be scanned and their data logged in a data center without their knowledge. In most cases, because the person is not guilty of wrongdoing, nothing comes of this type of scanning. However, if a government or police force abuses this technology, they can use it to find and arrest people simply by locating them through publicly positioned cameras. For many, this kind of privacy invasion is unacceptable.

On the other hand, machine learning can also help protect people’s privacy, particularly their personal data. It can, for instance, help companies stay in compliance with standards such as the General Data Protection Regulation (GDPR), which safeguards the data of people in the European Union. Machine learning can analyze the data entered into a system it oversees and instantly decide how it should be categorized, sending it to storage servers protected with the appropriate kinds of cybersecurity.

Bias and Discrimination Issues

Because machine-learning models recognize patterns, they are as susceptible to forming biases as humans are. For example, suppose a machine-learning algorithm studies the social media accounts of millions of people and concludes that a certain race or ethnicity is more likely to vote for a particular politician. If that politician then caters their campaign, as well as their services after they are elected, to that specific group, the other groups will have been effectively marginalized by the machine-learning algorithm.

Similarly, bias and discrimination arising from the application of machine learning can inadvertently limit the success of a company’s products. If the algorithm studies the usage habits of people in a certain city and reveals that they are more likely to take advantage of a product’s features, the company may choose to target that particular market. However, a group of people in a completely different area may use the product as much, if not more, than those in that city. They just have not experienced anything like it and are therefore unlikely to be identified by the algorithm as individuals attracted to its features.

Methods of Machine Learning

There are a few different machine learning types or methods, including the following:

  1. Supervised learning: Supervised learning algorithms are trained with labeled examples, that is, inputs for which the desired output is already known. The algorithm learns by comparing the output it produces with the correct one.
  2. Unsupervised learning: This involves data without any prior labeling, meaning the “correct answer” is not provided. The algorithm then has to figure out what is being shown, with the goal of finding structure within the data.
  3. Semi-supervised learning: This employs both labeled and unlabeled data to train the system, generally combining a sizable amount of unlabeled data with a small amount of labeled data.
  4. Reinforcement learning: This machine learning method is based on trial and error. The algorithm learns which actions result in the biggest rewards.

The Future of Machine Learning

The future of machine learning lies in hybrid AI, which combines symbolic AI and machine learning. Symbolic AI is a rule-based methodology for the processing of data, and it defines semantic relationships between different things to better grasp higher-level concepts. This enables an AI system to comprehend language instead of merely reading data.

Real-World Machine Learning Use Cases

In the real world, machine learning can be used for:

  1. Speech recognition, such as the translation of speech into text
  2. Customer service, including online chatbots that can answer questions as well as a live human
  3. Recommendation engines, such as recommending products that customers may like while they are checking out or browsing items 
  4. Automated stock trading, which can involve maximizing the performance of stock portfolios or making trades without the help of a human

How Fortinet Can Help

Fortinet FortiInsight uses machine learning to identify threats presented by potentially malicious users. FortiInsight leverages user and entity behavior analytics (UEBA) to recognize insider threats, which have increased 47% in recent years. FortiInsight monitors users and endpoints. It looks for the kind of behavior that may signal the emergence of an insider threat and then automatically responds.

FortiInsight can detect when a user or device is out of compliance with security protocols, acting suspiciously, or engaging in other anomalous behavior. It then automatically alerts the users associated with those accounts. Not only does FortiInsight protect organizations from threats but it also provides admins with enhanced visibility into activity on the network. Admins and supervisors can then use the data generated by FortiInsight to examine work patterns, productivity, and habits and adjust training and procedures accordingly.


What exactly is machine-learning?

Machine learning involves enabling computers to learn from data without being explicitly programmed for each task. In this way, the machine does the learning, discovering pertinent patterns in the data instead of someone having to spell them out.

What are machine-learning examples?

Examples of machine-learning include computers that help operate self-driving cars, computers that can improve the way they play games as they play more and more, and threat detection systems that can analyze user behavior and recognize anomalous activity.

What are the types of machine-learning?

There are a few different types of machine-learning, including supervised, unsupervised, semi-supervised, and reinforcement learning.

What Is Machine Learning? | Definition, Types, and Examples

Machine learning is a subset of artificial intelligence (AI) in which computers learn from data and improve with experience without being explicitly programmed.

Machine learning definition in detail

Machine learning is a subset of artificial intelligence (AI). It is focused on teaching computers to learn from data and to improve with experience – instead of being explicitly programmed to do so. In machine learning, algorithms are trained to find patterns and correlations in large data sets and to make the best decisions and predictions based on that analysis. Machine learning applications improve with use and become more accurate the more data they have access to.

Applications of machine learning are all around us – in our homes, our shopping carts, our entertainment media, and our healthcare.

How is machine learning related to AI?

Machine learning – along with its subsets, deep learning and neural networks – fits within AI as a series of concentric subsets. AI processes data to make decisions and predictions. Machine learning algorithms allow AI to not only process that data but to use it to learn and get smarter, without needing any additional programming. Artificial intelligence is the parent of all the machine learning subsets beneath it: within AI is machine learning; within machine learning is deep learning; and within deep learning are neural networks.

Diagram of the relationship between AI and machine learning

What is a neural network?

An artificial neural network (ANN) is modeled on the neurons in a biological brain. Artificial neurons are called nodes and are clustered together in multiple layers, operating in parallel. When an artificial neuron receives a numerical signal, it processes it and signals the other neurons connected to it. As in a human brain, neural reinforcement results in improved pattern recognition, expertise, and overall learning.
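A single node and a tiny layered network can be sketched in a few lines. The weights here are arbitrary illustrative values, not trained ones:

```python
import math

# A single artificial neuron ("node"): it sums its weighted numerical
# input signals, adds a bias, and passes the result through an
# activation function before signalling the next layer.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))    # sigmoid activation

# A tiny network: two hidden nodes operating in parallel feed one
# output node, mirroring the layered structure described above.
def tiny_network(x):
    h1 = neuron(x, [0.5, -0.4], 0.1)
    h2 = neuron(x, [-0.3, 0.8], 0.0)
    return neuron([h1, h2], [1.2, -0.7], 0.2)

out = tiny_network([1.0, 2.0])
```

The sigmoid squashes each node's combined signal into the range (0, 1) before it is passed on.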

What is deep learning?

This kind of machine learning is called “deep” because it includes many layers of the neural network and massive volumes of complex and disparate data. To achieve deep learning, the system engages with multiple layers in the network, extracting increasingly higher-level outputs. For example, a deep learning system that is processing nature images and looking for Gloriosa daisies will – at the first layer – recognize a plant. As it moves through the neural layers, it will then identify a flower, then a daisy, and finally a Gloriosa daisy. Examples of deep learning applications include speech recognition, image classification, and pharmaceutical analysis.

How does machine learning work?

Machine learning comprises different types of models, built using various algorithmic techniques. Depending upon the nature of the data and the desired outcome, one of four learning models can be used: supervised, unsupervised, semi-supervised, or reinforcement. Within each of those models, one or more algorithmic techniques may be applied, depending on the data sets in use and the intended results. Machine learning algorithms are designed to classify things, find patterns, predict outcomes, and make informed decisions. Algorithms can be used one at a time or combined to achieve the best possible accuracy when complex and more unpredictable data is involved.

Diagram of how the machine learning process works

What is supervised learning?

Supervised learning is the first of four machine learning models. In supervised learning algorithms, the machine is taught by example. Supervised learning models consist of “input” and “output” data pairs, where the output is labeled with the desired value. For example, let’s say the goal is for the machine to tell the difference between daisies and pansies. One binary input data pair includes both an image of a daisy and an image of a pansy. The desired outcome for that particular pair is to pick the daisy, so it will be pre-identified as the correct outcome.

By way of an algorithm, the system compiles all of this training data over time and begins to determine correlative similarities, differences, and other points of logic – until it can predict the answers for daisy-or-pansy questions all by itself. It is the equivalent of giving a child a set of problems with an answer key, then asking them to show their work and explain their logic. Supervised learning models are used in many of the applications we interact with every day, such as recommendation engines for products and traffic analysis apps like Waze, which predict the fastest route at different times of day.

What is unsupervised learning?

Unsupervised learning is the second of the four machine learning models. In unsupervised learning models, there is no answer key. The machine studies the input data – much of which is unlabeled and unstructured – and begins to identify patterns and correlations, using all the relevant, accessible data. In many ways, unsupervised learning is modeled on how humans observe the world. We use intuition and experience to group things together. As we experience more and more examples of something, our ability to categorize and identify it becomes increasingly accurate. For machines, “experience” is defined by the amount of data that is input and made available. Common examples of unsupervised learning applications include facial recognition, gene sequence analysis, market research, and cybersecurity.

What is semi-supervised learning?

Semi-supervised learning is the third of four machine learning models. In a perfect world, all data would be structured and labeled before being input into a system. But since that is obviously not feasible, semi-supervised learning becomes a workable solution when vast amounts of raw, unstructured data are present. This model consists of inputting small amounts of labeled data to augment unlabeled data sets. Essentially, the labeled data acts to give a running start to the system and can considerably improve learning speed and accuracy. A semi-supervised learning algorithm instructs the machine to analyze the labeled data for correlative properties that could be applied to the unlabeled data.

As explored in depth in this MIT Press research paper, there are, however, risks associated with this model, where flaws in the labeled data get learned and replicated by the system. Companies that most successfully use semi-supervised learning ensure that best practice protocols are in place. Semi-supervised learning is used in speech and linguistic analysis, complex medical research such as protein categorization, and high-level fraud detection.

What is reinforcement learning?

Reinforcement learning is the fourth machine learning model. In supervised learning, the machine is given the answer key and learns by finding correlations among all the correct outcomes. The reinforcement learning model does not include an answer key but, rather, inputs a set of allowable actions, rules, and potential end states. When the desired goal of the algorithm is fixed or binary, machines can learn by example. But in cases where the desired outcome is mutable, the system must learn by experience and reward. In reinforcement learning models, the “reward” is numerical and is programmed into the algorithm as something the system seeks to collect.

In many ways, this model is analogous to teaching someone how to play chess. Certainly, it would be impossible to try to show them every potential move. Instead, you explain the rules and they build up their skill through practice. Rewards come in the form of not only winning the game, but also acquiring the opponent’s pieces. Applications of reinforcement learning include automated price bidding for buyers of online advertising, computer game development, and high-stakes stock market trading.

Enterprise machine learning in action

Machine learning algorithms recognize patterns and correlations, which means they are very good at analyzing their own ROI. For companies that invest in machine learning technologies, this feature allows for an almost immediate assessment of operational impact. Below is just a small sample of some of the growing areas of enterprise machine learning applications.

  • Recommendation engines: From 2009 to 2017, the number of U.S. households subscribing to video streaming services rose by 450%. And a 2020 article in Forbes magazine reports a further spike in video streaming usage figures of up to 70%. Recommendation engines have applications across many retail and shopping platforms, but they are definitely coming into their own with streaming music and video services.
  • Dynamic marketing: Generating leads and ushering them through the sales funnel requires the ability to gather and analyze as much customer data as possible. Modern consumers generate an enormous amount of varied and unstructured data – from chat transcripts to image uploads. The use of machine learning applications helps marketers understand this data – and use it to deliver personalized marketing content and real-time engagement with customers and leads.
  • ERP and process automation: ERP databases contain broad and disparate data sets, which may include sales performance statistics, consumer reviews, market trend reports, and supply chain management records. Machine learning algorithms can be used to find correlations and patterns in such data. Those insights can then be used to inform virtually every area of the business, including optimizing the workflows of Internet of Things (IoT) devices within the network or the best ways to automate repetitive or error-prone tasks.
  • Predictive maintenance: Modern supply chains and smart factories are increasingly making use of IoT devices and machines, as well as cloud connectivity across all their fleets and operations. Breakdowns and inefficiencies can result in enormous costs and disruptions. When maintenance and repair data is collected manually, it is almost impossible to predict potential problems – let alone automate processes to predict and prevent them. IoT gateway sensors can be fitted to even decades-old analog machines, delivering visibility and efficiency across the business.

Machine learning challenges

In his book Spurious Correlations, data scientist and Harvard graduate Tyler Vigen points out that “Not all correlations are indicative of an underlying causal connection.” To illustrate this, he includes a chart showing an apparently strong correlation between margarine consumption and the divorce rate in the state of Maine. Of course, this chart is intended to make a humorous point. However, on a more serious note, machine learning applications are vulnerable to both human and algorithmic bias and error. And due to their propensity to learn and adapt, errors and spurious correlations can quickly propagate and pollute outcomes across the neural network.
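The pitfall is easy to reproduce: two trending but causally unrelated series can show a near-perfect correlation coefficient. The numbers below are invented for illustration, not the actual chart data:

```python
# Two downward-trending series with no causal link between them.
margarine = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]
divorces = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(margarine, divorces)   # high r, despite no causal connection
```

A naive model would latch onto this strong correlation, which is exactly why human oversight of learned patterns matters.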

An additional challenge comes from machine learning models, where the algorithm and its output are so complex that they cannot be explained or understood by humans. This is called a “black box” model and it puts companies at risk when they find themselves unable to determine how and why an algorithm arrived at a particular conclusion or decision.

Fortunately, as the complexity of data sets and machine learning algorithms increases, so do the tools and resources available to manage risk. The best companies are working to eliminate error and bias by establishing robust and up-to-date AI governance guidelines and best practice protocols.

Machine learning FAQs

What’s the difference between AI and machine learning?

Machine learning is a subset of AI and cannot exist without it. AI uses and processes data to make decisions and predictions – it is the brain of a computer-based system and is the “intelligence” exhibited by machines. Machine learning algorithms within the AI, as well as other AI-powered apps, allow the system to not only process that data, but to use it to execute tasks, make predictions, learn, and get smarter, without needing any additional programming. They give the AI something goal-oriented to do with all that intelligence and data.

Can machine learning be added to an existing system?

Yes, but it should be approached as a business-wide endeavor, not just an IT upgrade. The companies that have the best results with digital transformation projects take an unflinching assessment of their existing resources and skill sets and ensure they have the right foundational systems in place before getting started.

Data science versus machine learning

Machine learning is often treated as a tool within the broader practice of data science. Data science focuses on statistics and algorithms, uses regression and classification techniques, and interprets and communicates results. Machine learning focuses on programming, automation, scaling, and incorporating and warehousing results.

Data mining versus machine learning

Machine learning looks at patterns and correlations; it learns from them and optimizes itself as it goes. Data mining is used as an information source for machine learning. Data mining techniques employ complex algorithms themselves and can help to provide better organized data sets for the machine learning application to use.

Deep learning versus neural networks

The connected neurons within an artificial neural network are called nodes, which are clustered together in layers. When a node receives a numerical signal, it signals the other relevant neurons, which operate in parallel. Deep learning uses the neural network and is “deep” because it uses very large volumes of data and engages with multiple layers in the neural network simultaneously.

Machine learning versus statistics

Machine learning is the amalgam of several learning models, techniques, and technologies, which may include statistics. Statistics itself focuses on using data to make predictions and create models for analysis.