8 Common Types of Neural Networks

Written by Coursera Staff

With the advancements of artificial intelligence and machine learning, neural networks are becoming more widely discussed thanks to their role in deep learning.

Neural networks learn continuously and, as a result, can improve over time, making intelligent decisions based on the insights identified within the data. Many industries benefit from neural networks, with applications including medical diagnostics, energy demand forecasting, targeted marketing, and financial prediction.

An introduction to artificial intelligence, machine learning, and deep learning

Before looking into neural networks, it’s important to understand what artificial intelligence, machine learning, and deep learning are and how they are related. 

Artificial intelligence describes the process of training computers to mimic how the human brain learns and solves problems. Computers can do this through different types of learning: machine learning and deep learning.

The term “artificial intelligence” can be traced back to 1956 when computer scientist John McCarthy coined it. However, in 1950, British mathematician and computer scientist Alan Turing discussed the concept of machines being able to think in a groundbreaking paper that played a significant role in the development of artificial intelligence [1].

Machine learning is a series of algorithms, each taking in information, analysing it, and then using that insight to make an informed decision. As machine learning algorithms are given more and more data, they can become increasingly intelligent and make better, more informed decisions.

Deep learning is a subset of machine learning. This is where neural networks play a role, as they are used in deep learning to allow data to be processed without a human pre-determining the program. Instead, the nodes of a neural network pass data to one another, similar to how neurons in the brain communicate, creating a more autonomous process.

Overview of neural networks

The basic structure of a neural network consists of three main components: the input layer, the hidden layer, and the output layer. Depending on complexity, a neural network can have one or multiple input, hidden, or output layers.

The input layer receives information; each input node processes the data, decides how to categorise it, and transfers it to the next layer: the hidden layer.

A hidden layer receives information from the input layer or from other hidden layers. The number and type of hidden layers vary based on the type of neural network being used. At this point in the process, each hidden layer takes the output of the previous layer, processes the information, and then moves it on to the next layer, either another hidden layer or the output layer.

The output layer is the final layer in a neural network. After receiving the data from the hidden layer (or layers), the output layer processes it and produces the output value.
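The three-layer flow described above can be sketched as a simple forward pass. This is a minimal illustration, not a production network; the layer sizes (4 inputs, 5 hidden units, 3 outputs) and random weights are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 input features, 5 hidden units, 3 output values.
W1 = rng.normal(size=(4, 5))   # input -> hidden weights
b1 = np.zeros(5)
W2 = rng.normal(size=(5, 3))   # hidden -> output weights
b2 = np.zeros(3)

def forward(x):
    # Hidden layer: weighted sum of the inputs passed through an activation.
    hidden = np.tanh(x @ W1 + b1)
    # Output layer: processes the hidden layer's result into final values.
    return hidden @ W2 + b2

x = rng.normal(size=(1, 4))    # a single example with 4 input features
y = forward(x)                 # one output value per output node
```

In a real network, the weights would be learned from data rather than drawn at random; the sketch only shows how information moves from layer to layer.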

8 types of neural networks

Various neural networks exist, each with a unique structure and function. This list will discuss eight commonly used neural networks in today’s technology. 

1. Convolutional neural networks

Convolutional neural networks (CNNs) can input images, identify the objects in a picture, and differentiate them from one another. Their real-world applications include pattern recognition, image recognition, and object detection. A CNN’s structure consists of three main layers. First is the convolutional layer, where most of the computation occurs. Second is the pooling layer, where the number of parameters in the input is reduced. Lastly, the fully connected layer classifies the features extracted from the previous layers.

2. Recurrent neural networks

Recurrent neural networks (RNNs) are used for language translation, speech recognition, natural language processing, and image captioning. Examples of products using RNNs include smart home technologies and voice command features on mobile phones. Feedback loops in the structure of RNNs allow information to be stored similarly to how your memory works.
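The feedback loop is the defining feature: the hidden state computed at one time step is fed back in at the next. A minimal sketch, with arbitrary sizes (3 input features, 5 hidden units) and random, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5                       # hypothetical sizes
Wx = rng.normal(size=(n_in, n_hidden))      # input -> hidden
Wh = rng.normal(size=(n_hidden, n_hidden))  # hidden -> hidden: the feedback loop
b = np.zeros(n_hidden)

def rnn(sequence):
    # The hidden state h carries information forward from step to step,
    # which is what lets the network "remember" earlier inputs.
    h = np.zeros(n_hidden)
    for x in sequence:
        h = np.tanh(x @ Wx + h @ Wh + b)
    return h

seq = rng.normal(size=(7, n_in))            # a sequence of 7 time steps
h_final = rnn(seq)                          # summary of the whole sequence
```

Because every step reuses the same weights and the previous state, the final hidden state depends on the entire sequence, not just the last input.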

3. Radial basis function networks

Radial basis function (RBF) networks differ from other neural networks because the input layer performs no computations. Instead, it passes the data directly to the hidden layer. As a result, RBFs have a faster learning speed. Applications of RBF networks include time series prediction and function approximation.
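The speed advantage comes from the structure: the input layer forwards the data unchanged, the hidden units respond according to distance from fixed centres, and only the linear output weights need fitting. A sketch on a function-approximation task (fitting sin(x)); the 10 centres and the width of 0.5 are arbitrary choices:

```python
import numpy as np

# Target function to approximate.
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x)

# Hidden layer: Gaussian units with fixed centres; no computation happens
# in the input layer, the raw x values go straight to these units.
centres = np.linspace(0, 2 * np.pi, 10)
width = 0.5
phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

# Output layer: only these linear weights are learned, here in one
# least-squares step, which is why training is fast.
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
y_hat = phi @ weights
```

Each hidden unit fires strongly only for inputs near its centre, so the network builds the function out of localised bumps.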

4. Long short-term memory networks

Long short-term memory (LSTM) networks are unique in that they maintain both short-term and long-term memory cells, using gates to decide which information to keep, discard, or loop back into the network, whether individual data points or entire sequences. LSTMs are used in applications such as handwriting recognition and video-to-text conversion.
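The two memories correspond to two state vectors: a long-term cell state c and a short-term hidden state h, updated by gates at every step. A minimal single-cell sketch with arbitrary sizes and random, untrained weights (biases omitted for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_h = 3, 4                                  # hypothetical sizes
# One weight matrix per gate, acting on [input, previous hidden state].
Wf, Wi, Wo, Wc = (rng.normal(size=(n_in + n_h, n_h)) for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(z @ Wf)   # forget gate: what to erase from long-term memory
    i = sigmoid(z @ Wi)   # input gate: what new information to store
    o = sigmoid(z @ Wo)   # output gate: what to expose as short-term memory
    c = f * c + i * np.tanh(z @ Wc)   # c: the long-term cell state
    h = o * np.tanh(c)                # h: the short-term (working) state
    return h, c

h, c = np.zeros(n_h), np.zeros(n_h)
for x in rng.normal(size=(6, n_in)):              # run over a 6-step sequence
    h, c = lstm_step(x, h, c)
```

Because the cell state is updated additively through the forget and input gates, information can survive many steps, which is what plain RNNs struggle with.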

5. Multilayer perceptrons

Multilayer perceptrons (MLPs) are neural networks capable of modelling both linear and non-linear relationships in data. Through backpropagation, MLPs can reduce their error rates during training. Applications that benefit from MLPs include face recognition and computer vision.
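Both properties can be seen on XOR, a classic problem no purely linear model can solve. This is a hand-rolled sketch, not a recommended training setup; the hidden size, learning rate, and epoch count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR labels

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)    # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)    # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagation: push the output error back through each layer
    # and nudge every weight to reduce it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

mse = np.mean((out - y) ** 2)   # error shrinks as training proceeds
```

The non-linear hidden layer is essential here: remove it and no amount of training separates XOR's classes.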

6. Generative adversarial networks

Generative adversarial networks (GANs) can generate new data sets that share the same statistics as the training set and often pass as real data. An example of this you’ve likely seen is art created with AI. GANs can replicate popular art forms based on patterns in the training set, creating pieces often indistinguishable from human artwork.

7. Deep belief networks

Deep belief networks (DBNs) are unique because they stack simpler networks so that each network's hidden layer serves as the input for the next. This layer-by-layer structure allows the networks to be trained faster. They are used to generate images and motion-capture data.

8. Self-organising maps

Self-organising maps (SOMs), or Kohonen maps, can transform large, complex data sets into understandable two-dimensional maps where geometric relationships can be visualised. This works because SOMs use competitive learning algorithms, in which neurons compete to respond to each input and the neuron whose weights best match the input wins. Practical applications of SOMs include displaying voting trends for analysis and organising complex data collected by astronomers so it can be interpreted.
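The competitive step is simple to write down: find the best-matching unit (BMU) for each input, then pull it and its map neighbours toward that input. A sketch on a 1-D map of 10 units; the data, learning rate, and neighbourhood radius are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, size=(200, 2))           # 2-D points to organise
n_units = 10
weights = rng.uniform(0, 1, size=(n_units, 2))    # one weight vector per unit

lr, radius = 0.5, 2.0
for x in data:
    # Competition: the unit whose weights best match the input wins.
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Cooperation: units near the winner on the map are pulled harder.
    dist = np.abs(np.arange(n_units) - bmu)
    influence = np.exp(-dist ** 2 / (2 * radius ** 2))
    weights += lr * influence[:, None] * (x - weights)
```

A full SOM would decay the learning rate and radius over time and use a 2-D grid of units; this sketch keeps only the compete-then-update core.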

Next steps

On Coursera, you'll find highly rated courses in machine learning and deep learning to help you prepare for a career working with artificial intelligence. The Deep Learning Specialisation from DeepLearning.AI will help you learn how to build and apply algorithms to image and video data.

You can also earn a Machine Learning Professional Certificate from IBM to bolster your resume and develop Python skills to build machine learning algorithms and train neural networks.

Article sources

1. Stanford Encyclopedia of Philosophy. “Artificial Intelligence,” https://plato.stanford.edu/entries/artificial-intelligence/. Accessed March 11, 2024.


This content has been made available for informational purposes only. Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals.