    What Are Neural Networks?

    Artificial neural networks process data in a manner similar to the human brain.

    Written by

    Samuel Greengard
    Published November 9, 2022


A key element of artificial intelligence, artificial neural networks (ANNs) operate in a manner similar to the human brain. They mimic the way biological neurons function in order to solve complex computing problems. The method, which can involve millions of artificial neurons, falls under the umbrella of machine learning. It produces mathematical algorithms that are widely used to recognize patterns and solve complex problems in science and business.

ANNs, which are also referred to as simulated neural networks (SNNs), consist of nodes that signal one another. Typically, a neural network's nodes are organized into an input layer, one or more hidden layers, and an output layer. Each node is assigned weights and a threshold. When a node's threshold is exceeded, the node activates and passes data along to the next layer; if the data doesn't trigger a response, the network effectively ignores it.
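To make this concrete, here is a minimal sketch of data flowing through a tiny network, written in Python with NumPy. The layer sizes, random weights, and ReLU activation are illustrative assumptions, not values from the article.

```python
import numpy as np

# A tiny network: 3 inputs -> 4 hidden nodes -> 2 outputs.
# Layer sizes and random weights are illustrative assumptions.
rng = np.random.default_rng(0)

x = rng.random(3)                           # input layer: raw data
W1, b1 = rng.random((4, 3)), rng.random(4)  # hidden-layer weights and biases
W2, b2 = rng.random((2, 4)), rng.random(2)  # output-layer weights and biases

hidden = np.maximum(0, W1 @ x + b1)  # ReLU: a node below its threshold stays silent
output = W2 @ hidden + b2            # output layer combines the active hidden nodes
print(output)
```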

A neural network uses training data to recognize complex and often hidden patterns and develop algorithms. Over time and with more data, its accuracy improves. As a result, this machine learning technique produces computer algorithms that are valuable for an array of tasks, including speech recognition, language translation, image recognition, robotic behavior and many other areas of artificial intelligence (AI).

Deep learning systems, a subset of machine learning built on neural networks, are increasingly widespread. They're called "deep learning" because they contain large numbers of neural layers. Using these approaches, data scientists can perform complex tasks that lie outside the ability or scope of humans. Moreover, these systems can continually advance and evolve as new data appears.

      Also see: What is Artificial Intelligence 

How and Why Are Neural Networks Used?

      The growing volume of data generated by computers contains answers for numerous questions and problems. Some industry sources report that upwards of 2.5 quintillion bytes of data are generated daily, and more than 100 zettabytes of data exist globally. This includes structured and unstructured data from databases, sales management systems, sensors, financial systems, blogs, social media, audio and video, text and logs, and spreadsheet files, among others.

As a result, deep learning systems based on neural nets are widely used by governments, businesses, researchers and others to mine this data. Consulting firm Gartner reports that more than 80% of data scientists now use ANNs, and natural language generation fueled by deep learning is now part of 90% of modern BI and analytics platforms. Common areas of use include life sciences, manufacturing, banking, retail and the public sector.

For example, healthcare companies use neural nets to handle tasks such as predictive diagnostics, biomedical imaging and health monitoring. Financial services firms rely on them to detect fraud, conduct credit analysis and automate advisory services. Retailers tap deep learning for marketing, chatbots and the augmented reality that's increasingly used in smartphone apps. Manufacturers rely on the technique for machine vision systems that spot defects and safety violations, as well as to automate supply chains and forecast demand.

Another common use case is smart city initiatives. For instance, neural nets might ingest image data from wireless cameras, and the machine learning system subsequently learns how to adapt traffic signals and other systems to optimize traffic flow in real time. This approach is far less expensive than embedding sensors in pavement. These systems, which often incorporate connected Internet of Things (IoT) sensors and devices, can also improve the performance of energy systems, provide other advanced automation and enhance security features.

      Also see: AI vs. ML: Artificial Intelligence and Machine Learning

      A Brief History of Artificial Neural Nets

The origins of artificial neural networks date back to 1943. At that time, Warren McCulloch and Walter Pitts, who both worked at the intersection of neuroscience and computing, introduced a computational model based on algorithms called threshold logic. The model relies on a basic building block, in this case an artificial neuron acting as a logic gate, to build a larger computational framework.

Through the 1940s and 1950s, researchers continued to explore artificial neural network models. In 1958, Frank Rosenblatt implemented the perceptron, an algorithm for supervised learning, in an actual computing device. By the 1960s, the first functional models with multiple layers began to emerge, and in 1975 researcher Kunihiko Fukushima developed an early multilayered neural network. Modern machine learning capabilities began to emerge in the 1980s, and advances in computing power and in the field over the following three decades have produced far more powerful capabilities.

One of the key developments in the field was the adoption of graphics processing units (GPUs) for deep learning around 2010. These processors deliver significant speed and performance advantages, including the ability to reduce errors through greater fine-tuning across the layers of a model. In fact, GPUs enable training at a scale that otherwise isn't practical for deep learning. Today's neural nets use several techniques and models to tackle increasingly complex tasks that in some cases exceed human capabilities.

      Also see: Top AI Software 

      How Do Neural Nets Work?

      The basis of a neural net is an artificial neuron. Neurons are placed into three different types of layers:

• Input layer: The input layer ingests raw data and converts it into a numerical form the computer can process.
      • Hidden layer: The hidden layer performs mathematical computations on the data using non-linear processing techniques that work across the various hidden layers of the model. This “weighting” process develops a hierarchical mathematical framework.
      • Output layer: The output layer provides the algorithm that’s used for AI.

One way to think about neural networks is that each individual node operates as its own linear regression model, which includes input data, weights, a bias (or threshold), and an output, according to IBM. Once the input layer is determined, the system assigns weights that reflect the importance of any given variable. These weighted inputs feed into a mathematical model. When a node's output exceeds the critical threshold, the node activates, relaying data to the next node in the network. This process continues until the data reaches the output layer and the network produces a result.
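A minimal sketch of that single-node view, with illustrative (assumed) input values, weights, and bias:

```python
import numpy as np

# One node as a linear regression plus a threshold.
# The input values, weights, and bias below are illustrative assumptions.
inputs  = np.array([0.7, 0.1, 0.4])   # incoming data
weights = np.array([0.9, 0.2, 0.5])   # importance assigned to each input
bias    = -0.6                        # shifts the activation threshold

z = np.dot(inputs, weights) + bias    # weighted sum plus bias
activated = z > 0                     # the node "fires" only past the threshold
print(z, activated)                   # if activated, the output is relayed onward
```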

In some cases, data flows through an artificial neural network in one direction only, from input to output. However, it's possible to use other techniques, such as backpropagation, which traces errors from the output back to the input.

This approach makes it possible to improve error detection and reduce biased or inaccurate results. Using this technique, the network's weights can be adjusted and a deep learning model adapted as needed.
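Here is a minimal backpropagation sketch in NumPy showing a single training step; the network shape, data, and learning rate are illustrative assumptions:

```python
import numpy as np

# One step of backpropagation: compare the output to a target, then push
# the error backward through the layers to adjust the weights.
rng = np.random.default_rng(0)
x, target = rng.random(3), np.array([1.0])   # illustrative input and label

W1, b1 = rng.random((4, 3)), np.zeros(4)
W2, b2 = rng.random((1, 4)), np.zeros(1)
lr = 0.1                                # learning rate

# forward pass
h = np.maximum(0, W1 @ x + b1)          # hidden activations (ReLU)
y = W2 @ h + b2                         # network output

# backward pass: propagate the error from output toward input
grad_y = 2 * (y - target)               # derivative of squared error
grad_W2 = np.outer(grad_y, h)
grad_h = W2.T @ grad_y * (h > 0)        # ReLU gradient gates the error
grad_W1 = np.outer(grad_h, x)

# adjust the weights to reduce the error
W2 -= lr * grad_W2; b2 -= lr * grad_y
W1 -= lr * grad_W1; b1 -= lr * grad_h
```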

      What Types of Artificial Neural Nets Exist?

While neural nets loosely reflect the way the human brain works, they have become more specialized over time. Today, four primary types of artificial neural nets are in wide use. Each has advantages and disadvantages depending on the intended purpose and real-world use case (skeletal code sketches of each follow the list). These include:

      • Convolutional neural networks (CNNs): These machine learning systems are commonly used for machine vision, object detection, image classification and certain types of forecasting. A CNN incorporates five distinct layers: input, convolution, pooling, fully connected and output. These systems require enormous processing power—typically supplied by GPUs.
• Recurrent neural networks (RNNs): This type of ANN typically uses time-series and other sequential data to produce probabilistic models. In other words, the inputs aren't independent of one another. This makes RNNs well suited for tasks such as natural language processing, speech recognition, sentiment analysis and other text-related applications.
• Feedforward neural networks (FNNs): Unlike recurrent neural networks, FNNs do not use any type of cycle or loop to process data and develop a model. Instead, data flows in one direction only: forward from the input nodes, through any hidden nodes and on to the output nodes. FNNs are often used for supervised learning tasks in areas such as digital marketing and sales.
• Autoencoder neural networks: These unsupervised machine learning systems, sometimes referred to as autoassociators, ingest unlabeled inputs, encode the data, and then decode it while attempting to pinpoint and extract the most valuable information. The method is designed to reduce data noise. A popular use for this methodology is detecting fraud.
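For orientation, here are skeletal PyTorch definitions of the four families; all layer sizes and configurations are illustrative assumptions rather than prescriptions:

```python
import torch.nn as nn

# Skeletal examples of the four families; sizes are illustrative assumptions.
cnn = nn.Sequential(                      # convolutional: vision tasks
    nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
    nn.MaxPool2d(2),                      # pooling layer
    nn.Flatten(), nn.LazyLinear(10),      # fully connected -> output
)

rnn = nn.RNN(input_size=32, hidden_size=64, batch_first=True)  # sequential data

fnn = nn.Sequential(                      # feedforward: one-way flow, no loops
    nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1),
)

autoencoder = nn.Sequential(              # encoder compresses, decoder reconstructs
    nn.Linear(784, 32), nn.ReLU(),        # encoder
    nn.Linear(32, 784), nn.Sigmoid(),     # decoder
)
```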
      Also see: The Future of Artificial Intelligence

      How Do Data Scientists Use Neural Nets for Training?

      After a data scientist has identified a use case for an ANN and selected a specific approach, the next step is to put the system into motion. There are two basic approaches that data scientists use: supervised learning and unsupervised learning.

      Supervised learning

      As the name implies, a human oversees this type of machine learning system. The operator labels datasets to help train the algorithm so that it can classify data and predict outcomes accurately.

      For example, a human might label photos of different types of cats—lions, tigers, jaguars, leopards, mountain lions, bobcats, ocelots and housecats—so a system can learn to differentiate them. Casual users may handle this task unknowingly when they tag email as spam, for instance. Supervised learning often plays a role in object recognition, predictive analytics and sentiment analysis.
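A minimal supervised-learning sketch using scikit-learn's MLPClassifier; the toy XOR dataset below stands in for human-labeled data:

```python
from sklearn.neural_network import MLPClassifier

# Supervised learning: human-provided labels guide training.
# This tiny XOR dataset is an illustrative assumption, not real data.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]     # input features
y = [0, 1, 1, 0]                         # labels supplied by a human

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)                            # learn from the labeled examples
print(clf.predict([[1, 0]]))             # classify new, unseen data
```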

      Unsupervised learning

      These systems learn from data that hasn’t been classified or tagged by a human. Simply put, the system finds the patterns and builds an algorithmic model on its own—hence the name “unsupervised learning.”
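As a minimal unsupervised sketch, an autoencoder can learn structure from unlabeled data by reconstructing its own inputs; the random data and layer sizes here are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Unsupervised learning: no human labels, the "target" is the input itself.
data = torch.rand(64, 16)                      # unlabeled inputs

model = nn.Sequential(nn.Linear(16, 4), nn.ReLU(), nn.Linear(4, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(100):
    recon = model(data)                        # try to reconstruct the input
    loss = loss_fn(recon, data)                # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())                             # error shrinks as patterns emerge
```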

      In some cases, data scientists also use semi-supervised learning, which combines the two approaches, and reinforcement learning, which builds results using a computer program that receives positive and negative “rewards” as it pursues a model.

      Also see: The History of Artificial Intelligence 

      How Are Artificial Neural Nets Impacting the Enterprise?

      Among the more common uses for ANNs is predictive analytics. A growing number of enterprise software platforms—including cloud frameworks—include machine learning, deep learning and other tools that help build advanced AI models.

      This type of predictive analytics is often used for tasks such as delivering more targeted content to customers, understanding credit limits and approvals, building chatbots and other natural language tools, and delivering recommendations for eCommerce, social media and streaming media.

      But predictive analytics is also making its presence felt in many other sectors. For instance, in healthcare, AI-enhanced software guides doctors and other practitioners to relevant outcomes by suggesting drugs and treatments. In manufacturing, machine vision helps manufacturers detect errors and imperfections that would escape the human eye. In fleet management and logistics, software determines routing and how to optimize equipment and fuel—adapting in real-time to weather or traffic. In cybersecurity, neural nets are increasingly used to detect malware and other suspicious behavior on a network.

These systems are also filtering into everyday use in software development and business. In many cases, enterprise applications include no-code or low-code drag-and-drop interfaces that let users assemble AI and ML tasks. AI code-generation systems such as OpenAI Codex, IBM's Project Wisdom, Amazon's CodeWhisperer and GitHub's Copilot are also moving into the mainstream. They are trained on huge datasets and can generate code from natural language input.

Tapping cloud computing resources, these systems handle a growing array of tasks, from building chatbots and digital marketing systems to automating routine work. However, Forrester warns that gains don't happen without the right technology platform. It's critical to invest in systems that support advanced machine learning and deep learning, which often means clouds that supply powerful GPUs.

      Also see: Best Machine Learning Platforms 

      What Ethical and Legal Concerns Exist?

      One problem with neural networks is that the information they provide is only as good as what’s fed into the system. In addition to the possibility of winding up with a poorly performing system, researchers have found numerous cases of implicit bias, which can result in gender or racial discrimination.

This can cause problems, including legal repercussions, for insurance companies, healthcare providers, financial services firms and government agencies. As a result, businesses should carefully weigh ethical and legal concerns before basing decision-making on neural networks and deep learning.

      What is the Future of Artificial Neural Networks?

Increasingly powerful computers and faster GPUs promise to push ANNs and deep learning forward. In the coming years, these systems will drive advances in a diverse array of areas, including predictive analytics; autonomous vehicles; swarm robotics; pharmaceutical research; predictive medicine; personal assistants and chatbots; cybersecurity; software development; and manufacturing and supply chain automation.

      As more data accumulates—including IoT sensor data and edge computing advances—new use cases will appear as well.

      Also see: Data Analytics Trends 
Samuel Greengard
      Samuel Greengard is a business and technology writer based in West Linn, Oregon. He is author of The Internet of Things (MIT Press) and Virtual Reality (MIT Press).
