Use of Hopfield Networks as Analytic Tools


The study of Sheikhan and Hemmati (2011) shows that Hopfield networks give researchers a means of understanding how memory is processed and retrieved in human beings.

The reason behind this is connected to the concept of “memory vectors”, which describe how patterns are encoded, retrieved, and then pieced back together to produce a pattern closely resembling the original.

As an analytic tool, this helps researchers better understand how memories are retrieved and then combined to create the thoughts and actions we have in the present.

What must be understood is that while cognitive psychology has enabled researchers and students alike to understand how memory works, there is still a gap in knowledge when it comes to “seeing”, so to speak, how the contents of a variety of different memory vectors are combined into what we know as memory.

This is where Hopfield networks come into play: they are designed as artificial neural networks into which content is fed and a variety of learning rules are applied, in order to understand how those rules might operate in a human neural network.
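
To make the idea concrete, the following is a minimal sketch of a discrete Hopfield network of the kind described above: binary “memory vectors” are stored with a Hebbian learning rule and one of them is then retrieved from a partial cue. The network size, patterns, and update schedule are illustrative assumptions introduced here, not details taken from the cited studies.

```python
# Minimal discrete Hopfield network: store +/-1 "memory vectors" with the
# Hebbian rule and recall one of them from a corrupted cue.
import numpy as np

def train_hebbian(patterns):
    """Weight matrix as the (1/N-scaled) sum of outer products of the patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / n

def recall(w, cue, sweeps=10, seed=0):
    """Asynchronous updates: each unit takes the sign of its weighted input."""
    rng = np.random.default_rng(seed)
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

# Two toy memory vectors of 8 units each (values chosen only for illustration)
memories = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
weights = train_hebbian(memories)

cue = memories[0].copy()
cue[:2] *= -1                       # corrupt the first two units of memory 0
print(recall(weights, cue))         # typically settles back on memories[0]
```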

Through this, psychologists are better able to understand the processes involved in learning and potentially discover new methods that could help resolve learning impairments or even improve the learning process as a whole.

One possible theoretical approach can be seen in the study of Hsu (2012), which explained that, as an analytical tool, the Hopfield network can be considered an early step in the development of artificial learning networks that could eventually give rise to artificial intelligence (A.I.).

Hsu (2012) explains this by stating that, over time, the processes that go into the creation of computers will increasingly attempt to replicate the efficiency and learning ability of the human mind.

As such, understanding how learning actually works is the first step toward creating an artificial neural network that is independent of outside input and can learn on its own.

Other potential applications of the network can be seen in the study of Menezes and Monteiro (2011), which showed that the discrete-time neural network proposed by Hopfield can be used for storing and recognizing binary patterns.

Their study, which investigated the removal of simulated neurons from the network, helps to show how the Hopfield model could potentially aid in the rehabilitation of individuals who suffer memory loss as a direct result of damage to some part of the brain affecting either their short-term or long-term memory.
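
A rough, hedged sketch of the kind of “neuron removal” experiment described above might look as follows: store binary memories, silence a few simulated neurons by zeroing their connections, and check how much of a memory can still be completed from a damaged cue. The lesion procedure, pattern sizes, and scoring here are illustrative assumptions, not Menezes and Monteiro’s actual method.

```python
# Illustrative "lesion" experiment on a discrete Hopfield network.
# All values are chosen for demonstration only.
import numpy as np

rng = np.random.default_rng(1)
n = 40
memories = rng.choice([-1, 1], size=(2, n))            # two stored patterns

w = sum(np.outer(p, p) for p in memories) / n          # Hebbian weights
np.fill_diagonal(w, 0)

removed = [0, 5, 17]                                    # simulated lesion
w[removed, :] = 0                                       # silence all incoming
w[:, removed] = 0                                       # and outgoing links

cue = memories[0].copy()
cue[:4] *= -1                                           # partially corrupted cue
for _ in range(20):                                     # asynchronous recall
    for i in rng.permutation(n):
        cue[i] = 1 if w[i] @ cue >= 0 else -1

print(np.mean(cue == memories[0]))    # fraction of the memory still recovered
```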

For example, one model for understanding the processes involved in working memory is the Baddeley and Hitch (1974) multi-component model, which states that working memory operates via a system of “slave systems” and a central controller that supervises the transmission and coordination of information (Repovš & Baddeley, 2006).

Despite understanding, to a certain extent, how memory works and is retrieved, it is still unknown what processes combine it into what we know of as working memory.

The Hopfield model helps to resolve this issue by presenting a “rough sketch” of a neural network, making it possible to examine what processes may act on the individual memory vectors to produce present-day learning mechanisms.

One example of this process at work can be seen in the astronomical charts and models that are used to represent the present day solar system.

While they are not 100% accurate in showing how the planets move, they do give a rough approximation of positions and processes, thereby enabling a better understanding of the process as a whole.

The same can be said of Hopfield networks: researchers are aware that they are not a 100% accurate method of understanding the complexities of neural networks and the processes that go into them.

However, by gaining a rough idea of how such mechanisms work in the first place, researchers can use those discoveries to build more accurate models and theories regarding the means and methods of human learning and memory formation.

Inherent shortcomings

The inherent shortcoming of such a network, though, lies in the fact that intrusions can, and often do, occur; as a result, it cannot really be stated that Hopfield networks mirror the associative memory mechanisms of the human brain.

On the other hand, studies such as that by Liu, Huang and Chen (2012) attempt to explain such intrusions by noting that even in human memory, the retrieval mechanisms are not 100% accurate, with the brain “filling in”, so to speak, the apparent gaps that occur.

It is this “filling in” process that Liu, Huang and Chen (2012) associate with the intrusions within Hopfield networks, as the network attempts to reconcile the initial image it had to work with and the jumble that came about as it was processed through the network.

The end result is a kind of “filled in” image that reflects the network trying to fill in the gaps with whatever information it had available. Thus, for Martinelli (2010), the Hopfield network is an accurate representation of a “primitive” associative memory network.
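
This kind of “intrusion” can be illustrated with a small sketch: store a few memory vectors and start the network from an ambiguous cue that mixes them, and the network can settle on a blended state that matches none of the stored memories exactly, which is its analogue of filling in the gaps with whatever information it has. The network size, patterns, and cue below are illustrative assumptions, not details from Liu, Huang and Chen (2012).

```python
# Illustrative sketch of "intrusions": a Hopfield network started from an
# ambiguous cue may settle on a blend of stored memories rather than any
# single one. Patterns and sizes are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n = 50
patterns = rng.choice([-1, 1], size=(3, n))       # three random memories

w = sum(np.outer(p, p) for p in patterns) / n     # Hebbian weights
np.fill_diagonal(w, 0)

# Start from a cue that mixes all three memories (majority vote per unit).
state = np.sign(patterns.sum(axis=0)).astype(int)
for _ in range(20):                               # asynchronous updates
    for i in rng.permutation(n):
        state[i] = 1 if w[i] @ state >= 0 else -1

overlaps = patterns @ state / n                   # similarity to each memory
print(overlaps)   # often no single overlap reaches 1.0: a blended "intrusion"
```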

However, it should also be noted that the degradation of information in the Hopfield network can also be set against models such as that of Ericsson and Kintsch (1995), which explains that all individuals utilize skilled memory in everyday tasks; most of these memories, however, are stored in long-term memory and then subsequently retrieved through various forms of retrieval mechanisms (Martinelli, 2010).

When these memories are retrieved there is no degradation; in fact, the act of daily retrieval actually reinforces the memory.

The Hebb learning rule attempts to explain this by stating that learning (as seen in humans or in the case of the Hopfield network) occurs as a direct result of “weights” strengthening the retrieval mechanism.

Thus, over time and with repetition, the accuracy of a retrieved image improves. The same holds for Hopfield networks, where repeated storage and retrieval of the “memory” results in a more accurate image.
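
For reference, the Hebbian storage rule behind this “weight strengthening” is commonly written as below; the notation (N units and P stored patterns with entries of +1 or −1) is shorthand introduced here for illustration rather than something drawn from the cited studies. Each additional presentation of a pattern adds another identical term to the sum, which is one way of reading the claim that repetition strengthens the corresponding weights.

```latex
w_{ij} \;=\; \frac{1}{N} \sum_{\mu=1}^{P} x_i^{(\mu)} x_j^{(\mu)}, \qquad w_{ii} = 0
```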

Hopfield Networks and Learned Behavior

Based on the article “Extended Hopfield Network for Sequence Learning: Application to Gesture Recognition” by Maurer et al., it can be seen that Hopfield networks can be used to model what is known as “learned action”.

Learned action is described as a set of motions acquired through observation and mimicry (e.g., shooting a basketball or swinging a baseball bat).

However, it must be questioned whether Hopfield networks are an accurate representation of the brain’s learning behavior or merely a vague representation of how information is retrieved and translated into action.

People often think of memory as a collection of neurons and synapses working in conjunction with each other to record pertinent information on a daily basis, yet few consider the way in which memory can be compared to a library, where information is stored, recorded, and categorized based on its type and attributes.

In the case of learned behavior, the Hopfield model is applicable as a means of understanding how the brain works if we assume that memory vectors exist within the human mind and underlie the mechanism by which memories are retrieved.

For example, a memory vector can be considered a node that connects other memories together and is used by a central controller to create a distinct action or image.

The action of turning on a faucet can thus consist of several nodes, from which aspects related to vision, mechanical action, and grasping are derived and combined by the central controller to produce that motion.

This can be seen in Hopfield networks, wherein multiple nodes act together to reproduce the information or image that was input into them.

One way in which researchers have attempted to show that the Hopfield model accurately captures the learning behavior of the human brain can be seen in studies such as that of Popescu et al. (2012). They explain that memory models such as Ericsson and Kintsch’s show it would be impossible to “hold”, so to speak, all memories within our working memory; rather, individuals hold only a few concepts related to a task within their working memory and then use those as indicators to retrieve the rest of the information from long-term memory.

As such, the nodes within the Hopfield model work in much the same way as real-life memory retrieval mechanisms, in which what we consider learned behavior is the result of information combined from various nodes, which in turn yields the memory being sought.

Unfortunately, studies such as those by Liu et al. (2011) indicate that while the Hopfield model may seem to be an accurate model of how learning mechanisms work over time, it still fails to properly show the connection between storage and retrieval.

The input mechanism of the Hopfield network, however, is not the same as what is present in the human mind. The nodes in the model essentially receive information from different sources and attempt to create an approximate “whole” from the collected information.

This action does not explain how the memory vectors within the human mind know how to store the correct kind of information (Liu et al., 2011).

Despite this, Liu et al. (2011) do state that Hopfield networks are capable of helping us understand how memories are learned and reinforced, but not necessarily the mechanisms that enable them to be stored in a variety of possible storage locations.

Reference List

Hsu, W. (2012). Application of competitive Hopfield neural network to brain-computer interface systems. International Journal of Neural Systems, 22(1), 51-62.

Liu, Y., Huang, Z., & Chen, L. (2012). Almost periodic solution of impulsive Hopfield neural networks with finite distributed delays. Neural Computing & Applications, 21(5), 821-831.

Liu, W., Fu, C., & Hu, H. (2011). Global exponential stability of a class of Hopfield neural networks with delays. Neural Computing & Applications, 20(8), 1205-1209.

Martinelli, G. (2010). A Hopfield neural network approach to decentralized self-synchronizing sensor networks. Neural Computing & Applications, 19(7), 987-996.

Menezes, R. R., & Monteiro, L. L. (2011). Synaptic compensation on Hopfield network: implications for memory rehabilitation. Neural Computing & Applications, 20(5), 753-757.

Popescu, D., Amza, C., Lăptoiu, D., & Amza, G. (2012). Competitive Hopfield Neural Network Model for Evaluating Pedicle Screw Placement Accuracy. Strojniski Vestnik / Journal of Mechanical Engineering, 58(9), 509-516.

Repovš, G. G., & Baddeley, A. A. (2006). The multi-component model of working memory: Explorations in experimental cognitive psychology. Neuroscience, 139(1), 5-21.

Sheikhan, M. M., & Hemmati, E. E. (2011). High reliable disjoint path set selection in mobile ad-hoc network using Hopfield neural network. IET Communications, 5(11), 1566-1576.
