Perceptual Processing and Its Significance in Cognitive Psychology
Cognitive psychology is the branch of psychology that investigates how people think. It examines the characteristics of perception, thinking, memory, and language, helping researchers understand the brain and address psychological issues. Theories of attention and cognition therefore fall within cognitive psychology.
Understanding Bottom-Up and Top-Down Processing
Bottom-up processing begins by gathering sensory information from the environment and forms perceptions from the most recent sensory input. Top-down processing, by contrast, evaluates incoming information in light of prior knowledge, experiences, and expectations, which drive it to construct perceptions of fresh inputs (Wang et al., 2019).
Contrary to bottom-up accounts, in which the stimulus shapes perception, prior knowledge is vital here (Wang et al., 2019). In bottom-up processing, perceptions are based only on fresh stimuli from one’s present external environment, whereas in top-down processing, perception is guided by what one already knows about that world (Wang et al., 2019).
Bottom-up processing is also known as data-driven processing: perceptions are built from sensory input, and processing begins with external stimuli (Abid et al., 2022). Bottom-up processes transmit neural codes of sensory information to ever-higher levels of complexity (Kumar et al., 2021). When this succeeds, the correct word representation is activated in long-term memory, making the semantic information related to that word accessible (Campos et al., 2019). Top-down factors, in turn, direct how words are perceived by bringing language-related knowledge and experience to bear.
Contrasting Theories and Empirical Evidence
It is not well understood how bottom-up and top-down processes work together over the timeline of word recognition. Bottom-up processes develop the sensory signal and reflect the steps leading to the recovery of a word’s mental representation, or lexical access (Abid et al., 2022). Other researchers adopt this framework, along with the notion of local or global “contrast” as a key term for defining the prominent aspects of a scene (Abid et al., 2022).
Yet the function of top-down processes is unclear. They may be delayed and matter only for mental activities after lexical access, or they may affect early lexical processing quickly and influence how words are identified (Cristea et al., 2021). Human brains, it seems, do not willingly reveal the processes that underlie visual word recognition.
Visual Recognition and Attentional Selection
By combining the biased-choice model for single-stimulus recognition with a choice model for selection from multi-element displays in a race-model framework, a comprehensive theory of visual recognition and attentional selection is established. The theory is mathematically tractable and specifies the computations required for selection (Mento et al., 2019). It has been used to explain findings from several experimental paradigms, including effects of object integrality on selective reporting; the number and location of targets in focused-attention paradigms; selection criterion and number of distractors in divided-attention paradigms; the delay of the selection cue in partial report; and consistent practice in search tasks.
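The race-model idea can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters, not the cited theory's full formulation: each display element races toward identification with an exponential finishing time whose rate reflects its attentional weight, and only the first few finishers are selected.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy race model of attentional selection: rates are illustrative
# attentional weights, not values from the cited literature.
rates = {"target": 2.0, "distractor_1": 0.5, "distractor_2": 0.5}
capacity = 2  # only the first k finishers reach short-term memory

def run_trial():
    # Each element's finishing time is exponential with mean 1/rate.
    times = {name: rng.exponential(1.0 / rate) for name, rate in rates.items()}
    winners = sorted(times, key=times.get)[:capacity]
    return "target" in winners

p_report = np.mean([run_trial() for _ in range(10_000)])
print(f"P(target selected): {p_report:.2f}")
```

Because the target races with a higher rate, it wins a place among the first two finishers on most trials, which is the race model's account of why high-priority items are reported more often.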
The cognitive process that governs the selection of significant information from the environment is called visual attention (Lindsay, 2020). Unlike the previous theories, this account avoids appealing to background knowledge; instead, it treats visual stimuli as a central component of cognition and perception (Shomstein et al., 2022). Visual attention is crucial for higher-order cognitive skills in humans, since vision is often our dominant sense, and deficits of visual attention are a common sign of many neuropsychiatric and neurological disorders.
As noted above, one crucial role of visual attention is to minimize information overload by selecting the most pertinent information (Shomstein et al., 2022). For instance, only a few items (perhaps 1-4) may be recognized by these processes at any given moment. It follows that you cannot read two streams of text at once, even if the print is large enough to do so without violating acuity restrictions.
Selective attention mechanisms pass only a limited fraction of the visual input on to later, low-capacity processes. Visual selective attention is a spatiotemporal phenomenon: you attend first to one stimulus and then move on to another. In general, “bottom-up” stimulus-based factors and “top-down” goal-driven factors together determine how attention is distributed.
Visual search tasks, which require observers to find a target object among a variety of distractor items, have been used extensively to study how these components interact. When the display remains visible until a response is made, reaction time (RT) is the measurement of greatest interest (Hervig et al., 2022); when the display is flashed briefly, accuracy is crucial. In either scenario, the index of search efficiency is the slope of the function relating the response measure to set size.
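The slope measure can be computed directly. The sketch below fits a line to hypothetical mean RTs at three set sizes (the numbers are illustrative, not data from the cited studies); the slope expresses the extra search time per added display item.

```python
import numpy as np

# Hypothetical reaction times (ms) for a visual search task at
# three display set sizes; values are illustrative only.
set_sizes = np.array([4, 8, 16])
mean_rts = np.array([520.0, 610.0, 790.0])  # target-present trials

# Search efficiency is indexed by the slope of the RT x set-size
# function: milliseconds of extra search time per added item.
slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
```

A shallow slope (near 0 ms/item) indicates efficient, "pop-out" search, whereas a steep slope indicates an inefficient, item-by-item search.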
Our brain’s capacity to attend to two separate stimuli at once and respond to the various demands of our environment is known as divided attention. Divided attention, a form of simultaneous attention, enables us to process several information sources and perform multiple activities at once (Castro et al., 2019). There are limits, however, to how well we can multitask and attend to different inputs.
Experiments show that declines in attentional skills lead to poorer performance on tasks requiring divided attention (Castro et al., 2019). Dividing your attention lowers the efficiency with which you accomplish these tasks, and you will almost certainly perform worse. The term “interference” describes situations in which a person finds it difficult to attend to two stimuli at once (Castro et al., 2019).
Interference occurs because the brain can process only a limited quantity of information. According to several studies, observers may use cues to predict when a target will appear, and these cues produce both facilitation and inhibition (Castro et al., 2019). The effects of temporal and spatial cues appear to be roughly additive.
Visual attention also plays a significant role in higher-level cognition, especially in humans, whose dominant sense is vision. Studies of visual attention processing have increased markedly in recent years, particularly regarding its underlying mechanisms, and recent technological advancements encourage this rising interest (Bowling et al., 2019). Transcranial magnetic stimulation and transcranial direct-current stimulation are non-invasive techniques used to alter the excitability of cortical tissue and examine the cognitive functioning of various brain areas (Bowling et al., 2019). At the same time, despite the rising interest in the subject and the development of new methodologies, many questions about the mechanistic underpinnings of visual attention remain unanswered.
Template Matching
Template matching is the process through which the mind recognizes objects by comparing them to a stored class of mental representations. The idea is that the mind holds a massive database of images against which visual input can be compared (Wang et al., 2021). The human perceptual system responds to specific information from the visual and auditory senses (Picon et al., 2019). When two objects “match,” they are identified as the same thing. Humans are excellent at identifying clear boundaries (Wang et al., 2021).
An edge is an area with a noticeable shift in brightness (while texture, color, and so on may remain the same). The eye receives visual information through light waves, which are filtered and amplified. These processes can be described quantitatively, but they may also unfold in complex ways (Wang et al., 2021). In the end, the two sides of the edge differ noticeably in brightness. Furthermore, experiments show that perception rests in part on inference, as familiar visual illusions demonstrate (Picon et al., 2019). Matching thus implies that previous experiences form the basis of cognitive processing.
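The comparison-to-a-stored-pattern idea can be sketched computationally. Below is a minimal, assumption-laden version of template matching: a small stored "template" (here, a vertical edge) is slid across a toy image and each position is scored by normalized cross-correlation; the image, template, and function name are illustrative, not taken from the cited work.

```python
import numpy as np

def match_template(image, template):
    # Slide the template over the image; score each position by
    # normalized cross-correlation (1.0 = perfect match).
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y+th, x:x+tw] - image[y:y+th, x:x+tw].mean()
            denom = np.sqrt((patch**2).sum() * (t**2).sum())
            score = (patch * t).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

image = np.zeros((6, 6))
image[2:4, 3:5] = 1.0                  # a bright 2x2 "object"
template = np.array([[0.0, 1.0],
                     [0.0, 1.0]])      # stored vertical-edge pattern
pos, score = match_template(image, template)
print(pos, round(score, 2))            # best match at the object's left edge
```

The highest score lands where the stored edge pattern lines up with the dark-to-bright boundary, mirroring the claim that recognition works by finding the stored representation most similar to the input.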
Attention Schema Theory
According to the attention schema theory (AST), the brain creates an attention schema to support the endogenous control of attention (Pidduck et al., 2020). Growing behavioral data suggest that such a model of attention may exist. AST, first proposed in 2011, holds that the brain constructs a descriptive and prescriptive model of attention primarily to govern its own attention (Pidduck et al., 2020).
Following this idea, the endogenous control of attention should be impaired when the model of attention is disrupted or makes mistakes, since the model gives endogenous control a strong advantage (Self et al., 2019). When many nearby neurons fire simultaneously, their electrical activity is powerful enough to be recorded by electrodes placed on the surface of the scalp (Yu & Zhu, 2019). The resulting electroencephalogram (EEG) is the sum of all the signals reaching an electrode location.
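The summation point can be demonstrated with a toy simulation (parameters are illustrative, not physiological values): each simulated "neuron" contributes a tiny shared oscillation plus independent noise, and only the summed signal at the electrode tracks the oscillation clearly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of why scalp EEG requires many neurons firing together:
# the synchronous component sums linearly across neurons, while
# independent noise grows only as the square root of their number.
n_neurons, n_samples = 1000, 200
t = np.linspace(0, 1, n_samples)
shared = 0.02 * np.sin(2 * np.pi * 10 * t)   # tiny synchronous 10 Hz signal

contributions = shared[None, :] + rng.normal(0, 0.5, (n_neurons, n_samples))
single = contributions[0]                    # one neuron: buried in noise
electrode = contributions.sum(axis=0)        # "EEG": sum over all neurons

corr_single = np.corrcoef(single, shared)[0, 1]
corr_sum = np.corrcoef(electrode, shared)[0, 1]
print(f"single neuron r={corr_single:.2f}, summed electrode r={corr_sum:.2f}")
```

The single-neuron trace correlates only weakly with the underlying rhythm, while the summed electrode signal recovers it, which is why simultaneous firing of many nearby neurons is needed for a recordable EEG.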
Ethical Considerations in Attention Research
Empirical research increasingly supports the ethical implications and benefits of mindfulness practices. Nevertheless, the functional brain processes behind these advantages have not yet been thoroughly analyzed. Some authors contend that mindfulness is best characterized as a “bottom-up” emotion-regulation approach, while others argue it should be regarded as a “top-down” emotion-regulation method (Gordon et al., 2019). The many variations in how mindfulness is defined and applied may explain these current disparities.
References
Abid, N., Khan, A. M., Shujait, S., Chaudhary, K., Ikram, M., Imran, M., Haider, J., Khan, M., Khan, Q., & Maqbool, M. (2022). Synthesis of nanomaterials using various top-down and bottom-up approaches, influencing factors, advantages, and disadvantages: A review. Advances in Colloid and Interface Science, 300, 102597. Web.
Bowling, J. T., Friston, K. J., & Hopfinger, J. B. (2019). Top‐down versus bottom‐up attention differentially modulates frontal-parietal connectivity. Human Brain Mapping, 41(4), 928–942. Web.
Campos, A. C., Pinto, P., & Scott, N. (2019). Bottom-up factors of attention during the tourist experience: An empirical study. Current Issues in Tourism, 23(24), 3111–3133. Web.
Castro, S. C., Strayer, D. L., Matzke, D., & Heathcote, A. (2019). Cognitive workload measurement and modeling under divided attention. Journal of Experimental Psychology: Human Perception and Performance, 45(6), 826–839. Web.
Cristea, I. A., Vecchi, T., & Cuijpers, P. (2021). Top-down and bottom-up pathways to developing psychological interventions. JAMA Psychiatry, 78(6), 593. Web.
Gordon, N., Tsuchiya, N., Koenig-Robert, R., & Hohwy, J. (2019). Expectation and attention increase the integration of top-down and bottom-up signals in perception through different pathways. PLOS Biology, 17(4). Web.
Hervig, M. E.-S., Toschi, C., Petersen, A., Vangkilde, S., Gether, U., & Robbins, T. W. (2022). Theory of visual attention (TVA) applied to rats performing the 5-choice serial reaction time task: Differential effects of dopaminergic and noradrenergic manipulations. Psychopharmacology, 240(1), 41–58. Web.
Kumar, N., Salehiyan, R., Chauke, V., Joseph Botlhoko, O., Setshedi, K., Scriba, M., Masukume, M., & Sinha Ray, S. (2021). Top-down synthesis of graphene: A comprehensive review. FlatChem, 27, 100224. Web.
Lindsay, G. W. (2020). Attention in psychology, neuroscience, and machine learning. Frontiers in Computational Neuroscience, 14. Web.
Mento, G., Scerif, G., Granziol, U., Franzoi, M., & Lanfranchi, S. (2019). Dissociating top-down and bottom-up temporal attention in Down syndrome: A neuroconstructive perspective. Cognitive Development, 49, 81–93. Web.
Neisser, U. (1967). Cognitive psychology. Appleton-Century-Crofts.
Picon, E., Dramkin, D., & Odic, D. (2019). Visual illusions help reveal the primitives of number perception. Journal of Experimental Psychology: General, 148(10), 1675–1687. Web.
Pidduck, R. J., Busenitz, L. W., Zhang, Y., & Ghosh Moulick, A. (2020). Oh, the places you’ll go: A schema theory perspective on cross-cultural experience and entrepreneurship. Journal of Business Venturing Insights, 14. Web.
Shomstein, S., Zhang, X., & Dubbelde, D. (2022). Attention and platypuses. WIREs Cognitive Science, 14(1). Web.
Wang, S., Wang, H., Zhou, Y., Liu, J., Dai, P., Du, X., & Abdel Wahab, M. (2021). Automatic laser profile recognition and fast-tracking for structured light measurement using deep learning and template matching. Measurement, 169, 108362. Web.
Wang, W., Shen, J., Cheng, M.-M., & Shao, L. (2019). An iterative and cooperative top-down and bottom-up inference network for salient object detection. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Web.
Yu, Z., & Zhu, Q. (2019). Schema theory-based flipped classroom model assisted with technologies. International Journal of Information and Communication Technology Education, 15(2), 31–48. Web.