Ever since scientists began to inquire into the fundamental essence of the surrounding reality, it has become increasingly clear to them that the process of attaining new knowledge in this respect cannot be considered thoroughly objective. The reason is that this process always occurs within the conceptual framework of a particular scientific paradigm, which causes the empirically obtained data to be invariably affected by whatever gnoseological approach happens to be in use. In other words, science is, by definition, concerned not with ‘seeking the truth’, but rather with building informational models that more or less adequately describe the observable emanations of the particular phenomenon in question.
In their turn, these models are expected to serve an essentially utilitarian purpose, as they provide additional momentum to the ongoing socio-technological progress, thereby allowing people to enjoy ever-increasing standards of living. This paper will explore the validity of the above statement at length, while exposing the actual significance of the suggestion that, “Though the world does not change with a change of paradigm, the scientist afterward works in a different world” (Kuhn 2012, p. 121). The paper will also aim to promote the idea that, regardless of how objective a particular scientific paradigm appears to be, it never ceases to remain utterly subjective, as it reflects the genetically predetermined cognitive predispositions of the affiliated scientists.
Until quite recently, it was a commonplace assumption among scientists that the very concept of gnoseology implies the appropriateness of specifically positivist methods for discovering the laws of how the universe actually functions. These methods are concerned with testing, which allows us to categorize the reality’s emanations and, consequently, to gain an in-depth insight into their fundamental essence. As Kuhn noted, “In the field of the empirical sciences, more particularly, he (a scientist) constructs hypotheses, or systems of theories, and tests them against experience by observation and experiment” (1998, p. 11).
Nevertheless, even though the above idea makes perfectly logical sense, it is important to understand that there are a number of clearly discursive overtones to it. That is, the idea reflects the most fundamental premise upon which the methodology of nineteenth-century empirical science used to be based: the assumption that the universe is, in essence, an utterly complex mechanism. One may well be unaware of the sheer complexity of how it functions; nevertheless, the eventual attainment of a comprehensive understanding of virtually all of the universe’s laws is theoretically possible. After all, the positivist paradigm of the universe as a precisely functioning mechanism implies that there are spatial limits to the universe’s phenomenological extrapolations.
However, in light of the revolutionary breakthroughs in physics and biology that took place during the twentieth century, the earlier mentioned positivist paradigm (and its testing-based methodology) could no longer be considered legitimate. The reason for this is quite apparent: by the beginning of the twentieth century, a number of objective preconditions had emerged for people’s understanding of the universe to undergo an utterly drastic transformation.
As Kuhn pointed out, “Scientific revolutions are inaugurated by a growing sense… that an existing paradigm has ceased to function adequately in the exploration of an aspect of nature to which that paradigm itself had previously led the way” (1998, p. 79). After all, as was implied earlier, the positivist paradigm reflects the assumption that, due to being thoroughly objective, physical reality can be adequately assessed as a ‘thing in itself’. However, as today’s physicists are well aware, this is far from being the case. For example, according to the so-called Uncertainty Principle (formulated in 1927 by Werner Heisenberg), which contributed to the creation of the paradigm of Quantum Physics, it is impossible for us to know simultaneously both an elementary particle’s exact position and its momentum. The reason for this is that, according to the principle, the more precisely the particle’s position is determined, the greater the uncertainty about its momentum becomes, and vice versa (Heelan 1975, p. 122).
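For reference, the principle’s standard modern formulation (stated here in terms of position and momentum rather than the ‘location and speed’ of the essay’s wording) is:

```latex
\Delta x \, \Delta p \;\geq\; \frac{\hbar}{2}
```

where $\Delta x$ is the uncertainty in the particle’s position, $\Delta p$ the uncertainty in its momentum, and $\hbar$ the reduced Planck constant. The product of the two uncertainties is bounded from below, so driving one toward zero forces the other to grow without limit.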
What this means is that the reason why researchers cannot possess complete information about the particle in question (its momentum and position) is not that there is any insufficiency in the methodology by which they extract the data, but that there is no such information to be found a priori. In its turn, this implies that the universe’s workings are unpredictable because the universe itself is composed of thoroughly unpredictable ‘bricks’: atoms.
The above indicates that the positivist paradigm of scientific inquiry, concerned with subjecting the researched phenomenon to a series of tests, is innately fallacious. To illustrate the validity of this suggestion, one can use the ‘allegory of lamps’. Imagine a hypothetical machine with two lamps, one blue and one red. The scientist faces the challenge of defining how these lamps react to the switching of the ‘on/off’ button by means of conducting a series of tests. While conducting the test-based experiment, this scientist switches the button 100 times, only to observe that each switch lights up only the blue lamp.
The scientist proceeds to press the button 200, 500, and 1000 times. The effect remains the same: only the blue lamp reacts to the button being switched. As a result, the scientist concludes that the observed effect is indeed thoroughly objective and that, even if conducted elsewhere by other scientists, the experiment will continue to yield the same results. However, there may well be a logical apparatus inside the machine that lights up the red lamp on every billionth switch of the ‘on/off’ button. If this is indeed the case, the empirically tested ‘law’ that only the blue lamp reacts to changes in the button’s position will no longer be considered scientifically legitimate.
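The allegory can be sketched as a short simulation. The machine, its billionth-press rule, and every name below are hypothetical, introduced purely for illustration; a smaller period is used in the second run only so the hidden rule reveals itself within a finite demonstration.

```python
def make_lamp_machine(red_every=1_000_000_000):
    """A hypothetical machine: lights the red lamp on every
    `red_every`-th press of the on/off button, and the blue lamp
    on all other presses."""
    count = 0

    def press():
        nonlocal count
        count += 1
        return "red" if count % red_every == 0 else "blue"

    return press

# The scientist's finite experiment: 1000 presses, every one blue,
# so induction suggests the 'law' that only the blue lamp ever lights.
press = make_lamp_machine()
print(set(press() for _ in range(1000)))  # {'blue'}

# Yet no finite run can rule out the hidden rule. With a shorter
# period, the inductively inferred 'law' fails on the 1000th press:
press = make_lamp_machine(red_every=1000)
print([press() for _ in range(1000)][-1])  # 'red'
```

The point of the sketch is that both machines are observationally identical over any run shorter than the hidden period, so no amount of testing alone can certify the generalization.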
The ‘allegory of lamps’ exposes the inconsistency of the positivist assumption that the physically observed workings of the universe are thoroughly consistent with what the surrounding reality really is. After all, according to the paradigm of Quantum Physics, there is very little reason not to believe in the theoretical possibility of what people tend to perceive as ‘miracles’ denied by science. For example, in full accordance with this paradigm, if repeated billions upon billions of times, the experiment of heating a glass of water inside a microwave oven may eventually result in the contained water turning into ice. The reason for this is that, due to being composed of unpredictably behaving atoms, there is a theoretical possibility for the macro-objects in question to react to externally applied stimuli in a manner that violates the laws of Newtonian Physics.
Does it mean that the paradigm of Newtonian Physics can no longer be considered scientifically legitimate? Most definitely not. In this respect, one can only agree with Kuhn, who suggested that, “Relativistic dynamics cannot have shown Newtonian dynamics to be wrong, for Newtonian dynamics is still used with great success by most engineers and, in selected applications, by many physicists” (1998, p. 84).
What this means is that, contrary to the test-based positivist paradigm of gnoseology, science does not really seek the ‘objective truth’. Rather, it builds models that describe what is assumed to be the truth within the boundaries of a particular scientific discourse, at a particular point in time. Examples include the geocentric model of Ptolemy, the heliocentric model of Copernicus, Niels Bohr’s model of the atom, Albert Einstein’s relativistic model of the universe, and Dmitri Mendeleev’s Periodic Table of Elements. All of these are nothing but so-called ‘informational models’.
After all, nature knows nothing about Bohr’s model of the atom, for example, especially given that actual atoms are anything but what Bohr thought them to be. Nature knows nothing about the Periodic Table of Elements; had it been otherwise, there would be no need for geologists, because the location of the most sought-after natural resources could be determined mathematically rather than empirically. Nature simply exists, and scientists are here to describe it with more or less adequate informational models.
The extent of every particular model’s ‘truthfulness’ is determined by the measure of its utilitarian practicality: the more capable it is of helping people to address life’s challenges, the more it deserves to be referred to as thoroughly truthful, and vice versa. This alone implies the sheer subjectivity of just about any scientific method, regardless of how logically sound and objective it may appear.
However, the earlier mentioned subjectivity does not only reflect the fact that, contrary to what the positivist paradigm implies, the universe cannot be conceptualized in terms of a clock-mechanism; such a state of affairs is also predetermined by the evolutionarily defined cognitive workings of one’s psyche. The reason for this is that, within the conceptual framework of just about any scientific paradigm, there always exists a number of subjective axioms, which are used to ensure the logical consistency of the paradigm-related empirical observations. As Stam noted, “Observation generates empirical facts that are explained at a higher level by empirical generalizations that are in turn explained by theories which contain unobservables… Any given theory is embedded in a web of collateral assumptions” (1996, p. 25).
Even though there is a theoretical possibility for a paradigm’s unobserved axioms to be logically substantiated, this can only be achieved by means of rationalizing them at a qualitatively higher level, outside the concerned theory’s conceptual framework. However, given that it is specifically the faculty of human language that represents the discursive ‘super-system’ within which the validity of the unobservable axioms can be proven in the first place, this means one thing: regardless of their actual subject matter, scientific theories and paradigms necessarily reflect the strongly subjective worldviews of those who created them.
Because just about any scientific theory can also be conceptualized in terms of a mathematical function with its own domain of definition, it follows that the measure of a scientific theory’s objectiveness can only be explored in regard to how effectively or ineffectively it helps scientists to address a specific research-related task, just as was suggested earlier. It also implies that the extent of a scientific theory’s accuracy has ambivalent qualities; that is, the theory can be simultaneously accurate (while addressing a research task within the boundaries of its own discursive paradigm) and inaccurate (while addressing a research task within the boundaries of a non-affiliated paradigm). Once a particular paradigm-related hypothesis proves capable of predicting the qualitative subtleties of a particular phenomenon’s spatial transformation, it attains the status of a legitimate scientific theory.
What has been said earlier provides a certain insight into the actual significance of the suggestion mentioned in the Introduction. Apparently, the author wanted to emphasize the fact that, in full accordance with the conceptualization of just about any scientific paradigm as an informational model, one’s paradigmatic definitions of the surrounding reality’s emanations have no de facto effect on the subject matter in question.
For example, back in ancient times people used to think of the Sun as a god. Nowadays, however, even schoolchildren are fully aware that the Sun is nothing but a huge ball of exceedingly hot plasma, the high temperature of which is maintained by the ongoing transformation of hydrogen into helium inside the star’s core. Yet, just as it did in the time of antiquity, ‘today’s’ Sun shines in the manner it always has; people’s contemporary knowledge of what the Sun is all about has absolutely no effect on how the star actually ‘works’. Nevertheless, the fact that today’s people know a great deal about how the Sun actually functions presupposes that, as opposed to the time when it was considered a deity, they indeed live in an entirely different world.
After all, the paradigm of Nuclear Physics, within the framework of which today’s scientists assess the processes that take place inside a star, has also brought about nothing short of a complete ‘overhaul’ of civilization. Thus, it would be much more appropriate to suggest that the change of a paradigm changes the world not only for scientists but for ordinary individuals as well.
This suggestion implies that there can be no ‘wrong’ paradigms, by definition, even those affiliated with utterly anti-scientific assumptions. For example, in the early nineteenth century, it was commonly believed that the temperature of every physical object reflects the amount of caloric (supposedly a substance) that the object contains. Needless to say, nowadays this idea can withstand no criticism whatsoever: the temperature of every physical object is nothing but a measure of its internal energy, concerned with the movement of molecules. Nevertheless, the obviously fallacious Caloric Theory allowed physicists to derive a number of formulas that even today are commonly applied when it comes to measuring the qualitative aspects of heat transfer.
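One representative example, chosen here for illustration rather than named in the source, is the basic calorimetric relation, which took shape during the caloric era and remains in routine use:

```latex
Q = m \, c \, \Delta T
```

where $Q$ is the heat transferred, $m$ the mass of the body, $c$ its specific heat capacity (a notion itself developed within caloric-era calorimetry), and $\Delta T$ the resulting change in temperature. The formula works just as well whether one pictures $Q$ as a flow of caloric fluid or, as today, as a transfer of internal energy.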
Hence, an undeniable paradox: as practice indicates, many conceptually erroneous theories and paradigms were nevertheless able to result in the conceptualization of thoroughly valid formulas of great practical value. Therefore, instead of referring to the emergence of a new paradigm as something that exposes the outdatedness of the one it aims to replace, one would be much better off regarding it as something that naturally capitalizes on the latter. According to Kuhn, “The history of science records a continuing increase in the maturity and refinement of man’s conception of the nature of science” (1998, p. 91). In light of what has been said earlier, one can come up with the following set of observations with regard to the actual significance of the term paradigm, and with regard to what causes paradigms to replace each other as time goes on:
The term paradigm refers to a discursive framework within the conceptual boundaries of which scientists tackle a particular issue of interest. It reflects the linear course of history, thereby exposing the erroneousness of culturally relativist theories that promote the idea of history being essentially cyclical.
The replacement of a particular paradigm with another is predetermined by a variety of factors. However, the most important of them is people’s deterministic tendency to provide definitions for the surrounding reality’s manifestations. The reason for this is that, while doing so, people are able to subjectify themselves within the natural and social environment, which in turn enables them to grow ever more ‘existentially sovereign’. Thus, just about any scientific paradigm can well be described as an indication that its promoters are endowed with the so-called ‘Faustian’ mentality, which motivates the concerned individuals’ striving to ensure their dominance within the affiliated social niche.
The measure of just about any paradigm’s scientific validity reflects its ability to empower people, in the utilitarian sense of the word. This explains the phenomenon of some scientific paradigms remaining simultaneously both conceptually fallacious and thoroughly functional, which in turn suggests their overall discursive legitimacy. As the saying goes, whatever is stupid but works is not stupid.
Regardless of what a particular paradigm’s subject matter happens to be, it never ceases to remain thoroughly phenomenological. That is, a paradigm is nothing but the by-product of some individuals’ ability to operate with utterly abstract categories. This, of course, presupposes that a paradigm cannot exist as a ‘thing in itself’, outside the workings of one’s consciousness.
As time goes on, the very conceptual premises of freshly emerged scientific paradigms are likely to grow progressively counterintuitive. The validity of this statement can be illustrated with regard to the emergence of the paradigm of Quantum Physics, the main conventions of which simply do not make any rational sense, due to being inconsistent with how people go about constructing the dialectical links between causes and effects.
The line of argumentation provided earlier, as to what should be considered the significance of the statement by Kuhn (contained in the assignment), does correlate with the paper’s initial thesis. Apparently, it is indeed thoroughly appropriate to think that, despite the fact that the terms ‘paradigm’ and ‘subjectivity’ can be considered rather synonymous, this does not affect a particular paradigm’s objective ability to serve the cause of progress. This once again implies the legitimacy of the initially mentioned thesis.
Heelan, P 1975, ‘Heisenberg and radical theoretic change’, Journal for General Philosophy of Science, vol. 6, no. 1, pp. 113-136.
Kuhn, T 1998, ‘Logic of discovery or psychology of research?’, in M Curd, J Cover & C Pincock (eds), Philosophy of science: central issues, W.W. Norton and Company, New York/London, pp. 79-93.
Kuhn, T 1998, ‘The nature and necessity of scientific revolutions’, in M Curd, J Cover & C Pincock (eds), Philosophy of science: central issues, W.W. Norton and Company, New York/London, pp. 11-17.
Kuhn, T 2012, The structure of scientific revolutions, University of Chicago Press, Chicago.
Stam, H 1996, ‘Theory and practice’, in C Tolman (ed.), Problems of theoretical psychology, Captus Press, North York, pp. 24-33.