Competitor Analysis: Conceptual Discussion Essay

Conceptual and Theoretical Approaches

Almost every company has, at least once in its history, analyzed its competitors' marketing channels, product assortment, and service conditions. Unlike such a one-off review, competitive intelligence is a long-term, systematic, planned, and purposeful activity: the study of competitors with the help of open sources of information. The information obtained is used to modernize the company's operations, for example through changes in internal processes, adjustments to the assortment, revised service conditions, and better-coordinated advertising. Competitive intelligence becomes particularly important during economic crises. Such a crisis is typically characterized by a decline in the GDP of the country or countries involved, rising unemployment, and falling living standards. With declining real disposable income, consumers begin to buy less and weigh purchase terms more carefully. In order not to lose customers, companies in the same market are forced into fierce competition, offering what others do not: cheaper and more convenient delivery, shorter delivery times, and better promotions.

All these innovations need to be launched quickly, so companies that conduct competitive research continuously find themselves in a better position. Competitive intelligence is not industrial espionage: it involves collecting and analyzing information that the competitor does not conceal and makes available to the public. Industrial espionage, by contrast, is an illegal attempt to obtain information that constitutes a commercial, tax, or banking secret (Article 183 of the Criminal Code). A company that tries to obtain "classified" data by illegal methods (such as wiretapping, stealing essential documents, or threatening a competitor's employees) may be held criminally liable. If the information comes from open databases, voluntary surveys, or widely used services, however, the chances of a court judgment are low. Competitive intelligence can be divided into several central areas: financial, structural, strategic, and technical.

The financial area covers data on the competitor's capital and everything that concerns pricing, revenues, and income, as well as employees' salaries, terms of payment to contractors, and other financial flows within the company. Product pricing is vital because cost is one of the most critical selection criteria for the consumer. The structural area examines everything that concerns the organizational structure and the peculiarities of the company's functioning: which departments the company consists of, who makes up the management team and what characterizes it, the history of mergers and acquisitions, whether a distinctive corporate culture has been implemented, what the personnel policy looks like, and much more.

The strategic area examines the company's goals and strategy, both past and present. Since strategy directly affects the competitor's main actions in the market, it is crucial to understand, as clearly as possible, the company's development plan for the coming years from the product, service, and marketing points of view. Strategic research includes an in-depth study of the competitor's advertising channels: whether it advertises on its own or through an agency, which sources drive traffic to its site and which of them are the most profitable, what strategies and messages it uses in Internet marketing, what its online reputation is, and much more. The technical area identifies scientific advances and manufacturing processes that give a competitor's products or services an advantage; innovations in management policies that increase a company's efficiency may also be investigated.

Results of Competitive Intelligence

With high-quality information collection and analysis, such extensive research can have a myriad of positive effects.

These include: introducing price changes to win consumers over; expanding the company's geography by assessing the capacity of regional markets and competitors' experience; making decisions about cooperation with specific suppliers and counterparties; implementing the most advantageous management tactics, KPIs, incentives, and bonus systems to improve staff performance; changing strategies to attract and retain employees; and mastering new technologies and creating new production processes that enhance product quality and present a decisive advantage to the consumer. These are just a few of the most prominent examples; with continuous operation, a firm can systematically gain many more benefits.

There is little difference among these approaches, since they share the same goal and method of implementation, but they are not completely identical. The main difference between them is the priority of application: each approach operates at a different level of use and has a different duration of effectiveness. I like the continuous approach the most because it can bring results in a fairly short period and show good growth in the early stages. All of these approaches are linked in that they bring qualitative changes to the company's structure at various levels. Each firm obtains the necessary knowledge through these analytical methods, and among them there cannot be a single preferred one. They all work to form one complete structure, and considering each of them separately is not feasible for a number of reasons, the most important being the need to combine them and create a product based on all inputs.

Database Creation Approaches

There are many ways to search and create databases, which form the basis of the activities of a large number of organizations. The main ones are intelligent data mining, precedent-based (case-based) reasoning, and artificial neural networks. All of them touch on data analysis, database search, and data mining, as will be shown below. Each has a number of advantages and disadvantages, which will be considered in the course of this work. Nevertheless, the combination of these methods makes the creation, maintenance, and searching of a database most comfortable and practical, as it relies mainly on the use of artificial intelligence.

Speaking of the first method, data mining, it is fundamental to the whole system, as it is responsible for creating the entire archive from which many benefits can be drawn. The term "information retrieval" was introduced by Calvin Mooers in his 1948 thesis and has been published and used in the literature since 1950. At first, automated information retrieval systems (IRS) were only used to manage the information explosion in the scholarly literature, and many universities and public libraries began to use them to provide access to books, journals, and other documents. IRS became widespread with the advent of the Internet; among the most popular search engines are Google and Yahoo.
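To make the idea of an information retrieval system more concrete, here is a minimal sketch, not of any particular IRS, with invented documents and function names: an inverted index that answers simple keyword queries. Real systems add ranking, stemming, and scale, but the core mapping from terms to documents is the same.

```python
from collections import defaultdict

# A few hypothetical documents standing in for a scholarly catalogue.
documents = {
    1: "competitive intelligence uses open sources of information",
    2: "industrial espionage is an illegal way to obtain trade secrets",
    3: "information retrieval systems index documents for keyword search",
}

def build_inverted_index(docs):
    """Map every term to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Return ids of documents containing all of the given terms."""
    result = None
    for term in terms:
        postings = index.get(term.lower(), set())
        result = postings if result is None else result & postings
    return sorted(result or [])

index = build_inverted_index(documents)
print(search(index, "information"))             # -> [1, 3]
print(search(index, "information", "systems"))  # -> [3]
```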

The main problem associated with this method is its dependence on computer algorithms. They operate over the entire database without an inherent ability to edit it, so responsibility for creating the database rests with the program rather than with human understanding. It becomes evident that this task cannot be left entirely to artificial intelligence, because it is not a substitute for human intelligence. Even if the machine is less prone to simple mistakes, the creation of the basis for all activity cannot be left to it alone. The creation and searching of the database must be done under the supervision of human understanding, with the ability to edit the information received and eliminate unnecessary details. Current technology therefore introduces such a possibility into its systems and allows one to count on a more acceptable result.

The main advantage, in turn, is that it saves a considerable amount of time by removing the need for a long manual search for all the necessary information. Since this process is very time-consuming and labor-intensive, its optimization can have a positive impact on other aspects of work and make the entire work cycle more productive. Thus, while there is a significant disadvantage in the need to filter the information, the method still frees a great deal of energy for more important tasks and creates better opportunities for the growth and functioning of the project.

The second way to maintain databases is through precedent-based reasoning. A precedent is a case that has occurred before and serves as an example or justification for subsequent instances of a similar kind. Case-based reasoning (CBR) is an approach that solves a new problem by using or adapting the solution to a known problem. Typically, such reasoning methods include four main steps (retrieve, reuse, revise, and retain), forming what is known as the precedent-based reasoning cycle, or CBR cycle. The primary purpose of using precedents within intelligent decision-support and expert diagnostic systems for complex objects is to offer the decision-maker a ready solution for the current situation on the basis of precedents that have already occurred in managing the given object or process.

In the field of databases, this method is used because of its similar principle of operation. Its main feature is searching the database by the similarity of spelling or of certain structural characteristics among the stored components; where such similarity exists, it can be concluded that the components themselves share many features. The method is used not only to search an already created database but also in the process of forming it. As mentioned earlier, not all methods of creating databases rely on human thinking, and that also applies here. The main disadvantage associated with this method is, again, its reliance on machine processing: not all computers are capable of correctly interpreting the needs that a human being puts before them. As with the previous method, full functionality requires an operator, that is, a person who is able to filter the data received, since records that share key characters or words do not all fit the query and may fail to meet the requirements. A positive aspect is the convenience of grouping related components, which allows them to be connected into one group for better search; although this too is handled by the computer system, it often produces a positive outcome.
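As a rough illustration of the retrieve and reuse steps of the CBR cycle described above, the sketch below stores hypothetical precedents as attribute dictionaries and returns the solution of the most similar past case. The case base, attribute names, and crude similarity measure are all invented for the example.

```python
# Minimal case-based reasoning sketch: retrieve the most similar past case
# and reuse its solution for a new problem.

cases = [  # hypothetical precedent base
    {"symptom": "overheating", "load": "high", "solution": "reduce duty cycle"},
    {"symptom": "vibration",   "load": "high", "solution": "rebalance rotor"},
    {"symptom": "overheating", "load": "low",  "solution": "check coolant pump"},
]

def similarity(case, query):
    """Count how many query attributes the case matches (a crude measure)."""
    return sum(1 for key in query if case.get(key) == query[key])

def retrieve(case_base, query):
    """Retrieve step of the CBR cycle: pick the closest precedent."""
    return max(case_base, key=lambda case: similarity(case, query))

new_problem = {"symptom": "overheating", "load": "high"}
best = retrieve(cases, new_problem)
print(best["solution"])  # reuse step: proposes "reduce duty cycle"
```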

The third component to be discussed in this paper is artificial neural networks. This method is a mixture of the previous two, since it also relies on machine intelligence and saves time and effort. Artificial neural networks (ANN) are mathematical models, together with their software or hardware implementations, built on the principle of the organization and functioning of biological neural networks: the networks of nerve cells in a living organism.

This notion emerged from the study of the processes occurring in the brain during thinking and from attempts to model these processes; the first such model of the brain was the perceptron. Subsequently, these models began to be used for practical purposes, usually in prediction tasks. In terms of machine learning, a neural network is a particular case of pattern recognition methods, discriminant analysis, clustering methods, and so on. From the mathematical point of view, training a neural network is a multi-parameter nonlinear optimization problem. In terms of cybernetics, neural networks are used in adaptive control tasks and as algorithms for robotics. From the point of view of computer science and programming, the neural network is a way to address the problem of efficient parallelism. And in terms of artificial intelligence, ANN is the basis of the philosophical current of connectionism and the main direction in the structural approach to studying the possibility of building (modeling) natural intelligence with the help of computer algorithms.
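Since the perceptron is named as the first such model, a tiny single-neuron perceptron trained on the logical AND function may help illustrate the idea; the learning rate and number of passes are arbitrary choices for this toy example.

```python
# A single perceptron trained on the logical AND function.

def step(x):
    return 1 if x >= 0 else 0

# Training data: (inputs, target) pairs for AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):  # a few passes are enough for this toy task
    for (x1, x2), target in data:
        output = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - output
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

# After training, the perceptron reproduces the AND truth table.
for (x1, x2), target in data:
    print((x1, x2), step(weights[0] * x1 + weights[1] * x2 + bias))
```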

As can be seen, the differences are quite significant despite the similarity in the principle of operation. In contrast to the methods already mentioned, this one draws more on the workings of human brain activity, modeling the impulses that prompt the most adequate and necessary component. It can therefore be assumed to be free of the previously mentioned disadvantages, which stemmed entirely from dependence on computer technology, because this method involves a person in the process of creating and searching the database. On the other hand, it has other significant disadvantages, such as the high cost of the necessary equipment and the time it takes to form a query. For the method to work fully, decoding systems are needed and all the required queries have to be loaded into the network, which at the current stage of engineering is unfeasible. The method therefore has a place in theory even though it cannot yet be fully realized in practice. Additional time must also be spent on training personnel to use it and on the creation of, or search for, the necessary elements of the database. On the positive side, this method is likely to advance in the future, and its use may become a key feature in the creation of any fully functional database. In addition, it is the most accurate of the methods described, because it involves the human brain, is able to formulate queries correctly, and includes only the components that are right for its purposes.

However, this information provides only a theoretical account, which has no sound basis without factual support, so the method should be considered in practical application. To do this, imagine a fictitious company that specializes in building a virtual library containing books by various authors in all available languages. Such a database will obviously be huge, because a great many books need to fit into one resource. To save effort, it is best to use computer technology, as it allows the team to focus on other tasks. It is not enough to set a search parameter for a keyword that appears in every book, as no such keyword exists; the functionality available through these methods must be expanded. To this end, it is best to use artificial neural networks, as discussed earlier: neural connections can be used to discover hidden sources and materials, which will help in the construction of the entire database.
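The sketch below is a deliberately simplified stand-in for that idea: instead of a neural network, it uses bag-of-words vectors and cosine similarity to find books related to a query even when no single keyword is shared by every title. The catalogue entries and function names are hypothetical.

```python
import math
from collections import Counter

# Hypothetical catalogue entries: title -> short description.
catalogue = {
    "Sea Stories": "adventure novel about sailors and the open sea",
    "Desert Winds": "travel memoir across sand dunes and oases",
    "Ocean Depths": "marine biology guide to life under the sea",
}

def vectorize(text):
    """Represent a text as a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def related(query, top=2):
    """Rank catalogue entries by similarity to the query description."""
    q = vectorize(query)
    ranked = sorted(catalogue.items(), key=lambda kv: cosine(q, vectorize(kv[1])), reverse=True)
    return [title for title, _ in ranked[:top]]

print(related("stories about the sea and sailors"))
# Likely output: ['Sea Stories', 'Ocean Depths']
```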

This move will not only help speed up the creation of the necessary program but can also be used for marketing purposes. Since this method is rarely used and few people know of its existence, using its name in marketing will bring additional profit. Moreover, the appearance of such a powerful competitor in the mobile application market, offering enhanced functionality, will help improve the financial position of the entire company. The use of artificial neural networks can also be drawn on to promote all such products, which will improve the company's reputation in the market.

Standard databases provide little functionality to the average user: search is performed through keyword queries linked by the three Boolean operators AND, OR, and NOT. In their place came databases built on an intelligent interface. Unlike their predecessors, these interfaces can work simultaneously with multiple databases. In addition to the increased workload they can handle, they have more flexible and advanced functionality, which speeds up work on a particular task (Alexander and Kusleika, 2019). For example, when a user enters a phrase, the results are based not only on that phrase but also on options that can somehow be associated with it (Bacardit and Llorà, 2018). In fact, a modern database does not operate solely on the information that was originally loaded into it; it is able to extract further information using data patterns (Improvement of Neural Networks Artificial Output, 2017). Such a structure makes the system easy for ordinary users to learn, increases the speed of performing a given task, and is able to work independently.
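A minimal sketch of the difference between the two interface styles, using invented postings and synonym lists: first, classic Boolean AND/OR/NOT queries over an inverted index; then the same query expanded with associated terms, loosely imitating the "smarter" behavior described above.

```python
# Boolean keyword search plus a small synonym-expansion step.
# All postings and synonyms below are invented for the example.

postings = {
    "database": {1, 2, 4},
    "databank": {6},
    "mining":   {2, 3},
    "neural":   {3, 4},
    "network":  {3, 4, 5},
    "net":      {5, 6},
}
synonyms = {"database": {"databank"}, "network": {"net"}}

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(universe, a): return universe - a

universe = set().union(*postings.values())

# Classic queries over exact keywords only.
print(AND(postings["database"], postings["mining"]))        # {2}
print(AND(postings["database"], NOT(universe, postings["mining"])))  # {1, 4}

# "Intelligent" variant: each term is ORed with its associated synonyms.
def expanded(term):
    ids = set(postings.get(term, set()))
    for alt in synonyms.get(term, set()):
        ids |= postings.get(alt, set())
    return ids

print(AND(expanded("database"), OR(expanded("network"), postings["mining"])))  # {2, 4, 6}
```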

Modern databases built on an intelligent interface can be used in the healthcare system. For example, if artificial intelligence is given data from a million electronic charts, it will not only retrieve them easily but also be able to select the best results for a particular treatment based on the original data (Data Mining from Heterogeneous Data Sources, 2017). It could be used in scientific research, in which AI would select the best options for treating pathologies, and on this basis doctors and lab technicians could hypothesize why a particular treatment strategy was the most successful (Morrison, 2017). Through these actions, humanity will be able to approach the qualitative treatment of frequently encountered pathologies (such as arthritis, osteochondrosis, and allergies) and consider options for the treatment of currently incurable diseases (Davidov et al., 2018). Of course, all of this is feasible provided that the AI is correctly trained and tuned for the relevant tasks from the outset.
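As a toy-scale illustration of this use, the sketch below groups invented, already anonymized records by treatment and ranks treatments for a condition by observed recovery rate; real clinical analysis would of course require far more careful methodology and data.

```python
from collections import defaultdict

# Hypothetical, anonymized records: (condition, treatment, recovered).
records = [
    ("arthritis", "therapy_a", True),
    ("arthritis", "therapy_a", True),
    ("arthritis", "therapy_b", False),
    ("arthritis", "therapy_b", True),
    ("allergy",   "therapy_c", True),
]

def best_treatments(condition):
    """Rank treatments for a condition by observed recovery rate."""
    outcomes = defaultdict(list)
    for cond, treatment, recovered in records:
        if cond == condition:
            outcomes[treatment].append(recovered)
    rates = {t: sum(o) / len(o) for t, o in outcomes.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

print(best_treatments("arthritis"))
# [('therapy_a', 1.0), ('therapy_b', 0.5)]
```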

Databases that rely on an intelligent interface can also be used in the pharmaceutical industry. AI has already been used in this way, for example, in the search for a cure for Ebola. With this approach, the process of discovering new drugs becomes less time-consuming and costly, because the database itself processes a huge amount of information (Eboch, 2018). This information includes models of genetic targets, organs, diseases, existing drugs, molecular constituents, and so on.

Despite the above-mentioned advantages of applying AI in medicine, there remains the question of ethical and legal limitations. Today there are at least four ethical problems because of which the use of AI can be illegal and even dangerous for patients (Griffith, 2018). The first is the issue of the patient's informed consent to data processing: not all people are willing to consent to their medical history being downloaded and used, on the grounds that the disease might discredit them or that a leak of the data would literally ruin their lives. The second issue is data security and transparency (Gupta and Kumar, 2017); until there is a mechanism that reliably protects and encrypts user data and minimizes the risk of leakage to third parties, this issue will remain open. A third obstacle to the use of AI is the fairness and bias of its algorithms (Heeringa, West and Berglund, 2017): addressing it requires a defined list of requirements for each database, which all developers commit to.

In addition to design requirements, a similar list should be formed for AI governance to remove the chance of unfair use of the database. The last point is data confidentiality: only the data necessary for medical work should be entered into the database, with the names, surnames, and places of residence of patients encrypted or omitted (Kroenke et al., 2018). If all data is entered, the company using the database must maintain complete confidentiality so as not to discredit the patient or ruin the patient's life.
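To illustrate the confidentiality point, here is a minimal pseudonymization sketch: direct identifiers are replaced with a salted hash so that records can still be linked, while only the fields needed for medical work are retained. The field names and salt handling are assumptions for the example, and salted hashing is pseudonymization rather than full anonymization.

```python
import hashlib

def pseudonymize(record, secret_salt):
    """Keep only the fields needed for analysis and replace the direct
    identifier with a salted hash so records can be linked without names."""
    token = hashlib.sha256((secret_salt + record["name"]).encode()).hexdigest()[:16]
    return {
        "patient_token": token,            # stable pseudonym, not the real name
        "diagnosis": record["diagnosis"],
        "treatment": record["treatment"],
    }

raw = {
    "name": "Jane Doe",
    "address": "10 Example St",            # dropped entirely
    "diagnosis": "arthritis",
    "treatment": "therapy_a",
}
print(pseudonymize(raw, secret_salt="keep-this-out-of-the-database"))
```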

Legal restrictions concern the legality of using artificial intelligence in various areas of people's lives. The very fact that there is no law regulating the work of AI in any area of human life already points to a major legal problem with using this technology in modern developments and services (Macinnes, 2017). For this reason, modern politicians need to create a law that will regulate the work and capabilities of AI (Yildirim, Birant and Alpyildiz, 2017). In particular, it is necessary to specify who has the right to manage this technology, what responsibility the managers bear, how accurate the technology should be, and on what basis decisions should be made from AI output versus human judgment. Referring to the information above, it can be concluded that my preferred method is the use of artificial intelligence in the creation and searching of a database, which significantly speeds up the process and complies with legal and moral standards. It can also be said that this approach is now highly valued because it reveals the new possibilities of computer technology and, in the future, will be able to replace many aspects of human labor.

Reference List

Alexander, M. and Kusleika, D. (2019). Access 2019 Bible. Indianapolis, IN: John Wiley & Sons.

Bacardit, J. and Llorà, X. (2018). Large-scale data mining using genetics-based machine learning. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 3(1), pp.37–61.

Data Mining from Heterogeneous Data Sources. (2017). International Journal of Science and Research (IJSR), 6(1), pp.2076–2079.

Eboch, M.M. (2018). Data mining. New York, NY: Greenhaven Publishing.

Davidov, E., Schmidt, P., Billiet, J. and Meuleman, B. (2018). Cross-cultural analysis: methods and applications. New York, NY: Routledge.

Griffith, J.F. (2018). Survey data analysis in applied settings. S.l.: Elsevier Academic Press.

Gupta, K. and Kumar, A. (2017). Text-mining applications for creation of biofilm literature database. Canadian Journal of Biotechnology, 1(Special Issue), p. 24.

Heeringa, S., West, B.T. and Berglund, P.A. (2017). Applied survey data analysis. Boca Raton, FL: CRC Press, Taylor & Francis Group.

Improvement of Neural Networks Artificial Output. (2017). International Journal of Science and Research (IJSR), 6(12), pp.352–361.

Kroenke, D.M., Auer, D.J., Vandenberg, S.L. and Yoder, R.C. (2018). Database processing: fundamentals, design, and implementation. New York, NY: Pearson.

Macinnes, J. (2017). An introduction to secondary data analysis with IBM SPSS statistics. London: Sage Publications.

Morrison, D.A. (2017). Phylogenetic networks: a new form of multivariate data summary for data mining and exploratory data analysis. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 4(4), pp.296–312.

Yildirim, P., Birant, D. and Alpyildiz, T. (2017). Data mining and machine learning in textile industry. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(1), p.e1228.
