Specific Malicious Attacks
One of the attacks that should be carefully assessed by the organization is code injection. In this type of attack, a flaw in the way the software handles input allows an attacker to introduce a snippet of malicious code into the application. A successful code injection attack can compromise the FTP server and expose sensitive data stored by the organization. The core problem with code injection is that it exploits weaknesses in the software itself, allowing attackers to infect end users and steal sensitive data (Landoll, 2016).
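As a simplified illustration of how injection works (not drawn from the cited sources, and using a hypothetical SQLite user lookup), the sketch below contrasts a query built by string concatenation, which an attacker can subvert, with a parameterized query that keeps attacker-supplied input out of the executable statement.

import sqlite3

def find_user_unsafe(conn, username):
    # VULNERABLE: user input is concatenated into the SQL text, so input such as
    # "x' OR '1'='1" changes the meaning of the statement (code injection).
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query treats the input purely as data, never as code.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice'), ('bob')")
    malicious = "x' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row: the injection succeeded
    print(find_user_safe(conn, malicious))    # returns nothing: the input stayed data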
Another malicious attack that may have serious consequences for the organization is a distributed denial-of-service (DDoS) attack. This type of attack is typically aimed at making the main server unavailable to legitimate users. Because such attacks target the organization's network as a whole, they deserve particularly careful review. A DDoS attack floods the target server with illegitimate requests; the usual outcomes are severely degraded response times or a complete crash of the server once it can no longer keep up (Whitman & Mattord, 2016). DDoS attacks are also highly automated, and large attacks can reach traffic volumes of approximately 100 Gbps at their peak.
A third type of malicious attack the organization has to take into consideration is the exploitation of buffer overflow vulnerabilities. These attacks depend heavily on the target machine's architecture and memory layout, with exploitation of allocated (heap) memory and the call stack being the techniques most popular among attackers. Nonetheless, this attack is the least reliable of the three because it often fails to execute consistently (Landoll, 2016). To counter such exploits, the organization should watch for telltale signs such as embedded null bytes and the placement of shellcode in attacker-supplied input.
Potential Impact of Malicious Attacks
First, code injection may result in the loss of sensitive credentials related to the organization's FTP server, including username and password pairs. Moreover, the network may be infected with Trojans or other "sniffers" that capture sensitive information sent over the network, including the intranet. The problem is aggravated by the fact that basic FTP transmits credentials in plain text, so a Trojan "sniffer" listening on the network can steal them with little effort.
Second, the consequences of a DDoS attack may prove rather costly. Numerous employees will have to set aside their core responsibilities to help the system administrator cope with the attack, and the organization will have to accept the time-intensive process of restarting the server and all core applications, including the associated testing. One of the biggest concerns should be unfinished transactions: an interrupted transaction may corrupt data stored on the server and produce read/write (R/W) errors, forcing the organization to restore previous transactions, which may be rather problematic.
Third, the most serious impact of a buffer overflow is a crash of the organization's SMTP server. In other words, this type of attack undermines availability and may push the software into an infinite loop. Another aspect of the impact is the arbitrary code that a buffer overflow can introduce: such code operates outside the network's security policy, and if it is executed, it will destabilize the SMTP server and cause it to malfunction.
Proposed Security Controls
First of all, access to the FTP server should be granted with care. Any software used to store passwords (FileZilla, for instance) may itself be compromised by a Trojan, so a single step mitigates much of the risk of credential theft: no one should store passwords on the local machine (Conklin, White, Cothren, & Davis, 2016). Additionally, it is highly recommended to use Secure Shell (SSH) or Secure Copy (SCP) as alternatives to basic FTP. The organization should also prefer encrypted communication (instead of plain text) when setting up connections to the server, which will reduce the likelihood of successful "sniffing."
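To illustrate the SSH-based alternative, the following sketch uploads a file over SFTP instead of plain FTP. It assumes the third-party paramiko library is available; the host name, remote path, and key location are placeholders, and the user name is read from the environment rather than stored on the local machine.

import os
import paramiko

def upload_over_sftp(local_path, remote_path):
    """Upload a file over SFTP (SSH), so credentials and data are encrypted in transit."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()  # trust only hosts already known to the system
    client.connect(
        hostname="ftp.example.org",                              # placeholder server name
        username=os.environ["SFTP_USER"],                        # taken from the environment, not stored on disk
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),    # key-based auth instead of a password
    )
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)                        # encrypted transfer over the SSH channel
        sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    upload_over_sftp("build/patch.zip", "/uploads/patch.zip")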
The impact of DDoS attacks can be mitigated by deploying reverse proxies. For example, an array of reverse proxies may be distributed across several hosting locations; each proxy acts as a barrier that filters out unwanted traffic and protects the server (Conklin et al., 2016). On a larger scale, this means that a set of geographically distributed barriers absorbs and filters incoming traffic so that it cannot overwhelm the server. Reverse proxies can serve either as a preventive measure or as a supporting measure during an attack that is already underway.
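As an illustration of the kind of per-client filtering such a proxy layer performs, the following sketch implements a basic sliding-window rate limiter. The thresholds (20 requests per 10 seconds) are arbitrary placeholders, and a production proxy at each hosting location would apply far more sophisticated rules.

import time
from collections import defaultdict, deque

# Placeholder thresholds: at most 20 requests per client within a 10-second window.
WINDOW_SECONDS = 10.0
MAX_REQUESTS_PER_WINDOW = 20

_request_log = defaultdict(deque)  # client IP -> timestamps of its recent requests

def allow_request(client_ip):
    """Return True if the request should be forwarded to the origin server,
    or False if the client has exceeded its quota and the request should be dropped."""
    now = time.monotonic()
    history = _request_log[client_ip]
    # Discard timestamps that have fallen out of the sliding window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_REQUESTS_PER_WINDOW:
        return False  # looks like flood traffic; do not pass it on to the server
    history.append(now)
    return True

if __name__ == "__main__":
    # A modest client stays under the limit; a flooding client is cut off.
    print(all(allow_request("10.0.0.5") for _ in range(5)))         # True
    print(any(not allow_request("10.0.0.9") for _ in range(100)))   # True: the flood is blocked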
One of the most common ways to protect the server from buffer overflows is to place a guard (canary) value on the stack and check it when a function returns. This allows the program to detect whether the stack has been altered during the execution of the function; if the value has changed, execution should be aborted with a segmentation fault rather than allowed to continue (Conklin et al., 2016). The organization may rely on compiler support, such as the stack-protection patches available for GCC, to implement these checks. Moreover, the Data Execution Prevention (DEP) feature developed by Microsoft can also be used. The simplest way of protecting the stack is to split it into two parts, one holding function return addresses and the other holding ordinary data. The organization still has to take into consideration that splitting the stack does not completely prevent the theft of sensitive data during a buffer overflow.
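The canary idea can be illustrated conceptually. The sketch below is a simplified Python simulation rather than how compiler-inserted canaries actually work in native stack frames: a random guard value placed after a fixed-size buffer reveals an out-of-bounds write before the "function" is allowed to return.

import os

BUFFER_SIZE = 16

def copy_with_canary(user_input):
    """Simulate a stack frame: a fixed-size buffer followed by a random canary value.
    If the copy overruns the buffer, the canary changes and we abort instead of returning."""
    canary = os.urandom(4)                          # random guard value, like a stack canary
    frame = bytearray(BUFFER_SIZE) + bytearray(canary)
    # Deliberately unchecked copy (the "vulnerable" operation in this simulation).
    frame[:len(user_input)] = user_input
    if bytes(frame[BUFFER_SIZE:BUFFER_SIZE + len(canary)]) != canary:   # canary check before "return"
        raise RuntimeError("stack smashing detected: canary overwritten, aborting")
    return bytes(frame[:BUFFER_SIZE])

if __name__ == "__main__":
    copy_with_canary(b"short input")                # fits in the buffer, canary stays intact
    try:
        copy_with_canary(b"A" * 32)                 # overruns the buffer and clobbers the canary
    except RuntimeError as err:
        print(err)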
Potential Concerns for Data Loss
Existing data show that almost 50% of data loss cases occur due to hardware malfunction, so this should be the organization's core concern simply because of how common it is. Hardware-related data loss can take several forms that include (but are not limited to) controller failure, electrical failure, and a crash of the main server (Pfleeger, Pfleeger, & Margulies, 2015).
Another point the organization should address is corruption of the software used by developers and other team members. It is rather common for software to shut down without allowing users to save their progress. The problem with software corruption is that it may happen unexpectedly, triggered by certain diagnostic tools or simply by a resource-intensive process (Landoll, 2016).
The last concern involves viruses, which are practically everywhere nowadays. The organization should be concerned with the wide array of modern viruses because criminals may use critical system weaknesses to steal sensitive data or damage the system (Landoll, 2016). Therefore, antivirus software should be installed (if it is not already) and its virus definitions should be updated constantly.
Potential Impact of the Selected Concerns
The potential impact of the concerns mentioned above may be described in terms of several factors. The first is economic impact, seen in the cost of restoring lost information or mitigating the consequences of a system breach (Whitman & Mattord, 2016). The second is performance impact: to counter the negative outcomes of data loss, the organization will have to redirect its human resources so that available staff focus on resolving the problem. Third, the organization will be affected by the loss of time, which is a pivotal resource for any organization (Pfleeger et al., 2015). Time is especially critical for an organization that operates in the field of game development.
Data Loss and Data Theft Prevention
Data loss and data theft can be prevented in several ways. First, it is vital to develop a backup plan and store all data in secure remote locations (Whitman & Mattord, 2016); backups will help the organization restore its data after external damage or a natural disaster. It is also crucial to keep backups current and to perform scheduled checks of the data to ensure its relevance and integrity. The last recommendation is to set up specific access levels so that data is available only to authorized personnel, depending on their clearance (Whitman & Mattord, 2016). Developing a confidentiality policy would also be worthwhile.
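As a minimal sketch of such a backup routine (the directory names are placeholders, and a real deployment would copy the archive to an off-site location on a schedule), the following script creates a timestamped archive and records a SHA-256 checksum so that later scheduled checks can confirm the backup has not been corrupted.

import hashlib
import tarfile
from datetime import datetime, timezone
from pathlib import Path

SOURCE_DIR = Path("project_data")      # placeholder: the data to protect
BACKUP_DIR = Path("backups")           # placeholder: ideally a remote or secondary volume

def create_backup():
    """Create a timestamped .tar.gz archive of SOURCE_DIR and store its SHA-256 checksum."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE_DIR, arcname=SOURCE_DIR.name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_suffix(".sha256").write_text(f"{digest}  {archive.name}\n")
    return archive

def verify_backup(archive):
    """Scheduled check: recompute the checksum and compare it with the stored value."""
    stored = archive.with_suffix(".sha256").read_text().split()[0]
    return hashlib.sha256(archive.read_bytes()).hexdigest() == stored

if __name__ == "__main__":
    backup = create_backup()
    print(backup, "verified:", verify_backup(backup))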
References
Conklin, W., White, G. B., Cothren, C., & Davis, R. (2016). Principles of computer security (4th ed.). New York, NY: McGraw-Hill Education.
Landoll, D. J. (2016). Information security policies, procedures, and standards: A practitioner’s reference. Boca Raton, FL: CRC Press.
Pfleeger, C. P., Pfleeger, S., & Margulies, J. (2015). Security in computing. Harlow, UK: Prentice Hall.
Whitman, M. E., & Mattord, H. J. (2016). Management of information security. Boston, MA: Cengage Learning.