Introduction
A computer is an electronic machine that is exposed to many risks. It must be protected from threats such as viruses that may affect its normal functioning. In the field of computer security, many researchers have come up with different ways of protecting a computer from unauthorized parties.
There are many technical areas of computer security, but the main ones are summarized by the initials CIA, which stand for confidentiality, integrity, and availability. Confidentiality means that unauthorized persons cannot access what is stored in the computer.
Integrity means that the information in the computer cannot be altered by unauthorized persons. Availability means that the information in the computer remains accessible to the authorized parties when they need it (Seong 24). The main reason for enhancing computer security is to protect the system from unauthorized persons who damage it and thereby compromise confidential information.
Another issue in computer security is privacy: people who use the internet every day must ensure that they protect their personal information from the websites they deal with. To keep these security systems working, certain fault-tolerance methods are necessary, as discussed below (Seong 83).
Discussion
Most software fault-tolerance methods are refinements of older hardware fault-tolerance methods, which performed less effectively. Three software fault-tolerance methods are in common use today, as discussed below (Bishop 45).
Recovery blocks
Randell developed this method, in which an adjudicator is used to confirm the results of alternative implementations of the same algorithm. A system using this method is broken down into small recoverable blocks that together build up the entire computer security system.
Each of the small blocks has primary, secondary, and tertiary alternate routines, which sit alongside the adjudicator. According to Seong, "the adjudicator is used to show the effectiveness of the various blocks, and in case the primary block fails, it rolls back the state of the system and tries the secondary block" (76). If a block fails the adjudicator's test, the failure shows that the block is not fit for use.
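The following Python sketch illustrates the recovery-block pattern described above; it is not taken from the cited sources, and the names recovery_block and acceptance_test are hypothetical stand-ins for the alternate routines and the adjudicator.

    import copy

    def recovery_block(state, alternates, acceptance_test):
        # Run the alternates in order (primary, secondary, tertiary, ...).
        # The adjudicator (acceptance_test) decides whether a result is valid;
        # on failure, the system state is rolled back before the next attempt.
        for alternate in alternates:
            checkpoint = copy.deepcopy(state)   # save the state before the attempt
            try:
                result = alternate(state)
                if acceptance_test(result):     # adjudicator accepts the result
                    return result
            except Exception:
                pass                            # a crash is treated as a failed test
            state.clear()
            state.update(checkpoint)            # roll back to the checkpoint
        raise RuntimeError("all alternates failed the acceptance test")

    # Example: the primary fails the test, so the secondary result is used.
    primary = lambda s: -1                      # faulty implementation
    secondary = lambda s: 42                    # correct alternate
    print(recovery_block({}, [primary, secondary], lambda r: r >= 0))   # prints 42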
N-version software
This method parallels traditional N-version hardware redundancy. It builds the system from N different, independently developed implementations (variants) of the same specification. Each variant returns its own result after performing an operation.
These results are compared, and the correct outputs are identified by majority agreement among the variants. The method is more effective because it can be combined with hardware redundancy running the multiple software versions, increasing confidence that the results are correct (Bishop 27).
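As a minimal sketch (not drawn from the cited sources), N-version software can be approximated in Python with a majority vote over the variants' results; the function names below are illustrative assumptions.

    from collections import Counter

    def n_version_vote(versions, *args):
        # Run every variant on the same input and return the majority result.
        results = [version(*args) for version in versions]
        value, votes = Counter(results).most_common(1)[0]
        if votes <= len(versions) // 2:         # no majority agreement
            raise RuntimeError("versions disagree; no majority result")
        return value

    # Three independently written variants of the same specification
    # (hypothetical examples); the faulty one is outvoted two to one.
    def sqrt_v1(x): return x ** 0.5
    def sqrt_v2(x): return pow(x, 0.5)
    def sqrt_v3(x): return x / 3                # faulty variant

    print(n_version_vote([sqrt_v1, sqrt_v2, sqrt_v3], 16.0))   # prints 4.0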
Self-checking software
This method is used less often than the previous two. It adds extra checks at designated checking points, and rollback-recovery mechanisms are also installed in the computer security system. In self-checking, results that pass the checks are accepted and used. The method is, however, less effective because it lacks rigor (Seong 98).
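A minimal sketch of self-checking, assuming a simple account-transfer routine with one checking point and rollback recovery; the scenario and names are hypothetical, not from the cited sources.

    def self_checking_transfer(accounts, src, dst, amount):
        # Extra check at a checking point, with rollback recovery on failure.
        saved = dict(accounts)                  # checkpoint before the update
        accounts[src] -= amount
        accounts[dst] += amount
        # Checking point: a correct transfer preserves the total balance
        # and never drives the source account below zero.
        if sum(accounts.values()) != sum(saved.values()) or accounts[src] < 0:
            accounts.clear()
            accounts.update(saved)              # rollback recovery
            raise ValueError("self-check failed; state rolled back")
        return accounts

    balances = {"a": 100, "b": 50}
    print(self_checking_transfer(balances, "a", "b", 30))   # {'a': 70, 'b': 80}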
Conclusion
The methods used to create fault-tolerant computer systems have never been 100 percent effective. They suffer failures of their own, and most of them are only 60-90 percent effective.
This means that more research needs to be done to develop truly effective and reliable methods. In addition, the methods are very expensive to implement, so the next generation of techniques must also be cost-effective.
Works Cited
Bishop, Matt. Computer Security: Art and Science. New York: Addison-Wesley Professional, 2003. Print.
Seong, Poong H. Reliability and Risk Issues in Large-Scale Safety-Critical Digital Control Systems. Washington, DC: Springer, 2009. Print.