The Trusted Computer System Evaluation Criteria (TCSEC) is a standard established by the US Department of Defense that outlines the fundamental requirements for evaluating the effectiveness of the computer security controls built into a computer system. The fundamental role of the TCSEC was to assess, catalog, and facilitate the selection of computer systems to be used for the processing, storage, and retrieval of sensitive information (Daly, 2004). The Common Criteria for Information Technology Security Evaluation is a framework through which users of a computer system can specify their functional and assurance security requirements, vendors can implement their products and make security claims about them, and independent evaluators can assess those claims against international computer security standards, thereby providing assurance that a security product meets those standards. This paper discusses the impacts associated with the transition from the Trusted Computer System Evaluation Criteria to the international Common Criteria for information security evaluation. The paper provides an overview of the concepts of security assurance and trusted systems, an evaluation of the ways of providing security assurance throughout the life cycle, an overview of validation and verification, and the evaluation methodologies and certification techniques deployed in both criteria for security evaluation.
Security assurance is one of the core objectives and requirements of the Trusted Computer System Evaluation Criteria, which stipulates that a secure computer system should have hardware and software mechanisms that can be evaluated independently in order to provide adequate assurance that the system enforces the minimum security requirements. In addition, the concept of security assurance should provide a guarantee that each independent portion of the computer system works as required. Security assurance guarantees the protection of the data and any other resources that the system hosts and controls. The basic argument is that a hardware or software entity is itself a resource and should therefore have appropriate security mechanisms (Herrmann, 2003). In order to realize these objectives, two principal kinds of security assurance are required: assurance mechanisms and continuous protection assurance. Assurance mechanisms comprise operational and life-cycle assurance, while continuous protection assurance covers the trusted mechanisms used to implement the basic security requirements and ensures that these mechanisms are not subjected to unauthorized alteration. A trusted system, on the other hand, refers to a system that can be depended upon to perform its specified functionality and enforce the outlined security policies (Lehtinen, 2006). The underlying argument is that the failure of a trusted system is bound to result in the breach of a particular security policy. Basically, a trusted system can be perceived as a reference monitor, playing an integral role in mediating all access control decisions. The relationship is that security assurance yields a trusted system, the outcome being an integration of computer hardware, software, and any middleware that can be used to enforce particular security policies. In order to avoid failure of the trusted system, higher levels of system assurance are required to guarantee its effectiveness. In terms of assurance grading, the TCSEC defined six evaluation classes above its minimal-protection division, while the Common Criteria defines seven Evaluation Assurance Levels (Merkow, 2004).
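To make the reference monitor concept concrete, the following is a minimal sketch of a component that mediates every read request against a simple hierarchical policy. The class and level names, and the "no read up" rule used here, are illustrative assumptions rather than part of either evaluation standard.

```python
# Minimal sketch of a reference monitor: every access request is checked
# against a security policy before it reaches the protected resource.
# The names (Level, ReferenceMonitor) and the policy are hypothetical.

from enum import IntEnum


class Level(IntEnum):
    """Hierarchical sensitivity levels, as in a simple mandatory policy."""
    UNCLASSIFIED = 0
    CONFIDENTIAL = 1
    SECRET = 2
    TOP_SECRET = 3


class ReferenceMonitor:
    """Mediates all access decisions; a trusted system relies on this
    component being always invoked, tamper-proof, and small enough to verify."""

    def check_read(self, subject_clearance: Level, object_label: Level) -> bool:
        # Simple "no read up" rule: a subject may read an object only if its
        # clearance dominates the object's sensitivity label.
        return subject_clearance >= object_label


if __name__ == "__main__":
    monitor = ReferenceMonitor()
    print(monitor.check_read(Level.SECRET, Level.CONFIDENTIAL))      # True
    print(monitor.check_read(Level.CONFIDENTIAL, Level.TOP_SECRET))  # False
```

Keeping the decision logic this small reflects the idea that a trusted component must be simple enough to evaluate independently, as the assurance requirements above demand.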
Under the Trusted Computer System Evaluation Criteria, life-cycle assurance normally entails carrying out security testing, specifying the design and verifying it, configuration management, and finally trusted system distribution. One of the TCSEC requirements is that security implementation should take place throughout the life cycle of system development. Security testing is used to determine whether a system is capable of protecting its data and resources without impairing its overall functionality. Therefore, security testing aims at assessing the ways in which a system ensures confidentiality, data integrity, user authentication, system availability, user authorization, and non-repudiation. Specification of the design is done in accordance with the functional and user security requirements. Trusted systems have to integrate the functional and user requirements with the security policies in order to guarantee the core objectives of information security. Design specification is an important process for outlining design requirements during the implementation of a security system (Lehtinen, 2006). Verification entails confirming that the security system functions in accordance with the expected requirements, in the sense that it should meet the minimum security requirements in order to be deemed effective at enforcing security. Configuration management aims at ensuring consistency with respect to security performance; this normally includes keeping track of any needed changes and constantly adjusting the security baselines in accordance with the nature of the security threats present. Two core processes are undertaken during configuration management: revision control and the establishment of baselines. Trusted system distribution, on the other hand, aims at guaranteeing the security of a trusted system prior to its installation. According to the TCSEC requirements, the security properties of a trusted system must remain intact prior to its installation for the user; in essence, the installed system should be an exact copy of the system that was evaluated against the requirements of the TCSEC. Basically, the life cycle of a security system implementation entails the definition of security requirements, design, and implementation. Assurance justification and the design implementation requirements are vital in ensuring that the implemented system meets the security evaluation criteria under both the TCSEC and the Common Criteria, with the Common Criteria providing more evaluation frameworks than the TCSEC (Daly, 2004).
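As an illustration of the security testing stage described above, the sketch below expresses two confidentiality and fail-safe checks as automated tests. The authorize() function and its role-based policy are hypothetical placeholders for whatever protection mechanism the system under evaluation actually provides.

```python
# Illustrative security test in the spirit of life-cycle assurance:
# confirm that the protection mechanism denies access it should deny.
# The authorize() function and its rules are hypothetical.

import unittest


def authorize(user_role: str, action: str) -> bool:
    """Hypothetical access-control check: only administrators may delete."""
    policy = {"admin": {"read", "write", "delete"}, "user": {"read", "write"}}
    return action in policy.get(user_role, set())


class SecurityTests(unittest.TestCase):
    def test_unauthorized_delete_is_denied(self):
        # Integrity check: an ordinary user must not be able to delete data.
        self.assertFalse(authorize("user", "delete"))

    def test_unknown_role_gets_no_access(self):
        # Fail-safe default: unrecognized subjects receive no permissions.
        self.assertFalse(authorize("guest", "read"))


if __name__ == "__main__":
    unittest.main()
```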
Validation and verification are vital in ascertaining the effectiveness of a security system. They are used to check that a security system meets its design specifications and that its functionality is not impaired. Validation and verification are significant elements of a quality management system. Verification can be perceived as a quality control activity used to evaluate whether a security system has complied with the relevant security standards, regulations, and specifications; it is usually an internal process and takes place during all the phases of security system development and implementation. Validation, on the other hand, can be viewed as a quality assurance process whose principal objective is to analyze the performance of a security system and to guarantee a high level of security assurance, in the sense that the implemented security system meets all the requirements needed for it to be deemed effective. Basically, validation evaluates fitness for purpose and facilitates the acceptance of a security product by its end users. Validation entails developing the right security system, confirming that the product matches the needs of its users, while verification involves building the security system in the right manner, confirming that the specifications have been implemented as required.
There are various evaluation methodologies and certification techniques that can be used to determine the level of security assurance of an information system. The main objective of an evaluation methodology is to determine the vulnerability of a system, which normally includes an assessment of the ways in which the security controls can be broken, which may in turn result in a violation of the security policies (Daly, 2004). Formal verification is one methodology that can be deployed to ensure that a security system meets certain constraints. It normally entails establishing the preconditions and postconditions of the implemented system; in order for the system to be deemed effective, the postconditions must satisfy all the constraints. Penetration testing is another technique that can be used to determine whether a security system meets some minimum constraints. It normally entails stating a hypothesis about the characteristics of the system and the state that would expose it to vulnerability, the result being a hypothesized compromised state. Tests are then carried out to see whether the system can actually be driven into that state, thereby demonstrating that it is vulnerable (Herrmann, 2003).
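The following is a minimal sketch, in the spirit of precondition and postcondition analysis, of checking that an operation's output always satisfies a stated constraint. The sanitize() routine, its safe alphabet, and the sample adversarial inputs are assumptions introduced purely for illustration; a real formal verification effort would prove the postcondition rather than spot-check it.

```python
# Sketch of precondition/postcondition checking: the operation is accepted
# only if its postcondition (output restricted to a safe alphabet) holds.
# The sanitize() routine and its contract are hypothetical examples.


def sanitize(label: str) -> str:
    """Strip characters outside a small safe alphabet from a label."""
    # Precondition: the input is a string supplied by an untrusted caller.
    allowed = set("abcdefghijklmnopqrstuvwxyz0123456789-_")
    result = "".join(ch for ch in label.lower() if ch in allowed)
    # Postcondition: every character of the output is in the safe alphabet.
    assert all(ch in allowed for ch in result), "postcondition violated"
    return result


if __name__ == "__main__":
    # Exhaustive proof is what formal verification would aim for; here a few
    # adversarial inputs stand in for that analysis.
    for attempt in ["normal-name", "../../etc/passwd", "<script>alert(1)</script>"]:
        print(attempt, "->", sanitize(attempt))
```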
In conclusion, the transition from the Trusted Computer System Evaluation Criteria to the international Common Criteria resulted in more secure systems, owing to the fact that the Common Criteria provides more evaluation frameworks than the TCSEC.
References
Daly, C. (2004). A Trust Framework for the DoD Network-Centric Enterprise Services (NCES) Environment. New York: IBM Corp.
Herrmann, D. (2003). Using the common criteria for IT security evaluation. New York: Auerbach.
Lehtinen, R. (2006). Computer security basics. New York: O’Reilly Media, Inc.
Merkow, M. (2004). Computer security assurance using the common criteria. New York: Cengage Learning.