Advanced cyber attacks that target both public and private sectors at the international level have provoked an increase in research and funding for the establishment of novel cyber security technologies.
Government and private institutions are investing in technologies that dynamically appraise networks through real-time and remote agent forensic investigations. In addition, organizations are developing cyber security technologies such as moving target defense, which prevents hackers from identifying a network's attack surface.
Moving target defense allows a network to alter its variables dynamically, thereby making it hard for hackers to predict or identify its attack surface. Other emerging cyber security technologies include tailored trustworthy spaces and searchable encryption.
The responsibility to guard public and private assets on the state and global level should not fall exclusively on the government. Public and private institutions should work with the government to establish network security measures.
The federal government has taken an active role in curbing cyber crimes by investing in research programs aimed at creating awareness of possible threats and how to address them. Currently, the federal government is trying to fine-tune existing cyber security technologies and to develop novel ones to safeguard networks from emerging threats.
The federal government is working to ensure that private and public institutions install novel cyber security technologies in their networks as a way to boost state economy and curb cyber attacks.
Recent events like the Georgian incursion and the revelation that hackers have stolen billions of dollars from banks globally have created an awareness that cyber security involves more than protecting one's own computer; attacks have devastating repercussions for state defense and economic interests.
For this reason, cyber security is one of the central challenges facing the contemporary digital community. Today, the majority of cyber work is not automated and is therefore done by people (Garfinkel, 2014). Moreover, many countries lack qualified cyber experts, so it is difficult for them to rely on the people entrusted with the duty to counter cyber attacks.
In addition, “The one consistent theme is that cyber defenses commonly used today are simply not effective against most forms of advanced cyber attacks” (Garfinkel, 2014, p. 568). Currently, state and private institutions are collaborating to plan and create cyber security technologies that aid in the fight against cyber crimes.
The technologies are aimed at detecting and combating any unauthorized intrusion into both government and private corporations' networks.
Emerging cyber threats and vulnerabilities have “Created challenges amounting to financial damages to governments and businesses” (Speicher, 2014, p. 65). The present digitally linked world offers immense benefits to countries.
Public and private corporations transmit data across the globe within a short period, enhancing their efficiency and productivity. Nevertheless, the digital platform is open to cyber attacks, exposing companies to insecurity. Cases of advanced persistent threats (APTs) against state and key private organizations are rampant across the globe.
Moreover, hackers are targeting modern technologies like Big Data, Smart Grid, and High-End Computing. Another budding trend in cyber attacks is focusing on data found on social networks. Many people use social networks to share confidential information.
However, they do not know that social networks are prone to cyber attacks. Hackers have capitalized on people’s ignorance to perpetrate cyber crimes (Speicher, 2014). Social networks like LinkedIn have subjected national security agents to problems.
It is these threats that have prompted the government and private agencies to work together to combat cyber attacks. This article will discuss emerging cyber security technologies and government’s role in nurturing the development of these technologies.
Moving Target Defense
One of the modern cyber security technologies is moving target defense (MTD). The technology continually alters the attack surface of a network, making it hard for attackers to access a system and reducing the vulnerabilities and predictability available at any time.
According to Jajodia, Ghosh, Swarup, Wang and Wang (2014), hackers use a network's attack surface to determine its vulnerability. They allege that the majority of systems are static; consequently, it is easy for hackers to access them.
Moving target defense enables organizations to “Change the network IP addresses, operating systems, open ports and protocols, and many other areas of the environment” (Jajodia et al., 2014, p. 35). Jajodia et al. (2014) emphasize that it is imperative to ensure that events are not predictable.
Systems achieve this by making the movement of IP addresses as unpredictable and random as possible. Increased randomness confuses hackers, making it hard for them to access a network. When an attacker inspects such a system, he or she does not get consistent information that can help to launch an attack.
Moreover, MTD reduces the chances of penetrating a network because it regularly changes the network's environment. Additionally, it fights an attack by lowering the number of components of a system that an attacker is familiar with or can access (Jajodia et al., 2014).
Plans are underway to develop Internet Protocol version 6 (IPv6), which will allow organizations to run a vastly larger number of IP addresses. Moreover, IPv6 will make it easy for networks to change their Internet addresses randomly, which is a fundamental feature of moving target defense.
Today, if an organization wants to use 100 different IP addresses weekly, it must set aside at least 400 addresses every month, which translates to 4,800 IP addresses annually (Jang-Jaccard & Surya, 2014).
Assuming that 1,000 corporations wished to change their IP addresses this way, it would be difficult for the existing IP version to hold such a huge number of addresses. IPv6, however, can hold such a number of IP addresses and execute the requests without difficulty.
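The random rotation described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not any vendor's implementation; the subnet, pool size, and function names are hypothetical.

```python
import ipaddress
import random

def build_address_pool(network: str, size: int) -> list:
    """Reserve a random pool of candidate addresses from a subnet."""
    hosts = list(ipaddress.ip_network(network).hosts())
    return random.sample(hosts, size)

def rotate(pool: list) -> ipaddress.IPv4Address:
    """Pick the next externally visible address unpredictably, so two
    probes of the network rarely observe the same attack surface."""
    return random.choice(pool)

# e.g. an organization reserving 100 addresses for a week of rotation
pool = build_address_pool("10.0.0.0/24", 100)
current = rotate(pool)
```

Each call to `rotate` is independent, so an attacker who records the address at one point in time learns nothing about the address at the next rotation.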
Moving target defense is implemented in two ways, which are high-level and low-level behaviors. Jang-Jaccard and Surya (2014) state “Low-level behavior is where the standard semantics of the programming language are preserved, but its undefined semantics are changed” (p. 975).
The primary benefit of low-level behavior is that the undefined semantics are changed automatically. The demerit is that it applies only to semantics that are left undefined. Low-level MTD helps to avert memory corruption and code injection attacks.
High-level behavior entails preventing high-level attacks. It requires system administrators to know the system's functionality and its attack surface. The disadvantage of high-level behavior is that it is resource intensive. Moreover, it requires skilled personnel to change the variables of a program.
Uses of Moving Target Defense
One of the challenges of using MTD is maintaining a functional network for clients during transformations while keeping the associated costs down. Nevertheless, some companies have created cost-effective and efficient moving target defenses. One such company is JumpSoft.
The company has developed subscription-based MTD software known as “JumpCenter” (Jang-Jaccard & Surya, 2014). The software utilizes “Adaptive and reactive mechanized systems, which lower the range of the attack surface” (Jang-Jaccard & Surya, 2014, p. 976).
The idea behind JumpCenter and MTD is to reduce the cost and vulnerability to attack. The software is planted in the application layer to guarantee that a network remains operational during the software’s transformations.
The Department of Homeland Security is already using moving target defense to protect its systems against cyber attacks. The security agency is in the process of developing a complex Internet Protocol (IP) Hopper.
The IP hopper will “Use the available network data and hopping algorithms to allow for the constant switching of both source and destination IP addresses” (Jang-Jaccard & Surya, 2014, p. 981). Once complete, the IP hopper will enable a network to switch continuously and randomly among up to 340 IP addresses.
The move will make it difficult for hackers to identify which IP address to target. Moreover, it will be hard for hackers to screen the network.
Role of Federal Government
The federal government has invested in the improvement of MTD. The Department of Homeland Security is currently running a project aimed at strengthening MTD functionalities to fight cyber attacks. Its Cyber Security Division is working with other institutions like the National Science Foundation and the National Security Agency to develop MTD.
The Department of Homeland Security has already awarded research contracts aimed at developing algorithms that facilitate systematic reasoning in MTD systems (Speicher, 2014). It also seeks to create a control technique that will reduce the complexity of moving target defense systems and automate some of their tasks.
In 2011, the Air Force Office of Scientific Research donated $1 million to support the development of moving target defense systems. One of the benefits of the government's involvement is that it will help make both public and private networks unpredictable, therefore protecting them from cyber attacks.
One of the demerits of government involvement is that it has focused on protecting military networks and ignored corporate networks, which contribute to economic development (Lynn, 2014). Nonetheless, there is hope that the fight against cyber attacks will be a concerted effort between the government and private institutions.
The federal government is working in partnership with the private sector to enhance cyber security. Currently, it has funded research by the Networking and Information Technology Research and Development (NITRD) program, which aims to transform cyber safety in the country.
Remote Agents Technology
Remote agents, commonly referred to as mobile agents, are used to monitor a network's safety dynamically. Dynamic monitoring is essential since a system that is not equipped with current patches has proved vulnerable and unable to withstand modern cyber attacks.
In addition, it is hard for a system administrator to monitor large networks. Large systems comprise numerous nodes, each with regular system changes and clients (Kundur et al., 2014). Remote agents help to execute localized analysis of network security from a remote server or user without relying on static firewall procedures.
Currently, most public and private corporations use network screening tools that use Simple Network Management Protocol (SNMP) or the “Occasional discharge of scripts built based on network threats which require tedious and complicated updates in order to remain current and valid” (Kundur et al., 2014, p. 7).
The primary difference between conventional Simple Network Management Protocol (SNMP) screening systems and remote agents is that the latter can correlate different activities within the system, trigger a warning in the log file, and raise the responsiveness or threat level of other agents.
For instance, systems with remote agents detect security or password threat if a person attempts to log into multiple accounts using the same login details (Kundur et al., 2014).
Additionally, remote agents help system administrators to raise the alert level whenever a root login incidence happens and exceeds a predefined threshold. A system executes all these functions without the need for an administrator.
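The correlation and escalation behavior described above can be sketched as a small agent. The event model, threshold, and class names below are hypothetical illustrations, not the design of any cited system.

```python
class RemoteAgent:
    """Toy monitoring agent: correlates login events and escalates
    the alert level without administrator intervention."""

    def __init__(self, root_login_threshold: int = 3):
        self.root_login_threshold = root_login_threshold
        self.accounts_by_credential = {}  # credential -> set of accounts tried
        self.root_logins = 0
        self.alert_level = 0
        self.log = []

    def observe_login(self, credential: str, account: str, is_root: bool = False):
        # Correlate: the same credentials used against multiple accounts
        # is treated as a password threat.
        self.accounts_by_credential.setdefault(credential, set()).add(account)
        if len(self.accounts_by_credential[credential]) > 1:
            self.log.append("WARNING: credential reused across accounts")
            self.alert_level = max(self.alert_level, 1)
        # Escalate automatically when root logins exceed the threshold.
        if is_root:
            self.root_logins += 1
            if self.root_logins > self.root_login_threshold:
                self.alert_level = 2
                self.log.append("ALERT: root logins exceeded threshold")

agent = RemoteAgent()
agent.observe_login("hunter2", "alice")
agent.observe_login("hunter2", "bob")  # same credentials, different account
for _ in range(4):
    agent.observe_login("rootpw", "root", is_root=True)
```

The point of the sketch is that the agent itself raises the alert level; no administrator has to watch the log.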
Use of Remote Agent Technology
Both script and SNMP screening techniques offer inadequate functionality and need trained personnel to search through systems and write updates. A group of scholars at the University of Minnesota sought to solve this challenge by creating mobile agents.
The scholars developed the Ajanta mobile agent program (Kundur et al., 2014). The program can remotely sort data and change a system's tasks. In addition, the Ajanta mobile agent utilizes a localized database to sense and evaluate policy actions to make sure that guidelines are observed.
Using Ajanta, network administrators can steadily impose changes on a system’s screening and sorting plan and easily insert or eliminate agents from a section of the network depending on actions triggered.
Role of Federal Government
After the WikiLeaks incident, the federal government realized the importance of using remote monitoring techniques to manage its networks across the globe. Consequently, the government has already reviewed the United States' international cyber laws as a step toward using remote monitoring tools to manage its global cyberspace.
Investment in remote agent technologies will benefit not only the government, but also private investors (Kundur et al., 2014). It will assist private investors to subcontract their operations and control them from a distance, thus enhancing their efficiencies.
One demerit of the federal government’s involvement in the development of remote agents is that it might trigger animosity between countries. The federal government and private organizations will be able to spy on other countries or firms secretly.
There are claims that Kaspersky Lab, a Russian company, has already discovered that the United States has been surreptitiously spying on a number of countries for quite some time. The company alleges that the United States installed malware on many computers across the world.
The malware attacks the firmware of a computer's hard drive and bypasses security measures (Kundur et al., 2014). With the malware, the United States gathers security intelligence and other classified information from unknowing companies and countries.
Such a technology may lead to conflicts between states. Moreover, it would be disastrous if attackers get hold of the technology.
Real-Time Forensic Analysis
In the modern digital world, “Criminal proceedings are made easy through the use of computer forensic tools” (Casey, 2014, p. 15). Also associated with network screening is real-time forensic analysis, an investigative method used to create situational awareness and enable regular surveillance of a network.
Casey (2014) alleges “While remote access monitoring actively monitors the network and takes necessary action to correlate threats and increase defenses, real-time forensic analysis allows for an incident to be reproduced and the effects of the event to be analyzed further” (p. 17).
Real-time forensic analysis uses Network Forensic Analysis Tools (NFAT). Garfinkel (2014) asserts “The NFAT prepare a system for forensic examination and make it possible to screen and identify security breach and configuration errors” (p. 560).
The data gathered during the screening process can be used to analyze other events. Apart from filtering a network, real-time forensics has numerous practical applications.
For instance, health care facilities use real-time forensics to assess data moving from one department to another. Additionally, they use NFAT to recover lost data (Garfinkel, 2014).
According to Garfinkel (2014), it is hard to find proof of a risk that has never been detected before. Therefore, it is important to have a real-time picture of all activities that are going on in a network.
Real-time forensic analysis applies advanced methods of risk screening that include “Pervasive network recording, better visibility into network traffic, and deeper understanding of application state to monitor and report on deviant implementation and system behavior” (Casey, 2014, p. 21).
Through real-time forensic analysis, organizations document all network traffic, which assists them in monitoring events taking place in a system. It becomes easy for organizations to detect illegal events and handle them before they affect operations.
Organizations are buying sophisticated intrusion prevention tools, next-generation firewalls, Security Information and Event Management (SIEM) systems and other apparatus to boost their network security.
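The pervasive-recording idea behind real-time forensics can be illustrated with a toy recorder that stores every network event so an incident can later be reproduced and analyzed; all field and class names here are invented for the sketch.

```python
import time

class TrafficRecorder:
    """Toy pervasive recorder: keeps every event so that an incident
    can be reproduced and its effects analyzed after the fact."""

    def __init__(self):
        self.events = []

    def record(self, src: str, dst: str, action: str):
        self.events.append(
            {"ts": time.time(), "src": src, "dst": dst, "action": action}
        )

    def replay(self, predicate):
        """Reproduce an incident: return recorded events matching a filter."""
        return [e for e in self.events if predicate(e)]

rec = TrafficRecorder()
rec.record("10.0.0.5", "db-server", "SELECT")
rec.record("203.0.113.9", "db-server", "DROP TABLE")  # suspicious external host
rec.record("10.0.0.7", "web-server", "GET /")

# After the fact, isolate everything the suspicious host did.
incident = rec.replay(lambda e: e["src"] == "203.0.113.9")
```

Because every event is retained, the analyst can ask questions that were not anticipated when the traffic was first observed, which is the core advantage over alert-only monitoring.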
Uses of Real-Time Forensic Analysis
The United States Department of State uses the real-time forensic analysis technique in its iPost system. The department uses iPost to measure threat levels by pooling together information gathered from numerous sources like Active Directory and Systems Management Server (SMS).
Moreover, the department collects information using a commercial vulnerability analysis tool (Casey, 2014). The information assists managers and engineers in selecting the sections of the network that face the highest levels of threat and countering risks against those sections correctly.
Role of Federal Government
For over seven years now, the federal government has worked in partnership with private corporations to create real-time forensic software that can help to detect and mitigate cyber attacks.
One of the benefits of government involvement in the development of real-time forensic analysis tools is that it has helped companies to recognize and respond to possible attacks before they happen.
The federal government, through the National Science Foundation, has developed an Email Mining Toolkit that facilitates the analysis of email traffic (Casey, 2014). The toolkit helps organizations to isolate and discard illegal emails. Casey (2014) alleges that forensic analysis is a daily activity in legal proceedings.
Therefore, the federal government has established a department that is responsible for computer forensics. The department has trained the public to detect and deal with emerging network threats, thus lowering the incidence of cyber attacks.
Tailored Trustworthy Spaces
The use of cyberspace and Internet has grown to an extent that it is difficult for people and institutions to operate without these two vital communication assets. For people and corporations to interact, they require Internet and cyberspace.
Lynn (2014) alleges that some corporate and personal interactions do not require security, while for others, safety is a must. The challenge is that institutional and individual interactions are treated equally. It is imperative to create various platforms with different levels of security. This approach is known as tailored trustworthy spaces (TTS).
The primary concern with managing programs in a common pool is that every user has access to all the available programs. Lynn (2014) claims “This ‘one size fits all’ methodology creates a lot of potential for attackers to find exploits in the environment where they can use untrusted applications to access trusted application data” (p. 100).
He asserts that the problem can be solved by creating varied trustworthy spaces, aligned to the demands of the programs or information that clients can access.
Privacy, reliability, and accessibility requirements vary from one program to another, one client to another and even from one situation to another. As a result, the objective of TTS is to establish a protected environment for safe programs while tolerating the use of untrusted programs without any restrictions (Lynn, 2014).
Research is underway to determine how to execute tailored trustworthy spaces. Computer programmers are looking for the possibility of creating a safe implementation window that matches the security needs of individual programs, clients, contexts, and circumstances.
In this approach, each secure program will be executed in a separate and safe window where it does not interact with other programs and system software that may retrieve private data without clients' consent. If this research succeeds, it will prevent hackers from using unsecured programs to access private and protected data.
With this method, it is vital to create not only a safe system and execution of the window, but also a safe termination. If not, the method would be prone to data leakage or hacking (Lynn, 2014).
Use of Tailored Trustworthy Spaces
Organizations are likely to benefit from tailored trustworthy spaces. One way institutions can exploit them is through the utilization of self-protecting data. Self-protecting data arises when data guards itself against unauthorized users. Companies achieve this by assigning a security rule to each piece of data.
The security rule is designed such that no application can breach it (Geers, 2014). A program can have both secure and insecure facets; however, it has no trace of, or access to, the protected data. For instance, a pharmaceutical company can have a recipe for a novel medicine that is extremely classified.
Conventionally, the company would be compelled to store the recipe either in a protected program or a safe space. Using self-protecting data, the pharmaceutical company can encrypt the recipe and embed it in any program that also holds unprotected data, such as a Microsoft Word file (Geers, 2014).
Anyone who intends to use the file can only access the unsecured information stored in it, but not the recipe. It is hard for unauthorized users to access the recipe because it is encrypted; only those with the right key can access it. The recipe can be circulated through email and other programs while remaining secure.
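The recipe scenario can be sketched as follows. The construction below is a deliberately simplified toy cipher for illustration only, not production-grade cryptography and not the scheme Geers describes: the secret travels inside an ordinary document next to unprotected data and is readable only with the key.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key (toy construction, NOT real crypto)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(secret: str, key: bytes) -> str:
    """Encrypt a secret so it can travel inside any ordinary file."""
    data = secret.encode()
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    return cipher.hex()

def reveal(blob: str, key: bytes) -> str:
    """Recover the secret; only holders of the key can do this."""
    cipher = bytes.fromhex(blob)
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher)))).decode()

key = b"only-authorized-holders-have-this"
document = {
    "public_notes": "Quarterly production summary",            # readable by anyone
    "recipe": protect("compound X: 3 parts A, 1 part B", key)  # self-protecting
}
```

The document can be emailed or copied freely: any reader sees the public notes, but the embedded recipe stays opaque without the key.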
Another group that can use TTS is journalists who operate in hostile conditions. The majority of these journalists have to use Internet cafes to communicate with their head offices. Therefore, their information is susceptible to hacking.
Besides, the journalists are frequently subjected to censorship practices such as the Great Firewall of China, which curtails transmission of particular information outside China (Geers, 2014). Majority of these censorship practices stop the transmission of completely encrypted files.
Using tailored trustworthy spaces, journalists can transmit encrypted data by attaching it to unsecured data. It is possible for the encrypted data to go around censorship and reach the planned destination.
Besides, the method can allow journalists to send sensitive information without fear of intimidation as the encrypted data would not be noticed.
Role of the Federal Government
Geers (2014) claims “The federal government has prioritized the research of tailored trustworthy spaces” (p. 299). He alleges that in 2011, the federal government, through the Office of Science and Technology Policy launched a research program aimed at improving the security of digital communications networks.
The federal government is focusing on Transistor-Transistor Logic (TTL) (Lynn, 2014).
It intends to use the “TTL technology in smart grid, which is a fundamental constituent of the national electrical power infrastructure….the federal government has organized for a number of workshops aimed at determining how TTL could protect smart grid from cyber attacks” (Lynn, 2014, p. 98).
The workshops established that TTL can help to develop reliable, safe and vibrant solutions to communications.
Transistor-transistor logic can allow neutral communications providers to run most of the smart grid functions, therefore, eliminating the need for individual power companies to undertake their safety measures (Kundur et al., 2014). Additionally, it can help a country to develop a secure and stable smart grid.
The United States government has already integrated TTL technology in its smart grid. The technology has helped the country to manage its power system and secure it from cyber attacks.
The major drawback of the technology is that it can be disastrous if hackers happen to penetrate it. They would have control of national electric grid system and cause a lot of damage to the country.
Searchable Encryption
The number of people that use cloud computing is growing by the day. The growth has weakened the security of data stored on cloud servers and exposed it to hackers.
A major issue is that people can no longer rely on these servers in their present condition. Abdalla et al. (2014) allege “Hackers who can acquire root rights and managers of cloud hosting institutions have unlimited admittance to data on cloud servers” (p. 352).
Therefore, the data is subject to various security concerns since its owner has no control over how it is utilized or accessed. One method of solving the problem of probing an encrypted cloud server is to get rid of all of the unprotected data through encryption. It gives data owners complete control of the encryption technique.
Full encryption, however, constrains database search capabilities. The constraint can be addressed using a conventional approach such as transferring the whole database to a local server, decrypting it, and executing the search. That approach is entirely unworkable and resource intensive.
Alternatively, data bearers can allow a server to decrypt information, run the search and transmit the outcomes to clients (Abdalla et al., 2014). The principal challenge of this approach is that the server can ultimately know the data a client is looking for, and thus, render encryption less efficient.
Technologists have come up with a technique that allows clients to access information from a server without having to necessarily decrypt a data. The technique is referred to as searchable encryption, and it addresses security challenges in cloud computing.
One accepted way of using searchable encryption is through search key. Abdalla et al. (2014) state “A search key is a data structure that stores document collections while supporting efficient keyword searches” (p. 350). With this approach, a client feeds in a keyword and the key generates an indicator to the file that has the searched phrase.
They allege that the key can only be safe if the search function bears a “trapdoor” for the search phrase (Abdalla et al., 2014). Otherwise, the search function cannot reveal any information about the protected data.
One method of creating a searchable encryption system is to have the user encrypt both the information and key and send the two to the server. To look for data, the user creates and sends a trapdoor for the data, which the server applies to look for pointers to the necessary file.
The search key improves security for data stored on cloud servers. Nonetheless, the method has a number of limitations. One of them is that it can reveal some information about the user's search. In particular, the search key method can expose the access pattern, which tells which files match the search query.
One can mitigate this limitation by using oblivious random access memory (ORAM), which reduces the chances of data leakage.
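A minimal sketch of the search-key idea, assuming an HMAC-based construction (a common textbook approach, not necessarily the scheme Abdalla et al. analyze): the client builds an index of keyword trapdoors before uploading, and the server matches opaque tokens without ever seeing the keywords themselves.

```python
import hmac
import hashlib

def trapdoor(key: bytes, keyword: str) -> str:
    """Deterministic token for a keyword; useless without the client's key."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, docs: dict) -> dict:
    """Client-side: map keyword trapdoors to document ids before upload."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(trapdoor(key, word), []).append(doc_id)
    return index

def server_search(index: dict, token: str) -> list:
    """Server-side: match the opaque token; the keyword is never revealed."""
    return index.get(token, [])

key = b"client-secret-key"
docs = {"d1": "quarterly revenue report", "d2": "revenue forecast", "d3": "hr policy"}
index = build_index(key, docs)               # uploaded alongside encrypted docs
hits = server_search(index, trapdoor(key, "revenue"))
```

Note that the sketch also exhibits the limitation discussed above: the server learns which document ids match each token (the access pattern), even though it never learns the keyword.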
Use of Searchable Encryption
Bosch, Hartel, Jonker and Peter (2014) claim “Anticipations are high that the worldwide cloud computing market will grow at a 36% compound annual growth rate through 2016, reaching a total market size of $19.5 billion” (p. 3). Thus, searchable encryption has tremendous potential.
Because security is a major problem in cloud computing, it means that people will have to look for security measures to safeguard their data. Besides, organizations go for systems that give them full control of their data without compromising its security.
Searchable encryption allows organizations to access their data without worrying about its functionality and safety. Today, few cloud computing providers use searchable encryption on their servers (Bosch et al., 2014). Consequently, many businesses dread saving their data in the cloud.
Cloud services help organizations to cut down on operations cost by enhancing operations and reducing the time that products and services take to reach the market. Once searchable encryption is introduced to the business world, majority of the enterprises will no longer worry about the safety of their data. Besides, they will integrate cloud computing in their information technology (IT) infrastructure.
Some companies have already incorporated searchable encryption in cloud computing. Hitachi, a global engineering and IT firm, has already “Incorporated a form of searchable encryption in a standard algorithm for DNA genome analysis” (Bosch et al., 2014, p. 27).
The particular tool the company is using is referred to as the Basic Local Alignment Search Tool (BLAST). The tool was developed exclusively to run genome studies in the cloud. Bosch et al. (2014) allege that security was especially essential in this project since the search queries repeatedly contained many identical variables.
Failure to encrypt the searches would have subjected the project to security risks, since the small number of variables being searched would have made it easy to penetrate the server.
Role of Federal Government
The federal government is actively participating in the effort to protect cloud servers. Lately, the National Institute of Standards and Technology (NIST) issued a statement that outlined the challenges encountered in trying to secure cloud servers. The statement cited encrypted search as a primary problem.
According to NIST, it is hard to control search keys even in established systems and data centers because of the quantity of data. Additionally, the amount of cryptographic keys and need for sharing data with multiple users make securing cloud servers an uphill task (Bosch et al., 2014).
In cloud computing, logical and physical management of resources is shared among the cloud actors. Bosch et al. (2014) allege “The actors comprise the users, hosting companies, and brokers who bring together varied cloud providers” (p. 47). The federal government is helping to develop measures to address these challenges.
A fascinating feature of searchable encryption is its impact on the state. In an unprecedented manner, the application of searchable encryption may hamper the efforts of the government. Cloud hosting makes it easy for the government to access information about different companies and individuals.
The government liaises with cloud hosting companies to get the information. For instance, in case of cyber attacks, the government works with cloud hosting companies to pursue the perpetrators (Bosch et al., 2014). Adoption of searchable encryption can limit information sharing between cloud hosting businesses and the government.
Cloud hosting companies no longer have access to users' data since users assume full control of their own information. Hence, it is difficult for the government to obtain information that can help it prevent cyber crimes or arrest perpetrators.
Conclusion
The increase in cyber attacks has forced public and private institutions to invest in novel cyber security technologies. Organizations have invested in moving target defense, real-time forensic analysis, searchable encryption, and remote agent technologies to protect their networks from cyber attacks.
Moving target defense allows companies to change key variables such as IP addresses dynamically. Hence, hackers can hardly predict the pattern of network applications or identify the attack surface. Searchable encryption allows companies to encrypt and transmit classified data in such a way that unauthorized users cannot access it.
The technology goes a long way to protect cloud computing from cyber attacks. Additionally, with real-time forensic analysis, institutions can monitor events in their networks and respond to any suspicious or illegal activities.
The federal government has actively supported and nurtured emerging cyber security technologies. The government has heavily invested in research and projects aimed at improving the existing technologies and developing new ones.
For instance, the federal government through the Office of Science and Technology Policy launched a research program to develop tailored trustworthy spaces for digital communication networks.
The federal government has mainly concentrated on transistor-transistor logic (TTL). In addition, the federal government is encouraging public and private institutions to integrate emerging cyber security technologies in their networks.
The government holds that tailored trustworthy spaces, moving target defense, and searchable encryption are the only techniques that can withstand modern cyber threats. The three technologies, if further improved, can combat existing and future threats.
Even though the federal government has made the fight against cyber attacks a concerted effort between public and private institutions, it is important to recognize that if not well managed, this venture may cause conflicts among the countries.
For instance, the federal government has already established a remote agent program that is capable of harvesting military intelligence and other classified information from unknowing institutions. If well managed, such a program may help to fight cyber attacks by collecting and relaying information regarding attackers.
However, if misused, it might lead to conflicts between countries or firms. Hence, it is recommended that all emerging cyber security technologies are utilized prudently. There should be an independent body that supervises implementation of the emerging technologies.
References
Abdalla, M., Bellare, M., Catalano, D., Kiltz, E., Kohno, T., Lange, T., Malone-Lee, J., Neven, G., Paillier, P., & Shi, H. (2014). Searchable encryption revisited: Consistency properties, relation to anonymous IBE, and extensions. Journal of Cryptology, 21(3), 350-391.
Bosch, C., Hartel, P., Jonker, W., & Peter, A. (2014). A survey of provably secure searchable encryption. ACM Computing Surveys, 47(2), 1-51.
Casey, E. (2014). Handbook of digital forensics and investigation. Burlington: Academic Press.
Garfinkel, S. (2014). Digital forensic research: The next ten years. Digital Investigation, 7(3), 564-573.
Geers, K. (2014). The challenge of cyber attack deterrence. Computer Law & Security Review, 26(3), 298-303.
Jajodia, S., Ghosh, A., Swarup, V., Wang, C., & Wang, S. (2014). Moving target defense: Creating asymmetric uncertainty for cyber threats. New York: McGraw-Hill.
Jang-Jaccard, J., & Surya, N. (2014). A survey of emerging threats in cybersecurity. Journal of Computer and System Science, 80(5), 973-993.
Kundur, D., Feng, X., Mashayekh, S., Liu, S., Zourntos, T., & Butler-Purry, K. (2014). Towards modeling the impact of cyber attacks on smart grid. International Journal of Security and Networks, 6(1), 2-13.
Lynn, W. (2014). Defending a new domain: The Pentagon’s cyberstrategy. Foreign Affairs, 89(5), 97-108.
Speicher, C. (2014). Security fabric-tailored trustworthy spaces: Flexibility based on policy management. Santa Clara: Santa Clara University.