A network manager is responsible for designing databases, understanding them, and following the conventions of the database template. He/she should ensure that sound relational database modelling practices are followed for all projects across the network. It is also the duty of the network manager to ensure that spatial and tabular data models are synchronized in all projects across the network.
The manager should coordinate efforts between similar data models across networks. In addition, he/she should ensure the creation of suitable design documentation, such as model diagrams and data dictionaries. He/she should ensure that database designs are compatible with field collection needs, and assist with other projects to ensure that quality database designs are achieved.
- He/she should take charge in coordinating efforts between comparable data models in all networks across the region; in so doing, he/she will contribute to national efforts to design cohesive datasets.
- He/she should implement and maintain databases, ensuring appropriate database software is selected for all projects across the network.
- He/she should ensure suitable storage, accessibility, and cataloguing of digital data. This includes developing archiving procedures and ensuring that they are followed.
- He/she should evaluate legacy data for usability.
- He/she should develop and implement QC/QA procedures and data validation tools.
- He/she should ensure adequate IT security practices are applied to datasets and other products in the network.
- He/she should coordinate efforts between similar datasets across all networks in the region.
- He/she should familiarize his/her group with the NPS’s general IT security strategy.
- He/she should populate datasets, ensuring that location data collection methods are reliable enough to yield accurate spatial data.
- He/she should ensure data integrity as datasets are converted to electronic format.
- He/she should provide programming skills for data entry forms and other input methods.
- He/she should ensure FGDC-compliant metadata for spatial datasets.
- He/she should generate spatial datasets from tabular data.
- He/she should analyze data to identify potential data anomalies.
- He/she should coordinate cataloguing of reports (data mining) and legacy data.
- He/she should ensure population of national datasets such as NPSpecies, Dataset Catalog, NPBib, and ANCS+.
- The network manager should ensure that data allows for appropriate setup of ArcView-to-Access links.
- The network manager should make considerable contributions to national efforts of collecting cohesive datasets.
- The network manager should assist with or coordinate training for network and park data technicians and managers.
- The network manager should provide Data Sets/Products which will assist others in the analysis and understanding of collected data.
- The network manager should format data sets as required by others for analysis or reporting tools.
- The network manager should develop web-based and desktop (distributable) applications for data entry, viewing, and reporting.
- The network manager should develop GIS products for data dissemination and analysis.
- The network manager should apply appropriate records management techniques to data sets and products.
- In coordination with the other staff, the network manager should help determine archival strategy for park and network data.
- The network manager should prepare data sets and products for public release.
- The network manager should help statisticians with developing data formats to fit statistical models or analysis packages.
- The network manager should assist with the creation of ArcView-to-Access links.
- The network manager should assist with creation of graphics presentations.
Problems as Revealed in the Case Study
- An unusable network as a result of a high-priority software process running out of control.
- Unusual network-wide disturbances.
- A freakish hardware malfunction caused a faulty sequence of network control packets to be generated. In turn, this affected the apportionment of software resources in the IMPs, leading to a scenario in which one IMP process used an excessive amount of resources to the detriment of other IMP processes.
- IMPs across the entire country were affected.
- There was a routing problem that affected all the IMPs throughout the network.
- Restarted IMPs received bad updates from neighbors that had not been restarted.
- The routing process was designed with no protection against malformed updates.
All of the bad updates were coming from a single IMP, IMP 50. IMP 50 had become faulty just before the network-wide outage occurred, and thus failed to generate any updates during the outage period.
IMP 50’s immediate neighbor, IMP 29, was suffering from a hardware malfunction that dropped bits, but it remained up while the network was in bad shape.
Addressing the Above Problems
Routing directs forwarding, the passing of logically addressed packets from their source towards their final destination through intermediate nodes; these are normally hardware devices called bridges, firewalls, routers, gateways, or switches. Ordinary computers with multiple network cards can also forward packets and perform routing, although with more limited performance. The routing process typically directs forwarding on the basis of routing tables, which maintain a record of the routes to different network destinations. Constructing the routing tables, which are held in the routers’ memory, is therefore crucial for efficient routing.
Routing is often contrasted with bridging in its assumption that network addresses are structured and that similar addresses imply proximity within the network. Because structured addresses allow a single routing-table entry to represent the route to a whole group of devices, structured addressing outperforms unstructured addressing (bridging) in large networks and has become the dominant form of addressing on the Internet; bridging nonetheless remains widely used within localized environments.
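The idea that one structured-address entry can stand for a whole group of destinations can be sketched as a longest-prefix-match lookup. This is a minimal illustration; the prefixes and next-hop names below are hypothetical.

```python
import ipaddress

# A tiny routing table: each entry covers an entire prefix of addresses.
# Prefixes and next-hop names are made up for illustration.
routing_table = [
    (ipaddress.ip_network("10.0.0.0/8"), "router-a"),
    (ipaddress.ip_network("10.1.0.0/16"), "router-b"),
    (ipaddress.ip_network("0.0.0.0/0"), "default-gw"),  # default route
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    # The longest (most specific) matching prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # router-b  (the /16 beats the /8)
print(next_hop("10.9.9.9"))   # router-a
print(next_hop("8.8.8.8"))    # default-gw
```

Three table entries here cover every possible IPv4 destination, which is precisely the scaling advantage structured addressing has over per-device (bridged) forwarding tables.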
Small networks may use manually configured routing tables, but larger networks involve complex topologies that can change rapidly, making the manual construction of routing tables infeasible. Even so, the majority of the public switched telephone network (PSTN) uses pre-computed routing tables, with fallback routes for when the most direct route becomes blocked. Dynamic routing attempts to solve these problems by constructing routing tables automatically, based on information carried by routing protocols, allowing the network to act nearly autonomously in avoiding network failures and blockages.
Dynamic routing dominates the Internet. Nevertheless, configuring the routing protocols often requires a skilled touch; one should not suppose that networking technology has developed to the point of completely automating routing.
Distance-vector routing protocol
Distance-vector algorithms apply the Bellman-Ford algorithm. This approach assigns a number, called the cost, to each of the links between the nodes in the network. Nodes then relay information from point A to point B via the path that results in the lowest total cost (i.e., the sum of the costs of the links between the nodes used).
The algorithm operates in a very straightforward way. When it starts, a node knows only of its immediate neighbors and the direct cost involved in reaching them. At regular intervals, each node sends each of its neighbors its own current idea of the total cost to get to all the destinations it knows of. The neighboring nodes examine this information and compare it with what they already know; anything that represents an improvement on what they already have, they insert into their own routing tables. Eventually, all the nodes in the network discover the best next hop and the best total cost for every destination.
When one of the nodes goes down, the nodes that used it as their next hop for certain destinations discard those entries and create new routing-table information. They then convey this information to all adjacent nodes, which repeat the process. Eventually, all the nodes in the network receive the updated information and discover new paths to all the destinations they can still reach.
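The update rule described above can be sketched in a few lines. This is a toy model with a made-up four-node topology; a real protocol exchanges messages over links rather than reading neighbors’ tables directly.

```python
# Link costs between direct neighbors (undirected, hypothetical topology).
links = {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "C"): 5}

def neighbors(node):
    for (u, v), cost in links.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

nodes = {"A", "B", "C", "D"}
# Each node starts knowing only itself and its immediate neighbors:
# table[node][dest] = (total cost, next hop).
table = {n: {n: (0, n)} for n in nodes}
for n in nodes:
    for nb, c in neighbors(n):
        table[n][nb] = (c, nb)

# Exchange rounds: adopt any neighbor-reported route that improves the table,
# and keep going until no table changes (convergence).
changed = True
while changed:
    changed = False
    for n in nodes:
        for nb, c in neighbors(n):
            for dest, (nb_cost, _) in list(table[nb].items()):
                new_cost = c + nb_cost
                if dest not in table[n] or new_cost < table[n][dest][0]:
                    table[n][dest] = (new_cost, nb)
                    changed = True

print(table["A"]["D"])  # → (4, 'B'): reach D via B at total cost 1+2+1
```

Note that node A never learns the topology itself; it only learns that forwarding to B is the cheapest way to reach D, which is all a distance-vector router stores.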
Link-state routing protocol
When applying link-state algorithms, each node uses a map of the network in the form of a graph as its fundamental data. To produce this, each node fills the entire network with information on what other nodes it can connect to, and each node then independently assembles this information into a map. Using this map, each router then independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra’s algorithm. The result is a tree rooted at the current node such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node.
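The per-node computation can be sketched with Dijkstra’s algorithm over a small hypothetical graph; the topology and costs are illustrative, and the routing table is reduced to a first-hop map.

```python
import heapq

# The full network map every link-state node assembles (hypothetical costs).
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"C": 1},
}

def dijkstra(source):
    """Least-cost paths from source; also record the first hop on each path,
    which is what actually goes into the routing table."""
    dist = {source: 0}
    first_hop = {}                 # dest -> best next hop from source
    pq = [(0, source, None)]       # (cost so far, node, first hop used)
    visited = set()
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            first_hop[node] = hop
        for nb, c in graph[node].items():
            new = cost + c
            if nb not in visited and new < dist.get(nb, float("inf")):
                dist[nb] = new
                # A's direct neighbors become their own first hop.
                heapq.heappush(pq, (new, nb, nb if hop is None else hop))
    return dist, first_hop

dist, hops = dijkstra("A")
print(dist["D"], hops["D"])  # cost 4, next hop B (A-B-C-D beats A-C-D)
```

The shortest-path tree is implicit in `first_hop`: every destination maps to the neighbor through which its least-cost branch of the tree leaves the root.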
Link-state and distance-vector routing are both intra-domain routing protocols: they are used inside an autonomous system, but not between autonomous systems. Neither scales to large networks, so they cannot be used for inter-domain routing. Distance-vector routing is subject to instability if there are more than a few hops in the domain, while link-state routing needs a huge amount of resources to calculate routing tables and creates heavy traffic because of flooding.
Path-vector routing is used for inter-domain routing and is similar to distance-vector routing. In path-vector routing, we assume there is one node (there can be more) in each autonomous system that acts on behalf of the entire autonomous system; this node is called the speaker node. The speaker node creates a routing table and advertises it to the speaker nodes in neighboring autonomous systems. The idea is the same as in distance-vector routing, except that only the speaker nodes in each autonomous system can communicate with each other, and a speaker node advertises the path, not the metric, of the nodes in its own or other autonomous systems.
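A minimal sketch of the speaker-node behavior, assuming a simple shortest-AS-path preference; the prefix and AS names are made up. The key property is that carrying the full path makes loop detection trivial: a speaker discards any path that already contains its own AS.

```python
# Hypothetical local autonomous system of this speaker node.
local_as = "AS1"
routes = {}  # destination prefix -> AS path chosen so far

def receive_advertisement(prefix, as_path):
    """Process one advertisement from a neighboring speaker node."""
    if local_as in as_path:
        return  # the path already traverses us: discard to prevent a loop
    current = routes.get(prefix)
    # Simplified preference: keep the shortest AS path seen so far.
    if current is None or len(as_path) < len(current):
        routes[prefix] = as_path

receive_advertisement("10.0.0.0/8", ["AS2", "AS4", "AS5"])
receive_advertisement("10.0.0.0/8", ["AS3", "AS5"])          # shorter: adopted
receive_advertisement("10.0.0.0/8", ["AS2", "AS1", "AS5"])   # loop: discarded
print(routes["10.0.0.0/8"])  # → ['AS3', 'AS5']
```

Real inter-domain routing (BGP) layers policy on top of this: as the next section notes, business relationships can override pure path length.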
Distance-vector routing protocols are normally straightforward and efficient in small networks, and require little management. However, naïve distance-vector algorithms do not scale well (due to the count-to-infinity problem), and have poor convergence properties, which has led to the development of more complex but more scalable algorithms for use in large networks, such as link-state routing protocols and loop-free distance-vector protocols (e.g. EIGRP). Loop-free distance-vector protocols are as robust and manageable as distance-vector protocols, while avoiding counting to infinity and hence having good worst-case convergence times.
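The count-to-infinity problem mentioned above can be demonstrated with a deliberately simplified two-node exchange after a link failure, using a RIP-style hop limit of 16 as "infinity". The topology A-B-C and the alternating update order are assumptions of this toy model.

```python
INFINITY = 16  # RIP-style hop limit standing in for "unreachable"

# Line topology A-B-C with unit link costs. The B-C link has just failed:
# B has lost its direct route to C, but A still advertises its stale cost 2
# (which went through B).
cost = {"A": 2, "B": INFINITY}   # each node's believed cost to reach C
link = {("A", "B"): 1, ("B", "A"): 1}

rounds = 0
# The nodes alternately adopt each other's stale advertisement plus the
# link cost, so the believed cost creeps upward until it hits the limit.
while cost["A"] < INFINITY or cost["B"] < INFINITY:
    cost["B"] = min(INFINITY, link[("B", "A")] + cost["A"])  # B routes via A
    cost["A"] = min(INFINITY, link[("A", "B")] + cost["B"])  # A routes via B
    rounds += 1

print(rounds, cost)  # → 8 {'A': 16, 'B': 16}
```

With no hop limit the loop above would never terminate, which is why naïve distance-vector deployments cap the metric and why loop-free variants were developed.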
The primary advantage of link-state routing is that it responds more quickly, and in a bounded amount of time, to connectivity changes. Also, the link-state packets sent over the network are smaller than the packets used in distance-vector routing: distance-vector routing requires a node’s entire routing table to be transmitted, while in link-state routing only information about the node’s immediate neighbors is transmitted. Consequently, these packets are small enough that they do not use network resources to any significant degree. The primary disadvantage of link-state routing is that it requires more storage and more computing to run than distance-vector routing.
In many of today’s networks, routing is complicated by the fact that no single entity is responsible for selecting paths: instead, multiple entities are involved in selecting paths, or even parts of a single path. Inefficiency or complications can occur if these entities choose paths to selfishly optimize their own objectives, which may conflict with the objectives of other participants.
The Internet is partitioned into autonomous systems (ASs), such as internet service providers, each of which has control over routes involving its network, at multiple levels. First, AS-level paths are selected via the BGP protocol, which produces a sequence of ASs through which packets will flow. Each AS may have multiple paths, offered by neighboring ASs, from which to choose. Its decision often involves business relationships with these neighboring ASs, which may be unrelated to path quality or latency. Secondly, once an AS-level path has been selected, there are often multiple corresponding router-level paths, in part because two ISPs may be connected in multiple locations. In choosing the single router-level path, it is common practice for each ISP to employ hot-potato routing: sending traffic along the path that minimizes the distance through the ISP’s own network, even if that path lengthens the total distance to the destination.
Consider two ISPs, A and B, which each have a presence in New York, connected by a fast link with a latency of 5 ms, and which each have a presence in London, connected by a 5 ms link. Suppose both ISPs have trans-Atlantic links connecting their two networks, but A’s link has a latency of 100 ms and B’s has a latency of 120 ms. When routing a message from a source in A’s London network to a destination in B’s New York network, A may choose to immediately send the message to B in London. This saves A the work of sending it along an expensive trans-Atlantic link, but causes the message to experience a latency of 125 ms, when the other route would have been 20 ms faster.
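The latency arithmetic in this example can be tallied directly:

```python
# Hot-potato: A hands the message to B in London (5 ms), and B then carries
# it across its slower 120 ms trans-Atlantic link.
hot_potato = 5 + 120       # = 125 ms end to end

# Cooperative: A carries it across its own 100 ms trans-Atlantic link and
# hands it to B over the 5 ms New York link.
cooperative = 100 + 5      # = 105 ms end to end

print(hot_potato - cooperative)  # → 20 ms penalty from hot-potato routing
```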
A 2003 measurement study of Internet routes found that, between pairs of neighboring ISPs, more than 30% of paths have inflated latency due to hot-potato routing, with 5% of paths being delayed by at least 12 ms. Inflation due to AS-level path selection, while substantial, was attributed primarily to BGP’s lack of a mechanism to directly optimize for latency, rather than to selfish routing policies. It was also suggested that, were an appropriate mechanism in place, ISPs would be willing to cooperate to reduce latency rather than use hot-potato routing.