Comparison Between Two Operating Systems Report


Introduction

An operating system (OS) is a set of programs that makes a computer usable by managing its hardware and providing the environment in which other programs run. Common desktop and laptop operating systems include Linux, Microsoft Windows, and Mac OS, while mobile phones run operating systems such as Android, iOS, and Symbian (Smith, 2005).

A computer becomes useful only when an operating system is running on it. To distinguish between the different operating systems, one should first understand the key concepts on which they are built, chief among them kernel services and application-level services. These services largely determine how an operating system performs once it is installed on a computer.

An operating system should manage all of a computer's resources efficiently, and it helps the computer accomplish many tasks. It allows several users to share the same machine and provides the user interface presented on the desktop.

It prevents users from interfering with data that belongs to other users while still allowing people to share data stored on the computer. The operating system is also responsible for scheduling resources among users, letting the computer communicate with other computers over networks, recovering from errors, and organizing files so that they can be accessed easily and securely (Parsons & Oja, 2008).

Operating systems today play two complementary roles: they keep ordinary processes running at a low privilege level, and they invoke the kernel to perform work in a high-privilege state. People working in offices or at home therefore adopt different operating systems based on their usability and their ability to handle processes and services.

It is also the role of the operating system to create abstractions. Abstraction allows the operating system to hide the low-level details of the hardware and to expose higher-level functions that programs can use to perform their tasks (Siever, Figgins, & Love, 2009).

Through abstraction, the operating system turns the world of physical devices into a virtual one. There are several reasons for this. To begin with, the code needed to control peripheral devices is usually not in a standard form.

As a result, the operating system provides a framework into which hardware drivers can be plugged. The drivers then perform operations, such as input/output, on behalf of programs. In addition, the operating system can introduce other essential functions when it interacts with the hardware.

For example, the operating system makes file abstraction possible: programs work with files rather than with the disks on which the data is actually stored. Moreover, abstraction allows the operating system to enhance the security of the computer.
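As a brief illustration of this idea, the following minimal C sketch reads a file using nothing but the kernel's file abstraction; the file name is only a placeholder, and the program never touches disk blocks, controllers, or driver code.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The program asks the OS for "a file"; the kernel hides the disk,
       the controller, and the driver behind this simple interface. */
    int fd = open("example.txt", O_RDONLY);   /* placeholder file name */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[128];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, n);

    close(fd);
    return 0;
}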

Every operating system has its own user interface, and this is one reason people develop different preferences among operating systems for their daily work. The main operating systems that people use are Linux and Windows, and the manner in which they manage processes differs.

For example, Linux is known to execute tasks faster than Windows, while Windows is supported by many complex applications, which leads many people to adopt it for their daily work. The two systems also interpret commands differently because they run on partitions with different file systems, and they differ in how they support application integration.

As a result, the two operating systems differ in security enhancement, memory management, and process scheduling. These differences affect their performance and lead people to hold differing opinions about which operating system to install: some rely on Linux, while others rely on Windows for their daily computing tasks (Siever, Figgins, & Love, 2009).

In the search for a better operating system, many developers are therefore concentrating on integrated user interfaces that can encompass the activities of multiple processes running across networked computers. When choosing an operating system, people consequently need to look at it from several points of view.

They should look at operating systems as resource managers and as extended machines (Smith, 2005). This paper will therefore provide a comparison between Windows XP and Linux since they are the major operating systems which people use to meet their daily computing needs.

Memory Management

Windows XP Memory Management

Windows XP is generally good at managing a computer's memory. Some people believe that third-party memory optimizers are a good idea, but most of these programs provide no real benefit. Windows XP is one of the most successful products that Microsoft has ever made.

The overall performance of the operating system is good: Windows XP boots and resumes from hibernation quickly, and its applications respond promptly to commands. These qualities make using a computer that runs Windows XP very satisfying. On a computer that meets Microsoft's minimum requirements, Windows XP performs well (Microsoft, 2005).

The recommended amount of random access memory for a computer running Windows XP is 128 megabytes. With this memory size, Windows XP stands ahead of previous versions of Windows. Performance improves further when more memory is added, which particularly benefits memory-intensive media applications, so many people are tempted to add memory to their computers to improve performance.

Indeed, the easiest way to improve a computer's performance is to add memory. Even at the recommended 128 megabytes, however, Windows XP has been observed to perform better than earlier versions of Windows.

For example, Windows XP has been observed to run more efficiently on 64 MB of RAM than Windows Millennium Edition (Windows ME). Windows XP is therefore regarded as a satisfactory upgrade for people running Windows ME on lower-end computers (Sechrest & Fortin, 2001).

The virtual memory created by Windows XP plays an important role in how the system performs. On a 32-bit Intel CPU, for example, each running program can address up to 4 GB of virtual memory, which is usually far more than the physical RAM installed in the computer.

The hardware therefore lets programs keep running until they exhaust the 4 GB of virtual address space that the operating system provides. Only the parts of a program that are actually in use need to be loaded into physical RAM; this is what allows programs to run efficiently while addressing more memory than the machine physically has.

In this scheme, the processor translates the virtual addresses used by program instructions into physical memory addresses. The mapping between virtual and physical memory is managed in pages of 4 kilobytes each (Sechrest & Fortin, 2001), which makes the virtual memory easy for the system to map efficiently.
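These ideas can be sketched, in a hedged way, with the Win32 API: the C fragment below queries the page size and reserves a region of virtual address space with VirtualAlloc, committing (and thus backing with physical memory) only a single page. The 64 MB figure is arbitrary and chosen purely for illustration.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Page size: %lu bytes\n", si.dwPageSize);   /* typically 4096 on x86 */

    /* Reserve 64 MB of virtual address space without committing physical memory. */
    SIZE_T size = 64 * 1024 * 1024;
    void *region = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!region)
        return 1;

    /* Commit only the first page; physical memory is supplied on demand. */
    void *page = VirtualAlloc(region, si.dwPageSize, MEM_COMMIT, PAGE_READWRITE);
    if (page)
        ((char *)page)[0] = 1;   /* touching the page faults it into RAM */

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}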

In Windows XP, the RAM holds the 'non-paged pool' and the 'paged pool'. The non-paged pool contains the parts of the system that are never paged out because they hold core system code and data. For example, a blue screen reporting a 'page fault in the non-paged area' usually indicates a serious hardware problem, caused either by faulty RAM modules or by damaged data on the hard disk.

There are other instances in which third-party utility software such as Norton places its own modules in RAM, which can stop the computer from functioning properly; in that case, uninstalling the program is the recommended way to get rid of the errors. The paged pool, on the other hand, holds program code and data pages that may have been written to. Whatever memory remains in RAM is used to enlarge the cache (Smith, 2005).

Windows XP ensures that any free memory in RAM is put to work in order to improve performance. For example, when a program is closed, Windows XP retains its code in memory, provided it has no other use for that RAM, so that the program starts faster the next time it is needed.

These cached pages are discarded as soon as a different use for the RAM is found, which is why all of the RAM in a computer often appears to be in use. Some programs claim to 'free' RAM for fresh use, but in practice they only reduce the computer's performance.

Memory management in Windows XP is therefore efficient: it keeps the computer running as well as possible by ensuring that all of its resources are fully utilized (Sechrest & Fortin, 2001). As a result, many people have held on to Windows XP despite later releases such as Windows Vista and Windows 7.

Linux Memory Management

The Linux operating system is designed for many different architectures. It separates physical memory into three different zones depending on the hardware on which the operating system is installed.

The three zones are ZONE_DMA, ZONE_NORMAL, and ZONE_HIGHMEM. ZONE_DMA holds memory suitable for direct memory access (DMA), which keeps the operating system compatible with Industry Standard Architecture (ISA) devices that can only address the first 16 MB of physical memory.

ZONE_NORMAL covers the physical memory, up to 896 MB, that the kernel can map directly, while ZONE_HIGHMEM covers the physical memory above the 896 MB mark (Awesome Inc, 2011).

The page allocator is the physical memory manager in Linux-based operating systems such as Ubuntu 11.04. Each zone has its own allocator, and it is the duty of these allocators to allocate and free physical memory within their zones. In some cases an allocation may need to draw pages from another zone.

This happens, for instance, when DMA space must be conserved or when the preferred zone runs too low on memory; the kernel then falls back on whatever zone is available. Most kernel allocations, however, come from ZONE_NORMAL, which therefore plays the critical role in the performance of the operating system on a machine (Awesome Inc, 2011).

Ubuntu 11.04, for example, uses a page size of 4096 bytes, which keeps internal fragmentation low. It ships with the 2.6.38 kernel, which includes the Transparent Huge Page (THP) feature; THP lets the kernel back large regions of memory with huge pages, reducing address-translation overhead on contemporary processors. Memory in Ubuntu 11.04 is allocated in two major ways.

It is allocated statically at boot time, for instance to give memory to drivers, and it is allocated dynamically through the page allocator. The buddy system and the slab allocator are the two memory managers used to allocate memory for the kernel.

Requests made through the kmalloc service are ultimately satisfied by the buddy allocator, which splits larger blocks of pages into smaller pieces as needed before handing them to the kernel (Awesome Inc, 2011). The slab allocation mechanism, on the other hand, manages caches of kernel data structures held in the computer's memory.
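As a minimal, hedged sketch of this interface (assuming a kernel-module build environment with matching headers and a Makefile, which are not shown), the fragment below allocates and frees kernel memory with kmalloc and kfree; the module name and messages are purely illustrative.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>

static char *buffer;

static int __init demo_init(void)
{
    /* kmalloc returns physically contiguous kernel memory; the request is
       satisfied from the page/buddy allocator through the slab layer. */
    buffer = kmalloc(4096, GFP_KERNEL);
    if (!buffer)
        return -ENOMEM;
    pr_info("demo: allocated 4096 bytes of kernel memory\n");
    return 0;
}

static void __exit demo_exit(void)
{
    kfree(buffer);
    pr_info("demo: memory released\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");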

Apart from these main memory managers, Ubuntu 11.04 has other subsystems that help manage physical memory, namely the virtual memory system and the page cache. The page cache stores pages of file data, including data read over the network from networked file systems.

In addition, it serves as the main cache for memory-mapped files and libraries. Ubuntu therefore uses virtual memory extensively; the mmap system call is used to map devices and files into memory (Awesome Inc, 2011), and every process is given its own consistent virtual address space.
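The mmap interface mentioned above can be illustrated with a short user-space C sketch; the file path is only an example of a small readable file, and error handling is kept to a minimum.

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* any readable file will do */
    if (fd < 0)
        return 1;

    struct stat st;
    fstat(fd, &st);

    /* Map the file into the process's virtual address space; pages are
       brought into physical memory (and the page cache) only when touched. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
        return 1;

    fwrite(data, 1, st.st_size, stdout);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}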

Ubuntu 11.04 can extend its physical memory by using a hard-disk partition referred to as "swap". The architecture of the operating system also supports the Physical Address Extension (PAE) kernel, which allows a 32-bit operating system to use up to 64 GB of RAM.

It is said that even though this additional RAM is a bit slower to manage, it is still much faster than a hard disk (Henderson & Allen, 2011). A 64-bit kernel can use large amounts of memory more directly, but 64-bit driver support was still limited, which made PAE a practical alternative.

It is often noticed that when a computer is left idle for a long time, its memory usage goes up. This is because Ubuntu tends to use all available memory to improve performance. For example, if a machine has 1 GB of memory and the running programs use 200 MB of it, the remaining 800 MB is used as a cache for data read from disk. The idea behind caching is that accessing data on the hard drive takes a long time.

Retrieving data that is already in main memory, by contrast, is much faster, so caching uses the main memory to make the computer perform tasks faster. Windows also caches file data in otherwise unused RAM, but Ubuntu is generally regarded as more aggressive about it, so less of the physical memory sits idle.
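On a Linux system this behaviour can be observed directly from /proc/meminfo; the small C sketch below simply prints the total, free, and cached figures.

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    if (!fp)
        return 1;

    char line[256];
    while (fgets(line, sizeof(line), fp)) {
        /* MemFree is memory no one is using; Cached is memory the kernel is
           currently lending to the page cache and will reclaim on demand. */
        if (strncmp(line, "MemTotal:", 9) == 0 ||
            strncmp(line, "MemFree:", 8) == 0 ||
            strncmp(line, "Cached:", 7) == 0)
            fputs(line, stdout);
    }
    fclose(fp);
    return 0;
}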

Ubuntu can therefore be observed to make better use of physical memory than Windows XP (Henderson & Allen, 2011), and a computer running Ubuntu is, in this respect, more efficient than a computer running Windows.

Process Scheduling

Scheduling refers to the manner in which processes and threads are given access to the resources of a system. Its goal is to keep the system operating efficiently by keeping the computer's resources in a balanced state, and above all to let the system multitask without losing efficiency. A scheduler mostly emphasizes latency and throughput.

Throughput is the number of processes completed in a given amount of time, while latency is the time between when a request is made and when the first response is produced (Parsons & Oja, 2008). Throughput and latency often conflict, so depending on a user's needs, preference can be given to one or the other.

Windows XP Process Scheduling

The early versions of Windows and MS-DOS were not capable of multitasking, so a scheduler was not relevant at the time. Windows 3.1x had a non-pre-emptive scheduler that did not interrupt running programs in any way; it relied on each running program to yield control before another program could execute. Operating systems based on Windows NT, however, use a multi-level feedback queue, which lets them run multiple programs easily.

These operating systems define priority levels ranging from 0 to 31. Priorities 0 to 15 are regarded as normal priorities, whereas priorities 16 to 31 are soft real-time priorities, and assigning a process to the real-time range normally requires administrative privileges.

Level 0 is reserved for the operating system. The kernel may change the priority of a running application depending on its CPU usage (Sechrest & Fortin, 2001); to increase the responsiveness of the system, it lowers the priority of processes that use the CPU heavily. With the advent of Windows Vista, the scheduler was further modified to take advantage of the cycle counter register found in many modern processors.
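As a hedged illustration of how these priorities are exposed to programs, the C sketch below raises the priority class of the current process and the relative priority of its main thread through the Win32 API; moving into the real-time class would additionally require administrative privileges.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Raise the priority class of the current process (REALTIME_PRIORITY_CLASS
       would need administrative privileges, so a milder class is used here). */
    if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS))
        printf("SetPriorityClass failed: %lu\n", GetLastError());

    /* Raise the relative priority of the current thread within that class. */
    if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST))
        printf("SetThreadPriority failed: %lu\n", GetLastError());

    printf("Priority class is now 0x%lx\n", GetPriorityClass(GetCurrentProcess()));
    return 0;
}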

Ubuntu Process Scheduling

Ubuntu uses a pre-emptive kernel with support for symmetric multiprocessing (SMP). When processes run, two spaces are involved: kernel space and user space. Process scheduling in Ubuntu is carried out in kernel space. The Ubuntu operating system is made up of the kernel, the system libraries, and the system utilities (Henderson & Allen, 2011).

The kernel is at the heart of the operating system because it provides all of the abstractions, including the virtual memory that lets processes run smoothly within the operating system. The system libraries provide functions that allow applications to interact with the kernel without needing kernel privileges.

The system utilities, on the other hand, are a set of programs that manage individual tasks of the operating system, such as initializing it (Siever, Figgins, & Love, 2009). The system utilities also handle tasks such as connecting to networks and responding to requests to log into the computer.

Processes in Ubuntu have several properties: a process identity, a process environment, and a process context. The process identity comprises the process ID, credentials, and personality; the process ID identifies a particular process running in the operating system.

The process ID does not change until the process terminates. The credentials, also known as user IDs, determine the rights the process has on the computer. The process environment consists of variables, such as the locale, that determine things like the language displayed by the system (Parsons & Oja, 2008). The process context reflects the current state of the running process.

Process scheduling in Ubuntu is done in two ways: time-sharing scheduling and pre-emptive real-time scheduling. Time-sharing scheduling is aimed at ordinary processes running side by side, whereas real-time scheduling targets the performance of time-critical tasks. Together, these two mechanisms govern how runnable processes are executed.

The round-robin scheduling method determines how the time-sharing algorithm cycles through the processes in the system: it visits each runnable process in turn, giving each a slice of CPU time. In this context, processes are given priorities ranging from 0 to 99 (Smith, 2005), and processes with smaller numerical values are given higher priority.
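The kernel exposes its round-robin policy to programs as SCHED_RR. The hedged C sketch below requests that policy for the calling process; the call normally requires root privileges, and in the POSIX interface the valid real-time priority range is whatever sched_get_priority_min and sched_get_priority_max report.

#define _GNU_SOURCE
#include <sched.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int min = sched_get_priority_min(SCHED_RR);
    int max = sched_get_priority_max(SCHED_RR);

    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = min;   /* a modest priority within the valid range */

    /* Switching to SCHED_RR normally requires root (or CAP_SYS_NICE). */
    if (sched_setscheduler(0, SCHED_RR, &sp) == -1) {
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
        return 1;
    }
    printf("Now scheduled under SCHED_RR (valid priorities %d-%d)\n", min, max);
    return 0;
}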

File Systems

Different file systems are used depending on the operating system a computer is running. Windows traditionally used the FAT file system before migrating to NTFS, which is regarded as more stable and secure. Beyond that, other file systems are adopted depending on the tasks to which a computer is put.

The most popular file systems for the Linux operating system are ext3 and ext4; other file systems, such as ext2, ReiserFS, JFS, and XFS, are used less often (Parsons & Oja, 2008).

File Systems in Windows XP

Files in Windows XP can be organized under either the FAT file system or the more stable NTFS file system; Windows XP can run on both. There are certain considerations a person should weigh before deciding which file system to use.

For example, if a person wishes to use capabilities that only NTFS supports, it is recommended that the partition be formatted with NTFS (Nichol, 2002). NTFS provides fine-grained control over the files that users store on a computer and supports security and privacy for the files belonging to a particular user (Microsoft, 2005).

NTFS also provides better support for data recovery than FAT. Under NTFS, changes made to files are journaled, which makes it possible to recover the files if the program that was using them crashes.

In addition, an NTFS volume is less likely to suffer severe damage in a crash (Nichol, 2002). If the file system is damaged, however, a FAT-based machine can still be booted into DOS start-up mode, where the file system can be repaired; if an NTFS file system is damaged, it is not possible to boot into Windows that way, which makes the computer harder to repair.

When performance and economy are the concern, an NTFS partition is also the better choice. The virtual-memory paging file is limited to 8 GB on a FAT partition, whereas on NTFS it can take up whatever space is available, which helps improve the computer's performance. Searching directories is also faster on NTFS, whereas on FAT it is very slow (Nichol, 2002).
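A program can check which of these file systems a volume uses through the Win32 API. The hedged C sketch below queries drive C: (the drive letter is an assumption) and reports the file-system name along with whether per-file encryption is supported.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char fsName[MAX_PATH + 1] = "";
    DWORD serial = 0, maxComp = 0, flags = 0;

    /* The file-system name comes back as "NTFS", "FAT32", and so on. */
    if (GetVolumeInformationA("C:\\", NULL, 0, &serial, &maxComp, &flags,
                              fsName, sizeof(fsName))) {
        printf("C: uses the %s file system\n", fsName);
        if (flags & FILE_SUPPORTS_ENCRYPTION)
            printf("The volume supports per-file encryption (EFS).\n");
    } else {
        printf("GetVolumeInformation failed: %lu\n", GetLastError());
    }
    return 0;
}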

File Systems in Ubuntu

Ubuntu is based on the UNIX file-system model, in which everything on the computer is represented as a file. It is the kernel's responsibility to handle all the types of files present in the system, and the virtual file system (VFS) layer hides the implementation details of the individual file systems from the software that uses them (Smith, 2005).

Ubuntu allows a person to increase the size of a partition online, while the computer is running, and it lets users attach different devices to serve as particular directories. The file systems supported in Ubuntu include ext2, ext3, ext4, and XFS. Because ext2 is not journaled it performs fewer operations, which allows a system to boot slightly faster.

Ext3 is the successor of ext2; it is journaled and was the default file system in Ubuntu for many years. It is also possible to convert an ext3 file system to ext4 under Ubuntu, ext4 being a further advancement of ext3 (Smith, 2005).

Ext4 allows an effectively unlimited number of subdirectories and stores large files in extents, contiguous ranges of blocks, which reduces fragmentation and improves the performance of large files and of the system as a whole.
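To see which of these file systems a given mount point uses, a program can call statfs; the C sketch below checks the root file system. Note that ext2, ext3, and ext4 all report the same magic number, so the check cannot tell them apart.

#include <sys/vfs.h>       /* statfs() */
#include <linux/magic.h>   /* EXT4_SUPER_MAGIC and friends */
#include <stdio.h>

int main(void)
{
    struct statfs sb;
    if (statfs("/", &sb) != 0) {
        perror("statfs");
        return 1;
    }

    /* ext2, ext3 and ext4 all share the on-disk magic number 0xEF53. */
    if (sb.f_type == EXT4_SUPER_MAGIC)
        printf("/ is on an ext2/ext3/ext4 file system (block size %ld)\n",
               (long)sb.f_bsize);
    else
        printf("/ uses file system type 0x%lx\n", (unsigned long)sb.f_type);
    return 0;
}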

Network and Security

Windows XP

Windows XP was built on the Windows 2000 kernel, but there are significant differences between the two operating systems where security is concerned. Not all of the security features of Windows XP Professional are found in Windows XP Home Edition; the Professional edition contains additional security features (Labmice, 2006). Windows XP should use the NTFS file system on all partitions.

This is partly because NTFS partitions perform better and, more importantly, because NTFS allows users to encrypt their files and folders. Password protection is also important because it lets a person protect his or her files from intruders (Microsoft, 2005), so Windows XP gives users the option of setting passwords on their accounts, and it lets the administrator regulate the activities of the other user accounts so that sensitive data on the computer is not interfered with.
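The per-file encryption mentioned here, the Encrypting File System (EFS), is available to programs through the Win32 API. The hedged C sketch below asks NTFS to encrypt a single file; the path is purely a placeholder, and the call fails on FAT volumes and on XP Home Edition, which does not include EFS.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* EncryptFileA asks NTFS to protect the file with EFS for the current
       user. Link against Advapi32. The path below is only an example. */
    const char *path = "C:\\data\\secret.txt";

    if (EncryptFileA(path))
        printf("%s is now encrypted for the current user\n", path);
    else
        printf("EncryptFileA failed: %lu\n", GetLastError());
    return 0;
}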

Windows XP also has a firewall, which should be kept on whenever the computer is connected to the internet so that hackers cannot compromise the data on it. Windows XP further lets users share their internet connection and files through the Internet Connection Sharing feature built into the operating system (Labmice, 2006), and this feature provides a reasonable level of security when users are connected to a particular network.

Linux Security

Ubuntu 11.04 has strong support for the standard internet protocols used in Unix-to-Unix communication as well as for communication with non-Unix operating systems. The protocols supported by Ubuntu include the Xerox Network Systems protocols and other ISO-OSI protocols.

The most important set of network protocols used in Ubuntu, however, is the TCP/IP suite. Networking in Ubuntu is implemented by several layers of software: the socket interface layer, the protocol drivers layer, and the network-device drivers layer (Siever, Figgins, & Love, 2009).

All user applications connect to the network through the socket interface layer. Data from user applications and from network devices is handed to the protocol drivers layer, which determines which device or application the data should be passed to.

The protocol drivers layer can also rewrite, create, and split data packets. The network-device drivers layer is responsible for moving data onto the network; it links the system with the appropriate protocols to route data packets, using the Forwarding Information Base (FIB) to determine each packet's destination (Siever, Figgins, & Love, 2009).

This layer also keeps a cache of recent routing decisions so that routing is faster the next time it is consulted. The networking tools available for Ubuntu include graphical configuration tools, Bluetooth settings, personal file sharing, and remote desktop (Smith, 2005).
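From an application's point of view, only the socket interface layer is visible. The hedged C sketch below opens a TCP connection through that layer; the IP address and port are placeholders, and everything beneath the socket call (protocol drivers, device drivers, routing) is handled by the kernel.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* The application only talks to the socket interface layer. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(80);                            /* placeholder port */
    inet_pton(AF_INET, "93.184.216.34", &addr.sin_addr);    /* placeholder address */

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("Connected through the kernel's TCP/IP stack\n");
    else
        perror("connect");

    close(sock);
    return 0;
}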

The Ubuntu operating system is very secure. Its main security mechanisms are authentication and access control: authentication ensures that only authorized people can gain access to the system, while access control checks whether a user has the right to access particular files.

The operating system denies access when necessary. Encryption is also an important tool for protecting a user's files from intruders (Siever, Figgins, & Love, 2009); Ubuntu gives a person the option of encrypting the home folder so that only the current user and the root user can gain access to the personal files.
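The access-control check described above can be exercised from a program with the access system call. The C sketch below asks the kernel whether the calling user may read or write /etc/shadow, a file that ordinary users are normally denied; the path is just a convenient example.

#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Ask the kernel whether the calling user may read and write this file;
       the answer is based on the file's owner, group, and permission bits. */
    const char *path = "/etc/shadow";          /* readable only by root */

    if (access(path, R_OK) == 0)
        printf("%s: read access granted\n", path);
    else
        perror(path);

    if (access(path, W_OK) == 0)
        printf("%s: write access granted\n", path);
    else
        perror(path);

    return 0;
}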

Input/Output (I/O)

Windows XP I/O

In Windows, I/O devices appear under Hardware Resources, and I/O acts as the communication channel through which the hardware devices in a computer exchange data. The I/O rate strongly influences how long a computer takes to boot: in older operating systems such as Windows 2000, the data needed at boot is read in a poorly ordered sequence, so the disk spends much of its time seeking. Desktop disks can complete roughly 80 to 100 I/Os per second.

This rate is much slower on laptops (Parsons & Oja, 2008), so a slow, poorly ordered I/O sequence makes a computer take even longer to boot. With Windows XP, the boot-time I/O process was reorganized so that the data the operating system needs can be fetched more efficiently while it is starting.

In this scheme, I/O can be overlapped with device initialization, and the boot data that would otherwise be read from locations scattered all over the disk is prefetched in a more orderly way, so the computer takes less time to boot.

Many people look at direct hardware I/O as a traditional angle from which to analyse operating systems, arguing that it matters little as long as the computer controls the hardware directly. Operating systems such as DOS, Windows 95, and Windows 98 did not block access to the I/O ports in any way.

As a result, access to the I/O ports was very easy, which posed serious security threats: a misbehaving program could write to arbitrary I/O addresses and thereby disrupt devices such as the network card or the hard disk.

Operating systems such as Windows NT, 2000, ME, and XP block access to the I/O ports, yet certain programs can still manage to gain access to them (Smith, 2005). This points to a weakness in Windows: it cannot block access to the I/O ports completely.

Linux I/O

Linux, by contrast, has adopted a mechanism that effectively prevents unauthorised applications from gaining access to the I/O ports. Because Ubuntu does not run ordinary applications with root privileges, it is far less vulnerable to such security threats than Windows.

Ubuntu manages the permissions and ownership of running software processes so that a user is granted only limited access to the I/O ports. Because this method of limiting access is so strict, it is possible to program hardware I/O on Linux without the security weaknesses described above. Ubuntu achieves this by means of a trusted I/O-enabling program whose ownership is set to the root user.

When this trusted program is run with root privileges, it enables access only to the ports that are actually needed, such as the serial ports and the parallel printer port (Radcliffe, 2005). Linux operating systems therefore handle hardware I/O more securely than Windows.
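On Linux, the port-granting mechanism described here is exposed through the ioperm system call (x86 only). The hedged C sketch below, which must be run as root, enables just the three parallel-port registers at 0x378 and leaves every other port blocked; the port address is the conventional one for the first parallel port and is used purely as an example.

#include <sys/io.h>     /* ioperm(), outb() -- x86 glibc only */
#include <stdio.h>

int main(void)
{
    /* Grant this process access to the three parallel-port registers at
       0x378-0x37A. The call succeeds only when run as root (or with
       CAP_SYS_RAWIO); all other port addresses remain blocked. */
    if (ioperm(0x378, 3, 1) != 0) {
        perror("ioperm");
        return 1;
    }

    outb(0xFF, 0x378);   /* drive all data lines of the parallel port high */

    ioperm(0x378, 3, 0); /* give the access back when finished */
    return 0;
}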

Reference List

Awesome Inc 2011, Ubuntu 11.04 – General Overview. Web.

Henderson, T. & Allen, B. 2011, Ubuntu 11.04. Web.

Labmice 2006, Windows XP Security Checklist. Web.

Microsoft 2005. Web.

Nichol, A. 2002. Web.

Parsons, J. J. & Oja, D. 2008, Computer Concepts Illustrated Introductory, Cengage Learning, New York.

Radcliffe, P. J. 2005. Web.

Sechrest, S. & Fortin, M. 2001. Web.

Siever, E., Figgins, S. & Love, R. 2009, Linux in a Nutshell, O’Reilly Media Inc, Cambridge.

Smith, R. W. 2005, Linux In A Windows World, O’Reilly Media Inc, Cambridge.
