What technology has virtualization given rise to?
Virtualization Technology
An Introduction to Virtualization
In Virtualization for Security, 2009
What Is Virtualization?
-
Virtualization technologies have been around since the 1960s. Starting with the Atlas and M44/44X projects, the concepts of time-sharing and virtual memory were introduced to the computing world.
-
Funded by large research centers and system manufacturers, early virtualization technology was only available to those with sufficient resources and clout to fund the purchase of the big-iron equipment.
-
As time-sharing evolved, IBM developed the roots and early architecture of the virtual machine monitor, or VMM. Many of the features and design elements of the System/370 and its succeeding iterations are still found in modern-day virtualization technologies.
-
After a short quiet period when the computing world took its eyes off of virtualization, a resurgent emphasis began again in the mid-1990s, putting virtualization back into the limelight as an effective means to gain high returns on a company's investment.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781597493055000013
Future Trends
Nihad Ahmad Hassan , Rami Hijazi , in Data Hiding Techniques in Windows OS, 2017
Virtualization Technology
Virtualization technology has a huge impact on the IT industry, especially in data centers, to save costs and increase performance. On the data hiding side, a virtual machine can be used and launched from within a USB stick. This imposes great risks on organizations trying to fight against data leakage. A user can fully launch an OS from a USB stick, allowing him/her to use steganography tools without leaving any traces on the host machine. If accessing a USB is prohibited by an organization's security policy (and this is an excellent security measure that must be implemented without exception), a user can use the IaaS model of cloud computing and install VM software on a hosted account. Then he/she can perform stego actions on the remote server, thus bypassing all network security measures related to fighting against data hiding techniques.
The virtual machine can also impose a real challenge to forensic investigators. A user can conduct a criminal activity or conceal data using a VM and then delete it from the host machine. Recovering deleted VM files (especially after performing data wiping on the HDD) and then retrieving information from them is very difficult, and impossible in many situations.
To recap, to enhance security in an organization, regular actions performed by regular users must be restricted. For instance, it is better to prohibit access to all models of cloud services and to technically prevent users from attaching USB sticks and all removable devices to the corporate network. It is also recommended to prevent users from synchronizing their mobile phones with work PCs and from using corporate Internet connections on their mobile devices.
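As an illustration of the kind of technical control recommended above, the short sketch below (not from the chapter) disables the Windows USB mass-storage driver by setting the Start value of the USBSTOR service to 4. It assumes a Windows host and administrative rights; a real deployment would normally push such a setting through Group Policy rather than a script.

```python
# Minimal sketch (assumption: Windows host, run with administrative rights).
# Disabling the USBSTOR service prevents USB mass-storage devices from mounting,
# which blocks the "launch an OS or VM from a USB stick" scenario described above.
import winreg

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
DISABLED = 4   # service start type 4 = disabled
ENABLED = 3    # service start type 3 = load on demand (Windows default)

def set_usb_storage(enabled: bool) -> None:
    """Enable or disable the USB mass-storage driver via its Start value."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD,
                          ENABLED if enabled else DISABLED)

if __name__ == "__main__":
    set_usb_storage(enabled=False)  # block USB mass storage on this machine
```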
Virtualization technology is advancing rapidly, while more developers are aiming to build cloud-friendly applications. All these advances can be exploited later to simplify data hiding techniques and data leakage.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128044490000087
IT Audit Components
Stephen D. Gantz , in The Basics of IT Audit, 2014
Virtualized environments
Virtualization technology provides an alternative technical approach to delivering infrastructure, platforms and operating systems, servers, software, and systems and applications. Most virtualized computing environments have much in common with conventional data centers, but employ high-performing hardware and specialized software that enables a single physical server to function as multiple concurrently running instances. This approach increases capacity utilization and, in IT service-based models such as cloud computing, allows organizations to make more efficient use of their IT resources by scaling up or down as business needs warrant. Auditing virtualized computing environments uses many of the same procedures and criteria used for data center audits, with additional emphasis on the provisioning, deprovisioning, management, and maintenance of multiple virtual servers that share computing, network, and infrastructure resources.
The use of cloud computing and associated third-party service providers is becoming sufficiently common that IT audits may address such services as distinct from other audited components. In many respects, including the significant use of virtualization technology, cloud computing services are quite similar to the conventional outsourced application hosting and managed infrastructure services long used by some organizations. Distinctions emphasized by cloud service vendors include on-demand service provisioning, ubiquitous network access, resource pooling, elastic capabilities and services, and metered usage and associated billing and payment models. The anticipated growth in cloud computing is one factor motivating the development of cloud-specific control frameworks, intended in particular to address concerns about information security in cloud computing. Available frameworks include the Cloud Controls Matrix [16] developed by the nonprofit Cloud Security Alliance and the Federal Risk and Authorization Management Program (FedRAMP) [17] administered by the General Services Administration for use by cloud service providers serving US government agencies. These control frameworks offer IT auditors additional points of reference on the types of controls that should be present in cloud computing environments.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124171596000067
Why SDN?
Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017
2.4.1 Compute and Storage Virtualization
Virtualization technology has been around for decades. The first commercially available VM technology for IBM mainframes was released in 1972 [10], complete with the hypervisor and the ability to abstract the hardware below, allowing multiple heterogeneous instances of other operating systems to run above it in their own space. In 1998 VMware was established and began to deliver software for virtualizing desktops as well as servers.
Use of this compute virtualization technology did not explode until data centers became prevalent, and the need to dynamically create and tear down servers, as well as to move them from one physical server to another, became important. Once this occurred, however, the state of data center operations immediately changed. Servers could be instantiated with a mouse click, and could be moved without significantly disrupting the operation of the server being moved.
Creating a new VM, or moving a VM from one physical server to another, is straightforward from a server administrator's perspective, and may be achieved very rapidly. Virtualization software such as VMware, Hyper-V, KVM, and XenServer are examples of products that allow server administrators to readily create and move virtual machines. This has reduced the time needed to start up a new instance of a server to a matter of minutes or even seconds. Fig. 2.3 shows the simple creation of a new instance of a virtual machine on a different physical server.
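As a concrete illustration of this point (not an example given in the chapter), the sketch below uses the libvirt Python bindings, which manage KVM/QEMU among other hypervisors, to define a new VM from an XML description and to live-migrate a running VM between two physical hosts. The host URIs, the domain name web01, and the minimal XML are hypothetical placeholders.

```python
# Illustrative sketch using the libvirt Python bindings (pip install libvirt-python).
# Assumes two KVM hosts reachable over SSH; names and XML are placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>web01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

def create_vm(conn: libvirt.virConnect) -> libvirt.virDomain:
    """Define and start a new VM from an XML description."""
    dom = conn.defineXML(DOMAIN_XML)   # persistent definition on this host
    dom.create()                       # power it on
    return dom

def live_migrate(name: str, src_uri: str, dst_uri: str) -> None:
    """Move a running VM from one physical server to another."""
    src = libvirt.open(src_uri)
    dst = libvirt.open(dst_uri)
    dom = src.lookupByName(name)
    # VIR_MIGRATE_LIVE keeps the guest running while memory pages are copied over.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    src.close()
    dst.close()

if __name__ == "__main__":
    live_migrate("web01",
                 "qemu+ssh://host-a/system",
                 "qemu+ssh://host-b/system")
```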
Likewise, storage virtualization has existed for quite some time, as has the concept of abstracting storage blocks and allowing them to be separated from the actual physical storage hardware. As with servers, this achieves efficiency in terms of speed (e.g., moving often-used data to a faster device), as well as in terms of utilization (e.g., allowing multiple servers to share the same physical storage device).
These technological advancements allow servers and storage to be manipulated quickly and efficiently. While these advances in compute and storage virtualization have been taking place, the same has not been true in the networking domain [11].
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128045558000028
Forensic Analysis
In Virtualization for Security, 2009
Summary
Virtualization technology offers many benefits in the field of digital forensics. With virtualization, investigators have the ability to produce higher quality forensics in less time. The choice to refresh the image to ensure integrity is no longer painful; it is a simple touch of the "refresh" button. Moving from your Windows tools to your Unix tools is simply a reboot away. Virtualization permits us to observe the suspect's computer operating on the suspect's data without fear of contamination. This alone would make the use of virtualization desirable, but there is much more. Virtualization gives us the ability to analyze booby-traps and time bombs left by the suspect without putting the evidence in jeopardy. If the case you are investigating affects your entire enterprise, virtualization permits you to observe and instrument the suspect computer to gather intelligence about external components of the incident and to identify internal participants.
Let us not forget the value obtained by using virtualization to demonstrate complex evidence in a way the judge and jury can readily understand. Even though the demonstration is not in and of itself evidence, the power of demonstration cannot be overstated.
In short, virtualization is a powerful tool in the forensic investigator's toolkit.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781597493055000098
Energy Efficiency in Data Centers and Clouds
Seyed Morteza Nabavinejad , Maziar Goudarzi , in Advances in Computers, 2016
3.3.3 Hypervisor Enhancement
Virtualization technology has a promising effect on resource utilization in datacenters. Since the main concern at the time of designing current hypervisors such as Xen was computing-intensive applications, they suffer from poor network I/O performance. For instance, single-root I/O virtualization, which is the current standard for network virtualization, is interrupt based. Since handling each interrupt is costly, the performance of network virtualization depends on the resource allocation policy in each hypervisor.
To address this problem, Ref. [44] proposes a packet aggregation mechanism that can handle the transfer process in a more efficient and rapid manner. However, the aggregation step itself introduces a new source of delay that must be addressed. Hence, queuing theory has been used to model and dynamically tune the system in order to achieve the best tradeoff between delay and throughput.
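The chapter does not spell out the queuing model, so the following is only a toy illustration of that delay/throughput tradeoff: packets are assumed to arrive at rate λ, each interrupt carries a fixed cost plus a per-packet cost, batches of N packets are served by an M/M/1-style queue, and a packet additionally waits for its batch to fill. All parameter values are assumptions, not figures from Ref. [44].

```python
# Illustrative sketch only: a toy M/M/1-style model of the delay/throughput
# tradeoff created by packet aggregation (batching N packets per interrupt).

LAMBDA = 50_000.0    # packet arrival rate (packets/s), assumed
C_INT = 12e-6        # fixed cost of taking one interrupt (s), assumed
C_PKT = 2e-6         # per-packet processing cost (s), assumed

def mean_delay(batch: int) -> float:
    """Approximate mean per-packet delay for a given aggregation factor."""
    fill_wait = (batch - 1) / (2.0 * LAMBDA)        # average wait for the batch to fill
    service = C_INT + batch * C_PKT                 # time to process one batch
    batch_rate = LAMBDA / batch                     # batches arriving per second
    if batch_rate * service >= 1.0:                 # overloaded: queue is unstable
        return float("inf")
    queueing = service / (1.0 - batch_rate * service)  # M/M/1 mean time in system
    return fill_wait + queueing

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16, 32):
        print(f"batch={n:2d}  mean delay ~ {mean_delay(n) * 1e6:8.1f} us")
    best = min(range(1, 65), key=mean_delay)
    print("best aggregation factor under this toy model:", best)
```

Under these assumed numbers, a small amount of aggregation lowers the interrupt overhead enough to reduce total delay, while very large batches make the fill-up wait dominate, which is the tradeoff the dynamic tuning targets.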
Regarding the resource allocation policy problem, the credit-based scheduler, the de facto resource scheduler in the Xen hypervisor, cannot handle I/O-intensive workloads properly. The reason is that it is not aware of the different behaviors of the various VMs and handles all of them in the same manner. Thus, the I/O-intensive VMs do not earn enough credit for handling network interrupts and hence retrieve data slowly, which leads to high latency and response time.
To tackle this problem and eliminate the congestion caused by the scheduler, Ref. [45] introduces a workload-aware network virtualization model. This model monitors the behavior of VMs and divides them, based on their behavior, into two categories, I/O-intensive and CPU-intensive, and then handles them with Shared Scheduling and Agile Credit Allocation. When an I/O-intensive VM faces burst traffic, shared scheduling gives it more credit so it is able to handle the traffic. Agile credit allocation is responsible for adjusting the total credit based on the number of I/O-intensive VMs in order to reduce the wait time for each I/O-intensive VM.
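A rough sketch of this idea is given below. The mechanism names come from the description above, but the classification threshold, credit figures, and allocation logic are simplifications, not the actual implementation of Ref. [45].

```python
# Conceptual sketch (a simplification, not the Ref. [45] implementation):
# classify VMs by recent behavior and skew credit toward I/O-intensive VMs
# during traffic bursts, roughly in the spirit of Shared Scheduling and
# Agile Credit Allocation described above.
from dataclasses import dataclass

TOTAL_CREDIT = 1200            # total credit distributed per accounting period (assumed)
IO_INTERRUPT_THRESHOLD = 500   # interrupts/period above which a VM counts as I/O-intensive

@dataclass
class VM:
    name: str
    interrupts: int            # network interrupts observed in the last period
    bursting: bool = False     # is it currently facing burst traffic?

def is_io_intensive(vm: VM) -> bool:
    return vm.interrupts >= IO_INTERRUPT_THRESHOLD

def allocate_credit(vms: list[VM]) -> dict[str, int]:
    """Give every VM a base share, then add a bonus to bursting I/O-intensive VMs."""
    base = TOTAL_CREDIT // len(vms)
    credits = {vm.name: base for vm in vms}
    io_vms = [vm for vm in vms if is_io_intensive(vm) and vm.bursting]
    if io_vms:
        bonus = TOTAL_CREDIT // (4 * len(io_vms))   # extra credit scaled by number of I/O VMs
        for vm in io_vms:
            credits[vm.name] += bonus
    return credits

if __name__ == "__main__":
    vms = [VM("web", 900, bursting=True), VM("batch", 40), VM("db", 650)]
    print(allocate_credit(vms))   # the bursting I/O-intensive VM gets extra credit
```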
Reference [46] first introduces a semantic gap that exists between the VMM and VMs. The gap is that the VMM is unaware of the processes inside VMs, so it cannot schedule the VMs efficiently. As an example, when a VM sends a request to another co-located VM, the co-located VM must earn vCPU time so it can process the request and provide the response as soon as possible, but the current scheduling in the VMM does not consider this. As a result, the response latency increases and it may lead to quality of service (QoS) violations. Moreover, it gets worse when the co-located VMs are CPU or I/O intensive because the competition for CPU increases dramatically. Figure 5 illustrates the mechanism of inter-VM communication in current hypervisors.
To resolve the problem, they propose the communication-aware inter-VM scheduling (CIVSched) algorithm, which is aware of communication among co-located VMs. CIVSched monitors the packets that are sent through the network, identifies the target VM, and schedules the VM in a manner that reduces response latency. The CIVSched prototype is implemented on the Xen hypervisor.
For each DomU guest (VM), there is a virtual front-end driver to which the VM sends its requests for I/O operations. These requests are then sent to the back-end driver, which resides in the Dom0 guest. Finally, the back-end driver sends the captured requests to the real device driver and returns the responses to the front-end driver.
CIVSched must abide by two design principles: low latency for inter-VM scheduling and low latency for the inner-VM process. These two design principles help CIVSched decrease the inter-VM latency. To realize the two above-mentioned requirements, CIVSched adds five modules to the Xen I/O mechanism. The AutoCover (Automatic Discovery) module finds the co-located VMs and stores their MAC addresses and IDs in a mapping table. CivMonitor checks all the packets transmitted by VMs and, when it finds an inter-VM packet, informs CivScheduler about it. CivScheduler then gives more credit to the target VM so it can handle the packet as fast as possible. At this point the first design principle (low latency for inter-VM scheduling) is satisfied, but the other one still needs attention. Regarding the second principle, CivMonitor identifies the process of the target VM that will receive the packet via the TCP/UDP port number within the packet and passes the information to the target VM. Finally, the PidScheduler and PidTrans modules within the guest VM schedule the target process so as to decrease latency.
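The schematic sketch below shows roughly how such a monitor could detect an inter-VM packet by consulting the co-located-VM table and then boost the target VM's credit. The module names follow the text, but the data structures and numbers are illustrative assumptions, not CIVSched source code.

```python
# Schematic illustration only: a CivMonitor-like check that spots an inter-VM
# packet by looking up the destination MAC in the co-located-VM table built by
# an AutoCover-like module, then asks a CivScheduler-like component to boost
# the target VM's credit. Structures and numbers are assumptions.

# Mapping table kept by the discovery module: MAC address -> local VM id.
colocated_vms = {
    "52:54:00:aa:01:01": "vm1",
    "52:54:00:aa:01:02": "vm2",
}

credits = {"vm1": 300, "vm2": 300, "vm3": 300}
BOOST = 100  # extra credit granted to the target of an inter-VM packet (assumed)

def on_packet(dst_mac: str, dst_port: int) -> None:
    """Called for every packet leaving a local VM."""
    target = colocated_vms.get(dst_mac)
    if target is None:
        return                      # destination is not co-located: nothing to do
    credits[target] += BOOST        # let the target VM run soon and answer quickly
    # The real design also passes dst_port into the guest so a PidScheduler-like
    # module can prioritize the receiving process; that part is omitted here.

on_packet("52:54:00:aa:01:02", 80)
print(credits)   # vm2 now has extra credit
```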
To evaluate CIVSched, it has been implemented on Xen hypervisor version 4.1.2 and compared with the XenandCo scheduler [47] (another proposed scheduler for Xen) and the Credit scheduler, which is the base scheduler in Xen. To compare network latency, the experiments consist of a ping-pong test, a simulation test, and a real-world Web application scenario, but with synthetic benchmarks. Fairness guarantees are also evaluated because the fairness of the scheduler directly affects the fairness of the CPU resources allocated to each VM. The UnixBench suite 4.1.0 is adopted for evaluating the performance overhead of CIVSched on the host's performance. Performance overhead is measured at two levels: when there are just two VMs on the host (light consolidation) and when there are seven VMs running simultaneously on the host (heavy consolidation).
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/S0065245815000698
State of the Art on Technology and Practices for Improving the Energy Efficiency of Data Storage
Marcos Dias de Assunção , Laurent Lefèvre , in Advances in Computers, 2012
4.3.1 Combining Server and Storage Virtualization
By combining server virtualization with storage virtualization, it is possible to create disk pools and virtual volumes whose capacity can be increased on demand according to the applications' needs. Typical storage efficiency of traditional storage arrays is in the 30–40% range. Storage virtualization can increase the efficiency to 70% or higher according to certain reports [35], which results in lower storage requirements and energy savings.
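A back-of-the-envelope calculation with assumed numbers shows what such an efficiency gain means in practice:

```python
# Back-of-the-envelope sketch with assumed numbers: how much raw disk must be
# purchased (and powered) to hold 100 TB of actual data at different
# storage-efficiency levels.
usable_tb = 100.0

for label, efficiency in [("traditional array (~35%)", 0.35),
                          ("virtualized storage (~70%)", 0.70)]:
    raw_tb = usable_tb / efficiency
    print(f"{label}: {raw_tb:6.1f} TB of raw capacity needed")

# At 70% efficiency, roughly half the raw capacity (and its energy draw)
# is needed compared with 35%.
```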
Storage virtualization technologies can be classified into the following categories [24]:
- •
-
Block-level virtualization: this technique consists in creating a storage pool with resources from multiple network devices and making them available as a single central storage resource. This technique, used in many SANs, simplifies management and reduces cost.
- •
-
Storage tier virtualization: this virtualization technique is generally termed Hierarchical Storage Management (HSM) and allows data to be migrated automatically between different types of storage without users being aware. Software systems for automated tiering are used for carrying out such data migration activities. This approach reduces cost and power consumption because it allows only data that is frequently accessed to be stored on high-performance storage, while data accessed less often can be placed on less-expensive and more power-efficient equipment that uses techniques such as MAID and data de-duplication.
- •
-
Virtualization across time to create active archives: this type of storage virtualization, also known as active archiving, extends the notion of virtualization and enables online access to data that would otherwise be offline. Tier virtualization software systems are used to dynamically identify the data that should be archived on disk-to-disk backup or tape libraries, or brought back to active storage.
Storage virtualization is a technology that complements other solutions such as server virtualization by enabling the quick creation of snapshots and facilitating virtual machine migration. It also allows for thin provisioning, where actual storage capacity is allocated to virtual machines when they need to write data rather than allocated in advance.
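To make the thin-provisioning idea concrete, the sketch below (an illustration, not any vendor's implementation) maps virtual blocks to physical blocks from a shared pool only when a block is first written:

```python
# Illustrative sketch of thin provisioning: a virtual volume advertises a large
# logical size, but physical blocks are taken from the shared pool only when a
# virtual block is first written.

class ThinVolume:
    def __init__(self, logical_blocks: int, pool: list[int]):
        self.logical_blocks = logical_blocks
        self.pool = pool                 # free physical block numbers, shared by volumes
        self.mapping = {}                # virtual block -> physical block

    def write(self, vblock: int, data: bytes) -> int:
        if not 0 <= vblock < self.logical_blocks:
            raise IndexError("virtual block out of range")
        if vblock not in self.mapping:   # allocate on first write only
            if not self.pool:
                raise RuntimeError("shared pool exhausted")
            self.mapping[vblock] = self.pool.pop()
        # a real system would now write `data` to the mapped physical block
        return self.mapping[vblock]

    def allocated(self) -> int:
        return len(self.mapping)

pool = list(range(1000))                             # 1000 physical blocks in the shared pool
vol = ThinVolume(logical_blocks=100_000, pool=pool)  # advertises far more than exists
vol.write(7, b"hello")
vol.write(42_000, b"world")
print(vol.allocated(), "physical blocks actually consumed")   # -> 2
```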
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780123965288000043
A Toolkit for Modeling and Simulation of Real-time Virtual Machine Allocation in a Cloud Data Center
Wenhong Tian, Yong Zhao, in Optimized Cloud Resource Management and Scheduling, 2015
11.2.2 Modeling VM allocation
With virtualization technologies, cloud computing provides flexibility in resource allocation. For example, a PM with two processing cores can host two or more VMs on each core concurrently. VMs can only be allocated if the total amount of processing power used by all VMs on a host does not exceed the amount available on that host.
Taking the widely used example of Amazon EC2, we show that a uniform view of different types of VMs is possible. Table 11.1 provides eight types of VMs from Amazon EC2 online information. Amazon EC2 does not provide information on its hardware configuration; however, we can form three types of different PMs (or PM pools) based on compute units. In a real CDC, for example, a PM with 2×68.4 GB memory, 16 cores×3.25 units, and 2×1690 GB storage can be provided. In this way, a uniform view of different types of VMs can be formed. This kind of classification provides a uniform view of virtualized resources for heterogeneous virtualization platforms, e.g., Xen, KVM, and VMware, and brings great benefits for VM management and allocation. Customers only need to select suitable types of VMs based on their requirements. There are eight types of VMs in EC2, as given in Table 11.1, where MEM stands for memory in GB, CPU is normalized to units (each CPU unit is equal to a 1 GHz 2007 Intel Pentium processor [4]), and Sto stands for hard disk storage in GB. Three types of PMs are considered for heterogeneous cases, as given in Table 11.2.
MEM | CPU (units) | BW (or Sto) | VM |
---|---|---|---|
1.7 | 1 (1 core×1 unit) | 160 | 1-1(1) |
7.5 | 4 (2 cores×2 units) | 850 | 1-2(2) |
15.0 | 8 (4 cores×2 units) | 1690 | 1-3(3) |
17.1 | 6.5 (2 cores×3.25 units) | 420 | 2-1(4) |
34.2 | 13 (4 cores×3.25 units) | 850 | 2-2(5) |
68.4 | 26 (8 cores×3.25 units) | 1690 | 2-3(6) |
1.7 | 5 (2 cores×2.5 units) | 350 | 3-1(7) |
7.0 | 20 (8 cores×2.5 units) | 1690 | 3-2(8) |
PM | CPU (units) | MEM | BW (or Sto) |
---|---|---|---|
1 | 16 (4 cores×4 units) | 160 | 1-1(1) |
2 | 52 (16 cores×3.25 units) | 850 | 1-2(2) |
3 | 40 (16 cores×2.5 units) | 1690 | 1-3(3) |
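Using the CPU units from Tables 11.1 and 11.2, a minimal first-fit check of the allocation constraint stated earlier might look like the sketch below; the placement logic is illustrative and is not taken from CloudSched.

```python
# Minimal first-fit sketch of the allocation constraint described above:
# a VM fits on a PM only if the PM's remaining CPU units cover the VM's demand.
# CPU figures follow Tables 11.1 and 11.2; the logic is illustrative.

PM_CPU_UNITS = {1: 16, 2: 52, 3: 40}          # PM type -> total CPU units (Table 11.2)
VM_CPU_UNITS = {1: 1, 2: 4, 3: 8, 4: 6.5,     # VM type -> CPU units (Table 11.1)
                5: 13, 6: 26, 7: 5, 8: 20}

def first_fit(vm_requests: list[int], pms: list[int]) -> dict[int, list[int]]:
    """Place each requested VM type on the first PM with enough spare CPU units."""
    remaining = {i: PM_CPU_UNITS[t] for i, t in enumerate(pms)}
    placement: dict[int, list[int]] = {i: [] for i in remaining}
    for vm_type in vm_requests:
        demand = VM_CPU_UNITS[vm_type]
        for pm_id, spare in remaining.items():
            if demand <= spare:
                placement[pm_id].append(vm_type)
                remaining[pm_id] -= demand
                break
        else:
            raise RuntimeError(f"no PM can host VM type {vm_type}")
    return placement

# Example: place VM types 5, 2, 8, and 1 on one PM of type 2 and one of type 3.
print(first_fit([5, 2, 8, 1], pms=[2, 3]))
```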
Currently, CloudSched implements dynamic load balancing, maximizing utilization, and energy-efficient scheduling algorithms. Other algorithms, such as reliability-oriented and cost-oriented ones, can be applied as well.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780128014769000112
Integration of Big Data and Data Warehousing
Krish Krishnan, in Data Warehousing in the Age of Big Data, 2013
Data virtualization
Data virtualization technology can be used to create the next-generation data warehouse platform. As shown in Figure 10.9, the biggest benefit of this deployment is the reuse of existing infrastructure for the structured portion of the data warehouse. This approach also provides an opportunity to distribute workload effectively across the platforms, thereby allowing for the best optimization to be executed in the architectures. Data virtualization coupled with a strong semantic architecture can create a scalable solution.
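As a toy illustration of the integration-layer idea (not an example from the chapter), the sketch below exposes one virtual view over two physical sources, a relational table queried in place and a file-based extract, without first copying either into a central store. The names and schemas are hypothetical.

```python
# Toy illustration of a data virtualization layer (names and schemas are made up):
# a single "virtual view" joins rows pulled live from a relational source with
# rows read from a file-based big-data extract, without loading either into a
# central warehouse first.
import csv
import io
import sqlite3

# --- physical source 1: relational store, queried in place ---
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
db.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])

# --- physical source 2: semi-structured extract (e.g., landed from a big data platform) ---
clicks_csv = io.StringIO("customer_id,clicks\n1,120\n2,45\n")

def virtual_view():
    """The semantic layer: one logical table assembled from both sources on demand."""
    names = dict(db.execute("SELECT id, name FROM customers"))
    for row in csv.DictReader(clicks_csv):
        cid = int(row["customer_id"])
        yield {"customer": names.get(cid, "unknown"), "clicks": int(row["clicks"])}

for record in virtual_view():
    print(record)
```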
- ●
-
Pros:
- ●
-
Extremely scalable and flexible architecture.
- ●
-
Workload optimized.
- ●
-
Easy to maintain.
- ●
-
Lower initial cost of deployment.
- ●
-
Cons:
- ●
-
Lack of governance can create too many silos and degrade performance.
- ●
-
Complex query processing can become degraded over a period of time.
- ●
-
Performance at the integration layer may need periodic maintenance.
- ●
-
Data loading is isolated across the layers. This provides a foundation to create a robust data management strategy.
- ●
-
Data availability is controlled at each layer, and security rules can be implemented at each layer as required, avoiding any associated overhead for other layers.
- ●
-
Data volumes can be managed across the individual layers of data based on the data type, the life-cycle requirements for the data, and the cost of the storage.
- ●
-
Storage performance is based on the data categories and the performance requirements, and the storage tiers can be configured.
- ●
-
Operational costs: in this architecture the operational cost calculation has fixed and variable cost components. The variable costs are related to processing and computing infrastructure and labor costs. The fixed costs are related to maintenance of the data virtualization platform and its related costs.
- ●
-
Pitfalls to avoid:
- ●
-
Loosely coupled data integration.
- ●
-
Incorrect data granularity across the different systems.
- ●
-
Poor metadata across the systems.
- ●
-
Lack of data governance.
- ●
-
Complex data integration involving too many computations at the integration layer.
- ●
-
Poorly designed semantic architecture.
There are many more possible architectural deployments to integrate Big Data and create the next-generation data warehouse platform. This chapter's goal is to provide you a starter kit to begin looking at what it will take for any organization to implement the next-generation data warehouse. In the next section we discuss the semantic framework approach.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9780124058910000106
Choosing the Right Solution for the Job
In Virtualization for Security, 2009
Publisher Summary
The development of virtualization technology has continued to improve the solutions for age-old IT problems. This chapter focuses on these improved solutions. In order to effectively use the virtualization solutions that exist, it is important to understand immediate and future needs. Some people view virtualization as a more efficient use of resources but at the same time might think that using a virtualization platform will not produce the same performance as using a single operating system on a single hardware platform. This may be true, but the real challenge is knowing how computing resources need to be used in the organization. IT organizations with the fastest performing systems understand the core business drivers behind the organization. Once those goals are well understood, virtualization can be a powerful weapon in realizing them. The different flavors of virtualization abstract almost every angle of the computing stack. Server virtualization solutions abstract hardware resources such as CPU, memory, disk, and network interfaces to virtual machines. In addition, virtualization solutions can abstract software interfaces, GUI interfaces, portions of operating systems, portions of applications, and even low-level kernel drivers.
Read full chapter
URL:
https://www.sciencedirect.com/science/article/pii/B9781597493055000025
Source: https://www.sciencedirect.com/topics/computer-science/virtualization-technology