Server virtualization strategies and best practices

Below are eight best practice considerations for choosing the best servers for virtualization and beyond.

1. Sizing a server requires use case comparisons

Sizing a server is one of the first steps in building a virtualization infrastructure. The primary decision to make is between having many small servers, known as popcorn servers, or having fewer, larger servers.

Popcorn servers are best for scale-out architectures because they enable the inexpensive addition of servers as needed. Popcorn server systems grow when more copies of the app stack are added to the initial setup. Popcorn servers won't work, however, if the task requires more memory than is available.

Big servers present different options. They can offer the same separation into small instances that popcorn servers provide. Use cases and their unique needs are the primary factors when sizing a server. In-memory databases, for instance, run better on larger servers: running fewer boxes reduces the amount of data that must travel between them, which cuts both latency and bandwidth use.

Cost is another consideration. To handle big instances, large servers require numerous, costly solid-state drives. Additionally, the custom nature of large server hardware drives prices higher. The resulting cost per instance can favor smaller servers in certain contexts. Smaller servers also exist in a more competitive market, which forces vendors to push prices down.
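To make the cost comparison concrete, here is a minimal sketch of a cost-per-instance calculation. The prices and instance counts are hypothetical placeholders, not vendor figures; the point is only how the per-instance math works.

```python
# Rough cost-per-instance comparison. All prices and capacities below are
# made-up placeholders; substitute real quotes and measured instance footprints.

def cost_per_instance(server_price: float, instances_per_server: int) -> float:
    """Hardware cost divided by the number of instances the server can host."""
    return server_price / instances_per_server

# Hypothetical figures: a commodity "popcorn" server vs. a large custom build.
popcorn = cost_per_instance(server_price=4_000, instances_per_server=10)
big_box = cost_per_instance(server_price=60_000, instances_per_server=120)

print(f"Popcorn server: ${popcorn:,.0f} per instance")
print(f"Large server:   ${big_box:,.0f} per instance")
# With these placeholder numbers the smaller box wins on cost per instance,
# but the comparison flips if the large server hosts enough extra instances
# to spread its hardware premium.
```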


2. Evaluate security of servers when building infrastructure

In an increasingly security-conscious world, IT buyers must carefully consider security features during server selection. Different servers offer a variety of security features, including ones that enable detection, protection and recovery.

For example, admins can boost the security of servers with hardware validation that uses cryptographic signatures to check that only valid drivers and firmware are in place. Vendors also offer native data encryption that can safeguard data on the move and at rest.

Other servers offer the detection of unexpected and unauthorized changes to firmware or configurations. They can even lock themselves down to block these changes, alert admins and log these events. Admins can then use these logs to analyze potential security problems, vulnerabilities and threats.

If a security breach does occur, some servers are equipped with recovery features that may enable them to restore firmware to a previous state after the server detects a compromise, for example. Others can do the same for an OS or wipe all the configuration settings when necessary.

The security of servers is becoming a more prominent consideration as criminals skip past obvious targets and aim for the relatively vulnerable target of server firmware. Server firmware's growing complexity makes it a worthwhile target for attackers who want to avoid detection. For instance, a corrupted BIOS update, downloaded uncritically, can offer an easy path into a network.
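As one illustration of the firmware concern above, the sketch below checks a downloaded BIOS image against a vendor-published SHA-256 digest before it is applied. The file name and digest are placeholders; a production process would also verify the vendor's cryptographic signature, as the hardware-validation features described earlier do.

```python
# Minimal sketch: verify a downloaded firmware image against a published
# SHA-256 digest before flashing. The file name and digest are placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large images don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

published_digest = "0123abcd..."   # copied from the vendor's download page (placeholder)
image_path = "bios_update.bin"     # placeholder file name

if sha256_of(image_path) == published_digest:
    print("Checksum matches; hand the image to the vendor's flash tool.")
else:
    print("Checksum mismatch; do not apply this update.")
```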

Security is likely to grow as a feature consideration for server buyers.

3. Examine server I/O problems to avoid network bottlenecks

Server fleets are shrinking as data centers move workloads to the cloud. IT administrators must carefully consider use cases for different server systems to get the most value and best performance out of increasingly specialized deployments. Admins must avoid creating server I/O problems as they consolidate and streamline their server systems.

It's difficult to know how many VMs can reside on a server because different configurations use different amounts of memory space and processor cores. As a guideline, a server with more memory and processor cores will generally support more VMs and better consolidation. Even more consolidation is possible for organizations willing to consider blade servers or hyper-converged infrastructure systems.

For further optimization, admins must consider the effects of network limits on server I/O. Enterprise workloads are almost constantly moving data and accessing storage, but if numerous VMs share the same low-end network port, network bottlenecks can strangle server I/O performance.

A faster network interface can greatly improve server consolidation. Generally, admins can achieve this either with a 10 Gigabit Ethernet (GbE) port or by choosing a server with multiple 1 GbE ports.
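A back-of-the-envelope check like the sketch below, with assumed per-VM traffic figures, shows why a single low-end port becomes the bottleneck long before the CPUs do.

```python
# Rough headroom check for a shared network port. The per-VM traffic figure
# is an assumption; use measured averages and peaks from real workloads.

def port_headroom(port_gbps: float, vm_count: int, avg_gbps_per_vm: float) -> float:
    """Fraction of the port left free after the VMs' average demand."""
    return 1.0 - (vm_count * avg_gbps_per_vm) / port_gbps

# 20 VMs averaging 0.08 Gbps each, on a 1 GbE port vs. a 10 GbE port.
for port in (1.0, 10.0):
    headroom = port_headroom(port_gbps=port, vm_count=20, avg_gbps_per_vm=0.08)
    print(f"{port:>4.0f} GbE port: {headroom:.0%} headroom")
# The 1 GbE port is oversubscribed at average load, before any traffic bursts.
```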

4. Compare blade servers vs. rack servers

IT administrators must choose between two primary server hardware platforms when deploying virtualization: blade servers vs. rack servers. Rack servers are wide and flat, which enables stacking and bolting into larger frames. Rack servers require physical labor from IT staff, such as mounting to the rack and connecting cables.

Some admins prefer rack servers, which are generic and have a relatively low cost of entry. Others prefer blade servers, which offer a centralized management console and infrastructure efficiencies. Blade servers drive the data center trend toward single-vendor and away from multivendor, heterogeneous environments. This simplicity enables blade server systems to offer a single management plane for configuration and management.

The efficiencies of blade servers vs. rack servers require careful comparison because blade server systems run the risk of vendor lock-in.

Blade infrastructure offers better consolidation of power and cooling systems inside the blade enclosure. Cable management also favors blade servers in the blade server vs. rack debate: rack-mount servers are more likely to suffer from cable management problems, whereas blade servers consolidate many of the cable requirements behind the switches in the blade enclosure.

The density of the configuration possibilities, as well as the simple management practicalities, have put blade servers on a path of steady growth. Despite this growth, competition from hyper-converged vendors threatens to make the blade server vs. rack debate moot.

5. Consolidate servers to create efficiency and improve performance

Virtualization, storage evolution and server performance development have all enabled significant server consolidation. The more horsepower IT administrators can get, the fewer total servers will be necessary for a workload and the more they can consolidate servers.
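As a rough illustration of that consolidation math, the sketch below estimates how many older servers one new server can absorb, limited by whichever resource runs out first. The core and memory figures are assumptions, not measurements.

```python
# Rough consolidation estimate: how many older servers one new server can
# absorb, limited by whichever resource is exhausted first. Figures are
# illustrative; plan with measured utilization, not nameplate specs.

def consolidation_ratio(new_cores: int, new_mem_gb: int,
                        old_cores_used: float, old_mem_gb_used: float) -> float:
    by_cpu = new_cores / old_cores_used
    by_mem = new_mem_gb / old_mem_gb_used
    return min(by_cpu, by_mem)

ratio = consolidation_ratio(new_cores=64, new_mem_gb=1024,
                            old_cores_used=6, old_mem_gb_used=48)
print(f"One new server can replace roughly {int(ratio)} older servers")
```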

Nowadays, nonvolatile memory express drives can replace as many as six Serial-Attached SCSI drives and garner similar performance. The addition of solid-state drives, which use less power and take up less space, furthers the ability to consolidate servers.

Virtual server cluster performance is also improved thanks to the addition of in-memory databases, which have become practical due to memory expansion. This performance boost enables admins to use fewer servers or have much faster runtimes.

As organizations seek to consolidate servers even more, they often aim to replace RAID storage arrays with more compact storage appliances, but hyper-converged infrastructure (HCI) appliances offer stiff competition to traditional replacements. The demand to consolidate servers won't cease, so HCI will continue to be a competitive option.

The cloud offers another method to consolidate servers by outsourcing workloads that would have required physical hardware. Containers offer yet another method because they can increase instance density.

6. The bigger the better for container servers

Container technology offers a dramatic change to virtualization deployments, including a significant shift in server selection. Container servers favor larger hardware and, as such, IT administrators must incorporate this into their server calculations.

In comparison to hypervisor-based virtualization, containers need much less memory space for an instance. On a given server, admins might be able to pack in as many as three times as many containers as VMs.
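A simple memory-based estimate, using assumed per-instance footprints, shows where a figure like three-to-one comes from:

```python
# Illustrative density comparison for a single server. The per-instance
# memory footprints below are assumptions; measure your own app stack.

server_ram_gb = 512
hypervisor_overhead_gb = 16      # host/hypervisor reserve (assumed)
vm_footprint_gb = 4.0            # guest OS + app per VM (assumed)
container_footprint_gb = 1.25    # shared kernel, app-only footprint (assumed)

usable = server_ram_gb - hypervisor_overhead_gb
vms = int(usable / vm_footprint_gb)
containers = int(usable / container_footprint_gb)

print(f"VMs per server:        {vms}")
print(f"Containers per server: {containers}  (~{containers / vms:.1f}x the VM count)")
```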

Memory efficiency is the primary reason for the increased density in container servers, but the low overhead of containers also increases workload speed. Evolutions in storage technology have also made larger servers more suitable for containers.

Solid-state drives increase speeds and nonvolatile dual-inline memory modules boost that even more. Using nonvolatile memory express can deliver data directly to containers, even with increased instance counts.

As the number of containers increases, large container servers will likely remain the best option for admins considering this technology.

7. Consider hyper-converged servers when building a server system

Hyper-converged infrastructure (HCI) is a relatively new development for virtual infrastructure construction, and hyper-converged servers present a compelling option for IT administrators building servers. HCI is a relatively inexpensive option with advantages in throughput, latency and bandwidth.

Hyper-converged server systems combine and integrate compute, storage and network technology into one unit that a single vendor supports. HCI benefits from the evolution of disk drive and capacity technology. HCI nodes share their local disk drives with one another, which speeds up program loading compared with multiple servers sharing the same storage through a storage area network.

As disk drives have gotten smaller, vendors have begun selling servers with more drives. Data compression and deduplication became possible as solid-state drives improved speed, which meant a significant boost in effective capacity in a more consolidated form.
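The effect on effective capacity is easy to estimate. In the sketch below, the raw capacity per node and the 4:1 data reduction ratio are assumptions; real ratios vary widely with the data set.

```python
# Effective capacity after compression and deduplication. The node capacity
# and the 4:1 reduction ratio are assumptions for illustration only.

raw_tb_per_node = 24          # e.g., eight 3 TB SSDs per node (assumed)
nodes = 4
data_reduction_ratio = 4.0    # combined compression + deduplication (assumed)

raw_tb = raw_tb_per_node * nodes
effective_tb = raw_tb * data_reduction_ratio
print(f"Raw capacity:       {raw_tb} TB")
print(f"Effective capacity: {effective_tb:.0f} TB at {data_reduction_ratio:.0f}:1 reduction")
```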

Nonvolatile memory express has emerged as the best protocol for solid-state drive primary storage, which further accelerates data transfer and lowers overhead. HCI offers commercial, off-the-shelf nodes at a much lower price than traditional arrays.

As the possibilities for speed and consolidation rise, admins might find that hyper-converged servers compete well against conventional options.

8. Hybrid cloud server choices depend on infrastructure limitations

Data management and data movement are the two most important considerations when building a cloud server for a hybrid cloud system. Hybrid clouds are generally built on commercial, off-the-shelf components. IT administrators must base their choices largely on networking and storage factors.

In the U.S., fiber technology has been slow to spread, and much of the remaining wide area network infrastructure is slow, which creates a bottleneck between private and public clouds.
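A quick transfer-time estimate, with assumed link speeds and data volume, shows how sharply that WAN bottleneck constrains data movement between private and public clouds:

```python
# How long bulk data movement takes over a constrained WAN link. Link speeds,
# link efficiency and data volume are assumptions for illustration.

def transfer_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    bits = data_tb * 8e12                          # decimal TB -> bits
    return bits / (link_mbps * 1e6 * efficiency) / 3600

for link_mbps in (100, 1_000, 10_000):
    hours = transfer_hours(data_tb=10, link_mbps=link_mbps)
    print(f"{link_mbps:>6} Mbps link: {hours:7.1f} hours to move 10 TB")
```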

Generally, admins must choose between two paths for a hybrid cloud server. The first option involves powerful x64 servers that admins can virtualize with either containers or a hypervisor; the other option is a cluster of less powerful, small servers that admins can manage with orchestration software or a hypervisor.

In the former option, admins have many choices, including dual-CPU 1U servers and 2U quad-CPU servers, depending on the amount of memory needed. In the latter option, a system of small servers works well and inexpensively, but only in limited contexts, such as media delivery and web serving.