Hyper-V V3 resources can be aggregated into clusters and, through the new shareable VHDX disk stores, can create internal islands, or, for cloud-hosted purposes, external clouds whose resources should be opaque to other cloud components. We could not find constructs to rigorously test the opacity of what should be isolated clouds, but rudimentary tests appeared to confirm isolation. The VHDX format can also be dynamically resized as the need arises; we found the process fast, although disk and CPU use can spike until the modification completes, and heavy existing CPU/disk load slows resizing accordingly.
We also successfully tested Hyper-V, 2012 R2 IPAM, and Microsoft's SDN under IPv4 (other limitations prevented heavy IPv6 testing). Software-defined networking (SDN) crosses turf that is divided in many organizations between the virtualization and network management teams. Network management staff have traditionally used IPS, routing, switching, and infrastructure controls to balance traffic, hosts, even NOC hardware placement. SDN means that what were once separate disciplines are now forced to work together inside the host server's hypervisor, where the demarcation line was once the point where the RJ-45 connector meets the server chassis.
IPAM allowed us to define a base allocation of routable and/or non-routable addresses, then allocate them to VMs hosted on Hyper-V hosts or to other hosts/VMs/devices on our test network. We could, in turn, allocate virtual switches (external, private, or internal) connected with static/blocked and sticky DHCP. Inter-fabric VM movements still require a bit of homework, we found, and we recommend using a single IPAM instance.
What we like is that the SDN primitives and IPAM can work well together, given well-implemented planning steps. We could create clouds easily and keep track of address relationships. A Microsoft representative mused over the spreadsheets that carry IP relationship management information in many organizations, calling the practice crazy. We would agree, and believe that hypervisor- or host-based IPAM is a great idea. If DNS were mixed in more thoroughly (it's not), we'd be complete converts to the concept. We found it very convenient nonetheless, although errors, such as address pool depletions, were more difficult to track down when they occurred. Uniting the networking and virtualization/host management disciplines isn't going to be easy.
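The address-pool depletion errors mentioned above are easier to picture with a minimal sketch of pool-style allocation. This toy allocator is purely illustrative (the class and hostnames are our own inventions, not Microsoft's IPAM API), but it shows the basic contract: leases come out of a finite block, and an exhausted pool surfaces as an error the operator must notice.

```python
import ipaddress

class AddressPool:
    """Toy IPAM-style pool: hands out host addresses from a CIDR block
    and raises once the block is depleted. Illustrative only."""

    def __init__(self, cidr):
        self._free = list(ipaddress.ip_network(cidr).hosts())
        self._leases = {}  # hostname -> leased address

    def allocate(self, hostname):
        if not self._free:
            # In a real IPAM deployment this is the "pool depletion"
            # condition that can be hard to spot after the fact.
            raise RuntimeError(f"address pool depleted; cannot lease to {hostname}")
        addr = self._free.pop(0)
        self._leases[hostname] = addr
        return addr

    def release(self, hostname):
        self._free.append(self._leases.pop(hostname))

pool = AddressPool("192.168.10.0/30")  # a /30 yields only two usable hosts
a = pool.allocate("vm-1")              # -> 192.168.10.1
b = pool.allocate("vm-2")              # -> 192.168.10.2
# A third allocate() would raise RuntimeError: the pool is depleted.
```

The point of the sketch is the failure mode: allocation silently succeeds until the instant it can't, which is why centralized tracking beats spreadsheets.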
The Bad News
We found head-scratchers and limitations, starting with several foibles installing the operating system on what should be generic bare-metal hardware. We were able to overcome them, but warn installers that Windows 2012, and especially R2, might require server BIOS firmware updated to UEFI-compatible versions, as happened with our Lenovo ThinkServer and HP DL380 Gen8 servers. When Windows 2012 R2 (R2 or Hyper-V V3-R2) can't install, we received only an inarticulate flash of an error message; we actually took a video of it to capture that the problem was with ACPI, not UEFI. The turf war between platform providers and OS/hypervisor makers is still real and strong, but Microsoft isn't alone, as we've incurred driver/platform mysticism with VMware and Oracle, too.
We also found that the Hyper-V role cannot be re-instantiated, meaning no hypervisor on top of a hypervisor. Microsoft claims there has been no customer demand for this, but it's a limitation nonetheless. Although running a hypervisor atop a hypervisor seems silly, there are cases where it's useful. One often-cited role is in production test labs; another arises where Microsoft's SDN is used: Hyper-V V3 must always be the base layer talking to the metal and silicon of a server, precluding other schemes' direct access to the hardware and therefore impeding other SDN schemes.
The Azure Pack uses the same Hyper-V infrastructure as Windows Server 2012 R2. Microsoft offers samples of what third-party providers may supply in the form of services and ready-to-deploy, pre-built appliances. We were reminded of what TurnKey Linux started several years ago in terms of usable appliances built from Linux substrates. There isn't a huge variety of appliance samples available, but what we tested worked: full WordPress websites that were ready for skins and customization.
A Service Bus (actually a message bus) connects components in the clouds serviced by the Azure Pack and Hyper-V. The Service Bus connects Microsoft-specific API sets after a framework "namespace" is created. Communications can be published and subscribed to within the framework, and members of the namespace talk via REST, the Advanced Message Queuing Protocol (AMQP), and Windows instrumentation APIs. The Service Bus reminds us of products like Puppet, Chef, and others in the Linux world, communicating through a stack-like framework for rapid deployment and easier management of VM and infrastructure fleets.
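The publish/subscribe pattern the Service Bus implements can be sketched as a minimal in-memory bus. To be clear, this is a toy illustration of the topic/subscription model, not the Azure Pack Service Bus API; the namespace and topic names are invented for the example.

```python
from collections import defaultdict

class MessageBus:
    """Toy topic-based publish/subscribe bus, illustrating the pattern
    (not the actual Service Bus API)."""

    def __init__(self, namespace):
        self.namespace = namespace            # hypothetical framework "namespace"
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan the message out to every subscriber registered on the topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus("contoso-cloud")  # invented namespace name
received = []
bus.subscribe("vm-events", received.append)
bus.publish("vm-events", {"event": "vm-started", "host": "hyperv-01"})
# received now holds the published message
```

In the real product the transport would be REST or AMQP rather than an in-process callback list, but the decoupling is the same: publishers never address subscribers directly, only the named topic within the namespace.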
Where we upgraded Windows 7 or Windows 8 platforms to Windows 8.1, the upgrade was fast and made no mistakes. Windows XP can be run atop Hyper-V or in a Type 2 hypervisor application, but we didn't test this, as we've retired Windows XP completely and hope that readers have, too. Like Windows 8.0, 8.1 can use the latest version of Hyper-V V3 as a foundation, so that other OS versions can run on the same host hardware alongside 8.1, with resource limits on guests, plus SDN, IPAM, and other Hyper-V features.
The Windows 8.1 UI is initially identical to Windows 8.0, but adds a desktop icon that can be touched/chosen, optionally becoming a resident resource more familiar to XP and Windows 7 users. We found it's also possible to boot directly to an Apps screen that lets apps be chosen easily, although without the vendor-grouped topical drop-boxes that Windows XP and Windows 7 users might be used to. If there are many applications, the screen must be scrolled; Windows XP/7 users who have accumulated many dozens of applications might be scrolling frequently, as long application lists can fill many screens.
We found more UI customization choices, and discovered we could make very busy combinations of Live Tiles. It's possible to insert RSS feeds into tiles where supported, allowing what we feel is an addicting amount of information within just a handful of tiles, plus the appeal of rearranging tile combinations on tablets to suit differing use situations. Apps that use "traditional" windows are easier to manage, and users can now move multiple windows adjacent to each other (especially handy on multiple monitors) without having snap behavior crater their placement choices, as occurred in 8.0 and even in Windows 7 editions.
Desktop/notebook users have taken a back seat to tablet users in this upgrade, and some of the hoped-for bridges to a Windows 7-ish look-and-feel are missing; we found the 8.1 changes more easily demonstrated on tablets. Mouse and touch sweeps are more customizable, however, and consistency can be imposed via Group Policy. If you're looking for the familiar Start button, you'll still need to garner it from a third-party app provider. Microsoft, like Apple and Google, would really prefer that you obtain Start buttons and other third-party applications from its online store, which is far better stocked with new, familiar, and diverse applications than when Windows 8.0 was released. You can still install from "unauthorized" sources if you prefer, or forbid that if you're draconian or simply worried about security.
Recent changes to 8.1 in terms of speed weren't dramatic, in our subjective analysis. Windows 8.1 uses Server Message Block V3 (SMB3) features when connecting to Windows 2012+ network resources, including SMB Encryption, SMB traffic aggregation for speed, and SMB "signing" for ostensibly trustable, ostensibly non-repudiating host and client relationships. We say ostensibly because we're unsure of a comprehensive methodology to test these, and therefore have not.
Microsoft has been very busy. Windows Server 2012 R2, while a strong operating system update, is perhaps more about Hyper-V V3 and the Azure Pack, and represents a trend toward platform strengthening on Microsoft's part as platform flexibility starts to replace the operating system as the functional least common denominator for applications infrastructure. Toward these ends, Hyper-V now controls more of the network than the operating system does, more of the storage connectivity and options than the operating system does, and more of the application availability and administrative control nexus than ever before.
For its part, Windows 8.1 is now the client side of experiences rendered by web access and client/cloud-based services, which become increasingly location-irrelevant where persistent connectivity is available. The Windows 8.1 release comes in fewer editions than Windows 8.0, which came in fewer editions than Windows 7. The shrinking number of editions reflects that versions must now be synchronized across a wide variety of platforms, from traditional desktops and notebooks to tablets, phones, and VDI/Desktop-as-a-Service platforms. Attention to this variety of user devices in Windows 8.1 also includes attention paid to criticisms of the seemingly lurching change from former Windows UIs to the tiled interface of Windows 8.
Windows as a client is no longer like the old leaky Windows, and it's now approachable in a more familiar way. Whether the 8.1 client changes can re-enamor disaffected users, and roll with new competitive punches, remains to be seen.
How We Tested Windows Server 2012 R2 and Windows 8.1
For Windows Server 2012 R2, we tested the RTM version downloaded from the MSDN website. We deployed and tested the Datacenter edition both on bare-metal servers from HP (DL580 G5, 16 cores, iSCSI), Dell (Compellent iSCSI SAN and older Dell servers), and Lenovo (ThinkServer 580 with 16 cores, 32GB), and on various hypervisors. Windows 2012 R2 installed and performed basic operations successfully atop VMware vSphere 5.1 and 5.5, Oracle VirtualBox 4.2, the aforementioned Hyper-V V3, and Citrix XenServer 6.2; we found much flexibility, and a few servers needed the aforementioned firmware upgrades for Hyper-V or 2012 R2.
Windows 8.1 was tested on a Microsoft Surface Pro, Lenovo T530 notebooks, and as virtual machines, upgrading from Windows 7 Professional and Windows 8.0 Enterprise, as well as via fresh installs on the UEFI-based T530 notebook hardware.
Testing was performed between the lab (Gigabit Ethernet switched infrastructure), connected via Xfinity broadband to our NOC at Expedient/nFrame in Indianapolis (Gigabit Ethernet switched infrastructure with 10Gbps links on Extreme switches, connected via a GbE backbone to core routers, a Compellent iSCSI SAN, and numerous hosts running VMware, XenServer, BSD, various flavors of Linux, and Solaris, in turn connected to Amazon Web Services and Microsoft's Azure cloud).