Deploying NSX-T Management Cluster

In a previous blog post, we explained the NSX-T architecture, and now we can start the implementation of NSX-T. The deployment process of NSX-T Data Center begins with the deployment of the NSX-T Management cluster. In NSX-T 3.0, the management cluster consists of three NSX-T Manager nodes, each of which includes both the management and control planes. The management plane provides the web UI and REST API, as well as the interface to other management platforms such as vCenter Server, vCloud Director, or vRealize Automation. The control plane is responsible for computing and distributing the network runtime state.
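
Once the three nodes are up, it is worth confirming that the cluster reports a stable state. Below is a minimal sketch, not an official VMware sample, that polls the NSX-T Manager REST API for cluster status; the manager FQDN, credentials, and the exact response fields shown are assumptions to verify against your own environment and API version.

```python
# Hedged sketch: poll the NSX-T Manager REST API to check that the
# three-node management cluster is healthy. FQDN and credentials are
# placeholders; response field names are assumptions to verify.
import requests

NSX_MANAGER = "nsx-mgr-01.corp.local"   # hypothetical manager FQDN
AUTH = ("admin", "VMware1!VMware1!")    # replace with real credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/v1/cluster/status",
    auth=AUTH,
    verify=False,  # lab only; use a trusted CA certificate in production
)
resp.raise_for_status()
status = resp.json()

# Overall cluster health plus the nodes currently reported online.
mgmt = status.get("mgmt_cluster_status", {})
print("Management cluster status:", mgmt.get("status"))
for node in mgmt.get("online_nodes", []):
    print("Online node:", node.get("uuid"))
```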

NSX-T Managers can be deployed on the ESXi or KVM hypervisor. If you plan to use the ESXi platform to host the NSX-T Managers, an OVA file should be used; for the KVM platform, a QCOW2 image is used instead. It is important to note that mixed deployments of managers on both ESXi and KVM are not supported. Based on the type of deployment and the size of the environment, an NSX-T Manager node size should be selected. The following are the four different configuration options and their requirements.
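
If you prefer to script the OVA deployment on ESXi rather than use the vSphere Client wizard, a wrapper around VMware ovftool along the lines of the sketch below can work. Treat it strictly as an illustration: the OVA property names (nsx_ip_0 and friends), the datastore, port group, inventory path, and file name are all placeholder assumptions; list the real properties with ovftool against your OVA before running anything like this.

```python
# Hedged sketch: drive an NSX-T Manager OVA deployment to ESXi with
# ovftool via subprocess. All names and values below are placeholders.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--datastore=datastore1",                 # placeholder datastore
    "--network=Management",                   # placeholder port group
    "--deploymentOption=medium",              # manager node size
    "--prop:nsx_ip_0=10.0.0.11",              # assumed property names
    "--prop:nsx_netmask_0=255.255.255.0",
    "--prop:nsx_gateway_0=10.0.0.1",
    "--prop:nsx_hostname=nsx-mgr-01.corp.local",
    "nsx-unified-appliance.ova",              # placeholder OVA file name
    "vi://administrator@vsphere.local@vcenter.corp.local/DC/host/Cluster",
]
subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
```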

Continue reading “Deploying NSX-T Management Cluster”

NSX-T 3.0 Deep Dive

In this series of blog posts, we are going to walk through the different steps to set up an NSX-T Data Center infrastructure. If you are new to NSX-T, please first read the Introduction to VMware NSX. To get more insight into the NSX-T architecture, you can continue with the NSX-T Architecture and Components post. Because we are using NSX-T 3.0 for this implementation deep dive, you can also review the What’s new in NSX-T 3.0 blog post.


The following are the required steps to build a solid NSX-T Data Center foundation. Please follow each step; we are going to update and complete this list regularly.

Continue reading “NSX-T 3.0 Deep Dive”

VMware vSAN 7.0 Witness Appliance Deployment

As part of a vSAN stretched or 2-node cluster configuration, a witness appliance must be deployed and configured. This witness appliance hosts the witness components that are used in split-brain failure scenarios. The witness component acts as a tie-breaker and helps the vSAN cluster satisfy its quorum requirements. The witness can be installed as a dedicated physical ESXi host, or a specialized virtual witness appliance can be used instead. The main reason for running the witness as a virtual appliance is that it does not require an extra vSphere license, which saves cost, especially for smaller implementations such as ROBO. The other reason for using a virtual appliance applies to multi-cluster environments such as a VCF stretched cluster implementation: because each vSAN cluster needs its own witness, you can consolidate all of the witness appliances on a single physical host at a third site.
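
To make the tie-breaker role concrete, here is a toy illustration (plain Python, not vSAN code) of the majority-vote idea, assuming the common case of one vote per replica plus one vote for the witness: during a split-brain, whichever side still reaches the witness holds the majority of votes and keeps the data available.

```python
# Toy illustration of witness quorum: a partition may keep an object
# available only if it can reach a strict majority of all votes.

def site_has_quorum(votes_reachable: int, total_votes: int) -> bool:
    """Return True if this partition holds a strict majority of votes."""
    return votes_reachable > total_votes // 2

# 2-node cluster: one replica per node (1 vote each) plus the witness (1).
total = 3
# The node that still reaches the witness has 2 of 3 votes and wins;
# the isolated node has only its own vote and stops serving I/O.
print(site_has_quorum(2, total))  # True  -> data stays available
print(site_has_quorum(1, total))  # False -> isolated side yields
```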

Continue reading “VMware vSAN 7.0 Witness Appliance Deployment”