When it comes to setting up a hybrid cloud environment, one of the most important topics is networking. It usually comes down to stretching on-prem network segments into the public cloud environment. This blog post describes the NSX-T architecture on AVS, the default networking and security stack. If you are new to AVS, you can read the Introduction to AVS blog post first and then continue with this article. Continue reading “AVS Hybrid Networking with NSX-T”
On September 22nd 2020, during Ignite 2020, Microsoft announced the general availability of the next generation of Azure VMware Solution (AVS). If you want to learn the basics of AVS, you can read my previous blog post, Introduction to Azure VMware Solution. AVS is now generally available in four regions: US East, US West, West Europe (Netherlands), and Australia East (NSW). AVS is also going to be available in Japan East, UK South, and South Central US in the near future. You can check the availability of Azure VMware Solution on the Azure Products by Region page for details. Continue reading “Azure VMware Solution goes into GA”
Azure VMware Solution (AVS) enables you to run the VMware SDDC stack natively on Azure to build a hybrid cloud infrastructure. AVS is a VMware-validated solution delivered by Microsoft on the Azure environment. According to Microsoft’s release statement in May 2020, “You can provision a full VMware Cloud Foundation environment on Azure and gain compute and storage elasticity as your business needs change”. Popular scenarios for this solution are datacenter footprint reduction, on-demand datacenter expansion, disaster recovery & business continuity, and application modernization. Continue reading “Introduction to Azure VMware Solution (AVS)”
An NSX-T installation comes with an out-of-the-box self-signed SSL certificate. For security and compliance reasons, most customers want to replace the default self-signed certificate with a CA-signed certificate. We were looking for a guide that explains how to do this step by step, but unfortunately we couldn’t find one! There are some very useful guides, like this one from VMware, but as you read through, you realize the documentation is incomplete. So, to make a long story short, we looked around and ran the SSL certificate replacement ourselves. Continue reading “NSX-T 3.0 SSL Certificate Replacement – Part 1”
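To give a feel for where the replacement ends up, the CA-signed certificate and its private key are imported through the NSX-T REST API. A minimal sketch of building that request body follows; the endpoint and field names (`/api/v1/trust-management/certificates?action=import`, `pem_encoded`, `private_key`) are assumptions based on the NSX-T 3.0 API reference, so verify them against your version:

```python
import json

# Assumed endpoint (check your NSX-T version's API reference):
# POST https://<nsx-manager>/api/v1/trust-management/certificates?action=import
def build_cert_import_payload(cert_pem, key_pem, passphrase=None):
    """Build the JSON body for importing a CA-signed certificate plus key."""
    body = {"pem_encoded": cert_pem, "private_key": key_pem}
    if passphrase:
        # Only needed when the private key is encrypted.
        body["passphrase"] = passphrase
    return json.dumps(body)
```

The resulting body would be POSTed to the manager with administrative credentials; applying the imported certificate to the manager service is a separate step.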
Now that we have finished deploying the three managers in the NSX-T management cluster, we can go ahead and configure a virtual IP (VIP) on it. We can either use NSX-T’s internal mechanism to set an IP address on the cluster or set up an external load balancer in front of the NSX-T managers. Configuring a VIP, which is the option VMware recommends, is simpler, but with the internal VIP all traffic is served by a single leader node, whereas an external load balancer would actually distribute traffic among the NSX-T managers. This is a design question and should be decided based on requirements and customer needs.
Please keep in mind that if you want to choose this approach, all NSX-T managers need to be on the same subnet. In this case, the managers are attached to the SDDC Management network. To configure the virtual IP, log in to the NSX-T Manager UI, choose System, select Appliances in the left panel, and then click the SET VIRTUAL IP option. Continue reading “Configure Virtual IP for NSX-T Management Cluster”
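For automation, the same VIP assignment can be made through the NSX-T REST API instead of the UI. Here is a minimal sketch that only builds the request URL; the `/api/v1/cluster/api-virtual-ip` endpoint is an assumption based on the NSX-T 3.0 API reference, and the manager hostname and VIP are placeholders:

```python
from urllib.parse import urlencode

def build_set_vip_url(manager, vip):
    """Build the (assumed) NSX-T 3.0 API URL that sets the cluster VIP.

    The request itself would be sent as a POST with admin credentials.
    """
    query = urlencode({"action": "set_virtual_ip", "ip_address": vip})
    return f"https://{manager}/api/v1/cluster/api-virtual-ip?{query}"

# Placeholder manager FQDN and VIP for illustration only.
url = build_set_vip_url("nsxmgr-01.corp.local", "10.0.0.50")
```

Because the VIP lives on whichever manager is the current leader, this call only needs to be made once against any one of the three nodes.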
In a previous blog post, the NSX-T architecture was explained, and now we can start the implementation of NSX-T. The deployment process of NSX-T Data Center begins with deploying the NSX-T management cluster. In NSX-T 3.0, the management cluster consists of three NSX-T managers, each of which includes both the management and control planes. The management plane provides the Web UI, the REST API, and interfaces to other management platforms like vCenter Server, vCloud Director, and vRealize Automation. The control plane is responsible for computing and distributing the network runtime state.
NSX-T managers can be deployed on the ESXi or KVM hypervisor. If you are planning to use the ESXi platform to host the NSX-T managers, an OVA file should be used. For the KVM platform, on the other hand, a QCOW2 image is used for NSX-T manager deployment. It is important to note that mixed deployments of managers on both ESXi and KVM are not supported. The NSX-T manager node size should be selected based on the type of deployment and the size of the environment. Following are the four different configuration options and their requirements. Continue reading “Deploying NSX-T Management Cluster”
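The deployment rules above (OVA for ESXi, QCOW2 for KVM, and no mixing hypervisors within one cluster) can be sketched as a small validation helper. This is purely an illustration of the constraints described in the text, not an actual deployment tool:

```python
# ESXi managers are deployed from an OVA, KVM managers from a QCOW2 image.
IMAGE_FORMAT = {"esxi": "OVA", "kvm": "QCOW2"}

def plan_manager_images(hypervisors):
    """Return the image format per manager node, rejecting mixed clusters."""
    if len(set(hypervisors)) > 1:
        # Mixed ESXi/KVM management clusters are not supported.
        raise ValueError("Mixed ESXi/KVM manager deployments are not supported")
    return [IMAGE_FORMAT[h] for h in hypervisors]
```

For a standard three-node cluster on ESXi, `plan_manager_images(["esxi"] * 3)` yields three OVA deployments.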
Today, April 28th 2020, the Dell EMC VxRail team released the VxRail 7.0 software package, which supports vSphere 7.0. This software package includes:
- VxRail Manager 7.0.000 build 16050759
- VMware ESXi 7.0 GA build 15843807
- VMware vCenter Server 7.0 GA build 15952498
- VMware vSAN 7.0 GA build 15843807
- VMware vRealize Log Insight 4.8.0 GA build 13036238
On April 7th 2020, VMware introduced the next major release of its network virtualization & security solution. NSX-T 3.0 introduces a variety of new features that enhance the adoption of software-defined networking in private, public, and hybrid-cloud environments.
Following are some of the new features and enhancements available in NSX-T 3.0 Data Center. Continue reading “What’s New in NSX-T 3.0”
On March 10th 2020, VMware released VMware Cloud Foundation (VCF) 4.0 alongside a refresh of the rest of its SDDC portfolio, including vSphere 7.0, vSAN 7.0, and the latest vRealize Suite 2019 release. By deploying VCF 4.0, you can take advantage of all the components included in the package, and some features are only available with VCF 4.0. For example, the Kubernetes capabilities of vSphere 7 are only included as part of VCF 4.0 with Tanzu. Below you can find the Bill of Materials (BoM) for VCF 4.0.
One of the new capabilities added in VCF 4.0 is the option to use NSX-T in the management workload domain. Before VCF 4.0, the management workload domain had to use NSX-V as its network and security virtualization solution. NSX-T is now also used as the de facto networking and virtualization solution for both VM and container workloads. With NSX-T, we have the option to bring up one NSX-T management cluster that can serve many workload domains.
VCF 4.0 also supports the latest update of vRealize Suite 2019, which includes:
- vRealize Automation 8.1
- vRealize Operations 8.1
- vRealize Log Insight 8.1
All of the above products can operate on container workloads besides normal VM workloads. VCF SDDC Manager 4.0, together with vRealize Suite Lifecycle Manager 8.1, automates lifecycle management for both the VCF core components and the vRealize Suite components.
As mentioned in Introduction to VMware NSX, NSX-T Data Center is built on three integrated layers of components: the management plane, the control plane, and the data plane. This architecture and separation of key roles enable scalability without impacting workloads.
The NSX-T management cluster is built from three NSX-T Manager nodes, with the management plane and control plane converged on each node. NSX-T Manager provides a web GUI and a REST API for management purposes. This is one of the architectural differences compared to NSX-V, which had to be integrated into the vSphere Client and vCenter Server. NSX-T Manager can also be consumed by a Cloud Management Platform (CMP) like vRealize Automation to integrate SDN into cloud automation platforms. NSX-T Manager connects to the vSphere infrastructure through integration with vCenter Server (a Compute Manager). Continue reading “NSX-T Architecture & Components”
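The converged cluster described above can be summarized in a short sketch: three manager nodes, each hosting both the management and control planes, with responsibilities listed per plane. Node names and role labels here are illustrative only:

```python
# Responsibilities per plane, as described in the architecture above.
PLANE_ROLES = {
    "management": ["Web UI", "REST API", "CMP integration (e.g. vRealize Automation)"],
    "control": ["compute and distribute network runtime state"],
}

# A three-node management cluster; both planes run on every node.
cluster = [
    {"node": f"nsx-mgr-{i}", "planes": list(PLANE_ROLES)}
    for i in range(1, 4)
]
```

The data plane is not part of this cluster; it lives on the transport nodes (hypervisors and Edge nodes) that carry the actual workload traffic.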