VMware introduced vSphere 7.0 in March 2020, and it went GA in April 2020. It brought many new features, such as DRS & vMotion improvements and vSphere Lifecycle Manager. Half a year later, VMware introduced the first major update to vSphere 7, and today this release went GA. It is now publicly available; you can download it from VMware and take advantage of this latest and greatest release! In this blog post I will go through the new features and capabilities.
Continue reading “vSphere 7.0 Update 1 is now Globally Available!”
In Part 1 of NSX-T SSL Certificate Replacement, the certificate template preparation and request process was explained. This blog post will show you how to import the generated certificate into NSX-T Manager and replace the existing one. It is really important to verify the imported certificate before replacing it. I want to point out that if you are using a Virtual IP for your NSX-T management cluster, you should have generated the SSL certificate for the management cluster’s Virtual IP address.
Continue reading “NSX-T 3.0 SSL Certificate Replacement – Part 2”
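The import → verify → apply flow can also be driven through the NSX-T REST API. The sketch below only builds the request URLs and JSON body and does not call a live manager; the endpoint paths follow the NSX-T 3.0 API as I know it, but the helper names and the example host are my own illustration, so verify them against the API guide for your version.

```python
import json

# Sketch only: endpoint paths are believed to match the NSX-T 3.0 REST API;
# helper names and "nsx.corp.local" are hypothetical.

def import_cert_request(manager, name, pem, key):
    """Build the URL and JSON body to import a certificate with its private key."""
    url = f"https://{manager}/api/v1/trust-management/certificates?action=import"
    body = json.dumps({"display_name": name, "pem_encoded": pem, "private_key": key})
    return url, body

def validate_cert_url(manager, cert_id):
    """Verify the imported certificate before applying it."""
    return f"https://{manager}/api/v1/trust-management/certificates/{cert_id}?action=validate"

def apply_vip_cert_url(manager, cert_id):
    """Apply the certificate to the management cluster Virtual IP."""
    return (f"https://{manager}/api/v1/cluster/api-certificate"
            f"?action=set_cluster_certificate&certificate_id={cert_id}")
```

The same URLs can then be fed to any HTTP client authenticated as an NSX-T admin; running the validate call before the apply call is the API equivalent of the verification step stressed above.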
Now that we have finalized deploying the three Managers in the NSX-T management cluster, we can go ahead and configure a Virtual IP (VIP) on it. We can either use the NSX-T internal mechanism to set an IP address on the cluster or set up an external load balancer in front of the NSX-T Managers. Configuring a VIP, which is the approach recommended by VMware, is simpler, but a load balancer would actually distribute traffic among the NSX-T Managers (the VIP directs all traffic to a single leader node). This is a design question and should be decided based on requirements and customer needs.
Please keep in mind that if you want to choose this approach, all NSX-T Managers must be on the same subnet. In this case, the Managers are attached to the SDDC Management network. To configure the Virtual IP, log in to the NSX-T Manager UI, choose System, on the left panel select Appliances, and then click the SET VIRTUAL IP option.
Continue reading “Configure Virtual IP for NSX-T Management Cluster”
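The same-subnet prerequisite is easy to sanity-check up front, and the UI action has an API equivalent. This is a minimal sketch: the cluster virtual-ip endpoint path is from the NSX-T 3.0 API as I recall it, while the helper names and the assumed /24 prefix are illustrative only.

```python
import ipaddress

def same_l2_subnet(vip, manager_ips, prefix_len=24):
    """The internal VIP only works when the VIP and all Managers share one subnet.
    prefix_len is an assumption; use your management network's actual prefix."""
    net = ipaddress.ip_network(f"{vip}/{prefix_len}", strict=False)
    return all(ipaddress.ip_address(ip) in net for ip in manager_ips)

def set_vip_url(manager, vip):
    """API counterpart of the SET VIRTUAL IP button (NSX-T 3.0)."""
    return f"https://{manager}/api/v1/cluster/virtual-ip?action=set_virtual_ip&ip_address={vip}"
```

For example, a VIP of 172.16.10.10 with Managers at 172.16.10.11-13 passes the check, while a Manager on 172.16.20.x would fail it and rule out the internal VIP approach.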
In the previous blog post we started the NSX-T implementation by deploying the first NSX-T Manager. Before deploying the other two NSX-T Managers, we need to add a Compute Manager. As VMware defines it, “A Compute Manager is an application that manages resources such as hosts and VMs. One example is vCenter Server.” We do this because the other NSX-T Managers will be deployed through the web UI with the help of vCenter Server. We can add up to 16 vCenter Servers to an NSX-T management cluster.
To add a compute manager in NSX-T, it is recommended to create a service account and a customized vSphere role instead of using the default administrator account. A dedicated role is defined for security reasons: it grants only the privileges NSX-T actually needs. As you can see in the screenshot below, I created a vSphere role called “NSX-T Compute Manager” with the required privileges and used it to assign permissions to the service account on vCenter Server.
Continue reading “Add Compute Manager to NSX-T 3.0”
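Registering the compute manager can also be done via the API by POSTing to the fabric compute-managers endpoint. Below is a sketch of the request body only, assuming the NSX-T 3.0 payload shape; the function name and example values (FQDN, service account) are hypothetical, so check the field names against the API reference before relying on them.

```python
import json

def compute_manager_payload(fqdn, username, password, thumbprint):
    """Assumed JSON body for POST /api/v1/fabric/compute-managers (NSX-T 3.0)."""
    return {
        "server": fqdn,
        "origin_type": "vCenter",  # vCenter Server is the compute manager type used here
        "credential": {
            "credential_type": "UsernamePasswordLoginCredential",
            "username": username,       # the dedicated service account, not admin
            "password": password,
            "thumbprint": thumbprint,   # thumbprint of the vCenter certificate
        },
    }

payload = compute_manager_payload(
    "vcsa.corp.local", "svc-nsx@vsphere.local", "secret", "AB:CD:EF")
body = json.dumps(payload)  # ready to send with any authenticated HTTP client
```

The service account referenced here is the one holding the custom “NSX-T Compute Manager” role described above.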
In this series of blog posts we are going to walk through the different steps of setting up an NSX-T Data Center infrastructure. If you are new to NSX-T, please first read the Introduction to VMware NSX. To get more insight into the NSX-T architecture, you can continue with the NSX-T Architecture and Components post. Because we are using NSX-T 3.0 for this implementation deep dive, you can also review the What’s new in NSX-T 3.0 blog post.
The following are the required steps to build a solid NSX-T Data Center foundation. Please follow each step; we will update and complete this list regularly.
Continue reading “NSX-T 3.0 Deep Dive”
As part of a vSAN Stretched or 2-Node cluster configuration, a witness appliance should be deployed and configured. This witness appliance hosts the witness components that are used in split-brain failure scenarios. The witness components act as a tie-breaker and help the vSAN cluster satisfy its quorum requirements. The witness can be installed on a dedicated physical ESXi host, or a specialized virtual witness appliance can be used instead. The main reason for running the witness as a virtual appliance is that it does not require an extra vSphere license, which saves cost, especially for smaller implementations like ROBO. The other reason is multi-cluster environments such as a VCF stretched cluster implementation: because each vSAN cluster needs its own witness, you can consolidate all of them on one physical host at a third site.
Continue reading “VMware vSAN 7.0 Witness Appliance Deployment”
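The tie-breaker idea can be shown with a toy vote count. This is purely conceptual and not how vSAN is implemented internally (real vSAN assigns votes per object component), but it illustrates why a 2-Node cluster needs the third vote:

```python
def has_quorum(votes_alive, votes_total):
    """Conceptual majority check: data stays accessible only while a
    strict majority of votes survives a failure or network partition."""
    return votes_alive > votes_total / 2

# 2-Node cluster: each data node holds one vote, the witness holds the
# tie-breaker vote, for three votes total.
VOTES_TOTAL = 3

# One data node fails: surviving node + witness = 2 of 3 votes, still quorum.
print(has_quorum(2, VOTES_TOTAL))   # majority survives

# Without a witness there are only 2 votes; a lone surviving node has 1 of 2,
# so neither side can claim a majority in a split-brain scenario.
print(has_quorum(1, 2))
```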
Following the first blog post about the deployment of vIDM, this post will cover how to configure vIDM and implement NSX-T Role-Based Access Control (RBAC) with the help of vIDM. As you might have noticed, in NSX-T 2.5 and earlier releases, RBAC cannot be enabled without vIDM.
When you log in to the administration page with vIDM’s admin user account, the dashboard is the first page you land on. The dashboard contains login information, the applications used by users, and analytics.
To start the vIDM configuration, click on Identity & Access Management. Here you can join vIDM to an Active Directory domain, add a directory to sync with vIDM, and define the user attributes that get synchronized from the directory service to vIDM.
Continue reading “Deploying & Configuring VMware Identity Manager (vIDM) – Part 2”
As mentioned in Introduction to VMware NSX, NSX-T Data Center is built on three integrated layers of components: the management plane, control plane & data plane. This architecture and separation of key roles enables scalability without impacting workloads.
The NSX-T management cluster is built from three NSX-T Manager nodes; the management plane and control plane are converged on each node. NSX-T Manager provides a web GUI and a REST API for management purposes. This is one of the architectural differences compared to NSX-V, which had to be integrated into the vSphere Client & vCenter Server. NSX-T Manager can also be consumed by a Cloud Management Platform (CMP) like vRealize Automation to integrate SDN into cloud automation platforms, and it can connect to a vSphere infrastructure through integration with vCenter Server (Compute Manager).
Continue reading “NSX-T Architecture & Components”
VMware has announced an update to its per-CPU licensing model. OK, don’t panic: VMware is not bringing back the vRAM licensing model, but it has added a new CPU-related licensing rule. Effective April 2nd, 2020, a server with a processor that has more than 32 cores needs additional licenses. According to VMware’s website, “Under the new model, one CPU license covers up to 32 cores in a single CPU”. This means an additional license must be purchased for every 32 physical CPU cores! So for a single-CPU server with up to 32 physical cores, as before, one license should be purchased. But a single-CPU server with 64 cores needs two licenses, because every license covers a single CPU with up to 32 cores. To get a better view of this change, take a look at the image below from VMware.
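The arithmetic boils down to one line per CPU. This is my own illustration of the rule quoted above, not a VMware tool:

```python
import math

def licenses_needed(cores_per_cpu, cpu_count=1):
    """One vSphere CPU license covers up to 32 cores on a single CPU,
    so each CPU needs ceil(cores / 32) licenses."""
    return cpu_count * math.ceil(cores_per_cpu / 32)

# Single-CPU server, 32 cores -> 1 license (unchanged from before).
# Single-CPU server, 64 cores -> 2 licenses.
# Dual-CPU server, 48 cores each -> 2 licenses per CPU -> 4 licenses.
```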
Fortunately, for those who are going to buy servers and VMware licenses before April 30th, 2020, there is a “free per-CPU licensing” program. According to the VMware website, “Any existing customers who purchase VMware software licenses, to be deployed on a physical server with more than 32-cores per CPU, prior to April 30, 2020 will be eligible for additional free per-CPU licenses to cover the CPUs on that server”.
You can also get more information from VMware’s website!
VMware Cloud Foundation (VCF) is VMware’s integrated SDDC platform for private and hybrid cloud infrastructures. This software package integrates VMware’s compute, storage and network virtualization solutions with a centralized, automated lifecycle management tool called SDDC Manager. The core components of VCF are vSphere (compute), vSAN (storage) and NSX (network & security). The VMware vRealize Suite can optionally be added to VCF to extend the SDDC with performance & capacity management and cloud management capabilities. Since VCF 3.8, besides running normal virtual machine workloads, you can also run containers with VMware Enterprise PKS.
To start implementing VCF, at least seven ESXi hosts are needed: four for the Management Workload Domain (WLD), which hosts the infrastructure components of the SDDC, and another three hosts for running an actual workload WLD. These nodes can be vSAN ReadyNodes, or you can take advantage of Dell EMC’s VxRail platform and run a more integrated hyper-converged (HCI) platform. The management WLD is brought up with a special virtual appliance called Cloud Builder. This awesome tool brings up the first four nodes in the management cluster alongside the Platform Services Controllers (PSC), vCenter Servers, NSX Managers & Controllers, and vRealize Log Insight. After the initial bring-up process, VCF infrastructure management is done through SDDC Manager.
Continue reading “Introduction to VMware Cloud Foundation (VCF)”