Sunday, June 30, 2019

What is VMware Photon OS?

Photon OS is an open-source Linux distribution developed by VMware for cloud-native applications, services such as vCloud Air, and virtual infrastructure products like vSphere. In particular, Photon OS is used as the guest OS inside the OVF appliances of VCSA 6.x & SRM 8.x. VMware introduced it in response to customers' need for an environment that stays consistent from development through production. Covering all aspects of infrastructure (computing, networking and storage), Photon OS provides a fully integrated platform that delivers all the capabilities required by the app developers and customers of the VMware platform.
Photon OS supports running popular containers (Rocket/rkt, Docker & Garden) and also the developer apps that must be deployed into those containers. Together with Project Lightwave (another open-source VMware project, for access / identity management), containers deployed on Photon OS and all of their workloads can be protected by security enforcement.
Although the current version of Photon OS is 3.0, historically each version has introduced many optimization features for VMware environments (like the Kernel Message Dumper in version 2.0). Updates for Photon OS are always delivered as packages (tdnf, which is yum-compatible, and rpm are supported), and you can also upgrade the product in place by downloading the upgrade package offline and then running it (there is no separate patch):

# tdnf install photon-upgrade   
# photon-upgrade.sh
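
For routine package updates (as opposed to the major in-place upgrade above), tdnf behaves much like yum; a minimal sketch:

# tdnf check-update              List packages with pending updates
# tdnf update                    Apply all pending updates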

If you install VCSA (which ships with Photon OS built in), you need to provide almost 10 GB of RAM, but the minimum recommended memory for Photon OS itself is 2 GB. As VMware mentions, the resource requirements depend heavily on the installation type (Minimal, Full, OSTree Server), the virtualization environment (ESXi, Workstation or Fusion), the Linux kernel flavor (hypervisor-optimized or generic) and the distribution file (a preinstalled OVA/OVF or a more complex setup with an ISO). It's good to know about the installation types of Photon OS:
 1. Minimal: a lightweight version and the best choice for hosting containers.
 2. Full: ships with additional packages and is the better option for developing container-based applications.
 3. OSTree Server: suitable as a repository and management node for all the other Photon OS hosts.
With the hypervisor-optimized kernel, every component that is not required for running under a VMware hypervisor is removed; selecting the generic kernel means keeping them all. To enable the Docker feature, run:

# systemctl start docker         Start the Docker daemon now
# systemctl enable docker        Enable the service at boot
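
To verify the daemon is working, a quick sanity check (it pulls a tiny test image, so internet access is assumed):

# docker run --rm hello-world    Pull and run Docker's standard test container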

Following the opinions and discussions about the development and security considerations of virtual infrastructure services, VMware released Photon OS as an open-source product, so it can also run on other public cloud environments, for example Amazon Elastic Compute Cloud (EC2), Google Compute Engine (GCE) and Microsoft Azure. You can read more about Photon OS in the official VMware documentation, and you can also download its source code from the project's GitHub repository.

Wednesday, June 26, 2019

VMworld 2019 (US) - Less than 60 days


Less than two months to go before the start of VMworld 2019 US in San Francisco, August 25-29.

Be ready for this great event. You can read more about it here:
https://www.vmworld.com/en/us/index.html

Of course, VMworld Europe (Barcelona) will begin in November 2019:
https://www.vmworld.com/en/europe/index.html




Monday, June 24, 2019

VMware VDI (Horizon View) Troubleshooting - Part III


In the third part of the VDI troubleshooting series, unlike the previous two parts, I want to talk about client-side connection problems. For instance, if there is a dedicated IP subnet for zero client devices, an incorrect setup or misconfiguration of the routing settings can be the reason for connection problems between VDI clients and servers. In the same way, wrong VLAN configuration (ID, subnet, inter-VLAN routing) can be the main cause of the trouble. So I have put together a checklist of "What to do if you have a problem with your Horizon Connection Servers?"

1. Check the correctness of the zero/thin clients' communication infrastructure (routing, switching, etc.) to the VDI servers (Connection Server, Security Server).
2. Check the network connection between the Connection Server subnet and the deployed virtual machines of the desktop pool, if they are separated. Logically there is no need to connect their dedicated hosts/clusters to each other, so you can have separate ESXi clusters, one for the desktop pools and another for the VDI servers.
3. Verify that the vCenter Server is accessible from the Connection Server and that the related credentials are still valid.
4. If you have a Composer Server, check its services (see the quick checks after this list). Many times I have seen the Composer service fail to start after a server reboot, even though it is set to automatic and no warning/error event has been reported. Also check the ODBC connection between the Composer Server and its database.
5. Investigate the state of the View Agent installed inside the desktop pool's VMs. If you need to provide client redirection directly to the desktop (without the presence of a Connection Server), View Agent Direct-Connection is needed too.
6. A TCP connection on port 4001 (non-SSL) or 4002 (SSL-based) between the desktop's View Agent and the Connection Server must be established. It's required for the connection, and you can check it by running netstat -ano | findstr "4001".
7. Review the user entitlements of the provided desktop pools; maybe there is a mistake, especially when you add AD groups instead of AD users. (Also check the accounts themselves: are they still available, or assigned to other users?)
8. The type of virtual desktop provisioning is also important. Except for Full Clone, in the Linked Clone and Instant Clone models you need to check the status of the virtual desktops under Inventory\Resources\Machines on the View Admin web page.
9. If there are interruptions in connected sessions, review their states under Inventory\Monitoring on the View Admin web page.
10. As a final note: DO NOT FORGET TO CONFIGURE THE EVENT DATABASE! I have encountered too many Horizon View deployments without any event database configured, so in troubleshooting situations we had NOTHING to tell us what really happened.
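
For items 4 and 6 above, here are quick checks you can run on the Composer / Connection Server (Windows); the service display name may differ slightly between Horizon versions:

C:\> net start | findstr /i "Composer"       Confirm the Composer service is running
C:\> netstat -ano | findstr "4001 4002"      List connections on the View Agent JMS ports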
I hope this can be helpful for you all...

Saturday, June 15, 2019

Manage VCSA Certificates - Chapter I

Every part of a virtual infrastructure environment needs a channel to communicate, and a safe, secure channel always requires a certificate. ESXi hosts, vCenter Server, NSX Manager, Horizon Connection Server and so on: each of them has at least a machine certificate or a web-access management portal with a self-signed SSL certificate. Since the introduction of vSphere 6.0, the Platform Services Controller (PSC) handles the vSphere-generated certificates through its built-in VMware Certificate Authority (VMCA). But in this post I want to introduce some CLIs for managing VMware certificates:
  1. VECS-CLI: a useful CLI to manage (create, get, list, delete) certificate stores and private keys. VECS (VMware Endpoint Certificate Store) is the VMware SSL certificate repository; see the example commands after this list.
  2. DIR-CLI: manages (create, list, update, delete) everything inside the VMware Directory Service (vmdir): solution user accounts, certificates, and passwords.
  3. Certool: view, generate and revoke certificates.
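
A few representative commands, using the default utility paths listed later in this post (note that dir-cli will prompt for the SSO administrator password):

# /usr/lib/vmware-vmafd/bin/vecs-cli store list                                    List all VECS certificate stores
# /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text    Dump the machine SSL certificate
# /usr/lib/vmware-vmafd/bin/dir-cli service list                                   List solution user accounts in vmdir
# /usr/lib/vmware-vmca/bin/certool --getrootca                                     Print the VMCA root certificate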
There are several types of stores inside VECS:
  1. Trusted Root: includes all of the default or added trusted root certificates.
  2. Machine SSL: with the release of vSphere 6.0, all communication of the vCenter and PSC services goes through a reverse proxy, so they need a machine SSL certificate, which is also backward compatible (with 5.x). An embedded PSC also requires the machine SSL certificate for its vmdir management tasks.
  3. Solution users: VECS stores a separate certificate with a unique subject for each solution user, such as vpxd. These user certificates are used for authentication with vCenter SSO.
  4. Backup: provides a revert action to restore (only) the most recent state of the certificates.
  5. Others: contains certificates of VMware or some third-party solutions.
Now let me ask: what are the roles of the solution users? There are five of them:
  1. machine: the license server and the logging service are its main activities. It's important to know that the machine solution user certificate is completely different from the machine SSL certificate, which is required for the secure connections (like LDAP for vmdir and HTTPS for web access) on each VI node (vCenter / PSC instance).
  2. SMS: the Storage Monitoring Service.
  3. vpxd: the vCenter daemon activity (managing vpxa, the ESXi host agents).
  4. vpxd-extension: extensions like Auto Deploy and the Inventory Service.
  5. vsphere-webclient: obviously the Web Client, plus some additional services like the performance charts.
The default paths of the certificate management utilities are shown below:
    /usr/lib/vmware-vmafd/bin/vecs-cli
    /usr/lib/vmware-vmafd/bin/dir-cli
    /usr/lib/vmware-vmca/bin/certool

And for the Windows version of vCenter Server you can go to the default path:
    %programfiles%\vmware\vcenter server\vmafdd

Surely I will talk about what vmafd itself is, and about vdcpromo, another useful CLI in this path, in a later post. I will also provide a video about how to work with certificate-manager.
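
For reference, certificate-manager lives next to certool on the appliance and starts an interactive menu for replacing certificates:

# /usr/lib/vmware-vmca/bin/certificate-manager       Launch the interactive certificate management menu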
As a final note, always remember never to delete the trusted root certificates; doing so can cause some hard-to-diagnose problems in your VMware certificate infrastructure.

Sunday, June 9, 2019

Flush the DNS cache in VMware Components (CLI)

Sometimes we want to clear the DNS cache of some virtual components to reconnect or re-establish the connections between them. For example, if you change the FQDN of an ESXi host that was registered with a manual A record, the vCenter reconnect operation may fail because it still tries to connect to the old record. To fix this, you need to flush the DNS cache. Now let's do it:

For vCenter Server Appliance (VCSA):
# systemctl restart dnsmasq

For ESXi Host:
# /etc/init.d/nscd restart

For vRealize Operations Manager (vROps):
# /etc/init.d/nscd restart
(remember you may need to run this command with sudo if you log in with the admin account; note that you will have to set the root password on your first attempt to access it)

For vRealize Log Insight (vRLI):
# /etc/init.d/nscd restart
(Just note that you should log in directly with the root account.)
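
After flushing the cache, verify that the name resolves to the new record before retrying the reconnect. A minimal check on the Linux-based appliances (the FQDN is just an example):

# getent hosts esxi01.lab.local        Resolve the FQDN again; it should now return the new A record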



Saturday, June 1, 2019

Security Recommendation and Hardening on Virtual Environments - Chapter Three




This post is the third part of the security recommendations for vSphere environments, following the last two parts: Chapter One & Chapter Two.
In this section I will explain some ESXi-related security considerations, so let's begin:

1. Keep the audit logs persistent: if you install ESXi on media like SD cards, then because of the non-persistent way ESXi saves its data on these types of disks, you will lose all of the system log files under /var/log after a host restart. In this scenario you will also see a warning after the first boot that logs are stored on non-persistent storage, so you need to change the log path to a datastore (whatever storage, local or shared) to keep the logs safe even after the host reboots; a sketch is shown below.
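
A sketch of redirecting the log directory from the ESXi shell, assuming a datastore named datastore1 (the folder name is just an example):

# mkdir /vmfs/volumes/datastore1/esxi-logs                                        Create a folder on persistent storage
# esxcli system syslog config set --logdir=/vmfs/volumes/datastore1/esxi-logs     Point the log directory at it
# esxcli system syslog reload                                                     Apply the new configuration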

2. Set up a syslog server: generating logs and keeping them in safe repositories for examination and analysis is a main step in the network management area. ESXi hosts, as the most important components of a virtual infrastructure environment, must be fully monitored, so a major step is configuring a syslog server to store and investigate the ESXi logs, for example as shown below.
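
Pointing a host at a remote collector can also be done with esxcli; the address below is an example, and the outgoing syslog port must be opened in the ESXi firewall:

# esxcli system syslog config set --loghost='udp://syslog.lab.local:514'     Set the remote syslog target
# esxcli system syslog reload                                                Restart the syslog service
# esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true     Allow outgoing syslog traffic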

3. Secure NFS communication: NFS, as the popular NAS protocol, is the best access method for shared repositories between ESXi hosts, holding useful files like ISO media (it's the most popular file-sharing protocol on UNIX-based systems). It's recommended to secure the NFS communication channel: if you plan to configure a Linux-based NFS server, use TLS/SSL encryption (NFSv4, because of its standalone encryption support), or implement Kerberos (v5, the latest edition) as the authentication mechanism for the Windows Server NFS role. Attention: NEVER use anonymous access (no server authentication), not even for read-only access granted to the ESXi servers.

4. Lockdown mode: lockdown mode is a way of hardening access to ESXi; it prevents direct login to the host, so the host remains accessible only from the local console or through management systems like vCenter Server. It's crucial to choose carefully between Normal mode (DCUI / vCenter) and Strict mode (vCenter only), because if you permanently lose the vCenter Server, there is no way left to manage the datacenter's ESXi hosts and you would have to reinstall them. So define at least one exception user that keeps its permissions before entering lockdown mode. It's also highly recommended to reserve the exception list only for the accounts/credentials of third-party solutions, like monitoring, backup, etc.

5. vSphere Installation Bundle (VIB) acceptance levels: there are four trust levels for the bundle files of a vSphere environment: VMware Certified, VMware Accepted, Partner Supported and Community Supported. Select whatever you need, but do not trust every community-supported or even partner-supported bundle; requiring at least VMware Accepted is a good choice for this security setting, as shown below.
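
The acceptance level can be checked and enforced from the ESXi shell:

# esxcli software acceptance get                             Show the current acceptance level
# esxcli software acceptance set --level=VMwareAccepted      Require at least VMware Accepted VIBs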

6. Enable host encryption mode: with this enabled, core dump files will always be encrypted. This option is useful whenever the host's cryptographic data is at a high risk of being compromised.



I will start a new journey soon ...