Simplifying (hopefully) vSphere for Desktop licensing

There has not been a single week at VMware so far that I have not been asked to clarify vSphere for Desktop licensing.

Last week, two of Singapore’s biggest FSI customers contacted me on exactly this. I thought it would be a good idea to document some of the clarifications that were needed.


vSphere for Desktop is a license to run “VDI and related” workloads.

Note: All details that follow are accurate as of 1 July 2016. VMware reserves the right to change licensing in the future.

For starters, the following are some of the salient aspects you will need to remember:

  • vSphere for Desktop is only meant to run VDI and related workloads. This includes Windows desktop OS workloads and Windows server OS workloads running Remote Desktop Services based applications or desktops. This license also covers VDI management components such as connection brokers, profile servers and application delivery controllers that are used in a VDI environment. Monitoring tools are also covered by this license. So in a nutshell, anything related to VDI is covered by the license.
  • vSphere for desktop licensing is NOT based on CPU or sockets.

vSphere for Desktop for VMware Horizon VDI

  • All editions of VMware Horizon are bundled with vSphere for Desktop.
  • Horizon licensing is based on named users or concurrent users. It doesn’t matter to vSphere for Desktop how many hosts you use for your Horizon VDI and related workloads. As long as you host only VDI and related workloads, and no other non-VDI server workloads, you are free to use this license on as many hosts as you need.
    • For example, let’s say you have 300 VDI desktops. You may host those 300 desktops and the Horizon management cluster on any number of physical hosts, and this license covers them all.
  • vCenter for Desktop is also included as part of Horizon. So you may run as many vCenter Servers as you need for your Horizon VDI infrastructure.

vSphere for Desktop for non-VMware VDI (e.g., Citrix)

  • vSphere for Desktop can be bought separately to host VDI and related workloads from another vendor such as Citrix.
  • The licensing is NOT based on CPUs, sockets, named users or concurrent users.
  • The licensing is based on the number of “Powered On” VMs. This means the total number of powered-on VMs is counted. For example, if you have 10 VMs for VDI management and 100 desktops for user workloads, you will need to license 110 VMs under vSphere for Desktop.
  • Please note that a XenApp or RDS VM is also counted as a single VM, even though you can host multiple sessions or users on a single RDS or XenApp VM. So a single XenApp or RDS VM consumes only one vSphere for Desktop license.
  • vSphere for Desktop does not include vCenter for Desktop.
    • You will need to purchase vCenter separately to manage your VDI infrastructure.
    • As of this writing, you CANNOT buy vCenter for Desktop separately. vCenter for Desktop is ONLY bundled as a part of VMware Horizon and is not available to non-Horizon customers. So you will need to buy the “normal” vCenter Server to manage your VDI workload.




Read-Only USB and Client Drive Redirection in Horizon 7

One of the features of Horizon 7 is the ability to redirect USB devices and client drives as read-only.

This is extremely useful for security-conscious customers who do not want users to modify, from their virtual desktops, files on drives or USB devices connected to the endpoint devices.

A good example is a customer using VDI for internet separation. Let’s say a government agency does not allow internet access on its production network. It may choose to publish Internet Explorer via VDI or RDS for users’ web browsing. In such a case, the IE browser runs in an environment totally isolated from the production network. Users are not allowed to download files or make modifications to the endpoint devices on the production network. However, the agency may still want users to upload files to external websites or use data already on the endpoint. This can be done by redirecting the client drives and USB devices on the endpoint to the virtual desktops as read-only.

This functionality is achieved by pushing down a group policy, as shown below. Documentation reference.

Preventing Write Access to Shared Folders

To prevent write access to all folders that are shared with the remote desktop, create a new string value named permissions and set its value to any string that begins with r, except for rw.

HKLM\Software\VMware, Inc.\VMware TSDR\permissions=r
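For a quick test without a GPO, the same value can be set directly with reg.exe. This is a sketch to be run in an elevated command prompt inside the virtual desktop image; in production, push it via group policy as described above.

```
rem Sketch: create the TSDR "permissions" string value directly (run elevated).
reg add "HKLM\Software\VMware, Inc.\VMware TSDR" /v permissions /t REG_SZ /d r /f
```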

USB and Client Drive Redirection without the Policy

The client drives and USB drives are seen as below.

[Screenshot: redirected client and USB drives]

Users are able to write and modify files on the client drives and USB devices.


USB and Client Drive Redirection with the Policy

Registry setting is modified


Users get an error while trying to write to the drives.

[Screenshot: error writing to a redirected drive]

Launching an ICA session takes forever – Stuck at Connecting

Recently I had a high-profile customer with a terrible connection experience when launching ICA sessions.

[Screenshot: ICA session stuck at “Connecting”]

I checked the usual suspects like connection timeouts, disabling session reliability, checking for NAT settings and so on. No improvement.

However one thing I observed was launching the ICA session from within the same desktop network did not exhibit this behavior. So it must be something at the level of the client network settings or the ICA file itself.

After some Googling and messing around, it turned out to be the client proxy settings inherited by the ICA file from Internet Explorer. The users had proxy settings configured in Internet Explorer, so the ICA connection first went to the proxy server before making a direct connection, which explains the long delay in establishing the connection.


The fix: configure the client proxy setting in StoreFront to “None” instead of the default “Auto”. Use this CTX article.



Citrix StoreFront – Few good things

On my recently completed project, I spent quite a bit of time debugging Storefront because I had missed seemingly irrelevant but important practices. I also burnt my fingers a few times because I failed to fully understand the basic design principles of a redundant and fault-tolerant Storefront implementation.

The following are the 3 important things that I learnt:

Always have two Storefront Server Groups and not ONE.


Previously, in a typical Web Interface implementation, we had two WI servers load balanced by a NetScaler LB. WI servers are simply IIS servers that are independent of each other. Each WI server needs to be configured individually and has no dependency on the other.

However, Storefront servers are different. Though Storefront is also an independent IIS server, when joined in a server group it propagates changes from the master Storefront server (the server on which the change is made and from which you execute “Propagate changes”) to all other servers. This implies that if there is a bad configuration on one of the Storefront servers and you propagate changes from that server, every server in the group is affected, which makes the server group a single point of failure. And this happens more often than you think. Hence the need for a secondary server group.

What’s a good practice?

  • Create two Storefront server groups with the same base url. Each server group is recommended to have 2 or more Storefront servers.
  • Create a NetScaler LB Service Group for each of the Storefront server groups.
  • On the NetScaler LB VIP for Storefront, only one Service Group will be active at any point in time. The other Service Group will be a “Backup vServer” which becomes active only when the first Service Group is down.
  • If your environment does not have NetScaler, I suggest a secondary URL for the secondary Storefront server group: let the first server group keep the primary base url and give the second group a url of its own. Though it may look untidy and may require users to key in another url in their browsers if the first url is not functional, it still allows them to log in and consume their desktops and apps.

Adding Domain user to Local administrator group of Storefront Servers

Until recently, I used to add users to the “local administrators” group of a machine through “Restricted Groups”, which is a horrible idea with bad consequences. Since it removes all other users from “local administrators” and keeps only what you specified in the “restricted groups”, it breaks Storefront, as Storefront places a few built-in accounts into “local administrators”.

This is how the local administrators group of a Storefront server looks soon after installation of Storefront.


As you can see, there are local accounts such as CitrixClusterService and CitrixConfigurationReplication that are added automatically to the local administrators group during installation. These accounts must remain there for Storefront to function correctly. Using “Restricted Groups” removes these accounts and replaces them with only those domain user accounts that you specify. So refrain from using “Restricted Groups”.

Here is the right way of adding a domain user as a local administrator on a machine: use a GPO -> Computer Configuration -> Preferences -> Control Panel Settings -> Local Users and Groups, with the “Update” action on the built-in Administrators group. This way, it just adds users to the group and doesn’t replace the members who are already in the administrators group.

Base URL resolution

A Storefront server group is always configured with a base url. The base url is just an FQDN that points to the NetScaler load-balancing VIP for Storefront. This VIP translates to one of the Storefront servers’ IP addresses.

But sometimes I’ve observed that it may not resolve correctly, resulting in the occasional dreaded error “Cannot complete request”. If you check the error logs, you will find that the Storefront “Discovery Service” has failed.

The Discovery Service can fail for many reasons, but more often than not it’s because of DNS resolution of the base url to one of the Storefront servers. The Discovery Service is simply a service that runs on every Storefront server and queries data about itself. It contacts the base url, which returns the IP address of one of the Storefront servers, and it uses that IP address to fetch the required information.

But the simple fact is that all the information a Storefront server ever needs is already known to itself, and it doesn’t need to take the long path of base url -> NS LB VIP -> Storefront IP.

So I suggest you create a “hosts” file entry on every Storefront server pointing the base url to that server’s own IP address. This way, when the Discovery Service runs, it doesn’t need to go to the NetScaler LB VIP and resolve to one of the Storefront servers; it can get all the information from itself. Hence the Discovery Service of the Storefront server does not depend on NetScaler or DNS for its functioning.
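As a sketch with purely hypothetical values (own IP `` and base url ``), the entry looks like this. The demo writes to a temp file; on a real Storefront server you would append the line to C:\Windows\System32\drivers\etc\hosts instead:

```shell
# Demo with hypothetical values; on a real Storefront server, append this line
# to C:\Windows\System32\drivers\etc\hosts instead of a temp file.
HOSTS_FILE=/tmp/hosts.demo
echo "" >> "$HOSTS_FILE"   # <own-IP>  <base-url-FQDN>
cat "$HOSTS_FILE"
```

With this entry in place, the Discovery Service resolves the base url locally and never touches DNS or the NetScaler VIP.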

You must be wondering: DNS and NetScaler are fundamental building blocks that are supposed to work reliably all the time, which is true. But I have been surprised by the number of times people make random configuration changes to DNS records, clear caches, add aliases, reuse URLs and so on. The fewer the dependencies, the better.

XenApp installation throws Error 1603

Installing XenApp throws Error 1603 and breaks the installation midway.

The Citrix website points to a fix, but I strongly suggest checking for the following before trying it.

If you are installing XenApp 6.5 on Windows 2008 Server R2, please ensure that the following services are turned on.

  • RPC
  • Base Filtering Engine
  • COM-related services
  • IPSec-related services
  • Print Spooler service

If you are installing XenApp 5.0 on Windows Server 2003 SP2, please ensure that the following services are turned on.

  • RPC (Remote Procedure Calls)
  • Server
  • Workstation
  • Windows Management Instrumentation Driver Extensions
  • IPSec related
  • Print Spooler service
  • COM related services



Suggestions to avoid Blue Screen of Death while configuring Provisioning Services

Introduction: You might encounter a BSOD while streaming PVS targets when using certain combinations of hardware and software. This usually occurs when the BNIstack driver fails to work correctly with the Ethernet driver, causing the Windows kernel to crash and resulting in a BSOD.

The infrastructure used for this exercise:

PVS version: Provisioning Services 6.1

Hypervisor: XenServer 6.0.2

PVS target: Windows 2008 R2 target installed with XenApp 6.5

Suggestion#1 – Before beginning

Install the latest Microsoft hotfixes related to the network stack on the Windows target devices before installing the PVS target device software:

  • Windows 7 and Windows 2008 R2 operating systems
  • Windows 7 SP1 and Windows 2008 R2 SP1 operating systems
  • Windows Vista and Windows 2008 operating systems

Suggestion#2 – Installation of Master Target Device Software

Software for the master target device is available in several places; for example, it comes bundled with the XenApp 6.5 installer, and you get an option to install the PVS target software while installing XenApp. I suggest avoiding this. Always use the software that comes with the PVS installation ISO.

Ensure you install the latest version of the XenTools or VMware tools.

Few important things during installation:

  • Master target device software MUST be the last system software installed on the image. This means you should install it after your anti-virus, XenTools or VMware Tools, security patches, etc.; otherwise anything that alters the network stack will break PVS.
  • A note about anti-virus and security software: make sure you know what you are installing. Anything that is kernel-based or touches the network stack is going to give you headaches. As a best practice, install the PVS target software only after you have completed all security hardening. This ensures the network stack known to PVS remains unaltered. Please follow CTX124185 for anti-virus installation.

Also keep track of the hotfixes for PVS 6.1. There are a bunch of them, and you should install a hotfix only when you encounter an issue that exactly matches the hotfix symptoms.

Suggestion#3 – Single NIC targets instead of multi-homed ones

Traditionally, PVS best practices mandate having two NICs for PVS targets. Maybe this is no longer necessary, especially for smaller implementations with fewer than 1,000 VMs.

I think having just one NIC simplifies the installation and setup and, most importantly, causes fewer headaches: the BNIstack driver binds to just one NIC, and the network stack has fewer routing decisions to make.

For more information, look here.

Suggestion#4 – Removing Ghost NICs from the master image

For reasons like plugging and unplugging different networks, running dim.exe, or something else, you may find Ghost NICs listed in your hardware inventory, and these may cause PVS to break. This happens because during the device boot process, the PVS target software may bind to one of these Ghost NICs, which will cause a BSOD.

To find and remove Ghost NICs, please refer to the article below.
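One well-known way to surface Ghost NICs is to make Device Manager show non-present devices, a procedure documented by Microsoft. A sketch, to be run in a command prompt on the master image:

```
rem Make Device Manager display non-present ("ghost") devices, then open it:
set devmgr_show_nonpresent_devices=1
start devmgmt.msc
rem In Device Manager, enable View > Show hidden devices and uninstall the greyed-out NICs.
```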

Suggestion#5 – Registry hack to delay the PVS driver binding to Ethernet NIC

Use this suggestion only if all the above have been implemented and you still get BSOD. Ignore otherwise.

If you are using an operating system older than Windows 2008 R2, you may want to delay the bind call made by the PVS target software to the NIC.

When the OS boots, the PVS target software needs to bind to the NIC to start the PVS stream. If the NIC has not initialized yet, this may lead to a BSOD. So a delay is configured in the PVS target software to postpone the binding.

Refer to the below KB.

That’s all.



Disclaimer: This article is purely based on my experiences and insights. My employer Citrix Systems has neither reviewed nor endorsed anything that is mentioned above.

Backing up and restoring Citrix Web Interface 5.x

Though this topic is covered in many places, I couldn’t locate one that was direct, simple and concise. Hence I am writing my own.

Backing up

  • On the Web Interface server, stop the IIS server: on the command line, type “iisreset /stop”.
  • Create a backup of the site folder. For example: back up the Xenapp folder to Xenapp-backup.
  • Start the IIS server: type “iisreset /start”.
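The backup step can be sketched as below. The paths are hypothetical and the demo copies a temp folder; on a real Web Interface server the site folder lives under the IIS web root, and the iisreset calls (shown as comments) must bracket the copy:

```shell
# Hypothetical demo of the Web Interface site backup; on Windows, stop IIS first.
SITE_DIR=/tmp/wi-demo/Xenapp
mkdir -p "$SITE_DIR"
echo "<configuration/>" > "$SITE_DIR/web.config"
# iisreset /stop     (stop IIS before copying)
cp -r "$SITE_DIR" "${SITE_DIR}-backup"
# iisreset /start    (start IIS again)
ls "${SITE_DIR}-backup"
```

Copying while IIS is stopped ensures no file in the site folder is locked or half-written during the backup.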

Restoring Web Interface

  • On the Web Interface server, stop the IIS server: on the command line, type “iisreset /stop”.
  • If you already have a site created with the name of the backup, just rename the backup from Xenapp-backup to Xenapp.
  • In the Citrix Web Interface Management Console, select the site, right-click and select “Repair”. This reloads the site properties, making it as good as a fresh website.
  • Start the IIS service by typing “iisreset /start” on the console.

Related article:



Keytool for Dummies

Intention: This article intends to be simple, clear, concise and accurate. I encourage you to read further after this; Google is your friend. It is targeted at an audience with little or no knowledge of internet security infrastructure who are stuck in the sorry state of dealing with keytool and digital certificates. I hope you find it useful. Thank you for your time.

Some basics

Even before I dive into the Java keytool utility, you need to have a high-level understanding of the following:

SSL – A way to secure internet communication from your browser to a secure website. Websites using SSL have https:// in their address, as shown below.

PKI or Public Key Infrastructure – SSL uses PKI to implement security. PKI uses two keys (keys are, loosely speaking, mathematical functions) to secure communication between a browser and a secure website.

  • Public key – A key which is made public (published online and given away) to anyone and everyone who wishes to communicate with the secure website. It is used by the receiver of the key to convert normal messages into cryptic messages. These cryptic messages are useless until they are converted back into normal messages.
  • Private key – A secret (hence private) key which is possessed and maintained only by the secure website. This key is used to convert the cryptic messages back into normal messages.

When a message (say a text message) is passed from the browser to a secure website, the public key is used to convert the text message into a cryptic form (this process is called encryption). The cryptic message can only be transformed back into the normal text message using the private key that the website owns. So other than the secure website, no one else can use the message, as they don’t have the private key to convert the cryptic message back to the normal message.
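This round trip can be illustrated with OpenSSL. It is an illustration only, not part of the keytool workflow below, and all file names are made up:

```shell
# Toy RSA round trip: encrypt with the public key, decrypt with the private key.
openssl genrsa -out /tmp/demo_priv.pem 2048 2>/dev/null          # website's private key
openssl rsa -in /tmp/demo_priv.pem -pubout \
        -out /tmp/demo_pub.pem 2>/dev/null                       # published public key
printf 'hello' > /tmp/demo_msg.txt
openssl pkeyutl -encrypt -pubin -inkey /tmp/demo_pub.pem \
        -in /tmp/demo_msg.txt -out /tmp/demo_msg.enc             # anyone can encrypt
openssl pkeyutl -decrypt -inkey /tmp/demo_priv.pem \
        -in /tmp/demo_msg.enc                                    # only the key owner can decrypt
```

The last command prints the original message, showing that only the private-key holder can recover what was encrypted with the public key.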

Digital certificate – Consider this to be like a Driving License. A Driving License is issued by a government authority certifying that you know how to drive. People trust this license because they trust the government authority. Similarly, for secure web transactions, Certificate Authorities (Verisign, Thawte, Digicert, etc.) are trusted by all computers and browsers. If a website presents a certificate issued by a trusted CA, your browser trusts that the website is secure. The digital certificate contains a public key (with some information) of the secure website. This public key is used to encrypt the communication; the cryptic messages then need to be transformed back into normal ones using the private key maintained only by the secure website.

Your browser has a list of Certification Authorities that it trusts like below.

Chrome Browser Trusted Certificates

Java Keytool utility

Keytool is a program to manage the private key, public key and digital certificates (provided by a Certificate Authority like Verisign or Thawte) of a secure website. Keytool stores all the keys and certificates it manages in a container (which is also a file) called a keystore. Using keytool, you can add, delete and view the keys and certificates stored in the keystore.

The following are the different phases of implementing SSL security through keytool.

  • Creating a keystore and a private key
  • Creating Certificate Signing Request(CSR)
  • Retrieving certificates from CA
  • Importing Root certificates to your keystore
  • Importing intermediate certificates to your keystore
  • Importing the server certificates to keystore

Step 1 – Creating a keystore and a private key

Before generating keys and installing certificates, you’ll need a container to store them. So the first step is to create that store, called a keystore.

The command to create a keystore with a Private Key is:

keytool -genkey -alias <hostname> -keyalg RSA -keysize 2048 -keystore webserver.keystore

You’ll be prompted to fill in details, after which a keystore with a private key is created. It is important to provide the hostname or FQDN of your web server as the alias.

Below is what the command means:

-genkey is the command to generate a private key and create a keystore if there is none.

-alias is the tag used to identify an entry in the keystore. Consider it a name that identifies the keystore entry. You can provide any name, but I recommend using the hostname or FQDN when generating the private key.

-keyalg is the algorithm used for the key pair. RSA is what is usually used.

-keysize is the key size in bits. Nowadays 2048 bits is the standard.

-keystore is the name of the keystore, which in this case is webserver.keystore. If a keystore with this name does not exist on the system, keytool will create one.

Important Notes

  • If you don’t specify the -keystore option, keytool will use the default keystore. So make sure you are not making modifications to the default keystore but to the newly created one. It may be a good practice to find all the keystore files already on the system and move them to a different folder. You can find the location of the default keystore in the operating system documentation, or search the entire filesystem with: sudo find / -name "*keystore"
  • It is a good practice (highly recommended) to provide the hostname of the server as the alias.

To view the Private Key stored in the keystore you may execute the following command:

keytool -list -v -keystore webserver.keystore

Step 2 – Creating Certificate Signing request

In the step above you created a keystore and a private key. Now you need to apply to a Certificate Authority (like Verisign or Thawte) to issue your server a digital certificate. This request is called a CSR or Certificate Signing Request.

The command to create a CSR is:

keytool -certreq -v -alias <hostname> -file aruntest.pem -keystore webserver.keystore

Once this command executes successfully, the CSR file is created.

Below is what the command means:

-certreq specifies that this is a certification request to be sent to the CA.

-alias should be the same alias that you used in Step 1 for generating the private key.

-file is the certification request file we want to create and send to the CA.

-keystore is the keystore for which the certificate is being requested.

You will need to ensure that the above values are accurate and exactly match the values from Step 1.

In the above example, aruntest.pem is the request that we created. Opened in a text editor, it is a Base64-encoded PEM block.
Step 3- Retrieving certificates from CA

Now that you have created the certificate request for your web server, it is time to go to the website of a CA for signing and endorsement. You can download the RootCA and intermediate certificates from the issuer’s website or by contacting their tech support. The CA usually gives three types of certificates:

  • Server certificate for webserver,
  • RootCA certificate,
  • Intermediate certificate.

The server certificate is the one which contains your public key and which certifies your server. In the screenshot below, the issuer is Thawte, a reputed CA.

Then you have the RootCA certificate. This is the CA’s own certificate and it has all the details of the CA, like below. Notice the “Issued to” and “Issued by” fields.

You may also have another certificate called the intermediate certificate. This is simply a certificate which sits between the server certificate and the RootCA certificate, like a bridge between the two.

Step 4 – Importing Root certificates to your keystore

In Step 1, you created a keystore and populated it with a private key. In Step 3, you obtained the RootCA, intermediate and web server certificates. You now need to install these certificates.

The order in which the certificates are installed is important: first the RootCA certificate, then the intermediate certificate, and last the web server certificate.

So as the first step, let us install the RootCA certificate. You may use the following command with any alias:

keytool -import -trustcacerts -alias root -file RootCertFileName.crt -keystore webserver.keystore

where RootCertFileName.crt is the RootCA certificate and the keystore name is webserver.keystore.

Step 5 – Importing Intermediate certificates to your keystore

After installing RootCA certificate, you need to then install intermediate certificate.

keytool -import -trustcacerts -alias intermediate -file Intermediate-Digicert.crt -keystore webserver.keystore

Note that in some cases there may not be an intermediate certificate. In such cases, the RootCA certificate alone will suffice.

Step 6 – Importing Server certificate to your keystore

After installing RootCA and Intermediate certificates, you need to then import server certificates.

keytool -import -trustcacerts -alias <hostname> -file arunpccert.crt -keystore webserver.keystore

Important note: Make sure that you provide the same alias as your private key, i.e. the hostname used in Step 1.

To view the keys in keystore, you may do it like this:

keytool -list -keystore webserver.keystore

Verifying the certificates are correctly installed

Once the keystore is populated with keys and certificates, you may verify that the certificate chain is established from the server certificate to intermediate certificate to the RootCA certificate.

If you run the command:

keytool -v -list -keystore webserver.keystore

It will list information about all the certificates. Look for the PrivateKeyEntry and check whether the certificate chain length is 2 or more. This tells you that the server certificate is able to establish a chain up to the RootCA.



Alias name:

Creation date: May 9, 2012

Entry type: PrivateKeyEntry

Certificate chain length: 3

Configuring your SSL Connector

Tomcat will need an SSL Connector configured before it can accept secure connections.

Open the Tomcat server.xml file in a text editor (this is usually located in the conf folder of your Tomcat’s home directory). Find the connector that will be secured with the new keystore and uncomment it if necessary (it is usually a connector with port 443 or 8443 like the example below).

Specify the correct keystore filename and password in your connector configuration. When you are done your connector should look something like this:

<Connector port="443" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" disableUploadTimeout="true" acceptCount="100" scheme="https" secure="true" SSLEnabled="true" clientAuth="false" sslProtocol="TLS" keyAlias="server" keystoreFile="/home/user_name/webserver.keystore" keypass="your_keystore_password" />

Note: If you are using version 7 of Tomcat you will need to change “keypass” to “keystorePass”.

Save your changes to the server.xml file.

Restart Tomcat Server.

Cannot play DVD in an RDP or Citrix HDX session

When you try to play a DVD in an RDP session or a Citrix HDX session, you get the following error:

The error message goes like this: Windows Media Player cannot play this DVD because it is not possible to turn on analog copy protection on the output display.


This is a limitation by design. Such an error occurs because media such as DVDs protected by DRM (Digital Rights Management) do not allow content to be redirected from the server (the Terminal Services or Citrix machine) to the endpoint. This is more a legal or regulatory issue than a technical one.

Some more info

What is DRM?

Digital Rights Management is a way for copyright owners of digital content to exercise their muscle and prevent their content being used in ways they have not authorized. Read more on Wikipedia.

In this context, the DVD publisher would like to sell more copies of the DVD. They would like everyone who wants to watch the DVD to buy a copy. But when you play a DVD on an RDS or Citrix server, only one copy of the DVD is needed and all the end users can view the content through RDP or HDX without buying more copies. This is supposedly a copyright violation and a loss of business for them. Hence they don’t allow you to do so.

I tried editing registry settings on the machine to bypass the check, but I have not been able to. So I guess this is a designed limitation.

More info can be found here and here.

Simple test to confirm this outside RDP or Citrix HDX

  1. Insert a DRM-protected DVD into your laptop drive.
  2. Share your laptop’s DVD drive and give permissions for anyone on the network to access it.
  3. On another machine on the network, access the shared drive. (Type \\<IPAddress-of-the-laptop>\E$ where E is the drive letter of the DVD drive.)
  4. Try to play the DVD now. If there is an error, it means the DVD cannot be played remotely, which means it cannot be played in an RDP or HDX session either.



Buying Gold for my daughter

The other day at the lunch table, my colleague jokingly asked:

“Arun, being a Malayalee, have you started buying Gold for your daughter’s wedding?”

For those of you who don’t know the relationship between Gold and Malayalees, here it is:

Malayalees love gold. Especially when it comes to their daughters’ wedding.

I guess it originated from the logic that Gold can be an asset in the future and instead of wasting a lot of money on clothes, car and the wedding function, it might be wiser to spend it on Gold.

I don’t subscribe to this view at all but his question definitely got me thinking.

I know that with globalization, travel, exposure, the internet, etc., the definition of “asset” is changing. What we perceive as highly valuable today may not be that valuable in the future.

But the corollary is actually more interesting.

Some resources that could be cheap today can be of immense value tomorrow.

So I thought, what could be that thing my daughter may want when she is 16?

Of course, I have no clue and this surely cannot be predicted, but I am certainly going to take a shot at it. Here I go.

Everyone today has a digital presence. Everyone has an email address for communication, a Skype ID for chat, a Facebook ID for social networking, a LinkedIn ID for career, etc. This is a given.

So I think buying a domain name “snehaarun” is a good idea and will be of immense value in the future. Think about it:

  • Good domain names are on the verge of extinction. They are very hard to get, and even if you get one, it will be at a very high price.
  • If you want to start a blog, publish anything, start a project or a movement, or voice your opinion, it is always best to do it on your own domain.
  • Some of you may have already realized this. When it comes to employment, your online personal brand is more important than anything else right now. So the online brand called “You” is much better built on your own domain name.
  • Most importantly, it is nice to have your own URL.

Hence, I went to GoDaddy and purchased the domain for $80, valid for the next 10 years.

Today Sneha turns 1. Hopefully she will appreciate her Daddy’s first birthday gift. :)