Now that we have seen a basic cluster setup, we can expand on it and show you how to make your cluster as fault tolerant as possible. This article talks about splitting the tiers out onto different machines and highlights the advantages of doing so.
As we have seen before, this is a simple cluster setup, and it is a good starting point for any Data Collection cluster.
When you install Data Collection, one of the first questions you are asked is whether you want a single-machine install or a cluster install. If you choose single, then turning that single setup into a cluster later is harder to do. If you start with a cluster, adding additional machines is simple. So to improve our cluster we could just keep adding more IIS machines, like this:
The above is OK as a system design, but we can do better. For more security and improved capacity, we can split the Interviewer tier out of the IIS machines. This frees them up to think only about collecting data, and adds another machine to the cluster dedicated to working with that data and putting it where it is needed.
Now of course, as the Interviewer tier becomes overloaded, we can add more machines at that level to handle the additional load. All this time we have been adding machines to the Web and Interviewer tiers, but we have not been thinking about SQL. When it comes to making a single SQL machine more tolerant, all we can do is keep adding more hard disk space, memory and CPUs until it can cope no more. Well before you reach that stage, you should consider upgrading to a SQL cluster.
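Adding machines to a tier only helps if the work is actually spread across them. The general idea can be sketched as a simple round-robin dispatcher in Python (a generic illustration with made-up machine names, not how Data Collection's own load balancing works):

```python
from itertools import cycle

# Hypothetical Interviewer tier machine names, purely for illustration.
interviewer_tier = ["INT-01", "INT-02", "INT-03"]
next_machine = cycle(interviewer_tier)

def dispatch(interview_id):
    """Hand the next interview to the next machine in rotation."""
    return (interview_id, next(next_machine))

assignments = [dispatch(i) for i in range(6)]
print(assignments)
# Each machine gets an even share of the interviews; adding a fourth
# machine to the list automatically spreads the same load thinner.
```

The point is that scaling the Web and Interviewer tiers is "scale out" (add another identical box), whereas a single SQL machine can only "scale up" (add disk, memory, CPU) until it hits a ceiling.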
A very simple overview of a SQL cluster is as follows. As far as Data Collection is concerned, there is one machine it points its data at. This is in fact not a machine at all, but a virtual IP address (VIP) that points to the SQL instance name, which is made up of a number of SQL nodes. The usual number of nodes is two, and they are physical machines: one is active and one is passive. By passive we mean not really doing anything, just waiting for the active node to have an issue and hand all the work over to it. The actual data is not stored on any of the nodes; it is stored on an additional machine, with all the nodes pointing to that one place. So we don't have lots of data files sitting around, we have just one. The setup would look something like this:
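The active/passive idea can be sketched in a few lines of Python. This is a toy simulation of the concept, not Data Collection or SQL Server code; the `ClusterVip` class and the node names are invented for illustration:

```python
# Toy sketch of an active/passive SQL cluster behind a virtual IP.
# Clients only ever talk to the VIP; the VIP forwards work to whichever
# node is currently active, and the data lives in one shared store.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class ClusterVip:
    """Stands in for the virtual IP / SQL instance name."""
    def __init__(self, nodes, shared_storage):
        self.nodes = nodes              # e.g. [active, passive]
        self.storage = shared_storage   # the single shared data location

    def active_node(self):
        # The first healthy node serves requests; the rest just wait.
        for node in self.nodes:
            if node.healthy:
                return node
        raise RuntimeError("no healthy SQL node available")

    def write(self, record):
        node = self.active_node()
        self.storage.append((node.name, record))
        return node.name

storage = []
vip = ClusterVip([Node("SQL-A"), Node("SQL-B")], storage)

print(vip.write("interview 1"))   # served by SQL-A
vip.nodes[0].healthy = False      # the active node fails...
print(vip.write("interview 2"))   # ...SQL-B takes over transparently
```

Note that the client never changes its connection details: it keeps writing to the same VIP before and after the failover, which is exactly why Data Collection can treat the whole cluster as "one machine".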
Q: OK, I am sold on this setup. How fault tolerant is this cluster setup?
Designs like this are at the top end of the fault tolerance chart, and we should really start to use the words High Availability. In this system, every machine has a failover machine that will take on its work in a matter of seconds.
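Because failover takes seconds rather than being instantaneous, clients typically ride out the failover window with a short wait-and-retry. A minimal sketch of that pattern, assuming a generic client call (this is not Data Collection's own API):

```python
import time

def with_failover_retry(fn, attempts=3, delay_seconds=2):
    """Retry a call a few times, giving the passive node time to take over.

    Generic client-side pattern: if the active node dies mid-request,
    the passive node activates within seconds, so a brief pause and a
    retry usually succeed against the same virtual IP.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError as err:
            last_error = err
            time.sleep(delay_seconds)   # wait for failover to complete
    raise last_error

# Simulate a query that fails once during failover, then succeeds.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("active node went down")
    return "42 rows"

result = with_failover_retry(flaky_query, delay_seconds=0)
print(result)  # "42 rows" on the second attempt
```

The retry count and delay are assumptions to tune against your actual observed failover time.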
Q: What hardware do I require for this setup?
What can you afford? The more money you have, the better the equipment you can buy and the less likely you are to suffer failures that leave your system 100% down. But don't forget: if you have lots of machines dedicated to the three tiers, but you forget about the FMROOT directory where all the files sit, and that goes down, you will have wasted all your money.
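That "wasted your money" point can be made with a little availability arithmetic. The 99% figure below is a made-up number purely for illustration; the lesson is that a single un-duplicated component dominates the whole system:

```python
def parallel(*availabilities):
    """Availability of redundant machines: up unless ALL of them are down."""
    down = 1.0
    for a in availabilities:
        down *= (1.0 - a)
    return 1.0 - down

def series(*availabilities):
    """Availability of a chain: every component must be up."""
    up = 1.0
    for a in availabilities:
        up *= a
    return up

node = 0.99                         # assume each machine is up 99% of the time
web = parallel(node, node)          # two Web tier machines -> 0.9999
interviewer = parallel(node, node)  # two Interviewer tier machines
sql = parallel(node, node)          # active/passive SQL nodes
fmroot = node                       # one lone FMROOT file share

system = series(web, interviewer, sql, fmroot)
print(round(system, 4))
# The three redundant tiers each reach ~99.99%, but the single FMROOT
# drags the whole system back down to roughly 99% - the weakest link wins.
```

In other words, spend some of the budget making FMROOT (and anything else without a failover partner) redundant before gold-plating the tiers that already have one.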
Q: Does all this have to sit in a DMZ?
The answer is no, but you have to be very careful if you split it up between, say, a DMZ and a domain. Certain features require specific permissions and rights, and if your network design does not support them you won't be able to install the product. So ask an expert to review your network topology; it will save you time in the long run.
We could go into a lot more detail about network design, but for the time being we will leave it here. Hopefully all the articles we have written have given you enough information to get started.