Updated: Using Consul For Service Discovery in Multiple Data Centers Version 1.4 – Part 1


Configuring Consul for Service Discovery in Multiple Data Centers

Since my last post on how to configure Consul for service discovery across multiple data centers, a number of things have changed and been updated. You can see the original posts by going here for part 1 and here for part 2. The original write-up used version 0.9.2, while the current version is 1.4.2.

Below, I have updated the Consul configuration to work with version 1.4 (the most recent version as of this writing).

Before looking at the configuration changes, it’s good to point out some of the new features added in more recent versions of Consul.

Below is a partial list of some of the new or enhanced features.

  1. ACLs: With recent versions you can configure ACLs (access control lists) to control who has access to what; this covers the Web UI, REST calls, and the CLI.
  2. SSL: Full support for TLS/SSL, configured with consul tls…
  3. UI update: The web UI got a major facelift.

For the full list of changes and enhancements (and there are many since version 0.9.2), look here.

Note: One of the changes/issues I ran into with the current version(s) is that 0.0.0.0 is no longer accepted as the -client bind address (really a Go change). In order for Consul to answer DNS and other requests from the network, you will have to bind the client address explicitly, as shown below.
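A minimal sketch of the change, using the first DC1 server’s address from the table further down (use each node’s own address; 0.0.0.0 was the old value):

    "client_addr": "10.150.100.17"

The same value can also be passed on the command line with the -client option.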

In addition to the above, some configuration parameters have changed or been deprecated.

Consul Server Example

The IP address schema used in this Consul configuration is shown below.

The table below lists the names and IP addresses used in this configuration (feel free to replace them to suit your environment).

DC1
  Name            IP Address
  Consul Servers
  ConsulMaster1   10.150.100.17
  ConsulMaster2   10.150.100.18
  ConsulMaster3   10.150.100.19
  Consul Client
  Dc1Client1      10.150.0.145
  End Host
  dc1-devops1     10.150.0.106

DC2
  Name            IP Address
  Consul Servers
  ConsulMaster1   10.50.100.17
  ConsulMaster2   10.50.100.18
  ConsulMaster3   10.50.100.19
  Consul Client
  Dc2Client1      10.50.0.145
  End Host
  dc2-devops1     10.50.0.106

Consul Multi Data Center layout used in this article

Please take a look at part 1 of my original Consul article for a similar IP address schema you can use.

Consul server installation and configuration

In the test below I used a Solaris zones installation.
For a Solaris zone installation example, please take a look at part 1 (which used version 0.9.2).

First, let’s download Consul.
For a list of the latest releases, click here.
I used version 1.4.2, the latest version as of this write-up.
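A minimal sketch of the download and install steps, assuming a 64-bit Linux build and /usr/local/bin as the install path (pick the package that matches your platform from the releases page):

    cd /tmp
    wget https://releases.hashicorp.com/consul/1.4.2/consul_1.4.2_linux_amd64.zip
    unzip consul_1.4.2_linux_amd64.zip
    cp consul /usr/local/bin/
    consul version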

Next, let’s configure the Consul user and groups.
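Something along these lines should do (the consul user/group names and the /etc/consul.d and /var/consul paths are my own choices; adjust to your layout):

    # create a dedicated user/group plus the config and data directories
    groupadd consul
    useradd -g consul -d /var/consul -s /bin/false consul
    mkdir -p /etc/consul.d /var/consul
    chown -R consul:consul /etc/consul.d /var/consul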

Next, we need to generate an encryption key.
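The key is generated with Consul’s built-in keygen command; save the output, as it goes into the encrypt field of every config.json below:

    consul keygen
    # sample output (yours will differ): pUqJrVyVRj5jsiYEkM/tFQ==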

Finally, we need to create the Consul config.json files; the contents are shown below.
Consul config.json for the Consul Servers

Consul Server DC1 – First node config.json
Note: Replace DNS and IP address information to reflect your environment.
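A minimal sketch of what the first DC1 server’s config.json could look like. The bind, client and advertise addresses come from the table above; the datacenter name dc1, the /var/consul data directory, the log level and the encrypt placeholder are my assumptions (replace the encrypt value with the output of consul keygen):

    {
      "datacenter": "dc1",
      "node_name": "ConsulMaster1",
      "data_dir": "/var/consul",
      "server": true,
      "bootstrap_expect": 3,
      "bind_addr": "10.150.100.17",
      "client_addr": "10.150.100.17",
      "advertise_addr": "10.150.100.17",
      "encrypt": "REPLACE_WITH_CONSUL_KEYGEN_OUTPUT",
      "retry_join": ["10.150.100.18", "10.150.100.19"],
      "performance": {
        "raft_multiplier": 1
      },
      "log_level": "INFO"
    }

Save it as /etc/consul.d/config.json (an assumed path), owned by the consul user created earlier.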

Consul Server DC2 – First node config.json
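The first DC2 server follows the same pattern; only the datacenter name and the addresses change. Again a sketch, with the same assumptions as above:

    {
      "datacenter": "dc2",
      "node_name": "ConsulMaster1",
      "data_dir": "/var/consul",
      "server": true,
      "bootstrap_expect": 3,
      "bind_addr": "10.50.100.17",
      "client_addr": "10.50.100.17",
      "advertise_addr": "10.50.100.17",
      "encrypt": "REPLACE_WITH_CONSUL_KEYGEN_OUTPUT",
      "retry_join": ["10.50.100.18", "10.50.100.19"],
      "performance": {
        "raft_multiplier": 1
      },
      "log_level": "INFO"
    }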

Note: The above config.json is for the first node. Make sure to change the fields below on the other two nodes (nodes two and three).

  1. bind_addr
  2. node_name
  3. client_addr

Tip: The raft_multiplier value under the performance block defaults to 5; the reason for that (to my understanding) is to accommodate small AWS instance types such as the t2 family. For maximum performance, set it to 1.

Next, to start the Consul servers, just run the below.
First on the three DC1 nodes; then, once those are up, run it on the three DC2 nodes.
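Something like the line below is what I mean by the startup (the /etc/consul.d config path and the log file location are assumptions):

    nohup /usr/local/bin/consul agent -ui -config-dir=/etc/consul.d > /var/log/consul.log 2>&1 &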
Tip: You can remove the nohup to run in the foreground.

Note: The above startup enables the Web UI. If you don’t want the Web UI on the Consul servers, just remove the -ui option.

Now, let’s move on to the Consul client configuration.
Consul config.json for the Consul Clients
Consul Client DC1 – First node config.json
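A minimal sketch of the DC1 client’s config.json, including an example service registration pointing at the dc1-devops1 end host from the table above (the web service name, port and health check are purely illustrative; the data directory and encrypt placeholder follow the same assumptions as the server configs):

    {
      "datacenter": "dc1",
      "node_name": "Dc1Client1",
      "data_dir": "/var/consul",
      "server": false,
      "bind_addr": "10.150.0.145",
      "client_addr": "10.150.0.145",
      "encrypt": "REPLACE_WITH_CONSUL_KEYGEN_OUTPUT",
      "retry_join": ["10.150.100.17", "10.150.100.18", "10.150.100.19"],
      "services": [
        {
          "name": "web",
          "port": 80,
          "address": "10.150.0.106",
          "checks": [
            {
              "http": "http://10.150.0.106:80/",
              "interval": "10s"
            }
          ]
        }
      ]
    }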

Consul Client DC2 – First node config.json
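And the DC2 client, with the same structure but DC2 names and addresses (same assumptions as above):

    {
      "datacenter": "dc2",
      "node_name": "Dc2Client1",
      "data_dir": "/var/consul",
      "server": false,
      "bind_addr": "10.50.0.145",
      "client_addr": "10.50.0.145",
      "encrypt": "REPLACE_WITH_CONSUL_KEYGEN_OUTPUT",
      "retry_join": ["10.50.100.17", "10.50.100.18", "10.50.100.19"],
      "services": [
        {
          "name": "web",
          "port": 80,
          "address": "10.50.0.106",
          "checks": [
            {
              "http": "http://10.50.0.106:80/",
              "interval": "10s"
            }
          ]
        }
      ]
    }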

Note: The address property under services can be used to override the address returned in DNS replies for this service lookup.

Create a startup script with the below.
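A minimal sketch of such a script (the /opt/consul/start_consul.sh path and log location are my own choices):

    #!/bin/bash
    # /opt/consul/start_consul.sh - start the Consul client agent
    exec /usr/local/bin/consul agent -config-dir=/etc/consul.d >> /var/log/consul.log 2>&1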

To start consul, just run the below.
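For example, assuming the script sketched above:

    chmod +x /opt/consul/start_consul.sh
    nohup /opt/consul/start_consul.sh &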

Tip: You can omit the nohup to run in the foreground (for troubleshooting).

If all done correctly, you should now have a working Consul cluster.
To access the Web UI, just go to any Consul server on port 8500.
For example, http://10.150.100.17:8500 will bring you to the screen below; pick your DC and continue to the node and services selection.

List of Consul nodes.

A failed Consul node’s services.

To continue reading part two, on how to configure Consul across multiple data centers, click here.

Note: This article was updated using Consul version 1.4; to access the original article, which used Consul version 0.9.2, click here.

