Scale Kong - Create a Kong Cluster

Overview:

In the previous sections, you installed Kong using Docker; provisioned, protected, and rate limited your API; explored the various configuration files; configured secure access to your APIs and the Kong Admin loopback; and load balanced incoming requests using the Kong ring balancer.

In this section, you will scale your environment, by adding Kong nodes to a cluster.


Multiple Kong nodes pointing to the same datastore must belong to the same “Kong Cluster”. A Kong cluster allows you to scale the system horizontally by adding more machines to handle a larger load of incoming requests, and all nodes share the same data since they point to the same datastore.

A Kong cluster can be created in a single datacenter or across multiple datacenters, in both cloud and on-premise environments. Kong takes care of nodes joining and leaving the cluster automatically, as long as each node is configured properly.
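
What makes nodes members of the same cluster is simply that they are configured to point at the same datastore. In a Docker Compose file, that looks roughly like this (a sketch only; the image tags, service names, and the choice of Cassandra are illustrative assumptions, not necessarily the exact file from the earlier sections):

    version: '2'
    services:
      kong:
        image: kong:0.10
        environment:
          - KONG_DATABASE=cassandra                      # every replica uses the same settings...
          - KONG_CASSANDRA_CONTACT_POINTS=kong-database  # ...and therefore the same datastore
        depends_on:
          - kong-database
      kong-database:
        image: cassandra:3

Because every scaled replica of the kong service inherits the same environment, each new node automatically points at the same datastore and joins the same cluster.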

Scenario

In this exercise, you will scale Kong to 3 nodes and verify the configurations.

High Level Tasks

  1. Scale Kong
  2. Verify Scale
  3. Check Consul Logs

Detailed Configurations

1. Scale Kong

$ docker-compose scale kong=3

Results:

Starting compose_kong_1 ... done
Creating compose_kong_2 ... done
Creating compose_kong_3 ... done

This creates and starts 2 more Kong instances, so there are now 3 Kong instances in this cluster. Generally, it is recommended to run at least 3 Kong instances per cluster for high availability. The new containers are automatically registered in Consul and load balanced by the frontend NGINX LB. Docker Compose provides a quick and easy way to scale Kong in your environment.
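
Note that on newer versions of Docker Compose (1.13 and later), the standalone scale command is deprecated in favor of the --scale flag on up; the equivalent command there would be:

$ docker-compose up -d --scale kong=3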


2. Verify Scale

$ docker ps -a

You should see the 2 additional Kong containers, similar to this:

CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS                      PORTS                                                                            NAMES
323c7b91d536        compose_kong             "/docker-entrypoin..."   8 minutes ago       Up 8 minutes (healthy)      7946/tcp, 8000-8001/tcp, 8443-8444/tcp                                           compose_kong_3
2b104b93ec05        compose_kong             "/docker-entrypoin..."   8 minutes ago       Up 8 minutes (healthy)      7946/tcp, 8000-8001/tcp, 8443-8444/tcp                                           compose_kong_2
949654cbc3b7        compose_kong             "/docker-entrypoin..."   21 minutes ago      Up 21 minutes (healthy)     7946/tcp, 8000-8001/tcp, 8443-8444/tcp                                           compose_kong_1
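
Since this environment runs a pre-0.11, Serf-based Kong, you can also ask Kong itself for its view of the cluster through the Admin API's /cluster endpoint. A sketch, assuming curl is available inside the Kong image, using one of the container names from the output above:

$ docker exec compose_kong_1 curl -s http://localhost:8001/cluster

Each node should be reported with a status of "alive".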

3. Check Consul Logs

To view the Consul logs, type the following command. Remember to use your own unique container ID, e.g. ec05dd44b3c9.

$ docker container logs --details ec05dd44b3c9

The results in your logs should look similar to this:

2017-10-18T07:38:35.527045338Z
    2017/10/18 07:38:35 [INFO] serf: EventMemberJoin: ec05dd44b3c9 172.18.0.3
    2017/10/18 07:38:35 [INFO] serf: EventMemberJoin: ec05dd44b3c9.dc1 172.18.0.3
    2017/10/18 07:38:35 [INFO] raft: Node at 172.18.0.3:8300 [Follower] entering Follower state
    2017/10/18 07:38:35 [INFO] consul: adding server ec05dd44b3c9 (Addr: 172.18.0.3:8300) (DC: dc1)
    2017/10/18 07:38:35 [INFO] consul: adding server ec05dd44b3c9.dc1 (Addr: 172.18.0.3:8300) (DC: dc1)
    2017/10/18 07:38:35 [ERR] agent: failed to sync remote state: No cluster leader
    2017/10/18 07:38:36 [WARN] raft: Heartbeat timeout reached, starting election
    2017/10/18 07:38:36 [INFO] raft: Node at 172.18.0.3:8300 [Candidate] entering Candidate state
    2017/10/18 07:38:36 [INFO] raft: Election won. Tally: 1
    2017/10/18 07:38:36 [INFO] raft: Node at 172.18.0.3:8300 [Leader] entering Leader state
    2017/10/18 07:38:36 [INFO] consul: cluster leadership acquired
    2017/10/18 07:38:36 [INFO] consul: New leader elected: ec05dd44b3c9
    2017/10/18 07:38:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
    2017/10/18 07:38:36 [INFO] consul: member 'ec05dd44b3c9' joined, marking health alive
    2017/10/18 07:38:39 [INFO] agent: Synced service 'consul'
    2017/10/18 07:38:49 [INFO] agent: Synced service 'kong-8001-949654cbc3b7'
    2017/10/18 07:38:49 [INFO] agent: Synced service 'kong-8001-949654cbc3b7'
    2017/10/18 07:38:49 [INFO] agent: Synced check 'kong-8001-949654cbc3b7'

Verify that a cluster leader is assigned and the services have synced. In the logs, notice the “New leader elected: ec05dd44b3c9” entry and the member “joined, marking health alive” messages.
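
You can also query membership directly with the Consul CLI inside the container instead of reading raw logs; a quick sketch, using the same container ID:

$ docker exec ec05dd44b3c9 consul members

Each member should be listed with a Status of alive.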

Summary

Great! You added 2 additional Kong nodes to your environment with a simple scale command. All communication between the nodes and updates to each other happen automatically.

What Next - Coming Soon

As of Kong v0.11.x, Serf is no longer a dependency. Kong nodes now handle cache invalidation events via a built-in database polling mechanism. The “Datastore Cache” section of the configuration file contains 3 new properties, sketched with their defaults after this list:

db_update_frequency

db_update_propagation

db_cache_ttl
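
As a preview, these properties live in kong.conf alongside the other datastore settings. A sketch with the defaults documented for 0.11 (illustrative values, not tuned recommendations):

    db_update_frequency = 5      # seconds between polls of the datastore for invalidation events
    db_update_propagation = 0    # extra seconds to wait for eventually-consistent datastores (e.g. Cassandra)
    db_cache_ttl = 3600          # seconds an entity stays in a node's local cache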

Let's explore these properties and configurations - COMING SOON
