CoreOS — etcd vs. etcd2, Configuration & Cluster Discovery

There are multiple great utilities and tools within the CoreOS ecosystem. One of them is etcd, a distributed key-value store. etcd is used to share and store data within a cluster of CoreOS machines and makes the data generally available to every machine. Additionally, the shared configuration is used for service discovery within the cluster.

Within this article, we’ll walk through the etcd daemon, its configuration options, and the automatic cluster discovery process. Within the next article, we’ll extend the view on etcd and look at the command line utility etcdctl and etcd’s HTTP API to get and set key-value pairs.

etcd is a highly available, distributed key-value store responsible for storing data across a cluster of CoreOS machines. It automatically handles leader election via the Raft consensus protocol. If you want to read more about Raft, have a look at the slides on Speaker Deck or check out an animated illustration. Using the Raft protocol makes etcd tolerant of machine failures.

Leader election within a cluster is designed to avoid the split-brain problem: electing a new leader requires the votes of a majority (quorum) of cluster members. As long as a majority of machines remains available, etcd will gracefully elect a new leader in case the previous one fails.

Within this guide, we’re going to get, set, and watch for key changes within the cluster. Of course, you can execute the example commands on your own machines. Within the previous article, we showed you how to set up a cluster of CoreOS machines.

etcd vs etcd2

CoreOS currently ships with both etcd and etcd2. It’s highly recommended to use etcd2 since it includes all security and feature updates. Within this series of CoreOS posts, we use etcd2 and write etcd2 whenever referring to the command line. If we just write etcd, we mean the tool etcd in general; on the command line, we always use the etcd2 commands.

etcd Configuration

CoreOS uses a declarative configuration file called cloud-config to customize OS-specific items like network configuration and systemd units. CoreOS reads the cloud-config during system startup and translates the configuration entries into systemd unit drop-ins for the etcd2.service. The cloud-config file consists of a coreos block with various entries, indented and identified by name. Use the etcd2 identifier to define etcd configuration options. The cloud-config below illustrates an etcd configuration.


    #cloud-config
    coreos:
      etcd2:
        discovery: https://discovery.etcd.io/<token>
        advertise-client-urls: http://$public_ipv4:2379
        initial-advertise-peer-urls: http://$private_ipv4:2380
        # listen on both the official ports and the legacy ports
        # remove legacy ports if you don’t rely on them
        listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
        listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001

You can show all available options by using the -h flag with the etcd2 command.

$ etcd2 -h
usage: etcd [flags]  
       start an etcd server

       etcd --version
       show the version of etcd

       etcd -h | --help
       show the help information about etcd

member flags:  
clustering flags:  
proxy flags:  
security flags:  
unsafe flags:  

You can include every option in your cloud-config file. To do so, remove the double dash and use the <key>: <value> style. For example, the command-line flag etcd2 --discovery https://url/token translates to discovery: https://url/token.
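To make the translation concrete, here is a flag-style invocation and its cloud-config equivalent side by side (the URL values are illustrative placeholders):

```
# command line
etcd2 --discovery https://url/token --advertise-client-urls http://$public_ipv4:2379

# cloud-config equivalent
coreos:
  etcd2:
    discovery: https://url/token
    advertise-client-urls: http://$public_ipv4:2379
```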

Current Cloud-Config

CoreOS stores the current cloud-init configuration within the 20-cloudinit.conf file located in the /run hierarchy. Retrieve the information by using cat to print out the file’s content.

$ cat /run/systemd/system/etcd2.service.d/20-cloudinit.conf

The current discovery URL is stored within the ETCD_DISCOVERY variable. While booting the cluster nodes, every machine contacts the discovery service and requests the available information about other cluster members. The first node won’t receive any information about other nodes and assigns itself the leader role. Every other node checking in receives the information about the nodes that already joined the cluster.
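To illustrate, such a generated drop-in maps each cloud-config entry to an ETCD_* environment variable. The following is a sketch with made-up placeholder values, not output from a real machine:

```ini
[Service]
Environment="ETCD_DISCOVERY=https://discovery.etcd.io/<token>"
Environment="ETCD_ADVERTISE_CLIENT_URLS=http://203.0.113.10:2379"
Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.10:2380"
Environment="ETCD_LISTEN_PEER_URLS=http://10.0.0.10:2380,http://10.0.0.10:7001"
```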

etcd Cluster Discovery

Booting up a CoreOS cluster can be done statically or dynamically. Within a static CoreOS cluster, every machine needs to know the IP addresses of all other machines. You have to pre-define the initial list of etcd instances within the cluster before adding a new node. This mechanism is useful if your nodes have static IP addresses assigned, the nodes don’t have internet access, or there is no existing etcd cluster available.
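A static bootstrap uses the initial-cluster options instead of a discovery URL. A minimal sketch for the first of three machines, assuming the hypothetical addresses 10.0.0.1 through 10.0.0.3:

```
coreos:
  etcd2:
    name: machine-1
    initial-cluster-state: new
    initial-cluster: machine-1=http://10.0.0.1:2380,machine-2=http://10.0.0.2:2380,machine-3=http://10.0.0.3:2380
    initial-advertise-peer-urls: http://10.0.0.1:2380
    listen-peer-urls: http://10.0.0.1:2380
    listen-client-urls: http://0.0.0.0:2379
    advertise-client-urls: http://10.0.0.1:2379
```

Each machine gets its own name and peer URLs, while the initial-cluster list is identical on all three.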

Besides the static cluster setup, etcd provides an automated cluster discovery to bootstrap a new cluster with the help of an existing one. This is a typical scenario when using DHCP within your network or when you don’t know the static IP addresses of cluster peers upfront. Cloud providers generally use DHCP for networking, which makes it hard to set up a cluster of machines statically.

We focus on dynamic etcd cluster setup and the following section describes how to use the public etcd discovery server to bootstrap a new cluster.

Public Discovery Service

Within the previous post on how to set up a CoreOS cluster, we already used the public etcd discovery service. etcd uses a unique discovery token to identify a cluster and each machine using this token joins it. You need to define the discovery token within the CoreOS cloud-config. During the boot phase, CoreOS checks the discovery address and uses it to join the cluster.

The publicly available CoreOS discovery service lives at https://discovery.etcd.io. It’s just an etcd cluster available on the internet. You can request a new token by visiting https://discovery.etcd.io/new?size=n with a specified cluster size n (n defaults to 3 when not defined). The response for a token request is the discovery URL including the token.
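A minimal sketch of requesting and inspecting such a URL; the token below is made up for illustration:

```shell
# request a new discovery URL for a 3-machine cluster (requires internet access):
#   curl -w "\n" "https://discovery.etcd.io/new?size=3"
# the response is a URL whose last path segment is the cluster token, e.g.:
url="https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3"
# strip everything up to the final slash to extract just the token
echo "${url##*/}"
```

The resulting URL is what you place into the discovery entry of your cloud-config.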

The discovery URL identifies a unique etcd cluster, and you must request a new token for every new cluster. Don’t reuse existing discovery tokens! Additionally, a token should only be used for the initial cluster boot.

If you bootstrap a CoreOS cluster with more instances than specified while requesting the discovery token, the additional nodes fall back into proxy mode. Proxy nodes don’t participate in the consensus of the etcd cluster; they act as transparent nodes forwarding all client requests to the active etcd cluster instances.
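You can also put a node into proxy mode explicitly instead of relying on the fallback. A hedged sketch of such a cloud-config fragment (the token is a placeholder):

```
coreos:
  etcd2:
    proxy: on
    discovery: https://discovery.etcd.io/<token>
    listen-client-urls: http://0.0.0.0:2379
```

Clients on this machine can still talk to port 2379 as usual; the proxy forwards their requests to the real cluster members.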

Private etcd Discovery Service

Since etcd uses an existing cluster to bootstrap a new one, you can of course use your own existing etcd cluster as a discovery service. Within an upcoming article, we’ll dive deeper into how to use your own etcd instances as a discovery service to bootstrap new CoreOS clusters.


We showed you how to configure etcd via cloud-config and the command line. You can take any option available in the command line utility and define it within the cloud-config file; CoreOS automatically translates the configuration during system boot into systemd unit drop-ins. Furthermore, we explained the static and dynamic discovery process etcd uses to bootstrap a new CoreOS cluster.

This is the first of two articles about etcd. Within the next post, we’ll dig deeper into the command line utility etcdctl and how to set/get/change/watch key-value pairs.
