
Solr on Docker Swarm with overlay network on bare metal

13 Nov 2015

The Docker 1.9 announcement proclaims that "Networking is ready to use in production and works with Swarm and Compose". Let's try that out with Solr on my trinity cluster.

The Docker site has a Get started with multi-host networking guide which uses docker-machine and VirtualBox VMs. It first creates a VM to run a discovery database, then starts further virtual machines whose Docker hosts use that database. For my bare-metal setup I can't do that. Furthermore, to use Swarm, the swarm agents and master need to be able to talk to each other on the network, which can be a little more tricky when they are running in containers. Finally, docker-machine creates TLS certificates, which is obviously recommended for production deployments, but I don't want to complicate my setup with that just yet, and it is not supported by the Jenkins plugin either.

So for this demo I'll do it more old-school: instead of using docker-machine I'll tweak the fabric code from my previous blog to install the discovery database and the swarm code into the Docker host OS. You can find the updated fabric code here if you want to follow along. Obviously you'll need Fabric installed and loaded.

I'll run the individual fabric targets and explain what each one does. Where the output is the same for all hosts, I won't reproduce everything, for brevity. The hosts are called trinity10, trinity20, trinity30.
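
For example, a single task can be run against all three hosts explicitly like this (a sketch; the fabfile in the repository defines the host list, so a plain fab invocation works too):

# run the 'info' task on all three servers
fab -H trinity10,trinity20,trinity30 info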

Preparing the servers

Let's get started with fab info, to confirm that my servers are running the latest Ubuntu LTS, with a 3.16 kernel (the minimum requirement for the overlay network driver):

[trinity10] Executing task 'info'
[trinity10] run: cat /etc/lsb-release
[trinity10] out: DISTRIB_ID=Ubuntu
[trinity10] out: DISTRIB_RELEASE=14.04
[trinity10] out: DISTRIB_CODENAME=trusty
[trinity10] out: DISTRIB_DESCRIPTION="Ubuntu 14.04.2 LTS"
[trinity10] out: 

[trinity10] run: uname -a
[trinity10] out: Linux trinity10 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 17:43:14 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[trinity10] out: 

Next copy my ssh key to the servers:

[trinity10] Executing task 'copy_ssh_key'
[trinity10] put: /Users/mak/.ssh/id_dsa.pub -> tmpkey.pem
[trinity10] sudo: mkdir -p ~mak/.ssh
[trinity10] out: sudo password: 

[trinity10] out: 
[trinity10] sudo: cat ~mak/tmpkey.pem >> ~mak/.ssh/authorized_keys
[trinity10] out: sudo password:
[trinity10] out: 
[trinity10] sudo: chown mak:mak ~mak/.ssh
[trinity10] out: sudo password:
[trinity10] out: 
[trinity10] sudo: chown mak:mak ~mak/.ssh/authorized_keys
[trinity10] out: sudo password:
[trinity10] out: 
[trinity10] sudo: rm ~mak/tmpkey.pem
[trinity10] out: sudo password:
[trinity10] out: 

This enables IP forwarding and installs some tools we'll need later:

[trinity10] Executing task 'install_prerequisites'
[trinity10] sudo: modprobe ip6_tables
[trinity10] sudo: echo 'ip6_tables' >> "$(echo /etc/modules)"
[trinity10] sudo: modprobe xt_set
[trinity10] sudo: echo 'xt_set' >> "$(echo /etc/modules)"
[trinity10] sudo: sysctl -w net.ipv6.conf.all.forwarding=1
[trinity10] out: net.ipv6.conf.all.forwarding = 1
[trinity10] out: 

[trinity10] sudo: echo net.ipv6.conf.all.forwarding=1 > /etc/sysctl.d/60-ipv6-forwarding.conf
[trinity10] sudo: apt-get install --yes --quiet unzip curl git
[trinity10] out: Reading package lists...
[trinity10] out: Building dependency tree...
[trinity10] out: Reading state information...
[trinity10] out: git is already the newest version.
[trinity10] out: Suggested packages:
[trinity10] out:   zip
[trinity10] out: The following NEW packages will be installed
[trinity10] out:   curl libcurl3 unzip

Next, install Docker-engine, and add the main user to the docker group:

[trinity10] Executing task 'install_docker'
[trinity10] sudo: apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
[trinity10] out: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.2nV22v3Rss --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
[trinity10] out: gpg: requesting key 2C52609D from hkp server pgp.mit.edu
[trinity10] out: gpg: key 2C52609D: public key "Docker Release Tool (releasedocker) <docker@docker.com>" imported
[trinity10] out: gpg: Total number processed: 1
[trinity10] out: gpg:               imported: 1  (RSA: 1)
[trinity10] out: 

[trinity10] run: grep DISTRIB_CODENAME /etc/lsb-release |sed 's/.*=//'
[trinity10] out: trusty
[trinity10] out: 

[trinity10] put: <file obj> -> /etc/apt/sources.list.d/docker.list
[trinity10] sudo: apt-get --yes --quiet update

[trinity10] sudo: apt-cache policy docker-engine
[trinity10] out: docker-engine:
[trinity10] out:   Installed: (none)
....
[trinity10] sudo: apt-get --yes --quiet install docker-engine

[trinity10] sudo: adduser mak docker
[trinity10] out: Adding user `mak' to group `docker' ...
[trinity10] out: Adding user mak to group docker
[trinity10] out: Done.
[trinity10] out: 

[trinity10] sudo: sudo service docker restart
[trinity10] out: docker stop/waiting
[trinity10] out: docker start/running, process 2966
[trinity10] out: 

And check that it is running:

[trinity10] Executing task 'docker_version'
[trinity10] run: docker version
[trinity10] out: Client:
[trinity10] out:  Version:      1.9.0
[trinity10] out:  API version:  1.21
[trinity10] out:  Go version:   go1.4.2
[trinity10] out:  Git commit:   76d6bc9
[trinity10] out:  Built:        Tue Nov  3 17:43:42 UTC 2015
[trinity10] out:  OS/Arch:      linux/amd64
[trinity10] out: 
[trinity10] out: Server:
[trinity10] out:  Version:      1.9.0
[trinity10] out:  API version:  1.21
[trinity10] out:  Go version:   go1.4.2
[trinity10] out:  Git commit:   76d6bc9
[trinity10] out:  Built:        Tue Nov  3 17:43:42 UTC 2015
[trinity10] out:  OS/Arch:      linux/amd64
[trinity10] out: 

[trinity10] run: status docker
[trinity10] out: docker start/running, process 8221
[trinity10] out: 

Next, pre-pull some docker images, including zookeeper and solr. Whereas most tasks execute sequentially for clarity, this task is marked to execute in parallel on all three servers.

[trinity10] Executing task 'pull_docker_images'
[trinity20] Executing task 'pull_docker_images'
[trinity30] Executing task 'pull_docker_images'
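
The task itself is nothing special; it roughly boils down to pulling, on every host, the images used later in this post (a sketch, not the exact fabric code):

# pre-pull images so the later 'docker run' commands start quickly
docker pull busybox:latest
docker pull jplock/zookeeper:latest
docker pull makuk66/docker-solr:5.2-no-expose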

Installing the discovery service

At last we can get to the exciting bit. We'll install etcd for discovery, rather than Consul, solely because I happen to have fabric code already for that. I install it into a user directory.

[trinity10] Executing task 'install_etcd'
[trinity10] run: wget -nv https://github.com/coreos/etcd/releases/download/v2.2.1/etcd-v2.2.1-linux-amd64.tar.gz
[trinity10] out: 2015-11-11 20:08:48 URL:https://github-cloud.s3.amazonaws.com/releases/11225014/c527b502-7388-11e5-84a6-8dd3809f76f4.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAISTNZFOVBIJMK3TQ%2F20151111%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20151111T200844Z&X-Amz-Expires=300&X-Amz-Signature=d2ec3bf60171c89ff6e8dfeb255b7ebbd980775e79acf3fd9a087feac64816c2&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Detcd-v2.2.1-linux-amd64.tar.gz&response-content-type=application%2Foctet-stream [7353742/7353742] -> "etcd-v2.2.1-linux-amd64.tar.gz" [1]
[trinity10] out: 

[trinity10] run: tar xvzf etcd-v2.2.1-linux-amd64.tar.gz
...
[trinity10] run: cd etcd-v2.2.1-linux-amd64; /bin/pwd
[trinity10] out: /home/mak/etcd-v2.2.1-linux-amd64
[trinity10] out: 

[trinity10] run: ip -4 addr show dev eth0 | grep inet | awk '{print $2}' | sed -e 's,/.*,,'
[trinity10] out: 192.168.77.10
[trinity10] out: 

[trinity10] out: sudo password: 
[trinity10] put: <file obj> -> /etc/init/etcd.conf
[trinity10] sudo: service etcd start
[trinity10] out: sudo password:
[trinity10] out: etcd start/running, process 3142
[trinity10] out: 

The interesting thing in the above is the configuration of etcd, which I do in an Upstart config file in /etc/init/etcd.conf. That looks like:

description "Etcd daemon"

start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576

respawn

kill timeout 20

chdir /home/mak/etcd-v2.2.1-linux-amd64

script
    /home/mak/etcd-v2.2.1-linux-amd64/etcd \
      -name etcd-trinity10 \
      --advertise-client-urls http://192.168.77.10:2379 \
      --listen-client-urls http://0.0.0.0:2379 \
      --listen-peer-urls http://0.0.0.0:7001 \
      --initial-advertise-peer-urls http://192.168.77.10:7001 \
      --initial-cluster etcd-trinity10=http://192.168.77.10:7001,etcd-trinity20=http://192.168.77.20:7001,etcd-trinity30=http://192.168.77.30:7001 \
      --initial-cluster-state new

Note that I define the etcd cluster members explicitly; each of my three hosts participates as an etcd cluster member. That is not required (indeed not recommended) for large clusters, but for only 3 servers it makes sense.
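
The Upstart file above is for trinity10; on the other hosts only the -name and the advertised URLs change. On trinity20, for instance, those lines would presumably read:

      -name etcd-trinity20 \
      --advertise-client-urls http://192.168.77.20:2379 \
      --initial-advertise-peer-urls http://192.168.77.20:7001 \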

Let's verify that the etcd cluster came up:

[trinity10] Executing task 'check_etcd'
[trinity10] run: curl -L http://localhost:2379/version
[trinity10] out: {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}
[trinity10] run: curl -L http://localhost:2379/v2/machines
[trinity10] out: http://192.168.77.10:2379, http://192.168.77.20:2379, http://192.168.77.30:2379
[trinity20] Executing task 'check_etcd'
[trinity20] run: curl -L http://localhost:2379/version
[trinity20] out: {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}
[trinity20] run: curl -L http://localhost:2379/v2/machines
[trinity20] out: http://192.168.77.10:2379, http://192.168.77.20:2379, http://192.168.77.30:2379
[trinity30] Executing task 'check_etcd'
[trinity30] run: curl -L http://localhost:2379/version
[trinity30] out: {"etcdserver":"2.2.1","etcdcluster":"2.2.0"}
[trinity30] run: curl -L http://localhost:2379/v2/machines
[trinity30] out: http://192.168.77.10:2379, http://192.168.77.20:2379, http://192.168.77.30:2379
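
Alternatively, the etcdctl tool shipped in the same tarball can report cluster health (a quick sanity check, not part of the fabric code):

# from the etcd install directory on any of the hosts
cd ~/etcd-v2.2.1-linux-amd64 && ./etcdctl cluster-health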

Next, we reconfigure Docker to use this etcd cluster:

[trinity10] Executing task 'install_docker_config'
[trinity10] sudo: cp "$(echo /etc/default/docker)"{,.bak}
[trinity10] put: <file obj> -> /etc/default/docker
[trinity10] sudo: service docker restart
[trinity10] out: docker stop/waiting
[trinity10] out: docker start/running, process 6712
[trinity10] out: 

The config file looks like:

# Docker Upstart and SysVinit configuration file

# Customize location of Docker binary (especially for development testing).
#DOCKER="/usr/local/bin/docker"

# Use DOCKER_OPTS to modify the daemon startup options.
#DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4"

# If you need Docker to use an HTTP proxy, it can also be specified here.
#export http_proxy="http://127.0.0.1:3128/"

# This is also a handy place to tweak where Docker's temporary files go.
#export TMPDIR="/mnt/bigdrive/docker-tmp"
DOCKER_OPTS="--cluster-store=etcd://192.168.77.10:2379,192.168.77.20:2379,192.168.77.30:2379 --cluster-advertise=192.168.77.10:2375 -H unix:///var/run/docker.sock -H tcp://192.168.77.10:2375"

The DOCKER_OPTS there tell Docker where its cluster store is, which address to advertise to the cluster, and to listen on both the TCP address and the unix domain socket.
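
This file is again per-host: the --cluster-advertise address and the tcp:// listen address use each host's own IP. On trinity20, for instance, the line would presumably be:

DOCKER_OPTS="--cluster-store=etcd://192.168.77.10:2379,192.168.77.20:2379,192.168.77.30:2379 --cluster-advertise=192.168.77.20:2375 -H unix:///var/run/docker.sock -H tcp://192.168.77.20:2375"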

Installing Swarm

Next, I want to install Swarm, using their "installation for developers". That's probably overkill, and instead I could find a binary somewhere. But, I like it.

So first install Go:

[trinity10] Executing task 'install_go'
[trinity10] run: wget -nv https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz -O /tmp/go1.5.1.linux-amd64.tar.gz
[trinity10] out: 2015-11-11 20:15:13 URL:https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz [77875767/77875767] -> "/tmp/go1.5.1.linux-amd64.tar.gz" [1]
[trinity10] out: 

[trinity10] sudo: tar -C /usr/local -xzf /tmp/go1.5.1.linux-amd64.tar.gz
[trinity10] put: files/golang.profile -> /etc/profile.d/golang.sh
[trinity10] run: source /etc/profile.d/golang.sh; go get github.com/tools/godep

Note this uploads a shell profile for Go to /etc/profile.d/golang.sh. That looks like:

export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

With that in place, installing swarm is simple:

[trinity10] Executing task 'install_swarm'
[trinity10] run: go get github.com/docker/swarm

Next, start the swarm agent on all the nodes:

[trinity10] Executing task 'install_swarm_agent'
[trinity10] run: ip -4 addr show dev eth0 | grep inet | awk '{print $2}' | sed -e 's,/.*,,'
[trinity10] out: 192.168.77.10

[trinity10] put: <file obj> -> /etc/init/swarm-agent.conf
[trinity10] sudo: service swarm-agent stop || true
[trinity10] out: stop: Unknown instance: 

[trinity10] sudo: rm -f /var/log/upstart/swarm-agent.log
[trinity10] sudo: service swarm-agent start
[trinity10] out: swarm-agent start/running, process 6515

[trinity10] sudo: tail /var/log/upstart/swarm-agent.log
[trinity10] out: INFO[0000] Registering on the discovery service every 20s...  addr=192.168.77.10:2375 discovery=etcd://192.168.77.10:2379,192.168.77.20:2379,192.168.77.30:2379/

The /etc/init/swarm-agent.conf file looks like:

description "Swarm agent"

start on runlevel [2345]
stop on runlevel [016]

respawn
respawn limit 3 20

kill timeout 20

script
  cd /home/mak
  exec ./go/bin/swarm join \
    --advertise=192.168.77.10:2375 \
    etcd://192.168.77.10:2379,192.168.77.20:2379,192.168.77.30:2379/

We tell it where the discovery configuration is, and where our Docker daemon is listening.
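
If you're curious what the agents register, you can peek into etcd. I believe Swarm's etcd discovery stores nodes under a docker/swarm path by default, so something like this should list them (an assumption on the exact key path, not verified output):

# list the node keys the swarm agents registered in etcd
curl -s 'http://localhost:2379/v2/keys/docker/swarm/nodes?recursive=true'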

Next, start the swarm master, only on trinity10:

[trinity10] Executing task 'install_swarm_master'
[trinity10] run: ip -4 addr show dev eth0 | grep inet | awk '{print $2}' | sed -e 's,/.*,,'
[trinity10] out: 192.168.77.10

[trinity10] sudo: cp "$(echo /etc/init/swarm-master.conf)"{,.bak}
[trinity10] put: <file obj> -> /etc/init/swarm-master.conf
[trinity10] sudo: service swarm-master stop || true
[trinity10] out: swarm-master stop/waiting

[trinity10] sudo: rm -f /var/log/upstart/swarm-master.log
[trinity10] sudo: service swarm-master start
[trinity10] out: swarm-master start/running, process 6917

[trinity10] sudo: tail /var/log/upstart/swarm-master.log
[trinity10] out: INFO[0000] Listening for HTTP                            addr=0.0.0.0:3375 proto=tcp
[trinity10] out: INFO[0000] Registered Engine trinity20 at 192.168.77.20:2375 
[trinity10] out: INFO[0000] Registered Engine trinity30 at 192.168.77.30:2375 
[trinity10] out: INFO[0000] Registered Engine trinity10 at 192.168.77.10:2375 

Notice the master sees all three agents.

Let's try it out! We'll set the DOCKER_HOST environment variable to point to the Swarm master. We'll run the docker command on trinity10, but we could have run it from any server, or indeed machines elsewhere on the network.

[trinity10] Executing task 'swarm_info'
[trinity10] run: DOCKER_HOST=tcp://192.168.77.10:3375 docker info
[trinity10] out: Containers: 0
[trinity10] out: Images: 12
[trinity10] out: Role: primary
[trinity10] out: Strategy: spread
[trinity10] out: Filters: health, port, dependency, affinity, constraint
[trinity10] out: Nodes: 3
[trinity10] out:  trinity10: 192.168.77.10:2375
[trinity10] out:   └ Containers: 0
[trinity10] out:   └ Reserved CPUs: 0 / 8
[trinity10] out:   └ Reserved Memory: 0 B / 16.44 GiB
[trinity10] out:   └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
[trinity10] out:  trinity20: 192.168.77.20:2375
[trinity10] out:   └ Containers: 0
[trinity10] out:   └ Reserved CPUs: 0 / 8
[trinity10] out:   └ Reserved Memory: 0 B / 16.44 GiB
[trinity10] out:   └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
[trinity10] out:  trinity30: 192.168.77.30:2375
[trinity10] out:   └ Containers: 0
[trinity10] out:   └ Reserved CPUs: 0 / 8
[trinity10] out:   └ Reserved Memory: 0 B / 16.44 GiB
[trinity10] out:   └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-30-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
[trinity10] out: CPUs: 24
[trinity10] out: Total Memory: 49.32 GiB
[trinity10] out: Name: trinity10
[trinity10] out: 

We have multiple hosts -- that looks good!
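
To have all subsequent docker commands in a shell go through Swarm, you can simply export that variable (the same as the one-off invocation above, just persistent for the session):

export DOCKER_HOST=tcp://192.168.77.10:3375
docker info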

Multi-host networking

So let's try the network portion. We'll create two overlay networks, one named netalphabeta, one netsolr:

[trinity10] Executing task 'create_networks'
[trinity10] run: docker network create --driver=overlay --subnet 192.168.91.0/24 netalphabeta
[trinity10] out: 20e242e3f334fe311760626008b22a6be60615ac3a9d81e1d064611a025e81e3
[trinity10] out: 

[trinity10] run: docker network create --driver=overlay --subnet 192.168.89.0/24 netsolr
[trinity10] out: eaf0fe86b358530b995d33d2d72609441f3d7e733ab8aa9c7a3812c6aa60eb07
[trinity10] out: 

[trinity10] run: docker network ls
[trinity10] out: NETWORK ID          NAME                DRIVER
[trinity10] out: 20e242e3f334        netalphabeta        overlay             
[trinity10] out: eaf0fe86b358        netsolr             overlay             
[trinity10] out: 34ab958246e6        bridge              bridge              
[trinity10] out: d1190903a442        none                null                
[trinity10] out: bc3507be54d6        host                host                
[trinity10] out: 

Next, I'll create some test containers. I've reproduced the output from both, because it's interesting to compare the output of the addr command on each. Note that I set the hostname on the containers; that's not necessary, but I do it because of this issue.

(fabric)mak@crab 923 docker-swarm-overlay [master] $ fab create_test_container_alpha
[trinity10] Executing task 'create_test_container_alpha'
[trinity10] run: docker pull busybox:latest
[trinity10] out: trinity30: Pulling busybox:latest...
[trinity10] out: trinity10: Pulling busybox:latest...
[trinity10] out: trinity20: Pulling busybox:latest...
[trinity10] out: trinity10: Pulling busybox:latest... : downloaded
[trinity10] out: trinity20: Pulling busybox:latest... : downloaded
[trinity10] out: trinity30: Pulling busybox:latest... : downloaded
[trinity10] out: 

[trinity10] run: docker run -e constraint:node==trinity10 --net netalphabeta --name c-alpha --hostname=c-alpha.netalphabeta -tid busybox:latest
[trinity10] out: 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: 

[trinity10] run: docker inspect --format '' 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: 

[trinity10] run: docker inspect --format '' 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: /c-alpha
[trinity10] out: 

[trinity10] run: docker inspect --format '' 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: netalphabeta
[trinity10] out: 

[trinity10] run: docker inspect --format '' 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f
[trinity10] out: 192.168.91.2
[trinity10] out: 

container_id=93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f, container_name=c-alpha, ip_address=192.168.91.2
[trinity10] run: docker exec -i 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f hostname
[trinity10] out: c-alpha.netalphabeta
[trinity10] out: 

[trinity10] run: docker exec -i 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f ls -l /sys/devices/virtual/net/
[trinity10] out: total 0
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:17 eth0
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:17 eth1
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:17 lo
[trinity10] out: 

[trinity10] run: docker exec -i 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f ip link list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
[trinity10] out:     link/ether 02:42:c0:a8:5b:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 

[trinity10] run: docker exec -i 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f ip addr list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out:     inet 127.0.0.1/8 scope host lo
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 ::1/128 scope host 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
[trinity10] out:     link/ether 02:42:c0:a8:5b:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 192.168.91.2/24 scope global eth0
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:c0ff:fea8:5b02/64 scope link tentative 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 172.18.0.2/16 scope global eth1
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:acff:fe12:2/64 scope link tentative 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 

[trinity10] run: docker exec -i 93843deb5441bf54d14c242ad4f05a3e396797b8d965405a78909f56eb8ae21f ip route list
[trinity10] out: default via 172.18.0.1 dev eth1 
[trinity10] out: 172.18.0.0/16 dev eth1  src 172.18.0.2 
[trinity10] out: 192.168.91.0/24 dev eth0  src 192.168.91.2 
[trinity10] out: 


Done.
Disconnecting from trinity10... done.
(fabric)mak@crab 924 docker-swarm-overlay [master] $ fab create_test_container_beta
[trinity10] Executing task 'create_test_container_beta'
[trinity10] run: docker pull busybox:latest
[trinity10] out: trinity30: Pulling busybox:latest...
[trinity10] out: trinity10: Pulling busybox:latest...
[trinity10] out: trinity20: Pulling busybox:latest...
[trinity10] out: trinity20: Pulling busybox:latest... : downloaded
[trinity10] out: trinity10: Pulling busybox:latest... : downloaded
[trinity10] out: trinity30: Pulling busybox:latest... : downloaded
[trinity10] out: 

[trinity10] run: docker run -e constraint:node==trinity20 --net netalphabeta --name c-beta --hostname=c-beta.netalphabeta -tid busybox:latest
[trinity10] out: c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: 

[trinity10] run: docker inspect --format '' c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: 

[trinity10] run: docker inspect --format '' c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: /c-beta
[trinity10] out: 

[trinity10] run: docker inspect --format '' c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: netalphabeta
[trinity10] out: 

[trinity10] run: docker inspect --format '' c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840
[trinity10] out: 192.168.91.3
[trinity10] out: 

container_id=c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840, container_name=c-beta, ip_address=192.168.91.3
[trinity10] run: docker exec -i c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840 hostname
[trinity10] out: c-beta.netalphabeta
[trinity10] out: 

[trinity10] run: docker exec -i c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840 ls -l /sys/devices/virtual/net/
[trinity10] out: total 0
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:18 eth0
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:18 eth1
[trinity10] out: drwxr-xr-x    5 root     root             0 Nov 13 20:18 lo
[trinity10] out: 

[trinity10] run: docker exec -i c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840 ip link list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
[trinity10] out:     link/ether 02:42:c0:a8:5b:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 

[trinity10] run: docker exec -i c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840 ip addr list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out:     inet 127.0.0.1/8 scope host lo
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 ::1/128 scope host 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue 
[trinity10] out:     link/ether 02:42:c0:a8:5b:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 192.168.91.3/24 scope global eth0
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:c0ff:fea8:5b03/64 scope link tentative 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 172.18.0.2/16 scope global eth1
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:acff:fe12:2/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 

[trinity10] run: docker exec -i c295ef0c24bb700710e56e776c2e9e7b771f9fd2d2da480a2cf8b08a11f5f840 ip route list
[trinity10] out: default via 172.18.0.1 dev eth1 
[trinity10] out: 172.18.0.0/16 dev eth1  src 172.18.0.2 
[trinity10] out: 192.168.91.0/24 dev eth0  src 192.168.91.3 
[trinity10] out: 


Done.
Disconnecting from trinity10... done.

So this created c-alpha with address 192.168.91.2, on trinity10, and c-beta with 192.168.91.3 on trinity20.
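
You can confirm the placement by asking the Swarm master for the container list; Swarm prefixes container names with the node they run on (e.g. trinity10/c-alpha):

DOCKER_HOST=tcp://192.168.77.10:3375 docker ps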

Can they ping each other?

[trinity10] Executing task 'ping_test_containers'
[trinity10] run: docker exec -i c-alpha ping -c 1 c-beta.netalphabeta
[trinity10] out: PING c-beta.netalphabeta (192.168.91.3): 56 data bytes
[trinity10] out: 64 bytes from 192.168.91.3: seq=0 ttl=64 time=0.465 ms
[trinity10] out: 
[trinity10] out: --- c-beta.netalphabeta ping statistics ---
[trinity10] out: 1 packets transmitted, 1 packets received, 0% packet loss
[trinity10] out: round-trip min/avg/max = 0.465/0.465/0.465 ms
[trinity10] out: 

[trinity10] run: docker exec -i c-beta ping -c 1 c-alpha.netalphabeta
[trinity10] out: PING c-alpha.netalphabeta (192.168.91.2): 56 data bytes
[trinity10] out: 64 bytes from 192.168.91.2: seq=0 ttl=64 time=0.413 ms
[trinity10] out: 
[trinity10] out: --- c-alpha.netalphabeta ping statistics ---
[trinity10] out: 1 packets transmitted, 1 packets received, 0% packet loss
[trinity10] out: round-trip min/avg/max = 0.413/0.413/0.413 ms
[trinity10] out: 

Yup, they can. Fantastic.

We can have a quick look to see what's happening on the network when we do this, with tcpdump on trinity10:

$ tcpdump -n -T vxlan -p udp
20:19:28.986639 IP 192.168.77.10.42169 > 192.168.77.20.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.91.2 > 192.168.91.3: ICMP echo request, id 7424, seq 0, length 64
20:19:28.986871 IP 192.168.77.20.56688 > 192.168.77.10.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.91.3 > 192.168.91.2: ICMP echo reply, id 7424, seq 0, length 64
20:19:29.104059 IP 192.168.77.10.7946 > 192.168.77.20.7946: VXLAN, flags [.] (0x00), vni 7300197
79:32:30:a5:53:65 > 74:72:69:6e:69:74, ethertype Unknown (0x714e), length 18: 
    0x0000:  6fcd 03a8                                o...
20:19:29.104658 IP 192.168.77.20.7946 > 192.168.77.10.7946: VXLAN, flags [.] (0x02), vni 6648142
[|ether]
20:19:29.202219 IP 192.168.77.20.56688 > 192.168.77.10.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.91.3 > 192.168.91.2: ICMP echo request, id 7680, seq 0, length 64
20:19:29.202318 IP 192.168.77.10.42169 > 192.168.77.20.4789: VXLAN, flags [I] (0x08), vni 256
IP 192.168.91.2 > 192.168.91.3: ICMP echo reply, id 7680, seq 0, length 64

This shows the VXLAN ethernet-in-UDP encapsulation.
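
The data traffic runs over the standard VXLAN UDP port 4789 (visible in the capture above), while the traffic on port 7946 is the serf gossip the overlay driver uses between hosts. To capture only the encapsulated container traffic you can narrow the filter:

# capture only the VXLAN-encapsulated overlay traffic
sudo tcpdump -n -T vxlan -p udp port 4789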

Solr

So, on to Solr. First we'll start a zookeeper container, and inspect it:

[trinity10] Executing task 'create_test_zookeeper'
[trinity10] run: docker pull jplock/zookeeper
[trinity10] out: Using default tag: latest
[trinity10] out: trinity30: Pulling jplock/zookeeper:latest...
[trinity10] out: trinity10: Pulling jplock/zookeeper:latest...
[trinity10] out: trinity20: Pulling jplock/zookeeper:latest...
[trinity10] out: trinity20: Pulling jplock/zookeeper:latest... : downloaded
[trinity10] out: trinity10: Pulling jplock/zookeeper:latest... : downloaded
[trinity10] out: trinity30: Pulling jplock/zookeeper:latest... : downloaded
[trinity10] out: 

[trinity10] run: docker run --net netsolr --name zookeeper1 -e contraint:node==trinity10 --hostname=zookeeper1.netsolr -tid jplock/zookeeper
[trinity10] out: 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56
[trinity10] out: 

[trinity10] run: docker inspect --format '' zookeeper1
[trinity10] out: 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56
[trinity10] out: 

[trinity10] run: docker inspect --format '' zookeeper1
[trinity10] out: /zookeeper1
[trinity10] out: 

[trinity10] run: docker inspect --format '' 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56
[trinity10] out: netsolr
[trinity10] out: 

[trinity10] run: docker inspect --format '' 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56
[trinity10] out: 192.168.89.2
[trinity10] out: 

container_id=6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56, container_name=zookeeper1, ip_address=192.168.89.2
[trinity10] run: docker exec -i 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56 hostname
[trinity10] out: zookeeper1.netsolr
[trinity10] out: 

[trinity10] run: docker exec -i 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56 ls -l /sys/devices/virtual/net/
[trinity10] out: total 0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:20 eth0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:20 eth1
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:20 lo
[trinity10] out: 

[trinity10] run: docker exec -i 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56 ip link list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 

[trinity10] run: docker exec -i 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56 ip addr list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out:     inet 127.0.0.1/8 scope host lo
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 ::1/128 scope host 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 192.168.89.2/24 scope global eth0
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:c0ff:fea8:5902/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 11: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 172.18.0.2/16 scope global eth1
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:acff:fe12:2/64 scope link tentative 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 

[trinity10] run: docker exec -i 6ac8f7902323314b70bf58e10c860a601a9363a17d274b40d8432a479a9f1d56 ip route list
[trinity10] out: default via 172.18.0.1 dev eth1 
[trinity10] out: 172.18.0.0/16 dev eth1  proto kernel  scope link  src 172.18.0.2 
[trinity10] out: 192.168.89.0/24 dev eth0  proto kernel  scope link  src 192.168.89.2 
[trinity10] out: 

Next, we start the Solr nodes. We pass them the address of the ZooKeeper node, and use constraints to place them on different Docker hosts, for illustration purposes.

[trinity10] Executing task 'create_test_solr1'
[trinity10] run: docker pull makuk66/docker-solr:5.2-no-expose
[trinity10] out: 5.2-no-expose: Pulling from makuk66/docker-solr
[trinity10] out: Digest: sha256:a578041a5e8f1d6f0591d8530ba8ccd8aeb32d1d0d277201c71adaf509a03241
[trinity10] out: Status: Image is up to date for makuk66/docker-solr:5.2-no-expose
[trinity10] out: 

[trinity10] run: docker inspect --format '' zookeeper1
[trinity10] out: 192.168.89.2
[trinity10] out: 

[trinity10] run: docker run --net netsolr --name solr1 --hostname=solr1.netsolr --label=solr -e contraint:node==trinity10 -tid makuk66/docker-solr:5.2-no-expose bash -c '/opt/solr/bin/solr start -f -z 192.168.89.2:2181'
[trinity10] out: b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f
[trinity10] out: 

[trinity10] run: docker inspect --format '' solr1
[trinity10] out: b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f
[trinity10] out: 

[trinity10] run: docker inspect --format '' solr1
[trinity10] out: /solr1
[trinity10] out: 

[trinity10] run: docker inspect --format '' b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f
[trinity10] out: netsolr
[trinity10] out: 

[trinity10] run: docker inspect --format '' b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f
[trinity10] out: 192.168.89.3
[trinity10] out: 

container_id=b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f, container_name=solr1, ip_address=192.168.89.3
[trinity10] run: docker exec -i b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f hostname
[trinity10] out: solr1.netsolr
[trinity10] out: 

[trinity10] run: docker exec -i b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f ls -l /sys/devices/virtual/net/
[trinity10] out: total 0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:30 eth0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:30 eth1
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:30 lo
[trinity10] out: 

[trinity10] run: docker exec -i b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f ip link list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out: 24: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 26: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 

[trinity10] run: docker exec -i b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f ip addr list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out:     inet 127.0.0.1/8 scope host lo
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 ::1/128 scope host 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 24: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 192.168.89.3/24 scope global eth0
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:c0ff:fea8:5903/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 26: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 172.18.0.3/16 scope global eth1
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:acff:fe12:3/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 

[trinity10] run: docker exec -i b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f ip route list
[trinity10] out: default via 172.18.0.1 dev eth1 
[trinity10] out: 172.18.0.0/16 dev eth1  proto kernel  scope link  src 172.18.0.3 
[trinity10] out: 192.168.89.0/24 dev eth0  proto kernel  scope link  src 192.168.89.3 
[trinity10] out: 

[trinity10] run: docker logs b3be35a17af34c08324e6ef707a2131b35d9257f4041a253f998e0ba93664f7f
[trinity10] out: 
[trinity10] out: 
[trinity10] out: Starting Solr in SolrCloud mode on port 8983 from /opt/solr/server
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 0    [main] INFO  org.eclipse.jetty.util.log  [   ] – Logging initialized @562ms
[trinity10] out: 
[trinity10] out: 339  [main] INFO  org.eclipse.jetty.server.Server  [   ] – jetty-9.2.10.v20150310
[trinity10] out: 
[trinity10] out: 375  [main] WARN  org.eclipse.jetty.server.handler.RequestLogHandler  [   ] – !RequestLog
[trinity10] out: 
[trinity10] out: 380  [main] INFO  org.eclipse.jetty.deploy.providers.ScanningAppProvider  [   ] – Deployment monitor [file:/opt/solr-5.2.1/server/contexts/] at interval 0
[trinity10] out: 
[trinity10] out: 3334 [main] INFO  org.eclipse.jetty.webapp.StandardDescriptorProcessor  [   ] – NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
[trinity10] out: 
[trinity10] out: 3375 [main] WARN  org.eclipse.jetty.security.SecurityHandler  [   ] – ServletContext@o.e.j.w.WebAppContext@18be83e4{/solr,file:/opt/solr-5.2.1/server/solr-webapp/webapp/,STARTING}{/solr.war} has uncovered http methods for path: /
[trinity10] out: 
[trinity10] out: 3421 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init()WebAppClassLoader=2009787198@77caeb3e
[trinity10] out: 
[trinity10] out: 3447 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – JNDI not configured for solr (NoInitialContextEx)
[trinity10] out: 
[trinity10] out: 3448 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – using system property solr.solr.home: /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 3449 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – new SolrResourceLoader for directory: '/opt/solr/server/solr/'
[trinity10] out: 
[trinity10] out: 3684 [main] INFO  org.apache.solr.core.SolrXmlConfig  [   ] – Loading container configuration from /opt/solr/server/solr/solr.xml
[trinity10] out: 
[trinity10] out: 3812 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Config-defined core root directory: /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 3870 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – New CoreContainer 60292059
[trinity10] out: 
[trinity10] out: 3871 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Loading cores into CoreContainer [instanceDir=/opt/solr/server/solr/]
[trinity10] out: 
[trinity10] out: 3872 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – loading shared library: /opt/solr/server/solr/lib
[trinity10] out: 
[trinity10] out: 3873 [main] WARN  org.apache.solr.core.SolrResourceLoader  [   ] – Can't find (or read) directory to add to classloader: lib (resolved as: /opt/solr/server/solr/lib).
[trinity10] out: 
[trinity10] out: 3911 [main] INFO  org.apache.solr.handler.component.HttpShardHandlerFactory  [   ] – created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
[trinity10] out: 
[trinity10] out: 4288 [main] INFO  org.apache.solr.update.UpdateShardHandler  [   ] – Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
[trinity10] out: 
[trinity10] out: 4293 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
[trinity10] out: 
[trinity10] out: 4295 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
[trinity10] out: 
[trinity10] out: 4298 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Node Name: 
[trinity10] out: 
[trinity10] out: 4299 [main] INFO  org.apache.solr.core.ZkContainer  [   ] – Zookeeper client=192.168.89.2:2181
[trinity10] out: 
[trinity10] out: 4373 [main] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Waiting for client to connect to ZooKeeper
[trinity10] out: 
[trinity10] out: 4416 [zkCallback-2-thread-1-processing-{node_name=192.168.89.3:8983_solr}] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Watcher org.apache.solr.common.cloud.ConnectionManager@475e3b8e name:ZooKeeperConnection Watcher:192.168.89.2:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
[trinity10] out: 
[trinity10] out: 4417 [main] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Client is connected to ZooKeeper
[trinity10] out: 
[trinity10] out: 4536 [main] INFO  org.apache.solr.cloud.ZkController  [   ] – Register node as live in ZooKeeper:/live_nodes/192.168.89.3:8983_solr
[trinity10] out: 
[trinity10] out: 4547 [main] INFO  org.apache.solr.common.cloud.SolrZkClient  [   ] – makePath: /live_nodes/192.168.89.3:8983_solr
[trinity10] out: 
[trinity10] out: 4569 [main] INFO  org.apache.solr.cloud.Overseer  [   ] – Overseer (id=null) closing
[trinity10] out: 
[trinity10] out: 4591 [main] INFO  org.apache.solr.cloud.ElectionContext  [   ] – I am going to be the leader 192.168.89.3:8983_solr
[trinity10] out: 
[trinity10] out: 4597 [main] INFO  org.apache.solr.common.cloud.SolrZkClient  [   ] – makePath: /overseer_elect/leader
[trinity10] out: 
[trinity10] out: 4612 [main] INFO  org.apache.solr.cloud.Overseer  [   ] – Overseer (id=94859823444721667-192.168.89.3:8983_solr-n_0000000003) starting
[trinity10] out: 
[trinity10] out: 4753 [main] INFO  org.apache.solr.cloud.OverseerAutoReplicaFailoverThread  [   ] – Starting OverseerAutoReplicaFailoverThread autoReplicaFailoverWorkLoopDelay=10000 autoReplicaFailoverWaitAfterExpiration=30000 autoReplicaFailoverBadNodeExpiration=60000
[trinity10] out: 
[trinity10] out: 4799 [OverseerCollectionProcessor-94859823444721667-192.168.89.3:8983_solr-n_0000000003] INFO  org.apache.solr.cloud.OverseerCollectionProcessor  [   ] – Process current queue of collection creations
[trinity10] out: 
[trinity10] out: 4799 [main] INFO  org.apache.solr.common.cloud.ZkStateReader  [   ] – Updating cluster state from ZooKeeper... 
[trinity10] out: 
[trinity10] out: 4829 [OverseerStateUpdate-94859823444721667-192.168.89.3:8983_solr-n_0000000003] INFO  org.apache.solr.cloud.Overseer  [   ] – Starting to work on the main queue
[trinity10] out: 
[trinity10] out: 4837 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – No authentication plugin used.
[trinity10] out: 
[trinity10] out: 4840 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Security conf doesn't exist. Skipping setup for authorization module.
[trinity10] out: 
[trinity10] out: 4884 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Looking for core definitions underneath /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 4902 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Found 0 core definitions
[trinity10] out: 
[trinity10] out: 4906 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – user.dir=/opt/solr-5.2.1/server
[trinity10] out: 
[trinity10] out: 4907 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init() done
[trinity10] out: 
[trinity10] out: 4930 [main] INFO  org.eclipse.jetty.server.handler.ContextHandler  [   ] – Started o.e.j.w.WebAppContext@18be83e4{/solr,file:/opt/solr-5.2.1/server/solr-webapp/webapp/,AVAILABLE}{/solr.war}
[trinity10] out: 
[trinity10] out: 4944 [main] INFO  org.eclipse.jetty.server.ServerConnector  [   ] – Started ServerConnector@5965d37{HTTP/1.1}{0.0.0.0:8983}
[trinity10] out: 
[trinity10] out: 4945 [main] INFO  org.eclipse.jetty.server.Server  [   ] – Started @5511ms
[trinity10] out: 
[trinity10] out: 

and the second Solr node:

(fabric)mak@crab 949 docker-swarm-overlay [master] $ fab create_test_solr2
[trinity10] Executing task 'create_test_solr2'
[trinity10] run: docker pull makuk66/docker-solr:5.2-no-expose
[trinity10] out: 5.2-no-expose: Pulling from makuk66/docker-solr
[trinity10] out: Digest: sha256:a578041a5e8f1d6f0591d8530ba8ccd8aeb32d1d0d277201c71adaf509a03241
[trinity10] out: Status: Image is up to date for makuk66/docker-solr:5.2-no-expose
[trinity10] out: 

[trinity10] run: docker inspect --format '' zookeeper1
[trinity10] out: 192.168.89.2
[trinity10] out: 

[trinity10] run: docker run --net netsolr --name solr2 --hostname=solr2.netsolr --label=solr -e contraint:node==trinity20 -tid makuk66/docker-solr:5.2-no-expose bash -c '/opt/solr/bin/solr start -f -z 192.168.89.2:2181'
[trinity10] out: 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866
[trinity10] out: 

[trinity10] run: docker inspect --format '' solr2
[trinity10] out: 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866
[trinity10] out: 

[trinity10] run: docker inspect --format '' solr2
[trinity10] out: /solr2
[trinity10] out: 

[trinity10] run: docker inspect --format '' 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866
[trinity10] out: netsolr
[trinity10] out: 

[trinity10] run: docker inspect --format '' 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866
[trinity10] out: 192.168.89.4
[trinity10] out: 

container_id=969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866, container_name=solr2, ip_address=192.168.89.4
[trinity10] run: docker exec -i 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866 hostname
[trinity10] out: solr2.netsolr
[trinity10] out: 

[trinity10] run: docker exec -i 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866 ls -l /sys/devices/virtual/net/
[trinity10] out: total 0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:37 eth0
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:37 eth1
[trinity10] out: drwxr-xr-x 5 root root 0 Nov 13 20:37 lo
[trinity10] out: 

[trinity10] run: docker exec -i 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866 ip link list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out: 19: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:04 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 21: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
[trinity10] out:     link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out: 

[trinity10] run: docker exec -i 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866 ip addr list
[trinity10] out: 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
[trinity10] out:     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
[trinity10] out:     inet 127.0.0.1/8 scope host lo
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 ::1/128 scope host 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 19: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:c0:a8:59:04 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 192.168.89.4/24 scope global eth0
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:c0ff:fea8:5904/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 21: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
[trinity10] out:     link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
[trinity10] out:     inet 172.18.0.3/16 scope global eth1
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out:     inet6 fe80::42:acff:fe12:3/64 scope link 
[trinity10] out:        valid_lft forever preferred_lft forever
[trinity10] out: 

[trinity10] run: docker exec -i 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866 ip route list
[trinity10] out: default via 172.18.0.1 dev eth1 
[trinity10] out: 172.18.0.0/16 dev eth1  proto kernel  scope link  src 172.18.0.3 
[trinity10] out: 192.168.89.0/24 dev eth0  proto kernel  scope link  src 192.168.89.4 
[trinity10] out: 

[trinity10] run: docker logs 969fc2963cbacd1162ba1b3e3948ec5f2c49ae9cda97db0ba9de05ce48cd7866
[trinity10] out: 
[trinity10] out: 
[trinity10] out: Starting Solr in SolrCloud mode on port 8983 from /opt/solr/server
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 0    [main] INFO  org.eclipse.jetty.util.log  [   ] – Logging initialized @557ms
[trinity10] out: 
[trinity10] out: 329  [main] INFO  org.eclipse.jetty.server.Server  [   ] – jetty-9.2.10.v20150310
[trinity10] out: 
[trinity10] out: 367  [main] WARN  org.eclipse.jetty.server.handler.RequestLogHandler  [   ] – !RequestLog
[trinity10] out: 
[trinity10] out: 371  [main] INFO  org.eclipse.jetty.deploy.providers.ScanningAppProvider  [   ] – Deployment monitor [file:/opt/solr-5.2.1/server/contexts/] at interval 0
[trinity10] out: 
[trinity10] out: 3365 [main] INFO  org.eclipse.jetty.webapp.StandardDescriptorProcessor  [   ] – NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
[trinity10] out: 
[trinity10] out: 3406 [main] WARN  org.eclipse.jetty.security.SecurityHandler  [   ] – ServletContext@o.e.j.w.WebAppContext@18be83e4{/solr,file:/opt/solr-5.2.1/server/solr-webapp/webapp/,STARTING}{/solr.war} has uncovered http methods for path: /
[trinity10] out: 
[trinity10] out: 3453 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init()WebAppClassLoader=2009787198@77caeb3e
[trinity10] out: 
[trinity10] out: 3479 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – JNDI not configured for solr (NoInitialContextEx)
[trinity10] out: 
[trinity10] out: 3480 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – using system property solr.solr.home: /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 3481 [main] INFO  org.apache.solr.core.SolrResourceLoader  [   ] – new SolrResourceLoader for directory: '/opt/solr/server/solr/'
[trinity10] out: 
[trinity10] out: 3723 [main] INFO  org.apache.solr.core.SolrXmlConfig  [   ] – Loading container configuration from /opt/solr/server/solr/solr.xml
[trinity10] out: 
[trinity10] out: 3852 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Config-defined core root directory: /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 3916 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – New CoreContainer 60292059
[trinity10] out: 
[trinity10] out: 3917 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Loading cores into CoreContainer [instanceDir=/opt/solr/server/solr/]
[trinity10] out: 
[trinity10] out: 3918 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – loading shared library: /opt/solr/server/solr/lib
[trinity10] out: 
[trinity10] out: 3918 [main] WARN  org.apache.solr.core.SolrResourceLoader  [   ] – Can't find (or read) directory to add to classloader: lib (resolved as: /opt/solr/server/solr/lib).
[trinity10] out: 
[trinity10] out: 3957 [main] INFO  org.apache.solr.handler.component.HttpShardHandlerFactory  [   ] – created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
[trinity10] out: 
[trinity10] out: 4341 [main] INFO  org.apache.solr.update.UpdateShardHandler  [   ] – Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
[trinity10] out: 
[trinity10] out: 4345 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
[trinity10] out: 
[trinity10] out: 4347 [main] INFO  org.apache.solr.logging.LogWatcher  [   ] – Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
[trinity10] out: 
[trinity10] out: 4350 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Node Name: 
[trinity10] out: 
[trinity10] out: 4350 [main] INFO  org.apache.solr.core.ZkContainer  [   ] – Zookeeper client=192.168.89.2:2181
[trinity10] out: 
[trinity10] out: 4421 [main] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Waiting for client to connect to ZooKeeper
[trinity10] out: 
[trinity10] out: 4464 [zkCallback-2-thread-1-processing-{node_name=192.168.89.4:8983_solr}] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Watcher org.apache.solr.common.cloud.ConnectionManager@62414720 name:ZooKeeperConnection Watcher:192.168.89.2:2181 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
[trinity10] out: 
[trinity10] out: 4466 [main] INFO  org.apache.solr.common.cloud.ConnectionManager  [   ] – Client is connected to ZooKeeper
[trinity10] out: 
[trinity10] out: 4564 [main] INFO  org.apache.solr.common.cloud.ZkStateReader  [   ] – Updating cluster state from ZooKeeper... 
[trinity10] out: 
[trinity10] out: 5600 [main] INFO  org.apache.solr.cloud.ZkController  [   ] – Register node as live in ZooKeeper:/live_nodes/192.168.89.4:8983_solr
[trinity10] out: 
[trinity10] out: 5610 [main] INFO  org.apache.solr.common.cloud.SolrZkClient  [   ] – makePath: /live_nodes/192.168.89.4:8983_solr
[trinity10] out: 
[trinity10] out: 5633 [main] INFO  org.apache.solr.cloud.Overseer  [   ] – Overseer (id=null) closing
[trinity10] out: 
[trinity10] out: 5653 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – No authentication plugin used.
[trinity10] out: 
[trinity10] out: 5655 [main] INFO  org.apache.solr.core.CoreContainer  [   ] – Security conf doesn't exist. Skipping setup for authorization module.
[trinity10] out: 
[trinity10] out: 5711 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Looking for core definitions underneath /opt/solr/server/solr
[trinity10] out: 
[trinity10] out: 5729 [main] INFO  org.apache.solr.core.CoresLocator  [   ] – Found 0 core definitions
[trinity10] out: 
[trinity10] out: 5734 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – user.dir=/opt/solr-5.2.1/server
[trinity10] out: 
[trinity10] out: 5734 [main] INFO  org.apache.solr.servlet.SolrDispatchFilter  [   ] – SolrDispatchFilter.init() done
[trinity10] out: 
[trinity10] out: 5757 [main] INFO  org.eclipse.jetty.server.handler.ContextHandler  [   ] – Started o.e.j.w.WebAppContext@18be83e4{/solr,file:/opt/solr-5.2.1/server/solr-webapp/webapp/,AVAILABLE}{/solr.war}
[trinity10] out: 
[trinity10] out: 5772 [main] INFO  org.eclipse.jetty.server.ServerConnector  [   ] – Started ServerConnector@8519cb4{HTTP/1.1}{0.0.0.0:8983}
[trinity10] out: 
[trinity10] out: 5773 [main] INFO  org.eclipse.jetty.server.Server  [   ] – Started @6334ms
[trinity10] out: 
[trinity10] out: 


Done.
Disconnecting from trinity10... done.

and do a test to make sure the Solrs are serving pages:

[trinity10] Executing task 'create_test_solrclient'
[trinity10] run: docker run --net netsolr --name solrclient-PCLSB8 --hostname solrclient-PCLSB8.netsolr -i makuk66/docker-solr:5.2-no-expose curl -sSL http://solr1.netsolr:8983/
[trinity10] out: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
...
[trinity10] run: docker run --net netsolr --name solrclient-NC43KA --hostname solrclient-NC43KA.netsolr -i makuk66/docker-solr:5.2-no-expose curl -sSL http://solr2.netsolr:8983/
[trinity10] out: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">

Yup, that works.

Next, create a collection:

[trinity10] run: docker exec -i -t solr1 /opt/solr/bin/solr create_collection -c sample -shards 2 -p 8983
[trinity10] out: Connecting to ZooKeeper at 192.168.89.2:2181
[trinity10] out: 
[trinity10] out: Uploading /opt/solr/server/solr/configsets/data_driven_schema_configs/conf for config sample to ZooKeeper at 192.168.89.2:2181
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 
[trinity10] out: Creating new collection 'sample' using command:
[trinity10] out: 
[trinity10] out: http://192.168.89.4:8983/solr/admin/collections?action=CREATE&name=sample&numShards=2&replicationFactor=1&maxShardsPerNode=1&collection.configName=sample
[trinity10] out: 
[trinity10] out: 
[trinity10] out: 
[trinity10] out: {
[trinity10] out: 
[trinity10] out:   "responseHeader":{
[trinity10] out: 
[trinity10] out:     "status":0,
[trinity10] out: 
[trinity10] out:     "QTime":4674},
[trinity10] out: 
[trinity10] out:   "success":{"":{
[trinity10] out: 
[trinity10] out:       "responseHeader":{
[trinity10] out: 
[trinity10] out:         "status":0,
[trinity10] out: 
[trinity10] out:         "QTime":4238},
[trinity10] out: 
[trinity10] out:       "core":"sample_shard2_replica1"}}}

and load sample data into it:

[trinity10] Executing task 'solr_data'
[trinity10] run: docker exec -it --user=solr solr1 bin/post -c sample /opt/solr/example/exampledocs/manufacturers.xml
[trinity10] out: java -classpath /opt/solr-5.2.1/dist/solr-core-5.2.1.jar -Dauto=yes -Dc=sample -Ddata=files org.apache.solr.util.SimplePostTool /opt/solr/example/exampledocs/manufacturers.xml
[trinity10] out: 
[trinity10] out: SimplePostTool version 5.0.0
[trinity10] out: 
[trinity10] out: Posting files to [base] url http://localhost:8983/solr/sample/update...
[trinity10] out: 
[trinity10] out: Entering auto mode. File endings considered are xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
[trinity10] out: 
[trinity10] out: POSTing file manufacturers.xml (application/xml) to [base]
[trinity10] out: 
[trinity10] out: 1 files indexed.
[trinity10] out: 
[trinity10] out: COMMITting Solr index changes to http://localhost:8983/solr/sample/update...
[trinity10] out: 
[trinity10] out: Time spent: 0:00:00.733

Now we can query the data. We'll run a couple of searches: first we'll search for a record through both Solrs, and should find it through either. Then we'll repeat the search restricted to specific shards, and should find it in only one:

[trinity10] Executing task 'solr_query'
demonstrate you can query either server and get a response:
[trinity10] run: docker exec -it --user=solr solr1 curl 'http://localhost:8983/solr/sample/select?q=maxtor&indent=true' | tr -d '\r' | grep -v '^$'
[trinity10] out: <?xml version="1.0" encoding="UTF-8"?>
[trinity10] out: <response>
[trinity10] out: <lst name="responseHeader">
[trinity10] out:   <int name="status">0</int>
[trinity10] out:   <int name="QTime">16</int>
[trinity10] out:   <lst name="params">
[trinity10] out:     <str name="q">maxtor</str>
[trinity10] out:     <str name="indent">true</str>
[trinity10] out:   </lst>
[trinity10] out: </lst>
[trinity10] out: <result name="response" numFound="1" start="0" maxScore="0.5986179">
[trinity10] out:   <doc>
[trinity10] out:     <str name="id">maxtor</str>
[trinity10] out:     <str name="compName_s">Maxtor Corporation</str>
[trinity10] out:     <str name="address_s">920 Disc Drive Scotts Valley, CA 95066</str>
[trinity10] out:     <long name="_version_">1517758386017402880</long></doc>
[trinity10] out: </result>
[trinity10] out: </response>
[trinity10] out: 

got one found, as expected
[trinity10] run: docker exec -it --user=solr solr2 curl 'http://localhost:8983/solr/sample/select?q=maxtor&indent=true' | tr -d '\r' | grep -v '^$'
[trinity10] out: <?xml version="1.0" encoding="UTF-8"?>
[trinity10] out: <response>
[trinity10] out: <lst name="responseHeader">
[trinity10] out:   <int name="status">0</int>
[trinity10] out:   <int name="QTime">15</int>
[trinity10] out:   <lst name="params">
[trinity10] out:     <str name="q">maxtor</str>
[trinity10] out:     <str name="indent">true</str>
[trinity10] out:   </lst>
[trinity10] out: </lst>
[trinity10] out: <result name="response" numFound="1" start="0" maxScore="0.5986179">
[trinity10] out:   <doc>
[trinity10] out:     <str name="id">maxtor</str>
[trinity10] out:     <str name="compName_s">Maxtor Corporation</str>
[trinity10] out:     <str name="address_s">920 Disc Drive Scotts Valley, CA 95066</str>
[trinity10] out:     <long name="_version_">1517758386017402880</long></doc>
[trinity10] out: </result>
[trinity10] out: </response>
[trinity10] out: 

got one found, as expected
demonstrate the response only comes from a single shard:
[trinity10] run: docker exec -it --user=solr solr1 curl 'http://localhost:8983/solr/sample/select?q=maxtor&indent=true&shards=localhost:8983/solr/sample_shard1_replica1' | tr -d '\r' | grep -v '^$'
[trinity10] out: <?xml version="1.0" encoding="UTF-8"?>
[trinity10] out: <response>
[trinity10] out: <lst name="responseHeader">
[trinity10] out:   <int name="status">0</int>
[trinity10] out:   <int name="QTime">15</int>
[trinity10] out:   <lst name="params">
[trinity10] out:     <str name="q">maxtor</str>
[trinity10] out:     <str name="shards">localhost:8983/solr/sample_shard1_replica1</str>
[trinity10] out:     <str name="indent">true</str>
[trinity10] out:   </lst>
[trinity10] out: </lst>
[trinity10] out: <result name="response" numFound="1" start="0" maxScore="0.5986179">
[trinity10] out:   <doc>
[trinity10] out:     <str name="id">maxtor</str>
[trinity10] out:     <str name="compName_s">Maxtor Corporation</str>
[trinity10] out:     <str name="address_s">920 Disc Drive Scotts Valley, CA 95066</str>
[trinity10] out:     <long name="_version_">1517758386017402880</long></doc>
[trinity10] out: </result>
[trinity10] out: </response>
[trinity10] out: 

[trinity10] run: docker exec -it --user=solr solr1 curl 'http://localhost:8983/solr/sample/select?q=maxtor&indent=true&shards=localhost:8983/solr/sample_shard2_replica1' | tr -d '\r' | grep -v '^$'
[trinity10] out: <?xml version="1.0" encoding="UTF-8"?>
[trinity10] out: <response>
[trinity10] out: <lst name="responseHeader">
[trinity10] out:   <int name="status">0</int>
[trinity10] out:   <int name="QTime">12</int>
[trinity10] out:   <lst name="params">
[trinity10] out:     <str name="q">maxtor</str>
[trinity10] out:     <str name="shards">localhost:8983/solr/sample_shard2_replica1</str>
[trinity10] out:     <str name="indent">true</str>
[trinity10] out:   </lst>
[trinity10] out: </lst>
[trinity10] out: <result name="response" numFound="0" start="0" maxScore="0.0">
[trinity10] out: </result>
[trinity10] out: </response>
[trinity10] out: 

found only in one shard, as expected

Done.
Disconnecting from trinity10... done.

Good, that all works.

External Connectivity

These containers can connect out, with source NAT masquerading on the Docker host, courtesy of the docker_gwbridge. But how can I get traffic in?
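
As an aside, you can see that outbound plumbing on one of the Docker hosts by inspecting the bridge and the NAT table. Roughly like this; the exact rules and subnets depend on your Docker version and setup:

$ ip addr show docker_gwbridge
$ sudo iptables -t nat -S POSTROUTING | grep MASQUERADE
# expect a MASQUERADE rule covering the docker_gwbridge subnet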

Ideally we could just route to the Solr container IPs directly, and with Calico I can, except this is currently broken.

If you passed -p 8983 when you created the Solr containers, Docker set up port forwarding on the host, and you can use that:

$ curl 2>/dev/null -q -L http://$(docker port solr1 8983)/ | grep -i title

Another approach is to go via a proxy, which could also provide load balancing and access control, at the risk of introducing a single point of failure. A simple example on trinity10:

$ mkdir my-haproxy-config
$ cat > my-haproxy-config/haproxy.cfg <<EOM
global
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen http-in
    bind *:8983
EOM
# append backend server lines for the solr containers
$ export DOCKER_HOST=tcp://192.168.77.10:3375
$ hosts=$(docker ps --filter=label=solr --no-trunc --format '{{.Names}}' | sed 's,^.*/,,')
$ for c in $hosts; do ip=$(docker inspect --format '{{.NetworkSettings.Networks.netsolr.IPAddress}}' $c); echo "    server $c $ip:8983 maxconn 32"; done >> my-haproxy-config/haproxy.cfg
# run the proxy with our config file
$ docker run -d --name solrproxy -p 8983:8983 --net netsolr -v $(pwd)/my-haproxy-config:/usr/local/etc/my-haproxy-config haproxy:1.6 haproxy -f /usr/local/etc/my-haproxy-config/haproxy.cfg
# and test
$ curl 2>/dev/null -q -L http://localhost:8983/ | grep -i title
  <title>Solr Admin</title>

I had to do this on one of the Docker hosts, because of the host-mounted directory. You could generalise this by making the proxy container look for new Solr containers (through Zookeeper, or the docker CLI as above) every so often and rewrite/reload the HAProxy config file. Package that up in your own Docker image, and you can run it through Swarm on any Docker host, without having to worry about host-mounted directories; see the sketch below.
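
As a rough illustration only, here is a minimal sketch of such a refresh loop. The file names are hypothetical; it assumes the Solr containers carry a solr label as above, that haproxy.cfg.tmpl holds the static part of the config up to and including the listen section, and that a docker client and haproxy binary are available wherever this runs:

#!/bin/bash
# Hypothetical refresh loop: rebuild the HAProxy backend list from the
# Swarm API every minute and reload HAProxy only when the config changed.
export DOCKER_HOST=tcp://192.168.77.10:3375
CFG=/usr/local/etc/my-haproxy-config/haproxy.cfg
PIDFILE=/var/run/haproxy.pid

while true; do
    # start from the static part of the config
    cp "${CFG}.tmpl" "${CFG}.new"
    # add one "server" line per container labelled "solr"
    for c in $(docker ps --filter=label=solr --format '{{.Names}}' | sed 's,^.*/,,'); do
        ip=$(docker inspect --format '{{.NetworkSettings.Networks.netsolr.IPAddress}}' "$c")
        echo "    server $c $ip:8983 maxconn 32" >> "${CFG}.new"
    done
    # (re)load HAProxy only if the generated config differs from the current one
    if ! cmp -s "${CFG}.new" "${CFG}"; then
        mv "${CFG}.new" "${CFG}"
        if [ -s "${PIDFILE}" ]; then
            # -sf: start a new process and let the old one finish gracefully
            haproxy -f "${CFG}" -p "${PIDFILE}" -sf $(cat "${PIDFILE}")
        else
            haproxy -f "${CFG}" -p "${PIDFILE}"
        fi
    fi
    sleep 60
done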

Conclusion

So yes, Docker 1.9 multi-host networking works, and Solr can use it for distributed configurations.

Possible improvements:

  • store etcd and swarm outside $HOME
  • enable TLS certificates, just like docker-machine does
  • use docker-machine to install the Docker hosts
  • use Docker volumes to store your index (see the sketch after this list)
  • use a distributed Zookeeper configuration
  • use Project Calico instead of the overlay network and avoid VXLAN
  • use Zookeeper for both Docker and Solr, so you don't need etcd (though I like to keep them separate)
  • determine whether Consul would be a better fit than etcd
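
For the volumes bullet, a hedged sketch of what that could look like: keep the index on a (local) named volume, so the container can be recreated without losing data. The volume name is made up, the SolrCloud flags used earlier are omitted for brevity, and /opt/solr/server/solr is where this image keeps its cores (as seen in the startup log above):

$ docker volume create --name solr1data
$ docker run --name solr1 --net netsolr -d -v solr1data:/opt/solr/server/solr makuk66/docker-solr:5.2-no-expose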