
Who's Got Two Thumbs and an OpenStack Cluster?!


This guy.

OpenStack is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. The project aims to deliver solutions for all types of clouds by being simple to implement, massively scalable, and feature rich.

I’ve been busy trying to get a proof-of-concept cluster running that I also intend to apply to some big data problems using Python & Celery. After a few false starts, I finally have my private cloud up and running.

DevStack

[Image: my Westeros cluster]

My cluster is a multi-node cloud configured using DevStack. At right, you can see a picture of the Horizon dashboard, where I have launched five virtual machines. (ASOIAF fans should recognize the hostnames. ;)

For this proof-of-concept deployment, I have one cloud controller node and two* cloud compute nodes. The cloud controller is the command console that intelligently provisions CPU, RAM, and disk volume storage from the various compute nodes when a user chooses to launch a virtual machine. The compute nodes are commodity hardware, and I can increase the computing power available to my cloud merely by deploying additional compute nodes.

In this setup, the cloud controller is a single point of failure, but more sophisticated OpenStack clusters can provide high-availability by configuring multiple controllers.

[Image: my whiteboard]

For an example of high availability, here is a cluster topology I whiteboarded in December, when I first grew interested in OpenStack. In this configuration, high availability would be achieved with 2x cloud controllers and redundant swift/cinder zones. Of course, I am not an OpenStack expert yet, and I could certainly improve this design with what I have learned since December and what I have yet to learn. ;)

Configuration Details

I originally sought to use Quantum networking, but ultimately got things working with nova-network. I think April 2013 is a bit on the bleeding edge for Quantum deployments, and there are not many guides available (particularly for OpenStack Grizzly); I guess I'm part of the problem for not forging ahead with my own troubleshooting to make it work and thereafter sharing the wisdom with others. ;)

So, without further ado, here are my two nodes:

Hostname     Specs                               NICs
highgarden   8GB RAM, x64 CPU, Ubuntu 12.04.2    eth0 (Public 53.53.53.11), eth1 (Private 192.168.0.1)
dorne        8GB RAM, x64 CPU, Ubuntu 12.04.2    eth0 (Public 53.53.53.12), eth1 (Private 192.168.0.2)

highgarden is the cloud controller node and dorne is the compute node. This cloud can be expanded by adding additional compute nodes (each with a unique IP address).

eth0 on each node is plugged directly into a public switch and configured with a static IP. eth1 is plugged into a private switch that is shared among the private cloud nodes (or, in this case, between the cloud controller and compute nodes).
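
Once the interfaces files below are in place on both nodes, it is worth sanity-checking the private link before running DevStack. A minimal sketch, assuming the addresses from the table above (brctl comes from the bridge-utils package):

bash terminal
# from highgarden: confirm the private link to dorne is up
patrick@highgarden:~$ ping -c 3 192.168.0.2
# confirm eth1 is enslaved to the br100 bridge
patrick@highgarden:~$ brctl show br100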

Getting started

On each machine, you will want to clone the DevStack repository. DevStack should install any required packages (e.g. mysql-server), though you may need to apt-get some of them yourself.

bash terminal
patrick@highgarden:~$ git clone git://github.com/openstack-dev/devstack.git -b stable/grizzly
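
DevStack fetches most of its dependencies on its own, so pre-installing is optional. If git itself is missing, or you want the heavier packages in place up front, something along these lines works on Ubuntu 12.04 (stock package names):

bash terminal
patrick@highgarden:~$ sudo apt-get update
patrick@highgarden:~$ sudo apt-get install -y git mysql-server rabbitmq-server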

Ensure your files are configured as follows on each node before executing ~/devstack/stack.sh.

highgarden configuration (controller)

/etc/network/interfaces
# the loopback network interface
auto lo
iface lo inet loopback

# eth0 -- public network
auto eth0
iface eth0 inet static
        address 53.53.53.11
        netmask 255.255.255.0
        gateway 53.53.53.1
        dns-nameservers 8.8.8.8 8.8.4.4

# eth1 (bridged) -- private network
auto br100
iface br100 inet static
        bridge_ports eth1
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
        address 192.168.0.1
        netmask 255.255.255.0
~/devstack/localrc
# META
HOST_IP=53.53.53.11
MULTI_HOST=True
DEST=/opt/stack/
SCHEDULER=nova.scheduler.simple.SimpleScheduler
RECLONE=yes

# NETWORKING
FIXED_RANGE=192.168.0.0/24
FIXED_NETWORK_SIZE=256
PUBLIC_INTERFACE=eth0
FLOATING_RANGE=53.53.53.112/29  # Additional subnets added in local.sh
NET_MAN=FlatDHCPManager
FLAT_INTERFACE=eth1
FLAT_NETWORK_BRIDGE=br100
VIRT_DRIVER=libvirt
LIBVIRT_TYPE=kvm

# CREDENTIALS
ADMIN_PASSWORD=password_for_dashboard
MYSQL_PASSWORD=mysql_root_password
RABBIT_PASSWORD=rabbit_password
SERVICE_PASSWORD=secret_pass
SERVICE_TOKEN=secret_token

# CINDER
VOLUME_GROUP=nova-volumes

# LOGGING
SYSLOG=True
VERBOSE=False
LOG_COLOR=False
LOGFILE=/opt/stack/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# DISABLE TEMPEST
disable_service tempest

# Optionally, disable compute on the controller node?
#disable_service n-cpu
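
A note on FLOATING_RANGE: a /29 spans eight addresses, 53.53.53.112 through 53.53.53.119. The first is the network address and the last the broadcast, leaving six usable floating IPs. If you want to double-check a range, the ipcalc package provides a handy calculator:

bash terminal
patrick@highgarden:~$ ipcalc 53.53.53.112/29
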
~/devstack/local.sh
source ~/.bashrc
source ~/openrc

. openrc admin admin

# DOWNLOAD SOME IMAGES WE LIKE
cd ~
mkdir -p local-images
cd ~/local-images

wget --no-clobber http://archive.ubuntu.com/ubuntu/dists/precise/main/installer-amd64/current/images/netboot/mini.iso
glance image-create --name "ubuntu-minimal" --disk-format=iso --container-format=bare --is-public=True < mini.iso

wget --no-clobber http://cloud-images.ubuntu.com/releases/precise/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img
glance image-create --name "ubuntu-cloud" --disk-format=qcow2 --container-format=bare --is-public=True < ubuntu-12.04-server-cloudimg-amd64-disk1.img

# RESERVE MANAGEMENT IPS
for i in `seq 2 10`; do /opt/stack/nova/bin/nova-manage fixed reserve 192.168.0.$i; done

# ADD KEYS
nova keypair-add --pub_key ~/local-keys/zenan.pub zenan

# ALLOW PING SSH
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# DELETE DEMO TENANT
keystone tenant-delete demo

# MAKE PROMISCUOUS
sudo ip link set eth0 promisc on

# ADD MORE FLOATING IPS
nova floating-ip-bulk-create 53.53.53.88/29 --pool public
# Delete 53.53.53.113 as this is known to be problematic/ first in range
nova floating-ip-bulk-delete 53.53.53.113

# BROADCAST MESSAGE (wall.txt just broadcasts that the local.sh has executed)
wall /home/username/devstack/wall.txt

You must also pre-configure a block device called nova-volumes for block storage.
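
If you do not have a spare partition or disk to dedicate, a loopback-backed LVM volume group is an easy stand-in for testing. A sketch (the 24GB size and the /opt path are arbitrary choices, not requirements):

bash terminal
# create a sparse 24GB backing file and expose it as a loop device
patrick@highgarden:~$ sudo dd if=/dev/zero of=/opt/nova-volumes.img bs=1M count=0 seek=24000
patrick@highgarden:~$ sudo losetup /dev/loop0 /opt/nova-volumes.img
# initialize it for LVM and create the volume group cinder expects
patrick@highgarden:~$ sudo pvcreate /dev/loop0
patrick@highgarden:~$ sudo vgcreate nova-volumes /dev/loop0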

dorne configuration (compute node)

/etc/network/interfaces
# the loopback network interface
auto lo
iface lo inet loopback

# eth0 -- public network
auto eth0
iface eth0 inet static
        address 53.53.53.12
        netmask 255.255.255.0
        gateway 53.53.53.1
        dns-nameservers 8.8.8.8 8.8.4.4

# eth1 (bridged) -- private network
auto br100
iface br100 inet static
        bridge_ports eth1
        bridge_stp off
        bridge_maxwait 0
        bridge_fd 0
        address 192.168.0.2
        netmask 255.255.255.0
~/devstack/localrc
# META
HOST_IP=53.53.53.12
SERVICE_HOST=53.53.53.11
MULTI_HOST=True
DEST=/opt/stack/
LOGFILE=/opt/stack/logs/stack.sh.log

# See https://bugs.launchpad.net/devstack/+bug/1136028
DATABASE_TYPE=mysql

# SERVICES
KEYSTONE_AUTH_HOST=$SERVICE_HOST
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

# CREDENTIALS
ADMIN_PASSWORD=password_for_dashboard
MYSQL_PASSWORD=mysql_root_password
RABBIT_PASSWORD=rabbit_password
SERVICE_PASSWORD=secret_pass
SERVICE_TOKEN=secret_token

# LOGGING
SYSLOG_HOST=$SERVICE_HOST
VERBOSE=False
LOG_COLOR=False
SYSLOG=True
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=/opt/stack/logs/stack.sh.log

# nova-network NETWORKING
ENABLED_SERVICES=n-cpu,rabbit,n-api,n-net,n-novnc,n-xvnc
FIXED_RANGE=192.168.0.0/24
FIXED_NETWORK_SIZE=256
PUBLIC_INTERFACE=eth0
FLOATING_RANGE=53.53.53.112/29
NET_MAN=FlatDHCPManager
FLAT_NETWORK_BRIDGE=br100
FLAT_INTERFACE=eth1
VIRT_DRIVER=libvirt
LIBVIRT_TYPE=kvm

I did not require a local.sh on my compute node.

Additional networking details

The configuration above should have your dashboard and private network running at 110%. However, before my floating IPs would work properly, I had to execute these commands:

bash terminal
sudo ip link set eth0 promisc on
sudo sysctl -w net.ipv4.conf.eth0.rp_filter=0
sudo sysctl -w net.ipv4.conf.eth1.rp_filter=0

Note: These commands will not persist through a reboot.
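
To make them stick across reboots, the sysctl settings can live in /etc/sysctl.conf and the promiscuous-mode command in /etc/rc.local. A sketch of what I would expect to work on Ubuntu 12.04:

/etc/sysctl.conf
# disable reverse-path filtering on both interfaces
net.ipv4.conf.eth0.rp_filter=0
net.ipv4.conf.eth1.rp_filter=0

/etc/rc.local
# add before the final 'exit 0'
ip link set eth0 promisc on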

Networking is definitely my kryptonite; as I grow more knowledgeable I will share better configurations for OpenStack networking (hopefully on Quantum, too).

Launching DevStack

Now we are all set. Reboot and/or restart networking to pick up the new interface settings, and then execute stack.sh on your cloud controller.
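
If you would rather not reboot, cycling the interfaces by hand should pick up the new settings. A sketch; run this from a local console, since taking eth0 down over SSH will drop your session:

bash terminal
patrick@highgarden:~$ sudo ifdown eth0 && sudo ifup eth0
patrick@highgarden:~$ sudo ifup br100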

bash terminal
patrick@highgarden:~/devstack$  ./stack.sh

After the script completes, point your browser at the controller's public IP to access the Horizon admin dashboard.

Finally, execute stack.sh on each compute node and they will join the cloud.
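
To confirm each compute node has joined, list the running services from the controller; every host should report its nova-compute binary with a smiley state. This uses the same nova-manage binary as local.sh above:

bash terminal
patrick@highgarden:~$ /opt/stack/nova/bin/nova-manage service list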


* I actually have three compute nodes, as I am also running nova-compute on the cloud controller.

Next Steps

I’m excited to have a cluster available and will start educating myself on Puppet for automated system administration and Celery for distributed computing in Python.
