Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 2

TL;DR Part 2 of how to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces deployment complexity and removes manual steps.

Part one recap.

In part one we discussed the advantages of the Kubernetes Ingress Controller and configured our cluster to automatically register the public IPs of ingress controllers into AWS Route53 DNS using an annotation.

Go and catch up HERE if you missed it!

TLS / SSL

Now it's time to add the really fun stuff. We already know which subdomain we want to register with our ingress controller (you did read part one, right?), so we have all the information we need to automatically configure SSL for that domain as well!

There's something awesome about deploying an app to Kubernetes, browsing to the URL you configured and seeing a happy green BROWSER VALID SSL connection already set up.

Free, Browser Valid SSL Certificates…. as long as you automate!

If you haven't heard of LetsEncrypt, then this blog post is going to give you an extra bonus present. LetsEncrypt is a browser-trusted certificate authority, which charges nothing for its certs and is also fully automated!

This means code can request a domain cert, prove to the certificate authority that the server making the request actually owns the domain (by placing specific content on the webserver for the CA to check) and then receive the valid certificate back, all via the LetsEncrypt API.

If you want to know how this all works, visit the LetsEncrypt – How It Works page.

Lets Encrypt Automated Proof of Ownership

Without LetsEncrypt, this process would have manual validation steps, as with most other CAs, and potentially no API for requesting certs at all. We really must thank their efforts for making all this possible.

Using Lets Encrypt with Kubernetes Ingress Controllers

Much like with the automatic DNS problem, Google returns more questions than solutions, with different bits of projects and GitHub issues suggesting a number of paths. This blog post aims to distill all of my research and what worked for me.

After testing a few different things, I found that a project called Kube-Lego did exactly what I wanted:

  • Supports configuring both GCE Ingress Controllers and NginX ingress controllers with LetsEncrypt Certs (I’m using GCE in this example).
  • Supports automatic renewals and the automated proof of ownership needed by LetsEncrypt.

Another reason I liked kube-lego is that it's standalone. The LetsEncrypt code isn't embedded in the load balancer (ingress controller) code itself, which would have caused me problems:

  1. I'm using Google's GCE load balancers, so I have no access to their code anyway.
  2. Even if I were running my own Nginx/Caddy/etc. ingress controller pods, if LetsEncrypt were embedded I'd need to write some clustering logic to run more than one instance of them; otherwise they would all race each other to get a cert for the same domain and I'd end up in a mess (and rate limited by the LetsEncrypt API).

KubeLego seemed like the most flexible choice.

Installing KubeLego

Installation is pretty simple, as the documentation at https://github.com/jetstack/kube-lego is much better than that of the dns-controller from part one of this article.

First, we create a ConfigMap that the kube-lego pod will read its settings from. I've saved this as kube-lego-config-map.yaml:
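A minimal sketch along the lines of the kube-lego README; the e-mail address is a placeholder and the kube-lego namespace is my assumption, so adjust both to suit:

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-lego
  namespace: kube-lego
data:
  # E-mail address LetsEncrypt will associate with your certificates (placeholder).
  lego.email: "you@ourdomain.com"
  # LetsEncrypt ACME API endpoint; use the staging URL while testing to avoid rate limits.
  lego.url: "https://acme-v01.api.letsencrypt.org/directory"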

Now we need a Deployment manifest for the kube-lego app itself. I've saved this as kube-lego.yaml:
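Again, a sketch close to the upstream kube-lego example; the image tag and health-check path are assumptions, so check the README for current values:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-lego
  namespace: kube-lego
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kube-lego
    spec:
      containers:
      - name: kube-lego
        image: jetstack/kube-lego:0.1.5   # check the README for the current tag
        ports:
        - containerPort: 8080             # kube-lego serves ACME challenges here
        env:
        - name: LEGO_EMAIL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.email
        - name: LEGO_URL
          valueFrom:
            configMapKeyRef:
              name: kube-lego
              key: lego.url
        - name: LEGO_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LEGO_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        readinessProbe:
          httpGet:
            path: /healthz                # assumed health-check path
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 1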

Notice our Deployment references our ConfigMap to pull settings for the e-mail address and API endpoint. Also notice the app exposes port 8080; more on that later!

We can now deploy both the ConfigMap and the app onto our K8s cluster:
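Assuming the file names above and a dedicated kube-lego namespace (my assumption), something like:

# Create the namespace referenced in the manifests, then the ConfigMap and Deployment.
kubectl create namespace kube-lego
kubectl create -f kube-lego-config-map.yaml
kubectl create -f kube-lego.yaml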

Voila! We’re running kube-lego on our cluster.

Testing Kube-lego

You can view the logs to see what kube-lego is doing. By default it watches for new ingress controllers and takes action on certs if they have certain annotations, which we'll cover below.

Also, if the application fails to start for whatever reason, the health check in the deployment manifest above will fail and Kubernetes will restart the pod.

Your pod name will differ for the logs command:
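Assuming the kube-lego namespace used above:

# Find the pod name, then tail its logs.
kubectl get pods --namespace kube-lego
kubectl logs -f kube-lego-<pod-id> --namespace kube-lego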

Putting it all together

Here we are going to create an app deployment for which we want all this magic to happen!

However, there is a reason automatic DNS registration was part one of this blog series. LetsEncrypt validation depends on resolving the requested domain down to our K8s cluster, so if you haven't enabled automatic DNS (or put the ingress controller's public IP in DNS yourself), then LetsEncrypt will never be able to validate ownership of the domain and therefore never give you a certificate!

It may be worth revisiting part one of this series if you haven't already (it's good, honest!).

App Deployment manifests

If you're familiar with Kubernetes, then you'll recognise that the following manifests simply deploy an nginx sample 'application' in a new namespace. The differences that enable DNS and SSL are all in the ingress controller definition.

namespace.yaml creates our new namespace:
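A sketch of namespace.yaml, using 'nginx-demo' as a placeholder name (the other manifests below assume the same namespace):

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-demo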

nginx.yaml deploys our application:
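A sketch of nginx.yaml; the image tag and replica count are arbitrary:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11   # any recent nginx image will do
        ports:
        - containerPort: 80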

service.yaml is needed to track active backend endpoints for the ingress controller (notice it's of type: NodePort, so it's not publicly exposed):
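A sketch of service.yaml; the names match the deployment above:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: nginx-demo
spec:
  type: NodePort          # reachable by the GCE ingress controller, not exposed publicly
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80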

Finally, our ingress controller, ingress-tls.yaml. I've highlighted the 'non-standard' bits which enable our automated magic:
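A sketch of what ingress-tls.yaml might look like; the hostname and secret name are placeholders, and the two annotations are the bits doing the automated work (the DNS one from part one, the tls-acme one for kube-lego):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  namespace: nginx-demo
  annotations:
    # Part one: ask dns-controller to register the LB's public IP in Route53.
    dns.alpha.kubernetes.io/external: "true"
    # Part two: ask kube-lego to request a LetsEncrypt cert for the hosts below.
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - app.ourdomain.com
    secretName: app-ourdomain-com-tls   # kube-lego stores the issued cert here
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80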

Let's deploy these manifests to our Kubernetes cluster and watch the magic happen!
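Assuming the four files above:

kubectl create -f namespace.yaml
kubectl create -f nginx.yaml
kubectl create -f service.yaml
kubectl create -f ingress-tls.yaml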

Right, now our ingress controller is going to go and configure a GCE load balancer for us (standard behaviour); this will be allocated a public IP, and our dns-controller will register it against our hostname in Route53:
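You can watch the public IP being allocated with something like (namespace as assumed above):

kubectl get ingress --namespace nginx-demo --watch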

And looking in our AWS Route53 portal:

Route53 showing Updated DNS Records

Excellent!

While this was happening, kube-lego was also configuring the GCE load balancer to support LetsEncrypt's ownership checks. Looking at the load balancer configuration in Google's cloud console, we can see that a specific URL path has been configured to point to the kube-lego app on port 8080.

This allows kube-lego to control the validation requests for domain ownership that will come in from LetsEncrypt when we request a certificate. All other request paths will be passed to our actual app.

LetsEncrypt configuration on ingress LB via kube-lego
Kube-lego adds configuration to the ingress loadbalancer to pass LetsEncrypt ownership challenges

This will allow the kube-lego process (requesting certs via LetsEncrypt) to succeed:

As soon as a valid cert is received, kube-lego re-configures the GCE LoadBalancer for HTTPS as well as HTTP (notice in the above screenshot, only Protocol HTTP is enabled on the LB when it is first created).

Kube-lego configures SSL certificate into GCE's ingress load balancer

The Result

The whole process above takes a couple of minutes to complete (LB getting a public IP, DNS registration, LetsEncrypt checks, getting the cert, configuring the LB with SSL), but then… Huzzah! Completely hands-off, publicly available services, protected by valid SSL certs!

Now your developers can deploy applications which are SSL by default without any extra hassle.

I appreciate any corrections, comments or feedback; please direct them to @mattdashj on Twitter.

Until next time!

Matt

Automatic DNS and SSL on Kubernetes with LetsEncrypt – Part 1

TL;DR How to make your Kubernetes cluster super awesome by adding two pods which automatically handle public DNS registration and SSL certs for any deployment you choose! Reduces deployment complexity and removes manual steps.

Overview

Kubernetes Ingress controllers provide developers with an API for creating HTTP/HTTPS (L7) proxies in front of your applications, something that historically we've done ourselves: either inside our application pods with our apps, or more likely as a separate set of pods in front of our application, strung together with Kubernetes Services (L4).

Without Ingress Controller

Public kubernetes service flow without ingress controllers

With Ingress Controller

Ingress controller simplifying k8s services

Technically, there is still a Service in the background to track membership, but it’s not in the “path of traffic” as it is in the first diagram.

What's more, ingress controllers are pluggable: a single Kubernetes API for developers, but any L7 load balancer in reality, be it Nginx, GCE, Traefik, or hardware… Excellent.

However, there are some things Ingress controllers *DON'T* do for us, and that is what I want to tackle today…

  1. Registering our ingress loadbalancer in public DNS with a useful domain name.
  2. Automatically getting SSL/TLS certificates for our domain and configuring them on our Ingress load balancer.

With these two additions, developers can deploy their application to K8s and automatically have it accessible and TLS secured. Perfect!

DNS First

DNS is fairly simple, yet a Google search for this topic makes it sound anything but: lots of different suggestions, GitHub issues and half-finished projects.

All we want is something to listen for new ingress controllers, find the public IP given to the new ingress load balancer and update DNS with the app's DNS name and load balancer IP.

After some research, it turns out code exists to do exactly what we want. It's called 'dns-controller' and it's now part of the 'kops' codebase from the cluster-ops SIG. It currently updates AWS Route53, but that's fine, as that's what I'm using anyway.

https://github.com/kubernetes/kops/tree/master/dns-controller

However, the documentation is slim and unless you're using kops, it's not packaged in a useful way. Thankfully, someone has already extracted the dns-controller pieces and packaged them in a Docker container for us.

The security guy in me points out: if you're looking at anything more than testing, I'd strongly recommend packaging the dns-controller code yourself so you know 100% what's in it.

DNS – Here’s how to deploy (1/2)

Create the following deployment.yaml manifest:
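Roughly along these lines; the image name is a placeholder for whichever pre-packaged dns-controller image you're using (or one you've built yourself), the secret name matches the one we create in a moment, and the flags are the usual dns-controller options at the time of writing, so double-check them against your image:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: dns-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: dns-controller
    spec:
      containers:
      - name: dns-controller
        image: <your-dns-controller-image>   # the pre-packaged image, or one you built yourself
        args:
        - --watch-ingress=true    # watch Ingress objects for the DNS annotation
        - --dns=aws-route53       # use the Route53 backend
        - --zone=*/*              # any zone the credentials can see; restrict this in production
        volumeMounts:
        - name: aws-credentials
          mountPath: /root/.aws   # the AWS SDK looks for ~/.aws/credentials by default
          readOnly: true
      volumes:
      - name: aws-credentials
        secret:
          secretName: route53-credentials   # created in the next step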

This pulls down our pre-packaged dns-controller code and runs it on our cluster. By default I've placed this in the kube-system namespace.

The code needs to change AWS Route53 DNS entries (*duh*), so it also needs AWS credentials.

(I recommend using AWS IAM to create a user with ONLY the access to the Route53 zone you need this app to control. Don't give it your developer keys; anyone in your K8s cluster could potentially read them.)

When we've got our credentials, create a secret with your AWS credentials file in it, as follows:
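Something like this, assuming the secret name used in the deployment sketch above and the default credentials file location:

kubectl create secret generic route53-credentials \
  --from-file=credentials=$HOME/.aws/credentials \
  --namespace=kube-system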

The path to your AWS credentials file will differ. If you don't have a credentials file, it's a simple format, as shown below.
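If you need to create one, the standard AWS credentials file format is (dummy values shown):

[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx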

Now deploy your dns-controller into K8s with  kubectl create -f deployment.yaml

You can query the application's logs to see it working. By default it will try to update any DNS domain it finds configured on an ingress controller that has a matching zone in Route53.

Example log output:

You will see errors here if it cannot find your AWS credentials (check your secret) or if the credentials are not valid!

Using our new automated DNS service.

Right! How do we use it? Simply add the annotation  dns.alpha.kubernetes.io/external: "true"  to any ingress controller, and our new dns-controller will try to add the domain listed under  - host: app.ourdomain.com  to DNS with the public IP of the ingress controller, as in the sketch below.
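A minimal sketch of an annotated ingress; the names and hostname are placeholders:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: our-app
  annotations:
    # This is the only addition needed for automatic DNS registration.
    dns.alpha.kubernetes.io/external: "true"
spec:
  rules:
  - host: app.ourdomain.com
    http:
      paths:
      - backend:
          serviceName: our-app
          servicePort: 80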

Try it out! My cluster is a GKE cluster on GCE, so we're using the Google load balancers. I'm noticing they take around 60 seconds to get a public IP assigned, so DNS can take 90-120 seconds to be populated. That said, I don't need to re-deploy my ingress controllers with my software deployments, so this is acceptable for me.

In the next section, we’ll configure automatic SSL certificate generation and configuration for our GCE load balancers!

Go read part two now, CLICK HERE.

Comments or suggestions? Please find me on twitter @mattdashj

Change your ISP WiFi Password in 2017

Here's a rather odd New Year's resolution for you. If you have Sky Broadband, change your WiFi password. If you have another ISP, read on. This is likely to apply to you too!

Why? Because the default passwords, while they look random, are pretty weak compared to the tools attackers have available in 2017. As I found out by hacking my own Sky WiFi.

Mumble Mumble, WPA2, Secure, no flaws… right?

Well yes, modern WiFi protection (WPA2) is very good, and there are no known flaws to speak of, which leaves attackers one option: age-old password guessing, or 'brute force cracking'.

So what's the problem?

All the Sky WiFi routers I've seen so far (friends' houses, mine, etc.) have passwords of the following format:

  • 8 Upper Case A-Z characters.

'Be safe online, choose good passwords' is drummed into us everywhere now (I even saw some posters on the London Underground!), so most of you will see the problem: 8 characters of uppercase A-Z requires far fewer guesses than if we threw in some numbers, some lowercase characters, or some special characters (* ~ @ etc.).

We could also make the password longer, or a mixture of all of the above.

How bad is it?

OK, looking at it technically, any combination of 8 A-Z characters gives you 208827064576 possible combinations.

26 ^ 8 = 208827064576

Sounds like a lot of guesses, but for a modern graphics card, 80,000 to 300,000 guesses a second is pretty trivial depending on the card.

208827064576 / 80000 = 2610339 seconds.
2610339 / 60 (minutes) / 60 (hours) = 725 Hours

So one entry-level graphics card at 80,000 guesses a second would take 725 hours (30 days) to guess every possible password the router could have by default.

That's not very long, considering your neighbours likely possess the computing power needed to be on your network in less than a month.

To the Cloud!

Someone with a graphics card can do the above, but more concerning, anyone with a bit of knowledge can actually guess much quicker for a fraction of the price!

Introducing Amazon Web Services (AWS), offering computing and number-crunching power in the cloud, hired by the second/hour/day; a solution for millions of businesses and startups that don't want to buy and manage their own server farms. AWS likely powers apps you use every day, Netflix being one example.

But these resources can also be used to speed up our guessing process. Here we have an AWS instance (a computer in the cloud) offering 16 graphics cards in one, for the low, low price of £11.70 an hour.

Amazon AWS Graphics Card Instances.


16 times the power!

So now the guessing process just got 16 times quicker, without having to buy any graphics cards or have any computers running at home at all.

Here we can see the AWS instance running a brute force password guessing attack against my router, using all 16 graphics cards at once.

Knowing the password will be 8 upper case A-Z characters makes automating this attack much easier. This tool can just be left running.

We can see that each of the 16 graphics cards is producing over 80,000 guesses a second, giving us a total of 1,394,000 guesses/second.

208827064576 / 1394000 = 149805 Seconds
149805 / 60 (minutes) / 60 (hours) = 41.7 Hours

So now we know, with 100% certainty, that we will have found the password within 41.7 hours. It could take less (remember that 100% means every possible guess; chances are the actual password won't be the last one we try, so we could get lucky and find the password after 10%, 40%, etc.).

You can see I'm 4% through, with 1 hour and 20 minutes elapsed and 1 day and 15 hours to go. That's slightly less than our calculator estimate above.

24 + 15 + 1hr20 = 40 Hours 20 Mins.

Say 41 hours in total (including setup of the Amazon AWS machine). That's £480 and less than two days to guarantee I have access to your network.

Now this may sound like a lot of money, but consider malicious intent, be it corporate espionage, ransomware, spying, or further hacking of the computers on the network (e-mail, Facebook, online banking etc.). £480 is actually affordable to most.

Not Just Sky

I feel it necessary to say I'm not having a go at Sky specifically here. They just happen to be my ISP, and I noticed the default passwords were A-Z only.

There are many, many other broadband providers that ship WiFi routers with the same style of A-Z only 8 character passwords. Check yours and if necessary, log into the router and change your password to something more secure, see below for details.

What's the solution?

So here's the thing about password guessing: knowing the format of the password ahead of time (8 characters, all A-Z uppercase, for example) makes knowing the number of guesses simple, as you saw with our easy calculations above.

Changing that length, or changing the 'known format', makes an attacker's life much harder.

Let's say, for example, the attacker knew the password was A-Z uppercase and between 6 and 8 characters long. Suddenly, they would have to try guesses for:

  • A-Z combinations with 6 characters (308915776 guesses)
  • A-Z combinations with 7 characters (8031810176 guesses)
  • A-Z combinations with 8 characters (our original 208827064576 guesses).

That's an extra 8340725952 guesses on top of our original number in order to guarantee we crack the password.

8340725952 / 1394000 (guesses a second) = 1.67 hours
Costing the attacker an extra £19.53

Now obviously, I'm not suggesting making your WiFi password shorter. I'm just saying that not knowing the exact format and composition of a WiFi password makes the process harder, longer and less effective.

Let's look at what we should do, and the implications for an attacker…

A single extra character, still A-Z uppercase:

5429503678976 possible combinations = 45 Days on our AWS setup = £12,000

Two extra characters, still A-Z uppercase:

141167095653376 possible combinations = 1172 Days (3.2 years!) on our AWS setup = £329,098

8 characters, combination of A-Z upper and a-z lowercase.

53459728531456 possible combinations = 443.9 days on our AWS setup = £124,637

8 characters, combination of A-Z upper, a-z lower and numbers 0-9

218340105584896 possible combinations = 1812.8 days on our AWS setup = £509,042

So there you have it… more characters is good; different 'character sets' (numbers, lowercase etc.) are good.

I'd recommend not going for <Dictionary Word>123 or <Dictionary Word><Dictionary Word>, as 'dictionary attacks' (not covered in this post) will try combinations of words to crack the password instead.

Personally, I prefer the options above: random passwords with more characters and more character sets. If you do want to use words to make the password really long, add a good number of random letters and numbers at the start, middle or end.

Either way, you're going to be in a much better position than having an attacker see a 'SKYABCD'-style WiFi network and know they have a guaranteed way in.

Comments or corrections to twitter @mattdashj


Signing Exchange E-Mail on the iPhone 7 / 6 / 5 or iPad

Quick walkthrough for setting up signed outgoing e-mails on the iPhone / iPad

Scenario: you have a free e-mail signing certificate such as the one from Comodo; you've set it up on your desktop/laptop e-mail, but you also send a lot of mail from your iPhone/iPad.

There are two steps to getting signed mail working on the iPhone.

Step 1: Install your certificate and Private Key onto the iPhone using the ‘Apple Configurator version 2’.

Download 'Apple Configurator 2' from the App Store onto your Mac.
(This is a tool from Apple that lets you create profiles and roll out changes, such as certificates, to your iPhones/iPads/Apple TVs.)

Open it.

Go to File > New Profile.

A new profile window appears. In the General tab, give the profile a name, as below:

screen-shot-2016-11-03-at-03-28-16

Then go into your Mac key store (the app is called 'Keychain Access'). Go to Certificates; you should find your imported Comodo cert listed with your e-mail address as the title, as below:

screen-shot-2016-11-03-at-03-29-23

Right-click your mail certificate and choose Export.

This will export your certificate and private key into one '.p12' file. You'll be prompted to protect the exported certificate with a new password. (Don't leave it blank. You'll only need the password once, in about a minute's time, so you may as well make it strong!)

screen-shot-2016-11-03-at-03-29-51

screen-shot-2016-11-03-at-03-30-14

Now you should have a ‘.p12’ file in your documents. Yes? Good.

Back to the Apple Configurator profile screen. Click on the 'Certificates' section on the left and click the 'Configure' button. You will be prompted to add a certificate; use the Finder window that appears to find and select your new .p12 file.

screen-shot-2016-11-03-at-03-32-09

You will then need to give the profile the password you just used for the .p12 export. Type it in the 'Password:' field; you'll know it's right when the window changes from showing this:

screen-shot-2016-11-03-at-03-32-25

To this:

screen-shot-2016-11-03-at-03-32-36

That's it! We can now save this profile and add it to our iPhone/iPad.

Save it by clicking the title at the top of the profile window and giving it a name. Mine saved to my iCloud Drive; this is fine.

screen-shot-2016-11-03-at-03-32-56

Now, plug your phone into your Mac via USB. It will appear in the 'Apple Configurator 2' main window.

screen-shot-2016-11-03-at-03-35-13


Right-click it and choose Add > Profile. Then select the new .mobileconfig file we've just saved.


screen-shot-2016-11-03-at-03-35-31

screen-shot-2016-11-03-at-03-35-46

Then follow the instructions on the Mac and on your iPhone to install the certificate. The iPhone will need your iPhone password and will warn you that the 'Profile is unsigned'. This is fine.

Once done, you can unplug your phone from your Mac, you’re ready for step 2…

Step 2: Turn S/MIME E-Mail signing on within your iPhone settings and select the certificate you just uploaded.

This is the easy bit.

On your phone, go into Settings > Mail.

Choose 'Accounts', then select the account the certificate is for (mine is my Exchange account).

Then select the 'Account your@email.com' line at the top of the screen to drill into that account's settings…

img_3145

From here, click 'Advanced Settings'.

Finally, in Advanced Settings, turn 'S/MIME' on. Then tap the new 'Sign' option.

img_3146

Turn the Sign setting on and you'll be asked to choose a certificate. The one from the profile we uploaded should be listed for you to select, as below:

img_3147

That's it; your e-mails should now be sent signed!

Matt

DCOS.io OpenDCOS Authentication Token

I was looking to script some containers against an OpenDCOS deployment; however, authentication for OpenDCOS is OAuth against either Google, GitHub or Microsoft.

DCOS login options


The docs (here) discuss requesting an auth token for a given user, but the API URL/path doesn't seem to work in OpenDCOS.

It turns out the correct URL is below. Paste it into a browser, authenticate, and your token will be provided.

https://<YOUR-DCOS-MASTER-IP>/login?redirect_uri=urn:ietf:wg:oauth:2.0:oob

This is the same URL you’ll be asked to authenticate against if you install the DCOS local CLI.

You can then send this in any requests to the DCOS services (such as Marathon) using an HTTP header, as below:
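For example, with curl; the 'Authorization: token=...' header format and the /service/marathon path below are what worked for me on OpenDCOS, so double-check them against your version:

# List Marathon apps via the DCOS admin router, authenticated with the token.
curl -k \
  -H "Authorization: token=<YOUR-AUTH-TOKEN>" \
  https://<YOUR-DCOS-MASTER-IP>/service/marathon/v2/apps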


Mac OSX El Capitan Secure Erase

So, it’s time to give my old corporate Macbook Pro 15″ back to who knows where.

Time to move my data to my new (much the same) Macbook Pro 15″ and secure erase my old SSD… Right?

Wrong! It seems the recovery partition on El Capitan (hold down CMD + R on boot) completely prevents any of the 'secure erase' options; the button for security options just isn't there!

Anyway, Disk Utility is just a pretty GUI on top of the 'diskutil' command line.

So, to run a very secure (and lengthy) 35-pass wipe on your main disk…

Once you have the "OS X Utilities" window showing, go to Utilities > Terminal from the menu bar, then in the terminal type the following command:

 diskutil secureErase 3 disk0 

For a quicker, US DoD 7-pass secure erase, run:

 diskutil secureErase 2 disk0 

Or for an even quicker, US DoE 3-pass secure erase, run:

 diskutil secureErase 4 disk0

If the command errors with “device in use” you’ll need to unmount your MacOSX partition first with the following command:

 diskutil unmountDisk disk0 

WARNING: Any of these options will permanently, irreversibly destroy ALL data on your disk. Please make sure you have no external storage directly attached, or you may just wipe that instead.

The secureErase commands will then show a progress bar and estimated time to completion. The 35-pass wipe on a mid-2012 256GB SSD estimates 8 hours.

Yes, you’re going to need a charger 😉

Matt

OpenStack infrastructure automation with Terraform – Part 2

TL;DR: The second of a two-post series looking at automation of an OpenStack project with Terraform, using the new Terraform OpenStack provider.

With the OpenStack provider for Terraform being close to acceptance into the Terraform release, it's time to unleash its power on the Cisco OpenStack-based cloud.

In this post, we will:

  • Write a Terraform '.tf' file to describe our desired deployment state, including:
    • Neutron networks/subnets
    • Neutron gateways
    • Keypairs and Security Groups
    • Virtual machines and Volumes
    • Virtual IPs
    • Load balancers (LBaaS).
  • Have terraform deploy, modify and rip down our infrastructure.

If you don’t have the terraform openstack beta provider available, you’ll want to read Part 1 of this series.

Terraform Intro

Terraform "provides a common configuration to launch infrastructure", from IaaS instances and virtual networks to DNS entries and e-mail configuration.

The idea being that a single Terraform deployment file can leverage multiple providers to describe your entire application infrastructure in one deployment tool; even if your DNS, LB and Compute resources come from three different providers.

Different infrastructure types are supported via provider modules; it's the OpenStack provider we're focused on testing here.

If you’re not sure why you want to use Terraform, you’re probably best getting off here and having a look around Terraform.io first!

Terraform Configuration Files

Terraform configuration files describe your desired infrastructure state, built up of multiple resources, using one or more providers.

Configuration files are in a custom but easy-to-read format with a .tf extension. (They can also be written in JSON for machine-generated content.)

Generally, a configuration file will hold necessary parameters for any providers needed, followed by a number of resources from those providers.

Below is a simple example with one provider (OpenStack) and one resource (an SSH public key to be uploaded to our OpenStack tenant).
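A minimal sketch of demo1.tf; the credential and endpoint values are placeholders for your own environment:

provider "openstack" {
  user_name   = "<YOUR_OS_USERNAME>"
  tenant_name = "<YOUR_OS_TENANT>"
  password    = "<YOUR_OS_PASSWORD>"
  auth_url    = "https://<YOUR_OPENSTACK_API>:5000/v2.0"
}

# One resource: upload an SSH public key to the tenant.
resource "openstack_compute_keypair_v2" "tf_keypair" {
  name       = "tf_keypair"
  region     = ""
  public_key = "ssh-rsa AAAA... your public key here"
}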

Save the above as demo1.tf and replace the placeholders with your own OpenStack environment login details.

Now run $terraform plan  in the same directory as your demo1.tf  file. Terraform will tell you what it’s going to do (add/remove/update resources), based on checking the current state of the infrastructure:

Terraform checks and sees that the keypair doesn't already exist on our OpenStack provider, so a new resource is going to be created if we apply our infrastructure… good!

Terraform Apply!

Success! At this point you can check Openstack to confirm our new keypair exists in the IaaS:


Terraform State

Future deployments of this infrastructure will check the state first; running $terraform plan again shows no changes, as our single resource already exists in OpenStack.

That’s basic terraform deployment covered using the openstack provider.

Adding More Resources

The resource we deployed above was 'openstack_compute_keypair_v2'. Resource types are named by the author of a given plugin, not centrally by Terraform (which means TF config files are not re-usable between differing provider infrastructures).

Realistically this just means you need to read the doc of the provider(s) you choose to use.

Here are some openstack provider resource types we’ll use for the next demo:

“openstack_compute_keypair_v2”
“openstack_compute_secgroup_v2”
“openstack_networking_network_v2” 
“openstack_networking_subnet_v2”
“openstack_networking_router_v2”
“openstack_networking_router_interface_v2”
“openstack_compute_floatingip_v2”
“openstack_compute_instance_v2”
“openstack_lb_monitor_v1”
“openstack_lb_pool_v1”
“openstack_lb_vip_v1”

If you are familiar with Openstack, then their purpose should be clear!

The following Terraform configuration will build on our existing configuration to:

  • Upload a keypair
  • Create a security group
    • SSH and HTTPS in, plus all TCP in from other VM’s in same group.
  • Create a new Quantum network and Subnet
  • Create a new Quantum router with an external gateway
  • Assign the network to the router (router interface)
  • Request two floating IP’s into our Openstack project
  • Spin up three instances of CentOS7 based on an existing image in glance
    • With sample metadata provided in our .tf configuration file
    • Assigned to the security group terraform created
    • Using the keypair terraform created
    • Assigned to the network terraform created
      • Assigned static IP’s 100-103
    • The first two instances will be bound to the two floating IP’s
  • Create a Load Balancer Pool, Monitor and VIP.

Before we go ahead and $terraform plan ; $terraform apply this configuration, a couple of notes.

Terraform Instance References / Variables

This configuration introduces a lot of resources, each resource may have a set of required and optional fields.

Some of these fields require the UUID/ID of other openstack resources, but as we haven’t created any of the infrastructure yet via  $terraform apply , we can’t be expected to know the UUID of objects that don’t yet exist.

Terraform allows you to reference other resources in the configuration file by their terraform resource name, terraform will then order the creation of resources and dynamically fill in the required information when needed.

For example, in the following resource section we need the ID of an OpenStack Neutron network in order to create a subnet under it. The ID of the network is not known, as it doesn't yet exist. So instead, a reference to our named instance of the openstack_networking_network_v2 resource, tf_network, is used, and from that resource we want the ID passed to the subnet resource, hence the .id at the end.
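For instance (the names and CIDR are placeholders):

resource "openstack_networking_network_v2" "tf_network" {
  name           = "tf_network"
  region         = ""
  admin_state_up = "true"
}

resource "openstack_networking_subnet_v2" "tf_subnet" {
  region     = ""
  # Reference the network above by its Terraform name; the real ID is filled in at apply time.
  network_id = "${openstack_networking_network_v2.tf_network.id}"
  cidr       = "192.168.100.0/24"
  ip_version = 4
}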

Regions

You will notice each resource has a region="" field. This is a required field in the OpenStack Terraform provider module for every resource (try deleting it; $terraform plan will error).

If your OpenStack target is not region-aware/enabled, then you must set the region to an empty string in this way.

Environment specific knowledge

Even with the dynamic referencing of IDs explained above, you are still not going to be able to simply copy, paste, save and $terraform apply, as there are references in the configuration specific to my OpenStack environment. Just like the username, password and OpenStack API URL in demo1, in demo2 you will need to provide the following in your copy of the configuration:

  • Your own keypair public key
  • The ID of your environment's 'external gateway' network for binding your Neutron router to.
  • The pool name(s) to request floating IP’s from.
  • The Name/ID of a glance image to boot the instances from.
  • The Flavour name(s) of your environment’s instances.

I have placed a sanitised version of the configuration file in a gist, with these locations clearly marked by <<USER_INPUT_NEEDED>> to make the above items easier to find/edit.

http://goo.gl/B3x1o4

Creating the Infrastructure 

With your edits to the configuration done:
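Run a plan first to review what will be created (just as in demo1):

terraform plan    # review the full set of resources to be created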

Terraform Apply! (for the final time in this post!)

Enjoy your new infrastructure!

We can also confirm these items really do exist in openstack:

Destroying Infrastructure

$terraform destroy  will destroy your infrastructure. I find this often needs running twice, as certain objects (subnets, security groups etc) are still in use when terraform tries to delete them.

This could simply be our Terraform API calls being quicker than the state update within OpenStack; there is a bug open with the OpenStack Terraform provider.

First Run:

Second Run: Remaining resources are now removed.

That's all for now, boys and girls!

Enjoy your weekend.


OpenStack infrastructure automation with Terraform – Part 1

Update: The OpenStack provider has been merged into Terraform. It comes with the default Terraform download as of 0.4.0.

Get it HERE: https://terraform.io/downloads.html

Then proceed directly to the second part of this series to get up and running with Terraform on Openstack quickly!

Or.. read more below for the original post.

Continue reading OpenStack infrastructure automation with Terraform – Part 1

UCS vMedia Configuration and Boot Order

Just a quick note on Cisco UCS vMedia.

If you have configured a remote CD/DVD from a remote ISO and UCS manager is showing the image is ‘mounted’ but your server is still stuck in a PXE/Netboot loop…

It may be helpful to know that your regular boot order policy in your service profile doesn’t apply here.

AKA: even if you have 'CD/DVD' in your boot order,
this still won't automatically boot into a vMedia CD/DVD.

UCS System manager boot priority list


Solution
You'll need to press F6 from the KVM console on server boot; there you will see an option for booting from the CIMC vMedia DVD.

B Series UCS F6 Boot Options

This will get you where you need to be!

Also, for those that don't know: you can check the status of your vMedia mount under Equipment > Server > Inventory > CIMC.

Scroll down and you’ll see something like below.

UCS System Manager vMedia inventory


Matt

ZFS on Linux resilver & scrub performance tuning

Improving Scrub and Resilver performance with ZFS on Linux.

I've been a longtime user of ZFS, since the internal Sun betas of Solaris Nevada (OpenSolaris).
However, for over a year I've been running a single box at home to provide file storage (ZFS) and VMs, and as I work with Linux day to day, I chose to do this on CentOS, using the native port of ZFS for Linux.

I had a disk die last week in a two-disk ZFS mirror.
Replacement was easy; however, resilvering was way too slow!

After hunting for some performance tuning ideas, I came across this excellent post for Solaris/IllumOS ZFS systems and wanted to translate it for Linux ZFS users. http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/

The post covers the tunable parameter names and why we are changing them, so I won't repeat/shamelessly steal it. What I will do is show that they can be set under Linux just like regular kernel module parameters:

[root@ZFS ~]# ls /etc/modprobe.d/
anaconda.conf blacklist.conf blacklist-kvm.conf dist-alsa.conf dist.conf dist-oss.conf openfwwf.conf zfs.conf

[root@ZFS ~]# cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=2147483648 zfs_top_maxinflight=64 zfs_resilver_min_time_ms=5000 zfs_resilver_delay=0

Here you can see I have set the ZFS top-level max in-flight IO limit to 64 from 32, the resilver minimum time to 5 seconds from 3, and the resilver delay to zero. Parameters can be checked after a reboot:

cat /sys/module/zfs/parameters/zfs_resilver_min_time_ms

Result: After a reboot, my resilver speed increased from ~400KB/s to around 6.5MB/s.
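As an aside, on the ZFS on Linux builds I've used, these tunables are also writable at runtime via /sys, so you can experiment without rebooting; verify the parameters exist and are writable on your version first:

# Apply the same values at runtime (root shell); they revert on reboot unless set in zfs.conf.
echo 64   > /sys/module/zfs/parameters/zfs_top_maxinflight
echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms
echo 0    > /sys/module/zfs/parameters/zfs_resilver_delay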

I didn't tweak any more; it was good enough for me and I had other things to get on with.

One day I'll revisit these to see what other performance I can get out of it. (I'm aware that on my box, the RAM limitation is causing less than 'blazing fast' ZFS usage anyway.)

Happy Pools!

[root@ZFS ~]# zpool status
  pool: F43Protected
 state: ONLINE
  scan: resilvered 134G in 2h21m with 0 errors on Tue Jun 24 01:07:12 2014

  pool: F75Volatile
 state: ONLINE
  scan: scrub repaired 0 in 5h41m with 0 errors on Tue Feb 4 03:23:39 2014


Matt