I created this article to explain the migration journey from deploying a legacy application with manual steps to an automated Kubernetes deployment with proper DevOps practices. It is not meant to deepen your understanding of Kubernetes itself (there’s an abundance of material out there already).

As a Cloud Solution Architect for Microsoft, I work with our partners every week to assist them with containerization and Kubernetes. I’ll use AKS and discuss its strengths and weaknesses without pulling punches. Disclaimer: given that I work for Microsoft, I am aware of my bias, so in this article I will make an effort to be more critical of Azure to balance that out.

Beginning With the End in Mind, I created the following outline:

Intent

Duckiehunt is secure, monitored and deployable with the least amount of manual effort, cost and code-change.

Purpose

I wrote Duckiehunt in 2007 as a LAMP website. It embodies many of the customer requirements I see:

  • Old code, using legacy tooling
  • Want a reliable, resilient infrastructure
  • Want to automate deployment
  • Don’t want to re-write
  • Migration should involve minimal/no code change
  • Need to update to modern standards (e.g. HTTPS, MySQL encryption, private DB instance with backups)

Outcomes

  • CI/CD (Code Check-in triggers automated tests and pushes to Production)
  • Monitoring cluster + app (visualization + alerts if down)
  • HTTPS enabled for duckiehunt.com (CA Cert + forced redirection to https)
  • Running on Kubernetes (AKS)
  • Managed MySQL

Milestones: (in reverse order of accomplishment)

  • Production DNS migrated
  • Azure Monitor + Container Monitoring Solution + LogAnalytics
  • Distinct Dev + Prod environments
  • VSTS + Github integration
  • Securely expose UI + API
  • Integrated MySQL instance
  • Installed on AKS
  • Test in Minikube
  • Migrate App to Container

From here on, I’ll explain my journey as steps fulfilling the milestones I created. I’ll list my estimated time along with my actual time for comparison. The times below are not “time to get X working” but “time to get X working correctly and automated as if I had to support this in production” (which I do). As a result, they’re much higher than a simple success case.

Migrate app to Container

Estimated Time: 4 hours. Actual Time: 10 hours

I wrote this in 2007 using a PHP version that is no longer supported (5.3) and a framework (CodeIgniter) that is no longer very active. I didn’t want to re-write it yet. Thankfully, PHP 5.6 is mostly backwards compatible with 5.3, and I was able to find a container image using it.

I would have been done in ~4 hours; however, I lost an embarrassing number of hours banging my head against the wall when I automated the docker build (I would always get a 404). I learned this was because Linux’s file system is case-sensitive and OSX’s is not, and the PHP framework I chose in 2007 expects the first character of some files to start with a capital letter. *grumble* *grumble*
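
For a sense of scale, the whole containerization boils down to a few lines. Here’s a minimal sketch (not my exact Dockerfile; the php:5.6-apache base image and paths are assumptions based on a stock LAMP layout):

# Minimal sketch; the base image and paths are assumptions, not my exact file
cat > Dockerfile <<'EOF'
FROM php:5.6-apache
# CodeIgniter-era apps typically want the legacy mysql extension and mod_rewrite
RUN docker-php-ext-install mysql mysqli && a2enmod rewrite
# Remember: Linux is case-sensitive, OSX is not (see the 404 saga above)
COPY . /var/www/html/
EOF
docker build -t duckiehunt:local .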

Test in Minikube

Estimated time: 12 hours. Actual Time: 10 hours

Now that I had my PHP app running in a container, it was time to get it running inside Kubernetes. To do this, I needed to deploy, integrate and test the following: Pod, Service, Secrets, Configuration, MySQL and environment variables.

This is a pretty iterative approach of “This, this…nope…how about this?…Nope…This?…ah ha!…Ok, now this…Nope.” This is where Draft comes in. It’s a Kubernetes tool designed specifically for this use case, and I think I’ve started to develop romantic feelings for it because of how much time and headache it saved me while being dead simple to use.
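
If you haven’t tried Draft, the core loop is roughly the following (assuming Draft’s defaults and an already-configured registry; exact behavior may vary by version):

# Scaffold a Dockerfile + Helm chart by detecting the app's language (run once)
draft create
# Build the image and deploy the chart to the current cluster; re-run after every change
draft up
# Proxy a local port to the running pod to poke at it
draft connect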

Install in AKS

Estimated time: 8 hours. Actual time: 2 hours

Creating a new AKS cluster takes about 10 minutes and is instantly ready to use. Because I had done the work of testing in Minikube, the hard work was already done, but I expected some additional hiccups. Again, this is where my love and adoration of Draft started to shine. I was almost done in 30 minutes, but I took some shortcuts with Minikube that came back to bite me.
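
For completeness, standing up the cluster was roughly two commands (the resource group, cluster name and node count here are mine; pick your own):

# Create a small AKS cluster
az aks create --resource-group duckiehunt-rg --name duckiehunt-aks --node-count 2 --generate-ssh-keys
# Merge the cluster credentials into ~/.kube/config so kubectl and draft target AKS
az aks get-credentials --resource-group duckiehunt-rg --name duckiehunt-aks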

Integrated MySQL instance

Estimated time: 2 hours. Actual time: 3 hours

Azure now offers MySQL as a service (aka Azure Database for MySQL), and I chose to use that. I could have run MySQL in a container in the cluster; however, I would have had to manage my own SLA, backups, scaling, etc. Given that my intent for this project is the least amount of work and cost, and the cost is still within my MSDN budget, I chose to splurge.

I spent an hour experimenting with Open Service Broker for Azure (a way of managing external dependencies, like MySQL, natively in K8S). I really like the idea, but I wanted one instance for both Dev + Prod and needed tight control over how my app reads in database parameters (since it was written in 2007). If I were doing more than one deployment, OSBA would be the right fit, but not this time.

Steps taken:

  1. Created the Azure Database for MySQL instance
  2. Created the dev/prod accounts
  3. Migrated the data (mysqldump)
  4. White-listed the source IPs (To MySQL, the cluster traffic looks as if it’s coming from the Ingress IP address)
  5. Injected the connection string to my application (Using K8S Secrets)

Then I was off to the races. OSBA would have automated all of that for me, but I’ll save that for a proverbial rainy day.
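
In CLI terms, those five steps boil down to something like this sketch (the server, account and secret names are mine; substitute your own):

# Steps 1-2: create the managed MySQL server (dev/prod accounts are created similarly)
az mysql server create --resource-group duckiehunt-rg --name duckiehunt-mysql --admin-user dhadmin --admin-password '<password>' --sku-name GP_Gen5_2
# Step 3: migrate the data; note Azure's user@server login format
mysqldump -u olduser -p duckiehunt > duckiehunt.sql
mysql -h duckiehunt-mysql.mysql.database.azure.com -u dhadmin@duckiehunt-mysql -p -e 'CREATE DATABASE duckiehunt'
mysql -h duckiehunt-mysql.mysql.database.azure.com -u dhadmin@duckiehunt-mysql -p duckiehunt < duckiehunt.sql
# Step 4: white-list the cluster's Ingress IP
az mysql server firewall-rule create --resource-group duckiehunt-rg --server-name duckiehunt-mysql --name allow-aks --start-ip-address <ingress-ip> --end-ip-address <ingress-ip>
# Step 5: inject the connection info as a K8S Secret for the app to read
kubectl create secret generic duckiehunt-db --from-literal=DB_HOST=duckiehunt-mysql.mysql.database.azure.com --from-literal=DB_USER=dhadmin@duckiehunt-mysql --from-literal=DB_PASS='<password>'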

Securely expose UI + API

Estimated time: 4 hours. Actual time: 20 hours

This was the most frustrating part of the entire journey. I decided to use the Nginx Ingress Controller with cert-manager (for SSL). There’s lots of old documentation that conflicts with recommended practices, which led to lots of confusion and frustration. I got so frustrated that I purposely deleted the entire cluster and started from scratch.

Lessons learned:

  1. nginx-ingress is pretty straightforward and stable. cert-manager is complicated, and I had to restart it a lot. I really miss kube-lego (same functionality, but deprecated; kube-lego was simple and reliable)
  2. Put your nginx-ingress + cert-manager in kube-system, not in the same namespace as your app
  3. You might have to restart cert-manager pods when you modify services. I had issues where cert-manager was not registering my changes.
  4. cert-manager might take ~30 minutes to re-calibrate itself and successfully pull the cert it’s been failing on for the last 6 hours
  5. cert-manager creates secrets when it tries to negotiate, so be mindful of extra resources left around, even if you delete the Helm chart
  6. cert-manager injects its own ingress into your service to verify you own the domain. If your service/ingress isn’t working properly, cert-manager will not work either
  7. If you’re making DNS changes, cert-manager takes a long time to “uncache” the result. Rebooting kube-dns doesn’t help.
  8. There’s no documentation of best practices for setting up 2 different domains with cert-manager (e.g. dev.duckiehunt.com; www.duckiehunt.com)
  9. AKS’s HTTP application routing is a neat idea, but you cannot use custom domains, so you’re forced to use its *.aksapps.io domain for your services. Great idea, but not useful in real-world scenarios

To summarize, I was finally able to get development and production running in two different namespaces with one ingress controller and one cert-manager. It should have been simple, but death-by-1000-papercuts ensued while managing certs for each of them. Now I’m wiser, but the journey was long and frustrating. That might warrant a blog post of its own.
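
For the record, the layout I landed on was one nginx-ingress and one cert-manager in kube-system serving both namespaces. With Helm 2 syntax and the charts as they existed at the time of writing, the installs were roughly:

# Both controllers live in kube-system, per lesson #2 above
helm install stable/nginx-ingress --name nginx-ingress --namespace kube-system
helm install stable/cert-manager --name cert-manager --namespace kube-system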

VSTS + Github integration

Estimated time: 4 hours. Actual time: 2 hours

VSTS makes CI/CD easy. Real easy. Almost too easy.

I lost some time (and ~8 failed builds) because the VSTS UX isn’t intuitive to me and the documentation is sparse. But now that it’s working, I have a fully automated GitHub-commit-to-production release pipeline that completes within 5 minutes. This will save me a tremendous amount of time in the future, and it’s what I’m most excited about.
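
Conceptually, the pipeline just automates what I had been doing by hand. Each commit effectively triggers something like the following (the registry, chart and variable names are mine, not anything VSTS-specific):

# Build and push an image tagged with the build ID
docker build -t duckiehunt.azurecr.io/duckiehunt:$BUILD_ID .
docker push duckiehunt.azurecr.io/duckiehunt:$BUILD_ID
# Roll the new tag out via the Helm chart
helm upgrade --install duckiehunt ./charts/duckiehunt --set image.tag=$BUILD_ID --namespace production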

Azure Monitor + Container Monitoring Solution + LogAnalytics

Estimated time: 3 hours. Actual time: none.

This was the surprising part. All of this work was already done for me when I set up the AKS cluster, and it is integrated into the portal. I was impressed that this was glued together without any additional effort on my part.

That said, here are some “gotchas”:

  • The LogAnalytics SLA is ~6 hours. My testing showed new logs appearing within 5 minutes, but after a cluster is newly created, the initial logs take ~30 minutes to appear.
  • The LogAnalytics UX isn’t intuitive, but the query language is extremely powerful (see the example after this list), and each pod’s logs were available by clicking through the dashboard.
  • Monitoring and Logging are two pillars of the solution; however, Alerting is missing from the documentation. That integration is forthcoming and will likely warrant another blog entry.
  • The “Health” tile is useful for getting an overview of your cluster; however, the “Metrics” tile seems pretty limited. Both are still in Preview, and I expect additional improvements soon.
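
As a taste of that query language, here’s the kind of query I used to pull recent pod logs. (A sketch only: it assumes the az CLI’s log-analytics extension, and the workspace GUID is your own.)

# Last hour of container logs for the cluster
az monitor log-analytics query --workspace <workspace-guid> --analytics-query "ContainerLog | where TimeGenerated > ago(1h) | project TimeGenerated, ContainerID, LogEntry | take 50"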

Production DNS migrated

Estimated time: 1 hour. Actual time: 1 hour

Since I did the heavy lifting in the “Securely expose UI + API” section, this was as easy as flipping a light switch and updating the DNS record in my registrar (dreamhost.com). No real magic here.

Summary

This has been a wonderful learning experience for me, because I was not just trying to showcase AKS/K8S and its potential, but also using it as it is intended to be used, thus getting my hands dirtier than normal. Most of the underestimated time was spent on a few issues that “rat-holed” me due to technical misunderstandings and gaps in my knowledge. I’ve filled in many of those gaps now and hope that it saves you some time too.

If this has been valuable for you, please let me know by commenting below. And if you’re interested in getting a DuckieHunt duck, let me know, as I’d love to see more take flight!

P.S. The source code for this project is also available here.

WARNING: SSH’ing into an agent node is an anti-pattern and should be avoided. However, we don’t live in an ideal world, and sometimes we have to do the needful.

Overview

This walkthrough creates an SSH server running as a Pod in your Kubernetes cluster and uses it as a jumpbox to the agent nodes. It is designed for users managing a Kubernetes cluster who cannot readily SSH into their agent nodes (e.g. AKS does not publicly expose the agent nodes for security considerations).

This is one of the steps in the Kubernetes Workshop I have built when working with our partners.

NOTE

It has been tested on an AKS cluster; however, it should also work with other cloud providers.

You can follow the steps in the SSH to AKS Cluster Nodes walkthrough; however, that requires you to upload your private SSH key, which I would rather avoid.

Assumptions

* The SSH Public key has been installed for your user on the Agent host
* You have jq installed. Not vital, but it makes the last step easier to understand.

Install an SSH Server

If you’re paranoid, you can generate your own SSH server container; however, [this one by Corbin Uselton](https://github.com/corbinu/ssh-server) has some pretty good security defaults and is available on Docker Hub.

kubectl run ssh-server --image=corbinu/ssh-server --port=22 --restart=Never

Setup port forward

Instead of exposing a service with an IP+Port, we’ll take the easy way and use kubectl to port-forward to your localhost.

NOTE: Run this in a separate window, since it needs to keep running for as long as you want the SSH connection.

kubectl port-forward ssh-server 2222:22

Inject your Public SSH key

Since we’re using the ssh-server as a jumphost, we need to inject our public SSH key into it. I’m using root for simplicity’s sake, but I recommend a more secure approach going forward. (TODO: change this to use a non-privileged user.)

cat ~/.ssh/id_rsa.pub | kubectl exec -i ssh-server -- /bin/bash -c "cat >> /root/.ssh/authorized_keys"

SSH to the proxied port

Using the SSH Server as a jumphost (via port-forward proxy), ssh into the IP address of the desired host.

# Get the list of Host + IP's
kubectl get nodes -o json | jq '.items[].status.addresses[].address'
# $USER = Username on the agent host
# $IP = IP of the agent host
ssh -J root@127.0.0.1:2222 $USER@$IP

NOTE: If you get “WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!”, you might need to add `-o StrictHostKeyChecking=no` to the SSH command if you bounce across clusters. This happens because SSH believes the identity of the host has changed; either remove that entry from your `~/.ssh/known_hosts` or tell SSH to ignore the host identity.

Cleanup

  • kubectl delete pod ssh-server
  • Kill the kubectl port-forward command

This week, I found myself in one of the most unique and challenging situations of my life. And now that it’s all over, I find myself in tears. Not because of sadness, but because I now know myself as someone who can actually make a difference in this world, despite the circumstances.

Now for a little backstory.

It should be no surprise that I love to build. I found my best friend, Lee Gibson, when a LEGO set came up at a White Elephant party and we both schemed on how to win it. I’ve created a non-profit called “The Trebuchet Society”, with the primary goal of hosting SlingFest, a (mostly) annual event designed to gather builders from around the area to create trebuchets and toss pumpkins hundreds of feet. It’s a blast and fuels my desire to build and be around other builders.

In 2014, I discovered TheLab.ms, a budding makerspace/hackerspace, via a tweet. Its mission is to foster a collaborative environment wherein people can explore and create intersections between technology, science, art, and culture.

I found my people.

Their guiding principles were focused more on education and ethical hacking than on building trebuchets, but that’s cool. My mom was a librarian, so education is in my blood. I just wanted to be around like-minded people.

I watched Shawn Porter, Roxy Dehart and Richard Gowen pour their heart out into it and build it from scratch. TheLab even got an article in the Plano Magazine.

As with all non-profits, you want awareness, engagement and members. These usually bring in new ideas and fresh blood. Sometimes in alignment with your own ideas, sometimes not. And as a father, I can tell you, there is no rage in the world like watching something happen to your baby.

Fast forward a few years and after some leadership changes, the last of the founders resigned as a board member, and a number of positions were either vacant or MIA. Then the Education Coordinator resigned. Then the President resigned. Then the Floating Board Member. And the Vice President. And the Secretary.

Their reasons were their own. And I support them 100%.

I was now in one of the most unique and confronting situations of my life: the sole Board Member of TheLab.ms. A community that I had been with almost from the very start and loved so dearly was fighting amongst itself. Anger and frustration were evident on a daily basis. People were burnt out.

Thankfully, I had an ace in my pocket. For the previous 6 months, I had been enrolled in a course called “Team Management and Leadership Program” from Landmark Worldwide, designed around creating teams and teamwork in any situation to produce powerful results in many areas of life, with freedom and ease. I called my coach and the classroom leader in tears that day. I felt completely broken down and had no idea how to make this work. Through an insightful, “tough love” conversation, I started to see a path forward.

I organized a last-minute event and invited people to create the future of TheLab. I expected about 6 people to show up. I had to hold back my emotions when the room completely filled up, including members I hadn’t seen in years. These were people who, despite the burn-out, despite the anger, despite the frustration, deeply wanted TheLab to not just survive, but to thrive. It was showtime.

In an hour and a half, we dug deep, asked some good questions and had some fun. We had some deep, meaningful conversations about the future and not the past. And most importantly, people stepped up to the plate to take on some big leadership positions. Elections are next week and I invite all of you to learn what we’re about. I have never been more proud to be part of an organization than I am right now.

I have found my people.

Again.

Emerging civilizations naturally gravitate towards bodies of water. Growing up in lower Louisiana, the Mighty Mississippi was where my ancestry settled. It was a source of commerce, livelihood and fisheries, providing the sustenance that allowed the surrounding area to flourish into the ecosystem it is now.

Technology mimics this cultural expansion and KubeCon/CloudNativeCon is the riverbed where developers and operators around the world arrive to ship and receive containers from the Kubernetes dock.

I was fortunate enough to join 50+ other Microsoft’ers and 4,000+ others at KubeCon/CloudNativeCon on Dec 5-8. This hotbed of activity has flourished from Google’s internal foundational work into a vibrant open source community. This small stream has gathered enough momentum to be undeniable in the development and operations community.


Kubernetes is software that makes it easier to run your software. Software development is hard, not just because you have to worry about your code, but because you also have to worry about monitoring, maintaining, updating, scaling and more. Kubernetes was the pilot project for a larger organization called the Cloud Native Computing Foundation (CNCF), which was created to be a steward for this and other projects with the intention of making software easier to develop and operate.

If you missed the event and want to live vicariously through my notes, you’re in luck, as I keep pretty detailed notes:
https://github.com/lastcoolnameleft/Conference-Notes/tree/master/KubeCon-2017

This year was the year of the Service Mesh and socks.


The week was not just an opportunity to learn from other experts, but to be at the forefront of new announcements from my favorite cloud.

Azure Announcements:

  • Virtual Kubelet – The new version of the Kubernetes connector was announced at KubeCon. It enables Azure to extend Kubernetes to Azure Container Instances (ACI), providing our customers with per-second billing and NO virtual machine management for containers.
  • Ark – a cross-cloud Kubernetes migration and disaster recovery tool from Heptio that enables teams to move workloads from AWS and GCP to Azure. Microsoft and Heptio (the creators of Ark) have formed a strong partnership, and Ark delivers a strong Kubernetes disaster recovery solution for customers who want to use it on Azure.
  • Open Service Broker for Azure – We announced the open sourcing of the Open Service Broker for Azure (OSBA), built using the Open Service Broker API. OSBA exposes popular Azure services to Kubernetes, such as Azure CosmosDB, Azure Database for PostgreSQL, and Azure Blob Storage.
  • Metaparticle – During the keynote address, Brendan Burns announced the delivery of an experimental model for coding for the cloud. Metaparticle attempts to reduce the complexity and duplication of code involved in deploying software to Kubernetes.
  • Kashti – A visualization dashboard for https://github.com/azure/brigade


Other notable announcements:

  • Kubeflow – Machine Learning Toolkit for Kubernetes
  • Alibaba Cloud is a platinum member of CNCF
  • Codefresh announces support for Helm Charts in Kubernetes
  • CoreOS Tectonic 1.8 released
  • Oracle announces new open source Kubernetes Tools
  • Weaveworks Cloud Enterprise Edition
  • Many more that I’ve forgotten or didn’t jot down

Oh, and it snowed in Austin. It was a KubeCon Miracle!


P.S. A special shout-out to my travel/seminar buddies, Al Wolchesky, Kevin Hillinger, Nick Eberts, Brian Redmond and Eddie Villalba.

I was recently invited to participate in the Microsoft Partner blog where I shared my love of containers.

I’m especially passionate about container technology because of how much it makes the developer’s life easier. Unfortunately, it’s one of those things that must be experienced to truly understand. I tried to boil my thoughts down to just a few paragraphs here. Check it out and let me know what you think!

https://blogs.technet.microsoft.com/msuspartner/2017/11/13/how-i-learned-to-stop-worrying-and-love-the-containers/

Azure App Service for Linux is a pretty neat offering from Azure. You get all of the DevOps features you want (A/B testing, hosted application, tiered support, button-click scaling, lots of templates and more!) without the headache of managing VMs.

9 years ago, I wrote a quacky little website called “Duckiehunt”. Unfortunately, I didn’t pay down the tech debt, and things kept breaking until it was abandoned. I’m now using Duckiehunt as a learning ground for Azure’s services and alternatives.

Azure App Service for Linux was the perfect fit. However, back in 2008, SSL wasn’t as ubiquitous. Now it’s a badge of shame NOT to have it. Azure does offer an App Service Certificate, but I’d like to find a cheaper/more open solution.

Enter Let’s Encrypt, a free certificate authority run by the ISRG and backed by the likes of Mozilla and the EFF. If you don’t know, the EFF are the unsung heroes of the internet; they fight tirelessly to support your freedom and rights online, and they offer Certbot as a free way to obtain Let’s Encrypt certificates. Now I’ll dig into the technical details behind encrypting an App Service for Linux with Let’s Encrypt.

Step #1: Get CertBot
Because I’m on OSX, I was able to run: brew install certbot. For the full range of options, CertBot’s webpage has what you need.

Step #2: Create Cert locally

Before CertBot can create the certificate for you, it must first validate you own the domain. It will prompt you for a few questions, and then ask you to create a file on the webhost and add content to that file for validation.

Thankfully, Azure App Service for Linux provides terminal access to your container so you can make these modifications yourself.

➜ sudo certbot certonly -d duckiehunt.com --manual

Create a file containing just this data:

%RANDOM STRING 1%

And make it available on your web server at this URL:

http://duckiehunt.com/.well-known/acme-challenge/%RANDOM STRING 2%

——————————————————————————-
Press Enter to Continue

Step #3: Add the validation file to your website

I then went to the Kudu instance of my App Service and ran:

➜ mkdir -p /var/www/html/.well-known/acme-challenge/
➜ cd /var/www/html/.well-known/acme-challenge/
➜ echo "%RANDOM STRING 1%" > %RANDOM STRING 2%

At this point, the validation is in place, and it’s time to continue with Certbot by pressing “Enter”.

Waiting for verification…
Cleaning up challenges

IMPORTANT NOTES:
– Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/duckiehunt.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/duckiehunt.com/privkey.pem
Your cert will expire on 2017-11-12. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
“certbot renew”
– If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let’s Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le

Huzzah! I’ve now got a certificate. Time to upload.

Step #4: Upload the certificate to Azure
Azure has a pretty descriptive set of steps for associating a certificate to your App Service, which I was able to follow.

OpenSSL will ask for an export password, which you need to keep for when you upload the cert to Azure.

➜ cd /etc/letsencrypt/live/duckiehunt.com
➜ openssl pkcs12 -export -out myserver.pfx -inkey privkey.pem -in fullchain.pem
Enter Export Password:
Verifying – Enter Export Password:
➜ cp myserver.pfx ~/Desktop

Step #5: Bind the certificate to your App Service

From here on you’re ready to Bind your SSL Certificate to your App Service. I’ll let Microsoft’s documentation lead the way from here.
https://docs.microsoft.com/en-us/azure/app-service-web/app-service-web-tutorial-custom-ssl#bind-your-ssl-certificate
Step #6: Bask in doing your part to secure the internet.

In summary, the process was pretty painless.

  • I used Let’sEncrypt to create a new Certificate for my App Service for Linux by creating a file that Let’sEncrypt could use to validate I owned the site.
  • I then exported that certificate as a password-protected PFX to upload to Azure.
  • Once it was uploaded, I bound that certificate to my domain and voila! A more secure Duckiehunt

One bummer is that the certificate is intended to expire in 3 months instead of the industry standard of 12 months. The renewal process looks pretty easy, but that’s a different blog post.

–Tommy feels that he’s done his part in making the world a bit safer.

Like most children of the 80’s, I loved playing with LEGO. By mixing and matching bricks, you could physically manifest your imagination.

My first LEGO set was the Blacktron – Renegade.

Blacktron Renegade

By following the instructions, I was able to explore space and move strange and dangerous cargo from distant planets. By moving the wings around, I was able to make the Batwing and fly around Gotham. (Well before anyone else realized that potential.)

This was an immensely rewarding experience that I’ve carried with me through my professional career.

Naturally, the toys of the child lead us to adulthood. I knew I wanted to spend my life building. Creating. Spawning new ideas. I wanted to physically manifest my ideas into structures that others would see, admire and even work/play/live in. When I learned that you could get a job doing this, I was elated. I knew this was exactly what I wanted to do. My mission in life was set.

One fateful day, when I was sharing my new life mission with my Godmother, she informed me: “To be an architect you have to know how to draw.” Anyone who’s seen me sign a check, write on a whiteboard, or even attempt to draw a square knows artistry genes were not bestowed upon me. I was crushed. My life’s mission was aborted and I was unsure what to do with myself.

My first drawing of the Falgout Family (I ran out of time for arms)
To quote my wife: “Those are people? I thought those were windows…”

I drew this. Not sure what my obsession with blue people was. That drawing is nightmare fuel for me.

In High School, when Career Day came, I didn’t care about any session other than the local architect’s. As torturous as it was, I still wanted to know what the profession was like. All I remember was “hard work…something something…dedication”.

Fast forward to the last 12 months. I made an exciting and brave leap to join Microsoft, and am now a “Cloud Solution Architect”. I’m an Architect. I’m a real, bona fide Architect. (I’m literally crying as I write this, as I’m so overwhelmed with a sense of accomplishment.) My bricks aren’t 8x8x9.6 mm; they’re CPU cores. I no longer have one toychest; I have 36 datacenter regions, spanned across the world.

Thankfully, I’m not planning to give up on those plastic pieces of creativity, as I’ve currently got a Star Destroyer hanging from the ceiling of my man cave. And even more sets left to complete.

LEGO Star Destroyer hanging from the ceiling.

If I could go back and comfort my younger self during that heartbreaking moment, I’m sure I would have told him: “hard work…something something…dedication”.

//build is a developer-centric conference Microsoft hosts every year. Since I never expected to work for Microsoft, I wasn’t even aware //build existed. So when my manager asked me if I was excited to attend, I told him no. I now know how naive that answer was.

AWS has a head start on cloud services over Azure. But if this conference was any indication, Microsoft is taking this all the more seriously.

Here are some of the announcements that really caught my eye:

Click here for my detailed conference notes.

  • CosmosDB: Originally the distributed storage behind DocumentDB, CosmosDB offers not only a document store but also a MongoDB API, a key-value store and a graph database (Gremlin). That alone is pretty impressive; however, the part that impresses me most is how CosmosDB handles consistency. Traditionally, a database will offer either strong or eventual consistency.

    However, CosmosDB goes far beyond those two models and introduces three more, all available turn-key: Bounded Staleness, Session, and Consistent Prefix (a new model of their own design).

    As a data guy, this is impressive to say the least. Not just because I work here, but because this is a new level of choice that I haven’t seen before and am excited about.

  • Speaking of being a data guy, offering Postgres and MySQL as a service made me giddier than it probably should have. That said, AWS has had them for a while, so I’m mostly excited that we’re catching up.
  • AI: There’s no denying that machine intelligence is on the rise. Netflix’s $1,000,000 prize was just the start, and the pot has gotten bigger. The teams demoed object detection and identification in manufacturing rooms, which led to a “sledgehammer selfie”. You had to be there.
  • Skype: While Skype may not be sexy technology, if it can provide an email transcript of a meeting with a list of action items (assigned by voice commands), as the demo showed, that might change.
  • Powerpoint + AI: Powerpoint isn’t really sexy either. Even less than Skype. In fact, I’d put it at the same sexiness level as Orkut. But the demo of speech-to-text + text translation got a huge round of applause (it showed a Spanish presenter translated to Chinese in seconds).
  • ServiceFabric: The team announced GA for 5.6 and, though already available, support for Windows + Linux containers. It can also ingest docker-compose files, which is interesting but sent a mixed message to the OSS community.
  • Fluent Design: I’m color blind, so visual design is often lost on me. Other people seemed excited about it. So, that’s nice.
  • Lin on Win: Ubuntu Bash on Windows is nothing new. But now you can download Ubuntu, Fedora and SUSE from the App Store instead of enabling “developer mode”. Oh yeah, iTunes is in the App Store now too. Dude.
  • Hololens: Microsoft’s current Hololens is very neat but costs ~$3000. Microsoft announced a $399 model from Acer, which will be available in time for the holidays. Microsoft’s Hololens uses a transparent screen in front of your eyes to overlay augmented reality, while the Acer model provides a complete-view screen with cameras on the side to augment. There were 19 mixed-reality experiences (vendors/partners) attending //build.

  • The parties: Microsoft spared no expense in ensuring that the guests enjoyed themselves. My highlight was walking around CenturyLink Field (home of the Seattle Seahawks) and screaming “Who Dat!”. Rock-aoke (Karaoke with a live-band) was a huge hit too.

Want to pretend you were there by experiencing my photos? Now you can!

See you next year!

TL;DR: Size matters.

After Oracle’s surprise announcement of their containerization of Oracle DB, Oracle WebLogic and a few of their other core technologies, I decided to test it out for myself. (Speaking authentically, I’m leery of their commitment; however, I recognize that I work on Open Source at Microsoft, so who am I to judge?)

My end-goal is to get Oracle DB 12.2 running in a container on Kubernetes inside Azure Container Service. This is Part 1 of my walkthrough from 0 to operational.

Build and Verify the Container

Unlike most Docker projects, Oracle does not have a public image on Docker Hub. To get started, you’ll need to:

Clone the github repo

git clone git@github.com:oracle/docker-images.git
...
Receiving objects: 100% (5643/5643), 425.77 MiB | 5.41 MiB/s, done.

Wait…what?! 425MB?!

After some sleuthing, it appears they once included the OracleLinux binaries in the git repo and have never purged them. Poor Github. I have a tremendous amount of appreciation for their architects and support engineers. Below are the SHA1 of each blob, the number of bytes of each file, and the path.



git clone git@github.com:oracle/docker-images.git
Cloning into 'docker-images'...
remote: Counting objects: 5643, done.
remote: Compressing objects: 100% (35/35), done.
remote: Total 5643 (delta 12), reused 0 (delta 0), pack-reused 5607
Receiving objects: 100% (5643/5643), 425.77 MiB | 5.41 MiB/s, done.
Resolving deltas: 100% (3164/3164), done.

git:(master) git rev-list --objects --all \
| git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
| awk '/^blob/ {print substr($0,6)}' \
| sort --numeric-sort --key=2 | tail -7

35eda80405d711ae557905633d9f9b8d756afb94 42358832 OracleLinux/7.0/oraclelinux-7.0.tar.xz
e359def3dde981199ea692bbb26c24bd37e6fd68 42765288 OracleLinux/7.1/oraclelinux-7.1.tar.xz
0956d25bcb27f804cfc37f2a519a5cfb35af0955 43951872 OracleLinux/6.8/oraclelinux-6.8-rootfs.tar.xz
6de0b5011f509e53623ab0170fbc72e8bb53b501 43953520 OracleLinux/6.9/oraclelinux-6.9-rootfs.tar.xz
b05b9f4971b6d28330545fadc234eb423815dd59 47275816 OracleLinux/7.2/oraclelinux-7.2-rootfs.tar.xz
9b07a976e61ed2cf3a02173bf8c2d829977f2406 49130232 OracleLinux/7.3/oraclelinux-7.3-rootfs.tar.xz
3b7610a3df4892e9cf4f5d01eb3d55bcd3f2ad54 50369896 OracleLinux/6.7/oraclelinux-6.7-rootfs.tar.xz


Moving right along…

Download the Oracle DB installer from their website

Since Oracle does not allow anyone else to distribute their software, you must go to their site, register (Larry Ellison now has my email), and download. Unfortunately, the login process does not allow me to “wget” the file onto a remote machine, so I had to download locally via browser. I chose “Oracle Database 12c Release 2”.

-rw-r--r--@ 1 thfalgou staff 3.2G Apr 27 10:07 linuxx64_12201_database.zip

Another 3.2GB.

I now have an alternate version of Sir Mix A Lot’s infamous song going in my head: I LIKE BIG BINARIES AND I CANNOT LIE…

Moving right along…

Run their buildDockerImage.sh from the Github Repo

The documentation isn’t explicit about where to store the downloaded zip (in my case, the ‘OracleDatabase/dockerfiles/12.2.0.1’ directory).
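
In other words, something like this (adjust for wherever your browser dropped the file):

# Put the zip next to the Dockerfile for the matching version
mv ~/Downloads/linuxx64_12201_database.zip docker-images/OracleDatabase/dockerfiles/12.2.0.1/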

Now for the moment of truth. From the “OracleDatabase/dockerfiles” directory, run buildDockerImage.sh:


dockerfiles git:(master) time ./buildDockerImage.sh -v 12.2.0.1 -s
...
Building image 'oracle/database:12.2.0.1-se2' ...
Sending build context to Docker daemon 3.454 GB^M^M
Step 1/16 : FROM oraclelinux:7-slim
---> 442ebf722584
...
Pages and pages of output. So much text that my iTerm buffer no longer had the initial command.
...
Oracle Database Docker Image for 'se2' version 12.2.0.1 is ready to be extended:

--> oracle/database:12.2.0.1-se2

Build completed in 658 seconds.

./buildDockerImage.sh -v 12.2.0.1 -s 3.68s user 8.15s system 1% cpu 10:57.49 total


Ten minutes later, the container is finally built. 10 minutes. 10!

Perhaps I’m being overly dramatic; however, the Docker ecosystem carries high expectations, and one of those is rapid development and deployment through small, composable artifacts. Granted, building and deploying a new version of a database is not a common occurrence; however, the process is not conducive to DevOps. That said, this is their first foray into this, so I’m still excited to see the change.

dockerfiles git:(master) docker images
oracle/database 12.2.0.1-se2 f788cd5b4b9d 4 minutes ago 14.8 GB
oraclelinux 7-slim 442ebf722584 6 days ago 114 MB
fedora latest 15895ef0b3b2 7 days ago 231 MB
microsoft/mssql-server-linux latest 7b1c26822d97 7 days ago 1.35 GB
nginx latest 5766334bdaa0 3 weeks ago 183 MB
ubuntu latest 0ef2e08ed3fa 8 weeks ago 130 MB
...

14GB? I take that back.

Start the container

Let’s get the party started…

dockerfiles git:(master) docker run --name oracledb -p 1521:1521 -p 5500:5500 oracle/database:12.2.0.1-se2
ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN:

LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 28-APR-2017 03:21:48

Copyright (c) 1991, 2016, Oracle. All rights reserved.

Starting /opt/oracle/product/12.2.0.1/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.2.0.1.0 - Production
System parameter file is /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/91c68ac2b2bf/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
...
Copying database files
1% complete
...

Huzzah! After about 9 minutes, it’s finally started! Let’s test it!

~ docker exec -ti oracledb sqlplus pdbadmin@ORCLPDB1

SQL*Plus: Release 12.2.0.1.0 Production on Fri Apr 28 03:58:10 2017

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL>

We’re in!!! It worked!

It is at this point that I realize I’ve already gone through 2 drams of Aberlour and should probably stop for the night. Provided there is enough interest (and whiskey), I’ll write up Step 2 of getting this running on Kubernetes in ACS. For now, I should stop while the world is only mildly spinning.

NOTE 1: If the database auto-generates a password with a “/” in it, I’ve found it doesn’t work. You can change it by running (the container name matches the run command above):
docker exec oracledb ./setPassword.sh <new-password>

NOTE 2: If you run this multiple times, make sure to run “docker system prune” as it fills up your disk fast. On my 3rd try, I hit the following error, even with lots of space on my disk.
[FATAL] [DBT-06604] The location specified for 'Fast Recovery Area Location' has insufficient free space.
CAUSE: Only (9,793MB) free space is available on the location (/opt/oracle/oradata/fast_recovery_area/ORCLCDB/).
ACTION: Choose a 'Fast Recovery Area Location' that has enough space (minimum of (12,780MB)) or free up space on the specified location.

NOTE 3: It looks like everyone uses Docker now…

After hearing about it for years, I was fortunate enough to attend DockerCon this time around. Since joining Microsoft as an Open Source Technical Evangelist, 80% of my job is either learning or teaching. This was my first OSS conference since joining Microsoft, and I was eager to share my experiences with others.

I was even more excited to find out that Drew Erny (my Godmother’s grandson) was not only attending but presenting! It was also a chance for me to hobnob with some of the Docker elite and some of the other Microsoft movers and shakers.

I’ve captured all of my conference notes here, but below is my overview of the event, along with some pictures:

Announcements:

  • Running Linux containers natively on Windows – This demo had a hiccup, but it shows some interesting potential
  • Docker Multi-Stage Build – TL;DR: specify multiple FROMs to separate the build environment from the deployed artifact (see the sketch after this list)
  • MobyProject – An open source project to help developers create their own Docker-like container platform. This one was unclear at first, until I read a few more articles on it.
  • LinuxKit – A toolkit for building secure, portable and lean operating systems for containers was open sourced live on stage!
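
To make the multi-stage announcement concrete, here’s a minimal sketch (the Go app and image tags are mine, not from any on-stage demo):

# Stage 1 has the full toolchain; stage 2 ships only the compiled binary
cat > Dockerfile <<'EOF'
FROM golang:1.8 AS build
WORKDIR /src
COPY main.go .
# Static binary so it runs on musl-based alpine
RUN CGO_ENABLED=0 go build -o /bin/app main.go

FROM alpine:3.5
COPY --from=build /bin/app /app
ENTRYPOINT ["/app"]
EOF
docker build -t tiny-app .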

Keynote:

  • Topics ranged from enterprise deployments to enterprise scaling to enterprise security, to “how to convince your enterprise boss” and “Docker Enterprise. Look at how Enterprisey we are and how Dockery other enterprises are”.
  • Day 1’s keynote felt more developer-centric, and Day 2’s felt more enterprise-centric. Afterwards, I also noticed the undertone of “Look how Enterprise Docker is” in not just the keynotes but many of the presentations. Docker is definitely positioning itself to be more respected in the Enterprise world. I get it and completely understand it, but the message was tilted ever so slightly towards that slant.
  • NOTE: There used to be rumors of Microsoft buying Docker. If Microsoft had, and then Docker made the same Enterprise slant, there would be a HUGE backlash. Docker has worked hard to be beloved and it shows.


Pre-event Organization:

  • Since I registered late, I missed a number of the critical emails including an FYI to RSVP to a party that was waitlisted by the time I discovered it. Thankfully, by then I had found my own crew to dine and drink with.
  • The DockerCon app was helpful for detailing the tracks and available sessions and adding them to its calendar. It would have been more helpful if it exported to a personal calendar for reminders, as I got caught up in the Expo hall many times.

Event Organization:

  • As a coordinator of 1000+ person events, I understand exactly how difficult this is. Your best hope is that no one really notices the blood, sweat and tears that go into setting it up. Now that everything is done, I appreciate how good of a job they did.
  • There was more than adequate signage and information about what was happening and where.
  • This is the first convention I’ve been to that included a swing set, which was awesome. Lots of break-out areas, separated by pallets and bean-bag private spaces.


Ecosystem Expo:

  • Microsoft and IBM were the platinum sponsors, and it showed: they were the first two booths you saw when walking in. Outside of that, there were plenty of vendors eager to talk and lots of great swag. Drones were the most popular prize, but sadly the luck of the Coonass wasn’t with me.
  • Lots of great vendors. I got to pick the brains of talented teams at AWS, Rancher, Yippie.io, Redhat, Docker, Aqua, RedisLabs, 1&1, Citrix, the Cloud Native Computing Foundation, and Oracle (yes, that Oracle; they offer Oracle server in containers now!)

Presentations:
Lots of great presentations and speakers.

  • “Creating Effective Images” was the top-rated session and was thankfully repeated, since I missed it the first time. I highly recommend watching it when it becomes available online.
  • Docker Swarm Deep Dive – Drew Erny did a great job of headlining this talk, with demos from some of his compatriots. I saw how Docker bakes security into everything they do, which will make all of our lives easier. I have been focused on Kubernetes, but the new announcements for Docker Swarm have me really excited, especially around how they handle Secrets, image security, the software supply-chain lifecycle and desktop deployments.

Here are some great quotes I overheard:

  • “I only use microservices to effectively hide the root cause of any problem I create”
  • “Whatever layer you’re at, the layer below you is just magic”
  • “To quote WuTang: Cache rules everything around me.”
  • “Bro, do you even Load Balance?”
  • “Complaint Driven Development”
  • “According to metrics, you don’t have metrics”
  • This love poem


Prior to DockerCon, I had really hoped to meet a few more Microsoft’ers and some Docker’ers(?), but I got swept up in the community and its common goal of deploying software better, faster, stronger. I can’t wait till next year.

P.S. If you are interested in toying around with Docker, check out http://training.play-with-docker.com/. It’s a great walkthrough without the need to install anything (browser-based development!)