Category Archives: Work Related Stuff

GitOps for Presentations

Yes, I work for Microsoft. No, I do not like PowerPoint. Here’s my alternative with the source code which I’ll explain here.

For 20+ years I’ve done UNIX/Linux development and have worked at Microsoft for 6 years. And I’ve learned that Microsoft will typically build the all-encompassing Enterprise-ready solution and the OSS ecosystem will build a narrow-focused tool that you can piece together with others.

Each has its own benefits and constraints. There is No Silver Bullet.

A common set of requirements I encounter are:

  • I need to easily present to a public audience
  • I might have to use someone else’s computer
  • I want to share the slides afterwards
  • I need to quickly update the slides
  • I just want to display text and images. (PowerPoint is an absurdly impressive tool with lots of features that I rarely use.)

Internal Microsoft SharePoint policy prevents sharing slides with external visitors. This often results in emailing 10-100MB PPTs or PDF files around. Blah!

Piecing together bits of OSS, I present to you “GitOps for Presentations”. It involves:

  • Git + GitHub – Version control of content
  • Markdown – Easy styling of content
  • MARP – Converts CommonMark to HTML, PDF, PPT
  • VSCode – Edit the content (There’s even a MARP extension which allows you to preview in real-time!)
  • GitHub Actions – Build the presentation from Markdown
  • GitHub Pages – Host the presentation
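As a sketch of the conversion step, here's what building the same artifacts by hand might look like. This assumes the MARP CLI (@marp-team/marp-cli) is installed locally, and `deck.md` is a hypothetical filename — substitute your own:

```shell
# Hypothetical local build with the Marp CLI (assumes it is installed,
# e.g. via: npm install -g @marp-team/marp-cli)

# Live preview while editing (serves the current directory)
marp --server .

# Build the artifacts that the GitHub Actions workflow would publish
marp deck.md -o index.html   # HTML slides for GitHub Pages
marp deck.md --pdf           # PDF for sharing
marp deck.md --pptx          # PowerPoint, if you must
```

The GitHub Actions workflow in the template runs essentially these same commands on every push, which is what makes the Git repo the source of truth for the deck.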

Benefits:

  • Content is plain text under version control, so updates are quick and history is free
  • The slides are just a URL on GitHub Pages, viewable from any browser (even someone else’s computer) and easy to share afterwards

Limitations:

  • MARP’s formatting is basic, especially if you’re coming from PowerPoint

That’s cool, but why didn’t you …

  • use Remark or Reveal.js?
    There are many great presentation frameworks, but I wanted something really simple. KISS. You should be able to replace MARP with any of those other frameworks and still get the same results.
  • just present your PPT and email it?
    That requires work and time. At conferences, I don’t have time and might forget to follow up with everyone. Instead, I create a QR code and put it at the end of the slides. This enables self-service discovery and also saves me precious keystrokes.
  • use slides.com or Google Slides?
    Microsoft has embraced OSS and purchased GitHub, so I wanted to find a way to explore integrating all of this. I’ve been very happy with the results!

I’m sold! How do I get started?

  • I’ve made it easy for anyone to get started by creating a GitHub template for this project (which is also a presentation)
  • Click “Use this template” and create a new repository
  • Enable GitHub Actions to auto-publish to GitHub Pages:
    • In your new repo, click Settings -> Pages
    • Set Source to GitHub Actions
  • You’re done!

PEDANTIC DISCLAIMER:

  • I’m quite familiar with GitOps, and while this is outside of running Kubernetes clusters as IaC, there are some similarities with the top-level concept of using Git to declare the desired state of my presentation.
  • MARP technically uses CommonMark, but it’s close enough for what most people will need.

Managing Azure Subscription Quota and Throttling Issues

As Azure customers and partners build bigger and more complex solutions in their subscriptions, they might hit quota and throttling issues. These can be irksome and cause confusion. This article will walk through some of the scenarios I’ve seen and how to design with them in mind.

Let’s make sure we’re on the same page regarding terminology used in this article:

Managing Quotas

Because quotas are mostly static, viewing your quotas is pretty simple. Simply go to the Azure Portal and click on “My quotas”.

If you need to increase your quota, you might need to open an Azure Support ticket. For example, if you need to start deploying in a new region, you might need to open a ticket to increase the “Total Regional vCPUs” and “VMSS” quotas in “West Central US”. Once the ticket has been approved, the quota will be available to you.

Managing Throttling

For the most part, you won’t need to worry about throttling, but if you’re doing very large scale deployments with LOTS of constant churning of resources, you might hit throttling limits.

These limits are less about the number of resources and more about HOW you use them. For example:

  • You can have 5000 AKS clusters in one subscription, and each AKS cluster can have a maximum of 100 node pools. If you try creating the max # of AKS clusters with the max # of node pools simultaneously, you’ll definitely hit the throttling limit.
  • Some OSS projects aggressively call ARM and the RP API’s in a reconciliation loop. Multiple instances of these projects will also hit the throttling limit.

Since throttling is specific to the current time window, it can be trickier. There’s no “hard formula” for when you’ll hit a threshold. But when you do, you’ll probably start seeing 429 HTTP status responses.

Throttling Examples

Thankfully, you can get insights into your current throttling status by looking at response headers for the requests.

  • x-ms-ratelimit-remaining-subscription-reads – # of read operations to this subscription remaining
  • x-ms-ratelimit-remaining-subscription-writes – # of write operations to this subscription remaining
  • x-ms-ratelimit-remaining-resource – Compute RP specific header, which could show multiple policy statuses. (see “Example: GET a VMSS (Read Request)” below for details)
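Since multiple policy statuses can be packed into a single x-ms-ratelimit-remaining-resource value, a tiny shell sketch can split it into something readable. The sample header value below is modeled on the VMSS example later in this article:

```shell
# Hypothetical sample value of the x-ms-ratelimit-remaining-resource header,
# modeled on the VMSS example in this article
HEADER='Microsoft.Compute/GetVMScaleSet3Min;197,Microsoft.Compute/GetVMScaleSet30Min;1297'

# Split the comma-separated list into "policy;remaining" pairs,
# then print each policy with its remaining request count
echo "$HEADER" | tr ',' '\n' | while IFS=';' read -r policy remaining; do
  echo "$policy: $remaining requests remaining"
done
```

In practice you would capture the header from the `az ... --debug` output (as shown in the examples below) instead of hard-coding it.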

Let’s dig into this deeper using the Azure CLI.

Example: Create a Resource Group (Write Request)

Because this request creates a RG, it will count against our subscription writes:

$ az group create -n $RG --location $LOCATION --verbose --debug 2>&1 | grep 'x-ms'

DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-client-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-ratelimit-remaining-subscription-writes': '1199'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-correlation-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-routing-request-id': 'SOUTHCENTRALUS:20230512T163152Z:<guid>'

NOTE: The key point is that x-ms-ratelimit-remaining-subscription-writes is now 1199 (instead of the standard 1200 per hour, as per the Subscription and Tenant limits)

Example: GET a VMSS (Read Request)

This request performs a GET (read) request on an existing VMSS. This is similar to the write request for the RG, but since Compute RP also has a separate set of throttling policies, it also counts against the Compute RP limits.

$ az vmss show -n $VMSS_NAME -g $RG --debug 2>&1 | grep x-ms
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-client-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-ratelimit-remaining-resource': 'Microsoft.Compute/GetVMScaleSet3Min;197,Microsoft.Compute/GetVMScaleSet30Min;1297'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-ratelimit-remaining-subscription-reads': '11999'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-correlation-request-id': '<guid>'
DEBUG: cli.azure.cli.core.sdk.policies: 'x-ms-routing-request-id': 'SOUTHCENTRALUS:20230512T162738Z:<guid>'

NOTE: The key point is how x-ms-ratelimit-remaining-resource has two key-value pairs:

  • Microsoft.Compute/GetVMScaleSet3Min;197 – I ran this command before, so I have 197 requests available in the 3 minute window for performing GET requests on the VMSS resource
  • Microsoft.Compute/GetVMScaleSet30Min;1297 – I now have 1297 requests available in the 30 minute window for performing GET requests on VMSS resources

NOTE: x-ms-ratelimit-remaining-subscription-reads doesn’t seem to decrease (11999), even if I run the same command again. I haven’t figured that out yet.

Designing with quotas and throttling in mind

Most Azure deployments won’t need this type of fine tuning, but just in case, there are some documented Throttling Best Practices as well as my personal pro-tips:

  • Use the Azure SDK, as many services have the recommended retry guidance built-in
  • Instead of creating and deleting VMSS (which consume multiple VMSS API requests), scale the VMSS to 0 (which only consumes 1 VMSS API request)
  • Any type of Kubernetes cluster auto-scaler will perform a reconciliation loop with Azure Compute RP. This could eat into your throttling limits
  • Use the Azure Quota Service API to programmatically request quota increases
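When you do hit a 429, the simplest mitigation is to retry with exponential backoff. Here is a minimal shell sketch of the idea; the function name and the mock command are hypothetical, and a real client should also honor the Retry-After header returned with the 429 (the Azure SDKs do this for you):

```shell
# Sketch: retry a command with exponential backoff.
# Usage: retry_with_backoff MAX_ATTEMPTS COMMAND [ARGS...]
retry_with_backoff() {
  local max_attempts=$1; shift
  local attempt=1
  local delay=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed; backing off ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))      # double the wait each time
    attempt=$((attempt + 1))
  done
  return 0
}

# Mock "throttled" command: fails twice (as if rate-limited), then succeeds
tries=0
mock_throttled_call() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

retry_with_backoff 5 mock_throttled_call && echo "call succeeded on attempt $tries"
```

Swap the mock for your real `az` call (or better, let the SDK's built-in retry policy handle it).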

If you’re unable to work around the throttling limits, then the next step is to look at the Deployment Stamp pattern using multiple subscriptions. You can programmatically create subscriptions using Subscription vending.

Hopefully this article has helped you understand quota limits and throttling limits in Azure, and how to work around them. Let me know if you have any questions and/or feedback and I can follow up with additional details.

AKS + Private Link Service + Private Endpoint

This walkthrough shows how to set up a Private Link Service with an AKS cluster and create a Private Endpoint in a separate Vnet.

While many tutorials might give you a full ARM template, this is designed as a walkthrough done entirely with the CLI, so you can understand what’s happening at every step of the process.

It focuses on an “uninteresting” workload and uses podinfo as the sample app. This is because it’s easy to deploy and customize with a sample Helm chart.

This is inspired by, and leans heavily on, the Azure Docs for creating a Private Link Service.

Architecture


Private Link Endpoint Service

Prerequisites

Assumptions

This walkthrough assumes you let Azure create the Vnet when creating the AKS cluster. If you manually created the Vnet, then the general steps are the same, except you must enter the AKS_MC_VNET, AKS_MC_SUBNET env vars manually.

Setup Steps

First, create a sample AKS cluster and install Podinfo on it.

# Set these values
AKS_NAME=
AKS_RG=
LOCATION=

# Create the AKS cluster
az aks create -n $AKS_NAME -g $AKS_RG

# Get the MC Resource Group
AKS_MC_RG=$(az aks show -n $AKS_NAME -g $AKS_RG | jq -r '.nodeResourceGroup')
echo $AKS_MC_RG

# Get the Vnet Name
AKS_MC_VNET=$(az network vnet list -g $AKS_MC_RG | jq -r '.[0].name')
echo $AKS_MC_VNET

AKS_MC_SUBNET=$(az network vnet subnet list -g $AKS_MC_RG --vnet-name $AKS_MC_VNET | jq -r '.[0].name')
echo $AKS_MC_SUBNET

AKS_MC_LB_INTERNAL=kubernetes-internal

AKS_MC_LB_INTERNAL_FE_CONFIG=$(az network lb rule list -g $AKS_MC_RG --lb-name=$AKS_MC_LB_INTERNAL | jq -r '.[0].frontendIpConfiguration.id')
echo $AKS_MC_LB_INTERNAL_FE_CONFIG

# Deploy a sample app using an Internal LB
# (assumes the podinfo chart repo has been added:
#  helm repo add podinfo https://stefanprodan.github.io/podinfo)
helm upgrade --install --wait podinfo-internal-lb \
--set-string service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"=true \
--set service.type=LoadBalancer \
--set ui.message=podinfo-internal-lb \
podinfo/podinfo

Install Steps – Create the Private Link Service

These steps will be done in the MC_ resource group.

# Disable the private link service network policies
az network vnet subnet update \
--name $AKS_MC_SUBNET \
--resource-group $AKS_MC_RG \
--vnet-name $AKS_MC_VNET \
--disable-private-link-service-network-policies true

# Create the PLS
PLS_NAME=aks-pls
az network private-link-service create \
--resource-group $AKS_MC_RG \
--name $PLS_NAME \
--vnet-name $AKS_MC_VNET \
--subnet $AKS_MC_SUBNET \
--lb-name $AKS_MC_LB_INTERNAL \
--lb-frontend-ip-configs $AKS_MC_LB_INTERNAL_FE_CONFIG

# Capture the PLS resource ID (needed below when creating the Private Endpoint)
PLS_ID=$(az network private-link-service show \
--name $PLS_NAME \
--resource-group $AKS_MC_RG \
--query id \
--output tsv)
echo $PLS_ID

Install Steps – Create the Private Endpoint

These steps will be done in our private-endpoint-rg resource group.

PE_RG=private-endpoint-rg
az group create \
--name $PE_RG \
--location $LOCATION

PE_VNET=pe-vnet
PE_SUBNET=pe-subnet

az network vnet create \
--resource-group $PE_RG \
--name $PE_VNET \
--address-prefixes 10.0.0.0/16 \
--subnet-name $PE_SUBNET \
--subnet-prefixes 10.0.0.0/24

# Disable the private link service network policies
az network vnet subnet update \
--name $PE_SUBNET \
--resource-group $PE_RG \
--vnet-name $PE_VNET \
--disable-private-endpoint-network-policies true

PE_CONN_NAME=pe-conn
PE_NAME=pe
az network private-endpoint create \
--connection-name $PE_CONN_NAME \
--name $PE_NAME \
--private-connection-resource-id $PLS_ID \
--resource-group $PE_RG \
--subnet $PE_SUBNET \
--manual-request false \
--vnet-name $PE_VNET

# We need the NIC ID to get the newly created Private IP
PE_NIC_ID=$(az network private-endpoint show -g $PE_RG --name $PE_NAME -o json | jq -r '.networkInterfaces[0].id')
echo $PE_NIC_ID

# Get the Private IP from the NIC
PE_IP=$(az network nic show --ids $PE_NIC_ID -o json | jq -r '.ipConfigurations[0].privateIpAddress')
echo $PE_IP

Validation Steps – Create a VM

Lastly, validate that this works by creating a VM in the Vnet with the Private Endpoint.

VM_NAME=ubuntu
az vm create \
--resource-group $PE_RG \
--name ubuntu \
--image UbuntuLTS \
--public-ip-sku Standard \
--vnet-name $PE_VNET \
--subnet $PE_SUBNET \
--admin-username $USER \
--ssh-key-values ~/.ssh/id_rsa.pub

VM_PIP=$(az vm list-ip-addresses -g $PE_RG -n $VM_NAME | jq -r '.[0].virtualMachine.network.publicIpAddresses[0].ipAddress')
echo $VM_PIP

# SSH into the host
ssh $VM_PIP

$ curl COPY_THE_VALUE_FROM_PE_IP:9898

# The output should look like:
$ curl 10.0.0.5:9898
{
"hostname": "podinfo-6ff68cbf88-cxcvv",
"version": "6.0.3",
"revision": "",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "podinfo-internal-lb",
"goos": "linux",
"goarch": "amd64",
"runtime": "go1.16.9",
"num_goroutine": "9",
"num_cpu": "2"
}

Multiple PLS/PE

To test a specific use case, I wanted to create multiple PLSs and PEs. This set of instructions lets you easily loop through and create multiple instances.

# podinfo requires a high numbered port, eg 9000+

SUFFIX=9000
helm upgrade --install --wait podinfo-$SUFFIX \
--set-string service.annotations."service\.beta\.kubernetes\.io\/azure-load-balancer-internal"=true \
--set service.type=LoadBalancer \
--set service.httpPort=$SUFFIX \
--set service.externalPort=$SUFFIX \
--set ui.message=podinfo-$SUFFIX \
podinfo/podinfo

# This might be easier to hard-code
AKS_MC_LB_INTERNAL_FE_CONFIG=$(az network lb rule list -g $AKS_MC_RG --lb-name=$AKS_MC_LB_INTERNAL -o json | jq -r ".[] | select( .backendPort == $SUFFIX) | .frontendIpConfiguration.id")
echo $AKS_MC_LB_INTERNAL_FE_CONFIG

PLS_NAME=aks-pls-$SUFFIX
PE_CONN_NAME=pe-conn-$SUFFIX
PE_NAME=pe-$SUFFIX

az network private-link-service create \
--resource-group $AKS_MC_RG \
--name $PLS_NAME \
--vnet-name $AKS_MC_VNET \
--subnet $AKS_MC_SUBNET \
--lb-name $AKS_MC_LB_INTERNAL \
--lb-frontend-ip-configs $AKS_MC_LB_INTERNAL_FE_CONFIG

PLS_ID=$(az network private-link-service show \
--name $PLS_NAME \
--resource-group $AKS_MC_RG \
--query id \
--output tsv)
echo $PLS_ID

az network private-endpoint create \
--connection-name $PE_CONN_NAME \
--name $PE_NAME \
--private-connection-resource-id $PLS_ID \
--resource-group $PE_RG \
--subnet $PE_SUBNET \
--manual-request false \
--vnet-name $PE_VNET

PE_NIC_ID=$(az network private-endpoint show -g $PE_RG --name $PE_NAME -o json | jq -r '.networkInterfaces[0].id')
echo $PE_NIC_ID

PE_IP=$(az network nic show --ids $PE_NIC_ID -o json | jq -r '.ipConfigurations[0].privateIpAddress')
echo $PE_IP

echo "From your Private Endpoint VM run: curl $PE_IP:$SUFFIX"

I created this article to help myself (and hopefully you!) clearly understand all of the resources and how they interact to create a Private Link Service and Private Endpoint fronting a private service inside an AKS cluster. This has been highly enlightening for me, and I hope it has been for you too.

Urinal Noise

Coming from a guy who has a national reputation for licking people, one can imagine that it would take something fierce to really gross me out.

I also live pretty much an open-book life, but there are some times that are sacred to me. Namely, when I’m relieving my bodily needs. I think those are times that should be spent behind closed doors, with no one else observing or even being aware. (I’ll admit that I’ve done my business while on the phone before, but it’s one of those things I feel really guilty about, and I made sure I was on mute during those un/comforting times. So, that makes it ok.)

Therefore, I was surprised when I walked into the work bathroom to find someone giving directions while operating “hands-free”. I then took it upon myself to ensure that his companion was quite aware of his social faux pas (pis?). No hitting the back wall, I was aiming for the water, baby. I was somehow able to fill the entire bathroom with that famous sound and could tell that I made my urinal-neighbor quite shifty, as he obviously was trying to quicken his own process. But these are things you really can’t rush. No really.

When he finally left (I forget if he washed afterwards), I felt somewhat guilty, but even more embarrassed when I noticed I was laughing all by myself at a urinal. Great story for the next guy.

— Snoopykiss wants a mini. Cooper that is.

When w00t! goes wrong

One of my small pleasures in life is the art of subtle humor. My friend Matt Mussleman is the supreme king of such humor. I, on the other hand, tend to keep it to myself, and mine is many times too obscure or bizarre. For example, one of the servers I created at work is named Tatu, and one of the tools I wrote generates logs named TPS reports. I bond instantly with anyone who picks up on it.

I’ve decided to name my latest server “w00t”, my new desktop will be “yarr”, and I’ve already got a “pago”. Unfortunately, as I write this, I’m at work fighting with Yarr and w00t, thus producing a bunch of other four-letter exclamations. Thankfully, most everyone is either gone or doesn’t speak English and is using a vacuum cleaner.

On a side note, the last time I really heard anything from the Aforementioned Brooke was right after she posted a comment on my site. Maybe I’ll hear from her again. 🙂

5 Years already?!

Didn’t I move to Dallas just a few weeks ago?
Didn’t I start working for Nortel the day after that?
Didn’t I just buy my house?!
Then what the heck is this email saying that I’m getting a gift for my 5 years of employment?!!!


Hopefully, that doesn’t mean that I’m getting old.  Because that would probably require me settling down, getting married, having little rugrats, and being “responsible.”  I’ve successfully avoided all of that so far, and I plan on continuing to do so.


I plan on having a small taste of this whole “family” thing coming soon, as my house will soon have its “maximum family stability factor” tested when Melissa stays here for a few days before heading back to Maryland, and my mom and dad come to visit and bring my two nephews, Charles and Evan. Yes, it will be good to see all of them; however, I’m not sure how high my “family tolerance” meter goes. I bet that it’ll be pushed to 11.


The Utah Swing Exchange is getting more and more on my mind as the time nears. Yea! I used to think that I would get some of my best new pics at Utah, but I’m starting to think that the prize winners can be the expressions I get when I tell people that I’m going to Utah to dance.


“You’re going to Utah…to dance?! They have music over there?”
“You mean, you don’t know these people and you’re going to stay with them?”


I should have my pics from my second trip to the Scar. Renaissance Faire up soon. Word to the wise: don’t go when it’s raining. Blah.

Lindy Gras a cometh

For those still not aware of my new habit of going to Lindy Exchanges, get used to me mentioning them. I should be hitting quite a few this year. The latest one, which I am preparing myself for tonight, is Lindy Gras, hosted in one of my old neighborhoods of New Orleans.


Most of the dances will be right by my alma mater, Tulane. I’m really looking forward to it. Although the drive (yes, I’m driving there instead of flying) will be quite an 8-hour hike, which I’m not looking forward to. Thankfully, I’ve got Greg to keep me busy. Plus, I just found all of my missing MP3 CDs! Yea! Each CD is about 11 hours of music, so we’ll never run out of music. Yarrr!


I’m also planning another trip down to New Orleans in two weekends for the classic celebration of Mardi Gras.  This time, George, Lee, Michael and I are planning to make the journey.  I’m not sure how that will turn out since George is our more “pure” friend joining us.  I wonder how his brain will handle the massive visual breast intake.  Will it fry his brain?  Or will Pat O’s do it first?


Planning a trip to New Orleans always concerns me.  Living there for 4 years and hearing about your friends getting mugged, I’m always the cautious one, watching my step.  I hope that this will be a safe time for me and my fellow Lindy Hoppers.  May God keep us safe.  Especially through this time of international tension.  I hope that no crazy guy decides to bomb New Orleans while we’re there.  Pray for a safe trip for me.


Dancing like there’s no tomorrow,
Moi.


P.S. I also have the pictures from Matt Weyandt’s 80’s B-day party up. It was a total blast, dude. Complete with a Rubik’s Cube cake and almost-working Ataris.

Those Special Words that Mean So Much

In this society of uncertainty and insecurity about one’s future, there are always those little things that managers can say that make you realize they’re not going to lay you off anytime soon. Today, I heard those special words…

Working in Nortel’s 3rd Generation Wireless Department (3G/UMTS/Wireless Data), I’m developing a script which will be able to configure their servers automatically in a matter of minutes. You see, this is a big deal, because many times it takes people hours, if not days, to configure their switch properly. So, needless to say, this is a BIG deal. And I’m the brains behind the operation.

Anywho, I’m doing all of my development on this separate workstation I have in my cube. And everything pretty much resides on it. I was talking to my manager today about future roles for me and he said that he wanted documentation of everything, because “If you were to die tomorrow, we’d be screwed.” (If you haven’t caught on by now, those are the Special Words I was referring to earlier.)

Some people might think that’s morbid. Some people might think that means I’ve got more work to do. Some people might get a power trip from that…but me…I think that’s AWESOME! (Ok, maybe I enjoy the power trip a tad, but that’s ok.) You see, I have now proven my value to the company, and future productivity is now vested in my staying with Nortel. Now yes, I know that once I’m done with this project I can be canned, but I’ve also got a few ideas up my sleeve that I can present to management for future job security. 🙂

Feeling good about myself and looking forward to the Studio 54 party on Friday,
Me.