Scaling the UI in Thunderbird, including messages list et al.

I use a lot of software on a lot of devices. Sometimes the defaults just don't work for me. Thunderbird is a great mail application, but on Ubuntu with a high-DPI screen, the messages list was tiny.

There is an advanced setting that allows you to scale the UI, but it was difficult to find out how to actually change it. This old support forum post explains how to do it in an older version, and this Super User post gets you close.

The bottom line is that you need to get to the Configuration Editor to edit

layout.css.devPixelsPerPx

The default value was -1.0, but it's a scaling setting that you can dial in to get the exact size you want. For my situation, 2.25 seems to work well, and the change is immediate, so you can quickly get a sense of what's going to work for you.

Upgrading this site from Nuxt2 to Nuxt3 and Content Module v2

I often use this site to play with new technology, and as such, it goes through a lot of technical changes. When the site was originally upgraded to Nuxt2, that version had already been out for a while and Nuxt3 was in beta. So I knew this upgrade was coming. Working on some other projects, I realized I needed a better handle on Nuxt3 and decided to jump in.

Using Content v2

Content v2 comes with a bunch of quality-of-life improvements. The ability to write Vue components that can be used in markdown, with parameters, is incredible. I'm using that in a few projects, and hope to leverage it on this site too.
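For anyone who hasn't seen it, the MDC syntax lets a markdown file drop in a Vue component from components/content/ and pass it props. A minimal sketch, with a made-up component name:

::project-card{title="FMData" url="https://fmdata.io"}
A short description rendered into the component's default slot.
::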

I had a hard time getting things to work, because I didn't read the docs. I started with documentDriven mode enabled to generate sitemap.xml. Since I was porting my Nuxt2/Content1 site, things weren't working. Running the site with npm run dev, things seemed fine; however, npm run generate would fail with 404 errors on some content:

Errors prerendering:
  ├─ /api/_content/query/brX4CwCJoQ.1710967419806.json (13ms)
  │ ├── Error: [404] Document not found!
  │ └── Linked from /
  ├─ /api/_content/query/wxmlyJ2dlX.1710967419806.json (13ms)
  │ ├── Error: [404] Document not found!
  │ └── Linked from /blog

It turns out that the default route that gets added with documentDriven: true was conflicting with my [...slug].vue file in a way that didn't totally break things, but didn't totally work either.

I ended up fixing that by backing out of documentDriven mode and updating my catch-all slug page:

<script setup lang="ts">
const { path } = useRoute();
const { data: article } = await useAsyncData(`catchall-${path}`, () => {
  return queryContent().where({ _path: { $regex: path } }).findOne();
});
</script>
<template>
  <blog-details v-if="article" :article="article"/>
  <ContentDoc v-else />
</template>

This allows me to use my existing blog-details component to render my articles, but also lets me fall back to the <ContentDoc /> renderer if needed. The astute observer will see that right now all routes go through blog-details; at a future date, if/when I want a different treatment, I will update the query so it only applies to routes starting with /blog/ and use a different component for other paths.
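When that day comes, the catch-all will probably end up looking something like this sketch (not what is running today), keeping <ContentDoc /> as the renderer for anything outside /blog/:

<script setup lang="ts">
const { path } = useRoute();
// only paths under /blog/ get the blog-details treatment; everything else falls through to ContentDoc
const isBlogPost = path.startsWith('/blog/');
const { data: article } = await useAsyncData(`catchall-${path}`, () => {
  return isBlogPost
    ? queryContent().where({ _path: { $regex: path } }).findOne()
    : Promise.resolve(null);
});
</script>
<template>
  <blog-details v-if="article" :article="article" />
  <ContentDoc v-else />
</template>

A separate component for non-blog paths could then replace the <ContentDoc /> fallback.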

This breaks the @nuxtjs/sitemap module, however; more on that below.

Handling Images

The site is using both @nuxt/image and nuxt-content-assets, which lets me store my images right alongside my *.md files under /content/. The docs explain how to make it all work, but here's a brief snip of the relevant configuration I needed:

// nuxt.config.ts
modules: [
  'nuxt-content-assets',
  '@nuxt/content',
  '@nuxt/image',
  '@nuxtjs/sitemap'
],
//...
image: {
  dir: '.nuxt/content-assets/public'
}

The nuxt-content-assets package requires this component to be added to work with @nuxt/image:

// /components/content/ProseImg.vue
<template>
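  <!-- no props declared here, so src/alt from the markdown image fall through to nuxt-img as attributes (Vue attribute fallthrough) -->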
  <nuxt-img />
</template>

Sitemap.xml

With Nuxt Content's documentDriven: false, the sitemap doesn't pick up any content, and the mechanism for automatically generating urls is a bit different in v5+ of the sitemap module (for Nuxt3). Putting that kind of code into nuxt.config.ts is an anti-pattern, so support for it was dropped.

This requires setting up a server route, which can then be referenced in nuxt.config.ts as the source for the sitemap data.

// /server/api/sitemap.ts

import { serverQueryContent } from '#content/server';

export default defineSitemapEventHandler(async (e) => {
  const routes = [];

  const posts = await serverQueryContent(e).find();

  routes.push({
    url: '/',
    changefreq: 'weekly',
    priority: 0.5
  });

  routes.push({
    url: '/blog',
    changefreq: 'weekly',
    priority: 0.75
  });

  posts.map((p) => {
    routes.push({
      loc: p._path,
      lastmod: p.lastMod ? p.lastMod : p.date,
    });
  });

  return routes;
});

With this change to nuxt.config.ts to make it link up:

sitemap: {
  sources: ['/api/sitemap'],
},

Tests that worked fine on Windows are failing with an IOException on Ubuntu.

I have a set of integration tests that have been working great on Windows for quite some time. While troubleshooting an unrelated issue I was running my tests on Ubuntu 20.04 LTS via WSL, and about half of the tests were failing with this IOException.

The Error Message

System.IO.IOException : The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.

It seems that, due to the way my tests use WebApplicationFactory<Startup> and call factory.WithWebHostBuilder(builder => ..., I'm creating too many instances of the HostBuilder, each of which by default installs a file system watcher on the appsettings.json file. All those instances tap out the default number of inotify instances allowed. Switching to a polling mechanism clears this up.

The Solution I Found

Configure the environment variable to have dotnet use polling instead.

export DOTNET_USE_POLLING_FILE_WATCHER=true

I don't know if there are unintended side effects of this change; however, since it only applies to test runs, I feel it's a safe change to make.

HTTP Status Codes Explained for Most Folks

There are lots of jokes about HTTP status codes, what they mean, and general frustration with the inconsistent way in which they're used.

HTTP 1xx

  • 100: Continue sending me the request.

HTTP 2xx

  • 200: OK, here you go.
  • 201: OK, I created it for you.

HTTP 3xx

  • 301: Moved what you want to a new spot over here.
  • 302: Found what you're looking for over here.

HTTP 4xx

  • 400: Your bad.
  • 401: You aren't logged in.
  • 403: You're logged in, but can't do/see that.

HTTP 5xx: Our bad

  • 500: We screwed up.
  • 503: We forgot to start the server.

There are many more status codes, and as a developer I encourage other developers to use appropriate and specific response codes. For more specific details, Wikipedia has a good overview.

Implied route parameters in ASP.NET Core Form Tag Helpers

With a route like /change/{id}, the id parameter is implicitly included if the view contains a form with method="post" back to the same action name, for example:

View:

<form asp-action="change" method="post">

Controller:

[HttpPost]
public ActionResult Change(int id, Model model)

When changing the form action to include a change confirmation step:

<form asp-action="confirmChange" method="post">

The id route parameter was no longer included by default, so the controller action was receiving a default(int):

[HttpPost]
public ActionResult ConfirmChange(int id, Model model)

ASP.NET Core seems to include the current route parameters only when posting back to the same action name.

The fix was fairly simple: explicitly include the route parameters on the <form> tag helper when posting across action names.

<form asp-action="confirmChange" asp-route-id="@Model.SomeId" method="post">

See also: Stack Overflow on updating route parameters.

Errors installing SSH Server on domain joined Windows 10 laptop using WSUS

While attempting to set up SSH access to a Windows 10 machine, following this guide: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse, I kept running into a generic Windows install failed error. It became apparent that the issue was that the system was failing to download the optional Windows feature.

Turns out that for some reason the WSUS server that the machine was connected to didn't have that optional feature, so a local gpedit was necessary to configure the machine to directly download optional features from Windows Update.

Windows Group Policy Editor change

After checking the box here, it was as simple as re-running the install from the Settings app.

Going without a Framework or a Theme

One of my side projects is https://time2temp.com -- a simple site for checking BBQ times and temperatures. When I originally put the site up, I grabbed an html5up theme, slapped in some inline Vue templates, and added a bit of javascript to load it up.

It worked, but it was HEAVY: 20 network requests over 500kb!

20 requests at 500kb+

To serve a single "page" that essentially has a list, that was wildly overkill. How can I make it simpler and faster?

Drop the framework, and more importantly, drop the bloated CSS theme that included tons of features not used in my simple one page site.

11 requests at 300kb

Finally, trimming down the bloated theme and pulling out only the few parts I was actually using resulted in a much slimmer, and by extension faster site that is cheaper to operate:

5 requests at less than 20kb

I also converted the logo from a fancy css-based thing to a purely static png that is only 6kb. That was a huge savings in terms of data transfer.

Homemade Coldbrew Coffee The Easy Way

I love coffee, and cold coffee is my favorite. I often get asked about my cold brew process, so I'm writing it down to share with everyone.

Equipment

You can use any equipment you like, but here's what you will need.

  • Two containers (roughly equal size), each large enough to hold your cold brew.
  • Sieve / Coffee filters.

I use a pair of large mason jars, because they have measurements on the side, which is helpful for the process below.

Process

Preparing a batch

I start with a 3:1 ratio of water to grounds.

  1. I put the grounds into my mason jar, filling to about the 200ml line.
  2. Fill to 800ml line with cold water.
  3. Shake or stir the mixture up to ensure all grounds are wet.
  4. Throw it in the fridge for 18-20 hours.

It doesn't hurt to stir the batch once or twice while it's in the fridge cold brewing; just make sure not to do it right before you strain it, as that'll make it more difficult to get the grounds out.

Handling a steeped/brewed batch

Strain the grounds out. I use my sieve to pour between my two jars a couple of times, to catch as many grounds as possible.

Once you have strained to your satisfaction, you now have cold brew concentrate!

Results

The above process makes a nice concentrate. I recommend diluting to roughly a 60/40 mix of concentrate to additional cold water, though you will want to zero in on the exact amount of water that suits your taste.

Adding a Sitemap.xml file for Nuxt Content

Nuxt has a built-in sitemap generator plugin, which is great! The content plugin also has a nice section on integrating with the sitemap plugin. Those were really useful starting points for me, but I had a few special requirements.

  • specify lastmod appropriately
  • specify priority to posts over lists
  • customize changefreq for some of the built-in mapped pages

I ended up with the following nuxt.config.js section to accommodate my use:

sitemap: {
  hostname: 'https://www.brosstribe.com',
  gzip: true,
  routes: async () => {
    const routes = [];

    const { $content } = require('@nuxt/content');
    const files = await $content({ deep: true })
      .only(['path', 'lastMod', 'date'])
      .fetch();

    routes.push({
      url: '/',
      changefreq: 'weekly',
      priority: 0.5
    });

    routes.push({
      url: '/blog',
      changefreq: 'weekly',
      priority: 0.75
    });

    for (const file of files) {
      routes.push({
        url: file.path === '/index' ? '/' : file.path,
        changefreq: 'yearly',
        priority: 1,
        lastmod: file.lastMod || file.date
      });
    }

    return routes;
  }
}

This allows me to specify that my posts don't change often, but since I do go in and fix typos or make updates occasionally, I don't want them flagged as never changing. I also wanted them to take priority over my home and blog list pages, so I set the posts' priority to 1 and gave the root page and the /blog page lower priority relative to them.

The other thing I wanted to include was my lastmod flag, which I track in Nuxt Content using front matter.

---
date: 2022-03-04
lastMod: 2022-03-04
---

Note that I had to update my Content query to include those fields, and then map them into the route entries:

const files = await $content({ deep: true })
      .only(['path', 'lastMod', 'date'])
      .fetch();
///
{
  lastmod: file.lastMod || file.date
}

This says: use the last modification date if present, otherwise the created date.

Overall this function is fairly bloated, but given that nuxt.config.js is essentially a dumping ground for all of the various plugin configs, and mine is small enough right now, it wasn't worth making a separate file and importing it; however, that is a viable route mentioned in the sitemap plugin's docs.
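If this config ever does grow, a sketch of that extraction might look like the following (the file name is my own invention):

// sitemap.routes.js (hypothetical file name)
const { $content } = require('@nuxt/content');

module.exports = async function sitemapRoutes() {
  const files = await $content({ deep: true })
    .only(['path', 'lastMod', 'date'])
    .fetch();

  return files.map((file) => ({
    url: file.path === '/index' ? '/' : file.path,
    changefreq: 'yearly',
    priority: 1,
    lastmod: file.lastMod || file.date
  }));
};

The sitemap section in nuxt.config.js would then shrink to something like routes: require('./sitemap.routes'), with the static '/' and '/blog' entries moved into the module as well.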

Using Github and Azure Static Web Apps to host my Nuxt Site

Hooking up Azure Static Web Apps to a Github repository is very easy and works incredibly well.

Using GitHub Actions and a managed secret deployment key on the Azure side, I used the default integration yml generated by Azure to set up the continuous integration action. Out of the box, it creates a new environment with a unique url for each pull request on the repository. This is a useful feature for managing different experimental paths and seeing how they look on an "almost" production site.

The one thing I did to make this work with my Nuxt static site is to use

app_build_command: "npm run generate"

as opposed to the built-in script, which I think is

npm run build

and this allows me to have my static site generated and fully rendered by the Nuxt tooling and pushed up to the Azure Static Web App.

Migrating the site to Nuxt with Content module

The site has been hosted on Umbraco for many years; prior to that it was a Windows Live Spaces blog, and before that I forget... Technology moves fast, and it has long been time to switch platforms.

I have wanted to switch to a markdown based blog for a while. First, to see if I can do it; but also to reduce the need for a database and app service to run the blog. Just a static site generated when I make changes is plenty.

Setting up routing

The standard nuxt content module blog suggests using a slug system to route blog content. For a simple blog, that would probably work, but I don't like the way it guides you to drop all content in a single folder. It was also difficult to get working with any kind of sub folder system.

I wanted to have a system like this

  • content/blog
  • content/blog/archive
  • content/blog/archive/post1.md
  • content/blog/2018/post5.md
  • content/blog/2019/post15.md

To support this kind of folder structure, and keep sane routing, following the default nuxt content system works very well. I ended up with a simple _.vue file in the main pages folder.

<template>
  <div>
    <article>
      <h1>{{ article.title }}</h1>
      <p class="article-meta">
        {{ formatDate(article.date) }}
        by <a href="#">{{ article.author.name }}</a>
      </p>
      <nuxt-content :document="article" />
    </article>
  </div>
</template>
<script>
export default {
  async asyncData({ $content, params }) {
    const article = await $content(params.pathMatch).fetch();
    return { article };
  },
  methods: {
    formatDate(input) {
      // fix day behind issue
      // https://stackoverflow.com/a/45407500/86860

      // Date object a day behind
      const utcDate = new Date(input);
      // local Date
      const localDate = new Date(
        utcDate.getTime() + utcDate.getTimezoneOffset() * 60000
      );

      return localDate.toLocaleDateString('en-US', {
        weekday: 'long',
        year: 'numeric',
        month: 'long',
        day: 'numeric'
      });
    }
  }
};
</script>

I've included a little date formatter here that takes the string date stored in the markdown front matter and converts it to the user's local timezone (this prevents the day-behind issue noted in the comment).

This makes for a very simple system for writing and managing my main content, and keeps linking to it straightforward:

<NuxtLink :to="article.path">
  {{ article.title }}
</NuxtLink>

Handling images

Dealing with images in the content was a little tricky, but with a bit of searching, the solution is to enable components in content.

First, ensure that nuxt.config.js has components enabled:

export default {
  components: true
}

Then set up an image binding component in components/global, which I found a great example of here: https://woetflow.com/posts/working-with-images-in-nuxt-content/.

<script>
export default {
  props: {
    src: {
      type: String,
      required: true
    },
    alt: {
      type: String,
      required: true
    }
  },
  methods: {
    imgSrc() {
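      // require() routes the image through webpack so it gets bundled; returns null if the file is missing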
      try {
        return require(`~/assets/images/${this.src}`);
      } catch (error) {
        return null;
      }
    }
  }
};
</script>
<template>
  <img :src="imgSrc()" :alt="alt" class="inline-max" />
</template>

<style scoped>
.inline-max {
  max-width: 100%;
}
</style>

Doing this pushes all the images through webpack, but means that referencing them in a markdown file is pretty easy:

<content-image 
  src="folder/file.png"
  alt="alt text"></content-image>

Looking Forward

This was a really fun project. I've simplified my website setup, using a static site generator to make the full website at build time.

This should also make it easier to try out some interesting interactive components inside blog posts and I hope to take advantage of that in future posts.

Setting Your Metric for Hyper-V Wireless Networking

I've been struggling to set up Hyper-V networking on a laptop for a while now. Until recently I was only able to work around it, not actually solve the problem.

It's well documented that sharing a wireless network card in Hyper-V won't work. Since Windows 10 1709, a nifty 'Default Switch' has been provided to help VMs connect to the network via NAT using the host's default connection. I could only ever get this to work intermittently. Why? Metric.

I believe this has something to do with my specific setup, so I'll outline that too. I use my laptop in three primary modes.

  1. Docked which has multiple DisplayPort, USB, 3.5mm, and Ethernet ports, but I use wireless networking
  2. Totally mobile, not connected to anything but power, obviously wireless.
  3. USB-C Dock with HDMI and USB, again wireless networking.

Eventually I came to notice that it worked when I was not docked. As a diagnostic step I disabled the unused Ethernet connection, and things started working. This told me there was some issue with determining which connection the VM should use.

I set the Metric on my Wireless connection to 1, on my Virtual Adapter (VPN) to 2, and my Wired Ethernet to 50. Things have been working great since. Go figure.

Adapter Settings => IPv4 Properties => Advanced => Uncheck Automatic metric, and specify a value.

Windows network metric settings panel

Comprehensive Guide to Fixing the FileMaker Web View Control on Windows

Working with the FileMaker Web View control can be a challenge on Windows. The FileMaker Web View control is essentially a shim that allows us to put the MSHTML Control on a FileMaker layout. Common wisdom is that this control is "essentially IE"; while true, that is misleading. By default the control operates in IE5.5 mode! There is a lot of historical information and there are decisions that brought us to those defaults. Fact is, it doesn't have to remain that way. I'm going to outline how to get the MSHTML Control up to IE11 mode.

There are a couple of levers we can pull to nudge the control into supporting modern standards: the Document Mode and the Input Model.

Setting up "Document Mode"

There are several ways to set the document mode that the control uses, and this behaves much like the full IE browser. It can be specified a couple of ways.

HTTP Header sent from the web server hosting the content loaded in the control

This can be done many ways, depending on the server system in question. A simple way for a site hosted by IIS is to simply add this to web.config:

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <clear />
      <add name="X-UA-Compatible" value="IE=Edge" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

Meta tag in the head of the rendered document

There are lots of examples of this method out there. The advantage of this method is that it works with a data url; the tag carries the same <meta http-equiv="X-UA-Compatible" content="IE=Edge" /> value used in the header above.

Registry setting on the computer running FileMaker Pro

Each method has benefits and drawbacks, and the later versions of FileMaker set the registry setting during install. FileMaker v16 and v17 both do. Other versions YMMV. It looks like this:

feature browser emulation registry screen shot

Configuring the Input Model

The input model is the flag that toggles some more modern javascript apis, such as Pointer Events, among others. This can only be controlled via a registry setting on the computer running FileMaker Pro.

To DISABLE Legacy Input Mode (which is enabled by default for any MSHTML Control unless you do this), you must create the following registry key:

HKEY_CURRENT_USER (HKLM requires different keys based on bitness of FMPA version)
    SOFTWARE
        Microsoft
            Internet Explorer
                Main
                    FeatureControl
                        FEATURE_NINPUT_LEGACYMODE
                            FileMaker Pro Advanced.exe = (DWORD) 00000000

The zero value tells the operating system that when "FileMaker Pro Advanced.exe" (adjust accordingly if you're not using Advanced) requests an MSHTML Control, it should disable the Legacy Input mode, which is intended to support old legacy enterprise systems built for IE5.5 or IE6.

This is what it looks like prior to creating any entries:

empty registry section for legacy mode

Once you've pulled both levers, modern websites and controls will work much better while embedded inside your FileMaker solution. We're still working with IE11, so rendering issues will still present themselves. Your site must account for this, but at least more modern programming APIs will be available.

Debugging Javascript in a FileMaker Web View Control

Using built-in FileMaker tooling, there is no way to see output from console.log or other diagnostic tools when deploying web content inside a web view control. I have found a way to do it using some freely available tools.

Here's a little demonstration of how it works:

console.log shows in visual studio

Using Visual Studio 2017 Community edition (download), it is possible to get access to this information. The process is simple once you know the steps.

Step 1: Open your solution to a layout with your web view control.

Step 2: Fire up Visual Studio, and use the Debug => Attach To Process menu:

Attach To Process Menu Item

Step 3: Select debugging type as "Scripting"

Set-Scripting-As-Debug-Attach-To

Step 4: Attach to the "FileMaker Pro Advanced.exe" process.

Step 5: Use Debug => Windows => JavaScript Console.

Debug-Show-Window-Javascript-Console

Once attached, you can view the console output, and even open script source, set break points, and step through code line by line. Basically anything you can do in the Internet Explorer 11 Developer Tools, you can do through Visual Studio attached to the web view control.

It seems to work best if the layout has only one web view control. I've run into an error where Visual Studio was unable to attach to the process, and restarting FileMaker corrected that.

For reasons I have not yet been able to understand, once you enter layout mode, you cannot attach for debugging to that instance of FileMaker anymore. You have to close FileMaker and start over. If you know why this is, or a way around it please reach out to me. I'd love to update this post with that information.

For reference, the tests above used a single file with a simple layout with a web viewer pointed at a field with this value:

data:text/html,<!DOCTYPE html><head><meta http-equiv="X-UA-Compatible" content="IE=Edge" /></head><body><button id="click">Logs</button><script>document.getElementById('click').addEventListener('click', function() { console.log('logged from click'); }, false)</script></body>

How to provide non-webpack'd configuration (easy to edit post-build/deploy) to a webpack'd single page application

SPAs, or single page applications, are all the rage these days. They have their merits, and they are beneficial for many scenarios. One issue that has plagued me repeatedly when working on SPAs is trying to define environment specific variables that are NOT KNOWN at build time. WebPack is great, and scary, and confusing all at the same time, but it packages everything up at once. If you don't know the value at build time, you're out of luck.

While there are dozens of ways to handle this situation, some involving separate build and release pipelines with tokens, I opted for a more low-tech solution. Let me set the stage before I continue, as I think it helps paint the picture around why I like this solution. My app connects to some web services, and the uri of said services will be different for each deployment. It's important to note that my SPA and web services are hosted on different domains and set up with CORS configuration, so I can't simply use relative paths.

I created a simple config.js file and included it in the head section of my index.html (the entry point for my SPA). It looks like this:

window.api_root_url = "https://runtime-api.example.com";
window.client_id = "my-client-id-for-open-id-connect";

In my SPA code (I'm using TypeScript) I created a simple globals.ts file which wraps these into type-safe constants.

export const apiRootUrl: string = (window as any).api_root_url;
export const clientId: string = (window as any).client_id;

and then whenever I need to reference my api endpoint or my client id I can simply import and use:

import * as globals from '@/globals';
// later on...
console.log(globals.apiRootUrl);

This works well, and provides a single file to edit on the deployed solution. It makes it easy to configure via ftp, ssh, rdp, etc.

Graphing United States Road/Highway Lane Mileage by State and Type Of Road

There are a few data visualization projects I want to tackle, and a good number of them have a cartography component. I knew I had to get started with one and build on it or I'd never get any of them rolling. I ran across a 'Functional System Lane Length' data set from the Federal Highway Administration. I figured the table could be cleaned up and presented in a more intuitive way using a map, and it would force me to kick start this little prototype.

This is what I came up with. It's still a work in progress, though updates will likely come slowly and will be applied to the next data visualization project. This is how I approached it.

Visualization Technology

I had worked with jqvmap on some previous projects, and they have a great sample for color coding a map based on a set of data, so I decided to stick with something I already knew. I also knew I wanted to throw this in a separate page that I could embed on my blog here, so I wanted to make sure it could be completely self-contained.

Data Processing

The data was available in HTML table form, as well as PDF and Excel. I found an online HTML-table-to-JSON converter, and used that to build an array of data from the table. This got me close to what I wanted, but it left a bad taste in my mouth to index into a multi-dimensional array to pull out data every time I wanted to change data sets. This is what it looked like before any conversion:

[
    ["INTERSTATE","OTHER FREEWAYS AND EXPRESSWAYS","OTHER PRINCIPAL ARTERIAL","MINOR ARTERIAL","MAJOR COLLECTOR","MINOR COLLECTOR (2)","LOCAL (2)","TOTAL","INTERSTATE","OTHER FREEWAYS AND EXPRESSWAYS","OTHER PRINCIPAL ARTERIAL","MINOR ARTERIAL","MAJOR COLLECTOR","MINOR COLLECTOR","LOCAL (2)","TOTAL","",""  ],
    ["Alabama","2,414","-","6,045","8,398","24,615","12,441","94,725","148,638","2,189","138","4,905","6,070","7,547","372","41,481","62,701","211,339"  ],
    ["Alaska","2,057","-","1,612","867","2,743","2,862","15,294","25,434","303","-","503","475","510","472","3,898","6,162","31,597"  ],
    ["Arizona","3,714","70","3,417","2,691","8,552","3,789","61,350","83,584","1,462","1,552","3,771","9,278","4,352","456","40,505","61,375","144,959"  ],
    ["Arkansas","1,752","288","4,996","6,360","23,780","13,656","122,548","173,378","1,469","424","2,208","4,364","4,504","499","23,685","37,154","210,532"  ],
    ["California","5,254","1,518","8,253","12,647","24,037","15,080","82,233","149,022","9,671","9,283","25,149","30,758","27,593","644","142,263","245,361","394,383"  ],
]

Clearly, something needed to be done, so I wrote a small processing utility and JSON.stringify'd the result:

var ddt = [];
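// reshape each raw row into an object keyed by State / Rural / Urban, stripping the thousands separators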
for (var i = 0; i < states.length; i++) {
    var abvr = StateAbvrFromName(states[i][0]); // since data shows state name and jqvmap uses state code, il, ca, etc
    ddt[ddt.length] = {
        'State': { 'Name' : states[i][0], 'Code': abvr },
        'Rural': {
            'Interstate': parseInt(states[i][1].replace(/,/g, ""), 10),
            'Other_Freeways_Expressways': parseInt(states[i][2].replace(/,/g, ""), 10),
            'Other_Principal_Arterial': parseInt(states[i][3].replace(/,/g, ""), 10),
            'Minor_Arterial': parseInt(states[i][4].replace(/,/g, ""), 10),
            'Major_Collector':parseInt(states[i][5].replace(/,/g, ""), 10),
            'Minor_Collector_2': parseInt(states[i][6].replace(/,/g, ""), 10),
            'Local_2': parseInt(states[i][7].replace(/,/g, ""), 10),
            'Total': parseInt(states[i][8].replace(/,/g, ""), 10),
        },
        'Urban': {
            'Interstate': parseInt(states[i][9].replace(/,/g, ""), 10),
            'Other_Freeways_Expressways': parseInt(states[i][10].replace(/,/g, ""), 10),
            'Other_Principal_Arterial': parseInt(states[i][11].replace(/,/g, ""), 10),
            'Minor_Arterial': parseInt(states[i][12].replace(/,/g, ""), 10),
            'Major_Collector': parseInt(states[i][13].replace(/,/g, ""), 10),
            'Minor_Collector_2': parseInt(states[i][14].replace(/,/g, ""), 10),
            'Local_2': parseInt(states[i][15].replace(/,/g, ""), 10),
            'Total': parseInt(states[i][16].replace(/,/g, ""), 10),
        },
        'Total' : parseInt(states[i][17].replace(/,/g, ""), 10),
    };
}
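
StateAbvrFromName isn't shown above; presumably it is just a name-to-code lookup, something along the lines of this abbreviated sketch (not the original implementation):

var STATE_CODES = { 'Alabama': 'al', 'Alaska': 'ak', 'Arizona': 'az', 'Arkansas': 'ar', 'California': 'ca' /* ...and so on for the remaining states */ };
function StateAbvrFromName(name) {
    return STATE_CODES[name];
}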

Throwing It All Together

Going into this little project, I knew that I wanted to run without any server-side code, I wanted users to be able to manipulate the data to change the map, and I knew I wanted it to be as lightweight as possible. I didn't want any node modules or components or any 'build' time tooling to be required to make this work.

I opted to use some jQuery event handling to glue everything together, and I'm actually happy with how it worked out.
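
As a rough sketch of what that glue looks like (the element ids, the dropdown, and the jqvmap 'set' call are illustrative assumptions, not the live code):

// illustrative sketch; assumes a <select id="data-set-picker"> with values like "Rural.Interstate"
$(function () {
    function buildValues(area, roadType) {
        var values = {};
        for (var i = 0; i < ddt.length; i++) {
            values[ddt[i].State.Code] = ddt[i][area][roadType];
        }
        return values;
    }

    $('#data-set-picker').on('change', function () {
        var parts = $(this).val().split('.');
        // assuming jqvmap's 'set' API accepts updated values; otherwise rebuild the map with vectorMap({...})
        $('#vmap').vectorMap('set', 'values', buildValues(parts[0], parts[1]));
    });
});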

My Great Grandma's Buttermilk Pancakes

The ingredients:

  • 2 cups flour
  • 2 tsp baking powder
  • 1 tsp salt
  • 1/2 tsp baking soda
  • 3 Tbs sugar
  • 2 eggs, separated
  • 2 cups buttermilk (or more)
  • 1/4 cup melted butter

Sift flour, measure and re-sift with other dry ingredients. Combine beaten yolks with buttermilk and add to dry ingredients. Add melted butter. Beat egg whites until stiff and fold into mixture. Bake on hot waffle iron or make into pancakes.

When a Difference In Your Environment Makes a Difference

I'm using IdentityServer4 for a couple of projects. It's great, and might warrant a post of its own; however, one thing I've been struggling with is loading the JWT signing certificates. I've been using the PowerShell cmdlet New-SelfSignedCertificate to generate the pfx files, and it works great on my local computer.

Running this on a web server has caused some grief, but led to my learning a bit more about how these things actually work.

This is the final code that works, but I'll walk through how I got to it.

new X509Certificate2(keyFilePath, keyFilePassword, X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.EphemeralKeySet)

The first thing is that the certificate file itself can have indicators stating whether to load to the User store or the Machine store, so I've used MachineKeySet to ensure we override that.

Additionally I'm also specifying EphemeralKeySet, which indicates the private keys should not be persisted anywhere. This is the key part that allows everything to work when running as ApplicationPoolIdentity on a server without administrative privileges.

Without passing the additional storage flags, the system uses what's in the certificate file/data, and that works locally since I have a user profile and/or administrative rights. By default, IIS does NOT load the user profile, which means no environment variables or user certificate store. In order to use MachineKeySet you need administrative privileges (something your web-facing accounts should not have), so that's where Ephemeral comes into play: nothing is persisted, so the admin rights are not needed. There are some incompatibilities with this approach, noted in the sources below, but for JWT signing it works.

For reference and searches, here are the errors I got/overcame.

Using default constructor without flags:

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: The system cannot find the file specified

Specifying only MachineKeySet, while running as ApplicationPoolIdentity:

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: Access denied

Relevant sources:

Introducing my latest side project - FMData

I switched jobs in March of this year; that is a story for a different time, but the important thing is that it reintegrated me with some technology I had not used in quite some time: FileMaker. FileMaker is a database system that provides UI and basic scripting capabilities all in a single package. Users use the FileMaker client to access data stored in FileMaker databases, utilizing layouts and scripts in the database. It's an all-in-one system.

As a web developer with a lot of C# code under my belt, I wanted to connect to data in FileMaker from the outside. Historically the way to do this was through their XML Publishing Engine; I picked up an existing open source project and modified it to suit my needs. Ten years ago, this was a great solution and it worked well. It still does, but as things change, so must we. In FileMaker 17 the Data API was introduced. It uses JSON and is RESTful. A new package was needed. Enter my side project: FMData. I built this as a learning exercise, but quickly realized it could be useful to other developers.

The library provides coverage for the FileMaker 17 Data API, and I've just released v2.1, cleaning up a handful of bugs and improving coverage of the underlying FileMaker API. I still don't have full coverage, but I'm inching towards it. I have tons of ideas for features and enhancements, so be sure to keep an eye on the project. If you find it useful let me know. If you find a bug, open an issue. If you have a feature idea, open an issue on github and consider making a pull request.

The package is available on Nuget, and getting some data is really this simple:

var client = new FileMakerRestClient("server", "fileName", "user", "pass"); // without .fmp12
var toFind = new Model { Name = "someName" };
var results = await client.FindAsync(toFind);
// results is an IEnumerable<Model> of records whose Name field matches "someName", returned from the FileMaker find request.

That's all there is to it, and there's more examples on the project site: https://fmdata.io/

Continuous Integration for Open Source .NET Projects

I operate a couple niche open source projects. They don't generate much activity, but they've been useful to me over the years so I share them with the world to help anyone else that happens upon them.

They're hosted on my GitHub page, which is great for sharing the source code and allowing folks to submit issues and pull requests (not that my projects are big enough to get any real activity, but I can hope). There isn't a good way to share the binary output from GitHub, though; you need to utilize additional tools and software. I'm using AppVeyor and MyGet, and I outline my configuration below.

The full CI setup could be achieved with MyGet alone, since they also offer build services; but I'm using a combination of MyGet (pre-release package hosting) and AppVeyor (builds).

AppVeyor Setup

In order to get my .NET Standard 2.0 library to build in AppVeyor I had to make a few changes from the default configuration.

Build Setup

Build configuration: build NuGet packages, run dotnet restore pre-build

On the build configuration tab you need to tick the box to build NuGet packages and, most importantly, add a pre-build script to perform

dotnet restore

Deployment Setup

This is the Deployment tab on the left, as part of the build settings, not the deployment area of the AppVeyor project across the top.

On the Settings >> Deployment tab, in order to push to MyGet you will need to provide the MyGet feed URL and API key. Both of these are easy to obtain on your feed's details page.

MyGet

There are plenty of resources for setting up a MyGet feed, so I'm not going into those details, but this is where you get the settings utilized in AppVeyor:

myget nuget push url

The last step is pushing the MyGet packages up to NuGet, which can be done directly through the MyGet interface. Right now, this is a manual process for me. I have two separate AppVeyor builds set up for the same project, pushing to the same MyGet feed: one connected to the develop branch and one linked to master. Within AppVeyor I have enabled assembly version patching, so they all end up in the MyGet feed and I can push the master releases out to NuGet.

I'm looking into having the build create release tags in the repository after a successful build, but haven't figured out how I want that to work yet.

Fix for Remote Desktop Connection Manager (RDCMan) on High DPI Devices

If you're like me, you probably find yourself needing to remote into servers from time to time. Again, if you're like me, you probably got tired of doing it manually and found a tool to help you. I know I did; I found and live by RDCMan.

One of the many beautiful features is that it gives you the ability to store an encrypted file with all your connection and display settings along with credentials. So you're only a short double click away from being on the remote desktop you need to be on.

It's been working for me for years. In the last few years, high-DPI devices have become more common and RDCMan didn't play well with them, at least by default. One simple operating system setting/configuration was all it took to get it sorted out.

Simply go to the properties of your shortcut and, on the Compatibility tab, change the high DPI scaling dropdown from Application to System.

rdcman connection properties

I'm rather disappointed it took me this long to figure out, but now that I have it working it's fantastic, and my remote sessions are no longer scaled way down to fit.

Dream Build Play - Part One - My Technology Stack

Choosing the right technologies for a project is one of the early decisions you must make, and it's a crucial factor in long term success. Technology choices can make a project go smoothly or they can be a constant impediment to forward progress.

Since this is a just-for-fun build, I'm going to try to throw a bunch of things at it, but first let's start with the basic technology stack.

Front end

UWP, obviously, but more specifically the plan is to use HTML and JavaScript as the building blocks packaged up inside a UWP application. To cap things off, I figured why stop there: let's throw in TypeScript, to get statically typed javascript, and throw in a couple of front-end frameworks to boot. Aurelia for data binding and non-gamey stuff; Phaser-CE for gamey things.

Server Side

I'm a full stack .NET developer in my day life, so in order to focus on the game specifics for the front end, I decided not to bite off another layer of complexity by choosing a tech stack I'm not already proficient with. With that in mind: a C# service/controller layer and Razor views (that serve up the baseline for the aforementioned Aurelia framework to pick up). The plan is to host this on Azure App Services, and then, to throw some new things in the mix, I'm going to try to find a way to fold in CosmosDB, Notification Hubs, and maybe even some container services.

To summarize this up into a nicely packaged list:

  • Languages and Frameworks
    • C# + ASP.NET Core
    • Typescript + Aurelia
    • Phaser-CE
  • Apps and Tools
    • Tiled
    • Paint.NET
    • The GIMP

The idea is by using a lot of tooling I'm already familiar and comfortable with, I'll be able to focus on the game specific tasks for this project.

A fix for the error - A route named 'something or other' is already in the route collection. Route names must be unique.

I was setting up a new Umbraco Project, for the first time in a while. At my company, we have a base implementation that we 'fork' manually by copy/pasting the repository and creating a specific one for each client.

I set it up and tried to run it, and immediately hit this error:

A route named 'something or other' is already in the route collection. Route names must be unique.

A good error message that describes the problem exactly, and how to fix it. My problem? I'm not defining any routes manually, and this is the same code that was working in another project. What gives? There are lots of posts explaining how to update your RouteConfig to remove the duplicate.

Eventually I came across a highly upvoted post suggesting deleting the bin files and trying again. I know the bin folder is created by the compiler for my project, but deleting everything in there felt a little draconian to me. This is an axe kind of fix, and I'm thinking this is more of a scalpel problem.

One of those manual steps to creating our client-specific projects is to go in and rename folders, solution, and project files (and their internal path references to each other). This went just fine, and from a Visual Studio perspective everything linked up, worked, and built.

Then it hit me. I had renamed the project, and its old compiled output was still in the bin folder along with the newly built and compiled version with the new name. Ultimately I had two copies of identical code in the bin folder with different physical file names.

I deleted the dll files from my original build with the old name, and that ultimately fixed the problem.

Really useful Nuget packages - FastMember. Convert a generic list to a DbDataReader on the quick.

Utilizing .NET Core has been a pretty great experience. There have been a few gotchas with APIs not being available in the base package. I was really stoked to see that the SqlBulkCopy classes are part of .NET Core. I was less thrilled to note that DataTable is there in .NET Core 1.0 but just an empty non-usable class.

That means converting from a generic IEnumerable<T> to a DataTable/Set is not an option.

Enter DbDataReader: another way to utilize BulkCopy.

If you have an IDataReader instance, the BulkCopy WriteToServer method has an overload to cover that; however, I'm using an ORM to pull in data from various sources, so I basically have List<T>s, not IDataReaders. Searching the web, it's pretty difficult to find a generic way to convert from a generic collection to an IDataReader. Much harder than it should be.

Enter FastMember: Convert an IEnumerable<T> to a DbDataReader, fast!

This great package makes the process easy and extremely fast. This basic demo shows how simple it makes things.

using (SqlBulkCopy bulkcopy = new SqlBulkCopy(connection))
{
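    // toInsert is the List<T> of entities to write; ObjectReader exposes it as a DbDataReader
    // (a real call would also set bulkcopy.DestinationTableName and any needed column mappings)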
    using (var reader = ObjectReader.Create(toInsert))
    {
        bulkcopy.WriteToServer(reader);
    }
    bulkcopy.Close();
}

Anatomy of a blob storage Uri, and how to use a blob name prefix to make Azure do your filtering.

Sounds simple enough, right? The blob storage account has a Uri and each part means something.

There are only three levels of hierarchy built into the system:

  1. Account
  2. Container
  3. Blob

Seen as:

https://[STORAGE_ACCOUNT_NAME].blob.core.windows.net/CONTAINER-NAME/BLOB-NAME

Within the Blob itself, the NAME property can be used to create additional ‘virtual’ directories, but they are just that. Virtual. This is where things get pretty powerful. Using the Storage Client libraries for .NET, the ListBlobsSegmentedAsync method allows you to have Azure filter blobs by name prefix. The prefix filter only applies to the Blob Name. If we look at a specific example (redacted to protect the guilty):

https://[STORAGE_ACCOUNT].blob.core.windows.net/CONTAINER/VirtualFolder1/2017/07/01/15/filename.ext

You see that this whole part, VirtualFolder1/2017/07/01/15/filename.ext, is all the Blob Name. It just so happens to be set up with Year/Month/Date/Hour folders, and because of this we can use ListBlobsSegmentedAsync to filter based on it.

var list = await container.ListBlobsSegmentedAsync(
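    // arguments: prefix, useFlatBlobListing, blobListingDetails, maxResults, continuationToken, options, operationContext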
    "VirtualFolder1/2017/07", 
    true, 
    BlobListingDetails.None,
    int.MaxValue, 
    null, 
    null, 
    null);

This would give us all files for the month of July in the year 2017, regardless of which day or hour they are listed in. Doing it this way allows me to query a much more limited set of data, meaning I have to process out fewer files locally and there is less data transfer in and out as a result.

The main caveat I’ve found is that you cannot use wildcards, so you can’t find all the blobs for July in any year without doing multiple queries. Because the container is not part of the blob name, you cannot query across containers either.

Error Migrating App Service Plan because of ‘microsoft.insights/webtests’

I had two Azure subscriptions along with two App Service Plans (one in each). For cost savings I wanted to combine them, but you can’t use ‘Change App Service plan’ on the individual App Services unless they’re in the same subscription, resource group, and region.

Each time I tried to use the portal to move one AppServicePlan to the other subscription, I would get this error:

the subscription ‘subscription-guid’ is not registered for resource types 'microsoft.insights/webtests (centralus)'. #code: missingregistrationsfortypes#

Turns out that I had some existing web tests from the old portal that were orphaned and couldn’t be opened/read from the new portal.

Enter the command line tool from the web portal (yes, the terminal in the web portal; it's neat, check it out!):

az resource list --resource-type microsoft.insights/webtests

and there I get a nice json list of the resources that were giving me trouble, along with their “id” so I was able to delete them with

az resource delete --ids /subscriptions/[guid]/resourceGroups/[my-resource-group]/providers/microsoft.insights/webtests/[test-name]

Someone more in tune with the bash shell could probably link those up to double down and delete all the items returned with a single command, but I was able to do it manually as I only had a few of these troublesome resources clogging up my migration.

Great Tools - Screen To Gif

I’m going to use this post to tag a series of great tools that I use, for those times when a screenshot isn’t quite good enough. ScreenToGif is a nice, clean screen capture and gif editor. It's a free download and it just runs; no installer (though it does need .NET 4.6.1), just download, save, and go.

Here’s a screenshot of the tool, around my Live Writer editor:

Screen To Gif screenshot

ScreenToGif has multiple capture options, but the one I find most useful is screen. You drag the ScreenToGif window around the screen area you wish to record. You simply hit the record/stop buttons to record while you work, then it opens your project in the editor to make any post-production changes you need. Then you can save as a .gif file for distribution.

Here is a short recording of the above:

Screen To Gif demo gif

Often a screenshot is plenty, but sometimes a quick gif communicates so much more.

Model Binding with File Uploads using ASP.NET Core Action Method Parameter Attributes

I started with a simple task: upload a file to an ASP.NET Core API Controller. The project I’m working on uses a front-end SPA framework, so the file upload is coming from javascript and not directly from an html form post.

First let's look at what I was doing wrong, and then we can understand why it was wrong.

A quick peek at the client side code that pushes this data to the controller (note this is actually typescript):

// note photoFiles is bound to via Aurelia binding.
var form = new FormData();
for (let i = 0; i < this.photoFiles.length; i++) {
    form.append(`files[${i}]`, this.photoFiles[i]);
}
this.http.fetch(`/api/photoUpload/${this.targetId}`, {
    method: 'post', 
    body: form, 
    headers: new Headers() 
})
.then(response => { console.log(response); });

Since we’re loading our ‘FormData’ object into the ‘body’ of the http post, it made sense to me to wire up the ASP.NET Controller Action as follows:

public async Task<IActionResult> photoUpload(
     [FromRoute] Guid propertyId,
     [FromBody] IEnumerable<IFormFile> files)
{
     foreach (var file in files)
     {
         var name = file.Name;
         Console.WriteLine(name);
     }

    return new ObjectResult(null);
}

This would result in an HTTP 415 Unsupported Media Type response, before the action code ever executed.

It turns out [FromBody] actually kicks ASP.NET Core into JSON Model Binder mode; which obviously cannot handle the file data that is coming through. More information on Model Binding can be found here on Andrew Lock’s blog.

Easy enough; let's drop the [FromBody] attribute since it's clearly not helping. Without going through the code again, the method is executed; however, the files parameter has a count of zero.

At this point, I’d been searching around enough to have seen a few blog posts that suggest forgoing the Model Binding and simply using the built-in

var form = await Request.ReadFormAsync();

var files = form.Files;

method. I try this and it does in fact work. I see my uploaded files. Some folks would stop here with a working solution, but I like to know why something I expected to work didn’t. If the data is coming through, it should work as an action parameter as well as directly reading from the form.

Enter the [FromForm] attribute, useful in cases when you want to bind to a specific form field. Modify the above code to use [FromForm] on the IEnumerable<IFormFile> parameter from our action:

public async Task<IActionResult> photoUpload(
    [FromRoute] Guid propertyId,
    [FromForm] IEnumerable<IFormFile> files
)

and wait, why isn’t that working? Still getting an empty IEnumerable when the method executes. No HTTP 415 though, so at least it's not a regression.

I went back and compared the ACTUAL http post data that was sent against the data from a direct post of a plain html form with an <input type="file" /> element, and noticed that there was one slight bug in the assembly of the FormData on the front-end:
var form = new FormData()
for (let i = 0; i < this.photoFiles.length; i++) {
     form.append(`files`, this.photoFiles[i]);
}

The original code was appending names like files[0], files[1], etc. for each file uploaded; however, a regular file input simply posts every file under the same ‘name’ as the input tag.

Connecting Open Live Writer to Umbraco Channels For Authoring Content or Blog Posts

Update 2018: Umbraco has removed 'channels' and the underlying xmlrpc package that made this work. There might be another way to get it linked up, but I haven't spent time on it or figured it out yet.

Open Live Writer is a great, free, open source version of the late great Windows Live Writer, from the Windows Live Essentials package of yore. I’ve never been a prolific blogger, but having a nice slick thick client for formatting and image maintenance is always great. Using Umbraco and Open Live Writer works well, but it doesn’t have to be only for blogging! With Umbraco Channels you can direct content from Open Live Writer to various areas of your site.

I’ve set this up a dozen times, and every time I have to google around until I find the solution.

Umbraco Setup

Set up a channel for the user; in the Backoffice, go to:

Users => Users => Your-Account => Content Channels (Tab)

Open live writer configuration

I’ve called out the important areas.

Start Node in Content: should match the folder you set up in the website.

Start Node in Media Library: useful if you want to keep all the media for your posts grouped together

Document Type: This is the document type that will be created for each Post in Open Live Writer.

Description Field: the field on the document type in which the main content will go.

Open Live Writer Setup

Website => http://www.example.com/[folder-if-setup]

User => Backoffice Username for this channel

Password => Self Explanatory

Blog Type => Metaweblog API

EndPoint => http://www.example.com/umbraco/channels.aspx

That’s the part I always forget.

What’s interesting here, is that if you want to manage multiple content sections from within Open Live Writer, you can create multiple Umbraco Backoffice users with one Channel per area in the site. Since Open Live Writer supports multiple accounts, you can link them all up and have a mostly seamless experience. Using different ‘accounts’ is a clunky way to manage multiple areas, but if you think of them as channels it makes some sense.

xUnit Tests in a VSTS Build Failing After Upgrading to netcoreapp1.1 and Microsoft.NETCore.App 1.1.1 with project.json and preview 2.1 tooling

When using netcoreapp1.0 I had been using the existing Visual Studio Test task from the Build Editor (v1.x) and simply overriding the ‘path to dlls’ with a ‘path to project.json’ file as outlined here.

Upon upgrading the application and all tests to netcoreapp1.1 VSTS started failing builds with the following error:

Error: 'test-xunit' returned '-2147450749'.

Running these tests locally through Visual Studio, dotnet test, and vstest.console.exe all worked just fine.

Scouring the internet, you’ll find plenty of sources suggesting that you add the nuget package ‘Microsoft.DotNet.InternalAbstractions’ to your test project. In my case, this did NOT solve the problem.

The only way I could get it working was to downgrade the test projects from

Microsoft.NETCore.App : { version: 1.1.1 }

to

Microsoft.NETCore.App: { version: 1.1.0 }

I suspect that the build agent doesn’t have v1.1.1, and that is why the tests always work when run locally. All I know for sure is that everything worked fine locally, but it would blow up on a VSTS Build Agent.

Optimizing Front End Resources

Optimizing front-end resources can be lots of fun, if you’re using a good framework for it. For your standard ASP.NET web application, WebGrease and the built-in Microsoft tools work pretty darn well. Things get a little bit interesting when you start working with things like LESS or SASS, as the compilers for these can start bundling things up and making minified versions. That’s fine and good, until you need or want managed caching. That’s where server-side frameworks come into play.

Enter: ClientDependency Framework. It's what powers this site. It's a nice simple framework that makes sense. The code speaks for itself, and since it lives in the views it doesn’t get lost in some .cs file.

Firstly, it's available as a NuGet package. In order to get started, you simply punch in these commands:

PM> Install-Package ClientDependency
PM> Install-Package ClientDependency-Mvc

Then you can jump right into your view code:

@{Html.RequiresCss("~/Content/Site.css");}

@{Html.RequiresJs("~/Scripts/jquery.js");}

This makes the view depend on these files; they are not rendered here, but when these views are used, the render calls below know which scripts and styles to include:

<head>
...
@Html.RenderCssHere()
</head>
<body>
...
@Html.RenderJsHere()
</body>

ClientDependency Framework handles all the messiness of combining and minifying these resources, adding query strings to the <script> and <link> tags so they are cached but can be invalidated in future updates. Using this framework properly, you should never have to tell a client to just do a “hard” (CTRL + F5) refresh.