Archiving My Stack Overflow Contributions

Over the years I’ve answered questions on Stack Overflow, asked a few of my own, and generally tried to make the internet a little better and more useful, one post at a time. Across the network sites I contributed to most regularly, I reached over 6,000,000 people through my own questions and through questions where I shared a highly upvoted answer. I wanted to bring that content home, alongside the rest of my content on this site.

Why

Stack Overflow gave developers a lot. It came to the internet at a time when searching for expert answers to technical programming problems was no small effort, and was a genuinely difficult part of being a developer. Stack Overflow was fast, improved the signal-to-noise ratio, and got straight to the point. The situation before it was bad enough that there is an XKCD comic about the subject.

Times change, and while Stack Overflow still has a lot of great content, it is no longer the community I joined over sixteen years ago. It's clear from the way the organization is being run that the community feedback that made the site what it is, is no longer a priority. I wish them well, and hope they can realign with the community. In the meantime, I want to ensure that my contributions remain available, to myself and others, for as long as I maintain an internet presence.

How

I used Claude Code to generate a throwaway TypeScript script that hits the Stack Exchange API, fetches all my answers and questions, and converts them to Markdown files for this site. The script paginates through the API, converts HTML to Markdown with Turndown, and drops each post into a folder.

Accepted answers become blog posts, since at least one person found them useful; my questions with a score of 10+ and an accepted answer also become blog posts. Everything else goes to an archive folder. The script works with any Stack Exchange site: just pass the site key and your user ID.
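The sorting rule amounts to a pair of small pure functions. This is an illustrative sketch only; the helper names are mine, and the real script below inlines this logic in processAnswer and processQuestion:

```typescript
// Illustrative sketch of the blog/archive sorting rule described above.
// These helper names are hypothetical; the script inlines this logic.
type Destination = 'blog' | 'archive';

const BLOG_MIN_SCORE = 10;

function destinationForAnswer(isAccepted: boolean): Destination {
  // Any accepted answer is worth keeping front and center.
  return isAccepted ? 'blog' : 'archive';
}

function destinationForQuestion(score: number, hasAcceptedAnswer: boolean): Destination {
  // Questions need both community interest and a resolution.
  return score >= BLOG_MIN_SCORE && hasAcceptedAnswer ? 'blog' : 'archive';
}
```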

Running it

Install turndown temporarily (it won’t be saved to package.json):

npm install --no-save turndown @types/turndown

Run the script, passing your user ID and site:

SO_USER_ID=12345 SO_SITE=stackoverflow SO_API_KEY=your_key npx tsx so-backup.ts

Available sites: stackoverflow, serverfault, superuser, gamedev, dba (add more in the SITE_MAP). User IDs are per-site — your SO ID won’t work on Server Fault.

You can register for a free API key at stackapps.com/apps/oauth/register to get 10,000 requests/day instead of the default 300.

Clean up after:

npm uninstall turndown @types/turndown
rm so-backup.ts

The script

import TurndownService from 'turndown';
import { writeFileSync, mkdirSync, existsSync } from 'node:fs';
import { join } from 'node:path';

const SO_API = 'https://api.stackexchange.com/2.3';
const USER_ID = Number(process.env.SO_USER_ID ?? '0');
const SITE_KEY = process.env.SO_SITE ?? 'stackoverflow';
const PAGE_SIZE = Number(process.env.PAGE_SIZE ?? '100');
const BLOG_MIN_SCORE = 10;
const BLOG_DIR = 'src/content/blog';
const ARCHIVE_DIR = 'src/content/stackarchive';
const API_KEY = process.env.SO_API_KEY ?? '';

interface SiteConfig { key: string; name: string; url: string; }

const SITE_MAP: Record<string, SiteConfig> = {
  stackoverflow: { key: 'stackoverflow', name: 'Stack Overflow', url: 'https://stackoverflow.com' },
  serverfault: { key: 'serverfault', name: 'Server Fault', url: 'https://serverfault.com' },
  superuser: { key: 'superuser', name: 'Super User', url: 'https://superuser.com' },
  gamedev: { key: 'gamedev', name: 'Game Development', url: 'https://gamedev.stackexchange.com' },
  dba: { key: 'dba', name: 'Database Administrators', url: 'https://dba.stackexchange.com' },
};

function apiParams(): string { return API_KEY ? `&key=${API_KEY}` : ''; }
function delay(ms = 500): Promise<void> { return new Promise((r) => setTimeout(r, ms)); }

interface SOPost {
  answer_id?: number; question_id?: number; body: string; score: number;
  creation_date: number; owner: { user_id: number; display_name: string };
  tags?: string[]; title?: string; is_accepted?: boolean; accepted_answer_id?: number;
}
interface SOComment {
  comment_id: number; body: string; score: number;
  owner: { user_id: number; display_name: string };
}

const turndown = new TurndownService({ headingStyle: 'atx', codeBlockStyle: 'fenced', fence: '```' });
turndown.addRule('fencedCodeBlock', {
  filter: (node) => node.nodeName === 'PRE' && !!node.querySelector('code'),
  replacement: (_content, node) => {
    const code = (node as HTMLElement).querySelector('code')!;
    const className = code.getAttribute('class') || '';
    const langMatch = className.match(/language-(\w+)/) || className.match(/lang-(\w+)/);
    const lang = langMatch ? langMatch[1] : '';
    return `\n\n\`\`\`${lang}\n${code.textContent || ''}\n\`\`\`\n\n`;
  },
});

async function fetchJson<T>(url: string, retries = 3): Promise<T> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const res = await fetch(url);
    const data = await res.json();
    if (data.backoff) {
      console.warn(`  Backoff: ${data.backoff}s`);
      await new Promise((r) => setTimeout(r, data.backoff * 1000));
    }
    if (data.error_id === 502 || res.status === 429) {
      const wait = (data.backoff || 10) * attempt;
      console.warn(`  Rate limited, waiting ${wait}s...`);
      await new Promise((r) => setTimeout(r, wait * 1000));
      continue;
    }
    if (data.error_id) throw new Error(`API error: ${data.error_name} - ${data.error_message}`);
    if (!res.ok) throw new Error(`HTTP ${res.status}: ${url}`);
    return data;
  }
  throw new Error(`Failed after ${retries} retries: ${url}`);
}

async function fetchAnswers(userId: number, site: string, page = 1) {
  await delay();
  return fetchJson<{ items: SOPost[]; has_more: boolean }>(
    `${SO_API}/users/${userId}/answers?order=desc&sort=votes&site=${site}&filter=withbody&pagesize=${PAGE_SIZE}&page=${page}${apiParams()}`);
}
async function fetchQuestions(userId: number, site: string, page = 1) {
  await delay();
  return fetchJson<{ items: SOPost[]; has_more: boolean }>(
    `${SO_API}/users/${userId}/questions?order=desc&sort=votes&site=${site}&filter=withbody&pagesize=${PAGE_SIZE}&page=${page}${apiParams()}`);
}
async function fetchQuestion(qid: number, site: string) {
  await delay();
  return (await fetchJson<{ items: SOPost[] }>(`${SO_API}/questions/${qid}?site=${site}&filter=withbody${apiParams()}`)).items[0];
}
async function fetchAcceptedAnswer(aid: number, site: string) {
  await delay();
  return (await fetchJson<{ items: SOPost[] }>(`${SO_API}/answers/${aid}?site=${site}&filter=withbody${apiParams()}`)).items[0] ?? null;
}
async function fetchComments(postId: number, userId: number, type: 'answers' | 'questions', site: string) {
  await delay();
  const data = await fetchJson<{ items: SOComment[] }>(
    `${SO_API}/${type}/${postId}/comments?site=${site}&order=desc&sort=creation&filter=withbody${apiParams()}`);
  return data.items.filter((c) => c.score >= 3 || c.owner.user_id === userId);
}

function toSlug(title: string): string {
  const slug = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
  if (slug.length <= 60) return slug;
  const t = slug.slice(0, 60); const d = t.lastIndexOf('-');
  return d > 20 ? t.slice(0, d) : t;
}
function formatDate(epoch: number) {
  const iso = new Date(epoch * 1000).toISOString().split('T')[0];
  return { year: iso.slice(0, 4), dateStr: iso };
}
function escapeYaml(s: string): string {
  // Always quote; escape backslashes, quotes, and newlines so the value
  // remains a valid single-line YAML double-quoted scalar.
  return `"${s.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\n/g, '\\n')}"`;
}
function writePost(filePath: string, content: string) {
  const dir = join(filePath, '..');
  if (!existsSync(dir)) mkdirSync(dir, { recursive: true });
  writeFileSync(filePath, content, 'utf-8');
  console.log(`  Written: ${filePath}`);
}

async function processAnswer(answer: SOPost, userId: number, site: SiteConfig) {
  const question = await fetchQuestion(answer.question_id!, site.key);
  const comments = await fetchComments(answer.answer_id!, userId, 'answers', site.key);
  const { year, dateStr } = formatDate(answer.creation_date);
  const slug = toSlug(question.title || 'untitled');
  const questionBq = turndown.turndown(question.body).split('\n').map((l) => `> ${l}`).join('\n');
  const answerMd = turndown.turndown(answer.body);
  const tags = (question.tags || []).map((t) => `  - ${escapeYaml(t)}`).join('\n');
  const isAccepted = answer.is_accepted ?? false;
  const acceptedNote = isAccepted ? ' *(accepted answer)*' : '';

  let transition: string;
  if (isAccepted && answer.score > 0) {
    transition = `*I posted the following answer, which was chosen as the accepted answer and received ${answer.score} upvote${answer.score === 1 ? '' : 's'}:*`;
  } else if (isAccepted) {
    transition = `*I posted the following answer, which was chosen as the accepted answer:*`;
  } else if (answer.score > 0) {
    transition = `*I posted the following answer, which received ${answer.score} upvote${answer.score === 1 ? '' : 's'}:*`;
  } else {
    transition = `*I posted the following answer:*`;
  }

  let content = `---
title: ${escapeYaml(question.title || 'Untitled')}
description: ${escapeYaml(`My answer to "${question.title}" on ${site.name}`)}
date: ${dateStr}
author:
  name: Nate Bross
tags:
${tags}
source: "${site.name}"
sourceUrl: "${site.url}/a/${answer.answer_id}"
---

*Someone [asked on ${site.name}](${site.url}/q/${question.question_id}):*

${questionBq}

${transition}

${answerMd}`;

  if (comments.length > 0) {
    const cm = comments.map((c) => `**${c.owner.display_name}** (${c.score}): ${turndown.turndown(c.body)}`).join('\n\n');
    content += `\n\n<details>\n<summary>Notable comments</summary>\n\n${cm}\n\n</details>`;
  }
  content += `\n\n---\n*Originally posted on [${site.name}](${site.url}/a/${answer.answer_id}) — ${answer.score} upvote${answer.score === 1 ? '' : 's'}${acceptedNote}. Licensed under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0/).*\n`;
  writePost(join(isAccepted ? BLOG_DIR : ARCHIVE_DIR, year, `${slug}.md`), content);
}

async function processQuestion(question: SOPost, userId: number, site: SiteConfig) {
  const comments = await fetchComments(question.question_id!, userId, 'questions', site.key);
  const { year, dateStr } = formatDate(question.creation_date);
  const slug = toSlug(question.title || 'untitled');
  const questionMd = turndown.turndown(question.body);
  const tags = (question.tags || []).map((t) => `  - ${escapeYaml(t)}`).join('\n');
  let acceptedSection = '';

  if (question.accepted_answer_id) {
    const accepted = await fetchAcceptedAnswer(question.accepted_answer_id, site.key);
    if (accepted) {
      const bq = turndown.turndown(accepted.body).split('\n').map((l) => `> ${l}`).join('\n');
      acceptedSection = `\n\n---\n\n> [${accepted.owner.display_name} answered](${site.url}/a/${question.accepted_answer_id}) (${accepted.score} upvotes):\n>\n${bq}`;
    }
  }

  let content = `---
title: ${escapeYaml(question.title || 'Untitled')}
description: ${escapeYaml(`A question I asked on ${site.name}`)}
date: ${dateStr}
author:
  name: Nate Bross
tags:
${tags}
source: "${site.name}"
sourceUrl: "${site.url}/q/${question.question_id}"
---

*I [asked this on ${site.name}](${site.url}/q/${question.question_id}):*

${questionMd}${acceptedSection}`;

  if (comments.length > 0) {
    const cm = comments.map((c) => `**${c.owner.display_name}** (${c.score}): ${turndown.turndown(c.body)}`).join('\n\n');
    content += `\n\n<details>\n<summary>Notable comments</summary>\n\n${cm}\n\n</details>`;
  }
  content += `\n\n---\n*Originally posted on [${site.name}](${site.url}/q/${question.question_id}) — ${question.score} upvote${question.score === 1 ? '' : 's'}. Licensed under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0/).*\n`;
  writePost(join(question.score >= BLOG_MIN_SCORE && question.accepted_answer_id ? BLOG_DIR : ARCHIVE_DIR, year, `${slug}.md`), content);
}

async function main() {
  if (!USER_ID) {
    console.error('Usage: SO_USER_ID=12345 SO_SITE=stackoverflow npx tsx so-backup.ts');
    console.error('Available sites:', Object.keys(SITE_MAP).join(', '));
    process.exit(1);
  }
  const site = SITE_MAP[SITE_KEY];
  if (!site) { console.error(`Unknown site "${SITE_KEY}".`); process.exit(1); }
  console.log(`User ID: ${USER_ID}, Site: ${site.name}`);

  let page = 1, totalAnswers = 0;
  while (true) {
    const answers = await fetchAnswers(USER_ID, site.key, page);
    totalAnswers += answers.items.length;
    for (const a of answers.items) {
      try { await processAnswer(a, USER_ID, site); }
      catch (err) { console.warn(`  Skipped answer ${a.answer_id}: ${(err as Error).message}`); }
    }
    if (!answers.has_more) break;
    page++;
  }
  console.log(`Processed ${totalAnswers} answer(s) from ${site.name}`);

  page = 1; let totalQuestions = 0;
  while (true) {
    const questions = await fetchQuestions(USER_ID, site.key, page);
    totalQuestions += questions.items.length;
    for (const q of questions.items) {
      try { await processQuestion(q, USER_ID, site); }
      catch (err) { console.warn(`  Skipped question ${q.question_id}: ${(err as Error).message}`); }
    }
    if (!questions.has_more) break;
    page++;
  }
  console.log(`Processed ${totalQuestions} question(s) from ${site.name}`);
  console.log('\nDone!');
}

main().catch((err) => { console.error('Error:', err); process.exit(1); });

SharpFM a FileMaker Developer Tool

One thing that FileMaker makes difficult is sharing your work without sharing the whole FileMaker file. Sharing table schema, scripts, calculations, custom functions, etc., is all difficult without sharing the complete .fmp12 file, which often includes data and other file-specific objects.

SharpFM is a utility to make it easy to save these FileMaker objects outside of a FileMaker app.

FileMaker allows you to copy/paste all of these objects from one file to another if you have both files open in the same copy of FileMaker. SharpFM taps into this behavior to let you save those objects as raw XML, using the Clipboard API to receive them and convert the raw binary data to XML that can be saved anywhere. In other words, you can’t copy a FileMaker script and paste it into Notepad, but you can paste it into SharpFM! The same goes for tables, layouts, script steps, and full scripts. Copy from FileMaker, paste into SharpFM. Then you can do the reverse too: copy from SharpFM and paste into FileMaker!

Once you have pasted objects into SharpFM, it stores them in a folder you select. From there you can save, share, and edit the XML files, and then use SharpFM to copy the raw XML back into the same, or a different, FileMaker file.

You can download it from GitHub: https://github.com/fuzzzerd/SharpFM. Go to Releases and download the latest version.

Feedback is welcome! If you use it, find it interesting, have a question, or want to contribute bug fixes or enhancements, please create an issue so we can discuss how to get it done.

How I make Strawberry Italian Ice

Summer is hot. I like sweets. I derived my recipe from an older Chicago Tribune article.

Ingredients

  • 1 cup sugar
  • 1 cup water
  • 20 strawberries (substitute a similar volume of your favorite fruit)
  • 6 cups ice cubes

Directions

The first step is to make our fruit sugar mixture:

  1. Dice fruit into small pieces.
  2. Make simple syrup: Mix 1 cup sugar and 1 cup water, boil until sugar is dissolved.
  3. Mix 1 cup of hot simple syrup with diced fruit.
  4. Allow to cool, and refrigerate overnight.

Note: this simple syrup recipe leaves a little extra for other uses.

The second step is to make the Italian ice:

  1. Pour the fruit and sugar mixture over 6 cups of ice. Blend with a powerful blender until smooth.

You can eat it right away, but it can be a little runny after being in the blender. I put it into a freezer safe container for an hour or two before serving. If frozen overnight, it may need to thaw for a few minutes before serving.

Shortcut

If you have simple syrup already and don’t want to wait, simply throw 1 cup of simple syrup, the fruit, and 6 cups of ice into the blender.

Enjoy

Eat it quick.

Scaling the UI in Thunderbird, including messages list et al.

I use a lot of software on a lot of devices. Sometimes the defaults just don’t work for me. Thunderbird is a great mail application, but using it on Ubuntu with a high-DPI screen, the messages list was tiny.

There is an advanced setting that allows you to scale the UI, but it was difficult to find out how to actually change it. This old support forum explains how to do it in an older version, and this Super User post gets you close.

The bottom line is that you need to get to the Configuration Editor to edit

layout.css.devPixelsPerPx

The default value was -1.0, but it's a scaling setting that you can dial in to get the exact size you want. For my situation, 2.25 seems to work well, and the change is immediate, so you can quickly get a sense of what’s going to work for you.
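If you prefer to set it outside the UI, Thunderbird (like Firefox) also reads a user.js file in the profile directory on startup. A sketch, assuming the same pref mechanism applies to your version:

```javascript
// user.js in the Thunderbird profile directory (assumption: same pref
// file mechanism as Firefox). Note the value is a string, not a number.
user_pref("layout.css.devPixelsPerPx", "2.25");
```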

Upgrading this site's Nuxt2 to Nuxt3 and Content Module v2

I often use this site to play with new technology, and as such, it goes through a lot of technical changes. When the site was originally upgraded to Nuxt2, that version had already been out for a while and Nuxt3 was in beta, so I knew this upgrade was coming. While working on some other projects, I realized I needed a better handle on Nuxt3 and decided to jump in.

Using Content v2

Content v2 comes with a bunch of quality-of-life improvements. The ability to use Vue components, with parameters, directly in Markdown is incredible. I’m using that in a few projects, and hope to leverage it on this site too.

I had a hard time getting things to work, because I didn’t read the docs. I started with documentDriven mode enabled to generate sitemap.xml. Since I was porting my Nuxt2/Content1 site, things weren’t working. Running the site with npm run dev, everything seemed fine; however, npm run generate would fail with 404 errors on some content:

Errors prerendering:
  ├─ /api/_content/query/brX4CwCJoQ.1710967419806.json (13ms)
  │ ├── Error: [404] Document not found!
  │ └── Linked from /
  ├─ /api/_content/query/wxmlyJ2dlX.1710967419806.json (13ms)
  │ ├── Error: [404] Document not found!
  │ └── Linked from /blog

It turns out that the default route added by documentDriven: true was conflicting with my [...slug].vue file in a way that didn’t totally break things, but didn’t totally work either.

I ended up fixing that by backing out of documentDriven mode and updating my catch-all slug route:

<script setup lang="ts">
const { path } = useRoute();
const { data: article } = await useAsyncData(`catchall-${path}`, () => {
  return queryContent().where({ _path: { $regex: path } }).findOne();
});
</script>
<template>
  <blog-details v-if="article" :article="article"/>
  <ContentDoc v-else />
</template>

This allows me to use my existing blog-details component to render my articles, but also lets me fall back to the <ContentDoc /> renderer if needed. The astute observer will notice that right now all routes go through blog-details; at a future date, if/when I want a different treatment, I will update the query to only use it for routes starting with /blog/* and use a different component for other paths.

This breaks the @nuxt/sitemap plugin, however; more on that below.

Handling Images

The site uses both @nuxt/image and nuxt-content-assets, which lets me store my images right alongside my *.md files under /content/. The docs explain how to make it all work, but here is a brief snip of the relevant configuration I needed.

// nuxt.config.ts
modules: [
  'nuxt-content-assets',
  '@nuxt/content',
  '@nuxt/image',
  '@nuxtjs/sitemap'
],
//...
image: {
  dir: '.nuxt/content-assets/public'
}

The nuxt-content-assets package requires this component be added to work with @nuxt/image:

// /components/content/ProseImg.vue
<template>
  <nuxt-img />
</template>

Sitemap.xml

With Nuxt Content's documentDriven: false, the sitemap doesn’t pick up any content, and the mechanism for automatically generating URLs is different in v5+ of the sitemap module (for Nuxt3). Putting this kind of code into nuxt.config.ts is considered an anti-pattern, so support for that was dropped.

This requires setting up a server route, which can then be referenced in nuxt.config.ts as the source for the sitemap data.

// /server/api/sitemap.ts

import { serverQueryContent } from '#content/server';

export default defineSitemapEventHandler(async (e) => {
  const routes = [];

  const posts = await serverQueryContent(e).find();

  routes.push({
    url: '/',
    changefreq: 'weekly',
    priority: 0.5
  });

  routes.push({
    url: '/blog',
    changefreq: 'weekly',
    priority: 0.75
  });

  posts.forEach((p) => {
    routes.push({
      loc: p._path,
      lastmod: p.lastMod ? p.lastMod : p.date,
    });
  });

  return routes;
});

With this change to nuxt.config.ts to make it link up:

sitemap: {
  sources: ['/api/sitemap'],
},

Tests that worked fine on Windows are failing with an IOException on Ubuntu.

I have a set of integration tests that have been working great on Windows for quite some time. While troubleshooting an unrelated issue I was running my tests on Ubuntu 20.04 LTS via WSL, and about half of the tests were failing with this IOException.

The Error Message

System.IO.IOException : The configured user limit (128) on the number of inotify instances has been reached, or the per-process limit on the number of open file descriptors has been reached.

It seems that because my tests use WebApplicationFactory<Startup> and call factory.WithWebHostBuilder(builder => ...), I’m creating many instances of the HostBuilder, each of which by default installs a file system watcher on the appsettings.json file. All of those instances exhaust the default number of inotify instances allowed. Switching to a polling mechanism clears this up.

The Solution I Found

Configure the environment variable to have dotnet use polling instead.

export DOTNET_USE_POLLING_FILE_WATCHER=true

I don’t know if there are unintended side effects of this change; however, since it only applies to test runs, I feel it’s a safe change to make.
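An alternative, if you’d rather keep inotify-based watching, is to raise the per-user instance limit via sysctl. A sketch (the sysctl name is standard on Linux; 512 is an arbitrary value, pick one that fits your test matrix):

```shell
# Check the current limit (128 by default on many distros)
cat /proc/sys/fs/inotify/max_user_instances

# Raise it for the current boot
sudo sysctl fs.inotify.max_user_instances=512

# Persist the change across reboots
echo 'fs.inotify.max_user_instances=512' | sudo tee -a /etc/sysctl.conf
```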

HTTP Status Codes Explained for Most Folks

There are lots of jokes about HTTP status codes, what they mean, and general frustration with the inconsistent way in which they’re used.

HTTP 1xx

  • 100: Continue sending me the request.

HTTP 2xx

  • 200: OK, here you go.
  • 201: OK, I created it for you.

HTTP 3xx

  • 301: Moved what you want to a new spot over here.
  • 302: Found what you’re looking for over here.

HTTP 4xx

  • 400: Your bad.
  • 401: You aren’t logged in.
  • 403: You’re logged in, but can’t do/see that.

HTTP 5xx: Our bad

  • 500: We screwed up.
  • 503: We forgot to start the server.

There are many more status codes, and as a developer I encourage other developers to use appropriate and specific response codes. For more specific details, Wikipedia has a good overview.
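In code, the class is just the leading digit of the status. A throwaway TypeScript helper matching the summaries above (describeStatusClass is my own illustrative name, not a standard API):

```typescript
// Maps an HTTP status code to the plain-language class summaries above.
// describeStatusClass is an illustrative helper, not part of any library.
function describeStatusClass(status: number): string {
  switch (Math.floor(status / 100)) {
    case 1: return 'informational: keep going';
    case 2: return 'success: here you go';
    case 3: return 'redirection: look over there';
    case 4: return 'client error: your bad';
    case 5: return 'server error: our bad';
    default: return 'not a standard HTTP status class';
  }
}
```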

Implied route parameters in ASP.NET Core Form Tag Helpers

With a route like /change/{id}, the id parameter is implicitly included if the view contains a form with method="post" that posts back to the same action name, for example:

View:

<form asp-action="change" method="post">

Controller:

[HttpPost]
public ActionResult Change(int id, Model model)

When changing the form action to include a change confirmation step:

<form asp-action="confirmChange" method="post">

The id route parameter was no longer included by default, so the controller action was receiving a default(int):

[HttpPost]
public ActionResult ConfirmChange(int id, Model model)

ASP.NET Core seems to include the current route parameters when posting back to the same action name.

The fix was fairly simple: explicitly include the route parameters on the <form> tag when posting across action names.

<form asp-action="confirmChange" asp-route-id="@Model.SomeId" method="post">

See also: Stack Overflow on updating route parameters.

Connection strings in Entity Framework

Someone asked on Stack Overflow:

I am referencing 2 databases in ASP.NET using Entity Framework.

In my web.config file, I can see the connection strings for the 2 databases:

<connectionStrings>
    <add name="RContext" 
         connectionString="metadata=res://*/Models.RModel.csdl|res://*/Models.RModel.ssdl|res://*/Models.RModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost\SQLEXPRESS;initial catalog=RStreamline;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;" 
         providerName="System.Data.EntityClient" />
    <add name="CEntities" 
         connectionString="metadata=res://*/Models.CModel.csdl|res://*/Models.CModel.ssdl|res://*/Models.CModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost\SQLEXPRESS;initial catalog=RStreamline;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;" 
         providerName="System.Data.EntityClient" />
</connectionStrings>

Can I somehow implement alternate connection strings where the datasource refers to the prod server for the release?

I posted the following answer, which was chosen as the accepted answer and received 1 upvote:

This is typically handled with web.config transforms.

In your project you would have:

  • web.config
  • web.Release.config

For example in your web.Release.config transform you would have something like this:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="RContext" 
      connectionString="RContext-Prod-Connection-String" 
      xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
    <add name="CEntities" 
      connectionString="CEntities-Prod-Connection-String" 
      xdt:Transform="SetAttributes" xdt:Locator="Match(name)"/>
  </connectionStrings>
</configuration>

You’ll notice the xdt:Transform="SetAttributes" xdt:Locator="Match(name)" bit, which says, in the main web.config find the connectionString by name and replace its attributes with the ones defined here.

This will automatically happen when you publish the application.


Originally posted on Stack Overflow — 1 upvote (accepted answer). Licensed under CC BY-SA.

Errors installing SSH Server on domain joined Windows 10 laptop using WSUS

While attempting to set up SSH access to a Windows 10 machine, following this guide: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse, I kept running into a generic Windows install failed error. It became apparent that the system was failing to download the optional Windows feature.

Turns out that for some reason the WSUS server the machine was connected to didn’t have that optional feature, so a local gpedit change was necessary to configure the machine to download optional features directly from Windows Update.

[Screenshot: the Group Policy setting enabling direct download of optional features from Windows Update]

After checking the box here, it was as simple as re-running the install from the Settings app.

Dad. Geek. Gamer. Software developer. Cloud user. Old Car enthusiast. Blogger.
Thoughts, opinions, and ideas shared here are my own. © 2026 Nate Bross.