Bug reports and what to do with them

At a minimum, a good bug report contains the answer to three questions:

  • What did you do?
  • What did you expect to happen?
  • What actually happened?

This doesn't just cover software; it's a reasonable place to start any fault diagnosis:

  • What did you do? I tried to open the door.
  • What did you expect to happen? The door to open.
  • What actually happened? Nothing!

(Although you will receive reports with more data than this, well-written, detailed reports will be the exception.)

(You probably have an intuition already about what the problem is. Learn to distrust that intuition, or at least treat it with scepticism.)

The first step in diagnosing an issue is to make sure you understand correctly what the issue is. People often misreport the issue for various reasons: they don't understand the technology ("The TV remote is broken", when the remote needs new batteries, is not being pointed at the TV, or is actually the remote for the hi-fi), they're trying to be helpful ("The TV wasn't responding to the remote so I took out all the cables and wrapped the ends in tin foil to help them conduct and I've put them back in and now the TV is on fire and the remote still doesn't work"), or they're worried about looking stupid ("The TV remote in the demo room isn't working, fix it before the clients turn up in 5 minutes and stop bothering me with stupid questions").

You are going to need to ask questions, you will need to ask very basic questions that could be interpreted as insulting ("Of course it's plugged in, do you think I'm stupid?"), and you will probably need to ask the same question more than once as people helpfully answer a different question.

Looking at our door example, try to establish why they are trying to open the door. Is it a door they go through often, or is this the first time? At this stage, you're still looking for context. Again, try not to think of solutions at this point, or even causes. The first step is to establish what actually happened, ideally well enough that you can reliably trigger the issue locally.

I say ideally, but it's very close to essential to be able to replicate the problem at will. If you're dealing with software, this is a good time to write a new test case. Write enough code to trigger the issue, and then start taking code away from your test until you get the smallest reliable trigger. This exercise has two aims. First, writing the test case should help find the rough area of interest in your source. Second, having a reliable test case means you can be confident that you have fixed the issue! Without a test, you can't be sure that your 'fix' has worked, or even fixed the right problem. With a test, you can apply the fix, run the test, confirm the issue doesn't reoccur, and then remove the fix, rerun the test and confirm the issue comes back. (Also, including the test in your automated test suite (you do have an automated test suite, yes?) makes it harder for someone else to reintroduce the issue later).
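As an illustration only, here's the sort of shape a minimal regression test might take (xUnit, with a made-up DoorLatch class standing in for whatever the real trigger turns out to be):

    using Xunit;

    public class DoorRegressionTests
    {
        // The smallest reliable trigger for the reported issue, with all the
        // incidental setup stripped away. DoorLatch is a placeholder name,
        // not anything from a real codebase.
        [Fact]
        public void Open_WithUnlockedLatch_OpensTheDoor()
        {
            var latch = new DoorLatch(locked: false);

            var opened = latch.Open();

            // Fails before the fix, passes after it, and fails again if the
            // fix is removed - which is the whole point of keeping the test.
            Assert.True(opened);
        }
    }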

Once you understand the problem, have isolated the issue, built a test (or series of tests, don't hold back here), and written and confirmed your fix, this is a good time to look through your codebase for similar patterns (or exact duplicates) that you can fix at the same time. (It's great to have users report bugs, but it's far better to not have bugs for them to report.)


Looks like I can use IControllerModelConvention to manipulate controllers at app start-up time, to add hostname to routing (to set a default hostname).


Error page ideas

These are ideas for the static (well, ssi maybe) pages from nginx. If possible, check for a body from the backend and use that, otherwise send a self contained static page (light on the css, no js).

Use the 4xx/5xx split ("It's not me it's you"/"It's not you it's me"), a custom description of the specific error, and advice on what to try next. Include a transaction ref (maybe).

  • 400 - Bad request - "You've done something so generically wrong that I can't tell what it was."
  • 401 - Should be issued by a backend with content; can't do much with it at the proxy.
  • 403 - Forbidden - "Your provided credentials do not permit access to this page. Please wait while I summon a security enforcement team to your location."
  • 404 - Not Found - "You asked for a page that isn't here. This page is here instead."
  • 405 - Method not allowed - "You can't $method this page!"
  • 413 - Content too large - "You've sent more data than that page can cope with."

  • 500 - Server error - "I think something has wrong gone."
  • 501 - Not implemented - "I don't even understand the question."
  • 502 - Bad Gateway - "I asked a friend for the page but they've let me down."
  • 503 - Service unavailable - "I am far too busy to talk to you."
  • 504 - Gateway timeout - "I asked a friend for the page but they're taking too long."

I wonder how much trouble it's going to be to get nginx to pick a random line from a set.


Now running with the new Nginx setup.

I tried putting all the site configs in one file and using the $host variable to pick the right port (to proxy to) and certificates. Didn't work.

Instead, now I've split out the config into two include files, and a file for each domain/site. The per domain files just set the server name (per the Host: header), the proxy port, and the path to the certs, and then include the http include file (with the standard redirect) and the https include file (with all the other (mostly proxy) settings).

It's a shame that Nginx doesn't have a configuration level between http and server, to group config for similar sites, although using include is working for me here.

Last thought about this - I should be able to use the $server_name variable in the https include file to build the path to the certs, so the only per site config is the server name and proxy port. (The point would be to minimise the chance that I'm going to forget to change a name when I next set up a site.)

(Ok, I said last thought, but husband's site is static and can be handled by nginx on its own)


The webmail refactor is going ok, I think.

Webmail runs behind two domains. One for the content of emails, and one for everything else. This is to take advantage of the same origin rules, so scripts in emails can be effectively isolated from the main site.

The previous version of webmail ran as two separate dotnet apps to serve these two domains so that requests to the content domain couldn't 'leak' into the interface domain. (And also because, frankly, I didn't know any better.)

However, ASP Core can route based on hostname, which gives enough of an isolation guarantee that I can fold the code that serves content into the main app, and handle both sets of requests.

I have to tag each 'endpoint' (roughly, each action method in the controllers) with a hostname, either the content domain or the interface domain. There is an existing HostAttribute, but (since it's an attribute) it can't be set at runtime from config, and I don't want to set it on every controller in case I forget and miss one.

This is where the IControllerModelConvention from a few posts back comes in. I've added a call in Startup to add a convention that loops through every Action to add a host to the RouteValues collection, either the interface domain (default) or the content domain (if the action is tagged with the right attribute).

(Strictly, at this point some of that isn't true. I've set up the convention and I'm looping through the actions, but I need to create an attribute and add it to the appropriate action(s) (although I think there's only one). Still, it should work.)
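For my own reference, the convention I'm describing looks roughly like this; the ContentHostAttribute is the still-to-be-written attribute, the host names would come from config, and the whole thing is a sketch rather than the finished code:

    using System;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc.ApplicationModels;

    // Placeholder for the attribute I still need to write: it marks actions
    // (or whole controllers) that should be served from the content domain.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
    public class ContentHostAttribute : Attribute { }

    public class HostRouteConvention : IControllerModelConvention
    {
        private readonly string _interfaceHost;
        private readonly string _contentHost;

        public HostRouteConvention(string interfaceHost, string contentHost)
        {
            _interfaceHost = interfaceHost;
            _contentHost = contentHost;
        }

        public void Apply(ControllerModel controller)
        {
            foreach (var action in controller.Actions)
            {
                // Default everything to the interface domain; only actions
                // tagged with the attribute get the content domain.
                var isContent =
                    controller.Attributes.OfType<ContentHostAttribute>().Any() ||
                    action.Attributes.OfType<ContentHostAttribute>().Any();

                action.RouteValues["host"] = isContent ? _contentHost : _interfaceHost;
            }
        }
    }

Registration should then be something like options.Conventions.Add(new HostRouteConvention(interfaceHost, contentHost)) inside the AddControllersWithViews call in Startup.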

This is a bunch of work but it's worth it, mostly because it's one less server to run, manage, and allocate resources for. (Also, the two servers used to communicate through the db, and that might not be needed, but fixing that isn't so big a priority).

Anyway. I'm feeling good about that whole thing.


One of the (very many) things that's bugging me at the moment is part of the code for the Webmail server.

The code that sends the actual content of an email to the client boils down to a call something like:

Message.Part.WriteTo(HttpResponse.Content)

(the names are all wrong, don't worry about it)

The problem with this is that I can't take advantage of the framework's tools to send files to clients. (Where 'file' in this case is exactly equivalent to a 'stream'.)

I could, of course, save the content to memory/disk (based on some size heuristic), but that would cost both storage space and time. I'd therefore prefer some kind of streaming solution.
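One shape that might work, assuming MimeKit (where MimeContent.Open() gives back the decoded content as a readable stream) and with the lookup method left as a placeholder:

    using System;
    using Microsoft.AspNetCore.Mvc;
    using MimeKit;

    public class ContentController : Controller
    {
        public IActionResult Body(string messageId, string partId)
        {
            // GetPart is a stand-in for whatever actually finds the MimePart.
            MimePart part = GetPart(messageId, partId);

            // Open() returns a stream that decodes the transfer encoding on
            // the fly, and File() streams it to the client without buffering
            // the whole part in memory or on disk first.
            return File(part.Content.Open(), part.ContentType.MimeType);
        }

        private MimePart GetPart(string messageId, string partId) =>
            throw new NotImplementedException();
    }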

I'm still thinking about it, and I'm not sure it's worth the effort.




I'm pushing myself too hard again. It's the weekend and I should be relaxing, but I've got a brain full of "got to do stuff!".

I mean, I'm not actually doing stuff (ish, I got some work done on webmail this morning, put away yesterday's washing, put in another load, organised Morrisons for tomorrow and drove us both to Asda for a few things).

What I mean is, I feel bad when I just sit (or lie down) and relax. I don't get to relax, I keep thinking about all the stuff I 'should' be doing (although not in any concrete way, just the whole "I should be doing something" feeling).

So I'm going to stop here and get some sleep.


TODO

Reality

  • Husband wants help dying their hair

Infrastructure

  • Decommission postgres (see osric.uk)
  • Install nginx error pages
  • Move husband's website

Blog stuff

  • Move controls to the left bar if there is space (where 'enough space' is defined as the screen is wider than the current --max-width)
  • Add a calendar with links to posts
  • Add forward/backward links to individual posts
  • Add a 'save all to zip' option

Webmail

  • Get dovecot configured
  • Get postfix configured
  • Turn off the Server sent events stuff, for now at least
  • Get the host security stuff working: now that I've tagged endpoints with RouteValues[host], requests to the 'wrong' host should 404 (not 401/redirect to auth)

Osric.uk

  • Move database from postgres to sqlite, decommission postgres

Gah, I forget that restarting the server logs me out.

Husband's dyed their hair, I've sorted nginx error pages, and I've moved the blog menu to the left on wider screens.

I want to add a calendar to the left bar, that shows the current month, except it updates to show the right dates for blog entries.

Clearly it's just a gimmick, probably not worth the effort.


Image service

Google is warning me that I'm running out of photo storage space, so it's time to get serious about pulling my images out of Google photos and into some kind of useful local service.

The hard part, for me at least, is designing a useful and pleasant-to-use interface. I can lean on other designs, but it's still going to take a bit of work.

A specific feature I want is a 'Tag/Describe a random photo' page that loads a photo at random (either from all photos, or from untagged photos) and lets me set the tags and/or description. (Exif has an 'Image Description' field that's probably the right place to start).


The move from postgres to sqlite for osric.uk is ready to push. I would have done it this evening, except I was watching the final episode of The Orville series 3, which was just awful.

I've been pleasantly surprised by most of the season; it's been a good, modern sci-fi TV show. The scripts have been a little rough/loose in places, but the technical side (effects, camera work, editing, etc.) has all been good and the stories have been interesting (if a little poorly executed at times). And then we got this episode. Two (really unconnected) stories: the robot crewmember proposed to his girlfriend, and the reasonable questions about "what does love mean to an emotionless machine" were skated over in favour of "here's some funny bad advice from the comedy robot". (The other story was a stab at "this is why we don't interfere with primitives"; it didn't add anything new to the genre.)

I couldn't take it. I had to bail 15 minutes from the end, which (for stupid logistical reasons) meant I've shut down my development environment for the night. Ah, well. The code will still be there tomorrow.


Want/need to get hi-res image uploads sorted, along with basic image manipulation tools.

I should benchmark ImageSharp to see if doing transforms on the fly is viable (along with finding out how it handles rotate - is the canvas resized automatically, and what is the background fill if it's not).
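The benchmark itself should be tiny; something like this (SixLabors.ImageSharp), timed over a spread of sizes and angles, with the rotate behaviour checked by eye:

    using SixLabors.ImageSharp;
    using SixLabors.ImageSharp.Processing;

    // Load, rotate by an arbitrary angle, re-encode. This is the whole
    // on-the-fly transform I'd be timing.
    using (var image = Image.Load("input.jpg"))
    {
        image.Mutate(ctx => ctx.Rotate(30f));
        image.Save("rotated.jpg");
    }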

I also need/want to put the design time into operators/language/macros, and an interpreter to parse them into ImageSharp calls.

Image Operators

Requirements

  • Must comfortably fit in a query string (avoid blocks, avoid whitespace significance)
  • Must be constructable from a push button interface (can use HTML5 inputs with minimal styling)
  • Should be constructable by a human with a text editor
  • Should be forgiving of technically invalid input (sensible defaults, clamp ranges, only invalid if contradictory, impossible, or conflicting)
  • Should do as little work as possible (e.g., spot that an operation followed by its inverse is a no-op and skip both)
  • May allow users to define macros/functions/shortcuts for common operations
  • May allow more than one output (e.g., rotate original and save as full size and thumbnail)
  • Must preserve original input
  • Must use consistent addressing (i.e., either top, left, width, height, or top, left, bottom, right)
  • Should allow percent as well as pixels
  • Should allow calculations (e.g., width/2)
  • May allow user variables (easy enough)
  • Should have useful set of system variables
  • Must apply operations left to right
  • Should report syntax errors with a location

Useful operations

  • crop(top, left, width, height)
  • resize(percent) // maintain aspect ratio
  • resize(width, height) // change aspect ratio
  • rotate(angle) // +ve clockwise, 0 pointing up

Implementation ideas

This is sounding like a basic expression parser. I'm not that worried about efficiency, so tokenize -> ast -> interpret should be fine (and I can always add caching later at various points). Tokens will be basic maths ('+', '-', '*', '/', '(', ')'), decimal (not floating point) numbers, and identifiers/keywords (I'm not sure about identifiers that aren't keywords yet).
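A first sketch of the tokenizer half, with everything (token names, identifier rules) still provisional:

    using System;
    using System.Collections.Generic;

    public enum TokenType { Number, Identifier, Plus, Minus, Star, Slash, LeftParen, RightParen, Comma, End }

    public record Token(TokenType Type, string Text, int Position);

    public class Tokenizer
    {
        private readonly string _source;
        private int _pos;

        public Tokenizer(string source) => _source = source;

        public IEnumerable<Token> Tokenize()
        {
            while (_pos < _source.Length)
            {
                char c = _source[_pos];
                if (char.IsWhiteSpace(c)) { _pos++; continue; }

                int start = _pos;
                if (char.IsDigit(c))
                {
                    // Decimal numbers: digits with an optional '.'
                    while (_pos < _source.Length && (char.IsDigit(_source[_pos]) || _source[_pos] == '.'))
                        _pos++;
                    yield return new Token(TokenType.Number, _source[start.._pos], start);
                }
                else if (char.IsLetter(c))
                {
                    while (_pos < _source.Length && char.IsLetterOrDigit(_source[_pos]))
                        _pos++;
                    yield return new Token(TokenType.Identifier, _source[start.._pos], start);
                }
                else
                {
                    var type = c switch
                    {
                        '+' => TokenType.Plus,
                        '-' => TokenType.Minus,
                        '*' => TokenType.Star,
                        '/' => TokenType.Slash,
                        '(' => TokenType.LeftParen,
                        ')' => TokenType.RightParen,
                        ',' => TokenType.Comma,
                        // Syntax errors get reported with a location.
                        _ => throw new FormatException($"Unexpected '{c}' at position {_pos}")
                    };
                    _pos++;
                    yield return new Token(type, c.ToString(), start);
                }
            }
            yield return new Token(TokenType.End, "", _pos);
        }
    }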

I certainly want comments, so I can comment out chunks of code (How complicated are you expecting your stuff to get!), and really any pair of characters will do.

Time to have another flick through Crafting Interpreters, I guess.




It might be too hot again.

Otherwise, today was a good day. I'm writing a k8s monitor app at/for work and I think I've got the "watch" stuff cracked so I can do things like leave a page open showing events and pod logs.

Husband's dad is coming up for the weekend, this weekend (eeek!). Husband's got a fairly complete itinerary planned, but I'm still going to need to interact with the honoured elder.

But it's late and I'm tired, so good night dear reader(s), see you soon.

(Note to self: Anonymous usage stats aren't immoral)


I'm tagging the blog software as version 1, it's time to think about version 2.

Blog - Version 2

Damnit, it's really irritating, but I think I'm going to have to put everything in a database. The logic chain runs something like:

  • I want to (algorithmically) add everything that I can to the blog. What I played on Spotify, what pushes to GitLab, what I asked Google, the photos I take, maybe even the emails I send and receive, all the online stuff I do should be an entry on the blog.
  • That's a lot of stuff, and I should keep track of its provenance (? source, context)
  • Some of it is personal/offensive/immoral/illegal (depending on context), and so needs to be tagged as such (and not shown to people who don't have the right auth)
  • I want to be able to comment on automatic posts ("So this is a photo of me and husband on holiday") (and probably manual posts too - see Bernice Summerfield for examples), and probably have replies to comments and comment trees
  • I want to save copies of posts before they're edited.

That's a bunch of metadata on top of the actual data. I could keep it in files, and I'm quite tempted by something like a Mime message.

Digression - Keep it all in files

Sqlite is very nice, and likely to be available for the foreseeable future, but there's something about text files that suggests a higher level of permanence.

Somewhere above I wrote about using IMAP as a backend, but dropped the idea because IMAP says messages are immutable. However, I also dropped the whole "Use rfc822 formatted files for storage" thing, and that may have been a mistake (or at least, premature).

(Note: I can't remember the current version number for rfc822, it's somewhere in the 5000s, I think. Please read 'rfc822' as 'The current internet mail format rfc' unless otherwise stated)

Ignore the email heritage of rfc822. It gives a structured way to add metadata to a file (metadata at the top of the file as Key: Value pairs, a blank line, and then the file data). Add in Mime, and there's support for keeping several files together (e.g., a post and photos (or other attachments), a post and its edits, a post and its comments).

There are plenty of independent tools to create, read, and update mime messages (delete is easy), and I'm already familiar with MimeKit for C#.

I would need to worry about simultaneous access/locking (but I could manage that from within the app). I might need to sort out indexing, although I do like keeping info in the filename.

Conclusion

As always, add another layer of abstraction. Write an IBlogStorage interface with operations for get and save entries. Move to an

I begin to see why people advocate so hard for interfaces. I'm starting to think in terms of an IEntry with operations like GetContent(Version?), SetContent(string), AddAttachment, AddComment(Comment? Parent), that I can write different backends for.
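Something like this is the shape in my head, with all the signatures still guesses (Comment here is just a placeholder type):

    using System;
    using System.Collections.Generic;
    using System.IO;

    public class Comment { /* placeholder */ }

    public interface IEntry
    {
        string Id { get; }
        string GetContent(int? version = null);             // null = latest
        void SetContent(string content);                    // creates a new version
        void AddAttachment(string name, Stream data);
        void AddComment(Comment comment, Comment? parent = null);
    }

    public interface IBlogStorage
    {
        IEntry GetEntry(string id);
        IEnumerable<IEntry> GetEntries(DateOnly from, DateOnly to);
        void Save(IEntry entry);
    }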

(Poot. I think "storage layer" has gone out of fashion in favour of "database layer". Entity Framework makes too many assumptions that its underlying storage is SQL, and it's got into my brain. Ah, well, I'll cope.)

More thinking later, it's hot and late now. See you in the morning, dear reader.


I had a bit of a play with ASP Core Areas at $WORK today, and they are the right solution to the "I want this project to have a bunch of loosely connected modules that aren't worth putting into different projects/processes" problem.

Adding it to a project is easy enough, create a top level Areas folder with a sub-folder per sub-project/module/area.

Each of those sub-folders has its own set of Model/Controller/View folders/types, and the framework will find the right view, so long as each controller has an Area attribute.

DI config can go in program/startup as normal, or one can try something a bit more fancy. (The demo at work has an IStartup interface with a method that takes an IServiceCollection. At app start time, all the implementations of the interface are found, instantiated, and called. An alternative that's been in my head for a while is to tag types that should be available through DI with an attribute, although I'm starting to prefer the interface way since it's much more flexible.)
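A sketch of the 'find, instantiate, and call' part, using a renamed interface (IServiceInstaller) so it doesn't collide with ASP.NET Core's own IStartup; the demo at work will differ in the details:

    using System;
    using System.Linq;
    using Microsoft.Extensions.DependencyInjection;

    public interface IServiceInstaller
    {
        void Install(IServiceCollection services);
    }

    public static class ServiceInstallerExtensions
    {
        // Find every concrete implementation in this assembly, create it,
        // and let it register its own services.
        public static IServiceCollection AddInstallers(this IServiceCollection services)
        {
            var installers = typeof(IServiceInstaller).Assembly
                .GetTypes()
                .Where(t => typeof(IServiceInstaller).IsAssignableFrom(t)
                            && !t.IsAbstract && !t.IsInterface)
                .Select(t => (IServiceInstaller)Activator.CreateInstance(t)!);

            foreach (var installer in installers)
                installer.Install(services);

            return services;
        }
    }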

Anyway. Have I already said goodnight tonight? Goodnight readers, sleep well.


Write ahead logs

Roughly, when the storage engine gets a request that would change a file, write the details of the change to disk before doing the change, so if the change is corrupted then we can tell by comparing what should have happened (the log) with what did happen (the real files).
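The write side of that is small; a sketch, with the format (one JSON record per line) picked purely for illustration:

    using System.IO;
    using System.Text.Json;

    public class WriteAheadLog
    {
        private readonly string _logPath;

        public WriteAheadLog(string logPath) => _logPath = logPath;

        // Append the intended change and force it to disk *before* the real
        // files are touched, so a crash mid-change leaves evidence of what
        // should have happened.
        public void Append<T>(T changeRecord)
        {
            using var stream = new FileStream(_logPath, FileMode.Append, FileAccess.Write);
            using var writer = new StreamWriter(stream);
            writer.WriteLine(JsonSerializer.Serialize(changeRecord));
            writer.Flush();
            stream.Flush(flushToDisk: true);   // the fsync equivalent
        }
    }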

I'd be tempted at that point to go full Event Source and treat the log as the source of truth, and read state at boot time.

Or, don't waste time badly implementing a database and use a real database instead.

Maybe it's entity I don't like? Maybe I should go back to writing the raw sql myself (except entity is really convenient? It's so nice to be able to more or less ignore everything lower level than sets of types. Taking the move from postgres to sqlite as an example, none of my logic changed (and ok, it's all fairly basic). Maybe I should move the blog to a db).

Blog Schema


Entry
  Id
  ->Owner
  ->Content
  ->Comments []
  ->Versions []
  
Content
  Id
  Blob
  Created Timestamp
  ContentType
  Size

Comment
  Id
  ->Content
  ->Parent Comment?
  ->Parent Entry
  ->Child Comments[]
  ->Versions []
  ->Owner
  State {Unreviewed, Ham, Spam, Flagged}
  ->ReviewRecord[]
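
If this ends up in Entity Framework, the entity classes fall out of the schema more or less directly; a partial sketch (Owner flattened to a string, and the ReviewRecord list left for later):

    using System;
    using System.Collections.Generic;

    public class Entry
    {
        public int Id { get; set; }
        public string Owner { get; set; } = "";
        public Content Content { get; set; } = null!;
        public List<Comment> Comments { get; set; } = new();
        public List<Content> Versions { get; set; } = new();
    }

    public class Content
    {
        public int Id { get; set; }
        public byte[] Blob { get; set; } = Array.Empty<byte>();
        public DateTime Created { get; set; }
        public string ContentType { get; set; } = "";
        public long Size { get; set; }
    }

    public enum CommentState { Unreviewed, Ham, Spam, Flagged }

    public class Comment
    {
        public int Id { get; set; }
        public Content Content { get; set; } = null!;
        public Comment? ParentComment { get; set; }
        public Entry ParentEntry { get; set; } = null!;
        public List<Comment> ChildComments { get; set; } = new();
        public List<Content> Versions { get; set; } = new();
        public string Owner { get; set; } = "";
        public CommentState State { get; set; }
    }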

Thing is, it's very easy to get carried away with database schema design. Do I really need to keep a log of who reviewed a comment (and when), when I'm the only person with an account?

Commands, Queries, and Events

Commands are instructions ("Create an entry"). Events are a report on something that happened in the past ("An entry has been created"). Queries ask about the state of things ("What entries exist?").

Somewhere there's an engine that converts commands into events. We also need a storage system that can listen to events and answer queries. Finally, there must be some kind of system that generates commands, sends queries, and processes the results.
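Spelled out as types, it's not much (all names invented on the spot):

    using System;
    using System.Collections.Generic;

    // A command asks for something to happen; an event records that it did.
    public record CreateEntry(string Owner, string Content);     // command
    public record EntryCreated(int EntryId, DateTime At);        // event
    public record EntriesBetween(DateOnly From, DateOnly To);    // query

    // The 'engine': turns a command into zero or more events.
    public interface ICommandHandler<TCommand>
    {
        IEnumerable<object> Handle(TCommand command);
    }

    // The storage side: applies events and answers queries.
    public interface IEventListener
    {
        void Apply(object @event);
    }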

I think I'm overthinking again


That was a little unexpected, but I've moved production from Postgres to sqlite. Next time, I should try to remember what's been merged when I push to prod.


Now I've got the by date view working, previous and next become more pressing.

I need to make a choice about semantics: Does previous/next always refer to entries, or on (e.g.) a day view, does it mean the previous/next day, even if there aren't any entries.

Ignore that, it's clearly about entries. Probably once I add in other data sources, there's going to be entries every day anyway, but I might as well skip empty days (and stop at the date of the earliest entry). (A reader who decodes the URL format is welcome to load a blank page, but if I add auto links to the infinite past, some stupid spider will blindly follow them.)(serves it right)


Datasource

The word 'Datasource' in the previous entry triggered a minor epiphany. It's exactly the right term to use for the various components I want to add (Google Photos, Twitter posts, blog posts, etc.). If I can come up with a fairly basic set of common operations (getName, getEntries(from, to)) for datasources, and for entries, then I can wrap everything in a couple of interfaces, and maybe pull some DI tricks at start time to dynamically include known sources.
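As a first cut, the common surface might be as small as this (names and signatures are guesses at this point):

    using System;
    using System.Collections.Generic;

    public interface IDatasourceEntry
    {
        DateTime Created { get; }
        string ToHtml();      // the entry draws itself, already escaped/sanitised
    }

    public interface IDatasource
    {
        string Name { get; }
        IEnumerable<IDatasourceEntry> GetEntries(DateTime from, DateTime to);
    }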

I'm not sure yet what impact that will have on the existing blog code, especially creating entries. (I think I also want to tweak my language to mark the difference between a post on my blog, and an entry in the 'lifestream' (urgh, not using that name).)

And ok, clearly since the blog is just another datasource, it won't need any changes. Instead, what I need is a new page that ~~gloms together~~ aggregates the various sources and displays them.

Requirements

First draft, off the top of my head, blah blah blah.

Entries

  • Entries draw themselves. The environment will draw a header (the time the entry was created, the source, and an edit link if the entry is editable), and the entry supplies HTML ready to add to the output stream (i.e., any user sourced markup is either escaped or very well sanitised)
  • Minimum data set:
    • Creation date
    • Source (link)
    • Permalink (a link to the entry in isolation, but still hosted here)
    • Body/Content
  • Under consideration
    • Something like a big vs small flag, although they might be per source (bookmarks and Spotify plays are small, photos and blog posts are big, although blog posts can be small)
    • IsFirst/IsLast/Next/Previous (to help the renderer, e.g., pull all the previouses, get the most recent, and that's the date for the previous page)

Datasources

  • I really want everything on one page, but there might be a lot of data once I start pulling in photos (and bookmarks). I could default to the current year (or even the current day), but I really don't want to.

Alright, looks like I'm too tired to carry this on. See y'all later.



Another one of those 'just want to sit here and cry' mornings, although I don't seem to be able to actually cry anymore.

Depression sucks.


I have been writing software since 1981, when Dad brought home a ZX81. Since then I have used more than a dozen different programming languages, and I still enjoy writing code and using software to solve problems, whatever the language.

I have been SC cleared for about four years, from when my current role within HMRC started. Nominally, the role is 'Front End Developer', but I have been leading on a range of software projects, writing full stack code from the HTML/JavaScript/CSS front-end, through the .NET/.NET Core C# MVC web tier, past the Entity Framework/Entity Core database access layer, down to the Dockerfile and bash scripts to automate the build.

As an example, a couple of years ago, HMRC needed to update 'COBRA', a Microsoft Access application that was used to track the near-real-time flow of money into and out of the department's bank accounts as customers make payments to us, and as we make payments to customers. The outputs from this app are sent to the Treasury five times a day to inform them of how much cash on hand the government has.

As a Microsoft Access application, the previous version was restricted to a single user. This didn't fit very well with the high-profile nature of the application, and I was asked to rewrite it as a high availability solution.

I designed the solution around an AWS RDS MSSQL Server instance running in Multi-AZ replication mode. After that, it was a fairly standard C# .NET Framework MVC application, duplicated across two EC2 frontend machines (again, split across two AWS availability zones for redundancy).

My current role is split between development and system administration. The sysadmin part of the role covers creating and maintaining the CI/CD solution for the team. We have been using a self hosted install of GitLab as a git server and job runner, along with copies of Jenkins installed on our EC2 instances to take care of actual deployment.

Most recently, we are in the process of moving to the department's hosted Application Lifecycle Management (ALM) tooling, including their GitLab install, Artifactory, and Vault. We are also moving to a Kubernetes (K8S) cluster, so I have been updating our build pipelines to use Docker to make container images for deployment.

Our team's move to .NET Core has helped the move to containers, as ASP Framework apps only really work well under Windows. However, the move to ASP Core on Linux containers has meant that we are using OAuth 2 against Azure Active Directory for authentication/authorization instead of Kerberos and local Domain Active Directory.

While the team owns our main project, we are called in to help other projects, since we seem to have a reputation as a team that works quickly and well. For example, last year I was working with the Intelligent Payment Project (IPP) to build them an MVC application to capture data from front line users and submit it to the project's API. I worked with the architect and the product team from early in the design process to make sure that the form I was building was as simple as possible, by avoiding asking the user for information that wasn't needed by the backend process, or that could be synthesized from other answers.


Working as a front line contact center advisor has helped me develop many things:

  • An understanding of the customer point of view
  • The ability to explain complex technical ideas to people from different backgrounds

2005-2017 Call Center Advisor (HMRC)

As a front line call handler I took calls from the public, answered their questions, and updated systems in line with department policy.

Working in this role helped me develop the skill of explaining complex technical ideas to people with a wide range of knowledge.

2017-2018 Guidance Author (HMRC)

I was promoted into a role with the Guidance team, who are responsible for writing and maintaining HMRC's internal guidance for front line call center advisors and other process workers.

As part of this role I moved to the technical side of the team after updating some ASP Classic Visual Basic to run more efficiently, replacing a runtime of 30 minutes with one of 2 or 3 seconds.

2018 - Current Guidance Development Team (HMRC)

Another promotion led me to my current role. I am working as a developer and system administrator with the Guidance Development Team. We build and maintain the software that hosts the internal guidance.


That's NUglify integrated into the site. I've got a new middleware that minifies js/css files, and stashes the minified copies in memory.

I'm thinking about bundling. I'm also thinking about a cleverer cache that will flush to disk every so often, but that's a component in its own right (see also: Image transforms).

Bundles take a list of files and concatenate them, to reduce the number of requests for a page. It's not so critical now with HTTP 2 and 3 reusing the connection, but there's still overhead from headers.

The main problem I've got with bundling is that I want to be able to specify a different list of files for each page, and the best way to do it is not obvious.

The page needs to communicate the list of files to the bundler, and it needs to do it via the browser, since the server doesn't know which bundle request goes with which page.

Having said that, I'm settling on a 'just stick the list of files in the URL' approach, probably using a tag helper to convert something easy for humans to type and maintain into something that's easy for the machine to parse at the other end, probably as the query string.
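A sketch of the tag helper half of that: turn a human-friendly attribute into a query string the bundler endpoint can parse (the element name, endpoint path, and parameter name are all made up here):

    using System;
    using Microsoft.AspNetCore.Razor.TagHelpers;

    // Usage in a view:   <bundle files="site.css, blog.css" />
    // Rendered output:   <link rel="stylesheet" href="/bundle.css?files=..." />
    [HtmlTargetElement("bundle")]
    public class BundleTagHelper : TagHelper
    {
        public string Files { get; set; } = "";

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            // Normalise the easy-to-type list into an easy-to-parse one.
            var files = string.Join(",",
                Files.Split(',', StringSplitOptions.TrimEntries | StringSplitOptions.RemoveEmptyEntries));

            output.TagName = "link";
            output.TagMode = TagMode.SelfClosing;
            output.Attributes.SetAttribute("rel", "stylesheet");
            output.Attributes.SetAttribute("href", "/bundle.css?files=" + Uri.EscapeDataString(files));
        }
    }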


Service workers

Up till now, I have mostly ignored the activate event. This is a mistake. The activate event signals that the service worker code has been updated, and so I might want to invalidate the cache and repopulate it.

Other important notes

  • The page that calls register doesn't get the service worker! The page has to load through the service worker before it will use the service worker.
  • A new version of the service worker is installed if/when the fetched version is different
    • This implies that we should be careful loading the sw from sw cache
  • The new version doesn't start taking events until all pages using the old worker have closed
  • Refreshing the page doesn't count as closing! You must navigate away or close the tab. (Irritating from a debugging point of view)
  • force-reload (shift-reload?) bypasses the service worker

It looks like we're expected to use a different cache name with each different version of the sw script, and delete old caches from the activate event.

Todo: Work out how to programmatically set the cache version, ideally using either the hash of the sw code, or the slug of the timestamp.



Testing ASP Core apps

Prompted by $WORK, it's time to look properly at how to do end-to-end testing of ASP Core apps. I'm going to use Webmail as my example, mostly because it's got at least one heavy runtime dependency (dovecot).

My understanding of modern practice is that I should be running my test in a container, or possibly more than one:

  • The app itself
  • Dependencies (dovecot here)
  • The test runner

Running in containers gives me a known starting state every time, assuming I can get my containers set up right.

Having said that - I've got most of the script to build the machine anyway, is it going to add much more to the runtime to spend an extra 10 seconds starting a VM compared to a container? I suppose it depends on how long the tests take to run.
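Whichever way the environment gets built (containers or a VM), the test-runner end can stay the same; a sketch, with the base address handed in through an environment variable I've just invented:

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Xunit;

    public class WebmailSmokeTests
    {
        // TEST_BASE_URL points at wherever the app is running - a container,
        // a VM, or a local dev instance. "/login" is just an example path.
        private static readonly HttpClient Client = new()
        {
            BaseAddress = new Uri(Environment.GetEnvironmentVariable("TEST_BASE_URL")
                                  ?? "http://localhost:5000")
        };

        [Fact]
        public async Task LoginPage_Responds()
        {
            var response = await Client.GetAsync("/login");
            Assert.Equal(HttpStatusCode.OK, response.StatusCode);
        }
    }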

More data: Podman runs on WSL!

OK, time to write some Containerfiles.


Just had to login again, I'm trying to remember why I restarted the server. It might have been logging stuff?

New todo: write a reader for systemd journal.


I've been moving webmail over from Bytemark to Mythic Beasts, and it's nearly ready. The only remaining problem is that the SpamAssassin integration I'm using (spamass-milter) isn't correctly picking up the name of the destination mailbox.

I think that the problem is that it runs too early in the processing pipeline, before postfix has done the alias lookup. Allegedly, it can look up aliases at process time, but it doesn't seem to be doing that.

The alternative is to use spampd, a wrapper around SA that acts as an LMTP proxy, and is therefore well past alias translation.

Of course, there's still a problem! spampd doesn't use SA's per user preferences. I hacked the previous version to do so, but the author rejected my patch as they'd just done a rewrite. Time to see how much work it's going to be to fix it, I guess.



Squeeee! I'm fairly sure that was the last thing that needed fixing/looking at. I've already tested sending mail (as part of DKIM), and that's all set up and working (with a script that can generate new DKIM keys every month!).

Just need to move over actual emails and update dns, then I can get the old servers turned off. Eeek.

I kind of wish I felt worse about moving away from Bytemark, but they moved away first when they sold themselves to a faceless conglomerate without really telling anyone. Shame, but I'm looking forward to Mythic, and (given the discount from their job advert challenge) it's going to be much cheaper for a bigger machine (£16/month for 2 core 4GB vs £32/month for a pair of 1 core 1GB).

I should pull down all the config. I'm very tempted to wipe and reinstall to check that I've got everything, and to tidy up wrong turnings. I'm going to think about that a bit more.


wepiu recipe

  • postfix
    • postfix
    • postfix-sqlite
  • dovecot
    • dovecot-auth-lua
    • dovecot-antispam
    • dovecot-core
    • dovecot-imapd
    • dovecot-lmtpd
    • dovecot-sieve
    • dovecot-sqlite
  • support
    • postfix-policyd-spf-python
    • spamassassin
    • libnet-server-perl
    • Custom spampd
    • opendkim
    • opendkim-tools
  • web
    • libnginx-mod-http-headers-more-filter
    • dotnet (via Microsoft)
    • nginx-light
  • system
    • apt-transport-https
    • certbot
    • curl
    • firewalld
    • jq
    • locate
    • make
    • tcpdump
    • unbound
    • wget
    • wireguard
    • vim

Webmail move: Checklist

  • Stop old and new postfix/dovecot/nginx processes
  • Start copy of mail folders to temp dir
  • Copy TLS certs to new machine
  • Check new postfix/dovecot conf is pointing at the right cert
  • Confirm that new postfix has the right domains
  • Create sites in new nginx for new domains
  • Update webmail app config for new domain names
  • Login to Bytemark panel
  • Login to Mythic account
  • Get the "yes it's really my domain code" from Mythic and apply it to the Bytemark panel (for all domains that Bytemark is still hosting)
  • Delete other records from Bytemark dns
  • Add domains to Mythic
  • Create records for mythic
  • Wait for mail messages to finish transferring
  • Move into place
  • Start new dovecot, maybe wait for indexing (Todo: check if there's a "reindex" command)
  • Start nginx, confirm dns and imap
  • Check imap from phone
  • Start new postfix

If there are problems then can point dns back at old machines.

Todo: Setup a couple of scripts to add/remove the DNS records from the mythic api


Running a local CA, I mean, how hard could it be?

I like the idea of mutual TLS (mTLS); each service has its own key pair signed by a common certificate, so they all know that they're talking to the right people. Generating a self-signed certificate (a CA) is a one liner, and generating signed keypairs isn't much harder.

I think the tricky bit is key rotation. Ideally, keypairs would have a short lifetime (under a week, maybe under a day), so there must be a way to automatically install new keys and (where needed) restart services.

But that's still just a script, yeah? Create key, sign key, copy key into place, restart service (or ask the service to reload its certificates if it can). Maybe it's because everything is on the same machine and so I don't need to worry about secure transport, but even so, that's still a solved problem (using certificate signing requests).

Maybe I'm missing something obvious?


Project/ideas list 2022-08-28

  • Improve my webmail UI
  • Spotify interface
  • Google images
  • "game"
  • Image upload editor


I've got plenty of space left and right of the title/controls, so I can easily put more controls there. I want a first/last, and I like the idea of a calendar, although that might need to hide behind a drop-down.

The other fix to do is for the latest/last seen tags. Instead of two (one ::before and one ::after) and some messing about with positioning, I want one ::before, and then latest/last seen can set content (with a .latest.lastseen combined selector for when the entry is tagged with both).

Except that won't work if/when I bring in proper tagging.

I'm going to ignore that for now. Get the current problem fixed, and then think about how to fix it if/as/when I actually do bring in tagging. (I'm not sure, for example, how I'd tag this entry).


I also want to add counters for each page, probably broken down into url/user agent/count tuples. (I'd like to grab ip as well, but that's PI enough I'd need to ask consent. I wonder if ASN is PI?)

Thinking about it, I want to add date into that as well, so I can look for trends (or see spikes).

I should probably also track response status code, and maybe response generation time
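Pulled together, one counter row might look something like this (names provisional):

    using System;

    // One row per (day, path, user agent, status code), incremented per hit;
    // generation time kept as a running mean for now.
    public record PageHit(
        DateOnly Day,
        string Path,
        string UserAgent,
        int StatusCode,
        long Count,
        double MeanGenerationMs);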

I was thinking earlier, it's not Prometheus that's the problem, it's Grafana. Grafana is both too complex, and not able to do the things I want. Time to research alternatives. (Yes, Google charts is on the list, but so is writing my own (basic!) SVG chart drawing tool.)


That's latest/last seen fixed/moved. They're now ::after the header, and with no messing about with position.

I looked at adding the .latest class on the server, but I got bogged down with lists and stuff. (I've been trying too hard to handle entries lazily, but since I'm fairly sure the page is held in memory until it's been generated, I'm not sure I see the point.)


Hello world! Feeling alright today, compared to the recent moving average. Turns out that I've missed maybe half my antidepressants over the last few weeks, which might explain the annoying amount of depression I've been feeling. It looks like the trick is to take the pills as part of the "feed the cats" activity so I don't forget.

Otherwise, worried about money. Just run September's budget, and the house has got something like £1.50 spare after bills, food, clearing overstay, putting aside half (!) the cost of getting the kittens neutered next month, and the money for my fillings. And that's all before the estimated 80% rise in power bills next month. (We're paying £138/month direct debit, so another £110/month, maybe.)

Kittens are at least fit and healthy, alternating between sleeping and chasing each other round the house. Storm is maybe tolerating them, but doesn't seem to want to play. Since the kittens don't understand (or maybe don't care) that Storm doesn't want to play with them, there's still a bit of conflict, but it does seem to be settling down.

Work still sucks, but that might just be a physical thing (in that I have to sit upright for 8 hours and my body doesn't like that), although the stupidity of how things are done is still sapping my will.


Having (another? I've lost track) stab at setting up Ory Hydra as an OAuth/OIDC server. I've fixed the permission problems and passed a basic smoke test, so now it's time to actually think.

There are three components: the public hydra server (the API that handles the actual OAuth stuff), the private/admin hydra server (for registering clients etc.), and the UI server (my bit, that draws the login/logout pages).

This is all behind nginx and under the same hostname (current best guess: auth.osric.uk), with nginx proxying the appropriate paths to the appropriate servers.

That's going to need another nginx config (Ory docs have an example) and another TLS cert. Given my current design choices, it's probably going to be another user database as well. (Hydra doesn't do user management.)

That all seems reasonable, yes?


Are you OK with that?