Shopping list again. We've been through this a few times so we should all know the words by now.

Lists are our aggregate roots. Lists have a name, a unique id, a list of users with access, and a list of items. Items have a name and a status. Items are distinguished by name (ignoring case).

Commands are CreateList, RenameList, DeleteList, AddItem, RenameItem, SetItemStatus, DeleteItem, ShareList, AcceptList.

Clients handle commands from the UI locally and pass events to the backend to be fanned out to other clients. The backend stores all events and sends new events to a client when it connects.

The backend stores events and a projection of the current state of the lists, built at app start time and updated as events arrive. The database is a single table of list id, timestamp, and event blob.
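As a concrete sketch (in C#), the events could mirror the commands one-for-one, with the backend's table holding one serialised event per row. All the names and the string status below are my guesses, not the real code:

```csharp
using System;

// One event type per command; names are illustrative guesses.
public abstract record ListEvent(Guid ListId, DateTimeOffset At);

public record ListCreated(Guid ListId, DateTimeOffset At, string Name) : ListEvent(ListId, At);
public record ListRenamed(Guid ListId, DateTimeOffset At, string NewName) : ListEvent(ListId, At);
public record ListDeleted(Guid ListId, DateTimeOffset At) : ListEvent(ListId, At);
public record ItemAdded(Guid ListId, DateTimeOffset At, string ItemName) : ListEvent(ListId, At);
public record ItemRenamed(Guid ListId, DateTimeOffset At, string OldName, string NewName) : ListEvent(ListId, At);
public record ItemStatusSet(Guid ListId, DateTimeOffset At, string ItemName, string Status) : ListEvent(ListId, At);
public record ItemDeleted(Guid ListId, DateTimeOffset At, string ItemName) : ListEvent(ListId, At);
public record ListShared(Guid ListId, DateTimeOffset At, string UserId) : ListEvent(ListId, At);
public record ListAccepted(Guid ListId, DateTimeOffset At, string UserId) : ListEvent(ListId, At);

// The backend's single table: one row per event, keyed by list id and timestamp,
// with the event serialised into the blob column.
public record StoredEvent(Guid ListId, DateTimeOffset Timestamp, byte[] EventBlob);
```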


I've spun up a new VM to host the osric.uk stuff (so pulling it off wepiu in the cloud). I still haven't brought myself to rent a Hetzner machine - they offer a really good deal for what you get, but it's still £30/month.

(I guess I haven't got used to £30/month being affordable. Hubby's new PC was about £2.5k, or just under 7 years of hosting at that price.)

If I got a new machine (and it would be an actual machine, not a VM), I'd probably set up WireGuard so it's connected to wepiu (and the house machines, just to make life easier) and keep DNS pointing at wepiu (Mythic have a much better rep than Hetzner).

So this month, I'm going to move the osric.uk stuff to a VM on one of the home machines to prove the theory, and if it works I'll buy the Hetzner machine at the start of February and copy stuff over.

(I'll probably leave the home VM setup as a staging/test server; I've got a spare domain I can use for it, and it might be fun to set up automated UI tests.)

It's a shame I can't take advantage of Hetzner's unlimited bandwidth, but too many services are still IPv4 only, and I don't think I'm anywhere close to the limit at Mythic.


In the spirit of "got to start somewhere":

Sheep wander the world eating. Sometimes, for no obvious reason, they will panic and run around for a while, and then calm down and start eating again. Sheep that are eating mostly stay still, but will absently move forwards now and then. Sheep that are panicking move much faster, and occasionally change direction. Sheep will try to avoid obstacles, and will stop moving (even if they are panicking) if they don't have anywhere to go.

Sheep can perceive other nearby sheep, and will tend to panic if a neighbour is panicking (with the chance increasing the closer the panicking sheep is). However, once a sheep starts to panic, it pays much less attention to its neighbours.
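In that spirit, here's a rough C# sketch of those rules as I read them; the two-state structure follows the description above, but every constant (speeds, probabilities, panic duration) is an uncalibrated guess:

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

enum SheepState { Eating, Panicking }

class Sheep
{
    private static readonly Random Rng = new();

    public SheepState State = SheepState.Eating;
    public Vector2 Position;
    public Vector2 Heading = new(1, 0);
    private double _panicTimeLeft;

    // blocked(p) answers "is there an obstacle at p?"
    public void Update(double dt, IReadOnlyList<Sheep> neighbours, Func<Vector2, bool> blocked)
    {
        if (State == SheepState.Eating)
        {
            // Small chance of panicking for no obvious reason, plus a contribution from each
            // panicking neighbour that grows as that neighbour gets closer.
            double panicChance = 0.001;
            foreach (var other in neighbours)
                if (other.State == SheepState.Panicking)
                    panicChance += 0.05 / (1 + Vector2.Distance(Position, other.Position));

            if (Rng.NextDouble() < panicChance)
            {
                State = SheepState.Panicking;
                _panicTimeLeft = 5 + Rng.NextDouble() * 5;   // calm down after a while
                Heading = RandomDirection();
            }
            else if (Rng.NextDouble() < 0.02)
            {
                TryMove(Heading * 0.1f, blocked);            // absent-minded shuffle forwards
            }
        }
        else
        {
            // Panicking sheep mostly ignore their neighbours, move fast,
            // and occasionally change direction.
            if (Rng.NextDouble() < 0.1)
                Heading = RandomDirection();

            TryMove(Heading * 1.0f, blocked);

            _panicTimeLeft -= dt;
            if (_panicTimeLeft <= 0)
                State = SheepState.Eating;
        }
    }

    // Sheep stop moving (even while panicking) if there's nowhere to go.
    private void TryMove(Vector2 step, Func<Vector2, bool> blocked)
    {
        var next = Position + step;
        if (!blocked(next))
            Position = next;
    }

    private static Vector2 RandomDirection()
    {
        var angle = Rng.NextDouble() * Math.PI * 2;
        return new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
    }
}
```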


That's shopping thrown up on a server (on "anat", the new hosting VM). Not even half finished, but going well. Will try to get a bunch more done tomorrow (before work eats up my energy).


Looks like I can use podman secrets to stash the NuGet config file and inject it into the build containers. (That's a far cleaner plan than the current "copy it into the right folder and hope the gitignore for it is up to date".)

Runtime secrets are already managed by storing them in a file and mounting the path (which is what this new command does anyway).


I want to script the container build somehow - especially if/as I'm going to need --secret arguments now. Since I'm Windows-based now (weird, huh?), I should use PowerShell; I just have to be careful not to accidentally write make by mistake.

(make doesn't do what I actually want, which is to run a program to get the date of a target. E.g., my source folder is dated last Tuesday, so what's the date on the latest image? make wants me to touch the image as part of the build, but that's not the source of truth (although I don't know how to get the build date of an image anyway).)

But since PowerShell runs under Linux anyway, a two-line podman build, podman push script isn't the end of the world. If I'm careful, I only need one copy and it can pick up the tag from the source folder.

(OK, now I want a 'build all' version that I can run after I've updated the shared project, to bump version numbers in dependent projects, then build, push, and restart them.)

(That's not so "out there", right? Also, I can add "increase project version on build" code, and maybe even "bump minor" and "bump major" options)


I've dumped self-hosted Elastic in favour of Grafana Cloud. Elastic is just too heavy, and I can't be bothered to set up the Grafana stack at home (especially as the logs and traces both need weird databases and/or S3-compatible storage).

On the plus side, I've written a few PowerShell functions for building and publishing container images, for updating .NET dependencies, and for automatically bumping .NET project versions. This means I should (and hopefully will) be able to tweak something in the shared web project, and automatically update and publish dependent projects.


Turns out podman secrets don't work the way I expected, in that there seems to be a different namespace for build secrets compared to runtime secrets. This means that injecting nuget.conf isn't as easy as I'd hoped.

However, it looks like copying the file into the podman machine should solve the problem (and I can probably wrap that in a script, why not?)

(Putting my PowerShell module into my OneDrive was a good idea; now I just need to do something similar for my bashrc in Linux.)


The NuGet protocol works with basic auth, and my auth service is OIDC, so there's a conflict there.

I don't want to give apps other than auth access to the auth db, so NuGet needs to make some kind of call to auth.


"Scope" is a red herring, at least for my current use case.

Scope is the client asking the server for access to a type of resource.

Roles are bundles of permissions that the client should interpret in a way that the server expects (e.g., permission to "read files" shouldn't be used to access telescope controls)

How complex do I want to make this? I've got a picture in my head of individual endpoints each with their own set of permissions, but on the other hand it's me, husband, and a couple of friends, and the last three just want somewhere they can share files.

That gives three roles - me, hubs, and the other two.

[A short time passes]

I've thought of a couple more roles: an explicit "anyone, including not logged in", and a read-only NuGet downloader role (and hopefully a read-only container downloader role once I've figured out how to hook in that subsystem).

Maybe I'm talking about policies here rather than roles? I get an "all access" policy, hubs (and probably the Dr Who Boyz) get file share access, and there's a policy for the automated downloaders. Then all I'm left with is how to map an account to a policy (maybe that's what roles are?)
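For what it's worth, here's a minimal sketch of that mapping as ASP.NET Core authorization policies, assuming the roles arrive as role claims from the auth server; every policy and role name below is invented for illustration:

```csharp
// Program.cs (sketch). Policy and role names are placeholders, not the real ones.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthorization(options =>
{
    // Me: everything.
    options.AddPolicy("AllAccess", policy => policy.RequireRole("admin"));

    // Hubs and the Dr Who Boyz (and me): the file share.
    options.AddPolicy("FileShare", policy => policy.RequireRole("admin", "family", "friends"));

    // Automated downloaders: read-only NuGet (and, later, container) access.
    options.AddPolicy("PackageRead", policy => policy.RequireRole("admin", "nuget-read"));

    // Explicit "anyone, including not logged in".
    options.AddPolicy("Public", policy => policy.RequireAssertion(_ => true));
});
```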


Yay! I've got the new authorisation policy setup working!

As you may already know (can't remember if I've actually said), I'm on a mission to split up osric.uk into a bunch of smaller sites/packages, to make it easier for me to mess with part of the site without impacting the rest of the site.

On the critical path is getting authentication and authorisation working the way I want. I've set up an OIDC server (thanks, Ory Hydra!) and that's working well for authentication ("Who are you?" checks), but I want to be able to give out accounts without giving away the homeworld, and that's what authorisation is for.

In theory it's easy: ASP.NET Core has policy-based authorisation built in. In practice, because I'd made a couple of mistakes, it's taken much longer than expected.

For the record, the mistakes were:

  • Not including the Razor page model in the Razor page, so the framework couldn't pick up the attribute that set the policy
  • Using "Is in role 'User'" as a proxy for "Is the user logged in", when the 'User' role doesn't exist (there's a sketch of both fixes below)
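Both mistakes are easy to show in miniature. A minimal sketch, assuming a made-up "FileShare" policy and page path:

```csharp
// Pages/Files/Index.cshtml needs "@page" and "@model IndexModel" at the top;
// without the @model line the framework never sees the attribute below (mistake 1).
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.RazorPages;

[Authorize(Policy = "FileShare")]   // "FileShare" is a placeholder policy name
public class IndexModel : PageModel
{
    public void OnGet() { }
}

// Mistake 2: "is the user logged in?" is its own built-in requirement, not a role check:
// options.AddPolicy("LoggedIn", policy => policy.RequireAuthenticatedUser());
```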

I've even got "forwarding basic auth" working, where an app can forward the Authorization header to the auth app, so only the auth app needs to access the auth database.
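Roughly, the forwarding side can be a small authentication handler that passes the incoming Basic credentials straight to the auth app and trusts the answer. This is only a sketch: the endpoint URL, the "username in the response body" convention, and the handler name are all placeholders, not the real API.

```csharp
using System;
using System.Net.Http;
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;

public sealed class ForwardedBasicAuthHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    private readonly HttpClient _authClient;

    public ForwardedBasicAuthHandler(
        IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        HttpClient authClient)
        : base(options, logger, encoder)
    {
        _authClient = authClient;
    }

    protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        string? header = Request.Headers.Authorization;
        if (string.IsNullOrEmpty(header) || !header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
            return AuthenticateResult.NoResult();

        // Forward the Basic credentials to the auth app; only it touches the auth database.
        // Placeholder URL; assume the auth app answers 200 with the username on success.
        using var request = new HttpRequestMessage(HttpMethod.Get, "https://auth.example/check-basic");
        request.Headers.TryAddWithoutValidation("Authorization", header);

        using var response = await _authClient.SendAsync(request);
        if (!response.IsSuccessStatusCode)
            return AuthenticateResult.Fail("auth app rejected the credentials");

        var userName = await response.Content.ReadAsStringAsync();
        var identity = new ClaimsIdentity(new[] { new Claim(ClaimTypes.Name, userName) }, Scheme.Name);
        return AuthenticateResult.Success(new AuthenticationTicket(new ClaimsPrincipal(identity), Scheme.Name));
    }
}
```

The consuming app would then register it as an authentication scheme (AddAuthentication().AddScheme<...>(...)) alongside its OIDC setup.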

Altogether, pretty chuffed!

