PorkrollPosadist [he/him, they/them]

Hexbear’s resident machinist, absentee mastodon landlord, jack of all trades

Talk to me about astronomy, photography, electronics, ham radio, programming, the means of production, and how we might expropriate them.

  • 129 Posts
  • 2.79K Comments
Joined 5 years ago
Cake day: July 25th, 2020




  • Now that I’m out of work and have a little more time, I would like to elaborate a little further. Personally, I do run a collection of disks of different sizes and speeds as a single volume, so I don’t mean to discourage this in general. It’s just extra work, and doing it properly would require you to start over anyway.

    I’ve done this in two iterations. Originally I had a setup using LVM’s caching feature, where I combined a 500GB SSD and a 2TB HDD into a single volume. That configuration didn’t yield a 2.5TB volume, though; it was still 2TB. The SSD simply mirrored the most frequently accessed blocks on the HDD. The caching is implemented at the block layer, which means you are free to use any filesystem you like on top of it (along with other block-layer mechanisms like LUKS encryption). I just formatted the resulting logical volume with Btrfs.
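
    In case it’s useful, that first setup looked roughly like this. I’m going from memory, and the device names are placeholders:

        # SSD + HDD combined into one cached LV (placeholder devices)
        pvcreate /dev/sdb /dev/sdc
        vgcreate vg0 /dev/sdb /dev/sdc

        # the main 2TB LV lives entirely on the HDD
        lvcreate -n data -l 100%PVS vg0 /dev/sdb

        # turn the SSD into a cache volume and attach it to the main LV
        lvcreate -n cache0 -l 100%PVS vg0 /dev/sdc
        lvconvert --type cache --cachevol cache0 vg0/data

        # any filesystem works on top of the cached LV; I used Btrfs
        mkfs.btrfs /dev/vg0/data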

    Today, I am running a setup with Bcachefs which combines a 1TB NVMe and two 6TB HDDs into a 12TB volume. This setup does not use LVM; Bcachefs implements support for multiple block devices at the filesystem-driver level. It performs the same type of caching as LVMCache (or bcache, which it is derived from), but also allows features like replication and compression to be configured at the file/directory level, which is not possible in a block-layer driver (a block-layer driver is oblivious to the filesystem implemented on top of it).
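
    The format command for that looks something like the following (device names and labels are placeholders, and I’m going from memory on the per-file subcommand, so check the man pages before trusting it):

        # one NVMe + two HDDs formatted as a single filesystem
        bcachefs format \
            --label=ssd.nvme0 /dev/nvme0n1 \
            --label=hdd.hdd0 /dev/sdb \
            --label=hdd.hdd1 /dev/sdc \
            --foreground_target=ssd \
            --promote_target=ssd \
            --background_target=hdd

        # all member devices are given at mount time, colon-separated
        mount -t bcachefs /dev/nvme0n1:/dev/sdb:/dev/sdc /mnt

        # per-file/directory options, e.g. compression on one subtree
        # (subcommand from memory, verify against bcachefs(8))
        bcachefs setattr --compression=zstd /mnt/archive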

    Bcachefs is particularly vulnerable to the bus factor, though. The main developer is an abrasive character and got himself suspended from kernel development a while back (not sure if that’s still the case, but lmao, I’m committed to this setup now for better or worse). At least he’s not an axe murderer. LVMCache with a more conventional filesystem is a much more future-proof approach, though it lacks some of the fanciness.

    In either case, this kind of caching strategy is a nice way to take advantage of large, cheap HDDs while getting NVMe-like performance most of the time. There might not be much benefit in your case, using an M.2 NVMe as a cache for a SATA SSD; both are much faster than an HDD.

    None of these options will be available in a distro installer anyway though. This is firmly in “rolling your own” territory :)

    Also, when I said unpredictable performance, that still means at least SSD performance; I wouldn’t expect this to grind anything to a halt. It’s just that the filesystem driver only sees a virtual block device and has no idea that, say, the last two thirds of the device are slower than the first third, so it is unable to make any smart optimizations. Performance is at the mercy of where a file happens to land within that space. It might just be the case that not needing to worry about juggling capacity between separate filesystems is worth that trade-off. I’m over here burning TERABYTES for speed, but some people would kill for an extra terabyte at any speed.
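
    For contrast, the naive spanning setup I’m describing there is just a linear LV glued end-to-end across both disks, something like this (hypothetical 1TB fast disk + 2TB slow disk):

        # two mismatched disks concatenated into one flat 3TB volume
        pvcreate /dev/sda /dev/sdb
        vgcreate vg1 /dev/sda /dev/sdb
        lvcreate -n bigvol -l 100%FREE vg1
        mkfs.ext4 /dev/vg1/bigvol

        # the filesystem sees a single 3TB block device; the first ~1TB
        # of addresses land on the fast disk, the remaining 2TB on the
        # slow one, and the filesystem has no idea which region is which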

  • Lots of MRIs are running Windows XP or probably Windows 7, for example.

    This stuff is inconsequential in the grand scheme of things. These are appliances, not workstations, and they should have no custom software installed on them that the manufacturer didn’t put there. There are IT implications to running embedded systems based on obsolete operating systems, but as an end user there is nothing you can do about these systems besides isolating them on their own highly restricted subnet.
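
    By “highly restricted” I mean something on the order of this nftables sketch, run on whatever box routes their VLAN (all addresses are hypothetical):

        # 10.20.0.0/24 is the appliance VLAN; 10.0.0.5 is the one server
        # the machines actually need to reach (e.g. an SMB file share)
        nft add table inet appliances
        nft add chain inet appliances forward '{ type filter hook forward priority 0; }'
        nft add rule inet appliances forward ct state established,related accept
        nft add rule inet appliances forward ip saddr 10.20.0.0/24 ip daddr 10.0.0.5 tcp dport 445 accept
        # everything else to or from the appliance VLAN gets dropped
        nft add rule inet appliances forward ip saddr 10.20.0.0/24 drop
        nft add rule inet appliances forward ip daddr 10.20.0.0/24 drop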

    The story is exactly the same in manufacturing. There is a lot of CNC machinery based on unsupported versions of Windows. You’re not getting service from Microsoft, and the only good thing your IT department can do with these machines is create a back-up image of the hard drives and whatever floppy disks / CD-ROMs they find inside the electrical cabinet. If you actually wipe the thing and re-install Windows from scratch (let alone anything else), it will never work again unless you fly a service technician out for a full week. The configuration of these machines is very fragile: it is built on top of a ton of undocumented in-house drivers for hardware that isn’t even available on the open market, plus a lot of settings that were adjusted based on whichever motor happened to be in the warehouse the month the machine was assembled.
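
    If you’re the one stuck doing that imaging, something as blunt as this works, booted from a live USB with the machine’s disk showing up as a hypothetical /dev/sda:

        # clone the whole disk to an image file; ddrescue keeps going
        # around bad sectors, which old industrial drives tend to have
        ddrescue /dev/sda cnc-backup.img cnc-backup.map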

    This kind of machinery has nothing to do with what operating system or productivity software is used throughout the office.

    I can relate to the FileMaker thing, though. This is exactly what the company I work at is doing. They’ve got something like 30 internal applications based on it, plus an expensive ERP system, and for better or worse it will all be around until the company goes under. Lord knows why they didn’t just use a Postgres database; they’re paying multiple salaried in-house software developers anyway, and they would have saved literally millions of dollars in licensing.