• 16 Posts
  • 118 Comments
Joined 1 year ago
Cake day: July 29th, 2023

  • Great question (and we are reaching the outside edge of my knowledge here). Something like 3-5% of the carbon in plants is taken up from the soil by the roots. I don’t fully understand the mechanism, but the organic carbon percentage is an important component in the calculation of how much artificial nitrogen a crop is going to need, so I guess it’s probably some biochemical process for making the nitrogen available.

    The organic carbon percentage is closely watched by farmers and is something of an indicator of soil health - i.e. if your crop rotation is reducing the OC% over time, then you probably need to reconsider it. It’s one of the reasons burning crop stubble is a much rarer practice now.


  • Hay is cut from any sort of cereal plant early in its lifecycle, specifically before the plant starts concentrating its energy into the seeds. At this stage the plant stalk is sweeter (even to a human - give it a bite). After flowering, the plant concentrates its energy into the seeds. By the time it’s fully done this (which takes a number of weeks), there is very little protein in the stalk, and it’s far less palatable (or nutritious) to animals. The plant stalk is now essentially ‘straw’.

    Commercial hay can be mowed from a meadow (in Australia usually ryegrass), in which case it will have all sorts mixed in, or from crops grown specifically for making good hay (in Australia usually oats or wheat). Commercial straw (which has a tiny market) is cut after the grain has been harvested from the top of the plant. In commercial broadacre cropping in poor soil areas (the bulk of Australia’s grain areas) it’s usually better economics to keep your crop residue, including straw, since the cost of replacing the carbon would be higher than what you’d get for the straw after the cost of harvesting it.

    Source: I play a lot of Minecraft

  • My NAS and production server run 24/7, I’ve got a dev server that I turn off if I’m not expecting to use it for a week or so. Usually when I do that, I immediately need it for something and I’m away from home. I have chosen equipment to try and minimize energy use to allow for constant running.

    My view on a UPS is that it’s a crucial part of getting your availability percentage up. As my home lab grew into running crucial services that I used to replace commercial cloud options, that became more important to me. Whether it is to you will depend on what you’re running and why.

    I’ve heard that one of the most likely times for hard drives to fail is on power up, and it also makes sense to me that the heating/cooling cycles would be bad for the magnetic coating, so my NAS is configured to keep them spinning, and it hasn’t been turned off since I last did a drive change.

  • I switched from Copilot to Codeium after only a couple of months of Copilot use - purely based on cost, since I’m currently just a hobby coder.

    The main difference I’ve noticed is that Codeium doesn’t seem as smart about the local context as Copilot. Copilot would look at how I’m handling promises in a project, and stick to that, whereas Codeium would choose a strategy seemingly at random.

    A second, and maybe more telling, example: I do my accounts using ‘plain text accounting’ in VS Code. This is a very niche approach to accounting software, and I imagine it is hardly in the training sets at all - there certainly would not be many accounts in the particular format I use (Beancount) sitting in public code repositories. Codeium doesn’t make any suggestions for entries as I’m entering transactions, whereas Copilot would see that the account names I’m using are present in another file in the project and suggest them, and very quickly figure out the formatting of transactions and suggest it correctly.
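    For context, a typical transaction in that format looks something like this (the account names and amounts here are made up, not from my actual ledger):

    ```
    2023-07-29 * "Grocery store" "Weekly shop"
      Expenses:Food:Groceries   84.50 AUD
      Assets:Bank:Checking     -84.50 AUD
    ```

    Copilot picked up both halves of that pattern - the account names from elsewhere in the project, and the date/payee/posting layout - within a handful of entries.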

  • I run two local physical servers, one production and one dev (plus a third, prod2, kept in case of a prod1 failure), and two remote production/backup servers, all running Proxmox, plus two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or run with plain Docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

    Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
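    The little status program could be sketched roughly like this in Go - a minimal sketch of the idea, not my actual code; the `/status` path, the JSON field names, and the 90% thresholds are all invented for illustration (Linux-only syscalls):

    ```go
    package main

    import (
    	"encoding/json"
    	"log"
    	"net/http"
    	"syscall"
    )

    // stats is the payload a monitor like Uptime Kuma can poll and
    // keyword-match against (e.g. look for `"ok":true`).
    type stats struct {
    	MemUsedPct  float64 `json:"mem_used_pct"`
    	DiskUsedPct float64 `json:"disk_used_pct"`
    	OK          bool    `json:"ok"`
    }

    // diskUsedPct returns the used percentage of the filesystem at path.
    func diskUsedPct(path string) (float64, error) {
    	var fs syscall.Statfs_t
    	if err := syscall.Statfs(path, &fs); err != nil {
    		return 0, err
    	}
    	total := float64(fs.Blocks) * float64(fs.Bsize)
    	avail := float64(fs.Bavail) * float64(fs.Bsize)
    	return 100 * (total - avail) / total, nil
    }

    func main() {
    	http.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
    		var si syscall.Sysinfo_t
    		_ = syscall.Sysinfo(&si) // memory figures from the kernel
    		memPct := 100 * float64(si.Totalram-si.Freeram) / float64(si.Totalram)
    		diskPct, _ := diskUsedPct("/")
    		// "ok" only while both numbers are under an arbitrary 90% threshold.
    		s := stats{MemUsedPct: memPct, DiskUsedPct: diskPct, OK: memPct < 90 && diskPct < 90}
    		w.Header().Set("Content-Type", "application/json")
    		json.NewEncoder(w).Encode(s)
    	})
    	log.Fatal(http.ListenAndServe(":9000", nil))
    }
    ```

    Uptime Kuma then just does an HTTP keyword check against each host’s `:9000/status`, so a full disk or leaking process flips the keyword and triggers an alert.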

    So -

    • Weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
    • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
    • From time to time (if I hear of a security update), but generally every three months: Look through my container versions and see if I want to update them. They’re on docker compose so the steps are just backup the LXC, docker down, pull, up - probs 5 minutes per container.
    • Yearly: consider whether I need to do operating system upgrades - e.g. to Proxmox 8, or a new Debian or Ubuntu LTS
    • Yearly: visit the remotes and have a proper check/clean up/updates