I’d love you to check back later with your conclusions.
Guide to Self Hosting LLMs with Ollama.
ollama run llama3.2
If it’s an M1, you def can and it will work great. With Ollama.
Great question (and we are reaching the outside edge of my knowledge here). Something like 3-5% of carbon in plants is taken up from the soil by plant roots. I don’t fully understand the mechanism, but the organic carbon percentage is an important component in the calculation of how much artificial nitrogen a crop is going to need, so I guess it’s probably some biochemical process for making the nitrogen available.
The organic carbon percentage is closely watched by farmers and is something of an indication of soil health; if your crop rotation is reducing the OC% over time, you probably need to reconsider it. It’s one of the reasons burning crop stubbles is a much rarer practice now.
Hay is cut from any sort of cereal plant early in its lifecycle, specifically before the plant starts concentrating its energy into the seeds. At this stage the plant stalk is sweeter (even to a human - give it a bite). After flowering, the plant is concentrating its energy into the seeds. By the time it’s fully done this (which takes a number of weeks), there is very little protein in the stalk, and it’s far less palatable (or nutritious) to animals. The plant stalk is now essentially ‘straw’.
Commercial hay can be mowed from a meadow (in Australia usually ryegrass), in which case it will have all sorts of things mixed in, or from crops intended for making good hay (in Australia usually oats or wheat). Commercial straw (which has a tiny market) is cut after the grain has been harvested off the top of the plant. In commercial broadacre cropping in poor soil areas (the bulk of Australia’s grain areas) it’s usually better economics to keep your crop residue, including straw, since the cost to replace the carbon would be higher than what you’d get for the straw after the cost of harvesting it.
Source: I play a lot of Minecraft
Thanks. I ended up going with Garage (in Docker), and installed the MinIO client CLI for these tasks.
Love the effort you’ve put into this question. You’ve clearly done some quality research and thinking.
When I asked myself this same question a couple of years ago, I ended up just buying a second hand Synology NAS to use alongside my mini-PC. That would meet your criteria, and avoids the reliability risk (I’m not sure how big it is) of using disks connected over USB. It’s more proprietary than I’d like, but it’s battle tested and reliable for me.
An M1 MacBook with 16GB cheerfully runs llama3:8b, outputting about 5 words a second. A secondhand MacBook like that probably costs half to a third of a secondhand RTX 3090.
It must suck to be a bargain hunting gamer. First bitcoin, and now AI.
edit: a letter
I use the Continue VS Code plugin with Ollama to run a couple of different models (deepseek-coder-v2 & starcoder2) to recreate a local-only GitHub Copilot type experience for coding. This is on an M1 (Apple silicon) though. For autocomplete the generation needs to be pretty brisk - I’m not sure how that would go in a VM without a GPU.
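If you want to try the same setup, the models just need to be pulled into Ollama first (model names here are the ones I mentioned above; check the Ollama library for current tags):

# Chat model
ollama pull deepseek-coder-v2
# Smaller, faster model for tab autocomplete
ollama pull starcoder2
# Continue then talks to Ollama's default local endpoint, http://localhost:11434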
My NAS and production server run 24/7, I’ve got a dev server that I turn off if I’m not expecting to use it for a week or so. Usually when I do that, I immediately need it for something and I’m away from home. I have chosen equipment to try and minimize energy use to allow for constant running.
My view on a UPS is that it’s a crucial part of getting your availability percentage up. As my home lab grew into running crucial services that replaced commercial cloud options for me, that became more important. Whether it is to you will depend on what you’re running and why.
I’ve heard that one of the most likely times for hard drives to fail is on power up, and it also makes sense to me that the heating/cooling cycles would be bad for the magnetic coating, so my NAS is configured to keep them spinning, and it hasn’t been turned off since I last did a drive change.
Yeah na, put your home services in Tailscale, and for your VPS services set up the firewall for HTTP, HTTPS and SSH only, no root login, use keys, and run fail2ban to make hacking your SSH expensive. You’re a much smaller target than you think - really it’s just bots knocking on your door, and they don’t have a profit motive for a DDoS.
From your description, I’d have the website on a VPS, and Immich at home behind Tailscale. Job’s a goodun.
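As a rough sketch of that VPS hardening (assuming Debian/Ubuntu with ufw; adapt to your distro):

# Firewall: HTTP, HTTPS and SSH only
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
# In /etc/ssh/sshd_config set: PermitRootLogin no, PasswordAuthentication no
systemctl restart ssh
# Make brute-forcing SSH expensive
apt install fail2ban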
+1 for the main risk to my service reliability being me getting distracted by some other shiny thing and getting behind on maintenance.
+1 for Syncthing. I run it on a server at home, then on my MacBook over Tailscale. For web access I run FileBrowser (also over Tailscale) against the same directory.
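If you want to replicate it, FileBrowser can be pointed at the Syncthing directory with something like this (the host path is a placeholder for wherever Syncthing keeps your files; FileBrowser serves whatever is mounted at /srv):

docker run -d -p 8080:80 -v /path/to/syncthing/data:/srv filebrowser/filebrowser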
I switched from Copilot to Codeium after only a couple of months of Copilot use - purely based on cost, since currently I’m just a hobby coder.
The main difference I’ve noticed is that Codeium doesn’t seem as smart about the local context as Copilot. Copilot would look at how I’m handling promises in a project, and stick to that, whereas Codeium would choose a strategy seemingly at random.
A second, and maybe more telling, example is that I do my accounts using ‘plain text accounting’ in VS Code. This is a very niche approach to accounting software and I imagine it is hardly in the training sets at all - there certainly would not be a lot of plain text accounts in the particular format I use (Beancount) sitting in public code repositories. Codeium doesn’t make any suggestions for entries as I’m entering transactions, whereas Copilot would see that the account names I’m using are present in another file in the project and suggest them, and very quickly figure out the formatting of transactions and suggest them correctly.
E. Jean Carroll could buy it off them for the lols.
Greta Tintin Thunberg
I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.
Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
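The Go program isn’t doing anything clever - the percentages it exposes are roughly what these standard commands report (a shell approximation, not the actual code):

# Disk use percent for the root filesystem
df --output=pcent / | tail -1
# Memory use percent
free | awk '/^Mem:/ {printf "%.0f\n", $3/$2*100}'

Uptime Kuma’s keyword monitor type then just checks each JSON response for the expected string.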
So -
No answer, but just to say I run most of my services with this setup - Docker in a Debian LXC under Proxmox, and don’t have this issue. The containers are ‘privileged’, and I have ‘nesting’ ticked on, but apart from that all defaults.
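For reference, the same settings can be applied from the Proxmox host shell (container ID 101 is a placeholder):

# Turn on nesting for an existing container
pct set 101 --features nesting=1
# 'Privileged' just means unprivileged: 0 in /etc/pve/lxc/101.conf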