Just tried it and it seems too complicated haha. With traccar I just had to deploy a single service and use either the official app or, previously, gpslogger sending the data to an endpoint.
With owntracks the main documentation seems to assume you install it on the base system; docker is kind of hidden.
And with docker you need to deploy at least 3 services: recorder, Mosquitto, and the front end.
The app doesn’t tell you what’s expected in the fields to connect to the backend. I tried with https but haven’t been able to make it work.
To be fair, this has been just today. But as long as a service has a docker compose I’ve always been able to deploy it in less than 10 minutes, and the rest of the day is just customizing the service.
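For reference, the three-service stack I mean looks roughly like this. This is an untested sketch: the image names are the ones published on Docker Hub by the owntracks project, but the ports, env vars, and wiring here are illustrative, not a verified config.

```shell
# Write a minimal compose file for the three services:
# Mosquitto (MQTT broker), the recorder, and the frontend.
cat > docker-compose.yml <<'EOF'
services:
  mosquitto:
    image: eclipse-mosquitto
    ports:
      - "1883:1883"
  recorder:
    image: owntracks/recorder
    environment:
      - OTR_HOST=mosquitto     # broker hostname inside the compose network
    ports:
      - "8083:8083"            # recorder's HTTP API/UI
    depends_on:
      - mosquitto
  frontend:
    image: owntracks/frontend
    environment:
      - SERVER_HOST=recorder   # frontend talks to the recorder
      - SERVER_PORT=8083
    ports:
      - "8080:80"
    depends_on:
      - recorder
EOF
grep -c 'image:' docker-compose.yml   # three services defined
```

Compare that to traccar's single container and the complaint above makes sense.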
It looks amazing!
How well suited would this be as a Google Maps Timeline replacement?
I see you mention we need to upload the files which maybe could be obtained from an app like https://github.com/mendhak/gpslogger
I already had a flow to have them on my server with syncthing, so I could easily use your api to process them.
The thing would be to have each trail marked per day and have a way of showing them nicely (I haven’t tested everything in the demo hehe).
Is there a plan to be able to process any GPS standard to automatically generate the trails?
I’m currently using traccar, but it looks more like a fleet management than something to remember where you’ve been.
I can share a bit of my journey and setup so maybe you can make a better decision.
On Vultr, with the second-smallest shared CPU (1 vCPU, 2GB RAM), several of my services have been running fine for years now:
invidious, squid proxy, TODO app (vikunja), bookmarks (grimoire), key-value storage (kinto), git forge (forgejo) with CI/CD (forgejo actions), freshrss, archival (archive-box), GPS tracker (traccar), notes (trilium), authentication (authelia), monitoring (munin).
The thing is, since I’m the only one using them, usually only one or two services receive considerable usage, and I’m kind of patient, so if something takes 1 minute instead of 10 seconds I’m fine with it. That rarely happens anyway, maybe only with forgejo actions or the archival.
On my main PC I was hosting some stuff too: immich, jellyfin, syncthing, and duplicati.
Just recently bought this minipc https://aoostar.com/products/aoostar-r7-2-bay-nas-amd-ryzen-7-5700u-mini-pc8c-16t-up-to-4-3ghz-with-w11-pro-ddr4-16gb-ram-512gb-nvme-ssd
(Although I bought it from amazon so I didn’t have to handle the import.)
Haven’t moved anything off of the VPS yet, but given how low the VPS specs are, I think the minipc will be enough for a lot of the stuff I have.
The ones I’ve moved are the ones from my main PC.
Transcoding for jellyfin is not an issue since I already preprocessed my library to the formats my devices accept, so only immich could cause issues when uploading my photos.
Right now the VPS sits at around 0.3 CPU, 1.1/1.92GB RAM, 2.26/4.8GB swap.
The minipc sits at around 2.0 CPU (most likely because duplicati is running right now), 3/16GB RAM, no swap.
There are several options for minipc even with potential to upgrade ram and storage like the one I bought.
Here’s a spreadsheet I found with very good data on different options so you can easily compare them and find something that matches your needs https://docs.google.com/spreadsheets/d/1SWqLJ6tGmYHzqGaa4RZs54iw7C1uLcTU_rLTRHTOzaA/edit
(Here’s the original post where I found it https://www.reddit.com/r/MiniPCs/comments/1afzkt5/2024_general_mini_pc_guide_usa/ )
For storage I don’t have any comments since I’m still using a 512GB NVMe and a 1TB external HDD. The minipc is basically my starter setup for a NAS, which I plan to fill with drives when I find any on sale (I even bought it without RAM and storage since I had spares).
But I do have some huge files around; they’re in https://www.idrive.com/s3-storage-e2/
Using rclone I can easily have it mounted like any other drive, and there’s no need to worry about being in the cloud since rclone has an encryption option (crypt).
Of course this is a temporary solution, since it’s cheaper to buy a drive for the long term (I also use it for my backups tho).
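The rclone setup I mean is a plain s3 remote with a crypt remote layered on top. Sketch below; the remote names, bucket, endpoint, and keys are all placeholders (in practice `rclone config` walks you through this and obscures the passwords for you):

```shell
# Two stacked remotes: "e2" talks to iDrive e2, "e2crypt" encrypts
# everything that passes through it before it hits "e2".
cat > rclone.conf <<'EOF'
[e2]
type = s3
provider = IDrive
access_key_id = PLACEHOLDER
secret_access_key = PLACEHOLDER
endpoint = example.idrivee2.example.com

[e2crypt]
type = crypt
remote = e2:my-bucket
password = obscured-placeholder
EOF
# Then it mounts like any other drive:
#   rclone mount e2crypt: ~/mnt/e2 --daemon
grep -q '^type = crypt' rclone.conf && echo "crypt remote layered on s3"
```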
If you go the route of using only linux, sshfs is very easy to use: I can connect from the files app or mount it via fstab. And for permissions you can manage everything with a new user and ACLs.
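Both sshfs routes look something like this; the user, host, and paths are made-up placeholders, and the fstab options are a common pattern rather than the only way to do it:

```shell
# One-off mount (requires the sshfs package):
#   sshfs nas@minipc:/srv/share ~/mnt/share
# Persistent mount via /etc/fstab (sketch of the line):
cat > fstab.example <<'EOF'
nas@minipc:/srv/share  /mnt/share  fuse.sshfs  noauto,x-systemd.automount,_netdev,IdentityFile=/home/me/.ssh/id_ed25519  0  0
EOF
grep -q 'fuse.sshfs' fstab.example && echo "fstab entry sketched"
```

The `x-systemd.automount` option makes the mount lazy, so boot doesn’t hang if the remote machine is off.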
If you need to access it from windows I think your best bet will be samba. There are several services for this; I was using OpenMediaVault since it was the only one compatible with ARM back when I was on a raspberry pi. Fair warning: when you install it, it takes over all your network interfaces and disables wifi, so you have to connect via ethernet to re-enable it.
On the VPS I also had pihole and searxng, but I had to move those to a separate instance, since whenever something was eating up the resources, browsing the internet was a pain hehe.
Probably my most critical services will remain on the VPS (pihole, searxng, authelia, squid proxy, GPS tracker). That way I don’t have to worry about my power or internet going down, or about my minipc being so overloaded with tasks that browsing the internet comes to a crawl (especially since I also run stuff like whispercpp and llamacpp, which basically make the CPU unusable for a bit :P ).
To access everything I use tailscale; I was able to close all my ports while still easily reaching everything on my main or mini PC without changing anything in my router.
If you need to give access to someone, I’d advise sharing your pihole node and the machine running the service.
And in their account a split DNS can be set up so that only your domains are resolved by your pihole; everything else still goes through their own DNS.
If this is not possible and you need your service open on the internet, I’d suggest having a VPS with a reverse proxy running tailscale, so it can reach your service when it receives requests while still not opening your lan to the internet.
Another option is tailscale funnel, but I think you’re bound to the domain they give you. I haven’t tried it so you’d need to confirm.
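The VPS-reverse-proxy idea above could look roughly like this with nginx: the public-facing VPS forwards requests over the tailnet to the machine at home, so nothing on the LAN is exposed directly. The domain, tailnet IP, and port are placeholders, and the TLS lines are omitted for brevity:

```shell
# nginx site config on the VPS (sketch):
cat > service.conf <<'EOF'
server {
    listen 443 ssl;
    server_name service.example.com;
    # ssl_certificate / ssl_certificate_key lines omitted

    location / {
        # tailscale IP of the home machine actually running the service
        proxy_pass http://100.64.0.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
EOF
grep -q 'proxy_pass http://100\.' service.conf && echo "proxied via tailnet IP"
```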
I use https://lemmyverse.net/
You can search across all communities of all instances, or click on a specific instance.
Yeah, I just searched a bit and found this https://stackoverflow.com/questions/28348678/what-exactly-is-the-info-hash-in-a-torrent-file
The torrent file contains the hashes of each piece, plus the hash of the info section that describes the files and their piece hashes, so clients can completely validate the content and the set of files being received.
I wonder if clients only validate when receiving or also when sending data; that way seeding could be stopped if a file has been corrupted, instead of relying on the tracker or other clients to distrust someone who made a mistake like the OP of that post.
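Here’s a toy version of the piece-hashing idea (BitTorrent uses SHA-1 over fixed-size pieces; the 8-byte pieces here are just to keep the demo small). Changing any byte of the payload changes that piece’s hash, which is why a transcoded file can no longer serve valid pieces:

```shell
# Build a payload and hash its pieces.
printf 'some file content that spans pieces' > payload
split -b 8 payload piece_          # fixed-size pieces
sha1sum piece_* > hashes.orig

# "Transcode" the file in place: overwrite the first 4 bytes.
printf 'SOME' | dd of=payload bs=1 conv=notrunc 2>/dev/null

# Re-split and re-hash: the modified piece no longer matches.
split -b 8 payload piece_
sha1sum piece_* > hashes.new
diff hashes.orig hashes.new >/dev/null || echo "piece hash mismatch detected"
```

A downloading client runs the equivalent of that final check against the hashes baked into the .torrent file, and throws away any piece that doesn’t match.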
How do torrents validate the files being served?
Recently I read a post where OP said they were transcoding torrents in place and still seeding them, so their question was if this was possible since the files were actually not the same anymore.
A comment said yes, the torrent was being seeded with the new files and they were “poisoning” the torrent.
So, how could this be prevented if torrents were used as a CDN?
And in general, how is this even possible? I thought torrents could only use the original files, maybe via a hash, and prevent any other data from being sent.
A note taking app can be turned into a diary app if you just create a note for each day.
Even better if you then want to expand a section of a diary entry without actually modifying it or jumping between apps.
Obsidian can easily help you tag and link each note and theme/topic in each of them.
There are several plugins for creating daily notes which will be your diary entries.
Also it’s local-only; you can pair it with any sync service: the one Obsidian provides, git, any cloud storage, or ones that work directly with the files, like syncthing.
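Since the vault is just markdown files, the daily-note idea even works outside Obsidian. A minimal sketch (the vault path is a placeholder): one file per day, created on first use, which Obsidian’s daily-notes plugins automate with templates on top.

```shell
# One markdown note per day, named YYYY-MM-DD.md like Obsidian's default.
VAULT=./vault
mkdir -p "$VAULT"
today=$(date +%F)
note="$VAULT/$today.md"
# Only create it if it doesn't exist yet, so re-running is safe.
[ -f "$note" ] || printf '# %s\n\n' "$today" > "$note"
ls "$VAULT"
```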
Just curious, what are the special features you expect from a diary service/app which a note taking one doesn’t have?
Yes, each sperm and egg is unique, since the process they go through ensures the chromosomes have been mixed.
Both sex cells (gametes) go through meiosis.
> shuffles the genes between the two chromosomes in each pair (one received from each parent), producing lots of recombinant chromosomes with unique genetic combinations in every gamete […] produces four genetically unique cells, each with half the number of chromosomes as in the parent
You get half of your chromosomes from each of your parents, so their bodies are in charge of deciding which half their child gets.
Afterwards, which trait actually shows up comes down to dominant and recessive genes.
(of course this is more complicated and someone might do a better job at explaining it in depth)
Glad to see you solved the issue. I just want to point out that this might happen again if you forget your db is in a volume controlled by docker; better to put it in a folder you know.
Last month immich released an update to the compose file for this; you need to manually change some parts.
Here’s the post in this community https://lemmy.ml/post/14671585
Also I’ll include this link from the same post; I moved the data from the docker volume to my own folder without issue.
https://lemmy.pe1uca.dev/comment/2546192
Another option is to make backups of the db. I saw this project some time ago; I haven’t implemented it on my services, but it looks interesting.
https://github.com/prodrigestivill/docker-postgres-backup-local
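To make the named-volume vs. "folder you know" distinction concrete, here’s a sketch of the compose difference (paths and image tag are illustrative; immich’s actual compose file is the reference for their exact layout):

```shell
cat > docker-compose.yml <<'EOF'
services:
  database:
    image: postgres:16
    volumes:
      # Named volume: docker manages the path, easy to lose track of:
      #   - pgdata:/var/lib/postgresql/data
      # Bind mount: the DB files live in a folder you chose,
      # right next to the compose file, easy to find and back up:
      - ./postgres-data:/var/lib/postgresql/data
EOF
grep -q './postgres-data:' docker-compose.yml && echo "bind mount set"
```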
I’m just annoyed by the region issues; you’ll get pretty biased results depending on which region you select.
If you try to search for something specific to one region while another is selected, you’ll sometimes find empty results, which shows you won’t get relevant results if you don’t properly select the region.
Probably this is more obvious with non-technical searches. For example, my default region is canada-en, and if I try “instituto nacional electoral” I only get a wiki page, an international site, and some other random sites with no news; only when I change the region do I get the official page ine.mx and news. To me this means kagi hides results from other regions instead of just boosting the selected region’s.
It’s regarding appropriate handling of user information.
I’m not sure it includes PII. Basically it’s a ticketing system.
The pointers I got are: the software should be secure and reliable for storing the data, and queryable so you can understand the updates the data has had.
I think you have two options:
`/etc/lighttpd/conf-available/15-pihole-admin.conf`. In there you can see what base url is being used and the other redirects it has. You just need to remember to check this file each time there’s an update, since it warns you it can be overwritten by that process.

I’ve been using https://kolaente.dev/vikunja/vikunja
It has options for sharing and assigning people to a task, but I only use it for personal stuff so haven’t checked properly those features.
I’m not sure how the integration experience would be since I’m not familiar with calDAV.
What’s the feed you have issues with?
Or is it with all feeds?
None of my feeds have a “read remaining paragraphs” link to expand the article in the FreshRSS UI.
As I mentioned, this one sends the full article https://www.404media.co/rss/
And this one has a partial article with a link to open the page in the site http://feeds.arstechnica.com/arstechnica/index/
I wish it were like this for all the articles, as I’m not always interested in some of the articles that load in full, and I end up having to scroll through the whole thing.
To skip to the next article you can configure the shortcuts native to FreshRSS; I think the defaults are `h` for the next unread article and `k` for the previous article. (I think these are the defaults because I haven’t changed them and that’s what I see in my config screen.)
For mobile I’m using the touch control extension in here https://github.com/langfeld/FreshRSS-extensions
I’m not sure what you mean by articles not loading properly.
I haven’t had any issues with FreshRSS’ UI showing all the data.
Have you checked whether the feed sends the whole article?
For example ars’ feed sends a few paragraphs and includes a link at the end with “Read the remaining X paragraphs”.
404media’s does send the full article content in their feed.
9to5google’s only sends a single line from the article!!
So, it depends on what you need.
If you want to see the full content, probably you need an extension which either curls the link of each item and replaces the content from the feed with the response, or one which embeds an iframe with the link so the browser loads the page for you.
IIRC there are two youtube extensions which do something similar to change the links for invidious or piped, one replaces the content with the links, and the other adds a new element to load the video directly in the feed.
The port for your postgres container is still the same for other containers, what you did was just map 5432 to 8765 for your host.
You don’t need to change the port or the host the immich services try to access within the network docker compose creates.
You still have `container_name: immich_postgres`, so you didn’t change anything for the other containers.
What you did was change how you write the command to up or down the container: from `docker compose up database` to `docker compose up immich-database` (which normally you won’t use, since you want to up and down everything at once).
If you do `docker ps` you’ll still see the name of the container is immich_postgres.
Same, I have multiple services running in some machines and I’ve never had the need to modify the ports inside each docker-compose network. Just the exposed ports are the ones I’ve changed, and just for integrating with external services or the reverse proxy.
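Illustrating the host-vs-network distinction from this thread: a `"8765:5432"` mapping only changes the host side, while other containers on the compose network still reach postgres at the service name on its container port. Sketch below (the service names are the ones discussed above; the server image and env var names are placeholders):

```shell
cat > docker-compose.yml <<'EOF'
services:
  database:
    container_name: immich_postgres
    image: postgres:16
    ports:
      - "8765:5432"   # host port 8765 -> container port 5432
  server:
    image: example/app-server   # placeholder
    environment:
      # containers address each other by service name and *container* port,
      # so the 8765 mapping above is irrelevant here
      - DB_HOSTNAME=database
      - DB_PORT=5432
EOF
grep -q '8765:5432' docker-compose.yml && echo "only the host mapping changed"
```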
I’m just downloading moonlighter again xP
It’s also included in Netflix, but the reviews say it crashes and data gets corrupted.
All the ones I’ve used require a separate service to actually do the query.
You can use traccar, owntracks, or wanderer (this one is not realtime tho, and requires you to find an app to send the data).
There’s also gpslogger which can record everything locally (or send it to any URL you set), but you need another app or service to be able to query it properly.