
Posts: 3 · Comments: 44 · Joined: 2 yr. ago

  • I set up custom bash scripts collecting information (df, docker json, smartCTL, etc.). They either parse existing JSON or assemble JSON strings and push it to the Home Assistant REST API (via cron). In Home Assistant the data is turned into sensors and displayed, and HA sends messages if sensors fail. Info served in HA:

    • HDD/SSD (size, smartCTL errors, spin up/down, temperature etc)
    • Availability/health of docker services
    • CPU usage/RAM/temperature
    • Network interface/throughput/speed/connections
    • fail2ban jails

    Trying to keep my servers as barebones as possible, since additional services/apps put strain on CPU/RAM. Found out that most of the data needed for monitoring is either already available (docker json, smartCTL json) or can easily be captured, e.g.

    df -Pht ext4 | tail -n +2 | awk '{ print $1 }'

    It was fun learning and defining what needs monitoring and what doesn't, and building a custom interface in HA.
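
    For illustration, a minimal sketch of such a push script, assuming a long-lived access token and the default HA port; the URL, token and sensor name are placeholders:

    ```bash
    #!/usr/bin/env bash
    # Minimal sketch: push one disk-usage value to Home Assistant as a sensor.
    # HA_URL, HA_TOKEN and the sensor name are placeholders; run it from cron.
    set -euo pipefail

    HA_URL="http://homeassistant.local:8123"
    HA_TOKEN="<long-lived access token>"

    # Use% of the first ext4 filesystem, e.g. "42"
    USED=$(df -Pht ext4 | tail -n +2 | awk 'NR==1 { gsub("%","",$5); print $5 }')

    curl -sS -X POST \
      -H "Authorization: Bearer ${HA_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"state\": \"${USED}\", \"attributes\": {\"unit_of_measurement\": \"%\"}}" \
      "${HA_URL}/api/states/sensor.server_disk_used"
    ```

    The REST call creates/updates the sensor entity in HA, which automations can then watch and alert on.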

  • Your friends will comment on the interface when you share music with them :) Hardly using the UI myself ;)

  • Had Airsonic for years, later Airsonic Advanced. The overhead is huge compared to Navidrome. Never had an issue with Navidrome and it is much snappier. Not even starting on the modern interface compared to Airsonic.

  • Lidarr can be used for tagging too, and it does have a web interface. Cleaning up a messed-up library with Beets is tough and depends on how the individual files are sorted. Start by importing/organizing a small part or a few albums to find out how it works (see the sketch below). And a backup of the data is always recommended!
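
    As a hedged example of that "start small" approach (paths assume beets' default config location; adjust to yours):

    ```bash
    # Safety net first: copy the library database before experimenting.
    cp ~/.config/beets/library.db ~/.config/beets/library.db.bak

    # Import a single album in timid mode (-t): beets asks before applying
    # every match, so nothing gets retagged silently while you learn how it behaves.
    beet import -t ~/music/incoming/Some-Album
    ```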

  • I can recommend Navidrome. Organizing the library with Lidarr and [Beets](https://beets.io). I’m using Beets for tagging because of the Discogs plugin, and Lidarr for a visual overview of the library.
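
    A minimal sketch of enabling the Discogs source in the beets config (default config path assumed; the token is a placeholder, and the plugin additionally needs the Discogs client library installed, see the beets docs):

    ```bash
    # Append to the beets config (merge by hand if these keys already exist).
    cat >> ~/.config/beets/config.yaml <<'EOF'
    plugins: discogs
    discogs:
        user_token: REPLACE_WITH_YOUR_DISCOGS_TOKEN
    EOF
    ```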

  • Just put all the commands into a bash file: starting with `docker tag` to keep the current tag under another name in case I need to revert, then pull and compose up. All run weekly by crontab. In case something breaks, the latest working container is still there.
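
    Roughly like this, as a sketch; image name, compose path and schedule are placeholders:

    ```bash
    #!/usr/bin/env bash
    # Weekly container update with a manual escape hatch.
    set -euo pipefail

    IMAGE="myorg/myapp"                       # placeholder image
    COMPOSE="/opt/myapp/docker-compose.yml"   # placeholder compose file

    # Keep the currently working image under a fallback tag before pulling.
    docker tag "${IMAGE}:latest" "${IMAGE}:rollback"

    docker pull "${IMAGE}:latest"
    docker-compose -f "${COMPOSE}" up -d

    # Revert path (run by hand if the update breaks something):
    #   docker tag "${IMAGE}:rollback" "${IMAGE}:latest" && docker-compose -f "${COMPOSE}" up -d
    ```

    Scheduled with a crontab line along the lines of `0 4 * * 1 /usr/local/bin/update-containers.sh`.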

  • beets music library management and tagging for geeks

  • Thanks. Now I have to buy a new device!

  • I’m using network overlays for individual containers and separation. Secondly, fail2ban is installed on the host to secure the docker services; ban in the Docker-specific FORWARD chains instead of the INPUT chain (see "Configure Fail2Ban for a Docker Container" on seifer.guru; a jail sketch is at the end of this comment). Use 2FA for services if available.

    Rootless docker has limitations when it comes to exposing ports, storage drivers, network overlays, etc.

    The host auto-installs security patches but is only rebooted manually. Docker containers are updated manually too. I build all containers from Dockerfiles and don’t pull prebuilt images, because most are modified (plugins, minimized size, dedicated user rights, etc.).
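
    A sketch of pointing a jail at the chain Docker traffic actually traverses; jail name, filter and log path are placeholders for one of the proxied apps:

    ```bash
    # Drop-in jail that bans in DOCKER-USER (reached from FORWARD) instead of INPUT.
    cat > /etc/fail2ban/jail.d/myapp.local <<'EOF'
    [myapp]
    enabled   = true
    filter    = myapp
    logpath   = /var/log/myapp/access.log
    maxretry  = 5
    bantime   = 1h
    action    = iptables-multiport[chain="DOCKER-USER", port="http,https"]
    EOF

    systemctl restart fail2ban
    ```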

  • Add a VPN and you’ve made the best of it :)

  • I think it's not so much that it's "insecure" as that external SSH access is less secure than no access at all. And for home-managed systems exposed externally, I would recommend a smaller attack surface.

  • Why would you expose SSH on a home production server? I’ve been hosting several dockerized apps for friends for years. Only the 80/443 proxy ports are open. Apps are secured with 2FA, monitored by fail2ban, and kept up to date. Never had any issue.
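
    For context, the host-firewall side of that can be as small as this ufw sketch (keeping in mind that ports published directly by Docker bypass these INPUT rules, hence the fail2ban handling mentioned in my other comment):

    ```bash
    # Default deny, only the reverse-proxy ports reachable from outside.
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw enable
    ```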

  • 😂

  • Define which data is of value. I’ve got 68 TB of data, but realistically only 3 TB is of such value that I maintain several copies (Raspi + SSD) plus an online backup. The rest of the data is stored on a cheap server build at a family member’s place and synchronized twice a year. Make sure your systems and drives are all encrypted. And test your backups and your redeployment strategy.

  • If your data is that valuable, I’m sure you took the time to set up a fully encrypted system (LUKS).

  • Can relate to the approach. Keeping the host barebones and everything dockerized, with data volumes hosted separately, eases maintenance. For rapid redeployment a custom script sets up firewall/fail2ban/SSH/smartCTL/crontab/docker/docker-compose and finally loads backups of all docker images from another instance. A complete setup from scratch takes 10-15 minutes. Tried Ansible but ended up custom scripting; a rough skeleton is below.
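
    The skeleton, as a sketch only: package names are Debian/Ubuntu-style assumptions and all paths/filenames are placeholders.

    ```bash
    #!/usr/bin/env bash
    # Redeployment sketch: base packages, prepared configs, then restore containers.
    set -euo pipefail

    apt-get update
    apt-get install -y ufw fail2ban smartmontools docker.io docker-compose

    # Copy prepared configs into place (firewall rules, jails, sshd, smartd, cron).
    install -m 0644 configs/jail.local  /etc/fail2ban/jail.local
    install -m 0600 configs/sshd_config /etc/ssh/sshd_config
    crontab configs/crontab.txt
    systemctl enable --now fail2ban docker

    # Restore the docker images from the backup host and bring the stacks up.
    for tarball in /backup/images/*.tar; do docker load -i "$tarball"; done
    docker-compose -f /opt/stacks/docker-compose.yml up -d
    ```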

    All my data is stored offsite twice a year. Data of high value lives on an SSD as a data volume and on 2 other SSDs as encrypted TARs plus AWS S3, rotated daily/weekly.
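
    One rotation step could look like this sketch; the volume path, passphrase file and bucket are placeholders:

    ```bash
    #!/usr/bin/env bash
    # Encrypted tar of the high-value volume, written to a second SSD and copied to S3.
    set -euo pipefail

    STAMP=$(date +%F)
    OUT="/mnt/backup-ssd/important-${STAMP}.tar.gz.gpg"

    tar -czf - /srv/data/important \
      | gpg --batch --symmetric --pinentry-mode loopback \
            --passphrase-file /root/.backup-pass -o "${OUT}"

    aws s3 cp "${OUT}" s3://my-backup-bucket/
    ```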

  • Haha, same service, same reason it was removed. The tool works well but we all have different habits. Just didn’t suit my work style.

  • Deleted the double posts. The web app doesn’t show whether my posts were transmitted successfully. Need to be more patient, I guess.