
  • This is a fantastic comment. Thank you so much for taking the time.

    I wasn't planning to run a GUI for my git servers unless really required, so I'll probably use SSH. Thanks, yes, that makes the reverse-proxy part a lot easier.

    I think your idea of having a designated "master" (server 1) with rolling updates to the rest of the servers is brilliant. The replication procedure becomes a lot easier this way, and it removes the need for the reverse proxy too: I can just use Keepalived and set up weights to make one of them the master and the rest slaves for failover (a rough config sketch follows this comment). It also won't do round-robin, so nothing special is needed for sticky sessions! This is great news for the networking side of this project.

    Hmm, you said to push repos to the remote git server instead of having it pull? I was going to create a WireGuard tunnel and have it accessible from my network for some stuff, but I guess that makes sense.

    Thanks again for the wonderful comment.
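
    For reference, here's roughly the Keepalived setup I have in mind; the interface name, virtual IP, secret and priorities below are placeholders, not details from this thread:

    ```
    # /etc/keepalived/keepalived.conf on server 1 (the intended master)
    vrrp_instance GIT_VIP {
        state MASTER              # BACKUP on the other four servers
        interface eth0            # placeholder interface name
        virtual_router_id 51
        priority 150              # lower priorities (e.g. 140, 130, ...) on the backups
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass examplepw   # placeholder shared secret
        }
        virtual_ipaddress {
            192.0.2.10/24         # placeholder VIP that git clients connect to
        }
    }
    ```

    Clients always talk to the VIP; if the master drops, the next-highest priority picks it up, so there's no round-robin and nothing to do for sticky sessions.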

  • Sorry, I don't understand. What happens when my k8s cluster goes down taking my git server with it?

  • I think I messed up my explanation again.

    The load-balancer in front of my git servers doesn't really matter. I can use whatever, really. What matters is: how do I make sure that when a client writes to a repo on one of the 5 servers, the changes are synced in real time to the other 4 as well? Running rsync every 0.5 seconds doesn't seem to be a viable solution.
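
    The closest thing I can think of so far is a post-receive hook on whichever server accepts the push, mirroring it on to the others; something like this rough sketch (hostnames and the repo path are placeholders):

    ```
    #!/bin/sh
    # hooks/post-receive in the bare repo on the server that received the push.
    # Forwards all refs to the other replicas (placeholder hostnames/paths).
    # The replicas themselves should not run this hook, to avoid push loops.
    for replica in git2.internal git3.internal git4.internal git5.internal; do
        git push --mirror "ssh://git@${replica}/srv/git/myrepo.git" || \
            echo "warning: failed to sync to ${replica}" >&2
    done
    ```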

  • You mean have two git servers, one "PROD" and one for infrastructure, and mirror repos in both? I suppose I could do that, but if I were to go that route I could simply create 5 remotes for every repo and push to each individually (see the sketch after this comment).

    For the k8s suggestion - what happens when my k8s cluster goes down, taking my git server along with it?
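
    Alternatively, rather than 5 separate remotes, git can hang several push URLs off a single remote, so one push fans out to all of them (the hosts and paths below are placeholders):

    ```
    # Add extra push URLs to the existing "origin" remote (placeholder hosts/paths).
    # Note: the first --add --push replaces the default push URL, so list all five.
    git remote set-url --add --push origin ssh://git@git1.internal/srv/git/myrepo.git
    git remote set-url --add --push origin ssh://git@git2.internal/srv/git/myrepo.git
    # ...repeat for git3..git5; "git push origin" then pushes to every URL in turn
    ```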

  • GitHub didn't publish the source code for their project, previously known as DGit (Distributed Git) and now known as Spokes. The only mention of it is in a blog post on their website, but I don't have the link handy right now.

  • Thank you. I did think of this but I'm afraid this might lead me into a chicken and egg situation, since I plan to store my Kubernetes manifests in my git repo. But if the Kubernetes instances go down for whatever reason, I won't be able to access my git server anymore.

    I edited the post, which will hopefully clarify what I'm thinking about.

  • Apologies for not explaining better. I want to run a load balancer in front of multiple instances of a git server. When my client performs an action like a pull or a push, it will go to one of the 5 instances, and the changes will then be synced to the rest.

    I have edited the post to hopefully make my thoughts a bit clearer.

  • Apologies for not explaining it properly. Essentially, I want to have multiple git servers (let's say 5 for now), have them automatically sync with each other, and run a load balancer in front. So when a client performs an action on a repository, it goes to one of the 5 instances and the changes are written to the rest.

    I have edited the post; hopefully the explanation makes more sense now.

  • B2

  • Dom0 being based on Fedora has been a gripe of mine for a while now. Fedora isn't that secure without some effort either. Unfortunately, I have no way to confirm which of them is "more secure".

    Do you have any sort of automated test framework in mind which one can use to test distros against attacks?
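
    For what it's worth, the closest thing I know of is benchmark scanning rather than attack simulation, e.g. OpenSCAP with a CIS-style profile; the profile id and datastream path vary per distro, so treat these as placeholders:

    ```
    # Scan the running system against a scap-security-guide profile
    # (datastream filename and profile id differ per distro -- placeholders here)
    sudo oscap xccdf eval \
        --profile xccdf_org.ssgproject.content_profile_cis \
        --report /tmp/scan-report.html \
        /usr/share/xml/scap/ssg/content/ssg-fedora-ds.xml
    ```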

  • Thanks for the tip, love Capy.

    You're right, Whonix is probably the better idea. I use Kicksecure now, but if I move to Qubes then I'll use Whonix as the default.

    I'll have to read more about secureblue. I haven't given the documentation as much time as I should have. I guess you could run an HVM for now.

    Why do you rank secureblue over Whonix?

  • Hey, I recognise you now! That was a great post, I had a lot of fun reading it. If I could follow people on Lemmy I'd follow you.

    What do you think about Kicksecure (and Kicksecure inside of Qubes)? I know they are criticised for backports, but leaving that issue aside, I think they have created a very handy distro. I personally go through CIS benchmarks for most of my stuff, including Kicksecure, but it's definitely nice to have a pre-hardened distro (secureblue too, but I hear secureblue isn't a big team, so I'm not sure how much time they have to address the broad range of desktop Linux security issues).

    Honestly, Qubes is the best at this by far. Their method of compartmentalisation takes away the complexity of reasonable security from the end-user making it a mostly seamless experience. I personally think that if you were to put GrapheneOS and Qubes OS side-by-side on uncompromised hardware, I'd take Qubes. I'd run GrapheneOS inside Qubes with a software/hardware TPM passed through if I could.

  • I'd donate to them if someone is willing

    Thanks. You are correct; however, since root is required for certain processes, I will use different users and doas for my needs (a rough doas.conf sketch follows this comment).

    I have realised that it is hard for me to justify why I want to harden an OS for personal use. I gave privilege escalation as one reason, but after reading your comment I have realised that it is not the only thing I am looking to "fix". My intention with running hardened_malloc was to prevent DoS attacks by malicious applications trying to exploit unknown buffer overflows, and LibreSSL and musl were there to reduce the attack surface.

    I agree with your comment though. I'm just wondering how I can specify a reason (and why such a reason is required to justify hardening a distro). I haven't found much of a reason for the existence of OpenBSD, Kicksecure, Qubes etc. other than general hardening and security.
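
    For the doas side, I'm picturing something along these lines; the usernames and commands are placeholders, not my actual setup:

    ```
    # /etc/doas.conf -- placeholder users and commands
    # members of wheel may run commands as root, with persisted authentication
    permit persist :wheel

    # a dedicated service account may run exactly one command as root, no password
    permit nopass deploy as root cmd systemctl args restart nginx
    ```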

  • I think it's time I seriously learn e-hentai's UI

  • Sorry I didn't open the article. Who is suing them?

  • Thank you for that. Yes, I only really follow his post roughly.

    Unfortunately, I don't think secureblue is going to be a possible choice. I like the secureblue project, I think it's awesome but what I'm working with will likely only come with a Rocky/AlmaLinux base.

  • You raise a valid point. In which case, I want to try and prevent malicious privilege escalation by a process on this system. I know that's a broad topic and depends on the application being run, but most of the tweaks I've listed work towards that to an extent.

    To be precise, I'm asking how to harden the upcoming AlmaLinux-based Dom0 by the XCP-NG project. I want my system to be difficult to work with even if someone breaks into it (unlikely, because I trust Xen as a hypervisor platform, but still); a few example kernel knobs are sketched after this comment.

    I admit I was a bit surprised by the question since I've never consciously thought about a reason to harden my OS. I always just want to do it and wonder why OSes aren't hardened more by default.
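
    To give a concrete flavour of the "general sense" I mean, these are the kinds of commonly recommended kernel knobs I'd start from (a rough sketch, not anything XCP-NG specific):

    ```
    # /etc/sysctl.d/99-hardening.conf -- commonly recommended hardening knobs
    kernel.kptr_restrict = 2             # hide kernel pointers from unprivileged users
    kernel.dmesg_restrict = 1            # restrict dmesg to privileged users
    kernel.yama.ptrace_scope = 2         # only admins may ptrace other processes
    kernel.unprivileged_bpf_disabled = 1 # no unprivileged BPF program loading
    net.core.bpf_jit_harden = 2          # harden the BPF JIT for all users
    fs.protected_symlinks = 1            # restrict symlink following in sticky dirs
    fs.protected_hardlinks = 1           # restrict hardlink creation
    ```

    (Applied with sysctl --system.)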

  • What do you mean? I want to harden it in a general sense against exploitation.