
Posts: 2 · Comments: 71 · Joined: 1 yr. ago

  • No I get that, thanks a lot for explaining! I work with a bunch of other stuff where help is mostly also only available on discord so that’s fine.

    I really need to read into the whole Android stuff more. I know privacy and security are different topics, it’s just a weird thing to wrap my head around that Android would be the most secure option.

    Another issue is that for what I’m doing I need to rent VPSes, and there you’re already quite limited as to what you can run on them, so Android probably wouldn’t be an option, right? And let’s say I want to deploy some apps there, would this work on Android out of the box? I know it’s Linux under the hood, I’m just not really deep into the more advanced Linux stuff tbh.

  • Hey! Thanks for this. I’ve worked with Ubuntu and Debian but mostly work on Mac. I’m interested in going deeper into Linux distros and am completely fine with working from the terminal. I’m just curious what exactly makes the Fedora and secureblue distros more difficult, so I can understand how far I am from running a secure distro.

  • Deleted

    Permanently Deleted

  • I think cops are just getting fucked over by dealers so they don’t know the real prices

  • You can deploy a Cloudflare Worker that exposes an API endpoint backed by a SQLite DB, completely for free and without doing any maintenance. I don’t think the DB is encrypted, so it wouldn’t be my first choice if privacy is a concern. There’s a bit of a learning curve with all the UI bloat, but once you’ve figured it out it’s a very hassle-free solution.
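
    For illustration, here’s a minimal local stand-in for that idea in Python (stdlib `http.server` + `sqlite3`). An actual Cloudflare Worker would be written in JS/TS against Cloudflare’s SQLite-backed database; the table name and schema here are made up:

```python
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

def init_db() -> sqlite3.Connection:
    # In-memory stand-in for the Worker's hosted SQLite database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    conn.commit()
    return conn

def fetch_notes(conn: sqlite3.Connection) -> list:
    rows = conn.execute("SELECT id, body FROM notes").fetchall()
    return [{"id": r[0], "body": r[1]} for r in rows]

class NotesHandler(BaseHTTPRequestHandler):
    conn = init_db()

    def do_GET(self) -> None:
        # Serve the table contents as JSON, like the Worker endpoint would.
        payload = json.dumps(fetch_notes(self.conn)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8787), NotesHandler).serve_forever()
```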

  • I’ve read a lot about using a VPS with a reverse proxy, but I’m kind of a noob in that area. How exactly does that protect my machine? Couldn’t an attacker with access to the VPS still harm my local machine? Currently I’m just using a WireGuard tunnel to log into my server; from what I understand, you’d tunnel the service from the VPS to the home server, and then on the VPS URL you could watch, right?

    And do I understand correctly that since we’re using the reverse proxy the possible attack surface just from finding the domain would be limited to the web interface of e.g. Jellyfin?

    Sorry for the chaotic & potentially stupid questions, I’m just really a confused beginner in this area.
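
    The setup being discussed can be sketched as an nginx site config on the VPS that forwards requests over the WireGuard tunnel to the home server. Everything concrete below is an assumption: `media.example.com` is a placeholder domain, `10.0.0.2` an assumed WireGuard address for the home server, and `8096` Jellyfin’s default port:

```nginx
# On the VPS — forwards public traffic to Jellyfin over WireGuard.
server {
    listen 443 ssl;
    server_name media.example.com;   # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/media.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/media.example.com/privkey.pem;

    location / {
        # 10.0.0.2 = home server's address inside the WireGuard tunnel.
        # Only this proxied web UI is reachable from the internet.
        proxy_pass http://10.0.0.2:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;      # websocket support
        proxy_set_header Connection "upgrade";
    }
}
```

    Under this setup the internet-facing attack surface is nginx plus the proxied web interface; the home machine itself is only reachable through the tunnel.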

  • Shit just works as usual

  • OP that’s a killer list of books you’ve read. IMO you have a point. To all the people who say that you’d be alienated watching old movies, that method acting is important, and that the special effects of the last 20 years are what make it different: idk. It really depends on what you’re looking for.

    Hitchcock movies or the stuff with Humphrey Bogart, Marlon Brando, even the super racist Italo Western movies, the very old Kubrick stuff, that’s all great cinema.

    I’m as left wing as it gets, but I also get very alienated by the “diversity” and “feminism” of modern Hollywood & Netflix cinema. It’s the same type of diversity and feminism that exists in the corporate world, where there is diversity in terms of ethnicity and sexuality, but only within class. It’s a fictional world to me the same way the old movies are, just done by a different bunch of people living in their own world.

    There’s still some good cinema and good shows out there every now and then, but to think old movies can’t compete with modern TV & cinema just because they’re old is a very simplistic take.

  • Apple TV is smooth, but as others pointed out you shouldn’t trust their claims that they’re private. They’re probably more private than e.g. Google out of the box, but not an actual privacy company.

    Side note: in case you’re an Apple user and weren’t aware of this, you can make it a bit better by enabling Advanced Data Protection, so you hold your own keys and get actual E2EE, and you can add hardware keys for 2FA, which makes it more secure. Of course this should be the bare minimum, but it’s nice that they support these things out of the box.

    Regarding metadata: https://support.apple.com/en-us/102651

    Maybe it’s worth checking if you’re okay with what kind of metadata they’re processing.

  • Don’t ever mention Winnie the Pooh

  • No porn and drugs, but “free speech”? Yeah right, no thanks. If my account on Mastodon gets banned on an instance, I go somewhere else.

    Of course if the fediverse becomes too centralized, the couple of instances left might just defederate from everyone else, but OTOH what protects me from a couple of individuals downvoting me into oblivion on Bastyon?

    They’re both decentralized in their own way but communities have to fight against malicious actors that attack the decentralization.

  • Typical politician, identifies the problem only to draw the absolute wrong conclusion.

  • Thanks for the reply, still reading here. Yeah, thanks to the comments and some benchmarks I read, I abandoned the idea of getting an Apple; it’s just too slow.

    I was hoping to test Qwen 32B or Llama 70B with longer contexts, hence the Apple seemed appealing.

  • Congrats on being that guy

  • You’re aware that there’s the OpenAI API library, right? https://github.com/openai/openai-python

    It’s really nothing fancy, especially on Lemmy where like 99% of people are software engineers…
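
    A minimal sketch of using that library (the model name and prompt are placeholders; assumes `pip install openai` and an `OPENAI_API_KEY` in the environment):

```python
import os

def build_messages(prompt: str) -> list:
    # Plain chat-completions payload; nothing framework-specific.
    return [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": prompt},
    ]

if __name__ == "__main__":
    # Imported here so the sketch only needs the package when actually run.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=build_messages("Summarize what a reverse proxy does."),
    )
    print(resp.choices[0].message.content)
```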

  • Are you drunk?

  • Yeah, I found some stats now, and indeed you’re gonna wait like an hour to process if you throw like 80–100k tokens into a powerful model. With APIs that kinda works instantly; not surprising, but just to give a comparison. Bummer.
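
    Back-of-the-envelope, that hour-long wait follows from simple division; the ~25 tok/s prompt-processing speed below is an illustrative assumption, not a measured benchmark:

```python
def prefill_seconds(tokens: int, toks_per_sec: float) -> float:
    """Time to ingest (prefill) a prompt at a given prompt-processing speed."""
    return tokens / toks_per_sec

# Assumed numbers: a ~90k-token prompt at ~25 tok/s prompt processing.
hours = prefill_seconds(90_000, 25) / 3600  # 1.0 hour
```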

  • Thanks! Hadn’t thought of YouTube at all, but it’s super helpful. I guess that’ll help me decide if the extra RAM is worth it, considering that inference will be much slower if I don’t go NVIDIA.

  • Yeah, I was thinking about running something like Code Qwen 72B, which apparently requires 145 GB of RAM to run the full model. But if it’s super slow, especially with large context, and I can only run small models at acceptable speed anyway, it may be worth going NVIDIA for CUDA alone.
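
    The ~145 GB figure lines up with weight-only arithmetic (parameters × bytes per parameter), which ignores KV cache and runtime overhead:

```python
def weight_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough weight-only memory footprint in GB (no KV cache or overhead)."""
    return n_params * bytes_per_param / 1e9

# 72B parameters at fp16 (2 bytes/param) vs 4-bit quantization (0.5 bytes/param).
fp16_gb = weight_gb(72e9, 2.0)  # 144.0 GB — close to the ~145 GB figure
q4_gb = weight_gb(72e9, 0.5)    # 36.0 GB
```

    Which is why quantized variants are usually what people actually run on single machines.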

  • Selfhosted @lemmy.world

    Using Mac M2 Ultra 192GB to Self-Host LLMs?

  • Selfhosted @lemmy.world

    Selfhosting GitLab?