Posts: 5 | Comments: 19 | Joined: 2 yr. ago

  • .

  • I thought the ‘hot’ ranking was a mixture of votes and comment engagement?

    Hot: Like active, but uses time when the post was published

    https://join-lemmy.org/docs/users/03-votes-and-ranking.html

    I do feel like there needs to be some further tweaking: Controversial should have a time falloff so that it shows recent controversy instead of something from 6 months ago, for example.

    Yeah, I believe the "Most Comments" sort should have a time limit too; a rough sketch of such a decay follows below. There is an issue open about it: Controversial post sort should have time limit
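
    For illustration only, a minimal TypeScript sketch of how the time decay from Lemmy's documented hot rank (roughly log(max(1, score + 3)) / (hours + 2)^1.8, per the join-lemmy.org page linked above) could be reused for a controversial sort. The controversy score itself is an invented placeholder, not Lemmy's actual implementation.

    ```typescript
    // Hot rank per the join-lemmy.org docs (Gravity = 1.8).
    const GRAVITY = 1.8;

    function hotRank(score: number, hoursSincePublish: number): number {
      return Math.log(Math.max(1, score + 3)) / Math.pow(hoursSincePublish + 2, GRAVITY);
    }

    // Invented controversy score: large when votes are numerous and evenly split.
    function controversyScore(upvotes: number, downvotes: number): number {
      if (upvotes === 0 || downvotes === 0) return 0;
      const magnitude = upvotes + downvotes;
      const balance = Math.min(upvotes, downvotes) / Math.max(upvotes, downvotes);
      return magnitude * balance;
    }

    // Reusing the same decay would surface recent controversy instead of
    // a heavily voted post from six months ago.
    function decayedControversy(upvotes: number, downvotes: number, hoursSincePublish: number): number {
      return controversyScore(upvotes, downvotes) / Math.pow(hoursSincePublish + 2, GRAVITY);
    }
    ```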

  • This is not possible because sorting is done in the database, so adding a new sort option requires a database migration with new indexes, columns and updated queries. Not something that can be done with a simple plugin.

    @[email protected] in https://github.com/LemmyNet/lemmy/issues/3936#issuecomment-1738847763

    An alternative approach could involve an API endpoint that provides metadata for recent posts, allowing users to implement custom sorting logic on the client side with JavaScript (see the sketch at the end of this comment). This API endpoint is currently accessible only to moderators and administrators.

    There is already such an API endpoint which is available for mods and admins.

    @[email protected] in https://lemmy.ml/comment/9159963
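
    As a sketch of that client-side idea (using the ordinary public post listing rather than the mod/admin-only endpoint mentioned above): the snippet below assumes GET /api/v3/post/list and the usual PostView shape (post plus counts with score, comments, published). Field names and limits vary between Lemmy versions, so treat it as an outline rather than working client code.

    ```typescript
    // Assumed response shape; check your instance's API version before relying on it.
    interface PostView {
      post: { id: number; name: string };
      counts: { score: number; comments: number; published: string };
    }

    async function fetchRecentPosts(instance: string, limit = 50): Promise<PostView[]> {
      const res = await fetch(`https://${instance}/api/v3/post/list?sort=New&limit=${limit}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      const body = (await res.json()) as { posts: PostView[] };
      return body.posts;
    }

    // Example custom sort: comment engagement per hour since publication.
    function sortByCommentRate(posts: PostView[]): PostView[] {
      const now = Date.now();
      const rate = (p: PostView) =>
        p.counts.comments / Math.max(1, (now - Date.parse(p.counts.published)) / 3_600_000);
      return [...posts].sort((a, b) => rate(b) - rate(a));
    }
    ```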

  • Where? I haven't heard any of that.

  • I did read the links, and I still strongly feel that no automated mechanical system of weights and measures can outperform humans when it comes to understanding context.

    But this is not a way to replace humans; it's just a method to grant users moderation privileges based on their tenure on a platform. Currently, most federated platforms only offer moderator and admin levels of moderation, which makes setting up an instance tedious because of the time spent managing the report inbox. Automating the assignment of moderation levels (sketched below) would streamline this, letting admins simply adjust the trust level of select users to customize their instance as desired.
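
    Purely as an illustration of that idea (nothing like this exists in Lemmy or Mastodon today, and every threshold below is invented), automatic assignment with an admin override could look like this:

    ```typescript
    type TrustLevel = 0 | 1 | 2 | 3;

    interface UserActivity {
      daysSinceSignup: number;
      daysVisited: number;
      postsRead: number;
    }

    // All thresholds are invented for illustration.
    function assignTrustLevel(a: UserActivity, adminOverride?: TrustLevel): TrustLevel {
      if (adminOverride !== undefined) return adminOverride; // admin hand-picks a level
      if (a.daysSinceSignup >= 180 && a.daysVisited >= 100 && a.postsRead >= 2000) return 3;
      if (a.daysSinceSignup >= 30 && a.daysVisited >= 15 && a.postsRead >= 200) return 2;
      if (a.postsRead >= 30) return 1;
      return 0;
    }

    // e.g. routine reports could go to level-3 users instead of the admin inbox.
    const canHandleReports = (a: UserActivity) => assignTrustLevel(a) >= 3;
    ```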

  • Trust lvls themselves are just Karma plus login/read tracking aka extra steps.

    Trust Levels are acquired by reading posts and spending time on the platform, instead of receiving votes for posting. Therefore, it wouldn't lead to low-quality content unless you choose to implement it that way.

    The Karma system is used more as a bragging right than to give any sort of moderation privilege to users.

    But in essence it is similar: you get useless points with one and moderation privileges with the other.

    If you are actually advocating that the Fediverse use Discourse’s service you have to be out of your mind.

    You are making things up just so you can call me crazy. I'm not advocating anything of the sort.

  • Karma promotes shitposting, memes and the like; I've yet to see that kind of content on Discourse.

  • Yeah, and the FOSS alternative Codidact isn't any better. What's the point of asking for solutions to bugs when even an LLM can solve those already? I want proper solutions to actual problems so that I can find everything there, not just bug troubleshooting.

  • There has to be a way to federate trust levels; otherwise all of this just isn't applicable to the Fediverse. One of the links I posted talks about how to federate trust levels. So the appeal is processed by a user with a higher trust level, as sketched below.
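
    A hypothetical sketch of that appeal flow, assuming trust levels were federated as plain integers: the appeal is handed to someone whose trust level is strictly higher than that of the user who made the original decision.

    ```typescript
    interface Participant { id: number; trustLevel: number; }
    interface Appeal { reportId: number; decidedBy: Participant; }

    // Pick a reviewer whose trust level is strictly higher than the original decider's.
    function pickReviewer(appeal: Appeal, candidates: Participant[]): Participant | undefined {
      return candidates
        .filter((c) => c.id !== appeal.decidedBy.id && c.trustLevel > appeal.decidedBy.trustLevel)
        .sort((a, b) => b.trustLevel - a.trustLevel)[0];
    }
    ```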

  • A system like this rewards frequent shitposting over slower qualityposting. It is also easily gamed by organized bad faith groups. Imagine if this was Reddit and T_D users just gave each other a high trust score, valuing their contributions over more “organic” posts.

    You are just assuming that this would work like Reddit's karma system. I don't know why you would assume the worst possible implementation just so you can complain about it. If you had read the links, you would know that shitposting wouldn't help much, because what contributes most to Trust Levels in Discourse is reading posts (see the toy example below).
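
    A toy example with invented weights (not Discourse's actual formula) of why posting volume alone barely moves a reading-based trust metric:

    ```typescript
    interface Activity { postsRead: number; daysVisited: number; postsCreated: number; }

    // Invented weights: reading and regular visits dominate, posting barely counts.
    function trustScore(a: Activity): number {
      return a.postsRead * 1.0 + a.daysVisited * 10.0 + a.postsCreated * 0.5;
    }

    const shitposter = trustScore({ postsRead: 50, daysVisited: 5, postsCreated: 300 });   // 250
    const avidReader = trustScore({ postsRead: 2000, daysVisited: 90, postsCreated: 10 }); // 2905
    ```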

  • Lemmy @lemmy.ml

    Rethinking Moderation: A Call for Trust Level Systems in the Fediverse

  • Fediverse @lemmy.ml

    Rethinking Moderation: A Call for Trust Level Systems in the Fediverse

  • Fediverse @lemmy.ml

    The Great Monkey Tagging Army: How Fake Internet Points Can Save Us All!