I doubt many people outside the US have any clue about whether the US justice system needs to be restructured, so there goes ~95% of the global population.
Excluding people from discussions because they don't agree with one point is setting yourself up for failure. You aren't winning anyone over with an all-or-nothing attitude; you're cutting off many potential allies.
I only use Lemmy regularly. I'll still check specific subreddits which don't have a Lemmy equivalent, but not that often, and never signed in or with the official app.
Callbacks and decorators are fine, but callbacks/decorators to a function which itself takes a function pointer and returns another function pointer are crazy.
I've thankfully never had to use recursive callbacks or decorators, but it seems like they could very quickly become difficult to keep track of.
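For illustration, here's a minimal Python sketch of the nesting being described: a decorator factory whose return value is itself a function that takes a function and returns another function. The names (`repeat`, `greet`) are hypothetical, just to make the layers visible:

```python
from functools import wraps

def repeat(times):
    """Decorator factory: takes a count, returns a decorator.

    The returned decorator takes a function and returns yet another
    function (the wrapper) -- three nested function layers in total.
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result
        return wrapper
    return decorator

@repeat(times=3)
def greet(name):
    return f"hello, {name}"
```

Each layer is simple on its own, but once the thing being decorated also accepts or returns functions, tracing what actually gets called takes real effort.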
That's a great question! Let's go over the common factors which can typically be used to differentiate humans from AI:
🧠 Hallucination
Both humans and AI can have gaps in their knowledge, but you can often tell a person from an LLM by paying close attention to their answers.
If a person doesn't know the answer to something, they will typically let you know. But if an AI doesn't know the answer, it will typically fabricate one, as it is generally trained to always return an informative response.
✍️ Writing style
People typically each have a unique writing style, which can be used to differentiate and identify them.
For example, somebody may frequently make the same grammatical errors across all of their messages, whereas an AI samples tokens by frequency and is therefore more likely to produce correct grammar.
❌ Explicit material
As an AI assistant, I am designed to provide factual information in a safe, legal, and inclusive manner. Speaking about explicit or unethical content could create an uncomfortable or uninclusive atmosphere, which would go against my guidelines.
A human on the other hand, would be free to make remarks such as "cum on my face daddy, I want your sweet juice to fill my pores." which would be highly inappropriate for the given context.
🌐 Cultural differences
People from specific cultures may be able to detect the presence of an AI based on its lack of culture-specific language. For example, an AI pretending to be Australian will likely draw suspicion amongst Australians, due to the lack of the word 'cunt' in every sentence.
💧 Instruction leaks
If a message contains wording which indicates the sender is working under instruction or guidance, it could indicate that they are an AI. However, be wary of predominantly human traits like sarcasm, as it is also possible that the commenter is a human pretending to be an AI.
🎁 Wrapping up
While these signs alone may not be enough to determine if you are speaking with a human or an AI, they may provide valuable tools in your investigative toolkit. Resolving confusion by authenticating Personally Identifiable Information is another great step to ensuring the authenticity of the person you're speaking with.
Would you like me to draft a web form for users to submit their PII during registration?
Agreed. I don't understand the people who claim it's easier to work with, or better for prototyping.
Type inference exists. Type casting exists and is even handled automatically in some scenarios. Languages like Java and C# can manage memory for you, and have the same portability and runtime requirement as Python.
Prototyping in python and then moving to another language later makes no sense to me at all.
The field is called mathematics, but I see math as a short form of mathematic or mathematical.
Calling something a 'math' question or a 'maths' question both make sense. But something like "I hate math" sounds like you hate a singular mathematic, which seems weirder to me than "I hate maths" (the field).
Linus is also a mage and temporarily absorbs her karate skills before business trips