• 0 Posts
  • 25 Comments
Joined 2 years ago
Cake day: June 18th, 2023


  • 100% agree.

    For anyone who may disagree, consider thinking of excess wealth as excess food.

    If you were in a stadium full of people representing all of humanity, and you had more food than you could ever eat in multiple lifetimes, would you not be an evil person for not sharing with those who are literally starving to death?

    These are people with so much wealth that they could easily pay a team of people to plan out how to appropriately give away most of it, so they have no “excess food” left by the time they die - all without it impacting their day-to-day lifestyle. Instead, they let others starve.


  • I was also not sure what this meant, so I asked Google’s Gemini, and I think this clears it up for me:


    This means that the creators of Llama 3.3 have chosen to release only the version of the model that has been fine-tuned for following instructions. They are not making the original, “pretrained” version available.

    Here’s a breakdown of why this is significant:

    • Pretrained models: These are large language models (LLMs) trained on a massive dataset of text and code. They have learned to predict the next word in a sequence, and in doing so, have developed a broad understanding of language and a wide range of general knowledge. However, they may not be very good at following instructions or performing specific tasks.
    • Instruction-tuned models: These models are further trained on a dataset of instructions and desired outputs. This fine-tuning process teaches them to follow instructions more effectively, generate more relevant and helpful responses, and perform specific tasks with greater accuracy.

    In the case of Llama 3.3 70B, you only have access to the model that has already been optimized for following instructions and engaging in dialogue. You cannot access the initial pretrained model that was used as the foundation for this instruction-tuned version.

    Possible reasons why Meta (the creators of Llama) might have made this decision:

    • Focus on specific use cases: By releasing only the instruction-tuned model, Meta might be encouraging developers to use Llama 3.3 for assistant-like chat applications and other tasks where following instructions is crucial.
    • Competitive advantage: The pretrained model might be considered more valuable intellectual property, and Meta may want to keep it private to maintain a competitive advantage.
    • Safety and responsibility: Releasing the pretrained model could potentially lead to its misuse for generating harmful or misleading content. By releasing only the instruction-tuned version, Meta might be trying to mitigate these risks.

    Ultimately, the decision to release only the instruction-tuned model reflects Meta’s strategic goals for Llama 3.3 and their approach to responsible AI development.
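
    For anyone who wants to see the practical difference, here is a minimal sketch of loading the instruct release with Hugging Face transformers. The repo id below is an assumption based on Meta’s usual naming; note that no corresponding base/pretrained repo is published, and that an instruction-tuned model expects a chat-formatted prompt rather than a raw text continuation.

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Assumed repo id; Meta published only this instruction-tuned variant,
        # no base/pretrained repo.
        MODEL_ID = "meta-llama/Llama-3.3-70B-Instruct"

        tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
        model = AutoModelForCausalLM.from_pretrained(
            MODEL_ID,
            torch_dtype=torch.bfloat16,  # 70B parameters: expect to need multiple GPUs
            device_map="auto",
        )

        # Instruction-tuned models are prompted via a chat template, not bare text.
        messages = [{"role": "user", "content": "Explain E2EE in one sentence."}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        output = model.generate(inputs, max_new_tokens=100)
        print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

    With a pretrained base model you would instead feed raw text and get a continuation; that is the variant Meta has not published here.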



  • nor any evidence of them selling or allowing anyone access to their servers and recent headline news backs this up

    The entire point is that you shouldn’t have to trust that a third party (Telegram or whoever takes over in the future) will never sell or allow access to your already-accessible data.

    There’s no evidence that MTProto has ever been cracked, nor any evidence of them selling or allowing anyone access to their servers and recent headline news backs this up

    Just because it’s not happening now does not mean it cannot happen in the future. If/when Telegram does get compromised or sold, whoever ends up with their servers already has your data; it’s completely out of your control.

    Google, on the other hand, routinely allow “agencies” access to their servers, often without a warrant

    Exactly my point. Google are using the exact same “security” as Telegram, so your data is already compromised. Side note - supposedly RCS chats between Android devices are E2EE, although I wouldn’t trust it since, like Telegram, you’re mixing high- and low-security contexts, which is bad OPSEC.

    WhatsApp - who you cite as a good example of E2E encryption - stores chat backups on GDrive unencrypted by default

    1. Security is about layers. E2EE is better than no E2EE, just as transport-layer encryption is better than none. Would you prefer that anyone on the wire could read your messages just because the protection isn’t perfect in every single use case? No, and for that same reason, E2EE is better.
    2. Backups can be made E2EE [1] - see the sketch after this list. Is this perfect? No. But it’s significantly better than Telegram.
    3. I’m only pointing out that WhatsApp is better for privacy than Telegram - I still don’t personally use or recommend it.
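
    To illustrate point 2, here is a minimal sketch of client-side backup encryption - not WhatsApp’s actual scheme, just the general idea that a backup can be encrypted with a key derived from a user-held passphrase before it ever reaches cloud storage, so the storage provider only sees ciphertext. The function name and parameters are illustrative; it uses Python’s cryptography package.

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def encrypt_backup(backup: bytes, passphrase: str) -> bytes:
            # Derive a 256-bit key from a passphrase the user controls;
            # the storage provider never sees the passphrase or the key.
            salt = os.urandom(16)
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                             salt=salt, iterations=600_000)
            key = kdf.derive(passphrase.encode())

            # Encrypt the backup locally, before upload.
            nonce = os.urandom(12)
            ciphertext = AESGCM(key).encrypt(nonce, backup, None)

            # Only salt + nonce + ciphertext ever leave the device.
            return salt + nonce + ciphertext

    Whoever holds the cloud copy cannot read it without the passphrase, which is exactly the property an unencrypted-by-default GDrive backup lacks.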

    … can you be sure the same is true for the people on the other end of your chats?

    Valid concern, but this threat exists on almost every single platform. What’s to stop someone on the other end from taking screenshots of all your messages and storing them insecurely?

    [1] https://www.tomsguide.com/news/whatsapp-encrypted-backups




  • Lacking end-to-end encryption does not mean it lacks any encryption at all, and that point seems to escape most people.

    Not using end-to-end encryption is the equivalent of using a best practice developed nearly 30 years ago [1] and saying “this is good enough”. E2EE as a default has been taking off for about 10 years now [2]; that Telegram is going into 2025 and still doesn’t have this basic feature tells me they’re not serious about security.

    To take it to its logical conclusion you can argue that Signal is also “unencrypted” because it needs to be eventually in order for you to read a message. Ridiculous? Absolutely, but so is the oft-made opine that Telegram is unencrypted.

    Ridiculous? Yes, because you’re missing the entire point of end-to-end encryption - and with your next point you immediately discredit any security Telegram wants to claim:

    The difference is that Telegram stores a copy of your chats that they themselves can decrypt for operational reasons.

    Telegram (and anyone who may have access to their infrastructure, via hack or purchase) has complete access to view your messages. This is exactly what E2EE prevents. With Telegram, someone could have access to all your private messages and you would never know. With E2EE, someone would need to compromise your personal device(s). One gives you zero options to protect yourself against the invasion of your privacy; the other lets you take steps to protect yourself.
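
    To make that concrete, here is a minimal E2EE sketch using PyNaCl - illustrative only, not Signal’s or Telegram’s actual protocol. The keys live on the endpoints, so a server in the middle only ever relays ciphertext it cannot read.

        from nacl.public import PrivateKey, Box

        # Each user generates a keypair on their own device;
        # the private keys never leave those devices.
        alice_key = PrivateKey.generate()
        bob_key = PrivateKey.generate()

        # Alice encrypts for Bob with her private key and Bob's public key.
        ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6pm")

        # This ciphertext is all a relay server ever stores or forwards -
        # contrast that with a server that keeps decryption keys
        # "for operational reasons".

        # Bob decrypts on his own device.
        plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
        assert plaintext == b"meet at 6pm"

    To read these messages, an attacker has to compromise Alice’s or Bob’s device rather than just the server - exactly the extra step Telegram’s cloud chats don’t require.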

    the other hand, if you fill your Telegram hosted chats with a whole load of benign crap that nobody could possibly care about and actually use the “secret chat bullshit” for your spicier chats then you have plausible deniability baked right in.

    The problem here is that you should not be mixing secure contexts with insecure ones - that’s basic OPSEC. Signal completely mitigates this by making everything private by default; the end user does not need to “switch contexts” to be secure.

    [1] Developed by Netscape, SSL was released in 1995 - https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0,_2.0,_and_3.0

    [2] WhatsApp got E2EE in 2014; Signal (then known as TextSecure) was already using it - https://www.wired.com/2014/11/whatsapp-encrypted-messaging/




  • Go fuck yourself with this pacifist attitude

    there you go again, assuming you know anything about who i am, who i support, and what actions i’ve taken/not taken.

    my critique of your attitude is that you blame others in a “matter of fact” way without knowing anything about them. childish, immature, and edgy are appropriate descriptions for your reactions. i’m no longer responding after this as clearly you are quick to anger and anger doesn’t lead to rational thought. if you’re not already seeing a therapist, i strongly suggest it. lashing out and projecting doesn’t hurt anyone but yourself.