

Hey I just wanted to say thank you for this, I understand how generally anti-defederation you are based on past posts and I really appreciate someone who can re-evaluate their stance on a case by case basis like this as new information comes up.
Never really got into TikTok myself, always just seemed like a worse version of Vine. But I’m also not the target demographic so there’s that.
Hah, fair enough. I was using that more as the generic “go do something that’s not just reading Reddit posts on a computer all day”. But I have also been hiking!
This appears to be the correct link: https://getaether.net/
Seems more similar to Discord imo.
Ah, that’s what I get for skimming!
Politics aside, why debate Joe Rogan about vaccines? He’s not a scientist, he’s an entertainer. I don’t get my medical advice from a person without trained medical knowledge. I wouldn’t take mechanical advice from someone who isn’t a mechanic or doesn’t work heavily with cars in their free time; that’s how you mess up your car. Why would this be any different? People need to stop giving his opinions so much weight when he has zero related education or experience to back them up…
Usually neglect, and then when I notice it (wilting or things like that), an overcompensation of watering. Which is why I think succulents may be the better call, since apparently they generally require less frequent watering.
Yeah, I’ve noticed that one growing a little taller lately; I’ve been putting it in direct sun, so hopefully that helps it out! But yeah, my current goal is to keep them alive first, so if the plant gets unnaturally tall but is otherwise still happy I’ll take that as a win.
This is exactly the sort of thing I’m worried about with AI.
Let’s take a quick step back. AI/machine learning is a program set up to learn how to accomplish one specific job, and to do that job very well. For this example, let’s say the AI needs to be able to identify any picture with a cat in it. Programmers develop the framework for this code, and then feed the AI test cases meant to “teach” it how to do the job with minimal errors. It will be fed correct pictures as well as incorrect ones (some with other animals, or paintings rather than photos). With enough test cases and human confirmation of which results were correct or incorrect, the AI can successfully identify pictures of cats with little to no error.
But the thing is, and this is important, the developers of AI generally don’t know exactly how the program makes these determinations. They just feed it test cases and confirm when the bot is right. AIs obviously don’t have human brains or think the way we do, so the connections they make come from patterns that people may not be able to identify. That’s fine for spotting cat photos, but let’s apply it back to the Uber and DoorDash payment methods. It means these companies are not paying their workers based on human standards and expectations of a job well done, but based on pattern recognition from an AI that may lower or raise pay according to factors completely unknown to the worker and the company, and that may not even be things the company wants to encourage (they just don’t know what the AI is rewarding).
I have no concern about the unrealistic “robots cause the apocalypse” nonsense that Hollywood loves; my concern is people assigning AI jobs that AI shouldn’t do and assuming AI is some master superintellect instead of the trained program it is.