j_bum 17 minutes ago [-]
> Here's a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?
I do not buy this argument. Of course, most of the content on these platforms is innocuous, and may as well be paint drying.
What's harmful are the harnesses that these companies have built to exploit the content.
> Of course not. Because infinite scroll is not inherently harmful.
Yes it is [0].
> Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful.
Yes, they can be [1] [2].
> These features only matter because of the content they deliver. The "addictive design" does nothing without the underlying user-generated content that makes people want to keep scrolling.
These harnesses only work because people feed the machine. The harnesses are still harmful.
This whole argument is predicated on a strawman that makes no sense.
A gun doesn't work without bullets. But if a company designs and hands out the gun to the world, they should be liable for the consequences, even if they rely on users for the ammunition.
[0] https://doi.org/10.1145/3544548.3580729
[1] https://doi.org/10.1145/3491101.3519829
[2] https://counterhate.com/research/deadly-by-design/
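As a concrete illustration of what "harness" means here, a minimal, purely hypothetical sketch (all names and the predicted_watch_time model are invented for this comment; this is not any platform's actual code) of an engagement-ranked infinite feed might look like this:

    import random

    def predicted_watch_time(user, item):
        # Stand-in for a hypothetical engagement model; a real one would be
        # trained on logged interactions. Here it is just noise.
        return random.random()

    def next_item(user, candidates):
        # "Algorithmic recommendation": rank purely by predicted engagement.
        return max(candidates, key=lambda item: predicted_watch_time(user, item))

    def endless_feed(user, catalog):
        # "Infinite scroll" + "autoplay": there is no terminal state, and no
        # action is required from the user to receive the next item.
        while True:
            yield next_item(user, catalog)

Nothing in that loop knows or cares whether the catalog holds paint-drying videos or anything else; the loop itself is the part being called harmful.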
Even beyond the dangerous legal precedent it sets, we're all cheering for a precedent that human persons don't have volition or free will, and that multimedia can somehow bypass normal sensory pathways and act directly on desire the way drugs do. That's simply not true. Believing that and setting it up as legal precedent means the government can now use violent force to regulate anything shown on a screen. This is going to cause incredible damage to our society as a whole and to individual people's lives. Government use of force is far more dangerous than unsupported memes and old wives' tales from the 1970s.
voidmain 52 minutes ago [-]
I too fear what governments will actually do in this area. But I think you may be underestimating the threat to personal agency.
Imagine you are trapped in a Groundhog Day-style time loop - but you are not the person who remembers previous loops. "Z" is. He tries to convince you to do something, over and over and over, thousands or millions of times, refining his approach based on your reactions while you remember nothing. Are you really confident that your free will protects you from being taken advantage of in this situation?
Now imagine that instead of a time loop, Z has a million clones of you. He tries his persuasion on one of them at a time, refining it until it works reliably before using it on you. You are just as vulnerable.
Now suppose he has a billion people, not identical to you but drawn from the same distribution. He has a harder computational problem, mapping the high dimensional manifold of their responses to create a model of you sufficiently accurate to manipulate you. But with enough data he can approximate the results of the previous case without more than a tiny fraction of his experimentation being visible to you.
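A rough, self-contained sketch of that scaling argument (the fifty message variants, their success rates, and the naive uniform search are all assumptions made up for illustration, not a claim about any real system): the experimenter pays the cost of learning across a large population, and the final target only ever sees the single refined attempt.

    import random

    # Fifty candidate persuasion attempts with unknown (made-up) success rates.
    VARIANTS = ["message_%d" % i for i in range(50)]
    TRUE_RATE = {v: random.uniform(0.01, 0.30) for v in VARIANTS}

    def try_on_someone(variant):
        # One attempt on one person drawn from the population; that person
        # never sees the other hundred thousand trials.
        return random.random() < TRUE_RATE[variant]

    trials = {v: 0 for v in VARIANTS}
    wins = {v: 0 for v in VARIANTS}

    for _ in range(100_000):            # cheap at platform scale
        v = random.choice(VARIANTS)     # naive uniform exploration
        trials[v] += 1
        wins[v] += try_on_someone(v)

    # "Z" now deploys the best-performing variant on the one person he cares
    # about, having paid the cost of learning it elsewhere.
    best = max(VARIANTS, key=lambda v: wins[v] / max(trials[v], 1))
    print(best, wins[best] / max(trials[best], 1))

With enough trials the estimate converges on the most effective variant, which is the sense in which the population stands in for the time loop.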
Any relationship where one party gets to surveil and monitor not only the other party, but millions or billions of like parties, has the potential to be a deeply abusive one. We should not tolerate such situations whether the surveilling party is a government or not.