I think there are many healthy reactions to the situation around AI, but I'm slightly concerned by how gamed AI contrarianism can be at times.
A low-hanging example is social media engagement-farming accounts like "pictures AI could never create," when in reality it's just stolen slop content posted for the sake of getting a paycheck out of said engagement.
Social media nonsense is one thing, but I feel we're going to increasingly see people's frustrations redirected and weaponized in more harmful ways. It's an easy hair trigger toward brigading.
I read a post in the last election cycle where somebody was horrified by the polarization of modern politics, but had a solution: Explain to everyone about the evils of high fructose corn syrup, and the people would join together and rise up to demand more comprehensive regulation, forming a nucleus of harmony that would cross issues and save the country.
There seems to be a similar narrative around AI, that the sheep will look around and realize how much it is lying to them, and combine to throw off the oppressor. I kind of wish I could recapture that kind of optimism.
The Ayn Rand quote ("Man cannot survive except through his mind. He comes on earth unarmed. His brain is his only weapon") neatly distills precisely what worries me the most about an AI-dominated future: that those in control of our destiny seem to have swallowed her misanthropic philosophy that (to paraphrase Rand again) "he is not a social animal".
Man, in fact, cannot survive without society. You don't have to be a communist to realise this. Until now the stratification of society has had certain unavoidable limits - everyone has a finite lifespan, everyone has an upper bound of intelligence and physical ability - as well as self-imposed limits of regulation through states or unions. When kings and empires have come to dominate, revolutions have at least attempted to reform the social order, if not reset it. I fear that with AGI in the hands of the likes of Musk and Thiel we may soon be entering an age when men with Rand's worldview have the kind of power that makes them utterly untouchable, and any chance of building a just and democratic future becomes impossible.
Having lived through the Dot Com Bubble/Bomb, the AI situation feels eerily similar.
The hype and overpromotion of AI, as well as the pollution of the commons with slop, are "unfortunate"; but the power of what it can do, and how it can transform how we live and work, is also undeniable.
> what it can do and how it can transform how we live and work is also undeniable.
I’m still not so sure on that part. Maybe, eventually? But it feels like we are still trying to find a problem for it to solve.
Have there been any actual, life-transformative use cases for an LLM outside of code generation? I can certainly sit here and say how impactful Claude Code has been for me, but I honestly can’t say the same for the other users where I work. In fact, the quality of emails has gone down since we unleashed Copilot on the world, and so far no one has reported any real productivity gains.
When AI first passed the original Turing Test in spirit - producing text indistinguishable from a human - we didn’t declare machines intelligent. Instead, we raised the bar: now we ask if they can create music, art, or literature that feels human.
But if every time AI meets the challenge, we redefine the challenge, are we really measuring intelligence - or just defending human exceptionalism? At what point do we admit that creativity isn’t a mystical trait, but a process that can emerge from algorithms as well as neurons?
Here’s the real question: should we measure AI against the best humans can do - Einstein, Picasso, and Coltrane - standards most humans themselves can’t reach? Or should we measure success by how well AI enables the next Einstein, Picasso, and Coltrane?
I think we need to move to the era of Assisted Intelligence, a symbiotic relationship between AI and human intelligence.
> At what point do we admit that creativity isn’t a mystical trait, but a process that can emerge from algorithms as well as neurons?
I think anyone who already works in a creative field does acknowledge this. Creativity is, in fact, a process, and a skill that can be broken down into steps, taught to others, and practiced. Graham Wallas broke down the creative process all the way back in the 1920s and it boils down to making novel and valuable connections between existing ideas. What does an LLM do other than that exact process?
> Computer scientist Louis Rosenberg
they have conveniently omitted he's also CEO of "UNANIMOUS AI"
This reminds me of Amara's law: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.
So raising concerns because AI slop can literally destroy your brand identity or reputation is called grieving now. Thanks.
You can't ignore the problems of current AI agents by "rising above them". The people who question it aren't in denial, you guys are.
Dupe from yesterday: https://news.ycombinator.com/item?id=46120830
Just because it's slop doesn't mean it can't profoundly reshape society
[Cope Intensifies]