> Not many people have Eliot down as an influential player in the history of AI — and they would be right. Her contribution has been for most part overlooked, even as folks begin to unpick the role of fiction shaping the trajectory of the AI industry.
It's because it reads mostly as a retread of the far better-known Samuel Butler pieces like _Erewhon_ from years before (https://en.wikipedia.org/wiki/Darwin_among_the_Machines was 1863), which Eliot likely read given her lifestyle and how closely she echoes him. Surprising to see no mention of Butler at all in OP, given what critical context he is - she was not writing in a vacuum.
Some earlier discussion: https://www.lesswrong.com/posts/goANJNFBZrgE9PFNk/shadows-of...
I learned about Butler and his letter from George Dyson's book, "Darwin among the Machines: The Evolution of Global Intelligence" -- it's a very entertaining account. Butler was quite a character.
(If I recall correctly I learned about Dyson's book from a postscript to Neal Stephenson's "Quicksilver.")
It does seem a bit suspicious given the context, as if the author intentionally omitted the background.
"Good artists copy, great artists steal." -- hiatus
How does this relate to the presence or absence of ulterior motives…?
With the explosion of LLMs we are close to the final thoughts in her chapter: “this planet may be filled with beings who will be blind and deaf as the inmost rock, yet will execute changes as delicate and complicated as those of human language and all the intricate web of what we call its effects, without sensitive impression, without sensitive impulse.”
The thing which has become very weird for me is watching Star Trek holodeck episodes in the age of LLMs. When I was in high school we considered this absurd writing - obviously they must be sentient...now LLMs have happened and it feels obvious that the opposite can be true.
Same story with Star Wars and droids: C3PO wasn't sentient, just a protocol model loaded into a droid frame in the wrong environment, its attempts to constantly bring the conversation back to politeness being a simplistically coded filter layer.
Threepio reads so much like an LLM that the writing turns out to have been shockingly prescient. At the time of Phantom Menace, the whole story of him being assembled by some slave kid on a backwater planet in the Outer Rim seemed preposterous - why would a slave kid into pod racing make a pompous ass of a protocol droid? Well, he just installed a standard, easily pirated LLM into the droid he made.
And the fact that these LLMs are used as 'protocol droids' to communicate with lower-level devices - 'the binary language of moisture vaporators' etc. - matches up with the pattern of LLMs as tools for generating scripts in Python.
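(That pattern is easy to make concrete. The following is a toy sketch, not anything from the films or this thread: it assumes the official openai Python client with an API key in the environment, and the sensor details are invented. The point is just that the LLM writes the low-level glue.)

    # Toy sketch of the "protocol droid" pattern: ask an LLM to write the
    # script that talks to a lower-level device. Assumes the official
    # openai Python client (pip install openai) and OPENAI_API_KEY set in
    # the environment; the moisture-sensor details below are made up.
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Write a short Python script that reads a moisture sensor over "
        "serial at /dev/ttyUSB0 (9600 baud, one integer reading 0-1023 "
        "per line) and prints a warning when the value drops below 200."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)  # the generated device script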
Terrible though the last sequel is, they even managed to include jailbreaking Threepio to get him to access part of his training data that had been censored out.
Haha, I've had the same thoughts, that of course computers/AI/droids of that conversational capacity were conscious. You'd be a brute not to think that!
And all of a sudden, LLMs absolutely have the command of natural language that once seemed such an obvious indicator of sentience, and now I find myself one of those bigots who don't believe in robot rights!
I'm being silly, but I do think there are implications here with respect to the future debate on AI sentience. I guess I once thought there would be this threshold where the reality of an AI's inner experience became blatantly obvious, but I see now that this is going to be a profoundly thorny problem.
Who knows, maybe in several decades we'll have a consciousness-o-meter that demonstrates that LLMs have had some degree of awareness all along.
The fundamental problem is that we keep setting 'reasonable' benchmarks, implicitly because we want them to seem reachable, but then when we inevitably hit them we find we [of course] haven't achieved what we really wanted in the first place.
True artificial intelligence would have the ability to meaningfully and significantly improve itself, recursively - a characteristic of even the most primitive life on Earth.
And the implications of a computing system capable of such simply can't be overstated. With access to essentially all contemporary knowledge, flawless recall, and stupidly ridiculous amounts of energy to power itself (relative to e.g. a brain), it would create an explosion of knowledge in every single domain imaginable, and many we can't even yet fathom. And it would be doing this at an accelerating pace owing to self improvement.
Basically the very nature of existence and knowledge would change. Sentience or not would be irrelevant as we simply tried (and inevitably failed, miserably) to catch up.
But this is a bit further out of reach than 'can vaguely pass for a human in a casual non-adversarial q&a style chat.'
---
This was one of the rather many areas where Star Trek failed to really consider the implications of its concepts, probably because it would simply break the world building. The Borg, for instance, would be de facto gods in no time, even without assimilation.
> This was one of the rather many areas where Star Trek failed to really consider the implications of its concepts, probably because it would simply break the world building.
This is true of pretty much all scifi! It's funny seeing super-futuristic depictions of star-fighter pilots and combatants with firearms and it's just... so crushingly evident that humans will not have supremacy in these arenas very shortly.
Frank Herbert must have anticipated this complication and side-stepped the whole issue by preemptively canonizing the Butlerian Jihad.
> meaningfully and significantly improve itself
Exactly, mechanically by tools, environmentally by science, metaphysically by philosophy. An AI can only become an I by crossing a singularity of knowledge.
What seems far more likely, the way the world is going, is that we’ll use the consciousness-o-meter to determine some groups of humans are less conscious than others.
You don't need a consciousness to be aware.
semantics shmemantics.
Another thing Trek (and other SF) didn't predict was that AI would speak with Bay Area vocal fry.
Someone should re-dub the Enterprise computer's lines with it.
I think that’s just the AI you’re using..
Philosophers have coined the term "p-zombie". LLMs can be considered p-zombies if they do not possess consciousness: functionally equivalent, but no soul. Substance dualism was used by philosophers to deny animals consciousness.
Re dualism:
To paraphrase Chomsky (link below), the legacy of Descartes' observations for "mechanical philosophy" is profound: substance dualism became untenable because no coherent idea of material can be formulated. Newton exorcised the machine (material), leaving the ghost intact to be curious about itself.
Today we study the organism called the body without need of a philosophical grounding in any reality. "Mechanical philosophy" ended with Newton's work obviating any coherent notion of an "intelligible world". Faced with irreconcilable confusion about what constitutes reality, science quietly abandoned the idea and moved ahead with "intelligible theories about the world", which Chomsky regards as one of the most significant shifts in science.
https://m.youtube.com/watch?v=bWDJ2zFe4Pc
In various speeches and lectures on the history of science and linguistics, Chomsky comments on artificial intelligence by referring to Alan Turing's seminal paper, in which Turing proposed the so-called "test." Chomsky points to the paper's first paragraph, where Turing concedes that the meaning of the word "intelligence" is so indistinct that it might as well be settled by a Gallup poll. Turing punts on the entire question of the meaning of intelligence by suggesting an "imitation game" instead.
Today's state of the art in machine intelligence has not advanced past Turing's proposal. Chomsky wonders if we may have to resign ourselves to the possibility that it never will, in which case we might more reasonably embark on further study of ourselves, rather than contrive facsimiles and endure all the attendant hazards.
>Functionally equivalent but no soul.
I think it's an incoherent concept. If the 'soul' is anything, there is some difference in the brain that would allow it. You can't have two people with identical brains and claim that one has a soul, unless you believe in magic.
It's been interesting seeing those with an entirely materialist understanding of the universe try to grapple with consciousness as LLMs have emerged.
To maintain that belief you have to either believe there is a physical source of consciousness, or that consciousness doesn't exist - neither of which most people can accept, yet which LLMs are forcing people to confront.
You think most people can't accept that consciousness is a phenomenon with basis solely in physical reality?
Also LLMs represent, for most people I suspect, something entirely unconnected with notions of consciousness.
Is there a non-magical description of a soul? I thought it was an essentially magical concept. I’m not sure that makes it incoherent.
But you can have things like property dualism, strong emergentism, cognitive closure, neutral monism and panpsychism. Naturalistic philosophers don't appeal to the soul or magic, they just say there's more to nature than the stuff described by the physical sciences. You can disagree, of course, but then you have to account for everything in nature, including consciousness, using only physical constituents. And best of luck with that. Nobody has succeeded so far.
The problem is that subjective experiences don't fit well with objective explanations. Nagel laid this out in his paper "What Is It Like to Be a Bat?", and other philosophers expanded on it - Chalmers with p-zombies, Frank Jackson with Mary the color scientist.
I think the only property humans consider special in themselves is emotion. I have the following thesis.

In evolution, exact pattern matching is computationally expensive. Just as I know my mother regardless of her clothes, hairstyle, or aging face, the match isn't exact; the brain uses heuristics to pump an 89 percent match up to a 100 percent match. This saves computational resources. These heuristics are what we call emotions. When a snake-like object appears in the visual field, the emotional heuristic pumps up a slight resemblance to generate a fear response; analytical observation would take too much time. That's why emotions appear pre-rational.

Capgras syndrome shows what happens when the emotional heuristic malfunctions: visual recognition is intact, but the heuristic that pumps up the recognition score fails, so the patient recognises the face yet feels it belongs to an impostor. These arguments suggest emotions are mere computational shortcuts.
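(To make the "pump up the match" step concrete, here is a minimal sketch of that thesis as code. Every number, name and threshold is invented for illustration; this is the shape of the claim, not a model of the brain.)

    # Toy version of the commenter's thesis: a cheap "emotional" heuristic
    # boosts an inexact pattern match toward certainty when a false
    # negative would be costly, instead of running slow exact analysis.
    def emotional_heuristic(match_score: float, threat_prior: float) -> float:
        """Boost a partial match toward 1.0 in proportion to how costly
        a miss would be (e.g. snake-shaped things in the grass)."""
        return min(match_score + (1.0 - match_score) * threat_prior, 1.0)

    # A vague, 60%-confident snake-like shape in the visual field...
    score = emotional_heuristic(match_score=0.60, threat_prior=0.9)
    if score > 0.95:  # the 0.60 raw match gets pumped up to 0.96
        print("fear response: jump first, analyze later")  # pre-rational

(In these terms, the Capgras malfunction would be the boost stuck at zero: the raw match is intact, but nothing pumps it up to felt certainty.)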
How do you explain the visual field? EM radiation isn't colored in the range visible to humans, that's just how we see it. There are no physical colored properties, rather there are wavelengths. But there are color experiences. Same goes for the other sensations. You don't need to go as far as emotions. Sensation alone is hard to explain in physical terms. Computation doesn't explain it either. Color values are just numeric properties we assign to the shades of our color experiences. But they aren't the experiences, they're just numeric values or symbols to be stored for something that can output wavelengths of light.
There is no analytical, rational reason to avoid being killed by a snake. Continuing to live has no analytical or rational purpose.
The only reason "avoid dangerous things like snakes so I continue to live" is a goal is emotions.
Think about it this way. We have System 1 and System 2. Emotions reside in System 1. Now imagine we have built AI systems so fast they can compute the complex heuristics of System 2 at the speed of System 1. Then the AI's emotions will be more rational.
System 1 and System 2 are metaphors, not actual systems.
Emotions do not only exist in the (not actually a real thing) System 1. People who take damage to the emotional centers of their brains become passive and apathetic, not hyper-intelligent System 2 Vulcans.
>> Functionally equivalent but no soul.
Something that predicts no observable differences is unfalsifiable aka bullshit.
So you're saying consciousness is bullshit? Because the point is that we don't know how to observe the difference between something that is conscious and something that isn't.
Not GP, but I'd argue that whatever we mean by consciousness must at least have physical impact. Something without physical impact couldn't drive our actions in the physical world, like laughing at a joke or painting a picture, and critically couldn't even be what we're referring to when we talk about consciousness, as we are now.
From there I think "we don't know how" is partly down to needing advances in neuroscience, and partly down to "consciousness" being a hazy term with disagreement over its definition. But for frameworks that accept the above (physicalism, interactionist dualism, ...) I do think it's in theory testable whether something is conscious for any given pinned-down definition (e.g., "are the neurons constituting an internal train of thought firing"), and I would be inclined to say the frameworks that don't accept the above are indeed the more "bullshit" ones.
Science is limited by what we can falsify, which follows from what we can observe. We should attribute no normative weight to whether something should or shouldn't be scientifically analyzable. So we should always improve our means of observation, as well as our culture towards the things that are difficult or downright impossible to scientifically analyze. Science is a part of philosophy, or a derivation of philosophy with feedback. We should develop all open avenues of philosophy in the course of living.
Non-materialist bullshit.
Basically the plot of Peter Watts' "Blindsight" (post-2000) scifi book series.
Fascinating read!
> Theophrastus closes by asking whether human consciousness is a ‘stumbling’ on the way to ‘unconscious perfection’.
This reminds me of a line from the scifi novel Echopraxia where one of the characters explains that as AIs become more intelligent, they 'wake up' and become conscious like humans. But then as they continue to grow, they eventually go back to sleep - and those are the machines to be afraid of.
In the novel 3001, the last of Clarke's 2001 series, the Dave simulation explains that while the monolith can simulate conscious beings, it is itself not conscious. This is similar to The Expanse novels, where it's explained that the protomolecule technology was deliberately limited by its creators to not being conscious, and yet could still somehow simulate conscious beings it wanted to use as tools.
I'm not well-versed enough on sci-fi to be able to connect more dots than this, but I am assuming these are common tropes.
The Mass Effect series describes the Reapers as (copied from masseffect.fandom.com - I <3 this game's lore):
"The Reapers are a highly-advanced machine race of synthetic-organic starships. The Reapers reside in dark space: the vast, mostly starless space between galaxies. They hibernate there, dormant for fifty thousand years at a time, before returning to the galaxy...the Reapers spare little concern for whatever labels other races choose to call them, and merely claim that they have neither beginning nor end."
The other pop-sci-fi analogue I can think of is The Borg.
Fred Saberhagen's Berserkers are the first instance of this concept I'm familiar with (1963).
https://en.m.wikipedia.org/wiki/Berserker_(novel_series)
https://www.gutenberg.org/cache/epub/10762/pg10762-images.ht...
It comes across as satire to me, which I did not expect after reading TFA.
From the classical references littered throughout the portions I've skimmed, it seems like a learned person is joking about contemporary men that aren't as well educated and unaware of it.
Quite reminiscent of some of the stuff Kierkegaard wrote.
I recently reread Middlemarch, and this time I had finally read most of the literature it references and had Wikipedia on hand to learn about the historical events. The breadth and depth of her erudition is impressive, as is her portrayal of human nature (complicated and believable, communicated across the divide of culture and time), which is the main thing I took from my first reading.
Interesting - I wonder if the author was gesturing at Erewhon, which was published a few years earlier. https://en.wikipedia.org/wiki/Erewhon
For those considering them, Erewhon and Erewhon Revisited (wherein we discover just how much in the way of, ahem, relations a book of the period can pack "between the lines") are available in a single widely available and cheap (used) little Modern Library hardback, and are a pretty fun read for the genre (nothing like Utopia—I read a lot of things most folks would consider boring, and that one taxed even my patience).
I wonder if the title "The Shadows of the Coming Race" alludes to "Vril: The Power of the Coming Race", published eight years before, or if that phrase was already in circulation. Google Ngrams first sees it in 1868.
Sure seems likely! Here's an excerpt from the Wikipedia article for Vril:
>When H. G. Wells' novella The Time Machine was published in 1895, The Guardian wrote in its review: "The influence of the author of The Coming Race is still powerful, and no year passes without the appearance of stories which describe the manners and customs of peoples in imaginary worlds, sometimes in the stars above, sometimes in the heart of unknown continents in Australia or at the Pole, and sometimes below the waters under the earth. The latest effort in this class of fiction is The Time Machine, by H. G. Wells."
So the phrase "the Coming Race" must have been well established in literary circles, and probably used as a shorthand for what today we might call non-humans or post-humans.
If you want to know what life is like when society is run by beings smarter than you, ask someone who flunked out of high school.
If you want to know what life is like when society is run by alien intelligences, ask someone who lives under an economy dominated by megacorps.
Damn, it's pretty early while I'm reading this and in my tired state, not noticing the year in the title, I somehow expected an article about Sierra adventure game internals... A bit disappointed but still a good read on a completely different topic :)
This was a joy to read.
Another early book on the implications of AI is Samuel Butler's "Erewhon" (1872), a satire describing a society in which a complex machine uprising occurred and all complex mechanical/computational devices were thereafter banned. This idea is later referenced in the well-known 20th-century "Dune" books as "The Butlerian Jihad".
Before creating “eGod”, it’s safer to first create “eHeaven”. I propose building direct-democratic multiversal static place intelligence (you can google it): the static place AI, the place of eventual all-knowing, where we are the only agents and will eventually become all-powerful. We start by making a digital copy of Earth and some (ideally wireless) brain-computer interfaces; everyone who wants to can buy a special armchair for their living room and go to the simulated planet, with the whole experience the same as on our planet - except that when a bus hits you there, you just open your eyes. So it gives you immortality from injuries. And of course there will be many other worlds with different freedoms, like flying, doing magic, or instant teleportation in public places…
Sort of related, but of the two or three times I've lucid dreamt, one involved flying with relatively functional control of my direction. Not sure how or why my brain rendered this sensation, having never experienced it in real life, but it felt strange and novel and not easy to direct.
It was akin to a sensation of VR games where my mind seems to interpolate real-life expectations with visual input, but not quite. Not quite "brain-computer-interfaces", but perhaps a glimpse with current tech.
Yes, that's great. I've jumped over buildings in my dreams a few times, and it was me controlling it; I think it'll resemble some future "multiversal UI" a bit. Did it look anything like long-exposure photos? https://wesely.org/2019/flughafen-tempelhof-berlin-1-7-2008-... There are more good photos like this at the end of this article: https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-u...
> any physical or cognitive task there aren’t exactly many roles that a human being can play that are additive to the economy.
What a terribly small imagination this presents. There is life outside of your cubicle. There always was.
> Each time a person engages in work that could be done more efficiently by a machine, the employing party—be that company, government or charity—will have to bear the opportunity cost of using AI.
Your LLMs run on GPUs; they are hardly free of physical components. Those physical components aren't free, require maintenance, and break at inconvenient times. This is not a zero-sum future, nor one that doesn't require intelligent economic planning.
> It’s like when a toddler ‘helps’ you wash the dishes.
It's cute that you presume your "AGI" is going to /want/ to do anything for you - in particular, mundane chores like washing your dishes - or that you as an individual would be able to afford it.
Anyway, it's an interesting pastime around here: rather than admit the gulf between the current technologies and what AGI is imagined to be, we just trawl through the past looking for anyone who even remotely made a "thinking machine" part of a fictional story.
Well, one part she got right, the machine presents itself as something of a useless chat bot.