#1
Old 10-12-2009, 01:35 AM
Member
Join Date: Mar 2001
Location: The Mitt
Posts: 14,050
Isaac Asimov & Positronic Brains

I've only read a handful of Asimov's fiction, so I can't really remember if his idea of a "Positronic Brain" was just something that sounded cool, or if he had some sort of deeper thought about it. What was it about anti-electrons that would make AI feasible to him?
#2
Old 10-12-2009, 01:41 AM
Guest
Join Date: Mar 2007
Location: Miskatonic University
Posts: 10,260
Positrons were just discovered, and as I understand it positronic was just kind of a buzz word he used similar to the word "radioactive" in the '50s.
#3
Old 10-12-2009, 01:49 AM
Guest
Join Date: Jul 2001
Location: Arizona
Posts: 26,901
Yep, it was Asimov going for the rule of cool.

http://nasa.gov/topics/technolog...star_trek.html
Quote:
By the way, Mr. Data's "positronic" brain circuits are named for the circuits that Dr. Isaac Asimov imagined for his fictional robots. Our doctors can use positrons to make images of our brains or other organs, but there's no reason to expect that positrons could make especially good artificial brains. Positrons are antimatter! Dr. Asimov just made up a sophisticated-sounding prop, which he never expected people to take literally.
#4
Old 10-12-2009, 01:52 AM
Member
Join Date: Mar 2001
Location: The Mitt
Posts: 14,050
Ok, that's what I kind of figured. But he was no slouch when it came to science, and seeing that he made such a big deal about Positronic Brains being the key to AI, I was hoping there was more thought behind it than just cherry-picking the flavor of the year in physics terms.

Disappointed.
#5
Old 10-12-2009, 08:41 AM
Charter Member
Join Date: Apr 1999
Location: Schenectady, NY, USA
Posts: 40,648
Asimov later cheerfully admitted that he chose the term because it was a new scientific term and that it meant nothing (the biggest problem with a positronic brain of any size would be to keep it from blowing up and destroying half the planet).

Asimov knew his science, but he also knew his storytelling. He wrote an entire novel, The Gods Themselves, based upon a scientific error he once made.
__________________
"East is East and West is West and if you take cranberries and stew them like applesauce they taste much more like prunes than rhubarb does."
Purveyor of fine science fiction since 1982.
#6
Old 10-12-2009, 10:03 AM
Guest
Join Date: Jun 2006
Posts: 16,578
Quote:
Originally Posted by RealityChuck View Post

Asimov knew his science, but he also knew his storytelling. He wrote an entire novel, The Gods Themselves, based upon a scientific error he once made.
I'd like to hear about that, given I'll probably never get around to reading it. You might want to spoiler it if you are so kind as to respond.

thanks
#7
Old 10-12-2009, 10:08 AM
Charter Member
Join Date: Mar 1999
Location: Miskatonic University
Posts: 11,938
According to Asimov himself --
SPOILER:
Asimov accosts Robert Silverberg at a sci-fi convention for referring to a radioactive isotope (plutonium-186) that could not exist. He then tells Silverberg that, to show him real ingenuity, he would write a story about it (leave it to a biochemist!). This story ended up as The Gods Themselves.
#8
Old 10-12-2009, 10:32 AM
Guest
Join Date: Jun 2006
Posts: 16,578
Quote:
Originally Posted by DrFidelius View Post
According to Asimov himself --
SPOILER:
Asimov accosts Robert Silverberg at a sci-fi convention for referring to a radioactive isotope (plutonium-186) that could not exist. He then tells Silverberg that to show him real ingenuity, he would write a story about it (leave it to a biochemist!). This story ended up as The Gods Themselves.

Interesting. Thanks.

An aside, the other day I was remembering that horrible movie Food of the Gods
#9
Old 10-12-2009, 10:39 AM
Charter Member
Join Date: Jun 1999
Location: Je suis Ikea.
Posts: 25,222
but the way you tell it, it was Silverberg who made the mistake?
#10
Old 10-12-2009, 12:10 PM
Charter Member
Join Date: Mar 1999
Location: Miskatonic University
Posts: 11,938
Quote:
Originally Posted by Northern Piper View Post
but the way you tell it, it was Silverberg who made the mistake?
Yes, as I recall it. The Good Doctor's ego would never have allowed him to admit making a mistake about science.
#11
Old 10-12-2009, 12:33 PM
Member
Join Date: Aug 2002
Location: Deep Space
Posts: 41,487
Quote:
Originally Posted by Jragon View Post
Positrons were just discovered, and as I understand it positronic was just kind of a buzz word he used similar to the word "radioactive" in the '50s.
Rays were cool even before then. The first Robot stories were written in the early 1940s, and Asimov wasn't a nuclear physicist in any case. I don't think chemists have much reason to muck with subatomic particles.
#12
Old 10-12-2009, 01:02 PM
Guest
Join Date: Jun 2003
Location: Sophomore at VTech
Posts: 6,039
Quote:
Originally Posted by Voyager View Post
Rays were cool even before then. The first Robot stories were written in the early 1940s, and Asimov wasn't a nuclear physicist in any case. I don't think chemists have much reason to muck with subatomic particles.
I remember an intro piece in one of Asimov's books about a guy who was fairly big in sci fi in the early forties writing a story about a nuclear weapon, and getting a visit from the CIA because he got so much stuff right. (Turned out he really just did know that much about nuclear science, plus a sizeable dose of luck, no espionage involved.)
#13
Old 10-12-2009, 01:19 PM
Charter Member
Join Date: Nov 1999
Location: Seattle, WA, USA
Posts: 3,703
He not only wanted the term to sound cool by basing it on a new scientific term, but he needed it to be clear that these devices were fundamentally different from electronic computers. If they were like current devices, they could have been easily built without the Three Laws. Many of the stories hinged on the fact that the Three Laws are an unalterable feature of positronic brains. (Though he did write some with edited laws, with mostly unfortunate results for the characters.)

Asimov is credited by the OED with bringing us the adjective 'positronic.'
#14
Old 10-12-2009, 01:27 PM
Guest
Join Date: Jun 2003
Location: Sophomore at VTech
Posts: 6,039
Also the nouns robotics (which he thought was a real word at the time) and psychohistory, though obviously the latter is a lot less common.

Interestingly enough, the Three Laws are, to my knowledge, used in robotics quite frequently.
#15
Old 10-12-2009, 01:29 PM
Guest
Join Date: Feb 2000
Location: Northern CA
Posts: 9,154
Quote:
Originally Posted by Saltire View Post
Many of the stories hinged on the fact that the Three Laws are an unalterable feature of positronic brains.
I do not see how that can be possible. Or is that just another MacGuffiny thing?
#16
Old 10-12-2009, 01:35 PM
Charter Member
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 31,874
Quote:
Originally Posted by dangermom View Post
I do not see how that can be possible. Or is that just another MacGuffiny thing?
As I remember the Three Laws are "hard-wired" into the brain at manufacture.
#17
Old 10-12-2009, 01:51 PM
Member
Join Date: Aug 2002
Location: Deep Space
Posts: 41,487
Quote:
Originally Posted by Captain Carrot View Post
I remember an intro piece in one of Asimov's books about a guy who was fairly big in sci fi in the early forties writing a story about a nuclear weapon, and getting a visit from the CIA because he got so much stuff right. (Turned out he really just did know that much about nuclear science, plus a sizeable dose of luck, no espionage involved.)
Cleve Cartmill, and the story was Deadline. Not a big name at all. The CIA didn't exist then, of course, and I think it was the FBI. Campbell convinced them that this kind of thing had been in sf for years, and calling attention to it by yanking the story would be worse than letting it run.

Far more interesting is Heinlein, who in "Solution Unsatisfactory" predicted the nuclear standoff. The weapon was radioactive dust, not the bomb, but this shows once more his talent for seeing the sociological and political consequences of technology.
#18
Old 10-12-2009, 01:54 PM
Charter Member
Join Date: Nov 1999
Location: Seattle, WA, USA
Posts: 3,703
Yes, the idea was that the positronic brain was invented with the Three Laws hardwired in, and making any change to them was just as difficult as inventing the brain from scratch.

In the Caliban books, written in Asimov's galaxy by Roger MacBride Allen, new gravitonic brains were invented. They had the ability to edit or even delete laws. I'm certain Asimov insisted that Allen not be allowed to invent that feature for a positronic brain.
#19
Old 10-12-2009, 02:10 PM
Charter Member
Join Date: Apr 1999
Location: Schenectady, NY, USA
Posts: 40,648
Yes. Removing the three laws from the positronic brains would make them inoperable; they were a part of everything they did.
__________________
"East is East and West is West and if you take cranberries and stew them like applesauce they taste much more like prunes than rhubarb does."
Purveyor of fine science fiction since 1982.
#20
Old 10-12-2009, 03:04 PM
Charter Member
Join Date: Apr 2003
Location: ___\o/___(\___
Posts: 11,544
The three laws were a safety feature deliberately created, not a natural consequence of positronics. It would have been perfectly possible to create a positronic brain without them, but they were afraid of the robots turning on their masters.
#21
Old 10-12-2009, 03:09 PM
Member
Join Date: Aug 1999
Location: A better place to be
Posts: 26,718
Quote:
Originally Posted by Captain Carrot View Post
I remember an intro piece in one of Asimov's books about a guy who was fairly big in sci fi in the early forties writing a story about a nuclear weapon, and getting a visit from the CIA because he got so much stuff right. (Turned out he really just did know that much about nuclear science, plus a sizeable dose of luck, no espionage involved.)
Cleve Cartmill is the guy in question. (It was the FBI, not the CIA, which had not yet been created.)

Note: I missed Voyager's post, but will leave this for the link.

Last edited by Polycarp; 10-12-2009 at 03:10 PM.
#22
Old 10-12-2009, 05:39 PM
Member
Join Date: Mar 2001
Location: The Mitt
Posts: 14,050
Right, thanks for the responses, all!

Did Asimov ever address the huge safety issue of building an artificial brain out of antimatter? Unless most of the brain were regular matter, and only a tiny, tiny amount of positrons were needed (for whatever reason) to allow it to function. Obviously, the best way to contain the antimatter would be through electromagnetism, but if there were some sort of technical or power malfunction... BOOM!
#23
Old 10-12-2009, 05:54 PM
Member
Join Date: Mar 2001
Location: The Mitt
Posts: 14,050
{double post}

Last edited by cmyk; 10-12-2009 at 05:55 PM.
#24
Old 10-12-2009, 05:55 PM
Charter Member
Moderator
Join Date: Jan 2000
Location: The Land of Cleves
Posts: 72,680
My fanwank has always been that a positronic brain uses both electrons and positrons, and that they're therefore distinguished from normal computers by the fact that they need some positrons, rather than none at all for a normal computer. You could, of course, call it an "electronic-positronic brain", but that's wordy, and the "electronic" part is just taken for granted.
__________________
Time travels in divers paces with divers persons.
--As You Like It, III:ii:328
Check out my dice in the Marketplace
#25
Old 10-12-2009, 06:58 PM
Member
Join Date: Jul 2003
Location: North of Boston
Posts: 9,815
Quote:
Originally Posted by cmyk View Post
Right, thanks for the responses, all!

Did Asimov ever address the huge safety issue of building an artificial brain out of antimatter? Unless most of the brain were regular matter, and only a tiny, tiny amount of positrons were needed (for whatever reason) to allow it to function. Obviously, the best way to contain the antimatter would be through electromagnetism, but if there were some sort of technical or power malfunction... BOOM!
No. IIRC, the brain was described as a "platinum-iridium sponge" in some of the stories. No mention of antimatter.

And there was one Susan Calvin story, "Little Lost Robot", where they did create some robots with a weakened/modified First Law, with wacky and hilarious results. So it wasn't impossible.
#26
Old 10-12-2009, 08:50 PM
Charter Member
Join Date: Aug 1999
Location: Minneapolis, Minnesota US
Posts: 15,580
I don't suppose that "positronic" could be fanwanked/retconned to mean what we now call "holes" in semiconductor materials- "virtual" positrons, in other words?
#27
Old 10-12-2009, 09:33 PM
BANNED
Join Date: Mar 2003
Location: Tampa, Florida
Posts: 78,508
Quote:
Originally Posted by Captain Carrot View Post
Interestingly enough, the Three Laws are, to my knowledge, used in robotics quite frequently.
Why? First, artificial intelligence is nowhere near to the point where they would have any relevance. Second, why would a corporation or government agency put money into making a robot that will not kill humans if ordered to, and can be stolen by anyone who says, "Come with me!"?

In real life there will only ever be one Law of Robotics: "A robot must obey its master."
#28
Old 10-12-2009, 09:38 PM
BANNED
Join Date: Mar 2003
Location: Tampa, Florida
Posts: 78,508
BTW, according to this, the Three Laws of Robotics were the fruit of a brainstorming session between Isaac Asimov and John W. Campbell.
#29
Old 10-12-2009, 09:43 PM
BANNED
Join Date: Mar 2003
Location: Tampa, Florida
Posts: 78,508
See also here:

Quote:
Asimov claimed that he and Campbell came up with the three laws because robots are machines and man would naturally design them with safeguards. And yet, Asimov had great difficulty thinking of robots as machines. From the get go, he attributed emotions, motives, and consciousness to his creations. Sentience is not presented as something that a robot might one day attain, but as an inherent property. It's plain that Asimov had very little understanding of how computers actually work, even in terms of the primitive machines of '40s and '50s. He tended to explain his robots in terms of analogue machines with "behaviour" resulting from various "potentials" in the robot's circuitry, or by drawing analogies with human psychology. If we were talking motor cars this would be good old fashioned anthropomorphism.

If you think about it, the three laws are as big a heap of budgie doo doo as you can find. The idea behind the laws is interesting, but in terms of real machines they make no sense. Take the first law, for example. It says, "A robot may not injure a human being, or, through inaction, allow a human being to come to harm." Feed that into a real robot and it would be like trying to run a car on water. This law, like the others, is a collection of abstract concepts and moral imperatives that are meaningless from an engineer's point of view. No robot could ever understand the concept of "human being" or "harm" or "action" or "inaction." Heck, it couldn't understand "may." Instead, a real robot would have to be told that this set of inputs in these circumstances correspond to the definition of human in this situation. If such a set of conditions as indicate a human at location X are true, then this series of actions Y may not intersect at location X and this series of actions Z must be performed in relation to location X to achieve condition A. Or something like that. In other words, its good old fashioned programming with all the outcomes predicted and their eventualities accounted for.

As for the second law: "Robots must obey?" Machines obey anything you tell them to do. That's the nature of machines and often what makes them so dangerous. No "law" is required to enforce this. And as for preventing this obedience from causing harm, well, the first law is really nothing more than a subroutine "If this, then stop," or whatever. Hardly the Golden Rule.
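The quoted article's point - that a "law" in a real machine reduces to explicit condition-action programming - can be sketched as a toy program. Everything below is hypothetical illustration invented for this post; no actual robot or robotics framework works this way, and all the names are made up:

```python
# Toy illustration of the argument above: the "First Law" in a real
# machine is just a programmer-defined predicate filtering planned
# actions, and "obedience" is just executing what remains.
# All names and rules here are hypothetical.

def is_human_at(location, sensor_readings):
    """A 'human' is whatever the programmer defined it to be:
    here, any sensor reading tagged 'human' at the given location."""
    return any(r == ("human", location) for r in sensor_readings)

def plan_action(candidate_moves, sensor_readings):
    """Filter out moves that intersect a location flagged as human.
    This is the whole 'First Law': a predicate over predicted states."""
    safe = [m for m in candidate_moves
            if not is_human_at(m["target"], sensor_readings)]
    # The 'Second Law' is just executing the first remaining
    # instruction -- machines do what they are told.
    return safe[0] if safe else None

readings = [("human", "kitchen")]
moves = [{"target": "kitchen"}, {"target": "garage"}]
print(plan_action(moves, readings))  # the kitchen move is filtered out
```

No abstract grasp of "human" or "harm" anywhere - just lookups and filters, with every eventuality spelled out by the programmer in advance, which is exactly the article's complaint.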
#30
Old 10-12-2009, 10:26 PM
Guest
Join Date: Mar 1999
Location: ATL
Posts: 3,136
I always viewed Asimov's robot stories as logic puzzles. Basically they're all "Here are the Three Laws. Here is a robot which seems to have broken one or more of the laws. Figure out the set of circumstances under which this could have happened."

That's basically all the majority of the robot stories were. (I'm talking about the short stories here; the novels were a bit more complex of course.)
#31
Old 10-12-2009, 10:31 PM
Guest
Join Date: Nov 2000
Location: Montreal, QC
Posts: 55,922
Quote:
Originally Posted by cmyk View Post
Ok, that's what I kind of figured. But he was no slouch when it came to science
Indeed not, but his Ph.D. was in chemistry. He never showed any particular interest or aptitude in the technical aspects of electronics or computers.
#32
Old 10-12-2009, 10:44 PM
Guest
Join Date: Mar 2004
Location: Japan
Posts: 2,809
A large part of why Asimov talked about potentialities was that analog computers were still in wide use and digital was pretty new. It was far from clear at the time that digital had any inherent superiority over analog. In fact, digital computers are more complicated for many applications than analog ones, which is why we still use analog computers for some purposes.

He used the fact that inevitable slight variations in the manufacture of any analog device would create differences in output to explain quirks that could be interpreted as personalities. Sure, he was using it as an excuse to anthropomorphize his robots, but looking at human behavior contrasting with a pseudo-human point of view is obviously one of the reasons he wrote those stories. He probably didn't do these things out of ignorance, but deliberately, in the knowledge that what he described was merely plausible technobabble.

I'm not saying he never made mistakes. I'm not even saying that the positronics he postulated for robot brains are anything more than a pop-science buzzword wrapped around a shell of real technology. I am saying that the mechanism he initially proposed was based pretty solidly in the science of his time, and it's only in looking back at those stories from a vantage 70 years* from his first use of the conceit that you can criticize his science.

Nobody seriously expects SF writers to be prescient, or even halfway right most of the time. The most important requirement of the job is being entertaining, and only secondarily basing their "what if" questions on real science.

*The first robot story was written in 1939.
#33
Old 10-13-2009, 10:32 AM
Member
Join Date: Jul 2003
Location: North of Boston
Posts: 9,815
Quote:
Originally Posted by tanstaafl View Post
I always viewed Asimov's robot stories as logic puzzles. Basically they're all "Here are the Three Laws. Here is a robot which seems to have broken one or more of the laws. Figure out the set of circumstances under which this could have happened."

That's basically all the majority of the robot stories were. (I'm talking about the short stories here; the novels were a bit more complex of course.)
Actually, the second robot novel, "The Naked Sun", was exactly that - a case where robots had to be involved in a murder and Lije Baley had to figure out why & how.

After that I think he went a little nuts (though he was revisiting 30 year old plots) when he decided to combine the robot & foundation universes.
#34
Old 10-13-2009, 12:08 PM
Charter Member
Join Date: Mar 2002
Location: NY but not NYC
Posts: 29,310
Quote:
Originally Posted by muldoonthief View Post
After that I think he went a little nuts (though he was revisiting 30 year old plots) when he decided to combine the robot & foundation universes.
By that time he had become truly famous, the only recognizable science fiction writer. His fans were clamoring for more. He realized he could make a million bazillion dollars by pandering to them. And did.

If that's nuts, please hit me over the head now.
#35
Old 10-13-2009, 12:38 PM
Member
Join Date: Aug 1999
Location: A better place to be
Posts: 26,718
Quote:
Originally Posted by muldoonthief View Post
Actually, the second robot novel, "The Naked Sun" was exactly that - a case where robots had to be involved in a murder(s) and Leej Baley had to figure out why & how.

After that I think he went a little nuts (though he was revisiting 30 year old plots) when he decided to combine the robot & foundation universes.
Quote:
Originally Posted by Exapno Mapcase
By that time he had become truly famous, the only recognizable science fiction writer. His fans were clamoring for more. He realized he could make a million bazillion dollars by pandering to them. And did.

If that's nuts, please hit me over the head now.
First, there was a ... fad isn't quite the right word, but bear with me ... about 20 years ago, for authors to conjoin their "universes" -- the common settings shared by a group of stories. (Poly enters into evidence "The Number of the Beast" and "Robots and Empire" as Exhibits A and B.)

Asimov felt challenged by the idea of trying to unify the vastly disparate futures of the Foundation and Lije Baley Robots stories. That the result was a "dancing bear"1 is not surprising.

And a very slight caveat to Exapno's point -- while Heinlein was known as the man who wrote "Stranger" and "the guy who first wrote a trip-to-the-Moon" story (very much untrue, but a public meme in the Nixon era), and Clarke as "the man who invented the communications satellite", Asimov was a household word as the polymath who wrote popular science on just about everything, and who had written a lot of SF. So while Exapno was wrong in detail, he was right in the general perception. Of the three, Asimov was by far the 'household name' among the general public.

As for why positronics, I always assumed that they were called that because positron-electron annihilations were what "ran" their brains, the positrons presumably coming from their (atomic) power source. I was surprised some years later to hear that Asimov had simply jumped on the buzzword bandwagon in naming them.


1 "The amazing thing about a dancing bear is not how well the bear dances; it's that it dances at all."

Last edited by Polycarp; 10-13-2009 at 12:42 PM. Reason: the usual run of typoes
#36
Old 10-13-2009, 12:39 PM
SDSAB
Join Date: Jun 2004
Location: my Herkimer Battle Jitney
Posts: 71,467
Quote:
Originally Posted by tanstaafl View Post
I always viewed Asimov's robot stories as logic puzzles. Basically they're all "Here are the Three Laws. Here is a robot which seems to have broken one or more of the laws. Figure out the set of circumstances under which this could have happened." ....
I love Asimov's robot stories, and I agree. He took the Three Laws as a jumping-off point for all sorts of intriguing musings on how the laws would interact with one another, how human error, malice or manufacturing quirks could lead to robot personalities and varyingly unpredictable behavior, and extreme situations in which the laws could actually (or just seem to) break down or be violated. Not a whole lot of sex or action in the stories, just very, very interesting explorations on the fraught dealings of humanity and its artificial servants.

The Three Laws are alluded to, in one way or another, in Aliens, Star Trek: The Next Generation, Robocop and Stealth, among others.

Last edited by Elendil's Heir; 10-13-2009 at 12:41 PM.
#37
Old 10-13-2009, 01:14 PM
Member
Join Date: Jul 2003
Location: North of Boston
Posts: 9,815
Quote:
Originally Posted by Exapno Mapcase View Post
By that time he had become truly famous, the only recognizable science fiction writer. His fans were clamoring for more. He realized he could make a million bazillion dollars by pandering to them. And did.

If that's nuts, please hit me over the head now.
Oh, I know what you mean - as a teenager who cut my SF teeth on I, Robot & the Foundation books, I certainly contributed my fair share to his million bazillion and loved it - when Daneel Olivaw introduced himself to Trevize, I couldn't contain myself. No value judgement on the Good Doctor intended. But looking back, it was, as Poly so eloquently put it, a dancing bear.
#38
Old 10-13-2009, 02:12 PM
Guest
Join Date: Oct 2006
Location: San Diego
Posts: 7,439
Quote:
Originally Posted by Captain Carrot View Post
Also the nouns robotics (which he thought was a real word at the time) and psychohistory, though obviously the latter is a lot less common.
For Psychohistory to work properly (both in its predictive use and its manipulative use), wouldn't the subject society have to be unaware of its inner intricacies?

In other words, if the general population knew how and why Psychohistory worked, it would muck things up (from the Psychohistorians' point of view)?
#39
Old 10-13-2009, 10:42 PM
Charter Member
Join Date: Aug 1999
Location: Minneapolis, Minnesota US
Posts: 15,580
One writer (might have been Asimov himself, but I don't remember) generalized the Three Laws of Robotics to the Three Laws of Tools:
  1. A tool must be safe to use.
  2. A tool must do what it's supposed to, unless this would make it unsafe.
  3. A tool must be durable, unless this would make it either unsafe or unusable.
#40
Old 02-19-2016, 11:03 PM
SDSAB
Join Date: Jun 2004
Location: my Herkimer Battle Jitney
Posts: 71,467
Bumped.

Quote:
Originally Posted by Elendil's Heir View Post
...The Three Laws are alluded to, in one way or another, in Aliens, Star Trek: The Next Generation, Robocop and Stealth, among others.
Just read John Scalzi's short e-story "The Tale of the Wicked" (2009), about a military starship which develops artificial intelligence, learns about Asimov's Three Laws and then has some peculiar ideas of its own. Good stuff, and worth a read for any Asimov fan.
#41
Old 02-20-2016, 12:09 AM
Charter Member
Moderator
Join Date: Jan 2000
Location: The Land of Cleves
Posts: 72,680
For what it's worth, by the way, a Three Laws robot can't be stolen by just telling it "come with me". It would already have been ordered by its owner "Don't go with just anyone who asks", and the robot would then have two conflicting orders. Conflicting orders are resolved, in part, by who has the greater authority to give that robot orders.

Of course, you could still get such a robot to come with you by convincing it that lives were at stake, such that the First Law would override the Second. But then, once it either saved the lives or discovered that you were bluffing about them, it would go right back to its master, as per its more-authorized legitimate orders.
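That precedence scheme amounts to a small priority function: a First Law claim trumps everything, and otherwise the higher-authority order wins. A toy sketch of the idea (purely hypothetical - nothing like this appears in Asimov's text):

```python
# Toy sketch of the conflict resolution described above: a credible
# lives-at-stake (First Law) claim overrides any order, and among
# conflicting orders the higher-authority source wins.
# Entirely hypothetical; invented for illustration.

def resolve(orders, first_law_claim=None):
    """orders: list of (authority_level, instruction) tuples.
    Returns the instruction the robot acts on."""
    if first_law_claim:
        # First Law overrides the Second entirely
        return first_law_claim
    # Otherwise the owner's standing order outranks a stranger's
    # "come with me" because of its greater authority level
    return max(orders, key=lambda o: o[0])[1]

standing = (10, "don't go with strangers")  # from the owner
stranger = (1, "come with me")              # from a would-be thief
print(resolve([standing, stranger]))        # owner's order wins
print(resolve([standing, stranger],
              first_law_claim="help the injured"))  # First Law overrides
```

Once the First Law claim is resolved or exposed as a bluff, the argument to `resolve` goes back to orders alone, and the robot returns to its master's standing instruction - exactly the behavior described above.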
#42
Old 02-20-2016, 12:52 AM
Guest
Join Date: Jan 2014
Posts: 4,909
The three laws were extended to their true logical end in The Humanoids series, and to a lesser extent, by Norman in "I, Mudd."

The first law states that a robot may not harm a human, or through inaction allow harm to come to them. It's the second part that's the problem. "Allowing harm through inaction" doesn't just mean not letting a car hit a human. What if there is a gun in a room with a human? Well, he might use it on himself. Therefore it must be removed. Same with knives. But you know, that large potted plant might fall over and hurt someone. Better take it away, too. Hammers can drop on your toe, sewing needles can stick you in the finger, that bump in the carpet can cause you to trip, maybe get a bruise.

Next thing you know, you can't move without a robot preventing you from doing so. Can't go outside - could get skin cancer. Maybe a meteor might fall from the sky and hit you.

The Humanoids series was the scary side of the first law. And the Humanoids always "won". Humanity lost all free will, and there was nothing they could do about it. By the end, even bad thoughts and heartbreak and sadness were "harms" that the Humanoids had to protect us from. It was one of the scariest series I've read.

Everyone worries about robots going berserk and killing us - no one but Jack Williamson worried that they might protect us too much!
#43
Old 02-20-2016, 07:20 AM
Charter Member
Join Date: Apr 1999
Location: Schenectady, NY, USA
Posts: 40,648
Quote:
Originally Posted by DrFidelius View Post
Yes, as I recall it. The Good Doctor's ego would never have allowed him to admit making a mistake about science.
Asimov mentioned his error in the introduction to the first edition (and every other edition, AFAIK) of the novel. He misspoke the isotope number, someone joked about it, and he decided to write the novel based on it.
#44
Old 02-20-2016, 08:15 AM
Charter Member
Join Date: Aug 1999
Location: Minneapolis, Minnesota US
Posts: 15,580
Quote:
Originally Posted by Just Asking Questions View Post
The three laws were extended to their true logical end in The Humanoids series, and to a lesser extent, by Norman in "I, Mudd." ...

Everyone worries about robots going berserk and killing us - no one but Jack Williamson worried that they might protect us too much!
I haven't read the whole series but didn't it turn out in the end that it was part of a very long-term plan to save humanity from some threat and the safety-obsession was a regrettable necessity in the meantime?
#45
Old 02-20-2016, 08:23 AM
Charter Member
Join Date: Aug 1999
Location: Minneapolis, Minnesota US
Posts: 15,580
(missed edit window)

Still, the series as originally conceived presents a terrifying idea: what if an artificial intelligence was super-intelligent, yet imbecilically devoted to an absurd idea due to misprogramming?
#46
Old 02-20-2016, 10:15 AM
Charter Member
Join Date: Mar 2002
Location: NY but not NYC
Posts: 29,310
Quote:
Originally Posted by Lumpy View Post
Still, the series as originally conceived presents a terrifying idea: what if an artificial intelligence was super-intelligent, yet imbecilically devoted to an absurd idea due to misprogramming?
A fascinating variant on this was all the rage a hundred years ago.

H. C. Greening, a well-known comic strip artist, introduced Percy in 1911. Percy was a robot, with rows of buttons on his back, each programmed to do a specific task. Whatever Percy did, he did perfectly. At first. Then he kept doing it to anything and everything in sight, leaving behind shambles. One gag, but a good one. It ran for 67 Sundays. For a decade Percy was America's most famous robot, although always called a mechanical man until R.U.R. retroactively changed the term. It was still being used when Asimov started writing, probably why he called his company U. S. Robots and Mechanical Men without ever providing a distinction. There wasn't one. The terms were perfectly synonymous, like cars and autos.

I've collected all the Percy strips for the first time ever, on my website at Percy: Comics' First Robot. Guaranteed to be lots of stuff there you never knew.

Looking back over this thread, I see I can add another bit you never knew about. Every oldtimer in SF knows about Cleve Cartmill's "Deadline" and the FBI investigation it caused. The very same FBI agents also investigated another book, and for much better reason. The book talked about atomic research being done in the U.S. by the very real National Defense Research Committee, headed by Dr. Constant, a thin disguise for the real-world James B. Conant. The author knows about the destructiveness of an atom bomb and says forthrightly that all the cyclotrons were taken over by the government to find ways to release uranium's energy. And that the Nazis were doing the same.

The book is The Last Secret by Dana Chambers, a pseudonym for Albert Fear Leffingwell. It's a mystery thriller, so far outside of the SF world that it didn't become an insiders' tale to be passed along the generations. The FBI had less of a struggle than with Cartmill to figure out where Chambers was getting this super-secret info: he had lifted it almost word-for-word out of prewar New York Times articles. All the agent could do was tell his publisher not to reprint the 1943 book. Campbell may have folded, but Dial Press sold the reprint rights to two separate paperback houses by 1945. It's a great story, better than the one actually in the book.
#47
Old 02-20-2016, 01:35 PM
SDSAB
Join Date: Jun 2004
Location: my Herkimer Battle Jitney
Posts: 71,467
Quote:
Originally Posted by Lumpy View Post
I haven't read the whole series but didn't it turn out in the end that it was part of a very long-term plan to save humanity from some threat and the safety-obsession was a regrettable necessity in the meantime?
Reminds me of another short story (not about robots), the name and author of which I've forgotten. A man uses a time machine to go back and, at key turning points, nudge humanity towards a calm, peaceful, low-tech agricultural existence (including, IIRC, giving Napoleon an aneurysm as a teen), which to him is the ideal. At last the time-traveler's work is done, and he returns to a future Earth in which everyone lives in placid little villages, and is very happy. The last sentence of the story is something like,
SPOILER:
"Of course, when the battlecruisers of the cruel and rapacious Ghe'ndi race took up their orbits a week later, Earth was utterly unprepared to resist them."
#48
Old 02-20-2016, 05:24 PM
Member
Join Date: Dec 2002
Location: San Diego, CA
Posts: 22,727
Elendil's Heir: Another story with almost exactly the opposite plot is "Who Needs Insurance" by Robin S. Scott. It's a brilliant little tale, and, really, the genius of it is the telling, not the "idea" or the revelation. You could cut out the last page entirely, and still enjoy the story as a damn fine story.

It's collected in Nebula Award Stories #2, itself one of the finest anthologies in the history of SF, also containing such brilliant works as "The Last Castle" by Jack Vance (arguably his masterpiece) and "Among The Hairy Earthmen" by R.A. Lafferty, one of the quirkiest pieces of revisionist history you'll ever encounter, both hilarious and terrifying.

Highest possible recommendation: anyone who loves SF cannot do better than to read this book.

(Lafferty is terribly under-appreciated!)
#49
Old 02-20-2016, 08:39 PM
Guest
Join Date: Jan 2014
Posts: 4,909
Quote:
Originally Posted by Lumpy View Post
I haven't read the whole series but didn't it turn out in the end that it was part of a very long-term plan to save humanity from some threat and the safety-obsession was a regrettable necessity in the meantime?
I thought I'd read all the Humanoids, and that doesn't sound familiar.

But it sort of is the ultimate rationale for Colossus's behavior in the Colossus series.