The Sword and Laser discussion

This topic is about Blindsight
2011 Reads > BS: So, Why Personify the Space Probe?

message 1: by Matthew - rated it 4 stars - Apr 06, 2011 08:15PM




It's not like it's that hard getting into the book. My reading pace is average, I'd say, but I am never quite sure who is talking at any time and what their role is.

I don't mind first person POV books, but if you're going to go first person, you either need to stick with a single POV or make it super obvious when the POV shifts and who it's shifting to.

It does get better further in, as it's almost all from Siri, but there's a flashback or two that threw me for a loop. Especially the Bates flashback.

Though it isn't stated, I took the personification of the probe to mean that it was semi-organic. Given the way the book portrays any mechanical modification as dehumanizing, a purely mechanical device with a point of view seemed out of place. I'm sure I'm reading way too much into a short POV, but it works for me.

The POV changes can be a bit off-putting, I've found, as well.

It took a few short sessions with the book to really get a handle on the narrative style - it's very DIFFERENT from what I've been reading lately.

I got into the writing style about 25% into the book. Up to then it always felt like it was a different POV and I couldn't figure out who was who. It really *is* all Siri, and I don't know why it's so confusing. Maybe it's the going back and forth in time, and the whole "imagine you're..." parts lure you off-track.



I got the sense that the satellite wasn't organic, but putting us in its head gave it the appearance of being organic, meaning it only has a point of view because we're imagining it has one (as the author is asking us to do).

I think maybe that was the only point. To me, it was a clever way of providing exposition while exploring this theme that seems to be emerging of AI vs. cognizance (and all that comes in between).

I'd accept that, but the author has Siri ask us to imagine we are the probe, and I don't see Siri being that happy about anything. I didn't want to make a big deal about it, but after reading the whole book it is a little jarring.
I suppose the author could be making the point that happiness is only possible for the simplest of sentient creatures. If so... that's cold.

Ha. In that case, maybe Siri feels that way.
I haven't read far past that point, but I think the satellite POV is an important point of discussion; it should be a big deal. Watts does this POV flip again when he puts us in the "heads" of the first and second waves, so he's already exploring that line between sentience and AI. Maybe because Siri is "half-in, half-out" emotionally, he imagines going all the way to AI would be preferable. By the way, I didn't get the sense the satellite was happy, but rather that it wasn't unhappy. It just was.

Yeah, there are several times the POV shifts, and each time it seems that it's Siri saying, "now imagine that you're X..." It does seem tied to his whole synthesist method of observing and analyzing others' points of view. He says several times that his own dealings with humans are similar to the Chinese Room scenario: that he's faking feelings and interactions by following a large set of observed rules. There are things he tells us at other times suggesting that "I am a Chinese Room myself" is an overstatement, but the point of the space-probe identification may be that the exercise of identifying with the probes and imagining their motivations (even though they're machines) is what he does with humans, too, just on a much more complex level. That is, fellow humans are in a way as abstract to him as machines?