The Sword and Laser discussion

Alif the Unseen
AtU: About Software and Abstraction


Steve (plinth) | 179 comments Since abstraction, and the abstraction of language in particular, is a theme in the book, I thought I would take some time to fill in some of the gaps about computing that come up. This is a very long screed.

Let's start with some of the basics of computation. Given all the computers in use today, here is a list of everything they are capable of doing:
1. Reading a value from a (limited) memory
2. Writing a value to memory
3. Simulating simple arithmetic
4. Performing basic Boolean logic
5. Changing where code is being executed based on the results of 1-4 (or unconditionally)

This should raise some skeptical eyebrows. Simulating arithmetic? Really? Sure - ask a second grader what 255 + 1 is and s/he will say "256" most of the time. Ask a computer (doing 8-bit arithmetic) and it will say 0 (plus a carry). That is clearly not arithmetic, but a limited simulation of it.
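To make that concrete, here is a rough sketch in plain Python (my own toy, not any particular chip's instruction set) of what that "simulation" looks like when results are kept to 8 bits:

```python
def add8(a, b):
    """Simulate unsigned 8-bit addition: keep the low 8 bits, report the carry."""
    total = a + b
    return total & 0xFF, total > 0xFF   # (result, carry flag)

print(add8(255, 1))  # (0, True) -- the machine says "0, plus a carry", not 256
```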

As it turns out, there is a fictional computer invented by Alan Turing that does even less, and yet, oddly, no real computer can do more, since his has infinite memory. Turing's computer had memory in the form of a tape divided into cells. Each cell could hold one symbol, and the total number of symbols was finite. The computer reads a cell and, based on what's written there, writes a new symbol and then moves the tape one cell in either direction. It can also halt. That's it. But Turing showed that his machine was capable of calculating every computable function.
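To show just how little machinery that is, here is a toy sketch of mine in Python (the states, symbols, and rule table are made up purely for illustration) of a Turing machine that increments a binary number:

```python
# A minimal Turing machine: (state, symbol) -> (write symbol, move, next state).
# This rule table increments a binary number written on the tape, with the
# head starting on the rightmost digit.
RULES = {
    ("inc", "1"): ("0", -1, "inc"),   # 1 plus a carry is 0; the carry moves left
    ("inc", "0"): ("1",  0, "halt"),  # absorb the carry and stop
    ("inc", " "): ("1",  0, "halt"),  # ran off the left end: write a new leading 1
}

def run(tape_string):
    tape = dict(enumerate(tape_string))        # the (conceptually infinite) tape
    head, state = len(tape_string) - 1, "inc"  # start on the rightmost cell
    while state != "halt":
        write, move, state = RULES[(state, tape.get(head, " "))]
        tape[head] = write                     # write a symbol...
        head += move                           # ...and move the tape one cell
    return "".join(tape[i] for i in sorted(tape)).strip()

print(run("1011"))  # prints 1100 (11 + 1 = 12)
```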

The problem with a Turing machine, and with a typical computer, is that it is impractical to write anything significant for them directly; the process is simply too tedious.

To avoid the tedium, one will typically design a different, less tedious language that is either directly or indirectly translated to the tedious language.

So what do these new languages do? For the most part, they formalize arithmetic into concepts like integers, whole numbers, decimal numbers, and single language symbols (like 'a' or ا (alif)); they define how to organize these values, how to lay out the instructions that manipulate them, how to name those instructions as groups that can be reused, and, to a certain extent, they implement rules governing which of those named groups can be accessed.

So this is our first abstraction of one language on top of another.
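As a rough illustration of what that buys you (the names here are mine, purely for example): values the language already treats as text and integers, and a named, reusable group of instructions, instead of raw loads, stores, and jumps:

```python
# A named, reusable group of instructions ("a function"), working on values the
# language has formalized for us as text and integers -- no registers, no jumps,
# no explicit memory addresses to manage.
def shout(greeting: str, times: int) -> str:
    return " ".join([greeting.upper()] * times)

print(shout("alif", 3))  # ALIF ALIF ALIF
```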

The actual computer has hardware that is specialized for particular tasks - like reading from a disk or transferring data over a network. Each piece of hardware (which I will hereafter call a device) has its own way of doing what it needs to do. So you have a hard drive and a keyboard and a mouse and a video card. How can we handle working with them all?

We define the operations that allow a computer to talk to a device. In Linux/UNIX-based operating systems, it comes down to five tasks:
1. open (give me access to the device)
2. close (relinquish access)
3. read (get data from the device)
4. write (send data to the device)
5. control (give the device instructions that are neither reading nor writing)

This is the device abstraction (our second), which lets us stay neutral about how we talk to any particular device.
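On a Linux/UNIX system you can see this abstraction directly; the sketch below is just an illustration, using /dev/urandom as a convenient stand-in for "some device":

```python
import os

# The same open/read/close calls work on a regular file, a terminal, or a
# device node such as /dev/urandom -- the kernel hides how each device works.
fd = os.open("/dev/urandom", os.O_RDONLY)  # 1. open: give me access to the device
data = os.read(fd, 16)                     # 3. read: get data from the device
os.close(fd)                               # 2. close: relinquish access
print(data.hex())

# write() covers task 4, and the "control" call (ioctl, exposed in Python as
# fcntl.ioctl) covers task 5; which control requests exist depends on the device.
```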

On top of that, we need abstractions that make each class of device act uniformly. These are drivers. With a driver in place, every printer can be printed to in the same way from software.
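Here is a rough sketch of the driver idea (the class names are entirely made up): each concrete driver knows its own device's quirks, while the software above it only ever sees the uniform interface:

```python
from abc import ABC, abstractmethod

class Printer(ABC):
    """The uniform interface the rest of the software prints through."""
    @abstractmethod
    def print_page(self, text: str) -> None: ...

class LaserDriver(Printer):
    def print_page(self, text: str) -> None:
        # a real driver would translate `text` into this printer's command language
        print(f"[laser] {text}")

class InkjetDriver(Printer):
    def print_page(self, text: str) -> None:
        print(f"[inkjet] {text}")

def print_report(p: Printer) -> None:
    p.print_page("quarterly numbers")  # has no idea which kind of printer this is

print_report(LaserDriver())
print_report(InkjetDriver())
```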

When I write an application that uses these things, almost assuredly, I'm going to define a model of computation that makes the application run. Guess what? That's another abstraction.

In one project I did about a decade ago, I had roughly 5 abstractions working in concert with each other. One was a device abstraction, one was an assembly line model, one was a computational abstraction, one was a data modeling abstraction, and so on.

My whole day is creating or linking abstractions together.

All these abstractions have costs. Some are efficiency costs (too many layers and your code runs slowly), some are memory costs (it uses too much), some are readability costs, and some are the costs that come with a leaky abstraction.

Now, how does this tie into Alif the Unseen? There is a fair amount of talk about the abstraction of language (spoken and written) being used to achieve a particular goal, and about how human language is unsuited to the task because of limits on what is expressible, or because of ambiguity in the source language (whether intentional or not).

I love that Wilson makes a point of the distinction between translation and interpretation, the latter inherently giving an implied nod to inaccuracy. Wilson also talks about your language influencing the way you think, but that is up for debate, and I disagree only in that the Turing argument applies here - there may be a difference in the efficiency of expression, but there is an equivalence.

Interesting side note - you can roughly date a language by its number of words for colors. Young languages have relatively few (usually light, dark, and red). But just because I don't have a specific word for the green of the leaves on a young tomato plant doesn't mean I can't think or communicate about it.

So the basis was sound: if nothing else, computers and software are tools of abstraction. It's easy to do and encouraged for problem-solving, because generally speaking, you're not actually solving a problem but trying to answer the question, "can I solve this problem?" or "what might happen if I do this?" with the understanding that watching a model or abstraction fail is far cheaper than watching a bridge collapse. The danger, of course, is that it's turtles all the way down.

In general, the technobabble was decent but the ability for Alif to (view spoiler)

tl;dr - all language is an abstraction. Computers are really good for building new abstractions and are, in fact, designed to be that way.


Paulo Limp (paulolimp) | 164 comments I tend to agree. Being from IT as well, and a former programmer, I can tell that some research was done to make it look plausible. I found the discussion about quantum computers, and the analogy of them being able to "understand metaphors", quite interesting.
It was a bit over the top when his computer melted over too much stress - mine just reboots when I try to make it understand jokes :-)
But all in all it didn't bother me. Some writers use the speed of light or black holes to pseudo-science their stories. This one uses computers. OK by me.


terpkristin | 4407 comments I enjoyed the programming aspects as a vehicle for the story and a fantasy for what we might be able to do...one day. I thought it was an awesome idea but I assumed (this area NOT being my area of expertise) that it was a little...optimistic. ;)


Buzz Park (buzzpark) | 394 comments The programming aspect was a little silly, but I was willing to press the "I Believe" key for the sake of the story. :-)



Karl Smithe | 77 comments Steve wrote: "tl;dr - all language is an abstraction. Computers are really good for building new abstractions and are, in fact, designed to be that way. "

But the computer is doing nothing but manipulating abstractions that it does not "UNDERSTAND".

People complain when I say that, because I am supposed to explain what UNDERSTAND means such that computers do not do it. But does a baby UNDERSTAND what hunger is and what milk is before it knows the abstractions for them?


Steve (plinth) | 179 comments Interesting. 'Understand' is an odd word to use since it is itself nebulous. People notoriously have a horrible time measuring understanding (cf. 'quality' in Zen and the Art of Motorcycle Maintenance). I would suggest that 'experience' is a better word in your example.


Ulmer Ian (eean) | 341 comments I had a coworker who worked in a tropical country; his hard drives would fail all the time. I mean, I guess a dry country is much nicer for computers, but still. Melting isn't sooo off. :D

(view spoiler)


Geir (makmende) Abdullah said it succinctly in chapter 3: “That doesn’t make any sense, but whatever.”

The technotalk certainly didn't add any value to the book for my part. Occasionally it seemed to make sense, maybe just by accident. I just learned to ignore it to the point where it was just vaguely annoying.

