The Sword and Laser discussion

This topic is about Alif the Unseen
2014 Reads > AtU: About Software and Abstraction


It was a bit over the top when his computer melted under too much stress - mine just reboots when I try to make it understand jokes :-)
But all in all it didn't bother me. Some writers use the speed of light or black holes to pseudo-science their stories. This one uses computers. OK by me.



But the computer is doing nothing but manipulating abstractions that it does not "UNDERSTAND".
People complain when I say that, because I'm supposed to explain what UNDERSTAND means such that computers don't do it. But does a baby UNDERSTAND what hunger is and what milk is before it knows the abstractions for them?


Let's start with some of the basics of computation. Given any computer today, here is a list of everything it is capable of doing:
1. Reading a value from a (limited) memory
2. Writing a value to memory
3. Simulating simple arithmetic
4. Performing basic Boolean logic
5. Changing where code is being executed based on the results of 1-4 (or unconditionally)
This should raise some skeptical eyebrows. For example, simulating arithmetic? Really? Sure - if you ask a second grader what's 255 + 1, s/he will say "256" most of the time. Ask a computer (doing 8-bit arithmetic), and it will say 0 (plus a carry). Which is clearly not arithmetic, but a limited simulation of it.
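Just to make that concrete, here's a minimal C++ sketch of the 255 + 1 example (the variable names are mine; std::uint8_t stands in for an 8-bit register):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::uint8_t counter = 255;       // the largest value 8 bits can hold
    std::uint8_t next = counter + 1;  // the sum (256) gets truncated back to 8 bits
    std::cout << static_cast<int>(next) << "\n";  // prints 0, not 256
    return 0;
}
```

The hardware isn't doing arithmetic the way a second grader understands it; it's wrapping around a fixed-size register.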
As it turns out, there is a fictional computer invented by Alan Turing which does even less, yet oddly no computer can do more than it, since his has infinite memory. Turing's computer had memory in the form of a tape divided up into cells. Each cell could have one symbol written on it, and the total number of symbols was finite. The machine reads a cell and, based on what's written there, writes a new symbol (possibly the same one) and then moves the tape one cell in either direction.
It can also halt. That's it. But Turing showed that his computer was capable of calculating all computable functions.
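To make that concrete, here is a minimal sketch of a Turing machine simulator in C++. The states, symbols, and the little "flip every bit" program are toy choices of mine, not anything from the book - the point is just read, write, move, repeat:

```cpp
#include <iostream>
#include <map>
#include <string>

// A transition: given (state, symbol) -> (symbol to write, head move, next state).
struct Action { char write; int move; std::string next; };

int main() {
    // Toy program: walk right, flipping 0 <-> 1, and halt at the first blank ('_').
    std::map<std::pair<std::string, char>, Action> program = {
        {{"scan", '0'}, {'1', +1, "scan"}},
        {{"scan", '1'}, {'0', +1, "scan"}},
        {{"scan", '_'}, {'_',  0, "halt"}},
    };

    std::map<long long, char> tape;  // "infinite" tape, blank by default
    std::string input = "10110";
    for (long long i = 0; i < (long long)input.size(); ++i) tape[i] = input[i];

    std::string state = "scan";
    long long head = 0;
    while (state != "halt") {
        char symbol = tape.count(head) ? tape[head] : '_';
        Action a = program.at({state, symbol});
        tape[head] = a.write;        // write a value to memory
        head += a.move;              // move the tape one cell
        state = a.next;              // change what gets executed next
    }

    for (auto& cell : tape) std::cout << cell.second;  // prints 01001_
    std::cout << "\n";
    return 0;
}
```

A lookup table, a tape, and a read/write head - and that is already enough, in principle, to compute anything the machine on your desk can.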
The problem with a Turing machine, and with a typical computer, is that it is impractical to write anything significant for them directly; the process is simply too tedious.
To avoid that tedium, one will typically design a different, less tedious language that is translated, either directly or indirectly, into the tedious one.
So what do these new languages do? For the most part, they formalize arithmetic into proper concepts: integers, whole numbers, decimal numbers, and single language symbols (like 'a' or ا (alif)). They define how to organize these values, how to lay out instructions for manipulating them, how to name groups of instructions so they can be reused, and, to a certain extent, rules that allow or prevent access to those named groups.
So this is our first abstraction of one language on top of another.
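As a toy illustration (the class and the tax example here are invented for this post, not from the book): the higher-level language gives us typed values, ways to organize them, named reusable groups of instructions, and access rules, and the translator turns all of it back into the reads, writes, arithmetic, and jumps from the earlier list.

```cpp
#include <iostream>
#include <vector>

// A named, reusable group of instructions operating on typed values.
class Invoice {
public:
    void addLine(double amount) { lines.push_back(amount); }  // organize values
    double totalWithTax(double rate) const {                  // reusable instructions
        double sum = 0.0;
        for (double amount : lines) sum += amount;            // loops become jumps
        return sum * (1.0 + rate);
    }
private:
    std::vector<double> lines;  // "private" = rules about what may be accessed
};

int main() {
    Invoice invoice;
    invoice.addLine(19.99);
    invoice.addLine(5.00);
    std::cout << invoice.totalWithTax(0.08) << "\n";  // prints 26.9892
    return 0;
}
```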
The actual computer has hardware that is specialized for particular tasks - like reading from a disk or transferring data over a network. Each piece of hardware (which I will hereafter call a device) has its own way of doing what it needs to do. So you have a hard drive and a keyboard and a mouse and a video card. How can we handle working with all of them?
We define the operations that allow a computer to talk to a device. In Linux/UNIX based operating systems, it comes down to 5 tasks:
1. open (give me access to the device)
2. close (relinquish access)
3. read (get data from the device)
4. write (send data to the device)
5. control (give the device instructions that are neither reading nor writing)
This is the device abstraction (our second), which lets us stay neutral about how we talk to any particular device.
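From a program's point of view on Linux/UNIX, that looks roughly like the sketch below, using the standard system calls. The device path and the FIONREAD request are just placeholder examples; substitute whatever your device actually needs:

```cpp
#include <fcntl.h>      // open
#include <unistd.h>     // read, write, close
#include <sys/ioctl.h>  // ioctl ("control")
#include <cstdio>

int main() {
    // 1. open: the same call works for disks, terminals, serial ports, ...
    int fd = open("/dev/ttyUSB0", O_RDWR);  // placeholder device path
    if (fd < 0) { perror("open"); return 1; }

    // 5. control: ask a device-specific question through the generic interface
    int bytesWaiting = 0;
    ioctl(fd, FIONREAD, &bytesWaiting);

    // 3. read: get data from the device, whatever it is
    char buffer[64];
    ssize_t n = read(fd, buffer, sizeof buffer);

    // 4. write: send data back the same way
    if (n > 0) write(fd, buffer, (size_t)n);

    // 2. close: relinquish access
    close(fd);
    return 0;
}
```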
On top of that, we need abstractions to make each class of device act uniformly. These are drivers. With a driver in place, software can print to any printer in a uniform way.
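In C++ terms, a driver layer might look like the sketch below (the class and method names are invented for illustration): every printer presents the same interface, and the application only ever talks to that interface.

```cpp
#include <iostream>
#include <string>

// The uniform interface every printer driver must present.
class PrinterDriver {
public:
    virtual ~PrinterDriver() = default;
    virtual void print(const std::string& document) = 0;
};

// Two very different devices hiding behind the same abstraction.
class LaserPrinter : public PrinterDriver {
public:
    void print(const std::string& document) override {
        std::cout << "[laser] rasterizing and printing: " << document << "\n";
    }
};

class DotMatrixPrinter : public PrinterDriver {
public:
    void print(const std::string& document) override {
        std::cout << "[dot-matrix] hammering out: " << document << "\n";
    }
};

// Application code never cares which printer it actually has.
void printReport(PrinterDriver& printer) {
    printer.print("quarterly report");
}

int main() {
    LaserPrinter laser;
    DotMatrixPrinter dotMatrix;
    printReport(laser);
    printReport(dotMatrix);
    return 0;
}
```

When a new kind of printer shows up, only a new driver gets written; printReport never changes.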
When I write an application that uses these things, almost assuredly, I'm going to define a model of computation that makes the application run. Guess what? That's another abstraction.
In one project I did about a decade ago, I had roughly 5 abstractions working in concert with each other. One was a device abstraction, one was an assembly line model, one was a computational abstraction, one was a data modeling abstraction, and so on.
My whole day is creating or linking abstractions together.
All these abstractions have costs. Some are efficiency costs (too many layers and your code runs slowly), some are memory costs (it uses too much), some are readability costs, and some are the costs that come with a leaky abstraction.
Now, how does this tie into Alif the Unseen? There is a fair amount of talk about the abstraction of language (spoken and written) that the characters want to use to achieve a particular goal, and how human language is unsuited to the task because of limits on what is expressible or because of ambiguity in the source language (whether intentional or not).
I love that Wilson makes a point of the distinction between translation and interpretation, the latter inherently carrying an implied nod to inaccuracy. Wilson also talks about your language influencing the way you think, but that is up for debate, and I disagree only in that the Turing argument applies here: there may be a difference in the efficiency of expression, but there is an equivalence.
Interesting side note - you can roughly date a language by the number of words it has for colors. Young languages have relatively few (usually just light, dark, and red). But the fact that I don't have a specific word for the green of the leaves on a young tomato plant doesn't mean I can't think or communicate about it.
So the basis was sound: if nothing else, computers and software are tools of abstraction. Abstraction is easy to do and encouraged for problem-solving because, generally speaking, you're not actually solving a problem but trying to answer the question "can I solve this problem?" or "what might happen if I do this?", with the understanding that watching a model or abstraction fail is far cheaper than watching a bridge collapse. The danger, of course, is that it's turtles all the way down.
In general, the technobabble was decent, but Alif's ability to put together a functional computational model of the Thousand and One Days in C++ in a few days was hard to swallow, and the computer melting itself down as a result of running it is just plain laughable - though it's dramatic, I guess. I will say that his code collapsing on itself is, in fact, a common thing that happens in writing software: your abstraction is leaky, or the code has grown beyond your ability to understand its interconnectedness and the side effects of making a "minor" change. I won't discuss security - I'm not a security expert - but I'll just say that the portrayal was naive, probably to keep it simpler to comprehend and to make a better narrative.
tl;dr - all language is an abstraction. Computers are really good for building new abstractions and are, in fact, designed to be that way.