Aug. 5th, 2009

(These questions are all things I asked myself while reading this paper. I don't necessarily know the answers; in fact, some of them I definitely don't. That's what makes them discussion topics. :) Feel free to bring up other discussion topics in the comments -- these are just some possible jumping-off points intended to provoke discussion.)

1. Shannon tells us up-front that the system he's describing is semantics-agnostic: he's interested in engineering a system where the message that Alice sends is the same message that Bob receives, and if Bob misunderstands it, well, that's his problem. Was this a wise design choice? Is it ever the case that an electronic message (email, IM) loses meaning (semantic content) where an equivalent analog message (a letter, notes passed in class) wouldn't? Have people changed their communication style to compensate for this?

Another way of phrasing that question: do analog messages possess any information that is lost in conversion to a digital channel?

If so, what might be a way of preventing that information from being lost?

(My [partial] answer: It was a necessary design choice; they were doing the best they could at the time, and simply getting text from place to place was hard enough. But, yes, I think information can be lost: for instance, with a handwritten letter you can often tell whether someone was writing in a hurry, or if they were especially upset [e.g., shaky writing].)
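
A very literal, signal-level version of that lost-information question is quantization: once an analog value has been forced onto a finite set of levels, the difference is gone for good and no decoder gets it back. Here's a rough Python sketch, using a made-up sine-wave "signal" purely for illustration:

    import math

    def quantize(x, levels, lo=-1.0, hi=1.0):
        """Snap a continuous value in [lo, hi] to the nearest of `levels` discrete steps."""
        step = (hi - lo) / (levels - 1)
        return lo + round((x - lo) / step) * step

    # A stand-in "analog" signal: one period of a sine wave, finely sampled.
    signal = [math.sin(2 * math.pi * t / 100) for t in range(100)]

    # Digitize it with only 8 levels (3 bits per sample).
    digital = [quantize(s, levels=8) for s in signal]

    # Whatever is left in the difference is information the digital channel never carried.
    max_error = max(abs(a - d) for a, d in zip(signal, digital))
    print(f"worst-case quantization error: {max_error:.3f}")

(Handwriting cues like shakiness are a much higher-level version of the same thing: detail that the chosen digital representation simply has no slot for.)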

2. Shannon observes that the size of the set of all possible messages is, in practice, finite. We can see that this is true, since any message actually sent is of bounded length, so there are only finitely many possibilities for its content; fewer still will conform to the rules of any known language. (For instance, there are 26*26*26 = 17,576 possible three-letter strings over the Latin alphabet. Some, like "cat", are meaningful in some language; others, like "qzk", are not.) Based on astronomical observations, the number of atoms in the observable universe is estimated to be somewhere between 10^79 and 10^81. Does this mean that the number of possible states of the universe is finite, albeit very, very large? How does the metric expansion of space factor into this?

(Another way of asking this: does the law of conservation of energy hold for the universe as a whole? In other words, is the universe in toto a closed system? Does conservation of energy imply an upper bound on the information content of the universe?)
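
The 26*26*26 counting in the question is easy to make concrete. Here's a small Python sketch; the five-word "lexicon" is just a made-up stand-in for a real dictionary:

    from itertools import product
    from string import ascii_lowercase

    # Every three-letter string over the Latin alphabet: 26^3 = 17,576 of them.
    all_strings = [''.join(chars) for chars in product(ascii_lowercase, repeat=3)]
    print(len(all_strings))  # 17576

    # A deliberately tiny stand-in for "the rules of any known language".
    known_words = {"cat", "dog", "the", "and", "qat"}
    meaningful = [s for s in all_strings if s in known_words]
    print(len(meaningful), "of", len(all_strings), "strings are words in this toy lexicon")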

3. We've figured out how to transmit text, sound, images, and combinations of the above pretty effectively, but that's about it. Apart from taste, touch, smell and proprioception (yipes!), what other forms of input-to-humans (or other organisms!) might we hypothetically want to transmit? If it's not something that humans normally perceive -- electromagnetic field strength and ionizing radiation level are two that come to mind -- what might be some ways of communicating that to humans effectively?

4. Shannon claims "we can ... approximate to a natural language by means of a series of simple artificial languages", and gives an example in section 3. Is his claim correct? How closely can we approximate without bringing semantics into the picture? How closely can we approximate if we do bring semantics into the picture? Does Shannon's omission of semantic content from his design decisions affect the way he defines information entropy? Could we come up with an alternate measure of entropy that does take semantic content into account? Or, is there a way to use Shannon entropy to measure the information of the semantic content of a message? (Perhaps a more basic question: how can we quantify the semantic content of a message?)
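
Shannon's "series of simple artificial languages" is also easy to play with directly. Below is a rough Python sketch that builds his first-order approximation (letters drawn independently with their observed frequencies) and his second-order digram approximation, and computes the first-order entropy H = -Σ p·log2(p). The one-line "corpus" is a made-up stand-in for the English text and frequency tables Shannon actually used; note that nothing here knows or cares what any of the output means, which is exactly the gap the question is poking at:

    import math
    import random
    from collections import Counter, defaultdict

    # Stand-in corpus; Shannon used English text and published frequency tables.
    text = "the cat sat on the mat and the dog sat on the log"

    # First-order approximation: letters drawn independently with observed frequencies.
    freqs = Counter(text)
    total = sum(freqs.values())
    probs = {c: n / total for c, n in freqs.items()}
    entropy = -sum(p * math.log2(p) for p in probs.values())
    print(f"first-order entropy: {entropy:.2f} bits per character")
    print("first-order sample: ",
          ''.join(random.choices(list(probs), weights=list(probs.values()), k=40)))

    # Second-order (digram) approximation: each letter depends on the one before it.
    digrams = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        digrams[a][b] += 1

    out = [random.choice(text)]
    for _ in range(39):
        following = digrams.get(out[-1])
        if following:
            out.append(random.choices(list(following), weights=list(following.values()))[0])
        else:
            out.append(random.choice(text))
    print("second-order sample:", ''.join(out))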
