maradydd: (Default)
It's been a busy week for comments over here at Radio Free Meredith, and there have been some exciting discussions going on. I'd like to break yet another comment thread out into a post of its own: [livejournal.com profile] heron61 and I got to talking about some freedom-of-speech stuff, which you can go read if you want to, and I'm going to continue that discussion here.

Why? Well, it's been two weeks since we started our discussion of Claude Shannon's "A Mathematical Theory of Communication". However, instead of moving on to Part 2 of that paper, I'm going to talk about the history of telecommunications, from both a technical and an economic standpoint. I'll explain some fundamentals -- many of which were driving forces behind Shannon's research -- and we'll explore the problems of bandwidth scarcity, how they got started, how information theory has helped to address them, and why they're still relevant today. I also have a modest proposal, but that will be a separate post.

I'm going to offer a counter-proposal to [livejournal.com profile] heron61's proposal of reintroducing the Fairness Doctrine, but to do so I'm going to need to step back in time and give a sort of technological history of broadcast media and why it works the way it does today. Hopefully [livejournal.com profile] enochsmiles will also jump in and put in his $.028 (he gets paid in euros) about the telecom side of things -- he was right there in the thick of it for a lot of what was going on between the big telecom providers in the late '90s and he's got a lot of good domain knowledge.

So. In the beginning, there was radio. (Actually, the telegraph came before radio, as did the telephone, and those will be important in the big picture, but we're talking about how mass media came to be, so we're going to start with radio.) At first, radio was just wireless telegraphy using Morse code, which is still well-loved by hams like me. Every radio signal that conveys something other than a continuous tone has a bandwidth, which is literally how wide a piece of spectrum the signal needs in order to be transmitted and received effectively. Bandwidth is measured in hertz, abbreviated Hz. One Hz is one cycle per second: the wave starts at zero, rises to its peak, falls back down past zero to its trough, then rises back up to zero.

For wireless telegraphy, also known as CW, the bandwidth can be as little as 20 Hz, which is a really narrow slice. Say I'm transmitting that signal using a 28.000000 MHz carrier wave -- "transmitting on 28 MHz" -- with a 20 Hz modulation frequency: if someone else in range is simultaneously transmitting at, say, 28.000010 MHz, also with a 20 Hz modulation frequency, our signals will interfere with each other, but if the other guy moves up to a carrier wave at 28.000040 MHz, we're fine. (Modulating one frequency with another frequency gives you what's called "sidebands", which are the sum and difference of the two signals, so the other guy has to move all the way up to 28.000040 to keep his lower sideband from overlapping with your upper sideband.) With the amount of bandwidth allocated for 10-meter (that is, signals with a wavelength of around 10 meters) narrow CW on the current ITU region 1 amateur bandplan, there's room for 1750 simultaneous 20 Hz signals: that band goes from 28.000 MHz to 28.070 MHz, and each transmission would use a carrier wave separated from its neighbours by 40 Hz to either side. (I'm oversimplifying this a lot, because there are a bunch of things that come into play when figuring out how wide a CW signal is, but they're not hugely relevant here.)
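If you want to check that channel-packing arithmetic yourself, here's a quick Python sketch. The 40 Hz channel width is just the simplification from the paragraph above (a 20 Hz sideband on either side of the carrier), not how a real bandplan carves things up:

    # Back-of-the-envelope check of the channel-packing arithmetic above,
    # assuming each CW signal occupies 40 Hz (a 20 Hz sideband either side).
    band_start_hz = 28_000_000      # 28.000 MHz
    band_end_hz = 28_070_000        # 28.070 MHz
    channel_width_hz = 2 * 20       # upper + lower sideband

    print((band_end_hz - band_start_hz) // channel_width_hz)   # 1750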

I haven't explained what CW stands for yet, because it ties directly into AM radio: the two are interlinked. CW stands for Continuous Wave. It's a carrier signal of constant amplitude and constant frequency, and to use it to communicate, you turn it on and off. Shannon talked about this a little bit in Part 1, Section 1 when he described the signals used in telegraphy: a dot is ON for one unit of time and OFF for one unit of time, a dash is ON for three units of time and OFF for one unit of time, a letter space is OFF for three units of time, and a word space is OFF for six units of time. (Clever readers may be thinking, "Hmm, is this units-of-time stuff important?" Yes, it is. We'll get to that, though it'll be a bit.) Since the signal is of constant frequency and constant amplitude, each dot or dash sounds the same when represented as a human-audible signal, i.e., a sound wave; they're just longer or shorter in duration. (Humans can't hear radio-frequency waves, but we can mathematically -- and electrically! -- map those waves down to a range of frequencies that people can hear.) But a CW signal is really just a special case of an AM, or Amplitude Modulated, signal.

Amplitude modulation just means changing, or modulating, the amplitude (informally, the height of the wave) of that carrier signal in order to produce variations in sound. This was originally invented for the telephone. When you speak into an analog telephone, the sound waves of your voice create pressure on a membrane in the mouthpiece, causing the membrane to vibrate. Those vibrations are mapped to the DC voltage (think of voltage, loosely, as the amplitude of the electrical signal) on the phone line -- the voltage rises and falls in sync with the vibration of the membrane caused by the pressure of the sound waves of your voice. The varying amplitude of your voice modulates the voltage (amplitude!) of the current on the telephone line, and that current travels over wires between you and whoever you're talking to. On the other end, the receiver translates that varying voltage into vibrations of a membrane in the earpiece, and the sound waves from that little buzzing membrane travel down the other person's ear canal to the eardrum, where they make the eardrum vibrate; the nervous system translates that vibration into nerve signals which the brain can interpret, and the other person hears what you're saying. Phew!
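For the curious, here's a tiny numpy sketch of amplitude modulation. The 1 kHz "voice" tone and 100 kHz carrier are made-up numbers chosen purely for illustration -- not anything a real phone line or broadcast station actually uses:

    import numpy as np

    fs = 1_000_000                             # samples per second
    t = np.arange(0, 0.01, 1 / fs)             # 10 ms of signal

    voice = np.sin(2 * np.pi * 1_000 * t)      # the modulating "voice" signal
    carrier = np.sin(2 * np.pi * 100_000 * t)  # the carrier

    m = 0.5                                    # modulation depth
    am_signal = (1 + m * voice) * carrier      # amplitude follows the voice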

AM radio works in a very similar fashion, but instead of modulating DC amplitude (voltage!) going over a wire, we modulate the amplitude of a radio signal. The thing is, in order to transmit a voice signal, you need much more bandwidth than you do for CW. Here's why. See, the frequency of the modulation signal determines just how quickly you can raise or lower the amplitude of the carrier signal. A guy named Harry Nyquist proved back in 1928 that a wave of B cycles per second can be used to transmit 2B code elements per second (if anyone's interested, we could read that paper sometime -- for now, just remember you have two sidebands to work with), so with a 20 Hz modulation frequency we actually have 40 code elements per second or 2400 code elements per minute. For somewhat obscure reasons, the word PARIS is used as a baseline for establishing transmission speed. (Like in typing, a "word" is really "five characters".) PARIS in Morse code is [.--. .- .-. .. ...] -- so let's look at how many times we could transmit PARIS in a minute.

By Shannon's reckoning (which is a little different from how hams do it, but let's go with Shannon), a dot takes up 2 time units, a dash takes up 4 time units, the space between two letters takes up 3 time units, and the space between two words takes up 6 time units. So we've got (2+4+4+2+3+2+4+3+2+4+2+3+2+2+3+2+2+2+6) = 54. 2400 code elements per minute divided by 54 code elements per word gives us roughly 44 words per minute. That's the absolute maximum words per minute we can possibly transmit using a 20 Hz modulation frequency -- the maximum capacity of the channel. If we could key Morse faster than that -- like Ted McElroy, who could do over 70 words per minute -- we'd need a higher modulation frequency, which would eat up more bandwidth because the sidebands to either side of the carrier would have to be larger.
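Here's that whole calculation as a few lines of Python, using Shannon's unit counts; the variable names are mine:

    # Shannon's unit costs: dot = ON 1 + OFF 1 = 2 units, dash = ON 3 + OFF 1 = 4,
    # 3 units between letters, 6 between words.
    DOT, DASH, LETTER_SPACE, WORD_SPACE = 2, 4, 3, 6

    PARIS = [".--.", ".-", ".-.", "..", "..."]   # P A R I S

    units = sum(DOT if sym == "." else DASH
                for letter in PARIS for sym in letter)
    units += LETTER_SPACE * (len(PARIS) - 1) + WORD_SPACE
    print(units)                                 # 54 units per "word"

    # Nyquist: a channel of B Hz carries at most 2B code elements per second.
    B = 20                                       # 20 Hz modulation frequency
    elements_per_minute = 2 * B * 60             # 2400
    print(elements_per_minute / units)           # ~44.4 words per minute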

But this is just dots and dashes. You have to make the amplitude fluctuate much more rapidly in order to reproduce audio; the rate at which you measure and reproduce those fluctuations is called the sampling rate, a term I'll use from here on out. CD-quality audio uses a sampling rate of 44.1 kHz. In AM bandwidth terms, 44.1 kHz is a huge modulation frequency. Today's FCC regulations limit the AM modulation frequency to 10.2 kHz (before 1989 it was 15 kHz), which is why AM radio doesn't sound anywhere near as good as a CD. And the FCC really hasn't allocated very much of the radio spectrum for commercial broadcasting; it never has.

History break! Also, why we can't have nice things.

Through all this time, the federal government has not changed the spectrum allocation for AM radio. But what about FM radio? What about television? We'll look at those as well -- but first, let's look at how FM works.

If amplitude modulation means raising or lowering the amplitude of a carrier wave to produce changes in sound, then frequency modulation means raising or lowering the frequency of a carrier wave to produce changes in sound. If you look at the waveform of an audio signal in the time domain (using, say, a program like Audacity), you'll see a sinusoidal wave of varying frequency and varying amplitude. The job of a frequency modulator is to combine this waveform with a sinusoidal carrier wave of fixed frequency and amplitude, to be sent out by a transmitter, and the job of an FM receiver is to tune in the modulated signal (by locking onto the carrier wave), strip out the carrier, and convert the modulating signal back into audio in much the same way that an AM receiver does, i.e., by turning it into a fluctuating voltage (amplitude!).
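And here's the FM counterpart of the earlier AM sketch, again with made-up toy numbers. The point is only that the audio now steers the carrier's frequency instead of its amplitude:

    import numpy as np

    fs = 1_000_000
    t = np.arange(0, 0.01, 1 / fs)

    audio = np.sin(2 * np.pi * 1_000 * t)     # the modulating signal
    f_carrier = 100_000                       # carrier frequency, Hz
    f_dev = 5_000                             # peak frequency deviation, Hz

    # Integrate the audio to get phase; the carrier's frequency swings with it.
    integral = np.cumsum(audio) / fs
    fm_signal = np.cos(2 * np.pi * (f_carrier * t + f_dev * integral))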

FM turns out to be much more robust against interference than AM, as you've no doubt noticed if you've ever listened to either of them while driving. If an AM receiver picks up two signals at or near the same carrier frequency, it can't determine which shifts in amplitude correspond to which signals, so it just demodulates everything it picks up at the tuned-in frequency and you end up hearing talk radio and the baseball game garbled together. Since the amplitude of an FM signal is constant, the signal strength is constant as long as you and the transmitter stay in the same place. This makes it easy for an FM receiver to pay attention to only the stronger of two carriers at or near the same frequency (known as the capture effect), so the weaker signal is attenuated (diminished) at the receiver and only the stronger signal gets demodulated. This is a really nice property to have in a radio -- remember the problems back in the '20s with stations colliding on the air -- but it comes at a cost: FM requires more bandwidth than AM.

Rather than try to shoehorn FM into the 520 kHz-1610 kHz AM band, the FCC originally decided to put it in the VHF (Very High Frequency) part of the spectrum -- originally 42-50 MHz, later 88-106 MHz, and eventually the 87.8-108 MHz that it is today. That's nearly 20 times the allocation available to AM -- and for good reason, since each FM channel is 200 kHz wide, as opposed to the mere 10.2 kHz bandwidth per channel of AM. But that's only 101 channels. It was a lot back in the early days, but the spectrum filled up quickly, and broadcasters rapidly figured out that spectrum real estate was an incredibly valuable resource. So did the FCC. Licenses to operate a radio station are sold at auction, and the process is expensive and complicated. (As a concrete example, 122 licenses across the country are going up for sale this September 9th. The lowest opening bids are $1500 apiece, for stations in Peach Springs, AZ [pop. 600], Oak Grove, LA [pop. 2174], Rocksprings, TX [pop. 1285], San Isidro, TX [pop. 270], and Spur, TX [pop. 1088]. At the high end, $200,000 apiece, we've got stations in Lamont, CA [pop. 13,296] and Murrieta, CA [pop. 44,282]. So this should give you some idea of just how much Clear Channel has had to shell out for its 900-some stations in markets nationwide, both large and small.)
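Same back-of-the-envelope math as before, if you want to check the channel count for yourself:

    fm_band_hz = 108_000_000 - 87_800_000    # 20.2 MHz of FM spectrum
    channel_width_hz = 200_000               # 200 kHz per FM channel
    print(fm_band_hz // channel_width_hz)    # 101 channels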

If you've read this far, you may be wondering what happened to all the information theory -- we started out so well, with Shannon and Nyquist and everything! Well, this is what Shannon had to work with back in 1948: analog transmission over channels that could easily be polluted with crosstalk and environmental interference (e.g., weather). Building on Nyquist's work, he wanted to formally represent the notion of how much information could reliably be transmitted over a channel, with or without noise -- and in order to do that, he first had to characterise what information is.

After Shannon's groundbreaking work, engineers suddenly had the tools to figure out ways to represent information so that it could be transmitted more reliably, e.g., error-correcting codes. Also -- and this is the important part with respect to the FCC -- they had the tools to figure out how to interleave channels over the same carrier, thereby exploiting a single carrier frequency to transport multiple independently tunable channels.
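To give a flavour of what "multiple channels on one carrier" means, here's a toy numpy sketch that stacks two audio channels onto different subcarriers before modulating a single carrier. This is only the general idea -- the frequencies are invented and it doesn't correspond to any particular broadcast standard:

    import numpy as np

    fs = 1_000_000
    t = np.arange(0, 0.01, 1 / fs)

    channel_a = np.sin(2 * np.pi * 1_000 * t)    # program A audio
    channel_b = np.sin(2 * np.pi * 1_500 * t)    # program B audio

    # Shift channel B up onto a 38 kHz subcarrier; channel A stays at baseband.
    baseband = channel_a + channel_b * np.cos(2 * np.pi * 38_000 * t)

    # The composite baseband then modulates one carrier (FM-style here).
    f_carrier, f_dev = 100_000, 75_000
    fm_out = np.cos(2 * np.pi * (f_carrier * t + f_dev * np.cumsum(baseband) / fs))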

Tune in next post, where we'll talk about the history and future of television, multiplexing, multiple-access protocols, software-defined radio, and some possible futures for the broadcast spectrum -- and the role information theory plays in all of them.

Comments are open -- have at.
maradydd: (Default)
Short notice, I know, but this morning at 9am PST/noon EST, Mackenzie Cowell (founder of diybio.org) and I will be on a talk radio program, The Food Chain, discussing the emergent biohacking movement and its possible effects on food. You can listen in on a number of AM radio stations, or over the Internet, and the audio will be available as a podcast later.

Come join in the discussion!

ETA: Well, that went reasonably well -- I was a little startled at first to find out that the station was a FOX News affiliate, but no one called for our heads on platters or anything. I'll post a link to the podcast when it's up, and y'all can listen to Mac sounding extremely intelligent and level-headed, Sandra Porter being cautiously enthusiastic, and me waxing far too rhapsodic about molecular computing and epigenetics. ;) Some day I'll remember not to get carried away...
maradydd: (Default)
Dear Congress,

What part of the Sixth Amendment do you not fucking understand?

Also, CNN: that is not a bill for detainee trials, that is a bill against detainee trials, and seeing as how the last time habeas corpus was suspended on anything remotely resembling this scale it was the Civil War, the rejected Specter amendment merits a metric shit-ton more than just a "highlight of this article". The fact that the Specter amendment even had to be proposed in the first place -- that Congress would propose and approve a bill that flat-out removes the Sixth Amendment rights of anyone even suspected of a certain class of crime -- is the real news here. You fucking traitors.

Much hatred,
Meredith

(I mean, seriously. The thing that's really fucking terrifying about this little maneuver is, how do you appeal on Constitutional grounds a decision that's made in a secret court where you don't even get told the evidence against you? Appeals happen because of procedural errors in the lower courts or inconsistencies in the law, but when both the laws in question (see Gilmore v. Gonzalez) and the court's operation and decisions are sequestered away from the public, there is no way to challenge these decisions, full stop, because you are not given anything that you can challenge.)
maradydd: (Default)
Several folks took me to task for the "stuff like highway funding" remark in my last post, so I decided to do a little digging and find some real numbers about how federal expenditures by state break down. Conveniently, the US Census Bureau produces an annual report, Federal Aid to States, which covers all this stuff and explains it with lots of tables and pie charts and things. Let's take a look at how Uncle Sugar gives his kids their allowance, hmm?
Warning: lots of boring statistics ahead
maradydd: (Default)
I generally prefer to absent myself from the morass of meta-journalism that is discussion about the blogosphere itself, but that said, if you want to read an incredibly patronising article, you don't have to look a lot farther than this Eric Engberg op-ed.

The editorial focuses on the shitstorm of commentary that swept the web on 2 November, natch. Certainly, a lot of that was wishful thinking, a lot of it was misinformation, and a lot of it was just flat-out wrong. That's fine, because it's all true. What gets under my skin, though, is stuff like the following:
While out on the campaign trail covering candidates, my own network’s political unit would not even give me exit poll information on election days because it was thought to be too tricky for a common reporter to comprehend. If you are standing in the main election night studio when your network’s polling experts start discussing the significance of a particular state poll, you the reporter will hear about three words out of one hundred that you will understand. These polls occur in the realm of statistics and probability. They require PhD-style expertise to understand. The people who analyze them for news organizations, like the legendary Warren Mitofsky and Martin Plissner at CBS News -- have trade associations like doctors do to certify their work.
First of all, never you mind that a binomial distribution absolutely does not take a PhD to understand; it's standard fare for the latter half of your average undergrad Stats 101, and I can explain it to a high school student of above-average intelligence such that he'll remember it when he gets into Stats 101. That isn't the point at all. The point to which I object is Engberg's attitude that because We the People aren't certified to deal with these Scary Data, we shouldn't be allowed to put our grubby little hands on them at all.

Well, you know, the vast majority of We the People aren't going to grok most of what goes into the EnsEMBL genomics database, or the reports and data on the Center for Army Lessons Learned, but it's all up there for anyone to take a look at. You want to see some data that could be outright dangerous if used irresponsibly, paw through some of the stuff on CALL; there are POIs in there that can get you killed if you're not observing proper safety precautions. Them's the breaks; you pick the information you want and how you want to use it.

Engberg continues:
When you the humble reporter are writing a story based on the polls you need one of these gurus standing over your shoulder interpreting what they mean or you almost certainly will screw it up. There is a word for this kind of teamwork and expertise. It’s called "journalism."
Now, I'll absolutely concede that it is the responsibility of people who provide information to others to double-check that what they're putting out is correct, and part of that responsibility includes consulting expert resources before running one's mouth. (It's also especially amusing that this sort of high-horsery is coming from CBS, given the colossal fuckup that was the Bush National Guard Documents scandal.) But there's also a flip side of the coin: when people screw things up, they are expected to print retractions. This happens in blogs all the time; this happens in print media as well, but I don't even have to invoke my expertise as someone whose job it was for several years to proofread the laid-out pages of a major metropolitan newspaper to remind you that the print media usually do their damnedest to bury retractions in the tiniest print they can get away with.

I'll even argue that in blogs, particularly in political blogs talking about transitional situations like elections, the truth will out not only because people call each other out (as was the topic of much discussion after Rathergate), but because transitional situations come to an end and everyone finds out What Really Happened all at once. This leads directly into a facet of the blogosphere that Engberg is utterly glossing over: its time scale is radically compressed from that of print or even TV journalism.

With one exception: sports.

The blogosphere allows for a play-by-play of what's going on from moment to moment, just like the commentators in a football game describing every action on the field for the loyal listeners back home. So what if the commentators point out that the Cowboys are up by a field goal at one point in the first half, but they end up losing the game anyway? That doesn't change the fact that, say, from the field goal at 7:34 pm until the Texans scored a touchdown at 7:52, the Cowboys were ahead. Likewise, if Wonkette points out at one point that Kerry is up 52-47 in Ohio -- which he was, because I was one of those no-life dorks hitting Reload on cnn.com all fucking night of the election, until I got sick of it and went off to grade papers -- that isn't changed by his ultimately losing the election. Engberg seems to be of the opinion that blog readers are looking for Gospel Truth and receiving, at best, half-truths and, at worst, lies, damned lies and statistics. I submit that Engberg misunderstands what we're interested in. Journalism of the type he describes can indeed provide a slice of what's going on all over the country, but it must wait until long after the fact to do so. Those of us who are interested in a truly up-to-the-minute assault of information understand that we're going to have to take it with a shakerful of salt; that seasoning is the price we pay for a slice of life that we can get from blogs.
