Monday, December 31, 2012

Easy Rules for Improving Your Writing

Never be afraid to stomp on something to death and start over. In fact, get in the habit of doing it.

Don't keep a long, awkward sentence in hopes of reworking it. There's no sense polishing a turd.

Write multiple variations of the same sentence. Choose the best one.

Sentences that begin with a gerund (an -ing verb acting as a noun): Start over. Let verbs be verbs, not nouns.

Sentences that start with "There is" or use a "There is . . . that" construction: Cremate at once.

Sentences that begin with a subordinate clause (one that doesn't contain the subject of the sentence) followed by a comma: Toxic. Keep in a sealed lead box.

Sentences in which the subject is far removed from the predicate, or located near the end of the sentence: Junk. No cash value.

Sentences that begin with an adverb or adverbial clause, followed by a comma: Seldom the best thing to do.

Sentences with more than one subordinate clause: Puts a huge workload on the reader. Chunk it up. Write it as more than one sentence.

The use of possessive pronouns (such as whose) with inanimate objects: Amateurish. Don't say "The brands whose prices are going up will be announced each week." Say: "Price increases for brands will be announced weekly."

"Effective" or "effectively": Whenever you say something like "effective marketing" (or "writing effectively") ask yourself as opposed to what? Isn't the effectiveness of what you're proposing already implied? (It sure as heck better be.)

Eliminate phrases like "the fact that" or "based on the fact that" or "due to the fact that." They're never needed. Show causality with "because" or "due to." Better yet, make causation implicit in what you're saying.

Very is overused and thus weak, not strong. Rather than strengthening something with very, you're often weakening it. Try exceedingly, extraordinarily, astonishingly, etc.

Avoid weak modifiers like "somewhat," "rather," and "fairly." Make definitive statements.

Semicolons; avoid.

Ellipses . . . ditto.

Watch out for exclamation points!

And above all, don't not avoid double negatives.

 
Tomorrow's post is special: In it, you'll see how 850 words of clear, direct prose can result in a Nobel Prize. (Yes, a Nobel Prize.) Come back tomorrow to read the whole incredible story.

Sunday, December 30, 2012

On the Need for Shorter Sentences

Want to make your writing easier to read? Stay away from long sentences, and vary your sentence lengths.

The old rule of thumb about sentence length used to be: If you can't read a sentence aloud in one breath, it's too long. In my opinion (which is all that counts here; it's my blog), that rule is off by a factor of two. You should be able to read aloud any two consecutive sentences without incurring hypoxia.

You'll find it much easier to obey the out-loud rule if you simply vary your sentence lengths. After a long-ish sentence or two, give the reader a break by throwing in a shorter sentence. The shorter the better.

Also vary paragraph lengths.

Combine the two techniques (varied sentence lengths; varied paragraph lengths). Try this easy experiment: Write or rewrite a paragraph to have a super-short opening sentence (say, six words or less). Write or rewrite a paragraph (not necessarily the same one) to end on a super-short sentence. In either case, take note of the short sentence's impact relative to all the other sentences.

A good strategy for simplifying long-ish sentences is to start by eliminating "nice but not strictly necessary" words, then chunk the sentence up into single thoughts. Consider this example:

Because of the fact that a widespread practice of discrimination continues in the field of medicine, women have not at the present time achieved equality with men.

This sentence is grammatically correct. But let's face it; it sucks ass. It lacks impact and sounds like "student writing." Strip out unnecessary words ("at the present time," "because of the fact") and get right to the core meaning. Discrimination continues in medicine; that's one thought. Women have not yet achieved equality with men; that's another thought. Which is more important? To me, the key takeaway is that women have not achieved equality with men. Once somebody tells me that, I want to know why. So don't tell me the why first; tell me the what, followed by the why. "In medicine, women have yet to achieve equality with men due to widespread discrimination."

Have mercy on the reader's brain. Do some pre-parsing for your already overworked reader. For example: When you have a sentence made up of two clauses separated by a comma and a "but," consider splitting the sentence into two sentences. Let the second one begin with "But." Consider:

Statistics show that most people believe aliens have visited earth, but there is no convincing physical evidence for such a belief.

That's grammatically correct. It's also a mouthful. Try something like: "Statistics show that most people believe aliens have visited earth. But there is no convincing physical evidence for such a belief." You've saved the reader an important bit of parsing. Plus, the average sentence length is now 10.5 instead of 21.
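If you want to check numbers like that on your own prose, the arithmetic is easy to automate. Below is a minimal Python sketch; it naively assumes sentences end in a period, question mark, or exclamation point, which is good enough for a quick self-audit.

    import re

    def avg_sentence_length(text):
        # Naive sentence splitter: treats . ! ? as sentence boundaries.
        sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
        return sum(len(s.split()) for s in sentences) / len(sentences)

    before = ("Statistics show that most people believe aliens have visited "
              "earth, but there is no convincing physical evidence for such a belief.")
    after = ("Statistics show that most people believe aliens have visited earth. "
             "But there is no convincing physical evidence for such a belief.")

    print(avg_sentence_length(before))  # 21.0
    print(avg_sentence_length(after))   # 10.5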

I'm pretty sure a math geek could prove quite easily that the amount of effort required to understand a sentence grows exponentially (not linearly) with the number of words or phrases in the sentence. That's because the first thing a reader tries to do, if the sentence is non-trivial, is parse the sentence into least-ambiguous form. As sentence length grows, the number of possible parsings grows out of control because of all the possible permutations of meaning. Eventually, if the sentence gets to be long enough, the reader's head explodes. We don't want that.
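A back-of-the-envelope version of that proof already exists, by the way. The number of distinct ways to bracket an n-word sentence into a binary parse tree is the (n-1)th Catalan number, which grows roughly like 4 to the power n. Grammar prunes most of those candidate parsings, of course, but the worst case is ugly. A quick sketch (Python 3.8+):

    from math import comb

    def catalan(n):
        # nth Catalan number: the number of distinct binary bracketings
        # (candidate parse shapes) of a sequence of n+1 words.
        return comb(2 * n, n) // (n + 1)

    for words in (5, 10, 15, 20, 25):
        print(words, catalan(words - 1))
    # 5 words -> 14 shapes; 20 words -> 1,767,263,190; 25 words -> ~1.3 trillion

Readers aren't computing Catalan numbers in their heads, obviously. The point is simply that candidate interpretations multiply much faster than word count.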

January is Prevent Head Explosion Month (tell your friends), so please, do your best to simplify your prose. It's really not that hard. The alternative is a big fat bloody mess, no matter how you look at it.

Saturday, December 29, 2012

Brain Anatomy and the Difficulty of Writing

I had an Aha moment the other day while thinking about why so many people consider writing difficult (often frighteningly so).

Want to see how hard it is to overcome left-brain language dominance? Name the words' colors out loud without reading the words. This is the so-called Stroop Effect.

The key insight: Brain anatomy is not well optimized to make good writing easy.

We know that most of the language centers are on the left side of the brain. We also know that the left brain is where linear, logical, rule-based thinking occurs. The right brain understands less-linear things like metaphors, idioms, graphical relationships in space, and improvisation. In at least a colloquial sense, we think of the right brain as "creative."

Therefore, the difficulty of writing is partly anatomical. The brain's language centers are proximal to the rigidly logical "linear thinking" parts of the brain. If you're a computer programmer, that's a good thing. If you're trying to write poetry, it's not.

It's not impossible to trick the right brain into becoming more involved in left-brain tasks. My favorite tricks are:

Buy an Etch A Sketch.

Build a Lego puzzle.

Develop an intimate relationship with a yo-yo.

If what you're writing needs accompanying illustrations, work on the illustrations first.

Read some poetry before trying to write prose. Try some e.e. cummings.

Create a pro forma "outline" for your piece in the form of a quick doodle with lots of circles and boxes (a Venn diagram on crystal meth). I like a white-board for this, but the back of an envelope will do just as well.

Find an image that elicits a strong non-verbal reaction in you. Study it. Modify it in Photoshop.

Watch your favorite TED Talk video. Or watch a new one.

Listen to music that relies on improvisation, or at least lack of repetition. (Read up on the Mozart Effect.) My favorite musical works in this regard are the live solo performances of pianist Keith Jarrett. Most of his concert performances are pure improvisation. Some are quite abstract (think Jackson Pollock on piano). If you're not familiar with Jarrett, buy his signature Köln Concert album, and go from there.

Maybe you know of other tricks. Leave them in a Comment below. Thanks!

Friday, December 28, 2012

"Problem Words" in English and How to Use Them

I want to talk about some specific words and usage issues that cause trouble for a great many native speakers of English who should know better. Call these pet peeves, if you must. I don't like to think of them as pets, though. Savage beasts all.

Ironic means that the outcome of something had a distinct quality of unexpectedness to it. But I like to think it means something more. To me it implies that there are (or were) two possible outcomes or interpretations of something, one that's expected but turns out to be wrong, and one that's not expected but actually true. Contrast this with the word paradoxical, which (to me) implies two outcomes that seem to be at odds with one another yet are both demonstrably true. Use paradoxical when there are two true yet seemingly incompatible outcomes. Ironic is less concrete a word and not as widely understood as paradoxical.

Poignant means keenly distressing to the senses and/or arousing deep and often somber emotions. It doesn't mean bittersweet. It can be an outcome of a bittersweet situation, but by itself it does not mean bittersweet.

Decimate means to reduce by one tenth. Never use it to mean "destroy completely." Decimation was (in Roman times) the practice of killing one out of every ten mutineers (or sometimes one in ten prisoners of war), as a means of demoralizing the nine out of ten survivors. The preferred meaning of decimate remains reduction by ten percent. You can use it to mean "reduce significantly," but never use it to mean total eradication.

Irregardless is always incorrect. Use "regardless" unless you want to appear careless or stupid.

Don't say "which" when you mean "that." Example: "The subject which interests me most is philosophy." Use that, not which, in such a sentence. There's a difference between "The crane that was the cause of the accident was demolished" and "The crane, which was the cause of the accident, was demolished." Which should be reserved for clauses set off by commas.

For God's sake learn the difference between it's (a contraction) and its (possessive). The reason people get this mixed up is that the rule for making something possessive, in English, is to add apostrophe-s to the end of whatever it is. So it's natural to think that if you add apostrophe-s to "it," you get a possessive form. Not true, though. The possessive form of it is its.

Learn to use "nor" as the negative form of "or."
In particular, don't use "or" in connection with "neither." Don't do: "Using ain't is neither correct or necessary." The word "neither" here demands that you use "nor."

Try not to use "almost always" or "almost never." It's semantically akin to saying "almost infinite." The words always, never, and infinite are absolute and binary. Something is infinite, or it's not. Something either occurs always, or it doesn't. "Almost" contradicts that absoluteness.

Don't say infer when you mean imply.

Bemused has nothing whatsoever to do with amusement. (Read that again.) It has everything to do with bewilderment or befuddlement.

Peruse means to read carefully, not to skim lightly or read haphazardly.

Who versus whom: My advice? Don't worry about "whom" versus "who" unless you're writing for an audience that cares about such things. It's not always better to use "whom" properly. Using it properly can mark you as a self-righteous pedant! It all depends on the audience. My rule is to always use "who" unless you're convinced the reader will object to its improper use. (And you might have noticed, I don't much care about splitting infinitives.) Most readers won't care. You're writing for most readers, by the way (and not your high school English teacher), aren't you?

And finally:

Literally refers to something that actually happened (or is happening) in reality. It represents the concrete reality of something, not anything metaphoric. There's nothing speculative (nor merely descriptive) about a thing that's literal. "He literally went insane" means the person actually became clinically schizophrenic per DSM-IV-TR #295.1–295.3, 295.90. "He literally went ballistic" means the person had enough momentum to follow a ballistic trajectory through space. "He literally melted down" means the person became hot enough to exceed the melting point of his constituent materials. Don't say literally unless you really mean it.

Thursday, December 27, 2012

Common Writing Mistakes and How to Avoid Them

I want to talk about some tips for streamlining your writing and giving it more impact (as well as making it more "correct," grammatically). These tips address problems that even competent writers have. I catch myself making some of these mistakes. But afterward, I always discipline myself appropriately, for example by withholding extra servings of gruel.

Subject-pronoun agreement is a problem for many native speakers of English. I'm not sure why. It's easy enough to avoid. An example of what not to do: "When you ask a person to help you, they will often refuse." Why is this wrong? The pronoun "they" is a plural form, yet here it refers to a singular "person." It's correct to say: "When you ask a person to help you, he or she will often refuse." Or, if you prefer the plural: "When you ask people to help you, they will often refuse." Do one or the other. Don't mix "they" or "them" with a singular subject.

Avoid the word "very." Instead, use a higher-impact (and/or more descriptive) word like "extremely," "exceedingly," "hugely," "remarkably," "astoundingly," "astonishingly," "massively," etc. The word "very" is overused and thus has little impact. It's supposed to magnify the impact of whatever it's modifying, but you can't increase the impact of something by modifying it with a low-impact word. So just avoid "very" altogether. Think up something more imaginative. Imaginative words improve almost anybody's writing.

Eliminate unnecessary uses of "that." "He knew that it was wrong" can be improved by saying "He knew it was wrong." This may not sound like such a big deal, but if you use "that" needlessly throughout a lengthy piece of writing, you'll find it tends to bog things down. If you're looking for a super-easy way to streamline your writing, start by finding and removing unnecessary thats.

Stay away from "There is...that" constructions. Don't do this: "There are many cars that aren't reliable." Instead say: "Many cars aren't reliable." Why would you want to use seven words to say something that can be said in four words?

Don't put an "-ing" word at the start of a sentence, unless you really know what you're doing. In English, an "-ing" word is (grammatically speaking) either a present participle or a gerund. The difference between a participle and a gerund is that a participle is a verb form used as an adjective, whereas a gerund is an "-ing" verb that serves as a noun. Either way, the brain rebels. Your brain doesn't want to see verbs used as adjectives (nor as nouns). So avoid "-ing" verbs wherever you can. Sometimes you can't avoid them, of course. "Revolving door" uses the participle "revolving" to modify "door," which is fine; the meaning is clear. "Interrupting is rude" uses the gerund "interrupting" as the subject of the sentence. Not bad; it's short, and the brain can parse it okay. But consider: "Paying attention to grammar eliminates mistakes." That's a poor sentence (what's the subject?), as is "Being thin avoids heart disease later in life." Stay with nouns as the subjects of sentences and you'll find that sentences are easier to write, as well as easier for the reader to understand.

Be careful about decoupling the object of a sentence from the predicate. Example of what not to do: "Throw the horse over the fence some hay." The subject of this sentence is an implied "you," the object is "hay," and the predicate is "throw." But that's not how the sentence reads. It reads as if "horse" is the object, which is wrong. Presumably, you want to throw hay, not a horse, over the fence. If you were to say "Throw hay to the horse over the fence," that's still not good, because you're implying that the horse is over the fence rather than that you need to throw hay over the fence. If you actually want somebody to throw hay over the fence, say so: "Throw hay over the fence, to [or for] the horse."

Don't let ambiguity creep into your writing. "The ability to read quickly made him smarter." Does "quickly" modify "made" (quickly made)? Or does it modify "read" (read quickly)? It's ambiguous. Completely reword the sentence if necessary. Try something like "He became smarter because of his ability to read quickly," or (if "quickly" applies to "made") "He quickly became smarter because of his ability to read." Say things in the most unambiguous way possible, even if it means making sentences longer.

And by the way: Much of the time, you can ignore the old rule about not allowing a sentence to end with a preposition. Examples: "That's a subject I know nothing about." "It's nothing to cry over." "That's what the dog sat on." "Do it that way, if you have to." No one but the most pedantic schoolmarm would consider such sentences wrong.

Tomorrow, I want to talk about certain words and usages that cause trouble (yet are easily made right). The words in question are like land-mines waiting to blow big craters in your writing. Ignore them at your own peril.

Wednesday, December 26, 2012

Writer's Block: Getting Past the First Sentence

Suppose you have a writing assignment to do and it's due tomorrow and you're completely blocked. You don't even know where to begin.

Here's how to get started.

First, accept the general strategy that you're going to produce crap first, then make something out of it later. Because that's how writing works, frankly. Everything you've ever read in print started out as something way crappier than what finally got published. Most of what passes for "writing skill" is actually revision skill.

Second, forget about rules. Drop all your inhibitions over grammar, syntax, spelling, vocabulary, use of pronouns (first person, second person, third person), etc., because all that stuff can be fixed later. If you have a brief (80,000-foot-level) outline, fine, but for now take it off the table and hide it somewhere.

Start by writing the following sentence: "The most important thing I'd like to say about [subject] is XYZ." (Fill in the subject and XYZ yourself.)

Again, don't fuss over the fact that you're using first person voice ("I'd like to say"), because that's easily fixed later. There are hundreds of ways to fix it. Here's one: "There are lots of ways to look at [subject]. But probably the single most important thing to note about it is XYZ." Here's another: "Most people think ABC about [subject]. But in fact there are many reasons to take the XYZ point of view. A quick review of the evidence will show why DEF might well be a more worthwhile way to understand [subject]." You can always take yourself out of the discussion. Do it later.

Okay, you've written something, so congratulate yourself. The thing to notice is that once you've captured your main idea in a few words, you can now move in one of two directions. It may well be that your most important point is something you can only get to after first addressing a bunch of other things. In that case, move up to the top of the page (above the sentence you just wrote) and plan on writing downward, until you get to your most important point.

The other way it could go is that once you've stated your most important point, you need to back it up with examples and/or discuss important sub-points. In that case, start writing a new paragraph below the sentence you just wrote and plan on continuing downward toward the bottom of the page.

In one case you're moving toward the main point from above; in the other case you're moving from the main point downward. It may well be that you end up having to do both. But the point is, you've driven a stake into the ground. You have a reference point to work away from, or work toward.

Don't be afraid to state your conclusion first (at the very top of your piece), then, in the next paragraph, back up and explain how you got there. When I feel it's going to take a lot of difficult setup to get to my main point (and then I get all constipated-feeling, because I know the backstory is going to require a ton of well-thought-out explanation), I shortcut the whole process by stating my "punchline" early on, usually in the second paragraph. The first paragraph will state what it is I want to talk about and perhaps give some bullshit justification for why it needs to be talked about. Then, right away, in the second paragraph, I'll say something like "Rather than draw this out, let's cut to the chase. The right way to approach a problem like XYZ is to think about it in terms of ABC." Then I spend the rest of the piece supporting my already-delivered "punchline."

So give yourself permission to introduce topics in any order, including conclusion-first.

It may sound simplistic, but the most important thing you can do when you're blocked is just write something. Laugh, resign yourself to the idea that you're going to produce utter crap, then do just that: Quickly write down a big long list of absurdly simplistic statements about your topic. Or just write a laughably bad first paragraph and pretend you just discovered it under a stack of papers in a mental hospital. Laugh at it. Then move on.


Tuesday, December 25, 2012

Making the Writing Process Easier

Most people (including many professional writers) consider writing difficult. In fact, it's probably one of the most frighteningly difficult things people do in their professional lives, second only to public speaking.

Part of the reason for this is that people acknowledge, I think, on a gut level, that writing is a form of artistic expression, and yet most of us (for whatever reason) are convinced we're not capable of "art." No one is asking you to create art, though, so why impose that expectation on yourself, unless you're writing a sonnet?

Much of the fear of writing comes down to the fear of producing embarrassing crap. But here's something you should always bear in mind. Everything you've ever read, in print, started out as something crappier than what you ended up reading. You've only ever known the glossy polish of the final product. You didn't see the crappy rust-covered underbelly of what came before.

So don't hold yourself to a "final output" standard when you sit down to write. Your first effort may well be crap. But then, so was everyone else's first effort. You just didn't see it.

Always give yourself permission to produce crap. If you don't, you might never get past the first sentence.

Rumination is important preparation for writing, so always give yourself as much time as you can to think about your subject before sitting down to write. Break your topic down mentally into the simplest possible bits. Think each bit through on its own. What would you say in 25 words or less about each bit? Take notes if that helps.

I rarely sit down to write on a topic that's less than 70% to 80% thought-out in advance. (I count something as "thought out" if I am 90% confident that I understand my own unique take on the subject and can write about it in a way that will fool 90% of readers into thinking I know what I'm talking about.) I know I can count on the writing process itself (which is iterative, reentrant, non-linear, and thus organic) to help fill in the missing 20% to 30% of thought-outness, but I never count on (say) 90% of what I'm going to write just "coming to me" as I write. Only 20% or so will "come to me," and most of that in the editing pass, after the crappy first draft is laid down.

I like to think of putting fingertips to keyboard as the last step in a long chain of preparatory actions (research plus rumination). Captured keystrokes are just a static artifact of the dynamic thinking process that went before.

One of the best insights I can give you is that simple thoughts are easier to write down than complex thoughts, so anything you can do to de-complexify your thinking will have a huge payoff when it comes time to write. Stuff that's arduous to write is usually arduous to read. How do you make the arduous easy? Simplify.

If you find yourself completely choked up when you sit down to write, it's either because you're afraid to produce crap, or because your thinking on the subject matter is still muddled. If you're not under a deadline, take more time to think the subject through, except in simpler terms. (You can always make simple stuff complicated later, although I don't recommend it.) If you're under a deadline, sit down and quickly write a bunch of absurdly short sentences on the topic in question, phrasing everything in unacceptably oversimplified terms. I guarantee that if you quickly fill a page with laughably simplistic one-liner statements about a subject, the Fussmaster General inside you will be eager to jump at the chance to cross all that crap out and do a more meaningful job of expressing the same thoughts. In other words, you'll be ready to write "in anger."

One more point and then I'll shut up.

Techniques exist for simplifying the expression of ideas. Most of them revolve around semantic clarity and micro-syntax. Just by avoiding certain types of words (for example, gerunds, which are verbs pretending to be nouns) you can force yourself to write in a simpler, clearer manner. The neat thing is, simple writing always comes out faster (and reads better) than turgid-but-logically-complete writing. The simpler you write, the easier the process, for you and the reader. I'll be talking about some of my favorite techniques in this regard over the next few days.

Monday, December 24, 2012

Contralateral Brain Exercises for Programmers

Lateralization of brain function has been well studied, and if there's one thing nobody disagrees on at this point, it's that the brain does the bulk of its speech and language processing on the left side (in or near Broca's area). The right brain does have its own language comprehension ability (particularly when it comes to understanding metaphors, idioms, and prosody), but in general the right hemisphere lacks any ability to comprehend syntax or grammar. People who suffer trauma to the left brain's speech and language centers develop profound verbal disabilities, whereas damage to the right brain seldom produces profound language deficits.

It's also fairly well accepted that "the left hemisphere has been shown to be better prepared to process information in a more analytical, logical, or sequential fashion," whereas "the right hemisphere more efficiently serves tasks that require the holistic, or simultaneous, processing of nonverbal gestalts and the complex transformations of complex visual patterns" (The Neuropsychology Handbook, Chapter 7).

Writers and programmers deal (intensively and regularly) with large amounts of text; text that deals with sequential logic and conforms to especially rigid rules. This unavoidably brings a huge amount of left-brain usage. Is that a problem? No. But it might mean writers and programmers (and other "left-brain-intensive" folks) could benefit from greater engagement of the right hemisphere, because creative problem-solving requires active participation by both halves of the brain.

If one accepts the notion that programmers are (in work mode, at least) heavily lateralized to favor the left hemisphere, it stands to reason that those of us who deal in code for a living could benefit from contralateral brain exercises (exercises designed to stimulate the less-used side of the brain; the right side, in this case).

What does this mean in practice? If you already have hobbies or activities that engage your right brain, you might want to consider doing those things more intensively and more regularly. For example, if you occasionally play a musical instrument (an activity that requires an exceptional amount of cross-hemisphere coordination), start playing the instrument every day rather than occasionally.

If you like to paint, set aside time each day to paint. Do it intensively and regularly.

If you like photography, take photographs every day, without fail.  

Build up stamina for whatever it is you do that brings your less-dominant hemisphere into play.

If classical music gets your juices going, take a several-minute-long break once every hour or so (or during any natural break point in your work) to listen to classical music. Note that music is processed bilaterally, in a non-straightforward way (Andrade and Bhattacharya, Journal of the Royal Society of Medicine, June 2003, vol. 96, no. 6, 284-287). But the point is, any kind of music processing requires the active participation of the right brain. So if your right brain has been "going to sleep" (not literally, of course) while you've been writing code, you can wake it up again simply by listening to music.

Interestingly, a certain amount of evidence exists that comprehension of poetry depends largely on right-brain engagement. Therefore, if you like poetry, try reading some before you sit down to work every morning.

My favorite exercise is this: Find an image of something that elicits a strongly positive non-verbal reaction in you (something that, just by its appearance, inspires you). It might be a picture of your favorite athlete in a moment of triumph. It might be a photo of some natural wonder (the Grand Canyon, a mountain, a glacier, a forest). It might be a picture of an iPhone. Whatever.

Tape the picture to the corner (or edge) of your monitor in the morning before you begin work. Leave it there, in your peripheral vision, for a while. Maybe all week.

You don't actually have to look at the photo to be affected by it. Its mere presence in the visual field will have an effect on your brain. I'm convinced this is why so many people put pictures of their children (or spouse, etc.) on their desk. You don't put a picture of your child on the desk to remind yourself that you have a child, nor to remind yourself what your child looks like. You already know what your child looks like. You've seen the picture a million times already, in any case. You put the picture there because its mere presence inspires you to perform better.

Try the inspiring-photo-in-your-peripheral-vision technique for a week, and try changing the photo to a different one every now and then. See if it doesn't spur your creativity. It works for me. Let me know if it works for you.





Sunday, December 23, 2012

Interface Design Lessons from the Aerospace Industry


F-111A cockpit.
Expand your mental image of what a "device" is, for a moment.

I'm sure you'd be willing to agree that a fighter jet is an extremely complex, high-functionality device. Yet it has to have a usable human interface (or else it's not worth much). How does one provide a highly usable interface for such a complex "device"?

In the 1960s, the way to do it was as shown in the accompanying photo of the cockpit of a General Dynamics F-111A. You don't have to be a pilot to appreciate the fact that the "user interface" to the F-111A was (let us say) intimidating in its complexity. Is such an interface usable? Apparently it was. Over 500 of these aircraft flew, with the cockpit design shown in the photo.

F-22 Raptor cockpit.
Fast-forward to 2005, which is when the Lockheed Martin/Boeing F-22 Raptor went into service. The F-22A has a useful load (max weight minus empty weight) of about 40,000 pounds, which is essentially the same as for the F-111A. In almost all other respects, the planes are miles apart. The F-22A has vastly greater capabilities than the F-111A; so much so that the two airplanes shouldn't really be compared. But the point is, the F-22, despite being a much more sophisticated and capable aircraft than the F-111A, has a much simpler human interface (see photo).

What happened between 1964 and 2005? Human factors research happened.

First, human factors experts realized that anything that could be considered a visual distraction is a potential safety hazard. Therefore, consolidate (and hide, if you can) as many doodads as possible. Naturally, the advent of processor-driven electronics made it possible to integrate and automate many of the functions that were previously exposed to the crew, thus reducing the overall interface footprint. A good example is Full Authority Digital Engine Control (FADEC) technology. In the F-111A, pilots had to monitor half a dozen engine gauges showing turbine inlet temperature and other indications of engine status. In a modern jet, a computer monitors and regulates such things. The pilot doesn't need to get involved.

An important feature of modern cockpits (which has little or nothing to do with technology per se) is that important items in the interface (e.g. display screens) are made larger than less-important elements. Human factors experts quickly realized that the worst possible thing to do is simply make all gauges (or groups of gauges) a "standard size" regardless of importance.

Advances in digital display technology made it possible to consolidate data from multiple gauges onto one display surface (which might have several display modes). This also reduces footprint, even though the biggest "gauges" (which are now screens) have actually gotten bigger.

Yet another outcome of human factors research was (is) that color displays are easier for the brain to parse than a wall of black-and-white data. Likewise, graphical visualizations of data (if done correctly) are easier to comprehend than numeric representations.

The overall principle at work is, of course, that of simplification combined with functional organization. To fly an aircraft, the pilot needs to have flight information (airspeed, altitude, rate of climb or descent, angle of bank and/or rate of turn, heading), navigational information (terrain information, aircraft position, some indication of desired course and deviation from desired course, distance to waypoint), and the ability to operate avionics. (Avionics include radios used for communication; transponders to make the aircraft show up on ground radar; navigational receivers, such as GPS, and aids to navigating the glidepath to a landing; weather-avoidance gear; and autopilots.) In a military aircraft, the only major additional functional group is the fire-control (weapons management) system. So in other words, the major functional groups are few in number: They include flight information; navigational information; avionics; and fire control. Within each of these groups, you apply the standard principles of consolidation, appropriate visualization, and size differentiation (important things bigger, less important things smaller).

All of these principles can be adapted to GUI-intensive software. It's up to you (and your human factors experts) to decide how.



Saturday, December 22, 2012

More Mantras for Software Professionals

See also yesterday's post.


First impressions count.

Being a category leader has no meaning if existing products are crap.

Never aspire to be best-in-category. Define a new category.

Never compare yourself to the competition. Your only competition is yourself.

Never borrow someone else's bar and set it higher. Design your own bar.


Refactor your thinking so you don't have to refactor code.


Good ideas are overrated. Good implementations are not.

Complexity cannot be made pretty.

A pig with lipstick is still a pig.


Excellence cannot be retrofitted.

If something's not right with your engineering culture, customers will notice.

There are no hard problems, only problems that aren't well defined.

Learning is the inevitable outcome of making mistakes, fixing them, and not repeating them.


If you aren't making mistakes, you're not doing it right. 

You can always do better.


If you found this post worthwhile, please tweet it, Digg it, Reddit it, or share the link with a friend. Thanks!

Friday, December 21, 2012

Mantras to Live By in the Software Biz

Breakthrough ideas have no superclass.

Excellence is not an add-on.

Mediocrity is built in increments.
 
Even the stupidest do-nothing feature was somebody's "requirement."

Requirements often aren't.


Your goal isn't to meet a set of requirements but to change someone's world for the better.

Excellence isn't the same as sucking less. 

The rearview mirror is not a navigational device.

True progress occurs in quantum leaps, not by interpolation.

Creativity has an aspect of unexpectedness, not just originality.

Incremental build-out is not innovation. 


A product can meet all of a customer's needs and still be a terrible product.

Don't even entertain the idea that you've done something insanely great until large numbers of users have told you so to your face.

Even if you did something insanely great last year, last month, or last week, don't assume you're doing something insanely great right now.

Don't build what customers tell you they want. Build what they don't yet know they need.

Don't let customers design your product. They're not design experts.


Tell Marketing to shut up already. 


If you enjoyed this post, please tell a friend. And come back tomorrow for more mantras. Thanks!

Thursday, December 20, 2012

Going on a Software-Design Feature Fast

I advocate that software makers take a hard look at why and how so many features have made their way into their products. The process by which non-core functionality enters a product is more important (obviously) than the sheer number of features.

Software makers should also reevaluate the process by which a feature becomes "required" and what it means for a feature to be "required."

I've been in tech for decades, and I've never yet encountered a software product that didn't contain at least one totally useless feature, a feature no one ever uses: the equivalent of the Scroll Lock key on a modern keyboard. The important point to note is that all software features, even the most obscure and/or useless ones, got into the product as a result of somebody's "requirement."

I propose that software makers go on a "feature fast" until the feature-addition process is not only well understood but re-imagined. (Let Marketing be a stakeholder in this process, but let it be only one of many stakeholders. Not the majority stakeholder.)

Until then, I offer the following exercises for purveyors of commercial software:

1. Implement in situ analytics (inside-the-app analytics) so that you can understand how users are spending their time when they work with the product. (A minimal sketch of this idea follows the list.)

2. Find out (via built-in analytics) what the least-used feature of your product is. Get rid of it.

3. Repeat No. 2 for another 100 features. Replace them with API methods and helpful tooling (an SDK). Charge no money for the SDK.

4. Have you ever added an obscure feature because an important customer asked for it? If so, consider the following: Did you make the sale? Did the sale of the product actually hinge on that one feature? (Hopefully not. Hopefully the product's core functionality and reputation for excellence made the sale.) Five years later, is that customer still with you? Are they still using the feature? If not, why are you continuing to code-maintain, regression-test, document, and tech-support a one-off feature that's no longer needed?

5. Of all the UI elements that are in the user's face by default, find which ones are least-used. Of all the UI elements that are not readily visible, find those that are most-used. Consider ways to swap the two.

6. Try to determine how many features are in your product (develop your own methodology for this), then determine how many features are used by what percentage of customers. (When you have that data, visualize it in more than one way, graphically.) When you're done, ask yourself if you wouldn't be better off, from a resource allocation standpoint, if you stopped working on at-the-margin features and reinvested those dollars in making core features even more outstanding.

7. Obtain (via real-time analytics) a profile of a given user's favorite (or most-used) features and preemptively load those into memory, for that particular user, at startup time. Lazily load everything else, and in any case, don't single-task the entire loading process (and make the user stare at a splash screen). The preferential loading of modules according to a user-specific profile is essentially the equivalent of doing a custom build of the product on a per-customer basis, based on demonstrated customer needs. Isn't this what you should be aiming for?

8. Find out the extent to which customers are using your product under duress, and why. In other words, if your product is Microsoft Word, and you have customers who are still doing a certain amount of text editing in a lesser product (such as Wordpad), find out how many customers are doing that and why. Address the problem.
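Exercise No. 1 can start embarrassingly small. Here's a minimal Python sketch of in-app usage counting; the feature names are invented, and a real implementation would persist the counts or ship them to an analytics endpoint:

    from collections import Counter
    from functools import wraps

    usage = Counter()  # feature name -> invocation count

    def tracked(feature):
        # Wrap a feature's entry point so every invocation is counted
        # (exercise No. 1). Real code would batch counts to a server.
        def decorate(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                usage[feature] += 1
                return fn(*args, **kwargs)
            return wrapper
        return decorate

    @tracked("word_count")            # hypothetical feature
    def word_count(doc):
        return len(doc.split())

    @tracked("mail_merge")            # hypothetical feature
    def mail_merge(doc, recipients):
        return [doc.replace("<name>", r) for r in recipients]

    def least_used(n=100):
        # Removal/SDK-demotion candidates (exercises No. 2 and 3).
        return usage.most_common()[:-n - 1:-1]

With data like that in hand, exercises No. 2, 3, 5, and 6 become queries instead of debates.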

In tomorrow's post, I'm going to list some favorite software-design mantras that all people involved in building, testing, documenting, supporting, or marketing software products can (I hope) learn something from. Don't miss it.



Wednesday, December 19, 2012

GUI Surface Area and Its Implications

I've been talking a lot about feature richness as if it's a measure of product complexity, which it might not be. What I care about, in any case, is not feature count per se, nor complexity per se, but a product's perceived utility and ease of use.

For matters involving "feature count," it may actually be more useful to talk about total GUI surface area. After all, features often equate (in at least a rough sense) to clicks on controls of various sorts: push buttons, radio buttons, checkboxes, menu selections, color pickers, calendar controls, etc. In some sense, feature count and GUI surface area go hand in hand.

How to calculate GUI surface area? Dialogs (and other UI elements) tend to grow in proportion to an app's functionality, so why not just add up the actual screen real estate consumed by all the dialogs, toolbars, palettes, tabs, and menus in the product (in "pixels squared"), and call that the UI's surface area?
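In code, the tally itself is trivial; the work is in enumerating the elements. A toy sketch (the element names and pixel dimensions below are invented for illustration):

    # Sum the on-screen pixel area of every UI element the product can
    # present. Element inventory and dimensions are invented.
    elements = [
        ("main menu bar",    1280,  24),
        ("format toolbar",   1280,  32),
        ("paragraph dialog",  480, 360),
        ("styles palette",    240, 600),
    ]

    surface_area = sum(w * h for _, w, h in elements)
    print(f"total GUI surface area: {surface_area:,} square pixels")
    # -> total GUI surface area: 388,480 square pixels

Run that inventory against a real product, dialogs and sub-dialogs included, and the number gets large in a hurry.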


I offer, without further proof, the conjecture that a program's perceived complexity is related in some suitably subtle way to a program's total GUI surface area.

I also contend that the bigger a product's total GUI surface area, the smaller the user is made to feel.

Moreover, if a product's total functional surface area far exceeds a customer's actual use-case requirements, an unavoidable impression of waste is conveyed. More than that, the customer might very well infer that the product came from a culture of waste, an engineering culture that doesn't value efficiency. That's a devastating assumption to let take root in a customer's mind.

Do you really want a customer to feel he has paid good money for unnecessary functionality? Ever?

If you know that eighty percent of customers will only ever use twenty percent of the software's features, do you really want to brag, in your marketing, about the extravagant excess of functionality in your product?

Isn't it more important to be able to emphasize the inarguably superior nature of the product's core functionality?

Shouldn't non-core functionality be non-core to marketing dialogs until the customer demands otherwise?

In tomorrow's post, I'll offer constructive suggestions for software makers; ideas that can actually be implemented and tested. Why argue about ideas when you can test them?

Tuesday, December 18, 2012

Modal GUI Elements Are Creativity Sappers

Why does OpenOffice Writer force me to visit a modal dialog to adjust header properties? Why can't I right-click inside the actual header, on-the-page, to make these adjustments? A modal UI takes me away from my work.

Lately I've been trying to rethink the assumptions behind user interfaces, particularly the UIs of "creativity-oriented" applications.

One exercise I've found useful is to take notice, as you work with your favorite application(s), of how much time you spend working with dialogs, menus, palettes, etc. versus how much time you spend working on the document itself at a low level.

Any non-trivial GUI-driven application has at least two different levels of GUI. There's a high-level interface and a low-level interface. In a word processor, low-level operations (and corresponding interfaces) are ones that have you operating directly on text with the keyboard and/or mouse, without the aid of dialogs. So for example: entering new text, selecting portions of text, copying and pasting text, applying fonts, applying styles to fonts (italic, bold, etc.), scrolling, deleting text, and using Undo. All of these are core low-level operations. The UI for these operations doesn't take you away from your work.

In an image editor, low-level operations are generally ones that involve dragging the mouse on the canvas. When you are doing things like selecting a portion of an image, transforming an active selection (via shear, rotation, scaling), or drawing shapes by hand, you're operating directly on the canvas with mouse drags.

An app's high-level GUI consists of anything that has to be done in a modal dialog, a menu system, a wizard, or anything else that doesn't directly involve a low-level operation.

Here's the important point. Anything that takes you away from the low-level interface (for example, any operation that takes you immediately to a modal dialog) is taking you away from your work. It's an interruption to the workflow and an impediment to getting work done, not because such diversions steal precious time, but because they steal from you something far more precious than time: namely, creativity.

Modal GUI elements interrupt a user's concentration and interfere with inspiration. This is a serious issue if your customers are creative individuals working in a creativity-oriented application.

If you look at how Adobe Photoshop has evolved from Version 1.0 to the present day, one of the most noticeable changes is in how many non-modal GUI elements have appeared in the workspace (and how easy it is for the user to choose which elements appear, via the Window menu). It's because non-modal elements like tool palettes and layer pickers are nowhere near as disruptive as modal elements. They let you stay "close to the work."

An application like Adobe After Effects makes the point even clearer. Here, you have a program in which an immense number of features have been realized in non-modal GUI elements. It's an important issue, because when you're doing something as complex (and creative) as offline video editing, you can't afford to have your creativity interrupted by frequent detours into modal dialogs.

Some "creativity" programs go the wrong way and implement the majority of GUI elements in modal (rather than non-modal) fashion by default. An example is OpenOffice. To do something as trivial as view a document's word count in OpenOffice Writer means making a detour to a modal dialog.

What's the main takeaway? Modal UI elements (dialogs, menus and sub-menus, wizards) take the user further from the work document. And that's always a bad thing. It's time-wasteful and saps creativity. Non-modal interfaces keep the user close to the content, at the risk of UI clutter. (The answer to the clutter problem is to put the user in charge of how much real estate to devote to non-modal UI elements at any given time.)

In tomorrow's post, I'll talk about GUI surface area and what its implications are for usability.

Monday, December 17, 2012

More Thoughts on Feature Richness

A classic example of rampant feature excess and poor UI design: Eclipse.

As you know if you've been following my previous posts, I've been thinking a lot, lately, about feature richness in software. What does it mean, to the user, to have a feature-rich product? When are additional features really needed? Is it possible to have too many features? Is there a "sweet spot" for feature richness (or a point of diminishing return)? Is it possible to build too many features into a product, or is that question best recast in terms of how features are exposed via the UI (or perhaps the API)?

Fair warning: I offer many questions, but few answers.

My gut tells me that more often than not, feature richness is a curse, not a blessing; an embarrassment, not something for Marketing to be proud of.

When a customer sits down to use a product and he or she notices an excess of functionality, it conveys an impression of waste. It suggests that the maker of the product willingly tolerates excess and doesn't understand the less-is-more aesthetic.

From a purely economic point of view, a customer who sees an excess of functionality wonders why he is being forced to spend money on things he might never need.

The customer might also get the impression (without even trying the product!) that the product is hard to learn.

For these and other reasons, people involved in software design should be asking themselves not how feature-rich a product should be, but how feature-spare.

Does anybody, at this point, seriously question that it is more important for an app to do a small number of mission-critical things in a superior fashion than to do a larger number of non-mission-critical things in acceptable fashion?

I have two word processing applications that I use a lot. One is OpenOffice Writer; the other is Wordpad. The former is Battlestar Galactica, the latter Sputnik. Ironically, I often find myself using Wordpad even though I have a much more capable word processor at my disposal. I use Wordpad to capture side-thoughts and sudden inspirations that I know I'll come back to later, when I'm further along in the main document. These side-thoughts and incidental epiphanies are sometimes the most creative parts of whatever it is I'm writing.

It's ironic that I will use the most primitive tool available (preferentially, over a much more powerful tool) when capturing my most creative output.

I don't think I'm alone in this. I'm sure a lot of programmers, for example, have had the experience of writing a Java class or JavaScript routine in Notepad (or the equivalent) first, only to Copy/Paste the code into Eclipse or some other heavyweight IDE later.

Why is this? Why turn to a primitive application first, when capturing fresh ideas and "inspired" content?

Speaking for myself, part of it is that when I'm having a peak creative moment, I don't have time to sit through the ten to thirty seconds it might take to load OpenOffice, Eclipse, or Photoshop. Creative moments have an exceedingly short shelf life. Any speed bumps on the way to implementing a new idea are creativity-killers. An app that loads in one second or less (as Wordpad does) is priceless.

But that's not the whole explanation, because quite often I'll turn to Wordpad even when OpenOffice Writer is already running!

I think that's because when I'm having a peak-creative moment, I don't want any distractions. I want to work close to the document, with no extraneous features distracting me or slowing me down in any way whatsoever. I don't want to be tempted to choose a different font, reset margins and tabs, redo paragraph formatting, worry about spellcheck, etc., while I'm capturing my most important thoughts. Just knowing that extraneous features exist slows me down.

Also, I find I often need more than one clipboard instance. I'll often open multiple Wordpad windows just to cache certain text passages until I can figure out whether or not I want to use them (and in what order) in my main document.

I'm sure there are other, deeper reasons why I turn to lightweight programs before using supposedly "superior" heavyweight alternatives. The fact that I can't articulate all the reasons tells me the reasons probably run quite deep. Software makers, take note.

I'll say it again: an application that has a large excess of features is a liability, both to the customer and the company that makes the software.

The larger the number of things an app is capable of doing, the more likely it is the user will:
  • be frustrated with program load time
  • feel intimidated by the product
  • need to consult documentation
  • call the help desk
  • spend money on third-party books and training
  • forget how certain features work (and spend time re-learning how to use those features)
  • feel pain at upgrade time, when menus and palettes and dialogs and prefs and workflows are "improved" over the last version of the software, requiring yet more (re)learning

Bottom line, when it comes to feature richness, more is not better. More is less. Sometimes a lot less.


Friday, December 14, 2012

When Is a Program Too Feature-Rich?

In yesterday's post, I posed a bunch of really hard human-factors questions. What got me thinking about all that was the simple question: When (if ever) is a program too feature-rich?

Maybe it's not possible for a software system to be too feature-rich. Perhaps it's all a question of how features are organized and exposed. After all, a laptop computer (in its entirety: operating system, drivers, software, everything) can be considered a single "software system"—a single meta-app, with various child components having names like Chrome, Microsoft Word, Photoshop, etc. Imagine how many "features" are buried in a laptop, if you include the operating system plus everything on down (all software applications of all kinds). We're talking hundreds of thousands of features, total. And yet, users manage, somehow, to cope with all that complexity. Or maybe I should say, users try like hell to cope with it. Often, it's a struggle.

Given that people still do buy enormously complex "software systems" (and manage to cope with them, to the point of making them worthwhile to own), maybe something like total feature count doesn't matter at all, in and of itself, where usability is concerned.

Or does it? There are still people in this world who are afraid to use a computer (and/or terrified to use a smart phone or an iPad), either because it's "too hard," too apt to make the user feel stupid, or whatnot. Those of us who do use such devices daily tend to chuckle at the fears of the computer-illiterate. We tell them "There's nothing to be afraid of" and then expect them to get over their fears instantly. When they don't, we scoff.

But really, should we be judging the user-friendliness of a software system by how easy it is for the majority of users to adapt to it (often with a certain amount of pain and difficulty)? Or should we (instead) be judging a system's usability by the number of people who are afraid of it?

Why shouldn't we be designing systems for the computer-fearful rather than for the computer-literate?

It's easy to say that something like total feature count doesn't matter as long as the software's (or device's) interface is good. The problem is, it's never really very good.

I consider myself a fairly computer-literate person at this point. I've written programs in assembly language for Intel- and Motorola-powered machines. I can read and write C++, JavaScript, Java, and (under duress) a few other programming languages. I've written plug-ins or extensions for a dozen well-known desktop programs, and I have seven software patents to my name. But there are still software systems in this world (mostly enterprise) that make me feel stupid.

If someone like myself feels stupid when confronted by a certain device or software system, isn't that an indictment of the software (or device)? Or do I deserve to feel stupid, since thousands of other people are able to get work done using the same software?

If there are people in this world who don't know how to set the time and date on a coffee maker, isn't that an indictment of the coffee maker?

If someone can't figure out how to operate a cable-TV channel changer, isn't that an indictment of the device itself?

I don't have hard and fast answers to these questions. But I think it's fair to raise the questions.

I'll go further and say: Every user (or potential buyer) of software, or software-powered devices, should definitely raise these questions.

Also: Every company that designs and sells software, or software-powered devices, needs to raise these questions.

So raise them, I say. If you're a software (or device) maker, start the necessary dialog within your company and put together a strategy for dealing with these issues. Get users involved in the discussion. Come up with a trackable plan for implementing possible solutions. Then follow up with customers to see whether the solutions actually accomplish their purpose.

And if you're a customer? Ask yourself how smart or how stupid you feel when using a given product. And then, if you have a choice, vote with your wallet.

Thursday, December 13, 2012

Hard Human Factors Questions

A cascading menu in Firefox. (An example of GUI 1.0 design.)


I'm not a human factors expert (therefore I could easily be wrong on this), but it seems to me that where GUI-driven applications are concerned, certain fundamental human factors questions have either been overlooked or not investigated fully. For example:
  • How many features can you pack into a program before you reach some kind of usability limit? Are there any fundamental usability limits relating to feature count, or can feature count go on forever? 
  • What does it mean to have a product with ten thousand features? What about a hundred thousand features? Can such a product be considered "usable" except on a superficial level?
  • For a program with thousands of features, what's the best strategy for exposing those features in a GUI? Need features be hidden in some hierarchical manner, where the most-used features are easiest to get to, second-tier features are next-easiest to get to, and so on, until you reach the really obscure features, which are presumably hardest to drill down into? Or is that kind of model wrong? Should all features be treated equally? Should the user be in charge of exposing the features he or she wants to expose (and be able to choose how they're exposed)?
  • How does feature richness relate to user satisfaction and/or "perceived usability"? Is it all just a matter of good GUI design? What metrics can one use to measure usability? 
  • In analyzing a program's GUI, has anyone ever created a complete command-tree for all UI elements (down to individual dialog-control level), in some kind of graphical format, and overlaid a heat map on the tree to see where users spend the most time? (I sketch the idea in code just after this list.)
  • Are current GUI motifs (menus, submenus, menu commands, dialogs and sub-dialogs, standard dialog controls, wizards, palettes, toolbars with icon-based commands) adequate to meet the needs of today's users? How adequate? Can we even measure "how adequate" with meaningful metrics?
It seems to me that most of the original thinking on these matters was done thirty years ago or so, with the advent of Apple's Lisa and Macintosh computers (plus earlier work at Xerox PARC), and we've been stuck in the world of GUI 1.0 ever since.

So that brings up yet another question: Is anyone working on GUI 2.0? (If so, who?) I would put touchscreen gestures in the GUI 2.0 category. (Is there anything else that belongs in that category?)

It seems to me software companies (including companies that develop web apps) should be concerned with all these sorts of questions.

I get the impression (based on the amount and quality of GUI design work that went into things like the iPhone, iPad, and iPod Touch) that Apple does, in fact, take these sorts of questions seriously. But does anyone else?

I don't see much evidence of other software companies taking these questions seriously. Then again, maybe I'm just not paying attention. Or maybe I shouldn't be asking these questions in the first place. As I said at the outset, I'm not a human factors expert. I'm merely an end user.

Monday, December 03, 2012

Confessions of a Twitter follow-slut

People are always asking me why I'm such a follow-slut on Twitter. They want to know what the heck good it is to be following 108,000 people. Am I insane? Is there any conceivable reason for following so many people?

My complete philosophy on this (and all the techniques I used to get to 100K followers) is laid out in detail in a 99-cent e-book, which I sincerely hope you'll enjoy. If you know what's good for you, you'll spring for it. But let me cut to the chase, in case you don't want to buy the book.

My basic M.O. is to follow as many interesting people as I can. Quite a few follow me back. Eventually I unfollow the heartless, unthinking losers who don't follow me back.

This results in me following an awful lot of interesting people, obviously. (Duh.) Can I interact with that many people? No. Only randomly. Can I really follow what people are posting? No, not one by one by one.

What you have to do is be smart enough to use Twitter's excellent List feature. Put various categories of Very Interesting People into various Lists, then check those streams regularly. I have lots of lists. Many of them private, some public. All excellent.

But I also check the main firehose. I do it all the time, actually. With that many tweeps, I see tons and tons of curated links in my stream (and yes, the occasional bit of nonsensical stream-of-consciousness dreck). The main thing is, I catch huge numbers of fascinating news stories and blog posts by sipping from the fire hydrant. If any kind of fast-breaking news story is going on, I see it right away. A drone strike kills a child in Pakistan? I know about it. A Supreme Court justice happens to say something rational? I hear about it. A hair falls out of Donald Trump's whatever-it-is-that's-on-his-head? I know about it.

The life of a follow-slut is pretty good, actually.

So please. Don't judge me. Sluts are people too.

Friday, November 30, 2012

Firefox Crash-and-Burn Syndrome

Firefox's crash-reporter dialog.

My love-hate relationship with Firefox is slowly shifting toward mostly hate. One reason: about every 7 to 10 days, I get a sudden crash in which Firefox disappears, leaving nothing but the above dialog. Sometimes the Details button doesn't work, which makes me wonder whether Mozilla ever receives the crash report at all.

Clearly, in a situation like this, Firefox is crashing after an unchecked error. (Otherwise why does Mozilla need a complete Crash Report? Why not just get the error code? Answer: There is no error code. The program doesn't know why it quit.) My strong suspicion is that AJAX-intensive sites like Gmail and Twitter are running the program out of memory. Constant client-server AJAX chatter from endless polling is an invitation to memory leakage. (Plus see this post.)

Someday I'll switch to Chrome 100%, but the way I work now is, I use Firefox for business-related browsing and Chrome for personal surfing. I like keeping them separate. I'll continue to use Firefox until the sad day comes when we part company forever. At that point, maybe I'll send flowers and a short, poignant good-bye note. Never to be seen again.

Tuesday, November 20, 2012

How much have you written in one day?

This survey is CLOSED.



Results after 1,336 page-visits:
 

Sunday, November 11, 2012

Pro Writing Aid: Ready for Prime Time?

The web abounds with online "readability checkers," each claiming to show how readable your text is via this or that metric. But few online tools (especially free ones) attack the readability problem with as much gusto, or in as many ways, as Pro Writing Aid.

Sadly, that's pretty much where the good news ends. In my test-drive, Pro Writing Aid didn't prove to be much of an aid. But I give the creators an 'A' for effort. They're on the right track, at least.

The way the tool works is, you paste a bunch of text into PWA's online form and click the Analyze button. About ten seconds later, you'll see a summary report, with a column of links down the left side having names like:
  • Overused words
  • Word cloud
  • Sentence variation
  • Grammar
  • Adverbs/passive
  • Sticky sentences
  • Clichés and redundancies
  • Repeated words and phrases
  • Phrases summary
  • Diction
  • Vague and abstract words
  • Complex words
  • Alliteration analysis
  • Pacing
  • Consistency
  • Sentiment
  • Time
  • Dialog
  • Homonyms
You can click on any one of these to see potential problems highlighted in your text. Unfortunately, "potentials" outnumbered actuals quite a bit in the testing I did. (I used several pieces of my own writing for testing, as well as sample chapters from The Adventures of Huckleberry Finn. You'd think the latter would've thrown a lot of flags and warnings, but oddly enough, "it warn't that terrible bad.")

Different writers will get different mileage from this kind of tool, so go ahead and try it yourself: You may very well find it useful. For me, it was like using a spellchecker in that I spent the vast majority of the time dismissing things that a computer would flag as wrong but that a human would know were right.

The Adverbs/Passive tool was puzzling. It flagged every adverb (including every occurrence of "only"), but highlighted few or no instances of passive voice in any of the five writing samples I tried (each sample averaging 1,700 words).

The Word Cloud feature makes pretty pictures but is otherwise useless.

The Overused Words report flagged 47 instances of "it" in Chapter 3 of Huck Finn, saying that about 26 instances could be removed. In point of fact, I couldn't find any instances that warranted removal.

That's not to say the Overused Words report is useless. But as I say, it tends (like many of the other tools) to report far more false positives than it reports good catches.

A fundamental problem with utilities of this sort is that they make no allowance for the (huge) differences between dialog and narrative in a piece of text that contains both (such as a chapter from a novel). The reason this is a big problem, obviously, is that spoken English differs quite a bit from written English: in vocabulary, diction, syntax, word length, sentence length, sentence variety, pacing, use of clichés, slang-based constructions, and probably two or three dozen other particulars. You can't treat dialog and non-dialog text as one and the same thing. They're distinct. What works for one won't necessarily work for the other. I saw this when I passed a piece of dialog-intensive sample text through the Pro Writing Aid analyzer and noticed many more flags in areas of spoken English than in areas of expository English.

Another potential problem with utilities of this type is that they make no distinction between writing aimed at adults and writing aimed at children or young adults. (Or for that matter, writing aimed at a professional audience vs. writing aimed at a lay audience.) It would be nice if there were a way to specify the intended age group for the writing sample in question, so as to get an age-appropriate readout of things like diction and "sticky sentences." 

Long story short: Pro Writing Aid was a disappointment, for me. But I recognize that it might well be a boon to others. So by all means, try it out yourself. And let me know what you think.


Saturday, November 10, 2012

Evil Writing Prompts


  • Write a query letter and ten sample pages for a novel about the dismal state of the publishing industry and send it to two hundred literary agents.
  • Write a synopsis for a time-travel vampire romance set in the fantasy kingdom of Twillador and hand it out to every speaker at a writers' conference.
  • Write a query letter for a young adult novel called H. Finn, about "a boy, a raft, and a runaway nigger." Sign it S. Clemens, and send it out to 100 literary agents. Publish the rejection letters in a blog.
  • In your latest manuscript, do a global search and replace, putting the name of a well-known literary agent in place of your novel's villain's name, then send it to publishers that deal with that agent.
  • Include the first ten pages of Flowers for Algernon as a writing sample in a query letter. Send to 100 agents, claiming you are a mentally challenged adult seeking representation, and mention that the Americans with Disabilities Act requires them to write you a detailed, personalized letter back, lest you sue.
  • Create a fake newspaper clipping about your unfinished novel, scan it, and paste the image directly into a fake bio. Hand it out at writers' conferences.
  • Using Google Translate, translate the first ten pages of your novel into Hindi, then have Google Translate translate it back into English. Use the resulting hysterically mangled text as your writing sample. Send it under an Indian pen name to your least favorite literary agents.
  • Find the personal e-mail addresses of ten literary agents who accept snail-mail proposals only. Send each one a Word attachment containing the full text of Moby Dick and include in the subject line "REQUESTED MATERIAL."
  • Write 25 queries containing 25 random sentences and send them all to one literary agent, from 25 fake e-mail accounts.
  • Write an e-mail marked "URGENT" to fifty literary agents, claiming you have gotten offers of representation from multiple agencies for a work you never submitted to anyone. Use weird fonts and font colors.
  • Write a fake interview with yourself in which you talk about having won fake awards. Include the interview's URL in queries sent to fifty overseas agents.
  • Come up with 25 fake "Praise for" quotations to put in the front of your book. Be sure to quote dead authors. Include with your manuscript.
  • Write your own first page for a (real) bestselling book that's on the shelves now. Print it out. Go to the bookstore, find that book on the shelves, and paste your first page over the first page of the book.
Disclaimer: Folks. Folks. This is all meant in jest. For heaven's sake, don't actually do any of this shit.

Monday, October 22, 2012

Strategies for Querying Literary Agents

A friend of mine recently asked for advice on how to approach the task of querying a large number of agents. She asked things like: Should I query them all at once? In groups? Or serially, one at a time, waiting for responses? If I query them in groups or serially, should I go with my favorite agents first (then second-tier, then third-tier, and so forth)? Or should I query third-tier first, second-tier next, and first-tier last?

Let me kill the suspense by skipping to the bottom line. I told her to query in batches, backwards (third-tier first, then second-tier, and finally first-tier).

Now let's look at the reasons why.

Querying 100 agents all at once is the worst strategy ever, IMHO. First, I believe a pitch should always be considered a work in progress. You should always be open to the idea that your query can be improved. Say you write a query this month (without sending it to anyone), then go on vacation, then come back to the query. There's a substantial chance that when you look at it with fresh eyes, you'll see ways it can be improved. Or you might spot outright mistakes in the original. Either way, think how disastrous it would be to copy-paste that original query and send it to everyone under the sun. Any imperfections in it will be propagated to all agents, and then you've blown it. Arguably, at least.

Also consider the possibility that your original pitch is simply taking the wrong approach. That's something you can discover by sending it to 20 or 30 or 40 agents. If you really believe the query is the absolute best it can be and you've selected agents carefully (to match what they're looking for), you should get at least one positive response out of 40 agents queried. If you don't, chances are good your query is fundamentally flawed in some way. You should consider whether a total rewrite is called for.

I've queried magazine editors, book publishers, and others in the past, and I've found from experience that a pitch can nearly always be improved. A direct-marketing pitch (which is exactly what you're writing) is something you hone and sharpen incrementally and continuously, preferably on the basis of testing. Sometimes you decide that an entirely different approach would be better. Don't foreclose that possibility by spamming out your first-generation query to everybody at once. That would be unwise.

Sending out queries serially and waiting for a response from each agent before moving on to the next one is simply impractical. Let's say agents take two weeks to respond, on average. (Which is generous, because the true answer is closer to four weeks.) If you're planning on writing to 40 agents, it'll take you 80 weeks to get through the list. I don't know about you, but I can't wait a year and a half.

The reason I used 40 agents in the above example is that in the real world, agents respond positively to only two or three percent of cold queries. If you think you're in that category, you need to reach out to 40 agents, because a one-in-forty success rate is a two-and-a-half percent success rate.

Sending out queries in groups is the way to go, IMHO. But even if you adopt that strategy, you should still not blindly use copy-paste, because (again) if there are imperfections in the pitch, you need to find them early on, not when you've already spammed everybody. That means you should read each query before sending it out. Believe me, after 20 or 30 or 40 re-readings of something, you'll find flaws. Unless of course you're undeniably the all-time best writer in the universe and can reliably turn out perfection on the first go.

Here's why you should send batches to third-tier agents first, then second-tier, then first-tier last. (Unless, of course, you have a recommendation from someone significant, like a bestselling author who already works with the agency in question. If you have that, contact that agency first.) Usually your first tier will contain a lot of top-flight agencies (along with the occasional boutique agency that happens to be a special fit for your particular project). Top-flight agencies get phenomenal quantities of queries. They have more good material to choose from than bottom-tier agencies do. Thus, the competition is fierce when you approach a top agency.

The way to beat the competition (if you don't come with a recommendation that really counts) is to come at the first-tier agency with an offer already in hand. This usually gets the agency's attention.

So the strategy I would use is this: Send your first batch of queries to bottom-tier agencies. If you get an offer of representation from one of those, tell each first-tier agency that you already have an offer in hand, but that you'd rather sign with them and don't want to accept the other offer unless you have to. Don't reveal the name of the agency that made the offer. The top-flight agency will likely assume that (since you're writing to top-flight agencies) the offer came from another top-flight agency. And you want them to assume exactly that. You certainly don't want them to know your offer came from some little-known one-person agency.

It's totally kosher to pit one agency against another like this. I can tell you for a fact that this sort of thing is done all the time when agencies pitch books to publishers. They love to get an auction going. I did this myself once, many years ago. I had a firm offer (contract in hand) from Doubleday. Instead of signing the contract immediately (as most people would have done), I wrote to four other top-flight publishers, and in my pitch I told them I already had an offer from Doubleday. All four publishers sent me contracts immediately and begged me to sign. I had an auction going. (I finally went with McGraw-Hill.)

I hope this discussion has been useful for you. It was for my friend.

Monday, September 10, 2012

Where Good Ideas Come From

(Video: Steven Johnson on Where Good Ideas Come From.)

In case you haven't heard of Steven Johnson's book (Where Good Ideas Come From), the above video will get you started.

Here's a rough overview of some key ideas from the book:

1. The "adjacent possible": An inventor generally uses components that exist in the immediate environment, and these are sometimes conveniently adapted for non-obvious uses. Gutenberg used a wine press for his first printing press, for example.

2. "Liquid networks" and connectivity: Large cities, and now the Internet, make it possible for loose, informal networks to form, and these tend to enable discoveries.

3. The slow hunch: It can take years for a hunch to blossom into a full-blown invention.

4. Serendipity: A certain amount of luck helps, but bear in mind Pasteur's famous observation: "Chance favors the prepared mind." E.g., LSD, Teflon, Viagra, aspartame, Post-it notes. Fortunately, no one has a patent on serendipity.

5. Error: E.g., Lee de Forest's development of the Audion (in its diode and triode forms) was the result of erroneous thinking, and de Forest never understood how it worked. But the invention changed the world.

6. Exaptation: Birds developed feathers for warmth and temperature regulation, and only later used them for flight.

There's more, as well. For example, Johnson advocates keeping a journal of half-baked ideas (following no organizational pattern at all) that you revisit frequently, potentially over a period of years.

Bottom line, the "Eureka moment" is a myth in the sense that most such "moments" are the culmination of many hours (and/or years) of rumination, cooperation, hunch-accumulation, and serendipity. It's process, in disguise.


Monday, September 03, 2012

Remembering Nora Ephron

I was deeply saddened, not long ago, to hear about the recent passing (at the far-too-early age of 70) of one of my favorite writers of all time, Nora Ephron.

Nora Ephron
I had a personal connection with Ephron (which I'll get to in a minute), a tiny "brush with greatness," as some like to call it. For many years after making contact with her, I had imaginary conversations with Ephron (quite a few of them, if you must know), the way she herself admitted to having countless imaginary conversations with New York Times food critic Craig Claiborne.

Our conversations were great, of course. The stuff of legends. We'd talk about our writing adventures. The banality of American life. The inexplicable appearance of pomegranate extract in hand creams.

I suppose I could, in theory, go on having imaginary conversations with Nora. But unfortunately they'd be of the kind you have with a headstone in a cemetery. And I can't stand to think of her that way.

Most people alive today are not of Ephron's generation, so most do not know of the scores (hundreds?) of savagely witty, fiendishly funny, always entertaining essays and articles she wrote for The New Yorker, The New York Times, Vogue, Cosmopolitan, and (especially) Esquire. Today's audiences mostly remember Nora Ephron as the screenwriter behind When Harry Met Sally. Some may remember her as co-writer of You've Got Mail, Sleepless in Seattle, and Silkwood.

But to me, Nora Ephron's best work (by far) was as a journalist and essayist. That's why I strongly recommend that you drop what you're doing right now (I'll wait) and go order a copy of I Feel Bad about My Neck, or Wallflower at the Orgy, or Crazy Salad, or one of her other books. I personally guarantee you will not regret the decision to buy one (or all) of these books. As Nora herself would say, they're far cheaper than psychoanalysis, and (in the end) more uplifting.

My personal connection with Ephron was small. Let's be clear on that. It was the kind of connection that, if it were a number, would round off to zero. But in my mind, it's bigger than big.

I need to explain.

Back in the 1970s, when I was starting out as a writer, I got all these lofty ideas (from reading Writer's Digest, mostly) that if only I could sell articles to magazines like Playboy, Esquire, and Good Housekeeping, I could quit graduate school and never have to worry about pipetting Salmonella cultures by mouth again. I imagined it was possible to become famous as a freelance writer and throw off the shackles of "working a regular job," never to pick them up again.

What an ass I was.

I sent queries to every newsstand magazine under the sun. And got rejections by the Kubota dump-truck load. Every rejection note I got was some type of pre-printed slip with a message that began with either "Dear Sir or Madam" or no salutation at all. All of them were form letters, in other words. Or form slips, I should say.

But then there were the rejection letters I got from Esquire magazine.

The rejections from Esquire were hand-typed (on an actual typewriter) personalized notes from someone named Nora Ephron.

I've kept a couple of these hand-signed rejection letters, and let me tell you, they're some of the most precious items I have in my box of Precious Items.

In one case, I had queried Esquire about an article I wanted to do on a particular medical subject. Nora wrote back telling me to send her the finished article. She wanted to see it. She apologized that I would have to submit it "on spec," meaning that not only could its eventual acceptance not be guaranteed, but there would be no "kill fee" in the event of rejection.

I sent the manuscript for the article. Nora wrote me back, saying that the bad news was that a previous writer had done a piece on a similar subject less than a year earlier, and therefore the article couldn't be accepted. However. The "good news" was that she loved the piece and was taking the liberty of submitting it (for me) to someone she knew at The New York Times.

I was elated (needless to say) with her reply. In fact, I had never been so comforted by a rejection letter in my life. It gave me hope. It gave me encouragement at a time in my writing career when I most needed it.

Nora's rejection letter kept me going (odd as it may sound) to the point where I did eventually sell some articles to some (minor) newsstand magazines. Eventually, I got a job as Associate Editor of The Mother Earth News. Not long after, I was the one sending rejection letters to would-be writers. Always hand-typed. Always personalized. Always kind and generous.

I've had a pretty good run (over the last 35+ years) as a writer, editor, publisher, and all-around "word guy." And I do think I owe more than a small bit of gratitude to Nora Ephron. Way more.

That's one reason (but certainly not the only one) that I am so sad to see Nora leave us.

She was a force for good in my life (and in many other lives, I'm quite sure).

R.I.P., Nora. We miss you. A lot.