Formatting a notecard

Okay, I’ve searched the manual, searched the web, searched the forum, Googled the forum and the site, and I still haven’t turned up an answer to a question that must have been asked a thousand times…

Does any method exist for simple formatting in a notecard?

The ability to bold, italicize, or underline would allow me to make notecards conform to MLA and footnote standards.

What am I missing?

Synopses are plain text (a technical constraint, given how and where they’re used throughout the interface), so rich-text formatting isn’t possible. You can do a lot just by marking up words with asterisks and underscores, however. As for the MLA formatting: if you ultimately want to build a list of endnotes from your synopses, you could create the document using the underscored text, then do a find/replace in Word (or any word processor that supports wildcards or some form of regular expressions, plus format replacement) to find each marked span and replace it with the same text underlined, underscores removed.
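If you’d rather script that cleanup than click through dialog boxes, here’s a minimal sketch in Python using the third-party python-docx library. The file names and the single-underscore convention are placeholders, and it assumes you’ve already compiled the synopses to a plain-text file:

```python
import re
from docx import Document  # third-party: pip install python-docx

MARKED = re.compile(r"_([^_]+)_")  # a span wrapped in single underscores

def underscores_to_underline(src_path, dest_path):
    """Turn _marked_ spans in a plain-text file into underlined Word runs."""
    doc = Document()
    with open(src_path, encoding="utf-8") as f:
        for line in f:
            text = line.rstrip("\n")
            para = doc.add_paragraph()
            pos = 0
            for m in MARKED.finditer(text):
                para.add_run(text[pos:m.start()])          # plain stretch before the match
                para.add_run(m.group(1)).underline = True  # underlined, markers dropped
                pos = m.end()
            para.add_run(text[pos:])                       # whatever trails the last match
    doc.save(dest_path)

underscores_to_underline("synopses.txt", "citations.docx")
```

Inside Word itself, the equivalent should be a single wildcard find/replace: with “Use wildcards” checked, search for _([!_]@)_ and replace with \1, setting underline as the replacement format.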

You can safely skip my kibitzing, since I’ve never done MLA citations, but I don’t understand what you expect to accomplish by putting a citation in a notecard. You can’t turn synopses into footnotes, and they can’t be inserted into the main text.

You could enter your formatted citation into the text of a dedicated document, then use the little button at the upper right of the synopsis in the inspector to copy a plain-text version of the citation into it. That way you can view them all on the corkboard (or in outline view, for that matter) and arrange them as you like. Does that help, or am I missing the point entirely?

It’s just a readability thing. Those formats exist to make it faster for our eyes to scan footnotes and get the critical info without the “speed bumps” of poor formatting. It would also add to the corkboard’s utility for planning, brainstorming, and the like.

It’s not a showstopper, but Apple has shown that striving to perfect an interface leads to faster learning, quicker adoption, and, more subtly, a user base better equipped to make discoveries of its own.

Mind, I’m an old DOS fanatic and longtime newsgroup participant, so command lines and plain text don’t worry me one bit. I’ve got the answer to my question, which was all I needed. But the cool stuff is still cool.

Thanks! I expected it was something like that.

(Mini, disposable rant: computers are really good at handling clumsy, gargantuan data structures now, so I wouldn’t be afraid of making the code less elegant by introducing dual-display options for any text, anywhere. I think programmers have a gut-level resistance to gunking up clean, elegant code, because of the way we always had to work in the old days. Nowadays we can afford to write code that looks a lot more like the chaotic way we think. Hairy, idiosyncratic, lengthy routines aren’t such a big deal anymore.)

Well, we wouldn’t be writing programs in object-oriented languages if pure byte-level optimisation were still a critical thing, but there is something to be said for a chunk of code you can come back to five years after you’ve written it and still understand what you were doing. I have piles of old code I’ve written over the years, and the pieces where I took the time to write elegantly and comment well are the scripts I can still use. That hacked-up shorthand stuff, a mass of punctuation and trickery I wrote five years ago to untangle some bits? Useless. I could toss it in the digital byte shredder and never miss it.

If anything, computer programming has steered toward human elegance as machine capacity has risen. The old programs written in the lean-and-mean days are the scary chunks of tangled code, precisely because they had to go through contortions to get around hardware limitations. Today we can practically use human grammar when telling our computers what to do. It’s an expensive and slow way of doing things, but we have the hardware to spare for it, so why not?

It’s not just the machines we have to work with when coding, but ourselves.