Filesystem-based Information Management Question

I’m a design engineer and for the last 4 1/2 years I have been attempting to pull together all of my design reference data into a central database. Initially I thought DevonThink was the right tool but I’m finding that the software gets in the way more than it helps. Last weekend, while checking out what Hog Bay Software was up to, I ran across doug’s posts about how he’s managing his notes via the filesystem and naturally ended up here. I’m now inspired to dump my messy DevonThink database back into the OS X filesystem, clean it up and start fresh.

I could use some direction on fundamentals from you pros here. Most of my output is not amenable to simple text files. I do much of my work in 3D CAD, Keynote making heavily-annotated presentations, and Numbers/Excel for design calcs and data analysis. I have a ton of images: photographs of parts, pieces, and products, CAD screenshots, renderings, and so on, all very important and related to at least one product design effort, possibly more. Much of my reference material is clipped from the web or collected as PDF datasheets. Recently I picked up Notational Velocity and have been using it to edit and track notes that I know can be text-only. RTF notes are tempting, but after reading Doug’s post about how a little MMD obviated the need, I’m hopeful I will be able to do without and go straight MMD.

So far I have set up a rudimentary file structure and a TextExpander snippet that generates a prefix based on the date and time. Right now it’s 1009080016-, just about bedtime.
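
For anyone who wants the same prefix without TextExpander, the format is easy to reproduce. Here is a minimal sketch in Python, assuming the prefix is simply yymmddhhmm followed by a hyphen (which is how I read 1009080016-):

```python
from datetime import datetime

def file_prefix(now: datetime) -> str:
    """Build a yymmddhhmm- prefix, matching the snippet's apparent output."""
    return now.strftime("%y%m%d%H%M") + "-"

# Just about bedtime on 2010-09-08:
print(file_prefix(datetime(2010, 9, 8, 0, 16)))  # 1009080016-
```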

So here are the questions I have at the moment:

  • how to add metadata to pdf’s/webarchives/other files that are uneditable (OK, I know that technically they’re editable, but I would rather not touch them)? I think I need to do this because I have single files that apply to different projects, e.g. “I used this board-to-board connector in products A, B, and C. I would like to be able to search for connectors on any of those three and have this file show up. Other files will have different associated projects/products.”
  • how do you browse and preview files quickly? One of the positives of the single-bucket apps is that most of them have a slick preview/edit pane that allows you to highlight a note header/name/whatever and see the contents in an adjacent pane.
  • AmberV, my data wants to have your babies. Can you give a full breakdown of your taxonomy and advice on how one not so skilled in information management arts might go about generating their own system? If I understand correctly your folder structure is strictly chronological. No other taxonomy in the folder naming?
  • How about a screenshot of a sample file structure and filenames?
  • Any suggestions on how Scrivener might fit into the not-so-writing-heavy workflow I’ve got going?
  • Under what circumstances do you export your markdown files? Do you usually save the output or just trash it when you’re done?

Have you tried DevonThink Pro Office 2.03?

File preview via QuickLook and Web Kit.
Metadata and tags in Comment windows.
Easy grouping or listing by many criteria.
Scanning and OCR for paper files.
Conversion and storage of e-mail files.

I believe that DTPO can import, read, and index all of the filetypes you describe.
Not sure about 3D CAD, since I don’t know its export formats.
Pages, Keynote, Numbers; no problem. Photos, screenshots, renderings: scan as PDFs?
Plus, you are not restricted to naming conventions of either the Finder or Doug’s FSIM.
But then, I am not as super-organized as he, nor in need of a
“2010 Life Plan, Current Agenda, and a list of ideas for things to do and places to eat” :open_mouth:

I love you too, druid… :smiley:

Metadata and tagging.
As you note, a lot of my workflow is text based (it was once rtfd, then rtf, and if you go way back, doc and whatever AmiPro file extensions were). These days I’m using a lot of in-text hash tags in the text file for sub-characterizations, kind of like MMD tagging conventions. In-text tags used to scare me. I ended up with a bunch of them after exporting data out of Journler (it put a ‘Tag: xxxx’ line in the header of exported files) and I found them annoying because editing them was a process of open file > edit > save file, one at a time. Somewhere along the line I learned about MassReplaceIt and my fear of tagging in text files disappeared (TextWrangler works well too). I think AmberV mentioned it years ago in a post, but I had to figure it out on my own.

So, if I need to (which I’ll say is rare) I’ll add #LitCrit #TNStsg or some other CamelCase thing to the first few lines. They are easy to change and easy to delete, but not quite so easy to add if there is no other existing tag, unless you’re a grep wizard, which I’m not.
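
The retagging that used to require opening each file one at a time is really just a multi-file find-and-replace. Here is a rough Python sketch of what a tool like MassReplaceIt or TextWrangler does for me; the folder layout and tag names are made up for illustration:

```python
import os
import re

def retag(folder: str, old_tag: str, new_tag: str) -> None:
    """Swap one #CamelCase tag for another across every .txt file in a
    folder -- the multi-file find-and-replace that tools like
    MassReplaceIt automate.  Tag names here are hypothetical."""
    pattern = re.compile(re.escape(old_tag) + r"\b")
    for name in os.listdir(folder):
        if not name.endswith(".txt"):
            continue
        path = os.path.join(folder, name)
        with open(path, encoding="utf-8") as f:
            text = f.read()
        changed = pattern.sub(new_tag, text)
        if changed != text:                 # only rewrite touched files
            with open(path, "w", encoding="utf-8") as f:
                f.write(changed)
```

The `\b` word boundary keeps a rename of #LitCrit from mangling a longer tag like #LitCritDrafts.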

For non-text files I’ve used Tagger and TagList. They are OpenMeta tagging tools; they are well designed, under active development, and free. It’s kind of a poor man’s Leap or Yep. And when Dropbox went from the 7.x to 8.x forum releases, it was syncing OpenMeta tags nicely. But I wish I felt better about OpenMeta; there is always the sinking feeling that Apple will pull the rug out one day and all the tags we’ve put on files in that ‘reserved space’ will be washed away. So my needs are well served with Tagger, but I worry from an architectural standpoint. (My needs tend to be like sticking an OpenMeta tag ‘#ReadNext’ on a bunch of PDFs that are filed away in their home folder, or marking some Pages or MSWord drafts as ‘#ReviseThisThur’: very temporary, workflow-oriented tags.)

Through blind faith, and an AmberV post, I’ve been relying on full text search a lot. This so goes against my nature of ‘putting things in their place so they can be found’, but you know, it really works. Yes, the results list comes up with some oddities, but it’s almost always a short enough list to get to the file I want. And you find relationships you would never be able to ‘tag’ your way into. To druid’s point, this is where the black magic of DevonThink really is great. Its semantic recognition searching finds relationships you might not have anticipated; in this area I find DT to be without peer until you get to institutional-strength corporate applications. With lots of data and a faulty memory this can be valuable.

DT will also let you index against an independent data set, which satisfies many of my concerns about having data in an app (EagleFiler does also). Only the organizational information you apply against your data is at risk. If I were handed a huge pile of unstructured data, that I didn’t create, that I had to make sense of, and then work with, I’d probably reach for DT as my primary tool.

I find that full text search gets me almost as good a result on my own files, but my data sets are not the size of most researchers’. I write fiction. I make stuff up. To feed my deterministic monkey I do occasionally make up lists of file names and save them in text files. Searching on a file name with Spotlight, once it’s named as per AmberV’s system, gets you ‘just the one file’ every time.

Scrivener
I’ve actually been thinking a lot about Scrivener’s place in my workflow. I once used Scrivener not only as my writing tool but also as a long-term repository, like a binder in which big collections of data lived. It’s just such a great tool to work in. But more and more I use it as a writing tool only. I dump in a bunch of notes from collected text files, web pages, and the like, trolling through Notational Velocity or just the file system to find things; I synthesize, expand, write, do whatever magic has to be done, and then get the finished work the heck out of the tool and into a more archival format. For me that’s probably a text file or a PDF. Again, AmberV talked about this in an ancient post on these boards; it just took me a while to catch on.

I think of things in Scrivener as transitory, as part of a project, which will someday be finished, the output of which will go someplace else, even if it takes years.

FSIM
I will have to say that the more time that’s passed, the more I love my file system info manager. I’ve corrected early mistakes, like not making the file naming conventions abstract enough - too much specificity and the thing breaks down under its own weight. And I don’t worry about application functionality as much, because my data is always secure. I’m using apps against data, not housing my data in an app.

I’ll defer to others on the other points.

Doug

Tagger & TagList hasseg.org/tagger/
MassReplaceIt hexmonkeysoftware.com/


Druid, thanks for the tips on DevonThink. I just upgraded to 2.0.3 and I’m working toward understanding it better now. Since it can index files without importing I’m thinking I can use it in conjunction with a FSIM like Doug’s.

Doug, thanks for the link to Tagger and MassReplaceIt. I hadn’t seen either of those apps before and I’m reading about both of them now. OpenMeta seems like something that should just be built into the OS.

Agreed.

And this:

is an incredibly helpful insight.

I have a lot more to consider now. Next step is to dig deeper into DevonThink and Tagger.

Any tips on naming conventions for FSIM? I’m curious to hear what changes you made to your naming conventions that ended up improving the system.

None. Once you get a date and an abstract code (like T1- ‘fiction ideas’, N1- ‘notes written by others’…) just about any descriptive name will do. Remember, when you are full text searching you are not just searching the name, so all the data in the FSIM comes into play.

I have one exception, my Readers Notebook. For each work read I make a file which is dated, then coded N3-, and titled “Author’s Last Name - Book Title.”

So Rick’s new book would be…

            100910-N3-Moody - Four Fingers of Death

(The spaces between name, hyphen, and title have become a habit and are on purpose, even if idiosyncratic. I pull lists of my readings rather frequently. Dragging file names from Finder to Bean gets me a nice list; all I have to do is shorten it with an Option-mouse drag to delete the first 11 columns of characters, getting rid of 100910-N3- . What’s left is a list of authors and books. To my eye the spaces are nice.)

The text file itself starts with a bibliography heading, (MLA format, usually cut and paste from Amazon or BobCat) maybe some hash codes if the work is related to another project, then my response paragraphs, and quotes.

I also collect a number of articles, reviews, journals, etc., on works I read, or on the period, school, or genre. They are titled identically but coded N1-. This way the N3-’s make a clean record of my readings.

So while the “notebook” does not reside in a special folder, in fact it only exists abstractly because of the file label “N3-“, I still get the benefit of a unique data storage container.
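
Because the naming is so regular, pulling a readings list is mechanical. Here is a sketch in Python of the same trim done by hand in Bean above; the .txt extension and the exact prefix width are my assumptions:

```python
def reading_list(filenames):
    """Strip the date and N3- code from Readers Notebook file names,
    leaving 'Author - Title' lines.  Assumes a fixed-width
    'yymmdd-N3-' prefix and .txt files."""
    prefix_len = len("100910-N3-")          # 10 characters
    return [name[prefix_len:].removesuffix(".txt")
            for name in filenames
            if "-N3-" in name]              # skip N1- research files

print(reading_list(["100910-N3-Moody - Four Fingers of Death.txt",
                    "100901-N1-Moody - Four Fingers of Death.txt"]))
# ['Moody - Four Fingers of Death']
```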

Note: This looks a lot like Zotero, right? I’m making a text file rather than clicking on that little icon in Firefox? Well, not really. First, the visual style of Zotero drives me nuts: it’s very cramped, and I find the functionality for marking up PDFs too limited. With a text file or PDF per entry, I get to choose my own editing tools. Secondly, all my bib listings, readers’ notes, and lit research are not in someone else’s SQL (are they using MySQL???) database. I know I can get stuff out of Zotero, but to gauge the level of that project, should the worst happen, take a look at the complex file structure the application uses. I feel more comfortable with my text files. Oh, and I like Safari.

What do I give up? An excellent interface with library catalogues for entry origination (when I’ve had a lot of LitCrit work to catch up on, I’ve used Zotero to get the bibliographies and then exported them to text files; it is excellent for high-volume work). Footnote/endnote management (I don’t use MSWord, so Zotero isn’t helping me with that too much anyway, but the RTF merge function is worth a look).

Were I in the sciences, where the citation of journal articles can become overwhelming, I might feel differently, but for my needs, a data set of text files/pdf’s works very well.


I’ve been meaning to post here and haven’t had the time to finish up a draft, but wanted to jump in on this one point. Marking down passages in books that I read is one thing that evolved quite a bit within my system. There was always a bit of a disconnect between how my system is philosophically designed and keeping notes on the books I read. The main problem is that part of what makes it work well is that it contextualises the recollection of information through personal chronology (as has already been expounded upon). If I want to remember some great quote that I came across a year ago in November, I want to be able to go to that spot in my archive to look for it. But there is the problem: what if I started reading the book in March of that year (I’m the type that reads a dozen books all at once)? The file for notes on that book would have been started back in March, and I wouldn’t associate the quote with that time automatically. I might get lucky, but most likely I’m going to remember that I read and transcribed the quote in November.

The other problem I ran into is due to some archival ethics that I impose on myself. I have a few rules that I strictly adhere to (even when I’m not using Boswell, which enforces them):

  1. No file lives in the working area for longer than 30 days

  2. Once archived, under no circumstances is a file to be edited

This might seem like lunacy. What about things that take longer than thirty days to complete; what if you find an error in an old record? It works like this:

  1. When I start working on something, a new primary file gets created in the working area. For some things, this might just be a stub for 30 days. For example, if I start a new project in Scrivener, I create an index file for that project that’s basically just an MMD header and a reference to the Scrivener project. When thirty days pass, I compile out what I’ve got and paste it into that file, archive it, then duplicate it and leave the duplicate in the working area—now I’ve got another 30 days. If something sits around for that long and doesn’t really change, I might not duplicate it; instead I’ll just file it with a WIP code so that at some point in the future I can revisit it if I get the time.

  2. Whenever I want to edit or revise an old item, I duplicate it into the working area. The duplicate then later gets archived separately, with its own ID number, containing a back-reference to the older version.

Combined, these two rules create a strict versioning landscape of things as they are built. Stuff only gets overwritten on at most a 30-day basis, but that’s just a maximum. I very often archive things right on that same day, or a few days later—whenever I feel I want to take a new direction with it, say.
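
For the curious, the mechanical part of the rule is simple enough to sketch. This is not how I actually do it (Boswell enforces the rules for me); it is just an illustration, in Python, of the snapshot-then-duplicate step, with invented file names:

```python
import shutil
from datetime import date
from pathlib import Path

def roll_over(working_file: Path, archive_dir: Path) -> Path:
    """Sketch of the 30-day rule: snapshot a working file into the
    archive (never to be edited again), leaving the original in the
    working area as the duplicate that accumulates the next 30 days."""
    stamp = date.today().strftime("%y%m%d")
    snapshot = archive_dir / f"{stamp}-{working_file.name}"
    shutil.copy2(working_file, snapshot)    # the immutable archive copy
    return snapshot
```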

So, there is a bit of a contradiction between book notes and the way I work. That isn’t really the type of thing you need a bunch of versions lying around for, and I don’t want to break my 30-day rule for one thing, otherwise I’ll be tempted to start breaking it for other things too, defeating the purpose of it (in fact, for a while I did do just that, and yes, I did end up breaking the rules for other things as well).

The solution that I came up with was to establish a book with an ID number using a very simple entry in the database with some basic information like ISBN, publication date, etc. Then whenever I had a thought in regards to that book, wanted to transcribe a selection of text from it, whatever the case may be, I create individual entries in the database that refer back to this reference file.

Since everything related to the book gets a reference to that book’s ID number, I can simply search for that ID number and get a complete list of all my thoughts for that book. It also means that the information itself is more flexible and independently discoverable. A clipping from a biography on Charles Sanders Peirce isn’t buried in the text file of clips from that book, but is rather its own entry, complete with its own potential network of associations and contexts.

I considered creating new sub-tokens for each book. Like say, {I2.1.Brent-CSP}, but I didn’t like that idea. I’d rather have a generic {I2.1.Book} token, and then Brent-CSP becomes identified by its unique ID, bk10073367. I can have millions of those without increasing the burden on the token system, and keeping it a general tool of classification. It also means that the individual items attached to that book can have their own relevant tokens. {R2.1.Psyche} could be a series of introspective ideas based on some of CSP’s philosophy, {i3.1.Clipping} a transcribed quotation from the book—and maybe the Psyche entry could link to the clipping that spurred the idea. The book notes suddenly form their own webwork of associations—a thing which couldn’t be easily done in a single file.

Since making the transition, I’ve expanded the concept to other relevant areas as well. Anything that is a collection of smaller ideas that takes a while to construct could be considered as potentially better handled by lots of entries all bearing a common ID. The common ID can be chronologically located based upon my memory of when I started reading the book, but the interior portions of thought that unravelled as I read that book can be chronologically located from when they occurred. I now have two primary chronological hooks to find something. Really, I have as many as there are entries associated with the book, but the two most important ones are easily discoverable on their own and easily lead back to one another, and to all the rest as well.

The cool thing about Boswell is that it is easy to perform a search like this, and then with a single drag end up with a text file containing all of the items that made up that search—kind of like compiling out of Scrivener. If I do want to present all of these quotes together in a single file for some reason, it’s really easy to do that.


AmberV

Thank you so much for sharing your system ideas.

You have explained your file name approach, but I would be interested to hear your ideas for folder structures. Do you use hierarchical folders, or does your system negate the need for a complex structure?

Your approach with MMD is intriguing, but probably too complex for me to adopt. I am a little old to learn programming techniques :confused: I wonder if you have a view on how your system could be adapted for those like me, who require a less technical scheme - perhaps not as complete as yours, but one that works ‘well enough’. Do you think that a mix of your file naming system, with addition of limited tagging could do the trick? I recognise that tagging may have its risks (and could be subject to redundancy), but if the underlying file name can provide a safety net, then perhaps this is acceptable? Tagging could provide the file relationship dimension that you get through MMD.

Regarding your ‘Boswell’ approach of never modifying archived files: do I understand correctly that you keep working files together in a ‘working’ area for up to 30 days? These files can be edited and added to, until such time as you archive them?

It would be interesting to know how you treat files that come in to you from external sources. Do you rename these files? If not, it would be difficult to fit them into your system; but if you do, it can make liaising with the party who sent the file (and who knows it under the original name) awkward. Similarly, how do you deal with email, which to me and others is a large part of the overall workflow?

I have looked at Boswell. Any chance you can convince the developer to produce a Cocoa version :wink:

Thanks again. You have been a help to more people than you realise.

I, too, have looked at Boswell. Interestingly enough, it has a similar learning curve to DT. One thing that prevents me from a more in-depth trial is that it appears the last version update was in 2005. Ioa, do you know if it is still under any form of development?

JP

JP

I am no expert on Boswell, but AmberV has kindly commented a great deal of information in the past. My understanding is that the developer is still working on the product, although this only seems to have translated to beta enhancements since 2005. It remains basically an OS 9 product. The concept is great and someone of AmberV’s ability obviously can make great use of it. For others (like me), I suspect it would be hard going. Text based only.

Yes, as said I’ve been meaning to answer some of your original questions and just haven’t found the time, but now I have some, so I’ll give it a go.

First, meta-data on media. My solution to this is in part derived from the fact that I simply don’t archive much media; it constitutes a small minority of what is in my system. So for me, a “filing card system” approach works just fine. I treat media the way a public library treats its books: the books are not in the database, but the information regarding each book is. If I wish to file a PDF or a website, I write up a quick reference card in an MMD text file which points to the file in the body, so that it can be clicked and opened, with some meta-data concerning it in the MMD meta-data block, and perhaps some commentary on it. If it is an image, I’ll include the image right in the MMD output as well.

There are important reasons for approaching things this way. Manually creating “a card” like this anchors the item in your memory. If you just press some “Stash Everything Into My Super Everything Program” universal shortcut… well, the stuff barely exists in your brain. If you take the time to actually write this stuff down, you will remember it. Using this system is not just about having a good archive, it’s about keeping your brain sharp. It’s designed to benefit you in many ways.

Lots of people really like their convenience though! I get that, I do. It’s just not for me and the system definitely reflects that.

To handle “media” (which I put in quotes because I really just mean anything other than a text file when I say that—sometimes that might mean old Scrivener projects, or what have you) I have a separate folder which is organised by token. So my folder structure looks like this, in abstract:

archiveBoswell
    2010
        10090
        10180
        10270
    2009
        09090
        09180
        09270
        09360
    ...
archiveFiles
    {R2.1}
    {C2.1}
    {I2.2}
    ...

The two key things here are that the text documents themselves are in the dated folders. I organise by year and then by quarter of the year, and that is it. There are just tons and tons of MMD files in there. Now if a Record.External.Observation type file ({R2.1.Society} for example) needs to link to some media, I’ll drop the media into archiveFiles/{R2.1}. So that is how I annotate and organise PDFs, images, and so forth. Again, I don’t do a lot of this, so this workflow might be way too cumbersome for many. 99% of what I archive is plain-text. That impacts how I work in a big way.
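
Reading the folder names above as a two-digit year plus the 90-day block a file falls in (090, 180, 270, 360), filing a dated document is a one-line computation. A sketch in Python, with that naming scheme being my inference from the listing:

```python
import math
from datetime import date

def quarter_folder(d: date) -> str:
    """Map a date to its 'yy' + 90-day-block folder name, e.g.
    2010-09-12 -> '10270'.  The scheme is inferred from the folder
    listing above: blocks end at days 090, 180, 270, 360."""
    block = min(math.ceil(d.timetuple().tm_yday / 90) * 90, 360)
    return f"{d:%y}{block:03d}"

print(quarter_folder(date(2010, 9, 12)))   # 10270
```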

Briefly, if you have’t read my prior thoughts on tokens, the above is an example of a typical token in my system. When it comes to filenames I only use the initial letter, not the whole token. At the file list level, I’m not as concerned with whether it is an external or internal record—that level of detail is not important to me when combined with chronology and title. I see Doug uses at least one number in his names—I could see myself doing that, but thus far I’ve been happy with just “R” instead of “R1”. Since I basically generate files outward from their contents though, it wouldn’t be too hard for me to make a transition like that.

On the Complexity of MMD

I would say this: don’t confuse the advanced usages of MMD with MMD itself. In fact much of what I say has nothing to do with MMD… like above, that whole bit about scripting a name convention change, nothing to do with MMD. Just because I go all out and write scripts to do stuff doesn’t mean you have to do that (trust me, I’d be writing scripts no matter what the core format was). At its core, MMD is just a plain-text file with a little text-based accentuation to it, no more complicated than e-mail or what have you. For example, the meta-data block looks a bit like this:

Title: The Name of the Article
Author: Your Name
Date: 2010-09-12

That’s it, and you can even make up your own fields, too. For instance, when archiving e-mail I use a “To:” field. It’s all pretty free-form, and the only stipulations are that there are no blank lines in the meta-data section and that you have a “Field: Value” construct on each line.

So when applied to this system, it makes for a very useful place to add a little extra data to the file, data which doesn’t appear visibly, by default, in the produced copies that MMD can create (though some of it can be used in self-evident ways; like Title being used to name the web page).

@Lettermuck: Do you think that a mix of your file naming system, with addition of limited tagging could do the trick?

As you might better see now, that’s actually a pretty good description of what I do. :slight_smile: So long as you call “tagging” putting these things into the text file in the meta-data block (which is, remember, invisible when viewing it in its non-text form). No risk there, so why bother with anything else? Really, I don’t see any of the advantages in the super-filesystem methods and all that. With Spotlight you get all the same benefits just by typing the keyword into the text file, and you get a file that is just as useful on an old DOS computer, or an iPad.

Since the media is not effectively a part of the archive, but rather the archive refers to the media, I can use however much meta-data and commentary on whatever type of file it is. There are no limitations, because I have an entire text file (or even dozens of text files) with which to “tag” this PDF file, and zero fragility, since it is all plain-text and ordinary files. No reliance on system hacks, etc., and it’s all ruthlessly simple. There is no coding or complexity. It’s just text files that say “See this file over here…”.

Using MMD is far from learning how to program. A better way to look at it is a system for standardising the contents of your file. You could of course make up whatever internal system you want to use, like Doug has with the hash codes. The advantage of conforming to MMD’s system is that you get a bunch of pre-built methods to take your plain-text file and turn it into other things. That may or may not be important to you. If all you need is RTF, then maybe using MMD to create RTFs wouldn’t be the best way to spend your time. If you need something to sometimes be an RTF, and other times be a clickable web page, and maybe other times a nicely typeset PDF, then it’s a good system to have at your disposal.

This leads to your question regarding MMD exports: I don’t keep the exports around. In fact, in most cases they only ever exist in whatever TextMate uses for a temporary folder, while I view them. The only real exception is when the exported version itself represents something that then required a lot of extra effort to manually adjust. In most cases, what MMD produces is just fine, but sometimes I want to do stuff that is special to that document, and if I spend a significant quantity of time doing that, I’ll save the final product (or more often whatever generated that final product) as well as the MMD copy. The MMD copy remains the “master” in my archive; the product gets saved as “media”. So for example, this message: I wrote it in MMD and then “published it” using the BBCode generator. I don’t save the BBCode version anywhere; I don’t need to. If I need another BBCode copy, I get one with a single keystroke in TextMate. Meanwhile the base file can also be used to preview as a “web file” in TextMate, which is what I’m doing to proof this copy.

When I view a file in TextMate, I press Ctrl-Opt-Cmd-P, which runs the file through MMD, creates a web page file in a hidden temporary location, and then renders the file right in TextMate’s web preview system. All of the links are functional—they will open files in their original applications if necessary, and cross-references to other archival files also work reasonably well.

Some things, the way I describe them, might sound more involved than they actually are. For example when I say that I reference the original file in my MMD meta-data block, all I mean is that I type in something like:

Source: 10256231-I-Some article.pdf

That’s all. :slight_smile:

The advantage of this, again, is that if you search for “Some article.pdf” in Spotlight… you get that file, but you also get everything that points to it… talks about it, explains it, whatever, the whole cloud! That’s the beauty of using simple old-fashioned tools for this stuff.
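
That “whole cloud” result is nothing exotic: on any system it amounts to scanning the archive’s text files for the media file’s name. A toy Python equivalent of what Spotlight is doing here (the folder and file names are invented):

```python
from pathlib import Path

def references_to(archive: Path, media_name: str) -> list:
    """Find every text file in the archive that mentions a media file's
    name -- the 'cloud' of cards pointing at one PDF.  Spotlight does
    this with an index; this toy version just scans."""
    return sorted(p.name for p in archive.rglob("*.txt")
                  if media_name in p.read_text(encoding="utf-8"))
```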

File Names and Meta-Data

To briefly summarise my philosophy:

  1. The filename is the envelope. It’s what you see in the “drawer” and should contain as much information as is convenient to differentiate itself from the other envelopes.
  2. The meta-data within the file (I use MMD’s conventions, but one can use whatever they like) is the full description of that file. This is what aids search routines and automatic organisation (if you employ such).

The core philosophy separating these two is that the filename should be concise and legible in large lists while containing enough to differentiate itself; the meta-data should be as complete as you have the patience to make it. I’ve put a lot of effort into making my system as “low impact” as possible. I don’t want to spend more than ten seconds adding meta-data, and to further this I use boilerplates and template files with most everything set up except for a few key things that change in every file, like the title and date. TextMate’s Snippets make this incredibly easy to do.

Something to consider: a number of modern Mac programs, including DTP for that matter, have prioritised searching speed over searching precision, and this becomes especially true with punctuation. If you do adopt a token system similar to what I have developed, that is something to take into consideration. DTP is perfectly useless for the way I work, because searching for “{R2.1” is meaningless. The punctuation isn’t considered in the search and will in fact mess it up. Leap has this same problem. Notational Velocity does not have this problem. It will do precise searches very fast. Boswell also does precision searching, but it goes about it the slower way.

Why Numbers

I’ll digress briefly on why I use numbers in the first and second axis of the token, as someone asked this question in response to Doug’s blog post. On the surface it does seem less elegant than the rest, because numbers require memorisation. However, I definitely have a system for these numbers, which dramatically reduces how complex it is to remember them. In the first axis, 1 is always private; 2 is always public; 3 is always concrete-auxiliary; etc. R1 is an introspective record, a dream I’ve stored, a comment on my psyche, etc. R2 is an amusing individual I saw while writing in the coffeehouse. M1 is an e-mail, M2 is a forum post. C2 is something I intend to publish; C1… not so much. :slight_smile: You get the idea. The secondary axis also has analogous meanings shared between them. Example: 4 is unfinished. When applied to an e-mail (M1.4) that means I never sent it; perhaps I wrote it in a fit of pique and decided it would be best to temper myself first. In {i3.4} it would be an unfinished concrete report of some collected information.

The other reason for using numbers, Doug already pointed out in his response in the blog comment section: it makes searching even more powerful. M2 is an unlikely sequence all by itself, but with the dots it is even more unlikely. M2. is very unlikely, but Mp. might conceivably be more likely, especially with a case-insensitive search. That could have been “hemp.” instead of Private Communication, or whatever.

Numbers are also computer friendly. Alphanumerics can be used nearly anywhere, punctuation is a bit more spotty. I did briefly consider using punctuation instead of numbers, but if I ever did want to put full tokens into filenames or use them in some area that was picky about characters, it would become limiting.

Finally, numbers allow a great degree of expansion, especially if you demarcate them. While I don’t have any double-digit signifiers at the moment, the system could easily accommodate an R3.12. I’d hope it never gets that way though, as that would probably mean there is a taxonomic failure and an axis needs to be split.

The punctuation placement also allows for some interesting searches. Since the minor-axis number is the only one surrounded by dots, searching for .4. will return all entries in my database that were never finished. Change that to .4.SomeName} and now it returns all unfinished items written to, about, or regarding that individual.
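Because the searches are literal string matches, they are trivial to reproduce with any tool outside a dedicated application. A minimal Ruby sketch, using made-up filenames in the style described in this thread:

```ruby
# Find all "unfinished" entries (minor-axis 4) in a list of filenames.
# Tokens like {M1.4.PersonA} are assumed to appear in each filename;
# the filenames below are invented examples, not real data.

def unfinished(filenames, name = nil)
  # The minor axis is the only number surrounded by dots, so a literal
  # ".4." match is unambiguous; appending "Name}" narrows it further.
  pattern = name ? ".4.#{name}}" : ".4."
  filenames.select { |f| f.include?(pattern) }
end

files = [
  "1009080016-{M1.4.PersonA} Unsent letter.txt",
  "1009080017-{R2.1.Dailies} Coffeehouse note.txt",
  "1009080018-{i3.4.PersonB} Unfinished report.txt",
]

puts unfinished(files)             # the two .4. entries
puts unfinished(files, "PersonA")  # only the unsent letter
```

The same logic works equally well as a Spotlight query or a `grep` over a quarterly folder; the point is that the convention, not the tool, carries the precision.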

Organising Things

To expand a bit on the matter of folder organisation, I want to stress the importance of simplicity, which is a message I fear could be easily lost as I extrapolate, so I’ll come back to it at the end. The most important aspect of this system is that you shouldn’t be organising. That’s one of the major goals of it: reduce or completely eliminate the overhead of filing.

Now for the digression. I cannot answer simply, because Boswell has a folder scheme much more akin to Gmail’s label system. Technically, any instance of an entry in a Boswellian folder is an “alias”, and so it very naturally works with the concept of having items scattered all over the place, whereas in many other systems this sort of cloning is a secondary feature, not the natural definition of the relationship between item and folder. In Boswell, an item can be in no folders at all—totally invisible unless you search for it.

So, I don’t actually adhere strictly to the chronology system alone. It is definitely an important component, and drives the sort method within most of my folders, but I’m able to easily maintain topical buckets which are automatically populated as I archive items. Just as often, I’ll go to a folder like “Letters-Sent”. Broad notebook buckets like this are useful in Boswell because, like I said, it searches the slow way. Stipulating a top-level bucket that narrows the pool down from 10,000 to 800 items means the search performs that much more quickly. Within a bucket, as said, I nearly always sort things chronologically.

So they are really more like tags that act like folders. Coming back to the top emphasis: I don’t manually handle any of this. Boswell manages all of the organisation for me whenever I archive something. So my system remains pure to the paradigm of dump and forget, while also benefiting from a little computer-aided topical organisation that is 100% set up by me. I know it is flawless because I set it up, thus I can trust it.

So how does this stuff end up in actual folders, if I’m doing it all in Boswell? Simple: I dump the day’s worth of stuff out of Boswell and into the appropriate quarterly folder first thing in the morning. I don’t myself use that folder much, except in cross-references. Since I cross-ref heavily, the MMD links all point to the items in that folder, so when I click on a cross-reference in TextMate, it’s from that system that I view the files.

This also means that I have a Boswell-free redundant backup.

To reiterate: if you engage in complex organisation, you are missing the point of the system. One might be tempted to sort the "R"s and the "M"s into sub-folders beneath each quarter folder, but I would caution against that. Part of what makes this system unique and powerful is that the chronological listing remains unbroken across taxonomic boundaries. Everything is listed together, no matter what type of file it is, in the order it was archived. This can yield interesting combinations! Remember that if you do need a little focus, it’s easy to get. You can either mentally block out the non-“R” stuff, or whatever, or use Spotlight to produce a quick focussed list.

Browsing and Previewing

Browsing is probably the weakest part of my system, but for a good reason: I hardly ever do it, and never as a way of finding things. The only time I’m browsing is when I’m feeling nostalgic, in which case I may go through a particular axis or two in a date range without caring much what I come across. If I’m looking for something in particular, the ID-Primary-DescriptiveTitle naming convention is, 99 times out of 100, more than good enough to get the job done. In Boswell, if I’m not sure from that information but have things narrowed down to a pool of 30, I can just DownArrow through the list and the text shows up immediately. The same can be done in Notational Velocity.

While there are definitely occasions for browsing or previewing, in most cases—for what I need—an excessive amount of this activity would probably mean a failure in the system. My system is designed so that I can find a needle in a haystack within twenty seconds, no matter what that needle may be. Most often it’s more like 5–10 seconds. Very rarely is it longer, and then some browsing is required. Even a very good system will at times suffer the fallibilities of human memory; I might just not remember enough to pull something out quickly. More precise and elaborate searches can usually accomplish this, but that means more time to acquire it. In most cases, it is like Doug says: your search result is usually one file.

This goal, if you could call it that, might not be relevant for everyone. For a graphic designer, thumbnail previews might be the equivalent of a textual descriptive name. A similarly designed system that placed more emphasis on a thumbnail for rapid retrieval would certainly be valid for some people. Programs like Leap, or even just Finder in icon mode, might be all they need.

In defence of naming conventions, for Druid’s sake, calling the naming system a “restriction” is to mistake the system. The main grief I have with application meta-data, and for that matter many of the meta-data systems featured in OS X applications that use Spotlight, HFS+, and other tricks, is that they aren’t very portable. All of those tags and folders and comments and labels get lost as soon as you leave the system, upload the files to an FTP server, or work in Windows for a while. Doug already pointed out the fallibility of relying upon system tools for chronology as well. For one, they were never designed to be archive-proof: they are meant to be activity indicators on the system. When was it made, when was it modified; neither is immutable. The archival date, which is a central and architecturally vital component of this system, cannot be trusted to any of these flags. The naming really is the system; everything else is gravy. Saying it is “restrictive” is no different from saying, “You don’t have to worry about the restriction of putting your files in DEVONthink to use DEVONthink”. :slight_smile:

If someone is interested in adopting a Noguchi-styled organisation system, then they must have a 100% foolproof date system, and the filename is a profoundly logical place to put it. From above, the filename is the envelope, and this system is all about envelopes, and less about the boxes, when you get down to it.

Is this method for the super-organised? I’d say not, actually. It might look like it on the surface. Someone might look at my boilerplate without reading about how I create it, and think, “Wow, that’s a lot of meta-data work and filing”. It really isn’t, though. It’s, as described, five to ten seconds of entering information. I can mark up thirty files in a few minutes, and that’s the maximum of what I ever have to do in bulk—it usually happens whenever I come home with my AlphaSmart. :slight_smile:

If anything, I’d say this system is about dumping the super-organised mindset. One of the main reasons I designed it was because I was spending too much time organising things. I wanted something that was save-and-forget, not even file-and-forget. There is still a little filing, but honestly it’s not much at all, and it’s all habitual at this point anyway. I don’t even think about it. Does DEVONThink (Pro) (Office) (Skyscraper) supply that kind of ease? Yes, I think it does. I’ve read enough about it from people who are avid users of it to gather that it can provide that kind of thoughtless collection and retrieval. I’m not saying this system is better except in two key points: (a) it will still be working in thirty years, and (b) I believe it has psychological advantages. See also the thread on commonplacing. There is, I believe, something mentally healthy about creating the structure by hand and taking the time to give it proper care. I think the second point does have a limiting factor though, in terms of quantity. If someone is a pack rat and gathers hundreds of things per month, they would have no time for anything other than writing synopses of it all!

As Doug says above, there is almost a leap of faith in just how little you organise things. It feels very strange to work in this system at first. Your intuition is telling you you aren’t doing enough! All of this will be lost! It does work, though. I’ve never lost a single file. The only times it’s failed me are when I failed it and neglected to put something into the system. There is no way around that problem though; everyone makes mistakes, and any software or system will fail to produce information you never gave it. :slight_smile:

Responses

OpenMeta

@douger: But I wish I felt better about OpenMeta, there is always the sinking feeling that Apple will pull the rug out one day and all the tags we’ve put on files in that “reserved space” will be washed away. So my needs are well served with Tagger, but I worry from an architectural standpoint.

I would definitely encourage some manner of redundancy here. In my opinion OpenMeta is probably even less safe than DEVONthink in terms of portability and future-proofing. Plus there is yourself to consider: do you want to predict that you’ll be using a Mac in ten years? I’d like to think I will be, but who knows. Ten years ago I was a Linux geek and Macs drove me up a wall. Now I have Linux in a Parallels VM and hardly ever use it. And that’s just me. Like you say, this OM trick is an Apple thing, and they are not known for holding the fort over things they’ve established. They are in the consumer business, not the long-term enterprise business.

I have considered looking into writing a Ruby-based OpenMeta script that will process my hard-coded meta-data and make it OM-useful. It would be nice, but I definitely wouldn’t strip out the core. It would, if anything, become like a Boswell to me: a useful tool that sits on top of a system.
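As a rough idea of what such a script might do, here is a Ruby sketch that reads the Keywords line out of an MMD meta-data block and builds a tagging command from it. Note the `openmeta` command-line tool’s flags are an assumption on my part (I haven’t verified them), so this only prints the command it would run rather than executing anything:

```ruby
# Sketch: derive OpenMeta-style tags from an MMD meta-data block.
# ASSUMPTION: the "openmeta -a <tags> -p <file>" invocation printed
# below is hypothetical and unverified; adjust for the real CLI.

def keywords_to_tags(mmd_text)
  # MMD meta-data is "Key: value" lines at the top of the file;
  # pull the Keywords line and split out the individual tokens.
  line = mmd_text.lines.find { |l| l.start_with?("Keywords:") }
  return [] unless line
  line.sub("Keywords:", "").split.map { |t| t.delete("{}") }
end

entry = <<~MMD
  Title:     Morning pages
  Keywords:  {R2.1.Dailies} id1009080016
MMD

tags = keywords_to_tags(entry)
puts "openmeta -a #{tags.join(' ')} -p entry.txt"
```

The important property is the direction of flow: the plain-text meta-data remains the source of truth, and OpenMeta tags are a disposable projection of it.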

Avoiding Specificity on the Envelope

I’ve corrected early mistakes, like not making the file naming conventions abstract enough - too much specificity and the thing breaks down under its own weight.

That’s a huge “beginner’s problem” with this system. Hey, that was version one and two for me, before I started writing things up in the forum here. By then it was around version four. The first versions had a huge amount of specificity and it was a royal mess. Fortunately I kept very good records, so upgrading systems has been easy, but this latest iteration seems to be the winning ticket for me. 09004 was a final synthesis of many ideas, much of which I posted here in late ’08, and honestly it hasn’t changed since then, except in ways it was designed to change. The final plain-English tag in the token was designed from the start to be open-ended. There are many more of those now than there were then, but this doesn’t burden the system. It doesn’t hurt it at all to have .PersonA} and .PersonB} specified. That only makes it better, and they are both still {M1.1 or whatever.

So yeah, I love it too, work in it daily and benefit from it nearly hourly. I’m glad to hear you’ve remained just as pleased with it in the long run, too.

Future Proof

And I don’t worry about application functionality as much, because my data is always secure. I’m using apps against data, not housing my data in an app.

That’s really the key thing here. I don’t trust software for the multi-decade question. It’s nice, some of it is really amazing (I’m lucky enough to work for one that fits squarely in that last superlative), but archiving data: that’s the long haul. I’m going to be carrying this archive around with me when I die. By then it will probably be in some little crystal embedded in my fingernail or something, but I’m not going to trust Boswell, or DEVONthink, or even Apple, to be on my fingernail as well. Yes, the files can be exported and moved to new systems, but so much of the connective tissue cannot. All of these “tags” and “links” and so forth are incredibly fragile in the long view.

So to that end, I use what can be automated flowing outward from the core data. If OpenMeta can be automatically updated from a text file with a special script, then great, I’ll take advantage of it and use it to the nth degree however I can; but if OM collapses, my core meta-data is still there. I use Boswell to maintain an elaborate workflow with these buckets, all without having to give it much thought, but if it collapses I won’t lose anything. Internal links? Same issue. If I used DTP’s cross-links or VoodooPad’s, then I’d be really screwed if I had to move to Linux, so my links are all simplified MMD. Even if MMD fails, the stuff that makes it work is just conventions in a text file. That will always be useful even if there are no more computers left.

Working Journal vs. Immutable Archive

@Lettermuck: Regarding your ‘Boswell’ approach to never modifying archived files. Do I understand correctly that you keep working files together in a “work” area for up to 30 days? These files can be edited and added to, until such time as you archive them?

That is correct, though I often archive well before the 30-day expiration, even if the item isn’t done yet; 30 days is just a maximum.

Incoming Stuff

It would be interesting to know how you treat files that come in to you from external sources. Do you rename these files?

I not only rename them, I MMD them as well. :slight_smile: Everything in my system is MMD, or is an MMD file that points to a media resource. Since MMD is so simple, this generally isn’t a difficult task. It’s usually just a matter of tossing some meta-data on it and potentially double-spacing paragraphs. Once something has come into my system, though, it slots right in with everything else. You get that same advantage of a system with no taxonomic edges for imported stuff, too.
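That import step is small enough to sketch. The following Ruby fragment prepends an MMD meta-data block (field names mirror the journal boilerplate quoted later in this thread; the yymmddhhmm timestamp format is inferred from examples in it) and converts single line breaks into blank-line paragraph breaks:

```ruby
# Sketch of "importing" a plain-text file into the system: prepend an
# MMD meta-data block, then double-space single-newline paragraphs.
# The timestamp format (yymmddhhmm) is inferred from this thread.

def import(text, title, token)
  stamp = Time.now.strftime("%y%m%d%H%M")
  meta  = "Title:     #{title}\n" \
          "Date:      #{stamp}\n" \
          "Keywords:  {#{token}} id#{stamp}\n\n"
  # Turn lone line breaks into blank-line paragraph breaks,
  # leaving existing blank lines untouched.
  meta + text.gsub(/(?<!\n)\n(?!\n)/, "\n\n")
end

puts import("First line.\nSecond line.", "Vendor datasheet notes", "i3.1")
```

In practice the renamed file would also receive the timestamp prefix in its filename, so the envelope and the contents carry the same ID.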

If not, then it would be difficult to fit them into your system; but if you do, it can make liaising difficult with the party who sent the file (and knows it under the original name).

That’s not as big a problem as you might think. Look at it this way: the archive is an internal storage system. I don’t (can’t) even edit it. Other people certainly don’t edit anything in it, or even see it. If there is a file that is going back and forth between parties, and is still being actively edited, this is how I treat it:

  1. They send me a file that will be mutually edited over a period of time
  2. I create a stub entry in Boswell, the clock is now ticking, and I point to the original file in this stub entry, much like I would a media resource
  3. If 30 days pass and it is still being edited, I copy the current file contents into the stub file and Versionize it… now there is a new stub file and another 30 days.

So as you can see, the other person is not aware of any of this. The archive is my own internal system. To a degree the MMD is an internal system too. I prefer to render out a copy before sending or posting it.

Boswell

I have looked at Boswell. Any chance you can convince the developer to produce a Cocoa version

I’d write to the developer; that’s the best way to let him know there is interest.

I can’t really say for sure if there will ever be a Cocoa version. I’ll say I have reason to believe that is a possibility, but I also really don’t have any reason to say it is inevitable. :slight_smile:

You are definitely right in your final assessment, though. It is an idiosyncratic product which would drive one nuts unless they really like the way it works already; it is extremely rigid in some philosophical ways, like few programs are. It’s also not for squirrels, as you note. It’s a text authoring program, primarily, for the mass archival of text, whether collected or generated, and it has no interest in images or PDF files or what have you. It’s a specialised tool, in the same way you wouldn’t expect iPhoto to archive e-mail. I realise it’s trendy to have “everything buckets”, but personally I’ve never really seen much merit in them. I’d rather have a comprehensive index card system that, if necessary, illuminates but does not contain what my file system already does a good job of managing: files.

Point of taste, no doubt.


Wow. Thanks so much AmberV.

Look out MMD - here I come!

Have written to Will Volnak (Boswell) by the way. Just to let him know that there is a bit of stir going on at the Scrivener forum :wink:

Is that all?

:laughing: :laughing: :laughing: :wink:

Dave

Hey thanks so much for that - fascinating. Does this mean, Amber, though, that you make ALL your incidental notes and observations on a computer? (Or at least a digital device.) I love the idea of your system but the hurdle I face is that I carry around a paper notebook, and that’s where all my incidental stuff goes. So browsing is the ONLY way I can retrieve that stuff, unless I want to type out the notebook as I go. Which I don’t. But if all of those notes were to be made on a digital device, I’d need something as handy and omnipresent as a (physical paper) notebook. What would that device be? :blush: Now I feel like a Luddite.

william

Hi, William: sounds like you need the LiveScribe pen:

"BERKELEY, Calif. — Mark Hunter, a doctoral student here at the University of California, uses a new kind of pen to take notes in class. His Livescribe Pulse digital pen writes on special paper, records lectures with audio at the same time, and transfers it all to his computer as a digital copy.

“I like the security that having the lectures combined with the notes gives me,” he says.

“Truth be told, I hardly ever go back and listen to them, but knowing that they’re there, I find to be meaningful.”

Livescribe, a start-up formed in 2007 to bring the old-fashioned pen into the 21st century, has weathered the recession to find success with fans such as Hunter who provide the bulk of its word-of-mouth marketing.

Now, Livescribe’s 500,000 customers have something new to talk about — a second digital pen, the Echo, released in late July. The new edition attempts to address requests from customers for more comfort, lighter weight and easier docking to the computer.

“The new pen is flat. It doesn’t roll off the table,” says Jim Marggraff, Livescribe’s founder and CEO. “Our customers wanted a smaller, more comfortable pen.”

They also want a cheaper model, but Marggraff needs a larger customer base before he can follow through on that.

The new Echo is $199 with 8 gigabytes of memory ($169 for 4 GB), compared with $169 for the 4-GB Pulse ($129 for 2-GB Pulse)."

(I don’t work for the company: but it seems like a great idea.) Look it up on Google.

@dafu: Is that all?

Don’t get me started… :wink:

@William: Does this mean, Amber, though, that you make ALL your incidental notes and observations on a computer? (Or at least a digital device.) I love the idea of your system but the hurdle I face is that I carry around a paper notebook, and that’s where all my incidental stuff goes.

William, no need to feel like a Luddite—or at least, if you do, count me in as well. You are right, now that I read over what I have said, it does look as though I type every single thought of mine down on a computer of some sort, but really nothing could be further from the truth. In fact I have historically written close to 90% of everything down with paper and ink (at least when it comes to writing in a diary). Nothing fancy, no digital recognition pens. Just a Moleskine and a black marker pen, or a stack of index cards with a binder clip if I’m in the mood for that. I say historically, because lately I’ve been writing a lot on my Alphasmart. I expect as I acclimate to the device I will use it less frequently (bit of a honeymoon thing going on as it is only a month old, no doubt), but even now I only bring that along when I know I’ll have a convenient place to use it. If I’m just going to be walking about or riding a bike with no particular destination in mind, it’s the old Moleskine or index cards for me.

As for transcription: that goes in phases for me. There will be periods of time when I religiously type in everything that I have written, scan the index cards or journal pages and cross-ref them to my media folder—there will be other phases where I just type in what I have written—and still others where I don’t do any of that at all and it just sits in notebooks. I also have phases where I get all ambitious and transcribe old journals that haven’t been typed up from these “dark” phases (I put a little mark next to every entry that I transcribe, making it easier for my future self to figure that part out). So eventually it all does get transcribed, it just may take five or six years for some of it. :slight_smile:

And yes, once it does get typed in it goes into the System, back-dated accordingly. I also try to mark down where I wrote each entry, so that I can search by location (all notes written on bus line #44, say), and a short title, but often it’s just the time. Sometimes I can remember where I was, sometimes not.

When I transcribe I use a sourcing system. I have a “Source” meta-data field that I use to indicate where the item came from. For gathered stuff this is somewhat like a citation, or a URL; oftentimes it’s a reference to an ID in my system, which is sometimes a more detailed description of the material, as described with my method for taking down book notes (many of which, as you can imagine, are hand-written). Each of my hard-copy journals gets an ID number as well, and so any entries transcribed from that journal will use that ID number in their source field. Since I use my odd timestamp in “real life” as well, the journal pages are marked with these dates and times, and thus it is fairly easy to go back and find the original page, if I haven’t scanned it in the first place.

Why go through all of this trouble? Well, you could say journalling is a bit of a hobby for me. I write a serious quantity of information down; probably to the tune of an average of 40,000–50,000 words per 40-day period. You could certainly speculate that one of the reasons why I came up with a system for “thoughtless filing” is simply to handle the quantity of information that I’m throwing at it all of the time. This counts everything, though. Even this forum post is counted as part of my journal, ultimately. I consider every scrap of what I write to be a part of “my documentation”, so to speak, and given the content of this message, it will no doubt be linked to and analysed by my future self, as I track the meta-trail of how I record my life.

Another reason I go through the trouble of transcribing is that it makes the material more easily available, and I feel there is an important aspect within the transcription process itself that a direct digital transfer cannot match. It’s one of the reasons why I’ve eschewed “syncing everything”: I feel there is a useful aspect to reading what you have written and typing it up. I don’t just transcribe verbatim—most of my hand-written thoughts are in shorthand and rather tersely written—so transcribing is almost a bit more like rewriting. I’ve even worked out how many days I should wait before transcribing. I don’t transcribe the things I’ve written that day at the end of the day, but rather the things I wrote about a month ago. Why? It distances me from the heat of the emotions, if any, and if the thoughts were reflective in nature, allows me to use the time that has passed between that point and now as an asset in my rewrite. It is, at the same time, not too far away. It’s not so far in the past that I no longer remember the person who wrote the entry, except in vague abstract; it’s still the “me” that is steadily evolving. It’s not so far in the past that I no longer remember the context, or wonder precisely what I was going on about.

In answer to your question on the replacement of the notebook: I have yet to find a device that can truly replace the pen and notebook, and even if I did, I’m not sure if I would use it. I rather like rewriting everything. I don’t think a digital pen, like jtranter points out, would work for me. For one the shorthand aspect wouldn’t really go over well with any kind of OCR, so it would just be moving the “ink” from one spot to another. I’d still have to type it up.

I take it you scan everything though, since you mentioned browsing your writings digitally? A digital pen might indeed be an interesting thing to think about, then.


Thanks so much guys - that’s all really interesting. The Livescribe looks pretty amazing! Maybe for Xmas…

Amber, when I said I browse, I just meant flip the pages of my notebooks looking for stuff that I half-remember jotting down six months ago… or was it two months? For now I don’t scan, or transcribe very often either.

I like very much that you actually get that stuff out of the notebook and into a searchable form. ( I also love that you still use a notebook :smiley: ) Maybe I’ll try doing that a bit more, too, at least with some of it.

I suppose my philosophy has always been that if it’s really important, once you’ve written it somewhere, the chances are it’ll come back, or stick, and I think it usually does. After reading your post I looked up the Noguchi method - and it strikes me that my inner monologue is a kind of thinking analogue of that method. As I go through my day I’m constantly rehashing ideas and thoughts, many of which are in a notebook somewhere. But the more often I come back to them mentally, the more likely they are to recur. A positive feedback cycle. Of course it’s not very scientific and a bit more transcribing would really reinforce it nicely - or, indeed, scanning. With the right file system it would be just as easy to search for a scanned document as a transcription.

Anyway, thanks so much for your posting on this thread. I’ve found it very stimulating and I’m thinking about all the ways I can apply these ideas to my own informational life. I’m especially interested in the idea of developing a token system I can apply to every document I create. That makes so much sense.

cheers for now
William

Hi all,

first off: I’m very interested in implementing a text-based system for my notes. So this whole discussion has been a great resource of information. Thanks everybody!

Now, Ioa, I have a question for you (you saw that coming, didn’t you?). Do you find yourself replicating the note’s title over and over?

In the current state of my system, I have to type the title at least three times: in the file name itself, in the “Title” metadata field and as a heading on the text body. (Maybe the last one could be dropped, but I kinda like to have a nice heading when the note is exported. And, sure, there’s copy and paste; but it still seems too many steps.)

So, I’d like to know how is your workflow concerning the titles. Is there a smarter way? Am I unnecessarily duplicating information? In short, how do you do it?

Also (yep: two questions): how do you manage the unique ID for each note? Do you copy it to an “ID” field on the metadata? Or the one in the filename is enough? (It’s actually linked to the first question. Because the ID is already in the file name, if I decide to put it in a metadata field the copy-and-paste gets even worse: now it demands some editing).

Well, that’s it for now… I hope these questions don’t sound too stupid :slight_smile:.

Thanks a lot!

@William: I suppose my philosophy has always been that if it’s really important, once you’ve written it somewhere, the chances are it’ll come back, or stick, and I think it usually does.

I go back and forth on that. I have a pretty good memory, and so remember a large portion of what I write down, but in the case of years and years, there definitely are situations where I’ve written down a thought, forgot all about it, then three years later written down the same thought, forgot about it, and then finally wrote about it in the era of this system, and suddenly found all of these links to prior thoughts. It’s fascinating to see a chain of ideas, and to be able to go back and look at them in their chronological context; to analyse what gave birth to the idea, ponder on why it was forgotten, and see how it mutated and changed in the subsequent iterations.

Whenever I find a “chain” like that, I tend to write up a Trend file on it, which links to the ID for each occurrence and summarises how the thought evolved over time.

This kind of touches upon one of the reasons why I think versioning is a superior system to appending. Versioning creates new data nodes out of old information, and thus elevates the entire network that it draws from. Whenever I search for something and have additional thoughts on it, if I create a new file for that, back-referencing to the old ones, that whole network of thoughts becomes vastly easier to find in the future. If I had just gone on appending things into one file, that might not happen, and complex relationships between thoughts would be much less visible because a file edited over the course of years would lose its chronological contextual power.


@cksk: In the current state of my system, I have to type the title at least three times: in the file name itself, in the “Title” metadata field and as a heading on the text body. (Maybe the last one could be dropped, but I kinda like to have a nice heading when the note is exported. And, sure, there’s copy and paste; but it still seems too many steps.)

If you are using MMD, the last one can be dropped in many cases. The meta-data title is different from the structural heading titles. Think of it in the context of a book. You have the name of the book on the cover and on the first page inside. The first level 1 header in the book would probably be something like “Introduction”, or “Chapter 1”. So headings are more material than meta-data.

That said, for stuff that isn’t large-scale, it can be useful to have the meta-data title synonymous with the first level 1 header. Many of my entries are just short, one- or two-paragraph thoughts, and for this type of thing, having the title of the “card” printed out in the HTML view is handy. Another reason why it can come in handy is that I have another script which can take any number of these entry “cards” and combine them into a single MMD file. It basically just takes the meta-data from the first entry and then wipes out the meta-data from the subsequent files, producing a single document (kind of like Compile). In this case, titles come in handy because each card now has a title line separating it from the other cards. Now, in actuality my code is smarter than that. It will analyse each file’s meta-data, and if the Title is equal to the first header, it just deletes the meta-data. If the first header either doesn’t exist, or is something else, it:

  1. Adds the title as a level 1 header to the top of the document
  2. Increments the rest of the headers one level of depth
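The logic above can be sketched in a few lines of Ruby. This is a simplification of what my script does, not the script itself (the meta-data parsing here assumes a simple blank-line separator between the meta block and the body):

```ruby
# Sketch of combining entry "cards" into one MMD document: keep the
# first card's meta-data, strip it from the rest, and make sure every
# subsequent card opens with its Title as a level-1 header.

def split_card(card)
  meta, body = card.split("\n\n", 2)   # meta block, then body text
  title = meta[/^Title:\s*(.+)$/, 1]
  [title, body.to_s]
end

def combine(cards)
  first = cards.first
  rest  = cards.drop(1).map do |card|
    title, body = split_card(card)
    if body =~ /\A#\s*#{Regexp.escape(title)}\s*#?\s*$/
      body   # first header already equals the Title; just drop the meta
    else
      # add the Title as a level-1 header, push existing headers down
      "# #{title} #\n\n" + body.gsub(/^(#+)/) { "#" + $1 }
    end
  end
  ([first] + rest).join("\n\n")
end
```

Each card keeps its own identity in the merged output because its title line acts as a divider, which is exactly why keeping Title synonymous with the first header pays off for short entries.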

I get around typing it in twice, or even copy & paste, by using TextMate snippets. When I start a new entry, I just press Ctrl-M and then select from a menu of all the boilerplates I’ve designed. A simple journal entry boilerplate might look like this:

Title:     $1
Author:    Ioa Petra'ka
Date:      ${2:`~/bin/timestamp`}
Keywords:  {${3:R2.1.Dailies}} id${2/[\/\.,:;\'\\]//}
Format:    complete

# $1 #

$0

There is actually more to it than that, but this demonstrates the “basic” idea. Some of that looks like absolute banging-on-the-keyboard (apologies for that). At its most basic, the TextMate snippet syntax looks like line one. The numbers indicate the tab order. As you type in values, you can press the tab key to advance to the next field you have set up. So after typing in the title, the second field is the date. This demonstrates the “default value” usage of the TM snippet syntax. ${2:Something} would print “Something” and highlight it. If I like “Something” I can just press tab again and leave it; if I don’t, I can type in a substitute. In this case, I’m actually executing a UNIX script that I’ve written which produces my timestamp. Field #3 is in the Keywords line. That is a little complicated by the fact that my token syntax also uses curly braces, so the actual default field is inside the first set of brackets, and that default value is “R2.1.Dailies”. When I hit tab, that gets highlighted and if I like it I just press tab again; if not, I can type in another token.
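The `~/bin/timestamp` script itself isn’t shown in the thread, but a stand-in that reproduces the “10267'034” shape might look like this. This is purely a guess at the format: two-digit year plus day-of-year, an apostrophe, then a three-digit fraction of the day (000–999); the real script may encode something else entirely.

```python
from datetime import datetime

def timestamp(now=None):
    """Hypothetical stand-in for ~/bin/timestamp.

    Assumes "10267'034" means %y + day-of-year, an apostrophe,
    then thousandths of a day elapsed (0-999). The actual format
    used in the thread may differ.
    """
    now = now or datetime.now()
    secs = now.hour * 3600 + now.minute * 60 + now.second
    frac = secs * 1000 // 86400  # thousandths of a day, 0-999
    return f"{now:%y%j}'{frac:03d}"
```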

Now we get to the part of TM’s snippet code that is cool. Note the next field on the keyword line is a number that has already been used, this means it will be populated with whatever exists in $2, which is my date. Further, I’m running a regular expression on it which strips out any punctuation.
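In plain Python, that back-reference-plus-transform step amounts to something like the sketch below (using `re.sub` in place of TextMate’s `${2/…//}` syntax):

```python
import re

date = "10267'034"
# Equivalent of TextMate's ${2/[\/\.,:;\'\\]//} transform applied
# globally: strip punctuation out of the date back-reference to
# form the content ID.
entry_id = "id" + re.sub(r"[/\.,:;'\\]", "", date)
```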

Finally you have the MMD title, again, using the back-reference $1 so whatever I typed into the Title field will automatically get populated here.

$0 is a way of telling TextMate where you want the caret to go when you are done with the boilerplate. You can shift-tab to go back to fields, but once you hit this last one it becomes “fixed” and you’ll have to use ordinary text editing to change things from that point on. All of the cross-linking code is cancelled. So if I revise the title I’ll need to edit it in both places.

The above is something I never see, that’s just the control code. In TextMate when I invoke this boilerplate, I get:

Title:     Example entry
Author:    Ioa Petra'ka
Date:      10267'034
Keywords:  {R1.1.Dailies} id10267034
Format:    complete

# Example entry #

All right, I typed in “Example entry” to demonstrate how it looks while typing in the title field, but otherwise that’s how it comes out. As you can see, if I’m adding a new journal entry right then and there, I’m done. The date is already fine, the token is what I want, I’m completely done adding meta-data once the title is typed in.

This is how I’m able to keep meta-data entry down to a matter of seconds. Even if I need to edit the token and date, that’s nothing in the grand scheme, and the ID form gets automatically processed as I type in the Date field.

Okay, so now for the filename. As I stated above, I actually don’t create the files by hand. I export out of Boswell and run a script that generates files for me based on their meta-data. So that aspect is automated. However the Boswell title is something I do type in by hand. It’s a little redundant, but frankly it doesn’t bother me. I also enter in my token into Boswell’s tag field, but I also use that field in the Journal for status. The token isn’t added until I’m done with the entry.

If I wasn’t using Boswell, I could probably get away with not naming the file initially at all. I could just use my script to name the file based on its meta-data. In fact, if I was running purely file-system, I would have a “Journal” folder, and at the end of the day I’d process all of the journal items ready to archive with this script to reset their filenames, and then move them to the appropriate chronological folder.
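That end-of-day rename pass could be sketched like this. The actual script and naming convention aren’t shown in the thread, so the `<Date> <Title>.md` pattern here is an assumption for illustration:

```python
import re

def archive_name(mmd_text):
    """Hypothetical sketch: derive a journal file's archive name
    from its MMD meta-data block.

    Assumes a '<Date> <Title>.md' naming convention; the actual
    convention used in the thread may differ.
    """
    meta = {}
    for line in mmd_text.splitlines():
        if not line.strip():
            break  # the meta-data block ends at the first blank line
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    # Replace characters that are unsafe in file names
    safe_title = re.sub(r"[/:]", "-", meta.get("Title", "untitled"))
    return f"{meta.get('Date', 'undated')} {safe_title}.md"
```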

And, it looks like I’ve already answered your second question, regarding how I handle the ID. Initially I didn’t put the ID in the file like that, but I’ve found it to be very useful in cases where content-searches are being used. I’m not sure if it is necessary, though it is nice to have a redundant “backup” in case the filename gets damaged for some reason. It’s no added workload for me since TextMate handles the reproduction.


Oh, the power of snippets! I spent the weekend playing with them. Since I don’t use Boswell, I’m going the other way around: the metadata gets generated by the filename. All automated.

Also, I’m syncing the same folder between TextMate and Notational Velocity. NV’s great search and TM’s snippets make them work wonders together – I use them pretty much as one app. That way, I can browse the notes in NV, see if there’s any connection with what I’m currently writing and, using snippets, create MMD links in the new entry.

(I have a snippet set up that encloses the selected text in [] and appends whatever is on the clipboard – the title of the note I’m linking to, copied from NV – enclosed in (). The snippet also removes accents and spaces from the clipboard’s contents, creating html-friendly names.)
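The accent- and space-stripping part of that snippet could be sketched in Python like this (a hypothetical helper, not the actual TextMate snippet, which would do this with a regex transform):

```python
import unicodedata

def mmd_link(selection, clipboard_title):
    """Wrap the selected text in [] and append the clipboard's note
    title in (), with accents and spaces stripped to make an
    HTML-friendly name.

    Sketch of the snippet described in the thread; names and
    behaviour are assumptions.
    """
    # NFKD decomposition splits accented letters into base letter +
    # combining mark; the ASCII encode then drops the marks.
    ascii_title = (
        unicodedata.normalize("NFKD", clipboard_title)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    return f"[{selection}]({ascii_title.replace(' ', '')})"
```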

So, all in all, I think I have a nice system going on. Thanks again for everyone’s comments and suggestions, especially Ioa’s.

Cheers!

Apologies, but I am a little confused about the quote above :confused:

My understanding is that you create your entries in TextMate or Scrivener and then save/drag them into Boswell. I guess you may also create some in Boswell itself? You talk above about not creating the files by hand, but exporting them out of Boswell and running a script to generate the files. Not sure what you mean here - sorry. I know that you back up out of Boswell to a backup file. This file also contains links to any media that is cross-referenced to the Boswell files.

Email
Can you please explain your methods for dealing with email. I assume that you import this into Boswell also. The suggested method is to save the email to a folder and then import into Boswell. I guess you rename the email (getting to know you now :laughing: ). Do you prefix with your date stamp at that point? Is the limited System 9 file name an issue? Do you edit the email while in the Boswell journal and add the usual metadata there?

The saved Boswell email is plain text, so you cannot reply to emails from within Boswell. Do you cross reference to your email archive, in a similar way as you do for media?

Tag Field
You mention that you enter your token in the Boswell tag field. What are the benefits of this? Is it because you plan to edit tokens in future - even when in the archive? What is the significance of the curly brackets around your token?

In an earlier post you mentioned that you might place a code 4 in your token, to highlight entries that are unfinished. This sounded like a great idea, although I could not figure out how you remove this code when you eventually do finish the WIP. Is the ability to edit the archived token the answer?

I have recently purchased a copy of Boswell and I am working my way through it. I am not yet a Scrivener customer, but will be at the end of the month (can’t wait). Further pointers on how Boswell and Scrivener fit together would be appreciated. You will be pleased to know that I have picked up the basics of MMD :astonished: Printing out into presentable formats should be straightforward, although I need to learn how to edit the format (without picking up LaTeX skills).

I may take a look at TextMate, but have little experience of snippets. What are the major benefits over standard Text Edit, in terms of your workflow?

Thanks again for all your support and patience Ioa.