Web Pages

Hi, I am new to Scrivener, and I really love this app. One question. When I use the feature to keep a web page, I notice that when I click on a link on the page it automatically brings me to the web, and not to the part of the page in Scrivener where the info I require is. This applies a lot to online help manuals, which I prefer to have on my hard drive (saves bandwidth).

I have tried downloading the page and then trying to load the local URL, but no luck… anyone else come across this?

Thanks.

There is already a web trick for doing this, but it requires the developer of the page to be courteous to their readers and supply what are known as “anchors”. You most often see anchors placed at the top of the page, or at each title position. This allows you to link directly to a spot on the target page from anywhere on the Internet using the URL form:

http://www.documentation.org/some_application/index.html#anchorName

It’s the stuff after the hash that makes the magic happen. There is a slightly more advanced trick which will work more often. Any HTML element that has been given an ID can be linked to directly, not just specifically designated anchors. Still, if the author of the page dumped 15 pages of paragraph text without any IDs in the middle, you are out of luck.
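To make that concrete, here is a minimal sketch of what the two forms look like in a page’s source (the names “setup” and “exporting” are just made up for illustration):

    <!-- old-style named anchor: reachable as index.html#setup -->
    <a name="setup"></a>
    <h2>Setting Up</h2>

    <!-- any element with an id works the same way: index.html#exporting -->
    <h2 id="exporting">Exporting Your Work</h2>

If neither of those appears anywhere in the page source, there is simply nothing for the part of the URL after the hash to latch onto.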

There really is no mechanism for doing this other than what I have outlined here, partly because, when used correctly, this mechanism is extremely portable and useful, since it is a valid part of the URL. It can be saved in bookmarks, or even imported into Scrivener.

Thanks for replying, Amber. The anchors are there, but they point to the original web page on the net, not to the section in the Scrivener page. Is it possible to edit the Scrivener page in HTML? A bit slow, but still!

Am I right in thinking that Scrivener does not support anchors? It seems it will not load a web page from the desktop either, which would be better.

I am also wondering just how far I can push this app. I have been using it for about a week and it is already 100MB in size. It is slowing down, all right, though not a problem yet; I am wondering if I can put the amount of info I deal with every day into it.

My current documents folder is 7.7GB in size, and in time I would be transferring all of these documents into it. Is it up to this? I was also hoping to put in extra formats that I do not keep in my documents folder, like video and sound files.

I am asking too much I think…

They seem to be working for me. If I drag in a URL from my web browser that has an anchor link embedded in it, it will always load at that point when I click on it from the Binder, or from Scrivener Links pointing to it.

I have no idea what you mean by Scrivener loading a web page from the desktop. Forgive me, for I am a little literal minded sometimes, but that makes no sense to me. If by chance you mean Scrivener keeping track of a file on the desktop, then yes, you can link to external files of any type. Reveal the Inspector (Cmd-Opt-I) and then the References panel (Cmd-7). Simply drag resources into this area. You can place other Scrivener documents, files from your computer, and web links in here.

Some advice: do you really deal with 7GB of data on a daily basis? Is this all for one book? Scrivener isn’t really designed around the premise of everything-in-one-project. A project means just that: a project that would result in some “product”. If that is all for one project, then yes, you might consider relying on References a bit more, rather than dumping every possible file into the project Binder. Keep in mind that when you do that, it copies the data. Unless you intend to delete the originals, that means your Documents folder will be over 14GB with the project in it. It will also severely limit your ability to periodically back up the project.

Scrivener is a bit like iTunes in that it can handle tons of data. It doesn’t load everything at once, and as long as you keep the quantity of resources per organisational folder in the Binder to a minimum, you should be all right (above concerns aside). I don’t think you are asking too much, but perhaps you are just asking the wrong favours of Scrivener. :slight_smile:

Thanks, Amber, for taking the time to reply to me. I probably explained it badly, and there is everything to be said for being literal, btw… :smiley:

When I import a web page into Scrivener, it does not use the anchors on that page to scroll to that part of the page. (These are very long web pages, more blog-like.)

For example this manual docs.google.com/Doc?id=dxt3z6v_2 … troduction
Scrivener imports it as a web page, but click on any of the table of contents links and it automatically opens your browser and goes to that place on the page on the web, not in the Scrivener page.

7GB for a book? It’s not a book, it is a lifetime of study, and the 7GB is just a small part of the paperwork; I only moved to digital 6 yrs ago… The ability to access this information quickly in a digital format is essential.

The last time I worked with this intensity I built a desktop website and that sufficed, maybe I should do the same again???

I understand what you mean now with the anchors. It will load to an anchor but will not reposition to an anchor; so yes it does look like that is something Scrivener isn’t handling in the best way. Hmm. I’d post it in the bug report section since a document linking to itself is fairly common and useful. It might be difficult to fix, given that I think Scrivener is essentially just running the WebKit interface in a window, but I could be wrong about that. The developer is out on vacation for a bit, so you might not get a response immediately.

Okay as for project size: As stated, if you keep the quantity of items in a particular container to under 30-ish for media and 50 for text you should be okay. The biggest area of sluggishness is when Scrivener is asked to draw a Corkboard with several hundred pictures, so as long as Corkboard only shows a certain amount, that is mitigated. The precise number will of course depend on how much CPU power your machine has available.

The references pane could help you out. If having things loaded into split views is not so important, or you have enough screen space to have a viewing application open beside the Scrivener window, then this could be a good alternative. You could then organise groups of files topically by creating documents with lots of links in them.

If you need one-click access to lots of stuff, you could consider an out-board application. There is quite a variety on the Mac, and if you are doing a lot of scanning, DEVONthink Pro Office has built-in document scanning and OCR, which can come in very handy. I myself always try to use Scrivener first and then if it gets to be too much, I’ll rely on an external application to handle the bulky stuff that I don’t need constantly, leaving Scrivener with the very present-tense research items and the writing process itself.

So yes, it is designed around the concept of a book-sized project. Larger projects and multiple books will potentially put a strain on what it can handle.

I am asking too much of it, Amber. It is a wonderful app and I hope it keeps developing. I was hoping to move away from the plethora of out-board apps I use and try to streamline the work process; it has taken me two years to get my work rate up to what it was before I did a system and hardware upgrade (OS 9 to Tiger).

I dream of a system where I do not leave the interface to switch to yet another app to do something. Although CS4 and FCP Studio 2 are very good with their integration of apps, as always the demands of people like myself are even higher… and I want it to make coffee for me! :laughing:

Thank you for your time and patience. I see you play Second Life, a truly amazing app; I have disciplined myself not to play it, I am very happy to say.

Keep up the good work with Scrivener. I will certainly keep using it on an everyday basis; I think once we adopt an app, it becomes difficult to change to another.

You aren’t the only one. I’ve long pined for an entire desktop architecture which revolves around raw data instead of lots of files and applications in a million different formats. The window system would be modular, so you could bring up a two-pane window say, and load modules into the panels; save that window configuration by use. Then open up a 3-pane and load several modules into that. The key thing is that all of the applications would be pooling data off of one central core and displaying it in a variety of ways depending upon the purpose of the module. The Web browser would dump sites into the core automatically, tagged as tainted, and it would instantly appear in every application accessing that part of the pool (with anchors!). Chatting with someone about a project? Automatically logged, tagged by identity, and auto-clustered with the project. Using such a system, you could have a Scrivener UI sitting on top of an industrial strength database that is being managed by scripts which are looking for patterns and quantities in the same way that spam detectors work, and setting off little alerts in the Dock when watch-words you have set up become clustered in certain ways, et cetera. Everything would be fluid; organisation would be like folders, but not like folders in that they would be non-exclusive, and you could wipe out a data-cluster (folder) without touching the original data—just the organisation of that data. These collections would be accessible to all modules as working sets. Nothing could be deleted, only hidden. Blah blah blah.

Mac OS MM.V.IX

I haven’t been on Second Life in a while; I went through a similar period of self-training. :slight_smile:

You have just described the dream system, maybe in a decade? It is an unusual business, computers, in that it is dominated by one major player, MS, with a very good but relatively minor one, Mac, while Linux is taking its time to go mainstream. The speed of development is faster than Moore’s law, and if you look at what is being developed right now by Intel and other manufacturers, they are building microprocessors whose circuitry is measured at 5-6 atoms across, way ahead of what we have now; but it seems the overall systems, I think, are quite slow to develop and change.

Maybe this is just an example of the hunger we have for development in this area and the confidence that it will develop.

I studied and worked with computers 25 yrs ago, and my colleagues thought I was joking when I said I was leaving to pursue a career in fine art, and that I would return to computers when I could talk to them, in 20 years. It was easy to see then that the methods being used to produce chips (hand drawn, photographed and etched) would become ultra fast. See the connection between computers and fine art? :smiley:

20 years passed and I had fun in fine art. I returned to computers and bought what was a top-of-the-range Mac 6 yrs ago. I was shocked to see a mouse in the box; I could not believe that the screen would be controlled by a mechanical object and not a touch screen! OK, the voice control is there, albeit far from intuitive, I found.

I know touch-screen technology is well on its way into mainstream hardware, but ten years later than I had expected. Btw, 80% of fine artists around the world are now producing work in digital media; no one, including myself, knows where it will lead, but we are all convinced that it is leading somewhere!

I think the big problem with the industry right now is that we have hit a twilight area where the current methods are simply pushed about as far as they can go; we hit the point of diminishing returns about five or six years ago, and now the only way to get a dramatic speed boost is to put more cores in. The other half of the problem is that the next generation of computing is still pre-alpha or even conceptual (and riddled with logistical problems like absolute-zero quantum cores) at this point. The breakthroughs required to make them a reality are still undetermined and extremely complex.

The problem with voice and touch screen is that it is not very efficient for anything beyond incidental use. In technology from 1970, using Vim, I can delete five paragraphs and throw them to the end of a document with nine keystrokes—no mouse and that isn’t leaving the home row either. Doing the same with voice would involve a lot of talking, and frankly most people cannot talk that much. Try holding a conversation for eight hours non-stop, then do that five days a week. Right now, with modern technology on the Mac, I can select four files, move them to some deeply nested folder, and be right back in the application I started in with around the same number of keystrokes mentioned above, using LaunchBar, the Vim of computer management. Would it really be more efficient to say out loud, “Computer, open application TextMate and paste into a new document” than Cmd-space t <enter> Cmd-n Cmd-v ? I can do the latter so quickly that the computer can barely even keep up (LaunchBar learns as you work with it, so it knows that ‘t’ means TextMate—I don’t have to wait for feedback confirmation), and I’m not disturbing everyone in the coffee shop like some cell phone user.
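To put that concretely, something like the following would do the trick in Vim (one possible sequence, assuming the cursor sits somewhere in the first of the five paragraphs; the exact keystroke count depends on where you start):

    {     back to the paragraph boundary above the cursor
    d5}   delete forward through five paragraph boundaries
    G     jump to the last line of the document
    p     put the deleted text back below it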

Touchscreens have other problems, namely with mechanical fatigue. Playing with an iPhone is one thing, or a touch-based keyboard, but if the screen is to be manipulated directly it creates a conflict of posture and positioning. The screen needs to be at an angle, distance, and height that is optimum for neck and back posture, minimising eye-strain, and so forth. That position happens to be about two feet away, where the top of the screen is level with your eyebrow and slanting slightly toward you at the bottom. Try holding your arms up to that for eight hours—or five minutes. Placing the screen in a position which is comfortable for your hands to rest on, in your lap, causes the opposite problem: now you are hunched over with your neck bent down, and in ten years you are going to need a back brace and weekly physical therapy.

I think the next generation of computer input will be partially or entirely thought-based. You could use your computer while combing your hair, if you have sufficient abilities at multi-tasking. My guess is that such a system could be coupled with the keyboard initially. Window and pointer manipulation is already a reality, though still a bit clumsy to use. We are, naturally, quite a ways off from being able to “speak” to the computer with thought alone—that might be something genuinely impossible to accomplish. But keyboards are quite fast, so it is less of an issue. We can type far faster than most of us can talk (and I doubt a computer voice recognition interface would be able to make much sense of an auctioneer), and most can type fast enough to keep up with their amended (not initial) thoughts. Amended is more important anyway.

Back to the top, for incidental use that is something else entirely. A microwave oven with a voice activated interface would be entirely more feasible and useful. Earl Grey, hot. But writing a Perl script? I’ll take a keyboard.

I just finished editing a set of book proofs, a tedious chore that many writers hate. You spend years perfecting a text, then a stranger (copy-editor) comes along and jams many unnecessary changes at you.

But editing on paper is far easier than on screen, even with today’s slick word processors. I would love to have an interface that responds to drawn symbols, especially in the revising and editing stages. It would be a tablet display, and I would mark up the text with a stylus.

Copy-editing symbols are easy to write. Insert a word? Mark a caret, and the line opens to receive some new text. Deletes are a cinch, a long stroke with a pig-tail ending. Need to invert some words? Draw a hill-and-valley line, and the words shift accordingly.

When drafting, I prefer to type, but editing should be by hand. I agree with Amber that voice-commands are inadequate. Imagine having a code in the node and trying to get the laptop to type combah, sebi-colod, ambersad.

I dunno about having a computer to respond to my thoughts. The Ego perhaps, but definitely not the Id. (I’ll leave that to Vic.)

Amber:

Thank you for “Earl Grey, hot.” It made me happy, and got me thinking.

“Earl Grey, hot” was always my favorite part of any given Next Generation episode, because it felt like the future to me. The warping starship, the holodeck, the charming android – none of them made me want to live in that world more than the ability to bark an order at a machine and have it deliver a perfect cup of tea. And it wasn’t just that the tea was fabricated out of nowhere – I was also taken with the idea that the thing knew when it was being spoken to. There was something beautiful about a machine with enough power to perform a nearly miraculous task so elegantly. (Surely it was an Apple replicator.)

I’ve been using the Pulse Pen lately, and that feels like the future to me too (there’s a lot of wallop in that little pen!) I think my fondness for the Pulse, and the Mac, and Scrivener for that matter, is that they all seem – like that replicator – like little nuggets of technology that began with the thought “What I want to be able to do is…” Bad technology makes you do things you don’t want to do. Good technology is crafted with an ideal user in mind, and uses its power to add value to the way that user is already living.

I guess where I’m going with all of this is that we seem to be in a stage of computer evolution where we’re doing things because we can, not because we need to. I think the next great advance is going to come when someone has a clear thought about what she wants to do, and makes the technology keep up with her creativity.

I think that a major part of the problem with current tech is beginning to shift in the right direction <-- is a hideous example of English.

I think that we are beginning to overcome a current limitation in tech, and we are shifting focus in the right direction. The limitation is the focus on centralized processing. One generic chip cannot be efficient at discrete tasks: IO, memory, storage, OS, etc. cannot each be optimized, but will be optimally compromised. The move to dedicated devices such as always-on voice recorders/detectors, visual detectors, etc. allows for hyper-optimization that will yield the “Earl Grey, hot” world we all think the future will be.

Basically, until the industry stops looking at the big box by the desk and starts remembering the concepts of embedded design and optimization, we are near the pinnacle of compute capability. Sure, things will get fancier and prettier, but is multi-touch really a move to the future?

Just my thoughts.

That’s the beauty of the technique as it is being developed. The “thoughts” are not actually being specifically monitored, but rather the spatio-temporal nerve impulses through the fingertips. It’s similar to those “stress-analysers” where you can touch a contact and, by mental persuasion, cause a tone generator to increase or decrease its frequency. All you are actually doing is adjusting physical parameters which can be monitored through your finger. These devices are based on the same principle, though much more complicated, and can respond to such impulses as wanting to “move” the body without actually triggering the nervous system into moving. If you think about moving without moving, you still send out tremendous waves of electric impulse throughout your body, and that can A) be monitored systematically and B) become an acquired skill; some would have it even more intuitive than operating a mouse mechanically. It would take some getting used to.

That all covers selection and movement quite well. I believe the ideal interface for this device would not be a pointer, but rather a condensation. Visualise a 10% opaque “cursor” which filled the entire screen. You could “think” about moving into one spot on the screen, and as you did that the cloud-cursor would shrink, allowing you to adjust its shrinkage as it minimised. Releasing the movement impulses would cancel; pushing them all the way to zero-plane would “click”. This, as described, would take place in a fraction of a second. With another movement you could then select from that point quite easily with a similar gesture. A “thought” of thrusting back away from the device would bring the cloud-cursor back to 100-plane.

Apply this to your examples of editing. You have your hands up to the keyboard, you “condense” the cursor to precisely the spot you wish to add a word. There is no traversal of Cartesian coordinates, you simply visualise the spot on the screen you wish to click on and a fraction of a second later you get a tactile keyboard click which lets you know you’re on. You type, then move on to the next spot. Condense the cursor-cloud into a shape surrounding the paragraph, “click” it, and then drag it to spot B to move it. Keyboard modifiers could be mixed into all of this just as they already are for mouse movements. We are really only getting rid of the mouse here; and more importantly we are getting rid of the mechanical aspect of linear-space pointer function. It’s no longer “click and drag” but “grab and drop”. Non-linear.

This is all just a starting point—a way to replace the mouse. As the tech gets more sophisticated with time, both in mechanical detection of externalised thought and in splitting the hairs of different external modalities, some interesting things could emerge. You could perhaps start typing in plane-50 and gently lower the text into a determined spot while you are typing. That’s the beauty of the idea—it relies upon two entirely different parts of the mind. The typing we accomplish is purely mechanical—all muscle memory. The part of the mind that manages pointer selection and focus is not. It is one area where we can multi-task effectively.

What Sean has to say about the reduction of the interface is, I believe, absolutely pivotal. Most of us use personal computers to manifest or store things that we are not capable of biologically doing, or to augment our biological ability to do so. We could write our books with pens and paper, but a computer augments our ability to do so. We cannot biologically produce a stack of inked paper out of a flap in our arm the way a spider produces a web of silk, so we have printers to do this. As technology increases and we reduce the complexity of input mechanisms, we can move closer to the true aim of the personal computer, which is to allow our minds to manifest and store things they are not otherwise capable of doing. I think more and more of the computation will move back to where it should be: in the brain. Not all of it, naturally. We did build computers for a good reason, and that is our ability to dream up problems more difficult than we can immediately solve, but a vast percentage of what we demand of personal computers is simple augmentation or manifesting.

If using multi-touch was the way to go, we’d all still be writing letters to one another. We do not, however; we send emails. It is easier to type out our thoughts than to draw letters on a page using graphite or ink. Moving computers back into the realm of drawing out letters and then trying to recognise them would be a step backward. I can write at about 30-35 words per minute with a pen and paper, using a special form of shorthand, which is a lot faster than most people write. I can type, on average, at three times that speed. Even if the computer could fathom my shorthand, I would still be taking a huge step backward. And for the problem of mouse replacement, has anyone ever used a painter’s digital tablet? They are fantastic for painting because they mimic the sensation of drawing with a pen or pencil, which is very difficult to do with a mouse. But it is also very tiring to do. There is a 1-to-1 correlation of distance that must be covered. Meaning, to move the pointer across the screen, you have to move that distance with your hand. A small tablet can reduce that effect at the cost of vastly reduced precision. But a screen-tablet will always be 1-to-1. Meanwhile a trackball, mouse, or trackpad can all be set to dramatically multiply movement, so that a few millimetres of movement is translated to a few centimetres, or even more in some cases. So it isn’t a good writing replacement, and it isn’t even a good mouse replacement.

We already do editing by hand; that is what mouse motion or keyboard command usage is. The notation we used on paper is a two-step process, and it is what we developed computer writing to get away from. You draw on a page so that later you can move the paragraph. Why draw a symbol on the “tablet” so that the computer can move the paragraph, when we can just go ahead and move the paragraph immediately? What advantage is there to taking more articulated steps in drawing brackets and arrows when the alternative is a very simple range of motion—or, in the above hypothesis, a focussing of the eyes and a mental translation of that? In fact, I find the concept of mechanical manipulation of a pointer so inefficient that I’d prefer the keyboard nine times out of ten. That is why I use editors like Vim, and software like LaunchBar, because they let me do things in a tenth of the time it takes to do them with a mouse. Making the movement concept even more articulated and contextually dependent (arrows and such) would be a huge step backward.

That’s a dazzling vision of the future, but it also troubles me. You remember the race of humans that WALL-E helps to rescue? They had all turned into fat little Teletubbies, glued to their robo-cars and totally absorbed in video messaging.

As good as our brains and dreams are, as physical beings we’re still hard-wired to the Neolithic, and these old bodies need work. So I don’t mind the use of hands in typing, drawing, hammering, splitting firewood, or editing text. The hands give me tactile contact with words; they help me to compose, pull together diverse bits into finished work.

I can see how a thought-prompt seems a step forward, but time- and labor-saving technology always bears a price. Look at a current generation of kids, zombied by XBox, never visiting the country, and we see obesity and asthma on the rise.

I’m no Luddite, but I don’t want to see technology replace craft, a wonderful Old English word that means strength, skill, and cunning. So when I prefer hand-editing, I’m looking for a machine that is fast but still in touch with me, as I’ve learned to work.

Your preference for command-line editing is understandable. Folks with a deep knowledge of machines and systems, down to assembly language and up, can be wizards with a few keystrokes. For the millions of users that make computing popular and cheap, the GUI and mouse are essential. They are forward, not backward, steps in creating a new mass medium.

Nothing new here.

theatlantic.com/doc/194507/bush

Amen.

[see also: Wells’ Time Machine]

ps

I see it as the opposite behavioural move, Druid. Currently, computers require one to sit down to use them (or some variation). A computer running in your cortex could be used while you were gardening or jogging or what-not (though advisably, probably not while chopping wood!). I think the transition of computing from machine-box to true human interface would effectively solve the problem that WALL*E depicts, which, it is important to note, was a satire on the entertainment-driven “amuse me while I sleep” post-American culture more than on what we are talking about here. At least that is how I saw it. Again, I think you are mixing issues by blending devices like the Xbox with labour-saving devices. Technology like the Xbox would not have been possible without our endeavours in augmentation and manifestation, yes, but its development was not a logical following of the same motivations that caused us to build quasi-thinking machines—rather, it was a side effect of the same culture that considers watching the History Channel an erudite pastime.

My feeling is, once we get past the transition point, which has existed for our entire observation of the computer thus far, where computers become more marginal and less dominating, we will see a return to craft amongst those that are keen on it. Computers will be a facilitator of craft; not an alternative to it as they are now. Now I must give it my undivided attention; now I must submit my body to slow atrophy that must be made up on a bicycle later today.

It’s a good point to bring up regarding mass accessibility versus power that is arcane. What I was intending to demonstrate, however, is that neither voice control nor tablet control is any better than the current GUI and mouse. I think that is the critical point. It is, in a sense, no different than computer interfaces that tried to duplicate the appearance and function of physical objects. The argument was that a modem dialler that looked like a telephone would be easier for the public to understand and use. It turned out it was just frustrating. The process of linking up to an Internet connexion was re-visualised and simplified; made abstract from prior notions. Now we have a drop-down menu where you can select a Wi-Fi antenna and seconds later you are on the 'net. I think tablets are in the same arena as pixel phones. It might sound nice on paper (heh), but in practice there is probably a better solution to the problem that is more direct—like dragging a paragraph from point A to point B instead of indicating that is what you would like to do, and hoping the computer interprets your squiggles correctly. I don’t think dragging a paragraph is, at this point, an arcane command-line activity.

Just read this… “wow”, and written in 1945, I take it?