None of that is true, so I would say there is some other cause. What happens when you import:
1. The original file is read from disk and passed through a conversion library, turning its original text format into Scrivener’s internal file format. Which library is used depends on the format: there is a Java-based converter for ODT, which falls back to the system converter if you don’t have Java installed; DOCX uses a bespoke converter; and RTF, RTFD, and plain-text formats go through the system libraries.
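The converter selection described above can be sketched roughly like this. This is purely illustrative, using made-up names rather than anything from Scrivener’s actual code:

```python
# Hypothetical sketch of the converter dispatch described above; the
# function and converter names are illustrative, not Scrivener's real API.
def pick_converter(extension: str, java_available: bool) -> str:
    """Choose a conversion route for an imported file based on its format."""
    ext = extension.lower().lstrip(".")
    if ext == "odt":
        # Java-based converter preferred; fall back to the system one.
        return "java_odt_converter" if java_available else "system_converter"
    if ext == "docx":
        return "bespoke_docx_converter"
    # RTF, RTFD, and plain-text formats go through the system libraries.
    return "system_converter"
```

So, for example, `pick_converter(".odt", java_available=False)` would route the file through the system converter instead of the Java one.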
2. The result is then further processed using bespoke code to convert various features that might not be supported directly by the system (embedded images, styles, footnotes, etc.).
The important thing to be aware of is that until both of these steps are complete, there is no purpose at all in storing the resource in the binder. Before step 1 it would be in a file format Scrivener doesn’t understand, so it would be a read-only Quick Look preview at best. Before step 2 it is in the right format (RTF) but missing crucial features, which could cause data loss, so we wouldn’t ever put that into the binder either.
3. Now that the content is prepared, the software integrates it into the data model in one of two ways:
- For drag and drop or simple file imports: the item is given a binder title based upon the file name and acquires some basic default settings (such as “Include in Compile” being true).
- For import and split: the prepared content is processed using the given criteria, and the above process is applied to each chunk that is recognised. This can involve building hierarchy when the split criteria and the content call for it (such as Heading 1 and Heading 2 styles producing a nested tree). Items are named according to rules that differ depending on the split type and settings; it’s probably not worth getting into that here. In essence, for the purposes of step 1, import and split acts as though you imported multiple files; from that point on, the import process is the same.
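The hierarchy-building half of import and split is a classic heading-levels-to-tree transformation. Here is a minimal sketch of the general technique, not Scrivener’s actual implementation:

```python
# Illustrative sketch (not Scrivener's code) of how split criteria such as
# Heading 1 / Heading 2 styles can build a nested binder tree.
def build_tree(chunks):
    """chunks: list of (heading_level, title) pairs in document order.
    Returns a nested list of {title, level, children} dicts."""
    root = {"title": "(root)", "level": 0, "children": []}
    stack = [root]
    for level, title in chunks:
        node = {"title": title, "level": level, "children": []}
        # Pop back up until we find an ancestor with a shallower level.
        while stack[-1]["level"] >= level:
            stack.pop()
        stack[-1]["children"].append(node)
        stack.append(node)
    return root["children"]
```

Feeding it `[(1, "Part One"), (2, "Chapter 1"), (2, "Chapter 2"), (1, "Part Two")]` yields two top-level items, with the two chapters nested beneath the first part.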
4. Internally this also involves creating .rtf files on the disk, saved into matching subfolders named after each item’s internal ID in the data model.
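The on-disk pattern amounts to one ID-named subfolder per item containing the item’s RTF. Here is a hedged sketch of that idea; the folder and file names here are assumptions for illustration, not Scrivener’s documented project layout:

```python
# Hedged sketch of the on-disk pattern described: each imported item gets
# its own subfolder named after an internal ID, containing an RTF file.
# The "Files/Data" path and "content.rtf" name are illustrative assumptions.
import uuid
from pathlib import Path

def write_item_rtf(project_dir: Path, rtf_body: str) -> Path:
    item_id = str(uuid.uuid4()).upper()          # internal ID for the item
    item_dir = project_dir / "Files" / "Data" / item_id
    item_dir.mkdir(parents=True, exist_ok=True)  # matching subfolder per item
    path = item_dir / "content.rtf"
    path.write_text(rtf_body, encoding="utf-8")
    return path
```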
5. The original content is flushed from RAM, save for whatever is loaded into the editor. Again, it has no useful purpose in the project and wouldn’t be stored anywhere.
So that is why you see everything included in compile: not because the import changes anything, but because that is the core default for all new items, whether they are imported or created within the software. It doesn’t matter where you create them, because this setting operates at a level beneath the Draft folder. You can drag things in and out of the Draft without losing the setting, or even trash them, which is crucial, because the setting matters more than where the item currently sits. This is what allows people to swap books in and out of the Draft folder without losing their important inclusion settings, and what allows compiling material from outside of the Draft, which isn’t common, but can be achieved by compiling search results or collections.
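Put another way, “Include in Compile” behaves like a property carried by the item itself rather than by its location. A minimal sketch of that model, with illustrative names:

```python
# Minimal sketch of the point above: "Include in Compile" is a property of
# the item itself, so moving it between folders does not change it.
# The class and field names are illustrative, not Scrivener internals.
from dataclasses import dataclass

@dataclass
class BinderItem:
    title: str
    include_in_compile: bool = True   # core default for all new items
    parent: str = "Draft"

doc = BinderItem("Chapter 1")
doc.parent = "Trash"                  # trashing the item...
assert doc.include_in_compile         # ...leaves the setting intact
```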
Therefore the presence of these items in the Trash must be because they were trashed at some point. Perhaps it was long ago; it doesn’t really matter too much. You have, I think from other threads, figured out how to exclude trashed items from your search results, so that resolves the main confusion that started this thread. We do include them by default, because a common reason for searching is to locate missing things, and accidentally trashed data is not an uncommon place for things to go missing.
I just wanted to point out that some other activity caused them to be in the Trash folder, so you don’t think you have to periodically clean them out when importing.
There are no automated processes in Scrivener that would trash items without you doing so yourself. That folder is 100% user controlled.