For example, there could be a setting for the maximum amount of space Dropbox will use, e.g., 40 GB, and Dropbox could also be smart enough to detect overall disk usage. If you `grep -R`, it may download/open/read the files, but once you reach the 40 GB cap, or get near your disk capacity, Dropbox could start removing the local copies of files that are not pinned as local, i.e., remove the files that were downloaded because of the open()/read(), not the files you explicitly told it to keep local.

I don't know how the team will choose to implement these features, but I'm confident that it will be well thought out and tested. Remember, Dropbox is the company that monkey-patched the Finder to get the sync icons.
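A policy like the one sketched above (a size cap plus eviction of unpinned, automatically hydrated files) could look roughly like this. Everything here is my own illustration, not anything Dropbox has published; `CACHE_QUOTA`, `over_budget`, and `evict_candidates` are made-up names:

```python
import shutil

CACHE_QUOTA = 40 * 1024**3  # hypothetical user setting: cap the local cache at 40 GB

def over_budget(cached_bytes, path="/", quota=CACHE_QUOTA, min_free_ratio=0.10):
    """Trigger eviction when the cap is exceeded OR the disk is nearly full."""
    disk = shutil.disk_usage(path)
    return cached_bytes > quota or disk.free < disk.total * min_free_ratio

def evict_candidates(files, pinned, quota=CACHE_QUOTA):
    """Pick least-recently-used, unpinned files to drop until under quota.

    files: {path: (size_bytes, last_access_ts)}. Pinned files, the ones the
    user explicitly told the client to keep local, are never evicted; only
    files hydrated as a side effect of open()/read() are candidates.
    """
    used = sum(size for size, _ in files.values())
    victims = []
    # Oldest access time first: classic LRU eviction order.
    for path, (size, _) in sorted(files.items(), key=lambda kv: kv[1][1]):
        if used <= quota:
            break
        if path in pinned:
            continue  # keep files the user pinned, no matter how old
        victims.append(path)
        used -= size
    return victims
```

Usage: with a 30 GB pinned ISO, a 20 GB auto-downloaded photo directory, and a newer 1 GB file, only the photo directory is offered up for eviction, since dropping it already brings the cache back under the cap.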
Based on my testing of a pre-release version of the feature (it isn't released yet), if you were to do something like `find ~/Dropbox -type f -exec md5 {} +`, it would download the files.

As a user it did exactly what I expected. It was totally seamless and amazing.

Compared to the complexity of what has already been implemented, solving the problem of "I want to recursively open/read every file in my Dropbox, but I don't want it to download terabytes of data and fill my hard drive" seems fairly simple.

As for the indexing operations, I don't know how that is handled; they could disable indexing for remote files, or they could somehow integrate with indexing.
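The distinction this behavior hinges on is metadata access versus content access: a placeholder file can answer stat() locally, but a read() forces a download. A toy model makes the split concrete; `Placeholder` and its fields are purely illustrative, not Dropbox's actual API:

```python
class Placeholder:
    """Toy model of a cloud placeholder: metadata is local, content is remote.

    Illustrates why `ls`/`find` (stat-only) cost nothing, while
    `md5`/`grep` (content reads) force a download ("hydration").
    """

    def __init__(self, name, size):
        self.name = name
        self.size = size      # known locally, so `ls -lh` shows the real size
        self.local_bytes = 0  # nothing on disk yet, so `du` shows ~0
        self.downloads = 0

    def stat(self):
        # Metadata-only operations never touch the network.
        return {"name": self.name, "size": self.size}

    def read(self):
        # First content access hydrates the file (one simulated download).
        if self.local_bytes == 0:
            self.downloads += 1
            self.local_bytes = self.size
        return b"\0" * self.size
```

So a recursive `find` that only stats files stays cheap, while adding `-exec md5 {} +` turns every visit into a `read()` and therefore a download.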
AFAIK Spotlight and Windows search are indexed searches; I don't know of a common search/find system that open()s or read()s files during the search by default. (As a disclaimer: I no longer work for Dropbox, and I don't speak on their behalf.)

This is actually very different from traditional network file systems like SMB, NFS, WebDAV, and SSHFS. With a normal network file system over the WAN you would have major latency problems just trying to `cd` and `ls` the remote file system. Most of them also don't have any ability to cache files locally for offline use, or to manually select which files are stored locally and which stay remote. Here, you can right-click to pin files and directories to be stored locally, and right-click to send them back to the cloud so they don't take up space. If you open a cloud-only file, it will open, even from the command line; do `du -sh` afterwards and you'll see that that file now takes up space while the others in its directory do not.
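The `ls -lh` vs `du -sh` split is possible because filesystems track a file's apparent size separately from the blocks actually allocated to it. A sparse file shows the same effect locally; this is only an analogy for how a placeholder can report a real size while occupying no space (it assumes a filesystem that supports sparse files, e.g., ext4 or APFS):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "placeholder.bin")
    with open(path, "wb") as f:
        f.truncate(650 * 1024 * 1024)  # claim 650 MB without writing any data

    st = os.stat(path)
    apparent = st.st_size        # what `ls -l` reports: the full 650 MB
    on_disk = st.st_blocks * 512  # what `du` reports: allocated blocks only
    print(f"apparent: {apparent} bytes, on disk: {on_disk} bytes")
```

On a sparse-capable filesystem the allocated size stays near zero until real data is written, which is exactly the shape of the `du -sh` behavior described above.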
This feature is implemented at a low level, and works on the command line.

For example, if you have a directory that is stored entirely in the cloud, you can `cd` to it without any network delay, you can do `ls -lh` and see a listing with real sizes and no delay (e.g., see that an ISO is 650 MB), and you can do `du -sh` and see that all the files are taking up zero space on disk.

A recurring issue that I haven't found a solution to: I'm trying to copy & paste multiple folders from a Virtual Drive (in this case a mounted Bitcasa Drive) to my local drive. Every time, I receive the error message: "Items can't be copied because you don't have permission to read them." Clearly I believe I do have permission, which is why I can't explain the issue: I have read & write permission on both my local drive and my Virtual Drive.

I've run "Carbon Copy Cloner" as suggested in the thread above; also no luck with that. I've also verified disk permissions (although the mounted Virtual Drive doesn't appear there, only my local drive, so that was probably a pointless step).
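Since the Finder error doesn't name the offending items, one way to narrow it down is to test actual read access from Terminal. This is a generic sketch, not a known fix; the `/Volumes/Bitcasa` path is illustrative, so substitute your mounted drive's actual path:

```shell
# Print every file under the mount that the current user cannot actually
# read. `test -r` checks effective read permission (including ACLs), which
# can differ from what the mode bits in `ls -l` suggest.
find /Volumes/Bitcasa -type f ! -exec test -r {} \; -print
```

Any paths this prints are the ones the Finder would choke on; if it prints nothing, the failure is likely in the virtual drive's filesystem layer rather than in the visible permissions.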