High-performance search tool for finding binary-matching files in directories of many thousands of files on OSX



I'm merging two large photo collections (1000s of photos) that have different directory structures, and many of the photos already exist in both sets. I plan to write a script along these lines:

For a given photo in set B,
Check if a binary match for it exists in set A.
If there's a match, delete the file.

After every file in set B has been checked, I'll merge the (now unique) remainder of set B.

A binary match may exist under a different filename, so filenames should be ignored in the comparison.

Also, I'll be running this search against set A for every file in set B, so I'd prefer a tool that builds an index of set A as part of its initial scan. Conveniently, that indexing only needs to happen once and never needs updating.
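
Roughly the shape of thing I have in mind (just a sketch to illustrate the idea; the paths are placeholders, and an MD5 hash stands in for a true byte-for-byte comparison):

import hashlib, os

def file_md5(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# index set A once
index = set()
for root, dirs, files in os.walk("/path/to/setA"):
    for name in files:
        index.add(file_md5(os.path.join(root, name)))

# then test each file in set B against the index
for root, dirs, files in os.walk("/path/to/setB"):
    for name in files:
        path = os.path.join(root, name)
        if file_md5(path) in index:
            print("duplicate:", path)  # candidate for deletion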

I'd like to do this with an OSX shell script, but Python would also be fine.

I solved this by writing a pair of Python scripts, based on Mark's suggestion.

md5index.py:

#given a folder path, makes a hash index of every file, recursively
import sys, os, hashlib, io
#some files need to be hashed incrementally as they may be too big to fit in memory
#http://stackoverflow.com/a/40961519/2518451
def md5sum(src, length=io.DEFAULT_BUFFER_SIZE):
    md5 = hashlib.md5()
    with io.open(src, mode="rb") as fd:
        for chunk in iter(lambda: fd.read(length), b''):
            md5.update(chunk)
    return md5
#this project was done on macOS; there may be other files appropriate to ignore on other platforms
ignore_files = [".DS_Store"]
def index(source, index_output):
    index_output_f = open(index_output, "wt")
    index_count = 0
    for root, dirs, filenames in os.walk(source):
        for f in filenames:
            if f in ignore_files:
                continue
            #print f
            fullpath = os.path.join(root, f)
            #print fullpath
            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()
            line = md5string + ":" + fullpath
            index_output_f.write(line + "\n")
            print(line)
            index_count += 1
    index_output_f.close()
    print("Index Count: " + str(index_count))

if __name__ == "__main__":
    index_output = "index_output.txt"
    if len(sys.argv) < 2:
        print("Usage: md5index [path]")
    else:
        index_path = sys.argv[1]
        print("Indexing... " + index_path)
        index(index_path, index_output)
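
Each line of the resulting index_output.txt has the form md5hex:fullpath, for example (hash and path made up for illustration):

9e107d9d372bb6826bd81d3542a419d6:/Users/me/setA/IMG_0001.jpg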

And uniquemerge.py:

#given an index_output.txt in the same directory and an input path,
#remove all files that already have a hash in index_output.txt
import sys, os
from md5index import md5sum
from send2trash import send2trash
SENDING_TO_TRASH = True
def load_index():
    index_output = "index_output.txt"
    index = []
    with open(index_output, "rt") as index_output_f:
        for line in index_output_f:
            line_split = line.split(':')
            md5 = line_split[0]
            index.append(md5)
    return index
#traverse file, compare against index
def traverse_merge_path(merge_path, index):
    found = 0
    not_found = 0
    for root, dirs, filenames in os.walk(merge_path):
        for f in filenames:
            #print f
            fullpath = os.path.join(root, f)
            #print fullpath
            md5 = md5sum(fullpath)
            md5string = md5.hexdigest()
            if md5string in index:
                if SENDING_TO_TRASH:
                    send2trash(fullpath)
                found += 1
            else:
                print "t NON-DUPLICATE ORIGINAL: " + fullpath
                not_found += 1

    print "Found Duplicates: " + str(found) + " Originals: " + str(not_found)

if __name__ == "__main__":
    index = load_index()
    print "Loaded index with item count: " + str(len(index))
    print "SENDING_TO_TRASH: " + str(SENDING_TO_TRASH) 
    merge_path = sys.argv[1]
    print "Merging To: " + merge_path
    traverse_merge_path(merge_path, index)
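
Since the md5string in index check does a linear scan of a Python list, one small variation to consider for very large collections (just a sketch, not the version I actually ran) is loading the index into a set, so each membership test stays fast:

def load_index():
    index_output = "index_output.txt"
    index = set()
    with open(index_output, "rt") as index_output_f:
        for line in index_output_f:
            #take only the hash before the first colon
            index.add(line.split(':', 1)[0])
    return index

The rest of uniquemerge.py works unchanged, since membership tests and len() behave the same way on a set.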

Say I want to merge folderB into folderA. I run:

python md5index.py folderA
# creates index_output.txt with the hashes of every file in folderA

python uniquemerge.py folderB
# deletes all files in folderB that already existed in folderA
# I can now manually merge folderB into folderA
