Duplicity - unable to restore a single file



I am trying to restore a single file or directory from Amazon S3, but I get this error:

Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1251, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1244, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1198, in main
    restore(col_stats)
  File "/usr/bin/duplicity", line 538, in restore
    restore_get_patched_rop_iter(col_stats)):
  File "/usr/bin/duplicity", line 560, in restore_get_patched_rop_iter
    backup_chain = col_stats.get_backup_chain_at_time(time)
  File "/usr/lib/python2.6/dist-packages/duplicity/collections.py", line 934, in get_backup_chain_at_time
    raise CollectionsError("No backup chains found")
CollectionsError: No backup chains found

What am I doing wrong?
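For context, a single-file restore with duplicity is normally invoked roughly like this (a sketch; the bucket URL and paths are placeholders, and --file-to-restore takes a path relative to the backup root):

export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
duplicity --file-to-restore path/inside/backup s3+http://********** /tmp/restored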

This is how I do the backup:

export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
GPG_KEY=****
BACKUP_SIM_RUN=1

LOGFILE="/var/log/s3-backup.log"
DAILYLOGFILE="/var/log/s3-backup-daily.log"
# The source of your backup
SOURCE=/home/u54433
# The destination
DEST=s3+http://**********

trace () {
        stamp=`date +%Y-%m-%d_%H:%M:%S`
        echo "$stamp: $*" >> ${DAILYLOGFILE}
}
cat /dev/null > ${DAILYLOGFILE}
trace "removing old backups..."
duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "start backup files..."
duplicity --sign-key=${GPG_KEY} --exclude="**/logs" --s3-european-buckets --s3-use-new-style ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
cat "$DAILYLOGFILE" >> $LOGFILE
export PASSPHRASE=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=

Use the --s3-use-new-style option in all duplicity calls.
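For example, the cleanup call in the script above would then look roughly like this (a sketch reusing the script's own variables):

duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} --s3-use-new-style --s3-european-buckets ${DEST} >> ${DAILYLOGFILE} 2>&1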

I had the same problem as you. I added the missing option to the "duplicity remove-older-than" call, and now everything works fine.

It may be best to delete the S3 bucket from Amazon and try re-creating the full backup; that could solve the problem.

See the link below:

https://answers.launchpad.net/duplicity/+question/107074

For those coming back to this question looking for a definitive answer: @shaikh-systems' link leads to the realization that there is some issue in duplicity's communication with AWS when using IAM sub-account keys. To get the restore working, I used/exported my primary account key/secret instead.
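In script form, that workaround looks roughly like this (a sketch; the credential values and bucket URL are placeholders):

export AWS_ACCESS_KEY_ID=<primary account access key>
export AWS_SECRET_ACCESS_KEY=<primary account secret key>
export PASSPHRASE=****
duplicity restore s3+http://********** /tmp/restore-target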
