File uploaded to the GAE Blobstore with ng-flow is always named "blob"



I am trying to create a page that uploads images to the Google App Engine Blobstore. I am using AngularJS and ng-flow to accomplish this.

The upload part seems to work fine, except that every blob is stored as 'application/octet-stream' and named 'blob'. How do I get the Blobstore to recognize the file name and content type?

Here is the code I use to upload the files.

In FlowEventsCtrl:

$scope.$on('flow::filesSubmitted', function (event, $flow, files) {
    // Ask the server for a Blobstore upload URL, then start the upload.
    $http.get('/files/upload/create').then(function (resp) {
        $flow.opts.target = resp.data.url;
        $flow.upload();
    });
});
In view.html:

<div flow-init="{testChunks:false, singleFile:true}" 
     ng-controller="FlowEventsCtrl">
    <div class="panel">
        <span flow-btn>Upload File</span>
    </div>
    <div class="show-files">...</div>
</div>

The server side is as specified in the Blobstore documentation.

Thanks.

I have solved my problem, and in hindsight the answer seems obvious. Flow.js and the Blobstore upload URL do different things. I will leave my explanation below in case anyone else makes the same naive mistake I did.

The Blobstore expects a form with a single file field, which carries the file name and content type of the uploaded data. That data is stored as a file in the Blobstore. By default, the field is named 'file'.

Flow uploads the data in chunks and includes a number of extra fields for the file name and other metadata. The actual chunk data is uploaded in a field that reports its file name as 'blob' and its content type as 'application/octet-stream'. The server is expected to store the chunks and reassemble them into the file. Since a chunk is only part of the file rather than the whole file, it carries neither the file's name nor its content type. By default, this field is also named 'file'.

So the answer to the question is: the files were stored as 'application/octet-stream' and named 'blob' because I was storing individual chunks, not the actual files. The only reason I managed to store anything at all appears to be that both sides use the same default name for the field.
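
To make the chunk model concrete, here is a minimal in-memory sketch (pure Python with illustrative names, not Flow's or the Blobstore's API) of the reassembly the server has to do: chunks arrive numbered, possibly out of order, and can only be concatenated once all of them are present.

```python
class ChunkStore(object):
    """Collects numbered chunks and assembles them once all are present."""

    def __init__(self, total_chunks):
        self.total_chunks = total_chunks
        self.chunks = {}  # chunk number (1-based) -> bytes

    def write_chunk(self, number, data):
        self.chunks[number] = data

    def ready_to_build(self):
        # All chunk numbers 1..total_chunks must have arrived.
        return len(self.chunks) == self.total_chunks

    def build(self):
        # Concatenate the chunks in numeric order, which is what the
        # real handler does when streaming them into one GCS file.
        return b''.join(self.chunks[n] for n in range(1, self.total_chunks + 1))


store = ChunkStore(total_chunks=3)
store.write_chunk(2, b'lo wo')   # chunks may arrive out of order
store.write_chunk(1, b'hel')
store.write_chunk(3, b'rld')
assert store.ready_to_build()
print(store.build())  # b'hello world'
```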

The solution, then, was to write my own handler for the Flow requests:

import json

import webapp2


class ImageUploadHandler(webapp2.RequestHandler):
    def post(self):
        # Flow.js sends its bookkeeping fields alongside the chunk data.
        chunk_number = int(self.request.params.get('flowChunkNumber'))
        chunk_size = int(self.request.params.get('flowChunkSize'))
        current_chunk_size = int(self.request.params.get('flowCurrentChunkSize'))
        total_size = int(self.request.params.get('flowTotalSize'))
        total_chunks = int(self.request.params.get('flowTotalChunks'))
        identifier = str(self.request.params.get('flowIdentifier'))
        filename = str(self.request.params.get('flowFilename'))
        data = self.request.params.get('file')
        f = ImageFile(filename, identifier, total_chunks, chunk_size, total_size)
        f.write_chunk(chunk_number, current_chunk_size, data)
        if f.ready_to_build():
            info = f.build()
            if info:
                self.response.headers['Content-Type'] = 'application/json'
                self.response.out.write(json.dumps(info.as_dict()))
            else:
                self.error(500)
        else:
            self.response.headers['Content-Type'] = 'application/json'
            self.response.out.write(json.dumps({
                'chunkNumber': chunk_number,
                'chunkSize': chunk_size,
                'message': 'Chunk ' + str(chunk_number) + ' written'
            }))

where ImageFile is a class that writes to Google Cloud Storage.
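
Since Flow sends its metadata as ordinary form fields, everything arrives as strings. A small sketch of coercing and sanity-checking those fields before use (parse_flow_params is an illustrative helper, not part of the original handler):

```python
def parse_flow_params(params):
    """Coerce Flow.js bookkeeping fields (strings on the wire) to ints.

    Raises ValueError if a numeric field is missing or malformed.
    """
    def as_int(name):
        value = params.get(name)
        if value is None:
            raise ValueError('missing field: ' + name)
        return int(value)

    parsed = {
        'chunk_number': as_int('flowChunkNumber'),
        'chunk_size': as_int('flowChunkSize'),
        'current_chunk_size': as_int('flowCurrentChunkSize'),
        'total_size': as_int('flowTotalSize'),
        'total_chunks': as_int('flowTotalChunks'),
        'identifier': params.get('flowIdentifier'),
        'filename': params.get('flowFilename'),
    }
    if not 1 <= parsed['chunk_number'] <= parsed['total_chunks']:
        raise ValueError('chunk number out of range')
    return parsed


params = {
    'flowChunkNumber': '1', 'flowChunkSize': '1048576',
    'flowCurrentChunkSize': '524288', 'flowTotalSize': '524288',
    'flowTotalChunks': '1', 'flowIdentifier': '524288-photojpg',
    'flowFilename': 'photo.jpg',
}
print(parse_flow_params(params)['filename'])  # photo.jpg
```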

Edit:

Below is the ImageFile class. The only thing missing is the ImageInfo class, a simple model that stores the generated URL and the file name.

import logging
import os

import cloudstorage as gcs
from google.appengine.api import app_identity, images
from google.appengine.ext import blobstore

# ImageInfo (a datastore model holding the name and serving URL) is
# defined elsewhere.
log = logging.getLogger(__name__)


class ImageFile:
    def __init__(self, filename, identifier, total_chunks, chunk_size, total_size):
        self.bucket_name = os.environ.get('BUCKET_NAME', app_identity.get_default_gcs_bucket_name())
        self.original_filename = filename
        self.filename = '/' + self.bucket_name + '/' + self.original_filename
        self.identifier = identifier
        self.total_chunks = total_chunks
        self.chunk_size = chunk_size
        self.total_size = total_size
        self.stat = None
        self.chunks = []
        self.load_stat()
        self.load_chunks(identifier, total_chunks)
    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None
    def load_chunks(self, identifier, number_of_chunks):
        for n in range(1, number_of_chunks + 1):
            self.chunks.append(Chunk(self.bucket_name, identifier, n))
    def exists(self):
        return self.stat is not None
    def content_type(self):
        if self.filename.lower().endswith('.jpg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.jpeg'):
            return 'image/jpeg'
        elif self.filename.lower().endswith('.png'):
            return 'image/png'
        elif self.filename.lower().endswith('.gif'):
            return 'image/gif'
        else:
            return 'binary/octet-stream'
    def ready(self):
        return self.exists() and self.stat.st_size == self.total_size
    def ready_chunks(self):
        for c in self.chunks:
            if not c.exists():
                return False
        return True
    def delete_chunks(self):
        for c in self.chunks:
            c.delete()
    def ready_to_build(self):
        return not self.ready() and self.ready_chunks()
    def write_chunk(self, chunk_number, current_chunk_size, data):
        chunk = self.chunks[int(chunk_number) - 1]
        chunk.write(current_chunk_size, data)
    def build(self):
        try:
            log.info('File "' + self.filename + '": assembling chunks.')
            write_retry_params = gcs.RetryParams(backoff_factor=1.1)
            gcs_file = gcs.open(self.filename,
                                'w',
                                content_type=self.content_type(),
                                options={'x-goog-meta-identifier': self.identifier},
                                retry_params=write_retry_params)
            for c in self.chunks:
                log.info('Writing chunk ' + str(c.chunk_number) + ' of ' + str(self.total_chunks))
                c.write_on(gcs_file)
            gcs_file.close()
        except Exception, e:
            log.error('File "' + self.filename + '": Error during assembly - ' + e.message)
        else:
            self.delete_chunks()
            key = blobstore.create_gs_key('/gs' + self.filename)
            url = images.get_serving_url(key)
            info = ImageInfo(name=self.original_filename, url=url)
            info.put()
            return info
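
As an aside, the extension-to-type mapping in content_type() can also be done with the standard library's mimetypes module, which covers the same image types; a minimal sketch (the fallback type is kept from the class above):

```python
import mimetypes


def content_type(filename, default='binary/octet-stream'):
    # guess_type maps '.jpg'/'.jpeg' -> 'image/jpeg', '.png' -> 'image/png',
    # '.gif' -> 'image/gif'; anything unknown falls back to the default.
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or default


print(content_type('photo.jpg'))  # image/jpeg
```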

The Chunk class:

class Chunk:
    def __init__(self, bucket_name, identifier, chunk_number):
        self.chunk_number = chunk_number
        self.filename = '/' + bucket_name + '/' + identifier + '-chunk-' + str(chunk_number)
        self.stat = None
        self.load_stat()
    def load_stat(self):
        try:
            self.stat = gcs.stat(self.filename)
        except gcs.NotFoundError:
            self.stat = None
    def exists(self):
        return self.stat is not None
    def write(self, size, data):
        write_retry_params = gcs.RetryParams(backoff_factor=1.1)
        gcs_file = gcs.open(self.filename, 'w', retry_params=write_retry_params)
        for c in data.file:
            gcs_file.write(c)
        gcs_file.close()
        self.load_stat()
    def write_on(self, stream):
        gcs_file = gcs.open(self.filename)
        try:
            data = gcs_file.read()
            while data:
                stream.write(data)
                data = gcs_file.read()
        except gcs.Error, e:
            log.error('Error writing data to chunk: ' + e.message)
        finally:
            gcs_file.close()
    def delete(self):
        try:
            gcs.delete(self.filename)
            self.stat = None
        except gcs.NotFoundError:
            pass
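
The write_on()/write() pair is essentially a stream-to-stream copy. write_on() above reads the whole chunk in one call; the same loop can be written with a bounded buffer using only the standard library (a sketch, not GCS-specific):

```python
import io


def copy_stream(src, dst, buffer_size=64 * 1024):
    # Read in fixed-size pieces so a large chunk never has to fit in
    # memory all at once; same shape as the write_on() loop above.
    data = src.read(buffer_size)
    while data:
        dst.write(data)
        data = src.read(buffer_size)


src = io.BytesIO(b'x' * 200000)
dst = io.BytesIO()
copy_stream(src, dst)
print(len(dst.getvalue()))  # 200000
```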
