Error in simple TensorFlow code in Python



I have some deep learning code written in Python (Anaconda3, Ubuntu 16.04). It basically performs ad detection on a given video using a trained model, and it has to return whether the video is an ad (we assume the video consists of a single shot). It was written by a colleague who is no longer available.

The original file looped somewhere over a list of shots (to handle multiple shots); the loop has now been removed on the assumption that the video is a single shot. But it looks like some array dimensions got mixed up, hence the error. How can I fix this?

I am new to Python and deep learning; this should be an ordinary Python programming issue rather than a semantic one, since the code worked fine before.

Here is the error:

 File "/Ad_module_textfast_stream.py", line 36, in label_prediction
    pred_labels= clf_trained.predict( mfcc_feat.reshape(-1, 200) )
ValueError: cannot reshape array of size 8640 into shape (200)
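
If I read the numbers right, mfcc_feat now holds the MFCCs of the whole clip flattened into 8640 values (presumably 20 coefficients × 432 frames, since n_mfcc=20), and 8640 is not a multiple of 200, which is why reshape(-1, 200) fails. A minimal standalone check of that arithmetic (the 20 × 432 split is only an assumption based on n_mfcc=20):

import numpy as np

# mfcc_feat is presumably 20 MFCC coefficients x 432 frames, flattened to 8640 values
mfcc_feat = np.zeros(20 * 432)
print(mfcc_feat.size % 200)    # 40 -> 8640 is not a multiple of 200
mfcc_feat.reshape(-1, 200)     # raises the same ValueError as in the traceback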

So it is supposed to run when video_audio_extractor(video_name) is called. Here is the code. Currently, final_label_list must contain whether it is a video or an ad, and I suppose that, since there is only a single shot, it should contain only one element.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import time, cv2, librosa
import skvideo.io
import numpy as np
import tensorflow as tf
import subprocess, os, glob
from sklearn.externals import joblib

#################################### Loading the Dataset
def label_prediction( mfcc_list, num_frames= 3):
    num_shots= len(mfcc_list)
    pred_labels_list= list()
    final_label_list= list()

    if 2>1:
        clf_trained = joblib.load('trainsvm_mfcc.pkl') 
        #predicted_train_labels= clf_trained.predict( mfcc_list)   
        for j in range(num_shots):
            mfcc_feat = mfcc_list[j]     
            num_frames= 1
            mfcc_feat= ( mfcc_feat- np.mean(mfcc_feat) ) / ( np.std(mfcc_feat)+ 1e-6 )
    #### now access operation you wanna run
            #pred_labels= clf_trained.predict( mfcc_feat)
            pred_labels= clf_trained.predict( mfcc_feat.reshape(-1, 200) )
            final_label= 0  ## ads:1, and video:0
            if pred_labels> 0:
                final_label= 1
            pred_labels_list.append(pred_labels)
            final_label_list.append(final_label)
    ################ post-processing
    final_label_list[0]= 0 ### video starts with natural content
    final_label_list[len(final_label_list)-1]= 0 ## last  shot
    for kk in range(2,len(final_label_list)-2): ### one video between many ads, most likely to be ads
        if final_label_list[kk]==0 and final_label_list[kk-2]==1 and final_label_list[kk-1]==1 and final_label_list[kk+1]==1 and final_label_list[kk+2]==1:
            final_label_list[kk]= 1
    return final_label_list, pred_labels_list

def video_audio_extractor( video_name):
    cur_video= skvideo.io.vread(video_name)
    metadata = skvideo.io.ffprobe(video_name)
    vid_info= metadata["video"]
    items = list(vid_info.items())
    avg_fps_info= items[22][1]
    avg_fps= int(avg_fps_info[0:2])
    cur_num_frame= cur_video.shape[0]
    cur_audio, cur_sr= librosa.load(video_name)
    mfcc_list= list()
    cur_audioshot_mfcc= librosa.feature.mfcc( y= cur_audio, sr= cur_sr, n_mfcc=20)
    cur_audioshot_mfcc_1d= np.reshape( cur_audioshot_mfcc, [cur_audioshot_mfcc.shape[0]*cur_audioshot_mfcc.shape[1],])
    mfcc_list.append(cur_audioshot_mfcc_1d)
    final_label_list, pred_labels_list= label_prediction( mfcc_list, num_frames= 3)
    return mfcc_list, avg_fps, final_label_list

Here is the original video_audio_extractor function that contained the for loop:

def video_audio_extractor( video_name):
    cur_video= skvideo.io.vread(video_name)
    metadata = skvideo.io.ffprobe(video_name)
    vid_info= metadata["video"]
    items = list(vid_info.items())
    avg_fps_info= items[22][1]
    avg_fps= int(avg_fps_info[0:2])
    cur_num_frame= cur_video.shape[0]
    cur_audio, cur_sr= librosa.load(video_name)
    cur_shot_name= 'video_shots.txt'
    #cur_shot_name= cur_video_name[0:-4]+'_shots.txt'
    line = list(open(cur_shot_name, 'r'))
    mfcc_list= list()
    for shot_ind in range(len(line)):
        cur_line= line[ shot_ind]
        cur_line_list= cur_line.split()
        first_frame= int( cur_line_list[0] )
        last_frame = int( cur_line_list[1] )
        cur_audioshot_first_ind= int( np.floor(first_frame*len(cur_audio)/cur_num_frame ) )
        cur_audioshot_last_ind = int( np.floor(last_frame *len(cur_audio)/cur_num_frame ) )
        cur_audioshot= cur_audio[cur_audioshot_first_ind:cur_audioshot_last_ind]
        new_rate= 5000*cur_sr/len(cur_audioshot)
        cur_audioshot_resampled = librosa.resample(cur_audioshot, cur_sr, new_rate)
        cur_audioshot_mfcc= librosa.feature.mfcc(y=cur_audioshot_resampled, sr= new_rate, n_mfcc=20)
        cur_audioshot_mfcc_1d= np.reshape( cur_audioshot_mfcc, [cur_audioshot_mfcc.shape[0]*cur_audioshot_mfcc.shape[1],])
        mfcc_list.append(cur_audioshot_mfcc_1d)
    return mfcc_list, line, avg_fps

I think the problem is that your old video_audio_extractor code changed the sampling rate and resampled the audio. Here is that code:

new_rate= 5000*cur_sr/len(cur_audioshot)
cur_audioshot_resampled = librosa.resample(cur_audioshot, cur_sr, new_rate)
cur_audioshot_mfcc= librosa.feature.mfcc(y=cur_audioshot_resampled, sr= new_rate, n_mfcc=20)

Your new video_audio_extractor does not use this new rate or do any resampling. In the old code, that resampling forced every shot to roughly 5000 samples, so each shot's MFCC matrix had a fixed size (20 coefficients × about 10 frames = 200 values, which is presumably the feature count the SVM expects); without it, the new mfcc_feat ends up with a different shape. What happens if you replace this line:

pred_labels= clf_trained.predict( mfcc_feat.reshape(-1, 200) )

with this line:

pred_labels= clf_trained.predict( mfcc_feat )
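
Alternatively, if the SVM stored in trainsvm_mfcc.pkl was trained on the 200-element vectors that the old resampling step produced, a minimal sketch of the single-shot MFCC extraction that reproduces that step would look like the following (treating the whole clip as one shot and the 20 × ~10 = 200 layout are assumptions; note that librosa's resample arguments are keyword-only in newer releases):

# reproduce the old per-shot resampling on the whole clip, so the flattened
# MFCC vector again has ~200 values, matching what the SVM was trained on
new_rate = 5000 * cur_sr / len(cur_audio)
cur_audio_resampled = librosa.resample(cur_audio, orig_sr=cur_sr, target_sr=new_rate)
cur_audioshot_mfcc = librosa.feature.mfcc(y=cur_audio_resampled, sr=new_rate, n_mfcc=20)
cur_audioshot_mfcc_1d = np.reshape(cur_audioshot_mfcc, [cur_audioshot_mfcc.size, ])
mfcc_list.append(cur_audioshot_mfcc_1d)

Note also that recent scikit-learn versions require a 2D array for predict, so passing the 1D mfcc_feat directly may raise "Expected 2D array"; mfcc_feat.reshape(1, -1) fixes the dimensionality, but the SVM will still only accept the same number of features (200 here) that it was trained on.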
