AVPlayer seekToTime with a UIPanGestureRecognizer



I am trying to use seekToTime together with a UIPanGestureRecognizer, but it does not seek as expected.

let totalTime = self.avPlayer.currentItem!.duration
print("time: \(CMTimeGetSeconds(totalTime))")
self.avPlayer.pause()
let touchDelta = swipeGesture.translationInView(self.view).x / CGFloat(CMTimeGetSeconds(totalTime))
let currentTime = CMTimeGetSeconds((avPlayer.currentItem?.currentTime())!) + Float64(touchDelta)
print(currentTime)
if currentTime >= 0 && currentTime <= CMTimeGetSeconds(totalTime) {
    let newTime = CMTimeMakeWithSeconds(currentTime, Int32(NSEC_PER_SEC))
    print(newTime)
    self.avPlayer.seekToTime(newTime)
}

What am I doing wrong here?

Think about what is happening on this line:

let touchDelta = swipeGesture.translationInView(self.view).x / CGFloat(CMTimeGetSeconds(totalTime))

You are dividing pixels (the translation along the x-axis only) by time. That is not really a "delta" or an absolute difference; it is a kind of ratio, and not a ratio that means anything here. You then simply add that ratio to the previous currentTime to get the new currentTime, so you are adding pixels-per-second to a number of seconds, which does not give a logical or useful number.

What we need to do is take the x-axis translation from the gesture and apply a scale (which is a ratio) to it, so that we get a useful number of seconds to advance/rewind the AVPlayer. The x-axis translation is in pixels, so we need a scale that describes seconds per pixel, and we multiply the two to get a number of seconds. The appropriate scale is the ratio between the total number of seconds in the video and the total number of pixels the user can move through in the gesture. Multiplying pixels by (seconds divided by pixels) yields a number in seconds. In pseudocode:

scale = totalSeconds / totalPixels
timeDelta = translation * scale
currentTime = oldTime + timeDelta
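
For example, with a 300-second video and a view 375 pixels wide, scale = 300 / 375 = 0.8 seconds per pixel, so a 150-pixel pan moves the playhead by 150 × 0.8 = 120 seconds.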

So I would rewrite your code like this:

let totalTime = self.avPlayer.currentItem!.duration
print("time: (CMTimeGetSeconds(totalTime))")
self.avPlayer.pause()
// BEGIN NEW CODE
let touchDelta = swipeGesture.translationInView(self.view).x
let scale = CGFloat(CMTimeGetSeconds(totalTime)) / self.view.bounds.width
let timeDelta = touchDelta * scale
let currentTime = CMTimeGetSeconds((avPlayer.currentItem?.currentTime())!) + Float64(timeDelta)
// END NEW CODE
print(currentTime)
if currentTime >= 0 && currentTime <= CMTimeGetSeconds(totalTime) {
    let newTime = CMTimeMakeWithSeconds(currentTime, Int32(NSEC_PER_SEC))
    print(newTime)
    self.avPlayer.seekToTime(newTime)
}
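
For context, here is a minimal sketch of how this might sit inside a complete gesture handler, using the same Swift 2-era API as above. The view controller, the gesture wiring, and the translation reset are assumptions added for illustration; only the seeking math comes from the rewrite above. Note that translationInView(_:) reports the cumulative translation since the gesture began, so resetting it each time keeps timeDelta incremental:

import UIKit
import AVFoundation

class PlayerScrubViewController: UIViewController {
    // Assumed: avPlayer is given an AVPlayerItem elsewhere before the gesture is used.
    var avPlayer = AVPlayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: "handlePan:")
        self.view.addGestureRecognizer(pan)
    }

    func handlePan(swipeGesture: UIPanGestureRecognizer) {
        guard let item = self.avPlayer.currentItem else { return }
        let totalSeconds = CMTimeGetSeconds(item.duration)

        switch swipeGesture.state {
        case .Began:
            self.avPlayer.pause()
        case .Changed:
            // Seconds-per-pixel scale, as derived above.
            let scale = CGFloat(totalSeconds) / self.view.bounds.width
            let timeDelta = swipeGesture.translationInView(self.view).x * scale
            let newSeconds = CMTimeGetSeconds(item.currentTime()) + Float64(timeDelta)
            // Reset the translation so the next callback delivers only the new movement.
            swipeGesture.setTranslation(CGPointZero, inView: self.view)
            if newSeconds >= 0 && newSeconds <= totalSeconds {
                self.avPlayer.seekToTime(CMTimeMakeWithSeconds(newSeconds, Int32(NSEC_PER_SEC)))
            }
        case .Ended, .Cancelled:
            self.avPlayer.play()
        default:
            break
        }
    }
}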

I had the same issue; I then created a UISlider and set up its action method as shown below.

Declare the AVPlayer as var playerVal = AVPlayer()

@IBAction func sliderAction(sender: UISlider) {
    playerVal.pause()
    displayLink.invalidate()
    // Map the slider's 0...1 value onto the audio's duration in seconds.
    let duration = (self.getAudioDuration() as! NSNumber).doubleValue
    let newTime: CMTime = CMTimeMakeWithSeconds(duration * Double(sender.value), playerVal.currentTime().timescale)
    playerVal.seekToTime(newTime)
    updateTime()
    playerVal.play()
    deepLink()
}

The other method is:

func updateTime() {
    let currentTime = Float(CMTimeGetSeconds(playerItem1.currentTime()))
    // Split the elapsed time into whole minutes and remaining seconds.
    let minutes = floor(currentTime / 60)
    let seconds = currentTime - minutes * 60
    let maxTime = (self.getAudioDuration() as! NSNumber).floatValue
    let maxminutes = floor(maxTime / 60)
    let maxseconds = maxTime - maxminutes * 60
    startValue.text = String(format: "%02.0f:%02.0f", minutes, seconds)
    stopValue.text = String(format: "%02.0f:%02.0f", maxminutes, maxseconds)
}

I used CADisplayLink, declared as var displayLink = CADisplayLink(); it keeps the slider updating while the audio continues playing (automatically). The code is:

func deepLink() {
    displayLink = CADisplayLink(target: self, selector: "updateSliderProgress")
    displayLink.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSDefaultRunLoopMode)
}

func updateSliderProgress() {
    // Express the current playback position as a 0...1 fraction of the total duration.
    let progress = Float(CMTimeGetSeconds(playerVal.currentTime())) / (self.getAudioDuration() as! NSNumber).floatValue
    sliderView.setValue(progress, animated: false)
}
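
As a rough sketch of how the pieces of this answer might be wired together, assuming the sliderView outlet and the getAudioDuration() helper from the snippets above, and assuming playerVal / playerItem1 have already been created with the audio to play (the setup below is illustrative, not part of the original answer):

override func viewDidLoad() {
    super.viewDidLoad()
    // updateSliderProgress() reports progress as a 0...1 fraction, so the slider uses that range.
    sliderView.minimumValue = 0.0
    sliderView.maximumValue = 1.0
    // Only needed if sliderAction(_:) is not already connected in Interface Builder.
    sliderView.addTarget(self, action: "sliderAction:", forControlEvents: .ValueChanged)
    updateTime()
    playerVal.play()
    deepLink()   // start the CADisplayLink that drives updateSliderProgress()
}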

If you look at the answer above you will get the idea. I hope it helps.
