TL;DR – see the edit below.
I'm building a test app in Swift in which I want to use AVMutableComposition to stitch together multiple videos from the app's documents directory.
I've had some success with this: all of my videos are stitched together, and everything displays at the correct size in both portrait and landscape.
My problem, however, is that the whole composition plays back in the orientation of the last video in it.
I know that to fix this I need to add a layer instruction for each track I add, but I can't seem to get it right. From the answers I've found, the entire composition ends up in portrait orientation and landscape videos are simply scaled to fit the portrait view, so when I turn my phone sideways to watch the landscape videos they are still small, having already been scaled down to portrait width.
This isn't the result I want. I want the expected behavior: a landscape video displays scaled down in portrait mode, but fills the screen when the phone is rotated to landscape (just like watching a landscape video in Photos), and likewise a portrait video is full screen when viewed in portrait and scaled down to fit when viewed in landscape (like watching a portrait video in Photos).
In short, the result I'm after is this: when watching a composition containing both landscape and portrait videos, I can hold my phone sideways and the landscape videos are full screen while the portrait videos are scaled down, or watch the same composition in portrait and have the portrait videos full screen while the landscape videos are scaled down.
With all the answers I've found, this is not what happens; they all behave very unexpectedly when videos imported from Photos are added to the composition, and show the same random behavior when adding videos shot with the front camera (to be clear, with my current implementation, videos imported from the library and "selfie" videos display at the correct size without these issues).
I'm looking for a way to rotate/scale these videos so that they are always displayed in the correct orientation and at the correct scale, depending on which way the user is holding the phone.
EDIT: I now know that I can't have both landscape and portrait orientations in a single video, so the result I want is a final video in landscape. I've figured out how to switch all the rotations and scales so everything is handled the same way, but my output is a portrait video; if anyone can help me change this so that my output is landscape instead, I'd be very grateful.
Here is the function where I get the instructions for each video:
func videoTransformForTrack(asset: AVAsset) -> CGAffineTransform
{
    var return_value: CGAffineTransform?
    let assetTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]
    let transform = assetTrack.preferredTransform
    let assetInfo = orientationFromTransform(transform)
    var scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait
    {
        scaleToFitRatio = UIScreen.mainScreen().bounds.width / assetTrack.naturalSize.height
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        return_value = CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor)
    }
    else
    {
        let scaleFactor = CGAffineTransformMakeScale(scaleToFitRatio, scaleToFitRatio)
        var concat = CGAffineTransformConcat(CGAffineTransformConcat(assetTrack.preferredTransform, scaleFactor), CGAffineTransformMakeTranslation(0, UIScreen.mainScreen().bounds.width / 2))
        if assetInfo.orientation == .Down
        {
            let fixUpsideDown = CGAffineTransformMakeRotation(CGFloat(M_PI))
            let windowBounds = UIScreen.mainScreen().bounds
            let yFix = assetTrack.naturalSize.height + windowBounds.height
            let centerFix = CGAffineTransformMakeTranslation(assetTrack.naturalSize.width, yFix)
            concat = CGAffineTransformConcat(CGAffineTransformConcat(fixUpsideDown, centerFix), scaleFactor)
        }
        return_value = concat
    }
    return return_value!
}
The exporter:
// Create AVMutableComposition to contain all AVMutableCompositionTrack tracks
let mix_composition = AVMutableComposition()
var total_time = kCMTimeZero
// Loop over videos and create tracks, keep incrementing total duration
let video_track = mix_composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
var instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: video_track)
for video in videos
{
    let shortened_duration = CMTimeSubtract(video.duration, CMTimeMake(1, 10))
    let videoAssetTrack = video.tracksWithMediaType(AVMediaTypeVideo)[0]
    do
    {
        try video_track.insertTimeRange(CMTimeRangeMake(kCMTimeZero, shortened_duration),
                                        ofTrack: videoAssetTrack,
                                        atTime: total_time)
        video_track.preferredTransform = videoAssetTrack.preferredTransform
    }
    catch _
    {
    }
    instruction.setTransform(videoTransformForTrack(video), atTime: total_time)
    // Add video duration to total time
    total_time = CMTimeAdd(total_time, shortened_duration)
}
// Create main instruction for video composition
let main_instruction = AVMutableVideoCompositionInstruction()
main_instruction.timeRange = CMTimeRangeMake(kCMTimeZero, total_time)
main_instruction.layerInstructions = [instruction]
let main_composition = AVMutableVideoComposition()
main_composition.instructions = [main_instruction]
main_composition.frameDuration = CMTimeMake(1, 30)
main_composition.renderSize = CGSize(width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
let exporter = AVAssetExportSession(asset: mix_composition, presetName: AVAssetExportPreset640x480)
exporter!.outputURL = final_url
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.shouldOptimizeForNetworkUse = true
exporter!.videoComposition = main_composition
// 6 - Perform the Export
exporter!.exportAsynchronouslyWithCompletionHandler()
{
    // Assign return values based on success of export
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.exportDidFinish(exporter!)
    })
}
Sorry for the long explanation; I just want to make sure I'm clear about what I'm asking, since the other answers haven't worked for me.
I'm not sure your orientationFromTransform() is giving you the correct orientation. I think you could try modifying it, or try something like this:
extension AVAsset {
    func videoOrientation() -> (orientation: UIInterfaceOrientation, device: AVCaptureDevicePosition) {
        var orientation: UIInterfaceOrientation = .Unknown
        var device: AVCaptureDevicePosition = .Unspecified
        let tracks: [AVAssetTrack] = self.tracksWithMediaType(AVMediaTypeVideo)
        if let videoTrack = tracks.first {
            let t = videoTrack.preferredTransform
            if (t.a == 0 && t.b == 1.0 && t.d == 0) {
                orientation = .Portrait
                if t.c == 1.0 {
                    device = .Front
                } else if t.c == -1.0 {
                    device = .Back
                }
            } else if (t.a == 0 && t.b == -1.0 && t.d == 0) {
                orientation = .PortraitUpsideDown
                if t.c == -1.0 {
                    device = .Front
                } else if t.c == 1.0 {
                    device = .Back
                }
            } else if (t.a == 1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeRight
                if t.d == -1.0 {
                    device = .Front
                } else if t.d == 1.0 {
                    device = .Back
                }
            } else if (t.a == -1.0 && t.b == 0 && t.c == 0) {
                orientation = .LandscapeLeft
                if t.d == 1.0 {
                    device = .Front
                } else if t.d == -1.0 {
                    device = .Back
                }
            }
        }
        return (orientation, device)
    }
}
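The matrix checks in this extension just read the rotation part of the asset's preferred transform. Isolated on bare matrix entries, the same classification looks like this (the function name is made up for illustration; it mirrors the logic in videoOrientation()):

```swift
import Foundation

// Standalone restatement of the orientation checks above: classify the
// (a, b, c, d) entries of the 2x2 rotation part of an affine transform.
// The name is hypothetical, for illustration only.
func orientationName(a: Double, b: Double, c: Double, d: Double) -> String {
    if a == 0 && b == 1 && d == 0 { return "Portrait" }             // 90° rotation
    if a == 0 && b == -1 && d == 0 { return "PortraitUpsideDown" }  // -90° rotation
    if a == 1 && b == 0 && c == 0 { return "LandscapeRight" }       // identity
    if a == -1 && b == 0 && c == 0 { return "LandscapeLeft" }       // 180° rotation
    return "Unknown"
}
```

A back-camera portrait clip, for example, carries a 90° rotation (a = 0, b = 1, c = -1, d = 0), while a back-camera landscape clip carries the identity transform.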