How to handle a protocol delegate when converting Objective-C to Swift



I am trying to convert speech recognition code to Swift. The protocol is defined in ViewController.h as:

@interface ViewController : UIViewController<SpeechRecognitionProtocol>
{
    NSMutableString* textOnScreen;
    DataRecognitionClient* dataClient;
    MicrophoneRecognitionClient* micClient;
    SpeechRecognitionMode recoMode;
    bool isMicrophoneReco;
    bool isIntent;
    int waitSeconds;
}

I am stuck converting the following call from ViewController.m:

micClient = [SpeechRecognitionServiceFactory createMicrophoneClient:(recoMode)
                                                       withLanguage:(language)
                                                            withKey:(primaryOrSecondaryKey)
                                                       withProtocol:(self)];

This factory method is declared in SpeechSDK.framework as:

@interface SpeechRecognitionServiceFactory : NSObject
/*
@param delegate The protocol used to perform the callbacks/events upon during speech recognition.
*/
+(MicrophoneRecognitionClient*)createMicrophoneClient:(SpeechRecognitionMode)speechRecognitionMode
                              withLanguage:(NSString*)language
                              withKey:(NSString*)primaryOrSecondaryKey
                              withProtocol:(id<SpeechRecognitionProtocol>)delegate;
@end

The protocol looks like this in my converted ViewController.swift:

import UIKit    
protocol SpeechRecognitionProtocol {
    func onIntentReceived(result: IntentResult)
    func onPartialResponseReceived(response: String)
    func onFinalResponseReceived(response: RecognitionResult)
    func onError(errorMessage: String, withErrorCode errorCode: Int)
    func onMicrophoneStatus(recording: DarwinBoolean)
    func initializeRecoClient()
}
class ViewController: UIViewController, SpeechRecognitionProtocol {
    var myDelegate: SpeechRecognitionProtocol? 

Finally, I call this function in ViewController.swift. When I pass the protocol I get the following error: Cannot convert value of type 'SpeechRecognitionProtocol.Protocol' to expected argument type 'SpeechRecognitionProtocol!'

func initializeRecoClient() {
    let language: String = "en-us"
    let path: String = NSBundle.mainBundle().pathForResource("settings", ofType: "plist")!
    let settings = NSDictionary(contentsOfFile: path)
    let primaryOrSecondaryKey = settings?.objectForKey("primaryKey") as! String
    micClient = SpeechRecognitionServiceFactory.createMicrophoneClient(recoMode!,
                                               withLanguage: language, 
                                       withKey: primaryOrSecondaryKey,
                                withProtocol: SpeechRecognitionProtocol)
}

You should not declare SpeechRecognitionProtocol yourself (not sure whether you added it just for demonstration purposes or whether it is actually in your code). SpeechRecognitionProtocol is already declared in SpeechRecognitionService.h and is visible to Swift — that is the one you need to use.

The object implementing the protocol is your ViewController. Assuming initializeRecoClient is a method of that class, the call needs to look like this:

micClient = SpeechRecognitionServiceFactory
              .createMicrophoneClient(recoMode!,
                                      withLanguage: language, 
                                      withKey: primaryOrSecondaryKey,
                                      withProtocol: self)

The SpeechSDK API did not pick a particularly good name for that factory method. The withProtocol parameter does not take the protocol itself (as the name suggests), but an object that implements the protocol (self here).
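The distinction can be seen in a small self-contained sketch (the GreeterDelegate/GreeterFactory names are made up for illustration): the factory parameter is *typed* by the protocol, but the argument you pass must be a conforming instance, never the protocol type itself.

```swift
// A protocol playing the role of SpeechRecognitionProtocol.
protocol GreeterDelegate: AnyObject {
    func didGreet(_ message: String)
}

// The client calls back into its delegate, like MicrophoneRecognitionClient.
final class GreeterClient {
    weak var delegate: GreeterDelegate?
    init(delegate: GreeterDelegate) { self.delegate = delegate }
    func greet() { delegate?.didGreet("hello") }
}

enum GreeterFactory {
    // Analogous to createMicrophoneClient(..., withProtocol:):
    // the parameter type is the protocol, the argument is a conforming object.
    static func makeClient(withProtocol delegate: GreeterDelegate) -> GreeterClient {
        return GreeterClient(delegate: delegate)
    }
}

final class Listener: GreeterDelegate {
    var received: String?
    func didGreet(_ message: String) { received = message }
}

let listener = Listener()
let client = GreeterFactory.makeClient(withProtocol: listener)  // pass the instance, like `self`
// GreeterFactory.makeClient(withProtocol: GreeterDelegate)     // ✗ does not compile: that is the type, not an instance
client.greet()
```

Passing `GreeterDelegate` itself fails with the same kind of "cannot convert value of type … to expected argument type" error you are seeing.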

PS: Not sure which SpeechAPI version you are using; I had to implement these Swift methods to make ViewController conform to SpeechRecognitionProtocol:

func onPartialResponseReceived(response: String!) {}
func onFinalResponseReceived  (response: RecognitionResult) {}
func onError                  (errorMessage: String!, 
                               withErrorCode errorCode: Int32) {}
func onMicrophoneStatus       (recording: Bool) {}
func onIntentReceived         (result: IntentResult) {}
