Speech Recognition with Microsoft Cognitive Services

  I had wanted to get speech recognition working for a long time, and tried countless times, but kept failing: poor recognition accuracy, my own technical shortcomings, all sorts of reasons, and I spent far too much time on it. Of the options I have tried — iFlytek (讯飞), Baidu Speech, and the Microsoft stack — in the end I still prefer the Microsoft stack for its simplicity and efficiency. (No flames please; this is purely personal preference.)

  My initial idea was naive: I say a sentence (as a console demo for now), the console program recognizes what I said, prints it, and performs an action based on the message. (The idea was beautiful; reality was frustrating.) As a newcomer to speech recognition I hit one problem after another. I dithered over which company's API to choose and searched Baidu for every kind of speech recognition demo to learn from, but very few of them actually ran on the .NET platform. Perhaps my search direction was wrong; I stumbled through many pitfalls without a single success, got thoroughly discouraged, and nearly gave up several times, yet in the end I could not resist playing with speech recognition.

  Here are the speech demos in my Visual Studio:

  [Image 1]

  The first one is today's protagonist; more on it later.

  The second and third use System.Speech.dll, which ships with the Windows-era Microsoft stack, and Microsoft.Speech.dll, which I tried after reading an article on a Microsoft blog.
  The article was well written, but my attempt failed, and I discovered one problem: the English-locale speech recognition in Microsoft.Speech.Recognition did not work for me, while the Chinese-locale speech synthesis in Microsoft.Speech.Synthesis was equally useless. I therefore had to mix the two DLLs (Microsoft.Speech for recognition, System.Speech for synthesis) to get the effect I wanted. The result did work in the end, but only for very simple cases: as soon as the vocabulary grows, the recognition rate drops sharply. I have always suspected a sampling-rate issue, but I could not find any property or field for the sampling rate anywhere; if anyone knows, please send me a note so I can take off too, haha.

  The fourth is the Baidu speech recognition demo. The code is much simpler and not hard to implement, but there are many small details to watch, and quite a few landmines, with far too little documentation to guide you around them. I am one of those who stepped on them; it hurt.

 

  First, let's look at the mainstream speech recognition designs on the market today:

  1. Offline speech recognition

  Offline recognition is easy to understand: the recognition library lives locally or on the LAN, and no remote connection is needed. This was my original plan: build my own recognition library and design the desired actions around its contents. Using the recognition and synthesis features in the Microsoft System.Speech.dll, I did implement simple Chinese speech recognition, but as I gradually grew the vocabulary the recognition rate fell lower and lower; I don't know whether my computer's microphone was to blame or something else. Discouraged, I gave up. While studying Baidu Speech I also noticed its offline recognition library, but the official site gives no concrete workflow or design guidance, and I have not dug into it yet; I would like to explore it properly when I have time.

 

using System;
//using Microsoft.Speech.Synthesis; // the Chinese-locale TTS in this assembly produces no sound
using Microsoft.Speech.Recognition;
using System.Speech.Synthesis;
//using System.Speech.Recognition;

namespace SAssassin.SpeechDemo
{
    /// <summary>
    /// Microsoft speech recognition, Chinese locale; seems to work a bit better
    /// </summary>
    class Program
    {
        static SpeechSynthesizer sy = new SpeechSynthesizer();
        static void Main(string[] args)
        {
            // Create a Chinese recognizer
            using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine(new System.Globalization.CultureInfo("zh-CN")))
            {
                foreach (var config in SpeechRecognitionEngine.InstalledRecognizers())
                {
                    Console.WriteLine(config.Id);
                }
                // Initialize the command words
                Choices commands = new Choices();
                string[] command1 = new string[] { "一", "二", "三", "四", "五", "六", "七", "八", "九" };
                string[] command2 = new string[] { "很高兴见到你", "识别率", "assassin", "长沙", "湖南", "实习" };
                string[] command3 = new string[] { "开灯", "关灯", "播放音乐", "关闭音乐", "浇水", "停止浇水", "打开背景灯", "关闭背景灯" };
                // Add the command words
                commands.Add(command1);
                commands.Add(command2);
                commands.Add(command3);
                // Build the grammar
                GrammarBuilder gBuilder = new GrammarBuilder();
                // Append the command words to the builder
                gBuilder.Append(commands);
                // Instantiate the grammar
                Grammar grammar = new Grammar(gBuilder);

                // Load the grammar (recognition of the listed command words is fairly accurate)
                recognizer.LoadGrammarAsync(grammar);
                // Add a handler for the speech recognized event
                recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(Recognizer_SpeechRecognized);
                // Configure the recognizer's audio input
                recognizer.SetInputToDefaultAudioDevice();
                // Start asynchronous, continuous recognition
                recognizer.RecognizeAsync(RecognizeMode.Multiple);
                // Keep the console window open
                Console.WriteLine("你好");
                sy.Speak("你好");
                Console.ReadLine();
            }
        }

        // SpeechRecognized event handler
        static void Recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("识别结果:" + e.Result.Text + " " + e.Result.Confidence + " " + DateTime.Now);
            sy.Speak(e.Result.Text);
        }
    }
}

 

  2. Online speech recognition

  Online recognition means our program sends the audio to a remote service center, which performs the matching and returns the result. It is usually a RESTful API that exchanges recognition results as JSON.

  I started with iFlytek's speech recognition. Knowing nothing at first, I went on friends' recommendations plus my own searching; everyone said iFlytek was excellent, so I went to study it with high hopes, but on Windows there is only a C++ demo, and I work in C#. Languages largely overlap, but I did not want the extra pain. I found a demo online, reportedly the most complete C# version of the iFlytek demo, but when I saw its tangled source, which calls the C++ functions directly through P/Invoke, my heart sank. I ran the demo and it worked: it could do simple record-then-recognize, but some parts were broken with no fix in sight, so I reluctantly gave up.

  Later, Baidu Speech caught my attention. Around July I started over with its demo; the official demo is fairly simple. First you go to the Baidu speech open platform, create an application, and obtain an App Key and a Secret Key; then, with the demo downloaded, you write those two keys into the constructor, a field, or a config file, and the program uses them to issue requests. As said at the start, this is online recognition: in RESTful style, the audio file is uploaded to Baidu's recognition center, and after recognition the response data is returned to our program. At the beginning my configuration skills were not up to it, everything was misconfigured, and the mines started going off; a few always have to explode. In the end I managed to get the demo's test file recognized, which counts as a small personal step. (If you happen to be hitting the same mines, feel free to discuss them with me; I may not know everything, but at least I understand the ones I have stepped on, haha.)
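The token-then-upload flow described above can be sketched roughly as below. This is a minimal sketch under my own assumptions, not the official demo: the endpoint URLs and parameters follow Baidu's public REST documentation as I understand it, `cuid` is an arbitrary device identifier, and the App Key / Secret Key placeholders must be replaced with your own. A real program would also parse the returned JSON instead of printing it raw.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class BaiduSpeechSketch
{
    // Placeholders: replace with the keys from your own Baidu application.
    const string AppKey = "<your App Key>";
    const string SecretKey = "<your Secret Key>";

    // Step 1: exchange the two keys for an access token (returns JSON containing "access_token").
    static string GetTokenJson()
    {
        string url = "https://openapi.baidu.com/oauth/2.0/token"
            + "?grant_type=client_credentials"
            + "&client_id=" + AppKey
            + "&client_secret=" + SecretKey;
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
    }

    // Step 2: POST the raw PCM bytes to the recognition endpoint.
    static string Recognize(string accessToken, string pcmFile)
    {
        byte[] pcm = File.ReadAllBytes(pcmFile);
        string url = "http://vop.baidu.com/server_api"
            + "?cuid=my-device-id"      // any unique device id
            + "&token=" + accessToken;
        using (var client = new WebClient())
        {
            // The service expects 16 kHz (or 8 kHz), 16-bit, mono PCM.
            client.Headers[HttpRequestHeader.ContentType] = "audio/pcm;rate=16000";
            byte[] response = client.UploadData(url, "POST", pcm);
            return Encoding.UTF8.GetString(response); // JSON with the recognition result
        }
    }
}
```

Both calls go over the network, so wrap them in try/catch and check the error fields of the JSON in real code.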

  [Image 2]

 

  Next comes the design question. Recognition worked, and synthesis worked too. Note here that speech recognition and speech synthesis must be enabled separately; even though they share the same App Key and Secret Key, be careful, or synthesis will break. The next problem to consider: Baidu Speech is designed around recognizing files, but what we mostly want is to speak directly into the microphone and have that recognized, which was my goal too. My approach to this problem: save the spoken input to a file, and once input is finished, call the recognition method on that file. That does work, but at this point I entered another minefield: implementing the recording with NAudio.dll, a package you can download from NuGet.

using NAudio.Wave;
using System;

namespace SAssassin.VOC
{
    /// <summary>
    /// Implements the recording feature
    /// </summary>
    public class RecordWaveToFile
    {
        private WaveFileWriter waveFileWriter = null;
        private WaveInEvent myWaveIn = null;

        public void StartRecord()
        {
            ConfigWave();
            myWaveIn.StartRecording();
        }

        public void StopRecord()
        {
            myWaveIn?.StopRecording();
        }

        private void ConfigWave()
        {
            string filePath = AppDomain.CurrentDomain.BaseDirectory + "Temp.wav";
            // WaveInEvent rather than WaveIn: WaveIn throws in a console app
            // ("Use WaveInEvent to record on a background thread").
            myWaveIn = new WaveInEvent()
            {
                WaveFormat = new WaveFormat(16000, 16, 1) // 16 kHz, 16-bit, mono
                //WaveFormat = new WaveFormat() // 44.1 kHz default: clear audio, but rejected by the service
            };
            myWaveIn.DataAvailable += new EventHandler<WaveInEventArgs>(WaveIn_DataAvailable);
            myWaveIn.RecordingStopped += new EventHandler<StoppedEventArgs>(WaveIn_RecordingStopped);
            waveFileWriter = new WaveFileWriter(filePath, myWaveIn.WaveFormat);
        }

        private void WaveIn_DataAvailable(object sender, WaveInEventArgs e)
        {
            if (waveFileWriter != null)
            {
                waveFileWriter.Write(e.Buffer, 0, e.BytesRecorded);
                waveFileWriter.Flush();
            }
        }

        private void WaveIn_RecordingStopped(object sender, StoppedEventArgs e)
        {
            // Finalize the wav file here; calling StopRecording again would loop.
            waveFileWriter?.Dispose();
            waveFileWriter = null;
        }
    }
}

 

In this console program, using WaveInEvent does not throw; before that I had used the WaveIn class, and it threw immediately:

"System.InvalidOperationException: 'Use WaveInEvent to record on a background thread'"

  I found the solution on Stack Overflow: simply replace the WaveIn class with WaveInEvent. Looking inside the classes, both actually implement the same interface, and their structures are nearly identical; one is meant for GUI threads, the other for background threads. With everything in place, recording worked, but when I played back my recording there was a lot of noise; the quality was poor, even outright distorted, and useless: Baidu failed to recognize it too. Raising the sample rate to 44.1 kHz gave very good recordings, but then the problem arrived: Baidu's recognition only accepts PCM at its fixed low sample rates (16-bit). Frustrating. I thought of saving in another, compressed format, but if I simply drop the sample rate back down the quality is bad and recognition suffers again. That one will have to be solved step by step.
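One possible way around the sample-rate conflict, recording at 44.1 kHz for quality and then converting the finished file down to the 16 kHz/16-bit mono PCM the service accepts, is to resample with NAudio itself. A minimal sketch (the file names are placeholders, and MediaFoundationResampler relies on Windows Media Foundation, so it needs Vista or later):

```csharp
using NAudio.Wave;

class ResampleSketch
{
    // Convert a high-quality 44.1 kHz recording into 16 kHz, 16-bit, mono PCM.
    static void DownsampleTo16k(string inputWav, string outputWav)
    {
        using (var reader = new AudioFileReader(inputWav))
        using (var resampler = new MediaFoundationResampler(reader, new WaveFormat(16000, 16, 1)))
        {
            resampler.ResamplerQuality = 60; // 1..60; higher means better quality
            WaveFileWriter.CreateWaveFile(outputWav, resampler);
        }
    }
}
```

Resampling after the fact like this avoids recording at a low rate directly, which is where the noise problem above seemed to come from.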

  Now for today's main topic. This is also my first blog post, so it is time for the important part: Microsoft Cognitive Services. A while back I came across the Bing speech recognition API on the official Bing site, and its recognition quality made me exclaim out loud: the accuracy is just too good. I then went looking for the API, only to find that the documentation is all in English. Frustrating. Having read through the material, the usage looked promising, also a remote-call style, but I searched the site for ages and found only documentation, no product page, trial, or anything usable at the time, so I could only look at it without using it. Tiring. Just these past few days I re-read the Bing speech documentation and finally encountered the term "Microsoft Cognitive Services".
Forgive my narrow horizons; I had never heard of this gem. One Baidu search later: it really is good, Microsoft is impressive. It bundles many APIs, and speech recognition is only a side dish next to face recognition, semantic understanding, and other powerful features. Find the API, find the free trial, sign in to obtain the app's secret key, and you can start using it. Download a demo, enter the secret key, run a test, and wow: that recognition quality is simply superb. And judging from Baidu search results, very few people are using the Cognitive Services speech recognition feature, which is part of why I wanted to write something about it.

  I extracted many pieces of the demo into a console program; the source is below.

public class SpeechConfig
{
    #region Fields
    /// <summary>
    /// The isolated storage subscription key file name.
    /// </summary>
    private const string IsolatedStorageSubscriptionKeyFileName = "Subscription.txt";

    /// <summary>
    /// The default subscription key prompt message
    /// </summary>
    private const string DefaultSubscriptionKeyPromptMessage = "Secret key";

    /// <summary>
    /// You can also put the primary key in app.config, instead of using UI.
    /// string subscriptionKey = ConfigurationManager.AppSettings["primaryKey"];
    /// </summary>
    private string subscriptionKey = ConfigurationManager.AppSettings["primaryKey"];

    /// <summary>
    /// Gets or sets subscription key
    /// </summary>
    public string SubscriptionKey
    {
        get
        {
            return this.subscriptionKey;
        }

        set
        {
            this.subscriptionKey = value;
            this.OnPropertyChanged<string>();
        }
    }

    /// <summary>
    /// The data recognition client
    /// </summary>
    private DataRecognitionClient dataClient;

    /// <summary>
    /// The microphone client
    /// </summary>
    private MicrophoneRecognitionClient micClient;

    #endregion Fields

    #region Events
    /// <summary>
    /// Implement INotifyPropertyChanged interface
    /// </summary>
    public event PropertyChangedEventHandler PropertyChanged;

    /// <summary>
    /// Helper function for INotifyPropertyChanged interface
    /// </summary>
    /// <typeparam name="T">Property type</typeparam>
    /// <param name="caller">Property name</param>
    private void OnPropertyChanged<T>([CallerMemberName]string caller = null)
    {
        this.PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(caller));
    }
    #endregion Events

    #region Properties
    /// <summary>
    /// Gets the current speech recognition mode.
    /// </summary>
    /// <value>
    /// The speech recognition mode.
    /// </value>
    private SpeechRecognitionMode Mode
    {
        get
        {
            if (this.IsMicrophoneClientDictation ||
                this.IsDataClientDictation)
            {
                return SpeechRecognitionMode.LongDictation;
            }

            return SpeechRecognitionMode.ShortPhrase;
        }
    }

    /// <summary>
    /// Gets the default locale.
    /// </summary>
    /// <value>
    /// The default locale.
    /// </value>
    private string DefaultLocale
    {
        //get { return "en-US"; }
        get { return "zh-CN"; }
    }

    /// <summary>
    /// Gets the Cognitive Service Authentication Uri.
    /// </summary>
    /// <value>
    /// The Cognitive Service Authentication Uri.  Empty if the global default is to be used.
    /// </value>
    private string AuthenticationUri
    {
        get
        {
            return ConfigurationManager.AppSettings["AuthenticationUri"];
        }
    }

    /// <summary>
    /// Gets a value indicating whether or not to use the microphone.
    /// </summary>
    /// <value>
    ///   <c>true</c> if [use microphone]; otherwise, <c>false</c>.
    /// </value>
    private bool UseMicrophone
    {
        get
        {
            return this.IsMicrophoneClientWithIntent ||
                this.IsMicrophoneClientDictation ||
                this.IsMicrophoneClientShortPhrase;
        }
    }

    /// <summary>
    /// Gets the short wave file path.
    /// </summary>
    /// <value>
    /// The short wave file.
    /// </value>
    private string ShortWaveFile
    {
        get
        {
            return ConfigurationManager.AppSettings["ShortWaveFile"];
        }
    }

    /// <summary>
    /// Gets the long wave file path.
    /// </summary>
    /// <value>
    /// The long wave file.
    /// </value>
    private string LongWaveFile
    {
        get
        {
            return ConfigurationManager.AppSettings["LongWaveFile"];
        }
    }
    #endregion Properties

    #region Mode selection flags
    /// <summary>
    /// Gets or sets a value indicating whether this instance is microphone client short phrase.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is microphone client short phrase; otherwise, <c>false</c>.
    /// </value>
    public bool IsMicrophoneClientShortPhrase { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this instance is microphone client dictation.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is microphone client dictation; otherwise, <c>false</c>.
    /// </value>
    public bool IsMicrophoneClientDictation { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this instance is microphone client with intent.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is microphone client with intent; otherwise, <c>false</c>.
    /// </value>
    public bool IsMicrophoneClientWithIntent { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this instance is data client short phrase.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is data client short phrase; otherwise, <c>false</c>.
    /// </value>
    public bool IsDataClientShortPhrase { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this instance is data client with intent.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is data client with intent; otherwise, <c>false</c>.
    /// </value>
    public bool IsDataClientWithIntent { get; set; }

    /// <summary>
    /// Gets or sets a value indicating whether this instance is data client dictation.
    /// </summary>
    /// <value>
    /// <c>true</c> if this instance is data client dictation; otherwise, <c>false</c>.
    /// </value>
    public bool IsDataClientDictation { get; set; }

    #endregion

    #region Event handlers
    /// <summary>
    /// Called when the microphone status has changed.
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="MicrophoneEventArgs"/> instance containing the event data.</param>
    private void OnMicrophoneStatus(object sender, MicrophoneEventArgs e)
    {
        Task task = new Task(() =>
        {
            Console.WriteLine("--- Microphone status change received by OnMicrophoneStatus() ---");
            Console.WriteLine("********* Microphone status: {0} *********", e.Recording);
            if (e.Recording)
            {
                Console.WriteLine("Please start speaking.");
            }

            Console.WriteLine();
        });
        task.Start();
    }

    /// <summary>
    /// Called when a partial response is received.
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="PartialSpeechResponseEventArgs"/> instance containing the event data.</param>
    private void OnPartialResponseReceivedHandler(object sender, PartialSpeechResponseEventArgs e)
    {
        Console.WriteLine("--- Partial result received by OnPartialResponseReceivedHandler() ---");
        Console.WriteLine("{0}", e.PartialResult);
        Console.WriteLine();
    }

    /// <summary>
    /// Called when an error is received.
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="SpeechErrorEventArgs"/> instance containing the event data.</param>
    private void OnConversationErrorHandler(object sender, SpeechErrorEventArgs e)
    {
        Console.WriteLine("--- Error received by OnConversationErrorHandler() ---");
        Console.WriteLine("Error code: {0}", e.SpeechErrorCode.ToString());
        Console.WriteLine("Error text: {0}", e.SpeechErrorText);
        Console.WriteLine();
    }

    /// <summary>
    /// Called when a final response is received;
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="SpeechResponseEventArgs"/> instance containing the event data.</param>
    private void OnMicShortPhraseResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
    {
        Task task = new Task(() =>
        {
            Console.WriteLine("--- OnMicShortPhraseResponseReceivedHandler ---");

            // we got the final result, so we can end the mic reco.  No need to do this
            // for dataReco, since we already called endAudio() on it as soon as we were done
            // sending all the data.
            this.micClient.EndMicAndRecognition();

            this.WriteResponseResult(e);
        });
        task.Start();
    }

    /// <summary>
    /// Called when a final response is received;
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="SpeechResponseEventArgs"/> instance containing the event data.</param>
    private void OnDataShortPhraseResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
    {
        Task task = new Task(() =>
        {
            Console.WriteLine("--- OnDataShortPhraseResponseReceivedHandler ---");

            // No need to end the mic here, since we already called endAudio() on the
            // data client as soon as we were done sending all the data.
            this.WriteResponseResult(e);
        });
        task.Start();
    }

    /// <summary>
    /// Called when a final response is received;
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="SpeechResponseEventArgs"/> instance containing the event data.</param>
    private void OnMicDictationResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
    {
        Console.WriteLine("--- OnMicDictationResponseReceivedHandler ---");
        if (e.PhraseResponse.RecognitionStatus == RecognitionStatus.EndOfDictation ||
            e.PhraseResponse.RecognitionStatus == RecognitionStatus.DictationEndSilenceTimeout)
        {
            Task task = new Task(() =>
            {
                // we got the final result, so we can end the mic reco.  No need to do this
                // for dataReco, since we already called endAudio() on it as soon as we were done
                // sending all the data.
                this.micClient.EndMicAndRecognition();
            });
            task.Start();
        }

        this.WriteResponseResult(e);
    }

    /// <summary>
    /// Called when a final response is received;
    /// </summary>
    /// <param name="sender">The sender.</param>
    /// <param name="e">The <see cref="SpeechResponseEventArgs"/> instance containing the event data.</param>
    private void OnDataDictationResponseReceivedHandler(object sender, SpeechResponseEventArgs e)
    {
        Console.WriteLine("--- OnDataDictationResponseReceivedHandler ---");
        if (e.PhraseResponse.RecognitionStatus == RecognitionStatus.EndOfDictation ||
            e.PhraseResponse.RecognitionStatus == RecognitionStatus.DictationEndSilenceTimeout)
        {
            Task task = new Task(() =>
            {
                // Nothing to end here: we already called endAudio() on the data client
                // as soon as we were done sending all the data.
            });
            task.Start();
        }

        this.WriteResponseResult(e);
    }

    /// <summary>
    /// Sends the audio helper.
    /// </summary>
    /// <param name="wavFileName">Name of the wav file.</param>
    private void SendAudioHelper(string wavFileName)
    {
        using (FileStream fileStream = new FileStream(wavFileName, FileMode.Open, FileAccess.Read))
        {
            // Note for wave files, we can just send data from the file right to the server.
            // In case you do not have an audio file in wave format, and instead you have just
            // raw data (for example audio coming over bluetooth), then before sending up any
            // audio data, you must first send up a SpeechAudioFormat descriptor to describe
            // the layout and format of your raw audio data via DataRecognitionClient's sendAudioFormat() method.
            int bytesRead = 0;
            byte[] buffer = new byte[1024];

            try
            {
                do
                {
                    // Get more audio data to send into the byte buffer.
                    bytesRead = fileStream.Read(buffer, 0, buffer.Length);

                    // Send audio data to the service.
                    this.dataClient.SendAudio(buffer, bytesRead);
                }
                while (bytesRead > 0);
            }
            finally
            {
                // We are done sending audio.  Final recognition results will arrive in OnResponseReceived event call.
                this.dataClient.EndAudio();
            }
        }
    }
    #endregion Event handlers

    #region Helper methods
    /// <summary>
    /// Gets the subscription key from isolated storage.
    /// </summary>
    /// <returns>The subscription key.</returns>
    private string GetSubscriptionKeyFromIsolatedStorage()
    {
        string subscriptionKey = null;

        using (IsolatedStorageFile isoStore = IsolatedStorageFile.GetStore(IsolatedStorageScope.User | IsolatedStorageScope.Assembly, null, null))
        {
            try
            {
                using (var iStream = new IsolatedStorageFileStream(IsolatedStorageSubscriptionKeyFileName, FileMode.Open, isoStore))
                {
                    using (var reader = new StreamReader(iStream))
                    {
                        subscriptionKey = reader.ReadLine();
                    }
                }
            }
            catch (FileNotFoundException)
            {
                subscriptionKey = null;
            }
        }

        if (string.IsNullOrEmpty(subscriptionKey))
        {
            subscriptionKey = DefaultSubscriptionKeyPromptMessage;
        }

        return subscriptionKey;
    }

    /// <summary>
    /// Creates a new microphone reco client without LUIS intent support.
    /// </summary>
    private void CreateMicrophoneRecoClient()
    {
        this.micClient = SpeechRecognitionServiceFactory.CreateMicrophoneClient(
            this.Mode, this.DefaultLocale, this.SubscriptionKey);

        this.micClient.AuthenticationUri = this.AuthenticationUri;

        // Event handlers for speech recognition results
        this.micClient.OnMicrophoneStatus += this.OnMicrophoneStatus;
        this.micClient.OnPartialResponseReceived += this.OnPartialResponseReceivedHandler;
        if (this.Mode == SpeechRecognitionMode.ShortPhrase)
        {
            this.micClient.OnResponseReceived += this.OnMicShortPhraseResponseReceivedHandler;
        }
        else if (this.Mode == SpeechRecognitionMode.LongDictation)
        {
            this.micClient.OnResponseReceived += this.OnMicDictationResponseReceivedHandler;
        }

        this.micClient.OnConversationError += this.OnConversationErrorHandler;
    }

    /// <summary>
    /// Creates a data client without LUIS intent support.
    /// Speech recognition with data (for example from a file or audio source).
    /// The data is broken up into buffers and each buffer is sent to the Speech Recognition Service.
    /// No modification is done to the buffers, so the user can apply their
    /// own Silence Detection if desired.
    /// </summary>
    private void CreateDataRecoClient()
    {
        this.dataClient = SpeechRecognitionServiceFactory.CreateDataClient(
            this.Mode,
            this.DefaultLocale,
            this.SubscriptionKey);
        this.dataClient.AuthenticationUri = this.AuthenticationUri;

        // Event handlers for speech recognition results
        if (this.Mode == SpeechRecognitionMode.ShortPhrase)
        {
            this.dataClient.OnResponseReceived += this.OnDataShortPhraseResponseReceivedHandler;
        }
        else
        {
            this.dataClient.OnResponseReceived += this.OnDataDictationResponseReceivedHandler;
        }

        this.dataClient.OnPartialResponseReceived += this.OnPartialResponseReceivedHandler;
        this.dataClient.OnConversationError += this.OnConversationErrorHandler;
    }

    /// <summary>
    /// Writes the response result.
    /// </summary>
    /// <param name="e">The <see cref="SpeechResponseEventArgs"/> instance containing the event data.</param>
    private void WriteResponseResult(SpeechResponseEventArgs e)
    {
        if (e.PhraseResponse.Results.Length == 0)
        {
            Console.WriteLine("No phrase response is available.");
        }
        else
        {
            Console.WriteLine("********* Final n-BEST Results *********");
            for (int i = 0; i < e.PhraseResponse.Results.Length; i++)
            {
                Console.WriteLine(
                    "[{0}] Confidence={1}, Text=\"{2}\"",
                    i,
                    e.PhraseResponse.Results[i].Confidence,
                    e.PhraseResponse.Results[i].DisplayText);
                if (e.PhraseResponse.Results[i].DisplayText == "关闭。") // the Chinese command word "close"
                {
                    Console.WriteLine("收到命令,马上关闭");
                }
            }

            Console.WriteLine();
        }
    }
    #endregion Helper methods

    #region Init
    public SpeechConfig()
    {
        this.IsMicrophoneClientShortPhrase = true;
        this.IsMicrophoneClientWithIntent = false;
        this.IsMicrophoneClientDictation = false;
        this.IsDataClientShortPhrase = false;
        this.IsDataClientWithIntent = false;
        this.IsDataClientDictation = false;

        this.SubscriptionKey = this.GetSubscriptionKeyFromIsolatedStorage();
    }

    /// <summary>
    /// Starts speech recognition
    /// </summary>
    public void SpeechRecognize()
    {
        if (this.UseMicrophone)
        {
            if (this.micClient == null)
            {
                this.CreateMicrophoneRecoClient();
            }

            this.micClient.StartMicAndRecognition();
        }
        else
        {
            if (null == this.dataClient)
            {
                this.CreateDataRecoClient();
            }

            this.SendAudioHelper((this.Mode == SpeechRecognitionMode.ShortPhrase) ? this.ShortWaveFile : this.LongWaveFile);
        }
    }
    #endregion Init
}
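For completeness, the class above can be driven from a simple console entry point like this (a sketch; the constructor already defaults to microphone short-phrase mode, so no extra configuration is needed):

```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        var speech = new SpeechConfig();   // defaults to microphone short-phrase mode
        speech.SpeechRecognize();          // starts the mic and the recognition session
        Console.WriteLine("Speak into the microphone; press Enter to exit.");
        Console.ReadLine();                // keep the console window open for the async events
    }
}
```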

   The assemblies referenced in this code can all be pulled in via NuGet packages; basically no trouble there.

One thing to note here: when downloading Microsoft.Speech, you must install all of the related packages together, otherwise you will get errors, and the project must target .NET Framework 4.5 or above.

[Image 3]

  Just replace the default key and the program runs; the effect really is impressive.

[Image 4]

 

 

The recognition rate is really, really good; I am very pleased. The catch is that Microsoft's free trial only lasts one month, so I will have to make it bear as much fruit as possible within that month, haha.

 

 

  In this first blog post I have recommended the Microsoft Cognitive Services speech recognition API; only after experiencing its power first-hand did I decide to make it the subject of my first post.

   2017-08-20. May I look back on these footsteps once the technology succeeds.