flutter_tflite_audio's Introduction

Welcome!

I'm Michael from Sydney, Australia 🇦🇺, currently living in Tokyo, Japan 🇯🇵.

About me:

  • ๐ŸŽ Have worked with Flutter, Swift, Python, Dart, Java.
  • โญ Interested in machine learning and mobile development.
  • โšก Currently working on Sound Event Detection with Tensorflow and Flutter.
  • ๐ŸŒ Open to freelancing/contract work
  • ๐Ÿ“ฌ How to reach me: [email protected]

GitHub stats:

flutter_tflite_audio's People

Contributors

caldarie, zielu92


flutter_tflite_audio's Issues

Recognition Raw Scores returns [NaN, NaN, NaN, NaN]

Hi @Caldarie
I found a problem when using a Google Teachable Machine (GTM) model: the raw scores return NaN with the latest version, 0.2.1+1.

D/AudioRecord(27796): stop(1446): 0x7543619a00, mActive:0
D/AudioRecord(27796): ~AudioRecord(1446): mStatus 0
D/AudioRecord(27796): stop(1446): 0x7543619a00, mActive:0
D/Tflite_audio(27796): Recording stopped.
V/Tflite_audio(27796): Raw Scores: [NaN, NaN, NaN, NaN]
D/Tflite_audio(27796): Recognition stopped.
V/Tflite_audio(27796): result: {hasPermission=true, inferenceTime=89, recognitionResult=Background Noise}
D/Tflite_audio(27796): Recognition Stream stopped

However, a non-GTM model works fine.

iOS issues: invalid 'tflite_audio.podspec' file after changing the plugin version, and problems with Flutter 2

I noticed that with the new version of this plugin there are some problems, especially on iOS devices:

  1. With the following dependency, running pod install fails:
dependencies:
  tflite_audio: ^0.1.6+1
Analyzing dependencies
[!] Failed to load 'tflite_audio' podspec: 
[!] Invalid `tflite_audio.podspec` file: syntax error, unexpected tCONSTANT, expecting end
  s.summary          = 'A new flutter plugin project.'
                        ^
/Users/carolinaalbuquerque/Documents/beingcare-concept-proof/concept_proof/ios/.symlinks/plugins/tflite_audio/ios/tflite_audio.podspec:8: syntax error, unexpected tSTRING_BEG
...'A new flutter plugin project.'
...                              ^
/Users/carolinaalbuquerque/Documents/beingcare-concept-proof/concept_proof/ios/.symlinks/plugins/tflite_audio/ios/tflite_audio.podspec:12: syntax error, unexpected tIDENTIFIER, expecting end-of-input
  s.homepage         = 'http://example.com'
                        ^~~~.

 #  from /Users/carolinaalbuquerque/Documents/beingcare-concept-proof/concept_proof/ios/.symlinks/plugins/tflite_audio/ios/tflite_audio.podspec:8
 #  -------------------------------------------
 #    s.version          = '0.1.6+1
 >    s.summary          = 'A new flutter plugin project.'
 #    s.description      = <<-DESC
 #  -------------------------------------------

However, with version 0.1.5+3, this issue does not happen!

  2. Also, after updating to Flutter 2, I started having problems in the startRecording process when running on an iPhone:
Launching lib/main.dart on iPhone de Carolina in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project: 5SSNTW7HP4
Running pod install...                                              6,4s
Running Xcode build...                                                  
 └─Compiling, linking and signing...                        18,5s
Xcode build done.                                           43,6s
Initialized TensorFlow Lite runtime.                                    
Created TensorFlow Lite delegate for select TF ops.                     
TfLiteFlexDelegate delegate: 3 nodes delegated out of 47 nodes with 2 partitions.
["0 Background Noise", "1 Clap", "2 Whistle"]                           
Installing and launching...                                        42,9s
Connecting to the VM Service is taking longer than expected...
Permission granted
start microphone
Permission granted
start microphone
[avae]            AVAEInternal.h:76    required condition is false: [AVAEGraphNode.mm:817:CreateRecordingTap: (nullptr == Tap())]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: nullptr == Tap()'
*** First throw call stack:
(0x1ad19a9d8 0x1c1520b54 0x1ad0a950c 0x1bd103984 0x1bd161c04 0x1bd149b3c 0x1bd1c6de8 0x1bd1a81e4 0x105aaaf80 0x105aa83d4 0x105aa7a30 0x105aa7bcc 0x10d2e45bc 0x10ca83b78 0x10cd82f5c 0x10cd2235c 0x10cd24a14 0x1ad11b3e0 0x1ad11afe4 0x1ad11a4c4 0x1ad114850 0x1ad113ba0 0x1c3e7c598 0x1afa052f4 0x1afa0a874 0x10578877c 0x1acdf2568)
libc++abi.dylib: terminating with uncaught exception of type NSException
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00000001d90d984c libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
->  0x1d90d984c <+8>:  b.lo   0x1d90d9868               ; <+36>
    0x1d90d9850 <+12>: stp    x29, x30, [sp, #-0x10]!
    0x1d90d9854 <+16>: mov    x29, sp
    0x1d90d9858 <+20>: bl     0x1d90b6f5c               ; cerror_nocancel
Target 0: (Runner) stopped.
Still attempting to connect to the VM Service...
If you do NOT see the Flutter application running, it might have crashed. The device logs (e.g. from adb or XCode) might have more details.

Besides this, when running on an Android device it works perfectly with both v0.1.5+3 and v0.1.6+1!

Permission request error

D/Tflite_audio( 7874): Check for permissions
D/Tflite_audio( 7874): Permission requested.
E/EventChannel#startAudioRecognition( 7874): Failed to open event stream
E/EventChannel#startAudioRecognition( 7874): java.lang.NullPointerException: Attempt to invoke virtual method 'void android.app.Activity.requestPermissions(java.lang.String[], int)' on a null object reference
E/EventChannel#startAudioRecognition( 7874): 	at androidx.core.app.ActivityCompat.requestPermissions(ActivityCompat.java:502)
E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.requestMicrophonePermission(TfliteAudioPlugin.java:310)
E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.checkPermissions(TfliteAudioPlugin.java:303)
E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.onListen(TfliteAudioPlugin.java:221)
E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onListen(EventChannel.java:188)
E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onMessage(EventChannel.java:167)
E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:85)
E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:818)
E/EventChannel#startAudioRecognition( 7874): 	at android.os.MessageQueue.nativePollOnce(Native Method)
E/EventChannel#startAudioRecognition( 7874): 	at android.os.MessageQueue.next(MessageQueue.java:335)
E/EventChannel#startAudioRecognition( 7874): 	at android.os.Looper.loop(Looper.java:206)
E/EventChannel#startAudioRecognition( 7874): 	at android.app.ActivityThread.main(ActivityThread.java:8512)
E/EventChannel#startAudioRecognition( 7874): 	at java.lang.reflect.Method.invoke(Native Method)
E/EventChannel#startAudioRecognition( 7874): 	at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:602)
E/EventChannel#startAudioRecognition( 7874): 	at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1130)

โ•โ•โ•โ•โ•โ•โ•โ• Exception caught by services library โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
The following PlatformException was thrown while activating platform stream on channel startAudioRecognition:
PlatformException(error, Attempt to invoke virtual method 'void android.app.Activity.requestPermissions(java.lang.String[], int)' on a null object reference, null, null)

When the exception was thrown, this was the stack:
#0      StandardMethodCodec.decodeEnvelope
package:flutter/…/services/message_codecs.dart:597
#1      MethodChannel._invokeMethod
package:flutter/…/services/platform_channel.dart:158
<asynchronous suspension>
#2      EventChannel.receiveBroadcastStream.<anonymous closure>
package:flutter/…/services/platform_channel.dart:545
<asynchronous suspension>
โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

iOS issue with Background service plugin / outputRawScores

Hi @Caldarie. I'm testing the app on iOS, but the package doesn't work even though I followed the implementation guidelines. This is the exception:

Unhandled Exception: MissingPluginException(No implementation found for method loadModel on channel tflite_audio)
#0 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:165:7)

โ•โ•โ•ก EXCEPTION CAUGHT BY SERVICES LIBRARY โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
The following MissingPluginException was thrown while activating platform stream on channel
AudioRecognitionStream:
MissingPluginException(No implementation found for method listen on channel AudioRecognitionStream)

When the exception was thrown, this was the stack:
#0 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:165:7)

#1 EventChannel.receiveBroadcastStream.<anonymous closure> (package:flutter/src/services/platform_channel.dart:506:9)

═══════════════════════════════════════════════════════════════════════════════

How can I fix it?

Thank you.

Increase Inference Frequency (Android)

Hey man, thanks for this awesome plugin. I'm looking to increase the number of times per second my model runs. I was able to implement a sliding window in the Swift code, but I'm not very familiar with Java or Android development. Could you provide some suggestions on how to accomplish this?

Thanks,
Brett
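For anyone porting this to the Android side: the idea is to keep a window of the last N samples the model expects and re-run inference every `hop` samples instead of once per full buffer. The sketch below is purely illustrative (the class and method names are made up, not the plugin's actual code in TfliteAudioPlugin.java):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a sliding window over incoming PCM chunks.
// windowSize = number of samples the model expects; hop = samples between inferences.
public class SlidingWindow {
    private final short[] window;
    private final int hop;
    private int filled = 0;        // samples currently in the window
    private int sinceLastRun = 0;  // samples since the last inference

    public SlidingWindow(int windowSize, int hop) {
        this.window = new short[windowSize];
        this.hop = hop;
    }

    /** Feed a chunk; returns one window copy per inference that became due. */
    public List<short[]> feed(short[] chunk) {
        List<short[]> ready = new ArrayList<>();
        for (short sample : chunk) {
            if (filled < window.length) {
                window[filled++] = sample;
            } else {
                // Shift left by one and append. Fine for a sketch; a ring
                // buffer avoids the copy in production.
                System.arraycopy(window, 1, window, 0, window.length - 1);
                window[window.length - 1] = sample;
            }
            sinceLastRun++;
            if (filled == window.length && sinceLastRun >= hop) {
                ready.add(window.clone()); // hand a copy to the interpreter
                sinceLastRun = 0;
            }
        }
        return ready;
    }
}
```

With windowSize equal to the model's audioLength and hop set to, say, a quarter of it, inference runs four times as often on overlapping audio.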

Counting specific sound occurrences in the audio

I am trying to count the number of occurrences of a specific sound in audio.

The problem is that when I call TfliteAudio.startAudioRecognition and listen to the event stream, I receive events every 1 second, and I can't find a way to increase the frequency. Is it possible to decrease the interval duration to 50-100 ms?

Another problem is that event['recognitionResult'] always returns "1 Result":
result:
{hasPermission=true, inferenceTime=75, recognitionResult=1 Result}
However, there is more than one repetition of the sound I am trying to count in each 1-second interval. Should it work like this? What does the number "1" mean: is it the index of the sound in a single audio interval, or something else?

Is it possible to implement counting of a specific sound with this package, or should I look elsewhere? Any feedback would be helpful, thanks!
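As a stopgap while events arrive only once per second: if raw scores are available, occurrences can be counted by detecting rising edges above a threshold, ignoring hits that come too soon after the previous one (similar in spirit to the plugin's suppressionTime parameter). A hedged sketch, with illustrative names only:

```java
// Counts threshold crossings in a score sequence, ignoring detections
// that fall within `minGap` frames of the previous accepted one.
// Frame timing depends on how often scores actually arrive.
public class EventCounter {
    public static int count(double[] scores, double threshold, int minGap) {
        int count = 0;
        int lastHit = Integer.MIN_VALUE / 2; // effectively "long ago"
        boolean above = false;               // track rising edges only
        for (int i = 0; i < scores.length; i++) {
            boolean nowAbove = scores[i] >= threshold;
            if (nowAbove && !above && i - lastHit >= minGap) {
                count++;
                lastHit = i;
            }
            above = nowAbove;
        }
        return count;
    }
}
```

Tightening minGap trades missed repeats against double counting, which is the same trade-off the interval duration question is really about.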

Different results on android and ios with mfcc model.

  1. I have created my own model that uses MFCCs. With your package it works fine on Android, but it gives strange results on iOS (almost the same results every time). Going through your package further, I encountered the following:

On Android, when we use MFCC (inside TfliteAudioPlugin.java):
InputData2D=[-584.5461, 160.31107, 54.33207, 41.93382, -6.2690716, -22.130093, -23.728022, -18.326862, -19.871655, -12.884017, 4.3321896, 7.76119, -10.174459, 14.504196, -8.621558, -29.163612, 1.3063055, 14.564109, -7.867571, 0.08920245, 13.916428, 23.708931, 8.588604, -7.8490815, -3.8324726, -10.441817, -2.3888357, 9.464556, -6.7323833, -3.6811, 2.5033593, -8.471148, -5.328222, 5.5226245, 6.6240654, -7.9169397, -9.550313, 9.346459, 5.020493, -6.127846]

while on iOS, when we use MFCC (inside SwiftTfliteAudioPlugin.swift):
InputData=[-0.0000464e-200,0.0037373e-330,.........................................,0.0037363e-100]

On iOS the InputData values are extremely low.

Clearly there's a huge difference between the input data on Android and iOS, which is why iOS gives such strange results (almost the same every time).

  2. I am testing on physical devices for both Android and iOS.

Is it possible to record the audio at the same time?

Hi,

I'm wondering if it's possible to record audio at the same time the model is running.

Basically, I want to build something that saves snippets of the audio when specific things are detected.

Thanks,
Jonas

Null safety: Error in example code

I think it is caused by the addition of null-safety compatibility without updating the example code.

Error:

lib/main.dart:180:58: Error: The parameter 'result' can't have a value of 'null' because of its type 'String', but the implicit default value is 'null'.
Try adding either an explicit non-'null' default value or the 'required' modifier.
Widget labelListWidget(List labelList, [String result]) {
                                               ^^^^^^
lib/main.dart:118:68: Error: The argument type 'List?' can't be assigned to the parameter type 'List' because 'List?' is nullable and 'List' isn't.
  • 'List' is from 'dart:core'.
    return labelListWidget(labelSnapshot.data);
                                         ^
lib/main.dart:129:61: Error: The argument type 'List?' can't be assigned to the parameter type 'List' because 'List?' is nullable and 'List' isn't.
  • 'List' is from 'dart:core'.
    labelListWidget(labelSnapshot.data),
                    ^
lib/main.dart:141:49: Error: The argument type 'List?' can't be assigned to the parameter type 'List' because 'List?' is nullable and 'List' isn't.
  • 'List' is from 'dart:core'.
    labelSnapshot.data,
    ^
lib/main.dart:20:33: Error: Field 'result' should be initialized because its type 'Stream<Map<dynamic, dynamic>>' doesn't allow null.
  • 'Stream' is from 'dart:async'.
  • 'Map' is from 'dart:core'.
    Stream<Map<dynamic, dynamic>> result;

Flutter Version :
Flutter 2.2.3 • channel stable • https://github.com/flutter/flutter.git
Framework • revision f4abaa0735 (4 weeks ago) • 2021-07-01 12:46:11 -0700
Engine • revision 241c87ad80
Tools • Dart 2.13.4

Create model and retrain on the fly

Hi, first of all, thanks for the awesome project, especially the clear README that walked me through the journey.

I would like to ask whether this plugin exposes an API that lets the user train on the fly, and reload after adding a new model?

The example does not work well

I tested the example and it doesn't work well at recognizing the words I say.
Is it a lack of training data causing this, or something else?
Also, how can I create a model? Can I use teachablemachine.withgoogle.com for that?

iOS build error (Solved. Pinned for reference)

The solution may be here
tensorflow/tensorflow#52042

Build Error

duplicate symbol '_TfLiteXNNPackDelegateCreate' in:
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
duplicate symbol '_TfLiteXNNPackDelegateDelete' in:
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
duplicate symbol '_TfLiteXNNPackDelegateGetThreadPool' in:
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
duplicate symbol '_TfLiteXNNPackDelegateOptionsDefault' in:
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
    /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
ld: 4 duplicate symbols for architecture arm64

OpenAI Whisper

Hi,

Could this library be used to run OpenAI Whisper with a tflite model? In the examples there are always labels provided, but for Whisper there would not be any labels, I think?

Thanks!

Need help with the inputs

Hi, I have a dumb question. My model takes the output of librosa.load(audio_file, sr=16000) as input. How can I reproduce that with your code?

Thank you.
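For context, librosa.load(path, sr=16000) returns mono float32 samples normalized to [-1, 1] at 16 kHz, so the recorder's signed 16-bit PCM needs the same scaling (and a matching 16 kHz sample rate). A minimal sketch of the conversion, assuming mono 16-bit input:

```java
// Convert signed 16-bit PCM samples to floats in [-1, 1], matching what
// librosa.load produces for mono audio. Resampling is NOT handled here:
// record at 16 kHz, or resample separately, to match sr=16000.
public class PcmToFloat {
    public static float[] convert(short[] pcm) {
        float[] out = new float[pcm.length];
        for (int i = 0; i < pcm.length; i++) {
            out[i] = pcm[i] / 32768.0f; // 2^15, full scale of 16-bit audio
        }
        return out;
    }
}
```

If the model was trained on librosa output, feeding it unscaled 16-bit integers would put the input four orders of magnitude off, so this scaling step matters.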

Nil found error with Google Teachable Machine model

When I try to run the app with a GTM-built model on a device, it keeps triggering the following error when the buffer size is reached and inference should happen.

Failed to invoke the interpreter with error: Provided data count 376128 must match the required count 176128.
Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value: file tflite_audio/SwiftTfliteAudioPlugin.swift, line 283
2021-02-18 10:29:23.396879+0100 Runner[8368:1723178] Fatal error: Unexpectedly found nil while implicitly unwrapping an Optional value: file tflite_audio/SwiftTfliteAudioPlugin.swift, line 283

The problematic line is: let scores = [Float32](unsafeData: outputTensor.data) ?? []
where outputTensor is nil.

Thanks for helping!
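A possible workaround, sketched below: the error says 376128 were provided where 176128 were required, i.e. the recording buffer is larger than the model's input tensor, so the buffer could be trimmed (or padded) to the expected count before invoking the interpreter. This is illustrative only; the proper fix may be configuring the buffer size / audio length so the mismatch never occurs:

```java
import java.util.Arrays;

// Trim or zero-pad an audio buffer to exactly the sample count the
// model's input tensor expects.
public class FitToModel {
    public static float[] fit(float[] audio, int required) {
        if (audio.length == required) return audio;
        if (audio.length > required) {
            // Keep the most recent samples (the end of the buffer).
            return Arrays.copyOfRange(audio, audio.length - required, audio.length);
        }
        return Arrays.copyOf(audio, required); // zero-pads the tail
    }
}
```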

Error while running on iOS

I get this error every time I try to listen for sounds:
exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

only on iOS (12, debug mode); on Android everything is OK

The plugin `tflite_audio` uses a deprecated version of the Android embedding.

The plugin tflite_audio uses a deprecated version of the Android embedding.
To avoid unexpected runtime failures, or future build failures, try to see if this plugin supports the Android V2 embedding. Otherwise, consider removing it since a future release of Flutter will remove these deprecated APIs.
If you are the plugin author, take a look at the docs for migrating the plugin to the V2 embedding:

Android Permission bug

Hi @Caldarie. I found another issue to fix. In my app, the Dart code asks for permissions (it asks for all permissions on the home page; I need to do this because tflite_audio is not the only package that needs permissions). When the app asks for permission and the user grants it, tflite_audio nonetheless shows a dialog with the message 'Microphone permission denied. Go to settings etc.'. But that isn't true, because the user granted the permission. After a lot of time, I found the issue in TfliteAudioPlugin.java, line 330 (inside the onRequestPermissionsResult() method). I don't know why, but it seems that method doesn't register that the permission has already been granted.
Can you provide a little update on this?
Thank you

startFileRecognition error

I get an error when loading a WAV file from an external directory, but loading it from the assets directory works.

I set isAsset to false.

The result is:

Parameters: {audioDirectory=/data/user/0/com.example.hello/cache/file_picker/cat_1.wav, detectionThreshold=0.5, minimumTimeBetweenSamples=0, method=setFileRecognitionStream, averageWindowDuration=0, audioLength=0, sampleRate=44100, suppressionTime=0}
D/TfliteAudio(25258): AudioLength has been readjusted. Length: 8620
D/TfliteAudio(25258): Transpose Audio: false
D/TfliteAudio(25258): Check for permission. Request code: 1
D/TfliteAudio(25258): Loading audio file to buffer
D/TfliteAudio(25258): Audio file sucessfully loaded
D/TfliteAudio(25258): Extracting byte data from audio file
E/EventChannel#FileRecognitionStream(25258): Failed to open event stream
E/EventChannel#FileRecognitionStream(25258): java.lang.RuntimeException: Failed to load audio file:
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.MediaDecoder.<init>(MediaDecoder.java:36)
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.TfliteAudioPlugin.extractRawData(TfliteAudioPlugin.java:531)
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.TfliteAudioPlugin.loadAudioFile(TfliteAudioPlugin.java:520)
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.TfliteAudioPlugin.checkPermissions(TfliteAudioPlugin.java:404)
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.TfliteAudioPlugin.onListen(TfliteAudioPlugin.java:252)
E/EventChannel#FileRecognitionStream(25258): at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onListen(EventChannel.java:218)
E/EventChannel#FileRecognitionStream(25258): at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onMessage(EventChannel.java:197)
E/EventChannel#FileRecognitionStream(25258): at io.flutter.embedding.engine.dart.DartMessenger.invokeHandler(DartMessenger.java:295)
E/EventChannel#FileRecognitionStream(25258): at io.flutter.embedding.engine.dart.DartMessenger.lambda$dispatchMessageToQueue$0$io-flutter-embedding-engine-dart-DartMessenger(DartMessenger.java:322)
E/EventChannel#FileRecognitionStream(25258): at io.flutter.embedding.engine.dart.DartMessenger$$ExternalSyntheticLambda0.run(Unknown Source:12)
E/EventChannel#FileRecognitionStream(25258): at android.os.Handler.handleCallback(Handler.java:883)
E/EventChannel#FileRecognitionStream(25258): at android.os.Handler.dispatchMessage(Handler.java:100)
E/EventChannel#FileRecognitionStream(25258): at android.os.Looper.loop(Looper.java:224)
E/EventChannel#FileRecognitionStream(25258): at android.app.ActivityThread.main(ActivityThread.java:7562)
E/EventChannel#FileRecognitionStream(25258): at java.lang.reflect.Method.invoke(Native Method)
E/EventChannel#FileRecognitionStream(25258): at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:539)
E/EventChannel#FileRecognitionStream(25258): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:950)
E/EventChannel#FileRecognitionStream(25258): Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'java.io.FileDescriptor android.content.res.AssetFileDescriptor.getFileDescriptor()' on a null object reference
E/EventChannel#FileRecognitionStream(25258): at flutter.tflite_audio.MediaDecoder.<init>(MediaDecoder.java:34)
E/EventChannel#FileRecognitionStream(25258): ... 16 more

โ•โ•โ•ก EXCEPTION CAUGHT BY SERVICES LIBRARY โ•žโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
The following PlatformException was thrown while activating platform stream on channel
FileRecognitionStream:
PlatformException(error, Failed to load audio file: , null, null)

When the exception was thrown, this was the stack:
#0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:653:7)
#1 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:315:18)

#2 EventChannel.receiveBroadcastStream.<anonymous closure> (package:flutter/src/services/platform_channel.dart:662:9)

════════════════════════════════════════════════════

Real-time audio recognition from raw audio values

Hi, great work here.
Is there a way to do real-time audio recognition using raw values from an analog microphone? The signal has gone through a little bit of processing (it works if loaded into Audacity). The values come from an analog microphone connected to a Raspberry Pi Pico and are sent in real time to my Flutter app. Is there a way to do this with your package, or do you have any ideas on how to do it in general?

Thanks

Implement a function for extracting MFCCs in Dart

Hello, I was able to extract MFCCs with very good performance on Android smartphones.

I advise you to look at an implementation I made in this repository. I've used this same implementation to classify bee audio and have achieved 90% accuracy so far.

This implementation follows another implementation, in Python, that I found on Kaggle at this link.

What makes this implementation really efficient is the FFT used; it is not a naive FFT. Look at the repository of that implementation here.
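To illustrate the point about the FFT: a naive DFT costs O(n²) while a radix-2 Cooley-Tukey FFT costs O(n log n), which is what makes real-time MFCC extraction feasible on a phone. A compact iterative version (power-of-two sizes only, shown in Java rather than Dart for illustration):

```java
// Iterative radix-2 Cooley-Tukey FFT, in place, for power-of-two n.
// re/im hold the real and imaginary parts of the signal.
// O(n log n) vs O(n^2) for a naive DFT: the difference being pointed at above.
public class Fft {
    public static void transform(double[] re, double[] im) {
        int n = re.length;
        // Bit-reversal permutation.
        for (int i = 1, j = 0; i < n; i++) {
            int bit = n >> 1;
            for (; (j & bit) != 0; bit >>= 1) j ^= bit;
            j ^= bit;
            if (i < j) {
                double t = re[i]; re[i] = re[j]; re[j] = t;
                t = im[i]; im[i] = im[j]; im[j] = t;
            }
        }
        // Butterfly stages.
        for (int len = 2; len <= n; len <<= 1) {
            double ang = -2 * Math.PI / len;
            for (int i = 0; i < n; i += len) {
                for (int k = 0; k < len / 2; k++) {
                    double wr = Math.cos(ang * k), wi = Math.sin(ang * k);
                    int a = i + k, b = i + k + len / 2;
                    double ur = re[a], ui = im[a];
                    double vr = re[b] * wr - im[b] * wi;
                    double vi = re[b] * wi + im[b] * wr;
                    re[a] = ur + vr; im[a] = ui + vi;
                    re[b] = ur - vr; im[b] = ui - vi;
                }
            }
        }
    }
}
```

The magnitude spectrum of this transform is the first stage of an MFCC pipeline, before the mel filterbank and DCT.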

Making predictions with MFCC/stored audio file

Hi,

I'm very new to Flutter and TensorFlow, so some of the things I ask may not make sense :).

I'm trying to build an app that allows me to record some audio samples. Then I would like to do some classification with the recorded files.

My questions are:

  • Is it possible to make a prediction with a recorded file instead of using the audio stream (à la model.predict(data) in Python/TensorFlow)?
  • I'm using MFCCs in my trained model. I expect I would need to apply some transformation to the recorded audio files before feeding them to the model (as I do in Python). To what degree is that possible with this plugin?

I hope you understand my problem.

Thanks in advance!

Problem loading Google's Teachable Machine models

I would like to know if these known issues will be resolved in the meantime:

  • App crashes when running a GTM model on both Android and iOS. To reduce your app's footprint, this package has the GTM feature disabled by default.
  • App crashes when running a GTM model on the iOS emulator. Please run your simulation on an actual iOS device; TFLite has limited support for x86_64 architectures.

Hope I can access the raw score...

I changed

final double detectionThreshold = 0.7;

to 0.7, but I think it sometimes triggers even when the raw score is only about 0.5.
Screenshot 2023-05-11 at 12:47:55 PM

With

result
        ?.listen((event) => {
              print("Recognition Result: " +
                  event["recognitionResult"].toString()),

I can't get the rawScore value; if I could get it, I could filter it myself.

Thank you for the great plugin.

Loading the model

I tried your code for Teachable Machine in Flutter, but when I run the app it keeps loading the model and I can't use the functionality.

Crash on Android emulator for Google's Teachable Machine

Let me explain what's happening: when running the app, the debug session starts and everything is okay. Then, at the moment the app loads and opens on the Android emulator, it freezes and closes directly.
Then you get:
'Lost connection to device'

Crash Report:

/Tflite_audio( 5403): loadModel
D/Tflite_audio( 5403): model name is: assets/google_teach_machine_model.tflite
I/tflite ( 5403): Initialized TensorFlow Lite runtime.
W/native ( 5403): cpu_feature_guard.cc:36 The TensorFlow library was compiled to use SSE instructions, but these aren't available on your machine.
F/libc ( 5403): Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xfffffff4 in tid 5403 (e_audio_example), pid 5403 (e_audio_example)


Build fingerprint: 'google/sdk_gphone_x86_arm/generic_x86_arm:11/RSR1.201013.001/6903271:userdebug/dev-keys'
Revision: '0'
ABI: 'x86'
Timestamp: 2021-02-18 08:51:25+0100
pid: 5403, tid: 5403, name: e_audio_example >>> tfliteaudio.tflite_audio_example <<<
uid: 10153
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xfffffff4
eax 00000000 ebx abdaeff4 ecx 000000a0 edx 0000000d
edi e85140e8 esi abdb85f8
ebp ff9f37b8 esp ff9f3770 eip a6f9bd29
backtrace:
#00 pc 0039ad29 /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#1 pc 046f57e7 /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#2 pc 046f5462 /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#3 pc 046f5a6b /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#4 pc 04521527 /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#5 pc 00397b75 /data/app/~~1HiIG67KtPY_A-PHL0219w==/tfliteaudio.tflite_audio_example-zsfQjns-rATmoaLaZf6gUw==/lib/x86/libtensorflowlite_flex_jni.so
#6 pc 0005f918 /apex/com.android.runtime/bin/linker (_dl__ZL10call_arrayIPFviPPcS1_EEvPKcPT_jbS5+312) (BuildId: c17fda87f98636d6da3d69604bb1486c)
#7 pc 0005fbbc /apex/com.android.runtime/bin/linker (__dl__ZN6soinfo17call_constructorsEv+588) (BuildId: c17fda87f98636d6da3d69604bb1486c)
#8 pc 00043be2 /apex/com.android.runtime/bin/linker (__dl__Z9do_dlopenPKciPK17android_dlextinfoPKv+2674) (BuildId: c17fda87f98636d6da3d69604bb1486c)
#9 pc 0003e4b2 /apex/com.android.runtime/bin/linker (__dl__ZL10dlopen_extPKciPK17android_dl

#145 pc 0036fb02 /apex/com.android.art/lib/libart.so (art::interpreter::Execute(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame&, art::JValue, bool, bool) (.llvm.16375758241455872412)+370) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#146 pc 00379b00 /apex/com.android.art/lib/libart.so (art::interpreter::EnterInterpreterFromEntryPoint(art::Thread*, art::CodeItemDataAccessor const&, art::ShadowFrame*)+176) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#147 pc 0078b325 /apex/com.android.art/lib/libart.so (artQuickToInterpreterBridge+1061) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#148 pc 0014220d /apex/com.android.art/lib/libart.so (art_quick_to_interpreter_bridge+77) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#149 pc 00893656 /system/framework/x86/boot-framework.oat (com.android.internal.os.ZygoteInit.main+2102) (BuildId: 9a9778e61b43d349325d0bb85244bd9bc95ff387)
#150 pc 0013baf2 /apex/com.android.art/lib/libart.so (art_quick_invoke_static_stub+418) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#151 pc 001d0392 /apex/com.android.art/lib/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+258) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#152 pc 0062e653 /apex/com.android.art/lib/libart.so (art::JValue art::InvokeWithVarArgsart::ArtMethod*(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, art::ArtMethod*, char*)+579) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#153 pc 0062eb25 /apex/com.android.art/lib/libart.so (art::JValue art::InvokeWithVarArgs<_jmethodID*>(art::ScopedObjectAccessAlreadyRunnable const&, _jobject*, _jmethodID*, char*)+85) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#154 pc 004ce64f /apex/com.android.art/lib/libart.so (art::JNI::CallStaticVoidMethodV(_JNIEnv*, _jclass*, _jmethodID*, char*)+735) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#155 pc 003f8aae /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::CheckJNI::CallMethodV(char const*, _JNIEnv*, _jobject*, _jclass*, _jmethodID*, char*, art::Primitive::Type, art::InvokeType)+2846) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#156 pc 003e60d9 /apex/com.android.art/lib/libart.so (art::(anonymous namespace)::CheckJNI::CallStaticVoidMethodV(_JNIEnv*, _jclass*, _jmethodID*, char*)+73) (BuildId: 8191579dfafff37a5cbca70f9a73020f)
#157 pc 0008f90e /system/lib/libandroid_runtime.so (_JNIEnv::CallStaticVoidMethod(_jclass*, _jmethodID*, ...)+62) (BuildId: 588f2cd5873ff4273bb25b25edb82606)
#158 pc 00098c8e /system/lib/libandroid_runtime.so (android::AndroidRuntime::start(char const*, android::Vectorandroid::String8 const&, bool)+910) (BuildId: 588f2cd5873ff4273bb25b25edb82606)
#159 pc 00003804 /system/bin/app_process32 (main+1588) (BuildId: c5eedbfb6130af84c3db8e121fb1202e)
#160 pc 000522e3 /apex/com.android.runtime/lib/bionic/libc.so (__libc_init+115) (BuildId: 6e3a0180fa6637b68c0d181c343e6806)
Lost connection to device.

Originally posted by @Tanelo in #4 (comment)

Reducing false positives/ non divisible bufferRate outputs NaN

Hi @Caldarie. I'm facing an issue regarding detection. I created my model with a lot of samples to recognize a certain noise, and it works pretty well, but tflite_audio also recognizes other noises as the one I want to detect. How can I adjust the precision? Should I tune these parameters: detectionThreshold, averageWindowDuration, minimumTimeBetweenSamples, suppressionTime?

Thank you
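A minimal sketch of how those parameters could be tightened to cut down false positives. The parameter names come from TfliteAudio.startAudioRecognition as used elsewhere in these issues; the values below are illustrative assumptions and need tuning against your own model, not recommended defaults.

```dart
// Sketch only: stricter detection settings (values are guesses to tune).
final result = TfliteAudio.startAudioRecognition(
  inputType: 'rawAudio',
  sampleRate: 44100,
  bufferSize: 22016,
  // Require a higher score before a label is reported.
  detectionThreshold: 0.8,
  // Average scores over a longer window so one noisy frame
  // does not trigger a detection on its own.
  averageWindowDuration: 1000,
  // Ignore repeat detections fired shortly after a previous one.
  suppressionTime: 1500,
  minimumTimeBetweenSamples: 200,
);
result.listen((event) => print(event['recognitionResult']));
```

Raising detectionThreshold is usually the first knob to try; the window/suppression values trade responsiveness for stability.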

How do I keep the stream listening

I need to make an app that continuously listens to ambient sound, but the example only works once and requires restarting the audio recognition. Is there a workaround to achieve continuous listening?
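A sketch of two approaches grounded in this plugin's parameters: pass a large numOfInferences so a single call keeps inferring, and resubscribe in onDone when the plugin eventually closes the stream. Both the parameter and the onDone pattern appear in other issues on this page; treat this as an assumption-laden outline, not the plugin's documented behaviour.

```dart
import 'dart:async';

StreamSubscription? sub;

void startListening() {
  // A large numOfInferences keeps one recognition call running
  // for many consecutive windows.
  final stream = TfliteAudio.startAudioRecognition(
    inputType: 'rawAudio',
    sampleRate: 44100,
    bufferSize: 22016,
    numOfInferences: 1000,
  );
  sub = stream.listen(
    (event) => print(event['recognitionResult']),
    // Restart when the plugin closes the stream, so listening
    // continues indefinitely.
    onDone: startListening,
  );
}
```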

Please read before opening a new issue.

When opening a new issue, please provide the following information:

  1. Describe the problem
  2. Are you using a device or emulator?
  3. Did you run the example model provided in this repository? Did you get the same error?
  4. Provide the full error logs

How to handle models generating multiple outputs

Hi,
Is there a way to handle models with multiple outputs?
I am trying to implement this model: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/audio_classification.ipynb
It is based on the YAMNet model and generates two outputs: the first from the YAMNet model, the second from the trained model.
flutter_tflite_audio only gives me access to the first output (and by default asks for the labels of the first model/output only).
Thank you
Fabrice

App size Increased.

My TensorFlow model, which I trained using Teachable Machine, is 5 MB, but after adding the model and the flutter_tflite_audio package my app size increased from 20 MB to 54 MB.

Is there any solution?

Tensorflow Lite errors when running in iOS devices

Running my application on an iOS device (iPhone 7 with iOS 14.4), it crashes while the model is processing the data.
I believe this happens due to TensorFlow Lite errors (see the output below), but I have no idea how to fix it:

carolinaalbuquerque ~/Documents/audio_recognition_app (*main) > flutter run
Launching lib/main.dart on iPhone de Carolina in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project: 5SSNTW7HP4
Running Xcode build...                                                  
 โ””โ”€Compiling, linking and signing...                        19,5s
Xcode build done.                                           28,9s
(lldb) 2021-02-20 18:18:18.623713+0000 Runner[414:13363] Warning: Unable to create restoration in progress marker file
fopen failed for data file: errno = 2 (No such file or directory)       
Errors found! Invalidating cache...                                     
fopen failed for data file: errno = 2 (No such file or directory)       
Errors found! Invalidating cache...                                     
Installing and launching...                                        36,1s
Initialized TensorFlow Lite runtime.
TensorFlow Lite Error: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
TensorFlow Lite Error: Node number 2 (FlexSize) failed to prepare.


Failed to create the interpreter with error: Failed to allocate memory for input tensors.
["0 Background Noise", "1 Bell", "2 Whistle", "3 Xylophone"]
Activating Dart DevTools...                                         5,9s
Syncing files to device iPhone de Carolina...                       176ms

Flutter run key commands.
r Hot reload. ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ
R Hot restart.
h Repeat this help message.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
An Observatory debugger and profiler on iPhone de Carolina is available at: http://127.0.0.1:53066/z1fkaZhV7VE=/

Flutter DevTools, a Flutter debugger and profiler, on iPhone de Carolina is available at:
http://127.0.0.1:9101?uri=http%3A%2F%2F127.0.0.1%3A53066%2Fz1fkaZhV7VE%3D%2F

Running with unsound null safety
For more information see https://dart.dev/null-safety/unsound-null-safety
requesting permission
start microphone
recordingBuffer length: 11008
recordingBuffer length: 22016
recordingBuffer length: 33024
recordingBuffer length: 44032
reached threshold
Running model
* thread #21, queue = 'conversionQueue', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
    frame #0: 0x00000001d6d0a128 libsystem_platform.dylib`_platform_memmove + 72
libsystem_platform.dylib`_platform_memmove:
->  0x1d6d0a128 <+72>: stnp   x12, x13, [x0]
    0x1d6d0a12c <+76>: stnp   x14, x15, [x0, #0x10]
    0x1d6d0a130 <+80>: subs   x2, x2, #0x40             ; =0x40 
    0x1d6d0a134 <+84>: b.ls   0x1d6d0a158               ; <+120>
Target 0: (Runner) stopped.
Lost connection to device.

I am using a Google Teachable Machine model and I followed these steps for iOS configuration.

I have already tested on an Android device and it works perfectly, but I need to guarantee iOS support!

Change recording length using GTM models to allow audio inputs greater than 1 second

I am using a GTM model and am trying to increase the recording length passed to the model. To analyse 1 second of audio, I am using the following configuration:

      numOfInferences: 1,
      inputType: 'rawAudio',
      sampleRate: 44100,
      recordingLength: 44032,
      bufferSize: 22016,

Instead of analysing just 1 second, I want to increase the audio input to 3-5 seconds. Changing recordingLength to 132096 (3 × 44032), sampleRate to 132300 (3 × 44100) and bufferSize to half of recordingLength makes the inference crash.

Is there any way to record and send the model an audio clip of more seconds, given that the GTM model requires an input tensor of size 44032?
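The model's input tensor is fixed at 44032 samples (about 1 s at 44.1 kHz), so the tensor itself cannot be enlarged from the Flutter side. One workaround grounded in this plugin's own parameters is to run several back-to-back 1-second inferences via numOfInferences and aggregate the results in Dart. This is a sketch under that assumption, not a way to feed the model a true 3-second tensor.

```dart
// Sketch only: cover ~3 s with three consecutive 1-second windows.
final stream = TfliteAudio.startAudioRecognition(
  inputType: 'rawAudio',
  sampleRate: 44100,
  recordingLength: 44032, // must match the model's fixed input size
  bufferSize: 22016,
  numOfInferences: 3,     // three consecutive inferences ≈ 3 seconds
);

final results = <String>[];
stream.listen((event) {
  results.add(event['recognitionResult'].toString());
}).onDone(() {
  // Aggregate however suits your use case, e.g. a majority vote
  // over the three windows.
  print(results);
});
```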

When it is opened and closed continuously, an exception occurs.

[โœ“] Flutter (Channel stable, 2.5.0, on macOS 11.6 20G165 darwin-x64, locale zh-Hans-CN)
[โœ“] Android toolchain - develop for Android devices (Android SDK version 31.0.0)
[โœ“] Xcode - develop for iOS and macOS
[โœ“] Chrome - develop for the web
[โœ“] Android Studio (version 4.1)
[โœ“] IntelliJ IDEA Ultimate Edition (version 2021.2.1)
[โœ“] VS Code (version 1.61.0)
[โœ“] Connected device (2 available)

_initialize() async {
  advancedPlayer.stop();
  isRecording = true;
  result = TfliteAudio.startAudioRecognition(
    numOfInferences: 1000,
    inputType: this.inputType,
    sampleRate: this.sampleRate,
    recordingLength: this.recordingLength,
    bufferSize: this.bufferSize,
    detectionThreshold: 0.6,
    averageWindowDuration: 2000,
    minimumTimeBetweenSamples: 100,
    suppressionTime: 2000,
  );
  setState(() {});

  /// Logs the results and assigns false when the stream is finished.
  result.listen((event) {
    checkData(event["recognitionResult"].toString());
  }).onDone(() {
    setState(() {
      isRecording = false;
    });
  });
}

V/Tflite_audio( 8040): recordingOffset: 110250/44032000
V/Tflite_audio( 8040): recordingOffset: 121275/44032000
V/Tflite_audio( 8040): recordingOffset: 132300/44032000
V/Tflite_audio( 8040): Recording reached threshold
V/Tflite_audio( 8040): recordingOffset: 139776/44032000
V/Tflite_audio( 8040): Creating new threshold
V/Tflite_audio( 8040): Recognition started.
V/Tflite_audio( 8040): Input shape: [1, 44032]
E/AndroidRuntime( 8040): FATAL EXCEPTION: Thread-73
E/AndroidRuntime( 8040): Process: com.sankoumu.piano, PID: 8040
E/AndroidRuntime( 8040): java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.Object java.util.HashMap.get(java.lang.Object)' on a null object reference
E/AndroidRuntime( 8040): at flutter.tflite_audio.TfliteAudioPlugin.recognize(TfliteAudioPlugin.java:514)
E/AndroidRuntime( 8040): at flutter.tflite_audio.TfliteAudioPlugin.access$100(TfliteAudioPlugin.java:59)
E/AndroidRuntime( 8040): at flutter.tflite_audio.TfliteAudioPlugin$4.run(TfliteAudioPlugin.java:494)
E/AndroidRuntime( 8040): at java.lang.Thread.run(Thread.java:923)
I/Process ( 8040): Sending signal. PID: 8040 SIG: 9


Google Teachable Machine raw output returns NaN

I have the same problem, but only on one device. I created a model with Google Teachable Machine and tested it on two devices under the same conditions:

Samsung Galaxy S9 Plus: the first label is always detected, and the logged raw scores are [NaN, NaN, NaN]. It always outputs NaN; I haven't yet been able to get a valid score on it.

Samsung Galaxy S20: detection works perfectly; it outputs NaN only in one of hundreds of cases.

Any suggestions?

Originally posted by @fabian-rump in #10 (comment)

Help: error using my own model

D/Tflite_audio( 2629): Check for permissions
D/Tflite_audio( 2629): Permission already granted. start recording
V/Tflite_audio( 2629): Recording started
V/Tflite_audio( 2629): recordingOffset: 1000/16000
V/Tflite_audio( 2629): recordingOffset: 2000/16000
V/Tflite_audio( 2629): recordingOffset: 3000/16000
V/Tflite_audio( 2629): recordingOffset: 4000/16000
V/Tflite_audio( 2629): recordingOffset: 5000/16000
V/Tflite_audio( 2629): recordingOffset: 6000/16000
V/Tflite_audio( 2629): recordingOffset: 7000/16000
V/Tflite_audio( 2629): recordingOffset: 8000/16000
V/Tflite_audio( 2629): recordingOffset: 9000/16000
V/Tflite_audio( 2629): recordingOffset: 10000/16000
V/Tflite_audio( 2629): recordingOffset: 11000/16000
V/Tflite_audio( 2629): recordingOffset: 12000/16000
V/Tflite_audio( 2629): recordingOffset: 13000/16000
V/Tflite_audio( 2629): recordingOffset: 14000/16000
V/Tflite_audio( 2629): recordingOffset: 15000/16000
V/Tflite_audio( 2629): recordingOffset: 16000/16000
V/Tflite_audio( 2629): inputType: decodedWav
V/Tflite_audio( 2629): Recognition started.
D/Tflite_audio( 2629): Recording stopped.
E/AndroidRuntime( 2629): FATAL EXCEPTION: Thread-3
E/AndroidRuntime( 2629): Process: tfliteaudio.tflite_audio_example, PID: 2629
E/AndroidRuntime( 2629): java.lang.IllegalArgumentException: Invalid input Tensor index: 1
E/AndroidRuntime( 2629): at org.tensorflow.lite.NativeInterpreterWrapper.getInputTensor(NativeInterpreterWrapper.java:358)
E/AndroidRuntime( 2629): at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:186)
E/AndroidRuntime( 2629): at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:374)
E/AndroidRuntime( 2629): at flutter.tflite_audio.TfliteAudioPlugin.decodedWaveRecognize(TfliteAudioPlugin.java:592)
E/AndroidRuntime( 2629): at flutter.tflite_audio.TfliteAudioPlugin.access$200(TfliteAudioPlugin.java:54)
E/AndroidRuntime( 2629): at flutter.tflite_audio.TfliteAudioPlugin$4.run(TfliteAudioPlugin.java:449)
E/AndroidRuntime( 2629): at java.lang.Thread.run(Thread.java:923)
D/ViewRootImplMainActivity: windowFocusChanged hasFocus=false inTouchMode=true
I/Process ( 2629): Sending signal. PID: 2629 SIG: 9
Lost connection to device.

this is my model https://github.com/jijkbird/filetest/releases/download/1/model.zip

startRecording() called on an uninitialized AudioRecord.

Future<Timer?> startListningClap(BuildContext context) async {
    // if service already running
    if (await FlutterForegroundTask.isRunningService) {
      setForceStopFlashlight(false);
      return Timer.periodic(const Duration(milliseconds: 500), (Timer ct) {
        try {
          clapAudioSubscriber.cancel();
        } catch (_) {}

        try {
          recognitionStream = TfliteAudio.startAudioRecognition(
            sampleRate: 44100,
            bufferSize: /*22016*/ 11016,
            detectionThreshold: 0.3,
          );
        } catch (_) {}

        // start listening for clap/whistle
        clapAudioSubscriber = recognitionStream.listen(
            (event) async {
              try {
                if (clapServiceStatus == true &&
                    event['recognitionResult'] == 'clap') {
                  // stop listening when clap detected

                  ct.cancel();
                  UtilityFunctions.showPhoneFoundAlertDialog(
                      context, () => stopStartClapListning(context));
                  // if vibration is set to on then vibrate phone
                  bool clapVib = prefs.getBool('clapVibration') ?? false;
                  if (await (Vibration.hasVibrator()) == true && clapVib) {
                    Vibration.vibrate(duration: 1000, amplitude: 255);
                  }

                  // if flashlight is set to on then turn flashlight
                  bool clapFlash = prefs.getBool('clapFlashLight') ?? false;
                  if (clapFlash) {
                    turnOnFlashLight();
                  }

                  // play melody if enabled by user
                  if (clapMelody == true) playMelody(volume);
                }
              } catch (_) {}
            },
            cancelOnError: true,
            onError: (_) {
              clapAudioSubscriber.cancel();
            },
            onDone: () {
              clapAudioSubscriber.cancel();
            });
      });
    }
    return null;
  }

E/AndroidRuntime(10013): Process: com.example.flutter_application_test, PID: 10013
E/AndroidRuntime(10013): java.lang.IllegalStateException: startRecording() called on an uninitialized AudioRecord.
E/AndroidRuntime(10013): at android.media.AudioRecord.startRecording(AudioRecord.java:1147)
E/AndroidRuntime(10013): at flutter.tflite_audio.Recording.start(Recording.java:91)
E/AndroidRuntime(10013): at flutter.tflite_audio.TfliteAudioPlugin.record(TfliteAudioPlugin.java:592)
E/AndroidRuntime(10013): at flutter.tflite_audio.TfliteAudioPlugin.lambda$GvBCQqT11rP0XXTQzopagqcPxcA(Unknown Source:0)
E/AndroidRuntime(10013): at flutter.tflite_audio.-$$Lambda$TfliteAudioPlugin$GvBCQqT11rP0XXTQzopagqcPxcA.run(Unknown Source:2)
E/AndroidRuntime(10013): at java.lang.Thread.run(Thread.java:923)
I/ExceptionHandle(10013): at android.media.AudioRecord.startRecording(AudioRecord.java:1147)
I/ExceptionHandle(10013): at flutter.tflite_audio.Recording.start(Recording.java:91)
I/ExceptionHandle(10013): at flutter.tflite_audio.TfliteAudioPlugin.record(TfliteAudioPlugin.java:592)
I/ExceptionHandle(10013): at flutter.tflite_audio.TfliteAudioPlugin.lambda$GvBCQqT11rP0XXTQzopagqcPxcA(Unknown Source:0)
I/ExceptionHandle(10013): at flutter.tflite_audio.-$$Lambda$TfliteAudioPlugin$GvBCQqT11rP0XXTQzopagqcPxcA.run(Unknown Source:2)
I/ExceptionHandle(10013): at java.lang.Thread.run(Thread.java:923)
D/TfliteAudio(10013): Parameters: {detectionThreshold=0.3, minimumTimeBetweenSamples=0, method=setAudioRecognitionStream, numOfInferences=1, averageWindowDuration=0, audioLength=0, sampleRate=44100, suppressionTime=0, bufferSize=11016}
D/TfliteAudio(10013): AudioLength has been readjusted. Length: 44032
D/TfliteAudio(10013): Transpose Audio: false
D/TfliteAudio(10013): Check for permission. Request code: 13
D/TfliteAudio(10013): Permission already granted.
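The crash above ("startRecording() called on an uninitialized AudioRecord") is consistent with the Timer.periodic loop starting a new recognition every 500 ms, before the previous native AudioRecord has been released. A sketch of an alternative that starts recognition once and restarts only after the previous stream completes; the restart-in-onDone pattern mirrors the example code elsewhere on this page, and the exact shutdown behaviour of the native recorder is an assumption.

```dart
import 'dart:async';

StreamSubscription? clapSub;

Future<void> listenForClap() async {
  // Make sure any previous subscription is gone before starting again.
  await clapSub?.cancel();

  final stream = TfliteAudio.startAudioRecognition(
    sampleRate: 44100,
    bufferSize: 11016,
    detectionThreshold: 0.3,
  );

  clapSub = stream.listen((event) {
    if (event['recognitionResult'] == 'clap') {
      // handle the detection (vibrate, flashlight, melody, ...)
    }
  },
  // Restart only after the plugin has shut the recorder down,
  // rather than on a fixed 500 ms timer.
  onDone: listenForClap);
}
```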

Not able to use MFCC model

Hi,

I have made an MFCC model using this tutorial - Link

I am not able to use this model and get this error: Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:0) with shape [3, 2] to a Java object with shape [1, 2].

Can you help?

I am using a real device (Android).
