Voice Recognition Using Deep Learning

Can we apply the same training approach to classify voice (speech) data as we did with the MNIST data set?

Please reply.

Hi, Anubhav.

Yes, you can use a CNN for speech recognition, but you should also use an LSTM layer, since the model needs to remember temporal context across the audio.
You have to store the data as .wav files (audio files) and then convert each file into an array of numbers using MFCC (Mel-frequency cepstral coefficients), which converts the analog signal into a discrete space of numbers.
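
A minimal sketch of the MFCC step, assuming the librosa library and a hypothetical file name recording.wav (the sampling rate and number of coefficients are illustrative choices):

```python
import librosa

# Load the .wav file; librosa returns the waveform and the sampling rate.
signal, sr = librosa.load("recording.wav", sr=16000)

# Convert the waveform into MFCC features: a 2-D array of shape (n_mfcc, frames).
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
print(mfcc.shape)
```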

Build the model:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# The LSTM expects input of shape (timesteps, features),
# e.g. (config.buckets, config.max_len) for the MFCC features.
model = Sequential()
model.add(LSTM(16, input_shape=(config.buckets, config.max_len)))
model.add(Dense(num_classes, activation="softmax"))
```
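
A minimal sketch of compiling and training it, assuming X_train has shape (num_samples, config.buckets, config.max_len) and y_train is one-hot encoded (these variable names and hyperparameters are assumptions):

```python
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
```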

This paper can be helpful: https://arxiv.org/pdf/1904.08779v2.pdf
For the concepts: https://medium.com/@jonathan_hui/speech-recognition-feature-extraction-mfcc-plp-5455f5a69dd9

All the best!

Also, please answer my queries; I am pasting the link below. I am really confused.
https://discuss.cloudxlab.com/t/please-answer-my-queries-and-help-me-out/5869?u=anubhav_gupta

In heterogeneous ensemble models, we can combine different algorithms, such as a decision tree, an SVM, and a logistic regression, train them on the same classification data, and then use averaging or voting to get the aggregate result.
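
A minimal sketch of a heterogeneous ensemble with scikit-learn, assuming a toy dataset from make_classification (the dataset and parameter choices are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Three different algorithms on the same data, combined by majority (hard) voting.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=42)),
        ("svm", SVC(random_state=42)),
        ("logreg", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("Voting ensemble accuracy:", ensemble.score(X_test, y_test))
```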

In homogeneous ensemble models, we apply the same algorithm to all the estimators; for example, a Random Forest is an ensemble of decision trees, so you can use a Random Forest instead of a single decision tree.
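
A minimal sketch of a homogeneous ensemble, reusing the same toy dataset as above; bagging many decision trees is essentially what a Random Forest does (the parameter values are illustrative assumptions):

```python
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Bagging: the same base algorithm (a decision tree) repeated over bootstrap samples.
bagging = BaggingClassifier(DecisionTreeClassifier(random_state=42),
                            n_estimators=100, random_state=42)
bagging.fit(X_train, y_train)
print("Bagging accuracy:", bagging.score(X_test, y_test))

# A Random Forest is a ready-made homogeneous ensemble of decision trees.
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
print("Random Forest accuracy:", forest.score(X_test, y_test))
```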

How can I demonstrate the work or the projects that I have done in Hadoop, Spark, and so on, on GitHub? I am unable to find a proper way to do it,
so that I can make others aware that I know these frameworks.

Please help.

Thank you.

You have to learn Git and push your work to GitHub.
You can push all your projects, Jupyter notebooks, text files, etc. to GitHub using git commands and show them to everyone, since GitHub is a professional, public repository hosting service.
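
A minimal sketch of the usual workflow, assuming a hypothetical project folder and a hypothetical empty repository already created on GitHub (replace the folder name, user name, and repository URL with your own):

```bash
cd my-spark-project            # hypothetical local project folder
git init                       # turn the folder into a git repository
git add .                      # stage all project files, notebooks, etc.
git commit -m "Add Spark project"
git remote add origin https://github.com/your-username/my-spark-project.git
git push -u origin master      # or "main", depending on your default branch
```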

You can learn Git and GitHub, as well as other concepts like containers and Docker, from this course.