Technical Queries: please help

Hi Team,

I have gone through the topics and have come across some technical questions regarding CNN and RNN networks. Can anyone help me, please?
Also, I have some questions regarding project submission, etc.

Regards,
Aman

Hi Aman_Deep,

Please post your questions and queries.

Regards.

Hi Ankur,
Thanks for the reply, but can we talk over the phone? It would be easier for me to understand, and I could ask you about the details.

Hi Aman_Deep,

Posting your problems here would be much better, as they can be solved and explained with links, references, and code blocks.
Feel free to post any difficulties you face here on the forum.

Regards

Hi Ankur,

Sure, I will post them here.

  1. What is the exact difference between SGD and mini-batch gradient descent? When we say we are using SGD, how does it compute the loss, i.e. after how many iterations?
    I know that in mini-batch we can specify the batch size; does SGD create any batches too, or not?
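To make this question concrete, here is a tiny sketch of what I mean, in plain NumPy (the toy data and function names are just my own illustration): the same update loop, where `batch_size=1` would be "pure" SGD and a larger value would be mini-batch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)  # true slope is 3.0

def fit(batch_size, epochs=50, lr=0.1):
    """Gradient descent on a single weight w for y ~ w*x.

    The loss/gradient is computed on each batch, so the weight is
    updated once per batch, not once per epoch.
    """
    w = 0.0
    for _ in range(epochs):
        idx = rng.permutation(len(X))              # reshuffle every epoch
        for start in range(0, len(X), batch_size):
            b = idx[start:start + batch_size]
            pred = w * X[b, 0]
            grad = 2 * np.mean((pred - y[b]) * X[b, 0])  # d(MSE)/dw on this batch
            w -= lr * grad
    return w

w_sgd = fit(batch_size=1)   # loss/gradient from one sample at a time
w_mb = fit(batch_size=32)   # loss/gradient averaged over 32 samples
```

Both runs should land near the true slope; the single-sample version just takes noisier steps.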

  2. When it computes the weights, does it flush out each weight after every epoch or iteration?
    And if it doesn't flush the weights, then how does it store them, since some weights can repeat again and again?

  3. How are the weights updated: after every epoch or after every iteration?

  4. Why can't we use a Gaussian function as an activation function?

  5. How do we find a bug in a deep neural network, i.e. pinpoint the reason why it is giving such low performance?

  6. What is the difference between the softplus and softmax functions?
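For context, this is my current understanding in a few lines of NumPy (my own toy example): softplus acts on each element independently, while softmax normalises the whole vector into probabilities.

```python
import numpy as np

z = np.array([1.0, 2.0, 3.0])

softplus = np.log1p(np.exp(z))           # element-wise smooth approximation of ReLU
softmax = np.exp(z) / np.exp(z).sum()    # whole-vector normalisation: entries sum to 1
```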

  7. Can we use ReLU or tanh as our output activation function?

  8. Does the sigmoid function also cause exploding gradients? If yes, then how, given that its partial derivative only lies between 0 and 0.25?
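Just to show where I got the 0.25 bound from, here is the small NumPy check I had in mind (my own toy code):

```python
import numpy as np

x = np.linspace(-10, 10, 10001)
sig = 1 / (1 + np.exp(-x))
dsig = sig * (1 - sig)   # derivative of the sigmoid: sigma'(x) = sigma(x) * (1 - sigma(x))

max_slope = dsig.max()   # peaks at x = 0 with value 0.25
```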

  9. Does dropout happen at every iteration or at every epoch?

  10. How is dropout different from the L1 regulariser?
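To show what I understand dropout to be mechanically, here is a minimal inverted-dropout sketch in plain NumPy (my own illustration, not any library's implementation): a fresh random mask on each forward pass, with the surviving units rescaled so the expected activation is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
activations = np.ones((4, 5))   # a toy batch of layer outputs
keep_prob = 0.8

# A new random mask is drawn for every forward pass; dividing by keep_prob
# ("inverted dropout") keeps the expected value of each unit the same.
mask = rng.random(activations.shape) < keep_prob
dropped = activations * mask / keep_prob
```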

  11. When we initialise weights, are they initialised for each layer separately? I mean, if we have a 4-layered network, are the weights and biases initialised separately for all 4 layers?
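Here is what I picture when I ask this, sketched in plain NumPy (my own toy sizes): four layers of units have three weight matrices connecting them, each drawn separately with a Glorot-uniform-style limit, and biases commonly started at zero.

```python
import numpy as np

rng = np.random.default_rng(2)
layer_sizes = [8, 16, 16, 4]   # four layers of units -> three connections

weights, biases = [], []
for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    limit = np.sqrt(6.0 / (fan_in + fan_out))         # Glorot-uniform limit for this layer
    weights.append(rng.uniform(-limit, limit, size=(fan_in, fan_out)))
    biases.append(np.zeros(fan_out))                  # biases are commonly initialised to zero
```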

  12. What is the ideal situation in which to apply batch normalization?

  13. Are all the loss functions in classical machine learning convex, and are all of them in deep learning non-convex? And are these non-convex functions the reason why we face situations of being stuck at a local minimum or a saddle point?

  14. How are the biases initialised? For weights there are techniques like He and Glorot uniform; is there something similar for biases?

  15. In the case of LSTM, suppose we have the text "Hello, how are you" and, when designing our network, we specify 9 LSTM units in the first layer. Does that mean 'Hello' goes into all 9 units at the same time, or do 'Hello', 'how', 'are', 'you' go into these 9 units one after another? Is it a parallel or a sequential process? Also, there are only 4 words but 9 units; how does that work?
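To pin down the shapes I am confused about, here is a minimal plain-RNN sketch in NumPy (my own simplification of an LSTM, with made-up sizes): the 4 words are fed in one after another, and at each step all 9 units update together, so "units" is the hidden-state size, not the number of time steps.

```python
import numpy as np

rng = np.random.default_rng(3)
embed_dim, hidden_units, timesteps = 6, 9, 4      # 4 words, 9 recurrent units

words = rng.normal(size=(timesteps, embed_dim))   # "Hello", "how", "are", "you" as vectors
Wx = rng.normal(size=(embed_dim, hidden_units)) * 0.1
Wh = rng.normal(size=(hidden_units, hidden_units)) * 0.1
h = np.zeros(hidden_units)

states = []
for x_t in words:                     # sequential: one word per time step
    h = np.tanh(x_t @ Wx + h @ Wh)    # all 9 units update together at each step
    states.append(h)
states = np.array(states)             # shape (4, 9): one 9-dim hidden state per word
```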

  16. Why is padding important? Why can't we send sequences of unequal length into the network, and what would happen if we did?
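For reference, this is the kind of padding I mean, as a tiny self-written helper (`pad_sequences` here is my own hypothetical function, not a library call): ragged rows are extended with a pad value so the batch can be stacked into one rectangular tensor.

```python
def pad_sequences(seqs, pad_value=0):
    """Pad variable-length token-id lists to the length of the longest one."""
    max_len = max(len(s) for s in seqs)
    return [s + [pad_value] * (max_len - len(s)) for s in seqs]

batch = [[4, 7], [1, 2, 3, 9], [5]]
padded = pad_sequences(batch)   # every row now has length 4, so it stacks into one tensor
```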

  17. In the second phase of a CNN, when it applies element-wise ReLU, does it always have to be ReLU?

  18. If it is ReLU, is it always a good approach to initialise the weights with the He initialiser?

  19. Why does a CNN always end with a 1D flattened vector? What would happen if we skipped flattening and passed the feature maps directly into the output layer?
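Here is the shape issue I am picturing, in plain NumPy (toy sizes of my own): a dense output layer is just a matrix multiply, which expects one 1-D vector per example, so the 2-D feature maps get reshaped first.

```python
import numpy as np

feature_maps = np.arange(2 * 3 * 3).reshape(2, 3, 3)  # 2 channels of 3x3 maps
flat = feature_maps.reshape(-1)                       # 1-D vector of length 18

# The dense output layer is a matrix multiply over that flat vector:
W = np.ones((flat.size, 5)) * 0.01                    # 5 output classes, toy weights
logits = flat @ W                                     # shape (5,)
```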

  20. What exactly is the Bayesian approach to finding hyperparameters in a deep neural network?

  21. In the last project, Classify Large Images using Inception v3: what is the minimum number of classes I should take? How many pictures of each class should I take? Are we supposed to do transfer learning here? And what does "argument scope created by the inception_v3_arg_scope() function" mean?

I know these are a lot of doubts, but I made a list of all these questions while I was going through the course.

Regards,
Aman


Hi Aman_Deep,

Great set of questions. I am sure other people on the forum, including myself, would find them really helpful for our own learning too.

I will try to compose a Jupyter notebook with answers to all of them and provide a link as soon as possible.

Regards.

Hi Ankur,

Thank you, and I would be honoured if this set of questions could help anybody else too.

Regards,
Aman