After adding batch normalisation and all the regularisation I can,

if the training accuracy is 92% and the validation accuracy is 99.7%, is the model underfitting, or is it a good model?

Please reply to my doubt.

It seems to be an overfitted model.

You can try data augmentation, add more data, or create different data, and check whether the accuracy stays the same.

You can also change the batch size, e.g. from 32 to 64.
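As a toy illustration of the data-augmentation idea (the helper below is my own sketch, not from this thread): mirroring each image horizontally doubles the dataset without collecting any new data.

```python
def augment_with_flips(images):
    """Toy data augmentation: add a horizontally flipped copy of each image.

    `images` is a list of images, each a list of pixel rows (nested lists).
    Real pipelines would also use rotations, crops, brightness shifts, etc.
    """
    flipped = [[row[::-1] for row in img] for img in images]
    return images + flipped

# A single 2x2 "image"; flipping reverses each row
dataset = [[[1, 2],
            [3, 4]]]
augmented = augment_with_flips(dataset)
# augmented now holds the original image and its mirror image
```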

This discussion may help:

CNN Encoder-decoder

But if training accuracy is higher than test accuracy, then there is overfitting, right?

But here it is the opposite.

Yes, this is very strange. To me it looks like a case of label imbalance, or it could be because of a very small validation set. Can you share the learning curves, with epochs on the x-axis, as separate charts for validation loss, training loss, validation accuracy, and training accuracy?

Or maybe it's just luck!
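A rough sketch of how one might read the final accuracies before even plotting the curves (the function name and the 2% tolerance are my own assumptions, not a standard rule):

```python
def diagnose(train_acc, val_acc, tol=0.02):
    """Rough diagnosis from final training/validation accuracy.

    `tol` is an arbitrary tolerance; tune it for your task.
    """
    gap = train_acc - val_acc
    if gap > tol:
        return "train >> val: possible overfitting"
    elif gap < -tol:
        return "val >> train: check label imbalance, a tiny validation set, or luck"
    return "train and val are close: looks healthy"

print(diagnose(0.92, 0.997))  # the situation in this thread: val >> train
```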

Sure sir.

But in my case, is it a good model or a bad one?

I used a MultiResUNet.

```python
def compound_interest(principle, rate, years):
    # Compute compound interest only for positive inputs
    if principle > 0 and rate > 0 and years > 0:
        interest = 0.0
        x = principle             # running balance
        old_principle = principle
        for i in range(years):
            # simple interest for one year on the current balance
            interest = (x * rate * 1) / 100
            x = x + interest      # add the interest back in (compounding)
        # compound interest = final balance minus the original principle
        comp_interest = x - old_principle
        return comp_interest
    else:
        return 0
```

I did not understand the whole program. Please help.

There are 3 main points here:

- Only when principle > 0 and rate > 0 and years > 0 should it calculate the compound interest.
- If any one of the values is zero, it must return zero: return 0.
- comp_interest should be initialised as a float local variable of the function: comp_interest = 0.0
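To check your understanding, the loop's result can be compared against the closed-form formula for compound interest, P * ((1 + r/100)^n - 1). The helper below is my own illustration (rounded to 2 decimal places to sidestep floating-point noise):

```python
def compound_interest_formula(principal, rate, years):
    # Closed form: amount = P * (1 + r/100)**years; interest = amount - P
    return round(principal * ((1 + rate / 100) ** years - 1), 2)

# 1000 at 10% for 2 years: 1000 * 1.1**2 - 1000
print(compound_interest_formula(1000, 10, 2))  # 210.0
```

Tracing the loop by hand gives the same value: year 1 adds 100 (balance 1100), year 2 adds 110 (balance 1210), so the compound interest is 1210 - 1000 = 210.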

All the best!