[Question By User]
I have a question about how ML training works with respect to the range of the input feature values.
Suppose I have an input with 4 features. 2 of them are in the range [-1, 1] and the other 2 are outside this range. After normalization, the remaining 2 features end up in the same range as the first 2. How is this beneficial when training the weights and bias?
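To make concrete what I mean by normalization, here is a minimal sketch (I'm assuming simple min-max scaling of each column to [-1, 1] with NumPy; the numbers are made up):

```python
import numpy as np

# Toy batch: 4 features, 2 already in [-1, 1], 2 well outside it.
X = np.array([
    [ 0.5, -0.3,  120.0,  4500.0],
    [-0.8,  0.9,   80.0, -3000.0],
    [ 0.1,  0.4,  200.0,  1000.0],
])

# Min-max scale every column to [-1, 1].
col_min = X.min(axis=0)
col_max = X.max(axis=0)
X_scaled = 2 * (X - col_min) / (col_max - col_min) - 1

print(X_scaled.min(axis=0))  # -> [-1. -1. -1. -1.]
print(X_scaled.max(axis=0))  # -> [ 1.  1.  1.  1.]
```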
Now suppose I have an input with 2 features, both nominally in [-1, 1]. Looking more closely, one actually spans [-0.9, 0.9] while the other only spans [-0.09, 0.09], so the 2nd feature's range is 10 times smaller than the 1st's. During training of the weights and bias, will the feature with the larger range be given priority and receive a larger weight/bias than the 2nd feature? And would that make the 1st feature more important than the 2nd during training and later during testing?
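Here is a rough sketch of the experiment I have in mind for this second case, so you can see what I mean by comparing the learned weights (the synthetic target and the use of scikit-learn's LinearRegression are just my assumptions to make it runnable):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 1 effectively spans [-0.9, 0.9]; feature 2 only spans [-0.09, 0.09].
x1 = rng.uniform(-0.9, 0.9, size=n)
x2 = rng.uniform(-0.09, 0.09, size=n)
X = np.column_stack([x1, x2])

# Purely hypothetical target in which both features carry the same signal
# once their different scales are taken into account.
y = x1 / 0.9 + x2 / 0.09 + rng.normal(scale=0.01, size=n)

model = LinearRegression().fit(X, y)
print("learned weights:", model.coef_)       # compare the magnitudes for x1 vs x2
print("learned bias:   ", model.intercept_)
```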