.evaluate() computes the loss on the data we pass it, along with any other metrics we requested in the metrics parameter when we compiled the model (such as accuracy).
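For instance (a minimal sketch; X_test and y_test are hypothetical held-out arrays, and the model is assumed to be compiled with metrics=["accuracy"]):

# evaluate() returns the loss followed by each requested metric
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=100)
print(test_loss, test_accuracy)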
Do we really need to execute model.evaluate() separately when we can do the same thing in model.fit() itself by passing validation data, as below?
When we have created a separate validation set explicitly. In this case, aren't we doing:
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid), batch_size=100)
When we have not created a separate validation set explicitly:
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=30, validation_split=0.2, batch_size=100)
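In either case, the per-epoch validation results are recorded in the History object returned by fit(). A sketch (the "val_accuracy" key assumes metrics=["accuracy"] on recent tf.keras; older versions used "val_acc"):

# Validation loss/accuracy logged by fit() for the last epoch
print(history.history["val_loss"][-1], history.history["val_accuracy"][-1])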
Regards
Manoj