Use of model.evaluate() when model.fit() can be used for the same purpose

model.evaluate() computes the loss on the input we pass it, along with any other metrics we requested in the metrics parameter when we compiled our model (such as accuracy).
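For example, something like the following (a minimal sketch; model, X_test and y_test are assumed to already exist, and the model was compiled with metrics=["accuracy"]):

loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print("test loss: %.4f, test accuracy: %.4f" % (loss, accuracy))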

Do we really need to execute model.evaluate() separately when we can do the same thing within model.fit() itself using a validation set, as below?

When we have created a separate validation set explicitly. In this case, aren't we already doing:

model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid), batch_size=100)

When we have not created a separate validation set explicitly:

model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd", metrics=["accuracy"])

history = model.fit(X_train, y_train, epochs=30, validation_split=0.2, batch_size=100)

Regards
Manoj

Hi, Manoj.

  1. Yes, you can do automatic verification of the dataset using validation_split=0.3.
  2. You can use a Manual Verification Dataset using validation_data=(X_test, y_test).

The model.evaluate(X[test], Y[test], verbose=0) function is used when you are creating a number of models and want to know which one is the best, e.g. in k-fold cross-validation, as sketched below.
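A minimal sketch of that k-fold usage (build_model() is a hypothetical helper that returns a freshly compiled Keras model; X and Y are assumed to be NumPy arrays):

import numpy as np
from sklearn.model_selection import KFold

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train, test in kfold.split(X):
    model = build_model()  # hypothetical: returns a new compiled model each fold
    model.fit(X[train], Y[train], epochs=30, batch_size=100, verbose=0)
    # evaluate() scores the trained model on the held-out fold
    loss, acc = model.evaluate(X[test], Y[test], verbose=0)
    scores.append(acc)
print("mean accuracy: %.4f (+/- %.4f)" % (np.mean(scores), np.std(scores)))

Each fold trains a fresh model, and model.evaluate() gives a comparable score on data that model never saw during training.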

All the best!