Cat vs non-cat accuracy changing every time?

Hi
After training and splitting the dataset, I ran the same model multiple times, clearing the session first with tf.keras.backend.clear_session(), but I still get different test and validation accuracy, varying from 40% to 86% each run of the model below. Would you let me know what to correct so I get the same accuracy when I run the same model again and again?

After training and splitting, my code is:

test_set_x, train_set_x = test_set_x_orig / 255, train_set_x_orig / 255
train_x, valid_x = train_set_x[:150, :, :], train_set_x[150:, :, :]
train_y, valid_y = np.reshape(train_set_y[0, :150], (150,)), np.reshape(train_set_y[0, 150:], (59,))
## clear model
tf.keras.backend.clear_session()

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 2), activation='relu', input_shape=(64, 64, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())

model.add(layers.Dense(64, activation='selu'))
model.add(layers.Dense(128, activation='selu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(128, activation='selu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(64, activation='selu'))
model.add(layers.Dense(2, activation='sigmoid'))

model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
history = model.fit(train_x, train_y, epochs=10, validation_data=(valid_x, valid_y))
score = model.evaluate(test_set_x, np.reshape(test_set_y,(50,)))
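For reference, tf.keras.backend.clear_session() frees the graph state but does not reset the random seeds that drive weight initialization and data shuffling, which is the usual source of run-to-run variation. A minimal sketch of pinning those seeds (the helper name `reset_seeds` is illustrative, not part of the original code) would be:

```python
import random

import numpy as np
import tensorflow as tf


def reset_seeds(seed=42):
    # Pin all three random sources that Keras training draws from:
    # Python's random module, NumPy, and TensorFlow's own generator.
    random.seed(seed)
    np.random.seed(seed)
    tf.random.set_seed(seed)


# Clear the old graph, then pin the seeds before building the model.
tf.keras.backend.clear_session()
reset_seeds()
```

With the seeds pinned before each run, repeated runs of the same model on the same data should initialize and shuffle identically.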

The cats/non-cats dataset is too small to train a model from scratch and get good, stable accuracy; with so few examples, the random weight initialization and shuffling on each run dominate the result. Take a look at the transfer learning project.

You can find this on the Guided Projects page.
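To illustrate the transfer-learning idea on these 64x64 images, here is a hedged sketch: the choice of MobileNetV2 as the pretrained base and the head layer sizes are assumptions for illustration, not the setup used in the guided project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_transfer_model(input_shape=(64, 64, 3), weights="imagenet"):
    # Reuse a convolutional base pretrained on ImageNet and freeze it,
    # so only the small classification head is trained on the tiny dataset.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=weights
    )
    base.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # single binary cat/non-cat output
    ])
    model.compile(loss="binary_crossentropy", optimizer="adam",
                  metrics=["accuracy"])
    return model
```

Because the frozen base already provides general image features, the few trainable head parameters are far less prone to overfitting the small dataset than a full model trained from scratch.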