Spark 2.0.2 Fatal Error

I am trying to read a CSV file. It works for some smaller files, but the biggest file fails and produces an error log:

‘A fatal error has been detected by the Java Runtime Environment’

‘Could not obtain block:’

These are some snippets of the error.

I have no clue what I am doing wrong. Can you help me?

Hi,

This could be a memory issue. If you could share the piece of code, well documented with details about your objective, I could help.

Regards,
Sandeep

I think you are right about the memory issue. I see an OutOfMemoryError in the log. I was trying to build a recommender on the MovieLens dataset, which contains 20 million rows. When I take a sample of 100,000 rows it works. Is there a possibility that I can extend my memory?
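For reference, on a standalone Spark installation the driver and executor memory are usually raised through `spark-submit` flags. A sketch (the script name and memory values are illustrative, not from this thread):

```shell
# Illustrative spark-submit invocation; adjust values to the cluster's limits.
spark-submit \
  --driver-memory 4g \
  --executor-memory 4g \
  --num-executors 4 \
  recommender.py
```

Whether these limits can actually be changed depends on the hosting environment.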

@p172160155024,

Can you please try the suggestion below:

Extending memory on CloudxLab isn’t possible for an individual user, but you can definitely try using Spark MLlib and distributing the work across multiple nodes.