Cluster Health FAIL


One of our partners is getting the following FAIL in the cluster health check after installing 6.0.5 on CDH 5.7: "Datameer requires at least '1.0 GB' of memory for a (map/reduce-) task. Configured is only '729.0 MB'". What's the custom parameter he needs to set to bump up the memory here? We've tried a bunch with no success. Thanks!

Nikhil Kumar
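
For context, on an MR2/YARN cluster such as CDH 5.7, per-task memory is normally governed by two pairs of Hadoop settings: the container size (mapreduce.map.memory.mb / mapreduce.reduce.memory.mb, a bare number of megabytes) and the JVM heap inside that container (mapreduce.map.java.opts / mapreduce.reduce.java.opts, which take JVM flags). A minimal sketch of values that would satisfy the 1 GB check, assuming the cluster has headroom to grant them:

    mapreduce.map.memory.mb=2048
    mapreduce.reduce.memory.mb=2048
    mapreduce.map.java.opts=-Xmx1638m
    mapreduce.reduce.java.opts=-Xmx1638m

The heap is kept below the container size on purpose, since the container must also hold non-heap JVM overhead.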

10 comments

  • Nikhil Kumar

    Thanks! :-) He has tried all the following so far with no success:

    das.job.map-task.memory=2048m
    das.job.reduce-task.memory=2048m
    das.job.application-manager.memory=2048m
     
    mapred.map.child.java.opts=2048m
    mapreduce.reduce.java.opts=2048m
    mapred.child.java.opts=2048m
     
    He has also increased the memory in das-env.sh
     
    Any other ideas?
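
    The values handed to the java.opts properties above are bare sizes, but those properties take JVM arguments such as -Xmx2048m; mapred.map.child.java.opts is also not a standard Hadoop property name (the old-API name is mapred.child.java.opts; the MR2 equivalents are mapreduce.map.java.opts and mapreduce.reduce.java.opts). The corrected form of those lines would be:

    mapreduce.map.java.opts=-Xmx2048m
    mapreduce.reduce.java.opts=-Xmx2048m
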
  • Joel Stewart

    What's the cluster setup? Are there limitations on how much memory containers can request from the cluster itself? 

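    On YARN, those cluster-side ceilings typically live in yarn-site.xml; if a container request exceeds yarn.scheduler.maximum-allocation-mb, the ResourceManager rejects it no matter what the job-side properties ask for. Illustrative values only, assuming a modest single node:

    yarn.nodemanager.resource.memory-mb=8192
    yarn.scheduler.minimum-allocation-mb=1024
    yarn.scheduler.maximum-allocation-mb=4096
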
  • Nikhil Kumar

    I still get this error when installing us on a CDH QuickStart VM: "task.memory FAIL: Datameer requires at least '1.0 GB' of memory for a (map/reduce-) task. Configured is only '989.9 MB'". I tried setting the following (below), but it does not like the value 2048m and fails immediately. Any ideas? I want to use this VM for a training. Thanks!

    das.job.map-task.memory=2048m
    das.job.reduce-task.memory=2048m
    das.job.application-manager.memory=2048m
     
    mapred.map.child.java.opts=2048m
    mapreduce.reduce.java.opts=2048m
    mapred.child.java.opts=2048m
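
    If the value 2048m is being rejected outright, one possibility worth checking (an assumption, not confirmed against the Datameer documentation for this version) is that the das.job.*-task.memory properties expect a bare number of megabytes rather than a suffixed size:

    das.job.map-task.memory=2048
    das.job.reduce-task.memory=2048
    das.job.application-manager.memory=2048
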
  • Konsta Danyliuk

    Hi Nikhil,

    Try adding the option below to the Hadoop Cluster custom properties section to see if it helps.

    tez.task.resource.memory.mb=1536
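
    For background: when jobs run on Tez, tez.task.resource.memory.mb sets the YARN container size, in plain megabytes, requested for each Tez task, so 1536 asks for 1.5 GB per task.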

     

  • Nikhil Kumar

    Thanks, Konsta. Since this is 6.1.2, it's using Spark cluster mode, so I think there is a Spark-specific setting I need?

  • Nikhil Kumar

    By the way, the setting tez.task.resource.memory.mb=1536 did not resolve the problem.

  • Konsta Danyliuk

    Hi Nikhil,

    As Spark is the default execution framework for Datameer 6.1.2, try adding the option below to the Hadoop Cluster custom properties section to see if it helps.

    spark.executor.memory=1536m
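
    If that alone does not clear the check, the other standard Spark-on-YARN memory settings (whether Datameer forwards all of them through custom properties is an assumption) are spark.driver.memory, which takes a suffixed size like spark.executor.memory does, and spark.yarn.executor.memoryOverhead, a bare number of megabytes added on top of the heap when sizing the YARN container:

    spark.driver.memory=1536m
    spark.yarn.executor.memoryOverhead=384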
