RuntimeException: Filesystem closed


No jobs are processed because the underlying filesystem has been closed, and the Datameer service is not able to re-establish access to the file system.

Error message

[anonymous] INFO [LeaseRenewer:user@host:port] ( - Retrying connect to server: <hostname>/<ip>:<port>. Already tried 4 time(s); maxRetries=5
[anonymous] WARN [LeaseRenewer:user@host:port] ( - Failed to renew lease for [DFSClient_NONMAPREDUCE_-<id>] for 30 seconds. Aborting ... Call From FQDN/<ip> to <hostname>:<port> failed on socket timeout exception: 20000 millis timeout while waiting for channel to be ready for connect. ch :
java.nio.channels.SocketChannel[connection-pending remote=<hostname>/<ip>:<port>]; For more details see:
[anonymous] INFO [ConcurrentJobExecutor-2] ( - Releasing splits (UUID: <id>) from cache, still cached split-arrays: 3
[anonymous] ERROR [ConcurrentJobExecutor-2] ( - Failed to cleanup job Workbook <name> / <name> Filesystem closed
[system] ERROR [JobScheduler thread-1] ( - Storage not available, Filesystem closed
[system] ERROR [JobScheduler thread-1] ( - Failed to start job, filesystem is not available.

Troubleshooting steps

Check the Hadoop cluster settings.
Try to deploy the job "Cluster Health Check"; it will probably report the same error.

Check ulimit -n and cat /proc/sys/fs/file-nr to verify that enough file descriptors are available.
Check /var/log/messages and gather the file.
Perform these checks on both endpoints, i.e. on the Datameer host as well as on the cluster nodes.
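The file-descriptor checks above can be sketched as a small shell snippet. It assumes a typical Linux host (where /proc/sys/fs/file-nr exists); the 90% warning threshold is illustrative, not a Datameer requirement:

```shell
#!/usr/bin/env bash
# Run on the Datameer host and on the cluster nodes alike.

# Per-process open-file limit for the current shell/user
echo "ulimit -n: $(ulimit -n)"

# System-wide counters: allocated handles, free handles, and the maximum
read -r allocated free max < /proc/sys/fs/file-nr
echo "file handles: allocated=$allocated free=$free max=$max"

# Warn when system-wide usage approaches the maximum (example threshold)
if [ $((allocated * 100 / max)) -ge 90 ]; then
  echo "WARNING: more than 90% of the system-wide file handles are in use" >&2
fi
```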


The (network) connection to the cluster, and with it to the remote storage (HDFS), was lost. This can be caused by network issues, a reboot of the cluster, and so on. The Datameer service then closes the filesystem, because the storage (HDFS) is not available. The service stays in this state even after the remote storage becomes available again. See HDFS-5028 for more information.
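Before restarting anything, it can help to confirm that the cluster is reachable again from the Datameer host. A minimal sketch, assuming a Linux host with bash; the hostname is a placeholder and 8020 is only a commonly used NameNode RPC port, so substitute your cluster's values:

```shell
# Returns 0 if a TCP connection to host $1, port $2 succeeds within 5 seconds.
# Uses bash's built-in /dev/tcp redirection, so no extra tools are needed.
check_port() {
  timeout 5 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder values - replace with your NameNode host and RPC port
if check_port namenode.example.com 8020; then
  echo "NameNode port reachable"
else
  echo "NameNode port NOT reachable - network or cluster issue" >&2
fi
```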


In this case, restarting the conductor service will resolve the issue.
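A minimal restart sketch. The install path is an assumption (set DATAMEER_HOME to your actual installation directory), and the conductor.sh script name reflects a standard Datameer layout; verify both against your environment:

```shell
#!/usr/bin/env bash
# Restart the Datameer conductor once the remote storage is reachable again.
# /opt/datameer is only an example default - adjust to your installation.
DATAMEER_HOME="${DATAMEER_HOME:-/opt/datameer}"

if [ -x "$DATAMEER_HOME/bin/conductor.sh" ]; then
  "$DATAMEER_HOME/bin/conductor.sh" restart
else
  echo "conductor.sh not found under $DATAMEER_HOME - check the install path" >&2
fi
```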