DataNode start failed during installation

Tags: installation failure, DataNode

stderr:   /var/lib/ambari-agent/data/errors-40.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 161, in <module>
    DataNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/datanode.py", line 67, in start
    datanode(action="start")
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_datanode.py", line 68, in datanode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/utils.py", line 274, in service
    Execute(daemon_cmd, not_if=process_id_exists_command, environment=hadoop_env_exports)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
    tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode'' returned 1. starting datanode, logging to /opt/soft/log/hadoop/hdfs/hadoop-hdfs-datanode-ark1.out
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs/gc.log-202104121939 due to No such file or directory

Error: Could not find or load main class org.apache.hadoop.hdfs.server.datanode.DataNode
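
Two things stand out in the stderr above: the JVM cannot open its GC log because /var/log/hadoop/hdfs does not exist, and the DataNode main class cannot be loaded, which suggests the hadoop-hdfs jars are missing from the classpath. A minimal sketch of checks to run on the failing host (the paths come from the log itself; the hadoop-hdfs-client location is the usual HDP symlink layout and is an assumption to verify):

# Does the GC log directory from the JVM warning exist? Note that the Ambari
# run below only creates /opt/soft/log/hadoop/hdfs, not /var/log/hadoop/hdfs.
ls -ld /var/log/hadoop/hdfs

# Does the client resolve a classpath containing the HDFS jars? An empty or
# broken entry here would explain "Could not find or load main class".
/usr/hdp/current/hadoop-client/bin/hadoop classpath | tr ':' '\n' | grep hdfs

# Assumed HDP symlink layout; confirm the hdfs client jars are present.
ls /usr/hdp/current/hadoop-hdfs-client/ 2>/dev/null || echo "hadoop-hdfs-client missing"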

stdout:   /var/lib/ambari-agent/data/output-40.txt

2021-04-12 19:39:38,067 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-04-12 19:39:38,870 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2021-04-12 19:39:38,889 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2021-04-12 19:39:38,891 - Group['root'] {}
2021-04-12 19:39:38,895 - Group['ranger'] {}
2021-04-12 19:39:38,895 - Group['hadoop'] {}
2021-04-12 19:39:38,896 - Group['ark'] {}
2021-04-12 19:39:38,896 - Group['users'] {}
2021-04-12 19:39:38,896 - Group['presto'] {}
2021-04-12 19:39:38,897 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,899 - User['streaming'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,900 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,901 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2021-04-12 19:39:38,903 - User['root'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,904 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2021-04-12 19:39:38,905 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,907 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,908 - User['ark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,910 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,911 - User['isuhadoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,912 - User['presto'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2021-04-12 19:39:38,914 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2021-04-12 19:39:38,917 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2021-04-12 19:39:40,068 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2021-04-12 19:39:40,070 - Group['hdfs'] {}
2021-04-12 19:39:40,071 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2021-04-12 19:39:40,073 - FS Type: 
2021-04-12 19:39:40,073 - Directory['/etc/hadoop'] {'mode': 0755}
2021-04-12 19:39:40,109 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2021-04-12 19:39:40,111 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2021-04-12 19:39:40,161 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2021-04-12 19:39:41,328 - Skipping Execute[('setenforce', '0')] due to not_if
2021-04-12 19:39:41,329 - Directory['/opt/soft/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2021-04-12 19:39:41,335 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2021-04-12 19:39:41,336 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2021-04-12 19:39:41,346 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2021-04-12 19:39:41,350 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2021-04-12 19:39:41,361 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2021-04-12 19:39:41,384 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2021-04-12 19:39:41,385 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2021-04-12 19:39:41,387 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2021-04-12 19:39:41,397 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2021-04-12 19:39:42,556 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2021-04-12 19:39:45,778 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-04-12 19:39:45,782 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2021-04-12 19:39:45,838 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2021-04-12 19:39:45,871 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2021-04-12 19:39:47,107 - checked_call returned (0, '2.6.1.0-129', '')
2021-04-12 19:39:47,126 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2021-04-12 19:39:47,141 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2021-04-12 19:39:47,143 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-04-12 19:39:47,165 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2021-04-12 19:39:47,166 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,188 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-04-12 19:39:47,206 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2021-04-12 19:39:47,206 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,222 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2021-04-12 19:39:47,224 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2021-04-12 19:39:47,242 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2021-04-12 19:39:47,242 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,257 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2021-04-12 19:39:47,276 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2021-04-12 19:39:47,276 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,292 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2021-04-12 19:39:47,310 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2021-04-12 19:39:47,310 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,426 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2021-04-12 19:39:47,448 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2021-04-12 19:39:47,449 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2021-04-12 19:39:47,505 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2021-04-12 19:39:47,508 - Directory['/var/lib/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0751}
2021-04-12 19:39:47,508 - Changing permission for /var/lib/hadoop-hdfs from 755 to 751
2021-04-12 19:39:47,509 - Directory['/var/lib/ambari-agent/data/datanode'] {'create_parents': True, 'mode': 0755}
2021-04-12 19:39:47,509 - Creating directory Directory['/var/lib/ambari-agent/data/datanode'] since it doesn't exist.
2021-04-12 19:39:47,510 - History_file property has file /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist and it does not exist.
2021-04-12 19:39:47,548 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/perf_event', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/sys/kernel/debug', '/dev/mqueue', '/dev/hugepages', '/boot', '/home', '/run/user/0', '/data1'].
2021-04-12 19:39:47,548 - Mount point for directory /data1/hadoop/hdfs/data is /data1
2021-04-12 19:39:47,549 - Forcefully ensuring existence and permissions of the directory: /data1/hadoop/hdfs/data
2021-04-12 19:39:47,550 - Directory['/data1/hadoop/hdfs/data'] {'group': 'hadoop', 'cd_access': 'a', 'create_parents': True, 'ignore_failures': True, 'mode': 0755, 'owner': 'hdfs'}
2021-04-12 19:39:47,550 - Creating directory Directory['/data1/hadoop/hdfs/data'] since it doesn't exist.
2021-04-12 19:39:47,551 - Changing owner for /data1/hadoop/hdfs/data from 0 to hdfs
2021-04-12 19:39:47,565 - Host contains mounts: ['/sys', '/proc', '/dev', '/sys/kernel/security', '/dev/shm', '/dev/pts', '/run', '/sys/fs/cgroup', '/sys/fs/cgroup/systemd', '/sys/fs/pstore', '/sys/fs/cgroup/devices', '/sys/fs/cgroup/blkio', '/sys/fs/cgroup/freezer', '/sys/fs/cgroup/cpuset', '/sys/fs/cgroup/cpu,cpuacct', '/sys/fs/cgroup/net_cls,net_prio', '/sys/fs/cgroup/pids', '/sys/fs/cgroup/memory', '/sys/fs/cgroup/hugetlb', '/sys/fs/cgroup/perf_event', '/sys/kernel/config', '/', '/proc/sys/fs/binfmt_misc', '/sys/kernel/debug', '/dev/mqueue', '/dev/hugepages', '/boot', '/home', '/run/user/0', '/data1'].
2021-04-12 19:39:47,566 - Mount point for directory /data1/hadoop/hdfs/data is /data1
2021-04-12 19:39:47,567 - File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] {'content': '\n# This file keeps track of the last known mount-point for each dir.\n# It is safe to delete, since it will get regenerated the next time that the component of the service starts.\n# However, it is not advised to delete this file since Ambari may\n# re-create a dir that used to be mounted on a drive but is now mounted on the root.\n# Comments begin with a hash (#) symbol\n# dir,mount_point\n/data1/hadoop/hdfs/data,/data1\n', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2021-04-12 19:39:47,567 - Writing File['/var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist'] because it doesn't exist
2021-04-12 19:39:47,568 - Changing owner for /var/lib/ambari-agent/data/datanode/dfs_data_dir_mount.hist from 0 to hdfs
2021-04-12 19:39:47,571 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2021-04-12 19:39:47,571 - Changing owner for /var/run/hadoop from 0 to hdfs
2021-04-12 19:39:47,572 - Changing group for /var/run/hadoop from 0 to hadoop
2021-04-12 19:39:47,573 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2021-04-12 19:39:47,573 - Creating directory Directory['/var/run/hadoop/hdfs'] since it doesn't exist.
2021-04-12 19:39:47,574 - Changing owner for /var/run/hadoop/hdfs from 0 to hdfs
2021-04-12 19:39:47,574 - Directory['/opt/soft/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2021-04-12 19:39:47,575 - Creating directory Directory['/opt/soft/log/hadoop/hdfs'] since it doesn't exist.
2021-04-12 19:39:47,576 - Changing owner for /opt/soft/log/hadoop/hdfs from 0 to hdfs
2021-04-12 19:39:47,577 - File['/var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2021-04-12 19:39:48,729 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start datanode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-datanode.pid'}
2021-04-12 19:39:55,348 - Execute['find /opt/soft/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /opt/soft/log/hadoop/hdfs/hadoop-hdfs-datanode-ark1.out <==
Java HotSpot(TM) 64-Bit Server VM warning: Cannot open file /var/log/hadoop/hdfs/gc.log-202104121939 due to No such file or directory

Error: Could not find or load main class org.apache.hadoop.hdfs.server.datanode.DataNode
ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63446
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 655360
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 655360
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Command failed after 1 tries
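
Reading the stdout tail together with the stderr: hadoop-daemon.sh itself ran (the ulimit dump for user hdfs was printed), so it is the JVM launch that failed. Two observations consistent with this log, offered as hedged guesses rather than a confirmed diagnosis: the scripted run creates /opt/soft/log/hadoop/hdfs, yet the JVM still tries to write its GC log under the default /var/log/hadoop/hdfs, which suggests a hard-coded path in hadoop-env.sh; and, more fatally, org.apache.hadoop.hdfs.server.datanode.DataNode cannot be loaded, which usually points at missing hadoop-hdfs jars or broken /usr/hdp/current symlinks for stack build 2.6.1.0-129 (per the hdp-select output above). A remediation sketch under those assumptions (the yum package name follows the usual HDP naming convention and is itself an assumption; verify with yum list installed):

# Recreate the directory the GC log warning complains about; hdfs:hadoop
# ownership matches the other log directories created in this run.
mkdir -p /var/log/hadoop/hdfs && chown hdfs:hadoop /var/log/hadoop/hdfs

# Are the HDFS jars actually installed for this stack build?
ls /usr/hdp/2.6.1.0-129/hadoop-hdfs/hadoop-hdfs*.jar 2>/dev/null || echo "hadoop-hdfs jars missing"

# If they are missing, reinstalling the package and resetting the version
# symlinks is a plausible fix (package name assumed for this HDP build):
yum reinstall -y hadoop_2_6_1_0_129-hdfs
hdp-select set hadoop-hdfs-datanode 2.6.1.0-129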

No answers yet