Setting Up a Druid Cluster

Download and Extract the Packages

Download the Druid package and extract it into the installation directory:

wget http://static.druid.io/artifacts/releases/druid-0.12.3-bin.tar.gz

Download the MySQL metadata store extension and extract it into the extensions directory:

wget http://static.druid.io/artifacts/releases/mysql-metadata-storage-0.12.3.tar.gz

Manually create the druid-0.12.3/var/tmp and druid-0.12.3/var/hadoop-tmp directories.
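
A minimal shell sketch of the steps above (assuming the extension tarball unpacks into its own mysql-metadata-storage/ directory):

# unpack Druid into druid-0.12.3/
tar -xzf druid-0.12.3-bin.tar.gz
# drop the MySQL extension into the extensions directory
tar -xzf mysql-metadata-storage-0.12.3.tar.gz -C druid-0.12.3/extensions/
# temp directories Druid expects to exist at startup
mkdir -p druid-0.12.3/var/tmp druid-0.12.3/var/hadoop-tmp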

Create the Metadata Storage and Deep Storage

Metadata storage

I use MySQL as the metadata store: create a druid user and a druid database in MySQL, and grant the druid user all privileges on the druid database.
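
A minimal sketch of those statements (the druid user, druid database, and the password 123456 match the metadata-storage connector settings used below; adjust for your environment):

mysql -u root -p <<'SQL'
-- database for Druid's metadata tables
CREATE DATABASE druid DEFAULT CHARACTER SET utf8;
-- account the Druid services connect with
CREATE USER 'druid'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON druid.* TO 'druid'@'%';
FLUSH PRIVILEGES;
SQL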

Deep storage

I use HDFS as Druid's deep storage. Create a /druid directory in HDFS with the following commands:

sudo su hdfs
hadoop fs -mkdir -p /druid
hadoop fs -chown hadoop:supergroup /druid   # grant ownership of the directory to the hadoop user

Cluster Layout

The production cluster has 10 machines in total, running 8 data nodes, 2 Master nodes, and 2 query nodes.
Data nodes: run the Historical and MiddleManager processes.
Master nodes: run the Coordinator and Overlord processes.
Query nodes: run the Broker and (optionally) Router processes.

Configuration

Create the druid-0.12.3/conf-online directory, copy the configuration from conf/ into conf-online/, and then edit it as follows:
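
For example:

cd druid-0.12.3
cp -r conf conf-online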

Common Configuration

Configuration file: conf-online/druid/_common/common.runtime.properties

Copy the Hadoop configuration files

Copy the Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml) into conf-online/druid/_common.
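
For example (assuming a CDH-style /etc/hadoop/conf layout; point the source path at wherever your cluster keeps these files):

cp /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml \
   /etc/hadoop/conf/yarn-site.xml /etc/hadoop/conf/mapred-site.xml \
   conf-online/druid/_common/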

Configure ZooKeeper

#
# Zookeeper
#

druid.zk.service.host=host01:2181,host02:2181,host03:2181
druid.zk.paths.base=/druid

Add extensions ("druid-kafka-indexing-service", "mysql-metadata-storage", "druid-hdfs-storage")

druid.extensions.loadList=["druid-kafka-eight", "druid-histogram", "druid-datasketches", "druid-lookups-cached-global", "mysql-metadata-storage", "druid-kafka-indexing-service", "druid-hdfs-storage"]

Configure the Metadata storage

#
# Metadata storage
#

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://host01:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=123456

Configure the Deep storage

#
# Deep storage
#

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are on the classpath):
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://nameservice1/druid/segments

Configure the Indexing service logs

#
# Indexing service logs
#

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files are on the classpath):
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://nameservice1/druid/indexing-logs

Set the Monitoring log level

#
# Monitoring
#
druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info

Coordinator Configuration

Configuration directory: conf-online/druid/coordinator

jvm.config

Change the time zone

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log

runtime.properties

Change the host and port

druid.service=druid/coordinator
druid.host=192.168.1.120
druid.port=28081

druid.coordinator.startDelay=PT30S
druid.coordinator.period=PT30S

Overlord Configuration

Configuration directory: conf-online/druid/overlord

jvm.config

Change the time zone

-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

Change the host and port

druid.service=druid/overlord
druid.host=192.168.1.120
druid.port=28090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata

Historical Configuration

Configuration directory: conf-online/druid/historical

jvm.config

Change the time zone and MaxDirectMemorySize

-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=20g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

Change the host and port.
Tune druid.server.http.numThreads and druid.processing.numThreads (cores - 1 is recommended).
Note: MaxDirectMemorySize >= druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1); see the worked check after the config below.
druid.segmentCache.locations is where segments are cached on local disk.
Configure the Historical cache.

druid.service=druid/historical
druid.host=192.168.1.120
druid.port=28083

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=1073741824
druid.processing.numMergeBuffers=2
druid.processing.numThreads=15
druid.processing.tmpDir=var/druid/processing

# Segment storage
#druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize"\:130000000000}]
druid.segmentCache.locations=[{"path":"/data01/druid/segment-cache","maxSize"\:100000000000}, {"path":"/data02/druid/segment-cache","maxSize"\:100000000000}]
druid.server.maxSize=200000000000

druid.query.groupBy.maxOnDiskStorage=10737418240

# Cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.historical.cache.unCacheable=["select"]
druid.cache.type=caffeine
druid.cache.sizeInBytes=6000000000
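
As a sanity check of the direct-memory rule above: 1,073,741,824 bytes (1 GiB) * (2 merge buffers + 15 processing threads + 1) = 18 GiB, which fits within the 20g MaxDirectMemorySize set in jvm.config.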

MiddleManager Configuration

Configuration directory: conf-online/druid/middleManager

jvm.config

Change the time zone

-server
-Xms64m
-Xmx64m
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Dhadoop.hadoop.tmp.dir=var/hadoop-tmp
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

Change the host and port.
Change druid.indexer.runner.javaOpts to set MaxDirectMemorySize for the peons.
Adjust the druid.indexer.fork.property.druid.processing.* settings.
Specify druid.indexer.task.defaultHadoopCoordinates, the Hadoop version used by Hadoop index tasks.

druid.service=druid/middleManager
druid.host=192.168.1.120
druid.port=28091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xms2g -Xmx2g -XX:MaxDirectMemorySize=4g -Duser.timezone=UTC+0800 -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
druid.indexer.task.restoreTasksOnRestart=true

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=536870912
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
druid.indexer.fork.property.druid.processing.numThreads=2
druid.indexer.fork.property.druid.processing.tmpDir=var/druid/processing

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.6.0-mr1-cdh5.13.1"]
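
With these fork properties, each peon needs 536,870,912 bytes (512 MiB) * (2 merge buffers + 2 processing threads + 1) = 2.5 GiB of direct memory, which fits within the 4g MaxDirectMemorySize passed via druid.indexer.runner.javaOpts.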

Broker Configuration

Configuration directory: conf-online/druid/broker

jvm.config

Change the time zone.
Change -Xms, -Xmx, and MaxDirectMemorySize.

-server
-Xms24g
-Xmx24g
-XX:MaxDirectMemorySize=20g
-Duser.timezone=UTC+0800
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

runtime.properties

Change the host and port.
Tune druid.server.http.numThreads and druid.processing.numThreads (cores - 1 is recommended).
Note: MaxDirectMemorySize >= druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1); see the worked check after the config below.
Set druid.sql.enable to true to enable SQL.
Set druid.query.groupBy.maxOnDiskStorage: the maximum disk space (per query) used to spill result sets to disk when the merge buffers or dictionaries fill up.

druid.service=druid/broker
druid.host=192.168.1.120
druid.port=28082

# HTTP server threads
druid.broker.http.numConnections=50
druid.server.http.numThreads=40

# Processing threads and buffers
druid.processing.buffer.sizeBytes=1073741824
druid.processing.numMergeBuffers=4
druid.processing.numThreads=14
druid.processing.tmpDir=var/druid/processing

# SQL
druid.sql.enable=true

# Query cache
#druid.broker.cache.useCache=true
#druid.broker.cache.populateCache=true
#druid.cache.type=local
#druid.cache.sizeInBytes=2000000000

# Query config
# How the broker picks which historical node to route a query to: random or connectionCount
druid.broker.balancer.type=connectionCount
druid.query.groupBy.maxOnDiskStorage=10737418240
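
The same sanity check for the broker: 1,073,741,824 bytes (1 GiB) * (4 merge buffers + 14 processing threads + 1) = 19 GiB, which fits within the 20g MaxDirectMemorySize in jvm.config.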

Start the Cluster

Modify bin/node.sh

Add DRUID_CONF_DIR="conf-online/druid" near the top of the file.

Copy the configuration to the other nodes, remembering to change druid.host on each node.

Start the Master nodes

bin/coordinator.sh start
bin/overlord.sh start

Start the Data nodes

bin/historical.sh start
bin/middleManager.sh start

Start the Query nodes

bin/broker.sh start

Install Superset

Superset is an open-source data exploration and visualization tool. It supports Druid as a data source, can visualize the data stored in Druid, and also supports SQL queries.
Project: https://github.com/apache/incubator-superset
Installation guide: https://superset.incubator.apache.org/installation.html

Problems Encountered

Not enough direct memory.

Not enough direct memory.  Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[1,073,741,824], memoryNeeded[6,979,321,856] = druid.processing.buffer.sizeBytes[536,870,912] * (druid.processing.numMergeBuffers[2] + druid.processing.numThreads[10] + 1)

Adjust -XX:MaxDirectMemorySize so that MaxDirectMemorySize >= druid.processing.buffer.sizeBytes * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1).
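
Plugging in the numbers from this particular log: 536,870,912 * (2 + 10 + 1) = 6,979,321,856 bytes (about 6.5 GiB), so this process needs -XX:MaxDirectMemorySize of at least roughly 7g, or alternatively a smaller buffer size / fewer processing threads and merge buffers.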

Failed to create directory within 10000 attempts

This is a path problem: the java.io.tmpdir=var/tmp directory set in jvm.config must exist before startup (the relative path resolves under the Druid installation directory), but Druid does not create it. Creating var/tmp by hand fixes the error below.

java.lang.IllegalStateException: Failed to create directory within 10000 attempts (tried 1538122986466-0 to 1538122986466-9999)
at com.google.common.io.Files.createTempDir(Files.java:600) ~[guava-16.0.1.jar:?]
at io.druid.segment.indexing.RealtimeTuningConfig.createNewBasePersistDirectory(RealtimeTuningConfig.java:58) ~[druid-server-0.12.3.jar:0.12.3]
at io.druid.segment.indexing.RealtimeTuningConfig.makeDefaultTuningConfig(RealtimeTuningConfig.java:68) ~[druid-server-0.12.3.jar:0.12.3]
at io.druid.segment.realtime.FireDepartment.<init>(FireDepartment.java:62) ~[druid-server-0.12.3.jar:0.12.3]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:397) ~[?:?]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]