Preface: I recently deployed ELK and ran into quite a few pitfalls, and complete, detailed documentation is hard to find online, so I'm recording and summarizing my own deployment process here. The initial server configuration:
Hosts: 2 (elk1, elk2)
OS: CentOS 7
Specs: 4 cores, 16 GB RAM
Network: interconnected over the internal network
ELK versions: elasticsearch-2.4.1 kibana-4.6.1
logstash-2.4.0 logstash-forwarder-0.4.0
redis-3.0.7
A brief introduction to ELK
ELK is short for three different tools which, used together, cover all kinds of log analysis. Briefly, what each one does:
Elasticsearch: an open-source search engine based on Apache Lucene(TM); simply put, the tool that indexes and stores the logs (see http://es./)
Logstash: an application that supports transporting, filtering, managing, and searching logs. We generally use it to collect and manage application logs centrally, and it provides a web interface for queries and statistics
Kibana: a web platform that presents log analysis in a friendlier way; in short, a picture is worth a thousand words -- you can build all kinds of charts on it to display the analysis results more intuitively
Summary: combining the three tools, we can collect logs, analyze them, and display the results.
Deployment and installation
Brief: I basically followed https://www./... for the installation. Unfortunately, it only covers the most basic install; there is little material on what to do when log volume is large, or how to investigate when something goes wrong, so here I lay out the steps in detail based on my own installation.
Notes:
1. This article installs from the official RPM packages (convenient); pick your own method if you prefer
2. This is a single-server ELK install; a cluster install is the same, just adjust the config files
I. Download the latest stable release, which has the most features and fewest bugs. Official site: https:
II. Elasticsearch installation
1 ELK needs a JDK, so install version 1.8 or later first if you don't have one
$cd ~
$wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u65-b17/jdk-8u65-linux-x64.rpm"
$sudo yum localinstall jdk-8u65-linux-x64.rpm
$rm ~/jdk-8u65-linux-x64.rpm
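Before moving on to Elasticsearch, it's worth confirming a JDK is actually on the PATH. A minimal sketch (the exact version string varies by JDK build, and the fallback message is mine):

```shell
# Sanity check: is a JDK >= 1.8 available before installing Elasticsearch?
if command -v java >/dev/null 2>&1; then
  ver=$(java -version 2>&1 | head -n1)
else
  ver="java not found: install JDK 1.8+ first"
fi
echo "$ver"
```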
2 下載并安裝Elasticsearch $rpm -ivh elasticsearch-2.4.1.rpm
3 Edit the config file; the main items to change are these
$vim /etc/elasticsearch/elasticsearch.yml
path.data: /data/elasticsearch #data storage directory
path.logs: /data/elasticsearch/log #elasticsearch log path
network.host: elk1 #host IP; I added a hosts entry for the name
node.name: "node-2" #node name; must be unique per node
http.port: 9200 #HTTP API port
node.master: true #master-eligible node
node.data: true #whether this node stores data
#unicast discovery; I have two nodes joining the elk cluster
discovery.zen.ping.unicast.hosts: [elk1, elk2]
4 創(chuàng)建配置文件夾后啟動(dòng) $mkdir -pv /data/elasticsearch/log
$systemctl start elasticsearch
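Besides checking the data directory, you can hit the HTTP API to confirm the node came up. A sketch assuming the host/port configured above (elk1:9200); a sample response is inlined here so the parsing can be tried without a live node:

```shell
# Live deployment:  response=$(curl -s http://elk1:9200/_cluster/health)
# Sample response (hypothetical values) so the parsing logic runs anywhere:
response='{"cluster_name":"elasticsearch","status":"green","number_of_nodes":2}'
# Extract the status field: "green" is healthy, "yellow"/"red" need attention
status=$(echo "$response" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "cluster status: $status"
```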
Check: verify that /data/elasticsearch/elasticsearch/nodes/0/indices was created correctly
III. Kibana installation
1 Download and install Kibana
$rpm -ivh kibana-4.6.1-x86_64.rpm
2 Edit the config file; the main changes are below
$vim /opt/kibana/config/kibana.yml
server.port: 5601
#server.host: "localhost"
server.host: "0.0.0.0"
elasticsearch.url: "http://elk1:9200"
3 啟動(dòng)并檢查是否安裝成功 $systemctl start kibana
$netstat -ntlp|grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 8354/node
IV. Redis installation
1 I chose redis 3.0.7; you can download and build it from the official site (https:///)
2 Start
$redis-server /etc/redis_6379.conf &
Note: I won't go into the redis install itself here. Worth pointing out: redis is single-threaded, it is generally used as the queue in ELK, and its I/O load is heavy, so I run multiple instances (detailed later in the optimization section) and route different logs to different redis instances. For more on installing redis, search the web.
V. Generate an SSL certificate for passwordless log shipping to Logstash
1 Edit /etc/pki/tls/openssl.cnf
$vim /etc/pki/tls/openssl.cnf
# fill in the Logstash server's IP here; ship over the internal network if possible
subjectAltName = IP: 10.26.215.110
2 Generate the certificate
$cd /etc/pki/tls
$sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
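Before distributing the certificate, it's worth confirming its subject and validity window. A sketch that generates a throwaway pair in /tmp (file names `lf-test.key`/`lf-test.crt` are mine, so it can be tried anywhere); in the real setup you would inspect certs/logstash-forwarder.crt instead:

```shell
cd /tmp
# Throwaway self-signed pair (mirrors the article's command, minus openssl.cnf)
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj "/CN=logstash-test" \
  -keyout lf-test.key -out lf-test.crt 2>/dev/null
# Inspect the subject and validity period before copying the cert to clients
info=$(openssl x509 -in lf-test.crt -noout -subject -dates)
echo "$info"
```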
VI. Logstash installation
1 Download and install Logstash
$rpm -ivh logstash-2.4.0.noarch.rpm
2 Config file that receives logs and pushes them into redis
elasticsearch output parameter reference: http://www./guide/e...
elasticsearch input parameter reference: https://www./guide/...
elasticsearch filter reference: https://www./guide/...
Note: Logstash has no default config file; you must write one yourself. Here are the two I use
$vim /etc/logstash/conf.d/redis-input.conf
input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
}
output {
  redis {
    host => "127.0.0.1"
    port => 6379
    data_type => "list"
    key => "logstash:redis"
  }
}
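Logstash 2.x can validate a pipeline file before you start it (the `--configtest`/`-t` flag). A guarded sketch so the command degrades gracefully on a machine where Logstash isn't installed:

```shell
# Validate a pipeline config before starting it for real
CONF=/etc/logstash/conf.d/redis-input.conf
if [ -x /opt/logstash/bin/logstash ]; then
  result=$(/opt/logstash/bin/logstash -f "$CONF" --configtest 2>&1)
else
  result="logstash not installed, skipped configtest for $CONF"
fi
echo "$result"
```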
3 Read logs from the redis queue and call the elasticsearch API to build indexes
$vim /etc/logstash/conf2.d/redis-output.conf
input {
  redis {
    data_type => "list"
    key => "logstash:redis"
    host => "10.24.245.21"
    port => 6379
  }
}
output {
  elasticsearch {
    hosts => ["meizu-elk:9200"]
    index => "%{type}-%{+YYYY.MM.dd}"
    document_type => "%{type}"
    workers => 100
    template_overwrite => true
  }
}
4 Start Logstash
I wrote my own startup script to fit my needs
#!/bin/bash
function conf()
{
    script_name="logstash"
    start_command=/opt/logstash/bin/logstash
    conf_file=/etc/logstash/conf.d/redis-input.conf
    output_conf=/etc/logstash/conf2.d/redis-output.conf
    log_path=/var/log/logstash_output.log
    log_input_path=/var/log/logstash_input.log
    works=4
}
function getPid()
{
    i_pid=$(ps -ef | grep "${conf_file}"|grep -v "grep"|awk '{print $2}')
    o_pid=$(ps -ef | grep "${output_conf}"|grep -v "grep"|awk '{print $2}')
}
function _start(){
    getPid
    [ -n "$i_pid" ] && { echo "[start] ${start_command} -f ${conf_file} is already running,exit";exit; }
    ${start_command} -f ${conf_file} >> ${log_input_path} 2>&1 &
    [ $? != 0 ] && { echo "[start] Running ${start_command} -f ${conf_file} Error";exit; }
    echo "[startBase] Config file:${start_command} ${conf_file}"
    [ -n "$o_pid" ] && { echo "[start] ${start_command} -f ${output_conf} is already running,exit";exit; }
    for i in $(seq ${works});do
        ${start_command} -f ${output_conf} >> ${log_path} 2>&1 &
    done
    [ $? != 0 ] && { echo "[start] Running ${start_command} -f ${output_conf} Error";exit; }
    echo "[startToElastic] Config file:${start_command} ${output_conf}"
}
function _stop(){
    getPid
    for i in ${i_pid[@]}
    do
        kill -9 $i || echo "[stop] Stop logstash Error"
        sleep 1 && echo "[stop] ${start_command} pid:$i stopped"
    done
    for i in ${o_pid[@]}
    do
        kill -9 $i || echo "[stop] Stop logstash Error"
        sleep 1 && echo "[stop] ${start_command} pid:$i stopped"
    done
    sleep 1
}
function _check(){
    getPid
    if [ ! -n "$i_pid" ];then
        echo "[check] ${start_command} -f ${conf_file} is already stopped"
    else
        for i in ${i_pid[@]}
        do
            echo "[check] ${start_command} -f ${conf_file} is running,pid is $i"
        done
    fi
    if [ ! -n "$o_pid" ];then
        echo "[check] ${start_command} -f ${output_conf} is already stopped"
    else
        for i in ${o_pid[@]}
        do
            echo "[check] ${start_command} -f ${output_conf} is running,pid is $i"
        done
    fi
}
function manager()
{
    case "$1" in
        start)
            _start
            _check
            ;;
        stop)
            _stop
            _check
            ;;
        check)
            _check
            ;;
        restart)
            _check
            _stop
            _start
            _check
            ;;
        *)
            printf "Invalid argument! You may only use: start|check|stop|restart\n"
            ;;
    esac
}
conf
if [ "$#" -ne "1" ];then
    echo ""
    echo "This script needs exactly one parameter"
    echo "For example:"
    echo "/etc/init.d/${script_name} start|stop|check|restart"
    echo "start | start the required processes"
    echo ""
    exit 1
fi
ctrl=$1 && manager ${ctrl}
Start logstash
$systemctl start logstash
VII. Install the logstash-forwarder client to ship logs
1 Install the client on each server you want to collect from; it collects logs and ships them out over TCP. I'll pick an arbitrary server here, e.g. web1
$rpm -ivh logstash-forwarder-0.4.0-1.x86_64.rpm
2 Modify the default config file
$vim /etc/logstash-forwarder.conf
#find "network" and edit inside that block
"servers": [ "10.26.215.116:5043" ],
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
"timeout": 15
#find "files" and edit inside that block
"files": [
{
"paths": [ "/var/log/message.log"],
"fields": { "type": "logstash" }
},
{
"paths": [ "/data/log/nginx/access.log"],
"fields": { "type": "web1_nginx" }
}
]
3 Copy /etc/pki/tls/certs/logstash-forwarder.crt from elk1 to web1
4 Start logstash-forwarder
$/etc/init.d/logstash-forwarder start
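On web1 you can verify the copied cert before starting the forwarder. A sketch; the host/port in the comment are the ones used in this article's config, and the live s_client handshake is left commented because it needs network access to the Logstash server:

```shell
# Confirm the CA cert copied from elk1 is in place and readable
CRT=/etc/pki/tls/certs/logstash-forwarder.crt
if [ -r "$CRT" ]; then
  msg="cert present: $CRT"
else
  msg="cert missing: copy $CRT over from elk1 first"
fi
echo "$msg"
# Live TLS handshake test (run on web1 once the cert is in place):
#   echo | openssl s_client -connect 10.26.215.116:5043 -CAfile "$CRT"
```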
八、安裝并配置nginx用于http訪問1 nginx安裝 $sudo yum -y install epel-release
$sudo yum -y install nginx httpd-tools
2 設(shè)置用于http訪問的權(quán)限 #創(chuàng)建kibanaadmin用戶,這里會(huì)讓你輸入密碼,比如輸入123
$sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadmin
3 Configure nginx
$vim /etc/nginx/nginx.conf
include /etc/nginx/conf.d/*.conf;
$vim /etc/nginx/conf.d/kibana.conf
server {
listen 80;
server_name _; #replace with your own domain
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/htpasswd.users;
location / {
proxy_pass http://localhost:5601;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
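Before starting nginx, validate the config you just wrote. A guarded sketch so the command degrades gracefully on a machine without nginx:

```shell
# nginx -t checks syntax of nginx.conf and everything it includes
if command -v nginx >/dev/null 2>&1; then
  check=$(sudo nginx -t 2>&1)
else
  check="nginx not installed, skipped config test"
fi
echo "$check"
```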
Start nginx
$sudo systemctl start nginx
$sudo systemctl enable nginx
Verifying the services
We've installed the ELK services, but how do we know the installation is correct, and that data is flowing through the pipeline properly? Here are a few checkpoints.
I. Logstash checks
1 Check whether logstash-forwarder is sending logs to the logstash server
[root@test]# tail -f /var/log/logstash-forwarder/logstash-forwarder.err
2016/12/01 21:22:00.856750 Registrar: processing 116 events
2016/12/01 21:22:05.877525 Registrar: processing 100 events
# output like this means the client connected and is sending to the server
2 Check the logstash server-side logs for anomalies
Check the input side (log collection):
$tail -f /var/log/logstash_input.log
{:timestamp=>"2016-12-01T15:54:55.113000+0800", :message=>"Pipeline main started"}
$redis-cli -h IP -p 6379
10.24.245.1:6379> LLEN "logstash:redis"
(integer) 10
10.24.245.1:6379> LRANGE "logstash:redis" 1 10
Check the output side (connecting to elasticsearch and building indexes):
$tail -f /var/log/logstash_output.log
{:timestamp=>"2016-12-01T15:54:55.113000+0800", :message=>"Pipeline main started"}
{:timestamp=>"2016-12-01T15:54:57.095000+0800", :message=>"Pipeline main started"}
There is little in these logs because we turned off verbose processing output; two ways to see the details:
(1) modify the config file
vim /etc/logstash/conf2.d/redis-output-bbs.conf
stdout { codec => rubydebug }
(2) start the process manually and print debug output to the screen
/opt/logstash/bin/logstash -f /etc/logstash/conf2.d/redis-output-bbs.conf -vv
II. Check whether elasticsearch is building indexes
1 Check the log
$tail -f /data/elasticsearch/log/elasticsearch.log
2 Go into the elasticsearch data directory and check whether any index has been created
$ll -h /data/elasticsearch/elasticsearch/nodes/0/indices/
III. Check kibana
1 You can start it directly on the command line and watch for errors
$/opt/kibana/bin/kibana
2 Access kibana over HTTP at http:// after adding a hosts entry:
Your_ip example.com


Optimization
The optimizations here mainly address the case where a single server can't keep up with the workload, and squeezing the most log-processing capacity out of each server when log volume is large. Since ELK is a distributed log-analysis system by design, each role can be split out onto its own server so they don't interfere with each other.
1 Redis
In my experiments I split each role out in turn, and redis had by far the biggest impact on the others. Used as the message queue, and being single-threaded, it bottlenecks when reading and writing at the same time, and when processing can't keep up, logs pile up inside redis; co-locating it with the rest of ELK easily leads to OOM and memory contention, and, worst of all, frequent I/O waits.
My final solution: (1) run redis on a dedicated server; (2) isolate by business line: send different logs to different redis instances, using multiple instances and channels to make full use of the hardware.
2 Logstash
Logstash doesn't seem to be under much pressure collecting logs and writing them into redis. I opened one socket for receiving logs, and concurrent writes to redis were fast in testing; 10k log lines/s should be no problem, and for larger volumes you can open several sockets and split the traffic.
3 Elasticsearch
ELK is generally used for large-scale log analysis, so a cluster is unavoidable: index builds and queries all end up on elasticsearch, which puts it under very heavy load, and a cluster handles this much better. An elasticsearch cluster is very simple; only a little configuration is needed.
On the master node:
$vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1
node.master: true
node.data: true
#unicast discovery; you can list multiple nodes
discovery.zen.ping.unicast.hosts: [elk1, elk2]
On the other node(s):
$vim /etc/elasticsearch/elasticsearch.yml
node.name: node-2
node.master: false
node.data: true
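Once both nodes are configured and restarted, you can confirm they joined and that exactly one master was elected via the `_cat/nodes` API. A sketch with a sample response inlined (hypothetical values; the live curl against this article's host elk1 is shown in the comment):

```shell
# Live cluster:  nodes=$(curl -s 'http://elk1:9200/_cat/nodes?h=name,ip,heap.percent,node.role,master')
# Sample output used here so the check can be tried without a cluster:
nodes='node-1 10.26.215.110 12 d *
node-2 10.26.215.111  9 d -'
# Count rows whose master column is "*": a healthy cluster elects exactly one
masters=$(echo "$nodes" | awk '$5=="*"{c++} END{print c+0}')
echo "elected masters: $masters"
```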
References:
http://www./articl... elasticsearch configuration
https://www./guide/... logstash official documentation
https://www./guide/... elasticsearch official documentation