ApacheBench (ab) load testing tool

About ab

ApacheBench, ab for short, is a web load testing tool that ships with the Apache HTTP server. It is a command-line tool that can spawn many concurrent requests, simulating multiple visitors hitting the same URL at once, which makes it useful for measuring how much load a target server can handle.

Installing ab

sudo apt-get install apache2-utils

Options

Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
-n requests Number of requests to perform // total number of requests to issue
-c concurrency Number of multiple requests to make at a time // number of requests to run concurrently
-t timelimit Seconds to max. to spend on benchmarking
This implies -n 50000
-s timeout Seconds to max. wait for each response
Default is 30 seconds
-b windowsize Size of TCP send/receive buffer, in bytes
-B address Address to bind to when making outgoing connections
-p postfile File containing data to POST. Remember also to set -T
-u putfile File containing data to PUT. Remember also to set -T
-T content-type Content-type header to use for POST/PUT data, eg.
'application/x-www-form-urlencoded'
Default is 'text/plain'
-v verbosity How much troubleshooting info to print
-w Print out results in HTML tables
-i Use HEAD instead of GET
-x attributes String to insert as table attributes
-y attributes String to insert as tr attributes
-z attributes String to insert as td or th attributes
-C attribute Add cookie, eg. 'Apache=1234'. (repeatable)
-H attribute Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
Inserted after all normal header lines. (repeatable)
-A attribute Add Basic WWW Authentication, the attributes
are a colon separated username and password.
-P attribute Add Basic Proxy Authentication, the attributes
are a colon separated username and password.
-X proxy:port Proxyserver and port number to use
-V Print version number and exit
-k Use HTTP KeepAlive feature
-d Do not show percentiles served table.
-S Do not show confidence estimators and warnings.
-q Do not show progress when doing more than 150 requests
-l Accept variable document length (use this for dynamic pages)
-g filename Output collected data to gnuplot format file.
-e filename Output CSV file with percentages served
-r Don't exit on socket receive errors.
-h Display usage information (this message)
-Z ciphersuite Specify SSL/TLS cipher suite (See openssl ciphers)
-f protocol Specify SSL/TLS protocol
(SSL3, TLS1, TLS1.1, TLS1.2 or ALL)

Basic usage

GET load test

ab -n 100 -c 10 http://www.baidu.com/
  • -n: number of requests
  • -c: concurrency (number of simultaneous requests)

Sample output:

This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.baidu.com (be patient).....done


Server Software: BWS/1.1
Server Hostname: www.baidu.com
Server Port: 80


Document Path: / # path under test
Document Length: 112439 bytes # size of the response document

Concurrency Level: 10 # concurrency level
Time taken for tests: 1.256 seconds # total time spent on the whole test
Complete requests: 100 # number of completed requests
Failed requests: 96 # number of failed requests (all "Length" failures here: the page size varied between requests, which is normal for dynamic pages; see the -l option)
(Connect: 0, Receive: 0, Length: 96, Exceptions: 0)
Write errors: 0
Total transferred: 11348660 bytes # total bytes transferred
HTML transferred: 11253726 bytes # total HTML bytes transferred
Requests per second: 79.62 [#/sec] (mean) # average requests per second
Time per request: 125.593 [ms] (mean) # mean time to process each batch of 10 concurrent requests
Time per request: 12.559 [ms] (mean, across all concurrent requests) # mean time per single request, across all concurrent requests
Transfer rate: 8824.29 [Kbytes/sec] received # average network throughput per second

Connection Times (ms)
min mean[+/-sd] median max
Connect: 4 20 7.7 18 38
Processing: 18 90 50.5 82 356
Waiting: 4 22 7.9 22 41
Total: 22 111 50.7 101 384
# For each of Connect, Processing, Waiting and Total, the table shows the minimum (min), mean, standard deviation ([+/-sd]), median, and maximum (max) times.

Percentage of the requests served within a certain time (ms)
50% 101 # 50% of requests were served within 101 ms
66% 103 # 66% of requests were served within 103 ms
75% 104 # ...and so on
80% 105
90% 111
95% 267
98% 311
99% 384
100% 384 (longest request)

Request with a custom header:

ab -n 100 -H "Cookie: Key1=Value1; Key2=Value2" http://test.com/

POST request

ab -n 1 -c 1 -p 'post.txt' -T 'application/x-www-form-urlencoded'   http://192.168.188.6:8080/distributeLock2
  • -p: the file containing the POST body; here post.txt sits in the directory where ab is run (a sample is sketched below)
  • -T: sets the Content-Type header
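
For reference, with the form-encoded content type above, post.txt simply holds the URL-encoded request body; a hypothetical example (field names made up):

key1=value1&key2=value2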

org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available

The problem

Today, while working with ES, this exception came up:

org.elasticsearch.client.transport.NoNodeAvailableException: None of the
configured nodes are available

The code that creates the ES client is straightforward; the core of it is:

Settings settings = Settings.builder()
        .put("cluster.name", esClusterInfo.getClusterName())
        .put("client.transport.sniff", true) // sniff the cluster state and add the other ES nodes' IPs to the local client's node list
        .build();

My troubleshooting steps

Here is a record of how I tracked it down:

  • My first reaction was that the cluster had a problem, but after checking the nodes, they were all fine.
  • Next I checked whether the cluster.name in the settings was correct; it was.
  • Googling showed that many people hit this because they used the wrong port for the ES cluster, i.e. 9200 instead of 9300, but my configuration was fine there too.
  • Then I started to suspect the client.transport.sniff setting, because what it does is:

It makes the client sniff the state of the whole cluster and add the IP addresses of the other machines in the cluster to the client. The benefit is that you normally don't have to list every node's IP in the client by hand; it adds them automatically and also discovers machines that join the cluster later.

But this setting has one gotcha:

When the ES server publishes an internal IP (publish_address) but is reached through an external IP (bound_addresses), do not set client.transport.sniff to true. If you don't set it, it defaults to false (sniffing disabled). The reason is that sniffing uses the internal IPs for communication, so the client can no longer reach the ES servers. In that case you have to add the other nodes' addresses to the client explicitly via addTransportAddress, roughly as sketched below.
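
A minimal sketch of that approach, assuming the Elasticsearch 5.x transport client API (class names differ between ES major versions; the host and port below are placeholders):

// Explicitly register nodes instead of sniffing; host/port are placeholders.
import java.net.InetAddress;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

Settings settings = Settings.builder()
        .put("cluster.name", "my-cluster")   // must match the cluster's name
        .build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(
                InetAddress.getByName("10.0.0.1"), 9300)); // transport port 9300, not the HTTP port 9200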

But my environment runs entirely on the internal network, so whether or not client.transport.sniff is set makes no difference here.

After googling for quite a while longer with no luck, I took a tea break, scrolled my phone for a bit, and suddenly remembered that a version mismatch between the ES client and the ES cluster can also cause this. A quick search for the ES version mismatch keywords confirmed it: the versions were indeed the culprit.

Version mismatch between the ES client and the ES cluster

In my case the ES cluster's version was lower than the ES client's version, i.e. I was using a newer client to talk to an older cluster, which is what produced the org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available error.

So keep an eye on ES version compatibility.

Configuring an nginx certificate with Let's Encrypt

Automatically enable HTTPS on your website with EFF’s Certbot, deploying Let’s Encrypt certificates

Main reference: https://certbot.eff.org/lets-encrypt/centosrhel7-nginx

The process is straightforward and boils down to the commands below; the last line is a crontab entry that attempts renewal on the first day of every month and reloads nginx afterwards:

sudo yum install nginx
sudo certbot --nginx
sudo certbot renew --dry-run
0 0 1 * * certbot renew --post-hook "/usr/sbin/nginx -s reload"

Blurry text when a Mac drives an external 4K display at 1080p

When a MacBook drives an external 4K display over an HDMI cable in scaled mode with 1080p selected, text on the external display can look blurry. There are a few common ways to troubleshoot this:

Slow Tomcat startup because of SecureRandom

Today, starting a Tomcat application on a new machine was extremely slow. The log showed:

Jan 09, 2018 8:44:35 PM org.apache.catalina.util.SessionIdGenerator createSecureRandom
INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [239,939] milliseconds.

Initializing SecureRandom here took 239,939 milliseconds, which I had never run into before. A quick search turned up the explanation on the official wiki: https://wiki.apache.org/tomcat/HowTo/FasterStartUp#Entropy_Source

Entropy Source
Tomcat 7+ heavily relies on SecureRandom class to provide random values for its session ids and in other places. Depending on your JRE it can cause delays during startup if entropy source that is used to initialize SecureRandom is short of entropy. You will see warning in the logs when this happens, e.g.:
org.apache.catalina.util.SessionIdGenerator createSecureRandom
INFO: Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [5172] milliseconds.
There is a way to configure JRE to use a non-blocking entropy source by setting the following system property: -Djava.security.egd=file:/dev/./urandom
Note the “/./“ characters in the value. They are needed to work around known Oracle JRE bug #6202721. See also JDK Enhancement Proposal 123. It is known that implementation of SecureRandom was improved in Java 8 onwards.
Also note that replacing the blocking entropy source (/dev/random) with a non-blocking one actually reduces security because you are getting less-random data. If you have a problem generating entropy on your server (which is common), consider looking into entropy-generating hardware products such as “EntropyKey”.

Fix

Following the official wiki, the fix is to add the following JVM option in Tomcat's bin/setenv.sh script (for example, appended to CATALINA_OPTS):

-Djava.security.egd=file:/dev/./urandom

The Tomcat wiki does note that swapping the blocking entropy source for the non-blocking /dev/urandom reduces security because the data is "less random". Honestly I didn't fully follow that argument, but the article Myths about /dev/urandom argues that using /dev/urandom is fine, so I'll go with it for now :-)
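
If you want to confirm that entropy starvation really is the bottleneck on a given box, a quick probe like the following (my own sketch, not from the wiki) can be run with and without the flag to compare seeding time:

import java.security.SecureRandom;

public class SeedTimer {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        SecureRandom sr = SecureRandom.getInstance("SHA1PRNG"); // same algorithm Tomcat uses for session IDs
        sr.nextBytes(new byte[16]);                             // forces the instance to actually seed itself
        System.out.println("Seeding took " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}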

About the entropy source

The article "JVM上的随机数与熵池策略" (random numbers and entropy-pool strategy on the JVM) explains this well and is worth a read.


Spring RestTemplate: parsing a gzip response

Suppose the endpoint http://10.89.xx.xx:8080/_/metrics returns its data gzip-compressed, with response headers like these:

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Encoding: gzip
Content-Type: text/plain;charset=UTF-8
Transfer-Encoding: chunked
Date: Thu, 28 Dec 2017 08:13:53 GMT

If we want Spring's RestTemplate to hand the response back directly as a String rather than as a raw byte[], we can back it with Apache HttpClient, which decompresses gzip transparently:

import org.apache.http.impl.client.HttpClientBuilder;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class GzipRestTemplateDemo {
    public static void main(String[] args) {
        // Back RestTemplate with Apache HttpClient, which handles
        // Content-Encoding: gzip transparently and decompresses the body.
        HttpComponentsClientHttpRequestFactory clientHttpRequestFactory =
                new HttpComponentsClientHttpRequestFactory(HttpClientBuilder.create().build());
        RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
        ResponseEntity<String> responseEntity =
                restTemplate.getForEntity("http://10.89.xx.xxx:8080/_/metrics", String.class);
        HttpStatus statusCode = responseEntity.getStatusCode();
        System.out.println(responseEntity.getBody()); // the body is already decompressed plain text
    }
}
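
For comparison, if you do end up with the raw gzip bytes (for example when using a request factory that does not decompress for you), you have to gunzip them yourself. A hand-rolled sketch of that, assuming a UTF-8 text body:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipBodies {
    // Decompress a gzip-encoded HTTP body into a UTF-8 String.
    static String gunzip(byte[] compressed) throws IOException {
        try (InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        }
    }
}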

Java RSA asymmetric encryption

In a recent project, an agent and a master had to communicate remotely with mutual authentication and encrypted transport, so I built a netty handler around RSA, an asymmetric encryption algorithm.

Implementation approach

A brief outline of the approach:

  • First, generate a key pair (one public key and one private key).
  • Every master uses the private key for both encryption and decryption.
  • Every agent uses the public key for both encryption and decryption.
  • Messages the master sends to agents are encrypted with the private key; messages the master receives from agents are decrypted with the private key.
  • Messages an agent sends to the master are encrypted with the public key; messages an agent receives from the master are decrypted with the public key.
  • On either side, any received message that fails to decrypt is discarded.

This effectively gives us authentication between agent and master as well as encrypted message transport. Quite a fun little scheme.

Generating the key pair

Generating with Java code

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.PublicKey;

private static final String RSA = "RSA";

public static KeyPair buildKeyPair() throws NoSuchAlgorithmException {
    final int keySize = 2048; // key length in bits
    KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance(RSA);
    keyPairGenerator.initialize(keySize);
    return keyPairGenerator.genKeyPair();
}

// Usage:
KeyPair keyPair = buildKeyPair();
PublicKey pubKey = keyPair.getPublic();
PrivateKey privateKey = keyPair.getPrivate();
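
If you want to persist the generated pair in the same DER files used below (public_key.der / private_key.der), Java's default encodings already match: getEncoded() yields X.509 DER for the public key and PKCS#8 DER for the private key. A minimal sketch:

import java.nio.file.Files;
import java.nio.file.Paths;

// X.509 (SubjectPublicKeyInfo) DER for the public key, PKCS#8 DER for the private key,
// which is exactly what the RSAUtil class further down reads back in.
Files.write(Paths.get("public_key.der"), pubKey.getEncoded());
Files.write(Paths.get("private_key.der"), privateKey.getEncoded());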

Generating with shell commands

In my project I generated the public key and private key as separate files and shipped them as project configuration, so I used shell commands instead:

ssh-keygen -t rsa -b 2048 -C "any string"
openssl pkcs8 -topk8 -inform PEM -outform DER -in id_rsa -out private_key.der -nocrypt
openssl rsa -in id_rsa -pubout -outform DER -out public_key.der

What the commands do:

  • The first command generates a 2048-bit RSA key pair. You can change 2048 (the minimum is 1024); longer keys are stronger but slower to encrypt and decrypt. You will be prompted for a file path; I kept the default in the current directory, which produces two files: id_rsa and id_rsa.pub.
  • The second command produces a private key file Java can use: it converts id_rsa into PKCS#8 format so the Java program can read it.
  • The third command produces a public key file Java can use: it exports the public key part in DER format so the Java program can read it.

After running these three commands, the current directory contains four files:

id_rsa          id_rsa.pub      private_key.der public_key.der

The program only uses the last two; put them under the project's resources directory.

RSA implementation

Here is the Java implementation:

package com.qunar.qcloudagent.common.encrypt;

import com.google.common.base.Throwables;

import javax.crypto.BadPaddingException;
import javax.crypto.Cipher;
import javax.crypto.IllegalBlockSizeException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;

public class RSAUtil {

    private static final String RSA = "RSA";
    private static final PrivateKey PRIVATE_KEY = readPrivateKey(
            RSAUtil.class.getResource("/").getPath().concat("private_key.der"));
    private static Cipher cipher;
    private final static PublicKey PUBLIC_KEY = readPublicKey(
            RSAUtil.class.getResource("/").getPath().concat("public_key.der"));

    static {
        try {
            cipher = Cipher.getInstance(RSA);
        } catch (Exception e) {
            throw Throwables.propagate(e);
        }
    }

    public static byte[] encryptWithPrivateKey(byte[] message) throws Exception {
        cipher.init(Cipher.ENCRYPT_MODE, PRIVATE_KEY);
        return blockCipher(message, Cipher.ENCRYPT_MODE);
    }

    public static byte[] decryptWithPrivateKey(byte[] encrypted) throws Exception {
        cipher.init(Cipher.DECRYPT_MODE, PRIVATE_KEY);
        return blockCipher(encrypted, Cipher.DECRYPT_MODE);
    }

    public static byte[] encryptWithPublicKey(byte[] message) throws Exception {
        cipher.init(Cipher.ENCRYPT_MODE, PUBLIC_KEY);
        return blockCipher(message, Cipher.ENCRYPT_MODE);
    }

    public static byte[] decryptWithPublicKey(byte[] encrypted) throws Exception {
        cipher.init(Cipher.DECRYPT_MODE, PUBLIC_KEY);
        return blockCipher(encrypted, Cipher.DECRYPT_MODE);
    }

    private static PublicKey readPublicKey(String filename) {
        try {
            byte[] keyBytes = Files.readAllBytes(Paths.get(filename));
            X509EncodedKeySpec spec = new X509EncodedKeySpec(keyBytes);
            KeyFactory kf = KeyFactory.getInstance(RSA);
            return kf.generatePublic(spec);
        } catch (Exception e) {
            throw Throwables.propagate(e);
        }
    }

    private static PrivateKey readPrivateKey(String filename) {
        try {
            byte[] keyBytes = Files.readAllBytes(Paths.get(filename));
            PKCS8EncodedKeySpec spec = new PKCS8EncodedKeySpec(keyBytes);
            KeyFactory kf = KeyFactory.getInstance(RSA);
            return kf.generatePrivate(spec);
        } catch (Exception e) {
            throw Throwables.propagate(e);
        }
    }

    /**
     * For why this is done this way, see: {@see http://coding.westreicher.org/?p=23}
     */
    private static byte[] blockCipher(byte[] bytes, int mode) throws IllegalBlockSizeException, BadPaddingException {
        // initialize 2 buffers:
        // scrambled will hold intermediate results
        byte[] scrambled = new byte[0];
        // toReturn will hold the total result
        byte[] toReturn = new byte[0];
        // if we encrypt we use 100 byte long blocks. Decryption requires 128 byte long blocks (because of RSA)
        // NOTE: these sizes assume a 1024-bit key; for a 2048-bit key they would be at most 245 and exactly 256.
        int length = (mode == Cipher.ENCRYPT_MODE) ? 100 : 128;
        // another buffer. this one will hold the bytes that have to be modified in this step
        byte[] buffer = new byte[length];
        for (int i = 0; i < bytes.length; i++) {
            // if we filled our buffer array we have our block ready for de- or encryption
            if ((i > 0) && (i % length == 0)) {
                // execute the operation
                scrambled = cipher.doFinal(buffer);
                // add the result to our total result.
                toReturn = append(toReturn, scrambled);
                // here we calculate the length of the next buffer required
                int newlength = length;
                // if newlength would be longer than remaining bytes in the bytes array we shorten it.
                if (i + length > bytes.length) {
                    newlength = bytes.length - i;
                }
                // clean the buffer array
                buffer = new byte[newlength];
            }
            // copy byte into our buffer.
            buffer[i % length] = bytes[i];
        }
        // this step is needed if we had a trailing buffer. should only happen when encrypting.
        // example: we encrypt 110 bytes. 100 bytes per run means we "forgot" the last 10 bytes. they are in the buffer array
        scrambled = cipher.doFinal(buffer);
        // final step before we can return the modified data.
        toReturn = append(toReturn, scrambled);
        return toReturn;
    }

    private static byte[] append(byte[] toReturn, byte[] scrambled) {
        byte[] destination = new byte[toReturn.length + scrambled.length];
        System.arraycopy(toReturn, 0, destination, 0, toReturn.length);
        System.arraycopy(scrambled, 0, destination, toReturn.length, scrambled.length);
        return destination;
    }
}

One thing worth pointing out about the code above: RSA can only encrypt a limited amount of data per operation. With the default PKCS#1 padding and a 1024-bit key the plaintext limit is 117 bytes and each ciphertext block is 128 bytes (for the 2048-bit key generated above, the corresponding limits are 245 and 256 bytes, so the block sizes in blockCipher would need adjusting). Feeding in more data than the limit throws: Exception in thread "main" javax.crypto.IllegalBlockSizeException: Data must not be longer than 117 bytes. The blockCipher method exists precisely to split the data into blocks and handle this.
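
Putting it together on the netty side, the master's handlers end up looking roughly like this (my own sketch; the class names are hypothetical, and the length-framing a real pipeline needs before the decoder is omitted):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.MessageToByteEncoder;

import java.util.List;

// Master-side outbound handler: encrypt outgoing payloads with the private key.
public class RsaEncryptEncoder extends MessageToByteEncoder<byte[]> {
    @Override
    protected void encode(ChannelHandlerContext ctx, byte[] msg, ByteBuf out) throws Exception {
        out.writeBytes(RSAUtil.encryptWithPrivateKey(msg));
    }
}

// Master-side inbound handler: decrypt incoming payloads with the private key;
// anything that fails to decrypt is dropped, which is the "authentication" step.
class RsaDecryptDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        byte[] bytes = new byte[in.readableBytes()];
        in.readBytes(bytes);
        try {
            out.add(RSAUtil.decryptWithPrivateKey(bytes));
        } catch (Exception e) {
            // failed to decrypt: not a trusted peer, discard silently
        }
    }
}

The agent side mirrors this with encryptWithPublicKey / decryptWithPublicKey.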


Implementing an LRU cache by hand

With some spare time today I wrote an LRU cache as a finger exercise. To be clear up front though, the best option is simply to use Guava:

import com.google.common.cache.CacheBuilder;

import java.util.concurrent.ConcurrentMap;

public class GuavaLRUCache {
    public static void main(String[] args) {
        ConcurrentMap<String, String> cache =
                CacheBuilder.newBuilder()
                        .maximumSize(2L)
                        .<String, String>build().asMap();
        cache.put("a", "b");
        cache.put("b", "c");
        System.out.println(cache);
        cache.put("a", "d");
        System.out.println(cache);
        cache.put("c", "d");
        System.out.println(cache);
    }
}

Rolling your own is decidedly clunkier:

import com.google.common.collect.Lists;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class LRUCache<K, V> {

    private final int limit;
    private final Map<K, V> cache;
    private final Deque<K> deque;
    private final ReentrantLock reentrantLock = new ReentrantLock();
    private static final int DEFAULT_CAPACITY = 16;
    private static final float DEFAULT_LOAD_FACTOR = 0.75f;

    public LRUCache(int limit) {
        this.limit = limit;
        deque = new ArrayDeque<K>(limit);
        cache = new ConcurrentHashMap<K, V>(DEFAULT_CAPACITY, DEFAULT_LOAD_FACTOR);
    }

    public LRUCache(int limit, int capacity, float loadFactor) {
        this.limit = limit;
        deque = new ArrayDeque<K>(limit);
        cache = new ConcurrentHashMap<K, V>(capacity, loadFactor);
    }

    public void put(K key, V value) {
        reentrantLock.lock();
        try {
            // insert or update the value, then move the key to the front of the deque (most recently used)
            cache.put(key, value);
            deque.removeFirstOccurrence(key);
            deque.addFirst(key);
            // evict the least recently used entry once we exceed the limit
            if (size() > limit) {
                K key1 = deque.removeLast();
                if (key1 != null) {
                    cache.remove(key1);
                }
            }
        } finally {
            reentrantLock.unlock();
        }
    }

    public V get(K key) {
        V v = cache.get(key);
        if (v != null) {
            reentrantLock.lock();
            try {
                // a hit also counts as "recently used", so move the key to the front
                deque.removeFirstOccurrence(key);
                deque.addFirst(key);
            } finally {
                reentrantLock.unlock();
            }
        }
        return v;
    }

    public int size() {
        return cache.size();
    }

    @Override
    public String toString() {
        List<String> values = Lists.newArrayListWithExpectedSize(limit);
        reentrantLock.lock();
        try {
            for (K k : deque) {
                V v = cache.get(k);
                values.add(k.toString() + " -> " + v.toString());
            }
            System.out.println(deque);
        } finally {
            reentrantLock.unlock();
        }
        return values.toString();
    }

    // other APIs...

    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<String, String>(3);
        cache.put("key1", "value1");
        cache.put("key2", "value2");
        cache.put("key3", "value3");
        System.out.println(cache);
        cache.put("key4", "value4");
        System.out.println(cache);
        cache.put("key2", "value2");
        System.out.println(cache);
        cache.put("key4", "value4");
        System.out.println(cache);
        cache.put("key1", "value1");
        System.out.println(cache);
    }
}
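
For completeness, the JDK itself can do most of this: a LinkedHashMap constructed in access order with removeEldestEntry overridden gives a compact (non-thread-safe) LRU cache. A minimal sketch:

import java.util.LinkedHashMap;
import java.util.Map;

public class LinkedHashMapLRU<K, V> extends LinkedHashMap<K, V> {
    private final int limit;

    public LinkedHashMapLRU(int limit) {
        // accessOrder = true: iteration order goes from least- to most-recently accessed
        super(16, 0.75f, true);
        this.limit = limit;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once the size exceeds the limit
        return size() > limit;
    }
}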

