Aside 1:
Over the past few weeks, Cloud Insight has released SDKs for three languages (Python, Node, Ruby) in quick succession, and the Cloud Insight Agent has reached a major milestone with the first release of Windows monitoring, so I can finally take a breather and write something~
Aside 2:
I happened to come across a blog post that walked through three microframeworks, Flask (Python), Sinatra (Ruby), and Martini (Golang), in detail and showed how to deploy each of them with Docker. The one thing it lacked was a performance comparison. After some digging I found another article that benchmarks today's mainstream web frameworks, and combining the two ideas led to this post~
Back to the topic. I chose Flask (Python), Sinatra (Ruby), Martini (Golang), and Express (Node) mainly because they are the frameworks I have experience with, and I am partial to microframeworks of this kind. Below we will look at how each framework performs under identical conditions.
Source code for this post: https://github.com/startover/fibonacci-webapp-benchmark
Install Docker (installation docs): https://docs.docker.com/engine/installation/
The load tests below use ApacheBench (ab), which is provided by the Apache utilities package:
CentOS/Redhat:
yum install httpd-tools
Ubuntu/Debian:
apt-get update && apt-get install apache2-utils
$ git clone git@github.com:startover/fibonacci-webapp-benchmark.git
$ cd fibonacci-webapp-benchmark
$ ./docker-compose up -d
Recreating fibonacciwebappbenchmark_python_1...
Recreating fibonacciwebappbenchmark_go_1...
Recreating fibonacciwebappbenchmark_ruby_1...
Recreating fibonacciwebappbenchmark_node_1...
$ docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                    NAMES
14e0d2388dca   fibonacciwebappbenchmark_node     "npm start"              6 seconds ago    Up 5 seconds    0.0.0.0:8080->8080/tcp   fibonacciwebappbenchmark_node_1
8b1bdd070f83   fibonacciwebappbenchmark_ruby     "bundle exec ruby sin"   23 seconds ago   Up 22 seconds   0.0.0.0:4567->4567/tcp   fibonacciwebappbenchmark_ruby_1
333360123b56   fibonacciwebappbenchmark_go       "go run martini.go"      34 seconds ago   Up 32 seconds   0.0.0.0:3000->3000/tcp   fibonacciwebappbenchmark_go_1
df50829f511b   fibonacciwebappbenchmark_python   "python app.py"          42 seconds ago   Up 41 seconds   0.0.0.0:5000->5000/tcp   fibonacciwebappbenchmark_python_1
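With all four containers up, the same ab command can be pointed at each service (the Flask run is shown in the next section). The small script below is not part of the repository; it is only a hypothetical convenience sketch, assuming ab is on the PATH and that every service exposes the Fibonacci endpoint at the /10 path on the ports listed by docker ps above.

# benchmark_all.py (hypothetical helper, not part of the repository):
# run the same ab command against each container and extract the
# "Requests per second" line from its output.
import subprocess

SERVICES = {
    "Flask (Python)":  "http://localhost:5000/10",
    "Martini (Go)":    "http://localhost:3000/10",
    "Sinatra (Ruby)":  "http://localhost:4567/10",
    "Express (Node)":  "http://localhost:8080/10",
}

for name, url in SERVICES.items():
    # Same parameters as the runs below: 100000 requests, concurrency 100.
    result = subprocess.run(
        ["ab", "-n", "100000", "-c", "100", url],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if line.startswith("Requests per second"):
            print(f"{name:16s} {line.strip()}")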
Python + Flask
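Before looking at the numbers, here is a minimal sketch of what the Flask service (the app.py started with "python app.py" on port 5000 in the container above) might look like. It assumes a naive recursive Fibonacci route; the actual code in the repository may differ.

# app.py (hypothetical sketch): a minimal guess at the Flask service;
# the real implementation in the repository may differ.
from flask import Flask

app = Flask(__name__)


def fib(n):
    # Naive recursive Fibonacci: deliberately CPU-bound so the benchmark
    # exercises the interpreter and framework under load.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


@app.route('/<int:n>')
def index(n):
    # http://localhost:5000/10 therefore returns fib(10) = 55 as plain text.
    return str(fib(n))


if __name__ == '__main__':
    # Bind to 0.0.0.0 so the 0.0.0.0:5000->5000/tcp port mapping works.
    app.run(host='0.0.0.0', port=5000)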
$ ab -n 100000 -c 100 http://localhost:5000/10
...
Concurrency Level:      100
Time taken for tests:   168.322 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      18400000 bytes
HTML transferred:       2900000 bytes
Requests per second:    594.10 [#/sec] (mean)
Time per request:       168.322 [ms] (mean)
Time per request:       1.683 [ms] (mean, across all concurrent requests)
Transfer rate:          106.75 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0      12
Processing:    21  168  15.3    166     295
Waiting:       13  167  15.1    165     286
Total:         26  168  15.3    166     295
...