Merge remote-tracking branch 'MISP/master'

pull/176/head
Koen Van Impe 2018-03-30 15:07:43 +02:00
commit 326e701260
25 changed files with 2444 additions and 20 deletions

View File

@@ -11,7 +11,6 @@ python:
- "3.5-dev"
- "3.6"
- "3.6-dev"
- "nightly"
install:
- pip install -U nose codecov pytest
@@ -19,13 +18,13 @@ install:
- pip install .
script:
-  - coverage run -m --parallel-mode --source=misp_modules misp_modules.__init__ &
+  - coverage run -m --parallel-mode --source=misp_modules misp_modules.__init__ -l 127.0.0.1 &
- pid=$!
- sleep 5
- nosetests --with-coverage --cover-package=misp_modules
- kill -s INT $pid
- pushd ~/
-  - coverage run -m --parallel-mode --source=misp_modules misp_modules.__init__ -s &
+  - coverage run -m --parallel-mode --source=misp_modules misp_modules.__init__ -s -l 127.0.0.1 &
- pid=$!
- popd
- sleep 5

View File

@@ -18,39 +18,48 @@ For more information: [Extending MISP with Python modules](https://www.circl.lu/
### Expansion modules
* [ASN History](misp_modules/modules/expansion/asn_history.py) - a hover and expansion module to expand an AS number with the ASN description and its history.
* [CIRCL Passive SSL](misp_modules/modules/expansion/circl_passivessl.py) - a hover and expansion module to expand IP addresses with the X.509 certificate seen.
* [CIRCL Passive DNS](misp_modules/modules/expansion/circl_passivedns.py) - a hover and expansion module to expand hostname and IP addresses with passive DNS information.
* [CIRCL Passive SSL](misp_modules/modules/expansion/circl_passivessl.py) - a hover and expansion module to expand IP addresses with the X.509 certificate seen.
* [countrycode](misp_modules/modules/expansion/countrycode.py) - a hover module to tell you what country a URL belongs to.
* [CrowdStrike Falcon](misp_modules/modules/expansion/crowdstrike_falcon.py) - an expansion module to expand using CrowdStrike Falcon Intel Indicator API.
* [CVE](misp_modules/modules/expansion/cve.py) - a hover module to give more information about a vulnerability (CVE).
* [DNS](misp_modules/modules/expansion/dns.py) - a simple module to resolve MISP attributes like hostname and domain into IP address attributes.
* [DomainTools](misp_modules/modules/expansion/domaintools.py) - a hover and expansion module to get information from [DomainTools](http://www.domaintools.com/) whois.
* [EUPI](misp_modules/modules/expansion/eupi.py) - a hover and expansion module to get information about a URL from the [Phishing Initiative project](https://phishing-initiative.eu/?lang=en).
* [Farsight DNSDB Passive DNS](misp_modules/modules/expansion/farsight_passivedns.py) - a hover and expansion module to expand hostname and IP addresses with passive DNS information.
* [GeoIP](misp_modules/modules/expansion/geoip_country.py) - a hover and expansion module to get GeoIP information from GeoLite/MaxMind.
* [IPASN](misp_modules/modules/expansion/ipasn.py) - a hover and expansion module to get the BGP ASN of an IP address.
* [iprep](misp_modules/modules/expansion/iprep.py) - an expansion module to get IP reputation from packetmail.net.
* [OTX](misp_modules/modules/expansion/otx.py) - an expansion module for [OTX](https://otx.alienvault.com/).
* [passivetotal](misp_modules/modules/expansion/passivetotal.py) - a [passivetotal](https://www.passivetotal.org/) module that queries a number of different PassiveTotal datasets.
* [rbl](misp_modules/modules/expansion/rbl.py) - a module to get RBL (Real-time Blackhole List) values from an attribute.
* [shodan](misp_modules/modules/expansion/shodan.py) - a minimal [shodan](https://www.shodan.io/) expansion module.
* [sourcecache](misp_modules/modules/expansion/sourcecache.py) - a module to cache a specific link from a MISP instance.
* [ThreatCrowd](misp_modules/modules/expansion/threatcrowd.py) - an expansion module for [ThreatCrowd](https://www.threatcrowd.org/).
* [OTX](misp_modules/modules/expansion/otx.py) - an expansion module for [OTX](https://otx.alienvault.com/).
* [threatminer](misp_modules/modules/expansion/threatminer.py) - an expansion module to expand from [ThreatMiner](https://www.threatminer.org/).
* [countrycode](misp_modules/modules/expansion/countrycode.py) - a hover module to tell you what country a URL belongs to.
* [virustotal](misp_modules/modules/expansion/virustotal.py) - an expansion module to pull known resolutions and malware samples related to an IP/domain from VirusTotal (this module requires a VirusTotal private API key).
* [wikidata](misp_modules/modules/expansion/wiki.py) - a [wikidata](https://www.wikidata.org) expansion module.
* [xforce](misp_modules/modules/expansion/xforceexchange.py) - an IBM X-Force Exchange expansion module.
* [YARA syntax validator](misp_modules/modules/expansion/yara_syntax_validator.py) - a hover module to check the syntax of YARA rules.
### Export modules
* [CEF](misp_modules/modules/export_mod/cef_export.py) module to export Common Event Format (CEF).
* [GoAML export](misp_modules/modules/export_mod/goamlexport.py) module to export in GoAML format.
* [Lite Export](misp_modules/modules/export_mod/liteexport.py) module to export a lite event.
* [Simple PDF export](misp_modules/modules/export_mod/pdfexport.py) module to export in PDF (required: asciidoctor-pdf).
* [ThreatConnect](misp_modules/modules/export_mod/threat_connect_export.py) module to export in ThreatConnect CSV format.
* [ThreatStream](misp_modules/modules/export_mod/threatStream_misp_export.py) module to export in ThreatStream format.
### Import modules
* [CSV import](misp_modules/modules/import_mod/csvimport.py) Customizable CSV import module.
* [Cuckoo JSON](misp_modules/modules/import_mod/cuckooimport.py) Cuckoo JSON import.
* [Email Import](misp_modules/modules/import_mod/email_import.py) Email import module for MISP to import basic metadata.
* [OCR](misp_modules/modules/import_mod/ocr.py) Optical Character Recognition (OCR) module for MISP to import attributes from images, scans or faxes.
* [OpenIOC](misp_modules/modules/import_mod/openiocimport.py) OpenIOC import based on PyMISP library.
* [stiximport](misp_modules/modules/import_mod/stiximport.py) - An import module to process STIX XML/JSON.
* [Email Import](misp_modules/modules/import_mod/email_import.py) Email import module for MISP to import basic metadata.
* [ThreatAnalyzer](misp_modules/modules/import_mod/threatanalyzer_import.py) - An import module to process ThreatAnalyzer archive.zip/analysis.json sandbox exports.
* [VMRay](misp_modules/modules/import_mod/vmray_import.py) - An import module to process VMRay export.
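
Whatever their type, all modules expose the same small Python interface. A minimal sketch, modelled on the modules listed above (the attribute names and values here are illustrative, not a module that ships with this repository):

```
import json

misperrors = {'error': 'Error'}
mispattributes = {'input': ['hostname'], 'output': ['text']}
moduleinfo = {'version': '0.1', 'author': 'you',
              'description': 'Minimal example module', 'module-type': ['hover']}
moduleconfig = []

def handler(q=False):
    # MISP passes a JSON string in and expects a dict of results back
    if q is False:
        return False
    request = json.loads(q)
    if not request.get('hostname'):
        misperrors['error'] = 'Hostname attribute missing'
        return misperrors
    return {'results': [{'types': mispattributes['output'],
                         'values': 'seen: {}'.format(request['hostname'])}]}

def introspection():
    return mispattributes

def version():
    moduleinfo['config'] = moduleconfig
    return moduleinfo
```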
## How to install and start MISP modules?

View File

@@ -1,7 +1,7 @@
stix
cybox
tornado
-dnspython3
+dnspython
requests
urlarchiver
passivetotal
@@ -20,3 +20,5 @@ SPARQLWrapper
domaintools_api
pygeoip
bs4
+oauth2
+yara

View File

@@ -1,6 +1,6 @@
from . import _vmray
__all__ = ['vmray_submit', 'asn_history', 'circl_passivedns', 'circl_passivessl',
-           'countrycode', 'cve', 'dns', 'domaintools', 'eupi', 'ipasn', 'passivetotal', 'sourcecache',
-           'virustotal', 'whois', 'shodan', 'reversedns', 'geoip_country', 'wiki', 'iprep', 'threatminer' ,'otx',
-           'threatcrowd','vulndb']
+           'countrycode', 'cve', 'dns', 'domaintools', 'eupi', 'farsight_passivedns', 'ipasn', 'passivetotal', 'sourcecache',
+           'virustotal', 'whois', 'shodan', 'reversedns', 'geoip_country', 'wiki', 'iprep', 'threatminer', 'otx',
+           'threatcrowd', 'vulndb', 'crowdstrike_falcon','yara_syntax_validator']

View File

@@ -0,0 +1,27 @@
Copyright (c) 2013 by Farsight Security, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Copyright (c) 2010-2012 by Internet Systems Consortium, Inc. ("ISC")
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

View File

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@@ -0,0 +1,162 @@
dnsdb-query
===========
These clients are reference implementations of the [DNSDB HTTP API](https://api.dnsdb.info/). Output is
compliant with the [Passive DNS Common Output Format](http://tools.ietf.org/html/draft-dulaunoy-kaplan-passive-dns-cof-01).
Please see https://www.dnsdb.info/ for more information.
Requirements
------------
* Linux, BSD, OS X
* Curl
* Python 2.7.x
* Farsight DNSDB API key
Installation
------------
1. Create a directory
```
mkdir ~/dnsdb
```
1. Download the software
```
curl https://codeload.github.com/dnsdb/dnsdb-query/tar.gz/debian/0.2-1 -o ~/dnsdb/0.2-1.tar.gz
```
1. Extract the software
```
tar xzvf ~/dnsdb/0.2-1.tar.gz -C ~/dnsdb/ --strip-components=1
```
1. Create an API key file
```
nano ~/.dnsdb-query.conf
```
1. Cut and paste the following and replace '\<apikey\>' with your API Key
```
APIKEY="<apikey>"
```
1. Test the Python client
```
$ python dnsdb/dnsdb_query.py -i 104.244.13.104
```
```
...
www.farsightsecurity.com. IN A 104.244.13.104
```
dnsdb_query.py
--------------
dnsdb_query.py is a Python client for the DNSDB HTTP API. It is similar
to the dnsdb-query shell script but supports some additional features
like sorting and setting the result limit parameter. It is also embeddable
as a Python module.
```
Usage: dnsdb_query.py [options]
Options:
-h, --help show this help message and exit
-c CONFIG, --config=CONFIG
config file
-r RRSET, --rrset=RRSET
rrset <ONAME>[/<RRTYPE>[/BAILIWICK]]
-n RDATA_NAME, --rdataname=RDATA_NAME
rdata name <NAME>[/<RRTYPE>]
-i RDATA_IP, --rdataip=RDATA_IP
rdata ip <IPADDRESS|IPRANGE|IPNETWORK>
-s SORT, --sort=SORT sort key
-R, --reverse reverse sort
-j, --json output in JSON format
-l LIMIT, --limit=LIMIT
limit number of results
--before=BEFORE only output results seen before this time
--after=AFTER only output results seen after this time
Time formats are: "%Y-%m-%d", "%Y-%m-%d %H:%M:%S", "%d" (UNIX timestamp),
"-%d" (Relative time in seconds), BIND format relative timestamp (e.g. 1w1h,
(w)eek, (d)ay, (h)our, (m)inute, (s)econd)
```
Or, from Python:
```
from dnsdb_query import DnsdbClient
server='https://api.dnsdb.info'
apikey='d41d8cd98f00b204e9800998ecf8427e'
client = DnsdbClient(server,apikey)
for rrset in client.query_rrset('www.dnsdb.info'):
# rrset is a decoded JSON blob
print repr(rrset)
```
Other configuration options that may be set:
`DNSDB_SERVER`
The base URL of the DNSDB HTTP API, minus the /lookup component. Defaults to
`https://api.dnsdb.info`.
`HTTP_PROXY`
The URL of the HTTP proxy that you wish to use.
`HTTPS_PROXY`
The URL of the HTTPS proxy that you wish to use.
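
For example, a sketch of setting these programmatically (the proxy URL and limit below are made-up values; `DnsdbClient` accepts them as constructor arguments):

```
from dnsdb_query import DnsdbClient

client = DnsdbClient('https://api.dnsdb.info',
                     'd41d8cd98f00b204e9800998ecf8427e',
                     limit=100,
                     https_proxy='http://proxy.example.com:3128')
for rdata in client.query_rdata_name('farsightsecurity.com'):
    print repr(rdata)
```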
dnsdb-query
-----------
dnsdb-query is a simple curl-based wrapper for the DNSDB HTTP API.
The script sources the config file `/etc/dnsdb-query.conf` as a shell fragment.
If the config file is not present in `/etc`, the file `$HOME/.dnsdb-query.conf`
is sourced instead.
The config file MUST set the value of the APIKEY shell variable to the API
key provided to you by Farsight Security.
For example, if your API key is d41d8cd98f00b204e9800998ecf8427e, place the
following line in `/etc/dnsdb-query.conf` or `$HOME/.dnsdb-query.conf`:
```
APIKEY="d41d8cd98f00b204e9800998ecf8427e"
```
Other shell variables that may be set via the config file or command line
are:
`DNSDB_SERVER`
The base URL of the DNSDB HTTP API, minus the /lookup component. Defaults to
`https://api.dnsdb.info`.
`DNSDB_FORMAT`
The result format to use, either text or json. Defaults to text.
`HTTP_PROXY`
The URL of the HTTP proxy that you wish to use.
`HTTPS_PROXY`
The URL of the HTTPS proxy that you wish to use.
dnsdb-query supports the following usages:
```
Usage: dnsdb-query rrset <ONAME>[/<RRTYPE>[/<BAILIWICK>]]
Usage: dnsdb-query rdata ip <IPADDRESS>
Usage: dnsdb-query rdata name <NAME>[/<RRTYPE>]
Usage: dnsdb-query rdata raw <HEX>[/<RRTYPE>]
```
If your rrname, bailiwick or rdata contains the `/` character you
will need to escape it to `%2F` on the command line, e.g.:
`./dnsdb_query -r 1.0%2F1.0.168.192.in-addr.arpa`
retrieves the rrsets for `1.0/1.0.168.192.in-addr.arpa`.

View File

@@ -0,0 +1,323 @@
#!/usr/bin/env python
#
# Note: This file is NOT the official one from dnsdb, as it has a python3 cherry-picked pull-request applied for python3 compatibility
# See https://github.com/dnsdb/dnsdb-query/pull/30
#
# Copyright (c) 2013 by Farsight Security, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import calendar
import errno
import locale
import optparse
import os
import re
import sys
import time
import json
from io import StringIO
try:
from urllib2 import build_opener, Request, ProxyHandler, HTTPError, URLError
from urllib import quote as urllib_quote, urlencode
except ImportError:
from urllib.request import build_opener, Request, ProxyHandler, HTTPError, URLError
from urllib.parse import quote as urllib_quote, urlencode
DEFAULT_CONFIG_FILES = list(filter(os.path.isfile, ('/etc/dnsdb-query.conf', os.path.expanduser('~/.dnsdb-query.conf'))))  # list() so the emptiness check in parse_config works on Python 3
DEFAULT_DNSDB_SERVER = 'https://api.dnsdb.info'
DEFAULT_HTTP_PROXY = ''
DEFAULT_HTTPS_PROXY = ''
cfg = None
options = None
locale.setlocale(locale.LC_ALL, '')
class QueryError(Exception):
pass
class DnsdbClient(object):
def __init__(self, server, apikey, limit=None, http_proxy=None, https_proxy=None):
self.server = server
self.apikey = apikey
self.limit = limit
self.http_proxy = http_proxy
self.https_proxy = https_proxy
def query_rrset(self, oname, rrtype=None, bailiwick=None, before=None, after=None):
if bailiwick:
if not rrtype:
rrtype = 'ANY'
path = 'rrset/name/%s/%s/%s' % (quote(oname), rrtype, quote(bailiwick))
elif rrtype:
path = 'rrset/name/%s/%s' % (quote(oname), rrtype)
else:
path = 'rrset/name/%s' % quote(oname)
return self._query(path, before, after)
def query_rdata_name(self, rdata_name, rrtype=None, before=None, after=None):
if rrtype:
path = 'rdata/name/%s/%s' % (quote(rdata_name), rrtype)
else:
path = 'rdata/name/%s' % quote(rdata_name)
return self._query(path, before, after)
def query_rdata_ip(self, rdata_ip, before=None, after=None):
path = 'rdata/ip/%s' % rdata_ip.replace('/', ',')
return self._query(path, before, after)
def _query(self, path, before=None, after=None):
res = []
url = '%s/lookup/%s' % (self.server, path)
params = {}
if self.limit:
params['limit'] = self.limit
if before and after:
params['time_first_after'] = after
params['time_last_before'] = before
else:
if before:
params['time_first_before'] = before
if after:
params['time_last_after'] = after
if params:
url += '?{0}'.format(urlencode(params))
req = Request(url)
req.add_header('Accept', 'application/json')
req.add_header('X-Api-Key', self.apikey)
proxy_args = {}
if self.http_proxy:
proxy_args['http'] = self.http_proxy
if self.https_proxy:
proxy_args['https'] = self.https_proxy
proxy_handler = ProxyHandler(proxy_args)
opener = build_opener(proxy_handler)
try:
http = opener.open(req)
while True:
line = http.readline()
if not line:
break
yield json.loads(line.decode('ascii'))
except (HTTPError, URLError) as e:
            raise QueryError(str(e), sys.exc_info()[2])  # sys.exc_traceback was removed in Python 3
def quote(path):
return urllib_quote(path, safe='')
def sec_to_text(ts):
return time.strftime('%Y-%m-%d %H:%M:%S -0000', time.gmtime(ts))
def rrset_to_text(m):
s = StringIO()
try:
if 'bailiwick' in m:
s.write(';; bailiwick: %s\n' % m['bailiwick'])
if 'count' in m:
s.write(';; count: %s\n' % locale.format('%d', m['count'], True))
if 'time_first' in m:
s.write(';; first seen: %s\n' % sec_to_text(m['time_first']))
if 'time_last' in m:
s.write(';; last seen: %s\n' % sec_to_text(m['time_last']))
if 'zone_time_first' in m:
s.write(';; first seen in zone file: %s\n' % sec_to_text(m['zone_time_first']))
if 'zone_time_last' in m:
s.write(';; last seen in zone file: %s\n' % sec_to_text(m['zone_time_last']))
if 'rdata' in m:
for rdata in m['rdata']:
s.write('%s IN %s %s\n' % (m['rrname'], m['rrtype'], rdata))
s.seek(0)
return s.read()
finally:
s.close()
def rdata_to_text(m):
return '%s IN %s %s' % (m['rrname'], m['rrtype'], m['rdata'])
def parse_config(cfg_files):
config = {}
if not cfg_files:
raise IOError(errno.ENOENT, 'dnsdb_query: No config files found')
for fname in cfg_files:
for line in open(fname):
key, eq, val = line.strip().partition('=')
val = val.strip('"')
config[key] = val
return config
def time_parse(s):
try:
epoch = int(s)
return epoch
except ValueError:
pass
try:
epoch = int(calendar.timegm(time.strptime(s, '%Y-%m-%d')))
return epoch
except ValueError:
pass
try:
epoch = int(calendar.timegm(time.strptime(s, '%Y-%m-%d %H:%M:%S')))
return epoch
except ValueError:
pass
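    # last resort: BIND-style relative durations such as '1w1h', converted to
    # negative seconds (an offset into the past)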
m = re.match(r'^(?=\d)(?:(\d+)w)?(?:(\d+)d)?(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s?)?$', s, re.I)
if m:
return -1*(int(m.group(1) or 0)*604800 +
int(m.group(2) or 0)*86400+
int(m.group(3) or 0)*3600+
int(m.group(4) or 0)*60+
int(m.group(5) or 0))
raise ValueError('Invalid time: "%s"' % s)
def epipe_wrapper(func):
def f(*args, **kwargs):
try:
return func(*args, **kwargs)
except IOError as e:
if e.errno == errno.EPIPE:
sys.exit(e.errno)
raise
return f
@epipe_wrapper
def main():
global cfg
global options
parser = optparse.OptionParser(epilog='Time formats are: "%Y-%m-%d", "%Y-%m-%d %H:%M:%S", "%d" (UNIX timestamp), "-%d" (Relative time in seconds), BIND format (e.g. 1w1h, (w)eek, (d)ay, (h)our, (m)inute, (s)econd)')
parser.add_option('-c', '--config', dest='config',
help='config file', action='append')
parser.add_option('-r', '--rrset', dest='rrset', type='string',
help='rrset <ONAME>[/<RRTYPE>[/BAILIWICK]]')
parser.add_option('-n', '--rdataname', dest='rdata_name', type='string',
help='rdata name <NAME>[/<RRTYPE>]')
parser.add_option('-i', '--rdataip', dest='rdata_ip', type='string',
help='rdata ip <IPADDRESS|IPRANGE|IPNETWORK>')
parser.add_option('-t', '--rrtype', dest='rrtype', type='string',
help='rrset or rdata rrtype')
parser.add_option('-b', '--bailiwick', dest='bailiwick', type='string',
help='rrset bailiwick')
parser.add_option('-s', '--sort', dest='sort', type='string', help='sort key')
parser.add_option('-R', '--reverse', dest='reverse', action='store_true', default=False,
help='reverse sort')
parser.add_option('-j', '--json', dest='json', action='store_true', default=False,
help='output in JSON format')
parser.add_option('-l', '--limit', dest='limit', type='int', default=0,
help='limit number of results')
parser.add_option('', '--before', dest='before', type='string', help='only output results seen before this time')
parser.add_option('', '--after', dest='after', type='string', help='only output results seen after this time')
options, args = parser.parse_args()
if args:
parser.print_help()
sys.exit(1)
try:
if options.before:
options.before = time_parse(options.before)
except ValueError:
print('Could not parse before: {}'.format(options.before))
try:
if options.after:
options.after = time_parse(options.after)
except ValueError:
print('Could not parse after: {}'.format(options.after))
try:
cfg = parse_config(options.config or DEFAULT_CONFIG_FILES)
except IOError as e:
print(str(e), file=sys.stderr)
sys.exit(1)
if not 'DNSDB_SERVER' in cfg:
cfg['DNSDB_SERVER'] = DEFAULT_DNSDB_SERVER
if not 'HTTP_PROXY' in cfg:
cfg['HTTP_PROXY'] = DEFAULT_HTTP_PROXY
if not 'HTTPS_PROXY' in cfg:
cfg['HTTPS_PROXY'] = DEFAULT_HTTPS_PROXY
if not 'APIKEY' in cfg:
sys.stderr.write('dnsdb_query: APIKEY not defined in config file\n')
sys.exit(1)
client = DnsdbClient(cfg['DNSDB_SERVER'], cfg['APIKEY'],
limit=options.limit,
http_proxy=cfg['HTTP_PROXY'],
https_proxy=cfg['HTTPS_PROXY'])
if options.rrset:
if options.rrtype or options.bailiwick:
qargs = (options.rrset, options.rrtype, options.bailiwick)
else:
qargs = (options.rrset.split('/', 2))
results = client.query_rrset(*qargs, before=options.before, after=options.after)
fmt_func = rrset_to_text
elif options.rdata_name:
if options.rrtype:
qargs = (options.rdata_name, options.rrtype)
else:
qargs = (options.rdata_name.split('/', 1))
results = client.query_rdata_name(*qargs, before=options.before, after=options.after)
fmt_func = rdata_to_text
elif options.rdata_ip:
results = client.query_rdata_ip(options.rdata_ip, before=options.before, after=options.after)
fmt_func = rdata_to_text
else:
parser.print_help()
sys.exit(1)
if options.json:
fmt_func = json.dumps
try:
if options.sort:
results = list(results)
if len(results) > 0:
                if options.sort not in results[0]:
                    # dict.keys() has no .sort() on Python 3; use sorted() instead
                    sort_keys = sorted(results[0].keys())
                    sys.stderr.write('dnsdb_query: invalid sort key "%s". valid sort keys are %s\n' % (options.sort, ', '.join(sort_keys)))
sys.exit(1)
results.sort(key=lambda r: r[options.sort], reverse=options.reverse)
for res in results:
sys.stdout.write('%s\n' % fmt_func(res))
except QueryError as e:
        print(e.args[0], file=sys.stderr)  # QueryError has no .message attribute on Python 3
sys.exit(1)
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,128 @@
import json
import requests
moduleinfo = {'version': '0.1',
'author': 'Christophe Vandeplas',
'description': 'Module to query CrowdStrike Falcon.',
'module-type': ['expansion']}
moduleconfig = ['api_id', 'apikey']
misperrors = {'error': 'Error'}
misp_types_in = ['domain', 'email-attachment', 'email-dst', 'email-reply-to', 'email-src', 'email-subject',
'filename', 'hostname', 'ip', 'ip-src', 'ip-dst', 'md5', 'mutex', 'regkey', 'sha1', 'sha256', 'uri', 'url',
'user-agent', 'whois-registrant-email', 'x509-fingerprint-md5']
mapping_out = { # mapping between the MISP attributes types and the compatible CrowdStrike indicator types.
'domain': {'types': 'hostname', 'to_ids': True},
'email_address': {'types': 'email-src', 'to_ids': True},
'email_subject': {'types': 'email-subject', 'to_ids': True},
'file_name': {'types': 'filename', 'to_ids': True},
'hash_md5': {'types': 'md5', 'to_ids': True},
'hash_sha1': {'types': 'sha1', 'to_ids': True},
'hash_sha256': {'types': 'sha256', 'to_ids': True},
'ip_address': {'types': 'ip-dst', 'to_ids': True},
'ip_address_block': {'types': 'ip-dst', 'to_ids': True},
'mutex_name': {'types': 'mutex', 'to_ids': True},
'registry': {'types': 'regkey', 'to_ids': True},
'url': {'types': 'url', 'to_ids': True},
'user_agent': {'types': 'user-agent', 'to_ids': True},
'x509_serial': {'types': 'x509-fingerprint-md5', 'to_ids': True},
'actors': {'types': 'threat-actor'},
'malware_families': {'types': 'text', 'categories': 'Attribution'}
}
misp_types_out = [item['types'] for item in mapping_out.values()]
mispattributes = {'input': misp_types_in, 'output': misp_types_out}
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if (request.get('config')):
if (request['config'].get('apikey') is None):
misperrors['error'] = 'CrowdStrike apikey is missing'
return misperrors
if (request['config'].get('api_id') is None):
misperrors['error'] = 'CrowdStrike api_id is missing'
return misperrors
client = CSIntelAPI(request['config']['api_id'], request['config']['apikey'])
r = {"results": []}
valid_type = False
for k in misp_types_in:
if request.get(k):
            # map the MISP type to the CrowdStrike type
for item in lookup_indicator(client, request[k]):
r['results'].append(item)
valid_type = True
if not valid_type:
misperrors['error'] = "Unsupported attributes type"
return misperrors
return r
def lookup_indicator(client, item):
result = client.search_indicator(item)
for item in result:
for relation in item['relations']:
if mapping_out.get(relation['type']):
r = mapping_out[relation['type']].copy()
r['values'] = relation['indicator']
yield(r)
for actor in item['actors']:
r = mapping_out['actors'].copy()
r['values'] = actor
yield(r)
for malware_family in item['malware_families']:
r = mapping_out['malware_families'].copy()
r['values'] = malware_family
yield(r)
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
class CSIntelAPI():
def __init__(self, custid=None, custkey=None, perpage=100, page=1, baseurl="https://intelapi.crowdstrike.com/indicator/v2/search/"):
# customer id and key should be passed when obj is created
self.custid = custid
self.custkey = custkey
self.baseurl = baseurl
self.perpage = perpage
self.page = page
def request(self, query):
headers = {'X-CSIX-CUSTID': self.custid,
'X-CSIX-CUSTKEY': self.custkey,
'Content-Type': 'application/json'}
full_query = self.baseurl + query
r = requests.get(full_query, headers=headers)
# 400 - bad request
if r.status_code == 400:
raise Exception('HTTP Error 400 - Bad request.')
        # 404 - not found
        if r.status_code == 404:
            raise Exception('HTTP Error 404 - not found.')
# catch all?
if r.status_code != 200:
raise Exception('HTTP Error: ' + str(r.status_code))
if r.text:
return r
def search_indicator(self, item):
query = 'indicator?match=' + item
r = self.request(query)
return json.loads(r.text)
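
# Not part of the module: a minimal smoke test of the MISP module protocol.
# The credentials are placeholders; the query goes to the live CrowdStrike
# API, so expect an HTTP error without valid ones.
if __name__ == '__main__':
    query = json.dumps({'config': {'api_id': 'EXAMPLE_ID', 'apikey': 'EXAMPLE_KEY'},
                        'md5': '9e107d9d372bb6826bd81d3542a419d6'})
    print(handler(q=query))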

View File

@@ -0,0 +1,81 @@
import json
from ._dnsdb_query.dnsdb_query import DnsdbClient, QueryError
misperrors = {'error': 'Error'}
mispattributes = {'input': ['hostname', 'domain', 'ip-src', 'ip-dst'], 'output': ['freetext']}
moduleinfo = {'version': '0.1', 'author': 'Christophe Vandeplas', 'description': 'Module to access Farsight DNSDB Passive DNS', 'module-type': ['expansion', 'hover']}
moduleconfig = ['apikey']
server = 'https://api.dnsdb.info'
# TODO return a MISP object with the different attributes
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if (request.get('config')):
if (request['config'].get('apikey') is None):
misperrors['error'] = 'Farsight DNSDB apikey is missing'
return misperrors
client = DnsdbClient(server, request['config']['apikey'])
if request.get('hostname'):
res = lookup_name(client, request['hostname'])
elif request.get('domain'):
res = lookup_name(client, request['domain'])
elif request.get('ip-src'):
res = lookup_ip(client, request['ip-src'])
elif request.get('ip-dst'):
res = lookup_ip(client, request['ip-dst'])
else:
misperrors['error'] = "Unsupported attributes type"
return misperrors
out = ''
for v in set(res): # uniquify entries
out = out + "{} ".format(v)
r = {'results': [{'types': mispattributes['output'], 'values': out}]}
return r
def lookup_name(client, name):
try:
        res = client.query_rrset(name)  # rrset results match on the owner name (left-hand side) of DNS records
for item in res:
if item.get('rrtype') in ['A', 'AAAA', 'CNAME']:
for i in item.get('rdata'):
yield(i.rstrip('.'))
if item.get('rrtype') in ['SOA']:
for i in item.get('rdata'):
# grab email field and replace first dot by @ to convert to an email address
yield(i.split(' ')[1].rstrip('.').replace('.', '@', 1))
except QueryError as e:
pass
try:
        res = client.query_rdata_name(name)  # rdata results match on the record data (right-hand side) of DNS records
for item in res:
if item.get('rrtype') in ['A', 'AAAA', 'CNAME']:
yield(item.get('rrname').rstrip('.'))
except QueryError as e:
pass
def lookup_ip(client, ip):
try:
res = client.query_rdata_ip(ip)
for item in res:
yield(item['rrname'].rstrip('.'))
except QueryError as e:
pass
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
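
# Not part of the module: a sketch of exercising the handler from a context
# where the package is importable (the relative import above prevents running
# this file as a standalone script); the API key is a placeholder.
#
#   import json
#   from misp_modules.modules.expansion import farsight_passivedns
#   query = json.dumps({'config': {'apikey': 'YOUR_KEY'}, 'hostname': 'www.dnsdb.info'})
#   print(farsight_passivedns.handler(q=query))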

View File

@@ -45,7 +45,7 @@ def findAll(data, keys):
return a
def valid_email(email):
return bool(re.search(r"^[\w\.\+\-]+\@[\w]+\.[a-z]{2,3}$", email))
return bool(re.search(r"[a-zA-Z0-9!#$%&'*+\/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+\/=?^_`{|}~-]+)*@(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?\.)+[a-zA-Z0-9](?:[a-zA-Z0-9-]*[a-zA-Z0-9])?", email))
def handler(q=False):
if q is False:

View File

@@ -0,0 +1,112 @@
import json
import datetime
import sys  # needed for sys.exit() in the import-error fallback below
try:
import dns.resolver
resolver = dns.resolver.Resolver()
resolver.timeout = 0.2
resolver.lifetime = 0.2
except ImportError:
    print("dnspython is missing, use 'pip install dnspython' to install it.")
sys.exit(0)
misperrors = {'error': 'Error'}
mispattributes = {'input': ['ip-src', 'ip-dst'], 'output': ['text']}
moduleinfo = {'version': '0.1', 'author': 'Christian Studer',
'description': 'Check an IPv4 address against known RBLs.',
'module-type': ['expansion', 'hover']}
moduleconfig = []
rbls = {
'spam.spamrats.com': 'http://www.spamrats.com',
'spamguard.leadmon.net': 'http://www.leadmon.net/SpamGuard/',
'rbl-plus.mail-abuse.org': 'http://www.mail-abuse.com/lookup.html',
'web.dnsbl.sorbs.net': 'http://www.sorbs.net',
'ix.dnsbl.manitu.net': 'http://www.dnsbl.manitu.net',
'virus.rbl.jp': 'http://www.rbl.jp',
'dul.dnsbl.sorbs.net': 'http://www.sorbs.net',
'bogons.cymru.com': 'http://www.team-cymru.org/Services/Bogons/',
'psbl.surriel.com': 'http://psbl.surriel.com',
'misc.dnsbl.sorbs.net': 'http://www.sorbs.net',
'httpbl.abuse.ch': 'http://dnsbl.abuse.ch',
'combined.njabl.org': 'http://combined.njabl.org',
'smtp.dnsbl.sorbs.net': 'http://www.sorbs.net',
'korea.services.net': 'http://korea.services.net',
'drone.abuse.ch': 'http://dnsbl.abuse.ch',
'rbl.efnetrbl.org': 'http://rbl.efnetrbl.org',
'cbl.anti-spam.org.cn': 'http://www.anti-spam.org.cn/?Locale=en_US',
'b.barracudacentral.org': 'http://www.barracudacentral.org/rbl/removal-request',
'bl.spamcannibal.org': 'http://www.spamcannibal.org',
'xbl.spamhaus.org': 'http://www.spamhaus.org/xbl/',
'zen.spamhaus.org': 'http://www.spamhaus.org/zen/',
'rbl.suresupport.com': 'http://suresupport.com/postmaster',
'db.wpbl.info': 'http://www.wpbl.info',
'sbl.spamhaus.org': 'http://www.spamhaus.org/sbl/',
'http.dnsbl.sorbs.net': 'http://www.sorbs.net',
'csi.cloudmark.com': 'http://www.cloudmark.com/en/products/cloudmark-sender-intelligence/index',
'rbl.interserver.net': 'http://rbl.interserver.net',
'ubl.unsubscore.com': 'http://www.lashback.com/blacklist/',
'dnsbl.sorbs.net': 'http://www.sorbs.net',
'virbl.bit.nl': 'http://virbl.bit.nl',
'pbl.spamhaus.org': 'http://www.spamhaus.org/pbl/',
'socks.dnsbl.sorbs.net': 'http://www.sorbs.net',
'short.rbl.jp': 'http://www.rbl.jp',
'dnsbl.dronebl.org': 'http://www.dronebl.org',
'blackholes.mail-abuse.org': 'http://www.mail-abuse.com/lookup.html',
'truncate.gbudb.net': 'http://www.gbudb.com/truncate/index.jsp',
'dyna.spamrats.com': 'http://www.spamrats.com',
'spamrbl.imp.ch': 'http://antispam.imp.ch',
'spam.dnsbl.sorbs.net': 'http://www.sorbs.net',
'wormrbl.imp.ch': 'http://antispam.imp.ch',
'query.senderbase.org': 'http://www.senderbase.org/about',
'opm.tornevall.org': 'http://dnsbl.tornevall.org',
'netblock.pedantic.org': 'http://pedantic.org',
'access.redhawk.org': 'http://www.redhawk.org/index.php?option=com_wrapper&Itemid=33',
'cdl.anti-spam.org.cn': 'http://www.anti-spam.org.cn/?Locale=en_US',
'multi.surbl.org': 'http://www.surbl.org',
'noptr.spamrats.com': 'http://www.spamrats.com',
'dnsbl.inps.de': 'http://dnsbl.inps.de/index.cgi?lang=en',
'bl.spamcop.net': 'http://bl.spamcop.net',
'cbl.abuseat.org': 'http://cbl.abuseat.org',
'dsn.rfc-ignorant.org': 'http://www.rfc-ignorant.org/policy-dsn.php',
'zombie.dnsbl.sorbs.net': 'http://www.sorbs.net',
'dnsbl.njabl.org': 'http://dnsbl.njabl.org',
'relays.mail-abuse.org': 'http://www.mail-abuse.com/lookup.html',
'rbl.spamlab.com': 'http://tools.appriver.com/index.aspx?tool=rbl',
'all.bl.blocklist.de': 'http://www.blocklist.de/en/rbldns.html'
}
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if request.get('ip-src'):
ip = request['ip-src']
elif request.get('ip-dst'):
ip = request['ip-dst']
else:
misperrors['error'] = "Unsupported attributes type"
return misperrors
listed = []
info = []
for rbl in rbls:
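        # DNSBLs are queried by reversing the IPv4 octets and prepending them
        # to the blocklist zone, e.g. 1.2.3.4 -> 4.3.2.1.zen.spamhaus.org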
ipRev = '.'.join(ip.split('.')[::-1])
query = '{}.{}'.format(ipRev, rbl)
try:
txt = resolver.query(query,'TXT')
listed.append(query)
info.append(str(txt[0]))
except:
continue
result = {}
for l, i in zip(listed, info):
result[l] = i
r = {'results': [{'types': mispattributes.get('output'), 'values': json.dumps(result)}]}
return r
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo

View File

@@ -152,7 +152,7 @@ def getMoreInfo(req, key):
# Get all hashes first
hashes = []
hashes = findAll(req, ["md5", "sha1", "sha256", "sha512"])
r.append({"types": ["md5", "sha1", "sha256", "sha512"], "values": hashes})
r.append({"types": ["freetext"], "values": hashes})
for hsh in hashes[:limit]:
# Search VT for some juicy info
try:

View File

@@ -43,12 +43,12 @@ def handler(q=False):
# Only continue if we have a vulnerability attribute
if not request.get('vulnerability'):
-        misperrors['error'] = 'Vulnerability id missing for VulnDB'
+        misperrors['error'] = 'Vulnerability ID missing for VulnDB.'
return misperrors
vulnerability = request.get('vulnerability')
if request["config"].get("apikey") is None or request["config"].get("apisecret") is None:
misperrors["error"] = "Missing API key or secret value for VulnDB"
misperrors["error"] = "Missing API key or secret value for VulnDB."
return misperrors
apikey = request["config"].get("apikey")
apisecret = request["config"].get("apisecret")
@@ -90,7 +90,7 @@ def handler(q=False):
if content_json:
if 'error' in content_json:
misperrors["error"] = "No CVE information found"
misperrors["error"] = "No CVE information found."
return misperrors
else:
output = {'results': list()}
@@ -266,7 +266,7 @@ def handler(q=False):
output['results'] += [{'types': 'cpe', 'values': values_cpe }]
return output
else:
misperrors["error"] = "No information retrieved from VulnDB"
misperrors["error"] = "No information retrieved from VulnDB."
return misperrors
except:
misperrors["error"] = "Error while fetching information from VulnDB, wrong API keys?"

View File

@@ -0,0 +1,38 @@
import json
import requests
try:
import yara
except ImportError:
print("yara is missing, use 'pip3 install yara' to install it.")
misperrors = {'error': 'Error'}
mispattributes = {'input': ['yara'], 'output': ['text']}
moduleinfo = {'version': '0.1', 'author': 'Dennis Rand', 'description': 'A hover module to check the syntax of YARA rules.', 'module-type': ['hover']}
moduleconfig = []
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if not request.get('yara'):
misperrors['error'] = 'Yara rule missing'
return misperrors
try:
rules = yara.compile(source=request.get('yara'))
summary = ("Syntax valid")
except Exception as e:
summary = ("Syntax error: " + str(e))
r = {'results': [{'types': mispattributes['output'], 'values': summary}]}
return r
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
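
# Not part of the module: a minimal local check of the handler with an inline
# (hypothetical) rule; requires the yara bindings to be installed.
if __name__ == '__main__':
    rule = 'rule dummy { strings: $a = "malicious" condition: $a }'
    print(handler(json.dumps({'yara': rule})))  # expected: Syntax valid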

View File

@@ -1 +1 @@
-__all__ = ['testexport','cef_export','liteexport','threat_connect_export']
+__all__ = ['testexport','cef_export','liteexport','goamlexport','threat_connect_export','pdfexport','threatStream_misp_export']

View File

@@ -0,0 +1,237 @@
import json, datetime, base64
from pymisp import MISPEvent
from collections import defaultdict, Counter
misperrors = {'error': 'Error'}
moduleinfo = {'version': '1', 'author': 'Christian Studer',
'description': 'Export to GoAML',
'module-type': ['export'],
'require_standard_format': True}
moduleconfig = ['rentity_id']
mispattributes = {'input': ['MISPEvent'], 'output': ['xml file']}
outputFileExtension = "xml"
responseType = "application/xml"
objects_to_parse = ['transaction', 'bank-account', 'person', 'entity', 'geolocation']
goAMLmapping = {'bank-account': {'bank-account': 't_account', 'institution-name': 'institution_name',
'institution-code': 'institution_code', 'iban': 'iban', 'swift': 'swift',
'branch': 'branch', 'non-banking-institution': 'non_bank_institution',
'account': 'account', 'currency-code': 'currency_code',
'account-name': 'account_name', 'client-number': 'client_number',
'personal-account-type': 'personal_account_type', 'opened': 'opened',
'closed': 'closed', 'balance': 'balance', 'status-code': 'status_code',
'beneficiary': 'beneficiary', 'beneficiary-comment': 'beneficiary_comment',
'comments': 'comments'},
'person': {'person': 't_person', 'text': 'comments', 'first-name': 'first_name',
'middle-name': 'middle_name', 'last-name': 'last_name', 'title': 'title',
'mothers-name': 'mothers_name', 'alias': 'alias', 'date-of-birth': 'birthdate',
'place-of-birth': 'birth_place', 'gender': 'gender','nationality': 'nationality1',
'passport-number': 'passport_number', 'passport-country': 'passport_country',
'social-security-number': 'ssn', 'identity-card-number': 'id_number'},
'geolocation': {'geolocation': 'location', 'city': 'city', 'region': 'state',
'country': 'country_code', 'address': 'address', 'zipcode': 'zip'},
'transaction': {'transaction': 'transaction', 'transaction-number': 'transactionnumber',
'date': 'date_transaction', 'location': 'transaction_location',
'transmode-code': 'transmode_code', 'amount': 'amount_local',
'transmode-comment': 'transmode_comment', 'date-posting': 'date_posting',
'teller': 'teller', 'authorized': 'authorized',
'text': 'transaction_description'},
'legal-entity': {'legal-entity': 'entity', 'name': 'name', 'business': 'business',
'commercial-name': 'commercial_name', 'phone-number': 'phone',
'legal-form': 'incorporation_legal_form',
'registration-number': 'incorporation_number'}}
referencesMapping = {'bank-account': {'aml_type': '{}_account', 'bracket': 't_{}'},
'person': {'transaction': {'aml_type': '{}_person', 'bracket': 't_{}'}, 'bank-account': {'aml_type': 't_person', 'bracket': 'signatory'}},
'legal-entity': {'transaction': {'aml_type': '{}_entity', 'bracket': 't_{}'}, 'bank-account': {'aml_type': 't_entity'}},
'geolocation': {'aml_type': 'address', 'bracket': 'addresses'}}
class GoAmlGeneration(object):
def __init__(self, config):
self.config = config
self.parsed_uuids = defaultdict(list)
def from_event(self, event):
self.misp_event = MISPEvent()
self.misp_event.load(event)
def parse_objects(self):
uuids = defaultdict(list)
report_code = []
currency_code = []
for obj in self.misp_event.objects:
obj_type = obj.name
uuids[obj_type].append(obj.uuid)
if obj_type == 'bank-account':
try:
report_code.append(obj.get_attributes_by_relation('report-code')[0].value.split(' ')[0])
currency_code.append(obj.get_attributes_by_relation('currency-code')[0].value)
except:
print('report_code or currency_code error')
self.uuids, self.report_codes, self.currency_codes = uuids, report_code, currency_code
def build_xml(self):
self.xml = {'header': "<report><rentity_id>{}</rentity_id><submission_code>E</submission_code>".format(self.config),
'data': ""}
if "STR" in self.report_codes:
report_code = "STR"
else:
report_code = Counter(self.report_codes).most_common(1)[0][0]
self.xml['header'] += "<report_code>{}</report_code>".format(report_code)
submission_date = str(self.misp_event.timestamp).replace(' ', 'T')
self.xml['header'] += "<submission_date>{}</submission_date>".format(submission_date)
self.xml['header'] += "<currency_code_local>{}</currency_code_local>".format(Counter(self.currency_codes).most_common(1)[0][0])
for trans_uuid in self.uuids.get('transaction'):
            self.iterate('transaction', 'transaction', trans_uuid, 'data')
person_to_parse = [person_uuid for person_uuid in self.uuids.get('person') if person_uuid not in self.parsed_uuids.get('person')]
if len(person_to_parse) == 1:
            self.iterate('person', 'reporting_person', person_to_parse[0], 'header')
location_to_parse = [location_uuid for location_uuid in self.uuids.get('geolocation') if location_uuid not in self.parsed_uuids.get('geolocation')]
if len(location_to_parse) == 1:
            self.iterate('geolocation', 'location', location_to_parse[0], 'header')
self.xml['data'] += "</report>"
    def iterate(self, object_type, aml_type, uuid, xml_part):
obj = self.misp_event.get_object_by_uuid(uuid)
if object_type == 'transaction':
self.xml[xml_part] += "<{}>".format(aml_type)
self.fill_xml_transaction(object_type, obj.attributes, xml_part)
self.parsed_uuids[object_type].append(uuid)
if obj.ObjectReference:
self.parseObjectReferences(object_type, xml_part, obj.ObjectReference)
self.xml[xml_part] += "</{}>".format(aml_type)
else:
if 'to_' in aml_type or 'from_' in aml_type:
relation_type = aml_type.split('_')[0]
self.xml[xml_part] += "<{0}_funds_code>{1}</{0}_funds_code>".format(relation_type, self.from_and_to_fields[relation_type]['funds'].split(' ')[0])
                self.iterate_normal_case(object_type, obj, aml_type, uuid, xml_part)
self.xml[xml_part] += "<{0}_country>{1}</{0}_country>".format(relation_type, self.from_and_to_fields[relation_type]['country'])
else:
                self.iterate_normal_case(object_type, obj, aml_type, uuid, xml_part)
    def iterate_normal_case(self, object_type, obj, aml_type, uuid, xml_part):
self.xml[xml_part] += "<{}>".format(aml_type)
self.fill_xml(object_type, obj, xml_part)
self.parsed_uuids[object_type].append(uuid)
if obj.ObjectReference:
self.parseObjectReferences(object_type, xml_part, obj.ObjectReference)
self.xml[xml_part] += "</{}>".format(aml_type)
def parseObjectReferences(self, object_type, xml_part, references):
for ref in references:
next_uuid = ref.referenced_uuid
next_object_type = ref.Object.get('name')
relationship_type = ref.relationship_type
self.parse_references(object_type, next_object_type, next_uuid, relationship_type, xml_part)
def fill_xml_transaction(self, object_type, attributes, xml_part):
from_and_to_fields = {'from': {}, 'to': {}}
for attribute in attributes:
object_relation = attribute.object_relation
attribute_value = attribute.value
if object_relation == 'date-posting':
self.xml[xml_part] += "<late_deposit>True</late_deposit>"
elif object_relation in ('from-funds-code', 'to-funds-code'):
relation_type, field, _ = object_relation.split('-')
from_and_to_fields[relation_type][field] = attribute_value
continue
elif object_relation in ('from-country', 'to-country'):
relation_type, field = object_relation.split('-')
from_and_to_fields[relation_type][field] = attribute_value
continue
try:
self.xml[xml_part] += "<{0}>{1}</{0}>".format(goAMLmapping[object_type][object_relation], attribute_value)
except KeyError:
pass
self.from_and_to_fields = from_and_to_fields
def fill_xml(self, object_type, obj, xml_part):
if obj.name == 'bank-account':
for attribute in obj.attributes:
if attribute.object_relation in ('personal-account-type', 'status-code'):
attribute_value = attribute.value.split(' - ')[0]
else:
attribute_value = attribute.value
try:
self.xml[xml_part] += "<{0}>{1}</{0}>".format(goAMLmapping[object_type][attribute.object_relation], attribute_value)
except KeyError:
pass
else:
for attribute in obj.attributes:
try:
self.xml[xml_part] += "<{0}>{1}</{0}>".format(goAMLmapping[object_type][attribute.object_relation], attribute.value)
except KeyError:
pass
def parse_references(self, object_type, next_object_type, uuid, relationship_type, xml_part):
reference = referencesMapping[next_object_type]
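        # mappings may be nested per parent object type; a KeyError below
        # falls back to the flat mapping that ignores the parent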
try:
next_aml_type = reference[object_type].get('aml_type').format(relationship_type.split('_')[0])
try:
bracket = reference[object_type].get('bracket').format(relationship_type)
self.xml[xml_part] += "<{}>".format(bracket)
                self.iterate(next_object_type, next_aml_type, uuid, xml_part)
self.xml[xml_part] += "</{}>".format(bracket)
except KeyError:
                self.iterate(next_object_type, next_aml_type, uuid, xml_part)
except KeyError:
next_aml_type = reference.get('aml_type').format(relationship_type.split('_')[0])
bracket = reference.get('bracket').format(relationship_type)
self.xml[xml_part] += "<{}>".format(bracket)
            self.iterate(next_object_type, next_aml_type, uuid, xml_part)
self.xml[xml_part] += "</{}>".format(bracket)
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if 'data' not in request:
return False
    if not request.get('config') or not request['config'].get('rentity_id'):
misperrors['error'] = "Configuration error."
return misperrors
config = request['config'].get('rentity_id')
export_doc = GoAmlGeneration(config)
export_doc.from_event(request['data'][0])
if not export_doc.misp_event.Object:
misperrors['error'] = "There is no object in this event."
return misperrors
types = []
for obj in export_doc.misp_event.Object:
types.append(obj.name)
if 'transaction' not in types:
misperrors['error'] = "There is no transaction object in this event."
return misperrors
export_doc.parse_objects()
export_doc.build_xml()
exp_doc = "{}{}".format(export_doc.xml.get('header'), export_doc.xml.get('data'))
return {'response': [], 'data': str(base64.b64encode(bytes(exp_doc, 'utf-8')), 'utf-8')}
def introspection():
modulesetup = {}
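    # probe each optional module-level setting; NameError just means this
    # module does not define that setting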
try:
responseType
modulesetup['responseType'] = responseType
except NameError:
pass
try:
userConfig
modulesetup['userConfig'] = userConfig
except NameError:
pass
try:
outputFileExtension
modulesetup['outputFileExtension'] = outputFileExtension
except NameError:
pass
try:
inputSource
        modulesetup['inputSource'] = inputSource
except NameError:
pass
return modulesetup
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo

View File

@@ -0,0 +1,187 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from datetime import date
import json
import shlex
import subprocess
import base64
from pymisp import MISPEvent
misperrors = {'error': 'Error'}
moduleinfo = {'version': '1',
'author': 'Raphaël Vinot',
'description': 'Simple export to PDF',
'module-type': ['export'],
'require_standard_format': True}
moduleconfig = []
mispattributes = {}
outputFileExtension = "pdf"
responseType = "application/pdf"
types_to_attach = ['ip-dst', 'url', 'domain']
objects_to_attach = ['domain-ip']
headers = """
:toc: right
:toclevels: 1
:toc-title: Daily Report
:icons: font
:sectanchors:
:sectlinks:
= Daily report by {org_name}
{date}
:icons: font
"""
event_level_tags = """
IMPORTANT: This event is classified TLP:{value}.
{expanded}
"""
attributes = """
=== Indicator(s) of compromise
{list_attributes}
"""
title = """
== ({internal_id}) {title}
{summary}
"""
class ReportGenerator():
    def __init__(self):
        from pytaxonomies import Taxonomies  # needed by _get_tag_info; assumes pytaxonomies is installed
        self.taxonomies = Taxonomies()
        self.report = ''
def from_remote(self, event_id):
from pymisp import PyMISP
from keys import misp_url, misp_key, misp_verifycert
misp = PyMISP(misp_url, misp_key, misp_verifycert)
result = misp.get(event_id)
self.misp_event = MISPEvent()
self.misp_event.load(result)
def from_event(self, event):
self.misp_event = MISPEvent()
self.misp_event.load(event)
def attributes(self):
if not self.misp_event.attributes:
return ''
list_attributes = []
for attribute in self.misp_event.attributes:
if attribute.type in types_to_attach:
list_attributes.append("* {}".format(attribute.value))
for obj in getattr(self.misp_event, 'Object', []):  # events without objects lack this attribute
if obj.name in objects_to_attach:
for attribute in obj.Attribute:
if attribute.type in types_to_attach:
list_attributes.append("* {}".format(attribute.value))
return attributes.format(list_attributes="\n".join(list_attributes))
def _get_tag_info(self, machinetag):
return self.taxonomies.revert_machinetag(machinetag)
def report_headers(self):
content = {'org_name': 'name',  # placeholder, the reporting org is not wired in
'date': date.today().isoformat()}
self.report += headers.format(**content)
def event_level_tags(self):
if not self.misp_event.Tag:
return ''
for tag in self.misp_event.Tag:
# Only look for TLP for now
if tag['name'].startswith('tlp'):
tax, predicate = self._get_tag_info(tag['name'])
# format the module-level template, not the bound method of the same name
return event_level_tags.format(value=predicate.predicate.upper(), expanded=predicate.expanded)
# no TLP tag found: return an empty string rather than the implicit None
return ''
def title(self):
internal_id = ''
summary = ''
# Get internal refs for report
if not hasattr(self.misp_event, 'Object'):
return ''
for obj in self.misp_event.Object:
if obj.name != 'report':
continue
for a in obj.Attribute:
if a.object_relation == 'case-number':
internal_id = a.value
if a.object_relation == 'summary':
summary = a.value
return title.format(internal_id=internal_id, title=self.misp_event.info,
summary=summary)
def asciidoc(self, lang='en'):
self.report += self.title()
self.report += self.event_level_tags()
self.report += self.attributes()
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if 'data' not in request:
return False
for evt in request['data']:
report = ReportGenerator()
report.report_headers()
report.from_event(evt)
report.asciidoc()
command_line = 'asciidoctor-pdf -'
args = shlex.split(command_line)
with subprocess.Popen(args, stdout=subprocess.PIPE, stdin=subprocess.PIPE) as process:
cmd_out, cmd_err = process.communicate(input=report.report.encode('utf-8'))
return {'response': [], 'data': str(base64.b64encode(cmd_out), 'utf-8')}
def introspection():
modulesetup = {}
try:
responseType
modulesetup['responseType'] = responseType
except NameError:
pass
try:
userConfig
modulesetup['userConfig'] = userConfig
except NameError:
pass
try:
outputFileExtension
modulesetup['outputFileExtension'] = outputFileExtension
except NameError:
pass
try:
inputSource
modulesetup['inputSource'] = inputSource
except NameError:
pass
return modulesetup
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
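
# Minimal smoke-test sketch: render one event to PDF outside MISP. Assumes the
# asciidoctor-pdf gem is on the PATH (the module shells out to it) and that
# 'event.json' - a placeholder path - holds a MISP event export.
if __name__ == '__main__':
    with open('event.json', 'r') as f:
        event = json.load(f)
    result = handler(q=json.dumps({'data': [event]}))
    with open('report.pdf', 'wb') as out:
        out.write(base64.b64decode(result['data']))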

View File

@ -0,0 +1,109 @@
"""
Export module for converting MISP events into ThreatStream Structured Import files. Based on work by the CenturyLink CIRT.
Source: https://github.com/MISP/misp-modules/blob/master/misp_modules/modules/export_mod/threat_connect_export.py
"""
import base64
import csv
import io
import json
import logging
misperrors = {"error": "Error"}
moduleinfo = {
"version": "1.0",
"author": "Robert Nixon, based off of the ThreatConnect MISP Module written by the CenturyLink CIRT",
"description": "Export a structured CSV file for uploading to ThreatStream",
"module-type": ["export"]
}
moduleconfig = []
# Map of MISP fields => ThreatStream itypes, you can modify this to your liking
fieldmap = {
"domain": "mal_domain",
"hostname": "mal_domain",
"ip-src": "mal_ip",
"ip-dst": "mal_ip",
"email-src": "phish_email",
"url": "mal_url",
"md5": "mal_md5",
}
# combine all the MISP fields from fieldmap into one big list
mispattributes = {
"input": list(fieldmap.keys())
}
def handler(q=False):
"""
Convert a MISP query into a CSV file matching the ThreatStream Structured Import file format.
Input
q: Query dictionary
"""
if not q:
return False
request = json.loads(q)
response = io.StringIO()
writer = csv.DictWriter(response, fieldnames=["value", "itype", "tags"])
writer.writeheader()
# start parsing MISP data
for event in request["data"]:
for attribute in event["Attribute"]:
if attribute["type"] in mispattributes["input"]:
logging.debug("Adding %s to structured CSV export of ThreatStream Export", attribute["value"])
if "|" in attribute["type"]:
# if the attribute type has multiple values, line it up with the corresponding ThreatStream values in fieldmap
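# (with the default fieldmap above this branch stays dormant, since none of
# its keys contain '|'; a hypothetical entry such as "filename|md5":
# "mal_file|mal_md5" would make it emit one CSV row per component)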
indicators = tuple(attribute["value"].split("|"))
ts_types = tuple(fieldmap[attribute["type"]].split("|"))
for i, indicator in enumerate(indicators):
writer.writerow({
"value": indicator,
"itype": ts_types[i],
"tags": attribute["comment"]
})
else:
writer.writerow({
"itype": fieldmap[attribute["type"]],
"value": attribute["value"],
"tags": attribute["comment"]
})
return {"response": [], "data": str(base64.b64encode(bytes(response.getvalue(), 'utf-8')), 'utf-8')}
def introspection():
"""
Relay the supported attributes to MISP.
No Input
Output
Dictionary of supported MISP attributes
"""
modulesetup = {
"responseType": "application/txt",
"outputFileExtension": "csv",
"userConfig": {},
"inputSource": []
}
return modulesetup
def version():
"""
Relay module version and associated metadata to MISP.
No Input
Output
moduleinfo: metadata output containing all potential configuration values
"""
moduleinfo["config"] = moduleconfig
return moduleinfo

View File

@ -1,4 +1,4 @@
from . import _vmray
__all__ = ['vmray_import', 'testimport', 'ocr', 'stiximport', 'cuckooimport',
'email_import', 'mispjson', 'openiocimport']
__all__ = ['vmray_import', 'testimport', 'ocr', 'stiximport', 'cuckooimport', 'goamlimport',
'email_import', 'mispjson', 'openiocimport', 'threatanalyzer_import', 'csvimport']

View File

@ -0,0 +1,128 @@
# -*- coding: utf-8 -*-
import json, os, base64
import pymisp
misperrors = {'error': 'Error'}
mispattributes = {'inputSource': ['file'], 'output': ['MISP attributes']}
moduleinfo = {'version': '0.1', 'author': 'Christian Studer',
'description': 'Import Attributes from a csv file.',
'module-type': ['import']}
moduleconfig = ['header']
duplicatedFields = {'mispType': {'mispComment': 'comment'},
'attrField': {'attrComment': 'comment'}}
class CsvParser():
def __init__(self, header):
self.header = header
self.attributes = []
def parse_data(self, data):
return_data = []
for line in data:
# strip inline comments and surrounding whitespace, keep non-empty lines
stripped = line.split('#')[0].strip()
if stripped:
return_data.append(stripped)
self.data = return_data
# find which delimiter is used
self.delimiter, self.length = self.findDelimiter()
def findDelimiter(self):
n = len(self.header)
if n > 1:
tmpData = []
for da in self.data:
tmp = []
for d in (';', '|', '/', ',', '\t', ' ',):
if da.count(d) == (n-1):
tmp.append(d)
if len(tmp) == 1 and tmp == tmpData:
return tmpData[0], n
else:
tmpData = tmp
# loop exhausted without a second line confirming the delimiter (e.g. a
# single data line): fall back to the sole remaining candidate instead of
# returning None, which would crash the tuple unpacking in parse_data
if len(tmpData) == 1:
return tmpData[0], n
return None, 1
else:
return None, 1
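# e.g. with header ['ip-dst', 'attrComment'], a line like "198.51.100.7,beacon"
# contains exactly one candidate delimiter (','), so once a second line agrees
# the method settles on (',', 2)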
def buildAttributes(self):
# if there is only 1 field of data
if self.delimiter is None:
mispType = self.header[0]
for data in self.data:
d = data.strip()
if d:
self.attributes.append({'types': mispType, 'values': d})
else:
# split fields that should be recognized as misp attribute types from the others
list2pop, misp, head = self.findMispTypes()
# for each line of data
for data in self.data:
datamisp = []
datasplit = data.split(self.delimiter)
# in case there is an empty line or an error
if len(datasplit) != self.length:
continue
# pop from the line data that matches with a misp type, using the list of indexes
for index in list2pop:
datamisp.append(datasplit.pop(index).strip())
# for each misp type, we create an attribute
for m, dm in zip(misp, datamisp):
attribute = {'types': m, 'values': dm}
for h, ds in zip(head, datasplit):
if h:
attribute[h] = ds.strip()
self.attributes.append(attribute)
def findMispTypes(self):
descFilename = os.path.join(pymisp.__path__[0], 'data/describeTypes.json')
with open(descFilename, 'r') as f:
MispTypes = json.loads(f.read())['result'].get('types')
list2pop = []
misp = []
head = []
# walk the header right to left with explicit indexes, so duplicate field
# names cannot be mapped to the wrong position by index()
for n, h in reversed(list(enumerate(self.header))):
# fields that are misp attribute types
if h in MispTypes:
list2pop.append(n)
misp.append(h)
# handle confusions between misp attribute types and attribute fields
elif h in duplicatedFields['mispType']:
# fields that should be considered as misp attribute types
list2pop.append(n)
misp.append(duplicatedFields['mispType'].get(h))
elif h in duplicatedFields['attrField']:
# fields that should be considered as attribute fields
head.append(duplicatedFields['attrField'].get(h))
# otherwise, it is an attribute field
else:
head.append(h)
# return list of indexes of the misp types, list of the misp types, remaining fields that will be attribute fields
return list2pop, misp, list(reversed(head))
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if request.get('data'):
data = base64.b64decode(request['data']).decode('utf-8')
else:
misperrors['error'] = "Unsupported attributes type"
return misperrors
if not request.get('config') or not request['config'].get('header'):
misperrors['error'] = "Configuration error"
return misperrors
config = request['config'].get('header').split(',')
config = [c.strip() for c in config]
csv_parser = CsvParser(config)
csv_parser.parse_data(data.split('\n'))
# build the attributes
csv_parser.buildAttributes()
r = {'results': csv_parser.attributes}
return r
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
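
# Standalone sketch: with the header setting 'ip-dst,attrComment', a
# two-column CSV yields one ip-dst attribute per line, the second field
# landing in the attribute comment. The sample values are made up.
if __name__ == '__main__':
    csv_data = "# inline comments are stripped\n198.51.100.7,beacon\n203.0.113.9,exfil\n"
    query = json.dumps({'data': base64.b64encode(csv_data.encode()).decode(),
                        'config': {'header': 'ip-dst,attrComment'}})
    print(handler(q=query))
    # {'results': [{'types': 'ip-dst', 'values': '198.51.100.7', 'comment': 'beacon'}, ...]}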

View File

@ -0,0 +1,172 @@
import json, datetime, time, base64
import xml.etree.ElementTree as ET
from collections import defaultdict
from pymisp import MISPEvent, MISPObject
misperrors = {'error': 'Error'}
moduleinfo = {'version': 1, 'author': 'Christian Studer',
'description': 'Import from GoAML',
'module-type': ['import']}
moduleconfig = []
mispattributes = {'inputSource': ['file'], 'output': ['MISP objects']}
t_from_objects = {'nodes': ['from_person', 'from_account', 'from_entity'],
'leaves': ['from_funds_code', 'from_country']}
t_to_objects = {'nodes': ['to_person', 'to_account', 'to_entity'],
'leaves': ['to_funds_code', 'to_country']}
t_person_objects = {'nodes': ['addresses'],
'leaves': ['first_name', 'middle_name', 'last_name', 'gender', 'title', 'mothers_name', 'birthdate',
'passport_number', 'passport_country', 'id_number', 'birth_place', 'alias', 'nationality1']}
t_account_objects = {'nodes': ['signatory'],
'leaves': ['institution_name', 'institution_code', 'swift', 'branch', 'non_banking_institution',
'account', 'currency_code', 'account_name', 'iban', 'client_number', 'opened', 'closed',
'personal_account_type', 'balance', 'date_balance', 'status_code', 'beneficiary',
'beneficiary_comment', 'comments']}
entity_objects = {'nodes': ['addresses'],
'leaves': ['name', 'commercial_name', 'incorporation_legal_form', 'incorporation_number', 'business', 'phone']}
goAMLobjects = {'report': {'nodes': ['reporting_person', 'location'],
'leaves': ['rentity_id', 'submission_code', 'report_code', 'submission_date', 'currency_code_local']},
'reporting_person': {'nodes': ['addresses'], 'leaves': ['first_name', 'middle_name', 'last_name', 'title']},
'location': {'nodes': [], 'leaves': ['address_type', 'address', 'city', 'zip', 'country_code', 'state']},
'transaction': {'nodes': ['t_from', 't_from_my_client', 't_to', 't_to_my_client'],
'leaves': ['transactionnumber', 'transaction_location', 'date_transaction',
'transmode_code', 'amount_local']},
't_from': t_from_objects, 't_from_my_client': t_from_objects,
't_to': t_to_objects, 't_to_my_client': t_to_objects,
'addresses': {'nodes': ['address'], 'leaves': []},
'address': {'nodes': [], 'leaves': ['address_type', 'address', 'city', 'zip', 'country_code', 'state']},
'from_person': t_person_objects, 'to_person': t_person_objects, 't_person': t_person_objects,
'from_account': t_account_objects, 'to_account': t_account_objects,
'signatory': {'nodes': ['t_person'], 'leaves': []},
'from_entity': entity_objects, 'to_entity': entity_objects,
}
t_account_mapping = {'misp_name': 'bank-account', 'institution_name': 'institution-name', 'institution_code': 'institution-code',
'iban': 'iban', 'swift': 'swift', 'branch': 'branch', 'non_banking_institution': 'non-bank-institution',
'account': 'account', 'currency_code': 'currency-code', 'account_name': 'account-name',
'client_number': 'client-number', 'personal_account_type': 'personal-account-type', 'opened': 'opened',
'closed': 'closed', 'balance': 'balance', 'status_code': 'status-code', 'beneficiary': 'beneficiary',
'beneficiary_comment': 'beneficiary-comment', 'comments': 'comments'}
t_person_mapping = {'misp_name': 'person', 'comments': 'text', 'first_name': 'first-name', 'middle_name': 'middle-name',
'last_name': 'last-name', 'title': 'title', 'mothers_name': 'mothers-name', 'alias': 'alias',
'birthdate': 'date-of-birth', 'birth_place': 'place-of-birth', 'gender': 'gender', 'nationality1': 'nationality',
'passport_number': 'passport-number', 'passport_country': 'passport-country', 'ssn': 'social-security-number',
'id_number': 'identity-card-number'}
location_mapping = {'misp_name': 'geolocation', 'city': 'city', 'state': 'region', 'country_code': 'country', 'address': 'address',
'zip': 'zipcode'}
t_entity_mapping = {'misp_name': 'legal-entity', 'name': 'name', 'business': 'business', 'commercial_name': 'commercial-name',
'phone': 'phone-number', 'incorporation_legal_form': 'legal-form', 'incorporation_number': 'registration-number'}
goAMLmapping = {'from_account': t_account_mapping, 'to_account': t_account_mapping, 't_person': t_person_mapping,
'from_person': t_person_mapping, 'to_person': t_person_mapping, 'reporting_person': t_person_mapping,
'from_entity': t_entity_mapping, 'to_entity': t_entity_mapping,
'location': location_mapping, 'address': location_mapping,
'transaction': {'misp_name': 'transaction', 'transactionnumber': 'transaction-number', 'date_transaction': 'date',
'transaction_location': 'location', 'transmode_code': 'transmode-code', 'amount_local': 'amount',
'transmode_comment': 'transmode-comment', 'date_posting': 'date-posting', 'teller': 'teller',
'authorized': 'authorized', 'transaction_description': 'text'}}
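# e.g. a <swift>ATTBVI</swift> leaf under from_account (see tests/goamlexport.xml)
# ends up as the 'swift' object relation of a 'bank-account' MISP object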
nodes_to_ignore = ['addresses', 'signatory']
relationship_to_keep = ['signatory', 't_from', 't_from_my_client', 't_to', 't_to_my_client', 'address']
class GoAmlParser():
def __init__(self):
self.misp_event = MISPEvent()
def read_xml(self, data):
self.tree = ET.fromstring(data)
def parse_xml(self):
self.first_itteration()
for t in self.tree.findall('transaction'):
self.itterate(t, 'transaction')
def first_itteration(self):
submission_date = self.tree.find('submission_date').text.split('+')[0]
self.misp_event.timestamp = int(time.mktime(time.strptime(submission_date, "%Y-%m-%dT%H:%M:%S")))
for node in goAMLobjects['report']['nodes']:
element = self.tree.find(node)
if element is not None:
self.itterate(element, element.tag)
def itterate(self, tree, aml_type, referencing_uuid=None, relationship_type=None):
objects = goAMLobjects[aml_type]
referenced_uuid = referencing_uuid
rel = relationship_type
if aml_type not in nodes_to_ignore:
try:
mapping = goAMLmapping[aml_type]
misp_object = MISPObject(name=mapping['misp_name'])
for leaf in objects['leaves']:
element = tree.find(leaf)
if element is not None:
object_relation = mapping[element.tag]
attribute = {'object_relation': object_relation, 'value': element.text}
misp_object.add_attribute(**attribute)
if aml_type == 'transaction':
for node in objects['nodes']:
element = tree.find(node)
if element is not None:
self.fill_transaction(element, element.tag, misp_object)
self.misp_event.add_object(misp_object)
last_object = self.misp_event.objects[-1]
referenced_uuid = last_object.uuid
if referencing_uuid and relationship_type:
referencing_object = self.misp_event.get_object_by_uuid(referencing_uuid)
referencing_object.add_reference(referenced_uuid, rel, None, **last_object)
except KeyError:
pass
for node in objects['nodes']:
element = tree.find(node)
if element is not None:
tag = element.tag
if tag in relationship_to_keep:
rel = tag[2:] if tag.startswith('t_') else tag
self.itterate(element, element.tag, referencing_uuid=referenced_uuid, relationship_type=rel)
@staticmethod
def fill_transaction(element, tag, misp_object):
if 't_from' in tag:
from_funds = element.find('from_funds_code').text
from_funds_attribute = {'object_relation': 'from-funds-code', 'value': from_funds}
misp_object.add_attribute(**from_funds_attribute)
from_country = element.find('from_country').text
from_country_attribute = {'object_relation': 'from-country', 'value': from_country}
misp_object.add_attribute(**from_country_attribute)
if 't_to' in tag:
to_funds = element.find('to_funds_code').text
to_funds_attribute = {'object_relation': 'to-funds-code', 'value': to_funds}
misp_object.add_attribute(**to_funds_attribute)
to_country = element.find('to_country').text
to_country_attribute = {'object_relation': 'to-country', 'value': to_country}
misp_object.add_attribute(**to_country_attribute)
def handler(q=False):
if q is False:
return False
request = json.loads(q)
if request.get('data'):
data = base64.b64decode(request['data']).decode('utf-8')
else:
misperrors['error'] = "Unsupported attributes type"
return misperrors
aml_parser = GoAmlParser()
try:
aml_parser.read_xml(data)
except ET.ParseError:
misperrors['error'] = "Impossible to read XML data"
return misperrors
aml_parser.parse_xml()
r = {'results': [obj.to_json() for obj in aml_parser.misp_event.objects]}
return r
def introspection():
return mispattributes
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
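
# Round-trip sketch: feed the sample report shipped as tests/goamlexport.xml
# through the import handler and list the MISP objects it derives.
if __name__ == '__main__':
    with open('tests/goamlexport.xml', 'rb') as f:
        query = json.dumps({'data': base64.b64encode(f.read()).decode()})
    for misp_object in handler(q=query)['results']:
        print(misp_object)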

View File

@ -0,0 +1,507 @@
'''
Import module for ThreatAnalyzer archive.zip / analysis.json files.
'''
import json
import base64
import re
import zipfile
import ipaddress
import io
import logging
misperrors = {'error': 'Error'}
userConfig = {}
inputSource = ['file']
moduleinfo = {'version': '0.6', 'author': 'Christophe Vandeplas',
'description': 'Import for ThreatAnalyzer archive.zip/analysis.json files',
'module-type': ['import']}
moduleconfig = []
log = logging.getLogger('misp-modules')
# FIXME - many hardcoded filters should be migrated to import regexes. See also https://github.com/MISP/MISP/issues/2712
# DISCLAIMER - This module is to be considered as experimental and needs much fine-tuning.
# more can be done with what's in the ThreatAnalyzer archive.zip
def handler(q=False):
if q is False:
return False
results = []
zip_starts = b'PK'  # zip local file header magic
request = json.loads(q)
data = base64.b64decode(request['data'])
if data[:len(zip_starts)] == zip_starts:  # compare raw bytes, decode() may fail on binary input
with zipfile.ZipFile(io.BytesIO(data), 'r') as zf:
# unzipped_files = []
modified_files_mapping = {}
# pre-process some of the files in the zip
for zip_file_name in zf.namelist(): # Get all files in the zip file
# find the filenames of the modified_files
if re.match(r"Analysis/proc_\d+/modified_files/mapping\.log", zip_file_name):
with zf.open(zip_file_name, mode='r', pwd=None) as fp:
file_data = fp.read()
for line in file_data.decode().split('\n'):
if line:
l_fname, l_size, l_md5, l_created = line.split('|')
l_fname = cleanup_filepath(l_fname)
if l_fname:
if l_size == '0':  # fields from split() are strings, so compare with '0'
pass  # FIXME create an attribute for the filename/path
else:
# file is a non-empty sample, upload the sample later
modified_files_mapping[l_md5] = l_fname
# now really process the data
for zip_file_name in zf.namelist(): # Get all files in the zip file
# print('Processing file: {}'.format(zip_file_name))
if re.match(r"Analysis/proc_\d+/modified_files/.+\.", zip_file_name) and "mapping.log" not in zip_file_name:
sample_md5 = zip_file_name.split('/')[-1].split('.')[0]
if sample_md5 in modified_files_mapping:
sample_filename = modified_files_mapping[sample_md5]
# print("{} maps to {}".format(sample_md5, sample_filename))
with zf.open(zip_file_name, mode='r', pwd=None) as fp:
file_data = fp.read()
results.append({
'values': sample_filename,
'data': base64.b64encode(file_data).decode(),
'type': 'malware-sample', 'categories': ['Artifacts dropped', 'Payload delivery'], 'to_ids': True, 'comment': ''})
if 'Analysis/analysis.json' in zip_file_name:
with zf.open(zip_file_name, mode='r', pwd=None) as fp:
file_data = fp.read()
analysis_json = json.loads(file_data.decode('utf-8'))
results += process_analysis_json(analysis_json)
# if 'sample' in zip_file_name:
# sample['data'] = base64.b64encode(file_data).decode()
else:
try:
results = process_analysis_json(json.loads(data.decode('utf-8')))
except ValueError:
log.warning('MISP modules {0} failed: uploaded file is not a zip or json file.'.format(request['module']))
return {'error': 'Uploaded file is not a zip or json file.'}
# keep only unique entries based on the value field
results = list({v['values']: v for v in results}.values())
r = {'results': results}
return r
def process_analysis_json(analysis_json):
if 'analysis' in analysis_json and 'processes' in analysis_json['analysis'] and 'process' in analysis_json['analysis']['processes']:
# if 'analysis' in analysis_json and '@filename' in analysis_json['analysis']:
# sample['values'] = analysis_json['analysis']['@filename']
for process in analysis_json['analysis']['processes']['process']:
# print_json(process)
if 'connection_section' in process and 'connection' in process['connection_section']:
for connection_section_connection in process['connection_section']['connection']:
connection_section_connection['@remote_ip'] = cleanup_ip(connection_section_connection['@remote_ip'])
connection_section_connection['@remote_hostname'] = cleanup_hostname(connection_section_connection['@remote_hostname'])
if connection_section_connection['@remote_ip'] and connection_section_connection['@remote_hostname']:
val = '{}|{}'.format(connection_section_connection['@remote_hostname'],
connection_section_connection['@remote_ip'])
# print("connection_section_connection hostname|ip: {}|{} IDS:yes".format(
# connection_section_connection['@remote_hostname'],
# connection_section_connection['@remote_ip'])
# )
yield({'values': val, 'type': 'domain|ip', 'categories': 'Network activity', 'to_ids': True, 'comment': ''})
elif connection_section_connection['@remote_ip']:
# print("connection_section_connection ip-dst: {} IDS:yes".format(
# connection_section_connection['@remote_ip'])
# )
yield({'values': connection_section_connection['@remote_ip'], 'type': 'ip-dst', 'to_ids': True, 'comment': ''})
elif connection_section_connection['@remote_hostname']:
# print("connection_section_connection hostname: {} IDS:yes".format(
# connection_section_connection['@remote_hostname'])
# )
yield({'values': connection_section_connection['@remote_hostname'], 'type': 'hostname', 'to_ids': True, 'comment': ''})
if 'http_command' in connection_section_connection:
for http_command in connection_section_connection['http_command']:
# print('connection_section_connection HTTP COMMAND: {}\t{}'.format(
# http_command['@method'], # comment
# http_command['@url']) # url
# )
val = cleanup_url(http_command['@url'])
if val:
yield({'values': val, 'type': 'url', 'categories': 'Network activity', 'to_ids': True, 'comment': http_command['@method']})
if 'http_header' in connection_section_connection:
for http_header in connection_section_connection['http_header']:
if 'User-Agent:' in http_header['@header']:
val = http_header['@header'][len('User-Agent: '):]
yield({'values': val, 'type': 'user-agent', 'categories': 'Network activity', 'to_ids': False, 'comment': ''})
elif 'Host:' in http_header['@header']:
val = http_header['@header'][len('Host: '):]
if ':' in val:
try:
val_port = int(val.split(':')[1])
except ValueError:
val_port = False
val_hostname = cleanup_hostname(val.split(':')[0])
val_ip = cleanup_ip(val.split(':')[0])
if val_hostname and val_port:
val_combined = '{}|{}'.format(val_hostname, val_port)
# print({'values': val_combined, 'type': 'hostname|port', 'to_ids': True, 'comment': ''})
yield({'values': val_combined, 'type': 'hostname|port', 'to_ids': True, 'comment': ''})
elif val_ip and val_port:
val_combined = '{}|{}'.format(val_ip, val_port)
# print({'values': val_combined, 'type': 'ip-dst|port', 'to_ids': True, 'comment': ''})
yield({'values': val_combined, 'type': 'ip-dst|port', 'to_ids': True, 'comment': ''})
else:
continue
val_hostname = cleanup_hostname(val)
if val_hostname:
# print({'values': val_hostname, 'type': 'hostname', 'to_ids': True, 'comment': ''})
yield({'values': val_hostname, 'type': 'hostname', 'to_ids': True, 'comment': ''})
else:
# LATER header not processed
pass
if 'filesystem_section' in process and 'create_file' in process['filesystem_section']:
for filesystem_section_create_file in process['filesystem_section']['create_file']:
# first skip some items
if filesystem_section_create_file['@create_disposition'] in {'FILE_OPEN_IF'}:
continue
# FIXME - this section is probably not needed considering the 'stored_files stored_created_file' section we process later.
# print('CREATE FILE: {}\t{}'.format(
# filesystem_section_create_file['@srcfile'], # filename
# filesystem_section_create_file['@create_disposition']) # comment - use this to filter out cases
# )
if 'networkoperation_section' in process and 'dns_request_by_addr' in process['networkoperation_section']:
for networkoperation_section_dns_request_by_addr in process['networkoperation_section']['dns_request_by_addr']:
# FIXME - it's unclear what this section is for.
# TODO filter this
# print('DNS REQUEST: {}\t{}'.format(
# networkoperation_section_dns_request_by_addr['@request_address'], # ip-dst
# networkoperation_section_dns_request_by_addr['@result_name']) # hostname
# ) # => NOT hostname|ip
pass
if 'networkoperation_section' in process and 'dns_request_by_name' in process['networkoperation_section']:
for networkoperation_section_dns_request_by_name in process['networkoperation_section']['dns_request_by_name']:
networkoperation_section_dns_request_by_name['@request_name'] = cleanup_hostname(networkoperation_section_dns_request_by_name['@request_name'].rstrip('.'))
networkoperation_section_dns_request_by_name['@result_addresses'] = cleanup_ip(networkoperation_section_dns_request_by_name['@result_addresses'])
if networkoperation_section_dns_request_by_name['@request_name'] and networkoperation_section_dns_request_by_name['@result_addresses']:
val = '{}|{}'.format(networkoperation_section_dns_request_by_name['@request_name'],
networkoperation_section_dns_request_by_name['@result_addresses'])
# print("networkoperation_section_dns_request_by_name hostname|ip: {}|{} IDS:yes".format(
# networkoperation_section_dns_request_by_name['@request_name'],
# networkoperation_section_dns_request_by_name['@result_addresses'])
# )
yield({'values': val, 'type': 'domain|ip', 'categories': 'Network activity', 'to_ids': True, 'comment': ''})
elif networkoperation_section_dns_request_by_name['@request_name']:
# print("networkoperation_section_dns_request_by_name hostname: {} IDS:yes".format(
# networkoperation_section_dns_request_by_name['@request_name'])
# )
yield({'values': networkoperation_section_dns_request_by_name['@request_name'], 'type': 'hostname', 'to_ids': True, 'comment': ''})
elif networkoperation_section_dns_request_by_name['@result_addresses']:
# this happens when the IP is both in the request_name and result_address.
# print("networkoperation_section_dns_request_by_name hostname: {} IDS:yes".format(
# networkoperation_section_dns_request_by_name['@result_addresses'])
# )
yield({'values': networkoperation_section_dns_request_by_name['@result_addresses'], 'type': 'ip-dst', 'to_ids': True, 'comment': ''})
if 'networkpacket_section' in process and 'connect_to_computer' in process['networkpacket_section']:
for networkpacket_section_connect_to_computer in process['networkpacket_section']['connect_to_computer']:
networkpacket_section_connect_to_computer['@remote_hostname'] = cleanup_hostname(networkpacket_section_connect_to_computer['@remote_hostname'])
networkpacket_section_connect_to_computer['@remote_ip'] = cleanup_ip(networkpacket_section_connect_to_computer['@remote_ip'])
if networkpacket_section_connect_to_computer['@remote_hostname'] and networkpacket_section_connect_to_computer['@remote_ip']:
# print("networkpacket_section_connect_to_computer hostname|ip: {}|{} IDS:yes COMMENT:port {}".format(
# networkpacket_section_connect_to_computer['@remote_hostname'],
# networkpacket_section_connect_to_computer['@remote_ip'],
# networkpacket_section_connect_to_computer['@remote_port'])
# )
val_combined = "{}|{}".format(networkpacket_section_connect_to_computer['@remote_hostname'], networkpacket_section_connect_to_computer['@remote_ip'])
yield({'values': val_combined, 'type': 'hostname|ip', 'to_ids': True, 'comment': ''})
elif networkpacket_section_connect_to_computer['@remote_hostname']:
# print("networkpacket_section_connect_to_computer hostname: {} IDS:yes COMMENT:port {}".format(
# networkpacket_section_connect_to_computer['@remote_hostname'],
# networkpacket_section_connect_to_computer['@remote_port'])
# )
val_combined = "{}|{}".format(networkpacket_section_connect_to_computer['@remote_hostname'], networkpacket_section_connect_to_computer['@remote_port'])
yield({'values': val_combined, 'type': 'hostname|port', 'to_ids': True, 'comment': ''})
elif networkpacket_section_connect_to_computer['@remote_ip']:
# print("networkpacket_section_connect_to_computer ip-dst: {} IDS:yes COMMENT:port {}".format(
# networkpacket_section_connect_to_computer['@remote_ip'],
# networkpacket_section_connect_to_computer['@remote_port'])
# )
val_combined = "{}|{}".format(networkpacket_section_connect_to_computer['@remote_ip'], networkpacket_section_connect_to_computer['@remote_port'])
yield({'values': val_combined, 'type': 'ip-dst|port', 'to_ids': True, 'comment': ''})
if 'registry_section' in process and 'create_key' in process['registry_section']:
# FIXME this is a complicated section, together with the 'set_value'.
# it looks like this section is not ONLY about creating registry keys,
# more about accessing a handle to keys (with specific permissions)
# maybe we don't want to keep this, in favor of 'set_value'
for create_key in process['registry_section']['create_key']:
# print('REG CREATE: {}\t{}'.format(
# create_key['@desired_access'],
# create_key['@key_name']))
pass
if 'registry_section' in process and 'delete_key' in process['registry_section']:
# LATER we probably don't want to keep this. Much pollution.
# Maybe for later once we have filtered out this.
for delete_key in process['registry_section']['delete_key']:
# print('REG DELETE: {}'.format(
# delete_key['@key_name'])
# )
pass
if 'registry_section' in process and 'set_value' in process['registry_section']:
# FIXME this is a complicated section, together with the 'create_key'.
for set_value in process['registry_section']['set_value']:
# '@data_type' == 'REG_BINARY',
# '@data_type' == 'REG_DWORD',
# '@data_type' == 'REG_EXPAND_SZ',
# '@data_type' == 'REG_MULTI_SZ',
# '@data_type' == 'REG_NONE',
# '@data_type' == 'REG_QWORD',
# '@data_type' == 'REG_SZ',
regkey = cleanup_regkey("{}\\{}".format(set_value['@key_name'], set_value['@value_name']))
regdata = cleanup_regdata(set_value.get('@data'))
if not regkey:
continue
if set_value['@data_size'] == '0' or not regdata:
# print('registry_section set_value REG SET: {}\t{}\t{}'.format(
# set_value['@data_type'],
# set_value['@key_name'],
# set_value['@value_name'])
# )
yield({'values': regkey, 'type': 'regkey', 'to_ids': True,
'categories': ['External analysis', 'Persistence mechanism', 'Artifacts dropped'], 'comment': set_value['@data_type']})
else:
try:
# unicode fun...
# print('registry_section set_value REG SET: {}\t{}\t{}\t{}'.format(
# set_value['@data_type'],
# set_value['@key_name'],
# set_value['@value_name'],
# set_value['@data'])
# )
val = "{}|{}".format(regkey, regdata)
yield({'values': val, 'type': 'regkey|value', 'to_ids': True,
'categories': ['External analysis', 'Persistence mechanism', 'Artifacts dropped'], 'comment': set_value['@data_type']})
except Exception as e:
print("EXCEPTION registry_section {}".format(e))
# TODO - maybe we want to handle these later, or not...
pass
pass
if 'stored_files' in process and 'stored_created_file' in process['stored_files']:
for stored_created_file in process['stored_files']['stored_created_file']:
stored_created_file['@filename'] = cleanup_filepath(stored_created_file['@filename'])
if stored_created_file['@filename']:
if stored_created_file['@filesize'] != '0':
val = '{}|{}'.format(stored_created_file['@filename'], stored_created_file['@md5'])
# print("stored_created_file filename|md5: {}|{} IDS:yes".format(
# stored_created_file['@filename'], # filename
# stored_created_file['@md5']) # md5
# ) # => filename|md5
yield({'values': val, 'type': 'filename|md5', 'to_ids': True,
'categories': ['Artifacts dropped', 'Payload delivery'], 'comment': ''})
else:
# print("stored_created_file filename: {} IDS:yes".format(
# stored_created_file['@filename']) # filename
# ) # => filename
yield({'values': stored_created_file['@filename'],
'type': 'filename', 'to_ids': True,
'categories': ['Artifacts dropped', 'Payload delivery'], 'comment': ''})
if 'stored_files' in process and 'stored_modified_file' in process['stored_files']:
for stored_modified_file in process['stored_files']['stored_modified_file']:
stored_modified_file['@filename'] = cleanup_filepath(stored_modified_file['@filename'])
if stored_modified_file['@filename']:
if stored_modified_file['@filesize'] != '0':
val = '{}|{}'.format(stored_modified_file['@filename'], stored_modified_file['@md5'])
# print("stored_modified_file MODIFY FILE: {}\t{}".format(
# stored_modified_file['@filename'], # filename
# stored_modified_file['@md5']) # md5
# ) # => filename|md5
yield({'values': val, 'type': 'filename|md5', 'to_ids': True,
'categories': ['Artifacts dropped', 'Payload delivery'],
'comment': 'modified'})
else:
# print("stored_modified_file MODIFY FILE: {}\t{}".format(
# stored_modified_file['@filename']) # filename
# ) # => filename
yield({'values': stored_modified_file['@filename'], 'type': 'filename', 'to_ids': True,
'categories': ['Artifacts dropped', 'Payload delivery'],
'comment': 'modified'})
def add_file(filename, results, hash, index, filedata=None):
pass
# results.append({'values': filename, 'data': "{}|{}".format(filename, filedata.decode()), 'type': 'malware-sample',
# 'categories': ['Artifacts dropped', 'Payload delivery']})
def add_file_zip():
# if 'malware-sample' in request:
# sample_filename = request.get("malware-sample").split("|", 1)[0]
# data = base64.b64decode(data)
# fl = io.BytesIO(data)
# zf = zipfile.ZipFile(fl)
# sample_hashname = zf.namelist()[0]
# data = zf.read(sample_hashname, b"infected")
# zf.close()
pass
def print_json(data):
print(json.dumps(data, sort_keys=True, indent=4, separators=(',', ': ')))
def list_in_string(lst, data, regex=False):
for item in lst:
if regex:
if re.search(item, data, flags=re.IGNORECASE):
return True
else:
if item in data:
return True
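# e.g. list_in_string({'127.0.0.'}, '127.0.0.1') is True (plain substring hit);
# with regex=True every entry is applied as a case-insensitive pattern instead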
def cleanup_ip(item):
# you should exclude private IP ranges via import regexes
noise_substrings = {
'224.0.0.',
'127.0.0.',
'8.8.8.8',
'8.8.4.4',
'0.0.0.0',
'NONE'
}
if list_in_string(noise_substrings, item):
return None
try:
ipaddress.ip_address(item)
return item
except ValueError:
return None
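
# A stricter variant one could swap in (a sketch, not wired up above): let the
# ipaddress module itself reject private, loopback, multicast and unspecified
# addresses instead of maintaining substring lists by hand.
def cleanup_ip_strict(item):
    try:
        ip = ipaddress.ip_address(item)
    except ValueError:
        return None  # not an IP address at all
    if ip.is_private or ip.is_loopback or ip.is_multicast or ip.is_unspecified:
        return None  # noise for IOC purposes
    return item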
def cleanup_hostname(item):
noise_substrings = {
'wpad',
'teredo.ipv6.microsoft.com',
'WIN7SP1-x64-UNP'
}
# take away common known bad
if list_in_string(noise_substrings, item):
return None
# eliminate IP addresses
try:
ipaddress.ip_address(item)
except ValueError:
# this is not an IP, so continue
return item
return None
def cleanup_url(item):
if item in ['/']:
return None
return item
def cleanup_filepath(item):
noise_substrings = {
'C:\\Windows\\Prefetch\\',
'\\AppData\\Roaming\\Microsoft\\Windows\\Recent\\',
'\\AppData\\Roaming\\Microsoft\\Office\\Recent\\',
'C:\\ProgramData\\Microsoft\\OfficeSoftwareProtectionPlatform\\Cache\\cache.dat',
'\\AppData\\Local\\Microsoft\\Windows\\Temporary Internet Files\\Content.',
'\\AppData\\Local\\Microsoft\\Internet Explorer\\Recovery\\High\\',
'\\AppData\\Local\\Microsoft\\Internet Explorer\\DOMStore\\',
'\\AppData\\LocalLow\\Microsoft\\Internet Explorer\\Services\\search_',
'\\AppData\\Local\\Microsoft\\Windows\\History\\History.',
'\\AppData\\Roaming\\Microsoft\\Windows\\Cookies\\',
'\\AppData\\LocalLow\\Microsoft\\CryptnetUrlCache\\',
'\\AppData\\Local\\Microsoft\\Windows\\Caches\\',
'\\AppData\\Local\\Microsoft\\Windows\\WebCache\\',
'\\AppData\\Local\\Microsoft\\Windows\\Explorer\\thumbcache',
'\\AppData\\Roaming\\Adobe\\Acrobat\\9.0\\SharedDataEvents-journal',
'\\AppData\\Roaming\\Adobe\\Acrobat\\9.0\\UserCache.bin',
'\\AppData\\Roaming\\Macromedia\\Flash Player\\macromedia.com\\support\\flashplayer\\sys\\settings.sol',
'\\AppData\\Roaming\\Adobe\\Flash Player\\NativeCache\\',
'C:\\Windows\\AppCompat\\Programs\\',
'C:\\~' # caused by temp file created by MS Office when opening malicious doc/xls/...
}
if list_in_string(noise_substrings, item):
return None
return item
def cleanup_regkey(item):
noise_substrings = {
r'\\Software\\Microsoft\\Windows\\CurrentVersion\\Installer\\UserData\\',
r'\\Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\\',
r'\\CurrentVersion\\Explorer\\RecentDocs\\',
r'\\CurrentVersion\\Explorer\\UserAssist\\',
r'\\CurrentVersion\\Explorer\\FileExts\\[a-z\.]+\\OpenWith',
r'\\Software\\Microsoft\\Internet Explorer\\Main\\WindowsSearch',
r'\\Software\\Microsoft\\Office\\[0-9\.]+\\',
r'\\SOFTWARE\\Microsoft\\OfficeSoftwareProtectionPlatform\\',
r'\\Software\\Microsoft\\Office\\Common\\Smart Tag\\',
r'\\Usage\\SpellingAndGrammarFiles',
r'^HKLM\\Software\\Microsoft\\Tracing\\',
r'\\Software\\Classes\\CLSID\\',
r'\\Software\\Classes\\Local Settings\\MuiCache\\',
r'\\Local Settings\\Software\\Microsoft\\Windows\\Shell\\Bag',
r'\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\RunMRU\\'
}
item = item.replace('\\REGISTRY\\MACHINE\\', 'HKLM\\')
item = item.replace('\\REGISTRY\\USER\\', 'HKCU\\')
if list_in_string(noise_substrings, item, regex=True):
return None
return item
def cleanup_regdata(item):
if not item:
return None
item = item.replace('(UNICODE_0x00000000)', '')
return item
def get_zipped_contents(filename, data, password=None):
with zipfile.ZipFile(io.BytesIO(data), 'r') as zf:
unzipped_files = []
if password is not None:
password = str.encode(password) # Byte encoded password required
for zip_file_name in zf.namelist(): # Get all files in the zip file
# print(zip_file_name)
with zf.open(zip_file_name, mode='r', pwd=password) as fp:
file_data = fp.read()
unzipped_files.append({'values': zip_file_name,
'data': file_data,
'comment': 'Extracted from {0}'.format(filename)})
# print("{} : {}".format(zip_file_name, len(file_data)))
return unzipped_files
def introspection():
modulesetup = {}
try:
userConfig
modulesetup['userConfig'] = userConfig
except NameError:
pass
try:
inputSource
modulesetup['inputSource'] = inputSource
except NameError:
pass
return modulesetup
def version():
moduleinfo['config'] = moduleconfig
return moduleinfo
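
# Local test sketch: run an analysis.json export through the parser without
# MISP in the loop. The 'analysis.json' path is a placeholder; the 'module'
# key is only consulted for the error log message above.
if __name__ == '__main__':
    with open('analysis.json', 'rb') as f:
        query = json.dumps({'module': 'threatanalyzer_import',
                            'data': base64.b64encode(f.read()).decode()})
    for res in handler(q=query).get('results', []):
        print(res['type'], res['values'])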

tests/goamlexport.xml Normal file
View File

@ -0,0 +1 @@
<report><rentity_id>2510</rentity_id><submission_code>E</submission_code><report_code>STR</report_code><submission_date>2018-02-22T08:34:16+00:00</submission_date><currency_code_local>EUR</currency_code_local><transaction><transactionnumber>TW00000901</transactionnumber><transaction_location>1 Manners Street Wellington</transaction_location><transmode_code>BG</transmode_code><date_transaction>2015-12-01T10:03:00</date_transaction><amount_local>12345</amount_local><transaction_description>when it transacts</transaction_description><t_from><from_funds_code>E</from_funds_code><from_account><status_code>A</status_code><personal_account_type>A</personal_account_type><currency_code>EUR</currency_code><account>31032027088</account><swift>ATTBVI</swift><institution_name>The bank</institution_name><signatory><t_person><last_name>Nick</last_name><first_name>Pitt</first_name><title>Sir</title><birthdate>1993-09-25</birthdate><birth_place>Mulhouse, France</birth_place><gender>Male</gender><addresses><address><city>Paris</city><country_code>France</country_code></address></addresses></t_person></signatory></from_account><from_country>FRA</from_country></t_from><t_to_my_client><to_funds_code>K</to_funds_code><to_person><last_name>Michel</last_name><first_name>Jean</first_name><title>Himself</title><gender>Prefer not to say</gender><addresses><address><city>Luxembourg</city><country_code>Luxembourg</country_code></address></addresses></to_person><to_country>LUX</to_country></t_to_my_client></transaction></report>