Merge branch 'master' of github.com:MISP/misp-dashboard into subzero

subzero
mokaddem 2019-06-21 14:39:24 +02:00
commit b06c95c907
37 changed files with 23070 additions and 361 deletions

README.md

@@ -1,23 +1,23 @@
# misp-dashboard
A dashboard showing live data and statistics from the ZMQ feeds of one or more [MISP](https://www.misp-project.org/) instances. The dashboard
can be used as a real-time situational awareness tool to gather threat intelligence information. The misp-dashboard includes
a gamification tool to show the contributions of each organisations and how they are ranked over time. The dashboard can be used for
SOC (Security Operation Center), security team or during cyber exercise to keep track of what's going on your various MISP instances.
A dashboard showing live data and statistics from the ZMQ feeds of one or more [MISP](https://www.misp-project.org/) instances.
The dashboard can be used as a real-time situational awareness tool to gather threat intelligence information.
The misp-dashboard includes a [gamification](https://en.wikipedia.org/wiki/Gamification#Criticism) tool to show the contributions of each organisation and how they are ranked over time.
The dashboard can be used for SOCs (Security Operation Centers), security teams or during cyber exercises to keep track of what is being processed on your various MISP instances.
# Features
## Live Dashboard
- Possibility to subscribe to multiple ZMQ feeds
- Shows direct contribution made by organisations
- Shows live resolvable posted locations
- Possibility to subscribe to multiple ZMQ feeds from different MISP instances
- Shows immediate contributions made by organisations
- Displays live resolvable posted geo-locations
![Dashboard live](./screenshots/dashboard-live.png)
## Geolocalisation Dashboard
- Provides historical geolocalised information to support security teams, CSIRTs or SOC finding threats in their constituency
- Provides historical geolocalised information to support security teams, CSIRTs or SOCs in finding threats within their constituency
- Possibility to get geospatial information from specific regions
![Dashboard geo](./screenshots/dashboard-geo.png)
@@ -25,25 +25,25 @@ SOC (Security Operation Center), security team or during cyber exercise to keep
## Contributors Dashboard
__Shows__:
- The monthly rank of all organisation
- The monthly rank of all organisations
- The last organisation that contributed (dynamic updates)
- The contribution level of all organisation
- Each category of contribution per organisation
- The contribution level of all organisations
- Each category of contributions per organisation
- The current ranking of the selected organisation (dynamic updates)
__Includes__:
- Gamification of the platform:
- [Gamification](https://en.wikipedia.org/wiki/Gamification#Criticism) of the platform:
- Two different levels of ranking with unique icons
- Exclusive obtainable badges for source code contributors and donators
![Dashboard contributor](./screenshots/dashboard-contributors2.png)
![Dashboard contributor2](./screenshots/dashboard-contributors3.png)
![Dashboard contributors](./screenshots/dashboard-contributors2.png)
![Dashboard contributors2](./screenshots/dashboard-contributors3.png)
## Users Dashboard
- Shows when and how the platform is used:
- Login punchcard and overtime
- Login punchcard and contributions over time
- Contribution vs login
![Dashboard users](./screenshots/dashboard-users.png)
@@ -57,7 +57,7 @@ __Includes__:
![Dashboard trendings](./screenshots/dashboard-trendings.png)
# Installation
- Launch ```./install_dependencies.sh``` from the MISP-Dashboard directory
- Launch ```./install_dependencies.sh``` from the MISP-Dashboard directory ([idempotent-ish](https://en.wikipedia.org/wiki/Idempotence))
- Update the configuration file ```config.cfg``` so that it matches your system
- Fields that you may change:
- RedisGlobal -> host
@@ -68,7 +68,7 @@ __Includes__:
# Updating by pulling
- Re-launch ```./install_dependencies.sh``` to fetch new required dependencies
- Re-update your configuration file ```config.cfg```
- Re-update your configuration file ```config.cfg```, comparing it against any changes in ```config.cfg.default```
:warning: Make sure no zmq python3 scripts are running. They block the update.
@@ -90,9 +90,10 @@ Traceback (most recent call last):
with open(dst, 'wb') as fdst:
OSError: [Errno 26] Text file busy: '/home/steve/code/misp-dashboard/DASHENV/bin/python3'
```
- Restart the System: `./zmq_subscriber.py &`, `./zmq_dispatcher.py &` and `./server.py &`
# Starting the System
:warning: You do not need to run it as root. Normal privileges are fine.
:warning: You should not run it as root. Normal privileges are fine.
- Be sure to have a running redis server
- e.g. ```redis-server --port 6250```
@@ -102,7 +103,7 @@ OSError: [Errno 26] Text file busy: '/home/steve/code/misp-dashboard/DASHENV/bin
- Start the Flask server ```./server.py &```
- Access the interface at ```http://localhost:8001/```
Alternatively, you can run the ```start_all.sh``` script to run the commands described above.
__Alternatively__, you can use the ```start_all.sh``` script, which runs the commands described above.
# Debug
@@ -117,7 +118,7 @@ export FLASK_APP=server.py
flask run --host=0.0.0.0 --port=8001 # <- Be careful here, this exposes it on ALL IP addresses. If running locally, prefer --host=127.0.0.1
```
OR, just toggle the debug flag in start_all.sh script.
OR, just toggle the debug flag in start_all.sh or config.cfg.
Happy hacking ;)
@@ -135,6 +136,29 @@ optional arguments:
a soft method to delete only keys used by MISP-Dashboard.
```
## Notes about ZMQ
Since the misp-dashboard is stateless with regard to MISP, it can only process data that it receives: if your MISP instance is not publishing all notifications to its ZMQ feed, the misp-dashboard will not have them.
The most relevant example is the user login punchcard: if your MISP does not have the option ``Plugin.ZeroMQ_audit_notifications_enable`` set to ``true``, the punchcard will be empty.
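To quickly verify that your MISP instance is actually publishing notifications on its ZMQ channel, a minimal probe sketch (assuming the default MISP ZMQ URL `tcp://localhost:50000`; adjust to your setup):

```python
#!/usr/bin/env python3
# Minimal ZMQ probe: print the topic of every message MISP publishes.
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect('tcp://localhost:50000')      # assumption: default MISP ZMQ URL
socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to every topic

while True:
    topic, _, _ = socket.recv_string().partition(' ')
    print(topic)  # e.g. misp_json_audit once audit notifications are enabled
```

If no `misp_json_audit` message appears after a user logs in, the punchcard will stay empty.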
## Dashboard not showing results - No module named zmq
If the misp-dashboard does not show results, first check whether the ZMQ module within MISP is properly installed.
In **Administration**, **Plugin Settings**, **ZeroMQ**, check that **Plugin.ZeroMQ_enable** is set to **True**.
Publish a test event from MISP to ZMQ via **Event Actions**, **Publish event to ZMQ**.
Verify the log files:
```
${PATH_TO_MISP}/app/tmp/log/mispzmq.error.log
${PATH_TO_MISP}/app/tmp/log/mispzmq.log
```
If there's an error **ModuleNotFoundError: No module named 'zmq'**, install pyzmq:
```
$SUDO_WWW ${PATH_TO_MISP}/venv/bin/pip install pyzmq
```
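Afterwards, a quick sanity check that the module is importable from MISP's virtualenv (run it with ```${PATH_TO_MISP}/venv/bin/python3```):

```python
# Should print the pyzmq and libzmq versions instead of raising
# ModuleNotFoundError.
import zmq
print(zmq.__version__, zmq.zmq_version())
```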
# zmq_subscriber options
```usage: zmq_subscriber.py [-h] [-n ZMQNAME] [-u ZMQURL]
@@ -151,7 +175,7 @@ optional arguments:
# Deploy in production using mod_wsgi
Install Apache's mod-wsgi for Python3
Install Apache mod-wsgi for Python3
```bash
sudo apt-get install libapache2-mod-wsgi-py3
@@ -166,7 +190,7 @@ The following NEW packages will be installed:
libapache2-mod-wsgi-py3
```
Configuration file `/etc/apache2/sites-available/misp-dashboard.conf` assumes that `misp-dashboard` is cloned into `var/www/misp-dashboard`. It runs as user `misp` in this example. Change the permissions to folder and files accordingly.
Configuration file `/etc/apache2/sites-available/misp-dashboard.conf` assumes that `misp-dashboard` is cloned into `/var/www/misp-dashboard`. It runs as user `misp` in this example. Change the permissions to your custom folder and files accordingly.
```
<VirtualHost *:8001>
@@ -214,33 +238,35 @@ Configuration file `/etc/apache2/sites-available/misp-dashboard.conf` assumes th
# License
~~~~
Copyright (C) 2017-2019 CIRCL - Computer Incident Response Center Luxembourg (c/o smile, security made in Lëtzebuerg, Groupement d'Intérêt Economique)
Copyright (c) 2017-2019 Sami Mokaddem
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
~~~~
Images and logos are handmade for:
- rankingMISPOrg/
- rankingMISPMonthly/
- MISPHonorableIcons/
Note that:
- Part of ```MISPHonorableIcons/1.svg``` comes from [octicons.github.com](https://octicons.github.com/icon/git-pull-request/) (CC0 - No Rights Reserved)
- Part of ```MISPHonorableIcons/2.svg``` comes from [Zeptozephyr](https://zeptozephyr.deviantart.com/art/Vectored-Portal-Icons-207347804) (CC0 - No Rights Reserved)
- Part of ```MISPHonorableIcons/3.svg``` comes from [octicons.github.com](https://octicons.github.com/icon/git-pull-request/) (CC0 - No Rights Reserved)
- Part of ```MISPHonorableIcons/4.svg``` comes from [Zeptozephyr](https://zeptozephyr.deviantart.com/art/Vectored-Portal-Icons-207347804) & [octicons.github.com](https://octicons.github.com/icon/git-pull-request/) (CC0 - No Rights Reserved)
- Part of ```MISPHonorableIcons/5.svg``` comes from [Zeptozephyr](https://zeptozephyr.deviantart.com/art/Vectored-Portal-Icons-207347804) & [octicons.github.com](https://octicons.github.com/icon/git-pull-request/) (CC0 - No Rights Reserved)
```
Copyright (C) 2017-2018 CIRCL - Computer Incident Response Center Luxembourg (c/o smile, security made in Lëtzebuerg, Groupement d'Intérêt Economique)
Copyright (c) 2017-2018 Sami Mokaddem
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
```


@@ -1,11 +1,11 @@
#!/usr/bin/env python3
from pprint import pprint
import os
import redis
import configparser
import argparse
import configparser
import os
from pprint import pprint
import redis
RED="\033[91m"
GREEN="\033[92m"


@@ -62,6 +62,7 @@ tcp-backlog 511
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1
bind 127.0.0.1 ::1
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen


@@ -1,6 +1,7 @@
[Server]
host = localhost
port = 8001
debug = False
[Dashboard]
#hours
@@ -16,7 +17,7 @@ size_dashboard_left_width = 5
size_openStreet_pannel_perc = 55
size_world_pannel_perc = 35
item_to_plot = Attribute.category
fieldname_order=["Event.id", "Attribute.Tag", "Attribute.category", "Attribute.type", ["Attribute.value", "Attribute.comment"]]
fieldname_order=["Attribute.timestamp", "Event.id", "Attribute.Tag", "Attribute.category", "Attribute.type", ["Attribute.value", "Attribute.comment"]]
char_separator=||
[GEO]
@@ -33,7 +34,10 @@ additional_help_text = ["Sightings multiplies earned points by 2", "Editing an a
[Log]
directory=logs
filename=logs.log
dispatcher_filename=zmq_dispatcher.log
subscriber_filename=zmq_subscriber.log
helpers_filename=helpers.log
update_filename=updates.log
[RedisGlobal]
host=localhost
@@ -75,3 +79,4 @@ path_countrycode_to_coord_JSON=./data/country_code_lat_long.json
[RedisDB]
db=2
dbVersion=db_version
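These options are read with Python's standard ```configparser```, as done throughout the codebase; a minimal sketch of how the new fields are consumed (paths assumed relative to the repository root):

```python
import configparser

cfg = configparser.ConfigParser()
cfg.read('config/config.cfg')

port = cfg.getint('Server', 'port')                     # 8001
debug = cfg.getboolean('Server', 'debug')               # False
dispatcher_log = cfg.get('Log', 'dispatcher_filename')  # zmq_dispatcher.log
db_version_key = cfg.get('RedisDB', 'dbVersion')        # db_version
```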

diagnostic.py Executable file

@@ -0,0 +1,426 @@
#!/usr/bin/env python3
import os
import sys
import stat
import time
import signal
import functools
import configparser
from pprint import pprint
import subprocess
import diagnostic_util
try:
import redis
import zmq
import json
import flask
import requests
from halo import Halo
except ModuleNotFoundError as e:
print('Dependency not met. Either not in a virtualenv or dependency not installed.')
print(f'- Error: {e}')
sys.exit(1)
'''
Steps:
- check if dependencies exists
- check if virtualenv exists
- check if configuration is up-to-date
- check file permission
- check if redis is running and responding
- check if able to connect to zmq
- check zmq_dispatcher processing queue
- check queue status: filling up / draining
- check if subscriber responding
- check if dispatcher responding
- check if server listening
- check log static endpoint
- check log dynamic endpoint
'''
HOST = 'http://127.0.0.1'
PORT = 8001 # overridden by the configuration file
configuration_file = {}
pgrep_subscriber_output = ''
pgrep_dispatcher_output = ''
signal.signal(signal.SIGALRM, diagnostic_util.timeout_handler)
def humanize(name, isResult=False):
words = name.split('_')
if isResult:
words = words[1:]
words[0] = words[0][0].upper() + words[0][1:]
else:
words[0] = words[0][0].upper() + words[0][1:] + 'ing'
return ' '.join(words)
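# Hypothetical examples of the helper above:
#   humanize('check_redis')                -> 'Checking redis' (spinner label)
#   humanize('check_redis', isResult=True) -> 'Redis'          (result label)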
def add_spinner(_func=None, name='dots'):
def decorator_add_spinner(func):
@functools.wraps(func)
def wrapper_add_spinner(*args, **kwargs):
human_func_name = humanize(func.__name__)
human_func_result = humanize(func.__name__, isResult=True)
flag_skip = False
with Halo(text=human_func_name, spinner=name) as spinner:
result = func(spinner, *args, **kwargs)
if isinstance(result, tuple):
status, output = result
elif isinstance(result, list):
status = result[0]
output = result[1]
elif isinstance(result, bool):
status = result
output = None
else:
status = False
flag_skip = True
spinner.fail(f'{human_func_name} - Function returned unexpected result: {str(result)}')
if not flag_skip:
text = human_func_result
if output is not None and len(output) > 0:
text += f': {output}'
if isinstance(status, bool) and status:
spinner.succeed(text)
elif isinstance(status, bool) and not status:
spinner.fail(text)
else:
if status == 'info':
spinner.info(text)
else:
spinner.warn(text)
return status
return wrapper_add_spinner
if _func is None:
return decorator_add_spinner
else:
return decorator_add_spinner(_func)
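# Each decorated check below returns (status, text): a boolean status renders
# a success/failure spinner, while the strings 'info' or 'warning' (as used by
# check_buffer_queue) render the corresponding neutral spinner state.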
@add_spinner
def check_virtual_environment_and_packages(spinner):
result = os.environ.get('VIRTUAL_ENV')
if result is None:
return (False, 'This diagnostic tool should be started inside a virtual environment.')
else:
if redis.__version__.startswith('2'):
return (False, f'''Redis python client has version {redis.__version__}. Version 3.x required.
\t [inside virtualenv] pip3 install -U redis''')
else:
return (True, '')
@add_spinner
def check_configuration(spinner):
global configuration_file, PORT
configfile = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'config/config.cfg')
cfg = configparser.ConfigParser()
cfg.read(configfile)
configuration_file = cfg
cfg = {s: dict(cfg.items(s)) for s in cfg.sections()}
configfile_default = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'config/config.cfg.default')
cfg_default = configparser.ConfigParser()
cfg_default.read(configfile_default)
cfg_default = {s: dict(cfg_default.items(s)) for s in cfg_default.sections()}
# Check if all fields from config.default exists in config
result, faulties = diagnostic_util.dict_compare(cfg_default, cfg)
if result:
PORT = configuration_file.get("Server", "port")
return (True, '')
else:
return_text = '''Configuration incomplete.
\tUpdate your configuration file `config.cfg`.\n\t Faulty fields:\n'''
for field_name in faulties:
return_text += f'\t\t- {field_name}\n'
return (False, return_text)
@add_spinner(name='dot')
def check_file_permission(spinner):
max_mind_database_path = configuration_file.get('RedisMap', 'pathmaxminddb')
st = os.stat(max_mind_database_path)
all_read_perm = bool(st.st_mode & stat.S_IROTH) # FIXME: permission may be changed
if all_read_perm:
return (True, '')
else:
return (False, 'MaxMind GeoDB might have incorrect read file permissions')
@add_spinner
def check_redis(spinner):
redis_server = redis.StrictRedis(
host=configuration_file.get('RedisGlobal', 'host'),
port=configuration_file.getint('RedisGlobal', 'port'),
db=configuration_file.getint('RedisLog', 'db'))
if redis_server.ping():
return (True, '')
else:
return (False, '''Can\'t reach Redis server.
\t Make sure it is running and adapt your configuration accordingly''')
@add_spinner
def check_zmq(spinner):
timeout = 15
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect(configuration_file.get('RedisGlobal', 'zmq_url'))
socket.setsockopt_string(zmq.SUBSCRIBE, '')
poller = zmq.Poller()
start_time = time.time()
poller.register(socket, zmq.POLLIN)
for t in range(1, timeout+1):
socks = dict(poller.poll(timeout=1*1000))
if len(socks) > 0:
if socket in socks and socks[socket] == zmq.POLLIN:
rcv_string = socket.recv()
if rcv_string.startswith(b'misp_json'):
return (True, '')
else:
pass
spinner.text = f'checking zmq - elapsed time: {int(time.time() - start_time)}s'
else:
return (False, '''Can\'t connect to the ZMQ stream.
\t Make sure the MISP ZMQ is running: `/servers/serverSettings/diagnostics`
\t Make sure your network infrastructure allows you to connect to the ZMQ''')
@add_spinner
def check_processes_status(spinner):
global pgrep_subscriber_output, pgrep_dispatcher_output
response = subprocess.check_output(
["pgrep", "-laf", "zmq_"],
universal_newlines=True
)
for line in response.splitlines():
lines = line.split(' ')
if len(lines) == 2:
pid, p_name = lines
elif len(lines) == 3:
pid, _, p_name = lines
if 'zmq_subscriber.py' in p_name:
pgrep_subscriber_output = line
elif 'zmq_dispatcher.py' in p_name:
pgrep_dispatcher_output = line
if len(pgrep_subscriber_output) == 0:
return (False, 'zmq_subscriber is not running')
elif len(pgrep_dispatcher_output) == 0:
return (False, 'zmq_dispatcher is not running')
else:
return (True, 'Both processes are running')
@add_spinner
def check_subscriber_status(spinner):
global pgrep_subscriber_output
pool = redis.ConnectionPool(
host=configuration_file.get('RedisGlobal', 'host'),
port=configuration_file.getint('RedisGlobal', 'port'),
db=configuration_file.getint('RedisLIST', 'db'),
decode_responses=True)
monitor = diagnostic_util.Monitor(pool)
commands = monitor.monitor()
start_time = time.time()
signal.alarm(15)
try:
for i, c in enumerate(commands):
if i == 0: # Skip 'OK'
continue
split = c.split()
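# Redis MONITOR lines look like:
#   '<timestamp> [<db> <addr>] "LPUSH" "<listName>" "<payload>"'
# hence the command sits at split[3] and the key at split[4].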
try:
action = split[3]
target = split[4]
except IndexError:
pass
if action == '"LPUSH"' and target == f'\"{configuration_file.get("RedisLIST", "listName")}\"':
signal.alarm(0)
break
else:
spinner.text = f'Checking subscriber status - elapsed time: {int(time.time() - start_time)}s'
except diagnostic_util.TimeoutException:
return_text = f'''zmq_subscriber seems not to be working.
\t Consider restarting it: {pgrep_subscriber_output}'''
return (False, return_text)
return (True, 'subscriber is running and populating the buffer')
@add_spinner
def check_buffer_queue(spinner):
redis_server = redis.StrictRedis(
host=configuration_file.get('RedisGlobal', 'host'),
port=configuration_file.getint('RedisGlobal', 'port'),
db=configuration_file.getint('RedisLIST', 'db'))
warning_threshold = 100
elements_in_list = redis_server.llen(configuration_file.get('RedisLIST', 'listName'))
return_status = 'warning' if elements_in_list > warning_threshold else ('info' if elements_in_list > 0 else True)
return_text = f'Currently {elements_in_list} items in the buffer'
return (return_status, return_text)
@add_spinner
def check_buffer_change_rate(spinner):
redis_server = redis.StrictRedis(
host=configuration_file.get('RedisGlobal', 'host'),
port=configuration_file.getint('RedisGlobal', 'port'),
db=configuration_file.getint('RedisLIST', 'db'))
time_slept = 0
sleep_duration = 0.001
sleep_max = 10.0
refresh_frequency = 1.0
next_refresh = 0
change_increase = 0
change_decrease = 0
elements_in_list_prev = 0
elements_in_list = int(redis_server.llen(configuration_file.get('RedisLIST', 'listName')))
elements_in_inlist_init = elements_in_list
consecutive_no_rate_change = 0
while True:
elements_in_list_prev = elements_in_list
elements_in_list = int(redis_server.llen(configuration_file.get('RedisLIST', 'listName')))
change_increase += elements_in_list - elements_in_list_prev if elements_in_list - elements_in_list_prev > 0 else 0
change_decrease += elements_in_list_prev - elements_in_list if elements_in_list_prev - elements_in_list > 0 else 0
if next_refresh < time_slept:
next_refresh = time_slept + refresh_frequency
change_rate_text = f'{change_increase}/sec\t{change_decrease}/sec'
spinner.text = f'Buffer: {elements_in_list}\t{change_rate_text}'
if consecutive_no_rate_change == 3:
time_slept = sleep_max
if elements_in_list == 0:
consecutive_no_rate_change += 1
else:
consecutive_no_rate_change = 0
change_increase = 0
change_decrease = 0
if time_slept >= sleep_max:
return_flag = elements_in_list == 0 or (elements_in_list < elements_in_inlist_init or elements_in_list < 2)
return_text = f'Buffer is consumed {"faster" if return_flag else "slower" } than being populated'
break
time.sleep(sleep_duration)
time_slept += sleep_duration
elements_in_inlist_final = int(redis_server.llen(configuration_file.get('RedisLIST', 'listName')))
return (return_flag, return_text)
@add_spinner
def check_dispatcher_status(spinner):
redis_server = redis.StrictRedis(
host=configuration_file.get('RedisGlobal', 'host'),
port=configuration_file.getint('RedisGlobal', 'port'),
db=configuration_file.getint('RedisLIST', 'db'))
content = {'content': time.time()}
redis_server.rpush(configuration_file.get('RedisLIST', 'listName'),
json.dumps({'zmq_name': 'diagnostic_channel', 'content': 'diagnostic_channel ' + json.dumps(content)})
)
return_flag = False
return_text = ''
time_slept = 0
sleep_duration = 0.2
sleep_max = 10.0
redis_server.delete('diagnostic_tool_response')
while True:
reply = redis_server.get('diagnostic_tool_response')
elements_in_list = redis_server.llen(configuration_file.get('RedisLIST', 'listName'))
if reply is None:
if time_slept >= sleep_max:
return_flag = False
return_text = f'zmq_dispatcher did not respond in the given time ({int(sleep_max)}s)'
if len(pgrep_dispatcher_output) > 0:
return_text += f'\n\t➥ Consider restarting it: {pgrep_dispatcher_output}'
else:
return_text += '\n\t➥ Consider starting it'
break
time.sleep(sleep_duration)
spinner.text = 'Dispatcher status: No response yet'
time_slept += sleep_duration
else:
return_flag = True
return_text = f'Took {float(reply):.2f}s to complete'
break
return (return_flag, return_text)
@add_spinner
def check_server_listening(spinner):
url = f'{HOST}:{PORT}/_get_log_head'
spinner.text = f'Trying to connect to {url}'
try:
r = requests.get(url)
except requests.exceptions.ConnectionError:
return (False, f'Can\'t connect to {url}')
return (
r.status_code == 200,
f'{url} {"not " if r.status_code != 200 else ""}reached. Status code [{r.status_code}]'
)
@add_spinner
def check_server_dynamic_endpoint(spinner):
sleep_max = 15
start_time = time.time()
url = f'{HOST}:{PORT}/_logs'
p = subprocess.Popen(
['curl', '-sfN', '--header', 'Accept: text/event-stream', url],
stdout=subprocess.PIPE,
bufsize=1)
signal.alarm(sleep_max)
return_flag = False
return_text = 'Dynamic endpoint returned data but not in the correct format.'
try:
for line in iter(p.stdout.readline, b''):
if line.startswith(b'data: '):
data = line[6:]
try:
j = json.loads(data)
return_flag = True
return_text = f'Dynamic endpoint returned data (took {time.time()-start_time:.2f}s)'
signal.alarm(0)
break
except Exception as e:
return_flag = False
return_text = f'Something went wrong. Output {line}'
break
except diagnostic_util.TimeoutException:
return_text = f'Dynamic endpoint did not return data in the given time ({int(time.time()-start_time)}sec)'
return (return_flag, return_text)
def start_diagnostic():
if not (check_virtual_environment_and_packages() and check_configuration()):
return
check_file_permission()
check_redis()
check_zmq()
check_processes_status()
check_subscriber_status()
if check_buffer_queue() is not True:
check_buffer_change_rate()
dispatcher_running = check_dispatcher_status()
if check_server_listening() and dispatcher_running:
check_server_dynamic_endpoint()
def main():
start_diagnostic()
if __name__ == '__main__':
main()

diagnostic_util.py Normal file

@@ -0,0 +1,68 @@
import configparser
def dict_compare(dict1, dict2, itercount=0):
dict1_keys = set(dict1.keys())
dict2_keys = set(dict2.keys())
missing_keys = dict1_keys.difference(dict2_keys)
faulties = []
if itercount > 0 and len(missing_keys) > 0:
return (False, list(missing_keys))
flag_no_error = True
for k, v in dict1.items():
if isinstance(v, dict):
if k not in dict2:
faulties.append({k: dict1[k]})
flag_no_error = False
else:
status, faulty = dict_compare(v, dict2[k], itercount+1)
flag_no_error = flag_no_error and status
if len(faulty) > 0:
faulties.append({k: faulty})
else:
return (True, [])
if flag_no_error:
return (True, [])
else:
return (False, faulties)
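# Hypothetical illustration: comparing the default configuration against a
# user configuration that lacks the new [Server] 'debug' option:
#   dict_compare({'Server': {'port': '8001', 'debug': 'False'}},
#                {'Server': {'port': '8001'}})
#   -> (False, [{'Server': ['debug']}])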
class TimeoutException(Exception):
pass
def timeout_handler(signum, frame):
raise TimeoutException
# https://stackoverflow.com/a/10464730
class Monitor():
def __init__(self, connection_pool):
self.connection_pool = connection_pool
self.connection = None
def __del__(self):
try:
self.reset()
except Exception:
pass
def reset(self):
if self.connection:
self.connection_pool.release(self.connection)
self.connection = None
def monitor(self):
if self.connection is None:
self.connection = self.connection_pool.get_connection(
'monitor', None)
self.connection.send_command("monitor")
return self.listen()
def parse_response(self):
return self.connection.read_response()
def listen(self):
while True:
yield self.parse_response()
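A minimal usage sketch for the Monitor class above, assuming a local Redis on port 6250 (as suggested in the README) and a hypothetical buffer database number; adapt to your ```[RedisLIST]``` settings:

```python
import redis

from diagnostic_util import Monitor

pool = redis.ConnectionPool(host='localhost', port=6250, db=3,
                            decode_responses=True)
monitor = Monitor(pool)
for command in monitor.monitor():
    # Prints every command the Redis server processes, e.g. the LPUSH
    # issued by zmq_subscriber when it buffers a new message.
    print(command)
```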


@@ -1,9 +1,13 @@
#!/usr/bin/env python3.5
import os, sys, json
import datetime, time
import redis
import configparser
import datetime
import json
import os
import sys
import time
import redis
import util
from helpers import contributor_helper
@@ -206,7 +210,7 @@ def main():
for award in awards_given:
# update awards given
serv_redis_db.zadd('CONTRIB_LAST_AWARDS:'+util.getDateStrFormat(now), nowSec, json.dumps({'org': org, 'award': award, 'epoch': nowSec }))
serv_redis_db.zadd('CONTRIB_LAST_AWARDS:'+util.getDateStrFormat(now), {json.dumps({'org': org, 'award': award, 'epoch': nowSec }): nowSec})
serv_redis_db.expire('CONTRIB_LAST_AWARDS:'+util.getDateStrFormat(now), ONE_DAY*7) #expire after 7 day
# publish
publish_log('GIVE_HONOR_ZMQ', 'CONTRIBUTION', {'org': org, 'award': award, 'epoch': nowSec }, CHANNEL_LASTAWARDS)
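The new ```zadd``` call shape follows the redis-py 3.x API, which this commit migrates to throughout (see also the ```zincrby``` argument swaps and the dropped ```.decode('utf8')``` calls in favour of ```decode_responses=True```). A minimal sketch of the old versus new calls, with hypothetical key and member names:

```python
import redis

r = redis.StrictRedis(host='localhost', port=6250, db=2, decode_responses=True)

# redis-py 2.x: r.zadd(key, score, member)
# redis-py 3.x: takes a {member: score} mapping instead
r.zadd('CONTRIB_LAST_AWARDS:20190621', {'CIRCL': 1561113564})

# redis-py 2.x: r.zincrby(key, member, amount)
# redis-py 3.x: the amount comes before the member
r.zincrby('CONTRIB_DAY:20190621', 1, 'CIRCL')

# With decode_responses=True, replies are str rather than bytes,
# so no manual .decode('utf8') is needed.
print(r.zrange('CONTRIB_DAY:20190621', 0, -1, withscores=True))
```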


@@ -1,16 +1,20 @@
import util
from util import getZrange
import math, random
import time
import os
import configparser
import json
import datetime
import json
import logging
import math
import os
import random
import sys
import time
import redis
import util
from util import getZrange
from . import users_helper
KEYDAY = "CONTRIB_DAY" # To be used by other module
KEYALLORG = "CONTRIB_ALL_ORG" # To be used by other module
@@ -30,11 +34,16 @@ class Contributor_helper:
#logger
logDir = cfg.get('Log', 'directory')
logfilename = cfg.get('Log', 'filename')
logfilename = cfg.get('Log', 'helpers_filename')
logPath = os.path.join(logDir, logfilename)
if not os.path.exists(logDir):
os.makedirs(logDir)
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
try:
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.logger = logging.getLogger(__name__)
#honorBadge
@@ -106,7 +115,7 @@ class Contributor_helper:
def addContributionToCateg(self, date, categ, org, count=1):
today_str = util.getDateStrFormat(date)
keyname = "{}:{}:{}".format(self.keyCateg, today_str, categ)
self.serv_redis_db.zincrby(keyname, org, count)
self.serv_redis_db.zincrby(keyname, count, org)
self.logger.debug('Added to redis: keyname={}, org={}, count={}'.format(keyname, org, count))
def publish_log(self, zmq_name, name, content, channel=""):
@@ -120,14 +129,14 @@ class Contributor_helper:
if action in ['edit', None]:
pass
#return #not a contribution?
now = datetime.datetime.now()
nowSec = int(time.time())
pnts_to_add = self.default_pnts_per_contribution
# Do not consider contribution as login anymore
#self.users_helper.add_user_login(nowSec, org)
# is a valid contribution
if categ is not None:
try:
@@ -135,27 +144,27 @@ class Contributor_helper:
except KeyError:
pnts_to_add = self.default_pnts_per_contribution
pnts_to_add *= pntMultiplier
util.push_to_redis_zset(self.serv_redis_db, self.keyDay, org, count=pnts_to_add)
#CONTRIB_CATEG retain the contribution per category, not the point earned in this categ
util.push_to_redis_zset(self.serv_redis_db, self.keyCateg, org, count=1, endSubkey=':'+util.noSpaceLower(categ))
self.publish_log(zmq_name, 'CONTRIBUTION', {'org': org, 'categ': categ, 'action': action, 'epoch': nowSec }, channel=self.CHANNEL_LASTCONTRIB)
else:
categ = ""
self.serv_redis_db.sadd(self.keyAllOrg, org)
keyname = "{}:{}".format(self.keyLastContrib, util.getDateStrFormat(now))
self.serv_redis_db.zadd(keyname, nowSec, org)
self.serv_redis_db.zadd(keyname, {org: nowSec})
self.logger.debug('Added to redis: keyname={}, nowSec={}, org={}'.format(keyname, nowSec, org))
self.serv_redis_db.expire(keyname, util.ONE_DAY*7) #expire after 7 day
awards_given = self.updateOrgContributionRank(org, pnts_to_add, action, contribType, eventTime=datetime.datetime.now(), isLabeled=isLabeled, categ=util.noSpaceLower(categ))
for award in awards_given:
# update awards given
keyname = "{}:{}".format(self.keyLastAward, util.getDateStrFormat(now))
self.serv_redis_db.zadd(keyname, nowSec, json.dumps({'org': org, 'award': award, 'epoch': nowSec }))
self.serv_redis_db.zadd(keyname, {json.dumps({'org': org, 'award': award, 'epoch': nowSec }): nowSec})
self.logger.debug('Added to redis: keyname={}, nowSec={}, content={}'.format(keyname, nowSec, json.dumps({'org': org, 'award': award, 'epoch': nowSec })))
self.serv_redis_db.expire(keyname, util.ONE_DAY*7) #expire after 7 day
# publish
@@ -168,7 +177,7 @@ class Contributor_helper:
if pnts is None:
pnts = 0
else:
pnts = int(pnts.decode('utf8'))
pnts = int(pnts)
return pnts
# return: [final_rank, requirement_fulfilled, requirement_not_fulfilled]
@@ -372,7 +381,7 @@ class Contributor_helper:
def getOrgsTrophyRanking(self, categ):
keyname = '{mainKey}:{orgCateg}'
res = self.serv_redis_db.zrange(keyname.format(mainKey=self.keyTrophy, orgCateg=categ), 0, -1, withscores=True, desc=True)
res = [[org.decode('utf8'), score] for org, score in res]
res = [[org, score] for org, score in res]
return res
def getAllOrgsTrophyRanking(self, category=None):
@@ -401,12 +410,12 @@ class Contributor_helper:
def giveTrophyPointsToOrg(self, org, categ, points):
keyname = '{mainKey}:{orgCateg}'
self.serv_redis_db.zincrby(keyname.format(mainKey=self.keyTrophy, orgCateg=categ), org, points)
self.serv_redis_db.zincrby(keyname.format(mainKey=self.keyTrophy, orgCateg=categ), points, org)
self.logger.debug('Giving {} trophy points to {} in {}'.format(points, org, categ))
def removeTrophyPointsFromOrg(self, org, categ, points):
keyname = '{mainKey}:{orgCateg}'
self.serv_redis_db.zincrby(keyname.format(mainKey=self.keyTrophy, orgCateg=categ), org, -points)
self.serv_redis_db.zincrby(keyname.format(mainKey=self.keyTrophy, orgCateg=categ), -points, org)
self.logger.debug('Removing {} trophy points from {} in {}'.format(points, org, categ))
''' AWARDS HELPER '''
@@ -553,7 +562,7 @@ class Contributor_helper:
def getAllOrgFromRedis(self):
data = self.serv_redis_db.smembers(self.keyAllOrg)
data = [x.decode('utf8') for x in data]
data = [x for x in data]
return data
def getCurrentOrgRankFromRedis(self, org):
@@ -589,4 +598,3 @@ class Contributor_helper:
return { 'remainingPts': i-points, 'stepPts': prev }
prev = i
return { 'remainingPts': 0, 'stepPts': self.rankMultiplier**self.levelMax }


@@ -1,18 +1,22 @@
import math, random
import os
import datetime
import json
import datetime, time
import logging
import json
import redis
import math
import os
import random
import sys
import time
from collections import OrderedDict
import geoip2.database
import phonenumbers, pycountry
from phonenumbers import geocoder
import redis
import geoip2.database
import phonenumbers
import pycountry
import util
from helpers import live_helper
from phonenumbers import geocoder
class InvalidCoordinate(Exception):
pass
@@ -29,11 +33,16 @@ class Geo_helper:
#logger
logDir = cfg.get('Log', 'directory')
logfilename = cfg.get('Log', 'filename')
logfilename = cfg.get('Log', 'helpers_filename')
logPath = os.path.join(logDir, logfilename)
if not os.path.exists(logDir):
os.makedirs(logDir)
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
try:
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.logger = logging.getLogger(__name__)
self.keyCategCoord = "GEO_COORD"
@@ -43,7 +52,12 @@ class Geo_helper:
self.PATH_TO_JSON = cfg.get('RedisMap', 'path_countrycode_to_coord_JSON')
self.CHANNELDISP = cfg.get('RedisMap', 'channelDisp')
self.reader = geoip2.database.Reader(self.PATH_TO_DB)
try:
self.reader = geoip2.database.Reader(self.PATH_TO_DB)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.country_to_iso = { country.name: country.alpha_2 for country in pycountry.countries}
with open(self.PATH_TO_JSON) as f:
self.country_code_to_coord = json.load(f)
@@ -125,7 +139,7 @@ class Geo_helper:
self.live_helper.add_to_stream_log_cache('Map', j_to_send)
self.logger.info('Published: {}'.format(json.dumps(to_send)))
except ValueError:
self.logger.warning("can't resolve ip")
self.logger.warning("Can't resolve IP: " + str(supposed_ip))
except geoip2.errors.AddressNotFoundError:
self.logger.warning("Address not in Database")
except InvalidCoordinate:
@@ -181,13 +195,18 @@ class Geo_helper:
now = datetime.datetime.now()
today_str = util.getDateStrFormat(now)
keyname = "{}:{}".format(keyCateg, today_str)
self.serv_redis_db.geoadd(keyname, lon, lat, content)
try:
self.serv_redis_db.geoadd(keyname, lon, lat, content)
except redis.exceptions.ResponseError as error:
print(error)
print("Please fix the above, and make sure you use a redis version that supports the GEOADD command.")
print("To test for support: echo \"help GEOADD\"| redis-cli")
self.logger.debug('Added to redis: keyname={}, lon={}, lat={}, content={}'.format(keyname, lon, lat, content))
def push_to_redis_zset(self, keyCateg, toAdd, endSubkey="", count=1):
now = datetime.datetime.now()
today_str = util.getDateStrFormat(now)
keyname = "{}:{}{}".format(keyCateg, today_str, endSubkey)
self.serv_redis_db.zincrby(keyname, toAdd, count)
self.serv_redis_db.zincrby(keyname, count, toAdd)
self.logger.debug('Added to redis: keyname={}, toAdd={}, count={}'.format(keyname, toAdd, count))
def ip_to_coord(self, ip):


@@ -1,8 +1,10 @@
import os
import datetime
import json
import random
import datetime, time
import logging
import os
import random
import sys
import time
class Live_helper:
@@ -16,11 +18,16 @@ class Live_helper:
# logger
logDir = cfg.get('Log', 'directory')
logfilename = cfg.get('Log', 'filename')
logfilename = cfg.get('Log', 'helpers_filename')
logPath = os.path.join(logDir, logfilename)
if not os.path.exists(logDir):
os.makedirs(logDir)
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
try:
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.logger = logging.getLogger(__name__)
def publish_log(self, zmq_name, name, content, channel=None):
@@ -32,6 +39,7 @@ class Live_helper:
self.serv_live.publish(channel, j_to_send)
self.logger.debug('Published: {}'.format(j_to_send))
if name != 'Keepalive':
name = 'Attribute' if name == 'ObjectAttribute' else name
self.add_to_stream_log_cache(name, j_to_send_keep)
@@ -40,10 +48,10 @@ class Live_helper:
entries = self.serv_live.lrange(rKey, 0, -1)
to_ret = []
for entry in entries:
jentry = json.loads(entry.decode('utf8'))
jentry = json.loads(entry)
to_ret.append(jentry)
return to_ret
def add_to_stream_log_cache(self, cacheKey, item):
rKey = self.prefix_redis_key+cacheKey


@@ -1,13 +1,17 @@
import math, random
import os
import json
import copy
import datetime, time
import datetime
import json
import logging
import math
import os
import random
import sys
import time
from collections import OrderedDict
import util
class Trendings_helper:
def __init__(self, serv_redis_db, cfg):
self.serv_redis_db = serv_redis_db
@@ -23,11 +27,16 @@ class Trendings_helper:
#logger
logDir = cfg.get('Log', 'directory')
logfilename = cfg.get('Log', 'filename')
logfilename = cfg.get('Log', 'helpers_filename')
logPath = os.path.join(logDir, logfilename)
if not os.path.exists(logDir):
os.makedirs(logDir)
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
try:
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.logger = logging.getLogger(__name__)
''' SETTER '''
@@ -40,7 +49,7 @@ class Trendings_helper:
to_save = json.dumps(data)
else:
to_save = data
self.serv_redis_db.zincrby(keyname, to_save, 1)
self.serv_redis_db.zincrby(keyname, 1, to_save)
self.logger.debug('Added to redis: keyname={}, content={}'.format(keyname, to_save))
def addTrendingEvent(self, eventName, timestamp):
@@ -82,7 +91,7 @@ class Trendings_helper:
for curDate in util.getXPrevDaysSpan(dateE, prev_days):
keyname = "{}:{}".format(trendingType, util.getDateStrFormat(curDate))
data = self.serv_redis_db.zrange(keyname, 0, -1, desc=True, withscores=True)
data = [ [record[0].decode('utf8'), record[1]] for record in data ]
data = [ [record[0], record[1]] for record in data ]
data = data if data is not None else []
to_ret.append([util.getTimestamp(curDate), data])
to_ret = util.sortByTrendingScore(to_ret, topNum=topNum)
@@ -115,7 +124,7 @@ class Trendings_helper:
for curDate in util.getXPrevDaysSpan(dateE, prev_days):
keyname = "{}:{}".format(self.keyTag, util.getDateStrFormat(curDate))
data = self.serv_redis_db.zrange(keyname, 0, topNum-1, desc=True, withscores=True)
data = [ [record[0].decode('utf8'), record[1]] for record in data ]
data = [ [record[0], record[1]] for record in data ]
data = data if data is not None else []
temp = []
for jText, score in data:
@@ -130,10 +139,10 @@ class Trendings_helper:
for curDate in util.getXPrevDaysSpan(dateE, prev_days):
keyname = "{}:{}".format(self.keySigh, util.getDateStrFormat(curDate))
sight = self.serv_redis_db.get(keyname)
sight = 0 if sight is None else int(sight.decode('utf8'))
sight = 0 if sight is None else int(sight)
keyname = "{}:{}".format(self.keyFalse, util.getDateStrFormat(curDate))
fp = self.serv_redis_db.get(keyname)
fp = 0 if fp is None else int(fp.decode('utf8'))
fp = 0 if fp is None else int(fp)
to_ret.append([util.getTimestamp(curDate), { 'sightings': sight, 'false_positive': fp}])
return to_ret
@@ -149,7 +158,7 @@ class Trendings_helper:
keyname = "{}:{}".format(trendingType, util.getDateStrFormat(curDate))
data = self.serv_redis_db.zrange(keyname, 0, -1, desc=True)
for elem in data:
allSet.add(elem.decode('utf8'))
allSet.add(elem)
to_ret[trendingType] = list(allSet)
tags = self.getTrendingTags(dateS, dateE)
tagSet = set()
@@ -178,7 +187,7 @@ class Trendings_helper:
for curDate in util.getXPrevDaysSpan(dateE, prev_days):
keyname = "{}:{}".format(trendingType, util.getDateStrFormat(curDate))
data = self.serv_redis_db.zrange(keyname, 0, topNum-1, desc=True, withscores=True)
data = [ [record[0].decode('utf8'), record[1]] for record in data ]
data = [ [record[0], record[1]] for record in data ]
data = data if data is not None else []
to_format.append([util.getTimestamp(curDate), data])


@@ -1,10 +1,14 @@
import math, random
import os
import datetime
import json
import datetime, time
import logging
import math
import os
import random
import sys
import time
import util
from . import contributor_helper
@@ -20,11 +24,16 @@ class Users_helper:
#logger
logDir = cfg.get('Log', 'directory')
logfilename = cfg.get('Log', 'filename')
logfilename = cfg.get('Log', 'helpers_filename')
logPath = os.path.join(logDir, logfilename)
if not os.path.exists(logDir):
os.makedirs(logDir)
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
try:
logging.basicConfig(filename=logPath, filemode='a', level=logging.INFO)
except PermissionError as error:
print(error)
print("Please fix the above and try again.")
sys.exit(126)
self.logger = logging.getLogger(__name__)
def add_user_login(self, timestamp, org, email=''):
@@ -32,11 +41,11 @@ class Users_helper:
timestampDate_str = util.getDateStrFormat(timestampDate)
keyname_timestamp = "{}:{}".format(self.keyTimestamp, org)
self.serv_redis_db.zadd(keyname_timestamp, timestamp, timestamp)
self.serv_redis_db.zadd(keyname_timestamp, {timestamp: timestamp})
self.logger.debug('Added to redis: keyname={}, org={}'.format(keyname_timestamp, timestamp))
keyname_org = "{}:{}".format(self.keyOrgLog, timestampDate_str)
self.serv_redis_db.zincrby(keyname_org, org, 1)
self.serv_redis_db.zincrby(keyname_org, 1, org)
self.logger.debug('Added to redis: keyname={}, org={}'.format(keyname_org, org))
self.serv_redis_db.sadd(self.keyAllOrgLog, org)
@@ -44,7 +53,7 @@ class Users_helper:
def getAllOrg(self):
temp = self.serv_redis_db.smembers(self.keyAllOrgLog)
return [ org.decode('utf8') for org in temp ]
return [ org for org in temp ]
# return: All timestamps for one org for the spanned time or not
def getDates(self, org, date=None):
@@ -63,11 +72,11 @@ class Users_helper:
else:
break # timestamps should be sorted, no need to process anymore
return to_return
# return: All dates for all orgs, if date is not supplied, return for all dates
def getUserLogins(self, date=None):
# get all orgs and retreive their timestamps
# get all orgs and retrieve their timestamps
dates = []
for org in self.getAllOrg():
keyname = "{}:{}".format(self.keyOrgLog, org)
@@ -81,7 +90,7 @@ class Users_helper:
keyname = "{}:{}".format(self.keyOrgLog, util.getDateStrFormat(curDate))
data = self.serv_redis_db.zrange(keyname, 0, -1, desc=True)
for org in data:
orgs.add(org.decode('utf8'))
orgs.add(org)
return list(orgs)
# return: list composed of the number of [log, contrib] for one org for the time spanned
@@ -125,7 +134,7 @@ class Users_helper:
def getLoginVSCOntribution(self, date):
keyname = "{}:{}".format(self.keyContribDay, util.getDateStrFormat(date))
orgs_contri = self.serv_redis_db.zrange(keyname, 0, -1, desc=True, withscores=False)
orgs_contri = [ org.decode('utf8') for org in orgs_contri ]
orgs_contri = [ org for org in orgs_contri ]
orgs_login = [ org for org in self.getAllLoggedInOrgs(date, prev_days=0) ]
contributed_num = 0
non_contributed_num = 0
@@ -169,7 +178,7 @@ class Users_helper:
data = [data[6]]+data[:6]
return data
# return: a dico of the form {login: [[timestamp, count], ...], contrib: [[timestamp, 1/0], ...]}
# return: a dico of the form {login: [[timestamp, count], ...], contrib: [[timestamp, 1/0], ...]}
# either for all orgs or the supplied one
def getUserLoginsAndContribOvertime(self, date, org=None, prev_days=6):
dico_hours_contrib = {}


@@ -1,6 +1,9 @@
#!/bin/bash
set -e
## disable -e for production systems
#set -e
## Debug mode
#set -x
sudo chmod -R g+w .
@@ -15,7 +18,13 @@ fi
sudo apt-get install python3-virtualenv virtualenv screen redis-server unzip -y
if [ -z "$VIRTUAL_ENV" ]; then
virtualenv -p python3 DASHENV
virtualenv -p python3 DASHENV ; DASH_VENV=$?