Setting up NOCLook
This guide is written for Ubuntu 14.04.
Basic requirements
Start by installing some basic requirements and creating a new user. Run these commands as a superuser.
$ sudo apt-get install python-setuptools git libpq-dev postgresql python-dev postfix nginx-full uwsgi uwsgi-plugin-python libffi-dev
$ sudo easy_install pip
$ sudo pip install virtualenv
$ sudo adduser --disabled-password --home /var/opt/norduni ni
We are using PostgreSQL, but you can use any SQL database that Django supports. See the Django database documentation for other supported SQL databases.
Neo4j database
Oracle java is recommended for Neo4j.
$ sudo apt-add-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
Download neo4j-community from http://neo4j.com/download/. NORDUnet and SUNET run 2.1.8. Version 2.3.2 has been tested and did not work as expected.
$ tar xvfz neo4j-community-2.1.8-unix.tar.gz
$ sudo mv neo4j-community-2.1.8 /var/opt/.
$ sudo ln -s /var/opt/neo4j-community-2.1.8 /var/opt/neo4j-community
$ cd /var/opt/neo4j-community
$ sudo ./bin/neo4j-installer install
Set property keys to auto-index in Neo4j.
$ sudo vi /var/opt/neo4j-community/conf/neo4j.properties
Add or update the following lines.
# Autoindexing

# Enable auto-indexing for nodes, default is false
node_auto_indexing=true

# The node property keys to be auto-indexed, if enabled
node_keys_indexable=name, description, ip_address, ip_addresses, as_number, hostname, hostnames, telenor_tn1_number, nordunet_id, version

# Enable auto-indexing for relationships, default is false
relationship_auto_indexing=true

# The relationship property keys to be auto-indexed, if enabled
relationship_keys_indexable=ip_address
Increase the number of files the neo4j user may have open concurrently. A restart is required for the settings to take effect.
$ sudo vi /etc/security/limits.conf
Add the lines below to limits.conf.
# User neo4j allowed concurrent files
neo4j soft nofile 40000
neo4j hard nofile 40000
$ sudo vi /etc/pam.d/su
Uncomment the following line.
session required pam_limits.so
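Restart the Neo4j service so the new limits take effect (assuming a service restart is enough; a full reboot also works):
$ sudo service neo4j-service restart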
After the restart the neo4j-service should be running.
$ sudo service neo4j-service status
Neo4j Server is running at pid 1475
Create full text index for nodes and relationships.
$ curl -D - -H "Content-Type: application/json" --data '{"name" : "node_auto_index","config" : {"type" : "fulltext","provider" : "lucene"}}' -X POST http://localhost:7474/db/data/index/node/
HTTP/1.1 201 Created
*snip*

$ curl -D - -H "Content-Type: application/json" --data '{"name" : "relationship_auto_index","config" : {"type" : "fulltext","provider" : "lucene"}}' -X POST http://localhost:7474/db/data/index/relationship/
HTTP/1.1 201 Created
*snip*
Postgres database
Set a password for the database user and create a new database.
$ sudo -u postgres psql postgres
template1=# CREATE USER ni with PASSWORD 'secret';
template1=# CREATE DATABASE norduni;
template1=# GRANT ALL PRIVILEGES ON DATABASE norduni to ni;
template1=# ALTER DATABASE norduni OWNER TO ni;
# Allow user ni to drop and create for restoring and development purposes
template1=# ALTER USER ni CREATEDB;
template1=# \q
NORDUni repository
Get the NORDUni files.
$ sudo -u ni -i
$ pwd
/var/opt/norduni
$ git clone git://git.nordu.net/norduni.git
Python environment
Make a virtual python environment.
$ virtualenv norduni_environment
Using a virtual environment is also just a suggestion, but it makes it easier to keep your system clean.
Python requirements
Install required python modules.
$ . norduni_environment/bin/activate
$ pip install -r norduni/requirements/prod.txt
Django settings
Change the Django settings.
$ cd norduni/src/niweb/
$ cp dotenv .env
$ vi .env
The following settings need to be changed.
REPORTS_TO=
DB_PASSWORD=
DEFAULT_FROM_EMAIL=
EMAIL_HOST=
SECRET_KEY=
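Hypothetical example values; adjust them to your environment, and make sure DB_PASSWORD matches the password set for the ni database user above:
REPORTS_TO=noc@example.org
DB_PASSWORD=secret
DEFAULT_FROM_EMAIL=noclook@example.org
EMAIL_HOST=localhost
SECRET_KEY=some-long-random-string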
Check if your settings are ok.
$ python manage.py syncdb
$ python manage.py migrate apps.noclook
$ python manage.py migrate actstream
$ python manage.py migrate tastypie
$ python manage.py collectstatic
$ python manage.py runserver
Now you should be able to connect to the machine with your browser at http://localhost:8000 and see the NOCLook app index page.
Deploying NOCLook
uwsgi
Create a uwsgi configuration file.
$ sudo vi /etc/uwsgi/apps-available/noclook.ini
The following configuration should be a good start.
[uwsgi]
# Django-related settings
plugins = python
protocol = uwsgi

# the base directory (full path)
chdir = /var/opt/norduni/norduni/src/niweb/

# Django's wsgi file
wsgi-file = /var/opt/norduni/norduni/src/niweb/niweb/wsgi.py
env = DJANGO_SETTINGS_MODULE=niweb.settings.prod

# the virtualenv (full path)
home = /var/opt/norduni/norduni_environment

# logging
daemonize = /var/log/uwsgi/app/noclook.log

# process-related settings
# master
master = true

# maximum number of worker processes
processes = 5
#threads = 2
max-requests = 5000

# the socket (use the full path to be safe)
socket = 127.0.0.1:8001

# clear environment on exit
vacuum = true
Link the configuration into the correct directory.
sudo ln -s /etc/uwsgi/apps-available/noclook.ini /etc/uwsgi/apps-enabled/noclook.ini
Make the temp dir and log dir writable by the uwsgi user (www-data on Ubuntu).
sudo chown -R ni:www-data /tmp/django_cache
sudo chmod -R g+w /tmp/django_cache
sudo chown -R ni:www-data /var/opt/norduni/norduni/src/niweb/logs/
sudo chmod -R g+w /var/opt/norduni/norduni/src/niweb/logs/
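If /tmp/django_cache does not exist yet, create it before running the chown above (this assumes the Django cache is configured to live in /tmp/django_cache, as the commands above suggest):
sudo mkdir -p /tmp/django_cache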
nginx
Set up a new dhparam file. 2048 bits should suffice, but you can go with 4096 instead if you like:
$ sudo openssl dhparam -out /etc/ssl/dhparams.pem 2048
Configure nginx.
$ sudo vi /etc/nginx/sites-available/default
The following configuration should be a good start.
upstream django {
    server 127.0.0.1:8001; # for a web port socket
}

server {
    listen 80;
    listen [::]:80;
    server_name ni.nordu.net;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443;
    listen [::]:443 default ipv6only=on; ## listen for ipv6

    ssl on;
    ssl_certificate /etc/ssl/ni_nordu_net.crt;
    ssl_certificate_key /etc/ssl/ni_nordu_net.key;

    # https://cipherli.st
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_session_cache shared:SSL:10m;
    ssl_ecdh_curve secp384r1;
    ssl_dhparam /etc/ssl/dhparams.pem;

    server_name ni.nordu.net;

    location /static/ {
        alias /var/opt/norduni/norduni/src/niweb/niweb/static/;
        autoindex on;
        access_log off;
        expires 30d;
    }

    location / {
        include /etc/nginx/uwsgi_params;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        uwsgi_pass django;
    }
}
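When both the uwsgi app and the nginx configuration are in place, restart the services so the changes take effect (standard Ubuntu service commands):
$ sudo service uwsgi restart
$ sudo service nginx restart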
SAML SP
If you want to set up NOCLook as a SAML SP you need to install the following packages and Python modules.
$ sudo apt-get install libffi-dev xmlsec1
$ sudo -u ni -i
$ . norduni_environment/bin/activate
$ pip install djangosaml2
You then need to uncomment the lines in settings.py that import and set up djangosaml2. You also have to create a pysaml2 configuration.
All this is best described in the documentation at https://pypi.python.org/pypi/djangosaml2.
Local SAML metadata
To speed up login you can use local metadata. This metadata still needs to be updated and verified, and for that you can use https://github.com/NORDUnet/metadata-updater
You need to configure djangosaml2 to use local metadata, and you will have to add the metadata updater to cron, preferably by running crontab -e as the ni user. Once an hour is reasonable, once a day can be OK, once a week might be tiresome when the cert expires.
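A hypothetical crontab entry for hourly updates might look like the line below; the path to the metadata updater and how it is invoked are assumptions, so check the metadata-updater README for the actual usage:
# m h dom mon dow command
0 * * * * /path/to/metadata-updater/update.sh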
Collecting and processing network data
To insert data you need to stop any Python process that is using the Neo4j database. We hope to get the option to load more database instances in read-only mode in the near future, then this could be avoided.
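In the deployment described above, the Python process holding the Neo4j database open is the uwsgi application, so stop it before inserting data and start it again afterwards (assuming no other Django processes, such as runserver, are running):
$ sudo service uwsgi stop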
NORDUnet has a GIT repository called nistore and it is cloned to /var/opt/norduni/nistore/.
To start, have a look at the NERDS README, then clone the NERDS project.
cd /var/opt/norduni/
mkdir tools
cd tools
git clone https://github.com/fredrikt/nerds.git
Juniper Configuration Producer/Consumer
The Juniper configuration producer can load Juniper configuration directly from the router via SSH or Juniper configuration files in XML format from disk.
Example producer configuration:
[ssh]
user = view_account_user
password = not_so_secret_password

[sources]
remote = one.example.org
         two.example.org
         three.example.org
local = /var/conf/one.xml
        /var/conf/two.xml
        /var/conf/three.xml
"host": { "juniper_conf": { "bgp_peerings": [ { "as_number": "", "group": "", "description": "", "remote_address": "", "local_address": "", "type": "" }, ], "interfaces": [ { "name": "", "bundle": "", "vlantagging": true/false, "units": [ { "address": [ "", "" ], "description": "", "unit": "", "vlanid": "" } ], "tunnels": [ { "source": "", "destination": "" } ], "description": "" }, ], "name": "" }, "version": 1, "name": "" }
The JSON files can then be inserted using noclook_juniper_consumer.py.
Change the path at the top of the script to be able to import norduni_client.py.
Edit the template.conf file with the correct path to the Juniper NERDS files.
[data]
juniper_conf = /path/to/juniper/json
nmap_services =
alcatel_isis =
noclook =
Then run:
python noclook_juniper_consumer.py -C template.conf
Alcatel-Lucent ISIS Producer/Consumer
Using the output of the "show isis database detail" command on a Cisco router connected to the Alcatel-Lucent DCN network, nodes and their neighbors will be grouped.

To get a more human-readable result, use the IOS "clns host" command to map an NSAP address to a hostname, e.g. clns host hostname NSAP_address.
You can also provide a mapping CSV file. The mandatory columns are
osi_address and name. All following columns will be added to the JSON
output.
osi_address;name;other1;otherN
47002300000001000100010001002060280DB11D;NU-SHHM-ILA-01;info1;infoN
"host": { "alcatel_isis": { "data": { "ip_address": "", "link": "", "name": "", "osi_address": "", "ots": "", "type": "" }, "name": "", "neighbours": [ { "metric": "", "name": "" }, ] }, "name": "", "version": 1 }
The JSON files can be inserted with noclook_alcatel_consumer.py.
Edit the template.conf file with the correct path to the Alcatel ISIS NERDS files.
Change the path at the top of the script to be able to import norduni_client.py.
[data]
juniper_conf =
nmap_services =
alcatel_isis = /path/to/alcatel/json
noclook =
Then run:
python noclook_alcatel_consumer.py -C template.conf
nmap Producer/Consumer
Using the nmap services producer you can scan a network or individual addresses. NORDUnet has a file with networks that is used with the "-iL networks_file" option added to NERDS_NMAP_OPTIONS in the run.sh file, as in the example below.
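For illustration only, the relevant line in run.sh might then look something like this; the existing default options and the path to the networks file are assumptions, so keep whatever options run.sh already sets and just append -iL:
# keep the options already set in run.sh and append the networks file
NERDS_NMAP_OPTIONS="<existing options> -iL /var/opt/norduni/networks_file"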
You need to install python-nmap from https://github.com/johanlundberg/python-nmap if the pip version gives you trouble.
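If you go that route, installing straight from GitHub into the active virtualenv should work with pip's git support (the URL is the one given above):
$ pip install git+https://github.com/johanlundberg/python-nmap.git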
Then you can scan your localhost with:
cd /var/opt/norduni/tools/nerds/producers/nmap_services
./run.sh . 127.0.0.1
You will find the JSON file in /var/opt/norduni/tools/nerds/producers/nmap_services/producers/json/.
"host" : { "." : { "os" : { "family" : "", "name" : "" } }, "addrs" : [ "127.0.0.1" ], "hostnames" : [ "host.example.org" ], "name" : "host.example.org", "services" : { "ipv4": { "127.0.0.1": { "tcp": { "1025": { "product": "Microsoft Windows RPC", "confidence": "10", "name": "msrpc", "proto": "unknown"}, "1029": { "product": "Microsoft Windows RPC over HTTP", "confidence": "10", "version": "1.0", "name": "ncacn_http", "proto": "unknown"}, } } } }, "version" : 1 }
The JSON files can be inserted with noclook_nmap_consumer.py.
Edit the template.conf file with the correct path to the nmap services JSON files.
Change the path at the top of the script to be able to import norduni_client.py.
[data]
juniper_conf =
nmap_services = /path/to/nmap/json
alcatel_isis =
noclook =
Then run:
python noclook_nmap_consumer.py -C template.conf
CSV Site Producer/Consumer
The script produces JSON output in the NERDS format from the provided CSV file.
The csv file needs to start with the name of the node and then the node type.
After those two columns any other node property may follow.
Start your csv file with a line similar to the one below.
name;node_type;node_property1;node_property2;...;node_property15
name;Host;site_type;address;area;postcode;city;country;floor;room;latitude;longitude;responsible_for;owner_id;telenor_subscription_id;comment
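A hypothetical data row for a site, following the column layout above (all values are made up, and it is assumed the second column carries the node type):
NU-EXAMPLE-SITE-01;Site;POP;Example Street 1;Example area;12345;Stockholm;Sweden;2;A101;59.3293;18.0686;NORDUnet;;;example comment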
{ "host": { "csv_producer": { "address": "", "area": "", "city": "", "comment": "", "country": "", "floor": "", "latitude": "", "longitude": "", "meta_type": "", "name": "", "node_type": "", "owner_id": "", "postcode": "", "responsible_for": "", "room": "", "site_type": "", "telenor_subscription_id": "" }, "name": "", "version": 1 } }
The consumer script should only be run once, as it does not update existing sites, it only creates new ones.
The JSON file directory is then inserted into the database using noclook_site_csv_consumer.py.
Change the path at the top of the script to be able to import norduni_client.py.
Then run:
python noclook_site_csv_consumer.py -D /path/to/site_files/json
Daily database update
The producers are run with a cron job and the script noclook_consumer.py is used to run the three inserting/updating scripts (noclook_juniper_consumer.py, noclook_alcatel_consumer.py and noclook_nmap_consumer.py).
Change the path at the top of the script to be able to import norduni_client.py.
[data]
juniper_conf = /path/to/juniper/json
nmap_services = /path/to/nmap/json
alcatel_isis = /path/to/alcatel/json
noclook = # Used for loading backup.
Then run:
python noclook_consumer.py -C template.conf -I
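As noted above, the import is typically driven from cron. A hypothetical crontab entry for the ni user might look like the line below; the schedule and the path to the scripts directory are examples, and the producers need to have run first:
# m h dom mon dow command
30 4 * * * cd /path/to/norduni/scripts && /var/opt/norduni/norduni_environment/bin/python noclook_consumer.py -C template.conf -I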
Setting up a local/development NOCLook
# Clone a convenience repo
$ git clone https://github.com/NORDUnet/norduni-developer
$ cd norduni-developer
# Start dependencies
$ ./start.sh
# Clone NOCLook project repo
$ git clone https://git.nordu.net/norduni.git
$ cd norduni
# Create a virtualenv and activate it
$ virtualenv env
$ . env/bin/activate
# Install the python packages
$ pip install -r requirements/dev.txt
# Create a settings file
$ cp src/niweb/dotenv src/niweb/.devenv
# Sync the db
$ python /path_to_repo/src/niweb/manage.py syncdb
$ python /path_to_repo/src/niweb/manage.py migrate
# Run the app
$ python /path_to_repo/src/niweb/manage.py runserver
Upgrading to newest versions
This is the general procedure for upgrading to the newest version of norduni.
# stash current local changes and update
$ git stash
$ git pull origin master
$ git stash apply
# Run migrations
$ python /path_to_repo/src/niweb/manage.py migrate
# Pip update requirements
$ pip install -U -r requirements/prod.txt
# Collect static files
$ python /path_to_repo/src/niweb/manage.py collectstatic
# Restart uwsgi
$ sudo service uwsgi restart
# stash current local changes and update $ git stash $ git pull origin master $ git stash apply # Run migrations $ python /path_to_repo/src/niweb/manage.py migrate # Pip update requirements $ pip install -U -r requirements/prod.txt # Collect statics $ python /path_to_repo/src/niweb/manage.py collectstatic # Restart uwsgi $ sudo services uwsgi restart