
Services

In this section we are going to set up an environment similar to the one we saw back in the service diagram.

To simplify things and save time, there is a docker-compose file available that will set up the following services. You can do the installation on your personal computer, on a virtual machine, or even in the cloud. Some of the services come pre-configured, but others, such as centralized logging and monitoring, are not.

Note

To install the services, you will first need to git clone the repository from: https://gitlab.labranet.jamk.fi/ttow0130/infra.

You may also use the following deploy token for read access to the repository:

git clone https://labranet-token-csc:5wkLLtozjRUgFT6H3tXv@gitlab.labranet.jamk.fi/ttow0130/infra.git

Start the services

After that, simply execute the command docker-compose up, as you learned in the 03. Containerisation section.
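
For reference, a typical start-up could look like the following (a minimal sketch, assuming the repository was cloned into a directory named infra):

cd infra
docker-compose up -d            # start all services in the background
docker-compose ps               # check that every container is up
docker-compose logs -f haproxy  # follow the logs of a single service

Running docker-compose up without -d works just as well; it simply keeps the logs of all services attached to your terminal.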

Config example

version: "3.4"
services:
    haproxy:
        image: haproxy
        container_name: haproxy
        volumes:
            - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
        networks:
            - soanet
        ports:
            - 81:81
            - 80:80

    varnish:
        image: varnish
        container_name: varnish
        volumes:
            - ./varnish/default.vcl:/etc/varnish/default.vcl:ro
        networks:
            - soanet

    redis:
        image: redis
        container_name: redis
        volumes:
            - redis:/data
        networks:
            - soanet
        ports:
            - 6379:6379

    minio:
        image: minio/minio
        container_name: minio
        volumes:
            - minio:/data
        networks:
            - soanet
        ports:
            - 9001:9000
        environment:
            MINIO_ACCESS_KEY: minio
            MINIO_SECRET_KEY: minio123
        command: server /data/minio
        healthcheck:
            test: ["CMD", "curl", "-f", "http://minio:9000/minio/health/live"]
            interval: 1m30s
            timeout: 20s
            retries: 3
            start_period: 3m

    grafana:
        image: grafana/grafana
        container_name: grafana
        volumes:
            - grafana:/var/lib/grafana
        networks:
            - soanet
        ports:
            - 3000:3000
        env_file:
            - './grafana/env.grafana'


    influxdb:
        image: influxdb
        container_name: influxdb
        volumes:
            - influxdb:/var/lib/influxdb
        networks:
            - soanet
        ports:
            - 8083:8083
            - 8086:8086
            - 8090:8090
        env_file:
            - './influxdb/env.influxdb'

    telegraf:
        image: telegraf:latest
        container_name: telegraf
        volumes:
            - ./telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro
        networks:
            - soanet

    mongo:
        image: mongo:3
        container_name: mongo
        networks:
            - soanet
        ports:
            - 27017:27017

    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
        container_name: elasticsearch
        networks:
            - soanet
        env_file:
            - './elasticsearch/env.elasticsearch'
        ulimits:
            memlock:
                soft: -1
                hard: -1
        deploy:
            resources:
                limits:
                    memory: 1G

    graylog:
        image: graylog/graylog:3.0
        container_name: graylog
        networks:
            - soanet
        env_file:
            - './graylog/env.graylog'
        ports:
            - 9000:9000
            - 1514:1514
            - 1514:1514/udp
            - 12201:12201
            - 12201:12201/udp

    pgpool:
        image: postdock/pgpool:edge
        container_name: pgpool
        networks:
            - soanet
        #env_file:
        #    - './pgpool/env.pgpool'
        environment:
            PCP_USER: pcp_user
            PCP_PASSWORD: pcp_pass
            WAIT_BACKEND_TIMEOUT: 60
            CHECK_USER: monkey_user
            CHECK_PASSWORD: monkey_pass
            CHECK_PGCONNECT_TIMEOUT: 3
            SSH_ENABLE: 1
            DB_USERS: monkey_user:monkey_pass
            BACKENDS: "0:pgmaster:5432:1:/var/lib/postgresql/data:ALLOW_TO_FAILOVER,1:pgslave1::::"
            REQUIRE_MIN_BACKENDS: 1
            CONFIGS: "num_init_children:250,max_pool:4"
        volumes:
            - ./ssh/:/tmp/.ssh/keys
        ports:
            - 5430:5432

    pgmaster:
        image: postdock/postgres:edge
        container_name: pgmaster
        networks:
            - soanet
        # env_file:
        #    - './pgmaster/env.pgmaster'
        environment:
            NODE_ID: 1
            NODE_NAME: node1
            CLUSTER_NODE_NETWORK_NAME: pgmaster
            PARTNER_NODES: "pgmaster,pgslave1"
            REPLICATION_PRIMARY_HOST: pgmaster
            NODE_PRIORITY: 100
            SSH_ENABLE: 1
            POSTGRES_PASSWORD: monkey_pass
            POSTGRES_USER: monkey_user
            POSTGRES_DB: monkey_db
            CLEAN_OVER_REWIND: 0
            CONFIGS_DELIMITER_SYMBOL: ;
            CONFIGS: "listen_addresses:'*';max_replication_slots:5"
            CLUSTER_NAME: pg_cluster
            REPLICATION_DB: replication_db
            REPLICATION_USER: replication_user
            REPLICATION_PASSWORD: replication_pass
        ports:
            - 5431:5432
        volumes:
            - pgmaster:/var/lib/postgresql/data
            - ./ssh/:/tmp/.ssh/keys

    pgslave1:
        image: postdock/postgres:edge
        container_name: pgslave1
        networks:
            - soanet
        #env_file:
        #    - './pgslave/env.pgslave1'
        environment:
            NODE_ID: 2
            NODE_NAME: node2
            CLUSTER_NODE_NETWORK_NAME: pgslave1
            SSH_ENABLE: 1
            PARTNER_NODES: "pgmaster,pgslave1"
            REPLICATION_PRIMARY_HOST: pgmaster
            CLEAN_OVER_REWIND: 1
            CONFIGS_DELIMITER_SYMBOL: ;
            CONFIGS: "max_replication_slots:10"
        ports:
            - 5432:5432
        volumes:
            - pgslave1:/var/lib/postgresql/data
            - ./ssh/:/tmp/.ssh/keys

networks:
    soanet:

volumes:
    minio:
    redis:
    grafana:
    influxdb:
    pgmaster:
    pgslave1:

Task

Based on what you have learned so far, can you identify what happens when you run docker-compose up with this file?
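
If you want to check your reasoning, the following commands show what Compose actually created (the names are prefixed with the project name, which by default is the directory name):

docker-compose ps    # one container per service defined in the file
docker network ls    # the soanet network
docker volume ls     # the named volumes: minio, redis, grafana, influxdb, pgmaster and pgslave1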


Install notes

Edit your hosts file

Windows: C:\windows\system32\drivers\etc\hosts
Linux: /etc/hosts

Add the following entry:

<Your CSC Floating IP>   minio.imager.local grafana.imager.local graylog.imager.local api.imager.local

# Example
# 10.2.100.94  minio.imager.local grafana.imager.local graylog.imager.local api.imager.local

You will need admin rights for this operation.
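
A quick way to verify that the new names resolve to your floating IP:

# Linux
getent hosts minio.imager.local

# Windows (Command Prompt or PowerShell)
ping -n 1 grafana.imager.local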

HAProxy stats

http://serverip:81/stats

admin - admin
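
If the stats page is protected with HTTP basic auth (as the credentials above suggest), you can also fetch it from the command line:

curl -u admin:admin http://serverip:81/stats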

Minio

http://minio.imager.local

minio - minio123
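
If you prefer the command line, here is a sketch using the MinIO client mc (the exact subcommand depends on your mc version; older releases use mc config host add instead of mc alias set):

mc alias set imager http://minio.imager.local minio minio123
mc ls imager    # list buckets

You can also hit the same health endpoint that the compose healthcheck uses, assuming HAProxy routes minio.imager.local to the MinIO container:

curl -f http://minio.imager.local/minio/health/live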

Grafana

http://grafana.imager.local

admin - admin

You will need to add a new data source. Go to Configuration - Data Sources and click on "Add data source". Select InfluxDB. In the URL field, type "http://influxdb:8086", and in the Database field, type "telegraf". Finally, click on Save & Test.
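
The same data source can also be created through Grafana's HTTP API, which is handy if you want to script the setup (a sketch, using the default admin credentials):

curl -X POST http://admin:admin@grafana.imager.local/api/datasources \
    -H 'Content-Type: application/json' \
    -d '{"name": "InfluxDB", "type": "influxdb", "access": "proxy", "url": "http://influxdb:8086", "database": "telegraf"}'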

Next, import some ready-made dashboards. Go to Create - Import. In the "Grafana.com Dashboard" field, give the following ids (repeat this process for each id). Click on "Load", select the InfluxDB data source from the dropdown, and press Import.

  • 928 (Telegraf stats)
  • 2263 (HAProxy stats)
  • 3056 (Docker metrics) (edit the graph aliases to be: [[tag_container_name]])

Graylog

http://graylog.imager.local

admin - admin

You will need to add a new input. Go to System - Inputs. Select GELF UDP and then "Launch new input". Tick the Global box and give it a title of your choice. Click on Save.

You should start to see messages under Streams - All messages.
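
If nothing shows up, you can send a hand-written GELF message over UDP to confirm the input works (a quick sketch using netcat; graylog.imager.local resolves to the host, which publishes port 12201/udp):

echo '{"version": "1.1", "host": "test", "short_message": "Hello Graylog"}' | nc -u -w1 graylog.imager.local 12201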

PGPool

monkey_user - monkey_pass

Database: monkey_db
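
You can test the database connection with psql, using the host ports published in the compose file (psql will prompt for monkey_pass):

# through pgpool
psql -h <Your CSC Floating IP> -p 5430 -U monkey_user monkey_db

# directly against the master or the replica
psql -h <Your CSC Floating IP> -p 5431 -U monkey_user monkey_db
psql -h <Your CSC Floating IP> -p 5432 -U monkey_user monkey_db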