Centralising VNC Server logs and reporting events with Elastic Stack

Centralising VNC Server logs provides a convenient way of monitoring specific events from VNC Server, such as logins and authentication failures, across an entire IT estate. This can be useful for auditing and security purposes.

This article describes how this can be achieved with the popular ELK stack and ElastAlert, using Beats to forward events from VNC Server to Elasticsearch. A full discussion of the ELK stack is outside the scope of this short article. For further information on the Elastic Stack, see https://www.elastic.co/elastic-stack/

This article also covers pushing authentication failure events and blacklisting to Microsoft Teams using ElastAlert. ElastAlert can also forward alerts to other systems, such as Slack, Email, PagerDuty, and SIEMs.

This article does not include information on enabling encryption or security across the stack, although this is highly recommended in production environments.

An example of how this can appear in the Kibana log stream is shown below (personal details redacted):

[Image: kibana-streaming-redacted.PNG]

Individual events can also be reviewed in JSON format:

{
  "_index": ".ds-logs-generic-default-2021.07.14-000005",
  "_type": "_doc",
  "_id": "FPF4pnoBzDoATE8bcRoG",
  "_score": 1,
  "fields": {
    "conntype": [
      "CLOUD (Peer to Peer)"
    ],
    "winlog.provider_name": [
      "VNC Server"
    ],
    "host.hostname": [
      "NUC"
    ],
    "winlog.computer_name": [
      "NUC"
    ],
    "host.mac": [
      "REDACTED"
    ],
    "serverperms": [
      "(f permissions)"
    ],
    "host.os.version": [
      "10.0"
    ],
    "winlog.keywords": [
      "Classic"
    ],
    "winlog.record_id": [
      5111
    ],
    "host.os.name": [
      "Windows 10 Pro"
    ],
    "log.level": [
      "information"
    ],
    "agent.name": [
      "NUC"
    ],
    "host.name": [
      "NUC"
    ],
    "event.kind": [
      "event"
    ],
    "host.os.type": [
      "windows"
    ],
    "agent.hostname": [
      "NUC"
    ],
    "data_stream.type": [
      "logs"
    ],
    "winlog.event_data.param1": [
      "Connections"
    ],
    "winlog.event_data.param2": [
      "authenticated: user@example.com (from 192.168.1.111::44205), as user (f permissions)"
    ],
    "tags": [
      "beats_input_codec_plain_applied"
    ],
    "host.architecture": [
      "x86_64"
    ],
    "event.provider": [
      "VNC Server"
    ],
    "event.code": [
      "256"
    ],
    "agent.id": [
      "REDACTED"
    ],
    "ecs.version": [
      "1.9.0"
    ],
    "event.created": [
      "2021-07-14T19:22:29.151Z"
    ],
    "agent.version": [
      "7.13.2"
    ],
    "eventtype": [
      "Connected"
    ],
    "host.os.family": [
      "windows"
    ],
    "connectedfrom": [
      "192.168.1.111"
    ],
    "host.os.build": [
      "19043.1110"
    ],
    "host.ip": [
      "REDACTED"
    ],
    "agent.type": [
      "winlogbeat"
    ],
    "vncacctemail": [
      "user@example.com"
    ],
    "host.os.kernel": [
      "10.0.19041.1110 (WinBuild.160101.0800)"
    ],
    "winlog.api": [
      "wineventlog"
    ],
    "@version": [
      "1"
    ],
    "host.id": [
      "REDACTED"
    ],
    "winlog.task": [
      "Service Mode"
    ],
    "data_stream.namespace": [
      "default"
    ],
    "loginaccount": [
      "john.smith"
    ],
    "message": [
      "Connections: authenticated: user@example.com (from 192.168.1.111::44205), as user (f permissions)"
    ],
    "winlog.event_id": [
      "256"
    ],
    "event.action": [
      "Service Mode"
    ],
    "@timestamp": [
      "2021-07-14T19:22:28.301Z"
    ],
    "winlog.channel": [
      "Application"
    ],
    "host.os.platform": [
      "windows"
    ],
    "data_stream.dataset": [
      "generic"
    ],
    "winlog.opcode": [
      "Info"
    ],
    "agent.ephemeral_id": [
      "0d0f6a09-7419-401d-954c-4df80815db22"
    ],
    "facility": [
      "Connections"
    ]
  }
}
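Pulling the interesting values out of such a document programmatically is straightforward. The sketch below (Python, purely illustrative) extracts connection details from the `fields` object of a search hit like the one above; note that Elasticsearch wraps each field's value in a list:

```python
# Illustrative only: extract VNC connection details from the "fields"
# object of an Elasticsearch search hit like the one shown above.
hit = {
    "fields": {
        "eventtype": ["Connected"],
        "conntype": ["CLOUD (Peer to Peer)"],
        "vncacctemail": ["user@example.com"],
        "connectedfrom": ["192.168.1.111"],
        "host.hostname": ["NUC"],
    }
}

def field(hit, name, default=None):
    """Return the first value of a field; Elasticsearch wraps each in a list."""
    values = hit.get("fields", {}).get(name, [])
    return values[0] if values else default

summary = "{} {} on {} from {} ({})".format(
    field(hit, "vncacctemail"),
    field(hit, "eventtype").lower(),
    field(hit, "host.hostname"),
    field(hit, "connectedfrom"),
    field(hit, "conntype"),
)
print(summary)
```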

In these examples, the hostname of the server we're using for Elasticsearch, Kibana and Logstash is carbon (IP address: 192.168.1.111), which is running Ubuntu 20.04.

For general getting started documentation, see https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

It's worth mentioning that Elastic provides a very rich set of APIs to query data, the results of which can be integrated into other systems. We recommend visiting https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html for more information.
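As an illustration of the Query DSL, a search for recent VNC connection events from a given IP might look like the sketch below. The field names follow the Logstash rules later in this article; the index pattern and time window are assumptions you should adjust for your deployment. The dict maps directly to the JSON body of a `GET logs*/_search` request:

```python
import json

# Illustrative Query DSL body: VNC "Connected" events from one IP within
# the last 24 hours. Field names follow the Logstash filter in Step 7;
# the index pattern and window are assumptions for this example.
query = {
    "query": {
        "bool": {
            "filter": [
                {"match": {"eventtype": "Connected"}},
                {"term": {"connectedfrom": "192.168.1.111"}},
                {"range": {"@timestamp": {"gte": "now-24h"}}},
            ]
        }
    },
    "size": 50,
}

print(json.dumps(query, indent=2))
```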

The steps below were produced using Elasticsearch 7.13, Kibana 7.13 and Logstash 7.13.

Step 1 - download and install Elasticsearch for your platform

See: https://www.elastic.co/guide/en/elasticsearch/reference/7.13/install-elasticsearch.html

Step 2 - configure Elasticsearch

A basic config file (elasticsearch.yml) is shown below:

# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 192.168.1.111
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
discovery.type: single-node
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["192.168.1.111"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["192.168.1.111"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

Step 3 - start Elasticsearch

Check it's running by connecting to http://hostname:9200 (or whatever you configured network.host and http.port to be).

You should see something similar to:

{
  "name" : "carbon",
  "cluster_name" : "carbon",
  "cluster_uuid" : "EtOr_lXgRM6REYi1jzhzdQ",
  "version" : {
    "number" : "7.13.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "5d21bea28db1e89ecc1f66311ebdec9dc3aa7d64",
    "build_date" : "2021-07-02T12:06:10.804015202Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
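If you'd rather check from a script than a browser, the response above is plain JSON and easy to verify programmatically. A minimal sketch (assuming the response body has already been fetched, e.g. with curl or urllib):

```python
import json

# Response body from http://carbon:9200/ (as shown above), already fetched.
body = """
{
  "name" : "carbon",
  "cluster_name" : "carbon",
  "version" : { "number" : "7.13.3", "lucene_version" : "8.8.2" },
  "tagline" : "You Know, for Search"
}
"""

info = json.loads(body)
# The tagline is a quick sanity check that we reached Elasticsearch at all.
assert info["tagline"] == "You Know, for Search", "unexpected response"
print("Elasticsearch {} is up on node {}".format(
    info["version"]["number"], info["name"]))
```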

Step 4 - download and install Kibana

See: https://www.elastic.co/guide/en/kibana/7.13/install.html

Step 5 - configure Kibana

A basic configuration is shown below. For clarity, only values which should be changed are shown; for all other values, the defaults (commented out in the config file) can be kept:

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.111"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# The Kibana server's name.  This is used for display purposes.
server.name: "carbon"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.1.111:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

Step 6 - download and install Logstash

See: https://www.elastic.co/guide/en/logstash/7.13/installing-logstash.html

Step 7 - configure Logstash with rules for VNC Server

input {
  beats {
    port => 5044
  }
}

filter {
grok {
   # cloud CONNECTION P2P:
   match => {"message" => "%{WORD:facility}: connected: %{EMAILADDRESS:vncacctemail} \(from %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnp2pconnection}\)"}
   # cloud CONNECTION RELAY:
   match => {"message" => "%{WORD:facility}: connected: %{EMAILADDRESS:vncacctemail} \(via %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnrelayconnection} to %{IPV4}\:\:%{POSINT}\)"}
   # cloud AUTH - EMAIL P2P - winlogbeat
   match => {"message" => "%{WORD:facility}: authenticated: %{EMAILADDRESS:vncacctemail} \(from %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnp2pauthentication}\), as %{EMAILADDRESS:loginaccount} %{GREEDYDATA:serverperms}"}
   # cloud AUTH - EMAIL - RELAY - winlogbeat
   match => {"message" => "%{WORD:facility}: authenticated: %{EMAILADDRESS:vncacctemail} \(via %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnrelayauthentication} to %{IPV4}\:\:%{POSINT}\), as %{EMAILADDRESS:loginaccount} %{GREEDYDATA:serverperms}"}
   # cloud AUTH - account - P2P - winlogbeat
   match => {"message" => "%{WORD:facility}: authenticated: %{EMAILADDRESS:vncacctemail} \(from %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnp2pauthentication}\), as %{USERNAME:loginaccount} %{GREEDYDATA:serverperms}"}
   # cloud AUTH - account - RELAY  - winlogbeat
   match => {"message" => "%{WORD:facility}: authenticated: %{EMAILADDRESS:vncacctemail} \(via %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnrelayauthentication} to %{IPV4}\:\:%{POSINT}\), as %{USERNAME:loginaccount} %{GREEDYDATA:serverperms}"}
   # direct CONNECTION
   match => {"message" => "%{WORD:facility}: connected: %{IPV4:connectedfrom}\:\:%{POSINT:directconnconnection}\ \(%{WORD:proto}\)"}
   # direct AUTH - EMAIL
   match => {"message" => "%{WORD:facility}: authenticated: %{IPV4:connectedfrom}\:\:%{POSINT:directconnauthentication} \(UDP\), as %{EMAILADDRESS:loginaccount} %{GREEDYDATA:serverperms}"}
   # direct AUTH - useraccount
   match => {"message" => "%{WORD:facility}: authenticated: %{IPV4:connectedfrom}\:\:%{POSINT:directconnauthentication}\ \(%{WORD:proto}\), as %{USERNAME:loginaccount} %{GREEDYDATA:serverperms}"}
   # disconnections - direct
   match => {"message" => "%{WORD:facility}: disconnected: %{IPV4:connectedfrom}\:\:%{POSINT:directconndisconnect}\ \(%{WORD:proto}\) %{GREEDYDATA:closereason}"}
   # disconnections - cloud - P2P
   match => {"message" => "%{WORD:facility}: disconnected: %{EMAILADDRESS:vncacctemail} \(from %{IPV4:connectedfrom}\:\:%{POSINT:cloudconnp2pdisconnect}\) %{GREEDYDATA:closereason}"}
   # disconnections - cloud - RELAY
   match => {"message" => "%{WORD:facility}: disconnected: %{EMAILADDRESS:vncacctemail} \(via %{IPV4}\:\:%{POSINT:cloudrelayconn} to %{IPV4}\:\:%{POSINT:cloudconnrelaydisconnect}\) %{GREEDYDATA:closereason}"}
   # disconnections - blacklisted
   match => {"message" => "%{WORD:facility}: rejecting blacklisted connection: %{GREEDYDATA:blacklisted}"}
}
# the above grok patterns are the only events we're interested in, so drop all others
if ("_grokparsefailure" in [tags]) { drop {} }

if ([cloudconnp2pconnection]) {
 # if you want to log peer to peer cloud connections, comment out the line below
 drop { }
}

if ([cloudconnrelayconnection]) {
 # if you want to log data relay cloud connections, comment out the line below
 drop { }
}


if ([cloudconnp2pdisconnect]) {
 mutate {
  add_field => { "conntype" => "CLOUD (Peer to Peer)" }
  add_field => { "eventtype" => "Disconnected" }
  remove_field => ["cloudconnp2pdisconnect"]
  }
}



if ([cloudconnrelayauthentication]) {
 mutate {
  add_field => { "conntype" => "CLOUD (Data Relay)" }
  add_field => { "eventtype" => "Connected" }
  remove_field => ["cloudconnrelayauthentication"]
  }
}

if ([cloudconnp2pauthentication]) {
 mutate {
  add_field => { "conntype" => "CLOUD (Peer to Peer)" }
  add_field => { "eventtype" => "Connected" }
  remove_field => ["cloudconnp2pauthentication"]
  }
}


if ([cloudconnrelaydisconnect]) {
 mutate {
  add_field => { "conntype" => "CLOUD (Data Relay)" }
  add_field => { "eventtype" => "Disconnected" }
  remove_field => ["cloudconnrelaydisconnect"]
  }
}

if ([blacklisted]) {
 mutate {
  add_field => { "conntype" => "Blacklisted" }
  add_field => { "eventtype" => "Blacklisted" }
  add_field => { "connectedfrom" => "%{blacklisted}" }
  add_field => { "closereason" => "Viewer Blacklisted" }
  remove_field => ["blacklisted"]
  }
}


if ([directconnconnection]) {
 # if you want to log direct connections, comment out the line below
 drop { }
}

if ([directconndisconnect]) {
 mutate {
  add_field => { "conntype" => "DIRECT" }
  add_field => { "eventtype" => "Disconnected" }
  remove_field => ["directconndisconnect"]
  }
}

if ([directconnauthentication]) {
 mutate {
  add_field => { "conntype" => "DIRECT" }
  add_field => { "eventtype" => "Connected" }
  remove_field => ["directconnauthentication"]
  }
 }
}


output {
elasticsearch {
#       cacert => '/etc/logstash/logstash-ca.crt'
        hosts => ["carbon:9200"]
        data_stream => "true"
        user => "logstash_internal"
        password => "efi3ugheofiugeof0igu90"
        ssl => false
#       ssl_certificate_verification => false
        }
# uncomment the below to log to the file specified
#       file {
#         path => "/home/username/vnc/vnclogs.%{+YYYY-MM-dd}.log"
#         codec => json_lines
#       }
}

In the above, you'll see we're using the grok filter to match events. For more information, see https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

If you want to write your own Logstash grok rules, or customise our examples above, we recommend using http://grokdebug.herokuapp.com/ or, in newer versions of Kibana, http://<kibana host:port>/app/dev_tools#/grokdebugger
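To see what a pattern captures without a running stack, grok syntax also translates readily into ordinary regular expressions. Below, a Python regex (illustrative only, not part of the pipeline) mirrors the cloud P2P authentication pattern from Step 7 and is applied to the sample message from the JSON event earlier in this article:

```python
import re

# Illustrative Python equivalent of the cloud P2P authentication grok
# pattern from Step 7 (WORD, EMAILADDRESS, IPV4, POSINT, USERNAME, GREEDYDATA).
PATTERN = re.compile(
    r"(?P<facility>\w+): authenticated: "
    r"(?P<vncacctemail>\S+@\S+) "
    r"\(from (?P<connectedfrom>\d{1,3}(?:\.\d{1,3}){3})::(?P<port>\d+)\), "
    r"as (?P<loginaccount>[\w.@-]+) (?P<serverperms>.*)"
)

msg = ("Connections: authenticated: user@example.com "
       "(from 192.168.1.111::44205), as user (f permissions)")
m = PATTERN.match(msg)
print(m.groupdict())
```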

Step 8 - download and install Beats

Windows

For Windows, use Winlogbeat to forward events logged to the Windows Event log to Logstash.
See: https://www.elastic.co/downloads/beats/winlogbeat
A simple config is shown below. For clarity, only values which should be changed are shown; for all other values, the defaults (commented out in the config file) can be kept:

winlogbeat.event_logs:
  - name: Application
    provider: VNC Server
    ignore_older: 72h

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.111:5044"]

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~

Linux

For Linux, use Filebeat to send the events from VNC Server logfile(s) to Logstash.
See: https://www.elastic.co/beats/filebeat

A simple config which monitors /var/log/vncserver-x11.log (the default service mode Linux server log file) is shown below. For clarity, only values which should be changed are shown; for all other values, the defaults (commented out in the config file) can be kept:

filebeat.inputs:
    - type: log
      paths:
        - /var/log/vncserver-x11.log
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.111:5044"]

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
#  - add_cloud_metadata: ~
#  - add_docker_metadata: ~
#  - add_kubernetes_metadata: ~

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
logging.selectors: ["*"]

Step 9 - download and install ElastAlert

See: https://elastalert.readthedocs.io/en/latest/running_elastalert.html

Step 10 - create elastalert index in Elastic

$ ./elastalert-create-index

Step 11 - configure ElastAlert rules

The VNCLoginFailure rule below sends VNC Server authentication failure alerts to the Teams URL specified by 'ms_teams_webhook_url':

# (Required)
# Rule name, must be unique
name: VNCLoginFailure

# (Required)
# Type of alert.
type: any

# (Required)
# Index to search, wildcard supported
index: log*

# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
# We are filtering for VNC Server authentication failure events
filter:
- query:
   query_string:
     query: "AuthFailure"

# (Required)
# The alert is used when a match is found
alert:
- "ms_teams"

ms_teams_webhook_url:
    - "https://teams-webhook-url"
ms_teams_alert_summary: "VNC Login Alert"
ms_teams_theme_color: "#ff0000"
alert_subject: "VNC Server Login Failure"
alert_text: "
Hostname: {}\n
Connection type: {}\n
Connection from IP: {}\n
Message: {}\n
"

alert_text_args:
- host.hostname
- conntype
- connectedfrom 
- closereason

ms_teams_alert_fixed_width: True
alert_text_type: alert_text_only
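The `alert_text` / `alert_text_args` pair works like positional string formatting: each `{}` in `alert_text` is filled, in order, from the named fields of the matching document. A quick sketch of the substitution (illustrative; the field values are samples only):

```python
# Illustrative: how alert_text_args fill the {} placeholders in alert_text.
alert_text = (
    "Hostname: {}\n"
    "Connection type: {}\n"
    "Connection from IP: {}\n"
    "Message: {}\n"
)

# Sample field values as they might appear in a matching document.
match = {
    "host.hostname": "NUC",
    "conntype": "DIRECT",
    "connectedfrom": "192.168.1.111",
    "closereason": "Connection closed by client",
}
alert_text_args = ["host.hostname", "conntype", "connectedfrom", "closereason"]

rendered = alert_text.format(*(match[arg] for arg in alert_text_args))
print(rendered)
```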

An example of how this looks in Teams is shown below:

[Image: teams-elastalert-direct.PNG]


The VNCServerBlacklist rule below sends VNC Server blacklisting alerts to the Teams URL specified by 'ms_teams_webhook_url':

# (Required)
# Rule name, must be unique
name: VNCServerBlacklist

# (Required)
# Type of alert.
type: any

# (Required)
# Index to search, wildcard supported
index: log*

# (Required)
# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
# We are filtering for VNC Server blacklisting events
filter:
- query:
   query_string:
     query: "Blacklisted"

# (Required)
# The alert is used when a match is found
alert:
- "ms_teams"

ms_teams_webhook_url:
    - "ms_teams_webhook_url"
ms_teams_alert_summary: "VNC Login Alert"
ms_teams_theme_color: "#ff0000"
alert_subject: "VNC Server Blacklist event"
alert_text: "
Hostname: {}\n
Connected from: {}\n
Message: {}\n
"

alert_text_args:
- host.hostname
- connectedfrom
- message

ms_teams_alert_fixed_width: True
alert_text_type: alert_text_only

An example of how this appears in Teams is shown below.

[Image: teams-elastalert-blacklist.PNG]
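Under the hood, a Teams incoming webhook accepts an HTTP POST of a JSON "MessageCard". The sketch below builds such a payload by hand (illustrative only; this is the generic Office 365 connector card format, not ElastAlert's internal code). Posting it with any HTTP client to the webhook URL produces a card much like the ones above:

```python
import json

# Illustrative: the JSON "MessageCard" payload a Teams incoming webhook
# accepts. ElastAlert's ms_teams alerter posts a card of this general shape;
# the values it sets come from the rule options shown above.
def teams_card(summary, title, text, theme_color="ff0000"):
    return {
        "@type": "MessageCard",
        "@context": "http://schema.org/extensions",
        "summary": summary,
        "themeColor": theme_color,
        "title": title,
        "text": text,
    }

card = teams_card(
    summary="VNC Login Alert",
    title="VNC Server Blacklist event",
    text="Hostname: NUC\n\nConnected from: 192.168.1.111",
)
print(json.dumps(card, indent=2))
# POST this JSON (Content-Type: application/json) to the webhook URL.
```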
