Q1: How many sites are using SEC and how can I deploy it?
Q2: How can I use SEC with rsyslog?
Q3: How can I use SEC with syslog-ng?
Q4: Does SEC support network topology databases of
HP OpenView, Tivoli, or some other platforms? How can I use network topology
information in the event correlation process?
Q5: How can I integrate SEC with HP OpenView Network
Node Manager?
Q6: How can I integrate SEC with HP OpenView Operations?
Q7: How can I use SEC with Graphite?
Q8: How can I convert Swatch rules into SEC rules?
Q9: I have set up a named pipe as an input file with
the --input option, but SEC is unable to open it, although SEC has permission
to read it. Why doesn't my setup work?
Q10: What is the difference between rules
and event correlation operations?
Q11: How can I see what event correlation operations
are currently active? How can I see the state of individual event
correlation operations? How can I see what SEC is currently doing?
Q12: How does 'reset' action work?
Q13: How can I use regular expression modifiers in
SEC rule definitions (like //i, //m, etc.)? How can I insert comments
into regular expressions?
Q14: How can I load Perl modules that could be used at
SEC runtime?
Q15: How can I save contexts to disk when SEC
is shut down, and read them in after SEC has been restarted?
Q16: How can I set up bi-directional
communication between SEC and its subprocess that was started by 'spawn'
action?
Q17: How can I run 'shellcmd' actions in an ordered
manner (i.e., an action is not started before the previous one has
completed)?
Q18: I have started a long-running subprocess from
SEC, but when I send SIGHUP or SIGTERM to SEC, the subprocess will not receive
SIGTERM as it should. What can be done about it?
Q19: How can I write a rule for several input sources
that would be able to report the name of the source for matching lines?
How to report the name of the internal context for input source?
Q20: How can I limit the run time of child processes?
Q21: How can I integrate SEC with systemd?
Q22: How can I parse events in JSON format?
Q23: How can I write regular expression patterns for
recognizing letters and other character classes in encodings like UTF-8?
Q24: How can I monitor log files with names that
contain timestamps?
Q25: How can I write a rule that ends processing
for matching events?
Q26: How can I issue commands to SEC via control file
or FIFO instead of sending signals to SEC process?
Q27: How can I configure SEC to listen on TCP or UDP
port for input events?
Q1: How many sites are using SEC and how can I deploy it?
A: It is very difficult to tell the exact number of users, because no registration is required to download it :-) Also, SEC has been packaged for major Linux and BSD distributions, and many users install it from the package, instead of downloading the source.
SEC can be deployed in a wide variety of ways. There is a common misconception among some people that only one instance of SEC can be running at a time (which might eventually become a performance bottleneck). In fact, unlike a number of system services, you can run many SEC instances in daemon mode simultaneously. Also, apart from daemon mode, SEC can be used as a UNIX command line tool and employed in shell pipelines (e.g., like grep or sed), and there is no limit to the number of instances executing at the same time.
SEC is a single-threaded application to facilitate deterministic event processing with shared event correlation state over all input sources and rulesets. However, since SEC has a small memory footprint, it is straightforward to run several SEC processes on the same system for independent rulesets and event processing flows. Finally, if data sharing is needed in a multi-process setup, any SEC process can easily spawn several additional instances and communicate with them through a pipe interface.
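For example, the following command line runs SEC as a pipeline filter which
reads events from standard input and exits once the input is exhausted
(my.sec is a hypothetical rule file):

grep sshd /var/log/secure | sec --conf=my.sec --input=- --notail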
Q2: How can I use SEC with rsyslog?
A: If you want to configure SEC to monitor log files which are created by
rsyslog, simply use --input command line options for specifying
the paths to log files, and --conf command line options for providing
appropriate rule files for log file monitoring.
However, if you would like to pass events from rsyslog to SEC over
the pipe interface, you would have to provide specific configuration
options for rsyslog and SEC. The following example has been tested
with SEC-2.7.4 and rsyslog-v5.
These rsyslog configuration directives start /usr/local/bin/sec.sh
with its standard input connected to a pipe,
and use the pipe for feeding all syslog messages with either auth- or
authpriv-facility to the standard input of /usr/local/bin/sec.sh:
$ModLoad omprog
$ActionOMProgBinary /usr/local/bin/sec.sh
auth,authpriv.* :omprog:
In /usr/local/bin/sec.sh, provide a full command line for starting
SEC, for example:
#!/bin/bash
exec /usr/local/bin/sec --conf=/etc/sec/sec.conf --notail --input=-
Please note that you have to provide SEC with the --notail option,
in order to ensure it terminates when rsyslog closes the pipe.
Otherwise, a redundant SEC instance would stay around after
rsyslog has been restarted or shut down.
Rsyslog-v8 supports several options which ease the integration of
SEC with rsyslog. The following example (tested with rsyslog-8.17)
receives input events from port 514/udp (standard port for BSD syslog
protocol) and sends events with facilities auth and authpriv
to SEC, using rsyslog traditional file format:
module(load="imudp")
input(type="imudp" port="514")
module(load="omprog")

if $syslogfacility == 4 or $syslogfacility == 10 then {
  action(type="omprog" name="sec"
         binary="/usr/local/bin/sec --conf=/etc/sec/sec.conf --notail --input=-"
         template="RSYSLOG_TraditionalFileFormat" hup.signal="USR2")
}
Due to the hup.signal="USR2" option, rsyslog sends the USR2 signal
to the SEC process when rsyslog receives the HUP signal during log rotation
(unlike rsyslog, SEC employs USR2 for rotating logs).
Without the hup.signal="USR2" option, the SEC process would receive
the HUP signal from rsyslog, which clears all previous event correlation
state and restarts SEC.
If you would like to send events to a local copy of rsyslog, you can run
external tools like /usr/bin/logger from SEC. Also, on many platforms
(e.g., Linux) syslog messages from local programs are accepted over a UNIX
domain socket.
For example, the following SEC action sends to rsyslog an event
"This is a test" which is issued with the tag mytest,
daemon-facility and info-level. The example assumes that
syslog messages are accepted from /dev/log socket in datagram mode:
action=udgram /dev/log <30>mytest: This is a test
Q3: How can I use SEC with syslog-ng?
A: If you would like to monitor log files created by syslog-ng, use
--input command line options for specifying their locations, and
use --conf options for providing rule files for log file monitoring.
In order to send events from syslog-ng to SEC over the pipe interface,
use the program() destination driver. For example, with the following
configuration syslog-ng uses a pipe for feeding SEC with all messages
received over the port 514/udp (tested with SEC-2.7.4 and syslog-ng-3.3):
source net { udp(); };
destination sec { program("/usr/local/bin/sec --conf=/etc/sec/sec.conf --notail --input=-"); };
log { source(net); destination(sec); };
Also note that SEC must be provided with the --notail option, in order
to ensure it terminates when syslog-ng closes the pipe.
Q4: Does SEC support network topology databases of HP OpenView, Tivoli,
or some other platforms? How can I use network topology information in the
event correlation process?
A: There is no support for any specific network topology database format
in the SEC core. However, SEC allows you to integrate custom scripts and
Perl code into the SEC event flow (e.g., see the SingleWithScript rule
and the 'eval' action in the SEC man page). Basically, you have to write
a script or Perl code that can query the topology database you have, and then
use it from rules.
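As an illustration, the following sketch assumes a hypothetical script
/usr/local/bin/topoquery which takes a node name as its argument and exits
with code 0 only if, according to the topology database, the outage is not
explained by a failure of an upstream device; the rule thus reports only
root-cause outages:

type=SingleWithScript
ptype=RegExp
pattern=node (\S+) unreachable
script=/usr/local/bin/topoquery $1
desc=node $1 unreachable and not explained by upstream failures
action=write - node $1 down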
Q5: How can I integrate SEC with HP OpenView Network Node Manager?
A: Network Node Manager 5.0 and earlier releases write all the events they
see to the trapd.log file. Therefore, you just have to specify this file as
input for SEC. Starting from version 6.0, Network Node Manager no longer
produces the trapd.log file by default. To force it to do so, you have to
edit the pmd.lrf file, adding the -SOV_EVENT;t option to the file:
OVs_YES_START::-SOV_EVENT;t:OVs_WELL_BEHAVED:15:PAUSE
After that, execute the following commands:
ovstop pmd
ovaddobj pmd.lrf
ovstart pmd
For producing output events from SEC, you can use Network Node Manager's
ovevent utility. For detailed information, see ov_event(5), lrf(4), and
ovevent(1) manual pages.
Q6: How can I integrate SEC with HP OpenView Operations?
A: Use the itostream plugin that is part of the SEC package. The plugin has
been tested with Operations 5.3, 6.0, 7.0, 8.1 and 9.2, and has been found
to work with HP-UX, Solaris and Linux management servers, as well as with
HP-UX, Solaris and Linux agents.
To use the plugin, you first need to compile it. The compilation
instructions are located in the itostream.c file, but if you are compiling
on a management server, the following line should be sufficient:
gcc -o itostream itostream.c -L/opt/OV/lib -lopcsv -lnsp
On agents, use the -DAGENT flag, e.g.,
gcc -o itostream itostream.c -DAGENT -L/opt/OV/lib -lopc -lnsp
On some agent platforms the /opt/OV/lib directory is not included in the
shared library search path, which results in an error message when you try
to run the itostream binary. To include /opt/OV/lib in the search path, use
the -Xlinker -rpath options:
gcc -o itostream itostream.c -DAGENT -L/opt/OV/lib -lopc -lnsp \
    -Xlinker -rpath /opt/OV/lib
Also, some Operations agent platforms don't have the /opt/OV/lib/libopc.*
library, which is normally just a symbolic link to the
/opt/OV/lib/libopc_r.* library. In that case, try
the following command line:
gcc -o itostream itostream.c -DAGENT -L/opt/OV/lib -lopc_r -lnsp
(i.e., use -lopc_r option instead of -lopc).
In order to use itostream binary on the management server, you need to
enable output for external MSI plugins. To do that, open the Operations GUI
Node Bank, and go to Actions->Server->Configure. Then check "Enable Output"
option, and close the window.
If you wish to use itostream on particular Operations managed node, right-click
on the managed node icon, and go to Modify->Advanced Options. Then check
"Enable Output" option, and close the window.
Itostream takes 2 parameters: the name of the MSI interface (you can use
an arbitrary string here, like "test" or "mymsi"), and a timeout N -
when itostream has seen no data for the last N seconds, it will try to
reconnect to the local Operations agent or Operations management server. After
startup, itostream will write Operations messages to its standard output, one
message per line. Itostream's standard output can be directed to a pipe or
file, which can be input for SEC. Here are some sample lines from itostream
output:
time=1025713202 sev=normal node=server1.mydomain app=TEST obj=TEST msg_grp=Network msg_text=node up
time=1025713224 sev=major node=server2.mydomain app=opsystem obj=disk msg_grp=OS msg_text=Disk fault
time=1025713227 sev=critical node=server2.mydomain app=opsystem obj=server msg_grp=OS msg_text=node down
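For example, assuming the compiled plugin has been installed as
/usr/local/bin/itostream, the following pipeline would use the MSI interface
name mymsi with a 60 second reconnect timeout, and feed Operations messages
directly to SEC (ito.sec is a hypothetical rule file):

/usr/local/bin/itostream mymsi 60 | sec --conf=ito.sec --input=- --notail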
Q7: How can I use SEC with Graphite?
A: You can use the 'tcpsock' action for sending data to Graphite. By default,
Graphite listens on the port 2003/tcp for lines in plaintext format, where
each line has the following layout:
metric_path metric_value metric_timestamp
For example, the line switch.box2.cpu.util 100 1370349000 could
represent the fact that the CPU utilization of the switch box2 was 100% on
June 4, 2013 12:30 UTC (1370349000 seconds since January 1, 1970, 00:00 UTC).
The following ruleset keeps track of SSH login failures from different client
systems, and reports the number of login failures per client IP address to
Graphite:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed \S+ for (?:invalid user )?\S+ from ([\d.]+) port \d+ ssh2$
desc=SSH login failure from $1
action=lcall %o $1 -> ( sub { ++$sshlogin_failed{$_[0]}; } )

type=Calendar
time=*/5 * * * *
desc=report SSH login failure statistics
action=lcall %temp -> ( sub { return keys %sshlogin_failed; } ); \
       fill CLIENTS %temp; \
       lcall %temp -> ( sub { return values %sshlogin_failed; } ); \
       fill COUNTS %temp; \
       lcall %n -> ( sub { %sshlogin_failed = (); return "\n"; } ); \
       getsize %size CLIENTS; while %size ( \
       shift CLIENTS %client; shift COUNTS %count; \
       tcpsock localhost:2003 ssh.login.failure.%client %count %u%n; \
       getsize %size CLIENTS )
Login failure counts for clients are kept in the Perl hash table
%sshlogin_failed which is maintained by both rules.
The first rule matches an SSH login failure event, extracting the IP address
of the client and incrementing the entry for the given IP in the
%sshlogin_failed hash table.
The second rule reports login failures per client IP address once every
5 minutes. The rule also resets the %sshlogin_failed hash table,
in order to start counting from scratch for the following 5 minutes.
During reporting, the second rule extracts client IP addresses and login
failure counts from the %sshlogin_failed hash table,
storing these data to contexts CLIENTS and COUNTS, respectively.
Note that client IP addresses and respective counts are stored in the same
order (this is ensured by Perl's keys() and values() functions).
For example, if the second element in the store of the CLIENTS context is
10.1.1.1, it is also the second element in the store of COUNTS which reflects
login failures from 10.1.1.1.
In order to send collected data to Graphite, the 'while' action is used
to loop over CLIENTS and COUNTS contexts, shifting elements out from both
contexts during each iteration and sending them to Graphite with the 'tcpsock'
action. The loop is executed until the store of the CLIENTS context contains
no elements (the 'getsize' action returns 0).
Each 'tcpsock' action takes the client IP address and login failure count,
and forms the following data string:
ssh.login.failure.<IP address> <count> <timestamp><newline>
The timestamp is obtained from the %u action list variable which is
automatically maintained by SEC, while newline is assigned to the %n action
list variable with the 'lcall' action (the same action resets
the %sshlogin_failed hash table).
After creating the data string, the 'tcpsock' action sends it to the port
2003/tcp of the Graphite server (the example assumes the server is running
at the local host).
Q8: How can I convert Swatch rules into SEC rules?
A: A Swatch rule that consists of a regular expression and an action without
thresholding conditions can be expressed with a SEC Single rule. For example,
the Swatch rule
watchfor /sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$/
        exec echo Login failure for user $1
can be converted to
type=Single
ptype=Regexp
pattern=sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$
desc=login failure
action=write - Login failure for user $1
Suppose you have the following Swatch thresholding rule:
watchfor /sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$/
        threshold track_by=$1,type=both,count=3,seconds=60
        exec echo Three login failures for user $1 within 1m
This rule matches SSH login failure events and writes a warning to standard
output if three failed logins have been observed for the *same* user within
60 seconds.
Swatch thresholding rules can be tuned by setting the following parameters:
track_by -- scope of counting
count -- event threshold
seconds -- counting window
type -- type of thresholding
The 'type' parameter can have the following values:
limit -- react to the first 'count' events with an action and ignore
the following ones (e.g., if count=3, react to 1st, 2nd and 3rd event)
threshold -- react to each 'count'-th event with an action
(e.g., if count=3, react to 3rd, 6th, 9th, ... event)
both -- react to the 'count'-th event with an action (e.g., if count=3,
react to the 3rd event only)
The 'both' thresholding mode maps naturally to the SingleWithThreshold
rule of SEC. For example, the Swatch rule
watchfor /sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$/
        threshold track_by=$1,type=both,count=3,seconds=60
        exec echo Three login failures for user $1 within 1m
can be written as follows:
type=SingleWithThreshold
ptype=Regexp
pattern=sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$
desc=$1
action=write - Three login failures for user $1 within 1m
thresh=3
window=60
In order to mimic the 'threshold' mode of Swatch, change the 'action'
parameter of the above SEC rule in the following way:
action=write - Three login failures for user $1 within 1m; reset 0
For thresholding similar to the Swatch 'limit' mode, use the EventGroup rule
of SEC:
type=EventGroup
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$
context=!SUPPRESS_SSH_USER_$1
count=write - Login failure for user $1
desc=$1
action=create SUPPRESS_SSH_USER_$1
thresh=3
window=60
end=delete SUPPRESS_SSH_USER_$1
The 'count' parameter of this rule executes an action on each matching
event, and the 'action' parameter sets up a suppressing context when 3 events
have been seen for the same user name. The context disables further matching
for the given user name and is removed by the 'end' parameter when
counting operation terminates for this user name.
Note that if the Swatch rule has the 'type' parameter set to 'limit' or 'both',
and the 'count' parameter is set to 1, the rule executes an action for
the first event instance and suppresses the following instances in the given
time window. Such rules are easiest to express with SEC SingleWithSuppress
rules, for example:
type=SingleWithSuppress
ptype=Regexp
pattern=sshd\[\d+\]: Failed .+ for (\w+) from [\d.]+ port \d+ ssh2$
desc=$1
action=write - Login failure for user $1, suppressing repeated events for the same user during 1m
window=60
Q9: I have set up a named pipe as an input file with
the --input option, but SEC is unable to open it, although SEC has permission
to read it. Why doesn't my setup work?
A: In order to keep the pipe open at all times without the need to close
and reopen it when the writer closes the pipe, SEC opens named pipes in
read-write mode by default. For changing this behavior, use the
--norwfifo command line option.
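For example, the following commands (with illustrative path names) create
a named pipe and start SEC on it in the default read-write mode, so that
writers can close and reopen the pipe at will without SEC losing the input
source:

mkfifo /var/run/sec.fifo
sec --conf=/etc/sec/sec.conf --input=/var/run/sec.fifo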
Q10: What is the difference between rules and event
correlation operations?
A: Basically, rules are instructions to SEC that tell which event correlation
operations to start and how to feed them with events. Rules are static in
their nature - their number will not change at runtime, unless you have
updated the configuration file(s) and sent SIGHUP or SIGABRT to SEC.
In contrast, event correlation operations are dynamic - they are started by
rules and they terminate after their job is done.
There is no 1-1 relationship between the rules and event correlation
operations - there can be many simultaneously running event correlation
operations that were all started by the same rule.
After the rule has started an event correlation operation, this event
correlation operation needs to be distinguished from other operations.
To do this, SEC assigns a key to the operation that is composed from
configuration file name, rule ID, and the operation description string
(defined by the desc field of the rule).
Say that you have configuration file my.conf with one rule in it:
type=SingleWithThreshold
ptype=RegExp
pattern=user (\S+) login failure on (\S+)
desc=Repeated login failures for user $1 on $2
action=shellcmd notify.sh "%s"
window=60
thresh=3
Suppose that SEC observes the input line "user admin login failure on tty1".
This matches the pattern 'user (\S+) login failure on (\S+)', and SEC will
now build event correlation key for the observed event. After
replacing $1 and $2 with actual values, the desc field evaluates to
the operation description string
"Repeated login failures for user admin on tty1".
Using the configuration file name, the rule ID, and the operation
description string for building the event correlation key will yield
the following value:
my.conf | 0 | Repeated login failures for user admin on tty1
(Since the rule was the first one in the configuration file, its ID is 0.
The ID for the second rule would be 1, for the third rule 2, etc.)
When SEC observes the input line "user USERNAME login failure on TERM", it
will first calculate the key and check if there already is an event
correlation operation with that key. If such an operation exists, the
detected line will be correlated by this operation. Otherwise, a new event
correlation operation will be started which will consume the input line.
This processing scheme means that by using appropriate desc fields,
you can change the scope of event correlation.
For instance, if you use 'Repeated login failures for user $1' for
desc, you will count login failures for different users,
disregarding terminal names. Therefore, the following three lines will be
correlated by the same event correlation operation:
user admin login failure on tty1
user admin login failure on tty5
user admin login failure on tty2
However, if you use 'Repeated login failures for user $1 on $2' for
desc, the three lines above will each start a separate event
correlation operation.
Since the configuration file name and rule ID are present in the keys, event
correlation operations started by different rules will not clash, even if
their operation description strings are identical.
Q11: How can I see what event correlation operations
are currently active? How can I see the state of individual event
correlation operations? How can I see what SEC is currently doing?
A: Send the SIGUSR1 signal to the SEC process; this will cause SEC to dump
detailed information about its state to the dumpfile (given with --dump
option). The information includes details about event correlation operations
and contexts that are currently active, SEC performance and rule usage
statistics, etc.
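For example, if SEC has been started with the --pid=/run/sec.pid and
--dump=/tmp/sec.dump options, the state dump can be produced and examined
as follows:

kill -USR1 $(cat /run/sec.pid)
cat /tmp/sec.dump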
Q12: How does 'reset' action work?
A: Suppose you have two rules in your configuration file my.conf:
type=Single
ptype=RegExp
pattern=user (\S+) logged in on (\S+)
desc=User $1 successful login
action=reset +1 Repeated login failures for user $1 on $2

type=SingleWithThreshold
ptype=RegExp
pattern=user (\S+) login failure on (\S+)
desc=Repeated login failures for user $1 on $2
action=shellcmd notify.sh "%s"
window=60
thresh=3
Suppose SEC observes the following lines:
user admin login failure on tty1
user admin login failure on tty5
user admin login failure on tty2
The second rule will start a separate event correlation operation for each
of the lines. The keys of these operations are:
my.conf | 1 | Repeated login failures for user admin on tty1
my.conf | 1 | Repeated login failures for user admin on tty5
my.conf | 1 | Repeated login failures for user admin on tty2
(since the rule was the second one in the configuration file, its ID is 1).
When SEC observes the line 'user admin logged in on tty5', it will
evaluate the $1 and $2 variables, yielding the following action:
reset +1 Repeated login failures for user admin on tty5
This means that SEC has to terminate the operation which has been started
by the next rule (+1) and which has the key
my.conf | 1 | Repeated login failures for user admin on tty5
Since "the rule next to the first" and "the second rule" refer to the same
rule, another way to write the same action is:
action=reset 2 Repeated login failures for user $1 on $2
If there is no rule number specified in the action definition, e.g.
action=reset Repeated login failures for user $1 on $2
then SEC will assume a wildcard for the rule number, constructing all
possible keys and trying to find and terminate corresponding operations.
If there are 5 rules in the configuration file, SEC would look for the
operations with the following keys:
my.conf | 0 | Repeated login failures for user admin on tty5
my.conf | 1 | Repeated login failures for user admin on tty5
my.conf | 2 | Repeated login failures for user admin on tty5
my.conf | 3 | Repeated login failures for user admin on tty5
my.conf | 4 | Repeated login failures for user admin on tty5
Q13: How can I use regular expression modifiers in SEC rule definitions
(like //i, //m, etc.)? How can I insert comments into regular expressions?
A: SEC regular expression definitions don't include surrounding slashes, and
therefore it looks like there is no place for modifiers. Fortunately, Perl
regular expressions allow you to use modifiers inside the expression itself:
/your_regexp/i can be expressed as /(?i)your_regexp/
/your_regexp/m can be expressed as /(?m)your_regexp/
etc.
(see the perlre(1) man page)
For example, if you would like to set a pattern field to /[A-Z]/i, the
correct way of doing that would be 'pattern=(?i)[A-Z]'.
In order to insert comments into regular expressions, you can use
(?#text) constructs. For example, the following three pattern
definitions are equivalent:
pattern=test:\
(?# this is a comment )\
(\S+)$

pattern=test:(\S+)(?#another comment)$

pattern=test:(\S+)$
When using the (?x) modifier for inserting the comment, one has to
bear in mind that multi-line SEC regular expressions are always converted
into single-line format before they are compiled. For example, consider
the following pattern definition:
pattern=(?x)test:\
# this is a comment \
(\S+)$
Before compilation, this pattern is converted into
pattern=(?x)test:# this is a comment (\S+)$
However, the above pattern is equivalent to
pattern=(?x)test:
which is probably not what was intended.
Q14: How can I load Perl modules that could be used at SEC runtime?
A: Add the --intevents option to the SEC command line, and write a rule for
loading the necessary module when the SEC_STARTUP event is observed. The
following rule will load the SNMP module, and terminate SEC if the loading
failed:
type=single
ptype=substr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Load SNMP module
action=assign %a 0; eval %a (require SNMP); eval %a (exit(1) unless %a)
Note that the %a variable is set to zero before the load attempt, since a
failing SEC 'eval' action does not change the previous value of its variable.
Therefore, if %a is still zero after the load attempt, the attempt was not
successful.
Q15: How can I save contexts to disk when SEC is shut down,
and read them in after SEC has been restarted?
A: Add the --intevents option to the SEC command line, and write rules for
saving context names when the SEC_SHUTDOWN event is observed and for reading
context names in when the SEC_STARTUP event is observed. For writing context
names into a file, use the SingleWithScript rule:
# save context names
type=SingleWithScript
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
script=cat > /tmp/sec_contexts.dump
desc=Saving the SEC contexts
action=none
# read in context names, prepending 'SEC_CONTEXT: ' prefix to every name
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Read in previously saved SEC contexts
action=spawn perl -ne 'print "SEC_CONTEXT: $_"' /tmp/sec_contexts.dump
# Create contexts, based on the information received from the previous rule
type=Single
ptype=RegExp
pattern=^SEC_CONTEXT: (.*)
desc=Recreate context $1
action=create $1
The following ruleset loads the Perl Storable module at SEC startup and uses
it for saving/restoring all context data like context names, their lifetimes,
and event stores (since code references can't be saved/restored with
Storable, the following example assumes that context action lists do not
contain 'lcall' actions):
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
continue=TakeNext
desc=Load the Storable module and terminate if it is not found
action=assign %ret 0; eval %ret (require Storable); \
       eval %ret (exit(1) unless %ret)

type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Restore all SEC contexts from /tmp/SEC_CONTEXTS on startup
action=lcall %ret -> ( sub { \
       my $ptr = $main::context_list{"SEC_INTERNAL_EVENT"}; \
       %main::context_list = \
       %{Storable::retrieve("/tmp/SEC_CONTEXTS")}; \
       $main::context_list{"SEC_INTERNAL_EVENT"} = $ptr; } )

type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
desc=Save all SEC contexts into /tmp/SEC_CONTEXTS on shutdown
action=lcall %ret -> ( sub { \
       Storable::store(\%main::context_list, "/tmp/SEC_CONTEXTS"); } )
Q16: How can I set up bi-directional communication between SEC and
its subprocess that was started by 'spawn' action?
A: When another process is started with the 'spawn' action, it can send data
to SEC by writing to its standard output (internally, the standard output of
the process is redirected to a pipe that SEC reads).
To send data from SEC to the spawned process, set up a named pipe or file
that the process reads, and use the 'write' action for writing to that pipe
or file.
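The following sketch illustrates the idea with a hypothetical helper script
/usr/local/bin/myhelper.pl which reads queries from the named pipe
/var/run/helper.fifo and writes replies to its standard output (the replies
then appear to SEC as synthetic events); the example assumes that SEC runs
with the --intevents option, so that the SEC_STARTUP event is generated:

type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=start the helper process
action=spawn /usr/local/bin/myhelper.pl /var/run/helper.fifo

type=Single
ptype=RegExp
pattern=lookup host (\S+)
desc=send a query to the helper
action=write /var/run/helper.fifo $1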
Q17: How can I run 'shellcmd' actions in an ordered manner (i.e.,
an action is not started before the previous one has completed)?
A: Suppose you have the following rule definition:
type=Calendar
time=0 0 * * *
desc=Sending report
action=shellcmd cat /tmp/myreport | mail root@localhost; \
       shellcmd rm -f /tmp/myreport
Since the runtime of external programs started with the 'shellcmd' actions
is not limited in any way, SEC creates a separate process for executing
each program, in order to avoid freezing the whole event processing.
Therefore, although the first action
(cat /tmp/myreport | mail root@localhost) is started before the second one
(rm -f /tmp/myreport), it is not guaranteed that the first action has already
terminated when the second action starts. Furthermore, since commandlines
are first processed by the shell, it could well happen that the second action
is actually executed first, especially if its commandline is much simpler and
takes less time to process.
Therefore, the rule definition above might easily produce an empty e-mail
message, since the file may be removed just before 'cat' gets to it.
In order to avoid such unwanted behaviour, you could use a single 'shellcmd'
action and take advantage of the shell's && control operator:
type=Calendar
time=0 0 * * *
desc=Sending report
action=shellcmd cat /tmp/myreport | mail root@localhost && rm -f /tmp/myreport
i.e., the file /tmp/myreport is not removed before the 'mail' command has
completed successfully. Another way to solve this problem is to put all
your commands into a separate shell script, and give the name of the script
to the 'shellcmd' action.
Q18: I have started a long-running subprocess from SEC, but when I
send SIGHUP or SIGTERM to SEC, the subprocess will not receive SIGTERM as it
should. What can be done about it?
A: When a command is started from Perl with system() or open() call,
Perl checks whether the command contains shell metacharacters, and if
it does, the command is executed with the interpreting shell
(on UNIX platforms, normally with /bin/sh -c your_command).
This means that when SEC is sending SIGTERM to its child processes,
your_command will NOT receive SIGTERM, but it will be sent to the shell
that started it.
In order to avoid such unwanted behaviour (and save one slot
in your process table), use shell's exec builtin command.
When exec is prepended to your command line, the shell will not
fork a separate process for your command, but it will be executed
inside the current process. E.g., when you specify
action=spawn exec /usr/local/bin/myscript.pl 2>/var/log/myscript.log
an extra process will not be created for myscript.pl, although the
commandline contains the shell redirection metacharacter '>'.
Q19: How can I write a rule for several input sources
that would be able to report the name of the source for matching lines?
How to report the name of the internal context for input source?
A: Starting from SEC-2.6.1, you can take advantage of the $+{_inputsrc}
match variable that holds the name(s) of input source(s) for matching line(s).
With earlier versions of SEC,
use the PerlFunc pattern type that has the input source name as one of its
input parameters, and return the input source name from the pattern function.
E.g., the following rule matches the "File system full" messages with
a regular expression, and sets $1 and $2 variables to the file system and input
source names:
type=single
ptype=perlfunc
pattern=sub { if ($_[0] =~ /(\S+): [Ff]ile system full/) { \
        return ($1, $_[1]); } else { return 0; } }
desc=File system $1 full ($2)
action=write - File system $1 full ($2)
If the "/opt: file system full" message is logged to /var/log/server1.log,
the rule writes "File system /opt full (/var/log/server1.log)" to standard
output.
Starting from SEC-2.8.0, you can use the $+{_intcontext} match variable
for getting the name of the internal context for current input source.
Q20: How can I limit the run time of child processes?
A: First, you can use the timeout tool for executing command lines
and terminating them with a signal if the command is still running after a
given number of seconds.
For example, the following 'spawn' action starts /bin/myprog and terminates
it with signal 15 (TERM) if it is still running after 10 seconds. After
producing synthetic events from standard output of /bin/myprog, the 'spawn'
action also generates a synthetic event "Exit code: N" with the exit code of
/bin/myprog (by convention, exit code 124 indicates that /bin/myprog timed out
and was terminated by the timeout tool):
action=spawn ( /usr/bin/timeout -s 15 10 /bin/myprog; /bin/echo -e "\nExit code: $?" )
As an alternative to the above example, you could also use the following
fairly simple Perl wrapper script for limiting the run time of child
processes:
#!/usr/bin/perl -w
# Usage: wrapper.pl <seconds> <signal> <command> [args ...]

if (scalar(@ARGV) < 3) { exit(1); }

$int = shift @ARGV;
$sig = shift @ARGV;

# remember a TERM signal that arrives before fork() has completed
$SIG{TERM} = sub { $term{$$} = 1; };

$pid = fork();

if ($pid == -1) {
  exit(1);
} elsif ($pid == 0) {
  # child: execute the given command line
  $SIG{TERM} = 'DEFAULT';
  if (exists($term{$$})) { exit(0); }
  exec("@ARGV");
  exit(1);
} else {
  # parent: forward TERM to the child, and send the given signal
  # to the child when the timer expires
  $SIG{TERM} = sub { kill TERM, $pid; exit(0); };
  if (exists($term{$$})) { kill TERM, $pid; exit(0); };
  $SIG{ALRM} = sub { kill $sig, $pid; exit(0); };
  alarm($int);
  waitpid($pid, 0);
  exit($? >> 8);
}
The following 'shellcmd' action will invoke /bin/myprog through this wrapper
and terminate it after 15 seconds with the KILL signal:
action=shellcmd /usr/local/bin/wrapper.pl 15 9 /bin/myprog
Q21: How can I integrate SEC with systemd?
A: In order to integrate SEC with systemd, you need to set up a systemd
service file and an environment file for SEC. If you wish to run a single
instance of SEC, you can just set up a simple sec.service service file (e.g.,
on the RHEL/CentOS/Fedora platform this file is located in the
/usr/lib/systemd/system directory) without having the environment file. For
example, sec.service could have the following content:
[Unit]
Description=Simple Event Correlator script to filter log file entries
After=syslog.target

[Service]
Type=forking
PIDFile=/run/sec.pid
ExecStart=/usr/bin/sec --detach --pid=/run/sec.pid --conf=/etc/sec/*.sec --input=/var/log/messages --log=/var/log/sec --intevents

[Install]
WantedBy=multi-user.target
Note that the above service file provides all command line options for a single SEC process,
and the environment file is thus not necessary. Also note that the PIDFile
directive of the sec.service file has to refer to the pid file created by SEC with
the --pid command line option. After setting up this file, the command line
systemctl start sec will start SEC, while systemctl enable sec will enable
starting it at boot.
However, if you wish to run several SEC instances, a special service file sec@.service
needs to be defined which takes advantage of the %I specifier for referring to one particular
instance (the use of @-sign in the service file name indicates it is used for multiple instances).
For example:
[Unit]
Description=Simple Event Correlator (instance %I)
After=syslog.target

[Service]
Type=forking
PIDFile=/run/sec-%I.pid
EnvironmentFile=/etc/sysconfig/sec
ExecStart=/usr/bin/sec --detach --pid=/run/sec-%I.pid $OPTIONS_%I

[Install]
WantedBy=multi-user.target
Note that this time only a few command line options which are common for all
instances are provided in the service file, while instance specific options
are given in the environment file /etc/sysconfig/sec. The environment file
has to be referred to
with the EnvironmentFile directive in the sec@.service file.
Also note that the ExecStart directive uses the $OPTIONS_%I variable in the SEC
command line which holds instance specific command line options. For each SEC instance,
a separate variable is defined in the /etc/sysconfig/sec environment file. For example,
if you want to run two SEC instances called suricata and os, you can set up
the environment file /etc/sysconfig/sec as follows:
OPTIONS_suricata="--conf=/etc/sec/suricata/*.sec --input=/var/log/suricata/fast.log --user=suricata --umask=027"
OPTIONS_os="--conf=/etc/sec/os/*.sec --input=/var/log/messages"
According to the OPTIONS_suricata variable, the suricata instance of SEC will monitor
the input file /var/log/suricata/fast.log with rules loaded from files
/etc/sec/suricata/*.sec. Also, this instance will run with permissions of user suricata
with file creation mask set to 027. Finally, according to generic options defined in
the sec@.service file, this instance will run as a daemon (because of the
--detach option) and will store its process ID to /run/sec-suricata.pid.
According to the OPTIONS_os variable, the os instance of SEC will monitor the input
file /var/log/messages with SEC rules loaded from /etc/sec/os/*.sec.
Also, the instance will run as a daemon with root privileges (default behavior if
the --user option is not given), storing its process ID to /run/sec-os.pid.
In order to enable both instances, the following command lines have to be executed:
systemctl enable sec@suricata
systemctl enable sec@os
Also, starting the instances can be accomplished with
systemctl start sec@suricata
systemctl start sec@os
Q22: How can I parse events in JSON format?
A: For a recipe, please have a look at the relevant example in the SEC rule repository:
https://github.com/simple-evcorr/rulesets/tree/master/parsing-json
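As a minimal sketch of the approach taken in that recipe, the following rule
decodes a JSON event with the JSON::PP module from the standard Perl
distribution (loaded beforehand as described in Q14) and returns selected
fields as positional match variables; the field names host and message are
made up for illustration:

type=Single
ptype=PerlFunc
pattern=sub { my($ref); eval { $ref = JSON::PP::decode_json($_[0]); }; \
        if ($@ || ref($ref) ne "HASH") { return 0; } \
        return ($ref->{"host"}, $ref->{"message"}); }
desc=JSON event from host $1
action=write - host $1 reported: $2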
Q23: How can I write regular expression patterns for recognizing letters
and other character classes in encodings like UTF-8?
A: For recognizing characters in various encodings like UTF-8 and iso-8859-1,
use PerlFunc patterns for converting the input line from its native encoding
into a format with Perl wide characters, and then apply the regular
expression against the line. For a more detailed recipe, please have a look
at the UTF-8 related example in the SEC rule repository:
https://github.com/simple-evcorr/rulesets/tree/master/utf8
Note: do not set the PERL_UNICODE environment variable for UTF-8 processing
with SEC, since this approach no longer works with recent Perl versions.
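As a minimal sketch of this technique (with the core Encode module loaded
beforehand, e.g., using the method from Q14), the following rule decodes each
input line from UTF-8 before matching, so that \w also covers non-ASCII
letters in the decoded string; the event format is made up for illustration:

type=Single
ptype=PerlFunc
pattern=sub { my($line) = Encode::decode("UTF-8", $_[0]); \
        if ($line =~ /user (\w+) logged in/) { return ($1); } return 0; }
desc=login event for user $1
action=write - user $1 logged in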
Q24: How can I monitor log files with names that contain timestamps?
A: Starting from SEC-2.8.0, you can use 'addinput' and 'dropinput' actions
for changing the list of input files at runtime.
For example, assume that log messages are divided into separate log files
by date, and log file names have the format /var/log/mylog-YYYYMMDD
(e.g., on September 5 2018 all messages would go to
/var/log/mylog-20180905, while after a date change a new log file
/var/log/mylog-20180906 would be created for September 6 2018).
If SEC is started with the following command line
sec --conf=/etc/sec/track-input.sec --reopen-timeout=60 --intevents --intcontexts
the following rules from /etc/sec/track-input.sec will implement tracking
input log files by date:
type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART)$
context=SEC_INTERNAL_EVENT
desc=start tracking log file for the current day at SEC (re)start
action=event ADDINPUT /var/log/mylog-%{.year}%{.mon}%{.mday} OFFSET -

type=Calendar
time=0 0 * * *
desc=switch over to new log file at midnight
action=event ADDINPUT /var/log/mylog-%{.year}%{.mon}%{.mday} OFFSET 0

type=Single
ptype=RegExp
pattern=^ADDINPUT (\S+) OFFSET (0|-)$
context=_INTERNAL_EVENT && !INPUT_$1
desc=open input file $1 and start reading from offset $2
action=addinput $1 $2; create INPUT_$1 86400 ( dropinput $1 )
The first rule generates a synthetic event
"ADDINPUT /var/log/mylog-YYYYMMDD OFFSET -"
at SEC startup or restart, which triggers the opening of the log file for
the current day. In the synthetic event, offset '-' is indicated, which means
that the file will be tracked from EOF.
The second rule generates a synthetic event
"ADDINPUT /var/log/mylog-YYYYMMDD OFFSET 0"
each midnight which triggers switching over to a new log file after a date
change (in this case, offset 0 is used, in order to track the new log file
from the beginning).
The third rule will match the aforementioned synthetic events and employ the
'addinput' action for adding the given log file to the list of inputs.
If the file can't be opened immediately (e.g., the file does not exist yet),
the --reopen-timeout=60 command line option will configure SEC to attempt
to reopen the file every 60 seconds until the attempt succeeds.
After 'addinput' action has been executed for the log file, the context
INPUT_/var/log/mylog-YYYYMMDD
is created for this file with the lifetime of 86400 seconds (1 day).
The context will expire after a date change and drop the log file
from the list of inputs with 'dropinput' action, since the log file
is no longer relevant for the current day.
In some cases, it is not known in advance when new log files are created.
For example, an application might create a new log file after the previous
file has reached a specific size, and log files could have the format
/var/log/mylog-YYYY-MM-DD-HH:MM:SS (e.g., if the log file has been
created on March 31 2020 at 12:37:44, its name is
/var/log/mylog-2020-03-31-12:37:44).
If SEC is started with the following command line
sec --conf=/etc/sec/track-input.sec --intevents --intcontexts
the following rules from /etc/sec/track-input.sec will implement tracking
input log files by timestamp:
type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART)$
context=SEC_INTERNAL_EVENT
desc=start log file tracker
action=cspawn LOGTRACKER /usr/local/bin/logtracker.pl '/var/log/mylog-*'

type=Single
ptype=RegExp
pattern=^Open file (.+) (0|-)$
context=LOGTRACKER
desc=open logfile $1 and start reading from offset $2
action=addinput $1 $2

type=Single
ptype=RegExp
pattern=^Close file (.+)$
context=LOGTRACKER
desc=close logfile $1
action=dropinput $1
The above ruleset employs the following logtracker.pl script:
#!/usr/bin/perl -w

$| = 1;

if (!defined($ARGV[0])) { die "Usage: $0 <file_pattern>\n"; }

$pattern = $ARGV[0];

@files = glob($pattern);

if (scalar(@files)) {
  $logfile = $files[-1];
  print "Open file $logfile -\n";
}

for (;;) {
  sleep(1);
  @files = glob($pattern);
  if (!scalar(@files)) { next; }
  $lastfile = $files[-1];
  if (!defined($logfile)) {
    $logfile = $lastfile;
    print "Open file $logfile 0\n";
  } elsif (($lastfile cmp $logfile) == 1) {
    print "Close file $logfile\n";
    $logfile = $lastfile;
    print "Open file $logfile 0\n";
  }
}
The above script is started with the 'cspawn' action on SEC startup or
restart, and the script monitors the appearance of new input log files that
match the pattern /var/log/mylog-*.
When a new log file appears, the script generates a synthetic event
"Close file <previous_file>" for closing the previous log file, and
then generates another synthetic event "Open file <new_file> 0" for
opening the new log file and processing it from the beginning.
For example, if SEC is monitoring the file
/var/log/mylog-2020-03-31-12:37:44
and new log file /var/log/mylog-2020-04-01-09:52:13 appears,
the logtracker.pl script will generate synthetic events
"Close file /var/log/mylog-2020-03-31-12:37:44" and
"Open file /var/log/mylog-2020-04-01-09:52:13 0".
These events will be matched by the second and third rules in the above SEC
ruleset, so that the monitoring of the previous log file will be ended with
'dropinput /var/log/mylog-2020-03-31-12:37:44' action and the monitoring of
new log file will be started with
'addinput /var/log/mylog-2020-04-01-09:52:13 0' action
(since the second parameter of 'addinput' action is 0, processing of
the file will start from the beginning).
When SEC is started or restarted and the pattern
/var/log/mylog-* matches, the log file with the most recent timestamp
is selected and SEC will start to monitor the file from the end
(i.e., already existing lines in the file are not processed).
For example, if /var/log/mylog-2020-03-31-12:37:44 is the log file
with most recent timestamp, logtracker.pl script will generate the
synthetic event "Open file /var/log/mylog-2020-03-31-12:37:44 -", and SEC
will start to tail this log file with
'addinput /var/log/mylog-2020-03-31-12:37:44 -' action
(offset - denotes processing from the end of file).
Q25: How can I write a rule that ends processing for matching events?
A: If you set the relevant continue* field of the rule to EndMatch,
event processing will end after the rule has been applied to the event.
To have a rule that ends event processing immediately after the event
has been matched, you can utilize a Jump rule without the cfset field
and with the continue field set to EndMatch. For example, the following
Jump rule will end the processing for sshd events (i.e., if an event
matches regular expression sshd\[\d+\]:, no further rules from any
of the rule files will be tried for this event):
type=Jump
ptype=RegExp
pattern=sshd\[\d+\]:
continue=EndMatch