SEC reads lines from files, named pipes, or standard input, matches the lines with patterns (regular expressions, Perl subroutines, etc.) for recognizing input events, and correlates events according to the rules in its configuration file(s). Rules are matched against input in the order they are given in the configuration file. If there are two or more configuration files, rule sequence from every file is matched against input (unless explicitly specified otherwise). SEC can produce output by executing external programs (e.g., snmptrap(1) or mail(1)), by writing to files, by sending data to TCP and UDP based servers, by calling precompiled Perl subroutines, etc.
SEC can be run in various ways. For example, the following command line starts it as a daemon, in order to monitor events appended to the /var/log/messages syslog file with rules from /etc/sec/syslog.rules:
/usr/bin/sec --detach --conf=/etc/sec/syslog.rules \
--input=/var/log/messages
Each time /var/log/messages is rotated, a new instance of /var/log/messages is opened and processed from the beginning. The following command line runs SEC in a shell pipeline, configuring it to process lines from standard input, and to exit when the /usr/bin/nc tool closes its standard output and exits:
/usr/bin/nc -l 8080 | /usr/bin/sec --notail --input=- \
--conf=/etc/sec/my.conf
Some SEC rules start event correlation operations, while other rules react immediately to input events or system clock. For example, suppose that SEC has been started with the following command line
/usr/bin/sec --conf=/etc/sec/sshd.rules --input=/var/log/secure
in order to monitor the /var/log/secure syslog file for sshd events. Also, suppose that the /etc/sec/sshd.rules configuration file contains the following rule for correlating SSH failed login syslog events:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost
window=60
thresh=3
The pattern field of the rule defines the pattern for recognizing input events, while the ptype field defines its type (regular expression). Suppose that user risto fails to log in over SSH and the following message is logged to /var/log/secure:
Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2
This input message will match the regular expression pattern of the above rule, and the match variable $1 will be set to the string risto (see perlre(1) for details). After a match, SEC will evaluate the operation description string given with the desc field. This is done by substituting $1 with its current value which yields Three SSH login failures within 1m for user risto. SEC will then check if there already exists an event correlation operation identified with this string and triggered by the same rule. If the operation is not found, SEC will create a new operation for the user name risto, and the occurrence time of the input event will be recorded into the operation. Note that for event occurrence time SEC always uses the current time as returned by the time(2) system call, *not* the timestamp extracted from the event.
Suppose that after 25 seconds, a similar SSH login failure event for the same user name is observed. In this case, a running operation will be found for the operation description string Three SSH login failures within 1m for user risto, and the occurrence time of the second event is recorded into the operation. If after 30 seconds a third event for the user name risto is observed, the operation has processed 3 events within 55 seconds. Since the threshold condition "3 events within 60 seconds" (as defined by the thresh and window fields) is now satisfied, SEC will execute the action defined with the action field -- it will fork a command
/bin/mail -s 'SSH login alert' root@localhost
with a pipe connected to its standard input. Then, SEC writes the operation description string Three SSH login failures within 1m for user risto (held by the %s special variable) to the standard input of the command through the pipe. In other words, an e-mail warning is sent to the local root-user. Finally, since there are 5 seconds left until the end of the event correlation window, the operation will consume any further SSH login failure events for user risto without additional action, and finish after these 5 seconds.
The above example illustrates that the desc field of a rule defines the scope of event correlation and influences the number of operations created by the rule. For example, if we set the desc field to Three SSH login failures within 1m, the root-user would be also alerted on 3 SSH login failure events for *different* users within 1 minute. In order to avoid clashes between operations started by different rules, operation ID contains not only the value set by the desc field, but also the rule file name and the rule number inside the file. For example, if the rule file /etc/sec/sshd.rules contains one rule
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost
window=60
thresh=3
and the event
Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2
is the first matching event for the above rule, this event will trigger a new event correlation operation with the ID
/etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto
(0 is the number assigned to the first rule in the file, see EVENT CORRELATION OPERATIONS section for more information).
The following simple example demonstrates that event correlation schemes can be defined by combining several rules. In this example, two rules harness contexts and synthetic events for achieving their goal:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=event 3_SSH_LOGIN_FAILURES_FOR_$1
window=60
thresh=3
type=EventGroup
ptype=RegExp
pattern=3_SSH_LOGIN_FAILURES_FOR_(\S+)
context=!USER_$1_COUNTED && !COUNTING_OFF
count=create USER_$1_COUNTED 60
desc=Repeated SSH login failures for 30 distinct users within 1m
action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost; \
create COUNTING_OFF 3600
window=60
thresh=30
The first rule looks almost identical to the rule from the previous example, but its action field is different -- after three SSH login failures have been observed for the same user name within one minute by an event correlation operation, the operation will emit the synthetic event 3_SSH_LOGIN_FAILURES_FOR_<username>. Although synthetic events are created by SEC, they are treated like regular events received from input sources and are matched against rules.
The regular expression pattern of the second rule will match the 3_SSH_LOGIN_FAILURES_FOR_<username> event and start a new event correlation operation if no such events have been previously seen. Also, each time a synthetic event for some user name has matched the rule, a context with the lifetime of 1 minute for that user name is created (see the count field). Note that this prevents further matches for the same user name, since a synthetic event for <username> can match the rule only if the context USER_<username>_COUNTED *does not* exist (as requested by the boolean expression in the context field; see CONTEXTS AND CONTEXT EXPRESSIONS section for more information).
The operation started by the second rule sends an e-mail warning to the local root-user if 30 synthetic events have been observed within 1 minute (see the thresh and window fields). Note that due to the use of the USER_<username>_COUNTED contexts, all synthetic events concern different user names. After sending an e-mail warning, the operation will also create the context COUNTING_OFF with the lifetime of 1 hour, and will continue to run until the 1 minute event correlation window expires. After the operation has finished, the presence of the COUNTING_OFF context will keep the second rule disabled (as requested by the boolean expression in the context field). Therefore, the above rules issue at most one e-mail warning per hour.
The above examples have presented the event correlation capabilities of SEC in a very brief fashion. The following sections will provide an in-depth discussion of SEC features.
Note that options can be introduced with either a single dash (-) or a double dash (--), and either the equal sign (=) or whitespace can be used for separating the option name from the option value. For example, the -conf=<file_pattern> and --conf <file_pattern> options are equivalent.
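For instance, the following two command lines (reusing file paths from the examples above) are equivalent:
/usr/bin/sec -conf /etc/sec/my.conf -input /var/log/messages
/usr/bin/sec --conf=/etc/sec/my.conf --input=/var/log/messages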
type=Single
rem=this rule matches any line which contains \
three consecutive A characters and writes the string \
"three A characters were observed" to standard output
ptype=SubStr
pattern=AAA
desc=Three A characters
action=write - three A characters were observed
# This comment line ends preceding rule definition.
# The following rule works like the previous rule,
# but looks for three consecutive B characters and
# writes the string "three B characters were observed"
# to standard output
type=Single
ptype=SubStr
pattern=BBB
desc=Three B characters
action=write - three B characters were observed
Apart from keywords that are part of rule definitions, label keywords may appear anywhere in the configuration file. The value of each label keyword will be treated as a label that can be referred to in rule definitions as a point-of-continue. This allows for continuing event processing at a rule that follows the label, after the current rule has matched and processed the event.
The points-of-continue are defined with continue* fields. Accepted values for these fields are:
TakeNext - after the match, continue the search for matching rules from the next rule,
DontCont - after the match, do not match the event against the following rules (this is the default),
EndMatch - after the match, end the search for matching rules in all configuration files,
GoTo <label> - after the match, continue the search for matching rules from the rule that follows the given label.
SEC rules from the same configuration file are matched against input in the order they have been given in the file. For example, consider a configuration file which contains the following rule sequence:
type=Single
ptype=SubStr
pattern=AAA
rem=after this rule has matched, continue from last rule
continue=GoTo lastRule
desc=Three A characters
action=write - three A characters were observed
type=Single
ptype=SubStr
pattern=BBB
rem=after this rule has matched, don't consider following rules, \
since 'continue' defaults to 'DontCont'
desc=Three B characters
action=write - three B characters were observed
type=Single
ptype=SubStr
pattern=CCC
rem=after this rule has matched, continue from next rule
continue=TakeNext
desc=Three C characters
action=write - three C characters were observed
label=lastRule
type=Single
ptype=SubStr
pattern=DDD
desc=Three D characters
action=write - three D characters were observed
For the input line "AAABBBCCCDDD", this ruleset writes strings "three A characters were observed" and "three D characters were observed" to standard output. If the input line is "BBBCCCDDD", the string "three B characters were observed" is written to standard output. For the input line "CCCDDD", strings "three C characters were observed" and "three D characters were observed" are sent to standard output, while the input line "DDD" produces the output string "three D characters were observed".
If there are two or more configuration files, rule sequence from every file is matched against input (unless explicitly specified otherwise). For example, suppose SEC is started with the command line
/usr/bin/sec --input=- \
--conf=/etc/sec/sec1.rules --conf=/etc/sec/sec2.rules
and the configuration file /etc/sec/sec1.rules has the following content:
type=Single
ptype=SubStr
pattern=AAA
desc=Three A characters
action=write - three A characters were observed
type=Single
ptype=SubStr
pattern=BBB
continue=EndMatch
desc=Three B characters
action=write - three B characters were observed
Also, suppose the configuration file /etc/sec/sec2.rules has the following content:
type=Single
ptype=SubStr
pattern=CCC
desc=Three C characters
action=write - three C characters were observed
If SEC receives the line "AAABBBCCC" from standard input, rules from both configuration files are tried, and as a result, the strings "three A characters were observed" and "three C characters were observed" are written to standard output. Note that rules from /etc/sec/sec1.rules are tried first against the input line, since the option --conf=/etc/sec/sec1.rules is given before --conf=/etc/sec/sec2.rules in the SEC command line (see also INPUT PROCESSING AND TIMING section for a more detailed discussion). If SEC receives the line "BBBCCC" from standard input, the second rule from /etc/sec/sec1.rules produces a match, and the string "three B characters were observed" is written to standard output. Since the rule contains the continue=EndMatch statement, the search for matching rules will end for all configuration files, and rules from /etc/sec/sec2.rules will not be tried. Without this statement, the search for matching rules would continue in /etc/sec/sec2.rules, and the first rule would write the string "three C characters were observed" to standard output.
ptype=substr
pattern=Backup done:\tsuccess
The pattern matches lines containing "Backup done:<TAB>success".
Note that since the SubStr[N] pattern type has been designed for fast matching, it does not support match variables.
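As an illustration, the following sketch of a SubStr2 pattern (assuming the \n construct is interpreted in SubStr patterns like the \t construct above) produces a match when a line ending with the string "AAA" is followed by a line beginning with the string "BBB", since the last two input lines are joined with a newline character as in the RegExp2 example later in this section:
ptype=SubStr2
pattern=AAA\nBBB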
In addition to numbered match variables ($1, $2, etc.), SEC supports named match variables $+{name} and the $0 variable. The $0 variable holds the entire string of the last N input lines that the regular expression has matched. Named match variables can be created with named capture groups in newer versions of the Perl regular expression language, e.g., (?<myvar>AB|CD) sets $+{myvar} to AB or CD. Also, SEC creates the special named match variables $+{_inputsrc} and $+{_intcontext}. The $+{_inputsrc} variable holds the input file name(s) where the matching line(s) came from. The $+{_intcontext} variable holds the name of the current internal context (see INTERNAL EVENTS AND CONTEXTS section for more information). If an internal context has not been set up for the current input source, the variable is set to Perl undefined value.
For example, the following pattern matches the SSH "Connection from" event, and sets $0 to the entire event line, both $1 and $+{ip} to the IP address of the remote node, and $2 to the port number at the remote node:
ptype=RegExp
pattern=sshd\[\d+\]: Connection from (?<ip>[\d.]+) port (\d+)
If the matching event comes from input file /var/log/messages with internal context MSGS, the $+{_inputsrc} and $+{_intcontext} variables are set to strings "/var/log/messages" and "MSGS", respectively.
Also, SEC allows for match caching and for the creation of additional named match variables through variable maps which are defined with the varmap* fields. A variable map is a list of name=number mappings separated by semicolons, where name is the name for the named variable and number identifies a numbered match variable that is set by the regular expression. Each name must begin with a letter and consist of letters, digits and underscores. After the regular expression has matched, named variables specified by the map are created from the corresponding numbered variables. If the same named variable is set up both from the regular expression and the variable map, the map takes precedence.
If name is not followed by the equal sign and number in the varmap* field, it is regarded as a common name for all match variables and their values from a successful match. This name is used for caching a successful match by the pattern -- match variables and their values are stored in the memory-based pattern match cache under name. Cached match results can be reused by Cached and NCached patterns. Note that before processing each new input line, previous content of the pattern match cache is cleared. Also note that a successful pattern match is cached even if the subsequent context expression evaluation yields FALSE (see INPUT PROCESSING AND TIMING section for more information).
For example, consider the following pattern definition:
ptype=regexp
pattern=(?i)(\S+\.mydomain).*printer: toner\/ink low
varmap=printer_toner_or_ink_low; message=0; hostname=1
The pattern matches "printer: toner/ink low" messages in a case insensitive manner from printers belonging to .mydomain. Note that the printer hostname is assigned to $1 and $+{hostname}, while the whole message line is assigned to $0 and $+{message}. If the message comes from file /var/log/test which does not have an internal context defined, the $+{_inputsrc} variable is set to string "/var/log/test", while $+{_intcontext} is set to Perl undefined value. Also, these variables and their values are stored to the pattern match cache under the name "printer_toner_or_ink_low".
The following pattern definition produces a match if the last two input lines are AAA and BBB:
ptype=regexp2
pattern=^AAA\nBBB$
varmap=aaa_bbb
Note that with the --nojointbuf option the pattern only matches if the matching lines are coming from the *same* input file, while the --jointbuf option lifts that restriction.
In the case of a match, $0 is set to "AAA<NEWLINE>BBB", $+{_inputsrc} to file name(s) for matching lines, and $+{_intcontext} to the name of current internal context. Also, these variable-value pairs are cached under the name "aaa_bbb".
With PerlFunc[N] patterns, the pattern is a Perl function which is called with the last N input lines L1, ..., LN and the names of their input files F1, ..., FN as parameters:
function(L1, L2, ..., LN, F1, F2, ..., FN)
Note that with the --nojointbuf option, the function is called with a single file name parameter F, since lines L1, ..., LN are coming from the same input file:
function(L1, L2, ..., LN, F)
Also note that if the input line is a synthetic event, the input file name is Perl undefined value.
If the function returns several values or a single value that is true in Perl boolean context, the pattern matches. If the function returns no values or a single value that is false in Perl boolean context (0, empty string or undefined value), the pattern does not match. If the pattern matches, return values will be assigned to numbered match variables ($1, $2, etc.). Like with RegExp patterns, the $0 variable is set to matching input line(s), the $+{_inputsrc} variable is set to input file name(s), the $+{_intcontext} variable is set to the name of current internal context, and named match variables can be created from variable maps. For example, consider the following pattern definition:
ptype=perlfunc2
pattern=sub { return ($_[0] cmp $_[1]); }
The pattern compares last two input lines in a stringwise manner ($_[1] holds the last line and $_[0] the preceding one), and matches if the lines are different. Note that the result of the comparison is assigned to $1, while two matching lines are concatenated (with the newline character between them) and assigned to $0. If matching lines come from input file /var/log/mylog with internal context TEST, the $+{_inputsrc} and $+{_intcontext} variables are set to strings "/var/log/mylog" and "TEST", respectively.
The following pattern produces a match for any line, and sets $1, $2 and $3 variables to strings "abc", "def" and "ghi", respectively (also, $0 is set to the whole input line, $+{_inputsrc} to the input file name, and $+{_intcontext} to the name of internal context associated with input file $+{_inputsrc}):
ptype=perlfunc
pattern=sub { return ("abc", "def", "ghi"); }
The following pattern definition produces a match if the input line is not a synthetic event and contains either the string "abc" or "def". The $0 variable is set to the matching line. If matching line comes from /var/log/test without an internal context, $+{_intcontext} is set to Perl undefined value, while $1, $+{file} and $+{_inputsrc} are set to string "/var/log/test":
ptype=perlfunc
pattern=sub { if (defined($_[1]) && $_[0] =~ /abc|def/) \
{ return $_[1]; } return 0; }
varmap= file=1
Finally, if a function pattern returns a single value which is a reference to a Perl hash, named match variables are created from key-value pairs in the hash. For example, the following pattern matches a line if it contains either the string "three" or "four". Apart from setting $0, $+{_inputsrc} and $+{_intcontext}, the pattern also creates match variables $+{three} and $+{four}, and sets them to 3 and 4, respectively:
ptype=perlfunc
pattern=sub { my(%hash); \
if ($_[0] !~ /three|four/) { return 0; } \
$hash{"three"} = 3; $hash{"four"} = 4; return \%hash; }
For example, if a line from an input file has matched the following pattern definition
ptype=perlfunc
pattern=sub { if (defined($_[1]) && $_[0] =~ /abc|def/) \
{ return $_[1]; } return 0; }
varmap=abc_or_def_found; file=1
then the entry "abc_or_def_found" is created in the pattern match cache. Therefore, the pattern
ptype=cached
pattern=abc_or_def_found
will also produce a match for this input line, and set the $0, $1, $+{file}, $+{_inputsrc}, and $+{_intcontext} variables to values from the previous match.
When match variables are substituted, each "$$" sequence is interpreted as a literal dollar sign ($) which allows for masking match variables. For example, the string "Received $$1" becomes "Received $1" after substitution, while "Received $$$1" becomes "Received $<value_of_1st_var>". In order to disambiguate numbered match variables from the following text, variable number must be enclosed in braces. For example, the string "Received ${1}0" becomes "Received <value_of_1st_var>0" after substitution, while the string "Received $10" would become "Received <value_of_10th_var>".
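As an illustration, consider the following sketch of a rule (the price=<number> message format is hypothetical):
type=Single
ptype=RegExp
pattern=price=(\d+)
desc=price event
action=write - price is ${1}0 cents, raw field was $$1
For the input line "price=5", the rule writes the string "price is 50 cents, raw field was $1" to standard output.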
If the match variable was not set by the pattern, it is substituted with an empty string (i.e., a zero-width string). Thus the string "Received $10!" becomes "Received !" after substitution if the pattern did not set $10. (Note that prior to SEC-2.6, unset variables were *not* substituted.)
In the current version of SEC, names of $+{name} match variables must comply with the following naming convention -- the first character can be a letter or underscore, while remaining characters can be letters, digits, underscores and exclamation marks (!). However, when setting named match variables from a pattern, it is recommended to begin the variable name with a letter, since names of special automatically created variables begin with an underscore (e.g., $+{_inputsrc}).
After the pattern has matched an event and match variables have been set, it is also possible to refer to previously cached match variables with the syntax $:{entryname:varname}, where entryname is the name of the pattern match cache entry, and varname is the name of the variable stored under the entry. For example, if the variable $+{ip} has been previously cached under the entry "SSH", it can be referred to as $:{SSH:ip}. For reasons of efficiency, the $:{entryname:varname} syntax is not supported for fast pattern types which do not set match variables (i.e., SubStr, NSubStr, NCached and TValue).
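For example, suppose a preceding rule with continue=TakeNext has matched the current input line and cached its match variables (including $+{ip}) under the entry SSH with a varmap field, in the manner of the SSH parsing rule shown later in this section. The following sketch of a later rule can then combine its own match variable with the cached IP address:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=SSH login failure for user $1
action=write - user $1 failed to log in from client $:{SSH:ip}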
Note that since Pair and PairWithWindow rules have two patterns, match variables of the first pattern are shadowed for some rule fields when the second pattern matches and sets variables. In order to refer to shadowed variables, their names must begin with % instead of $ (e.g., %1 refers to match variable $1 set by the first pattern). However, the use of the %-prefix is only valid under the following circumstances -- *both* pattern types support match variables *and* in the given rule field match variables from *both* patterns can be used.
The %-prefixed match variables are masked with the "%%" sequence (like regular match variables with "$$"). Similarly, the braces can be used for disambiguating the %-prefixed variables from the following text.
Finally, note that the second pattern of Pair and PairWithWindow rules may contain match variables if the second pattern is of type SubStr, NSubStr, RegExp, or NRegExp. The variables are substituted at runtime with the values set by the first pattern. If the pattern is a regular expression, all special characters inside substituted values are masked with the Perl quotemeta() function and the final expression is checked for correctness.
For example, the action create MYCONTEXT 3600 (report MYCONTEXT /bin/mail root@localhost) creates the context MYCONTEXT which has a lifetime of 3600 seconds and empty event store. Also, immediately before MYCONTEXT expires and is dropped from memory, the action report MYCONTEXT /bin/mail root@localhost is executed which mails the event store of MYCONTEXT to root@localhost.
Contexts can be used for event aggregation and reporting. Suppose the following actions are executed in this order:
create MYCONTEXT
add MYCONTEXT This is a test
alias MYCONTEXT MYALIAS
add MYALIAS This is another test
report MYCONTEXT /bin/mail root@localhost
delete MYALIAS
The first action creates the context MYCONTEXT with infinite lifetime and empty event store. The second action appends the string "This is a test" to the event store of MYCONTEXT. The third action sets up an alias name MYALIAS for the context (names MYCONTEXT and MYALIAS refer to the same context data structure). The fourth action appends the string "This is another test" to the event store of the context. The fifth action writes the lines
This is a test
This is another test
to the standard input of the /bin/mail root@localhost command. The sixth action deletes the context data structure from memory and drops its names MYCONTEXT and MYALIAS.
Since contexts are accessible from all rules and event correlation operations, they can be used for data sharing and joining several rules into one event correlation scheme. In order to check for the presence of contexts from rules, context expressions can be employed.
Context expressions are boolean expressions that are defined with the context* rule fields. Context expressions can be used for restricting the matches produced by patterns, since if the expression evaluates FALSE, the rule will not match an input event.
The context expression accepts context names, Perl miniprograms, Perl functions, and pattern match cache lookups as operands. These operands can be combined with the following operators:
! - logical NOT,
&& - short-circuit logical AND,
|| - short-circuit logical OR.
In addition, parentheses can be used for grouping purposes.
If the operand does not contain any special operators (such as -> or :>, see below), it is treated as a context name. Context name operands may contain match variables, but may not contain whitespace. If the context name refers to an existing context, the operand evaluates TRUE, otherwise it evaluates FALSE.
For example, consider the following rule sequence:
type=Single
ptype=RegExp
pattern=Test: (\d+)
desc=test
action=create CONT_$1
type=Single
ptype=RegExp
pattern=Test2: (\d+) (\d+)
context=CONT_$1 && CONT_$2
desc=test
action=write - Both $1 and $2 have been seen in the past
If the following input lines appear in this order
Test: 19
Test: 261
Test2: 19 787
Test: 787
Test2: 787 261
the first input line matches the first rule which creates the context CONT_19, and similarly, the second input line triggers the creation of the context CONT_261. The third input line "Test2: 19 787" matches the regular expression
Test2: (\d+) (\d+)
but does not match the second rule, since the boolean expression
CONT_19 && CONT_787
evaluates FALSE (context CONT_19 exists, but context CONT_787 doesn't). The fourth input line matches the first rule which creates the context CONT_787. The fifth input line "Test2: 787 261" matches the second rule, since the boolean expression
CONT_787 && CONT_261
evaluates TRUE (both context CONT_787 and context CONT_261 exist), and therefore the string "Both 787 and 261 have been seen in the past" is written to standard output.
If the context expression operand contains the arrow operator (->), the text following the arrow must be a valid Perl function definition that is compiled at SEC startup with the Perl eval() function. The eval() must return a code reference (see also PERL INTEGRATION section for more information). If any text precedes the arrow, it is treated as a list of parameters for the function. Parameters must be separated by whitespace and may contain match variables. In order to evaluate the context expression operand, the Perl function is called in the Perl scalar context. If the return value of the function is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE.
For example, the following rule matches an SSH login failure event if the login attempt comes from a privileged port of the client host:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port (\d+) ssh2
context=$2 -> ( sub { $_[0] < 1024 } )
desc=SSH login failure for $1 priv port $2
action=write - SSH login failure for user $1 from a privileged port $2
When the following message from SSH daemon appears
Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2
the regular expression of the rule matches this message, and the value of the $2 match variable (41063) is passed to the Perl function
sub { $_[0] < 1024 }
This function returns true if its input parameter is less than 1024 and false otherwise, and therefore the above message will not match the rule. However, the following message
Dec 16 16:25:17 myserver sshd[13689]: Failed password for risto from 10.12.2.5 port 1023 ssh2
matches the rule, and the string "SSH login failure for user risto from a privileged port 1023" is written to standard output.
As another example, the following context expression evaluates TRUE if the /var/log/messages file does not exist or was last modified more than 1 hour ago (note that the Perl function takes no parameters):
context= -> ( sub { my(@stat) = stat("/var/log/messages"); \
return (!scalar(@stat) || time() - $stat[9] > 3600); } )
If the context expression operand contains the :> operator, the text that follows :> must be a valid Perl function definition that is compiled at SEC startup with the Perl eval() function. The eval() must return a code reference (see also PERL INTEGRATION section for more information). If any text precedes the :> operator, it is treated as a list of parameters for the function. Parameters must be separated by whitespace and may contain match variables. It is assumed that each parameter is a name of an entry in the pattern match cache. If an entry with the given name does not exist, Perl undefined value is passed to the function. If an entry with the given name exists, a reference to the entry is passed to the Perl function. Internally, each pattern match cache entry is implemented as a Perl hash which contains all match variables for the given entry. In the hash, each key-value pair represents some variable name and value, e.g., if cached match variable $+{ip} is holding 10.1.1.1, the hash contains the value 10.1.1.1 with the key ip. In order to evaluate the context expression operand, the Perl function is called in the Perl scalar context. If the return value of the function is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE.
For example, consider the following rule sequence:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: (?<status>Accepted|Failed) .+ \
for (?<invuser>invalid user )?(?<user>\S+) from (?<ip>[\d.]+) \
port (?<port>\d+) ssh2
varmap=SSH
continue=TakeNext
desc=parse SSH login events and pass them to following rules
action=none
type=Single
ptype=Cached
pattern=SSH
context=SSH :> ( sub { $_[0]->{"status"} eq "Failed" && \
$_[0]->{"port"} < 1024 && \
defined($_[0]->{"invuser"}) } )
desc=Probe of invalid user $+{user} from privileged port of $+{ip}
action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost
The first rule matches and parses SSH login messages, and stores parsing results to the pattern match cache under the name SSH. The pattern of the second rule (defined with ptype=Cached and pattern=SSH) matches any input event for which the entry SSH has been previously created in the pattern match cache (in other words, the event has been recognized and parsed as an SSH login message). For each matching event, the second rule passes the reference to the SSH cache entry to the Perl function
sub { $_[0]->{"status"} eq "Failed" && \
$_[0]->{"port"} < 1024 && \
defined($_[0]->{"invuser"}) }
The function checks the values of $+{status}, $+{port}, and $+{invuser} match variables under the SSH entry, and returns true if $+{status} equals the string "Failed" (i.e., the login attempt failed), the value of $+{port} is less than 1024, and $+{invuser} holds a defined value (i.e., the user account does not exist). If the function (and thus the context expression) evaluates TRUE, the rule sends a warning e-mail to root@localhost that a non-existing user account was probed from a privileged port of a client host.
If the context expression operand begins with the varset keyword, the following string is treated as a name of an entry in the pattern match cache. The operand evaluates TRUE if the given entry exists, and FALSE otherwise.
For example, the following context expression definition evaluates TRUE if the pattern match cache entry SSH exists and under this entry, the value of the match variable $+{user} equals the string "risto":
context=varset SSH && SSH :> ( sub { $_[0]->{"user"} eq "risto" } )
If the context expression operand begins with the equal sign (=), the following text must be a Perl miniprogram which is a valid parameter for the Perl eval() function. The miniprogram may contain match variables. In order to evaluate the Perl miniprogram operand, it will be compiled and executed by calling the Perl eval() function in the Perl scalar context (see also PERL INTEGRATION section). If the return value from eval() is true in the Perl boolean context, the operand evaluates TRUE, otherwise it evaluates FALSE. Please note that unlike Perl functions of -> and :> operators which are compiled once at SEC startup, Perl miniprograms are compiled before each execution, and their evaluation is thus considerably more expensive.
For example, the following context expression evaluates TRUE when neither the context C1 nor the context C2 exists and the value of the $1 variable equals the string "myhost.mydomain":
context=!(C1 || C2) && =("$1" eq "myhost.mydomain")
Since && is a short-circuiting operator, the Perl code
"$1" eq "myhost.mydomain"
is *not* evaluated if either C1 or C2 exists.
Note that since Perl functions and miniprograms may contain strings that clash with context expression operators (e.g., '!'), it is recommended to enclose them in parentheses, e.g.,
context=$1 $2 -> ( sub { $_[0] != $_[1] } )
context= =({my($temp) = 0; !$temp;})
Also, if function parameter lists contain such strings, they should be enclosed in parentheses in a similar way:
context=($1! $2) -> ( sub { $_[0] eq $_[1] } )
If the whole context expression is enclosed in square brackets [], e.g., [MYCONTEXT1 && !MYCONTEXT2], SEC evaluates the expression *before* pattern matching (normally, the pattern is matched with input line(s) first, so that match variables would be initialized and substituted before the expression is evaluated). However, if the expression does not contain match variables and many input events are known to match the pattern but not the expression, the []-operator could save a substantial amount of CPU time.
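For example, in the following sketch the expression [!COUNTING_OFF] is evaluated before the regular expression is matched, so that no regular expression matching is attempted while the COUNTING_OFF context (created by some other rule, e.g., as in the earlier EventGroup example) exists:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
context=[!COUNTING_OFF]
desc=SSH login failure for user $1
action=write - SSH login failure for user $1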
Apart from a few explicitly noted exceptions, match variables are substituted at the earliest opportunity in action lists. For example, consider the following rule definition:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m
action=pipe 'Three SSH login failures, first user is $1' \
/bin/mail -s 'SSH login alert' root@localhost
window=60
thresh=3
When this rule matches an SSH login failure event which starts an event correlation operation, the operation substitutes the $1 match variable in the action list definition with the user name from the matching event, and user names from further events processed by this event correlation operation are not considered for $1. For example, if the following events are observed
Dec 16 16:24:52 myserver sshd[13671]: Failed password for root from 10.12.2.5 port 29736 ssh2
Dec 16 16:24:59 myserver sshd[13685]: Failed password for risto from 10.12.2.5 port 41063 ssh2
Dec 16 16:25:01 myserver sshd[13689]: Failed password for oracle from 10.12.2.5 port 11204 ssh2
then all events are processed by the same operation, and the message "Three SSH login failures, first user is root" is mailed to root@localhost.
In order to use semicolons inside a non-constant parameter, the parameter must be enclosed in parentheses (the outermost set of parentheses will be removed by SEC during configuration file parsing). For example, the following action list consists of delete and shellcmd actions:
action=delete MYCONTEXT; shellcmd (rm /tmp/sec1.tmp; rm /tmp/sec2.tmp)
The delete action deletes the context MYCONTEXT, while the shellcmd action executes the command line rm /tmp/sec1.tmp; rm /tmp/sec2.tmp. Since this command line contains a semicolon, it has been enclosed in parentheses; otherwise, the semicolon would be mistakenly considered a separator between two actions.
Apart from match variables, SEC supports action list variables in action lists which facilitate data sharing between actions and Perl integration. Each action list variable has a name which must begin with a letter and consist of letters, digits and underscores. Names of built-in variables usually start with a dot character (.), so that they can be distinguished from user defined variables. In order to refer to an action list variable, its name must be prefixed by a percent sign (%). Unlike match variables, action list variables can only be used in action lists and they are substituted with their values immediately before the action list execution. Also, action list variables continue to exist after the current action list has been executed and can be employed in action lists of other rules.
The following action list variables are predefined by SEC:
For example, the following action list assigns the current time in human-readable format and the string "This is a test event" to the %text action list variable, and mails the value of %text to root@localhost:
action=assign %text %t: This is a test event; \
pipe '%text' /bin/mail root@localhost
If the action list is executed at Nov 19 10:58:51 2015, the assign action sets the %text action list variable to the string "Thu Nov 19 10:58:51 2015: This is a test event", while the pipe action mails this string to root@localhost. Note that unlike match variables, action list variables have a global scope, and accessing the value of the %text variable in action lists of other rules will thus yield the string "Thu Nov 19 10:58:51 2015: This is a test event" (until another value is assigned to %text).
In order to disambiguate the variable from the following text, the variable name must be enclosed in braces. For example, the following action
action=write - %{.year}-%{.mon}-%{.mday}T%{.hmsstr}%{.tzoff2}
writes a timestamp in ISO 8601 format to standard output, e.g., 2016-02-24T07:34:01+02:00 (replacing %{.mday} with %.mday in the above action would mistakenly create a reference to the %.mdayT variable).
When action list variables are substituted with their values, each sequence "%%" is interpreted as a literal percent sign (%) which allows for masking the variables. For example, the string "s%%t" becomes "s%t" after substitution, not "s%<timestamp>".
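For example, in the following sketch (with a hypothetical disk usage message format), the doubled percent sign keeps the literal percent sign in the output from being treated as a variable reference, while %t is still substituted with a human-readable timestamp:
type=Single
ptype=RegExp
pattern=disk usage is (\d+)%
desc=disk usage alert
action=write - %t: disk usage has reached $1%%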
However, note that if %-prefixed match variables are supported for the action2 field of the Pair or PairWithWindow rule, the sequence "%%%" must be used in action2 for masking a variable, since the string goes through *two* variable substitution rounds (first for %-prefixed match variables and then for action list variables, e.g., the string "s%%%t" first becomes "s%%t" and finally "s%t").
Whenever a rule field goes through several substitution rounds, the $ or % characters are masked inside values substituted during earlier rounds, in order to avoid unwanted side effects during later rounds.
If the action list variable has not been set, it is substituted with an empty string (i.e., a zero-width string). Thus the string "Value of A is: %a" becomes "Value of A is: " after substitution if the variable %a is unset. (Note that prior to SEC-2.6, unset variables were *not* substituted.)
Finally, the values are substituted as strings, therefore values of other types (e.g., references) lose their original meaning, unless explicitly noted otherwise (e.g., if a Perl function reference is stored to an action list variable, the function can later be invoked through this variable with the call action).
SEC supports the following actions (optional parameters are enclosed in square brackets):
action=logonly This is a test
The above logonly action logs the message "This is a test" with level 4.
action=write /var/log/test.log %t $0
The above write action prepends human-readable timestamp and separating space character to the value of the $0 match variable, and the resulting string is appended to file /var/log/test.log with terminating newline.
action=writen - ab; writen - c; writen - %.nl
The above action list writes the string "abc<NEWLINE>" to standard output, and is thus identical to write - abc (and also to writen - abc%.nl).
action=owritecl /var/log/test-%{.year}%{.mon}%{.mday} $0%{.nl}
The above owritecl action appends the value of the $0 match variable with terminating newline to file /var/log/test-YYYYMMDD, where YYYYMMDD reflects the current date (e.g., if the current date is April 1 2018, the file is /var/log/test-20180401). Since the file is closed after each write, the old file will not be left open when date changes.
action=udgram /dev/log <30>%.monstr %.mdaystr %.hmsstr sec: This is a test
The above udgram action sends a syslog message to local syslog daemon via /dev/log socket, where message priority is 30 (corresponds to the "daemon" facility and "info" level), syslog tag is "sec" and message text is "This is a test". Note that message substring "%.monstr %.mdaystr %.hmsstr" evaluates to timestamp in BSD syslog format (e.g., Mar 31 15:36:07).
action=udpsock mysrv:514 <13>%.monstr %.mdaystr %.hmsstr myhost test: $0
The above udpsock action sends a BSD syslog message to port 514/udp of remote syslog server mysrv, where message priority is 13 (corresponds to the "user" facility and "notice" level), name of the local host is "myhost", syslog tag is "test" and message text is the value of the $0 match variable.
action=tcpsock grsrv:2003 ssh.login.failures %{num} %{u}%{.nl}
The above tcpsock action sends the value of the action list variable %{num} to port 2003/tcp of the Graphite server grsrv, so that the value is recorded under the metric path ssh.login.failures. Note that the %{u} action list variable evaluates to the current time in seconds since Epoch and is used for setting the timestamp for the recorded value.
action=shellcmd (cat /tmp/report | mail root; rm -f /tmp/report); \
logonly Report sent to user root
The shellcmd action of this action list executes the command line
cat /tmp/report | mail root; rm -f /tmp/report
and the logonly action logs the message "Report sent to user root". Since the command line contains a semicolon which is used for separating shellcmd and logonly actions, the command line is enclosed in parentheses.
action=spawn (cat /tmp/events; rm -f /tmp/events)
The above spawn action will generate synthetic events from all lines in file /tmp/events and remove the file. Since the command line contains a semicolon which is used for separating actions, the command line is enclosed in parentheses.
action=cmdexec rm /tmp/report*
The above cmdexec action will remove the file /tmp/report* without treating * as a file pattern character that matches any string.
action=pipe 'Offending activities from host $1' /bin/mail root@localhost
The above pipe action writes the line "Offending activities from host <hostname>" to the standard input of the /bin/mail root@localhost command which sends this line to root@localhost via e-mail (<hostname> is the value of the $1 match variable).
action=pipeexec 'Offending activities from host $1' \
/bin/mail -s SEC%{.sp}alert $2
The above pipeexec action writes the line "Offending activities from host <hostname>" to the standard input of the /bin/mail -s <subject> <user> command which sends this line to <user> via e-mail with subject <subject> (<hostname> is the value of the $1 match variable, while <user> is the value of the $2 match variable). Note that since <subject> is defined as SEC%{.sp}alert and does not contain whitespace, it is treated as a single argument for the -s flag of the /bin/mail command. However, since <subject> contains the %.sp action list variable, the string "SEC alert" will be used for the e-mail subject at runtime. Also, if the value of the $2 match variable contains shell metacharacters, they will not be interpreted by the shell.
action=write /var/log/test.log $0; create TIMER 3600 \
( logonly Closing /var/log/test.log; closef /var/log/test.log )
The write action from the above action list appends the value of the $0 match variable to file /var/log/test.log, while the create action creates the context TIMER which will exist for 3600 seconds. Since this context is recreated at each write, the context can expire only if the action list has not been executed for more than 3600 seconds (i.e., the action list has last updated the file more than 1 hour ago). If that is the case, the action list
logonly Closing /var/log/test.log; closef /var/log/test.log
is executed which logs the message "Closing /var/log/test.log" with the logonly action and closes /var/log/test.log with the closef action. When the execution of this action list is complete, the TIMER context is deleted.
action=set C_$1 30 ( logonly Context C_$1 has expired )
The above set action sets the context C_<suffix> to expire after 30 seconds with a log message about expiration (<suffix> is the value of the $1 match variable).
action=add EVENTS This is a test; add EVENTS This is a test2
After the execution of this action list, the last two strings in the event store of the EVENTS context are "This is a test" and "This is a test2" (in that order).
action=prepend EVENTS This is a test; prepend EVENTS This is a test2
After the execution of this action list, the first two strings in the event store of the EVENTS context are "This is a test2" and "This is a test" (in that order).
action=create PID_$1 60 ( report PID_$1 /bin/mail root@localhost ); \
add PID_$1 Beginning of the report
The above action list creates the context PID_<suffix> with the lifetime of 60 seconds and sets the first string in the context event store to "Beginning of the report" (<suffix> is the value of the $1 match variable). When the context expires, all strings from the event store will be mailed to root@localhost.
action=fill EVENTS Event1; add EVENTS Event2; add EVENTS Event3; \
pop EVENTS %temp1; shift EVENTS %temp2; getsize %size EVENTS
This action list sets the %temp1 action list variable to Event3, %temp2 action list variable to Event1, and %size action list variable to 1.
action=create TEST 10 ( getltime %time TEST; \
logonly Context TEST with %time second lifetime has expired )
The above create action configures the context TEST to log its lifetime when it expires.
action=copy EVENTS %events; event %events
The above action list will create a synthetic event from each string in the event store of the EVENTS context.
action=reset -1 Ten login failures observed from $1; reset 0
If the above action list is executed by an event correlation operation, the first reset action will terminate another event correlation operation which has been started by the previous rule and has the operation description string "Ten login failures observed from <host>" (<host> is the value of the $1 match variable). The second reset action will terminate the calling operation itself.
action=getwpos %pos -1 Ten login failures observed from $1
The above getwpos action will find the beginning of the event correlation window for an event correlation operation which has been started by the previous rule and has the operation description string "Ten login failures observed from <host>" (<host> is the value of the $1 match variable). If the event correlation window begins at April 6 08:03:53 2018 UTC, the value 1523001833 will be assigned to the %pos action list variable.
action=assign %div Division error; eval %div ( $1 / $2 )
The assign action sets the %div action list variable to the string "Division error", while the eval action substitutes the values of the $1 and $2 match variables into the string "$1 / $2". The resulting string is treated as Perl code which is first compiled and then executed. For instance, if the values of $1 and $2 are 12 and 4, respectively, the following Perl code is compiled: 12 / 4. Since executing this code yields 3, the eval action assigns this value to the %div action list variable. Also, if $2 has no value or its value is 0, the resulting code leads to a compilation or execution error, and %div retains its previous value "Division error".
action=eval %func ( sub { return $_[0] + $_[1] } ); \
call %sum %func $1 $2
Since the Perl code provided to the eval action is a definition of an anonymous function, its compilation yields a code reference which gets assigned to the %func action list variable (the function returns the sum of its two input parameters). The call action will invoke the previously compiled function, using the values of the $1 and $2 match variables as function parameters, and assigning the function return value to the %sum action list variable. Therefore, if the values of $1 and $2 are 2 and 3, respectively, %sum is set to 5.
action=lcall %len $1 -> ( sub { return length($_[0]) } )
The above lcall action will take the value of the $1 match variable and find its length in characters, assigning the length to the %len action list variable. Note that the function for finding the length is compiled when SEC loads its configuration, and all invocations of lcall will execute already compiled code. As another example, consider the following action list definition:
action=lcall %o SSH :> ( sub { $_[0]->{"failure"} = 1 } )
The above lcall action will assign 1 to the $+{failure} match variable that has been cached under the SSH entry in the pattern match cache (variable will be created if it did not exist previously).
action=addinput /var/log/test-%{.year}%{.mon}%{.mday} 0 TESTFILE
The above addinput action adds the file /var/log/test-YYYYMMDD to the list of input files, where YYYYMMDD reflects the current date. The addinput action will also attempt to open the file, and if the open succeeds, the file will be processed from the beginning. Also, the internal context TESTFILE will be used for all events read from the file.
action=exists %present REPORT; if %present \
( report REPORT /bin/mail root@localhost; delete REPORT ) \
else ( logonly Nothing to report )
If the REPORT context exists, its event store is mailed to root@localhost and the context is deleted, otherwise the message "Nothing to report" is logged.
action=create REVERSE; getsize %n TEST; \
while %n ( pop TEST %e; add REVERSE %e; getsize %n TEST ); \
copy REVERSE %events; fill TEST %events
This action list reverses the order of strings in the event store of the context TEST, using the context REVERSE as a temporary storage. During each iteration of the while-loop, the last string in the event store of TEST is removed with the pop action and appended to the event store of REVERSE with the add action. The loop terminates when all strings have been removed from the event store of TEST (i.e., the getsize action reports 0 for event store size). Finally, the event store of REVERSE is assigned to the %events action list variable with the copy action, and the fill action is used for overwriting the event store of TEST with the value of %events.
Examples:
Follow the /var/log/trapd.log file and feed to SEC input all lines that are appended to the file:
action=spawn /bin/tail -f /var/log/trapd.log
Mail the timestamp and the value of the $0 variable to the local root:
action=pipe '%t: $0' /bin/mail -s "alert message" root@localhost
Add the value of the $0 variable to the event store of the context ftp_<the value of $1>, and set the context to expire after 30 minutes. When the context expires, its event store will be mailed to the local root:
action=add ftp_$1 $0; \
set ftp_$1 1800 (report ftp_$1 /bin/mail root@localhost)
Create a subroutine for weeding out comment lines from the input list, and use this subroutine for removing comment lines from the event store of the context C1:
action=eval %funcptr ( sub { my(@buf) = split(/\n/, $_[0]); \
my(@ret) = grep(!/^#/, @buf); return @ret; } ); \
copy C1 %in; call %out %funcptr %in; fill C1 %out
The following action list achieves the same goal as the previous action list with while and if actions:
action=getsize %size C1; while %size ( shift C1 %event; \
lcall %nocomment %event -> ( sub { $_[0] !~ /^#/ } ); \
if %nocomment ( add C1 %event ); \
lcall %size %size -> ( sub { $_[0]-1; } ) )
Note that SEC expects parentheses in an action list definition to be balanced. For example, the action list
action=eval %o (print ")";)
is considered an invalid action list (however, note that
action=eval %o (print "()";)
would be passed by SEC, since now parentheses are balanced).
In order to avoid such parsing errors, each parenthesis without a counterpart must be masked with a backslash (the backslash will be removed by SEC during configuration file parsing). For example, the above action could be written as
action=eval %o (print "\)";)
The Single rule immediately executes an action list when an event has matched the rule. An event matches the rule if the pattern matches the event and the context expression (if given) evaluates TRUE.
Note that the Single rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable.
Examples:
type=single
continue=takenext
ptype=regexp
pattern=ftpd\[(\d+)\]: \S+ \(ristov2.*FTP session opened
desc=ftp session opened for ristov2 pid $1
action=create ftp_$1
type=single
continue=takenext
ptype=regexp
pattern=ftpd\[(\d+)\]:
context=ftp_$1
desc=ftp session event for ristov2 pid $1
action=add ftp_$1 $0; set ftp_$1 1800 \
(report ftp_$1 /bin/mail root@localhost)
type=single
ptype=regexp
pattern=ftpd\[(\d+)\]: \S+ \(ristov2.*FTP session closed
desc=ftp session closed for ristov2 pid $1
action=report ftp_$1 /bin/mail root@localhost; \
delete ftp_$1
This ruleset is created for monitoring the ftpd log file. The first rule creates the context ftp_<pid> when someone connects from host ristov2 over FTP and establishes a new ftp session (the session is identified by the PID of the process which has been created for handling this session). The second rule adds all further log file lines for the session <pid> to the event store of the context ftp_<pid> (before adding a line, the rule checks if the context exists). After adding a line, the rule sets the context's lifetime to 30 minutes and sets the action list that will be executed when the context expires. The third rule mails collected log file lines to root@localhost when the session <pid> is closed. Collected lines will also be mailed when the session <pid> has been inactive for 30 minutes (no log file lines observed for that session).
Note that the log file line that has matched the first rule is also matched against the second rule (since the first rule has the continue field set to TakeNext). Since the second rule always matches this line, it will become the first line in the event store of ftp_<pid>. The second rule also has its continue field set to TakeNext, since otherwise no log file lines would reach the third rule.
The SingleWithScript rule forks a process for executing an external program when an event has matched the rule. The command line of the external program is defined by the script field.
If the shell field is set to Yes (this is the default), the command line of the external program will be parsed by the shell if it contains shell metacharacters. If the shell field is set to No, the command line is not parsed by the shell, but is split into arguments using whitespace as a separator and passed to execvp(3) for execution. Note that splitting into arguments is done when the command line is loaded from the configuration file and parsed, not at runtime (e.g., if the command line is /usr/local/bin/mytool $1 $2, the values of the $1 and $2 variables are each regarded as a single argument even if they contain whitespace).
The names of all currently existing contexts are written to the standard input of the program. After the program has been forked, the rule matching continues immediately, and the program status will be checked periodically until the program exits. If the program returns 0 exit status, the action list defined by the action field is executed; otherwise the action list defined by the action2 field is executed (if given).
Note that the SingleWithScript rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable.
Examples:
type=SingleWithScript
ptype=RegExp
pattern=interface ([\d.]+) down
script=/bin/ping -c 3 -q $1
desc=Check if $1 responds to ping
action=logonly Interface $1 reported down, but is pingable
action2=pipe '%t: Interface $1 is down' /bin/mail root@localhost
When "interface <ipaddress> down" line appears in input, the rule checks if <ipaddress> responds to ping. If <ipaddress> is pingable, the message "Interface <ipaddress> reported down, but is pingable" is logged; otherwise an e-mail warning containing a human-readable timestamp is sent to root@localhost.
The SingleWithSuppress rule runs event correlation operations for filtering repeated instances of the same event during T seconds. The value of T is defined by the window field.
When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds, and the operation immediately executes an action list. If the operation exists, it consumes the matching event without any action.
Examples:
type=SingleWithSuppress
ptype=RegExp
pattern=(\S+): [fF]ile system full
desc=File system $1 full
action=pipe '%t: %s' /bin/mail root@localhost
window=900
This rule runs event correlation operations for processing "file system full" syslog messages, e.g.,
Dec 16 14:26:09 test ufs: [ID 845546 kern.notice] NOTICE: alloc: /var: file system full
When the first message for a file system is observed, an operation is created which sends an e-mail warning about this file system to root@localhost. The operation will then run for 900 seconds and silently consume further messages for the *same* file system. However, if a message for a different file system is observed, another operation will be started which sends a warning to root@localhost again (since the desc field contains the $1 match variable which evaluates to the file system name).
The Pair rule runs event correlation operations for processing event pairs during T seconds. The value of T is defined by the window field. The default value is 0 which means an infinite window.
When an event has matched the conditions defined by the pattern and context field, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule exists, it consumes the matching event without any action. If the operation does not exist, SEC will create it with the lifetime of T seconds, and the operation immediately executes an action list defined by the action field. SEC will also copy the match conditions given with the pattern2 and context2 field into the operation, and substitute match variables with their values in copied conditions.
If the event does not match conditions defined by the pattern and context field, SEC will check the match conditions of all operations started by the given rule. Each matching operation executes the action list given with the action2 field and finishes.
If match variables are set when the operation matches an event, they are made available as $-prefixed match variables in context2, desc2, and action2 fields of the rule definition. For example, if pattern2 field is a regular expression, then $1 in the desc2 field is set by pattern2. In order to access match variables set by pattern, %-prefixed match variables have to be used in context2, desc2, and action2 fields. For example, if pattern and pattern2 are regular expressions, then %1 in the desc2 field refers to the value set by the first capture group in pattern (i.e., it has the same value as $1 in the desc field).
Examples:
type=Pair
ptype=RegExp
pattern=kernel: nfs: server (\S+) not responding, still trying
desc=Server $1 is not responding
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=SubStr
pattern2=kernel: nfs: server $1 OK
desc2=Server $1 is responding again
action2=logonly
window=3600
This rule runs event correlation operations for processing NFS "server not responding" and "server OK" syslog messages, e.g.,
Dec 18 22:39:48 test kernel: nfs: server box1 not responding, still trying
Dec 18 22:42:27 test kernel: nfs: server box1 OK
When the "server not responding" message for an NFS server is observed, an operation is created for this server which sends an e-mail warning about the server to root@localhost. The operation will then run for 3600 seconds and silently consume further "server not responding" messages for the same server. If this operation observes "server OK" message for the *same* server, it will log the message "Server <servername> is responding again" and finish.
For example, if SEC observes the following event at 22:39:48
Dec 18 22:39:48 test kernel: nfs: server box1 not responding, still trying
an event correlation operation is created for server box1 which issues an e-mail warning about this server immediately. After that, the operation will run for 3600 seconds (until 23:39:48), waiting for an event which would contain the substring "kernel: nfs: server box1 OK" (because the pattern2 field contains the $1 match variable which evaluates to the server name).
If any further error messages appear for server box1 during the 3600 second lifetime of the operation, e.g.,
Dec 18 22:40:28 test kernel: nfs: server box1 not responding, still trying
Dec 18 22:41:09 test kernel: nfs: server box1 not responding, still trying
these messages will be silently consumed by the operation. If before its expiration the operation observes an event which contains the substring "kernel: nfs: server box1 OK", e.g.,
Dec 18 22:42:27 test kernel: nfs: server box1 OK
the operation will log the message "Server box1 is responding again" and terminate immediately. If no such message appears during the 3600 second lifetime of the operation, the operation will expire without taking any action. Please note that if the window field were either removed from the rule definition or set to 0, the operation would never silently expire, but would terminate only after observing an event which contains the substring "kernel: nfs: server box1 OK".
If the above rule is modified in the following way
type=Pair
ptype=RegExp
pattern=^([[:alnum:]: ]+) \S+ kernel: nfs: server (\S+) not responding, still trying
desc=Server $2 is not responding
action=logonly
ptype2=RegExp
pattern2=^([[:alnum:]: ]+) \S+ kernel: nfs: server $2 OK
desc2=Server %2 was not accessible from %1 to $1
action2=pipe '%s' /bin/mail root@localhost
window=86400
this rule will run event correlation operations which report NFS server downtime to root@localhost via e-mail, provided that downtime does not exceed 24 hours (86400 seconds).
For example, if SEC observes the following event
Dec 18 23:01:17 test kernel: nfs: server box.test not responding, still trying
then the rule matches this event, sets $1 match variable to "Dec 18 23:01:17" and $2 to "box.test", and creates an event correlation operation for server box.test. This operation will start its work by logging the message "Server box.test is not responding", and will then run for 86400 seconds, waiting for an event which would match the regular expression
^([[:alnum:]: ]+) \S+ kernel: nfs: server box\.test OK
Note that this expression was created from the regular expression template in the pattern2 field by substituting the match variable $2 with its value. However, since the string "box.test" contains the dot (.) character which is a regular expression metacharacter, the dot is masked with the backslash in the regular expression.
Suppose SEC will then observe the event
Dec 18 23:09:54 test kernel: nfs: server box.test OK
This event matches the above regular expression which is used by the operation running for server box.test. Also, the regular expression match sets the $1 variable to "Dec 18 23:09:54" and unsets the $2 variable. In order to refer to their original values when the operation was created, the %1 and %2 match variables have to be used in the desc2 field (%1 equals "Dec 18 23:01:17" and %2 equals "box.test"). Therefore, the operation will send the e-mail message "Server box.test was not accessible from Dec 18 23:01:17 to Dec 18 23:09:54" to root@localhost, and will terminate immediately.
The PairWithWindow rule runs event correlation operations for processing event pairs during T seconds. The value of T is defined by the window field.
When an event has matched the conditions defined by the pattern and context field, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule exists, it consumes the matching event without any action. If the operation does not exist, SEC will create it with the lifetime of T seconds. SEC will also copy the match conditions given with the pattern2 and context2 field into the operation, and substitute match variables with their values in copied conditions.
If the event does not match conditions defined by the pattern and context field, SEC will check the match conditions of all operations started by the given rule. Each matching operation executes the action list given with the action2 field and finishes. If the operation has not observed a matching event by the end of its lifetime, it executes the action list given with the action field before finishing.
If match variables are set when the operation matches an event, they are made available as $-prefixed match variables in context2, desc2, and action2 fields of the rule definition. For example, if pattern2 field is a regular expression, then $1 in the desc2 field is set by pattern2. In order to access match variables set by pattern, %-prefixed match variables have to be used in context2, desc2, and action2 fields. For example, if pattern and pattern2 are regular expressions, then %1 in the desc2 field refers to the value set by the first capture group in pattern (i.e., it has the same value as $1 in the desc field).
Examples:
type=PairWithWindow
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=User $1 has been unable to log in from $2 over SSH during 1 minute
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=RegExp
pattern2=sshd\[\d+\]: Accepted .+ for $1 from $2 port \d+ ssh2
desc2=SSH login successful for %1 from %2 after initial failure
action2=logonly
window=60
This rule runs event correlation operations for processing SSH login events, e.g.,
Dec 27 19:00:24 test sshd[10526]: Failed password for risto from 10.1.2.7 port 52622 ssh2
Dec 27 19:00:27 test sshd[10526]: Accepted password for risto from 10.1.2.7 port 52622 ssh2
When an SSH login failure is observed for a user name and a source IP address, an operation is created for this user name and IP address combination which will expect a successful login for the *same* user name and *same* IP address during 60 seconds. If the user does not log in from the same IP address within 60 seconds, the operation will send an e-mail warning to root@localhost before finishing; otherwise it will log the message "SSH login successful for <username> from <ipaddress> after initial failure" and finish.
Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event:
Dec 30 13:02:01 test sshd[30517]: Failed password for risto from 10.1.2.7 port 42172 ssh2
Dec 30 13:02:30 test sshd[30810]: Failed password for root from 192.168.1.104 port 46125 ssh2
Dec 30 13:02:37 test sshd[30517]: Failed password for risto from 10.1.2.7 port 42172 ssh2
Dec 30 13:02:59 test sshd[30810]: Failed password for root from 192.168.1.104 port 46125 ssh2
Dec 30 13:03:04 test sshd[30810]: Accepted password for root from 192.168.1.104 port 46125 ssh2
When the first event is observed at 13:02:01, an operation is started for user risto and IP address 10.1.2.7 which will expect a successful login for risto from 10.1.2.7. The operation will run for 60 seconds, waiting for an event which would match the regular expression
sshd\[\d+\]: Accepted .+ for risto from 10\.1\.2\.7 port \d+ ssh2
Note that this expression was created from the regular expression template in the pattern2 field by substituting match variables $1 and $2 with their values. However, since the value of $2 contains the dot (.) characters which are regular expression metacharacters, each dot is masked with the backslash in the regular expression.
When the second event is observed at 13:02:30, another operation is started for user root and IP address 192.168.1.104 which will expect root to log in successfully from 192.168.1.104. This operation will run for 60 seconds, waiting for an event matching the regular expression
sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2
The third event at 13:02:37 represents a second login failure for user risto and IP address 10.1.2.7, and is silently consumed by the first operation. Likewise, the fourth event at 13:02:59 is silently consumed by the second operation. The first operation will run until 13:03:01 and then expire without seeing a successful login for risto from 10.1.2.7. Before terminating, the operation will send an e-mail warning to root@localhost that user risto has not managed to log in from 10.1.2.7 during one minute. At 13:03:04, the second operation will observe an event which matches its regular expression
sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2
After seeing this event, the operation will log the message "SSH login successful for root from 192.168.1.104 after initial failure" and terminate immediately. Please note that the match by the regular expression
sshd\[\d+\]: Accepted .+ for root from 192\.168\.1\.104 port \d+ ssh2
sets the $1 match variable to 1 and unsets $2, since the substituted expression contains no capture groups (a successful match without capture groups yields the value 1 in Perl list context). Therefore, the %1 and %2 match variables have to be used in the desc2 field, in order to refer to the original values of $1 (root) and $2 (192.168.1.104) when the operation was created.
The SingleWithThreshold rule runs event correlation operations for counting repeated instances of the same event during T seconds, and taking an action if N events are observed. The values of T and N are defined by the window and thresh field, respectively.
When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times with the threshold N. If the operation has observed N events, it executes the action list defined by the action field, and consumes all further matching events without any action. If the rule has an optional action list defined with the action2 field, the operation will execute it before finishing, provided that the action list given with action has been previously executed by the operation. Note that a sliding window is employed for event counting -- if the operation has observed less than N events by the end of its lifetime, it drops occurrence times which are older than T seconds, and extends its lifetime for T seconds from the earliest remaining occurrence time. If there are no remaining occurrence times, the operation finishes without executing an action list.
Examples:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3
This rule runs event correlation operations for counting the number of SSH login failure events. Each operation counts events for one user name, and if the operation has observed three login failures within 60 seconds, it sends an e-mail warning to root@localhost.
Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event:
Dec 28 01:42:21 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2
Dec 28 01:43:10 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2
Dec 28 01:43:29 test sshd[28132]: Failed password for risto from 10.1.2.7 port 42172 ssh2
Dec 28 01:44:00 test sshd[28149]: Failed password for risto2 from 10.1.2.7 port 42176 ssh2
Dec 28 01:44:03 test sshd[28211]: Failed password for risto from 10.1.2.7 port 42192 ssh2
Dec 28 01:44:07 test sshd[28211]: Failed password for risto from 10.1.2.7 port 42192 ssh2
When the first event is observed at 01:42:21, a counting operation is started for user risto, with its event correlation window ending at 01:43:21. Since by 01:43:21 two SSH login failures for user risto have occurred, the threshold condition remains unsatisfied for the operation. Therefore, the beginning of its event correlation window will be moved to 01:43:10 (the occurrence time of the second event), leaving the first event outside the window. At 01:44:00, another counting operation is started for user risto2. The threshold condition for the first operation will become satisfied at 01:44:03 (since the operation has seen three login failure events for user risto within 60 seconds), and thus an e-mail warning will be issued. Finally, the event occurring at 01:44:07 will be consumed silently by the first operation (the operation will run until 01:44:10). Since there will be no further login failure events for user risto2, the second operation will exist until 01:45:00 without taking any action.
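The optional action2 field of the SingleWithThreshold rule is not used in the above example. As a sketch, the rule could be extended in the following way, so that each operation also logs a message before finishing, provided that the e-mail warning has previously been sent (the message text is arbitrary):
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=pipe '%t: %s' /bin/mail root@localhost
action2=logonly SSH login failure counting for user $1 has finished
window=60
thresh=3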
The SingleWith2Thresholds rule runs event correlation operations which take action if N1 events have been observed in the window of T1 seconds, and then at most N2 events will be observed in the window of T2 seconds. The values of T1, N1, T2, and N2 are defined by the window, thresh, window2, and thresh2 field, respectively.
When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T1 seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times with the threshold N1. If the operation has observed N1 events, it executes the action list defined by the action field, and starts another counting round for T2 seconds. If no more than N2 events have been observed by the end of the window, the operation executes the action list defined by the action2 field and finishes. Note that both windows are sliding -- the first window slides like the window of the SingleWithThreshold operation, while the beginning of the second window is moved to the second earliest memorized event occurrence time when the threshold N2 is violated.
Examples:
type=SingleWith2Thresholds
ptype=RegExp
pattern=(\S+): %SYS-3-CPUHOG
desc=Router $1 CPU overload
action=pipe '%t: %s' /bin/mail root@localhost
window=300
thresh=2
desc2=Router $1 CPU load has been normal for 1h
action2=logonly
window2=3600
thresh2=0
When a SYS-3-CPUHOG syslog message is received from a router, the rule starts a counting operation for this router which sends an e-mail warning to root@localhost if another such message is received from the same router within 300 seconds. After sending the warning, the operation will continue to run until no SYS-3-CPUHOG syslog messages have been received from the router for 3600 seconds. When this condition becomes satisfied, the operation will log the message "Router <routername> CPU load has been normal for 1h" and finish.
Suppose the following events are generated by a router, and each event timestamp reflects the time SEC observes the event:
Dec 30 12:23:25 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged
Dec 30 12:25:38 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged
Dec 30 12:28:53 router1.mydomain Router1: %SYS-3-CPUHOG: cpu is hogged
When the first event is observed at 12:23:25, a counting operation is started for router Router1. The appearance of the second event at 12:25:38 fulfills the threshold condition given with the thresh and window fields (two events have been observed within 300 seconds). Therefore, the operation will send an e-mail warning about the CPU overload of Router1 to root@localhost.
After that, the operation will start another counting round, expecting to see no SYS-3-CPUHOG events (since thresh2=0) for Router1 during the following 3600 seconds (the beginning of the operation's event correlation window will be moved to 12:25:38 for the second counting round). Since the appearance of the third event at 12:28:53 violates the threshold condition given with the thresh2 and window2 fields, the beginning of the event correlation window will be moved to 12:28:53. Since there will be no further SYS-3-CPUHOG messages for Router1, the operation will run until 13:28:53 and then expire, logging the message "Router Router1 CPU load has been normal for 1h" before finishing.
The EventGroup rule runs event correlation operations for counting repeated instances of N different events e1,...,eN during T seconds, and taking an action if threshold conditions c1,...,cN for *all* events are satisfied (i.e., for each event eK there are at least cK event instances in the window). Note that the event correlation window of the EventGroup operation is sliding like the window of the SingleWithThreshold operation.
Event e1 is described with the pattern and context field, event e2 is described with the pattern2 and context2 field, etc. The values for N and T are defined by the type and window field, respectively. The value for c1 is given with the thresh field, the value for c2 is given with the thresh2 field, etc. Values for N and c1,...,cN default to 1.
In order to match an event with the rule, the pattern and context fields are evaluated first. If they don't match the event, then pattern2 and context2 are evaluated, etc. If all N conditions have been tried without success, the event does not match the rule.
When an event has matched the rule, SEC evaluates the operation description string given with the desc field. If the operation for the given string and rule does not exist, SEC will create it with the lifetime of T seconds. The operation will memorize the occurrence time of the event (current time as returned by the time(2) system call), and compare the number of memorized occurrence times for each eK with the threshold cK (i.e., the number of observed instances of eK is compared with the threshold cK). If all threshold conditions are satisfied, the operation executes the action list defined by the action field, and consumes all further matching events without re-executing the action list if the multact field is set to No (this is the default). However, if multact is set to Yes, the operation will re-evaluate the threshold conditions on every further matching event, re-executing the action list given with the action field if all conditions are satisfied, and sliding the event correlation window forward when the window is about to expire (if no events remain in the window, the operation will finish).
For example, consider the following rule:
type=EventGroup2
ptype=SubStr
pattern=EVENT_A
thresh=2
ptype2=SubStr
pattern2=EVENT_B
thresh2=2
desc=Sequence of two or more As and Bs observed within 60 seconds
action=write - %s
window=60
Also, suppose the following events occur, and each event timestamp reflects the time SEC observes the event:
Mar 10 12:03:01 EVENT_A
Mar 10 12:03:04 EVENT_B
Mar 10 12:03:10 EVENT_A
Mar 10 12:03:11 EVENT_A
Mar 10 12:03:27 EVENT_B
Mar 10 12:03:46 EVENT_A
Mar 10 12:03:59 EVENT_A
When these events are observed by the above EventGroup2 rule, the rule starts an event correlation operation at 12:03:01. Note that although the first threshold condition thresh=2 is satisfied when the third event appears at 12:03:10, the second threshold condition thresh2=2 is not met, and therefore the operation will not execute the action list given with the action field. When the fifth event appears at 12:03:27, all threshold conditions are finally satisfied, and the operation will write the string "Sequence of two or more As and Bs observed within 60 seconds" to standard output with the write action. Finally, the events occurring at 12:03:46 and 12:03:59 will be consumed silently by the operation (the operation will run until 12:04:01).
If a multact=yes statement is added to the above EventGroup2 rule, the operation will execute the write action not only at 12:03:27, but also at 12:03:46 and 12:03:59, since all threshold conditions are still satisfied when the last two events appear (i.e., the last two events are no longer silently consumed). Also, with multact=yes the operation will employ sliding window based event processing even after the write action has been executed at 12:03:27 (therefore, the operation will run until 12:04:59).
If the rule definition has an optional event group pattern and its type defined with the egpattern and egptype fields, the event group pattern is used for matching the event group string. The event group string consists of tokens Xi that are separated by a single space character: "X1 X2 ... XM". M is the number of events a given event correlation operation has observed within its event correlation window.
If the i-th event that the event correlation operation has observed is an instance of event eK, then Xi is set as follows. If the rule definition has a token defined with the egtokenK field for event eK, Xi is set to that token (the token definition may contain match variables that are substituted with values from matching the current instance of eK). If the rule does not have the egtokenK field, then Xi = K (note that K is a positive integer).
The event group string is built and matched with the event group pattern after all threshold conditions (given with thresh* fields) have been found satisfied. In other words, the event group pattern defines an additional condition on top of the numeric threshold conditions.
Note that the event group pattern and its type are similar to the regular patterns and pattern types that are given with the pattern* and ptype* fields, except that the event group pattern does not set any match variables. If the egptype field is set to RegExp or NRegExp, the egpattern field defines a regular expression, while in the case of SubStr and NSubStr egpattern provides a string pattern.
If the egptype field is set to PerlFunc or NPerlFunc, the Perl function given with the egpattern field is called in the Perl scalar context, with the function having three parameters: the event group string, the reference to the list of tokens from the event group string, and the reference to the list of event occurrence times that correspond to tokens. Each event occurrence time is provided in seconds since Epoch, with the first element in the list being the occurrence time of the event represented by the first token in the event group string, the second element in the list being the occurrence time of the event represented by the second token in the event group string, etc.
With egptype=PerlFunc, event group pattern matches if the return value of the function evaluates true in the Perl boolean context, while in the case of false the pattern does not match the event group string. With egptype=NPerlFunc, the pattern matching works in the opposite way.
For example, consider the following rule:
type=EventGroup2
ptype=SubStr
pattern=EVENT_A
thresh=2
ptype2=SubStr
pattern2=EVENT_B
thresh2=2
desc=Sequence of two or more As and Bs with 'A B' at the end
action=write - %s
egptype=RegExp
egpattern=1 2$
window=60
Also, suppose the following events occur, and each event timestamp reflects the time SEC observes the event:
Mar 10 12:05:31 EVENT_B
Mar 10 12:05:32 EVENT_B
Mar 10 12:05:38 EVENT_A
Mar 10 12:05:39 EVENT_A
Mar 10 12:05:42 EVENT_B
When these events are observed by the above EventGroup2 rule, the rule starts an event correlation operation at 12:05:31. When the fourth event appears at 12:05:39, all threshold conditions (thresh=2 and thresh2=2) become satisfied, and therefore the following event group string is built from the first four events:
2 2 1 1
However, since this string does not match the regular expression
1 2$
that has been given with the egpattern field, the operation will not execute the action list given with the action field. When the fifth event appears at 12:05:42, all threshold conditions are again satisfied, and all observed events produce the following event group string:
2 2 1 1 2
Since this event group string matches the regular expression given with the egpattern field, the operation will write the string "Sequence of two or more As and Bs with 'A B' at the end" to standard output with the write action.
If the rule definition has an optional action list defined with the countK field for event eK, the operation executes it every time an instance of eK is observed (even if multact is set to No and the operation has already executed the action list given with action). If the action list contains match variables, they are substituted before *each* execution with values from matching the current instance of eK.
If the rule definition has an optional action list defined with the init field, the operation executes it immediately after the operation has been created.
If the rule definition has an optional action list defined with the end field, the operation executes it immediately before the operation finishes. Note that this action list is *not* executed when the operation is terminated with the reset action.
If the rule definition has an optional action list defined with the slide field, the operation executes it immediately after the event correlation window has slid forward. However, note that moving the window with the setwpos action will *not* trigger the execution.
Also note that when the event correlation window slides forward, the event group pattern (given with the egpattern field) and numeric threshold conditions (given with thresh* fields) will *not* be evaluated for checking if the action list given with the action field should be executed.
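For instance, the count* fields can be illustrated with the first EventGroup2 example rule given above; the following variant is merely a sketch (the message written to standard output is arbitrary):
type=EventGroup2
ptype=SubStr
pattern=EVENT_A
thresh=2
count=write - Observed an instance of EVENT_A
ptype2=SubStr
pattern2=EVENT_B
thresh2=2
desc=Sequence of two or more As and Bs observed within 60 seconds
action=write - %s
window=60
With this variant, the operation writes the count message to standard output every time an instance of EVENT_A is observed, regardless of whether the action list given with the action field has already been executed.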
Examples:
The following example rule cross-correlates iptables events, Apache web server access log messages with 4xx response codes, and SSH login failure events:
type=EventGroup3
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (?:invalid user )?\S+ from ([\d.]+) port \d+ ssh2
thresh=2
ptype2=RegExp
pattern2=^([\d.]+) \S+ \S+ \[.+?\] ".+? HTTP\/[\d.]+" 4\d+
thresh2=3
ptype3=RegExp
pattern3=kernel: iptables:.* SRC=([\d.]+)
thresh3=5
desc=Repeated probing from host $1
action=pipe '%t: %s' /bin/mail root@localhost
window=120
The rule starts an event correlation operation for an IP address if an SSH login failure event, iptables event, or Apache 4xx event is observed for that IP address. The operation sends an e-mail warning to root@localhost if within 120 seconds three threshold conditions are satisfied for the IP address it tracks -- (1) at least two SSH login failure events have occurred for this client IP, (2) at least three Apache 4xx events have occurred for this client IP, (3) at least five iptables events have been observed for this source IP.
Suppose the following events occur, and each event timestamp reflects the time SEC observes the event:
192.168.1.104 - - [05/Jan/2014:01:11:22 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
Jan 5 01:12:52 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=48422 DF PROTO=TCP SPT=46351 DPT=21 WINDOW=29200 RES=0x00 SYN URGP=0
Jan 5 01:12:53 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=48423 DF PROTO=TCP SPT=46351 DPT=21 WINDOW=29200 RES=0x00 SYN URGP=0
Jan 5 01:13:01 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=20048 DF PROTO=TCP SPT=44963 DPT=23 WINDOW=29200 RES=0x00 SYN URGP=0
Jan 5 01:13:02 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=20049 DF PROTO=TCP SPT=44963 DPT=23 WINDOW=29200 RES=0x00 SYN URGP=0
Jan 5 01:13:08 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=36362 DF PROTO=TCP SPT=56918 DPT=25 WINDOW=29200 RES=0x00 SYN URGP=0
Jan 5 01:13:09 localhost kernel: iptables: IN=eth0 OUT= MAC=08:00:27:8e:a1:3a:00:1d:e0:7e:89:b1:08:00 SRC=192.168.1.104 DST=192.168.1.107 LEN=60 TOS=0x10 PREC=0x00 TTL=64 ID=36363 DF PROTO=TCP SPT=56918 DPT=25 WINDOW=29200 RES=0x00 SYN URGP=0
192.168.1.104 - - [05/Jan/2014:01:13:51 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
192.168.1.104 - - [05/Jan/2014:01:13:54 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
192.168.1.104 - - [05/Jan/2014:01:14:00 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
192.168.1.104 - - [05/Jan/2014:01:14:03 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
192.168.1.104 - - [05/Jan/2014:01:14:03 +0200] "GET /login.html HTTP/1.1" 404 287 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
Jan 5 01:14:11 localhost sshd[1810]: Failed password for root from 192.168.1.104 port 46125 ssh2
Jan 5 01:14:12 localhost sshd[1810]: Failed password for root from 192.168.1.104 port 46125 ssh2
Jan 5 01:14:18 localhost sshd[1822]: Failed password for root from 192.168.1.104 port 46126 ssh2
Jan 5 01:14:19 localhost sshd[1822]: Failed password for root from 192.168.1.104 port 46126 ssh2
192.168.1.104 - - [05/Jan/2014:01:14:34 +0200] "GET /test.html HTTP/1.1" 404 286 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:26.0) Gecko/20100101 Firefox/26.0"
The Apache 4xx event at 01:11:22 starts an event correlation operation for 192.168.1.104 which has the event correlation window of 120 seconds, thus ending at 01:13:22. Between 01:12:52 and 01:13:09, six iptables events appear for 192.168.1.104, and the appearance of the fifth event at 01:13:08 fulfills the third threshold condition (within 120 seconds, at least five iptables events have been observed).
Since by 01:13:22 (the end of the event correlation window) no additional events have occurred, the first and second threshold condition remain unsatisfied. Therefore, the beginning of the event correlation window will be moved to 01:12:52 (the occurrence time of the earliest event which is at most 120 seconds old). As a result, the end of the window will move from 01:13:22 to 01:14:52. The only event which is left outside the window is the Apache 4xx event at 01:11:22, and thus the threshold condition for iptables events remains satisfied.
Between 01:13:51 and 01:14:03, five Apache 4xx events occur, and the appearance of the third event at 01:14:00 fulfills the second threshold condition (within 120 seconds, at least three Apache 4xx events have been observed). These events are followed by four SSH login failure events which occur between 01:14:11 and 01:14:19. The appearance of the second event at 01:14:12 fulfills the first threshold condition (within 120 seconds, at least two SSH login failure events have been observed). Since at this particular moment (01:14:12) the other two conditions are also fulfilled, the operation sends an e-mail warning about 192.168.1.104 to root@localhost. After that, the operation silently consumes all further matching events for 192.168.1.104 until 01:14:52, and then terminates.
Please note that if the above rule definition contained a multact=yes statement, the operation would continue sending e-mails at each matching event after 01:14:12, provided that all threshold conditions are satisfied. Therefore, the operation would send three additional e-mails at 01:14:18, 01:14:19, and 01:14:34. Also, the operation would not terminate after its window ends at 01:14:52, but would rather slide the window forward and expect new events. At the occurrence of any iptables, SSH login failure or Apache 4xx event for 192.168.1.104, the operation would produce a warning e-mail if all threshold conditions are fulfilled.
The following example rule cross-correlates iptables events and SSH login events:
type=EventGroup3
ptype=regexp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
varmap= user=1; ip=2
count=alias OPER_$+{ip} LOGIN_FAILED_$+{user}_$+{ip}
ptype2=regexp
pattern2=sshd\[\d+\]: Accepted .+ for (\S+) from ([\d.]+) port \d+ ssh2
varmap2= user=1; ip=2
context2=LOGIN_FAILED_$+{user}_$+{ip}
ptype3=regexp
pattern3=kernel: iptables:.* SRC=([\d.]+)
varmap3= ip=1
desc=Client $+{ip} accessed a firewalled port and had difficulties with logging in
action=pipe '%t: %s' /bin/mail root@localhost
init=create OPER_$+{ip}
slide=delete OPER_$+{ip}; reset 0
end=delete OPER_$+{ip}
window=120
The rule starts an event correlation operation for an IP address if an SSH login failure or iptables event is observed for that IP address. The operation exists for 120 seconds (since, when the event correlation window slides forward, the operation terminates itself with the reset action, as specified with the slide field). The operation sends an e-mail warning to root@localhost if within 120 seconds three threshold conditions are satisfied for the IP address it tracks -- (1) at least one iptables event has been observed for this source IP, (2) at least one SSH login failure has been observed for this client IP, (3) at least one successful SSH login has been observed for this client IP and for some user, provided that the operation has previously observed an SSH login failure for the same user and same client IP.
Suppose the following events occur, and each event timestamp reflects the time SEC observes the event:
Dec 27 19:00:06 test kernel: iptables: IN=eth0 OUT= MAC=00:13:72:8a:83:d2:00:1b:25:07:e2:1b:08:00 SRC=10.1.2.7 DST=10.2.5.5 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1881 DF PROTO=TCP SPT=34342 DPT=23 WINDOW=5840 RES=0x00 SYN URGP=0
Dec 27 19:00:14 test sshd[10520]: Accepted password for root from 10.1.2.7 port 52609 ssh2
Dec 27 19:00:24 test sshd[10526]: Failed password for risto from 10.1.2.7 port 52622 ssh2
Dec 27 19:00:27 test sshd[10526]: Accepted password for risto from 10.1.2.7 port 52622 ssh2
The iptables event at 19:00:06 starts an event correlation operation for 10.1.2.7 which has the event correlation window of 120 seconds. Immediately after the operation has been started, it creates the context OPER_10.1.2.7. The second event at 19:00:14 does not match the rule, since the context LOGIN_FAILED_root_10.1.2.7 does not exist. The third event at 19:00:24 matches the rule, and the operation which is running for 10.1.2.7 sets up the alias name LOGIN_FAILED_risto_10.1.2.7 for the context OPER_10.1.2.7. Finally, the fourth event at 19:00:27 matches the rule, since the context LOGIN_FAILED_risto_10.1.2.7 exists, and the event is therefore processed by the operation (the presence of the context indicates that the operation has previously observed a login failure for user risto from 10.1.2.7). At this particular moment (19:00:27), all three threshold conditions for the operation are fulfilled, and therefore it sends an e-mail warning about 10.1.2.7 to root@localhost. After that, the operation silently consumes all further matching events for 10.1.2.7 until 19:02:06, and then terminates. Immediately before termination, the operation deletes the context OPER_10.1.2.7 which also drops its alias name LOGIN_FAILED_risto_10.1.2.7.
The following example rule correlates SSH login failure events:
type=EventGroup
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=SSH login failures from three different hosts within 1m for user $1
egtoken=$2
egptype=PerlFunc
egpattern=sub { my(%hosts) = map { $_ => 1 } @{$_[1]}; \
return scalar(keys %hosts) >= 3; }
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3
The rule runs event correlation operations for counting the number of SSH login failure events. Each operation counts events for one user name, and if the operation has observed login failures from three different hosts within 60 seconds, it sends an e-mail warning to root@localhost. For verifying that hosts are different, the egtoken field configures the use of host IP addresses as tokens for building the event group string. Also, the Perl function provided with the egpattern field checks if the list of tokens contains at least three unique IP addresses. Note that the list of tokens is referenced by the second input parameter ($_[1]) of the function, while the first input parameter ($_[0]) holds the event group string (the above Perl function only processes the list of tokens).
Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event:
Apr 4 23:04:00 test sshd[21137]: Failed password for risto from 10.1.1.7 port 32182 ssh2
Apr 4 23:04:03 test sshd[21145]: Failed password for risto from 10.1.1.9 port 42176 ssh2
Apr 4 23:04:04 test sshd[21212]: Failed password for risto from 10.1.1.7 port 34191 ssh2
Apr 4 23:04:07 test sshd[21226]: Failed password for risto from 10.1.1.2 port 18999 ssh2
When the first event is observed at 23:04:00, a counting operation is started for user risto. Since at 23:04:04 the operation observes the third event, the threshold condition given with the thresh field becomes satisfied, and thus the event group pattern given with the egpattern field is evaluated. However, the list of tokens (10.1.1.7, 10.1.1.9, 10.1.1.7) contains only two unique elements, and therefore the event group pattern does not match. When the fourth event occurs at 23:04:07, event group pattern is evaluated again, and since the list of tokens (10.1.1.7, 10.1.1.9, 10.1.1.7, 10.1.1.2) contains three unique elements now, the event group pattern matches and the operation will send an e-mail warning to root@localhost.
The Suppress rule takes no action when an event has matched the rule, and keeps matching events from being processed by later rules in the configuration file.
Note that the Suppress rule does not start event correlation operations, and the optional desc field is merely used for describing the rule. Also, in order to end event processing, so that no further rules from any of the configuration files would be tried, use the Jump rule.
Examples:
type=Suppress
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for \S+ from ([\d.]+) port \d+ ssh2
context=SUPPRESS_IP_$1
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
desc=Three SSH login failures within 1m for user $1 from $2
action=pipe '%t: %s' /bin/mail root@localhost; \
create SUPPRESS_IP_$2 3600
window=60
thresh=3
The first rule filters out SSH login failure events for an already reported source IP address, so that they will not be matched against the second rule during 3600 seconds after sending an e-mail warning.
The Calendar rule was designed for executing actions at specific times. Unlike all other rules, this rule reacts only to the system clock, ignoring other input. The Calendar rule executes the action list given with the action field if the current time matches all conditions of the time specification given with the time field. The action list is executed only once for any matching minute.
The rule employs a time specification which closely resembles the crontab(1) style, but there are some subtle differences. The time specification consists of five or six conditions separated by whitespace. The first condition matches minutes (allowed values are 0-59), the second condition matches hours (allowed values are 0-23), the third condition matches days (allowed values are 0-31, with 0 denoting the last day of the month), the fourth condition matches months (allowed values are 1-12), and the fifth condition matches weekdays (allowed values are 0-7, with 0 and 7 denoting Sunday). The sixth condition is optional and matches years (allowed values are 0-99 which denote the last two digits of the year).
Asterisks (*), ranges of numbers (e.g., 8-11), and lists (e.g., 2,5,7-9) are allowed as conditions. Asterisks and ranges may be augmented with step values (e.g., 47-55/2 means 47,49,51,53,55).
Note that unlike the crontab(1) time specification, the day and weekday conditions are *not* joined with logical OR, but rather with logical AND. Therefore, 0 1 25-31 10 7 means 1AM on the last Sunday in October. On the other hand, with crontab(1) the same specification means 1AM on each of the days 25-31 of October and 1AM on every Sunday in October.
Also, unlike some versions of cron(8), SEC is not restricted to taking action only during the first second of the current minute. For example, if SEC is started at the 22nd second of a minute, the wildcard condition produces a match for this minute. As another example, if the time specification matches the current minute but the context expression evaluates FALSE during the first half of the minute, the Calendar rule will execute the action list in the middle of this minute when the expression value becomes TRUE.
Note that the Calendar rule does not start event correlation operations, and the desc field is merely used for setting the %s action list variable.
Examples:
type=Calendar
time=0 2 25-31 3,12 6
desc=Check if backup is done on last Saturday of Q1 and Q4
action=event WAITING_FOR_BACKUP
type=Calendar
time=0 2 24-30 6,9 6
desc=Check if backup is done on last Saturday of Q2 and Q3
action=event WAITING_FOR_BACKUP
type=PairWithWindow
ptype=SubStr
pattern=WAITING_FOR_BACKUP
desc=Quarterly backup not completed on time!
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=SubStr
pattern2=BACKUP READY
desc2=Quarterly backup successfully completed
action2=none
window=1800
The first two rules create a synthetic event WAITING_FOR_BACKUP at 2AM on the last Saturday of March, June, September and December. The third rule matches this event and starts an event correlation operation which waits for the BACKUP READY event for 1800 seconds. If this event has not arrived by 2:30AM, the operation sends an e-mail warning to root@localhost.
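The above examples do not use step values in their time specifications. The following rule is merely a sketch which generates a synthetic HEARTBEAT event (an arbitrary event name) every 10 minutes during the hours 8-17 on weekdays:
type=Calendar
time=*/10 8-17 * * 1-5
desc=Generate a heartbeat event every 10 minutes during working hours
action=event HEARTBEAT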
The Jump rule submits matching events to specific ruleset(s) for further processing. If the event matches the rule, SEC continues the search for matching rules in the configuration file set(s) given with the cfset field. Rules from every file are tried in the order of their appearance in the file. Configuration file sets can be created from Options rules with the joincfset field, with each set containing at least one configuration file. If more than one set name is given with cfset, the sets are processed from left to right; a matching rule in one set doesn't prevent SEC from processing the following sets. If the constset field is set to Yes, set names are assumed to be constants and will not be searched for match variables at runtime.
If the cfset field is not present and the continue field is set to GoTo, the Jump rule can be used for skipping rules inside the current configuration file. If both cfset and continue are not present (or continue is set to DontCont), Jump is identical to the Suppress rule. Finally, if cfset is not present and continue is set to EndMatch, processing of the matching event ends (i.e., no further rules from any of the configuration files will be tried).
Note that the Jump rule does not start event correlation operations, and the optional desc field is merely used for describing the rule.
Examples:
type=Jump
ptype=RegExp
pattern=sshd\[\d+\]:
cfset=sshd-rules auth-rules
When an sshd syslog message appears in input, rules from configuration files of the set sshd-rules are first used for matching the message, and then rules from the configuration file set auth-rules are tried.
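As noted above, the Jump rule can also end event processing when the cfset field is omitted and the continue field is set to EndMatch. The following sketch (with an arbitrarily chosen pattern) drops sshd connection teardown messages, so that no further rules from any of the configuration files are tried for them:
type=Jump
ptype=RegExp
pattern=sshd\[\d+\]: Connection closed by
desc=End processing of sshd connection teardown messages
continue=EndMatch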
The Options rule sets processing options for the ruleset in the current configuration file. If more than one Options rule is present in the configuration file, the last instance overrides all previous ones. Note that the Options rule is only processed when SEC (re)starts and reads in the configuration file. Since this rule is not applied at runtime, it can never match events, react to the system clock, or start event correlation operations.
The joincfset field lists the names of one or more configuration file sets, and the current configuration file will be added to each set. If a set doesn't exist, it will be created and the current configuration file becomes its first member. If the procallin field is set to No, the rules from the configuration file will be used for matching input from Jump rules only.
Examples:
The following rule adds the current configuration file to the set sshd-rules which is used for matching input from Jump rules only:
type=Options
joincfset=sshd-rules
procallin=no
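Note that a configuration file containing such an Options rule is loaded like any other configuration file, for example with a command line resembling the following sketch (the file names are hypothetical):
/usr/bin/sec --conf=/etc/sec/main.conf --conf=/etc/sec/sshd-rules.conf \
--input=/var/log/secure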
The following rule adds the current configuration file to sets linux and solaris which are used for matching all input:
type=Options
joincfset=linux solaris
In order to identify event correlation operations, SEC assigns to every operation an ID which is composed of the configuration file name, the rule number, and the operation description string (defined by the desc field of the rule). If there are N rules in the configuration file (excluding Options rules), the rule numbers belong to the range 0..N-1, and the number of the k-th rule is k-1. Since each Options rule is only processed when SEC reads in the configuration file and is not applied at runtime, Options rules do not receive rule numbers. Note that since the configuration file name and rule number are part of the operation ID, different rules can have identical desc fields without a danger of a clash between operations.
For example, if the configuration file /etc/sec/my.conf contains only one rule
type=SingleWithThreshold
ptype=RegExp
pattern=user (\S+) login failure on (\S+)
desc=Repeated login failures for user $1 on $2
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3
then the number of this rule is 0. When this rule matches an input event "user admin login failure on tty1", the desc field yields an operation description string Repeated login failures for user admin on tty1, and the event will be directed for further processing to the operation with the following ID:
/etc/sec/my.conf | 0 | Repeated login failures for user admin on tty1
If the operation for this ID does not exist, the rule will create it. The newly created operation has its event counter initialized to 1, and it expects to receive two additional "user admin login failure on tty1" events from the rule within the following 60 seconds. If the operation receives such an event, its event counter is incremented, and if the counter reaches the value of 3, a warning e-mail is sent to root@localhost.
By tuning the desc field of the rule, the scope of individual event correlation operations can be changed. For instance, if the following events occur within 10 seconds
user admin login failure on tty1
user admin login failure on tty5
user admin login failure on tty2
the above rule starts three event correlation operations. However, if the desc field of the rule is changed to Repeated login failures for user $1, these events are processed by the *same* event correlation operation (the operation sends a warning e-mail to root@localhost when it receives the third event).
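For reference, the modified rule would be identical to the rule above except for its desc field:
type=SingleWithThreshold
ptype=RegExp
pattern=user (\S+) login failure on (\S+)
desc=Repeated login failures for user $1
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3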
Since rules from the same configuration file are matched against input in the order they are given, the rule ordering influences the creation and feeding of event correlation operations. Suppose the configuration file /etc/sec/my.conf contains the following rules:
type=Suppress
ptype=TValue
pattern=TRUE
context=MYCONTEXT
type=SingleWithThreshold
ptype=RegExp
pattern=user (\S+) login failure on (\S+)
desc=Repeated login failures for user $1 on $2
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3
The second rule is able to create and feed event correlation operations as long as the context MYCONTEXT does not exist. However, after MYCONTEXT has been created, no input event will reach the second rule, and the rule is thus unable to create new operations and feed existing ones with events.
Note that Pair and PairWithWindow rules can feed the same event to several operations. Suppose the configuration file /etc/sec/my2.conf contains the following rules:
type=Suppress
ptype=SubStr
pattern=test
type=Pair
ptype=RegExp
pattern=database (\S+) down
desc=Database $1 is down
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=RegExp
pattern2=database $1 up|all databases up
desc2=Database %1 is up
action2=pipe '%t: %s' /bin/mail root@localhost
window=86400
Since the following input events don't contain the substring "test"
database mydb1 down
database mydb2 down
database mydb3 down
they are matched by the second rule of type Pair which creates three event correlation operations. Each operation is running for one particular database name, and the operations have the following IDs:
/etc/sec/my2.conf | 1 | Database mydb1 is down
/etc/sec/my2.conf | 1 | Database mydb2 is down
/etc/sec/my2.conf | 1 | Database mydb3 is down
Each newly created operation sends an e-mail notification to root@localhost
about the "database down" condition, and will then wait for 86400 seconds
(24 hours) for either of the following messages:
(a) "database up" message for the given database,
(b) "all databases up" message.
The operation with the ID
/etc/sec/my2.conf | 1 | Database mydb1 is down
uses the following regular expression for matching expected messages:
database mydb1 up|all databases up
The operation with the ID
/etc/sec/my2.conf | 1 | Database mydb2 is down
employs the following regular expression for matching expected messages:
database mydb2 up|all databases up
Finally, the operation with the ID
/etc/sec/my2.conf | 1 | Database mydb3 is down
uses the following regular expression:
database mydb3 up|all databases up
If the following input events appear after 10 minutes
database test up
admin logged in
database mydb3 up
all databases up
the first event "database test up" matches the first rule (Suppress) which does not pass the event further to the second rule (Pair). However, all following events reach the Pair rule. Since the messages don't match the pattern field of the rule, the rule feeds them to all currently existing operations it has created, so that the operations can match these events with their regular expressions. Because regular expressions of all three operations don't match the event "admin logged in", the operations will continue to run. In the case of the "database mydb3 up" event, the regular expression of the operation
/etc/sec/my2.conf | 1 | Database mydb3 is down
produces a match. Therefore, the operation will send the e-mail notification "Database mydb3 is up" to root@localhost and terminate. However, the following event "all databases up" matches the regular expressions of two remaining operations. As a result, the operations will send e-mail notifications "Database mydb1 is up" and "Database mydb2 is up" to root@localhost and terminate.
Each operation has an event correlation window which defines its scope in time. The size of the window is defined by the window* field, and the beginning of the window can be obtained with the getwpos action. SingleWithThreshold, SingleWith2Thresholds and EventGroup operations can slide their windows forward during event processing, while for all operations the window can also be moved explicitly with the setwpos action. Also, event correlation operations can be terminated with the reset action. Note that the getwpos, setwpos, and reset actions only work for operations started by the rules from the same configuration file.
For example, consider the configuration file /etc/sec/sshd.rules that contains the following rules:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=Three SSH login failures within 1m for user $1
action=pipe '%t: %s' /bin/mail root@localhost
window=60
thresh=3
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Accepted .+ for (\S+) from [\d.]+ port \d+ ssh2
desc=SSH login successful for user $1
action=reset -1 Three SSH login failures within 1m for user $1
Suppose the following events are generated by an SSH daemon, and each event timestamp reflects the time SEC observes the event:
Dec 29 15:00:03 test sshd[14129]: Failed password for risto from 10.1.2.7 port 31312 ssh2
Dec 29 15:00:08 test sshd[14129]: Failed password for risto from 10.1.2.7 port 31312 ssh2
Dec 29 15:00:17 test sshd[14129]: Accepted password for risto from 10.1.2.7 port 31312 ssh2
Dec 29 15:00:52 test sshd[14142]: Failed password for risto from 10.1.1.2 port 17721 ssh2
The first event at 15:00:03 starts an event correlation operation with the ID
/etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto
However, when the third event occurs at 15:00:17, the second rule matches it and terminates the operation with the action
reset -1 Three SSH login failures within 1m for user risto
The -1 parameter of reset restricts the action to operations started by the previous rule (i.e., the first rule, which has rule number 0), while the Three SSH login failures within 1m for user risto parameter is the operation description string. Together with the current configuration file name (/etc/sec/sshd.rules), these parameters yield the operation ID
/etc/sec/sshd.rules | 0 | Three SSH login failures within 1m for user risto
(If no operation with the given ID existed, reset would do nothing.)
As a consequence, the fourth event at 15:00:52 starts another operation with the same ID as the terminated operation had. Without the second rule, the operation that was started at 15:00:03 would not be terminated, and the appearance of the fourth event would trigger a warning e-mail from that operation.
Note that when both synthetic events and regular input are available for processing, synthetic events are always consumed first. Only after all pending synthetic events have been consumed will SEC process new data from input files.
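As an illustration, consider the following hypothetical rules (the event texts and the "disk full" pattern are made up for this sketch). When an input line containing "disk full on <filesystem>" appears, the first rule generates a synthetic DISKALERT event, and this synthetic event is matched against the whole ruleset (and handled by the second rule) before the next line is read from input files:
type=Single
ptype=RegExp
pattern=disk full on (\S+)
desc=generate a synthetic alert event for file system $1
action=event DISKALERT $1
type=Single
ptype=RegExp
pattern=^DISKALERT (\S+)
desc=file system $1 is full
action=write - %s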
With the --jointbuf option, SEC employs a joint input buffer for all input sources which holds the N most recent input lines (the value of N can be set with the --bufsize option). Updating the input buffer means that the new line becomes the first element of the buffer, while the last element (the oldest line) is removed from the end of the buffer. With the --nojointbuf option, SEC maintains a buffer of N lines for each input file, and if an input line comes from file F, the buffer of F is updated as described above. There is also a separate buffer for synthetic and internal events.
Suppose SEC is started with the following command line
/usr/bin/sec --conf=/etc/sec/test-multiline.conf --jointbuf \
--input=/var/log/prog1.log --input=/var/log/prog2.log
and the configuration file /etc/sec/test-multiline.conf has the following content:
type=Single
rem=this rule matches two consecutive lines where the first \
line contains "test1" and the second line "test2", and \
writes the matching lines to standard output
ptype=RegExp2
pattern=test1.*\n.*test2
desc=two consecutive test lines
action=write - $0
When the following lines appear in input files /var/log/prog1.log and /var/log/prog2.log
Dec 31 12:33:12 test prog1: test1 (file /var/log/prog1.log)
Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)
Dec 31 12:41:53 test prog2: test2 (file /var/log/prog2.log)
they are stored in a common input buffer. Therefore, the rule fires after the third event has appeared, and writes the following lines to standard output:
Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)
However, if SEC is started with the --nojointbuf option, separate input buffers are set up for /var/log/prog1.log and /var/log/prog2.log. Therefore, the rule fires after the third event has occurred, and writes the following lines to standard output:
Dec 31 12:33:12 test prog1: test1 (file /var/log/prog1.log)
Dec 31 12:39:35 test prog1: test2 (file /var/log/prog1.log)
The rule also fires after the fourth event has occurred, producing the following output:
Dec 31 12:34:09 test prog2: test1 (file /var/log/prog2.log)
Dec 31 12:41:53 test prog2: test2 (file /var/log/prog2.log)
The content of input buffers can be modified with the rewrite action, and modifications become visible immediately during ongoing event processing iteration. Suppose SEC is started with the following command line
/usr/bin/sec --conf=/etc/sec/test-rewrite.conf \
--input=- --nojointbuf
and the configuration file /etc/sec/test-rewrite.conf has the following content:
type=Single
rem=this rule matches two consecutive lines where the first \
line contains "test1" and the second line "test2", and \
joins these lines in the input buffer
ptype=RegExp2
pattern=^(.*test1.*)\n(.*test2.*)$
continue=TakeNext
desc=join two test lines
action=rewrite 2 Joined $1 and $2
type=Single
rem=this rule matches a line which begins with "Joined", \
and writes this line to standard output
ptype=RegExp
pattern=^Joined
desc=output joined lines
action=write - $0
When the following two lines appear in standard input
This is a test1
This is a test2
they are matched by the first rule which uses the rewrite action for replacing those two lines in the input buffer with new content. The last line in the input buffer ("This is a test2") is replaced with "Joined This is a test1 and This is a test2", while the previous line in the input buffer ("This is a test1") is replaced with an empty string. Since the rule contains the continue=TakeNext statement, the matching process continues with the following rule. That rule matches the last line in the input buffer if it begins with "Joined", and writes the line to standard output, producing
Joined This is a test1 and This is a test2
After each event processing iteration, the pattern match cache is cleared. In other words, if a match is cached with the rule varmap* field, it is available during the ongoing iteration only. Note that results from a successful pattern match are also cached when the subsequent context expression evaluation yields FALSE. This allows for reusing results from partial rule matches. For example, the following rule creates the cache entry "ssh_failed_login" for any SSH failed login event, even if the context ALERTING_ON does not exist:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
varmap=ssh_failed_login; user=1; ip=2
context=ALERTING_ON
desc=SSH login failure for user $1 from $2
action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost
However, provided that the context expression does not contain match variables, enclosing the expression in square brackets (e.g., [ALERTING_ON]) forces its evaluation before pattern matching, and thus prevents both the match attempt and the creation of the cache entry if the evaluation yields FALSE.
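For instance, the following variant of the above rule (shown here only as an illustration) evaluates the ALERTING_ON context before attempting the regular expression match; if the context does not exist, the pattern is not matched and the ssh_failed_login cache entry is not created:
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (\S+) from ([\d.]+) port \d+ ssh2
varmap=ssh_failed_login; user=1; ip=2
context=[ALERTING_ON]
desc=SSH login failure for user $1 from $2
action=pipe '%s' /bin/mail -s 'SSH login alert' root@localhost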
Rules from the same configuration file are matched against the buffer content in the order they are given in that file. When multiple configuration files have been specified, rule sequences from all files are matched against the buffer content (unless specified otherwise with Options rules). The matching order is determined by the order of configuration files in the SEC command line. For example, if the Perl glob() function returns filenames in ascending ASCII order, and configuration files /home/risto/A.conf, /home/risto/B.conf2, and /home/risto/C.conf are specified with --conf=/home/risto/*.conf --conf=/home/risto/*.conf2 in the SEC command line, then SEC first matches the input against the rule sequence from A.conf, then from C.conf, and finally from B.conf2. Also, note that even if A.conf contains a Suppress rule for a particular event, the event is still processed by the rulesets in C.conf and B.conf2. However, glob() might return file names in a different order if locale settings change. If you want to enforce a fixed processing order for configuration files in a portable way, it is recommended to create a unique set for each file with the Options rule, and to employ the Jump rule for defining the processing order for these sets, e.g.:
# This rule appears in A.conf
type=Options
joincfset=FileA
procallin=no
# This rule appears in B.conf2
type=Options
joincfset=FileB
procallin=no
# This rule appears in C.conf
type=Options
joincfset=FileC
procallin=no
# This rule appears in main.conf
type=Jump
ptype=TValue
pattern=TRUE
cfset=FileA FileC FileB
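A matching command line could look as follows (this is only a sketch which assumes that main.conf resides outside /home/risto, e.g. in /etc/sec, so that the file name patterns do not pick it up, and that input comes from /var/log/messages). Since the three rulesets have procallin=no set, input events reach them only through the Jump rule in main.conf, in the order FileA, FileC, FileB:
/usr/bin/sec --conf=/etc/sec/main.conf --conf=/home/risto/*.conf \
--conf=/home/risto/*.conf2 --input=/var/log/messages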
After the relevant input buffer has been updated and its content has been matched by the rules, SEC handles caught signals and checks the status of child processes. When the timeout specified with the --cleantime option has expired, SEC also checks the status of contexts and event correlation operations. Therefore, relatively small values should be specified with the --cleantime option, in order to retain the accuracy of the event correlation process. If the --cleantime option is set to 0, SEC checks event correlation operations and contexts after processing every input line, but this consumes more CPU time. If the --poll-timeout option value exceeds the value given with --cleantime, the --poll-timeout option value takes precedence (i.e., sleeps after unsuccessful polls will not be shortened).
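For example, the following command line (the rule and input file paths are illustrative) makes SEC check contexts and event correlation operations approximately every 5 seconds, while sleeping for 1 second after unsuccessful input polls:
/usr/bin/sec --detach --conf=/etc/sec/sec.rules \
--input=/var/log/messages --cleantime=5 --poll-timeout=1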
Finally, note that apart from the sleeps after unsuccessful polls, SEC measures all time intervals and occurrence times in seconds, and always uses the time(2) system call for obtaining the current time. Also, for input event occurrence time SEC always uses the time it observed the event, *not* the timestamp extracted from the event.
If the --intevents command line option is given, SEC will generate internal events when it is started up, when it receives certain signals, and when it terminates normally. Inside SEC, an internal event is treated as if it were a line read from a SEC input file. Specific rules can be written to match internal events, in order to take some action (e.g., start an external event correlation module with spawn when SEC starts up). The following internal events are supported:
SEC_STARTUP - generated when SEC is started (this event will always be the first event that SEC sees)
SEC_PRE_RESTART - generated before processing of the SIGHUP signal (this event will be the last event that SEC sees before clearing all internal data structures and reloading its configuration)
SEC_RESTART - generated after processing of the SIGHUP signal (this event will be the first event that SEC sees after clearing all internal data structures and reloading its configuration)
SEC_PRE_SOFTRESTART - generated before processing of the SIGABRT signal (this event will be the last event that SEC sees before reloading its configuration)
SEC_SOFTRESTART - generated after processing of the SIGABRT signal (this event will be the first event that SEC sees after reloading its configuration)
SEC_PRE_LOGROTATE - generated before processing of the SIGUSR2 signal (this event will be the last event that SEC sees before reopening its log file and closing its outputs)
SEC_LOGROTATE - generated after processing of the SIGUSR2 signal (this event will be the first event that SEC sees after reopening its log file and closing its outputs)
SEC_SHUTDOWN - generated when SEC receives the SIGTERM signal, or when SEC reaches all EOFs of input files after being started with the --notail option. With the --childterm option, SEC sleeps for 3 seconds after generating SEC_SHUTDOWN event, and then sends SIGTERM to its child processes (if a child process was triggered by SEC_SHUTDOWN, this delay leaves the process enough time for setting a signal handler for SIGTERM).
Before generating an internal event, SEC sets up a context named SEC_INTERNAL_EVENT, in order to disambiguate internal events from regular input. The SEC_INTERNAL_EVENT context is deleted immediately after the internal event has been matched against all rules.
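For example, the following rule (a minimal sketch; the logger command line is only an illustration) reacts to the SEC_SHUTDOWN internal event by logging a message to syslog:
type=Single
ptype=RegExp
pattern=^SEC_SHUTDOWN$
context=[SEC_INTERNAL_EVENT]
desc=report SEC shutdown to syslog
action=shellcmd /usr/bin/logger -t sec -p daemon.notice 'SEC is shutting down'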
If the --intcontexts command line option is given, or there is an --input option with a context specified, SEC creates an internal context each time it reads a line from an input file or a synthetic event. The internal context is deleted immediately after the line has been matched against all rules. For all input files that have the context name explicitly set with --input=<file_pattern>=<context>, the name of the internal context is <context>. If the line was read from the input file <filename> for which there is no context name set, the name of the internal context is _FILE_EVENT_<filename>. For synthetic events, the name of the internal context defaults to _INTERNAL_EVENT, but cspawn and cevent actions can be used for generating synthetic events with custom internal context names. This allows for writing rules that match data from one particular input source only. For example, the rule
type=Suppress
ptype=TValue
pattern=TRUE
context=[!_FILE_EVENT_/dev/logpipe]
passes only the lines that were read from /dev/logpipe, and also synthetic events that were generated with the _FILE_EVENT_/dev/logpipe internal context (e.g., with the action cevent _FILE_EVENT_/dev/logpipe 0 This is a test event). As another example, if SEC has been started with the command line
/usr/bin/sec --intevents --intcontexts --conf=/etc/sec/my.conf \
--input=/var/log/messages=MESSAGES \
--input=/var/log/secure=SECURE \
--input=/var/log/cron=CRON
and the rule file /etc/sec/my.conf contains the following rules
type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART)$
context=[SEC_INTERNAL_EVENT]
desc=listen on 10514/tcp for incoming events
action=cspawn MESSAGES /usr/bin/nc -l -k 10514
type=Single
ptype=RegExp
pattern=.
context=[MESSAGES]
desc=echo everything from 10514/tcp and /var/log/messages
action=write - $0
then SEC will receive input lines from the log files /var/log/messages, /var/log/secure, and /var/log/cron, and will also run /usr/bin/nc for receiving input lines from the port 10514/tcp. All input lines from /var/log/messages and 10514/tcp are matched by the second rule and written to standard output.
By default, command lines given with actions such as pipe, spawn and shellcmd are parsed by the shell. Disabling shell parsing for command lines can be useful for avoiding unwanted side effects. For example, consider the following badly written rule for sending an e-mail to a local user if 10 SSH login failures have been observed for this user from the same IP address during 300 seconds:
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for (.+) from ([\d.]+) port \d+ ssh2
desc=Failed SSH logins for user $1 from $2
action=pipe 'Failed SSH logins from $2' /bin/mail -s alert $1
window=300
thresh=10
Unfortunately, the above rule allows for the execution of arbitrary command lines with the privileges of the SEC process. For example, consider the following malicious command line for providing fake input events for the rule:
logger -p authpriv.info -t sshd -i 'Failed password for `/usr/bin/touch /tmp/test` from 127.0.0.1 port 12345 ssh2'
When this command line is repeatedly executed, the attacker is able to trigger the execution of the command line /bin/mail -s alert `/usr/bin/touch /tmp/test`. Since this command line is parsed by the shell, the shell executes the command specified by the attacker: /usr/bin/touch /tmp/test. For fixing this issue, the pipe action can be replaced with pipeexec which disables shell parsing:
action=pipeexec 'Failed SSH logins from $2' /bin/mail -s alert $1
However, when using that approach, you *must* make sure that the external program handles the data passed to it without any unexpected side effects.
As another workaround, the regular expression pattern of the rule can be modified to match user names that do not contain shell metacharacters, for example:
pattern=sshd\[\d+\]: Failed .+ for ([\w.-]+) from ([\d.]+) port \d+ ssh2
SEC communicates with its child processes through pipes (created with the pipe(2) system call). When the child process is at the read end of the pipe, data have to be written to the pipe in blocking mode which ensures reliable data transmission. In order to avoid being blocked, SEC forks another SEC process for writing data to the pipe reliably. The newly created SEC process will then fork the child process, managing the child process on behalf of the main SEC process (i.e., the main SEC process is the grandparent process for the child). For example, if the SEC process that manages the child receives the SIGTERM signal, the signal will be forwarded to the child process, and when the child process terminates, its exit code will be reported to the main SEC process.
After forking an external program, SEC continues immediately, and checks the program status periodically until the program exits. The running time of a child process is not limited in any way. With the --childterm option, SEC sends the SIGTERM signal to all child processes when it terminates. If some special exit procedures need to be accomplished in the child process (or the child wishes to ignore SIGTERM), then the child must install a handler for the SIGTERM signal. Note that if the program command line is parsed by the shell, the parsing shell will run as a child process of SEC and as the parent process of the program. Therefore, the SIGTERM signal will be sent to the shell, *not* the program. In order to avoid this, the shell's builtin exec command can be used (see sh(1) for more information) which replaces the shell with the program without forking a new process, e.g.,
action=spawn exec /usr/local/bin/myscript.pl 2>/var/log/myscript.log
Note that if an action list includes two actions which fork external programs, the execution order of these programs is not determined by the order of actions in the list, since both programs run asynchronously. In order to address this issue, the execution order must be specified explicitly (e.g., instead of writing action=shellcmd cmd1; shellcmd cmd2, use the shell && operator and write action=shellcmd cmd1 && cmd2).
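For instance, the following single shellcmd action chains two commands with &&, so that the e-mail is sent only after the diagnostics script has completed successfully (the script and file names are purely illustrative):
action=shellcmd /usr/local/bin/collect-diag.sh > /tmp/diag.txt && \
/bin/mail -s 'diagnostics report' root@localhost < /tmp/diag.txt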
Sometimes it is desirable to start an external program and provide it with data from several rules. In order to create such setup, named pipes can be harnessed. For example, if /var/log/pipe is a named pipe, then
action=shellcmd /usr/bin/logger -f /var/log/pipe -p user.notice
starts the /usr/bin/logger utility which sends all lines read from /var/log/pipe to the local syslog daemon with the "user" facility and "notice" level. In order to feed events to /usr/bin/logger, the write action can be used (e.g., write /var/log/pipe This is my event). Although SEC keeps the named pipe open across different write actions, the pipe will be closed on the reception of the SIGHUP, SIGABRT and SIGUSR2 signals. Since many UNIX tools terminate on receiving EOF from their input, they need to be restarted after such signals have arrived. For this purpose, the --intevents option and SEC internal events can be used. For example, the following rule starts the /usr/bin/logger utility at SEC startup, and also restarts it after the reception of the relevant signals:
type=Single
ptype=RegExp
pattern=^(?:SEC_STARTUP|SEC_RESTART|SEC_SOFTRESTART|SEC_LOGROTATE)$
context=SEC_INTERNAL_EVENT
desc=start the logger tool
action=free %emptystring; owritecl /var/log/pipe %emptystring; \
shellcmd /usr/bin/logger -f /var/log/pipe -p user.notice
Note that if /var/log/pipe is never opened for writing by a write action, /usr/bin/logger will never see EOF and will thus not terminate. The owritecl action opens and closes /var/log/pipe without writing any bytes, in order to ensure the presence of EOF in such cases. This allows any previous /usr/bin/logger process to terminate before the new process is started.
As an example of events with a variable structure, consider the following iptables packet logging messages from the Linux kernel:
May 27 10:00:15 box1 kernel: iptables: IN=eth0 OUT= MAC=08:00:27:be:9e:2f:00:10:db:ff:20:03:08:00 SRC=10.6.4.14 DST=10.1.8.2 LEN=84 TOS=0x00 PREC=0x00 TTL=251 ID=61426 PROTO=ICMP TYPE=8 CODE=0 ID=11670 SEQ=2
May 27 10:02:22 box1 kernel: iptables: IN=eth0 OUT= MAC=08:00:27:be:9e:2f:00:10:db:ff:20:03:08:00 SRC=10.6.4.14 DST=10.1.8.2 LEN=52 TOS=0x00 PREC=0x00 TTL=60 ID=61441 DF PROTO=TCP SPT=53125 DPT=23 WINDOW=49640 RES=0x00 SYN URGP=0
Depending on the protocol and the nature of the traffic, events can have a wide variety of fields, and parsing out all event data with one regular expression is infeasible. For addressing this issue, a PerlFunc pattern can be used which creates match variables from all fields of the matching event, stores them in one Perl hash, and returns a reference to this hash. Outside the PerlFunc pattern, match variables are initialized from the key-value pairs in the returned hash. Suppose the following Jump rule with a PerlFunc pattern is defined in the main.rules rule file:
type=Jump
ptype=PerlFunc
pattern=sub { my(%var); my($line) = $_[0]; \
if ($line !~ /kernel: iptables:/g) { return 0; } \
while ($line =~ /\G\s*([A-Z]+)(?:=(\S*))?/g) { \
$var{$1} = defined($2)?$2:1; \
} return \%var; }
varmap=IPTABLES
desc=parse iptables event
cfset=iptables
For example, if the iptables event contains the fields SRC=10.6.4.14, DST=10.1.8.2 and SYN, the above PerlFunc pattern sets up the match variable $+{SRC} which holds 10.6.4.14, the match variable $+{DST} which holds 10.1.8.2, and the match variable $+{SYN} which holds 1. The Jump rule caches all created match variables under the name IPTABLES, and submits the matching event to the iptables ruleset for further processing. Suppose the iptables ruleset is defined in the iptables.rules rule file:
type=Options
procallin=no
joincfset=iptables
type=SingleWithThreshold
ptype=Cached
pattern=IPTABLES
context=IPTABLES :> ( sub { return $_[0]->{"PROTO"} eq "ICMP"; } )
desc=ICMP flood type $+{TYPE} code $+{CODE} from host $+{SRC}
action=logonly
window=10
thresh=100
type=SingleWithThreshold
ptype=Cached
pattern=IPTABLES
context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \
exists($_[0]->{"FIN"}) ; } )
desc=SYN+FIN flood from host $+{SRC}
action=logonly
window=10
thresh=100
The two SingleWithThreshold rules employ Cached patterns for matching iptables events by looking up the IPTABLES entry in the pattern match cache (created by the above Jump rule for each iptables event). In order to narrow down the match to specific iptables events, the rules employ precompiled Perl functions in context expressions. The :> operator is used for speeding up the matching, providing the function with a single parameter which refers to the hash of variable name-value pairs for the IPTABLES cache entry.
The first SingleWithThreshold rule logs a warning message if within 10 seconds 100 iptables events have been observed for ICMP packets with the same type, code, and source IP address. The second SingleWithThreshold rule logs a warning message if within 10 seconds 100 iptables events have been observed for TCP packets coming from the same host, and having both SYN and FIN flag set in each packet.
Apart from using action list variables for data sharing between rules, Perl variables created in Perl code can be employed for the same purpose. For example, when SEC has executed the following action
action=eval %a ($b = 1)
the variable $b and its value become visible in the following context expression
context= =(++$b > 10)
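For instance, in the following two rules (a minimal sketch which assumes that SEC runs with the --intevents option; the SSH failure pattern and the threshold of ten are purely illustrative), the second rule increments $b in its context expression for every matching event, and its action fires from the eleventh SSH login failure onwards:
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=initialize the counter variable
action=eval %o ($b = 0)
type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed .+ for \S+ from [\d.]+ port \d+ ssh2
context= =(++$b > 10)
desc=more than ten SSH login failures observed since SEC startup
action=write - %s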
Expressions of this kind thus allow event counting to be implemented implicitly. In order to avoid possible clashes with variables inside the SEC code itself, user-defined Perl code is executed in the main::SEC namespace (i.e., inside the special package main::SEC). By using the main:: prefix, SEC data structures can be accessed and modified. For example, the following rules restore and save contexts with names MY_* on SEC startup and shutdown, using the Perl Storable module for saving and restoring the relevant elements of the %main::context_list hash (since the following example does not handle code references with the Storable module, it is assumed that context action lists do not contain lcall actions):
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
continue=TakeNext
desc=Load the Storable module and terminate if it is not found
action=eval %ret (require Storable); \
if %ret ( logonly Storable loaded ) else ( eval %o exit(1) )
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Restore contexts MY_* from /var/lib/sec/SEC_CONTEXTS on startup
action=lcall %ret -> ( sub { my($ref, $context); \
$ref = Storable::retrieve("/var/lib/sec/SEC_CONTEXTS"); \
foreach $context (keys %{$ref}) { \
if ($context =~ /^MY_/) \
{ $main::context_list{$context} = $ref->{$context}; } } } )
type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT
desc=Save contexts MY_* into /var/lib/sec/SEC_CONTEXTS on shutdown
action=lcall %ret -> ( sub { my($context, %hash); \
foreach $context (keys %main::context_list) { \
if ($context =~ /^MY_/) \
{ $hash{$context} = $main::context_list{$context}; } } \
Storable::store(\%hash, "/var/lib/sec/SEC_CONTEXTS"); } )
However, note that modifying data structures within SEC code is recommended only for advanced users who have carefully studied relevant parts of the code.
Finally, sometimes larger chunks of Perl code have to be used for event processing and correlation. However, writing many lines of code directly into a rule is cumbersome and may decrease its readability. In such cases it is recommended to separate the code into a custom Perl module which is loaded at SEC startup, and use the code through the module interface (see perlmod(1) for further details):
type=Single
ptype=SubStr
pattern=SEC_STARTUP
context=SEC_INTERNAL_EVENT
desc=Load the SecStuff module
action=eval %ret (require '/usr/local/sec/SecStuff.pm'); \
if %ret ( none ) else ( eval %o exit(1) )
type=Single
ptype=PerlFunc
pattern=sub { return SecStuff::my_match($_[0]); }
desc=event '$0' was matched by my_match()
action=write - %s
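The /usr/local/sec/SecStuff.pm module itself is not part of SEC; the following is only an illustrative sketch of what such a module might contain, with made-up matching logic in my_match():
# illustrative sketch of /usr/local/sec/SecStuff.pm
package SecStuff;
use strict;
use warnings;
# return 1 if the input line is of interest, 0 otherwise
sub my_match {
  my($line) = @_;
  return ($line =~ /error|failure/i) ? 1 : 0;
}
1;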
The following example ruleset correlates syslog messages from network routers:
# Set up contexts NIGHT and WEEKEND for nights
# and weekends. The context NIGHT has a lifetime
# of 8 hours and the context WEEKEND 2 days
type=Calendar
time=0 23 * * *
desc=NIGHT
action=create %s 28800
type=Calendar
time=0 0 * * 6
desc=WEEKEND
action=create %s 172800
# If a router does not come up within 5 minutes
# after it was rebooted, generate event
# "<router> REBOOT FAILURE". The next rule matches
# this event, checks the router with ping and sends
# a notification if there is no response.
type=PairWithWindow
ptype=RegExp
pattern=\s([\w.-]+) \d+: %SYS-5-RELOAD
desc=$1 REBOOT FAILURE
action=event %s
ptype2=RegExp
pattern2=\s$1 \d+: %SYS-5-RESTART
desc2=%1 successful reboot
action2=logonly
window=300
type=SingleWithScript
ptype=RegExp
pattern=^([\w.-]+) REBOOT FAILURE
script=/bin/ping -c 3 -q $1
desc=$1 did not come up after reboot
action=logonly $1 is pingable after reboot
action2=pipe '%t: %s' /bin/mail root@localhost
# Send a notification if CPU load of a router is too
# high (two CPUHOG messages are received within 5
# minutes); send another notification if the load is
# normal again (no CPUHOG messages within last 15
# minutes). Rule is not active at night or weekend.
type=SingleWith2Thresholds
ptype=RegExp
pattern=\s([\w.-]+) \d+: %SYS-3-CPUHOG
context=!(NIGHT || WEEKEND)
desc=$1 CPU overload
action=pipe '%t: %s' /bin/mail root@localhost
window=300
thresh=2
desc2=$1 CPU load normal
action2=pipe '%t: %s' /bin/mail root@localhost
window2=900
thresh2=0
# If a router interface is in down state for less
# than 15 seconds, generate event
# "<router> INTERFACE <interface> SHORT OUTAGE";
# otherwise generate event
# "<router> INTERFACE <interface> DOWN".
type=PairWithWindow
ptype=RegExp
pattern=\s([\w.-]+) \d+: %LINK-3-UPDOWN: Interface ([\w.-]+), changed state to down
desc=$1 INTERFACE $2 DOWN
action=event %s
ptype2=RegExp
pattern2=\s$1 \d+: %LINK-3-UPDOWN: Interface $2, changed state to up
desc2=%1 INTERFACE %2 SHORT OUTAGE
action2=event %s
window=15
# If "<router> INTERFACE <interface> DOWN" event is
# received, send a notification and wait for
# "interface up" event from the same router interface
# for the next 24 hours
type=Pair
ptype=RegExp
pattern=^([\w.-]+) INTERFACE ([\w.-]+) DOWN
desc=$1 interface $2 is down
action=pipe '%t: %s' /bin/mail root@localhost
ptype2=RegExp
pattern2=\s$1 \d+: %LINK-3-UPDOWN: Interface $2, changed state to up
desc2=%1 interface %2 is up
action2=pipe '%t: %s' /bin/mail root@localhost
window=86400
# If ten "short outage" events have been observed
# in the window of 6 hours, send a notification
type=SingleWithThreshold
ptype=RegExp
pattern=^([\w.-]+) INTERFACE ([\w.-]+) SHORT OUTAGE
desc=Interface $2 at node $1 is unstable
action=pipe '%t: %s' /bin/mail root@localhost
window=21600
thresh=10
As a more complete example, suppose that several rule files are used for processing iptables events from /var/log/messages and SSH events from /var/log/secure, and SEC is started with the following command line:
/usr/bin/sec --conf=/etc/sec/*.rules --intcontexts \
--input=/var/log/messages --input=/var/log/secure
#
# the content of /etc/sec/main.rules
#
type=Jump
context=[ _FILE_EVENT_/var/log/messages ]
ptype=PerlFunc
pattern=sub { my(%var); my($line) = $_[0]; \
if ($line !~ /kernel: iptables:/g) { return 0; } \
while ($line =~ /\G\s*([A-Z]+)(?:=(\S*))?/g) { \
$var{$1} = defined($2)?$2:1; \
} return \%var; }
varmap=IPTABLES
desc=parse iptables events and direct to relevant ruleset
cfset=iptables
type=Jump
context=[ _FILE_EVENT_/var/log/secure ]
ptype=RegExp
pattern=sshd\[(?<pid>\d+)\]: (?<status>Accepted|Failed) \
(?<authmethod>[\w-]+) for (?<invuser>invalid user )?\
(?<user>[\w-]+) from (?<srcip>[\d.]+) port (?<srcport>\d+) ssh2$
varmap=SSH_LOGIN
desc=parse SSH login events and direct to relevant ruleset
cfset=ssh-login
type=Jump
context=[ SSH_EVENT ]
ptype=TValue
pattern=TRUE
desc=direct SSH synthetic events to relevant ruleset
cfset=ssh-events
#
# the content of /etc/sec/iptables.rules
#
type=Options
procallin=no
joincfset=iptables
type=SingleWithThreshold
ptype=Cached
pattern=IPTABLES
context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \
exists($_[0]->{"FIN"}) ; } ) \
&& !SUPPRESS_IP_$+{SRC}
desc=SYN+FIN flood from host $+{SRC}
action=pipe '%t: %s' /bin/mail -s 'iptables alert' root@localhost; \
create SUPPRESS_IP_$+{SRC} 3600
window=10
thresh=100
type=SingleWithThreshold
ptype=Cached
pattern=IPTABLES
context=IPTABLES :> ( sub { return exists($_[0]->{"SYN"}) && \
!exists($_[0]->{"ACK"}) ; } ) \
&& !SUPPRESS_IP_$+{SRC}
desc=SYN flood from host $+{SRC}
action=pipe '%t: %s' /bin/mail -s 'iptables alert' root@localhost; \
create SUPPRESS_IP_$+{SRC} 3600
window=10
thresh=100
#
# the content of /etc/sec/ssh-login.rules
#
type=Options
procallin=no
joincfset=ssh-login
type=Single
ptype=Cached
pattern=SSH_LOGIN
context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed" && \
$_[0]->{"srcport"} < 1024 && \
defined($_[0]->{"invuser"}); } )
continue=TakeNext
desc=Probe of invalid user $+{user} from privileged port of $+{srcip}
action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost
type=SingleWithThreshold
ptype=Cached
pattern=SSH_LOGIN
context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed" && \
defined($_[0]->{"invuser"}); } )
desc=Ten login probes for invalid users from $+{srcip} within 60s
action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost
thresh=10
window=60
type=PairWithWindow
ptype=Cached
pattern=SSH_LOGIN
context=SSH_LOGIN :> ( sub { return $_[0]->{"status"} eq "Failed"; } )
desc=User $+{user} failed to log in from $+{srcip} within 60s
action=cevent SSH_EVENT 0 %s
ptype2=Cached
pattern2=SSH_LOGIN
context2=SSH_LOGIN :> \
( sub { return $_[0]->{"status"} eq "Accepted"; } ) && \
$+{user} %+{user} $+{srcip} %+{srcip} -> \
( sub { return $_[0] eq $_[1] && $_[2] eq $_[3]; } )
desc2=User $+{user} logged in successfully from $+{srcip} within 60s
action2=logonly
window=60
#
# the content of /etc/sec/ssh-events.rules
#
type=Options
procallin=no
joincfset=ssh-events
type=SingleWithThreshold
ptype=RegExp
pattern=User ([\w-]+) failed to log in from [\d.]+ within 60s
desc=Ten login failures for user $1 within 1h
action=pipe '%t: %s' /bin/mail -s 'SSH alert' root@localhost
thresh=10
window=3600
For example, the following resource file configures SEC to read events from standard input and to use the rules from /etc/sec/test.conf:
# read events from standard input
--input=-
# rules are stored in /etc/sec/test.conf
--conf
/etc/sec/test.conf
Note that although SEC rereads its resource file at the reception of the SIGHUP or SIGABRT signal, adding an option that specifies a certain startup procedure (e.g., --pid or --detach) will not produce the desired effect at runtime. Also note that the resource file content is *not* parsed by the shell, and therefore shell metacharacters are passed to SEC as-is.