Log files can be very useful when trying to troubleshoot a problem with the system, such as when trying to load a kernel driver or when looking for unauthorized login attempts to the system. This chapter discusses where to find log files, how to view log files, and what to look for in log files. Some log files are controlled by a daemon called rsyslogd. The rsyslogd daemon is an enhanced replacement for sysklogd and provides extended message filtering, encryption protection, various configuration options, input and output modules, and support for transport via the TCP or UDP protocols. Note that rsyslog is compatible with sysklogd.

Log files can also be managed by the journald daemon, a component of systemd. The journald daemon captures Syslog messages, kernel log messages, initial RAM disk and early boot messages, as well as messages written to standard output and standard error output by all services, indexes them, and makes them available to the user. The native journal file format, which is a structured and indexed binary file, improves searching and provides faster operation, and it also stores meta data information such as time stamps or user IDs. Log files produced by journald are by default not persistent; they are stored only in memory or in a small ring buffer in the /run/log/journal/ directory. The amount of logged data depends on free memory; when the capacity limit is reached, the oldest entries are deleted. However, this setting can be altered, see Enabling Persistent Storage. For more information on Journal, see Using the Journal.
By default, only journald is installed on the system. You must install rsyslog yourself, and do not forget to enable and start it after installation before proceeding with the rest of this guide.
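As a hedged illustration only (the package and service are assumed to carry the standard names rsyslog and rsyslog.service; verify against your distribution), installation and activation on Red Hat Enterprise Linux 7 would typically look like this:

# yum install rsyslog
# systemctl enable rsyslog
# systemctl start rsyslog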
The journald daemon is the primary tool for troubleshooting. It also provides the additional data necessary for creating structured log messages. Data acquired by journald is forwarded into the /run/systemd/journal/syslog socket, which may be used by rsyslogd to process the data further. However, rsyslog does the actual integration by default via the imjournal input module, thus avoiding the aforementioned socket. You can also transfer data in the opposite direction, from rsyslogd to journald, with use of the omjournal module. See Interaction of Rsyslog and Journal for further information. The integration enables maintaining text-based logs in a consistent format to ensure compatibility with possible applications or configurations dependent on rsyslogd. Also, you can maintain rsyslog messages in a structured format (see Structured Logging with Rsyslog).
A list of log files maintained by rsyslogd can be found in the /etc/rsyslog.conf configuration file. Most log files are located in the /var/log/ directory. Some applications, such as httpd and samba, have a directory within /var/log/ for their log files. You may notice multiple files in the /var/log/ directory with numbers after them (for example, cron-20100906). These numbers represent a time stamp that has been added to a rotated log file. Log files are rotated so their file sizes do not become too large. The logrotate package contains a cron task that automatically rotates log files according to the /etc/logrotate.conf configuration file and the configuration files in the /etc/logrotate.d/ directory.

The main configuration file for rsyslog is /etc/rsyslog.conf. Here you can specify global directives, modules, and rules that consist of filter and action parts. Also, you can add comments in the form of text following a hash sign (#).
A rule is specified by a filter part, which selects a subset of syslog messages, and an action part, which specifies what to do with the selected messages. To define a rule in the /etc/rsyslog.conf configuration file, define both a filter and an action on one line and separate them with one or more spaces or tabs.

rsyslog offers various ways to filter syslog messages according to selected properties. The available filtering methods can be divided into facility/priority-based, property-based, and expression-based filters.
The most used and well-known way to filter syslog messages is the facility/priority-based filter, which filters syslog messages based on two conditions, facility and priority, separated by a dot. To create a selector, use the following syntax:

FACILITY.PRIORITY

FACILITY specifies the subsystem that produces a specific syslog message. For example, the mail subsystem handles all mail-related syslog messages. FACILITY can be represented by one of the following keywords (or by a numerical code): kern (0), user (1), mail (2), daemon (3), auth (4), syslog (5), lpr (6), news (7), uucp (8), cron (9), authpriv (10), ftp (11), ntp (12), logaudit (13), logalert (14), clock (15), and local0 through local7 (16 - 23).

PRIORITY specifies the priority of a syslog message. PRIORITY can be represented by one of the following keywords (or by a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0).

The syntax above selects syslog messages with the defined or higher priority. By preceding any priority keyword with an equal sign (=), you specify that only syslog messages with that priority will be selected; all other priorities will be ignored. Conversely, preceding a priority keyword with an exclamation mark (!) selects all syslog messages except those with the defined priority.

In addition to the keywords specified above, you can also use an asterisk (*) to define all facilities or priorities (depending on whether you place the asterisk before or after the comma). Specifying the priority keyword none serves for facilities with no given priority. Both the facility and priority conditions are case-insensitive.
To define multiple facilities and priorities, separate them with a comma (,). To define multiple selectors on one line, separate them with a semi-colon (;). Note that each selector in the selector field is capable of overwriting the preceding ones, which can exclude some priorities from the pattern.
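For illustration, the following hedged selector sketch (the /var/log/messages destination is an assumption added here, not taken from the surrounding text) combines several selectors and uses the none keyword to exclude the mail, authpriv, and cron facilities:

*.info;mail.none;authpriv.none;cron.none    /var/log/messages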
The following are a few examples of simple facility/priority-based filters that can be specified in /etc/rsyslog.conf. To select all kernel syslog messages with any priority, add the following text into the configuration file:

kern.*

To select all mail syslog messages with priority crit and higher, use this form:

mail.crit

To select all cron syslog messages except those with the info or debug priority, set the configuration in the following form:

cron.!info,!debug

The EXPRESSION attribute represents the expression to be evaluated, for example: $msg startswith 'DEVNAME' or $syslogfacility-text == 'local0'. You can specify more than one expression in a single filter by using the and and or operators.

The ACTION attribute represents the action to be performed if the expression returns the value true. This can be a single action, or an arbitrarily complex script enclosed in curly braces.
Expression-based filters are indicated by the keyword if at the start of a new line. The then keyword separates the EXPRESSION from the ACTION. Optionally, you can employ the else keyword to specify the action to be performed in case the condition is not met.
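Put together, the general shape of an expression-based filter is the following schematic sketch, assembled from the description above rather than quoted from the original:

if EXPRESSION then ACTION
else ACTION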
With expression-based filters, you can nest the conditions by using a script enclosed in curly braces, as shown in Expression-based Filters. The script allows you to use facility/priority-based filters inside the expression. On the other hand, property-based filters are not recommended here. RainerScript supports regular expressions with the specialized functions re_match() and re_extract().
The following expression contains two nested conditions. The log files created by a program called prog1 are split into two files based on the presence of the "test" string in the message.

if $programname == 'prog1' then {
    action(type="omfile" file="/var/log/prog1.log")
    if $msg contains 'test' then
        action(type="omfile" file="/var/log/prog1test.log")
    else
        action(type="omfile" file="/var/log/prog1notest.log")
}

An output file can also be given as ?DynamicFile, where DynamicFile is the name of a predefined template that modifies the output path. You can use the dash prefix (-) to disable syncing, and you can also specify multiple templates separated by a colon (:). For more information on templates, see Generating Dynamic File Names. If the file you specify is an existing terminal or the /dev/console device, syslog messages are sent to standard output (using special terminal handling) or to your console (using special /dev/console handling), respectively, when the X Window System is in use.
rsyslog allows you to send and receive syslog messages over the network. This feature lets you administer the syslog messages of multiple hosts on one machine. To forward syslog messages to a remote machine, use the following syntax:

@[(zNUMBER)]HOST:[PORT]

The optional zNUMBER setting enables zlib compression for syslog messages. The NUMBER attribute specifies the level of compression (from 1, the lowest, to 9, the maximum). Compression gain is automatically checked by rsyslogd; messages are compressed only if there is any compression gain, and messages below 60 bytes are never compressed.

The HOST attribute specifies the host which receives the selected syslog messages.

The PORT attribute specifies the host machine's port.
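As a hedged illustration (the addresses, host names, and port below are assumptions), in rsyslog's legacy syntax a single at sign conventionally forwards over UDP and a double at sign over TCP, with the optional compression setting in parentheses:

*.* @192.168.0.1
*.* @@example.com:6514
*.* @(z9)remote-host.example.com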
rsyslog can send syslog messages to specified users by listing a user name (as described in Specifying Multiple Actions). To specify more than one user, separate the user names with commas (,). To send messages to every user that is currently logged on, use an asterisk (*).

rsyslog also lets you execute a program for selected syslog messages; it uses the system() call to execute the program in a shell. To specify a program to be executed, prefix it with a caret character (^). Then specify a template that formats the received message and passes it to the executable as a one-line parameter (for more information on templates, see Templates).

The discard action is mostly used to filter out messages before carrying out any further processing. It can be effective if you want to omit some repeating messages that would otherwise fill the log files. The result of the discard action depends on where in the configuration file it is specified; for the best results, place these actions at the top of the action list. Note that once a message has been discarded there is no way to retrieve it in later configuration file lines.

For instance, the following rule discards any cron syslog messages:

cron.* ~
For each selector you are allowed to specify multiple actions. To specify multiple actions for one selector, write each action on a separate line and precede it with an ampersand (&) character:

FILTER ACTION
& ACTION
& ACTION

Specifying multiple actions improves the overall performance of the desired outcome since the specified selector has to be evaluated only once.

In the following example, all kernel syslog messages with the critical priority (crit) are sent to user user1, processed by the template temp and passed on to the test-program executable, and forwarded to 192.168.0.1 via the UDP protocol:

kern.=crit user1
& ^test-program;temp
& @192.168.0.1
Everything between the two quotation marks ("…") is considered template text. Within this text, special characters can be used, such as \n for a new line or \r for a carriage return. Other characters, such as % or ", have to be escaped if you want to use them literally.

The text specified between two percent signs (%) specifies a property that allows you to access specific contents of a syslog message. For more information on properties, see Properties.

The OPTION attribute specifies any options that modify the template functionality. The currently supported template options are sql and stdsql, which are used for formatting the text as an SQL query.

Note that the database writer checks whether the sql or stdsql option is specified in the template. If it is not, the database writer does not perform any action. This is to prevent possible security threats such as SQL injection.

For more information, see the section Storing syslog messages in a database in Actions.
The PROPERTY_NAME attribute specifies the name of a property. A list of all available properties and their detailed description can be found in the rsyslog.conf(5) manual page under the section Available Properties.

The FROM_CHAR and TO_CHAR attributes denote the range of characters that the specified property will act upon. Alternatively, a regular expression can be used to specify the range of characters. To do so, set the letter R as the FROM_CHAR attribute and specify your desired regular expression as the TO_CHAR attribute.

The OPTION attribute specifies any property options, such as the lowercase option used to convert the input to lowercase. A list of all available property options and their detailed description can be found in the rsyslog.conf(5) manual page under the section Property Options.
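As a hedged sketch of the property syntax described above (the chosen property and ranges are illustrative assumptions), a property reference inside a template can limit the output to a character range or apply a property option:

%msg:1:2%
%msg:::drop-last-lf%

The first reference outputs only the first two characters of the message text, the second outputs the message without a trailing line feed.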
A verbose syslog message template shows a template that formats a syslog message so that it outputs the message's severity, facility, the time stamp of when the message was received, the host name, the message tag, the message text, and ends with a new line.
$template verbose, "%syslogseverity%, %syslogfacility%, %timegenerated%, %HOSTNAME%, %syslogtag%, %msg%\n"
A wall message template shows a template that resembles a traditional wall message (a message that is sent to every user that is logged in and has their mesg(1) permission set to yes). This template outputs the message text, along with a host name, message tag, and time stamp, on a new line (using \r and \n) and rings the bell (using \7).
$template wallmsg,"\r\n\7Message from syslogd@%HOSTNAME% at %timegenerated% ...\r\n %syslogtag% %msg%\n\r"
A database formatted message template shows a template that formats a syslog message so that it can be used as a database query. Notice the sql option specified at the end of the template. It tells the database writer to format the message as a MySQL SQL query.
$template dbFormat,"insert into SystemEvents (Message, Facility, FromHost, Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag) values ('%msg%', %syslogfacility%, '%HOSTNAME%', %syslogpriority%, '%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%, '%syslogtag%')", sql
"Debug line with all properties:\nFROMHOST: '%FROMHOST%', fromhost-ip: '%fromhost-ip%', HOSTNAME: '%HOSTNAME%', PRI: %PRI%,\nsyslogtag '%syslogtag%', programname: '%programname%', APP-NAME: '%APP-NAME%', PROCID: '%PROCID%', MSGID: '%MSGID%',\nTIMESTAMP: '%TIMESTAMP%', STRUCTURED-DATA: '%STRUCTURED-DATA%',\nmsg: '%msg%'\nescaped msg: '%msg:::drop-cc%'\nrawmsg: '%rawmsg%'\n\n\"# keep 4 weeks worth of backlogs rotate 4 # uncomment this if you want your log files compressed compress
All of the lines in the sample configuration file define global options that apply to every log file. In our example, log files are rotated weekly, rotated log files are kept for four weeks, and all rotated log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are comments and are not processed.
You may define configuration options for a specific log file and place them under the global options. However, it is advisable to create a separate configuration file for any specific log file in the /etc/logrotate.d/ directory and define any configuration options there.

The following is an example of a configuration file placed in the /etc/logrotate.d/ directory:

/var/log/messages {
    rotate 5
    weekly
    postrotate
        /usr/bin/killall -HUP syslogd
    endscript
}

The configuration options in this file apply only to the /var/log/messages log file. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages log files will be kept for five weeks instead of the four weeks defined in the global options. The following is a list of some of the directives you can specify in your logrotate configuration file:
In rsyslog version 6, a new configuration syntax was introduced. This new configuration format aims to be more powerful, more intuitive, and to prevent common mistakes by not permitting certain invalid constructs. The syntax enhancement is enabled by the new configuration processor that relies on RainerScript. The legacy format is still fully supported and is used by default in the /etc/rsyslog.conf configuration file. RainerScript is a scripting language designed for processing network events and configuring event processors such as rsyslog. RainerScript was first used to define expression-based filters, see Expression-based Filters. The version of RainerScript in rsyslog version 7 implements the input() and ruleset() statements, which permit the /etc/rsyslog.conf configuration file to be written in the new syntax. The new syntax differs mainly in that it is much more structured; parameters are passed as arguments to statements, such as inputs, actions, templates, and module loading. The scope of options is limited by blocks. This enhances readability and reduces the number of bugs caused by misconfiguration. There is also a significant performance gain. Compare the following configuration written with legacy-style parameters:

$InputFileName /tmp/inputfile
$InputFileTag tag1:
$InputFileStateFile inputfile-state
$InputRunFileMonitor

and the same configuration written with the new format statements:

input(type="imfile" file="/tmp/inputfile" tag="tag1:" statefile="inputfile-state")

This significantly reduces the number of parameters used in configuration, improves readability, and also provides higher execution speed. For more information on RainerScript statements and parameters, see Online Documentation.
Leaving special directives aside, rsyslog handles messages as defined by rules, which consist of a filter condition and an action to be performed if the condition is true. With a traditionally written /etc/rsyslog.conf file, all rules are evaluated in order of appearance for every input message. This process starts with the first rule and continues until all rules have been processed or until the message is discarded by one of the rules. However, rules can be grouped into sequences called rulesets. With rulesets, you can limit the effect of certain rules to selected inputs only, or enhance the performance of rsyslog by defining a distinct set of actions bound to a specific input. In other words, filter conditions that would inevitably be evaluated as false for certain types of messages can be skipped.
The legacy ruleset definition in /etc/rsyslog.conf can look as follows:

$RuleSet rulesetname
rule
rule2

The rules end when another rule set is defined, or when the default ruleset is called as follows:

$RuleSet RSYSLOG_DefaultRuleset

With the new configuration format in rsyslog 7, the input() and ruleset() statements are reserved for this operation. The new-format ruleset definition in /etc/rsyslog.conf can look as follows:

ruleset(name="rulesetname") {
      rule
      rule2
      call rulesetname2
}

Replace rulesetname with an identifier for your ruleset. The ruleset name cannot start with RSYSLOG_, since this namespace is reserved for use by rsyslog. RSYSLOG_DefaultRuleset defines the default set of rules to be performed if a message has no other ruleset assigned. With rule and rule2 you define rules in the filter-action format mentioned above. With the call parameter, you can nest rulesets by calling them from inside other ruleset blocks.

After creating a ruleset, you need to specify which inputs it applies to:

input(type="input_type" port="port_num" ruleset="rulesetname");

Here you can identify an input message by input_type, which is the input module that gathered the message, or by port_num, the port number. Other parameters, such as file or tag, can be specified for input(). Replace rulesetname with the name of the ruleset to be evaluated against the message. In case an input message is not explicitly bound to a ruleset, the default ruleset is triggered. You can also use the legacy format to define rulesets; for more information see Online Documentation.
Example 11. Using Rulesets

The following rulesets ensure different handling of remote messages coming from different ports. Add the following into /etc/rsyslog.conf:

ruleset(name="remote-10514") {
    action(type="omfile" file="/var/log/remote-10514")
}

ruleset(name="remote-10515") {
    cron.* action(type="omfile" file="/var/log/remote-10515-cron")
    mail.* action(type="omfile" file="/var/log/remote-10515-mail")
}

input(type="imtcp" port="10514" ruleset="remote-10514");
input(type="imtcp" port="10515" ruleset="remote-10515");

The rulesets shown in the above example define log destinations for the remote input from two ports; in case of port 10515, messages are sorted according to the facility. Then the TCP input is enabled and bound to the rulesets. Note that you must load the required module (imtcp) for this configuration to work.
Compatibility with syslogd

From rsyslog version 6, the compatibility mode specified via the -c option has been removed. Also, the syslogd-style command-line options are deprecated, and configuring rsyslog through these command-line options should be avoided. However, you can use several templates and directives to configure rsyslogd to emulate syslogd-like behavior. For more information on the various rsyslogd options, see the rsyslogd(8) manual page.

The rule processor is a parsing and filtering engine. Here, the rules defined in /etc/rsyslog.conf are applied. Based on these rules, the rule processor evaluates which actions are to be performed. Each action has its own action queue. Messages are passed through this queue to the respective action processor, which creates the final output. Note that at this point several actions can run simultaneously on one message; for this purpose, a message is duplicated and passed to multiple action processors.

Only one queue per action is possible. Depending on the configuration, messages can be sent right to the action processor without action queueing; this is the behavior of direct queues (see below). In case the output action fails, the action processor notifies the action queue, which then takes an unprocessed element back, and after some time interval the action is attempted again.

To sum up, there are two positions where queues stand in rsyslog: either in front of the rule processor as a single main message queue, or in front of various types of output actions as action queues. Queues provide two main advantages that both lead to increased performance of message processing:
To set a queue type, use the following directive:

$objectQueueType queue_type

Here, you can apply the setting either to the main message queue (replace object with MainMsg) or to an action queue (replace object with Action). Replace queue_type with one of direct, linkedlist or fixedarray (which are in-memory queues), or disk. The default setting for the main message queue is the FixedArray queue with a limit of 10,000 messages. Action queues are by default set as direct queues.
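For instance, a hedged one-line sketch (the choice of queue type is illustrative, not a recommendation taken from the original text) that switches the main message queue to a LinkedList in-memory queue:

$MainMsgQueueType LinkedList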
For many simple operations, such as when writing output to a local file, building a queue in front of an action is not needed. To avoid queueing, use:

$objectQueueType Direct

Replace object with MainMsg or with Action to use this option for the main message queue or for an action queue, respectively. With direct queues, messages are passed directly and immediately from the producer to the consumer.

Disk queues store messages strictly on a hard drive, which makes them highly reliable but also the slowest of all possible queueing modes. This mode can be used to prevent the loss of highly important log data; however, disk queues are not recommended in most use cases. To set a disk queue, type the following into /etc/rsyslog.conf:

$objectQueueType Disk

Replace object with MainMsg or with Action to apply this option to the main message queue or to an action queue, respectively. Disk queues are written in parts, with a default size of 10 Mb. This default size can be modified with the following configuration directive:

$objectQueueMaxFileSize size

where size represents the specified size of a disk queue part. The defined size limit is not restrictive; rsyslog always writes one complete queue entry, even if it violates the size limit. Each part of a disk queue corresponds to an individual file. The naming directive for these files looks as follows:

$objectQueueFilename name

This sets a name prefix for the files, followed by a 7-digit number starting at one and incremented for each file.
With in-memory queues, the queued messages are held in memory, which makes the process very fast. The queued data is lost if the computer is restarted or shut down; however, you can use the $ActionQueueSaveOnShutdown setting to save the data before shutdown. There are two types of in-memory queues:

FixedArray queue — the default mode for the main message queue, with a limit of 10,000 elements. This type of queue uses a fixed, pre-allocated array that holds pointers to queue elements. Due to these pointers, a certain amount of memory is consumed even if the queue is empty. However, FixedArray offers the best run-time performance and is the optimal choice when you expect a relatively small number of queued messages and high performance.

LinkedList queue — here, all structures are dynamically allocated in a linked list, so memory is allocated only when needed. LinkedList queues handle occasional message bursts very well.

Disk-Assisted In-memory Queues

Both disk and in-memory queues have their advantages, and rsyslog lets you combine them in disk-assisted in-memory queues. To do so, configure a normal in-memory queue and then add the $objectQueueFileName directive to define a file name for disk assistance. The queue then becomes disk-assisted, which means it couples an in-memory queue with a disk queue so that they work in tandem. The disk queue is activated if the in-memory queue is full or needs to persist after shutdown. With a disk-assisted queue, you can set both disk-specific and in-memory-specific configuration parameters. This type of queue is probably the most commonly used and is especially useful for potentially long-running and unreliable actions.
To specify the functioning of a disk-assisted in-memory queue, use the so-called watermarks:

$objectQueueHighWatermark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an action queue, respectively. Replace number with a number of enqueued messages. When an in-memory queue reaches the number defined by the high watermark, it starts writing messages to disk and continues until the in-memory queue size drops to the number defined by the low watermark. Correctly set watermarks minimize unnecessary disk writes, but also leave memory space for message bursts, since writing to disk files is rather lengthy. Therefore, the high watermark must be lower than the whole queue capacity set with $objectQueueSize. The difference between the high watermark and the overall queue size is a spare memory buffer reserved for message bursts. On the other hand, setting the high watermark too low turns on disk assistance unnecessarily often.

Example 12. Reliably Forwarding Log Messages to a Server

Rsyslog is often used to maintain a centralized logging system, where log messages are forwarded to a server over the network. To avoid message loss when the server is not available, it is advisable to configure an action queue for the forwarding action. This way, messages that failed to be sent are stored locally until the server is reachable again. Note that such queues are not configurable for connections using the UDP protocol. To establish a fully reliable connection, for example when your logging server is outside of your private network, consider using the RELP protocol described in Using RELP.
Forwarding to a Single Server

Suppose the task is to forward log messages from the system to a server with the host name example.com, and to configure an action queue to buffer the messages in case of a server outage. To do so, perform the following steps:
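The original steps are not reproduced here. As a hedged sketch assembled from directives discussed in this section (the queue file name example_fwd and port 6514 are assumptions, and $ActionResumeRetryCount is an additional standard rsyslog directive that sets the number of retries, -1 meaning unlimited), a buffered forwarding action in legacy syntax might look like this:

$ActionQueueType LinkedList
$ActionQueueFileName example_fwd
$ActionResumeRetryCount -1
$ActionQueueSaveOnShutdown on
*.* @@example.com:6514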
All types of queues can be further configured to match your requirements. You can use several directives to modify both action queues and the main message queue. Currently, more than 20 queue parameters are available, see Online Documentation. Some of these settings are used commonly, while others, such as worker thread management, provide closer control over the queue behavior and are reserved for advanced users. With advanced settings, you can optimize rsyslog's performance, schedule queueing, or modify the behavior of a queue on system shutdown.

Limiting Queue Size

You can limit the number of messages that a queue can contain with the following setting:

$objectQueueHighWatermark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an action queue, respectively. Replace number with a number of enqueued messages. You can set the queue size only as the number of messages, not as their actual memory size. The default queue size is 10,000 messages for the main message queue and ruleset queues, and 1000 for action queues.

Disk-assisted queues are unlimited by default and cannot be restricted with this directive, but you can reserve physical disk space for them with the following settings:
$objectQueueMaxDiscSpace number

Replace object with MainMsg or with Action. When the size limit specified by number is hit, messages are discarded until the dequeued messages free up sufficient space.

When a queue reaches a certain number of messages, you can discard less important messages in order to save space in the queue for entries of higher priority. The threshold that launches the discarding process can be set with the so-called discard mark:

$objectQueueDiscardMark number

Replace object with MainMsg or with Action to use this option for the main message queue or for an action queue, respectively. Here, number stands for the number of messages that have to be in the queue before the discarding process starts. To define which messages to discard, use:

$objectQueueDiscardSeverity priority

Replace priority with one of the following keywords (or with a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0). With this setting, both newly incoming and already queued messages with a priority lower than the defined one are erased from the queue immediately after the discard mark is reached.
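A hedged example (the threshold and severity are arbitrary illustrations): to start dropping messages of info priority and lower once 8,000 messages have accumulated in the main message queue, one might set:

$MainMsgQueueDiscardMark 8000
$MainMsgQueueDiscardSeverity info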
Using Timeframes

You can configure rsyslog to process queues during a specific time period. With this option you can, for example, transfer some processing into off-peak hours. To define a time frame, use the following syntax:

$objectQueueDequeueTimeBegin hour

The following legacy-format configuration uses per-host templates and a rule set to receive remote messages over TCP and sort them into per-host log files:

### Per-Host Templates for Remote Systems ###
$template TmplAuthpriv, "/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"
$template TmplMsg, "/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log"

# Provides TCP syslog reception
$ModLoad imtcp
# Adding this ruleset to process remote messages
$RuleSet remote1
authpriv.*   ?TmplAuthpriv
*.info;mail.none;authpriv.none;cron.none   ?TmplMsg
$RuleSet RSYSLOG_DefaultRuleset   #End the rule set by switching back to the default rule set

$InputTCPServerBindRuleset remote1   #Define a new input and bind it to the "remote1" rule set
$InputTCPServerRun 514
Save the changes to the /etc/rsyslog.conf file.

Written in the new configuration format, the same templates look as follows:

template(name="TmplAuthpriv" type="string"
         string="/var/log/remote/auth/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log")
template(name="TmplMsg" type="string"
         string="/var/log/remote/msg/%HOSTNAME%/%PROGRAMNAME:::secpath-replace%.log")

These templates can also be written in the list format as follows:

template(name="TmplAuthpriv" type="list") {
    constant(value="/var/log/remote/auth/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }

template(name="TmplMsg" type="list") {
    constant(value="/var/log/remote/msg/")
    property(name="hostname")
    constant(value="/")
    property(name="programname" SecurePath="replace")
    constant(value=".log")
    }

This template text format might be easier to read for those new to rsyslog and can therefore be easier to adapt to changing requirements.

To complete the change to the new syntax, we need to reproduce the module load command, add the rule set, and then bind the rule set to the protocol and port:
module(load="imtcp")
ruleset(name="remote1"){
     authpriv.*   action(type="omfile" DynaFile="TmplAuthpriv")
     *.info;mail.none;authpriv.none;cron.none action(type="omfile" DynaFile="TmplMsg")
}
input(type="imtcp" port="514" ruleset="remote1")

Due to its modular design, rsyslog offers a variety of modules which provide additional functionality. Note that modules can be written by third parties. Most modules provide additional inputs (see Input Modules below) or outputs (see Output Modules below). Other modules provide special functionality specific to each module. The modules may provide additional configuration directives that become available after a module is loaded. To load a module, use the following syntax:
$ModLoad MODULE

where $ModLoad is the global directive that loads the specified module and MODULE represents your desired module. For example, if you want to load the Text File Input Module (imfile) that enables rsyslog to convert any standard text files into syslog messages, specify the following line in the /etc/rsyslog.conf configuration file:

$ModLoad imfile

rsyslog offers a number of modules which are split into the following main categories:
Input Modules — Input modules gather messages from various sources. The name of an input module always starts with the im prefix, such as imfile and imjournal.

Output Modules — Output modules provide a facility to issue message to various targets such as sending across a network, storing in a database, or encrypting. The name of an output module always starts with the om prefix, such as omsnmp, omrelp, and so on.

Parser Modules — These modules are useful in creating custom parsing rules or to parse malformed messages. With moderate knowledge of the C programming language, you can create your own message parser. The name of a parser module always starts with the pm prefix, such as pmrfc5424, pmrfc3164, and so on.

Message Modification Modules — Message modification modules change content of syslog messages. Names of these modules start with the mm prefix. Message Modification Modules such as mmanon, mmnormalize, or mmjsonparse are used for anonymization or normalization of messages.

String Generator Modules — String generator modules generate strings based on the message content and strongly cooperate with the template feature provided by rsyslog. For more information on templates, see Templates. The name of a string generator module always starts with the sm prefix, such as smfile or smtradfile.

Library Modules — Library modules provide functionality for other loadable modules. These modules are loaded automatically by rsyslog when needed and cannot be configured by the user.
It is sufficient to load imfile once, even when importing multiple files. The $InputFilePollInterval global directive specifies how often rsyslog checks for changes in the connected text files. The default interval is 10 seconds; to change it, replace int with a time interval specified in seconds.
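For instance, a hedged one-liner (the 5-second value is an arbitrary illustration) that shortens the polling interval:

$InputFilePollInterval 5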
To identify the text files to import, use the following syntax in /etc/rsyslog.conf:

# File 1
$InputFileName path_to_file
$InputFileTag tag:
$InputFileStateFile state_file_name
$InputFileSeverity severity
$InputFileFacility facility
$InputRunFileMonitor

# File 2
$InputFileName path_to_file2

Four settings are required to specify an input text file:
replace state_file_name with a unique name for the state file. State files, which are stored in the rsyslog working directory, keep cursors for the monitored files, marking what partition has already been processed. If you delete them, whole files will be read in again. Make sure that you specify a name that does not already exist.
add the $InputRunFileMonitor directive that enables the file monitoring. Without this setting, the text file will be ignored.
Apart from the required directives, there are several other settings that can be applied on the text input. Set the severity of imported messages by replacing severity with an appropriate keyword. Replace facility with a keyword to define the subsystem that produced the message. The keywords for severity and facility are the same as those used in facility/priority-based filters, see Filters.
Example 13. Importing Text Files

The Apache HTTP server creates log files in text format. To apply the processing capabilities of rsyslog to apache error messages, first use the imfile module to import the messages. Add the following into /etc/rsyslog.conf:

$ModLoad imfile

$InputFileName /var/log/httpd/error_log
$InputFileTag apache-error:
$InputFileStateFile state-apache-error
$InputRunFileMonitor

Exporting Messages to a Database
Processing of log data can be faster and more convenient when performed in a database rather than with text files. Based on the type of DBMS used, choose from various output modules such as ommysql, ompgsql, omoracle, or ommongodb. As an alternative, use the generic omlibdbi output module that relies on the libdbi library. The omlibdbi module supports database systems Firebird/Interbase, MS SQL, Sybase, SQLite, Ingres, Oracle, mSQL, MySQL, and PostgreSQL.

Example 14. Exporting Rsyslog Messages to a Database

To store the rsyslog messages in a MySQL database, add the following into /etc/rsyslog.conf:

$ModLoad ommysql

$ActionOmmysqlServerPort 1234
*.* :ommysql:database-server,database-name,database-userid,database-password

First, the output module is loaded, then the communication port is specified. Additional information, such as name of the server and the database, and authentication data, is specified on the last line of the above example.
Enabling Encrypted Transport
Confidentiality and integrity in network transmissions can be provided by either the TLS or GSSAPI encryption protocol.
Transport Layer Security (TLS) is a cryptographic protocol designed to provide communication security over the network. When using TLS, rsyslog messages are encrypted before sending, and mutual authentication exists between the sender and receiver.
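The configuration steps are not included in this section. As a loosely hedged sketch only (certificate paths and the remote host are placeholders, and your setup may require a different driver or authentication mode), a legacy-format TLS sender typically selects the gtls network stream driver and points it at the certificate files:

$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/pki/rsyslog/ca.pem
$DefaultNetstreamDriverCertFile /etc/pki/rsyslog/cert.pem
$DefaultNetstreamDriverKeyFile /etc/pki/rsyslog/key.pem
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
*.* @@remote-host.example.com:6514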
Generic Security Service API (GSSAPI) is an application programming interface for programs to access security services. To use it in connection with rsyslog you must have a functioning Kerberos environment.
Using RELP
Reliable Event Logging Protocol (RELP) is a networking protocol for data logging in computer networks. It is designed to provide reliable delivery of event messages, which makes it useful in environments where message loss is not acceptable.
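The original example is likewise not reproduced here. As a hedged sketch (host and port are assumptions), sending messages over RELP uses the omrelp output module:

$ModLoad omrelp
*.* :omrelp:192.168.0.1:2514

and, on the receiving host, the imrelp input module:

$ModLoad imrelp
$InputRELPServerRun 2514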
As mentioned above, Rsyslog and Journal, the two logging applications present on your system, have several distinctive features that make them suitable for specific use cases. In many situations it is useful to combine their capabilities, for example to create structured messages and store them in a file database (see Structured Logging with Rsyslog). A communication interface needed for this cooperation is provided by input and output modules on the side of Rsyslog and by the Journal's communication socket.
By default, rsyslogd uses the imjournal module as the default input mode for journal files. With this module, you import not only the messages but also the structured data provided by journald. Also, older data can be imported from journald (unless forbidden with the $ImjournalIgnorePreviousMessages directive). See Importing Data from Journal for basic configuration of imjournal.

As an alternative, configure rsyslogd to read from the socket provided by journal as an output for syslog-based applications. The path to the socket is /run/systemd/journal/syslog. Use this option when you want to maintain plain rsyslog messages. Compared to imjournal, the socket input currently offers more features, such as ruleset binding or filtering. To import Journal data through the socket, use the following configuration in /etc/rsyslog.conf:

$ModLoad imuxsock
$OmitLocalLogging off

The above syntax loads the imuxsock module and turns off the $OmitLocalLogging directive, which enables the import through the system socket. The path to this socket is specified separately in /etc/rsyslog.d/listen.conf as follows:

$SystemLogSocketName /run/systemd/journal/syslog

You can also output messages from Rsyslog to Journal with the omjournal module. Configure the output in /etc/rsyslog.conf as follows:

$ModLoad omjournal

*.* :omjournal:

For instance, the following configuration forwards all received messages on TCP port 10514 to the Journal:

$ModLoad imtcp
$ModLoad omjournal

$RuleSet remote
*.* :omjournal:

$InputTCPServerBindRuleset remote
$InputTCPServerRun 10514

On systems that produce large amounts of log data, it can be convenient to maintain log messages in a structured format. With structured messages, it is easier to search for particular information, to produce statistics and to cope with changes and inconsistencies in message structure. Rsyslog uses the JSON (JavaScript Object Notation) format to provide structure for log messages.
Compare the following unstructured log message:

Oct 25 10:20:37 localhost anacron[1395]: Jobs will be executed sequentially

with a structured one:

{"timestamp":"2013-10-25T10:20:37", "host":"localhost", "program":"anacron", "pid":"1395", "msg":"Jobs will be executed sequentially"}

Searching structured data with use of key-value pairs is faster and more precise than searching text files with regular expressions. The structure also lets you search for the same entry in messages produced by various applications. Also, JSON files can be stored in a document database such as MongoDB, which provides additional performance and analysis capabilities. On the other hand, a structured message requires more disk space than the unstructured one.
In rsyslog, log messages with meta data are pulled from Journal with use of the imjournal module. With the mmjsonparse module, you can parse data imported from Journal and from other sources and process them further, for example as a database output. For parsing to be successful, mmjsonparse requires input messages to be structured in a way that is defined by the Lumberjack project.

The Lumberjack project aims to add structured logging to rsyslog in a backward-compatible way. To identify a structured message, Lumberjack specifies the @cee: string that prepends the actual JSON structure. Also, Lumberjack defines the list of standard field names that should be used for entities in the JSON string. For more information on Lumberjack, see Online Documentation.

The following is an example of a lumberjack-formatted message:

@cee: {"pid":17055, "uid":1000, "gid":1000, "appname":"logger", "msg":"Message text."}

To build this structure inside Rsyslog, a template is used, see Filtering Structured Messages. Applications and servers can employ the libumberlog library to generate messages in the lumberjack-compliant form. For more information on libumberlog, see Online Documentation.

Importing Data from Journal
The imjournal module is Rsyslog's input module to natively read the journal files (see Interaction of Rsyslog and Journal). Journal messages are then logged in text format as other rsyslog messages. However, with further processing, it is possible to translate meta data provided by Journal into a structured message.
To import data from Journal to Rsyslog, use the following configuration in /etc/rsyslog.conf:

$ModLoad imjournal

$imjournalPersistStateInterval number_of_messages
$imjournalStateFile path
$imjournalRatelimitInterval seconds
$imjournalRatelimitBurst burst_number
$ImjournalIgnorePreviousMessages off/on

With number_of_messages, you can specify how often the journal data must be saved. This will happen each time the specified number of messages is reached.
Replace path with a path to the state file. This file tracks the journal entry that was the last one processed.
With seconds, you set the length of the rate limit interval. The number of messages processed during this interval can not exceed the value specified in burst_number. The default setting is 20,000 messages per 600 seconds. Rsyslog discards messages that come after the maximum burst within the time frame specified.
With $ImjournalIgnorePreviousMessages you can ignore messages that are currently in Journal and import only new messages, which is used when there is no state file specified. The default setting is off. Please note that if this setting is off and there is no state file, all messages in the Journal are processed, even if they were already processed in a previous rsyslog session.

You can translate all data and meta data stored by Journal into structured messages. Some of these meta data entries are listed in Verbose journalctl Output, for a complete list of journal fields see the systemd.journal-fields(7) manual page. For example, it is possible to focus on kernel journal fields, that are used by messages originating in the kernel.

Filtering Structured Messages
To create a lumberjack-formatted message that is required by rsyslog's parsing module, use the following template:
template(name="CEETemplate" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag% @cee: %$!all-json%\n")

This template prepends the @cee: string to the JSON string and can be applied, for example, when creating an output file with the omfile module. To access JSON field names, use the $! prefix. For example, the following filter condition searches for messages with specific hostname and UID:

($!hostname == "hostname" && $!UID == "UID")

These messages can come from Journal or from other input sources, and must be formatted in a way defined by the Lumberjack project. These messages are identified by the presence of the @cee: string. Then, mmjsonparse checks if the JSON structure is valid and then the message is parsed.

To parse lumberjack-formatted JSON messages with mmjsonparse, use the following configuration in /etc/rsyslog.conf:

$ModLoad mmjsonparse

*.* :mmjsonparse:

In this example, the mmjsonparse module is loaded on the first line, then all messages are forwarded to it. Currently, there are no configuration parameters available for mmjsonparse.

Storing Messages in the MongoDB
Rsyslog supports storing JSON logs in the MongoDB document database through the ommongodb output module.
To forward log messages into MongoDB, use the following syntax in /etc/rsyslog.conf (configuration parameters for ommongodb are available only in the new configuration format; see Using the New Configuration Format):

$ModLoad ommongodb

*.* action(type="ommongodb" server="DB_server" serverport="port" db="DB_name" collection="collection_name" uid="UID" pwd="password")

Replace DB_server with the name or address of the MongoDB server. Specify port to select a non-standard port from the MongoDB server. The default port value is 0 and usually there is no need to change this parameter.

With DB_name, you identify to which database on the MongoDB server you want to direct the output. Replace collection_name with the name of a collection in this database. In MongoDB, a collection is a group of documents, the equivalent of an RDBMS table.
You can set your login details by replacing UID and password.
To run rsyslogd in debugging mode, use the following command:

rsyslogd -dn

With this command, rsyslogd produces debugging information and prints it to the standard output. The -n stands for "no fork". You can modify debugging with environment variables, for example, you can store the debug output in a log file. Before starting rsyslogd, type the following on the command line:

export RSYSLOG_DEBUGLOG="path"
export RSYSLOG_DEBUG="Debug"

Replace path with a desired location for the file where the debugging information will be logged. For a complete list of options available for the RSYSLOG_DEBUG variable, see the related section in the rsyslogd(8) manual page.

To check if the syntax used in the /etc/rsyslog.conf file is valid, use:

rsyslogd -N 1

Where 1 represents the level of verbosity of the output message. This is a forward compatibility option because currently only one level is provided. However, you must add this argument to run the validation.

Ensure the time is correctly set on the systems generating the log messages as well as on any logging servers. See Configuring the Date and Time for information on checking and setting the time. See Configuring NTP Using ntpd and Configuring NTP Using the chrony Suite for information on using NTP to keep the system clock accurately set.

On a logging server, check that the firewall has the appropriate ports open to allow ingress of either UDP or TCP, depending on what traffic and port the sending systems are configured to use.

The Journal is a component of systemd that is responsible for viewing and management of log files. It can be used in parallel, or in place of, a traditional syslog daemon such as rsyslogd. The Journal was developed to address problems connected with traditional logging. It is closely integrated with the rest of the system, supports various logging technologies and access management for the log files.

Logging data is collected, stored, and processed by the Journal's journald service. It creates and maintains binary files called journals based on logging information that is received from the kernel, from user processes, from standard output, and standard error output of system services or via its native API. These journals are structured and indexed, which provides relatively fast seek times. Journal entries can carry a unique identifier. The journald service collects numerous meta data fields for each log message. The actual journal files are secured, and therefore cannot be manually edited.

Viewing Log Files
To access the journal logs, use the journalctl tool. For a basic view of the logs, type as root:

journalctl

An output of this command is a list of all log files generated on the system including messages generated by system components and by users. The structure of this output is similar to the one used in /var/log/messages but with certain improvements:

the priority of entries is marked visually. Lines of error priority and higher are highlighted with red color and a bold font is used for lines with notice and warning priority
the time stamps are converted for the local time zone of your system
all logged data is shown, including rotated logs
the beginning of a boot is tagged with a special line
The following is an example output provided by the journalctl tool. When called without parameters, the listed entries begin with a time stamp, then the host name and application that performed the operation is mentioned followed by the actual message. This example shows the first three entries in the journal log:
# journalctl
-- Logs begin at Thu 2013-08-01 15:42:12 CEST, end at Thu 2013-08-01 15:48:48 CEST. --
Aug 01 15:42:12 localhost systemd-journal[54]: Allowing runtime journal files to grow to 49.7M.
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpuset
Aug 01 15:42:12 localhost kernel: Initializing cgroup subsys cpu
[...]

To list only a certain number of the most recent log entries, use journalctl with the -n option:

journalctl -n Number

Replace Number with the number of lines to be shown. When no number is specified, journalctl displays the ten most recent entries.
The journalctl command allows controlling the form of the output with the following syntax:
journalctl -o form

Replace form with a keyword specifying a desired form of output. There are several options, such as verbose, which returns full-structured entry items with all fields, export, which creates a binary stream suitable for backups and network transfer, and json, which formats entries as JSON data structures. For the full list of keywords, see the journalctl(1) manual page.

Example 16. Verbose journalctl Output

To view full meta data about all entries, type:
# journalctl -o verbose
[...]
Fri 2013-08-02 14:41:22 CEST [s=e1021ca1b81e4fc688fad6a3ea21d35b;i=55c;b=78c81449c920439da57da7bd5c56a770;m=27cc
        _BOOT_ID=78c81449c920439da57da7bd5c56a770
        PRIORITY=5
        SYSLOG_FACILITY=3
        _TRANSPORT=syslog
        _MACHINE_ID=69d27b356a94476da859461d3a3bc6fd
        _HOSTNAME=localhost.localdomain
        _PID=562
        _COMM=dbus-daemon
        _EXE=/usr/bin/dbus-daemon
        _CMDLINE=/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
        _SYSTEMD_CGROUP=/system/dbus.service
        _SYSTEMD_UNIT=dbus.service
        SYSLOG_IDENTIFIER=dbus
        SYSLOG_PID=562
        _UID=81
        _GID=81
        _SELINUX_CONTEXT=system_u:system_r:system_dbusd_t:s0-s0:c0.c1023
        MESSAGE=[system] Successfully activated service 'net.reactivated.Fprint'
        _SOURCE_REALTIME_TIMESTAMP=1375447282839181
[...]

This example lists fields that identify a single log entry. These meta data can be used for message filtering as shown in Advanced Filtering. For a complete description of all possible fields see the systemd.journal-fields(7) manual page.

Filtering Messages
The output of the journalctl command executed without parameters is often extensive, therefore you can use various filtering methods to extract information to meet your needs.
Filtering by Priority

Log messages are often used to track erroneous behavior on the system. To view only entries with a selected or higher priority, use the following syntax:

journalctl -p priority

Here, replace priority with one of the following keywords (or with a number): debug (7), info (6), notice (5), warning (4), err (3), crit (2), alert (1), and emerg (0).
Example 17. Filtering by Priority

To view only entries with error or higher priority, use:

journalctl -p err
With --since and --until, you can view only log messages created within a specified time range. You can pass values to these options in form of date or time or both as shown in the following example.

Example 18. Filtering by Time and Priority

Filtering options can be combined to reduce the set of results according to specific requests. For example, to view the warning or higher priority messages from a certain point in time, type:

journalctl -p warning --since="2013-3-16 23:59:59"
Advanced Filtering

Verbose journalctl Output lists a set of fields that specify a log entry and can all be used for filtering. For a complete description of meta data that systemd can store, see the systemd.journal-fields(7) manual page. This meta data is collected for each log message, without user intervention. Values are usually text-based, but can take binary and large values; fields can have multiple values assigned though it is not very common.

To view a list of unique values that occur in a specified field, use the following syntax:

journalctl -F fieldname

Replace fieldname with a name of a field you are interested in.
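For example, the following command (the choice of field is only an illustration) lists every systemd unit that has entries in the journal:

journalctl -F _SYSTEMD_UNIT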
To show only log entries that fit a specific condition, use the following syntax:
journalctl fieldname=value
Replace fieldname with a name of a field and value with a specific value contained in that field. As a result, only lines that match this condition are returned.
To see a list of the field names you can use, type journalctl followed by a space and press the Tab key two times. This shows a list of available field names. Tab completion based on context works on field names, so you can type a distinctive set of letters from a field name and then press Tab to complete the name automatically. Similarly, you can list unique values from a field. Type:
journalctl fieldname=

and press Tab two times. This serves as an alternative to journalctl -F fieldname.

Specifying two matches for the same field results in a logical OR combination of the matches. Entries matching value1 or value2 are displayed.

Also, you can specify multiple field-value pairs to further reduce the output set:
journalctl fieldname1=value fieldname2=value ...
If two matches for different field names are specified, they will be combined with a logical AND. Entries have to match both conditions to be shown.

With use of the + symbol, you can set a logical OR combination of matches for multiple fields:

journalctl fieldname1=value + fieldname2=value ...

This command returns entries that match at least one of the conditions, not only those that match both of them.
Example 19. Advanced filtering

To display entries created by avahi-daemon.service or crond.service under user with UID 70, use the following command:

journalctl _UID=70 _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=crond.service

Since there are two values set for the _SYSTEMD_UNIT field, both results will be displayed, but only when matching the _UID=70 condition. This can be expressed simply as: (UID=70 and (avahi or cron)).

Enabling Persistent Storage
By default, Journal stores log files only in memory or a small ring-buffer in the /run/log/journal/ directory. This is sufficient to show recent log history with journalctl. This directory is volatile, log data is not saved permanently. With the default configuration, syslog reads the journal logs and stores them in the /var/log/ directory. With persistent logging enabled, journal files are stored in /var/log/journal, which means they persist after reboot. Journal can then replace rsyslog for some users (but see the chapter introduction).

Enabling persistent storage has the advantage that a longer log history, surviving reboots, is available for troubleshooting. It also has certain disadvantages:

Even with persistent storage the amount of data stored depends on free memory, there is no guarantee to cover a specific time span

More disk space is needed for logs
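The configuration steps themselves are not part of this section. As a hedged sketch only, persistent storage is commonly enabled either by creating the /var/log/journal directory or by setting the Storage option in /etc/systemd/journald.conf and restarting the journald service; verify the exact procedure against your system's documentation:

# mkdir -p /var/log/journal
# systemctl restart systemd-journald

or, in /etc/systemd/journald.conf:

[Journal]
Storage=persistent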
As an alternative to the aforementioned command-line utilities, Red Hat Enterprise Linux 7 provides an accessible GUI for managing log messages.
Viewing Log Files
Most log files are stored in plain text format. You can view them with any text editor such as Vi or Emacs. Some log files are readable by all users on the system; however, root privileges are required to read most log files. To view system log files in an interactive, real-time application, use the System Log.
Regular Expression — Specifies the regular expression that will be applied to the log file and will attempt to match any possible strings of text in it.

Effect

Highlight — If checked, the found results will be highlighted with the selected color. You may select whether to highlight the background or the foreground of the text.

Hide — If checked, the found results will be hidden from the log file you are viewing.

Monitoring Log Files
System Log monitors all opened logs by default. If a new line is added to a monitored log file, the log name appears in bold in the log list. If the log file is selected or displayed, the new lines appear in bold at the bottom of the log file. System Log - new log alert illustrates a new alert in the cron log file and in the messages log file. Clicking on the messages log file displays the logs in the file with the new lines in bold.

For more information on how to configure the rsyslog daemon and how to locate, view, and monitor log files, see the resources listed below.

Installed Documentation
rsyslogd(8) — The manual page for the rsyslogd daemon documents its usage.

rsyslog.conf(5) — The manual page named rsyslog.conf documents available configuration options.

logrotate(8) — The manual page for the logrotate utility explains in greater detail how to configure and use it.

journalctl(1) — The manual page for the journalctl utility documents its usage.

journald.conf(5) — This manual page documents available configuration options.

systemd.journal-fields(7) — This manual page lists special Journal fields.

rsyslog Home Page — The rsyslog home page offers a thorough technical breakdown of its features, documentation, configuration examples, and video tutorials.
RainerScript documentation on the rsyslog Home Page — Commented summary of data types, expressions, and functions available in RainerScript.
Description of queues on the rsyslog Home Page — General information on various types of message queues and their usage.