How do I analyze custom Amazon VPC flow logs with CloudWatch Logs Insights?

5 minute read

I use Amazon Virtual Private Cloud (Amazon VPC) Flow Logs to configure custom VPC flow logs. I want to use Amazon CloudWatch Logs Insights to discover patterns and trends in the logs.

Short description

CloudWatch Logs Insights automatically discovers flow logs in the default format, but it doesn't automatically discover flow logs in a custom format.

To use CloudWatch Logs Insights with flow logs in a custom format, you must modify the query.

The following is an example of a custom flow log format:

${account-id} ${vpc-id} ${subnet-id} ${interface-id} ${instance-id} ${srcaddr} ${srcport} ${dstaddr} ${dstport} ${protocol} ${packets} ${bytes} ${action} ${log-status} ${start} ${end} ${flow-direction} ${traffic-path} ${tcp-flags} ${pkt-srcaddr} ${pkt-src-aws-service} ${pkt-dstaddr} ${pkt-dst-aws-service} ${region} ${az-id} ${sublocation-type} ${sublocation-id}
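If you haven't configured custom flow logs in this format yet, the following is a minimal sketch of one way to create them with boto3 (Python). The VPC ID, log group name, and IAM role ARN are placeholders, not values from this article.

# Minimal sketch: create custom-format VPC flow logs that publish to CloudWatch Logs.
# The VPC ID, log group name, and IAM role ARN are placeholders.
import boto3

ec2 = boto3.client("ec2")

custom_format = (
    "${account-id} ${vpc-id} ${subnet-id} ${interface-id} ${instance-id} "
    "${srcaddr} ${srcport} ${dstaddr} ${dstport} ${protocol} ${packets} ${bytes} "
    "${action} ${log-status} ${start} ${end} ${flow-direction} ${traffic-path} "
    "${tcp-flags} ${pkt-srcaddr} ${pkt-src-aws-service} ${pkt-dstaddr} "
    "${pkt-dst-aws-service} ${region} ${az-id} ${sublocation-type} ${sublocation-id}"
)

response = ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],  # placeholder VPC ID
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="custom-vpc-flow-logs",  # placeholder log group name
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder role ARN
    LogFormat=custom_format,
)
print(response["FlowLogIds"])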

The following queries are examples of how to customize and extend queries to match your use case.

Resolution

Retrieve the latest flow logs

To extract data from the log fields, use the parse keyword. For example, the output of the following query is sorted by the flow log event start time and limited to the two most recent log entries.

Query

#Retrieve latest custom VPC Flow Logs
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| sort start desc
| limit 2

Output

account_id | vpc_id | subnet_id | interface_id | instance_id | srcaddr | srcport
123456789012 | vpc-0b69ce8d04278ddd | subnet-002bdfe1767d0ddb0 | eni-0435cbb62960f230e | - | 172.31.0.104 | 55125
123456789012 | vpc-0b69ce8d04278ddd1 | subnet-002bdfe1767d0ddb0 | eni-0435cbb62960f230e | - | 91.240.118.81 | 49422
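You can also run the same query outside the console. The following is a minimal sketch that uses boto3 (Python); the log group name and the one-hour time window are placeholders, not values from this article.

# Minimal sketch: run the Logs Insights query above programmatically with boto3.
# The log group name and the time window are placeholders.
import time
import boto3

logs = boto3.client("logs")

query = """
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
    as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport,
       dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end,
       flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service,
       pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| sort start desc
| limit 2
"""

now = int(time.time())
query_id = logs.start_query(
    logGroupName="custom-vpc-flow-logs",  # placeholder log group name
    startTime=now - 3600,                 # last hour
    endTime=now,
    queryString=query,
)["queryId"]

# Poll until the query finishes, then print each result row as a dictionary.
while True:
    results = logs.get_query_results(queryId=query_id)
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in results["results"]:
    print({field["field"]: field["value"] for field in row})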

Summarize data transfers by source and destination IP address pairs

Use the following query to summarize network traffic by source and destination IP address pair. In the example query, the sum statistic aggregates the bytes field. The sum statistic calculates the cumulative total of data transferred between hosts, so flow_direction is included in the query and the output. The result of the aggregation is temporarily assigned to the Data_Transferred field. The results are then sorted by Data_Transferred in descending order, and the two largest pairs are returned.

Query

parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| stats sum(bytes) as Data_Transferred by srcaddr, dstaddr, flow_direction
| sort by Data_Transferred desc
| limit 2

Output

srcaddr | dstaddr | flow_direction | Data_Transferred
172.31.1.247 | 3.230.172.154 | egress | 346952038
172.31.0.46 | 3.230.172.154 | egress | 343799447

Analyze data transfers by Amazon EC2 instance ID

You can use custom flow logs to analyze data transfers by Amazon Elastic Compute Cloud (Amazon EC2) instance ID. To determine the most active EC2 instances, include the instance_id field in the query.

Query

parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| stats sum(bytes) as Data_Transferred by instance_id
| sort by Data_Transferred desc
| limit 5

Output

instance_id | Data_Transferred
- | 1443477306
i-03205758c9203c979 | 517558754
i-0ae33894105aa500c | 324629414
i-01506ab9e9e90749d | 198063232
i-0724007fef3cb06f3 | 54847643

Filter rejected SSH traffic

To analyze traffic that security groups and network access control lists (network ACLs) reject, filter on the REJECT action. To determine which hosts are rejected on SSH traffic, extend the filter to include the TCP protocol and traffic with destination port 22. In the following example query, TCP protocol 6 is used.

Query

parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter action = "REJECT" and protocol = 6 and dstport = 22
| stats sum(bytes) as SSH_Traffic_Volume by srcaddr
| sort by SSH_Traffic_Volume desc
| limit 2

Output

srcaddr | SSH_Traffic_Volume
23.95.222.129 | 160
179.43.167.74 | 80

Isolate the HTTP data stream for a specific source/destination pair

To analyze trends in the data, use CloudWatch Logs Insights to isolate bidirectional traffic between two IP addresses. In the following query, ["172.31.1.247","172.31.11.212"] returns flow logs that have either IP address as the source or destination IP address. The filter statement matches VPC flow log events with TCP protocol 6 and port 80 to isolate HTTP traffic. To return a subset of all available fields, use the display keyword.

Query

See the following query:

#HTTP Data Stream for Specific Source/Destination Pair
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter srcaddr in ["172.31.1.247","172.31.11.212"] and dstaddr in ["172.31.1.247","172.31.11.212"] and protocol = 6 and (dstport = 80 or srcport=80)
| display interface_id,srcaddr, srcport, dstaddr, dstport, protocol, bytes, action, log_status, start, end, flow_direction, tcp_flags
| sort by start desc
| limit 2

Output

interface_id | srcaddr | srcport | dstaddr | dstport | protocol | bytes | action | log_status
eni-0b74120275654905e | 172.31.11.212 | 80 | 172.31.1.247 | 29376 | 6 | 5160876 | ACCEPT | OK
eni-0b74120275654905e | 172.31.1.247 | 29376 | 172.31.11.212 | 80 | 6 | 97380 | ACCEPT | OK

Visualize the results as a bar chart or pie chart

You can use CloudWatch Logs Insights to visualize results as a bar chart or pie chart. If the results include the bin() function, then the query output returns timestamps. You can then visualize the time series with a line chart or stacked area chart (https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_VisualizationQuery.html).

To calculate the cumulative data transferred at 1-minute intervals, use stats sum(bytes) as Data_Transferred by bin(1m). To see this visualization, toggle between the Logs and Visualization tabs in the CloudWatch Logs Insights console.

Query

parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *" as account_id, vpc_id, subnet_id, interface_id,instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter srcaddr in ["172.31.1.247","172.31.11.212"] and dstaddr in ["172.31.1.247","172.31.11.212"] and protocol = 6 and (dstport = 80 or srcport=80)
| stats sum(bytes) as Data_Transferred by bin(1m)

Output

bin(1m) | Data_Transferred
2022-04-01 15:23:00.000 | 17225787
2022-04-01 15:21:00.000 | 17724499
2022-04-01 15:20:00.000 | 1125500
2022-04-01 15:19:00.000 | 101525
2022-04-01 15:18:00.000 | 81376
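If you retrieve the binned results programmatically (for example, with get_query_results as in the earlier sketch), you can also plot them outside the console. The following is a minimal sketch in Python with matplotlib that uses the sample values from the preceding output.

# Minimal sketch: plot the binned Data_Transferred values outside the console.
# The values are copied from the sample output above.
import matplotlib.pyplot as plt

bins = ["15:18", "15:19", "15:20", "15:21", "15:23"]
data_transferred = [81376, 101525, 1125500, 17724499, 17225787]

plt.bar(bins, data_transferred)
plt.xlabel("bin(1m), 2022-04-01")
plt.ylabel("Data_Transferred (bytes)")
plt.tight_layout()
plt.show()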

Related information

Supported logs and discovered fields

CloudWatch Logs Insights query syntax
