How do I analyze custom Amazon VPC flow logs with CloudWatch Logs Insights?
I used Amazon Virtual Private Cloud (Amazon VPC) flow logs to configure custom VPC flow logs. I want to use Amazon CloudWatch Logs Insights to discover patterns and trends in the logs.
Short description
CloudWatch Logs Insights automatically discovers flow logs in the default format, but it doesn't automatically discover flow logs in a custom format.
To use CloudWatch Logs Insights with flow logs in a custom format, you must modify the queries.
The following is an example of a custom flow log format:

```
${account-id} ${vpc-id} ${subnet-id} ${interface-id} ${instance-id} ${srcaddr} ${srcport} ${dstaddr} ${dstport} ${protocol} ${packets} ${bytes} ${action} ${log-status} ${start} ${end} ${flow-direction} ${traffic-path} ${tcp-flags} ${pkt-srcaddr} ${pkt-src-aws-service} ${pkt-dstaddr} ${pkt-dst-aws-service} ${region} ${az-id} ${sublocation-type} ${sublocation-id}
```
The following queries are examples of how to customize and extend queries to match your use case.
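Each query below begins with the same parse clause, and its wildcard pattern must contain exactly one `*` per field in the custom format. As a minimal sketch (not part of the original article; the function name is illustrative), the clause can be generated from the format string in Python to avoid a miscount:

```python
# Generate a CloudWatch Logs Insights parse clause from a custom
# VPC flow log format string such as "${account-id} ${vpc-id} ...".
import re

def build_parse_clause(log_format: str) -> str:
    # Pull field names like "account-id" out of each "${...}" token.
    fields = re.findall(r"\$\{([a-z0-9-]+)\}", log_format)
    # Logs Insights field aliases use underscores instead of hyphens.
    aliases = [field.replace("-", "_") for field in fields]
    # One "*" wildcard per field, separated by single spaces.
    pattern = " ".join(["*"] * len(fields))
    return f'parse @message "{pattern}" as {", ".join(aliases)}'

# Example with a three-field format:
print(build_parse_clause("${account-id} ${vpc-id} ${srcaddr}"))
# parse @message "* * *" as account_id, vpc_id, srcaddr
```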
Resolution
Retrieve the latest flow logs
To extract data from the log fields, use the parse keyword. For example, the following query's output is sorted by the flow log event start time and limited to the two most recent log entries.
Query

```
#Retrieve latest custom VPC Flow Logs
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| sort start desc
| limit 2
```
Output

| account_id | vpc_id | subnet_id | interface_id | instance_id | srcaddr | srcport |
| --- | --- | --- | --- | --- | --- | --- |
| 123456789012 | vpc-0b69ce8d04278ddd | subnet-002bdfe1767d0ddb0 | eni-0435cbb62960f230e | - | 172.31.0.104 | 55125 |
| 123456789012 | vpc-0b69ce8d04278ddd1 | subnet-002bdfe1767d0ddb0 | eni-0435cbb62960f230e | - | 91.240.118.81 | 49422 |
Summarize data transfers by source/destination IP address pair
Use the following query to summarize network traffic by source and destination IP address pair. In the example query, the sum statistic aggregates the bytes field. The sum statistic calculates the cumulative total of data transferred between hosts, so flow_direction is included in the query and output. The result of the aggregation is temporarily assigned to the Data_Transferred field. Then, the results are sorted by Data_Transferred in descending order, and the two largest pairs are returned.
Query

```
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| stats sum(bytes) as Data_Transferred by srcaddr, dstaddr, flow_direction
| sort by Data_Transferred desc
| limit 2
```
Output

| srcaddr | dstaddr | flow_direction | Data_Transferred |
| --- | --- | --- | --- |
| 172.31.1.247 | 3.230.172.154 | egress | 346952038 |
| 172.31.0.46 | 3.230.172.154 | egress | 343799447 |
Analyze data transfers by Amazon EC2 instance ID
You can use custom flow logs to analyze data transfers by Amazon Elastic Compute Cloud (Amazon EC2) instance ID. To determine the most active EC2 instances, include the instance_id field in the query.
Query

```
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| stats sum(bytes) as Data_Transferred by instance_id
| sort by Data_Transferred desc
| limit 5
```
Output

| instance_id | Data_Transferred |
| --- | --- |
| - | 1443477306 |
| i-03205758c9203c979 | 517558754 |
| i-0ae33894105aa500c | 324629414 |
| i-01506ab9e9e90749d | 198063232 |
| i-0724007fef3cb06f3 | 54847643 |
Filter for rejected SSH traffic
To analyze the traffic that security groups and network access control lists (network ACLs) reject, use the REJECT filter action. To determine the hosts that are rejected on SSH traffic, extend the filter to include TCP protocol traffic with a destination port of 22. The following example query specifies TCP by its protocol number, 6.
Query

```
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter action = "REJECT" and protocol = 6 and dstport = 22
| stats sum(bytes) as SSH_Traffic_Volume by srcaddr
| sort by SSH_Traffic_Volume desc
| limit 2
```
Output

| srcaddr | SSH_Traffic_Volume |
| --- | --- |
| 23.95.222.129 | 160 |
| 179.43.167.74 | 80 |
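VPC flow logs record the protocol as its IANA protocol number, which is why the filter compares protocol = 6 (TCP) rather than a name. As a quick illustration (not part of the original article), Python's standard library can resolve these numbers so you don't have to memorize them:

```python
# Look up IANA protocol numbers by name instead of hard-coding them.
import socket

tcp = socket.getprotobyname("tcp")
udp = socket.getprotobyname("udp")
print(tcp, udp)  # 6 17
```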
Isolate the HTTP data stream for a specific source/destination pair
To analyze data trends, use CloudWatch Logs Insights to isolate the bidirectional traffic between two IP addresses. In the following query, ["172.31.1.247","172.31.11.212"] returns flow logs that use either IP address as the source or destination IP address. The filter statements match VPC flow log events with TCP protocol 6 and port 80 to isolate HTTP traffic. To return a subset of all available fields, use the display keyword.
Query

```
#HTTP Data Stream for Specific Source/Destination Pair
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter srcaddr in ["172.31.1.247","172.31.11.212"] and dstaddr in ["172.31.1.247","172.31.11.212"] and protocol = 6 and (dstport = 80 or srcport = 80)
| display interface_id, srcaddr, srcport, dstaddr, dstport, protocol, bytes, action, log_status, start, end, flow_direction, tcp_flags
| sort by start desc
| limit 2
```
Output

| interface_id | srcaddr | srcport | dstaddr | dstport | protocol | bytes | action | log_status |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| eni-0b74120275654905e | 172.31.11.212 | 80 | 172.31.1.247 | 29376 | 6 | 5160876 | ACCEPT | OK |
| eni-0b74120275654905e | 172.31.1.247 | 29376 | 172.31.11.212 | 80 | 6 | 97380 | ACCEPT | OK |
Visualize results in a bar or pie chart
You can use CloudWatch Logs Insights to visualize results in a bar or pie chart. If the results include the bin() function, then the query output returns with a timestamp. You can then use a line chart or stacked area chart to [visualize the time series](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_VisualizationQuery.html).
To calculate the cumulative data transferred in 1-minute intervals, use stats sum(bytes) as Data_Transferred by bin(1m). To view this visualization, toggle between the logs and visualization tables in the CloudWatch Logs Insights console.
Query

```
parse @message "* * * * * * * * * * * * * * * * * * * * * * * * * * *"
as account_id, vpc_id, subnet_id, interface_id, instance_id, srcaddr, srcport, dstaddr, dstport, protocol, packets, bytes, action, log_status, start, end, flow_direction, traffic_path, tcp_flags, pkt_srcaddr, pkt_src_aws_service, pkt_dstaddr, pkt_dst_aws_service, region, az_id, sublocation_type, sublocation_id
| filter srcaddr in ["172.31.1.247","172.31.11.212"] and dstaddr in ["172.31.1.247","172.31.11.212"] and protocol = 6 and (dstport = 80 or srcport = 80)
| stats sum(bytes) as Data_Transferred by bin(1m)
```
Output

| bin(1m) | Data_Transferred |
| --- | --- |
| 2022-04-01 15:23:00.000 | 17225787 |
| 2022-04-01 15:21:00.000 | 17724499 |
| 2022-04-01 15:20:00.000 | 1125500 |
| 2022-04-01 15:19:00.000 | 101525 |
| 2022-04-01 15:18:00.000 | 81376 |
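The bin(1m) grouping behaves like truncating each event's start time to the minute and summing bytes within each bucket. A minimal local sketch of that behavior in Python (illustrative only; the sample events are made up):

```python
# Group (start_epoch_seconds, bytes) events into 1-minute buckets,
# mimicking "stats sum(bytes) as Data_Transferred by bin(1m)".
from collections import defaultdict
from datetime import datetime, timezone

def sum_bytes_per_minute(events):
    """events: iterable of (start_epoch_seconds, byte_count) tuples."""
    totals = defaultdict(int)
    for start, nbytes in events:
        # Truncate the timestamp to the minute, like bin(1m).
        minute = datetime.fromtimestamp(start, tz=timezone.utc).replace(
            second=0, microsecond=0
        )
        totals[minute] += nbytes
    return dict(totals)

# Example: three events, the first two in the same minute.
events = [(1648826580, 100), (1648826590, 200), (1648826700, 50)]
print(sum_bytes_per_minute(events))
```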