Python Digital Forensics Tutorial

Python Digital Mobile Device Forensics

This chapter explains Python digital forensics on mobile devices and the concepts involved.

Introduction

Mobile device forensics is the branch of digital forensics that deals with the acquisition and analysis of mobile devices to recover digital evidence of investigative interest. It differs from computer forensics because mobile devices have built-in communication systems that can provide useful location-related information.

Though the use of smartphones in digital forensics is growing day by day, they are still considered non-standard because of their heterogeneity. Computer hardware, such as hard disks, on the other hand, is considered standard, and its examination has developed into a stable discipline. In the digital forensics industry there is much debate about the techniques used for non-standard devices with transient evidence, such as smartphones.

Artifacts Extractable from Mobile Devices

Modern mobile devices hold far more digital information than older phones, which stored only a call log and SMS messages. Mobile devices can therefore supply investigators with many insights about their users. Some artifacts that can be extracted from mobile devices are mentioned below −

  1. Messages − These are useful artifacts that can reveal the state of mind of the owner and can even give the investigator some previously unknown information.

  2. Location History − Location history data is a useful artifact that investigators can use to validate a person's presence at a particular location.

  3. Applications Installed − By looking at the kinds of applications installed, the investigator gets some insight into the habits and thinking of the mobile user.

Evidence Sources and Processing in Python

Smartphones have SQLite databases and PLIST files as their major sources of evidence. In this section we are going to process these sources of evidence in Python.

Analyzing PLIST files

A PLIST (Property List) is a flexible and convenient format for storing application data, especially on iPhone devices. It uses the extension .plist. Such files are used to store information about bundles and applications, and can be in either of two formats: XML and binary. The following Python code will open and read a PLIST file. Note that before proceeding, we must create our own Info.plist file.

First, install a third-party library named biplist with the following command −

pip install biplist

Now, import some useful libraries to process plist files −

import biplist
import os
import sys

Now, the following code under the main method can be used to read the plist file into a variable −

def main(plist):
   try:
      data = biplist.readPlist(plist)
   except (biplist.InvalidPlistException, biplist.NotBinaryPlistException) as e:
      print("[-] Invalid PLIST file - unable to be opened by biplist")
      sys.exit(1)

Now, we can either read the data on the console or print it directly from this variable.
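As a minimal, self-contained sketch of that step, the standard-library plistlib module (an alternative to the biplist library used above; the sample Info.plist content below is invented for illustration) can create a binary PLIST and print its contents −

```python
import plistlib

# Create a small sample Info.plist in binary format so the sketch is
# self-contained; in practice you would point this at a real file.
sample = {"CFBundleName": "DemoApp", "CFBundleVersion": "1.0"}
with open("Info.plist", "wb") as fp:
    plistlib.dump(sample, fp, fmt=plistlib.FMT_BINARY)

# Read the PLIST back and print every key/value pair on the console.
with open("Info.plist", "rb") as fp:
    data = plistlib.load(fp)

for key, value in data.items():
    print("{}: {}".format(key, value))
```

Note that plistlib.load() detects binary and XML formats automatically, which is why no format flag is needed when reading.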

SQLite Databases

SQLite serves as the primary data repository on mobile devices. SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. Being zero-configuration, you need not configure it in your system, unlike other databases.

If you are a novice or unfamiliar with SQLite databases, you can follow the link www.tutorialspoint.com/sqlite/index.htm. Additionally, you can follow the link www.tutorialspoint.com/sqlite/sqlite_python.htm in case you want to get into the details of SQLite with Python.

During mobile forensics, we can interact with the sms.db file of a mobile device and extract valuable information from the message table. Python has a built-in library named sqlite3 for connecting with SQLite databases. You can import it with the following command −

import sqlite3

Now, with the help of the following commands, we can connect with the database, say sms.db in the case of mobile devices −

conn = sqlite3.connect('sms.db')
c = conn.cursor()

Here, c is the cursor object with the help of which we can interact with the database.

Now, suppose we want to execute a particular command, say to get the details from the abc table; it can be done with the help of the following command −

c.execute("SELECT * FROM abc")

The result of the above command will be stored in the cursor object. Similarly, we can use the fetchall() method to dump the result into a variable we can manipulate.

We can use the following commands to get the column names of the message table in sms.db −

c.execute("pragma table_info(message)")
table_data = c.fetchall()
columns = [x[1] for x in table_data]

Observe that here we are using the SQLite PRAGMA command, a special command used to control various environment variables and state flags within the SQLite environment. In the above commands, the fetchall() method returns a list of tuples, and each column's name is stored at index 1 of its tuple.

Now, with the help of the following commands, we can query the table for all of its data and store it in the variable named data_msg −

c.execute("SELECT * FROM message")
data_msg = c.fetchall()

The above commands store the data in the variable; further, we can also write this data to a CSV file by using the csv.writer() method.
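To illustrate that last step, here is a self-contained sketch: it builds a tiny in-memory stand-in for sms.db (the table and column names are illustrative, not the real iOS schema) and writes the fetched rows to a CSV file with csv.writer() −

```python
import csv
import sqlite3

# Build a tiny stand-in database in memory; with a real sms.db you would
# connect to the file instead. Column names here are illustrative only.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("CREATE TABLE message (id INTEGER, address TEXT, text TEXT)")
c.executemany("INSERT INTO message VALUES (?, ?, ?)",
              [(1, "+15550100", "hello"), (2, "+15550101", "see you at 5")])

c.execute("SELECT * FROM message")
data_msg = c.fetchall()
columns = [d[0] for d in c.description]   # column names from the cursor

# Write a header row followed by every fetched row.
with open("messages.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(columns)
    writer.writerows(data_msg)
conn.close()
```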

iTunes Backups

iPhone mobile forensics can be performed on the backups made by iTunes. Forensic examiners rely on analyzing the logical iPhone backups acquired through iTunes. The AFC (Apple File Connection) protocol is used by iTunes to take the backup. Besides, the backup process does not modify anything on the iPhone except the escrow key records.

Now the question arises: why is it important for a digital forensics expert to understand the techniques used on iTunes backups? It is important in case we get access to the suspect's computer instead of the iPhone directly, because when a computer is used to sync with an iPhone, most of the information on the iPhone is likely to be backed up on the computer.

Process of Backup and its Location

Whenever an Apple product is backed up to a computer, it is in sync with iTunes, and there will be a specific folder with the device's unique ID. In the latest backup format, the files are stored in subfolders named after the first two hexadecimal characters of the file name. Among these backup files, some files such as info.plist are useful, along with the database named Manifest.db. The following table shows the backup locations, which vary with the operating system hosting the iTunes backups −

OS         Backup Location
Win7       C:\Users\[username]\AppData\Roaming\Apple Computer\MobileSync\Backup\
Mac OS X   ~/Library/Application Support/MobileSync/Backup/

For processing the iTunes backup with Python, we first need to identify all the backups in the backup location as per our operating system. Then we will iterate through each backup and read the database Manifest.db.

Now, with the help of the following Python code, we can do the same −

First, import the necessary libraries as follows −

from __future__ import print_function
import argparse
import logging
import os

from shutil import copyfile
import sqlite3
import sys
logger = logging.getLogger(__name__)

Now, provide two positional arguments, namely INPUT_DIR and OUTPUT_DIR, representing the iTunes backup and the desired output folder respectively −

if __name__ == "__main__":
   parser = argparse.ArgumentParser("iTunes Backup Processor")
   parser.add_argument("INPUT_DIR", help = "Location of folder containing iOS backups, "
      "e.g. ~\Library\Application Support\MobileSync\Backup folder")
   parser.add_argument("OUTPUT_DIR", help = "Output Directory")
   parser.add_argument("-l", help = "Log file path", default = __file__[:-2] + "log")
   parser.add_argument("-v", help = "Increase verbosity", action = "store_true")
   args = parser.parse_args()

Now, set up the log as follows −

if args.v:
   logger.setLevel(logging.DEBUG)
else:
   logger.setLevel(logging.INFO)

Now, set up the message format for this log as follows −

msg_fmt = logging.Formatter("%(asctime)-15s %(funcName)-13s "
   "%(levelname)-8s %(message)s")
strhndl = logging.StreamHandler(sys.stderr)
strhndl.setFormatter(fmt = msg_fmt)

fhndl = logging.FileHandler(args.l, mode = 'a')
fhndl.setFormatter(fmt = msg_fmt)

logger.addHandler(strhndl)
logger.addHandler(fhndl)
logger.info("Starting iBackup Visualizer")
logger.debug("Supplied arguments: {}".format(" ".join(sys.argv[1:])))
logger.debug("System: " + sys.platform)
logger.debug("Python Version: " + sys.version)

The following lines of code will create the necessary folders for the desired output directory by using the os.makedirs() function −

if not os.path.exists(args.OUTPUT_DIR):
   os.makedirs(args.OUTPUT_DIR)

Now, pass the supplied input and output directories to the main() function as follows −

if os.path.exists(args.INPUT_DIR) and os.path.isdir(args.INPUT_DIR):
   main(args.INPUT_DIR, args.OUTPUT_DIR)
else:
   logger.error("Supplied input directory does not exist or is not ""a directory")
   sys.exit(1)

Now, write the main() function, which will further call the backup_summary() function to identify all the backups present in the input folder −

def main(in_dir, out_dir):
   backups = backup_summary(in_dir)
def backup_summary(in_dir):
   logger.info("Identifying all iOS backups in {}".format(in_dir))
   root = os.listdir(in_dir)
   backups = {}

   for x in root:
      temp_dir = os.path.join(in_dir, x)
      if os.path.isdir(temp_dir) and len(x) == 40:
         num_files = 0
         size = 0

         for root, subdir, files in os.walk(temp_dir):
            num_files += len(files)
            size += sum(os.path.getsize(os.path.join(root, name))
               for name in files)
         backups[x] = [temp_dir, num_files, size]
   return backups

Now, print the summary of each backup to the console as follows −

print("Backup Summary")
print("=" * 20)

if len(backups) > 0:
   for i, b in enumerate(backups):
      print("Backup No.: {} \n""Backup Dev. Name: {} \n""# Files: {} \n""Backup Size (Bytes): {}\n".format(i, b, backups[b][1], backups[b][2]))

Now, dump the contents of the Manifest.db file to the variable named db_items.

try:
   db_items = process_manifest(backups[b][0])
except IOError:
   logger.warn("Non-iOS 10 backup encountered or invalid backup. "
      "Continuing to next backup.")
   continue

Now, let us define a function that will take the directory path of the backup −

def process_manifest(backup):
   manifest = os.path.join(backup, "Manifest.db")

   if not os.path.exists(manifest):
      logger.error("Manifest DB not found in {}".format(manifest))
      raise IOError

Now, using sqlite3, we will connect to the database through a cursor named c −

conn = sqlite3.connect(manifest)
c = conn.cursor()
items = {}

for row in c.execute("SELECT * from Files;"):
   items[row[0]] = [row[2], row[1], row[3]]
return items

      create_files(in_dir, out_dir, b, db_items)
      print("=" * 20)
else:
   logger.warning("No valid backups found. The input directory should be "
      "the parent-directory immediately above the SHA-1 hash "
      "iOS device backups")
   sys.exit(2)

Now, define the create_files() method as follows −

def create_files(in_dir, out_dir, b, db_items):
   msg = "Copying Files for backup {} to {}".format(b, os.path.join(out_dir, b))
   logger.info(msg)

Now, iterate through each key in the db_items dictionary −

for x, key in enumerate(db_items):
   if db_items[key][0] is None or db_items[key][0] == "":
      continue
   else:
      dirpath = os.path.join(out_dir, b, os.path.dirname(db_items[key][0]))
      filepath = os.path.join(out_dir, b, db_items[key][0])

      if not os.path.exists(dirpath):
         os.makedirs(dirpath)

      original_dir = b + "/" + key[0:2] + "/" + key
      path = os.path.join(in_dir, original_dir)

      if os.path.exists(filepath):
         filepath = filepath + "_{}".format(x)

Now, use the shutil.copyfile() method to copy the backed-up file as follows −

try:
   copyfile(path, filepath)
except IOError:
   logger.debug("File not found in backup: {}".format(path))
   files_not_found += 1

if files_not_found > 0:
   logger.warning("{} files listed in the Manifest.db not "
      "found in backup".format(files_not_found))

copyfile(os.path.join(in_dir, b, "Info.plist"), os.path.join(out_dir, b, "Info.plist"))
copyfile(os.path.join(in_dir, b, "Manifest.db"), os.path.join(out_dir, b, "Manifest.db"))
copyfile(os.path.join(in_dir, b, "Manifest.plist"), os.path.join(out_dir, b, "Manifest.plist"))
copyfile(os.path.join(in_dir, b, "Status.plist"), os.path.join(out_dir, b, "Status.plist"))

With the above Python script, we can get the updated backup file structure in our output folder. We can use the pycrypto Python library to decrypt the backups.

Wi-Fi

Mobile devices can be used to connect to the outside world through Wi-Fi networks, which are available everywhere. Sometimes the device gets connected to these open networks automatically.

In the case of an iPhone, the list of open Wi-Fi connections to which the device has connected is stored in a PLIST file named com.apple.wifi.plist. This file contains the Wi-Fi SSID, BSSID and connection time.
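A sketch of reading such a file with the standard-library plistlib is shown below. Note that the exact key layout of com.apple.wifi.plist varies by iOS version, so the keys "List of known networks", "SSID_STR" and "BSSID" are assumptions here, and a sample file is generated so the code runs end to end −

```python
import plistlib

# Assumed structure only: real com.apple.wifi.plist layouts differ by
# iOS version. A sample file is written first so the sketch is runnable.
sample = {
    "List of known networks": [
        {"SSID_STR": "CoffeeShop", "BSSID": "aa:bb:cc:dd:ee:ff"},
    ]
}
with open("com.apple.wifi.plist", "wb") as fp:
    plistlib.dump(sample, fp, fmt=plistlib.FMT_BINARY)

with open("com.apple.wifi.plist", "rb") as fp:
    wifi_plist = plistlib.load(fp)

# Print the SSID and BSSID of every recorded network.
networks = wifi_plist.get("List of known networks", [])
for net in networks:
    print(net.get("SSID_STR"), net.get("BSSID"))
```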

We need to extract the Wi-Fi details from a standard Cellebrite XML report using Python. For this, we need to use the API of the Wireless Geographic Logging Engine (WIGLE), a popular platform that can be used to find the location of a device using the names of Wi-Fi networks.

We can use the Python library named requests to access the WIGLE API. It can be installed as follows −

pip install requests

API from WIGLE

We need to register on WIGLE's website https://wigle.net/account to get a free API key from WIGLE. The Python script for getting information about the user's device and its connections through WIGLE's API is discussed below −

First, import the following libraries for handling different things −

from __future__ import print_function

import argparse
import csv
import os
import sys
import xml.etree.ElementTree as ET
import requests

Now, provide two positional arguments, namely INPUT_FILE and OUTPUT_CSV, which represent the input file with Wi-Fi MAC addresses and the desired output CSV file respectively −

if __name__ == "__main__":
   parser = argparse.ArgumentParser("Wi-Fi Artifact Parser")
   parser.add_argument("INPUT_FILE", help = "INPUT FILE with MAC Addresses")
   parser.add_argument("OUTPUT_CSV", help = "Output CSV File")
   parser.add_argument("-t", help = "Input type: Cellebrite XML report or TXT file",
      choices = ('xml', 'txt'), default = "xml")
   parser.add_argument('--api', help = "Path to API key file",
      default = os.path.expanduser("~/.wigle_api"),
      type = argparse.FileType('r'))
   args = parser.parse_args()

Now, the following lines of code will check whether the input file exists and is a file. If not, the script exits −

if not os.path.exists(args.INPUT_FILE) or \
      not os.path.isfile(args.INPUT_FILE):
   print("[-] {} does not exist or is not a file".format(args.INPUT_FILE))
   sys.exit(1)

directory = os.path.dirname(args.OUTPUT_CSV)
if directory != '' and not os.path.exists(directory):
   os.makedirs(directory)
api_key = args.api.readline().strip().split(":")

Now, pass the arguments to main() as follows −

main(args.INPUT_FILE, args.OUTPUT_CSV, args.t, api_key)

def main(in_file, out_csv, type, api_key):
   if type == 'xml':
      wifi = parse_xml(in_file)
   else:
      wifi = parse_txt(in_file)
   query_wigle(wifi, out_csv, api_key)

Now, we will parse the XML file as follows −

def parse_xml(xml_file):
   wifi = {}
   xmlns = "{http://pa.cellebrite.com/report/2.0}"
   print("[+] Opening {} report".format(xml_file))

   xml_tree = ET.parse(xml_file)
   print("[+] Parsing report for all connected WiFi addresses")

   root = xml_tree.getroot()

Now, iterate through the child elements of the root as follows −

for child in root.iter():
   if child.tag == xmlns + "model":
      if child.get("type") == "Location":
         for field in child.findall(xmlns + "field"):
            if field.get("name") == "TimeStamp":
               ts_value = field.find(xmlns + "value")
               try:
                  ts = ts_value.text
               except AttributeError:
                  continue

Now, we will check whether the 'SSID' string is present in the value's text −

if "SSID" in value.text:
   bssid, ssid = value.text.split("\t")
   bssid = bssid[7:]
   ssid = ssid[6:]

Now, we need to add the BSSID, SSID and timestamp to the wifi dictionary as follows −

if bssid in wifi.keys():
   wifi[bssid]["Timestamps"].append(ts)
   wifi[bssid]["SSID"].append(ssid)
else:
   wifi[bssid] = {"Timestamps": [ts], "SSID": [ssid], "Wigle": {}}
return wifi

The text parser, which is much simpler than the XML parser, is shown below −

def parse_txt(txt_file):
   wifi = {}
   print("[+] Extracting MAC addresses from {}".format(txt_file))

   with open(txt_file) as mac_file:
      for line in mac_file:
         wifi[line.strip()] = {"Timestamps": ["N/A"], "SSID": ["N/A"], "Wigle": {}}
   return wifi

Now, let us use the requests module to make WIGLE API calls and move on to the query_wigle() method −

def query_wigle(wifi_dictionary, out_csv, api_key):
   print("[+] Querying Wigle.net through Python API for {} "
      "APs".format(len(wifi_dictionary)))
   for mac in wifi_dictionary:
      wigle_results = query_mac_addr(mac, api_key)

def query_mac_addr(mac_addr, api_key):
   query_url = "https://api.wigle.net/api/v2/network/search?" \
      "onlymine=false&freenet=false&paynet=false" \
      "&netid={}".format(mac_addr)
   req = requests.get(query_url, auth = (api_key[0], api_key[1]))
   return req.json()

Actually, there is a per-day limit on WIGLE API calls; if that limit is exceeded, an error is shown as follows −

try:
   if wigle_results["resultCount"] == 0:
      wifi_dictionary[mac]["Wigle"]["results"] = []
      continue
   else:
      wifi_dictionary[mac]["Wigle"] = wigle_results
except KeyError:
   if wigle_results["error"] == "too many queries today":
      print("[-] Wigle daily query limit exceeded")
      wifi_dictionary[mac]["Wigle"]["results"] = []
      continue
   else:
      print("[-] Other error encountered for "
         "address {}: {}".format(mac, wigle_results['error']))
      wifi_dictionary[mac]["Wigle"]["results"] = []
      continue
prep_output(out_csv, wifi_dictionary)

Now, we will use the prep_output() method to flatten the dictionary into easily writable chunks −

def prep_output(output, data):
   csv_data = {}
   google_map = "https://www.google.com/maps/search/"

Now, access all the data we have collected so far as follows −

for x, mac in enumerate(data):
   for y, ts in enumerate(data[mac]["Timestamps"]):
      for z, result in enumerate(data[mac]["Wigle"]["results"]):
         shortres = data[mac]["Wigle"]["results"][z]
         g_map_url = "{}{},{}".format(google_map, shortres["trilat"],shortres["trilong"])

Now, we can write the output to a CSV file, as we have done in earlier scripts in this chapter, by using the write_csv() function.
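The write_csv() function itself is not reproduced in this chunk, so here is a sketch of what such a helper might look like, writing the flattened per-AP rows to CSV (the field names and sample row below are illustrative assumptions, not WIGLE output) −

```python
import csv

# Hypothetical write_csv() helper in the spirit of the chapter's earlier
# scripts: writes a list of per-row dictionaries to the output CSV.
def write_csv(output, fieldnames, rows):
    with open(output, "w", newline="") as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# Illustrative row in the shape prep_output() would produce.
sample_rows = [{
    "BSSID": "aa:bb:cc:dd:ee:ff",
    "SSID": "CoffeeShop",
    "Timestamp": "N/A",
    "Google Map URL": "https://www.google.com/maps/search/40.0,-75.0",
}]
write_csv("wigle_results.csv", list(sample_rows[0].keys()), sample_rows)
```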