Multi-threaded Link Checker

Let us use our new knowledge to create a multi-threaded link checker. It should start at a webpage and check that links on the page are valid. It should recursively check other pages on the same domain, and keep doing this until all pages have been validated.

For this you will need an HTTP client such as reqwest. You will also need a way to find links; we can use scraper for that. Finally, we'll need some way of handling errors; we will use thiserror.

Create a new Cargo project and add reqwest as a dependency with:

cargo new link-checker
cd link-checker
cargo add --features blocking,rustls-tls reqwest
cargo add scraper
cargo add thiserror

If the cargo add calls fail with error: no such subcommand, then please edit the Cargo.toml file by hand and add the dependencies listed below.

The cargo add calls will update the Cargo.toml file to look like this:

[package]
name = "link-checker"
version = "0.1.0"
edition = "2021"
publish = false

[dependencies]
reqwest = { version = "0.11.12", features = ["blocking", "rustls-tls"] }
scraper = "0.13.0"
thiserror = "1.0.37"

You can now download the start page. Try with a small site such as https://www.google.org/.

Your src/main.rs file should look something like this:

use reqwest::blocking::Client;
use reqwest::Url;
use scraper::{Html, Selector};
use thiserror::Error;

#[derive(Error, Debug)]
enum Error {
    #[error("request error: {0}")]
    ReqwestError(#[from] reqwest::Error),
    #[error("bad http response: {0}")]
    BadResponse(String),
}

#[derive(Debug)]
struct CrawlCommand {
    url: Url,
    extract_links: bool,
}

/// Fetch a single page and, if requested, return the links it contains.
fn visit_page(client: &Client, command: &CrawlCommand) -> Result<Vec<Url>, Error> {
    println!("Checking {:#}", command.url);
    let response = client.get(command.url.clone()).send()?;
    if !response.status().is_success() {
        return Err(Error::BadResponse(response.status().to_string()));
    }

    let mut link_urls = Vec::new();
    if !command.extract_links {
        return Ok(link_urls);
    }

    // Use the final URL (after any redirects) as the base for relative links.
    let base_url = response.url().to_owned();
    let body_text = response.text()?;
    let document = Html::parse_document(&body_text);

    // Collect the href attribute of every <a> element on the page.
    let selector = Selector::parse("a").unwrap();
    let href_values = document
        .select(&selector)
        .filter_map(|element| element.value().attr("href"));
    for href in href_values {
        match base_url.join(href) {
            Ok(link_url) => {
                link_urls.push(link_url);
            }
            Err(err) => {
                println!("On {base_url:#}: ignored unparsable {href:?}: {err}");
            }
        }
    }
    Ok(link_urls)
}

fn main() {
    let client = Client::new();
    let start_url = Url::parse("https://www.google.org").unwrap();
    let crawl_command = CrawlCommand { url: start_url, extract_links: true };
    match visit_page(&client, &crawl_command) {
        Ok(links) => println!("Links: {links:#?}"),
        Err(err) => println!("Could not extract links: {err:#}"),
    }
}

Run the code in src/main.rs with:

cargo run

Tasks

  • Use threads to check the links in parallel: send the URLs to be checked to a channel and let a few threads check the URLs in parallel.
  • Extend this to recursively extract links from all pages on the www.google.org domain. Put an upper limit of 100 pages or so so that you don't end up being blocked by the site. One possible approach to both tasks is sketched below.
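
The following is a minimal sketch of one possible solution, not the only one. It assumes CrawlCommand, Error, and visit_page are unchanged from src/main.rs above, and it uses standard-library mpsc channels. Since a std::sync::mpsc::Receiver has a single consumer, the workers share it behind an Arc<Mutex<...>>. The NUM_WORKERS and MAX_PAGES constants are arbitrary values chosen for illustration.

use std::collections::HashSet;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

use reqwest::blocking::Client;
use reqwest::Url;

// CrawlCommand, Error, and visit_page are unchanged from src/main.rs above.

const NUM_WORKERS: usize = 16; // arbitrary; tune to taste
const MAX_PAGES: usize = 100; // upper bound so we do not hammer the site

fn main() {
    let start_url = Url::parse("https://www.google.org").unwrap();
    let domain = start_url.domain().unwrap().to_string();

    // URLs that still need to be checked flow through this channel.
    let (command_tx, command_rx) = mpsc::channel::<CrawlCommand>();
    // Each worker reports the links it found (or the error it hit) here.
    let (result_tx, result_rx) = mpsc::channel::<Result<Vec<Url>, (Url, Error)>>();

    // The receiver has a single consumer, so the workers share it via a mutex.
    let command_rx = Arc::new(Mutex::new(command_rx));

    for _ in 0..NUM_WORKERS {
        let command_rx = Arc::clone(&command_rx);
        let result_tx = result_tx.clone();
        thread::spawn(move || {
            let client = Client::new();
            loop {
                // Take one command off the shared receiver; the lock is
                // released again before the page is fetched.
                let command = match command_rx.lock().unwrap().recv() {
                    Ok(command) => command,
                    Err(_) => break, // channel closed: no more work
                };
                let url = command.url.clone();
                let result = visit_page(&client, &command).map_err(|err| (url, err));
                let _ = result_tx.send(result);
            }
        });
    }

    let mut visited = HashSet::new();
    visited.insert(start_url.as_str().to_string());
    command_tx.send(CrawlCommand { url: start_url, extract_links: true }).unwrap();
    let mut pending = 1; // commands sent but not yet answered

    while pending > 0 {
        match result_rx.recv().unwrap() {
            Ok(links) => {
                for link in links {
                    if visited.len() >= MAX_PAGES {
                        break;
                    }
                    // insert returns false if we have already seen this URL.
                    if visited.insert(link.as_str().to_string()) {
                        // Only recurse into pages on the starting domain.
                        let extract_links = link.domain() == Some(domain.as_str());
                        command_tx.send(CrawlCommand { url: link, extract_links }).unwrap();
                        pending += 1;
                    }
                }
            }
            Err((url, err)) => println!("Could not check {url}: {err}"),
        }
        pending -= 1;
    }
    println!("Done, checked {} pages.", visited.len());
}

The main thread counts commands that have been sent but not yet answered; when that count reaches zero, every reachable page within the limit has been checked and the loop ends. When main returns the process exits, which tears down the detached worker threads; for a more graceful shutdown you could drop command_tx and join the workers instead.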