Documentation
Index
Constants
This section is empty.
Variables
var Cookies = map[string]string{
	"PHPSESSID":           "ae6doi5fo94hv5k2ouqmopd47k",
	"s2_csrf_cookie_name": "cf0b4574d2c27713afd4b26879597e5d",
	"s2_theme_ui":         "red",
	"s2_uGoo":             "w6a162dd67b1968e6349944bcff010fdd63ee724",
	"s2_uLang":            "en",
	"sh":                  "72",
	"sw":                  "95.4",
}
Cookies holds the session cookies required for requests to myip.ms
var Headers = map[string]string{
	"Content-Type":     "application/x-www-form-urlencoded; charset=UTF-8",
	"X-Requested-With": "XMLHttpRequest",
	"Origin":           "https://myip.ms",
	"Referer":          "https://myip.ms/browse/sites/1",
	"User-Agent":       "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36",
	"Accept":           "*/*",
}
Headers holds the default headers sent with every HTTP request
Functions
This section is empty.
Types
type HTTPClient
type HTTPClient struct {
// contains filtered or unexported fields
}
HTTPClient represents an HTTP client with headers and cookies
func NewHTTPClient
func NewHTTPClient(proxyURL, proxyUser, proxyPass string) *HTTPClient
NewHTTPClient creates a new HTTP client with default cookies and optional proxy
type Runner
type Runner struct {
// contains filtered or unexported fields
}
Runner orchestrates the scraping process
type Scraper
type Scraper struct {
// contains filtered or unexported fields
}
Scraper holds the HTTP client and URL template for efficient scraping
func NewScraper
func NewScraper(httpClient *HTTPClient, filter *options.Filter) *Scraper
NewScraper creates a new scraper with the HTTP client and builds the URL template
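The URL template NewScraper builds is not shown, but given the paginated browse/sites/1 path visible in the Referer header above, page URLs are presumably produced along these lines (the printf-style placeholder and base path are assumptions for illustration, not taken from the package):

```go
package main

import "fmt"

// pageURL fills a printf-style template with a page number, mirroring the
// paginated https://myip.ms/browse/sites/<n> path seen in the Referer header.
func pageURL(template string, page int) string {
	return fmt.Sprintf(template, page)
}

func main() {
	const tmpl = "https://myip.ms/browse/sites/%d"
	for page := 1; page <= 3; page++ {
		fmt.Println(pageURL(tmpl, page))
	}
}
```

Precomputing the template once in NewScraper, rather than rebuilding the filtered URL on every request, matches the "efficient scraping" note in the Scraper doc above.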