When I execute a scraper, it loads the URL using this method:
$html = scraperWiki::scrape("foo.html");
So every time I add new code to the scraper and want to test it, it loads the HTML again, which takes a fair amount of time.
Is there any way to save the $html so it is only loaded the first time?
Answer
As stated in the FAQ in the docs and help section of the ScraperWiki site:
Yes, but all files are temporary.
Since you are using PHP, I suggest you save the file (the HTML, or whatever it is) with the fopen/fwrite functions, then read it back on later runs instead of scraping the page again.
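
A minimal sketch of that idea, assuming the scraperWiki class is already available (as it is in the ScraperWiki editor); the cache file name "cache.html" and the URL are placeholders:

<?php
$cacheFile = 'cache.html';   // hypothetical local cache path
$url       = 'foo.html';     // the URL you are scraping

if (file_exists($cacheFile)) {
    // Reuse the previously saved copy instead of downloading it again.
    $html = file_get_contents($cacheFile);
} else {
    // First run: fetch the page, then write it out with fopen/fwrite.
    $html = scraperWiki::scrape($url);
    $fp = fopen($cacheFile, 'w');
    fwrite($fp, $html);
    fclose($fp);
}

// ... continue parsing $html as before ...

Keep the FAQ's caveat in mind: files on ScraperWiki are temporary, so the cached copy speeds up repeated test runs but may not persist indefinitely.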