This article collects typical usage examples of the Java method org.openqa.selenium.WebDriver.getPageSource. If you are wondering what WebDriver.getPageSource does or how to call it, the curated examples below may help; you can also explore the enclosing class, org.openqa.selenium.WebDriver, for further context.
Three code examples of WebDriver.getPageSource are shown below, ordered by popularity by default.
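Before the examples, here is a minimal, self-contained sketch of the call itself; the ChromeDriver setup and the URL are illustrative assumptions and do not come from the examples below.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class PageSourceDemo {
    public static void main(String[] args) {
        // Assumes a chromedriver binary is available on the PATH; the URL is a placeholder
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");
            // getPageSource() returns the source of the last page loaded in this session
            String source = driver.getPageSource();
            System.out.println("Fetched " + source.length() + " characters of page source");
        } finally {
            driver.quit();
        }
    }
}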
Example 1: getArtifact
import org.openqa.selenium.WebDriver; // import the class that provides the method
/**
* Produce page source from the specified driver.
*
* @param optDriver optional web driver object
* @param reason impetus for capture request; may be 'null'
* @param logger SLF4J logger object
* @return page source; if capture fails, an empty string is returned
*/
public static String getArtifact(Optional<WebDriver> optDriver, Throwable reason, Logger logger) {
    if (canGetArtifact(optDriver, logger)) {
        try {
            WebDriver driver = optDriver.get();
            StringBuilder sourceBuilder = new StringBuilder(driver.getPageSource());
            insertBaseElement(sourceBuilder, driver);
            insertBreakpointInfo(sourceBuilder, reason);
            insertOriginalUrl(sourceBuilder, driver);
            return sourceBuilder.toString();
        } catch (WebDriverException e) {
            logger.warn("The driver is capable of producing page source, but failed.", e);
        }
    }
    return "";
}
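A possible call site for this utility is sketched below. The class name PageSourceUtils, the output file, and the driver setup are assumptions for illustration; canGetArtifact and the insert* helpers belong to the original class and are not shown here.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Optional;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ArtifactCaptureDemo {
    private static final Logger LOGGER = LoggerFactory.getLogger(ArtifactCaptureDemo.class);

    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com");
            // "PageSourceUtils" is an assumed name for the class that declares getArtifact;
            // the javadoc above allows passing null as the capture reason
            String page = PageSourceUtils.getArtifact(Optional.of(driver), null, LOGGER);
            if (!page.isEmpty()) {
                Files.write(Paths.get("artifact.html"), page.getBytes(StandardCharsets.UTF_8));
            }
        } finally {
            driver.quit();
        }
    }
}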
Example 2: testDriver
import org.openqa.selenium.WebDriver; // import the class that provides the method
@Test
public void testDriver() throws IOException {
    // URL2, engine (a javax.script.ScriptEngine), toUrl and log are members of the enclosing test class
    WebDriver driver = new RemoteWebDriver(toUrl("http://localhost:9515"), DesiredCapabilities.chrome());
    driver.get(URL2);
    // Page source as rendered by the remote Chrome session
    String response = driver.getPageSource();
    // Fetch the same URL with Jsoup and select its <body> for comparison
    Document doc = Jsoup.connect(URL2).ignoreContentType(true).get();
    Elements scriptTags = doc.select("body");
    // Evaluate the rendered source with the script engine
    try {
        String result = (String) engine.eval(response);
    } catch (ScriptException e) {
        e.printStackTrace();
    }
    log.info("PageSource " + response);
    driver.quit();
}
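The test above calls eval() on an engine field that is declared elsewhere in the test class. As an illustration only, a JavaScript ScriptEngine field like that could be created as follows (Nashorn ships with JDK 8-14; newer JDKs need a standalone engine such as GraalJS):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class EngineHolder {
    // getEngineByName returns null if no JavaScript engine is on the classpath, so check before use
    final ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
}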
Example 3: downloadPic
import org.openqa.selenium.WebDriver; // import the class that provides the method
public void downloadPic(String url) {
    WebDriver driver = new ChromeDriver();
    try {
        driver.get(url);
        // Render the page and extract image URLs from its source
        String html = driver.getPageSource();
        List<String> urls = parseHtmlToImages(html, picParser);
        crawlerClient.downloadPics(urls);
    } finally {
        driver.quit(); // release the browser even if parsing or downloading fails
    }
}