In our recent Big Data survey, the vast majority of respondents said they recognize the potential value of Web data to power everything from better competitive and market intelligence to lead gen, list management, and much more. And yet, 70% say they're having trouble taking advantage of this powerful resource through automated data extraction.
Based on our experience as the leading provider of intelligent, machine-learning-based Web scraping services, we offer insight into the roadblocks – and a roadmap to success.
Pitfall #1 — Thinking like a machine, not a human.
Websites are built from HTML, so your first instinct might be to write scripts that parse that code to extract the data you need. Unfortunately, to make this work you'll have to write a separate script for each Web page, which can require hours of manual engineering or reverse engineering. Worse, you'll likely have to rewrite each script every time someone updates a page's layout, or before too long you'll be collecting gobbledygook.
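To see why page-specific scripts are so fragile, consider a minimal sketch. The HTML snippets and extraction logic below are hypothetical, but the failure mode is typical: the script is hard-coded to one page's markup, and a routine redesign silently breaks it.

```python
import re

# The same product listing, before and after a cosmetic redesign.
PAGE_V1 = '<div class="listing"><span class="price">$19.99</span></div>'
PAGE_V2 = '<div class="listing"><span class="cost">$19.99</span></div>'

def extract_price(html):
    # Hard-coded to the class name used when the script was written.
    match = re.search(r'<span class="price">([^<]+)</span>', html)
    return match.group(1) if match else None

print(extract_price(PAGE_V1))  # "$19.99"
print(extract_price(PAGE_V2))  # None -- same data, but the script now fails
```

Multiply that by every site you monitor, and every redesign any of them ships, and the maintenance burden compounds quickly.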
A human being is not confused by website changes. No matter what, we can recognize an image, a headline, a text block, a price, a product description. At Connotate, we’ve taught the machine to “perceive” like a human does, using an approach called visual abstraction. Visual abstraction applies machine learning to allow the machine to interpret Web pages like people do. So if a headline moves from the top to the side, or the format changes from two columns to four columns, the extraction algorithms still know what to do.
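Connotate's visual abstraction is proprietary, but the underlying idea can be illustrated with a toy sketch: recognize a field by what it *contains* rather than where it *sits* in the markup. The heuristics below are entirely hypothetical and far simpler than a real machine-learning model, but like a human reader, they survive layout changes because they never reference a block's position.

```python
import re

def classify_block(text):
    """Guess what role a text block plays, from its content alone."""
    if re.fullmatch(r'[$€£]\s?\d+(?:[.,]\d{2})?', text.strip()):
        return "price"
    if len(text) < 80 and text.istitle():
        return "headline"
    return "body"

# Move these blocks anywhere on the page and the labels stay the same.
blocks = ["Widgets On Sale Today", "$19.99",
          "A long description of the product and its features."]
print([classify_block(b) for b in blocks])  # ['headline', 'price', 'body']
```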
Pitfall #2 — Throwing bodies at the problem.
Another early decision might be to use teams of people to copy and paste Web data from browsers. But this is not an especially effective or sustainable solution. Low-cost, low-skilled talent can unknowingly introduce a significant amount of error. If you hire high-level, skilled programmers — at a much higher cost — you still haven't addressed the problems created by a constantly changing Web landscape. This means the team will not only be tasked with creating scripts to capture data from new websites, but will also spend a significant amount of time fixing broken scripts for existing ones.
The truth is, the Web is so large and complex that it can't be mined by humans alone. What's required is a machine-learning-based data monitoring solution that lets non-programmers rapidly build extraction Agents and that flexibly handles website changes without significant human intervention.
Pitfall #3 — Thinking it’s going to be easy to make sense of the Web data you extract.
No surprise: from dates to currencies to product identifiers, different websites present data in different ways. If you don’t have a way to normalize it all, you’ll end up with a mishmash of data you can’t use to drive insights.
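A small sketch makes the normalization problem concrete. The formats handled below are illustrative, not exhaustive, and a production system would need locale awareness (for instance, "05/03/2024" is ambiguous between US and European conventions), but the principle is the same: coerce every source's representation into one canonical form before it reaches your BI tools.

```python
from datetime import datetime
import re

DATE_FORMATS = ("%m/%d/%Y", "%d-%m-%Y", "%B %d, %Y", "%Y-%m-%d")

def normalize_date(raw):
    """Coerce a date string in any known format to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # flag for review rather than guessing

def normalize_price(raw):
    """Strip currency symbols and thousands separators to a float."""
    digits = re.sub(r"[^\d.]", "", raw.replace(",", ""))
    return float(digits) if digits else None

print(normalize_date("March 5, 2024"))  # 2024-03-05
print(normalize_price("$1,299.00"))     # 1299.0
```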
What’s the solution? Automated ways to organize and structure the varied data coming back from many different websites so that it can be integrated into your existing BI and workflows. We’ve built these capabilities into the Connotate platform.
Pitfall #4 — Trying to manually orchestrate thousands of Agents.
If you’re just scraping a couple of sites, it’s not hard to do. The problem is, most companies need hundreds, if not thousands, of different Agents to get the job done. Manually managing how and when these Agents are sent out, and in what order, then becomes an almost impossible task.
A proper solution must be able to automate and optimize this flow of Agents. For example, Connotate’s data extraction software allows Agent runs to be scheduled and orchestrated. And Agents themselves can automatically bookmark final destinations and identify unnecessary links, delivering better results faster.
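The scheduling side of orchestration can be sketched with a priority queue: each Agent is re-queued for its next run as soon as its current run is dispatched. The Agent names and intervals below are hypothetical, and a real orchestrator would also handle retries, concurrency limits, and failures, but the core dispatch loop looks like this:

```python
import heapq

def run_schedule(agents, until):
    """agents: list of (name, interval_hours); dispatch each at its
    interval, in due-time order, until the `until` cutoff (in hours)."""
    queue = [(interval, name, interval) for name, interval in agents]
    heapq.heapify(queue)
    log = []
    while queue and queue[0][0] <= until:
        due, name, interval = heapq.heappop(queue)
        log.append((due, name))                        # dispatch the Agent
        heapq.heappush(queue, (due + interval, name, interval))
    return log

# Two Agents: hourly price checks, a daily inventory sweep.
print(run_schedule([("prices", 1), ("inventory", 24)], until=3))
# [(1, 'prices'), (2, 'prices'), (3, 'prices')]
```

With thousands of Agents, this ordering and re-queuing is exactly the bookkeeping that becomes impossible to manage by hand.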
You know Web data is valuable, but you may not know how simple Connotate makes getting it. We've helped businesses of all sizes improve productivity, streamline operational workflows, increase revenue, and reduce costs.