Rotto Link Web Crawler
A Summer Internship Project
By
Sunny Kumar 2011ECS43
Under the Guidance of Mr. Saurabh Kumar, Senior Developer
Ophio Technologies Pvt. Ltd.
Bachelor of Technology, Department of Computer Science Shri Mata Vaishno Devi University, J&K, India
December, 2014
UNDERTAKING
I hereby certify that the Colloquium Report entitled "Rotto Link Crawler", submitted in partial fulfillment of the requirements for the award of Bachelor of Technology in Computer Science and Engineering to the School of Computer Science & Engineering of Shri Mata Vaishno Devi University, Katra, J&K, is an authentic record of my own study carried out during the period May to July 2014. The matter presented in this report has not been submitted by me for the award of any other degree elsewhere. The content of the report does not violate any copyright, and due credit is given to the source of information where applicable.
Name: Sunny Kumar
Entry Number: 2011ECS43
Place: SMVDU, Katra
Date: 1 December 2014
About the Company
Ophio is a private company where a team of passionate, dedicated programmers and creative designers develops outstanding services and applications for the Web, iPhone, iPad, Android and the Mac. Their approach is simple: take well-designed products and make them function beautifully. The company specializes in creating unique, immersive and visually striking web and mobile applications, videos and animations; at Ophio, they literally make digital dreams reality. The Ophio team is a core group of skilled development experts who bring projects to life, adding an extra dimension of interactivity to all their work. Whether it is responsive builds, CMS sites, microsites or full eCommerce systems, they have the team to create superb products. With the launch of the iPhone 5 and the opening of 3G spectrum in Asian countries, there is huge demand for iPhone applications, and Ophio helps product owners reach their customers with creative, interactive applications built by its team of experts. The company also believes the future lies in open source: the Android platform is a robust operating system meant for rapid development, and Ophio's developers exploit it fully to build content-rich applications for mobile devices. Ophio comprises a 20-member team, of which 16 are developers, 2 are motion designers and 2 are QA analysts. The team is best described as youthful, ambitious, amiable and passionate, delivering high-quality work on time and bringing value to its projects and clients.
Table of Contents

1. Abstract
2. Project Description
   2.1 Web Crawler
   2.2 Rotto Link Web Crawler
   2.3 Back End
      2.3.1 Web API
      2.3.2 Crawler Module
         2.3.2.1 GRequests
         2.3.2.2 BeautifulSoup
         2.3.2.3 NLTK
         2.3.2.4 SQLAlchemy
         2.3.2.5 Redis
         2.3.2.6 Logbook
         2.3.2.7 SMTP
   2.4 Front End
   2.5 Screenshots of Application
3. Scope
4. Conclusion
Abstract

Rotto Link web crawler is an application tool that extracts the broken links (i.e. dead links) within a website. The application takes a seed URL (uniform resource locator) of the website to be crawled, visits every page of the website and searches for broken links. While the crawler visits these URLs, it identifies all the hyperlinks on each page and adds them to the list of URLs to visit, called the crawl frontier (i.e. the worker queue). The application follows the REST architecture for its web API. The web API takes a target/seed URL and a set of keywords to be searched on pages containing broken hyperlinks, and returns a set of links to the pages that contain broken hyperlinks. The web API has two endpoints, which perform two actions:
● An HTTP GET request containing a seed/target URL and an array of keywords in JSON form. This returns a JOB_ID to the user, which can later be used to fetch the results.
● An HTTP GET request containing a JOB_ID. This returns the results as the set of pages that match the keywords entered with the original request, along with their broken links.
All requests and responses are in JSON form. The application uses multiple Redis workers for crawling the websites in the queue; these workers spawn a process for every website in the queue. As the crawler visits all pages of a website, it stores the results in the database against the respective JOB_ID. The application uses the SQLite engine, with SQLAlchemy handling all database operations. For the UI, the application interface is designed using the AngularJS framework.
Project Description
2.1 Web Crawler

A web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in each page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. If the crawler is performing archiving of websites, it copies and saves the information as it goes. Such archives are usually stored so that they can be viewed, read and navigated as they were on the live web, but are preserved as 'snapshots'.

The large volume of the web implies that the crawler can only download a limited number of pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that pages might already have been updated or even deleted by the time the crawler reaches them.
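A minimal sketch of this crawl-frontier loop in Python, using the requests and BeautifulSoup packages (the function and variable names here are illustrative, not taken from the project source):

    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    def crawl(seed_url, max_pages=100):
        frontier = [seed_url]          # the crawl frontier, seeded with one URL
        visited = set()
        while frontier and len(visited) < max_pages:
            url = frontier.pop(0)
            if url in visited:
                continue
            visited.add(url)
            try:
                page = requests.get(url, timeout=10)
            except requests.RequestException:
                continue
            soup = BeautifulSoup(page.text, "html.parser")
            # extract every hyperlink on the page and add it to the frontier
            for anchor in soup.find_all("a", href=True):
                frontier.append(urljoin(url, anchor["href"]))
        return visited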
2.2 Rotto Link Web Crawler
Rotto Link web crawler extracts the broken links (i.e. dead links) within a complete website. The application takes a seed URL, the URL of the website to be crawled, visits every page of the website and searches for broken (dead) links. As the crawler visits these URLs, it identifies all the hyperlinks in each web page and divides them into two groups: internal links (referring to the same site) and external links (referring to outside websites). The application checks whether a hyperlink is broken by requesting its header; if a link is not broken and is internal, it is added to the queue of URLs to visit, called the crawl frontier (i.e. the worker queue).

The content of each web page is also extracted and searched for the keywords requested by the user. For keyword matching, a very popular algorithm is implemented: the Aho–Corasick string matching algorithm. "The Aho–Corasick string matching algorithm is a string searching algorithm invented by Alfred V. Aho and Margaret J. Corasick. It is a kind of dictionary-matching algorithm that locates elements of a finite set of strings (the 'dictionary') within an input text. It matches all patterns simultaneously. The complexity of the algorithm is linear in the length of the patterns plus the length of the searched text plus the number of output matches."

If broken links are found on a page, the matched keywords along with the list of all broken links are stored in the database. The whole process iterates over this sequence until every web page in the worker queue has been processed. The crawler then checks whether the queue is empty; once it is, i.e. there are no more links to crawl, the status of the process is changed from "queued" to "finished". When a website has been crawled completely, the application notifies the requesting user by an email containing the link to the results. The user can click on the link to see all the statistics generated by the broken-link crawl, check what is wrong in the crawled website, and fix the page content accordingly. A sketch of the link-checking and keyword-matching steps follows.
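A minimal sketch of these two steps, assuming the requests package for the header check and the pyahocorasick package for keyword matching (the report does not name the exact packages used for these steps, so the function names are illustrative):

    from urllib.parse import urlparse

    import ahocorasick
    import requests

    def is_broken(url):
        # request only the headers; a 4xx/5xx status or a failed request
        # marks the link as broken ("rotto")
        try:
            response = requests.head(url, allow_redirects=True, timeout=10)
            return response.status_code >= 400
        except requests.RequestException:
            return True

    def is_internal(link, seed_url):
        # a link is internal when it shares the seed URL's network location
        return urlparse(link).netloc == urlparse(seed_url).netloc

    def match_keywords(keywords, page_text):
        # build an Aho-Corasick automaton from the keyword set, then find
        # every keyword occurrence in the page text in a single pass
        automaton = ahocorasick.Automaton()
        for index, keyword in enumerate(keywords):
            automaton.add_word(keyword, (index, keyword))
        automaton.make_automaton()
        return {word for _, (_, word) in automaton.iter(page_text)}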
The application is primarily divided into two parts: a back-end section that performs the crawling, and a front-end section that collects the crawl request from the user.
2.3 Back End

The back end of the application is built on Flask, a Python microframework. Flask is a lightweight web application framework written in Python, based on the Werkzeug WSGI toolkit and the Jinja2 template engine, and is BSD licensed. Flask takes the flexible Python programming language and provides a simple template for web development; once imported into Python, Flask can be used to save time building web applications. An example of an application powered by the Flask framework is the community web page for Flask.
The back end of the application further consists of two parts, the REST web API and the crawler modules:
● The web API acts as an interface between the front end and the back end of the application.
● The crawler modules carry out the core work: dispatching, scraping, storing data and mailing.
2.3.1 Web API

The application's web API conforms to the REST standard and has two main endpoints: one that takes as input a request for a website to be crawled, and one that returns the result for a given job id. The web API accepts only HTTP JSON requests and responds with a JSON object as output. The detailed description of these two endpoints is as follows:
● /api/v1.0/crawl/ Takes as input three fields, i.e. the seed URL of the website, an array of keywords, and the email id of the user, in a JSON object. Returns a JSON object containing a serialized Website class object with fields such as the website/job id, status, etc.
Example of HTTP GET request:
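(The original report showed this example as a screenshot; an illustrative request body, with field names assumed from the description above, might look like:)

    {
        "url": "http://example.com",
        "keywords": ["python", "crawler"],
        "email": "user@example.com"
    }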
Example of HTTP GET Response:
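(Again illustrative; the field names follow the Website model described in section 2.3.2.4, and the original example was a screenshot:)

    {
        "id": 42,
        "url": "http://example.com",
        "status": "queued",
        "keywords": ["python", "crawler"],
        "last_time_crawled": null
    }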
● /api/v1.0/crawl/<website/job-id> Takes as input a website/job id appended to the endpoint. Returns a JSON object containing a Website class model object; this object contains the fields related to the website described above. The results are returned as a field of the object: an array of hyperlinks of web pages containing broken links, together with the list of broken links on each page and the subset of the user's keywords matched on that page.
Example of HTTP GET Response:
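(Illustrative; the result structure is assumed from the description above, and the original example was a screenshot:)

    {
        "id": 42,
        "url": "http://example.com",
        "status": "finished",
        "result": [
            {
                "page": "http://example.com/blog/post-1",
                "broken_links": ["http://gone.example.org/old-page"],
                "matched_keywords": ["python"]
            }
        ]
    }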
The web API also returns descriptive HTTP errors in the response headers, along with a message describing the error:
● HTTP error code 500: Internal Server Error
● HTTP error code 405: Method Not Allowed
● HTTP error code 400: Bad Request
● HTTP error code 404: Not Found
Example of HTTP Error Response:
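(Illustrative; the message text is an assumption, and the original example was a screenshot:)

    {
        "error": "Not Found",
        "message": "No job exists for the given id"
    }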
2.3.2 Crawler Module

The crawler module is the heart of the application. It performs several vital processes: dispatching sets of websites from the database to the worker queue, crawling web pages popped from the worker queue, storing data in the database, and mailing the result link back to the user. In implementing the web crawler, several Python packages are used for extracting and manipulating web pages. The Python packages used in this application are as follows:
● GRequests: to fetch the content of a web page given its URL.
● BeautifulSoup: to extract plain text and links from the contents of a web page.
● NLTK: to convert UTF-8 text into plain text.
● SQLAlchemy: an ORM (Object Relational Mapper) for database-intensive tasks.
● Redis: for implementing the workers that spawn crawling processes from the worker queue.
● Logbook: for application logging.
● smtplib: the Python mailing module for sending mail.
2.3.2.1 GRequests

GRequests allows you to use Requests with Gevent to make asynchronous HTTP requests easily.
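A minimal usage sketch (the URLs are placeholders):

    import grequests

    urls = ["http://example.com/a", "http://example.com/b"]
    pending = (grequests.get(url, timeout=10) for url in urls)
    responses = grequests.map(pending)  # fetched concurrently; None for failed requests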
2.3.2.2 BeautifulSoup

Beautiful Soup sits atop an HTML or XML parser, providing Pythonic idioms for iterating, searching, and modifying the parse tree.
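A minimal sketch of the two operations the crawler needs, extracting links and plain text:

    from bs4 import BeautifulSoup

    html = '<p>Hello <a href="/about">about us</a></p>'
    soup = BeautifulSoup(html, "html.parser")
    links = [a["href"] for a in soup.find_all("a", href=True)]  # ['/about']
    text = soup.get_text()                                      # 'Hello about us'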
2.3.2.3 NLTK

NLTK is a leading platform for building Python programs that work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning.
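The report does not spell out the exact text-cleaning calls used, so the following only shows a representative tokenization step (the punkt model must be downloaded once):

    import nltk
    nltk.download("punkt")  # one-time download of the tokenizer model
    from nltk.tokenize import word_tokenize

    tokens = word_tokenize("Broken links hurt user experience.")
    # ['Broken', 'links', 'hurt', 'user', 'experience', '.']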
2.3.2.4 SQLAlchemy

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL. It provides a full suite of well-known enterprise-level persistence patterns, designed for efficient and high-performing database access, adapted into a simple and Pythonic domain language.

There are two model classes used for storing data related to users and website results (a declarative sketch follows the list):
● Website class model: fields related to website results.
○ id: unique id of the website.
○ url: root URL of the website.
○ last_time_crawled: timestamp of the last crawl.
○ status: status of the website.
○ keywords: keywords to be searched in the web pages.
○ result: result of the crawl in JSON form.
○ userid: id of the user who requested the crawl of this website.
● User class model: fields related to the user.
○ id: unique id of the user.
○ email_id: mail id of the user to which the result will be mailed.
○ websites: the websites requested by the user.
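A minimal declarative sketch of these two models (column types, lengths and the relationship wiring are assumptions, not taken from the project source):

    from sqlalchemy import Column, DateTime, ForeignKey, Integer, String, Text
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)       # unique id of the user
        email_id = Column(String(120), unique=True)  # address the result is mailed to
        websites = relationship("Website", backref="user")

    class Website(Base):
        __tablename__ = "websites"
        id = Column(Integer, primary_key=True)       # unique id / job id
        url = Column(String(500))                    # root URL of the website
        last_time_crawled = Column(DateTime)         # timestamp of the last crawl
        status = Column(String(20))                  # e.g. "queued" or "finished"
        keywords = Column(Text)                      # keywords to search for
        result = Column(Text)                        # crawl result in JSON form
        userid = Column(Integer, ForeignKey("users.id"))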
2.3.2.5 Redis

RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and is designed to have a low barrier to entry, so it can be integrated into a web stack easily. This application uses two Redis workers (a minimal RQ sketch follows the list):
● Dispatcher: a worker that pops five websites to be crawled from the database and pushes them into the worker queue.
● Crawler: a worker that pops a hyperlink from the worker queue, processes the page, extracts the broken links, enqueues newly found hyperlinks into the worker queue, inserts the result into the database and mails a link back to the user for accessing the result.
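A minimal RQ sketch (the function name and job argument are illustrative; a separate worker process, started with the `rq worker` command, executes the queued jobs):

    from redis import Redis
    from rq import Queue

    queue = Queue(connection=Redis())       # the worker queue, backed by Redis

    def crawl_website(website_id):
        ...  # fetch pages, extract broken links, store the results

    job = queue.enqueue(crawl_website, 42)  # 42 stands in for a website/job id
    print(job.id, job.get_status())         # e.g. "queued"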
2.3.2.6 Logbook

Logbook is based on the concept of loggers that are extensible by the application. Each logger and handler, as well as other parts of the system, may inject additional information into the logging record, improving the usefulness of log entries. It also supports injecting additional information for all logging calls happening in a specific thread or in the whole application; for example, this makes it possible for a web application to add request-specific information to each log record, such as the remote address, request URL, HTTP method and more. The logging system is (besides the stack) stateless, which makes unit testing it very simple. If context managers are used, it is impossible to corrupt the stack, so each test can easily hook in custom log handlers.
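A minimal Logbook sketch (the logger name is illustrative):

    import sys
    from logbook import Logger, StreamHandler

    StreamHandler(sys.stdout).push_application()  # route log records to stdout
    log = Logger("rotto-crawler")
    log.info("crawling {} ...", "http://example.com")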
2.3.2.7 SMTP

The smtplib module defines an SMTP client session object that can be used to send mail to any Internet machine with an SMTP or ESMTP listener daemon.
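A minimal sketch of mailing the result link back to the user (the SMTP host, addresses and link are placeholders; it assumes an SMTP daemon on localhost):

    import smtplib
    from email.mime.text import MIMEText

    message = MIMEText("Your crawl has finished: http://example.com/result/42")
    message["Subject"] = "Rotto Link Crawler: results ready"
    message["From"] = "crawler@example.com"
    message["To"] = "user@example.com"

    with smtplib.SMTP("localhost") as server:
        server.send_message(message)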
2.4 Front End

To make the application's user interface more interactive, the AngularJS front-end framework is used. HTML is great for declaring static documents, but it falters when used for declaring dynamic views in web applications. AngularJS lets you extend the HTML vocabulary for your application; the resulting environment is extraordinarily expressive, readable and quick to develop. AngularJS is a toolset for building the framework most suited to your application development. It is fully extensible and works well with other libraries, and every feature can be modified or replaced to suit a unique development workflow and feature needs.

The UI of the application takes input in three stages:
● Target website URL: a valid hyperlink to the website to be crawled.
● Keywords: keywords to be searched on the pages that contain dead links.
● User mail: the email id of the user to which the result link is mailed once crawling is done.
When the user submits the form, the UI makes an HTTP GET request to the back-end web API. The request contains the three input fields described above and their respective values in JSON form.
2.5 Screenshots of Application

● Input field for the seed URL of the website to be crawled.
● Input field for the set of keywords to be matched.
● Input field for the email id of the user, to which the result hyperlink is sent once crawling completes.
● Confirm details and submit request page.
● Result page, showing the list of hyperlinks of pages that contain broken links, along with the broken links themselves and the set of keywords matched on each page, in a nested form.
Scope
Hidden web data integration is a major challenge nowadays. Because of the autonomous and heterogeneous nature of hidden web content, traditional search engines have become an ineffective way to search this kind of data: they can neither integrate the data nor query the hidden web sites. Hidden web data needs syntactic and semantic matching to achieve fully automatic integration.

The Rotto web crawler can be widely used in the web industry to search for links and content. Many companies run heavy websites, such as news, blogging, educational and government sites, and add a large number of pages and hyperlinks, referring both to internal pages and to other websites, daily. Old content on these sites is rarely reviewed by the admin for correctness; as time passes, URLs mentioned in pages turn into dead links, and the admin is never notified. An application like this can be very useful for finding broken links in such websites and helps the site admin keep the content with fewer flaws. The application's keyword-search service helps the owner of the site find the articles around which links are broken, so pages on a specific topic can be kept error-free. This crawler enhances the overall user experience and the robustness of the web platform.

Source code available at: https://github.com/sunnykrGupta/RottoLinksScraper
Conclusion
During the project development, we studied web crawling at many different levels. Our main objectives were to develop a model for web crawling, to study crawling strategies and to build a web crawler implementing them. In this work, various challenges in the area of hidden web data extraction and their possible solutions have been discussed. Although the system extracts, collects and integrates data from various websites successfully, the work could be extended in the near future. A search crawler has been created and tested on a particular domain, i.e. text and hyperlinks; this work could be extended to other domains by integrating it with a unified search interface.