9lessons Programming Blog - Tutorials about Angular, ReactJS, PHP, MySQL and Web Development
Showing posts with label google. Show all posts
Tuesday, March 05, 2013

OAuth Login for LinkedIn, Facebook, Google and Microsoft

Introducing the new lightweight OAuth Login commercial edition, an OAuth login system for your website with Facebook, Google, Microsoft, and LinkedIn. OAuth Login is quick and powerful, and it will surely help you increase your web project registrations. It's definitely a must-have login system for any PHP-based web project. Installation takes hardly 10 minutes.

Login with Facebook and Google.

Monday, October 22, 2012

Login with Facebook and Google.

Introducing the new FGlogin commercial edition, an OAuth login system for your website with Facebook and Google. FGlogin is quick and powerful, and it will surely help you increase your web project registrations. It's definitely a must-have login system for any PHP-based web project. Installation takes hardly 5 minutes.

Login with Facebook and Google.
Sunday, September 23, 2012

Login with Google Account OAuth

We have covered almost all of the Open Authentication (OAuth) login systems, for Facebook, Twitter, Google Plus, and Instagram, but unfortunately I missed the most important one: Google Open Authentication login. Today I want to explain how to implement it for your web project. This script is very quick, and it will surely help you increase your web project registrations.

Login with Google Account OAuth.
Monday, July 02, 2012

Google Apps Standard for Free.

I strongly suggest that every web developer own a domain; it helps you understand real-world web systems. The web giant Google offers a free service called Google Apps Standard Edition, which gives you Gmail, GTalk, GCal, and GDocs. This post explains how to set up the free Google Apps Standard edition for your domain.

Google Apps Standard for Free.
Tuesday, March 06, 2012

Access Websense Blocked Articles.

Are you working at a multinational company and unable to access popular technology blogs hosted on WordPress and Blogspot because of Websense firewall rules? I'm introducing reader.9lessons.info, with which you can access Websense-blocked articles. The concept is simple: it takes a URL's RSS data via api.thequeue.org, sending the request and receiving the response through the Google Feed API. Using this you can build a Flipboard-style application.

Access Websense Blocked URLs.
Wednesday, September 21, 2011

Google Plus Style Drag and Drop adding Groups

Are you looking for a Google Plus style drag-and-drop interface for adding friends to groups or circles? The Google Plus circles implementation is so cool that I have tried building a similar user-groups application with a drag-and-drop effect using jQuery and PHP. I hope it's useful for your social media web projects.

Google Plus Style Drag and Drop adding Groups

Sunday, July 17, 2011

Login with Google Account OpenID

A few months back we posted an article about login with Twitter and Facebook; it is one of the most popular posts on 9lessons.info. We have now developed a system for quick login with a Google account using OpenID authentication. This script helps you avoid registration forms and email verification; it's very quick, useful, and simple to integrate.

Login with Google OpenID

Monday, July 11, 2011

Google Plus Style Animations with jQuery and CSS3

Google Plus has given an awesome kick to user experience, especially the circles UI animations. I feel it sets a great new standard for user experience design. I have tried the circle rotation animation effect with jQuery and CSS3. It takes just a few lines of code, applying CSS styles using jQuery methods like .addClass() and .animate(). Take a look at these live demos in a modern browser.
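
As a rough illustration of the approach (not the exact demo code; the class and selector names here are made up), a CSS3 transition class can be toggled from jQuery:

/* CSS: a hypothetical .spin class driven by a CSS3 transition */
.circle { -webkit-transition: -webkit-transform 0.4s ease; transition: transform 0.4s ease; }
.circle.spin { -webkit-transform: rotate(360deg); transform: rotate(360deg); }

// jQuery: toggle the class on hover to trigger the rotation
$('.circle').hover(function () {
  $(this).addClass('spin');
}, function () {
  $(this).removeClass('spin');
});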

Special Animation Effects

Monday, June 13, 2011

Import GMail Contacts Google OAuth Connect with PHP.

Inviting friends is one of the most important parts of a web project's success. This post explains how to import Gmail contacts from the address book via the Google open authorization (OAuth) protocol with PHP. It's simple, with just a few configurations: follow the four steps below, use this script, and give your users the option to invite more friends.

Importing GMail Contacts Google OAuth Connect with PHP.
Tuesday, July 06, 2010

Google Like CAPTCHA with PHP.

Are you looking for a Google-style CAPTCHA (human verification code) script for PHP projects? Please take a look at this post. I want to explain how to implement the cool-php-captcha script for forms. Use it to add security to your web projects.

Wednesday, November 18, 2009

Press Enter to Search - Google Future Advertising Plan.

Have you seen that Google is experimenting with a search page (google.co.uk) that shows visually nothing except the text "press enter to search" in place of the search button? My view is that Google will use the blank space on the home page for advertising. Last week I read an article on Mashable (An Ad on Google's Homepage).


Wednesday, July 01, 2009

Flash colors effect with MooTools Framework.

This is my first post about the MooTools framework: it's powerful, cross-browser, and flexible. This post shows how to implement a flashing colors effect with the MooTools framework loaded from the Google AJAX Libraries API. It's simple and useful for displaying colorful titles on your web projects.


Demo
Wednesday, May 27, 2009

Draggable with jQuery from Google Ajax Libraries API.

Implement jQuery's draggable() on an element, with jQuery loaded from the Google AJAX Libraries API. It's simple, just a few lines of code; take a look at the live demo.
Google hosts frequently used JavaScript libraries, e.g. jQuery, MooTools, and Dojo. Use google.load() to speed up your website's loading.
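
A minimal sketch of the idea, assuming jQuery UI (which provides draggable()) is also loaded from the Google AJAX Libraries API; the element id is made up for the example:

<script type='text/javascript' src='http://www.google.com/jsapi'></script>
<script type='text/javascript'>
// Load jQuery and jQuery UI from the Google AJAX Libraries API
google.load("jquery", "1.3.2");
google.load("jqueryui", "1.7.2");

// Run once both libraries have finished loading
google.setOnLoadCallback(function () {
  $("#box").draggable(); // make the element with id="box" draggable
});
</script>
<div id="box">Drag me</div>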

Thursday, May 07, 2009

Draw Charts with Google Visualization API.

The Google Visualization API is a JavaScript library that provides methods to load and control visualizations. It's useful for displaying nice charts on your websites.

To load the Google Visualization AJAX API:
<script type='text/javascript' src='http://www.google.com/jsapi'></script>
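
A minimal sketch of drawing a chart with it; the element id and the data below are made up for the example:

<script type='text/javascript'>
// Load the Visualization API with the pie chart package
google.load('visualization', '1', {packages: ['piechart']});

// Draw the chart once the API has loaded
google.setOnLoadCallback(function () {
  var data = new google.visualization.DataTable();
  data.addColumn('string', 'Browser');
  data.addColumn('number', 'Visits');
  data.addRows([['Firefox', 50], ['IE', 30], ['Chrome', 20]]);

  var chart = new google.visualization.PieChart(document.getElementById('chart_div'));
  chart.draw(data, {width: 400, height: 240});
});
</script>
<div id='chart_div'></div>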

Sunday, September 28, 2008

Google Search Major Data Structures Documentation

Google Search major data structures documentation, submitted by Sergey Brin and Lawrence Page at Stanford University.

Previous Article Link: Google Search Architecture Diagram Overview

Google's data structures are optimized so that a large document collection can be crawled, indexed, and searched with little cost. Although CPUs and bulk input/output rates have improved dramatically over the years, a disk seek still requires about 10 ms to complete. Google is designed to avoid disk seeks whenever possible, and this has had a considerable influence on the design of the data structures.

1. BigFiles

BigFiles are virtual files spanning multiple file systems and are addressable by 64 bit integers. The allocation among multiple file systems is handled automatically. The BigFiles package also handles allocation and deallocation of file descriptors, since the operating systems do not provide enough for our needs. BigFiles also support rudimentary compression options.

2. Repository

The repository contains the full HTML of every web page. Each page is compressed using zlib (see RFC1950). The choice of compression technique is a tradeoff between speed and compression ratio. We chose zlib's speed over a significant improvement in compression offered by bzip. The compression rate of bzip was approximately 4 to 1 on the repository as compared to zlib's 3 to 1 compression. In the repository, the documents are stored one after the other and are prefixed by docID, length, and URL, as can be seen in the figure. The repository requires no other data structures to be used in order to access it. This helps with data consistency and makes development much easier; we can rebuild all the other data structures from only the repository and a file which lists crawler errors.

Repository Data Structure
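
As a rough illustration of that record layout (not Google's actual code, and with field widths simplified), here is a sketch in JavaScript using Node.js's built-in zlib:

var zlib = require('zlib');

// Illustrative record layout: [docID][URL length][page length][URL][zlib-compressed HTML]
function packRecord(docID, url, html) {
  var page = zlib.deflateSync(Buffer.from(html)); // compress the page with zlib
  var urlBuf = Buffer.from(url);
  var header = Buffer.alloc(12);
  header.writeUInt32BE(docID, 0);         // docID (simplified to 32 bits here)
  header.writeUInt32BE(urlBuf.length, 4); // length of the URL
  header.writeUInt32BE(page.length, 8);   // length of the compressed page
  // Records are simply stored one after another in the repository file
  return Buffer.concat([header, urlBuf, page]);
}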

3. Document Index

The document index keeps information about each document. It is a fixed width ISAM (Index sequential access mode) index, ordered by docID. The information stored in each entry includes the current document status, a pointer into the repository, a document checksum, and various statistics. If the document has been crawled, it also contains a pointer into a variable width file called docinfo which contains its URL and title. Otherwise the pointer points into the URLlist which contains just the URL. This design decision was driven by the desire to have a reasonably compact data structure, and the ability to fetch a record in one disk seek during a search.

Additionally, there is a file which is used to convert URLs into docIDs. It is a list of URL checksums with their corresponding docIDs and is sorted by checksum. In order to find the docID of a particular URL, the URL's checksum is computed and a binary search is performed on the checksums file to find its docID. URLs may be converted into docIDs in batch by doing a merge with this file. This is the technique the URLresolver uses to turn URLs into docIDs. This batch mode of update is crucial because otherwise we must perform one seek for every link which assuming one disk would take more than a month for our 322 million link dataset.
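
A sketch of that lookup in JavaScript, with checksums simplified to plain numbers for illustration:

// entries is sorted by checksum: [{checksum: ..., docID: ...}, ...]
function lookupDocID(entries, checksum) {
  var lo = 0, hi = entries.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1; // midpoint of the current range
    if (entries[mid].checksum < checksum) lo = mid + 1;
    else if (entries[mid].checksum > checksum) hi = mid - 1;
    else return entries[mid].docID; // checksum found
  }
  return -1; // URL is not in the index
}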

4. Lexicon

The lexicon has several different forms. One important change from earlier systems is that the lexicon can fit in memory for a reasonable price. In the current implementation we can keep the lexicon in memory on a machine with 256 MB of main memory. The current lexicon contains 14 million words (though some rare words were not added to the lexicon). It is implemented in two parts -- a list of the words (concatenated together but separated by nulls) and a hash table of pointers. For various functions, the list of words has some auxiliary information which is beyond the scope of this paper to explain fully.
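
A rough sketch of that two-part layout, with JavaScript standing in for the in-memory C structures (the hash of offsets plays the role of the hash table of pointers):

// Build the lexicon: one long string of null-separated words,
// plus a hash table mapping each word to its offset in the string.
function buildLexicon(words) {
  var list = '', pointers = {};
  for (var i = 0; i < words.length; i++) {
    pointers[words[i]] = list.length; // the "pointer" is an offset into the list
    list += words[i] + '\0';          // words concatenated, separated by nulls
  }
  return { list: list, pointers: pointers };
}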


5. Hit Lists

A hit list corresponds to a list of occurrences of a particular word in a particular document including position, font, and capitalization information. Hit lists account for most of the space used in both the forward and the inverted indices. Because of this, it is important to represent them as efficiently as possible. We considered several alternatives for encoding position, font, and capitalization -- simple encoding (a triple of integers), a compact encoding (a hand optimized allocation of bits), and Huffman coding. In the end we chose a hand optimized compact encoding since it required far less space than the simple encoding and far less bit manipulation than Huffman coding. The details of the hits are shown in Figure.

Our compact encoding uses two bytes for every hit. There are two types of hits: fancy hits and plain hits. Fancy hits include hits occurring in a URL, title, anchor text, or meta tag. Plain hits include everything else. A plain hit consists of a capitalization bit, font size, and 12 bits of word position in a document. Font size is represented relative to the rest of the document using three bits (only 7 values are actually used because 111 is the flag that signals a fancy hit). A fancy hit consists of a capitalization bit, the font size set to 7 to indicate it is a fancy hit, 4 bits to encode the type of fancy hit, and 8 bits of position. For anchor hits, the 8 bits of position are split into 4 bits for position in anchor and 4 bits for a hash of the docID the anchor occurs in. This gives us some limited phrase searching as long as there are not that many anchors for a particular word. We expect to update the way that anchor hits are stored to allow for greater resolution in the position and docIDhash fields. We use font size relative to the rest of the document because when searching, you do not want to rank otherwise identical documents differently just because one of the documents is in a larger font.
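
An illustrative packing of those two-byte hits in JavaScript; the bit layout follows the description above, while the helper names are mine:

// Plain hit: 1 capitalization bit, 3 bits of font size, 12 bits of position.
function packPlainHit(cap, fontSize, position) {
  return (cap << 15) | ((fontSize & 7) << 12) | (position & 0xFFF);
}

// Fancy hit: the font size field is set to 7 (binary 111) as a flag,
// followed by 4 bits for the fancy-hit type and 8 bits of position.
function packFancyHit(cap, type, position) {
  return (cap << 15) | (7 << 12) | ((type & 0xF) << 8) | (position & 0xFF);
}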

Forward and Reverse Indexes and the Lexicon

The length of a hit list is stored before the hits themselves. To save space, the length of the hit list is combined with the wordID in the forward index and the docID in the inverted index. This limits it to 8 and 5 bits respectively (there are some tricks which allow 8 bits to be borrowed from the wordID). If the length is longer than would fit in that many bits, an escape code is used in those bits, and the next two bytes contain the actual length.

6. Forward Index

The forward index is actually already partially sorted. It is stored in a number of barrels (we used 64). Each barrel holds a range of wordID's. If a document contains words that fall into a particular barrel, the docID is recorded into the barrel, followed by a list of wordID's with hitlists which correspond to those words. This scheme requires slightly more storage because of duplicated docIDs but the difference is very small for a reasonable number of buckets and saves considerable time and coding complexity in the final indexing phase done by the sorter. Furthermore, instead of storing actual wordID's, we store each wordID as a relative difference from the minimum wordID that falls into the barrel the wordID is in. This way, we can use just 24 bits for the wordID's in the unsorted barrels, leaving 8 bits for the hit list length.
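
A sketch of that wordID trick: storing each wordID relative to its barrel's minimum fits it into 24 bits, leaving 8 bits for the hit list length (the function name is mine):

// Pack a forward-index entry: 24-bit relative wordID plus 8-bit hit count.
function packForwardEntry(wordID, barrelMinWordID, hitCount) {
  var relative = wordID - barrelMinWordID; // fits in 24 bits within one barrel
  // Multiply instead of << 8, which would overflow JavaScript's 32-bit bitwise ops.
  return relative * 256 + (hitCount & 0xFF);
}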

7. Inverted Index

The inverted index consists of the same barrels as the forward index, except that they have been processed by the sorter. For every valid wordID, the lexicon contains a pointer into the barrel that wordID falls into. It points to a doclist of docID's together with their corresponding hit lists. This doclist represents all the occurrences of that word in all documents.

An important issue is in what order the docID's should appear in the doclist. One simple solution is to store them sorted by docID. This allows for quick merging of different doclists for multiple word queries. Another option is to store them sorted by a ranking of the occurrence of the word in each document. This makes answering one word queries trivial and makes it likely that the answers to multiple word queries are near the start. However, merging is much more difficult. Also, this makes development much more difficult in that a change to the ranking function requires a rebuild of the index. We chose a compromise between these options, keeping two sets of inverted barrels -- one set for hit lists which include title or anchor hits and another set for all hit lists. This way, we check the first set of barrels first and if there are not enough matches within those barrels we check the larger ones.
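
Keeping doclists sorted by docID makes a multiple word query a simple merge; a sketch of intersecting two sorted doclists for a two-word AND query:

// Intersect two doclists that are sorted by docID.
function mergeDoclists(a, b) {
  var result = [], i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i].docID < b[j].docID) i++;
    else if (a[i].docID > b[j].docID) j++;
    else { result.push(a[i].docID); i++; j++; } // document contains both words
  }
  return result;
}
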
Thursday, September 25, 2008

Google Search Architecture Overview

Friends, just imagine: without Google Search we couldn't manage! Most of Google is implemented in C or C++ for efficiency and can run on either Solaris or Linux.

In Google, the web crawling (downloading of web pages) is done by several distributed crawlers. There is a URLserver that sends lists of URLs to be fetched to the crawlers. The web pages that are fetched are then sent to the storeserver. The storeserver then compresses and stores the web pages into a repository. Every web page has an associated ID number called a docID which is assigned whenever a new URL is parsed out of a web page. The indexing function is performed by the indexer and the sorter. The indexer performs a number of functions. It reads the repository, uncompresses the documents, and parses them. Each document is converted into a set of word occurrences called hits.

High Level Google Architecture


The hits record the word, position in document, an approximation of font size, and capitalization. The indexer distributes these hits into a set of "barrels", creating a partially sorted forward index. The indexer performs another important function. It parses out all the links in every web page and stores important information about them in an anchors file. This file contains enough information to determine where each link points from and to, and the text of the link.

The URLresolver reads the anchors file and converts relative URLs into absolute URLs and in turn into docIDs. It puts the anchor text into the forward index, associated with the docID that the anchor points to. It also generates a database of links which are pairs of docIDs. The links database is used to compute PageRanks for all the documents.
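
As background, PageRank can be computed iteratively from that links database using the well-known formula PR(A) = (1 - d) + d * (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn)); a simplified sketch:

// links: array of [fromDocID, toDocID] pairs; n: number of documents.
function pageRank(links, n, iterations) {
  var d = 0.85, pr = [], outCount = [];
  for (var i = 0; i < n; i++) { pr[i] = 1; outCount[i] = 0; }
  links.forEach(function (link) { outCount[link[0]]++; });

  for (var it = 0; it < iterations; it++) {
    var next = [];
    for (var k = 0; k < n; k++) next[k] = 1 - d;
    links.forEach(function (link) {
      // Each page shares its rank along its outgoing links
      next[link[1]] += d * pr[link[0]] / outCount[link[0]];
    });
    pr = next;
  }
  return pr;
}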



The sorter takes the barrels, which are sorted by docID, and resorts them by wordID to generate the inverted index. This is done in place so that little temporary space is needed for this operation. The sorter also produces a list of wordIDs and offsets into the inverted index. A program called DumpLexicon takes this list together with the lexicon produced by the indexer and generates a new lexicon to be used by the searcher. The searcher is run by a web server and uses the lexicon built by DumpLexicon together with the inverted index and the PageRanks to answer queries.



Next Article Link: Google Search Major Data Structures Documentation
Saturday, September 13, 2008

Add Your Site/Blog URL in Google Search Database.

Recently my friend Sreekanth published a website about his company, but the Google search database didn't know about it. So how do you add a site URL to the Google search database?

Go to the Google Webmaster Tools website.

Verify your site URL with Google.

Copy the verification meta tag from Google and paste it into your site's index/home page, just after the title tag.

After saving the site home/index page, click the Verify button.
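
At the time, the verification tag looked something like the snippet below; the content value here is a placeholder, so use the exact tag Google Webmaster Tools generates for your site:

<title>My Site</title>
<meta name='verify-v1' content='your-unique-verification-string' />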


Add a sitemap: every website should have its site structure in XML format, and there are many sitemap generators on the internet, e.g. http://www.xml-sitemaps.com.


Blogger/WordPress blogs already have default sitemap files, such as atom.xml.

Your site will be added to the Google database within 2-3 weeks.