Friday, October 23, 2009

Google Analytics New Features

Google Analytics has added new features to provide more flexibility, customization and adaptability according to the needs of your enterprise.

These features are categorized as Powerful, Flexible and Intelligent.

Powerful Features:

  • Engagement Goals: Two new goal types allow you to measure user engagement and branding success on your site. The new goal types allow you to set thresholds for Time on Site and Pages per Visit. Furthermore, you can now define up to 20 goals per profile.

  • Expanded Mobile Reporting: Google Analytics now tracks mobile websites and mobile apps so you can better measure your mobile marketing efforts. If you're optimizing content for mobile users and have created a mobile website, Google Analytics can track traffic to your mobile website from all web-enabled devices, whether or not the device runs JavaScript.

  • Advanced Analysis Features: An Advanced Table Filtering feature is being added to the arsenal of power tools you can use to perform advanced data analysis. It allows you to filter the rows in a table based on different metric conditions.

  • Unique Visitor Metric: Now when you create a Custom Report, you can select Unique Visitors as a metric against any dimension in Google Analytics. This allows marketers to see how many actual visitors (unique cookies) make up any user-defined segment, as illustrated in the sketch after this list.
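As a rough, toy illustration of what these goal thresholds and the unique-visitor count mean in practice, the short Python sketch below computes them from a handful of invented visit records; the field names, values and thresholds are made up for this example and are not part of the Google Analytics product or its API.

    # Toy visit records; visitor_id stands in for the unique visitor cookie.
    visits = [
        {"visitor_id": "cookie-a", "time_on_site_sec": 310, "pages": 7},
        {"visitor_id": "cookie-b", "time_on_site_sec": 45,  "pages": 1},
        {"visitor_id": "cookie-a", "time_on_site_sec": 120, "pages": 3},
    ]

    # Hypothetical engagement-goal thresholds.
    TIME_ON_SITE_GOAL_SEC = 300   # e.g. at least 5 minutes on the site
    PAGES_PER_VISIT_GOAL = 5      # e.g. at least 5 pages viewed in a visit

    time_goal_hits = sum(v["time_on_site_sec"] >= TIME_ON_SITE_GOAL_SEC for v in visits)
    pages_goal_hits = sum(v["pages"] >= PAGES_PER_VISIT_GOAL for v in visits)
    unique_visitors = len({v["visitor_id"] for v in visits})  # unique cookies

    print(time_goal_hits, pages_goal_hits, unique_visitors)  # 1 1 2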



Flexible Features:
  • Multiple Custom Variables: Custom Variables provide you the power and flexibility to customize Google Analytics and collect the unique site usage data most important to your business.

  • Sharing Segments and Custom Report Templates: You may have recently noticed in your accounts the ability to administer and share Custom Reports and Advanced Segments, features we announced earlier this year. Have a Custom Report you created just for the Sales Team? Simply share the URL for that report with anyone who has an Analytics account, and a pre-formatted Sales report template will automatically be imported. You can also now select the profiles in which you want to share or hide your Advanced Segments and Custom Reports.


Intelligent Features:
  • Analytics Intelligence: We're launching the initial phase of an algorithmically driven Intelligence engine in Google Analytics. Analytics Intelligence will provide automatic alerts of significant changes in the data patterns of your site metrics and dimensions over daily, weekly and monthly periods.

  • Custom Alerts make it possible for you to tell Google Analytics what to watch for. You can set daily, weekly, and monthly triggers on different dimensions & metrics, and be notified by email or right in the user interface when the changes actually occur.


Source: http://analytics.blogspot.com/2009/10/google-analytics-now-more-powerful.html

Matt Cutts Prefers HTML Sitemaps over XML Sitemaps

Matt Cutts from Google prefers HTML sitemaps over XML sitemaps, as an HTML sitemap is useful for both users and spiders.

He further explains that an HTML sitemap is a single page that users can browse to find particular information on the site. It is an old-school landing page that lets users find all (or most) of the pages on your website from a single page, and it is best suited for smaller sites.

An XML sitemap is not visible to users; it is useful only to search engine spiders.

Once you make an HTML sitemap, making an XML version is extremely easy, so he advises doing both.
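As a rough sketch of how little extra work the XML version takes once you already have the list of pages an HTML sitemap links to, the Python snippet below writes a sitemap in the standard sitemaps.org <urlset> format; the URLs are placeholders.

    # Build an XML sitemap from the same page list an HTML sitemap would link to.
    import xml.etree.ElementTree as ET

    pages = [
        "http://www.example.com/",
        "http://www.example.com/about.html",
        "http://www.example.com/contact.html",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in pages:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = page

    # Writes sitemap.xml with an XML declaration, ready to submit to search engines.
    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)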

For more information, watch the video below.

Tuesday, August 5, 2008

Google To Add Relevant Titles for Incomplete Search Results

Google is going to add relevant custom titles for incomplete search results based on the page content. This was stated by Google's search quality group on the official Google Blog.

According to Google search quality group "One of the bigger recent changes has been to extract titles for pages that don't specify an HTML title — yet a title on the page is clearly right there, staring at you. To "see" that title that the author of the page intended, we analyze the HTML of the page to determine the title that the author probably meant. This makes it far more likely that you will not ignore a page for want of a good title.”
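As a rough illustration of the idea in that quote, and not Google's actual algorithm, the Python sketch below falls back to the first <h1> on a page when the document has no <title> element; the sample HTML is invented.

    # Guess a title for a page that lacks a <title> element by falling back
    # to the first <h1>. Illustration only, not Google's method.
    from html.parser import HTMLParser

    class TitleGuesser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.title = None
            self.h1 = None
            self._current = None

        def handle_starttag(self, tag, attrs):
            if tag in ("title", "h1"):
                self._current = tag

        def handle_data(self, data):
            if self._current == "title" and self.title is None:
                self.title = data.strip()
            elif self._current == "h1" and self.h1 is None:
                self.h1 = data.strip()

        def handle_endtag(self, tag):
            if tag == self._current:
                self._current = None

    html = "<html><body><h1>Quarterly Sales Report</h1><p>No title tag here.</p></body></html>"
    parser = TitleGuesser()
    parser.feed(html)
    print(parser.title or parser.h1)  # -> Quarterly Sales Report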

Monday, August 4, 2008

Google's New Milestone: 1 Trillion URL Index

Google announced last week that its index has reached 1 trillion URLs. In 1998 it had 26 million pages, and by 2000 the Google index had reached the one billion mark.

Google's official blog also stated that "We don't index every one of those trillion pages -- many of them are similar to each other, or represent auto-generated content similar to the calendar example that isn't very useful to searchers. But we're proud to have the most comprehensive index of any search engine, and our goal always has been to index all the world's data."

Google has now become the leader among search engines, not only in traffic share but also in URL indexing.

Friday, August 1, 2008

Microsoft New " Browse Rank " Theory

According to News.com, Microsoft's research team has recently come up with a new concept known as "BrowseRank". According to this new theory, BrowseRank can prove to be more effective than PageRank, as it would rank pages according to users' online behavior rather than by the number of web pages linking to a specific web page, as Google's PageRank does.

According to the researchers who worked on the BrowseRank theory, "The more visits of the page made by the users and the longer time periods spent by the users on the page, the more likely the page is important. We can leverage hundreds of millions of users' implicit voting on page importance." The research team included Bin Gao, Tie-Yan Liu, and Hang Li from Microsoft Research Asia; Ying Zhang of Nankai University; Zhiming Ma of the Chinese Academy of Sciences; and Shuyuan He of Peking University.

How BrowseRank Is Calculated

BrowseRank takes into account the amount of time a user spends on a particular website. This helps BrowseRank in assessing the quality of the webpage. BrowseRank not only monitors traffic arriving via links, but also has the ability to monitor direct traffic visits via bookmarks or URLs that are typed in the Address Bar.
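As a toy illustration of that intuition only (more visits and longer time on a page suggest higher importance), and not the published BrowseRank algorithm, the Python sketch below scores pages from an invented browsing log.

    # Invented browsing-log data: page -> (number of visits, average seconds per visit).
    browsing_log = {
        "http://example.com/long-article": (1200, 95.0),
        "http://example.com/login":        (5000, 4.0),
        "http://example.com/thin-page":    (300,  2.5),
    }

    def toy_importance(visits, avg_dwell_seconds):
        """Weight a page by total user attention: visits multiplied by dwell time."""
        return visits * avg_dwell_seconds

    scores = {page: toy_importance(v, t) for page, (v, t) in browsing_log.items()}
    for page, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(round(score, 1), page)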

Pages that are more interesting and more popular with users are termed "Green Traffic."

Disadvantages of BrowseRank

As "Browse Rank " Theory takes into consideration the time spent by the user on the web site , it is understood that it default benefits more social networking web sites, which have less useful and quality content.

Friday, June 6, 2008

Google, Yahoo and Live Search Robots Exclusion Protocol

Wikipedia.org defines "The robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a convention to prevent cooperating web spiders and other web robots from accessing all or part of a website which is otherwise publicly viewable. "

In layman's terms, the robot exclusion standard, also known as the Robots Exclusion Protocol or robots.txt protocol, is a way to inform search engine spiders which parts of a website they may access, or to prevent them from accessing all or part of it.

Earlier this week, Microsoft announced that, together with Google and Yahoo, it would offer insight into how each of them handles the protocol.

This means that webmasters will be able to reap the benefits of a common implementation of REP across Google, Yahoo and Live Search.

Common REP Directives
The following list covers all the major REP features currently implemented by Google, Microsoft, and Yahoo!.


1. Robots.txt Directives

Directive: Disallow

Impact : Tells a crawler not to crawl your site or parts of your site -- your site's robots.txt still needs to be crawled to find this directive, but the disallowed pages will not be crawled

Use Cases: 'No crawl' pages from a site. This directive in the default syntax prevents specific path(s) of a site from being crawled

Directive: Allow
Impact : Tells a crawler the specific pages on your site you want indexed so you can use this in combination with Disallow. If both Disallow and Allow clauses apply to a URL, the most specific rule – the longest rule – applies.

Use Cases: This is useful in particular in conjunction with Disallow clauses, where a large section of a site is disallowed, except a small section within it.
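As a sketch of the "most specific (longest) rule wins" behaviour described above, here is a small hand-rolled Python matcher; it illustrates the precedence rule only, uses made-up paths, and is not any engine's actual parser.

    # Rules as (directive, path-prefix) pairs; a URL path is checked against both.
    rules = [
        ("Disallow", "/folder/"),
        ("Allow", "/folder/page.html"),
    ]

    def is_allowed(path):
        # Pick the matching rule with the longest path; with no match, crawling is allowed.
        matches = [(len(prefix), kind) for kind, prefix in rules if path.startswith(prefix)]
        if not matches:
            return True
        return max(matches)[1] == "Allow"

    print(is_allowed("/folder/page.html"))   # True  (the Allow rule is more specific)
    print(is_allowed("/folder/other.html"))  # False (only the Disallow rule matches)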


Directive: $ Wildcard Support

Impact : Tells a crawler to match everything up to the end of a URL, so a large number of files can be covered without specifying specific pages

Use Cases: 'No Crawl' files with specific patterns, e.g., file types that always have a certain extension, say '.pdf', etc.

Directive: * Wildcard Support

Impact : Tells a crawler to match a sequence of characters (available by end of June)
Use Cases: 'No Crawl' URLs with certain patterns, e.g., disallow URLs with session IDs or other extraneous parameters, etc.
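As a rough sketch of how the '*' and '$' wildcards can be matched, the Python snippet below translates a robots.txt path pattern into a regular expression; the patterns and URLs are examples, not any engine's actual matching code.

    import re

    def pattern_to_regex(pattern):
        # '*' matches any sequence of characters; a trailing '$' anchors the end of the URL.
        anchored = pattern.endswith("$")
        body = re.escape(pattern.rstrip("$")).replace(r"\*", ".*")
        return re.compile("^" + body + ("$" if anchored else ""))

    pdf_rule = pattern_to_regex("/*.pdf$")            # e.g. Disallow: /*.pdf$
    session_rule = pattern_to_regex("/*?sessionid=")  # e.g. Disallow: /*?sessionid=

    print(bool(pdf_rule.match("/docs/report.pdf")))           # True
    print(bool(pdf_rule.match("/docs/report.pdf?download")))  # False (does not end in .pdf)
    print(bool(session_rule.match("/page?sessionid=abc123"))) # True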

Directive: Sitemaps Location

Impact : Tells a crawler where it can find your sitemaps.

Use Cases: Point to other locations where feeds exist to point the crawlers to the site's content
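As a short illustration, Python's standard-library robots.txt parser can read both the Disallow rules and the Sitemap location from a file like the made-up one below (the site_maps() call requires Python 3.8 or newer).

    from urllib.robotparser import RobotFileParser

    # Made-up robots.txt content with a Sitemap location.
    robots_lines = [
        "User-agent: *",
        "Disallow: /private/",
        "Sitemap: http://www.example.com/sitemap.xml",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    print(parser.site_maps())  # ['http://www.example.com/sitemap.xml']
    print(parser.can_fetch("*", "http://www.example.com/private/a.html"))  # False
    print(parser.can_fetch("*", "http://www.example.com/index.html"))      # True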

2. HTML META Directives

Directive: NOINDEX META Tag

Impact : Tells a crawler not to index a given page

Use Cases: Don't index the page. This allows pages that are crawled to be kept out of the index.

Directive: NOFOLLOW META Tag

Impact : Tells a crawler not to follow a link to other content on a given page

Use Cases: Prevent publicly writeable areas from being abused by spammers looking for link credit. By using NOFOLLOW, you let the robot know that you are discounting all outgoing links from this page.

Directive: NOSNIPPET META Tag

Impact : Tells a crawler not to display snippets in the search results for a given page

Use Cases: Present no abstract for the page on Search Results.

Directive: NOARCHIVE / NOCACHE META Tag

Impact : Tells a search engine not to show a "cached" link for a given page

Use Cases: Do not make a copy of the page available to users from the Search Engine cache.

Directive: NOODP META Tag

Impact : Tells a crawler not to use a title and snippet from the Open Directory Project for a given page
Use Cases: Do not use the ODP (Open Directory Project) title and abstract for this page in Search.
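As a simplified sketch of how a crawler could honor the NOINDEX and NOFOLLOW tags above (not any search engine's real code), the Python snippet below reads a robots META tag from an invented page and decides whether the page may be indexed and its links followed.

    from html.parser import HTMLParser

    class RobotsMetaReader(HTMLParser):
        """Collect the directives listed in a robots META tag."""
        def __init__(self):
            super().__init__()
            self.directives = set()

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
                content = attrs.get("content") or ""
                self.directives |= {d.strip().lower() for d in content.split(",")}

    html = '<html><head><meta name="robots" content="NOINDEX, NOFOLLOW"></head></html>'
    reader = RobotsMetaReader()
    reader.feed(html)

    may_index = "noindex" not in reader.directives
    may_follow_links = "nofollow" not in reader.directives
    print(may_index, may_follow_links)  # False False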

In addition to the above, there are other directives supported only by Google:

UNAVAILABLE_AFTER Meta Tag - Tells a crawler when a page should "expire", i.e., after which date it should not show up in search results.

NOIMAGEINDEX Meta Tag - Tells a crawler not to index images for a given page in search results.

NOTRANSLATE Meta Tag - Tells a crawler not to translate the content on a page into different languages for search results.

Friday, February 15, 2008

An overview of the Google Over Optimization filter

Google Over Optimization filter

If someone told you that the race to the top spot in Google search engine rankings is not easy, then he is probably telling the truth. As you enter the World Wide Web, you desperately want to be noticed. You want traffic to start flowing to your website and your website to be at the top of search engine results. But this is not possible.

At least it’s not possible to do this in a legitimate manner. In a bid to hit the top faster than the rest, some people create web pages for search engines rather than for web users. These pages are called over-optimized pages, and they use a series of strategies to attempt to fool search engines.

These websites contain highly competitive keyword phrases that have little or no connection to the overall content of the page.

Google has a filter called the ‘Google Over Optimization filter’ which uses an algorithm to detect such web pages. The penalty is severe.

The penalty and avoiding Google over optimization Filter

It is said that the Google Over Optimization filter is also called the ‘-950 penalty filter’ because the ranking of a penalized website drops by 950 positions.

Yes, this is exactly what happens to any website that uses over optimization techniques. Most webmasters find that their website is suffering in the search engine rankings owing to a filter. But they have no clue why they have been filtered. How do you know if your website is over optimized?

It’s simple, really. Do not employ each and every strategy that you read on a search engine optimization forum. The usage of the H1 tag is one such element: while many SEO practitioners suggest that you use the H1 tag for keywords, others suggest that you avoid it. Analyze what strategies top-ranking web pages have used.

Monday, February 4, 2008

A look at Google spam filters

Ever since Google became the search engine king, the responsibility on its shoulders has increased. The onus is on Google to keep its search results free of spam and fake websites to make every search a worthwhile experience for its users.

Considering that thousands of websites are created every day and a lot of these might be spam websites, the job isn’t easy at all. So Google has created a set of manual and automated filters called Google spam filters.

Google spam filters

The aim of these filters is simple enough. Any website that uses illegitimate techniques to gain an undue advantage in search engine rankings will be greyed out and put into an imaginary quarantine zone.

In the quarantine zone the ranking of the website will suffer and, depending on the filter that has been applied, it can even drop below the hundredth position. To get out of quarantine, the website has to stop using those spam techniques, or it may have to wait till the penalty period is over.

The top 3

The existence of these filters has been a widely debated topic amongst SEO specialists around the world. Some SEO experts go to the extent of saying that there are about 15 Google spam filters now. But there are some who say that there are none. Nevertheless, here are the top 3 Google spam filters that are said to be in use now.

• The sandbox: The Google sandbox filter is a filter that prevents new websites from getting high rankings in search results. Every new website created will automatically go into the sandbox, and as it ages, its credibility will increase. Gradually, the website will start faring better in search results.

• The Trust Rank: This is a very important filter, and it takes into account the age of the website, the quality of its inbound links and its content. A good trust rank means that Google trusts your website.

• The domain age: This is another filter that does not allow newer domains to get high rankings.

Tuesday, January 29, 2008

Understanding Google Domain Age Filter

When you launch a website, you are entering a world that is as populated as the world that we exist in. Maybe it’s even more populated than the real world. The worst part is that every webmaster who launches a website wants to be at the number one spot in Google search engine results.

This makes it extremely difficult for Google to go through each one of these websites and rank them on the basis of their content and SEO techniques. A lot of webmasters want to hit it big early, and hence they choose spam techniques to get to the top really fast.

So Google introduced a set of filters to counter spamming and create a search results page that has only quality websites in it. The Google domain age filter works on the premise that if your domain has been around for a few years, then the chances that you are a spam website are negligible. This is why Google gives older domains a much better ranking than newer ones.

Can you get around it?

Does it mean that it will take a few years before your domain name gets some credibility? Not really, say experts.

  • Buying old domain names is one way to get around this. If there is a domain that has been bought and parked at a domain parking service, then it will have good credibility at Google.


  • Another workaround is to buy the domain name early. If you are planning to start a few websites in the next year or so, then start now and book the domain names. Then tell Google about them so that it can start the evaluation process right away.


  • Also, if and when you are buying an older domain name, do not change the WHOIS information, or you risk losing all the benefits of buying an old domain name.


And keep in mind that this is only half the war won. There are a set of other filters that you have to combat in order to get high rankings.

Thursday, January 24, 2008

Understanding the Google Trust Rank Filter

Google Filters - Google Trust Rank Filter

If you thought that a well detailed SEO program is enough to get you a top rank in the search results pages, then you are misinformed. The information that you have is only partially correct.

The complete information is that you need to follow this program routinely for almost one year to get out of the Google sandbox and into the Google trust rank filter. Both of these are filters used by Google that prevent new websites from gaining an undue advantage by using spamming techniques.

Some websites create thousands of backlinks at once using spamming techniques and dummy web pages to gain a high backlink count. This makes it possible to gain a high page rank the moment the website is launched, defeating the very purpose of a search engine.

Hence the arrival of the Google trust rank filter might have actually benefited web users, but it has made life difficult for webmasters.

The Google trust rank

In order to get a good trust rank with Google, you have to get quality inbound links to your website. This is not easy to achieve and may even take years.

If you lose patience and resort to spamming, then you are out of the trust rank and into the sandbox. Consider them to be two sides of a coin that work in tandem. As the name suggests, the Google trust rank filter is all about creating trust. Keep working on your web promotion using ethical practices, and over time, as Google gets to know your website better, your ranking will improve on the search results page.

Can you beat it?

This has been one of the most widely discussed questions on the internet. Can you really beat the Google filters? My advice would be to stick to legitimate SEO practices rather than trying to beat the filter and landing in the sandbox filter again.