B2B SEO - Marketing, Search Engine Optimization for Business

Website for Sale

April 9th, 2010 · No Comments

This website (domain name) is for sale

Asking price is $13,500.00

I will not reply to your email unless your initial email confirms that you will pay $13,500.00.

→ No Comments · Tags: advertising

Meta elements

April 15th, 2007 · No Comments

Meta elements are HTML or XHTML elements used to provide structured metadata about a web page. Such elements must be placed as tags in the head section of an HTML or XHTML document. Meta elements can be used to specify page description, keywords and any other metadata not provided through the other head elements and attributes.

The meta element has four valid attributes: content, http-equiv, name and scheme. Of these, only content is a required attribute.

Example meta Element Usage

In one form, meta elements can specify HTTP headers which should be sent before the actual content when the HTML page is served from web server to client. For example:

<meta http-equiv="Content-Type" content="text/html">

This specifies that the page should be served with an HTTP header called ‘Content-Type’ that has a value ‘text/html’. This is a typical use of the meta element which is used to specify the document type so that a client (browser or otherwise) knows what content type it is expected to render.

In the general form, a meta element specifies name and associated content attributes describing aspects of the HTML page. For example:

<meta name="keywords" content="b2bseo">

In this example, the meta element identifies itself as providing ‘keywords’ metadata with a value of ‘b2bseo’.
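
For readers who want to see how such name/content pairs can be read programmatically, here is a minimal sketch (not part of the meta specification itself, and using only Python's standard library) that collects meta names and contents from a fragment of HTML:

# A minimal sketch of collecting name/content pairs from meta elements,
# using only Python's standard library html.parser module.
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

collector = MetaCollector()
collector.feed('<head><meta name="keywords" content="b2bseo"></head>')
print(collector.meta)   # {'keywords': 'b2bseo'}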

Meta element use in search engine optimization

Meta elements provide information about a given webpage, most often to help search engines categorize it correctly. They are inserted into the HTML document, but are often not directly visible to a user visiting the site.

They have been the focus of a field of marketing research known as search engine optimization (SEO), where different methods are explored to provide a user’s site with a higher ranking on search engines. In the mid to late 1990s, search engines were reliant on meta data to correctly classify a web page and webmasters quickly learned the commercial significance of having the right meta element, as it frequently led to a high ranking in the search engines — and thus, high traffic to the web site.

As search engine traffic achieved greater significance in online marketing plans, consultants were brought in who were well versed in how search engines perceive a web site. These consultants used a variety of techniques (legitimate and otherwise) to improve ranking for their clients.

Meta elements have significantly less effect on search engine results pages today than they did in the 1990s, and their utility has decreased dramatically as search engine robots have become more sophisticated. This is due in part to keyword stuffing, the near-endless repetition of keywords in meta elements, and to attempts by unscrupulous website placement consultants to manipulate (spamdexing) or otherwise circumvent search engine ranking algorithms.

While search engine optimization can improve search engine ranking, consumers of such services should be careful to employ only reputable providers. Given the extraordinary competition and technological craftsmanship required for top search engine placement, the implication of the term “search engine optimization” has deteriorated over the last decade. Where it once implied crafting a website into a state of search engine perfection, for the average consumer it now implies something on the order of making a website search engine tolerable.

Major search engine robots are now more likely to weigh factors such as the volume of incoming links from related websites, quantity and quality of content, technical precision of source code, spelling, functional versus broken hyperlinks, volume and consistency of searches and/or viewer traffic, time within the website, page views, revisits, click-throughs, technical user features, uniqueness, redundancy, relevance, advertising revenue yield, freshness, geography, language and other intrinsic characteristics.

→ No Comments · Tags: Meta Tags

Redirects, HTTP message headers, Alternative to meta elements

April 10th, 2007 · No Comments

Redirects

Meta refresh elements can be used to instruct a web browser to automatically refresh a web page after a given time interval. It is also possible to specify an alternative URL and use this technique to redirect the user to a different location. Using a meta refresh in this way, solely by itself, rarely achieves the desired result. In Internet Explorer’s security settings, under the Miscellaneous category, meta refresh can be turned off by the user, thereby disabling its redirect ability entirely.

Many web design tutorials also point out that client-side redirecting tends to interfere with the normal functioning of a web browser’s “back” button. After being redirected, clicking the back button will cause the user to go back to the redirect page, which redirects them again. Some modern browsers, however, seem to overcome this problem, including Safari, Mozilla Firefox and Opera.

Note that auto-redirects via markup (versus server-side redirects) are not in compliance with the W3C’s Web Content Accessibility Guidelines (WCAG) 1.0 (guideline 7.5).

HTTP message headers

Meta elements of the form <meta http-equiv="foo" content="bar"> can be used as alternatives to HTTP headers. For example, <meta http-equiv="expires" content="Wed, 21 Jun 2006 14:25:27 GMT"> would tell the browser that the page “expires” on June 21 2006 14:25:27 GMT and that it may safely cache the page until then.
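
As a rough illustration of the equivalence, here is a minimal sketch, using Python's built-in http.server module, of a server that sends the same information as a real Expires header instead of relying on a meta element; the date is simply copied from the example above:

# A minimal sketch of sending the same information as a real HTTP header
# instead of a meta http-equiv element, using Python's built-in http.server.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ExpiresHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The date mirrors the meta example above; a real server would
        # compute it from the current time plus a caching policy.
        self.send_header("Expires", "Wed, 21 Jun 2006 14:25:27 GMT")
        self.end_headers()
        self.wfile.write(b"<html><body>cacheable page</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ExpiresHandler).serve_forever()
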
Alternative to meta elements

An alternative to meta elements for enhanced subject access within a web site is the use of a back-of-book-style index for the web site. See examples at the web sites of the Australian Society of Indexers and the American Society of Indexers.

In 1994, ALIWEB, which was likely the first web search engine, also used an index file to provide the type of information commonly found in meta keywords attributes.

→ No Comments · Tags: Uncategorized

More meta attributes for search engines

April 8th, 2007 · No Comments

NOODP

The search engines Google, Yahoo! and MSN in some cases use the title and abstract of a web site’s Open Directory Project (ODP) listing for the title and/or description (also called snippet or abstract) in the search engine results pages (SERPs). To give webmasters the option to specify that the ODP content should not be used for listings of their website, Microsoft introduced the new “NOODP” value for the “robots” meta element in May 2006. Google followed in July 2006 and Yahoo! in October 2006.

The syntax is the same for all search engines that support the tag.

<META NAME="ROBOTS" CONTENT="NOODP">

Webmasters can decide if they want to disallow the use of their ODP listing on a per-search-engine basis:

Google: <META NAME="GOOGLEBOT" CONTENT="NOODP">

Yahoo!: <META NAME="Slurp" CONTENT="NOODP">

MSN and Live Search: <META NAME="msnbot" CONTENT="NOODP">

NOYDIR

In addition to the ODP listing, Yahoo! also used content from its own Yahoo! Directory, but in February 2007 it introduced a meta tag that provides webmasters with the option to opt out of this.

Yahoo! Directory titles and abstracts will not be used in search results for a page if the NOYDIR tag is added to it.

<META NAME="ROBOTS" CONTENT="NOYDIR">

<META NAME="Slurp" CONTENT="NOYDIR">

Robots-NoContent

Yahoo! also introduced in May 2007 the “robots-nocontent” class. This is not a meta tag but an attribute value that can be applied to HTML elements throughout a web page where needed. Content of elements marked with this class will be ignored by the Yahoo! crawler and not included in the search engine’s index.

Examples for the use of the robots-nocontent tag:

<div class="robots-nocontent">excluded content</div>

<span class="robots-nocontent">excluded content</span>

<p class="robots-nocontent">excluded content</p>

Academic studies

Google does not use HTML keyword or meta tag elements for indexing. The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, “Currently we don’t trust metadata”. Other search engines developed techniques to penalize web sites considered to be “cheating the system”. For example, a web site repeating the same meta keyword several times may have its ranking decreased by a search engine trying to eliminate this practice, though that is unlikely. It is more likely that a search engine will ignore the meta keyword element completely, and most do, regardless of how many words are used in the element.

→ No Comments · Tags: Meta Tags

Keywords, Description, and Robots Tags

April 5th, 2007 · No Comments

The keywords attribute

The keywords attribute was popularized by search engines such as Infoseek and AltaVista in 1995, and its popularity quickly grew until it became one of the most commonly used meta elements. By late 1997, however, search engine providers realized that information stored in meta elements, especially the keyword attribute, was often unreliable and misleading, and at worst, used to draw users into spam sites. (Unscrupulous webmasters could easily place false keywords into their meta elements in order to draw people to their site.)

Search engines began dropping support for metadata provided by the meta element in 1998, and by the early 2000s most search engines had veered completely away from reliance on meta elements. In July 2002 AltaVista, one of the last major search engines to still offer support, finally stopped considering them. The Director of Research at Google, Monika Henzinger, was quoted (in 2002) as saying, “Currently we don’t trust metadata”.

No consensus exists on whether the keywords attribute has any impact on ranking at any of the major search engines today. Some speculate that it does if the keywords used in the meta element can also be found in the page copy itself. In April 2007, 37 leaders in search engine optimization concluded that the relevance of having your keywords in the keywords meta attribute is little to none.

The description attribute

Unlike the keywords attribute, the description attribute is supported by most major search engines, like Yahoo! and Live Search, while Google will fall back on this tag when information about the page itself is requested (e.g. using the related: query). The description attribute provides a concise explanation of a web page’s content. This allows the webpage authors to give a more meaningful description for listings than might be displayed if the search engine were to automatically create its own description based on the page content. The description is often, but not always, displayed on search engine results pages, so it can impact click-through rates. Industry commentators have suggested that major search engines also consider keywords located in the description attribute when ranking pages. The W3C does not specify the size of this description meta tag, but almost all search engines recommend keeping it shorter than 200 characters of plain text.

The robots attribute

The robots attribute is used to control whether search engine spiders are allowed to index a page or not, and whether they should follow links from a page or not. The noindex value prevents a page from being indexed, and nofollow prevents links from being crawled. Other values are available that can influence how a search engine indexes pages, and how those pages appear in the search results. The robots attribute is supported by several major search engines. There are several additional values for the robots meta attribute that are relevant to search engines, such as NOARCHIVE and NOSNIPPET, which are meant to tell search engines what not to do with a web page’s content. Meta tags are not the best option to prevent search engines from indexing content of your website; a more reliable and efficient method is the use of the robots.txt file (Robots Exclusion Standard).

The NOINDEX tag tells Google not to index a specific page. The NOFOLLOW tag tells Google not to follow the links on a specific page. The NOARCHIVE tag tells Google not to store a cached copy of your page. The NOSNIPPET tag tells Google not to show a snippet (description) under your Google listing; it will also prevent a cached link from being shown in the search results.
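
Since the paragraph above recommends robots.txt over meta tags for keeping crawlers out, here is a minimal sketch of how a well-behaved crawler might consult robots.txt before fetching a page, using Python's standard urllib.robotparser module (the domain and bot name are placeholders):

# A minimal sketch (placeholder domain and bot name) of a well-behaved crawler
# consulting robots.txt before fetching a page, using Python's stdlib.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")   # placeholder URL
rp.read()

if rp.can_fetch("mybot", "http://www.example.com/private/page.html"):
    print("Allowed to crawl the page")
else:
    print("Blocked by robots.txt; skip the page")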

→ No Comments · Tags: Meta Tags

Web scraping

April 4th, 2007 · No Comments

Web scraping generically describes any of various means to extract content from a website over HTTP for the purpose of transforming that content into another format suitable for use in another context.
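
As a rough illustration, here is a minimal web scraping sketch that fetches a page over HTTP and extracts its visible text using only Python's standard library; the URL is a placeholder and real scrapers are usually far more involved:

# A minimal sketch (placeholder URL) of extracting text content from a page
# over HTTP, using only Python's standard library.
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

html = urlopen("http://www.example.com/").read().decode("utf-8", "replace")
extractor = TextExtractor()
extractor.feed(html)
print("\n".join(extractor.chunks))   # page text, ready to transform or store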

Scraper sites

A typical example application for web scraping is a web crawler that copies content from one or more existing websites in order to generate a scraper site. The result can range from fair use excerpts or reproduction of text and content, to plagiarized content. In some instances, plagiarized content may be used as an illicit means to increase traffic and advertising revenue. The typical scraper website generates revenue using Google AdSense, hence the term ‘Made For AdSense’ or MFA website.

Web scraping differs from screen scraping in the sense that a website is not really a visual screen, but live HTML/JavaScript-based content with a graphics interface in front of it. Therefore, web scraping does not involve working at the visual interface as screen scraping does, but rather working on the underlying object structure (Document Object Model) of the HTML and JavaScript.

Web scraping also differs from screen scraping in that screen scraping typically occurs many times from the same dynamic screen “page”, whereas web scraping occurs only once per web page over many different static web pages. Recursive web scraping, by following links to other pages over many web sites, is called “web harvesting”. Web harvesting is necessarily performed by software called a bot, “webbot”, “crawler”, “harvester” or “spider”, with similar arachnological analogies used to refer to other creepy-crawly aspects of their functions. Web harvesters are typically demonised, while “webbots” are often typecast as benevolent.

There are legal web scraping sites that provide free content and are commonly used by webmasters looking to populate a hastily made site with web content, often to profit by some means from the traffic the articles hopefully bring. This content does not help the ranking of the site in search engine results because the content is not original to that page.[1] Original content is a priority of search engines. Use of free articles usually requires one to link back to the free article site, as well as to any link(s) provided by the author. Wikipedia.org (particularly the English Wikipedia) is a common target for web scraping.

Legal issues

Although scraping is against the terms of use of some websites, the enforceability of these terms is unclear. Outright duplication of content is, of course, illegal, but the courts ruled in Feist Publications v. Rural Telephone Service that duplication of facts is allowable. Also, in a February, 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found systematic crawling, indexing and deep linking by portal site ofir.dk of real estate site Home.dk not to conflict with Danish law or the database directive of the European Union.

U.S. courts have acknowledged that users of “scrapers” or “robots” may be held liable for committing trespass to chattels, which involves a computer system itself being considered personal property upon which the user of a scraper is trespassing. However, to succeed on a claim of trespass to chattels, the plaintiff must demonstrate that the defendant intentionally and without authorization interfered with the plaintiff’s possessory interest in the computer system and that the defendant’s unauthorized use caused damage to the plaintiff. Not all cases of web spidering brought before the courts have been considered trespass to chattels.

In Australia, the 2003 Spam Act outlaws some forms of web harvesting.

Technical measures to stop bots

A webmaster can use various measures to stop or slow a bot; a minimal rate-limiting sketch follows the list. Some techniques include:

  • Blocking an IP address. This will also block all browsing from that address.
  • If the bot is well behaved, it will adhere to entries added to robots.txt. You can stop Google and other well-behaved bots this way.
  • Sometimes bots declare who they are. Well-behaved ones do (for example ‘googlebot’), and they can be blocked on that basis. Unfortunately, malicious bots may declare they are a normal browser.
  • Bots can be blocked by monitoring for excess traffic.
  • Bots can be blocked with tools that verify a real person is accessing the site, such as the CAPTCHA project.
  • Sometimes bots can be blocked with carefully crafted JavaScript.
  • Locating bots with a honeypot or another method to identify the IP addresses of automated crawlers.
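
The rate-limiting sketch mentioned above might look something like this; the window and request budget are illustrative assumptions, not recommended values:

# A minimal sketch (thresholds are illustrative) of per-IP rate limiting,
# one of the traffic-monitoring measures listed above.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60        # assumed observation window
MAX_REQUESTS = 120         # assumed per-IP request budget in that window

recent = defaultdict(deque)   # ip -> timestamps of recent requests

def allow_request(ip):
    now = time.time()
    q = recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False          # likely a bot; block or serve a CAPTCHA instead
    q.append(now)
    return True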

→ No Comments · Tags: Web Contents

Google bomb

April 3rd, 2007 · No Comments

A Google bomb (also referred to as a ‘link bomb’) is Internet slang for a certain kind of attempt to influence the ranking of a given page in results returned by the Google search engine, often with humorous or political intentions. Because of the way that Google’s algorithm works, a page will be ranked higher if the sites that link to that page use consistent anchor text. A Google bomb is created if a large number of sites link to the page in this manner. Google bomb is used both as a verb and a noun. The phrase “Google bombing” was introduced to the New Oxford American Dictionary in May 2005. Google bombing is closely related to spamdexing, the practice of deliberately modifying HTML pages to increase the chance of their being placed close to the beginning of search engine results, or to influence the category to which the page is assigned in a misleading or dishonest manner.

The term Googlewashing was coined in 2003 to describe the use of media manipulation to change the perception of a term, or push out competition from search engine results pages (SERPs).

→ No Comments · Tags: Black Hat SEO · Google bomb · SEO Spam

SEO As a marketing strategy

April 2nd, 2007 · No Comments

Eye tracking studies have shown that searchers scan a search results page from top to bottom and left to right, looking for a relevant result. Placement at or near the top of the rankings therefore increases the number of searchers who will visit a site. However, more search engine referrals does not guarantee more sales. SEO is not necessarily an appropriate strategy for every website, and other Internet marketing strategies can be much more effective, depending on the site operator’s goals. A successful Internet marketing campaign may drive organic search results to pages, but it also may involve the use of paid advertising on search engines and other pages, building high quality web pages to engage and persuade, addressing technical issues that may keep search engines from crawling and indexing those sites, setting up analytics programs to enable site owners to measure their successes, and improving a site’s conversion rate.

SEO may generate a return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors. According to notable technologist Jakob Nielsen, website operators should liberate themselves from dependence on search engine traffic. A top ranked SEO blog Seomoz.org has reported, “Search marketers, in a twist of irony, receive a very small share of their traffic from search engines.” Instead, their main sources of traffic are links from other websites.

→ No Comments · Tags: SEO Introduction

Reciprocal link

April 1st, 2007 · No Comments

A reciprocal link is a mutual link between two objects, commonly between two websites in order to ensure mutual traffic. Example: Alice and Bob have websites. If Bob’s website links to Alice’s website, and Alice’s website links to Bob’s website, the websites are reciprocally linked.

Website owners often submit their sites to reciprocal link exchange directories in order to achieve higher rankings in the search engines.

Reciprocal linking between websites became an important part of the search engine optimization process thanks to PageRank, the link popularity algorithm employed by Google, which ranks websites for relevancy based in part on the number of links leading to a particular page and the anchor text of those links.

Three way linking

Three way linking (siteA -> siteB -> siteC -> siteA) is a special type of reciprocal linking.

The aim of this link building method is to create links that look more “natural” in the eyes of search engines. The links produced by three-way linking can then be more valuable than normal reciprocal links, which are usually exchanged between two domains.
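
As an illustration of why such patterns are easy to spot, here is a small sketch that finds reciprocal pairs and three-way cycles in a toy link graph (the site names and links are made up):

# A minimal sketch (illustrative link graph) of detecting reciprocal links
# (A <-> B) and three-way cycles (A -> B -> C -> A) among sites.
links = {                      # site -> set of sites it links to (made-up data)
    "siteA": {"siteB"},
    "siteB": {"siteC"},
    "siteC": {"siteA"},
    "siteD": {"siteE"},
    "siteE": {"siteD"},
}

reciprocal = [(a, b) for a in links for b in links[a]
              if a < b and a in links.get(b, set())]

three_way = [(a, b, c) for a in links for b in links[a]
             for c in links.get(b, set())
             if c != a and a in links.get(c, set())]

print("Reciprocal pairs:", reciprocal)    # [('siteD', 'siteE')]
print("Three-way cycles:", three_way)     # includes ('siteA', 'siteB', 'siteC')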

Automated Linking

In order to take advantage of the need for inbound links to rank well in the search engines, a number of automatic link exchange services have been launched. Members of these schemes will typically agree to have several links added to all their web pages in return for getting similar links back from other sites.

Link Exchange

An alternative to the automated linking above is a Link Exchange forum, in which members will advertise the sites that they want to get links to, and will in turn offer reciprocal or three way links back to the sites that link to them. The links generated through such services are subject to editorial review.

→ No Comments · Tags: advertising · Affiliate Marketing · Link

Link exchange

March 30th, 2007 · No Comments

Link Exchanging is the process of exchanging text links or banner links between websites. Webmasters exchange links to improve traffic through numerous inbound links so users on other websites can click through. Exchanging links is normally a free process (unless you choose to use a link exchange program which charges fees), making link exchanges cost effective. Another benefit of using a link exchange service is that it helps improve placement in the search engines.

Link exchanges have been used by webmasters for years as a means of direct marketing. Recently, this practice has gained more popularity among webmasters because search engines prefer websites that have many inbound links, thus improving positions in the Search Engine Result Pages (SERPs). To the search engines, this is an accurate means of determining the importance of websites. This practice of ranking websites by their links has helped lead to the popularity of link-based search engines, like Google. Thus, link exchange software and services have become popular.

There are various ways to initiate a link exchange with another website. The primary way of beginning a link exchange is to email another webmaster and ask to exchange links. Another way to find websites looking to exchange links is to visit webmaster discussion boards. These websites might have link exchange forums or link exchange directories where webmasters can request link exchanges from a specific category or have open requests to allow any website to exchange a link.

With the importance of link exchanges, many types of link exchange websites have emerged as a resource for webmasters. There are two types of link exchange directories. Paid directories make the process of exchanging links easier, but cost money and usually require software to be placed on the webmaster’s website to assist with the link exchange. Free link exchange directories have no cost, but they require the webmaster to manually add each website. Webmasters can have complete control over links when part of a free link exchange.

Recently some marketing companies have stated that search engines are no longer placing a heavy importance on reciprocal links. The consensus is that the popularity of a website is now gauged by incoming one-way links. The experts also agree that in addition to having numerous inbound links, the relevance of the linking websites is very important. Link exchanges between complementary websites are important. Webmasters do not have to link directly with competitors, but should link with websites that have industry relevance. Having website links with no relevance could potentially affect placement in the Search Engine Result Pages negatively.

Websites that have completed many link exchanges will usually experience increased traffic through direct clicks and search engine results. There are various ways for webmasters to find linking websites, through direct email contact or link exchange directories. It is important for webmasters to check the relevancy of the links being added, as links that have little or no relevance might negatively affect their placement. The use of link exchanges by webmasters will continue to be an important and useful means of marketing websites and improving search engine placement.

→ No Comments · Tags: Affiliate Marketing · Link · SEO Basic

Link popularity

March 9th, 2007 · No Comments

Link popularity is a measure of the quantity and quality of other web sites that link to a specific site on the World Wide Web. It is an example of the move by search engines towards off-the-page-criteria to determine quality content. In theory, off-the-page-criteria adds the aspect of impartiality to search engine rankings.

Link popularity plays an important role in the visibility of a web site among the top of the search results. Indeed, some search engines require at least one inbound link to a web site; otherwise they will drop it from their index.

Search engines such as Google use a special link analysis system to rank web pages. Citations from other WWW authors help to define a site’s reputation. The philosophy of link popularity is that important sites will attract many links. Content-poor sites will have difficulty attracting any links. Link popularity assumes that not all incoming links are equal, as an inbound link from a major directory carries more weight than an inbound link from an obscure personal home page. In other words, the quality of incoming links counts more than sheer numbers of them.
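
As a rough illustration of the idea that a link from an important page carries more weight, here is a toy PageRank-style calculation on a made-up four-page graph; it is a simplified sketch, not Google's actual algorithm:

# A toy sketch (made-up graph, simplified formula) of a PageRank-style
# calculation in which a link from an important page carries more weight.
links = {
    "directory": ["home"],
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):                     # power iteration until roughly stable
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))        # "home" ends up with the highest score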

To search for pages linking to a specific page, simply enter the URL on Google or Yahoo! this way:

link:http://yourdomainname/pagename.html

Here are some strategies that are generally considered to be important to increase link popularity:

  • There should be links from the home page to all subpages so that a search engine can transfer some link popularity to the subpages.
  • Appropriate anchor text with relevant keywords should be used in the text links that are pointing to pages within a site (technically, this helps link context, not link popularity).
  • Getting links from other web sites, particularly sites with high PageRank, can be one of the most powerful site promotion tools. Therefore, the webmaster should try to get links from other important sites offering information or products compatible or synergistic to his/her own site or from sites that cater to the same audience the webmaster does. The webmaster should explain the advantages to the potential link partner and the advantages his/her site has to their visitors.
  • One way links often count for more than reciprocal links.
  • The webmaster should list his/her site in one or more of the major directories such as Yahoo! or the Open Directory Project.
  • The webmaster should only link to sites that he/she can trust, i.e. sites that do not use “spammy techniques”.
  • The webmaster should not participate in link exchange programs or link farms, as search engines will ban sites that participate in such programs.

To increase link popularity, many webmasters interlink multiple domains that they own, but often search engines will filter out these links, as such links are not independent votes for a page and are only created to trick the search engines. See Spamdexing. In this context, closed circles are often used, but these should be avoided, as they hoard PageRank.

→ No Comments · Tags: Uncategorized

Sponsored link

March 8th, 2007 · No Comments

Sponsored links are text-based advertisements that describe an advertiser’s Web site and the products and services offered. A hyperlink is included, so that interested consumers may click on the advertisement and go to the advertised site.

Sponsored links are typically charged on a pay-per-click model: when a web surfer clicks on an advertisement and visits the advertiser’s site, the advertiser is charged a pre-determined amount that it has agreed to pay. The prices of sponsored links are usually set through a bidding system, where higher bids (along with other factors including performance, etc.) result in more prominent placement.
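
As a rough sketch of how such a bidding system might order advertisements, here is an illustration that assumes placement is ranked by bid multiplied by a performance score; the formula and figures are illustrative, not any particular network's:

# A hypothetical sketch of ordering sponsored links, assuming placement is
# ranked by bid multiplied by a performance (quality) score.
advertisers = [
    {"name": "Ad A", "bid": 1.50, "performance": 0.02},   # bid in $ per click,
    {"name": "Ad B", "bid": 0.90, "performance": 0.05},   # performance = est. CTR
    {"name": "Ad C", "bid": 2.00, "performance": 0.01},
]

for ad in advertisers:
    ad["rank_score"] = ad["bid"] * ad["performance"]

for position, ad in enumerate(sorted(advertisers,
                                     key=lambda a: a["rank_score"],
                                     reverse=True), start=1):
    print(position, ad["name"], round(ad["rank_score"], 4))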

→ No Comments · Tags: Link

Free For All link page

March 7th, 2007 · No Comments

A Free For All link page (FFA) is a web page set up ostensibly to improve the search engine placement of a particular web site.

Webmasters typically will use software to place a link to their site on hundreds of FFA sites, hoping that the resulting incoming links will increase the ranking of their site in search engines. Experts in search engine optimization techniques do not place much value on FFAs. First, most FFAs only maintain a small number of links for a short time, too short for most search engine crawls to pick them up. The high “human” traffic to FFA sites is almost completely other webmasters visiting the site to place their own links manually. Search engine algorithms do more than count link numbers; they also check relevancy, which the unrelated links on FFA sites do not have. Another drawback to FFAs is the amount of spam webmasters will receive from the owners and paying members of the FFA. Using an FFA can be considered a form of spamdexing.

→ No Comments · Tags: Link

Scraper site, SEO

March 6th, 2007 · No Comments

A scraper site is a website that copies all of its content from other websites using web scraping. No part of a scraper site is original. A search engine is not a scraper site: sites such as Yahoo! and Google gather content from other websites and index it so that the index can be searched with keywords. Search engines then display snippets of the original site content which they have scraped in response to your search.

In the last few years, due in part to the advent of the Google AdSense web advertising program, scraper sites have proliferated at an amazing rate, spamming search engines. Open content sites such as Wikipedia are a common source of material for scraper sites.

Made for AdSense

Some scraper sites are created to monetize the site using advertising programs such as Google AdSense. In such cases, they are called Made for AdSense sites, or MFA. This is also a derogatory term used to refer to websites that have no redeeming value except to draw web visitors to the website for the sole purpose of clicking on advertisements.

Made for AdSense sites are considered sites that are spamming search engines and diluting the search results by providing surfers with less than satisfactory search results. The scraped content is considered redundant to that which would be shown by the search engine under normal circumstances had no MFA website been found in the listings.

These types of websites are being eliminated in various search engines and sometimes show up as supplemental results instead of being displayed in the initial search results.

Google offers a domain parking service tailored for this kind of site. These supposed parked domains often run Google Adwords to attract more visitors to their site in the hopes that they will click on Adsense ads and generate a greater return than the original cost of the Adwords click. For many this has been a successful business plan, and one that Google has failed to combat.

AdsBlackList.com has a huge database of these fraudulent MFA/LCPC sites for Adwords/Adsense members to filter, resulting in a higher ROI and better quality content.

Legality

Scraper sites may violate copyright law. Even taking content from an open content site can be a copyright violation, if done in a way which does not respect the license. For instance, the GNU Free Documentation License (GFDL) and Creative Commons ShareAlike (CC-BY-SA) licenses require that a republisher inform readers of the license conditions, and give credit to the original author.

Techniques

Many scrapers will pull snippets and text from websites that rank high for keywords they have targeted. This way they hope to rank highly in the SERPs (Search Engine Results Pages). RSS feeds are vulnerable to scrapers.

Some scraper sites consist of advertisements and paragraphs of words randomly selected from a dictionary. Often a visitor will click on an advertisement because it is the only comprehensible text on the page. Operators of these scraper sites gain financially from these clicks. Ad networks such as Google AdSense claim to be constantly working to remove these sites from their programs, although there is an active debate about this, since the networks benefit directly from the clicks generated at these kinds of sites. From the advertisers’ point of view, the networks do not seem to be making enough effort to stop this problem.

Scrapers tend to be associated in the mind with link farms and are sometimes perceived as the same thing.

→ No Comments · Tags: Web Contents

Paid inclusion. SEO, Search Engine listing

March 5th, 2007 · No Comments

Paid inclusion is a search engine marketing product where the search engine company charges fees related to inclusion of websites in their search index. Paid inclusion products are provided by most search engine companies, the most notable exception being Google.

The fee structure is both a filter against superfluous submissions and a revenue generator. Typically, the fee covers an annual subscription for one webpage, which will automatically be catalogued on a regular basis. A per-click fee may also apply. Each search engine is different. Some sites allow only paid inclusion, although these have had little success. More frequently, many search engines, like Yahoo!, mix paid inclusion (per-page and per-click fee) with results from web crawling. Others, like Google (and more recently Ask.com), do not let webmasters pay to be in their search engine listing (advertisements are shown separately and labeled as such).

Some detractors of paid inclusion allege that it causes searches to return results based more on the economic standing of the interests behind a web site, and less on the relevancy of that site to end-users.

Often the line between pay per click advertising and paid inclusion is debatable. Some have lobbied for any paid listings to be labeled as an advertisement, while defenders insist they are not actually ads since the webmasters do not control the content of the listing, its ranking, or even whether it is shown to any users. Another advantage of paid inclusion is that it allows site owners to specify particular schedules for crawling pages. In the general case, one has no control as to when their page will be crawled or added to a search engine index. Paid inclusion proves to be particularly useful for cases where pages are dynamically generated and frequently modified.

Paid inclusion is a search engine marketing method in itself, but also a tool of search engine optimization, since experts and firms can test out different approaches to improving ranking, and see the results often within a couple of days, instead of waiting weeks or months. Knowledge gained this way can be used to optimize other web pages, without paying the search engine company.

→ No Comments · Tags: Listing · Search Engine

Multivariate landing page optimization (MVLPO)

March 4th, 2007 · No Comments

The first application of an experimental design to website optimization was carried out by Moskowitz Jacobs Inc. in 1998 in a simulation demo project for the Lego website (Denmark). MVLPO did not become a mainstream approach until 2003-2004.

Execution modes

MVLPO can be executed in a live (production) environment (e.g. Google website optimizer, Optimost.com, etc.) or through a Market Research Survey / Simulation (e.g., StyleMap.NET).

Live environment MVLPO execution

In live environment MVLPO execution, a special tool makes dynamic changes to the web site, so the visitors are directed to different executions of landing pages created according to an experimental design. The system keeps track of the visitors and their behavior (including their conversion rate, time spent on the page, etc.) and with sufficient data accumulated, estimates the impact of individual components on the target measurement (e.g., conversion rate).

Pros of live environment MVLPO execution

  • This approach is very reliable because it tests the effect of variations as a real life experience, generally transparent to the visitors.
  • It has evolved into a relatively simple and inexpensive approach to execute (e.g., Google Optimizer).

Cons of live environment MVLPO execution (applicable mostly to the tools prior to Google Optimizer)

  • High cost
  • Complexity involved in modifying a production-level website
  • The long time it may take to achieve statistically reliable data, caused by variations in the amount of traffic that generates the data necessary for the decision
  • This approach may not be appropriate for low traffic / high importance websites when the site administrators do not want to lose any potential customers.

Many of these drawbacks are reduced or eliminated with the introduction of the Google Website Optimizer – a free DIY MVLPO tool that made the process more democratic and available to the website administrators directly.

Simulation (survey) based MVLPO

A simulation (survey) based MVLPO is built on advanced market research techniques. In the research phase, the respondents are directed to a survey, which presents them with a set of experimentally designed combinations of the landing page executions. The respondents rate each execution (screen) on a rating question (e.g., purchase intent). At the end of the study, regression model(s) are created (either individual or for the total panel). The outcome relates the presence/absence of the elements in the different landing page executions to the respondents’ ratings and can be used to synthesize new pages as combinations of the top-scored elements optimized for subgroups, segments, with or without interactions.
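
As a simplified illustration of that regression step, here is a sketch (with made-up ratings and hypothetical element names) that relates the presence or absence of page elements to respondents' ratings using an ordinary least-squares fit:

# A simplified sketch (made-up data) of the regression step: relating the
# presence (1) or absence (0) of page elements to respondents' ratings.
import numpy as np

# Columns: intercept, headline B, hero image B, testimonial block
design = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
], dtype=float)
ratings = np.array([3.1, 3.9, 3.4, 3.0, 4.3, 3.8, 3.2, 4.1])  # purchase intent

coefficients, *_ = np.linalg.lstsq(design, ratings, rcond=None)
labels = ["baseline", "headline B", "hero image B", "testimonial block"]
for label, coef in zip(labels, coefficients):
    print(label, round(float(coef), 2))   # positive = element lifts the rating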

Pros of the Simulation approach

  • Much faster and easier to prepare and execute (in many cases) compared to the live environment optimization
  • It works for low traffic websites
  • Usually produces more robust and rich data because of a higher control of the design.

Cons of the Simulation approach

  • Possible bias of a simulated environment as opposed to a live one
  • A necessity to recruit and optionally incentivise the respondents.

The MVLPO paradigm is based on an experimental design (e.g., conjoint analysis, Taguchi methods, etc.) which tests structured combinations of elements. Some vendors use a full factorial approach (e.g., Google Optimizer, which tests all possible combinations of elements). This approach requires very large sample sizes (typically many thousands) to achieve statistical significance. Fractional designs typically used in simulation environments require the testing of small subsets of the possible combinations. Some critics of the approach raise the question of possible interactions between the elements of the web pages and the inability of most fractional designs to address this issue.

To resolve these limitations, an advanced simulation method based on the Rule Developing Experimentation paradigm (RDE) has been introduced. RDE creates individual models for each respondent, discovers any and all synergies and suppressions between the elements, uncovers attitudinal segmentation, and allows for databasing across tests and over time.

→ No Comments · Tags: Landing Page · Optimization

Landing page optimization – based on experimentation

March 3rd, 2007 · No Comments

There are two major types of LPO based on experimentation:

  • Close-ended experimentation exposes consumers to various executions of landing pages and observes their behavior. At the end of the test, an optimal page is selected that permanently replaces the experimental pages. This page is usually the most efficient one at achieving target goals such as conversion rate. It may be one of the tested pages or a page synthesized from individual elements never tested together. The methods include the simple A/B split test, multivariate (conjoint) based testing, Taguchi methods, total experience testing, etc.
  • Open-ended experimentation is similar to close-ended experimentation, with ongoing dynamic adjustment of the page based on continuing experimentation.

This article covers in detail only the approaches based on experimentation. Experimentation-based LPO can be achieved using the following most frequently used methodologies: A/B split testing, multivariate LPO and total experience testing. These methodologies are applicable to both close-ended and open-ended experimentation.

A/B testing

A/B testing (also called “A/B split test”), is a generic term for testing a limited set (usually 2 or 3) of pre-created executions of a web page without use of experimental design. The typical goal is to try, for example, three versions of the home page or product page or support FAQ page and see which version of the page works better. The outcome in A/B Testing is usually measured as click-thru to next page or conversion, etc. The testing can be conducted sequentially or concurrently. In sequential (the easiest to implement) execution the page executions are placed online one at a time for a specified period. Parallel execution (“split test”) divides the traffic between the executions.
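
For concurrent (split) execution, each visitor needs to be assigned to the same version consistently. Here is a minimal sketch, assuming two variants and a 50/50 split keyed on a visitor cookie id:

# A minimal sketch (assumed 50/50 split) of assigning visitors consistently
# to variant A or B for a concurrent split test, by hashing a visitor id.
import hashlib

def assign_variant(visitor_id, variants=("A", "B")):
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]   # the same visitor always gets the same variant

for visitor in ["cookie-123", "cookie-456", "cookie-789"]:
    print(visitor, "->", assign_variant(visitor))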

Pros of doing A/B testing

  • Inexpensive since you will use your existing resources and tools
  • Simple – no heavy statistics involved

Cons of doing A/B testing

  • It is difficult to control all the external factors (campaigns, search traffic, press releases, seasonality) in sequential execution.
  • The approach is very limited, and cannot give reliable answers for pages that combine multiple elements.

MVLPO

MVLPO structurally handles a combination of multiple groups of elements (graphics, text, etc.) on the page. Each group comprises multiple executions (options). For example, a landing page may have n different options of the title, m variations of the featured picture, k options of the company logo, etc.
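
A full factorial design would test every combination of those groups, which is easy to enumerate; here is a minimal sketch with made-up element options:

# A minimal sketch (made-up element options) of enumerating a full factorial
# design: every combination of titles, pictures and logos on a landing page.
from itertools import product

titles = ["Title 1", "Title 2", "Title 3"]        # n = 3
pictures = ["Picture 1", "Picture 2"]             # m = 2
logos = ["Logo 1", "Logo 2"]                      # k = 2

combinations = list(product(titles, pictures, logos))
print(len(combinations), "page executions")       # n * m * k = 12
for title, picture, logo in combinations[:3]:
    print(title, "|", picture, "|", logo)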

Pros of doing Multivariate Testing

  • The most reliable, science-based approach to understanding customers’ minds and using that knowledge to optimize their experience.
  • It has evolved into a fairly easy-to-use approach that requires little IT involvement. In many cases, a few lines of JavaScript on the page allow the vendors’ remote servers to control the changes, collect the data and analyze the results.
  • It provides a foundation for a continuous learning experience.

Cons of doing Multivariate Testing

  • As with any quantitative consumer research, there is a danger of GIGO (garbage in, garbage out). You still need a clean pool of ideas that are sourced from known customer points or strategic business objectives.
  • With MVLPO, you are usually optimizing one page at a time. Website experiences for most sites are complex, multi-page affairs. For an e-commerce website, it is typical for the path from entry to a successful purchase to span around 12 to 18 pages; for a support site, even more.

Total experience testing

Total experience testing (also called experience testing) is a new and evolving type of experiment based testing in which the entire site experience of the visitor is examined using technical capabilities of the site platform (e.g., ATG, Blue Martini, etc.).

Instead of actually creating multiple websites, the methodology uses the site platform to create several persistent experiences and monitors which one is preferred by the customers.

Pros of doing experience testing

  • The experiments reflect the customer’s total experience, not just one page at a time.

Cons of doing Experience Testing

  • You need to have a website platform that supports experience testing (for example ATG supports this).
  • It takes longer than the other two methodologies.

→ No Comments · Tags: Uncategorized

Landing page optimization

March 2nd, 2007 · No Comments

Landing page optimization (LPO, also known as webpage optimization) is the process of improving a visitor’s perception of a website by optimizing its content and appearance in order to make it more appealing to the target audiences, as measured by target goals such as conversion rate.

Multivariate landing page optimization (MVLPO) is landing page optimization based on an experimental design.

LPO can be achieved through targeting and experimentation.

LPO based on targeting

There are three major types of LPO based on targeting:

  1. Associative content targeting (also called “rules-based optimization” or “passive targeting”): Modifies the content with information relevant to the visitor, based on the search criteria, traffic source, geo-information of the source traffic or other known generic parameters that can be used for explicit, non-research-based consumer segmentation (a minimal rules-based sketch follows this list).
  2. Predictive content targeting (also called “active targeting”): Adjusts the content by correlating any known information about the visitors (e.g., prior purchase behavior, personal demographic information, browsing patterns, etc.) to anticipated (desired) future actions based on predictive analytics.
  3. Consumer directed targeting (also called “social”): The content of the pages could be created using the relevance of publicly available information through a mechanism based on reviews, ratings, tagging, referrals, etc.
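
The rules-based sketch referred to in the first item might look something like this; the rules and visit parameters are purely illustrative:

# A minimal sketch (hypothetical rules) of associative content targeting:
# picking a content block from known generic parameters of the visit.
rules = [
    {"when": {"country": "DK"}, "content": "Danish-language landing page"},
    {"when": {"source": "search", "keyword": "b2b seo"}, "content": "SEO services page"},
    {"when": {}, "content": "Default landing page"},    # catch-all rule
]

def pick_content(visit):
    for rule in rules:
        if all(visit.get(k) == v for k, v in rule["when"].items()):
            return rule["content"]

print(pick_content({"country": "DK", "source": "direct"}))
print(pick_content({"source": "search", "keyword": "b2b seo", "country": "US"}))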

→ No Comments · Tags: Landing Page · Optimization

Social media optimization

March 1st, 2007 · No Comments

Social media optimization (SMO) is a set of methods for generating publicity through social media, online communities and community websites. Methods of SMO include adding RSS feeds, adding a “Digg This” button, blogging and incorporating third party community functionalities like Flickr photo slides and galleries or YouTube videos. Social media optimization is a form of search engine marketing.

Social media optimisation is in many ways connected as a technique to viral marketing, where word of mouth is created not through friends or family but through the use of networking on social bookmarking, video and photo sharing websites. In a similar way, engagement with blogs achieves the same by sharing content through the use of RSS in the blogosphere and special blog search engines such as Technorati.

Origins

Rohit Bhargava is credited with coining the term SMO. His original five rules, intended to help guide thinking when conducting SMO for a client’s website, are:

  1. Increase your linkability
  2. Make tagging and bookmarking easy
  3. Reward inbound links
  4. Help your content travel
  5. Encourage the mashup

Spam in social media

As with search engines, social media is prone to spamming. However, due to the nature of the medium, where users are active participants, networks such as LinkedIn and Facebook have taken measures against spam to lessen the occurrence of malicious activity. Some social media sites, such as Digg, empower users to bury content that is considered low quality or spam.

→ No Comments · Tags: SEO Basic · social media

Web analytics technologies

February 25th, 2007 · No Comments

There are two main technological approaches to collecting web analytics data. The first method, logfile analysis, reads the logfiles in which the web server records all its transactions. The second method, page tagging, uses JavaScript on each page to notify a third-party server when a page is rendered by a web browser.

Web server logfile analysis

Web servers have always recorded all their transactions in a logfile. It was soon realised that these logfiles could be read by a program to provide data on the popularity of the website. Thus arose web log analysis software.

In the early 1990s, web site statistics consisted primarily of counting the number of client requests made to the web server. This was a reasonable method initially, since each web site often consisted of a single HTML file. However, with the introduction of images in HTML, and web sites that spanned multiple HTML files, this count became less useful. The first true commercial log analyzer was released by IPRO in 1994.

Two units of measure were introduced in the mid 1990s to gauge more accurately the amount of human activity on web servers. These were page views and visits (or sessions). A page view was defined as a request made to the web server for a page, as opposed to a graphic, while a visit was defined as a sequence of requests from a uniquely identified client that expired after a certain amount of inactivity, usually 30 minutes. The page views and visits are still commonly displayed metrics, but are now considered rather unsophisticated measurements.
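
As a simplified illustration of those two measures, here is a sketch that counts page views and visits from already-parsed log records, using the 30-minute inactivity rule described above; the records are made up and a real logfile would need parsing first:

# A simplified sketch (pre-parsed, illustrative records) of counting page
# views and visits from a web server log, using a 30-minute inactivity rule.
SESSION_TIMEOUT = 30 * 60   # seconds of inactivity that end a visit

records = [                 # (unix timestamp, client id, requested path)
    (1000, "1.2.3.4", "/index.html"),
    (1100, "1.2.3.4", "/products.html"),
    (1200, "5.6.7.8", "/index.html"),
    (4000, "1.2.3.4", "/index.html"),   # > 30 min later: a new visit
]

page_views = 0
visits = 0
last_seen = {}              # client id -> timestamp of previous request

for timestamp, client, path in sorted(records):
    if path.endswith((".html", "/")):               # count pages, not graphics
        page_views += 1
    if client not in last_seen or timestamp - last_seen[client] > SESSION_TIMEOUT:
        visits += 1
    last_seen[client] = timestamp

print("page views:", page_views, "visits:", visits)   # 4 page views, 3 visits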

The emergence of search engine spiders and robots in the late 1990s, along with web proxies and dynamically assigned IP addresses for large companies and ISPs, made it more difficult to identify unique human visitors to a website. Log analyzers responded by tracking visits by cookies, and by ignoring requests from known spiders.

The extensive use of web caches also presented a problem for logfile analysis. If a person revisits a page, the second request will often be retrieved from the browser’s cache, and so no request will be received by the web server. This means that the person’s path through the site is lost. Caching can be defeated by configuring the web server, but this can result in degraded performance for the visitor to the website.

Page tagging

Concerns about the accuracy of logfile analysis in the presence of caching, and the desire to be able to perform web analytics as an outsourced service, led to the second data collection method, page tagging or ‘Web bugs’.

In the mid 1990s, Web counters were commonly seen — these were images included in a web page that showed the number of times the image had been requested, which was an estimate of the number of visits to that page. In the late 1990s this concept evolved to include a small invisible image instead of a visible one, and, by using JavaScript, to pass along with the image request certain information about the page and the visitor. This information can then be processed remotely by a web analytics company, and extensive statistics generated.

The web analytics service also manages the process of assigning a cookie to the user, which can uniquely identify them during their visit and in subsequent visits.

With the increasing popularity of Ajax-based solutions, an alternative to the use of an invisible image is to implement a call back to the server from the rendered page. In this case, when the page is rendered on the web browser, a piece of Ajax code calls back to the server and passes information about the client that can then be aggregated by a web analytics company. This is in some ways limited by browser restrictions on the servers which can be contacted with XmlHttpRequest objects.

Logfile analysis vs page tagging

Both logfile analysis programs and page tagging solutions are readily available to companies that wish to perform web analytics. In many cases, the same web analytics company will offer both approaches. The question then arises of which method a company should choose. There are advantages and disadvantages to each approach.

Advantages of logfile analysis

The main advantages of logfile analysis over page tagging are as follows.

  • The web server normally already produces logfiles, so the raw data is already available. To collect data via page tagging requires changes to the website.
  • The web server reliably records every transaction it makes. Page tagging relies on the visitors’ browsers co-operating, which a certain proportion may not do (for example, if JavaScript is disabled).
  • The data is on the company’s own servers, and is in a standard, rather than a proprietary, format. This makes it easy for a company to switch programs later, use several different programs, and analyze historical data with a new program. Page tagging solutions involve vendor lock-in.
  • Logfiles contain information on visits from search engine spiders. Although these should not be reported as part of the human activity, it is important data for performing search engine optimization.
  • Logfiles contain information on failed requests; page tagging only records an event if the page is successfully viewed.

Advantages of page tagging

The main advantages of page tagging over logfile analysis are as follows.

  • The JavaScript is automatically run every time the page is loaded. Thus there are fewer worries about caching.
  • It is easier to add additional information to the JavaScript, which can then be collected by the remote server. For example, information about the visitors’ screen sizes, or the price of the goods they purchased, can be added in this way. With logfile analysis, information not normally collected by the web server can only be recorded by modifying the URL.
  • Page tagging can report on events which do not involve a request to the web server, such as interactions within Flash movies.
  • The page tagging service manages the process of assigning cookies to visitors; with logfile analysis, the server has to be configured to do this.
  • Page tagging is available to companies who do not run their own web servers.

Economic factors

Logfile analysis is almost always performed in-house. Page tagging can be performed in-house, but it is more often provided as a third-party service. The economic difference between these two models can also be a consideration for a company deciding which to purchase.

  • Logfile analysis typically involves a one-off software purchase; however, some vendors are introducing maximum annual page view limits, with additional costs to process additional information.
  • Page tagging most often involves a monthly fee, although some vendors offer installable page tagging solutions with no additional page view costs.

Which solution is cheaper often depends on the amount of technical expertise within the company, the vendor chosen, the amount of activity seen on the web sites, the depth and type of information sought, and the number of distinct web sites needing statistics.

Hybrid methods

Some companies are now producing programs which collect data through both logfiles and page tagging. By using a hybrid method, they aim to produce more accurate statistics than either method on its own. The first hybrid solution was produced in 1998 by Rufus Evison, who then spun the product out to create a company based upon the increased accuracy of hybrid methods.

Other methods

Other methods of data collection have been used, but are not currently widely deployed. These include integrating the web analytics program into the web server, and collecting data by sniffing the network traffic passing between the web server and the outside world.

There is also another variant of page tagging in which, instead of collecting the information on the client side when the user opens the page, the script works on the server side and sends the data right before the page is delivered to the user.

→ No Comments · Tags: Web Analytics