Instant SEO Checker + Score & Report
Enter the URL of your website or landing page and the keyword you want to rank for. Your SEO score and report will be instantly emailed to you with SEO tips and recommendations.
SEO Checker Overview
This section outlines your overall SEO score between 0 and 100 based on the 58 SEO elements checked. In addition, you can see the number of “good signals” and “issues found”.
Website speed is an increasingly important aspect of SEO. This section checks your page load speed, page size and the number of file requests.
This section checks whether your URL is SEO friendly: whether the keyword appears in the domain, whether any underscores are used and whether the page is close enough to the top-level domain.
Your title tag is one of the most important SEO elements. It helps both users and search engines know what the page is about. This section checks if your keyword is found in the title tag, if the title tag starts with the keyword and if your title tag is 70 characters or less.
Your meta description is an important SEO element that is pulled into SERP listings to help both users and search engines understand what your page is about. This section checks if your keyword is included in your meta description and if the meta description is 160 characters or less.
Google and other search engines need image meta data to determine what the images on your web page are about. This section checks your image meta tags, alt tags and file names to ensure they are optimized for SEO.
Top 5 Keywords Used
Content and keyword density is an indicator to Google for what the page is about. This section identifies which 5 keywords are used the most within your content.
Headers are an important element for both users and search engines to explain your web page's key points and content sections. This section checks to see if your keywords are included in the important headers.
Content is king. Your web page copy is definitely one of the most important aspects of your page's SEO. This section checks several important SEO elements to make sure your copy follows SEO best practices, including: word count, frequency, emphasis, priority and anchor text.
Google and other search engines read code. Several important standards need to be followed to ensure your pages are being properly read and indexed. This section checks HTML W3C validation, Flash, CSS, text-to-HTML ratios, schema and sitemaps.
Shares on social media have become an increasingly important SEO indicator. This section checks Facebook for shares as well as the existence of a Facebook share button and active blog.
More than 50% of Google’s 5 billion+ daily searches come from mobile devices. Thus, Google has adopted a mobile-first policy. This means all aspects of the mobile version of your website are more important than the desktop version.
Page Link Analysis
Backlinks are still a primary SEO indicator to Google and other search engines regarding the authority and trust of your website. This section checks both inbound and outbound links as well as the MozTrust, MozRank and Moz Page Authority scores of your web page.
Root Domain Link Analysis
The homepage of your website or root domain is usually the most authoritative page on your website. This section checks the links and MozTrust, MozRank and Moz Domain Authority scores of your homepage or root domain.
This section checks several domain factors including: domain length, age of domain, expiration date, canonicalization, SSL, robots.txt and favicon.
Extended SEO Site Auditor
JEMSU’s Site Auditor, built into the All-in-One SEO & Digital Marketing Dashboard, checks 45 elements across your entire website and allows you to schedule regular, automatic full website audits. The main difference: the instant SEO checker looks at 58 elements on a single page, while the site auditor checks 45 elements across your entire website.
SEO Auditor Overview
This section gives you an overview of your SEO audit, including an overall percentage score, the number of critical errors, regular errors and warnings. It also outlines the number of pages crawled and the number of tests passed or failed out of 45.
These errors normally occur because a page does not exist (404), it requires authentication (401) or access to the page is forbidden (403). Make sure you deal with each type of code appropriately to ensure the page can be crawled.
These are fatal errors that will prevent anyone, including search engines, from accessing your website. They are normally caused by a programming bug or a server misconfiguration.
Blocked from Being Indexed
If a page contains a noindex meta tag, it tells search engines not to include the page in their index. This may have been done intentionally, but if not, the page will have no presence in search results. Simply remove the noindex meta tag to resolve this.
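For reference, the directive in question is a single meta tag in the page's head section; the snippet below is an illustrative example:

```html
<head>
  <!-- This tag tells search engines not to index the page; delete it if you want the page indexed -->
  <meta name="robots" content="noindex">
</head>
```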
Broken External Images
An external image is one hosted on another website and is considered broken when it will not load. Generally, it is bad practice to reference external images since it limits your control. A simple solution is to download the image and host it internally.
Broken External Links
An external link is one that points to another website and is considered broken when the page cannot be accessed. Since the linked website is not under your control, your best option is to remove the link. Otherwise, it will diminish the reliability of your website in the eyes of both search engines and visitors.
Broken Internal Images
An internal image is one hosted within your website and is considered broken when it will not load. This could occur because the file does not exist, or because the image is too large and times out when loading.
Broken Internal Links
An internal link is one that points to another page that exists on your server and is considered broken when the page cannot be accessed. This could be because it does not exist or there is an error trying to connect to it. Make sure the URL is entered correctly and that you clear up any issues with the page. Excessive broken links will not only impact your visitors' experience, they may also cause search engines to diminish the importance of your website.
Doctype not Declared
A doctype is the first thing that appears in your page’s source code and instructs a web browser which version of HTML you are using. If this is not specified, your code could be interpreted incorrectly and become uncrawlable.
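For modern pages, the HTML5 doctype is a single line at the very top of the source, before the opening html tag:

```html
<!DOCTYPE html>
<html>
  <!-- rest of the page -->
</html>
```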
A page is considered to have duplicate content if it contains very similar text to another page. Duplicate content will diminish the quality of a page since it is unclear which page has more relevance to a given topic. Since there would be no purpose for a search engine to index the same page twice, it may ultimately lead to banning both pages from the results.
Duplicate Meta Descriptions
A meta description is a hidden tag that describes the purpose of a page. Search engines may use this description in the listing for this site and in determining the topic of the page. If the same description is used on other pages, it may be difficult to differentiate between pages. Make sure meta descriptions are unique and use topical keywords to describe the content of the page.
Duplicate Title Tags
A title tag is considered to be a duplicate if it matches the exact title of another page. Duplicate title tags will diminish the quality of a page since it is unclear which page has more relevance to a given topic. Furthermore, they will also confuse users navigating your site.
Encoding Not Declared
Specifying an encoding for a page will ensure that each character is displayed properly. Generally this is set to UTF-8, but depending on the language of the page, it could be different.
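The encoding is usually declared with a single meta tag near the top of the head section, for example:

```html
<head>
  <!-- Declares UTF-8 so every character renders correctly -->
  <meta charset="utf-8">
</head>
```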
Flash Content Used
It is generally a bad idea to use Flash on your website. Search engines cannot interpret Flash content and may skip over your page when crawling it. Furthermore, it creates a bad user experience, as users will have to wait for it to load and may not be able to see anything on their mobile device. Google Chrome will no longer support Flash at all starting in 2020.
HTML frames are considered dated and should be avoided. They are difficult for search engines to read and create a bad user experience. Try to remove any frames from your pages in favor of newer methods that accomplish the same thing.
Every website should be accessible securely with an https url. In order to make sure that you’re always using https, your website should redirect any request from http to https.
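As an illustrative sketch, a web server such as nginx can perform this redirect with a permanent (301) rule; the domain name below is a placeholder:

```nginx
# Redirect all HTTP requests to the HTTPS version of the same URL
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
```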
Canonical tags are used to identify the preferred version of a duplicate page so that Google only indexes one URL. Make sure your canonical tags do not incorrectly point multiple distinct pages to the same URL.
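A canonical tag is a single link element in the head section; the URL below is a placeholder pointing to the preferred version of a page:

```html
<link rel="canonical" href="https://www.example.com/preferred-page/">
```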
Incorrect URLs in Sitemap.xml
A sitemap.xml lists all the public pages of your website so a crawler can easily find them. You should only include pages that you want a search engine to crawl. An error is triggered if any URL listed in the sitemap cannot be found.
Invalid Sitemap.xml Format
A sitemap.xml lists all the public pages of your website so a crawler can easily find them. You should only include pages that you wish a search engine to crawl. An error is triggered if the syntax of the xml is incorrect.
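For reference, a minimal well-formed sitemap.xml looks like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
</urlset>
```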
Large Page Size
In order to keep page load time low, you should try to minimize the amount of content and HTML contained in it. Generally, a page file size should be less than 2 MB in order to avoid any search engine penalties.
Long Title Tags
Any title with more than 70 characters is generally considered to be too long. Most search engines and sites will automatically shorten such a title. A long title could penalize your site, especially if you are keyword stuffing.
URLs longer than 100 characters are considered less than ideal for SEO. A long URL can be difficult to read or share and can even cause problems with browsers or applications.
Low Text HTML Ratio
The amount of text compared to HTML tags represents your text to HTML ratio. This test will fail if the ratio is less than 10%. A search engine can only look at your text to determine the page’s relevance. If there is an abundance of HTML compared to actual content, it will have difficulty segmenting the content. Furthermore, too much HTML may cause your page to load much slower.
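As an illustration (not the exact method this checker uses), a rough text-to-HTML ratio can be computed by comparing the length of the visible text to the length of the full page source:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text from an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def text_html_ratio(html: str) -> float:
    """Return visible-text length divided by total source length."""
    if not html:
        return 0.0
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.chunks).strip()
    return len(text) / len(html)

# Hypothetical page source used only for this demonstration
page = "<html><body><h1>Title</h1><p>Some content here.</p></body></html>"
ratio = text_html_ratio(page)
print(f"Text to HTML ratio: {ratio:.0%}")  # a ratio under 10% would fail this test
```

Real audit tools typically also discard script and style content before measuring, so treat this as a simplified sketch.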
Low Word Count
This test fails if the number of words on a page is less than 200. If a page does not have much content, it is hard for a search engine to properly assign a topic to it and may not bother indexing it. Try to make use of relevant content while using as many keywords as possible.
Matching H1 and Title Content
Using the same title as your H1 content is an ineffective way of defining the page topic. Use this opportunity to create two distinct phrases that illustrate the purpose of the page.
Missing ALT Attributes
An alt attribute is used to describe an image in a textual context. Search engines may interpret an alt tag to identify the purpose of the image. This is a great way to increase your page relevance as it relates to a topic.
Missing Canonical Tag
Canonical tags help to avoid duplicate content when unique content is accessible via multiple URLs. Defining a correct canonical tag for all pages will protect them from possible duplicate content issues.
Missing Canonical Tags in AMP Pages
AMP stands for Accelerated Mobile Pages. AMP strips down a page's HTML so it will render faster on mobile devices. Each AMP page should include a canonical tag pointing to the standard version of the page so search engines know which URL to index.
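For reference, the AMP page and the standard page typically reference each other with a pair of link tags (the URLs below are placeholders):

```html
<!-- In the AMP version of the page -->
<link rel="canonical" href="https://www.example.com/article/">

<!-- In the standard version of the page -->
<link rel="amphtml" href="https://www.example.com/article/amp/">
```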
H1 tags are considered to be the main heading of a page and are used to help define the topic of the page. Creating a descriptive heading is an effective way to improve your search engine presence and make it easier for a user to navigate your page.
Missing Meta Description
A meta description is a hidden tag that describes the purpose of a page. Search engines may use this description in the results listing and in determining the topic of the page. Make sure each of your pages has a meta description that is unique and topical.
Missing Sitemap.xml Reference
If your site contains a robots.txt and sitemap.xml file, it is a good idea to reference the location of sitemap.xml within your robots.txt. Your robots.txt file is what a search engine reads when crawling your site, so it is good to make it easy for the crawler to find the links you want indexed.
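A minimal robots.txt that points crawlers to the sitemap might look like this (the URL is a placeholder):

```
User-agent: *
Allow: /
Sitemap: https://www.example.com/sitemap.xml
```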
A <title> tag is one of the most important components of a page. It is often used as a link to your page on a search engine and is meant to describe the purpose of the page in a few words.
Missing Viewport Tag
This is a meta tag which allows you to control the scale in which the page appears on a mobile device. This will ensure that the page is not too small or large and is easily legible on a user’s device.
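The tag itself is short; the following is the common form for a responsive page:

```html
<meta name="viewport" content="width=device-width, initial-scale=1">
```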
Multiple H1 Tags
Generally it is best to have only one H1 tag on a page to specifically define its topic. Multiple H1 tags can confuse a search engine or a user in determining the focus of the page.
NoFollow Attributes in External Links
If a link contains rel='nofollow', it instructs search engines not to follow the link or pass authority through it. This may be done on purpose, but if you wish to pass link juice, you should remove the nofollow attribute.
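For reference, a nofollowed link looks like the example below; removing the rel attribute lets the link be followed and credited:

```html
<a href="https://www.example.com/" rel="nofollow">Example link</a>
```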
NoFollow Attributes in Internal Links
If a link contains rel='nofollow', it instructs search engines not to follow the link or pass authority through it. This may be done on purpose, but if you wish to pass link juice, you should remove the nofollow attribute.
Overused Canonical Tags
This test will fail if too many pages have the same canonical tag. Canonical tags are used to identify a duplicate page so that Google only indexes one URL. Make sure your pages are not all pointing to the same page, or else only that one page will be indexed.
Robots.txt Blocking Crawlers
A robots.txt file gives instructions to web crawlers, including search engines, on which pages of a website they should crawl. This way you can choose which pages you want indexed on Google, for example. Any errors in this file could cause a search engine to not index your website at all.
Robots.txt Not Found
A robots.txt file gives instructions to web crawlers, including search engines, on which pages of a website they should crawl. This way you can choose which pages you want indexed on Google, for example. Missing this file may cause some of your pages to be ignored by Google.
Short Title Tag
Generally, using short titles on web pages is a recommended practice. However, keep in mind that titles containing 10 characters or fewer do not provide enough information about your page's content and limit your page’s potential to show up in search results for different keywords.
Sitemap.xml Not Found
A sitemap.xml is simply a way to list the pages of your site that you would like a search engine to index. This makes it easier and faster for a search engine to crawl your site and notifies it of any new or updated pages.
Slow Page Load
A slow page can be frustrating for a user and will lower your relevance in the eyes of a search engine. Most users will not put up with a slow page and go elsewhere. A search engine understands this and will do the same. This test fails if it takes longer than 7 seconds to load the page.
Temporary redirects are triggered when a page returns a 302 or 307 HTTP status code. This means that the page has moved temporarily to a new location. Although the page will be indexed by a search engine, it will not pass any link juice to the redirected page.
Too Many On-Page Links
This test will fail if a page has more than 500 links. Having too many links on a page can overwhelm the user and offer too many exit options. Search engines also have a limit on the number of links that they crawl on a page.
Too Many URL Parameters
Overusing parameters in a URL is not the proper way to segment a page. It often creates an ugly URL that is not easy to read or pick defining keywords out of. Generally, parameters should be transformed into a path-based structure (i.e. /param1/param2). This test fails if there are more than 2 parameters in the URL.
Underscores in URL
Semantically, underscores are allowed in a URL, but they are considered bad practice for SEO. Separating words is a good idea; however, you should use hyphens to accomplish this (e.g. /seo-checker/ rather than /seo_checker/).