
Google Indexing Script

Use this script to get your entire site indexed on Google in less than 48 hours. No tricks, no hacks, just a simple script and a Google API.

You can read more about the motivation behind it and how it works in this blog post: https://seogets.com/blog/google-indexing-script

Important

  1. Indexing != Ranking. This will not help your pages rank on Google; it simply lets Google know that they exist.
  2. This script uses the Google Indexing API. We do not recommend using it on spam or low-quality content.

Requirements

  • Node.js installed on your machine (v20 or later preferred)
  • An account on Google Search Console with the sites you want to index verified
  • An account on Google Cloud

Preparation

  1. Follow this guide from Google. By the end of it, you should have a project on Google Cloud with the Indexing API enabled and a service account with Owner permission on your sites.
  2. Make sure you enable both Google Search Console API and Web Search Indexing API on your Google Project → API Services → Enabled API & Services.
  3. Download the JSON file with the credentials of your service account and save it in the same folder as the script. The file should be named service_account.json.
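
For reference, a service account credentials file has roughly this shape (values shortened; fields other than client_email and private_key are omitted here):

{
  "type": "service_account",
  "project_id": "your-project-id",
  "client_email": "gis@your-project-id.iam.gserviceaccount.com",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
}

The client_email and private_key values are what the script ultimately uses (see the environment-variable and argument options under Usage).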

Installation

Using CLI

Install the CLI globally on your machine.

npm i -g google-indexing-script

Using the repository

Clone the repository to your machine.

git clone https://github.com/goenning/google-indexing-script.git
cd google-indexing-script

Install and build the project.

npm install
npm run build
npm i -g .

Note

Ensure you are using an up-to-date Node.js version, preferably v20 or later. Check your current version with node -v.

Usage

With service_account.json (recommended)

Create a .gis directory in your home folder and move the service_account.json file there.

mkdir ~/.gis
mv service_account.json ~/.gis

Run the script with the domain or URL you want to index.

gis <domain or url>
# example
gis seogets.com

Here are some other ways to run the script:

# custom path to service_account.json
gis seogets.com --path /path/to/service_account.json
# long version command
google-indexing-script seogets.com
# cloned repository
npm run index seogets.com

With environment variables

Open service_account.json and copy the client_email and private_key values.

Run the script with the domain or URL you want to index.

GIS_CLIENT_EMAIL=your-client-email GIS_PRIVATE_KEY=your-private-key gis seogets.com

With arguments (not recommended)

Open service_account.json and copy the client_email and private_key values.

Once you have the values, run the script with the domain or URL you want to index, the client email, and the private key.

gis seogets.com --client-email your-client-email --private-key your-private-key

As an npm module

You can also use the script as an npm module in your own project.

npm i google-indexing-script

import { index } from 'google-indexing-script'
import serviceAccount from './service_account.json'

// Check and request indexing for the pages found in the site's
// Search Console sitemaps
index('seogets.com', {
  client_email: serviceAccount.client_email,
  private_key: serviceAccount.private_key
})
  .then(console.log)
  .catch(console.error)

Read the API documentation for more details.


Important

  • Your site must have one or more sitemaps submitted to Google Search Console. Otherwise, the script will not be able to find the pages to index.
  • You can run the script as many times as you want. It will only index the pages that are not already indexed.
  • Sites with a large number of pages might take a while to index; be patient.

🔀 Alternative

If you prefer a hands-free, less technical solution, you can use a SaaS platform like TagParrot.

📄 License

MIT License

💖 Sponsor

This project is sponsored by SEO Gets

google-indexing-script's People

Contributors

anjum48, antoinekm, dev-coco, github-actions[bot], goenning, lucarestagno, ruthvik-17, teenbiscuits


google-indexing-script's Issues

Introduce a cache of URLs already sent for indexing?

Hi, thanks for your tool!
For large websites (5k+ pages), would it be possible to introduce a cache of URLs already sent for indexing?
For most of us it is fine to run the tool on a daily basis to cover all of a site's pages within the daily limit of 200 requests.
We just need to be sure the tool does not re-send pages that were already submitted.
Thanks!
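
A minimal sketch of what such a cache could look like, assuming a hypothetical sent-urls.json file kept next to the script (the file name and helpers are illustrative, not part of the project):

import fs from 'node:fs'

const CACHE_FILE = 'sent-urls.json'

// Load the set of URLs submitted in previous runs
function loadSentUrls() {
  return fs.existsSync(CACHE_FILE)
    ? new Set(JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8')))
    : new Set()
}

// Persist the updated set after a run
function saveSentUrls(sent) {
  fs.writeFileSync(CACHE_FILE, JSON.stringify([...sent], null, 2))
}

// Pick the next batch of never-submitted URLs, respecting the
// 200-requests-per-day Indexing API quota mentioned above
function nextBatch(allUrls, sent, dailyLimit = 200) {
  return allUrls.filter((url) => !sent.has(url)).slice(0, dailyLimit)
}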

I think the script doesn't work with a 23k-URL sitemap

Hey,
The script is working fine on 4 of my websites (each with fewer than 500 URLs in its sitemap). When I try with a website that has 23k URLs, I get the error below.

As with the other websites, the service account was correctly added with the same permissions. In fact, it was working before, when I had 1k URLs.

If you have an idea tell me, thanks!

 gis mywebsite.app
🔎 Processing site: sc-domain:mywebsite.app
👉 Found 2732 URLs in 2 sitemap

/Users/user1/.nvm/versions/node/v18.17.1/lib/node_modules/google-indexing-script/node_modules/sitemapper/lib/assets/sitemapper.js:1
(minified sitemapper source elided)


TypeError: b.cancel is not a function
    at Timeout._onTimeout (/Users/user1/.nvm/versions/node/v18.17.1/lib/node_modules/google-indexing-script/node_modules/sitemapper/lib/assets/sitemapper.js:1:2619)
    at listOnTimeout (node:internal/timers:569:17)
    at process.processTimers (node:internal/timers:512:7)

Node.js v18.17.1

throw new Error('No key or keyFile set.')

/Users/raz1ner/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:148
            throw new Error('No key or keyFile set.');
                  ^

Error: No key or keyFile set.
    at GoogleToken.getTokenAsyncInner (/Users/raz1ner/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:148:19)
    at GoogleToken.getTokenAsync (/Users/raz1ner/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:137:55)
    at GoogleToken.getToken (/Users/raz1ner/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:96:21)
    at JWT.refreshTokenNoCache (/Users/raz1ner/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/jwtclient.js:165:36)
    at JWT.refreshToken (/Users/raz1ner/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/oauth2client.js:143:25)
    at JWT.authorizeAsync (/Users/raz1ner/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/jwtclient.js:146:35)
    at JWT.authorize (/Users/raz1ner/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/jwtclient.js:142:25)
    at getAccessToken (file:///Users/raz1ner/Downloads/google-indexing-script/src/shared/auth.mjs:20:34)
    at file:///Users/raz1ner/Downloads/google-indexing-script/src/index.mjs:22:27
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
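
This error comes from gtoken (via google-auth-library) when it is handed neither a key file nor inline credentials. A quick sanity check along these lines, using the paths and variable names documented in the Usage section above, can confirm what the script will see (the snippet is only an illustration):

import fs from 'node:fs'
import os from 'node:os'
import path from 'node:path'

// The CLI looks for credentials in ~/.gis/service_account.json or in
// the GIS_CLIENT_EMAIL / GIS_PRIVATE_KEY environment variables
const keyPath = path.join(os.homedir(), '.gis', 'service_account.json')
const hasFile = fs.existsSync(keyPath)
const hasEnv = Boolean(process.env.GIS_CLIENT_EMAIL && process.env.GIS_PRIVATE_KEY)

if (!hasFile && !hasEnv) {
  console.error('No credentials found: add ~/.gis/service_account.json or set GIS_CLIENT_EMAIL / GIS_PRIVATE_KEY')
}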

I always get this error: fetch is not defined when submitting the domain

npm run index domain.com

[email protected] index
ts-node ./src/cli.ts domain.com

🔎 Processing site: sc-domain:domain.com
/seo/google-indexing-script/src/shared/utils.ts:19
const response = await fetch(url, options);
^
ReferenceError: fetch is not defined
at fetchRetry (/seo/google-indexing-script/src/shared/utils.ts:19:22)
at fetchRetry (seo/google-indexing-script/src/shared/utils.ts:29:12)
at fetchRetry (/seo/google-indexing-script/src/shared/utils.ts:29:12)
at fetchRetry (/seo/google-indexing-script/src/shared/utils.ts:29:12)
at fetchRetry (/seo/google-indexing-script/src/shared/utils.ts:29:12)
at fetchRetry (/seo/google-indexing-script/src/shared/utils.ts:29:12)
at getSitemapsList (/seo/google-indexing-script/src/shared/sitemap.ts:8:36)
at getSitemapPages (/seo/google-indexing-script/src/shared/sitemap.ts:38:26)
at index (/seo/google-indexing-script/src/index.ts:50:50)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
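
For context: the global fetch API is only available in Node.js 18 and later, so this error usually means an older Node version is in use (see the note recommending v20+ above). A sketch of a fallback using the node-fetch package (which appears elsewhere in this project's dependency tree):

// Use the built-in fetch when present (Node 18+), otherwise fall
// back to the node-fetch package
import nodeFetch from 'node-fetch'

const fetchFn = globalThis.fetch ?? nodeFetch

const response = await fetchFn('https://example.com/sitemap.xml')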

TypeError: Cannot read properties of undefined (reading 'map')

Hello!
Can someone help me understand why this script is not working for me?
I turned on the Google Search Console API and Web Search Indexing API.
I downloaded the service account JSON, which has owner permissions, and renamed it service_account.json.

This is the error I get every time:

> [email protected] index
> node ./src/index.mjs panjandrum.hu

🔎 Processing site: sc-domain:panjandrum.hu
file:///C:/Users/Fema/Desktop/google-indexing-script-main/src/shared/sitemap.mjs:29
  return body.sitemap.map((x) => x.path);
                      ^

TypeError: Cannot read properties of undefined (reading 'map')
    at getSitemapsList (file:///C:/Users/Fema/Desktop/google-indexing-script-main/src/shared/sitemap.mjs:29:23)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async getSitemapPages (file:///C:/Users/Fema/Desktop/google-indexing-script-main/src/shared/sitemap.mjs:33:20)
    at async file:///C:/Users/Fema/Desktop/google-indexing-script-main/src/index.mjs:27:27

Node.js v20.11.0

Thank you very much for your help in advance
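
The crash happens because the Search Console response contains no sitemap field when the service account cannot see any sitemaps for the property (typically a permissions or property-type mismatch). A defensive version of the failing line would at least fail gracefully; a sketch (the wrapper function name is illustrative):

// body.sitemap is undefined when Search Console returns no sitemaps
// for the property, so fall back to an empty list instead of crashing
function getSitemapPaths(body) {
  return body.sitemap ? body.sitemap.map((x) => x.path) : []
}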

RateLimited during url inspection

When conducting URL inspections with the tool, I encountered an obstacle. There appears to be a rate limit of 2,000 calls per day (https://support.google.com/webmasters/thread/240916045/429-quota-issue-googleapis-com-v1-urlinspection-index-inspect?hl=en), which significantly hampers the inspection process.

πŸ‘ Done, here's the status of all 14784 pages:
🚦 RateLimited: 14784 pages
{
  "error": {
    "code": 429,
    "message": "Quota exceeded for sc-domain:xxxxxxxxx.fr.",
    "status": "RESOURCE_EXHAUSTED"
  }
}

Proposed Enhancement

  • Cache-Based Inspection Calls: Introduce a caching mechanism that reuses previous inspection results and only re-inspects URLs that returned a 429 response.

  • Rate Limit Management: Implement a rate-limit strategy for inspection calls to ensure adherence to the daily limit of 2,000 calls. This could involve throttling the rate of inspection requests to stay within the allowed quota (see the sketch after this list).

  • Error Handling for 429 Responses: Identify and block inspection requests for URLs that were not analyzed due to encountering a 429 error (rate limit exceeded). This prevents redundant calls for URLs that are already queued for inspection.

  • Bypass Mechanism for Cached URLs: Introduce a mechanism to bypass inspection for URLs already present in the cache. This would allow for direct indexing of URLs stored in the cache, optimizing the indexing process.
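
A minimal sketch of the rate-limit-management idea, assuming a hypothetical inspectUrl helper that returns an HTTP status code; the 2,000-calls-per-day figure is the quota referenced above:

const DAILY_INSPECTION_QUOTA = 2000

// Inspect at most `quota` URLs per run, so repeated daily runs
// eventually cover the whole site without tripping the quota
async function inspectWithQuota(urls, inspectUrl, quota = DAILY_INSPECTION_QUOTA) {
  const results = new Map()
  for (const url of urls.slice(0, quota)) {
    const status = await inspectUrl(url) // hypothetical helper
    if (status === 429) break            // quota exhausted early: stop
    results.set(url, status)
  }
  return results
}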

Error.captureStackTrace(err, this) / cause: ConnectTimeoutError: Connect Timeout Error

Hi, I got this error when executing npm run index mywebsite:

node:internal/deps/undici/undici:11576
    Error.captureStackTrace(err, this);
          ^

TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async fetchRetry (file:///Users/jack/Desktop/google-indexing-script/src/shared/utils.mjs:14:22)
    at async getSitemapsList (file:///Users/jack/Desktop/google-indexing-script/src/shared/sitemap.mjs:9:20)
    at async getSitemapPages (file:///Users/jack/Desktop/google-indexing-script/src/shared/sitemap.mjs:33:20)
    at async file:///Users/jack/Desktop/google-indexing-script/src/index.mjs:27:27 {
  cause: ConnectTimeoutError: Connect Timeout Error
      at onConnectTimeout (node:internal/deps/undici/undici:8522:28)
      at node:internal/deps/undici/undici:8480:50
      at Immediate._onImmediate (node:internal/deps/undici/undici:8511:13)
      at process.processImmediate (node:internal/timers:478:21) {
    code: 'UND_ERR_CONNECT_TIMEOUT'
  }
}


APIs and limitations

Hi buddy!

Which API limits the tool's consumption? I'm thinking about automating it to run once a day on all my sites.

PS: Your tool is absolutely fantastic. I ran the repository today, and it was super easy to use! Congratulations!

Are PRs welcome?

Hello,

First of all - love the script.
Second - Are PRs welcome? I'd love to set up a binary so it can be used via npx and a token flag.
Maybe also add an automated release system via semantic-release? :)

This service account doesn't have access to this site.

I'm getting the below output although I did add the service account with Owner permission on my site using the client_email property and moved the service_account.json file to the folder with the script. Any idea why?

πŸ” This service account doesn't have access to this site.
❌ No sitemaps found, add them to Google Search Console and try again.

0 URLs found

I'm running it like this:
npm run index andreaponzio.com
It finds a sitemap (I'm logging it), but it doesn't find any URLs even though there are 3 pages, two of which are already indexed.
This is the result:

npm run index andreaponzio.com

> [email protected] index
> ts-node ./src/index.ts andreaponzio.com

🔎 Processing site: sc-domain:andreaponzio.com
Sitemaps: [ 'https://www.andreaponzio.com/sitemap.xml' ] 1
👉 Found 0 URLs in 1 sitemap
👍 Done, here's the status of all 0 pages:

✨ There are no pages that can be indexed. Everything is already indexed!

👍 All done!
💖 Brought to you by https://seogets.com - SEO Analytics.

Publish as npm package

First of all: Thank you very much for this great project!

I would love to just install it globally via npm install and then use it as a CLI. Do you plan to release this script as an npm package?

The script works strangely

Hi all,
I'm trying to index a site with 1 million pages; after a long time it gives out the output below:
(screenshot: 2024-03-06_23-15-44)
The Web Search Indexing API quota is not consumed, and there are no errors.
(screenshot: 2024-03-06_23-18-49)
The cache is also not created in the folder; there is only one file.
Has anyone else encountered this?

429 error

The site has about 6 million web pages, and a 429 error occurs during batch processing.

Found 0 URLs in 1 sitemap

Hi, thank you for putting this together.

When I run the command on a domain-verified property, I get this:

🔎 Processing site: sc-domain:example.com
👉 Found 0 URLs in 1 sitemap

👍 Done, here's the status of all 0 pages:

✨ There are no pages that can be indexed. Everything is already indexed!

This is despite the sitemap having multiple URLs that haven't been indexed yet. I tried another domain-verified property and got the same result.

After a few hours, I tried again with two other URL prefix properties, and it worked.

Processing site: https://www.example.com/
👉 Found 209 URLs in 1 sitemap
📦 Batch 1 of 5 complete
📦 Batch 2 of 5 complete
📦 Batch 3 of 5 complete
📦 Batch 4 of 5 complete
📦 Batch 5 of 5 complete

👍 Done, here's the status of all 209 pages:
• ✅ Submitted and indexed: 56 pages
• ❓ URL is unknown to Google: 8 pages
• 👀 Crawled - currently not indexed: 129 pages
• 👀 Discovered - currently not indexed: 9 pages
• ❌ Excluded by ‘noindex’ tag: 2 pages
• ❌ Alternate page with proper canonical tag: 1 pages
• ❌ Not found (404): 4 pages

✨ Found 146 pages that can be indexed.
etc.

Allow requests to be retried when an unexpected error occurs

👉 Found 1157 URLs in 1 sitemap
📦 Batch 1 of 24 complete
📦 Batch 2 of 24 complete
📦 Batch 3 of 24 complete
📦 Batch 4 of 24 complete
📦 Batch 5 of 24 complete
📦 Batch 6 of 24 complete
📦 Batch 7 of 24 complete
📦 Batch 8 of 24 complete
❌ Failed to get indexing status.
Error was: Error: Server error code 500
{
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "status": "INTERNAL"
  }
}

file:///Users/raz1ner/Downloads/google-indexing-script/src/shared/utils.mjs:17
      throw new Error(`Server error code ${response.status}\n${body}`);
            ^

Error: Server error code 500
{
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "status": "INTERNAL"
  }
}

Sometimes unexpected errors occur during a run, so I have to start over and run it again. It would be great if, when an unexpected error occurs, the request causing the error could be retried instead of the script stopping outright.
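
A sketch of the requested behavior: wrap each status request in a small retry helper with exponential backoff instead of letting one transient 500 abort the run (the getIndexingStatus call is illustrative):

// Retry an async operation a few times with exponential backoff
// before giving up
async function withRetry(fn, attempts = 3, delayMs = 1000) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      if (i === attempts - 1) throw err
      await new Promise((resolve) => setTimeout(resolve, delayMs * 2 ** i))
    }
  }
}

// e.g. const status = await withRetry(() => getIndexingStatus(url))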

Error when using the script: errno: -4058

Using Windows 11 and Node.js v20.11.1

🔎 Processing site: https://teenbiscuits.github.io/Pro2324/
👉 Found 25 URLs in 1 sitemap
📦 Batch 1 of 1 complete

👍 Done, here's the status of all 25 pages:
node:fs:2352
    return binding.writeFileUtf8(
                   ^

Error: ENOENT: no such file or directory, open 'C:\Users\pablo\.cache\https_teenbiscuits.github.io_Pro2324\.json'
    at writeFileSync (node:fs:2352:20)
    at exports.a (C:\Users\pablo\AppData\Roaming\npm\node_modules\google-indexing-script\dist\chunk-IOOAN7NK.js:1:3780)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  errno: -4058,
  code: 'ENOENT',
  syscall: 'open',
  path: 'C:\\Users\\pablo\\.cache\\https_teenbiscuits.github.io_Pro2324\\.json'
}

Node.js v20.11.1

Hey does this script support subdomains?

It always fails whenever I try to index a subdomain. I set up a URL-prefix property for the subdomain.

🔎 Processing site: sc-domain:sub.example.com
🔐 This service account doesn't have access to this site.
❌ No sitemaps found, add them to Google Search Console and try again.

Script blocked

Hi,
I don't know if this is due to my configuration, but recently the script has been getting stuck, regardless of the domain entered.

[email protected] index
ts-node ./src/index.ts domain.tld

Everything is up to date (Node, google-indexing-script).

[ERR_WORKER_OUT_OF_MEMORY]: Worker terminated due to reaching memory limit: JS heap out of memory

I executed "npm run build" and receive

CLI Building entry: src/bin.ts, src/cli.ts, src/index.ts, src/shared/auth.ts, src/shared/gsc.ts, src/shared/index.ts, src/shared/sitemap.ts, src/shared/types.ts, src/shared/utils.ts
CLI Using tsconfig: tsconfig.json
CLI tsup v8.0.2
CLI Using tsup config: xxxxxxxxxxxxx/google-indexing-script/tsup.config.ts
CLI Target: esnext
CLI Cleaning output folder
CJS Build start
ESM You have emitDecoratorMetadata enabled but @swc/core was not installed, skipping swc plugin
CJS dist/shared/utils.js 355.00 B
CJS dist/bin.js 401.00 B
CJS dist/cli.js 460.00 B
CJS dist/chunk-LWWEROSD.js 388.00 B
CJS dist/index.js 1.08 KB
CJS dist/chunk-LBCQ44KW.js 4.51 KB
CJS dist/shared/auth.js 271.00 B
CJS dist/shared/gsc.js 579.00 B
CJS dist/shared/index.js 1.02 KB
CJS dist/chunk-75CVEFN6.js 669.00 B
CJS dist/chunk-74KA7FXM.js 1.43 KB
CJS dist/chunk-2MKILNFN.js 3.54 KB
CJS dist/shared/sitemap.js 307.00 B
CJS dist/chunk-MEQJBOPJ.js 1.36 KB
CJS dist/chunk-TTTS3QMW.js 807.00 B
CJS dist/shared/types.js 264.00 B
CJS dist/chunk-XL3RXX4H.js 708.00 B
CJS dist/chunk-IL7T5FML.js 692.00 B
CJS dist/shared/utils.js.map 51.00 B
CJS dist/bin.js.map 136.00 B
CJS dist/cli.js.map 51.00 B
CJS dist/chunk-LWWEROSD.js.map 230.00 B
CJS dist/index.js.map 51.00 B
CJS dist/chunk-LBCQ44KW.js.map 8.75 KB
CJS dist/shared/auth.js.map 51.00 B
CJS dist/shared/gsc.js.map 51.00 B
CJS dist/shared/index.js.map 51.00 B
CJS dist/chunk-75CVEFN6.js.map 376.00 B
CJS dist/chunk-74KA7FXM.js.map 2.72 KB
CJS dist/chunk-2MKILNFN.js.map 7.01 KB
CJS dist/shared/sitemap.js.map 51.00 B
CJS dist/chunk-MEQJBOPJ.js.map 2.68 KB
CJS dist/chunk-TTTS3QMW.js.map 2.76 KB
CJS dist/shared/types.js.map 51.00 B
CJS dist/chunk-XL3RXX4H.js.map 841.00 B
CJS dist/chunk-IL7T5FML.js.map 51.00 B
CJS ⚡️ Build success in 225ms
DTS Build start
node:events:492
throw er; // Unhandled 'error' event
^

Error [ERR_WORKER_OUT_OF_MEMORY]: Worker terminated due to reaching memory limit: JS heap out of memory
at [kOnExit] (node:internal/worker:313:26)
at Worker..onexit (node:internal/worker:229:20)
Emitted 'error' event on Worker instance at:
at [kOnExit] (node:internal/worker:313:12)
at Worker..onexit (node:internal/worker:229:20) {
code: 'ERR_WORKER_OUT_OF_MEMORY'
}

Forced reindex command

On some websites I manage, there are pages that change rather frequently.
It would be great if there were a command that forces indexing of a selected page, e.g. npm run reindex <url>, even if that page is already known and indexed. While I can do that directly from Search Console, it's very slow.
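
For reference, a forced resubmission boils down to a single URL_UPDATED notification to the Indexing API. A sketch using the googleapis client directly, separate from this project's code (the URL is a placeholder):

import { google } from 'googleapis'

// Authenticate with the same service account used by the script
const auth = new google.auth.GoogleAuth({
  keyFile: 'service_account.json',
  scopes: ['https://www.googleapis.com/auth/indexing'],
})

// Notify Google that the page changed, which requests a recrawl
const indexing = google.indexing({ version: 'v3', auth })
await indexing.urlNotifications.publish({
  requestBody: { url: 'https://example.com/updated-page', type: 'URL_UPDATED' },
})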

ETIMEDOUT

response: undefined,
error: FetchError: request to https://www.googleapis.com/oauth2/v4/token failed, reason:
at ClientRequest. (H:\work\google-indexing-script-main\node_modules\node-fetch\lib\index.js:1501:11)
at ClientRequest.emit (node:events:518:28)
at TLSSocket.socketErrorListener (node:_http_client:495:9)
at TLSSocket.emit (node:events:518:28)
at emitErrorNT (node:internal/streams/destroy:169:8)
at emitErrorCloseNT (node:internal/streams/destroy:128:3)
at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
type: 'system',
errno: 'ETIMEDOUT',
code: 'ETIMEDOUT'
},
code: 'ETIMEDOUT'

Can it work with multiple websites and multiple service accounts?

Hi, and thanks for this amazing work!

I apologize if the question seems stupid, but I have around 50 sites to index, each with approximately 1000 URLs. I have about 300 service accounts, and all the sites have their sitemaps properly set up in Google Search Console.

Do you know what modifications need to be made to the script so that after requesting indexing for 200 URLs, the script switches to a new service account to avoid being blocked by the API limits?

Same question for switching from one website to another.

Thank you so much
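
Since the script is also exposed as an npm module (see Usage above), one approach is a small driver that walks the sites and rotates credentials. A sketch with hypothetical accounts.json and sites.json inputs; note each service account still needs Owner access to the site it submits:

import { index } from 'google-indexing-script'
import accounts from './accounts.json' // hypothetical: [{ client_email, private_key }, ...]
import sites from './sites.json'       // hypothetical: ['site1.com', 'site2.com', ...]

// Rotate through the service accounts, one per site per run, so each
// account stays within its own daily Indexing API quota
for (const [i, site] of sites.entries()) {
  const account = accounts[i % accounts.length]
  await index(site, {
    client_email: account.client_email,
    private_key: account.private_key,
  })
}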

URLs found but could not be indexed due to Code 403 (Permission Error)

Hello:
Can someone help me with the following situation?

I have enabled the two APIs and added ownership for the service account. The script can successfully crawl my sitemaps and find the correct URLs, but while processing them it encounters the following errors:

📄 Processing url: http://(one of my URLs here)
🔐 This service account doesn't have access to this site.
Response was: 403
{
  "error": {
    "code": 403,
    "message": "Permission denied. Failed to verify the URL ownership.",
    "status": "PERMISSION_DENIED"
  }
}


Credential error?

Hi! I got the following error when I ran the code. I have enabled the APIs and set up the service account accordingly. Do you have any clues about what's causing the issue? Thanks!

/Users/ce/Downloads/google-indexing-script/node_modules/gaxios/build/src/gaxios.js:148
                : new common_1.GaxiosError(e.message, opts, undefined, e);
                  ^

GaxiosError: request to https://www.googleapis.com/oauth2/v4/token failed, reason:
    at Gaxios._request (/Users/ce/Downloads/google-indexing-script/node_modules/gaxios/build/src/gaxios.js:148:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async GoogleToken.requestToken (/Users/ce/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:230:23)
    at async GoogleToken.getTokenAsync (/Users/ce/Downloads/google-indexing-script/node_modules/gtoken/build/src/index.js:137:20)
    at async JWT.refreshTokenNoCache (/Users/ce/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/jwtclient.js:165:23)
    at async JWT.authorizeAsync (/Users/ce/Downloads/google-indexing-script/node_modules/google-auth-library/build/src/auth/jwtclient.js:146:24)
    at async getAccessToken (file:///Users/ce/Downloads/google-indexing-script/src/shared/auth.mjs:20:18)
    at async file:///Users/ce/Downloads/google-indexing-script/src/index.mjs:22:21 {
  config: {
    method: 'POST',
    url: 'https://www.googleapis.com/oauth2/v4/token',
    data: {
      grant_type: '<<REDACTED> - See `errorRedactor` option in `gaxios` for configuration>.',
      assertion: '<<REDACTED> - See `errorRedactor` option in `gaxios` for configuration>.'
    },
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded',
      'User-Agent': 'google-api-nodejs-client/9.4.2',
      'x-goog-api-client': 'gl-node/21.1.0',
      Accept: 'application/json'
    },
    responseType: 'json',
    paramsSerializer: [Function: paramsSerializer],
    body: '<<REDACTED> - See `errorRedactor` option in `gaxios` for configuration>.',
    validateStatus: [Function: validateStatus],
    errorRedactor: [Function: defaultErrorRedactor]
  },
  response: undefined,
  error: FetchError: request to https://www.googleapis.com/oauth2/v4/token failed, reason:
      at ClientRequest.<anonymous> (/Users/ce/Downloads/google-indexing-script/node_modules/node-fetch/lib/index.js:1501:11)
      at ClientRequest.emit (node:events:515:28)
      at TLSSocket.socketErrorListener (node:_http_client:495:9)
      at TLSSocket.emit (node:events:515:28)
      at emitErrorNT (node:internal/streams/destroy:151:8)
      at emitErrorCloseNT (node:internal/streams/destroy:116:3)
      at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
    type: 'system',
    errno: 'ETIMEDOUT',
    code: 'ETIMEDOUT'
  },
  code: 'ETIMEDOUT'
}

Node.js v21.1.0

Help sir const accessToken = await getAccessToken();

[email protected] index
node ./src/index.mjs "x.com"

file:///mnt/c/Users/Administrator/Documents/GITHUB/GITHUB/WP/google-indexing-script-main/google-indexing-script-main/src/index.mjs:22
const accessToken = await getAccessToken();
^^^^^

SyntaxError: Unexpected reserved word
at Loader.moduleStrategy (internal/modules/esm/translators.js:133:18)
at async link (internal/modules/esm/module_job.js:42:21)
