
linkgopher's Introduction

Link Gopher

Link Gopher is a web browser extension: it extracts all links from a web page, sorts them, removes duplicates, and displays them in a new tab for inspection or for copying into other systems.
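The extract-dedupe-sort pipeline described above can be sketched in a few lines of JavaScript. This is an illustrative sketch, not the extension's actual code; `extractLinks` is a hypothetical helper, and in a real content script the input would come from the DOM:

```javascript
// Sketch of an extract-dedupe-sort pipeline (not Link Gopher's actual code).
// In a content script the hrefs would come from:
//   Array.from(document.querySelectorAll('a[href]'), a => a.href)
function extractLinks(hrefs) {
  // A Set removes duplicates; sort() then orders the survivors.
  return [...new Set(hrefs)].sort();
}

const links = extractLinks([
  'https://example.com/b',
  'https://example.com/a',
  'https://example.com/b', // duplicate, removed by the Set
]);
// → ['https://example.com/a', 'https://example.com/b']
```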

Download

To download and install the latest release:

Documentation

There is brief documentation.

License

Copyright (c) 2008, 2009, 2014, 2017, 2021, 2023 by Andrew Ziem. All rights reserved.

Licensed under the GNU General Public License version 3 or later

linkgopher's People

Contributors

az0, benzbrake, bilalaslim, dh-lstudios, jtagcat, jwilk


linkgopher's Issues

Feature request: Get URLs of other media (also on multiple tabs)

Sorry for taking a while to come back here.

Anyway, Firefox has this feature, View Page Info, which is reached from the context menu, or by clicking the padlock icon, then Show connection details (the right-arrow button), and then More information.

On the Media tab you'll find a list of URLs for the loaded content, including images, videos, etc. I would like linkgopher multi-tab ( https://github.com/andrewdbate/linkgopher/ ) to extract not only clickable links but also media URLs. There is no Issues tab on that linked page, so I figured this was the better place to post my request.

Firefox's Page Info only gets the URLs of media on the current tab.

linkgopher overcome mozilla read ban

Hello, and may all here be well.
When I try to inspect a site's source code, Mozilla does not permit this action (screenshot 225).
In this instance I am using Link Gopher to file a complaint against the toptor.top web service, its hoster, and ultimately its owner.
On the other hand, Mozilla has no manual control (like start/stop) to halt the refreshing and forwarding so that I can carefully read the original HTML code.
Please give Link Gopher something more like administrator rights, plus a play/stop control button to keep Mozilla from doing its best to interpret the code and act on it. In case of doubt, I do NOT want the browser to do what the website's code wants: analytics, reference calls, writing cookies, malicious code and more, as a normal user might.
Thanks for this great tool, which helps me build a huge hosts file against analytics, crime, and hidden actions built into websites.

lutz

Extract links by filter

Help please,
I am trying to filter on multiple extensions like this:

jpg, JPG, bmp, BMP or jpg,JPG,bmp,BMP and other variations, with no luck.

Is it possible to filter on multiple extensions, and how?
Thanks
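The filter appears to accept a single pattern, so comma-separated lists match nothing. As a workaround sketch outside the extension, a case-insensitive regular expression can test several extensions at once over a copied link list (`filterByExtension` is a hypothetical helper, not a Link Gopher feature):

```javascript
// Hypothetical post-processing: keep only URLs ending in one of several
// image extensions, case-insensitively. Not part of Link Gopher itself.
const imagePattern = /\.(jpe?g|bmp|png|gif)$/i;

function filterByExtension(urls) {
  return urls.filter(url => imagePattern.test(url));
}

filterByExtension([
  'https://example.com/photo.JPG',
  'https://example.com/page.html',
  'https://example.com/scan.bmp',
]);
// → ['https://example.com/photo.JPG', 'https://example.com/scan.bmp']
```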

Thunderbird version?

It would be useful if there were also a Thunderbird version for searching through links in emails.

Error: Malformed URI sequence

Link Gopher 2.0.1
Firefox 73.0x64

Trying to extract all links from https://www.bav-versand.ch/, I only receive the alert popup "Error: malformed URI sequence".

Feature request: Extract links live as the page loads

Something similar to the devtools' network monitor.

Twitter is planning to completely phase out the legacy site by June 1, 2020 (extensions forcing the old design will stop working from that point on), and I like archiving tweets. The problem with the current Link Gopher is that it only extracts tweets that are loaded. The new Twitter design unloads tweets when they scroll far enough offscreen; the old design does not, so you can extract all the tweets on a page by scrolling all the way to the bottom and then using the extension to extract everything that loaded.

I was thinking of a feature where you check a checkbox labeled "extract links on the fly as the page loads", and the extension scans the HTML every second, logging every new tweet into a side panel as you scroll down the page. This is similar to the network monitor in Chrome's and Firefox's devtools, where you can tick "persist logs" or "preserve logs" so that entries remain in the list even after the content unloads.
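The persist-logs idea boils down to accumulating links in a set that outlives the DOM nodes they came from. A minimal sketch of that bookkeeping, assuming the described feature (the periodic rescan would be driven by `setInterval` or a `MutationObserver` in a real content script; `collectNew` is a hypothetical name):

```javascript
// Accumulate links across repeated scans: entries stay in `seen` even after
// the page unloads the content, mimicking devtools' "preserve logs" option.
function collectNew(seen, hrefs) {
  const fresh = [];
  for (const href of hrefs) {
    if (!seen.has(href)) {
      seen.add(href);
      fresh.push(href); // only newly discovered links are reported
    }
  }
  return fresh;
}

const seen = new Set();
collectNew(seen, ['t1', 't2']); // first scan → ['t1', 't2']
collectNew(seen, ['t2', 't3']); // t2 already logged → ['t3']
// `seen` now holds t1, t2, t3 even if t1 has been unloaded from the DOM.
```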

pale moon compatibility

I find your extension to be the most useful in its category. Unfortunately, it's not installable "as is" in Pale Moon. Do you have plans to "fix" this?

linkgopher stopped working entirely since Sept 2nd (Firefox)

Thanks for the quick fix. I trashed Firefox, ran Developer Edition, and now Link Gopher works fine as expected.

Two days ago my Link Gopher stopped working entirely:

  • reset Firefox 92 x64 to defaults on the 1st system
  • downgraded Firefox 92 x64 back to 89 x64
  • started Firefox 82 x64 on Ubuntu
  • granted all permissions (access/open tabs and websites), of course

Nothing worked. Version 2.2 seems to be dead; the "About Link Gopher" window does not pop up, nor do either of the extract windows.
Please help, or check whether something is going wrong.

White page

Good evening.

I am a recent user of Link Gopher and have previously enjoyed the quick link retrieval from any given page. However, as of late, when I click the Extract All Links option, it produces a new tab in Firefox with nothing in the window. I followed the thread regarding e10s and made sure that option was disabled in my about:config. I also tried uninstalling and re-installing, to no avail. Currently using 1.3.3.1. Is there something more I should be looking at?

Error: Missing host permission for the tab

Hi, this error has recently been happening when extracting all links from any page's source view, while extraction from the page itself works fine (just not the source).
Is there anything I can do differently to make it work from the page source as before?
Thanks

Japanese translation of "Link Gopher version 2.0.1"

linkgopher/_locales/日本語/messages.json
locales: 日本語(ja-JP)

{
"askPattern": {
"message": "リンク内で検索する文字列を入力して下さい。この文字列が無いリンクは無視されます。",
"description": "none"
},

"pleaseWait": {
"message": "お待ち下さい...",
"description": "none"
},

"links": {
"message": "リンク先",
"description": "none"
},

"domains": {
"message": "ドメイン",
"description": "none"
},

"noMatches": {
"message": "一致するものはありません。",
"description": "none"
},

"extractAll": {
"message": "すべてのリンクを抽出する",
"description": "none"
},

"extractSome": {
"message": "フィルタによるリンクの抽出",
"description": "none"
},

"aboutLinkGopher": {
"message": "Link Gopher について",
"description": "none"
}
}

[object Object]

When we click Extract All Links or any other button on the Chrome extension, we get a popup that says [object Object] with an OK button. Not sure if this is a me problem or a product problem. Please let me know.

Don't sort option to extracted links

Hi, I use this extension occasionally, and in most cases I want to extract the links in the same order they appear on the webpage. Because the links come out sorted, I have to manually re-arrange them before putting them in a table. That's a lot of work. 😢 😅

To solve this, could you please add a "don't sort extracted links" option to the extension?

Regards,
Mani
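Supporting an unsorted mode would mostly mean skipping the sort step: a JavaScript `Set` already iterates in insertion order, so duplicates can be removed without reordering. A sketch (hypothetical helper, not the extension's actual code):

```javascript
// Deduplicate while keeping links in the order they appear on the page:
// Set iteration order is insertion order, so no sort() is needed.
function dedupeInPageOrder(hrefs) {
  return [...new Set(hrefs)];
}

dedupeInPageOrder(['z.html', 'a.html', 'z.html']);
// → ['z.html', 'a.html']  (page order preserved, duplicate dropped)
```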

add name of link

Could you please add the name of the link?
I am trying to get the links, but also to know what each link belongs to.
For example:
https://open.spotify.com/search/albums/year%3A1980
I can get the links, but I would also like to know the artist and album name.
Could you please add this feature? I am willing to pay you for it.
Thanks

Get links only on the current and active page

Hello developers,

I'm wondering if it is possible to get Link Gopher to collate all links on the current webpage, the one where we click the Link Gopher icon, but not on all the tabs?

Thanks a lot for your reply

Suggestion to extract the original link facebook

Hi
When entering Facebook groups and profiles and extracting images, the links to the original images do not appear. An original image link looks like this:
https://scontent.famm2-3.fna.fbcdn.net/v/t39.30808-6/274310811_10159068497072955_623015438404356824_n.jpg?_nc_cat=101&ccb=1-5&_nc_sid=5cd70e&_nc_eui2=AeGnL4Neirs_-xu2M1kYdzCpT-vvVH-K_ZRP6-9Uf4r9lKYMozjwxdVEjx5-kDYVJo_34mmcgKwhNhf92KkSy4d7&_nc_ohc=gOokQCtTnKAAX8Tnm6d&_nc_zt=23&_nc_ht=scontent.famm2-3.fna&oh=00_AT85vK2VF-3p1BD5wbODHkfXcfWby-qIlH4UYsRD-Ijg5Q&oe=6229BCD3

Instead, it extracts links to low-quality images:
https://web.facebook.com/photo/?fbid=10159068497152955&set=g.239184319620879

Or it is extracted like this:
https://scontent.famm2-3.fna.fbcdn.net/v/t39.30808-6/275060762_10159068497257955_8699186377974658292_n.jpg?stp=dst-jpg_s118x118&_nc_cat=104&ccb=1-5&_nc_sid=5cd70e&_nc_eui2=AeGWq0i18IhDUOq448_TWKFJPRyX0LdACCg9HJfQt0AIKGRkCkQCeU6hxw9vgA9kJBxVuc69MhB0Ao60ypHY8zza&_nc_ohc=0S9or56U_iMAX8yPZmy&_nc_zt=23&_nc_ht=scontent.famm2-3.fna&oh=00_AT8QdsCCIUtxIBNqnzemsF4drCx50efhW7MzAVRvI9qOBg&oe=6228EEE8

I want to suggest that the links to the original images be extracted instead of the low-quality image links: links like this, straight to the picture:
https://scontent.famm2-3.fna.fbcdn.net/v/t39.30808-6/274310811_10159068497072955_623015438404356824_n.jpg?_nc_cat=101&ccb=1-5&_nc_sid=5cd70e&_nc_eui2=AeGnL4Neirs_-xu2M1kYdzCpT-vvVH-K_ZRP6-9Uf4r9lKYMozjwxdVEjx5-kDYVJo_34mmcgKwhNhf92KkSy4d7&_nc_ohc=gOokQCtTnKAAX8Tnm6d&_nc_zt=23&_nc_ht=scontent.famm2-3.fna&oh=00_AT85vK2VF-3p1BD5wbODHkfXcfWby-qIlH4UYsRD-Ijg5Q&oe=6229BCD3

I hope this proposal will be implemented

Feature request: Copy URLs in percent encoded form

Sometimes a URL contains characters (spaces, commas, brackets, Japanese characters, and other non-ASCII characters) that may be dangerous or may point to invalid links (in the case of mojibake).

I was thinking of adding an option to force URLs into percent-encoded form, much like this tool: https://github.com/vincepare/CopyAllUrl_Chrome, which copies the URLs that the tabs are on (not to be confused with extracting all links on the pages of each tab).
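Browsers ship this capability as the standard `encodeURI` function, which percent-encodes spaces and non-ASCII characters while leaving URL delimiters (`/`, `?`, `#`, `&`) intact, so such a copy option could be a thin wrapper. A sketch (hypothetical `toPercentEncoded` helper):

```javascript
// Sketch of a "copy as percent-encoded" option using the standard encodeURI.
// encodeURI escapes spaces and non-ASCII bytes as UTF-8 %-sequences but
// keeps structural characters like /, ?, #, & untouched.
function toPercentEncoded(url) {
  return encodeURI(url);
}

toPercentEncoded('https://example.com/新しい page?q=a b');
// spaces become %20 and the Japanese characters become UTF-8 %-escapes
```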

add powerful, dynamic filter

On the results page, present a form that adjusts the results in real time. The options include:

  • Remove anchors like #foo
  • Remove query strings like ?foo=1&bar=1
  • Remove paths like /foo/bar.html
  • Include URLs that match pattern
  • Exclude URLs that match pattern

The form has a reset button.

The settings are persistent (in other words, remembered).
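The first three options map directly onto the standard `URL` API, where the anchor, query string, and path are separate writable fields. A sketch of such a filter, assuming the requested feature (`applyFilter` and the option names are hypothetical):

```javascript
// Hypothetical real-time filter: each option clears one component of the URL
// via the standard URL API, then duplicates are collapsed with a Set.
function applyFilter(urls, opts) {
  const out = urls.map(raw => {
    const u = new URL(raw);
    if (opts.removeAnchor) u.hash = '';    // drop #foo
    if (opts.removeQuery) u.search = '';   // drop ?foo=1&bar=1
    if (opts.removePath) u.pathname = '/'; // drop /foo/bar.html
    return u.toString();
  });
  return [...new Set(out)]; // re-deduplicate after stripping
}

applyFilter(
  ['https://example.com/a.html#top', 'https://example.com/a.html#bottom'],
  { removeAnchor: true }
);
// → ['https://example.com/a.html']
```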

Suggestion

Hi, would it be possible to grab the links of the subdomains too?
Greetings

Feature Request: Extract All Selected Links

First, my grateful thanks for this extension.

A thought as to a potential improvement: selecting text, then a contextual menu item that lets you extract the links just within the selected text. It would be useful for avoiding the menu-bar and footer links that are present on many websites.

Can you add a new button to also exclude by a filter? Here is an example, please see below.

Extracting images/media?

Pretty much generating a similar list, but including the media on the page. I reckon this list would match what's under Page Info -> Media. Having it split by type (image, video, audio, other MIME type) would be handy.

Option to strip ID

It'd be neat if instead of

foo#bar
foo#baz
raf#dance
raf#place

it'd be deduped to

foo#bar
raf#dance

or

foo
raf
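Both requested variants come down to deduplicating on the URL with its fragment stripped; they differ only in whether the stored value keeps the fragment of the first occurrence. A sketch (hypothetical `dedupeById` helper, deliberately string-based since the examples above are not full URLs):

```javascript
// Deduplicate links by their fragment-less base. keepFragment chooses between
// the two outputs described above: first-seen fragment kept, or stripped.
function dedupeById(urls, keepFragment) {
  const byBase = new Map();
  for (const url of urls) {
    const base = url.split('#')[0];
    if (!byBase.has(base)) {
      byBase.set(base, keepFragment ? url : base);
    }
  }
  return [...byBase.values()];
}

dedupeById(['foo#bar', 'foo#baz', 'raf#dance', 'raf#place'], true);
// → ['foo#bar', 'raf#dance']
dedupeById(['foo#bar', 'foo#baz', 'raf#dance', 'raf#place'], false);
// → ['foo', 'raf']
```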

Feature request: Dark mode

Would it be possible to implement a dark mode for this extension? Going from a dark web page to blinding white is a bit jarring. Thank you.

Show Domain List Only

I use Link Gopher to find hidden spam links on web pages. Oftentimes the list it returns is quite long, requiring scrolling to get to the domain list. Is there a way to display only the domain name list? I'm not a programmer, but if it's simply a matter of modifying one of the files, I can probably handle that. 😁
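A domain-only list can be derived from the full link list with the `hostname` field of the standard `URL` API. A sketch of that reduction (hypothetical helper, not the extension's actual code):

```javascript
// Reduce a list of links to the unique, sorted domains they point at.
function domainsOnly(urls) {
  const hosts = urls.map(u => new URL(u).hostname);
  return [...new Set(hosts)].sort();
}

domainsOnly([
  'https://spam.example.net/buy',
  'https://example.com/index.html',
  'https://spam.example.net/cheap',
]);
// → ['example.com', 'spam.example.net']
```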

[feature request] Extract links on multiple tabs

It would be really cool if it had a feature where you open two tabs, say one on Twitter and one on Google, use the tool, and it extracts the links from both tabs and places them in a single output tab.

Improve post-install experience

Scott D. from Mozilla suggested a better post-install experience

Looking over our editorial evaluation notes the only thing I see is the absence of a post-install experience (i.e. user installs Link Gopher and is left to their own devices to figure out how to use it). Ideally we would throw up a post-install page that provides a basic overview of the extension, or perhaps open Link Gopher's settings/prefs in about:addons

Problems extracting links from spreadsheets on Google Docs

Context Menu

Hello,

Link Gopher is very useful for me and if the extension is not abandoned I would like to suggest a simple enhancement.

Currently the commands can be executed only from the toolbar button. For me it would be handier if I could execute them from the context menu, for example:

  • Link Gopher
    • Extract All Links
    • Extract Links by Filter
    • About Link Gopher

In short, Link Gopher appears in the context menu with sub-menus for the available commands. Admittedly, About Link Gopher is not much needed in the context menu, but it might be kept so that the toolbar commands and the context-menu commands are identical.

I suppose that context menu integration would not require too much work.

Regards

add option custom filters

Hi dear coder,
Please add an option for custom filters, so the user can add URL filters. For example:
extract links by filter "turbobit.net"
extract links by filter "link.tl"
extract links by filter "turbobit.tl" or "uploaded.com"
etc.
Please add this filter feature.
Thanks.

Only a few links extracted

Using linkgopher v2.4.4 on Firefox 104.0.2 (the same happens on Chrome 105).

I found this behavior while browsing this site:

https://www.instagram.com/madonna/reels/

or any Instagram page with Reels; it's not Madonna-specific :-) . Reproducible.

The site creates hundreds of media reel links as you page down.

But whenever I "Extract all links", only a small random number (typically 20-40) of the reel links is extracted:

https://www.instagram.com/reel/CS15qtanjeO/
https://www.instagram.com/reel/CS4zBMjCJPu/
https://www.instagram.com/reel/CSt_27OievG/
https://www.instagram.com/reel/CSwtj_FiD-y/
https://www.instagram.com/reel/CT-mKMigP2G/
https://www.instagram.com/reel/CT2qVhbA2vi/
https://www.instagram.com/reel/CT5dC9pAZRA/
https://www.instagram.com/reel/CTAciy1C6oB/
https://www.instagram.com/reel/CTDKs1En35L/
https://www.instagram.com/reel/CTIfj8-C-Su/

The behavior on other sites with multiple media links (e.g. TikTok) that I checked is OK.

Is it Instagram fooling around, or an undiscovered bug?

Thank you and keep up the Great Work.
