
tumblthree's People

Contributors

amigre38, elipriaulx, johanneszab, salrana


tumblthree's Issues

Bugs in version 1.0.4.25

  1. When I select downloading of both images and image metadata in the details view, both are downloaded during the crawl, but the downloader adds the image and metadata counts to the images value only, leaving the metadata value empty.
  2. When I set force-rescan mode for blogs, the download count in the column grows abnormally until it is greater than both the total number of images in the blog and the number of images in the metadata. Repeated forced scans only increase the download values further.
  3. After a forced scan with metadata downloading enabled, the program creates duplicate entries in the metadata files; in fact, whole copies of the metadata are appended inside the existing metadata text files.
  4. Forced rescanning does not work for deleted or renamed files; they are not downloaded again, even though it looks like forced rescanning should download all missing files again.

P.S. Sorry for possible mistakes in the text, English is not my native language

Settings are always saved.

It's not a big deal, and I don't know if it's by design. I just found it weird that the settings are saved without pressing the save button. If I just close the settings window, the settings are saved anyway.

Randomly not scanning images?

I just noticed the program randomly goes blind and won't see any new content unless I delete the blog and re-add it. Has anyone else had that issue?

Edit: it's like every time I update the program, all the blogs I added prior go invisible.

Downloading specified higher-resolution pictures often/always leads to the same image

I doubt that TumblThree is the issue here; instead it's Tumblr's file naming or API. But anyway, I ran through my blogs again today and ended up with around 43,000 files. I ran those files through dupeGuru, both against themselves and then against my reference files from older downloads. That led to deleting 8,000 photos and 23,000 photos, respectively. For the second run, dupeGuru showed that files with the exact same filename except for the suffix (_400 vs _1280, for example) were exact duplicate files. Here is a screenshot: https://imgur.com/RTqQlQB

I have not had a chance yet to check whether this does sometimes produce your intended (and super awesome!) result. Perhaps I'll use a mass renamer on both sets of files to clip off the end of the filename and see if there are still collisions based on filename but with different file sizes.

I don't know if you have considered this, but regardless of the outcome of the above, I think doing duplicate checks based on an MD5 hash rather than the filename could address all of the above. Also, if my wording before wasn't clear, what I mean by "against themselves" is that dupeGuru found that different blogs had downloaded the exact same images today under different filenames. This has always been the case, and isn't too big of a deal for me thanks to dupeGuru, but I figured I would mention it as well.
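A minimal sketch of what such hash-based duplicate detection could look like, assuming a simple in-memory set of known hashes (the class and method names here are hypothetical, not TumblThree's actual code):

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

static class HashDeduplicator
{
    // Hashes of all files seen so far (hypothetical; a real implementation
    // would persist this next to the blog's index file).
    private static readonly HashSet<string> knownHashes = new HashSet<string>();

    // Returns true if a file with identical content was already downloaded,
    // regardless of its filename.
    public static bool IsDuplicate(string path)
    {
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
        {
            string hash = BitConverter.ToString(md5.ComputeHash(stream));
            return !knownHashes.Add(hash);
        }
    }
}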

[bug] The value of the progress bar is not correct

The progress bar never gets to 100%, because the value of blog.DownloadedImages is refreshed only after the progress bar is calculated:

blog.Progress = (uint)((double)blog.DownloadedImages / (double)blog.TotalCount * 100);
// ...
blog.DownloadedImages = (uint)blog.Links.Count();

(Screenshot: tumblthree_bug)
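A minimal sketch of a possible fix, assuming both statements live in the same method: refresh the counter before computing the percentage.

// Sketch of a possible fix: update the counter first, then compute progress.
blog.DownloadedImages = (uint)blog.Links.Count();
// ...
blog.Progress = (uint)((double)blog.DownloadedImages / (double)blog.TotalCount * 100);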

[Feature Request] Parse the regular website instead of using the tumblr api

I've uploaded a branch here where my first steps are visible. If someone wants to participate, I think the hard part is already worked out: how to parallelize it.

By using the archive page it's possible to determine the lifetime of the blog. We can access the archive page via /archive?before_time=, where the before time is a Unix timestamp. So we could start the crawl at several points during the blog's lifetime, say xxx crawls per month, and stop each crawler once it hits a post already seen by a different crawler that started at an earlier time.

I've uploaded a functional commit for image crawling without accessing the tumblr api. Still needs optimization and a code rebase to the current master branch.
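A minimal sketch of how such a partitioned crawl could generate its starting points, assuming monthly intervals (the /archive?before_time= endpoint takes a Unix timestamp per the description above; everything else here is an assumption, not the code from the branch):

using System;
using System.Collections.Generic;

static class ArchivePartitioner
{
    // Sketch: one archive start URL per month of the blog's lifetime,
    // walking backwards from the newest post to the oldest.
    public static IEnumerable<string> ArchiveUrls(string blogUrl, DateTime firstPost, DateTime lastPost)
    {
        for (DateTime t = lastPost; t >= firstPost; t = t.AddMonths(-1))
        {
            long beforeTime = ((DateTimeOffset)t).ToUnixTimeSeconds();
            yield return blogUrl + "/archive?before_time=" + beforeTime;
        }
    }
}

Each crawler would then stop once it reaches a post id already recorded by the crawler that started at the next-earlier timestamp.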

Option to not redownload existing files if there are no index files

It would be nice if the program did not redownload already downloaded files, but simply skipped them when no index files exist for them. This would be useful if, for example, the index files were lost or deleted. I have lots of files from blogs that were downloaded a year ago, and I would like to update those blogs to download new posts, but the index files are gone: I had deleted the blogs from the program because of a crash bug that occurred when adding new blogs once too many blogs existed in the program, so I now have no index files for the older blogs. If I re-add all the old blogs to the program, it will redownload every file in the folders, and that will take a long time because I have over 500 thousand files. I know that you are currently busy with program development or other things, so I do not expect this option to appear anytime soon, or at all. But if there is an opportunity to introduce it, I will be happy, because it would save a lot of time and internet traffic.
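A minimal sketch of the requested check, assuming downloads land in one folder per blog and the local filename is derived from the URL (hypothetical names, not TumblThree's actual code):

using System;
using System.IO;

static class ExistingFileSkipper
{
    // Sketch: skip a download when a file with the same name already exists
    // on disk, even if no index file is present for it.
    public static bool ShouldDownload(string blogFolder, string fileUrl)
    {
        string fileName = Path.GetFileName(new Uri(fileUrl).LocalPath);
        return !File.Exists(Path.Combine(blogFolder, fileName));
    }
}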

Number of downloads turns blank after a crawl.

So in the last few days my blogs have stopped downloading anything. They will crawl, but won't download newly added files during or after a finished evaluation of posts; then, after a finished evaluation, the "number of downloads" column empties to blank. Recrawling or a forced rescan doesn't fix it.

Avoid creating empty folders when no new posts exist

This is another low-priority issue. When doing a crawl on a blog, if there have been zero posts since the last crawl, an empty folder will be created for the blog, and of course no content will be added to it. Would it be possible to not create this folder, or to delete it at the end of the crawl or when closing the program or something?

Internal error description / MaxBytesPerSecond has to be <0

I like TumblThree's design better than the older versions. And I have gotten it to work a few times.

But...

After a while of use (4-6 runs), the program fails on load.
It will allow you to add tumblrs to the queue, but will not crawl the tumblr sites.

The program only starts working again once I delete the entire program and associated libraries, all the directories containing downloaded files, and the AppData\Local\TumblThree\Settings files under the User directory.

Details:

  1. You start the program.

  2. An error message appears in the top status line (just above the list of tumblr blogs), reading:

    Error 1:Could not load file in library:

Highlighting the error message gives you the extra info:

Internal error description
MaxBytesPerSecond has to be <0

I have tried lots of workarounds, like adding all the blogs to the queue and then removing them, rebooting, etc.

All that seems to work is deleting every trace of the program and starting over.

Out of space handling

Hi, I've just started using this program, so I might have missed something.
I have added some blogs and ran out of space. I have noticed that the program doesn't stop at all.
Would you please have the program delete the partial/downloading files and issue the stop command when a write error or out-of-space condition occurs? Right now I am trying to clean the folders by deleting the most recent files, but I don't know how far back I should go. I might just delete everything and try again, because I noticed that the program doesn't check for corrupted files; i.e., if I replace a file with a different file of the same name, the program won't notice. I am thinking that I might end up with some corrupted files if I use pause/stop too often?
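A minimal sketch of the requested behavior, assuming the downloader funnels its file writes through a helper like this (hypothetical names; the error-code check is Windows-specific):

using System;
using System.IO;

static class DownloadWriter
{
    private const int ErrorDiskFull = unchecked((int)0x80070070); // HRESULT for ERROR_DISK_FULL on Windows

    // Sketch: delete the partial file and signal a stop when the disk runs full.
    public static bool TryWrite(string path, byte[] data, Action stopAllDownloads)
    {
        try
        {
            File.WriteAllBytes(path, data);
            return true;
        }
        catch (IOException ex) when (ex.HResult == ErrorDiskFull)
        {
            if (File.Exists(path))
                File.Delete(path); // drop the partial file
            stopAllDownloads();    // hypothetical hook into the crawler's stop command
            return false;
        }
    }
}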

Thanks.

[Feature Request] Grouping of Blogs in the Manager

This has been an enhancement that I have been wanting for a while, so I figured I might as well bring it up while I'm thinking about it again.

Would it be possible to group blogs together into folders or apply some sort of label to them inside the program? For a folder, I'm envisioning a sort of tree structure that blogs can be moved into, which can be expanded or collapsed. But a single label would work as well, as long as there is a column for it that they can be sorted by.

Mostly, I am interested in being able to sort/group them inside the program itself, but an added bonus would definitely be to have all blogs with a folder/label also have their download folders placed in a subfolder of the Blogs folder. So all blogs in the "Wallpapers" folder/label would download to ".\Blogs\Wallpapers\{BlogName}", and all blogs in the "Food" folder/label would download to ".\Blogs\Food\{BlogName}". This would allow for easier sorting after downloading everything.
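A minimal sketch of how the per-label download path could be composed (class and parameter names here are hypothetical):

using System.IO;

static class DownloadFolders
{
    // Sketch: insert the blog's folder/label (if any) between the Blogs root
    // and the blog's own folder.
    public static string GetDownloadFolder(string blogsRoot, string label, string blogName)
    {
        return string.IsNullOrEmpty(label)
            ? Path.Combine(blogsRoot, blogName)
            : Path.Combine(blogsRoot, label, blogName);
    }
}

For example, GetDownloadFolder(@".\Blogs", "Wallpapers", "someblog") yields ".\Blogs\Wallpapers\someblog".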

[Enhancement] Ignore Reblogs Option

Give the ability to download only the original (non-reblogged) content of a blog. It's just a suggestion I thought would be useful. I read the wiki and looked at the documentation for something like this; the closest thing I could find was the tag system.

This website shows the desired result.

With version 1.0.4.19 the program skipped about one third of the metadata and most of the images

Hi everyone.
The program does not download some percentage of the metadata. Here is a screenshot that shows blogs downloaded in metadata-only mode, without images:
http://i.imgur.com/2fUGxj8.jpg
I also noticed that the metadata (images.txt) contains many duplicates; whole parts of the text are repeated along the entire file. If the duplicates are removed from the text, it turns out the program has downloaded much less metadata.
The situation with downloaded images is worse: the program fails to download a lot of images.
For example, there is a blog (sorry, it is NSFW):
http://sagarxx.tumblr.com
I downloaded this blog twice, with TumblThree and with another downloader, and compared the downloaded folders from the two programs. The other downloader downloaded 1247 images; TumblThree downloaded only 123. And it's not duplicates: in the folder from the other downloader I see images that are absent from the TumblThree folder. Tomorrow I will try to re-download the same blogs I downloaded previously and see how many images are downloaded compared to the number of images in the old folders.
This is all I have run into today. Has anyone else faced the same problems?
P.S. English is not my native language, so sorry for possible mistakes in the text.

How to keep images from one post close to each other?

How can I make images be saved in the order they appear in a post? I sort pictures by date/date modified. If I download a blog with more than one image per post, those images aren't downloaded in any particular order; they're just saved randomly among the other images.

For example, say there are 20 tumblr posts. The program starts saving pictures from these posts, and it looks like it first downloads one picture from the 1st post, then three pictures from the 5th post, then a picture from the 20th post, then another picture from the 1st post. I can't make the program download the whole 1st post, then the whole 2nd, and so on.

I have to use TumblOne because of this, lol, but it has an outdated indexing system (if I delete unneeded pictures and then crawl the blog, TumblOne downloads them again). I tried setting 1 or 2 parallel connections instead of the original 25, but then images are not downloaded at all.

What should I do?
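One possible approach (an assumption on my part, not an existing TumblThree feature) would be to prefix saved filenames with the post id and the image's index within the post, so that files from the same post sort next to each other:

static class PostOrderedNames
{
    // Sketch: name files so images from one post sort together.
    // postId and indexInPost are hypothetical values taken from the crawled post.
    public static string BuildFileName(string postId, int indexInPost, string originalName)
    {
        return postId + "_" + indexInPost.ToString("D2") + "_" + originalName;
    }
}

For example, BuildFileName("155123456789", 1, "tumblr_abc_1280.jpg") yields "155123456789_01_tumblr_abc_1280.jpg", which sorts by post and then by position within the post.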

Video files are not downloaded properly

I tried to download video files using TumblThree. The files are downloaded, but when I play them it seems they were not downloaded properly: playback stops in the middle of the video. Is this a common issue or not?

thanks in advance

"Could not load library" on startup

I've had an issue with no free space on my hard drive, and after that I lost my entire library.
When I start TumblThree, I don't see any of my previously added blogs. Sad. Maybe this could be fixed in the future. It's definitely something to do with available space on the hard drive. It seems the program doesn't know what to do when there is no free space and does something wrong instead. Right now I have free space, but it's too late.

Folders not being created on crawl, and other weirdness

Okay, so I have been trying to figure out exactly what's going on here for a while this morning. I'm having some odd issues with 1.0.4.5.

So when I finished crawling a few days ago, I deleted all the empty folders and moved all of the downloads to another folder. When I returned to the app today, I had it download some blogs again. The Queue showed them downloading (and the details screen didn't show any duplicates), but in the Blogs folder the blog did not have a folder present, and all of the supposedly downloaded files were of course nowhere to be found. Blogs that did not have any posts in the last few days had empty folders in the Blogs folder, however.

I played with this for a bit, trying to figure out if there was something I had done wrong. Something I noticed was that if I restored my index and then enabled images/videos on all of my blogs, that would create all of the folders for every single blog. If I then told it to download, it would do so successfully. But if I did the same thing but deleted the empty folders, restarted the program, and started a crawl, the same issue as above would occur: no folders, no files.

Now this is the really weird one. I tried downloading a blog again when it had no folder to begin with. But before it finished "downloading" according to the queue, I stopped the download and exited TumblThree. This caused an empty folder for said blog to appear. I then started TumblThree back up and had it start crawling that blog (after accidentally adding the blog to the queue even though it was still in the queue list from the last time, which added it to the queue list a second time, 2 entries). It went berserk, downloading WAY more posts than have occurred in the last few days. Thousands and thousands of pictures. I think it was ignoring the index file, or the index file was corrupted, IDK. The downloaded files column went nuts, jumping up and down.

I tried the above once more, picking up after stopping it mid-crawl with no folder, then closing it (causing the folder to be created). Upon opening TumblThree, the queue list showed the blog I had interrupted, and nothing else. Upon starting the crawl, it downloaded 1184 files, despite there being a total of 4 posts since the last time I had run this blog. I then deleted all of those files (but not the folder), restored my index file from a backup, restarted TumblThree, re-enabled pictures and videos on that blog, and set it to crawl. It did the same thing! Crazy! So I thought maybe it was because I hadn't crawled since 1.0? Ugh.

So I burned it all down, deleted appdata, restored my index file from backup, and tried downloading just that blog again. The exact same thing happens: 1184 downloads occur. Okay, maybe something weird happened on the blog, right? I started my copy of TumblThree 1.0.0 using the same index files. Upon first crawl, it actually downloaded nothing, rather than the 4 new posts. I've seen that bug, whatever. I ran it again, and it downloaded 1184 files.

I don't get it. The last time I ran that blog was 10/31/16. If you check the archive page on the blog, it shows 4 posts in November of 2016. So, thinking that maybe the files were renamed or something, I took the 1184 files and compared them to past downloads. Just about all of them matched. I copied the 1184 pictures into a folder where 1470 old ones existed, and ended up with 1524. The extra gap of 36 can probably be explained by other duplicate-picture software I run to free up hard drive space.

That's where I am now. IDK if you want the index file for that blog or what; it's a NSFW tumblr blog (nothing weird), but whatever. I wrote this second half while working on it, so I guess the main "new" bug here is that on 1.0.4.5, folders aren't created at the beginning of the crawl, so downloads have nowhere to go unless you make the folder yourself. The other issue must be longstanding, or caused by something Tumblr did on their backend; I tested that on 1.0.0, 1.0.4.3, and 1.0.4.5. Sorry for the super long post, I hope some of the details prove useful.

Tumblr now limits access to its version 1 api.

A few days ago I started noticing that after beginning a crawl, it would go for a few minutes before a bunch of the blogs light up red as offline and the crawl is aborted without notice. If I close the program and reopen it, I can resume crawling for a few more minutes before it happens again. When I click on an offline blog and select "go to website", the blog is still up. I also noticed that a bunch of my "number of downloads" and "downloaded" values are massively out of sync: it will say something like you have 1000 images downloaded, which is correct, but the "number of downloads" says there are only 100 available, which is incorrect. I've tried reverting back a few releases, but it still gives the same issue.
Is there a way to quickly refresh the entire blog list other than removing and re-adding every blog manually?

Upgrading from 1.0?

I have been trying to move from TumblThree 1.0 to any of the newer versions for a while, and I have been unsuccessful. I'm not sure if my index files are the problem or what.

On 1.0.1, the blogs will be evaluated, and then clear from the list without downloading anything.

On newer versions, like 1.0.2.4 and 1.0.4.3, the same thing happens, but the "Number of Images" column for said blog will be cleared at the end of the evaluation.

I delete the appdata before trying a new build, and I will go into the settings after first launch and change a few things to make sure it is set properly. What do I need to do to upgrade to the newer versions without starting from scratch? Redownloading absolutely everything isn't really an option for me.

New suggestion.... Just a suggestion. No comment, duh......

Detail - what is it for? I never once used the detail function. Waste of space.
Progress - what is that for, too? I never see it work. Also a waste of space.
Rating - used for what? If tumblr doesn't use any rating, what does tumblthree use it for?
Bottom buttons - I use a 22-inch monitor, but why must I scroll right and left to reach Settings or Open Folder?

In the next version, can you make a setting to enable/disable the evaluate step? Maybe something good will happen.

Also, can you make a setting where tumblthree automatically queues all of its blogs and runs indefinitely? I say this because I already queue the same 50 blogs again and again until there are 5000 blogs total in the queue. This way, each blog gets a kick in its butt again and will somehow download something.

TumblThree downloading videos is the main reason I love it. So please consider this.

Or just make a beta version.

Video files being downloaded multiple times

I am seeing blogs redownload videos each time I run the program. This is using up lots of extra bandwidth. I don't know if it's every video or just reposts that point to the same file, but it's definitely happening. The exact same file names as the previous videos are being downloaded. I can generally cut all the new ones into the folder with the old ones, and 95% of them are overwrites.

Option to download only a certain number of the most recent posts of a blog.

From what I understand from the recent commits, program development is still proceeding? If so, that's very good. I have a request for one function I would really like to see in the program. I often download a lot of blogs and watch them all, but some blogs offer content of too poor quality, and I delete them from the computer; as a consequence, many gigabytes of traffic are wasted. It would be nice if the program had an option to download, say, only the last 1000 or 500 posts. Then I could estimate the approximate quality of a blog's content from those posts just by looking into the folder: bad blogs could be removed painlessly, and for good blogs it would be enough to uncheck the option so that the program downloads the entire blog. If you have the desire and free time to introduce this function, and it is not too hard to write in the code, I will be very happy; personally, it would be a very useful addition to the program. Thanks!

The program skips some files when downloading

When I add a blog to the download queue, the program starts downloading, but at the end it shows that several files from the blog were not downloaded. In the details window it looks like this: "Download images: 9427/9436, duplicates found: 0". The same happens with other blogs; the program always leaves several files undownloaded. I tried removing a downloaded blog from disk and re-downloading it to track whether the same files are skipped or not. It appears that files are skipped randomly, not the same files as the first time.
I have changed the settings several times: I set scan connections to 1, parallel connections to 1, parallel blogs to 1, timeout to 1250. None of the above helps. I also tried deleting the settings.xml file in AppData; this doesn't help either.
This situation is sad for me, because a week ago I added many blogs to the crawl queue, and today I saw that all the finished blogs are only 99.5-99.9 percent downloaded; there are no completely downloaded blogs. In total, 500+ thousand files were downloaded and 2500+ files were skipped.
I don't know why this bug happens to me; maybe it is because of my internet connection, or something is wrong with my computer. The bug now also reproduces on older versions, but two months ago, when I last downloaded images, there was no such bug in the older versions; all blogs downloaded correctly with all their files.
I am in despair, because so many files were skipped and there is no way to re-download just those files, only to re-download the whole blogs.
Maybe it makes sense to introduce an option that checks the folders for missing files and then downloads those files?
P.S. English is not my native language, so sorry for possible mistakes in the text.

Bugs in version 1.0.4.35

  1. With the URL-list download mode enabled, the URLs are not downloaded at all; no text file containing them is created.
  2. There used to be a bug where, after adding hundreds of blogs to the program, half of them were shown as offline at startup, but after a while their status changed to online. In later versions this bug was fixed; at least it does not reproduce on version 1.0.4.24. In version 1.0.4.35 the bug has reappeared. And once the program has changed the status of all offline blogs to online, closing and reopening the program resets their status to offline, with a subsequent change back to online.
  3. Minor bug: when I start the program, I always see an error at the top: "Could not restore UI setting".

If necessary, I can send a list of hundreds of blogs by mail for testing.

Adding a blog that already exists

I encountered this one in the past, but it's not corrected in the latest (.20) version.
If you add to your list a blog that exists and that you have already downloaded, TumblThree tells you the blog exists in the list.

The next time you relaunch the program, the blog appears as if it was never downloaded (no last download time and no count of downloaded files), so you need to download the whole blog another time.

Option to grab the url list only

While the new _files.tumblr is good, it would be better to have an option to get those links only.

Having only the URLs saved, but not the actual files, one can merge those lists outside the program.
Once merged, duplicate links can be cleaned up and batch-downloaded with a browser extension, saving space and bandwidth.

Just an idea. I've tried something similar by choosing to download only the meta files, but those URLs are blog specific, if I recall correctly.

Thanks.

[Feature Request] Theme Support

I've uploaded a branch showing how we could implement theme support. Using resource dictionaries for the themes and DynamicResources in the .xaml, it would be possible to change the colors at run time from the settings window.

I've uploaded a working example with a carcinogenic, unfinished dark color scheme. If someone is interested in something like this, maybe it's nice to have a dark theme.

The code is unfinished and needs a rebase to the current master.
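A minimal sketch of the runtime switch, assuming one ResourceDictionary .xaml file per theme (the file paths here are hypothetical, not the layout of the branch):

using System;
using System.Windows;

static class ThemeSwitcher
{
    // Sketch: replace the application's merged dictionaries with the chosen theme.
    // DynamicResource lookups in the views then pick up the new colors immediately.
    public static void ApplyTheme(string themeName)
    {
        var dict = new ResourceDictionary
        {
            // e.g. "Themes/Dark.xaml" or "Themes/Light.xaml" (hypothetical paths)
            Source = new Uri($"Themes/{themeName}.xaml", UriKind.Relative)
        };
        Application.Current.Resources.MergedDictionaries.Clear();
        Application.Current.Resources.MergedDictionaries.Add(dict);
    }
}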

Unable to download using latest v1.0.4.16

Hi
I am trying to download using the latest 1.0.4.16 (English version)
It shows all the images processing/downloading in the Queue section, but none are actually being downloaded. Also, nothing is shown in the preview section (preview is turned on in the settings).

For example: http://senaruna.tumblr.com/

please see to it.
Thanks for the nice app.

[Minor Bug] Duplicate Entries from "Check Clipboard"

When copying a URL from my web browser, it creates a duplicate entry. It's not that big of a deal, mostly just an annoyance. I have tested this in Chrome and Edge, but it does not appear to happen from Notepad. Closing TumblThree and opening it again seems to delete the extra entries.

Using Windows 10 Anniversary with Google Chrome

Steps to Reproduce:

  1. Open TumblThree
  2. Open web browser (Google Chrome/Edge)
  3. Copy URL from address bar in browser with Ctrl + C

Result:
http://i.serealia.ca/2016/10/TumblThree_aTYlu2g_2016-10-31_15-00-20.png

If Ctrl + C is spammed it produces multiple entries:
http://i.serealia.ca/2016/10/TumblThree_34YEdf1_2016-10-31_14-55-11.png
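A minimal sketch of a guard against this, assuming a hypothetical set of already-added blog URLs that the clipboard handler can consult before adding:

using System;
using System.Collections.Generic;

static class ClipboardAddGuard
{
    private static readonly HashSet<string> addedBlogUrls =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase);

    // Sketch: only add a blog when its URL has not been seen before,
    // so repeated clipboard events for the same text are ignored.
    public static bool TryAddBlog(string url)
    {
        return addedBlogUrls.Add(url.TrimEnd('/'));
    }
}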

[Question] Can I build it for Linux Ubuntu?

Hi. Thanks for the great software!
I'm not into C#, so I don't know anything about the language, but if it's possible I would like to compile this app for Linux (Ubuntu). I just need to make sure it's feasible without changing the code. Can I do that? Or is there something Windows-specific (the folder structure, for example)?

Different numbers in the "downloaded files" and "number of downloads" columns after downloading the blog due to the presence of duplicates in the blog

If the blog contains a certain number of duplicates that the program skips, then the "downloaded files" and "number of downloads" columns display different numbers after downloading. Say the blog contains 1000 images, 50 of which are duplicates: the program then shows that 950 images have been downloaded, while the "number of downloads" column displays 1000. Because of this, the progress bar stays incomplete, never quite reaching the end, so it seems the blog has not been downloaded completely. It would be convenient if the "number of downloads" were reduced by the number of duplicates; then it would be clear that the blog was downloaded completely.
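A minimal sketch of the suggested adjustment, reusing the progress formula quoted in the progress-bar issue above (DuplicateCount is a hypothetical counter of skipped duplicates, not an existing property):

// Sketch: exclude skipped duplicates from the total before computing progress.
uint effectiveTotal = blog.TotalCount - blog.DuplicateCount; // DuplicateCount is hypothetical
blog.Progress = (uint)((double)blog.DownloadedImages / effectiveTotal * 100);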

When Tumblr is empty

I encountered a strange issue (and I cannot put my finger on it myself).
I have a tumblr site from which files were downloaded in the past (I haven't tried with a new one with the same properties), and the tumblr now has no posts (for example, davidcharlec.tumblr.com).
If I add this tumblr to the queue and launch TumblThree, the blog is put in the processing state and seems to enter an endless loop: it never stops and never releases the process.

Maybe here:
totalPosts = Int32.Parse(blogDoc.Element("tumblr").Element("posts").Attribute("total").Value);

If totalPosts is 0, must we then exit this function? If I launch the program through the IDE, I see that the line just after it raises an exception in this specific case:
ulong highestId = UInt64.Parse(blogDoc.Element("tumblr").Element("posts").Element("post").Attribute("id").Value);

An exception of type 'System.NullReferenceException' occurred in TumblThree.Applications.dll but was not handled in user code

Additional information: Object reference not set to an instance of an object.
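A minimal sketch of the guard suggested above (assuming both statements live in the same crawl method; the surrounding code is not shown):

// Sketch: bail out before touching the first <post> element when the blog is empty.
int totalPosts = Int32.Parse(blogDoc.Element("tumblr").Element("posts").Attribute("total").Value);
if (totalPosts == 0)
{
    return; // empty blog: no <post> element exists, so parsing the highest id would throw
}
ulong highestId = UInt64.Parse(blogDoc.Element("tumblr").Element("posts").Element("post").Attribute("id").Value);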

inline images?

I know I had a thread about inline images before and you implemented downloading them, but it looks like the program downloads inline pics from some blogs but not others.
Re-adding the blog doesn't work.
I notice the program sees everything and sets the number of downloads to one value; then I add the blog to the queue, crawl it, and suddenly the number of images drops.

Could not save the blog

When I try to add this blog:
http://p-e-n-e-l-o-p-e-m-a-c-h-i-n-e.tumblr.com

I get the error "Could not save the blog". If i look in the index folder the file was well created.
I then close and restart and it's ok the blog is in the list. Maybe a kind of name problem?
It's with the latest 19 version.

In fact, after testing, newly added blogs were not downloading. I tested on blogs already in the list; refreshing and downloading seem to work ("seem" only, as I don't know exactly what is or isn't done, but at least I see some files downloaded). With a new blog added since the new version, it's as if it freezes during the first "evaluation" step.

Release 1.0.4.34 - shift + click

When I click on one item, and then while pressing shift I click on another to select more than a certain number of blogs, maybe 10-15, the program crashes.

Are larger files (videos) truncated?

Is it just me and my test blogs, or are videos broken, truncated to only around <10 MB after the download? Was there a size limit for videos previously?

Is anyone around here downloading more videos than I do? I rarely use the application myself right now, so it's hard for me to detect all the issues.

Thanks for any reply.

Audio files are not properly downloaded

Audio files are not properly downloaded right now. Some posts contain mp3 files, which can be easily downloaded. Sadly that's not true for all audio posts.

All posts, however, contain a .swf file which, if downloaded directly, only contains a text file with a link to a tumblr.com-hosted swf player that downloads and plays a stream. There is some authentication involved which prevents direct access to the stream.
