

DDRIVE

Turn Discord into a datastore that can manage and store your files.

Discord server


DDrive: a lightweight cloud storage system that uses Discord as its storage device, written in Node.js. It supports unlimited file sizes and unlimited storage, implemented with Node.js streams and multi-part upload & download.

Demo video: ddrive_demo.mp4

Current stable branch 4.x

Live demo at ddrive.forscht.dev

Features

  • Theoretically unlimited file size, thanks to splitting files into 24 MB chunks using the Node.js streams API.
  • Simple yet robust HTTP front end.
  • REST API with OpenAPI 3.1 specifications.
  • Tested with 4000 GB of data stored on a single Discord channel (with a max file size of 16 GB).
  • Supports basic auth with read-only public access to the panel.
  • Easily deployable on Heroku/Replit for use as private cloud storage.

New Version 4.0

This major release, 4.0, is DDrive rewritten from scratch. It comes with the most-requested features and several improvements.

  • Now uses Postgres to store file metadata. Why?
    • Once you have a huge amount of data stored on ddrive, startup becomes significantly slow, since ddrive has to fetch all the metadata from the Discord channel (for 3 TB of data it takes me 30+ minutes).
    • With Postgres, deleting a file is much faster: ddrive no longer has to delete messages on the Discord channel, only remove the metadata.
    • With Postgres it is now possible to move or rename files/folders, which was impossible in the older version.
  • Added support for renaming files/folders.
  • Added support for moving files/folders (only via the API; not sure how to do it from the frontend, PRs welcome).
  • Now uses webhooks instead of bot/user tokens to bypass the Discord rate limit.
  • DDrive now uploads file chunks in parallel with a configurable limit, which significantly increases upload speed. I was able to upload a 5 GB file in just 85 seconds.
  • Public access mode: it is now possible to give users read-only access with just one config var.
  • Batch upload: you can now upload multiple files at once from the panel. (DClone support has been removed in this version.)
  • Bug fix: downloads resetting on a few mobile devices.
  • Added support for optional encryption of files uploaded to Discord.
  • DDrive now has a proper REST API following the OpenAPI 3.1 standard.
  • Added support for dark/light mode on the panel.
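The parallel-upload-with-a-limit behavior described above can be sketched with a small concurrency-limited mapper. This is a hypothetical helper, not ddrive's real uploader; it just illustrates how an UPLOAD_CONCURRENCY-style limit works:

```javascript
// Illustrative sketch (hypothetical helper, not ddrive's real code):
// run `worker` over `items` with at most `limit` calls in flight,
// like uploading chunks with an UPLOAD_CONCURRENCY limit.
async function mapWithLimit(items, limit, worker) {
  const results = new Array(items.length)
  let next = 0
  const run = async () => {
    // Each runner pulls the next unclaimed index until none remain
    while (next < items.length) {
      const i = next++
      results[i] = await worker(items[i], i)
    }
  }
  // Start `limit` runners sharing the same queue
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, run)
  )
  return results
}
```

With `limit` set to 3, at most three chunk uploads run at once; raising the limit trades CPU/disk usage for throughput, as the config notes below describe.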

I spent several weeks finalizing this new version. Any support is highly appreciated - Buy me a coffee

Requirements

  • Node.js v16.x or Docker
  • A Postgres database and Discord webhook URLs
  • Average technical knowledge

Setup Guide

  1. Clone this project.
  2. Create a few webhook URLs. For better performance and to avoid rate limits, create at least 5, with 1 webhook per text channel. (How to create webhook url)
  3. Set up Postgres using Docker if you don't already have it running:
    • cd .devcontainer
    • docker-compose up -d
  4. Copy config/.env_sample to config/.env and make the necessary changes.
  5. Optional: if you have lots of webhook URLs, you can put them in webhook.txt, separated by \n.
  6. Run npm run migration:up
  7. Run node bin/ddrive
  8. Navigate to http://localhost:3000 in your browser.

How to keep it running forever

  1. Install pm2 with npm install -g pm2
  2. Run pm2 start bin/ddrive
  3. Run pm2 list to check the status of ddrive
  4. Run pm2 logs to check ddrive logs

Config variables explanation

# config/.env

# Required params
DATABASE_URL= # Postgres database URL (a valid postgres URI)

WEBHOOKS={url1},{url2} # Webhook URLs separated by ","

# Optional params
PORT=3000 # HTTP port where the ddrive panel will run

REQUEST_TIMEOUT=60000 # Time in ms after which ddrive aborts requests to the Discord API server. Set it high if you have a very slow connection

CHUNK_SIZE=25165824 # Chunk size in bytes. You should probably never touch this, and if you do, don't set it to more than 25MB - Discord webhooks can't upload files bigger than 25MB

SECRET=someverysecuresecret # If set, every file on Discord is stored with strong encryption, at the cost of significantly higher CPU usage. Don't use it unless you're storing sensitive data

AUTH=admin:admin # Username and password separated by ":". If set, the panel asks for them before granting access

PUBLIC_ACCESS=READ_ONLY_FILE # Grants read-only access to the panel or files. Valid options:
                             # READ_ONLY_FILE  - users can only access file download links, not the panel
                             # READ_ONLY_PANEL - users can browse files/directories in the panel but can't upload/delete/rename anything

UPLOAD_CONCURRENCY=3 # ddrive uploads this many chunks in parallel to Discord. With a fast connection, increasing this significantly improves performance at the cost of CPU/disk usage

Run using docker

docker run --rm -it -p 8080:8080 \
-e PORT=8080 \
-e WEBHOOKS={url1},{url2} \
-e DATABASE_URL={database url} \
--name ddrive forscht/ddrive

One Click Deploy with Railway

Deploy on Railway

Setup tutorials

  • Set up in under 4 minutes on a local/cloud server using neon.tech Postgres - YouTube

API Usage

npm install @forscht/ddrive

const { DFs, HttpServer } = require('@forscht/ddrive')

const DFsConfig = {
  chunkSize: 25165824,
  webhooks: 'webhookURL1,webhookURL2',
  secret: 'somerandomsecret',
  maxConcurrency: 3, // UPLOAD_CONCURRENCY
  restOpts: {
    timeout: '60000',
  },
}

const httpConfig = {
  authOpts: {
    auth: { user: 'admin', pass: 'admin' },
    publicAccess: 'READ_ONLY_FILE', // or 'READ_ONLY_PANEL'
  },
  port: 8080,
}

const run = async () => {
  // Create DFs Instance
  const dfs = new DFs(DFsConfig)
  // Create HTTP Server instance
  const httpServer = HttpServer(dfs, httpConfig)

  return httpServer.listen({ host: '0.0.0.0', port: httpConfig.port })
}

run().then()

Migrate from v3 to v4

Migrating ddrive v3 to v4 is a one-way process: once you migrate to v4 and add new files, you can't migrate those new files back to v3, but you can still use v3 with the old files.

  1. Clone this project
  2. Create few webhooks (1 webhook/text channel). Do not create webhook on old text channel where you have already stored v3 data.
  3. Take pull of latest ddrive v3
  4. Start ddrive v3 with option --metadata=true. Ex - ddrive --channelId {id} --token {token} --metadata=true
  5. Open localhost:{ddrive-port}/metadata in browser
  6. Save JSON as old_data.json in cloned ddrive directory
  7. Put valid DATABASE_URL in config/.env
  8. Run node bin/migrate old_data.json
  9. After few seconds once process is done you should see the message Migration is done

Feel free to create a new issue if it's not working for you or you need any help.

Discord Support server

Contributors

002-sans, 0xde4db33f, aminoxix, atharv-pathak-14, brainbursty, burritoflakes, dependabot[bot], epicgamer007, forscht, kloudalpha, lonelil, sarthakjdev, shivamb25, sploder-saptarshi, xaliks


ddrive's Issues

Cannot download files that have Unicode characters in the filename.

When I try to download a file with Unicode characters in its filename, for example ă à ạ ã.zip, it returns an "invalid character" error.
Logs:

{"level":50,"time":1679326699626,"err":{"type":"TypeError","message":"Invalid character in header content [\"Content-Disposition\"]","stack":"TypeError [ERR_INVALID_CHAR]: Invalid character in header content [\"Content-Disposition\"]\n    at storeHeader (node:_http_outgoing:532:5)\n    at processHeader (node:_http_outgoing:527:3)\n    at ServerResponse._storeHeader (node:_http_outgoing:421:11)\n    at ServerResponse.writeHead (node:_http_server:369:8)\n    at module.exports.handler (/home/meowice/Downloads/ddrive/src/http/api/routes/file/download.js:75:15)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)","code":"ERR_INVALID_CHAR","reqId":"req-x"},"msg":"Invalid character in header content [\"Content-Disposition\"]"}
{"level":30
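A possible fix for this error, sketched with a hypothetical helper (not ddrive's actual code): put the UTF-8 name in an RFC 5987 `filename*` parameter and keep a plain-ASCII fallback, so the Content-Disposition header itself only contains valid characters.

```javascript
// Sketch of a possible fix (hypothetical helper): build a
// Content-Disposition value that survives non-ASCII filenames.
function contentDisposition(filename) {
  // ASCII fallback: replace anything outside printable ASCII
  const fallback = filename.replace(/[^\x20-\x7e]/g, '_').replace(/"/g, "'")
  // RFC 5987 ext-value: percent-encode the UTF-8 bytes
  const encoded = encodeURIComponent(filename)
    .replace(/['()*]/g, c => '%' + c.charCodeAt(0).toString(16).toUpperCase())
  return `attachment; filename="${fallback}"; filename*=UTF-8''${encoded}`
}
```

Browsers that understand RFC 5987 use the `filename*` value and decode the original Unicode name; older clients fall back to the ASCII `filename`.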

Downloads are resetting for some reason

I've tried to teach a friend how to use ddrive to download some stuff from our Discord server, but it doesn't seem to work for some reason.

He boots up the program on Termux and can access the main page just fine, but whenever he tries to download a file, the download progress goes up to around 80% and then pauses automatically. After restarting it, the download goes back to the beginning and starts downloading again.

Here's a video trying to reproduce the issue:
https://user-images.githubusercontent.com/88287554/182630166-68fc3386-dcbd-45a2-8726-f81070d6f9ce.mp4

I don't think the issue is related to Termux, because I've tried to reproduce the same steps on my phone and it works just fine.

The weird thing is that my friend is actually able to download any files directly through Discord; the issue does not happen there.

If there's more information I should provide, just ask for it

and thanks!!!

APP CRASHED

I am trying to run this command but it does not work

ddrive --token {{valid token}} --channelId {{valid channel id}}

error log:

  discordFS >>> booting discordFS +0ms
  app === APP CRASHED :: UNKNOWN ERROR === 
  app  TypeError: Cannot read properties of undefined (reading 'type')
    at /usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:77:94
    at arrayAggregator (/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:511:34)
    at Function.groupBy (/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:4885:16)
    at DiscordFS.load (/usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:77:39)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async startApp (/usr/lib/node_modules/@forscht/ddrive/bin/ddrive:51:5) +0ms

Can't access ddrive v3 metadata

Each time I try to access localhost:4380/metadata, a "not found" message just appears. And yes, I did put --metadata=true in the command.

deploy to Heroku button?

Couldn't figure out how to deploy to heroku, any chance this feature could be added for those of us less familiar with the service?

Keep getting an unknown error

Keep getting this error

discordFS >>> booting discordFS +0ms

  app === APP CRASHED :: UNKNOWN ERROR ===
  app  TypeError: Cannot read properties of undefined (reading 'type')
    at /data/data/com.termux/files/usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:78:94
    at arrayAggregator (/data/data/com.termux/files/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:511:34)
    at Function.groupBy (/data/data/com.termux/files/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:4885:16)
    at DiscordFS.load (/data/data/com.termux/files/usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:78:39)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async startApp (/data/data/com.termux/files/usr/lib/node_modules/@forscht/ddrive/bin/ddrive:62:5) +0ms

Running it in termux

Node version: v16.18.1
NPM version: 8.19.2

It worked before; I had ~16 GB stored.
I had this issue before, but it somehow fixed itself when I uninstalled and switched back and forth between the two versions of Node that Termux provides, reinstalling ddrive each time.

Make the files accessible without having to login

Maybe add an argument to enable file expose? When you have direct url to the file, you don't need to login to view it. Good for sharing large videos with friends on Discord while not having the upload UI exposed for exploitation.

how to make this service available all the time

I managed to run this app on localhost and upload files. The service only works when I launch it from my local machine using cmd, but I'm planning on using it to share my uploaded files with others, which would require this service to be running all the time on dedicated machines to keep the downloads online.
I tried deploying the API code to Replit and it worked, but I'm still running the risk of having my repository interface out in the open on the free plan, and I don't know what Heroku offers in that regard because I couldn't figure out how to run the script on it.
Are there any other services that could freely run my script all the time, privately, other than these 2 options?

[Feature Request] Uploading using the cURL or wget

Is it possible to somehow upload files to the ddrive using a curl command like this:

curl -T archive.tar.gz http://127.0.0.1:8080/archive.tar.gz

I searched for a way to do this but only found dclone which is not available on ARM and would like to be able to upload files to the site via cURL or wget.

[SUGGESTION] Add encryption

Currently Discord is able to see all data that gets stored on someone's DDrive, because files are split into chunks but still saved unencrypted. It would be awesome to have encryption in place so that Discord (even after reassembling the chunks into a full file) isn't able to view the contents without also knowing the secret key. As in many products, the key can be derived from a password given by the user as a startup parameter, like --encrypt "ThisPasswordWillMakeMyDataOnDiscordSave". Encryption should be added as an optional feature (to avoid introducing breaking changes), so when the flag isn't supplied the program simply doesn't use it.

Encryption can be done very securely via AES256 encryption.
The key derivation can be made very securely by utilizing a good key derivation function like argon2.
It should use an IV for each part instead of for each file to add some additional security (as it's already split anyway).

Files larger than 100MB are not supported.

I use a user token that has nitro and uploads files up to 500 MB. Uploads through Discord itself work:

But in ddrive, when I try to upload a file with 100 MB+ chunks, I get an error,

when configured like this: const chunkSize = 104857601

  http POST/file3 +3s
  discordFS >> [ADD] in progress : /file3 +3s
  error === Begin Error ===
  error ---
  error Error: Request entity too large
  error method : POST
  error url : /file3
  error Stack: DiscordAPIError[40005]: Request entity too large
  error     at SequentialHandler.runRequest (/root/.idanya/node_modules/@discordjs/rest/dist/index.js:667:15)
  error     at processTicksAndRejections (node:internal/process/task_queues:96:5)
  error     at async SequentialHandler.queueRequest (/root/.idanya/node_modules/@discordjs/rest/dist/index.js:464:14)
  error     at async REST.request (/root/.idanya/node_modules/@discordjs/rest/dist/index.js:910:22)
  error     at async DiscordFS.createFileChunk (/root/.idanya/node_modules/@forscht/ddrive/src/discordFS/index.js:265:25)
  error     at async StreamChunker.chunkProcessor (/root/.idanya/node_modules/@forscht/ddrive/src/discordFS/index.js:212:27)
  error ---
  error === End Error === +12s

At the same time, if I set the chunkSize to exactly 100 MB, the files are uploaded successfully,

when configured like this: const chunkSize = 104857600

  http POST/file3 +3s
  discordFS >> [ADD] in progress : /file3 +3s
  discordFS >> [ADD] completed   : /file3 +21s

message.content is undefined, "that file might've been a virus"

I was starting my ddrive and kept noticing that it crashed during launch. I've dug into this and found that Discord apparently detected one of the uploaded files as a virus and, as a result, did not upload it. Solved by deleting the messages, but I'd suggest adding a failsafe for that.

// my own code i added to find the exact messages
// src/discordFS/index.js, just before messagesGroupByType
tempMessageCache.forEach(message => {
    if (!message.content) {
        debug('>>> INVALID MESSAGE CONTENT')                                              
        debug(message)
    }
})


Throws this Error

[bvb@bvbManjLX ~]$ sudo ddrive --token " **** " --channelId " **** "
discordFS >>> booting discordFS +0ms
app === APP CRASHED :: UNKNOWN ERROR ===
app TypeError: Cannot read properties of undefined (reading 'type')
at /usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:77:94
at arrayAggregator (/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:511:34)
at Function.groupBy (/usr/lib/node_modules/@forscht/ddrive/node_modules/lodash/lodash.js:4885:16)
at DiscordFS.load (/usr/lib/node_modules/@forscht/ddrive/src/discordFS/index.js:77:39)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async startApp (/usr/lib/node_modules/@forscht/ddrive/bin/ddrive:51:5) +0ms
[bvb@bvbManjLX ~]$

I'm using Manjaro, installed the newest Node.js version...

Bug with handling canary subdomain

I noticed that when using a webhook on the canary.discord.com domain, the URL seems to be parsed incorrectly.
Here's a snippet of what it requests:
"rawError":{"type":"Object","message":"404: Not Found","stack":"","code":0},"code":0,"status":404,"method":"POST","url":"https://discord.com/api/v10https://canary.discord.com/api/webhooks/(channel)/(token)","requestBody":
This can be easily fixed by just removing the canary subdomain, but I wanted to point it out in case it means anything.
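The "remove the canary subdomain" workaround mentioned here can be sketched as a tiny URL normalizer (hypothetical helper, not ddrive code), applied before handing webhook URLs to the REST client:

```javascript
// Sketch of a workaround (hypothetical helper): normalize canary/ptb
// webhook hosts to plain discord.com so the REST client's base URL
// isn't double-prefixed as in the log above.
function normalizeWebhook(webhookUrl) {
  const u = new URL(webhookUrl)
  if (u.hostname === 'canary.discord.com' || u.hostname === 'ptb.discord.com') {
    u.hostname = 'discord.com'
  }
  return u.toString()
}
```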

Import File

Is it possible to import a file structure? Let's say I have a folder with 2-3 subfolders and a few movie/music files, or is it only file by file?

[Error] discord api: socket hang up

From time to time when uploading large files (>8 MB in size) I get an error mid-upload:
in the console: Error: request to https://discord.com/api/v9/channels/XXX/messages failed, reason: socket hang up
uploading a file with curl: request to https://discord.com/api/v9/channels/XXX/messages failed, reason: socket hang up
The web server actually displays an error only about 20% of the time.

95% of the time this leaves some leftover file parts, which just stay there and don't get deleted.
From my observation: if the file upload speed is too high, the upload pauses after X MB to catch up with uploading the parts to Discord. Could Discord's rate limits be the issue? And if so, is it possible to limit the upload speed if a file is over 8 MB?

Server error when uploading file.

Upon uploading a 24 MB file I get an internal server error.
Smaller files seem to work. This is the error:

  discordFS >> [ADD] in progress : /New Folder/01 Paint It, Black.flac +7s
  error === Begin Error ===
  error ---
  error Error: The user aborted a request.
  error method : POST
  error url : /New Folder/01 Paint It, Black.flac
  error Stack: AbortError: The user aborted a request.
  error     at abort (/usr/lib/node_modules/@forscht/ddrive/node_modules/node-fetch/lib/index.js:1448:16)
  error     at AbortSignal.abortAndFinalize (/usr/lib/node_modules/@forscht/ddrive/node_modules/node-fetch/lib/index.js:1463:4)
  error     at AbortSignal.dispatchEvent (/usr/lib/node_modules/@forscht/ddrive/node_modules/event-target-shim/dist/event-target-shim.js:818:35)
  error     at abortSignal (/usr/lib/node_modules/@forscht/ddrive/node_modules/abort-controller/dist/abort-controller.js:52:12)
  error     at AbortController.abort (/usr/lib/node_modules/@forscht/ddrive/node_modules/abort-controller/dist/abort-controller.js:91:9)
  error     at Timeout.<anonymous> (/usr/lib/node_modules/@forscht/ddrive/node_modules/@discordjs/rest/dist/lib/handlers/SequentialHandler.js:115:53)
  error     at listOnTimeout (node:internal/timers:564:17)
  error     at process.processTimers (node:internal/timers:507:7)
  error ---
  error === End Error === +3m

Can't upload large files

I'm not able to upload large files. DDrive works very well with small files, but when I try to upload files larger than 250 MB I get an error.
I tried increasing the timeout.
I tried uploading different files.

My network speed:

Can't load lots of messages with user token

I have a Discord channel with ~248.9 GB of data and I use a user token. The problem is that at first everything worked, but when the data size increased I started getting:

discordFS >>> booting discordFS +0ms (and then ddrive closes)

The Discord account itself is alive, through the client I can easily read messages and download files from the channel. (as well as upload them)

I keep getting an error with hosting this on Cyclic dot sh

[Error] {"message":"Request Entity Too Large"}
shows on the webpage I hosted when I try to upload larger files.
I am unsure what the maximum allowed size is, but small files around 3-6 MB work fine at least, and when I tried hosting this on Replit it didn't have this weird problem. Cyclic and Node logs do not show anything about the affected file operations.

v4 Connection terminated unexpectedly

Deployed; I can open the webpage and log in,
but refreshing or trying to do anything like creating a folder or uploading fails with Connection terminated unexpectedly.
Is there a debug flag or similar to see what caused this issue?

{"level":30,"time":1682692698747,"msg":"Server listening at http://0.0.0.0:3003"}
{"level":30,"time":1682692706040,"reqId":"req-1","req":{"method":"GET","url":"/","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692706049,"reqId":"req-1","res":{"statusCode":304},"responseTime":7.71611999720335,"msg":"request completed"}
{"level":30,"time":1682692706084,"reqId":"req-2","req":{"method":"GET","url":"/normalize.css","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692706086,"reqId":"req-2","res":{"statusCode":304},"responseTime":1.9976000040769577,"msg":"request completed"}
{"level":30,"time":1682692706099,"reqId":"req-3","req":{"method":"GET","url":"/style.css","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8574},"msg":"incoming request"}
{"level":30,"time":1682692706100,"reqId":"req-3","res":{"statusCode":304},"responseTime":0.8849600031971931,"msg":"request completed"}
{"level":30,"time":1682692706104,"reqId":"req-4","req":{"method":"GET","url":"/index.js","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692706105,"reqId":"req-4","res":{"statusCode":304},"responseTime":0.5741599947214127,"msg":"request completed"}
{"level":30,"time":1682692706132,"reqId":"req-5","req":{"method":"GET","url":"/api/directories/","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692706138,"reqId":"req-6","req":{"method":"GET","url":"/favicon.png","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8574},"msg":"incoming request"}
{"level":30,"time":1682692706140,"reqId":"req-6","res":{"statusCode":304},"responseTime":1.631400004029274,"msg":"request completed"}
{"level":50,"time":1682692706142,"err":{"type":"Error","message":"Connection terminated unexpectedly","stack":"Error: Connection terminated unexpectedly\n    at Connection.<anonymous> (/home/user1/git/ddrive/node_modules/pg/lib/client.js:132:73)\n    at Object.onceWrapper (node:events:627:28)\n    at Connection.emit (node:events:513:28)\n    at Socket.<anonymous> (/home/user1/git/ddrive/node_modules/pg/lib/connection.js:107:12)\n    at Socket.emit (node:events:525:35)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)","reqId":"req-5"},"msg":"Connection terminated unexpectedly"}
{"level":30,"time":1682692706143,"reqId":"req-5","res":{"statusCode":500},"responseTime":10.649720005691051,"msg":"request completed"}
{"level":30,"time":1682692746367,"reqId":"req-7","req":{"method":"GET","url":"/","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692746369,"reqId":"req-7","res":{"statusCode":304},"responseTime":1.6050800010561943,"msg":"request completed"}
{"level":30,"time":1682692746400,"reqId":"req-8","req":{"method":"GET","url":"/style.css","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8574},"msg":"incoming request"}
{"level":30,"time":1682692746401,"reqId":"req-9","req":{"method":"GET","url":"/normalize.css","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692746403,"reqId":"req-8","res":{"statusCode":304},"responseTime":2.8660800009965897,"msg":"request completed"}
{"level":30,"time":1682692746404,"reqId":"req-9","res":{"statusCode":304},"responseTime":2.469360001385212,"msg":"request completed"}
{"level":30,"time":1682692746412,"reqId":"req-a","req":{"method":"GET","url":"/index.js","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8575},"msg":"incoming request"}
{"level":30,"time":1682692746415,"reqId":"req-a","res":{"statusCode":304},"responseTime":1.6706800013780594,"msg":"request completed"}
{"level":30,"time":1682692746433,"reqId":"req-b","req":{"method":"GET","url":"/api/directories/","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8575},"msg":"incoming request"}
{"level":50,"time":1682692746437,"err":{"type":"Error","message":"Connection terminated unexpectedly","stack":"Error: Connection terminated unexpectedly\n    at Connection.<anonymous> (/home/user1/git/ddrive/node_modules/pg/lib/client.js:132:73)\n    at Object.onceWrapper (node:events:627:28)\n    at Connection.emit (node:events:513:28)\n    at Socket.<anonymous> (/home/user1/git/ddrive/node_modules/pg/lib/connection.js:107:12)\n    at Socket.emit (node:events:525:35)\n    at endReadableNT (node:internal/streams/readable:1359:12)\n    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)","reqId":"req-b"},"msg":"Connection terminated unexpectedly"}
{"level":30,"time":1682692746438,"reqId":"req-b","res":{"statusCode":500},"responseTime":4.789559997618198,"msg":"request completed"}
{"level":30,"time":1682692746443,"reqId":"req-c","req":{"method":"GET","url":"/favicon.png","hostname":"server-ip-address:3003","remoteAddress":"client-ip-address","remotePort":8663},"msg":"incoming request"}
{"level":30,"time":1682692746445,"reqId":"req-c","res":{"statusCode":304},"responseTime":1.7502000033855438,"msg":"request completed"}

[Discussion] Version 4.0

I want to discuss some changes for version 4.0.


Switch from JavaScript to TypeScript

Switching from JavaScript to TypeScript makes it much easier to catch errors in the code and to support older Node.js versions. Furthermore, it gives developers who use DDrive as a library in TypeScript projects full autocompletion and type safety.

Rename the main branch to main, bleeding-edge, latest, ...

Renaming the main branch every time a new version comes out breaks the repository for all developers working on it. It also makes it nearly impossible to reliably use automatic deployment/release systems.

Automatically publish new changes

This goes hand in hand with the previous point. Once there is a stable name for the main branch, the project can introduce an automatic deployment/release system like Semantic Release, which allows new changes to be published instantly and automatically as soon as they are available and reviewed.

Fix #21

Just fix the bug :)

Add #12

This is a pretty good suggestion, as it's painful how slow ddrive currently is. It can't even use 10% of my potential network speed.

Add #29

This is a pretty good suggestion too as discord shouldn't see what you are uploading. This is what I would expect by default from any software that wants to store my files.

[SUGGESTION] Stream files

I am not sure how feasible this is, but is there a way to make it possible to stream uploaded videos and audio by changing the Content-Type header?

5 minute timeout when running remotely

Whenever I run ddrive locally, there is no time limit for uploads, but when I deploy to Railway it has a five-minute timeout.

Also, I don't know what the readme is telling me to do here; timeout doesn't seem to be an option anywhere in the src.

Getting an error while deploying

Whenever I deploy and try to click the upload button on the website, I get this error and the button doesn't work at all:

{"level":30,"time":1676437037068,"reqId":"req-1e","req":{"method":"GET","url":"/api/directories/","hostname":"ddrive-production-9901.up.railway.app","remoteAddress":"10.10.10.15","remotePort":58094},"msg":"incoming request"}
{"level":50,"time":1676437037107,"err":{"type":"DatabaseError","message":"select *, (select json_agg(r) FROM (select "d".*, sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist","stack":"error: select *, (select json_agg(r) FROM (select "d".*, sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist\n at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)\n at Parser.handlePacket (/app/node_modules/pg-protocol/dist/parser.js:126:29)\n at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:39:38)\n at Socket. (/app/node_modules/pg-protocol/dist/index.js:11:42)\n at Socket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:315:12)\n at readableAddChunk (node:internal/streams/readable:289:9)\n at Socket.Readable.push (node:internal/streams/readable:228:10)\n at TCP.onStreamRead (node:internal/stream_base_commons:190:23)","length":109,"name":"error","severity":"ERROR","code":"42P01","position":"372","file":"parse_relation.c","line":"1373","routine":"parserOpenTable"},"msg":"select *, (select json_agg(r) FROM (select "d".*, sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist"}
{"level":30,"time":1676437037109,"reqId":"req-1e","res":{"statusCode":500},"responseTime":40.103023529052734,"msg":"request completed"}
{"level":30,"time":1676437037121,"reqId":"req-1f","req":{"method":"GET","url":"/api/directories/","hostname":"ddrive-production-9901.up.railway.app","remoteAddress":"10.10.10.15","remotePort":58090},"msg":"incoming request"}
{"level":50,"time":1676437037124,"err":{"type":"DatabaseError","message":"select , (select json_agg(r) FROM (select "d"., sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist","stack":"error: select , (select json_agg(r) FROM (select "d"., sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist\n at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)\n at Parser.handlePacket (/app/node_modules/pg-protocol/dist/parser.js:126:29)\n at Parser.parse (/app/node_modules/pg-protocol/dist/parser.js:39:38)\n at Socket. (/app/node_modules/pg-protocol/dist/index.js:11:42)\n at Socket.emit (node:events:513:28)\n at addChunk (node:internal/streams/readable:315:12)\n at readableAddChunk (node:internal/streams/readable:289:9)\n at Socket.Readable.push (node:internal/streams/readable:228:10)\n at TCP.onStreamRead (node:internal/stream_base_commons:190:23)","length":109,"name":"error","severity":"ERROR","code":"42P01","position":"372","file":"parse_relation.c","line":"1373","routine":"parserOpenTable"},"msg":"select , (select json_agg(r) FROM (select "d"., sum(b.size) as size\n from "directory" as "d"\n left join "block" as "b" on "d"."id" = "b"."fileId"\n where "d"."parentId" = (select "id" from "directory" where "parentId" is null)\n group by "d"."id") r ) as child from "directory" where "parentId" is null limit $1 - relation "directory" does not exist"}
{"level":30,"time":1676437037126,"reqId":"req-1f","res":{"statusCode":500},"responseTime":4.039794921875,"msg":"request completed"}

Suggestion - API

It would be nice to have an API to interact with ddrive remotely.

Files not cleaned up when reloading page

When you upload a larger file (~1 GB, depending on internet speed) and reload the page mid-upload, the bot crashes and only part of the file is left visible in the browser.
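One possible direction for a fix, sketched here with hypothetical names (`trackUpload`, `uploadedBlockIds`, and the `cleanup` callback are illustrative, not ddrive's actual code): listen for the request's `aborted` event and roll back whatever chunks were written before the client disconnected.

```javascript
// Hypothetical sketch: roll back partially uploaded chunks when the client
// disconnects mid-upload. `req` is any EventEmitter-like request object
// (Node's http.IncomingMessage emits 'aborted' on client disconnect);
// `cleanup` receives the ids of the chunks stored so far.
function trackUpload(req, cleanup) {
    const uploadedBlockIds = []
    req.once('aborted', () => cleanup(uploadedBlockIds))
    return uploadedBlockIds
}
```

The upload handler would push each chunk's id into the returned array as it is stored, so a mid-upload reload deletes the orphaned parts instead of leaving them behind.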

how to dclone

How do I clone? When I run bin/dclone -T *token* -C *channelId* -P Charlotte (for example) it doesn't work; in ddrive, Charlotte is a folder.

Error

Getting this error while hosting it on repl.it:
[screenshot: Screenshot_20221019-101518]
Intents are also enabled:
[screenshot: Screenshot_20221019-101633]

Split file when upload

Cloudflare limits uploads for free users to 100 MB; any file over 100 MB is denied. We could work around this by splitting the file into 90 MB parts on upload and combining them server-side, instead of taking the risk of switching the record to 'DNS only'.

Suggestion - use forums as folders

There's a new feature that everyone has probably heard of, forums; it would be great to use them as folders to speed up loading times. (Discord is also rolling out a 500 MB limit for Nitro users, so please consider that too.)

[Suggestion] Pages for folders with too many files

Opening a ddrive folder that contains a lot of files can quickly eat up a lot of RAM.
I've tested this with over 2.1k files in one folder; in Chrome the page used 353 MB of RAM (checked using Chrome's task manager).
A good solution would be to implement pagination so that all 2.1k files aren't displayed at once.
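Server-side, the suggestion amounts to slicing the directory listing before it reaches the browser. A minimal sketch (the `paginate` helper and the 100-per-page default are assumptions, not ddrive's actual API):

```javascript
// Hypothetical sketch of paginating a directory listing so the frontend
// renders at most `perPage` entries at a time instead of all 2.1k files.
function paginate(items, page, perPage = 100) {
    const total = items.length
    const pages = Math.max(1, Math.ceil(total / perPage))
    const current = Math.min(Math.max(1, page), pages) // clamp out-of-range pages
    return {
        page: current,
        pages,
        total,
        items: items.slice((current - 1) * perPage, current * perPage),
    }
}
```

The frontend would then request one page at a time and render only that slice, keeping memory use roughly constant regardless of folder size.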
