absurd-sql's People

Contributors

jlongster, keithhenry, matissjanis, quolpr


absurd-sql's Issues

Call for help: webkitFileSystem backend

I have already written a backend using webkitFileSystem: https://gist.github.com/jlongster/ec00ddbb47b4b29897ab5939b8e32fbe

I did it somewhat naively, though. It's way slower because I don't know how to do bulk reads/writes; I can't find anything in the API about it, or any examples. Surely there's a way to avoid calling entry.file every single time? See https://gist.github.com/jlongster/ec00ddbb47b4b29897ab5939b8e32fbe#file-webkitfilesystem-backend-js-L48

Surely you can somehow call writer.write more than once? When I tried it, it errored, and calling createWriter each time is super slow! See https://gist.github.com/jlongster/ec00ddbb47b4b29897ab5939b8e32fbe#file-webkitfilesystem-backend-js-L36

I should probably create a PR so that you already have some running code with it. I'd love for someone to help with it!

Multi-threading not working

@jlongster
My use case involves multiple queries which take time, and to improve performance I wanted to run absurd-sql in multiple worker threads. The problem is that once I write a change on one worker thread, the second worker doesn't see those changes. Is there a solution for this?

Split the async fs interface, indexedDB, and sqlite-specific code

Hello!
This is very useful work you have been doing here. If I understand correctly, this repository contains both the Atomics/SharedArrayBuffer logic required to provide an async interface on top of Emscripten's synchronous FS, and the code that implements this interface with an IndexedDB backend.

  • Is there any part of this code that is specific to sql.js, or could this be reused with any Emscripten-compiled code that interacts with files? If there is code that is specific to sql.js, would it be feasible to split it from the sync-async bridge and the IndexedDB logic?

  • Would it be possible to split the IndexedDB backend, in order to let the user provide a different implementation? I'm thinking about an HTTP backend in particular.

What is the license for this project?

Hello! Thanks for the effort you have put into this very cool project. I was wondering if you have thought about licensing and are going to add a license to the repo, since it's currently missing and people may start using it in live projects.

Please use Discussions (and read my first letter!)

Hello, I've enabled Discussions and all general questions/discussions should happen there, not here. I will close issues that don't have a specific technical problem they are addressing.

I've also posted a letter explaining why I haven't been super active here. Sorry about that! Please read this: #33

(Posting here in case people come into Issues first -- I'll close this soon)

Database disk image is malformed error sometimes

I use Microsoft Edge (a Chromium-based browser), which supports Atomics & SharedArrayBuffer, and sometimes I get a "database disk image is malformed" error. However, I haven't checked whether it happens when multiple tabs are open, and I am unsure under which conditions it occurs.

The stack trace, captured in Sentry:

Error: database disk image is malformed
  at n.handleError(/assets/VaultDb.worker.d94830ba.js:15:55024)
  at e.step(/assets/VaultDb.worker.d94830ba.js:15:50388)
  at n.exec(/assets/VaultDb.worker.d94830ba.js:15:53846)
  at Ri.sqlExec(/assets/VaultDb.worker.d94830ba.js:15:148562)
  at Ri.getRecords(/assets/VaultDb.worker.d94830ba.js:15:148929)
  at Qo.getLinksOfNoteId(/assets/VaultDb.worker.d94830ba.js:15:127812)
  at MessagePort.n(/assets/VaultDb.worker.d94830ba.js:1:1863)

The change counter in the sqlite header isn't *always* updated

I see "Once we’ve grabbed a readwrite transaction, we can read some bytes from the file which represent SQLite’s “change counter”. If that counter is the same as what we had when we requested a write, it’s safe to write!" in the announcement post.

Unfortunately, I discovered that's only mostly true while working on https://github.com/backtrace-labs/verneuil-wip. Verneuil runs SQLite's Tcl regression tests with assertions enabled in its replicating VFS, and assertion failures in the bitwise replication check uncovered at least two scenarios where SQLite can change the data on disk without updating the header's version counter:

  1. Maybe benign for absurd-sql, but rollback doesn't always restore all garbage (unused) bytes on a page: sqlite stages writes in uninitialised malloced arrays, so non-deterministic bytes from previous allocations can leak through to persistent storage (we work around that by building sqlite with a macro that redirects malloc to calloc).
  2. The page-cache flushing (https://github.com/sqlite/sqlite/blob/master/test/cacheflush.test / https://www.sqlite.org/c3ref/db_cacheflush.html) tests don't roll back (AFAICT), but manage to cause physical changes without updating the version counter. I believe the writes happen to pages that aren't semantically visible to other connections until the SQL transaction commits, so that's safe for sqlite, which only uses the page counter to drop its in-memory page cache. However, this does mean that the change counter cannot be used to detect all writes to the underlying storage.

It probably makes sense for absurd-sql to manage its own dedicated change-tracking key-value entry, or maybe to simplify the locking mechanism to skip the "reserved/pending" states and have only shared & exclusive locks.
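For reference, the change counter discussed above lives in the SQLite database header: per the SQLite file format, it is a 32-bit big-endian integer at byte offset 24 of the 100-byte header. A minimal sketch of reading it from the first page of the file:

```javascript
// Sketch: read SQLite's "file change counter" from the first bytes of a
// database file. Per the SQLite file format, it is a 32-bit big-endian
// integer at byte offset 24 of the 100-byte header.
function readChangeCounter(headerBytes) {
  const view = new DataView(headerBytes.buffer, headerBytes.byteOffset);
  return view.getUint32(24, false); // false = big-endian
}

// Demo: a fake 100-byte header with the counter set to 7.
const header = new Uint8Array(100);
new DataView(header.buffer).setUint32(24, 7, false);
console.log(readChangeCounter(header)); // → 7
```

As the issue explains, comparing this value across lock acquisitions catches most but not all writes, which is why a backend-managed change marker is the safer design.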

Asyncify version of SQL.js (no COOP required!)

I was able to achieve the same performance as absurd-sql, but with an Asyncify build of SQL.js (using wa-sqlite).

Here is the comparison:

example3.mp4

And, you can also run some benchmarks at https://cerulean-bublanina-96bf3a.netlify.app/

The results are roughly the same, but no COOP is required anymore!

Here is the source code of the backend: https://github.com/trong-orm/wa-sqlite-web-backend/blob/main/src/IDBCachedWritesVFS.ts (it's a bit messy). I used the same tricks as absurd-sql does: don't write until the transaction finishes, and use a cursor for sequential reads.

The problem now is that wa-sqlite is under the GPL-3.0 license, so we need someone to make an MIT-licensed version of wa-sqlite 😅

optimal cache_size?

I can confirm sql-js/sql.js#447 (comment) but for absurd-sql.

absurd-sql is very slow on mobile (Android Chrome) for me. Increasing cache_size to high values alleviates the problem, but some automatic behaviour would be nice.

How to add extensions? (fts4 and spellfix1)

I am trying to create fuzzy full-text search for a website, and I'm quite lost as to how one would add extensions to absurd-sql.

The extensions are fts4 and spellfix1.

Any solutions?

Run absurd-sql in a serviceworker

I did some quick tests to run absurd-sql in a serviceworker, but it fails like this:

serviceworker.js:63 Failed to open the database: ErrnoError { message: 'FS error', Oa: 20, node: undefined }
at Object.Yb (webpack://absurd-example-project/./node_modules/@jlongster/sql.js/dist/sql-wasm.js?:160:231)
at Object.lc (webpack://absurd-example-project/./node_modules/@jlongster/sql.js/dist/sql-wasm.js?:160:402)
at eval (webpack://absurd-example-project/./node_modules/@jlongster/sql.js/dist/sql-wasm.js?:177:15)
at new Promise ()
at initSqlJs (webpack://absurd-example-project/./node_modules/@jlongster/sql.js/dist/sql-wasm.js?:24:24)
at openDB (webpack://absurd-example-project/./src/serviceworker.js?:19:77)
at eval (webpack://absurd-example-project/./src/serviceworker.js?:47:20)
eval @ serviceworker.js:63

Before diving deeper, I would like to know whether this is supposed to work in general. My idea was to serve static HTML/JS files and images out of an absurd-sql instance and add some sync capabilities to fetch them from a server-side SQLite DB.
But there's probably a workaround: use the Worker from the sample project and store duplicates of the web resources in the Cache Storage API.

The "VACUUM" command doesn't work

I use IndexedDB to store the SQLite data.
I repeatedly delete tables and insert large amounts of data, which causes the database to grow larger and become slower.
What can I do to handle this?

can't resolve fs

I'm using webpack and somehow it can't resolve 'fs'. Do I need a polyfill?

Note: I'm trying to use it in the background script of a Chrome extension.

ERROR in ./node_modules/@jlongster/sql.js/dist/sql-wasm.js 93:81-94
Module not found: Error: Can't resolve 'fs' in './node_modules/@jlongster/sql.js/dist'
resolve 'fs' in './node_modules/@jlongster/sql.js/dist'
  Parsed request is a module
  using description file: ./node_modules/@jlongster/sql.js/package.json (relative path: ./dist)
    Field 'browser' doesn't contain a valid alias configuration
    resolve as module
      ./node_modules/@jlongster/sql.js/dist/node_modules doesn't exist or is not a directory
      ./node_modules/@jlongster/sql.js/node_modules doesn't exist or is not a directory
      ./node_modules/@jlongster/node_modules doesn't exist or is not a directory
      ./node_modules/node_modules doesn't exist or is not a directory
      looking for modules in ./node_modules
        single file module
          using description file: ./package.json (relative path: ./node_modules/fs)
            no extension
 @ ./src/worker/index.ts 43:31-59
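A common workaround for this class of error (an assumption on my part, not something stated in this thread) is to stub out Node-only modules in webpack 5's `resolve.fallback`, since the `fs` code path in the sql.js bundle should only be exercised when running under Node:

```javascript
// webpack.config.js — sketch, assuming webpack 5.
// Setting a fallback to `false` tells webpack to provide an empty module
// instead of failing to resolve the Node built-in in a browser bundle.
module.exports = {
  // ...your existing entry/output/loader config...
  resolve: {
    fallback: {
      fs: false,     // Node-only; sql.js guards its use behind environment checks
      path: false,
      crypto: false,
    },
  },
};
```

Whether `path`/`crypto` also need stubbing depends on the bundle; start with `fs` alone and add the others only if webpack reports them.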

TypeScript support

@quolpr You mentioned that you were interested in helping migrate to TypeScript. Are you still interested in that? I'm familiar with it, but not as familiar with setting it up on a new project.

Alternate solution for sync IDB calls in Safari?

Hi!

Congrats on this project! It has the potential to be a game changer. I actually had the same idea after reading hosting-sqlite-databases-on-github-pages earlier this summer. But ideas are one thing; creating something that works, with all the details and deep problem solving you have done with locks and everything, is impressive 🥇

My thought for solving the async problem was to utilize synchronous XMLHttpRequest, intercept the request locally from a service worker, and do the async IndexedDB work there. I suppose this might be heavier than your solution with SharedArrayBuffer and Atomics, but maybe it could be an option for Safari? Thoughts?

manual setup - need some help :(

Hi - I am very excited about absurd-sql! It is exactly what I have been searching for to port a bunch of (individually small) tools from MS Access across to a web interface.

Unfortunately, the web server I have available for this project is not my server. I have quite limited access to it, and I cannot (ever) get npm, Node.js, or Yarn onto it. But I can put the files I need into place manually.

I have confirmed:

Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin 

But I did not think it would be much of a problem to arrange the files manually. Except I find that it is. I am stuck - maybe just too tired and stressed - but I just cannot see where the files should go. I am just trying to get the example running. I thought the console would show a bunch of errors for missing files, and then I could use those messages to understand what is missing from where. But the console is clean.

It's almost certainly something obvious I am overlooking, but any assistance getting it working would be greatly appreciated.

The file structure I am using is as follows:

[screenshots of the file layout]

memory leak in sql.js, new package needed

@jlongster It appears that the npm package you have published for sql.js does not include the following commit:

sql-js/sql.js@ec44c18

We are seeing this memory leak, and with large data sets it's pretty severe.

Your fork does include this commit, but the npm package for 1.6.7 looks like it was created Aug 8th, which would not have contained this fix.

Is it possible to get a new package created? I notice you are attempting to merge your sql.js branch back into upstream, but it hasn't been completed.

Use File System API

The File System API allows reading and writing native files on the host system's disk, including in-place writes. There's a dedicated file system (called the origin private file system, or OPFS) that websites can access without having to first prompt the user for access, which is sufficient for the purposes of a database. Reads happen via File / Blob and writes via WritableStream. I have not researched how random-access read/write works.
(The File System Access API is a different variation of the API, for arbitrary file access using a file picker; it cannot write in-place, so it's not relevant here.)

It seems that all major browsers implement it now, aside from Android WebView: Chrome desktop, Chrome for Android, Firefox, Safari on Mac and iOS, and the Samsung browser on Android.

To avoid the overhead of using IndexedDB as storage, the File System API could be used as the FS implementation for absurd-sql, hopefully making it considerably faster. The goal is to come close to the native speed of SQLite.

Where do I get sql-wasm.wasm ?

Hi! Really excited by this project. I'm trying it out locally, using the directions in the readme, and my browser's making a request for /js/sql-wasm.wasm which throws a 404 as it doesn't exist. Not sure what's calling it, or where it's supposed to be. Any suggestions?

I see it as a static file in the example repo, but the readme doesn't mention it 🤔

IDB benchmark would probably be faster with `getAll()`

Hey, this is an awesome project and really technically impressive. Hats off!

Just wanted to point out that the IDB benchmark for summing each item would probably be faster using getAll() rather than cursors. It avoids the back-and-forth of iterating through each item in the object store with a cursor.

Now of course, for the 100,000 case, you probably don't want to read 100,000 items into memory at once. But even in that case, you can use batching with getAll() to implement a faster "cursor" (e.g. fetch 100 items at a time instead of just 1).

I did this years ago for PouchDB (pouchdb/pouchdb#6033); it was a ~50% speedup at the time. So absurd-sql would probably still win! 🙂 But it might be interesting to see a comparison with a more "optimized" IDB usage.
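The batched-getAll() "cursor" described above can be sketched as a generic paging loop. In the sketch below, `fetchBatch` is a hypothetical stand-in for an IndexedDB call such as `store.getAll(IDBKeyRange.lowerBound(afterKey, true), limit)`; the loop itself is the technique the comment proposes.

```javascript
// Sketch of the batched-getAll "cursor": instead of iterating an IDBCursor
// one record at a time, fetch `batchSize` records per request and resume
// from the last key seen.
async function sumInBatches(fetchBatch, batchSize) {
  let sum = 0;
  let afterKey = -Infinity;
  while (true) {
    const batch = await fetchBatch(afterKey, batchSize);
    if (batch.length === 0) break; // no records past afterKey: done
    for (const item of batch) sum += item.value;
    afterKey = batch[batch.length - 1].key; // resume after the last key
  }
  return sum;
}

// Demo with an in-memory "object store" of 10 items (values 1..10).
const items = Array.from({ length: 10 }, (_, i) => ({ key: i, value: i + 1 }));
const fetchBatch = async (afterKey, limit) =>
  items.filter(it => it.key > afterKey).slice(0, limit);

sumInBatches(fetchBatch, 3).then(sum => console.log(sum)); // → 55
```

With a real object store, each `fetchBatch` round trip replaces `batchSize` individual cursor continuations, which is where the speedup comes from.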

Discussion regarding performance / absurd

Since this implementation consistently beats the plain IndexedDB implementation AND provides a consistent syntax for SQLite operations (e.g. you may use the same operations as defined in an Electron app), is this implementation actually absurd?

The good use cases seem pretty clear to me.
In case there is another well-maintained project for the same purpose, maybe it would be good to add a link to that repo in the description?

"Unexpected token 'export'" when installing

Hello,

When installing absurd-sql in a new SvelteKit project, I get the following error: Unexpected token 'export' from /node_modules/absurd-sql/dist/indexeddb-main-thread.js:66: export { initBackend };.

I hope anyone can help me, thanks in advance!

Steps to reproduce

  1. Create a new SvelteKit project: npm create svelte@latest my-app
  2. Install dependencies and start dev server
  3. Install @jlongster/sql.js and absurd-sql via NPM
  4. Update /src/routes/+page.svelte to:
<script>
    import { initBackend } from 'absurd-sql/dist/indexeddb-main-thread'
    import { onMount } from 'svelte'

    onMount(() => {
        function init() {
            let worker = new Worker(new URL('./index.worker.js', import.meta.url))
            // This is only required because Safari doesn't support nested
            // workers. This installs a handler that will proxy creating web
            // workers through the main thread
            initBackend(worker)
        }

        init()
    })
</script>

<h1>Welcome to SvelteKit</h1>
<p>Visit <a href="https://kit.svelte.dev">kit.svelte.dev</a> to read the documentation</p>
  5. Create a file src/routes/index.worker.js with the following code:
import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';

async function run() {
  let SQL = await initSqlJs({ locateFile: file => file });
  let sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
  SQL.register_for_idb(sqlFS);

  SQL.FS.mkdir('/sql');
  SQL.FS.mount(sqlFS, {}, '/sql');

  const path = '/sql/db.sqlite';
  if (typeof SharedArrayBuffer === 'undefined') {
    let stream = SQL.FS.open(path, 'a+');
    await stream.node.contents.readIfFallback();
    SQL.FS.close(stream);
  }

  let db = new SQL.Database(path, { filename: true });
  // You might want to try `PRAGMA page_size=8192;` too!
  db.exec(`
    PRAGMA journal_mode=MEMORY;
  `);

   // Your code
}
  6. Open the browser

In the browser, I get the error message.

version problem

The module '\?\C:\Users\Administrator\Desktop\test\node_modules\enmap\node_modules\better-sqlite3\build\Release\better_sqlite3.node'
was compiled against a different Node.js version using
NODE_MODULE_VERSION 93. This version of Node.js requires
NODE_MODULE_VERSION 108. Please try re-compiling or re-installing
the module (for instance, using npm rebuild or npm install)..

How can I fix this? Can anyone help me, please?

Storing Blobs and startup time

Hi, first off, nice job on this project!

I work on a photo editing / graphic design webapp where we auto-save the user's project in IndexedDB. Most of what we're saving is binary image Blobs. I'm trying to get a sense for the trade-offs of using your library for our use case, and have two questions:

  1. Do you have a sense for the performance of storing Blobs in IndexedDB vs using this library? I'm assuming there'd be the cost of converting the Blob to an ArrayBuffer so it can be shared with the worker, and converting the ArrayBuffer back to a Blob when reading it.

  2. Startup time really matters for our app, as we need to read the user's auto-saved Blobs from IndexedDB and render them to canvas as quickly as possible. Am I right in thinking startup would be slower with sql.js b/c of the need to boot up the worker and also do the ArrayBuffer -> Blob conversion?
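For what it's worth, the two conversions in question are cheap to express. A minimal sketch (Blob is global in browsers and in Node 18+; the cost the questioner asks about is the copy these calls perform, not the call overhead):

```javascript
// Sketch: the Blob <-> ArrayBuffer round trip described above. A Blob must
// become raw bytes to be stored as SQLite data or posted to a worker, and
// becomes a Blob again when read back out.
async function blobToBytes(blob) {
  return new Uint8Array(await blob.arrayBuffer());
}

function bytesToBlob(bytes, type) {
  return new Blob([bytes], { type });
}

// Round-trip demo.
(async () => {
  const original = new Blob([new Uint8Array([9, 9, 9])], { type: 'image/png' });
  const bytes = await blobToBytes(original);
  const back = bytesToBlob(bytes, original.type);
  console.log(bytes.length, back.size, back.type); // → 3 3 image/png
})();
```

Whether this copy matters for startup depends on blob sizes; for large images it is worth measuring before committing to the migration.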

Really appreciate your feedback on this. Thanks!

Database locked sometimes

Sometimes I get "database is locked" when I have two tabs open, and sometimes they write to the DB simultaneously. I use Microsoft Edge (a Chromium-based browser), which supports Atomics & SharedArrayBuffer.

And I am curious how it could happen, since the lock should be handled by IDB 🤔. Or maybe I missed something.

The stack trace, captured in Sentry:

Error: database is locked
  at n.handleError(/assets/VaultDb.worker.ee6ed4b1.js:15:55024)
  at e.step(/assets/VaultDb.worker.ee6ed4b1.js:15:50388)
  at n.exec(/assets/VaultDb.worker.ee6ed4b1.js:15:53846)
  at Ri.sqlExec(/assets/VaultDb.worker.ee6ed4b1.js:15:149051)
  at ? (/assets/VaultDb.worker.ee6ed4b1.js:15:149374)
  at Array.forEach(<anonymous>)
  at Ri.insertRecords(/assets/VaultDb.worker.ee6ed4b1.js:15:149281)
  at ? (/assets/VaultDb.worker.ee6ed4b1.js:15:125570)
  at Wo(/assets/VaultDb.worker.ee6ed4b1.js:15:123937)
  at Ri.transaction(/assets/VaultDb.worker.ee6ed4b1.js:15:148788)

I didn't set up the source map, but I still hope this helps somehow in understanding why such an error happens.

WITHOUT ROWID Optimization

I didn't expect it to work, but I'm leaving this here in case someone else tries it. It happens randomly after a few dozen items are added.

Uncaught (in promise) DOMException: Failed to execute 'advance' on 'IDBCursor': The transaction has finished.
    at f.read (blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:5038)
    at blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:7600
    at w (blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:6652)
    at blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:7560
    at g (blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:7671)
    at g (blob:http://localhost:3000/59fedd9f-74f2-4db5-b08a-a4d22abebf7e:1:9082)

Encryption - is this possible?

Normal SQLite has extension/pager support for adding encryption to the entire DB file, e.g. AES ciphers. I was wondering if this project has similar capabilities.

Looking at how the DB is read/parsed, this may be as easy as encrypting/decrypting on read/write of chunks.

Use cache api instead of indexeddb

I have worked with IndexedDB several times. Every time it was nothing but pain and suffering. The last time I tried something demanding with IndexedDB (implementing a blob store for in-browser IPFS), it completely broke. Not just the IndexedDB I was working with, but also all other IndexedDB-based websites in the same browser.

I have been looking around for alternatives. Obviously localStorage won't work, because it is too coarse-grained and in any case limited to 5 MB or so.

But there is the browser cache api which might work:
https://developer.mozilla.org/en-US/docs/Web/API/Cache

At least according to the docs, you have complete control over when a user-generated cache is being purged. "An origin can have multiple, named Cache objects. You are responsible for implementing how your script (e.g. in a ServiceWorker) handles Cache updates. Items in a Cache do not get updated unless explicitly requested; they don't expire unless deleted. Use CacheStorage.open() to open a specific named Cache object and then call any of the Cache methods to maintain the Cache."

And it will be purged either all at once or not at all. So it's kind of similar to IndexedDB, but with less overhead and overall insanity. I did some experiments, and it seems to work quite well from wasm.

Browser caches are essential for a functioning browser, so I think they are quite efficient; for example, they do not use one file per cache entry. They offer a key-value store interface without transactions, but it seems you should be able to build a sqlite backend / Emscripten-compatible file system on top of it, similar to https://github.com/jlongster/absurd-sql/tree/master/src/indexeddb

One cache entry would correspond to a block of a file. Modifying would work by reading, modifying, and writing back. This is quite similar to how it seems to be done in the IndexedDB backend.
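The read-modify-write scheme described above can be sketched against a generic async key-value store. In a browser, `store` could be backed by the Cache API (`cache.put(new Request(url), new Response(buf))` / `cache.match(url)`); here a Map stands in so the logic is self-contained:

```javascript
// Sketch of the "one cache entry per block" idea: writes land inside
// fixed-size blocks, each stored under its block number.
const BLOCK_SIZE = 4096;

async function writeAt(store, offset, bytes) {
  // Read-modify-write the block containing `offset` (for brevity, assumes
  // the write does not straddle a block boundary).
  const blockNo = Math.floor(offset / BLOCK_SIZE);
  const block = (await store.get(blockNo)) || new Uint8Array(BLOCK_SIZE);
  block.set(bytes, offset % BLOCK_SIZE);
  await store.put(blockNo, block);
}

// Demo with a Map-backed store standing in for the Cache API.
const map = new Map();
const store = {
  get: async k => map.get(k),
  put: async (k, v) => { map.set(k, v); },
};
writeAt(store, 4100, new Uint8Array([7, 8])).then(() => {
  console.log(map.get(1)[4], map.get(1)[5]); // → 7 8
});
```

The missing piece relative to IndexedDB is transactions: with the Cache API, crash-consistency across multi-block writes would have to be built on top (e.g. a journal block), exactly the kind of machinery the IndexedDB backend gets from IDB transactions.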

WAL Mode

Awesome work! I was playing around with the SQLite VFS mechanism myself back in February, but I hadn't thought of your clever Atomics.wait approach.

In the blog post, you mention:

There’s also write-ahead logging, or “WAL mode”. This can be more efficient if working with a real disk. However, to achieve its performance it requires things like shared memory using mmap and things like that. I don’t think we can reliably map those semantics onto IndexedDB, and I don’t even see the point. WAL potentially adds more overhead if we’re just writing them all to IDB anyway. The structure of the writes isn’t as important to us.

I haven't tested this, but I suspect that for the SQLite-on-IDB case, WAL actually makes a fair amount of sense. For one, you can avoid the extra writes to a rollback journal (and I don't think JOURNAL_MODE=MEMORY is sufficient: when you run BEGIN TRANSACTION and COMMIT as parts of different statements, closing the tab in between them may corrupt the SQLite database).

And I also think it's better aligned with how the underlying storage engines for IndexedDB work. Firefox uses SQLite in WAL mode, so any in-place edit of an IDB block is actually just going to append data to a WAL anyway. Likewise, Chrome's IDB is implemented on top of LevelDB, which is a log-structured merge database that similarly never writes over anything and just appends changes to the end.

The other alternative is the more obscure VFS options like SQLITE_FCNTL_COMMIT_ATOMIC_WRITE, which lets you tell SQLite that your underlying storage medium is capable of atomic commits; in certain circumstances SQLite then won't bother maintaining a journal at all. Unfortunately it's more of an optimization than something you can rely on entirely, as I've found that in some situations it will still try to write to a regular rollback journal.

Browser requirements (iOS lockdown mode)

I'm currently using Actual Budget, and I'm a big fan.
However, the web app does not work when in iOS lockdown mode.

I'm trying to find out why. What I tried:

  • the Actual domain is fully whitelisted in iOS - basically this means the fact that I have lockdown mode enabled, should not have any impact.
  • typeof SharedArrayBuffer returns a value - so this should not be a problem
  • the error I get in Actual is 'no native wasm support detected' . This is the same error as shown by https://sql.js.org/examples/GUI/index.html when not whitelisted. However, after whitelisting the sql.js domain, that demo works as designed. This leads me to believe WASM support is there when a domain is whitelisted.
  • your demo never works in Safari in iOS lockdown, even when whitelisting it.
  • Actual never works in Safari in iOS lockdown, even when whitelisting it. It keeps returning 'no native wasm support detected'.
  • According to https://wasm-feature-detect.surma.technology/, the same WASM features are available (tested in Safari on a locked down iPad and in Safari on a non-locked down iPad - same results)

Any clues as to what is done differently? I understand this is a case you probably do not wish to support, but I'm very curious what is blocking this library from functioning in iOS lockdown mode. Maybe it has nothing to do with WASM. I have changed the title to 'browser requirements' as I would like to understand which browser features (such as WASM) are required for this library to work.

Also see discussion in the Actual Discord channel here: https://discord.com/channels/937901803608096828/1139056081490948149

Lock error using existing database

Is there a way to initialize a database using an existing SQLite file? For instance, here's what it might look like if I combine your example repo with a sql.js example to load an existing database from a URL:

import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';

async function init() {
  let SQL = await initSqlJs({ locateFile: file => file });
  let sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
  SQL.register_for_idb(sqlFS);

  SQL.FS.mkdir('/sql');
  SQL.FS.mount(sqlFS, {}, '/sql');

  // host an example file somewhere with permissive CORS settings
  const data = await fetch("https://example.com/chinook.db").then(res => res.arrayBuffer());
  const db = new SQL.Database(new Uint8Array(data));
  return db;
}

async function runQueries() {
  let db = await init();

  const stmt = db.prepare("SELECT Name FROM artists;");
  stmt.step(); // Execute the statement
  console.log(stmt.getAsObject());
}

runQueries();

This is the error that gets generated:

index.js:149 Uncaught (in promise) TypeError: Cannot read property 'lock' of undefined
    at SQLiteFS$1.lock (index.js:149)
    at eval (sql-wasm.js:93)
    at sql-wasm.wasm:0xf7cec
    at sql-wasm.wasm:0x9e651
    at sql-wasm.wasm:0x2a775
    at sql-wasm.wasm:0xd83d
    at sql-wasm.wasm:0xe7e5
    at sql-wasm.wasm:0xebb41
    at sql-wasm.wasm:0x986a
    at sql-wasm.wasm:0x27d84

Maybe it makes sense that it doesn't work. Your version of SQL.Database doesn't let you specify a filename if you give it a data array instead of a filename. There might be other limitations preventing this pattern from working, but I'm not sure.

This might be a trivial fix to your sql.js fork, but I'm unable to try it out since I've had all sorts of problems getting local versions of absurd-sql and sql.js to work with the example project.

Kudos and a question

First, I just wanted to say: WOW. Thank you for all your hard and brilliant work on this.

Second, my question: do you think this could replace MySQL as far as WordPress goes? One of the pain points in my maintenance of several WP sites is the performance issues and quirks of administering MySQL instances. Obviously, it would be a lot of effort for someone (or a group of someones) to substitute absurd-sql, since WP seems to be largely MySQL-specific, but with all the sites out there that use WP as a CMS, the gains could be substantial.

Export DB throws an error

I'm using absurd-sql in Electron.
I export the DB like this:

try {
  const db = await getInstance()
  const data = db.export()
  const blob = new Blob([data], { type: 'application/octet-stream' })
  const blobHref = URL.createObjectURL(blob)

  self.postMessage({ action: 'exportedDB', blobHref })
} catch (error) {
  console.info('sql export error, return ', error)
}

I get this error:

sql export error, return  TypeError: Cannot read properties of undefined (reading 'constructor')
    at readChunks (indexeddb-backend.js:65:20)
    at File.read (indexeddb-backend.js:248:22)
    at Object.read (index.js:101:37)
    at Object.read (sql-wasm.js:151:211)
    at Object.readFile (sql-wasm.js:154:349)
    at c.export (sql-wasm.js:88:238)
    at self.onmessage (instance.ts?type=mod…er_file? [sm]:86:23)

The error is caused by const data = db.export().
In indexeddb-backend.js, the chunk data is undefined when this check runs:

if (chunk.data.constructor.name !== 'ArrayBuffer') {
     throw new Error('Chunk data is not an ArrayBuffer');
   }

How can I resolve it? Thanks.

IDBMutableFile / webkitRequestFileSystem

Chrome and Firefox each have a non-standard way of storing and retrieving flat files that can be much faster and lower-overhead than using IDB for file-like data. In the browser I normally use https://github.com/random-access-storage/random-access-web to pull in all the hacks required, and there is an IDB fallback if the faster versions aren't detected.

Using the random-access-storage API you'd get all of these backends with less work, but that would involve replacing or wrapping the storage layer. I'm doing this for a similar wasm project (a spatial database) written in Rust when it runs in the browser.

But all of that would make the project more useful and less absurd.

IDB name is undefined

I am using the latest version of absurd-sql + your sql.js (0.0.53). When I do this:

main:

function init() {
  let worker = new Worker(
    new URL('./SqlNotesRepository.worker.js', import.meta.url),
    {
      name: 'sql-notes-worker',
      type: 'module',
    },
  );

  // This is only required because Safari doesn't support nested
  // workers. This installs a handler that will proxy creating web
  // workers through the main thread
  initBackend(worker);
}

init()

worker:

import initSqlJs from '@jlongster/sql.js';
import { SQLiteFS } from 'absurd-sql';
import IndexedDBBackend from 'absurd-sql/dist/indexeddb-backend';

const run = async () => {
  let SQL = await initSqlJs({
    locateFile: (file: string) => `/sqljs/${file}`,
  });
  let sqlFS = new SQLiteFS(SQL.FS, new IndexedDBBackend());
  SQL.register_for_idb(sqlFS);

  SQL.FS.mkdir('/blocked');

  SQL.FS.mount(sqlFS, {}, '/blocked');

  const db = new SQL.Database(`/blocked/123.sqlite`, {
    filename: true,
  });

  db.exec(`
    PRAGMA journal_mode=MEMORY;
  `);

  let sqlstr =
    "CREATE TABLE IF NOT EXISTS hello (a int, b char); \
INSERT INTO hello VALUES (0, 'hello'); \
INSERT INTO hello VALUES (1, 'world');";
  db.run(sqlstr); // Run the query without returning anything
};

run();

The new IndexedDB DB got created, but its name is undefined

image

But in your example site, IndexedDB DBs have the correct names 🤔

Also, on page reload I am getting this message on insert:

image

And I think it may somehow relate to this issue 🤔 The interesting part is that once it overwrites (I guess) all the blocks from the previous session, it starts working well.

Btw, great lib 👍 I am building a note-taking app and was at the stage where I needed to introduce full-text search, which is a headache in IndexedDB. Now I am going to rewrite the persistence layer on top of absurd-sql, and I am super excited to get overall speed improvements with text search for free 🙂 It also gives me an easy way to improve performance when I port the app to phones/desktop, since Ionic/Cordova/Electron have wrappers for native SQLite.

Using absurd-sql starting from a web worker, rather than main thread.

My application's domain logic already lives in a web worker; is the additional worker abstraction necessary in this case? Is there a trick to using absurd-sql in this manner? I'm unable to get it to start: it gets stuck at dist/indexeddb-backend.js:809, in the waitRead function of the Writer.

EDIT: Oh I see, I'm starting to build my mental model of this thing. The indexeddb-backend wants a thread to itself for this very reason. In my case I use ThreadsJS, and the two still don't seem to play nicely.
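The reason the backend wants a dedicated worker comes down to the primitive behind waitRead: Atomics.wait blocks the calling thread until another thread notifies it (or a timeout fires), and browsers only allow that blocking off the main thread. A minimal sketch of that primitive, with a timeout standing in for the notify that the other worker would normally send:

```javascript
// Sketch of the blocking primitive behind waitRead. Atomics.wait
// parks the calling thread on a shared Int32 slot until another
// thread calls Atomics.notify, or the timeout expires.
const sab = new SharedArrayBuffer(4);
const state = new Int32Array(sab);

// Nobody calls Atomics.notify here, so this wait simply times out
// after 50ms. In absurd-sql, the thread servicing IndexedDB would
// notify this slot once the requested block is ready.
const result = Atomics.wait(state, 0, 0, 50);
```

If another library (like a thread-pool wrapper) multiplexes work onto the same worker, that worker can end up parked in Atomics.wait while the message that should wake it is queued behind it, which matches the hang described above.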

Multiple Windows (Safari and WKWebView)

First of all, congratulations on your ingenious approach. It looks promising for solving many issues I face in my projects. Thanks for sharing this solution!

In particular, for Safari and WKWebView I wonder if you have an idea how to have multiple windows open and still keep the data in sync. AFAIK there are no shared workers or other shared contexts except IndexedDB. This results in a situation similar to opening a SQLite database natively from different computers on a shared folder.

The only solution I see right now is to have one SQLite database connection per window, close it once a window becomes inactive, and reopen it once the window becomes active again.

Do you know of a better solution?
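The open-on-activate/close-on-deactivate policy described above can be reduced to a small state machine, sketched here under stated assumptions: makeConnectionManager is hypothetical, and in a real app setActive would be driven by visibilitychange or focus/blur events:

```javascript
// Hypothetical sketch of "one connection per window": open the
// database when this window becomes active, close it when it goes
// inactive, so only one window touches the file at a time.
function makeConnectionManager(open, close) {
  let db = null;
  return {
    // Returns whether a connection is currently held.
    setActive(active) {
      if (active && db === null) {
        db = open();          // e.g. new SQL.Database(...)
      } else if (!active && db !== null) {
        close(db);            // e.g. db.close()
        db = null;
      }
      return db !== null;
    },
  };
}
```

The manager is idempotent: repeated activations reuse the existing connection, so wiring it straight to noisy focus events is safe.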

How does query work (without loading everything into memory)?

Nice work exploring a new way to improve persistent storage on the web. The performance improvement is very impressive.

I have a question about querying. In your blog you mentioned it never loads the whole database into memory, because it only loads whatever SQLite asks for. What confuses me is how the library can execute SQL statements against ArrayBuffers persisted in IDB without loading them into memory first. Can you please share more details on the memory usage of the library? Sharing JS heap snapshots would help a lot.

Thanks!
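The key idea is that SQLite talks to its file through a read(offset, length) interface, so only the blocks covering the requested byte range ever need to come out of storage. A hypothetical sketch of that demand-paged read path, with a Map of fixed-size blocks standing in for the IndexedDB store and a counter showing how few blocks a small read touches:

```javascript
// Demand-paged reads, sketched. The store maps block index ->
// Uint8Array(BLOCK_SIZE); only blocks overlapping the requested
// range are fetched. This is an illustration, not absurd-sql's code.
const BLOCK_SIZE = 4096;

function makePagedFile(store) {
  let blocksFetched = 0;
  return {
    read(offset, length) {
      const out = new Uint8Array(length);
      const first = Math.floor(offset / BLOCK_SIZE);
      const last = Math.floor((offset + length - 1) / BLOCK_SIZE);
      for (let i = first; i <= last; i++) {
        blocksFetched++;
        // Missing blocks read back as zeros, like a sparse file.
        const block = store.get(i) || new Uint8Array(BLOCK_SIZE);
        const start = i === first ? offset % BLOCK_SIZE : 0;
        const end = Math.min(BLOCK_SIZE, offset + length - i * BLOCK_SIZE);
        out.set(block.subarray(start, end), i * BLOCK_SIZE + start - offset);
      }
      return out;
    },
    get blocksFetched() { return blocksFetched; },
  };
}
```

A query that reads a 10-byte range from a multi-gigabyte file fetches exactly one 4 KB block (or two, if the range straddles a boundary); everything else stays in IndexedDB, which is why the heap stays small regardless of database size.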
