pg.zig's People

Contributors

karlseguin, kitsuniru, richard-powers, zigster64

pg.zig's Issues

error: @intCast must have a known result type

branch: zig-0.11

$ zig version
0.11.0

zig build output:

Build Summary: 2/5 steps succeeded; 1 failed (disable with --summary none)

install transitive failure
└─ install zapexample transitive failure
   └─ zig build-exe zapexample Debug native 1 errors
/root/.cache/zig/p/1220104468d980763e5878db../src/proto/StartupMessage.zig:19:24: error: @intCast must have a known result type
 view.writeIntBig(u32, @intCast(payload_len));
                       ^~~~~~~~~~~~~~~~~~~~~
/root/.cache/zig/p/1220104468d980763e5878db.../src/proto/StartupMessage.zig:19:24: note: result type is unknown due to anytype parameter
/root/.cache/zig/p/1220104468d980763e5878db../src/proto/StartupMessage.zig:19:24: note: use @as to provide explicit result type
referenced by:
    auth: /root/.cache/zig/p/1220104468d980763e5878db../src/auth.zig:41:22
    auth: /root/.cache/zig/p/1220104468d980763e5878db../src/conn.zig:143:19
    newConnection: /root/.cache/zig/p/1220104468d980763e5878db../src/pool.zig:214:6
    init: /root/.cache/zig/p/1220104468d980763e5878db../src/pool.zig:55:19
    main: src/main.zig:5:27
    callMain: /root/.asdf/installs/zig/0.11.0/lib/std/start.zig:574:32
    initEventLoopAndCallMain: /root/.asdf/installs/zig/0.11.0/lib/std/start.zig:508:34
    callMainWithArgs: /root/.asdf/installs/zig/0.11.0/lib/std/start.zig:458:12
    posixCallMainAndExit: /root/.asdf/installs/zig/0.11.0/lib/std/start.zig:414:39
    _start: /root/.asdf/installs/zig/0.11.0/lib/std/start.zig:327:40
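For reference, the compiler notes above already point at the shape of the fix: give @intCast an explicit result type via @as. A sketch against the line in the trace (not necessarily the upstream fix):

// Sketch only: give @intCast a known result type, as the compiler note suggests.
view.writeIntBig(u32, @as(u32, @intCast(payload_len)));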

Cross compile to 32bit - error in counter.zig - expected 32-bit integer type or smaller; found 64-bit integer type.

Situation:

I have a small util that uses pg.zig.
I develop and test this util locally on an Apple silicon / arm64 Mac.
To deploy, I normally cross-compile it for linux/x86 (32-bit), scp the binary to the target machine, and run it there.

Workaround: I just cross-compile for x86_64-linux-musl instead, and that works fine, but this seems like a regression.

Currently using the Zig 0.12 release version.

Not sure if this is a Zig comptime bug with @atomicRmw(), a pg.zig bug, or a metrics bug?

In metrics.zig, the counters for alloc_params, etc. are defined as Counter(u64), but the alloc_params, etc. functions assume they are incrementing the counters by usize. On 64-bit platforms this of course works fine :)
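A minimal illustration of that mismatch (hypothetical code, not the library's counter.zig; enum spellings per Zig 0.12):

// Hypothetical: a u64 counter bumped by a usize amount with an atomic read-modify-write.
var alloc_bytes: u64 = 0;

pub fn recordAlloc(size: usize) void {
    // On 64-bit targets usize coerces to u64 and this compiles fine.
    // On a 32-bit target like x86-linux-musl, the 64-bit atomic operand is what
    // trips "expected 32-bit integer type or smaller; found 64-bit integer type".
    _ = @atomicRmw(u64, &alloc_bytes, .Add, @as(u64, size), .monotonic);
}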

I have played with changing metrics.zig to use u16 or u32 for those counters, but it's not as simple as that ... which makes me think it might be a newish Zig comptime bug.

Not sure

--------------------------------------------

To easily duplicate the error:

  • pull the latest pg.zig on your 64-bit machine
  • zig build test -Dtarget=x86-linux-musl

This fails with the same error I'm seeing.

Segfault trying to read query results `result.next()`

Zig version: 0.11.0

build.zig.zon:

.{
    .name = "test",
    .version = "0.0.1",

    .dependencies = .{
	.pg = .{
            .url = "https://github.com/karlseguin/pg.zig/archive/zig-0.11.tar.gz",
            .hash = "1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151"
        }
    }
}

I've created a min-repro:

const std = @import("std");
const pg = @import("pg");
const Pool = pg.Pool;
const Result = pg.Result;

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{
        .thread_safe = true,
    }){};
    var allocator = gpa.allocator();

    var pool = try Pool.init(allocator, .{
        .size = 10,
        .connect = .{
            .host = "myhost",
            .port = 6543,
        },
        .auth = .{
            .username = "user",
            .database = "postgres",
            .password = "pass",
            .timeout = 10_000,
        },
    });
    defer pool.deinit();

    var result = try query(
        pool,
        "SELECT * FROM public.\"Product\" p WHERE p.\"companyId\"=$1;",
        .{1},
    );
    defer result.deinit();

    while (try result.next()) |row| {
        std.debug.print("{}\n", .{row});
    }
}

pub fn query(pool: *Pool, sql: []const u8, values: anytype) !Result {
    const conn = try pool.acquire();
    defer conn.deinit();

    var result = try conn.query(sql, values);
    defer result.deinit();
    return result;
}

zig build run output:

Segmentation fault at address 0x7373de80f006
/home/user/zig/0.11.0/files/lib/std/mem.zig:1430:35: 0x3a3329 in readIntNative__anon_8428 (test)
    return @as(*align(1) const T, @ptrCast(bytes)).*;
                                  ^
/home/user/zig/0.11.0/files/lib/std/mem.zig:1438:35: 0x367420 in readIntForeign__anon_5964 (test)
    return @byteSwap(readIntNative(T, bytes));
                                  ^
/home/user/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/reader.zig:236:34: 0x3677f1 in buffered (test)
   const len = std.mem.readIntBig(u32, buf[start+1..len_end][0..4]);
                                 ^
/home/user/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/reader.zig:140:24: 0x367b6c in next (test)
   return self.buffered(self.pos) orelse self.read();
                       ^
/home/user/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/conn.zig:476:27: 0x367cd2 in read (test)
   const msg = reader.next() catch |err| {
                          ^
/home/user/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/result.zig:75:34: 0x37aac9 in next (test)
  const msg = try self._conn.read();
                                 ^
/home/user/programming/zig/test/src/main.zig:34:27: 0x37b232 in main (test)
    while (try result.next()) |row| {
                          ^
/home/user/zig/0.11.0/files/lib/std/start.zig:574:37: 0x2a1d8e in posixCallMainAndExit (test)
            const result = root.main() catch |err| {
                                    ^
/home/user/zig/0.11.0/files/lib/std/start.zig:243:5: 0x2a1871 in _start (test)
    asm volatile (switch (native_arch) {
    ^
???:?:?: 0x0 in ??? (???)
run test: error: the following command terminated unexpectedly:
/home/user/programming/zig/test/zig-out/bin/test
Build Summary: 3/5 steps succeeded; 1 failed (disable with --summary none)
run transitive failure
└─ run test failure
error: the following build command failed with exit code 1:
/home/user/programming/zig/test/zig-cache/o/27bb83cfc76fda6c71d3b8afdfe5a95e/build /home/user/zig/0.11.0/files/zig /home/user/programming/zig/test /home/user/programming/zig/test/zig-cache /home/user/.cache/zig run

For more info that may be helpful: there is just one row in the table. The query itself is correct (SELECT * FROM public."Product" p WHERE p."companyId"=1), and pg.zig correctly reports back the number of columns. It just segfaults on result.next().
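One thing worth noting in the repro above (an observation, not a confirmed root cause): the query helper defers both conn.deinit() and result.deinit() before returning, so main reads a Result whose connection and buffers have already been cleaned up. A sketch of the pattern that keeps both alive for the whole read loop (method names are the ones used in this and other issues here; exact ownership semantics assumed):

// Sketch only: free the result and release the connection after iterating,
// rather than inside a helper that returns the result.
const conn = try pool.acquire();
defer conn.release();

var result = try conn.query("SELECT * FROM public.\"Product\" p WHERE p.\"companyId\"=$1;", .{1});
defer result.deinit();

while (try result.next()) |row| {
    _ = row; // read columns here with row.get(...)
}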

build.zig.zon "error: UnsupportedUrlScheme" Zig 0.11

Really thankful for your amazing work :)

What am I doing wrong?
Could it be that the git+https URL scheme is not supported in Zig 0.11?

.{
    .name = "Learning",
    .version = "0.0.1",

    .dependencies = .{
        .pg = .{
            .url = "git+https://github.com/karlseguin/pg.zig#master",
        }
    }
}

.tar.gz dependencies are working fine for me.
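For reference, the tarball form used by another report in this list looks like the following (URL and hash copied from that report; if the hash no longer matches, zig reports the hash it actually found):

.dependencies = .{
    .pg = .{
        .url = "https://github.com/karlseguin/pg.zig/archive/zig-0.11.tar.gz",
        .hash = "1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151",
    },
},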

Using pg.zig with multiple threads or workers with Zap

(using zig 0.11.0)

I've been working on a zap project that utilizes pg.zig to pull data from a database. Everything seems to work fine, unless I try using multiple threads or workers.

I assume there is a way to do this, but I'm likely not taking the correct approach.

I'm using a Pool with size = 10, and hitting the server with 9 concurrent queries. This works perfectly fine when using only 1 thread and 1 worker via zap. If I use more than 1 thread or worker, I start seeing messages like:

Segmentation fault at address 0x78e8fee44000
error: PG bind message has 4 result formats but query has 3 columns

Error GET /products - error.PG
127.0.0.1 - - [Tue, 26 Mar 2024 21:28:56 GMT] "GET /configurations/products HTTP/1.1" 500 46b 97331us
Segmentation fault at address 0x78e8fee44000
/home/user/zig/0.11.0/files/lib/compiler_rt/memcpy.zig:19:21: 0x68c650 in memcpy (compiler_rt)
            d[0] = s[0];
                    ^

The segfault only happens when using multiple workers, but I still get the "PG bind message has 4 result formats but query has 3 columns" error when using more than one "thread".


Source code of one of the locations where this error occurs (it can happen for any endpoint...):

pub fn queryProducts(alloc: std.mem.Allocator, companyId: usize) ![]Product {
    const conn = try pool.aquire();
    defer conn.release();

    const sql =
        \\ SELECT * FROM public."Product" p
        \\ WHERE p."companyId"=$1
        \\ ORDER BY p."name" ASC;
    ;

    var result = conn.query(sql, .{companyId}) catch |err| {
        if (err == error.PG) {
            if (conn.err) |pge| {
                std.log.err("PG {s}\n", .{pge.message});
            }
        }
        return err;
    };
    defer result.deinit();

    var arr = std.ArrayList(Product).init(alloc);
    while (try result.next()) |row| {
        const description = row.get(?[]const u8, 3);
        try arr.append(.{
            .id = row.get(i32, 0),
            .companyId = row.get(i32, 1),
            .name = try alloc.dupe(u8, row.get([]const u8, 2)),
            .description = if (description) |desc| try alloc.dupe(u8, desc) else null,
        });
    }
    return arr.toOwnedSlice();
}

Again, there's no error when only using one zap "thread" and one "worker".

Any thoughts?

Issues with the dynamic buffer

I'm seeing issues with the contents of the dynamic buffer under very specific circumstances. I haven't been able to definitively put a finger on them yet, and I expect that might take a bit of time.

These all refer to the buffer used in reader.zig, in the situation where I'm inside a StartFlow and I'm using an opt.allocator (in this case the http.result.arena allocator).

In my DB connection, I'm using 4k page sizes.

  • Starts off with buf = static
  • When a new message won't fit in the static buffer, it allocates a new value for buf, using the arena allocator. No probs
  • When another new message in the same flow won't fit in this dynamic buffer, it will realloc the dynamic buffer to be big enough

This is using self.allocator.realloc()

This has two possible paths inside std.mem.Allocator.realloc():
1 - It tries to resize buf in place. If this works, the buffer is resized, retains the same pointer, and everything works.
2 - If the resize fails, it allocates a new buffer of the correct size, copies the old buffer contents to the new one, memsets the old buffer with undefined (0xAA?), frees the old buffer, and returns the new slice.

In cases where the reallocation is small (i.e. fits within the 4k page size), everything seems to work fine and reliably.

In cases where the reallocation is large and the resize fails (more than 4k ... and I'm checking this by inserting debug prints into my stdlib to confirm that it fails to resize), what I'm seeing in my app is that when the next message is read in result.zig, this code here returns unreasonably large values:

                        message_length = std.mem.readInt(u32, buf[start + 1 .. start + 5][0..4], .big) + 1;

i.e. message size = 820MB, or 2.3GB, or whatever :(
The values of start / pos / etc. in the reader look correct - it's the contents of buf that are incorrect.

Which in turn fails. Sometimes it fails with a runtime panic, sometimes with a read hanging on the DB socket waiting for 800MB of data.

Ouch. That's hard to reason about.

I have made this small change in my copy of reader.zig, and ... for now ... it seems to fix the problem at my end, and my application works reliably over very large queries.

                            } else {
                                // currently using a dynamically allocated buffer, we'll
                                // grow or allocate a larger one (which is what realloc does)
                                std.debug.print("realloc the new buff  ml {} current_len {} start {} pos {}\n", .{ message_length, current_length, start, pos });
                                // comment out the old realloc
                                // new_buf = try self.allocator.realloc(buf, message_length);

                               // do it manually
                                if (allocator.resize(buf, message_length)) {
                                    std.debug.print("Resized buffer to {}, buf len is now {}\n", .{ message_length, buf.len });
                                    new_buf = buf;
                                } else {
                                    new_buf = try allocator.alloc(u8, message_length);
                                    std.debug.print("Resize failed, realloc buffer to {}\n", .{message_length});
                                    self.buf_allocator = allocator;
                                    @memcpy(new_buf[0..current_length], buf[start..pos]);
                                    allocator.free(buf);
                                }
                                std.debug.print("realloc success - buf.len is now {}\n", .{new_buf.len});
                                lib.metrics.allocReader(message_length - current_length);
                            }

Very odd - the logic above is similar to what std.mem.Allocator.realloc() is doing, isn't it?

If I stick with using realloc, then my application reliably crashes at the same spots.

This suggests that either my app has some serious memory corruption issues, or Zig stdlib realloc is broken. One of the two must be true.

I notice the test cases in reader.zig don't pick this up at the moment, because they are all reallocs small enough to fit within the page size, and they all appear to resize in place (rather than create a new buf to fit the new message).

Another option at my end: if I use 64k page sizes for the DB connection, I'm not hitting any queries that require a realloc, so I don't see the problem at all. This suggests to me that my app might not be the source of memory corruption after all. Hard to say.

I guess the next step is for me to try to build a zig test case that exactly triggers what I'm seeing here, then we can debug it from there.

On that note: if I run zig test, it expects a DB set up on localhost with a postgres user and a few tables. I can get it running if I guess what those test schemas should be and manually add them to my local Postgres server. You probably have these on your local setup already, but you might need to add CREATE TABLE statements to the test runner so it's all set up.

access violation

The latest commit in the zig-0.11 branch might have an access violation issue, but this could possibly be my bug.

["REQ",":1",{"kinds":[1]}]
select id, pubkey, created_at, kind, tags, content, sig from event where kind in ($1) order by created_at desc limit 500
["REQ",":1",{"kinds":[1]}]
select id, pubkey, created_at, kind, tags, content, sig from event where kind in ($1) order by created_at desc limit 500
thread 881242 panic: index out of bounds: index 16856, len 16645
/usr/local/zig-0.11.0/lib/std/crypto/tls/Client.zig:1251:81: 0x37cb12 in finishRead2 (zig-nostr-relay)
        @memcpy(c.partially_read_buffer[c.partial_ciphertext_idx + first.len ..][0..frag1.len], frag1);
                                                                                ^
/usr/local/zig-0.11.0/lib/std/crypto/tls/Client.zig:1071:35: 0x376e95 in readvAdvanced__anon_6330 (zig-nostr-relay)
                return finishRead2(c, first, frag1, vp.total);
                                  ^
/usr/local/zig-0.11.0/lib/std/crypto/tls/Client.zig:900:38: 0x3830cc in readvAtLeast__anon_6329 (zig-nostr-relay)
        var amt = try c.readvAdvanced(stream, iovecs[vec_i..]);
                                     ^
/usr/local/zig-0.11.0/lib/std/crypto/tls/Client.zig:861:24: 0x383562 in readAtLeast__anon_6327 (zig-nostr-relay)
    return readvAtLeast(c, stream, &iovecs, len);
                       ^
/usr/local/zig-0.11.0/lib/std/crypto/tls/Client.zig:866:23: 0x3835f4 in read__anon_6326 (zig-nostr-relay)
    return readAtLeast(c, stream, buffer, 1);
                      ^
/home/mattn/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/stream.zig:30:26: 0x383705 in read (zig-nostr-relay)
   return tls_client.read(self.stream, buf);
                         ^
/home/mattn/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/reader.zig:213:30: 0x384674 in read (zig-nostr-relay)
    const n = try stream.read(buf[pos..]);
                             ^
/home/mattn/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/reader.zig:140:51: 0x384fe8 in next (zig-nostr-relay)
   return self.buffered(self.pos) orelse self.read();
                                                  ^
/home/mattn/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/conn.zig:476:27: 0x3850e2 in read (zig-nostr-relay)
   const msg = reader.next() catch |err| {
                          ^
/home/mattn/.cache/zig/p/1220e19021af49c72e97c814e317aa0e567c655fdb42fa725d0c7f8a3046506df151/src/result.zig:75:34: 0x566a09 in next (zig-nostr-relay)
  const msg = try self._conn.read();
                                 ^
/home/mattn/dev/zig-nostr-relay/src/main.zig:733:28: 0x5695ba in handleReq (zig-nostr-relay)
        while (try res.next()) |row| {
                           ^
/home/mattn/dev/zig-nostr-relay/src/main.zig:788:31: 0x56c45c in handleText (zig-nostr-relay)
            try self.handleReq(parsed.value);
                              ^
/home/mattn/dev/zig-nostr-relay/src/main.zig:808:41: 0x56cf1d in handle (zig-nostr-relay)
            .text => try self.handleText(message),
                                        ^
/home/mattn/.cache/zig/p/1220361c4b45f9dbffa9972997298b9137741ac2ad67a41a38396e9d27e734e52128/src/server.zig:238:18: 0x56d927 in readLoop__anon_162152 (zig-nostr-relay)
     try h.handle(message);
                 ^
/home/mattn/.cache/zig/p/1220361c4b45f9dbffa9972997298b9137741ac2ad67a41a38396e9d27e734e52128/src/server.zig:133:15: 0x4c6c6d in clientLoop__anon_61413 (zig-nostr-relay)
 conn.readLoop(*H, &handler, &reader) catch {};
              ^
/usr/local/zig-0.11.0/lib/std/Thread.zig:412:13: 0x45418c in callFn__anon_57975 (zig-nostr-relay)
            @call(.auto, f, args);
            ^
/usr/local/zig-0.11.0/lib/std/Thread.zig:1210:30: 0x3d05d3 in entryFn (zig-nostr-relay)
                return callFn(f, self.fn_args);
                             ^
/usr/local/zig-0.11.0/lib/c.zig:239:13: 0x6bdf60 in clone (c)
            asm volatile (
            ^
???:?:?: 0xfff in ??? (???)
Unwind information for `???:0xfff` was not available, trace may be incomplete

https://github.com/mattn/zig-nostr-relay/blob/f0df269eec07d01bf1c07a603b0efa382c231786/src/main.zig#L742

Extend row.get() to handle iterator types, which in turn fixes row.to() to handle iterator columns

The new row.to() function is very useful

I'm using a number of Iterator([]const u8) fields in my structs, and I find that this small change to row.to() works for me:

        inline for (std.meta.fields(T)) |field| {
            switch (field.type) {
                Iterator([]const u8) => {
                    @field(result, field.name) = self.iterator([]const u8, columnIndex);
                },
                else => {
                    const value = self.get(field.type, columnIndex);
                    @field(result, field.name) = switch (field.type) {
                        []u8, []const u8 => if (allocator) |a| try a.dupe(u8, value) else value,
                        ?[]u8, ?[]const u8 => blk: {
                            const v = value orelse break :blk null;
                            break :blk if (allocator) |a| try a.dupe(u8, v) else v;
                        },
                        else => value,
                    };
                },
            }
            columnIndex += 1;
        }

Looking at the more general solution: instead of expanding this out to cover every possible iterator, it might be cleaner to simply extend row.get(type, col) to handle iterator field types.

So row.get(Iterator(T), col) -> returns an Iterator(T)

The code for row.get() and row.iterator() is pretty close anyway - it's just the decoder assignment that sets them apart.

Doing that would also mean that row.iterator() becomes redundant
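A sketch of the proposed call site (hypothetical; this overload does not exist today, which is the point of the request):

// Hypothetical usage if row.get() accepted iterator types directly:
var aliases = row.get(pg.Iterator([]const u8), 3);
while (aliases.next()) |alias| {
    std.debug.print("{s}\n", .{alias});
}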

Raw data for record?

I'm not sure if this is possible, but I have a function in PostgreSQL that returns a record.

Basically it will return (x) if something goes wrong, and if everything is ok it will return (a, b).

Here is the function:

CREATE OR REPLACE FUNCTION criartransacao(
    IN id_cliente INT,
    IN valor INT,
    IN tipo CHAR(1),
    IN descricao varchar(10)
) RETURNS RECORD AS $$
DECLARE
    ret RECORD;
BEGIN
    PERFORM id FROM cliente
    WHERE id = id_cliente;

    IF not found THEN
    select 1 into ret;
    RETURN ret;
    END IF;

    INSERT INTO transacao (valor, tipo, descricao, cliente_id)
    VALUES (ABS(valor), tipo, descricao, id_cliente);
    UPDATE cliente
    SET saldo = saldo + valor
    WHERE id = id_cliente AND (valor > 0 OR saldo + valor >= -limite)
    RETURNING saldo, limite
    INTO ret;

    IF ret.limite is NULL THEN
        select 2 into ret;
    END IF;
    
    RETURN ret;
END;
$$ LANGUAGE plpgsql;

My first guess was that I could get the raw data and, if its length was 1, I had an error; otherwise I would have a on data[0] and b on data[1]. That is not what I am seeing here.

This is a snippet of what I'm trying to do:

 var result = try global.pool.query("select criartransacao($1, $2, $3, $4);", .{ id, valor, payload.tipo, payload.descricao });
        defer result.deinit();

        const row = try result.next();
        const record = row.?.get([]const u8, 0);

        std.debug.print("record: '{any}' len: {}\n", .{ record, record.len });

        if (record.len == 1) {
            const status: CreateTransactionReturn = @enumFromInt(record[0]);

            switch (status) {
                CreateTransactionReturn.NotFound => {
                    res.status = 404;
                    return;
                },
                CreateTransactionReturn.LimitExceeded => {
                    res.status = 422;
                    return;
                },
            }
        }

        const saldo = record[0];
        const limite = record[1];

Right now I'm getting this:

record: '{ 0, 0, 0, 1, 0, 0, 0, 23, 0, 0, 0, 4, 0, 0, 0, 2 }' len: 16
record: '{ 0, 0, 0, 2, 0, 0, 0, 23, 0, 0, 0, 4, 255, 254, 121, 97, 0, 0, 0, 23, 0, 0, 0, 4, 0, 1, 134, 160 }' len: 28
record: '{ 0, 0, 0, 1, 0, 0, 0, 23, 0, 0, 0, 4, 0, 0, 0, 2 }' len: 16

I'm guessing that the first 4 values represent the number of fields: when I expect an error return like (2), the first group of bytes is 0 0 0 1. I'm not sure what the 23 and 4 mean here, and 2 is the value I expect.

The middle one seems similar: 2 for the two results, then 23 and 4 (which I guess show that it is an integer or something), and then the value.

I'm sorry if this is a dumb question, but I'm lost here and have no idea how to convert this into the data I expect. I hope someone can help me or show me an easy way to do this.
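For reference, the dumps above match PostgreSQL's binary encoding for record values: a big-endian i32 column count, then for each column an i32 type OID (23 is int4), an i32 value length, and the raw value bytes. A minimal decoding sketch for the all-int4 case shown here (readInt spelling per Zig 0.12+; on 0.11 it is readIntBig), assuming no column is NULL:

const std = @import("std");

// Sketch: decode a binary RECORD payload whose columns are all non-null int4
// values. `out` must have room for the record's column count.
fn decodeInt4Record(record: []const u8, out: []i32) []i32 {
    var pos: usize = 0;
    const column_count: usize = @intCast(std.mem.readInt(i32, record[pos..][0..4], .big));
    pos += 4;
    for (0..column_count) |i| {
        pos += 4; // skip the type OID (23 = int4 in the dumps above)
        const len: usize = @intCast(std.mem.readInt(i32, record[pos..][0..4], .big));
        pos += 4;
        out[i] = std.mem.readInt(i32, record[pos..][0..4], .big);
        pos += len; // 4 bytes for an int4
    }
    return out[0..column_count];
}

Applied to the 28-byte dump above, this yields the two int4 values (saldo and limite) in out[0] and out[1].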

Consider exporting type pg.Iterator as a public type

See my branch export_iterator for 3 changes:

  • pub fn Iterator() in result.zig
  • add export to lib.zig and pg.zig

I won't put up a PR because it's a tiny change and the formatting differences are huge (8-space tab stops in your code?).

Situation:

I want to create a simple wrapper struct over an SQL statement that gets re-used in many spots. I do not want consumers of the SQL statement to have to fiddle with setting index values or copy-pasting custom per-row logic.

This is easy enough to do ... until you hit a statement that includes an iterator result type (because you need an instance of the row to be able to get an iterator, you can't easily define a receiver struct that includes an iterator field without having a row).

eg:

const CustomerList = struct {
    pub const Row = struct {
        name: []const u8 = "",
        count: i64 = 0,
        value: f64 = 0,
        unallocated: bool = false,

        pub fn aliases(self: Row, row: pg.Row) pg.Iterator([]const u8) {
            _ = self; // autofix
            return row.iterator([]const u8, 3);
        }
    };

    const select_customer_list =
        \\ SELECT
        \\      LOWER(customer_name) as name,
        \\      COUNT(*) as count,
        \\      SUM(amount) as value,
        \\      ARRAY_AGG(distinct customer_name) AS aliases
        \\ FROM purchase_order
        \\ GROUP BY LOWER(customer_name)
        \\ ORDER BY name
    ;

    pub fn query(self: *CustomerList, db: *pg.Conn) !pg.Result {
        _ = self; // autofix
        const res = db.query(select_customer_list, .{}) catch |err| {
            if (db.err) |pge| {
                logz.err().src(@src()).string("get list of customers from purchase orders", pge.message).log();
            }
            return err;
        };
        return res;
    }

    pub fn map(self: *CustomerList, row: pg.Row) CustomerList.Row {
        _ = self; // autofix
        const name = row.get([]const u8, 0);
        return .{
            .name = if (name.len < 1) Unallocated else name,
            .unallocated = name.len < 1,
            .count = row.get(i64, 1),
            .value = row.get(f64, 2),
        };
    }
};

Consumer use case

    {
        var customers = CustomerList{};
        var query = try customers.query(db);
        defer query.deinit();

        try template.tableStart(w);
   
        while (try query.next()) |row| {
            const customer = customers.map(row);

            try template.tableRow( .{
                .name = customer.name,
                .count = customer.count,
                .value = customer.value,
            }, w);

            var aliases = customer.aliases(row);   // get an iterator of aliases
            while (aliases.next()) |alias| {
                try template.AliasRow( .{ .alias = alias }, w);
            }
        }
    }

There might be a simple way of doing the same thing without exporting pg.Iterator, but I can't see it yet.

zig 0.12 support

This repo's build.zig has some troubles on Zig 0.12 that block it from compiling successfully.

fix: #5

Feature request - Add support for application_name inside connection string for PostgreSQL

Karl,

This is just a small feature request to improve tracking down and monitoring connections on PostgreSQL, especially when there are more than one application connecting to PostgreSQL from the same IP.

Essentially, simply adding application_name=xxxxx when connecting to PostgreSQL will do the trick.

From: https://www.postgresql.org/docs/16/runtime-config-logging.html#GUC-APPLICATION-NAME

application_name (string):

The application_name can be any string of less than NAMEDATALEN characters (64 characters in a standard build). It is typically set by an application upon connection to the server. The name will be displayed in the pg_stat_activity view and included in CSV log entries. It can also be included in regular log entries via the log_line_prefix parameter. Only printable ASCII characters may be used in the application_name value. Other characters are replaced with C-style hexadecimal escapes.
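A sketch of what this might look like at the call site (the application_name field is hypothetical; the surrounding option shape is copied from another issue in this list):

var pool = try pg.Pool.init(allocator, .{
    .size = 10,
    .connect = .{ .host = "127.0.0.1", .port = 5432 },
    .auth = .{
        .username = "user",
        .database = "postgres",
        .password = "pass",
        // hypothetical option being requested here; it would be sent as the
        // application_name startup parameter
        .application_name = "billing-service",
    },
});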

I hope you can see the value in this.

Cheers,

Andrew

Concurrent queries on the same connection

Coming from Go here ... so I have some preconceptions about the meaning of "connection" vs "query".

I'm doing this in Zig and experiencing this issue:

const my_db = try db_pool.acquire();
var orders = try my_db.query("select id from orders", .{});
while (try orders.next()) |row| {
   // ... do stuff with row
   _ = try my_db.row("update some_other_table set field1=$1 where keyfield=$2", .{ value1, value2 }); // bad idea !!!
}

Re-using the connection to launch another query, while the outer query is still being consumed, is a bad idea.

You can see that the first Parse message emitted by the update statement gets back a packet of type 'D' instead of '1', which suggests the outer query's conversation with the PG server is getting mixed up with the inner query.

Simple enough fix on my side:

const outer_db = try db_pool.acquire(); // outer query
const inner_db = try db_pool.acquire(); // update query
var orders = try outer_db.query("select id from orders", .{});
while (try orders.next()) |row| {
   // ... do stuff with row
   // use a separate connection for the update
   _ = try inner_db.row("update some_other_table set field1=$1 where keyfield=$2", .{ value1, value2 });
}

That's all good. I just don't know if that is how "connection" and "query" are intended to relate in this lib?

If so, then maybe return an error like error.ConnectionInUse or something similar?

Personal preference from me would be to keep pg.zig as low level as possible, and put any ergonomic higher level trickery into a pgx.zig wrapper lib

Bus error on read - investigating

Hit an interesting bug with a particular query ... it generates a bus error, so the stack trace is pretty short and not very useful yet.

I need to dive in deeper to debug, so I can't post much more than that yet, sorry.

It looks like it's in the stdlib reader, which then reaches for an allocation ... and blows up somewhere from there.

It's 100% reproducible for a particular query, so that helps. Seeing it on a Mac M2; will try other setups as well.

It could be pg.zig (because it's in the reader), or it could be in http.zig as well (or even my app code).

A couple more days to track this one down, I think.

Bind parameters causing invalid byte sequence error

When running the following query with a single i32 bind parameter,

SELECT pg_notify('channel', CAST($1 AS text));

I get this error:

invalid byte sequence for encoding "UTF8": 0x00

If I manually replace the bind parameter with a value,

SELECT pg_notify('channel', CAST(1 AS text));

the query executes successfully.

Also, if I serialize the i32 to a []u8 beforehand and pass the string to the bind parameter instead and replace the CAST($1 AS text) with just $1, like:

SELECT pg_notify('channel', $1);

then the query also executes successfully.

It seems that I've isolated the issue down to a potential bug in the binding logic in this library. I'm using the rowOpts function to execute the query as I only expect a single row back.
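A sketch of the failing call and the string workaround described above, written against the conn.row call that appears in other issues in this list (the reporter used rowOpts; its options are omitted here, so treat the exact calls as illustrative):

// conn is an acquired pg.zig connection, as in the other issues above.
const trigger: i32 = 1;

// Fails per the report: invalid byte sequence for encoding "UTF8": 0x00
_ = try conn.row("SELECT pg_notify('channel', CAST($1 AS text));", .{trigger});

// Workaround described above: serialize the value to text first and bind the string.
var buf: [12]u8 = undefined;
const trigger_text = try std.fmt.bufPrint(&buf, "{d}", .{trigger});
_ = try conn.row("SELECT pg_notify('channel', $1);", .{trigger_text});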

error: @intCast must have a known result type in StartupMessage.zig

Getting the following error when trying to use the example code from the readme. I'm using Zig 0.11.

I see that file hasn't changed since the initial commit, so I'm sure it's something on my end, but would you have any advice on what I'm doing wrong?

/Users/max/.cache/zig/p/12206d5e0d49941e4a811830e436e8bdec91e7d6331397ccfb73ffe45b9d1e013198/src/proto/StartupMessage.zig:20:24: error: @intCast must have a known result type
 view.writeIntBig(u32, @intCast(payload_len));
                       ^~~~~~~~~~~~~~~~~~~~~
/Users/max/.cache/zig/p/12206d5e0d49941e4a811830e436e8bdec91e7d6331397ccfb73ffe45b9d1e013198/src/proto/StartupMessage.zig:20:24: note: result type is unknown due to anytype parameter
/Users/max/.cache/zig/p/12206d5e0d49941e4a811830e436e8bdec91e7d6331397ccfb73ffe45b9d1e013198/src/proto/StartupMessage.zig:20:24: note: use @as to provide explicit result type
referenced by:
    auth: /Users/max/.cache/zig/p/12206d5e0d49941e4a811830e436e8bdec91e7d6331397ccfb73ffe45b9d1e013198/src/conn.zig:164:23
    newConnection: /Users/max/.cache/zig/p/12206d5e0d49941e4a811830e436e8bdec91e7d6331397ccfb73ffe45b9d1e013198/src/pool.zig:205:6
    remaining reference traces hidden; use '-freference-trace' to see all reference traces
