dgo's Introduction

dgo

Official Dgraph Go client which communicates with the server using gRPC.

Before using this client, we highly recommend that you go through dgraph.io/tour and dgraph.io/docs to understand how to run and work with Dgraph.

Use GitHub Issues for reporting issues about this repository.

Supported Versions

Depending on the version of Dgraph that you are connecting to, you will have to use a different version of this client and the corresponding import path.

Dgraph version    dgo version     dgo import path
dgraph 20.11.0    dgo 200.03.0    "github.com/dgraph-io/dgo/v200"
dgraph 21.X.Y     dgo 210.X.Y     "github.com/dgraph-io/dgo/v210"
dgraph 22.X.Y     dgo 210.X.Y     "github.com/dgraph-io/dgo/v210"
dgraph 23.X.Y     dgo 230.X.Y     "github.com/dgraph-io/dgo/v230"

Note: One of the most important API breakages from dgo v1 to v2 is in the function dgo.Txn.Mutate. This function returns an *api.Assigned value in v1 but an *api.Response in v2.

Note: We have removed functions DialSlashEndpoint, DialSlashGraphQLEndpoint from v230.0.0. Please use DialCloud instead.

Note: There is no breaking API change from v2 to v200 but we have decided to follow the CalVer Versioning Scheme.

Using a client

Creating a client

A dgraphClient object can be initialized by passing dgo.NewDgraphClient one or more api.DgraphClient clients as variadic arguments. Connecting to multiple Dgraph servers in the same cluster allows for better distribution of the workload.

The following code snippet shows just one connection.

conn, err := grpc.NewClient("localhost:9080", grpc.WithTransportCredentials(insecure.NewCredentials()))
// Check error
defer conn.Close()
dgraphClient := dgo.NewDgraphClient(api.NewDgraphClient(conn))
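
For example, a minimal sketch of a client backed by two connections (the addresses are placeholders); dgo distributes transactions across the supplied clients:

conn1, err := grpc.NewClient("alpha1:9080", grpc.WithTransportCredentials(insecure.NewCredentials()))
// Check error
defer conn1.Close()
conn2, err := grpc.NewClient("alpha2:9080", grpc.WithTransportCredentials(insecure.NewCredentials()))
// Check error
defer conn2.Close()

// NewDgraphClient is variadic, so any number of api.DgraphClient values can be passed.
dgraphClient := dgo.NewDgraphClient(
	api.NewDgraphClient(conn1),
	api.NewDgraphClient(conn2),
)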

The client can be configured to use gRPC compression:

dialOpts := append([]grpc.DialOption{},
	grpc.WithTransportCredentials(insecure.NewCredentials()),
	grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)))
d, err := grpc.NewClient("localhost:9080", dialOpts...)

Login into a namespace

If your server has Access Control Lists enabled (Dgraph v1.1 or above), the client must be logged in before it can access data. Use the Login endpoint:

Calling Login obtains and remembers the access and refresh JWT tokens. All subsequent operations via the logged-in client send the stored access token along.

err := dgraphClient.Login(ctx, "user", "passwd")
// Check error

If your server additionally has namespaces (Dgraph v21.03 or above), use the LoginIntoNamespace API.

err := dgraphClient.LoginIntoNamespace(ctx, "user", "passwd", 0x10)
// Check error

Connecting To Dgraph Cloud

Please use the following snippet to connect to a Dgraph Cloud backend.

conn, err := dgo.DialCloud("https://your.endpoint.dgraph.io/graphql", "api-token")
// Check error
defer conn.Close()
dgraphClient := dgo.NewDgraphClient(api.NewDgraphClient(conn))

Altering the database

To set the schema, create an instance of api.Operation and use the Alter endpoint.

op := &api.Operation{
  Schema: `name: string @index(exact) .`,
}
err := dgraphClient.Alter(ctx, op)
// Check error

Operation contains other fields as well, including DropAttr and DropAll. DropAll is useful if you wish to discard all the data and start from a clean slate without bringing the instance down. DropAttr is used to drop all the data related to a single predicate.
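
For example (a minimal sketch; handle errors as in the other snippets):

// Discard all data in the database.
err := dgraphClient.Alter(ctx, &api.Operation{DropAll: true})
// Check error

// Drop all data for the "name" predicate only.
err = dgraphClient.Alter(ctx, &api.Operation{DropAttr: "name"})
// Check error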

Starting with Dgraph version 20.03.0, indexes can be computed in the background. Set the RunInBackground field of the api.Operation to true before passing it to the Alter function. You can find more details in the Dgraph documentation.

op := &api.Operation{
  Schema:          `name: string @index(exact) .`,
  RunInBackground: true,
}
err := dgraphClient.Alter(ctx, op)
// Check error

Creating a transaction

To create a transaction, call dgraphClient.NewTxn(), which returns a *dgo.Txn object. This operation incurs no network overhead.

It is good practice to call txn.Discard(ctx) in a defer statement right after the transaction is created. Calling txn.Discard(ctx) after txn.Commit(ctx) is a no-op. Furthermore, txn.Discard(ctx) can be called multiple times with no additional side effects.

txn := dgraphClient.NewTxn()
defer txn.Discard(ctx)

Read-only transactions can be created by calling c.NewReadOnlyTxn(). Read-only transactions are useful to increase read speed because they can circumvent the usual consensus protocol. Read-only transactions cannot contain mutations and trying to call txn.Commit() will result in an error. Calling txn.Discard() will be a no-op.
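
A minimal sketch of a read-only query (BestEffort additionally relaxes the timestamp requirement for even faster, possibly slightly stale, reads):

txn := dgraphClient.NewReadOnlyTxn()
defer txn.Discard(ctx)
res, err := txn.Query(ctx, q)
// Check error

// Best-effort queries are an optional further relaxation on read-only transactions.
bestEffortTxn := dgraphClient.NewReadOnlyTxn().BestEffort()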

Running a mutation

txn.Mutate(ctx, mu) runs a mutation. It takes in a context.Context and a *api.Mutation object. You can set the data using JSON or RDF N-Quad format.

To use JSON, use the fields SetJson and DeleteJson, which accept a string representing the nodes to be added or removed, respectively (either as a JSON map or a list). To use RDF, use the fields SetNquads and DelNquads, which accept a string of valid RDF triples (one per line) to be added or removed, respectively. The protobuf object also contains the Set and Del fields, which accept a list of RDF triples that have already been parsed into the internal format; these fields are mainly used internally, so prefer SetNquads and DelNquads if you plan on using RDF, as sketched below.
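
For instance, a minimal RDF sketch using a blank node (the predicate names are just placeholders):

mu := &api.Mutation{
	SetNquads: []byte(`_:alice <name> "Alice" .
_:alice <dgraph.type> "Person" .`),
}
res, err := txn.Mutate(ctx, mu)
// Check error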

We define a Person struct to represent a person, marshal an instance of it, and use the result with the Mutation object.

type Person struct {
	Uid   string   `json:"uid,omitempty"`
	Name  string   `json:"name,omitempty"`
	DType []string `json:"dgraph.type,omitempty"`
}

p := Person{
	Uid:   "_:alice",
	Name:  "Alice",
	DType: []string{"Person"},
}

pb, err := json.Marshal(p)
// Check error

mu := &api.Mutation{
	SetJson: pb,
}
res, err := txn.Mutate(ctx, mu)
// Check error

For a more complete example, see Example.

Sometimes, you only want to commit a mutation, without querying anything further. In such cases, you can use mu.CommitNow = true to indicate that the mutation must be immediately committed.
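
A minimal sketch:

mu := &api.Mutation{
	SetJson:   pb,
	CommitNow: true, // commit immediately; no separate call to txn.Commit is needed
}
res, err := txn.Mutate(ctx, mu)
// Check error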

A mutation can also be run using txn.Do.

mu := &api.Mutation{
  SetJson: pb,
}
req := &api.Request{CommitNow: true, Mutations: []*api.Mutation{mu}}
res, err := txn.Do(ctx, req)
// Check error

Running a query

You can run a query by calling txn.Query(ctx, q). You will need to pass in a DQL query string. If you want to pass an additional map of any variables that you might want to set in the query, call txn.QueryWithVars(ctx, q, vars) with the variables map as third argument.

Let's run the following query with a variable $a:

q := `query all($a: string) {
    all(func: eq(name, $a)) {
      name
    }
  }`

res, err := txn.QueryWithVars(ctx, q, map[string]string{"$a": "Alice"})
fmt.Printf("%s\n", res.Json)

You can also use txn.Do function to run a query.

req := &api.Request{
  Query: q,
  Vars: map[string]string{"$a": "Alice"},
}
res, err := txn.Do(ctx, req)
// Check error
fmt.Printf("%s\n", res.Json)
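
The JSON in res.Json can be unmarshalled into Go structs. A minimal sketch reusing the Person type defined earlier (the wrapper struct mirrors the query alias all):

var decode struct {
	All []Person `json:"all"`
}
if err := json.Unmarshal(res.Json, &decode); err != nil {
	// Handle error
}
fmt.Printf("%+v\n", decode.All)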

When running a schema query for predicate name, the schema response is found in the Json field of api.Response as shown below:

q := `schema(pred: [name]) {
  type
  index
  reverse
  tokenizer
  list
  count
  upsert
  lang
}`

res, err := txn.Query(ctx, q)
// Check error
fmt.Printf("%s\n", res.Json)

Query with RDF response

You can get the query result as an RDF response by calling txn.QueryRDF. The response contains an Rdf field, which holds the RDF-encoded result.

Note: If you are querying only for uid values, use a JSON format response.

// Query the balance for Alice and Bob.
const q = `
{
	all(func: anyofterms(name, "Alice Bob")) {
		name
		balance
	}
}
`
res, err := txn.QueryRDF(context.Background(), q)
// check error

// <0x17> <name> "Alice" .
// <0x17> <balance> 100 .
fmt.Println(res.Rdf)

txn.QueryRDFWithVars is also available when you need to pass values for variables used in the query.
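
For example, a minimal sketch mirroring the earlier QueryWithVars query:

q := `query all($a: string) {
    all(func: eq(name, $a)) {
      name
    }
  }`

res, err := txn.QueryRDFWithVars(ctx, q, map[string]string{"$a": "Alice"})
// Check error
fmt.Println(res.Rdf)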

Running an Upsert: Query + Mutation

The txn.Do function allows you to run upserts consisting of one query and one mutation. Variables can be defined in the query and used in the mutation. You could also use txn.Do to perform a query followed by a mutation.

To know more about upsert, we highly recommend going through the docs at Upsert Block.

query := `
  query {
      user as var(func: eq(email, "[email protected]"))
  }`
mu := &api.Mutation{
  SetNquads: []byte(`uid(user) <email> "[email protected]" .`),
}
req := &api.Request{
  Query: query,
  Mutations: []*api.Mutation{mu},
  CommitNow: true,
}

// Update email only if matching uid found.
_, err := dg.NewTxn().Do(ctx, req)
// Check error

Running Conditional Upsert

The upsert block also allows specifying a conditional mutation block using an @if directive. The mutation is executed only when the specified condition is true. If the condition is false, the mutation is silently ignored.

See more about Conditional Upsert Here.

query := `
  query {
      user as var(func: eq(email, "[email protected]"))
  }`
mu := &api.Mutation{
  Cond: `@if(eq(len(user), 1))`, // Only mutate if "[email protected]" belongs to single user.
  SetNquads: []byte(`uid(user) <email> "[email protected]" .`),
}
req := &api.Request{
  Query: query,
  Mutations: []*api.Mutation{mu},
  CommitNow: true,
}

// Update email only if exactly one matching uid is found.
_, err := dg.NewTxn().Do(ctx, req)
// Check error

Committing a transaction

A transaction can be committed using the txn.Commit(ctx) method. If your transaction consisted solely of calls to txn.Query or txn.QueryWithVars, and no calls to txn.Mutate, then calling txn.Commit is not necessary.

An error will be returned if other transactions running concurrently modify the same data that was modified in this transaction. It is up to the user to retry transactions when they fail.

txn := dgraphClient.NewTxn()
// Perform some queries and mutations.

err := txn.Commit(ctx)
if err == dgo.ErrAborted {
  // Retry or handle error
}
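
A simple retry loop might look like the following sketch (the retry limit and the work done inside the transaction are placeholders):

const maxRetries = 3
for i := 0; i < maxRetries; i++ {
	txn := dgraphClient.NewTxn()
	_, err := txn.Mutate(ctx, mu) // perform queries and mutations here
	if err == nil {
		err = txn.Commit(ctx)
	}
	_ = txn.Discard(ctx) // no-op if the commit succeeded
	if err == nil {
		break // committed successfully
	}
	if err != dgo.ErrAborted {
		// Non-retryable error; handle it and stop retrying.
		break
	}
	// The transaction was aborted due to a conflict; retry with a fresh transaction.
}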

Setting Metadata Headers

Metadata headers such as authentication tokens can be set through the context of gRPC methods. Below is an example of how to set a header named "auth-token".

// The following piece of code shows how one can set metadata with
// auth-token, to allow Alter operation, if the server requires it.
md := metadata.New(nil)
md.Append("auth-token", "the-auth-token-value")
ctx := metadata.NewOutgoingContext(context.Background(), md)
dg.Alter(ctx, &op)

Development

Running tests

Make sure you have Dgraph installed in your GOPATH before you run the tests. The dgo test suite requires a locally running Dgraph cluster with ACL enabled. To start such a cluster, you can use the Docker Compose file located in the testing directory t.

docker compose -f t/docker-compose.yml up -d
# wait for cluster to be healthy
go test -v ./...
docker compose -f t/docker-compose.yml down

dgo's People

Contributors

all-seeing-code, aman-bansal, apekshithr, ashish-goswami, billprovince, blasrodri, danielmai, dependabot[bot], dolftax, gja, jairad26, johncming, joshua-goldstein, lockedthread, mangalaman93, manishrjain, martinmr, micheldiz, minhaj-shakeel, namanjain8, pawanrawal, poonai, quasilyte, ryanfoxtyler, samsends, saromanov, shivaji-dgraph, sipickles, sleto-it, srfrog

dgo's Issues

Examples: Nquads

The Mutation type from github.com/dgraph-io/dgo/protos/api provides the exported fields SetNquads, DelNquads, Set and Del, but there is neither documentation nor any examples for them.

It'd be good to have example code showcasing when those fields are useful.

gRPC error transport is closing

I have a Go server exposing a REST API, using Dgraph as my database.

Whenever my server app is idle for too long, I get the following error during my next query or mutation:
rpc error: code = Unavailable desc = transport is closing

If I retry it works again. So it seems the Dgraph server is closing the connection without the dgo client knowing about it.

Is this a dgo problem or a Dgraph bug?
Could dgo retry automatically on such error?

Tested with Dgraph v1.0.9 and v1.0.10

google.golang.org/grpc v1.16.0
github.com/dgraph-io/dgo v0.0.0-20181102011806-23d7ac35e2c7

I want to know where is the v2 client

package dgo

import (
	"context"
	"fmt"
	"math/rand"
	"strings"
	"sync"

	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)
It seems that only v2 client can support operations on dgraph v1.1.*, but I find no v2 client under dgo directory

JSON format ... doesn't differentiate between INT/FLOAT

Since the mutation input is in JSON format which doesn't differentiate between INT/FLOAT (everything is a number), the server gets a float and stores the facet as such. This is, unfortunately, a limitation of the JSON format and the fact that facets don't have a fixed type.

I would recommend setting a property on a node (instead of facet) if the type is important to you or using the RDF mutation format.

Originally posted by @pawanrawal in #3 (comment)

This seems like a more general problem affecting node predicates as well. It seems at the moment, the only choices are:

  1. use mutation JSON, in which case I can't get my integers back out of the database, because they get converted to floats during unmarshalling
  2. Use NQuads and construct them myself.

In my use case, I have dynamic data, and I don't know what it looks like ahead of time, so I cannot unmarshal into a predefined struct.

I think a modified json.Unmarshaler which does not coerce ints might be helpful. Perhaps the standard lib code could be repurposed and modified relatively easily...
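
For reference, the standard-library workaround when decoding into map[string]interface{} is json.Decoder.UseNumber, which keeps numbers as json.Number so integers are not silently coerced to float64 (a general Go sketch, not a dgo API):

dec := json.NewDecoder(bytes.NewReader(res.Json))
dec.UseNumber() // numbers decode as json.Number instead of float64

var data map[string]interface{}
if err := dec.Decode(&data); err != nil {
	// Handle error
}
// A json.Number can then be converted explicitly, e.g. n.Int64() or n.Float64().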

Date string is automatically formatted as dateTime

What am I trying to do?

  • Saving a string that contains iso date information (i.e. "8888-01-01") as string predicate in dgraph using the json mutation format

What is happening?

  • The string is parsed as dateTime and saved as dateTime string, i.e. 8888-01-02T00:00:00Z

What would I expect?

  • The string should not be parsed and should be saved as is "8888-01-01".

The problem does not seem to occur if the predicate is updated from ratel using the rdf n-quad format.

Refactor tests

Currently, dgo tests are running directly from examples. We need to add unit tests and have actual system tests.

  • Add unit tests where needed
  • Create system tests with a working Dgraph cluster

Update does not add uid to api.Assigned uid map, but does update in database

When I try to update, the update does not add the uid to the api.Assigned map, but does update the database.

// type model struct {
//     UID  string `json:"uid,omitempty"`
//     Name string `json:"name,omitempty"`
// }
input := &model{
	UID:  "0x1234",
	Name: "Test",
}
inputJson, err := json.Marshal(input)
if err != nil {
	log.Println("error handing json marshal")
	return nil
}
mu := &api.Mutation{
	SetJson:   inputJson,
	CommitNow: true,
}
log.Println("mutation object:", mu)
 // mutation object: set_json:"{\"uid\":\"0x1234\",\"name\":\"Test\"}" commit_now:true

txn := o.graph.NewTxn()
defer txn.Discard(ctx)
assigned, err := txn.Mutate(ctx, mu)
if err != nil {
	log.Println("error handing mutation for update")
	return nil
}
log.Println("assigned:", assigned.Uids)
// assigned: map[]
// the database has changed the name to "Test"
if _, ok := assigned.GetUids()[input.ID]; !ok {
	log.Println("update unsuccessful")
	return nil
}

store err with geo and map[string]interface{}

There is a map[string]interface{} field in my struct. If I add "data: geo ." to the schema:

  • if there is no data in the dgraph:
    This will not be able to save data and no error message
  • if there is data in the dgraph:
    rpc error: code = unknown desc = Schema change not allowed from scalar to uid or vice versa while there is data for pred: data

My test case as follows:

package main

import (
	"context"
	"encoding/json"
	"github.com/fatih/structs"
	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"google.golang.org/grpc"
	"log"
)

type Animal struct {
	Id   string `json:"id,omitempty"`
	Name string `json:"name,omitempty"`
	Info string `json:"info,omitempty"`
	Size string `json:"size,omitempty"`
}

type PResult struct {
	ID       string                 `json:"id"`
	Name     string                 `json:"name,omitempty"`
	Category string                 `json:"category,omitempty"`
	Data     map[string]interface{} `json:"data,omitempty"`
}

func main() {
	conn, _ := grpc.Dial("localhost:9080", grpc.WithInsecure())
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	ani := Animal{
		Id:   "12345",
		Name: "Tom",
		Info: "dog",
		Size: "10kg",
	}
	pr := PResult{
		ID:       "f11111",
		Name:     "animal",
		Category: "friend",
		Data:     structs.Map(ani),
	}

	op := &api.Operation{}
	op.Schema = `
		id:		  string @index(exact) .
		name:	  string .
        category: string .
		data:	  geo	 .
    `

	ctx := context.Background()
	if err := dg.Alter(ctx, op); err != nil {
		log.Fatal(err)
	}
	mu := &api.Mutation{
		CommitNow: true,
	}
	pb, _ := json.Marshal(pr)
	mu.SetJson = pb
	_, _ = dg.NewTxn().Mutate(ctx, mu)
	//	fmt.Println(assign.Uids["blank-0"])
}

Updating graph using struct with uid makes edges disappear

I am creating a node by serializing a struct and feeding it to Dgraph. Then I would like to update a value of the node by building the Go struct again and providing it with the node UID. Even though the update is successful, some edges have disappeared.

The dgo documentation says: "While setting an object, if a struct has a Uid then its properties in the graph are updated, else a new node is created."

I am providing code which:

  1. Creates User with FavQuote, FavCar and Contact details. (All good)
  2. Knowing UID of User I update FavQuote. (FavQuote updated, FavCar remains, however edge to Contact details disappeared)

I had no problems with such node updates before, however this time the struct is more deeply nested.

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v200"
	"github.com/dgraph-io/dgo/v200/protos/api"
	"google.golang.org/grpc"
)

// User -
type User struct {
	UID            string             `json:"uid,omitempty"`
	DType          []string           `json:"dgraph.type,omitempty"`
	FavQuote       string             `json:"quote,omitempty"`
	FavCar         string             `json:"car,omitempty"`
	ContactDetails UserContactDetails `json:"user.contact_details,omitempty"`
}

// UserContactDetails -
type UserContactDetails struct {
	UID    string      `json:"uid,omitempty"`
	DType  []string    `json:"dgraph.type,omitempty"`
	Mobile PhoneNumber `json:"contact_details.mobile,omitempty"`
}

// PhoneNumber -
type PhoneNumber struct {
	UID            string   `json:"uid,omitempty"`
	DType          []string `json:"dgraph.type,omitempty"`
	CountryCode    string   `json:"phone.country_code,omitempty"`
	NationalNumber string   `json:"phone.national_number,omitempty"`
}

func main() {
	conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	u := User{
		UID:      "_:newUser",
		DType:    []string{"User"},
		FavQuote: "Moon",
		FavCar:   "Lambo",
		ContactDetails: UserContactDetails{
			DType: []string{"ContactDetails"},
			Mobile: PhoneNumber{
				DType:          []string{"PhoneNumber"},
				CountryCode:    "+33",
				NationalNumber: "98766543",
			},
		},
	}

	op := &api.Operation{}
	op.Schema = `
		<phone.country_code>: string @index(exact) .
		<phone.national_number>: string @index(exact) .
		<quote>: string .
		<car>: string .

		type User {
			quote
			car
			user.contact_details: ContactDetails
		}

		type ContactDetails {
			contact_details.mobile: PhoneNumber
		}

		type PhoneNumber {
			phone.country_code
			phone.national_number
		}
	`

	ctx := context.Background()
	if err := dg.Alter(ctx, op); err != nil {
		log.Fatal(err)
	}

	// CREATING USER
	mu := &api.Mutation{
		CommitNow: true,
	}

	ub, err := json.Marshal(u)
	if err != nil {
		log.Fatal(err)
	}

	mu.SetJson = ub
	response, err := dg.NewTxn().Mutate(ctx, mu)
	if err != nil {
		log.Fatal(err)
	}

	uid := response.Uids["newUser"]
	fmt.Println("UID: " + uid)

	// User correctly created with all details and can be verified in Ratel

	// MODIFYING USER
	u = User{
		UID:      uid,
		FavQuote: "Jupiter",
	}

	mu = &api.Mutation{
		CommitNow: true,
	}

	ub, err = json.Marshal(u)
	if err != nil {
		log.Fatal(err)
	}

	mu.SetJson = ub
	_, err = dg.NewTxn().Mutate(ctx, mu)
	if err != nil {
		log.Fatal(err)
	}

	// User's `FavQuote` updated. "FavCar" remains, however the contact details edge from the user has disappeared!?
}

To check data in Ratel I use

{
  u(func: uid(0x2a85)) {
    uid
    quote
    car
    user.contact_details {
      contact_details.mobile {
        phone.country_code
        phone.national_number
      }
    }
  }
}

Graph before modifying user

      {
        "uid": "0x2a85",
        "quote": "Moon",
        "car": "Lambo",
        "user.contact_details": {
          "contact_details.mobile": {
            "phone.country_code": "+33",
            "phone.national_number": "98766543"
          }
        }
      }

Graph after modifying user

      {
        "uid": "0x2a85",
        "quote": "Jupiter",
        "car": "Lambo"
      }

Which no longer has contact details.

support for go module

Hi, Go v1.11 is already out with support for modules, and other projects are slowly adopting it. Please adopt Go modules for this project too. Thank you so much.

Export/Import option in dgo

Currently, for exporting data, as per the docs, it is required to run the mentioned GraphQL mutation at the /admin endpoint. This requires either logging into the Alpha instance or whitelisting some server that can connect to the Alpha. This process doesn't seem right when it comes to running a Dgraph deployment in production.

It would be really helpful to have export/import client APIs that take an Alpha's address and run the mutation mentioned in the docs.

rpc error: code = Unimplemented desc = unknown service api.Dgraph

I copied & pasted the example code from readme

package dgraph

import (
	"log"
	"google.golang.org/grpc"
	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"context"
)

func F() error {
	conn, err := grpc.Dial("localhost:5080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dgraphClient := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	op := &api.Operation{
		Schema: `name: string @index(exact) .`,
	}
	err = dgraphClient.Alter(context.Background(), op)
	return err
}

get

rpc error: code = Unimplemented desc = unknown service api.Dgraph

Ability to unmarshal query response directly to protobuf struct

I would like to know if there is a way to unmarshal a query response directly into a protobuf struct (the same one used to insert data). In the code below, it only works if I create a new struct and define the contract field as an array.
This was previously possible using client.unmarshal and I believe being able to insert and query data from dgraph via grpc and protobuf (without any additional code) was a great feature of the go client.

// PB definition
//
// message European {
// 	double timestamp = 1;
// 	string ticker = 2;
// 	string undticker = 3;
// 	double strike = 4;
// 	double expiry = 5;
// 	string putcall = 6;
// }
//
// message PriceRequest {
// 	double pricingdate = 1;
// 	European contract = 2;
// 	OptionMarket marketdata = 3;
// }

func Test_contractRequest(t *testing.T) {
	resp := query(
		`query ContractRequest($optionTicker: string){
			contract(func: eq(ticker, $optionTicker)){
				ticker
				strike
				undticker
				expiry
				putcall
			}
		}`,
		map[string]string{
			"$optionTicker": "AAPL DEC2017 PUT",
		})

	priceReq := &pb.PriceRequest{}
	// Fails with error "json: cannot unmarshal array into Go struct field PriceRequest.contract of type pb.European"
	err := json.Unmarshal(resp.GetJson(), priceReq)
	if err != nil {
		type Root struct {
			Contract []pb.European `json:"contract"`
		}

		root := &Root{}
		// Works fine
		err = json.Unmarshal(resp.GetJson(), root)
		if err != nil {
			t.Error(err)
		}
		t.Log(root)
		t.Fail()
	}
	t.Log(priceReq)
}

client batch mutation

Hi, Dgraph offers the batch-mutation tool "dgraph live", which is very helpful. I would like to know: is there a plan to provide a similar tool in this Go client package?

Allow QueryWithVars with a list as argument instead of a map

QueryWithVars allows adding a map of variables that will be interpolated into the query. But this is not sufficient for functions that are variadic (i.e. uid).

Proposal

Add QueryWithListVars, which allows including a slice of elements instead of a map.

My current workaround is:

q := fmt.Sprintf(`
	query {
		getNodes(func: uid(%s)){
			uid
		}
	}
	`,strings.Join(listUids, ","))
	res, err := txn.Query(ctx, q)

And it would be nice to have

q := `
	query getNodesFunc($listUids: []string){
		getNodes(func: uid($listUids)){
			uid
		}
	}
	`
	res, err := txn.QueryWithList(ctx, q, map[string]string{"$listUids": listUids})

Supports fields localization as variables when QueryWithVariables

Hi,

I've tried to perform a query which returns localized fields. I would like to pass the localization as variables such as:

var variables = map[string]string{
		"$local":"en:fr:.",
	}
	const q = `
{
  items(func: has(item)){
    id: uid
    short_description: title@$local.
    long_description: description@$local.
  }
}
`
	response, err := txn.QueryWithVars(ctx, q,variables)

I'm getting a lexical error when I try this.

It would make sense to be able to pass the locale as a variable when querying.

Querying schema using gRPC client returns empty object

Tried querying the schema over a gRPC connection with this code:

        d, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}

	c := dgo.NewDgraphClient(
		api.NewDgraphClient(d),
	)
        schemaQuery := `
		schema {
			type
			index
			reverse
			tokenizer
			list
			count
			upsert
			lang
		}
	`

	tx := c.NewTxn()
	resp, err := tx.Query(context.Background(), schemaQuery)
	if err != nil {
		return nil, err
	}

	var queryResult struct {
		Data []Schema
	}
	log.Println(string(resp.Json))

All it logs is just an empty object {}.

Tried querying using HTTP, seems to return schema as normal.

~$ curl -X POST localhost:8080/query -d $'
schema {
type
index
reverse
tokenizer
list
count
upsert
lang
}' | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   409  100   346  100    63   5792   1054 --:--:-- --:--:-- --:--:--  5965
{
  "data": {
    "schema": [
      {
        "predicate": "_predicate_",
        "type": "string",
        "list": true
      },
      {
        "predicate": "email",
        "type": "string",
        "index": true,
        "tokenizer": [
          "hash"
        ]
      },
      {
        "predicate": "password",
        "type": "string"
      },
      {
        "predicate": "username",
        "type": "string",
        "index": true,
        "tokenizer": [
          "hash"
        ]
      }
    ]
  },
  "extensions": {
    "server_latency": {
      "encoding_ns": 1777000
    },
    "txn": {
      "start_ts": 30036
    }
  }
}

Is there any issue with the gRPC client?

Adding array of geo gives "Input for predicate example.Loc of type scalar is uid"

When using:


type loc struct {
		Type   string    `json:"type,omitempty"`
		Coords []float64 `json:"coordinates,omitempty"`
	}

	type Tracker struct {
		Uid  string `json:"uid,omitempty"`
		Name string `json:"example.Name,omitempty"`
		Loc  []loc  `json:"example.Loc,omitempty"`
	}

	t := Tracker{
		Name: "Tom Baker",
		Loc: []loc{
			{
				Type:   "Point",
				Coords: []float64{1.1, 2},
			},
			{
				Type:   "Point",
				Coords: []float64{1.9, 5},
			},
		},
	}

And mapping:

	example.Name: string @index(exact) .
	example.Loc: [geo] @index(geo) .

I get Input for predicate example.Loc of type scalar is uid

When using geo as a list the parser seems to view the geojson types as nodes instead of scalars.

Btw I've heard that the python client seems to have the same problem.

dart grpc client

I want a Dart client to use from Flutter.

So I tried compiling the client to Dart. It has a few issues to clear up, but it looks like it will work.

Is this the plan for other languages too? I saw a JS and a Java client, and have no idea whether the Java one uses the Go proto for code generation.

Bug: Mutation issue with facets integer

I apologize if this is not the correct place to post this. When I set a facet with an int value (1), it is stored as a float (1.000000). It works perfectly fine if I post or query over HTTP.

type GraphCollection struct {
	Uid           string      `json:"uid"`
	Weight        []GraphNode `json:"weight,omitempty"`
}

type GraphNode struct {
	Uid          string `json:"uid"`
	WeightFacets int64  `json:"weight|score,omitempty"`
}

data := GraphCollection{
		Uid: cuid,
		Weight: []GraphNode{
			{
				Uid:          muid,
				WeightFacets: 1,
			},
		},
	}

	ctx := context.Background()

	dg, conn := dgraph.NewClient()
	defer conn.Close()

	pb, err := json.Marshal(data)
	if err != nil {
		log.Fatal(err)
	}

	_, err = dg.NewTxn().Mutate(ctx, &api.Mutation{SetJson: pb, CommitNow: true})
	if err != nil {
		log.Error(err)
	}

marshal string:
{"uid":"0x9c6711","weight":[{"uid":"0x1053b2","weight|score":1}]}

	q := fmt.Sprintf(`{
	  me(func: uid(0x9c6711)) {
	    weight @facets {
	      uid
	    }
	  }
	}`)

	txn := dg.NewTxn()
	resp, err := txn.Query(ctx, q)

	if err != nil {
		log.Fatal(err)
	}

	log.Debug(string(resp.Json))

response:
{"me":[{"weight":[{"uid":"0x1053b2","weight|score":1.000000}]}]}
with error
json: cannot unmarshal number 1.000000 into Go struct field weight of type int

more tests /example involving mutate?

Hi

I can see the documentation isn't sufficient.
I don't mind that, as long as there is enough test coverage to show usage of the functionality.

Perhaps add an example or test of deleting a single predicate with mutate, and of deleting an entire object using mutate with JSON and with Del.

thank you

Dgo support for GraphQL queries

Dgraph now supports standard GraphQL queries and mutations. Can you please advise on the roadmap for the dgo client to support the same? How can we use the generated query* and mutate* functions?

v1.0.0 wrong wire type for grpc proto

It seems there is a mismatch between the api and the grpc proto in dgo/v1.0.0

Running the base example of the docs to get a schema, go returns:
"rpc error: code = Internal desc = grpc: error unmarshalling request: proto: wrong wireType = 2 for field StartTs"

gomobile

Does this work with gomobile? I would like to run this on mobile devices.

Mutate: rpc error: code = Unknown desc = Empty query

Hi there!

I have been trying to play with Dgraph but am having issues inserting data. I am using docker-compose as described in 'Get started'. The examples that use curl for setting and retrieving data work.

Here is the relevant code below:

type Record struct {
        Uid string `json:"uid,omitempty"`
        Name string `json:"name,omitempty"`
        Type uint16 `json:"type,omitempty"`
        Class uint16 `json:"class,omitempty"`
        Ttl uint32 `json:"ttl,omitempty"`
        Original string  `json:"original,omitempty"`
}

func addRecord(client *dgo.Dgraph, rr dns.RR) error {
        var record Record

        record.Name = header.Name
        record.Type = header.Rrtype
        record.Class = header.Class
        record.Ttl = header.Ttl
        record.Original = header.String()

        ctx := context.Background()

        data, err := json.Marshal(record)
        if err != nil {
                return errors.Wrap(err, "Marshal")
        }

        mu := &api.Mutation{
                CommitNow: true,
                SetJson: data,
        }

        tx := client.NewTxn()
        defer tx.Discard(ctx)

        log.Printf("%+v", mu)
       // prints: set_json:"{\"name\":\"COM.\",\"type\":2,\"class\":1,\"ttl\":172800,\"original\":\"COM.\\t172800\\tIN\\tNS\\t\"}" commit_now:true

        _, err = tx.Mutate(ctx, mu)
        if err != nil {
                return errors.Wrap(err, "Mutate")
                // err == "rpc error: code = Unknown desc = Empty query"
        }

        err = tx.Commit(ctx)
        if err != nil {
                return errors.Wrap(err, "Commit")
        }

        return nil
}

the output for this is:

2019/08/24 15:31:43 set_json:"{\"name\":\"COM.\",\"type\":2,\"class\":1,\"ttl\":172800,\"original\":\"COM.\\t172800\\tIN\\tNS\\t\"}" commit_now:true 
2019/08/24 15:31:43 Error in add record at count 2: Mutate: rpc error: code = Unknown desc = Empty query

Thank you for your time.

Backup option in dgo client

Will a backup option be available in the dgo client in coming releases? The issue dgraph-io/dgraph#4900 was created some time ago and has been accepted. It would be convenient to have this API in the client for backups as well. Also, what is currently the recommended way to create backups of the data for the community edition?

api.proto - versioning & single source of truth

Hello,

I hope this is the correct channel to post my question. If it is not, I apologise and kindly request redirection to the appropriate location. Thanks! On to the actual question:

I have a basic (functional) custom C++ gRPC client for Dgraph, based on the protos/api.proto file in this repository, as it is linked by the Dgraph client documentation.

Given that a number of different official clients exist, and that Dgraph itself has a number of releases (branches, tags, archives), but this client does not, I wonder about gRPC compatibility.

In other words, does this repository host the ultimate "correct" version of the api.proto file, and how should we proceed to maintain compatibility with Dgraph current and future releases?

Thanks!

Cannot delete node by json

Hi, I could not delete a node by uid; could someone help me?

My code is :

type Person struct {
        Uid     string   `json:"uid"`
        Name    string   `json:"name"`
        Age     int      `json:"age"`
        Friends []Person `json:"friends"`
}

func main() {
        c := NewDgraphClient()
        err := CreateSchema(c)
        if err != nil {
                log.Fatal(err)
        }
        err = AddSomeData(c)
        if err != nil {
                log.Fatal(err)
        }
        err = DeleteData(c)
        if err != nil {
                log.Fatal(err)
        }

}

func NewDgraphClient() *dgo.Dgraph {
        conn, err := grpc.Dial("localhost:9080", grpc.WithInsecure())
        if err != nil {
                log.Fatal(err)
        }

        client := dgo.NewDgraphClient(api.NewDgraphClient(conn))

        return client
}

func CreateSchema(client *dgo.Dgraph) error {
        schema := `
name: string @index(term) .
age: int .
friends: [uid] .

type Person {
  name: string
  age: int
  friends: [Person]
}
`
        op := &api.Operation{Schema: schema}

        err := client.Alter(context.Background(), op)
        return err
}

func AddSomeData(client *dgo.Dgraph) error {
        p1 := &Person{
                Uid:  "_:dog",
                Name: "Dog",
                Age:  10,
        }
        p1.Friends = make([]Person, 0)

        p2 := &Person{
                Uid:  "_:monkey",
                Name: "Monkey",
                Age:  20,
        }
        p3 := &Person{
                Uid:  "_:cat",
                Name: "Cat",
                Age:  30,
        }

        p1.Friends = append(p1.Friends, *p2)
        p1.Friends = append(p1.Friends, *p3)

        mu := &api.Mutation{CommitNow: true}
        pb, err := json.Marshal(p1)
        if err != nil {
                return err
        }

        mu.SetJson = pb
        _, err = client.NewTxn().Mutate(context.Background(), mu)
        return err
}

func DeleteData(client *dgo.Dgraph) error {
        d := map[string]string{"uid": "0xfffd8d67d832b97d"}
        pb, err := json.Marshal(d)
        if err != nil {
                return err
        }
        fmt.Println(string(pb))
        mu := &api.Mutation{
                CommitNow:  true,
                DeleteJson: pb,
        }

        _, err = client.NewTxn().Mutate(context.Background(), mu)
        return err
}

DeleteData does not delete any data.
0xfffd8d67d832b97d is the uid of the data created by AddSomeData.
I found this uid via http://localhost:8000.

v2 Documentation for getting Schema in api.Response

I've been going through old code, updating it to dgo v2, and I found out that api.Response does not have a Schema field anymore for schema queries.

In the README, it still indicates to get the Schema field from api.Response, while this PR seems to get the schema from the Json field.

Should the documentation be updated?

Alter() should make retryable errors discernible from other errors

I have a test suite that uses DropAll to clear out the database and then use Alter to update the schema. The tests are then executed. After each run, we drop the database and update the schema again.

Sometimes, Alter() returns rpc error: code = Unknown desc = Pending transactions found. Please retry operation. The error is retryable, but there doesn't appear to be a way to tell that this is the case when using the client, other than trying to do a string match against the grpc error.

Ideally, there should be a Retryable interface that is implemented by all classes of retryable errors returned by the client. This would enable us to generalize the code required for retrying transactions by type asserting on the interface. See Assert errors for behaviour, not type in this article: https://dave.cheney.net/tag/error-handling
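
For illustration, the kind of behavioural check this proposes might look like the following hypothetical sketch (dgo does not currently expose such an interface; the names are placeholders):

// Hypothetical: assert on behaviour rather than a concrete error type.
type retryable interface {
	Retryable() bool
}

func isRetryable(err error) bool {
	r, ok := err.(retryable)
	return ok && r.Retryable()
}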

Query error automatically discards the transaction

This is a behavior change from dgo v1.0 to dgo v2. In dgo v2.1.0 a query error discards the current transaction, so future queries using the same txn are not allowed. This used to work in dgo v1.

Steps to reproduce

This is reproducible with the following sample code that runs these two queries in the same txn:

  1. {me(){}me(){}}: The first query returns a query error. In this case, Duplicate aliases not allowed: me.
  2. {me(){}}: This is a valid query that I expect should succeed.

dgo/v2:

package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo/v2"
	"github.com/dgraph-io/dgo/v2/protos/api"
	"google.golang.org/grpc"
)

var (
	addr = flag.String("addr", "localhost:9180", "Dgraph Alpha address.")

	ctx = context.Background()

	query1 = `{me(){}me(){}}`
	query2 = `{me(){}}`
)

func main() {
	flag.Parse()
	conn, err := grpc.Dial(*addr, grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	txn := dg.NewTxn()

	resp, err := txn.Query(ctx, query1)
	if err != nil {
		log.Printf("Query 1 error: %v\n", err)
	} else {
		log.Printf("Response: %v\n", string(resp.Json))
	}

	resp, err = txn.Query(ctx, query2)
	if err != nil {
		log.Printf("Query 2 error: %v\n", err)
	} else {
		log.Printf("Response: %v\n", string(resp.Json))
	}

	txn.Discard(ctx)

	fmt.Println("Done.")
}

Output:

2019/11/15 11:43:12 Query 1 error: rpc error: code = Unknown desc = Duplicate aliases not allowed: me
2019/11/15 11:43:12 Query 2 error: Transaction has already been committed or discarded
Done.

Expected behavior

In dgo v1 this behaves differently. We can use the same example code but change the import paths to use dgo v1.0.0:

dgo (v1):

package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	"github.com/dgraph-io/dgo"
	"github.com/dgraph-io/dgo/protos/api"
	"google.golang.org/grpc"
)

var (
	ctx  = context.Background()
	addr = flag.String("addr", "localhost:9180", "Dgraph Alpha address.")

	query1 = `{me(){}me(){}}`
	query2 = `{me(){}}`
)

func main() {
	flag.Parse()
	conn, err := grpc.Dial(*addr, grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	dg := dgo.NewDgraphClient(api.NewDgraphClient(conn))

	txn := dg.NewTxn()

	resp, err := txn.Query(ctx, query1)
	if err != nil {
		log.Printf("Query 1 error: %v\n", err)
	} else {
		log.Printf("Response: %v\n", string(resp.Json))
	}

	resp, err = txn.Query(ctx, query2)
	if err != nil {
		log.Printf("Query 2 error: %v\n", err)
	} else {
		log.Printf("Response: %v\n", string(resp.Json))
	}

	txn.Discard(ctx)

	fmt.Println("Done.")
}

Output:

2019/11/15 11:45:15 Query 1 error: rpc error: code = Unknown desc = Duplicate aliases not allowed: me
2019/11/15 11:45:15 Response: {"me":[]}
Done.

Additional notes

In dgo v2 the Query and QueryWithVars methods internally call Do. And Do calls txn.Discard if a query returns an error.

dgo/txn.go, line 159 at commit 3efa60e:

_ = txn.Discard(ctx)

Discarding txn after an error is only documented for the Mutate method in both dgo v1 and dgo v2:

If the mutation fails, then the transaction is discarded and all future operations on it will fail.

But discarding the transaction after a query error was not and is not the documented behavior for the Query methods in dgo.

No query results in v2.0.0-rc1 / v1.1.0-rc2 (same query in Ratel works)

Quite possibly another version mismatch issue, but I am not getting results back for a query (upsert blocks are working), and querying from ratel works.

docker image: dgraph:v1.1.0-rc2
dgo: v2.0.0-rc1 h1:753hWclPX4U+enGF0P+otArveffRAde1pm3prJ6gbxE=

code:

        query := fmt.Sprintf(`{ nameserver(func: eq(nameserver.name, %q)) { uid } }`, name)

        tx := p.client.NewReadOnlyTxn().BestEffort()
        defer tx.Discard(ctx)

        resp, err := tx.Query(ctx, query)
        if err != nil {
                return "", errors.Wrap(err, "Query")
        }

        log.Printf("query: %s, resp: %+v", query, resp)

output:

2019/08/26 08:01:12 query: { nameserver(func: eq(nameserver.name, "ns.buydomains.com")) { uid } }, resp: json:"{\"nameserver\":[]}" txn:<start_ts:1721348 > latency:<parsing_ns:5585 processing_ns:224733 encoding_ns:2963 >

Running the query in ratel works as expected:

{
  "data": {
    "nameserver": [
      {
        "uid": "0xe72c66"
      }
    ]
  },
  "extensions": {
    "server_latency": {
      "parsing_ns": 23370,
      "processing_ns": 3643589,
      "encoding_ns": 11415,
      "assign_timestamp_ns": 1320425
    },
    "txn": {
      "start_ts": 1721601
    }
  }
}

Get query result of upsert block

Hi,

Is it possible to retrieve the query result of an upsert block? I checked the resp.Json field of the api.Response from tx.Do, it returns an empty object {}.

Here is an example code.

schema := `username: string @index(term) @upsert .
email: string @index(term) @upsert .
no: int @index(int) @upsert .
name: string @index(term) .

type User {
	name
	username
	email
	no
}`

err := c.Alter(context.Background(), &api.Operation{Schema: schema})
if err != nil {
	log.Println(err)
}

query := `{
	q2 as var(func: type(User)) @filter(eq(username, "wildan")) { uid }
	q3 as var(func: type(User)) @filter(eq(email, "[email protected]")) { uid }
}`
condition := `@if(eq(len(q2), 0) AND eq(len(q3), 0))`
jsonData := `{"dgraph.type":"User","name":"H3h3","username":"wildan","email":"[email protected]","no":1}`

tx := c.NewTxn()

assigned, err := tx.Do(context.Background(), &api.Request{
	Query: query,
	Mutations: []*api.Mutation{
		&api.Mutation{
			Cond:    conditionString,
			SetJson: jsonData,
		}
	},
	CommitNow: true,
})

log.Println(string(assigned.Json), assigned.Uids)

tx = c.NewTxn()

assigned, err = tx.Do(context.Background(), &api.Request{
	Query: query,
	Mutations: []*api.Mutation{
		&api.Mutation{
			Cond:    conditionString,
			SetJson: jsonData,
		}
	},
	CommitNow: true,
})

log.Println(string(assigned.Json), assigned.Uids)

Which returns:

2020/01/04 13:16:01 {} map[dg.675717852.13:0x4e27]
2020/01/04 13:16:01 {} map[]

Thanks.

multiple connections and connection pooling

hi,

does this library support for connection pooling?
do we need to initialise new client connection for every single client request?

how does transaction work? can we reuse the same connection for multiple client?

thank you
