# Dgraph Database

Dgraph is a horizontally scalable, distributed GraphQL database with a native graph backend. It provides ACID transactions, consistent replication, and linearizable reads, and is built from the ground up to run rich queries over terabytes of structured data. As a native GraphQL database, Dgraph tightly controls how data is arranged on disk to optimize query performance and throughput, reducing disk seeks and network calls in distributed clusters.

The system supports both GraphQL and DQL (Dgraph Query Language) query syntax, responding in JSON and Protocol Buffers over gRPC and HTTP. Dgraph implements a sharded architecture with Raft consensus for replication, uses Badger as its underlying key-value store, and provides enterprise features including ACL-based security, multi-tenancy, backup/restore, and encryption at rest. The database is designed to provide Google production-level scale, with latency low enough to serve real-time user queries.

## APIs and Key Functions

### DQL Query Execution via HTTP

Execute queries using Dgraph Query Language through HTTP endpoints, with transaction support and variable binding.

```go
// queryWithTs runs a DQL query against /query, optionally scoped to an open
// transaction identified by startTs and hash.
//
// Example query:
//   { me(func: uid(0x01)) { name uid gender alive friend { uid name } } }
func queryWithTs(query string, startTs uint64, hash string) (string, error) {
	params := []string{}
	if startTs != 0 {
		params = append(params, fmt.Sprintf("startTs=%d", startTs))
		params = append(params, fmt.Sprintf("hash=%s", hash))
	}
	url := "http://localhost:8080/query?" + strings.Join(params, "&")

	resp, err := http.Post(url, "application/dql", strings.NewReader(query))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	// Response: {"data": {"me":[{"uid":"0x1","alive":true,"friend":[{"uid":"0x17","name":"Rick Grimes"}],"gender":"female","name":"Michonne"}]}}
	return string(body), nil
}
```

### Mutations with Blank Node UID Assignment

Apply mutations in JSON or RDF format with automatic UID assignment for blank nodes and transaction commit control.

```go
// mutateWithCommit posts a JSON mutation to /mutate. jsonData is the mutation
// body, for example:
//
//	{
//	  "set": [
//	    {
//	      "uid": "_:person",
//	      "name": "Alice",
//	      "age": 26,
//	      "friend": [{ "uid": "_:friend", "name": "Bob" }]
//	    }
//	  ]
//	}
func mutateWithCommit(jsonData string, commitNow bool, startTs uint64, hash string) (map[string]string, error) {
	params := []string{}
	if startTs != 0 {
		params = append(params, fmt.Sprintf("startTs=%d", startTs))
		params = append(params, fmt.Sprintf("hash=%s", hash))
	}
	if commitNow {
		params = append(params, "commitNow=true")
	}
	url := "http://localhost:8080/mutate?" + strings.Join(params, "&")

	resp, err := http.Post(url, "application/json", strings.NewReader(jsonData))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var result struct {
		Data struct {
			Uids map[string]string `json:"uids"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	// Returns: {"person": "0x123", "friend": "0x124"}
	return result.Data.Uids, nil
}
```
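The description above mentions RDF as well as JSON. As a minimal sketch (not part of the original examples), the same mutation can be sent as RDF N-Quads by switching the content type to `application/rdf`; the local endpoint and the hypothetical `mutateRDF` helper name are assumptions.

```go
// Sketch: the same mutation expressed as RDF N-Quads, assuming a local Alpha
// on :8080 as in the JSON example above. Blank nodes _:person and _:friend
// receive UIDs when the mutation is committed.
func mutateRDF(commitNow bool) (map[string]string, error) {
	url := "http://localhost:8080/mutate"
	if commitNow {
		url += "?commitNow=true"
	}

	nquads := `{
  set {
    _:person <name> "Alice" .
    _:person <age> "26" .
    _:person <friend> _:friend .
    _:friend <name> "Bob" .
  }
}`

	resp, err := http.Post(url, "application/rdf", strings.NewReader(nquads))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var result struct {
		Data struct {
			Uids map[string]string `json:"uids"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	return result.Data.Uids, nil
}
```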
### Schema Definition and Indexing

Define and update the database schema with a type system, indexes, and reverse edges for bidirectional traversal.

```go
// Setup schema with indexes and types
func setupSchema(client *dgo.Dgraph) error {
	ctx := context.Background()

	schema := `
		name: string @index(term, fulltext) .
		age: int @index(int) .
		email: string @index(exact) @upsert .
		friend: [uid] @reverse @count .
		created_at: datetime @index(hour) .
		location: geo @index(geo) .

		type Person {
			name
			age
			email
			friend
			created_at
			location
		}
	`

	op := &api.Operation{Schema: schema}
	if err := client.Alter(ctx, op); err != nil {
		return err
	}

	// Schema now supports:
	// - Term and full-text search on name
	// - Range queries on age
	// - Exact match on email, with the @upsert directive for conflict checking
	// - Reverse edge traversal and counting on friend
	// - Geo queries on location
	return nil
}
```

### GraphQL Schema Management

Deploy GraphQL schemas with auto-generated CRUD operations and custom resolvers.

```go
// Update GraphQL schema via admin endpoint
func updateGraphQLSchema(adminURL string) error {
	gqlSchema := `
		type Author {
			id: ID!
			name: String! @search(by: [term])
			dob: DateTime
			posts: [Post] @hasInverse(field: author)
			reputation: Float @search
		}

		type Post {
			id: ID!
			title: String! @search(by: [fulltext])
			text: String @search(by: [fulltext])
			author: Author!
			tags: [String] @search(by: [exact])
			publishedAt: DateTime @search
		}
	`

	mutation := `mutation updateGQLSchema($sch: String!) {
		updateGQLSchema(input: { set: { schema: $sch }}) {
			gqlSchema {
				id
				schema
				generatedSchema
			}
		}
	}`

	params := map[string]interface{}{
		"query":     mutation,
		"variables": map[string]interface{}{"sch": gqlSchema},
	}
	body, _ := json.Marshal(params)

	resp, err := http.Post(adminURL+"/admin", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Auto-generates queries: getAuthor, queryAuthor, getPost, queryPost
	// Auto-generates mutations: addAuthor, updateAuthor, deleteAuthor, addPost, updatePost, deletePost
	return nil
}
```

### Transaction Management with gRPC Client

Execute multi-statement transactions with read consistency and conflict detection.

```go
// Multi-operation transaction
func transferData(client *dgo.Dgraph, fromUID, toUID string, amount int) error {
	ctx := context.Background()
	txn := client.NewTxn()
	defer txn.Discard(ctx)

	// Read current balances
	query := fmt.Sprintf(`{
		from(func: uid(%s)) { uid balance }
		to(func: uid(%s)) { uid balance }
	}`, fromUID, toUID)

	resp, err := txn.Query(ctx, query)
	if err != nil {
		return err
	}

	var result struct {
		From []struct {
			UID     string `json:"uid"`
			Balance int    `json:"balance"`
		} `json:"from"`
		To []struct {
			UID     string `json:"uid"`
			Balance int    `json:"balance"`
		} `json:"to"`
	}
	if err := json.Unmarshal(resp.Json, &result); err != nil {
		return err
	}
	if len(result.From) == 0 || len(result.To) == 0 {
		return fmt.Errorf("account not found")
	}

	// Update balances
	mu := &api.Mutation{
		SetJson: []byte(fmt.Sprintf(`[
			{"uid": "%s", "balance": %d},
			{"uid": "%s", "balance": %d}
		]`, fromUID, result.From[0].Balance-amount, toUID, result.To[0].Balance+amount)),
	}
	if _, err := txn.Mutate(ctx, mu); err != nil {
		return err
	}

	// Commit transaction (fails if a concurrent modification is detected)
	return txn.Commit(ctx)
}
```
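When conflict detection aborts a commit, the usual pattern is to retry the whole transaction with fresh reads. A minimal sketch, assuming dgo surfaces such aborts as the exported `dgo.ErrAborted` sentinel and reusing the `transferData` helper above; the `transferWithRetry` name, retry count, and backoff are illustrative only (standard `errors`, `fmt`, and `time` imports assumed).

```go
// Sketch: retry a transactional operation when the commit is aborted by a
// conflicting write. Assumes dgo reports such aborts as dgo.ErrAborted.
func transferWithRetry(client *dgo.Dgraph, fromUID, toUID string, amount int) error {
	const maxRetries = 5
	var err error
	for i := 0; i < maxRetries; i++ {
		err = transferData(client, fromUID, toUID, amount)
		if err == nil {
			return nil
		}
		if !errors.Is(err, dgo.ErrAborted) {
			return err // non-conflict error: do not retry
		}
		// A conflicting transaction won; back off briefly and retry with fresh reads.
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return fmt.Errorf("transfer still aborted after %d retries: %w", maxRetries, err)
}
```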
### User Authentication and ACL

Manage users, groups, and predicate-level access control with JWT-based authentication.

```go
// Create user with ACL
func createUserWithACL(adminClient *dgo.Dgraph) error {
	ctx := context.Background()
	txn := adminClient.NewTxn()
	defer txn.Discard(ctx)

	// Create user
	userNQuads := []*api.NQuad{
		{
			Subject:     "_:newuser",
			Predicate:   "dgraph.xid",
			ObjectValue: &api.Value{Val: &api.Value_StrVal{StrVal: "alice"}},
		},
		{
			Subject:     "_:newuser",
			Predicate:   "dgraph.password",
			ObjectValue: &api.Value{Val: &api.Value_StrVal{StrVal: "secretpass"}},
		},
		{
			Subject:     "_:newuser",
			Predicate:   "dgraph.type",
			ObjectValue: &api.Value{Val: &api.Value_StrVal{StrVal: "dgraph.type.User"}},
		},
	}

	mu := &api.Mutation{Set: userNQuads, CommitNow: true}
	if _, err := txn.Mutate(ctx, mu); err != nil {
		return err
	}

	// Login to get JWT
	loginResp, err := http.Post("http://localhost:8080/login", "application/json",
		strings.NewReader(`{"userid": "alice", "password": "secretpass"}`))
	if err != nil {
		return err
	}
	defer loginResp.Body.Close()

	// The /login response wraps the tokens in a "data" object.
	var login struct {
		Data struct {
			AccessJWT  string `json:"accessJWT"`
			RefreshJWT string `json:"refreshJWT"`
		} `json:"data"`
	}
	if err := json.NewDecoder(loginResp.Body).Decode(&login); err != nil {
		return err
	}

	// Use access token in subsequent requests
	req, _ := http.NewRequest("POST", "http://localhost:8080/query",
		strings.NewReader("{ me(func: uid(0x1)) { name } }"))
	req.Header.Set("Content-Type", "application/dql")
	req.Header.Set("X-Dgraph-AccessToken", login.Data.AccessJWT)
	// Request is now authenticated as alice
	return nil
}
```

### GraphQL Query Execution

Execute GraphQL queries with variables, fragments, and nested selections through the GraphQL endpoint.

```go
// GraphQL query with variables
func executeGraphQLQuery(graphqlURL string) ([]byte, error) {
	// name has a term index in the schema above, so the generated filter
	// exposes allofterms/anyofterms rather than eq.
	query := `query GetAuthorPosts($authorName: String!) {
		queryAuthor(filter: { name: { allofterms: $authorName } }) {
			id
			name
			dob
			reputation
			posts(order: { desc: publishedAt }, first: 10) {
				id
				title
				text
				tags
				publishedAt
			}
		}
	}`

	params := map[string]interface{}{
		"query": query,
		"variables": map[string]interface{}{
			"authorName": "John Doe",
		},
	}
	body, _ := json.Marshal(params)

	resp, err := http.Post(graphqlURL+"/graphql", "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	responseBody, _ := ioutil.ReadAll(resp.Body)
	// Returns: {"data": {"queryAuthor": [{"id": "0x123", "name": "John Doe", "posts": [...]}]}}
	return responseBody, nil
}
```

### Advanced DQL Queries with Filters

Execute complex graph traversals with filters, pagination, sorting, and aggregations.

```go
// Complex DQL query with filters and aggregations
func advancedQuery(client *dgo.Dgraph) (string, error) {
	ctx := context.Background()

	query := `{
		# Capture the ages of matching users into a value variable for aggregation
		var(func: has(name)) @filter(ge(age, 25) AND le(age, 35)) {
			a as age
		}

		# Find users aged 25-35, paginated
		users(func: has(name), first: 100, offset: 0) @filter(ge(age, 25) AND le(age, 35)) {
			uid
			name
			age
			email

			# Traverse to friends with more than 5 friends of their own,
			# sorted by name and limited to 10
			friend (first: 10, orderasc: name) @filter(gt(count(friend), 5)) {
				uid
				name
				friendCount: count(friend)
			}

			# Aggregate friend count
			totalFriends: count(friend)
		}

		# Aggregate over the captured value variable
		stats() {
			avgAge: avg(val(a))
		}

		# Count of matching users
		totalUsers(func: has(name)) @filter(ge(age, 25) AND le(age, 35)) {
			count(uid)
		}

		# Geo query: locations within 10 km ([longitude, latitude], distance in metres)
		nearby(func: near(location, [-122.4194, 37.7749], 10000)) {
			uid
		}
	}`

	resp, err := client.NewReadOnlyTxn().Query(ctx, query)
	if err != nil {
		return "", err
	}
	return string(resp.Json), nil
}
```
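The DQL-over-HTTP section earlier mentions variable binding but never shows it. As a minimal sketch, assuming the same local Alpha on :8080 and a hypothetical `queryWithVariables` helper: when the body is posted as JSON, `/query` accepts a `query`/`variables` envelope with GraphQL-style DQL variables.

```go
// Sketch: DQL with GraphQL-style variables over HTTP. With a JSON body, the
// Content-Type is application/json instead of application/dql, and variable
// values are passed as strings keyed by their $name.
func queryWithVariables(name string) (string, error) {
	dql := `query people($name: string) {
		people(func: eq(name, $name)) {
			uid
			name
			age
		}
	}`

	envelope := map[string]interface{}{
		"query":     dql,
		"variables": map[string]string{"$name": name},
	}
	body, err := json.Marshal(envelope)
	if err != nil {
		return "", err
	}

	resp, err := http.Post("http://localhost:8080/query", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	return string(out), nil
}
```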
### Backup and Restore Operations

Trigger and monitor full or incremental backups with task-based asynchronous operations.

```go
// Backup and restore cluster data
func backupAndRestore(adminURL string) error {
	// Trigger full backup
	backupMutation := `mutation {
		backup(input: { destination: "/backups", forceFull: true }) {
			response { code message }
			taskId
		}
	}`

	params := map[string]interface{}{"query": backupMutation}
	body, _ := json.Marshal(params)
	resp, err := http.Post(adminURL+"/admin", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var backupResp struct {
		Data struct {
			Backup struct {
				TaskID string `json:"taskId"`
			} `json:"backup"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&backupResp); err != nil {
		return err
	}

	// Monitor task completion
	taskQuery := fmt.Sprintf(`query { task(input: {id: "%s"}) { status kind } }`,
		backupResp.Data.Backup.TaskID)
	for {
		params := map[string]interface{}{"query": taskQuery}
		body, _ := json.Marshal(params)
		taskHTTPResp, err := http.Post(adminURL+"/admin", "application/json", bytes.NewReader(body))
		if err != nil {
			return err
		}

		var taskResp struct {
			Data struct {
				Task struct {
					Status string `json:"status"`
				} `json:"task"`
			} `json:"data"`
		}
		err = json.NewDecoder(taskHTTPResp.Body).Decode(&taskResp)
		taskHTTPResp.Body.Close()
		if err != nil {
			return err
		}

		if taskResp.Data.Task.Status == "Success" {
			break
		}
		time.Sleep(5 * time.Second)
	}

	// Restore from backup
	restoreMutation := `mutation {
		restore(input: {
			location: "/backups",
			backupId: "dgraph.20250112.133000.000",
			backupNum: 1
		}) {
			code
			message
		}
	}`

	params = map[string]interface{}{"query": restoreMutation}
	body, _ = json.Marshal(params)
	restoreResp, err := http.Post(adminURL+"/admin", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer restoreResp.Body.Close()
	return nil
}
```

### Multi-Tenancy with Namespaces

Create isolated namespaces for multi-tenant deployments, each with its own login credentials and data.

```go
// Namespace management
func createAndUseNamespace(adminURL string, grootPassword string) (uint64, error) {
	// Login as groot (super admin) in the default namespace
	loginResp, err := http.Post(adminURL+"/login", "application/json",
		strings.NewReader(fmt.Sprintf(`{"userid": "groot", "password": "%s"}`, grootPassword)))
	if err != nil {
		return 0, err
	}
	var login struct {
		Data struct {
			AccessJWT string `json:"accessJWT"`
		} `json:"data"`
	}
	err = json.NewDecoder(loginResp.Body).Decode(&login)
	loginResp.Body.Close()
	if err != nil {
		return 0, err
	}

	// Create new namespace
	createNsMutation := `mutation {
		addNamespace(input: {password: "namespace-password"}) {
			namespaceId
			message
		}
	}`
	req, _ := http.NewRequest("POST", adminURL+"/admin",
		strings.NewReader(fmt.Sprintf(`{"query": %q}`, createNsMutation)))
	req.Header.Set("X-Dgraph-AccessToken", login.Data.AccessJWT)
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var nsResp struct {
		Data struct {
			AddNamespace struct {
				NamespaceID uint64 `json:"namespaceId"`
				Message     string `json:"message"`
			} `json:"addNamespace"`
		} `json:"data"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&nsResp); err != nil {
		return 0, err
	}
	namespaceID := nsResp.Data.AddNamespace.NamespaceID

	// Login into the new namespace (all subsequent operations with this token
	// are isolated to that namespace)
	nsLoginResp, err := http.Post(adminURL+"/login", "application/json",
		strings.NewReader(fmt.Sprintf(
			`{"userid": "groot", "password": "namespace-password", "namespace": %d}`, namespaceID)))
	if err != nil {
		return 0, err
	}
	var nsLogin struct {
		Data struct {
			AccessJWT string `json:"accessJWT"`
		} `json:"data"`
	}
	err = json.NewDecoder(nsLoginResp.Body).Decode(&nsLogin)
	nsLoginResp.Body.Close()
	if err != nil {
		return 0, err
	}

	// All queries/mutations sent with nsLogin.Data.AccessJWT are scoped to namespaceID
	return namespaceID, nil
}
```
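With the gRPC client, the same namespace scoping can be done without handling JWTs by hand. A hedged sketch, assuming dgo's `LoginIntoNamespace` method (available in dgo v210 and later) and the namespace credentials created in the HTTP example above; the `queryInNamespace` helper name is illustrative.

```go
// Sketch: namespace login via the dgo gRPC client, assuming dgo v210+ exposes
// LoginIntoNamespace. After a successful login the client attaches the
// namespace-scoped access token to every request it sends.
func queryInNamespace(client *dgo.Dgraph, namespaceID uint64) (string, error) {
	ctx := context.Background()

	// "groot" / "namespace-password" match the namespace created above.
	if err := client.LoginIntoNamespace(ctx, "groot", "namespace-password", namespaceID); err != nil {
		return "", err
	}

	// This query only sees data belonging to namespaceID.
	resp, err := client.NewReadOnlyTxn().Query(ctx, `{ q(func: has(name), first: 5) { uid name } }`)
	if err != nil {
		return "", err
	}
	return string(resp.Json), nil
}
```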
### Upsert Operations

Perform conditional upserts using query blocks and mutation variables for conflict-free updates.

```go
// Upsert to avoid duplicates
func upsertUser(client *dgo.Dgraph, email, name string, age int) error {
	ctx := context.Background()

	query := fmt.Sprintf(`
		query {
			user as var(func: eq(email, %q))
		}
	`, email)

	mu := &api.Mutation{
		SetNquads: []byte(fmt.Sprintf(`
			uid(user) <name> %q .
			uid(user) <email> %q .
			uid(user) <age> "%d"^^<xs:int> .
			uid(user) <dgraph.type> "Person" .
		`, name, email, age)),
	}

	req := &api.Request{
		Query:     query,
		Mutations: []*api.Mutation{mu},
		CommitNow: true,
	}

	txn := client.NewTxn()
	defer txn.Discard(ctx)

	// Creates a new user if none exists, updates the existing one otherwise
	_, err := txn.Do(ctx, req)
	return err
}
```

### Live Data Loading

Bulk load data into a running cluster using the live loader with concurrent batch processing.

```bash
# Load RDF data file
dgraph live \
  --alpha localhost:9080 \
  --files data.rdf.gz \
  --schema schema.txt \
  --batch 1000 \
  --conc 10 \
  --zero localhost:5080

# Load JSON data
dgraph live \
  --alpha localhost:9080 \
  --files data.json.gz \
  --format=json \
  --batch 1000
```

## Summary

Dgraph serves as a production-ready distributed graph database optimized for applications requiring complex relationship queries, real-time performance at scale, and flexible schema evolution. Primary use cases include social networks with friend-of-friend queries, recommendation engines with multi-hop traversals, knowledge graphs with rich type systems, access control systems with group-based permissions, and multi-tenant SaaS platforms requiring data isolation. The database excels when data is sparse or highly interconnected and doesn't fit traditional SQL tables, particularly when both graph traversal capabilities and ACID transaction guarantees are required.

Integration patterns center on three approaches: gRPC clients using the official dgo library for programmatic access with full transaction control, HTTP/JSON APIs for language-agnostic REST-style integration with curl or standard HTTP libraries, and GraphQL endpoints for rapid application development with auto-generated schemas and resolvers. The system supports horizontal scaling through predicate-based sharding, consistent replication via Raft consensus, and high availability through multi-replica deployments. Operations teams benefit from built-in backup/restore, live data loading without downtime, metrics exporters for Prometheus, distributed tracing with OpenTelemetry, and rolling upgrades across cluster versions.
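The dgo-based examples above all take a `*dgo.Dgraph` client as given. As a minimal sketch of the gRPC integration path described in the summary, assuming a single local Alpha exposing gRPC on the default port 9080 without TLS and the dgo v210 module path; the `newClient` helper is illustrative only.

```go
// Sketch: constructing the *dgo.Dgraph client used throughout the examples.
package main

import (
	"context"
	"log"

	"github.com/dgraph-io/dgo/v210"
	"github.com/dgraph-io/dgo/v210/protos/api"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func newClient() (*dgo.Dgraph, *grpc.ClientConn, error) {
	// Assumes a local Alpha on the default gRPC port without TLS.
	conn, err := grpc.Dial("localhost:9080", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, nil, err
	}
	// One logical client; additional Alpha connections can be passed for load balancing.
	return dgo.NewDgraphClient(api.NewDgraphClient(conn)), conn, nil
}

func main() {
	client, conn, err := newClient()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Sanity check: run a trivial read-only query against the cluster.
	resp, err := client.NewReadOnlyTxn().Query(context.Background(), `{ q(func: uid(0x1)) { uid } }`)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("response: %s", resp.Json)
}
```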