This project is mirrored from https://github.com/dgraph-io/dgraph.
- Oct 26, 2021
aman bansal authored
* adding bulk call for alpha to inform zero about the tablets (#8088)

- Oct 19, 2021
Naman Jain authored
(cherry picked from commit 3f514fee)

Naman Jain authored
(cherry picked from commit e7a19317) Co-authored-by: minhaj-shakeel <minhaj@dgraph.io>

- Oct 18, 2021
Daniel Mai authored
Cherry-picked from #8077. Negative offsets (e.g., offset: -4) can cause panics when sorting. This can happen when the query has the following characteristics:
1. The query is sorting on an indexed predicate.
2. The results include nodes that also don't have the sorted predicate.
3. A negative offset is used.
(panic trace is from v20.11.2-rc1-23-gaf5030a5)
panic: runtime error: slice bounds out of range [-4:]
goroutine 1762633 [running]:
github.com/dgraph-io/dgraph/worker.sortWithIndex(0x1fb12e0, 0xc00906a880, 0xc009068660, 0x0)
        /ext-go/1/src/github.com/dgraph-io/dgraph/worker/sort.go:330 +0x244d
github.com/dgraph-io/dgraph/worker.processSort.func2(0x1fb12e0, 0xc00906a880, 0xc009068660, 0xc0090686c0)
        /ext-go/1/src/github.com/dgraph-io/dgraph/worker/sort.go:515 +0x3f
created by github.com/dgraph-io/dgraph/worker.processSort
        /ext-go/1/src/github.com/dgraph-io/dgraph/worker/sort.go:514 +0x52a

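For context, a minimal Go sketch of the failure mode and the usual guard; applyOffset and its signature are illustrative assumptions, not the actual worker/sort.go code:

package main

import "fmt"

// applyOffset is an invented helper: it shows how a negative offset, used
// directly as a slice bound, panics with "slice bounds out of range", and how
// clamping it to zero avoids that.
func applyOffset(uids []uint64, offset, count int) []uint64 {
	if offset < 0 {
		offset = 0 // without this clamp, uids[offset:] panics for offset < 0
	}
	if offset >= len(uids) {
		return nil
	}
	end := len(uids)
	if count > 0 && offset+count < end {
		end = offset + count
	}
	return uids[offset:end]
}

func main() {
	uids := []uint64{1, 2, 3, 4, 5}
	fmt.Println(applyOffset(uids, -4, 2)) // [1 2] instead of a panic
}
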
- Oct 05, 2021
Naman Jain authored
Optimize populateSchema() by avoiding repeated lock acquisition. We can get the schema for the predicate once and then check for the required fields without taking a read lock each time. (cherry picked from commit d935b8b7) Co-authored-by: Ahsan Barkati <ahsan@dgraph.io>

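A hedged Go sketch of the single-fetch pattern; the schemaStore and predSchema types are invented for illustration, not Dgraph's real schema package:

package main

import (
	"fmt"
	"sync"
)

// Fetch the predicate's schema once under the read lock, then check all the
// fields on the returned copy instead of locking for every check.
type predSchema struct {
	Indexed, Count, Reverse bool
}

type schemaStore struct {
	mu    sync.RWMutex
	preds map[string]predSchema
}

func (s *schemaStore) get(pred string) (predSchema, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	ps, ok := s.preds[pred]
	return ps, ok
}

func main() {
	s := &schemaStore{preds: map[string]predSchema{"name": {Indexed: true}}}
	ps, ok := s.get("name") // one lock acquisition...
	if ok {
		fmt.Println(ps.Indexed, ps.Count, ps.Reverse) // ...then lock-free field checks
	}
}
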
- Sep 25, 2021
Naman Jain authored
Earlier, the admin server's mutex was used to protect the GraphQL schema map. Now we store the schema in the schema store, which handles concurrency internally, so we no longer need to take the admin server's read lock to access it. /probe/graphql is used as a health check and is called very frequently. The read lock on the admin server mutex made /probe/graphql requests block during lazy loading when a restore operation was triggered at startup, which led to a large number of goroutines being spun up. (cherry picked from commit 5ad40d84)

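A simplified Go sketch of the locking change; the adminServer and schemaStore types here are invented stand-ins, not the real admin package:

package main

import (
	"fmt"
	"sync"
)

// The store handles its own concurrency (here via sync.Map), so the hot
// /probe/graphql path never takes the admin server's mutex and cannot be
// blocked by a long-running admin operation such as a restore.
type schemaStore struct {
	schemas sync.Map // namespace (uint64) -> GraphQL schema (string)
}

type adminServer struct {
	mu    sync.Mutex // still guards admin operations, but not schema reads
	store *schemaStore
}

func (a *adminServer) probeGraphQL(ns uint64) string {
	if _, ok := a.store.schemas.Load(ns); ok {
		return "up"
	}
	return "no schema"
}

func main() {
	a := &adminServer{store: &schemaStore{}}
	a.store.schemas.Store(uint64(0), "type Query { health: String }")
	fmt.Println(a.probeGraphQL(0))
}
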
- Sep 24, 2021
aman bansal authored
* updating badger to latest version

aman bansal authored
* fix: fixing audit logs for websocket connections

- Sep 03, 2021
NamanJain8 authored

Naman Jain authored
(cherry picked from commit db841dec)

Ahsan Barkati authored
Change the proposal's unique key to an atomic counter instead of using a randomly generated key. (cherry picked from commit a515d0de)

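A minimal Go sketch of counter-based key generation; the variable and function names are illustrative, not the real proposal code:

package main

import (
	"fmt"
	"sync/atomic"
)

// Keys derived from an atomic counter are unique by construction, unlike
// randomly generated keys, which can occasionally collide.
var proposalKey uint64

func nextProposalKey() uint64 {
	return atomic.AddUint64(&proposalKey, 1)
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(nextProposalKey()) // 1, 2, 3 ...
	}
}
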
- Aug 31, 2021
aman bansal authored
* fix: add validation of null values with correct order of graphql rule validation

- Aug 24, 2021
Daniel Mai authored
Sort the buffer beforehand instead of sorting it in the goroutine used for writing the buffer to disk. The writeToDisk goroutines are throttled, so making them expensive causes other goroutines to block. This change significantly improves the restore map phase. (cherry picked from commit 19662456) Co-authored-by: Ahsan Barkati <ahsan@dgraph.io>

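A small Go sketch of the idea under simplified assumptions: a plain int slice stands in for the restore map buffer, and a buffered channel stands in for the writer throttle.

package main

import (
	"fmt"
	"sort"
	"sync"
)

func main() {
	throttle := make(chan struct{}, 2) // only 2 concurrent writers
	var wg sync.WaitGroup

	buffers := [][]int{{3, 1, 2}, {9, 7, 8}}
	for _, buf := range buffers {
		sort.Ints(buf) // sort here, before occupying a throttle slot

		throttle <- struct{}{}
		wg.Add(1)
		go func(b []int) {
			defer wg.Done()
			defer func() { <-throttle }()
			// writeToDisk stand-in: the throttled goroutine now only writes.
			fmt.Println("writing", b)
		}(buf)
	}
	wg.Wait()
}
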
- Aug 19, 2021
Naman Jain authored
* fix(acl): subscribe for the correct predicates (#7992). We were subscribing to the wrong predicates, so the ACL cache was not getting updated. (cherry picked from commit 1b75c01d)
* feat(acl): allow access to all the predicates using wildcard (#7991). Some use cases need read/write/modify permissions over all the predicates of the namespace, and managing permissions every time a new predicate is created is tedious. This PR adds a feature that gives a group access to all the predicates in the namespace using the wildcard predicate dgraph.all. This example gives the dev group read+write access to all predicates:
  mutation {
    updateGroup(
      input: {
        filter: { name: { eq: "dev" } }
        set: { rules: [{ predicate: "dgraph.all", permission: 6 }] }
      }
    ) {
      group {
        name
        rules {
          permission
          predicate
        }
      }
    }
  }
NOTE: The permission to a predicate for a group (say dev) is the union of the permissions from dgraph.all and the permissions on the specific predicate (say name). So if dgraph.all is given READ permission while the predicate name is given WRITE permission, the group has both READ and WRITE permission. (cherry picked from commit 3504044d)

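A short Go sketch of the union rule, using the ACL permission bits implied by the example above (READ=4, WRITE=2, MODIFY=1, so 6 = READ|WRITE); the helper name is illustrative:

package main

import "fmt"

const (
	modify = 1 << iota // 1
	write              // 2
	read               // 4
)

// The effective permission on a predicate is the bitwise OR of the rule for
// that predicate and the rule for the dgraph.all wildcard.
func effectivePerm(predicatePerm, wildcardPerm int) int {
	return predicatePerm | wildcardPerm
}

func main() {
	// dgraph.all grants READ, the predicate "name" grants WRITE:
	// the group ends up with both READ and WRITE on "name".
	perm := effectivePerm(write, read)
	fmt.Printf("effective=%d (read=%v write=%v)\n", perm, perm&read != 0, perm&write != 0)
}
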
- Aug 13, 2021
Daniel Mai authored
This will attempt to connect to Kafka over TLS using the system certs. * Add helper function x.TLSBaseConfig, which sets the minimum TLS version to v1.2 along with the minimum cipher suites.
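A hedged Go sketch of what a helper like x.TLSBaseConfig plausibly sets up; the exact cipher-suite list and field choices here are assumptions, not the helper's actual contents:

package main

import (
	"crypto/tls"
	"fmt"
)

// tlsBaseConfig builds a tls.Config that pins the minimum version to TLS 1.2
// and restricts cipher suites to a modern subset; leaving RootCAs nil makes
// crypto/tls fall back to the system cert pool.
func tlsBaseConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12,
		CipherSuites: []uint16{
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		},
	}
}

func main() {
	cfg := tlsBaseConfig()
	fmt.Println("min TLS version is 1.2:", cfg.MinVersion == tls.VersionTLS12)
}
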

Daniel Mai authored
(cherry picked from commit 9159e846) Co-authored-by: NamanJain8 <jnaman806@gmail.com>

- Aug 12, 2021
Naman Jain authored
We store the groupId and userId in a predicate named dgraph.xid. There was a subtle bug where, if we create a group with the same name as that of a user, the user is not able to log in. This happens because we were not applying a filter by type. This PR fixes that.

- Aug 06, 2021
Daniel Mai authored
Write to t instead of /tmp, which in Cloud is mapped to the node storage instead of the attached volume. /tmp can fill up easily since it's typically smaller than the allocated storage for the Dgraph Alpha disk, where the t directory is.

- Jul 29, 2021
Naman Jain authored
(cherry picked from commit b41ff1f8)

Ahsan Barkati authored
No need to execute the filter subgraph if there are no source UIDs.

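An illustrative Go guard for the idea; the SubGraph shape here is a stand-in, not Dgraph's real query type:

package main

import "fmt"

type SubGraph struct {
	SrcUIDs []uint64
	Filter  *SubGraph
}

// If the parent produced no source UIDs, the filter cannot match anything,
// so skip the work of executing it.
func processFilter(sg *SubGraph) {
	if sg.Filter == nil || len(sg.SrcUIDs) == 0 {
		fmt.Println("skipping filter: nothing to filter")
		return
	}
	fmt.Println("executing filter over", len(sg.SrcUIDs), "uids")
}

func main() {
	processFilter(&SubGraph{Filter: &SubGraph{}}) // no SrcUIDs -> skipped
}
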
- Jul 26, 2021
Ahsan Barkati authored
Write rolled-up keys at (max ts of the deltas + 1), because if we write the rolled-up keys at the same ts as the delta, then on WAL replay the rolled-up key would get overwritten by the delta, which can bring the DB to an invalid state.

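A small Go sketch of the versioning rule; the delta type and rollupVersion helper are invented for illustration:

package main

import "fmt"

type delta struct {
	ts uint64
}

// The rolled-up key is written at maxDeltaTs+1, so that if the WAL is
// replayed, the deltas (all at <= maxDeltaTs) can never overwrite the rollup.
func rollupVersion(deltas []delta) uint64 {
	var maxTs uint64
	for _, d := range deltas {
		if d.ts > maxTs {
			maxTs = d.ts
		}
	}
	return maxTs + 1 // strictly above every delta it replaces
}

func main() {
	fmt.Println(rollupVersion([]delta{{ts: 7}, {ts: 9}, {ts: 8}})) // 10
}
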
- Jul 16, 2021
Daniel Mai authored
The JoinCluster loop was getting the connection from the pool upfront and then looping over it. This opened up a bug because, since https://github.com/dgraph-io/dgraph/pull/7918 , we close the connection in case it becomes unhealthy. This PR gets the latest available connection inside the loop. This was the only place in the codebase where I found this issue. (cherry picked from commit 7531e95f) Co-authored-by: Manish R Jain <manish@dgraph.io>

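A Go sketch of the retry-loop fix with invented pool and conn types: fetch the connection inside each iteration so a redialed connection is picked up.

package main

import (
	"errors"
	"fmt"
	"time"
)

type conn struct{ healthy bool }

type pool struct{ dials int }

// Get pretends that the pool returns a healthy, redialed connection after a
// couple of failed attempts.
func (p *pool) Get() *conn {
	p.dials++
	return &conn{healthy: p.dials > 2}
}

func join(c *conn) error {
	if !c.healthy {
		return errors.New("connection closed")
	}
	return nil
}

func joinCluster(p *pool, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		c := p.Get() // re-fetch every iteration, not once before the loop
		if err = join(c); err == nil {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return err
}

func main() {
	fmt.Println(joinCluster(&pool{}, 5)) // <nil>: succeeds once the pool redials
}
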
- Jul 15, 2021
Ahsan Barkati authored

- Jul 14, 2021
Naman Jain authored

- Jul 07, 2021
Ahsan Barkati authored
This commit introduces incremental restore. It allows incremental backups to be restored on top of a set of already restored backups. In between two incremental restores, the cluster is in draining mode.

- Jul 02, 2021
Naman Jain authored
For big datasets, we're seeing a big slowdown due to loading schema and types serially using a single iterator. Using the Stream framework makes this metadata loading step much faster, resulting in much faster Alpha initialization. (cherry picked from commit d03d5ad1) Co-authored-by: Manish R Jain <manish@dgraph.io>

- Jun 30, 2021
OmarAyo authored
Motivation: Currently, there is no way to query namespaces. This adds a namespaces field to state, which can be used to query the list of namespaces. Note that this will output the list of namespaces only if the user is an admin user (guardian of the galaxy). In all other cases, it will return an empty list. (cherry picked from commit d2bd8328) Co-authored-by: vmrajas <rajas@dgraph.io>

- Jun 28, 2021
Ahsan Barkati authored
The ForceFull parameter was not being passed in the backup request queue, causing all the backups to be incremental despite the request having `forceFull=True`. This commit fixes this issue. (cherry picked from commit 8d08cc33)

Ahsan Barkati authored
The kv version should be set to the restore timestamp for rolled-up keys and schema keys as well.

aman bansal authored
fix: fixing graphql schema update when the data is restored + skipping /probe/graphql from audit (#7925) * fix: fixing graphql schema update when the data is restored * making audit skip the /probe/graphql endpoint, as this is the health endpoint for kube

- Jun 25, 2021
Daniel Mai authored
Earlier we were showing the opposite status: unbanned namespaces were shown as banned. This change fixes that.

- Jun 24, 2021
Naman Jain authored
Earlier, whenever the alpha starts (or restarts), we were upserting guardian and groot for all the namespaces. This is not actually needed. The change was made in PR #7759 to fix a bulk loader edge case. This PR fixes that by generating the required RDFs in the bulk loader itself. Essentially, it inserts the ACL RDFs when force-loading into a non-Galaxy namespace. (cherry picked from commit 6730f10b)

- Jun 22, 2021
Daniel Mai authored
In case the heartbeats in a connection pool stop, try to re-establish the connection via a redial. That's more robust than waiting for the current connection to become usable again, and it allows much faster recovery from network partitions. (cherry picked from commit 947a62bd) Co-authored-by: Manish R Jain <manish@dgraph.io>

minhaj-shakeel authored

Daniel Mai authored

Daniel Mai authored
When streaming raft messages in a k8s cluster, we don't seem to get an error if the send didn't succeed. The packets get queued up, but don't fail and don't get sent. This causes a long re-election process. This PR periodically tries to send a message to the destination node via IsPeer, so it has another way to test the connection. If that fails, the streaming fails too, and the node is marked as unreachable. Co-authored-by: Manish R Jain <manish@dgraph.io>

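A Go sketch of that watchdog pattern; isPeer and stream here are invented stand-ins for the real RPCs:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// While a raft message stream is open, probe the destination periodically;
// if the probe fails, abort the stream so the node is treated as unreachable
// instead of silently queueing packets.
func streamWithProbe(ctx context.Context, isPeer, stream func(context.Context) error) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	errCh := make(chan error, 1)
	go func() { errCh <- stream(ctx) }()

	ticker := time.NewTicker(50 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case err := <-errCh:
			return err
		case <-ticker.C:
			if err := isPeer(ctx); err != nil {
				cancel() // give up on the stream; caller marks the node unreachable
				return fmt.Errorf("peer check failed: %w", err)
			}
		}
	}
}

func main() {
	err := streamWithProbe(context.Background(),
		func(context.Context) error { return errors.New("no heartbeat") },     // probe fails
		func(ctx context.Context) error { <-ctx.Done(); return ctx.Err() },    // stream never errors on its own
	)
	fmt.Println(err)
}
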
- Jun 18, 2021
Naman Jain authored

- Jun 16, 2021
Ahsan Barkati authored
Check the length of the WAL entry's data before parsing it for the key.

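An illustrative Go guard; the 8-byte key prefix is an assumed layout, not the real WAL entry format:

package main

import (
	"encoding/binary"
	"fmt"
)

type entry struct {
	Data []byte
}

// parseKey bails out on short entries instead of slicing past the end of
// Data, which would panic during replay.
func parseKey(e entry) (uint64, bool) {
	if len(e.Data) < 8 {
		return 0, false // too short to contain a key; skip instead of panicking
	}
	return binary.BigEndian.Uint64(e.Data[:8]), true
}

func main() {
	fmt.Println(parseKey(entry{Data: []byte{0, 1}}))                          // 0 false
	fmt.Println(parseKey(entry{Data: []byte{0, 0, 0, 0, 0, 0, 0, 42, 9, 9}})) // 42 true
}
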
- Jun 11, 2021
Ahsan Barkati authored
Don't try to ban a namespace if pstore is nil. pstore will be nil when running the map phase of export_backup.

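A Go sketch of the nil guard; the store type stands in for the Badger handle behind pstore:

package main

import "fmt"

// In the map phase of export_backup there is no store, so banning a namespace
// has to be skipped rather than dereferencing a nil pointer.
type store struct{}

func (s *store) banNamespace(ns uint64) error {
	fmt.Println("banned namespace", ns)
	return nil
}

var pstore *store // nil while running the export_backup map phase

func banNamespace(ns uint64) error {
	if pstore == nil {
		return nil // nothing to ban against; skip instead of panicking
	}
	return pstore.banNamespace(ns)
}

func main() {
	fmt.Println(banNamespace(2)) // <nil>, and no nil-pointer dereference
}
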
Ahsan Barkati authored