Here's a video from J On The Beach this year going into the details of the Just-Right Consistency approach, which might help answer some questions:
Lasp (previously on HN):
SyncFree project review results:
If you'd be interested in any of the above, or just using the database and/or trying the tools, reach out to me via email christopher.meiklejohn at gmail.com or cmeik on Twitter.
That said, a major part of why databases are hard is that reliable storage is hard. I see remarkably little about this on Antidote's homepage. I wish this were just a frontend to some battle-tested storage engine, but it appears that this is not the case.
We're investigating a backend that works on LevelDB and RocksDB. We just haven't had the academic resources to get it implemented yet. However, it's largely an engineering resource problem and not a theoretical problem.
I appreciate your openness and honesty here, but the moment that an academic says that all the theory is solved and now "it's largely an engineering resource problem" is when people from industry tend to get nervous :-)
The majority of aspects that make databases robust aren't theoretical problems but "mere" engineering problems. Nevertheless there's an enormous variety in quality in this area. Pile sufficiently many engineering problems together and it becomes very hard to get right.
But fundamental "academic" problems in the system architecture won't ever get fixed after the fact, because fixing them would result in a different product targeting different use cases.
Some CRDTs support garbage collection directly - if you run them in a causally consistent environment. Antidote is causally consistent and has a Set and Map implementation that work like this; for these CRDTs you don't need a global sync.
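To make that concrete, here's a minimal Python sketch of an op-based observed-remove set; it is not Antidote's actual Erlang implementation, and all the names are invented for the example. The point it illustrates: assuming causal delivery (which is what causal consistency gives you), a remove can simply delete the add-tags it observed, so no tombstones have to be retained and no global sync is needed to reclaim the space.

    import uuid

    class ORSet:
        """Toy op-based observed-remove set (not Antidote's code)."""

        def __init__(self):
            self.entries = {}  # element -> set of unique add-tags

        # Operations generated at the origin replica; the returned tuples
        # stand in for messages broadcast to the other replicas.
        def add(self, element):
            tag = uuid.uuid4()
            self.apply_add(element, {tag})
            return ("add", element, {tag})

        def remove(self, element):
            observed = set(self.entries.get(element, set()))
            self.apply_remove(element, observed)
            return ("remove", element, observed)

        # Operations applied at every replica, in causal order: the adds
        # that created the observed tags are guaranteed to arrive first.
        def apply_add(self, element, tags):
            self.entries.setdefault(element, set()).update(tags)

        def apply_remove(self, element, tags):
            remaining = self.entries.get(element, set()) - tags
            if remaining:
                self.entries[element] = remaining
            else:
                self.entries.pop(element, None)  # state dropped, not tombstoned

        def value(self):
            return set(self.entries)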
Would you agree that it seems for many disciplines, whether it's CS or physics, the power of the statement "it's solved theoretically" relates to the scope of the problem?
For example, if someone came up with a strong argument or proof that raises the upper bound of performance for a specific algorithm, I could be convinced to start celebrating right away, and would think nothing of the implementation being passed off to a student as a mere formality.
However, the scope doesn't have to increase much before a formal proof, or even a conclusive argument, becomes impossible to make in a way that's completely convincing.
It’s not a knock against either theory or engineering, it’s a simple matter of our inability to model or predict things accurately beyond a certain complexity.
I only wish I could explain this well to lay people, like if a friend asks me, "after over half a century of writing code, why can't even the greatest software companies in the world reliably predict how long it takes to ship software?"
Maybe next time I’ll tell them, “it’s the same reason we need database reference implementations”. It will just add more confusion, but at least I can spit out some kind of answer.
It uses Riak Core as the underlying distribution layer, which is the same layer that Riak itself uses. For storage of the log, the built-in Erlang disk_log is used.
Is this at the cost of fast writes or flexible schema? The pitch video doesn't seem to mention any cons, yet seems to avoid mentioning the type of data or mutations supported. I guess I'll go read their publications.
It's at the cost of playing by the rules of CRDTs. Making CRDTs consequence-free is ongoing research.
What do you mean by that?
I found possibly related language on this page the other day:
CRDTs are "[t]ypically not suited for editing application with consequent UI."
Can you point me in the right direction? My googling got me nowhere. Thanks.
In general there are tradeoffs to CRDTs and not everyone loves those tradeoffs. As time goes on we find CRDTs that make better tradeoffs.
For example, the earliest examples of CRDT sets grew linearly with the values added to the set, without regard for deletion. For all time. That's a pretty steep cost.
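For illustration, here's a rough Python sketch of that early style of set (a two-phase set); the class and variable names are invented for the example. Even after every element has been removed, the internal state keeps every add and every tombstone forever.

    class TwoPhaseSet:
        """Early-style CRDT set: both the add set and the tombstone set
        only ever grow, and a removed element can never be re-added."""

        def __init__(self):
            self.added = set()
            self.removed = set()  # tombstones, kept forever

        def add(self, element):
            self.added.add(element)

        def remove(self, element):
            if element in self.added:
                self.removed.add(element)

        def merge(self, other):
            self.added |= other.added
            self.removed |= other.removed

        def value(self):
            return self.added - self.removed

    s = TwoPhaseSet()
    for i in range(100_000):
        s.add(i)
        s.remove(i)
    print(len(s.value()))                 # 0 visible elements...
    print(len(s.added) + len(s.removed))  # ...but 200,000 entries of state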
You can likely build something SQL-like to form complex queries coordinating related objects, but the efficiency of such queries is not a given. You can likely manually build something like an index to speed up lookups using e.g. the map primitive. With a plan for a complex query, you're on your own.
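As a rough illustration of that manual-index idea, here's a sketch where a plain in-memory dict stands in for the store's map and set primitives; none of this is Antidote's real API, just the shape of the approach.

    # Stand-in store: one "bucket" of records, one "bucket" acting as an index.
    store = {"users": {}, "users_by_city": {}}

    def put_user(user_key, record):
        """Write the record and update the index in the same step (in a real
        system you'd put both writes inside one transaction)."""
        store["users"][user_key] = record
        store["users_by_city"].setdefault(record["city"], set()).add(user_key)

    def users_in(city):
        """Look up via the manual index instead of scanning every user."""
        return store["users_by_city"].get(city, set())

    put_user("u1", {"name": "Ada", "city": "Paris"})
    put_user("u2", {"name": "Linus", "city": "Helsinki"})
    print(users_in("Paris"))   # {'u1'}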
Another likely problematic part is PK generation. You can of course use a counter, but I'm not sure how efficient will it be with burst inserts. For that, you'd have to come up with client-generated PKs that are unlikely to clash, e.g. UUIDs.
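A quick sketch of the client-generated-key approach; the new_order_key helper is invented for the example.

    import uuid

    # Each writer mints its own key, so bursts of inserts from many replicas
    # never contend on a shared counter and are vanishingly unlikely to clash.
    def new_order_key():
        return "order:" + str(uuid.uuid4())

    print(new_order_key())   # e.g. order:3f0e9a4e-...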
There is a lot of research still going on in the background, including indexing, access control, verification tools for apps, and some other really cool stuff.
Efficient support for queries is non-trivial, indeed.
"thinking about objects as sequences of unchangeable versions,
"the correct construction and
execution of a new atomic action can be accomplished without knowledge of all other atomic actions
in the system that might execute concurrently"
Is anyone aware of comparisons between these two streams of ideas (pseudo-time-based actions and CRDT-based)?
In short, Antidote looks like a decent solution to this kind of problem.
It's now rebranded as Kuhiro: http://highscalability.com/blog/2017/11/6/birth-of-the-nearc...
I think there's a strong case for something like this for IoT type devices. Imagine the simple case of adding a name to a contact list on a mobile device, and wanting that to get synced up with a series of other devices.
Antidote is designed for data-center deployments, in the hundreds of clusters. It provides causal consistency and transactions.
Lasp is designed for high-churn, edge deployments (with CRDTs) and provides eventual consistency: it's designed for 1,000+ nodes.
They share a bit of code, and you might want to evaluate both to determine which is a better fit for your use case: Lasp might be a better fit for the IoT application, because it was designed for that.
Here's a HN post on Lasp's scalability:
Here's a HN post on Lasp:
Your application behaves as if it were executed fully under strong consistency, but with improved scalability for the set of operations that can execute under an eventually consistent model.
With credit cards, you can indeed start multiple transactions against the same balance, and you're never sure in which order they will complete.
No, I will not further detail how here.
I had a question: in the video, at a certain point you say that you must modify the code to disallow concurrent debits. This makes sense in theory, but wouldn't it fail in practice? If two machines in different regions are running this code, they would not know that there is a concurrent debit. How is that addressed?
E.g. db[key] += value and db[key].insert(value) commute with themselves, but db[key] = value obviously doesn't.
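A tiny Python demonstration of that difference (helper names invented): applying two increments in either delivery order converges to the same value, while two blind assignments do not.

    from itertools import permutations

    def final_value(ops):
        db = {"k": 0}
        for op in ops:
            op(db)
        return db["k"]

    inc_2 = lambda db: db.update(k=db["k"] + 2)
    inc_5 = lambda db: db.update(k=db["k"] + 5)
    set_2 = lambda db: db.update(k=2)
    set_5 = lambda db: db.update(k=5)

    # Increments converge regardless of delivery order...
    print({final_value(p) for p in permutations([inc_2, inc_5])})   # {7}
    # ...but blind assignments don't: the result depends on who arrives last.
    print({final_value(p) for p in permutations([set_2, set_5])})   # {2, 5}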
This just seems to be an attempt to implement a correct eventually consistent database, and the CRDTs are simply datatypes with commutative update operations.
Unfortunately, it seems to allow transactions that examine objects and then (conditionally) update them, which obviously are not guaranteed to be commutative even if the objects are commutative, so maybe it doesn't succeed at implementing a correct database.
If I wanted to change the schema later, the database would let me just rewrite the code and it would reprocess all the received events since the beginning.
My use case is not anything high-performance or with thousands of writes -- it's the opposite.
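If it helps, here's a toy Python sketch of that reprocess-from-the-beginning idea; the event shapes and projection functions are invented. The stored events never change, and a "schema change" is just replaying them through new projection code.

    events = [
        {"type": "contact_added", "name": "Ada", "phone": "123"},
        {"type": "contact_added", "name": "Linus", "phone": "456"},
        {"type": "contact_removed", "name": "Ada"},
    ]

    def project_v1(evts):
        """Original schema: name -> phone, removed contacts disappear."""
        state = {}
        for e in evts:
            if e["type"] == "contact_added":
                state[e["name"]] = e["phone"]
            elif e["type"] == "contact_removed":
                state.pop(e["name"], None)
        return state

    def project_v2(evts):
        """New schema: keep removed contacts too, with an 'active' flag."""
        state = {}
        for e in evts:
            if e["type"] == "contact_added":
                state[e["name"]] = {"phone": e["phone"], "active": True}
            elif e["type"] == "contact_removed":
                state[e["name"]]["active"] = False
        return state

    print(project_v1(events))   # {'Linus': '456'}
    print(project_v2(events))   # same events, reinterpreted under the new schema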
CRDTs do not. They can be fully P2P.
I'm very excited for AntidoteDB, for its use cases but also for the underlying, pioneering work you're doing on CRDTs. Thank you for doing it! <3
Can you point me to an Erlang 20 compatible version?
No one's arguing that a Jepsen test shouldn't be done. Just that it'll probably be very different in character from tests of technology invented in industry.
Unsnarkily, I agree that implementation and composition of CRDTs is straightforward compared with other approaches people have taken. But if correct behavior is a requirement, full testing is too, regardless of how easy it -should- be in theory.
If you look at the Jepsen posts for Cassandra and Riak, for example, you can see that the general characteristics are fairly similar.
- Built in Erlang
- Great explainer videos
- Well-documented CRDTs that the database accepts
- Team of university-affiliated researchers working on CRDTs.
I'll be looking through you guys' stuff more. But good job! We need more people like you out there.