# riak-java-client
**Repository Path**: mirrors_craigwblake/riak-java-client
## Basic Information
- **Project Name**: riak-java-client
- **Description**: Riak's Java client
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2020-08-08
- **Last Updated**: 2026-03-21
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
This document describes how to use the Java client to interact with Riak. See the
[[https://github.com/basho/riak-java-client/blob/master/DEVELOPERS.md][DEVELOPERS]] document for a technical overview of the project.
In addition, we are now working on a "cookbook" for using the Riak Java client. See the [[https://github.com/basho/riak-java-client/wiki/Cookbook][Cookbook section on the Github Wiki]] for complete, compilable examples of using the Java client.
You can also browse the [[http://basho.github.com/riak-java-client/][Javadocs]].
* Overview
Riak is a Dynamo-style, distributed key-value store that provides a [[http://wiki.basho.com/MapReduce.html][map reduce]]
query interface. It exposes both a [[http://wiki.basho.com/REST-API.html][REST]] and a [[http://wiki.basho.com/PBC-API.html][protocol buffers]] API. This
is a Java client for talking to Riak via a single interface, regardless of the
underlying transport. This library also attempts to simplify some of the
realities of dealing with a fault-tolerant, eventually consistent database by
providing strategies for:
- Conflict resolution
- Retrying requests
- Value mutation
The client provides some lightweight, ORM-like capability for storing domain
objects in Riak and returning domain objects from map reduce queries.
* Using riak-java-client
** Including riak-java-client in your project
The current riak-java-client is available from Maven Central. Add the dependency to your pom.xml:
#+BEGIN_SRC xml
<dependency>
  <groupId>com.basho.riak</groupId>
  <artifactId>riak-client</artifactId>
  <version>1.1.1</version>
</dependency>
#+END_SRC
Maven artifacts are published to the [[https://oss.sonatype.org/index.html#nexus-search;quick~riak][Sonatype OSS Repository]].
As a convenience we also provide an [[http://riak-java-client.s3.amazonaws.com/riak-client-1.1.0-jar-with-dependencies.jar][all-in-one .jar file you can include in non-maven projects]]
that includes all the dependencies.
** Quick start
Assuming you're running Riak on localhost on the default ports, getting started is as simple as:
#+BEGIN_SRC java
// create a client (see Configuration below in this README for more details)
IRiakClient riakClient = RiakFactory.pbcClient(); //or RiakFactory.httpClient();
// create a new bucket
Bucket myBucket = riakClient.createBucket("myBucket").execute();
// add data to the bucket
myBucket.store("key1", "value1").execute();
//fetch it back
IRiakObject myData = myBucket.fetch("key1").execute();
// you can specify extra parameters to the store operation using the
// fluent builder style API
myData = myBucket.store("key1", "value2").returnBody(true).execute();
// delete
myBucket.delete("key1").rw(3).execute();
#+END_SRC
** Some History
This riak-java-client API is new. Prior to this version there were two separate
clients (one for protocol buffers, one for REST), both in the same library and
both with quite different APIs.
*** Deprecated
The REST client (which you can still use directly) has been moved to
#+BEGIN_SRC java
com.basho.riak.client.http.RiakClient
#+END_SRC
Though a deprecated RiakClient still exists at
#+BEGIN_SRC java
com.basho.riak.client.RiakClient
#+END_SRC
for another release or two to ease transition. All the REST client's HTTP
specific classes have been moved to
#+BEGIN_SRC java
com.basho.riak.client.http.*
#+END_SRC
and the originals retained *but deprecated*. If you want to use the legacy,
low-level client directly, please use the newly packaged version. The
deprecated classes will be deleted in the next release or the one following, to
clean up the namespaces.
At that time IRiak* will become Riak* and any I* names will be
deprecated then dropped. I'm sorry about the unpleasant naming
conventions in the short term.
** What's new?
*** Builders
To avoid the profusion of constructors and setters there is a builder
#+BEGIN_SRC java
com.basho.riak.client.builders.RiakObjectBuilder
#+END_SRC
to simplify creating and updating IRiakObjects.
In fact, most classes are as immutable as possible and are created
using fluent builders. The builders are *not* designed to be used
across multiple threads, but the immutable value objects they create are.
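As a plain-Java sketch of the pattern (with hypothetical names, not the client's classes), a fluent builder that produces an immutable value object looks like this:

#+BEGIN_SRC java
// Hypothetical sketch of the builder-to-immutable-value pattern; the
// names StoredValue and Builder are illustrative, not the client's API.
final class StoredValue {
    private final String key;
    private final String value;

    private StoredValue(String key, String value) {
        this.key = key;
        this.value = value;
    }

    String getKey()   { return key; }
    String getValue() { return value; }

    // The mutable builder: fluent setters, intended for single-threaded use.
    static final class Builder {
        private String key;
        private String value;

        Builder withKey(String k)   { this.key = k; return this; }
        Builder withValue(String v) { this.value = v; return this; }

        // build() yields the immutable object, safe to share across threads.
        StoredValue build() { return new StoredValue(key, value); }
    }
}
#+END_SRC

The built object has only final fields and no setters, which is why it can be handed between threads while the builder itself cannot.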
*** Layers
**** Low
There is a low-level interface, RawClient
#+BEGIN_SRC java
com.basho.riak.client.raw.RawClient
#+END_SRC
and two adapters that adapt the legacy protocol buffers and REST clients to the
RawClient interface. RawClient provides access to Riak's APIs. If you don't want
any of the higher-level features for dealing with domain objects, eventual
consistency, and fault tolerance (see below), then at least
use RawClient rather than the underlying legacy clients, so your code will not
need to change if you decide to move from REST to protocol buffers or
back. For example:
#+BEGIN_SRC java
RiakClient pbcClient = new RiakClient("127.0.0.1");
// OR
// com.basho.riak.client.http.RiakClient httpClient = new
//     com.basho.riak.client.http.RiakClient("http://127.0.0.1:8098/riak");
RawClient rawClient = new PBClientAdapter(pbcClient);
// OR new HTTPClientAdapter(httpClient);

IRiakObject riakObject = RiakObjectBuilder.newBuilder(bucketName, "key1").withValue("value1").build();
rawClient.store(riakObject, new StoreMeta(2, 1, false));

RiakResponse fetched = rawClient.fetch(bucketName, "key1");
IRiakObject result = null;

if (fetched.hasValue()) {
    if (fetched.hasSiblings()) {
        // do what you must to resolve conflicts
    } else {
        result = fetched.getRiakObjects()[0];
    }
}

result.addLink(new RiakLink("otherBucket", "otherKey", "tag"));
result.setValue("newValue");

RiakResponse stored = rawClient.store(result, new StoreMeta(2, 1, true));
IRiakObject updated = null;

if (stored.hasValue()) {
    if (stored.hasSiblings()) {
        // do what you must to resolve conflicts
    } else {
        updated = stored.getRiakObjects()[0];
    }
}

rawClient.delete(bucketName, "key1");
#+END_SRC
If *you* want to add a client transport to Riak (say you hate Apache HTTP client
but love Netty), implementing RawClient is the way to do it.
**** High
All the code so far elides some rather important details:
#+BEGIN_SRC java
// handle conflict here
#+END_SRC
If your bucket allows siblings at some point you may have to deal with
conflict. Likewise, if you are running in the real world you may have to deal
with temporary failure.
The higher level API (built on top of RawClient) gives
you some tools to deal with eventual consistency and temporary failure.
*** Operations
Talking to Riak is modelled as a set of operations. An operation is
a fluent builder for setting operation parameters (like the tunable CAP
quorum for a read) plus an execute method to carry out the operation. E.g.:
#+BEGIN_SRC java
Bucket b = client.createBucket(bucketName)
.nVal(1)
.allowSiblings(true)
.execute();
#+END_SRC
or
#+BEGIN_SRC java
b.store("k", "v").w(2).dw(1).returnBody(false).execute();
#+END_SRC
All the operations implement RiakOperation, which has a single method:
#+BEGIN_SRC java
T execute() throws RiakException;
#+END_SRC
**** Retry
Each operation needs a Retrier. You can specify a default retrier
implementation when you create an IRiakClient or you can provide one
to each operation when you build it. There is a simple retrier
provided with this library that retries the given operation *n* times
before throwing an exception.
#+BEGIN_SRC java
b.store("k", "v").withRetrier(DefaultRetrier.attempts(3)).execute();
#+END_SRC
The DefaultRiakClient implementation supplies a three-attempt retrier to all its
operations. You can override this from the constructor or
provide your own per operation (or per bucket, see below). The Retrier interface
accepts a Callable for its attempt method; internally, operations are
built around that interface.
#+BEGIN_SRC java
public interface Retrier {
    <T> T attempt(Callable<T> command) throws RiakRetryFailedException;
}
#+END_SRC
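The pattern itself is simple enough to sketch in standalone Java (a hypothetical SimpleRetrier, not the client's DefaultRetrier): call the command, remember the failure, and try again up to n times.

#+BEGIN_SRC java
import java.util.concurrent.Callable;

// Standalone sketch of the n-attempts retry pattern. The client's
// DefaultRetrier has the same shape but throws RiakRetryFailedException.
final class SimpleRetrier {
    private final int attempts;

    SimpleRetrier(int attempts) { this.attempts = attempts; }

    <T> T attempt(Callable<T> command) {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return command.call();
            } catch (Exception e) {
                last = e; // remember the most recent failure and retry
            }
        }
        throw new RuntimeException("all " + attempts + " attempts failed", last);
    }
}
#+END_SRC

A command that fails twice and succeeds on the third call would complete normally under `new SimpleRetrier(3)` but fail under `new SimpleRetrier(2)`.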
*** Buckets
To simplify the Riak client, all value-related operations are performed via the
Bucket interface. The Bucket also provides access to the set of bucket
properties (nval, allow_mult, etc.).
NOTE: at present not all bucket properties are exposed by either
API. This is something that will be addressed very soon.
One thing to note is that you can store more than
just IRiakObjects in buckets. Bucket has convenience methods to store
byte[] and String values against a key, but also type-parameterized,
generic fetch and store methods. These allow you to store your domain
objects in Riak. Please see Conversion below for details.
Although it is expensive and somewhat ill-advised, you may list a bucket's keys
with:
#+BEGIN_SRC java
for(String k : bucket.keys()) {
// do your key thing
}
#+END_SRC
The keys are streamed, and the stream is closed by a reaper thread once the
iterator becomes weakly reachable.
There is a further wrapper to bucket (see DomainBucket below) that simplifies
calling operations further.
*** Conflict Resolution
Conflict happens in Dynamo-style systems. It is best to have a strategy in mind
to deal with it. The strategy you employ is highly dependent on your domain. One
example is a shopping cart: conflicting shopping carts should be merged by taking
the union of their contents. You might reinstate a deleted toaster, but that is
better than losing money.
See MergeCartResolver in src/test for an example of a Shopping Cart conflict
resolver.
Both fetch and store make use of a ConflictResolver to handle siblings.
The default conflict resolver does not "resolve" conflicts; it blows up with
an UnresolvedConflictException (which gives you access to the siblings).
Using the basic bucket interface you can provide a conflict resolver
to either a fetch or a store operation. All operations are configured
by default with a resolver for which siblings are an exception.
The conflict resolver interface is a single method that accepts a
Collection of domain objects and returns the one true value, or
throws an exception if the conflict cannot be
resolved. UnresolvedConflictException contains all the siblings. In
cases where logic fails to resolve the conflict, you can push the
decision to a user:
#+BEGIN_SRC java
T resolve(final Collection<T> siblings) throws UnresolvedConflictException;
#+END_SRC
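The union-merge strategy is easy to sketch in self-contained Java (a hypothetical cart modelled as a set of item names; the real MergeCartResolver in src/test differs in detail):

#+BEGIN_SRC java
import java.util.Collection;
import java.util.Set;
import java.util.TreeSet;

// Sketch of a union-merge resolver for a hypothetical cart represented
// as a set of item names. It mirrors the resolve(Collection<T>) shape
// of the client's ConflictResolver, but is deliberately standalone.
final class CartMerger {
    static Set<String> resolve(Collection<Set<String>> siblings) {
        Set<String> merged = new TreeSet<>();
        for (Set<String> cart : siblings) {
            merged.addAll(cart); // union: never lose an item (may reinstate a deleted one)
        }
        return merged;
    }
}
#+END_SRC

Because the merge is a set union, it is commutative and idempotent, so it gives the same answer regardless of the order in which siblings arrive.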
Since conflict resolution requires domain knowledge, it makes sense to convert
Riak data into domain objects.
*** Conversion
Data in Riak is made up of the value, its content-type, links, and user
metadata. There is also some Riak metadata along with that (for example,
the VClock, last update time, etc.).
The data payload can be any type you like, but normally it is
a serialized version of some application specific data. It is a lot
easier to reason about siblings and conflict with the domain knowledge
of your application, and easier still with the actual domain objects.
Each operation provided by Bucket can accept an implementation of
#+BEGIN_SRC java
com.basho.riak.client.convert.Converter
#+END_SRC
Converter has two methods:
#+BEGIN_SRC java
IRiakObject fromDomain(T domainObject, VClock vclock)
T toDomain(IRiakObject riakObject)
#+END_SRC
Implement these and pass your implementation to a bucket operation to convert
Riak data into POJOs and back.
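To show the shape of such a converter without the library on the classpath, here is a toy round-trip through a plain String (a hypothetical Person type; a real Converter maps to and from IRiakObject and also carries the VClock and content-type):

#+BEGIN_SRC java
// Toy converter sketch with the same fromDomain/toDomain symmetry as the
// client's Converter<T>, but mapping to a plain String instead of an
// IRiakObject so the example stays self-contained. Person is hypothetical.
final class PersonConverter {
    static final class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Serialize the domain object; a real converter would also set the content-type.
    static String fromDomain(Person p) {
        return p.name + "|" + p.age;
    }

    // Deserialize; a real converter should check the stored content-type first.
    static Person toDomain(String stored) {
        String[] parts = stored.split("\\|");
        return new Person(parts[0], Integer.parseInt(parts[1]));
    }
}
#+END_SRC

The important property is that toDomain(fromDomain(x)) gives back an equivalent object; any serialization format with that round-trip guarantee will do.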
This library currently provides a JSONConverter that uses the [[http://wiki.fasterxml.com/JacksonHome][Jackson]] JSON
library. Jackson requires your classes to be either simple Java Bean types
(getter, setter, no-arg constructor) or annotated. Please see
#+BEGIN_SRC java
com.megacorp.commerce.ShoppingCart
#+END_SRC
for an example of a Jackson-annotated domain class, and LegacyCart in the same
package for an unannotated class.
You can annotate a field of your class with
#+BEGIN_SRC java
@RiakKey
#+END_SRC
and the client will use the value of that field as the key for fetch and store
operations. If you do not or cannot annotate a key field, then you must pass the key explicitly:
#+BEGIN_SRC java
bucket.store("key", myObject);
#+END_SRC
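One plausible way such annotation-driven key lookup can be implemented (a from-scratch sketch with a locally defined annotation, not the client's actual code) is reflection over the annotated field:

#+BEGIN_SRC java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Locally defined stand-in for the client's @RiakKey annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Key {}

// Hypothetical domain class with an annotated key field.
class UserCart {
    @Key String userId = "u-42";
}

final class KeyExtractor {
    // Scan the object's declared fields for the annotation and return its value.
    static String keyOf(Object o) {
        for (Field f : o.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Key.class)) {
                try {
                    f.setAccessible(true);
                    return String.valueOf(f.get(o));
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
        return null; // no annotated field: the caller must supply the key explicitly
    }
}
#+END_SRC

When no field carries the annotation, this sketch returns null, matching the advice above to pass the key explicitly in that case.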
Implementing your own converter is pretty simple, so if you want to store XML,
go ahead. Be aware that the converter should write the content-type when
serializing and also check the content-type when deserializing.
There is also a pass through converter for IRiakObject.
You may also use the JSONConverter to store Java Collection types (like Map
and List).