Automated Testing Using Gradle, JUnit and DynamoDB Local

Recently, I’ve been working on an open-source project (Todo) that uses HSQL as an in-memory embedded database. However, I’d prefer to migrate to a NoSQL database for production. Our company already uses Amazon Web Services (AWS), so I’ve decided to go with DynamoDB. Since our company is frugal, I wanted to test our code using DynamoDB Local, which is free of charge and doesn’t require an internet connection. However, unlike the automatic configuration of HSQL in a Spring Boot app, getting DynamoDB Local to run properly during tests requires additional configuration in your build script and test cases.

In this post, I’m going to outline the steps for configuring and running a local instance of DynamoDB in your JUnit tests. We’ll begin with the Gradle dependencies and tasks.

Gradle Dependencies

In the Gradle build file, configure the AWS custom repository for DynamoDB Local.

repositories {

    maven {
        //Local DynamoDB repository
        url ""
    }
}

Then add the following dependencies:

dependencies {
    // AWS DynamoDB
    compile group: 'com.amazonaws', name: 'aws-java-sdk-dynamodb', version: '1.11.213'

    // JUnit test framework
    testCompile 'junit:junit:4.12'
    //Local DynamoDB
    testCompile "com.amazonaws:DynamoDBLocal:1.+"
    //SQLite4Java, required by local DynamoDB
    testCompile group: 'com.almworks.sqlite4java', name: 'sqlite4java', version: '1.0.392'
}


Gradle Tasks

Next, add the following tasks to the Gradle build file:

//Copy SQLite4Java dynamic libs
task copyNativeDeps(type: Copy) {
    from(configurations.compile + configurations.testCompile) {
        include '*.dll'
        include '*.dylib'
        include '*.so'
    }
    into 'build/libs'
}

test {
    dependsOn copyNativeDeps
    systemProperty "java.library.path", 'build/libs'
}


JUnit Test Case

In your test case, I recommend configuring and running DynamoDB Local before any tests are executed and shutting it down after the tests have completed. You’ll also want to create your tables before executing your tests. This can be achieved by annotating a public static method in your test class with @BeforeClass.
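To make that ordering concrete, here's a minimal plain-Java sketch of the lifecycle JUnit enforces; the method bodies are placeholders, not the real DynamoDB Local calls:

```java
// Simulates the ordering JUnit guarantees: the @BeforeClass method runs once
// before all tests, each @Test runs, then the @AfterClass method runs once.
public class LifecycleSketch {
    static String sServer;                          // stands in for the server field
    static StringBuilder order = new StringBuilder();

    static void runDynamoDB()      { sServer = "up";  order.append("setup;"); }
    static void testQuery()        { order.append("test1;"); }
    static void testScan()         { order.append("test2;"); }
    static void shutdownDynamoDB() { sServer = null;  order.append("teardown;"); }

    public static void main(String[] args) {
        runDynamoDB();            // JUnit calls the @BeforeClass method once
        testQuery();              // ...then every @Test method...
        testScan();
        shutdownDynamoDB();       // ...then the @AfterClass method once
        System.out.println(order);
    }
}
```

Because the server and client are shared across all tests, they must live in static fields, which is why the methods below are static.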

Note: sServer and sClient are static fields in your test class.

@BeforeClass
public static void runDynamoDB() {

  //Need to set the SQLite4Java library path to avoid a linker error
  System.setProperty("sqlite4java.library.path", "./build/libs/");

  // Create an in-memory and in-process instance of DynamoDB Local that runs over HTTP
  final String[] localArgs = { "-inMemory" };

  try {
	sServer = ServerRunner.createServerFromCommandLineArgs(localArgs);
	sServer.start();
  } catch (Exception e) {
	e.printStackTrace();
	return;
  }

  createAmazonDynamoDBClient();
  createMyTables();
}


private static void createAmazonDynamoDBClient() {
  sClient = AmazonDynamoDBClientBuilder.standard()
	        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
	        .build();
}

private static void createMyTables() {
  //Create the tables from the annotated item classes
  DynamoDBMapper mapper = new DynamoDBMapper(sClient);
  CreateTableRequest tableRequest = mapper.generateCreateTableRequest(MyItemOne.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);

  tableRequest = mapper.generateCreateTableRequest(MyItemTwo.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);

  tableRequest = mapper.generateCreateTableRequest(MyItemThree.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);
}

Then, after the tests have completed, ensure that the database is shut down by annotating a public static method in your test class with @AfterClass.

@AfterClass
public static void shutdownDynamoDB() {
  if(sServer != null) {
     try {
        sServer.stop();
     } catch (Exception e) {
        e.printStackTrace();
     }
  }
}

I hope these steps help you to more easily implement local automated tests of your DynamoDB code. Let me know if you have any feedback.



How to Build a Serverless API With AWS DynamoDB, Lambda, and API Gateway


Imagine running your entire IT department or SaaS without servers. The age of serverless architecture is here now. So, what’s serverless architecture? According to Mike Roberts:

Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party. One way to think of this is Functions as a service (FaaS). AWS Lambda is one of the most popular implementations of FaaS at present, but there are others. I’ll be using ‘FaaS’ as a shorthand for this meaning of Serverless throughout the rest of this article.

Source: Roberts, M. (2016, August 4). Serverless Architectures.

At Rodax Software, we’re thinking about scrapping the AWS EC2 instances running our Skedi services and migrating them into serverless microservices running on AWS Lambda. That said, it’s important to have some healthy skepticism about making this transition. There are some potentially significant drawbacks to consider, such as vendor lock-in; however, I’m not going to address these in this post. For more information about the drawbacks click here.

Given the scope of transitioning our API and the potential risks, I wanted to get a better idea of the level of effort and overall feasibility. The primary goal of this article is to walk through how to build a serverless RESTful API that’s integrated with Lambda and uses a NoSQL database as its data store. So, I’ve put together this sample FaaS that exposes an AWS API Gateway method, which invokes a Lambda function, which then queries a DynamoDB table. Sounds like fun, right? Anyway, I purposely minimized wizard use because I wanted to have a clear understanding of all the steps involved. Moreover, I think it’s easier to learn by stepping through the AWS Management Console.

In a nutshell, here’s what we’re going to do in this tutorial:

  • Create a table in DynamoDB and populate it with sample data
  • Create a Lambda function that queries the DynamoDB table
  • Create an API Gateway method that invokes the Lambda function

The prerequisites include the following:

  1. An AWS Account and AWS CLI configured. Learn more here.
  2. Experience with the AWS Management Console. Learn more here.
  3. AWS SDK for Java is setup and configured properly (optional). For more information click here.
  4. Git is installed and configured, click here for instructions.
  5. Maven is installed and configured, click here for instructions.
  6. cURL or equivalent to download files using the command line (optional).

Populate DynamoDB

DynamoDB is Amazon’s premier fully managed proprietary NoSQL database available on AWS. SimpleDB is another NoSQL database offered by Amazon. It’s designed for smaller workloads and I intended to use it; however, it’s unsupported in the Lambda execution environment. If you’re interested, I have written code to populate my sample data in SimpleDB in Java and C#, here and here, respectively.

In this section, we’re going to create a DynamoDB table and populate it in two different ways. Choose whichever option you prefer or try both.

Option 1: Create DynamoDB Table using Java

  1. Clone the project:
    $ git clone
  2. Review the code that creates the table and populates it with sample data:
    private static void createTable() throws InterruptedException {
        AttributeDefinition[] defs = {
                new AttributeDefinition(EMAIL, S)
        };
        ProvisionedThroughput throughput = new ProvisionedThroughput(1L, 1L);
        //Email address is the key
        KeySchemaElement emailKey = new KeySchemaElement(EMAIL, KeyType.HASH);
        CreateTableRequest createTableRequest = new CreateTableRequest()
                .withTableName(TABLE)
                .withKeySchema(emailKey)
                .withAttributeDefinitions(defs)
                .withProvisionedThroughput(throughput);
        // Create the table if it does not exist yet
        TableUtils.createTableIfNotExists(sDynamoDB, createTableRequest);
        // Wait for the table to move into ACTIVE state
        TableUtils.waitUntilActive(sDynamoDB, TABLE);
    }

    private static void addSampleItems() {
        // Add an item
        Map<String, AttributeValue> item = createItem("", "John", "Doe");
        PutItemRequest putItemRequest = new PutItemRequest(TABLE, item);
        PutItemResult putItemResult = sDynamoDB.putItem(putItemRequest);
    }

For information about programming in DynamoDB click here.

  3. From the aws-dynamodb-demo-java/dynamo-db directory, package the project:
    $ mvn package
  4. Then run the app:
    $ mvn exec:java

Option 2: Create DynamoDB Table using AWS CLI

  1. In the terminal window, create a directory for the project.
  2. To create the table, we’ll use the following JSON:
        {
            "AttributeDefinitions": [{
                "AttributeName": "email",
                "AttributeType": "S"
            }],
            "TableName": "customer",
            "KeySchema": [{
                "AttributeName": "email",
                "KeyType": "HASH"
            }],
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 1,
                "WriteCapacityUnits": 1
            }
        }
  3. Download the table.json file at the prompt:
    $ curl >table.json or use your web browser.
  4. Create the customer table using the AWS CLI:
    $ aws dynamodb create-table --table-name customer --cli-input-json file://table.json
  5. To populate the table, we’ll use the following JSON format:
    {
        "customer": [{
            "PutRequest": {
                "Item": {
                    "email": { "S": "" },
                    "first_name": { "S": "John" },
                    "last_name": { "S": "Doe" }
                }
            }
        }]
    }
  6. Download the data.json file:
    $ curl >data.json
  7. Populate the customer table:
    $ aws dynamodb batch-write-item --request-items file://data.json

If you tried both of these methods, which one did you prefer?

Create a Lambda Function

Now that we’ve populated the database, we’re ready to create and configure our Lambda function. In the following steps we’ll:

  • Build and package a Java Lambda function
  • Create, configure, and deploy the function
  • Create and configure the function’s AWS IAM role
  • Verify and test the function
  1. Ensure that Maven is installed and configured on your computer.
  2. Clone the project:
    $ git clone
  3. From the aws-lambda-demo-java/lambda-demo directory, package the project:
    $ mvn package
  4. In the AWS Management Console, go to the Lambda Console and click Create a Lambda Function > Blank Function > Next.
  5. For the Name, enter getCustomers and Java 8 for the runtime.
  6. Click Upload and navigate to aws-lambda-demo-java/lambda-demo/target/lambda-demo-1.0.0.jar
  7. In the Lambda function handler and role section, set the Handler to, in the dropdown click Create new role from templates, set the Role name to customerLambdaRole and set the Policy templates to Simple Microservice permissions, click Next and Create Function.

    Note that the Java 8 runtime requires the handler value to have the following format: package.ClassName::methodName.

  8. Then in the IAM Management Console, click Roles > customerLambdaRole > Permissions > Attach Policy > AWSLambdaDynamoDBExecutionRole > Attach Policy.
  9. In the Lambda Console, click Functions > getCustomers > Test and verify if it works.
  10. Bummer, the test fails because of a pesky permissions error.
  11. In the IAM Management Console, click Policies > Create policy > Copy an AWS Managed Policy > Select.
  12. Search for dynamodbread and select AmazonDynamoDBReadOnlyAccess.
  13. In the Policy Document, set the Resource to your DynamoDB customer table’s ARN, e.g., arn:aws:dynamodb:us-west-2:123456789:table/customer, click Validate Policy and Create Policy. Note the generated name of the policy.
  14. Click Roles > customerLambdaRole > Permissions > AWSLambdaDynamoDBExecutionRole > Detach Policy > Detach.
  15. Then click Attach Policy > Policy Type > Customer Managed > policy from step 13 > Attach Policy.
  16. In the Lambda Console, click Functions > getCustomers > Test and verify if it works. It should and the output will look like the following:
    "{Items: [{last_name={S: Smith,}, created_date={S: 2017-07-11T00:44:40.209Z,}, first_name={S: Mary,}, email={S:,}}, {last_name={S: Smith,}, created_date={S: 2017-07-11T00:44:40.225Z,}, first_name={S: Bob,}, email={S:,}}, {last_name={S: Doe,}, created_date={S: 2017-07-11T00:44:40.191Z,}, first_name={S: Jane,}, email={S:,}}, {last_name={S: Boyer,}, created_date={S: 2017-07-11T00:44:40.144Z,}, first_name={S: John,}, email={S:,}}],Count: 4,ScannedCount: 4,}"

Although I’m not going to cover it here in detail, the AWS CLI command for creating our Lambda function would look something like the following:

    $ aws lambda create-function \
        --region us-west-2 \
        --function-name getCustomers \
        --zip-file fileb:///aws-lambda-demo-java/lambda-demo/target/lambda-demo-1.0.0.jar \
        --role arn:aws:iam::123456789:role/service-role/customerLambdaRole \
        --handler \
        --runtime java8 \
        --profile <adminuser>

Note that the path for the ZIP file can also be an AWS S3 bucket. For more information about the AWS Lambda CLI click here.

Anyway, there’s a lot more to learn about Lambda such as versioning, supported event triggers, automated deployment and so on. I hope to cover some of these topics in future posts. In the meantime, I recommend checking out Lambda’s documentation here.

Create an API Gateway Method

  1. In Amazon API Gateway Console, click Create API > New API, enter CustomerSample for the name and click Create API.
  2. Then click Actions > Create Method, click GET in the dropdown menu and the OK checkmark.
  3. For the Integration type, click Lambda Function, your region (e.g., us-west-2), getCustomers for the Lambda Function and click Save.
  4. Review the overview of the API’s invocation sequence.
  5. Then click TEST > Test, the output should be the same as the Lambda function.
  6. To invoke the API via a web browser, you’ll need to deploy it. Click Actions, create a new deployment stage, e.g., test, and click Deploy.
  7. Copy and paste the Invoke URL into your web browser, e.g.,

As with Lambda, there’s a lot more to learn about API Gateway, and a good place to start is the developer guide here.


After developing this exercise and writing this post, I’ve come to the realization that there are lots of moving parts between DynamoDB, Lambda, and API Gateway, especially with respect to setup and configuration. Creating the Lambda function in the console alone required sixteen steps. Clearly, we would need to automate these steps to ease the pain of setup and deployment. That will require further learning through prototyping and code katas.

Additionally, I observed that after I deployed my function and waited a day between executions, the latency was very high. This is what’s known as a cold start:

A cold start occurs when an AWS Lambda function is invoked after not being used for an extended period of time resulting in increased invocation latency…
From the data, it’s clear that AWS Lambda shuts down idle functions around the hour mark.

Source: Cui, Y. (2017, July 3). How long does AWS Lambda keep your idle functions around before a cold start?

From an architectural perspective, it will be important to consider the cold start scenario. For example, asynchronous background tasks may be better suited for Lambda than infrequent client API invocations that usually require low latency. Consequently, when we decide to migrate our first Lambda function to production, we’ll choose a background task that isn’t dependent on low latency.

In the meantime, please let me know if you have any additional thoughts or comments on this tutorial.


Migrating to AWS SQS from ActiveMQ

After years of using ActiveMQ 5.x in our Java-based production system, we decided to migrate to the Amazon Simple Queue Service (SQS). Managing and configuring ActiveMQ no longer seemed to make sense, especially when SQS does it for us. Moreover, SQS recently added FIFO support. Messages in SQS FIFO queues are delivered in order and are guaranteed to be processed only once. These features are exactly what we needed to replace ActiveMQ with SQS and ensure that our backend processes would function as expected.


Before migrating to SQS, here are some important considerations:

  • JMS support is limited to queues, i.e., point-to-point.
  • Message size is limited to 256 KB.
  • Message content is restricted to text only, i.e., XML, JSON, and unformatted text. JMS’s MapMessage and StreamMessage interfaces are unsupported.
  • Message durability is constrained to a maximum of 14 days (the default is 4 days).
  • JMS selectors for filtering messages are unsupported.
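Since the size and retention limits are easy to trip over, it’s worth guarding against them in code before sending. Here’s a minimal sketch; the class and method names are mine, not part of the SQS API:

```java
import java.nio.charset.StandardCharsets;

public class SqsLimits {
    // SQS caps a message body at 256 KB and retention at 14 days
    static final int MAX_MESSAGE_BYTES = 256 * 1024;
    static final long MAX_RETENTION_SECONDS = 14L * 24 * 60 * 60;

    // Returns true if the UTF-8 encoded body fits within the SQS size limit
    static boolean fitsInSqsMessage(String body) {
        return body.getBytes(StandardCharsets.UTF_8).length <= MAX_MESSAGE_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(fitsInSqsMessage("hello"));  // a short body fits
        System.out.println(MAX_RETENTION_SECONDS);      // 14 days in seconds
    }
}
```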


My company Rodax Software provides a proprietary RESTful API for Skedi—a calendar aggregation app for families and teams. The API has many long-running tasks, so we use queues to manage the workflow while providing fast HTTP responses to our mobile clients.

Our messaging architecture includes queues that are consumed by server-side components and remote mobile clients. Since migrating the mobile clients to SQS would require code changes and an App Store update, we decided to first migrate the server queues.

Code Migration

Before diving into the code, we also had to consider SQS’s JMS compliance. Thankfully, SQS supports most of the JMS 1.1 specification for queues, click here for more information about using JMS with Amazon SQS.

Given the JMS queue support in SQS, the server-side code changes were relatively painless. So, let’s take a look at them, which include accessing a JMS connection, starting a JMS listener, sending a JMS message, and converting MapMessage to ObjectMessage.

Accessing a JMS connection

The primary difference between instantiating a JMS connection in SQS and ActiveMQ is that SQS requires credentials and the region where the queues are located.

Connection getJMSConnection() throws NamingException, JMSException {
	if(connection == null) {
		String url = "tcp://localhost:61616";
		ActiveMQConnectionFactory factory;
		factory = new ActiveMQConnectionFactory(url);
		connection = factory.createConnection();
	}
	return connection;
}

Connection getSQSConnection() throws JMSException {
	AWSCredentials credentials;
	credentials = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
	AWSStaticCredentialsProvider credProvider;
	credProvider = new AWSStaticCredentialsProvider(credentials);

	Regions region = host.equals(PRODUCTION_HOST) ?
	                            Regions.US_WEST_2 : Regions.US_EAST_2;
	SQSConnectionFactory factory;
	factory = new SQSConnectionFactory(new ProviderConfiguration(),
	        AmazonSQSClientBuilder.standard()
	                .withRegion(region)
	                .withCredentials(credProvider));
	return factory.createConnection();
}

Note: We use the DefaultAWSCredentialsProviderChain class, which is the preferred method for accessing AWS credentials.

Starting a JMS Listener

Unlike ActiveMQ, an SQS JMS listener must listen to a queue that actually exists; therefore, before we start listening, we ensure that the queue exists.

public void start() throws NamingException, JMSException {
	log.info("Starting message listener...");

	connection = getJMSConnection();

	Session session;
	session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

	Destination destination = session.createQueue(queue.getQueueName());
	MessageConsumer consumer = session.createConsumer(destination);
	consumer.setMessageListener(this);

	connection.start();
}

public void start() throws NamingException, JMSException {
    log.info("Starting message listener...");

    connection = getJMSConnection();

    Session session;
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    String queueName = getQueueName();
    if (connection instanceof SQSConnection) {
        //AWS SQS requires queues to be created before first use
        createSQSFifoQueue(queueName, (SQSConnection) connection);
    }

    Destination destination = session.createQueue(queueName);
    MessageConsumer consumer = session.createConsumer(destination);
    consumer.setMessageListener(this);

    connection.start();

    //Give the listener a second to start before proceeding
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        log.error("Thread sleep error", e);
    }
}

public static void createSQSFifoQueue(String queueName, SQSConnection connection) throws JMSException {
    if (queueName.endsWith(".fifo")) {
        // Get the wrapped client
        AmazonSQSMessagingClientWrapper client = connection.getWrappedAmazonSQSClient();

        // Create an Amazon SQS FIFO queue, if it does not already exist
        if (!client.queueExists(queueName)) {
            Map<String, String> attributes = new HashMap<>();
            attributes.put("FifoQueue", "true");
            attributes.put("ContentBasedDeduplication", "true");
            client.createQueue(new CreateQueueRequest().withQueueName(queueName).withAttributes(attributes));
        }
    } else {
        throw new ContextedRuntimeException("Invalid FIFO queue name")
            .addContextValue("queueName", queueName);
    }
}
The queue-existence check was added because Amazon requires queues to be created before first use. The sleep at the end forces the main thread to wait for one second after starting the listener, and createSQSFifoQueue is a new method that creates the FIFO queue before we start listening.

Sending a JMS Message

Sending a message to a FIFO queue requires only one additional line of code to set the message group ID.

// Create the text message
TextMessage message = ...

// To send a message to a FIFO queue, you must set the message group ID.
message.setStringProperty("JMSXGroupID", "Default");

For more information about these changes click here.

Converting MapMessage to ObjectMessage

SQS doesn’t support MapMessage; therefore, we simply refactored by converting our MapMessage objects to HashMap&lt;String, String&gt; instances. Note: The payload of an ObjectMessage must implement Serializable.

//Before: ActiveMQ MapMessage
Message createMessage(Session session) throws JMSException {
        MapMessage mapMessage = session.createMapMessage();
        mapMessage.setString("key0", "value0");
        return mapMessage;
}

//After: Serializable HashMap sent as an ObjectMessage
Message createMessage(Session session) throws JMSException {
        HashMap<String, String> mapMessage = new HashMap<>();
        mapMessage.put("key0", "value0");
        return session.createObjectMessage(mapMessage);
}

Voilà! The next thing we have to do is to migrate our mobile clients using the ActiveMQ Stomp protocol and JMS selectors. Replacing the selector filtering will most likely require architectural changes. In any case, once we’ve completed the mobile migration, I’ll blog about it here too.

To conclude, it’s worth noting that these SQS migration changes should work with any JMS compliant codebase. Let me know if you have any experiences or thoughts to share.


Handling Access to Calendars in iOS 6

I had to laugh at my last post. I went on hiatus in August 2010 and now I’m back, in 2012. I’ve been busy working on Skedi Family Calendar, which aggregates calendar data from different vendors such as Google Calendar, Microsoft Exchange, etc.

Anyway, the latest release of iOS (6.0) isolates a user’s calendars and blocks access by default. This presented a problem for us at Rodax Software because we also support earlier versions of iOS (i.e., 4 and 5) and we wanted to avoid using conditional compilation.

In iOS 6, access to the calendars is controlled through the EKEventStore class. In the past, we’d call alloc/init and that was it. This is unchanged; however, now we need to request permission and, if granted, we can access the user’s calendars and events. Note: This request can only be performed once, which I’ll cover later.

 _store = [[EKEventStore alloc] init];

So, if we need to support earlier versions of iOS, how can we avoid using ugly preprocessor macros? Simple, use the power of Objective-C’s dynamic typing. In iOS 6, the method to request access in EKEventStore is requestAccessToEntityType. Hence, we need to determine if it’s available.

if([_store respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
	//invoke requestAccessToEntityType...
}

If the statement is true, request access:

[_store requestAccessToEntityType:EKEntityTypeEvent
                       completion:^(BOOL granted, NSError *error) {
    //Handle the response here…
    //Note: If you prompt the user, make sure to do it on the main thread
}];


Since you can only request access once, you’ll need to test for the current authorization status first. We decided to create our own status flags that correspond with Event Kit’s, then call authorizationStatus.

/// \brief Proxy for EKAuthorizationStatus in iOS 6
enum SKEKAuthorizationStatus {
    SKEKAuthorizationStatusNotDetermined = 0,
    SKEKAuthorizationStatusRestricted,
    SKEKAuthorizationStatusDenied,
    SKEKAuthorizationStatusAuthorized
};

/// \brief Helper method checks the authorization
/// \return If running in iOS 6 or higher returns the EKAuthorizationStatus; otherwise, returns SKEKAuthorizationStatusAuthorized
- (enum SKEKAuthorizationStatus) authorizationStatus {
    if ([[EKEventStore class] respondsToSelector: @selector(authorizationStatusForEntityType:)]) {
        return [EKEventStore authorizationStatusForEntityType:EKEntityTypeEvent];
    }
    else {
        return SKEKAuthorizationStatusAuthorized;
    }
}

If calendar access was denied before (SKEKAuthorizationStatusDenied), then we prompt the user to enable access by tapping Settings > Privacy > Calendars > Skedi. If permission is undetermined (SKEKAuthorizationStatusNotDetermined), then we instantiate the EKEventStore and request access. If previously granted access, we can safely alloc and init without a care.

So, I hope this helps anyone who is currently working on adapting their Event Kit code for iOS 6. Let me know if you have any feedback.


How to configure TextMate’s SQL bundle on Mac OS X Snow Leopard

TextMate is a great text editor for the Mac. It supports a myriad of programming and scripting languages. However, after I installed it, I was unable to get the SQL bundle to work properly on Mac OS X 10.6.x (Snow Leopard). I blew it off for a while. Then I found this post by 豆皮儿.

The post says to replace the keychain and plist bundles in TextMate’s application bundle. Instead of using the command line, I recommend these steps:

  1. Go to
  2. Download keychain.bundle and plist.bundle.
  3. In the Finder window, navigate to /Applications/, right-click TextMate and choose Show Package Contents.
  4. Navigate to /Contents/SharedSupport/Support/lib/osx.
  5. From your downloads directory, drag the new keychain and plist bundles to the osx directory.
  6. Open TextMate, configure the SQL bundle (SQL > Preferences) and test a query such as “SELECT 1;”

That’s it. Enjoy.



Installing the APR-based Tomcat Native library and enabling SSL

Tomcat 6.x can be turbo-charged by using the Apache Portable Runtime (APR).

The Apache Portable Runtime is a highly portable library that is at the heart of Apache HTTP Server 2.x. APR has many uses, including access to advanced IO functionality (such as sendfile, epoll and OpenSSL), OS level functionality (random number generation, system status, etc), and native process handling (shared memory, NT pipes and Unix sockets). –Apache Tomcat User Guide

The Tomcat native library requires the following three components:

  • APR Library
  • JNI wrappers for APR used by Tomcat (libtcnative)
  • OpenSSL libraries
  1. Download and install the APR 1.4.x library and follow the README instructions. For Mac OS X, I used the following commands from this article.
    # Configure the make file from the download directory
    ./configure
    # Users of 64-bit Java 6 should use the following configure command:
    CFLAGS='-arch x86_64' ./configure
    # Make the library
    make
    # Test the build (Takes a while)
    make test
    # Install APR
    make install
  2. Compile and install the Tomcat native library in the bin directory. Detailed instructions here. For Mac OS X, I used the following commands from this article.
    # Build the make file for Java 5
    ./configure --with-apr=/usr/local/apr --with-ssl=/usr # With SSL
    ./configure --with-apr=/usr/local/apr --without-ssl # Without SSL
    # Some have reported having to use the --with-java-home option even with Java 5
    ./configure --with-apr=/usr/local/apr --with-ssl=/usr --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5 # With SSL
    ./configure --with-apr=/usr/local/apr --without-ssl --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5 # Without SSL
    # Users of 64-bit Java 6 should use the following configure command:
    CFLAGS='-arch x86_64' ./configure --with-apr=/usr/local/apr --with-ssl=/usr/ssl --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.6
    # Make
    make
  3. Install the OpenSSL libraries (if necessary), more details here. It’s already installed on Mac OS X and distributions of Linux.

Okay, if you’re new to OpenSSL, here’s where the missing manual comes in. For testing or development, create self-signed certificates as follows:

openssl req -new -newkey rsa:1024 -nodes -out <tomcat home>/conf/ssl/ca/localhost.csr -keyout <tomcat home>/conf/ssl/ca/localhost.key

Then create a X.509 certificate:

openssl x509 -trustout -signkey <tomcat home>/conf/ssl/ca/ca.key -days 365 -req -in <tomcat home>/conf/ssl/ca/localhost.csr -out <tomcat home>/conf/ssl/ca/localhost.pem

Edit the server.xml file in the conf directory (<tomcat home>/conf). See Tomcat’s SSL documentation for more details.

<!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
<Connector protocol="org.apache.coyote.http11.Http11AprProtocol"
 port="8443" maxThreads="200"
 scheme="https" secure="true" SSLEnabled="true"
 SSLCertificateFile="${catalina.base}/conf/ssl/ca/localhost.pem"
 SSLCertificateKeyFile="${catalina.base}/conf/ssl/ca/localhost.key" />

Shut down and start Tomcat, and you should see the following line in the log:

INFO - Loaded APR based Apache Tomcat Native library 1.1.16.

I hope this helps you smoothly transition to the Tomcat native library.


One Piece of Advice from Seven Investors

This is a belated live blog post for the Early Stage VC and Angel Investor Event I attended yesterday in downtown Seattle. The event was organized by FundingPost and sponsored by Perkins Coie LLP.

In the afternoon, Ben Straughan, a partner at Perkins Coie, moderated a panel discussion with the following VC/angel investors:

  • Petra Franklin, Managing Director, Vault Capital
  • Bill McAleer, Managing Director, Voyager Capital
  • Cathi Hatch, Founder and CEO, ZINO Society
  • Janis Machala, Founder and Managing Partner, Paladin Partners
  • Lucinda Stewart, Managing Director, OVP Venture Partners
  • Bill Bryant, Venture Partner, Draper Fisher Jurvetson
  • Saqib Rasool, CEO and Angel Investor, Conceivian

For their background and expertise, check out their bios here.

There was a lot of useful information, especially for new entrepreneurs and folks considering approaching angel investors. I’m not going to rehash the event. Instead I’m going to focus on one question asked by Ben Straughan: “If you had one thing to say to these entrepreneurs, what would it be?”

Here are their paraphrased responses from my notes:

Pick a good partner. ― Petra Franklin

Persistence and focus. ― Bill McAleer

Listen to your investors. ― Cathi Hatch

Why you? Why are you born to do this? ― Janis Machala

Find two to three CEOs to coach you. ― Lucinda Stewart

It’s about the team. ― Bill Bryant

Raise the least amount possible, build your product fast and keep your budget low. ― Saqib Rasool

Good advice indeed and for those of us with limited resources, Saqib Rasool’s point could be a daily scrum for CEOs:

  • Are we raising the least amount possible?
  • Are we building the product as fast as we can?
  • Are we keeping our budget low?

Do you have any advice to share?


Tip: MySQL Table Naming Across Platforms

If your MySQL development and production environments are Mac OS X or Windows, queries containing all lowercase or uppercase table names will work fine. This is because these platforms are case-insensitive. However, if you deploy to a Unix system, queries referencing table names in the incorrect case will not work. Consequently, adopting a standard naming convention across platforms is the best policy. I decided to go with all lowercase with underscores between words (my_table_name).
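One way to enforce the convention is to normalize identifiers in a single helper; here's a small sketch (the class and method names are mine):

```java
public class TableNames {
    // Converts a CamelCase identifier to lowercase_with_underscores,
    // the convention adopted above for cross-platform MySQL table names
    static String toMySqlName(String name) {
        return name.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(toMySqlName("MyTableName")); // my_table_name
    }
}
```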

For more information here’s a post by Craig Buckler on


RESTful Serialization with Flexjson

XML is too fat for a mobile RESTful API. Therefore, I’m using JavaScript Object Notation (JSON) to exchange data between mobile devices and a cloud service I’m developing. My server environment is J2EE-based, so I chose Flexjson to serialize Java object fields as JSON. This post is a quick overview of my implementation and a lesson learned.
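To illustrate the size difference, here's a quick comparison of the same record serialized both ways (the field names are just examples):

```java
public class PayloadSize {
    public static void main(String[] args) {
        // The same user record as XML and as JSON
        String xml  = "<user><firstName>John</firstName><lastName>Doe</lastName></user>";
        String json = "{\"user\":{\"firstName\":\"John\",\"lastName\":\"Doe\"}}";
        System.out.println(xml.length());   // 64 characters
        System.out.println(json.length());  // 46 characters
    }
}
```

The closing tags alone add up quickly, which matters on a metered mobile connection.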

Flexjson is a lightweight Java library that enables object filtering during serialization. If you have a complex object model, serializing the entire object graph is undesirable. Flexjson allows you to pick and choose which objects or fields to serialize.

Here’s a simple example of excluding a password field from the serialized JSON.

String result =
         new JSONSerializer().
         exclude("password").serialize("user", this);

Client's JSON result:

{
    "user": {
        "class": "com.mycompany",
        "email": "",
        "firstName": "John",
        "lastName": "Doe",
        "phone": "555-1212"
    }
}

The shallow deserialization is limited to: String, Date, Number, Boolean, Character, Enum, Object and null. Subclasses of Object will be serialized, except for Collections and Arrays. Consequently, if the deserializer is unable to construct the object, an exception will be thrown.

All objects are built using an ObjectFactory during deserialization and each object must have a constructor that takes no arguments. Users can write their own factories and the library comes with many factories for types such as bytes, characters, dates, and so on. However, an integer object factory is nonexistent. Here’s mine:

public class IntObjectFactory implements ObjectFactory {

	/* (non-Javadoc)
	 * @see flexjson.ObjectFactory#instantiate(flexjson.ObjectBinder, java.lang.Object, java.lang.reflect.Type, java.lang.Class)
	 */
	public Object instantiate(ObjectBinder context,
                                  Object value,
                                  Type targetType,
                                  Class targetClass) {
		 if( value instanceof Number ) {
	            return ((Number)value).intValue();
	        } else {
	            throw context.cannotConvertValueToTargetType(value, targetClass);
	        }
	}
}
It was so easy to write, I’m not sure why it wasn’t included in the shipped version.

Lastly, here’s how to use the factory during deserialization:

JSONDeserializer<Map<String, List<SomeObject>>> deserializer =
       new JSONDeserializer<Map<String, List<SomeObject>>>();

Map<String, List<SomeObject>> members =
       deserializer.use(Integer.class,
       new IntObjectFactory()).deserialize(returnValue);

For more info check out the Flexjson home page here:
