Automated Testing Using Gradle, JUnit and DynamoDB Local

Recently, I’ve been working on an open-source project (Todo) that uses HSQL as an in-memory embedded database. However, I’d prefer to migrate to a NoSQL database for production. Our company already uses Amazon Web Services (AWS), so I’ve decided to go with DynamoDB. Since our company is frugal, I wanted to test our code using DynamoDB Local, which is free of charge and doesn’t require an internet connection. However, unlike the automatic configuration of HSQL in a Spring Boot app, getting DynamoDB Local to run properly during tests requires additional configuration in your build script and test cases.

In this post, I’m going to outline the steps for configuring and running a local instance of DynamoDB in your JUnit tests. We’ll begin with the Gradle dependencies and tasks.

Gradle Dependencies

In the Gradle build file, configure the AWS custom repository for DynamoDB Local.

repositories {
    jcenter()

    maven {
        //Local DynamoDB repository
        url "https://s3-us-west-2.amazonaws.com/dynamodb-local/release"
    }
}

Then add the following dependencies:

dependencies {
    // AWS dynamodb
    compile group: 'com.amazonaws', name: 'aws-java-sdk-dynamodb', version: '1.11.213'

    // Use JUnit test framework
    testCompile 'junit:junit:4.12'
    
    //Local DynamoDB
    testCompile "com.amazonaws:DynamoDBLocal:1.+"
    
    //SQLite4Java, required by local DynamoDB
    testCompile group: 'com.almworks.sqlite4java', name: 'sqlite4java', version: '1.0.392'    
}
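
Note: On some setups the sqlite4java native libraries arrive transitively via the DynamoDBLocal dependency; if they don’t, you may need to declare the platform-specific native artifact explicitly so the copy task below has something to copy. A hedged example for macOS (artifact name follows the sqlite4java convention on Maven Central):

//Assumption: only needed if the native library isn’t resolved transitively
testCompile group: 'com.almworks.sqlite4java', name: 'libsqlite4java-osx', version: '1.0.392', ext: 'dylib'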


Gradle Tasks

Next, add the following tasks to the Gradle build file:

//Copy SQLite4Java dynamic libs
task copyNativeDeps(type: Copy) {
    from(configurations.compile + configurations.testCompile) {
        include '*.dll'
        include '*.dylib'
        include '*.so'
    }
    into 'build/libs'
}

test {
    dependsOn copyNativeDeps
    systemProperty "java.library.path", 'build/libs'
}


JUnit Test Case

In your test case, I recommend configuring and running DynamoDB Local before any tests are executed and shutting it down after the tests have completed. You’ll also want to create your tables before executing your tests. This can be achieved by annotating a public static method in your test class with @BeforeClass.

Note: sServer and sClient are static fields in your test class.

@BeforeClass
public static void runDynamoDB() {

  //Need to set the SQLite4Java library path to avoid a linker error
  System.setProperty("sqlite4java.library.path", "./build/libs/");

  // Create an in-memory and in-process instance of DynamoDB Local that runs over HTTP
  final String[] localArgs = { "-inMemory" };

  try {
    sServer = ServerRunner.createServerFromCommandLineArgs(localArgs);
    sServer.start();
  } catch (Exception e) {
    e.printStackTrace();
    Assert.fail(e.getMessage());
    return;
  }

  createAmazonDynamoDBClient();

  createMyTables();
}
	
private static void createAmazonDynamoDBClient() {
  sClient = AmazonDynamoDBClientBuilder.standard()
      .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "us-west-2"))
      .build();
}

private static void createMyTables() {
  DynamoDBMapper mapper = new DynamoDBMapper(sClient);

  //Create the tables with minimal provisioned throughput
  CreateTableRequest tableRequest = mapper.generateCreateTableRequest(MyItemOne.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);

  tableRequest = mapper.generateCreateTableRequest(MyItemTwo.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);

  tableRequest = mapper.generateCreateTableRequest(MyItemThree.class);
  tableRequest.setProvisionedThroughput(new ProvisionedThroughput(1L, 1L));
  sClient.createTable(tableRequest);
}
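
For context, MyItemOne, MyItemTwo, and MyItemThree stand in for your own DynamoDBMapper model classes. A minimal sketch of what one might look like (the table and attribute names here are illustrative assumptions, not from the project):

//Hypothetical model class; adjust the table name and key to your schema
@DynamoDBTable(tableName = "my_item_one")
public class MyItemOne {

  private String email;

  @DynamoDBHashKey(attributeName = "email")
  public String getEmail() { return email; }

  public void setEmail(String email) { this.email = email; }
}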

Then, after the tests have completed, you’ll want to ensure that the database is shut down by annotating a public static method in your test class with @AfterClass.

@AfterClass
public static void shutdownDynamoDB() {
  if (sServer != null) {
    try {
      sServer.stop();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
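
With setup and teardown in place, a test can use the mapper directly. Here’s a minimal sketch, assuming MyItemOne maps email as its hash key (as in the sketch above):

@Test
public void saveAndLoadItem() {
  DynamoDBMapper mapper = new DynamoDBMapper(sClient);

  MyItemOne item = new MyItemOne();
  item.setEmail("john@example.com");
  mapper.save(item);

  MyItemOne loaded = mapper.load(MyItemOne.class, "john@example.com");
  Assert.assertEquals("john@example.com", loaded.getEmail());
}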

I hope these steps help you to more easily implement local automated tests of your DynamoDB code. Let me know if you have any feedback.

References:

Stack Overflow, https://stackoverflow.com/a/39086207/314897

Stack Overflow, https://stackoverflow.com/a/31845157/314897

Testing Excellence, https://www.testingexcellence.com/unsatisfiedlinkerror-sqlite4java-jar-mac-os-x


How to Build a Serverless API With AWS DynamoDB, Lambda, and API Gateway

Imagine running your entire IT department or SaaS without servers. The age of serverless architecture is here now. So, what’s serverless architecture? According to Mike Roberts:

Serverless can also mean applications where some amount of server-side logic is still written by the application developer but unlike traditional architectures is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a 3rd party. One way to think of this is Functions as a service (FaaS). AWS Lambda is one of the most popular implementations of FaaS at present, but there are others. I’ll be using ‘FaaS’ as a shorthand for this meaning of Serverless throughout the rest of this article.

Source: Roberts, M. (2016, August 4). Serverless Architectures.

At Rodax Software, we’re thinking about scrapping the AWS EC2 instances running our Skedi services and migrating them to serverless microservices running on AWS Lambda. That said, it’s important to have some healthy skepticism about making this transition. There are some potentially significant drawbacks to consider, such as vendor lock-in; however, I’m not going to address these in this post. For more information about the drawbacks click here.

Given the scope of transitioning our API and the potential risks, I wanted to get a better idea of the level of effort and overall feasibility. The primary goal of this article is to walk through how to build a serverless RESTful API that’s integrated with Lambda and uses a NoSQL database as its data store. So, I’ve put together this sample FaaS that exposes an AWS API Gateway method, which invokes a Lambda function, which then queries a DynamoDB table. Sounds like fun, right? Anyway, I purposely minimized wizard use because I wanted to have a clear understanding of all the steps involved. Moreover, I think it’s easier to learn by stepping through the AWS Management Console.

In a nutshell, here’s what we’re going to do in this tutorial:

  • Create a table in DynamoDB and populate it with sample data
  • Create a Lambda function that queries the DynamoDB table
  • Create an API Gateway method that invokes the Lambda function

The prerequisites include the following:

  1. An AWS Account and AWS CLI configured. Learn more here.
  2. Experience with the AWS Management Console. Learn more here.
  3. AWS SDK for Java is setup and configured properly (optional). For more information click here.
  4. Git is installed and configured, click here for instructions.
  5. Maven is installed and configured, click here for instructions.
  6. cURL or equivalent to download files using the command line (optional).

Populate DynamoDB

DynamoDB is Amazon’s premier fully managed proprietary NoSQL database available on AWS. SimpleDB is another NoSQL database offered by Amazon. It’s designed for smaller workloads, and I originally intended to use it; however, it’s unsupported in the Lambda execution environment. If you’re interested, I’ve written code that populates my sample data in SimpleDB in Java and C#, here and here, respectively.

In this section, we’re going to create a DynamoDB table and populate it in two different ways. Choose whichever option you prefer or try both.

Option 1: Create DynamoDB Table using Java

  1. Clone the project:
    $ git clone https://github.com/johnboyer/aws-dynamodb-demo-java.git
  2. Review the code that creates the table and populates it with sample data:
    private static void createTable() throws InterruptedException {
        AttributeDefinition[] defs = {
                                    new AttributeDefinition(EMAIL, S)
                                    };
    
        ProvisionedThroughput throughput = new ProvisionedThroughput()
                                                .withReadCapacityUnits(1L)
                                                .withWriteCapacityUnits(1L);
    
        //Email address is the key
        KeySchemaElement emailKey = new KeySchemaElement(EMAIL, KeyType.HASH);
        CreateTableRequest createTableRequest = new CreateTableRequest()
                                .withTableName(TABLE)
                                .withKeySchema(emailKey)
                                .withAttributeDefinitions(defs)
                                .withProvisionedThroughput(throughput);
    
        // Create table if it does not exist yet
        TableUtils.createTableIfNotExists(sDynamoDB, createTableRequest);
        // wait for the table to move into ACTIVE state
        TableUtils.waitUntilActive(sDynamoDB, TABLE);
    }
    
    private static void addSampleItems() {
        // Add an item
        Map<String, AttributeValue> item = createItem("john@example.com", "John", "Doe");
        PutItemRequest putItemRequest = new PutItemRequest(TABLE, item);
        PutItemResult putItemResult = sDynamoDB.putItem(putItemRequest);
        //...
    }
    

For more information about programming DynamoDB, click here.

  3. From the aws-dynamodb-demo-java/dynamo-db directory, package the project:
    $ mvn package
  4. Then run the app:
    $ mvn exec:java

Option 2: Create DynamoDB Table using AWS CLI

  1. In the terminal window, create a directory for the project.
  2. To create the table, we’ll use the following JSON:
    {
        "AttributeDefinitions": [{
            "AttributeName": "email",
            "AttributeType": "S"
        }],
        "TableName": "customer",
        "KeySchema": [{
            "AttributeName": "email",
            "KeyType": "HASH"
        }],
        "ProvisionedThroughput": {
            "ReadCapacityUnits": 1,
            "WriteCapacityUnits": 1
        }
    }
    
  3. Download the table.json file at the prompt (or use your web browser):
    $ curl https://johnboyer.me/files/blog/table.json > table.json
  4. Create the customer table using the AWS CLI:
    $ aws dynamodb create-table --table-name customer --cli-input-json file://table.json
  5. To populate the table, we’ll use the following JSON format:
    {
    "customer": [{
        "PutRequest": {
            "Item": {
            "email": {
                "S": "john@example.com"
            },
            "first_name": {
                "S": "John"
            },
            "last_name": {
                "S": "Doe"
            }
           }
        }
       },
        ...
      ]
    }
    
  6. Download the data.json file:
    $ curl https://johnboyer.me/files/blog/data.json >data.json
  7. Populate the customer table:
    $ aws dynamodb batch-write-item --request-items file://data.json
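
To confirm the items landed, you can scan the table from the CLI (a quick sanity check, not part of the original steps):

    $ aws dynamodb scan --table-name customer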

If you tried both of these methods, which one did you prefer?

Create a Lambda Function

Now that we’ve populated the database, we’re ready to create and configure our Lambda function. In the following steps we’ll:

  • Build and package a Java Lambda function
  • Create, configure, and deploy the function
  • Create and configure the function’s AWS IAM role
  • Verify and test the function
  1. Ensure that Maven is installed and configured on your computer.
  2. Clone the project:
    $ git clone https://github.com/johnboyer/aws-lambda-demo-java.git
  3. From the aws-lambda-demo-java/lambda-demo directory, package the project:
    $ mvn package
  4. In the AWS Management Console, go to the Lambda Console and click Create a Lambda Function > Blank Function > Next.
  5. For the Name, enter getCustomers and Java 8 for the runtime.
  6. Click Upload and navigate to aws-lambda-demo-java/lambda-demo/target/lambda-demo-1.0.0.jar
  7. In the Lambda function handler and role section, set the Handler to me.johnboyer.aws.samples.lambda.DynamoDBFunctionHandler::handleRequest. In the dropdown, click Create new role from templates, set the Role name to customerLambdaRole, set the Policy templates to Simple Microservice permissions, and then click Next and Create Function.

    Note that the Java 8 runtime requires the handler value to have the format package.ClassName::methodName. (A sketch of such a handler appears after this list.)

  8. Then in the IAM Management Console, click Roles > customerLambdaRole > Permissions > Attach Policy > AWSLambdaDynamoDBExecutionRole > Attach Policy.
  9. In the Lambda Console, click Functions > getCustomers > Test and verify if it works.
  10. Bummer, the test fails because of a pesky permissions error.
  11. In the IAM Management Console, click Policies > Create policy > Copy an AWS Managed Policy > Select.
  12. Search for dynamodbread and select AmazonDynamoDBReadOnlyAccess.
  13. In the Policy Document, set the Resource to your DynamoDB customer table’s ARN, e.g., arn:aws:dynamodb:us-west-2:123456789:table/customer, then click Validate Policy and Create Policy. Note the generated name of the policy.
  14. Click Roles > customerLambdaRole > Permissions > AWSLambdaDynamoDBExecutionRole > Detach Policy > Detach.
  15. Then click Attach Policy > Policy Type > Customer Managed > policy from step 13 > Attach Policy.
  16. In the Lambda Console, click Functions > getCustomers > Test and verify that it works. It should, and the output will look like the following:
    "{Items: [{last_name={S: Smith,}, created_date={S: 2017-07-11T00:44:40.209Z,}, first_name={S: Mary,}, email={S: mary@example.com,}}, {last_name={S: Smith,}, created_date={S: 2017-07-11T00:44:40.225Z,}, first_name={S: Bob,}, email={S: bob@example.com,}}, {last_name={S: Doe,}, created_date={S: 2017-07-11T00:44:40.191Z,}, first_name={S: Jane,}, email={S: jane@example.com,}}, {last_name={S: Boyer,}, created_date={S: 2017-07-11T00:44:40.144Z,}, first_name={S: John,}, email={S: john@example.com,}}],Count: 4,ScannedCount: 4,}"
    

Although I’m not going to cover it here in detail, the AWS CLI command for creating our Lambda function would look something like the following:

    $ aws lambda create-function \
        --region us-west-2 \
        --function-name getCustomers \
        --zip-file fileb:///aws-lambda-demo-java/lambda-demo/target/lambda-demo-1.0.0.jar \
        --role arn:aws:iam::123456789:role/service-role/customerLambdaRole \
        --handler me.johnboyer.aws.samples.lambda.DynamoDBFunctionHandler::handleRequest \
        --runtime java8 \
        --profile <adminuser>

Note that the path for the ZIP file can also reference an AWS S3 bucket. For more information about the AWS Lambda CLI, click here.

Anyway, there’s a lot more to learn about Lambda such as versioning, supported event triggers, automated deployment and so on. I hope to cover some of these topics in future posts. In the meantime, I recommend checking out Lambda’s documentation here.

Create an API Gateway Method

  1. In Amazon API Gateway Console, click Create API > New API, enter CustomerSample for the name and click Create API.
  2. Then click Actions > Create Method, click GET in the dropdown menu and the OK checkmark.
  3. For the Integration type, click Lambda Function, your region (e.g., us-west-2), getCustomers for the Lambda Function and click Save.
  4. Review the overview of the API’s invocation sequence.
  5. Then click TEST > Test; the output should be the same as the Lambda function’s.
  6. To invoke the API via a web browser, you’ll need to deploy it. Click Actions, create a new deployment stage, e.g., test, and click Deploy.
  7. Copy and paste the Invoke URL into your web browser, e.g., https://l093tb3xa9.execute-api.us-west-2.amazonaws.com/test
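
Equivalently, you can hit the endpoint from the command line (substitute your own invoke URL):

    $ curl https://l093tb3xa9.execute-api.us-west-2.amazonaws.com/test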

As with Lambda, there’s a lot more to learn about API Gateway, and a good place to start is the developer guide here.

Conclusions

After developing this exercise and writing this post, I’ve come to the realization that there are lots of moving parts between DynamoDB, Lambda, and API Gateway, especially with respect to setup and configuration. Creating the Lambda function in the console alone required sixteen steps. Clearly, we would need to automate these steps to ease the pain of setup and deployment. That will require further learning through prototyping and code katas.

Additionally, I observed that after I deployed my function and waited a day between executions, the latency was very high. This is what’s known as a cold start:

A cold start occurs when an AWS Lambda function is invoked after not being used for an extended period of time resulting in increased invocation latency…
From the data, it’s clear that AWS Lambda shuts down idle functions around the hour mark.

Source: Cui, Y. (2017, July 3). How long does AWS Lambda keep your idle functions around before a cold start?

From an architectural perspective, it will be important to consider the cold start scenario. For example, asynchronous background tasks may be better suited for Lambda than infrequent client API invocations, which usually require low latency. Consequently, when we decide to migrate our first Lambda function into production, we’ll choose a background task that isn’t dependent on low latency.

In the meantime, please let me know if you have any additional thoughts or comments on this tutorial.


Migrating to AWS SQS from ActiveMQ

After years of using ActiveMQ 5.x in our Java-based production system, we decided to migrate to the Amazon Simple Queue Service (SQS). Managing and configuring ActiveMQ no longer seemed to make sense, especially when SQS does it for us. Moreover, SQS recently added FIFO support. Messages in SQS FIFO queues are delivered in order and are guaranteed to be processed only once. These features are exactly what we needed to replace ActiveMQ with SQS and ensure that our backend processes would function as expected.

Considerations

Before migrating to SQS, here are some important considerations:

  • JMS support is limited to queues, i.e., point-to-point.
  • Message size is limited to 256 KB.
  • Message content is restricted to text only, i.e., XML, JSON, and unformatted text. JMS’s MapMessage and StreamMessage interfaces are unsupported.
  • Message durability is constrained to a maximum of 14 days (the default is 4 days).
  • JMS selectors for filtering messages are unsupported.

Overview

My company, Rodax Software, provides a proprietary RESTful API for Skedi—a calendar aggregation app for families and teams. The API has many long-running tasks, so we use queues to manage the workflow while providing fast HTTP responses to our mobile clients.

Our messaging architecture includes queues that are consumed by server-side components and remote mobile clients. Since migrating the mobile clients to SQS would require code changes and an App Store update, we decided to first migrate the server queues.

Code Migration

Before diving into the code, we also had to consider SQS’s JMS compliance. Thankfully, SQS supports most of the JMS 1.1 specification for queues; click here for more information about using JMS with Amazon SQS.

Given the JMS queue support in SQS, the server-side code changes were relatively painless. So, let’s take a look at them: accessing a JMS connection, starting a JMS listener, sending a JMS message, and converting MapMessage to ObjectMessage.

Accessing a JMS connection

The primary difference between instantiating a JMS connection in SQS and ActiveMQ is that SQS requires credentials and the region where the queues are located.

//Before
Connection getJMSConnection() throws NamingException, JMSException {
	if(connection == null) {
		String url = "tcp://localhost:61616";
		ActiveMQConnectionFactory factory;
		factory = new ActiveMQConnectionFactory(url);
		connection = factory.createConnection();
	}
	return connection;
}

//After
Connection getSQSConnection() throws JMSException {
	AWSCredentials credentials;
	credentials = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
	AWSStaticCredentialsProvider credProvider;
	credProvider = new AWSStaticCredentialsProvider(credentials);

	Regions region = host.equals(PRODUCTION_HOST) ? 
	                            Regions.US_WEST_2 : Regions.US_EAST_2;
	SQSConnectionFactory factory;
	factory = new SQSConnectionFactory(new ProviderConfiguration(),
			AmazonSQSClientBuilder.standard()
			.withRegion(region)
			.withCredentials(credProvider));
	return factory.createConnection();
} 

Note: We use the DefaultAWSCredentialsProviderChain class, which is the preferred method for accessing AWS credentials.

Starting a JMS Listener

Unlike ActiveMQ, an SQS JMS listener must listen to a queue that actually exists; therefore, before we start listening, we ensure that the queue exists.

//Before
public void start() throws NamingException, JMSException  {

	log.info("Starting message listener...");

	connection = getJMSConnection();

	Session session;
	session  = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

	Destination destination = session.createQueue(queue.getQueueName());
	MessageConsumer consumer = session.createConsumer(destination);
	consumer.setMessageListener(this);

	connection.start();		
}

//After
public void start() throws NamingException, JMSException {

    log.info("Starting message listener...");

    connection = getJMSConnection();

    Session session;
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

    String queueName = getQueueName();
    if (connection instanceof SQSConnection) {
        //AWS SQS requires queues to be created before first use
        createSQSFifoQueue(queueName, (SQSConnection) connection);
    }

    Destination destination = session.createQueue(queueName);
    MessageConsumer consumer = session.createConsumer(destination);
    consumer.setMessageListener(this);

    connection.start();

    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        log.error("Thread sleep error", e);
    }
}

public static void createSQSFifoQueue(String queueName, SQSConnection connection) throws JMSException {
    if (queueName.endsWith(".fifo")) {
        // Get the wrapped client
        AmazonSQSMessagingClientWrapper client = connection.getWrappedAmazonSQSClient();

        // Create an Amazon SQS FIFO queue, if it does not already exist
        if (!client.queueExists(queueName)) {
            Map<String, String> attributes = new HashMap<>();
            attributes.put("FifoQueue", "true");
            attributes.put("ContentBasedDeduplication", "true");
            client.createQueue(new CreateQueueRequest().withQueueName(queueName).withAttributes(attributes));
        }
    } else {
        throw new ContextedRuntimeException("Invalid FIFO queue name")
            .addContextValue("queueName", queueName);
    }
}

In the updated start() method, the queue-existence check was added because Amazon requires queues to be created before first use, and the one-second sleep forces the main thread to wait briefly after starting the listener. The createSQSFifoQueue() method is new; it creates the FIFO queue before we start listening.

Sending a JMS Message

Sending a message to a FIFO queue requires only one additional line of code to set the message group ID.

// Create the text message
TextMessage message = ...

// To send a message to a FIFO queue, you must set the message group ID.
message.setStringProperty("JMSXGroupID", "Default");

For more information about these changes click here.

Converting MapMessage to ObjectMessage

SQS doesn’t support MapMessage; therefore, we simply refactored by converting our MapMessage objects to HashMap<String, String> instances. Note: An ObjectMessage payload must implement Serializable.

//Before
Message createMessage(Session session) throws JMSException {
        MapMessage mapMessage = session.createMapMessage();
        mapMessage.setString("key0", "value0");
        //...
        return mapMessage;
}
//After
Message createMessage(Session session) throws JMSException {
        HashMap<String, String> mapMessage = new HashMap<>();
        mapMessage.put("key0", "value0");
        //...
        return session.createObjectMessage(mapMessage);
}
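
On the receiving side, the listener’s onMessage can cast the payload back to the map. A minimal sketch (error handling trimmed; log is the listener’s logger, as used above):

//Consumer side: unwrap the ObjectMessage back into the map
public void onMessage(Message message) {
        try {
                if (message instanceof ObjectMessage) {
                        @SuppressWarnings("unchecked")
                        HashMap<String, String> map =
                                (HashMap<String, String>) ((ObjectMessage) message).getObject();
                        //Process the map entries...
                }
        } catch (JMSException e) {
                log.error("Failed to read message", e);
        }
}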

Voilà! The next thing we have to do is migrate our mobile clients, which use the ActiveMQ STOMP protocol and JMS selectors. Replacing the selector-based filtering will most likely require architectural changes. In any case, once we’ve completed the mobile migration, I’ll blog about it here too.

To conclude, it’s worth noting that these SQS migration changes should work with any JMS compliant codebase. Let me know if you have any experiences or thoughts to share.


Handling Access to Calendars in iOS 6

I had to laugh at my last post. I went on hiatus in August 2010 and now I’m back, in 2012. I’ve been busy working on Skedi Family Calendar, which aggregates calendar data from different vendors such as Google Calendar, Microsoft Exchange, etc.

Anyway, the latest release of iOS (6.0) isolates a user’s calendars and blocks access by default. This presented a problem for us at Rodax Software because we also support earlier versions of iOS (i.e., 4 and 5) and we wanted to avoid using conditional compilation.

In iOS 6, access to the calendars is controlled through the EKEventStore class. In the past, we’d call alloc/init and that was it. This is unchanged; however, now we need to request permission, and if granted, we can access the user’s calendars and events. Note: This request can only be performed once, which I’ll cover later.

 _store = [[EKEventStore alloc] init];

So, if we need to support earlier versions of iOS, how can we avoid using ugly preprocessor macros? Simple: use the power of Objective-C’s dynamic typing. In iOS 6, the EKEventStore method to request access is requestAccessToEntityType:completion:. Hence, we need to determine if it’s available.

if([_store respondsToSelector:@selector(requestAccessToEntityType:completion:)]) {
	//invoke requestAccessToEntityType...
}

If the statement is true, request access:

[_store requestAccessToEntityType:EKEntityTypeEvent
                       completion:^(BOOL granted, NSError *error) {
 //Handle the response here…
//Note: If you prompt the user, make sure to call the main thread

}];

Since you can only request access once, you’ll need to test for the current authorization status first. We decided to create our own status flags that correspond to Event Kit’s, and then call our authorizationStatus helper.

/// \brief Proxy for EKAuthorizationStatus in iOS 6
enum SKEKAuthorizationStatus {
    SKEKAuthorizationStatusNotDetermined = 0,
    SKEKAuthorizationStatusRestricted,
    SKEKAuthorizationStatusDenied,
    SKEKAuthorizationStatusAuthorized
};

/// \brief Helper method checks the authorization
/// \return If running in iOS 6 or higher returns the EKAuthorizationStatus; otherwise, returns SKEKAuthorizationStatusAuthorized
- (enum SKEKAuthorizationStatus) authorizationStatus {
    if ([[EKEventStore class] respondsToSelector: @selector(authorizationStatusForEntityType:)]) {
        return [EKEventStore authorizationStatusForEntityType:EKEntityTypeEvent];
    }
    else {
        return SKEKAuthorizationStatusAuthorized;
    }
}

If calendar access was denied before (SKEKAuthorizationStatusDenied), then we prompt the user to enable access by tapping Settings > Privacy > Calendars > Skedi. If permission is undetermined (SKEKAuthorizationStatusNotDetermined), then we instantiate the EKEventStore and request access. If previously granted access, we can safely alloc and init without a care.
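
Putting those branches together, the flow looks something like this sketch (showCalendarSettingsAlert is a hypothetical helper that prompts the user):

//A sketch of the three-branch flow described above
- (void)prepareCalendarAccess {
    switch ([self authorizationStatus]) {
        case SKEKAuthorizationStatusDenied:
        case SKEKAuthorizationStatusRestricted:
            //Ask the user to enable access in Settings > Privacy > Calendars
            [self showCalendarSettingsAlert];
            break;
        case SKEKAuthorizationStatusNotDetermined:
            _store = [[EKEventStore alloc] init];
            [_store requestAccessToEntityType:EKEntityTypeEvent
                                   completion:^(BOOL granted, NSError *error) {
                //Handle the response on the main thread
            }];
            break;
        case SKEKAuthorizationStatusAuthorized:
            _store = [[EKEventStore alloc] init];
            break;
    }
}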

So, I hope this helps anyone who is currently working on adapting their Event Kit code for iOS 6. Let me know if you have any feedback.


How to configure TextMate’s SQL bundle on Mac OS X Snow Leopard

TextMate is a great text editor for Macs. It supports a myriad of programming and scripting languages. However, after I installed it, I was unable to get the SQL bundle to work properly on Mac OS X 10.6.x (Snow Leopard). I blew it off for a while. Then I found this post by 豆皮儿.

The post says to replace the keychain and plist bundles in TextMate’s application bundle. Instead of using the command line, I recommend these steps:

  1. Go to http://svn.textmate.org/trunk/Support/lib/osx
  2. Download keychain.bundle and plist.bundle.
  3. In the Finder window, navigate to /Applications/TextMate.app and right-click Show Package Contents.
  4. Navigate to /Contents/SharedSupport/Support/lib/osx.
  5. From your downloads directory, drag the new keychain and plist bundles to the osx directory.
  6. Open TextMate, configure the SQL bundle (SQL > Preferences) and test a query such as “SELECT 1;”

That’s it. Enjoy.



Installing the APR-based Tomcat Native library and enabling SSL

Tomcat 6.x can be turbo-charged by using the Apache Portable Runtime (APR).

The Apache Portable Runtime is a highly portable library that is at the heart of Apache HTTP Server 2.x. APR has many uses, including access to advanced IO functionality (such as sendfile, epoll and OpenSSL), OS level functionality (random number generation, system status, etc), and native process handling (shared memory, NT pipes and Unix sockets). –Apache Tomcat User Guide

The Tomcat native library requires the following three components:

  • APR Library
  • JNI wrappers for APR used by Tomcat (libtcnative)
  • OpenSSL libraries
  1. Download and install the APR 1.4.x library and follow the README instructions. For Mac OS X, I used the following commands from this article.
    # Configure the make file from the download directory
    ./configure
    # Users of 64-bit Java 6 should use the following configure command:
    CFLAGS='-arch x86_64' ./configure
    # Make the library
    make
    # Test the build (Takes a while)
    make test
    # Install APR
    make install
  2. Compile and install the Tomcat native library in the bin directory. Detailed instructions here. For Mac OS X, I used the following commands from this article.
    # Build the make file for Java 5
    ./configure --with-apr=/usr/local/apr --with-ssl=/usr # With SSL
    ./configure --with-apr=/usr/local/apr --without-ssl # Without SSL
    
    # Some have reported having to use the --with-java-home option even with Java 5
    ./configure --with-apr=/usr/local/apr --with-ssl=/usr --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5 # With SSL
    ./configure --with-apr=/usr/local/apr --without-ssl --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.5 # Without SSL
    
    # Users of 64-bit Java 6 should use the following configure command:
    CFLAGS='-arch x86_64' ./configure --with-apr=/usr/local/apr --with-ssl=/usr/ssl --with-java-home=/System/Library/Frameworks/JavaVM.framework/Versions/1.6
    
    # Make
    make
  3. Install the OpenSSL libraries (if necessary), more details here. They’re already installed on Mac OS X and most Linux distributions.

Okay, if you’re new to OpenSSL, here’s where the missing manual comes in. For testing or development, create self-signed certificates as follows:

openssl req -new -newkey rsa:1024 -nodes -out <tomcat home>/conf/ssl/ca/localhost.csr -keyout <tomcat home>/conf/ssl/ca/localhost.key

Then create an X.509 certificate, signing the request with the key you just generated:

openssl x509 -trustout -signkey <tomcat home>/conf/ssl/ca/localhost.key -days 365 -req -in <tomcat home>/conf/ssl/ca/localhost.csr -out <tomcat home>/conf/ssl/ca/localhost.pem

Edit the server.xml file in the conf directory (<tomcat home>/conf). See Tomcat’s SSL documentation for more details.

<!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
<Connector protocol="org.apache.coyote.http11.Http11AprProtocol"
 port="8443" maxThreads="200"
 scheme="https" secure="true" SSLEnabled="true"
 SSLCertificateFile="${catalina.base}/conf/ssl/ca/localhost.pem"
 SSLCertificateKeyFile="${catalina.base}/conf/ssl/ca/localhost.key"
 SSLProtocol="TLSv1"/>

Shut down and restart Tomcat, and you should see the following line in the log:
INFO - Loaded APR based Apache Tomcat Native library 1.1.16.
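
To verify the SSL connector end to end, you can hit the port with curl (the -k flag skips validation of the self-signed certificate):

curl -k https://localhost:8443/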

I hope this helps you smoothly transition to the Tomcat Native library.
