Spring Boot & Amazon Web Services (EC2, RDS & S3)

This post will take you through a step-by-step guide to building and deploying a simple Java app in the AWS cloud. The app will use a few well known AWS services which I’ll describe along the way. There is quite a bit to cover in this post, so the overview of the AWS services will be light. For those interested in finding out more, I’ll link to the appropriate section of the AWS documentation. Amazon have done a fine job with their documentation, so I’d encourage you to have a read if time permits.

Prerequisites

In order to get the sample app up and running you’ll need access to AWS. If you don’t already have access you can register for a free account which will give you access to a bunch of great services and some pretty generous allowances. I’d encourage you to get an account set up now before going any further.

What will the sample application look like?

The app we’re going to build is a simple customer management app consisting of a Spring Boot REST layer and an AngularJS front end. We’ll deploy the application to AWS and make use of the following services.

  • EC2 – Amazon’s Elastic Compute Cloud provides on-demand virtual server instances that can be quickly provisioned with the operating system and software stack of your choice. We’ll be using Amazon’s own Linux machine image to deploy our application.
  • Relational Database Service – Amazon’s database-as-a-service allows developers to provision Amazon-managed database instances in the cloud. A number of common database platforms are supported, but we’ll be using a MySQL instance.
  • S3 Storage – Amazon’s Simple Storage Service provides simple key/value data storage which we’ll be using to store image files.

We’re going to build a simple CRUD style customer management app to create, view and delete customer details. Below is a high level overview of each of the screens and how they interact with other components.

  • Create customer – An Angular view will capture and post customer data to a Spring Boot managed endpoint. When a customer is added the endpoint will save the customer data to a MySQL database instance on RDS. The customer image will be saved to S3 storage which will generate a unique key and a public URL to the image. The key and public URL will be saved in the database as part of the customer data.

Create Customer View

  • View customer – An Angular view will issue a GET request to an endpoint for a specific customer. The endpoint will retrieve customer data from the MySQL database instance on RDS and return it to the client. The response data will include a publicly accessible URL which will be used to reference the customer image directly from S3 storage.

View Customer

  • View all customers – An Angular view will issue a GET request for all customers to a Spring Boot managed endpoint. Customers will be displayed in a simple table and users will have the ability to view or delete customer rows. The endpoint will retrieve all customer data from the MySQL database instance on RDS and return it to the client. Images will be referenced from S3 in the same way as the View Customer screen.

View All Customers

The first part of this post will focus on building the demo application. In the second part we’ll look at configuring the various services on AWS, running the application locally and then deploying it in the cloud.

Source Code

The full source code for this tutorial is available on github at https://github.com/briansjavablog/spring-boot-aws. You may find it useful to pull the code locally so that you can experiment with it as you work through the tutorial.

Application Structure

Project Structure

In the sections that follow we’ll look at some of the most important components in detail. The focus of this post isn’t Spring Boot, so I won’t describe every class in detail, as I’ve covered quite a bit of this already in a separate post. We’ll focus more on AWS integration and making our app cloud ready.

Domain Model

The domain model for the demo app is very simple and consists of just 3 entities – a Customer, an Address and a CustomerImage. The Customer entity is defined below.

@Entity(name="app_customer")
public class Customer{
  public Customer(){}

  public Customer(String firstName, String lastName, Date dateOfBirth, CustomerImage customerImage, Address address) {
    super();
    this.firstName = firstName;
    this.lastName = lastName;
    this.dateOfBirth = dateOfBirth;
    this.customerImage = customerImage;
    this.address = address;
  }

  @Id
  @Getter
  @GeneratedValue(strategy=GenerationType.AUTO)
  private long id;

  @Setter
  @Getter
  @Column(nullable = false, length = 30)
  private String firstName;

  @Setter
  @Getter
  @Column(nullable = false, length = 30)
  private String lastName;

  @Setter 
  @Getter
  @Column(nullable = false)
  private Date dateOfBirth;

  @Setter
  @Getter
  @OneToOne(cascade = {CascadeType.ALL})
  private CustomerImage customerImage;

  @Setter
  @Getter
  @OneToOne(cascade = {CascadeType.ALL})
  private Address address;
}

Address is defined as follows.

@Entity(name="app_address")
public class Address{
  public Address(){}

  public Address(String street, String town, String county, String postCode) {
    this.street = street;
    this.town = town;
    this.county = county;
    this.postcode = postCode;
  }

  @Id
  @Getter
  @GeneratedValue(strategy=GenerationType.AUTO)
  private long id;

  @Setter
  @Getter
  @Column(name = "street", nullable = false, length=40)
  private String street;

  @Setter
  @Getter
  @Column(name = "town", nullable = false, length=40)
  private String town;

  @Setter 
  @Getter
  @Column(name = "county", nullable = false, length=40)
  private String county;

  @Setter
  @Getter
  @Column(name = "postcode", nullable = false, length=40)
  private String postcode;
}

And finally CustomerImage is defined as follows.

@Entity(name="app_customer_image")
public class CustomerImage {
  
  public CustomerImage(){}

  public CustomerImage(String key, String url) {
    this.key = key;
    this.url =url; 
  }

  @Id
  @Getter
  @GeneratedValue(strategy=GenerationType.AUTO)
  private long id;

  @Setter
  @Getter
  @Column(name = "s3_key", nullable = false, length=200)
  private String key;

  @Setter
  @Getter
  @Column(name = "url", nullable = false, length=1000)
  private String url;

}

Customer Controller

The CustomerController exposes endpoints for creating, retrieving and deleting customers and is called from an Angular front end that we’ll create later.

@RestController
public class CustomerController {
  
  @Autowired
  private CustomerRepository customerRepository;

  @Autowired
  private FileArchiveService fileArchiveService;

  @RequestMapping(value = "/customers", method = RequestMethod.POST)
  public @ResponseBody Customer createCustomer(@RequestParam(value="firstName", required=true) String firstName,
                         @RequestParam(value="lastName", required=true) String lastName,
                         @RequestParam(value="dateOfBirth", required=true) @DateTimeFormat(pattern="yyyy-MM-dd") Date dateOfBirth,
                         @RequestParam(value="street", required=true) String street,
                         @RequestParam(value="town", required=true) String town,
                         @RequestParam(value="county", required=true) String county,
                         @RequestParam(value="postcode", required=true) String postcode,
                         @RequestParam(value="image", required=true) MultipartFile image) throws Exception {

    CustomerImage customerImage = fileArchiveService.saveFileToS3(image); 
    Customer customer = new Customer(firstName, lastName, dateOfBirth, customerImage, new Address(street, town, county, postcode));

    customerRepository.save(customer);
    return customer; 
  }

The code snippet above does a few different things:

  • Injects a CustomerRepository for saving and retrieving customer entities and a FileArchiveService for saving and retrieving customer images in S3 storage.
  • Takes posted form data including an image file and maps it to method parameters.
  • Uses the FileArchiveService service to save the uploaded file to S3 storage. The returned CustomerImage object contains a key and public URL returned from S3.
  • Creates a Customer entity and saves it to the database. Note that the CustomerImage is saved as part of the Customer so that the customer entity has a reference to the image stored on S3.

@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.GET)
public Customer getCustomer(@PathVariable("customerId") Long customerId) {
  
  /* validate customer Id parameter */
  if (null==customerId) {
    throw new InvalidCustomerRequestException();
  }

  Customer customer = customerRepository.findOne(customerId);

  if(null==customer){
    throw new CustomerNotFoundException();
  }

  return customer;
}

The method above provides an endpoint that takes a customer Id via an HTTP GET, retrieves the customer from the database and returns a JSON representation to the client.
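
The InvalidCustomerRequestException and CustomerNotFoundException thrown above aren’t shown in the post. A minimal sketch (assuming @ResponseStatus is used to map each exception to an HTTP error code; the versions in the sample repo may differ) looks like this.

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

/* each exception lives in its own file; @ResponseStatus maps it to an HTTP error */
@ResponseStatus(value = HttpStatus.NOT_FOUND, reason = "Customer not found")
public class CustomerNotFoundException extends RuntimeException {
}

@ResponseStatus(value = HttpStatus.BAD_REQUEST, reason = "Invalid customer request")
public class InvalidCustomerRequestException extends RuntimeException {
}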

@RequestMapping(value = "/customers", method = RequestMethod.GET)
public List<Customer> getCustomers() {
  
  return (List<Customer>) customerRepository.findAll();
}

The method above provides an endpoint for retrieving all customers via an HTTP GET.
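
Once the application is running locally (covered later in the post), this endpoint can be exercised directly; for example, assuming the spring-boot-aws context path used later:

curl http://localhost:8080/spring-boot-aws/customers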

@RequestMapping(value = "/customers/{customerId}", method = RequestMethod.DELETE)
public void removeCustomer(@PathVariable("customerId") Long customerId, HttpServletResponse httpResponse) {
  
  if(customerRepository.exists(customerId)){
    Customer customer = customerRepository.findOne(customerId);
    fileArchiveService.deleteImageFromS3(customer.getCustomerImage());
    customerRepository.delete(customer); 
  }

  httpResponse.setStatus(HttpStatus.NO_CONTENT.value());
}

The method above exposes an endpoint for deleting customers via an HTTP DELETE. The CustomerImage associated with the Customer is used to call the FileArchiveService to remove the customer image from S3 storage. The Customer is then removed from the database and an HTTP 204 is returned to the client.
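
The CustomerRepository injected into the controller isn’t defined in the post; it’s a standard Spring Data repository. A minimal version (a sketch, assuming Spring Data JPA) would look like this.

import org.springframework.data.repository.CrudRepository;

/* Spring Data generates the save/findOne/findAll/exists/delete implementations used by the controller */
public interface CustomerRepository extends CrudRepository<Customer, Long> {
}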

File Archive Service

As mentioned above, we’re going to save uploaded images to S3 storage. Thankfully AWS provides an SDK that makes it easy to integrate with S3, so all we need to do is write a simple Service that uses that SDK to save and retrieve files.

@Service
public class FileArchiveService {
  
  @Autowired
  private AmazonS3Client s3Client;

  private static final String S3_BUCKET_NAME = "brians-java-blog-aws-demo";

  /**
   * Save image to S3 and return CustomerImage containing key and public URL
   * 
   * @param multipartFile the image file uploaded by the client
   * @return CustomerImage containing the generated key and public URL
   * @throws FileArchiveServiceException if the file cannot be saved to S3
   */
  public CustomerImage saveFileToS3(MultipartFile multipartFile) throws FileArchiveServiceException {

    try{
      File fileToUpload = convertFromMultiPart(multipartFile);
      String key = Instant.now().getEpochSecond() + "_" + fileToUpload.getName();

      /* save file */
      s3Client.putObject(new PutObjectRequest(S3_BUCKET_NAME, key, fileToUpload));

      /* get signed URL (valid for one year) */
      GeneratePresignedUrlRequest generatePresignedUrlRequest = new GeneratePresignedUrlRequest(S3_BUCKET_NAME, key);
      generatePresignedUrlRequest.setMethod(HttpMethod.GET);
      generatePresignedUrlRequest.setExpiration(DateTime.now().plusYears(1).toDate());

      URL signedUrl = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

      return new CustomerImage(key, signedUrl.toString());
    }
    catch(Exception ex){ 
      throw new FileArchiveServiceException("An error occurred saving file to S3", ex);
    } 
  }
  • AmazonS3Client is provided by the AWS SDK and allows us to read and write to S3. This component gets the credentials necessary to connect to S3 from aws-config.xml, which we’ll define later.
  • S3_BUCKET_NAME is the name of the S3 bucket that the application will read from and write to. You can think of a bucket as a storage container into which you can save resources. We’ll look at how to define an S3 bucket later in the post.
  • In saveFileToS3 the MultipartFile uploaded from the client is converted to a File and a key is generated from the file name and a timestamp. The combination of file name and timestamp is important so that multiple files can be uploaded with the same name.
  • The putObject call saves the file to the specified bucket using the generated key.
  • Using the bucket name and key to uniquely identify this resource, a pre-signed public facing URL is generated that can later be used to retrieve the image. The expiration is set to one year from today, telling S3 to make the resource available via this public URL for no more than one year.
  • The generated key and public facing URL are wrapped in a CustomerImage and returned to the controller. CustomerImage is saved to the database as part of the Customer persist and is the link between the Customer stored in the database and the customer’s image file on S3. When a client issues a GET request for a specific customer, the public facing URL to the customer image is returned. This allows the client application to reference the image directly from S3.
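
One detail glossed over above is convertFromMultiPart, the private helper on FileArchiveService that turns the uploaded MultipartFile into a File the S3 client can read. It isn’t shown in the snippet; a minimal sketch (assuming a simple temp-file approach, so the version in the sample repo may differ) looks like this.

private File convertFromMultiPart(MultipartFile multipartFile) throws IOException {

  /* write the uploaded bytes to a temp file so the S3 client can read them */
  File file = new File(System.getProperty("java.io.tmpdir"), multipartFile.getOriginalFilename());
  try (FileOutputStream outputStream = new FileOutputStream(file)) {
    outputStream.write(multipartFile.getBytes());
  }
  return file;
}
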
/**
 * Delete image from S3 using specified key
 * 
 * @param customerImage
 */
public void deleteImageFromS3(CustomerImage customerImage){
  
  s3Client.deleteObject(new DeleteObjectRequest(S3_BUCKET_NAME, customerImage.getKey())); 
}

The method above uses the key from CustomerImage to delete the specific resource from the brians-java-blog-aws-demo bucket on S3. This is the key that was used to save the image to S3 in the saveFileToS3 method described above.

Java Resource Configuration for AWS

The AwsResourceConfig class handles configuration required for integration with S3 storage and the MySQL instance running on RDS. The contents of this class are explained in detail below.

@Configuration
@ImportResource("classpath:/aws-config.xml")
@EnableRdsInstance(databaseName = "${database-name:}", 
                   dbInstanceIdentifier = "${db-instance-identifier:}", 
                   password = "${rdsPassword:}")
public class AwsResourceConfig {
}

  • @Configuration indicates that this class contains configuration and should be processed as part of component scanning.
  • @ImportResource tells Spring to load the XML configuration defined in aws-config.xml. We’ll cover the contents of this file later.
  • @EnableRdsInstance is provided by Spring Cloud AWS as a convenient way of configuring an RDS instance. The databaseName, dbInstanceIdentifier and password are defined when setting up the RDS instance in the AWS console. We’ll look at RDS set up later.

XML Resource Configuration for AWS

In order to access protected resources using Amazon’s SDK, an access key and a secret key must be supplied. Spring Cloud AWS provides an XML namespace for configuring both values so that they are available to the SDK at runtime.

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aws-context="http://www.springframework.org/schema/cloud/aws/context"
       xmlns:jdbc="http://www.springframework.org/schema/cloud/aws/jdbc"
       xsi:schemaLocation="http://www.springframework.org/schema/beans 
                           http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
                           http://www.springframework.org/schema/cloud/aws/context
                           http://www.springframework.org/schema/cloud/aws/context/spring-cloud-aws-context-1.0.xsd
                           http://www.springframework.org/schema/cloud/aws/jdbc             
                           http://www.springframework.org/schema/cloud/aws/jdbc/spring-cloud-aws-jdbc-1.0.xsd">
   
   <aws-context:context-credentials>
      <aws-context:simple-credentials access-key="${accessKey:}" secret-key="${secretKey:}"/>
   </aws-context:context-credentials>

   <aws-context:context-resource-loader/>

</beans>

  • The context-credentials element sets the access key and secret key required by the SDK. It’s important to note that these values should not be set directly in your configuration or properties files; they should be passed to the application on start up (via environment or system variables). The secret key, as the name suggests, is very sensitive and if compromised will provide access to all AWS services on your account. Make sure this value is not checked into source control, especially if your code is in a public repository. It’s common for bots to trawl public repositories looking for keys that are subsequently used to compromise AWS accounts.
  • The context-resource-loader element is required to access S3 storage. You’ll remember that we injected an instance of AmazonS3Client into the FileArchiveService earlier. The context-resource-loader ensures that an instance of AmazonS3Client is available with the credentials supplied in context-credentials.

Front End – AngularJS

Now that the core server side components are in place it’s time to look at some of the client side code. I’m not going to cover it in great detail as the focus of this post is integrating with AWS. The AngularJS logic is wrapped up in app.js as follows.

(function() {
    var springBootAws = angular.module('SpringBootAwsDemo', ['ngRoute', 'angularUtils.directives.dirPagination']);
    springBootAws.directive('active', function($location) {
        return {
            link: function(scope, element) {
                function makeActiveIfMatchesCurrentPath() {
                    if ($location.path().indexOf(element.find('a').attr('href').substr(1)) > -1) {
                        element.addClass('active');
                    } else {
                        element.removeClass('active');
                    }
                }

                scope.$on('$routeChangeSuccess', function() {
                    makeActiveIfMatchesCurrentPath();
                });
            }
        };
    });

    springBootAws.directive('fileModel', ['$parse', function($parse) {
        return {
            restrict: 'A',
            link: function(scope, element, attrs) {
                var model = $parse(attrs.fileModel);
                var modelSetter = model.assign;

                element.bind('change', function() {
                    scope.$apply(function() {
                        modelSetter(scope, element[0].files[0]);
                    });
                });
            }
        };
    }]);

    springBootAws.controller('CreateCustomerCtrl', function($scope, $location, $http) {
        var self = this;

        self.add = function() {
            var customerModel = self.model;

            var formData = new FormData();
            formData.append('firstName', customerModel.firstName);
            formData.append('lastName', customerModel.lastName);
            formData.append('dateOfBirth', customerModel.dateOfBirth.getFullYear() + '-' + (customerModel.dateOfBirth.getMonth() + 1) + '-' + customerModel.dateOfBirth.getDate());
            formData.append('image', customerModel.image);
            formData.append('street', customerModel.address.street);
            formData.append('town', customerModel.address.town);
            formData.append('county', customerModel.address.county);
            formData.append('postcode', customerModel.address.postcode);

            $scope.saving = true;
            $http.post('/spring-boot-aws/customers', formData, {
                transformRequest: angular.identity,
                headers: {
                    'Content-Type': undefined
                }
            }).success(function(savedCustomer) {
                $scope.saving = false;
                $location.path("/view-customer/" + savedCustomer.id);
            }).error(function(data) {
                $scope.saving = false;
            });
        };
    });

    springBootAws.controller('ViewCustomerCtrl', function($scope, $http, $routeParams) {

        var customerId = $routeParams.customerId;
        $scope.currentPage = 1;
        $scope.pageSize = 10;

        $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
            $scope.customer = response.data;
            $scope.dataLoading = false;
        }, function onError(response) {
            $scope.customer = response.statusText;
            $scope.dataLoading = false;
        });
    });

    springBootAws.controller('ViewAllCustomersCtrl', function($scope, $http) {

        var self = this;
        $scope.customers = [];
        $scope.searchText = '';

        $scope.dataLoading = true;
        $http.get('/spring-boot-aws/customers').then(function onSuccess(response) {
            $scope.customers = response.data;
            $scope.dataLoading = false;
        }, function onError(response) {
            $scope.customer = response.statusText;
            $scope.dataLoading = false;
        });

        self.add = function(customerId) {
            $scope.selectedCustomer = customerId;
            $scope.customerDelete = true;
            $http.delete('/spring-boot-aws/customers/' + customerId).then(function onSuccess(response) {
                $scope.customers = _.without($scope.customers, _.findWhere($scope.customers, {
                    id: customerId
                }));
                $scope.customerDelete = false;
            }, function onError() {

            });
        };

        $scope.searchFilter = function(obj) {
            var re = new RegExp($scope.searchText, 'i');
            return !$scope.searchText || re.test(obj.firstName) || re.test(obj.lastName);
        };
    });

    springBootAws.filter('formatDate', function() {
        return function(input) {
            return moment(input).format("DD-MM-YYYY");
        };
    });

    springBootAws.config(function($routeProvider) {
        $routeProvider.when('/home', {
            templateUrl: 'pages/home.tpl.html'
        });
        $routeProvider.when('/create-customer', {
            templateUrl: 'pages/createCustomer.tpl.html'
        });
        $routeProvider.when('/view-customer/:customerId', {
            templateUrl: 'pages/viewCustomer.tpl.html'
        });
        $routeProvider.when('/view-all-customers', {
            templateUrl: 'pages/viewAllCustomers.tpl.html'
        });
        $routeProvider.otherwise({
            redirectTo: '/home'
        });
    });

}());

The controller logic handles the 3 main views in the application – create customer, view customer and view all customers.

  • CreateCustomerCtrl uses model data populated in the view to build a FormData object and performs an HTTP POST to the create customer endpoint defined earlier. In the success callback there is a transition to the view customer route, passing the target customer Id in the URL.
  • ViewCustomerCtrl uses the customer Id passed in the URL and issues an HTTP GET to the getCustomer endpoint defined earlier. The response JSON is added to scope for display.
  • ViewAllCustomersCtrl issues an HTTP GET to the getAllCustomers endpoint to retrieve all customers. The response JSON is added to scope for display in a tabular view. The delete method takes the selected customer Id and issues an HTTP DELETE to the removeCustomer endpoint to remove the customer from the database and to remove the uploaded image from S3.

The demo app is now complete, so it’s time to turn our attention to AWS so that we can configure the RDS database instance and S3 resources needed.

Relational Database Service & S3 Storage

In this section you’ll need access to the AWS console. If you haven’t already done so you should register for a free account. We’re going to step through the RDS database instance set up and the creation of a new storage bucket in S3. By the end of this section you should have the application running locally, hooked up to an RDS database instance and S3 storage.

Creating a Security Group to access RDS

Security groups provide a means of granting granular access to AWS services. Before creating a database instance on RDS we need to create a security group that will make the database accessible from the internet. This is required so that the application running on your local machine will be able to connect to the database instance on RDS.
Note: in a production environment your database should never be publicly accessible and should only be accessible to EC2 instances within your Virtual Private Cloud.

1. Log into the AWS console and on the landing page select EC2.

AWS Console – Landing Screen

2. Select Security Groups from the menu on the left hand side.

EC2 Landing Screen

3. Click Create Security Group.

Security Groups Screen

4. Enter a security group name and a meaningful description. Next select the default VPC (denoted with a *). A VPC (Virtual Private Cloud) allows users to configure a logically isolated network infrastructure for their applications to run on. Each AWS account comes with a default VPC, so you don’t have to define one to get started. For the sake of this demo we’ll stick with the default VPC.
Next we’ll specify the rules that define the type of inbound and outbound traffic permitted by the security group. We need to define a single inbound rule that allows TCP traffic on port 3306 (the port used by MySQL). In the rule config below I’ve set the inbound Source to Anywhere, meaning that the database instance will accept connections from any source IP. This is handy if you’re connecting to a development database instance from public Wi-Fi where your IP will vary. In most cases we’d obviously narrow this to a specified IP range. The default outbound rule allows all traffic to all IP addresses.

Create Security Group

5. From the main AWS dashboard click RDS. On the main RDS dashboard click Launch a DB Instance.

RDS Dashboard Landing Screen

6. Select MySQL as the DB engine.

RDS – Select Database Engine

7. Select the Dev/Test option, as we don’t need advanced features like multi-availability-zone deployments for our demo.

Select Database Type

8. In the next section we define the main database instance settings. We’ll retain most of the default settings, so I’ll describe only the most relevant settings below.

  • DB Instance Class – the size of the DB instance to launch. Choose T2 Micro as this is currently the smallest available and is free as part of free tier usage.
  • Multi AZ Deployment – indicates whether or not we want the DB deployed across multiple availability zones for high availability. We don’t need this for a simple demo.
  • Storage Type – the underlying persistence storage type used by the instance. General purpose Solid State Drives are now available by default so we’ll use those.
  • Allocated Storage – the amount of physical storage available to the database. 5GB is sufficient for this demo.
  • DB Instance Identifier – the name that will uniquely identify this database instance. This value is used by the AwsResourceConfig class we looked at earlier.
  • Master Username – the username we’ll use to connect to the database.
  • Master Password – the password we’ll use to authenticate with.

Database Instance Settings

9. Next we’ll configure some of the advanced settings. Again, we’ll be able to use many of the default values here, so I’ll only describe the settings that are most relevant.

  • VPC – Select the default VPC. We haven’t defined a custom VPC as part of this demo so select the default VPC option.
  • Subnet Group – As we’re using the default VPC we’ll also use the default subnet group.
  • Publicly Accessible – Set to true so that we can connect to the DB from our local dev environment.
  • Availability Zone – Select No Preference and allow AWS to decide which AZ the DB instance will reside in.
  • VPC Security Groups – Select the Security Group we defined earlier, in this case demo-rds-sec-group. This will apply the defined inbound and outbound TCP rules to the database instance.
  • Database Name – select a name for the database. This will be used along with the database identifier we defined in the last section to connect to the database.
  • Database Port – Use default MySQL port 3306.
  • The remaining settings in the Database Options section should use the defaults as shown below.
  • Backup – Use default retention period of 7 days and No Preference for backup window. Carefully considered backup settings are obviously very important for a production database but for this demo we’ll stick with the defaults.
  • Monitoring & Maintenance – Again these values aren’t important for our demo app so we’ll use the defaults shown below.

Database Instance Advanced Settings

10. Click Launch DB Instance and wait a few moments while the instance is brought up. Click View Your DB Instance to see the configured instance in the RDS instance screen.

Database Instance Created

11. In the RDS instances view the newly created instance should be displayed with status Available. If you expand the instance view you’ll see a summary of the configuration details we defined above.

Configured DB Instance

Connecting to the database & creating the schema

Now that the database instance is up and running we can connect from the command line. You’ll need a MySQL client locally for this section, so if you don’t already have it installed you can get it here.

  • cd to MY_SQL_INSTALL_DIRECTORY\mysql-5.7.11-winx64\bin.
  • Here is a sample connection command: mysql -u briansjavablog1 -h rds-sample-db2.cg29ws2p7rim.us-west-2.rds.amazonaws.com -p
  • Replace the value following -u with the username you defined as part of the DB instance configuration.
  • Replace the value following -h with the DB host of the instance you created above.  The host is displayed as Endpoint on your newly created DB instance (see screenshot above). Note: The Endpoint displayed in the console includes the port number (3306). When connecting from the command line you should drop this portion of the endpoint as MySQL will use 3306 by default (see screenshot below).
  • When prompted enter the master password that you defined as part of the DB instance configuration above.

Once connected, run the show databases command and you should see the rds_demo database we created in the AWS console. Running use rds_demo and then show tables should return no results, as the schema is empty. We can now create the schema by running SchemaScript.sql from src/main/resources. SchemaScript.sql creates 3 tables that correspond to the 3 JPA entities created earlier and is defined as follows.

DROP SCHEMA IF EXISTS rds_demo;

CREATE SCHEMA IF NOT EXISTS rds_demo DEFAULT CHARACTER SET utf8;

USE rds_demo;
CREATE TABLE IF NOT EXISTS rds_demo.app_address (
  id INT NOT NULL AUTO_INCREMENT,
  street VARCHAR(40) NOT NULL,
  town VARCHAR(40) NOT NULL,
  county VARCHAR(40) NOT NULL,
  postcode VARCHAR(8) NOT NULL,
  PRIMARY KEY (id));

CREATE TABLE IF NOT EXISTS rds_demo.app_customer_image (
  id INT NOT NULL AUTO_INCREMENT,
  s3_key VARCHAR(200) NOT NULL,
  url VARCHAR(1000) NOT NULL,
  PRIMARY KEY (id));

CREATE TABLE IF NOT EXISTS rds_demo.app_customer (
  id INT NOT NULL AUTO_INCREMENT,
  first_name VARCHAR(30) NOT NULL,
  last_name VARCHAR(30) NOT NULL,
  date_of_birth DATE NOT NULL,
  customer_image_id INT NOT NULL,
  address_id INT NOT NULL,
  PRIMARY KEY (id),
  CONSTRAINT FK_ADDRESS_ID
  FOREIGN KEY (address_id)
  REFERENCES rds_demo.app_address (id),
  CONSTRAINT FK_CUSTOMER_IMAGE_ID
  FOREIGN KEY (customer_image_id)
  REFERENCES rds_demo.app_customer_image (id));

Run the script with source ROOT\spring-boot-aws\src\main\resources\SchemaScript.sql. Running show tables again should display 3 new tables as shown below.
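
For reference, the whole session looks something like this (assuming the Windows-style paths above; adjust the separators for your platform):

show databases;
use rds_demo;
show tables;
source ROOT\spring-boot-aws\src\main\resources\SchemaScript.sql
show tables;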

Create Database Schema

Creating an S3 storage bucket

Now that the database instance is up and running we can look at setting up the S3 storage. On the main AWS management console select S3 under the storage and content delivery section. When the S3 management console loads, click Create Bucket.

S3 Management Console

Enter a bucket name and ensure it matches the name specified in FileArchiveService.java that we defined earlier. If you’re running the sample code straight from github then the bucket name should be brians-java-blog-aws-demo as shown below.

Create S3 Bucket

Click Create and the new bucket will be displayed as shown below.

New S3 Bucket

Running the application locally

It’s preferable to run the application locally before attempting to deploy it to EC2, as it helps iron out any issues with RDS or S3 connectivity.

In order to run the application we need to supply application properties on start-up. The properties are defined below and are set based on the values used to create the database instance and the access keys associated with your account.

{
 "database-name": "rds_demo",
 "db-instance-identifier": "rds-demo",
 "rdsPassword": "rds-sample-db",
 "accessKey": "XXXXXXXXXXXXXXXXXXXX",
 "secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}

Boot allows you to supply configuration on the command line via the -Dspring.application.json system property.

java -Dspring.application.json='{"database-name": "rds_demo","db-instance-identifier": "rds-demo","rdsPassword": "rds-sample-db","accessKey": "XXXXXXXXXXXXXXXX","secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}' -jar target/spring-boot-aws-0.1.0.jar

You can also supply configuration via the SPRING_APPLICATION_JSON environment variable. An example of supplying the environment variable and running the application in STS is shown below.
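
If you’re starting the app from a shell instead, the equivalent looks something like this (same placeholder values as before):

export SPRING_APPLICATION_JSON='{"database-name": "rds_demo", "db-instance-identifier": "rds-demo", "rdsPassword": "rds-sample-db", "accessKey": "XXXXXXXXXXXXXXXX", "secretKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}'
java -jar target/spring-boot-aws-0.1.0.jar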

Environment Variable Configuration

At this point you should have the application up and running. When the application starts it will establish a connection with the database instance on RDS. Navigate to http://localhost:8080/spring-boot-aws/#/home and you should see the home screen.

Home Screen

Check that everything is working by clicking the Create New Customer link in the header to add a new customer.

Create Customer View

After saving the new customer you’ll be taken to the view customer screen.

View Customer

Clicking the customer image will open a new tab where you’ll see the image being referenced directly from S3 storage.

Customer Image From S3 Storage

Note that the structure of the URL is as follows.

https://<s3_bucket_name>.s3-<region>.amazonaws.com/<item_key>?AWSAccessKeyId=…

  • Bucket name – the value used to create the bucket in the AWS console.
  • Region – the region associated with your AWS account.
  • Item Key – the key we construct at runtime while saving the customer image. We looked at this logic earlier in the FileArchiveService.

To view all customers click the View All icon at the top of the screen.

View All Customers

Here you can search for customers, view a specific customer or delete a customer using the icons on the right hand side.

Deploying the application to EC2

Once everything is working locally you should be ready to deploy the application to the cloud. This section takes you through a step-by-step guide to creating a new EC2 instance and deploying the application. Let’s get started.

Create a role for EC2

  • Before we create the EC2 instance we’ll create a Role through Identity and Access Management (IAM). The role will be granted to the EC2 instance as part of the set up and will allow access to the database instance on RDS and S3 storage.
  • Log into the AWS console and navigate to Identity and Access Management.

Identity Access Management Console

  • On the left hand side select Roles and click Create New Role

Create New Role

  • Enter the role name rds-and-s3-access-role

Set Role Name

  • Select Role Type Amazon EC2

Select Role Type

  • Attach AmazonS3FullAccess and AmazonRDSFullAccess policies to the role to allow read/write access to RDS and S3.

Attach Policies for RDS and S3

  • Review the role configuration and click Create Role.

Create Role

Creating an EC2 Instance

Now that we’ve created a role that will provide read/write access to RDS and S3, we’re ready to create the EC2 instance.

  • Navigate to the EC2 console and click Launch Instance.
  • Choose the AWS Linux AMI. This is the base server image we’ll use to create the EC2 instance.

Select Amazon Machine Image

  • To keep costs down select t2.micro as the instance type. This is a pretty lightweight instance with limited resources but is sufficient for running our demo app.

Select EC2 Instance Type

  • We only need one instance for the demo and can deploy it to the default VPC. Ensure that Auto-assign Public IP is enabled, either via Use Subnet Setting or explicitly. This is required so that the instance can be accessed from the internet. Select the rds-and-s3-access-role IAM role we created earlier so that RDS and S3 services can be accessed from the instance. The remaining settings can be left at their defaults as shown below. When all values have been selected click Next: Add Storage.

Configure EC2 Instance

  • Use the default storage settings for this instance and click Next: Tag Instance.

Add Storage to EC2 Instance

  • Add a single tag to store the instance name and click Next: Configure Security Group.

Tag EC2 Instance

The security group settings define what type of traffic is allowed to access your instance. We need to configure SSH access to the instance so that we can SSH onto the box to set it up and run the application. We also need HTTP access so that we can access the application once it’s up and running. The Source value specifies which IPs the instance will accept traffic from. I spend quite a bit of time on the train (public Wi-Fi) where the IP address changes regularly, so for handiness I’m leaving the Source open. Ordinarily we’d want to limit this value so that the instance is not open to the world.

Configure EC2 Security Group

  • The final step is to review the configuration settings and click Launch.

Review and Launch Instance

  • You’ll be prompted to select a key pair that will be used to SSH onto the EC2 instance. If you don’t already have a key pair you can create one now.

Select Key Pair

  • Click Launch Instance to display the launch status screen shown below. At this point the instance is being created so you can navigate back to the EC2 instance landing screen.

Launch Instance Summary

  • Returning to the instance landing screen you should see the instance with state initializing. It may take a few minutes before the instance state changes to running and is ready to use.

Instance Initializing

  • When the instance state changes to running the instance is ready to use. Open the description tab and get the public IP that will be used to SSH onto the instance.

Instance Running

  • Open a command prompt and SSH onto the instance with ssh <ip_address> -l ec2-user -i <my_private_key>.pem as shown below.

SSH on to Instance

  • Once we’re connected to the instance we need to do some basic setup. Switch to the root user, remove the default Java 7 JDK that comes bundled with the Amazon Machine Image and install the Java 8 JDK.

sudo su
yum remove java-1.7.0-openjdk -y
yum install java-1.8.0

  • The EC2 instance should now be ready to use, so all that remains is to copy up our application JAR and run it. On the command line, use SCP to copy the application JAR to the EC2 instance.
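
A sample command looks something like this (assuming the key pair created earlier, your instance’s public IP and a JAR built under target/):

scp -i <my_private_key>.pem target/spring-boot-aws-0.1.0.jar ec2-user@<ip_address>:/home/ec2-user/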

Copy Application JAR to EC2 Instance

  • When you SSH onto the EC2 instance the spring-boot-aws-0.1.0.jar should be in /home/ec2-user/. Launch the application by running the same command you ran locally, not forgetting to supply the application config JSON.

Running the Application on EC2 Instance

  • When the application starts you should be able to access it on port 8080 using the public DNS displayed in the Description tab of the EC2 instances page.
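
Assuming the same context path we used locally, the URL will look something like http://<public_dns>:8080/spring-boot-aws/#/home.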

Access Application on EC2

In a production environment we wouldn’t access the application directly on the EC2 instance. Instead we’d configure an Elastic Load Balancer to route and distribute incoming traffic across multiple EC2 instances. That however is a story for another day.

Summary

We’ve covered quite a bit in this post and hopefully provided a decent introduction to building and deploying a simple application on AWS. EC2, RDS and S3 are just the tip of the iceberg in terms of AWS services, so I’d encourage you to dive in and experiment with some of the others. You could even use the demo app created here as a starting point for playing around with the likes of SQS or ElastiCache. As always I’m keen to hear feedback, so if you have any questions on this post or suggestions for future posts please let me know.