Book Summary: Designing Data Intensive Applications

Designing Data-Intensive Applications by Martin Kleppmann is a great book about designing software applications with data at their core. Below are my notes about the book:

  • Reliable, Scalable, and Maintainable Applications:
    • Reliable means a system should continue to work correctly even when there’s a fault or other difficulty. To achieve reliability, we add redundancy to the system, i.e., duplicates of system components.
    • Scalable means a system should be able to handle the growth of the usage, data, or complexity.
    • Maintainable means a system should be flexible to support existing and new use cases.
  • Data Models and Query Languages:
    • SQL is the most common query language.
    • Object-Relational Mismatch: application code is generally written in object-oriented programming languages, so we need to translate data between the database’s tabular format and the application’s object format. ORM tools like Hibernate help simplify this translation.
    • Which data model is appropriate for the application: if the application’s data forms a tree of one-to-many relationships, a document data model may make more sense. The document model is faster for writes and suits data with few connections to other datasets. A graph database is suitable for many-to-many relationships.
    • Cypher Query Language: a query language created for the Neo4j graph database.
  • Storage and Retrieval:
    • An index is an additional structure derived from the primary data. Databases leave the decision of what to index to application developers.
    • SSTables and LSM-Trees: a Sorted String Table (SSTable) stores its sequence of key-value pairs sorted by key. LSM-Trees: yet to add notes about it.
    • B-Trees: One place per key. I am yet to add more notes about it.
    • Row storage: all values of a data row are stored together. This is the typical layout in RDBMS systems.
    • Column storage: in column stores, data rows are not stored together. Instead, the values of each column are stored together. An example of a columnar database is Snowflake.
    • OLTP (Online Transaction Processing) databases: designed for low-latency transactional needs. Examples are RDBMS systems like MySQL and Oracle.
    • OLAP (Online Analytical Processing) databases: designed for analytical queries over large datasets. Examples are HBase, Hive, and Spark.
  • Encoding and Evolution: yet to add notes about it.
    • Rolling upgrade: rolling out a change in production to a few nodes at a time.
    • Backward compatibility means newer code can read the data that was written by older code. Forward compatibility means older code can read the data that was written by newer code.
    • Encoding/Decoding: converting in-memory data to a sequence of bytes is encoding (also called serialization or marshalling). Decoding is the opposite.
  • Distributed Data:
    • Scaling to higher load:
      • The simplest approach is vertical scaling (scaling up).
      • Shared-nothing architectures: also called horizontal scaling (scaling out).
      • Replication versus Partitioning: Replication means keeping a copy of the data on several nodes. Partitioning means splitting a big database into smaller subsets called partitions.
  • Replication:
    • For this chapter, we assume the dataset is small enough that each machine (node) can hold a copy of the entire dataset.
    • Different approaches to replicate datasets:
      • Single-leader replication: an active/passive replication. One node is designated as the leader; the other replicas are known as followers. Whenever new data is written to the leader, the change is propagated to the followers in the order it was applied. The leader is responsible for all writes. Either the leader copies data to the other replicas directly, or some replicas may relay the data to other replicas.
      • Multi-leader replication: in this configuration, we can have one leader per datacenter. A multi-leader configuration has these advantages over a single-leader configuration: better write performance, tolerance of datacenter outages, and tolerance of network problems. The big disadvantage of the multi-leader approach is write conflicts; there are several ways to handle them.
      • Leaderless replication: the client (or a coordinator node) is responsible for replicating data to all replicas, and also for resolving conflicts. An example database is Amazon’s Dynamo.
  • Partitioning:
    • The main reason to partition data is scalability.
    • The book covers partitioning strategies in detail. I am yet to understand the details. As I learn, I will add notes about it.
    • Partitioning is also called data splitting or sharding.
    • Skewed means some partitions have more data than others. A partition with a relatively higher load is called a hot spot.
  • Transactions:
    • A transaction is a way to ensure that all involved reads and writes either succeed together or fail together.
    • The book covers ACID and two-phase commit in detail. I am yet to understand the details. As I learn, I will add notes about it.
    • ACID properties:
      • Atomicity means all-or-nothing execution of the grouped actions.
      • Consistency means the data in the database stays correct per the defined invariants; for example, no foreign key violations.
      • Isolation means concurrently running transactions are isolated from each other (ideally, as if they had run serially).
      • Durability means the database guarantees that committed data will not be lost.
    • Reading data that has not yet been committed is a dirty read. Overwriting data that has not yet been committed is a dirty write.
  • Troubles with distributed systems: this chapter is about the kinds of faults that occur in distributed systems and ways to cope with them.
  • Consistency and consensus: this chapter is about maintaining consistency of the data.
  • Derived data:
    • The source of truth holds the authentic dataset.
    • Derived data is redundant data, derived from the source of truth. A cache is an example of derived data.
  • Batch processing:
    • Services (online systems) are real-time systems that respond to requests as soon as possible.
    • Batch processing jobs run at scheduled times.
    • Stream processing systems (or near realtime systems) process the data as they receive the inputs.
  • Stream processing: a stream refers to data that is made available incrementally. Stream processing processes the data in near real time.
  • Future of Data Systems: I am yet to understand the details. As I learn, I will add notes about it.
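
The SSTable note above can be illustrated with a small sketch. This is a hypothetical toy, not code from the book (the class and method names are my own): two sorted segments are merged compaction-style, with values from the newer segment winning on duplicate keys, which is the core idea behind LSM-tree compaction.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy sketch of SSTable-style compaction (names are hypothetical).
public class SstableMerge {

    // Build a sorted segment from alternating key/value arguments.
    public static TreeMap<String, String> segment(String... kv) {
        TreeMap<String, String> m = new TreeMap<>();
        for (int i = 0; i < kv.length; i += 2) m.put(kv[i], kv[i + 1]);
        return m;
    }

    // Compaction step: merge an older and a newer sorted segment into one
    // sorted segment; the newer segment's values shadow the older ones.
    public static TreeMap<String, String> merge(SortedMap<String, String> older,
                                                SortedMap<String, String> newer) {
        TreeMap<String, String> merged = new TreeMap<>(older);
        merged.putAll(newer);
        return merged;
    }
}
```

Because both inputs and the output stay sorted by key, a real implementation can stream through the segments sequentially instead of materializing maps in memory.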

As I learn more, I will update this page. Thank you.

Book Summary: The Art Of Scalability

The Art of Scalability by Martin Abbott and Michael Fisher is a great foundational book on software systems design. Below are my notes for this book:

1. Impact of people and leadership on scalability

People are the most important piece of the scale puzzle. Leadership is about creating a vision; management is about measurement and the achievement of goals.

2. Roles for the scalable technology organization:

A common cause of scalability and availability failures is a lack of clarity about people’s responsibilities. Overlapping responsibility creates wasted effort and bad conflict. To avoid confusion and ambiguity of ownership, the author suggests creating a RASCI matrix with clear, single ownership of each item.

3. Design organizations:

  • Building a great team: a good team size is one that can be fed by two large pizzas. A team should have a mix of people with varied experience and diversity. Too large a team causes a loss of productivity.
  • Organizational types are functional, matrix, and agile. In a functional organization, each team has one type of role. In a matrix organization, a project manager builds a temporary, project-specific team from members of different teams. In an agile organization, all required roles are within the same team. Agile organizations increase innovation by enabling teams to bring a product to market quickly.
  • Conflicts are inevitable.
  • Good conflicts: why should we do it?
  • Bad conflicts: who will do what?
  • A team should have members of different experience levels. That helps driving innovation.

4. Leadership 101:

  • Leadership is a pull activity. Management is a push activity. Management measures.
  • Getting feedback from the team and improving goes a long way.
  • Act and behave ethically and do not take advantage of your position of authority.
  • Be the type of person who thinks first about how to create stakeholder value rather than personal value.
  • Mission First, People Always. 

5. Management 101:

  • Management is about measuring. Leadership pulls and management pushes.
  • AKF 5-95 Rule:
    • Spend 5% of the time building a plan.
    • Spend 95% of the time planning for contingencies when things don’t go the way you expect.

6. Relationship, Mindset, and the business case:

Both business and technology leaders should develop knowledge of each other’s areas.

7. Why processes are critical to scale:

Processes are a critical part of scaling an application. If we find ourselves constantly managing people through repetitive tasks, it’s a sign that we should introduce a process. Every process should have an owner assigned to it.

8. Incidents and problems:

Incidents are issues in the production environment; problems are the underlying causes of incidents. For example, a slowdown of a data transfer is an incident, and whatever caused the delay is the problem. While managing incidents and problems, try to keep people separate from issues. Quarterly incident reviews and post-mortem processes are important for improving the overall process.

9. Managing crisis and escalations:

  • A crisis can harm a business severely.
  • We must determine the unique crisis threshold for the business.
  • A person managing a crisis should be able to take charge of the situation. This person should stay calm on the inside while being persuasive on the outside, and should keep the business informed about the crisis. Set up war rooms as required.

10. Controlling change in production environments:

We should plan quarterly or annual reviews of changes. For each change, we should know what its function is and how it can be validated.

11. Determining headroom for applications:

The purpose of this process is to assess how long the application can keep serving customers before it starts failing. Headroom helps in product planning and hiring. A simple general rule is to use no more than 50% of the application’s capacity.
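
As a rough sketch of the idea (the function and all numbers below are hypothetical, not from the book): headroom is what remains of the usable capacity, under the 50% rule, after subtracting current usage and the growth expected over the planning period.

```java
// Hypothetical headroom sketch: usable capacity is capped at an ideal usage
// factor (the 50% rule); subtracting current usage and expected growth
// leaves the headroom available for the planning period.
public class Headroom {
    public static double headroom(double maxCapacity, double idealUsageFactor,
                                  double currentUsage, double expectedGrowth) {
        return maxCapacity * idealUsageFactor - currentUsage - expectedGrowth;
    }
}
```

For example, a system rated for 1,000 requests/second, currently serving 300 with 100 more expected, would have 1000 * 0.5 - 300 - 100 = 100 requests/second of headroom left.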

12. Establishing architectural principles:

Make sure principles follow the SMART guidelines: Specific, Measurable, Achievable, Realistic, and Testable. Below are the most widely adopted principles:

  • N+1 Design: anything we develop has at least one additional instance in the event of failure. Apply the rule of three: build one for you, one for the customer, and one to fail.
  • Design for rollback: Ensure the product/application is backward compatible.
  • Design to be disabled: design the service/application in a way that it can be marked down or disabled.
  • Design to be monitored: design with the monitoring mindset.
  • Design for multiple live sites: design to deploy from multiple geographical sites.
  • Use mature technologies that are well known.
  • Asynchronous Design. Use synchronous design only when it’s absolutely necessary.
  • Stateless systems. Use state only when the business requires it.
  • Scale out, not up. Forcing transactions through a single person, computer, or process is a recipe for disaster.
  • Design for at least two axes of scale. Always think about how we will execute the next set of horizontal splits before the need arises.
  • Buy when non-core. Build things only when you are really good at them.
  • Use commodity hardware. Cheaper is better.
  • Build small, release small, fail fast.
  • Isolate faults.
  • Automate over people. Never rely on people to do something that can be automated.

Keep the number of principles small enough for the team to memorize and actually use. Do not have more than 15.

13. JAD and ARB:

  • JAD (Joint Architecture Design) is a process wherein all engineering teams design new functionality together, in a way that is consistent with the organization’s architecture principles.
  • ARB (Architecture Review Board) is a review board that ensures all principles are incorporated and best practices have been applied. For example, at one of my previous companies, a design team ensured that all teams implemented the architecture principles.
  • For JAD and ARB details, read the book in detail.

14. Agile architecture design:

Agile teams should act autonomously. JAD and ARB processes ensure a cross-functional design of the services.

15. Build versus Buy:

Use cost and strategy-centric approaches. Use the checklist mentioned in the book to determine a build versus buy decision.

16. Determining Risk:

The first approach to measuring risk is the gut-feel method, which is very fast. The second is the traffic light method: we break the action down into its smallest components and assign a risk level to each (green, yellow, or red).

This covers about half of the book. As I learn more, I will update this page.

How to decide what skills to work on?

In life, there are situations in which we’re stuck with multiple choices. Sometimes we’re overwhelmed by the many things at hand. For example, I am interested in reading many books, all at the same time, and sometimes I’m torn between one topic and another. One side of my brain suggests working on something that will help my career; the other side suggests picking the item I enjoy the most. How do we choose in such a situation? Here are some options I have explored so far:

  1. Do nothing. Just do the minimum required things for the day.
  2. Work on items that will add value to the career. Pick items that will make me a better professional. For example, if I am a technical professional, sharpening that technical skill will help my career the most.
  3. Work on items that I enjoy the most. These items may not directly add value to my career. For example, I am interested in reading a book about human psychology because I am deeply interested in learning about humans.
  4. Find a forced balance: pick a career-related item and an interest-related item at the same time, and divide the week’s time accordingly.
  5. After doing the minimum required work, work on the items you enjoy most. Then, slowly, integrate less enjoyable items, but only if you find them interesting. For example, I can pick a human psychology book; later, I might find a technical topic interesting enough to add to my to-do list.

Personally, I like option 5. It gives me the freedom to approach things according to my interests, and slowly integrating less enjoyable items works best for me.

Some may ask how to find one’s interests. I believe selecting values helps in determining interests. I encourage all of us to decide on one or two values for life, to bring clarity to it.

Thank you for reading. If you have any suggestions for me, please share.

Learnings from the book, How to be an Adult

How to Be an Adult is a book by David Richo offering psychological and spiritual help. If anyone is looking for a reference book about leading life in a more meaningful way, I recommend this book.

Here are my notes for this book:

  • Growing pains and growing up: if our childhood was challenging, we need to mourn it and let it go. The book suggests that current painful situations have a direct connection to past painful situations. The book suggests ways to do grief work: go into the past problems, relive those moments while imagining how we took care of ourselves, and forgive the people responsible. Here is my personal example:
    • I feel angry at people who were responsible for some personal situations.
    • I am thankful that I began to learn how to stand up for myself. I managed myself very well. I took care of myself independently.
    • I imagine being assertive in my childhood.
    • I forgive everyone who did not stand up for me.
    • I am abundantly taking care of myself.
  • Assertiveness skills: the book provides suggestions on how we can be assertive in a healthy way.
  • Fear: every fear involves something we have trouble accepting. The author suggests a three-step process for working on fears: admit/acknowledge the fear, don’t suppress it, and act on it.
  • Anger: the author suggests distinguishing anger from drama. Work on anger with a three-step process: understand what happened, what I believe, and what I feel.
  • Guilt: Guilt is a belief or judgment.
  • Values & self-esteem: Living the values can help in building self-esteem.
  • Boundaries in relationships: boundaries help create healthy interdependence. Ask directly for what you want.
  • Intimacy: The author suggested helpful tips about maintaining an intimate relationship.
  • Integration: We’re complete NOW. Do not try to be perfect. Let others know that sometimes you succeed and sometimes you fail. When you fail, you learn from it and try to do better next time.
  • Shadow: the author describes two shadows: negative and positive. The negative shadow is composed of unaccepted defects that we strongly condemn in others. The positive shadow is composed of good qualities that we strongly admire in others.
  • Dreams and Destiny: The author suggests to learn from our dreams.
  • Ego/Self: Meditation is helpful to accept the present situation.
  • Love: Love lets go and never clings or controls.

As I learn more, I’ll update this blog. Thank you!

Strength app part 6: Terraform script

This is part 6 of the application development series; refer to part 5 for the previous information. On our strength application, we wanted to enable an https certificate. As it is for learning purposes, we wanted to keep the cost low.

For the deployment, we chose to write Terraform scripts. Terraform makes it easy to create, edit, and delete the AWS environment; for example, if we want to delete the AWS dev infrastructure, we can do so using the Terraform scripts. We arranged the scripts as below:

  1. A script to create the ECS cluster and an EC2 instance.
  2. A script to associate an Elastic IP with the created EC2 instance.
  3. A script to set up the databases. We created one container to create the database, one to restore it, and another to take a database backup once a day.
  4. A script to set up the Java application. We created one container for the Spring Boot application and another to copy the SSL certificates.

As we learn more, we’ll share more.

Common Design Patterns Overview

In this article, we’ll go through an overview of some common design patterns.

Circuit Breaker: this pattern helps manage calls from one service to another. It has three states:

  • Open: Calls from one service to another service are not allowed.
  • Closed: Calls from one service to another service are allowed.
  • Half-Open: A few calls from one service to another service are allowed but not all calls are allowed.

There are two well-known circuit breaker implementations for Java: Hystrix and Resilience4j.
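
The three states above can be sketched as a tiny state machine. This is a toy of my own, not the Hystrix or Resilience4j API (those libraries also handle timing, metrics, and thread safety): the breaker opens after a failure threshold, allows a trial call once a cool-down elapses, and closes again on success.

```java
// Minimal circuit breaker sketch (hypothetical class, not a library API).
public class CircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private int failures = 0;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public State state() { return state; }

    // OPEN blocks all calls; CLOSED and HALF_OPEN let calls through.
    public boolean allowRequest() {
        return state != State.OPEN;
    }

    public void recordFailure() {
        failures++;
        if (failures >= failureThreshold) state = State.OPEN;
    }

    public void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    // Invoked when the cool-down period has elapsed: permit a trial call.
    public void coolDownElapsed() {
        if (state == State.OPEN) state = State.HALF_OPEN;
    }
}
```

A real implementation would trigger `coolDownElapsed` from a timer and limit how many trial calls are allowed while half-open.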

Bulkhead: it limits the maximum number of concurrent callers that can use a service.
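
The bulkhead idea can be sketched with a plain `Semaphore` capping concurrency. This is a hypothetical illustration, not the Resilience4j bulkhead API: a call either acquires a permit or is rejected immediately.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Bulkhead sketch (hypothetical class): at most maxConcurrent callers run
// the protected task at once; extra callers get an immediate fallback.
public class Bulkhead {
    private final Semaphore permits;

    public Bulkhead(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    public <T> T call(Supplier<T> task, T rejectedValue) {
        if (!permits.tryAcquire()) return rejectedValue; // reject, don't queue
        try {
            return task.get();
        } finally {
            permits.release();
        }
    }
}
```

Rejecting instead of queueing is what keeps one overloaded dependency from dragging down the whole service.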

Backpressure: We will add details of it later.

Bloom filters: a Bloom filter is a data structure for quickly testing whether an element is in a data set, with a tunable level of certainty: it can return false positives but never false negatives. Guava provides a well-known Java implementation of Bloom filters.
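
To make the idea concrete, here is a from-scratch toy (my own sketch, not the Guava API): k bit positions are derived from the element with differently seeded hashes; membership requires all k bits to be set, so a `false` answer is definite while a `true` answer is only probable.

```java
import java.util.BitSet;

// Toy Bloom filter (hypothetical class; a real one would size the bit array
// and hash count from the expected element count and target error rate).
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit index from the element's hash and a seed.
    private int index(String element, int seed) {
        int h = element.hashCode() * 31 + seed * 0x9E3779B9;
        return Math.floorMod(h, size);
    }

    public void add(String element) {
        for (int i = 0; i < hashes; i++) bits.set(index(element, i));
    }

    // false means definitely absent; true means probably present.
    public boolean mightContain(String element) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(element, i))) return false;
        }
        return true;
    }
}
```

Note the asymmetry: every added element is always reported as present (no false negatives), while an absent element is only occasionally reported as present (a false positive).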

HyperLogLog: a data structure that provides a probabilistic estimate of the cardinality of a data set. Say we want to know how many unique visitors came to a mall; a HyperLogLog lets us estimate that efficiently, using very little memory.

Gang of Four design patterns: they fall into three types: structural, creational, and behavioral. In total there are 23 patterns.

References:

Bloom filters: https://richardstartin.github.io/posts/building-a-bloom-filter-from-scratch

HyperLogLog: https://www.baeldung.com/java-hyperloglog

Common Java libraries

Lombok API: this is a very helpful library that reduces boilerplate code. If we use Lombok, we don’t have to write the code for getters, setters, constructors, equals, hashCode, and more.

Resilience4j API: a fault tolerance library designed for functional programming. One example use case is the rate limiter, which limits the maximum number of requests served by an API in a defined time period. Other examples are:

  • Concurrency control using bulkhead module
  • Fault tolerance using the retry module

Hystrix API: Hystrix API can help make a service fault tolerant and resilient.

Javatuples: an API that allows us to work with tuples. A tuple is an ordered sequence of objects of possibly different types; for example, a tuple may contain an integer, a string, and an arbitrary object.

Javassist, CGLIB, and ASM: these are APIs for manipulating Java bytecode.

P6Spy: a library that allows logging of database operations in real time.

Java Transaction Management: a transaction is a series of actions that must complete as a unit. Java provides multiple ways to control transactions: JDBC, JPA, JMS, global transactions, the Java Transaction API (JTA), the Java Transaction Service (JTS), and other related mechanisms.
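
To illustrate the all-or-nothing semantics these transaction mechanisms provide, here is an in-memory toy of my own (real code would use JDBC/JPA/JTA transactions): a transfer either fully commits or is rolled back to the prior snapshot.

```java
import java.util.HashMap;
import java.util.Map;

// Toy transaction sketch over an in-memory account map (hypothetical class).
public class TinyTx {
    private final Map<String, Integer> accounts = new HashMap<>();

    public void open(String name, int balance) {
        accounts.put(name, balance);
    }

    public int balance(String name) {
        return accounts.get(name);
    }

    // Transfer atomically: on any failure, restore the snapshot (rollback).
    public boolean transfer(String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(accounts);
        try {
            int fromBalance = accounts.get(from) - amount;
            if (fromBalance < 0) throw new IllegalStateException("insufficient funds");
            accounts.put(from, fromBalance);
            accounts.put(to, accounts.get(to) + amount);
            return true; // commit
        } catch (RuntimeException e) {
            accounts.clear();
            accounts.putAll(snapshot); // rollback: no partial update survives
            return false;
        }
    }
}
```

The snapshot-and-restore step is what makes the transfer atomic: a reader never observes the debit without the matching credit.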

Strength app part 5: Enable https on AWS

This is part 5 of the application development series; refer to part 4 for the previous information. On our strength application, we wanted to enable an https certificate. As it is for learning purposes, we wanted to keep the cost low.

Here were our options:

  • Enable AWS provided https option.
  • Get a free https certificate via Let’s Encrypt and enable it on AWS. For our Spring Boot application, we needed to generate a keystore.p12 file.

We decided to opt for option 2: get a free https certificate via the Let’s Encrypt website.

Our next challenge was to make the generated certificate available to the Spring Boot application in a way that is scalable and does not go away if we terminate our EC2 instance in the ECS cluster. Here are our options:

  • Manually copy the https certificate to the EC2 instance. We did not opt for this option because, if we terminate our EC2 instance (attached to the ECS cluster), the https certificate is deleted along with it.
  • Keep the certificate in Amazon S3, then copy it to the EC2 instance manually. We did not opt for this option because every time we need to recreate an EC2 instance, we would have to copy the certificate manually again.
  • When creating an EC2 instance within the ECS cluster, add commands in the user data option to copy the certificate from S3. We think this is the optimal option, but we couldn’t enable it: the free tier ECS-backed EC2 instance did not run our user data properly. To allow user data to run on the EC2 instance, we had to adjust the EC2 agent configuration, and doing so was either not easily available or too complicated within the free tier. So we did not opt for this option.
  • Add the https certificate within the Spring Boot application via an S3 copy in the SSL configuration. This could have been a viable option: within the Spring Boot code, we can add an SSL configuration bean that copies the certificate from AWS S3 and recreates the certificate file within the application. Below is a sample of the configuration code:

import java.io.File;

import org.apache.catalina.Context;
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.descriptor.web.SecurityCollection;
import org.apache.tomcat.util.descriptor.web.SecurityConstraint;
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.servlet.server.ServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

//@Configuration
public class SslConfiguration {

    @Bean
    public ServletWebServerFactory servletContainer() {
        TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() {
            @Override
            protected void postProcessContext(Context context) {
                // Force all requests onto a confidential (https) transport.
                SecurityConstraint securityConstraint = new SecurityConstraint();
                securityConstraint.setUserConstraint("CONFIDENTIAL");
                SecurityCollection collection = new SecurityCollection();
                collection.addPattern("/*");
                securityConstraint.addCollection(collection);
                context.addConstraint(securityConstraint);
            }
        };
        tomcat.addAdditionalTomcatConnectors(redirectConnector());
        return tomcat;
    }

    private Connector redirectConnector() {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");

        connector.setPort(8443);
        connector.setSecure(true);
        connector.setScheme("https");
        connector.setAttribute("keyAlias", "tomcat");
        connector.setAttribute("keystorePass", "<hidden>");
        connector.setAttribute("keystoreType", "PKCS12");

        File file = new File("");// ADD PATH
        String absoluteKeystoreFile = file.getAbsolutePath();

        connector.setAttribute("keystoreFile", absoluteKeystoreFile);
        connector.setAttribute("clientAuth", "false");
        connector.setAttribute("sslProtocol", "TLS");
        connector.setAttribute("SSLEnabled", true);
        return connector;
    }

}


  • Add the https certificate within the Spring Boot application via S3, using the properties file. With this option, we read the certificate details from application.properties. Below is a sample code to do it:

private static void copySSLCertificateFromS3() {
    try {
        Properties props = readPropertiesFile("src/main/resources/application.properties");
        String clientRegion = props.getProperty("clientRegion");
        String bucketName = props.getProperty("bucketName");
        String sslFileNameWithPath = props.getProperty("sslFileNameWithPath");
        String keyStoreFileName = props.getProperty("server.ssl.key-store");

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(clientRegion)
                .withCredentials(new ProfileCredentialsProvider()).build();

        S3Object object = s3Client.getObject(new GetObjectRequest(bucketName, sslFileNameWithPath));

        // Write the keystore bytes from S3 to a local file.
        File file = new File(keyStoreFileName);
        try (InputStream objectData = object.getObjectContent();
                OutputStream outputStream = new FileOutputStream(file)) {
            IOUtils.copy(objectData, outputStream);
        }
    } catch (Exception e) {
        e.printStackTrace();
        // handle exception here
    }
}

public static Properties readPropertiesFile(String fileName) throws IOException {
    Properties prop = new Properties();
    try (FileInputStream fis = new FileInputStream(fileName)) {
        prop.load(fis);
    } catch (FileNotFoundException fnfe) {
        fnfe.printStackTrace();
    }
    return prop;
}


  • Use a Docker container to copy the https certificate from S3 to the EC2 instance. Every time we have a new EC2 instance, we can copy the certificate file from S3 using a very lightweight Docker container task. So far, this seems to be the best possible approach within the free tier, ECS-backed EC2 instance. We’re exploring this option.

If anyone has suggestions to us for a better approach, feel free to share your comments.

Strengths app part 4

This article is part of the application development series, in which we are providing details of creating the strength application. In part 3, we discussed REST and other backend APIs for the application. In this part, we will discuss the user interface details of the web application.

  • User login functionality: We have finalized a basic login page to authenticate a user. If a user is not authenticated and attempts to view the home page of the application, we redirect the user back to the user login page.
  • Home page: Home page provides these details:
    • User information: Name of the logged in user.
    • Number of total votes: Total votes for the user.
    • Strengths details: We show strengths of the user in a tabular form. Each row of the table has these details:
      • Strength title
      • Total votes on the strength
      • Created by
      • Buttons to view strength details, update a strength, and delete a strength

As we add more features to the application, we will update this page. Stay tuned for the updates and new articles on it.

Strengths app part 3

In the application development series part 2, we learned the use cases of this application. In this part, we’ll go through high-level technical details of the Spring Boot APIs and the search integration of the application.

Below are the REST APIs and search functionality for this application:

  • User API: a REST API for the user profile. It authenticates a user, which is required to access certain functionalities of the application. For example, only an authenticated user can vote on the strengths of a friend.
  • Strengths API: this API provides features to add, view, update, and delete a strength of a user.
  • Vote API: this API provides a feature to vote on a strength of a user. Only a friend can vote on another friend’s strength.
  • Search functionality: to search a user’s strengths, we’ve integrated AWS OpenSearch, which is equivalent to Elasticsearch. The Strength API is integrated with OpenSearch via an SNS configuration: when a strength is created, the Strength API’s add method publishes an SNS message, and the new strength entry is added to OpenSearch.

Later, we have a plan to add more features. We will update this page as we make more progress. Stay tuned.

In the next part, we will discuss the Desktop version User Interface part of the application.