Product Management Overview: Part 2 of 3

This article series on product management has three parts:

  • Part 1: Product Management (PM) role, Becoming a PM, Product life cycle, and understanding the company.
  • Part 2: Creating an opportunity hypothesis, validating a hypothesis, and taking an idea into action.
  • Part 3: Working with design, engineering, and marketing. Finally, completing the product lifecycle.

In the first part, we went through the PM role, the product life cycle, and strategically understanding the company. In this part, we will go through creating an opportunity hypothesis, validating the hypothesis, and taking an idea into action.

Creating an opportunity hypothesis:

There are two broad paths: iterative progress or one big change. Depending on the situation, a PM should decide whether an incremental change is appropriate or whether building a new product from scratch is appropriate.

  • Goals: It is important to establish the goal for the product. A product can be built in an iterative way or as one big release, depending on the goal.
  • Quantitative and qualitative reasoning: Quantitative reasoning involves data analysis to find the next approach. Qualitative reasoning involves understanding the vision of the product or an intuition-based approach to determine the best next step for customers.
  • Metrics and analysis: metrics and analysis are important to determine the next steps for the product.
    • AARRR metrics: an acronym coined by Dave McClure. It stands for:
      • Acquisition: how users visit your product
      • Activation: a user’s first experience with your product
      • Retention: whether a user comes back to use the product again
      • Referral: whether a user likes the product enough to refer it to others
      • Revenue: a user finds the product useful enough to pay for it
  • Surveys and customer interviews are helpful to assess the need and current situation of the product.
  • Intuition: intuition is good, but it is important to look at it from a persona’s point of view. Will this idea help a user? If yes, how?
  • Vision for the product is important and is developed by the team.
  • Team ideas: A PM should be a team player who listens to all the ideas and helps make the right decision for the product.
  • R&D: it is important to research an idea and plan to build it with the assessment of engineering and other feasibilities.
  • The competition: understanding the competition is important in deciding the approach for the product. For example, a technical blog competes with many companies and bloggers. Should a new technical blog be one of many, or offer something different that others are not offering? Or should it compete on quality? How will a new technical blog position itself in the market?
  • Business model and value proposition: Analyze how a product fits into the business and how the product provides value to customers. Here are the key elements of a business model:
    • Key partners: the partners outside the company who make the business model work.
    • Key activities: what the key activities are. For example, a blog company’s key activities may be content writing and website publishing.
    • Key resources: key resources could be human, physical (hardware), and intellectual property.
    • Value propositions: the value a product provides to a persona is its value proposition.
    • Customer relationships: this is about managing the relationships with the customers.
    • Channels: channels are the ways a company reaches out to its customers. For example, for a blog company, Facebook, direct emailing, Twitter, and other means could be channels to reach customers.
    • Customer segments: the categories of personas that the product will serve.
    • Cost structure: this is about finding the cost to build and maintain the product.
    • Revenue streams: the ways the product will generate revenue. For example, technical books could be useful to generate revenue.
  • External factors: sometimes there are external factors as well. For example, what if a client offers a multi-year maintenance contract, provided certain features are added per the client’s needs?
  • Using the Kano model to find opportunities: per the Kano model, a product needs three things to be successful over time: 1. Value, 2. Quality, and 3. Innovation. Per this principle, there are three types of features:
    • Basic features: these are features expected from a product.
    • Performance features (satisfiers): features whose satisfaction scales with how well they perform, for example how fast the app runs.
    • Excitement features (delighters): these are surprising, unexpected, wow features.

Over time, each feature type moves down the scale: excitement features become performance features, and performance features become basic features.

Validating your hypothesis:

The next thing is to decide if it is the right thing to do. Every idea has an opportunity cost. Below are options to validate a hypothesis:

  • Customer development: a process of reaching out to existing or new customers to learn their pain points and goals.
  • MVP or A/B testing: you may decide to build an MVP, or you may run an A/B test. A/B testing is an approach to compare two variants of something. If the idea is validated to be a good one, the next step is to plan to build it.
  • SWOT analysis: SWOT stands for Strengths, Weaknesses, Opportunities, and Threats. It helps to identify what’s important to focus on. To do a SWOT analysis, identify goals and success criteria.
  • Internal validation: it is about asking key questions to understand if a product’s vision is aligned with the company’s vision.
  • External validation: it is helpful to perform external validation. It means seeking feedback from customers about the product.
  • Other ways: Interviews, Surveys, Analyzing data, Experiments, and A/B tests.

From idea to action:

Below are key points to take a product from idea to action:

  • Why new ideas struggle: there is a chance that, by the time you ship, customers no longer need the new feature you planned for. It is very helpful to anticipate next steps and avoid assumptions as much as possible. Suppose my product is an eBook on work skills for IT professionals. If it takes six months for me to write the book, what if there are better options available in the market in those six months? In this example, I should list out my assumptions and address them.
  • Working backwards from the future feature: imagine your product is released and you are writing the product reviews, the product release notes, and the product FAQ sections. This future-looking preparation will help to assess the product. For example, if a writer is planning a book, he/she can first write why and who should read the book, how the book helps readers, and how it is different from other books.
  • Plan for an MVP: a Minimum Viable Product (MVP) does not mean an incomplete product. It is a great exercise to prioritize the minimum set of features that will help the customers. Then, over time, iterating and adding more prioritized features helps. For example, if I am writing a book, I identify the minimum required areas to address first. I can create a second book later, with additional details.
  • MVPs and Plussing: the MVP concept forces us to prioritize the most important items. Plussing is the concept of adding a surprising element on top, and it has to be completely in line with the core idea. For example, if I am writing a book on technical product manager interviews, my MVP would cover the topics that readers expect; the core idea of the book is to prepare candidates to clear an interview. A plus on top could be to share general tips on resume writing and networking.
  • Communicating via a Product Requirements Document (PRD): a PRD covers the features of the product to be built and should be helpful to all stakeholders. A PRD should state what is covered and what is not covered. In a waterfall approach, a PRD is a detailed document. In a lean development approach, it is a lighter-weight document that is iterated frequently. In any case, a PRD is a living document. For project-scope items, a PRD should be written. For bug fixes or a small enhancement, a PRD may not be needed; such items can be covered via a ticket or a similar system. One more point: we humans are wired to learn through stories, so writing requirements in the form of stories is helpful.

Next, we will go through part 3: Working with design, engineering, and marketing. Finally, completing the product lifecycle.

Reference:

  • The Product Book: How to Become a Great Product Manager by Product School, Carlos Gonzalez de Villaumbrosia, et al.

Product Management Overview: Part 1 of 3

Why and who should read this article: read this article if you want to know the basics of a product management role.

This is Part 1 of a three-part series:

  • Part 1: Product Management (PM) role, Becoming a PM, Product life cycle, and Understanding the company.
  • Part 2: Creating an opportunity hypothesis, validating a hypothesis, and taking an idea into action.
  • Part 3: Working with design, engineering, and marketing. Finally, completing the product lifecycle.

What is the product management role:

A product manager is someone who represents customers and is responsible for a product. He/she has multiple skills. On a day-to-day basis, a PM understands the business and execution strategy of a product. They know how to set the vision of the product.

How to become a product manager:

A product manager should be knowledgeable in product design, engineering, and marketing. PMs come to this role from various streams like development, design, marketing, business, quality assurance, business analysis, and other related areas. It’s always an added advantage for PMs to know programming.

Below are some skills that a product manager should know:

  • A fundamental understanding of product design, engineering, and marketing.
  • Understand who are the customers.
  • Business strategy: A product manager should know the business strategy of the company. They should understand who the players are, how the company makes money, and the difference between revenue and profit.
  • Execution: a product manager should know how to execute on the plan.
  • Vision: they must know how to set the vision and spot the right opportunities by using data and metrics.
  • Defining success: they must know how to define the success criteria of a product.
  • Marketing: they must know how to work with marketing, to market the product.
  • People skills: they must know how to work, appreciate, motivate, and lead people.
  • Prioritization: PMs must know how to say “no”. A product manager understands what the customers want and prioritizes building a limited set of features in the product.

Types of product managers:

  • Technical product managers: A technical product manager is someone who is managing a technical product, like an API for a system. They focus on how products are built. They are not involved in coding.
  • Strategic product managers: someone who has a business background and complements a technical product manager.
  • Other: like growth product manager, mobile product manager, etc.

Product management development approaches:

There are approaches like lean development and waterfall development. Sometimes, companies prefer a hybrid of the two.

Product development lifecycle:

  1. Find/plan the opportunity: a product manager finds the right opportunity and plans for it. Product managers write PRDs (Product Requirement Documents) in collaboration with all stakeholders from business, engineering, design, analytics, and other teams. A product manager also decides if the product will be built using a lean or waterfall approach. There could be a plan to build a Minimum Viable Product (MVP) using the lean approach.
  2. Design the solution: PMs work with the design team to design a solution. Engagement from engineering is important to assess the technical feasibility of the design.
  3. Build the solution: once the product is defined and the design is agreed upon, the next step is to build the solution. A PM may need to negotiate between releasing a quick solution and a longer-term solution.
  4. Share the solution: this step is about sharing the product with the world. In this phase, if the organization supports it, there may be a need for a Product Marketing Manager (PMM) who focuses on marketing, while the PM focuses on the internal details of the product. Releasing a product may be done in phases: a pilot, a small release, and subsequent releases. There may be a need to run marketing campaigns, ads, etc. to market the product and get customer feedback, to improve it for the customers’ needs.
  5. Assess the solution: in this phase, PMs should evaluate the team’s situation, for example whether they are happy to work on the next project and whether anything should be done differently. After the release, real data is available to assess the product. The post-launch analysis of the product can help improve the further steps in product development.

Strategically understand the company:

A PM should think of these things below:

  • Understand why the company exists: it is critical to understand why the company exists, its core beliefs, and its mission statement. This should be the guiding principle when planning a product. Understand customers and personas. Put simply, a persona is a type of customer profile using the product. Let’s say the product is a web browser; an example persona is a web developer. For the web browser, it is critical to understand how this persona will use the product. Understanding use cases is important: a use case is a function of a product used by a persona. Understand if a product is built for an enterprise or for a consumer.
  • How do we know if a product is good: Key Performance Indicator (KPI) metrics can help determine if a product is good. Vanity versus actionable metrics: vanity metrics are not directly related to the product’s performance; they serve secondary or other benefits. Actionable metrics are directly related to the performance of the product. Analytics, surveys, and interviews can help to collect the right metrics about a product. Net Promoter Score (NPS) is a metric to assess overall customer satisfaction with the product. It measures how likely it is that a user would recommend the product to others. NPS is the % of promoters minus the % of detractors out of all replies. The scale is 0 to 10: detractors score from 0 to 6, and promoters score 9 or 10 (a small calculation sketch follows this list).
  • What products are we building: It focuses on the company’s current products.
  • Other things to consider: plan the roadmap of the product for the short term and a long-term strategy. Understanding the competition and climate is also important to know whether the product we are building is competitive and fits the market. A useful analysis for assessing the opportunity for a product is the 5C analysis. The five Cs are: Company, Customers, Collaborators, Competitors, and Climate.
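To make the NPS arithmetic concrete, here is a minimal Java sketch; the survey scores below are made-up sample data.

    public class NpsCalculator {
        // Each score is a 0-10 answer to "How likely are you to recommend this product?"
        static double nps(int[] scores) {
            long promoters = 0, detractors = 0;
            for (int s : scores) {
                if (s >= 9) promoters++;        // promoters: 9 or 10
                else if (s <= 6) detractors++;  // detractors: 0 to 6
            }
            return 100.0 * (promoters - detractors) / scores.length;
        }

        public static void main(String[] args) {
            int[] replies = {10, 9, 9, 8, 7, 6, 3, 10, 9, 5};
            // 5 promoters, 3 detractors, 10 replies -> NPS of +20.0
            System.out.println(nps(replies));
        }
    }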

In the next part, we will go through creating an opportunity hypothesis, validating a hypothesis, and taking an idea into action.

Reference:

  • The Product Book: How to Become a Great Product Manager by Product School, Carlos Gonzalez de Villaumbrosia, et al.

How to move back from a Project Manager to a Developer Role

There are different reasons why a developer transitions to a Project/Program Manager (PM) role. After some time, there could be a moment in your career when you want to move back to a developer role. If you think moving to a PM role was the right choice, you can stop reading here. If you miss the hands-on technical development in which you excelled, it’s never too late to switch back. Here are some tips to get back on a hands-on development track and take your career to the next level:

1. Find out why you want to be a developer again: 

It’s important to assess your whys clearly. Let’s look at some common whys:

  • Self-realization: switching from a developer to a PM may not be easy for everyone. Some developers find ecstasy in a PM role. A PM who was an excellent developer may have two possible thoughts, on two extremes: 
  1. Oh, I wish, I was born as a PM. I like leading people towards a goal. I love meeting people. I love high level integration, without going in deep of the code (then again, you don’t need to read it further).
  2. Oh, did I really work today? Am I really delivering any value? I heard I led that meeting well, but I didn’t do anything at all. I just presented what others did. Am I going to get any credit for any of the work? If yes, is it fair to get the credit for something I didn’t do? Does anyone really appreciate scheduling a meeting and asking people to work towards a goal? Don’t they already know why, how, and what to work towards their goals? I don’t think I did anything. Did I have to pull up my sleeve to get into anything deep technical, to solve a problem? Why are people thanking me as if I did all the magic to make the project successful? Is it a fake thank you and they really meant to say: you were not needed, you wasted our time with many useless meetings.
  • A step to be a technical manager: a common belief is that to be a development manager, you must excel in technical skills, leadership skills, and operational/business skills. Getting into a PM role may provide you the right combination of leadership and operational/business skills.

2. Acknowledge what you learned as a PM:

The years you worked as a PM were a great investment. You learned these skills:

  • How to lead and communicate: This is a great skill to have. As a developer, the tendency is to know all the details about a system. PM skills are great in understanding and leading the integration of systems as a black box (not knowing what’s inside).
  • Influencing stakeholders: as a developer, your job generally is to complete the assigned tasks. There may be opportunities to influence technical teams on a technical solution, but as a PM you are expected to influence stakeholders far more often. How many developers would be motivated to read a book on influencing stakeholders? If we ask the PM community, the question is the opposite: can you be a PM without reading a book on influencing stakeholders?
  • Respect what PMs do: to appreciate a skill, it’s best to have been in that role yourself. As a PM, do you recall your scariest moment, when you were not sure you would get buy-in from your stakeholders? If your fear turned out to be wrong (they showed up to your meeting and respected your engagement) and you got their buy-in, will you ever forget that moment of joy? You clearly remember how many of the stakeholders you invited showed up to your meeting. And yes, the majority of them agreed with your point of view.

3. Finally, how to be a developer again:

  • You’re here with real interest: congratulations. You clarified your whys and made a decision. You’re learning development out of true interest. This is a critical step. 
  • Don’t give up and don’t lean back: as it’s your interest and your strong desire, you’ll succeed if you don’t give up. So, first things first: don’t give up on the uncomfortable journey ahead of becoming a developer again. Here are some common feelings you’ll encounter as your demons, and you must not let them deviate you from your goal:
  • Are you too old to be a developer? Will people laugh, and will you feel bad about your modest and dated technical skills?
  • Did you fail as a PM? Many people may ask you, why are you transitioning back to a developer role? Did you not perform well as a PM? Is it a demotion? You may hear people saying: the grass is greener on the other side.
  • Will you excel as a developer again? There’re many smart developers in the market who’re young, probably with less family responsibilities. Will you be able to spend that much time again? Are you still that sharp to learn new technical skills quickly?
  • What if you become an average developer who’s not able to cope with newer technologies? There are many intelligent new-age developers. You quickly recall an incident: that day, a developer was cursing himself for being a developer, feeling overwhelmed by so many new technologies to learn.
  • Work/life balance: development tasks are very time consuming. It feels like two 40-hour work weeks are needed (at the same time). But will you be paid for 80 hours a week? You may feel it was nice to be a PM. As a PM, you guided many people on how to manage their time within 40 hours a week.
  • Decide on a skill set: list out which skills you acquired earlier that are still in demand. If not the same skills, can you at least find equivalent skills in the market? Once you find your strengths, it’s important to decide on the new technical skill set you want to master, and stick with it.
  • Join forums and communities: Find opportunities inside and outside your company, to learn new technical development skills.
  • Show your curiosity about technical skills: show your curiosity about technical development skills within your network. You know how to communicate and build relationships; it’s time to ask others to help you learn technical skills.
  • Find the development opportunities: Find a development opportunity in your area. Here are key ideas:
  • Do you see a gap in technical resources in your project? If yes, and if you can do the task, dive into it, work beyond your comfort zone, and own the technical task.
  • Work on a prototype code that’s not mission critical.
  • Develop the side project you have been dreaming about for years. There are no limits to the possibilities. It could be a pet project to organize your family members list, your investments, your documents, or a simple utility that your family and friends would like to use.
  • Review your progress: Review your development skills progress after six months. Then, follow a quarterly review routine. Rome wasn’t built in a day.
  • Will you be an excellent developer again: you have strong communication and operational/business skills, and you’re progressing in your development skills. It’s not fair to assess yourself in one dimension. Do what you love to do. A better question would be: am I an all-rounder professional who loves development as the core skill set?
  • By now, you have failed many times: but you already made a decision. Don’t look back. Stay on the journey. You’ll succeed if you don’t give up.

How to make a Gantt Chart

A Gantt chart is simply a breakdown illustration of a project schedule. It is named after Henry Gantt, the American mechanical engineer and management consultant who invented the chart in the 1910s. A Gantt diagram represents the project or tasks in a horizontal bar chart form. This cascading format, with the various project activities listed, helps project managers track the tasks against their scheduled time and predefined milestones. Like the burndown chart, a Gantt chart can also be used as a scrum artifact by teams to communicate and track progress towards the sprint goal.

What to consider when making a Gantt chart

  • What are the major deliverables?
  • Who is on the team and what role will they play in those deliverables?
  • Who are the stakeholders?
  • What is the project plan to get to those deliverables and deadline?
  • What are the milestones?
  • Are there roadblocks that could impact the timeline?

MongoDB notes

Who should read it: It is for you if you are looking for an overview of this topic for a project, to conduct/appear in an interview, or in general. As we learn more, we will update this article.

MongoDB is a document-based, schema-less, and highly available NoSQL database. Let’s look at some high-level details about it.

Basic Terms:

  • document: the basic unit of data, equivalent to a row in an RDBMS table.
  • collection: equivalent to a table in an RDBMS database.
  • database: a single MongoDB instance can host multiple databases.
  • mongo shell: a command-line shell that helps with queries and administrative tasks.

Basic Operations:

CREATE:

  • Use the insertOne() operation.
  • To insert many documents into a collection, use insertMany().
  • MongoDB also provides a Bulk Write API that affects a single collection. Its db.collection.bulkWrite() method performs ordered operations by default, with an option to turn ordering off.

READ:

  • Use find() or findOne() operation.

DELETE:

  • Use deleteOne() to delete a single document.
  • Use deleteMany() to delete all eligible matching documents.

UPDATE:

  • For update, we can use updateOne(), updateMany(), or replaceOne().
  • updateOne() and updateMany() take a filter document as the first parameter and a modifier document as the second.
  • replaceOne() takes a filter as the first parameter and a replacement document as the second; the matching document is replaced entirely with the new document.

DROP:

  • deleteMany() can be used to remove all matching documents in a collection; with an empty filter, it deletes every document.
  • To clear an entire collection, drop() is faster.

UPSERT:

  • It is equivalent to update-else-insert: update a matching document, and if no matching document is found, insert a new one (see the sketch below).
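The basic operations above can be exercised with a minimal sketch using the MongoDB Java sync driver; the connection string, database, collection, and field names are illustrative.

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.UpdateOptions;
    import org.bson.Document;
    import static com.mongodb.client.model.Filters.eq;
    import static com.mongodb.client.model.Updates.set;

    public class MongoCrudSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> users =
                        client.getDatabase("app").getCollection("users");

                users.insertOne(new Document("name", "Asha").append("plan", "free"));  // CREATE
                Document found = users.find(eq("name", "Asha")).first();               // READ
                System.out.println(found);

                users.updateOne(eq("name", "Asha"), set("plan", "pro"));               // UPDATE
                users.updateOne(eq("name", "Ravi"), set("plan", "free"),
                        new UpdateOptions().upsert(true));                             // UPSERT
                users.deleteMany(eq("plan", "free"));                                  // DELETE
            }
        }
    }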

Supported Data types: Null, Boolean, Number, String, Date, Regular expression, Array, Embedded document, Object ID, Binary Data, Code

ACID operations: ACID (Atomicity, Consistency, Isolation, Durability) guarantees in MongoDB are at the document level. Below are the basics of the ACID properties:

  • Atomicity: all operations in a write succeed, or none do.
  • Consistency: the database always moves from one valid state to another.
  • Isolation: concurrent reads and writes do not interfere with each other.
  • Durability: once a commit succeeds, the data is not lost.

AWS infrastructure notes

Who should read it: It is for you if you are looking for a quick overview of this topic for a project, to conduct/appear in an interview, or in general. As we learn more, we will update this article.

What is Cloud computing: Cloud computing is the on-demand delivery of IT resources and applications via the internet, with pay-as-you-go pricing. In simpler terms, cloud computing provides ways to access servers, databases, storage, and many application services over the internet.

Why cloud infrastructure matters: As of today, AWS has 81 availability zones within 25 geographic regions. At a high level, the AWS cloud infrastructure offers these main benefits:
  • Security
  • Availability
  • Performance
  • Global Footprint
  • Scalability
  • Flexibility
Advantages of cloud computing:  Six major advantages are below:
  • Variable versus Capital Expense: instead of buying and setting up servers upfront, we can pay for the infrastructure based on usage.
  • Economies of Scale: using cloud resources like AWS can reduce the cost.
  • Stop guessing capacity: companies can provision as little or as much capacity as required, on short notice.
  • Increase speed and agility: new IT resources can be made available very quickly.
  • Focus on business differentiators: businesses can stop focusing on maintaining infrastructure and focus on their main business.
  • Go global in minutes: it’s easy to expand applications globally, in a few minutes.
Cloud computing models:
  • All-in cloud-based applications: everything runs in the cloud.
  • Hybrid deployment: a hybrid solution with some parts in the cloud and some on-premises.
AWS compute and networking services:
  • Amazon Elastic Compute Cloud (EC2): a service that provides resizable compute capacity in the cloud. Organizations can select memory, CPU, etc. per their needs.
  • AWS Lambda: a platform that lets developers run code with zero infrastructure maintenance. AWS deploys the code on Amazon EC2 instances behind the scenes.
  • Auto Scaling: allows companies to scale resources up or down as needed.
  • Elastic Load Balancing: allows automatic distribution of incoming traffic across multiple targets, such as Amazon EC2 instances.
  • AWS Elastic Beanstalk: to deploy a web application faster, this service handles resource provisioning, monitoring, etc. automatically.
  • Amazon Virtual Private Cloud (Amazon VPC): allows organizations to control their AWS networking environment, for example by choosing their own IP address ranges.
  • AWS Direct Connect: provides a dedicated network connection between a company's own data centers and AWS.
  • Amazon Route 53: It’s a highly scalable DNS service. For example, using Route 53, I configured my own domain name with AWS.
 Storage and Content delivery:
  • Simple Storage Service (Amazon S3): Amazon S3 provides storage for various usages like storing files, code backups, etc.
  • Amazon Glacier: a low-cost service for long-term data backups.
  • Amazon Elastic Block Store (Amazon EBS): provides block-level storage volumes for use with Amazon EC2 instances.
  • AWS Storage Gateway: AWS Storage Gateway service connects on-premises software appliances with AWS infrastructure.
  • Amazon CloudFront: It’s a content delivery web service.
Databases:
  • Amazon Relational Database Service (Amazon RDS): It’s a fully managed relational database service.
  • Amazon DynamoDB: It’s a NOSQL database service.
  • Amazon RedShift: It’s a petabyte-scale data warehouse service.
  • Amazon ElastiCache: It’s a service that provides in-memory cache in the cloud. It supports Memcached and Redis cache engines.
Management Tools:
  • Amazon CloudWatch: It’s a monitoring service for cloud resources and cloud hosted applications.
  • AWS CloudFormation: it provides a way to effectively manage a collection of AWS resources.
  • AWS CloudTrail: It records logs for the audit and review.
  • AWS Config: This service provides configuration history and configuration change notifications.
Security and Identity services:
  • AWS Identity and Access Management (IAM): It allows organization users to securely access AWS cloud services.
  • AWS Key Management Service (KMS): allows users to create encryption keys to encrypt data. It uses Hardware Security Modules (HSMs) to protect the keys.
  • AWS Directory Service: AWS Directory Service uses Microsoft Active Directory. Using it, organizations can manage single sign-on, user accounts, user groups, etc.
  • AWS Certificate Manager: It’s a service that manages SSL/TLS certificates for use with AWS cloud services.
  • AWS Web Application Firewall (WAF): WAF helps protect web applications by allowing or denying requests, to prevent common security attacks.
Application Services:
  • Amazon API Gateway: It is a managed service that helps developers to create, publish, maintain, and secure APIs.
  • Amazon Elastic Transcoder: a media transcoding service in the cloud. Transcoding is the process of converting an audio or video file from one format to another.
  • Amazon Simple Notification Service (SNS): Amazon SNS is a service to deliver messages to recipients.
  • Amazon Simple Email Service (SES): an email service for sending emails to customers.
  • Amazon Simple Workflow Service (SWF): a workflow service that can run jobs in parallel or in sequential steps. It has retry features.
  • Amazon Simple Queue Service (SQS): SQS is a messaging queueing service.
Five pillars of Amazon Web Services (AWS):
  • Operational excellence:
    • Infrastructure as Code (IaC): there are two main services: CloudFormation and the Cloud Development Kit (CDK).
    • Observability: the process of monitoring metrics at three levels: infrastructure level, application level, and account level.
  • Security: this pillar covers identity and access management, network security, and data protection.
    • IAM policies define three things:
      • PRINCIPAL(S): who has permission
      • ACTION(S): what actions they can perform
      • RESOURCE(S): which resources they apply to
    • Network security: a zero-trust approach to network security involves defense in depth, with both network-level security and resource-level security.
    • Data encryption: plan to encrypt data in transit and at rest.
  • Reliability: this pillar focuses on building services resilient to both service and infrastructure disruptions
  • Performance efficiency: focuses on running services efficiently and scalably. AWS focuses on two categories:
    • Selection: For the selection, there are three things you need to consider:
      • Type of service: it could be VM based, container based, or serverless based.
      • Degree of Management
      • Configuration
    • Scaling: it is easy to scale in AWS.
  • Cost optimization: this pillar helps achieve business outcomes while minimizing costs. Think of cloud spend in terms of Opex, instead of Capex. Opex is a pay as you go model. Capex is a one time cost.

RESTful APIs

Who should read it: It is for you if you are looking for a quick overview of this topic for a project, to conduct/appear in an interview, or in general. As we learn more, we will update this article.

REST definition: REST is an architecture style based on web standards and the HTTP protocol. This was initially described by Roy Fielding in 2000. In REST, everything is a resource.

Resource: Any information that can be named is a resource.

Six Guiding principles of REST: To make a web service a true RESTful API, below are six guiding principles:

  • Uniform Interface: A RESTful API MUST have an interface that is consistent to access all resources. A resource should have only one logical URI. Any single resource should not be too large.
  • Client Server: A client and a server must be independent of each other.
  • Stateless: All client-server interactions must be stateless. No session. No history.

  • Cacheable: When applicable, make REST resources cacheable. It improves performance. Caching helps to reduce bandwidth, latency, and load on servers. It also helps in hiding network failures. There are two HTTP response headers to control caching behavior: Expires and Cache-Control. There are two validator options: ETag or Last-Modified. One of these two validators should be used.

  • Layered system: REST allows a layered system architecture. For example, an API may be deployed on system 1, get database data from system 2, and authenticate the user via system 3.
  • Code on demand (optional): This is an optional constraint. For example, my API may return the code to hide/show UI logic of a UI element. 

HTTP methods: REST architecture uses these methods below:

  • HTTP GET: Use GET requests to retrieve the information.
  • HTTP POST: Use POST to create new resources.
  • HTTP PUT: Use PUT to update an existing resource. If a resource does not exist, API may decide to create a new resource. Consider it as an update-else-create operation.
    • POST versus PUT: POST requests are made on resource collections whereas PUT requests are made on a single resource.
  • HTTP PATCH: Use PATCH to partially update a resource.
    • PUT versus PATCH: Use PUT to update a resource completely (like replacing a resource). Use PATCH to update a record partially.
  • HTTP DELETE: Use DELETE to delete a resource.
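As a hedged illustration, the sketch below exercises these methods with Java's built-in HttpClient (Java 11+); the example.com URLs and JSON bodies are placeholders, not a real API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestMethodsSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String base = "https://api.example.com/users";

            // GET: retrieve a resource
            HttpRequest get = HttpRequest.newBuilder(URI.create(base + "/42")).GET().build();

            // POST: create a new resource in the collection
            HttpRequest post = HttpRequest.newBuilder(URI.create(base))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Asha\"}"))
                    .build();

            // PUT: replace an existing resource; PATCH would use .method("PATCH", body)
            HttpRequest put = HttpRequest.newBuilder(URI.create(base + "/42"))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString("{\"name\":\"Asha\",\"plan\":\"pro\"}"))
                    .build();

            // DELETE: remove a resource
            HttpRequest delete = HttpRequest.newBuilder(URI.create(base + "/42")).DELETE().build();

            HttpResponse<String> response = client.send(get, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }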

HATEOAS driven APIs: HATEOAS (Hypermedia As The Engine Of Application State) is a constraint of REST APIs. It keeps REST unique from most other network application architectures. Per the HATEOAS constraint, a client needs to know only a single entry-point URL; all other resources should be dynamically discoverable via hypermedia links returned from that URL.

Other REST terms:

  • Safe methods: GET and HEAD are considered safe methods because these are used to retrieve information, not to update or delete.
  • Idempotent methods: idempotent means that repeated execution has the same effect as a single execution; making one request or multiple identical requests should have the same impact. If we follow REST API design, the GET, PUT, DELETE, HEAD, OPTIONS, and TRACE HTTP methods are automatically idempotent. Only POST is not idempotent.

REST’s eight security essentials:

  • Least Privilege: follow the minimum privilege model.
  • Fail-Safe Defaults: users should not have access by default; access should be granted explicitly.
  • Economy of Mechanism: The design should be as simple as possible.
  • Complete Mediation: A system should validate all access rights as a fresh validation. Access rights should not be cached.
  • Open Design: Design should not have any secrets or confidential algorithms.
  • Separation of Privilege: for sensitive privileges, requiring multiple conditions is better than granting access on a single condition.
  • Least Common Mechanism: Mechanism used to access resources should not be shared.
  • Psychological Acceptability: Security should not make the experience worse.
Best practices to secure REST APIs:
  • Keep it simple.
  • Always use HTTPS.
  • Use password hashing.
  • Never expose sensitive information in URLs.
  • Consider OAuth: the OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service.
  • Consider adding a timestamp to requests.
  • Validate input parameters.

Content Negotiation: a client asking for a resource in a suitable representation (for example, a specific format or language) is called content negotiation.

Versioning: it helps to manage the iterations of the APIs. Versioning allows clients to keep using an existing API version while a new version is made available. Common ways to manage versioning include URI path versioning (for example, /v1/orders), query parameter versioning, custom request headers, and media-type (Accept header) versioning.

Compression: REST APIs can return resource representations in several formats like XML, JSON, HTML, or plain text. All these formats can be compressed to save bandwidth on the network.

REST API’s N+1 problem and solution: it occurs when serving one request for a collection requires N additional requests to load the associated items. To solve it, embed more information about each individual resource inside the collection resource.

‘q’ parameter: in the HTTP Accept header, the ‘q’ parameter helps set the MIME type preference and its degree. For example:

  • Accept: audio/*; q=0.2, audio/basic
    • should be interpreted as “I prefer audio/basic, but send me any audio type if it is the best available after an 80% mark-down in quality.”

For more details about the ‘q’ parameter, refer to the HTTP specification for the Accept header.

REST Maturity Model: after detailed analysis, Leonard Richardson used three factors to decide the maturity level of a web service: URI, HTTP, and HATEOAS (hypermedia), with URI at the bottom and hypermedia at the top.

  • Level zero: level zero does not use any of the three: URI, HTTP verbs, or HATEOAS (hypermedia). These web services typically use a single URI and a single HTTP method, usually POST. For example, a SOAP web service may use HTTP POST to transfer SOAP-based payloads. An XML-RPC based service can send data using Plain Old XML (POX).
  • Level one: the level one maturity model makes use of URIs but does not make use of HTTP verbs and HATEOAS. These APIs may implement multiple URIs but a single verb, maybe just POST.
  • Level two: Level two maturity model makes use of URI and HTTP but does not use HATEOAS. These services may support CRUD operations.
  • Level three: this level uses all three: URI, HTTP, and HATEOAS.

API Management: it is about the processes and technologies that maintain APIs in the cloud. There are many tool options. Some of these are below:

  • AWS API Gateway
  • Azure API Management
  • Mulesoft Anypoint Platform
  • IBM API Connect
  • Postman
  • Apigee API Management
  • RapidAPI
  • Tyk
  • Software AG
  • Boomi

API Rate limiting: rate limiting restricts the number of times an API can be called in a given time period.

OpenAPI Specification (OAS): OAS defines the standards for RESTful APIs.

Documenting public APIs: documentation of public APIs is as important as developing them. Public APIs should be documented well so that their users can understand them in a systematic way.

Technology/Framework terms

Who should read it: It is for you if you are looking for a quick overview of this topic for a project, to conduct/appear in an interview, or in general. As we learn more, we will update this article.

Below are some technology/framework terms and basic definitions:

Customer Data Platform: per the CDP Institute, a Customer Data Platform is packaged software that creates a persistent, unified customer database that is accessible to other systems. A CDP is usually controlled by business users, which makes it different from data warehouses or data lakes, which are set up by corporate IT teams. A CDP creates a comprehensive view of each customer by capturing data from multiple sources, and it tracks customer behavior over time. Data stored in a CDP can be used for analysis and to manage customer interactions. There are different types of CDPs, like Data CDPs, Analytics CDPs, Campaign CDPs, and Delivery CDPs.

Difference between CDP and CRM (Customer Relationship Management):

  • CRMs maintain customer interactions. A CDP maintains a 360-degree view of customer behavior across the products they use. 
  • CRM data is gathered manually via service/support/other customer-specific interactions. CDP data is usually collected automatically using integrations and code snippets. 
  • CRMs are for improving interactions with customers. CDPs are for understanding customers and their behavior.

Segment IO: Segment is a CDP service that handles data governance, data integration, and audience management in one place.

Optimizely: Optimizely is a technology company that provides multiple options to support A/B testing experiments. 

Split IO: it enables feature delivery for apps. Using Split IO, a feature can be enabled or disabled, and it can also be used for A/B testing.

Elasticsearch: an open source search and analytics engine. It is accessible via RESTful web services and stores data in JSON format. It is developed in Java, which makes it compatible with many platforms.

Splunk: Splunk is a software platform widely used for monitoring, searching, analyzing, and visualizing machine-generated data. It offers real-time visibility of data in a dashboard. It is well suited to understanding root causes.

Distributed tracing: also known as distributed request tracing, it is a method used to monitor applications, especially those built on a microservices architecture. IT and DevOps teams use it to monitor applications.

Open Tracing: OpenTracing is a vendor-agnostic API for distributed tracing. These APIs are built to support many languages.

API Gateways: just like web servers support web applications, API gateways support APIs. An API gateway is a proxy in front of the APIs that supports features like authentication, rate limiting, routing, load balancing, etc. API gateways are integral parts of microservices, nanoservices, and serverless architectures. 

Benefits of API gateways:

  • Decoupling: API gateway allows you to decouple the public facing endpoint APIs from the underlying implementation.
  • Reduce round trips: If some APIs need to join data across other services, an API gateway can save round trips.
  • Security: API gateways provide centralized services like authentication, CORS, rate limiting, bot detection, etc.
  • Cross cutting concerns: API gateways can provide a way to centrally plan the logging, caching, and other cross cutting concerns.

Distributed caching: There are two common technologies for distributed caching: redis and memcached.

  • Redis: an open source in-memory data store that can also persist data to disk.
  • Memcached: also an open source in-memory cache.
  • Redis versus Memcached:
    • Both are NoSQL storage types that store key-value pairs.
    • Both are open source.
    • Redis supports more data types; Memcached supports only string values.

MuleSoft:  Mule ESB (Enterprise Service Bus) is a highly scalable Java-based enterprise service bus and integration platform provided by MuleSoft. It has two concepts: Bus and Adapter. The concept of Bus is achieved through a messaging server like JMS or AMQP. By using Bus, applications decouple from one another. The concept of Adapter is responsible to convert application specific backend data into the Bus format. Data from one application to another is exchanged using the bus format, which is a consistent message format.

Microservices: microservice architecture is a strategy of implementing a larger application as a set of independent services. It is becoming a common way to build APIs.

Solr: Solr is a search server which is an open source, Java-based information retrieval library.

Serverless: a cloud-based code execution model in which the cloud provider manages the servers, so a developer does not need to work on server infrastructure. Cloud providers offer a pay-per-use model to execute serverless components, so there is no idle cost. Serverless components mean no hassle of maintaining physical machines, and they are easy to scale, cost efficient, easy to maintain, and easy to deploy. Testing a serverless component is challenging because it is not easy to replicate the environment for testing needs. Two related categories are BaaS (Backend as a Service) and FaaS (Function as a Service).

In the 1990s, if a developer had to deploy an application into production, he/she had to plan for servers, networking, databases, and the entire end-to-end solution. In the 2000s, we had virtual machines to build the infrastructure. In 2008, Amazon released EC2 instances, which made things much easier, but with EC2 we still had to maintain servers, the OS, and other elements. S3 and SQS helped further. Now, we can use cloud infrastructure without worrying about server infrastructure at all; AWS Lambda provides a great way to do this. A serverless application typically involves multiple microservices. With serverless, scaling down is also easy.
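For illustration, here is a minimal FaaS-style handler sketch, assuming the aws-lambda-java-core library; the class name and event shape are hypothetical.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import java.util.Map;

    // The function runs only when invoked; the cloud provider manages the servers.
    public class HelloHandler implements RequestHandler<Map<String, String>, String> {
        @Override
        public String handleRequest(Map<String, String> event, Context context) {
            String name = event.getOrDefault("name", "world");
            return "Hello, " + name;
        }
    }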

URL Shortener: a URL shortener is a service that creates short links from very long URLs. Converting a URL to a short URL makes it easy to share, tweet, and use for other purposes. Clicking a short URL redirects the user’s browser to the original URL. Below are some ways to convert a URL to a short URL:

  • Hash original URLs with a hash function (like MD5 or SHA-2).
  • Generate short links using UUID.
  • Convert numbers from base 10 to base 62 (a minimal sketch follows this list).
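Here is a minimal sketch of the base-62 approach: it encodes an auto-incrementing numeric ID (illustrative) as a short code made of digits and letters.

    public class Base62 {
        private static final String ALPHABET =
                "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

        // Convert a base-10 ID into a base-62 string, e.g. 125 -> "21"
        static String encode(long id) {
            if (id == 0) return "0";
            StringBuilder sb = new StringBuilder();
            while (id > 0) {
                sb.append(ALPHABET.charAt((int) (id % 62)));
                id /= 62;
            }
            return sb.reverse().toString();
        }

        public static void main(String[] args) {
            System.out.println(encode(125)); // short code for the 125th stored URL
        }
    }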

gRPC: gRPC is a newer approach to RPC (Remote Procedure Call). It is an open source technology for executing a procedure on a remote server. gRPC adds the concept of protocol buffers (also known as protobufs). It can run on any platform and is supported by many programming languages.

GraphQL: with GraphQL, the client specifies which data it wants and in which shape, so the response typically contains only the requested fields.

Webhooks: a webhook is an HTTP POST request that is triggered when an event occurs. It notifies the client when there is an update, providing instant, real-time notifications.
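A minimal receiver sketch using the JDK's built-in HttpServer; the port and path are illustrative, and a real receiver would also verify the sender's signature.

    import com.sun.net.httpserver.HttpServer;
    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class WebhookReceiver {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/webhooks/orders", exchange -> {
                try (InputStream body = exchange.getRequestBody()) {
                    String payload = new String(body.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println("Event received: " + payload); // react to the event here
                }
                exchange.sendResponseHeaders(204, -1); // acknowledge quickly, no response body
                exchange.close();
            });
            server.start();
        }
    }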

Common Java Libraries:

  • Lombok API: Lombok helps minimize boilerplate code and reduces lines of code by providing annotations (see the sketch after this list). Below are some of the annotations:
    • Getter & Setter
    • No Arg Constructor
    • All Arg Constructor
    • Data
    • ToString
    • EqualsAndHashCode
  • Resilience4j: an API designed for functional programming. It provides options to pick and choose what’s needed. It was inspired by Hystrix, which helps distributed services by adding latency tolerance and fault tolerance logic.
  • AspectJ: AspectJ is an aspect-oriented programming extension for Java that increases modularity by allowing the separation of cross-cutting concerns.
  • Swagger: Swagger is a set of open source tools built around the OpenAPI Specification. The OpenAPI Specification (also known as the Swagger specification) is an API specification format for REST APIs.
  • javatuples: javatuples is a Java API that allows Java programs to work with tuples. A tuple is a fixed-size sequence of elements that need not be related to each other; for example, it can contain an integer, a string, and an object. All elements are type safe.
  • JDK, ASM, Javassist, and CGLib: these four are APIs for generating Java proxy classes. The JDK dynamic proxy is the easiest to use. CGLib and Javassist are more advanced libraries that generate bytecode.
  • p6spy: p6spy is a dynamic monitoring framework for database operations.
  • Spring Cloud: It provides options to manage Spring applications on the cloud infrastructure.
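As a small illustration of the Lombok annotations listed above, here is a hedged sketch; the Customer class and its fields are made up.

    import lombok.AllArgsConstructor;
    import lombok.Data;
    import lombok.NoArgsConstructor;

    // @Data generates getters, setters, toString, equals, and hashCode at compile time.
    @Data
    @NoArgsConstructor
    @AllArgsConstructor
    public class Customer {
        private String name;
        private String email;
    }

    // Elsewhere: new Customer("Asha", "asha@example.com").getEmail() works with no hand-written boilerplate.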

Introduction of Web Protocols

Who should read it: It is for you if you are looking for an overview of this topic for a project, to conduct/appear in an interview, or in general. As we learn more, we will update this article.

HTTP: it stands for Hyper Text Transfer Protocol. An HTTP response is what a web client receives, via the internet, in answer to an HTTP request. HTTP is an application layer protocol for transferring hypermedia documents like HTML. HTTP is a stateless protocol, meaning the server does not keep any data between two requests.

HTTP response status codes: response codes are categorized in five classes below:

Informational responses (100–199)
Successful responses (200–299)
Redirects (300–399)
Client errors (400–499)
Server errors (500–599)

Cross-Origin Resource Sharing (CORS): a mechanism that allows browsers to make requests to cross-origin sources.

HTTP cookies: an HTTP cookie is a small piece of information that a server sends to a browser. It is used to identify whether two requests come from the same browser. Because HTTP is a stateless protocol, cookies help to remember state. Cookies are used for session management, personalization, and tracking.

HTTP rate limiting: the HTTP 429 Too Many Requests response code indicates that the user has sent too many requests in a given amount of time. This is called rate limiting.

HTTP/2: HTTP/2 makes applications faster, more robust, and simpler.

SPDY: an experimental protocol introduced by Google. It targeted a 50% reduction in page load time.

HTTPS: It is a secured HTTP protocol. This involves public and private key cryptography.

HTTP caching: caching in general helps to get data faster, without hitting the server every time. Browsers have a cache, proxy servers keep a cache, and there are also reverse proxy caches. HTTP headers play the key role in the cache mechanism. There are three caching-related headers: Expires, Pragma, and Cache-Control. Cache-Control is the preferred caching header, and it has properties to configure the duration of the cache.

Related terms to learn:  HTTP tunneling, HTTP content negotiation, HTTP caching, and Certificates

Functional Programming

Overview: Java 8 introduced functional programming features. Let’s understand what functional programming is. Basics: it’s a paradigm that builds programs out of functions, which also makes concurrent use of code safer. Lambda expressions are the pillars of functional programming in Java. 

Pure Function: Pure functions should follow these rules:

  • No state: a function should not refer to any mutable member of a class or an object.
  • No side effects: a function cannot change state outside of the function.
  • Immutable variables: use immutable variables.
  • Prefer recursion over looping; another option is to use the Stream API (a small sketch follows this list).
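A small sketch of a pure function in Java, assuming nothing beyond the standard library: the output depends only on the input, and no outside state is read or modified.

    import java.util.List;

    public class PureFunctionDemo {
        // Pure: the same input always gives the same output, with no side effects.
        static int sumOfSquares(List<Integer> numbers) {
            return numbers.stream().mapToInt(n -> n * n).sum();
        }

        public static void main(String[] args) {
            System.out.println(sumOfSquares(List.of(1, 2, 3))); // 14
        }
    }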

First class function: a first class function can be treated as a value: passed as a parameter to another function, returned from a function, or assigned to a variable. In Java, it’s achieved using a lambda expression.

Higher-order function: it takes a function as a parameter or returns a function. In Java, it’s achieved using a lambda expression.

Functional interface: An interface is a functional interface if it has a single abstract (unimplemented) method. This interface could also have a default method and a static method.

Functional composition: it is a technique to combine multiple functions into a single function.
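A minimal sketch of functional composition using java.util.function.Function; the doubling and adding functions are arbitrary examples.

    import java.util.function.Function;

    public class CompositionDemo {
        public static void main(String[] args) {
            Function<Integer, Integer> doubleIt = n -> n * 2;
            Function<Integer, Integer> addTen = n -> n + 10;

            // compose: addTen runs first, then doubleIt; andThen: doubleIt first, then addTen
            Function<Integer, Integer> composed = doubleIt.compose(addTen);
            Function<Integer, Integer> chained = doubleIt.andThen(addTen);

            System.out.println(composed.apply(5)); // (5 + 10) * 2 = 30
            System.out.println(chained.apply(5));  // (5 * 2) + 10 = 20
        }
    }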

Imperative versus recursive functions: imperative code is a straightforward way of writing logic. Recursion is a way for a function to call itself. Recursion is of two types, tail and head recursion: tail recursion has the recursive call at the end of the method’s logic, while head recursion has the recursive call at the beginning.
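A short sketch contrasting the two recursion styles with a factorial example (illustrative only).

    public class RecursionStyles {
        // Head-style: work (the multiplication) is still pending after the recursive call returns.
        static long factorialHead(int n) {
            if (n <= 1) return 1;
            return n * factorialHead(n - 1);
        }

        // Tail-style: the recursive call is the last operation; the running result is carried in 'acc'.
        static long factorialTail(int n, long acc) {
            if (n <= 1) return acc;
            return factorialTail(n - 1, acc * n);
        }

        public static void main(String[] args) {
            System.out.println(factorialHead(5));    // 120
            System.out.println(factorialTail(5, 1)); // 120
        }
    }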

Parallelism: in this concept, we split a task into multiple subtasks and, at the end, combine the results of the subtasks into the final result.

Monads: a monad is a design pattern that wraps a value with additional behavior; a common example is representing a possibly missing value.
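In Java, Optional is the closest built-in example of this "possibly missing value" idea; the lookup below is hypothetical.

    import java.util.Optional;

    public class MaybeDemo {
        // Illustrative lookup; returns empty when the user has no email on file.
        static Optional<String> findEmail(String userId) {
            return "u1".equals(userId) ? Optional.of("asha@example.com") : Optional.empty();
        }

        public static void main(String[] args) {
            String domain = findEmail("u2")
                    .map(email -> email.substring(email.indexOf('@') + 1)) // runs only if a value is present
                    .orElse("no-email");
            System.out.println(domain); // no-email
        }
    }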

Java Stream API: the Java Stream API makes operations on data sources easier and more convenient. It does not modify the underlying data source. Some common stream operations are forEach, map, collect, filter, findFirst, and flatMap.
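A small Stream API sketch using only the standard library; the list of names is sample data.

    import java.util.List;
    import java.util.stream.Collectors;

    public class StreamDemo {
        public static void main(String[] args) {
            List<String> names = List.of("Asha", "Ravi", "Meena", "Arjun");

            // filter + map + collect: the source list is never modified
            List<String> upperANames = names.stream()
                    .filter(n -> n.startsWith("A"))
                    .map(String::toUpperCase)
                    .collect(Collectors.toList());

            System.out.println(upperANames); // [ASHA, ARJUN]
            names.stream().findFirst().ifPresent(System.out::println); // Asha
        }
    }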

Lambdas: a Java lambda expression is a short block of code that takes input parameters and returns a value. Lambdas are like methods but without a name. A Java lambda expression provides a way to implement a functional interface.
