Tag: aws

Module Spotlight – Experiment Broker

Posted November 2, 2022

Vertical Relevance's Experiment Broker provides the infrastructure to implement automated resiliency experiments via code, enabling standardized resiliency testing at scale. The Experiment Broker is a resiliency module that orchestrates experiments using state machines; the state machines are typically triggered by a code pipeline but can also be executed manually. Coupled with a deep review and design of targeted resiliency tests, it can help ensure your AWS cloud application will meet business requirements in all circumstances.

Vertical Relevance Achieves AWS Service Delivery Designation for AWS Control Tower

Posted October 27, 2022

NEW YORK, NY – October 27, 2022 – Today, AWS announced the launch of the AWS Control Tower Delivery and AWS Control Tower Ready programs. Vertical Relevance (VR), a financial services-focused consulting firm and Amazon Web Services (AWS) Advanced Tier Services Partner, announced it has achieved the AWS Service Delivery designation for AWS Control Tower. The achievement signifies Vertical Relevance’s extensive and AWS-recognized understanding of best practices and validated success in delivering AWS Control Tower implementations to its customers.

Vertical Relevance Achieves the AWS Service Delivery Designation for Amazon EKS

Posted October 27, 2022

Vertical Relevance, a financial services-focused consulting firm, announced today that it has achieved an Amazon Web Services (AWS) Service Delivery designation for Amazon Elastic Kubernetes Service (Amazon EKS), recognizing that Vertical Relevance has proven success in helping customers architect, deploy, and operate containerized workloads.

Solution Spotlight – Monitoring Foundations 

Posted October 5, 2022

Organizations are rapidly adopting modern development practices – agile development, continuous integration and continuous deployment (CI/CD), DevOps, multiple programming languages – and cloud-native technologies such as microservices, Docker containers, Kubernetes, and serverless functions. As a result, they're bringing more services to market faster than ever. In this solution, learn how to implement a monitoring system to lower costs, mitigate risk, and provide an optimal end user experience.

Best Practice Guide – Monitoring Foundations

Posted October 5, 2022

Monitoring is the act of observing a system’s performance over time. Monitoring tools collect and analyze system data and translate it into actionable insights. Fundamentally, monitoring technologies, such as application performance monitoring (APM), can tell you if a system is up or down or if there is a problem with application performance. Monitoring data aggregation and correlation can also help you to make larger inferences about the system. Load time, for example, can tell developers something about the user experience of a website or an app. Vertical Relevance highly recommends that the following foundational best practices be implemented when creating a monitoring solution. 

Module Spotlight – Role Broker

Posted September 15, 2022

In non-production AWS environments today, security and IAM are often deprioritized to increase velocity of development. Vertical Relevance’s Role Broker was created as an alternative to the costly, error-prone strategies that many organizations use to manage their IAM roles in non-production environments.

Vertical Relevance Achieves AWS Service Delivery Designation for Amazon API Gateway

Posted September 7, 2022

Vertical Relevance, a financial services-focused consulting firm and Amazon Web Services (AWS) Advanced Tier Services Partner, today announced it has achieved the AWS Service Delivery designation for Amazon API Gateway. The achievement signifies Vertical Relevance’s extensive and AWS-recognized understanding of best practices and validated success in delivering Amazon API Gateway implementations to its customers. 

Use Case: Lakehouse and Data Governance

Posted August 16, 2022

In this use case, learn how a leading financial services company obtained a data platform capable of scaling to accommodate the various steps of the data lifecycle, with tracking of every step involved, including cost allocation, parameter capturing, and the metadata required to integrate the client’s third-party services.

Use Case: Financial Risk Data Analytics Pipeline and Lakehouse

Posted August 10, 2022

In this use case, learn how a leading financial services company obtained a carefully planned, scalable, and maintainable testing framework that dramatically reduced testing time for their mission-critical application and enabled them to continuously test the application’s releasability.