InterSystems READY 2025: InterSystems Cloud Core Architecture

ICCA

Rick Guidice and Jorge Martinez-Alba, InterSystems Cloud Engineers, presented an overview of InterSystems Cloud Core Architecture (ICCA), a cloud-native platform for provisioning and managing cloud services, built on containerization with Kubernetes and the InterSystems Kubernetes Operator (IKO). The architecture includes a user portal, API gateways, external connection support, data import capabilities, and a management plane that handles storage, security, and backups.

ICCA Overview: Guidice and Martinez-Alba provided a high-level overview of InterSystems Cloud Core Architecture (ICCA). ICCA is a cloud-native platform for provisioning, managing, and maintaining cloud services. It is built from an extensible, reusable collection of components that can be applied across various cloud offerings and is portable across different cloud service providers.

Portal and API Gateways: Users interact with the platform through a web portal, an Angular application hosted in AWS S3 and accessible at portal.live.iccloud.io, where they can create accounts and log in using Amazon Cognito or federated identity providers. The portal communicates with the backend over a REST API through two API gateways: one for portal management and one for managing ICCA deployments.
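
As a rough illustration of the portal-to-backend flow, the sketch below posts to a deployment-management endpoint behind an API gateway using a Cognito-issued bearer token. The base URL, route, payload fields, and token handling are assumptions for illustration, not the documented ICCA API.

```python
import requests

# Hypothetical gateway URL, route, and payload; the real ICCA API is not documented here.
API_BASE = "https://api.live.iccloud.io"   # assumed deployment-management gateway
TOKEN = "<JWT issued by Cognito or a federated identity provider>"

resp = requests.post(
    f"{API_BASE}/deployments",             # assumed route
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"service": "cloud-sql", "size": "small"},  # illustrative payload
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```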

Cloud Service External Connections and Data Import: Cloud services support external connections over JDBC or DB-API; connections are secured with TLS and pass through a network load balancer directly to the IRIS superserver port. The platform also supports importing data from a customer’s VPC and S3 bucket, a capability used by services such as Cloud SQL and Cloud Document to import DDL, DML, or CSV files.
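
A minimal connection sketch using the Python DB-API driver (intersystems-irispython) is shown below; the load balancer hostname, namespace, and credentials are placeholders, and the TLS keyword is an assumption that may vary by driver version. An equivalent JDBC connection would use a jdbc:IRIS://<host>:<port>/<namespace> URL.

```python
import ssl
import iris  # intersystems-irispython DB-API driver

# Placeholder endpoint: the network load balancer DNS name fronting the
# deployment's superserver port (1972 is the IRIS default).
ctx = ssl.create_default_context()

conn = iris.connect(
    hostname="my-deployment.example.elb.amazonaws.com",  # assumed NLB endpoint
    port=1972,
    namespace="USER",
    username="sqluser",
    password="********",
    sslcontext=ctx,  # TLS; keyword name may differ by driver version (assumption)
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())
conn.close()
```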

Containerized Architecture with IKO: Unlike a virtual-machine model that draws network boundaries around individual EC2 instances, ICCA takes a containerized approach in which IRIS instances running on RCA or 7G EC2 instances are managed by IKO. Each IRIS instance is isolated in a namespace tied to the username, which strengthens security and allows resources to be shared within a cluster.
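
The sketch below shows roughly how a namespace-scoped, IKO-managed IRIS deployment could be created with the Kubernetes Python client. The IrisCluster group, version, and spec fields follow public IKO examples; the namespace and resource names are assumptions, and this is not ICCA's actual provisioning code.

```python
from kubernetes import client, config

config.load_kube_config()

user_namespace = "user-jdoe"  # namespace tied to the username, per the talk

# Minimal IrisCluster custom resource; field names follow public IKO samples
# and may differ from what ICCA actually applies.
iris_cluster = {
    "apiVersion": "intersystems.com/v1alpha1",
    "kind": "IrisCluster",
    "metadata": {"name": "cloud-sql-demo", "namespace": user_namespace},
    "spec": {
        "licenseKeySecret": {"name": "iris-key-secret"},
        "topology": {"data": {"image": "containers.intersystems.com/intersystems/iris:latest-em"}},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="intersystems.com",
    version="v1alpha1",
    namespace=user_namespace,
    plural="irisclusters",
    body=iris_cluster,
)
```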

Resource Management and Licensing: In the containerized environment, if an IRIS instance spins down, Kubernetes automatically spins it back up on a different node, drawing on shared resources within the managed cluster. IRIS licenses are retrieved from a Kubernetes secret, which enables hourly billing based on compute usage rather than on the license itself.
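
A small sketch of pulling a license from a Kubernetes secret with the Python client; the secret and key names are assumptions, since the talk only states that the license comes from a secret in the deployment's namespace.

```python
import base64
from kubernetes import client, config

config.load_kube_config()

# Assumed secret and key names.
secret = client.CoreV1Api().read_namespaced_secret(
    name="iris-key-secret", namespace="user-jdoe"
)
license_key = base64.b64decode(secret.data["iris.key"]).decode()
print(license_key.splitlines()[0])  # e.g. the [ConfigFile] header of an iris.key file
```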

Management Plane Responsibilities: The management plane handles crucial tasks, including storage, security, IRIS configuration for performance optimization, disaster recovery, backup and restore operations, upgrades, and file injection. A separate instance within the cluster manages backup and cron jobs, ensuring these tasks are executed within the appropriate namespace.
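
For example, the scheduled jobs the management plane runs in a deployment's namespace could be inspected like this (the namespace name is assumed):

```python
from kubernetes import client, config

config.load_kube_config()

# List the cron jobs scheduled in a deployment's namespace.
batch = client.BatchV1Api()
for cj in batch.list_namespaced_cron_job("user-jdoe").items:
    print(cj.metadata.name, cj.spec.schedule)
```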

Security Extensions: To enhance security within the cluster, the platform uses Lacework for general security, Calico network policies to further lock down IRIS instances, and Coralogix to track events for debugging and issue resolution.
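
As a generic example of the kind of network policy Calico enforces, the sketch below applies a default-deny ingress policy to a deployment's namespace; the actual ICCA policies and selectors are not public, and the namespace name is an assumption.

```python
from kubernetes import client, config

config.load_kube_config()

# Default-deny ingress for all pods in the namespace; illustrative only.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace="user-jdoe"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("user-jdoe", policy)
```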

Cloud SQL and Service Deployment: Cloud SQL serves as an example of a service deployed using ICCA: a base IRIS image with its configuration applied is stored in an AWS container repository. Deploying Cloud SQL involves pulling this image and spinning it up, with user-provided configuration, such as the default password, injected at deployment. The platform makes it easy to add new services, such as Cloud Document, by using preconfigured IRIS images.
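
Assuming the container repository is Amazon ECR (the talk only says "AWS container repository"), looking up the most recently pushed preconfigured image might look like the sketch below; the repository name and region are hypothetical.

```python
import boto3

# Find the latest preconfigured Cloud SQL IRIS image in an assumed ECR repository.
ecr = boto3.client("ecr", region_name="us-east-1")
images = ecr.describe_images(repositoryName="icca/cloud-sql-iris")
latest = max(images["imageDetails"], key=lambda d: d["imagePushedAt"])
print(latest.get("imageTags"), latest["imagePushedAt"])
```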

License Scaling and Persistence: The pricing model for licenses varies by service; for Cloud SQL it is based on CPU usage, with two metering dimensions. Persistence is managed through four persistent volume claims (PVCs) associated with the IRIS pod, which retain data even if the instance scales down or restarts. Users can provision more storage for their IRIS instance as needed.
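
The PVCs bound in a deployment's namespace can be inspected with the Kubernetes Python client as sketched below; the namespace name is an assumption, and the exact claim names and their roles (for example data, WIJ, and journal volumes) are not specified in the talk.

```python
from kubernetes import client, config

config.load_kube_config()

# Show each claim's requested size and binding status.
v1 = client.CoreV1Api()
for pvc in v1.list_namespaced_persistent_volume_claim("user-jdoe").items:
    print(pvc.metadata.name, pvc.spec.resources.requests["storage"], pvc.status.phase)
```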

Backup Jobs and Data Freezing: Backup jobs, which take volume snapshots of the IRIS instances, run every six minutes as cron jobs, and manual backups can be triggered via an AWS Lambda function. The backup schedule can be modified in the configuration. During a backup, the IRIS instance is briefly frozen to ensure data consistency.
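
A sketch of a backup CronJob on the six-minute schedule described in the talk, created with the Kubernetes Python client; the namespace, container image, and command (which would freeze IRIS, snapshot the volumes, then thaw) are placeholders, not the actual ICCA backup job.

```python
from kubernetes import client, config

config.load_kube_config()

# Placeholder backup pod: image and script are hypothetical.
job_spec = client.V1JobTemplateSpec(
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[client.V1Container(
                    name="backup",
                    image="example.com/icca/backup:latest",               # placeholder image
                    command=["/bin/sh", "-c", "freeze-snapshot-thaw.sh"],  # placeholder script
                )],
            )
        )
    )
)
cron = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="iris-backup", namespace="user-jdoe"),
    spec=client.V1CronJobSpec(schedule="*/6 * * * *", job_template=job_spec),
)
client.BatchV1Api().create_namespaced_cron_job("user-jdoe", cron)
```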

Multi-tenancy Considerations and Load Balancer Functionality: While the current architecture deploys four PVCs per IRIS instance, multi-tenancy use cases would be defined at the service level. The load balancer in the current architecture primarily serves as an endpoint and does not perform load balancing; it provides a consistent access point for connections. Multiple discrete IRIS instances can be deployed within a namespace.

On-Demand Use Case Clarification: The architecture could potentially be used for on-demand spinning up of IRIS instances for tasks such as large queries, which can then be shut down. However, this would require existing data to be hosted on the same cloud provider as the managed services offered by InterSystems. The discussed architecture pertains to InterSystems Managed Cloud Services, not for running independent cloud instances.

J2 Interactive

J2 Interactive is an award-winning software development and IT consulting firm that specializes in customized solutions for healthcare and life sciences.