gRPC Services
Deploy gRPC services independently for distributed GraphQL Federation.
Introduction
gRPC services in Cosmo are independent, remotely deployed services that communicate with the Cosmo Router over the network using the gRPC protocol. Unlike gRPC plugins, which run as local processes managed by the router, gRPC services operate as standalone microservices that can be deployed anywhere in your infrastructure.
This approach is ideal for distributed architectures where services are owned by different teams, require independent scaling, or need to be implemented in languages other than Go.
For an overview of gRPC concepts shared between plugins and services, see our gRPC Concepts documentation.
What Makes gRPC Services Unique
gRPC services are standalone microservices that expose their functionality through gRPC endpoints and integrate into your GraphQL Federation as subgraphs. These services:
- Run independently as separate deployments with their own lifecycle management
- Communicate over the network using the standard gRPC protocol
- Maintain service autonomy while participating in the federated graph
- Scale independently based on their specific requirements
- Support any gRPC language - Python, Java, C#, Node.js, Rust, and many others
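Like plugins, a gRPC service implements a proto contract that corresponds to its GraphQL subgraph schema; the router invokes these RPCs to resolve fields. The exact contract is produced by Cosmo's tooling from your GraphQL SDL, so the fragment below is only an illustrative sketch (the names `ProductsService`, `QueryProducts`, and the message shapes are hypothetical):

```protobuf
syntax = "proto3";

package service.products.v1;

// Hypothetical contract for a "products" subgraph. The real proto
// is generated from your GraphQL schema by Cosmo's tooling.
service ProductsService {
  // Resolves the `products` root query field.
  rpc QueryProducts(QueryProductsRequest) returns (QueryProductsResponse) {}
}

message QueryProductsRequest {}

message QueryProductsResponse {
  repeated Product products = 1;
}

message Product {
  string id = 1;
  string name = 2;
  double price = 3;
}
```

Because the contract is plain protobuf, any language with gRPC code generation can implement the service side.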
The key distinction from gRPC plugins is that services are:
- Remotely deployed rather than co-located with the router
- Network-based, communicating over gRPC rather than via inter-process communication
- Independently managed with separate deployment pipelines
- Service-oriented, following microservices architecture patterns
Key Differences from gRPC Plugins
Remote Deployment
Unlike plugins that run as locally forked processes managed by the router, gRPC services can be deployed anywhere in your infrastructure - different servers, containers, or even cloud regions.
Language Agnostic
While gRPC plugins currently only support Go, gRPC services can be implemented in any language that supports gRPC - Python, Java, C#, Node.js, Rust, and many others.
Independent Scaling
Services can be scaled independently based on their specific load patterns and resource requirements, without affecting the router or other services.
Network Communication
Communication happens over the network using the standard gRPC protocol, enabling distributed architectures and cross-datacenter deployments.
Service Autonomy
Each service maintains its own deployment lifecycle, monitoring, and operational concerns, following traditional microservices patterns.
Team Independence
Different teams can own and operate their services independently, using their preferred languages, frameworks, and deployment strategies.
When to Choose gRPC Services
Choose gRPC Services when:
- You need to use languages other than Go
- Services are owned by different teams
- You require independent scaling and deployment
- Services are distributed across different environments
- You want to maintain existing microservices architecture
- Services have different release cycles
Choose gRPC Plugins when:
- You want the simplest possible deployment model
- Performance is critical (lower latency with local communication)
- You’re comfortable with Go development
- You prefer unified deployment and monitoring
Service Architecture
gRPC services integrate into GraphQL Federation through this architecture:
- Independent Deployment: Services are deployed and managed independently from the router
- Network Discovery: The router discovers and connects to services over the network
- Protocol Translation: The router translates GraphQL requests to gRPC calls over the network
- Autonomous Operation: Services handle their own scaling, monitoring, and lifecycle management
- Distributed Response: Results are collected from multiple distributed services and assembled
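The fan-out and assembly steps above can be sketched in miniature. In this stand-in sketch, two async functions play the role of remote gRPC subgraphs (real calls would go through generated gRPC stubs); the point is that the router calls them concurrently and merges the partial results into a single GraphQL response:

```python
import asyncio

# Toy sketch of the router's fan-out/assembly step. Each "service"
# below stands in for a remote gRPC subgraph; in reality these would
# be network calls through generated gRPC stubs.

async def products_service() -> dict:
    return {"products": [{"id": "1", "name": "Cosmo"}]}

async def reviews_service() -> dict:
    return {"reviews": [{"productId": "1", "score": 5}]}

async def resolve() -> dict:
    # Dispatch to both services concurrently...
    products, reviews = await asyncio.gather(
        products_service(), reviews_service()
    )
    # ...then assemble the partial results into one response.
    return {"data": {**products, **reviews}}

result = asyncio.run(resolve())
print(result["data"]["products"][0]["name"])  # Cosmo
```

The concurrency matters in a distributed setup: total latency is bounded by the slowest service rather than the sum of all calls.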
Deployment Considerations
Network Configuration
- Services must be accessible from the router over the network
- Consider network latency and reliability in your architecture
- Plan for service discovery and health checking
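Because the router reaches services over the network, transient failures are normal and probes should retry with backoff. The sketch below uses a stand-in `probe` callable; in practice the probe would call the standard gRPC health service (`grpc.health.v1.Health/Check`):

```python
import time

# Health-probe loop with exponential backoff. The probe here is a
# stand-in callable; a real probe would hit grpc.health.v1.Health/Check.

def wait_until_healthy(probe, attempts: int = 5, base_delay: float = 0.1) -> bool:
    delay = base_delay
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
        delay *= 2  # back off between probes
    return False

# Stand-in probe: fails twice, then reports healthy.
calls = {"n": 0}
def flaky_probe() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_until_healthy(flaky_probe, base_delay=0.01))  # True
```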
Security
- Implement proper authentication between router and services
- Consider network security and encryption (when TLS support is available)
- Follow microservices security best practices
Monitoring and Observability
- Set up independent monitoring for each service
- Implement distributed tracing across services
- Plan for service health checks and circuit breakers
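The circuit-breaker pattern mentioned above can be sketched minimally: after a threshold of consecutive failures the breaker opens and short-circuits calls until a cool-down elapses. A stdlib-only sketch (the thresholds and timings are illustrative, not recommendations):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `threshold` consecutive
    failures and rejects calls until `cooldown` seconds have passed."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open: permit a trial call once the cooldown has elapsed.
        return time.monotonic() - self.opened_at >= self.cooldown

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker(threshold=2, cooldown=60.0)
breaker.record_failure()
breaker.record_failure()
print(breaker.allow())  # False: breaker is open
breaker.record_success()
print(breaker.allow())  # True: breaker closed again
```

Pairing a breaker with health checks keeps a slow or failing service from dragging down the whole federated request path.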
Scaling Strategy
- Design services to scale independently based on load
- Consider auto-scaling policies for each service
- Plan for different resource requirements per service
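As a concrete starting point for per-service scaling, the replica-count rule used by Kubernetes' Horizontal Pod Autoscaler is a useful model: desired replicas = ceil(current replicas × current metric ÷ target metric), evaluated per service on its own metric:

```python
import math

# HPA-style replica calculation: each service scales on its own metric.
def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    return math.ceil(current_replicas * current_metric / target_metric)

# A service at 4 replicas averaging 90% CPU against a 60% target:
print(desired_replicas(4, 90.0, 60.0))  # 6
```

Because each service runs independently, a spike in one subgraph's load scales only that service, leaving the router and the other subgraphs untouched.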