From "Fundamentals of Software Architecture"
Space-Based Architecture: Characteristics, Trade-offs, and Use Cases
Key Insight
Space-based architecture earns five-star ratings for elasticity, scalability, and performance by leveraging in-memory data caching to remove the database as a transactional bottleneck, allowing it to support millions of concurrent users. The deployment manager is central to this: it dynamically starts new processing unit instances as load increases and shuts them down as load decreases, delivering adaptable scalability and elasticity. Processing units also self-synchronize automatically; if one instance fails, the remaining instances update their member lists to reflect the loss. The processing grid, an optional component, orchestrates requests that span multiple processing unit types, for example coordinating communication between order processing and payment processing units.
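The deployment manager's scale-out/scale-in behavior described above can be sketched roughly as follows. This is a minimal illustration, not the book's implementation; the class names (`DeploymentManager`, `ProcessingUnit`) and the per-unit capacity figure are hypothetical.

```python
class ProcessingUnit:
    """Stand-in for one deployable processing-unit instance."""
    def __init__(self, unit_id):
        self.unit_id = unit_id

class DeploymentManager:
    """Starts and stops processing units so capacity tracks load."""
    def __init__(self, min_units=2, requests_per_unit=500):
        self.min_units = min_units
        self.requests_per_unit = requests_per_unit  # assumed capacity per unit
        self.units = [ProcessingUnit(i) for i in range(min_units)]

    def rebalance(self, current_requests):
        """Scale out on load spikes, scale in when load drops; returns unit count."""
        needed = max(self.min_units,
                     -(-current_requests // self.requests_per_unit))  # ceiling division
        while len(self.units) < needed:        # load increased: start new instances
            self.units.append(ProcessingUnit(len(self.units)))
        while len(self.units) > needed:        # load decreased: shut instances down
            self.units.pop()
        return len(self.units)
```

For example, a spike to 5,000 concurrent requests would (under the assumed capacity of 500 requests per unit) grow the pool to 10 instances, and a quiet period would shrink it back to the configured minimum.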
Despite its performance advantages, space-based architecture entails significant trade-offs, particularly in simplicity and testability, both rated one-star. Its inherent complexity stems from extensive caching and the eventual consistency model with the primary data store, requiring meticulous design to prevent data loss across its numerous interdependent parts during system failures. Testing is notably challenging and costly, especially when simulating the high levels of scalability and elasticity for hundreds of thousands of concurrent users, often necessitating testing in production environments, which introduces substantial operational risks. Furthermore, space-based architecture is relatively expensive, largely due to licensing fees for caching products and high resource utilization across both cloud and on-premises systems, exacerbated by its high scalability and elasticity demands.
This architecture style is ideally suited to applications that experience high, unpredictable spikes in user or request volume, or that must handle concurrent user loads exceeding 10,000. Examples include online concert ticketing systems, which surge from hundreds to tens of thousands of users the moment tickets go on sale and need immediate elasticity to keep rapidly changing seat availability current, and online auction systems, whose user and bidding loads are inherently unpredictable. In these scenarios, the deployment manager can scale processing units proactively or reactively to meet demand, and individual units can even be dedicated to specific auctions to keep bidding data consistent. The architecture also offers flexible deployment options: the entire system (processing units, middleware, data pumps/readers/writers, and the database) can run entirely in the cloud, entirely on-premises, or in a hybrid fashion. A particularly powerful hybrid model keeps transactional processing units and middleware in elastic cloud environments while sensitive physical databases and data remain securely on-premises, with the asynchronous data pumps and eventual consistency model handling data synchronization between the two.
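The asynchronous data-pump flow behind that eventual consistency model can be sketched as below: the processing unit serves the request entirely from its in-memory cache and merely publishes the change, while a separate data writer later drains the pump and persists changes to the database. All names here (`ProcessingUnit`, `DataWriter`, the ticketing keys) are illustrative assumptions, and a plain `deque` stands in for real messaging middleware.

```python
from collections import deque

class ProcessingUnit:
    """Serves requests from an in-memory cache; persistence is deferred."""
    def __init__(self, data_pump):
        self.cache = {}              # in-memory replicated cache
        self.data_pump = data_pump   # async channel to the data writer

    def update(self, key, value):
        self.cache[key] = value               # request completes here, in memory
        self.data_pump.append((key, value))   # change published, not awaited

class DataWriter:
    """Drains the data pump and applies pending changes to the database."""
    def __init__(self, data_pump, database):
        self.data_pump = data_pump
        self.database = database

    def drain(self):
        while self.data_pump:
            key, value = self.data_pump.popleft()
            self.database[key] = value
```

Between `update` and `drain`, the database lags the cache; that window is exactly the eventual-consistency trade-off the style accepts in exchange for removing the database from the transactional path.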
Summary of "Fundamentals of Software Architecture" by Mark Richards and Neal Ford.