Today’s networks are less differentiated by technology than ever before, but their differences in size make scalability more important than ever. Everyone, from small single-location enterprises to large multinational corporations, uses many of the same basic building blocks, including:
- Virtualized Servers
- Clouds – Public or private
- Storage Networks
- Core / Distribution / Access network topologies
- WLAN communication
- Multiple vendors
Because the building blocks are the same, the feature requirements for network management and monitoring are also nearly identical, even though the network sizes may differ substantially. Therefore, instead of asking only about features, scalability is becoming the critical factor when evaluating management and monitoring solutions. Enter Big Data!
For example, let us examine two enterprises in the same industry (e.g., financial services) that operate networks of different sizes. Assume A is a regional enterprise with 6,000 employees and 1 million customers in one country, while B is a large multinational organization with 100,000 employees and 15 million customers in over 120 countries. Both A and B probably use similar vendor technologies: switching/routing cores and remote office/branch locations connected via MPLS or similar. Belonging to the financial industry also brings strict privacy and security requirements, so both A and B must retain customer data and comply with strict security regulations.
So, although both may use some of the same back-office software tools, such as a customer portal for online banking, an iPhone/Android app, and a call center to answer customer questions, their scaling requirements differ massively.
One of the biggest factors differentiating the scalability of management and monitoring solutions is their database support.
Many network management vendors make use of open source or free databases like Microsoft SQL Server Express, MySQL/MariaDB, and PostgreSQL, while others use commercial offerings from Oracle. However, none of these traditional databases match the speed and scalability of purpose-built Big Data solutions. It is therefore crucially important to investigate the supported database architecture when comparing product scalability.
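To see why database architecture matters at scale, consider how a Big Data style store spreads write load across many nodes instead of funneling everything into one server. The following is a minimal sketch of that idea; the class and node names are hypothetical and do not refer to any specific product:

```python
import hashlib

class ShardedMetricStore:
    """Toy illustration of horizontal database scaling: spread metric
    writes across N nodes, as Big Data stores do."""

    def __init__(self, nodes):
        self.nodes = nodes                      # e.g. ["db-0", "db-1", "db-2"]
        self.buffers = {n: [] for n in nodes}   # per-node write buffers

    def node_for(self, device_id):
        # Hash the device ID so each device's metrics always land on
        # the same node; adding nodes redistributes the load.
        h = int(hashlib.md5(device_id.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def write(self, device_id, metric, value):
        self.buffers[self.node_for(device_id)].append((device_id, metric, value))

store = ShardedMetricStore(["db-0", "db-1", "db-2"])
for i in range(9000):
    store.write(f"switch-{i}", "if_octets", i)
# Writes are spread across all three nodes rather than one server.
```

A single-node RDBMS has no equivalent of this fan-out: every insert competes for the same disk and lock structures, which is why the database layer so often becomes the scaling ceiling.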
Compute Resources – Vertical & Horizontal Scalability
A second critical factor in scalability is a product’s ability to make use of additional compute resources.
Can a product scale efficiently by adding more CPU cores, memory, and faster SSD storage? For example, many applications were not designed for multithreading, or do not let the administrator configure the threads and thread pool of each process. Only systems architected from the start to scale vertically can support the ever-larger network loads required today.
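The thread-pool point can be made concrete with a short sketch. In a poller designed for vertical scaling, the worker count is an administrator-tunable setting, so adding cores actually increases throughput; the function names here are illustrative, not from any real product:

```python
from concurrent.futures import ThreadPoolExecutor

def poll_device(device):
    # Placeholder for an SNMP or API query to one device.
    return f"{device}: ok"

def poll_all(devices, workers=8):
    # 'workers' is the administrator-configurable pool size; size it
    # to the host's cores and the I/O profile of the polling work.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(poll_device, devices))

# On a larger host, the same code scales up simply by raising 'workers'.
results = poll_all([f"router-{i}" for i in range(100)], workers=16)
```

An application that hard-codes a single polling thread gains nothing from extra cores, which is exactly the vertical-scaling failure described above.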
Another scaling concept is the ability to scale horizontally by adding more functional components to the overall architecture. For network management and monitoring tools, this commonly means adding more “Agents” (also called Pollers, Collectors, Probes, etc.). The main factors for horizontal scaling are:
- Number of devices/elements supported per Agent
- Total number of Agents supported
The third major component of scalability is High Availability (HA).
HA must be designed into all core components, including the database, middleware, and Agents. These components must continue to operate even when other components are down, and must self-heal once the issues are resolved.
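The "keep operating, then self-heal" behavior can be sketched in a few lines: an Agent buffers its data locally while the database is unreachable and drains the backlog automatically once the connection recovers. All class names here are illustrative:

```python
import time

class ResilientAgent:
    """Toy self-healing Agent: buffers records while the upstream
    database is down and retries until it recovers."""

    def __init__(self, db):
        self.db = db
        self.backlog = []          # local buffer while the DB is down

    def report(self, record):
        self.backlog.append(record)
        self.flush()

    def flush(self):
        # Drain the backlog in order; on failure keep the data,
        # back off briefly, and let the next report() retry.
        while self.backlog:
            try:
                self.db.write(self.backlog[0])
                self.backlog.pop(0)
            except ConnectionError:
                time.sleep(0.01)
                return

class FlakyDB:
    """Fake database that starts out unreachable."""
    def __init__(self):
        self.up = False
        self.rows = []
    def write(self, rec):
        if not self.up:
            raise ConnectionError("db down")
        self.rows.append(rec)

db = FlakyDB()
agent = ResilientAgent(db)
agent.report("sample-1")           # DB is down: buffered, not lost
db.up = True                       # DB recovers
agent.report("sample-2")           # both records now reach the DB
```

The key design point is that no data is dropped during the outage and no operator intervention is needed for recovery, which is what "self-heal" means in practice.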
When selecting a management and monitoring product, consider all three scalability factors above (database architecture, compute scaling, and high availability) even before evaluating features and prices!