Discovery and Real-time Data Analysis: Integrity management requires full discovery and low-overhead collection of data. Partial implementations, constrained by the overhead of agents, simply cannot assure operating integrity: some system components are left unmonitored, and inter-component dependencies go unidentified.
End-to-end view of IT infrastructure: As a foundation, a consistent, end-to-end view through an intuitive interface gives IT staff better control than juggling a separate view for each component silo.
Business Transaction View: Filtering out everything extraneous to focus on the interdependent components that power each business transaction is critical when a problem impacts that transaction.
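A transaction view like this can be sketched as a traversal of a component dependency graph, keeping only what the transaction touches. This is a minimal illustration, not any specific product's implementation; the component names and the `DEPENDS_ON` map are hypothetical.

```python
from collections import deque

# Hypothetical dependency graph: each component maps to the components
# it depends on. Names are illustrative only.
DEPENDS_ON = {
    "checkout-web": ["checkout-api"],
    "checkout-api": ["payments-svc", "orders-db"],
    "payments-svc": ["payments-db"],
    "reporting-web": ["reporting-db"],
}

def transaction_components(entry_point):
    """Return only the components that power one business transaction,
    filtering out everything extraneous in the infrastructure."""
    seen = {entry_point}
    queue = deque([entry_point])
    while queue:
        component = queue.popleft()
        for dep in DEPENDS_ON.get(component, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

For example, `transaction_components("checkout-web")` returns the five checkout-related components and excludes the reporting silo entirely, which is exactly the filtering the transaction view performs when a problem hits that transaction.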
Dynamic Thresholding: Static thresholds detect only when a single measure, set with one particular condition in mind, is exceeded. But "normal" is a variable that changes constantly with overall system load and component interactions. Integrity management instead sets thresholds dynamically, based on each system component's own history of operation. Advanced algorithms can then detect a variety of abnormal behaviors under a variety of conditions at the first sign that a metric is deviating from its norm.
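One simple way to derive a threshold from a component's history is to flag values that fall more than a few standard deviations from the recent mean. This is a minimal sketch of the idea; real products use more advanced algorithms (seasonality, load-aware baselines), and the three-sigma cutoff here is an assumed default.

```python
import statistics

def is_abnormal(history, latest, k=3.0):
    """Flag a metric value as abnormal when it deviates more than k standard
    deviations from this component's own recent history (a dynamic threshold),
    rather than comparing it against one fixed static limit."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Perfectly flat history: any change at all is abnormal.
        return latest != mean
    return abs(latest - mean) > k * stdev
```

With a history of `[100, 102, 98, 101, 99]`, a reading of 150 is flagged while 103 is not; the same reading of 103 might be flagged against a tighter history, which is the point of deriving "normal" from the data rather than fixing it in advance.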
Event Correlation: Integrity management requires event correlation that uses the critical information defined in transactions to automate root cause analysis. This analysis considers factors such as topology, timing shifts, and other behavior. It also self-learns event scenarios over time by taking an event fingerprint of each known problem; it can then anticipate recurring problems by comparing current events against these fingerprints.
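Fingerprint matching of this kind can be sketched as set similarity between the current events and the event sets recorded for known problems. The fingerprints, event names, and the Jaccard-similarity threshold below are all illustrative assumptions, not a description of any particular correlation engine.

```python
def jaccard(a, b):
    """Similarity between two event sets, from 0.0 (disjoint) to 1.0 (equal)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical learned fingerprints: each known root cause maps to the set
# of event types it historically produced.
FINGERPRINTS = {
    "db-connection-pool-exhausted": {"db_timeouts", "app_errors", "queue_backlog"},
    "network-partition": {"node_unreachable", "replication_lag", "app_errors"},
}

def match_fingerprint(current_events, threshold=0.6):
    """Compare current events to known fingerprints and return the best
    match above the threshold, anticipating a recurring problem."""
    best, best_score = None, 0.0
    for cause, fingerprint in FINGERPRINTS.items():
        score = jaccard(current_events, fingerprint)
        if score > best_score:
            best, best_score = cause, score
    return best if best_score >= threshold else None
```

A current event set of `{"db_timeouts", "app_errors", "queue_backlog"}` matches the pool-exhaustion fingerprint exactly, while an unfamiliar event set matches nothing and would be escalated for fresh analysis.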
Forewarnings of Issues and Constraints: Based on the information analyzed across all of the functional stacks involved in a transaction, integrity management can alert the right expert so that looming issues are corrected before they impact the business.
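Getting the alert to the right expert amounts to a routing decision keyed on the affected functional stack. The routing table and team names below are hypothetical placeholders for whatever on-call structure an organization actually has.

```python
# Hypothetical routing table mapping each functional stack to its expert team.
ROUTING = {
    "database": "dba-oncall",
    "network": "netops-oncall",
    "application": "app-team-oncall",
}

def route_alert(stack, message):
    """Direct a forewarning to the expert responsible for the affected
    stack, falling back to a general operations queue if none is mapped."""
    recipient = ROUTING.get(stack, "it-operations")  # assumed catch-all queue
    return {"to": recipient, "message": message}
```

For example, a forewarning about a database connection pool nearing its limit would land with the DBA on-call team rather than in a shared queue, shortening the path to a fix before the business transaction is impacted.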