A recent post titled “Security Ratings: Love, Loathe or Live With Them?” posed some important questions for the cyber ratings industry and more broadly for companies, investors and regulators with an interest in better understanding the downside risks of digital transformation.
I agree that security ratings are necessary, that more transparency is needed, and that we need to work to improve them. But contrary to some of the points in the post, I do not believe the major issue in improving current cyber ratings is having more ‘inside data’ or ‘more collaboration’. Those things will help, but they are Step 2 in the process.
Step 1 is to think outside the cyber box and validate ratings against something other than cyber incident occurrence. The unintended consequences of incident-based validation have created 3 problems for the 1st Gen cyber ratings industry:
- Problem #1: Reported events used for validation are a small sample of actual event occurrences. Why? It is well reported that most companies don’t know their networks have been breached. In addition, most companies don’t want to announce a problem even when they find one, especially ransomware. Actual events probably outnumber reported events by more than 3 to 1.
- Problem #2: Validating to incident occurrence tells you nothing about the severity of the incident; it just tells you ‘event/no event’. It is like trying to trade a stock after hearing only that a company missed earnings. Who cares that they missed? What matters to the stock price is by how much they missed relative to Street estimates.
- Problem #3: There is a skew towards “the bigger the company, the riskier the rating”. Bigger organizations have bigger networks and more people, and by sheer weight of numbers have a higher probability of incidents. If you validate to incidents, that is the logical outcome, especially for an insurance underwriting model. Unfortunately, it makes the rating fairly useless outside of pricing premiums.
The Solution? Validate not to event occurrence, but to directional variance in operating performance.
As a former hedge fund manager who developed a cyber rating model as an investment tool for my own fund (version 1.0), my validation question was always: “how does cyber posture currently impact operational and financial performance for factors x, y, z?” That should be the real litmus test for a cyber rating. Twenty years of trading securities taught me that markets eventually price in both operational and financial performance correctly over time. Companies with good, stable, productive operations have share prices that go up, and vice versa on the downside.
In cyber rating version 2.0, I turned the tool around and offered it to investors as well as company execs, who also indicated they’d like such a rating. For greater transparency, we decided that the most objective test of whether ratings for good vs. bad cyber posture track operating performance was the stock market. Why? Because operational performance drives stock prices in the long run (yes, occasional bubble manias around IPOs and SPACs skew results over short durations).
We index cyber performance on a regional basis (broad US and European markets) and on a sector basis (e.g., banks vs. banks, consumer companies vs. consumer companies) to ensure apples-to-apples comparisons and to normalize for macroeconomic and cyclical impacts on operations.
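One minimal way to express the normalization idea described above, as a sketch: compare each company’s return against its matching regional/sector benchmark, so that macro and cyclical moves common to the whole sector cancel out. The function name and all numbers below are illustrative, not the actual index methodology.

```python
def excess_return(company_return: float, benchmark_return: float) -> float:
    """Return relative to the matching regional/sector benchmark.

    Subtracting the benchmark strips out macroeconomic and cyclical
    moves shared by the whole sector, leaving company-specific performance.
    """
    return company_return - benchmark_return

# Illustrative: a bank up 4% while the bank-sector index is up 6% has
# underperformed its peers despite a positive absolute return.
assert excess_return(0.04, 0.06) < 0
```

The point of benchmarking by sector and region is that a “good cyber” bank is judged against other banks, not against, say, a consumer company riding a different economic cycle.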
We have an independent auditor time-stamp and validate the results, and have for 3 years. The results of “good cyber-managed companies vs. bad cyber-managed companies” against the broad market are available as indices on Bloomberg, Reuters and our website. Sector-based results are available upon request. Indices are how credit ratings, governance ratings, investment bank research ratings, sovereign risk and the ESG world express ratings validation. Why should cyber ratings be any different?
The Results. How accurate have our ratings been? Over the 3 years through November 2020, good companies outperformed bad ones by 46% in the US and 60% in the EU. For 2020 YTD through November: for $1000 invested in the US market, you would have $1016 if you bought the market-tracking ETF, $1095 if you invested in “good cyber”, and $1227 if you shorted “bad cyber”. Being bad at cyber really is much worse for stakeholders.
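To make the dollar figures above concrete, here is the simple-return arithmetic behind them, as a sketch (the helper function is illustrative; the input values are the ones quoted in the text):

```python
def pct_return(end_value: float, start_value: float = 1000.0) -> float:
    """Simple return, in percent, on a starting investment."""
    return (end_value - start_value) / start_value * 100.0

market_etf = pct_return(1016)  # market-tracking ETF: +1.6%
good_cyber = pct_return(1095)  # "good cyber" basket: +9.5%
short_bad  = pct_return(1227)  # short "bad cyber":  +22.7%

# Excess return of the "good cyber" basket over the broad market:
good_vs_market = good_cyber - market_etf  # 7.9 percentage points
```

In other words, on these figures the “good cyber” basket beat the broad market by roughly 7.9 percentage points over the period, and shorting “bad cyber” did better still.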
Our ratings also predicted a majority of the ransomware breaches of public companies in 2020. Companies we rated 1- and 2-Star (the worst performers) are much more likely to experience financially damaging breaches than their 4- and 5-Star peers. This holds true across every market (US, EU, UK) and sector.
Bottom line: My view is that the cyber ratings industry will not succeed in changing the behavior of those who need it most (investors, boards, C-suites and regulators) until it measures up to what other ‘tried and true’ ratings deliver: a consistent and predictable assessment of current and future company performance, across all companies, measurable at all times.