"Fun" being a euphemism for avoiding jail...
Shorter Version
A strategy to maximise bonuses and avoid personal culpability:
- Don’t commit the fraud yourself.
- Minimise information received about the actions of your employees.
- Control employees through automated, algorithmic systems based on plausible metrics like Value at Risk.
- Pay high bonuses to employees linked to “stretch” revenue/profit targets.
- Fire employees when targets are not met.
- …Wait.
Longer Version
CEOs and senior managers of modern corporations possess the ability
to engineer fraud on an organisational scale and capture the upside
without running the risk of doing any jail time. In other words, they
can reliably commit fraud and get away with it.
Imagine that you are the newly hired CEO of a large bank and, by some
improbable miracle, your bank is squeaky clean and free of fraudulent
practices. But you are unhappy about this. Your competitors are making
more profits than you are by embracing fraud and coming out ahead of you
even after paying tens of billions of dollars in fines to the
regulators. And you want a piece of the action. But you’re a risk-averse
person and don’t want to risk spending any time in jail for committing
fraud. So how can you achieve this outcome?
Obviously you should not commit any fraudulent acts yourself. You
want your junior managers to commit fraud in the pursuit of higher
profits. One way to incentivise this behaviour is to adopt what are
known as ‘high-powered incentives’. Pay your employees high bonuses tied
to revenue/profits and maintain hard-to-meet ‘stretch’ targets. Fire
ruthlessly if these targets are not met. And finally, ensure that you
minimise the flow of information up to you about exactly how your
employees meet these targets.
There is one problem with this approach. It allows you, as CEO, to
use the “I knew nothing!” defence and claim ignorance about all the
“deplorable” fraud taking place lower down the organisational food
chain. But it may fall foul of another legal principle that has been
tailored for such situations – the principle of
‘wilful blindness’ –
“if
there is information that you could have known, and should have known,
but somehow managed not to know, the law treats you as though you did
know it”. In a recent
essay,
Judge Rakoff uses exactly this principle to criticise the failure of
regulators in the United States to prosecute senior bankers.
But wait – all hope is not lost yet. There is one way by which you as
a CEO can argue that adequate controls and supervision were in
place while at the same time making it easier for your employees to commit
fraud. Simply perform the monitoring and control function through an
automated system and restrict your role to signing off on the risk
metrics that are the output of this automated system.
It is hard to explain how this can be done in the abstract so let me
take a hypothetical example from the mortgage origination and
securitisation industry. As a CEO of a mortgage originator in 2005, you
are under a lot of pressure from your shareholders to increase subprime
originations. You realise that the task would be a lot easier if your
salespeople originated fraudulent loans where ineligible borrowers are
given loans they can’t afford. You’ve followed all the steps laid out
above but as discussed this is not enough. You may be accused of not
having any controls in the organisation. Even if you try hard to ensure
that no information regarding fraud filters through to you, you can
never be certain. At the first sign of something unusual, a mortgage
approval officer may raise an exception to his supervisor. Given that
every person in the management hierarchy wants to cover his own back,
how can you ensure that nothing filters up to you whilst at the same
time providing a plausible argument that you aren’t wilfully blind?
The answer is somewhat counterintuitive – you should codify and
automate the mortgage approval process. Have your salespeople input
potential borrower details into a system that approves or rejects the
loan application based on an algorithm without any human intervention.
The algorithm does not have to be naive. In fact it would ideally be a
complex algorithm, maybe even ‘learned from data’. Why so? Because the
more complex the algorithm, the more opportunities it provides to the
salespeople to ‘game’ and arbitrage the system in order to commit fraud.
And the more complex the algorithm, the easier it is for you, the CEO,
to argue that your control systems were adequate and that you cannot be
accused of wilful blindness or even the
‘failure to supervise’.
In complex domains, this argument is impossible to refute. No
regulator/prosecutor is going to argue that you should have installed a
more manual control system. And no regulator can argue that you, the
CEO, should have micro-managed the mortgage approval process.
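To make this concrete, here is a minimal sketch of such an automated approval rule. The function, thresholds, and payment formula are all hypothetical, invented only to illustrate the point: because the rule is fixed and fully codified, a salesperson who knows it can tune the inputs until an ineligible borrower passes, while the CEO only ever signs off on aggregate approval statistics.

```python
# Hypothetical automated loan-approval rule: no human ever reviews
# the inputs, only whether the algorithm's thresholds are cleared.

def approve_loan(stated_income: float, loan_amount: float,
                 credit_score: int) -> bool:
    """Approve if the stated debt-to-income ratio and credit score
    clear fixed (illustrative) thresholds."""
    annual_payment = loan_amount * 0.08       # crude payment estimate
    dti = annual_payment / stated_income      # debt-to-income ratio
    return dti <= 0.35 and credit_score >= 620

# An honest application from an ineligible borrower is rejected...
print(approve_loan(stated_income=30_000, loan_amount=400_000,
                   credit_score=650))         # False: DTI far too high

# ...but a salesperson who knows the rule simply inflates the stated
# income until the very same borrower clears the threshold.
print(approve_loan(stated_income=95_000, loan_amount=400_000,
                   credit_score=650))         # True: the fraud passes
```

The more elaborate the scoring model, the more such knobs exist for the salespeople – and the more plausibly the CEO can claim that the controls were state of the art.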
Let me take another example – the use of Value at Risk (VaR) as a
risk measure for control purposes in banks. VaR is not ubiquitous
because traders and CEOs are unaware of its flaws. It is ubiquitous
because it allows senior managers to project the facade of effective
supervision without taking on the trouble or the legal risks of actually
monitoring what their traders are up to. It is sophisticated enough to
protect against the charge of wilful blindness and it allows ample room
for traders to load up on the tail risks that fund the senior managers’
bonuses during the good times. When the risk blows up, the senior
manager can simply claim that he was deceived and fire the trader.
What makes this strategy so easy to implement today compared to even a
decade ago is the ubiquity of fully algorithmic control systems.
When the control function is performed by genuine human domain experts,
then obvious gaming of the control mechanism is a lot harder to achieve.
Let me take another example to illustrate this. One of the positions
that lost UBS billions of dollars during the 2008 financial crisis was
called
‘AMPS’
where billions of dollars in super-senior tranche bonds were hedged
with a tiny sliver of equity tranche bonds so that the portfolio showed a
zero VaR and delta-neutral risk position. Even a novice
controller could have identified the catastrophic tail risk embedded in
hedging a position where one can lose billions with another position
where one could only gain millions.
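A toy calculation shows how such a position defeats VaR as a control metric. The figures below are illustrative, not UBS’s actual numbers: in 99.9% of scenarios neither leg moves, so the 99% VaR is zero, yet the rare crash scenario carries a multi-billion loss that the metric never reports.

```python
# Illustrative AMPS-style portfolio: a huge super-senior holding
# "hedged" with a small equity-tranche position. All numbers invented.

def portfolio_pnl(crash: bool) -> float:
    super_senior = -5_000_000_000.0 if crash else 0.0  # tail loss: $5bn
    equity_hedge = 50_000_000.0 if crash else 0.0      # capped gain: $50m
    return super_senior + equity_hedge

# Historical-simulation VaR over 1,000 scenarios, one of which is a crash.
scenarios = [portfolio_pnl(crash=(i == 0)) for i in range(1000)]
losses = sorted(-pnl for pnl in scenarios)
var_99 = losses[int(0.99 * len(losses)) - 1]  # 99th-percentile loss

print(var_99)        # 0.0 -- the control metric the CEO signs off on
print(max(losses))   # 4950000000.0 -- the tail loss the metric hides
```

The trader’s position is “zero risk” on every report the senior manager sees, right up until the scenario in the tail materialises.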
There is nothing new in what I have laid out in this essay – for example, Kenneth Bamberger has
made much the same point on the interaction between technology and regulatory compliance:
automated systems—systems that governed loan
originations, measured institutional risk, prompted investment
decisions, and calculated capital reserve levels—shielded irresponsible
decisions, unreasonably risky speculation, and intentional manipulation,
with a façade of regularity….
Invisibility by design allows the engineering of fraudulent outcomes
without being held responsible for them – the “I knew nothing!” defence.
Of course, senior managers are often also self-deceived, in which case the claim is literally true.
But although the automation that enables this risk-free fraud is a
recent phenomenon, the principle behind this strategy is one that is
familiar to managers throughout the modern era – “How do I get things
done the way I want to without being held responsible for them?”.
Just as the algorithmic revolution is simply a continuation of the
control revolution,
the ‘accountability gap’ due to automation is simply an acceleration of
trends that have been with us throughout the modern era. Theodore
Porter has
shown
how the rise of objectivity and bureaucracy were as much driven by the
desire to avoid responsibility as they were driven by the desire for
superior results. Many features of the modern corporate world only make
sense when we understand that one of their primary aims is the avoidance
of responsibility and culpability. Why are external consulting firms so
popular even when the CEO knows exactly what he wants to do? So that
the CEO can avoid responsibility if the ‘strategic restructuring’ goes
badly. Why do so many firms delegate their critical control processes to
a hotchpotch of outsourced software contractors? So that they can blame
any failures on external counterparties who have explicitly been
granted exemption from any liability.
http://www.macroresilience.com/
"The era of procrastination, of half-measures, of soothing and baffling expedients, of delays, is coming to a close. In its place, we are entering a period of consequences." - Winston Churchill,
The Gathering Storm