May 2023

By Bob Arnold, Publisher, Disaster Recovery Journal

[originally published on drj.com; written by Colin Garrison based on an interview with Bob Arnold]

With the recent release of tools like ChatGPT and Google Bard to the general public, the long-heralded arrival of AI has now occurred, and in a big way.

In the wake of the debut of these tools, there has been a tsunami of think pieces on what AI means for society at large. Will it help cure disease? Ruin children’s ability to think for themselves? Make it easier to plan trips and meals? Worsen the problem of misinformation? Boost cybercrime? Improve mental health? Change warfare? Cost people their jobs? Destroy civilization?

For all anyone knows for sure, it might be any or all of the above.

Leaving the broader discussion to other people, I’ve been trying to come to grips with what AI might mean for the field of business continuity.

Influencing my thoughts is the fact that these technologies are in their infancy. There will probably be huge advances in the next 10 years. What will their capabilities be then? Nobody knows.

It seems to me there are two areas to look at in thinking about what AI means for BC: the risks AI poses to organizations’ ability to keep their critical processes running, and the possibilities of using AI as a tool to help in doing BC.

For a closer look at how BC professionals might be able to use AI to help them do BIAs, write plans, and devise exercises, you might take a look at the article “ChatGPT Comes to Business Continuity,” by Lawrence Robert, elsewhere in this issue.

For a business continuity consultant’s take on the threat AI poses to organizations’ operations, check out “A BC Consultant’s View of the Risks of Generative AI,” by Richard Long, on the MHA Consulting blog.

The articles above feed into the first tip I would give BC professionals, faced as we are with the need to get our heads around the implications of AI: read widely about it, educate yourself, and get into discussions about it. As uncomfortable as the rise of AI might be in some respects, putting our heads in the sand won’t make it go away.

Another tip I’d offer would be to use caution in leaning on AI to help with your BC program.

The reason is simple. In scouring the internet for information to answer your questions, the tools might grab up bad data or advice—garbage they would then feed to you in a tone of absolute certainty. Plus, when AI tools like ChatGPT don’t “know” the answer, they apparently have a tendency to “hallucinate” a response, making something up out of thin air.

That said, tools such as ChatGPT have extraordinary potential to help us do BC. This is not in the future. It’s happening right now.

Here’s an example of what they can do right now, for anyone, for free. I asked ChatGPT if it had the ability to evaluate a BC plan against the requirements of the FFIEC standard.

Its reply was pretty darn impressive:

Yes, I can certainly evaluate a business continuity plan against the requirements of the FFIEC (Federal Financial Institutions Examination Council) standard. The FFIEC’s Business Continuity Planning Booklet outlines the principles and expectations for effective business continuity planning in financial institutions.

The booklet provides guidance on the key components of a comprehensive business continuity plan.

ChatGPT then named them (Risk Assessment, Business Impact Analysis, Business Continuity Planning, Testing and Training, Maintenance and Review), concisely explaining each one. It concluded:

By evaluating your business continuity plan against the requirements of the FFIEC standard, I can identify any potential gaps or areas for improvement in your plan.

I then asked if it could look at a BC plan for me and tell me if there were any gaps. It replied:

Yes, I can certainly review your business continuity plan and provide feedback on any potential gaps or areas for improvement. Please share the plan with me, and I will be happy to review it for you.

That’s pretty amazing—especially when you consider how much work it would be for a person to extract this kind of information from a standard or review a plan for gaps.
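For readers who are curious to go a step beyond the chat window, the same kind of query can also be scripted. Below is a minimal sketch in Python, assuming the openai package in its 2023-era form, a placeholder API key, and a hypothetical plan file named bc_plan.txt; it is an illustration of the idea, not a vetted implementation, and the usual caveat about double-checking the output applies.

    # Minimal sketch: asking a ChatGPT model to review a BC plan against
    # the FFIEC booklet via the API rather than the chat window.
    # Assumes the openai package (2023-era API), a placeholder API key,
    # and a hypothetical local file "bc_plan.txt" holding the plan text.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

    with open("bc_plan.txt") as f:   # hypothetical plan file
        plan_text = f.read()

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a business continuity plan reviewer."},
            {"role": "user",
             "content": "Evaluate this business continuity plan against "
                        "the FFIEC Business Continuity Planning Booklet "
                        "and list any gaps:\n\n" + plan_text},
        ],
    )

    # Print the model's review -- then double-check it yourself.
    print(response.choices[0].message.content)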

Of course, it’s important to reiterate that you should double-check anything AI tells you or creates for you.

It’s also worth pointing out the limits of AI in doing this kind of work. The information that makes a program meaningful for an organization doesn’t reside in public sources, so it’s unavailable to the AI tools. Most of the information that really counts is located in people’s heads.

The task of getting SMEs to put the things they know about their departments and business processes into words is more art than science. So is prodding senior managers into weighting their BIA impact categories and identifying their risk tolerance. AI is of no help in developing that kind of information.

That’s the most important piece of all in building a BCM program, and getting it still requires the human touch.

These are just some introductory thoughts on what is sure to be a topic of ongoing discussion.

I’d like some of this discussion to take place in the pages of DRJ and at our conferences.

We’re planning on having at least an initial discussion of the impact of AI on resilience at our Fall 2023 conference in Phoenix (Sept. 10-13). More comprehensive coverage will follow at DRJ Spring 2024 in Orlando (March 17-20). I hope to see you at both.

In the meantime, I’d like to invite anybody in the community who has a take on these issues—or just wants to kick them around—to get in touch with me at bob@drj.com.

Idea crowdsourcing is our best hope for coming up with a clear, comprehensive picture of what the impact of AI on BC is likely to be. It’s also our best bet for coming up with a set of best practices to help us in mitigating AI’s risks and making the most of its potential.