Navigating The New Era Of Digital Risk


By Lucy Saddleton, Managing Editor, ADB Insights

Legal leaders are grappling with heightened digital risk as emerging technologies increase fears of cyber crime, intellectual property theft, ransomware, phishing attacks and reputational threats.

Attendees at the fifth annual Legal Innovation Forum in Vancouver learned from our expert panelists how best to prepare for emerging risks – and how to take advantage of the digital environment to seize opportunities for innovation and growth.

Businesses must take a holistic approach when developing a strategy for digital risk management, as it goes far beyond a simple incident response plan, and involves participation from every department – not just legal.

Jeremy Trickett, chief legal officer, BCI

“It’s a beast, so you’re going to have to take it in the mindset that it is a journey to create these programs for risk management, it has to come from the top, and it has to be a business strategy that folds back into your business objectives,” said Sukhi Ram, founding partner at Datum Law.

Ram advises keeping a business mindset while creating a digital risk strategy, and considering how it works for your specific business, and how you can engage stakeholders to ensure risk is being managed at all levels. Continuous improvement of the plan is also important, Ram noted.

TELUS Communications boasts a robust, mandatory training program for all employees that ensures each employee knows what happens in the event of a cybersecurity breach, and that digital literacy spans the entire organization.

The telecoms giant also offers compliance programs covering different regulatory frameworks, including data breaches and privacy, as well as everything from anti-spam legislation to telecommunications and broadcasting rules.

“There is a privacy-related framework in so many different legislations,” said Leena Khawaja, senior regulatory legal counsel at TELUS. “Our compliance programs have to capture all of them because we have to report these federally or provincially, so training and compliance is very big at TELUS. That’s how we succeed in mitigating a lot of risks.”

Catherine Lau, general counsel, MEC

The company also has very robust security infrastructure and a strong focus on transparency and self-regulation. With respect to technology, for example, TELUS was the first telecom company in Canada to sign the Government of Canada’s voluntary code of conduct for generative AI, and it was the first company in the world to get an ISO certification for its AI customer support tool.

“This self-regulation builds trust, not just with our customers, but also within the organization that we stand behind our products,” said Khawaja.

LIABILITY INSURANCE

Candace Pietras, professional liability leader at Purves Redmond Ltd, spoke about the importance of purchasing liability insurance. Firms should research their insurance options before purchasing cyber insurance, as failure to be adequately insured is often due to budget constraints or simply not knowing what options are available, Pietras said.

Hubert Lai, university counsel, University of British Columbia

“You’ve got AI coverage now until there’s a major claim, and then every insurer is going to panic and add an exclusion, much like they added the pandemic exclusion to CGL [Commercial General Liability] policies a few years ago,” said Pietras.

When negotiating with suppliers, law firms should ask to see their insurance certificate to understand what type of insurance they have, Pietras advised. This in turn will prompt a discussion about what’s available.

AI insurance is likely to become increasingly common, she added.

“Two years from now, at this conference, maybe half of you will have this insurance, because at the end of the day, it’s going to take a big event to trigger the exclusions, but it’s coming, because everyone’s using AI and I don’t know if the E&O [Errors & Omissions] policies were designed to respond to these types of risks,” said Pietras.

When seeking cyber insurance, be prepared for insurance companies to ask about multi-factor authentication, Pietras noted. While some plans may be easily available online, it is always preferable to seek more robust coverage.

SECURITY MEASURES

Our speakers discussed different types of security measures being used by firms and organizations to address cyber risk – and new AI risks.

Simply buying the most expensive antivirus software available is not a good approach, according to Travis Kelly, director, corporate legal at GeoComply, a Vancouver-based fraud prevention and cybersecurity solutions provider.

“There is no sort of technology out there that’s all encompassing; there is no one-stop magic bullet that’s going to protect your firm,” said Kelly. Instead, he recommends adding layers of protection across the enterprise, and conducting a risk assessment with the help of service providers and insurance providers.

When bringing in a new AI tool, it is important to consider the risk perspective and to negotiate contractual indemnities and confidentiality protections in the agreement, Kelly noted. Training is also critical.

“Unfortunately, your employees are probably your weakest link,” said Kelly. “You can have the best AI policy, and you can have the best data retention governance policies in place, but if you’re not training, and if your employees don’t know what these policies say, then really they’re not worth the paper that they’re written on.”

Seminars can be a useful training tool, helping to avoid the fatigue that comes with online training platforms, for example. Kelly also recommends doing a cost-benefit analysis to determine the risk tolerance of the firm or organization, which will help to determine how much should be spent on external insurance.

Khawaja noted that it is critical to invest in the right people to build security systems and do cross-functional testing, which may include software engineers, data scientists and ethicists.

“At TELUS, we’ve invested a lot in the people as well as in the technology, and we do a lot of cross functional training and testing,” said Khawaja. TELUS also has a very rigorous process for launching new products or services that disclose data, which includes a thorough risk assessment.

“We’re never entirely risk proof, but at least we’ve done our due diligence, we have documented it, it is a repeatable process, and we can stand behind it,” said Khawaja.

Ram added: “Gone are the days that you do an incident test once a year. It’s continuous, and it’s with different stakeholders within the business.”

Ram also noted that the incident response plan needs to be available in the form of a physical document in case computer systems have crashed. It also needs to be easy to understand, and a crisis coach can also be a valuable tool, she added.

THE ETHICAL & REGULATORY LANDSCAPE

While technology is rapidly evolving in Canada, legislation surrounding its use continues to lag behind. With Bill C-27 – the Digital Charter Implementation Act – still not passed, organizations face both challenges and opportunities to self-regulate.

“We already have a chance to be proactive in our knowledge of where the regulatory framework is headed,” said Khawaja. For example, the government’s focus in the pending bill will be on transparency, on protecting minors, and on consumer rights with respect to their private information, among other points. Khawaja also advised attendees to examine the EU AI Act to see what is happening internationally.

“Have your regulatory experts review all these pieces of legislation, which should help you be ready, actually, for what is going to come eventually down the pipeline,” said Khawaja. She also recommended having your AI models certified, and consulting with regulators.

Although the onset of AI will inevitably lead to some job losses, panelists agreed that the benefits far outweigh the difficulties, noting that there will also be a lot of job creation.

“I think there was some fear a year or two ago, but now AI is here and the world hasn’t exploded. Let’s fully jump in and embrace it and make sure everyone’s using it responsibly,” said Kelly.

The panel was moderated by Ryan Berger, privacy & employment partner at Lawson Lundell.
Stay tuned for details about the Legal Innovation Forum’s six-part Generative AI Masterclass series, coming in 2025.

On The Ground With Generative AI


By Lucy Saddleton, Managing Editor, ADB Insights
More than 150 attendees joined the Legal Innovation Forum’s fifth annual forum in Vancouver last month for a series of insightful panel discussions on the most pressing topics on the minds of legal professionals, including generative AI, digital risk, the human equation, strategy & operations, and the new business of law.

Our opening panel shone a light on the unprecedented disruption currently facing the legal ecosystem as new generative AI technologies emerge, offering enhanced capabilities and time-saving advantages across sectors – and potentially transforming the operating structure of the legal ecosystem.

Our expert speakers discussed the importance of having policies and guardrails in place around the use of Gen AI, and the need for collaboration between departments through cross-functional committees.

They also explored the ways in which Gen AI is poised to change the structure of the legal ecosystem, as relationships between law firms, legal departments and legal service providers start to shift.

Danielle Gifford, Director of AI, PwC

“I foresee a shift away from traditional organizational structures in law firms,” said Danielle Gifford, director of AI at PwC. “The hierarchical model will evolve to become more flexible and cooperative, so firms will need to be adaptable.”

While all our panelists agreed that generative AI will be hugely disruptive, some felt that billing structures in law firms will be slow to change, potentially creating challenges.

“Canadian law firms in particular are about the most resistant to systemic change as any organization has ever seen,” said Ryan Black, a partner at DLA Piper. As AI frees up time and allows tasks to be completed far more quickly, the idea of a billable hour becomes increasingly antiquated, in Black’s view.

“The model has to change, and we have to think about how we’re going to do that,” said Black. He fears that lawyers will continue to leverage the hourly model, and use their free time to bill more instead of seeking ways to provide better value for clients.

Ryan Black, Partner, DLA Piper

Black also voiced concerns that articling students will have fewer opportunities for much-needed human mentorship.

Michael McGinn, senior manager, artificial intelligence solutions at Fasken, agreed, noting that the billable hour is still increasing in US law firms.

Michael McGinn, Senior Manager, Artificial Intelligence Solutions, Fasken

“It’s not about more billable hours. It’s about training those lawyers and having more free time to learn and communicate the human side of that institutional knowledge that the AI will not be able to address in the future,” said McGinn.

While Canada previously lagged significantly behind the US with new AI product launches, the gap is starting to close, offering more opportunities for Canadian firms to catch up to their US counterparts and take advantage of new innovation and efficiency-building tools. LexisNexis is helping firms to leverage their own content, for example.

“What we’ve seen with generative AI is the ability to leverage content in a completely different way,” said Felix Evans, senior sales manager at LexisNexis. “Whereas before, the content was the starting point, I think now the workflow and the work product will be the start point of what is being produced, with the content underlying that.”

Felix Evans, Senior Sales Manager, LexisNexis

Through Gen AI, lawyers will be able to access proprietary large language model content, as well as the firm’s own content, which can be used for many different tasks.

“It’s really about meeting lawyers where they are in terms of the tools and places that they are already working in, and helping them to generate work products,” added Evans.

Black noted that “walled gardens” are likely to become increasingly important at law firms as untapped data can be used to create long-term growth for AI tools. Tools such as LexisNexis will allow firms to turn that “walled garden” of data into a format that allows it to be leveraged by everyone at the firm in order to improve their work.

“That’s going to be the huge differentiator. Only your firm will be able to take the stuff that’s in your firm’s knowledge and put it into a product,” said Black.

Speakers agreed that a top-down approach is key for firms and legal departments as they formulate a flexible and innovative AI strategy. As Black stated, lawyers need to overcome their fears and start using AI as it is set to become “as ubiquitous as electricity, internet or email.” Education is essential to demystify AI tools and ensure an understanding of how they can be used to benefit different parts of the business.

Budget considerations are also important for firms and legal departments as they explore AI tools available to them.

“It’s not cheap. It’s very much like a strategic investment, so you want to really try and understand how much money you want to put forward to this,” said Gifford. She emphasized that while you don’t need to be an expert on AI, it is important to work with trusted partners, and to be entrepreneurial in your approach.

McGinn agreed, adding that testing AI tools with small, low-stakes administrative tasks will allow you to measure the return on investment and build trust in the technology, before tackling bigger-ticket, high-risk items.

McGinn also recommended being use-case driven, as opposed to just buying the latest technology product on the market.

In-house teams and smaller firms that are just beginning their AI journey should focus on collaboration, McGinn advised.

“You need to break down those silos of not only practice groups, but also operational teams and the lawyers themselves, and get them all in the room to work together,” said McGinn. “We’ve had very successful initiatives come out of literally having practice groups sitting in with us, focusing on these technologies with business development, for example, to come up with some really interesting, innovative ideas of how to utilize this technology appropriately.”

When selecting a vendor, be sure to find out their policies around data security and privacy, and how their models are trained, Evans advised.

“What are they doing with your data? Where is it stored? Some questions may be more important to you and your use cases than others, but these are questions that at the end of the day, any vendor that you work with should be able to answer,” Evans said.

Gifford added: “If you’re not paying for a product, you are the product, so everything that you’re putting into a system – whether it be ChatGPT, Perplexity or another system – if you’re not paying for it, you can assume all of that information is being used to train.”

Stay tuned for details about the Legal Innovation Forum’s six-part Generative AI Masterclass series, coming in 2025.
