
Responsible AI is even more essential during a crisis

Rolling out innovative technologies carefully and strategically can build trust.

As governments, businesses, organizations, and workers figure out how to operate in the new normal brought on by COVID-19, technology, big data, and artificial intelligence are playing an important role. Some governments are deploying contact-tracing technologies, including app-based tracking and facial recognition, to identify those who may be at risk of infection and to keep others at a distance. To increase workplace safety and create a sense of security among staff, many organizations may follow the lead of those governments and launch contact-tracing capabilities in the office. Technologies that protect workplace safety will be instrumental in helping employees feel secure enough to go back to the office — and back to a semblance of normalcy. According to PwC’s CFO Pulse survey, 41 percent of the chief financial officers surveyed consider the pandemic’s effects on their workforce to be a top-three concern.

But even when adopted with the best of intentions, the innovative technology that powers these solutions can be misapplied, yielding negative residual effects for individuals, organizations, and society. Governments in Asia and Europe are already contending with privacy concerns and hostile reactions to some of the apps and technologies they’ve deployed to curtail the spread of the virus.

These concerns are not unwarranted. After the September 11, 2001, attacks, proposed privacy regulations aimed at curbing data collection by major Internet providers were scrapped to make way for policies intended to halt future attacks. Those policies inadvertently opened the door for mass data collection, which almost 20 years later led to data privacy regulations such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

In the absence of a vaccine or effective treatments for the novel coronavirus, employers will have to keep precise track of where employees have been in their offices and facilities, and with whom they have been in contact. But it is also to be expected that employees will have legitimate questions about the use of tracking and tracing technology in the workplace: How will my employer use my data? Could this information be used more broadly than to simply ensure I do not come in contact with anyone who’s infected? Could this data be used to tally how many minutes I spend at my desk every day, or whom I meet for lunch? Will my employer be able to use this information against me? Do I have any right to privacy at work? In short, the management of these tools will become a new arena for building trust with employees — or destroying it.

Enabling responsible tech and governance

Before sending employees back to the office, leaders should be sure that the technologies and processes they adopt do not violate the trust of their workforce. To engender this trust, we must enable responsible data, technology, and AI governance practices. Keeping the following three imperatives in mind will help organizations stay on the right track.

1. Evaluate your existing data and technology ethics practices — or create new ones. Most organizations have data governance practices that establish standards for stewardship and quality, and some are also setting up practices that govern the ethical use of data and technology. Organizations that lack a set of ethics principles or established guidelines should start from their own stated values, then devise and adopt a clear set of guidelines at the highest levels. If, for example, one core value of the business is to be fair, the organization needs to define what fair means in the context of data and technology. For whom is it fair? With respect to what decision? And in what context?

So be transparent and have clear guidelines up front to build trust with employees, customers, and business partners, and ensure that these policies and standards apply to both in-house solutions and externally developed ones.

Though many organizations have adopted ethical principles around data and AI, those principles have not always been translated into concrete technology, data, or AI-related policies and procedures. For example, if an AI system produces outcomes or decisions probabilistically, but your policies assume a single fixed outcome and allow for no variance, a gap will open between your policies and your practice. Proper planning can reduce this risk: Evaluate your existing policies, assess where gaps may form around the new technologies you wish to adopt, and, most importantly, abide by the values you have already established.

2. Be strategic in dealing with data and technology. It’s not exactly a best practice to buy hundreds of rolls of toilet paper just because you fear a shortage. And companies should not “panic buy” AI and related technologies because of the pandemic, either. Rather, be methodical and tactical about the solutions you buy or build, as well as the data you choose to collect. Though it may be tempting to think of potential future uses, only collect the data your company actually needs, and limit the scope of its application and collection to the use cases at hand. In proactively setting these limits, you’ll give employees a greater sense of comfort around the technology — and may encourage them to contribute in ways that make it more reliable.

You should also prioritize applications that have the potential to make lasting improvements, not those that amount to short-term Band-Aids. Don’t rush to adopt chatbots that replace service center staff. Instead, take the time to work with service representatives to design conversational agents that can function alongside them and progressively take on more complex customer requests. That approach will enable your organization to manage the trade-offs between its strategic goals and the concerns that surround the deployment of new technologies, such as data privacy and employee safety. The responsible use of technology requires that these trade-offs be made and articulated at the organization’s strategic level, so that there is clarity and consistency across the enterprise.

3. Examine your vendor agreements. Don’t let the pressure to adopt technology quickly override your tried-and-tested vendor-selection practices. Consider the data requirements of any vendor solution and limit the scope of the employee and customer data you are willing to share. Be realistic about the constraints of the technologies so that you can prepare internal practices accordingly. And don’t blindly agree to a vendor’s terms. Rather, ask vendors to detail their model development practices, their policies on data retention, and their successes and failures. The standards that you set for your own employees and practices must extend to your vendors.

It’s also important to equip your procurement and compliance teams with the information they need to effectively evaluate vendor solutions. In many cases, these teams lack the subject matter expertise required to evaluate the effectiveness of AI and technology solutions — and to understand their limitations. Don’t let your compliance teams do this work alone. Instead, pair your subject matter experts with your procurement and compliance specialists during the review process so that vendors can be evaluated appropriately and so that you actually know what you are buying and using.

Where do we go from here?

It’s important to get your response right. If employees don’t feel safe at work — physically and emotionally safe — they won’t thrive. In addition, it’s likely the practices adopted in a crisis will remain in place when the crisis abates and the world adjusts to a new normal. Any decision made today must also work for your company in a future full of unknowns. In the absence of clarity, values and strategy can provide important guidance. The responsible use of AI requires that your application of technology and data reflect your organization’s values. This approach necessitates good governance, sound risk management practices, and the application of ethics across the life cycle of AI and other technologies, as well as to the data you gather and the ways you use it. Technology can provide immense benefit and relief in a crisis. But even amid the most pressing challenges, it is vital that we don’t let progress trample on privacy.

Ilana Golbin

Ilana Golbin advises clients on the use of AI and other emerging technologies, and is a specialist in responsible AI practices. Based in Los Angeles, she is a director with PwC US.

 