How organizations should handle AI in the workplace
When implementing AI in the workplace, organizations must build a comprehensive strategy aligned with business values. Failing to do so could lead to the emergence of shadow AI.
Integrating AI into consumer and business applications raises the risk of shadow AI: the unsanctioned use of generative AI tools without the IT department's knowledge or governance.
AI can enter enterprises through both official and unofficial channels. For example, a team might authorize augmenting Microsoft 365 with Copilot or Google Workspace with Gemini. By contrast, an individual employee could use the free version of ChatGPT to analyze proprietary corporate data without the company's knowledge.
An organization-wide AI strategy encompasses implementation, data governance, security and user adoption, aiming to maximize the corporate AI investment. Devising a strategy to manage both sanctioned and unsanctioned AI use in the workplace strengthens corporate data and risk management. An ideal AI strategy should cut through the hype, speak in business terms, and lay out a maturity framework detailing AI's impact on corporate operations and products.
Elements of an AI strategy in the workplace
Generative AI's rapid proliferation across enterprises complicates the implementation of an AI strategy. As AI technology develops, its increasing complexity leaves more IT and business decision-makers -- not to mention mid-level managers -- struggling to keep pace. This situation highlights the need for a clear, well-documented strategy to guide the organization in adopting and using AI tools.
The key elements of an AI strategy in the workplace include the following:
- Setting an enterprise-wide strategy championed by top leadership.
- Aligning AI initiatives with the core business strategy.
- Balancing efficiency with value-creation targets.
- Effectively communicating the AI initiative's goals.
- Maintaining a dynamic approach and iteratively updating the AI strategy.
Build a cross-functional AI strategy team
Implementing a new AI strategy requires internal education for leadership. Don't assume that one person has all the answers -- even with top leadership's support, laying the strategy's foundation requires a cross-functional team of early adopters.
Pick a cross-functional team comprising representatives from the management team and major departments. Ideal candidates are innovative individuals who are already considering how AI can address their team's business challenges.
Streamline data for AI initiatives
Successful AI deployments hinge on the availability and proper organization of data. Businesses with comprehensive data -- down to specific business lines and inventory levels -- stand to benefit significantly from AI, as AI tools can elicit detailed, granular insights. To maximize AI's benefits, companies should prioritize systematic data organization and identify relevant use cases that align with their business objectives.
Align AI with core business objectives
Don't implement AI for the sake of having AI. Instead, implement AI tools and systems that support your core business strategy. Successful strategies involve collaboration across business divisions and active participation from leaders at all levels. This ensures that AI initiatives are in sync with business goals and can create competitive advantages.
Balance efficiency with growth
It's essential to balance efficiency with value-creation targets. Although efficiency is crucial, AI initiatives should also focus on growth-oriented goals, such as improving decision-making and operations to save money and expand business opportunities. For example, this could include integrating AI into back-office processes, such as revenue operations (RevOps), or into consumer products.
Prioritize accountability
Establishing guidelines for addressing fairness, transparency and accountability in AI use is another crucial element of an AI workplace strategy. Ethical concerns associated with AI-generated content, such as copyright issues and bias, aren't going away and should form the basis for these guidelines. Emphasizing accountability ensures that AI is used responsibly and ethically within the organization.
Managing shadow AI and insider cyber risk
Shadow AI refers to employees' unauthorized use of generative AI tools to augment their workflows and improve efficiency and productivity. This isn't necessarily malicious; employees might resort to shadow AI to automate monotonous tasks such as meeting management, for example.
Security teams have long focused on shadow IT and insider threats due to the risks they present to corporate data. Unfortunately, most companies still lack an insider risk strategy -- an especially risky situation for organizations that are also permissive regarding employees' generative AI use. Managers who overlook AI's risks to their corporate data are still all too common. To mitigate these risks, it's essential to include shadow AI prevention elements in broader AI strategies.
How to reduce shadow AI risk
An enterprise shadow AI strategy should both support innovation and ensure security and compliance. Here are some tips for combating shadow AI in the workplace:
- Assess investments in data security to ensure robust security measures for AI systems and their data sources.
- Foster transparency about employees' AI use, particularly with open source AI, and give employees insight into the organization's decision-making processes for approving AI technologies.
- Regularly audit and update the data governance and security sections of the enterprise's bring-your-own-device (BYOD) strategy to safeguard corporate data against consumer AI apps on employees' personal devices, such as the ChatGPT iOS app.
- Comply with relevant regulations, like GDPR and CCPA, and any regulatory changes that affect the organization.
- Offer employees comprehensive training on AI technologies and their ethical use as part of employee onboarding and BYOD program enrollment, with refresher training throughout the year.
Enforcing a shadow AI strategy involves a mix of training, guidelines, policies, formal risk assessments and technology. Use endpoint security tools to restrict employee access to shadow AI, especially SaaS applications. It's also reasonable to expect shadow IT and shadow AI to converge in the future as AI becomes ubiquitous across SaaS and mobile apps.
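As a starting point for the auditing and endpoint-restriction measures above, security teams could flag traffic to known generative AI services in web proxy logs. The sketch below is a minimal illustration, not a vendor tool: the domain list and the simplified log format are assumptions, and a production deployment would rely on the organization's secure web gateway or CASB instead.

```python
# Hypothetical sketch: flag outbound requests to known generative AI
# domains in a simplified web proxy log. The domain list and log
# format are illustrative assumptions, not a real product's API.

AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for requests to generative AI services.

    Assumes each log line is 'timestamp user domain' -- a simplified
    stand-in for a real proxy log format.
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        _, user, domain = parts
        if domain in AI_DOMAINS:
            hits.append((user, domain))
    return hits

sample = [
    "2024-05-01T09:14 alice chatgpt.com",
    "2024-05-01T09:15 bob intranet.example.com",
    "2024-05-01T09:16 carol claude.ai",
]
print(flag_shadow_ai(sample))
```

A report like this is only a visibility aid; pairing it with the training and transparency steps above keeps the response educational rather than purely punitive.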
Will Kelly is a technology writer, content strategist and marketer. He has written extensively about the cloud, DevOps and enterprise mobility for industry publications and corporate clients and worked on teams introducing DevOps and cloud computing into commercial and public sector enterprises.