Generative AI is here. ChatGPT hit the mainstream eight weeks ago, and we can already see early negative impacts and the beginnings of a backlash. To implement generative AI in a way that is good for their organizations and society at large, leaders should focus on sourcing their models ethically, working with a broad group of stakeholders as they build, and finding ways to augment people rather than replace them with a cheaper, inferior service.
Unlike traditional machine learning models, which classify or predict based on patterns in existing data, generative AI models can create new content that was not present in the original dataset. While this ability to generate new data has caught mainstream attention through ChatGPT, image generation models have quietly grown their audience for several years, starting in Discord servers sharing Colab notebooks and GitHub repositories and culminating in the official releases of OpenAI's DALL·E 2 and Stability AI's Stable Diffusion in July and August 2022.
Until recently, the resources and know-how required meant that only large companies could develop and run these models. Now it's the work of an afternoon for even a novice programmer to connect to a powerful, pre-trained model via an API, and a new model can be built by following a tutorial over a few weekends. As cloud providers roll out AI- and ML-specific services, new models can be trained quickly for little to no cost. And open source licensing means that even someone without programming knowledge can download a desktop client for Stable Diffusion and start creating images without the controls or safety features of an OpenAI product.
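To give a sense of just how low the barrier has become, here is a minimal sketch of calling a hosted, pre-trained model through an API in a few lines of Python. It assumes the `openai` client library and an `OPENAI_API_KEY` environment variable; the model name and client interface are assumptions that vary by provider and library version.

```python
# A minimal sketch of calling a hosted, pre-trained model via an API.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; model names and interfaces vary by provider.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any hosted chat model the provider offers
    messages=[
        {"role": "system", "content": "You are a concise marketing assistant."},
        {"role": "user", "content": "Draft a two-sentence product description for a reusable water bottle."},
    ],
)

print(response.choices[0].message.content)
```

A few lines like these, plus an account with a model provider, are all it takes to put generative text into a product or workflow.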
Savvy organizations have been applying machine learning to hard problems and large datasets for some time, but generative AI brings both new opportunities and new risks. Generative AI, especially as a commodity, allows even small organizations to work with large volumes of unstructured text and to scale content creation, but left unsupervised it invites misapplication (pointing an AI at the wrong problem and getting the wrong answers) and mistakes (such as spreading incorrect information to customers at scale). Many of these risks can be mitigated by sourcing data ethically, collaborating closely and transparently with stakeholders, and looking for ways to augment people rather than replace them.
Just as companies such as Starbucks and Nike take pains to ensure their suppliers meet ethical standards, leaders can ensure that the AI put to use in their organizations is built on responsibly sourced data. In the absence of shared standards, policies, or best practices, leaders will need to develop their own stance on contentious issues in the AI field, such as the unattributed use of art in model development.
To help determine whether data meets those standards, open source tools such as Datasette can be used to explore large datasets (or subsets of them) and review the inputs to open source models. If there are concerns about the training data, organizations can find or build their own datasets, ensuring both ethical collection and applicability of the data to their use case.
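Datasette itself runs as a command-line tool pointed at a SQLite file, but the same kind of spot check can be scripted. Below is a hedged Python sketch that samples a dataset's metadata and flags records missing license or attribution information; the file name and the `license` and `attribution` columns are hypothetical, since real training datasets vary widely in what provenance they record.

```python
# Hedged sketch: spot-checking a training-data sample for provenance.
# The file and the "license"/"attribution" columns are hypothetical;
# adapt to whatever metadata your candidate dataset actually provides.
import pandas as pd

sample = pd.read_csv("training_metadata_sample.csv")  # hypothetical export

# What share of sampled records carries an explicit license?
share_licensed = sample["license"].notna().mean()
print(f"{share_licensed:.0%} of sampled records include a license")

# Queue unattributed records for manual review before approving the model.
unattributed = sample[sample["attribution"].isna()]
unattributed.to_csv("needs_manual_review.csv", index=False)
```

Even a quick audit like this gives leaders something concrete to point to when asked how a model's training data was vetted.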
For organizations that already collect data with the potential to train new models, collaboration with stakeholders is key. While customers often sign away use of their data in the terms of service, an organization focused on responsible data sourcing should seek their explicit permission and share the benefits with them. Current industry practice and the prioritization of investor returns complicate this, but collaboration among organizations, the dominant tech companies, and the open source community could produce a shared standard for stakeholder collaboration without hurting competitiveness. Organizations such as the AI Infrastructure Alliance are a starting point, and collaboration with customers could become a selling point for some products, just as enhanced privacy has been for Apple and others.
It is inevitable that many leaders, focused on pleasing shareholders in this year of efficiency, will see generative AI as an opportunity to cut costs and replace employees. While this may go unnoticed in some sectors, we should remember recent, less sophisticated attempts to automate our way to lower labor costs. Automated phone systems for customer support, self-checkout in retail stores, and even the first generation of chatbots all cut costs through automation while shifting labor onto the consumer and creating frustration or accessibility problems for some or all users. Responsible leaders shouldn't shy away from the efficiencies generative AI makes possible, but they should integrate these systems into traditionally human interactions with care and look for opportunities to improve both the employee and customer experience.
Generative AI is a promising technology with the potential to revolutionize many aspects of business and daily life, but it is clearly not without risks for organizations, their employees, and the people they serve. Leaders who take care in sourcing data, building partnerships, and implementing new tools and processes can create a strong foundation for generative AI in any organization.