Published by Invisible Technologies on October 7, 2024
AI can be trained to perform repetitive tasks, saving time and allowing businesses to focus more on customer touchpoints or on devising strategy. However, headaches arise when AI models have only a limited dataset to train on. This can lead to low-quality AI tools that cannot forecast outcomes reliably or that mishandle situations due to a lack of training data.
One solution? Synthetic data. We've spoken with experts at Invisible and produced this series on synthetic data to share their insights. “Combining human-generated and synthetic data is crucial, and together they create robust, reliable AI models,” says Aleksei Shkurin, Technical Lead of AI Enablement at Invisible.
In our last post, we explained what synthetic data is, how it is produced, and why businesses should embrace it. As a refresher: synthetic data is information that is artificially generated rather than produced by real-world events or people, and it can be used to augment real-world data when training machine learning models.
Recent advancements in synthetic data generation, such as NVIDIA's Nemotron-4 340B, for which more than 98% of the model-alignment data was synthetically generated, highlight the ever-evolving nature of artificial intelligence.
Aleksei Shkurin says, “In the fast-paced world of AI, staying informed about recent advancements is critical, as innovations like an open-sourced synthetic data generation pipeline from NVIDIA can dramatically enhance our ability to train AI models efficiently and cost-effectively.”
But how does a business generate synthetic data?
Broadly, there are two approaches to generating synthetic data for tuning models (Source: NVIDIA, 2024):
Knowledge distillation is the process of transferring the capabilities of a larger model into a smaller one. The larger model is used to solve a task, and its outputs become training data that teaches the smaller model to imitate it (a minimal code sketch of this pattern follows below).
Self-improvement involves having a model critique its own outputs and reasoning, and it is often used to further hone the model’s capabilities.
These approaches are not mutually exclusive; you can use either or both depending on your business goals, the complexity of your existing models, data availability, privacy concerns, and scalability requirements.
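To make knowledge distillation concrete, here is a minimal sketch in Python. It is an illustration under simplifying assumptions rather than a production pipeline: scikit-learn's make_classification stands in for real business data, a random forest plays the larger "teacher" model, and a shallow decision tree is the smaller "student" that learns only from the teacher's labels on synthetic inputs.

```python
# Minimal knowledge-distillation sketch (illustrative, not production code).
# A large "teacher" model labels synthetic inputs; a small "student" model
# is then trained to imitate the teacher's labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 1. Stand-in for the real (possibly scarce or private) training data.
X_real, y_real = make_classification(n_samples=2_000, n_features=20, random_state=0)

# 2. Train the large "teacher" model on the real data.
teacher = RandomForestClassifier(n_estimators=300, random_state=0)
teacher.fit(X_real, y_real)

# 3. Generate synthetic inputs and let the teacher label them. In practice the
#    synthetic inputs would come from a domain-aware generator, not random noise.
rng = np.random.default_rng(0)
X_synthetic = rng.normal(size=(10_000, X_real.shape[1]))
y_synthetic = teacher.predict(X_synthetic)

# 4. Train the small "student" model to imitate the teacher, using only the
#    synthetic data; it never sees the original dataset.
student = DecisionTreeClassifier(max_depth=6, random_state=0)
student.fit(X_synthetic, y_synthetic)

# 5. Check how closely the student tracks the teacher on the real inputs.
agreement = accuracy_score(teacher.predict(X_real), student.predict(X_real))
print(f"Student agrees with the teacher on {agreement:.1%} of real inputs")
```

The same pattern scales up: swap the random forest for a large language model and the random sampling for a domain-aware generator, and the student still learns from labels the teacher produces rather than from the original dataset.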
Businesses across industries can use these approaches to generate diverse, realistic synthetic data that overcomes the limitations of real-world datasets. Some examples include:
Knowledge Distillation:
Guest Interaction Simulation: A large model trained on diverse guest interaction data might generate synthetic scenarios, such as different types of guest requests and responses. These synthetic interactions can be used to train a smaller model, which could be deployed in a customer service chatbot. The synthetic data allows the smaller model to learn from a wide range of potential guest scenarios without needing access to the full, real dataset (a code sketch of this pattern follows these examples).
Risk Assessment: A large model may assess risk based on a comprehensive analysis of historical data, customer profiles, and external factors. This model can generate synthetic data representing various risk scenarios, which a smaller model can then use to learn efficient risk assessment strategies. The distilled model can be used in real time to underwrite policies or evaluate claims.
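As a rough sketch of the guest interaction example, the Python below shows how a larger language model could be prompted to produce synthetic request/response pairs that are then saved as fine-tuning data for a smaller chatbot model. Everything here is hypothetical: ask_teacher_llm is a placeholder for whichever hosted LLM API a business actually uses (its stub body returns a canned reply so the script runs end to end), and the scenario list and prompt wording are invented for illustration.

```python
# Sketch: using a large "teacher" LLM to generate synthetic guest interactions
# that become fine-tuning data for a smaller customer-service model.
# ask_teacher_llm() is a hypothetical placeholder for a real LLM API call.
import json

SCENARIOS = ["late check-out request", "room maintenance issue",
             "restaurant reservation", "billing question"]

def ask_teacher_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a large hosted model.

    Replace this body with your provider's API call; here it returns a
    canned JSON string so the sketch runs end to end.
    """
    return json.dumps({
        "guest": "Could I get a late check-out tomorrow?",
        "agent": "Of course - I've extended your check-out to 2 PM.",
    })

def build_prompt(scenario: str) -> str:
    return (
        f"Generate one realistic hotel guest request about '{scenario}' "
        "and a polite, helpful agent reply. "
        'Respond as JSON with keys "guest" and "agent".'
    )

# Collect synthetic interactions in a simple prompt/completion format that a
# smaller chatbot model could be fine-tuned on.
with open("synthetic_guest_interactions.jsonl", "w") as f:
    for scenario in SCENARIOS:
        for _ in range(3):  # a real pipeline would generate thousands per scenario
            pair = json.loads(ask_teacher_llm(build_prompt(scenario)))
            f.write(json.dumps({"prompt": pair["guest"],
                                "completion": pair["agent"],
                                "scenario": scenario}) + "\n")

print("Wrote synthetic training examples to synthetic_guest_interactions.jsonl")
```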
Self-Improvement:
Demand Forecasting: A demand forecasting model might generate synthetic sales data based on its current understanding of market trends and customer behavior. The model then uses this synthetic data to evaluate and refine its predictions, continually improving its ability to forecast demand under varying market conditions. By generating scenarios in which it didn’t perform well previously, the model can learn and adapt, enhancing future forecasts.
Fraud Detection: A fraud detection model might generate synthetic transaction data, including both normal and fraudulent activities, based on its current understanding of fraud patterns. By testing itself on these synthetic datasets, the model can identify weaknesses in its detection capabilities and improve its accuracy and sensitivity in recognizing fraud, even as tactics evolve.
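The fraud detection example can be pictured as a simple self-improvement loop. The sketch below is a toy illustration, not a real fraud system: scikit-learn's make_classification stands in for historical transactions, and random perturbation of known fraud cases stands in for a model that synthesizes evolving fraud tactics. The loop tests the model on its own synthetic fraud cases and retrains on the ones it missed.

```python
# Sketch: a self-improvement loop for a fraud-detection model (illustrative).
# The model generates synthetic transactions from its current view of fraud
# patterns, tests itself on them, and retrains on the cases it got wrong.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(42)

# Stand-in for historical transactions: class 1 = fraud (the rare class).
X, y = make_classification(n_samples=5_000, n_features=10,
                           weights=[0.95, 0.05], random_state=42)

model = LogisticRegression(max_iter=1_000)
model.fit(X, y)

for round_num in range(3):
    # 1. Generate synthetic "fraud-like" transactions by perturbing known fraud
    #    cases, a stand-in for a learned generator of evolving fraud tactics.
    fraud_cases = X[y == 1]
    X_synth = fraud_cases + rng.normal(scale=0.5, size=fraud_cases.shape)
    y_synth = np.ones(len(X_synth), dtype=int)

    # 2. Self-test: which synthetic fraud cases does the current model miss?
    missed = model.predict(X_synth) == 0
    print(f"Round {round_num}: missed {missed.sum()} of {len(X_synth)} synthetic fraud cases")

    # 3. Retrain with the missed synthetic cases added to the training set.
    X = np.vstack([X, X_synth[missed]])
    y = np.concatenate([y, y_synth[missed]])
    model.fit(X, y)

# Final self-check on the last batch of synthetic fraud cases.
print("Recall on synthetic fraud cases:",
      recall_score(y_synth, model.predict(X_synth)))
```

In a real deployment, the generator would itself be a learned model, and each retraining step would be validated against held-out real fraud cases before going live.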
These adaptable strategies allow businesses to turn their data into opportunities, future-proofing themselves by staying at the leading edge of technological innovation.
Don't miss the third part of our series on synthetic data, where we investigate the challenges of generating usable synthetic data and how to solve them.
At Invisible, we consult with businesses spanning industries like Finserv, Healthcare, Retail, Hospitality, and more on their AI strategies. Visit our Get Started page to set up a conversation.