Oracle Corporation

04/17/2024 | Press release

Introducing AI Quick Actions in OCI Data Science

Today, we're announcing the release of OCI Data Science AI Quick Actions, designed to enable anyone to easily deploy, fine-tune, and evaluate foundation models.

Over the past year, we have witnessed an explosion in the popularity of generative AI and interest in the foundation models that power it. OCI Data Science offers a platform to fine-tune and deploy foundation models in OCI. However, until today, working with these models was a multistep process that required Python coding, containers, and machine learning expertise.

AI Quick Actions

This new feature available in OCI Data Science aims to expand the reach of foundation models by providing customers with a streamlined, code-free, and efficient environment to work with them. Using AI Quick Actions incurs no extra cost on top of OCI Data Science, which charges only for the underlying compute infrastructure and storage for your Data Science workload. To start using AI Quick Actions, you need to set up the required policies. For more information, see our documentation. After setting up the policies, create a Data Science notebook, or deactivate and reactivate an existing notebook, to begin your journey with AI Quick Actions.

When you open a Data Science notebook, you can find AI Quick Actions under Extensions in the Notebook Launcher. See Figures 1 and 2. After opening AI Quick Actions, you can access models, deployments, and evaluations.

Figure 1: OCI Data Science Launcher interface
Figure 2: AI Quick Actions Interface

Explore LLMs and other foundation models

The model explorer shows the foundation models that AI Quick Actions supports, along with your own fine-tuned models. For our first release, we're supporting several large language models (LLMs), such as CodeLlama, Mistral, and Falcon, and we plan to add more models over time. When you select a model's card, you can view details about the model, such as its architecture, input and output formats, and license. See Figure 4.

Figure 3: Model explorer interface
Figure 4: Model card panel

Fine-tune your model with your own data

Fine-tuning is the process of taking a pretrained model and training it on a domain-specific dataset to improve its knowledge and provide more accurate responses in that domain. You can fine-tune any model whose card carries the "Ready to Fine Tune" tag so that the model is better adapted to your specific domain, as shown in Figure 5. You can use a dataset from OCI Object Storage or from your notebook session's storage. We recommend that your dataset contain at least 100 records. For information about the dataset format needed for fine-tuning, consult this GitHub page.

Figure 5: Fine-tuning panel
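The exact dataset schema is described on the GitHub page linked above. As a rough illustration only, the sketch below writes a small prompt/completion dataset as JSONL; the "prompt" and "completion" field names are assumptions for illustration, not the official schema.

```python
# Illustrative sketch: writing a small fine-tuning dataset as JSONL.
# The "prompt"/"completion" field names are assumptions for illustration;
# check the AI Quick Actions GitHub page for the exact schema required.
import json

records = [
    {"prompt": "Summarize the ticket: VPN drops every 30 minutes.",
     "completion": "Intermittent VPN disconnects reported at 30-minute intervals."},
    {"prompt": "Summarize the ticket: Cannot reset password from mobile app.",
     "completion": "Password reset fails when initiated from the mobile app."},
]

with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

A dataset like this can then be uploaded to an Object Storage bucket or placed in your notebook session's storage before starting the fine-tuning job.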

Deploy and test your model immediately

Create your own model deployment from a foundation model with the tag "Ready to Deploy" or from a model that you've already fine-tuned, as shown in Figure 6. The deployment exposes the model as an HTTP endpoint in OCI. After your model is deployed, you can test it quickly and easily with our prompt and response panel, as shown in Figure 7.

Figure 6: Model deployment panel
Figure 7: Test your deployed model with the prompt and response interface
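Beyond the prompt and response panel, you can also call the deployed HTTP endpoint programmatically. The sketch below shows one way to do so from Python using the OCI SDK's request signer; the endpoint URL and payload fields are placeholders and assumptions, so copy the actual invocation details from your model deployment's details page.

```python
# Minimal sketch of invoking an OCI model deployment endpoint from Python.
# The URL and payload shape below are placeholders/assumptions; use the
# invocation details shown for your own model deployment.
import oci
import requests

config = oci.config.from_file()  # reads ~/.oci/config
signer = oci.signer.Signer(
    tenancy=config["tenancy"],
    user=config["user"],
    fingerprint=config["fingerprint"],
    private_key_file_location=config["key_file"],
    pass_phrase=config.get("pass_phrase"),
)

# Placeholder endpoint; copy the real URI from the model deployment details page.
endpoint = "https://modeldeployment.<region>.oci.customer-oci.com/<deployment-ocid>/predict"

payload = {"prompt": "List three uses of fine-tuned LLMs.", "max_tokens": 200}
response = requests.post(endpoint, json=payload, auth=signer)
print(response.json())
```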

Evaluate your models

With AI Quick Actions, you can create a model evaluation to measure your model's performance. Evaluating LLMs against your own data is necessary, but it presents a considerable challenge because of the unstructured nature of their output. Numerous metrics exist to assess specific aspects of a language model's performance.

Currently, we're using ROUGE and BERTScore for model evaluation, with plans to support other evaluation metrics in the future. ROUGE measures the overlap of n-grams between the model's prediction and a human-produced reference. BERTScore is a metric for evaluating text generation models that measures the similarity between the contextual token embeddings of the reference and the model's prediction.
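As a rough illustration of what these metrics capture, the sketch below computes them for a single prediction/reference pair using the open source rouge_score and bert_score packages; this is not necessarily the exact implementation behind AI Quick Actions.

```python
# Illustrative only: computing ROUGE and BERTScore for one prediction/reference
# pair with the open source rouge_score and bert_score packages.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

reference = "The cat sat on the mat."
prediction = "A cat was sitting on the mat."

# ROUGE: n-gram overlap between the prediction and the reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, prediction))

# BERTScore: similarity of contextual token embeddings.
precision, recall, f1 = bert_score([prediction], [reference], lang="en")
print(f"BERTScore F1: {f1.item():.3f}")
```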

You can evaluate the model with a dataset from Object Storage or from your notebook session's storage, and group model evaluations together by giving them an experiment name. After the evaluation is complete, you can view the evaluation report and download it to your local machine.

Figure 8: Model evaluation panel

Conclusion

AI Quick Actions in OCI Data Science simplifies the experience for your users, including less technical ones, so that they can deploy, customize, test, and evaluate foundation models faster and focus on building generative AI-powered applications. In the coming months, we plan to add improvements including support for more foundation models, support for more dataset file formats, and additional model evaluation metrics.

If you want to suggest specific models to add to AI Quick Actions, email the OCI Data Science group. For more information on how to use AI Quick Actions, watch the demo video on the Oracle Cloud Infrastructure Data Science YouTube playlist, read our technical documentation, and see our GitHub repository for tips and examples.