image: Panasonic × AI logo

Jul 11, 2024


-Towards interpretable*1 generative AI-

Panasonic HD Develops AI Technology That Can Handle Context-specific Knowledge, Which Was Previously a Limitation of Flow-based Generative Models

Osaka, Japan - Panasonic R&D Company of America (PRDCA) and Panasonic Holdings Corporation (Panasonic HD) have developed a flow-based generative AI model that can also handle contextual information, such as additional user or device information, achieving performance that exceeds conventional methods*2 on benchmarks such as failure prediction.

As the demand for interpretability in generative AI models increases, flow-based generative models are gaining attention. Flow-based generative models differ from other deep generative models in that they implement layered bijective transformations between a target data distribution and a base distribution using learned parameters, making it easier to interpret which inputs the output data is based on. On the other hand, their bijective property makes it difficult to train existing models with additional context-specific knowledge, posing a challenge in practical applications. To address this issue, we developed a new flow-based generative model, ContextFlow++, which can add contextual information to existing models using an additive operation while preserving the bijection property.
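As a minimal sketch of this idea (written in PyTorch; the module names AdditiveContextStep and context_net are illustrative assumptions, not the paper's released code), the snippet below shows why a shift that depends only on the context leaves the transformation exactly invertible:

```python
# Minimal sketch, not the official implementation: a context-conditioned
# additive step that remains exactly invertible, in the spirit of ContextFlow++.
import torch
import torch.nn as nn

class AdditiveContextStep(nn.Module):
    """z' = z + f(c): the shift depends only on the context c, not on z,
    so the mapping is a bijection in z with the exact inverse z = z' - f(c)."""
    def __init__(self, latent_dim: int, context_dim: int):
        super().__init__()
        self.context_net = nn.Sequential(
            nn.Linear(context_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )

    def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return z + self.context_net(c)      # forward: add the context-dependent shift

    def inverse(self, z_out: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return z_out - self.context_net(c)  # inverse: subtract the same shift
```

Because a pure shift has a zero log-determinant Jacobian, the exact likelihood computation that makes flow-based models interpretable is unaffected by the added context.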

This technology has received international recognition and was accepted to UAI 2024 (The Conference on Uncertainty in Artificial Intelligence), a top conference on AI and machine learning technologies. The findings will be presented at the conference, which will be held in Barcelona, Spain from July 15 to July 19, 2024.

Overview:

Figure 1: ContextFlow++ architecture. First, the data encoder and decoder are pre-trained on large-scale general knowledge without context information. Then, the context encoder and extended decoder learn the small-scale context information. In the decoder, the pre-trained parameters remain fixed and only the context-related parameters are updated.

Panasonic HD and PRDCA are conducting research into AI interpretability. In recent years, we have focused on flow-based generative models, and since announcing FlowEneDet*3 in 2023, we have been working to improve their performance and expand their use cases. Flow-based generative models are widely used in applications where exact density estimation is of major importance, and their interpretability is essential when applying AI models to a wide range of applications, such as image generation and anomaly detection.

In practical AI deployments, it is common to start from the generalist knowledge of a large-scale pre-trained model and then learn context-specific (specialist) knowledge through small-scale additional training, quickly and at low cost.

However, the bijection property that benefits flow-based generative models can also be a hindrance: it is extremely difficult to train a pre-trained model with additional specialist knowledge, and discrete variables (such as categorical data) are difficult to handle.

We therefore developed ContextFlow++, a new approach that retains the benefits of flow-based generative models, whose high interpretability can increase the reliability of AI, while overcoming the limitations that have prevented their practical application to date.

First, we devised a new algorithm that can explicitly separate the knowledge contained in the pre-trained model from context-specific expert knowledge (contextual information) while preserving the bijective transformation. This makes it possible to model knowledge based on specific contexts more flexibly and accurately, something that was difficult to do with conventional flow-based generative models. In addition, by introducing a new architecture for handling discrete variables, it is now possible to handle types of data that could not be handled by conventional methods.
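As an illustration of how such mixed-variable context might be encoded (a hedged sketch only; the MixedContextEncoder below and its dimensions are assumptions for illustration, not the paper's architecture), a categorical context such as a device ID can be embedded while a continuous context such as a degradation level is projected, and the two parts are concatenated into one context vector:

```python
# Hedged sketch of mixed-variable context encoding: a categorical context
# (e.g., a device ID) is embedded, a continuous context (e.g., a degradation
# level) is linearly projected, and both are concatenated into one vector.
import torch
import torch.nn as nn

class MixedContextEncoder(nn.Module):
    def __init__(self, num_categories: int, cont_dim: int, out_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_categories, out_dim // 2)   # discrete part
        self.proj = nn.Linear(cont_dim, out_dim - out_dim // 2)   # continuous part

    def forward(self, cat_ids: torch.Tensor, cont: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.embed(cat_ids), self.proj(cont)], dim=-1)

encoder = MixedContextEncoder(num_categories=15, cont_dim=1, out_dim=16)
context = encoder(torch.randint(0, 15, (32,)), torch.rand(32, 1))  # type + level
print(context.shape)  # torch.Size([32, 16])
```

The resulting context vector can then be fed to a context-conditioned step such as the AdditiveContextStep sketched earlier.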

Figure 2: Results of classification tasks on the degraded-image benchmark dataset CIFAR-10C. Classification accuracy is shown for a generic model trained on CIFAR-10 (green), which contains no image degradation, a generic model trained on CIFAR-10C (blue), the conventional method (purple), and the proposed method (red). The proposed method not only achieved the highest accuracy compared with the conventional method (purple) and the generic model (blue), but also converged faster.

ContextFlow++ makes it possible to add “context” to the knowledge of a pre-trained model, so the model can be extended with specialist knowledge without the time-consuming training of a model from scratch. In addition, because the parameters of the pre-trained model remain fixed during training, contextual information, including discrete variables, can be learned additionally without significantly increasing training and evaluation costs.
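The training recipe this implies can be sketched as follows (a toy, hedged example: ToyGeneralistFlow is a deliberately tiny stand-in for a real pre-trained flow, and it reuses the AdditiveContextStep from the earlier sketch; none of these names come from the released code):

```python
# Hedged two-stage sketch: freeze the pre-trained "generalist" flow and train
# only the context-specific extension by maximum likelihood.
import torch
import torch.nn as nn

class ToyGeneralistFlow(nn.Module):
    """Tiny stand-in for a pre-trained flow: one affine bijection z = (x - b) * exp(-s)."""
    def __init__(self, dim: int):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def forward_and_log_det(self, x: torch.Tensor):
        z = (x - self.b) * torch.exp(-self.s)
        log_det = -self.s.sum().expand(x.shape[0])   # log|det dz/dx| per sample
        return z, log_det

dim, ctx_dim = 8, 3
generalist = ToyGeneralistFlow(dim)                  # imagine this is already pre-trained
context_step = AdditiveContextStep(dim, ctx_dim)     # trainable context extension (see above)
base = torch.distributions.Normal(0.0, 1.0)

for p in generalist.parameters():                    # keep the generalist knowledge fixed
    p.requires_grad_(False)
optimizer = torch.optim.Adam(context_step.parameters(), lr=1e-3)

x = torch.randn(32, dim)                             # toy data batch
c = torch.randn(32, ctx_dim)                         # toy encoded context vectors
z, log_det = generalist.forward_and_log_det(x)       # frozen generalist pass
z = context_step(z, c)                               # context shift (zero log-det)
loss = -(base.log_prob(z).sum(dim=1) + log_det).mean()   # negative log-likelihood
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Only the context-related parameters receive gradients, which is what keeps the additional training and evaluation cost small.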

The performance of this method was evaluated on a variety of benchmark datasets, including the image classification tasks MNIST-R*4 (context information: rotation) and CIFAR-10C*5 (context information: degradation type and degradation level), as well as sensor data tasks such as ATM predictive maintenance*6 (context information: device ID) and the SMAP unsupervised anomaly detection benchmark*7 (context information: entity ID), and the results showed that it outperformed conventional methods. In particular, when the ATM benchmark dataset was tested with imbalanced data in which the imbalance between abnormal and normal data was increased 100-fold to more closely resemble real-world conditions, the performance degradation was limited compared with conventional methods, demonstrating the robustness of an architecture that takes context into account.

Future Outlook:

The newly developed ContextFlow++ extends flow-based generative models into a framework that can handle context information (e.g., device IDs), and experiments on supervised image classification, predictive maintenance, and unsupervised anomaly detection showed the advantages of ContextFlow++. This technology is expected to be applied in fields such as image processing, anomaly detection, and failure prediction, in particular to highly accurate failure prediction that adapts to the characteristics of individual devices and individual installation conditions, where contextual information is an important factor.

Panasonic HD will continue to accelerate the implementation of AI in society and promote research and development of AI technology that will contribute to improving our customers' lives and workplaces.

Notes:

*1: The degree to which the mechanisms and processes by which AI derives predictions and classification results are clear.

*2: You Lu and Bert Huang. Structured output learning with conditional generative flows. In AAAI, 2020.

*3: Efforts towards Responsible AI: Panasonic R&D Company of America and Panasonic Holdings Corporation have developed AI technology to deal with “out-of-distribution” false detection problem (Jul.28.2023) https://news.panasonic.com/global/press/en230728-2

*4: A dataset created by applying random image rotations in discrete steps of 360°/64 to MNIST, a popular machine learning dataset.

*5: Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.

*6: Víctor Manuel Vargas, Riccardo Rosati, César Hervás-Martínez, Adriano Mancini, Luca Romeo, and Pedro Antonio Gutiérrez. A hybrid feature learning approach based on convolutional kernels for ATM fault prediction using event-log data. Engineering Applications of Artificial Intelligence, 2023.

*7: Kyle Hundman, Valentino Constantinou, Christopher Laporte, Ian Colwell, and Tom Soderstrom. Detecting spacecraft anomalies using LSTMs and nonparametric dynamic thresholding. In SIGKDD, 2018.

About the Research:

Paper “ContextFlow++: Generalist-Specialist Flow-based Generative Models with Mixed-Variable Context Encoding” https://arxiv.org/abs/2406.00578
This research is the result of a collaboration between Denis Gudovskiy of Panasonic R&D Company of America, and Tomoyuki Okuno and Yohei Nakata of Panasonic HD Technology Headquarters.

About the Panasonic Group

Founded in 1918, and today a global leader in developing innovative technologies and solutions for wide-ranging applications in the consumer electronics, housing, automotive, industry, communications, and energy sectors worldwide, the Panasonic Group switched to an operating company system on April 1, 2022 with Panasonic Holdings Corporation serving as a holding company and eight companies positioned under its umbrella. The Group reported consolidated net sales of 8,496.4 billion yen for the year ended March 31, 2024. To learn more about the Panasonic Group, please visit: https://holdings.panasonic/global/

The content on this website is accurate at the time of publication but may be subject to change without notice. Please note, therefore, that these documents may not always contain the most up-to-date information.

Issued:
Panasonic Holdings Corporation
