ISSN 0253-2778

CN 34-1054/N

2024 Vol. 54, No. 4

Information Science and Technology
A statistical characteristics preserving watermarking scheme for time series databases
Yelu Yu, Zehua Ma, Jie Zhang, Han Fang, Weiming Zhang, Nenghai Yu
2024, 54(4): 0401. doi: 10.52396/JUSTC-2023-0091
Abstract:
Database watermarking is one of the most effective methods of protecting database copyright. However, traditional database watermarking has a potential drawback: embedding a watermark changes the distribution of the data, which may affect the use and analysis of the database. Since most analyses are based on the statistical characteristics of the target database, preserving these statistical characteristics is the key to ensuring analyzability. Because statistical analysis is performed in groups, time series databases (TSDBs), which exhibit obvious time-grouping characteristics compared with traditional relational databases, are especially valuable for analysis. Therefore, this paper proposes a robust watermarking algorithm for TSDBs that effectively preserves the consistency of statistical characteristics. Based on the time-grouping characteristics of TSDBs, we propose a three-step watermarking method, named RCV, built on linear regression, error compensation, and watermark verification. Using the properties of the linear regression model and error compensation, the proposed method generates a series of data with the same statistical characteristics as the original. A verification mechanism then validates the generated data until it conveys the target watermark message. Compared with existing methods, our method achieves superior robustness and better preserves the statistical properties.
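The error-compensation idea in the abstract can be illustrated with a minimal sketch: perturb one value in a group to carry a watermark bit and apply an opposite perturbation to another value in the same group, so the group sum and mean are unchanged. This is a hypothetical simplification for illustration only; the paper's RCV method additionally uses linear regression and a verification step to preserve richer statistics.

```python
def embed_with_compensation(group, i, j, delta):
    """Perturb group[i] by +delta to encode a watermark bit, and
    compensate group[j] by -delta so the group sum/mean is preserved.

    A toy illustration; indices i, j and delta are hypothetical choices
    that a real scheme would derive from a secret key."""
    out = list(group)
    out[i] += delta
    out[j] -= delta
    return out

# One time-group of values; mean before and after embedding is identical.
group = [10.0, 12.0, 11.0, 13.0]
marked = embed_with_compensation(group, 0, 3, 0.5)
```

Note that this toy version preserves the mean but not the variance; handling higher-order statistics is exactly where the paper's regression-based generation comes in.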
Toward 3D scene reconstruction from locally scale-aligned monocular video depth
Guangkai Xu, Feng Zhao
2024, 54(4): 0402. doi: 10.52396/JUSTC-2023-0061
Abstract:
Monocular depth estimation methods have achieved excellent robustness on diverse scenes, usually by predicting affine-invariant depth (up to an unknown scale and shift) rather than metric depth, because large-scale affine-invariant depth training data are much easier to collect. However, in video-based scenarios such as video depth estimation and 3D scene reconstruction, the unknown scale and shift in each per-frame prediction can make the predicted depth inconsistent across frames. To tackle this problem, we propose a locally weighted linear regression method that recovers the scale and shift map from very sparse anchor points, ensuring consistency along consecutive frames. Extensive experiments show that our method significantly reduces the relative error (Rel) of existing state-of-the-art approaches on several zero-shot benchmarks. In addition, we merge 6.3 million RGBD images to train robust depth models. By locally recovering scale and shift, our ResNet50-backbone model even outperforms the state-of-the-art DPT ViT-Large model. Combined with geometry-based reconstruction methods, we formulate a new dense 3D scene reconstruction pipeline, which benefits from both the scale consistency of sparse points and the robustness of monocular methods. By performing simple per-frame prediction over a video, accurate 3D scene geometry can be recovered.
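The core alignment step can be sketched as a closed-form least-squares fit of a single scale s and shift t so that s·pred + t matches sparse metric anchors. This is a global simplification for illustration; the paper fits a locally weighted regression to recover a spatially varying scale/shift map.

```python
def fit_scale_shift(pred, anchors):
    """Least-squares scale s and shift t so that s*pred + t ~= anchors.

    pred    : affine-invariant depth values at sparse anchor pixels
    anchors : metric depth values at the same pixels
    Closed form: s = cov(pred, anchors) / var(pred), t = mean residual."""
    n = len(pred)
    mx = sum(pred) / n
    my = sum(anchors) / n
    var = sum((x - mx) ** 2 for x in pred)
    cov = sum((x - mx) * (y - my) for x, y in zip(pred, anchors))
    s = cov / var
    t = my - s * mx
    return s, t

# A prediction off by scale 2 and shift 1 is recovered exactly:
pred = [0.5, 1.0, 2.0, 3.0]
metric = [2.0, 3.0, 5.0, 7.0]       # = 2 * pred + 1
s, t = fit_scale_shift(pred, metric)
aligned = [s * d + t for d in pred]  # now consistent with the anchors
```

Applying such a fit per frame (against anchors shared across frames) is what enforces scale consistency along a video.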
Physically plausible and conservative solutions to Navier–Stokes equations using physics-informed CNNs
Jianfeng Li, Liangying Zhou, Jingwei Sun, Guangzhong Sun
2024, 54(4): 0403. doi: 10.52396/JUSTC-2022-0174
Abstract:
The physics-informed neural network (PINN) is an emerging approach for efficiently solving partial differential equations (PDEs) using neural networks. The physics-informed convolutional neural network (PICNN), a variant of PINN enhanced by convolutional neural networks (CNNs), has achieved better results on a series of PDEs since the parameter-sharing property of CNNs is effective in learning spatial dependencies. However, applying existing PICNN-based methods to solve Navier–Stokes equations can generate oscillating predictions, which are inconsistent with the laws of physics and the conservation properties. To address this issue, we propose a novel method that combines PICNN with the finite volume method to obtain physically plausible and conservative solutions to Navier–Stokes equations. We derive the second-order upwind difference scheme of Navier–Stokes equations using the finite volume method. Then we use the derived scheme to calculate the partial derivatives and construct the physics-informed loss function. The proposed method is assessed by experiments on steady-state Navier–Stokes equations under different scenarios, including convective heat transfer and lid-driven cavity flow. The experimental results demonstrate that our method can effectively improve the plausibility and accuracy of the predicted solutions from PICNN.
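To make the upwind-difference idea concrete, here is a hedged 1D sketch: a first-order upwind update for linear advection, where the spatial difference always reaches into the direction the flow comes from. The paper derives a second-order upwind finite-volume scheme for the full Navier–Stokes equations; this simplified 1D example only illustrates the upwinding principle used to build the physics-informed loss.

```python
def upwind_step(u, c, dx, dt):
    """One first-order upwind step for 1D linear advection u_t + c*u_x = 0,
    with periodic boundaries. A toy illustration of upwind differencing,
    not the paper's second-order Navier-Stokes scheme."""
    n = len(u)
    out = list(u)
    for i in range(n):
        if c >= 0:
            # Flow moves right: difference against the upstream (left) cell.
            out[i] = u[i] - c * dt / dx * (u[i] - u[i - 1])
        else:
            # Flow moves left: difference against the right cell.
            out[i] = u[i] - c * dt / dx * (u[(i + 1) % n] - u[i])
    return out

# With CFL number c*dt/dx = 1, the profile shifts exactly one cell right.
state = upwind_step([0.0, 1.0, 0.0, 0.0], c=1.0, dx=1.0, dt=1.0)
```

In a PICNN loss, residuals of such a discretized scheme (rather than automatic-differentiation derivatives) are what penalize non-physical, oscillating predictions.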
A feature transfer model with Mixup and contrastive loss in domain generalization
Yuesong Wang, Hong Zhang
2024, 54(4): 0404. doi: 10.52396/JUSTC-2023-0010
Abstract:
When domains, which represent underlying data distributions, differ between training and test datasets, traditional deep neural networks suffer from a substantial drop in their performance. Domain generalization methods aim to boost generalizability on an unseen target domain by using only training data from source domains. Mainstream domain generalization algorithms usually make modifications on some popular feature extraction networks such as ResNet, or add more complex parameter modules after the feature extraction networks. Popular feature extraction networks are usually well pre-trained on large-scale datasets, so they have strong feature extraction abilities, while modifications can weaken such abilities. Adding more complex parameter modules results in a deeper network and is much more computationally demanding. In this paper, we propose a novel feature transfer model based on popular feature extraction networks in domain generalization, without making any changes or adding any module. The generalizability of this feature transfer model is boosted by incorporating a contrastive loss and a data augmentation strategy (i.e., Mixup), and a new sample selection strategy is proposed to coordinate Mixup and contrastive loss. Experiments on the benchmarks PACS and DomainNet demonstrate the superiority of our proposed method against conventional domain generalization methods.
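The Mixup augmentation mentioned above has a simple form: draw a mixing coefficient from a Beta distribution and take the same convex combination of two samples and of their one-hot labels. This is a generic sketch of standard Mixup, not the paper's specific sample selection strategy.

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Standard Mixup: lam ~ Beta(alpha, alpha), then mix features and
    (one-hot) labels with the same coefficient.

    alpha=0.2 is a commonly used value, assumed here for illustration."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam

# Mix a "class 0" sample with a "class 1" sample; the mixed label stays
# a valid probability vector (components sum to 1).
x, y, lam = mixup([0.0, 1.0], [1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
```

The paper's contribution lies in which pairs get mixed (its sample selection strategy) so that Mixup cooperates with the contrastive loss rather than fighting it.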
LightAD: accelerating AutoDebias with adaptive sampling
Yang Qiu, Hande Dong, Jiawei Chen, Xiangnan He
2024, 54(4): 0405. doi: 10.52396/JUSTC-2022-0100
Abstract:
In recommendation systems, bias is ubiquitous because the data are collected from user behaviors rather than from controlled experiments. AutoDebias, which resorts to meta-learning to find appropriate debiasing configurations, i.e., pseudolabels and confidence weights for all user-item pairs, has been demonstrated to be a generic and effective solution for tackling various biases. Nevertheless, setting pseudolabels and weights for every user-item pair can be a time-consuming process. Therefore, AutoDebias suffers from an enormous computational cost, making it less applicable to real-world cases. Although stochastic gradient descent with a uniform sampler can be applied to accelerate training, this approach significantly deteriorates model convergence and stability. To overcome this problem, we propose LightAutoDebias (LightAD for short), which equips AutoDebias with a specialized importance sampling strategy. The sampler can adaptively and dynamically draw informative training instances, which results in better convergence and stability than the standard uniform sampler. Experiments on three benchmark datasets validate that our LightAD accelerates AutoDebias by several orders of magnitude while maintaining almost equal accuracy.
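The contrast between uniform and importance sampling can be sketched as follows: draw training pairs with probability proportional to a weight (e.g., a confidence score), and attach an inverse-propensity correction so the sampled gradient remains unbiased. This is a generic importance-sampling sketch, not LightAD's actual adaptive sampler; the variable names are illustrative.

```python
import random

def draw_batch(pairs, weights, k):
    """Draw k training pairs with probability proportional to weight,
    returning (pair, correction) tuples where correction = 1/(n*p_i)
    keeps the weighted gradient estimate unbiased.

    With uniform weights this degenerates to the plain uniform sampler
    (all corrections equal to 1)."""
    n = len(pairs)
    total = sum(weights)
    probs = [w / total for w in weights]
    idx = random.choices(range(n), weights=probs, k=k)
    return [(pairs[i], 1.0 / (n * probs[i])) for i in idx]

# Pairs with higher weight are sampled more often but down-weighted
# in the loss, so informative instances dominate the minibatch without
# biasing the objective.
pairs = [("u1", "i1"), ("u2", "i2"), ("u3", "i3")]
batch = draw_batch(pairs, [0.1, 0.3, 0.6], k=4)
```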
Management
The impact of external search, tie strength, and absorptive capacity on new product development performance
Huijun Yang, Wei Wang
2024, 54(4): 0406. doi: 10.52396/JUSTC-2022-0170
Abstract:
This study examines the influences of external search breadth and depth on new product development performance from a knowledge-based view. In particular, we introduce tie strength and absorptive capacity as two contextual variables in this study. The findings from data on 281 Chinese firms indicate that search breadth facilitates new product creativity, whereas search depth facilitates development speed. Tie strength weakens the relationship between search breadth and new product creativity but strengthens the relationship between search depth and development speed. Furthermore, the synergistic effect of tie strength and absorptive capacity negatively moderates the relationship between search breadth and new product creativity but positively moderates the relationship between search depth and development speed.
Article
Hybrid fault tolerance in distributed in-memory storage systems
Zheng Gong, Si Wu, Yinlong Xu
2024, 54(4): 0406. doi: 10.52396/JUSTC-2022-0125
Abstract:
An in-memory storage system provides submillisecond latency and improves the concurrency of user applications by caching data into memory from external storage. Fault tolerance is essential for in-memory storage systems, as the loss of cached data requires fetching it again from external storage, which markedly increases the response latency. Typically, replication and erasure coding (EC) are two fault-tolerant schemes that pose different trade-offs between access performance and storage usage. To achieve the best trade-off between performance and storage, we design ElasticMem, a hybrid fault-tolerant distributed in-memory storage system that supports elastic redundancy transition to dynamically change the fault-tolerant scheme. ElasticMem exploits a novel EC-oriented replication (EOR) that carefully arranges the data placement of replication according to the future data layout of EC, enhancing the I/O efficiency of redundancy transition. ElasticMem solves the consistency problem caused by concurrent data accesses via a lightweight table-based scheme combined with data bypassing: it detects correlated read and write requests and serves subsequent read requests with local data. We implement a prototype of ElasticMem based on Memcached. Experiments show that ElasticMem remarkably reduces the time of redundancy transition, the overall latency of correlated concurrent data accesses, and the latency of single data accesses among them.
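The storage side of the replication-vs-EC trade-off is simple arithmetic: n-way replication stores n full copies, while a Reed-Solomon-style RS(k, m) code stores k data plus m parity chunks, i.e., (k + m)/k of the original size. A small sketch of this comparison (generic EC arithmetic, not ElasticMem's transition logic):

```python
def storage_overhead(scheme, k=None, m=None, replicas=None):
    """Storage multiplier relative to the raw data size.

    replication : stores `replicas` full copies -> overhead = replicas
    ec          : RS(k, m) stores k data + m parity chunks
                  -> overhead = (k + m) / k"""
    if scheme == "replication":
        return float(replicas)
    if scheme == "ec":
        return (k + m) / k
    raise ValueError("unknown scheme: " + scheme)

# 3-way replication costs 3x storage but serves reads from any copy;
# RS(4, 2) tolerates 2 failures at only 1.5x storage, at the cost of
# decode overhead on degraded reads -- the gap ElasticMem's elastic
# redundancy transition moves between.
rep = storage_overhead("replication", replicas=3)   # 3.0
ec = storage_overhead("ec", k=4, m=2)               # 1.5
```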