eess.IV

arXiv:2501.07248v1 Announce Type: cross Abstract: Understanding the movement of the left ventricle myocardium (LVmyo) during the cardiac cycle is essential for assessing cardiac function. One way to model this movement is through a series of deformable image registrations (DIRs) of the LVmyo. Traditional deep learning methods for DIRs, such as those based on convolutional neural networks, often require substantial memory and computational resources. In contrast, implicit neural representations (INRs) offer an efficient approach by operating on any number of continuous points. This study extends the use of INRs for DIR to cardiac computed tomography (CT), focusing on LVmyo registration. To enhance the precision of the registration around the LVmyo, we incorporate the signed distance field of the LVmyo with the Hounsfield Unit values from the CT frames. This guides the registration of the LVmyo, while keeping the tissue information from the CT frames. Our framework demonstrates high registration accuracy and provides a robust method for temporal registration that facilitates further analysis of LVmyo motion.
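
A minimal sketch of the INR registration setup in PyTorch: a coordinate MLP maps continuous points to displacements, and the loss mixes HU similarity with an SDF term. Layer sizes, names, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DisplacementINR(nn.Module):
    """Coordinate MLP: a continuous (x, y, z) point -> warped coordinates."""
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        blocks, dim = [], 3
        for _ in range(layers):
            blocks += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        blocks.append(nn.Linear(dim, 3))
        self.net = nn.Sequential(*blocks)

    def forward(self, pts):          # pts: (N, 3), normalized to [-1, 1]^3
        return pts + self.net(pts)   # identity plus predicted displacement

def registration_loss(model, pts, hu_fixed, hu_moving,
                      sdf_fixed, sdf_moving, w_sdf=1.0):
    """Combined HU + SDF similarity sampled at warped points.
    hu_* and sdf_* are callables interpolating the respective volumes."""
    warped = model(pts)
    loss_hu = ((hu_moving(warped) - hu_fixed(pts)) ** 2).mean()
    loss_sdf = ((sdf_moving(warped) - sdf_fixed(pts)) ** 2).mean()
    return loss_hu + w_sdf * loss_sdf
```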

Mathias Micheelsen Lowes, Jonas Jalili Pedersen, Bjørn S. Hansen, Klaus Fuglsang Kofoed, Maxime Sermesant, Rasmus R. Paulsen (1/14/2025)

arXiv:2209.03832v2 Announce Type: replace-cross Abstract: The tensor low-rank prior has attracted considerable attention in dynamic MR reconstruction. Tensor low-rank methods preserve the inherent high-dimensional structure of data, allowing for improved extraction and utilization of intrinsic low-rank characteristics. However, most current methods are still confined to utilizing low-rank structures either in the image domain or predefined transformed domains. Designing an optimal transformation adaptable to dynamic MRI reconstruction through manual efforts is inherently challenging. In this paper, we propose a deep unrolling network that utilizes the convolutional neural network (CNN) to adaptively learn the transformed domain for leveraging tensor low-rank priors. Under the supervised mechanism, the learning of the tensor low-rank domain is directly guided by the reconstruction accuracy. Specifically, we generalize the traditional t-SVD to a transformed version based on arbitrary high-dimensional unitary transformations and introduce a novel unitary transformed tensor nuclear norm (UTNN). Subsequently, we present a dynamic MRI reconstruction model based on UTNN and devise an efficient iterative optimization algorithm using ADMM, which is finally unfolded into the proposed T2LR-Net. Experiments on two dynamic cardiac MRI datasets demonstrate that T2LR-Net outperforms the state-of-the-art optimization-based and unrolling network-based methods.
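
The transformed tensor nuclear norm can be illustrated by applying a unitary transform along the temporal mode and summing the nuclear norms of the frontal slices; singular-value thresholding is the standard proximal step used inside ADMM. A real-valued sketch with assumed names, not T2LR-Net's learned transform:

```python
import torch

def unitary_transformed_tnn(x, U):
    """Transformed tensor nuclear norm of x (H, W, T) under a unitary U (T, T):
    transform along the third mode, then sum nuclear norms of frontal slices."""
    xt = torch.einsum('hwt,ts->hws', x, U)   # mode-3 unitary transform
    return sum(torch.linalg.svdvals(xt[:, :, k]).sum()
               for k in range(xt.shape[2]))

def svt(slice2d, tau):
    """Singular-value thresholding: the proximal operator of the nuclear
    norm, applied slice-wise inside an ADMM iteration."""
    u, s, vh = torch.linalg.svd(slice2d, full_matrices=False)
    return u @ torch.diag(torch.clamp(s - tau, min=0.0)) @ vh
```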

Yinghao Zhang, Peng Li, Yue Hu (1/14/2025)

arXiv:2501.07016v1 Announce Type: cross Abstract: The prognostic task is of great importance, as it is closely related to the survival analysis of patients, the optimization of treatment plans, and the allocation of resources. Existing prognostic models have shown promising results on specific datasets, but they have limitations in two respects. On the one hand, they merely explore certain types of modal data, such as patient histopathology whole slide images (WSIs) and gene expression analysis. On the other hand, they adopt the per-cancer-per-model paradigm, which means the trained models can only predict the prognostic effect of a single type of cancer, resulting in weak generalization ability. In this paper, a deep learning-based model, named UMPSNet, is proposed. Specifically, to comprehensively understand the condition of patients, in addition to constructing encoders for histopathology images and genomic expression profiles respectively, UMPSNet further integrates four types of important meta data (demographic information, cancer type information, treatment protocols, and diagnosis results) into text templates, and then introduces a text encoder to extract textual features. In addition, an optimal transport (OT)-based attention mechanism is utilized to align and fuse features of different modalities. Furthermore, a guided soft mixture of experts (GMoE) mechanism is introduced to effectively address the issue of distribution differences among multiple cancer datasets. By incorporating the multi-modality of patient data and joint training, UMPSNet outperforms all SOTA approaches; moreover, it demonstrates the effectiveness and generalization ability of the proposed learning paradigm of a single model for multiple cancer types. The code of UMPSNet is available at https://github.com/binging512/UMPSNet.
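
A guided soft mixture of experts can be pictured as a gate whose logits are biased by side information such as a cancer-type embedding. The sketch below is a generic soft-MoE layer in PyTorch under that reading; the class name, sizes, and the `guide_logits` hook are hypothetical, not UMPSNet's actual design.

```python
import torch
import torch.nn as nn

class SoftMoE(nn.Module):
    """Soft mixture of experts: every expert sees the input, and outputs are
    blended by gate weights. A 'guided' variant biases the gate logits with
    side information (e.g. a cancer-type embedding)."""
    def __init__(self, dim, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)

    def forward(self, x, guide_logits=None):        # x: (B, dim)
        logits = self.gate(x)
        if guide_logits is not None:                # hypothetical guidance term
            logits = logits + guide_logits
        w = logits.softmax(dim=-1)                  # (B, n_experts)
        out = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, dim)
        return (w.unsqueeze(-1) * out).sum(dim=1)
```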

Binyu Zhang, Shichao Li, Junpeng Jian, Zhu Meng, Limei Guo, Zhicheng Zhao (1/14/2025)

arXiv:2501.07197v1 Announce Type: cross Abstract: In this paper we discuss lung cancer detection using a hybrid model of Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) for early detection of tumors, benign or malignant. The work trains this hybrid model on a dataset of Computed Tomography (CT) scans. Using deep learning to detect lung cancer early is a cutting-edge method.
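
A common way to realize such a CNN+SVM hybrid is to use a frozen CNN as a feature extractor and fit an SVM on the extracted features. The sketch below assumes a ResNet-18 backbone and an RBF kernel; the paper does not specify either, so both are illustrative choices.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Frozen CNN backbone as a feature extractor (ResNet-18 is an assumption).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 512-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(batch):        # batch: (N, 3, 224, 224) preprocessed CT slices
    return backbone(batch).numpy()

# SVM head trained on CNN features; labels would be benign/malignant.
svm = SVC(kernel='rbf', C=1.0)
# svm.fit(extract_features(train_images), train_labels)
# preds = svm.predict(extract_features(test_images))
```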

Aryan Chaudhari, Ankush Singh, Sanchi Gajbhiye, Pratham Agrawal (1/14/2025)

arXiv:2501.06838v1 Announce Type: cross Abstract: Equipped with the continuous representation capability of Multi-Layer Perceptron (MLP), Implicit Neural Representation (INR) has been successfully employed for Arbitrary-scale Super-Resolution (ASR). However, the limited receptive field of the linear layers in MLP restricts the representation capability of INR, while it is computationally expensive to query the MLP numerous times to render each pixel. Recently, Gaussian Splatting (GS) has shown its advantages over INR in both visual quality and rendering speed in 3D tasks, which motivates us to explore whether GS can be employed for the ASR task. However, directly applying GS to ASR is exceptionally challenging because the original GS is an optimization-based method through overfitting each single scene, while in ASR we aim to learn a single model that can generalize to different images and scaling factors. We overcome these challenges by developing two novel techniques. Firstly, to generalize GS for ASR, we elaborately design an architecture to predict the corresponding image-conditioned Gaussians of the input low-resolution image in a feed-forward manner. Secondly, we implement an efficient differentiable 2D GPU/CUDA-based scale-aware rasterization to render super-resolved images by sampling discrete RGB values from the predicted continuous Gaussians. Via end-to-end training, our optimized network, namely GSASR, can perform ASR for any image and unseen scaling factors. Extensive experiments validate the effectiveness of our proposed method. The project page can be found at \url{https://mt-cly.github.io/GSASR.github.io/}.
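
The rendering step can be pictured as evaluating a weighted sum of 2D Gaussians at every pixel. The naive dense PyTorch sketch below only illustrates the math; the paper implements an efficient scale-aware GPU/CUDA rasterizer, and all tensor shapes here are assumptions.

```python
import torch

def render_gaussians(means, covs_inv, colors, opacities, H, W):
    """Render an (H, W, 3) image by summing K 2D Gaussians at every pixel.
    means: (K, 2) pixel coords; covs_inv: (K, 2, 2) inverse covariances;
    colors: (K, 3); opacities: (K,)."""
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing='ij')
    pix = torch.stack([xs, ys], dim=-1).reshape(-1, 1, 2)      # (HW, 1, 2)
    d = pix - means.unsqueeze(0)                               # (HW, K, 2)
    mahal = torch.einsum('pki,kij,pkj->pk', d, covs_inv, d)    # squared distances
    w = opacities * torch.exp(-0.5 * mahal)                    # (HW, K) weights
    img = w @ colors / (w.sum(dim=1, keepdim=True) + 1e-8)     # normalized blend
    return img.reshape(H, W, 3)
```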

Du Chen, Liyi Chen, Zhengqiang Zhang, Lei Zhang (1/14/2025)

arXiv:2406.04158v3 Announce Type: replace Abstract: Multi-baseline Synthetic Aperture Radar (SAR) three-dimensional (3D) tomography is a crucial remote sensing technique that provides 3D resolution unavailable in conventional SAR imaging. However, achieving high-quality imaging typically requires multi-angle or full-aperture data, resulting in significant imaging costs. Recent advancements in sparse 3D SAR, which rely on data from limited apertures, have gained attention as a cost-effective alternative. Notably, deep learning techniques have markedly enhanced the imaging quality of sparse 3D SAR. Despite these advancements, existing methods primarily depend on high-resolution radar images for supervising the training of deep neural networks (DNNs). This exclusive dependence on single-modal data prevents the introduction of complementary information from other data sources, limiting further improvements in imaging performance. In this paper, we introduce a Cross-Modal 3D-SAR Reconstruction Network (CMAR-Net) to enhance 3D SAR imaging by integrating heterogeneous information. Leveraging cross-modal supervision from 2D optical images and error transfer guaranteed by differentiable rendering, CMAR-Net achieves efficient training and reconstructs highly sparse multi-baseline SAR data into visually structured and accurate 3D images, particularly for vehicle targets. Extensive experiments on simulated and real-world datasets demonstrate that CMAR-Net significantly outperforms SOTA sparse reconstruction algorithms based on compressed sensing (CS) and deep learning (DL). Furthermore, our method eliminates the need for time-consuming full-aperture data preprocessing and relies solely on computer-rendered optical images, significantly reducing dataset construction costs. This work highlights the potential of deep learning for multi-baseline SAR 3D imaging and introduces a novel framework for radar imaging research through cross-modal learning.
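
Cross-modal supervision of this kind typically compares a differentiable 2D projection of the 3D estimate against an optical reference. The sketch below uses a simple max-projection as a stand-in; CMAR-Net's actual differentiable rendering is more elaborate, and both arguments are illustrative.

```python
import torch

def silhouette_projection_loss(voxels, optical_ref, dim=2):
    """Cross-modal supervision sketch: project a predicted 3D reflectivity
    volume to 2D via a differentiable max along one axis and compare it with
    an optical reference image (both are illustrative stand-ins)."""
    proj = voxels.amax(dim=dim)             # soft silhouette of the 3D estimate
    return torch.nn.functional.mse_loss(proj, optical_ref)
```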

Da Li, Guoqiang Zhao, Houjun Sun, Jiacheng Bao (1/14/2025)

arXiv:2501.06273v1 Announce Type: cross Abstract: In recent years, there has been a surge of research focused on underwater image enhancement using Generative Adversarial Networks (GANs), driven by the need to overcome the challenges posed by underwater environments. Issues such as light attenuation, scattering, and color distortion severely degrade the quality of underwater images, limiting their use in critical applications. GANs have emerged as a powerful tool for enhancing underwater photos due to their ability to learn complex transformations and generate realistic outputs. These advancements have been applied to real-world applications, including marine biology and ecosystem monitoring, coral reef health assessment, underwater archaeology, and autonomous underwater vehicle (AUV) navigation. This paper explores all major approaches to underwater image enhancement, from physical and physics-free models to Convolutional Neural Network (CNN)-based models and state-of-the-art GAN-based methods. It provides a comprehensive analysis of these methods, evaluation metrics, datasets, and loss functions, offering a holistic view of the field. Furthermore, the paper delves into the limitations and challenges faced by current methods, such as generalization issues, high computational demands, and dataset biases, while suggesting potential directions for future research.

Kancharagunta Kishan Babu, Ashreen Tabassum, Bommakanti Navaneeth, Tenneti Jahnavi, Yenka Akshaya (1/14/2025)

arXiv:2501.06939v1 Announce Type: cross Abstract: We develop a procedure for substantially improving the quality of segmented 3D micro-Computed Tomography (micro-CT) images of rocks with a Machine Learning (ML) Generative Model. The proposed model enhances the resolution eightfold (8x) and addresses segmentation inaccuracies due to the overlapping X-ray attenuation in micro-CT measurements of different rock minerals and phases. The proposed generative model is a 3D Deep Convolutional Wasserstein Generative Adversarial Network with Gradient Penalty (3D DC WGAN-GP). The algorithm is trained on segmented 3D low-resolution micro-CT images and segmented unpaired complementary 2D high-resolution Laser Scanning Microscope (LSM) images. The algorithm was demonstrated on multiple samples of Berea sandstones. We achieved high-quality super-resolved 3D images with a resolution of 0.4375 μm/voxel and accurate segmentation of the constituent minerals and pore space. The described procedure can significantly expand the modern capabilities of digital rock physics.
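
The gradient-penalty term of a WGAN-GP is standard and compact in PyTorch; the sketch below assumes 5D tensors for 3D volumes, with the critic architecture left abstract.

```python
import torch

def gradient_penalty(critic, real, fake, device='cpu'):
    """WGAN-GP penalty: the critic's gradients at random interpolates between
    real and fake 3D volumes (B, C, D, H, W) are pushed toward unit norm."""
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=device)
    mix = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(mix)
    grads = torch.autograd.grad(outputs=score, inputs=mix,
                                grad_outputs=torch.ones_like(score),
                                create_graph=True)[0]
    return ((grads.reshape(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()
```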

Evgeny Ugolkov, Xupeng He, Hyung Kwak, Hussein Hoteit (1/14/2025)

arXiv:2501.06223v1 Announce Type: cross Abstract: From the early days of its popularization to the present, window setting in Computed Tomography (CT) has always been an indispensable part of the CT analysis process. Although research has investigated the capabilities of CT multi-window fusion in enhancing neural networks, there remains a paucity of domain-invariant, intuitively interpretable methodologies for Auto Window Setting. In this work, we propose a plug-and-play module derived from the Tanh activation function, which is compatible with mainstream deep learning architectures. Starting from the physical principles of CT, we adhere to the principle of interpretability to ensure the module's reliability for medical implementations. The domain-invariant design facilitates observation of the preference decisions rendered by the adaptive mechanism from a clinically intuitive perspective. This enables the proposed method to be understood not only by experts in neural networks but also to garner higher trust from clinicians. We confirm the effectiveness of the proposed method on multiple open-source datasets, yielding 10%-200% Dice improvements on hard segmentation targets.
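
One plausible reading of a Tanh-derived window module is a squashing function with a learnable center and width applied to raw HU values. The sketch below is that reading, with defaults borrowed from a common soft-tissue window; it is not the paper's actual module.

```python
import torch
import torch.nn as nn

class TanhWindow(nn.Module):
    """Differentiable CT windowing: a Tanh squashes HU values around a
    learnable center with a learnable width, letting the network adapt the
    window during training (a sketch of the idea, not the paper's design)."""
    def __init__(self, center=40.0, width=400.0):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(center))  # HU (soft tissue)
        self.width = nn.Parameter(torch.tensor(width))

    def forward(self, hu):
        return torch.tanh(2.0 * (hu - self.center) / self.width.clamp(min=1.0))
```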

Yiqin Zhang, Meiling Chen, Zhengjie Zhang (1/14/2025)

arXiv:2501.07120v1 Announce Type: cross Abstract: Ultrasound imaging frequently encounters challenges, such as elevated noise levels, diminished spatiotemporal resolution, and the complexity of anatomical structures. These factors significantly hinder a model's ability to accurately capture and analyze structural relationships and dynamic patterns across various regions of the heart. Mamba, an emerging state-space model, is widely applied to diverse vision and language tasks. To this end, this paper introduces a U-shaped deep learning model incorporating a large-window Mamba scale (LMS) module and a hierarchical feature fusion approach for echocardiographic segmentation. First, a cascaded residual block serves as an encoder and is employed to incrementally extract multiscale detailed features. Second, a large-window multiscale Mamba module is integrated into the decoder to capture global dependencies across regions and enhance the segmentation capability for complex anatomical structures. Furthermore, our model introduces auxiliary losses at each decoder layer and employs a dual attention mechanism to fuse multilayer features both spatially and across channels. This approach enhances segmentation performance and accuracy in delineating complex anatomical structures. Finally, experimental results on the EchoNet-Dynamic and CAMUS datasets demonstrate that the model outperforms other methods in terms of both accuracy and robustness. For the segmentation of the left ventricular endocardium (${LV}_{endo}$), the model achieved optimal values of 95.01 and 93.36 on the two datasets, respectively, while for the left ventricular epicardium (${LV}_{epi}$), values of 87.35 and 87.80 were achieved. This represents an improvement ranging between 0.54 and 1.11 compared with the best-performing model.
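
The dual attention fusion of multilayer features can be sketched as channel reweighting followed by spatial reweighting. The PyTorch module below is a generic version of that pattern, with all layer choices assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    """Fuse two feature maps with channel attention (squeeze-and-excite style)
    followed by spatial attention; a generic sketch of the fusion idea."""
    def __init__(self, ch):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())
        self.proj = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, a, b):                    # a, b: (B, ch, H, W)
        x = torch.cat([a, b], dim=1)            # (B, 2ch, H, W)
        x = self.proj(x) * self.channel(x)      # channel-wise reweighting
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(s)              # spatial reweighting
```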

Xiaoxian Yang, Qi Wang, Kaiqi Zhang, Ke Wei, Jun Lyu, Lingchao Chen (1/14/2025)

arXiv:2501.06440v1 Announce Type: new Abstract: Recent advancements in meteorology involve the use of ground-based sky cameras for cloud observation. Analyzing images from these cameras helps in calculating cloud coverage and understanding atmospheric phenomena. Traditionally, cloud image segmentation relied on conventional computer vision techniques. However, with the advent of deep learning, convolutional neural networks (CNNs) are increasingly applied for this purpose. Despite their effectiveness, CNNs often require many epochs to converge, posing challenges for real-time processing in sky camera systems. In this paper, we introduce UCloudNet, a residual U-Net with deep supervision for cloud segmentation, which achieves better accuracy than previous approaches with less training cost. The residual connections in the encoders of UCloudNet further improve its feature extraction ability.
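
Deep supervision usually means attaching an auxiliary loss to each decoder scale and summing them with fixed weights. A minimal sketch, assuming binary cloud masks and bilinear upsampling of each auxiliary output (the weights are illustrative):

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(aux_logits, target, weights=(0.25, 0.5, 1.0)):
    """Weighted sum of BCE losses over decoder outputs at several scales;
    each auxiliary map is upsampled to the target resolution.
    target: float mask of shape (B, 1, H, W)."""
    loss = 0.0
    for w, logits in zip(weights, aux_logits):   # coarse -> fine
        up = F.interpolate(logits, size=target.shape[-2:], mode='bilinear',
                           align_corners=False)
        loss = loss + w * F.binary_cross_entropy_with_logits(up, target)
    return loss
```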

Yijie Li, Hewei Wang, Shaofan Wang, Yee Hui Lee, Muhammad Salman Pathan, Soumyabrata Dev (1/14/2025)

arXiv:2501.06262v1 Announce Type: new Abstract: TinyML has made deploying deep learning models on low-power edge devices feasible, creating new opportunities for real-time perception in constrained environments. However, the adaptability of such deep learning methods remains limited to data drift adaptation, lacking broader capabilities that account for the environment's underlying dynamics and inherent uncertainty. Deep learning's scaling laws, which counterbalance this limitation by massively up-scaling data and model size, cannot be applied when deploying on the Edge, where deep learning limitations are further amplified as models are scaled down for deployment on resource-constrained devices. This paper presents a smart agentic system capable of performing on-device perception and planning, enabling active sensing on the edge. By incorporating active inference into our solution, our approach extends beyond deep learning capabilities, allowing the system to plan in dynamic environments while operating in real time with a modest total model size of 2.3 MB. We showcase our proposed system by creating and deploying a saccade agent connected to an IoT camera with pan and tilt capabilities on an NVIDIA Jetson embedded device. The saccade agent controls the camera's field of view following optimal policies derived from the active inference principles, simulating human-like saccadic motion for surveillance and robotics applications.
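
In discrete active inference, actions are scored by their expected free energy: risk (divergence of predicted observations from preferences) plus ambiguity (expected observation entropy). A toy NumPy sketch of that selection rule, with all matrices assumed rather than taken from the paper:

```python
import numpy as np

def expected_free_energy(q_s, A, C):
    """G = risk + ambiguity for one action's predicted state belief q_s.
    A: likelihood p(o|s) with columns summing to 1; C: preferred p(o)."""
    q_o = A @ q_s                                   # predicted observations
    risk = np.sum(q_o * (np.log(q_o + 1e-12) - np.log(C + 1e-12)))
    ambiguity = -np.sum(q_s * np.sum(A * np.log(A + 1e-12), axis=0))
    return risk + ambiguity

def select_action(q_s_per_action, A, C):
    """Choose the camera action (e.g. a pan/tilt target) minimizing G."""
    G = [expected_free_energy(q, A, C) for q in q_s_per_action]
    return int(np.argmin(G))
```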

Devendra Vyas, Miguel de Prado, Tim Verbelen (1/14/2025)

arXiv:2501.06488v1 Announce Type: new Abstract: Neural View Synthesis (NVS), such as NeRF and 3D Gaussian Splatting, effectively creates photorealistic scenes from sparse viewpoints, typically evaluated by quality assessment methods like PSNR, SSIM, and LPIPS. However, these full-reference methods, which compare synthesized views to reference views, may not fully capture the perceptual quality of neurally synthesized scenes (NSS), particularly due to the limited availability of dense reference views. Furthermore, the challenges in acquiring human perceptual labels hinder the creation of extensive labeled datasets, risking model overfitting and reduced generalizability. To address these issues, we propose NVS-SQA, an NSS quality assessment method that learns no-reference quality representations through self-supervision, without reliance on human labels. Traditional self-supervised learning predominantly relies on the "same instance, similar representation" assumption and extensive datasets. However, given that these conditions do not apply in NSS quality assessment, we employ heuristic cues and quality scores as learning objectives, along with a specialized contrastive pair preparation process to improve the effectiveness and efficiency of learning. The results show that NVS-SQA outperforms 17 no-reference methods by a large margin (i.e., on average 109.5% in SRCC, 98.6% in PLCC, and 91.5% in KRCC over the second best) and even exceeds 16 full-reference methods across all evaluation metrics (i.e., 22.9% in SRCC, 19.1% in PLCC, and 18.6% in KRCC over the second best).
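
The reported SRCC, PLCC, and KRCC are the standard quality-assessment correlations between predicted scores and reference scores, computable directly with SciPy:

```python
from scipy import stats

def quality_correlations(pred, mos):
    """Spearman (SRCC), Pearson (PLCC), and Kendall (KRCC) correlations
    between predicted quality scores and reference opinion scores."""
    srcc, _ = stats.spearmanr(pred, mos)
    plcc, _ = stats.pearsonr(pred, mos)
    krcc, _ = stats.kendalltau(pred, mos)
    return srcc, plcc, krcc
```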

Qiang Qu, Yiran Shen, Xiaoming Chen, Yuk Ying Chung, Weidong Cai, Tongliang Liu (1/14/2025)

arXiv:2501.06356v1 Announce Type: cross Abstract: Developing reliable healthcare AI models requires training with representative and diverse data. In imbalanced datasets, model performance tends to plateau on the more prevalent classes while remaining low on less common cases. To overcome this limitation, we propose DiffUltra, the first generative AI technique capable of synthesizing realistic Lung Ultrasound (LUS) images with extensive lesion variability. Specifically, we condition the generative AI on the introduced Lesion-Anatomy Bank, which captures the lesion's structural and positional properties from real patient data to guide the image synthesis. We demonstrate that DiffUltra improves consolidation detection by 5.6% in AP compared to models trained solely on real patient data. More importantly, DiffUltra increases data diversity and the prevalence of rare cases, leading to a 25% AP improvement in detecting rare instances such as large lung consolidations, which make up only 10% of the dataset.

Yu-Cheng Chou, Gary Y. Li, Li Chen, Mohsen Zahiri, Naveen Balaraju, Shubham Patil, Bryson Hicks, Nikolai Schnittke, David O. Kessler, Jeffrey Shupp, Maria Parker, Cristiana Baloescu, Christopher Moore, Cynthia Gregory, Kenton Gregory, Balasundar Raju, Jochen Kruecker, Alvin Chen (1/14/2025)

arXiv:2501.06326v1 Announce Type: new Abstract: Brain decoding has emerged as a rapidly advancing and extensively utilized technique within neuroscience. This paper centers on the application of raw electroencephalogram (EEG) signals for decoding human brain activity, offering a more expedited and efficient methodology for enhancing our understanding of the human brain. The investigation specifically scrutinizes the efficacy of brain-computer interfaces (BCI) in deciphering neural signals associated with speech production, with particular emphasis on the impact of vocabulary size, electrode density, and training data on the framework's performance. The study reveals the competitive word error rates (WERs) achievable on the Librispeech benchmark through pre-training on unlabelled data for speech processing. Furthermore, the study evaluates the efficacy of voice recognition under configurations with limited labeled data, surpassing previous state-of-the-art techniques while utilizing significantly fewer labels. Additionally, the research provides a comprehensive analysis of error patterns in voice recognition and the influence of model size and unlabelled training data. It underscores the significance of factors such as vocabulary size and electrode density in enhancing BCI performance, advocating for an increase in microelectrodes and refinement of language models.
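
The word error rate quoted throughout is the word-level edit distance normalized by the reference length; a minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """WER = Levenshtein distance between word sequences / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# word_error_rate("the quick brown fox", "the quik brown fox box")  # -> 0.5
```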

Zenon Lamprou, Yashar Moshfeghi (1/14/2025)

arXiv:2501.06938v1 Announce Type: new Abstract: The automatic identification of Magnetic Resonance Imaging (MRI) sequences can streamline clinical workflows by reducing the time radiologists spend manually sorting and identifying sequences, thereby enabling faster diagnosis and treatment planning for patients. However, the lack of standardization in the parameters of MRI scans poses challenges for automated systems and complicates the generation and utilization of datasets for machine learning research. To address this issue, we propose a system for MRI sequence identification using an unsupervised contrastive deep learning framework. By training a convolutional neural network based on the ResNet-18 architecture, our system classifies nine common MRI sequence types as a 9-class classification problem. The network was trained using an in-house internal dataset and validated on several public datasets, including BraTS, ADNI, Fused Radiology-Pathology Prostate Dataset, the Breast Cancer Dataset (ACRIN), among others, encompassing diverse acquisition protocols and requiring only 2D slices for training. Our system achieves a classification accuracy of over 0.95 across the nine most common MRI sequence types.
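
The classification head can be sketched by adapting torchvision's ResNet-18 to single-channel 2D slices and nine output classes; this shows only the architecture adaptation, not the contrastive pre-training or the training data.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# ResNet-18 adapted to single-channel 2D MRI slices and 9 sequence classes.
model = resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 9)

logits = model(torch.randn(4, 1, 224, 224))   # (4, 9) class scores
```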

Yuli Wang, Kritika Iyer, Sep Farhand, Yoshihisa Shinagawa (1/14/2025)

arXiv:2501.06230v1 Announce Type: new Abstract: Current approaches to dichotomous image segmentation (DIS) treat image matting and object segmentation as fundamentally different tasks. As improvements in image segmentation become increasingly challenging to achieve, combining image matting and grayscale segmentation techniques offers promising new directions for architectural innovation. Inspired by the possibility of aligning these two model tasks, we propose a new architectural approach for DIS called Confidence-Guided Matting (CGM). We created the first CGM model, called Background Erase Network (BEN). BEN comprises two components: BEN Base for initial segmentation and BEN Refiner for confidence refinement. Our approach achieves substantial improvements over current state-of-the-art methods on the DIS5K validation dataset, demonstrating that matting-based refinement can significantly enhance segmentation quality. This work opens new possibilities for cross-pollination between matting and segmentation techniques in computer vision.
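
One plausible reading of confidence-guided refinement is to trust the base segmentation where it is decisive and defer to the refiner where it is uncertain; the blend below is that reading, not BEN's actual mechanism.

```python
import torch

def confidence_guided_blend(base_alpha, refined_alpha):
    """Keep the base prediction where it is confident (near 0 or 1) and
    defer to the refiner elsewhere; an illustrative sketch of the idea."""
    confidence = (2.0 * (base_alpha - 0.5)).abs()   # 1 at 0/1, 0 at 0.5
    return confidence * base_alpha + (1.0 - confidence) * refined_alpha
```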

Maxwell Meyer, Jack Spruyt (1/14/2025)

arXiv:2501.06653v1 Announce Type: new Abstract: Snapshot compressive imaging (SCI) refers to the recovery of three-dimensional data cubes, such as videos or hyperspectral images, from their two-dimensional projections, which are generated by a special encoding of the data with a mask. SCI systems commonly use binary-valued masks that follow certain physical constraints. Optimizing these masks subject to these constraints is expected to improve system performance. However, prior theoretical work on SCI systems focuses solely on independently and identically distributed (i.i.d.) Gaussian masks, which do not permit such optimization. On the other hand, existing practical mask optimizations rely on computationally intensive joint optimizations that provide limited insight into the role of masks and are expected to be sub-optimal due to the non-convexity and complexity of the optimization. In this paper, we analytically characterize the performance of SCI systems employing binary masks and leverage our analysis to optimize hardware parameters. Our findings provide a comprehensive and fundamental understanding of the role of binary masks, with both independent and dependent elements, and their optimization. We also present simulation results that confirm our theoretical findings and further illuminate different aspects of mask design.
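
The SCI forward model itself is compact: each frame of the data cube is modulated by its binary mask and the modulated frames are summed into one snapshot. A NumPy sketch with i.i.d. binary masks (sizes are illustrative):

```python
import numpy as np

def sci_forward(cube, masks):
    """Snapshot compressive imaging measurement: each frame of the (H, W, T)
    data cube is modulated by its binary mask and the results are summed
    into a single 2D snapshot."""
    return (cube * masks).sum(axis=2)       # masks: (H, W, T) in {0, 1}

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 8))              # e.g. 8 video frames
masks = rng.integers(0, 2, cube.shape)      # i.i.d. binary masks
y = sci_forward(cube, masks)                # (64, 64) measurement
```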

Mengyu Zhao, Shirin Jalali (1/14/2025)

arXiv:2501.06336v1 Announce Type: new Abstract: We introduce MEt3R, a metric for multi-view consistency in generated images. Large-scale generative models for multi-view image generation are rapidly advancing the field of 3D inference from sparse observations. However, due to the nature of generative modeling, traditional reconstruction metrics are not suitable to measure the quality of generated outputs and metrics that are independent of the sampling procedure are desperately needed. In this work, we specifically address the aspect of consistency between generated multi-view images, which can be evaluated independently of the specific scene. Our approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a feed-forward manner, which are used to warp image contents from one view into the other. Then, feature maps of these images are compared to obtain a similarity score that is invariant to view-dependent effects. Using MEt3R, we evaluate the consistency of a large set of previous methods for novel view and video generation, including our open, multi-view latent diffusion model.

Mohammad Asim, Christopher Wewer, Thomas Wimmer, Bernt Schiele, Jan Eric Lenssen (1/14/2025)

arXiv:2311.06394v4 Announce Type: replace-cross Abstract: With the spread of COVID-19 around the globe over the past year, the use of artificial intelligence (AI) algorithms and image processing methods to analyze chest X-ray images of patients with COVID-19 has become essential. Recognizing the COVID-19 virus in the lung area of a patient is one of the basic and essential needs of clinical centers and hospitals. Most research in this field has been devoted to papers based on deep learning methods utilizing convolutional neural networks (CNNs), which mainly deal with screening sick and healthy people. In this study, a new 19-layer CNN structure is recommended for accurate recognition of COVID-19 from chest X-ray images. The offered CNN is developed to serve as a precise diagnosis system for a three-class (viral pneumonia, normal, COVID-19) and a four-class classification (lung opacity, normal, COVID-19, and pneumonia). A comparison is conducted between the outcomes of the offered procedure and some popular pretrained networks, including Inception, AlexNet, ResNet50, SqueezeNet, and VGG19, based on specificity, accuracy, precision, sensitivity, confusion matrix, and F1-score. The experimental results demonstrate the dominance of the offered CNN over existing published procedures. This method can be a useful tool for clinicians in making proper decisions about COVID-19.

Xinyuan Song (1/14/2025)