- Rosetta-PL: Propositional Logic as a Benchmark for Large Language Model Reasoning
  Large Language Models (LLMs) are primarily trained on high-resource natural languages, limiting their effectiveness in low-resource settings and in tasks requiring deep logical reasoning. This research introduces Rosetta-PL, a benchmark designed to evaluate LLMs' logical reasoning and generalization capabilities in a controlled environment. We construct Rosetta-PL by translating a dataset of logical propositions from Lean into a custom logical language, which is then used to fine-tune an LLM (e.g., GPT-4o). Our experiments analyze the impact of dataset size and translation methodology on model performance. Our results indicate that preserving logical relationships in the translation process significantly boosts precision, with accuracy plateauing beyond roughly 20,000 training samples. These insights provide valuable guidelines for optimizing LLM training in formal reasoning tasks and for improving performance in various low-resource language applications.
  9 authors · Mar 25, 2025
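  To make the construction concrete, here is a minimal, hypothetical sketch of the kind of structure-preserving translation the abstract describes: logical operators in Lean-style propositions are mapped one-to-one onto invented tokens, so the logical relationships survive the change of surface vocabulary. The symbol table `LEAN_TO_CUSTOM` and the token names are illustrative assumptions, not the paper's actual mapping.

  ```python
  # Hypothetical structure-preserving translation into a custom logical
  # language, in the spirit of the Rosetta-PL construction. The custom
  # tokens below are invented for illustration, not the paper's vocabulary.
  LEAN_TO_CUSTOM = {
      "∧": "ZUN",   # conjunction
      "∨": "ZOR",   # disjunction
      "¬": "ZNE",   # negation
      "→": "ZIM",   # implication
  }

  def translate(proposition: str) -> str:
      """Token-for-token translation that keeps each operator in place,
      so the logical relationships between atoms are preserved."""
      return " ".join(LEAN_TO_CUSTOM.get(tok, tok) for tok in proposition.split())

  print(translate("p ∧ q → ¬ r"))  # -> "p ZUN q ZIM ZNE r"
  ```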
- Rosetta Neurons: Mining the Common Units in a Model Zoo
  Do different neural networks, trained for various vision tasks, share some common representations? In this paper, we demonstrate the existence of common features, which we call "Rosetta Neurons," across a range of models with different architectures, different tasks (generative and discriminative), and different types of supervision (class-supervised, text-supervised, self-supervised). We present an algorithm for mining a dictionary of Rosetta Neurons across several popular vision models: class-supervised ResNet50, DINO-ResNet50, DINO-ViT, MAE, CLIP-ResNet50, BigGAN, StyleGAN-2, and StyleGAN-XL. Our findings suggest that certain visual concepts and structures are inherently embedded in the natural world and can be learned by different models regardless of the specific task or architecture, and without the use of semantic labels. Because our analysis includes generative models, we can visualize the shared concepts directly. The Rosetta Neurons facilitate model-to-model translation, enabling various inversion-based manipulations, including cross-class alignments, shifting, zooming, and more, without the need for specialized training.
  4 authors · Jun 15, 2023
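  A minimal sketch of the core mining idea, assuming PyTorch: gather spatial activation maps from two models on the same images, resize them to a common resolution, and keep channel pairs whose maps correlate strongly. The function name, shapes, and threshold are assumptions for illustration, not the paper's published code.

  ```python
  # Sketch of matching "Rosetta" channel pairs between two models by
  # Pearson correlation of their activation maps on shared images.
  import torch

  def best_matches(acts_a: torch.Tensor, acts_b: torch.Tensor, thresh: float = 0.8):
      """acts_a: (N, Ca, H, W) and acts_b: (N, Cb, H, W) activations on the
      same N images, already resized to a common spatial resolution.
      Returns (a_idx, b_idx) pairs whose maps correlate above `thresh`."""
      a = acts_a.flatten(2).permute(1, 0, 2).reshape(acts_a.shape[1], -1)  # (Ca, N*H*W)
      b = acts_b.flatten(2).permute(1, 0, 2).reshape(acts_b.shape[1], -1)  # (Cb, N*H*W)
      a = (a - a.mean(1, keepdim=True)) / (a.std(1, keepdim=True) + 1e-8)
      b = (b - b.mean(1, keepdim=True)) / (b.std(1, keepdim=True) + 1e-8)
      corr = a @ b.T / a.shape[1]                      # (Ca, Cb) correlation matrix
      return (corr > thresh).nonzero(as_tuple=False)   # candidate Rosetta pairs
  ```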
- The jetted NLS1 1H 0323+342: the Rosetta stone for accretion/ejection in AGN
  1H 0323+342 is the nearest gamma-ray-emitting narrow-line Seyfert 1 galaxy (z = 0.063). Its X-ray spectrum (0.3-10 keV) shows significant spectral variability, reported by many authors: a backbone with photon index ~2, on which a hard tail is occasionally superimposed. This spectral variability has been interpreted as the interplay between the X-ray corona and the relativistic jet. The X-ray fluxes in the 0.3-10 keV energy band are generally around ~10^-11 erg cm^-2 s^-1, making it possible to obtain sufficient statistics even with short exposures. Here I present a reanalysis of all the available X-ray observations performed between 2006 and 2025 with Swift (181 obs), XMM-Newton (7 obs), Chandra (1 obs), and Suzaku (2 obs). Possible interpretations are proposed and discussed.
  1 author · Nov 30, 2025
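  As a rough illustration of the spectral "backbone" quoted above, here is a minimal sketch, assuming numpy and scipy, of fitting a photon power law N(E) = K E^-Gamma over 0.3-10 keV. The data are synthetic and the normalization arbitrary; a real reanalysis would fit count spectra with instrument responses in a package such as XSPEC.

  ```python
  # Fit a power-law photon spectrum, the ~Gamma=2 backbone described above,
  # to synthetic data over the 0.3-10 keV band.
  import numpy as np
  from scipy.optimize import curve_fit

  def power_law(E, K, Gamma):
      """Photon spectrum in photons cm^-2 s^-1 keV^-1 (illustrative units)."""
      return K * E ** (-Gamma)

  E = np.geomspace(0.3, 10.0, 50)                               # energies in keV
  rng = np.random.default_rng(0)
  obs = power_law(E, 1e-3, 2.0) * rng.normal(1, 0.05, E.size)   # fake spectrum

  (K, Gamma), _ = curve_fit(power_law, E, obs, p0=(1e-3, 1.8))
  print(f"best-fit photon index Gamma = {Gamma:.2f}")           # ~2, as quoted
  ```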