<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Foundation Models</title><link>https://lelain.net/tags/foundation-models/</link><atom:link href="https://lelain.net/tags/foundation-models/index.xml" rel="self" type="application/rss+xml"/><description>Foundation Models</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Sat, 30 Aug 2025 00:00:00 +0000</lastBuildDate><image><url>https://lelain.net/media/icon.svg</url><title>Foundation Models</title><link>https://lelain.net/tags/foundation-models/</link></image><item><title>How to finetune DINOv2 for astronomy?</title><link>https://lelain.net/publications/dinov2-astronomy/</link><pubDate>Sat, 30 Aug 2025 00:00:00 +0000</pubDate><guid>https://lelain.net/publications/dinov2-astronomy/</guid><description/></item><item><title>How to specialize DINOv2 for astronomy?</title><link>https://lelain.net/events/gretsi2025/</link><pubDate>Sun, 01 Jun 2025 00:00:00 +0000</pubDate><guid>https://lelain.net/events/gretsi2025/</guid><description>&lt;p&gt;Talk presented at GRETSI 2025. 
This work evaluates the performance of existing visual foundation models (ViT, SwinV2, BEiT, DINOv2) for astronomical applications, focusing on fine-tuning strategies to specialize DINOv2 for galaxy morphological classification.&lt;/p&gt;</description></item><item><title>AstroLLaVA: towards the unification of astronomical data and natural language</title><link>https://lelain.net/publications/astrollava/</link><pubDate>Fri, 11 Apr 2025 00:00:00 +0000</pubDate><guid>https://lelain.net/publications/astrollava/</guid><description/></item><item><title>Foundation Models in Astronomy: Why They Matter</title><link>https://lelain.net/blog/foundation-models-astronomy/</link><pubDate>Mon, 10 Feb 2025 00:00:00 +0000</pubDate><guid>https://lelain.net/blog/foundation-models-astronomy/</guid><description>&lt;p&gt;Foundation models — large neural networks pre-trained on massive datasets — are starting to transform astronomical data analysis. Models like DINOv2 or CLIP, originally trained on natural images, can be fine-tuned for astronomical tasks with surprisingly good results. At
, I presented early comparisons of these models on galaxy morphological classification, and my
paper digs into how to best specialize DINOv2 for astronomy.&lt;/p&gt;
&lt;p&gt;The key challenge is the domain gap: astronomical images (multi-band, high dynamic range, specific noise) look nothing like everyday photos. Choosing the right fine-tuning strategy turns out to matter a lot — and is the core focus of my current work.&lt;/p&gt;
&lt;p&gt;On the multimodal side, I am involved in the
collaboration, where I contribute to AstroLLaVA
— a vision-language model for astronomy presented at ICLR 2025.&lt;/p&gt;</description></item><item><title>When Foundation Models Meet Astronomical Data</title><link>https://lelain.net/events/nldl2025/</link><pubDate>Tue, 14 Jan 2025 00:00:00 +0000</pubDate><guid>https://lelain.net/events/nldl2025/</guid><description>&lt;p&gt;Poster presented at the Northern Lights Deep Learning Conference 2025 in Tromsø, Norway. This work evaluates the performance of existing visual foundation models (ViT, DINOv2, CLIP) for astronomical data analysis, with a focus on galaxy morphological classification.&lt;/p&gt;</description></item><item><title>When Foundation Models Meet Astronomical Data</title><link>https://lelain.net/events/eas2024/</link><pubDate>Mon, 01 Jul 2024 00:00:00 +0000</pubDate><guid>https://lelain.net/events/eas2024/</guid><description>&lt;p&gt;ePoster presented at the European Astronomical Society Conference 2024 in Padova, Italy, in Session SS10: &lt;em&gt;The impact of the rapidly evolving field of artificial intelligence on astrophysics research: avenues and potential breakthroughs&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;This work evaluates the performance of existing visual foundation models (ViT, CLIP, DINOv2) for astronomical data analysis, with a focus on galaxy morphological classification using Galaxy Zoo datasets.&lt;/p&gt;</description></item></channel></rss>