<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>All Posts - Zhaoylee&#39;s Blogs</title>
        <link>https://zhaoylee.github.io/Blogs_lovelt/posts/</link>
        <description>All Posts | Zhaoylee&#39;s Blogs</description>
        <generator>Hugo -- gohugo.io</generator><language>zh-CN</language><lastBuildDate>Sat, 04 Apr 2026 12:31:36 &#43;0800</lastBuildDate><atom:link href="https://zhaoylee.github.io/Blogs_lovelt/posts/" rel="self" type="application/rss+xml" /><item>
    <title>MonoLSS: Learnable Sample Selection for Monocular 3D Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/monolss--learnable-sample-selection-for-monocular-3d-detection/</link>
    <pubDate>Sat, 04 Apr 2026 12:31:36 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/monolss--learnable-sample-selection-for-monocular-3d-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="https://cdn.jsdelivr.net/gh/zhaoylee/BlogImage@main/blogs/20260330230353149.png" referrerpolicy="no-referrer">
            </div>A brief summary of the post.]]></description>
</item>
<item>
    <title>Iter3DDet: Depth Guided Iterative Fusion and Refinement for Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/iter3ddet---depth-guided-iterative-fusion-and-refinement-for--monocular-3d-object-detection/</link>
    <pubDate>Sat, 04 Apr 2026 12:24:33 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/iter3ddet---depth-guided-iterative-fusion-and-refinement-for--monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="https://cdn.jsdelivr.net/gh/zhaoylee/BlogImage@main/blogs/20260404131128430.png" referrerpolicy="no-referrer">
            </div>A brief summary of the post.]]></description>
</item>
<item>
    <title>Mix-Teaching: A Simple, Unified and Effective Semi-Supervised Learning Framework for Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/mix-teaching--a-simple-unified-and-effective--semi-supervised-learning-framework-for--monocular-3d-object-detection/</link>
    <pubDate>Mon, 30 Mar 2026 23:17:02 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/mix-teaching--a-simple-unified-and-effective--semi-supervised-learning-framework-for--monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="https://cdn.jsdelivr.net/gh/zhaoylee/BlogImage@main/blogs/20260330231443214.png" referrerpolicy="no-referrer">
            </div>This post introduces Mix-Teaching, the first unified semi-supervised learning framework designed specifically for monocular 3D object detection. To counter the confirmation bias caused by the low precision and low recall of pseudo-labels, it proposes a novel decompose-and-recombine cross-frame, instance-level mixing mechanism, combined with an uncertainty-based filtering strategy, to elegantly and efficiently unlock the potential of unlabeled data.]]></description>
</item>
<item>
    <title>Adaptive Dual Uncertainty Optimization: Boosting Monocular 3D Object Detection under Test Time Shifts</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/adaptive-dual-uncertainty-optimization---boosting-monocular-3d-object-detection-under-test-time-shifts/</link>
    <pubDate>Mon, 30 Mar 2026 10:44:00 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/adaptive-dual-uncertainty-optimization---boosting-monocular-3d-object-detection-under-test-time-shifts/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="https://cdn.jsdelivr.net/gh/zhaoylee/BlogImage@main//blogs/20260330113435293.png" referrerpolicy="no-referrer">
            </div>To address the performance cliff that M3OD suffers under test-time shifts in unseen domains, this paper proposes Dual Uncertainty Optimization (DUO). Its core ideas are an unsupervised focal loss that suppresses semantic ambiguity and a semantics-aware normal-vector constraint that repairs spatial geometry collapse, substantially improving robustness in real-world deployment.]]></description>
</item>
<item>
    <title>OBMO: One Bounding Box Multiple Objects for Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/obmo-one-bounding-box-multiple-objects-for-monocular-3d-object-detection/</link>
    <pubDate>Tue, 24 Mar 2026 10:48:33 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/obmo-one-bounding-box-multiple-objects-for-monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div>In brief: one 2D box can correspond to multiple plausible 3D object positions; by applying soft labels over these candidates, the method stabilizes network training and yields a modest performance gain.]]></description>
</item>
<item>
    <title>OCM3D: Object-Centric Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/ocm3d--object-centric-monocular-3d-object-detection/</link>
    <pubDate>Mon, 16 Mar 2026 09:12:18 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/ocm3d--object-centric-monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div><hr>
<blockquote>
<p><strong>🏛️ Venue</strong>: arXiv<br>
<strong>📅 Year</strong>: 2021<br>
<strong>💻 Code</strong>: <a href="https://github.com/mrsempress/OBMO_GUPNet/blob/main/tools/offline_OBMO.py" target="_blank" rel="noopener noreferrer">OBMO_GUPNet</a><br>
<strong>📄 Paper</strong>: <a href="https://arxiv.org/pdf/2104.06041" target="_blank" rel="noopener noreferrer">OCM3D: Object-Centric Monocular 3D Object Detection</a></p>
</blockquote>
<hr>
<h3 id="1-文献背景研究目的与核心问题">1. Background, Research Goals, and Core Problem</h3>
<ul>
<li>
<p><strong>Background</strong>: Monocular 3D object detection is a highly ill-posed problem. Mainstream methods typically either rely on pure images or convert them into Pseudo-LiDAR point clouds. However, the former struggles to capture the 3D spatial geometric relationships between pixels, while the latter suffers from the heavy point-cloud noise introduced by monocular depth estimation.</p>]]></description>
</item>
<item>
    <title>LR3D: Improving Distant 3D Object Detection Using 2D Box Supervision</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/lr3d--improving-distant-3d-object-detection-using-2d-box-supervision/</link>
    <pubDate>Sun, 15 Mar 2026 22:23:00 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/lr3d--improving-distant-3d-object-detection-using-2d-box-supervision/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div><hr>
<blockquote>
<p><strong>🏛️ Venue</strong>: CVPR<br>
<strong>📅 Year</strong>: 2024<br>
<strong>💻 Code</strong>: None<br>
<strong>📄 Paper</strong>: <a href="https://openaccess.thecvf.com/content/CVPR2024/papers/Yang_Improving_Distant_3D_Object_Detection_Using_2D_Box_Supervision_CVPR_2024_paper.pdf" target="_blank" rel="noopener noreferrer">Improving Distant 3D Object Detection Using 2D Box Supervision</a></p>
</blockquote>
<hr>
<p>This CVPR 2024 paper from researchers at NVIDIA and collaborating institutions, <strong>Improving Distant 3D Object Detection Using 2D Box Supervision</strong> (LR3D), tackles a deployment problem that is especially painful for high-level autonomous driving: <strong>long-range detection</strong>. It shows how the cheapest form of annotation can be used to push monocular vision to its limits at long distances.</p>]]></description>
</item>
<item>
    <title>StreamPETR-QAF2D: Enhancing 3D Object Detection with 2D Detection-Guided Query Anchors</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/streampetr-qaf2d--enhancing-3d-object-detection-with-2d-detection-guided-query-anchors/</link>
    <pubDate>Sun, 15 Mar 2026 21:59:16 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/streampetr-qaf2d--enhancing-3d-object-detection-with-2d-detection-guided-query-anchors/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div><hr>
<blockquote>
<p><strong>🏛️ Venue</strong>: CVPR<br>
<strong>📅 Year</strong>: 2024<br>
<strong>💻 Code</strong>: <a href="https://github.com/nullmax-vision/QAF2D" target="_blank" rel="noopener noreferrer">nullmax-vision/QAF2D-CVPR 2024</a><br>
<strong>📄 Paper</strong>: <a href="https://arxiv.org/pdf/2403.06093" target="_blank" rel="noopener noreferrer">Enhancing 3D Object Detection with 2D Detection-Guided Query Anchors</a></p>
</blockquote>
<hr>
<p>This CVPR 2024 paper, <strong>Enhancing 3D Object Detection with 2D Detection-Guided Query Anchors</strong> (QAF2D), has strong engineering value. Rather than grinding away at the feature-extraction bottleneck in 3D space, it plays a clever "dimensionality-reduction" combination, leveraging mature 2D vision techniques to guide the 3D detector.</p>]]></description>
</item>
<item>
    <title>OBMO: One Bounding Box Multiple Objects for Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/obmo--one-bounding-box-multiple-objects-for-monocular-3d-object-detection/</link>
    <pubDate>Sun, 15 Mar 2026 21:59:12 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/obmo--one-bounding-box-multiple-objects-for-monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div><hr>
<blockquote>
<p><strong>🏛️ Venue</strong>: IEEE TIP<br>
<strong>📅 Year</strong>: 2023<br>
<strong>💻 Code</strong>: <a href="https://github.com/mrsempress/OBMO_patchnet" target="_blank" rel="noopener noreferrer">mrsempress/OBMO_patchnet</a><br>
<strong>📄 Paper</strong>: <a href="https://arxiv.org/pdf/2212.10049" target="_blank" rel="noopener noreferrer">OBMO: One Bounding Box Multiple Objects for Monocular 3D Object Detection</a></p>
</blockquote>
<hr>
<p>This classic IEEE TIP (2023) paper, <strong>OBMO: One Bounding Box Multiple Objects for Monocular 3D Object Detection</strong>, takes a sharp angle. Instead of reworking a complex network backbone, it targets a pain point in the underlying mathematical and physical logic of monocular 3D detection and proposes a remarkably elegant plug-and-play training strategy.</p>]]></description>
</item>
<item>
    <title>Open Vocabulary Monocular 3D Object Detection</title>
    <link>https://zhaoylee.github.io/Blogs_lovelt/posts/open-vocabulary-monocular-3d-object-detection/</link>
    <pubDate>Sun, 15 Mar 2026 21:14:37 &#43;0800</pubDate>
    <author>zhaoylee</author>
    <guid>https://zhaoylee.github.io/Blogs_lovelt/posts/open-vocabulary-monocular-3d-object-detection/</guid>
    <description><![CDATA[<div class="featured-image">
                <img src="/Blogs_lovelt/cover.jpg" referrerpolicy="no-referrer">
            </div><hr>
<blockquote>
<p><strong>🏛️ Venue</strong>: 3DV<br>
<strong>📅 Year</strong>: 2026<br>
<strong>💻 Code</strong>: <a href="https://github.com/UVA-Computer-Vision-Lab/ovmono3d" target="_blank" rel="noopener noreferrer">UVA-Computer-Vision-Lab/ovmono3d</a><br>
<strong>📄 Paper</strong>: <a href="https://arxiv.org/pdf/2411.16833" target="_blank" rel="noopener noreferrer">Open Vocabulary Monocular 3D Object Detection</a></p>
</blockquote>
<hr>
<h3 id="一-背景研究目的与核心问题">1. Background, Research Goals, and Core Problem</h3>
<ul>
<li>
<p><strong>Background:</strong> Traditional monocular 3D object detection (M3OD) models are all closed-set learners: a model can only detect the categories predefined in its training set (e.g., the cars, pedestrians, and cyclists in KITTI). But real autonomous-driving and robotics scenes contain countless long-tail objects, such as a dropped tire, an oddly shaped construction barrier, or even an animal darting into the road.</p>]]></description>
</item>
</channel>
</rss>
