<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:turbo="http://turbo.yandex.ru" version="2.0">
  <channel>
    <title>Codver.AI (RU)</title>
    <link>https://codver.ai/ru/</link>
    <description>AI-отобранные новости и инсайты: только то, что реально важно</description>
    <language>ru</language>
    <generator>Codver.AI Feed Generator</generator>
    <lastBuildDate>Tue, 14 Apr 2026 22:30:02 GMT</lastBuildDate>
    <atom:link href="https://codver.ai/feed-ru.xml" rel="self" type="application/rss+xml" />
    <image>
      <url>https://codver.ai/logo.png</url>
      <title>Codver.AI</title>
      <link>https://codver.ai</link>
    </image>
    <item turbo="true">
      <title>Broadcom создаёт чипы для Google: почему это сигнал о панике в AI</title>
      <link>https://codver.ai/ru/broadcom-создаёт-чипы-для-google-почему-это-сигнал-о-панике-в-ai.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/broadcom-создаёт-чипы-для-google-почему-это-сигнал-о-панике-в-ai.html</guid>
      <description>&lt;a href="https://www.cnbc.com/2026/04/06/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i23.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p23#a260406p23" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Jordan Novet / &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-</description>
      <pubDate>Mon, 06 Apr 2026 23:18:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.cnbc.com/2026/04/06/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i23.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p23#a260406p23" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Jordan Novet / &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.cnbc.com/2026/04/06/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html"&gt;Filing: Broadcom agrees to produce future versions of Google's TPUs and expands its Anthropic deal to give the startup access to ~3.5 GW of computing capacity&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; - Broadcom said it agreed to produce future versions of Google's artificial intelligence chips,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://codver.ai/ru/broadcom-создаёт-чипы-для-google-почему-это-сигнал-о-панике-в-ai.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;Broadcom создаёт чипы для Google: почему это сигнал о панике в AI&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 23:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.cnbc.com/2026/04/06/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i23.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p23#a260406p23" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Jordan Novet / &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.cnbc.com/2026/04/06/broadcom-agrees-to-expanded-chip-deals-with-google-anthropic.html"&gt;Filing: Broadcom agrees to produce future versions of Google's TPUs and expands its Anthropic deal to give the startup access to ~3.5 GW of computing capacity&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; - Broadcom said it agreed to produce future versions of Google's artificial intelligence chips,&lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/broadcom-создаёт-чипы-для-google-почему-это-сигнал-о-панике-в-ai.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>OpenAI против Маска: когда жертва становится охотником</title>
      <link>https://codver.ai/ru/openai-против-маска-когда-жертва-становится-охотником.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-против-маска-когда-жертва-становится-охотником.html</guid>
      <description>&lt;a href="https://www.cnbc.com/2026/04/06/openai-asks-california-ag-to-probe-musks-anti-competitive-behavior-.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i24.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p24#a260406p24" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.</description>
      <pubDate>Mon, 06 Apr 2026 23:03:18 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.cnbc.com/2026/04/06/openai-asks-california-ag-to-probe-musks-anti-competitive-behavior-.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i24.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p24#a260406p24" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.cnbc.com/2026/04/06/openai-asks-california-ag-to-probe-musks-anti-competitive-behavior-.html"&gt;OpenAI sends a letter to the California and Delaware AGs, urging them to investigate &amp;ldquo;anti-competitive behavior&amp;rdquo; by Elon Musk, ahead of a trial in April&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; OpenAI on Monday sent a letter to the California and Delaware attorneys general, urging them to investigate &amp;hellip; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-против-маска-когда-жертва-становится-охотником.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;OpenAI против Маска: когда жертва становится охотником&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 23:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.cnbc.com/2026/04/06/openai-asks-california-ag-to-probe-musks-anti-competitive-behavior-.html"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i24.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p24#a260406p24" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.cnbc.com/"&gt;CNBC&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.cnbc.com/2026/04/06/openai-asks-california-ag-to-probe-musks-anti-competitive-behavior-.html"&gt;OpenAI sends a letter to the California and Delaware AGs, urging them to investigate &amp;ldquo;anti-competitive behavior&amp;rdquo; by Elon Musk, ahead of a trial in April&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; OpenAI on Monday sent a letter to the California and Delaware attorneys general, urging them to investigate &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-против-маска-когда-жертва-становится-охотником.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>Синтетические данные для ИИ: почему «волшебная пилюля» оказалась плацебо</title>
      <link>https://codver.ai/ru/синтетические-данные-для-ии-почему-волшебная-пилюля-оказалась-плацебо.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/синтетические-данные-для-ии-почему-волшебная-пилюля-оказалась-плацебо.html</guid>
      <description>arXiv:2604.02946v1 Announce Type: cross 
Abstract: Learning methods using synthetic data have attracted attention as an effective approach for increasing the diversity of training data while reducing collection costs, thereby improving the robustness of model discrimination. However, many existing methods improve robustness only indirectly through the diversification of training samples and do not explicitly teach the model which regions in the input space truly contribute to discrimination; con</description>
      <pubDate>Mon, 06 Apr 2026 22:48:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02946v1 Announce Type: cross 
Abstract: Learning methods using synthetic data have attracted attention as an effective approach for increasing the diversity of training data while reducing collection costs, thereby improving the robustness of model discrimination. However, many existing methods improve robustness only indirectly through the diversification of training samples and do not explicitly teach the model which regions in the input space truly contribute to discrimination; consequently, the model may learn spurious correlations caused by synthesis biases and artifacts. Motivated by this limitation, this paper proposes a learning framework that uses provenance information obtained during the training data synthesis process, indicating whether each region in the input space originates from the target object, as an auxiliary supervisory signal to promote the acquisition of representations focused on target regions. Specifically, input gradients are decomposed based on information about target and non-target regions during synthesis, and input gradient guidance is introduced to suppress gradients over non-target regions. This suppresses the model's reliance on non-target regions and directly promotes the learning of discriminative representations for target regions. Experiments demonstrate the effectiveness and generality of the proposed method across multiple tasks and modalities, including weakly supervised object localization, spatio-temporal action localization, and image classification.

&lt;p&gt;&lt;a href="https://codver.ai/ru/синтетические-данные-для-ии-почему-волшебная-пилюля-оказалась-плацебо.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;Синтетические данные для ИИ: почему «волшебная пилюля» оказалась плацебо&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 22:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02946v1 Announce Type: cross 
Abstract: Learning methods using synthetic data have attracted attention as an effective approach for increasing the diversity of training data while reducing collection costs, thereby improving the robustness of model discrimination. However, many existing methods improve robustness only indirectly through the diversification of training samples and do not explicitly teach the model which regions in the input space truly contribute to discrimination; consequently, the model may learn spurious correlations caused by synthesis biases and artifacts. Motivated by this limitation, this paper proposes a learning framework that uses provenance information obtained during the training data synthesis process, indicating whether each region in the input space originates from the target object, as an auxiliary supervisory signal to promote the acquisition of representations focused on target regions. Specifically, input gradients are decomposed based on information about target and non-target regions during synthesis, and input gradient guidance is introduced to suppress gradients over non-target regions. This suppresses the model's reliance on non-target regions and directly promotes the learning of discriminative representations for target regions. Experiments demonstrate the effectiveness and generality of the proposed method across multiple tasks and modalities, including weakly supervised object localization, spatio-temporal action localization, and image classification.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/синтетические-данные-для-ии-почему-волшебная-пилюля-оказалась-плацебо.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>OpenAI, Google и Anthropic объединились против воров — но защищают не то</title>
      <link>https://codver.ai/ru/openai-google-и-anthropic-объединились-против-воров-но-защищают-не-то.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-google-и-anthropic-объединились-против-воров-но-защищают-не-то.html</guid>
      <description>&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i20.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p20#a260406p20" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /&gt;
&lt;/p&gt;</description>
      <pubDate>Mon, 06 Apr 2026 22:33:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i20.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p20#a260406p20" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china"&gt;Sources: OpenAI, Anthropic, and Google are sharing information via the Frontier Model Forum to detect adversarial distillation attempts that violate their ToS&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try to clamp down on Chinese competitors extracting results &amp;hellip; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-google-и-anthropic-объединились-против-воров-но-защищают-не-то.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;OpenAI, Google и Anthropic объединились против воров — но защищают не то&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 22:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i20.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p20#a260406p20" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china"&gt;Sources: OpenAI, Anthropic, and Google are sharing information via the Frontier Model Forum to detect adversarial distillation attempts that violate their ToS&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try to clamp down on Chinese competitors extracting results &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-google-и-anthropic-объединились-против-воров-но-защищают-не-то.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>Broadcom скупает ИИ-партнёров: почему монополия хуже китайской угрозы</title>
      <link>https://codver.ai/ru/broadcom-скупает-ии-партнёров-почему-монополия-хуже-китайской-угрозы.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/broadcom-скупает-ии-партнёров-почему-монополия-хуже-китайской-угрозы.html</guid>
      <description>Broadcom said it agreed to produce future versions of Google artificial intelligence chips, and announced an expanded deal with Anthropic.</description>
      <pubDate>Mon, 06 Apr 2026 22:18:17 GMT</pubDate>
      <category>breaking</category>
      <content:encoded>
Broadcom said it agreed to produce future versions of Google artificial intelligence chips, and announced an expanded deal with Anthropic.

&lt;p&gt;&lt;a href="https://codver.ai/ru/broadcom-скупает-ии-партнёров-почему-монополия-хуже-китайской-угрозы.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;Broadcom скупает ИИ-партнёров: почему монополия хуже китайской угрозы&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 22:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Broadcom said it agreed to produce future versions of Google artificial intelligence chips, and announced an expanded deal with Anthropic.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/broadcom-скупает-ии-партнёров-почему-монополия-хуже-китайской-угрозы.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>Firmus собрал $505M за ИИ-инфраструктуру: почему это сигнал паники</title>
      <link>https://codver.ai/ru/firmus-собрал-505m-за-ии-инфраструктуру-почему-это-сигнал-паники.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/firmus-собрал-505m-за-ии-инфраструктуру-почему-это-сигнал-паники.html</guid>
      <description>&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/nvidia-backed-data-center-builder-firmus-raises-505-million"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i19.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p19#a260406p19" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ian King / &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /</description>
      <pubDate>Mon, 06 Apr 2026 22:03:20 GMT</pubDate>
      <category>breaking</category>
      <content:encoded>
&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/nvidia-backed-data-center-builder-firmus-raises-505-million"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i19.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p19#a260406p19" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ian King / &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/nvidia-backed-data-center-builder-firmus-raises-505-million"&gt;Australian AI infrastructure startup Firmus raised $505M led by Coatue at a $5.5B valuation, bringing its funding raised in the last six months to $1.35B&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Data center builder Firmus Technologies Pty raised $505 million in an investment round led by Coatue Management LLC &amp;hellip; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://codver.ai/ru/firmus-собрал-505m-за-ии-инфраструктуру-почему-это-сигнал-паники.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;Firmus собрал $505M за ИИ-инфраструктуру: почему это сигнал паники&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 22:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/nvidia-backed-data-center-builder-firmus-raises-505-million"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i19.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p19#a260406p19" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ian King / &lt;a href="https://www.bloomberg.com/"&gt;Bloomberg&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.bloomberg.com/news/articles/2026-04-06/nvidia-backed-data-center-builder-firmus-raises-505-million"&gt;Australian AI infrastructure startup Firmus raised $505M led by Coatue at a $5.5B valuation, bringing its funding raised in the last six months to $1.35B&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Data center builder Firmus Technologies Pty raised $505 million in an investment round led by Coatue Management LLC &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/firmus-собрал-505m-за-ии-инфраструктуру-почему-это-сигнал-паники.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>Автомобили научились читать дорогу: почему это делает водителей опаснее</title>
      <link>https://codver.ai/ru/автомобили-научились-читать-дорогу-почему-это-делает-водителей-опаснее.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/автомобили-научились-читать-дорогу-почему-это-делает-водителей-опаснее.html</guid>
      <description>arXiv:2604.02396v1 Announce Type: cross 
Abstract: The deep integration of communication with intelligence and sensing, as a defining vision of 6G, renders environment-aware channel prediction a key enabling technology. As a representative 6G application, vehicular communications require accurate and forward-looking channel prediction under stringent reliability, latency, and adaptability demands. Traditional empirical and deterministic models remain limited in balancing accuracy, generalization</description>
      <pubDate>Mon, 06 Apr 2026 21:48:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02396v1 Announce Type: cross 
Abstract: The deep integration of communication with intelligence and sensing, as a defining vision of 6G, renders environment-aware channel prediction a key enabling technology. As a representative 6G application, vehicular communications require accurate and forward-looking channel prediction under stringent reliability, latency, and adaptability demands. Traditional empirical and deterministic models remain limited in balancing accuracy, generalization, and deployability, while the growing availability of onboard and roadside sensing devices offers a promising source of environmental priors. This paper proposes an environment-aware channel prediction framework based on multimodal visual feature fusion. Using GPS data and vehicle-side panoramic RGB images, together with semantic segmentation and depth estimation, the framework extracts semantic, depth, and position features through a three-branch architecture and performs adaptive multimodal fusion via a squeeze-excitation attention gating module. For 360-dimensional angular power spectrum (APS) prediction, a dedicated regression head and a composite multi-constraint loss are further designed. As a result, joint prediction of path loss (PL), delay spread (DS), azimuth spread of arrival (ASA), azimuth spread of departure (ASD), and APS is achieved. Experiments on a synchronized urban V2I measurement dataset yield the best root mean square error (RMSE) of 3.26 dB for PL, RMSEs of 37.66 ns, 5.05 degrees, and 5.08 degrees for DS, ASA, and ASD, respectively, and mean/median APS cosine similarities of 0.9342/0.9571, demonstrating strong accuracy, generalization, and practical potential for intelligent channel prediction in 6G vehicular communications.

&lt;p&gt;&lt;a href="https://codver.ai/ru/автомобили-научились-читать-дорогу-почему-это-делает-водителей-опаснее.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;Автомобили научились читать дорогу: почему это делает водителей опаснее&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 21:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02396v1 Announce Type: cross 
Abstract: The deep integration of communication with intelligence and sensing, as a defining vision of 6G, renders environment-aware channel prediction a key enabling technology. As a representative 6G application, vehicular communications require accurate and forward-looking channel prediction under stringent reliability, latency, and adaptability demands. Traditional empirical and deterministic models remain limited in balancing accuracy, generalization, and deployability, while the growing availability of onboard and roadside sensing devices offers a promising source of environmental priors. This paper proposes an environment-aware channel prediction framework based on multimodal visual feature fusion. Using GPS data and vehicle-side panoramic RGB images, together with semantic segmentation and depth estimation, the framework extracts semantic, depth, and position features through a three-branch architecture and performs adaptive multimodal fusion via a squeeze-excitation attention gating module. For 360-dimensional angular power spectrum (APS) prediction, a dedicated regression head and a composite multi-constraint loss are further designed. As a result, joint prediction of path loss (PL), delay spread (DS), azimuth spread of arrival (ASA), azimuth spread of departure (ASD), and APS is achieved. Experiments on a synchronized urban V2I measurement dataset yield the best root mean square error (RMSE) of 3.26 dB for PL, RMSEs of 37.66 ns, 5.05 degrees, and 5.08 degrees for DS, ASA, and ASD, respectively, and mean/median APS cosine similarities of 0.9342/0.9571, demonstrating strong accuracy, generalization, and practical potential for intelligent channel prediction in 6G vehicular communications.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/автомобили-научились-читать-дорогу-почему-это-делает-водителей-опаснее.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item turbo="true">
      <title>BAS: Почему уверенность ИИ стала его самой опасной слабостью</title>
      <link>https://codver.ai/ru/bas-почему-уверенность-ии-стала-его-самой-опасной-слабостью.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/bas-почему-уверенность-ии-стала-его-самой-опасной-слабостью.html</guid>
      <description>arXiv:2604.03216v1 Announce Type: new 
Abstract: Large language models (LLMs) often produce confident but incorrect answers in settings where abstention would be safer. Standard evaluation protocols, however, require a response and do not account for how confidence should guide decisions under different risk preferences. To address this gap, we introduce the Behavioral Alignment Score (BAS), a decision-theoretic metric for evaluating how well LLM confidence supports abstention-aware decision mak</description>
      <pubDate>Mon, 06 Apr 2026 21:33:17 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03216v1 Announce Type: new 
Abstract: Large language models (LLMs) often produce confident but incorrect answers in settings where abstention would be safer. Standard evaluation protocols, however, require a response and do not account for how confidence should guide decisions under different risk preferences. To address this gap, we introduce the Behavioral Alignment Score (BAS), a decision-theoretic metric for evaluating how well LLM confidence supports abstention-aware decision making. BAS is derived from an explicit answer-or-abstain utility model and aggregates realized utility across a continuum of risk thresholds, yielding a measure of decision-level reliability that depends on both the magnitude and ordering of confidence. We show theoretically that truthful confidence estimates uniquely maximize expected BAS utility, linking calibration to decision-optimal behavior. BAS is related to proper scoring rules such as log loss, but differs structurally: log loss penalizes underconfidence and overconfidence symmetrically, whereas BAS imposes an asymmetric penalty that strongly prioritizes avoiding overconfident errors. Using BAS alongside widely used metrics such as ECE and AURC, we then construct a benchmark of self-reported confidence reliability across multiple LLMs and tasks. Our results reveal substantial variation in decision-useful confidence, and while larger and more accurate models tend to achieve higher BAS, even frontier models remain prone to severe overconfidence. Importantly, models with similar ECE or AURC can exhibit very different BAS due to highly overconfident errors, highlighting limitations of standard metrics. We further show that simple interventions, such as top-$k$ confidence elicitation and post-hoc calibration, can meaningfully improve confidence reliability. Overall, our work provides both a principled metric and a comprehensive benchmark for evaluating LLM confidence reliability.

&lt;p&gt;&lt;a href="https://codver.ai/ru/bas-почему-уверенность-ии-стала-его-самой-опасной-слабостью.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
</content:encoded>
      <turbo:content>
&lt;header&gt;
  &lt;h1&gt;BAS: Почему уверенность ИИ стала его самой опасной слабостью&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 21:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03216v1 Announce Type: new 
Abstract: Large language models (LLMs) often produce confident but incorrect answers in settings where abstention would be safer. Standard evaluation protocols, however, require a response and do not account for how confidence should guide decisions under different risk preferences. To address this gap, we introduce the Behavioral Alignment Score (BAS), a decision-theoretic metric for evaluating how well LLM confidence supports abstention-aware decision making. BAS is derived from an explicit answer-or-abstain utility model and aggregates realized utility across a continuum of risk thresholds, yielding a measure of decision-level reliability that depends on both the magnitude and ordering of confidence. We show theoretically that truthful confidence estimates uniquely maximize expected BAS utility, linking calibration to decision-optimal behavior. BAS is related to proper scoring rules such as log loss, but differs structurally: log loss penalizes underconfidence and overconfidence symmetrically, whereas BAS imposes an asymmetric penalty that strongly prioritizes avoiding overconfident errors. Using BAS alongside widely used metrics such as ECE and AURC, we then construct a benchmark of self-reported confidence reliability across multiple LLMs and tasks. Our results reveal substantial variation in decision-useful confidence, and while larger and more accurate models tend to achieve higher BAS, even frontier models remain prone to severe overconfidence. Importantly, models with similar ECE or AURC can exhibit very different BAS due to highly overconfident errors, highlighting limitations of standard metrics. We further show that simple interventions, such as top-$k$ confidence elicitation and post-hoc calibration, can meaningfully improve confidence reliability. Overall, our work provides both a principled metric and a comprehensive benchmark for evaluating LLM confidence reliability.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/bas-почему-уверенность-ии-стала-его-самой-опасной-слабостью.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Медицинский ИИ легко обмануть: хакеры атакуют через распределения данных</title>
      <link>https://codver.ai/ru/медицинский-ии-легко-обмануть-хакеры-атакуют-через-распределения-данных.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/медицинский-ии-легко-обмануть-хакеры-атакуют-через-распределения-данных.html</guid>
      <description>arXiv:2603.18545v2 Announce Type: replace-cross 
Abstract: Medical vision--language models (MVLMs) are increasingly used as perceptual backbones in radiology pipelines and as the visual front end of multimodal assistants, yet their reliability under real clinical workflows remains underexplored. Prior robustness evaluations often assume clean, curated inputs or study isolated corruptions, overlooking routine acquisition, reconstruction, display, and delivery operations that preserve clinical rea</description>
      <pubDate>Mon, 06 Apr 2026 21:18:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2603.18545v2 Announce Type: replace-cross 
Abstract: Medical vision--language models (MVLMs) are increasingly used as perceptual backbones in radiology pipelines and as the visual front end of multimodal assistants, yet their reliability under real clinical workflows remains underexplored. Prior robustness evaluations often assume clean, curated inputs or study isolated corruptions, overlooking routine acquisition, reconstruction, display, and delivery operations that preserve clinical readability while shifting image statistics. To address this gap, we propose CoDA, a chain-of-distribution framework that constructs clinically plausible pipeline shifts by composing acquisition-like shading, reconstruction and display remapping, and delivery and export degradations. Under masked structural-similarity constraints, CoDA jointly optimizes stage compositions and parameters to induce failures while preserving visual plausibility. Across brain MRI, chest X-ray, and abdominal CT, CoDA substantially degrades the zero-shot performance of CLIP-style MVLMs, with chained compositions consistently more damaging than any single stage. We also evaluate multimodal large language models (MLLMs) as technical-authenticity auditors of imaging realism and quality rather than pathology. Proprietary multimodal models show degraded auditing reliability and persistent high-confidence errors on CoDA-shifted samples, while the medical-specific MLLMs we test exhibit clear deficiencies in medical image quality auditing. Finally, we introduce a post-hoc repair strategy based on teacher-guided token-space adaptation with patch-level alignment, which improves accuracy on archived CoDA outputs. Overall, our findings characterize a clinically grounded threat surface for MVLM deployment and show that lightweight alignment improves robustness in deployment.

https://codver.ai/ru/медицинский-ии-легко-обмануть-хакеры-атакуют-через-распределения-данных.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Медицинский ИИ легко обмануть: хакеры атакуют через распределения данных&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 21:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2603.18545v2 Announce Type: replace-cross 
Abstract: Medical vision--language models (MVLMs) are increasingly used as perceptual backbones in radiology pipelines and as the visual front end of multimodal assistants, yet their reliability under real clinical workflows remains underexplored. Prior robustness evaluations often assume clean, curated inputs or study isolated corruptions, overlooking routine acquisition, reconstruction, display, and delivery operations that preserve clinical readability while shifting image statistics. To address this gap, we propose CoDA, a chain-of-distribution framework that constructs clinically plausible pipeline shifts by composing acquisition-like shading, reconstruction and display remapping, and delivery and export degradations. Under masked structural-similarity constraints, CoDA jointly optimizes stage compositions and parameters to induce failures while preserving visual plausibility. Across brain MRI, chest X-ray, and abdominal CT, CoDA substantially degrades the zero-shot performance of CLIP-style MVLMs, with chained compositions consistently more damaging than any single stage. We also evaluate multimodal large language models (MLLMs) as technical-authenticity auditors of imaging realism and quality rather than pathology. Proprietary multimodal models show degraded auditing reliability and persistent high-confidence errors on CoDA-shifted samples, while the medical-specific MLLMs we test exhibit clear deficiencies in medical image quality auditing. Finally, we introduce a post-hoc repair strategy based on teacher-guided token-space adaptation with patch-level alignment, which improves accuracy on archived CoDA outputs. Overall, our findings characterize a clinically grounded threat surface for MVLM deployment and show that lightweight alignment improves robustness in deployment.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/медицинский-ии-легко-обмануть-хакеры-атакуют-через-распределения-данных.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Пока все ждут AGI, Арканзас учит ИИ различать кетчуп</title>
      <link>https://codver.ai/ru/пока-все-ждут-agi-арканзас-учит-ии-различать-кетчуп.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/пока-все-ждут-agi-арканзас-учит-ии-различать-кетчуп.html</guid>
      <description>&lt;a href="https://www.wsj.com/business/retail/one-companys-effort-to-make-an-ai-ready-catalog-of-everything-we-buy-c33ee2c0?st=aWTHpb&amp;amp;reflink=desktopwebshare_permalink"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i18.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p18#a260406p18" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Sarah Nassauer /</description>
      <pubDate>Mon, 06 Apr 2026 21:03:22 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.wsj.com/business/retail/one-companys-effort-to-make-an-ai-ready-catalog-of-everything-we-buy-c33ee2c0?st=aWTHpb&amp;amp;reflink=desktopwebshare_permalink"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i18.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p18#a260406p18" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Sarah Nassauer / &lt;a href="https://www.wsj.com/"&gt;Wall Street Journal&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wsj.com/business/retail/one-companys-effort-to-make-an-ai-ready-catalog-of-everything-we-buy-c33ee2c0?st=aWTHpb&amp;amp;reflink=desktopwebshare_permalink"&gt;A look at Eko, whose Arkansas &amp;ldquo;capture factory&amp;rdquo; creates digital product catalogs intended to serve as training data for retail-focused AI models&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; In an Arkansas &amp;lsquo;capture factory,&amp;rsquo; hand models and food stylists are preparing for the future of shopping&lt;/p&gt;

https://codver.ai/ru/пока-все-ждут-agi-арканзас-учит-ии-различать-кетчуп.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Пока все ждут AGI, Арканзас учит ИИ различать кетчуп&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 21:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.wsj.com/business/retail/one-companys-effort-to-make-an-ai-ready-catalog-of-everything-we-buy-c33ee2c0?st=aWTHpb&amp;amp;reflink=desktopwebshare_permalink"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i18.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p18#a260406p18" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Sarah Nassauer / &lt;a href="https://www.wsj.com/"&gt;Wall Street Journal&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wsj.com/business/retail/one-companys-effort-to-make-an-ai-ready-catalog-of-everything-we-buy-c33ee2c0?st=aWTHpb&amp;amp;reflink=desktopwebshare_permalink"&gt;A look at Eko, whose Arkansas &amp;ldquo;capture factory&amp;rdquo; creates digital product catalogs intended to serve as training data for retail-focused AI models&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; In an Arkansas &amp;lsquo;capture factory,&amp;rsquo; hand models and food stylists are preparing for the future of shopping&lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/пока-все-ждут-agi-арканзас-учит-ии-различать-кетчуп.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Unified Thinker от Google: почему «мышление» в ИИ — это маркетинговый трюк</title>
      <link>https://codver.ai/ru/unified-thinker-от-google-почему-мышление-в-ии-это-маркетинговый-трюк.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/unified-thinker-от-google-почему-мышление-в-ии-это-маркетинговый-трюк.html</guid>
      <description>arXiv:2601.03127v2 Announce Type: replace-cross 
Abstract: Despite impressive progress in high-fidelity image synthesis, generative models still struggle with logic-intensive instruction following, exposing a persistent reasoning--execution gap. Meanwhile, closed-source systems (e.g., Nano Banana) have demonstrated strong reasoning-driven image generation, highlighting a substantial gap to current open-source models. We argue that closing this gap requires not merely better visual generators, bu</description>
      <pubDate>Mon, 06 Apr 2026 20:48:22 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2601.03127v2 Announce Type: replace-cross 
Abstract: Despite impressive progress in high-fidelity image synthesis, generative models still struggle with logic-intensive instruction following, exposing a persistent reasoning--execution gap. Meanwhile, closed-source systems (e.g., Nano Banana) have demonstrated strong reasoning-driven image generation, highlighting a substantial gap to current open-source models. We argue that closing this gap requires not merely better visual generators, but executable reasoning: decomposing high-level intents into grounded, verifiable plans that directly steer the generative process. To this end, we propose Unified Thinker, a task-agnostic reasoning architecture for general image generation, designed as a unified planning core that can plug into diverse generators and workflows. Unified Thinker decouples a dedicated Thinker from the image Generator, enabling modular upgrades of reasoning without retraining the entire generative model. We further introduce a two-stage training paradigm: we first build a structured planning interface for the Thinker, then apply reinforcement learning to ground its policy in pixel-level feedback, encouraging plans that optimize visual correctness over textual plausibility. Extensive experiments on text-to-image generation and image editing show that Unified Thinker substantially improves image reasoning and generation quality.

https://codver.ai/ru/unified-thinker-от-google-почему-мышление-в-ии-это-маркетинговый-трюк.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Unified Thinker от Google: почему «мышление» в ИИ — это маркетинговый трюк&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 20:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2601.03127v2 Announce Type: replace-cross 
Abstract: Despite impressive progress in high-fidelity image synthesis, generative models still struggle with logic-intensive instruction following, exposing a persistent reasoning--execution gap. Meanwhile, closed-source systems (e.g., Nano Banana) have demonstrated strong reasoning-driven image generation, highlighting a substantial gap to current open-source models. We argue that closing this gap requires not merely better visual generators, but executable reasoning: decomposing high-level intents into grounded, verifiable plans that directly steer the generative process. To this end, we propose Unified Thinker, a task-agnostic reasoning architecture for general image generation, designed as a unified planning core that can plug into diverse generators and workflows. Unified Thinker decouples a dedicated Thinker from the image Generator, enabling modular upgrades of reasoning without retraining the entire generative model. We further introduce a two-stage training paradigm: we first build a structured planning interface for the Thinker, then apply reinforcement learning to ground its policy in pixel-level feedback, encouraging plans that optimize visual correctness over textual plausibility. Extensive experiments on text-to-image generation and image editing show that Unified Thinker substantially improves image reasoning and generation quality.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/unified-thinker-от-google-почему-мышление-в-ии-это-маркетинговый-трюк.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ диагностирует рак груди лучше врачей — но это худшая новость для медицины</title>
      <link>https://codver.ai/ru/ии-диагностирует-рак-груди-лучше-врачей-но-это-худшая-новость-для-медицины.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-диагностирует-рак-груди-лучше-врачей-но-это-худшая-новость-для-медицины.html</guid>
      <description>arXiv:2512.00129v2 Announce Type: replace-cross 
Abstract: Deep learning models for breast cancer detection from mammographic images have significant reliability problems when presented with Out-of-Domain (OOD) inputs such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. The current research mitigates the fundamental OOD issue through a comprehensive approach integrating ResNet50-based OOD filtering with YOLO architectures (Y</description>
      <pubDate>Mon, 06 Apr 2026 20:33:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2512.00129v2 Announce Type: replace-cross 
Abstract: Deep learning models for breast cancer detection from mammographic images have significant reliability problems when presented with Out-of-Domain (OOD) inputs such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. The current research mitigates the fundamental OOD issue through a comprehensive approach integrating ResNet50-based OOD filtering with YOLO architectures (YOLOv8, YOLOv11, YOLOv12) for accurate detection of breast cancer. Our strategy establishes an in-domain gallery via cosine similarity to rigidly reject non-mammographic inputs prior to processing, ensuring that only domain-associated images supply the detection pipeline. The OOD detection component achieves 99.77% general accuracy with immaculate 100% accuracy on OOD test sets, effectively eliminating irrelevant imaging modalities. ResNet50 was selected as the optimum backbone after 12 CNN architecture searches. The joint framework unites OOD robustness with high detection performance (mAP@0.5: 0.947) and enhanced interpretability through Grad-CAM visualizations. Experimental validation establishes that OOD filtering significantly improves system reliability by preventing false alarms on out-of-distribution inputs while maintaining higher detection accuracy on mammographic data. The present study offers a fundamental foundation for the deployment of reliable AI-based breast cancer detection systems in diverse clinical environments with inherent data heterogeneity.

https://codver.ai/ru/ии-диагностирует-рак-груди-лучше-врачей-но-это-худшая-новость-для-медицины.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ диагностирует рак груди лучше врачей — но это худшая новость для медицины&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 20:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2512.00129v2 Announce Type: replace-cross 
Abstract: Deep learning models for breast cancer detection from mammographic images have significant reliability problems when presented with Out-of-Domain (OOD) inputs such as other imaging modalities (CT, MRI, X-ray) or equipment variations, leading to unreliable detection and misdiagnosis. The current research mitigates the fundamental OOD issue through a comprehensive approach integrating ResNet50-based OOD filtering with YOLO architectures (YOLOv8, YOLOv11, YOLOv12) for accurate detection of breast cancer. Our strategy establishes an in-domain gallery via cosine similarity to rigidly reject non-mammographic inputs prior to processing, ensuring that only domain-associated images supply the detection pipeline. The OOD detection component achieves 99.77% general accuracy with immaculate 100% accuracy on OOD test sets, effectively eliminating irrelevant imaging modalities. ResNet50 was selected as the optimum backbone after 12 CNN architecture searches. The joint framework unites OOD robustness with high detection performance (mAP@0.5: 0.947) and enhanced interpretability through Grad-CAM visualizations. Experimental validation establishes that OOD filtering significantly improves system reliability by preventing false alarms on out-of-distribution inputs while maintaining higher detection accuracy on mammographic data. The present study offers a fundamental foundation for the deployment of reliable AI-based breast cancer detection systems in diverse clinical environments with inherent data heterogeneity.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-диагностирует-рак-груди-лучше-врачей-но-это-худшая-новость-для-медицины.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Zero-shot Concept Bottleneck Models: ИИ наконец научился объяснять себя или это иллюзия понимания?</title>
      <link>https://codver.ai/ru/zero-shot-concept-bottleneck-models-ии-наконец-научился-объяснять-себя-или-это-иллюзия-понимания.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/zero-shot-concept-bottleneck-models-ии-наконец-научился-объяснять-себя-или-это-иллюзия-понимания.html</guid>
      <description>arXiv:2502.09018v2 Announce Type: replace-cross 
Abstract: Concept bottleneck models (CBMs) are inherently interpretable and intervenable neural network models, which explain their final label prediction by the intermediate prediction of high-level semantic concepts. However, they require target task training to learn input-to-concept and concept-to-label mappings, incurring target dataset collections and training resources. In this paper, we present zero-shot concept bottleneck models (Z-CBMs),</description>
      <pubDate>Mon, 06 Apr 2026 20:18:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2502.09018v2 Announce Type: replace-cross 
Abstract: Concept bottleneck models (CBMs) are inherently interpretable and intervenable neural network models, which explain their final label prediction by the intermediate prediction of high-level semantic concepts. However, they require target task training to learn input-to-concept and concept-to-label mappings, incurring target dataset collections and training resources. In this paper, we present zero-shot concept bottleneck models (Z-CBMs), which predict concepts and labels in a fully zero-shot manner without training neural networks. Z-CBMs utilize a large-scale concept bank, which is composed of millions of vocabulary extracted from the web, to describe arbitrary input in various domains. For the input-to-concept mapping, we introduce concept retrieval, which dynamically finds input-related concepts by the cross-modal search on the concept bank. In the concept-to-label inference, we apply concept regression to select essential concepts from the retrieved concepts by sparse linear regression. Through extensive experiments, we confirm that our Z-CBMs provide interpretable and intervenable concepts without any additional training. Code will be available at https://github.com/yshinya6/zcbm.

https://codver.ai/ru/zero-shot-concept-bottleneck-models-ии-наконец-научился-объяснять-себя-или-это-иллюзия-понимания.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Zero-shot Concept Bottleneck Models: ИИ наконец научился объяснять себя или это иллюзия понимания?&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 20:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2502.09018v2 Announce Type: replace-cross 
Abstract: Concept bottleneck models (CBMs) are inherently interpretable and intervenable neural network models, which explain their final label prediction by the intermediate prediction of high-level semantic concepts. However, they require target task training to learn input-to-concept and concept-to-label mappings, incurring target dataset collections and training resources. In this paper, we present zero-shot concept bottleneck models (Z-CBMs), which predict concepts and labels in a fully zero-shot manner without training neural networks. Z-CBMs utilize a large-scale concept bank, which is composed of millions of vocabulary extracted from the web, to describe arbitrary input in various domains. For the input-to-concept mapping, we introduce concept retrieval, which dynamically finds input-related concepts by the cross-modal search on the concept bank. In the concept-to-label inference, we apply concept regression to select essential concepts from the retrieved concepts by sparse linear regression. Through extensive experiments, we confirm that our Z-CBMs provide interpretable and intervenable concepts without any additional training. Code will be available at https://github.com/yshinya6/zcbm.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/zero-shot-concept-bottleneck-models-ии-наконец-научился-объяснять-себя-или-это-иллюзия-понимания.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Пока ИИ требует миллиарды данных, студенты решили задачу тремя учителями</title>
      <link>https://codver.ai/ru/пока-ии-требует-миллиарды-данных-студенты-решили-задачу-тремя-учителями.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/пока-ии-требует-миллиарды-данных-студенты-решили-задачу-тремя-учителями.html</guid>
      <description>arXiv:2604.03192v1 Announce Type: cross 
Abstract: We study multiteacher knowledge distillation for low resource abstractive summarization from a reliability aware perspective. We introduce EWAD (Entropy Weighted Agreement Aware Distillation), a token level mechanism that routes supervision between teacher distillation and gold supervision based on inter teacher agreement, and CPDP (Capacity Proportional Divergence Preservation), a geometric constraint on the student position relative to heterog</description>
      <pubDate>Mon, 06 Apr 2026 20:03:22 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03192v1 Announce Type: cross 
Abstract: We study multiteacher knowledge distillation for low resource abstractive summarization from a reliability aware perspective. We introduce EWAD (Entropy Weighted Agreement Aware Distillation), a token level mechanism that routes supervision between teacher distillation and gold supervision based on inter teacher agreement, and CPDP (Capacity Proportional Divergence Preservation), a geometric constraint on the student position relative to heterogeneous teachers. Across two Bangla datasets, 13 BanglaT5 ablations, and eight Qwen2.5 experiments, we find that logit level KD provides the most reliable gains, while more complex distillation improves semantic similarity for short summaries but degrades longer outputs. Cross lingual pseudo label KD across ten languages retains 71-122 percent of teacher ROUGE L at 3.2x compression. A human validated multi judge LLM evaluation further reveals calibration bias in single judge pipelines. Overall, our results show that reliability aware distillation helps characterize when multi teacher supervision improves summarization and when data scaling outweighs loss engineering.

https://codver.ai/ru/пока-ии-требует-миллиарды-данных-студенты-решили-задачу-тремя-учителями.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Пока ИИ требует миллиарды данных, студенты решили задачу тремя учителями&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 20:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03192v1 Announce Type: cross 
Abstract: We study multiteacher knowledge distillation for low resource abstractive summarization from a reliability aware perspective. We introduce EWAD (Entropy Weighted Agreement Aware Distillation), a token level mechanism that routes supervision between teacher distillation and gold supervision based on inter teacher agreement, and CPDP (Capacity Proportional Divergence Preservation), a geometric constraint on the student position relative to heterogeneous teachers. Across two Bangla datasets, 13 BanglaT5 ablations, and eight Qwen2.5 experiments, we find that logit level KD provides the most reliable gains, while more complex distillation improves semantic similarity for short summaries but degrades longer outputs. Cross lingual pseudo label KD across ten languages retains 71-122 percent of teacher ROUGE L at 3.2x compression. A human validated multi judge LLM evaluation further reveals calibration bias in single judge pipelines. Overall, our results show that reliability aware distillation helps characterize when multi teacher supervision improves summarization and when data scaling outweighs loss engineering.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/пока-ии-требует-миллиарды-данных-студенты-решили-задачу-тремя-учителями.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ учат видеть пожары, но настоящая проблема — в наших глазах</title>
      <link>https://codver.ai/ru/ии-учат-видеть-пожары-но-настоящая-проблема-в-наших-глазах.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-учат-видеть-пожары-но-настоящая-проблема-в-наших-глазах.html</guid>
      <description>arXiv:2604.02479v1 Announce Type: cross 
Abstract: The scarcity of labeled satellite imagery remains a fundamental bottleneck for deep-learning (DL)-based wildfire monitoring systems. This paper investigates whether a diffusion-based foundation model for Earth Observation (EO), EarthSynth, can synthesize realistic post-wildfire Sentinel-2 RGB imagery conditioned on existing burn masks, without task-specific retraining.
  Using burn masks derived from the CalFireSeg-50 dataset (Martin et al., 202</description>
      <pubDate>Mon, 06 Apr 2026 19:48:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02479v1 Announce Type: cross 
Abstract: The scarcity of labeled satellite imagery remains a fundamental bottleneck for deep-learning (DL)-based wildfire monitoring systems. This paper investigates whether a diffusion-based foundation model for Earth Observation (EO), EarthSynth, can synthesize realistic post-wildfire Sentinel-2 RGB imagery conditioned on existing burn masks, without task-specific retraining.
  Using burn masks derived from the CalFireSeg-50 dataset (Martin et al., 2025), we design and evaluate six controlled experimental configurations that systematically vary: (i) pipeline architecture (mask-only full generation vs. inpainting with pre-fire context), (ii) prompt engineering strategy (three hand-crafted prompts and a VLM-generated prompt via Qwen2-VL), and (iii) a region-wise color-matching post-processing step.
  Quantitative assessment on 10 stratified test samples uses four complementary metrics: Burn IoU, burn-region color distance (ΔC_burn), Darkness Contrast, and Spectral Plausibility. Results show that inpainting-based pipelines consistently outperform full-tile generation across all metrics, with the structured inpainting prompt achieving the best spatial alignment (Burn IoU = 0.456) and burn saliency (Darkness Contrast = 20.44), while color matching produces the lowest color distance (ΔC_burn = 63.22) at the cost of reduced burn saliency.
  VLM-assisted inpainting is competitive with hand-crafted prompts. These findings provide a foundation for incorporating generative data augmentation into wildfire detection pipelines.
  Code and experiments are available at: https://www.kaggle.com/code/valeriamartinh/genai-all-runned

https://codver.ai/ru/ии-учат-видеть-пожары-но-настоящая-проблема-в-наших-глазах.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ учат видеть пожары, но настоящая проблема — в наших глазах&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 19:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02479v1 Announce Type: cross 
Abstract: The scarcity of labeled satellite imagery remains a fundamental bottleneck for deep-learning (DL)-based wildfire monitoring systems. This paper investigates whether a diffusion-based foundation model for Earth Observation (EO), EarthSynth, can synthesize realistic post-wildfire Sentinel-2 RGB imagery conditioned on existing burn masks, without task-specific retraining.
  Using burn masks derived from the CalFireSeg-50 dataset (Martin et al., 2025), we design and evaluate six controlled experimental configurations that systematically vary: (i) pipeline architecture (mask-only full generation vs. inpainting with pre-fire context), (ii) prompt engineering strategy (three hand-crafted prompts and a VLM-generated prompt via Qwen2-VL), and (iii) a region-wise color-matching post-processing step.
  Quantitative assessment on 10 stratified test samples uses four complementary metrics: Burn IoU, burn-region color distance (ΔC_burn), Darkness Contrast, and Spectral Plausibility. Results show that inpainting-based pipelines consistently outperform full-tile generation across all metrics, with the structured inpainting prompt achieving the best spatial alignment (Burn IoU = 0.456) and burn saliency (Darkness Contrast = 20.44), while color matching produces the lowest color distance (ΔC_burn = 63.22) at the cost of reduced burn saliency.
  VLM-assisted inpainting is competitive with hand-crafted prompts. These findings provide a foundation for incorporating generative data augmentation into wildfire detection pipelines.
  Code and experiments are available at: https://www.kaggle.com/code/valeriamartinh/genai-all-runned&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-учат-видеть-пожары-но-настоящая-проблема-в-наших-глазах.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>6G и ИИ: Почему телеком-гиганты готовят технологическую капитуляцию</title>
      <link>https://codver.ai/ru/6g-и-ии-почему-телеком-гиганты-готовят-технологическую-капитуляцию.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/6g-и-ии-почему-телеком-гиганты-готовят-технологическую-капитуляцию.html</guid>
      <description>arXiv:2604.02370v1 Announce Type: cross 
Abstract: As wireless communication evolves, each generation of networks brings new technologies that change how we connect and interact. Artificial Intelligence (AI) is becoming crucial in shaping the future of sixth-generation (6G) networks. By combining AI and Machine Learning (ML), 6G aims to offer high data rates, low latency, and extensive connectivity for applications including smart cities, autonomous systems, holographic telepresence, and the tac</description>
      <pubDate>Mon, 06 Apr 2026 19:33:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02370v1 Announce Type: cross 
Abstract: As wireless communication evolves, each generation of networks brings new technologies that change how we connect and interact. Artificial Intelligence (AI) is becoming crucial in shaping the future of sixth-generation (6G) networks. By combining AI and Machine Learning (ML), 6G aims to offer high data rates, low latency, and extensive connectivity for applications including smart cities, autonomous systems, holographic telepresence, and the tactile internet. This paper provides a detailed overview of the role of AI in supporting 6G networks. It focuses on key technologies like deep learning, reinforcement learning, federated learning, and explainable AI. It also looks at how AI integrates with essential network functions and discusses challenges related to scalability, security, and energy efficiency, along with new solutions. Additionally, this work highlights perspectives that connect AI-driven analytics to 6G service domains like Ultra-Reliable Low-Latency Communication (URLLC), Enhanced Mobile Broadband (eMBB), Massive Machine-Type Communication (mMTC), and Integrated Sensing and Communication (ISAC). It addresses concerns about standardization, ethics, and sustainability. By summarizing recent research trends and identifying future directions, this survey offers a valuable reference for researchers and practitioners at the intersection of AI and next-generation wireless communication.

https://codver.ai/ru/6g-и-ии-почему-телеком-гиганты-готовят-технологическую-капитуляцию.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;6G и ИИ: Почему телеком-гиганты готовят технологическую капитуляцию&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 19:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02370v1 Announce Type: cross 
Abstract: As wireless communication evolves, each generation of networks brings new technologies that change how we connect and interact. Artificial Intelligence (AI) is becoming crucial in shaping the future of sixth-generation (6G) networks. By combining AI and Machine Learning (ML), 6G aims to offer high data rates, low latency, and extensive connectivity for applications including smart cities, autonomous systems, holographic telepresence, and the tactile internet. This paper provides a detailed overview of the role of AI in supporting 6G networks. It focuses on key technologies like deep learning, reinforcement learning, federated learning, and explainable AI. It also looks at how AI integrates with essential network functions and discusses challenges related to scalability, security, and energy efficiency, along with new solutions. Additionally, this work highlights perspectives that connect AI-driven analytics to 6G service domains like Ultra-Reliable Low-Latency Communication (URLLC), Enhanced Mobile Broadband (eMBB), Massive Machine-Type Communication (mMTC), and Integrated Sensing and Communication (ISAC). It addresses concerns about standardization, ethics, and sustainability. By summarizing recent research trends and identifying future directions, this survey offers a valuable reference for researchers and practitioners at the intersection of AI and next-generation wireless communication.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/6g-и-ии-почему-телеком-гиганты-готовят-технологическую-капитуляцию.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>OpenAI Safety Fellowship: почему программа безопасности ИИ превращается в PR-кампанию</title>
      <link>https://codver.ai/ru/openai-safety-fellowship-почему-программа-безопасности-ии-превращается-в-pr-кампанию.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-safety-fellowship-почему-программа-безопасности-ии-превращается-в-pr-кампанию.html</guid>
      <description>A pilot program to support independent safety and alignment research and develop the next generation of talent</description>
      <pubDate>Mon, 06 Apr 2026 19:18:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
A pilot program to support independent safety and alignment research and develop the next generation of talent

https://codver.ai/ru/openai-safety-fellowship-почему-программа-безопасности-ии-превращается-в-pr-кампанию.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;OpenAI Safety Fellowship: почему программа безопасности ИИ превращается в PR-кампанию&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 19:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;A pilot program to support independent safety and alignment research and develop the next generation of talent&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-safety-fellowship-почему-программа-безопасности-ии-превращается-в-pr-кампанию.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Google тайно выпустил офлайн-диктовку для iOS: признание поражения в облачной войне</title>
      <link>https://codver.ai/ru/google-тайно-выпустил-офлайн-диктовку-для-ios-признание-поражения-в-облачной-войне.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/google-тайно-выпустил-офлайн-диктовку-для-ios-признание-поражения-в-облачной-войне.html</guid>
      <description>Google's new offline-first dictation app uses Gemma AI models to take on apps like Wispr Flow.</description>
      <pubDate>Mon, 06 Apr 2026 19:03:22 GMT</pubDate>
      <category>breaking</category>
      <content:encoded>
Google's new offline-first dictation app uses Gemma AI models to take on apps like Wispr Flow.

https://codver.ai/ru/google-тайно-выпустил-офлайн-диктовку-для-ios-признание-поражения-в-облачной-войне.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Google тайно выпустил офлайн-диктовку для iOS: признание поражения в облачной войне&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 19:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Google's new offline-first dictation app uses Gemma AI models to take on apps like Wispr Flow.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/google-тайно-выпустил-офлайн-диктовку-для-ios-признание-поражения-в-облачной-войне.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Netflix запустил игровую платформу для малышей — и это капитуляция</title>
      <link>https://codver.ai/ru/netflix-запустил-игровую-платформу-для-малышей-и-это-капитуляция.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/netflix-запустил-игровую-платформу-для-малышей-и-это-капитуляция.html</guid>
      <description>&lt;a href="https://www.theverge.com/entertainment/907293/netflix-playground-kids-games-app"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i14.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p14#a260406p14" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Andrew Webster / &lt;a href="https://www.theverge.com/"&gt;The Verge&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: </description>
      <pubDate>Mon, 06 Apr 2026 18:48:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.theverge.com/entertainment/907293/netflix-playground-kids-games-app"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i14.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p14#a260406p14" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Andrew Webster / &lt;a href="https://www.theverge.com/"&gt;The Verge&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.theverge.com/entertainment/907293/netflix-playground-kids-games-app"&gt;Netflix launches Netflix Playground, a games app for kids aged eight and under, in the US, UK, Canada, Australia, the Philippines, and New Zealand&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; It's called Netflix Playground, and it's out now. &amp;hellip; Netflix has made family-friendly titles a key part of its current games strategy &amp;hellip; &lt;/p&gt;

https://codver.ai/ru/netflix-запустил-игровую-платформу-для-малышей-и-это-капитуляция.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Netflix запустил игровую платформу для малышей — и это капитуляция&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 18:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.theverge.com/entertainment/907293/netflix-playground-kids-games-app"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i14.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p14#a260406p14" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Andrew Webster / &lt;a href="https://www.theverge.com/"&gt;The Verge&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.theverge.com/entertainment/907293/netflix-playground-kids-games-app"&gt;Netflix launches Netflix Playground, a games app for kids aged eight and under, in the US, UK, Canada, Australia, the Philippines, and New Zealand&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; It's called Netflix Playground, and it's out now. &amp;hellip; Netflix has made family-friendly titles a key part of its current games strategy &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/netflix-запустил-игровую-платформу-для-малышей-и-это-капитуляция.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Нью-Джерси проиграл войну со ставками — и это меняет всё</title>
      <link>https://codver.ai/ru/нью-джерси-проиграл-войну-со-ставками-и-это-меняет-всё.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/нью-джерси-проиграл-войну-со-ставками-и-это-меняет-всё.html</guid>
      <description>&lt;a href="https://www.reuters.com/world/new-jersey-cannot-regulate-kalshis-prediction-market-us-appeals-court-rules-2026-04-06/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i16.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p16#a260406p16" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Nate Raymond / &lt;a href="http://www.reuters.com/"&gt;Reuters&lt;/a&gt;</description>
      <pubDate>Mon, 06 Apr 2026 18:33:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.reuters.com/world/new-jersey-cannot-regulate-kalshis-prediction-market-us-appeals-court-rules-2026-04-06/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i16.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p16#a260406p16" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Nate Raymond / &lt;a href="http://www.reuters.com/"&gt;Reuters&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.reuters.com/world/new-jersey-cannot-regulate-kalshis-prediction-market-us-appeals-court-rules-2026-04-06/"&gt;A federal appeals court rules New Jersey cannot block Kalshi users in the state from sports-related event contracts, finding CFTC has exclusive jurisdiction&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; A federal appeals court ruled on Monday that New Jersey gaming regulators cannot prevent Kalshi from allowing people in the state &amp;hellip; &lt;/p&gt;

https://codver.ai/ru/нью-джерси-проиграл-войну-со-ставками-и-это-меняет-всё.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Нью-Джерси проиграл войну со ставками — и это меняет всё&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 18:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.reuters.com/world/new-jersey-cannot-regulate-kalshis-prediction-market-us-appeals-court-rules-2026-04-06/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i16.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p16#a260406p16" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Nate Raymond / &lt;a href="http://www.reuters.com/"&gt;Reuters&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.reuters.com/world/new-jersey-cannot-regulate-kalshis-prediction-market-us-appeals-court-rules-2026-04-06/"&gt;A federal appeals court rules New Jersey cannot block Kalshi users in the state from sports-related event contracts, finding CFTC has exclusive jurisdiction&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; A federal appeals court ruled on Monday that New Jersey gaming regulators cannot prevent Kalshi from allowing people in the state &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/нью-джерси-проиграл-войну-со-ставками-и-это-меняет-всё.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Meta под руководством Ванга делает ставку на open source — признание поражения в гонке?</title>
      <link>https://codver.ai/ru/meta-под-руководством-ванга-делает-ставку-на-open-source-признание-поражения-в-гонке.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/meta-под-руководством-ванга-делает-ставку-на-open-source-признание-поражения-в-гонке.html</guid>
      <description>&lt;a href="https://www.axios.com/2026/04/06/meta-open-source-ai-models"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i15.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p15#a260406p15" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ina Fried / &lt;a href="https://www.axios.com/"&gt;Axios&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.</description>
      <pubDate>Mon, 06 Apr 2026 18:18:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.axios.com/2026/04/06/meta-open-source-ai-models"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i15.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p15#a260406p15" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ina Fried / &lt;a href="https://www.axios.com/"&gt;Axios&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.axios.com/2026/04/06/meta-open-source-ai-models"&gt;Sources: Meta is preparing to release the first AI models developed under Alexandr Wang, with plans to offer versions of those models via an open source license&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions &amp;hellip; &lt;/p&gt;

https://codver.ai/ru/meta-под-руководством-ванга-делает-ставку-на-open-source-признание-поражения-в-гонке.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Meta под руководством Ванга делает ставку на open source — признание поражения в гонке?&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 18:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.axios.com/2026/04/06/meta-open-source-ai-models"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i15.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p15#a260406p15" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ina Fried / &lt;a href="https://www.axios.com/"&gt;Axios&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.axios.com/2026/04/06/meta-open-source-ai-models"&gt;Sources: Meta is preparing to release the first AI models developed under Alexandr Wang, with plans to offer versions of those models via an open source license&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/meta-под-руководством-ванга-делает-ставку-на-open-source-признание-поражения-в-гонке.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Единственная метрика, которая покажет, заменит ли ИИ именно вашу работу</title>
      <link>https://codver.ai/ru/единственная-метрика-которая-покажет-заменит-ли-ии-именно-вашу-работу.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/единственная-метрика-которая-покажет-заменит-ли-ии-именно-вашу-работу.html</guid>
      <description>This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. The mood is so grim that a societal impacts researcher at Anthropic, responding Wednesday to a call for&amp;#8230;</description>
      <pubDate>Mon, 06 Apr 2026 18:03:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. The mood is so grim that a societal impacts researcher at Anthropic, responding Wednesday to a call for&amp;#8230;

https://codver.ai/ru/единственная-метрика-которая-покажет-заменит-ли-ии-именно-вашу-работу.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Единственная метрика, которая покажет, заменит ли ИИ именно вашу работу&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 18:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. The mood is so grim that a societal impacts researcher at Anthropic, responding Wednesday to a call for&amp;#8230;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/единственная-метрика-которая-покажет-заменит-ли-ии-именно-вашу-работу.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ учится забывать: почему мультимодальная память стала главной проблемой</title>
      <link>https://codver.ai/ru/ии-учится-забывать-почему-мультимодальная-память-стала-главной-проблемой.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-учится-забывать-почему-мультимодальная-память-стала-главной-проблемой.html</guid>
      <description>arXiv:2604.02778v1 Announce Type: new 
Abstract: Real-world multimodal knowledge graphs (MMKGs) are dynamic, with new entities, relations, and multimodal knowledge emerging over time. Existing continual knowledge graph reasoning (CKGR) methods focus on structural triples and cannot fully exploit multimodal signals from new entities. Existing multimodal knowledge graph reasoning (MMKGR) methods, however, usually assume static graphs and suffer catastrophic forgetting as graphs evolve. To address </description>
      <pubDate>Mon, 06 Apr 2026 17:48:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02778v1 Announce Type: new 
Abstract: Real-world multimodal knowledge graphs (MMKGs) are dynamic, with new entities, relations, and multimodal knowledge emerging over time. Existing continual knowledge graph reasoning (CKGR) methods focus on structural triples and cannot fully exploit multimodal signals from new entities. Existing multimodal knowledge graph reasoning (MMKGR) methods, however, usually assume static graphs and suffer catastrophic forgetting as graphs evolve. To address this gap, we present a systematic study of continual multimodal knowledge graph reasoning (CMMKGR). We construct several continual multimodal knowledge graph benchmarks from existing MMKG datasets and propose MRCKG, a new CMMKGR model. Specifically, MRCKG employs a multimodal-structural collaborative curriculum to schedule progressive learning based on the structural connectivity of new triples to the historical graph and their multimodal compatibility. It also introduces a cross-modal knowledge preservation mechanism to mitigate forgetting through entity representation stability, relational semantic consistency, and modality anchoring. In addition, a multimodal contrastive replay scheme with a two-stage optimization strategy reinforces learned knowledge via multimodal importance sampling and representation alignment. Experiments on multiple datasets show that MRCKG preserves previously learned multimodal knowledge while substantially improving the learning of new knowledge.

https://codver.ai/ru/ии-учится-забывать-почему-мультимодальная-память-стала-главной-проблемой.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ учится забывать: почему мультимодальная память стала главной проблемой&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 17:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02778v1 Announce Type: new 
Abstract: Real-world multimodal knowledge graphs (MMKGs) are dynamic, with new entities, relations, and multimodal knowledge emerging over time. Existing continual knowledge graph reasoning (CKGR) methods focus on structural triples and cannot fully exploit multimodal signals from new entities. Existing multimodal knowledge graph reasoning (MMKGR) methods, however, usually assume static graphs and suffer catastrophic forgetting as graphs evolve. To address this gap, we present a systematic study of continual multimodal knowledge graph reasoning (CMMKGR). We construct several continual multimodal knowledge graph benchmarks from existing MMKG datasets and propose MRCKG, a new CMMKGR model. Specifically, MRCKG employs a multimodal-structural collaborative curriculum to schedule progressive learning based on the structural connectivity of new triples to the historical graph and their multimodal compatibility. It also introduces a cross-modal knowledge preservation mechanism to mitigate forgetting through entity representation stability, relational semantic consistency, and modality anchoring. In addition, a multimodal contrastive replay scheme with a two-stage optimization strategy reinforces learned knowledge via multimodal importance sampling and representation alignment. Experiments on multiple datasets show that MRCKG preserves previously learned multimodal knowledge while substantially improving the learning of new knowledge.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-учится-забывать-почему-мультимодальная-память-стала-главной-проблемой.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>MOMO на Марсе: почему ИИ для космоса опаснее земного</title>
      <link>https://codver.ai/ru/momo-на-марсе-почему-ии-для-космоса-опаснее-земного.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/momo-на-марсе-почему-ии-для-космоса-опаснее-земного.html</guid>
      <description>arXiv:2604.02719v1 Announce Type: cross 
Abstract: We introduce MOMO, the first multi-sensor foundation model for Mars remote sensing. MOMO uses model merge to integrate representations learned independently from three key Martian sensors (HiRISE, CTX, and THEMIS), spanning resolutions from 0.25 m/pixel to 100 m/pixel. Central to our method is our novel Equal Validation Loss (EVL) strategy, which aligns checkpoints across sensors based on validation loss similarity before fusion via task arithme</description>
      <pubDate>Mon, 06 Apr 2026 17:33:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02719v1 Announce Type: cross 
Abstract: We introduce MOMO, the first multi-sensor foundation model for Mars remote sensing. MOMO uses model merge to integrate representations learned independently from three key Martian sensors (HiRISE, CTX, and THEMIS), spanning resolutions from 0.25 m/pixel to 100 m/pixel. Central to our method is our novel Equal Validation Loss (EVL) strategy, which aligns checkpoints across sensors based on validation loss similarity before fusion via task arithmetic. This ensures models are merged at compatible convergence stages, leading to improved stability and generalization. We train MOMO on a large-scale, high-quality corpus of ~12 million samples curated from Mars orbital data and evaluate it on 9 downstream tasks from Mars-Bench. MOMO achieves better overall performance compared to ImageNet pre-trained, earth observation foundation model, sensor-specific pre-training, and fully-supervised baselines. Particularly on segmentation tasks, MOMO shows consistent and significant performance improvement. Our results demonstrate that model merging through an optimal checkpoint selection strategy provides an effective approach for building foundation models for multi-resolution data. The model weights, pretraining code, pretraining data, and evaluation code are available at: https://github.com/kerner-lab/MOMO.

https://codver.ai/ru/momo-на-марсе-почему-ии-для-космоса-опаснее-земного.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;MOMO на Марсе: почему ИИ для космоса опаснее земного&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 17:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02719v1 Announce Type: cross 
Abstract: We introduce MOMO, the first multi-sensor foundation model for Mars remote sensing. MOMO uses model merge to integrate representations learned independently from three key Martian sensors (HiRISE, CTX, and THEMIS), spanning resolutions from 0.25 m/pixel to 100 m/pixel. Central to our method is our novel Equal Validation Loss (EVL) strategy, which aligns checkpoints across sensors based on validation loss similarity before fusion via task arithmetic. This ensures models are merged at compatible convergence stages, leading to improved stability and generalization. We train MOMO on a large-scale, high-quality corpus of ~12 million samples curated from Mars orbital data and evaluate it on 9 downstream tasks from Mars-Bench. MOMO achieves better overall performance compared to ImageNet pre-trained, earth observation foundation model, sensor-specific pre-training, and fully-supervised baselines. Particularly on segmentation tasks, MOMO shows consistent and significant performance improvement. Our results demonstrate that model merging through an optimal checkpoint selection strategy provides an effective approach for building foundation models for multi-resolution data. The model weights, pretraining code, pretraining data, and evaluation code are available at: https://github.com/kerner-lab/MOMO.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/momo-на-марсе-почему-ии-для-космоса-опаснее-земного.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ научился читать документы, но забыл думать последовательно</title>
      <link>https://codver.ai/ru/ии-научился-читать-документы-но-забыл-думать-последовательно.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-научился-читать-документы-но-забыл-думать-последовательно.html</guid>
      <description>arXiv:2604.02371v1 Announce Type: cross 
Abstract: Visual long-document understanding is critical for enterprise, legal, and scientific applications, yet the best performing open recipes have not explored reasoning, a capability which has driven leaps in math and code performance. We introduce a synthetic data pipeline for reasoning in long-document understanding that generates thinking traces by scoring each page for question relevance, extracting textual evidence and ordering it from most to l</description>
      <pubDate>Mon, 06 Apr 2026 17:18:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02371v1 Announce Type: cross 
Abstract: Visual long-document understanding is critical for enterprise, legal, and scientific applications, yet the best performing open recipes have not explored reasoning, a capability which has driven leaps in math and code performance. We introduce a synthetic data pipeline for reasoning in long-document understanding that generates thinking traces by scoring each page for question relevance, extracting textual evidence and ordering it from most to least relevant. We apply SFT to the resulting traces within \texttt{} tags, gated by a \texttt{} control token, and the resulting reasoning capability is internalized via low-strength model merging. We study Qwen3 VL 32B and Mistral Small 3.1 24B. With Qwen3 VL, we achieve 58.3 on MMLongBenchDoc, surpassing the 7$\times$ larger Qwen3 VL 235B A22B (57.0). With Mistral, we show that synthetic reasoning outperforms distillation from the Thinking version's traces by 3.8 points on MMLBD-C, and internalized reasoning exhibits 12.4$\times$ fewer mean output tokens compared to explicit reasoning. We release our pipeline for reproducibility and further exploration.

https://codver.ai/ru/ии-научился-читать-документы-но-забыл-думать-последовательно.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ научился читать документы, но забыл думать последовательно&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 17:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02371v1 Announce Type: cross 
Abstract: Visual long-document understanding is critical for enterprise, legal, and scientific applications, yet the best performing open recipes have not explored reasoning, a capability which has driven leaps in math and code performance. We introduce a synthetic data pipeline for reasoning in long-document understanding that generates thinking traces by scoring each page for question relevance, extracting textual evidence and ordering it from most to least relevant. We apply SFT to the resulting traces within \texttt{} tags, gated by a \texttt{} control token, and the resulting reasoning capability is internalized via low-strength model merging. We study Qwen3 VL 32B and Mistral Small 3.1 24B. With Qwen3 VL, we achieve 58.3 on MMLongBenchDoc, surpassing the 7$\times$ larger Qwen3 VL 235B A22B (57.0). With Mistral, we show that synthetic reasoning outperforms distillation from the Thinking version's traces by 3.8 points on MMLBD-C, and internalized reasoning exhibits 12.4$\times$ fewer mean output tokens compared to explicit reasoning. We release our pipeline for reproducibility and further exploration.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-научился-читать-документы-но-забыл-думать-последовательно.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>TRACE разоблачает иллюзию стабильности: ваш интернет меняется каждые 15 минут</title>
      <link>https://codver.ai/ru/trace-разоблачает-иллюзию-стабильности-ваш-интернет-меняется-каждые-15-минут.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/trace-разоблачает-иллюзию-стабильности-ваш-интернет-меняется-каждые-15-минут.html</guid>
      <description>arXiv:2604.02361v1 Announce Type: cross 
Abstract: Detecting Internet routing instability is a critical yet challenging task, particularly when relying solely on endpoint active measurements. This study introduces TRACE, a Machine Learning (ML) pipeline designed to identify route changes using only traceroute latency data, thereby ensuring independence from control plane information. We propose a robust feature engineering strategy that captures temporal dynamics using rolling statistics and aggre
      <pubDate>Mon, 06 Apr 2026 17:03:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02361v1 Announce Type: cross 
Abstract: Detecting Internet routing instability is a critical yet challenging task, particularly when relying solely on endpoint active measurements. This study introduces TRACE, a Machine Learning (ML) pipeline designed to identify route changes using only traceroute latency data, thereby ensuring independence from control plane information. We propose a robust feature engineering strategy that captures temporal dynamics using rolling statistics and aggregated context patterns. The architecture leverages a stacked ensemble of Gradient Boosted Decision Trees refined by a hyperparameter-optimized meta-learner. By strictly calibrating decision thresholds to address the inherent class imbalance of rare routing events, TRACE achieves superior F1-score performance, significantly outperforming traditional baseline models and demonstrating strong effectiveness in detecting routing changes on the Internet.

https://codver.ai/ru/trace-разоблачает-иллюзию-стабильности-ваш-интернет-меняется-каждые-15-минут.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;TRACE разоблачает иллюзию стабильности: ваш интернет меняется каждые 15 минут&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 17:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02361v1 Announce Type: cross 
Abstract: Detecting Internet routing instability is a critical yet challenging task, particularly when relying solely on endpoint active measurements. This study introduces TRACE, a Machine Learning (ML) pipeline designed to identify route changes using only traceroute latency data, thereby ensuring independence from control plane information. We propose a robust feature engineering strategy that captures temporal dynamics using rolling statistics and aggregated context patterns. The architecture leverages a stacked ensemble of Gradient Boosted Decision Trees refined by a hyperparameter-optimized meta-learner. By strictly calibrating decision thresholds to address the inherent class imbalance of rare routing events, TRACE achieves superior F1-score performance, significantly outperforming traditional baseline models and demonstrating strong effectiveness in detecting routing changes on the Internet.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/trace-разоблачает-иллюзию-стабильности-ваш-интернет-меняется-каждые-15-минут.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Математика ИИ: почему «умные» модели учатся на чужих ошибках</title>
      <link>https://codver.ai/ru/математика-ии-почему-умные-модели-учатся-на-чужих-ошибках.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/математика-ии-почему-умные-модели-учатся-на-чужих-ошибках.html</guid>
      <description>arXiv:2509.19893v2 Announce Type: replace 
Abstract: Reinforcement Learning (RL) has emerged as the key driver for post-training complex reasoning in Large Language Models (LLMs), yet online RL introduces significant instability and computational overhead. Offline RL offers a compelling alternative by decoupling inference from training; however, offline algorithms for reasoning remain under-optimized compared to their online counterparts. A central challenge is gradient entanglement: in long-hor</description>
      <pubDate>Mon, 06 Apr 2026 16:48:17 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2509.19893v2 Announce Type: replace 
Abstract: Reinforcement Learning (RL) has emerged as the key driver for post-training complex reasoning in Large Language Models (LLMs), yet online RL introduces significant instability and computational overhead. Offline RL offers a compelling alternative by decoupling inference from training; however, offline algorithms for reasoning remain under-optimized compared to their online counterparts. A central challenge is gradient entanglement: in long-horizon reasoning trajectories, correct and incorrect solutions share substantial token overlap, causing gradient updates from incorrect trajectories to suppress tokens critical for correct ones. We propose Future Policy Approximation (FPA), a simple method that weights gradients against an estimate of the future policy rather than the current one, enabling proactive gradient reweighting. This future policy is estimated via logit-space extrapolation with negligible overhead. We provide theoretical intuition for FPA through the lens of Optimistic Mirror Descent and further ground it through its connection to DPO. Evaluating FPA across three models and seven mathematical benchmarks, we demonstrate consistent improvements over strong offline baselines including DPO, RPO, KTO, and vanilla offline RL. FPA stabilizes long-horizon training where vanilla objectives degrade and achieves comparable accuracy to online RLVR at a fraction of its GPU hours.

https://codver.ai/ru/математика-ии-почему-умные-модели-учатся-на-чужих-ошибках.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Математика ИИ: почему «умные» модели учатся на чужих ошибках&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 16:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2509.19893v2 Announce Type: replace 
Abstract: Reinforcement Learning (RL) has emerged as the key driver for post-training complex reasoning in Large Language Models (LLMs), yet online RL introduces significant instability and computational overhead. Offline RL offers a compelling alternative by decoupling inference from training; however, offline algorithms for reasoning remain under-optimized compared to their online counterparts. A central challenge is gradient entanglement: in long-horizon reasoning trajectories, correct and incorrect solutions share substantial token overlap, causing gradient updates from incorrect trajectories to suppress tokens critical for correct ones. We propose Future Policy Approximation (FPA), a simple method that weights gradients against an estimate of the future policy rather than the current one, enabling proactive gradient reweighting. This future policy is estimated via logit-space extrapolation with negligible overhead. We provide theoretical intuition for FPA through the lens of Optimistic Mirror Descent and further ground it through its connection to DPO. Evaluating FPA across three models and seven mathematical benchmarks, we demonstrate consistent improvements over strong offline baselines including DPO, RPO, KTO, and vanilla offline RL. FPA stabilizes long-horizon training where vanilla objectives degrade and achieves comparable accuracy to online RLVR at a fraction of its GPU hours.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/математика-ии-почему-умные-модели-учатся-на-чужих-ошибках.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>TechCrunch Disrupt 2026: Скидки в $500 сигнализируют о кризисе отрасли</title>
      <link>https://codver.ai/ru/techcrunch-disrupt-2026-скидки-в-500-сигнализируют-о-кризисе-отрасли.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/techcrunch-disrupt-2026-скидки-в-500-сигнализируют-о-кризисе-отрасли.html</guid>
      <description>Starting today, you have 5 days to save nearly $500 on your ticket to TechCrunch Disrupt 2026. This offer disappears Friday, April 10, at 11:59 p.m. PT. Register here to secure these low rates.</description>
      <pubDate>Mon, 06 Apr 2026 16:33:18 GMT</pubDate>
      <category>news</category>
      <content:encoded>
Starting today, you have 5 days to save nearly $500 on your ticket to TechCrunch Disrupt 2026. This offer disappears Friday, April 10, at 11:59 p.m. PT. Register here to secure these low rates.

https://codver.ai/ru/techcrunch-disrupt-2026-скидки-в-500-сигнализируют-о-кризисе-отрасли.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;TechCrunch Disrupt 2026: Скидки в $500 сигнализируют о кризисе отрасли&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 16:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Starting today, you have 5 days to save nearly $500 on your ticket to TechCrunch Disrupt 2026. This offer disappears Friday, April 10, at 11:59 p.m. PT. Register here to secure these low rates.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/techcrunch-disrupt-2026-скидки-в-500-сигнализируют-о-кризисе-отрасли.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Startup Battlefield 200: Почему TechCrunch создаёт иллюзию меритократии в венчурном мире</title>
      <link>https://codver.ai/ru/startup-battlefield-200-почему-techcrunch-создаёт-иллюзию-меритократии-в-венчурном-мире.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/startup-battlefield-200-почему-techcrunch-создаёт-иллюзию-меритократии-в-венчурном-мире.html</guid>
      <description>Nominate your startup, or one you know that deserves the spotlight, and finish the process by applying. Selected 200 have a chance at VC access, TechCrunch coverage, and $100K for Startup Battlefield 200. Applications close on May 27.</description>
      <pubDate>Mon, 06 Apr 2026 16:18:25 GMT</pubDate>
      <category>news</category>
      <content:encoded>
Nominate your startup, or one you know that deserves the spotlight, and finish the process by applying. Selected 200 have a chance at VC access, TechCrunch coverage, and $100K for Startup Battlefield 200. Applications close on May 27.

https://codver.ai/ru/startup-battlefield-200-почему-techcrunch-создаёт-иллюзию-меритократии-в-венчурном-мире.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Startup Battlefield 200: Почему TechCrunch создаёт иллюзию меритократии в венчурном мире&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 16:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Nominate your startup, or one you know that deserves the spotlight, and finish the process by applying. Selected 200 have a chance at VC access, TechCrunch coverage, and $100K for Startup Battlefield 200. Applications close on May 27.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/startup-battlefield-200-почему-techcrunch-создаёт-иллюзию-меритократии-в-венчурном-мире.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>OpenAI предлагает налоги на роботов: признание провала или мастерский план?</title>
      <link>https://codver.ai/ru/openai-предлагает-налоги-на-роботов-признание-провала-или-мастерский-план.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-предлагает-налоги-на-роботов-признание-провала-или-мастерский-план.html</guid>
      <description>OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI’s economic impact.</description>
      <pubDate>Mon, 06 Apr 2026 16:03:18 GMT</pubDate>
      <category>news</category>
      <content:encoded>
OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI’s economic impact.

https://codver.ai/ru/openai-предлагает-налоги-на-роботов-признание-провала-или-мастерский-план.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;OpenAI предлагает налоги на роботов: признание провала или мастерский план?&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 16:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI’s economic impact.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-предлагает-налоги-на-роботов-признание-провала-или-мастерский-план.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Self-Distilled RLVR: Почему ИИ учится обманывать сам себя</title>
      <link>https://codver.ai/ru/self-distilled-rlvr-почему-ии-учится-обманывать-сам-себя.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/self-distilled-rlvr-почему-ии-учится-обманывать-сам-себя.html</guid>
      <description>arXiv:2604.03128v1 Announce Type: cross 
Abstract: On-policy distillation (OPD) has become a popular training paradigm in the LLM community. This paradigm selects a larger model as the teacher to provide dense, fine-grained signals for each sampled trajectory, in contrast to reinforcement learning with verifiable rewards (RLVR), which only obtains sparse signals from verifiable outcomes in the environment. Recently, the community has explored on-policy self-distillation (OPSD), where the same mo</description>
      <pubDate>Mon, 06 Apr 2026 15:48:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03128v1 Announce Type: cross 
Abstract: On-policy distillation (OPD) has become a popular training paradigm in the LLM community. This paradigm selects a larger model as the teacher to provide dense, fine-grained signals for each sampled trajectory, in contrast to reinforcement learning with verifiable rewards (RLVR), which only obtains sparse signals from verifiable outcomes in the environment. Recently, the community has explored on-policy self-distillation (OPSD), where the same model serves as both teacher and student, with the teacher receiving additional privileged information such as reference answers to enable self-evolution. This paper demonstrates that learning signals solely derived from the privileged teacher result in severe information leakage and unstable long-term training. Accordingly, we identify the optimal niche for self-distillation and propose \textbf{RLSD} (\textbf{RL}VR with \textbf{S}elf-\textbf{D}istillation). Specifically, we leverage self-distillation to obtain token-level policy differences for determining fine-grained update magnitudes, while continuing to use RLVR to derive reliable update directions from environmental feedback (e.g., response correctness). This enables RLSD to simultaneously harness the strengths of both RLVR and OPSD, achieving a higher convergence ceiling and superior training stability.

https://codver.ai/ru/self-distilled-rlvr-почему-ии-учится-обманывать-сам-себя.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Self-Distilled RLVR: Почему ИИ учится обманывать сам себя&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 15:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03128v1 Announce Type: cross 
Abstract: On-policy distillation (OPD) has become a popular training paradigm in the LLM community. This paradigm selects a larger model as the teacher to provide dense, fine-grained signals for each sampled trajectory, in contrast to reinforcement learning with verifiable rewards (RLVR), which only obtains sparse signals from verifiable outcomes in the environment. Recently, the community has explored on-policy self-distillation (OPSD), where the same model serves as both teacher and student, with the teacher receiving additional privileged information such as reference answers to enable self-evolution. This paper demonstrates that learning signals solely derived from the privileged teacher result in severe information leakage and unstable long-term training. Accordingly, we identify the optimal niche for self-distillation and propose \textbf{RLSD} (\textbf{RL}VR with \textbf{S}elf-\textbf{D}istillation). Specifically, we leverage self-distillation to obtain token-level policy differences for determining fine-grained update magnitudes, while continuing to use RLVR to derive reliable update directions from environmental feedback (e.g., response correctness). This enables RLSD to simultaneously harness the strengths of both RLVR and OPSD, achieving a higher convergence ceiling and superior training stability.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/self-distilled-rlvr-почему-ии-учится-обманывать-сам-себя.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ChatGPT интегрируется с Uber и DoorDash: почему это конец ИИ-ассистентов</title>
      <link>https://codver.ai/ru/chatgpt-интегрируется-с-uber-и-doordash-почему-это-конец-ии-ассистентов.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/chatgpt-интегрируется-с-uber-и-doordash-почему-это-конец-ии-ассистентов.html</guid>
      <description>Learn how to use Spotify, Canva, Figma, Expedia, and other apps directly in ChatGPT.</description>
      <pubDate>Mon, 06 Apr 2026 15:33:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
Learn how to use Spotify, Canva, Figma, Expedia, and other apps directly in ChatGPT.

https://codver.ai/ru/chatgpt-интегрируется-с-uber-и-doordash-почему-это-конец-ии-ассистентов.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ChatGPT интегрируется с Uber и DoorDash: почему это конец ИИ-ассистентов&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 15:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Learn how to use Spotify, Canva, Figma, Expedia, and other apps directly in ChatGPT.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/chatgpt-интегрируется-с-uber-и-doordash-почему-это-конец-ии-ассистентов.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Intel стал упаковщиком чужих чипов — и это его лучшая стратегия</title>
      <link>https://codver.ai/ru/intel-стал-упаковщиком-чужих-чипов-и-это-его-лучшая-стратегия.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/intel-стал-упаковщиком-чужих-чипов-и-это-его-лучшая-стратегия.html</guid>
      <description>&lt;a href="https://www.wired.com/story/why-chip-packaging-could-decide-the-next-phase-of-the-ai-boom/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i11.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p11#a260406p11" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Lauren Goode / &lt;a href="http://www.wired.com/"&gt;Wired&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size:</description>
      <pubDate>Mon, 06 Apr 2026 15:18:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.wired.com/story/why-chip-packaging-could-decide-the-next-phase-of-the-ai-boom/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i11.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p11#a260406p11" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Lauren Goode / &lt;a href="http://www.wired.com/"&gt;Wired&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wired.com/story/why-chip-packaging-could-decide-the-next-phase-of-the-ai-boom/"&gt;How advanced chip packaging became one of Intel's fast-growing businesses; sources: Intel is in talks with Google and Amazon for its advanced packaging services&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Advanced chip packaging is suddenly at the center of the AI boom.&amp;nbsp; Intel is going all in.&amp;nbsp; &amp;mdash;&amp;nbsp; Sixteen miles north of Albuquerque &amp;hellip; &lt;/p&gt;

https://codver.ai/ru/intel-стал-упаковщиком-чужих-чипов-и-это-его-лучшая-стратегия.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Intel стал упаковщиком чужих чипов — и это его лучшая стратегия&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 15:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.wired.com/story/why-chip-packaging-could-decide-the-next-phase-of-the-ai-boom/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i11.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p11#a260406p11" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Lauren Goode / &lt;a href="http://www.wired.com/"&gt;Wired&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wired.com/story/why-chip-packaging-could-decide-the-next-phase-of-the-ai-boom/"&gt;How advanced chip packaging became one of Intel's fast-growing businesses; sources: Intel is in talks with Google and Amazon for its advanced packaging services&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Advanced chip packaging is suddenly at the center of the AI boom.&amp;nbsp; Intel is going all in.&amp;nbsp; &amp;mdash;&amp;nbsp; Sixteen miles north of Albuquerque &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/intel-стал-упаковщиком-чужих-чипов-и-это-его-лучшая-стратегия.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>OpenAI покупает TBPN: почему успешные AI-компании повторяют ошибки Twitter</title>
      <link>https://codver.ai/ru/openai-покупает-tbpn-почему-успешные-ai-компании-повторяют-ошибки-twitter.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-покупает-tbpn-почему-успешные-ai-компании-повторяют-ошибки-twitter.html</guid>
      <description>&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p12#a260406p12" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ben Thompson / &lt;a href="https://stratechery.com/"&gt;Stratechery&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://stratechery.com/2026/openai-buys-tbpn-tech-and-the-token-tsunami/"&gt;OpenAI buying TBPN makes little sense, par for the course for a company that, like Twitter, stumbled</description>
      <pubDate>Mon, 06 Apr 2026 15:03:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p12#a260406p12" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ben Thompson / &lt;a href="https://stratechery.com/"&gt;Stratechery&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://stratechery.com/2026/openai-buys-tbpn-tech-and-the-token-tsunami/"&gt;OpenAI buying TBPN makes little sense, par for the course for a company that, like Twitter, stumbled into a big market and may never build a functional business&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; OpenAI's purchase of TBPN makes no sense, which may be par for the course for OpenAI.&amp;nbsp; Then, AI is breaking stuff, starting with tech services.&lt;/p&gt;

https://codver.ai/ru/openai-покупает-tbpn-почему-успешные-ai-компании-повторяют-ошибки-twitter.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;OpenAI покупает TBPN: почему успешные AI-компании повторяют ошибки Twitter&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 15:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p12#a260406p12" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Ben Thompson / &lt;a href="https://stratechery.com/"&gt;Stratechery&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://stratechery.com/2026/openai-buys-tbpn-tech-and-the-token-tsunami/"&gt;OpenAI buying TBPN makes little sense, par for the course for a company that, like Twitter, stumbled into a big market and may never build a functional business&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; OpenAI's purchase of TBPN makes no sense, which may be par for the course for OpenAI.&amp;nbsp; Then, AI is breaking stuff, starting with tech services.&lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-покупает-tbpn-почему-успешные-ai-компании-повторяют-ошибки-twitter.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ-ассистенты провалили этический тест: эмпатия оказалась их слабым местом</title>
      <link>https://codver.ai/ru/ии-ассистенты-провалили-этический-тест-эмпатия-оказалась-их-слабым-местом.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-ассистенты-провалили-этический-тест-эмпатия-оказалась-их-слабым-местом.html</guid>
      <description>arXiv:2604.02713v1 Announce Type: new 
Abstract: Conversational AI is increasingly deployed in emotionally charged and ethically sensitive interactions. Previous research has primarily concentrated on emotional benchmarks or static safety checks, overlooking how alignment unfolds in evolving conversation. We explore the research question: what breakdowns arise when conversational agents confront emotionally and ethically sensitive behaviors, and how do these affect dialogue quality? To stress-te</description>
      <pubDate>Mon, 06 Apr 2026 14:48:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02713v1 Announce Type: new 
Abstract: Conversational AI is increasingly deployed in emotionally charged and ethically sensitive interactions. Previous research has primarily concentrated on emotional benchmarks or static safety checks, overlooking how alignment unfolds in evolving conversation. We explore the research question: what breakdowns arise when conversational agents confront emotionally and ethically sensitive behaviors, and how do these affect dialogue quality? To stress-test chatbot performance, we develop a persona-conditioned user simulator capable of engaging in multi-turn dialogue with psychological personas and staged emotional pacing. Our analysis reveals that mainstream models exhibit recurrent breakdowns that intensify as emotional trajectories escalate. We identify several common failure patterns, including affective misalignments, ethical guidance failures, and cross-dimensional trade-offs where empathy supersedes or undermines responsibility. We organize these patterns into a taxonomy and discuss the design implications, highlighting the necessity to maintain ethical coherence and affective sensitivity throughout dynamic interactions. The study offers the HCI community a new perspective on the diagnosis and improvement of conversational AI in value-sensitive and emotionally charged contexts.

https://codver.ai/ru/ии-ассистенты-провалили-этический-тест-эмпатия-оказалась-их-слабым-местом.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ-ассистенты провалили этический тест: эмпатия оказалась их слабым местом&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 14:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02713v1 Announce Type: new 
Abstract: Conversational AI is increasingly deployed in emotionally charged and ethically sensitive interactions. Previous research has primarily concentrated on emotional benchmarks or static safety checks, overlooking how alignment unfolds in evolving conversation. We explore the research question: what breakdowns arise when conversational agents confront emotionally and ethically sensitive behaviors, and how do these affect dialogue quality? To stress-test chatbot performance, we develop a persona-conditioned user simulator capable of engaging in multi-turn dialogue with psychological personas and staged emotional pacing. Our analysis reveals that mainstream models exhibit recurrent breakdowns that intensify as emotional trajectories escalate. We identify several common failure patterns, including affective misalignments, ethical guidance failures, and cross-dimensional trade-offs where empathy supersedes or undermines responsibility. We organize these patterns into a taxonomy and discuss the design implications, highlighting the necessity to maintain ethical coherence and affective sensitivity throughout dynamic interactions. The study offers the HCI community a new perspective on the diagnosis and improvement of conversational AI in value-sensitive and emotionally charged contexts.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-ассистенты-провалили-этический-тест-эмпатия-оказалась-их-слабым-местом.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Длинный контекст ИИ: почему больше памяти означает меньше интеллекта</title>
      <link>https://codver.ai/ru/длинный-контекст-ии-почему-больше-памяти-означает-меньше-интеллекта.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/длинный-контекст-ии-почему-больше-памяти-означает-меньше-интеллекта.html</guid>
      <description>arXiv:2604.02650v1 Announce Type: new 
Abstract: Existing studies on Long-Context Continual Pre-training (LCCP) mainly focus on small-scale models and limited data regimes (tens of billions of tokens). We argue that directly migrating these small-scale settings to industrial-grade models risks insufficient adaptation and premature training termination. Furthermore, current evaluation methods rely heavily on downstream benchmarks (e.g., Needle-in-a-Haystack), which often fail to reflect the intri</description>
      <pubDate>Mon, 06 Apr 2026 14:33:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02650v1 Announce Type: new 
Abstract: Existing studies on Long-Context Continual Pre-training (LCCP) mainly focus on small-scale models and limited data regimes (tens of billions of tokens). We argue that directly migrating these small-scale settings to industrial-grade models risks insufficient adaptation and premature training termination. Furthermore, current evaluation methods rely heavily on downstream benchmarks (e.g., Needle-in-a-Haystack), which often fail to reflect the intrinsic convergence state and can lead to "deceptive saturation". In this paper, we present the first systematic investigation of LCCP learning dynamics using the industrial-grade Hunyuan-A13B (80B total parameters), tracking its evolution across a 200B-token training trajectory. Specifically, we propose a hierarchical framework to analyze LCCP dynamics across behavioral (supervised fine-tuning probing), probabilistic (perplexity), and mechanistic (attention patterns) levels. Our findings reveal: (1) Necessity of Massive Data Scaling: Training regimes of dozens of billions of tokens are insufficient for industrial-grade LLMs' LCCP (e.g., Hunyuan-A13B reaches saturation after training over 150B tokens). (2) Deceptive Saturation vs. Intrinsic Saturation: Traditional NIAH scores report "fake saturation" early, while our PPL-based analysis reveals continuous intrinsic improvements and correlates more strongly with downstream performance. (3) Mechanistic Monitoring for Training Stability: Retrieval heads act as efficient, low-resource training monitors, as their evolving attention scores reliably track LCCP progress and exhibit high correlation with SFT results. This work provides a comprehensive monitoring framework, evaluation system, and mechanistic interpretation for the LCCP of industrial-grade LLM.

https://codver.ai/ru/длинный-контекст-ии-почему-больше-памяти-означает-меньше-интеллекта.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Длинный контекст ИИ: почему больше памяти означает меньше интеллекта&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 14:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02650v1 Announce Type: new 
Abstract: Existing studies on Long-Context Continual Pre-training (LCCP) mainly focus on small-scale models and limited data regimes (tens of billions of tokens). We argue that directly migrating these small-scale settings to industrial-grade models risks insufficient adaptation and premature training termination. Furthermore, current evaluation methods rely heavily on downstream benchmarks (e.g., Needle-in-a-Haystack), which often fail to reflect the intrinsic convergence state and can lead to "deceptive saturation". In this paper, we present the first systematic investigation of LCCP learning dynamics using the industrial-grade Hunyuan-A13B (80B total parameters), tracking its evolution across a 200B-token training trajectory. Specifically, we propose a hierarchical framework to analyze LCCP dynamics across behavioral (supervised fine-tuning probing), probabilistic (perplexity), and mechanistic (attention patterns) levels. Our findings reveal: (1) Necessity of Massive Data Scaling: Training regimes of dozens of billions of tokens are insufficient for industrial-grade LLMs' LCCP (e.g., Hunyuan-A13B reaches saturation after training over 150B tokens). (2) Deceptive Saturation vs. Intrinsic Saturation: Traditional NIAH scores report "fake saturation" early, while our PPL-based analysis reveals continuous intrinsic improvements and correlates more strongly with downstream performance. (3) Mechanistic Monitoring for Training Stability: Retrieval heads act as efficient, low-resource training monitors, as their evolving attention scores reliably track LCCP progress and exhibit high correlation with SFT results. This work provides a comprehensive monitoring framework, evaluation system, and mechanistic interpretation for the LCCP of industrial-grade LLM.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/длинный-контекст-ии-почему-больше-памяти-означает-меньше-интеллекта.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Xoople собрал $130M на спутники для ИИ: космос как новая нефтяная скважина</title>
      <link>https://codver.ai/ru/xoople-собрал-130m-на-спутники-для-ии-космос-как-новая-нефтяная-скважина.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/xoople-собрал-130m-на-спутники-для-ии-космос-как-новая-нефтяная-скважина.html</guid>
      <description>&lt;a href="https://techcrunch.com/2026/04/06/spains-xoople-raises-130-million-series-b-to-map-the-earth-for-ai/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i10.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p10#a260406p10" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Tim Fernholz / &lt;a href="http://techcrunch.com/"&gt;TechCrunch&lt;/a&gt;:&lt;br /&gt;
&lt;span s</description>
      <pubDate>Mon, 06 Apr 2026 14:18:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://techcrunch.com/2026/04/06/spains-xoople-raises-130-million-series-b-to-map-the-earth-for-ai/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i10.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p10#a260406p10" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Tim Fernholz / &lt;a href="http://techcrunch.com/"&gt;TechCrunch&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://techcrunch.com/2026/04/06/spains-xoople-raises-130-million-series-b-to-map-the-earth-for-ai/"&gt;Xoople, which is developing a satellite constellation to collect earth data for training AI models, raised a $130M Series B, bringing its total funding to $225M&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Space data companies have argued for years that the private sector needs their products, but the real uptake has been from government buyers.&lt;/p&gt;

https://codver.ai/ru/xoople-собрал-130m-на-спутники-для-ии-космос-как-новая-нефтяная-скважина.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Xoople собрал $130M на спутники для ИИ: космос как новая нефтяная скважина&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 14:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;a href="https://techcrunch.com/2026/04/06/spains-xoople-raises-130-million-series-b-to-map-the-earth-for-ai/"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i10.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p10#a260406p10" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Tim Fernholz / &lt;a href="http://techcrunch.com/"&gt;TechCrunch&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://techcrunch.com/2026/04/06/spains-xoople-raises-130-million-series-b-to-map-the-earth-for-ai/"&gt;Xoople, which is developing a satellite constellation to collect earth data for training AI models, raised a $130M Series B, bringing its total funding to $225M&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Space data companies have argued for years that the private sector needs their products, but the real uptake has been from government buyers.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/xoople-собрал-130m-на-спутники-для-ии-космос-как-новая-нефтяная-скважина.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Xoople привлекла $130 миллионов, чтобы решить проблему которой не существует</title>
      <link>https://codver.ai/ru/xoople-привлекла-130-миллионов-чтобы-решить-проблему-которой-не-существует.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/xoople-привлекла-130-миллионов-чтобы-решить-проблему-которой-не-существует.html</guid>
      <description>The company is also announcing a deal with L3Harris to build the sensors for Xoople's spacecraft.</description>
      <pubDate>Mon, 06 Apr 2026 14:03:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
The company is also announcing a deal with L3Harris to build the sensors for Xoople's spacecraft.

https://codver.ai/ru/xoople-привлекла-130-миллионов-чтобы-решить-проблему-которой-не-существует.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Xoople привлекла $130 миллионов, чтобы решить проблему которой не существует&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 14:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;The company is also announcing a deal with L3Harris to build the sensors for Xoople's spacecraft.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/xoople-привлекла-130-миллионов-чтобы-решить-проблему-которой-не-существует.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>f-INE: Почему новый метод оценки влияния данных похоронит мечты о справедливом ИИ</title>
      <link>https://codver.ai/ru/f-ine-почему-новый-метод-оценки-влияния-данных-похоронит-мечты-о-справедливом-ии.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/f-ine-почему-новый-метод-оценки-влияния-данных-похоронит-мечты-о-справедливом-ии.html</guid>
      <description>arXiv:2510.10510v2 Announce Type: replace-cross 
Abstract: Influence estimation methods promise to explain and debug machine learning by estimating the impact of individual samples on the final model. Yet, existing methods collapse under training randomness: the same example may appear critical in one run and irrelevant in the next. Such instability undermines their use in data curation or cleanup since it is unclear if we indeed deleted/kept the correct datapoints. To overcome this, we introduc</description>
      <pubDate>Mon, 06 Apr 2026 13:48:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2510.10510v2 Announce Type: replace-cross 
Abstract: Influence estimation methods promise to explain and debug machine learning by estimating the impact of individual samples on the final model. Yet, existing methods collapse under training randomness: the same example may appear critical in one run and irrelevant in the next. Such instability undermines their use in data curation or cleanup since it is unclear if we indeed deleted/kept the correct datapoints. To overcome this, we introduce *f-influence* -- a new influence estimation framework grounded in hypothesis testing that explicitly accounts for training randomness, and establish desirable properties that make it suitable for reliable influence estimation. We also design a highly efficient algorithm **f**-**IN**fluence **E**stimation (**f-INE**) that computes f-influence **in a single training run**. Finally, we scale up f-INE to estimate influence of instruction tuning data on Llama-3.1-8B and show it can reliably detect poisoned samples that steer model opinions, demonstrating its utility for data cleanup and attributing model behavior.

https://codver.ai/ru/f-ine-почему-новый-метод-оценки-влияния-данных-похоронит-мечты-о-справедливом-ии.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;f-INE: Почему новый метод оценки влияния данных похоронит мечты о справедливом ИИ&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 13:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2510.10510v2 Announce Type: replace-cross 
Abstract: Influence estimation methods promise to explain and debug machine learning by estimating the impact of individual samples on the final model. Yet, existing methods collapse under training randomness: the same example may appear critical in one run and irrelevant in the next. Such instability undermines their use in data curation or cleanup since it is unclear if we indeed deleted/kept the correct datapoints. To overcome this, we introduce *f-influence* -- a new influence estimation framework grounded in hypothesis testing that explicitly accounts for training randomness, and establish desirable properties that make it suitable for reliable influence estimation. We also design a highly efficient algorithm **f**-**IN**fluence **E**stimation (**f-INE**) that computes f-influence **in a single training run**. Finally, we scale up f-INE to estimate influence of instruction tuning data on Llama-3.1-8B and show it can reliably detect poisoned samples that steer model opinions, demonstrating its utility for data cleanup and attributing model behavior.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/f-ine-почему-новый-метод-оценки-влияния-данных-похоронит-мечты-о-справедливом-ии.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>PR3DICTR от медицинского ИИ: модульность убивает точность диагностики</title>
      <link>https://codver.ai/ru/pr3dictr-от-медицинского-ии-модульность-убивает-точность-диагностики.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/pr3dictr-от-медицинского-ии-модульность-убивает-точность-диагностики.html</guid>
      <description>arXiv:2604.03203v1 Announce Type: cross 
Abstract: Three-dimensional medical image data and computer-aided decision making, particularly using deep learning, are becoming increasingly important in the medical field. To aid in these developments we introduce PR3DICTR: Platform for Research in 3D Image Classification and sTandardised tRaining. Built using community-standard distributions (PyTorch and MONAI), PR3DICTR provides an open-access, flexible and convenient framework for prediction model d</description>
      <pubDate>Mon, 06 Apr 2026 13:33:19 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03203v1 Announce Type: cross 
Abstract: Three-dimensional medical image data and computer-aided decision making, particularly using deep learning, are becoming increasingly important in the medical field. To aid in these developments we introduce PR3DICTR: Platform for Research in 3D Image Classification and sTandardised tRaining. Built using community-standard distributions (PyTorch and MONAI), PR3DICTR provides an open-access, flexible and convenient framework for prediction model development, with an explicit focus on classification using three-dimensional medical image data. By combining modular design principles and standardization, it aims to alleviate developmental burden whilst retaining adjustability. It provides users with a wealth of pre-established functionality, for instance in model architecture design options, hyper-parameter solutions and training methodologies, but still gives users the opportunity and freedom to ``plug in'' their own solutions or modules. PR3DICTR can be applied to any binary or event-based three-dimensional classification task and can work with as little as two lines of code.

https://codver.ai/ru/pr3dictr-от-медицинского-ии-модульность-убивает-точность-диагностики.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;PR3DICTR от медицинского ИИ: модульность убивает точность диагностики&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 13:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03203v1 Announce Type: cross 
Abstract: Three-dimensional medical image data and computer-aided decision making, particularly using deep learning, are becoming increasingly important in the medical field. To aid in these developments we introduce PR3DICTR: Platform for Research in 3D Image Classification and sTandardised tRaining. Built using community-standard distributions (PyTorch and MONAI), PR3DICTR provides an open-access, flexible and convenient framework for prediction model development, with an explicit focus on classification using three-dimensional medical image data. By combining modular design principles and standardization, it aims to alleviate developmental burden whilst retaining adjustability. It provides users with a wealth of pre-established functionality, for instance in model architecture design options, hyper-parameter solutions and training methodologies, but still gives users the opportunity and freedom to ``plug in'' their own solutions or modules. PR3DICTR can be applied to any binary or event-based three-dimensional classification task and can work with as little as two lines of code.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/pr3dictr-от-медицинского-ии-модульность-убивает-точность-диагностики.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Gradient Boosting в одном слое внимания: почему простота убивает сложность</title>
      <link>https://codver.ai/ru/gradient-boosting-в-одном-слое-внимания-почему-простота-убивает-сложность.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/gradient-boosting-в-одном-слое-внимания-почему-простота-убивает-сложность.html</guid>
      <description>arXiv:2604.03190v1 Announce Type: cross 
Abstract: Transformer attention computes a single softmax-weighted average over values -- a one-pass estimate that cannot correct its own errors. We introduce \emph{gradient-boosted attention}, which applies the principle of gradient boosting \emph{within} a single attention layer: a second attention pass, with its own learned projections, attends to the prediction error of the first and applies a gated correction. Under a squared reconstruction objective</description>
      <pubDate>Mon, 06 Apr 2026 13:18:17 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03190v1 Announce Type: cross 
Abstract: Transformer attention computes a single softmax-weighted average over values -- a one-pass estimate that cannot correct its own errors. We introduce \emph{gradient-boosted attention}, which applies the principle of gradient boosting \emph{within} a single attention layer: a second attention pass, with its own learned projections, attends to the prediction error of the first and applies a gated correction. Under a squared reconstruction objective, the construction maps onto Friedman's gradient boosting machine, with each attention pass as a base learner and the per-dimension gate as the shrinkage parameter. We show that a single Hopfield-style update erases all query information orthogonal to the stored-pattern subspace, and that further iteration under local contraction can collapse distinct queries in the same region to the same fixed point. We also show that separate projections for the correction pass can recover residual information inaccessible to the shared-projection approach of Tukey's twicing. On a 10M-token subset of WikiText-103, gradient-boosted attention achieves a test perplexity of $67.9$ compared to $72.2$ for standard attention, $69.6$ for Twicing Attention, and $69.0$ for a parameter-matched wider baseline, with two rounds capturing most of the benefit.

https://codver.ai/ru/gradient-boosting-в-одном-слое-внимания-почему-простота-убивает-сложность.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Gradient Boosting в одном слое внимания: почему простота убивает сложность&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 13:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03190v1 Announce Type: cross 
Abstract: Transformer attention computes a single softmax-weighted average over values -- a one-pass estimate that cannot correct its own errors. We introduce \emph{gradient-boosted attention}, which applies the principle of gradient boosting \emph{within} a single attention layer: a second attention pass, with its own learned projections, attends to the prediction error of the first and applies a gated correction. Under a squared reconstruction objective, the construction maps onto Friedman's gradient boosting machine, with each attention pass as a base learner and the per-dimension gate as the shrinkage parameter. We show that a single Hopfield-style update erases all query information orthogonal to the stored-pattern subspace, and that further iteration under local contraction can collapse distinct queries in the same region to the same fixed point. We also show that separate projections for the correction pass can recover residual information inaccessible to the shared-projection approach of Tukey's twicing. On a 10M-token subset of WikiText-103, gradient-boosted attention achieves a test perplexity of $67.9$ compared to $72.2$ for standard attention, $69.6$ for Twicing Attention, and $69.0$ for a parameter-matched wider baseline, with two rounds capturing most of the benefit.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/gradient-boosting-в-одном-слое-внимания-почему-простота-убивает-сложность.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>JoyAI Flash: Почему эффективность токенов важнее размера модели</title>
      <link>https://codver.ai/ru/joyai-flash-почему-эффективность-токенов-важнее-размера-модели.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/joyai-flash-почему-эффективность-токенов-важнее-размера-модели.html</guid>
      <description>arXiv:2604.03044v1 Announce Type: cross 
Abstract: We introduce JoyAI-LLM Flash, an efficient Mixture-of-Experts (MoE) language model designed to redefine the trade-off between strong performance and token efficiency in the sub-50B parameter regime. JoyAI-LLM Flash is pretrained on a massive corpus of 20 trillion tokens and further optimized through a rigorous post-training pipeline, including supervised fine-tuning (SFT), Direct Preference Optimization (DPO), and large-scale reinforcement learn</description>
      <pubDate>Mon, 06 Apr 2026 13:03:18 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.03044v1 Announce Type: cross 
Abstract: We introduce JoyAI-LLM Flash, an efficient Mixture-of-Experts (MoE) language model designed to redefine the trade-off between strong performance and token efficiency in the sub-50B parameter regime. JoyAI-LLM Flash is pretrained on a massive corpus of 20 trillion tokens and further optimized through a rigorous post-training pipeline, including supervised fine-tuning (SFT), Direct Preference Optimization (DPO), and large-scale reinforcement learning (RL) across diverse environments. To improve token efficiency, JoyAI-LLM Flash strategically balances \emph{thinking} and \emph{non-thinking} cognitive modes and introduces FiberPO, a novel RL algorithm inspired by fibration theory that decomposes trust-region maintenance into global and local components, providing unified multi-scale stability control for LLM policy optimization. To enhance architectural sparsity, the model comprises 48B total parameters while activating only 2.7B parameters per forward pass, achieving a substantially higher sparsity ratio than contemporary industry leading models of comparable scale. To further improve inference throughput, we adopt a joint training-inference co-design that incorporates dense Multi-Token Prediction (MTP) and Quantization-Aware Training (QAT). We release the checkpoints for both JoyAI-LLM-48B-A3B Base and its post-trained variants on Hugging Face to support the open-source community.

https://codver.ai/ru/joyai-flash-почему-эффективность-токенов-важнее-размера-модели.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;JoyAI Flash: Почему эффективность токенов важнее размера модели&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 13:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.03044v1 Announce Type: cross 
Abstract: We introduce JoyAI-LLM Flash, an efficient Mixture-of-Experts (MoE) language model designed to redefine the trade-off between strong performance and token efficiency in the sub-50B parameter regime. JoyAI-LLM Flash is pretrained on a massive corpus of 20 trillion tokens and further optimized through a rigorous post-training pipeline, including supervised fine-tuning (SFT), Direct Preference Optimization (DPO), and large-scale reinforcement learning (RL) across diverse environments. To improve token efficiency, JoyAI-LLM Flash strategically balances \emph{thinking} and \emph{non-thinking} cognitive modes and introduces FiberPO, a novel RL algorithm inspired by fibration theory that decomposes trust-region maintenance into global and local components, providing unified multi-scale stability control for LLM policy optimization. To enhance architectural sparsity, the model comprises 48B total parameters while activating only 2.7B parameters per forward pass, achieving a substantially higher sparsity ratio than contemporary industry leading models of comparable scale. To further improve inference throughput, we adopt a joint training-inference co-design that incorporates dense Multi-Token Prediction (MTP) and Quantization-Aware Training (QAT). We release the checkpoints for both JoyAI-LLM-48B-A3B Base and its post-trained variants on Hugging Face to support the open-source community.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/joyai-flash-почему-эффективность-токенов-важнее-размера-модели.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>FedSQ доказал: федеративное обучение работает не так, как все думали</title>
      <link>https://codver.ai/ru/fedsq-доказал-федеративное-обучение-работает-не-так-как-все-думали.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/fedsq-доказал-федеративное-обучение-работает-не-так-как-все-думали.html</guid>
      <description>arXiv:2604.02990v1 Announce Type: cross 
Abstract: Federated learning (FL) enables collaborative training across organizations without sharing raw data, but it is hindered by statistical heterogeneity (non-i.i.d. client data) and by instability of naive weight averaging under client drift. In many cross-silo deployments, FL is warm-started from a strong pretrained backbone (e.g., ImageNet-1K) and then adapted to local domains. Motivated by recent evidence that ReLU-like gating regimes (structur</description>
      <pubDate>Mon, 06 Apr 2026 12:48:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02990v1 Announce Type: cross 
Abstract: Federated learning (FL) enables collaborative training across organizations without sharing raw data, but it is hindered by statistical heterogeneity (non-i.i.d. client data) and by instability of naive weight averaging under client drift. In many cross-silo deployments, FL is warm-started from a strong pretrained backbone (e.g., ImageNet-1K) and then adapted to local domains. Motivated by recent evidence that ReLU-like gating regimes (structural knowledge) stabilize earlier than the remaining parameter values (quantitative knowledge), we propose FedSQ (Federated Structural-Quantitative learning), a transfer-initialized neural federated procedure based on a DualCopy, piecewise-linear view of deep networks. FedSQ freezes a structural copy of the pretrained model to induce fixed binary gating masks during federated fine-tuning, while only a quantitative copy is optimized locally and aggregated across rounds. Fixing the gating reduces learning to within-regime affine refinements, which stabilizes aggregation under heterogeneous partitions. Experiments on two convolutional neural network backbones under i.i.d. and Dirichlet splits show that FedSQ improves robustness and can reduce rounds-to-best validation performance relative to standard baselines while preserving accuracy in the transfer setting.

https://codver.ai/ru/fedsq-доказал-федеративное-обучение-работает-не-так-как-все-думали.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;FedSQ доказал: федеративное обучение работает не так, как все думали&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 12:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02990v1 Announce Type: cross 
Abstract: Federated learning (FL) enables collaborative training across organizations without sharing raw data, but it is hindered by statistical heterogeneity (non-i.i.d. client data) and by instability of naive weight averaging under client drift. In many cross-silo deployments, FL is warm-started from a strong pretrained backbone (e.g., ImageNet-1K) and then adapted to local domains. Motivated by recent evidence that ReLU-like gating regimes (structural knowledge) stabilize earlier than the remaining parameter values (quantitative knowledge), we propose FedSQ (Federated Structural-Quantitative learning), a transfer-initialized neural federated procedure based on a DualCopy, piecewise-linear view of deep networks. FedSQ freezes a structural copy of the pretrained model to induce fixed binary gating masks during federated fine-tuning, while only a quantitative copy is optimized locally and aggregated across rounds. Fixing the gating reduces learning to within-regime affine refinements, which stabilizes aggregation under heterogeneous partitions. Experiments on two convolutional neural network backbones under i.i.d. and Dirichlet splits show that FedSQ improves robustness and can reduce rounds-to-best validation performance relative to standard baselines while preserving accuracy in the transfer setting.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/fedsq-доказал-федеративное-обучение-работает-не-так-как-все-думали.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>ИИ убивает предпринимательскую интуицию малого бизнеса под видом помощи</title>
      <link>https://codver.ai/ru/ии-убивает-предпринимательскую-интуицию-малого-бизнеса-под-видом-помощи.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/ии-убивает-предпринимательскую-интуицию-малого-бизнеса-под-видом-помощи.html</guid>
      <description>For years Mike McClary sold the Guardian LTE Flashlight, a heavy-duty black model, online through his small outdoor brand. The product, designed for brightness and durability, became one of his most popular items ever. Even after he stopped offering it around 2017, customers kept sending him emails asking where they could buy it.&amp;#160; When McClary&amp;#8230;</description>
      <pubDate>Mon, 06 Apr 2026 12:33:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
For years Mike McClary sold the Guardian LTE Flashlight, a heavy-duty black model, online through his small outdoor brand. The product, designed for brightness and durability, became one of his most popular items ever. Even after he stopped offering it around 2017, customers kept sending him emails asking where they could buy it.&amp;#160; When McClary&amp;#8230;

https://codver.ai/ru/ии-убивает-предпринимательскую-интуицию-малого-бизнеса-под-видом-помощи.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;ИИ убивает предпринимательскую интуицию малого бизнеса под видом помощи&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 12:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;For years Mike McClary sold the Guardian LTE Flashlight, a heavy-duty black model, online through his small outdoor brand. The product, designed for brightness and durability, became one of his most popular items ever. Even after he stopped offering it around 2017, customers kept sending him emails asking where they could buy it.&amp;#160; When McClary&amp;#8230;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/ии-убивает-предпринимательскую-интуицию-малого-бизнеса-под-видом-помощи.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>OpenAI подписал подкаст-сделку: почему аудио станет могилой для текстовых моделей</title>
      <link>https://codver.ai/ru/openai-подписал-подкаст-сделку-почему-аудио-станет-могилой-для-текстовых-моделей.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-подписал-подкаст-сделку-почему-аудио-станет-могилой-для-текстовых-моделей.html</guid>
      <description>Here are five key things investors need to know to start the trading day.</description>
      <pubDate>Mon, 06 Apr 2026 12:18:21 GMT</pubDate>
      <category>news</category>
      <content:encoded>
Here are five key things investors need to know to start the trading day.

https://codver.ai/ru/openai-подписал-подкаст-сделку-почему-аудио-станет-могилой-для-текстовых-моделей.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;OpenAI подписал подкаст-сделку: почему аудио станет могилой для текстовых моделей&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 12:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;Here are five key things investors need to know to start the trading day.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-подписал-подкаст-сделку-почему-аудио-станет-могилой-для-текстовых-моделей.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Сэм Альтман: почему обвинения во лжи делают его идеальным CEO для AI-эры</title>
      <link>https://codver.ai/ru/сэм-альтман-почему-обвинения-во-лжи-делают-его-идеальным-ceo-для-ai-эры.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/сэм-альтман-почему-обвинения-во-лжи-делают-его-идеальным-ceo-для-ai-эры.html</guid>
      <description>&lt;a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted?currentPage=all"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i8.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p8#a260406p8" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.newyorker.com/"&gt;New Yorker&lt;/a&gt;:&lt;br /&gt;
&lt;span sty</description>
      <pubDate>Mon, 06 Apr 2026 12:03:23 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted?currentPage=all"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i8.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p8#a260406p8" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.newyorker.com/"&gt;New Yorker&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted?currentPage=all"&gt;Interviews with Sam Altman and 100+ people on if he can be trusted amid allegations of persistent lying and more: some defend him, others call him a sociopath&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.&lt;/p&gt;

https://codver.ai/ru/сэм-альтман-почему-обвинения-во-лжи-делают-его-идеальным-ceo-для-ai-эры.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Сэм Альтман: почему обвинения во лжи делают его идеальным CEO для AI-эры&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 12:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted?currentPage=all"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i8.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p8#a260406p8" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; &lt;a href="http://www.newyorker.com/"&gt;New Yorker&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.newyorker.com/magazine/2026/04/13/sam-altman-may-control-our-future-can-he-be-trusted?currentPage=all"&gt;Interviews with Sam Altman and 100+ people on if he can be trusted amid allegations of persistent lying and more: some defend him, others call him a sociopath&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.&lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/сэм-альтман-почему-обвинения-во-лжи-делают-его-идеальным-ceo-для-ai-эры.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Учёные переворачивают ИИ наизнанку: данные важнее алгоритмов</title>
      <link>https://codver.ai/ru/учёные-переворачивают-ии-наизнанку-данные-важнее-алгоритмов.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/учёные-переворачивают-ии-наизнанку-данные-важнее-алгоритмов.html</guid>
      <description>arXiv:2604.02889v1 Announce Type: cross 
Abstract: Data assimilation is the process of estimating the time-evolving state of a dynamical system by integrating model predictions and noisy observations. It is commonly formulated as Bayesian filtering, but classical filters often struggle with accuracy or computational feasibility in high dimensions. Recently, score-based generative models have emerged as a scalable approach for high-dimensional data assimilation, enabling accurate modeling and sam</description>
      <pubDate>Mon, 06 Apr 2026 11:48:18 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02889v1 Announce Type: cross 
Abstract: Data assimilation is the process of estimating the time-evolving state of a dynamical system by integrating model predictions and noisy observations. It is commonly formulated as Bayesian filtering, but classical filters often struggle with accuracy or computational feasibility in high dimensions. Recently, score-based generative models have emerged as a scalable approach for high-dimensional data assimilation, enabling accurate modeling and sampling of complex distributions. However, existing score-based filters often specify the forward process independently of the data assimilation. As a result, the measurement-update step depends on heuristic approximations of the likelihood score, which can accumulate errors and degrade performance over time. Here, we propose a measurement-aware score-based filter (MASF) that defines a measurement-aware forward process directly from the measurement equation. This construction makes the likelihood score analytically tractable: for linear measurements, we derive the exact likelihood score and combine it with a learned prior score to obtain the posterior score. Numerical experiments covering a range of settings, including high-dimensional datasets, demonstrate improved accuracy and stability over existing score-based filters.

https://codver.ai/ru/учёные-переворачивают-ии-наизнанку-данные-важнее-алгоритмов.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Учёные переворачивают ИИ наизнанку: данные важнее алгоритмов&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 11:48&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02889v1 Announce Type: cross 
Abstract: Data assimilation is the process of estimating the time-evolving state of a dynamical system by integrating model predictions and noisy observations. It is commonly formulated as Bayesian filtering, but classical filters often struggle with accuracy or computational feasibility in high dimensions. Recently, score-based generative models have emerged as a scalable approach for high-dimensional data assimilation, enabling accurate modeling and sampling of complex distributions. However, existing score-based filters often specify the forward process independently of the data assimilation. As a result, the measurement-update step depends on heuristic approximations of the likelihood score, which can accumulate errors and degrade performance over time. Here, we propose a measurement-aware score-based filter (MASF) that defines a measurement-aware forward process directly from the measurement equation. This construction makes the likelihood score analytically tractable: for linear measurements, we derive the exact likelihood score and combine it with a learned prior score to obtain the posterior score. Numerical experiments covering a range of settings, including high-dimensional datasets, demonstrate improved accuracy and stability over existing score-based filters.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/учёные-переворачивают-ии-наизнанку-данные-важнее-алгоритмов.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>QAPruner от исследователей: почему «оптимизация» ИИ стала эвфемизмом для деградации</title>
      <link>https://codver.ai/ru/qapruner-от-исследователей-почему-оптимизация-ии-стала-эвфемизмом-для-деградации.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/qapruner-от-исследователей-почему-оптимизация-ии-стала-эвфемизмом-для-деградации.html</guid>
      <description>arXiv:2604.02816v1 Announce Type: cross 
Abstract: Multimodal Large Language Models (MLLMs) have shown strong reasoning ability, but their high computational and memory costs hinder deployment in resource-constrained settings. While Post-Training Quantization (PTQ) and vision token pruning are standard compression techniques, they are usually treated as independent optimizations. In this paper, we show that these two techniques are strongly coupled: naively applying semantic-based token pruning </description>
      <pubDate>Mon, 06 Apr 2026 11:33:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
arXiv:2604.02816v1 Announce Type: cross 
Abstract: Multimodal Large Language Models (MLLMs) have shown strong reasoning ability, but their high computational and memory costs hinder deployment in resource-constrained settings. While Post-Training Quantization (PTQ) and vision token pruning are standard compression techniques, they are usually treated as independent optimizations. In this paper, we show that these two techniques are strongly coupled: naively applying semantic-based token pruning to PTQ-optimized MLLMs can discard activation outliers that are important for numerical stability and thus worsen quantization errors in low-bit regimes (e.g., W4A4). To address this issue, we propose a quantization-aware vision token pruning framework. Our method introduces a lightweight hybrid sensitivity metric that combines simulated group-wise quantization error with outlier intensity. By combining this metric with standard semantic relevance scores, the method retains tokens that are both semantically informative and robust to quantization. Experiments on standard LLaVA architectures show that our method consistently outperforms naive integration baselines. At an aggressive pruning ratio that retains only 12.5% of visual tokens, our framework improves accuracy by 2.24% over the baseline and even surpasses dense quantization without pruning. To the best of our knowledge, this is the first method that explicitly co-optimizes vision token pruning and PTQ for accurate low-bit MLLM inference.

https://codver.ai/ru/qapruner-от-исследователей-почему-оптимизация-ии-стала-эвфемизмом-для-деградации.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;QAPruner от исследователей: почему «оптимизация» ИИ стала эвфемизмом для деградации&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 11:33&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;arXiv:2604.02816v1 Announce Type: cross 
Abstract: Multimodal Large Language Models (MLLMs) have shown strong reasoning ability, but their high computational and memory costs hinder deployment in resource-constrained settings. While Post-Training Quantization (PTQ) and vision token pruning are standard compression techniques, they are usually treated as independent optimizations. In this paper, we show that these two techniques are strongly coupled: naively applying semantic-based token pruning to PTQ-optimized MLLMs can discard activation outliers that are important for numerical stability and thus worsen quantization errors in low-bit regimes (e.g., W4A4). To address this issue, we propose a quantization-aware vision token pruning framework. Our method introduces a lightweight hybrid sensitivity metric that combines simulated group-wise quantization error with outlier intensity. By combining this metric with standard semantic relevance scores, the method retains tokens that are both semantically informative and robust to quantization. Experiments on standard LLaVA architectures show that our method consistently outperforms naive integration baselines. At an aggressive pruning ratio that retains only 12.5% of visual tokens, our framework improves accuracy by 2.24% over the baseline and even surpasses dense quantization without pruning. To the best of our knowledge, this is the first method that explicitly co-optimizes vision token pruning and PTQ for accurate low-bit MLLM inference.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/qapruner-от-исследователей-почему-оптимизация-ии-стала-эвфемизмом-для-деградации.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>OpenAI готовит мир к сверхразуму — но план больше похож на капитуляцию</title>
      <link>https://codver.ai/ru/openai-готовит-мир-к-сверхразуму-но-план-больше-похож-на-капитуляцию.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/openai-готовит-мир-к-сверхразуму-но-план-больше-похож-на-капитуляцию.html</guid>
      <description>&lt;a href="https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i6.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p6#a260406p6" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Amrith Ramkumar / &lt;a href="https://www.wsj.com/"&gt;Wall Street Journal&lt;/a&gt;:&lt;br</description>
      <pubDate>Mon, 06 Apr 2026 11:18:20 GMT</pubDate>
      <category>news</category>
      <content:encoded>
&lt;a href="https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i6.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p6#a260406p6" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Amrith Ramkumar / &lt;a href="https://www.wsj.com/"&gt;Wall Street Journal&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b"&gt;OpenAI unveils policy proposals for a world with superintelligence: higher capital gains taxes, a public AI investment fund, strengthened safety nets, and more&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; ChatGPT maker put out policy proposals so consumers benefit from rapid advancements in artificial intelligence&lt;/p&gt;

https://codver.ai/ru/openai-готовит-мир-к-сверхразуму-но-план-больше-похож-на-капитуляцию.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;OpenAI готовит мир к сверхразуму — но план больше похож на капитуляцию&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 11:18&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i6.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p6#a260406p6" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Amrith Ramkumar / &lt;a href="https://www.wsj.com/"&gt;Wall Street Journal&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://www.wsj.com/tech/ai/what-to-know-about-openais-ideas-for-a-world-with-superintelligence-e97d6e7b"&gt;OpenAI unveils policy proposals for a world with superintelligence: higher capital gains taxes, a public AI investment fund, strengthened safety nets, and more&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; ChatGPT maker put out policy proposals so consumers benefit from rapid advancements in artificial intelligence&lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/openai-готовит-мир-к-сверхразуму-но-план-больше-похож-на-капитуляцию.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
    <item>
      <title>Джек Дорси проиграл Китаю: почему это победа для протестующих</title>
      <link>https://codver.ai/ru/джек-дорси-проиграл-китаю-почему-это-победа-для-протестующих.html</link>
      <guid isPermaLink="true">https://codver.ai/ru/джек-дорси-проиграл-китаю-почему-это-победа-для-протестующих.html</guid>
      <description>&lt;a href="https://cointelegraph.com/news/bitchat-jack-dorsey-china-app-store-removed"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i7.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p7#a260406p7" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Stephen Katte / &lt;a href="http://cointelegraph.com/"&gt;Cointelegraph&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em</description>
      <pubDate>Mon, 06 Apr 2026 11:03:21 GMT</pubDate>
      <category>breaking</category>
      <content:encoded>
&lt;a href="https://cointelegraph.com/news/bitchat-jack-dorsey-china-app-store-removed"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i7.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p7#a260406p7" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Stephen Katte / &lt;a href="http://cointelegraph.com/"&gt;Cointelegraph&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://cointelegraph.com/news/bitchat-jack-dorsey-china-app-store-removed"&gt;Jack Dorsey says Apple removed his Bluetooth P2P messaging app Bitchat, used during protests in Iran and Uganda, from China's App Store following CAC's demands&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Bitchat launched in July last year and has been used during protests in Madagascar, Uganda, Nepal, Indonesia and Iran &amp;hellip; &lt;/p&gt;

https://codver.ai/ru/джек-дорси-проиграл-китаю-почему-это-победа-для-протестующих.html
</content:encoded>
      <turbo:content turbo="true">
&lt;header&gt;
  &lt;h1&gt;Джек Дорси проиграл Китаю: почему это победа для протестующих&lt;/h1&gt;
  &lt;div class="article-meta"&gt;
    &lt;span&gt;Codver.AI&lt;/span&gt;
    &lt;time&gt;06.04.2026 11:03&lt;/time&gt;
  &lt;/div&gt;
&lt;/header&gt;
&lt;p&gt;&lt;a href="https://cointelegraph.com/news/bitchat-jack-dorsey-china-app-store-removed"&gt;&lt;img align="RIGHT" border="0" hspace="4" src="http://www.techmeme.com/260406/i7.jpg" vspace="4" /&gt;&lt;/a&gt;
&lt;p&gt;&lt;a href="http://www.techmeme.com/260406/p7#a260406p7" title="Techmeme permalink"&gt;&lt;img height="12" src="http://www.techmeme.com/img/pml.png" style="border: none; padding: 0; margin: 0;" width="11" /&gt;&lt;/a&gt; Stephen Katte / &lt;a href="http://cointelegraph.com/"&gt;Cointelegraph&lt;/a&gt;:&lt;br /&gt;
&lt;span style="font-size: 1.3em;"&gt;&lt;b&gt;&lt;a href="https://cointelegraph.com/news/bitchat-jack-dorsey-china-app-store-removed"&gt;Jack Dorsey says Apple removed his Bluetooth P2P messaging app Bitchat, used during protests in Iran and Uganda, from China's App Store following CAC's demands&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&amp;nbsp; &amp;mdash;&amp;nbsp; Bitchat launched in July last year and has been used during protests in Madagascar, Uganda, Nepal, Indonesia and Iran &amp;hellip; &lt;/p&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href="https://codver.ai/ru/джек-дорси-проиграл-китаю-почему-это-победа-для-протестующих.html"&gt;Читать на сайте&lt;/a&gt;&lt;/p&gt;
&lt;footer&gt;
  &lt;p&gt;&lt;em&gt;Codver AI Platform — AI-generated content, verified by humans.&lt;/em&gt;&lt;/p&gt;
&lt;/footer&gt;
</turbo:content>
    </item>
  </channel>
</rss>