<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
	<channel>
		<title>Aspro: LightShop [topic: Ceph OSD crash]</title>
		<link>http://proxmox.su</link>
		<description>New posts in the Ceph OSD crash topic of the Proxmox Virtual Environment forum on the Aspro: LightShop site [proxmox.su]</description>
		<language>en</language>
		<docs>http://backend.userland.com/rss2</docs>
		<pubDate>Sun, 03 May 2026 20:32:34 +0300</pubDate>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354141/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			Ceph 14.2.16 has been released, but the patch has not been merged yet. <br />
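			A release gate like the one discussed here can be checked mechanically. A minimal sketch (the 14.2.16 target comes from this thread, not from the tracker; verify the actual fix version there):
```python
# Compare a running Ceph version string against the release expected to
# carry the fix. 14.2.16 is the version guessed in this thread.

def parse_version(v: str) -> tuple:
    """Turn '14.2.11' into (14, 2, 11) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def has_fix(running: str, fixed_in: str = "14.2.16") -> bool:
    return parse_version(running) >= parse_version(fixed_in)

# The crash reports in this thread come from 14.2.11:
print(has_fix("14.2.11"))  # False
print(has_fix("14.2.16"))  # True
```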
			<i>21.12.2020 10:51:00, Alwin.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354141/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354141/81108-ceph-osd-avariya</guid>
			<pubDate>Mon, 21 Dec 2020 10:51:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354140/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			There is progress on this: <noindex><a href="https://tracker.ceph.com/issues/48276#note-32" target="_blank" rel="nofollow" >https://tracker.ceph.com/issues/48276#note-32</a></noindex> The PR has not been merged into the main branch yet, so I guess we will only see this (important) fix in 14.2.16 or later. <br />
			<i>19.12.2020 11:28:00, Lephisto.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354140/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354140/81108-ceph-osd-avariya</guid>
			<pubDate>Sat, 19 Dec 2020 11:28:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354139/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			Just an update: I filed an issue on the Ceph Redmine. There is a proposed patch that adds verbose logging when this particular error occurs, but it is still unclear when it will be backported to 14.x. <noindex><a href="https://tracker.ceph.com/issues/48276" target="_blank" rel="nofollow" >https://tracker.ceph.com/issues/48276</a></noindex> See you soon. <br />
			<i>23.11.2020 13:52:00, Lephisto.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354139/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354139/81108-ceph-osd-avariya</guid>
			<pubDate>Mon, 23 Nov 2020 13:52:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354138/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			*bump* this happened a second time, on a different node, within 24 hours. <br />
			<i>18.11.2020 21:31:00, Lephisto.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354138/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354138/81108-ceph-osd-avariya</guid>
			<pubDate>Wed, 18 Nov 2020 21:31:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354137/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			Hi, it looks like I have the same problem. OSD crashes with no obvious hardware problems: Code: root@X# ceph crash info 2020-11-18_02:24:35.429967Z_800333e3-630a-406b-9a0e-c7c345336087<br />{<br /> &nbsp; &nbsp;"os_version_id": "10",<br /> &nbsp; &nbsp;"utsname_machine": "x86_64",<br /> &nbsp; &nbsp;"entity_name": "osd.29",<br /> &nbsp; &nbsp;"backtrace": [<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(()+0x12730) [0x7fb4df645730]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(gsignal()+0x10b) [0x7fb4df1287bb]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(abort()+0x121) [0x7fb4df113535]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a3) [0x55def3ba0419]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(()+0x5115a0) [0x55def3ba05a0]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(KernelDevice::aio_write(unsigned long, ceph::buffer::v14_2_0::list&, IOContext*, bool, int)+0x90) [0x55def4214570]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(BlueStore::_do_alloc_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;, boost::intrusive_ptr&lt;BlueStore::Onode&gt;, BlueStore::WriteContext*)+0x2237) [0x55def40f4247]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;&, boost::intrusive_ptr&lt;BlueStore::Onode&gt;, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0x318) [0x55def411cef8]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;&, boost::intrusive_ptr&lt;BlueStore::Onode&gt;&, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0xda) [0x55def411ddfa]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x1671) [0x55def4121481]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(BlueStore::queue_transactions(boost::intrusive_ptr&lt;ObjectStore::CollectionImpl&gt;&, 
std::vector&lt;ObjectStore::Transaction, std::allocator&lt;ObjectStore::Transaction&gt; &gt;&, boost::intrusive_ptr&lt;TrackedOp&gt;, ThreadPool::TPHandle*)+0x3c8) [0x55def4122eb8]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector&lt;ObjectStore::Transaction, std::allocator&lt;ObjectStore::Transaction&gt; &gt;&, boost::intrusive_ptr&lt;OpRequest&gt;)+0x54) [0x55def3e908b4]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(ReplicatedBackend::do_repop(boost::intrusive_ptr&lt;OpRequest&gt;)+0xdf8) [0x55def3f89978]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(ReplicatedBackend::_handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x267) [0x55def3f97ab7]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(PGBackend::handle_message(boost::intrusive_ptr&lt;OpRequest&gt;)+0x57) [0x55def3ea8e17]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(PrimaryLogPG::do_request(boost::intrusive_ptr&lt;OpRequest&gt;&, ThreadPool::TPHandle&)+0x61f) [0x55def3e5784f]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(OSD::dequeue_op(boost::intrusive_ptr&lt;PG&gt;, boost::intrusive_ptr&lt;OpRequest&gt;, ThreadPool::TPHandle&)+0x392) [0x55def3c83f02]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr&lt;PG&gt;&, ThreadPool::TPHandle&)+0x62) [0x55def3f27e92]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x7d7) [0x55def3c9fba7]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b4) [0x55def426c0c4]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x55def426ead0]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(()+0x7fa3) [0x7fb4df63afa3]",<br /> &nbsp; &nbsp; &nbsp; &nbsp;"(clone()+0x3f) [0x7fb4df1ea4cf]"<br /> &nbsp; &nbsp;],<br /> &nbsp; &nbsp;"assert_line": 864,<br /> &nbsp; &nbsp;"utsname_release": "5.4.65-1-pve",<br /> &nbsp; &nbsp;"assert_file": "/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc",<br /> 
&nbsp; &nbsp;"utsname_sysname": "Linux",<br /> &nbsp; &nbsp;"os_version": "10 (buster)",<br /> &nbsp; &nbsp;"os_id": "10",<br /> &nbsp; &nbsp;"assert_thread_name": "tp_osd_tp",<br /> &nbsp; &nbsp;"assert_msg": "/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc: В функции 'virtual int KernelDevice::aio_write(uint64_t, ceph::bufferlist&, IOContext*, bool, int)' поток 7fb4c06c2700 время 2020-11-18 03:24:35.419736\n/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc: 864: FAILED ceph_assert(is_valid_io(off, len))\n",<br /> &nbsp; &nbsp;"assert_func": "virtual int KernelDevice::aio_write(uint64_t, ceph::bufferlist&, IOContext*, bool, int)",<br /> &nbsp; &nbsp;"ceph_version": "14.2.11",<br /> &nbsp; &nbsp;"os_name": "Debian GNU/Linux 10 (buster)",<br /> &nbsp; &nbsp;"timestamp": "2020-11-18 02:24:35.429967Z",<br /> &nbsp; &nbsp;"process_name": "ceph-osd",<br /> &nbsp; &nbsp;"archived": "2020-11-18 10:14:48.914391",<br /> &nbsp; &nbsp;"utsname_hostname": "X",<br /> &nbsp; &nbsp;"crash_id": "2020-11-18_02:24:35.429967Z_800333e3-630a-406b-9a0e-c7c345336087",<br /> &nbsp; &nbsp;"assert_condition": "is_valid_io(off, len)",<br /> &nbsp; &nbsp;"utsname_version": "#1 SMP PVE 5.4.65-1 (Пн, 21 Сен 2020 15:40:22 +0200)"<br />} <br />
			<i>18.11.2020 11:23:00, Lephisto.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354137/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354137/81108-ceph-osd-avariya</guid>
			<pubDate>Wed, 18 Nov 2020 11:23:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
		<item>
			<title>Ceph OSD crash</title>
			<description><![CDATA[<b><a href="http://proxmox.su/forum/messages/forum63/message354136/81108-ceph-osd-avariya">Ceph OSD crash</a></b> <i>Proxmox Virtual Environment</i> in the <a href="http://proxmox.su/forum/forum63/">Proxmox Virtual Environment</a> forum. <br />
			Hi! Has anyone run into a similar problem? After upgrading to ceph-14.2.11, OSDs crash randomly; this problem has happened twice: ceph crash info 2020-11-03_04:50:37.808243Z_e8e9fd54-27a2-4039-82ff-e13d3e7ca40b { "os_version_id": "10", "assert_condition": "is_valid_io(off, len)", "utsname_release": "5.4.65-1-pve", "os_name": "Debian GNU/Linux 10 (buster)", "entity_name": "osd.13", "assert_file": "/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc", "timestamp": "2020-11-03 04:50:37.808243Z", "process_name": "ceph-osd", "utsname_machine": "x86_64", "assert_line": 864, "utsname_sysname": "Linux", "os_version": "10 (buster)", "os_id": "10", "assert_thread_name": "tp_osd_tp", "utsname_version": "#1 SMP PVE 5.4.65-1 (Mon, 21 Sep 2020 15:40:22 +0200)", "backtrace": [ "(()+0x12730) [0x7fa137293730]", "(gsignal()+0x10b) [0x7fa136d767bb]", "(abort()+0x121) [0x7fa136d61535]", "(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a3) [0x557b97d2d419]", "(()+0x5115a0) [0x557b97d2d5a0]", "(KernelDevice::aio_write(unsigned long, ceph::buffer::v14_2_0::list&, IOContext*, bool, int)+0x90) [0x557b983a1570]", "(BlueStore::_do_alloc_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;, boost::intrusive_ptr&lt;BlueStore::Onode&gt;, BlueStore::WriteContext*)+0x2237) [0x557b98281247]", "(BlueStore::_do_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;&, boost::intrusive_ptr&lt;BlueStore::Onode&gt;, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0x318) [0x557b982a9ef8]", "(BlueStore::_write(BlueStore::TransContext*, boost::intrusive_ptr&lt;BlueStore::Collection&gt;&, boost::intrusive_ptr&lt;BlueStore::Onode&gt;&, unsigned long, unsigned long, ceph::buffer::v14_2_0::list&, unsigned int)+0xda) [0x557b982aadfa]", "(BlueStore::_txc_add_transaction(BlueStore::TransContext*, ObjectStore::Transaction*)+0x1671) [0x557b982ae481]", 
"(BlueStore::queue_transactions(boost::intrusive_ptr&lt;ObjectStore::CollectionImpl&gt;&, std::vector&lt;ObjectStore::Transaction, std::allocator&lt;ObjectStore::Transaction&gt; &gt;&, boost::intrusive_ptr&lt;TrackedOp&gt;, ThreadPool::TPHandle*)+0x3c8) [0x557b982afeb8]", "(non-virtual thunk to PrimaryLogPG::queue_transactions(std::vector&lt;ObjectStore::Transaction, std::allocator&lt;ObjectStore::Transaction&gt; &gt;&, boost::intrusive_ptr&lt;OpRequest&gt;)+0x54) [0x557b9801d8b4]", "(ReplicatedBackend::submit_transaction(hobject_t const&, object_stat_sum_t const&, eversion_t const&, std::unique_ptr&lt;PGTransaction, std::default_delete&lt;PGTransaction&gt; &gt;&&, eversion_t const&, eversion_t const&, std::vector&lt;pg_log_entry_t, std::allocator&lt;pg_log_entry_t&gt; &gt; const&, boost::optional&lt;pg_hit_set_history_t&gt;&, Context*, unsigned long, osd_reqid_t, boost::intrusive_ptr&lt;OpRequest&gt;)+0x644) [0x557b981133f4]", "(PrimaryLogPG::issue_repop(PrimaryLogPG::RepGather*, PrimaryLogPG::OpContext*)+0x102a) [0x557b97f7e0da]", "(PrimaryLogPG::execute_ctx(PrimaryLogPG::OpContext*)+0x110c) [0x557b97fdf26c]", "(PrimaryLogPG::do_op(boost::intrusive_ptr&lt;OpRequest&gt;&)+0x3101) [0x557b97fe2ba1]", "(PrimaryLogPG::do_request(boost::intrusive_ptr&lt;OpRequest&gt;&, ThreadPool::TPHandle&)+0xd77) [0x557b97fe4fa7]", "(OSD::dequeue_op(boost::intrusive_ptr&lt;PG&gt;, boost::intrusive_ptr&lt;OpRequest&gt;, ThreadPool::TPHandle&)+0x392) [0x557b97e10f02]", "(PGOpItem::run(OSD*, OSDShard*, boost::intrusive_ptr&lt;PG&gt;&, ThreadPool::TPHandle&)+0x62) [0x557b980b4e92]", "(OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x7d7) [0x557b97e2cba7]", "(ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x5b4) [0x557b983f90c4]", "(ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x557b983fbad0]", "(()+0x7fa3) [0x7fa137288fa3]", "(clone()+0x3f) [0x7fa136e384cf]" ], "utsname_hostname": "xxxxxxx", "assert_msg": 
"/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc: In function 'virtual int KernelDevice::aio_write(uint64_t, ceph::bufferlist&, IOContext*, bool, int)' thread 7fa109af2700 time 2020-11-03 05:50:37.797725\n/build/ceph-JY24tx/ceph-14.2.11/src/os/bluestore/KernelDevice.cc: 864: FAILED ceph_assert(is_valid_io(off, len))\n", "crash_id": "2020-11-03_04:50:37.808243Z_e8e9fd54-27a2-4039-82ff-e13d3e7ca40b", "assert_func": "virtual int KernelDevice::aio_write(uint64_t, ceph::bufferlist&, IOContext*, bool, int)", "ceph_version": "14.2.11" } <br />
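			This report and the later one from osd.29 fail on the same assertion, which is what points to a common bug rather than two bad disks. A minimal sketch of confirming that programmatically (assumes the reports were already fetched as dicts, e.g. from `ceph crash info`; the sample data below mirrors this thread):
```python
from collections import defaultdict

def group_by_assert(reports):
    """Group crash reports by failed assertion to spot a shared root cause."""
    groups = defaultdict(list)
    for r in reports:
        groups[r["assert_condition"]].append(r["entity_name"])
    return dict(groups)

# Two crashes on different OSDs, as in this thread:
reports = [
    {"entity_name": "osd.13", "assert_condition": "is_valid_io(off, len)"},
    {"entity_name": "osd.29", "assert_condition": "is_valid_io(off, len)"},
]
print(group_by_assert(reports))  # {'is_valid_io(off, len)': ['osd.13', 'osd.29']}
```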
			<i>03.11.2020 10:24:00, szucs10.</i>]]></description>
			<link>http://proxmox.su/forum/messages/forum63/message354136/81108-ceph-osd-avariya</link>
			<guid>http://proxmox.su/forum/messages/forum63/message354136/81108-ceph-osd-avariya</guid>
			<pubDate>Tue, 03 Nov 2020 10:24:00 +0300</pubDate>
			<category>Proxmox Virtual Environment</category>
		</item>
	</channel>
</rss>
