#ceph IRC Log


IRC Log for 2016-08-17

Timestamps are in GMT/BST.

[0:01] * oarra (~rorr@45.73.146.238) Quit (Quit: oarra)
[0:13] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[0:14] * ircolle (~Adium@2601:285:201:633a:3114:56ea:8433:8286) Quit (Quit: Leaving.)
[0:16] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[0:17] * boredatwork (~overonthe@199.68.193.62) Quit (Read error: No route to host)
[0:17] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[0:17] * boredatwork (~overonthe@199.68.193.62) has joined #ceph
[0:18] * boredatwork (~overonthe@199.68.193.62) Quit (Read error: No route to host)
[0:18] * rburkholder (~overonthe@199.68.193.54) Quit (Read error: No route to host)
[0:18] * boredatwork (~overonthe@199.68.193.62) has joined #ceph
[0:18] * rburkholder (~overonthe@199.68.193.62) has joined #ceph
[0:23] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:24] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[0:32] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[0:42] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) Quit (Ping timeout: 480 seconds)
[0:43] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[0:44] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Remote host closed the connection)
[0:50] * kuku (~kuku@119.93.91.136) has joined #ceph
[0:58] * danieagle (~Daniel@179.97.148.125) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:01] * spgriffinjr (~spgriffin@66.46.246.206) Quit (Read error: Connection reset by peer)
[1:06] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:08] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[1:08] * xarses (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[1:09] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) has joined #ceph
[1:11] <jiffe> I'm thinking it makes more sense to use raid on all disks in a machine than one osd per disk
[1:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:12] <jiffe> recovering from a disk failure in raid is a lot faster than moving PGs
[1:13] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[1:14] <jiffe> most my IO seems to be from xfsaild
[1:15] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:16] * northrup (~northrup@201.103.87.199) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:19] * dbbyleo (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[1:25] * Drumplayr (~thomas@r74-192-135-250.gtwncmta01.grtntx.tl.dh.suddenlink.net) Quit (Quit: leaving)
[1:29] * cathode (~cathode@50.232.215.114) has joined #ceph
[1:39] * oms101 (~oms101@p20030057EA5F8100C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:40] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[1:42] * tunaaja (~Azru@tor-exit.squirrel.theremailer.net) has joined #ceph
[1:42] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:42] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[1:43] * srk (~Siva@2605:6000:ed04:ce00:a811:fc1f:1257:5a2b) has joined #ceph
[1:44] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:48] * oms101 (~oms101@p20030057EA679200C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:51] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[1:53] * sfrode_ (frode@sandholtbraaten.com) Quit (Ping timeout: 480 seconds)
[1:53] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:58] * jklare (~jklare@185.27.181.36) has joined #ceph
[1:59] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[2:02] * KindOne (sillyfool@0001a7db.user.oftc.net) Quit (Quit: ...)
[2:03] * srk (~Siva@2605:6000:ed04:ce00:a811:fc1f:1257:5a2b) Quit (Ping timeout: 480 seconds)
[2:07] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[2:07] * cathode (~cathode@50.232.215.114) Quit (Quit: Leaving)
[2:09] * jarrpa (~jarrpa@67.224.250.2) Quit (Ping timeout: 480 seconds)
[2:11] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[2:12] * tunaaja (~Azru@5AEAAA1GB.tor-irc.dnsbl.oftc.net) Quit ()
[2:15] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[2:20] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) has joined #ceph
[2:26] <The_Ball> Does anyone know what this means on one of the mon servers? 7f9e4c885700 -1 mon.node1@0(leader).mds e425 Missing health data for MDS 8457711
[2:27] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[2:27] <The_Ball> The two other mons are not reporting this, and if I "ceph-deploy mon destroy" the node the other nodes still stay "silent". Re-create the mon and the message reappears in the log every few seconds
[2:32] * Jeffrey4l (~Jeffrey@110.252.40.176) has joined #ceph
[2:33] * Jeffrey4l (~Jeffrey@110.252.40.176) Quit (Read error: No route to host)
[2:37] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Read error: No route to host)
[2:38] * KindOne (sillyfool@h125.161.186.173.dynamic.ip.windstream.net) has joined #ceph
[2:41] * chunmei (~chunmei@134.134.139.72) Quit (Remote host closed the connection)
[2:49] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Remote host closed the connection)
[2:49] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[2:49] * huangjun|2 (~kvirc@113.57.168.154) has joined #ceph
[2:55] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) has joined #ceph
[2:59] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[3:00] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[3:09] * derjohn_mobi (~aj@x4db25baf.dyn.telefonica.de) has joined #ceph
[3:11] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Quit: Leaving)
[3:16] * derjohn_mob (~aj@x4db0fe13.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:30] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:34] * sankarshan (~sankarsha@106.216.183.133) has joined #ceph
[3:36] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[3:40] * Neon (~tZ@tor2r.ins.tor.net.eu.org) has joined #ceph
[3:42] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:44] <jiffe> seems this recovery has slowed to a crawl again
[3:56] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:00] * vbellur (~vijay@71.234.224.255) has joined #ceph
[4:01] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[4:03] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:06] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[4:10] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) has joined #ceph
[4:10] * Neon (~tZ@5AEAAA1IP.tor-irc.dnsbl.oftc.net) Quit ()
[4:10] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:15] * kefu (~kefu@114.92.101.38) has joined #ceph
[4:16] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:22] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:34] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[4:40] * kuku (~kuku@119.93.91.136) has joined #ceph
[4:50] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) has joined #ceph
[4:52] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:52] * Hemanth (~hkumar_@103.228.221.179) Quit (Quit: Leaving)
[4:54] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:55] * davidzlap (~Adium@2605:e000:1313:8003:218b:daed:f39e:f090) Quit (Quit: Leaving.)
[4:56] * efirs1 (~firs@98.207.153.155) has joined #ceph
[4:56] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[5:04] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[5:07] * efirs1 (~firs@98.207.153.155) Quit (Ping timeout: 480 seconds)
[5:13] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:15] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) Quit (Quit: Leaving.)
[5:18] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:27] * jermudgeon (~jhaustin@tab.mdu.whitestone.link) Quit (Quit: jermudgeon)
[5:29] * Vacuum__ (~Vacuum@88.130.220.189) has joined #ceph
[5:36] * Vacuum_ (~Vacuum@88.130.212.33) Quit (Ping timeout: 480 seconds)
[5:39] * vimal (~vikumar@114.143.167.9) has joined #ceph
[5:53] * kefu is now known as kefu|afk
[6:01] * walcubi_ (~walcubi@p5795AC49.dip0.t-ipconnect.de) has joined #ceph
[6:08] * walbuci (~walcubi@p5795AE54.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:08] * [0x4A6F]_ (~ident@p4FC279D3.dip0.t-ipconnect.de) has joined #ceph
[6:08] * kefu|afk is now known as kefu
[6:09] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:09] * [0x4A6F]_ is now known as [0x4A6F]
[6:15] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[6:17] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:18] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) has joined #ceph
[6:18] * bene2 (~bene@2601:193:4101:f410:ea2a:eaff:fe08:3c7a) Quit ()
[6:22] * T1w (~jens@node3.survey-it.dk) Quit (Remote host closed the connection)
[6:29] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[6:29] * vimal (~vikumar@114.143.167.9) Quit (Quit: Leaving)
[6:32] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[6:36] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:40] * wes_dillingham (~wes_dilli@209-6-222-74.c3-0.hdp-ubr1.sbo-hdp.ma.cable.rcn.com) Quit (Quit: wes_dillingham)
[6:46] * vikhyat (~vumrao@114.143.252.19) has joined #ceph
[6:47] * swami1 (~swami@49.38.1.168) has joined #ceph
[6:50] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:59] * sankarshan (~sankarsha@106.216.183.133) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[7:07] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:08] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[7:11] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[7:12] * kuku (~kuku@119.93.91.136) has joined #ceph
[7:19] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[7:29] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[7:40] * toastyde1th (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:44] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[7:50] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Read error: Connection reset by peer)
[7:50] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[7:52] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[7:55] * sebastian-w_ (~quassel@212.218.8.139) has joined #ceph
[7:55] * sebastian-w (~quassel@212.218.8.138) Quit (Read error: Connection reset by peer)
[8:05] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[8:06] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:08] * TMM (~hp@185.5.121.201) has joined #ceph
[8:14] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:18] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) has joined #ceph
[8:21] * vikhyat_ (~vumrao@1.39.19.98) has joined #ceph
[8:22] * JohnPreston78 (sid31393@id-31393.ealing.irccloud.com) Quit (Read error: Connection reset by peer)
[8:23] * JohnPreston78 (sid31393@id-31393.ealing.irccloud.com) has joined #ceph
[8:25] * vikhyat (~vumrao@114.143.252.19) Quit (Ping timeout: 480 seconds)
[8:25] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[8:31] * karnan (~karnan@106.51.128.173) has joined #ceph
[8:36] * onyb_ (~ani07nov@119.82.105.66) has joined #ceph
[8:36] * onyb (~ani07nov@119.82.105.66) Quit (Read error: Connection reset by peer)
[8:37] * sfrode (frode@sandholtbraaten.com) has joined #ceph
[8:38] * saintpablo (~saintpabl@gw01.mhitp.dk) has joined #ceph
[8:39] * saintpablo (~saintpabl@gw01.mhitp.dk) Quit ()
[8:42] * sfrode (frode@sandholtbraaten.com) Quit ()
[8:42] * sfrode (frode@sandholtbraaten.com) has joined #ceph
[8:42] * derjohn_mobi (~aj@x4db25baf.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[8:42] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) Quit (Ping timeout: 480 seconds)
[8:46] * andrew (~andrew@38.83.109.50) has joined #ceph
[8:46] * andrew (~andrew@38.83.109.50) has left #ceph
[8:49] * vikhyat_ is now known as vikhyat
[8:49] * Jeffrey4l (~Jeffrey@110.252.40.176) has joined #ceph
[8:51] <IcePic> jiffe: in general, putting RAID under ceph (or zfs, or other filesystems that handle checksumming and spreading copies themselves) isn't the most efficient approach, since you would be duplicating data more for less usable disk.
[8:51] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[8:51] * LiamM (~liam.monc@mail.moncur.eu) Quit (Quit: leaving)
[8:53] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[8:53] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[8:54] <IcePic> you would need to have ceph copies on more than one box anyhow, and if all boxes used raid1 for all disks, any ceph object would need to be on two or more ceph machines, and since R1, would end up on two disks per ceph box
[8:55] <IcePic> so any object would be on 4 disks; in that case you could as well just ask ceph to keep 4 copies of each object and spend the same amount of disk
[8:56] <IcePic> and raid1 rebuilds might (depending on the raid controller code, I guess) rebuild the whole disk even if it was only 25% filled, so it would put load on the disks while rebuilding for 100% of the size of the disk
[8:56] <IcePic> if you had ceph make 3 extra copies of any object, it could still serve clients using those 3 while rebuilding the fourth copy again
[8:56] <IcePic> for a single lost disk
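To make IcePic's arithmetic concrete (the figures are illustrative, not from the log): with RAID1 under every OSD and a replicated pool of size 2, 100 TB of raw disk yields only about 25 TB usable, since RAID1 halves it and 2x replication halves it again. That is the same usable capacity as a size=4 pool on bare disks, where ceph could instead spread the four copies across four separate hosts (assuming at least four hosts exist).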
[8:59] <Be-El> IcePic: speaking of raid....what about using raid-0 for osd disks? using a small raid stripe size (e.g. 64kb) should speed up reads, since most rados objects have a size of 4 mb and more disks can be involved in reading it
[9:00] * Nicho1as is curious
[9:00] <Gugge-47527> why would that speed up reads, if you have many reads on your cluster it would read from most or all disks all the time anyway :)
[9:00] <IcePic> there is nothing to prevent you from using raids, if you have $$$ to spend and want something else than what ceph provides. I just noted that the security scenario is something ceph (and zfs and so on) handles rather well, so you should normally hand over all devices to them and let them figure out the "raiding"
[9:01] * LiamM (~liam.monc@mail.moncur.eu) Quit (Quit: leaving)
[9:01] <IcePic> Gugge-47527: in theory a raid card for R0 would presumably be able to offload some small low-level part of reading and combining I/O for its R0. In theory
[9:01] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[9:01] <Gugge-47527> sure
[9:02] <IcePic> In general, I think Gugge-47527 is right, the disks will all be adding performance to your ceph
[9:02] * LiamM (~liam.monc@mail.moncur.eu) Quit ()
[9:02] <Gugge-47527> a single read from a single object in a non busy cluster will be faster from raid0 osd's :)
[9:02] <Gugge-47527> but is that gonna happen in real life? :)
[9:03] <Be-El> Gugge-47527: in case of multiple concurrent accesses you are right, the reads will already be distributed across the osd disks. but single thread performance (e.g. reading a file in cephfs) is restricted by the speed of the underlying OSD.
[9:03] <IcePic> I think that ceph clusters have a way of surprising people on how stuff gets used and whats fastest.
[9:03] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[9:04] <IcePic> like when people take ssd/nvme and partition them so that they look like 4 disks, to get 4 queues in parallel instead of having one big cache on it
[9:04] <Gugge-47527> Be-El: correct ... but if your cluster is busy, that osd is serving more objects at the same time
[9:04] <Be-El> IcePic: unfortunately most users do not understand how to use a ceph cluster efficiently. they want single thread performance
[9:04] * bjarne (~saint@battlecruiser.thesaint.se) has joined #ceph
[9:04] <Gugge-47527> and with the raid0 you have half the osd's to spread the load over
[9:04] <IcePic> yeah, we get that from our customers here too. They expect benchmarks to be super from day 1 and if its not super now, it will degrade for VM 2,3,4,...
[9:05] <Gugge-47527> if single thread performance is what you need, fine :)
[9:05] <IcePic> as if it was an iscsi box from vendor X
[9:05] <Be-El> Gugge-47527: but the raid controller might also enable the osd/kernel/block device queue to perform multiple read operations
[9:05] <Gugge-47527> it would, but in the end you are limited to what the disks can do
[9:06] <Gugge-47527> one raid0 osd is faster than one non raid osd.
[9:06] * analbeard (~shw@support.memset.com) has joined #ceph
[9:07] <Gugge-47527> and if you need a fast single threaded read, you could use small objects, and readahead on the client :)
[9:08] <IcePic> Be-El: but we dont know if $500 on a 1G ram raid card would give more overall perf than $500 worth of RAM for general I/O caches on the box
[9:08] <Be-El> i'm already using 8mb readahead for cephfs mounts
[9:08] <IcePic> (Adjust cost to what a decent raid card costs, havent bought one in a while)
[9:08] <Be-El> 500 bucks should be fine for a current lsi controller
[9:09] <IcePic> so, my "on an envelope" calcs say that you get quite a lot of server RAM for that amount
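For the client-side readahead Gugge-47527 and Be-El mention, a minimal sketch, assuming the kernel cephfs client; the monitor address, secret file and the 8 MB value are placeholders, not from the log. The readahead size is passed in bytes via the rasize mount option:

    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=8388608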
[9:10] * branto (~branto@nat-pool-brq-t.redhat.com) has joined #ceph
[9:10] <Be-El> cache on the raid controller + a BBU might allow you to disable write barriers on OSD file systems, which should speed up some operations. but i haven't tested this yet
[9:11] * bara (~bara@213.175.37.12) has joined #ceph
[9:11] * bjarne (~saint@battlecruiser.thesaint.se) Quit (Quit: leaving)
[9:12] <IcePic> yeah. I just dont know how much easier it would be on spinning disks if you instead had say 64 or 128G more ram, which in turn would mean far less reads on frequent data and leave more time on the platters for writes
[9:13] * vikhyat_ (~vumrao@49.248.169.37) has joined #ceph
[9:13] <IcePic> or more opportunity for write combining and/or intelligent sorting of outgoing writes for optimal performance.
[9:14] <IcePic> overall, I think I'm just saying that handling a cluster like ceph is quite different from what I've been doing the last 20 years of storage, when it was almost always confined to a single box, or single SAN machinery
[9:14] <IcePic> old truths aren't all valid anymore. One needs to benchmark again and test old assumptions
[9:15] * vikhyat_ (~vumrao@49.248.169.37) Quit ()
[9:16] * vikhyat_ (~vumrao@49.248.169.37) has joined #ceph
[9:16] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[9:17] * vikhyat (~vumrao@1.39.19.98) Quit (Ping timeout: 480 seconds)
[9:19] * rendar (~I@host106-97-dynamic.182-80-r.retail.telecomitalia.it) has joined #ceph
[9:21] * DJComet (~aleksag@46.166.186.248) has joined #ceph
[9:22] * bjarne (~saint@battlecruiser.thesaint.se) has joined #ceph
[9:24] <Be-El> IcePic: oh yes.... i've been trying to convince my colleagues that our new cloud
[9:25] * derjohn_mobi (~aj@2001:6f8:1337:0:1917:d084:1a82:58ed) has joined #ceph
[9:25] <Be-El> IcePic: oh yes.... i've been trying to convince my colleagues that using a central core switch for our new cloud's network infrastructure might not be the best idea nowadays...
[9:26] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[9:30] * ade (~abradshaw@p4FF7A7FC.dip0.t-ipconnect.de) has joined #ceph
[9:31] * Pulp (~Pulp@63-221-50-195.dyn.estpak.ee) Quit (Read error: Connection reset by peer)
[9:34] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:36] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[9:37] * bjarne (~saint@battlecruiser.thesaint.se) Quit (Quit: leaving)
[9:40] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[9:41] <ronrib> ok what's going on, I'm getting super slow reads on flushed data stored on an erasure pool with a ssd cache tier. I've been poking around and I find 'osd_tier_promote_max_bytes_sec = 25' as a default setting
[9:41] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[9:41] <ronrib> do all reads need to be promoted if the data has been flushed from the cache tier to the erasure pool?
[9:42] <ronrib> in that case, won't that setting limit the per osd read rate to 25 bytes per second?
[9:49] <ronrib> humm looks like reads don't get promoted usually
[9:49] * onyb_ (~ani07nov@119.82.105.66) Quit (Quit: raise SystemExit())
[9:49] <Be-El> ronrib: flushed data should still be present in the cache tier. are you talking about evicted objects (which means flushed + removed from cache)?
[9:50] <ronrib> yeah evicted sorry
[9:51] * DJComet (~aleksag@26XAAA4VD.tor-irc.dnsbl.oftc.net) Quit ()
[9:51] <Be-El> ronrib: which ceph release do you use? hammer? jewel?
[9:51] <ronrib> i'm getting crazy slow reads, like 20MB/s when the tier can manage 2.5GB/s with no cache tier
[9:51] <ronrib> jewel
[9:52] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[9:52] <ronrib> trying for the cephfs on a ssd cache pool & 2+1 spinning disk erasure pool
[9:52] <ronrib> but even with rbd it's slow
[9:52] <ronrib> it's fine with data sitting in cache though
[9:54] <Be-El> jewel has changed a lot with respect to cache tiers (promotion throttling, proxy reads etc.), and i haven't used it yet
[9:54] <ronrib> I think I'm too close to the cutting edge :V
[9:54] <Be-El> there's been a thread on the mailing list recently about cache tier setup, maybe you can find more information there
[9:55] <ronrib> I'll keep my eye out for it, thanks Be-El
[9:56] <Be-El> ronrib: search for posts from Christian Balzer, he has a number of in-depth threads including the cache tier setup one
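For what it's worth, the promotion throttle ronrib found can be inspected and raised at runtime; a hedged sketch, where osd id 0 and the 5 MB/s value are just examples, and whether raising it is the right fix is exactly what the mailing list thread discusses:

    ceph daemon osd.0 config get osd_tier_promote_max_bytes_sec             # current value on one osd, via its admin socket
    ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 5242880'   # raise it cluster-wide until the next restart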
[9:56] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Quit: Ex-Chat)
[9:56] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[10:01] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:01] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[10:06] * DanFoster (~Daniel@2a00:1ee0:3:1337:c155:570c:3e46:6941) has joined #ceph
[10:13] * vikhyat_ (~vumrao@49.248.169.37) Quit (Quit: Leaving)
[10:16] * vikhyat (~vumrao@49.248.169.37) has joined #ceph
[10:18] * karnan (~karnan@106.51.128.173) Quit (Ping timeout: 480 seconds)
[10:22] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[10:22] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:23] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[10:23] * kefu (~kefu@114.92.101.38) has joined #ceph
[10:26] <sep> basically the same question as a few weeks ago... i have set the crush weight for an osd to 0 to drain it out of the cluster. how can i be certain that all objects are properly drained? the disk quickly went from 80% usage to around 6%, but there are still 180GB on the disk, and the usage has not gone down any more today at all
[10:28] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[10:29] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:31] <Be-El> sep: the rados objects are in the 'current' subdirectory of the osd directory. if this directory is empty, all files have been transferred
[10:32] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[10:33] <sep> thanks. that directory is in no way empty
[10:34] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[10:34] <sep> perhaps the backfilling has stopped since i have one osd that is near full.
[10:35] <Be-El> the XYZ_head subdirectories contain the actual PGs; the XYZ_temp ones are only temporary and should have been deleted by the osd daemon (e.g. after successful backfilling)
[10:35] <Be-El> if ceph -s complains about the full osd, it is stopped
[10:49] <sep> i have both. ; find . | wc -l = 92607
[10:51] <sep> thanks. ill look at the full osd's
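A minimal sketch of the drain-and-verify sequence sep and Be-El are discussing; osd id 12 and the path are placeholders, not from the log:

    ceph osd crush reweight osd.12 0          # stop mapping PGs to this osd; backfill moves data off it
    ceph -s                                   # watch for backfill to finish (and for near-full warnings that stall it)
    ls /var/lib/ceph/osd/ceph-12/current      # once drained, no *_head PG directories should remain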
[10:55] * xarses_ (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[10:56] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Read error: No route to host)
[10:56] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[10:57] * sfrode (frode@sandholtbraaten.com) Quit (Ping timeout: 480 seconds)
[10:57] * garphy is now known as garphy`aw
[10:58] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[10:58] * maku1 (~sese_@tsn109-201-154-148.dyn.nltelcom.net) has joined #ceph
[10:59] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[11:01] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[11:02] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Ping timeout: 480 seconds)
[11:11] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[11:11] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[11:13] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:70e3:d605:eee0:baf4) has joined #ceph
[11:14] * swami2 (~swami@49.44.57.239) has joined #ceph
[11:21] * swami1 (~swami@49.38.1.168) Quit (Ping timeout: 480 seconds)
[11:21] * karnan_ (~karnan@121.244.87.117) has joined #ceph
[11:23] * sam15 (~sascha@p50931ba9.dip0.t-ipconnect.de) has joined #ceph
[11:25] <sam15> Hi, have you an idea why my ceph-mons show my osds as up even if I shut down the respective server?
[11:28] * maku1 (~sese_@tsn109-201-154-148.dyn.nltelcom.net) Quit ()
[11:29] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[11:35] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:41] * swami1 (~swami@49.38.1.168) has joined #ceph
[11:46] * huangjun (~kvirc@113.57.168.154) has joined #ceph
[11:47] * swami2 (~swami@49.44.57.239) Quit (Ping timeout: 480 seconds)
[11:48] <sam15> Hi, have you an idea why my ceph-mons show my osds as up even if I shut down the respective server?
[11:48] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[11:49] * huangjun|2 (~kvirc@113.57.168.154) Quit (Read error: Connection reset by peer)
[11:50] * campgareth (~campgaret@149.18.114.222) Quit (Ping timeout: 480 seconds)
[11:57] * swami1 (~swami@49.38.1.168) Quit (Read error: Connection timed out)
[11:57] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) has joined #ceph
[11:59] * garphy`aw is now known as garphy
[12:04] <sam15> Hi, have you an idea why my ceph-mons show my osds as up even if I shut down the respective server?
[12:05] * garphy is now known as garphy`aw
[12:09] <koollman> sam15: even if you wait a little while ?
[12:11] * swami1 (~swami@49.38.1.168) has joined #ceph
[12:14] <sam15> koollman: 20 minutes?
[12:15] <sam15> koollman: the server is an osd host as well as a mon. the mon is correctly shown as missing in the quorum, but the osds are up and in.
[12:16] <The_Ball> Does anyone know what this means on one of the mon servers? 7f9e4c885700 -1 mon.node1@0(leader).mds e425 Missing health data for MDS 8457711
[12:16] <The_Ball> The two other mons are not reporting this, and if I "ceph-deploy mon destroy" the node the other nodes still stay "silent". Re-create the mon and the message reappears in the log every few seconds
[12:19] <sam15> the_ball: is the monitor shown in the quorum by ceph -w?
[12:20] <The_Ball> sam15, yes monmap e9: 3 mons at {node1=192.168.37.191:6789/0,node4=192.168.37.194:6789/0,node5=192.168.37.195:6789/0} \n election epoch 1024, quorum 0,1,2 node1,node4,node5
[12:20] * _mrp (~mrp@82.117.199.26) has joined #ceph
[12:24] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[12:25] * swami1 (~swami@49.38.1.168) Quit (Quit: Leaving.)
[12:27] <sam15> the_ball: do you use ceph_fs?
[12:28] <The_Ball> sam15, I do yes
[12:28] <The_Ball> sam15, fsmap e425: 1/1/1 up {0=node4=up:active}
[12:31] * sfrode (frode@sandholtbraaten.com) has joined #ceph
[12:32] <sam15> the_ball: my guess is that your mds does not talk to your mon
[12:33] <sam15> the_ball: worth a try: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/
[12:33] <Be-El> or check on the hosts itself that the ceph daemons are indeed terminated...
[12:35] <sam15> the_ball: ceph tell mds.0 injectargs --debug_ms 1 --debug_mds 10
[12:35] <sam15> the_ball: ceph mds stat
[12:36] <sam15> the_ball: from http://docs.ceph.com/docs/hammer/rados/operations/control/
[12:39] <The_Ball> thanks, where should the increased mds logging appear? I'm looking in /var/log/ceph/ceph-mds.node4.log, but it is completely empty (on node4)
[12:40] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:40] <sam15> the_ball: is this particular mds even running?
[12:41] <The_Ball> yes: ceph 2878 0.0 2.5 804768 410152 ? Ssl Aug15 1:21 /usr/bin/ceph-mds -f --cluster ceph --id node4 --setuser ceph --setgroup ceph
[12:41] <The_Ball> and the cephfs is mounted and being accessed
[12:42] <The_Ball> Should I restart the mds?
[12:43] <sam15> the_ball: I would suggest to check the keyring first
[12:44] <The_Ball> sam15, /etc/ceph/ceph.client.admin.keyring is identical on node1 and node4
[12:45] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[12:45] <sam15> the_ball: sry, then I am out of ideas.
[12:46] <The_Ball> Hum, ceph.conf is not identical. That's odd I'm sure I used ceph-deploy to --overwrite-conf
[12:46] <The_Ball> Perhaps that's the cause
[12:47] <The_Ball> No, it's just removed my comments and compacted the file; the directives are identical
[12:48] <sam15> the_ball: try to sync again. (check file permissions, too)
[12:49] <The_Ball> restarting the mds on node4 has stopped the "mon.node1@0(leader).mds e425 Missing health data for MDS 8457711" messages
[12:50] <sam15> the_ball: congrats
[12:50] * kaisan (~kai@zaphod.kamiza.nl) Quit (Ping timeout: 480 seconds)
[12:50] <The_Ball> thanks for your help, not sure what the issue was though
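For reference, the restart The_Ball describes would look roughly like this on a systemd-based jewel install; the daemon id node4 comes from the log, the unit-name form is an assumption about the setup:

    systemctl restart ceph-mds@node4      # bounce the mds; it re-registers with the mons
    ceph -w                               # check that the "Missing health data for MDS" messages stop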
[12:55] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[12:57] * vbellur (~vijay@71.234.224.255) has joined #ceph
[12:59] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[13:01] * huangjun (~kvirc@113.57.168.154) Quit (Ping timeout: 480 seconds)
[13:02] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[13:05] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[13:18] * b0e (~aledermue@213.95.25.82) has joined #ceph
[13:23] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[13:23] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[13:25] * swami1 (~swami@49.38.1.168) has joined #ceph
[13:25] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[13:26] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:27] * vikhyat_ (~vumrao@1.39.17.60) has joined #ceph
[13:29] * vikhyat (~vumrao@49.248.169.37) Quit (Ping timeout: 480 seconds)
[13:29] <sep> i have been using ceph osd tell to tweak settings on some struggling osd's; but now i wanted to know what a given osd has as its too-full ratio. is there something similar to a ceph osd "ask" command where you'd get the current value of a configuration option
[13:29] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:29] * vikhyat_ is now known as vikhyat
[13:30] <vikhyat> sep: you can check this with ceph daemon command
[13:30] <vikhyat> sep: ceph daemon osd.<id> config show | grep <config name>
[13:30] <sep> thanks, that gave a lot more sane results when googling :)
[13:31] <vikhyat> :)
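A concrete instance of the command vikhyat gives, aimed at the full-ratio settings sep is after; osd id 3 is a placeholder:

    ceph daemon osd.3 config show | grep full_ratio           # lists e.g. osd_backfill_full_ratio, osd_failsafe_full_ratio
    ceph daemon osd.3 config get osd_backfill_full_ratio      # or fetch a single option directly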
[13:32] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) has joined #ceph
[13:34] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[13:37] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:45] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:46] * rwheeler (~rwheeler@pool-173-48-195-215.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[13:46] * i_m (~ivan.miro@31.173.120.48) Quit (Read error: Connection reset by peer)
[13:46] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[14:01] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[14:03] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Quit: Leaving.)
[14:06] * georgem (~Adium@24.114.69.1) has joined #ceph
[14:07] * georgem (~Adium@24.114.69.1) Quit ()
[14:07] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:14] <sam15> Hi, any idea why upon adding an osd the folder /var/lib/ceph/osd remains empty? If I add ceph-0 manually and restart the osd daemon, the mons show osd.0 "up", but the status remains "up" even if I shut down the server
[14:15] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[14:15] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:15] * Dragonshadow (~x303@se10x.mullvad.net) has joined #ceph
[14:21] <jiffe> I shut down two osds on a host and I ended up with 2 PGs in inactive+stuck, what does that mean?
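Not from the log, but the usual first steps for jiffe's question, sketched with standard commands; the PG id 3.1f is a placeholder:

    ceph health detail               # names the stuck PGs and why they are unhealthy
    ceph pg dump_stuck inactive      # lists PGs that have been inactive longer than the stuck threshold
    ceph pg 3.1f query               # per-PG detail for one of them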
[14:22] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:25] * Hemanth (~hkumar_@49.38.1.137) has joined #ceph
[14:27] * Hemanth (~hkumar_@49.38.1.137) Quit ()
[14:28] * derjohn_mobi (~aj@2001:6f8:1337:0:1917:d084:1a82:58ed) Quit (Ping timeout: 480 seconds)
[14:29] * vikhyat_ (~vumrao@123.252.149.81) has joined #ceph
[14:30] * shamsul (~shamsul@192.228.194.156) has joined #ceph
[14:30] * boolman (~boolman@83.140.140.146) has joined #ceph
[14:31] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:33] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Quit: valeech)
[14:34] * vikhyat (~vumrao@1.39.17.60) Quit (Ping timeout: 480 seconds)
[14:35] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[14:43] * kuku (~kuku@112.203.56.253) has joined #ceph
[14:45] * Dragonshadow (~x303@se10x.mullvad.net) Quit ()
[14:48] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:49] * shamsul (~shamsul@192.228.194.156) Quit (Quit: Leaving)
[14:55] * vikhyat_ (~vumrao@123.252.149.81) Quit (Quit: Leaving)
[14:55] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[14:56] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:57] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[14:59] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:00] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:03] * EinstCrazy (~EinstCraz@61.165.252.183) has joined #ceph
[15:06] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[15:08] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:08] * valeech (~valeech@wsip-98-175-102-67.dc.dc.cox.net) has joined #ceph
[15:14] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:21] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:22] * EinstCrazy (~EinstCraz@61.165.252.183) Quit (Remote host closed the connection)
[15:23] * valeech (~valeech@wsip-98-175-102-67.dc.dc.cox.net) Quit (Quit: valeech)
[15:23] * dbbyleo (~dbbyleo@c-24-8-87-68.hsd1.co.comcast.net) has joined #ceph
[15:24] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:35] * dbbyleo (~dbbyleo@c-24-8-87-68.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[15:36] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:38] * sep (~sep@95.62-50-191.enivest.net) Quit (Read error: Connection reset by peer)
[15:39] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:42] * nathani (~nathani@2607:f2f8:ac88::) Quit (Quit: WeeChat 1.4)
[15:44] * sep (~sep@95.62-50-191.enivest.net) has joined #ceph
[15:44] * LiamM (~liam.monc@mail.moncur.eu) Quit (Quit: leaving)
[15:45] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[15:46] * LiamM (~liam.monc@mail.moncur.eu) Quit ()
[15:47] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[15:47] * xarses_ (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Remote host closed the connection)
[15:48] * xarses_ (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[15:55] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:55] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:56] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Ping timeout: 480 seconds)
[15:57] * david_ (~david@207.107.71.71) Quit (Quit: Leaving)
[15:59] * dbbyleo (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) has joined #ceph
[16:00] * jarrpa (~jarrpa@adsl-72-50-85-113.prtc.net) has joined #ceph
[16:00] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[16:01] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[16:01] * spgriffinjr (~spgriffin@66.46.246.206) has joined #ceph
[16:01] * hellertime (~Adium@72.246.3.14) has joined #ceph
[16:02] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:02] * LiamM (~liam.monc@mail.moncur.eu) Quit (Quit: leaving)
[16:03] * hellertime (~Adium@72.246.3.14) Quit ()
[16:03] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[16:03] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[16:03] * hellertime (~Adium@72.246.3.14) has joined #ceph
[16:04] * kuku (~kuku@112.203.56.253) has joined #ceph
[16:04] * hellertime (~Adium@72.246.3.14) Quit ()
[16:04] * Racpatel (~Racpatel@2601:87:0:24af:4e34:88ff:fe87:9abf) has joined #ceph
[16:05] * LiamM (~liam.monc@mail.moncur.eu) Quit ()
[16:05] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:06] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[16:06] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[16:06] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[16:06] * kuku (~kuku@112.203.56.253) has joined #ceph
[16:07] * hellertime (~Adium@72.246.3.14) has joined #ceph
[16:07] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:09] * SamYaple (~SamYaple@162.209.126.134) Quit (Quit: leaving)
[16:09] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[16:09] * LiamM (~liam.monc@mail.moncur.eu) Quit ()
[16:10] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:10] * xarses_ (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Ping timeout: 480 seconds)
[16:10] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[16:11] * SamYaple (~SamYaple@162.209.126.134) Quit ()
[16:11] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[16:12] * hellertime (~Adium@72.246.3.14) Quit ()
[16:12] * LiamM (~liam.monc@mail.moncur.eu) has joined #ceph
[16:12] * LiamM (~liam.monc@mail.moncur.eu) Quit ()
[16:13] * SamYaple (~SamYaple@162.209.126.134) Quit ()
[16:13] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[16:14] * LiamMon (~liam.monc@mail.moncur.eu) has joined #ceph
[16:14] * LiamMon (~liam.monc@mail.moncur.eu) Quit ()
[16:14] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:14] * LiamMon (~liam.monc@mail.moncur.eu) has joined #ceph
[16:15] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[16:16] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:17] * SamYaple (~SamYaple@162.209.126.134) Quit ()
[16:17] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) has joined #ceph
[16:17] * SamYaple (~SamYaple@162.209.126.134) has joined #ceph
[16:17] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[16:20] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[16:20] * dbbyleo75 (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) has joined #ceph
[16:21] * LiamMon (~liam.monc@mail.moncur.eu) Quit (Quit: leaving)
[16:21] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[16:22] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[16:22] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[16:22] * LiamMon (~liam.monc@disco.moncur.eu) has joined #ceph
[16:24] * hellertime (~Adium@72.246.0.14) has joined #ceph
[16:25] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[16:25] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[16:26] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[16:26] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) has joined #ceph
[16:28] * dbbyleo (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[16:28] * jarrpa (~jarrpa@adsl-72-50-85-113.prtc.net) Quit (Ping timeout: 480 seconds)
[16:29] * rraja (~rraja@122.166.168.85) has joined #ceph
[16:29] * xarses_ (~xarses@4.35.170.198) has joined #ceph
[16:29] * hellertime1 (~Adium@72.246.0.14) has joined #ceph
[16:30] * kefu_ (~kefu@114.92.101.38) has joined #ceph
[16:30] * salwasser (~Adium@72.246.0.14) has joined #ceph
[16:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[16:31] * hellertime (~Adium@72.246.0.14) Quit (Read error: Connection reset by peer)
[16:32] * derjohn_mobi (~aj@80.242.133.125) has joined #ceph
[16:32] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[16:33] * kuku (~kuku@112.203.56.253) has joined #ceph
[16:36] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) has joined #ceph
[16:38] * joshd1 (~jdurgin@2602:30a:c089:2b0:ccfb:e13:c7cb:216d) has joined #ceph
[16:39] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[16:48] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[16:52] * derjohn_mobi (~aj@80.242.133.125) Quit (Ping timeout: 480 seconds)
[16:52] * sam15 (~sascha@p50931ba9.dip0.t-ipconnect.de) has left #ceph
[16:56] * oliveiradan2 (~doliveira@67.214.238.80) Quit (Read error: Connection reset by peer)
[16:57] * kefu_ is now known as kefu
[16:57] * kuku (~kuku@112.203.56.253) Quit (Read error: Connection reset by peer)
[16:57] * kuku (~kuku@112.203.56.253) has joined #ceph
[17:00] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[17:00] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:01] * boolman (~boolman@83.140.140.146) Quit (Quit: leaving)
[17:02] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[17:07] * kefu is now known as kefu|afk
[17:08] * danieagle (~Daniel@187.35.176.10) has joined #ceph
[17:10] * Vacuum_ (~Vacuum@i59F79166.versanet.de) has joined #ceph
[17:11] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[17:12] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:12] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[17:12] * mykola (~Mikolaj@91.245.78.8) has joined #ceph
[17:13] * karnan_ (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:16] * Vacuum__ (~Vacuum@88.130.220.189) Quit (Ping timeout: 480 seconds)
[17:17] * swami1 (~swami@49.38.1.168) Quit (Quit: Leaving.)
[17:17] * kefu|afk (~kefu@114.92.101.38) Quit (Ping timeout: 480 seconds)
[17:18] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[17:19] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[17:19] * srk (~Siva@32.97.110.53) has joined #ceph
[17:21] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) Quit (Quit: valeech)
[17:28] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[17:28] * branto (~branto@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving.)
[17:29] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[17:30] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) has joined #ceph
[17:31] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[17:41] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[17:44] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[17:46] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[17:47] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[17:48] * wes_dillingham (~wes_dilli@65.112.8.198) has joined #ceph
[17:50] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[17:50] * joshd1 (~jdurgin@2602:30a:c089:2b0:ccfb:e13:c7cb:216d) Quit (Quit: Leaving.)
[17:52] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[17:53] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[17:53] * valeech (~valeech@pool-108-56-157-187.washdc.fios.verizon.net) Quit (Quit: valeech)
[17:59] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[18:01] * salwasser (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[18:01] * hellertime1 (~Adium@72.246.0.14) Quit (Quit: Leaving.)
[18:03] * kuku (~kuku@112.203.56.253) Quit (Remote host closed the connection)
[18:03] * blizzow (~jburns@50.243.148.102) has joined #ceph
[18:03] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[18:03] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Read error: Connection reset by peer)
[18:04] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[18:04] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[18:08] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[18:12] <blizzow> !cephlogbot help
[18:12] <blizzow> ~help
[18:13] <- *blizzow* HELP
[18:13] <- *blizzow* HELP
[18:13] <- *blizzow* ?
[18:15] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:15] * tries (~tries__@2a01:2a8:2000:ffff:1260:4bff:fe6f:af91) Quit (Ping timeout: 480 seconds)
[18:21] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[18:22] * raphaelsc (~raphaelsc@2804:7f2:2080:47af:5e51:4fff:fe86:bbae) has joined #ceph
[18:23] * hellertime (~Adium@72.246.3.14) has joined #ceph
[18:24] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[18:26] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:30] * dbbyleo75 (~dbbyleo@50-198-202-93-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:33] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[18:37] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:39] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:43] <blizzow> Is there a way to see what images in a pool have active watchers? I am trying to decommission an old cluster and delete a pool, but still see read/write operations, so I don't want to remove images from the pool that are still being used.
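One way to answer blizzow's question with stock tools; pool and image names are placeholders, rbd status exists in jewel and later, and older format-2 images can be queried via their header object (the id comes from rbd info's block_name_prefix):

    rbd status mypool/myimage                          # prints the current watchers of the image
    rados -p mypool listwatchers rbd_header.<id>       # watchers on the image header object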
[18:44] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[18:49] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) has joined #ceph
[18:50] * nathani (~nathani@2607:f2f8:ac88::) has joined #ceph
[18:51] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:56] * Jeffrey4l (~Jeffrey@110.252.40.176) Quit (Ping timeout: 480 seconds)
[18:58] * DanFoster (~Daniel@2a00:1ee0:3:1337:c155:570c:3e46:6941) Quit (Quit: Leaving)
[19:04] * tries (~tries__@fw.green.ch) has joined #ceph
[19:05] * kefu (~kefu@114.92.101.38) has joined #ceph
[19:12] * BrianA (~BrianA@209.244.105.251) has joined #ceph
[19:12] * jermudgeon_ (~jhaustin@31.207.56.59) has joined #ceph
[19:12] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: Connection reset by peer)
[19:12] * jermudgeon_ is now known as jermudgeon
[19:12] * jdillaman_afk (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: Leaving)
[19:13] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[19:15] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:18] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[19:18] * BrianA (~BrianA@209.244.105.251) has left #ceph
[19:20] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[19:25] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[19:28] * vikhyat (~vumrao@123.252.149.81) has joined #ceph
[19:29] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[19:29] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[19:32] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[19:39] * derjohn_mobi (~aj@x4db25baf.dyn.telefonica.de) has joined #ceph
[19:40] * kefu (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:44] * thomnico (~thomnico@2a01:e35:8b41:120:b502:3f94:e720:7610) has joined #ceph
[19:49] * salwasser (~Adium@72.246.3.14) has joined #ceph
[19:51] * ntpttr_ (~ntpttr@192.55.54.42) has joined #ceph
[19:53] * vbellur (~vijay@71.234.224.255) has joined #ceph
[19:53] * wes_dillingham (~wes_dilli@65.112.8.198) Quit (Quit: wes_dillingham)
[19:54] * hellertime (~Adium@72.246.3.14) has joined #ceph
[19:55] * dmonschein (~dmonschei@00020eb4.user.oftc.net) has joined #ceph
[19:56] * rakeshgm (~rakesh@106.51.29.33) has joined #ceph
[20:00] * Hemanth (~hkumar_@103.228.221.179) has joined #ceph
[20:01] * emerson (~emerson@92.222.93.46) has joined #ceph
[20:03] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:11] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:12] <diq> Hi. I've seen a lot of different methods for replacing failed drives. Most of them involve altering the crush maps. Is there really not a short way of saying "recreate this OSD with this new blank drive and copy all of the correct PG's here"?
[20:12] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[20:13] * i_m (~ivan.miro@31.173.120.48) Quit (Ping timeout: 480 seconds)
[20:13] <diq> I've even seen a RedHat bugzilla that basically says the official method is no good
[20:14] * swami1 (~swami@27.7.162.18) has joined #ceph
[20:16] <DanJ> Wondering if anyone can help me with an OSD start up problem I'm having. Lost 6 OSDs after a power failure, and all are complaining about leveldb corruption: "filestore(/var/lib/ceph/osd/ceph-1) Error initializing leveldb : Corruption: 6 missing files; e.g.: /var/lib/ceph/osd/ceph-1/current/omap/042421.ldb". How can I repair this?
[20:17] * xarses_ (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[20:18] * thomnico (~thomnico@2a01:e35:8b41:120:b502:3f94:e720:7610) Quit (Quit: Ex-Chat)
[20:20] * wes_dillingham (~wes_dilli@65.112.8.198) has joined #ceph
[20:24] * vikhyat (~vumrao@123.252.149.81) Quit (Quit: Leaving)
[20:27] * _mrp (~mrp@82.117.199.26) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:27] * swami1 (~swami@27.7.162.18) Quit (Read error: Connection reset by peer)
[20:27] * swami1 (~swami@27.7.162.18) has joined #ceph
[20:29] * northrup (~northrup@201.141.57.255) has joined #ceph
[20:34] * blizzow (~jburns@50.243.148.102) Quit (Ping timeout: 480 seconds)
[20:36] * rraja (~rraja@122.166.168.85) Quit (Quit: Leaving)
[20:37] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Remote host closed the connection)
[20:38] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[20:40] <Vaelatern> Is ceph designed for multiple datacenter replication where the links may go down regularly, without dedicated lines between datacenters?
[20:40] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) Quit (Max SendQ exceeded)
[20:40] * dougf (~dougf@75-131-32-223.static.kgpt.tn.charter.com) has joined #ceph
[20:50] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) Quit (Ping timeout: 480 seconds)
[20:55] * swami1 (~swami@27.7.162.18) Quit (Quit: Leaving.)
[20:56] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:70e3:d605:eee0:baf4) Quit (Ping timeout: 480 seconds)
[20:56] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Ping timeout: 480 seconds)
[20:58] * mog_1 (~Unforgive@46.166.186.248) has joined #ceph
[20:59] <Aeso> Vaelatern, the RGW is designed to be federated across multiple Ceph clusters, yes
[21:00] <Aeso> However CephFS and RBD don't lend themselves to that type of model
[21:01] * northrup (~northrup@201.141.57.255) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:05] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[21:06] * rakeshgm (~rakesh@106.51.29.33) Quit (Quit: Leaving)
[21:15] * chris1 (~chris@mn-71-55-158-38.dhcp.embarqhsd.net) has joined #ceph
[21:17] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[21:18] * ntpttr_ (~ntpttr@192.55.54.42) has joined #ceph
[21:19] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[21:24] * wes_dillingham (~wes_dilli@65.112.8.198) Quit (Quit: wes_dillingham)
[21:25] * ntpttr_ (~ntpttr@192.55.54.42) Quit (Quit: Leaving)
[21:25] * rendar (~I@host106-97-dynamic.182-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:26] * _mrp (~mrp@178-222-84-200.dynamic.isp.telekom.rs) has joined #ceph
[21:28] * mog_1 (~Unforgive@5AEAAA149.tor-irc.dnsbl.oftc.net) Quit ()
[21:29] * xarses_ (~xarses@4.35.170.198) has joined #ceph
[21:37] * xarses_ (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[21:38] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[21:38] * chris1 (~chris@mn-71-55-158-38.dhcp.embarqhsd.net) Quit (Quit: WeeChat 1.5)
[21:41] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: Leaving)
[21:42] * pepzi (~Shadow386@185.65.134.76) has joined #ceph
[21:43] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) Quit (Quit: Leaving.)
[21:44] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) has joined #ceph
[21:44] * xarses_ (~xarses@4.35.170.198) has joined #ceph
[21:46] * vbellur (~vijay@71.234.224.255) has joined #ceph
[21:46] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) Quit ()
[21:46] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) has joined #ceph
[21:48] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:49] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:51] * rendar (~I@host106-97-dynamic.182-80-r.retail.telecomitalia.it) has joined #ceph
[21:52] * northrup (~northrup@201.141.57.255) has joined #ceph
[21:52] * mykola (~Mikolaj@91.245.78.8) Quit (Quit: away)
[21:53] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) has joined #ceph
[21:55] * srk (~Siva@32.97.110.53) Quit (Ping timeout: 480 seconds)
[21:55] * srk (~Siva@32.97.110.56) has joined #ceph
[21:56] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:58] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[22:12] * pepzi (~Shadow386@5AEAAA16D.tor-irc.dnsbl.oftc.net) Quit ()
[22:12] * xarses_ (~xarses@4.35.170.198) Quit (Read error: Connection reset by peer)
[22:12] * squizzi (~squizzi@71-34-69-94.ptld.qwest.net) Quit (Ping timeout: 480 seconds)
[22:12] * xarses_ (~xarses@4.35.170.198) has joined #ceph
[22:15] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[22:17] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:18] * northrup (~northrup@201.141.57.255) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:19] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * northrup (~northrup@201.141.57.255) has joined #ceph
[22:26] * oliveiradan (~doliveira@137.65.133.10) Quit (Ping timeout: 480 seconds)
[22:29] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[22:32] * squizzi (~squizzi@184.252.73.110) has joined #ceph
[22:36] * squizzi_ (~squizzi@184.252.52.210) has joined #ceph
[22:38] * squizzi__ (~squizzi@66.87.64.22) has joined #ceph
[22:41] <wes_dillingham> can you physically replace an osd without removing it from crush (assuming the replaced osd is getting the same osd # assignment)?
[22:42] <wes_dillingham> it seems to me that the cluster rebalances three times for a failed disk, once when it is detected as being "out", again when it is removed from CRUSH, and a third time when it is added back in... just trying to reduce the amount of time the cluster is in rebalance
[22:43] * squizzi (~squizzi@184.252.73.110) Quit (Ping timeout: 480 seconds)
[22:44] * squizzi_ (~squizzi@184.252.52.210) Quit (Ping timeout: 480 seconds)
[22:46] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[22:47] * userarpanet (~unknown@nat-23-0.nsk.sibset.net) has joined #ceph
[22:48] <m0zes> wes_dillingham: yes. you can. pretty sure you can skip the crush rm step.
[22:49] <m0zes> I also skip the ceph auth del step.
[22:49] <wes_dillingham> do you just do a ceph auth get and stick it in the keyring file on the osd?
[22:49] <m0zes> yep
[22:50] <userarpanet> Hi, all. I've removed a monitor from the cluster, and after this the cluster has broken. Why? (http://pastebin.com/AaUhx1A8 works fine, but ceph -s gives ^CError connecting to cluster: InterruptedOrTimeoutError)
[22:51] <m0zes> its been a couple months since I did it, so my memory is fuzzy. I tend to not use ceph-deploy when recreating an osd. ceph-osd -i ${expected_num} --mkfs is basically what you're wanting.
[22:51] <wes_dillingham> im still ironing out processes for these tasks, currently my process for adding a new / failed disk back in is using ceph-deploy
[22:51] <wes_dillingham> because it seems easiest, but i feel like i should move away from ceph-deploy
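A rough sketch of the flow m0zes describes, skipping the crush rm and auth del steps; osd id 12, the mount path and the systemd unit names are placeholders, and this is an untested outline rather than an official procedure:

    ceph osd set noout                                          # avoid a rebalance while the drive is swapped
    systemctl stop ceph-osd@12                                  # stop the failed osd
    # ...replace the drive, mkfs and mount it at /var/lib/ceph/osd/ceph-12...
    ceph-osd -i 12 --mkfs                                       # reinitialise the data directory for the same osd id
    ceph auth get osd.12 -o /var/lib/ceph/osd/ceph-12/keyring   # reuse the existing key instead of ceph auth del/add
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-12                # jewel runs the osd as the ceph user
    systemctl start ceph-osd@12
    ceph osd unset noout                                        # let backfill repopulate the new disk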
[22:54] * m0zes has been neck deep in repairing cephfs and building a worst-case scenario extraction method for the data for the last 2 months.
[22:54] * bvi (~Bastiaan@102-117-145-85.ftth.glasoperator.nl) has joined #ceph
[22:54] <m0zes> and we hit worst case 2 weeks ago ;)
[22:57] * northrup (~northrup@201.141.57.255) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:00] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[23:00] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.5)
[23:00] * ska (~skatinolo@cpe-173-174-111-177.austin.res.rr.com) has joined #ceph
[23:03] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[23:03] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[23:03] <userarpanet> Ok, another question: what is the right way to remove a server from the cluster? because when I stop the monitor, I can't connect to the cluster
[23:04] <ska> What's a good Ceph-based solution that allows for satellite office file serving?
[23:05] <ska> I suppose RGW is a good starting point.
[23:13] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:15] * cyphase (~cyphase@c-50-148-131-137.hsd1.ca.comcast.net) has joined #ceph
[23:15] * georgem (~Adium@24.114.66.52) has joined #ceph
[23:16] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[23:18] * northrup (~northrup@201.141.57.255) has joined #ceph
[23:18] * georgem (~Adium@24.114.66.52) Quit ()
[23:18] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:20] <ska> Is there any client that can communicate with a CephFS, despite high latency?
[23:20] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:21] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) has joined #ceph
[23:23] <devicenull> is there a downside to running 'ceph tell mon.* compact' on a cron?
[23:24] <devicenull> I keep hitting 'store is getting too big' warnings, which are annoying
[23:24] <devicenull> because there's no way I can find of actually fixing the cause
[23:25] <[arx]> sounds like a bandaid
[23:26] <devicenull> yea, it does
[23:26] <devicenull> but since i havent been able to find any information about figuring why the store is getting too big...
[23:26] <[arx]> did you ask the mailing list?
[23:26] <devicenull> no
[23:26] <[arx]> i would try that
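If one does go the cron route devicenull describes (a band-aid, as [arx] says), a minimal sketch; the schedule and file path are arbitrary:

    # /etc/cron.d/ceph-mon-compact  (hypothetical file), compact all mons nightly at 03:00
    0 3 * * * root ceph tell 'mon.*' compact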
[23:30] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:35] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[23:49] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[23:51] * squizzi (~squizzi@mc60536d0.tmodns.net) has joined #ceph
[23:53] * rendar (~I@host106-97-dynamic.182-80-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[23:58] * squizzi__ (~squizzi@66.87.64.22) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.