#ceph IRC Log


IRC Log for 2016-08-18

Timestamps are in GMT/BST.

[0:00] <badone> devicenull: is your cluster HEALTH_OK?
[0:04] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:05] * srk (~Siva@32.97.110.56) Quit (Ping timeout: 480 seconds)
[0:07] * squizzi (~squizzi@mc60536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[0:12] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:12] * bvi (~Bastiaan@102-117-145-85.ftth.glasoperator.nl) Quit (Quit: Leaving)
[0:14] * Racpatel (~Racpatel@2601:87:0:24af:4e34:88ff:fe87:9abf) Quit (Quit: Leaving)
[0:14] * Racpatel (~Racpatel@2601:87:0:24af::cd3c) has joined #ceph
[0:17] * jermudgeon_ (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:19] * jermudgeon (~jhaustin@31.207.56.59) Quit (Ping timeout: 480 seconds)
[0:19] * jermudgeon_ is now known as jermudgeon
[0:20] * Racpatel (~Racpatel@2601:87:0:24af::cd3c) Quit (Quit: Leaving)
[0:24] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) has joined #ceph
[0:24] <jiffe> so I'm curious why when shutting down 2 osds on the same host I ended up with stuck PGs
[0:29] <jiffe> I may have done this wrong, I set them out but didn't remove them from crush
[0:29] <jiffe> now that I removed them from crush it is reorganizing again
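(For reference, a sketch of the removal sequence jiffe is describing, per the docs of this era; osd id 12 is illustrative. Marking an OSD out triggers one rebalance, and removing it from CRUSH changes the host's weight and triggers a second one, which is why the cluster reorganizes again:)

    ceph osd out 12                # stop mapping PGs to the OSD; first rebalance starts
    # wait for recovery, then stop the daemon on its host
    ceph osd crush remove osd.12   # drop it from the CRUSH map; weights shift, second rebalance
    ceph auth del osd.12           # remove its cephx key
    ceph osd rm 12                 # remove it from the OSD map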
[0:40] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[0:46] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Read error: Connection reset by peer)
[0:46] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:47] <devicenull> badone: it's healthy once I compress, but it shows as warn before that
[0:48] <badone> devicenull: OK. It not being healthy might have explained the large MON DB
[0:48] <badone> but if it's healthy then that's not the problem
[0:49] * Racpatel (~Racpatel@2601:87:0:24af::cd3c) has joined #ceph
[0:49] * xarses_ (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[0:49] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[0:50] <devicenull> Yea. Everything I can find says it shouldn't grow like it does
[0:56] <badone> see what the ML says
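(The compaction devicenull mentions can be triggered on demand or at startup — a sketch, with mon id "a" illustrative:)

    ceph tell mon.a compact        # compact the monitor's leveldb store now

    # ceph.conf: compact the monitor store every time the daemon starts
    [mon]
    mon compact on start = true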
[1:07] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:09] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:22] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[1:24] * mrbojangles (~mrbojangl@c-50-180-242-71.hsd1.ga.comcast.net) has joined #ceph
[1:35] * aj__ (~aj@x4db01d39.dyn.telefonica.de) has joined #ceph
[1:36] * LegalResale (~LegalResa@66.165.126.130) Quit (Ping timeout: 480 seconds)
[1:36] * squizzi (~squizzi@mb50536d0.tmodns.net) has joined #ceph
[1:37] * Racpatel (~Racpatel@2601:87:0:24af::cd3c) Quit (Quit: Leaving)
[1:38] * oms101 (~oms101@p20030057EA679200C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:39] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:40] * derjohn_mobi (~aj@x4db25baf.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[1:42] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[1:46] * oms101 (~oms101@p20030057EA64F300C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:47] * LegalResale (~LegalResa@66.165.126.130) has joined #ceph
[1:52] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:06] * efirs1 (~firs@98.207.153.155) has joined #ceph
[2:13] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) has joined #ceph
[2:14] * efirs1 (~firs@98.207.153.155) Quit (Ping timeout: 480 seconds)
[2:15] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[2:16] * Jeffrey4l (~Jeffrey@101.31.232.173) has joined #ceph
[2:16] * srk (~Siva@2605:6000:ed04:ce00:60d5:f3f:8565:fa4c) has joined #ceph
[2:17] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) Quit (Quit: Leaving.)
[2:22] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[2:25] * squizzi (~squizzi@mb50536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[2:28] * danieagle (~Daniel@187.35.176.10) Quit (Quit: Thanks for Everything! :-) See you later! :-)
[2:31] * northrup (~northrup@201.141.57.255) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:33] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:38] * srk (~Siva@2605:6000:ed04:ce00:60d5:f3f:8565:fa4c) Quit (Ping timeout: 480 seconds)
[2:40] * Hemanth_ (~hkumar_@103.228.221.188) has joined #ceph
[2:41] * Hemanth (~hkumar_@103.228.221.179) Quit (Ping timeout: 480 seconds)
[2:47] * _mrp (~mrp@178-222-84-200.dynamic.isp.telekom.rs) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[2:58] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Quit: Leaving)
[3:01] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) has joined #ceph
[3:01] * georgem (~Adium@107-179-157-134.cpe.teksavvy.com) Quit ()
[3:01] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) Quit (Read error: Connection reset by peer)
[3:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[3:04] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[3:04] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Remote host closed the connection)
[3:04] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[3:15] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:15] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:16] * `Jin (~Knuckx@108.61.122.153) has joined #ceph
[3:23] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) has joined #ceph
[3:26] * jermudgeon (~jhaustin@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:32] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:32] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[3:32] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:32] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Read error: Connection reset by peer)
[3:36] * toastydeath (~toast@pool-71-255-253-39.washdc.fios.verizon.net) has joined #ceph
[3:38] * wak-work (~wak-work@2620:15c:202:0:c82:2e9:6b8d:6875) Quit (Remote host closed the connection)
[3:38] * wak-work (~wak-work@2620:15c:202:0:a06f:a8f8:8b5e:4a6e) has joined #ceph
[3:45] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Quit: Nettalk6 - www.ntalk.de)
[3:46] * wak-work (~wak-work@2620:15c:202:0:a06f:a8f8:8b5e:4a6e) Quit (Remote host closed the connection)
[3:46] * wak-work (~wak-work@2620:15c:202:0:a06f:a8f8:8b5e:4a6e) has joined #ceph
[3:46] * `Jin (~Knuckx@5AEAAA2DF.tor-irc.dnsbl.oftc.net) Quit ()
[3:47] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[3:50] * haplo37 (~haplo37@107.190.44.23) has joined #ceph
[3:54] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[3:57] * Hemanth_ (~hkumar_@103.228.221.188) Quit (Ping timeout: 480 seconds)
[3:58] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[3:58] * derjohn_mobi (~aj@x590e6307.dyn.telefonica.de) has joined #ceph
[4:03] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[4:03] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:06] * aj__ (~aj@x4db01d39.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:06] * ade_b (~abradshaw@p4FF7871C.dip0.t-ipconnect.de) has joined #ceph
[4:11] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:13] <jiffe> so I've taken a couple osds out of a machine and ceph is pretty much killing that machine now
[4:13] * ade (~abradshaw@p4FF7A7FC.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:13] <jiffe> load average was 64 last it responded
[4:15] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:22] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[4:24] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[4:24] * haplo37 (~haplo37@107.190.44.23) Quit (Ping timeout: 480 seconds)
[4:25] * adamcrume (~quassel@2601:647:cb01:f890:545c:afb7:17c1:f4fc) has joined #ceph
[4:27] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[4:28] * danieagle (~Daniel@187.35.176.10) has joined #ceph
[4:28] * swami1 (~swami@27.7.162.18) has joined #ceph
[4:31] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[4:33] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:36] * swami1 (~swami@27.7.162.18) Quit (Quit: Leaving.)
[4:40] * mrbojangles (~mrbojangl@c-50-180-242-71.hsd1.ga.comcast.net) Quit (Quit: Ex-Chat)
[4:46] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:53] * swami1 (~swami@27.7.162.18) has joined #ceph
[4:55] * emerson (~emerson@92.222.93.46) Quit (Quit: Leaving)
[5:00] * swami1 (~swami@27.7.162.18) Quit (Quit: Leaving.)
[5:02] * davidzlap (~Adium@2605:e000:1313:8003:a81f:9b0a:2b8e:c818) Quit (Quit: Leaving.)
[5:03] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:09] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[5:11] * rakeshgm (~rakesh@106.51.29.33) has joined #ceph
[5:11] * rakeshgm (~rakesh@106.51.29.33) Quit ()
[5:13] * kuku (~kuku@119.93.91.136) Quit (Ping timeout: 480 seconds)
[5:17] * kuku (~kuku@119.93.91.136) has joined #ceph
[5:22] * KindOne_ (dtscode@h204.162.186.173.dynamic.ip.windstream.net) has joined #ceph
[5:29] * KindOne (sillyfool@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:29] * KindOne_ is now known as KindOne
[5:32] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:33] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[5:37] * vimal (~vikumar@114.143.160.16) has joined #ceph
[5:41] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[5:44] * Vacuum__ (~Vacuum@88.130.200.144) has joined #ceph
[5:51] * Vacuum_ (~Vacuum@i59F79166.versanet.de) Quit (Ping timeout: 480 seconds)
[5:56] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:02] * walcubi__ (~walcubi@p5795AA1E.dip0.t-ipconnect.de) has joined #ceph
[6:07] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:07] * kuku (~kuku@119.93.91.136) Quit (Read error: Connection reset by peer)
[6:07] * kuku (~kuku@119.93.91.136) has joined #ceph
[6:07] * [0x4A6F] (~ident@p4FC27C37.dip0.t-ipconnect.de) has joined #ceph
[6:09] * walcubi_ (~walcubi@p5795AC49.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:32] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:36] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) has joined #ceph
[6:36] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:50] <iggy> is it normal for an empty cluster to have almost 4% space used?
[6:58] * Hemanth_ (~hkumar_@103.228.221.188) has joined #ceph
[6:59] * vikhyat (~vumrao@123.252.149.81) has joined #ceph
[7:04] * SurfMaths (~Bonzaii@46.166.190.192) has joined #ceph
[7:17] * hellertime (~Adium@pool-71-162-119-41.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[7:18] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[7:23] * vimal (~vikumar@114.143.160.16) Quit (Quit: Leaving)
[7:25] * i_m (~ivan.miro@31.173.120.48) has joined #ceph
[7:26] * epicguy (~epicguy@41.164.8.42) has joined #ceph
[7:34] * SurfMaths (~Bonzaii@61TAABEAT.tor-irc.dnsbl.oftc.net) Quit ()
[7:43] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[7:43] * vimal (~vikumar@121.244.87.116) has joined #ceph
[7:53] * raphaelsc (~raphaelsc@2804:7f2:2080:47af:5e51:4fff:fe86:bbae) Quit (Remote host closed the connection)
[7:53] * kuku (~kuku@119.93.91.136) has joined #ceph
[7:54] * jermudgeon (~jhaustin@southend.mdu.whitestone.link) Quit (Quit: jermudgeon)
[7:55] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[7:55] * sebastian-w_ (~quassel@212.218.8.139) Quit (Remote host closed the connection)
[7:55] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[7:56] * kuku (~kuku@119.93.91.136) has joined #ceph
[8:04] * onyb (~ani07nov@119.82.105.66) has joined #ceph
[8:05] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[8:11] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:19] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[8:33] * chengpeng (~chengpeng@180.168.126.179) Quit (Quit: Leaving)
[8:34] * sam15 (~sascha@p50931ba9.dip0.t-ipconnect.de) has joined #ceph
[8:35] <sam15> good morning, can you tell me when a new ceph.conf is picked up by the system? Do I have to restart the daemons, or what is the mechanism here?
[8:46] * zviratko (~delcake@108.61.122.153) has joined #ceph
[8:56] * masber (~masber@129.94.15.152) Quit (Read error: Connection reset by peer)
[8:59] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[9:01] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[9:04] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[9:04] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[9:06] * yanzheng (~zhyan@118.116.114.80) Quit ()
[9:08] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Remote host closed the connection)
[9:08] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[9:10] * yanzheng (~zhyan@118.116.114.80) has joined #ceph
[9:12] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[9:16] * zviratko (~delcake@9YSAABFPT.tor-irc.dnsbl.oftc.net) Quit ()
[9:18] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[9:21] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:21] * Hemanth_ (~hkumar_@103.228.221.188) Quit (Ping timeout: 480 seconds)
[9:24] * Hemanth_ (~hkumar_@103.228.221.188) has joined #ceph
[9:25] * analbeard (~shw@support.memset.com) has joined #ceph
[9:26] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[9:26] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[9:28] <IcePic> sam15: at the least, you would need to signal daemons to re-read the conf file
[9:28] <IcePic> just changing it will (qualified guess here) not change behaviour
[9:29] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit ()
[9:30] * bvi (~Bastiaan@185.56.32.1) has joined #ceph
[9:34] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[9:36] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[9:41] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[9:48] <IcePic> sam15: the manpage for "ceph-mon" for instance, says rather specifically about -c option: "Use ceph.conf configuration file instead of the default /etc/ceph/ceph.conf to determine monitor addresses during startup."
[9:49] * Hemanth_ (~hkumar_@103.228.221.188) Quit (Quit: Leaving)
[9:51] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:52] <sam15> IcePic: thx, I thought there might be a mechanism to watch for changes in ceph.conf: push config -> automatic reload on the target node. As it is, you change your config, push it, and then have to reload manually on the node.
[9:53] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:53] <sam15> IcePic: quite cumbersome and error prone
[9:57] <iggy> that's more what ceph tell/injectargs is for
[9:57] <IcePic> of course one can code such a mechanism, but larger config files will have settings that depend on one another, so if someone edits half of it, saves, and goes on to fix another part that it depends on, it would keep erroring out until everything is fixed and consistent
[9:59] <sam15> icepic: you have a point there.
[9:59] <sam15> iggy: thx. I will take a look at it
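(For reference, the injectargs mechanism iggy points to changes settings on running daemons without editing ceph.conf — not persistent across restarts; the option shown is illustrative:)

    ceph tell osd.* injectargs '--osd_max_backfills 2'
    ceph daemon osd.0 config show | grep osd_max_backfills   # verify via the local admin socket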
[10:01] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[10:01] * tuhnis (~Hejt@176.56.230.79) has joined #ceph
[10:02] * vikhyat_ (~vumrao@114.143.44.216) has joined #ceph
[10:03] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[10:04] <sep> is ceph pg repair safe to run on hammer ? iow it will not mindlessly replace a replica with the master if it does not know which one is corrupt ?
[10:05] * vikhyat (~vumrao@123.252.149.81) Quit (Ping timeout: 480 seconds)
[10:11] * vikhyat_ is now known as vikhyat
[10:21] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[10:23] * dosaboy_ (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) has joined #ceph
[10:24] * dosaboy_ (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) Quit ()
[10:24] * dosaboy (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) Quit (Read error: Connection reset by peer)
[10:25] * kaisan (~kai@zaphod.kamiza.nl) has joined #ceph
[10:27] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[10:29] * swami1 (~swami@27.7.161.20) has joined #ceph
[10:29] * dosaboy (~dosaboy@33.93.189.91.lcy-02.canonistack.canonical.com) has joined #ceph
[10:31] * tuhnis (~Hejt@61TAABEDE.tor-irc.dnsbl.oftc.net) Quit ()
[10:33] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:35] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[10:39] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[10:40] * boolman (boolman@79.138.78.238) has joined #ceph
[10:44] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:45] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:46] * userarpanet (~unknown@nat-23-0.nsk.sibset.net) Quit (Ping timeout: 480 seconds)
[10:46] * userarpanet (~unknown@office.siplabs.ru) has joined #ceph
[10:47] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:48] * rendar (~I@host5-58-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[10:50] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:50] * XMVD (~Xmd@78.85.35.236) Quit (Read error: Connection reset by peer)
[10:54] * adun153 (~adun153@130.105.147.50) has joined #ceph
[10:55] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:70e3:d605:eee0:baf4) has joined #ceph
[10:59] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) has joined #ceph
[11:00] * Hemanth (~hkumar_@103.228.221.188) has joined #ceph
[11:00] * TMM (~hp@185.5.121.201) has joined #ceph
[11:01] * The1w (~jens@node3.survey-it.dk) has joined #ceph
[11:01] * userarpanet (~unknown@office.siplabs.ru) Quit (Ping timeout: 480 seconds)
[11:01] <adun153> Hello everyone, this is my crush map: http://pastebin.com/4hkDWKMk I have a problem, though: OSD 44 went down. Why did an OpenStack VM of mine stop working? The pool it was using was of size 2, and when OSD 44 was down, the ceph health status turned to HEALTH_WARN, and around 30 PGs were stuck in down+peering. Isn't the VM supposed to be able to continue working because of the "size=2" replica?
[11:02] * jfaj_ (~jan@p4FC5B20A.dip0.t-ipconnect.de) has joined #ceph
[11:03] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[11:04] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[11:06] <doppelgrau> adun153: min_size=2?
[11:07] <adun153> doppelgrau: min_size=1
[11:08] <adun153> The VM wasn't able to continue functioning until osd-44 was brought back up again.
[11:08] <doppelgrau> looks reasonable, but down+peering seems strange. Some network problems?
[11:10] * userarpanet (~unknown@nat-23-0.nsk.sibset.net) has joined #ceph
[11:10] * germano (~germano@default-46-102-197-194.interdsl.co.uk) Quit (Quit: Leaving)
[11:11] <adun153> Ah, wait, so the VM going down IS expected behavior from my setup?
[11:12] <adun153> doppelgrau, But a day before, I did get a couple of lines like this: 2016-08-17 10:37:48.426432 7fe4b344d700 -1 osd.44 17513 heartbeat_check: no reply from osd.48 since back 2016-08-17 10:37:28.198454 front 2016-08-17 10:37:45.520111 (cutoff 2016-08-17 10:37:28.425635)
[11:13] * onyb (~ani07nov@119.82.105.66) Quit (Quit: raise SystemExit())
[11:14] <doppelgrau> adun153: no, a few seconds stuck IO is expected
[11:14] <doppelgrau> adun153: I would double check the network, especially larger MTUs if you use them
[11:16] <adun153> doppelgrau: MTU is 1500 for the storage nodes.
[11:17] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[11:19] * derjohn_mobi (~aj@x590e6307.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[11:21] * elt (~Epi@tsn109-201-152-238.dyn.nltelcom.net) has joined #ceph
[11:21] <adun153> Doppelgrau, just to confirm: 1.) When OSD 44 went down, the VM should have just been stuck for a second or so until it switched over to the replica? 2.) My CRUSH rules should allow the VM to continue operation, assuming that its data was on OSD44 when the OSD went down? 3.) Are you suggesting that I increase the MTUs for the network between the storage nodes?
[11:22] <Be-El> adun153: clients (e.g. vm) are only able to use a PG if it is in an active state
[11:22] <Be-El> active+clean is the optimal case
[11:23] <Be-El> adun153: since the pg in your case was down+peering, you might want to check the pg's secondary osd
[11:24] <Be-El> does ceph provide an easy way to create the systemd symlinks for mons and mds? osds can be handled by ceph-disk, but the other daemon types are lacking this ability?
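(On Jewel's systemd packaging the mon/mds instance units can be enabled directly — a sketch, assuming the daemon id is the short hostname:)

    systemctl enable ceph-mon@$(hostname -s)
    systemctl start ceph-mon@$(hostname -s)
    systemctl enable ceph-mds@$(hostname -s)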
[11:28] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:30] <adun153> be-el, how do I check the pg's secondary osd?
[11:30] <Be-El> adun153: use ceph pg query to find out which osd is the secondary/tertiary one, and have a look at its local log
[11:30] <adun153> thanks, be-el!
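(The check Be-El suggests, with an illustrative pg id; in the JSON output, "up" and "acting" list the OSDs serving the PG, primary first:)

    ceph health detail    # lists the down+peering PGs and the OSDs they implicate
    ceph pg 3.7 query     # per-PG JSON: state, up/acting sets, peering info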
[11:30] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[11:35] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[11:37] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:38] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:45] * derjohn_mobi (~aj@2001:6f8:1337:0:4c25:e7b2:cbbf:1db2) has joined #ceph
[11:46] * _mrp (~mrp@178-222-84-200.dynamic.isp.telekom.rs) has joined #ceph
[11:46] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[11:47] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[11:48] * _mrp (~mrp@178-222-84-200.dynamic.isp.telekom.rs) Quit ()
[11:51] * elt (~Epi@tsn109-201-152-238.dyn.nltelcom.net) Quit ()
[11:51] * Architect (~Dinnerbon@ip95.ip-94-23-150.eu) has joined #ceph
[11:54] * Hemanth (~hkumar_@103.228.221.188) Quit (Ping timeout: 480 seconds)
[11:55] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:06] <jprins> Hi everyone. What does the following message mean? Is it in any way serious or can it be ignored? 2016-08-18 11:57:44.184385 7f1e6bfff700 0 RGWGC::process() failed to acquire lock on gc.17
[12:16] * _nick (~nick@zarquon.dischord.org) Quit (Quit: ZNC - http://znc.in)
[12:18] * _nick (~nick@zarquon.dischord.org) has joined #ceph
[12:20] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) Quit (Quit: Ex-Chat)
[12:21] * Architect (~Dinnerbon@9YSAABFS5.tor-irc.dnsbl.oftc.net) Quit ()
[12:23] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[12:23] <Be-El> great, changing a pool ruleset crashes all monitors...
[12:23] * derjohn_mobi (~aj@2001:6f8:1337:0:4c25:e7b2:cbbf:1db2) Quit (Ping timeout: 480 seconds)
[12:25] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit ()
[12:31] * rakeshgm (~rakesh@121.244.87.117) Quit (Remote host closed the connection)
[12:32] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:32] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[12:39] * DanJ (~textual@166.177.184.87) has joined #ceph
[12:40] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:42] <mistur> hello
[12:42] <mistur> I have an issue with radosgw on jewel
[12:42] <mistur> # radosgw-admin --cluster cephprod region get
[12:43] <mistur> failed to init zonegroup: (2) No such file or directory
[12:43] <mistur> I try to create region and zone to have buckets on EC or replicated pool
[12:45] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[12:47] <sam15> Do you have experience with attaching ceph to citrix xenserver without openstack?
[12:49] <mistur> not at all
[12:51] <IcePic> mistur: perhaps you should make a default zonegroup and put your single zone in it?
[12:52] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[12:53] <doppelgrau> sam15: some experience with the opensource xen and ceph, but not with the citrix-flavor
[12:53] * jfaj__ (~jan@p4FC2583F.dip0.t-ipconnect.de) has joined #ceph
[12:54] <jprins> mistur: And you should probably create a default region as well, because it will probably not be in your base setup, and without it you get all kinds of crazy errors.
[12:54] <sam15> doppelgrau (cool nick btw :-) ) afaik opensource xen supports libvirt, right? Citrix does not
[12:55] <doppelgrau> sam15: libvirt and qdisk as storage backend (I use qdisk => qemu)
[12:56] <Be-El> does ceph support changing replicated pools into ec pools?
[12:58] <doppelgrau> Be-El: since it is impossible to change the EC-profile of an EC-pool, I guess not (but not tested)
[12:58] <sam15> doppelgrau: ok, then I will have to install an os xen server as testbed.
[12:59] <Be-El> doppelgrau: so changing the bucket pool for our test radosgw from replicated (default) to ec requires creating a new pool?
[12:59] * jfaj_ (~jan@p4FC5B20A.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[13:00] * DanJ (~textual@166.177.184.87) Quit (Ping timeout: 480 seconds)
[13:01] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[13:01] <doppelgrau> sam15: https://www.formann.de/2015/05/using-ceph-rbd-as-xen-backend/ <- getting that line in order was the hardest part :)
[13:02] <doppelgrau> Be-El: I guess
[13:03] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:04] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:05] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[13:05] * _mrp (~mrp@82.117.199.26) has joined #ceph
[13:05] <mistur> I did radosgw-admin zonegroup create --rgw-zonegroup=default
[13:09] * trociny (~mgolub@93.183.239.2) has joined #ceph
[13:11] * m0zes (~mozes@ns1.beocat.ksu.edu) Quit (Ping timeout: 480 seconds)
[13:11] * bara (~bara@213.175.37.12) has joined #ceph
[13:15] * pakman__ (~theghost9@176.56.230.79) has joined #ceph
[13:15] * Linkmark (~Linkmark@252.146-78-194.adsl-static.isp.belgacom.be) has joined #ceph
[13:15] * jfaj__ (~jan@p4FC2583F.dip0.t-ipconnect.de) Quit (Quit: WeeChat 1.5)
[13:16] * userarpanet (~unknown@nat-23-0.nsk.sibset.net) Quit (Ping timeout: 480 seconds)
[13:18] <mistur> now I have :
[13:18] <mistur> radosgw-admin zone get
[13:18] <mistur> unable to initialize zone: (2) No such file or directory
[13:19] <mistur> looks like the same as https://www.mail-archive.com/ceph-users@lists.ceph.com/msg31391.html
[13:24] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[13:25] <jprins> I had the same issue here and I have created some documentation for myself on how to fix this. The default install in RGW is lacking the default Region. Which results in these errors and some more when you create more complicated setups.
[13:25] <IcePic> mistur: look at jprins post to the lists
[13:25] <IcePic> https://www.mail-archive.com/ceph-users@lists.ceph.com/msg31767.html
[13:26] <jprins> If I have time I will try to create a message on the mailing list with the exact steps I took to fix this, basically a summary of the messages I posted earlier on the mailing lists.
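(The fix jprins and the linked thread describe amounts to creating and committing the defaults that the Jewel install leaves missing — a sketch of the commonly posted sequence; the "default" names are the conventional ones, and existing setups may need "set" with edited json instead of "create":)

    radosgw-admin realm create --rgw-realm=default --default
    radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default --master --default
    radosgw-admin period update --commit
    # then restart the radosgw daemon(s)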
[13:26] <IcePic> perhaps we should ask if anyone ever got a jewel rgw up and running?
[13:26] <IcePic> seems like some very basic tests are missing
[13:26] <jprins> I have it up and running now without errors.
[13:27] <IcePic> jprins: sure, but you also documented some 10 commands needing to be run on top of the normal provisioning steps
[13:27] <jprins> True to that.
[13:27] <IcePic> not to be excessively whiny, but a normal 1A test of a clean install running the documented setup should have shown if lots of installation and setup steps are missing
[13:27] <jprins> And some modifying json files etc.
[13:28] <jprins> Took me a few days to figure that all out :-)
[13:42] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) has joined #ceph
[13:44] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:45] * pakman__ (~theghost9@61TAABEGM.tor-irc.dnsbl.oftc.net) Quit ()
[13:46] * hellertime (~Adium@72.246.3.14) has joined #ceph
[13:47] * georgem (~Adium@24.114.55.175) has joined #ceph
[13:48] * georgem (~Adium@24.114.55.175) Quit ()
[13:48] * georgem (~Adium@206.108.127.16) has joined #ceph
[13:49] * The1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[13:49] * MKoR (~anadrom@195-154-255-174.rev.poneytelecom.eu) has joined #ceph
[13:53] <mistur> IcePic: seems to help
[13:53] <mistur> thanks
[14:02] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[14:02] * overclk_ (~quassel@139.59.14.231) has joined #ceph
[14:02] * overclk (~quassel@2400:6180:100:d0::54:1) Quit (Ping timeout: 480 seconds)
[14:06] <sep> is ceph pg repair safe to run on hammer ? iow it will not mindlessly replace a replica with the master if it does not know which one is corrupt ?
[14:10] <TMM> sep, as far as I'm aware you will only get an inconsistent error on a PG if crcs fail during a deep scrub
[14:10] <TMM> sep, as far as I'm aware at that point it is known that the local osd is definitely corrupt
[14:13] <IcePic> mistur: \o/
[14:13] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[14:13] * bviktor (~bviktor@213.16.80.50) has joined #ceph
[14:14] <mistur> IcePic: it's not finished yet, unfortunately...
[14:14] <mistur> I have now my zone and region declared
[14:14] <mistur> # radosgw-admin user create --uid=admin
[14:14] <mistur> 2016-08-18 14:14:30.215885 7f7960693900 0 Cannot find zone id=08969e69-2b04-42bf-9ac0-97e0afe13ac4 (name=default), switching to local zonegroup configuration
[14:14] <mistur> 2016-08-18 14:14:30.217897 7f7960693900 -1 Cannot find zone id=08969e69-2b04-42bf-9ac0-97e0afe13ac4 (name=default)
[14:14] <mistur> couldn't init storage provider
[14:15] * vbellur (~vijay@71.234.224.255) Quit (Ping timeout: 480 seconds)
[14:17] <sep> TMM, if one has size 2 and the 2 objects differ in md5sum, how would ceph know which one is the corrupt one ?
[14:17] <IcePic> sep: for raid1 and similar arrangements, you cant
[14:17] * b0e (~aledermue@213.95.25.82) has joined #ceph
[14:18] <IcePic> unless one of them fails some other checksum or so
[14:18] <IcePic> as in "a person with two clocks that show different time knows no more than a person with one clock"
[14:19] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[14:19] * MKoR (~anadrom@9YSAABFU5.tor-irc.dnsbl.oftc.net) Quit ()
[14:20] <sep> IcePic, correct. and my question is what ceph pg repair does in this case ? throw a warning and do nothing, or just copy the master to the slave with a 50% chance of corrupting your good object
[14:21] <IcePic> sep: but if you know "on osd 1 and osd 3432 there should be a part with md5 abc123"
[14:21] <IcePic> then you can tell if any single object is bad and the other is ok
[14:21] <sep> and if you have size 3, is pg repair always safe ?
[14:22] <sep> IcePic, you normally do not know this, since deep scrub reads objects and compares md5sums between them, but does not have them stored anywhere to check against.
[14:22] <IcePic> sep: cant help you with the trust part. When I am forced to run fsck on broken disks in general I just figure I am not one of the perhaps 10 people that would be smarter than fsck is, so I just let it repair what it can and try to deal with the fallout
[14:23] <IcePic> sep: thats the assumption I dont know anything about, if the mons somewhere know part of something about the objects in order to be able to solve your issue
[14:23] <sep> indeed. but i remember reading old mailinglist posts discussing this where it was claimed pg repair copied the master obj to the replicas. and i have never read anywhere what it actually does
[14:24] <sep> and posts like this.. https://elkano.org/blog/ceph-health_err-1-pgs-inconsistent-2-scrub-errors/?PageSpeed=noscript where they instruct you to manually check the objects do not add to my confidence :)
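(The manual check in that post boils down to locating each replica of the inconsistent object on disk and comparing checksums yourself before letting repair overwrite anything — a sketch for filestore; pg id 0.6 and the object name are illustrative:)

    ceph health detail                                    # find the inconsistent pg, e.g. 0.6
    # on each OSD host holding the pg, checksum the object's file
    find /var/lib/ceph/osd/ceph-*/current/0.6_head -name '*objectname*' -exec md5sum {} \;
    # remove the bad copy (with that osd stopped), restart the osd, then:
    ceph pg repair 0.6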
[14:24] * haplo37 (~haplo37@107-190-44-23.cpe.teksavvy.com) Quit (Ping timeout: 480 seconds)
[14:26] <IcePic> sep: but that "known digest" could be from the mons and not the first master copy
[14:27] <IcePic> as in "copy to the broken one from the one where the checksum actually is correct"
[14:28] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Quit: valeech)
[14:28] * uhtr5r (~demonspor@108.61.123.67) has joined #ceph
[14:28] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:29] <sep> didn't think ceph hammer filestore stored known correct digests anywhere...
[14:32] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[14:33] * adun153 (~adun153@130.105.147.50) Quit (Quit: Ex-Chat)
[14:33] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:33] <TMM> sep, ceph knows the expected crc
[14:34] <TMM> sep, the crc is internal to the osd, there is as far as I'm aware no comparison happening
[14:34] <sep> TMM awesome . do you have a link /url to that information? i have read http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/034646.html but did not know more recent developments
[14:35] <TMM> sep, I don't remember where I read that... sorry
[14:36] <mistur> IcePic: it's really annoying that rgw does not work as it works on infernalis
[14:37] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[14:37] <IcePic> mistur: to me also, I havent gotten my inf rgw (which had data consistency issues fixed in hammer and jewel) working yet
[14:37] <mistur> we are discussing using infernalis instead of jewel right now
[14:38] * Dr_O (~owen@00012c05.user.oftc.net) has joined #ceph
[14:39] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[14:39] <ska> Is there a client that can connect to a high latency connection to CephFS?
[14:40] <jprins> ska: What do you mean?
[14:41] <ska> I have windows/osx users that want read-only access to a CephFS, but they are very far away so network latency is large.
[14:44] <IcePic> mistur: we got hit by this on infernalis, so make sure you can get back all you do write to the rgw: http://tracker.ceph.com/issues/15886
[14:45] <mistur> IcePic: we already have an infernalis cluster
[14:45] <mistur> it's the "beta" cluster
[14:45] <mistur> and we are working on the "prod" cluster in Jewel
[14:45] <mistur> because when we started our tests in January, Jewel was not ready yet
[14:46] <ska> Maybe a samba server at the Ceph cluster is a better way to deal with that?
[14:46] <mistur> we expect that jewel will have lots of improvements as it's an LTS
[14:47] <mistur> but I didn't imagine getting so many issues with the rgw
[14:47] <IcePic> ska: if you have an idea of a protocol that handles it very well, you might aswell re-export ceph using it
[14:48] <jprins> ska: You could use a S3 gateway and give them access using S3
[14:48] <jprins> Or you could use a MDS server with a Samba server in front of it.
[14:48] <mistur> IcePic: it is strange, when I try to create a user, I got "Cannot find zone id=08969e69-2b04-42bf-9ac0-97e0afe13ac4 (name=default)"
[14:48] <mistur> but I only have one zone with the id 08969e69-2b04-42bf-9ac0-97e0afe13ac4
[14:48] <jprins> mistur: That is all because the default region is not properly setup.
[14:49] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[14:49] <ska> jprins: We plan on having S3 for sure.. But different groups may need some global access to data.
[14:49] <IcePic> region or realm?
[14:49] <jprins> mistur: Sorry, realm of course.
[14:49] <mistur> "id": "08969e69-2b04-42bf-9ac0-97e0afe13ac4",
[14:49] <mistur> "realm_id": "05e7bc55-0ec4-429b-a899-c06a4bf59f72"
[14:50] <mistur> # radosgw-admin realm default --rgw-realm=default
[14:50] <mistur> failed to init realm: (2) No such file or directory
[14:50] <mistur> 2016-08-18 14:50:19.817160 7f8dd17c7900 0 error in read_id for id : (2) No such file or directory
[14:50] <ska> jprins: a Samba server in front of MDS. I thought you needed access to the Monitors and OSD's as well.
[14:51] * johnavp1989 (~jpetrini@yakko.coredial.com) has joined #ceph
[14:51] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[14:51] <jprins> An MDS creates a filesystem structure on top of Ceph, so you can mount a pool as a regular filesystem. You can then export that filesystem using SMB.
[14:52] <jprins> I have not build that yet, so what you need exactly can be found in the docs .
[14:52] <IcePic> or run a vm on qemu/kvm that uses ceph as storage for its guests, then have the guest run samba
[14:53] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:54] <ska> jprins: I thought MDS's work exclusively with CephFS? Is this a special use of an MDS?
[14:55] * Racpatel (~Racpatel@mobile-166-171-186-088.mycingular.net) has joined #ceph
[14:55] <sep> does a new scrub have to run before the "scrub errors" warning is removed ? ceph pg repair does seem to fix the corrupt object ; but the pg is still listed as inconsistent
[14:55] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[14:55] * jarrpa (~jarrpa@196.42.22.116) has joined #ceph
[14:56] * sam15 (~sascha@p50931ba9.dip0.t-ipconnect.de) Quit (Quit: sam15)
[14:56] * srk (~Siva@2605:6000:ed04:ce00:ece8:d891:8ffc:130a) has joined #ceph
[14:57] <sep> ska, mds is just the metadata store. the ceph client that mounts the cephfs, or uses the rbd image, will need to speak to mons, osds and mdss (for cephfs). my samba server mounts the cephfs, but the mds is a separate dedicated host
[14:57] <jprins> ska: MDS works exclusively with Ceph. But what it does with Ceph is that it creates a filesystem structure so you can mount a pool as a regular filesystem.
[14:58] <jprins> What sep says.
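(A minimal sketch of the setup jprins and sep describe: kernel-mount CephFS on a gateway host and re-export it read-only over SMB; the mon address, credentials, and share name are assumptions:)

    # on the gateway host
    mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # /etc/samba/smb.conf: read-only share on top of the mount
    [cephfs]
        path = /mnt/cephfs
        read only = yes
        guest ok = no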
[14:58] <jprins> mistur: See the private channel
[14:58] * uhtr5r (~demonspor@26XAAA51B.tor-irc.dnsbl.oftc.net) Quit ()
[14:59] * ira (~ira@nat-pool-bos-t.redhat.com) has joined #ceph
[15:00] <ska> sep: Are your samba clients running over any high-latency connections?
[15:02] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:02] <ska> jprins: The MDS stores the metadata part of CephFS (in an mds pool) and the data pool is a separate pool. Together they form CephFS.
[15:04] * srk (~Siva@2605:6000:ed04:ce00:ece8:d891:8ffc:130a) Quit (Ping timeout: 480 seconds)
[15:08] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:09] * Hemanth (~hkumar_@103.228.221.167) has joined #ceph
[15:17] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) has joined #ceph
[15:18] * m0zes (~mozes@ns1.beocat.ksu.edu) has joined #ceph
[15:19] * Racpatel (~Racpatel@mobile-166-171-186-088.mycingular.net) Quit (Quit: Leaving)
[15:20] <sep> ska, not really no.
[15:23] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[15:23] * northrup (~northrup@189-211-129-205.static.axtel.net) has joined #ceph
[15:24] * derjohn_mobi (~aj@2001:6f8:1337:0:4909:775b:812e:4b4) has joined #ceph
[15:25] * Racpatel (~Racpatel@166.171.186.88) has joined #ceph
[15:26] * rraja (~rraja@121.244.87.117) has joined #ceph
[15:29] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:30] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[15:30] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Read error: Connection reset by peer)
[15:31] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[15:33] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Read error: Connection reset by peer)
[15:34] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[15:35] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:43] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) has joined #ceph
[15:43] * hroussea (~hroussea@000200d7.user.oftc.net) Quit (Quit: Client exited.)
[15:44] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:45] * vikhyat (~vumrao@114.143.44.216) Quit (Quit: Leaving)
[15:49] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[15:57] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Remote host closed the connection)
[15:58] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) has joined #ceph
[16:04] * EinstCrazy (~EinstCraz@61.165.252.183) has joined #ceph
[16:07] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[16:08] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[16:12] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:13] * xarses (~xarses@66-219-216-151.static.ip.veracitynetworks.com) Quit (Ping timeout: 480 seconds)
[16:16] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[16:16] * salwasser (~Adium@72.246.3.14) has joined #ceph
[16:19] * yanzheng (~zhyan@118.116.114.80) Quit (Quit: This computer has gone to sleep)
[16:22] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[16:25] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) Quit (Ping timeout: 480 seconds)
[16:28] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[16:31] * KungFuHamster (~Rosenblut@tor-exit.squirrel.theremailer.net) has joined #ceph
[16:33] * bvi (~Bastiaan@185.56.32.1) Quit (Ping timeout: 480 seconds)
[16:42] * xarses (~xarses@4.35.170.198) has joined #ceph
[16:43] * liumxnl (~liumxnl@45.32.74.135) has joined #ceph
[16:44] * liumxnl (~liumxnl@45.32.74.135) Quit ()
[16:56] * srk (~Siva@2605:6000:ed04:ce00:1924:f452:9093:c4b9) has joined #ceph
[16:56] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[17:01] * KungFuHamster (~Rosenblut@9YSAABFZ0.tor-irc.dnsbl.oftc.net) Quit ()
[17:01] * airsoftglock (~danielsj@ip95.ip-94-23-150.eu) has joined #ceph
[17:02] * bvi (~Bastiaan@102-117-145-85.ftth.glasoperator.nl) has joined #ceph
[17:04] * eth00 (~eth00@74.81.187.100) has joined #ceph
[17:05] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * Linkmark (~Linkmark@252.146-78-194.adsl-static.isp.belgacom.be) Quit (Quit: Leaving)
[17:05] * EinstCrazy (~EinstCraz@61.165.252.183) Quit (Remote host closed the connection)
[17:09] * joshd1 (~jdurgin@2602:30a:c089:2b0:e42d:722c:22c:73bb) has joined #ceph
[17:09] * snelly (~cjs@sable.island.nu) has joined #ceph
[17:09] <snelly> howdy
[17:09] <snelly> anybody using the docker images here?
[17:10] <snelly> I'm trying to understand how they are organized in the GH repo
[17:11] <snelly> this feels so odd: https://github.com/ceph/ceph-docker/blob/master/ceph-releases/jewel/ubuntu/16.04/daemon/entrypoint.sh
[17:11] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[17:13] * nilez (~nilez@96.44.144.90) has joined #ceph
[17:19] * kefu (~kefu@114.92.101.38) has joined #ceph
[17:20] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:21] * kefu (~kefu@114.92.101.38) Quit (Max SendQ exceeded)
[17:22] * kefu (~kefu@114.92.101.38) has joined #ceph
[17:26] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) has joined #ceph
[17:27] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[17:30] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[17:31] * ade_b (~abradshaw@p4FF7871C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:31] * airsoftglock (~danielsj@9YSAABF0T.tor-irc.dnsbl.oftc.net) Quit ()
[17:36] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[17:38] * ade (~abradshaw@p4FF7871C.dip0.t-ipconnect.de) has joined #ceph
[17:38] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Read error: Connection reset by peer)
[17:39] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[17:39] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit ()
[17:40] * kefu is now known as kefu|afk
[17:52] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:58] * i_m1 (~ivan.miro@31.173.120.48) has joined #ceph
[17:58] * i_m (~ivan.miro@31.173.120.48) Quit (Read error: Connection reset by peer)
[17:59] * davidzlap (~Adium@2605:e000:1313:8003:a13f:a5c4:2b91:a7bb) has joined #ceph
[18:02] * swami1 (~swami@27.7.161.20) Quit (Quit: Leaving.)
[18:06] * mykola (~Mikolaj@91.245.78.8) has joined #ceph
[18:15] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[18:16] * swami1 (~swami@27.7.161.20) has joined #ceph
[18:18] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[18:23] * scg (~zscg@181.122.4.166) has joined #ceph
[18:23] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:24] * srk (~Siva@2605:6000:ed04:ce00:1924:f452:9093:c4b9) Quit (Ping timeout: 480 seconds)
[18:25] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:29] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:33] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[18:34] * mattch (~mattch@w5430.see.ed.ac.uk) Quit (Ping timeout: 480 seconds)
[18:39] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:43] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[18:44] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:46] * jarrpa (~jarrpa@196.42.22.116) Quit (Ping timeout: 480 seconds)
[18:47] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:48] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:51] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[18:53] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit ()
[19:02] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[19:04] * srk (~Siva@2605:6000:ed04:ce00:c133:1dae:a6e5:17b9) has joined #ceph
[19:04] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[19:06] * swami1 (~swami@27.7.161.20) Quit (Quit: Leaving.)
[19:06] * ceph-ircslackbot3 (~ceph-ircs@ds9536.dreamservers.com) has joined #ceph
[19:10] * kefu|afk (~kefu@114.92.101.38) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:10] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) has joined #ceph
[19:11] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[19:11] * ira (~ira@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[19:14] * ceph-ircslackbot (~ceph-ircs@ds9536.dreamservers.com) Quit (Ping timeout: 480 seconds)
[19:18] <rkeene> I'm adding a watchdog to each of my nodes... what would be a good check to determine if the current Ceph node is still functional ?
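(One lightweight option for rkeene's watchdog — a sketch, not an authoritative answer: parse cluster health from the node and confirm the local daemons answer on their admin sockets:)

    ceph health | grep -q HEALTH_OK || echo 'cluster degraded'
    ceph daemon osd.0 version     # a live osd responds on its admin socket; a hung one won't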
[19:18] * thomnico (~thomnico@2a01:e35:8b41:120:3c36:5753:3e99:ccc1) Quit (Quit: Ex-Chat)
[19:19] * bviktor (~bviktor@213.16.80.50) Quit (Ping timeout: 480 seconds)
[19:19] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[19:24] * northrup (~northrup@189-211-129-205.static.axtel.net) Quit (Ping timeout: 480 seconds)
[19:27] * salwasser (~Adium@72.246.3.14) has joined #ceph
[19:27] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:31] * ade (~abradshaw@p4FF7871C.dip0.t-ipconnect.de) Quit (Quit: Too sexy for his shirt)
[19:36] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Quit: ZNC 1.6.3 - http://znc.in)
[19:36] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[19:40] * johnavp1989 (~jpetrini@yakko.coredial.com) Quit (Quit: Leaving.)
[19:40] * johnavp1989 (~jpetrini@yakko.coredial.com) has joined #ceph
[19:40] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[19:42] * srk (~Siva@2605:6000:ed04:ce00:c133:1dae:a6e5:17b9) Quit (Ping timeout: 480 seconds)
[19:45] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:48] * johnavp1989 (~jpetrini@yakko.coredial.com) Quit (Ping timeout: 480 seconds)
[19:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[19:49] * corevoid (~corevoid@ip68-5-125-61.oc.oc.cox.net) has joined #ceph
[19:50] * rakeshgm (~rakesh@106.51.29.33) has joined #ceph
[19:51] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[19:52] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit ()
[19:52] * _mrp (~mrp@82.117.199.26) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:53] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[19:54] * walcubi__ is now known as walcubi
[19:55] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[19:55] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit ()
[19:55] * Racpatel (~Racpatel@166.171.186.88) Quit (Ping timeout: 480 seconds)
[19:55] * PcJamesy (~AG_Scott@162.251.167.90) has joined #ceph
[19:57] <corevoid> So, I have a question about write performance. I have been testing with Ceph for a couple of months now, read all the docs on the main site, and been trying to read articles around, but I am still unsure where to start on this. Basically, I am seeing very spotty write performance. The test setup is on SSDs but only a 1Gbps (1 public, 1 cluster) network. I was thinking I should be able to saturate the pipe, which I can with read tests, but
[19:57] <corevoid> with write tests, it never gets close. Usually like half or less of the full 1Gbps. The disks can easily do 3-5x that directly. Anyway, I was just hoping someone might be able to point me in the right direction to either optimize/tune this better or understand why this is the case(?).
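(For isolating where writes top out, rados bench against a scratch pool is the usual first step. Note that writes are only acknowledged once all replicas persist them, so with size=2 each client write crosses the public link once and the cluster link once, and on filestore the journal double-write halves per-disk throughput — well under wire speed is common. Pool name is illustrative:)

    rados bench -p testpool 30 write --no-cleanup   # 30s of 4MB-object writes
    rados bench -p testpool 30 seq                  # sequential reads of the same objects
    rados -p testpool cleanup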
[19:57] * Ryfi (~ryan@d207-81-7-44.bchsia.telus.net) has joined #ceph
[19:58] <Ryfi> hey all, i recently ran into backfill_toofull problems, outed a tiny osd, replaced it with a bigger one
[19:58] <Ryfi> after reweight-by-utilization, i have 13 pgs stuck unclean and recovery has halted
[19:58] <Ryfi> any idea what i do now? :S
[19:59] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[20:00] <Ryfi> from what i can tell, i can wait a while, then run ceph pg 2.5 mark_unfound_lost revert|delete to forget about them
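(For reference, the stuck PGs can be inspected before reverting anything; pg id 2.5 follows Ryfi's example:)

    ceph pg dump_stuck unclean   # list PGs stuck unclean and their acting OSDs
    ceph pg 2.5 query            # state, up/acting sets, recovery details
    ceph pg 2.5 list_missing     # whether any objects are actually unfound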
[20:02] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[20:03] * Racpatel (~Racpatel@166.170.25.165) has joined #ceph
[20:03] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:03] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) has joined #ceph
[20:04] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[20:06] * xarses (~xarses@4.35.170.198) Quit (Ping timeout: 480 seconds)
[20:07] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[20:10] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Read error: Connection reset by peer)
[20:14] * jarrpa (~jarrpa@adsl-72-50-87-78.prtc.net) has joined #ceph
[20:16] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:24] * ggarg (~Gaurav@x2f26ae2.dyn.telefonica.de) has joined #ceph
[20:25] * PcJamesy (~AG_Scott@26XAAA6BH.tor-irc.dnsbl.oftc.net) Quit ()
[20:31] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[20:32] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[20:34] * srk (~Siva@2605:6000:ed04:ce00:11ce:5337:cf97:1bcd) has joined #ceph
[20:36] * i_m1 (~ivan.miro@31.173.120.48) Quit (Ping timeout: 480 seconds)
[20:44] * Racpatel (~Racpatel@166.170.25.165) Quit (Quit: Leaving)
[20:46] * srk (~Siva@2605:6000:ed04:ce00:11ce:5337:cf97:1bcd) Quit (Ping timeout: 480 seconds)
[20:46] * ircolle (~ircolle@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[20:46] * jarrpa (~jarrpa@adsl-72-50-87-78.prtc.net) Quit (Ping timeout: 480 seconds)
[20:47] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[20:47] * derjohn_mobi (~aj@2001:6f8:1337:0:4909:775b:812e:4b4) Quit (Ping timeout: 480 seconds)
[20:47] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:52] * Hemanth (~hkumar_@103.228.221.167) Quit (Quit: Leaving)
[20:55] * Discovery (~Discovery@109.235.52.11) has joined #ceph
[21:02] * valeech (~valeech@50-205-143-162-static.hfc.comcastbusiness.net) Quit (Quit: valeech)
[21:02] * wjw-freebsd2 (~wjw@smtp.digiware.nl) has joined #ceph
[21:03] * wjw-freebsd3 (~wjw@smtp.digiware.nl) has joined #ceph
[21:07] * georgem (~Adium@206.108.127.16) has left #ceph
[21:08] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[21:09] <Ryfi> hmm that doesnt work.. im wondering if it's because I need to add another OSD or something
[21:09] <Ryfi> the docs arent clear on what to do when you see active+remapped and no reasoning behind it
[21:10] * wjw-freebsd2 (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[21:11] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[21:11] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[21:11] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[21:11] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:13] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[21:13] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[21:20] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[21:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Read error: Connection reset by peer)
[21:20] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[21:21] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:21] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:32] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[21:35] <Ryfi> hmm, everything is okay now lol
[21:35] <Ryfi> just recovered on its own... strangeeeee
[21:36] <vasu> Ryfi: http://docs.ceph.com/docs/master/rados/operations/pg-states/
[21:36] <vasu> Remapped
[21:36] <vasu> The placement group is temporarily mapped to a different set of OSDs from what CRUSH specified.
[21:36] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[21:38] <Ryfi> nevermind, looking at the wrong cluster.. still broken lol
[21:38] <Ryfi> how do i fix that?
[21:39] * rendar (~I@host5-58-dynamic.49-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:39] <vasu> you should have run ceph osd set noout if you were replacing the drive; based on the number of objects you will have to give it time to settle
[21:39] <Ryfi> im thinking it needs another disk, since this all came about after reweighting an osd
[21:40] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[21:40] <georgem> Ryfi: look at pg query output to see why the recovery is blocked, it might not be very evident though
[21:40] <Ryfi> oh i did, it's had all night and it hasn't budged from where it's at
[21:40] <vasu> also here: http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/
[21:41] <Ryfi> http://pastebin.com/hHqwvkF4
[21:41] * vbellur (~vijay@ip-64-134-64-4.public.wayport.net) has joined #ceph
[21:43] <vasu> did you remove the old one from the crush
[21:43] <Ryfi> yup
[21:43] <vasu> up: 7, 14, acting: 7, 14, 9
[21:44] <Ryfi> 9 is the one i had to reweight because it was too full
[21:45] <vasu> its not up
[21:45] <Ryfi> 9 2.71999 osd.9 up 0.72876 1.00000
[21:47] <Ryfi> osdmap e10708: 13 osds: 13 up, 13 in; 13 remapped pgs
[21:47] * srk (~Siva@2605:6000:ed04:ce00:11ce:5337:cf97:1bcd) has joined #ceph
[21:47] <vasu> you could just add one more drive on another host and make it go away,
[21:48] <vasu> and also ping in ceph-devel
[21:48] <Ryfi> i was assuming that was the fix
[21:48] <Ryfi> can i add it then remove it? or will i get the same result? heh
[21:48] <vasu> you mean remove this one after adding?
[21:49] <Ryfi> remove the one i add
[21:49] <Ryfi> like just curious if it needs extra space to sort itself out, then i can get rid of it again
[21:50] <vasu> yeah, based on the number of replicas the pgs will need the osds, if one of them is behaving badly
[21:50] <vasu> or else it wont be able to remap
[21:50] <Ryfi> hopefully my little 500gb drive will save the day heh, not at home to try at the moment though
[21:51] * KindOne (dtscode@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:54] * xarses (~xarses@63-158-87-14.dia.static.qwest.net) has joined #ceph
[21:56] * derjohn_mobi (~aj@x590e6307.dyn.telefonica.de) has joined #ceph
[22:02] * vbellur (~vijay@ip-64-134-64-4.public.wayport.net) Quit (Ping timeout: 480 seconds)
[22:03] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[22:05] * rendar (~I@host5-58-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[22:11] * dnunez (~dnunez@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[22:12] * _mrp (~mrp@178.254.148.42) has joined #ceph
[22:14] <jprins> Hi, anyone here from the Ceph development team? Or someone who can file a bug?
[22:16] <m0zes> anyone can file a bug
[22:16] <m0zes> http://tracker.ceph.com/
[22:16] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[22:17] * rakeshgm (~rakesh@106.51.29.33) Quit (Quit: Leaving)
[22:22] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[22:28] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[22:29] * srk (~Siva@2605:6000:ed04:ce00:11ce:5337:cf97:1bcd) Quit (Ping timeout: 480 seconds)
[22:32] * xarses_ (~xarses@172.56.16.123) has joined #ceph
[22:33] * KindOne (kindone@h204.162.186.173.dynamic.ip.windstream.net) has joined #ceph
[22:34] * _mrp (~mrp@178.254.148.42) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:35] * Jeffrey4l_ (~Jeffrey@110.244.243.189) has joined #ceph
[22:37] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[22:39] * Jeffrey4l (~Jeffrey@101.31.232.173) Quit (Ping timeout: 480 seconds)
[22:39] * xarses (~xarses@63-158-87-14.dia.static.qwest.net) Quit (Ping timeout: 480 seconds)
[22:48] * srk_ (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[22:52] * Racpatel (~Racpatel@166.177.58.167) has joined #ceph
[22:52] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[22:53] * analbeard (~shw@support.memset.com) has joined #ceph
[22:53] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Quit: ...)
[22:54] * KindOne (kindone@h204.162.186.173.dynamic.ip.windstream.net) has joined #ceph
[22:57] * derjohn_mobi (~aj@x590e6307.dyn.telefonica.de) Quit (Remote host closed the connection)
[22:58] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:58] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[22:59] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit ()
[22:59] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Quit: wes_dillingham)
[23:02] * jarrpa (~jarrpa@adsl-72-50-87-78.prtc.net) has joined #ceph
[23:03] * Racpatel (~Racpatel@166.177.58.167) Quit (Quit: Leaving)
[23:03] * derjohn_mob (~aj@x590e6307.dyn.telefonica.de) has joined #ceph
[23:07] * bjozet (~bjozet@82-183-17-144.customers.ownit.se) Quit (Remote host closed the connection)
[23:07] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[23:08] * danieagle (~Daniel@187.35.176.10) Quit (Quit: Thanks for Everything! :-) See you later! :-)
[23:10] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[23:14] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[23:17] * chunmei (~chunmei@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[23:19] * georgem (~Adium@24.114.53.222) has joined #ceph
[23:19] * derjohn_mob (~aj@x590e6307.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[23:19] * derjohn_mob (~aj@x590e6307.dyn.telefonica.de) has joined #ceph
[23:29] * georgem (~Adium@24.114.53.222) has left #ceph
[23:29] * Discovery (~Discovery@109.235.52.11) Quit (Read error: Connection reset by peer)
[23:32] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[23:32] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[23:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[23:34] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[23:35] * DanJ (~textual@173-19-68-180.client.mchsi.com) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:36] * DanJ (~textual@173-19-68-180.client.mchsi.com) has joined #ceph
[23:36] * madkiss1 (~madkiss@2a02:8109:8680:2000:4073:55d5:eac2:4ac4) Quit (Ping timeout: 480 seconds)
[23:40] * hellertime (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[23:56] * xarses_ (~xarses@172.56.16.123) Quit (Ping timeout: 480 seconds)
[23:58] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.