#ceph IRC Log

Index

IRC Log for 2013-04-08

Timestamps are in GMT/BST.

[0:00] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:01] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:03] * MarkN2 (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:04] * MarkN1 (~nathan@197.204.233.220.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:05] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:06] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:06] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[0:06] * MarkN2 (~nathan@197.204.233.220.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:07] * MarkN1 (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:07] * MarkN1 (~nathan@197.204.233.220.static.exetel.com.au) has left #ceph
[0:09] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:12] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[0:12] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:14] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:16] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:17] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:17] * ivotron (~ivo@69-170-63-251.static-ip.telepacific.net) Quit (Ping timeout: 480 seconds)
[0:17] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has left #ceph
[0:19] * MarkN2 (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:19] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:22] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:22] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has left #ceph
[0:22] * MarkN2 (~nathan@142.208.70.115.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:25] * MarkN1 (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:29] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:29] * MarkN1 (~nathan@197.204.233.220.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:32] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) has joined #ceph
[0:32] * nz_monkey_ (~nz_monkey@222.47.255.123.static.snap.net.nz) Quit (Quit: No Ping reply in 180 seconds.)
[0:33] * nz_monkey (~nz_monkey@222.47.255.123.static.snap.net.nz) has joined #ceph
[0:33] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[0:34] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) Quit (Read error: Connection reset by peer)
[0:35] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has joined #ceph
[0:37] * MarkN (~nathan@197.204.233.220.static.exetel.com.au) has left #ceph
[0:40] * BillK (~BillK@203-59-45-74.dyn.iinet.net.au) has joined #ceph
[0:41] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) Quit (Ping timeout: 480 seconds)
[0:46] <Elbandi_> I need a review of my commits: https://github.com/Elbandi/ceph/commits/wip-getlayout
[0:53] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) has joined #ceph
[0:54] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:54] * ivotron (~ivo@adsl-76-254-17-170.dsl.pltn13.sbcglobal.net) Quit (Read error: Connection reset by peer)
[1:05] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[1:05] * MarkN1 (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[1:12] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[1:24] * maxiz (~pfliu@111.192.254.172) has joined #ceph
[1:25] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[1:28] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[1:29] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[1:40] * maxiz (~pfliu@111.192.254.172) Quit (Quit: Ex-Chat)
[1:47] * LeaChim (~LeaChim@176.248.17.141) Quit (Ping timeout: 480 seconds)
[1:54] * portante (~user@c-24-63-226-65.hsd1.ma.comcast.net) has joined #ceph
[1:58] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:59] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:03] * Yen (~Yen@ip-83-134-112-99.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[2:08] * Yen (~Yen@ip-81-11-198-39.dsl.scarlet.be) has joined #ceph
[2:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[2:09] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:22] * Q310 (Q@2403-9000-2000-0200-0000-0000-0000-0020.ipv6.onqnetworks.net) Quit ()
[2:34] * danieagle (~Daniel@177.133.175.47) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[3:05] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[3:13] * diegows (~diegows@190.190.2.126) has joined #ceph
[3:24] * forced (~forced@205.132.255.75) has joined #ceph
[3:54] * winston-d (~Miranda@134.134.139.72) has joined #ceph
[3:54] <winston-d> joshd : ping
[4:13] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[4:21] * winston-d (~Miranda@134.134.139.72) Quit (Quit: is leaving...)
[4:22] * winston-d (~Miranda@192.55.54.40) has joined #ceph
[4:22] * forced (~forced@205.132.255.75) has left #ceph
[4:23] * winston-d (~Miranda@192.55.54.40) has left #ceph
[4:23] * winston-d (~Miranda@192.55.54.40) has joined #ceph
[4:24] * winston-d (~Miranda@192.55.54.40) Quit ()
[4:44] * themgt (~themgt@24-177-232-181.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[4:46] * themgt (~themgt@24-177-233-117.dhcp.gnvl.sc.charter.com) has joined #ceph
[4:57] * rahmu (~rahmu@ip-251.net-81-220-131.standre.rev.numericable.fr) Quit (Remote host closed the connection)
[5:26] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[5:26] * ChanServ sets mode +o elder
[5:41] * tnt (~tnt@pd95cfcb7.dip0.t-ipconnect.de) has joined #ceph
[5:41] * themgt (~themgt@24-177-233-117.dhcp.gnvl.sc.charter.com) Quit (Quit: themgt)
[5:47] * tnt_ (~tnt@ip-34-175-205-91.static.contabo.net) has joined #ceph
[5:49] * tnt (~tnt@pd95cfcb7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:13] * dmick (~dmick@2607:f298:a:607:e9f3:12ff:3dfc:ed0b) Quit (Ping timeout: 480 seconds)
[6:22] * dmick (~dmick@2607:f298:a:607:9e6:1dc2:5b27:f931) has joined #ceph
[6:24] * tnt_ (~tnt@ip-34-175-205-91.static.contabo.net) Quit (Ping timeout: 480 seconds)
[6:49] * Qten (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[7:10] * Yen (~Yen@ip-81-11-198-39.dsl.scarlet.be) Quit (Ping timeout: 480 seconds)
[7:28] * Yen (~Yen@ip-81-11-198-39.dsl.scarlet.be) has joined #ceph
[7:35] * capri (~capri@212.218.127.222) has joined #ceph
[7:45] * norbi (~nonline@buerogw01.ispgateway.de) has joined #ceph
[7:46] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[7:47] * norbi (~nonline@buerogw01.ispgateway.de) Quit ()
[7:49] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[7:49] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[7:51] * norbi (~nonline@buerogw01.ispgateway.de) has joined #ceph
[7:56] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[7:58] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit ()
[8:00] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[8:02] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[8:51] * vipr_ is now known as vipr
[8:53] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:55] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[9:07] * sjustlaptop (~sam@71-83-191-116.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[9:08] * jlogan1 (~Thunderbi@2600:c00:3010:1:64ea:852f:5756:f4bf) Quit (Ping timeout: 480 seconds)
[9:09] * jlogan (~Thunderbi@2600:c00:3010:1:64ea:852f:5756:f4bf) has joined #ceph
[9:09] * trond (~trond@trh.betradar.com) Quit (Quit: leaving)
[9:10] * loicd (~loic@185.10.252.15) has joined #ceph
[9:10] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) has joined #ceph
[9:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit ()
[9:20] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:21] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:32] * rustam (~rustam@94.15.91.30) has joined #ceph
[9:32] * leseb (~Adium@83.167.43.235) has joined #ceph
[9:34] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) has joined #ceph
[9:36] * Vjarjadian (~IceChat77@5ad6d005.bb.sky.com) Quit (Quit: If you can't laugh at yourself, make fun of other people.)
[9:38] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[9:40] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[9:46] * l0nk (~alex@83.167.43.235) has joined #ceph
[9:46] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[9:46] * musca (musca@tyrael.eu) has joined #ceph
[9:48] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:48] * mcclurmc_laptop (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[9:49] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[10:05] * rustam (~rustam@94.15.91.30) has joined #ceph
[10:06] * LeaChim (~LeaChim@176.248.17.141) has joined #ceph
[10:09] * Morg (b2f95a11@ircip2.mibbit.com) has joined #ceph
[10:16] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[10:18] * ctrl (~ctrl@83.149.9.232) has joined #ceph
[10:24] * rustam (~rustam@94.15.91.30) Quit (Remote host closed the connection)
[10:28] * tjikkun_ (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[10:33] * joao (~JL@89-181-149-4.net.novis.pt) has joined #ceph
[10:33] * ChanServ sets mode +o joao
[11:02] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[11:10] * ctrl (~ctrl@83.149.9.232) Quit (Ping timeout: 480 seconds)
[11:39] * loicd (~loic@185.10.252.15) Quit (Ping timeout: 480 seconds)
[11:51] * mcclurmc_laptop (~mcclurmc@firewall.ctxuk.citrix.com) has joined #ceph
[11:59] * slang1 (~slang@c-71-239-8-58.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[12:02] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[12:03] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[12:03] * ScOut3R_ (~ScOut3R@212.96.47.215) has joined #ceph
[12:04] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[12:10] * tnt (~tnt@109.130.90.161) has joined #ceph
[12:11] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[12:19] * tnt_ (~tnt@91.176.36.113) has joined #ceph
[12:21] * tnt (~tnt@109.130.90.161) Quit (Ping timeout: 480 seconds)
[12:25] * ctrl (~ctrl@83.149.9.190) has joined #ceph
[12:28] * ctrl (~ctrl@83.149.9.190) Quit ()
[12:30] * tnt_ (~tnt@91.176.36.113) Quit (Ping timeout: 480 seconds)
[12:33] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[12:44] * sleinen (~Adium@2001:620:0:46:bd22:1d10:9dda:2cf1) has joined #ceph
[12:49] * xdeller (~xdeller@broadband-77-37-224-84.nationalcablenetworks.ru) Quit (Quit: Leaving)
[13:13] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[13:16] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[13:25] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:26] * Guest908 (~james@143.48.7.168) Quit (Quit: Leaving)
[13:28] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[13:36] * portante (~user@c-24-63-226-65.hsd1.ma.comcast.net) Quit (Read error: Operation timed out)
[13:38] * tnt (~tnt@91.176.24.85) has joined #ceph
[13:46] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[13:47] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[13:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:56] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) has joined #ceph
[13:58] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) Quit (Read error: Connection reset by peer)
[13:58] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) has joined #ceph
[13:58] * tnt (~tnt@91.176.24.85) Quit (Ping timeout: 480 seconds)
[14:03] <vipr> Is there a command to see which IPs have mapped an RBD image?
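The question above goes unanswered in the log; one possible approach (a sketch only — it assumes a format-1 image whose header object is named `<image>.rbd`, and that the `rados listwatchers` command exists in the installed version; `myimage` and pool `rbd` are hypothetical names):

```shell
# Clients that have an RBD image mapped hold a watch on its header
# object; listing the watchers shows their client IDs and addresses.
# For a format-1 image "myimage" in pool "rbd":
rados -p rbd listwatchers myimage.rbd
```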
[14:09] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:13] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[14:16] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) has joined #ceph
[14:24] * diegows (~diegows@190.190.2.126) has joined #ceph
[14:31] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:31] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:37] * tnt (~tnt@228.204-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[14:45] * slang1 (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[14:45] * slang (~slang@c-71-239-8-58.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[14:57] * jerry66 (jerry66@79.126.187.25) has joined #ceph
[14:57] * jerry66 (jerry66@79.126.187.25) Quit (autokilled: This host violated network policy. Contact support@oftc.net for further information and assistance. (2013-04-08 12:57:53))
[15:02] * KevinPerks (~Adium@cpe-066-026-239-136.triad.res.rr.com) has joined #ceph
[15:06] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:09] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[15:10] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:12] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[15:13] * joao sets mode +b jerry66*!*@*
[15:15] * gaveen (~gaveen@175.157.217.11) has joined #ceph
[15:16] * tnt_ (~tnt@228.204-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[15:17] * tnt (~tnt@228.204-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[15:18] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[15:19] * slang1 (~slang@c-71-239-8-58.hsd1.il.comcast.net) Quit (Remote host closed the connection)
[15:20] * slang1 (~slang@c-71-239-8-58.hsd1.il.comcast.net) has joined #ceph
[15:21] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[15:23] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) Quit (Remote host closed the connection)
[15:24] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) has joined #ceph
[15:26] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:30] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[15:31] * ctrl (~ctrl@83.149.8.213) has joined #ceph
[15:36] * ctrl (~ctrl@83.149.8.213) Quit ()
[15:36] * yehuda_hm (~yehuda@2602:306:330b:1410:4cf0:8225:f626:5c15) has joined #ceph
[15:37] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) Quit (Remote host closed the connection)
[15:37] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) has joined #ceph
[15:40] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) Quit (Read error: Connection reset by peer)
[15:40] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) has joined #ceph
[15:43] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:58] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[16:10] * norbi (~nonline@buerogw01.ispgateway.de) Quit (Quit: Miranda IM! Smaller, Faster, Easier. http://miranda-im.org)
[16:14] * loicd (~loic@3.46-14-84.ripe.coltfrance.com) Quit (Quit: Leaving.)
[16:16] * calebamiles1 (~caleb@c-50-138-218-203.hsd1.vt.comcast.net) has joined #ceph
[16:22] * calebamiles (~caleb@c-50-138-218-203.hsd1.vt.comcast.net) Quit (Ping timeout: 480 seconds)
[16:23] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[16:24] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[16:30] * leseb (~Adium@83.167.43.235) has left #ceph
[16:30] * leseb (~Adium@83.167.43.235) has joined #ceph
[16:33] * vata (~vata@2607:fad8:4:6:2c46:e351:5224:17e0) has joined #ceph
[16:38] * itamar_ (~itamar@IGLD-84-228-64-202.inter.net.il) Quit (Quit: Leaving)
[16:44] * eschnou (~eschnou@85.234.217.115.static.edpnet.net) Quit (Remote host closed the connection)
[16:45] * dignus (~dignus@bastion.jkit.nl) Quit (Ping timeout: 480 seconds)
[16:46] * dignus (~dignus@bastion.jkit.nl) has joined #ceph
[16:46] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[16:46] * ScOut3R_ (~ScOut3R@212.96.47.215) Quit (Remote host closed the connection)
[16:46] * ScOut3R (~ScOut3R@212.96.47.215) has joined #ceph
[16:47] * drokita (~drokita@199.255.228.128) has joined #ceph
[16:51] * gerard_dethier (~Thunderbi@85.234.217.115.static.edpnet.net) Quit (Quit: gerard_dethier)
[16:52] * Morg (b2f95a11@ircip2.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[16:55] <Elbandi_> hi, I want to remove a pool, but there are still some objects in it
[16:56] <Elbandi_> how do I find out which files they belong to?
[16:56] <Elbandi_> on CephFS, of course
[17:00] * yanzheng (~zhyan@101.84.14.177) has joined #ceph
[17:01] <Robe> wc
[17:01] * Robe (robe@amd.co.at) has left #ceph
[17:07] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:12] * loicd (~loic@magenta.dachary.org) has joined #ceph
[17:18] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[17:22] <topro> has anyone ever seen high file IO load on CephFS cause other file IO on the same fs to block until the relevant PGs get (deep-)scrubbed?
[17:23] <topro> ^^ with 0.56.4 on debian wheezy that is
[17:26] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:32] * l0nk (~alex@83.167.43.235) Quit (Read error: Connection reset by peer)
[17:32] * l0nk1 (~alex@83.167.43.235) has joined #ceph
[17:33] * sleinen (~Adium@2001:620:0:46:bd22:1d10:9dda:2cf1) Quit (Ping timeout: 480 seconds)
[17:53] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[17:55] <stiller> I successfully used the experimental PG split functionality on an existing pool. It seems to work well; new data is more evenly distributed over the OSDs, resulting in higher throughput. However, existing data is not redistributed over the OSDs, resulting in uneven usage of free space. Can I somehow trigger a redistribution of this data?
[17:56] * gregaf1 (~Adium@2607:f298:a:607:592c:5c69:859e:5c4e) Quit (Quit: Leaving.)
[17:56] <stiller> To be more precise: I increased pg_num for .rgw.buckets from 8 to 1200.
[17:57] * gregaf (~Adium@2607:f298:a:607:e1be:c55f:97e4:8e93) has joined #ceph
[17:58] * mrjack_ (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[17:58] <gregaf> stiller: once you do a split it will re-distribute automatically, unless you forgot to update the placement count (pgp_num) as well as the pg count
[17:59] <stiller> gregaf: I almost certainly forgot to do that, thanks!
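The fix gregaf describes amounts to setting both values; a sketch using the pool and target from this conversation (run against your own cluster — 1200 is just the number stiller chose above):

```shell
# Raising pg_num alone only creates new (initially empty) placement
# groups; existing data is not rebalanced until pgp_num is raised to
# match, which is what triggers the redistribution.
ceph osd pool set .rgw.buckets pg_num 1200
ceph osd pool set .rgw.buckets pgp_num 1200

# Verify the two values now agree:
ceph osd pool get .rgw.buckets pg_num
ceph osd pool get .rgw.buckets pgp_num
```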
[17:59] <gregaf> topro: that sounds like quite a bizarre scenario; are you sure those are the relevant variables?
[17:59] <joao> mega_au, still around?
[18:00] <mega_au> yes, still here
[18:00] <gregaf> joao: can you talk to me about ceph-create-keys and mon authorization?
[18:01] <joao> gregaf, have barely looked into ceph-create-keys; let me take a closer look and get back to you shortly
[18:01] <joao> mega_au, are you able to reproduce those crashes with higher debug level?
[18:01] <gregaf> for now just wondering if you'd done anything yet
[18:02] * ScOut3R (~ScOut3R@212.96.47.215) Quit (Ping timeout: 480 seconds)
[18:02] <topro> gregaf: I omitted one variable on purpose to avoid getting slapped in the face: multiple clients mount this CephFS simultaneously, with the kernel client (linux-3.8.5) on the client boxes and the fuse client on the one osd that mounts the fs for convenience
[18:02] <gregaf> haha, no, that should be fine — although we do have a couple bug reports about it
[18:02] <mega_au> the current logs were done at 20. what level do you need? Happy to run whatever's needed
[18:02] <joao> gregaf, with regard to ceph-create-keys?
[18:02] <gregaf> topro: it's the "until after deep scrubbing" thing that is confusing me
[18:03] <gregaf> joao: yeah; somebody said you'd helped them debug it under v0.59, and I know I've seen something about it but I think it was in the context of a bug fix that Sage was discussing with somebody so I was hoping that was you
[18:03] <topro> gregaf: but repeatedly doing so (ceph osd deep-scrub 0..n) seemed to be what fixed it
[18:04] <joao> mega_au, huh, yeah, sorry; was 20% into mon.0's log and was only seeing default debug levels; somewhere down the line it appears to have just what I was looking for, thanks!
[18:04] <joao> gregaf, I looked into it some two weeks ago, can't really recall why or what it was about
[18:04] <joao> let me refresh my memory
[18:05] <mega_au> oh... I did not trim the logs at the beginning. Turned it to 20 when I was about to report to you.
[18:05] <gregaf> thanks joao — there were some emails over the weekend and it looks like maybe it's broken in authorization (using mon. key which doesn't have the necessary perms?), which would be not great
[18:05] <gregaf> topro: what leads you to conclude the IO is hanging?
[18:06] <topro> gregaf: i.e. "ls -l" blocks indefinitely
[18:06] <joao> mega_au, that's fine; there were a couple of crashes in the beginning that worry me, but unfortunately they don't have enough debug info to go with them
[18:06] <topro> gregaf: on some (sparse) subtrees
[18:06] <joao> but I'm hoping to find something useful somewhere down the line
[18:07] <topro> gregaf: not the whole fs
[18:08] * BillK (~BillK@203-59-45-74.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[18:08] <mega_au> it failed to convert, claiming the store was pre-0.52; then I saw a message saying there was no monmap and keyring in the store, ending in a core dump
[18:09] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has left #ceph
[18:09] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[18:09] * ChanServ sets mode +o scuttlemonkey
[18:10] <gregaf> topro: okay, that could be a failure mode but it wouldn't be resolved by a deep scrub
[18:10] <gregaf> or it's possible that you had built up a bunch of dirty data on one of your clients that needed to be flushed before the other could do so, and it was taking the OSD a long time to handle
[18:10] <gregaf> (this one is something we'd like to fix as well, but it'll take some work)
[18:11] <gregaf> because CephFS requires that any data exposed to a third-party client be made durable on disk first
[18:11] * mcclurmc_laptop (~mcclurmc@firewall.ctxuk.citrix.com) Quit (Ping timeout: 480 seconds)
[18:11] <topro> gregaf: anything I can do to investigate?
[18:13] <gregaf> topro: well, is the subtree always one that's being written to?
[18:13] <gregaf> oh, and you're using the kernel client so we have less ability to control the amount of uncommitted dirty data too; what kind of high IO load are you subjecting it to?
[18:13] <joao> mega_au, which monitor was that?
[18:14] <mega_au> mon3
[18:14] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[18:14] <topro> gregaf: it seems to happen most often when I do a backup (high read load), maybe coinciding with some random write load to the same tree, but that is speculative
[18:14] * yanzheng1 (~zhyan@101.83.229.38) has joined #ceph
[18:15] <gregaf> that sounds odd; can you gather up all the details you can think of and create a bug in the tracker?
[18:17] <Elbandi_> gregaf: i run journal-check on mds, runs good, but i see old files (already removed) in log. is this normal?
[18:18] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[18:18] <gregaf> in the mds log? it can be; how long ago were they removed and how large are they?
[18:18] <topro> gregaf: not that easy. I've learned that I'm very talented at always giving exactly the wrong details, which tend to be misleading ;)
[18:18] <gregaf> topro: nonetheless — I'm afraid I have a pretty big backlog today! :)
[18:19] * yanzheng (~zhyan@101.84.14.177) Quit (Ping timeout: 480 seconds)
[18:19] <topro> gregaf: never mind. it's not that urgent. just wanted to know if it's a known behaviour
[18:20] <topro> i'll try to create a tracker item
[18:20] <Elbandi_> I don't remember exactly, but >5 hours
[18:20] <Elbandi_> or more
[18:22] <gregaf> Elbandi_: what kinds of references were you seeing?
[18:23] <Elbandi_> 2013-04-08 17:58:14.203787 7f0a38f5d700 12 mds.0.cache.dir(20000006c01) add_null_dentry [dentry #1/videos/2012uj/12/01/.x.fzfpTA [2,head] auth NULL (dversion lock) pv=0 v=34095 inode=0 0x6e5a958]
[18:23] <Elbandi_> rsync tmp files
[18:24] <gregaf> hmm, I would have expected that to be trimmed out of the log by now
[18:24] <Elbandi_> 2013-04-08 17:58:14.203795 7f0a38f5d700 10 mds.0.journal EMetaBlob.replay added [dentry #1/videos/2012uj/12/01/.x.fzfpTA [2,head] auth NULL (dversion lock) v=34068 inode=0 | dirty 0x6e5a958]
[18:24] * tristanz (~tristanz@75-101-52-104.dsl.static.sonic.net) has joined #ceph
[18:26] * eschnou (~eschnou@173.213-201-80.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:28] <topro> is there anything I can check to see what the cause of such an ceph-fs block is?
[18:28] <topro> as I have that behaviour right now
[18:28] <Elbandi_> gregaf: I grepped for one file from the log: http://pastebin.com/t2vJyXGG
[18:29] <Elbandi_> add_null_dentry, remove_dentry, add_null_dentry, remove_dentry, ...
[18:31] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:31] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) has joined #ceph
[18:31] <topro> ok, immediately after unmounting cephfs on the one host (the osd host) using the fuse client, the lockup was gone. interesting?
[18:32] * drokita1 (~drokita@199.255.228.128) has joined #ceph
[18:32] * leseb (~Adium@83.167.43.235) Quit (Quit: Leaving.)
[18:32] <gregaf> topro: you can see the requests that are currently in flight from somewhere in /proc, but I can't remember where off-hand and I don't have a mount handy to explore
[18:33] <gregaf> you'd want to look and see that the hanging client is waiting on a stat to the mds, and then go look at one of the nodes which was accessing the tree and see if it has a bunch of OSD requests out or something
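For what it's worth, on the kernel client the in-flight requests gregaf mentions are exposed via debugfs rather than /proc (a sketch; assumes debugfs is mounted at its usual location and that you have root access):

```shell
# Pending MDS requests (e.g. a hanging stat) and pending OSD
# read/write requests, one directory per kernel CephFS mount.
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/osdc
```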
[18:33] <topro> gregaf: read my post from [18:31]?
[18:33] <gregaf> Elbandi_: yeah, those are the right kinds of things for replay; I'd just expect them to have been trimmed from the log already
[18:34] <gregaf> topro: ah
[18:34] <gregaf> definitely somewhat, yes
[18:34] <topro> so after unmounting the fuse client, only the kernel-client mounts remained
[18:34] <gregaf> that could just be a cap issue then, and the deep scrub wasn't really doing anything, or was slowing things down enough that the ceph-fuse client went through a forced reconnect
[18:35] <gregaf> Elbandi_: if you can get the beginning log of a replay that might be interesting; I wonder if your logsegments aren't getting trimmed for some reason
[18:36] <paravoid> so since I upgraded to 0.56.4, I get a lot of deep scrub errors
[18:36] <paravoid> I briefly chatted with sage about them before
[18:36] <paravoid> I'm now up to 33 pgs inconsistent
[18:36] <paravoid> all them seem to be omap deep scrub errors
[18:36] <paravoid> the count keeps increasing and I'm a bit worried
[18:36] * drokita (~drokita@199.255.228.128) Quit (Read error: Operation timed out)
[18:38] * nhm_ (~nh@67-220-20-222.usiwireless.com) has joined #ceph
[18:39] * diegows (~diegows@190.190.2.126) has joined #ceph
[18:42] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) has joined #ceph
[18:42] * drokita (~drokita@199.255.228.128) has joined #ceph
[18:43] * sleinen1 (~Adium@2001:620:0:25:391d:f921:dae1:df04) has joined #ceph
[18:43] * nhm__ (~nh@65-128-150-185.mpls.qwest.net) has joined #ceph
[18:43] * cyclone (~cyclone@46.184.255.87) has joined #ceph
[18:44] <cyclone> anybody online, got some quick questions about CephFS
[18:44] <topro> gregaf: cap issue?
[18:45] <joao> err
[18:45] * nhm (~nh@184-97-180-204.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[18:46] * drokita1 (~drokita@199.255.228.128) Quit (Ping timeout: 480 seconds)
[18:46] * nhm_ (~nh@67-220-20-222.usiwireless.com) Quit (Ping timeout: 480 seconds)
[18:48] <yanzheng1> topro, did you mount the cephfs on the host that runs ods?
[18:50] * drokita (~drokita@199.255.228.128) Quit (Read error: Operation timed out)
[18:50] * sleinen (~Adium@217-162-132-182.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:50] <topro> yanzheng1: yes, but on that host using ceph-fuse client
[18:50] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[18:53] <topro> so it's not due to the (well-known?!?) kernel deadlock
[18:54] * l0nk1 (~alex@83.167.43.235) Quit (Quit: Leaving.)
[18:56] <yanzheng1> It's easy to trigger a deadlock when mounting cephfs with the kernel driver on a host that runs an osd
[18:56] <yanzheng1> I don't know if that's also true for ceph-fuse
[18:58] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[18:58] <Elbandi_> gregaf: here is the log: http://elbandi.net/ceph/
[18:58] <Elbandi_> yet another 3 GB log :>
[18:59] * sleinen1 (~Adium@2001:620:0:25:391d:f921:dae1:df04) Quit (Quit: Leaving.)
[18:59] <gregaf> thanks
[19:00] * cyclone (~cyclone@46.184.255.87) Quit ()
[19:03] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[19:04] <topro> yanzheng1: afaik that's only happening with the kernel client, but I only read that here; maybe someone knows better
[19:15] * mcclurmc_laptop (~mcclurmc@cpc10-cmbg15-2-0-cust205.5-4.cable.virginmedia.com) has joined #ceph
[19:17] * dpippenger (~riven@216.103.134.250) has joined #ceph
[19:19] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Quit: Leaving.)
[19:20] * sjustlaptop (~sam@38.122.20.226) has joined #ceph
[19:26] <Elbandi_> eh, I have a feeling that Ceph and I aren't friends... :(
[19:27] <Elbandi_> the kernel module cannot access a directory, but fuse can
[19:29] <gregaf> topro: yeah, ceph-fuse on an OSD isn't the problem
[19:29] <gregaf> "caps"="capabilities", which are the internal locks which give a client permission to do things with files
[19:29] * sjustlaptop (~sam@38.122.20.226) Quit (Ping timeout: 480 seconds)
[19:30] * yanzheng1 (~zhyan@101.83.229.38) Quit (Ping timeout: 480 seconds)
[19:34] <Teduardo> is there any chance that eventually ceph will gain awareness of where particular volumes are needed and be able to automatically ensure that the volumes live on the OSDs directly connected to the host using those volumes?
[19:35] <Teduardo> meaning if you have nova and ceph running on the same machine, ensure that the volumes for nova are always on the machine that the VM is running on?
[19:35] <Teduardo> so as to prevent extra IO on the network
[19:36] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[19:37] <dmick> Teduardo: that's not really the way the object placement works; for redundancy and efficiency, objects are spread out over OSDs on purpose. One RBD image is made up of multiple objects, and having them all stored on one host creates a SPOF
[19:38] * diegows (~diegows@190.190.2.126) has joined #ceph
[19:38] <dmick> even putting them on N specific hosts reduces availability/durability. The sorta central idea is that the data is spread evenly across the cluster
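dmick's point that one RBD image is made up of multiple objects can be sketched numerically: with the default object size of 4 MiB (order 22), consecutive image offsets land in different objects, which CRUSH then scatters across OSDs (the arithmetic below is illustrative, not librbd output):

```shell
# Sketch of RBD striping: an image is cut into fixed-size objects
# (default order 22, i.e. 4 MiB each), so one image's data spans
# many objects and therefore many OSDs.
order=22
obj_size=$((1 << order))            # 4194304 bytes
offset=$((10 * 1024 * 1024))        # a 10 MiB offset into the image
echo "object index:  $((offset / obj_size))"
echo "offset within: $((offset % obj_size))"
```

A 10 MiB offset therefore falls in the third object (index 2), 2 MiB into it — three different objects for the first 10 MiB alone.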
[19:43] * Cube (~Cube@cpe-76-172-67-97.socal.res.rr.com) Quit (Quit: Leaving.)
[19:44] * vipr_ (~root@78-23-119-116.access.telenet.be) has joined #ceph
[19:45] <Elbandi_> i need a review of these commits before they're pulled: https://github.com/Elbandi/ceph/commits/wip-getlayout
[19:47] <gregaf> if you have stuff for review, a pull request or emails to the list are how you tell us that :)
[19:48] <gregaf> I guarantee random irc messages are going to get lost
[19:51] * vipr (~root@78-23-112-11.access.telenet.be) Quit (Ping timeout: 480 seconds)
[19:52] * danieagle (~Daniel@177.205.180.150.dynamic.adsl.gvt.net.br) has joined #ceph
[19:59] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:05] * drokita (~drokita@199.255.228.128) has joined #ceph
[20:08] * rturk-away is now known as rturk
[20:09] * rturk is now known as rturk-away
[20:09] * rturk-away is now known as rturk
[20:11] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[20:22] * andreask (~andreas@h081217068225.dyn.cm.kabsi.at) has joined #ceph
[20:29] * alram (~alram@38.122.20.226) has joined #ceph
[20:30] <elder> slang1, do you have a minute to look at an mds log?
[20:30] <slang1> elder: sure
[21:00] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[21:01] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[21:01] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) has joined #ceph
[21:15] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[21:20] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[21:37] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[21:45] <paravoid> anyone who can help with what looks like some non-trivial debugging?
[21:58] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[22:00] * vipr_ is now known as vipr
[22:05] * BillK (~BillK@203-59-45-74.dyn.iinet.net.au) has joined #ceph
[22:12] * portante|afk (~user@66.187.233.206) Quit (Read error: Connection reset by peer)
[22:12] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:13] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:14] * portante|afk (~user@66.187.233.206) has joined #ceph
[22:22] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[22:26] <paravoid> is it office hours yet? :)
[22:28] <dmick> it is. what's up paravoid?
[22:28] <paravoid> hey :)
[22:28] <paravoid> so I've talked a bit about the problem before
[22:28] <paravoid> on friday too
[22:29] <paravoid> I upgraded to 0.56.4 recently
[22:29] <paravoid> and I started getting a lot of inconsistent pgs
[22:29] <paravoid> all of them are omap-related and sage told me that omap deep scrub is new to .4
[22:29] <paravoid> so this explains why I'm getting them /now/
[22:29] <paravoid> it doesn't explain why this happened though:
[22:29] <paravoid> health HEALTH_ERR 33 pgs inconsistent; 33 scrub errors
[22:30] <paravoid> all different OSDs
[22:31] <dmick> when you say "omap-related", what do you mean exactly?
[22:31] <dmick> this is what the scrub is reporting?
[22:32] <paravoid> http://p.defau.lt/?c1A_j9bEA_XT8Wb1Fdih4Q
[22:32] <dmick> I see
[22:33] <paravoid> and more since then
[22:33] <paravoid> it was 28 inconsistent 11h ago, it's 33 now
[22:33] * diegows (~diegows@190.190.2.126) Quit (Read error: Operation timed out)
[22:33] <paravoid> I'm a bit alarmed as you might imagine :)
[22:34] <paravoid> I haven't run repair on any but the first one, back when I still thought it was an isolated incident
[22:34] <dmick> indeed, that's not happy
[22:34] <paravoid> I can file a bug
[22:35] <paravoid> but I was wondering if you'd prefer something more interactive
[22:35] <dmick> and you'd probably like to stem the tide of damage if indeed damage is happening
[22:36] <paravoid> that would be nice, yes :)
[22:37] * imjustmatthew (~imjustmat@pool-72-84-255-246.rcmdva.fios.verizon.net) has joined #ceph
[22:38] <dmick> I don't suppose you have OSD debug turned up?..
[22:39] <paravoid> no, sorry
[22:40] <gregaf> it sounds like it's just detecting something that's been there for a while — sjust?
[22:40] <sjust> gregaf, paravoid: that is most likely the case
[22:40] <sjust> paravoid: xfs?
[22:40] <paravoid> yes
[22:40] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:40] <sjust> what version did you upgrade from?
[22:40] <paravoid> upgraded from 0.54 to 0.55, then 0.56
[22:40] <paravoid> heh
[22:40] <paravoid> then all the 0.56. point releases
[22:40] <sjust> hmm
[22:41] <sjust> sounds like it was probably caused by the journaling troubles some time in the deep past
[22:41] <imjustmatthew> If someone from the MDS side has a sec, can you glance at the assert at http://pastebin.com/m1BHre0r and see if it looks like something new or something from misconfiguration?
[22:41] <paravoid> so, shall I just run repair for all of them?
[22:41] <sjust> I would try just one first
[22:41] <paravoid> and run a scrub on all pgs too :)
[22:41] <paravoid> I tried one early on and it worked
[22:42] <sjust> ok, I would do them one at a time
[22:42] <sjust> we have automated testing for repair, of course
[22:42] <paravoid> what do you mean?
[22:42] <sjust> but it would be better to be cautious
[22:42] <sjust> paravoid: just that doing all at once is probably riskier than necessary
[22:43] <sjust> but then I'm conservative
[22:43] <paravoid> the repairs you mean?
[22:43] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Read error: Connection reset by peer)
[22:43] <gregaf> imjustmatthew: 502 bad gateway… :/
[22:43] <sjust> yeah
[22:43] <paravoid> or for pg; do deep scrub; done
[22:43] <gregaf> ah, got it
[22:43] <sjust> oh, doing the deep scrubs is no problem
[22:43] <sjust> that code gets exercised all the time
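The cautious one-pg-at-a-time repair sjust recommends can be scripted roughly as below. This is a sketch: the `ceph health detail` output is a canned sample standing in for a live cluster, and the pg ids in it are made up. It only prints the `ceph pg repair` invocations; on a real cluster you would run each one and wait for the repair to complete before moving on.

```shell
#!/bin/sh
# Canned sample of the inconsistent-pg lines "ceph health detail" prints
# (assumption: this is the general shape of that output; pg ids are fake).
health_detail='pg 3.1a is active+clean+inconsistent, acting [12,7]
pg 3.4f is active+clean+inconsistent, acting [3,21]'

# Pull out the pg id (second field) of each inconsistent pg and emit
# one "ceph pg repair" command per pg, to be run one at a time.
echo "$health_detail" | awk '/inconsistent/ {print "ceph pg repair " $2}'
```

Running it prints one repair command per inconsistent pg, preserving the order they were reported in.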
[22:44] <gregaf> that's not really an MDS thing, but can you get the line number imjustmatthew? That should help me or joao or whoever ends up tracking it down
[22:44] * BillK (~BillK@203-59-45-74.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[22:45] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[22:45] * gregaf1 (~Adium@2607:f298:a:607:4934:567d:851d:dcef) has joined #ceph
[22:45] <paravoid> sjust: so you mean running a deep scrub on 16k pgs simultaneously isn't going to kill everything? :-)
[22:45] <sjust> paravoid: that bug was squashed prior to 0.56.4, I think
[22:46] <paravoid> which bug?
[22:46] <sjust> there was a bug where manual scrubbing of a large number of pgs did tend to overwhelm the OSDs
[22:46] <sjust> not so any longer
[22:46] <sjust> it would probably be easier to issue the scrubs by osd though
[22:46] <sjust> same end effect, but easier
[22:47] <paravoid> too late
[22:47] <sjust> ok
[22:47] <sjust> it does pretty much the same thing, but doesn't require 16k invocations of the ceph command
[22:47] <paravoid> that's ceph osd deep-scrub?
[22:47] <sjust> I think so
[22:47] <sjust> it's in the docs somewhere
[22:47] <paravoid> ok
[22:47] <paravoid> I'll find it
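Issuing the scrubs per OSD, as sjust suggests, looks roughly like the sketch below: one `ceph osd deep-scrub` call per OSD instead of one `ceph pg deep-scrub` call per pg. `NUM_OSDS` is an assumed placeholder for the cluster's OSD count, and the commands are echoed rather than executed so the sketch runs without a cluster.

```shell
#!/bin/sh
# Assumed placeholder: number of OSDs in the cluster.
NUM_OSDS=4

# One deep-scrub request per OSD; each OSD then deep-scrubs all pgs
# it holds, so this covers the whole cluster in NUM_OSDS commands.
for osd in $(seq 0 $((NUM_OSDS - 1))); do
    echo "ceph osd deep-scrub $osd"   # drop the echo to actually run it
done
```

Same end effect as scrubbing every pg by hand, but for a 16k-pg cluster it replaces 16k invocations of the `ceph` command with one per OSD.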
[22:48] <joao> imjustmatthew, that crash has popped a couple of times already; do you happen to have logs with debug, and if so could you drop them somewhere and point me their way?
[22:48] <imjustmatthew> I do have them, several copies
[22:49] <paravoid> deep scrub on 126T, that should get interesting
[22:49] <imjustmatthew> I'm not getting meaningful line numbers from GDB, you don't happen to know what I need to load to get those do you?
[22:49] <imjustmatthew> Here's one from last night on mon.b: http://goo.gl/UmNs3
[22:50] <joao> imjustmatthew, I don't need the line numbers from gdb; I just really need you to run the monitor with 'debug mon = 20', 'debug paxos = 20' and 'debug ms = 1'
[22:50] <imjustmatthew> joao: Not a problem, I'll set it up
[22:51] <joao> thanks :)
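The debug levels joao asks for go in the monitor section of ceph.conf (a minimal fragment; restart the mon afterwards so the settings take effect):

```
[mon]
    debug mon = 20
    debug paxos = 20
    debug ms = 1
```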
[22:52] * gregaf (~Adium@2607:f298:a:607:e1be:c55f:97e4:8e93) Quit (Ping timeout: 480 seconds)
[22:54] <paravoid> yep, ceph osd deep-scrub, way faster :)
[22:55] * eschnou (~eschnou@173.213-201-80.adsl-dyn.isp.belgacom.be) Quit (Quit: Leaving)
[22:55] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:56] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:09] * rustam (~rustam@94.15.91.30) has joined #ceph
[23:24] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[23:27] * pib1923 (~pib1923@your.friendly.media.team.coder.ark-cr.info) has joined #ceph
[23:27] * pib1923 (~pib1923@your.friendly.media.team.coder.ark-cr.info) Quit (Remote host closed the connection)
[23:28] <imjustmatthew> joao: http://goo.gl/GX7Vl has the log from the mon that just crashed, please let me know if there's anything else that would be helpful
[23:28] <joao> downloading; thanks!
[23:33] * vata (~vata@2607:fad8:4:6:2c46:e351:5224:17e0) Quit (Quit: Leaving.)
[23:38] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:53] <nz_monkey> joshd: Hi Josh, any idea when the rbd async flush fix will make it in to qemu ?
[23:56] * leseb (~Adium@pha75-6-82-226-32-84.fbx.proxad.net) Quit (Quit: Leaving.)
[23:58] <joshd> nz_monkey: the patch on qemu-devel works if you want to use it, the final version should go in sometime this week
[23:59] <nz_monkey> joshd: thanks! We will wait for it to go final. Just waiting on this to finish our POC testing so we can move Ceph to production.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.