#ceph IRC Log

IRC Log for 2014-02-11

Timestamps are in GMT/BST.

[0:00] <fghaas> joshd: would you like a doc patch? :)
[0:01] <joshd> fghaas: of course!
[0:01] <andreask> and one more ;-) .... the imported and converted image is still thin provisioned?
[0:02] <joshd> that might depend on the version of qemu-img used
[0:04] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Read error: Connection reset by peer)
[0:05] <joshd> looks sparse since rbd was supported, so yes, still thin provisioned
[0:06] * jeremydei (~jdeininge@ip-64-139-50-114.sjc.megapath.net) has joined #ceph
[0:08] <andreask> perfect, thx
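
A minimal sketch of the import andreask is describing (pool and image names are placeholders); the rbd diff trick is one way to check that the converted image really stayed sparse:

    qemu-img convert -f qcow2 -O raw debian.qcow2 rbd:mypool/debian
    rbd info mypool/debian
    rbd diff mypool/debian | awk '{ sum += $2 } END { print sum/1024/1024 " MB actually allocated" }'
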
[0:14] * fdmanana_ (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[0:14] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (Read error: Connection reset by peer)
[0:19] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:20] * dis is now known as Guest42
[0:20] * dis (~dis@109.110.66.129) has joined #ceph
[0:21] * danieagle (~Daniel@186.214.63.14) Quit (Quit: Muito Obrigado por Tudo! :-))
[0:24] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[0:24] * Guest42 (~dis@109.110.66.216) Quit (Ping timeout: 480 seconds)
[0:25] * kaizh (~oftc-webi@128-107-239-235.cisco.com) has joined #ceph
[0:25] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[0:25] <fghaas> joshd: re the doc patch -- "qemu-img create -f rbd rbd:{pool-name}/{image-name} {size}" ... is that intentional? shouldn't that be "-f raw", because the driver is only concerned about the rbd: path prefix?
[0:25] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[0:26] <joshd> fghaas: yes, that should just be raw for clarity
[0:27] <fghaas> ok
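
For reference, a hedged sketch of the corrected command from the doc-patch discussion (pool, image and size are placeholders); the rbd: prefix selects the driver, so -f only needs to name the on-disk format:

    qemu-img create -f raw rbd:mypool/myimage 10G
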
[0:28] * sage (~quassel@2607:f298:a:607:6d19:193:ed4a:57ef) Quit (Quit: No Ping reply in 180 seconds.)
[0:28] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) has joined #ceph
[0:28] * ChanServ sets mode +o sage
[0:29] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) Quit (Read error: Operation timed out)
[0:30] * sarob (~sarob@2001:4998:effd:600:8027:32e:727f:d1d) Quit (Remote host closed the connection)
[0:37] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:37] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (Remote host closed the connection)
[0:37] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[0:39] <fghaas> joshd: patch done, pull req sent
[0:40] <joshd> fghaas: great, thanks!
[0:40] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) Quit (Quit: No Ping reply in 180 seconds.)
[0:40] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) has joined #ceph
[0:40] * ChanServ sets mode +o sage
[0:41] * wusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) Quit (Read error: Connection timed out)
[0:41] * WarrenUsui (~Warren@2607:f298:a:607:118b:373d:22bd:745) Quit (Read error: Connection timed out)
[0:41] * wusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[0:41] <fghaas> joshd: been a while since I updated my fork, I seem to recall there used to be a "make doc" that's no longer there, so I just ran it through rst2html from python-docutils and checked the output in a browser; I hope that qualifies as a "build" of sorts
[0:42] * WarrenUsui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[0:42] <joshd> fghaas: there's admin/build-doc and admin/serve-doc, but I can check it out easily enough
[0:44] <dmick> build-doc will build to files you can browse, too
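
A short sketch of the two build paths mentioned above, run from a ceph.git checkout (the output location and the .rst path are assumptions):

    ./admin/build-doc                               # full Sphinx build, browsable output under build-doc/
    ./admin/serve-doc                               # serve the built docs locally
    rst2html doc/rbd/qemu-rbd.rst > /tmp/qemu.html  # quick single-file check with docutils, as fghaas did
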
[0:45] * garphy`aw is now known as garphy
[0:45] * WarrenUsui (~Warren@2607:f298:a:607:118b:373d:22bd:745) Quit (Read error: Connection reset by peer)
[0:49] * garphy is now known as garphy`aw
[0:50] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) Quit (Quit: No Ping reply in 180 seconds.)
[0:50] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:50] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) has joined #ceph
[0:50] * ChanServ sets mode +o sage
[0:52] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:54] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[0:54] * fdmanana_ (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (Read error: Connection reset by peer)
[0:56] * gregsfortytwo (~Adium@2607:f298:a:607:8df8:dbcc:3bf4:6ec) Quit (Read error: Connection reset by peer)
[0:56] * gregsfortytwo (~Adium@2607:f298:a:607:e051:67cd:6c4f:253c) has joined #ceph
[0:56] * alram (~alram@38.122.20.226) Quit (Read error: Connection reset by peer)
[0:57] * alram (~alram@38.122.20.226) has joined #ceph
[0:57] * gregsfortytwo (~Adium@2607:f298:a:607:e051:67cd:6c4f:253c) Quit ()
[0:57] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[0:57] * gregsfortytwo (~Adium@2607:f298:a:607:e051:67cd:6c4f:253c) has joined #ceph
[1:01] * fghaas (~florian@91-119-115-62.dynamic.xdsl-line.inode.at) has left #ceph
[1:01] * sarob_ (~sarob@2001:4998:effd:600:bdfd:47db:2cfb:a8b2) has joined #ceph
[1:01] * sarob_ (~sarob@2001:4998:effd:600:bdfd:47db:2cfb:a8b2) Quit (Remote host closed the connection)
[1:02] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[1:03] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) Quit (Quit: No Ping reply in 180 seconds.)
[1:04] * sage (~quassel@2607:f298:a:607:619a:fd83:6b92:1b05) has joined #ceph
[1:04] * ChanServ sets mode +o sage
[1:11] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (Quit: Leaving)
[1:12] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[1:18] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[1:20] * mozg (~andrei@host86-184-125-218.range86-184.btcentralplus.com) Quit (Quit: Ex-Chat)
[1:33] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[1:51] * DLange_ (~DLange@sixtina.faster-it.de) has joined #ceph
[1:51] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:51] * DLange is now known as Guest50
[1:52] * Guest50 (~DLange@dlange.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:55] * renzhi (~renzhi@ec2-54-249-150-103.ap-northeast-1.compute.amazonaws.com) has joined #ceph
[1:55] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Leaving...)
[2:00] * joelio__ (~Joel@88.198.107.214) has joined #ceph
[2:00] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * DLange_ (~DLange@dlange.user.oftc.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * dis (~dis@109.110.66.129) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * joelio (~Joel@88.198.107.214) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * svenneK (~sk@svenne.krap.dk) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * darkfader (~floh@88.79.251.60) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * lurbs (user@uber.geek.nz) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * mmmucky (~mucky@mucky.socket7.org) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * stj (~s@tully.csail.mit.edu) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * athrift (~nz_monkey@203.86.205.13) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * grifferz_ (~andy@bitfolk.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * nyerup (irc@jespernyerup.dk) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * ingard (~cake@tu.rd.vc) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * psieklFH_ (psiekl@wombat.eu.org) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * via (~via@smtp2.matthewvia.info) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * mjevans- (~mje@209.141.34.79) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * toabctl (~toabctl@toabctl.de) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * zere (~matt@asklater.com) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * jnq (~jon@0001b7cc.user.oftc.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * tomaw (tom@tomaw.noc.oftc.net) Quit (reticulum.oftc.net kinetic.oftc.net)
[2:00] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[2:00] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[2:00] * ChanServ sets mode +v andreask
[2:00] * TheBitte_ (~thebitter@195.10.250.233) has joined #ceph
[2:01] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[2:01] * TheBittern (~thebitter@195.10.250.233) Quit (Read error: Connection reset by peer)
[2:01] * athrift_ (~nz_monkey@203.86.205.13) has joined #ceph
[2:02] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[2:03] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[2:03] * DLange_ (~DLange@dlange.user.oftc.net) has joined #ceph
[2:03] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[2:03] * dis (~dis@109.110.66.129) has joined #ceph
[2:03] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[2:03] * zere (~matt@asklater.com) has joined #ceph
[2:03] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[2:03] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[2:03] * mjevans- (~mje@209.141.34.79) has joined #ceph
[2:03] * via (~via@smtp2.matthewvia.info) has joined #ceph
[2:03] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[2:03] * psieklFH_ (psiekl@wombat.eu.org) has joined #ceph
[2:03] * ingard (~cake@tu.rd.vc) has joined #ceph
[2:03] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[2:03] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[2:03] * nyerup (irc@jespernyerup.dk) has joined #ceph
[2:03] * grifferz_ (~andy@bitfolk.com) has joined #ceph
[2:03] * toabctl (~toabctl@toabctl.de) has joined #ceph
[2:03] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[2:03] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[2:03] * tomaw (tom@tomaw.noc.oftc.net) has joined #ceph
[2:03] * svenneK (~sk@svenne.krap.dk) has joined #ceph
[2:03] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[2:03] * darkfader (~floh@88.79.251.60) has joined #ceph
[2:03] * lurbs (user@uber.geek.nz) has joined #ceph
[2:03] * leseb (~leseb@88-190-214-97.rev.dedibox.fr) has joined #ceph
[2:03] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[2:03] * mmmucky (~mucky@mucky.socket7.org) has joined #ceph
[2:03] * stj (~s@tully.csail.mit.edu) has joined #ceph
[2:03] * zapotah_ (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) has joined #ceph
[2:03] * DLange_ (~DLange@dlange.user.oftc.net) Quit (Max SendQ exceeded)
[2:03] * zapotah (~zapotah@dsl-hkibrasgw1-58c08e-250.dhcp.inet.fi) Quit (Remote host closed the connection)
[2:03] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (Remote host closed the connection)
[2:03] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[2:03] * clayg_ (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[2:05] * leochill (~leochill@nyc-333.nycbit.com) Quit (Quit: Leaving)
[2:05] * jerker_ (jerker@82ee1319.test.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[2:05] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit (Ping timeout: 480 seconds)
[2:06] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) has joined #ceph
[2:06] * via_ (~via@smtp2.matthewvia.info) has joined #ceph
[2:06] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[2:06] * via (~via@smtp2.matthewvia.info) Quit (Ping timeout: 480 seconds)
[2:06] * toabctl (~toabctl@toabctl.de) Quit (Ping timeout: 480 seconds)
[2:06] * Drumplayr (~thomas@107-192-218-58.lightspeed.austtx.sbcglobal.net) Quit (Quit: Leaving)
[2:07] * sjustwork (~sam@2607:f298:a:607:1826:d60:af0c:2a6) Quit (Quit: Leaving.)
[2:08] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[2:08] * svenneK (~sk@svenne.krap.dk) Quit (Ping timeout: 480 seconds)
[2:10] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[2:10] * mikedawson_ is now known as mikedawson
[2:11] * jnq (~jon@0001b7cc.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:11] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:12] * alram (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:13] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) has joined #ceph
[2:13] * ChanServ sets mode +o scuttlemonkey
[2:13] * KindTwo (KindOne@h98.58.186.173.dynamic.ip.windstream.net) has joined #ceph
[2:20] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:20] * KindTwo is now known as KindOne
[2:21] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[2:25] * reed (~reed@net-188-153-202-54.cust.dsl.teletu.it) Quit (Read error: Operation timed out)
[2:28] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[2:29] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:30] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) Quit (Remote host closed the connection)
[2:31] * jerker (jerker@Psilocybe.Update.UU.SE) has joined #ceph
[2:31] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * athrift_ (~nz_monkey@203.86.205.13) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * DLange (~DLange@dlange.user.oftc.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * gregsfortytwo (~Adium@2607:f298:a:607:e051:67cd:6c4f:253c) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * wusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * jeremydei (~jdeininge@ip-64-139-50-114.sjc.megapath.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * toutour (~toutour@causses.idest.org) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * madkiss1 (~madkiss@089144232199.atnat0041.highway.a1.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * Koma (~Koma@0001c112.user.oftc.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * Narb (~Narb@c-98-207-60-126.hsd1.ca.comcast.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * sekon (~harish@li291-152.members.linode.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * baffle (baffle@jump.stenstad.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * garphy`aw (~garphy@frank.zone84.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * kitz (~kitz@admin161-194.hampshire.edu) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * blahnana (~bman@us1.blahnana.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * flaxy (~afx@78.130.174.164) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * amospalla (~amospalla@0001a39c.user.oftc.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * lightspeed (~lightspee@81.187.0.153) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * simulx (~simulx@vpn.expressionanalysis.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * acaos (~zac@209.99.103.42) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * wattsmarcus5 (~mdw@aa2.linuxbox.com) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * kuu (~kuu@virtual362.tentacle.fi) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (charon.oftc.net solenoid.oftc.net)
[2:31] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (charon.oftc.net solenoid.oftc.net)
[2:33] * Machske (~Bram@d5152D87C.static.telenet.be) has joined #ceph
[2:33] * athrift_ (~nz_monkey@203.86.205.13) has joined #ceph
[2:33] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[2:33] * gregsfortytwo (~Adium@2607:f298:a:607:e051:67cd:6c4f:253c) has joined #ceph
[2:33] * wusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[2:33] * jjgalvez (~jjgalvez@ip98-167-16-160.lv.lv.cox.net) has joined #ceph
[2:33] * jeremydei (~jdeininge@ip-64-139-50-114.sjc.megapath.net) has joined #ceph
[2:33] * toutour (~toutour@causses.idest.org) has joined #ceph
[2:33] * madkiss1 (~madkiss@089144232199.atnat0041.highway.a1.net) has joined #ceph
[2:33] * Koma (~Koma@0001c112.user.oftc.net) has joined #ceph
[2:33] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[2:33] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[2:33] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[2:33] * Narb (~Narb@c-98-207-60-126.hsd1.ca.comcast.net) has joined #ceph
[2:33] * sekon (~harish@li291-152.members.linode.com) has joined #ceph
[2:33] * baffle (baffle@jump.stenstad.net) has joined #ceph
[2:33] * yuriw1 (~Adium@c-71-202-126-141.hsd1.ca.comcast.net) has joined #ceph
[2:33] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[2:33] * wattsmarcus5 (~mdw@aa2.linuxbox.com) has joined #ceph
[2:33] * kuu (~kuu@virtual362.tentacle.fi) has joined #ceph
[2:33] * garphy`aw (~garphy@frank.zone84.net) has joined #ceph
[2:33] * kitz (~kitz@admin161-194.hampshire.edu) has joined #ceph
[2:33] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[2:33] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[2:33] * flaxy (~afx@78.130.174.164) has joined #ceph
[2:33] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[2:33] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[2:33] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[2:33] * acaos (~zac@209.99.103.42) has joined #ceph
[2:33] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[2:33] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[2:35] * poelzi (~poelzi@p54B46334.dip0.t-ipconnect.de) has joined #ceph
[2:36] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[2:36] * via_ (~via@smtp2.matthewvia.info) Quit (Remote host closed the connection)
[2:36] * via (~via@smtp2.matthewvia.info) has joined #ceph
[2:36] * davidzlap1 (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[2:36] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[2:37] * blahnana (~bman@us1.blahnana.com) has joined #ceph
[2:39] <poelzi> i have a small patch i would like to get run through the test suite but i have no real ceph setup here
[2:40] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:52] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:52] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[2:53] * warrenSusui (~Warren@2607:f298:a:607:118b:373d:22bd:745) has joined #ceph
[3:00] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (Quit: Leaving)
[3:00] * sekon (~harish@li291-152.members.linode.com) Quit (Remote host closed the connection)
[3:05] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:08] * diegows (~diegows@190.190.17.57) has joined #ceph
[3:09] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[3:10] * poelzi1 (~poelzi@2001:4dd0:fb82:c3d2:a288:b4ff:fe97:a0c8) has joined #ceph
[3:15] * poelzi (~poelzi@p54B46334.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:15] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:17] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[3:19] * kaizh (~oftc-webi@128-107-239-235.cisco.com) Quit (Quit: Page closed)
[3:22] * sarob (~sarob@2601:9:7080:13a:869:5188:e254:4ae2) has joined #ceph
[3:22] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) Quit (Quit: Leaving)
[3:24] * BillK (~BillK-OFT@106-68-142-217.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[3:26] * BillK (~BillK-OFT@124-148-118-126.dyn.iinet.net.au) has joined #ceph
[3:28] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[3:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:31] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[3:34] * sarob (~sarob@2601:9:7080:13a:869:5188:e254:4ae2) Quit (Remote host closed the connection)
[3:34] * sarob (~sarob@2601:9:7080:13a:869:5188:e254:4ae2) has joined #ceph
[3:34] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:37] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:40] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (Quit: Leaving.)
[3:40] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[3:41] * sarob_ (~sarob@2601:9:7080:13a:a52a:bdcf:dd58:2a10) has joined #ceph
[3:41] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[3:42] * sarob (~sarob@2601:9:7080:13a:869:5188:e254:4ae2) Quit (Ping timeout: 480 seconds)
[3:43] * dlan_ (~dennis@116.228.88.131) Quit ()
[3:44] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[3:46] * CAPSLOCK2000 (~oftc@5418F8BD.cm-5-1d.dynamic.ziggo.nl) has joined #ceph
[3:47] * dlan_ (~dennis@116.228.88.131) Quit ()
[3:47] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[3:50] * dlan (~dennis@116.228.88.131) Quit (Quit: leaving)
[3:53] * markbby (~Adium@168.94.245.3) has joined #ceph
[3:56] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[4:01] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Remote host closed the connection)
[4:01] * svenneK (~sk@svenne.krap.dk) has joined #ceph
[4:01] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Quit: No Ping reply in 180 seconds.)
[4:02] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[4:02] * toabctl (~toabctl@toabctl.de) has joined #ceph
[4:03] * jnq (~jon@95.85.22.50) has joined #ceph
[4:04] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:04] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[4:06] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[4:11] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Quit: Leaving.)
[4:16] * sarob_ (~sarob@2601:9:7080:13a:a52a:bdcf:dd58:2a10) Quit (Remote host closed the connection)
[4:16] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[4:16] * sarob (~sarob@2601:9:7080:13a:a52a:bdcf:dd58:2a10) has joined #ceph
[4:16] <aarontc> Is anyone around that might be able to help me with 'incomplete' pgs?
[4:18] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:20] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit ()
[4:20] * KindTwo (KindOne@h242.215.89.75.dynamic.ip.windstream.net) has joined #ceph
[4:21] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[4:22] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[4:22] * KindTwo is now known as KindOne
[4:23] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[4:23] * Boltsky (~textual@office.deviantart.net) Quit (Ping timeout: 480 seconds)
[4:24] * sarob (~sarob@2601:9:7080:13a:a52a:bdcf:dd58:2a10) Quit (Ping timeout: 480 seconds)
[4:24] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:25] * Cube1 (~Cube@12.248.40.138) Quit ()
[4:28] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Ping timeout: 480 seconds)
[4:28] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[4:42] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:42] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:43] * sarob (~sarob@2601:9:7080:13a:7da7:a654:c832:fc60) has joined #ceph
[4:44] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[4:49] * sarob_ (~sarob@2601:9:7080:13a:f95a:88f3:ed7d:9c67) has joined #ceph
[4:51] * sarob (~sarob@2601:9:7080:13a:7da7:a654:c832:fc60) Quit (Ping timeout: 480 seconds)
[4:54] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[4:59] * tziOm (~bjornar@ns3.uniweb.no) Quit (Read error: Operation timed out)
[5:00] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (Remote host closed the connection)
[5:02] * tziOm (~bjornar@ns3.uniweb.no) has joined #ceph
[5:03] * Vacum_ (~vovo@i59F79CB4.versanet.de) has joined #ceph
[5:04] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[5:05] * Vacum (~vovo@88.130.203.241) Quit (Read error: Operation timed out)
[5:11] * leochill (~leochill@nyc-333.nycbit.com) has joined #ceph
[5:12] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[5:13] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) Quit (Remote host closed the connection)
[5:14] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[5:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:17] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[5:17] * mikedawson_ is now known as mikedawson
[5:19] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[5:21] * Jahkeup (Jahkeup@gemini.ca.us.panicbnc.net) has joined #ceph
[5:28] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:40] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[5:46] * markbby (~Adium@168.94.245.3) has joined #ceph
[5:47] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[5:48] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[5:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:53] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[6:06] * sarob_ (~sarob@2601:9:7080:13a:f95a:88f3:ed7d:9c67) Quit (Remote host closed the connection)
[6:07] * sarob (~sarob@2601:9:7080:13a:f95a:88f3:ed7d:9c67) has joined #ceph
[6:07] * sarob_ (~sarob@2601:9:7080:13a:d17e:f99:912a:75e3) has joined #ceph
[6:09] * sarob_ (~sarob@2601:9:7080:13a:d17e:f99:912a:75e3) Quit (Remote host closed the connection)
[6:09] * sarob_ (~sarob@2601:9:7080:13a:d17e:f99:912a:75e3) has joined #ceph
[6:09] * ShaunR- (~ShaunR@ip70-187-159-103.oc.oc.cox.net) has joined #ceph
[6:12] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[6:13] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has left #ceph
[6:13] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[6:14] * lupu (~lupu@86.107.101.246) has joined #ceph
[6:15] * sarob (~sarob@2601:9:7080:13a:f95a:88f3:ed7d:9c67) Quit (Ping timeout: 480 seconds)
[6:16] * haomaiwa_ (~haomaiwan@218.71.76.134) Quit (Remote host closed the connection)
[6:17] * sarob_ (~sarob@2601:9:7080:13a:d17e:f99:912a:75e3) Quit (Ping timeout: 480 seconds)
[6:19] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:22] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[6:30] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[6:30] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[6:31] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[6:31] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has joined #ceph
[6:31] * senk (~Adium@ip-5-147-216-213.unitymediagroup.de) has left #ceph
[6:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:32] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[6:34] * sarob (~sarob@2601:9:7080:13a:7d48:e6d9:1c12:b7d9) has joined #ceph
[6:41] * markbby (~Adium@168.94.245.3) Quit (Ping timeout: 480 seconds)
[6:46] * pingu (~christian@nat-gw1.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[6:55] * madkiss1 (~madkiss@089144232199.atnat0041.highway.a1.net) Quit (Ping timeout: 480 seconds)
[6:56] * lupu (~lupu@86.107.101.246) has left #ceph
[6:58] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[7:01] * lupu (~lupu@86.107.101.246) has joined #ceph
[7:02] * rendar (~s@host26-115-dynamic.57-82-r.retail.telecomitalia.it) has joined #ceph
[7:04] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[7:05] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:10] * sarob (~sarob@2601:9:7080:13a:7d48:e6d9:1c12:b7d9) Quit (Ping timeout: 480 seconds)
[7:12] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[7:18] * cbob (~cbob@host-63-232-9-69.midco.net) has joined #ceph
[7:20] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:20] <cbob> anyone using ceph for cloudstack? and do they have snapshots working?
[7:22] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[7:22] <cbob> we have it all up and running, both with rbd for primary and s3 radosgw for secondary, w/ kvm
[7:24] <cbob> just no snapshot functionality
[7:26] <cbob> also experiencing only about 8MB/s write speed in vms
[7:26] <cbob> will this improve if i add a 4th node and a pair of ssd's on each machine for journaling?
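
One hedged way to narrow down where the 8 MB/s figure comes from is to benchmark the cluster outside the VMs first; the pool name and the cache setting below are assumptions, not a confirmed fix:

    rados bench -p rbd 30 write --no-cleanup   # raw cluster write throughput from a client node
    rados -p rbd cleanup                       # remove the benchmark objects afterwards
    # librbd write-back caching often helps small VM writes (ceph.conf on the hypervisors):
    #   [client]
    #   rbd cache = true
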
[7:34] * madkiss (~madkiss@089144232199.atnat0041.highway.a1.net) has joined #ceph
[7:35] * mattt (~textual@94.236.7.190) has joined #ceph
[7:36] * senk (~Adium@212.201.122.52) has joined #ceph
[7:36] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:37] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) Quit (Quit: Leaving.)
[7:46] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Read error: Connection reset by peer)
[7:48] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:58] * madkiss (~madkiss@089144232199.atnat0041.highway.a1.net) Quit (Quit: Leaving.)
[7:58] * madkiss (~madkiss@089144232199.atnat0041.highway.a1.net) has joined #ceph
[8:02] * julian (~julianwa@125.70.133.238) has joined #ceph
[8:04] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:05] * sarob (~sarob@2601:9:7080:13a:d99d:fa54:e59:816a) has joined #ceph
[8:07] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[8:11] <poelzi1> can someone run the testsuite for a patch of mine please ?
[8:11] <poelzi1> i currently have no setup and its only a small one and its kinda important
[8:12] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:12] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) Quit (Read error: No route to host)
[8:13] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[8:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:16] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[8:16] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[8:19] * Narb (~Narb@c-98-207-60-126.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:24] * schmee (~quassel@phobos.isoho.st) Quit (Remote host closed the connection)
[8:25] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: No route to host)
[8:25] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[8:33] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[8:34] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[8:36] * dmsimard1 (~Adium@108.163.152.66) has joined #ceph
[8:37] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[8:37] * sarob (~sarob@2601:9:7080:13a:d99d:fa54:e59:816a) Quit (Ping timeout: 480 seconds)
[8:42] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Ping timeout: 480 seconds)
[8:44] * madkiss (~madkiss@089144232199.atnat0041.highway.a1.net) Quit (Quit: Leaving.)
[8:44] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[8:44] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[8:46] * dmsimard1 (~Adium@108.163.152.66) Quit (Read error: Connection reset by peer)
[9:02] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[9:03] * cofol1986 (~xwrj@110.90.119.113) Quit ()
[9:04] * steki (~steki@91.195.39.5) has joined #ceph
[9:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:07] * srenatus (~stephan@185.27.182.16) has joined #ceph
[9:11] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[9:13] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:19] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[9:26] * thb (~me@port-30228.pppoe.wtnet.de) has joined #ceph
[9:30] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:33] * danieagle (~Daniel@186.214.63.14) has joined #ceph
[9:34] * lupul (~lupu@86.107.101.157) has joined #ceph
[9:35] * lupu (~lupu@86.107.101.246) has left #ceph
[9:40] * garphy`aw is now known as garphy
[9:45] * ponyofdeath (~vladi@cpe-76-167-201-214.san.res.rr.com) has joined #ceph
[9:49] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[9:49] * ChanServ sets mode +v andreask
[9:55] * fghaas (~florian@91-119-115-62.dynamic.xdsl-line.inode.at) has joined #ceph
[9:56] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[9:56] * yanzheng (~zhyan@134.134.139.72) Quit (Quit: Leaving)
[9:56] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) Quit (Read error: No route to host)
[10:02] * alexm_ (~alexm@83.167.43.235) has joined #ceph
[10:03] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[10:04] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[10:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:08] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[10:15] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[10:15] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[10:18] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:24] <fghaas> wido_: do you happen to be around?
[10:37] * thomnico (~thomnico@2a01:e35:8b41:120:5995:1299:b167:bba8) has joined #ceph
[10:41] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:41] * thomnico (~thomnico@2a01:e35:8b41:120:5995:1299:b167:bba8) Quit ()
[10:42] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) has joined #ceph
[10:43] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[10:47] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:48] * ctd_ (~root@00011932.user.oftc.net) Quit (Remote host closed the connection)
[10:48] * ctd (~root@00011932.user.oftc.net) has joined #ceph
[10:50] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[10:54] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) Quit (Read error: Operation timed out)
[11:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:13] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:16] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) Quit (Quit: Ex-Chat)
[11:24] * allsystemsarego (~allsystem@188.25.135.30) has joined #ceph
[11:31] * orion195 (~oftc-webi@213.244.168.133) has joined #ceph
[11:31] <orion195> hi guys
[11:31] <orion195> how can I get ceph emperor running on a fc19 with openstack?
[11:32] <orion195> qemu is always breaking the dependencies
[11:32] <orion195> I created a new repo with ceph emperor rpms but, still, qemu breaks
[11:32] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) has joined #ceph
[11:32] * lupul (~lupu@86.107.101.157) Quit (Ping timeout: 480 seconds)
[11:33] <fghaas> qemu ought to only link to librados, so if your qemu is compiled with rados support then upgrading to a new librados should Just Work
[11:33] <fghaas> that being said, why not use centos and RDO?
[11:34] <orion195> because, here, we use fedora :(
[11:34] <orion195> the qemu I have is from the fedora repo
[11:34] <orion195> what qemu should I use then?
[11:35] * lupul (~lupu@86.107.101.157) has joined #ceph
[11:36] * JCL (~JCL@2601:9:3280:5a3:8480:23cf:94d3:1bda) Quit (Quit: Leaving.)
[11:37] <fghaas> when you type just "qemu-img", does your "Supported formats:" include rbd?
[11:38] <orion195> it does
[11:40] <fghaas> then the only thing you should need to do is install the librbd1 and librados2 for emperor, and you should be good to go
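
A sketch of the check and the upgrade fghaas is suggesting (package names assume the ceph.com Fedora/EL packaging):

    qemu-img --help | grep 'Supported formats'   # should list rbd if qemu was built against librbd/librados
    yum install librbd1 librados2                # pull the emperor client libraries from the ceph repo
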
[11:41] <orion195> if I just want to install openstack-nova-compute, for example
[11:41] <orion195> (I have rdo installed)
[11:41] <orion195> it will break here:
[11:41] <orion195> http://fpaste.org/76077/
[11:41] <orion195> IMHO the only solution is to install qemu-common with --nodeps
[11:44] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[11:44] <fghaas> oh great, nevermind then, I didn't know fedora did this (ubuntu/debian user here, or centos if I must), and I have no idea whether inktank chooses to make a ceph-libs package available to replace the fedora one
[11:45] <orion195> this is what just happens:
[11:45] <orion195> http://ur1.ca/glwnn
[11:46] <orion195> of course, using fc19 and all of this stuff, I cannot use packstack to deploy openstack because it will always break
[11:48] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[11:50] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) Quit (Read error: No route to host)
[11:54] * diegows (~diegows@190.190.17.57) has joined #ceph
[11:57] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[11:58] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[12:05] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Read error: Operation timed out)
[12:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[12:07] * mikedawson_ (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[12:08] * gnlwlb (~sglwlb@124.90.106.171) has joined #ceph
[12:09] * gnlwlb (~sglwlb@124.90.106.171) Quit ()
[12:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[12:09] * gnlwlb (~sglwlb@124.90.106.171) has joined #ceph
[12:11] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:11] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[12:11] * mikedawson_ is now known as mikedawson
[12:12] * lupul_rau (~lupu@86.107.101.245) has joined #ceph
[12:13] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[12:18] * lupul (~lupu@86.107.101.157) Quit (Ping timeout: 480 seconds)
[12:18] * senk (~Adium@212.201.122.52) Quit (Quit: Leaving.)
[12:21] * beardo_ (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[12:21] * chris_lu_ (~ccc2@bolin.Lib.lehigh.EDU) has joined #ceph
[12:22] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[12:22] <fghaas> orion195: like I said, I'm not a fedora user, but my suggestion would be to go with centos+rdo+packstack
[12:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[12:27] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Ping timeout: 480 seconds)
[12:27] * lupul_rau (~lupu@86.107.101.245) Quit (Read error: Connection reset by peer)
[12:27] * chris_lu (~ccc2@bolin.Lib.lehigh.EDU) Quit (Ping timeout: 480 seconds)
[12:29] * fdmanana (~fdmanana@bl5-0-172.dsl.telepac.pt) has joined #ceph
[12:48] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[12:49] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit ()
[12:49] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) has joined #ceph
[12:54] * lupul (~lupu@86.107.101.245) has joined #ceph
[13:00] * senk (~Adium@212.201.122.52) has joined #ceph
[13:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:05] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:12] * lupul (~lupu@86.107.101.245) Quit (Ping timeout: 480 seconds)
[13:13] * senk (~Adium@212.201.122.52) has left #ceph
[13:13] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:20] * markbby (~Adium@168.94.245.1) has joined #ceph
[13:20] * yanzheng (~zhyan@134.134.139.72) Quit (Remote host closed the connection)
[13:22] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Read error: Operation timed out)
[13:23] * danieagle (~Daniel@186.214.63.14) Quit (Quit: Muito Obrigado por Tudo! :-))
[13:25] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:25] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[13:27] * lupul (~lupu@86.107.101.245) has joined #ceph
[13:27] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[13:34] * dpippenger1 (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[13:35] * lupul (~lupu@86.107.101.245) Quit (Ping timeout: 480 seconds)
[13:37] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:44] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:46] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[13:47] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[13:48] * dpippenger1 (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[13:50] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[13:53] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:58] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[14:14] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:15] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[14:25] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[14:34] * sroy (~sroy@2607:fad8:4:6:6e88:14ff:feff:5374) has joined #ceph
[14:35] * hjjg (~hg@p3EE31E76.dip0.t-ipconnect.de) has joined #ceph
[14:35] * ksingh (~Adium@2001:708:10:10:6cfa:a915:c0dd:8db2) has joined #ceph
[14:37] <ksingh> guys where to get rpm packages for ceph 0.76 version ???
[14:37] <ksingh> i have checked on ceph website , RPM packages are not available
[14:39] <alfredodeza> ksingh: I don't think there is a dedicated download URL for it, but the links are in the mailing list announcement
[14:39] <alfredodeza> one sec
[14:41] <ksingh> i check on sage's email , i can find tarball and git link
[14:41] <sjm> http://ceph.com/rpm-emperor/
[14:41] <ksingh> but not RPM
[14:41] <alfredodeza> yeah you are right
[14:41] <alfredodeza> sjm: that is not 0.76 though
[14:41] <sjm> oh
[14:42] <ksingh> does it mean , only after firefly release , we can get RPM packages from INKTANK
[14:42] <ksingh> before that we need to build our own ??
[14:42] <alfredodeza> ksingh: it might be too bleeding edge but if you want to try it you can use ceph-deploy and get an RPM from master
[14:42] * haomaiwang (~haomaiwan@118.186.151.36) has joined #ceph
[14:42] <ksingh> HEY i found it
[14:42] <ksingh> http://ceph.com/rpm-testing/el6/x86_64/
[14:42] <alfredodeza> ah testing right
[14:43] <alfredodeza> but the ceph-deploy offer is still up though :) you can specify branches and tags iirc
[14:43] <ksingh> alfredodeza :: can i use ceph-deploy tool to push new version to all my NODES
[14:43] <alfredodeza> yes
[14:43] <alfredodeza> well
[14:43] <ksingh> i guess you are doing a lot more work on ceph-deploy
[14:43] <ksingh> :-)
[14:43] <alfredodeza> :)
[14:44] <ksingh> by any chance are you coming to frankfurt @ ceph day
[14:44] <ksingh> it would be pleasure to meet you
[14:44] <alfredodeza> kind words :) I don't think it is planned for me to go
[14:45] <alfredodeza> it can be tricky to grab rpms directly from rpm-testing but doable
[14:46] <alfredodeza> you would need to tell ceph-deploy where the packages are and where the GPG url is at too
[14:46] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:46] <alfredodeza> something like: ceph-deploy install --repo-url https://ceph.com/rpm-testing {nodes}
[14:47] <alfredodeza> you would need to uninstall ceph-release manually and then do a yum clean all
[14:47] <alfredodeza> which reminds me this is something that ceph-deploy should do
[14:47] * alfredodeza goes to write a ticket
[14:48] <ksingh> ahaa, this ceph-deploy install --repo-url https://ceph.com/rpm-testing {nodes} looks to be very helpful
[14:49] <alfredodeza> it is also very well documented :) http://ceph.com/ceph-deploy/docs/install.html#behind-firewall
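
Putting alfredodeza's pieces together, a hedged sketch of the whole sequence (node names are examples and the GPG key URL is a placeholder; see the ceph-deploy docs linked above for the exact flags):

    ceph-deploy install --repo-url https://ceph.com/rpm-testing --gpg-url <release-key-url> node1 node2 node3
    # on each node, if an older ceph-release repo package is still installed:
    yum remove ceph-release
    yum clean all
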
[14:49] <ksingh> will this also install depend like
[14:49] <ksingh> libcephfs1 = 0.76-0.el6 is needed by ceph-0.76-0.el6.x86_64
[14:49] <ksingh> librados2 = 0.76-0.el6 is needed by ceph-0.76-0.el6.x86_64
[14:49] <ksingh> librbd1 = 0.76-0.el6 is needed by ceph-0.76-0.el6.x86_64
[14:49] * haomaiwa_ (~haomaiwan@106.38.255.132) has joined #ceph
[14:51] <ksingh> alfredodeza :: before upgrading version , is it mandatory to have my cluster healthy
[14:51] <ksingh> health HEALTH_WARN 31 pgs degraded;
[14:51] * fghaas (~florian@91-119-115-62.dynamic.xdsl-line.inode.at) Quit (Ping timeout: 480 seconds)
[14:51] <alfredodeza> ksingh: this is production?
[14:51] * vhasi (vhasi@vha.si) Quit (Remote host closed the connection)
[14:51] <ksingh> nopes
[14:51] <ksingh> :-)
[14:52] * alfredodeza breathes once again
[14:52] <ksingh> thats Y 0.76
[14:52] <ksingh> :-)
[14:52] * vhasi (vhasi@vha.si) has joined #ceph
[14:52] * gnlwlb (~sglwlb@124.90.106.171) Quit (Remote host closed the connection)
[14:52] <alfredodeza> ksingh: I am not well versed in doing seamless upgrades
[14:52] <alfredodeza> because it is something I usually don't try at all
[14:52] * gnlwlb (~sglwlb@124.90.106.171) has joined #ceph
[14:52] <alfredodeza> I am more blasting things away
[14:52] <alfredodeza> :D
[14:52] <ksingh> haaa
[14:53] * KGremliza (~oftc-webi@host-82-135-6-194.static.customer.m-online.net) has joined #ceph
[14:54] <KGremliza> Hi There
[14:54] <KGremliza> Have a problem configuring osds
[14:54] <KGremliza> somebody here who might help ?
[14:55] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[14:55] <ksingh> Suggestion please on this error message
[14:55] <ksingh> 2014-02-11 11:49:26.757258 7f8e56ec5700 -1 osd.9 239965 heartbeat_check: no reply from osd.10 ever on either front or back, first ping sent 2014-02-11 11:48:59.647427 (cutoff 2014-02-11 11:49:06.757255)
[14:55] <ksingh> 2014-02-11 11:49:26.950525 7f8e7b525700 -1 osd.9 239965 heartbeat_check: no reply from osd.10 ever on either front or back, first ping sent 2014-02-11 11:48:59.647427 (cutoff 2014-02-11 11:49:06.950516)
[14:56] <ksingh> FYI all my OSD's are UP and IN
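
heartbeat_check "no reply ... on either front or back" usually points at connectivity between the two OSD hosts rather than at the daemons themselves; a few hedged things to check (hostnames are placeholders):

    ceph osd find 10                  # which host and address osd.10 is bound to
    ss -tlnp | grep ceph-osd          # on that host: which ports the OSD is actually listening on
    ping -M do -s 8972 <osd10-host>   # from osd.9's host: MTU mismatch test if jumbo frames are in use
    # also verify firewall/cluster_network settings allow the 6800-7300 OSD port range in both directions
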
[14:56] * haomaiwang (~haomaiwan@118.186.151.36) Quit (Ping timeout: 480 seconds)
[14:58] <KGremliza> after this command: ceph-disk -v prepare --fs-type xfs /dev/sdg /dev/sdb5
[14:59] <KGremliza> i see these messages in the ceph-osd.12 log:
[14:59] <KGremliza> 2014-02-11 15:57:53.540453 7f56f87cb7c0 0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-osd, pid 12428 2014-02-11 15:57:53.545071 7f56f87cb7c0 1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) mkfs in /var/lib/ceph/tmp/mnt.yWWf3k 2014-02-11 15:57:53.545129 7f56f87cb7c0 1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) mkfs fsid is already set to 1d5bc417-9678-4afe-9260-993ed0409a16 2014-02-11 15:57:53.550
[14:59] <KGremliza> sorry for the formatting
[14:59] <KGremliza> 2014-02-11 15:57:53.540453 7f56f87cb7c0 0 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60), process ceph-osd, pid 12428
[14:59] <KGremliza> 2014-02-11 15:57:53.545071 7f56f87cb7c0 1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) mkfs in /var/lib/ceph/tmp/mnt.yWWf3k
[14:59] <KGremliza> 2014-02-11 15:57:53.545129 7f56f87cb7c0 1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) mkfs fsid is already set to 1d5bc417-9678-4afe-9260-993ed0409a16
[15:00] <KGremliza> 2014-02-11 15:57:53.550140 7f56f87cb7c0 1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) leveldb db exists/created
[15:00] <KGremliza> 2014-02-11 15:57:53.550260 7f56f87cb7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
[15:00] <KGremliza> 2014-02-11 15:57:53.550292 7f56f87cb7c0 1 journal _open /var/lib/ceph/tmp/mnt.yWWf3k/journal fd 10: 33746731008 bytes, block size 4096 bytes, directio = 1, aio = 0
[15:00] <KGremliza> 2014-02-11 15:57:53.550450 7f56f87cb7c0 -1 journal read_header error decoding journal header
[15:00] <KGremliza> 2014-02-11 15:57:53.550516 7f56f87cb7c0 -1 filestore(/var/lib/ceph/tmp/mnt.yWWf3k) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.yWWf3k/journal: (22) Invalid argument
[15:00] <KGremliza> so I wonder why it fails to create a journal
[15:01] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[15:02] <ksingh> KGremliza :: can you have a look on ceph-deploy logs , it will say something MORE
[15:03] <alfredodeza> issue 7390
[15:03] <kraken> alfredodeza might be talking about http://tracker.ceph.com/issues/7390 [ceph-deploy should remove ceph-release and call yum clean all]
[15:04] <ksingh> Geeks on Duty :-) any suggestion on my error message
[15:05] <KGremliza> ksingh: I don't run ceph-deploy and I cannot find a ceph-deploy log file
[15:05] * sarob (~sarob@2601:9:7080:13a:4024:346f:ae7d:d61c) has joined #ceph
[15:05] <ksingh> Kgremliza : sorry i didnt say your command ceph-disk -v prepare
[15:05] <ksingh> *saw
[15:07] <zviratko> Hi there, i want to ask a (maybe silly) question - we currently use EMC SAN to store all data and VMware to spin our VMs, but we are considering switching to Openstack - and we also love the idea of Ceph very much
[15:08] <zviratko> but we are not quite sure whether it's stable enough
[15:08] <nhm_> zviratko: Right now Inktank doesn't provide production support for the distributed file system, but does support the block device layer in production.
[15:08] <zviratko> I don't want to troll or to imply it isn't stable - I just have no experience - should we worry, or shouldn't we? After all, storage is what matters most, and if data is gone, we are screwed
[15:09] <zviratko> nhm_: yeah, we're thinking about RBD only, no CephFS
[15:09] <zviratko> I tested CephFS a while ago and it didn't work well (slow, basically) , but with RBD it looks much simpler
[15:10] <nhm_> zviratko: I think the plan is to have CephFS in production later this year. Maybe Q3.
[15:10] <zviratko> I just have no hands-on experience with it, so I don't know whether it is likely to corrupt something from time to time, or how faults typically manifest (whole cluster unavailable vs. one RBD unavailable) and so on
[15:10] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:10] * ksingh (~Adium@2001:708:10:10:6cfa:a915:c0dd:8db2) Quit (Quit: Leaving.)
[15:11] * fghaas (~florian@91-119-73-193.dynamic.xdsl-line.inode.at) has joined #ceph
[15:11] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[15:11] <zviratko> basically the question is: If something does go wrong, how much impact does the failure have - is it isolated to one OSD or RBD device, or does a fault manifest on all OSD's in a cluster (in a catastrophic way)
[15:12] <KGremliza> still need help. please !
[15:13] <nhm_> zviratko: well, probably like any distributed storage system you could potentially have either depending on the nature of what goes wrong.
[15:13] * sarob (~sarob@2601:9:7080:13a:4024:346f:ae7d:d61c) Quit (Ping timeout: 480 seconds)
[15:14] <zviratko> nhm_: the same applies to an enterprise SAN, I know :-) but those of you who use it somewhere must have experience what the usual failure looks like - a glitch mentioned in a log or catastrophe... it actually says a lot about stability
[15:17] <nhm_> zviratko: Ceph is pretty good about trying to do things like make sure that writes truly hit the disk, checksums, regular scrubs and deep-scrubs to look for errors, and self-healing. The downside of all of that is that it's complex. Typically my experience has been that the nastier problems are ones where something cascades. IE something goes wrong, causes tons of mon traffic, which causes the monitors to stop responding quickly, which causes hear
[15:17] <nhm_> zviratko: that's fairly unusual, but it can happen.
[15:17] * capri (~capri@212.218.127.222) has joined #ceph
[15:18] <mikedawson> zviratko: rbd backing an openstack cloud does in fact work, but you are right to question stability. Proper architecture and operational experience is critical to uptime. It'll be significantly more difficult than running a SAN, but for us the benefits outweigh all else.
[15:18] <zviratko> nhm_: thanks - that's what I was looking for (possibly from more people :)) - so in effect, did all IO stop, or did it just spin up all OSD machines?
[15:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:19] <zviratko> mikedawson: it will likely be supported (and in turn, architected) by Mirantis, so I guess we should be all right
[15:19] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) Quit (Quit: Ex-Chat)
[15:19] <zviratko> mikedawson: we are not going to play with it on our own :-)
[15:20] <zviratko> mikedawson: but failures do happen, and failure domain could be either isolated or catastrophic if all you have is one huge ceph cluster..
[15:20] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) has joined #ceph
[15:23] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[15:23] <mikedawson> zviratko: in openstack, you can use local ephemeral storage or multiple cinder pools to expand your failure domains. In fact you can run multiple Ceph clusters to create separate storage failure domains if you wish
[15:24] * capri_oner (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[15:24] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:24] <nhm_> zviratko: I'm kind of compounding multiple experiences. One example I remember is where we had some inefficient code calling leveldb (the key-value store that backs the monitors). In some instances, leveldb couldn't compact memory fast enough because it was constantly spinning iterators. This caused leveldb to just consume more and more memory and 100% CPU and eventually it would OOM the node and mons would go down. This was about 3-4 releases ba
[15:24] <mikedawson> zviratko: you can even have a Cinder pool backed by your existing EMC SAN playing nicely with one or more Ceph/RBD pools, I believe
[15:25] <zviratko> mikedawson: multiple ceph clusters sound nice
[15:25] <KGremliza> i get following error: -1 journal read_header error decoding journal head
[15:25] <zviratko> mikedawson: but it is still quite a large failure domain
[15:26] <KGremliza> when i do a : ceph-disk -v prepare --fs-type xfs /dev/sdg /dev/sdb5
[15:26] <KGremliza> any idea ??
[15:26] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[15:26] <mikedawson> nhm_: yep, there were lots of monitor issues around the Cuttlefish release
[15:27] <nhm_> mikedawson: Yeah, I did quite a bit of debugging on them. :)
[15:27] <mikedawson> nhm_: I did lots of getting kicked in the face by them! (and a bit of debugging)
[15:29] <fghaas> KGremliza: please be advised that you shouldn't feel entitled to snappy answers here; for that, there's support contracts. ever tried actually nuking both /dev/sdg and /dev/sdb5 by overwriting the first two megs with zeroes? also, why are you mucking around with ceph-disk when you can just use ceph-deploy?
[15:32] <nhm_> mikedawson: unfortunately a lot of those kinds of issues tend to show up at scale, and our internal test clusters are already pretty heavily used.
[15:32] <KGremliza> fghaas: I zapped sdg and dd if=/dev/zero of=/dev/sdb5 already; i also used ceph-deploy osd prepare ceph02:sdg:/dev/sdb5
[15:32] <nhm_> though our QA folks have a long running cluster now which I think has probably helped.
[15:33] <KGremliza> we will possibly go for support contracts, if I can prove things work fine for us.
[15:33] <alfredodeza> KGremliza: you should try the --zap-disk when creating
[15:33] <mikedawson> nhm_: +1 for a long-running QA cluster!
[15:33] <KGremliza> ah i forgot: i did: ceph-deploy osd prepare --zap-disk ceph02:sdg:/dev/sdb5
[15:34] <nhm_> mikedawson: we need to do a similar cluster for nightly performance testing.
[15:34] <alfredodeza> KGremliza: that would look like 'ceph-deploy osd create --zap-disk ceph02:sdg:/dev/sdb5'
[15:34] <nhm_> One of these days. :)
[15:34] <alfredodeza> KGremliza: oh so you are preparing only?
[15:34] <diegows> is it possible to do a hot resize in ceph? I've tried but it doesn't work... ran xfs_growfs after "rbd resize" and the size of the fs is the same
[15:34] <alfredodeza> no activate?
[15:34] <KGremliza> yes. that fails already. activating does nothing
[15:35] <alfredodeza> KGremliza: can you try to call create and show the output of ceph-deploy?
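For context, a sketch of the sequence being suggested in this exchange, reusing the host and devices from the conversation (ceph02, data disk sdg, journal partition /dev/sdb5); it combines the zeroing suggestion above with the create command alfredodeza quotes, and is not a verified fix for the journal read_header error:

    # zero the first couple of MB of the journal partition, as suggested above
    dd if=/dev/zero of=/dev/sdb5 bs=1M count=2
    # then wipe the data disk and create the OSD in one pass
    ceph-deploy osd create --zap-disk ceph02:sdg:/dev/sdb5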
[15:36] <fghaas> zviratko: as sage has said in the past, the dirty little secret of highly redundant storage systems is that if your software is buggy, all your beautiful redundancy can come to naught and your software eats your data. while I haven't seen Ceph do that, I have seen one instance where one Ceph component (radosgw) caused data to be ignored by another (osd). speaking of which, nhm_, any chance you could make osd filestore xattr use omap =
[15:37] <KGremliza> can i paste to a specific person ?
[15:37] <alfredodeza> KGremliza: use a paste site like fpaste
[15:37] <alfredodeza> please :)
[15:38] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[15:38] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit ()
[15:38] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[15:38] <KGremliza> http://ur1.ca/gly0i
[15:39] <nhm_> fghaas: default?
[15:39] <zviratko> fghaas: thanks. Sounds like the answer to my problems would be completely independent ceph clusters with cron-ed replication in case one fails. I am pretty confident they won't fail at the same time in the same fashion... :-) but s*t happens
[15:39] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[15:39] <KGremliza> looks good to me
[15:40] * mattt (~textual@94.236.7.190) has joined #ceph
[15:40] <nhm_> zviratko: actually, some people have talked about running ceph-osds on different backend filesystems specifically in case there is a filesystem bug that causes every OSD filesystem to fail at the same time in the same fashion! ;)
[15:41] <nhm_> note: we only currently support XFS in production
[15:41] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[15:41] <zviratko> nhm_: yeah, that's what I'm aiming at - if I were to do it literally, I'd have to run different versions on different kernels with different hardware, with different disks (in case all their timers overflow at once) and so on - but I'm trying to be more realistic than that :)
[15:42] <mikedawson> nhm_: Nightly performance testing / Weekly performance testing / Monthly performance testing would be interesting. It is tough to get the type of fragmentation I see in a nightly test.
[15:42] <KGremliza> still in the error log: http://ur1.ca/gly16
[15:43] <fghaas> nhm_: dc0dfb9e01d593afdd430ca776cf4da2c2240a20 was never backported to dumpling, afaics
[15:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:43] <nhm_> mikedawson: fragmentation is definitely an issue. I think the issue there though is that we already know it's happening. With BTRFS, RBD, and lots of small random writes it's exceptionally bad.
[15:44] <mikedawson> nhm_: then add metrics collection, perhaps via polling the admin sockets in a short interval (we do 10s). Talked with loicd and zackc about the method when we were having a weekly teuthology call
[15:44] <nhm_> mikedawson: I'm not sure there is a good way around it though. We just need autodefrag to work and see if it helps things.
[15:44] <mikedawson> nhm_: is someone working on autodefrag?
[15:45] <nhm_> mikedawson: josef told me the other day that he was going to try to fix the OOM issues.
[15:45] <nhm_> mikedawson: I don't know how high it is on his priority list though.
[15:45] <zviratko> nhm_: does fragmentation affect even larger blocksizes? I think ceph by default uses 1MiB, right? I'm asking because we're likely going to have some 4k RBD devices to avoid linked clone overhead...
[15:45] <diegows> is there an additional step to resize a block device? I've tried with rbd... resize then xfs_growfs and nothing
[15:45] <fghaas> nhm_: and there's a known issue in dumpling and prior releases where rgw can set so large an attribute set that listxattr() fails when an OSD enumerates the files in the filestore, making the OSD ignore that file. iiuc. the suggested workaround for dumpling seems to be to always enable osd filestore xattr use omap, so why not set that as the default on all filesystems?
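The workaround fghaas describes boils down to a single config option; a minimal sketch, assuming it goes in the [osd] section of ceph.conf and the OSDs are restarted afterwards:

    [osd]
    filestore xattr use omap = true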
[15:45] <diegows> blockdev reports the new size
[15:46] <zviratko> diegows: do you have a partition on top of the device?
[15:46] <diegows> no
[15:46] <zviratko> diegows: and how is the device attached?
[15:46] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) Quit (Quit: Ex-Chat)
[15:46] <t0rn> when running QEMU/KVM, what benefit does running "log flush" from the admin socket yield on the QEMU/KVM machine housing the VM? I see it flushes log entries to the log file (disk), but would that also clear those entries out of memory?
[15:46] <zviratko> diegows: does /sys/block/xxx/size return the correct size?
[15:47] <mikedawson> nhm_: i use xfs, but would love it if josef made progress on btrfs
[15:47] <diegows> zviratko, rbd map (using pacemaker resource script)
[15:47] <nhm_> zviratko: the issue specifically with btrfs and RBD is that when you have a 4MB object representing a block, that is actually stored on BTRFS as a 4MB file. If you do small random IO to the block you cause overwrites to that file in BTRFS, but due to btrfs' use of COW it will allocate new space for the write. This very quickly causes those 4MB blocks to become very fragmented. You really notice it later on if you do sequential reads.
[15:47] * ninkotech__ (~duplo@cst-prg-93-14.cust.vodafone.cz) has joined #ceph
[15:47] <zviratko> diegows: sorry, I know nearly nothing about rbd and ceph, but I spent a lot of time trying to do online expansion of filesystems (pain on centos 5.x)
[15:48] <fghaas> diegows: blockdev --rereadpt <device>
[15:48] <diegows> this always worked with lvm and xfs for example :)
[15:48] <fghaas> if the device is currently open, use partprobe instead
[15:48] <zviratko> diegows: does it show in dmsetup table?
[15:48] <nhm_> fghaas: yes, I was talking to Yehuda about that. I think we should set it as default. Honestly I've been setting it as true for most of my performance testing for over a year.
[15:49] <nhm_> fghaas: I only disable it when I specifically want to test how performance is with it off.
[15:49] <fghaas> nhm_: it's an accident waiting to happen, and when it does there is no recovery except what gregsfortytwo has rightly been calling "a manual hack job"
[15:50] <fghaas> and dumpling being your stable/supported/enterprise/whatnot release, sane defaults should rather be a primary concern
[15:50] <diegows> zviratko, no
[15:50] <diegows> fghaas, I've tried with both and nothing, the device is currently in use
[15:50] <nhm_> fghaas: I think you are absolutely right. Want to make a ticket for it?
[15:51] <fghaas> surely there's already an issue for "rgw executes filestore entries by attribute firing squad", no?
[15:52] <fghaas> I'm sure there must be, as you guys are very aware of the issue, I just didn't find it so maybe it's marked private or something
[15:52] <nhm_> fghaas: hrm, I meant specifically to backport it, but maybe that's there already.
[15:53] <KGremliza> alfredodeza: any tips ?
[15:53] <fghaas> nhm_: well I'd like to at least x-ref the other one... help me out with an issue id?
[15:53] <fghaas> (sometimes your issue descriptions aren't very useful for mere mortals, so full text search isn't always helpful)
[15:54] <nhm_> fghaas: hah, I have the same problem. ;)
[15:54] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[15:54] <fghaas> diegows: you're on pacemaker, right? a simple failover or resource stop/start should re-map
[15:55] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) has joined #ceph
[15:55] <diegows> fghaas, yes, I know... but I'll have to wait until night :)
[15:56] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:57] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:57] <mikedawson> nhm_: Last week you suggested I look at the folder splitting settings due to my workload (RBD, guests have full volumes and constantly discard oldest data to write newer data in ~16KB chunks). Do you think the resolution to Issue #7207 is relevant to me?
[15:57] <mikedawson> http://tracker.ceph.com/issues/7207
[15:58] * ScOut3R (~ScOut3R@254C032E.nat.pool.telekom.hu) has joined #ceph
[15:58] <nhm_> mikedawson: we had some internal discussion about that last week. The end result is Sam suspects probably not, Greg suspects probably so, and I'm in the middle. ;)
[15:59] * ScOut3R (~ScOut3R@254C032E.nat.pool.telekom.hu) Quit (Read error: No route to host)
[15:59] <mikedawson> nhm_: Do you know if that will be backported to Dumpling or Emperor? I'd be happy to test on my workload and give a real world opinion.
[15:59] <nhm_> mikedawson: Though actually I may be mischaracterizing the conversation a bit.
[16:00] * ScOut3R (~ScOut3R@254C032E.nat.pool.telekom.hu) has joined #ceph
[16:00] <nhm_> mikedawson: I think the big question is on your cluster now that it's full, is splitting continuing to happen much?
[16:00] <nhm_> mikedawson: it probably shouldn't be, but you may be suffering the effects of having a deep directory hierarchy.
[16:01] <nhm_> sorry, the effects of lots of splits having already happened in the past.
[16:01] * ScOut3R (~ScOut3R@254C032E.nat.pool.telekom.hu) Quit (Read error: Connection reset by peer)
[16:01] <nhm_> Not the effects of splits happening in the present.
[16:02] <mikedawson> nhm_: not sure, I haven't ever paid attention to directory splitting. We're also growing the cluster and adding more rbd volumes with similar workload.
[16:04] <fghaas> diegows: going out on a limb here, xfs_freeze -f <mountpoint>; blockdev --rereadpt <dev>; xfs_freeze -u <mountpoint> *might* work, but try that at your own peril
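Pulling this thread together, a hedged sketch of the online-grow sequence for a mapped, unpartitioned RBD device with XFS on it; the image name, device node, and mountpoint below are placeholders, and whether the new size is picked up without a re-map depends on the kernel client in use:

    rbd resize --size 20480 rbd/myimage          # grow the image (size in MB)
    blockdev --rereadpt /dev/rbd0                # ask the kernel to pick up the new size
    # if the device is busy, the freeze trick above (at your own risk):
    xfs_freeze -f /mnt/data && blockdev --rereadpt /dev/rbd0 && xfs_freeze -u /mnt/data
    xfs_growfs /mnt/data                         # grow XFS into the new space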
[16:04] <loicd> mikedawson: i'm learning more and more about admin sockets while fixing https://github.com/ceph/ceph/pull/1207 ;-)
[16:05] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:06] <alfredodeza> KGremliza: if ceph-deploy did not complain and calling ceph-disk -v didn't mention anything useful I am not sure :(
[16:06] * ninkotech__ (~duplo@cst-prg-93-14.cust.vodafone.cz) Quit (Ping timeout: 480 seconds)
[16:07] <nhm_> mikedawson: basically once you hit a certain number of objects per PG, it will split the PG folder into subfolders, which will themselves split if they get too big. There is both a short term cost to doing that while it's happening which the bug is discussing and a long term effect (more seeks for dentry lookups).
[16:07] <nhm_> mikedawson: increasing the threshold for splitting is nice for writes, but potentially may make backfilling slower.
[16:07] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[16:08] * ninkotech__ (~duplo@cst-prg-93-14.cust.vodafone.cz) has joined #ceph
[16:08] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Read error: Operation timed out)
[16:08] <mikedawson> nhm_: what's the best way to look at the current state of directory splitting? and look for the velocity of needing to split on an ongoing basis?
[16:09] <nhm_> mikedawson: I just go into the osd's current directory and examine the directory hierarchy
[16:10] <nhm_> mikedawson: specifically how deep it is
[16:10] <nhm_> mikedawson: you can guesstimate if you know approximately how many objects are in the cluster and how many PGs.
[16:11] <nhm_> My instinct is that our default thresholds for merge/split are too low, but I haven't done a test looking at backfill performance yet.
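For reference, the merge/split thresholds in question are filestore settings in ceph.conf; a sketch with illustrative values only (raising them delays splitting, per the trade-off nhm_ describes), not tuned recommendations:

    [osd]
    filestore merge threshold = 40
    filestore split multiple = 8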
[16:12] <KGremliza> alfredodeza: i'll try to increase logging
[16:13] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:13] <fghaas> nhm_: I found it. but I'm sorry, that description is terrible: http://tracker.ceph.com/issues/6143
[16:15] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[16:16] <nhm_> fghaas: sorry, got distracted. Was looking for it, honest. :)
[16:19] * schmee_ (~quassel@phobos.isoho.st) has joined #ceph
[16:19] * schmee (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[16:19] * vata (~vata@2607:fad8:4:6:c0bd:2f89:329d:82f4) has joined #ceph
[16:20] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:20] <mikedawson> nhm_: here is the directory structure of one of my PGs on osd.0 http://pastebin.com/raw.php?i=J86gsKcq
[16:21] * KGremliza (~oftc-webi@host-82-135-6-194.static.customer.m-online.net) Quit (Quit: Page closed)
[16:22] <mikedawson> nhm_: looks a bit odd to me. DIR_D only has a DIR_D which only has a DIR_F. After that it splits more like I would expect. Is that normal?
[16:22] <nhm_> mikedawson: I think so actually. I don't remember the rationale for doing it that way.
[16:23] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:23] <nhm_> mikedawson: Maybe it grows out the other ones eventually. When a split happens I think it makes a deeper hierarchy than just 1 level.
[16:24] <nhm_> In any event, as you can see, an object could potentially have a number of dentry lookups associated with it depending on how well they get cached
[16:25] <mikedawson> nhm_: i suppose it could fill in the other missing parts of the tree later, but it should also know that, due to space limitations on the underlying drive, it's impossible to need that deep a directory hierarchy before exhausting drive space
[16:26] <nhm_> mikedawson: you may find that setting /proc/sys/vm/vfs_cache_pressure to something like 10 may be useful.
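The tunable nhm_ mentions biases the kernel toward keeping dentries and inodes cached; a sketch of applying it, where 10 is simply the figure suggested above:

    echo 10 > /proc/sys/vm/vfs_cache_pressure
    # or, to persist across reboots:
    echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf
    sysctl -p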
[16:26] <fghaas> nhm_: issue updated
[16:26] <fghaas> http://tracker.ceph.com/issues/6143#note-8
[16:27] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[16:27] * steki (~steki@91.195.39.5) has joined #ceph
[16:28] <nhm_> fghaas: cool, thanks. I'll make sure Neil sees it.
[16:29] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[16:30] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[16:32] <glambert_> wow, irc active today :P
[16:32] <glambert_> anyone having issues with rbd-fuse or had issues with it?
[16:32] * JCL (~JCL@2601:9:3280:5a3:29a4:95a7:7d57:c551) has joined #ceph
[16:34] <fghaas> glambert_: the way you put that, the probability that the answer is "yes" is approximately 1-ε
[16:36] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:36] <fretb> Hi guys
[16:37] <fretb> is there any option to limit the bandwidth for backfilling operations when rebalancing?
[16:39] <loicd> fghaas: while reading https://github.com/ceph/ceph/pull/1176/files#diff-d71e7a28d9b138151c55d97f5000fd80R1378 I thought about you because it's clever and simple ;-)
[16:39] <mikedawson> fretb: look at osd max backfills, osd recovery op priority, osd recovery max active
[16:40] <fretb> Testing with 3 nodes, 7 OSDs each; when pulling 1 node (7 OSDs) from the cluster and it starts backfilling, all client IOPS drop to zero
[16:41] <mikedawson> fretb: do you have a write-heavy workload?
[16:41] <fretb> Yes, FIO test with small BS writes
[16:42] <fretb> so a lot of writes :)
[16:43] <mikedawson> fretb: thought so. backfilling and benchmarking small writes tend to saturate your disks. Confirm with 'iostat -xt 1'. %util will likely approach 100%
[16:43] <fretb> Yet I thought the client traffic has priority over backfilling by default
[16:43] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:45] <mikedawson> fretb: ceph doesn't really have much visibility into how hard your spinners are thrashing in my experience
[16:45] * sarob (~sarob@2601:9:7080:13a:b481:8d84:3349:fe84) has joined #ceph
[16:46] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) has joined #ceph
[16:47] <fretb> Hmm ok, will investigate further. Thanks!
[16:48] <mikedawson> fretb: it's common for users with write-heavy workloads to experience issues when backfilling or with deep-scrub enabled. A few things tend to help: 1) lower those settings I mentioned to throttle backfilling, 2) move osd journals, 3) add more and/or faster drives
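A sketch of turning those knobs down, either in ceph.conf or injected into running OSDs; the values are illustrative starting points, not tuned recommendations:

    [osd]
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1

    # or at runtime:
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'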
[16:48] <fghaas> loicd: I appreciate the kind words, but I'll leave the simplicity and cleverness (in short: elegance) to sage there :)
[16:49] <loicd> I remember you saying how you like this elegance in Ceph.
[16:49] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Read error: Operation timed out)
[16:51] <fghaas> loicd: I do, except I'm happy with digging into the CRUSH papers,
[16:51] <fghaas> I'm not much of the type to find elegance in code, really
[16:52] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[16:52] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[16:53] * sarob (~sarob@2601:9:7080:13a:b481:8d84:3349:fe84) Quit (Ping timeout: 480 seconds)
[16:59] * i_m (~ivan.miro@deibp9eh1--blueice2n1.emea.ibm.com) Quit (Read error: Connection reset by peer)
[16:59] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:01] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[17:02] * sarob (~sarob@2601:9:7080:13a:dcde:6a49:cb41:1540) has joined #ceph
[17:05] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:06] * gnlwlb (~sglwlb@124.90.106.171) Quit ()
[17:09] * ninkotech__ (~duplo@cst-prg-93-14.cust.vodafone.cz) Quit (Ping timeout: 480 seconds)
[17:09] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Read error: Connection reset by peer)
[17:09] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:10] * sarob (~sarob@2601:9:7080:13a:dcde:6a49:cb41:1540) Quit (Ping timeout: 480 seconds)
[17:11] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[17:11] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:15] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:16] * steki (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:18] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: Textual IRC Client: www.textualapp.com)
[17:19] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:25] * sprachgenerator (~sprachgen@vis-v410v141.mcs.anl-external.org) has joined #ceph
[17:27] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[17:27] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:31] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[17:32] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:32] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:32] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[17:34] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[17:36] * poelzi1 is now known as poelzi
[17:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[17:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[17:39] * hjjg (~hg@p3EE31E76.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:41] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[17:43] * sarob_ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[17:43] * sarob (~sarob@2601:9:7080:13a:512c:92d6:f3c2:987b) has joined #ceph
[17:45] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:45] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[17:48] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[17:51] * sarob (~sarob@2601:9:7080:13a:512c:92d6:f3c2:987b) Quit (Ping timeout: 480 seconds)
[17:53] <nhm_> regarding backfilling: we are considering changing the default osd recovery op priority to be lower
[17:53] * linuxkidd (~linuxkidd@2001:420:2100:2258:39d3:de25:be2d:1e03) Quit (Quit: Leaving)
[17:55] * dpippenger1 (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[17:57] * sjustlaptop (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[17:57] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:58] * thomnico (~thomnico@2a01:e35:8b41:120:5c79:175b:11d6:f7c5) Quit (Quit: Ex-Chat)
[17:59] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) has joined #ceph
[18:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:01] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:04] * mattt (~textual@94.236.7.190) Quit (Read error: Operation timed out)
[18:05] * bandrus (~Adium@adsl-75-5-250-54.dsl.scrm01.sbcglobal.net) has joined #ceph
[18:06] * alram (~alram@38.122.20.226) has joined #ceph
[18:07] * dpippenger1 (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Read error: Operation timed out)
[18:08] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[18:10] * nwat (~textual@eduroam-245-90.ucsc.edu) has joined #ceph
[18:11] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:14] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) has joined #ceph
[18:14] * sarob (~sarob@2601:9:7080:13a:fdf2:e7c2:c916:ef80) has joined #ceph
[18:19] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:27] * Cube (~Cube@66-87-65-226.pools.spcsdns.net) has joined #ceph
[18:29] * alexm_ (~alexm@83.167.43.235) Quit (Read error: Operation timed out)
[18:32] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) Quit (Ping timeout: 480 seconds)
[18:32] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[18:33] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:35] <kitz> Why might my writes be faster than my reads?
[18:35] * Underbyte (~jerrad@pat-global.macpractice.net) has joined #ceph
[18:36] <kitz> read : io=11224MB, bw=379557KB/s, iops=90 , runt= 30281msec
[18:36] <kitz> write: io=15568MB, bw=527397KB/s, iops=126 , runt= 30227msec
[18:38] * srenatus (~stephan@185.27.182.16) Quit (Read error: Operation timed out)
[18:39] <singler> kitz: try benchmarking longer, maybe write buffers are not full yet
[18:40] <kitz> ok. running for 5min. Thanks.
[18:40] * sarob (~sarob@2601:9:7080:13a:fdf2:e7c2:c916:ef80) Quit (Remote host closed the connection)
[18:40] * sarob (~sarob@2601:9:7080:13a:fdf2:e7c2:c916:ef80) has joined #ceph
[18:41] * linuxkidd (~linuxkidd@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[18:47] * lupul (~lupu@86.107.101.245) has joined #ceph
[18:48] * sarob (~sarob@2601:9:7080:13a:fdf2:e7c2:c916:ef80) Quit (Ping timeout: 480 seconds)
[18:50] <kitz> hum. closer but still *just* a little faster. weird.
[18:50] <kitz> read : io=136096MB, bw=464140KB/s, iops=113 , runt=300259msec
[18:50] <kitz> write: io=140384MB, bw=478141KB/s, iops=116 , runt=300650msec
[18:50] * schmee_ is now known as schmee
[18:51] * diegows (~diegows@190.190.17.57) Quit (Read error: Operation timed out)
[18:52] * garphy is now known as garphy`aw
[18:52] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[18:54] * kaizh (~oftc-webi@128-107-239-233.cisco.com) has joined #ceph
[18:55] * rotbeard (~redbeard@2a02:908:df10:5c80:76f0:6dff:fe3b:994d) has joined #ceph
[18:57] * dpippenger (~riven@cpe-172-249-32-219.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:04] * rmoe_ (~quassel@12.164.168.117) has joined #ceph
[19:04] <kitz> yeah. and again.
[19:04] <kitz> read : io=150364MB, bw=512878KB/s, iops=125 , runt=300213msec
[19:04] <kitz> write: io=156020MB, bw=532367KB/s, iops=129 , runt=300102msec
[19:08] <nhm_> kitz: is this sequential?
[19:08] <kitz> random
[19:08] <nhm_> large IOs?
[19:09] <kitz> 4m, iodepth 64
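For context, a fio invocation roughly matching what kitz describes (4 MB random reads, iodepth 64, 5-minute run); the target device is a placeholder and the actual job file used here isn't shown in the log:

    fio --name=rbd-randread --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randread --bs=4m --iodepth=64 --runtime=300 --time_based --group_reporting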
[19:09] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[19:09] <nhm_> ok, so 4m ios, even random, are going to be fairly sequential
[19:09] <nhm_> you might try changing the readahead on both the rbd volume and the underlying OSDs
[19:10] <nhm_> specifically on the OSDs, I see some pretty dramatic improvement by setting a higher readahead on some systems. client side readahead might be more complex, but it's probably worth playing with it.
[19:12] * sjustwork (~sam@2607:f298:a:607:8480:daad:c57d:e474) has joined #ceph
[19:14] <kitz> thanks, nhm_. I'll look into that.
[19:15] * TheBitte_ (~thebitter@195.10.250.233) Quit ()
[19:16] <nhm_> kitz: on some of our test systems we see good results with 4MB readahead, but that may be excessive.
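A sketch of what that readahead tuning looks like; 4096 KB matches the 4MB figure mentioned, the device names are placeholders, and the client-side part only applies to kernel-mapped rbd devices (not librbd):

    # on each OSD host, per data disk:
    echo 4096 > /sys/block/sdb/queue/read_ahead_kb
    # on the client, for a kernel-mapped rbd device:
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb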
[19:16] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:16] <kitz> ok. I'll try stuff in that neighborhood and see what I get. should I do regular readahead or filesystem readahead?
[19:23] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:27] * srenatus (~stephan@f055080148.adsl.alicedsl.de) has joined #ceph
[19:37] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[19:38] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[19:38] * dpippenger (~riven@66.192.9.78) has joined #ceph
[19:45] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[19:55] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:56] <cbob> we just stood up a CloudStack deployment with ceph, and everything is working except for snapshotting - does anyone have that working? also, r/w in our VMs is pretty slow, like 8MB/s; right now we only have 3 ceph nodes and don't have SSDs in them. if i add a pair of SSDs to each and add another node, will my performance increase?
[20:00] * jessebmiller (~Adium@2601:d:2780:891:c884:149:eec3:5267) has joined #ceph
[20:05] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:10] * kaizh (~oftc-webi@128-107-239-233.cisco.com) Quit (Remote host closed the connection)
[20:10] * rmoe (~quassel@12.164.168.117) has joined #ceph
[20:11] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:14] <fghaas> cbob: wido_ would be the person you're looking for, as he wrote the rbd/CloudStack integration bits, but I haven't seen him in the channel all day so you might want to take your question to the list
[20:15] <cbob> how bout performance wise?
[20:15] * dmsimard1 (~Adium@70.38.0.246) has joined #ceph
[20:16] <fghaas> what I can tell you though is that yes, an SSD journal is expected to increase your OSD performance, and rbd greatly benefits from rbd caching being enabled in libvirt/qemu - and while I can tell you how to do that in OpenStack, I'm really not a CloudStack guy and can't be of much assistance
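The rbd caching fghaas refers to is a client-side librbd setting; a minimal sketch for ceph.conf on the hypervisor hosts (qemu's own disk cache mode, e.g. cache=writeback, also needs to be set, and how that is wired up is platform-specific and outside this conversation):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true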
[20:21] * dmsimard2 (~Adium@108.163.152.66) has joined #ceph
[20:21] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[20:24] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Quit: ZNC - http://znc.sourceforge.net)
[20:27] * dmsimard1 (~Adium@70.38.0.246) Quit (Ping timeout: 480 seconds)
[20:28] * kaizh (~oftc-webi@128-107-239-235.cisco.com) has joined #ceph
[20:29] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[20:29] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) Quit ()
[20:33] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[20:35] * diegows (~diegows@190.190.17.57) has joined #ceph
[20:37] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[20:40] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[20:41] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (Quit: ZNC - http://znc.sourceforge.net)
[20:41] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[20:43] * poelzi (~poelzi@2001:4dd0:fb82:c3d2:a288:b4ff:fe97:a0c8) has left #ceph
[20:43] * poelzi (~poelzi@2001:4dd0:fb82:c3d2:a288:b4ff:fe97:a0c8) has joined #ceph
[20:43] * rmoe_ (~quassel@12.164.168.117) has joined #ceph
[20:44] <poelzi> how can i access the osd variable from a boost::statechart::result context ?
[20:45] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[20:45] * JeffK (~JeffK@38.99.52.10) Quit (Ping timeout: 480 seconds)
[20:45] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[20:46] * srenatus (~stephan@f055080148.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[20:47] * senk (~Adium@2a02:908:fd5d:4d81:fd56:90c5:f066:f8f2) has joined #ceph
[20:47] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[20:48] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[20:51] * dmsimard2 (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[20:52] * Meths_ (~meths@2.30.104.137) has joined #ceph
[20:55] * Meths (~meths@2.25.214.69) Quit (Ping timeout: 480 seconds)
[20:58] <davidzlap> poelzi: You should post that to #ceph-devel
[20:58] <dmick> he did
[21:01] * senk (~Adium@2a02:908:fd5d:4d81:fd56:90c5:f066:f8f2) Quit (Quit: Leaving.)
[21:02] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[21:02] * ChanServ sets mode +v andreask
[21:04] * fghaas (~florian@91-119-73-193.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:05] * nwat (~textual@eduroam-245-90.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:08] * madkiss (~madkiss@zid-vpnn064.uibk.ac.at) has joined #ceph
[21:09] * nwat (~textual@eduroam-245-90.ucsc.edu) has joined #ceph
[21:09] * srenatus (~stephan@f055080148.adsl.alicedsl.de) has joined #ceph
[21:10] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[21:12] * madkiss1 (~madkiss@zid-vpnn077.uibk.ac.at) has joined #ceph
[21:15] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[21:16] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[21:17] * madkiss (~madkiss@zid-vpnn064.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[21:18] * madkiss (~madkiss@089144217105.atnat0026.highway.a1.net) has joined #ceph
[21:20] * madkiss1 (~madkiss@zid-vpnn077.uibk.ac.at) Quit (Read error: Operation timed out)
[21:24] * fghaas (~florian@91-119-73-193.dynamic.xdsl-line.inode.at) has joined #ceph
[21:25] * danieagle (~Daniel@186.214.63.14) has joined #ceph
[21:25] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[21:28] * Meths_ is now known as Meths
[21:32] * kaizh (~oftc-webi@128-107-239-235.cisco.com) Quit (Remote host closed the connection)
[21:32] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[21:35] * sarob (~sarob@2001:4998:effd:600:2086:3c39:2862:a4e8) has joined #ceph
[21:38] * mattt (~textual@92.52.76.140) has joined #ceph
[21:40] * jessebmiller (~Adium@2601:d:2780:891:c884:149:eec3:5267) Quit (Read error: Connection reset by peer)
[21:40] * poelzi (~poelzi@2001:4dd0:fb82:c3d2:a288:b4ff:fe97:a0c8) Quit (Quit: Leaving.)
[21:40] * poelzi (~poelzi@2001:4dd0:fb82:c3d2:a288:b4ff:fe97:a0c8) has joined #ceph
[21:42] * jessebmiller (~Adium@c-67-167-2-224.hsd1.il.comcast.net) has joined #ceph
[21:43] * sarob (~sarob@2001:4998:effd:600:2086:3c39:2862:a4e8) Quit (Ping timeout: 480 seconds)
[21:44] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:51] * sarob_ (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:51] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[21:55] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has left #ceph
[21:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[21:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has left #ceph
[21:56] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[21:57] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[21:59] * sarob_ (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:02] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[22:02] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[22:07] * mattt (~textual@92.52.76.140) Quit (Read error: Connection reset by peer)
[22:10] * sarob (~sarob@2001:4998:effd:600:8cb3:6537:bc0d:83fe) has joined #ceph
[22:14] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[22:14] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[22:16] * garphy`aw is now known as garphy
[22:17] * madkiss (~madkiss@089144217105.atnat0026.highway.a1.net) Quit (Ping timeout: 480 seconds)
[22:20] * nwat (~textual@eduroam-245-90.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:21] * srenatus (~stephan@f055080148.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[22:21] * sarob_ (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:26] * sroy (~sroy@2607:fad8:4:6:6e88:14ff:feff:5374) Quit (Quit: Quitte)
[22:26] * sarob (~sarob@2001:4998:effd:600:8cb3:6537:bc0d:83fe) Quit (Ping timeout: 480 seconds)
[22:26] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[22:26] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[22:27] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[22:28] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[22:28] <ntranger> hey all! I'm running into an issue trying to delete a pool. I'm following the ceph instructions, but I keep getting an invalid command
[22:28] <xmltok> whats your command?
[22:28] <pmatulis> ntranger: pastebin your command and its output
[22:30] <ntranger> http://pastebin.com/QXc0pF04
[22:31] <dmick> --yes-i-really-mean-it is not --yes-i-really-really-mean-it
[22:31] <dmick> really++
[22:31] <dmick> and you don't enter brackets either
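For reference, the corrected form of the command dmick is pointing at, with a placeholder substituted for the bracketed pool name (the pool name is given twice by design):

    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it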
[22:32] <ntranger> ah, I'm such a dork. Thanks. :)
[22:32] <ntranger> thank you. :)
[22:32] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[22:33] <dmick> nw
[22:33] * nwat (~textual@eduroam-245-90.ucsc.edu) has joined #ceph
[22:39] * kaizh (~oftc-webi@128-107-239-236.cisco.com) has joined #ceph
[22:39] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[22:40] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[22:42] * sarob_ (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:43] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[22:43] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[22:44] * nwat (~textual@eduroam-245-90.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:47] * rendar (~s@host26-115-dynamic.57-82-r.retail.telecomitalia.it) Quit ()
[22:48] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[22:50] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[22:51] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[22:54] * BillK (~BillK-OFT@124-148-118-126.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[22:59] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) has joined #ceph
[22:59] * sarob (~sarob@2001:4998:effd:600:bdf9:ce94:755:7197) has joined #ceph
[23:00] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[23:00] * sarob (~sarob@2001:4998:effd:600:bdf9:ce94:755:7197) Quit (Remote host closed the connection)
[23:01] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:01] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[23:01] * nwat (~textual@eduroam-245-90.ucsc.edu) has joined #ceph
[23:01] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:03] * ZyTer (~ZyTer@ghostbusters.apinnet.fr) has joined #ceph
[23:03] * cbob_ (~cbob@69.9.232.63) has joined #ceph
[23:05] * fghaas (~florian@91-119-73-193.dynamic.xdsl-line.inode.at) has left #ceph
[23:08] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:09] * sarob (~sarob@nat-dip32-wl-f.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[23:09] * cbob (~cbob@host-63-232-9-69.midco.net) Quit (Ping timeout: 480 seconds)
[23:13] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[23:13] * nwat (~textual@eduroam-245-90.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:14] * BillK (~BillK-OFT@58-7-125-144.dyn.iinet.net.au) has joined #ceph
[23:18] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[23:20] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:21] <bens> how do i use wildcards in rados
[23:22] <dmick> for what?
[23:25] <bens> rm
[23:25] <bens> killed a bench job
[23:25] <bens> didn't cleanup.
[23:25] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:25] <bens> trying to clean it up manually
[23:26] <dmick> don't think you do, for that. you can process the output of ls tho
[23:26] <bens> thats what i'm doing with xargs
[23:28] <dmick> oh does rm take multiple objnames? I wasn't even aware of that; just thinking rados ls | while read o; do rados rm "$o"; done
[23:28] * ScOut3R (~scout3r@54024282.dsl.pool.telekom.hu) Quit ()
[23:29] * Steki (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:30] <bens> it does unless there are 90 bazillion of them
[23:30] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: He who laughs last, thinks slowest)
[23:30] <bens> i like big buf()s and I cannot lie.
[23:31] <dmick> I guess you pretty much had to.
[23:32] <bens> yeah, sorry.
[23:32] <dmick> no you're not.
[23:35] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:36] * dmsimard (~Adium@70.38.0.246) has joined #ceph
[23:36] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[23:38] * dmsimard1 (~Adium@108.163.152.2) has joined #ceph
[23:38] <andreask> bens: you did "rados bench..."? there is "rados cleanup"
[23:41] * ninkotech__ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[23:43] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:44] * dmsimard (~Adium@70.38.0.246) Quit (Ping timeout: 480 seconds)
[23:44] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[23:47] * xarses (~andreww@12.164.168.117) has joined #ceph
[23:47] <dmick> ^ that too
[23:48] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:49] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:49] <bens> wait what
[23:50] <bens> what is the prefix?
[23:52] <dmick> presumably the common prefix of the test files?
[23:52] <bens> yep
[23:52] <bens> wildcard!
[23:53] <bens> Warning: using slow linear search
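Putting the two approaches from this exchange together, a sketch of cleaning up leftover bench objects; the pool name and object prefix are placeholders, and the exact arguments the cleanup subcommand accepts vary between releases:

    # let rados remove its own benchmark objects by prefix (hence the linear-search warning above):
    rados -p mypool cleanup benchmark_data
    # or fall back to listing and deleting objects matching the prefix one by one:
    rados -p mypool ls | grep '^benchmark_data' | while read o; do rados -p mypool rm "$o"; done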
[23:53] * jayc (~jayc@nat-gw1.syd4.anchor.net.au) has joined #ceph
[23:54] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:54] * allsystemsarego (~allsystem@188.25.135.30) Quit (Quit: Leaving)
[23:58] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[23:59] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.