#ceph IRC Log

IRC Log for 2014-04-02

Timestamps are in GMT/BST.

[0:09] * dmsimard1 (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:10] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:12] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[0:14] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[0:20] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:22] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[0:22] * al (d@2001:41d0:2:1c00::5) Quit (Remote host closed the connection)
[0:23] <Gamekiller77> hey guys, I'm seeing something in some write-ups saying that if you change the "chunk" to something like 1 MB you can get better performance from Ceph
[0:23] <Gamekiller77> not sure what they are talking about, as I do not see much on the Ceph site about this term "chunk"
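For reference, the "chunk" in write-ups like that most likely refers to the RBD object size: images are striped over 2^order-byte objects, 4 MB by default. A minimal sketch of tuning it at image creation time (pool and image names are placeholders):

    # create a 10 GB image striped over 1 MB objects instead of the default 4 MB
    rbd create rbd/testimage --size 10240 --order 20
    rbd info rbd/testimage      # reports the order / object size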
[0:26] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 28.0/20140314220517])
[0:33] * godog (~filo@0001309c.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:35] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[0:41] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[0:41] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[0:43] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[0:44] * ircolle (~Adium@2601:1:8380:2d9:b152:4e31:e203:1d53) Quit (Quit: Leaving.)
[0:47] * Nats_ (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[0:47] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[0:50] * al (d@niel.cx) has joined #ceph
[0:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:55] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[0:56] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) has joined #ceph
[1:03] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[1:03] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[1:03] * r0r_taga_ (~nick@greenback.pod4.org) Quit (Read error: Operation timed out)
[1:03] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[1:06] * r0r_taga (~nick@greenback.pod4.org) has joined #ceph
[1:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:13] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:19] * guppy (~quassel@guppy.xxx) Quit (Quit: No Ping reply in 180 seconds.)
[1:19] * guppy (~quassel@guppy.xxx) has joined #ceph
[1:21] * rturk-away is now known as rturk
[1:23] * yeled (~yeled@spodder.com) Quit (Ping timeout: 480 seconds)
[1:25] * yeled (~yeled@spodder.com) has joined #ceph
[1:29] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[1:30] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[1:31] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[1:32] * rturk is now known as rturk-away
[1:37] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[1:38] * bitblt (~don@128-107-239-233.cisco.com) Quit (Quit: Leaving)
[1:47] * rturk-away is now known as rturk
[1:48] * Cube1 (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[1:51] * gNetLabs (~gnetlabs@188.84.22.23) Quit (Ping timeout: 480 seconds)
[2:03] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:04] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[2:11] * rmoe_ (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:12] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[2:22] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:24] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[2:25] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[2:27] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:32] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[2:34] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[2:36] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:37] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:37] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Remote host closed the connection)
[2:37] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[2:40] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[2:42] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[2:43] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[2:44] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) Quit (Quit: This computer has gone to sleep)
[2:45] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) Quit (Quit: leaving)
[2:47] * fdmanana (~fdmanana@bl9-168-27.dsl.telepac.pt) Quit (Quit: Leaving)
[2:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:57] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[3:04] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:04] * rturk is now known as rturk-away
[3:09] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[3:12] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:13] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[3:14] * JoeGruher (~JoeGruher@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[3:17] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[3:18] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[3:24] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[3:26] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[3:26] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[3:28] * diegows_ (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[3:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:38] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:40] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:50] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[3:52] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[3:53] * sz0 (~user@208.72.139.54) has joined #ceph
[3:57] <sz0> hello, I'm trying my first Ceph cluster installation right now and I'm having an SSH connectivity problem with ceph-deploy. It just hangs. I made sure the user I'm running ceph-deploy as can connect over SSH
[4:00] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[4:08] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[4:12] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[4:17] <pmatulis> sz0: with passwordless sudo on the remote end as well?
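For reference, ceph-deploy also expects passwordless sudo for the remote user; a minimal sketch, assuming the remote user is named "cephuser":

    echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
    sudo chmod 0440 /etc/sudoers.d/cephuser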
[4:20] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[4:41] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[4:46] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[4:52] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Leaving...)
[4:53] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[5:03] * Vacum (~vovo@88.130.200.211) has joined #ceph
[5:05] * JoeGruher (~JoeGruher@134.134.137.75) has joined #ceph
[5:06] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[5:10] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[5:10] * Vacum_ (~vovo@88.130.200.80) Quit (Ping timeout: 480 seconds)
[5:12] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[5:14] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit ()
[5:14] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[5:15] * JoeGruher (~JoeGruher@134.134.137.75) Quit (Remote host closed the connection)
[5:15] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:15] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) Quit (Ping timeout: 480 seconds)
[5:36] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) has joined #ceph
[5:37] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) Quit (Read error: Operation timed out)
[5:41] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[5:41] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[5:49] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[5:50] <sz0> pmatulis: unfortunately, yes
[5:52] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Read error: Operation timed out)
[5:53] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[5:54] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[5:58] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[6:00] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[6:00] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:01] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:08] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[6:12] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[6:16] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[6:18] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[6:20] * heinrikter (~quassel@0001c91a.user.oftc.net) has joined #ceph
[6:21] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[6:28] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:32] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[6:38] <sz0> removing the autogenerated ssh keys and letting ceph-deploy create a new pair fixed the issue I had
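For anyone hitting the same hang, the equivalent manual setup is a fresh passphrase-less key that the ceph-deploy user can use to reach every node (user and hostnames below are placeholders):

    ssh-keygen -t rsa -N ""          # new key pair with an empty passphrase
    ssh-copy-id cephuser@node1       # repeat for every cluster node
    ssh cephuser@node1 true          # must succeed without any prompt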
[6:39] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:52] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[6:54] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[6:54] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[6:56] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[7:02] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[7:02] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[7:05] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[7:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:18] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:23] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[7:24] * AfC (~andrew@2407:7800:400:1011:6e88:14ff:fe33:2a9c) has joined #ceph
[7:28] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[7:29] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[7:37] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[7:42] * AfC (~andrew@2407:7800:400:1011:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[7:43] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[7:43] * AfC (~andrew@nat-gw1.syd4.anchor.net.au) has joined #ceph
[7:49] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[7:51] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[7:57] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[7:57] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[8:05] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:06] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[8:07] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[8:08] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[8:11] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[8:13] * BillK (~BillK-OFT@58-7-77-11.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[8:14] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[8:15] * BillK (~BillK-OFT@124-148-97-51.dyn.iinet.net.au) has joined #ceph
[8:15] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:16] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:17] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:21] * thomnico (~thomnico@193.15.182.113) has joined #ceph
[8:22] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[8:27] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit ()
[8:30] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[8:31] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[8:34] * kiwigera_ (~kiwigerai@208.72.139.54) has joined #ceph
[8:34] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Read error: Connection reset by peer)
[8:35] * AfC (~andrew@nat-gw1.syd4.anchor.net.au) Quit (Read error: No route to host)
[8:36] * Vacum_ (~vovo@i59F793BB.versanet.de) has joined #ceph
[8:36] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[8:40] * thomnico (~thomnico@193.15.182.113) Quit (Quit: Ex-Chat)
[8:41] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) has joined #ceph
[8:43] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[8:43] * Vacum (~vovo@88.130.200.211) Quit (Ping timeout: 480 seconds)
[8:43] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[8:44] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (Quit: Leaving.)
[8:45] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) Quit ()
[8:51] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[8:54] * toabctl (~toabctl@toabctl.de) Quit (Quit: WeeChat 0.3.7)
[8:55] * toabctl (~toabctl@toabctl.de) has joined #ceph
[8:57] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[8:57] * ChanServ sets mode +v andreask
[9:00] * thb (~me@2a02:2028:223:f80:6267:20ff:fec9:4e40) has joined #ceph
[9:08] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[9:13] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Quit: Pull the pin and count to what?)
[9:15] * sz0 (~user@208.72.139.54) Quit (Quit: ERC Version 5.3 (IRC client for Emacs))
[9:17] * KaZeR (~KaZeR@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:17] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[9:25] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[9:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:26] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[9:28] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[9:28] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:28] * ChanServ sets mode +v andreask
[9:33] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:36] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:37] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[9:40] * heinrikter (~quassel@0001c91a.user.oftc.net) Quit (Read error: Connection reset by peer)
[9:40] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[9:42] * doubleg (~doubleg@69.167.130.11) Quit (Remote host closed the connection)
[9:42] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[9:42] * analbeard (~shw@support.memset.com) has joined #ceph
[9:45] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[9:45] * TMM (~hp@c97185.upc-c.chello.nl) Quit (Quit: Ex-Chat)
[9:47] * garphy`aw is now known as garphy
[9:49] * ksingh (~Adium@2001:708:10:10:68f0:f64:40e:b439) has joined #ceph
[9:51] <ksingh> how do I create a pool backed by 3 specific (user-defined) OSDs?
[9:52] * oms101 (~oms101@nat.nue.novell.com) has joined #ceph
[9:52] * n1md4 (~nimda@anion.cinosure.com) has joined #ceph
[9:55] <n1md4> hi. I've removed the RBDs and pools, but ceph -s still reports the space as used. I have 8T in total and actually use around 23G, but Ceph reports 85G of usage. How can I reclaim the used space?
[10:01] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[10:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:14] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:16] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:17] <Fruit> n1md4: couldn't that be due to replication?
[10:20] <n1md4> Fruit: Not sure, I don't think so, given how I watched it grow .. how could I tell?
[10:23] <Fruit> not sure, I'm no expert :)
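A few commands that help answer this kind of question, since raw usage counts every replica while per-pool numbers do not ("rbd" stands in for whichever pool is in use):

    ceph df                       # global raw usage vs. per-pool logical usage
    rados df                      # objects and KB per pool
    ceph osd pool get rbd size    # replication factor of the pool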
[10:26] * Zethrok (~martin@95.154.26.34) has joined #ceph
[10:27] * godog (~filo@2001:41c8:1:537f::caca) has joined #ceph
[10:27] <Zethrok> Hi, just a general question. When creating pools for radosgw, are there any good ways to determine the pg count, or should I stick with the defaults? (this is on dumpling)
[10:28] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[10:31] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[10:32] * BillK (~BillK-OFT@124-148-97-51.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[10:33] * BillK (~BillK-OFT@106-69-41-46.dyn.iinet.net.au) has joined #ceph
[10:35] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[10:36] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:39] * TheBittern (~thebitter@195.10.250.233) Quit (Ping timeout: 480 seconds)
[10:47] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[10:50] * LeaChim (~LeaChim@host86-162-2-97.range86-162.btcentralplus.com) has joined #ceph
[10:52] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[10:52] * shang (~ShangWu@175.41.48.77) has joined #ceph
[10:54] * Vacum_ is now known as Vacum
[10:56] <Vacum> Hi. We are having a major issue with our ceph cluster. our 5 mons are flapping/dying. 4 of them keep running, but the mon_status (via admin socket) flap between "electing", "probing", "peon"/"leader"
[10:56] <Vacum> The rank-0 mon node has frequent bursts of way over 10000 lines per second in its log. always the same message:
[10:56] <Vacum> 2014-04-02 10:52:31.700940 7fbf3e1e3700 1 mon.csdeveubs-u01mon01@0(leader).paxos(paxos active c 695773..696418) is_readable now=2014-04-02 10:52:31.700940 lease_expire=2014-04-02 10:52:23.623790 has v0 lc 696418
[10:57] <Zethrok> Vacum: I was told that line doesn't matter (although I wonder why it is there in that case..)
[10:57] <Vacum> OSDs can't connect, they timeout during startup. ceph -s hangs for seconds, then returns a definitely wrong status about the OSDs
[10:58] <Vacum> clocks on all mons are in sync (being synced from the same internal NTP servers)
[10:58] <Vacum> Zethrok: interesting. can one turn that message off? :)
[10:58] <Zethrok> upgraded the cluster or did it just happen?
[10:58] <Zethrok> I'm not sure - I didn't look more into it at the time, but it is rather annoying
[10:58] <Vacum> we had a major havoc yesterday with the cluster
[10:59] * mlausch (~mlausch@2001:8d8:1fe:7:9195:beba:bebf:bd2c) has joined #ceph
[10:59] <Vacum> we removed a bunch of OSDs on some machines and transplanted them to different machines. during that operation one network cable came loose on one of the OSD machines :)
[10:59] * TheBittern (~thebitter@195.10.250.233) has joined #ceph
[11:00] <Vacum> the mons started logging like wild, until each of them stopped working, because the system drives became >90% full. :)
[11:00] <Zethrok> what version?
[11:01] * yanzheng (~zhyan@134.134.137.75) Quit (Quit: Leaving)
[11:01] <Vacum> ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
[11:01] <Zethrok> hmmn, still on dumpling here, but we had the same issue on an older cuttlefish cluster.
[11:02] <Zethrok> Ultimately I had to compact the leveldb datastore before I could even start the monitors
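For reference, that compaction can be requested without touching leveldb by hand; a sketch, assuming a monitor ID of "a" and a release that has these options:

    ceph tell mon.a compact            # ask a running monitor to compact its store
    # or, in ceph.conf, compact the store every time the mon starts:
    [mon]
        mon compact on start = true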
[11:02] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:02] <Fruit> Zethrok: there's a formula on http://ceph.com/docs/master/rados/configuration/pool-pg-config-ref/
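As a rough worked example of the formula on that page: the docs suggest on the order of (OSDs × 100) / replicas placement groups, rounded up to the next power of two. For 100 OSDs and 3 replicas that is (100 × 100) / 3 ≈ 3333, so roughly 4096 PGs spread across the pools.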
[11:02] <Vacum> the rank-0 mon is also frequently at 100% (or even more) CPU usage
[11:02] * oblu (~o@62.109.134.112) has joined #ceph
[11:03] <Zethrok> Vacum: when you say system drives, does that mean the drives where your mons are stored? Can you see if it is the leveldb filling your disk or is it something else?
[11:03] <Zethrok> Fruit: Thanks :)
[11:03] <Vacum> system drives: yes, the drives of the mon nodes
[11:04] <Vacum> Zethrok: du -h /var/lib/ceph/mon -> 1018M /var/lib/ceph/mon
[11:04] <Vacum> 1018M /var/lib/ceph/mon/ceph-csdeveubs-u01mon01/store.db
[11:04] <Vacum> so, the level dbs are roughly 1G
[11:05] <Vacum> logs are currently roughly 2.2G
[11:05] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[11:05] <Zethrok> Vacum: sounds pretty normal, so prob. not it (also I think the leveldb issues were finally solved in dumpling)
[11:06] <Vacum> logs are increasing at roughly 1MB/s though :)
[11:06] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[11:06] <Vacum> or a bit less
[11:06] <Zethrok> Ohh - so it is mostly logs taking up space?
[11:06] <Vacum> yes. we already deleted old logs yesterday, so currently there is (should be) enough disk space
[11:07] <Vacum> yes, still 14G available, so that should not be the problem at the moment
[11:07] <Zethrok> any hints in the ceph-mon-x.logs?
[11:08] <Vacum> Zethrok: let me tail -f |grep -v is_readable :)
[11:08] * allsystemsarego (~allsystem@188.27.166.29) has joined #ceph
[11:09] <Zethrok> no issues with the disk being saturated or something? And then marked as out because it can't keep up? Or just the system overall?
[11:09] * allsystemsarego (~allsystem@188.27.166.29) Quit ()
[11:09] * allsystemsarego (~allsystem@188.27.166.29) has joined #ceph
[11:09] <Vacum> Zethrok: marked as out : this is about OSDs only?
[11:10] <Vacum> 2014-04-02 11:09:59.060339 7fbf3e1e3700 0 log [INF] : mdsmap e1: 0/0/1 up
[11:10] <Vacum> that is strange?
[11:10] <Vacum> 2014-04-02 11:10:11.870642 7fbf3e1e3700 0 log [INF] : osdmap e1418: 180 osds: 90 up, 122 in
[11:10] <Vacum> this is plain wrong
[11:10] <Vacum> 2014-04-02 11:10:11.870972 7fbf3e1e3700 0 log [INF] : monmap e5: 5 mons at {csdeveubap-u01mon01=10.88.32.6:6789/0,csdeveubap-u01mon02=10.88.32.7:6789/0,csdeveubs-u01mon01=10.88.7.11:6789/0,csdeveubs-u01mon02=10.88.7.12:6789/0,csdeveubs-u01mon03=10.88.7.13:6789/0}
[11:11] <Vacum> 2014-04-02 11:10:33.641326 7fbf3e1e3700 0 log [INF] : mon.csdeveubs-u01mon01 calling new monitor election
[11:11] <Zethrok> just curious if high mon load somehow forces an election if the current winner can't keep up. I'm just guessing here.
[11:11] <Vacum> 2014-04-02 11:10:37.290713 7fbf3e9e4700 0 mon.csdeveubs-u01mon01@0(electing).data_health(11476) update_stats avail 68% total 20350640 used 5476548 avail 13840312
[11:12] <Vacum> that last line. the 68% is totally unclear to me. its NOT the available disk space, although the latter numbers are :)
[11:13] <Zethrok> I think it shows avail. space on the disk the mon reside on - but I might be remembering wrong.
[11:13] <Vacum> there is a new election roughly every 20 seconds. each time that mon node above wins
[11:13] <Vacum> then calls a new election itself
[11:13] <Vacum> yes, that matches the disk space of the mon
[11:13] <Vacum> btw, the mon nodes are dedicated, not shared with OSDs
[11:14] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[11:14] * gNetLabs (~gnetlabs@188.84.22.23) has joined #ceph
[11:14] <Zethrok> yea, what happens to a mon when they win? Does system usage or disk usage spike?
[11:14] <Vacum> yes, goes to 100% CPU
[11:15] <Vacum> disk usage is fine, they are fast enough. no noticeable i/o wait
[11:16] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[11:17] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Quit: Leaving.)
[11:19] * zidarsk8 (~zidar@194.249.247.164) has joined #ceph
[11:19] * zack_dolby (~textual@219.117.239.161.static.zoot.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:20] * zidarsk8 (~zidar@194.249.247.164) has left #ceph
[11:23] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[11:25] <loicd> leseb: yeah !
[11:30] <Zethrok> Vacum: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-February/007829.html - sounds like they have the same issue as you
[11:31] <Vacum> Zethrok: thanks, reading :)
[11:32] * zidarsk8 (~zidar@194.249.247.164) has joined #ceph
[11:33] <zidarsk8> sorry for a silly question, but when I create and attach an rbd to a cluster, ceph -s doesn't show that as used disk space ...
[11:34] <loicd> leseb: when could we announce brag.ceph.com (alpha ? beta ? )
[11:34] <andreask> zidarsk8: fill it with data ... it's thin-provisioned
[11:34] <zidarsk8> and is there any way of seeing all the clients(rbd) that are attached to a cluster?
[11:35] <andreask> sure ... netstat ;-)
[11:35] <zidarsk8> hehe :P
[11:36] <zidarsk8> thanks
[11:41] <Vacum> If a mon drops out of the monmap, should the monmap epoch increase?
[11:43] <Vacum> or is that the "election epoch", that should increase? and the monmap epoch only increases with a configuration change?
[11:44] <glambert> is there a way to backup radosgw data/
[11:45] <glambert> s/\//?/
[11:45] <Zethrok> Vacum: Pretty sure that epoch only increases if you make conf. changes and the election epoch ++ whenever a mon drops in/out
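Two commands that show both numbers, for anyone wanting to check:

    ceph mon dump          # prints the monmap epoch
    ceph quorum_status     # JSON output includes "election_epoch" and the current quorum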
[11:47] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[11:49] <glambert> I'm just concerned that if someone somehow gains access to our systems and deletes everything from the radosgw storage we have no way of at least restoring old data
[11:49] <Vacum> I deliberately stopped the mon node that won each election. now a different machine is constantly being elected - and there the ceph-mon process eats up CPU to 100%
[11:50] <Vacum> the non-leader mon nodes do not have 100% cpu spikes
[11:53] <Vacum> Regarding pgmap and osdmap: Are they stored on each OSD host as well? on disk? so what if we kill all mons, remove the installation there, re-deploy them and start over (only with the mons!). Will the pgmap and osdmap then be recovered from the OSD hosts? Or will we effectively destroy all data in our cluster? :)
[11:54] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) has joined #ceph
[12:00] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[12:03] * n1md4 (~nimda@anion.cinosure.com) Quit (Quit: leaving)
[12:03] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:04] * zack_dolby (~textual@e0109-49-132-42-109.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:07] * leseb (~leseb@185.21.174.206) has joined #ceph
[12:15] <Zethrok> Vacum: I think they're generated/stored on all mon/osd, but I'm not sure. I don't think you can remove all mons without zapping your entire cluster. What I've done a few times is to stop all mons, identify the last 'good' one, extract a monmap, alter it to contain just 1 monitor and inject it into the system. I think there is docs about it.
[12:17] <Zethrok> Vacum: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors-from-an-unhealthy-cluster -- I've had to use that a few times - only tried with argonaut/bobtail releases though.
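The procedure in that link boils down to roughly the following sketch (monitor IDs and the monmap path are placeholders, and all monitors should be stopped first):

    ceph-mon -i good-mon --extract-monmap /tmp/monmap      # on the surviving monitor
    monmaptool /tmp/monmap --print
    monmaptool /tmp/monmap --rm bad-mon-1 --rm bad-mon-2   # drop the unhealthy monitors
    ceph-mon -i good-mon --inject-monmap /tmp/monmap
    # then start only the surviving monitor and re-add the others later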
[12:19] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:28] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[12:33] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:36] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:42] * oms101 (~oms101@nat.nue.novell.com) Quit (Ping timeout: 480 seconds)
[12:45] * fdmanana (~fdmanana@bl9-168-27.dsl.telepac.pt) has joined #ceph
[12:47] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:49] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit ()
[12:49] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:53] <ksingh> guys, how do I create an SSD-only pool? (I have 4 SSDs and I want to create an SSD pool for them)
[12:54] <ksingh> I followed the Ceph documentation but objects are not getting stored on my SSD pool, they are going to the other OSDs
[12:54] <ksingh> please advise
[12:54] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[12:58] * giorgis (~oftc-webi@147.52.50.135) has joined #ceph
[12:58] <giorgis> Hi all!! Is it OK if I install OSDs on a device (/dev/vdc) provided by Cinder?
[13:00] * zidarsk8 (~zidar@194.249.247.164) Quit (Read error: Operation timed out)
[13:02] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Ping timeout: 480 seconds)
[13:05] * jtangwk (~Adium@gateway.tchpc.tcd.ie) has joined #ceph
[13:11] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Ping timeout: 480 seconds)
[13:11] <ksingh> giorgis: I have done my first Ceph installation on OpenStack-provided VMs, but that was only a test
[13:11] <ksingh> so as long as you are doing testing you can use vdb volumes from Cinder, but it's a BIG NO for production
[13:12] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[13:12] * ChanServ sets mode +v andreask
[13:13] <giorgis> ksingh: yes, this is for testing, but I was thinking of implementing it in production as well! I thought that Ceph could be installed and configured in VMs without major problems
[13:15] <ksingh> how are you going to use your Ceph cluster that is hosted on VMs? I mean, who will use the cluster's storage, and how?
[13:23] <ifur> giorgis: that is indeed a bit backwards, it would be more sensible with bare metal ceph, and the virtualize something like NFS inside CMs
[13:23] <ifur> *VMs
[13:31] * thomnico (~thomnico@2a01:e35:8b41:120:f826:d8b0:401b:56d) has joined #ceph
[13:36] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[13:37] <giorgis> ksingh + ifur : My problem is that I am doing an all-in-one installation......I am running specific software from some of my VMs that needs access to buckets...therefore I am using ceph
[13:37] * oms101 (~oms101@charybdis-ext.suse.de) has joined #ceph
[13:38] <giorgis> as I said initially this is for testing-demonstration
[13:38] <giorgis> if I had more bare-metal machines
[13:39] <giorgis> I would definitely go the other way... but not for now, unfortunately
[13:40] <giorgis> furthermore, if this works (at least for my needs) then I can lower the budget for bare-metal machines significantly... maybe with a sacrifice in performance, but I know that's the trade-off
[13:41] <ksingh> anyone know how to create an SSD-only pool? (I have 4 SSDs and I want to create an SSD pool for them)
[13:44] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[13:47] * yeled (~yeled@spodder.com) Quit (Quit: meh..)
[13:51] * yeled (~yeled@spodder.com) has joined #ceph
[13:54] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[13:58] * julian (~julianwa@125.69.104.83) Quit (Ping timeout: 480 seconds)
[14:03] <Vacum> Zethrok: thanks. we stumbled upon that doc as well. We'll definitely give it a try. Thanks!
[14:06] <ifur> giorgis: try and dedicate one physical disk to each OSD at least
[14:07] <ifur> your IOPS is likely going to be dreadful
[14:16] * garphy is now known as garphy`aw
[14:23] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:24] * thomnico (~thomnico@2a01:e35:8b41:120:f826:d8b0:401b:56d) Quit (Ping timeout: 480 seconds)
[14:30] * garphy`aw is now known as garphy
[14:30] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[14:32] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) has joined #ceph
[14:35] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:36] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:38] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[14:45] * heinrikter (~quassel@109.201.154.209) has joined #ceph
[14:46] * heinrikter (~quassel@0001c91a.user.oftc.net) Quit (Read error: Connection reset by peer)
[14:48] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) has joined #ceph
[14:52] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:54] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:54] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:57] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Remote host closed the connection)
[14:57] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:59] * BillK (~BillK-OFT@106-69-41-46.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:02] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[15:02] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:05] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) has joined #ceph
[15:08] * oms101 (~oms101@charybdis-ext.suse.de) Quit (Quit: Leaving)
[15:08] * oms101 (~oms101@charybdis-ext.suse.de) has joined #ceph
[15:08] * oms101 (~oms101@charybdis-ext.suse.de) Quit (Remote host closed the connection)
[15:08] * oms101 (~oms101@charybdis-ext.suse.de) has joined #ceph
[15:09] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[15:11] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:17] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[15:17] * leseb (~leseb@185.21.174.206) has joined #ceph
[15:21] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:22] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[15:33] <giorgis> ifur: thx for the suggestion!!! I 'll try to do that!!!
[15:34] <saturnine> ls
[15:34] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[15:35] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[15:35] * ChanServ sets mode +v andreask
[15:41] * `jpg (~josephgla@ppp121-44-202-175.lns20.syd7.internode.on.net) has joined #ceph
[15:43] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[15:44] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:49] * sjm1 (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[15:50] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:50] * sjm1 (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Remote host closed the connection)
[15:52] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:06] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[16:15] * scalability-junk (uid6422@id-6422.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[16:15] * thanhtran (~thanhtran@113.172.154.106) has joined #ceph
[16:21] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[16:22] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[16:23] * hybrid512 (~walid@195.200.167.70) Quit ()
[16:23] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[16:24] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:25] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[16:27] * giorgis (~oftc-webi@147.52.50.135) Quit (Quit: Page closed)
[16:28] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) has joined #ceph
[16:28] * rahatm1 (~rahatm1@CPE602ad089ce64-CM602ad089ce61.cpe.net.cable.rogers.com) Quit ()
[16:28] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[16:34] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Quit: ircuser-1)
[16:34] * Guest177 (~jeremy@ip23.67-202-99.static.steadfastdns.net) Quit (Remote host closed the connection)
[16:34] * tchmnkyz (~jeremy@ip23.67-202-99.static.steadfastdns.net) has joined #ceph
[16:35] * tchmnkyz is now known as Guest5227
[16:38] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) Quit (Quit: Ex-Chat)
[16:38] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[16:41] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) has joined #ceph
[16:44] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[16:46] * xmltok (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[16:50] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:04] <ksingh> how do I stop a specific OSD service from the monitor node on CentOS?
[17:04] <ksingh> I don't want to log in to the OSD node for a service stop/start
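If the cluster was deployed with the sysvinit scripts and ceph.conf on the monitor node lists the OSD hosts, the old docs describe a -a flag that operates on remote nodes over ssh; a sketch, with osd.2 as a placeholder:

    sudo /etc/init.d/ceph -a stop osd.2
    sudo /etc/init.d/ceph -a start osd.2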
[17:06] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:08] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[17:08] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[17:11] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[17:14] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[17:15] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[17:17] <winston-d> if osd_journal_size is changed and the OSD daemon is then restarted, the new journal size should take effect after the restart, right?
[17:20] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) has joined #ceph
[17:20] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[17:22] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[17:23] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:24] * ircolle (~Adium@2601:1:8380:2d9:4803:934a:7a39:e56) has joined #ceph
[17:29] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[17:31] * bitblt (~don@128-107-239-234.cisco.com) has joined #ceph
[17:31] <andreask> winston-d: hmm ... do you see the journal file being resized? I'd expect you first need to do a flush-journal followed by a mkjournal ... while the osd is down
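A sketch of that sequence, assuming osd.2 and the sysvinit scripts; the new size goes into "osd journal size" in ceph.conf before the mkjournal step:

    sudo service ceph stop osd.2
    ceph-osd -i 2 --flush-journal     # drain whatever is still in the old journal
    # edit ceph.conf (osd journal size = <MB>) and recreate the journal file/partition if needed
    ceph-osd -i 2 --mkjournal
    sudo service ceph start osd.2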
[17:46] * rturk-away is now known as rturk
[17:48] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:48] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:50] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[17:52] * carif (~mcarifio@ip-37-25.sn1.eutelia.it) Quit (Ping timeout: 480 seconds)
[17:56] * thanhtran (~thanhtran@113.172.154.106) Quit (Read error: Connection reset by peer)
[17:58] * thanhtran (~thanhtran@113.172.201.10) has joined #ceph
[18:00] * sprachgenerator (~sprachgen@130.202.135.217) has joined #ceph
[18:02] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[18:04] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:04] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[18:07] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:10] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (Quit: Terminated with extreme prejudice - dircproxy 1.0.5)
[18:12] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[18:13] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[18:14] * rturk is now known as rturk-away
[18:18] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:18] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[18:18] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[18:18] * rturk-away is now known as rturk
[18:18] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[18:19] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:22] * JoeGruher (~JoeGruher@134.134.139.72) has joined #ceph
[18:28] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:28] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[18:33] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[18:33] * rturk is now known as rturk-away
[18:33] * Cube (~Cube@66-87-131-42.pools.spcsdns.net) has joined #ceph
[18:35] * sprachgenerator (~sprachgen@130.202.135.217) Quit (Ping timeout: 480 seconds)
[18:36] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[18:38] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[18:38] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:38] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[18:39] * sprachgenerator (~sprachgen@130.202.135.217) has joined #ceph
[18:42] * zack_dolby (~textual@p852cae.tokynt01.ap.so-net.ne.jp) has joined #ceph
[18:43] * garphy is now known as garphy`aw
[18:44] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[18:47] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[18:52] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[18:54] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:00] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[19:00] <Gamekiller77> hello people
[19:01] <Gamekiller77> I thought at one time I saw a way to change the way Ceph acks a write. I was not sure if it defaults to waiting for the 3rd ack before sending the client the final ack
[19:01] <Gamekiller77> if this is true, what setting changes this to ack on the first write?
[19:02] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:02] * leseb (~leseb@185.21.174.206) has joined #ceph
[19:05] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) has joined #ceph
[19:05] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:06] * sprachgenerator (~sprachgen@130.202.135.217) Quit (Quit: sprachgenerator)
[19:07] <Fruit> Gamekiller77: well there's rados_wait_for_complete() vs rados_wait_for_safe()
[19:07] <Gamekiller77> i think it more this
[19:07] <Gamekiller77> osd pool default min size
[19:07] <Gamekiller77> from the doc
[19:07] <Fruit> Gamekiller77: and then there's the min_size attribute for pools (http://ceph.com/docs/master/rados/operations/pools/)
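For reference, those pool attributes can be inspected and changed per pool ("rbd" is a placeholder). As far as I know, min_size mainly controls how many replicas must be up for a PG to keep serving I/O; replicated writes are normally acknowledged once all OSDs in the acting set have them:

    ceph osd pool get rbd size        # number of replicas
    ceph osd pool get rbd min_size
    ceph osd pool set rbd min_size 1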
[19:08] <Gamekiller77> i just did some bench mark test
[19:08] <Gamekiller77> read are super good
[19:08] <Gamekiller77> write are like really bad
[19:08] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) has joined #ceph
[19:08] <Fruit> writes are heavily affected by journal settings
[19:09] <Fruit> make sure the journal is a) not on the same physical disk as the osd and b) on something fast like an ssd
[19:09] <Gamekiller77> hmm
[19:09] <Gamekiller77> so yeah, I have them on SSD
[19:09] <Gamekiller77> but I did not follow the 5-OSDs-per-SSD guideline
[19:09] <Gamekiller77> this is a staging test platform
[19:10] <Gamekiller77> I have 17 OSDs sharing a single SSD today
[19:10] <Fruit> using iostat -kx 1 you can see where the bottleneck is
[19:10] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[19:10] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[19:11] <Gamekiller77> on the OSD node ?
[19:11] <Gamekiller77> sorry learning
[19:11] <Gamekiller77> btw thanks for the help
[19:12] <Fruit> yeah
[19:13] <Fruit> also be sure to have some parallelism in your benchmark
[19:13] <Gamekiller77> you community member
[19:13] <Gamekiller77> we're using the Phoronix Test Suite
[19:13] <Fruit> I've been using fio so far
[19:14] <Fruit> with different iodepths
[19:14] * oms101 (~oms101@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[19:15] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[19:15] <Fruit> in fact, the cartesian product of various iodepths (1-128), blocksizes (512-4M) and read/write mixtures
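A minimal fio invocation along those lines (all values are illustrative; the filename should point at a file or RBD-backed device that can safely be overwritten):

    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --numjobs=4 --size=1G --runtime=60 --time_based \
        --filename=/mnt/rbdtest/fio.dat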
[19:16] * thanhtran (~thanhtran@113.172.201.10) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[19:17] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[19:29] <Gamekiller77> yup, this app does it
[19:29] <Gamekiller77> so the SSDs are at very low utilization
[19:29] <Gamekiller77> 13% usage on my SSD
[19:29] <Gamekiller77> the IO looks low to me
[19:34] <ponyofdeath> hi, as soon as a backfill/recovery starts my cluster grinds to a halt. how can I change the recovery priority with the cluster online?
[19:35] <ponyofdeath> seems the recovery settings are on the default values
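The usual knobs for this can be injected into running OSDs; a sketch that throttles recovery well below the defaults (the values are illustrative, not a recommendation):

    ceph tell osd.* injectargs '--osd-max-backfills 1'
    ceph tell osd.* injectargs '--osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'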
[19:35] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[19:40] <Gamekiller77> how full is the cluster ?
[19:40] <Gamekiller77> just wondering
[19:41] <ponyofdeath> 16024 GB used, 69614 GB / 85689 GB avail
[19:43] <Gamekiller77> what does ceph osd tree look like
[19:43] <Gamekiller77> all osd in
[19:44] <ponyofdeath> yup all in
[19:44] <ponyofdeath> http://paste.ubuntu.com/7195250
[19:44] <ksingh> any one know how to create a ssd-only pool ( i have 4 ssds and i want to create a ssd pool for them )
[19:53] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) has joined #ceph
[19:57] * allsystemsarego (~allsystem@188.27.166.29) Quit (Quit: Leaving)
[19:57] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[20:02] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[20:05] * thomnico (~thomnico@2a01:e35:8b41:120:6093:7fbb:c2db:e90f) Quit (Quit: Ex-Chat)
[20:07] * vilobhmm (~vilobhmm@c-50-152-188-98.hsd1.ca.comcast.net) has joined #ceph
[20:07] * JoeGruher (~JoeGruher@134.134.139.72) Quit (Remote host closed the connection)
[20:11] * vilobhmm_ (~vilobhmm@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:13] * b0e (~aledermue@rgnb-5d865372.pool.mediaWays.net) has joined #ceph
[20:15] * vilobhmm (~vilobhmm@c-50-152-188-98.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:21] * vilobhmm_ (~vilobhmm@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[20:22] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[20:29] * gNetLabs (~gnetlabs@188.84.22.23) Quit (Read error: Connection reset by peer)
[20:33] * nrs_ (~nrs@108-61-57-43ch.openskytelcom.net) has joined #ceph
[20:33] <nrs_> quick question: would it be fair to say that all of the cluster communications are TCP? No UDP comms?
[20:34] * b0e (~aledermue@rgnb-5d865372.pool.mediaWays.net) Quit (Quit: Leaving.)
[20:34] * The_Bishop_ (~bishop@2001:470:50b6:0:38c1:f719:c265:d189) Quit (Ping timeout: 480 seconds)
[20:34] * ksingh (~Adium@2001:708:10:10:68f0:f64:40e:b439) Quit (Ping timeout: 480 seconds)
[20:35] <joshd> nrs_: yes
[20:35] <nrs_> i'm somewhat confused about the language around "daemon placement"
[20:36] <nrs_> i'm sizing a system right now
[20:36] <nrs_> IIRC, you need MDS and OSD
[20:36] <nrs_> but then i see MON servers, etc
[20:37] <nrs_> it's a little confusing for someone new to Ceph
[20:37] * stepheno (~oskars@70.96.128.243) Quit (Quit: WeeChat 0.4.2)
[20:39] <joshd> nrs_: generally you have an osd per disk (possibly with the osd journal on a ssd partition), 3 monitors on separate nodes, and MDS is only needed for the filesystem, not rbd or object storage
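A minimal ceph.conf sketch of that layout (hostnames and journal partitions are placeholders, and this assumes the manual, non-ceph-deploy style of configuration):

    [osd.0]
        host = storage01
        osd journal = /dev/disk/by-partlabel/journal-osd0   ; journal on an SSD partition
    [osd.1]
        host = storage01
        osd journal = /dev/disk/by-partlabel/journal-osd1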
[20:40] <nrs_> can you share a machine with OSD and an MON
[20:40] <joshd> nrs_: http://ceph.com/docs/master/start/hardware-recommendations/ should help
[20:40] <joshd> yes
[20:43] <nrs_> ok
[20:43] <wrencsok> ksingh: read up on crush maps to create your ssd pools.
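A sketch of what that looks like in practice, assuming the four SSD OSDs are osd.20-osd.23 and picking ruleset id 3 (all names and ids are placeholders):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt, add a root holding only the SSD OSDs and a rule that uses it:
    #   root ssd {
    #       id -10
    #       alg straw
    #       hash 0
    #       item osd.20 weight 1.000
    #       item osd.21 weight 1.000
    #       item osd.22 weight 1.000
    #       item osd.23 weight 1.000
    #   }
    #   rule ssd {
    #       ruleset 3
    #       type replicated
    #       min_size 1
    #       max_size 10
    #       step take ssd
    #       step chooseleaf firstn 0 type osd
    #       step emit
    #   }
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new
    ceph osd pool create ssdpool 128 128
    ceph osd pool set ssdpool crush_ruleset 3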
[20:44] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:51] * ibrahimm (~horasanli@88.234.110.229) has joined #ceph
[20:51] * ibrahimm (~horasanli@88.234.110.229) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-04-02 18:51:52))
[20:53] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:54] <wrencsok> nrs: unless you have some big machines that can handle the load, I would not mix osd and mon daemons. keep them on separate boxes/VMs. know how all the components of your hardware perform; try to do a bandwidth/traffic budget with respect to your drives, controller, system bus, ssd journals, nic, etc. Ceph.com has guidelines, but it's easy to choose the wrong hardware or set things up in a manner that you create limitations and bottlenecks wit
[20:57] <darkfader> ++ for splitting out mons
[20:58] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[21:00] <Fruit> wrencsok: your message was cut short at "and bottlenecks wit" ;)
[21:00] * JoeGruher (~JoeGruher@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[21:01] * kiwigera_ (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[21:03] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[21:03] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[21:03] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[21:04] <nrs_> so that was my thinking initially
[21:04] <nrs_> mon on separate machines
[21:04] <nrs_> osds on others
[21:04] <nrs_> but i didn't know about one OSD per disk
[21:05] <nrs_> i'm assuming that there is one OSD daemon per disk and that translates to one listening TCP port, right?
[21:08] * giorgis (~oftc-webi@ppp046176079231.access.hol.gr) has joined #ceph
[21:08] <giorgis> HI! A very strange issue.... I am trying to install the radosgw-agent on CentOS and I get "No package radosgw-agent available"
[21:08] <giorgis> any ideas??
[21:10] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[21:12] * dmsimard (~Adium@ap03.wireless.co.mtl.iweb.com) Quit (Ping timeout: 480 seconds)
[21:13] * dmsimard (~Adium@ap01.wireless.co.mtl.iweb.com) has joined #ceph
[21:16] * dmsimard1 (~Adium@108.163.152.66) has joined #ceph
[21:22] * dmsimard (~Adium@ap01.wireless.co.mtl.iweb.com) Quit (Ping timeout: 480 seconds)
[21:25] <JoeGruher> giorgis try ceph-radosgw and radosgw-agent
[21:25] <JoeGruher> package names are different under centos in some cases
[21:26] <giorgis> JoeGruher: ceph-radosgw is available
[21:26] <giorgis> the problem is with radosgw-agent
[21:27] <giorgis> If I follow this link http://ceph.com/rpm-emperor/el6/noarch I can see the package
[21:27] <giorgis> but for some reason it is ignored when I try to install it...here is the full output
[21:27] <giorgis> [root@ceph1 yum.repos.d]# yum install radosgw-agent
                  Loaded plugins: fastestmirror, priorities
                  Loading mirror speeds from cached hostfile
                   * base: ftp.riken.jp
                   * epel: ftp.jaist.ac.jp
                   * extras: ftp.riken.jp
                   * updates: ftp.riken.jp
                  6 packages excluded due to repository priority protections
                  Setting up Install Process
                  No package radosgw-agent available.
                  Error: Nothing to do
                  [root@ceph1 yum.repos.d]#
[21:28] <JoeGruher> giorgis: hmmm, i have in my notes that's what i installed... afraid i can't tell you beyond that, i'm not really an expert. maybe a problem with how repos are set up, but i'm just guessing.
[21:28] <giorgis> I guess it's because some packages are excluded but I cannot understand why...
[21:29] <giorgis> exactly... the problem is because of the repo setup... but I have done that in the past without any problem, and I'm following the same steps now... with this very weird problem
[21:29] <giorgis> occurring when it shouldn't
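That "6 packages excluded due to repository priority protections" line is the usual suspect; a few things worth trying (the priority value is just an example):

    yum --showduplicates list radosgw-agent
    yum --disableplugin=priorities install radosgw-agent
    # or give the Ceph repo a higher priority (lower number) in /etc/yum.repos.d/ceph.repo:
    #   priority=1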
[21:32] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[21:33] * gNetLabs (~gnetlabs@188.84.22.23) has joined #ceph
[21:38] * BillK (~BillK-OFT@106-69-94-85.dyn.iinet.net.au) has joined #ceph
[21:44] <mjevans> nrs_: there's really no point in having more than one OSD per physical block device.
[21:45] <nrs_> but the guideline of 1 OSD daemon per spindle/SSD is a good one?
[21:48] <mjevans> That is exactly what I just said using other words.
[21:48] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[21:49] <Fruit> how about multiple disks for one osd? in, say, raid5
[21:49] <mjevans> I should let you know that by default ceph auto-configures with a model where all OSDs are equal fault-tolerance-wise and are weighted by the size of the OSD.
[21:49] <mjevans> Fruit: isn't there some kind of experimental thing to do raid5 support within ceph? Also, ceph effectively /is/ your redundancy layer, so running raid beneath it makes little sense.
[21:50] <Fruit> mjevans: yeah, erasure codes.
[21:51] <Fruit> mjevans: anyway, raid5 means that I can a) use hardware hotswap from my raid controller and b) don't have to use size=3 if I think just copies is a bit iffy
[21:52] <mjevans> Fruit: how does your raid setup protect against the 'write hole' issue?
[21:53] <Fruit> battery backed cache.
[21:53] <mjevans> What if a disk fails mid-operation during that window?
[21:53] <Fruit> then I replace it and the missing block gets written from the cache
[21:54] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[21:54] * leseb (~leseb@185.21.174.206) has joined #ceph
[21:56] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[21:58] <mjevans> Fruit: I've thought about it, and I suppose if you wanted to lower the number of replication copies to 2 you could engineer a custom crush map such that it described your failure domain in other aspects; such as network or power distribution.
[21:59] <mjevans> However, aside from that I don't see how it's more cost-effective, performance-wise, to go with that method over just using ceph and cheaper hardware.
[21:59] <Fruit> well I'd still be running multiple osds per host
[21:59] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[21:59] * ChanServ sets mode +v andreask
[22:00] <mjevans> Fruit: Right, which is why ceph lets you create custom crush sets to better describe your logical failure domain combinations.
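A sketch of describing such a failure domain directly in the CRUSH hierarchy (bucket, host and pool names are placeholders):

    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move rack2 root=default
    ceph osd crush move node1 rack=rack1
    ceph osd crush move node2 rack=rack2
    ceph osd crush rule create-simple by-rack default rack   # replicas land in different racks
    ceph osd pool set somepool crush_ruleset <ruleset id>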
[22:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[22:00] <Fruit> the crush map is no different from using single disks
[22:01] <Fruit> it's just easier to change disks and much less likely to fail
[22:01] * fghaas (~florian@172.56.38.143) has joined #ceph
[22:03] * BillK (~BillK-OFT@106-69-94-85.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[22:03] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[22:05] * fghaas (~florian@172.56.38.143) Quit ()
[22:06] * nrs_ (~nrs@108-61-57-43ch.openskytelcom.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:09] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[22:11] * nrs_ (~nrs@66.55.152.53) has joined #ceph
[22:14] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[22:18] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has left #ceph
[22:20] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:22] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[22:22] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:23] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[22:24] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[22:28] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) Quit (Quit: Leaving...)
[22:29] * Pedras1 (~Adium@216.207.42.132) has joined #ceph
[22:30] * The_Bishop_ (~bishop@2001:470:50b6:0:e9ff:1085:40ee:5837) has joined #ceph
[22:35] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has left #ceph
[22:46] * giorgis (~oftc-webi@ppp046176079231.access.hol.gr) Quit (Quit: Page closed)
[22:50] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:58] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[23:03] * ckranz (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[23:07] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:08] * nrs_ (~nrs@66.55.152.53) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:23] * dmsimard (~Adium@70.38.0.246) has joined #ceph
[23:25] * dmsimard1 (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[23:25] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:26] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:26] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:27] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[23:29] * haomaiwa_ (~haomaiwan@117.79.232.153) Quit (Ping timeout: 480 seconds)
[23:31] * The_Bishop_ (~bishop@2001:470:50b6:0:e9ff:1085:40ee:5837) Quit (Ping timeout: 480 seconds)
[23:35] * The_Bishop_ (~bishop@2001:470:50b6:0:e9ff:1085:40ee:5837) has joined #ceph
[23:48] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[23:50] * vilobhmm (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.