#ceph IRC Log


IRC Log for 2014-04-04

Timestamps are in GMT/BST.

[0:03] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[0:08] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:08] <athrift> hrmm, ceph-deploy still seems broken
[0:08] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[0:11] * schmee (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[0:12] <athrift> it does not seem to sudo when writing out the keys on the "gatherkeys" step, so if the user does not have write permissions to /etc/ceph/ this step fails.
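A workaround sometimes used for this kind of permission failure is to give the deploying user write access to /etc/ceph (or run the failing step under sudo); a rough sketch, with the monitor hostname as a placeholder:

    sudo mkdir -p /etc/ceph
    sudo chown $USER /etc/ceph        # let the deploy user write out the gathered keys
    ceph-deploy gatherkeys <mon-host>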
[0:16] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[0:17] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[0:18] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[0:19] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[0:20] * `jpg (~josephgla@ppp121-44-251-173.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[0:22] * mcms (~mcms@46.224.152.64) Quit (Read error: Connection reset by peer)
[0:22] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[0:22] * mcms (~mcms@46.224.152.64) has joined #ceph
[0:23] * giorgis (~oftc-webi@46-93-243.adsl.cyta.gr) Quit (Quit: Page closed)
[0:29] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[0:32] * mfournier (~marc@2001:4b98:dc2:41:216:3eff:fe6d:dc0b) has joined #ceph
[0:35] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:44] * xdeller (~xdeller@109.188.124.66) Quit (Ping timeout: 480 seconds)
[0:46] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:47] * dmsimard (~Adium@70.38.0.246) Quit (Quit: Leaving.)
[0:48] * mcms (~mcms@46.224.152.64) Quit (Ping timeout: 480 seconds)
[0:48] * BillK (~BillK-OFT@124-148-94-184.dyn.iinet.net.au) has joined #ceph
[0:54] * lofejndif (~lsqavnbok@8JQAAHP1Q.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:03] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) has joined #ceph
[1:13] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[1:18] * fghaas (~florian@205.158.164.101.ptr.us.xo.net) Quit (Quit: Leaving.)
[1:28] * LeaChim (~LeaChim@host86-162-1-71.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:35] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[1:36] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[1:44] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:46] * AfC (~andrew@101.119.15.94) has joined #ceph
[1:48] * sprachgenerator (~sprachgen@130.202.135.191) Quit (Quit: sprachgenerator)
[2:01] * Cube2 (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[2:03] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[2:04] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) has joined #ceph
[2:12] * bitblt (~don@128-107-239-234.cisco.com) Quit (Quit: Leaving)
[2:16] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Quit: Leaving)
[2:16] * AfC (~andrew@101.119.15.94) Quit (Quit: Leaving.)
[2:17] * julian (~julianwa@125.70.132.28) has joined #ceph
[2:23] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) has joined #ceph
[2:25] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[2:25] * mtanski (~mtanski@cpe-72-229-51-156.nyc.res.rr.com) Quit ()
[2:32] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:35] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[2:36] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[2:44] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:46] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) Quit (Remote host closed the connection)
[2:49] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[2:50] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[2:53] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:56] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[2:59] * Cube (~Cube@66-87-64-209.pools.spcsdns.net) has joined #ceph
[3:03] * zack_dol_ (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[3:03] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[3:08] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[3:08] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) has joined #ceph
[3:12] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[3:17] * diegows_ (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:22] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:23] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:29] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[3:30] * sz0 (~user@208.72.139.54) Quit (Ping timeout: 480 seconds)
[3:31] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:36] * zack_dol_ (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:37] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[3:38] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) Quit (Read error: Operation timed out)
[3:39] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[3:46] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[3:47] * sjusthm (~sam@24-205-43-60.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[3:48] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[3:58] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) has joined #ceph
[4:14] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[4:18] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) Quit (Quit: tristanz)
[4:22] * Boltsky (~textual@office.deviantart.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[4:24] * KaZeR (~KaZeR@64.201.252.132) Quit (Remote host closed the connection)
[4:25] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:33] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[4:39] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[4:40] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[4:41] * sprachgenerator (~sprachgen@c-67-167-211-254.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[4:51] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[5:09] * Vacum_ (~vovo@i59F7AFCE.versanet.de) has joined #ceph
[5:16] * Vacum (~vovo@i59F79988.versanet.de) Quit (Ping timeout: 480 seconds)
[5:21] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[5:26] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[5:32] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Read error: Operation timed out)
[5:38] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[5:41] * fdmanana_ (~fdmanana@bl13-138-188.dsl.telepac.pt) has joined #ceph
[5:45] * `jpg (~josephgla@ppp121-44-251-173.lns20.syd7.internode.on.net) has joined #ceph
[5:48] * fdmanana (~fdmanana@bl9-168-27.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[5:49] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[5:58] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[5:59] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[6:06] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Ping timeout: 480 seconds)
[6:07] * bladejogger (~bladejogg@0001c1f3.user.oftc.net) Quit (Quit: bladejogger)
[6:56] * `jpg (~josephgla@ppp121-44-251-173.lns20.syd7.internode.on.net) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[7:01] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:09] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:13] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:17] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:26] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:30] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[7:33] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:34] * rdas (~rdas@110.224.130.18) has joined #ceph
[7:41] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[7:49] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[7:57] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:05] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:08] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[8:11] * rdas (~rdas@110.224.130.18) Quit (Quit: Leaving)
[8:13] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:16] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:20] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:28] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:36] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:38] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[8:38] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[8:44] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[8:46] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[8:47] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[8:47] * Nats (~Nats@telstr575.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[8:47] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[8:50] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[8:51] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:52] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[8:59] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[9:00] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:02] * Nats (~Nats@telstr575.lnk.telstra.net) has joined #ceph
[9:05] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:06] * ksingh (~Adium@2001:708:10:10:e5d2:df0a:c33d:34e3) has joined #ceph
[9:08] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:11] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:16] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:19] * athrift (~nz_monkey@203.86.205.13) Quit (Ping timeout: 480 seconds)
[9:21] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:24] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:25] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Ping timeout: 480 seconds)
[9:25] * c74d (~c74d@2002:4404:712c:0:76de:2bff:fed4:2766) Quit (Ping timeout: 480 seconds)
[9:28] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[9:31] * kiwigera_ (~kiwigerai@208.72.139.54) has joined #ceph
[9:31] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Read error: Connection reset by peer)
[9:32] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:32] * kiwigera_ (~kiwigerai@208.72.139.54) Quit (Read error: Connection reset by peer)
[9:32] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[9:39] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[9:48] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[9:55] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:01] * mcms (~mcms@46.224.152.64) has joined #ceph
[10:03] * c74d (~c74d@2002:4404:712c:0:201d:2d0c:cf3e:4863) has joined #ceph
[10:04] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:04] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[10:08] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[10:11] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[10:11] * ChanServ sets mode +v andreask
[10:11] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:11] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:19] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[10:20] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:21] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[10:22] * tristanz (~tristanza@mobile-166-137-185-156.mycingular.net) has joined #ceph
[10:24] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:27] * c74d3 (~c74d3a4eb@2002:4404:712c:0:76de:2bff:fed4:2766) has joined #ceph
[10:27] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:31] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:35] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:39] * Cube (~Cube@66-87-64-209.pools.spcsdns.net) Quit (Quit: Leaving.)
[10:43] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[10:43] * LeaChim (~LeaChim@host86-162-1-71.range86-162.btcentralplus.com) has joined #ceph
[10:51] * jtangwk (~Adium@gateway.tchpc.tcd.ie) Quit (Remote host closed the connection)
[10:51] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:51] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[10:58] * mcms (~mcms@46.224.152.64) Quit (Quit: Leaving)
[10:58] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:04] * glzhao (~glzhao@123.125.124.17) Quit (Quit: Lost terminal)
[11:05] * glzhao (~glzhao@123.125.124.17) has joined #ceph
[11:07] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:09] * Cube (~Cube@66.87.64.209) has joined #ceph
[11:15] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:21] * Cube (~Cube@66.87.64.209) Quit (Ping timeout: 480 seconds)
[11:23] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:25] * allsystemsarego (~allsystem@79.115.62.238) has joined #ceph
[11:30] * tristanz (~tristanza@mobile-166-137-185-156.mycingular.net) Quit (Quit: tristanz)
[11:31] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:36] * oms101 (~oms101@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[11:39] * oms101 (~oms101@nat.nue.novell.com) has joined #ceph
[11:40] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[11:43] * xdeller (~xdeller@109.188.124.66) has joined #ceph
[11:47] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[11:56] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:01] * xmltok (~xmltok@216.103.134.250) Quit (Remote host closed the connection)
[12:01] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[12:03] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:08] * kosmas (~kosmasgia@capra.lib.uoc.gr) has joined #ceph
[12:10] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) has joined #ceph
[12:11] * madkiss (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[12:11] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:14] * kosmas (~kosmasgia@capra.lib.uoc.gr) has left #ceph
[12:17] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:19] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:26] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:27] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:35] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:35] * madkiss1 (~madkiss@chello062178057005.20.11.vie.surfer.at) Quit (Read error: Connection reset by peer)
[12:35] * madkiss (~madkiss@2001:6f8:12c3:f00f:606a:ba27:3917:b754) has joined #ceph
[12:37] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[12:38] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[12:43] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[12:48] * madkiss (~madkiss@2001:6f8:12c3:f00f:606a:ba27:3917:b754) Quit (Ping timeout: 480 seconds)
[12:48] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[12:49] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[12:50] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[12:55] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[12:58] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has joined #ceph
[12:59] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:00] * madkiss (~madkiss@2001:6f8:12c3:f00f:4dab:17e3:73c1:15cb) has joined #ceph
[13:01] * lofejndif (~lsqavnbok@72.52.91.30) has joined #ceph
[13:06] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[13:08] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[13:08] * ChanServ sets mode +v andreask
[13:14] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:20] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[13:22] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[13:30] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:38] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[13:41] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[13:46] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:47] * JeffK (~Jeff@38.99.52.10) Quit (Read error: Connection reset by peer)
[13:48] * JeffK (~Jeff@38.99.52.10) has joined #ceph
[13:52] * Cube (~Cube@66-87-65-82.pools.spcsdns.net) has joined #ceph
[13:53] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[13:57] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[13:59] * lofejndif (~lsqavnbok@1RHAAC96R.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[14:00] * Cube (~Cube@66-87-65-82.pools.spcsdns.net) Quit (Ping timeout: 480 seconds)
[14:02] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[14:09] * andrein (~andrein@84.247.84.22) has joined #ceph
[14:09] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[14:10] <andrein> Hi guys, I'm trying to deploy two osds on a new server. I've pushed the config file using ceph-deploy, but when I try to activate the osds it complains that it can't find any monitors. running ceph status from the command line also fails, but the monitors are clearly accessible since "ceph -m <monitor> status" works as expected. is there something I'm missing here?
[14:11] <alfredodeza> andrein: is that a new installation? it sounds like 2 different installations in the same host
[14:11] <alfredodeza> host/hosts
[14:12] <andrein> i have a 3 node cluster at the moment, each running two OSDs and a MON, and I'm trying to add another two OSDs on a different server
[14:13] <andrein> my ceph.conf is at http://pastebin.com/00SqhuTh for reference (don't mind the hostnames :) )
[14:13] <alfredodeza> are you following the guide to add OSDs?
[14:14] <andrein> yes, i ran the install, prepare, activate steps, everything went ok until i tried to activate them.
[14:14] <andrein> posting the log in a second
[14:15] <alfredodeza> this --> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
[14:15] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[14:15] <alfredodeza> what does `ceph -w` say?
[14:15] <alfredodeza> you added the OSDs to the crush map?
[14:17] <andrein> ceph-deploy activate output: http://pastebin.com/qZTUFxbm , ceph -w on the server I'm deploying to: http://pastebin.com/gM4CpNVJ
[14:17] <alfredodeza> oh wait
[14:17] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[14:17] <alfredodeza> you are doing this with ceph-deploy?
[14:17] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:18] <alfredodeza> andrein: ceph-deploy doesn't know how to *add* an OSD to an existing cluster
[14:18] <alfredodeza> we just recently added functionality to add a monitor
[14:19] <andrein> alfredodeza: it worked fine when I added the third server a few days ago...
[14:19] <alfredodeza> I am not sure how that worked, but if you look at the steps required to add an OSD, it involves a bunch of extra things that ceph-deploy does not do
[14:20] <alfredodeza> one of them being adding the OSD to the crush map
[14:20] <alfredodeza> as per the guide http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#adding-an-osd-manual
[14:21] <alfredodeza> I would strongly suggest you follow the manual guide. Really not sure how you were able to add an OSD with ceph-deploy before
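For reference, the manual procedure in the guide alfredodeza links comes down to roughly the following (a sketch only; the OSD id, device, weight and hostname are placeholders, and details vary by release):

    ceph osd create                       # allocates the next OSD id, e.g. 6
    mkdir /var/lib/ceph/osd/ceph-6
    mkfs -t xfs /dev/sdX
    mount /dev/sdX /var/lib/ceph/osd/ceph-6
    ceph-osd -i 6 --mkfs --mkkey
    ceph auth add osd.6 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-6/keyring
    ceph osd crush add osd.6 1.0 host=node4   # add it to the CRUSH map with a weight
    service ceph start osd.6                  # or whatever your init system uses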
[14:21] <andrein> alfredodeza: I've been following this guide: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
[14:21] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:22] <andrein> and it worked fine for the first three nodes
[14:22] <alfredodeza> hrmn
[14:23] <andrein> I'm using ceph-deploy 1.4.0 now, but I'm pretty sure it worked on 1.3.3 as well
[14:24] <alfredodeza> andrein: I am reading this and I am highly suspicious :)
[14:24] <alfredodeza> one sec
[14:24] <alfredodeza> is the 4th node any different from the 3 nodes you already have?
[14:25] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[14:26] <andrein> they're identical, all running centos 6.5, same network setup
[14:27] <andrein> the weird thing is that if i manually specify a monitor address on the 4th server, i can connect to the cluster
[14:27] <andrein> if I let it fall back to ceph.conf it fails
[14:28] <alfredodeza> aha
[14:28] <alfredodeza> ok
[14:28] <alfredodeza> I think I know what is going on here
[14:28] <alfredodeza> so you initially had 3 servers
[14:28] <alfredodeza> correct?
[14:28] <alfredodeza> and you deployed your monitors and then your OSDs?
[14:28] <andrein> yes
[14:28] <alfredodeza> right
[14:29] <alfredodeza> and when the 4th node came along
[14:29] <alfredodeza> did you add a monitor?
[14:29] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[14:32] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:32] <andrein> technically, i first had two nodes, 4 osds+1 mon, then i added node 3 and i had 6 osds+3 mons
[14:32] <andrein> and now i'm trying to add node 4 with another two osds (no monitor here)
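Since "ceph -m <monitor> status" works but a plain "ceph status" does not, the client on the new node is probably not finding the monitors through its local ceph.conf. A minimal [global] section that usually makes monitor discovery work looks something like this (fsid, hostnames and IPs are placeholders):

    [global]
    fsid = <cluster-fsid>
    mon initial members = node1, node2, node3
    mon host = 10.0.0.1,10.0.0.2,10.0.0.3

One way to get it there is to push the admin node's config, e.g. "ceph-deploy --overwrite-conf config push node4"; the node also needs a client.admin keyring in /etc/ceph for "ceph status" to authenticate.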
[14:33] * BillK (~BillK-OFT@124-148-94-184.dyn.iinet.net.au) Quit (Read error: Operation timed out)
[14:33] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[14:35] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:41] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[14:42] <madkiss> harhar.
[14:42] <madkiss> finally. working ceph-deployment with Puppet with properly configured external journals.
[14:43] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[14:49] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[14:49] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[14:51] * wmat_ is now known as wmat
[14:53] * mnash (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[14:53] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[14:55] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:56] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[14:57] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:b475:434e:e061:a982) has joined #ceph
[15:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:4dab:17e3:73c1:15cb) Quit (Ping timeout: 480 seconds)
[15:03] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:04] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[15:05] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:08] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:10] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[15:12] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[15:14] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:20] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[15:24] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[15:24] * jeff-YF (~jeffyf@64.191.222.109) has joined #ceph
[15:25] * ksingh (~Adium@2001:708:10:10:e5d2:df0a:c33d:34e3) Quit (Ping timeout: 480 seconds)
[15:28] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[15:32] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[15:36] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[15:39] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:41] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:42] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:44] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[15:45] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[15:46] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[15:46] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:46] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[15:48] * TheBittern (~thebitter@195.10.250.233) Quit ()
[15:51] * jeff-YF (~jeffyf@64.191.222.109) Quit (Quit: jeff-YF)
[15:52] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[15:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:59] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[16:02] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[16:04] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[16:04] * fghaas (~florian@76.14.1.153) has joined #ceph
[16:04] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[16:09] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[16:12] <isodude> Hi, strange problem with a 4k benchmark here. I'm running rados -p <pool> bench -b 4096 -t 256 100 write, and I'm getting very slow values, like 3MB/s. Iostat on one of the OSDs (http://pastebin.com/JXQG472D) seems to show the OSD disks at 100% while the journals sdc/sdd are just idling. Anything I can tune to optimize?
[16:12] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:16] <isodude> madkiss: sweet, much trouble with puppet or was it a breeze?
[16:16] <madkiss1> isodude: well. not all that easy.
[16:16] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[16:18] <madkiss1> isodude: https://github.com/madkiss/puppet-cephdeploy - some patches were in fact required.
[16:18] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) Quit (Remote host closed the connection)
[16:19] * fghaas (~florian@76.14.1.153) Quit (Quit: Leaving.)
[16:21] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[16:25] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[16:25] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[16:26] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) has joined #ceph
[16:26] * julian (~julianwa@125.70.132.28) Quit (Quit: afk)
[16:31] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:32] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[16:32] <isodude> madkiss1, nice :)
[16:33] <isodude> oh, how does one see if the journal is full?
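One way to peek at journal and filestore queue state is the OSD admin socket; a sketch (the osd id is an example, and the exact counter names differ between releases):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
    # look at the filestore/journal counters, e.g. journal_queue_bytes vs journal_queue_max_bytes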
[16:33] * madkiss1 is now known as madkiss
[16:38] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[16:38] <alphe> hello everyone
[16:39] <alphe> is there no way with an RBD image to delete the replicated data when the client OS marks data as deleted?
[16:40] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[16:43] <alphe> guess not ...
[16:43] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[16:48] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[16:52] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[16:54] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[16:56] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[16:59] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[17:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit ()
[17:02] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[17:02] * vata (~vata@2607:fad8:4:6:21a2:43c3:2da0:63e3) has joined #ceph
[17:04] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[17:07] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:07] * andrein (~andrein@84.247.84.22) Quit (Read error: Connection reset by peer)
[17:07] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:08] * JoeGruher (~JoeGruher@134.134.139.74) has joined #ceph
[17:09] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) Quit (Quit: Leaving.)
[17:09] * andrein (~andrein@46.108.33.138) has joined #ceph
[17:10] * t0rn (~ssullivan@c-24-11-198-35.hsd1.mi.comcast.net) has joined #ceph
[17:12] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[17:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[17:14] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:17] * andrein_ (~andrein@84.247.84.22) has joined #ceph
[17:17] * andrein (~andrein@46.108.33.138) Quit (Read error: Connection reset by peer)
[17:19] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[17:21] <janos> i could be mistaken but i think that's a lazy process - it eventually occurs but not immediately
[17:22] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:28] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[17:30] * markbby (~Adium@168.94.245.4) has joined #ceph
[17:35] * themgt (~themgt@24-181-212-170.dhcp.hckr.nc.charter.com) Quit (Quit: themgt)
[17:36] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[17:36] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:40] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[17:42] * joef (~Adium@2620:79:0:131:976:f4aa:48a1:4c1a) Quit (Quit: Leaving.)
[17:43] * joef (~Adium@2620:79:0:131:70cf:2798:622a:276) has joined #ceph
[17:43] * joef (~Adium@2620:79:0:131:70cf:2798:622a:276) Quit ()
[17:44] <ifur> http://www.supermicro.nl/products/system/4U/F617/SYS-F617H6-FTPTL_.cfm
[17:44] <ifur> in the voice of george takei, "Ohh myyy!"
[17:45] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[17:47] * joef (~Adium@2620:79:0:131:3419:7be1:5b1a:4381) has joined #ceph
[17:48] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has joined #ceph
[17:48] * angdraug (~angdraug@12.164.168.117) Quit (Ping timeout: 480 seconds)
[17:52] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[17:52] <mozg> hello guys
[17:52] <tracphil> hi
[17:52] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[17:53] <mozg> I had two crashes of ceph-mon over last couple of weeks
[17:53] <mozg> which consumed most of the disk on the root partition
[17:53] <mozg> i was wondering if anyone would be interested in getting the logs
[17:53] <mozg> this is on Emperor
[17:54] <mozg> the log files are around 7gb in size and around 200mb in bz2
[17:54] <mozg> if anyone is interested, please let me know where to send it
[17:58] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:59] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:59] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:01] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[18:02] * cbob (~cbob@host-63-232-9-69.midco.net) has joined #ceph
[18:02] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) Quit (Remote host closed the connection)
[18:03] <aarontc> ifur: I'll take 10 ;)
[18:04] * ckranz (~ckranz@193.240.116.146) Quit (Quit: Leaving)
[18:06] * ghartz (~ghartz@ip-68.net-80-236-84.joinville.rev.numericable.fr) has joined #ceph
[18:08] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) has joined #ceph
[18:09] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[18:09] * markbby (~Adium@168.94.245.1) has joined #ceph
[18:13] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[18:14] * Pedras1 (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[18:14] <Serbitar> ifur: price per unit?
[18:15] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:15] <ifur> Serbitar: googled a bit, seems to be about 5k USD barebone
[18:15] <Serbitar> definitely not cheap
[18:16] <Serbitar> but with 12 disks per tray
[18:16] <Serbitar> thats quite nice
[18:16] <ifur> RPSU and 4x mobos with hardware raid controller and dual 10GbE ports are never cheap
[18:17] <ifur> box itself should be quite cheap, and adding cpu, mem and drives drives the price up
[18:17] * timidshark (~timidshar@70-88-62-73-fl.naples.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[18:17] <ifur> single 1U nodes without RPSU are probably going to be cheaper, but it's also more stuff to manage
[18:17] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[18:18] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:18] <Serbitar> maybe, they probably will have fewer disks though
[18:20] <ifur> Serbitar: nah, supermicro have 1U with 12 disk slots
[18:20] <ifur> same thing, just without RPSU :P
[18:20] <Serbitar> right
[18:25] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) has joined #ceph
[18:27] * Pedras1 (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:32] * ircolle (~Adium@2601:1:8380:2d9:68e9:6778:af47:22ad) has joined #ceph
[18:32] * koleosfuscus (~koleosfus@130.125.119.204) has joined #ceph
[18:33] * chrisk (~Chris@cpc26-salf5-2-0-cust77.10-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[18:35] <cbob> anyone got ceph in production with cloudstack?
[18:36] * Cube (~Cube@66-87-65-82.pools.spcsdns.net) has joined #ceph
[18:37] <cbob> i've got cloudstack up and running on it but i've come across some limitations and im wondering if anyone has found a workaround for them
[18:38] <cbob> like 5gb upload to s3 limitations, no live snapshots w/ kvm, and creating a vm from a template that came from rbd fails.
[18:39] <cbob> im also wondering if anyone has geo replication up and running, im curious about network bandwidth/latency requirements for replication of s3 stuff vs attempting to make a rbd span multiple locations
[18:40] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:40] <cbob> also with cloudstack, creating a vm on ceph rbd takes forever the first time, as it thick provisions the whole first image, subsequent vm creation happens fast though
[18:42] <cbob> next week im going to begin working with a developer to try to fix some of these issues with cloudstack (we've set our hearts on ceph+cloudstack) and im wondering if anyone in the community has any input / ideas
[18:42] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[18:47] * dpippenger (~riven@cpe-198-72-157-189.socal.res.rr.com) Quit (Quit: Leaving.)
[18:48] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[18:48] <saturnine> cbob: I've been testing Ceph RBD with CS too.
[18:49] <saturnine> Haven't had any problems making/deploying templates from RBD VMs.
[18:49] <saturnine> I don't think geo-replication is advisable for RBD.
[18:50] <saturnine> Geo-locating Object Storage for snapshotting backups is probably a better idea.
[18:50] <saturnine> And I don't think live KVM snapshotting on RBD is implemented yet on the CloudStack side.
[18:52] * xdeller (~xdeller@109.188.124.66) Quit (Quit: Leaving)
[18:58] * mdxi_ (~mdxi@50-199-109-156-static.hfc.comcastbusiness.net) has joined #ceph
[18:58] * mdxi_ (~mdxi@50-199-109-156-static.hfc.comcastbusiness.net) has left #ceph
[18:58] * xarses (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[19:04] * The_Bishop_ (~bishop@g229103000.adsl.alicedsl.de) has joined #ceph
[19:06] * angdraug (~angdraug@12.164.168.117) Quit (Ping timeout: 480 seconds)
[19:07] * diegows_ (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[19:08] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:10] * The_Bishop (~bishop@f050147209.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:11] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Read error: Connection reset by peer)
[19:11] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:11] * elyograg (~oftc-webi@client175.mainstreamdata.com) has joined #ceph
[19:13] <elyograg> I built a test ceph filesystem. I put 2.4TB of data into it. Later, I deleted all of it. The delete took more than a full day to happen, which I was not surprised by. What I was surprised about was that it took about a full day before any of the data in the ceph back end was deleted.
[19:13] <elyograg> Now all of the data in the ceph filesystem is gone, but there is still a huge amount of that data left in the ceph back end, and I have no idea how to get it to release the space.
[19:14] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:14] <elyograg> The delete finished about three days ago, but the overall space used hasn't dropped since shortly after the 'rm -rf' finished.
[19:14] <elyograg> 2014-04-04 11:13:38.419527 mon.0 [INF] pgmap v239472: 192 pgs: 192 active+clean; 3302 MB data, 3282 GB used, 7263 GB / 10564 GB avail
[19:15] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:15] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[19:18] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[19:19] <jcsp> elyograg: that's pretty odd. there is a lag, but it shouldn't be that bad. if you turn up the MDS logging level, you can perhaps watch it happening by monitoring messages about "purging strays"
[19:19] <elyograg> how do I do that? My exposure so far is fairly minimal.
[19:19] <ponyofdeath> hi, what can I do about these stuck / blocked pg's in ceph health detail report? http://paste.ubuntu.com/7204131
[19:23] <jcsp> elyograg: ceph mds tell <mds daemon name> injectargs "--debug-mds 10"
[19:23] <elyograg> ok ... how do I find the mds daemon name? (serious beginner here...)
[19:24] <elyograg> and then how do I see the resulting log?
[19:24] <jcsp> if you then "tail -f /var/log/ceph/ceph-mds-<daemon name>.log | grep purge_stray"
[19:24] <jcsp> in ceph status, look at the mdsmap line
[19:24] <jcsp> mine is "mdsmap e110: 1/1/1 up {0=gravel1=up:active}", where I have one mds called gravel1
[19:25] <jcsp> (also heads up: cephfs is not production ready so you are in rough territory!)
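Putting jcsp's steps together, the whole sequence looks roughly like this (the mds name gravel1 is only jcsp's example; take yours from the mdsmap line, and the log file name may differ slightly):

    ceph status | grep mdsmap                              # e.g. {0=gravel1=up:active}
    ceph mds tell gravel1 injectargs "--debug-mds 10"
    tail -f /var/log/ceph/ceph-mds.gravel1.log | grep purge_stray
    ceph mds tell gravel1 injectargs "--debug-mds 0"       # turn the logging back down afterwards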
[19:26] * Kioob (~kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[19:26] * koleosfuscus (~koleosfus@130.125.119.204) Quit (Quit: koleosfuscus)
[19:26] <elyograg> I'm aware of the 'not ready for production' status. I mostly wanted to know how it performed for future consideration. does really well at everything but deletes, but nothing else I've tested so far does well at deletes either.
[19:27] <jcsp> is your workload a lot of small files?
[19:28] <elyograg> all the files I tested with are jpegs. a few hundred K to a few MB.
[19:28] <elyograg> There were over a million of them in my test, though. Hundreds of millions in the archive.
[19:28] <elyograg> we have some text articles and some video, too.
[19:28] * kiwigeraint (~kiwigerai@208.72.139.54) Quit (Remote host closed the connection)
[19:29] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[19:30] <elyograg> a handful of metadata files (which are small) go with each asset, but those can live on a smaller system that doesn't need to scale.
[19:30] * diegows_ (~diegows@190.190.5.238) has joined #ceph
[19:32] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[19:32] * leseb (~leseb@185.21.174.206) has joined #ceph
[19:33] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[19:33] <elyograg> we are considering whether to move to a true object store. This would involve a lot of code changes for our system. I plan to ask a question in several places: "tell me why I should use your open source object store and not XXX or YYY instead. Sell me on your technology, if you can." Anyone here have a good answer as to why I should use Ceph's object store instead of swift or somebody else?
[19:33] <Sysadmin88> have you researched ceph at all?
[19:34] <elyograg> not in depth. I've been looking.
[19:34] * alram (~alram@38.122.20.226) has joined #ceph
[19:34] <Sysadmin88> self healing, self replicating, expandable, can use any hardware you decide you need... that sold it for me lol
[19:34] * themgt (~themgt@24-181-212-170.dhcp.hckr.nc.charter.com) has joined #ceph
[19:34] <Sysadmin88> there are plenty of videos on youtube
[19:35] <elyograg> for my "space not being reclaimed" problem on my testbed, I don't see anything useful in the mds debug. Every few seconds this comes through: https://dpaste.de/emdq
[19:35] <elyograg> Sysadmin88: if I understand OpenStack Swift, they do all that too.
[19:35] * kiwigeraint (~kiwigerai@208.72.139.54) has joined #ceph
[19:35] <jcsp> elyograg: if you watch "ceph -w" for a few minutes, are you seeing the used space tick down?
[19:36] <elyograg> not for about three days now.
[19:36] <elyograg> could be two days. quite a long time after the delete finished.
[19:36] * timidshark (~timidshar@74.118.238.209) has joined #ceph
[19:36] <jcsp> what version of ceph?
[19:36] <ircolle> elyograg if by "all that" you mean Swift does object storage, correct - but Ceph provides object, block and file
[19:37] <Sysadmin88> and swift is 'eventually consistent'...
[19:37] <Sysadmin88> while ceph is strongly consistent
[19:37] <ircolle> Sysadmin88 - exactly
[19:37] <elyograg> ircolle: the block storage is actually quite a selling point, but there's DRBD too. As for the filesystem, it's not production ready. sage said a few days ago that he expects that to change sometime this year, but we need terabytes of space NOW.
[19:37] <Sysadmin88> does swift support erasure coding? ceph is implementing it
[19:38] <elyograg> i have seen 'erasure coding' mentioned. No idea what it even is.
[19:38] <Sysadmin88> where you make it easier to recover from more losses but reduce the space needed
[19:38] <Sysadmin88> in exchange for extra CPU used for recovery
[19:38] <Sysadmin88> research it
[19:39] <jcsp> erasure coding is the technical term for what a RAID5/6 array does: more efficient use of space by calculating parity data instead of making replicas.
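For what it's worth, in the then-upcoming Firefly release an erasure-coded pool is created along these lines (profile name, k/m values and pg counts are only examples):

    ceph osd erasure-code-profile set myprofile k=4 m=2
    ceph osd pool create ecpool 128 128 erasure myprofile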
[19:39] <Sysadmin88> if you need it NOW... you're too late... you need to do lots of research into all the projects first
[19:41] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[19:41] <elyograg> we did some research on scalable distributed filesystems some time ago. Ceph got eliminated early because re-sharing with NFS was deemed by ceph's own documentation as completely unstable. Went forward with gluster. we've had problem after problem with it.
[19:42] <Sysadmin88> as well as space, you need to consider performance as well. and budget
[19:42] <ircolle> elyograg - what're your use cases? What kind of storage problems are you trying to solve?
[19:42] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[19:43] <elyograg> Currently we have fiberchannel SAN hardware (with SATA disks) providing storage to a pair of Solaris X86 servers running Solaris Cluster, to provide a failover NFS solution.
[19:43] <elyograg> Oracle went and bought Sun, making Solaris *way* too expensive, so we can't expand that solution.
[19:46] <elyograg> We've also had some issues with the SAN hardware, topped off by the fact that the specific hardware models we have are completely end of life now, to the point where we can't even get a spare controller from the manufacturer.
[19:48] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[19:48] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:48] <ircolle> elyograg - how much storage do you plan on providing?
[19:49] <elyograg> the SAN hardware is already over 200TB.
[19:49] <elyograg> We have 80TB of storage in gluster, that's probably about two thirds full.
[19:50] <elyograg> close to 100 million assets, six million are photos, about half a million are video. the rest are images, primarily jpeg.
[19:51] <elyograg> six million are text. sorry, typed that wrong.
[19:52] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:53] * sjustwork (~sam@2607:f298:a:607:2590:a8c2:b9bc:4115) has joined #ceph
[19:53] <elyograg> for each asset there are several very small supporting files in addition to the asset itself. Video assets have a few different formats, so the supporting files are not all small.
[19:53] * vilobh (~vilobhmm@nat-dip28-wl-b.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:53] <elyograg> our directory structure has 1000 assets per subdirectory.
[19:54] <elyograg> they're broken up into providers and into features within each provider.
[19:55] <vilobh> why isn't there something like an rbd.so or a similar tool in userspace to expose and create block devices? Is all the logic for block devices embedded into the kernel module rbd.ko?
[19:55] <Sysadmin88> ceph has the capability to do processing... maybe that would be useful to you to generate the supporting files...
[19:56] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[19:59] * Pedras (~Adium@216.207.42.134) has joined #ceph
[20:01] <elyograg> Ceph would have to know how our system works in order to do processing for us. It would need to understand jpeg metadata and our naming scheme.
[20:01] <elyograg> we've got hundreds of feed processing scripts, because different feeds need things done differently.
[20:01] <Sysadmin88> then you would tell it what it needs if it would be suitable
[20:03] <dmick> vilobh: there are many ways to create and access rbd images besides the kernel driver
[20:04] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[20:04] <dmick> but if you want to create a kernel block device...
[20:04] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[20:05] * fdmanana_ (~fdmanana@bl13-138-188.dsl.telepac.pt) Quit (Quit: Leaving)
[20:06] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit ()
[20:06] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[20:09] <elyograg> jcsp: how do I turn this debug off now?
[20:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[20:10] <elyograg> might have figured it out. sent the same command with 0 instead of 10.
[20:10] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit ()
[20:11] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) has joined #ceph
[20:15] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:15] * gregsfortytwo1 (~Adium@38.122.20.226) has joined #ceph
[20:20] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[20:23] * xmltok_ (~xmltok@cpe-76-90-130-148.socal.res.rr.com) Quit (Quit: Bye!)
[20:23] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) has joined #ceph
[20:24] * zack_dol_ (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[20:25] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Read error: Connection reset by peer)
[20:32] * zack_dol_ (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[20:34] * gregsfortytwo1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[20:34] <JoeGruher> what would cause rbd map to hang?
[20:34] <JoeGruher> ceph@joceph-client01:~$ sudo rbd create testrbd01 --pool mycontainers_1 --size 100000
[20:34] <JoeGruher> ceph@joceph-client01:~$ sudo rbd map testrbd01 --pool mycontainers_1
[20:34] <JoeGruher> then it just sits there
[20:36] <JoeGruher> oh I guess it does fail eventually: rbd: add failed: (5) Input/output error
[20:45] * gregsfortytwo1 (~Adium@38.122.20.226) has joined #ceph
[20:48] <JoeGruher> I have these messages in dmesg, what does this mean? [1301268.582364] libceph: mon1 10.0.0.102:6789 feature set mismatch, my 4a042a42 < server's 104a042a42, missing 1000000000
[20:48] <JoeGruher> All systems have 0.78
[20:49] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[20:53] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[20:54] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) has joined #ceph
[20:59] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[21:00] <Fruit> JoeGruher: your kernel's rbd is probably too old
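The usual cause of a libceph "feature set mismatch" is a kernel client older than the features the cluster is using. Besides upgrading the kernel, one commonly suggested workaround is to fall back to older CRUSH tunables so old clients can still connect (note this triggers data movement); a sketch:

    ceph osd crush tunables legacy      # or bobtail, depending on how old the client is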
[21:02] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[21:02] * zack_dolby (~textual@e0109-114-22-65-255.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[21:05] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Remote host closed the connection)
[21:08] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Read error: Connection reset by peer)
[21:11] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[21:11] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[21:12] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[21:13] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) Quit ()
[21:13] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[21:14] * pasha_ceph (~quassel@S0106362610c83979.ok.shawcable.net) Quit (Remote host closed the connection)
[21:14] <JoeGruher> fruit: i guess so... i had 3.13 (not very old!) but upgrading to 3.14 resolved the problem
[21:14] <JoeGruher> thanks
[21:16] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[21:17] <Fruit> np :)
[21:28] * The_Bishop_ (~bishop@g229103000.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? If I catch him I'll reset his connection!)
[21:32] <vilobh> dmick: librbd being one of them, right? What are the advantages of creating a kernel block device vs using the normal librbd route?
[21:32] * Pedras1 (~Adium@216.207.42.132) has joined #ceph
[21:34] * Pedras (~Adium@216.207.42.134) Quit (Read error: Connection reset by peer)
[21:35] * gregsfortytwo1 (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[21:38] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) Quit (Quit: tristanz)
[21:42] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[21:42] <tristanz> has there been any talk of CephFS minimum product since last year?
[21:43] <dmick> vilobh: librbd underlies most (all?) of the non-kernel access methods, yes
[21:43] <dmick> it's more about what you want to do/need to have
[21:44] <vilobh> snapshot/clone will only be supported with kernel block devices right ?
[21:44] <vilobh> dmick: ^^
[21:54] <dmick> vilobh: definitely not; in fact for older kernels just the opposite. Typically the userland development is much more up to date (because it's under Ceph's control and not the kernel, and because it tends to be easier to implement and experiment in userland so it gets done first there)
[21:54] <dmick> you might benefit from reading some basic documentation about Ceph and what access paths are available
[21:55] <vilobh> dmick: sure, will do that, but what will the kernel block device offer which the userland tools won't?
[22:00] <dmick> a kernel block device, mostly
[22:01] <dmick> other solutions are things like qemu-kvm support for accessing images directly (so no kernel involvement) or iSCSI gateway (to create an iSCSI device that is implemented with a userland daemon and librbd)
[22:02] <dmick> OpenStack can use the kvm support, for example
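A quick illustration of the qemu path dmick mentions, going through librbd with no kernel module involved (pool and image names are placeholders):

    qemu-img create -f raw rbd:mypool/myimage 10G
    qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:mypool/myimage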
[22:03] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:10] <saturnine> iperf: 0.0-10.0 sec 1.15 GBytes 989 Mbits/sec
[22:10] <saturnine> dd: 1048576000 bytes (1.0 GB) copied, 18.6466 s, 56.2 MB/s
[22:10] <saturnine> This is seriously blowing my mind. :X
[22:11] <nhm_> saturnine: still sequential reads?
[22:12] * sz0 (~user@208.72.139.54) has joined #ceph
[22:14] <saturnine> nhm_: Yeah
[22:14] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Quit: Leaving)
[22:15] <saturnine> dd if=/dev/zero of=/mnt2/test bs=4M count=250 oflag=direct (76.2MB/s)
[22:15] <saturnine> dd if=/mnt2/test of=/dev/null bs=4M count=250 iflag=direct (41.7MB/s)
[22:15] <nhm_> saturnine: what underlying FS are you using?
[22:15] <saturnine> Just mounted an RBD volume on the hypervisor, no qemu or anything.
[22:15] <saturnine> nhm_: For OSDs XFS, for this volume ext4
[22:16] <saturnine> The SSDs are Intel 520 180GB.
[22:16] <nhm_> saturnine: Ok. one thing to watch out for is fragmentation, but if this is a fresh filesystem that's less likely to be the problem.
[22:16] <saturnine> Was wrong about the 530s.
[22:17] <nhm_> oh?
[22:18] <saturnine> As in I'm not using 530s, I'm using 520s. :D
[22:18] <saturnine> ceph -w shows: 101815 kB/s rd, 198 op/s
[22:19] <saturnine> But dd is reporting only 41MB/s, because reasons.
[22:19] <nhm_> ok 520s are super fast (but maybe dangerous!) they are what I use in my test rig
[22:20] <nhm_> but for sequential reads, the things to look for are fragmentation, IOs getting broken up, excessive seeks or other strange behaviour.
[22:20] <nhm_> I think the 520s are just your journals right?
[22:21] * ksingh (~Adium@a91-156-75-252.elisa-laajakaista.fi) Quit (Remote host closed the connection)
[22:21] <saturnine> No, the 520s are OSDs
[22:21] <nhm_> oh!
[22:21] <nhm_> that's right
[22:21] <saturnine> the journal is just on the same disk, since it wouldn't be any faster.
[22:21] <nhm_> so yeah, that's terrible. :)
[22:21] <saturnine> hmm
[22:21] <nhm_> how's CPU usage?
[22:22] <saturnine> ceph tell osd.26 bench: "bytes_per_sec": "69669207.000000"
[22:23] * jeff-YF (~jeffyf@216.14.83.26) Quit (Quit: jeff-YF)
[22:23] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[22:24] <saturnine> nhm_: CPU/memory usage on osds is negligible
[22:25] <nhm_> saturnine: and do I remember correctly that you tested raw disk read/write performance?
[22:26] * allsystemsarego (~allsystem@79.115.62.238) Quit (Quit: Leaving)
[22:26] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Ping timeout: 480 seconds)
[22:28] * BManojlovic (~steki@cable-94-189-160-252.dynamic.sbb.rs) has joined #ceph
[22:28] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) Quit (Quit: tristanz)
[22:29] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[22:29] <saturnine> nhm_: Yeah, raw disk speed is normal
[22:30] * zidarsk8 (~zidar@tm.78.153.58.217.dc.cable.static.telemach.net) has left #ceph
[22:30] <saturnine> I just ran an osd tell bench from the OSD node, and it came back at 75MB/s
[22:30] <saturnine> Which, being on the same node, seems really weird.
[22:32] <nhm_> saturnine: did you try testing all of the disks directly without ceph?
[22:33] <saturnine> nhm_: Yeah, speeds are ~200MB/s+
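(For reference, a direct sequential read against the raw device, matching the dd tests above; the device name is illustrative and the test is read-only:)
    dd if=/dev/sdb of=/dev/null bs=4M count=250 iflag=direct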
[22:34] <nhm_> definitely strange. :/
[22:35] <saturnine> I'll mess with it more next week.
[22:35] <saturnine> Haven't really had time to today, and I'm about burnt out for now. :D
[22:35] <saturnine> nhm_: Thanks for the help.
[22:36] <nhm_> saturnine: you may want to try looking at the dump_historic_ops command after running a test in the admin socket.
[22:36] <nhm_> saturnine: that will give you some examples of slow ops and where they spent time
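(The admin-socket query being referred to looks roughly like this; the OSD id and socket path follow the usual defaults and may differ:)
    ceph --admin-daemon /var/run/ceph/ceph-osd.26.asok dump_historic_ops
    ceph daemon osd.26 dump_historic_ops   # equivalent shorthand, run on the node hosting the OSD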
[22:37] <mozg> is anyone interested in the log files of crashed Emperor mons?
[22:37] <saturnine> nhm_: Thanks, will do.
[22:37] <mozg> or shall I just bin it?
[22:38] <mozg> i've had a mon fail on me twice in the space of two weeks or so
[22:38] <mozg> i've got the log files
[22:38] <mozg> anyone interested/care for them?
[22:39] <nhm_> mozg: not sure, I'm seeing if joao is around
[22:39] <joao> I'm here
[22:40] <joao> what's up?
[22:40] * timidshark (~timidshar@74.118.238.209) Quit (Quit: Leaving...)
[22:40] <joao> mozg, mind pointing me to those logs?
[22:42] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) has joined #ceph
[22:46] * n1md4 (~nimda@anion.cinosure.com) has joined #ceph
[22:46] <joao> mozg, gotta run; mind opening a ticket and uploading your logs there?
[22:46] * fghaas (~florian@207.164.2.188) has joined #ceph
[22:46] <joao> I'll take a look at them next week
[22:47] <n1md4> how can i find out what is using data in my cluster? 95G apparently ..
[22:47] <n1md4> would it be a ceph command, or rbd?
[22:48] <n1md4> there was a pool of images that I removed, but the capacity has not been returned
[22:51] <mikedawson> n1md4: Something like... strPool=images; for strImage in `rbd -p $strPool list`; do rbd -p $strPool info $strImage; done
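(Besides walking the images, per-pool usage can be checked directly; a minimal sketch, pool name illustrative:)
    ceph df                     # cluster-wide and per-pool usage
    rados df                    # per-pool object counts and space
    rados -p images ls | head   # see whether objects still remain in the pool
(Space from deleted images or pools is reclaimed asynchronously as the OSDs delete the underlying objects, so ceph df can lag for a while.)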
[22:52] <elyograg> that's essentially the same problem I'm having with cephfs. i deleted everything, but there's still almost 4 terabytes of disk space consumed.
[22:52] * JoeGruher (~JoeGruher@134.134.139.74) Quit (Ping timeout: 480 seconds)
[22:54] * fghaas (~florian@207.164.2.188) Quit (Quit: Leaving.)
[23:00] * JoeGruher (~JoeGruher@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[23:01] * schmee (~quassel@phobos.isoho.st) Quit (Quit: No Ping reply in 180 seconds.)
[23:05] * scuttlemonkey (~scuttlemo@c-107-5-193-244.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[23:06] <mozg> joao, sure, I will
[23:07] * tristanz (~tristanza@c-24-5-38-61.hsd1.ca.comcast.net) Quit (Quit: tristanz)
[23:08] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 28.0/20140314220517])
[23:10] * JoeGruher (~JoeGruher@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:10] <mozg> joao, the maximum upload size seems to be around 70mb
[23:10] <mozg> however, my log files are larger than 200MB in bz2
[23:10] <mozg> where can I upload them?
[23:11] <nhm_> mozg: I think we have some kind of public drop site
[23:12] <mozg> yeah, i remember using it about 6 months ago
[23:12] <mozg> but can't remember the details
[23:12] <mozg> could you please remind me so that i can upload
[23:12] <dmick> man ceph-post-file
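(ceph-post-file takes an optional description and one or more files, and prints an upload tag to paste into the tracker issue; the filenames here are illustrative:)
    ceph-post-file -d 'bug 7991: mon crash logs' mon-a.log.bz2 mon-b.log.bz2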
[23:12] <mozg> i've already opened the bug report
[23:13] * vata (~vata@2607:fad8:4:6:21a2:43c3:2da0:63e3) Quit (Quit: Leaving.)
[23:14] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:14] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[23:14] <dmick> mozg: did you see that?
[23:15] <mozg> yeah, doing it now
[23:15] <mozg> thanks mate
[23:16] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Ping timeout: 480 seconds)
[23:16] <dmick> k. just making sure.
[23:17] * markbby (~Adium@168.94.245.1) has joined #ceph
[23:17] <mozg> dmick, it's asking me for the password
[23:17] <dmick> shouldn't be
[23:17] <mozg> will pastebin in a sec
[23:17] * jeff-YF (~jeffyf@216.14.83.26) Quit (Quit: jeff-YF)
[23:18] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[23:18] <dmick> /usr/share/ceph/*ceph.com exist?
[23:18] * JoeGruher (~JoeGruher@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[23:18] <mozg> http://ur1.ca/gzpl6
[23:18] <mozg> here you go
[23:18] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Quit: quit)
[23:19] <mozg> any idea?
[23:19] <dmick> mozg: ^
[23:20] <mozg> ls -la /usr/share/ceph/*ceph.com
[23:20] <mozg> -rw-r--r-- 1 root root 395 Dec 20 22:01 /usr/share/ceph/id_dsa_drop.ceph.com
[23:20] <mozg> -rw-r--r-- 1 root root 395 Dec 20 22:01 /usr/share/ceph/known_hosts_drop.ceph.com
[23:20] <mozg> yes
[23:22] <dmick> will have to check; can't find the host config
[23:23] <mozg> can I upload it any other way?
[23:23] <joao> cephdrop@ceph.com should work
[23:23] <mozg> joao, can you msg me the pass
[23:24] <mozg> ?
[23:24] <mozg> thanks
[23:25] <mozg> uploading as we speak
[23:25] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[23:25] <mozg> bug reference is 7991
[23:29] <mozg> upload is done
[23:30] <mozg> bug report updated
[23:30] <mozg> hope this helps
[23:30] * jeff-YF (~jeffyf@216.14.83.26) Quit (Ping timeout: 480 seconds)
[23:31] <n1md4> if i'm trying to delete a snapshot but it says it's busy, how can i check what's holding it / what it's doing? there shouldn't be any snapshots on the cluster
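(A hedged checklist for the question above, assuming RBD snapshots; a protected snapshot with clones is the usual cause of a busy error. Pool, image and snapshot names are illustrative:)
    rbd snap ls rbd/myimage                 # list snapshots on the image
    rbd children rbd/myimage@mysnap         # clones based on the snapshot keep it in use
    rbd snap unprotect rbd/myimage@mysnap   # only once the clones are flattened or removed
    rbd snap rm rbd/myimage@mysnap
    rbd snap purge rbd/myimage              # or remove every snapshot on the image at once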
[23:35] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[23:38] <mozg> guys, when is the Firefly going to be out? Any time estimates?
[23:39] <dmick> mozg: http://tracker.ceph.com/issues/7992 Sorry about that! We're fixing it now...
[23:42] <mozg> no problem )) glad to help ))
[23:43] <mozg> dmick, can you check if you've got all four files for bug 7991 please
[23:43] <kraken> mozg might be talking about http://tracker.ceph.com/issues/7991 [ceph-mon crash]
[23:43] <dmick> joao: ^
[23:43] <mozg> so that i can remove it from my server
[23:43] * themgt (~themgt@24-181-212-170.dhcp.hckr.nc.charter.com) Quit (Quit: Pogoapp - http://www.pogoapp.com)
[23:43] <mozg> there should be 4 bz2 files
[23:44] <mozg> two files for each mon
[23:44] <mozg> corresponding to each of the crashes i had
[23:51] * LeaChim (~LeaChim@host86-162-1-71.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[23:52] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has left #ceph
[23:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.