#ceph IRC Log


IRC Log for 2013-09-13

Timestamps are in GMT/BST.

[0:00] <wrencsok> i've only noticed it on our dumpling cluster. have not seen the same behaviour on the 3 cuttlefish ones.
[0:00] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[0:00] <mikedawson> happens on boxes with or without MONs
[0:00] <wrencsok> both
[0:00] <wrencsok> for me
[0:00] <mikedawson> wrencsok: yep
[0:01] <dmick> checking one of my machines, I see logs for all the OSDs I expect, and also ceph-osd.0.log and ceph-osd..log
[0:01] <dmick> both of those latter are definitely wrong and I don't know why/how they're there
[0:01] <mikedawson> dmick: add a [osd.999] stanza, I bet you get ceph-osd.999.log
[0:01] <wrencsok> that happens for me too.
[0:02] <dmick> this ceph.conf has never had osd stanzas
[0:02] <dmick> the ceph-osd..log files are all empty
[0:02] <dmick> the ceph-osd.0.log files occasionally have messages
[0:03] <dmick> look like bootup messages: the version banner and the journal _open. Maybe it's logging before id is set
[0:03] <wrencsok> we use a uniform ceph.conf on all nodes. that way i can roll out tweaks that preserve crashes and restarts.
[0:03] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:04] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[0:04] <wrencsok> i also use ceph.conf as a file to parse for some custom scripts and it just makes my life a bit easier there.
[0:04] <mikedawson> dmick: I also get ceph-mon.*.log on all my nodes
[0:06] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[0:06] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[0:06] <dmick> you mean where * is "every osdid"?
[0:07] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[0:08] <mikedawson> dmick: where * is a,b,c (same as the monitors I have defined)
[0:08] <dmick> sorry, missed the mon
[0:08] <dmick> fascinating.
[0:08] <wrencsok> mikedawson: i see the exact same behaviour.
[0:08] <dmick> I don't get that either, but I do have a "ceph-mon.admin.log" and I have no idea what it is
[0:09] <dmick> again with just a boot message
[0:09] * carif (~mcarifio@honeydew.cictr.com) has joined #ceph
[0:10] <dmick> could be "id" defaults to 0 for osd and "admin" for mon, and both are the same "logging before id is set properly" problem
[0:10] * carif (~mcarifio@honeydew.cictr.com) Quit ()
[0:10] <dmick> but the "every id created" is a different thing
[0:12] <mikedawson> dmick, wrencsok: http://tracker.ceph.com/issues/6299
[0:12] <dmick> tnx
[0:17] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[0:18] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[0:19] * sagelap (~sage@2600:1010:b021:92ae:3424:4060:73c3:2dac) has joined #ceph
[0:20] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[0:21] <wrencsok> broke down and created an account. added a comment/confirmation.
[0:25] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[0:30] * carif (~mcarifio@honeydew.cictr.com) has joined #ceph
[0:32] * carif (~mcarifio@honeydew.cictr.com) Quit ()
[0:37] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[0:43] * mschiff (~mschiff@port-49377.pppoe.wtnet.de) Quit (Remote host closed the connection)
[0:45] * jjgalvez (~jjgalvez@64.34.151.178) has joined #ceph
[0:45] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[0:50] * AfC (~andrew@1.129.141.139) has joined #ceph
[0:55] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:58] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:59] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[1:01] * shang (~ShangWu@207.96.227.9) has joined #ceph
[1:07] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:08] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[1:13] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[1:16] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[1:22] * AfC (~andrew@1.129.141.139) Quit (Quit: Leaving.)
[1:23] * DLange (~DLange@dlange.user.oftc.net) Quit (Remote host closed the connection)
[1:23] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[1:27] * LeaChim (~LeaChim@054073b1.skybroadband.com) Quit (Ping timeout: 480 seconds)
[1:29] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[1:29] * ChanServ sets mode +o scuttlemonkey
[1:31] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[1:31] * sagelap (~sage@2600:1010:b021:92ae:3424:4060:73c3:2dac) Quit (Read error: No route to host)
[1:32] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:34] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:36] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[1:38] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[1:38] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:41] <buck> is there a reason that the centos gitbuilders are not picking up all the updates?
[1:41] <buck> er rebuilding on each commit?
[1:42] <buck> on the next branch, looks like the last commit they build was from sept 07 (bc552...)
[1:42] * scuttlemonkey changes topic to 'Latest stable (v0.67.3 "Dumpling" or v0.61.8 "Cuttlefish") -- http://ceph.com/get || CDS Vids and IRC logs posted http://ceph.com/cds/ || New dev channel #ceph-devel'
[1:49] <dmick> buck: ima go out on a limb and say they're busted :)
[1:50] <buck> dmick: NOT IT
[1:50] <buck> dmick: someone is on it (just an FYI)
[1:50] <dmick> yeah
[1:51] <dmick> tnx for making sure I know :)
[1:51] <xarses> also not it
[1:51] <xarses> =p
[1:52] <xarses> darn alfredo is missing
[1:56] * xarses (~andreww@204.11.231.50.static.etheric.net) has left #ceph
[1:57] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:57] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[1:58] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Remote host closed the connection)
[1:58] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[2:02] * shang (~ShangWu@207.96.227.9) Quit (Ping timeout: 480 seconds)
[2:07] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) has joined #ceph
[2:11] * vbellur (~vijay@122.172.196.110) Quit (Ping timeout: 480 seconds)
[2:17] * xarses (~andreww@204.11.231.50.static.etheric.net) has left #ceph
[2:18] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (Quit: Leaving)
[2:22] * vbellur (~vijay@122.172.237.244) has joined #ceph
[2:25] * jeff-YF (~jeffyf@pool-173-66-21-43.washdc.fios.verizon.net) Quit (Quit: jeff-YF)
[2:31] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[2:31] * KevinPerks (~Adium@64.34.151.178) Quit (Quit: Leaving.)
[2:32] * clayb (~kvirc@proxy-ny2.bloomberg.com) Quit (Read error: Connection reset by peer)
[2:32] * sjm (~sjm@64.34.151.178) Quit (Quit: Leaving)
[2:32] * jjgalvez (~jjgalvez@64.34.151.178) Quit (Quit: Leaving.)
[2:32] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Remote host closed the connection)
[2:45] * yy-nm (~Thunderbi@122.233.46.14) has joined #ceph
[2:48] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[2:52] <malcolm> Hey, are there any big case studies on ceph? or details of any of the big installs (well apart from Dreamhost)
[2:53] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[2:54] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit ()
[2:54] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[2:54] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[2:57] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[3:03] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) has joined #ceph
[3:14] * jjgalvez (~jjgalvez@207.96.227.9) has joined #ceph
[3:22] * kyann (~kyann@did75-15-88-160-187-237.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[3:26] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[3:27] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[3:32] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) Quit (Quit: berant)
[3:41] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[3:43] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[3:46] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[3:47] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[3:47] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[3:47] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) has joined #ceph
[3:56] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[3:57] * nhm (~nhm@63.110.51.11) Quit (Read error: Operation timed out)
[4:01] * berant (~blemmenes@24-236-241-163.dhcp.trcy.mi.charter.com) Quit (Quit: berant)
[4:10] * thomnico (~thomnico@207.96.227.9) Quit (Quit: Ex-Chat)
[4:11] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[4:17] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[4:17] * glzhao (~glzhao@117.79.232.216) has joined #ceph
[4:21] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (Quit: Leaving.)
[4:22] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:29] * angdraug (~angdraug@c-98-248-39-148.hsd1.ca.comcast.net) has joined #ceph
[4:34] * sagelap (~sage@2600:1010:b013:a0a3:3424:4060:73c3:2dac) has joined #ceph
[4:45] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:49] * julian (~julianwa@125.70.133.27) has joined #ceph
[4:51] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[5:00] * jcl (~Adium@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[5:01] * jcl (~Adium@71-94-44-243.static.trlk.ca.charter.com) has left #ceph
[5:03] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[5:05] * fireD_ (~fireD@93-142-198-251.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD (~fireD@93-139-157-255.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:07] * jcl (~Adium@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[5:11] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[5:14] * grepory (~Adium@236.sub-70-192-193.myvzw.com) has joined #ceph
[5:29] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[5:40] * markl (~mark@tpsit.com) Quit (Remote host closed the connection)
[5:42] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[5:43] * doubleg (~doubleg@69.167.130.11) Quit (Remote host closed the connection)
[5:43] * doubleg (~doubleg@69.167.130.11) has joined #ceph
[5:46] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[5:55] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Quit: Konversation terminated!)
[5:56] * sagelap (~sage@2600:1010:b013:a0a3:3424:4060:73c3:2dac) Quit (Read error: Connection reset by peer)
[5:57] * sagelap (~sage@2600:1010:b013:a0a3:3424:4060:73c3:2dac) has joined #ceph
[6:02] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Read error: No route to host)
[6:14] * sagelap (~sage@2600:1010:b013:a0a3:3424:4060:73c3:2dac) Quit (Ping timeout: 480 seconds)
[6:14] * mech422 (~steve@ip68-2-159-8.ph.ph.cox.net) has joined #ceph
[6:14] <mech422> Morning
[6:20] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:23] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:26] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:30] * yy-nm (~Thunderbi@122.233.46.14) Quit (Quit: yy-nm)
[6:34] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[6:34] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[6:45] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[6:51] * nwf (~nwf@67.62.51.95) has joined #ceph
[6:51] <nwf> Would some kind soul tell me what my Dumpling OSD (on ZFS on Linux) is doing wrong? http://pastebin.com/wJuA6e5n
[6:52] * jcl (~Adium@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[7:02] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[7:04] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:04] * KindTwo (~KindOne@198.14.192.241) has joined #ceph
[7:04] * KindTwo is now known as KindOne
[7:05] <yanzheng> nwf, the bug should be fixed in zfs 0.6.2
[7:06] <nwf> yanzheng: Alright! I'll see about upgrading. Thanks!
[7:06] <nwf> Erm, according to dpkg, I'm running 0.6.2-1~raring
[7:07] <nwf> Maybe I haven't rebooted since upgrading (d'oh). Hold on. :)
[7:16] <nwf> Whoo, that worked. OK. Sorry for the noise and thanks for the quick response!
[7:19] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[7:28] * grepory (~Adium@236.sub-70-192-193.myvzw.com) Quit (Quit: Leaving.)
[7:28] * sagelap (~sage@245.sub-70-197-81.myvzw.com) has joined #ceph
[7:35] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[7:35] * capri (~capri@212.218.127.222) Quit (Quit: Verlassend)
[7:38] * capri (~capri@212.218.127.222) has joined #ceph
[7:38] * KindTwo (~KindOne@198.14.197.54) has joined #ceph
[7:41] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:41] * KindTwo is now known as KindOne
[7:43] * xmltok_ (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[7:57] * sagelap (~sage@245.sub-70-197-81.myvzw.com) Quit (Ping timeout: 480 seconds)
[7:58] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[8:00] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[8:00] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[8:03] * zhangjf_zz2 (~zjfhappy@222.128.1.105) has joined #ceph
[8:08] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[8:14] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[8:22] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[8:26] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[8:27] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:27] * Vjarjadian (~IceChat77@05453253.skybroadband.com) Quit (Quit: Do fish get thirsty?)
[8:28] <jerker> malcolm: it would be pretty sweet if some of the users setting up prototypes could fill in. I have only run on eight (old) nodes.
[8:37] <mech422> would I be correct in assuming to use qemu-img with cephx, you need to supply the security credentials in the CEPH_XXX env var?
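For reference, a rough sketch of two common ways to hand cephx credentials to qemu-img (the pool/image names and paths below are made up; the env var the ceph CLI tools actually read is CEPH_ARGS, and whether a given qemu build honours it is worth testing against your qemu/librbd version):
    qemu-img create -f rbd 'rbd:mypool/myimage:id=admin:conf=/etc/ceph/ceph.conf' 2G
    CEPH_ARGS='--id admin --keyring /etc/ceph/ceph.client.admin.keyring' qemu-img info rbd:mypool/myimage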
[8:50] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[9:02] <malcolm> jerker: all good. I've got a client who is reluctant to use ceph because "it's not at all mature and nobody uses it"
[9:02] <malcolm> jerker: so I'm looking for some solid case studies.
[9:02] <mech422> malcolm: hehe - try selling mgmt on moosefs :-P "You wanna use WHAT ? "
[9:02] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:03] <mech422> btw - moose was pretty good (we used it in production for a couple of years...) but has the single meta-data SPOF/bottleneck like hadoop stuff
[9:03] <malcolm> mech422: heh. Well they are doing openstack, so you know its kinda a 'good idea' to go for ceph.
[9:04] <mech422> malcolm: I'm just starting to play with that now...
[9:05] <mech422> malcolm: OpenNebula actually seemed a bit more 'hacker friendly' - much less intrusive on network setup
[9:05] <mech422> malcolm: but Openstack is gonna be the 'big deal' - so I figured I should at least play with it
[9:06] <malcolm> mech422: Ahh this is a big formal thing.. they are 100% sold on the Openstack direction.
[9:06] <mech422> malcolm: yeah - IMHO, its a 'safe' direction to go - like "no one ever got fired for buying IBM"
[9:07] <mech422> its got the backing and the momentum
[9:07] <malcolm> mech422: well they did over here.. lol :D (I'm in QLD Australia, IBM stuffed up a major implementation of our state health payroll software)
[9:08] <mech422> malcolm: ROFL - guess I'm showing my age :-P
[9:08] <malcolm> mech422: nah I know what you mean by that one tho.
[9:08] <mech422> I was amazed at the hoops you have to jump thru with these systems to pin a VM to a particular routable IP... say for a mail server
[9:09] <mech422> cloudstack just didn't seem to do it, open nebula did it via directly overwriting the IP address in the 'contextualization'....
[9:09] * abcd (~saumya@14.139.82.6) has joined #ceph
[9:10] <malcolm> It's the whole 'follow AWS' thing. I didn't think they were supposed to get 'static' ips.. I could be going crazy of course
[9:10] <mech422> openstack actually could do it, which was a nice surprise
[9:10] * capri (~capri@212.218.127.222) Quit (Quit: Verlassend)
[9:14] * capri (~capri@212.218.127.222) has joined #ceph
[9:14] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[9:17] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[9:18] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[9:19] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:20] * abcd (~saumya@14.139.82.6) Quit (Ping timeout: 480 seconds)
[9:21] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Remote host closed the connection)
[9:23] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[9:24] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:24] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[9:30] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[9:31] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:34] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[9:44] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[9:50] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[9:57] * vbellur (~vijay@122.172.237.244) Quit (Ping timeout: 480 seconds)
[9:59] * glzhao_ (~glzhao@106.3.103.174) has joined #ceph
[10:00] * haomaiwang (~haomaiwan@117.79.232.211) has joined #ceph
[10:02] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[10:02] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[10:04] <jerker> malcolm: well, it totally depends on the usage. I am still just planning to use Ceph for disk backups as a fast complement to the TSM (Tivoli Storage Manager) backups we store off-site... In time Ceph will stabilize. File systems are notoriously hard to get stable. I have spent so much time over the years messing with NFS in Linux, wishing I could skip this crap compared to NFS on BSD or Solaris. (But Linux was better in other ways.)
[10:05] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[10:05] * ChanServ sets mode +v andreask
[10:05] * haomaiwa_ (~haomaiwan@117.79.232.248) Quit (Ping timeout: 480 seconds)
[10:06] * glzhao (~glzhao@117.79.232.216) Quit (Ping timeout: 480 seconds)
[10:11] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[10:28] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[10:29] * LeaChim (~LeaChim@054073b1.skybroadband.com) has joined #ceph
[10:29] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[10:31] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit ()
[10:31] * allsystemsarego (~allsystem@188.25.134.128) has joined #ceph
[10:32] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Read error: Connection reset by peer)
[10:32] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[10:37] * glzhao (~glzhao@117.79.232.243) has joined #ceph
[10:42] * sidarali (~asid@46.28.99.16) has joined #ceph
[10:44] * glzhao_ (~glzhao@106.3.103.174) Quit (Ping timeout: 480 seconds)
[10:50] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[10:56] * penguinLord (~penguinLo@14.139.82.6) has joined #ceph
[10:56] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[10:57] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[10:58] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Quit: Leaving)
[11:00] <sidarali> Hello, can somebody help me with creating a bucket via radosgw? I've set up radosgw and am trying the s3 api as per http://ceph.com/docs/master/radosgw/s3/java/ - at http://ceph.com/docs/master/radosgw/s3/java/#listing-owned-buckets it gets a list of buckets and there are 3 buckets; how were these buckets created?
[11:00] <sidarali> I have 405 error when I try to create my own bucket
[11:03] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[11:05] * ScOut3R (~ScOut3R@4E5C2305.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[11:09] <mech422> sidarali: Sorry - seems a bit quiet in here tonight... (and I'm a noob, so I'm not much help :-P )
[11:09] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[11:10] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[11:10] * ChanServ sets mode +v andreask
[11:14] <sidarali> yep, will ask later
[11:16] * sidarali (~asid@46.28.99.16) has left #ceph
[11:24] * Meths_ (~meths@2.25.191.175) has joined #ceph
[11:27] * KindTwo (~KindOne@h216.40.186.173.dynamic.ip.windstream.net) has joined #ceph
[11:28] * Meths (~meths@2.25.213.185) Quit (Ping timeout: 480 seconds)
[11:28] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:28] * KindTwo is now known as KindOne
[11:30] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:30] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[11:34] * YD (YD@b.clients.kiwiirc.com) has joined #ceph
[11:42] * capri (~capri@212.218.127.222) has joined #ceph
[11:43] * sidarali (~asid@46.28.99.16) has joined #ceph
[11:47] * jjgalvez (~jjgalvez@207.96.227.9) Quit (Quit: Leaving.)
[11:50] * malcolm (~malcolm@101.165.48.42) has joined #ceph
[11:53] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[11:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[11:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[11:58] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[12:01] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:08] <nerdtron> is there a way to decrease the number of placement groups assigned to an osd pool?
[12:11] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:12] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:12] * penguinLord (~penguinLo@14.139.82.6) Quit (Quit: irc2go)
[12:18] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:22] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[12:26] * andreask1 (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:26] * ChanServ sets mode +v andreask1
[12:26] * andreask is now known as Guest6520
[12:26] * andreask1 is now known as andreask
[12:26] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[12:26] * Guest6520 (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[12:29] * glzhao (~glzhao@117.79.232.243) Quit (Quit: leaving)
[12:30] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[12:32] * claenjoy (~leggenda@37.157.33.36) has joined #ceph
[12:41] * roald (~roaldvanl@139-63-21-115.nodes.tno.nl) has joined #ceph
[12:41] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:50] * penguinLord (~penguinLo@14.139.82.6) has joined #ceph
[12:50] <penguinLord> I am new to ceph. I used sudo rbd create foo --size 1024 to create a new image. I wanted to know the location where it is getting created. Can someone help?
[13:00] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[13:01] <mech422> I'm a newb - but since you didn't specify a pool - I'd assume it's in the default pool
[13:01] * todin_ (tuxadero@kudu.in-berlin.de) has joined #ceph
[13:02] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:02] <penguinLord> mech422 : what's the physical location or memory mapping of the default pool?
[13:03] <mech422> umm - 'ceph' :-P
[13:03] <mech422> ceph is gonna determine the 'physical location' based on your crush maps etc
[13:03] <mech422> the 'pool' is cluster wide - only the objects have a 'location'
[13:03] * todin (tuxadero@kudu.in-berlin.de) Quit (Ping timeout: 480 seconds)
[13:04] <mech422> ceph osd lspools will list the pools in your cluster
[13:04] * lordinvader (~lordinvad@14.139.82.6) has joined #ceph
[13:04] <mech422> and there is a 'default' pool called 'rbd' which is where I'd assume your object is
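A quick way to confirm where the image ended up (assuming the image really is called foo and went into the default 'rbd' pool):
    ceph osd lspools        # list pools; 'rbd' is the default one
    rbd ls rbd              # images in the 'rbd' pool, should include 'foo'
    rbd info rbd/foo        # image size, order, object prefix
Note there is no single on-disk location: the image is striped across many RADOS objects, and CRUSH decides which OSDs hold each of them.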
[13:04] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[13:04] * ChanServ sets mode +v andreask
[13:05] <lordinvader> hi, I'm running a very basic Ceph setup having one mon and two osds on one host only, it works fine but the ceph-mon vanishes from ps aux after some time, and I have to run 'sudo restart ceph-mon-all'
[13:06] <mech422> lordinvader: I'd assume its crashing ? Did you look in your log files to see ?
[13:06] <lordinvader> oh, sorry, just saw the logs, apparently it's shutting down because of low disk space
[13:06] <mech422> hehe - good ceph ! :-)
[13:07] <lordinvader> mech422, hehe :)
[13:07] <mech422> kinda a neat feature - never knew about that ... is it a clean shutdown or a crash?
[13:07] <lordinvader> seems like a neat shutdown
[13:07] <penguinLord> lordinvader : can you tell me why I am able to create images using the rbd command but not the python api?? any reasons
[13:07] <lordinvader> 2013-09-13 16:32:49.514937 7ffd3d6c6700 0 mon.artoo@0(leader).data_health(1) update_stats avail 0% total 54730144 used 51610872 avail 339328
[13:07] <lordinvader> 2013-09-13 16:32:49.515008 7ffd3d6c6700 -1 mon.artoo@0(leader).data_health(1) reached critical levels of available space on data store -- shutdown!
[13:07] <lordinvader> 2013-09-13 16:32:49.515023 7ffd3d6c6700 0 ** Shutdown via Data Health Service **
[13:08] <mech422> oh sweet!
[13:08] <lordinvader> penguinLord, I'm no expert but are you sure you are passing the correct conf file to the rados instance?
[13:10] <penguinLord> lordinvader : I am passing the ceph.conf created in the local directory and not in the /etc with auth supported as none
[13:10] <mech422> I was supposed to be setting up radosgw tonight, but instead I'm sitting trying to figure out why vlans don't work on one of my boxes :-P
[13:13] <lordinvader> penguinLord, if you're not specifying the conf file explicitly while running rbd, it would automatically pick up the one in /etc/ceph, so I'm guessing that's the one you need to use
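For the python side, a minimal sketch of creating an image with the bindings (the conffile path and pool name here are assumptions; pass whatever ceph.conf the rbd CLI is actually using):
    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # or the local ceph.conf path
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')                       # default pool
    try:
        rbd.RBD().create(ioctx, 'foo', 1024 * 1024 * 1024)  # size is in bytes here, not MB
    finally:
        ioctx.close()
        cluster.shutdown()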
[13:21] * zhangjf_zz2 (~zjfhappy@222.128.1.105) Quit (Remote host closed the connection)
[13:23] * dlan (~dennis@116.228.88.131) Quit (Quit: Lost terminal)
[13:23] <penguinLord> I am trying to create 3 osds for a demo on my machine using ceph-deploy. But when I do ps aux | grep osd I can see only two osd daemons running. Is there some limit to the number of osds or am I doing something wrong?
[13:25] <mech422> I found ceph-deploy a bit touchy
[13:25] <mech422> but that was mostly for mons - osd's didn't give me much trouble
[13:26] <mech422> I _did_ have to xfs format the partitions I wanted to use BEFORE I told ceph-deploy about them
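A rough sketch of that flow with ceph-deploy (device and host names are made up, and the exact ceph-deploy syntax varies a bit between versions):
    mkfs.xfs -f /dev/sdd1                      # pre-format the data partition as XFS first
    ceph-deploy osd prepare node1:/dev/sdd1
    ceph-deploy osd activate node1:/dev/sdd1
    ps aux | grep ceph-osd                     # each prepared OSD should show up as its own daemon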
[13:26] * dlan (~dennis@116.228.88.131) has joined #ceph
[13:34] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[13:50] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[13:58] * nhm (~nhm@mdf0536d0.tmodns.net) has joined #ceph
[13:58] * penguinLord (~penguinLo@14.139.82.6) Quit (Quit: irc2go)
[13:58] * Clabbe (~oftc-webi@193.15.240.60) has joined #ceph
[13:59] <Clabbe> Hi, when creating an osd is there any way to define which id it should get?
[13:59] <Clabbe> using osd create just gives you a number
[14:01] * malcolm_ (~malcolm@101.165.48.42) has joined #ceph
[14:01] * malcolm (~malcolm@101.165.48.42) Quit (Quit: Konversation terminated!)
[14:03] <sidarali> Hello, can somebody help me with creating a bucket via radosgw? I've set up radosgw and am trying the s3 api as per http://ceph.com/docs/master/radosgw/s3/java/ - at http://ceph.com/docs/master/radosgw/s3/java/#listing-owned-buckets it gets a list of buckets and there are 3 buckets; how were these buckets created?
[14:03] <sidarali> I have 405 error when I try to create my own bucket
[14:08] * jjgalvez (~jjgalvez@207.96.227.9) has joined #ceph
[14:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:09] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:10] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:12] * penguinLord (~penguinLo@14.139.82.6) has joined #ceph
[14:12] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[14:13] * nhm (~nhm@mdf0536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[14:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:17] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:22] <joelio> umm, I seem to have lost my admin sockets?
[14:22] <joelio> /var/run/ceph empty on osd hosts
[14:23] <joelio> root@vm-ds-05:~# ceph --admin-daemon /var/run/ceph/ceph-mds.vm-ds-05.mcuk.asok config show | grep rbd_cache "rbd_cache": "true",
[14:23] <joelio> oops, I have one I mean
[14:23] <joelio> on and mds
[14:23] <joelio> but no osds/mon
[14:26] * claenjoy (~leggenda@37.157.33.36) Quit (Remote host closed the connection)
[14:27] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[14:30] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Read error: Operation timed out)
[14:31] * jjgalvez (~jjgalvez@207.96.227.9) Quit (Quit: Leaving.)
[14:36] * jcfischer (~fischer@macjcf.switch.ch) has joined #ceph
[14:37] <jcfischer> I'm trying to mount CephFS with ceph-fuse (according to a post on the mailing list) and can't get it to mount. If I run it on the command line, it never returns; if I run it from /etc/fstab I get the following error
[14:38] <jcfischer> fuse: bad mount point `/mnt/instances': Transport endpoint is not connected
[14:38] <jcfischer> ceph-fuse[52042013-09-13 14:38:22.632446 7f0630f2e7c0 -1 fuse_parse_cmdline failed.
[14:38] <jcfischer> ]: fuse failed to initialize
[14:38] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[14:38] <jcfischer> ceph-fuse[5195]: mount failed: (22) Invalid argument
[14:39] <jcfischer> this is the manual command line: ceph-fuse -m 130.zzz.yyy.xx /mnt/instances -d -r instances
[14:39] <jcfischer> and here the /etc/fstab line:
[14:41] <jcfischer> id=admin /mnt/instances fuse.ceph defaults 0 0
[14:42] <jcfischer> nothing in any /var/log/ceph/ log file or in syslog
[14:50] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[14:51] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:54] <yanzheng> ceph-fuse -m 130.zzz.yyy.xx /mnt/instances -d -r /instances
[14:54] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[14:55] <roald> Clabbe, no, ceph will provide you the osd id, you can't supply it yourself
[14:57] <absynth> not anymore...
[14:57] <absynth> oh alas, those were great times...
[14:58] <jcfischer> yanzheng: fuse: bad mount point `/mnt/instances': Transport endpoint is not connected
[15:00] <yanzheng> fusermount -u /mnt/instances
[15:01] * berant_ (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[15:02] <jcfischer> yanzheng: perfect - except that it doesn't daemonize
[15:03] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[15:03] <jcfischer> next step - /etc/fstab
[15:05] * jjgalvez (~jjgalvez@64.34.151.178) has joined #ceph
[15:05] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Ping timeout: 480 seconds)
[15:05] * berant_ is now known as berant
[15:09] <joao> <absynth> oh alas, those were great times... <- when you could have huge gaps on your osdmap? :p
[15:10] <jcfischer> yanzheng: hmm root=/instances in fstab is flagged as unknown option `--root=/instances' (same for r=/instances)
[15:11] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[15:12] <BillK> trying to grab stable from git ... tells me stable doesn't exist but "next" and "master" are ok ... http://ceph.com/docs/next/install/clone-source/
[15:12] * lordinvader (~lordinvad@14.139.82.6) Quit (Ping timeout: 480 seconds)
[15:13] * YD (YD@b.clients.kiwiirc.com) Quit (Remote host closed the connection)
[15:14] * YD (YD@b.clients.kiwiirc.com) has joined #ceph
[15:14] <yanzheng> jcfischer, remove -d option if you want it to daemonize
[15:14] <jcfischer> ah - misread. Anyway, trying to get it mounted via fstab now
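For reference, the invocation that ended up working interactively (same as the earlier command line but with a leading slash on the subtree per yanzheng's correction, and without -d so it daemonizes):
    ceph-fuse -m 130.zzz.yyy.xx -r /instances /mnt/instances
    fusermount -u /mnt/instances     # to unmount or clean up a stale mount point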
[15:17] * roald (~roaldvanl@139-63-21-115.nodes.tno.nl) Quit (Ping timeout: 480 seconds)
[15:21] * KevinPerks (~Adium@64.34.151.178) Quit (Quit: Leaving.)
[15:23] * grepory (~Adium@8.25.24.2) has joined #ceph
[15:23] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[15:23] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:36] * claenjoy (~leggenda@37.157.33.36) has joined #ceph
[15:36] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[15:36] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:42] * YD (YD@b.clients.kiwiirc.com) Quit (Remote host closed the connection)
[15:42] * glzhao (~glzhao@118.195.65.67) Quit (Quit: leaving)
[15:42] * YD (YD@b.clients.kiwiirc.com) has joined #ceph
[15:43] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[15:45] * sjm (~sjm@64.34.151.178) has joined #ceph
[15:52] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:52] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:53] * shang (~ShangWu@64.34.151.178) has joined #ceph
[15:54] * grepory (~Adium@8.25.24.2) has joined #ceph
[15:55] * grepory (~Adium@8.25.24.2) Quit ()
[15:55] * dmsimard (~Adium@ap05.wireless.co.mtl.iweb.com) has joined #ceph
[15:57] * dmsimard1 (~Adium@108.163.152.2) has joined #ceph
[15:57] * grepory (~Adium@8.25.24.2) has joined #ceph
[15:57] * grepory (~Adium@8.25.24.2) Quit ()
[16:00] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:01] * markbby (~Adium@168.94.245.2) Quit (Ping timeout: 480 seconds)
[16:03] <jtang> any europeans going to ceph days in london?
[16:03] * dmsimard (~Adium@ap05.wireless.co.mtl.iweb.com) Quit (Ping timeout: 480 seconds)
[16:06] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[16:07] <malcolm_> Random question: when adding a new OSD, is there a way you can get it to determine its own weight based on size?
[16:07] <absynth> don't think so
[16:09] <malcolm_> damn. I know they auto set weight during the build of a new cluster.. Oh well..
[16:10] <absynth> if you build it with whatchamacallit, ceph-deploy or something?
[16:11] <malcolm_> nah
[16:12] <malcolm_> I did it mkcephfs styles :P
[16:12] <malcolm_> All the OSDs that were available at 'first' start got auto-assigned weights
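If a weight does need adjusting after the fact, a minimal sketch (the osd id and weight below are made up; by convention the CRUSH weight is roughly the disk's capacity in TB):
    ceph osd tree                        # show current weights
    ceph osd crush reweight osd.12 2.73  # e.g. a 3TB disk is ~2.73 TiB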
[16:14] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[16:19] <Clabbe> jtang: give me a planeticket :D
[16:21] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:22] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:24] * julian (~julianwa@125.70.133.27) Quit (Quit: afk)
[16:26] * markbby (~Adium@168.94.245.2) Quit ()
[16:26] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:29] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[16:31] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[16:37] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:37] * dmsimard1 is now known as dmsimard
[16:40] * erice_ (~erice@50.240.86.181) has joined #ceph
[16:41] * erice (~erice@50.240.86.181) Quit (Read error: Operation timed out)
[16:44] <joao> jtang, loicd and I, at least
[16:44] * julian (~julianwa@125.70.133.27) has joined #ceph
[16:44] <joao> I recall others mentioning they had just registered, so my guess is yes
[16:46] <scuttlemonkey> jtang: at last count I think we had around 40 registrations...and that was a couple weeks ago
[16:47] <scuttlemonkey> and with the exception of Bryan, Sage, and I...all speakers are non-Inktank Europeans
[16:47] <scuttlemonkey> http://cephdaylondon.eventbrite.com/
[16:48] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:50] * grepory (~Adium@8.25.24.2) has joined #ceph
[16:50] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[16:52] * clayb (~kvirc@proxy-nj1.bloomberg.com) has joined #ceph
[16:53] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[16:53] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit ()
[16:54] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[16:56] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[16:56] * kislotniq (~kislotniq@193.93.77.54) Quit (Read error: Connection reset by peer)
[17:07] * ntranger (~ntranger@proxy2.wolfram.com) has joined #ceph
[17:10] * vata (~vata@2607:fad8:4:6:64ae:587d:165f:d125) has joined #ceph
[17:14] * ntranger_ (~ntranger@proxy2.wolfram.com) Quit (Ping timeout: 480 seconds)
[17:15] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[17:19] * sidarali (~asid@46.28.99.16) Quit (Ping timeout: 480 seconds)
[17:22] <sage> zackc: https://github.com/ceph/teuthology/pull/91
[17:24] * diegows (~diegows@190.190.11.42) has joined #ceph
[17:25] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:25] <alfredodeza> sage: zack is not feeling too good, he might join later
[17:25] <alfredodeza> let me take a look
[17:26] <alfredodeza> ah yesssss that would've helped me yesterday :D
[17:27] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[17:30] * alfredodeza commented
[17:30] * yehuda_hm (~yehuda@2602:306:330b:1410:8178:aace:6e68:e9f2) Quit (Ping timeout: 480 seconds)
[17:31] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[17:35] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[17:37] * julian (~julianwa@125.70.133.27) Quit (Quit: afk)
[17:39] * yehuda_hm (~yehuda@2602:306:330b:1410:9de:8b13:f9a3:9b8) has joined #ceph
[17:39] <sage> alfredodeza: pushed 2 more patches
[17:39] * alfredodeza looks
[17:42] * YD (YD@b.clients.kiwiirc.com) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[17:42] <ccourtaut> yehuda_hm: ping
[17:42] * sagelap (~sage@76.89.177.113) has joined #ceph
[17:47] * yehuda_hm (~yehuda@2602:306:330b:1410:9de:8b13:f9a3:9b8) Quit (Ping timeout: 480 seconds)
[17:47] * todin_ (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[17:49] * thb (~me@port-3910.pppoe.wtnet.de) has joined #ceph
[17:50] <ntranger> hey alfredodeza, I'm trying to mount ceph fs, and I keep getting an "unknown filesystem type 'ceph'" error. Do I need to do this on the ceph node first before I mount it on another system?
[17:50] <alfredodeza> ntranger: what command are you running?
[17:50] <alfredodeza> is this ceph-deploy you are using?
[17:50] <alfredodeza> also, try whatever ceph-deploy is attempting to do on the remote host
[17:51] <ntranger> ok. I'm following these instructions.
[17:51] <ntranger> http://ceph.com/docs/master/cephfs/kernel/
[17:54] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[17:55] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:56] <ntranger> I didn't use ceph-deploy to make the mount
[17:56] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:56] <alfredodeza> ntranger: I am about to get into a meeting, let me get back at you
[17:57] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[17:57] <ntranger> sure thing. Thanks. :)
[17:58] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[17:58] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[17:59] * ircolle1 (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[17:59] * angdraug (~angdraug@c-98-248-39-148.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:00] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[18:01] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[18:01] * thomnico (~thomnico@64.34.151.178) Quit (Read error: Connection reset by peer)
[18:03] * danieagle (~Daniel@177.133.173.100) has joined #ceph
[18:05] * malcolm_ (~malcolm@101.165.48.42) Quit (Ping timeout: 480 seconds)
[18:06] <mattch> ntranger: I take it you have the ceph kernel module available on the client you're mounting on?
[18:07] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[18:08] <ntranger> mattch I was told it was on there, I didn't install it. That could be the problem
[18:09] <mattch> ntranger: what doe s'modinfo ceph' show ?
[18:10] <ntranger> couldn't find module
[18:11] <mattch> ntranger: then you don't have ceph support in your kernel. In which case you probably want to look at ceph-fuse for mounts
[18:12] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:13] * jcfischer (~fischer@macjcf.switch.ch) Quit (Ping timeout: 480 seconds)
[18:13] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[18:15] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:16] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Quit: my troubles seem so far away, now yours are too...)
[18:17] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[18:18] <ntranger> mattch thanks so much. I figured as much. They said using fuse would be slower or something, but there might not be a driver for CentOS/Scientific Linux 6.4.
[18:18] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:18] <mattch> ntranger: Yep - no ceph kernel support pre 3. something kernels iirc
[18:20] <ntranger> mattch: so fuse is pretty much the only option we have here?
[18:20] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[18:20] <dmsimard> Hi, getting a weird issue - I'm not sure where to look .. ? I'm expecting this command to work: "ceph --name 'mon.' --keyring '/var/lib/ceph/tmp/keyring.mon.01'" but I'm getting this garbage output instead: http://pastebin.com/raw.php?i=r4KnUtQ3
[18:21] <mattch> ntranger: Yep - I can't comment on the speed issues, but the one upside is that if cephfs crashes (as it is occasionally wont to do) then it's at least all in userspace
[18:21] <mattch> dmsimard: What does 'ceph status' show you? Looks like your mons aren't started, or are having trouble with quorum/communication
[18:23] <dmsimard> mattch: I'm in the process of setting up a ceph cluster, in fact - the mons aren't yet started - I was guessing they should be started for this to work as well
[18:23] <dmsimard> It looks like they're trying to communicate between each other
[18:23] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[18:23] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[18:23] * ChanServ sets mode +o scuttlemonkey
[18:24] <ntranger> mattch: I tried ceph-fuse, and got "command not found"
[18:24] <mattch> dmsimard: Yep - the mons must be up and quorate to be able to speak to them with the ceph command.
[18:24] <mattch> ntranger: Are you installing the rpms from: http://ceph.com/rpm/el6/ ? If so you need the ceph-fuse pkg
[18:25] <dmsimard> mattch: Yeah, you're right .. the command works now that the mons are started - expected as much. Thanks :)
[18:25] * sagelap (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[18:25] <mattch> dmsimard: Easy mistake :)
[18:26] <ntranger> install the ceph-fuse pkg on the ceph node, or on the machine I'm mounting to?
[18:26] <cmdrk> fwiw i've had success building custom kernels with CephFs and RBD modules for SL6.4
[18:27] <cmdrk> of course the warranty goes out the window with that route :)
[18:27] <ntranger> mattch: sorry to be such a pain. I'm fairly new to all this
[18:27] <dmsimard> Hmm, I found what my problem was
[18:28] <dmsimard> I expected "service ceph start mon.01" to work but apparently it doesn't - I did it through "ceph-mon -i 01" and it worked. Gotta figure that out.
[18:30] <mattch> dmsimard: If you're trying to start/stop a mon or osd on a different server from where you're running the command, you need 'service ceph -a stop mon.x'
[18:30] <dmsimard> What about if I'm trying to start it from the mon itself ?
[18:30] <mattch> dmsimard: That should just work - assuming you have a ceph.conf that defines a [mon.01] section
[18:31] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:32] <dmsimard> Have an entry in the ceph.conf for that mon, "service ceph start mon.01" doesn't provide any output - nothing in the logs either
[18:32] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[18:32] <dmsimard> I'll try and see if I can do it remotely with ceph -a
[18:32] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[18:32] <ntranger> mattch: I need to install ceph-fuse on the ceph node, correct?
[18:33] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:33] <mattch> ntranger: On the node where you want to mount the fs
[18:33] <ntranger> ok
[18:35] * erice_ (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[18:37] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[18:38] * sjm (~sjm@64.34.151.178) Quit (Quit: Leaving)
[18:40] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[18:40] <odyssey4me> dmsimard - try 'service ceph-mon start mon.01' ?
[18:41] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[18:41] <dmsimard> odyssey4me: Investigating something right now, probably has to do with my ceph.conf file - i'll report back in a bit :)
[18:42] * jjgalvez (~jjgalvez@64.34.151.178) Quit (Quit: Leaving.)
[18:42] * todin (tuxadero@kudu.in-berlin.de) has joined #ceph
[18:44] * gillesMo (~gillesMo@00012912.user.oftc.net) has joined #ceph
[18:44] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[18:45] <dmsimard> Okay, I found the root cause - all is well!
[18:49] * rocker_raj (~oftc-webi@14.139.82.8) has joined #ceph
[18:49] <joao> dmsimard, can you share with the rest of us your findings (for future reference and in case anyone is currently interested)? :)
[18:50] <rocker_raj> I am new to ceph. Can anyone tell me what is the python api call for mapping as is done using rbd mapon command line???
[18:51] <decede> is there still a howto around that doesn't use ceph-deploy?
[18:51] * KevinPerks (~Adium@64.34.151.178) has left #ceph
[18:52] <dmsimard> joao: in ceph.conf, "host" was at 01 instead of mon01
[18:52] <joao> cool, thanks :)
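A sketch of the kind of stanza the init script expects (the address below is made up); the sysvinit script decides which daemons belong to a box by matching that host value against `hostname -s`, so it needs to be the machine's short hostname rather than the mon id:
    [mon.01]
        host = mon01
        mon addr = 192.168.0.11:6789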
[18:54] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[18:57] * youyou24 (~youcef@41.200.16.36) has joined #ceph
[18:59] <rocker_raj> I am new to ceph. Can anyone tell me what is the python api call for mapping as is done using rbd mapon command line???
[19:00] * youyou24 (~youcef@41.200.16.36) Quit ()
[19:00] <rocker_raj> *map on
[19:03] <gregaf1> I don't use the python bindings, but maybe wido or joshd will become available later to discuss it
[19:05] * sidarali (~asid@80.249.91.216) has joined #ceph
[19:06] * ikla (~lbz@c-67-190-136-245.hsd1.co.comcast.net) has joined #ceph
[19:07] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:08] * rocker_raj (~oftc-webi@14.139.82.8) Quit (Quit: Page closed)
[19:11] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[19:12] * shang (~ShangWu@64.34.151.178) Quit (Remote host closed the connection)
[19:12] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:13] * sidarali (~asid@80.249.91.216) Quit (Ping timeout: 480 seconds)
[19:17] * bclark (~bclark@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[19:18] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:22] <ikla> anyone alive?
[19:22] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (Ping timeout: 480 seconds)
[19:27] <dmick> ikla: no
[19:32] * sagelap (~sage@12.248.40.138) has joined #ceph
[19:34] * gillesMo (~gillesMo@00012912.user.oftc.net) Quit (Quit: Konversation terminated!)
[19:35] * claenjoy (~leggenda@37.157.33.36) Quit (Quit: Leaving.)
[19:36] <joao> it depends on what you mean by alive
[19:36] <joao> and whether we're going deep into a metaphysical discussion
[19:39] <ikla> I'm looking to do a new ceph system with 2 or 3x replication for data, doing about 20-30TB usable - what would you recommend for hardware or # of systems?
[19:39] * yasu` (~yasu`@dhcp-59-166.cse.ucsc.edu) has joined #ceph
[19:40] <dmick> 60-90 TB of disk :)
[19:40] <ikla> ok
[19:40] <dmick> seems like the sweet spot these days is 2-3TB drives; that feels like say 30-40, which feels like, say, 8-10 hosts
[19:40] <ikla> do you normally run an osd per disk, or a hw raid of multiple disks for each osd?
[19:41] <dmick> osd per disk is what we usually recommend unless you have really good reasons not to
[19:41] <ikla> amount of memory or cores needed typically?
[19:41] <janos> isn't there a hardware recommendation page?
[19:42] <janos> http://ceph.com/docs/next/install/hardware-recommendations/
[19:44] <janos> ooh this looks much more robust since last i read it
[19:47] * jjgalvez (~jjgalvez@207.96.227.9) has joined #ceph
[19:48] * danieagle (~Daniel@177.133.173.100) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[19:48] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[19:48] * ChanServ sets mode +v andreask
[19:49] * sagelap (~sage@12.248.40.138) Quit (Quit: Leaving.)
[19:49] * sagelap (~sage@12.248.40.138) has joined #ceph
[19:51] * chamings (~jchaming@134.134.139.70) Quit (Remote host closed the connection)
[19:55] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:55] * yehuda_hm (~yehuda@2602:306:330b:1410:9de:8b13:f9a3:9b8) has joined #ceph
[19:56] * ScOut3R (~scout3r@4E5C2305.dsl.pool.telekom.hu) has joined #ceph
[19:59] <ikla> how many journals per ssd do people typically use?
[19:59] * ismell (~ismell@host-24-56-171-198.beyondbb.com) Quit (Ping timeout: 480 seconds)
[20:01] * thb (~me@port-3910.pppoe.wtnet.de) has joined #ceph
[20:01] * thb is now known as Guest6561
[20:07] <med> ikla, CEPH folks recommend no more than you can afford to lose at one time (4-6)
[20:07] <wrencsok> it would depend on your ssd, bus, drive controller, and disks. I have 2 different systems I work on tuning. Determine your ssd's io throughput, divide by what your disks can handle. For us it was a bottleneck
[20:08] <wrencsok> we had 7 journals on a 220MB/s ssd
[20:08] <wrencsok> serving seven disks, load was an issue. and throughput on that path a huge issue. i am changing that to 4 ssd's that are 550MB/s serving only 3 drives.
[20:08] <wrencsok> with 12 drives per chassis and a bus and controller that can handle all that
[20:10] <wrencsok> each ssd in my boxes serves only 3 drives, i'd like to reduce that to 1 ssd per 2 drives to really optimize that, but it's a bit of a hard sell for our mgmt.
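The arithmetic behind those numbers: 220 MB/s split across 7 journals leaves roughly 31 MB/s per disk, well under what a single spinner can stream, while 550 MB/s across 3 disks leaves roughly 180 MB/s each - at which point the SSD stops being the bottleneck.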
[20:10] * rocker_raj (~oftc-webi@14.139.82.8) has joined #ceph
[20:10] <dmick> ideally you have ssds for journal and main storage but that runs to money :D
[20:11] <dmsimard> really? I didn't think journals took so much resources
[20:11] <rocker_raj> Hi guys. I am a newbie to ceph. For a project I need to discover disk resources across the servers. Is there any ceph command which can give me info regarding the disk size or available disk size??? thanks..
[20:13] <dmick> dmsimard: tanstaafl
[20:13] <dmick> rocker_raj: depends on what you mean. there's ceph df
[20:14] <wrencsok> rados df? if i read your question correctly.
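Both commands exist and are complementary; roughly what each reports:
    ceph df      # cluster-wide raw used/free plus per-pool usage
    rados df     # per-pool objects, KB used and read/write op counts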
[20:14] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[20:14] <dmsimard> dmick: what!?
[20:14] <dmsimard> Oh okay, I googled that.
[20:15] * roald (~roaldvanl@87.209.150.214) Quit (Read error: No route to host)
[20:15] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[20:15] * rovar (~oftc-webi@proxy-nj2.bloomberg.com) has joined #ceph
[20:15] <dmick> sorry, old sf geekspeak
[20:15] <rovar> *sigh*
[20:16] <dmsimard> Damn, I'm going to run into problems then ? I'm planning to bench 8 pairs of raid-0 (16 mechanical sata drives) with a pair of SSDs in raid-1 for os/journaling
[20:16] * Guest6561 is now known as thb
[20:16] <dmick> heh. interesting; I was not aware of rados df. I wonder how it compares to ceph df
[20:16] <rovar> question: what is the best path to start debugging bugs relating to: verify_authorizer could not get service secret for serv ice osd secret_id=228
[20:16] <dmick> dmsimard: that sounds like a lot of load on that ssd pair, but that's only a gut feeling
[20:17] <rovar> i think its related to the fact that my rgw stopped working even though nothing was changed
[20:17] <dmsimard> dmick: I'll have to do some tests with that I guess, we'll find out - really good to know.
[20:19] <dmick> rovar: I don't know much, but it sounds like something happened to the osd's keys and/or fsid; is it possible its store got damaged?
[20:20] <ikla> no mirroring on journals
[20:20] <rovar> dmick: I'm not sure how best to check,, it seems some requests are still working..
[20:21] <rovar> it may be that the radosgw keyring on this specific machine was hosed somehow
[20:21] <rovar> hmm, the keyring hasn't been touched since I installed.. so that's out,,
[20:22] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[20:22] <dmick> there are many keyrings
[20:23] <dmick> I mean the one in /var/lib/ceph/osd/osd-<n>/keyring (and fsid and ceph_fsid there)
[20:24] <rovar> so I guess the question is: is the osd failing to auth to something? I was thinking that the osd was trying to auth an incoming request and failed
[20:24] <rovar> so it was some other keyring that was broken..
[20:24] <rovar> i'll check the osd's..
[20:24] <dmick> yeah, I'm not sure
[20:24] * roald (~roaldvanl@87.209.150.214) Quit (Read error: Connection reset by peer)
[20:24] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[20:25] * rocker_raj (~oftc-webi@14.139.82.8) Quit (Remote host closed the connection)
[20:26] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Ping timeout: 480 seconds)
[20:28] <ikla> how big are journals usually per osd?
[20:28] <ikla> say 4TB
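A commonly cited rule of thumb from the Ceph docs (quoted from memory, so treat it as an assumption): the journal is sized from throughput, not from the size of the 4TB data disk:
    osd journal size = 2 * (expected throughput * filestore max sync interval)
e.g. ~100 MB/s of disk/network throughput and the default 5 s sync interval works out to about 1 GB, and many deployments simply round up to 5-10 GB per OSD.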
[20:29] * peetaur is now known as Guest6564
[20:29] * peetaur (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) has joined #ceph
[20:31] * Guest6564 (~peter@CPEbc1401e60493-CMbc1401e60490.cpe.net.cable.rogers.com) Quit (Ping timeout: 480 seconds)
[20:32] * lordinvader (~lordinvad@14.139.82.6) has joined #ceph
[20:35] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[20:37] * ArtVark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) has joined #ceph
[20:47] * sjm (~sjm@207.164.135.98) has joined #ceph
[20:48] * grepory (~Adium@8.25.24.2) has joined #ceph
[20:52] <ntranger> mattch: Sorry, I got yanked in to a meeting, then lunch. I got fuse working fine, and all is mounted and well. Thanks so much for your assistance, as well as Tamil, dmick, alfredodeza, sage, and anyone else I might have bothered. :)
[20:53] * Meths_ is now known as Meths
[20:53] * sjm (~sjm@207.164.135.98) Quit (Quit: Leaving)
[20:53] * sjm (~sjm@207.164.135.98) has joined #ceph
[20:53] * sjm_ (~sjm@207.164.135.98) has joined #ceph
[20:54] * sjm_ (~sjm@207.164.135.98) Quit ()
[20:54] * sjm (~sjm@207.164.135.98) has left #ceph
[20:54] * sjm (~sjm@207.164.135.98) has joined #ceph
[21:01] * lordinvader (~lordinvad@14.139.82.6) Quit (Ping timeout: 480 seconds)
[21:01] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[21:02] * yasu` (~yasu`@dhcp-59-166.cse.ucsc.edu) Quit (Remote host closed the connection)
[21:04] * Nikhar (~nikhar@14.139.82.6) has joined #ceph
[21:05] <Nikhar> Hi, when I type "ceph osd pool create libvirt-pool 128 128" I get the error "ERROR: missing keyring, cannot use cephx for authentication", but if I type "ceph osd pool create libvirt-pool 128 128 -k ceph.client.admin.keyring", it works fine
[21:06] <Nikhar> Is there a way to avoid having to type the name of the keyring?
[21:07] <dmsimard> I believe the client expects the key to be in /etc/ceph/keyring
[21:07] <dmsimard> Is it there ?
[21:07] <Nikhar> just a sec I'll check
[21:08] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) has joined #ceph
[21:08] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[21:08] * sagelap (~sage@12.248.40.138) Quit (Read error: Operation timed out)
[21:08] <Nikhar> I have /etc/ceph/ceph.client.admin.keyring
[21:09] <Nikhar> should it rather be /etc/ceph/keyring/ceph.client.admin.keyring?
[21:09] <dmsimard> Well, what does your ceph.conf say ? For instance, mine has a line "keyring = /etc/ceph/keyring"
[21:10] * roald (~roaldvanl@87.209.150.214) Quit (Read error: Connection reset by peer)
[21:10] * roald (~roaldvanl@87.209.150.214) has joined #ceph
[21:11] <Nikhar> ahh....I don't have a line specifying the location of the keyring
[21:12] <Nikhar> I added "keyring = /etc/ceph" to the config file...still doesn't work
[21:14] <rovar> mine is : keyring = /etc/ceph/$cluster.$name.keyring
[21:14] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[21:14] <rovar> then there is a ceph.client.admin.keyring
[21:14] <rovar> in that dir
[21:15] <rovar> dmick: interestingly, this cephx problem goes away when I bounce the radosgw on that host
[21:15] <rovar> dmick: so it seems like the auth session is going away and not being renewed?
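(A hedged aside on rovar's verify_authorizer error: "could not get service secret ... secret_id=NNN" commonly means a daemon is presenting or looking up an expired rotating service key, which is often caused by clock skew between hosts; restarting the client — here radosgw — forces it to fetch fresh keys, which would match the behaviour described above. A quick check, as a sketch:)

    # look for "clock skew detected" warnings on the monitors
    ceph health detail
    # and confirm ntp is keeping the mon, osd and rgw hosts in sync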
[21:16] <Nikhar> I added keyring = /etc/ceph/$cluster.$name.keyring to ceph.conf ... still the same error
[21:17] <rovar> Nikhar: with the appropriately named file?
[21:17] <rovar> did you generate it correctly?
[21:18] <rovar> Nikhar: looking at our chef scripts, we specify the key location when doing this stuff
[21:18] <rovar> so maybe your working command is good enough :)
[21:19] <Nikhar> Oh...ok thanks :)
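(For reference, the setup rovar describes above corresponds to a ceph.conf fragment along these lines — a minimal sketch; with the default cluster name "ceph" and the admin user, $cluster.$name expands to ceph.client.admin:)

    [global]
        keyring = /etc/ceph/$cluster.$name.keyring
    # the admin key is then read from /etc/ceph/ceph.client.admin.keyring,
    # so plain "ceph" commands no longer need an explicit -k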
[21:20] <Nikhar> Now, i'm facing another problem: on trying to execute "qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G", i get the following error "Formatting 'rbd:libvirt-pool/new-libvirt-image', fmt=rbd size=2147483648 cluster_size=0
[21:20] <Nikhar> qemu-img: error connecting
[21:20] <Nikhar> qemu-img: rbd:libvirt-pool/new-libvirt-image: error while creating rbd: Input/output error"
[21:21] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[21:22] <dmsimard> Oh yeah, finally managed to set up a ceph cluster using puppet :D
[21:22] <xarses> huzzah!
[21:23] <dmsimard> xarses: Now I can start checking for that integration with openstack :)
[21:23] <Nikhar> oops....kindly ignore my previous error... i wasn't executing the command as root
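(Two hedged notes on the qemu-img error above: running as non-root commonly fails simply because /etc/ceph/ceph.client.admin.keyring is not readable by the unprivileged user, and qemu's rbd driver also accepts colon-separated options in the image URI, so the client id and conf file can be named explicitly. A sketch, reusing the pool/image names from the command above:)

    qemu-img create -f rbd \
        rbd:libvirt-pool/new-libvirt-image:id=admin:conf=/etc/ceph/ceph.conf 2G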
[21:25] <xarses> dmsimard: the fun begins!
[21:27] <rovar> dmsimard: We have full stack ceph and openstack on chef.. https://github.com/bloomberg/chef-bcpc
[21:28] <rovar> it's not puppet.. but the recipes are similar.. :)
[21:31] * grepory (~Adium@8.25.24.2) has joined #ceph
[21:39] * todin_ (tuxadero@kudu.in-berlin.de) has joined #ceph
[21:39] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[21:44] * sjm (~sjm@207.164.135.98) has left #ceph
[21:48] * LeaChim (~LeaChim@054073b1.skybroadband.com) Quit (Remote host closed the connection)
[21:50] <dmsimard> rovar: I'll certainly have a look - if only there were enough hours in a day so I could learn/use every application… :)
[21:52] <xarses> blah, github is broken :(
[21:52] <dmick> rovar: yeah, I just don't know enough about the auth flow. There's a reasonably-good theory document that might help
[21:53] <dmick> http://ceph.com/docs/master/rados/operations/auth-intro/
[21:53] <kraken> \o
[21:53] <dmick> xarses: broken-ish. it limps
[21:54] <saumya> hey! Can someone please help me with this: if I need to map the disk image from a ceph server to a virtual machine on some other physical machine, how do I use rbd map for that from the command line?
[21:55] <xarses> dmick, that's basically useless
[21:55] <dmick> rbd map performs the operation "use an rbd image in the Ceph cluster as a local block device"
[21:55] <dmick> so you use it on the machine where you want a block device to appear
[21:58] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[21:59] * grepory (~Adium@8.25.24.2) has joined #ceph
[22:01] * meenal (uid13325@id-13325.highgate.irccloud.com) has joined #ceph
[22:01] <saumya> dmick: so I have a ceph server, where I am creating the block device image, then I want to attach this block device to another machine, so how do I do that? Not map?
[22:03] <dmick> yes. map. on the machine where you want the image to appear and be usable.
[22:04] * Vjarjadian (~IceChat77@05453253.skybroadband.com) has joined #ceph
[22:04] * nwat (~nwat@eduroam-237-79.ucsc.edu) Quit (Ping timeout: 480 seconds)
[22:05] <saumya> dmick: sudo rbd map foo --pool rbd --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] , so here the IP will be of the ceph server or the physical machine where I want to attach it? and path of whose keyring?
[22:05] <meenal> hi..i am trying to attach a block device image to my vm by editing the vm's .xml using "sudo virsh attach-device testrun try.xml"... testrun is my vm...but i am getting this error http://pastebin.com/raw.php?i=jyLF7xBm
[22:06] <dmick> saumya: it says "mon-IP". that means Ceph mon. You know what a Ceph mon is? and, the path is to ceph.client.admin.keyring, the same keyring any ceph client must have
[22:06] <dmick> but if your machine is a Ceph client already you don't need to specify -m {mon-IP}; that's why that argument is in []
[22:06] <dmick> it's optional
[22:07] <dmick> in fact --name is optional as well
[22:07] <dmick> since that's the default
[22:07] <Gugge-47527> and so is --pool :)
[22:07] <dmick> yep, rbd is the default pool
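(Putting dmick's rbd map advice together, the flow on the client machine looks roughly like this — a sketch; the image name "foo" and the /dev/rbd0 device path are assumptions, and the host needs /etc/ceph/ceph.conf plus a readable keyring:)

    sudo rbd map foo              # --pool rbd and --name client.admin are the defaults
    # the image appears as a local block device, e.g. /dev/rbd0 (or /dev/rbd/rbd/foo)
    sudo mkfs.ext4 /dev/rbd0
    sudo mount /dev/rbd0 /mnt
    sudo rbd unmap /dev/rbd0      # when finished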
[22:07] <meenal> dmick: could you please suggest something
[22:08] <dmick> don't know meenal, and I have a meeting
[22:10] <meenal> dmick: ok...thanks :)
[22:10] <saumya> dmick: thanks :)
[22:12] <meenal> Could anyone please suggest anything about the error http://pastebin.com/raw.php?i=jyLF7xBm
[22:13] * nwat (~nwat@eduroam-237-79.ucsc.edu) has joined #ceph
[22:20] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[22:20] <xarses> dmick: this is epic http://video.foxbusiness.com/v/2667694577001/writing-a-new-dictionary/
[22:21] <xarses> gotta watch through 1:36
[22:21] <jmlowe1> meenal: what's the xml you are using?
[22:22] * thb (~me@0001bd58.user.oftc.net) Quit (Remote host closed the connection)
[22:22] * clayb (~kvirc@proxy-nj1.bloomberg.com) Quit (Read error: Connection reset by peer)
[22:27] * mikedawson_ (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[22:32] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[22:32] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (Read error: Connection reset by peer)
[22:32] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[22:32] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:32] * sjustlaptop1 (~sam@172.56.17.40) has joined #ceph
[22:32] * mikedawson_ is now known as mikedawson
[22:34] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[22:34] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[22:35] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[22:35] * markbby (~Adium@168.94.245.2) has joined #ceph
[22:38] * wusui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Ping timeout: 480 seconds)
[22:38] * WarrenUsui (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Ping timeout: 480 seconds)
[22:38] * ArtVark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Ping timeout: 480 seconds)
[22:38] * aardvark (~Warren@2607:f298:a:607:8d7c:bc47:8ee1:520d) Quit (Ping timeout: 480 seconds)
[22:39] * wusui (~Warren@2607:f298:a:607:956:fb3f:2383:3023) has joined #ceph
[22:39] * aardvark (~Warren@2607:f298:a:607:956:fb3f:2383:3023) has joined #ceph
[22:40] * WarrenUsui (~Warren@2607:f298:a:607:956:fb3f:2383:3023) has joined #ceph
[22:41] * sjm (~sjm@207.164.135.98) has joined #ceph
[22:44] * m0zes (~mozes@beocat.cis.ksu.edu) Quit (Read error: Connection reset by peer)
[22:44] * m0zes (~mozes@beocat.cis.ksu.edu) has joined #ceph
[22:47] * allsystemsarego (~allsystem@188.25.134.128) Quit (Quit: Leaving)
[22:53] * WarrenUsui (~Warren@2607:f298:a:607:956:fb3f:2383:3023) Quit (Quit: Leaving)
[22:53] * bclark (~bclark@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[22:53] * sjm (~sjm@207.164.135.98) Quit (Quit: Leaving)
[22:53] * wusui (~Warren@2607:f298:a:607:956:fb3f:2383:3023) Quit (Quit: Leaving)
[22:55] * wusui (~Warren@2607:f298:a:607:956:fb3f:2383:3023) has joined #ceph
[22:56] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:58] * vata (~vata@2607:fad8:4:6:64ae:587d:165f:d125) Quit (Quit: Leaving.)
[23:03] <meenal> jmlowe1: the xml file for my vm is http://pastebin.com/raw.php?i=YSX1De4i and i am trying to attach block device to this vm using "sudo virsh attach-device testrun try.xml" where try.xml is http://pastebin.com/raw.php?i=F8Yi4TAF
[23:07] * marrusl (~mark@64.34.151.178) Quit (Quit: outta here)
[23:09] * daMaestro (~jon@denver.beatport.com) has joined #ceph
[23:09] * sjustlaptop1 (~sam@172.56.17.40) Quit (Read error: Operation timed out)
[23:12] <daMaestro> I'm attempting to use ceph-deploy following http://ceph.com/docs/next/start/quick-ceph-deploy/ for Fedora 19 and after `ceph-deploy mon create` http://fpaste.org/39496/91067271/ on the node I am seeing very little in mon logs (http://fpaste.org/39495/79106641/)
[23:12] * jmlowe (~Adium@c-50-172-105-141.hsd1.in.comcast.net) has joined #ceph
[23:12] <daMaestro> /usr/sbin/ceph-create-keys is running, but never completing
[23:13] <daMaestro> stracing ceph-create-keys is not helping: http://fpaste.org/39498/91068191/
[23:14] <Tamil> daMaestro: do you have the firewall turned off?
[23:15] <daMaestro> Tamil, yes, iptables is stopped
[23:15] <daMaestro> http://fpaste.org/39499/91069421/
[23:16] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[23:18] * jmlowe1 (~Adium@c-50-172-105-141.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[23:19] <daMaestro> I'm seeing the same behavior with three nodes too.
[23:19] <ikla> if i have 4 ceph systems with 5 osds with 2 replicas set I can down two systems and be good right?
[23:20] <dmsimard> I'm mounting an image through "rbd map". Running some tests, my performance is directly affected by the number of replicas I have configured in a pool - which makes sense reading about rbd caching. I am trying to get caching to work but to no avail. "rbd cache = true" under [client] in ceph.conf does not seem to work ..
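(One caveat on the caching attempt above: the "rbd cache" options configure librbd, so they apply to qemu/librbd clients but not to images mapped with the kernel driver via "rbd map", which may be why the setting appears to do nothing here. For a librbd client, a [client] section would look roughly like this — a sketch; the size is an illustrative value:)

    [client]
        rbd cache = true
        rbd cache size = 33554432                    # 32 MB
        rbd cache writethrough until flush = true    # stay writethrough until the guest flushes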
[23:23] * Nikhar (~nikhar@14.139.82.6) Quit (Ping timeout: 480 seconds)
[23:26] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[23:28] <Tamil> daMaestro: was it already turned off?
[23:30] <daMaestro> Tamil, i'm burning this down, gonna start new instances and ensure everything is configured in a known state and will try a single Fedora 19 node
[23:30] <daMaestro> Using ceph-deploy.
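(A hedged pointer for the ceph-create-keys hang above: ceph-create-keys waits for the monitor to form a quorum, so when it never completes the usual suspect is a mon that is running but not in quorum, often from a hostname/IP mismatch in the initial monmap. The monitor's admin socket shows its view directly, as a sketch:)

    # on the mon host; substitute the short hostname used in the monmap
    ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
    # "state": "leader" or "peon" means quorum was reached; "probing" means it
    # cannot reach the other monitors it expects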
[23:34] <saumya> Hey, when do I use the export command with rbd? I used it to send an image file from one machine to another but it didn't work. My machines are connected over LAN, and it did show that the export completed 100%
[23:35] <meenal> hey...i am trying to attach a block device image to my vm by editing the vm's .xml using "sudo virsh attach-device testrun try.xml"... testrun is my vm...but i am getting this error http://pastebin.com/raw.php?i=jyLF7xBm
[23:36] <meenal> please someone suggest some fix
[23:36] <Tamil> daMaestro: ok
[23:37] * penguinLord (~penguinLo@14.139.82.6) Quit (Ping timeout: 480 seconds)
[23:38] * shang (~ShangWu@207.96.227.9) has joined #ceph
[23:40] <joshd> meenal: might be a libvirt/qemu version mismatch, it shouldn't be trying virtio when you specified ide
[23:41] <meenal> actually i replaced the bus with virtio..sorry for the mistake
[23:41] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[23:41] <joshd> meenal: did you add a virtio controller to the vm too?
[23:42] <meenal> joshd: no i didn't change the xml for the vm
[23:43] * grepory (~Adium@8.25.24.2) Quit (Quit: Leaving.)
[23:43] <saumya> anyone?
[23:47] <dmick> saumya: "it didn't work".
[23:50] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[23:50] <saumya> dmick: I can't find the exported image on the remote machine where I exported it, and the rbd and ceph commands started giving faults
[23:51] <joshd> meenal: you might need to add a virtio controller to the vm first to be able to attach a virtio disk - libvirt does that automatically on startup, but maybe not for device-add
[23:51] <joshd> saumya: export goes to a local file or stdout
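(To make joshd's point concrete: rbd export writes the image's contents to a file or to stdout on the machine where the command runs; it does not push anything to another host. A sketch; the image and file names are made up:)

    rbd export rbd/foo /tmp/foo.img                    # local file on this machine
    rbd export rbd/foo - | gzip -c > /tmp/foo.img.gz   # or stream via stdout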
[23:51] * markl (~mark@tpsit.com) has joined #ceph
[23:53] <saumya> joshd: so if I need to use the block device created on the ceph server on some other machine, that is mount it on some other machine, how do I do it? I will have to send the image file somehow right?
[23:53] <dmick> saumya: the ceph cluster is a network-accessible resource
[23:53] <dmick> you don't "send" things from the cluster to other machines for access
[23:53] <dmick> the other machines access the cluster
[23:53] * sjustlaptop (~sam@67-203-191-242.static-ip.telepacific.net) has joined #ceph
[23:53] <joshd> this isn't iscsi where you need to explicitly export devices
[23:53] <saumya> dmick: what command do I use for that?
[23:55] <dmick> if you create an image "foo" in the cluster, then other machines access the cluster to see "foo". rbd map is one way (using the kernel rbd driver). rbd commands are another way (for management, and export/import). qemu-rbd can make a VM that can access the image directly in the cluster. stgt can export the rbd image as an iSCSI target. rbd-fuse can access the images as files in a FUSE filesystem.
[23:55] <meenal> joshd: yeah i was also sensing the problem..but i don't know where to make the changes?
[23:55] <dmick> so there isn't "a command".
[23:57] <joshd> meenal: if you change the root disk to use virtio and restart the vm libvirt will generate the virtio controller for you
[23:58] <saumya> dmick: so I run rbd map on the machine which wants to access the cluster?
[23:58] <meenal> joshd: could you please tell how to change the root disk to use virtio?
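(For the virtio question above, the relevant piece is the <target> element of the disk definition in the domain XML; a hedged sketch of an rbd disk on the virtio bus, reusing the pool/image from the earlier discussion — the mon address and secret uuid are placeholders:)

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
        <host name='192.168.0.1' port='6789'/>
      </source>
      <auth username='admin'>
        <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
    </disk>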
[23:58] * ntranger (~ntranger@proxy2.wolfram.com) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.