#ceph IRC Log


IRC Log for 2013-08-26

Timestamps are in GMT/BST.

[0:03] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[0:04] * danieagle (~Daniel@177.97.251.212) has joined #ceph
[0:07] * jksM (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[0:07] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[0:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[0:18] * ChanServ sets mode +v andreask
[0:18] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has left #ceph
[0:24] * AfC (~andrew@2407:7800:200:1011:6c7f:504b:c379:4fb8) has joined #ceph
[0:45] * vipr_ (~vipr@78-23-119-24.access.telenet.be) Quit (Remote host closed the connection)
[0:54] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[0:58] * jlogan1 (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[1:02] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[1:02] * tnt (~tnt@91.177.230.140) Quit (Ping timeout: 480 seconds)
[1:35] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[1:47] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[1:47] * danieagle (~Daniel@177.97.251.212) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[1:54] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[2:00] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[2:11] * dmsimard (~Adium@69.165.206.93) has joined #ceph
[2:27] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:32] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[2:32] * dmsimard (~Adium@69.165.206.93) Quit (Quit: Leaving.)
[3:09] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[3:16] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:23] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[3:27] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:27] * jantje (~jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[3:39] * jaydee (~jeandanie@124x35x46x11.ap124.ftth.ucom.ne.jp) has joined #ceph
[3:43] * etienne (~etienne@155.106.77.86.rev.sfr.net) has joined #ceph
[3:43] * etienne (~etienne@155.106.77.86.rev.sfr.net) has left #ceph
[3:46] * jantje (~jan@paranoid.nl) has joined #ceph
[3:46] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[3:49] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[3:53] * dpippenger (~riven@cpe-75-85-17-224.socal.res.rr.com) has joined #ceph
[3:55] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[3:56] * AfC (~andrew@2407:7800:200:1011:6c7f:504b:c379:4fb8) Quit (Remote host closed the connection)
[3:56] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[3:58] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) has joined #ceph
[4:02] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[4:04] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Ping timeout: 480 seconds)
[4:16] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:23] * ScOut3R (~ScOut3R@54026B73.dsl.pool.telekom.hu) has joined #ceph
[4:23] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:26] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[4:31] * ScOut3R (~ScOut3R@54026B73.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[4:37] * yy-nm (~Thunderbi@122.233.46.4) has joined #ceph
[4:46] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[4:56] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) has joined #ceph
[4:59] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[5:00] * madkiss (~madkiss@2001:6f8:12c3:f00f:e86e:1b33:d047:497) Quit (Quit: Leaving.)
[5:01] * haomaiwa_ (~haomaiwan@218.71.124.49) Quit (Remote host closed the connection)
[5:02] * haomaiwang (~haomaiwan@li498-162.members.linode.com) has joined #ceph
[5:17] * yy-nm (~Thunderbi@122.233.46.4) Quit (Quit: yy-nm)
[5:18] * dpippenger (~riven@cpe-75-85-17-224.socal.res.rr.com) Quit (Remote host closed the connection)
[5:24] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[5:25] * jantje (~jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[5:35] * jantje (~jan@paranoid.nl) has joined #ceph
[5:35] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:35] * KindTwo (~KindOne@h124.2.40.162.dynamic.ip.windstream.net) has joined #ceph
[5:36] * KindTwo is now known as KindOne
[5:53] * haomaiwa_ (~haomaiwan@218.71.124.49) has joined #ceph
[5:57] * haomaiwang (~haomaiwan@li498-162.members.linode.com) Quit (Ping timeout: 480 seconds)
[5:57] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[6:03] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Ping timeout: 480 seconds)
[6:16] * brzm (~medvedchi@node199-194.2gis.com) has joined #ceph
[6:44] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[6:56] * KindTwo (~KindOne@h62.41.28.71.dynamic.ip.windstream.net) has joined #ceph
[6:58] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:58] * KindTwo is now known as KindOne
[7:16] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[7:22] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:22] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[7:35] * yy-nm (~Thunderbi@122.233.46.4) has joined #ceph
[7:36] * sglwlb (~sglwlb@221.12.27.202) has joined #ceph
[7:38] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[7:57] * tnt (~tnt@91.177.230.140) has joined #ceph
[8:02] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:04] * jaydee (~jeandanie@124x35x46x11.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[8:08] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[8:08] * ChanServ sets mode +v andreask
[8:09] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has left #ceph
[8:24] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) Quit (Quit: Leaving.)
[8:24] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) has joined #ceph
[8:24] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[8:27] * rendar (~s@host129-182-dynamic.19-79-r.retail.telecomitalia.it) has joined #ceph
[8:30] * matt_ (~matt@mail.base3.com.au) has joined #ceph
[8:33] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:33] * silversurfer (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Read error: Connection reset by peer)
[8:37] * vipr (~vipr@78-23-119-24.access.telenet.be) has joined #ceph
[8:37] * silversurfer (~jeandanie@124x35x46x11.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:39] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:39] * symmcom (~wahmed@S0106001143030ade.cg.shawcable.net) has left #ceph
[8:41] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:41] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[8:41] <sglwlb> silence
[8:44] <odyssey4me> Is there a way to show the current value of a running config key?
[8:45] <matt_> Is anyone else having cpu usage issues in Dumpling?
[8:45] * rendar (~s@host129-182-dynamic.19-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[8:46] <odyssey4me> Alternatively - the documentation specifies "mds max file size" at 1TB by default, with "1ULL << 40" as the default value of the config entry. I need to make it 2TB... what value do I put in to make that work?
[8:53] * ssejour (~sebastien@out-chantepie.fr.clara.net) has joined #ceph
[8:53] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[8:55] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[8:56] <yy-nm> odyssey4me: 1ULL << 41 means 2TB, but it needs ceph support!
[8:56] <odyssey4me> yy-nm: I figured it out, thanks... the value is in bytes
[8:56] <odyssey4me> ceph --admin-daemon /var/run/ceph/ceph-mon.ctpcph001.asok config show | grep mds_max
[8:57] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[8:57] <odyssey4me> shows the current value
[9:05] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) has joined #ceph
[9:06] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) Quit ()
[9:08] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) has joined #ceph
[9:09] <yanzheng> odyssey4me, put "mds max_file_size = xxxx" in the mon section of ceph.conf
[9:10] <yanzheng> then recreate the fs
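A minimal sketch of the two steps above, assuming the monitor admin-socket name from the transcript and that the value is given in bytes (1ULL << 41 = 2199023255552, i.e. 2TB):

    # show the running value via the monitor's admin socket
    ceph --admin-daemon /var/run/ceph/ceph-mon.ctpcph001.asok config show | grep mds_max

    # then raise the limit in ceph.conf and recreate the fs, per yanzheng:
    # [mon]
    #     mds max file size = 2199023255552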
[9:10] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:12] * wogri_risc (~Adium@85.233.126.167) has joined #ceph
[9:12] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[9:22] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:25] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Read error: Connection reset by peer)
[9:25] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[9:32] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[9:34] * tnt (~tnt@91.177.230.140) Quit (Ping timeout: 480 seconds)
[9:35] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[9:35] * ChanServ sets mode +v andreask
[9:37] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[9:37] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[9:49] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:53] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[9:56] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[10:00] * AfC (~andrew@2407:7800:200:1011:f946:9508:d31e:c9fa) Quit (Quit: Leaving.)
[10:03] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[10:06] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[10:12] <topro> anyone know the strange ceph-mds behaviour where, after restarting the MDS and after it has gone to "active" mode, it enters a cache read loop forever without serving any FS requests? I had been able to get it up and running by increasing "mds cache size" and restarting multiple times in the past, but as of now i can't get it out of that strange state back into normal operation. anyone, please?
[10:15] <topro> btw. i cannot increase the mds cache size any further as i'm at 700000 already, which eats too much memory as it is
[10:18] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[10:22] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:23] <yanzheng> topro, it's a know bug
[10:23] <yanzheng> s/know/known
[10:24] <ccourtaut> morning
[10:25] <topro> yanzheng: is there anything (other than always further increasing the cache size) i can do to get the mds back working?
[10:26] <topro> ^^ with dumpling that is (was the same with cuttlefish)
[10:26] <yanzheng> you can try restarting the mds periodically
[10:27] <topro> well, I've been trying to restart it about 50 times this morning :/
[10:28] <topro> with different cache sizes. no matter what i try, eventually it turns into reading at about 50MB/s with 1 op/s even before it would start to accept mounts from clients
[10:29] * wogri_risc1 (~Adium@85.233.124.80) has joined #ceph
[10:30] <yanzheng> see http://tracker.ceph.com/issues/4405
[10:32] <yanzheng> you have too many entries in the stray directory
[10:33] * wogri_risc (~Adium@85.233.126.167) Quit (Ping timeout: 480 seconds)
[10:33] <yanzheng> the mds tries purging the stray entries during startup
[10:35] <topro> yanzheng: just read that link. what does "stray directory" refer to?
[10:35] <yanzheng> restarting the mds periodically will keep the stray entries to a minimum
[10:36] <topro> ok, i could do that... once I get it back online ;)
[10:36] <yanzheng> is stores unlinked but referenced inode
[10:36] <yanzheng> s/is/it
[10:37] <yanzheng> the bug should be easy to fix
[10:38] <topro> actually, doing that mds dump gives me a file with 1.7M lines. doing a "grep stray" on it, it still has 1.4M lines. does that give any clue about what cache size I need to restart the mds?
[10:39] <yanzheng> 2M
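A hedged sketch of the sizing check implied here, assuming topro's cache dump was saved to a local file (the file name is made up); the config key is the "mds cache size" discussed above:

    # count stray entries in the dump to estimate the cache size needed
    grep -c stray /tmp/mds-cache-dump.txt

    # then raise the cache above that count in ceph.conf (2M as yanzheng suggests):
    # [mds]
    #     mds cache size = 2000000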
[10:41] * dalegaar1 is now known as dalegaard
[10:43] * psteyn (~pieter@105-237-67-242.access.mtnbusiness.co.za) has joined #ceph
[10:44] <psteyn> Hi there, using Ubuntu Server 12.04 and Dumpling with ceph-deploy 1.22, I get the following when running 'ceph-deploy gatherkeys lb1'
[10:44] <psteyn> [ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on ['lb1']
[10:44] <psteyn> how do I get ceph.client.admin.keyring generated?
[10:50] <yanzheng> deploy monitor first?
[10:52] <loicd> morning ceph
[10:53] <loicd> it would be great if someone had time to review this patch series : https://github.com/ceph/ceph/pull/538
[10:53] * matt_ (~matt@mail.base3.com.au) Quit (Quit: Leaving)
[10:53] <ccourtaut> loicd: oui je vais y jeter un oeil [French: yes, I'll take a look at it]
[10:53] <ccourtaut> loicd: i'll do that again in english, yes i'll take a look at it :)
[10:54] <loicd> merci
[10:54] <loicd> thank you :-)
[10:54] * yanzheng 看不懂 [Chinese: can't read that]
[10:54] <ccourtaut> XD
[10:54] <loicd> yanzheng: :-D
[10:56] <psteyn> yanzheng I did do: ceph-deploy mon create lb1 lb2, and it seemed to have completed..
[10:56] <psteyn> before running gatherkeys
[10:56] <topro> yanzheng: I got the MDS running again with a 715k cache size, but not sure for how long that will hold up. and I'm not sure if I can spare enough memory to increase the cache size to 2M. any idea on how to proceed? btw. the machine has 24G of ram, but a 700k cache size already gives the MDS daemon about a 7G memory footprint, increasing (maybe due to a memory leak)
[10:57] <yanzheng> topro, do you see lots of "purge_stray ..." in the mds log?
[10:58] <yanzheng> psteyn, maybe you should run gatherkeys on node lb1 or lb2
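For reference, a sketch of the usual ceph-deploy bootstrap order implied by yanzheng's answer, using the hostnames from the transcript; gatherkeys can only succeed once the monitors have formed a quorum and created the keys:

    ceph-deploy new lb1 lb2         # write ceph.conf and the initial monitor map
    ceph-deploy mon create lb1 lb2  # deploy and start the monitors
    ceph-deploy gatherkeys lb1      # collect ceph.client.admin.keyring and the bootstrap keys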
[10:59] <topro> mds log or dump? the log gives nothing at all at default log level
[10:59] <yanzheng> mds log
[10:59] <topro> which log level to use?
[10:59] <yanzheng> 10
[11:00] <yanzheng> ceph mds tell 0 injectargs '--debug_mds 10'
[11:01] <yanzheng> it may take the mds a while to purge the stray dir
[11:02] <topro> well I know that MDS log level 10 gives me HUGE log files of gigabytes within a _very_ short time
[11:02] <ccourtaut> loicd: after looking over the pull request, it seems fine to me
[11:02] <yanzheng> yes
[11:02] <psteyn> yanzheng: can you maybe have a look: http://pastebin.com/Tyj2cZtn
[11:02] <psteyn> that's the exact commands I just ran
[11:03] <yanzheng> no idea,
[11:04] <yanzheng> I'm not familiar with ceph-deploy
[11:04] <yanzheng> topro, just check if the mds is purging the stray inodes
[11:04] <yanzheng> then change debug level to 0
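Putting yanzheng's steps together, a rough sketch; mds rank 0 is from the command above, and the log file path is an assumption:

    # raise mds debugging, check briefly for stray purging, then drop it back down
    ceph mds tell 0 injectargs '--debug_mds 10'
    grep -c purge_stray /var/log/ceph/ceph-mds.a.log
    ceph mds tell 0 injectargs '--debug_mds 0'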
[11:04] <ccourtaut> and iirc, ceph-deploy is moving a lot these days
[11:05] <topro> yanzheng: would I see a significant reduction in process memory usage at the very moment the stray dir gets purged? cause I see regular massive memory frees after a longer period of memory usage growth
[11:05] <topro> about once a day, perhaps
[11:07] <yanzheng> your workload does lots of file creation/deletion?
[11:07] <ccourtaut> topro: iirc, i think alfredodeza is developing on ceph-deploy, he might have an idea for your problem
[11:08] <yanzheng> topro, you are using kernel client or fuse?
[11:08] <topro> yanzheng: using cephfs to provide users' /home, so the answer is yes I would say. using the linux 3.9 kernel client
[11:09] <yanzheng> how many client?
[11:10] <topro> about 8 clients
[11:10] <topro> btw. I know it's not stable yet... ;)
[11:11] <yanzheng> 3.9 kernel is a little old for cephfs
[11:12] <erwan_taf> lol
[11:13] <topro> I tried the debian-supplied linux-3.10 from wheezy-backports but that client showed me stuff like empty directories that are not empty
[11:13] <topro> erwan_taf: ?!?
[11:13] <erwan_taf> 3.9 isn't that old :)
[11:13] <erwan_taf> but I understand ceph need some features
[11:13] <erwan_taf> so reading 3.9 is a bit old sounds funny to me, running a 3.8 :p
[11:14] <topro> ack, anyway. would giving 3.10 another try be of any help or would I need to patch it with latest git code anyway?
[11:15] <yanzheng> https://github.com/ceph/ceph-client/commits/testing
[11:16] <yanzheng> the empty directory issue has been fixed in the test branch
[11:16] <topro> yanzheng: to answer your question: increasing the log level to 10 for about one minute gave me a logfile of about 500MB. it has 2.4M lines, with 500k lines containing "stray"
[11:16] <yanzheng> great
[11:17] <topro> does that give you any clue?
[11:18] <yanzheng> the mds is purging stray
[11:19] * bergerx_ (~bekir@78.188.101.175) Quit (Ping timeout: 480 seconds)
[11:19] <yanzheng> if you are using old kernel, I suggest using ceph instead of the kclient
[11:19] <topro> so when I periodically restart the mds (i.e. once per night) before it crashes due to memory exhaustion, chances are high it would just start up again without giving me trouble?
[11:19] <yanzheng> yes
[11:20] <yanzheng> if you are using old kernel, I suggest using fuse instead of the kclient
[11:21] <topro> I was trying to use fuse, but I was not able to give the fuse client the right directory argument to mount a subdir of the ceph filesystem.
[11:21] <topro> my fstab entry with kclient looks like this:
[11:21] <topro> 1.2.3.4:6789:/home/ /home ceph ...some_options... 0 0
[11:22] * wogri_risc1 (~Adium@85.233.124.80) Quit (Quit: Leaving.)
[11:22] <topro> and I was not able to do that with fuse client
[11:22] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[11:25] * yy-nm (~Thunderbi@122.233.46.4) Quit (Quit: yy-nm)
[11:25] <topro> yanzheng: which commit in testing is the one you were referring to, and do you expect it to go into dumpling soon?
[11:25] <yanzheng> it's a kernel client bug
[11:26] <topro> so, a kernel client bug which is in vanilla linux-3.10 I assume.
[11:27] <topro> ahh, now I see, the link goes to ceph-client repo, sorry, didn't see that
[11:27] <yanzheng> it's longstanding kernel bug
[11:28] <topro> and it won't be in 3.11 either I assume. so either patch kernel with latest ceph git or use fuse instead, right?
[11:28] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) has joined #ceph
[11:29] <yanzheng> ceph-fuse has a '-r' option, it can specify subdir
[11:30] <yanzheng> yes
[11:30] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) Quit ()
[11:30] <topro> that's right, but that cannot be specified in fstab, can it?
[11:31] <yanzheng> I don't know
[11:31] <topro> i tried some time ago; i was aware of that option at the time, but the fstab helper didn't pass it through.
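For comparison, a sketch of the ceph-fuse equivalent of topro's fstab line, using the '-r' option yanzheng mentions above; the monitor address is the example one from the fstab entry:

    # mount only the /home subtree of the filesystem at /home
    ceph-fuse -m 1.2.3.4:6789 -r /home /home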
[11:34] * rongze (~quassel@106.120.176.78) has joined #ceph
[11:34] * sel (~sel@212.62.233.233) has joined #ceph
[11:34] <topro> with that being a longstanding kclient bug, what's strange to me is why I don't encounter that issue with the linux-3.9 kclient, just with 3.10
[11:37] <topro> is there anything I can do to help cephfs besides crying? ;)
[11:38] <yanzheng> compile kernel
[11:39] <topro> which repo/branch to clone?
[11:39] * rongze_ (~quassel@211.155.113.206) Quit (Ping timeout: 480 seconds)
[11:40] <yanzheng> testing
[11:41] <topro> I'll see how far I can get
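A rough sketch of fetching and building that branch; these are generic kernel-build steps, not from the transcript:

    git clone https://github.com/ceph/ceph-client.git
    cd ceph-client
    git checkout testing
    cp /boot/config-"$(uname -r)" .config   # start from the running kernel's config
    make olddefconfig
    make -j"$(nproc)"
    sudo make modules_install install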
[11:46] * andreask1 (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[11:46] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[11:46] * ChanServ sets mode +v andreask1
[11:46] * andreask1 is now known as andreask
[11:51] <topro> I'm not experienced in linux development; is it supposed to work if I use the linux-3.9 source and merge the ceph-client "testing" branch into it? I can't use any kernel above 3.9 a.t.m. due to third-party driver incompatibilities
[11:51] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:56] * silversurfer (~jeandanie@124x35x46x11.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[12:05] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[12:07] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[12:09] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[12:18] * tnt (~tnt@91.177.230.140) has joined #ceph
[12:20] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[12:28] * gaveen (~gaveen@175.157.226.159) has joined #ceph
[12:36] * tnt (~tnt@91.177.230.140) Quit (Ping timeout: 480 seconds)
[12:43] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:46] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[12:58] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit (Quit: artwork_lv)
[13:00] <odyssey4me> yanzheng - recreate the fs?
[13:07] * brzm (~medvedchi@node199-194.2gis.com) Quit (Remote host closed the connection)
[13:09] * wogri_risc (~Adium@85.233.124.80) has joined #ceph
[13:09] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:10] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) has joined #ceph
[13:16] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[13:20] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) has joined #ceph
[13:30] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) Quit (Ping timeout: 480 seconds)
[13:48] * turduks (~hddddhd@bzq-79-176-213-94.red.bezeqint.net) has joined #ceph
[13:52] * miniyo (~miniyo@0001b53b.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:01] <odyssey4me> joshd - I'm getting the same error as this with grizzly: https://answers.launchpad.net/nova/+question/201366
[14:02] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:02] <odyssey4me> I have the ceph.conf configured with the monitor addresses and client key, along with libvirt also having the client key as a secret.
[14:02] <odyssey4me> Can you help me work through to solve it please?
[14:03] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[14:05] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[14:05] * ChanServ sets mode +v andreask
[14:18] * gaveen (~gaveen@175.157.226.159) Quit (Ping timeout: 480 seconds)
[14:19] * markbby (~Adium@168.94.245.4) has joined #ceph
[14:23] * turduks (~hddddhd@bzq-79-176-213-94.red.bezeqint.net) Quit ()
[14:25] * miniyo (~miniyo@0001b53b.user.oftc.net) has joined #ceph
[14:25] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:25] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:27] * gaveen (~gaveen@175.157.238.162) has joined #ceph
[14:30] * rekrej (jerker@82ee1319.test.dnsbl.oftc.net) Quit (Remote host closed the connection)
[14:38] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:40] * turduks (~hddddhd@bzq-79-176-213-94.red.bezeqint.net) has joined #ceph
[14:47] * turduks (~hddddhd@bzq-79-176-213-94.red.bezeqint.net) Quit ()
[14:47] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[14:50] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[14:50] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[14:53] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:53] * topro_ (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[14:56] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[14:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:07] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[15:11] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[15:13] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[15:14] * wogri_risc1 (~Adium@85.233.124.80) has joined #ceph
[15:15] * yanzheng (~zhyan@101.83.161.186) has joined #ceph
[15:16] * wogri_risc (~Adium@85.233.124.80) Quit (Read error: Connection reset by peer)
[15:19] * CliMz (~CliMz@195.65.225.142) has joined #ceph
[15:22] * Bada (~Bada@195.65.225.142) has joined #ceph
[15:22] * Bada (~Bada@195.65.225.142) Quit ()
[15:22] * CliMz (~CliMz@195.65.225.142) Quit ()
[15:22] * Bada (~Bada@195.65.225.142) has joined #ceph
[15:26] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[15:32] * topro_ (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[15:36] <sel> What's the downside of using mkcephfs to deploy ceph? I'm finding it hard to get ceph-deploy to do what I want
[15:37] <alfredodeza> I believe mkcephfs is deprecated
[15:37] <alfredodeza> ceph-deploy on the other hand, is actively developed and worked on
[15:38] <alfredodeza> if you have a reproducible problem with it, you might want to open a new issue in the tracker
[15:38] <alfredodeza> or maybe it is not an issue? why don't you give me an example of what the problem is
[15:38] <alfredodeza> :)
[15:39] <tnt> alfredodeza: btw does ceph-deploy support adding new nodes to a setup that was not originally created with ceph-deploy?
[15:39] <sel> The big problem I have with it is the lack of documentation. How for instance do I tell ceph-deploy which IP to use for cluster traffic, and which device it should use for which osd?
[15:42] * ChanServ sets mode +v wogri
[15:44] <alfredodeza> sel: documentation is something that has been lacking for a while, but I am trying to improve it as we move forward
[15:44] <alfredodeza> for the past few weeks we've doubled down on fixing bugs and we've had 3 big bug-fix releases
[15:45] <alfredodeza> documentation will follow through once we are a bit stable
[15:45] <alfredodeza> tnt: sure, I don't see any reason why not
[15:45] * turduks (~hddddhd@69.174.99.94) Quit ()
[15:45] * turduks (~hddddhd@69.174.99.94) has joined #ceph
[15:47] <tnt> alfredodeza: well it might expect things to be done a certain way and be incompatible with any pre-existing setup that doesn't match its way of configuring stuff.
[15:47] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:48] <alfredodeza> sel, the ceph-deploy help menu, although not too verbose, indicates that for OSDs you can do it in the form of HOST:DISK[:JOURNAL]
[15:48] <sel> alfredodeza, Sorry if I seemed a bit rude, but I'm under pressure to get this up, and I find it a bit frustrating spending a lot of time on a tool that ain't well documented.
[15:48] <alfredodeza> I believe that by 'DISK' you can do that by ID or by path
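Illustrating the HOST:DISK[:JOURNAL] form alfredodeza describes; the hostnames and devices here are made-up examples:

    ceph-deploy osd create node1:/dev/sdb            # whole disk, journal co-located
    ceph-deploy osd create node1:/dev/sdb:/dev/sdc1  # separate journal device
    ceph-deploy osd create node1:sdb                 # disk named without the /dev prefix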
[15:49] <alfredodeza> sel: no problem, not at all, I am really working hard to get it up to a point where it is better
[15:49] <alfredodeza> and I am pretty sure it is doing much better than before :)
[15:49] <alfredodeza> sel: have you used it recently?
[15:49] <alfredodeza> or have you been able to compare it with some older version?
[15:50] <sel> I've used mkcephfs before....
[15:52] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:52] * zhyan_ (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[15:53] * turduks (~hddddhd@69.174.99.94) Quit ()
[15:54] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[15:55] * yanzheng (~zhyan@101.83.161.186) Quit (Ping timeout: 480 seconds)
[15:55] * jlogan (~Thunderbi@2600:c00:3010:1:1::40) has joined #ceph
[15:56] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[15:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:00] * markl_ (~mark@tpsit.com) has joined #ceph
[16:00] * markl_ (~mark@tpsit.com) Quit ()
[16:01] <sel> Can ceph-deploy read ceph.conf as a template for what it should do? For instance can I set that osd.100 should be /dev/disk-by-path/XXXXX
[16:02] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[16:02] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:04] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[16:04] * zhyan_ (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[16:11] * artwork_lv (~artwork_l@adsl.office.mediapeers.com) Quit (Ping timeout: 480 seconds)
[16:19] * wogri_risc1 (~Adium@85.233.124.80) Quit (Quit: Leaving.)
[16:20] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:22] * vata (~vata@2607:fad8:4:6:dc15:99b6:ac8:2490) has joined #ceph
[16:28] * gaveen (~gaveen@175.157.238.162) Quit (Ping timeout: 480 seconds)
[16:39] * rudolfsteiner (~federicon@181.21.135.200) has joined #ceph
[16:51] * gaveen (~gaveen@175.157.139.239) has joined #ceph
[16:56] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[16:56] * markbby (~Adium@168.94.245.4) has joined #ceph
[17:01] * sprachgenerator (~sprachgen@130.202.135.217) has joined #ceph
[17:02] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[17:03] <loicd> now that https://github.com/ceph/ceph/tree/wip-5510 has been merged, should I destroy the branch ?
[17:12] <dmsimard> Hi there, can anyone comment on deploying ceph using puppet ? There's ceph-deploy, chef recipes.. but also puppet: https://github.com/enovance/puppet-ceph
[17:14] * yehudasa_ (~yehudasa@2602:306:330b:1410:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[17:14] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:14] <Kioob`Taff> Hi, any news about http://tracker.ceph.com/issues/5760 ?
[17:15] <Kioob`Taff> (linux 3.10 compatibility)
[17:17] <Kioob`Taff> I still have the same problem with Linux 3.10.9
[17:24] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) has joined #ceph
[17:24] * med (~medberry@ec2-50-17-21-207.compute-1.amazonaws.com) has joined #ceph
[17:24] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit ()
[17:25] * sprachgenerator (~sprachgen@130.202.135.217) Quit (Quit: sprachgenerator)
[17:27] * BillK (~BillK-OFT@220-253-162-118.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:33] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[17:33] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:38] * rudolfsteiner (~federicon@181.21.135.200) Quit (Quit: rudolfsteiner)
[17:43] * xarses (~andreww@c-50-136-199-72.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:45] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) has joined #ceph
[17:46] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:49] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:53] * doxavore (~doug@99-7-52-88.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[17:53] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:55] <sel> Is there a way to tell ceph-deploy which id a new osd should have?
[17:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:57] * nwat (~nwat@ext-40-205.eduroam.rwth-aachen.de) has left #ceph
[17:58] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[17:59] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:00] * devoid (~devoid@130.202.135.234) has joined #ceph
[18:00] * sagelap1 (~sage@2600:1012:b016:3ab4:10a4:efbc:77f3:da42) has joined #ceph
[18:00] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:00] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:01] * tnt (~tnt@91.177.230.140) has joined #ceph
[18:03] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[18:05] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[18:05] <MACscr> So I just got a storage server with older dual 5420 CPUs. Think I would be fine with removing one of the CPUs if I am using it for a ceph-osd system?
[18:05] * DarkAce-Z is now known as DarkAceZ
[18:06] * sagelap (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[18:06] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[18:06] * ssejour (~sebastien@out-chantepie.fr.clara.net) Quit (Quit: Leaving.)
[18:09] * alram (~alram@38.122.20.226) has joined #ceph
[18:10] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[18:10] * sagelap1 is now known as sagelap
[18:10] * ChanServ sets mode +o sagelap
[18:10] * sagelap changes topic to 'Latest stable (v0.67.2 "Dumpling" or v0.61.8 "Cuttlefish") -- http://ceph.com/get || CDS Vids and IRC logs posted http://ceph.com/cds/'
[18:12] * psteyn (~pieter@105-237-67-242.access.mtnbusiness.co.za) Quit (Quit: Konversation terminated!)
[18:12] * xmltok (~xmltok@pool101.bizrate.com) Quit (Remote host closed the connection)
[18:12] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:13] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:15] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[18:16] <matt_> Anyone else still having CPU usage problems with 0.67.2?
[18:16] <Kioob`Taff> Very good question. I'm waiting until it's solved before upgrading.
[18:17] <matt_> My load averages are still through the roof so I'm having to downgrade to 0.61.8 for now
[18:19] <Kioob`Taff> on the Mailing List, "Oliver Daudey" has the same problem
[18:21] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Read error: Connection reset by peer)
[18:24] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[18:27] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[18:27] * ChanServ sets mode +v andreask
[18:27] <loicd> anyone willing to do an easy review ? https://github.com/ceph/ceph/pull/539/files ( it's documentation ;-)
[18:40] <joao> loicd, is the K+M=5 an invariant?
[18:40] * sagelap1 (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:40] <loicd> no, it's an example
[18:41] <joao> maybe a "For instance, an erasure coded pool created to use five OSDs..." ?
[18:41] <joao> it's not obvious it's an example
[18:41] <loicd> correct
[18:41] <loicd> I'll fix this by making clear the whole document is example based with no theory
[18:42] * diegows (~diegows@host63.186-108-72.telecom.net.ar) has joined #ceph
[18:46] <loicd> and I added the "For instance, as suggested"
[18:46] <loicd> and I added the "For instance," as suggested
[18:47] * sagelap (~sage@2600:1012:b016:3ab4:10a4:efbc:77f3:da42) Quit (Ping timeout: 480 seconds)
[18:47] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[18:47] <joao> loicd, added some comments; haven't gone through it all, but you should definitely split those long lines into 80-char lines
[18:47] <loicd> ok
[18:47] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[18:47] <loicd> thanks !
[18:48] * markbby (~Adium@168.94.245.4) has joined #ceph
[18:49] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[18:51] * yehudasa_ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) has joined #ceph
[18:51] <loicd> non-wrapped lines indeed make it really hard to review
[18:54] <joao> if no one reviews the rest, I'll finish up reading it in the next hour or so
[18:54] <joao> my code just finished compiling :)
[18:55] <loicd> joao: cool
[18:56] * matt_ (~matt@220-245-1-152.static.tpgi.com.au) Quit (Ping timeout: 480 seconds)
[18:57] <loicd> https://github.com/ceph/ceph/pull/539/files now has 80 character lines ;-)
[18:57] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[18:59] * gregaf (~Adium@2607:f298:a:607:3d75:84b1:f173:d53b) Quit (Quit: Leaving.)
[19:00] * gregaf (~Adium@2607:f298:a:607:99b4:5900:8f78:54a0) has joined #ceph
[19:08] * jluis (~JL@89.181.146.94) has joined #ceph
[19:08] * kDaser (kDaser@c-69-142-166-209.hsd1.pa.comcast.net) has joined #ceph
[19:14] <zackc> sagelap1: just saw your request; merged it
[19:15] <kDaser> Has anyone used Ceph in any high IO systems? I'm looking for a solution to handle approx 3 million "s3 puts" of 10kb xml objects daily, with fast response times (sub 1s) returns on a 20TB dataset. Eucalyptus claims their Walrus solution can't handle it, so I am looking for alternatives.
[19:15] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[19:16] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) has joined #ceph
[19:16] * jluis (~JL@89.181.146.94) Quit (Ping timeout: 480 seconds)
[19:21] * artworklv (~artworklv@brln-d9ba3566.pool.mediaWays.net) Quit (Quit: Leaving)
[19:21] <Kioob> kDaser: I bench 6000*2MB write/s on my cluster
[19:21] <Kioob> mm
[19:21] <Kioob> I rechecked
[19:23] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:27] * jluis (~JL@89.181.146.94) has joined #ceph
[19:30] <Kioob> kDaser: I retried, on standard SAS drives (instead of full SSD for my previous test)
[19:30] <Kioob> http://pastebin.com/qXnD1u1y
[19:30] <Kioob> Puts of 10Kb, with 64 threads
[19:31] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:31] <Kioob> But here it's at the RBD level, didn't try with Rados Gateway
[19:32] <Kioob> (for the record: elapsed: 104  ops: 207876  ops/sec: 1990.24  bytes/sec: 19607868.48)
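A run along these lines can be reproduced with rados bench; a sketch roughly matching Kioob's numbers, where the pool name is an assumption:

    # 60-second write test with 10KB objects and 64 concurrent operations
    rados -p testpool bench 60 write -b 10240 -t 64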
[19:34] <gregaf> kDaser: depends on how the data's partitioned and how many nodes you want, but that's 34 puts/s which should be fine
[19:34] <gregaf> maybe yehudasa has more elaborate comments?
[19:41] <yehudasa> kDaser: the gateway shouldn't have trouble handling ~30-40 puts/second
[19:41] <yehudasa> kDaser, though you may want to consider putting data on multiple buckets to reduce index contention
[19:46] * markbby (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[19:46] * markbby (~Adium@168.94.245.4) has joined #ceph
[19:50] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[19:50] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[19:51] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Remote host closed the connection)
[19:51] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[19:59] * wogri_risc (~Adium@85.233.126.167) has joined #ceph
[20:02] * wogri_risc (~Adium@85.233.126.167) Quit ()
[20:08] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[20:12] * wogri_risc (~Adium@85.233.126.167) has joined #ceph
[20:19] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[20:20] <joao> loicd, reviewed it; comments on gh
[20:21] * dmick (~dmick@2607:f298:a:607:6d69:5957:f7a4:b0b6) Quit (Ping timeout: 480 seconds)
[20:21] <joao> cannot vouch for the accuracy as I'm not familiar with the erasure coding stuff
[20:21] <joao> gotta run; bbl
[20:21] * vipr (~vipr@78-23-119-24.access.telenet.be) Quit (Remote host closed the connection)
[20:23] * Vjarjadian (~IceChat77@176.254.37.210) has joined #ceph
[20:26] * jcsp (~john@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[20:30] * dmick (~dmick@2607:f298:a:607:b938:cc3a:9a41:7c65) has joined #ceph
[20:34] * compbio (~compbio@nssc.nextspace.us) has joined #ceph
[20:35] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[20:44] <xarses> hi, I'm having problems with cinder-volume starting: it terminates instantly when run as the cinder user, but started as root the service starts fine and can interact with ceph wonderfully.
[20:45] <xarses> i've checked the perms on the client keyring for cinder and it looks fine
[20:46] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:46] <xarses> cinder also creates no log entries when starting
[20:47] <xarses> cinder-api, and cinder-scheduler are happy
[20:50] * jharley (~jharley@75-119-224-217.dsl.teksavvy.com) has joined #ceph
[20:53] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Remote host closed the connection)
[20:54] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[20:59] * andreask1 (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[20:59] * ChanServ sets mode +v andreask1
[20:59] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[20:59] * andreask1 is now known as andreask
[21:00] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has left #ceph
[21:04] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[21:04] * ChanServ sets mode +v andreask
[21:06] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Remote host closed the connection)
[21:07] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[21:07] * ChanServ sets mode +v andreask
[21:07] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has left #ceph
[21:08] * jharley (~jharley@75-119-224-217.dsl.teksavvy.com) Quit (Read error: Connection reset by peer)
[21:13] <dmick> xarses: can you run it with strace -f and see what's failing?
[21:14] <joshd> xarses: if this is grizzly, have you set CEPH_ARGS="--id volumes" or whatever ceph client you'd like cinder to use?
[21:14] <xarses> joshd, yes and it works fine with root
[21:15] <joshd> apparmor or selinux restricting access to the keyring or ceph.conf maybe?
[21:18] <xarses> rhel so no apparmor and selinux is disabled
[21:18] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[21:19] <xarses> some one on another channel mentioned it might be rootwrap
[21:19] <xarses> but i have no idea how to update the config
[21:19] <joshd> it wouldn't be rootwrap - none of the rbd calls need to run as root
[21:20] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[21:20] <joshd> no logs at all from cinder-volume is pretty suspicious - dmick's suggestion of strace might give a better idea of what's actually going on
[21:23] <xarses> ya, working on getting that up, there's a lot of failed opens, but they don't appear to be fatal. so I'm not sure
[21:24] <dmick> pastebin the last 200-300 lines and I can look
[21:25] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[21:25] <ccourtaut> yehudasa: hi
[21:26] <ccourtaut> i still got a problem with my testing around radosgw-agent
[21:26] <xarses> ok
[21:26] <xarses> apparently with syslog on you lose all of the stack traces from the logs
[21:26] <ccourtaut> i just found out about the rgw-usage-log-flush-threshold and the rgw_usage_log_tick_interval
[21:27] <xarses> Stdout: ''
[21:27] <xarses> Stderr: "2013-08-26 19:24:45.428127 7f8afcbd5760 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication\n2013-08-26 19:24:45.428131 7f8afcbd5760 0 librados: client.admin initialization error (2) No such file or directory\ncouldn't connect to cluster! error -2\n"
[21:27] <ccourtaut> i was wondering why mdlog list was returning []
[21:27] <dmick> xarses: so is the keyring missing, or unreadable as Josh suggested?
[21:27] <ccourtaut> so i found out that the cache hadn't yet been flushed to rados
[21:28] <ccourtaut> but if i set low values for these options, my s3cmd mb doesn't work anymore
[21:29] <xarses> dmick: why is it trying to use client.admin (which explains why root works)?
[21:29] <xarses> volume_driver=cinder.volume.drivers.rbd.RBDDriver
[21:29] <xarses> rbd_user=volumes
[21:29] <xarses> rbd_pool=volumes
[21:29] <xarses> rbd_secret_uuid=<something>
[21:29] <xarses> glance_api_version=2
[21:31] <ccourtaut> yehudasa: neverming, seems that the processes aren't running
[21:31] * ccourtaut trying to figure out why
[21:31] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[21:33] <dmick> xarses: id comes from CEPH_ARGS, as joshd said
[21:34] * danieagle (~Daniel@177.97.251.212) has joined #ceph
[21:36] * wogri_risc (~Adium@85.233.126.167) Quit (Quit: Leaving.)
[21:37] <xarses> dmick, ok that helps from the command line
[21:37] <xarses> no i need to figure out why its initscript doesn't use this like it should
[21:37] <xarses> no/now
[21:38] <dmick> export?
[21:38] <xarses> cat /etc/sysconfig/openstack-cinder-volume
[21:38] <xarses> CEPH_ARGS="--id volumes"
[21:40] <dmick> right. maybe you need to export CEPH_ARGS, is what I meant by "export"
[21:40] <ccourtaut> yehudasa: got my setup right now, was a problem in my script
[21:40] <xarses> the init script is supposed to source it
[21:40] <ccourtaut> btw i still have nothing displayed on my cluster using radosgw-admin mdlog list
[21:40] <ccourtaut> after creating a bucket and adding a file into it
[21:41] <ccourtaut> is this a usual behaviour?
[21:47] <xarses> dmick, joshd: thanks
[21:47] <xarses> i had to switch to export CEPH_ARGS
[21:47] <xarses> the debian example of env doesn't work
[21:47] <xarses> in rhel the same
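The working form xarses arrived at, as a sketch; the file path is from the transcript, and the key point is the export so the daemon's environment actually inherits the variable:

    # /etc/sysconfig/openstack-cinder-volume
    export CEPH_ARGS="--id volumes"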
[21:48] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:50] * houkouonchi-work (~linux@gw.sepia.ceph.com) has joined #ceph
[21:51] <dmick> xarses: where is the example?
[21:51] <xarses> http://ceph.com/docs/next/rbd/rbd-openstack/
[21:52] <xarses> just above "restart openstack"
[21:52] * allsystemsarego (~allsystem@5-12-37-127.residential.rdsnet.ro) Quit (Quit: Leaving)
[21:53] <xarses> if you're going to issue a fix for it, the filename is different too
[21:54] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[22:10] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Quit: Ex-Chat)
[22:12] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:40] <yehudasa> ccourtaut: did you turn on the logging for the zone?
[22:40] <dmick> xarses: would you care to file an issue?
[22:42] <xarses> dmick: sure, on which tracker?
[22:42] <dmick> tracker.ceph.com, Documentation
[22:42] <xarses> ty
[22:45] * danieagle (~Daniel@177.97.251.212) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[22:50] <ccourtaut> yehudasa: which option do you refer to?
[22:52] <yehudasa> ccourtaut: actually it's set on and off by region, just do 'radosgw-admin region get', modify the correct field, then 'radosgw-admin region put'
[22:52] <yehudasa> .. then restart gateway
[22:52] <ccourtaut> yehudasa: what is the field you refer to?
[22:53] <ccourtaut> it doesn't appear when i do a region get
[22:53] <ccourtaut> oh
[22:53] <ccourtaut> sry
[22:53] <ccourtaut> yes it's there
[22:53] <yehudasa> cool
[22:53] <ccourtaut> is it mandatory to restart rgw?
[22:53] <yehudasa> yes
[22:53] <yehudasa> (currently)
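A sketch of the get/edit/put cycle yehudasa describes; the exact log field names in the region JSON, and feeding it back via stdin, are assumptions:

    radosgw-admin region get > region.json
    # edit region.json to enable the logs, e.g. (field names assumed):
    #   "log_meta": "true",
    #   "log_data": "true",
    radosgw-admin region put < region.json
    # then restart the gateway, which is currently required for the change to apply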
[22:54] <ccourtaut> ok, i'll try to figure out a way to do this
[22:54] <ccourtaut> with vstart
[22:54] <ccourtaut> or to fix it :)
[22:54] <ccourtaut> thanks btw, helpful as usual
[23:08] * vata (~vata@2607:fad8:4:6:dc15:99b6:ac8:2490) Quit (Quit: Leaving.)
[23:11] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Remote host closed the connection)
[23:18] <xarses> dmick: created http://tracker.ceph.com/issues/6128
[23:18] <gregaf> sagewk: okay, I think the failed 6029 runs might just have been failing to propagate user versions on split pgs
[23:19] <sagewk> ah
[23:19] <dmick> tnx xarses
[23:19] <gregaf> I'd love suggestions for making those misses less likely, but the pg versions seem to have the same "fix all the places" habits
[23:20] <xarses> I also copied over the glance --location issue i was having http://tracker.ceph.com/issues/6129
[23:20] <dmick> that...doesn't seem to be about export CEPH_ARGS?
[23:20] <gregaf> I'm also not sure about sub_op_modify but that doesn't seem to have caused any failures so I think I'm just missing why it doesn't matter there
[23:20] <xarses> oops transposed numbers
[23:20] <xarses> http://tracker.ceph.com/issues/6127
[23:21] <dmick> ah. that makes more sense :)
[23:21] <xarses> 6128 is the glance --location issue
[23:21] <alfredodeza> issue 6128
[23:21] <kraken> alfredodeza might be talking about: http://tracker.ceph.com/issues/6128 [glance image-create with rbd --location fails to create image in rdb]
[23:21] <alfredodeza> issue 6127
[23:21] <kraken> alfredodeza might be talking about: http://tracker.ceph.com/issues/6127 [CEPH_ARGS example for RHEL]
[23:24] * yehudasa_ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[23:30] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[23:30] * markbby (~Adium@168.94.245.4) has joined #ceph
[23:34] * ScOut3R (~scout3r@54026B73.dsl.pool.telekom.hu) has joined #ceph
[23:40] * alfredodeza is now known as alfredo|afk
[23:43] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[23:44] <ishkabob> hey ceph devs, i think there is a problem with your dependencies in the current version of ceph-deploy for FC19 (and probably other redhat variants)
[23:44] <ishkabob> the package ceph-deploy explicitly requires python-pushy >= 0.5.3
[23:45] <ishkabob> but the pushy package is simply named "pushy" (and is the correct version)
[23:47] <xarses> it works for me in RHEL6 using the rpm python-pushy
[23:47] <xarses> but with the older ceph-deploy 1.0.0
[23:48] <xarses> i heard something previously about the current version being able to handle it
[23:48] <ishkabob> its not a problem with the actual software, it's just the spec file that was generated when the RPM was built
[23:48] <ishkabob> i looked at the ceph-deploy git repo, the current spec file looks fine, but whatever was used to generate these RPMs is wrong
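One way to see the mismatch ishkabob describes, as a sketch:

    # the dependency the package declares...
    rpm -qp --requires ceph-deploy-*.rpm | grep -i pushy
    # ...versus the package name the repo actually ships
    yum list available | grep -i pushy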
[23:48] * madkiss (~madkiss@tmo-096-227.customers.d1-online.com) has joined #ceph
[23:49] <ishkabob> xarses: if you were to try to upgrade to the latest ceph-deploy, I don't think it would let you
[23:49] <xarses> ya, the RHEL6x repos have the atom as python-pushy in the rpm
[23:50] <ishkabob> yeah, its the same as FC19 no doubt
[23:50] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[23:50] <ishkabob> if I read this spec file correctly, there are very few differences between the redhat variants
[23:50] <xarses> yes, there usually are few
[23:51] <ishkabob> anyway, if you regenerate the RPMs with the current code in ceph-deploy it will probably work fine
[23:51] <ishkabob> i would submit a pull-request, but there's nothing to fix, just re-run the RPM generation :)
[23:57] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.