#ceph IRC Log


IRC Log for 2013-12-20

Timestamps are in GMT/BST.

[0:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:04] <iggy> Pedras: that depends on how you are accessing ceph
[0:04] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[0:04] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[0:04] <Pedras> sorry fellas...
[0:05] <Pedras> pasted that on the wrong window ahaha
[0:05] <Pedras> iggy: only noticed when you replied
[0:06] <iggy> my comment still stands
[0:06] <Pedras> :)
[0:07] * mwarwick (~mwarwick@2407:7800:400:1011:3e97:eff:fe91:d9bf) has left #ceph
[0:08] <iggy> if you are using librbd to access ceph, it's basically the same as passing O_DIRECT
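To make iggy's point concrete: with a librbd-backed qemu drive the caching mode is chosen per drive, and cache=none behaves comparably to O_DIRECT (a sketch; the pool/image names are placeholders):

    qemu-system-x86_64 ... \
        -drive format=rbd,file=rbd:mypool/myimage,cache=none   # bypasses the librbd cache
    # cache=writeback would enable librbd client-side caching instead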
[0:09] * diegows (~diegows@190.190.17.57) Quit (Ping timeout: 480 seconds)
[0:11] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[0:13] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[0:17] * DarkAce-Z (~BillyMays@50-32-42-176.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[0:18] <alphe> there are a ton of error messages with ceph-deploy osd create
[0:18] <alphe> it doesn't find the partition if you give a disk name
[0:18] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[0:19] <alphe> it doesn't find the keyring after creating it ...
[0:19] <alphe> the monitors are ok but why is it not working now?
[0:20] <alphe> why is the /var/lib/ceph directory not created?
[0:21] * sjm (~Adium@pool-96-234-124-66.nwrknj.fios.verizon.net) has joined #ceph
[0:21] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[0:21] * Dark-Ace-Z (~BillyMays@50-32-42-157.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[0:22] * DarkAceZ (~BillyMays@50-32-22-236.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[0:22] * dmsimard (~Adium@108.163.152.2) Quit (Ping timeout: 480 seconds)
[0:22] * sjm (~Adium@pool-96-234-124-66.nwrknj.fios.verizon.net) Quit ()
[0:22] * sjm (~Adium@rtp-isp-nat1.cisco.com) has joined #ceph
[0:24] * sprachgenerator (~sprachgen@130.202.135.213) Quit (Quit: sprachgenerator)
[0:26] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[0:27] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:27] * DarkAce-Z (~BillyMays@50-32-42-176.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[0:30] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[0:31] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[0:31] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Read error: Connection reset by peer)
[0:32] * Dark-Ace-Z (~BillyMays@50-32-42-157.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Operation timed out)
[0:33] * sarob_ (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Quit: Leaving...)
[0:33] <alphe> ok solved the issue by creating manually /var/lib/ceph/bootstrap-osd and /var/lib/ceph/tmp
[0:33] * DarkAce-Z (~BillyMays@50-32-48-109.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[0:33] <alphe> don't know why the osd prepare doesn't create the needed directories in /var/lib/ceph
[0:35] <alphe> oooooooooooh !!!
[0:35] <alphe> it is the installation of ceph that creates those dirs !
[0:35] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:35] <alphe> apt-get install ceph-common ...
[0:36] <alphe> after a purgedata I should run the apt-get install !
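The sequence alphe lands on here, condensed (a sketch; the hostnames are placeholders — purgedata removes /var/lib/ceph, and reinstalling the packages recreates the expected directories):

    ceph-deploy purgedata node1 node2   # wipes /var/lib/ceph and ceph data
    ceph-deploy install node1 node2     # package install recreates /var/lib/ceph and its subdirs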
[0:36] <andreask> hey ... anyone have an idea how I can "fix" the hostname .... osds came up when resolution did not work and now say "localhost" in ceph config show
[0:39] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Remote host closed the connection)
[0:39] * sjm (~Adium@rtp-isp-nat1.cisco.com) has left #ceph
[0:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:39] <alphe> edit config file /etc/ceph/ceph.conf
[0:40] <alphe> make the changes needed and propagate it to all the ceph nodes in your ceph cluster
[0:40] <alphe> have fun
[0:40] <dmick> when you say "ceph config show"....
[0:40] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[0:42] <andreask> omg ..... "puplic network" ...
[0:42] * wwang001 (~wwang001@fbr.reston.va.neto-iss.comcast.net) Quit (Remote host closed the connection)
[0:45] <robbat2> who can activate my openid account on the Ceph tracker?
[0:45] <robbat2> username robbat2
[0:45] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) Quit (Read error: Operation timed out)
[0:46] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[0:47] * DarkAce-Z (~BillyMays@50-32-48-109.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[0:50] <andreask> dmick: any idea how to get the correct hostname without adding it manually to ceph.conf? .... I restarted all daemons after correction of "puplic network" typo
[0:50] <dmick> robbat2: you should have gotten email. check your spam folder.
[0:51] <dmick> andreask: where are you seeing the wrong hostname exactly
[0:51] <andreask> ceph --admin-daemon /var/run/ceph/ceph-osd.30.asok config show
[0:51] <andreask> dmick: ^^^
[0:51] <dmick> I...don't even know what that shows. looking.
[0:52] <andreask> there is host: localhost .... and also crush-map is crap with that
[0:52] <dmick> so crush you can definitely update
[0:53] <dmick> http://ur1.ca/g83dx
[0:53] <dmick> (where you replace {host}, obviously)
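The shortened URL's content is not preserved here, but the usual shape of the command dmick is pointing at is something like this (an assumption about the link; the weight and names are placeholders):

    ceph osd crush set osd.30 1.0 root=default host={host}   # re-place the osd under the right host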
[0:54] <dmick> I honestly don't know what the host: value in config is used for, if anything
[0:54] <andreask> dmick ... there is no host at all in the crush-map
[0:54] <dmick> uh...what?
[0:54] <andreask> only osds
[0:54] <dmick> ok, I guess I have no idea what your cluster config is then
[0:55] <robbat2> dmick, and with an openid registration, how exactly would it have emailed me? the openid spec by definition would not have given out my email to the server
[0:55] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[0:55] <dmick> robbat: missed the openid part. that's broken AFAIK
[0:55] <dmick> ^robbat2
[0:56] <robbat2> ok, then I really need an admin to fix something
[0:56] <robbat2> if I try to register, robbat2@gentoo.org, it tells me it already exists
[0:56] <robbat2> if I try lost password
[0:56] <robbat2> it tells me it doesn't exist
[0:57] <robbat2> "Unknown user"
[0:57] <dmick> I mean, I can try activating it, but I doubt the openid will work; if it doesn't I can delete it and you can try again
[0:57] <dmick> activated; see if that helps
[0:58] <robbat2> and my account works now :-)
[0:58] <dmick> cool; I guess it's not broken
[0:58] <dmick> I was told it was and had tried and failed; maybe redmine got better since then
[0:58] <robbat2> just that you cannot rely on openid making the email address available
[1:00] * vata (~vata@2607:fad8:4:6:d594:116f:e4be:aece) Quit (Quit: Leaving.)
[1:04] <AfC> following up the question I asked yesterday, we did some code diving and it would seem the limit on the length of object ID strings is 4096 chars [bytes] and that rados will reject it if longer. So there we are.
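A quick way to reproduce the limit AfC describes (a sketch; the pool name and the exact error text are assumptions):

    # build an object name longer than 4096 bytes and try to store it
    NAME=$(printf 'a%.0s' $(seq 1 5000))
    echo hello > /tmp/obj
    rados -p rbd put "$NAME" /tmp/obj   # expect rados to reject the over-long name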
[1:07] <robbat2> dmick, thanks for that. I have my issue ticket (#7043) created now, so that Yehuda will merge my fix
[1:07] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Quit: Something came up.)
[1:07] <dmick> robbat2: no worries
[1:07] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[1:12] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) has joined #ceph
[1:13] <Discard> hi there just want to know how to assign a folder to an osd
[1:20] <houkouonchi-work> robbat2: well openID is working for me but I have heard some people have had issues with it
[1:20] <houkouonchi-work> I think it figures out your email from the openID API so you should still get the registration email I would expect
[1:23] <houkouonchi-work> robbat2: looks like the problem stemmed from emails being 450 status (deferred) when sending to your email server
[1:24] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) has joined #ceph
[1:25] <houkouonchi-work> robbat2: apparently gentoo.org checks to see if the sending server is listening on port 25, which it would be if it was postfix, but it's just using sm-mta to send mail as it does not receive anything. apparently gentoo defers mail because of this
[1:25] <houkouonchi-work> rBJNtvuX006111: to=<robbat2@gentoo.org>, delay=00:05:54, xdelay=00:00:00, mailer=esmtp, pri=212641, relay=mail.gentoo.org. [140.211.166.183], dsn=4.1.7, stat=Deferred: 450 4.1.7 <redmine@tracker.ceph.com>: Sender address rejected: unverified address: connect to tracker.ceph.com[64.90.32.38]:25: Connection refused
[1:26] <robbat2> yes, at gentoo we used sender-verification
[1:27] <houkouonchi-work> maybe I will switch it over to postfix so you can get update emails
[1:27] <robbat2> you just need to make sure that address redmine is sending with can accept emails
[1:27] <robbat2> from outside
[1:27] <houkouonchi-work> yeah well I guess I can just blackhole it too
[1:28] <robbat2> so if you need an address that you don't care about
[1:28] <robbat2> what we do, is to accept MAIL/RCPT, but deny after DATA
[1:28] <houkouonchi-work> yeah well I don't want to change it in case someone is already doing filtering based off the sender, so I can just blackhole that email to somewhere
[1:28] <robbat2> it passes sender-verification, but still makes it clear that the address should not be sent emails
[1:29] <robbat2> other than bounces
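A postfix-flavoured sketch of the accept-MAIL/RCPT-but-deny-after-DATA trick robbat2 describes (illustrative only; the map name and the exact postfix mechanics are assumptions):

    # /etc/postfix/main.cf -- reject selected senders only at the DATA stage,
    # so sender-verification probes (which stop before DATA) still succeed
    smtpd_data_restrictions = check_sender_access hash:/etc/postfix/bounce_only_senders

    # /etc/postfix/bounce_only_senders
    #   redmine@tracker.ceph.com  REJECT this address only accepts bounces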
[1:29] * zhyan_ (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[1:29] <robbat2> unrelated now, has anybody run coverage/dead-code tests on radosgw?
[1:30] * cronix (~cronix@5.199.139.166) has joined #ceph
[1:30] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)
[1:31] <robbat2> https://github.com/ceph/ceph/blob/master/src/rgw/rgw_common.cc#L403 <-- hex_str is assigned values, but then discarded
[1:32] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:34] * ircolle (~Adium@2601:1:8380:2d9:6c5f:7132:ca76:5b5d) Quit (Quit: Leaving.)
[1:36] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[1:39] * zhyan_ (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[1:39] * cronix (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[1:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:41] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[1:46] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit (Quit: Leaving.)
[1:51] * tsnider1 (~tsnider@198.95.226.40) has joined #ceph
[1:57] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[2:04] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has left #ceph
[2:07] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[2:08] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) Quit ()
[2:12] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[2:13] * sagelap (~sage@38.122.20.226) has joined #ceph
[2:13] * mozg (~andrei@host86-184-120-168.range86-184.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:15] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[2:17] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[2:19] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[2:24] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[2:24] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:28] * shang (~ShangWu@175.41.48.77) has joined #ceph
[2:29] * sagelap (~sage@253.sub-70-197-82.myvzw.com) has joined #ceph
[2:32] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:37] * angdraug (~angdraug@12.164.168.116) Quit (Quit: Leaving)
[2:38] * aliguori (~anthony@74.202.210.82) Quit (Quit: Ex-Chat)
[2:40] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Read error: Operation timed out)
[2:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:45] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[2:47] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[2:48] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[2:54] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[2:58] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[2:58] * Cube (~Cube@66-87-67-102.pools.spcsdns.net) has joined #ceph
[3:00] * xmltok (~xmltok@216.103.134.250) Quit (Quit: Leaving...)
[3:05] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[3:05] * sagelap (~sage@253.sub-70-197-82.myvzw.com) Quit (Read error: Connection reset by peer)
[3:05] * sagelap (~sage@253.sub-70-197-82.myvzw.com) has joined #ceph
[3:06] * sagelap1 (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[3:06] * sagelap (~sage@253.sub-70-197-82.myvzw.com) Quit (Read error: Connection reset by peer)
[3:08] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) has joined #ceph
[3:10] * clayb (~kvirc@proxy-ny1.bloomberg.com) Quit (Read error: Connection reset by peer)
[3:16] * xarses (~andreww@64-79-127-122.static.wiline.com) Quit (Ping timeout: 480 seconds)
[3:16] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[3:16] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[3:17] * jcsp1 (~jcsp@2607:f298:a:607:cd42:5518:5a2e:8ae1) Quit (Ping timeout: 480 seconds)
[3:18] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[3:19] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[3:20] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[3:20] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[3:21] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit ()
[3:25] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[3:26] * tsnider1 (~tsnider@198.95.226.40) Quit (Quit: Leaving.)
[3:27] * haomaiwang (~haomaiwan@117.79.232.187) has joined #ceph
[3:28] * xarses (~andreww@12.164.168.116) has joined #ceph
[3:29] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) has joined #ceph
[3:31] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:31] * sarob_ (~sarob@2001:4998:effd:600:75f2:9882:af6b:2d8f) has joined #ceph
[3:32] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:34] * haomaiwang (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[3:34] * haomaiwang (~haomaiwan@199.30.140.94) has joined #ceph
[3:35] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[3:35] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[3:39] * sarob_ (~sarob@2001:4998:effd:600:75f2:9882:af6b:2d8f) Quit (Ping timeout: 480 seconds)
[3:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:43] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[3:43] * Guest5305 (~coyo@thinks.outside.theb0x.org) Quit (Read error: Operation timed out)
[3:45] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[3:48] * haomaiwa_ (~haomaiwan@117.79.232.187) has joined #ceph
[3:48] * haomaiwa_ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[3:48] * haomaiwang (~haomaiwan@199.30.140.94) Quit (Read error: Connection reset by peer)
[3:48] * haomaiwang (~haomaiwan@199.30.140.94) has joined #ceph
[3:53] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:58] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[3:59] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[4:03] * xarses (~andreww@12.164.168.116) Quit (Ping timeout: 480 seconds)
[4:07] * i_m (~ivan.miro@deibp9eh1--blueice4n1.emea.ibm.com) Quit (Quit: Leaving.)
[4:09] * doubt (~doubt@188.241.112.26) has joined #ceph
[4:09] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[4:09] <doubt> hi guys...
[4:09] * sarob (~sarob@2001:4998:effd:600:1593:140c:ba6e:170c) has joined #ceph
[4:09] <doubt> any docs that i can read related to CEPH and ovirt?
[4:10] * haomaiwa_ (~haomaiwan@117.79.232.155) has joined #ceph
[4:10] <doubt> I'm looking for a howto/doc on using ceph with ovirt...
[4:10] * sarob_ (~sarob@2001:4998:effd:600:b860:1d93:fe3f:3b22) has joined #ceph
[4:17] * haomaiwang (~haomaiwan@199.30.140.94) Quit (Ping timeout: 480 seconds)
[4:17] * sarob (~sarob@2001:4998:effd:600:1593:140c:ba6e:170c) Quit (Ping timeout: 480 seconds)
[4:22] * sarob_ (~sarob@2001:4998:effd:600:b860:1d93:fe3f:3b22) Quit (Remote host closed the connection)
[4:22] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:26] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:26] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[4:28] * sarob_ (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[4:28] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:38] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[4:40] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[4:41] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[4:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:41] * sarob_ (~sarob@2001:4998:effd:600:c0da:a98:9d75:8b2) has joined #ceph
[4:41] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[4:58] * diegows (~diegows@190.190.17.57) has joined #ceph
[5:02] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[5:02] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Remote host closed the connection)
[5:02] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Read error: Connection reset by peer)
[5:02] * Shmouel1 (~Sam@ns1.anotherservice.com) Quit (Read error: Connection reset by peer)
[5:03] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[5:03] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[5:03] * Shmouel (~Sam@ns1.anotherservice.com) has joined #ceph
[5:05] * fireD (~fireD@93-142-245-129.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD_ (~fireD@93-139-139-129.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:07] * diegows (~diegows@190.190.17.57) Quit (Read error: Operation timed out)
[5:10] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[5:10] * shang_ (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[5:10] * shang (~ShangWu@175.41.48.77) has joined #ceph
[5:21] * nregola_comcast (~nregola_c@fw01.300crls-pitt.pa.trr.comcast.net) has joined #ceph
[5:22] * nregola_comcast (~nregola_c@fw01.300crls-pitt.pa.trr.comcast.net) has left #ceph
[5:27] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[5:28] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[5:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:36] * Vacum (~vovo@i59F7A48F.versanet.de) has joined #ceph
[5:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:43] * Vacum_ (~vovo@88.130.202.167) Quit (Ping timeout: 480 seconds)
[5:51] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:51] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:58] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:02] <iggy> doubt: I suspect you'll have more luck just finding docs on ceph+libvirt and filling in the blanks
[6:02] <iggy> but I'd be surprised if the version of qemu in ovirt supported ceph
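A quick check for iggy's caveat (a sketch; the grep target assumes qemu lists rbd among its supported formats when built with librbd):

    qemu-img --help | grep -q rbd && echo "rbd supported" || echo "no rbd support"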
[6:09] * ScOut3R (~ScOut3R@4E5C7421.dsl.pool.telekom.hu) has joined #ceph
[6:15] * hemantb (~hemantb@117.192.241.114) has joined #ceph
[6:15] * lx0 is now known as lxo
[6:19] * ScOut3R (~ScOut3R@4E5C7421.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[6:28] * hemantb (~hemantb@117.192.241.114) Quit (Quit: hemantb)
[6:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:42] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[6:51] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:54] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[6:55] * KindTwo (~KindOne@198.14.201.60) has joined #ceph
[6:56] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) Quit (Quit: Leaving.)
[6:56] * sarob_ (~sarob@2001:4998:effd:600:c0da:a98:9d75:8b2) Quit (Ping timeout: 480 seconds)
[6:58] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:58] * KindTwo is now known as KindOne
[7:00] * illya (~illya_hav@16-158-133-95.pool.ukrtel.net) has joined #ceph
[7:00] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[7:00] <illya> hi
[7:01] <illya> MDS starts to fail on startup
[7:01] <illya> with the following
[7:01] <illya> http://pastebin.com/iGzLbU6a
[7:01] <illya> any ideas ?
[7:01] <illya> thx
[7:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:07] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:07] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[7:07] * mnash_ (~chatzilla@vpn.expressionanalysis.com) has joined #ceph
[7:09] * jhurlbert_ (~jhurlbert@216.57.209.252) has joined #ceph
[7:10] * athrift_ (~nz_monkey@203.86.205.13) has joined #ceph
[7:10] * [caveman] (~quassel@boxacle.net) has joined #ceph
[7:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:12] * Azrael_ (~azrael@terra.negativeblue.com) has joined #ceph
[7:12] * xmir- (~xmeer@cm-84.208.159.149.getinternet.no) has joined #ceph
[7:12] * cce_ (~cce@50.56.54.167) has joined #ceph
[7:12] * jerker_ (jerker@Psilocybe.Update.UU.SE) has joined #ceph
[7:12] * MapspaM (~clint@xencbyrum2.srihosting.com) has joined #ceph
[7:12] * via_ (~via@smtp2.matthewvia.info) has joined #ceph
[7:12] * jeffhung_ (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[7:12] * stj_ (~s@tully.csail.mit.edu) has joined #ceph
[7:12] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * stj (~s@tully.csail.mit.edu) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * jhurlbert (~jhurlbert@216.57.209.252) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * cce (~cce@50.56.54.167) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * Azrael (~azrael@terra.negativeblue.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * houkouonchi-home (~linux@66-215-209-207.dhcp.rvsd.ca.charter.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * djezz (~jasper.si@target15.rcitlab.rug.nl) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * terje_ (~joey@184-96-157-197.hlrn.qwest.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * Anticimex (anticimex@95.80.32.80) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * joelio (~Joel@88.198.107.214) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * bkero (~bkero@216.151.13.66) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * athrift (~nz_monkey@203.86.205.13) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * Gugge-47527 (gugge@kriminel.dk) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * jerker (jerker@82ee1319.test.dnsbl.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * _nick (~nick@digo.dischord.org) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * xmir (~xmeer@cm-84.208.159.149.getinternet.no) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * plantain (~plantain@106.187.96.118) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * Norby (~norby@bender.gigo.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * via (~via@smtp2.matthewvia.info) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * jpds (~jpds@00014011.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * chris38 (~chris38@193.49.124.64) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * loicd (~loicd@bouncer.dachary.org) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * [cave] (~quassel@boxacle.net) Quit (resistance.oftc.net oxygen.oftc.net)
[7:12] * jhurlbert_ is now known as jhurlbert
[7:13] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[7:13] * jpds_ (~jpds@91.189.93.33) has joined #ceph
[7:13] * djezz (~jasper.si@target15.rcitlab.rug.nl) has joined #ceph
[7:13] * Norby (~norby@bender.gigo.com) has joined #ceph
[7:13] * joelio (~Joel@88.198.107.214) has joined #ceph
[7:13] * mnash_ is now known as mnash
[7:13] * terje_ (~joey@184-96-157-197.hlrn.qwest.net) has joined #ceph
[7:13] * plantain (~plantain@106.187.96.118) has joined #ceph
[7:13] * i_m (~ivan.miro@217.26.6.147) has joined #ceph
[7:13] * loicd (~loicd@bouncer.dachary.org) has joined #ceph
[7:14] * nick (~nick@digo.dischord.org) has joined #ceph
[7:14] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[7:14] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[7:15] * sarob (~sarob@2001:4998:effd:600:1c7e:1b7e:b3bb:45b0) has joined #ceph
[7:17] * chris38 (~chris38@193.49.124.64) has joined #ceph
[7:17] * houkouonchi-home (~linux@houkouonchi-1-pt.tunnel.tserv15.lax1.ipv6.he.net) has joined #ceph
[7:18] * haomaiwa_ (~haomaiwan@117.79.232.155) Quit (Remote host closed the connection)
[7:18] * bkero (~bkero@216.151.13.66) has joined #ceph
[7:18] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[7:19] * haomaiwang (~haomaiwan@117.79.232.155) has joined #ceph
[7:19] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[7:23] * sarob (~sarob@2001:4998:effd:600:1c7e:1b7e:b3bb:45b0) Quit (Ping timeout: 480 seconds)
[7:26] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[7:27] * haomaiwang (~haomaiwan@117.79.232.155) Quit (Remote host closed the connection)
[7:28] * haomaiwang (~haomaiwan@117.79.232.187) has joined #ceph
[7:29] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[7:29] * jjgalvez (~jjgalvez@12.204.99.166) has joined #ceph
[7:32] * hemantb (~hemantb@182.71.241.130) has joined #ceph
[7:33] * haomaiwang (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[7:34] * haomaiwang (~haomaiwan@117.79.232.187) has joined #ceph
[7:37] * haomaiwang (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[7:38] * haomaiwang (~haomaiwan@199.30.140.94) has joined #ceph
[7:40] * haomaiwa_ (~haomaiwan@211.155.113.224) has joined #ceph
[7:41] * haomaiwa_ (~haomaiwan@211.155.113.224) Quit (Remote host closed the connection)
[7:41] * haomaiwa_ (~haomaiwan@117.79.232.187) has joined #ceph
[7:45] * haomaiwa_ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[7:46] * haomaiwa_ (~haomaiwan@117.79.232.187) has joined #ceph
[7:47] * haomaiwang (~haomaiwan@199.30.140.94) Quit (Ping timeout: 480 seconds)
[7:50] * KindTwo (KindOne@h38.42.28.71.dynamic.ip.windstream.net) has joined #ceph
[7:51] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:51] * KindTwo is now known as KindOne
[7:56] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:19] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[8:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:25] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:27] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[8:29] * i_m (~ivan.miro@217.26.6.147) Quit (Ping timeout: 480 seconds)
[8:33] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:34] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) Quit (Quit: Leaving)
[8:36] * thomnico (~thomnico@2a01:e35:8b41:120:9c45:ce56:c68:8672) has joined #ceph
[8:42] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[8:45] * Sysadmin88 (~IceChat77@90.208.9.12) Quit (Quit: Few women admit their age. Few men act theirs.)
[8:45] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[8:45] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[8:59] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:02] * i_m (~ivan.miro@217.26.6.147) has joined #ceph
[9:06] * i_m1 (~ivan.miro@217.26.6.147) has joined #ceph
[9:06] * i_m (~ivan.miro@217.26.6.147) Quit (Read error: Connection reset by peer)
[9:15] * jjgalvez (~jjgalvez@12.204.99.166) Quit (Quit: Leaving.)
[9:28] * sleinen (~Adium@2001:620:0:26:ddf2:dbdb:23e:376) has joined #ceph
[9:29] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (Quit: Leaving)
[9:31] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:33] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[9:38] * ntranger_ (~ntranger@proxy2.wolfram.com) has joined #ceph
[9:39] * ntranger (~ntranger@proxy2.wolfram.com) Quit (Ping timeout: 480 seconds)
[9:41] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[9:42] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:46] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[9:54] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[9:59] * haomaiwa_ (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[9:59] * haomaiwang (~haomaiwan@117.79.232.187) has joined #ceph
[10:00] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:00] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:01] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[10:01] * ChanServ sets mode +v andreask
[10:04] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:06] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:06] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) has joined #ceph
[10:10] * rendar (~s@host226-182-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[10:10] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:17] * i_m1 (~ivan.miro@217.26.6.147) Quit (Ping timeout: 480 seconds)
[10:20] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:21] * Siva (~sivat@generalnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[10:22] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[10:24] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:25] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[10:25] * sha (~kvirc@81.17.168.194) has joined #ceph
[10:26] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:26] <sha> Hi. Can anybody tell me why the "reweight" field of osd.3 is not 1? http://pastebin.com/GnBQbQfR
[10:28] <sha> also, why has degradation stopped (ceph -w) --->http://pastebin.com/FN8W5Gng
[10:29] <sha> ceph health detail -->http://pastebin.com/y2xuXndw
[10:29] <sha> ceph osd crush tunables optimal
[10:30] <sha> it looks like ceph stopped replicating
[10:31] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[10:32] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:32] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) Quit (Read error: Connection reset by peer)
[10:32] * perfectsine (~perfectsi@if01-gn01.dal05.softlayer.com) has joined #ceph
[10:35] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:35] * garphy`aw is now known as garphy
[10:38] * LeaChim (~LeaChim@host81-159-251-38.range81-159.btcentralplus.com) has joined #ceph
[10:41] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[10:44] * garphy is now known as garphy`aw
[10:47] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:47] * garphy`aw is now known as garphy
[10:50] * Guest823 (~Isaaac@109.89.64.35) has joined #ceph
[10:51] * Guest823 (~Isaaac@109.89.64.35) Quit ()
[11:10] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[11:22] <Gugge-47527> sha: because someone changed the weight, what does ceph health detail say?
[11:23] <sha> Gugge-47527: http://pastebin.com/y2xuXndw
[11:23] <Gugge-47527> sha: one reason it would stop replicating is if the osd's are too full
[11:25] <Gugge-47527> but i think ceph health detail should show that
[11:25] <sha> Gugge-47527: yes we deleted some data and now ceph -w http://pastebin.com/etcQMRB3
[11:25] <Gugge-47527> it seems osd.3 is in all the pg's listed
[11:25] <sha> Gugge-47527: degraded (-0.00XX). why is it negative?
[11:25] <Gugge-47527> i would try to restart that osd
[11:26] <sha> Gugge-47527: we restarted it already
[11:26] <Gugge-47527> did you restart the 2 and 4 too?
[11:26] <Discard> hey Gugge-47527 ! thanks for your help again yesterday, I've stopped using ceph-deploy to install mon and osd and created my own script ! and now all is ok
[11:27] <Gugge-47527> Discard: great :)
[11:27] <Discard> ceph-deploy is a little bit buggy on my config
[11:27] <sha> Gugge-47527: why the negative values for degradation???
[11:27] * haomaiwang (~haomaiwan@117.79.232.187) Quit (Remote host closed the connection)
[11:28] <Gugge-47527> sha: because ceph, like all software, has errors :)
[11:28] * haomaiwang (~haomaiwan@123.151.28.75) has joined #ceph
[11:29] <Gugge-47527> i would be more worried about the 3 degraded pg's that only seem to want osd.1
[11:29] <sha> Gugge-47527: yes...but I see a growing free space
[11:30] <sha> Gugge-47527: http://pastebin.com/173dFvC0
[11:30] <Gugge-47527> that is expected, when you delete data :)
[11:30] <sha> Gugge-47527: ok, will wait. 1 TB erased
[11:31] <Gugge-47527> i think i would restart all the osd's and maybe reweight them all a bit, to kickstart a remap of the degraded pg's
[11:32] <Gugge-47527> sha: can you paste "ceph pg dump" ?
[11:32] <Discard> Gugge-47527: just a question, MDS is only for the cephfs file system, not for block storage or for the object gateway (swift/S3), right ?
[11:32] <Gugge-47527> mds is only cephfs yes
[11:32] <Discard> ok
[11:32] <Gugge-47527> it handles cephfs metadata
[11:33] <Gugge-47527> ive never setup an mds on my clusters
[11:33] * Machske (~Bram@d5152D87C.static.telenet.be) Quit (Ping timeout: 480 seconds)
[11:33] <sha> Gugge-47527: http://pastebin.com/f5ppzTu0
[11:34] <Gugge-47527> are you sure that is all?
[11:34] <Discard> Gugge-47527: héhé, are you using ceph as an Object gateway ?
[11:34] <Gugge-47527> im only using rbd
[11:34] <sha> Gugge-47527: crushmap http://pastebin.com/eL7AbjTY
[11:34] <Gugge-47527> does ceph pg dump really only show active+clean pg's?
[11:35] <Gugge-47527> or did you only paste the last bit of it? :)
[11:36] * haomaiwang (~haomaiwan@123.151.28.75) Quit (Ping timeout: 480 seconds)
[11:36] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) has joined #ceph
[11:37] <sha> Gugge-47527: http://pastebin.com/B8VsxZyd
[11:37] <Discard> Gugge-47527: ok, to export fs to your vms
[11:39] <Discard> Gugge-47527: for me it's just a big file gateway with a lot of media objects
[11:43] <sha> Gugge-47527: after service ceph-a restart http://pastebin.com/HUNBEBNt
[11:49] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[11:54] <sha> how can we reweight osd.3 from 0.7844 to 1? http://pastebin.com/2wL8X98w
[11:54] <Gugge-47527> ceph reweight
[11:54] <Gugge-47527> or ceph something :)
[11:54] <Gugge-47527> ceph -h|grep reweight
[11:54] <sha> lol
[11:54] <Gugge-47527> im not infront of a ceph cli right now :P
[12:06] <sha> Gugge-47527: ceph osd reweight 3 1.0
[12:10] * diegows (~diegows@190.190.17.57) has joined #ceph
[12:14] * Hakisho (~Hakisho@0001be3c.user.oftc.net) has joined #ceph
[12:14] <Gugge-47527> sha: sounds right.
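Worth keeping the two reweight commands apart here, since they do different things (the weights shown are examples):

    ceph osd reweight 3 1.0             # in/out override, range 0..1 -- what sha used
    ceph osd crush reweight osd.3 1.0   # persistent CRUSH weight, commonly ~ disk size in TB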
[12:15] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) has joined #ceph
[12:15] <sha> Gugge-47527: yes, it helped us... seems all will be alright after the degradation ends
[12:16] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) has joined #ceph
[12:20] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) Quit ()
[12:22] * alram (~alram@ip-47.net-89-3-14.rev.numericable.fr) Quit (Quit: leaving)
[12:24] <Discard> Gugge-47527: when I don't put a weight, will it be calculated automatically ?
[12:38] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) has joined #ceph
[12:39] <loicd> joao: will you organize a meetup in lisboa next year ?
[12:40] <joao> I sure will try
[12:40] <joao> I guess it's only a matter of finding people
[12:46] <madkiss> can I somehow find out what version of librados/librbd a qemu was built against?
[12:48] * haomaiwang (~haomaiwan@118.186.151.36) has joined #ceph
[12:50] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has joined #ceph
[12:50] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has left #ceph
[12:52] * haomaiwa_ (~haomaiwan@117.79.232.223) has joined #ceph
[12:54] * allsystemsarego (~allsystem@188.26.167.169) has joined #ceph
[12:56] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[12:56] * ChanServ sets mode +v andreask
[12:56] * haomaiwang (~haomaiwan@118.186.151.36) Quit (Ping timeout: 480 seconds)
[13:02] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:06] * Siva (~sivat@117.192.47.87) has joined #ceph
[13:13] <Discard> I've just added a new osd, do I have to wait for rebalancing to see the space added ? http://pastebin.com/0t3ivSqv
[13:16] * Siva (~sivat@117.192.47.87) Quit (Ping timeout: 480 seconds)
[13:17] <madkiss> i think i just ran into http://comments.gmane.org/gmane.comp.file-systems.ceph.user/4260
[13:18] <madkiss> ah
[13:18] <madkiss> kernel too old
[13:19] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:22] <madkiss> rbd -p test --image-format 1 create test --size 1024
[13:22] <madkiss> still creates a format v2 image
[13:23] <madkiss> what's the right command?
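One thing worth trying (a sketch, assuming the problem is the flag name or placement on that client version -- older rbd releases used --format instead of --image-format):

    rbd create test/test --size 1024 --image-format 1
    # or, on older clients:
    rbd create test/test --size 1024 --format 1
    rbd info test/test | grep format    # confirm it reports format: 1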
[13:24] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[13:37] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:42] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:43] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:43] <Discard> madkiss: sorry I can't help you, but wait for Gugge-47527, he is a pro in rbd systems :-)
[13:47] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:56] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[14:04] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[14:05] * stj_ is now known as stj
[14:05] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[14:07] * hjjg (~hg@p3EE323BB.dip0.t-ipconnect.de) has joined #ceph
[14:12] * haomaiwang (~haomaiwan@117.79.232.254) has joined #ceph
[14:12] * haomaiwa_ (~haomaiwan@117.79.232.223) Quit (Read error: Connection reset by peer)
[14:12] * haomaiwa_ (~haomaiwan@117.79.232.223) has joined #ceph
[14:13] * illya (~illya_hav@16-158-133-95.pool.ukrtel.net) Quit (Read error: Connection reset by peer)
[14:14] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) Quit (Ping timeout: 480 seconds)
[14:15] * haomaiwa_ (~haomaiwan@117.79.232.223) Quit (Remote host closed the connection)
[14:15] * haomaiwa_ (~haomaiwan@199.30.140.94) has joined #ceph
[14:18] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) has joined #ceph
[14:19] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[14:20] * haomaiwang (~haomaiwan@117.79.232.254) Quit (Ping timeout: 480 seconds)
[14:20] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[14:21] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[14:26] * ScOut3R__ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[14:26] <sha> we use ceph + proxmox like rbd
[14:33] <alphe> more and more weird stuff is happening to me with ceph ..
[14:33] <alphe> alfredodeza I had a ton of warnings stating that directories like /var/lib/ceph/bootstrap-osd did not exist
[14:34] <alphe> do I need to resize pgs of pools I will not use ?
[14:34] <sha> pastebin
[14:40] * haomaiwang (~haomaiwan@118.186.151.36) has joined #ceph
[14:41] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:43] <alphe> pastebin ?
[14:43] <alphe> sha what for ?
[14:44] <alphe> sha the errors are solved by creating the related directories, but it amazes me that on osd create ceph-deploy does not test whether /var/lib/ceph and its subdirs exist
[14:44] <alphe> and if not create them
[14:44] <alphe> instead of sending weird error messages
[14:46] * haomaiwa_ (~haomaiwan@199.30.140.94) Quit (Ping timeout: 480 seconds)
[14:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:58] <alphe> how can I rename an osd ?
[14:59] <alphe> for a weird reason I have 25 osds registered and only 20 running; osd ids 0, 1, 2 are down, 5 and 6 too
[15:00] <alphe> I want to rename them; is there a better way to do it than killing them all and trying again ?
[15:00] * clayb (~kvirc@proxy-nj2.bloomberg.com) has joined #ceph
[15:04] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:05] * peedu (~peedu@adsl89.uninet.ee) has joined #ceph
[15:05] <peedu> hi
[15:05] <peedu> anyone have an idea how to get something like the last 5 sec read/write latency per OSD
[15:05] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:05] <peedu> perf counter gives an average
[15:08] <linuxkidd> iostat -x 5
[15:08] <linuxkidd> look at the 'await' / 'r_await' / 'w_await' columns
[15:08] <linuxkidd> the 5 = 5 seconds, and the values are averaged across the interval time..
[15:08] <linuxkidd> So, if you want 1 second averages, make it iostat -x 1
[15:09] <peedu> ok
[15:09] <peedu> that is for disks, but does ceph itself report any latency
[15:09] <peedu> like how long it takes to process requests
[15:10] <linuxkidd> ah, would have to do more looking for that value..
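For ceph-level rather than disk-level numbers, the OSD admin socket exposes op latency counters; they are cumulative sums and counts, so sampling twice and diffing gives a windowed average (a sketch; the socket path and counter names are assumptions for this era's releases):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | grep -A 3 op_latency
    # sample, sleep 5, sample again, then diff sum and avgcount for a 5-second average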
[15:11] <glambert> I'm trying to upload to my ceph s3 gateway but getting 400 bad request back
[15:11] <glambert> saying "Your browser sent a request that this server could not understand"
[15:11] <glambert> using the amazon s3 php api
[15:16] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[15:23] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:24] * peedu (~peedu@adsl89.uninet.ee) Quit (Quit: Leaving...)
[15:24] * peedu (~peedu@adsl89.uninet.ee) has joined #ceph
[15:24] * peedu (~peedu@adsl89.uninet.ee) Quit (Remote host closed the connection)
[15:24] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[15:26] * yanzheng (~zhyan@134.134.137.75) Quit (Ping timeout: 480 seconds)
[15:28] <clayb> Has anyone seen an issue where rgw times out (and thus Apache returns a 500) when getting a request for an invalid bucket?
[15:30] * cronix (~cronix@5.199.139.166) has joined #ceph
[15:30] <alphe> is there a reason for ceph-deploy osd create host:disk to only do the prepare part and don t do the activate part without trapping an error ?
[15:30] <clayb> I've been seeing this on a simple install of 0.67.4, but 0.61.7 seems to work as expected and returns an error
[15:39] * cronix (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[15:40] * haomaiwang (~haomaiwan@118.186.151.36) Quit (Remote host closed the connection)
[15:43] <alphe> ceph-deploy really behaves strangely ...
[15:44] <alphe> sometimes it prepares a disk without a journal, sometimes it prepares it with a journal ...
[15:44] * hemantb (~hemantb@182.71.241.130) Quit (Quit: hemantb)
[15:45] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[15:48] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[15:48] * nwat (~textual@99.102.49.194) has joined #ceph
[15:50] <alphe> how can I rename an already created osd ?
[15:52] <pmatulis2> alphe: if you can reproduce kindly open a bug
[15:53] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:53] <Discard> hello, I have a clock skew problem on my mons but I have ntpd on all my servers, any idea ?
[15:53] <alphe> pmatulis2 hum ... it is ceph-deploy, I can't get it to work and create my osds in order from 0 to 20 for nodes 1 to 10 ...
[15:53] <alphe> Discard replace ntp with openntpd on all your nodes
[15:54] <Discard> it's already openntp
[15:54] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:54] <alphe> then make sure the ntpd services are running
[15:54] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[15:54] <alphe> verify the time on your machines with date
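A quick way to do the check alphe suggests across the whole cluster (a sketch; the hostnames are placeholders):

    for h in node1 node2 node3; do ssh $h date +%s; done   # seconds-since-epoch should match
    service openntpd status                                # confirm the daemon is actually running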
[15:54] <Discard> http://pastebin.com/wZKtZH7a
[15:55] <glambert> ok I've got a 403 response coming back when running if_bucket_exists() on an s3 bucket, any ideas why?
[15:56] <Discard> alphe: same time on all my node
[15:57] <Discard> strange things
[15:58] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[15:59] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:01] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[16:02] <glambert> so I edited that function to treat 403 as well as 404 as false, so that's that sorted
[16:02] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:02] <glambert> but now the create_bucket() is returning a 405 response
[16:03] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[16:07] * i_m (~ivan.miro@79-101-228-253.dynamic.isp.telekom.rs) has joined #ceph
[16:08] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[16:16] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[16:17] * i_m (~ivan.miro@79-101-228-253.dynamic.isp.telekom.rs) Quit (Read error: Operation timed out)
[16:18] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[16:19] * tobru (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[16:20] <glambert> hmm
[16:20] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[16:21] <glambert> looks like I fixed the 405 issue but still getting the 403 thing
[16:23] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[16:23] * ChanServ sets mode +v andreask
[16:24] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit ()
[16:24] <alphe> Discard sorry I was away
[16:25] <alphe> doing stuff for my ceph cluster that is doing stupid things
[16:25] <Discard> alphe: no pb :-)
[16:25] <alphe> ok so I had that prob once
[16:25] <alphe> I moved every node from regular ntp to openntpd and restarted the monitors
[16:26] <alphe> with restart ceph-mon-all on all my nodes with monitors
[16:26] <alphe> and the clock skew that was bothering me for a week was gone
[16:27] <alphe> alfredodeza are you around ?
[16:27] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[16:30] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:31] * diegows (~diegows@190.190.17.57) Quit (Read error: Operation timed out)
[16:34] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[16:35] * thomnico (~thomnico@2a01:e35:8b41:120:9c45:ce56:c68:8672) Quit (Quit: Ex-Chat)
[16:35] * noob2 (~cjh@pool-173-67-95-10.snfcca.dsl-w.verizon.net) has joined #ceph
[16:36] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) has joined #ceph
[16:36] <Discard> alphe: it has corrected 3 of 4 clock skews
[16:36] <Discard> alphe: thanks
[16:37] * nwat (~textual@99.102.49.194) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[16:38] <noob2> ceph: every 3 or 4th time i do ceph osd tree my terminal gets spammed forever with these messages: sd=3 :33763 s=1 pgs=0 cs=0 l=1 c=0x7f6230022d80).connect got BADAUTHORIZER
[16:38] <noob2> any idea what they mean?
[16:39] <alphe> means one of your monitors has gone amok
[16:39] <alphe> one of your monitors has gone berserker unhappy peon
[16:39] <alphe> or simply it is shut down
[16:40] <noob2> ah
[16:40] <noob2> alphe: i see
[16:40] <noob2> i do see one monitor as down
[16:40] <alphe> hum .. BADAUTHORIZER means one of your nodes doesn't have the keyring for some reason
[16:41] <alphe> you can ceph-deploy it again, that should recreate the file and solve the problem ...
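The ceph-deploy way to do that (a sketch; the hostnames are placeholders):

    ceph-deploy gatherkeys mon1    # collect the keyrings from a working monitor
    ceph-deploy admin badnode      # push ceph.client.admin.keyring back to /etc/ceph on the node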
[16:41] <noob2> it's probably the node that i swapped hardware into
[16:41] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[16:41] <noob2> i took one node down and moved the drives to another motherboard and then i started getting these issues
[16:43] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[16:44] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) has joined #ceph
[16:45] <alphe> noob2 the host you moved the disks to, did it have a fixed ip or a dhcp ip ?
[16:45] <noob2> it had a fixed ip
[16:45] <alphe> ok
[16:45] <noob2> i had trouble getting the osd's to add back into the cluster also
[16:45] <noob2> i had to restart them several times
[16:47] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:47] <noob2> alphe: when i see this message does it mean that .21 is the bad guy? 0 -- 192.168.1.21:0/1029968 >> 192.168.1.20:6789/0 pipe
[16:47] <noob2> or is it .20?
[16:49] <alphe> best way to know that is to do a ceph mon tree
[16:49] <alphe> if that exists
[16:49] <noob2> ok
[16:50] * nwat (~textual@adsl-99-102-49-194.dsl.tul2ok.sbcglobal.net) has joined #ceph
[16:50] <noob2> alphe: it says i have 3 mons up but only 2 in the quorum: e6: 3 mons at {a=192.168.1.20:6789/0,b=192.168.1.21:6789/0,c=192.168.1.22:6789/0}, election epoch 2808, quorum 1,2 b,c
[16:50] <alphe> a is down so
[16:51] <noob2> when i go onto A it says it's running
[16:51] <alphe> or the quorum would be 0,1,2 a,b,c
[16:51] <noob2> right
[16:51] <alphe> {a=192.168.1.20:6789/0
[16:51] <alphe> that is your bad guy
[16:51] <noob2> yup
[16:51] <noob2> ah interesting
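The same conclusion can be read straight from the monitors (the name missing from quorum_names is the bad guy):

    ceph quorum_status --format json-pretty | grep -A 5 quorum_names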
[16:51] * haomaiwang (~haomaiwan@118.186.151.36) has joined #ceph
[16:51] <noob2> i'll shut it down
[16:52] <alphe> noob2 look at the content of /var/lib/ceph/mon
[16:52] <noob2> on the bad guy?
[16:52] <alphe> yes
[16:52] <noob2> ok
[16:52] <noob2> i'm there
[16:52] <alphe> you should have in that folder a keyring that matches the "admin.keyring"
[16:53] <noob2> ok
[16:53] <alphe> and a store.db subfolder
[16:53] <alphe> ok so clear the logs
[16:53] <alphe> it will ease our life
[16:53] <noob2> nuke the store.db files?
[16:53] <noob2> the keyring seems to match the others
[16:53] <noob2> so that's good
[16:53] <alphe> cat /dev/null > /var/log/ceph-mon.a.log
[16:54] <noob2> ok
[16:54] <alphe> cat /dev/null > /var/log/ceph/ceph-mon.a.log
[16:54] <alphe> then be sure the mon is stopped
[16:54] <noob2> yeah it's down
[16:54] <alphe> ps aux | grep mon
[16:54] <noob2> yeah it's toast
[16:55] <alphe> then start it manually ceph-mon --cluster=ceph -i a -f &
[16:55] <noob2> ok
[16:55] <alphe> then tail -f /var/log/ceph/ceph-mon.a.log
[16:55] <alphe> and see what is going on
[16:55] <alphe> don t copy paste here please
[16:55] <noob2> oh i won't ;)
[16:55] <alphe> use pastebin instead if needed
[16:55] <noob2> i'll fpaste it
[16:55] <alphe> ok
[16:56] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[16:56] <noob2> doesn't seem to be doing much
[16:56] <noob2> just says it started it at rank 0
[16:57] <noob2> awesome there we go
[16:57] <noob2> it core dumped
[16:57] <noob2> alphe: http://fpaste.org/63562/55506913/
[16:58] <noob2> alphe: oh man i feel silly now haha. it ran out of disk space
[16:59] * haomaiwang (~haomaiwan@118.186.151.36) Quit (Ping timeout: 480 seconds)
[17:00] <alphe> that happends dude
[17:00] <noob2> haha
[17:01] <alphe> ceph should have told you that your system drive was near full
[17:01] <noob2> yeah i need to add a bigger drive into this one
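The check that would have caught this, since the mon store lives under /var/lib/ceph/mon:

    df -h /var/lib/ceph/mon   # monitors misbehave or crash when this filesystem fills up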
[17:03] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[17:06] * thomnico (~thomnico@2a01:e35:8b41:120:9c45:ce56:c68:8672) has joined #ceph
[17:06] * BillK (~BillK-OFT@106-68-227-246.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[17:06] * thomnico (~thomnico@2a01:e35:8b41:120:9c45:ce56:c68:8672) Quit ()
[17:07] * hjjg (~hg@p3EE323BB.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:08] * Sysadmin88 (~IceChat77@90.208.9.12) has joined #ceph
[17:10] * i_m (~ivan.miro@95.180.8.206) has joined #ceph
[17:15] * illya (~illya_hav@28-167-112-92.pool.ukrtel.net) has joined #ceph
[17:16] <alphe> if I only use rbd, do I need to set more pgs on data and metadata ? can I remove them ?
[17:16] <noob2> i think you can remove them
[17:16] <noob2> metadata is for mds
[17:16] <noob2> i'm not sure where data is used
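If nothing uses them, the default pools can be dropped; the pool name must be given twice plus a confirmation flag (a sketch -- only safe when no mds/cephfs is in use):

    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it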
[17:16] <illya> hi
[17:16] <alphe> illya hello
[17:16] <illya> was chatting earlier
[17:16] <alphe> ?
[17:16] <alphe> ?
[17:17] <illya> any idea about this MDS issue at startup
[17:17] <illya> http://pastebin.com/iGzLbU6a
[17:18] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:18] <alphe> version 0.67.4
[17:19] <illya> yes
[17:19] <illya> :)
[17:19] <alphe> does it make some more explicit logs ?
[17:19] * noob21 (~cjh@173.252.71.189) has joined #ceph
[17:19] <alphe> because I don't have the mds source code in mind ...
[17:21] <alphe> mds can't do a MDSTable::load2, probably because some of the data is missing
[17:22] <illya> one of the local data ?
[17:23] <alphe> probably ...
[17:23] <illya> I tried to setup cluster several times
[17:23] <alphe> illya same here !
[17:23] <illya> any idea of what folders I should clean ?
[17:23] <illya> between my retries...
[17:24] * TiCPU (~jeromepou@190-130.cgocable.ca) Quit (Ping timeout: 480 seconds)
[17:24] * noob2 (~cjh@pool-173-67-95-10.snfcca.dsl-w.verizon.net) Quit (Ping timeout: 480 seconds)
[17:25] <alphe> when you reinstall a cluster you have to use ceph-deploy purgedata then rm -rf /etc/ceph/* then apt-get -f remove -y --force-yes ceph ceph-mds ceph-common ceph-fs-common
[17:25] <alphe> and then reinstall it
[17:25] <alphe> same line you replace remove with install
[17:25] * garphy is now known as garphy`aw
[17:25] <alphe> then you should have the /var/lib/ceph folder and subfolders like osd, mon, tmp, mds, bootstrap-osd, bootstrap-mds
[17:26] * hemantb (~hemantb@14.96.41.115) has joined #ceph
[17:26] <alphe> then you're ready to do the ceph-deploy new <initial mon hosts> for the mon list
[17:27] <alphe> then ceph-deploy mon create osd01
[17:27] <madkiss> leseb: are you there? :)
[17:27] <alphe> then ceph-deploy mon create osd02 osd03 you wait a bit
[17:27] <alphe> then ceph-deploy mon create firstmonitor
[17:27] <alphe> then ceph-deploy mon create slavemonitorlist
[17:27] <leseb> madkiss: yup
[17:29] <alphe> then ceph-deploy osd create node1:disk1 etc..
[17:29] <alphe> you wait a bit that should start the osds everywhere
[17:32] * sleinen (~Adium@2001:620:0:26:ddf2:dbdb:23e:376) Quit (Quit: Leaving.)
[17:32] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[17:32] * sleinen (~Adium@2001:620:0:2d:75f5:302a:1041:27fe) has joined #ceph
[17:32] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[17:33] <alphe> then ceph-deploy mds create <node with mds>
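alphe's whole walkthrough, condensed into one sketch (hostnames and disks are placeholders):

    ceph-deploy purgedata node1 node2 node3
    rm -rf /etc/ceph/*                                                        # on each node
    apt-get remove -y --force-yes ceph ceph-mds ceph-common ceph-fs-common
    apt-get install -y --force-yes ceph ceph-mds ceph-common ceph-fs-common
    ceph-deploy new mon1 mon2 mon3                # initial monitor hosts
    ceph-deploy mon create mon1 mon2 mon3
    ceph-deploy osd create node1:sdb node2:sdb    # one host:disk pair per osd
    ceph-deploy mds create mdsnode                # only needed for cephfs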
[17:33] * nwat (~textual@adsl-99-102-49-194.dsl.tul2ok.sbcglobal.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:34] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:36] * sleinen1 (~Adium@2001:620:0:2d:9952:9111:513d:d272) has joined #ceph
[17:38] * sleinen2 (~Adium@2001:620:0:25:44d7:9329:c3cc:d911) has joined #ceph
[17:40] * sleinen2 (~Adium@2001:620:0:25:44d7:9329:c3cc:d911) Quit ()
[17:40] * sleinen2 (~Adium@2001:620:0:2d:1958:8ce5:72f3:f765) has joined #ceph
[17:40] * sleinen (~Adium@2001:620:0:2d:75f5:302a:1041:27fe) Quit (Ping timeout: 480 seconds)
[17:41] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:42] <illya> removed osd's
[17:43] <illya> clean /var/lib/ceph/mds
[17:43] <illya> redeployed mds
[17:43] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Quit: Leaving.)
[17:43] <illya> now mds seems fine
[17:43] <illya> but the osd's are now out of the tree
[17:43] <illya> http://pastebin.com/dcL5XbPi
[17:44] <illya> I think I need to run "ceph osd crush add.."
[17:44] <illya> but I can't make it work :(
[17:44] * sleinen1 (~Adium@2001:620:0:2d:9952:9111:513d:d272) Quit (Ping timeout: 480 seconds)
[17:48] * sleinen2 (~Adium@2001:620:0:2d:1958:8ce5:72f3:f765) Quit (Ping timeout: 480 seconds)
[17:48] <illya> solved with
[17:48] <illya> ceph osd crush set osd.0 0 root=default rack=unknownrack host=ubuntu41
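(The general form of that command; the weight and the root/rack/host bucket names are cluster-specific, and a weight of 0 means the osd receives no data until it is reweighted, e.g. with ceph osd crush reweight osd.0 1.0:)
    ceph osd crush set osd.{id} {weight} root=default rack={rack} host={hostname}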
[17:49] * jhurlbert (~jhurlbert@216.57.209.252) Quit (Quit: jhurlbert)
[17:50] <illya> and MDS crashed after this :(
[17:52] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[17:54] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) has joined #ceph
[17:55] <doubt> hi guys, i'm new to ceph, noob question: ceph-deploy new ceph-node <-- do i need to repeat this for all my nodes?
[17:59] * joshd1 (~jdurgin@2602:306:c5db:310:9175:ce02:7208:5e85) Quit (Quit: Leaving.)
[18:01] * angdraug (~angdraug@12.164.168.116) has joined #ceph
[18:01] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Ping timeout: 480 seconds)
[18:11] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[18:12] * ntranger_ (~ntranger@proxy2.wolfram.com) Quit ()
[18:16] * hemantb (~hemantb@14.96.41.115) Quit (Quit: hemantb)
[18:16] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:16] * ircolle (~Adium@2601:1:8380:2d9:9401:2cd1:5b83:b514) has joined #ceph
[18:19] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[18:22] <bandrus> doubt: ceph-deploy new mon1 mon2 mon3
[18:23] <bandrus> in other words… run it once with all the nodes you plan on being monitors
[18:23] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:28] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:29] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[18:30] <alphe> if I use rbd, do I need mds and metadata ?
[18:31] <janos> for just rbd? no
[18:31] <alphe> sure ?
[18:31] <janos> iirc that's for cephfs
[18:32] <janos> i would get a second opinion ;) but i'm pretty sure
[18:32] <bandrus> janos is correct, MDS is purely for cephfs
[18:33] <alphe> bandrus and for some odd reason cephfs makes my folder tree disappear when it is shared out as nfs on the side
[18:33] <alphe> ...
[18:33] <alphe> so cephfs is gone all the way down to hell and is locked there to never come back
[18:33] <bandrus> sorry - not too familiar with troubleshooting cephfs
[18:33] <illya> bandrus: if you remember my yesterday issue
[18:33] <alphe> now I try to optimise my rbd ceph cluster
[18:33] <illya> my OSDs fine now
[18:34] <alphe> illya good to now !
[18:34] <bandrus> great to hear illya
[18:34] <janos> cephfs is still in "use at your own risk" mode
[18:34] <illya> but MDS crashes
[18:34] <alphe> illya good to know !
[18:34] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[18:34] <alphe> janos yes but it's been in that state forever and I don't really see a big effort on it ...
[18:35] <alphe> the big effort is more on making the whole ceph inter-service communication smoother, and the install smoother too
[18:35] <alphe> and rbd rock stable
[18:35] <alphe> radosgw is getting a lot of attention too
[18:36] <janos> yeah
[18:36] <alphe> it's not a criticism, it's just the way it is ...
[18:36] <janos> yep
[18:39] <alphe> and for most people new to ceph it is tricky because cephfs seems to be the most straightforward and flexible solution
[18:39] <alphe> cephfs's best argument is that you just throw new osds at it, each tied back to some host/disk, and your virtual drive expands infinitely
[18:39] * xevwork (~xevious@6cb32e01.cst.lightpath.net) Quit (Read error: Connection reset by peer)
[18:40] <alphe> no need to xfs_growfs
[18:40] <alphe> and eventually get a crash there
[18:41] <alphe> forcing you to start from scratch, which is not really possible with a cluster in production ..
[18:42] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[18:42] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[18:48] <alphe> where does ceph-deploy record the id of the osd it is on ?
[18:49] <ron-slc> Does anybody know where to find the Release note details on Dumpling 67.5?
[18:49] <ron-slc> I don't see them in docs/next
[18:50] <alphe> ceph.com in the log/blog stuff that gives news and announcements
[18:50] <alphe> you scroll back in time and should find it there
[18:51] <ron-slc> 0.67.5 was very recently released. It seems they have stopped announcing minor point releases.
[18:51] <bandrus> release notes are here: http://ceph.com/docs/next/release-notes/ I'll see what our timeline is for getting 0.67.5 added
[18:51] <ron-slc> bandrus: cool! thanks
[18:52] * joao|lap (~JL@a79-168-11-205.cpe.netcabo.pt) has joined #ceph
[18:52] * ChanServ sets mode +o joao|lap
[18:52] <alphe> ron-slc there is no entry for 67.5 yet in release notes
[18:52] <ron-slc> alphe: lol thus my question
[18:54] * ircolle1 (~Adium@mobile-166-147-083-155.mycingular.net) has joined #ceph
[18:55] * ircolle (~Adium@2601:1:8380:2d9:9401:2cd1:5b83:b514) Quit (Ping timeout: 480 seconds)
[18:55] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[18:56] <alphe> ceph-deploy osd create is doing whatever comes to its mind !!!
[18:57] <alphe> on some nodes it starts the osd service, on some not
[18:57] <alphe> it prepares some disks and some others no ...
[19:01] * ircolle1 (~Adium@mobile-166-147-083-155.mycingular.net) Quit (Read error: Connection reset by peer)
[19:01] * ircolle (~Adium@2601:1:8380:2d9:7d71:76b:3844:663a) has joined #ceph
[19:02] * nwat (~textual@adsl-99-102-49-194.dsl.tul2ok.sbcglobal.net) has joined #ceph
[19:03] * hemantb (~hemantb@14.96.40.210) has joined #ceph
[19:03] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:04] <alphe> ceph-deploy osd create is doing whatever comes to its mind !!!
[19:04] <alphe> on some nodes it starts the osd service, on some not
[19:04] <alphe> it prepares some disks and some others no ...
[19:07] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[19:07] * Drumplayr (~oftc-webi@66-87-99-133.pools.spcsdns.net) has joined #ceph
[19:07] * hemantb (~hemantb@14.96.40.210) Quit ()
[19:08] <Drumplayr> Hi! Ceph newbie here
[19:08] * i_m (~ivan.miro@95.180.8.206) Quit (Ping timeout: 480 seconds)
[19:11] <alphe> welcome to the club
[19:11] <Drumplayr> Thanks.
[19:13] <Drumplayr> I have a problem and I can't stop thinking about it until I get this question out in the open.
[19:14] <Drumplayr> I set up Ceph, connected to the filesystem, successfully created test folders and files, started copying files, and everything seemed good to go.
[19:15] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[19:15] <alphe> Drumplayr using which technology (cephfs, rbd, radosgw?)
[19:16] <Drumplayr> Now I can't connect to the filesystem anymore. I sent the command "ceph -a stop", no response, then "ceph -a start" and get messages back stating the osds are already mounted so we're unmounting ours. CephFS.
[19:17] * jhurlbert (~jhurlbert@216.57.209.252) has joined #ceph
[19:17] <Drumplayr> Oh, and issuing "ceph status" I get a response that initialization failed.
[19:18] <Drumplayr> I looked through a few log files but don't see any errors.
[19:20] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[19:22] <Drumplayr> Anyway, I was hoping someone might know the answer off the top of his/her head. I won't be able to continue troubleshooting until later today, but can't stop thinking about the problem.
[19:22] * noob21 (~cjh@173.252.71.189) Quit (Read error: Connection reset by peer)
[19:23] * HauM1 (~HauM1@login.univie.ac.at) Quit (Remote host closed the connection)
[19:24] * xarses (~andreww@12.164.168.116) has joined #ceph
[19:31] * HauM1 (~HauM1@login.univie.ac.at) has joined #ceph
[19:31] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) has joined #ceph
[19:32] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[19:32] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[19:32] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[19:35] * xevwork (~xevious@6cb32e01.cst.lightpath.net) has joined #ceph
[19:35] * xevwork (~xevious@6cb32e01.cst.lightpath.net) Quit (Remote host closed the connection)
[19:36] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:36] <alphe> wow ceph-deploy osd create made a soup again ...
[19:37] <alphe> why does it work so badly now ?
[19:37] <Drumplayr> What do you mean?
[19:37] <alphe> [osd10][WARNIN] added key for osd.23 --> I have 2 disks per node, 10 nodes of 2 disks, that's 20, -1 because it starts numbering at 0
[19:38] <alphe> so how can my last activated osd be osd.23 when it should be osd.19
[19:38] <alphe> and ceph-deploy create only does the prepare part, not the activate part, which is weird
[19:39] <Drumplayr> alphe Really??? I used ceph-deploy osd create and all seemed good.
[19:40] <Drumplayr> I was able to write to the cluster for a while.
[19:40] <alphe> yeah I don't know why it behaves that way ...
[19:41] <bandrus> alphe: ceph-deploy purge, then manually umount all osds, remove /var/{lib,run}/ceph entirely, purge packages with apt-get, and try deploying your cluster again.
[19:41] <Drumplayr> How can I tell if the osds are activated?
[19:41] <bandrus> Drumplayr: if they are part of your cluster
[19:42] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[19:42] <alphe> bandrus did that like 10 times !
[19:42] <alphe> and still ceph-deploy goes berserk on osd create
[19:42] <bandrus> I bet you missed one of the above steps… It did that to me yesterday as well, but after I made sure all of the files and packages were gone, it has worked perfectly many times in a row
[19:44] <alphe> never cleaned the /var/run/ceph
[19:44] <alphe> I will do it again for the 6th time today
[19:45] <Drumplayr> I think I'll have to do the same.
[19:46] <bandrus> stop all daemons, ceph-deploy purge <nodes>, umount /var/lib/ceph/ceph-*, rm -rf /var/{lib,run}/ceph, apt-get remove --purge ceph ceph-common radosgw, rm -rf /home/ceph-deploy/ceph*
[19:46] <bandrus> make sure any rms are removing the right things of course… ;]
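(bandrus's cleanup expanded into a step-by-step sketch; node names are placeholders and the osd mount path is an assumption — check mount output before any rm -rf:)
    service ceph -a stop                        # stop all daemons
    ceph-deploy purge node1 node2 node3
    umount /var/lib/ceph/osd/ceph-*             # on each node; path assumed
    rm -rf /var/lib/ceph /var/run/ceph
    apt-get remove --purge ceph ceph-common radosgw
    rm -rf /home/ceph-deploy/ceph*              # files ceph-deploy left locally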
[19:47] * xevwork (~xevious@6cb32e01.cst.lightpath.net) has joined #ceph
[19:47] <alphe> otherwise the node won't boot anymore ...
[19:47] <bandrus> look for any errors with ceph-deploy, if any errors are returned, something is probably not ideal
[19:47] <bandrus> (besides errors on ceph-deploy osd create)
[19:48] * Azrael_ is now known as Azrael
[19:48] <bandrus> actually osd create might show warnings, but should not show errors
[19:50] <Drumplayr> Yes. I believe the warning is just informational depending on how you're setting up the osd.
[19:50] <bandrus> exactly
[19:50] <alphe> bandrus the normal warning is when it says that it doesn't have the keyring and will create it for osd creation
[19:50] <Drumplayr> In my case, the warning is telling me that the journal is going to be on the same drive as the osd.
[19:50] <alphe> then there is a warning about the journal being in a file on the same system as the data, blah blah
[19:51] <bandrus> alphe: at what step are the keyring warnings shown?
[19:52] <bandrus> Drumplayr: a warning that can be ignored ^
[19:53] <alphe> osd create
[19:53] <alphe> the prepare step
[19:53] * xevwork (~xevious@6cb32e01.cst.lightpath.net) Quit (Remote host closed the connection)
[19:53] <alphe> it says hey I dont see the keyring in /var/lib/ceph/bootstrap-osd so I will get one :P
[19:54] <alphe> i can't use ceph-deploy install since I use gitbuilder 0.72-1-10 for saucy salamander ubuntu
[19:55] <Drumplayr> bandrus: I agree. I think in my case I didn't delete the /var/{lib,run}/ceph folders and there is residue that is causing my cluster to fail, although it seemed to work fine for a few hours.
[19:56] <bandrus> alphe: did you run ceph-deploy gatherkeys before running osd create?
[19:56] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) Quit (Remote host closed the connection)
[19:57] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) has joined #ceph
[19:57] <alphe> bandrus yes
[19:58] <bandrus> any errors or warnings on that step?
[19:58] <alphe> absolutely none, all is blue OK
[19:58] <bandrus> okay, and are you using --zap-disk with osd create?
[19:58] <alphe> but the gatherkeys involves only one node or all the nodes with monitors ?
[19:59] <bandrus> just one mon node is fine
[19:59] <alphe> bandrus no --zap-disk
[19:59] <alphe> because I got surprise with that but I will try
[19:59] <bandrus> okay, if you're okay with losing the data on the disks, I'd recommend running it with --zap-disk
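(Destructive — --zap-disk wipes the disk's partition table first; host and disk names are placeholders:)
    ceph-deploy osd create --zap-disk node1:sdb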
[19:59] <alphe> bandrus the data were already zapped
[20:00] <alphe> i have to uninstall the admin node too
[20:01] * sarob (~sarob@2001:4998:effd:600:d071:7ba9:51bd:b030) has joined #ceph
[20:01] <Drumplayr> bandrus: So after uninstalling, deleting the folders... starting all over, I may be able to retain the data (as long as I don't zap the disks)???
[20:02] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[20:02] <Drumplayr> I wish I would have known that a couple days ago...
[20:02] <bandrus> I don't believe so… I'm sure it's possible but I don't know the procedure for that
[20:02] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:02] <Drumplayr> Well I'll give it a shot
[20:03] <bandrus> if you nuke the cluster, it wouldn't really be an easy task to bring an osd back in that has data on it
[20:03] * markbby (~Adium@168.94.245.1) has joined #ceph
[20:04] <alphe> i just want them to be installed in order from 0 to 19 and on the right osd
[20:04] <alphe> i just want them to be installed in order from 0 to 19 and on the right nodes
[20:04] <Drumplayr> Yeah, unless I can get around adding an osd without it formatting.
[20:05] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[20:05] <alphe> Drumplayr in one of my tries I ended up with 5 extra osds related to nothing on no node ...
[20:05] <alphe> that is why I'm a bit tired of ceph-deploy
[20:06] <bandrus> Drumplayr: then you'd have to match PGs somehow… the data will be split up into PG folders on the OSDs. Those PGs would have to be created, setting the primary OSD and then telling it to replicate to the other OSDs; I'm just not really sure how practical that would be
[20:07] <bandrus> alphe: I understand your frustrations and I've also had some troubles with it, but I can guarantee that it works every time if everything is prepared properly
[20:07] * nwat (~textual@adsl-99-102-49-194.dsl.tul2ok.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:08] <alphe> bandrus hum it is like the 15th install using ceph-deploy, and from the 10th to the 15th the osd create stage did crap
[20:08] <Drumplayr> alphe I had a similar problem a while back. I had removed a node from the cluster. I must have done it wrong because the cluster kept trying to say that the drives were missing. I ended up starting all over. I think that's what may be causing my problem now. Going to make sure the folders mentioned earlier are removed.
[20:09] <alphe> Drumplayr you tried to mark it as lost ?
[20:09] <alphe> ceph osd lost numid
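(Sketch with a placeholder id; marking an osd lost tells the cluster to give up on whatever data only that osd held, so it is a last resort:)
    ceph osd lost 5 --yes-i-really-mean-it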
[20:09] <bandrus> using the proper procedures when changing things in a cluster is definitely beneficial… =]
[20:10] <bandrus> alphe: if you can afford to lose the data on disk, use --zap-disk
[20:10] <alphe> yes
[20:10] <bandrus> that is my recommendation. Also to make sure all mounts are unmounted, and all ceph folders are removed. Also remove any files that ceph-deploy creates in your local folder
[20:11] <Drumplayr> I can't remember what I did. I was really frustrated at the time and may have just given up and started over...
[20:11] <bandrus> for now, I've got to go, good luck
[20:11] <alphe> ok so now that it is fully cleaned I need to reboot my nodes then reinstall from gitbuilder.ceph.com
[20:11] * alphe does sad face
[20:11] <bandrus> no reboot necessary
[20:11] <Drumplayr> Thanks for the info bandrus
[20:11] <bandrus> you can leave ceph-deploy installed too
[20:12] <bandrus> all you need to do is run ceph-deploy commands, no need to mess with installing packages etc
[20:12] <alphe> see you around bandrus
[20:12] <bandrus> unless you are manually installing ceph packages
[20:12] <bandrus> alright, I'll be around, just have things to do
[20:13] <alphe> bandrus hum but how does ceph-deploy install from gitbuilder
[20:13] <alphe> unfortunately for saucy the packages are there and nowhere else ...
[20:13] <Drumplayr> I know there's deploy options.
[20:13] <bandrus> that's fine, you will need to reinstall it
[20:13] <alphe> for some unknown reason it never was made official
[20:14] <bandrus> but in the future, you should not need to remove it between retries
[20:14] <alphe> ok great
[20:14] <Drumplayr> Something like ceph-deploy --latest install....
[20:15] <Drumplayr> alphe: http://ceph.com/docs/master/rados/deployment/ceph-deploy-install/
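(Depending on the ceph-deploy version, something like the following may point installs at a gitbuilder repo — the flag and URL here are assumptions, so check ceph-deploy install --help first:)
    ceph-deploy install --repo-url http://gitbuilder.ceph.com/ceph-deb-saucy-x86_64-basic/ref/v0.72.1 node1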
[20:16] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) has joined #ceph
[20:16] <alphe> ain't working
[20:17] <Drumplayr> ain't working for the salamander version?
[20:17] <alphe> it tries to get the saucy emperor package from the regular repo, not gitbuilder, and it crashes
[20:17] <loicd> joao: how could I ask the monitor to show me what it knows about a given pool ?
[20:18] <alphe> loicd ceph osd pool ls ?
[20:18] <alphe> loicd ceph osd pool stats poolname?
[20:18] <Drumplayr> oh. Well, that's all I got.
[20:18] <loicd> alphe: I would like to know all there is to know
[20:18] <loicd> namely the properties but also pgnum etc etc
[20:19] <alphe> loicd I think it is still in the ceph osd
[20:19] <Drumplayr> I know adding the option "detail" works in a lot of cases.
[20:19] <alphe> I can't help you more precisely at the moment, my ceph is uninstalled ...
[20:22] <Drumplayr> ceph osd pool get {pool-name} {key}
[20:23] <alphe> loicd definitely, I saw that kind of precise information around in the ceph osd layer
[20:24] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[20:25] * vata (~vata@2607:fad8:4:6:34d3:d2cf:d52a:94f) has joined #ceph
[20:26] <alphe> loicd don't know if the info is as condensed as you need
[20:26] * markbby (~Adium@168.94.245.1) has joined #ceph
[20:26] <alphe> ceph osd pool get {pool-name} {key} key can be pg_num pgp_num
[20:26] <loicd> I'll do without it
[20:26] <alphe> then you can get a map dump
[20:27] <alphe> and use osdmaptool --print on the map dump
[20:27] <alphe> that will end up being pretty close to what you want, no ?
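(Ways to pull most per-pool detail out of the monitors; the pool name rbd is a placeholder:)
    ceph osd dump | grep '^pool'                 # size, crush ruleset, pg_num per pool
    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num
    ceph osd getmap -o /tmp/osdmap && osdmaptool --print /tmp/osdmap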
[20:31] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) has joined #ceph
[20:33] * xarses (~andreww@12.164.168.116) Quit (Read error: Operation timed out)
[20:37] <aarontc> so what does this mean? -414/16742338 objects degraded (-0.002%)
[20:37] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:38] <aarontc> I don't understand the negative count
[20:39] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[20:40] * xevwork (~xevious@6cb32e01.cst.lightpath.net) has joined #ceph
[20:40] * noob2 (~cjh@mpk-nat-7.thefacebook.com) has joined #ceph
[20:44] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[20:44] <Drumplayr> It means that your system is running so well that it's giving you credit for degraded objects.
[20:45] * sarob (~sarob@2001:4998:effd:600:d071:7ba9:51bd:b030) Quit (Remote host closed the connection)
[20:45] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:45] <aarontc> Drumplayr: Awesome, I love Ceph! :)
[20:49] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[20:50] * allsystemsarego (~allsystem@188.26.167.169) Quit (Quit: Leaving)
[20:50] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[20:51] * Crshman (~bhill@64-71-16-66.static.wiline.com) has joined #ceph
[20:51] <Crshman> hey guys, how do I stop an OSD for maintenance? this doesn't seem to work: http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#stopping-w-out-rebalancing
[20:52] <Crshman> I get Error EINVAL: invalid command
[20:53] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit (Quit: Leaving.)
[20:53] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:54] * JC (~JC@nat-dip5.cfw-a-gci.corp.yahoo.com) Quit ()
[20:58] * tsnider (~tsnider@nat-216-240-30-23.netapp.com) has left #ceph
[20:58] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) Quit (Remote host closed the connection)
[20:59] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) has joined #ceph
[20:59] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[21:00] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:01] * zirpu (~zirpu@2600:3c02::f03c:91ff:fe96:bae7) has joined #ceph
[21:03] * sarob (~sarob@nat-dip31-wl-e.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[21:03] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[21:04] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[21:04] * gregsfortytwo (~Adium@2607:f298:a:607:1c79:4d36:bc0a:af5d) has joined #ceph
[21:07] * ScOut3R (~ScOut3R@catv-89-133-44-70.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[21:11] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[21:11] * diegows (~diegows@190.190.17.57) has joined #ceph
[21:12] * gregsfortytwo (~Adium@2607:f298:a:607:1c79:4d36:bc0a:af5d) Quit (Quit: Leaving.)
[21:20] * Pedras1 (~Adium@216.207.42.134) has joined #ceph
[21:20] * Pedras1 (~Adium@216.207.42.134) Quit ()
[21:22] <pmatulis2> Crshman: what command did you use exactly?
[21:24] <Crshman> pmatulis2: ceph osd stop osd.1
[21:28] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[21:30] * sagelap1 (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Quit: Leaving.)
[21:30] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[21:32] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[21:33] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[21:34] * xevwork (~xevious@6cb32e01.cst.lightpath.net) Quit (Remote host closed the connection)
[21:34] <alphe> ok so after the full clean of my ceph cluster nodes I still have strange behaviors with my ceph cluster
[21:37] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[21:38] * ScOut3R (~scout3r@4E5C7421.dsl.pool.telekom.hu) has joined #ceph
[21:42] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[21:44] * sagelap (~sage@2600:1012:b01a:ad01:41df:3948:1900:e8aa) has joined #ceph
[21:51] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Ping timeout: 480 seconds)
[21:53] * sagelap (~sage@2600:1012:b01a:ad01:41df:3948:1900:e8aa) Quit (Ping timeout: 480 seconds)
[21:57] <bandrus> Crshman: try service ceph-osd stop id={num}
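(The stop-without-rebalancing procedure from the linked doc, with a placeholder id; noout keeps the cluster from remapping data while the osd is down:)
    ceph osd set noout
    stop ceph-osd id=1          # upstart; sysvinit: service ceph stop osd.1
    # ... do the maintenance ...
    start ceph-osd id=1
    ceph osd unset noout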
[22:01] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[22:02] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Remote host closed the connection)
[22:02] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) has joined #ceph
[22:04] * sagelap (~sage@2600:1012:b021:547b:41df:3948:1900:e8aa) has joined #ceph
[22:07] <illya> hi
[22:07] <illya> started new deployment
[22:08] <illya> is it good
[22:08] <illya> http://pastebin.com/SzWYqtPa
[22:08] <illya> no OSDs so far
[22:08] <illya> and
[22:08] <illya> mdsmap e3: 1/1/1 up {0=ubuntu41=up:creating}
[22:08] <illya> could I try to add OSDs now
[22:09] <illya> or should I wait
[22:10] <pmatulis2> interesting. i didn't think you could have an MDS without an OSD
[22:10] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[22:11] <pmatulis2> and you have PGs with no OSDs, huh
[22:11] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[22:11] <pmatulis2> illya: it could very well be normal. just unexpected from my point of view
[22:15] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[22:19] * markbby (~Adium@168.94.245.1) has joined #ceph
[22:21] * illya (~illya_hav@28-167-112-92.pool.ukrtel.net) Quit (Ping timeout: 480 seconds)
[22:25] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) has joined #ceph
[22:27] * BillK (~BillK-OFT@106-68-44-144.dyn.iinet.net.au) has joined #ceph
[22:27] * illya (~illya_hav@28-167-112-92.pool.ukrtel.net) has joined #ceph
[22:28] * xevwork (~xevious@6cb32e01.cst.lightpath.net) has joined #ceph
[22:29] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) has joined #ceph
[22:29] * xevwork (~xevious@6cb32e01.cst.lightpath.net) Quit (Read error: Connection reset by peer)
[22:30] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[22:31] * nwat (~textual@adsl-99-102-50-207.dsl.tul2ok.sbcglobal.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:34] <alphe> loicd you have ceph osd pool dump
[22:37] <alphe> why when I do ceph osd pool delete <name> <name> --yes-i-really-really-mean-it
[22:38] <alphe> and then create a new pool, I get in the stats: pgmap v371: 2048 pgs, 4 pools, 0 bytes data, 0 objects
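(Sketch with a placeholder pool name; note the pgmap line counts pgs across all pools, so 4 pools of 512 pgs each would show up as 2048 pgs:)
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool create mypool 512 512         # pg_num pgp_num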
[22:43] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:46] <alphe> how can I change the id of a osd ?
[22:47] <Gugge-47527> alphe: why would you? :)
[22:48] <alphe> Gugge-47527 because ceph-deploy created whatever stuff for me
[22:48] <Gugge-47527> so?
[22:48] <Gugge-47527> its just an id
[22:48] <Gugge-47527> why do you care? :)
[22:49] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[22:51] * i_m (~ivan.miro@95.180.8.206) has joined #ceph
[22:56] <alphe> because I have 23 created and only 20 running
[22:58] * rendar (~s@host226-182-dynamic.1-87-r.retail.telecomitalia.it) Quit ()
[23:01] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has left #ceph
[23:01] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[23:01] * illya (~illya_hav@28-167-112-92.pool.ukrtel.net) has left #ceph
[23:02] <Gugge-47527> alphe: then remove the 3 you don't need
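(The usual removal sequence for each leftover osd, with id 23 as an example:)
    ceph osd out osd.23             # if it is up and in
    ceph osd crush remove osd.23
    ceph auth del osd.23
    ceph osd rm 23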
[23:06] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) Quit (Quit: Leaving.)
[23:11] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:18] * nhm (~nhm@184-97-148-136.mpls.qwest.net) Quit (Quit: Lost terminal)
[23:19] * fireD_ (~fireD@93-142-198-228.adsl.net.t-com.hr) has joined #ceph
[23:19] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[23:21] * fireD (~fireD@93-142-245-129.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[23:23] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[23:30] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) Quit (Quit: Leaving.)
[23:46] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[23:49] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[23:49] * ChanServ sets mode +o elder
[23:51] * noob2 (~cjh@mpk-nat-7.thefacebook.com) has left #ceph
[23:52] * Drumplayr (~oftc-webi@66-87-99-133.pools.spcsdns.net) Quit (Quit: Page closed)
[23:55] * nwat (~textual@99.120.176.135) has joined #ceph
[23:56] * madkiss (~madkiss@p4FE05042.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[23:58] * mattbenjamin (~matt@aa2.linuxbox.com) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.