#ceph IRC Log

IRC Log for 2013-09-10

Timestamps are in GMT/BST.

[0:00] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[0:00] * thomnico (~thomnico@64.34.151.178) Quit (Ping timeout: 480 seconds)
[0:06] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[0:06] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Operation timed out)
[0:09] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[0:11] <dmick> good deal madkiss
[0:16] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[0:21] <sagewk> loicd: ping
[0:21] <loicd> pong
[0:21] <sagewk> CXX test/osd/unittest_erasure_code_plugin-TestErasureCodePluginExample.o
[0:21] <sagewk> make[4]: *** No rule to make target `libosd.a', needed by `unittest_erasure_code_plugin'. Stop.
[0:21] <sagewk> presumably from the makefile change.. probably just need to adjust that one line in the makefile
[0:21] <loicd> ok... I'm on it
[0:21] <sagewk> thanks!
[0:21] * loicd curses himself
[0:23] <loicd> how could that work on my computer ...
[0:24] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[0:24] * roald (~oftc-webi@87.209.150.214) Quit (Quit: Page closed)
[0:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:30] <loicd> presumably because libosd.a was left over from a previous compilation, before the makefile.am change
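
A quick way to reproduce this kind of problem is to build from a pristine tree, so stale objects such as a leftover libosd.a cannot mask a Makefile.am mistake. A minimal sketch, assuming the autotools build ceph used at the time (the clone path is hypothetical):

    git clone --recursive git://github.com/ceph/ceph.git /tmp/ceph-fresh   # no stale build products
    cd /tmp/ceph-fresh
    ./autogen.sh && ./configure
    make -C src unittest_erasure_code_plugin    # the target that failed above
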
[0:35] * gregaf (~Adium@2607:f298:a:607:89a0:81e3:33b:bccc) Quit (Ping timeout: 480 seconds)
[0:36] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[0:38] <loicd> sagewk: https://github.com/ceph/ceph/pull/581 should fix the problem ( works for me on a fresh clone ). Sorry for the trouble.
[0:38] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[0:39] <sagewk> loicd: no worries, my fault for being so slow getting this stuff merged
[0:40] <sagewk> merged
[0:40] * loicd keeps an eye on http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-precise-amd64-basic/#origin/master
[0:41] <loicd> BTW, which gitbuilder typically builds first after a commit? i.e. which one is likely to be the first to display a compilation error?
[0:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:43] * Steki (~steki@198.199.65.141) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:44] * mschiff (~mschiff@46.189.28.159) Quit (Remote host closed the connection)
[0:44] <sagewk> the -gcov one seems slightly faster than -basic
[0:44] <sagewk> http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-precise-amd64-gcov/
[0:48] <joao> sagewk, is it expected that one should be able to move a 'host' underneath another 'host' in a crushmap?
[0:50] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[0:50] <sagewk> hmm... i don't think it checks for that, but it is a little strange.
[0:52] <joao> yeah, I don't think it checks; if it does it's not working that well :p
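
For context, buckets are normally re-parented with "ceph osd crush move", and nothing above suggests the tool refuses a host as the new parent. A minimal sketch with hypothetical bucket names:

    ceph osd crush move host2 rack=rack1   # the usual case: a host under a rack
    ceph osd crush move host2 host=host1   # the odd case joao describes: a host under another host
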
[0:52] <joao> anyway, pull request 582 for #6260
[0:52] <joao> issue 6260
[0:52] <joao> oh, no kraken today?
[0:53] <joao> no alfredo either; I wonder if there's a connection here
[0:53] <loicd> :-D
[0:54] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[0:57] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:57] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[1:01] <sagewk> joao: looks good, merging!
[1:01] <sagewk> joao: can you add a test for this in the crush_ops.sh script?
[1:01] <sagewk> (or somewhere else if there is a better place)
[1:03] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[1:06] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[1:07] <sagewk> joshd: https://github.com/ceph/teuthology/pull/85
[1:08] <sagewk> actually, that can wait until zackc is in tomorrow
[1:08] * gregaf (~Adium@2607:f298:a:607:bda3:e5c4:e74a:5c46) has joined #ceph
[1:09] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[1:14] <joao> sagewk, sure, just a sec
[1:14] <sagewk> cd src
[1:14] <sagewk> `~/cov.sh
[1:15] * gregaf1 (~Adium@38.122.20.226) has joined #ceph
[1:18] <davidzlap> sagewk: pushed new version of wip-6246.
[1:20] <sagewk> davidzlap: i think insert_item needs cur = newid there
[1:21] <sagewk> oh, wait.
[1:21] * gregaf (~Adium@2607:f298:a:607:bda3:e5c4:e74a:5c46) Quit (Ping timeout: 480 seconds)
[1:23] <sagewk> hmm, i see lots of add_bucket callers that expect an id as a return value
[1:24] <sagewk> might be simplest to just add a check in crushtool, if (id < 0 && crush.bucket_exists(id)) ... and then assert in builder if it gets an invalid input.
[1:24] <sagewk> oh, i see you fixed them all
[1:24] <davidzlap> sagewk: yes, I think I do need a cur= newid. I think those callers are in the test/old directory I was asking about.
[1:26] <sagewk> yeah
[1:26] <davidzlap> I wasn't sure how to hit all the code paths I hit.
[1:26] <davidzlap> touched
[1:30] <joao> sagewk, pull request 583
[1:30] <joao> 3-liner
[1:31] * ross_ (~ross@60.208.111.209) has joined #ceph
[1:31] <sagewk> joao: cool, merged
[1:32] <joao> ty
[1:34] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[1:34] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:36] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Read error: No route to host)
[1:36] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[1:36] <sagewk> dmick: https://github.com/ceph/ceph/pull/569/files
[1:38] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[1:39] <dmick> sagewk: ship it
[1:39] <dmick> I can merge if you like
[1:39] <sagewk> tnx
[1:40] <dmick> you don't want that in next, right?...
[1:40] <dmick> (I mean you can always cherrypick but...)
[1:44] <sagewk> dmick: https://github.com/ceph/ceph-deploy/pull/70
[1:44] <sagewk> just master
[1:44] * The_Bishop (~bishop@g230109085.adsl.alicedsl.de) has joined #ceph
[1:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:48] <dmick> sagewk: on that one: I don't understand why there are some "from package import symbol" and some "from . import package", so it *might* be more in tune to say "from .misc import mon_hosts" and just use mon_hosts. I don't care nearly as much as alfredo will tho so either way is good with me I think
[1:49] <dmick> do we...have a plan for supporting ipv6? because splitting on : will not be the way to do that
[1:49] <sagewk> yeah no opinion on the syntax; was just trying to follow the other import lines
[1:49] <sagewk> we do, it will have to ignore the first : i guess.
[1:50] <sagewk> er, ignore all but the first : that is
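
The splitting rule being described (take the name from before the first colon and keep everything after it intact, so IPv6 addresses survive) can be illustrated in shell; the NAME:HOST value below is hypothetical:

    spec='mon1:2001:db8::1'
    name=${spec%%:*}   # 'mon1'        - everything before the first ':'
    host=${spec#*:}    # '2001:db8::1' - everything after the first ':', later colons left alone
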
[1:50] <xarses> https://github.com/ceph/ceph-deploy/pull/70
[1:50] <dmick> reading the docstring, I don't know what a "name" and a "host" are
[1:50] <xarses> owich
[1:50] <xarses> that hurts
[1:50] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) has joined #ceph
[1:51] * jmlowe1 (~Adium@2601:d:a800:97:accb:5d01:29cf:c17b) has joined #ceph
[1:51] <dmick> xarses: ?
[1:52] <dmick> sagewk: is 'host' in that code "some token other than shortname, which might be a fqdn or might be an ipv4 or v6"?
[1:52] <sagewk> right
[1:52] * dmick hates these names
[1:53] <xarses> dmick ceph-deploy pull 70
[1:54] <xarses> we were just about to attack that
[1:55] <dmick> still not clear what you mean; you mean you saw the bug and were working on a fix, but Sage beat you to it?
[1:55] <dmick> he's like that
[1:56] <xarses> ya
[1:57] <sagewk> repushed with the better import (and now i understand what the import .foo means :)
[1:58] <sagewk> xarses: sorry ;) looks ok now?
[1:58] * jmlowe (~Adium@2601:d:a800:97:34ed:df80:912:bb08) Quit (Ping timeout: 480 seconds)
[1:59] <xarses> sagewk: just got raise ValueError("No JSON object could be decoded")
[1:59] <xarses> ValueError: No JSON object could be decoded
[1:59] <xarses> does 69 fix that?
[1:59] <sagewk> xarses: you should totally make it work for ipv6 though :)
[1:59] * grepory (~Adium@212.sub-70-192-194.myvzw.com) has joined #ceph
[1:59] <sagewk> on the mon status command?
[1:59] <xarses> mon create
[1:59] <sagewk> thought it was supposed to; i haven't been following closely tho
[2:00] <xarses> will let you know shortly
[2:00] <sagewk> cool
[2:00] <dmick> oooh more minimal. me likey.
[2:03] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[2:04] <xarses> sagewk, ok that fixed the next traceback
[2:04] <xarses> erm next/that
[2:07] <xarses> so
[2:07] <via> is it a bad idea to upgrade from cuttlefish to dumpling while a backfill for a lost drive is underway?
[2:07] <xarses> ceph-deploy 1.2.3's purgedata rmdir's /etc/ceph
[2:08] <xarses> which prevents ceph-deploy mon create from copying over the ceph.conf file
[2:08] <sagewk> purgedata should only run after the package is uninstalled
[2:08] <sagewk> and mon create should run after it is (re)installed...
[2:08] <xarses> which should be fixed: that the folder gets erased, or should write cluster conf create the directory?
[2:09] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[2:09] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit ()
[2:09] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) has joined #ceph
[2:09] <xarses> so purge erases the packages, but purgedata doesn't?
[2:09] <xarses> purge should be responsible for the folder then
[2:10] <xarses> not purgedata
[2:12] * grepory (~Adium@212.sub-70-192-194.myvzw.com) Quit (Quit: Leaving.)
[2:16] <sagewk> purge is apt-get remove --purge; it removes (many/most) config files, but doesn't remove data or logs
[2:16] <sagewk> purgedata gets the rest and is less discriminating
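
So the intended ordering is roughly the following, with a hypothetical host name node1: purgedata only makes sense once the packages are gone, and mon create only after a reinstall.

    ceph-deploy purge node1        # apt-get remove --purge of the ceph packages
    ceph-deploy purgedata node1    # wipe what is left: data dirs, logs, /etc/ceph
    ceph-deploy install node1      # reinstall before creating anything
    ceph-deploy mon create node1
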
[2:18] * LeaChim (~LeaChim@054073b1.skybroadband.com) Quit (Ping timeout: 480 seconds)
[2:22] <xarses> hmm
[2:22] * gaveen (~gaveen@175.157.81.32) Quit (Remote host closed the connection)
[2:23] <xarses> ceph-deploy mon create hostname:ip (single node) appeared to have started the monitor, but the keys were never generated, and the key process isn't running
[2:24] <xarses> hmm
[2:24] <xarses> odd, stopped ceph (/etc/init.d/ceph stop) and then started it with -a (/etc/init.d/ceph -a start)
[2:25] <xarses> and it started ceph-create-keys and the node is happy now
[2:25] <xarses> === mon.node-7 ===
[2:25] <xarses> Starting Ceph mon.node-7 on node-7...
[2:25] <xarses> Starting ceph-create-keys on node-7...
[2:26] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:26] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[2:28] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (Quit: Leaving)
[2:30] * ScOut3R (~ScOut3R@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[2:31] * diegows (~diegows@190.190.11.42) has joined #ceph
[2:31] * sagelap1 (~sage@97.sub-70-197-72.myvzw.com) has joined #ceph
[2:31] * gaveen (~gaveen@175.157.128.166) has joined #ceph
[2:32] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Ping timeout: 480 seconds)
[2:36] <sagelap1> gregaf1: you're sure the explicit locator is problematic? i still like that approach better since it captures any future locator changes and makes the objecter code simpler
[2:38] <gregaf1> there was some reason, but it might have been early and stupid instead of something that actually matters :/
[2:39] <gregaf1> I guess everywhere we want to build this up we either have or can trivially obtain the locator used for the request
[2:39] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:40] <sagelap1> i suspect m->get_locator() should do it
[2:41] <gregaf1> yeah
[2:41] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[2:41] <sagelap1> fwiw, the redir blueprint proposes storing the locator and object for the target, so it would just be copying that value
[2:42] <gregaf1> oh, for explicit ones instead of generated cache-pool redirects?
[2:43] <sagelap1> yeah
[2:44] * sagelap1 is now known as sagelap
[2:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:49] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) has joined #ceph
[2:52] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[2:55] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[2:55] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[3:00] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[3:07] * gaveen (~gaveen@175.157.128.166) Quit (Ping timeout: 480 seconds)
[3:09] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) has joined #ceph
[3:12] * ScOut3R (~ScOut3R@91EC1DC5.catv.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[3:16] * gaveen (~gaveen@175.157.31.206) has joined #ceph
[3:22] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[3:22] * sagelap (~sage@97.sub-70-197-72.myvzw.com) Quit (Read error: No route to host)
[3:26] * nerdtron (~kenneth@202.60.8.252) has joined #ceph
[3:31] * jmlowe (~Adium@c-98-223-198-138.hsd1.in.comcast.net) has joined #ceph
[3:37] * jmlowe1 (~Adium@2601:d:a800:97:accb:5d01:29cf:c17b) Quit (Ping timeout: 480 seconds)
[3:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:40] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[3:43] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[3:47] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[3:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:50] * dpippenger (~riven@tenant.pas.idealab.com) Quit (Quit: Leaving.)
[3:50] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[3:52] <cofol1986> Hello, while using cephfs I hit a strange problem: after deleting a bunch of files at a time, the mountpoint dir can't be "ls"ed, but some of the files in it can.
[3:53] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:56] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[4:03] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[4:04] <yanzheng> cofol1986, does "ls" hang, or is "ls" just slow?
[4:06] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (Ping timeout: 480 seconds)
[4:10] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[4:13] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[4:13] * hugo (~hugo@210.65.146.4) has joined #ceph
[4:15] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[4:17] * grepory (~Adium@2600:1003:b01a:6d42:e1eb:5971:1487:95f3) has joined #ceph
[4:19] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[4:22] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[4:23] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[4:27] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[4:29] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) has joined #ceph
[4:29] * LPG_ (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) Quit (Ping timeout: 480 seconds)
[4:37] * hugo (~hugo@210.65.146.4) Quit (Ping timeout: 480 seconds)
[4:37] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[4:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:43] <hugo_> Where can I find what each rgw pool is used for?
[4:43] <hugo_> I have no idea what .rgw.gc, .rgw.control, .rgw.buckets and .rgw are for
[4:45] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[4:47] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[4:47] * grepory1 (~Adium@15.sub-70-192-193.myvzw.com) has joined #ceph
[4:48] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) has joined #ceph
[4:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:50] <dmick> hugo_: you can read the sources, but you shouldn't need to know the details to use rgw
[4:51] <hugo_> dmick: Well, I don't know why the container list still shows me all the containers that I deleted
[4:52] <hugo_> dmick: And I cannot create a new one with the same name tho.
[4:52] <dmick> I've no idea what you have done to your cluster; if you can talk about the story from the beginning maybe I can offer a theory.
[4:53] * grepory (~Adium@2600:1003:b01a:6d42:e1eb:5971:1487:95f3) Quit (Ping timeout: 480 seconds)
[4:53] <hugo_> dmick: I got a 404 not found even though an object had been uploaded into this container. I tried to delete all objects in .rgw & .rgw.buckets, but listing containers still returned the deleted containers, which is odd.
[4:53] <dmick> by "container", do you mean "bucket"?
[4:53] <dmick> and
[4:53] <dmick> you can't use s3 and then go messing around with the underlying rados objects and expect anything to work
[4:54] <hugo_> sort of.... Swift's container = S3's bucket
[4:54] <dmick> that's like creating files in a filesystem and then randomly writing blocks to the disk
[4:55] <hugo_> k .... the reason I did so is that purge temp data is not working in Cuttlefish tho
[4:56] <hugo_> dmick: I tried to clean-up the cluster ....
[4:56] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) Quit (Quit: sprachgenerator)
[4:57] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[4:58] * yy-nm (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm)
[4:58] <hugo_> anyhow, I removed all radosgw related pools ...
[4:59] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[4:59] * clayb (~kvirc@proxy-nj2.bloomberg.com) Quit (Read error: Connection reset by peer)
[4:59] <hugo_> so if I'd like to have 3 copies in Ceph for an object which is uploaded by RadosGW, on which pool should I set the size to 3?
[4:59] <dmick> ok. god knows what state your cluster is in then
[5:00] <dmick> the replication level 3 is an interesting question.
[5:00] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:01] * yy-nm (~Thunderbi@218.74.35.201) Quit ()
[5:01] <hugo_> hmm......
[5:02] * yy-nm (~Thunderbi@218.74.35.201) has joined #ceph
[5:05] <hugo_> Well, that's why I need to know the details about using rgw. I'm evaluating features & performance of RESTful-API-based object storage: S3, Ceph, Swift, etc.
[5:05] * fireD (~fireD@93-139-191-152.adsl.net.t-com.hr) has joined #ceph
[5:07] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[5:07] * fireD_ (~fireD@93-139-154-230.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:07] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[5:07] <hugo_> well, not too much information about RadosGW's architecture XD
[5:10] <dmick> I can't find documentation either
[5:10] <dmick> I'm sure it's written down somewhere
[5:11] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit ()
[5:11] * grepory1 (~Adium@15.sub-70-192-193.myvzw.com) Quit (Quit: Leaving.)
[5:11] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[5:11] <dmick> but I can't find it. I'd ask on the ceph-users mailing list "where do I look to find out which of the rgw pools is used for data objects, so I can set the replication level higher on those pools"?
[5:11] <dmick> it's a reasonable question to ask
[5:11] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) Quit ()
[5:11] <hugo_> dmick: whatever, thanks for your information ... appreciate it, I'll keep looking for it
[5:12] * clayg (~clayg@ec2-54-235-44-253.compute-1.amazonaws.com) has joined #ceph
[5:12] <hugo_> dmick: sounds good .... I'll do that
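
For what it's worth, per-pool replication is set with "ceph osd pool set"; which of the .rgw.* pools actually holds the bucket data is exactly the question hugo_ is taking to the mailing list, so treat the pool name below as an assumption:

    rados lspools | grep rgw                 # see which rgw pools exist
    ceph osd dump | grep 'rep size'          # current replication of each pool
    ceph osd pool set .rgw.buckets size 3    # assuming .rgw.buckets is the data pool
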
[5:30] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: brb)
[5:31] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[5:32] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[5:39] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[5:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:05] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:06] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:20] * carif (~mcarifio@146-115-183-141.c3-0.wtr-ubr1.sbo-wtr.ma.cable.rcn.com) Quit (Ping timeout: 480 seconds)
[6:43] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:45] * sleinen1 (~Adium@2001:620:0:26:d5d:7f6:e00c:9113) has joined #ceph
[6:50] * haomaiwang (~haomaiwan@119.6.75.221) has joined #ceph
[6:51] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:12] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[7:12] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Leaving...)
[7:14] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[7:15] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[7:18] * gaveen (~gaveen@175.157.31.206) Quit (Ping timeout: 480 seconds)
[7:19] * isaac_ (~isaac@mike-alien.esc.auckland.ac.nz) Quit (Ping timeout: 480 seconds)
[7:27] * gaveen (~gaveen@175.157.234.45) has joined #ceph
[7:31] * julian (~julianwa@125.70.133.187) has joined #ceph
[7:32] * chutz (~chutz@rygel.linuxfreak.ca) Quit (Quit: Leaving)
[7:34] * madkiss (~madkiss@a6264-0299838063.pck.nerim.net) Quit (Quit: Leaving.)
[7:37] * gaveen (~gaveen@175.157.234.45) Quit (Ping timeout: 480 seconds)
[7:39] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[7:39] * chutz (~chutz@rygel.linuxfreak.ca) Quit ()
[7:42] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[7:43] * isaac_ (~isaac@mike-alien.esc.auckland.ac.nz) has joined #ceph
[7:46] * gaveen (~gaveen@175.157.102.35) has joined #ceph
[7:51] * isaac_ (~isaac@mike-alien.esc.auckland.ac.nz) Quit (Ping timeout: 480 seconds)
[8:00] * hugo (~hugo@210.65.146.4) has joined #ceph
[8:05] * kislotniq (~kislotniq@193.93.77.54) Quit (Remote host closed the connection)
[8:06] * kislotniq (~kislotniq@193.93.77.54) has joined #ceph
[8:06] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit (Quit: Bye!)
[8:07] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[8:08] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (Remote host closed the connection)
[8:09] * seif (uid11725@ealing.irccloud.com) Quit (Ping timeout: 480 seconds)
[8:09] * cce (~cce@50.56.54.167) Quit (Remote host closed the connection)
[8:09] * cce (~cce@50.56.54.167) has joined #ceph
[8:11] * phantomcircuit (~phantomci@covertinferno.org) Quit (Ping timeout: 480 seconds)
[8:11] * mschiff (~mschiff@46.189.28.48) has joined #ceph
[8:11] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[8:13] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[8:15] * LCF (ball8@193.231.broadband16.iol.cz) Quit (Remote host closed the connection)
[8:16] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Remote host closed the connection)
[8:16] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:19] * Qu310 (~Qten@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[8:20] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Read error: Connection timed out)
[8:20] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[8:20] * LCF (ball8@193.231.broadband16.iol.cz) has joined #ceph
[8:29] * gaveen (~gaveen@175.157.102.35) Quit (Read error: Operation timed out)
[8:30] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:34] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Quit: Konversation terminated!)
[8:34] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[8:38] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[8:38] * wenjianhn (~oftc-webi@proxy.wenjian.me) has joined #ceph
[8:39] * gaveen (~gaveen@175.157.140.167) has joined #ceph
[8:39] * wenjianhn (~oftc-webi@proxy.wenjian.me) Quit ()
[8:40] * Vjarjadian (~IceChat77@176.254.37.210) Quit (Quit: When the chips are down, well, the buffalo is empty)
[8:40] * wenjianhn (~wenjianhn@123.118.208.217) has joined #ceph
[8:40] * a2_ (~avati@ip-86-181-132-209.redhat.com) Quit (Read error: Connection reset by peer)
[8:41] * a2_ (~avati@ip-86-181-132-209.redhat.com) has joined #ceph
[8:48] * sleinen1 (~Adium@2001:620:0:26:d5d:7f6:e00c:9113) Quit (Quit: Leaving.)
[8:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:50] * mschiff (~mschiff@46.189.28.48) Quit (Remote host closed the connection)
[8:51] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:52] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[8:56] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:56] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) has joined #ceph
[8:57] * malcolm (~malcolm@silico24.lnk.telstra.net) has joined #ceph
[8:57] * malcolm (~malcolm@silico24.lnk.telstra.net) Quit (Remote host closed the connection)
[9:03] * hugo (~hugo@210.65.146.4) Quit (Ping timeout: 480 seconds)
[9:05] * LiRul (~lirul@91.82.105.2) has joined #ceph
[9:05] <LiRul> hi
[9:05] <LiRul> is there any method to decrease radosgw logging verbosity?
[9:08] * sleinen (~Adium@2001:620:0:2d:fc9a:f61f:d934:e6a) has joined #ceph
[9:08] * capri_wk (~capri@212.218.127.222) Quit (Quit: Verlassend)
[9:09] * sleinen1 (~Adium@2001:620:0:26:5c53:1e09:e6d1:ca17) has joined #ceph
[9:11] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[9:11] * ChanServ sets mode +v andreask
[9:14] * capri (~capri@212.218.127.222) has joined #ceph
[9:16] * sleinen (~Adium@2001:620:0:2d:fc9a:f61f:d934:e6a) Quit (Ping timeout: 480 seconds)
[9:16] * seif (uid11725@id-11725.ealing.irccloud.com) has joined #ceph
[9:20] * mattt (~mattt@92.52.76.140) has joined #ceph
[9:22] * PITon (~pavel@195.182.195.107) Quit (Ping timeout: 480 seconds)
[9:23] * PITon (~pavel@195.182.195.107) has joined #ceph
[9:28] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[9:37] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[9:37] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit (Quit: tryggvil)
[9:41] * Bada (~Bada@195.65.225.142) has joined #ceph
[9:42] * yy-nm (~Thunderbi@218.74.35.201) Quit (Quit: yy-nm)
[9:49] * vipr (~vipr@frederik.pw) Quit (Quit: leaving)
[9:51] * fretb (~fretb@frederik.pw) has joined #ceph
[9:51] * fretb (~fretb@frederik.pw) Quit ()
[9:53] * fretb (~fretb@frederik.pw) has joined #ceph
[9:59] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[10:10] * mschiff (~mschiff@p4FD7DDCE.dip0.t-ipconnect.de) has joined #ceph
[10:11] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[10:25] * LeaChim (~LeaChim@054073b1.skybroadband.com) has joined #ceph
[10:26] * hybrid5121 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[10:32] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[10:32] * gaveen (~gaveen@175.157.140.167) Quit (Ping timeout: 480 seconds)
[10:34] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[10:37] * ScOut3R (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) has joined #ceph
[10:40] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[10:42] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[10:45] <cofol1986> Hi everybody, how does cephfs get to the data blocks if the primary OSD is out, will it point to the 2nd or 3rd replica's location automatically?
[10:45] <topro> trying to get fuse.ceph mounted via fstab, debian will either try to mount before the network is up, or, with the fstab option _netdev, fuse.ceph fails to parse the options string. anyone solved that issue?
[10:48] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Read error: Operation timed out)
[10:48] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:49] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[10:49] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[10:50] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[10:50] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:50] * ScOut3R (~ScOut3R@catv-89-133-21-203.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[10:50] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) Quit ()
[10:51] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[10:53] * allsystemsarego (~allsystem@188.25.130.226) has joined #ceph
[10:53] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[10:55] * PITon (~pavel@195.182.195.107) Quit (Ping timeout: 480 seconds)
[10:56] * PITon (~pavel@195.182.195.107) has joined #ceph
[10:57] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[10:58] * cofol1986 (~xwrj@110.90.119.113) Quit (Read error: Connection reset by peer)
[11:03] * AfC (~andrew@2407:7800:200:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[11:04] * yanzheng (~zhyan@134.134.137.71) Quit (Quit: Leaving)
[11:04] * hugo (~hugo@210.65.146.4) has joined #ceph
[11:10] * tryggvil (~tryggvil@178.19.53.254) has joined #ceph
[11:11] <Kioob`Taff> HI
[11:11] * hugo_ (~hugo@50-197-147-249-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[11:12] <Kioob`Taff> Any idea why 613GB of data takes 1212GB of space on the OSDs? (my CRUSH rules should imply only one replica on those OSDs)
[11:13] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[11:14] * newbie (~kvirc@111.172.32.75) has joined #ceph
[11:14] * newbie (~kvirc@111.172.32.75) Quit ()
[11:19] <jerker> topro: a workaround would be to let NetworkManager, if you run it, do its stuff (ceph mounting) when the correct interface is up.... For example something like this: https://wiki.archlinux.org/index.php/NetworkManager#Mount_remote_folder_with_sshfs
[11:24] <niklas> Hi there. I have 88 OSDs on 2 nodes and 4k PGs. Now I filled my cluster up to 75% and I have 9 OSDs that ceph reports as "near full"
[11:24] <andreask> Kioob`Taff: what does "ceph osd dump | grep 'rep size'" tell you?
[11:25] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[11:25] <niklas> Now I checked how many PGs there are on each OSD. On average it should be about 49, but I get up to 69 PGs on one osd
[11:25] <niklas> Why isn't it distributed equally, and what can I do about it?
[11:26] <niklas> because like this my cluster is basically full at 75%…
[11:28] <Kioob`Taff> andreask: the "rep size" is 3, but my CRUSH rule takes only the first replica from the SSDroot. The others are in the SASroot.
[11:31] <topro> jerker: as suggested by someone else, I now mount fuse.ceph manually from rc.local. I know it's not a very clean solution but it solves more issues than it creates. thanks anyway!
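
The rc.local workaround topro describes amounts to a single ceph-fuse call once the network is up; the monitor address and mount point here are hypothetical:

    # e.g. appended to /etc/rc.local, before the final 'exit 0'
    ceph-fuse -m 192.168.0.10:6789 /mnt/ceph
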
[11:31] * foosinn (~stefan@office.unitedcolo.de) Quit (Remote host closed the connection)
[11:33] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[11:34] <andreask> Kioob`Taff: hmm ... can you pastebin your crush map?
[11:34] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:34] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[11:34] * wenjianhn (~wenjianhn@123.118.208.217) Quit (Ping timeout: 480 seconds)
[11:35] <Kioob`Taff> http://pastebin.com/Y19cTbzU
[11:35] <Kioob`Taff> and the osd tree : http://pastebin.com/vAg29RiE
[11:36] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[11:36] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) has joined #ceph
[11:36] <agh> Hello
[11:36] <agh> is the header x-amz-server-side-encryption supported by radosgw ?
[11:37] * sel (~sel@python.home.selund.se) has joined #ceph
[11:37] <sel> Does ceph use luks for encryption?
[11:47] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:50] <andreask> Kioob`Taff: and you assigned that SSDperOSDfirst rule to the pools you want to use it?
[11:50] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[12:00] * hugo (~hugo@210.65.146.4) Quit (Remote host closed the connection)
[12:02] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[12:03] * matsuhashi (~matsuhash@124x35x46x9.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[12:12] <Kioob`Taff> andreask: yes
[12:21] <andreask> Kioob`Taff: ceph osd dump | grep rule ?
[12:22] <Kioob`Taff> crush_ruleset 7
[12:23] <Kioob`Taff> (and crush_ruleset 4 for other pools)
[12:26] <Kioob`Taff> I also checked my rule with "pg dump"
[12:26] * mxmln (~maximilia@212.79.49.65) Quit ()
[12:26] <Kioob`Taff> all PGs have the master copy on an SSD OSD, and no other replica on an SSD OSD
[12:27] <Kioob`Taff> note : "ceph pg dump | grep ^6\\. | awk '{ SUM+=$6 } END { print $6 }'" reports only 543GB of data
[12:30] <andreask> strange thing ... hmm
[12:31] <Kioob`Taff> yes... I was very surprised to see the cluster full...
[12:35] <Kioob`Taff> (my awk is wrong)
[12:36] <Kioob`Taff> it's "ceph pg dump | grep ^6\\. | awk '{ SUM+=($6/1024/1024) } END { print SUM }'", which give a correct result : 616GB
[12:36] <Kioob`Taff> If I look for a specific OSD, the 50th, I have :
[12:36] <Kioob`Taff> # ceph pg dump | grep ^6\\. | grep '\[50,' | awk '{ SUM+=($6/1024/1024) } END { print SUM }'
[12:36] <Kioob`Taff> 52881.3
[12:37] <Kioob`Taff> so, 52GB
[12:37] <Kioob`Taff> but : # df -h /var/lib/ceph/osd/ceph-50
[12:37] <Kioob`Taff> Filesystem Size Used Avail Use% Mounted on
[12:37] <Kioob`Taff> /dev/sda4 275G 105G 170G 39% /var/lib/ceph/osd/ceph-50
[12:39] <sel> I've got a feature request, but I'm not sure where it should go. What I want is to be able to set size and min_size to 2, and still have an operational rbd image. If I set both to 2 and one osd goes down, some placement groups end up incomplete, and aren't usable before the osd comes back online. The reason for my request is that I want to ensure that data is replicated before an ack is given.
[12:42] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[12:44] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[12:45] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[12:45] * ChanServ sets mode +v andreask
[12:46] <Kioob`Taff> in fact, if I look at just the 6.31f PG for example, Ceph reports a size of 630MB. On disk it takes 1.3GB
[12:47] <andreask> what do you mean by "on disk it takes 1.3G"?
[12:48] <Kioob`Taff> /var/lib/ceph/osd/ceph-45/current/6.31f_head# du -sh .
[12:48] <Kioob`Taff> 1,3G .
[12:48] <Kioob`Taff> is it wrong ?
[12:48] <Kioob`Taff> (I have same amount on each replica)
[12:51] * julian (~julianwa@125.70.133.187) Quit (Quit: afk)
[12:52] <Kioob`Taff> maybe the method is wrong... don't know. If I count only files named with "_head_", there is 448MB (I use : find . -name '*head*' -print0 | xargs -r -0 du -shc )
[12:53] <andreask> Kioob`Taff: well ... like: ls -lh /var/lib/ceph/osd/ceph-45/current/6.31f_head
[12:54] <Kioob`Taff> # ls -lh /var/lib/ceph/osd/ceph-45/current/6.31f_head
[12:54] <Kioob`Taff> total 8,0K
[12:54] <Kioob`Taff> drwxr-xr-x 3 root root 4,0K sept. 9 09:15 DIR_F
[12:54] <andreask> Kioob`Taff: sorry ... missing / at the end
[12:54] <Kioob`Taff> same result
[12:55] <Kioob`Taff> you want the recursive one ?
[12:55] <andreask> Kioob`Taff: ye
[12:56] <Kioob`Taff> http://pastebin.com/MLyimD2U
[12:57] <Kioob`Taff> (osd-45 is a replica, but I have the same output for osd-45 the master)
[12:58] <Kioob`Taff> osd-50, the master
[13:01] <andreask> Kioob`Taff: I'm afraid I can't help you further here, sorry
[13:02] <Kioob`Taff> ok, no problem andreask, thanks ;)
[13:02] <Kioob`Taff> I sent an email on the list, but should maybe open an issue on tracker
[13:02] <andreask> Kioob`Taff: but if you wait a little bit all the Inktank guys are also online
[13:03] <Kioob`Taff> yes
[13:04] <jerker> So v0.68 (Emperor) is the one to run if one wants ZFS support, right? Is ZFS the only reasonably stable file system for Ceph with support for compression?
[13:06] * andreask needs lunch
[13:07] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[13:07] * nerdtron (~kenneth@202.60.8.252) Quit (Remote host closed the connection)
[13:16] <Kioob`Taff> jerker: since btrfs is not stable at all, I suppose yes
[13:20] <joao> jerker, Emperor shall be v0.71, not 0.68
[13:20] <joao> 0.68 is a dev release
[13:21] <joao> sel, you should send that feature request to ceph-devel
[13:22] <joao> someone will certainly follow up on it, and may even suggest you create a blueprint on the wiki
[13:26] <jerker> joao: Thank you. I should have written (on the way to Emperor). ZFS fits very well with my secondary purpose with these machines, old school ZFS backup.
[13:33] * kraken (~kraken@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[13:33] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[13:38] * foosinn (~stefan@office.unitedcolo.de) has joined #ceph
[13:41] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[13:47] * markbby (~Adium@168.94.245.3) has joined #ceph
[13:52] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[13:55] <topro> Kioob`Taff: I know its some kind of off-topic but do you know (or have a feeling of) when to expect a btrfs release which might be considered stable?
[13:56] <Gugge-47527> I think its some kind of 2020 or 2030 plan :P
[13:57] <Gugge-47527> My guess is zfsonlinux for osd storage is gonna be stable and supported before btrfs :)
[14:01] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[14:05] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[14:08] <foosinn> hey, anyone here using ceph-deploy? i can't figure out how to set up the public and cluster network with ceph-deploy
[14:09] <alfredodeza> foosinn: what do you mean by public cluster?
[14:13] <foosinn> this section: http://ceph.com/docs/master/rados/configuration/network-config-ref/#ceph-networks
[14:13] <topro> Gugge-47527: as long as there is no native support for ZFS in linux kernel (i.e. licensing issues resolved) it is not a viable alternative to me. anyway, 2030 sounds feasible :P
[14:14] <Gugge-47527> well, im not saying ZFS is ever gonna be a supported and stable way to run OSD's :)
[14:14] <Gugge-47527> Im just saying i think its gonna be before btrfs :)
[14:14] <topro> ok, i got that message :/
[14:15] <alfredodeza> foosinn: are you just getting started with ceph?
[14:16] <foosinn> alfredodeza, yes, I'm currently running a small test system with 2 nodes, each with 12 disks
[14:16] <alfredodeza> the idea with ceph-deploy is just that really, to give you an overview on how to get started, how to set things up so that you can then move on to more granular tools
[14:16] <alfredodeza> foosinn: ah perfect!
[14:16] <alfredodeza> so you should follow the getting started guide
[14:16] <alfredodeza> that guide uses ceph-deploy exclusively and it will tell you how to set up the networks
[14:17] <alfredodeza> foosinn: http://ceph.com/docs/next/start/quick-ceph-deploy/
[14:17] <alfredodeza> ceph-deploy will create the ceph.conf file for you :)
[14:17] <topro> another thing I couldn't get an answer to yet is why my OSD processes grow up to 10GB of memory consumption each. anyone experienced OSDs regularly growing above 1G?
[14:18] <topro> btw. if I don't restart them, I'm sure they would keep growing forever :/
[14:18] <topro> 0.67.3 ^^
[14:20] <foosinn> alfredodeza, i didn't find anything about the public and cluster network with ceph-deploy, neither in the quickstart nor in the ceph storage cluster section.
[14:21] <alfredodeza> foosinn: I think you are right, the quick start doesn't mention that
[14:21] <foosinn> the only reference i found was the document i posted, but that doesn't seem to fit for ceph-deploy.
[14:21] <alfredodeza> foosinn: so once you have your ceph.conf you can push that to your nodes with ceph-deploy too
[14:21] <Kioob`Taff> topro: really don't know. I use it in production for some purpose... but often have problems.
[14:22] <alfredodeza> so you could add extra options there and push them
[14:22] <alfredodeza> it seems that is what's missing, right foosinn?
[14:24] <foosinn> alfredodeza, ceph-deploy seems to have only a very basic config.
[14:24] <alfredodeza> yes, but you can add whatever you want to there and push that out
[14:25] <alfredodeza> ceph-deploy is meant to *not* give you every possible option/flag/configuration possible
[14:25] <alfredodeza> just the most simple stuff to get you going
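
In other words, the networks are plain ceph.conf options: add them by hand under [global] and distribute the file with ceph-deploy. The subnets and node names below are hypothetical:

    # add under [global] in the ceph.conf that ceph-deploy generated:
    #   public network  = 10.0.0.0/24
    #   cluster network = 10.0.1.0/24
    ceph-deploy --overwrite-conf config push node1 node2 node3
    # then restart the ceph daemons on each node so they pick up the new networks
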
[14:25] <foosinn> i already set up 24 osds and an additional mon node but none of them is mentioned there
[14:25] <foosinn> http://ix.io/7WB
[14:26] <foosinn> also the first three monitors are added as a single line, so i don't know how to add a cluster ip for them
[14:26] <foosinn> and this is what confuses me a lot :D
[14:29] <alfredodeza> there is a line past which ceph-deploy can no longer help; like I said, it is really meant to give you a very small subset of items to get you started
[14:30] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[14:32] <foosinn> ok, so i'll set it up manually. thanks for your help alfredodeza
[14:32] <alfredodeza> foosinn: there is one more thing though...
[14:32] <alfredodeza> if you feel that this is a *must have* for someone getting started with ceph, then it is probably a good candidate to add to ceph-deploy
[14:33] * diegows (~diegows@190.190.11.42) has joined #ceph
[14:33] <alfredodeza> whenever new features are proposed for ceph-deploy, I ask something like "is this something that a new user needs? or is this a more advanced option?"
[14:34] <alfredodeza> that is what defines what needs to be added (or not)
[14:35] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[14:38] <foosinn> it would be at least nice if i could add it now without recreating the cluster. is there a way to get the full configuration ceph is running on? as I already said, my osds and my added monitors are not in my ceph.conf. where are they defined?
[14:38] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[14:39] * dmsimard1 (~Adium@108.163.152.66) has joined #ceph
[14:40] <dmsimard1> foosinn, tried looking at the CRUSH map ? Either "ceph osd tree" or by getting and decompiling the CRUSH map
[14:41] <dmsimard1> Might not be what you're exactly after but it gives you a good idea of the running configuration
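
The commands dmsimard is pointing at, plus the monitor equivalent, look roughly like this:

    ceph osd tree                                    # bucket/OSD layout at a glance
    ceph osd getcrushmap -o /tmp/crush.bin           # grab the compiled CRUSH map
    crushtool -d /tmp/crush.bin -o /tmp/crush.txt    # decompile it into readable text
    ceph mon dump                                    # where the running monitors are defined
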
[14:42] <dmsimard1> shoosh, dmsimard, go away, you're disconnected by peer :)
[14:43] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[14:45] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Ping timeout: 480 seconds)
[14:46] * dmsimard1 is now known as dmsimard
[14:47] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) has joined #ceph
[14:51] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[14:52] <dmsimard> I think Ceph could really use a dedicated puppet initiative such as it has with Chef; the puppet-ceph project is so fragmented https://github.com/enovance/puppet-ceph/network
[14:58] * mrprud (~mrprud@ANantes-557-1-135-141.w2-1.abo.wanadoo.fr) has joined #ceph
[15:03] * ScOut3R_ (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[15:03] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Read error: Connection reset by peer)
[15:05] <mrprud> hi everybody, i've got a problem with an osd node
[15:06] <mrprud> i can't start it
[15:06] <mrprud> ERROR:ceph-disk:Failed to activate
[15:08] <mrprud> tried starting it step by step, but ceph-disk-activate failed
[15:08] * agh (~oftc-webi@gw-to-666.outscale.net) Quit (Quit: Page closed)
[15:10] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:11] * sjm (~sjm@64.34.151.178) has joined #ceph
[15:12] * ScOut3R_ (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[15:13] <dmsimard> Are you setting up a new node or is that an existing one ?
[15:14] <mrprud> it's an existing one, I just rebooted the physical node
[15:14] <mrprud> i can mount the xfs device, it's ok!
[15:15] * shang (~ShangWu@64.34.151.178) has joined #ceph
[15:15] <dmsimard> Hmmm, don't know - but isn't activate part of the set up for a new OSD ? Like when you've formatted and prepared a disk, you activate it so it's part of the cluster ?
[15:15] <mrprud> i just did a ceph upgrade from 0.67.2 to 0.67.3
[15:15] <dmsimard> Is your ceph-osd process running ?
[15:15] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[15:16] <mrprud> no, /etc/init.d/ceph start osd failed
[15:16] <mrprud> INFO:ceph-disk:Activating /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.af86280a-f2de-4d53-8cce-edca91978e61
[15:16] <mrprud> ERROR:ceph-disk:Failed to activate
[15:16] <mrprud> ceph-disk: Error: No cluster conf found in /etc/ceph with fsid 259dfbdc-744d-4a48-ae67-ee67bf687f9a
[15:16] <mrprud> ceph-disk: Error: One or more partitions failed to activate
[15:17] <mrprud> the other osd nodes work
[15:17] <dmsimard> Another OSD on the same server ?
[15:18] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[15:18] <mrprud> no :) each osd is on a different server
[15:18] <dmsimard> Okay, so, following the logic of that error message - is the ceph.conf on the failing OSD the same as on a working one, then ?
[15:20] <mrprud> yes, it's the same on all physical servers (md5sum check)
[15:21] <dmsimard> You're able to mount /dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.af86280a-f2de-4d53-8cce-edca91978e61 no problem you said, right ?
[15:22] <mrprud> yes, i can mount i, try a exf repair, this is clean
[15:22] <mrprud> xfs_repair
[15:24] <dmsimard> The other OSDs upgraded/rebooted just fine ?
[15:26] * sjm (~sjm@64.34.151.178) has joined #ceph
[15:26] * vata (~vata@2607:fad8:4:6:8480:a48a:ce87:f2e1) has joined #ceph
[15:27] <mrprud> yes, all the others worked after the update and reboot
[15:27] * thomnico (~thomnico@64.34.151.178) has joined #ceph
[15:28] <dmsimard> Hmm, I have no clue.. you have me curious though. I'm really interested in the solution if you manage to fix that.
[15:36] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[15:36] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[15:39] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[15:44] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Remote host closed the connection)
[15:56] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:57] * KevinPerks1 (~Adium@64.34.151.178) has joined #ceph
[15:57] * KevinPerks (~Adium@64.34.151.178) Quit (Read error: Connection reset by peer)
[16:01] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[16:01] * Meths (~meths@2.25.193.204) Quit (Read error: Operation timed out)
[16:07] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Ping timeout: 480 seconds)
[16:08] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:09] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) has joined #ceph
[16:17] * mattt (~mattt@92.52.76.140) Quit (Ping timeout: 480 seconds)
[16:18] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:19] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[16:23] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[16:29] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:30] * dty (~derek@proxy00.umiacs.umd.edu) has joined #ceph
[16:35] * ntranger_ (~ntranger@proxy2.wolfram.com) has joined #ceph
[16:35] * ntranger (~ntranger@proxy2.wolfram.com) Quit (Remote host closed the connection)
[16:35] <ishkabob> hey guys, can I reduce/change the number of placement groups in an existing pool?
[16:35] <ishkabob> i'm trying to cut a pool from 1000 to 64 placement groups, and it doesn't seem to be working
[16:36] * dmsimard (~Adium@108.163.152.66) Quit (Ping timeout: 480 seconds)
[16:36] <jmlowe> ishkabob: I don't think pg merging is possible yet
[16:36] * sjm (~sjm@64.34.151.178) has joined #ceph
[16:37] <ishkabob> jmlowe: does pg merging refer to both increasing AND decreasing the number of pgs?
[16:37] <jmlowe> ishkabob: pg splitting makes more, merging collapses pg's to decrease the number
[16:38] <jmlowe> ishkabob: splitting came in with cuttlefish
[16:38] <ishkabob> jmlowe: thanks so much!
[16:38] <jmlowe> ishkabob: np
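
So in this era only the upward direction is available, via pg_num and then pgp_num; the pool name and target count below are placeholders:

    ceph osd pool set mypool pg_num 128     # split: create more PGs
    ceph osd pool set mypool pgp_num 128    # then let data actually rebalance onto them
    # going from 1000 down to 64 means creating a new pool and migrating the data into it
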
[16:40] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[16:40] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) Quit (Quit: sprachgenerator)
[16:41] <dty> So I have a somewhat stupid question, the pgmap version increases every time an object is written?
[16:41] * absynth (~absynth@irc.absynth.de) Quit (Remote host closed the connection)
[16:42] * S0d0 (~joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[16:44] * LiRul (~lirul@91.82.105.2) Quit (Quit: Leaving.)
[16:44] * \ask (~ask@oz.develooper.com) Quit (Quit: Bye)
[16:44] * \ask (~ask@oz.develooper.com) has joined #ceph
[16:45] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[16:46] * absynth (~absynth@irc.absynth.de) has joined #ceph
[16:46] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[16:53] * sjm (~sjm@64.34.151.178) has joined #ceph
[16:53] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[16:53] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[16:54] * mancdaz (~darren.bi@94.236.7.190) has joined #ceph
[16:57] * haomaiwang (~haomaiwan@119.6.75.221) Quit (Remote host closed the connection)
[17:01] * KevinPerks1 (~Adium@64.34.151.178) Quit (Read error: No route to host)
[17:01] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[17:03] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[17:05] * KevinPerks (~Adium@64.34.151.178) Quit (Read error: Connection reset by peer)
[17:06] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:06] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:06] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[17:09] * foosinn (~stefan@office.unitedcolo.de) Quit (Quit: Leaving)
[17:09] * sleinen (~Adium@2001:620:0:2d:6c74:4352:3314:a27b) has joined #ceph
[17:10] * carif (~mcarifio@pool-96-233-32-122.bstnma.fios.verizon.net) has joined #ceph
[17:15] * sleinen1 (~Adium@2001:620:0:26:5c53:1e09:e6d1:ca17) Quit (Ping timeout: 480 seconds)
[17:16] * lyncos (~chatzilla@208.71.184.41) has joined #ceph
[17:17] * ircolle (~Adium@c-67-165-237-235.hsd1.co.comcast.net) has joined #ceph
[17:18] <lyncos> hi everyone. I would like to know if there is any way of including config files into the main config file, à la conf.d
[17:18] * KevinPerks (~Adium@64.34.151.178) Quit (Read error: No route to host)
[17:19] <lyncos> maybe there is an undocumented feature :-)
[17:19] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[17:19] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:20] <mtanski> sagewk: Can you take the patch I emailed into the tree to be submitted?
[17:22] * KevinPerks1 (~Adium@64.34.151.178) has joined #ceph
[17:22] * KevinPerks (~Adium@64.34.151.178) Quit (Read error: Connection reset by peer)
[17:23] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:24] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[17:30] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:31] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:32] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:33] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[17:33] * Bada (~Bada@195.65.225.142) Quit (Ping timeout: 480 seconds)
[17:35] * KevinPerks1 (~Adium@64.34.151.178) Quit (Read error: Connection reset by peer)
[17:35] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[17:38] * masterpe (~masterpe@2a01:670:400::43) Quit (Ping timeout: 480 seconds)
[17:42] * doxavore (~doug@99-89-22-187.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[17:42] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[17:44] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:50] * diegows (~diegows@190.190.11.42) Quit (Read error: Operation timed out)
[17:51] * xarses (~andreww@c-71-202-167-197.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:52] * sjm (~sjm@64.34.151.178) has joined #ceph
[17:54] <mancdaz> anyone here had any experience of using ceph in openstack? specifically with glance/cinder integration
[17:55] <mancdaz> It's *almost* working but not quite, and I'm not sure if that's just me or if it literally doesn't work properly in grizzly
[17:55] * KindTwo (~KindOne@h159.43.28.71.dynamic.ip.windstream.net) has joined #ceph
[17:56] * sagelap (~sage@2600:1012:b020:b34b:f19a:3ea9:afe1:ad87) has joined #ceph
[17:57] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[17:58] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:58] * KindTwo is now known as KindOne
[17:59] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[17:59] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[17:59] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) has joined #ceph
[18:00] <decede> mancdaz: I'm hoping it does but I've not got round to testing it yet
[18:01] <saumya> Hi, I am trying to use ceph for a project. I have been trying to add a monitor after installing ceph on the server node, which is my own machine. The monitor gets added with this output on the terminal http://pastebin.com/F5qkg0Em , but there are no keys generated in bootstrap-mds and bootstrap-osd... any help?
[18:02] <decede> mancdaz: might want to logon to freenode and join the #openstack room and ask a more specific question though
[18:02] <jmlowe> mancdaz: mikedawson is the most experienced user of openstack with ceph that I know of
[18:03] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[18:05] * mschiff (~mschiff@p4FD7DDCE.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[18:05] <saumya> decede,jmlowe: could you please suggest some help? I am stuck at this
[18:06] <jmlowe> I predate ceph-deploy so I'm afraid I don't know anything about it, sorry
[18:06] * topro (~topro@host-62-245-142-50.customer.m-online.net) Quit ()
[18:07] <saumya> jmlowe:
[18:07] <saumya> sure np :)
[18:08] <saumya> jmlowe: Can you think of someone else who could help?
[18:08] <jmlowe> most of the inktank guys should be showing some life any minute now as they are on PDT
[18:09] <jmlowe> dmick maybe?
[18:09] * topro (~topro@host-62-245-142-50.customer.m-online.net) has joined #ceph
[18:09] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[18:10] <saumya> dmick: ping, could you help? :)
[18:10] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Quit: berant)
[18:12] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[18:13] * mikedawson_ (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[18:14] * diegows (~diegows@190.190.11.42) has joined #ceph
[18:14] <alfredodeza> hi saumya
[18:15] <saumya> hi alfredodeza :)
[18:15] <alfredodeza> what OS are you running on?
[18:15] <alfredodeza> oh Ubuntu
[18:15] <mrprud> saumya: are you running ceph-deploy install with the real server hostname?
[18:15] <alfredodeza> saumya: if you log in to saumya-Lapi and run `hostname` what do you get back?
[18:16] <alfredodeza> mrprud: your hostname needs to match, correct.
[18:16] <saumya> alfredodeza: hostname returns saumya-Lapi
[18:16] * berant (~blemmenes@gw01.ussignalcom.com) has joined #ceph
[18:16] <mrprud> i think that's saumya's problem: ceph-server != saumya-Lapi
[18:17] <saumya> I am running the command ceph-deploy install saumya@ceph-server
[18:17] * S0d0 (joku@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[18:17] <alfredodeza> hrmnnn I don't think you can pass in the user
[18:17] <mrprud> you must use ceph-deploy install saumya-Lapi
[18:17] <saumya> I was using this because I had set the hostname to ceph-server and username as saumya
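
The usual fix is to address the node by its real hostname and, if a different ssh user is needed, map it in ~/.ssh/config rather than on the ceph-deploy command line. A sketch using the names from this conversation:

    # in ~/.ssh/config on the admin machine:
    #   Host saumya-Lapi
    #       User saumya
    ceph-deploy install saumya-Lapi
    ceph-deploy mon create saumya-Lapi
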
[18:18] <sagelap> zackc: can you take a look at https://github.com/ceph/teuthology/pull/85 ?
[18:18] * sjm (~sjm@64.34.151.178) has joined #ceph
[18:18] <alfredodeza> saumya: ok, so ceph-server is not saumya-Lapi
[18:18] <alfredodeza> that is a problem
[18:18] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[18:18] <sagelap> we'll need to backport this to other branches too since the current code removes /home/ubuntu/cephtest/foo but not cephtest itself.
[18:18] <saumya> alfredodeza: so should I change it to saumya-Lapi in the .ssh and host files?
[18:19] <mrprud> you must use the real hostname
[18:20] <loicd> zackc: http://amo-probos.org/post/15 is an interesting read on the topic of queueing + archiving + scaling teuthology
[18:20] <saumya> mrprud: okay, so I'll change it in both the files and see if I can get through!
[18:20] <saumya> Thanks :)
[18:20] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:20] * markbby (~Adium@168.94.245.3) has joined #ceph
[18:20] <mrprud> saumya: good luck
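
A minimal sketch of the fix being discussed here, assuming the admin node can ssh to the target as in the quick-start guide and that saumya-Lapi is the machine's real hostname (taken from the exchange above); the point is that the name given to ceph-deploy must match what `hostname` prints on the target, with no user@ prefix or ssh alias:

    # on the target node: confirm what the machine calls itself
    hostname                          # should print saumya-Lapi

    # on the admin node: pass exactly that name to ceph-deploy
    ceph-deploy install saumya-Lapi
    ceph-deploy mon create saumya-Lapi
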
[18:22] * mrprud (~mrprud@ANantes-557-1-135-141.w2-1.abo.wanadoo.fr) Quit (Remote host closed the connection)
[18:22] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[18:23] * KevinPerks1 (~Adium@64.34.151.178) has joined #ceph
[18:23] * mancdaz (~darren.bi@94.236.7.190) Quit (Ping timeout: 480 seconds)
[18:23] * KevinPerks (~Adium@64.34.151.178) Quit (Read error: No route to host)
[18:27] * sjm (~sjm@64.34.151.178) has joined #ceph
[18:28] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[18:30] <absynth> someone from inktank around for a quick question (non-technical)?
[18:30] <absynth> i promise it takes only 30 secs
[18:31] <mikedawson_> jmlowe: thanks for the compliment! who needed help?
[18:31] <absynth> hah, he twitched!
[18:32] * sleinen1 (~Adium@2001:620:0:25:1905:46d5:d57d:704c) has joined #ceph
[18:32] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[18:37] <jmlowe> mikedawson_: mancdaz was asking about grizzly, I think he was wondering if his problems were general to openstack or just his install, looks like he left though
[18:37] * sleinen1 (~Adium@2001:620:0:25:1905:46d5:d57d:704c) Quit (Quit: Leaving.)
[18:37] * sleinen1 (~Adium@130.59.94.190) has joined #ceph
[18:38] * sleinen (~Adium@2001:620:0:2d:6c74:4352:3314:a27b) Quit (Ping timeout: 480 seconds)
[18:39] <mikedawson_> jmlowe: so you're saying I'm off the hook? excellent!
[18:39] * sagelap (~sage@2600:1012:b020:b34b:f19a:3ea9:afe1:ad87) Quit (Read error: Connection reset by peer)
[18:39] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:40] <nhm> absynth: I'm here, but I promise nothing. :)
[18:44] <bstillwell> So I removed a pool with 'rados rmpool pool_name', but while the deletes were going on, about half of the cluster's OSDs got marked down.
[18:44] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[18:45] * sleinen1 (~Adium@130.59.94.190) Quit (Ping timeout: 480 seconds)
[18:45] <bstillwell> I rebooted the servers they were on and the OSDs came back up, but those OSDs still have data for that placement group, even though the pg is gone.
[18:45] <sagewk> bstillwell: what version are you running?
[18:45] <bstillwell> dumpling
[18:45] <sagewk> the osd should clean that up in the background
[18:46] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[18:46] <bstillwell> that's what usually happens, but it doesn't appear to be cleaning them up right now
[18:46] <bstillwell> I tried scrubbing one of the osds, but that came back clean
[18:47] <absynth> nhm: i pinged sage, all cleared.
[18:47] <bstillwell> scrubbing the pg doesn't work, because it doesn't exist anymore
[18:47] <nhm> absynth: ok, cool. :)
[18:48] <bstillwell> sagewk: so ceph will recognize all these 38.*_head directories as not being needed any more and clean them up?
[18:49] <sagewk> bstillwell: yep
[18:49] <bstillwell> how long should I wait?
[18:50] <bstillwell> I was guessing the scrub process would take care of it, but it must be something else?
[18:50] <sagewk> it should be deleting now.. how long it takes depends on how fast the disks are
[18:52] <bstillwell> it's been about 8 minutes since the last update from 'ceph -w':
[18:52] <bstillwell> 2013-09-10 10:43:14.106171 mon.0 [INF] pgmap v841652: 2272 pgs: 2272 active+clean; 778 bytes data, 175 GB used, 31420 GB / 31595 GB avail
[18:52] <bstillwell> So I'm not thinking it is
[18:55] <sagewk> can you check an iostat or something on one of the osds to see if the disk is busy?
[18:56] <bstillwell> oh, so what I think happened is: while I was removing a pool with ~110 GB of data, I created a new pool, and then when I ran 'ceph osd pool set test pg_num 2048', that's when all the OSDs were marked down.
[18:56] * yasu` (~yasu`@99.23.160.231) has joined #ceph
[18:56] <bstillwell> sagewk: running 'dstat -fr' shows hardly any activity
[18:57] <bstillwell> mostly 0 iops
[18:57] * yasu` (~yasu`@99.23.160.231) Quit (Remote host closed the connection)
[18:57] <bstillwell> not all the OSDs, about half
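
As background to this exchange: after a pool is deleted, the OSDs remove that pool's PG directories in the background, so leftover directories combined with idle disks is what looks suspicious here. A rough way to check, assuming the FileStore layout of this era (/var/lib/ceph/osd/ceph-<id>/current/<pgid>_head) and pool id 38 from bstillwell's earlier message:

    # on an OSD host: are the deleted pool's PG directories still on disk?
    ls -d /var/lib/ceph/osd/ceph-*/current/38.*_head 2>/dev/null | wc -l

    # is the OSD actually busy deleting them? watch the data disks for a bit
    iostat -x 2 5                     # or 'dstat -fr', as used above
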
[18:59] <sjust> bstillwell: do you have logs for the crashed osds?
[19:00] * The_Bishop_ (~bishop@f048089073.adsl.alicedsl.de) has joined #ceph
[19:00] <bstillwell> I should
[19:00] * berant (~blemmenes@gw01.ussignalcom.com) Quit (Ping timeout: 480 seconds)
[19:01] <sjust> can you post one to cephdrop?
[19:03] <bstillwell> sure, let me see how to do that
[19:04] <saumya> alfredodeza: I am still not able to solve the error. I am following these steps here http://ceph.com/docs/next/start/quick-ceph-deploy/ and the output of all the commands is http://pastebin.com/3MpEPXaL ... am I doing some silly mistake or some other config issues?
[19:04] <bstillwell> sjust: not finding the docs for cephdrop, could you give me a pointer?
[19:05] <alfredodeza> saumya: what is the error
[19:05] * alfredodeza doesn't see errors in that log output
[19:05] * mschiff (~mschiff@46.189.28.116) has joined #ceph
[19:06] <sjust> sftp to cephdrop@ceph.com
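
For reference, the cephdrop upload is just a plain sftp session; a hedged sketch, assuming the OSD log sits in the default /var/log/ceph location (the particular file, ceph-osd.10.log, is the one mentioned a few lines below):

    sftp cephdrop@ceph.com
    sftp> put /var/log/ceph/ceph-osd.10.log
    sftp> bye
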
[19:06] <saumya> alfredodeza: that's all the output I get, no errors on the console, but the keys are still not generated
[19:07] * The_Bishop (~bishop@g230109085.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:07] <alfredodeza> saumya: did you run gatherkeys?
[19:07] <alfredodeza> `ceph-deploy gatherkeys saumya-Lapi`
[19:07] <saumya> alfredodeza: a log file is generated in bootstrap-osd which reads [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf' ..
[19:07] <saumya> alfredodeza: yeah, that gives error
[19:07] <alfredodeza> aha
[19:07] <alfredodeza> that is the missing output :)
[19:08] <saumya> Unable to find /etc/ceph/ceph.client.admin.keyring on ['saumya-Lapi']
[19:08] <saumya> alfredodeza: oh, my bad :)
[19:08] <bstillwell> sjust: Uploading ceph-osd.10.log to /home/cephdrop/ceph-osd.10.log
[19:09] <bstillwell> sjust: For timing purposes, this is what I saw from 'ceph -w' for that OSD:
[19:09] <bstillwell> 2013-09-10 10:25:59.861453 mon.0 [INF] osd.10 10.2.4.114:6800/12357 failed (3 reports from 3 peers after 20.000384 >= grace 20.000000)
[19:09] <sjust> k
[19:09] <alfredodeza> saumya: what happens when you attempt to log in to that host and start the monitor manually ?
[19:11] <saumya> when I do ssh saumya@saumya-Lapi I get an 'unprotected private key file' error and the key is ignored
[19:11] <sjust> bstillwell: how many pgs were in that pool?
[19:11] <sjust> was that the bucket pool?
[19:12] * dontalton (~don@128-107-239-234.cisco.com) has joined #ceph
[19:13] <saumya> alfredodeza: I corrected that, I can ssh now
[19:13] <bstillwell> sjust: there were 2048 pgs in .rgw.buckets which I was removing
[19:13] <sjust> ok
[19:16] <sjust> bstillwell: how many buckets were there in the pool?
[19:17] <bstillwell> sjust: around 2 million
[19:17] <sjust> objects/bucket?
[19:18] <bstillwell> good question. Let me see if I can find the 'rados df' output.
[19:19] <xarses> alfredodeza, I'm having problems with ceph-deploy not creating a single-node monitor: it either gets stuck on ceph-create-keys (probing quorum) or ceph-create-keys isn't even invoked
[19:19] <angdraug> alfredodeza: ceph-deploy mon.py:116 remote_hostname = conn.modules.socket.gethostname()
[19:19] <angdraug> everywhere else in the code it's gethostname().split('.')[0]
[19:19] <alfredodeza> is the host you are passing the same as what `hostname` returns on the remote host?
[19:20] <alfredodeza> angdraug: different purpose
[19:20] <saumya> alfredodeza: yes, I set it to saumya-Lapi
[19:20] <xarses> alfredodeza, yes, im using ceph-deploy on the node i want it to install
[19:21] <bstillwell> sjust: .rgw.buckets says 1429553 objects, and .rgw.buckets.index says 166038 objects, so ~8.6 objects/bucket?
[19:21] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[19:21] * sjm (~sjm@64.34.151.178) has joined #ceph
[19:21] <sjust> did you perhaps have any really massive buckets?
[19:21] <xarses> # hostname
[19:21] <xarses> node-7.domain.tld
[19:21] <bstillwell> sjust: I was trying to go wider last night to see how it affected a performance problem I'm seeing.
[19:21] <angdraug> mon_hosts() strips out the domain
[19:21] <sjust> wider?
[19:21] <bstillwell> more buckets
[19:22] <sjust> oh, so the objects were pretty evenly distributed?
[19:22] <xarses> ceph-deploy new ${HOSTNAME} && ceph-deploy mon create ${HOSTNAME} << this complains that hostname doesn't equal remote hostname
[19:22] <angdraug> I think the problem is that ceph-deploy has 2 entities where it should have 3
[19:22] <bstillwell> should be, I was using the first 4 characters of a hash
[19:22] <sjust> ok
[19:22] <sjust> odd
[19:22] <sjust> so much for that theory
[19:22] <sjust> anything in dmesg?
[19:22] <angdraug> 1) short name of the node; 2) fqdn; 3) a way to ssh into the node (fqdn or ip)
[19:22] <sjust> xfs?
[19:22] <alfredodeza> angdraug: what is an entity
[19:22] * toplelnoob (~toplelnoo@BSN-143-124-148.dial-up.dsl.siol.net) has joined #ceph
[19:23] <angdraug> well concept
[19:23] <alfredodeza> angdraug: sorry I am not understanding what the problem is
[19:23] <angdraug> I'm still thinking about the mon_host() change I proposed yesterday
[19:23] <angdraug> looking at the problem xarses is reporting, I'm not sure I've done it right
[19:23] <xarses> alfredodeza, angdraug is working on the host problem with me
[19:23] <alfredodeza> xarses: are you using the master branch for that?
[19:24] <alfredodeza> ah
[19:24] <angdraug> hostname_is_compatible() compares remote hostname with local hostname
[19:24] <alfredodeza> see, you guys are working together! no wonder why I got confused :)
[19:24] <angdraug> local hostname is received from mon_hosts()
[19:24] <bstillwell> sjust: I rebooted the server, so I looked in /var/log/messages and see this:
[19:24] <bstillwell> Sep 10 10:27:55 den2ceph003 abrt[12542]: Saved core dump of pid 12359 (/usr/bin/ceph-osd) to /var/spool/abrt/ccpp-2013-09-10-10:27:45-12359 (831086592 bytes)
[19:24] <xarses> alfredodeza: 1.2.3 + pull 69 and 70
[19:25] <angdraug> mon_hosts() strips the domain, remote host has fqdn in $HOSTNAME
[19:25] <xarses> ceph-deploy new ${HOSTNAME}:10.0.0.130 && ceph-deploy mon create ${HOSTNAME}:10.0.0.130 << this sits and hugs itself waiting for quorum
[19:25] <alfredodeza> xarses: CentOS ?
[19:25] <bstillwell> sjust: nothing else stands out and it doesn't appear that the core dump was actually written to disk
[19:25] <xarses> alfredodeza yes, 6.4
[19:26] <alfredodeza> right
[19:26] * tryggvil (~tryggvil@178.19.53.254) Quit (Quit: tryggvil)
[19:26] <alfredodeza> I have a PR waiting for that fix :)
[19:26] <alfredodeza> I know it hangs
[19:26] <alfredodeza> it is a problem with the library we use to connect to remote hosts
[19:26] <alfredodeza> it should be fixed as soon as the PR gets reviewed and merged
[19:26] * alfredodeza breathes again
[19:26] <angdraug> #71?
[19:27] <alfredodeza> yes sir
[19:27] <xarses> alfredodeza, regardless of ceph-deploy, ceph-create-keys isn't running
[19:27] <xarses> well, it's stuck
[19:27] <xarses> waiting for quorum
[19:30] <xarses> sigh, this time create-keys finished even though it hadn't been before. Here are the logs for the hostname warning angdraug is looking at. http://paste.openstack.org/show/46500/
[19:33] <xarses> here is ${HOSTNAME}:10.0.0.130 which is the addr attached to node-7.domain.tld. This just sits waiting for quorum http://paste.openstack.org/show/46501/
[19:34] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:36] * KevinPerks1 (~Adium@64.34.151.178) Quit (Read error: Connection reset by peer)
[19:36] * KevinPerks (~Adium@64.34.151.178) has joined #ceph
[19:37] <angdraug> is there any more documentation on this bit "provided hostname must match remote hostname"?
[19:37] * lyncos (~chatzilla@208.71.184.41) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 23.0/20130803192641])
[19:38] * vata (~vata@2607:fad8:4:6:8480:a48a:ce87:f2e1) Quit (Quit: Leaving.)
[19:38] <angdraug> I'm looking for the ways these names are used and when they get resolved
[19:40] <angdraug> it's just a feeling, but I have a growing suspicion that the host:fqdn notation is not followed consistently; something somewhere would try to resolve host instead of using the supplied fqdn...
[19:43] <sagewk> xarses: what does 'ceph daemon mon.$hostname mon_status' say?
[19:43] <xarses> updated paste with log files http://paste.openstack.org/show/46503/
[19:44] <sagewk> 2013-09-10 17:31:22,575 [ceph_deploy.new][DEBUG ] Monitor initial members are ['node-7.domain.tld']
[19:44] <xarses> sagewk: cephx failure
[19:44] <sagewk> i think that should be node-7, not the fqdn.. did you do node-7:node-7.domain.tld ?
[19:45] <sagewk> xarses: is this dumpling or cuttlefish?
[19:45] <xarses> cuttlefish
[19:45] <angdraug> he did node-7:ip
[19:45] <sagewk> ah, then: ceph --admin-daemon /var/run/ceph/ceph-mon.$hostname.asok mon_status
[19:45] <angdraug> or rather fqdn:ip
[19:45] * tryggvil (~tryggvil@17-80-126-149.ftth.simafelagid.is) has joined #ceph
[19:45] <sagewk> weird; where did domain.tld come from?
[19:45] <xarses> sagewk node-7.domain.tld:10.0.0.130
[19:45] <sagewk> ah; it should be name:ip not fqdn:ip
[19:46] <sagewk> where the name == `hostname -s` on the target machine
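
Putting sagewk's point together: the left-hand side of the NAME:IP argument must be the short hostname. A minimal sketch using the node-7 / 10.0.0.130 values from the pastes above:

    # NAME must match `hostname -s` on the target; IP is the monitor address
    ceph-deploy new node-7:10.0.0.130
    ceph-deploy mon create node-7:10.0.0.130

    # if the monitor still won't reach quorum, ask it directly (cuttlefish):
    ceph --admin-daemon /var/run/ceph/ceph-mon.node-7.asok mon_status
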
[19:46] <xarses> I'll re-run without it, but I think it yells at you for not providing the correct hostname
[19:47] <angdraug> does this mean I was right about mon.py:116?
[19:47] <angdraug> that's the line I think would yell at xarses
[19:48] <xarses> yes, it does still yell for not providing the correct hostname
[19:48] <xarses> checking if it actually started...
[19:48] <sagewk> the remote gethostname() returns a fqdn?
[19:48] <angdraug> yes: http://paste.openstack.org/show/46500/
[19:48] <sagewk> probably need to fix that to split off the shortname, or whatever it takes to get the hostname -s equivalent
[19:48] <alfredodeza> sagelap: gethostname() returns the hostname, just like calling `hostname`
[19:49] <angdraug> and not hostname -s
[19:49] <sjust> bstillwell: xfs on the osds?
[19:50] <sagewk> alfredodeza: ah; we want hostname -s or equivalent
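
To make the distinction concrete: socket.gethostname() returns whatever `hostname` returns, which on these nodes is the FQDN, while ceph-deploy wants the short form. An illustration, assuming a node whose FQDN is node-7.domain.tld as in the pastes above:

    hostname        # -> node-7.domain.tld on a host configured with an FQDN
    hostname -s     # -> node-7, the short name ceph-deploy wants

    # roughly what the proposed ceph-deploy helper would do in Python:
    python -c 'import socket; print(socket.gethostname().split(".")[0])'
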
[19:50] <bstillwell> sjust: yeah
[19:50] <bstillwell> centos 6.4
[19:50] <alfredodeza> sagewk: ok. I will open a new ticket
[19:50] <xarses> log using node-7:10.0.0.130 http://paste.openstack.org/show/46505/ works =)
[19:51] <xarses> but get yelled at for hostname validation
[19:51] <angdraug> in the current ceph-deploy it's just a warning, we can live with that for now :)
[19:51] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:51] <alfredodeza> created issue 6269
[19:51] <kraken> alfredodeza might be talking about: http://tracker.ceph.com/issues/6269 [ceph-deploy needs to use the equivalent of `hostname -s`]
[19:52] <alfredodeza> right, it was just meant to give you a heads up for a possible cause to not have a monitor running
[19:52] <alfredodeza> not to prevent you from moving forwards
[19:52] <alfredodeza> *forward
[19:52] * vata (~vata@2607:fad8:4:6:d95:8301:d3fb:4535) has joined #ceph
[19:54] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:54] <angdraug> assuming that it won't try to resolve that shortname at some point later on :p
[19:57] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[19:58] <sagewk> angdraug: it won't; i think the warning is the only bit that went wrong here.
[19:59] * Meths (~meths@2.25.213.185) has joined #ceph
[19:59] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[19:59] <xarses> sagewk, yes, looks like it's only complaining; I'm moving forward into the next iteration of why I was testing the other examples
[20:00] <sjust> bstillwell: there weren't any other buckets left over with a lot of objects?
[20:01] * mschiff (~mschiff@46.189.28.116) Quit (Remote host closed the connection)
[20:01] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[20:02] <angdraug> if I were to extract socket.gethostname().split('.')[0] into a method, is misc or hosts.common the right place for it?
[20:03] <Kioob> Hi
[20:04] <Kioob> can I have some help / explanation about my used space problem (I already posted a message on the ceph-user list)?
[20:05] * markbby (~Adium@168.94.245.4) has joined #ceph
[20:06] <Kioob> I don't know if "(ceph|rados) df" is wrong, or if I have garbage data in my /var/lib/ceph/osd/* dirs
[20:07] <alfredodeza> angdraug: it depends
[20:07] <alfredodeza> hrmn
[20:07] <alfredodeza> that sounds to me like a utility that can be reused in more than one place
[20:08] <alfredodeza> hosts/common should not grow with helpers
[20:08] <alfredodeza> it was meant for ceph-deploy commands that are supposed to be re-used
[20:09] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[20:10] <angdraug> misc then?
[20:10] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[20:15] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:19] <MACscr1> is ceph-mon that resource-intensive? If I am going to have a small openstack cluster (8 computes, 4 storage nodes), can I just run ceph-mon and ceph-osd on the same ones since I will have storage spread out among 4 servers?
[20:19] <xarses> MACscr1: we are running our monitors on our controllers
[20:20] <xarses> if you run 4 mons, your quorum will require 3 of the 4 mons
[20:20] <MACscr1> should i only run 3 mons?
[20:21] <xarses> as long as it's 3 or more, you shouldn't have a problem
[20:21] <MACscr1> I just hate the idea of running 3 separate servers just for mons, though I guess I could run other openstack tools on them
[20:21] * MACscr1 is now known as MACscr
[20:22] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[20:22] <xarses> we don't; also, it doesn't seem that uncommon for small clusters to run the monitors alongside the OSDs
[20:23] <xarses> I would, however, not encourage sharing the OSD role with much of anything but the monitor
[20:24] <MACscr> now do note that my arrangement is a bit unusual in that two of my storage nodes have 12 disks each and the other two nodes only have 6 each
[20:24] * aliguori (~anthony@72.183.121.38) has joined #ceph
[20:24] * sjm (~sjm@64.34.151.178) has joined #ceph
[20:27] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[20:28] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) has joined #ceph
[20:29] <xarses> MACscr: I don't think that matters; the only thing is to make sure you create one OSD per disk
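
The uneven node sizes just mean the 12-disk servers carry more OSDs (and more CRUSH weight) than the 6-disk ones. A hedged sketch of "one OSD per disk" with ceph-deploy, using made-up host and disk names (node1, /dev/sdb, /dev/sdc); the exact osd subcommand form varies a little between ceph-deploy versions:

    # one OSD per data disk, repeated for every disk on every storage node
    ceph-deploy osd create node1:sdb
    ceph-deploy osd create node1:sdc
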
[20:30] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[20:31] * sjm (~sjm@64.34.151.178) has joined #ceph
[20:31] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:33] * bandrus (~Adium@12.248.40.138) has joined #ceph
[20:38] * mcatudal (~mcatudal@142-217-209-54.telebecinternet.net) has joined #ceph
[20:38] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[20:39] <mcatudal> Hi everybody ;-)
[20:39] <Kioob> Hi mcatudal
[20:39] <mcatudal> I just discussed with Chris from Inktank and he introduced me to this channel.
[20:40] <mcatudal> I'm ready to acquire my hardware and share my cloud deployment here, using OpenStack and Ceph as the storage solution.
[20:40] <xarses> sounds fun
[20:41] <xarses> the next release of fuel will have ceph integration
[20:41] <mcatudal> Fuel is for October 2013?
[20:42] <xarses> I think it's sometime near the end of the month for fuel 3.2
[20:42] <xarses> which is sad, since I'm working on the ceph support for it =)
[20:42] <xarses> so I might be able to help with your deployment
[20:43] <MACscr> xarses: oh really? that would be huge
[20:43] <mcatudal> New reading for me from Mirantis!!
[20:43] <MACscr> I really wanted to use that tool, but with no ceph support, I couldn't =/
[20:44] * Cube1 is now known as Cube
[20:44] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:44] <mcatudal> Basically what I will do is deploy three 2U servers, with 12 drives in each server.
[20:44] <mcatudal> 8 drives will be 3 TB and the other 4 will be 960 GB M500 SSDs
[20:45] <xarses> mmm beefy =)
[20:45] <mcatudal> Compute and Ceph will run on bare metal and the services will be virtualized
[20:46] <mcatudal> Ceph will be used as backend storage and for the Glance image service, and the SSDs will provide direct access for the VMs
[20:46] <mcatudal> Sound good to you guys?
[20:46] <mcatudal> I wish to write the recipe for a small cloud storage infrastructure for small businesses.
[20:46] <xarses> by backend storage you mean cinder?
[20:46] <mcatudal> Yes
[20:47] <mcatudal> Cinder will use Ceph, and I will use Ceph with the Swift plugin to be able to offer users access through OwnCloud
[20:48] <xarses> sounds good, I've never virtualized the openstack services inside of MAAS but it should work
[20:48] <mcatudal> The OwnCloud software is an open-source client for Windows users
[20:48] <xarses> by Swift plugin you mean object storage using the Swift API?
[20:48] <mcatudal> As the docs say, most services can be virtualized, but not Compute and the Ceph OSDs, if I'm not mistaken
[20:49] <angdraug> alfredodeza: #72. I checked the imports this time :)
[20:49] <mcatudal> The Swift API could use Ceph as a backend to act as object storage
[20:49] * Jedicus (~user@108-64-153-24.lightspeed.cicril.sbcglobal.net) has joined #ceph
[20:49] <xarses> ok, so you want to set up radosgw then
[20:49] <mcatudal> Yes
[20:50] <xarses> sounds good, would be an interesting blueprint
[20:50] <alfredodeza> angdraug: thanks
[20:50] <alfredodeza> running tests right now
[20:56] <loicd> scuttlemonkey: ping
[20:57] <MACscr> mcatudal: is that next fuel version going to work with ubuntu then?
[20:58] <MACscr> I'm pretty much running away from RHEL these days
[20:58] <xarses> MACscr, yes, support will be back for ubuntu
[20:59] <MACscr> early alpha access? =P
[20:59] <xarses> MACscr, it's all available at github.com/Mirantis
[21:00] <bstillwell> sjust: Sorry, got pulled into a troubleshooting session on something else.
[21:00] <xarses> it's annoying, but you could build your own image. Also, I think ubuntu support is still slightly flaky
[21:00] <bstillwell> sjust: As for other buckets, I don't believe so. I was removing all the rgw related buckets at the same time
[21:03] <mcatudal> MACscr: I had never read about Mirantis... it seems like a good way to achieve the project
[21:03] <mcatudal> I read all the documentation from the OpenStack and Ceph sites
[21:03] <xarses> fair notice, I work @Mirantis =)
[21:03] <mcatudal> That will be my first real play with OpenStack
[21:03] <MACscr> mcatudal: they have some good stuff. Unfortunately their business model is geared towards larger businesses
[21:04] <xarses> MACscr, we'd welcome any feedback
[21:04] <mcatudal> I will draw my network deployment this week
[21:04] <MACscr> xarses: ive provided it on numerous occasions =P
[21:05] <MACscr> but as mentioned, SMBs just aren't your target audience
[21:05] <MACscr> harder to overprice that way =p
[21:08] <loicd> is there a way to download https://www.brighttalk.com/webcast/8847/84173 ?
[21:09] * loicd does not have flahs
[21:09] <loicd> falsh
[21:09] <loicd> flah
[21:09] <loicd> ...
[21:11] * grepory (~Adium@50-200-116-163-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[21:22] * The_Bishop__ (~bishop@f049184244.adsl.alicedsl.de) has joined #ceph
[21:23] <MACscr> xarses: which repo at github would i need?
[21:24] <xarses> for?
[21:24] <xarses> fuel alpha?
[21:24] <MACscr> to test whatever work you have done for the test release of fuel
[21:24] <MACscr> yes
[21:24] <xarses> with ceph support?
[21:24] <MACscr> yes
[21:25] * vata (~vata@2607:fad8:4:6:d95:8301:d3fb:4535) Quit (Ping timeout: 480 seconds)
[21:26] * The_Bishop_ (~bishop@f048089073.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[21:28] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[21:31] <xarses> ubuntu won't work, we don't have the packages integrated yet. Centos is set to use cuttlefish.
[21:32] <xarses> the working location for the ceph branch is https://github.com/xarses/fuelweb/tree/ceph-stage-3 but it appears some of the submodules are out of sync
[21:33] * mrprud (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) has joined #ceph
[21:33] <xarses> I'm uploading a built iso to https://www.dropbox.com/sh/wkket1422uxm89g/9njlXi6mwa but it will take about 50 min
[21:37] * sleinen (~Adium@2001:620:0:25:5de5:2743:46c3:2320) has joined #ceph
[21:38] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[21:39] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[21:39] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[21:44] * vata (~vata@2607:fad8:4:6:1446:1d9e:1a85:3b9) has joined #ceph
[21:46] * toplelnoob (~toplelnoo@BSN-143-124-148.dial-up.dsl.siol.net) Quit (Remote host closed the connection)
[21:46] * grepory (~Adium@155.sub-70-192-203.myvzw.com) has joined #ceph
[21:46] * allsystemsarego (~allsystem@188.25.130.226) Quit (Quit: Leaving)
[21:48] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[21:52] <infernix> is there any updated documentation on the multi-region radosgw code?
[21:54] <infernix> as I see it, the API docs are missing the region parts?
[21:55] * sjm (~sjm@64.34.151.178) has joined #ceph
[21:57] <Kioob> sjust: Hi. If you need more information about my "ceph space problem", you can ask here too.
[21:59] * gucki (~smuxi@77-56-39-154.dclient.hispeed.ch) has joined #ceph
[22:00] * jackhill (jackhill@pilot.trilug.org) Quit (Remote host closed the connection)
[22:00] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) has joined #ceph
[22:01] <mancdaz> decede/jmlowe thanks
[22:01] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has joined #ceph
[22:01] * ChanServ sets mode +v andreask
[22:06] * via (~via@smtp2.matthewvia.info) Quit (Ping timeout: 480 seconds)
[22:06] * Vjarjadian (~IceChat77@05453253.skybroadband.com) has joined #ceph
[22:06] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:08] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[22:08] * mrprud_ (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) has joined #ceph
[22:15] * mrprud (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[22:15] * via (~via@smtp2.matthewvia.info) has joined #ceph
[22:16] * mrprud_ (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) Quit (Remote host closed the connection)
[22:16] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[22:17] * mrprud (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) has joined #ceph
[22:17] * sel (~sel@python.home.selund.se) Quit (Quit: Leaving)
[22:18] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[22:19] * jackhill (jackhill@pilot.trilug.org) has joined #ceph
[22:19] * thomnico (~thomnico@64.34.151.178) Quit (Read error: Operation timed out)
[22:20] * markbby1 (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:20] * andreask (~andreask@h081217135028.dyn.cm.kabsi.at) has left #ceph
[22:21] <diegows> hi
[22:23] * mrprud_ (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) has joined #ceph
[22:25] <MACscr> xarses: out of curiosity, why wouldn't it work pretty much the same on ubuntu and centos if you just use repos and configs?
[22:25] <MACscr> and obviously assuming the versions were the same
[22:26] * xarses (~andreww@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[22:28] * sjm (~sjm@64.34.151.178) has joined #ceph
[22:28] * mrprud (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) Quit (Read error: Operation timed out)
[22:30] <saumya> alfredodeza: ping, I am still getting this error: Cannot load config: [Errno 2] No such file or directory: 'ceph.conf', despite having ceph.conf in /etc/ceph
[22:30] * xarses (~andreww@204.11.231.50.static.etheric.net) has joined #ceph
[22:30] <alfredodeza> saumya: where are you seeing that error?
[22:31] <xarses> MACscr, there was some major refactoring of the puppet scripts from 3.0.1 to 3.1 that broke ubuntu support, so it was disabled. There was an amount of work done for 3.2 to fix these, so the puppet scripts should work again
[22:31] <saumya> alfredodeza: in the ceph.log file in /var/lib/ceph/bootstrap-osd
[22:31] <alfredodeza> can you paste that log output?
[22:31] <xarses> so yes, it should just need packages; they will be included in the iso by the end of the week
[22:32] <saumya> alfredodeza: here http://pastebin.com/BBm2P1FB
[22:32] <MACscr> xarses: ah, ok. Makes sense.
[22:32] <alfredodeza> saumya: that doesn't look like /var/lib/ceph stuff; that looks like ceph-deploy logs
[22:33] <alfredodeza> and what is happening there is that it is telling you that the ceph.conf file that ceph-deploy needs to work with doesn't exist
[22:33] <alfredodeza> you should have a ceph.conf file in the directory where ceph-deploy is being used
[22:33] <alfredodeza> when you called `ceph-deploy new {hostname}` it should've created one for you
[22:33] <saumya> alfredodeza: yes I do have it
[22:33] <MACscr> xarses: what kind of restrictions are there on how ceph is deployed with openstack
[22:34] <saumya> alfredodeza: I have ceph.conf, ceph.log and ceph.mon.keyring in the folder in which I am running the commands
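
The underlying point in this thread is that ceph-deploy reads ceph.conf from the directory it is run in, not from /etc/ceph, so every command needs to be run from the directory that `ceph-deploy new` populated. A small sketch, using a made-up directory name (my-cluster) and the saumya-Lapi host from above:

    mkdir my-cluster && cd my-cluster
    ceph-deploy new saumya-Lapi       # writes ceph.conf and ceph.mon.keyring here
    ls                                # ceph.conf  ceph.log  ceph.mon.keyring
    # run the rest of the ceph-deploy commands from this same directory
    ceph-deploy mon create saumya-Lapi
    ceph-deploy gatherkeys saumya-Lapi
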
[22:35] <xarses> MACscr, we will install mons on each controller and then you just need to add the ceph-osd role to any node you want to run OSDs on. Currently there is no hard validator, but you need to have one controller and two OSDs on separate hosts (or update the crush map)
[22:35] <alfredodeza> can you paste the complete output of what you are running and what the log says (not just the error)
[22:35] * mrprud_ (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) Quit (Remote host closed the connection)
[22:36] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[22:37] * mrprud (~mrprud@ANantes-554-1-275-192.w2-9.abo.wanadoo.fr) has joined #ceph
[22:39] <saumya> alfredodeza: http://pastebin.com/ShX7Zk2M this is the log file
[22:39] * shang (~ShangWu@64.34.151.178) Quit (Ping timeout: 480 seconds)
[22:42] * gucki (~smuxi@77-56-39-154.dclient.hispeed.ch) Quit (Remote host closed the connection)
[22:44] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[22:45] * tobru_ (~quassel@217-162-50-53.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:45] <saumya> alfredodeza: ^
[22:45] <alfredodeza> yep, looking
[22:47] <alfredodeza> saumya: do you have the actual console log? e.g. I would like to see how you called ceph-deploy
[22:47] <alfredodeza> for example, there is a --overwrite-conf flag that it seems was not used
[22:47] <alfredodeza> from what I can read in the logs
[22:48] <saumya> alfredodeza: I called the command again with --overwrite-conf, but nothing happened
[22:48] <saumya> I'll paste the logs
[22:49] <saumya> alfredodeza: http://pastebin.com/EbLrUZJc here
[22:51] <alfredodeza> saumya: if you log into saumyapc and call `hostname` what does it return?
[22:51] <saumya> alfredodeza: saumyapc
[22:52] * angdraug (~angdraug@204.11.231.50.static.etheric.net) Quit (Ping timeout: 480 seconds)
[22:52] <MACscr> xarses: grr, i really wish there were better options than 1 or 3 for ceph-mon and openstack controllers. I only have 12 nodes, so using 3 of them just for ceph mon is a bit overkill. No way to run ceph-mon on the same systems as the osd's?
[22:54] <MACscr> with fuel that is
[22:54] * mancdaz (~darren.bi@94-195-16-87.zone9.bethere.co.uk) Quit (Quit: mancdaz)
[22:54] * angdraug (~angdraug@204.11.231.50.static.etheric.net) has joined #ceph
[22:55] <wrale> MACscr: wild guess, but maybe running ceph mon and the openstack controllers in a three-node Ganeti cluster would be a way to use only three.. even libvirtd on its own would get you that far, I suppose
[22:55] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) has joined #ceph
[22:56] <xarses> MACscr, we are still working on a way to do that through the UI; if you were to add the 'ceph-mon' role to a node, it should add itself as a monitor
[22:56] <alfredodeza> saumya: is it possible that you've run ceph-deploy commands to install and deploy monitors more than once on that host?
[22:56] <alfredodeza> you might need to start over if things are highly polluted
[22:56] <alfredodeza> monitors will fail to get quorum if your keys are different
[22:57] <saumya> alfredodeza: yes, I've run the commands a couple of times, but the error was always there
[22:57] <saumya> alfredodeza: start over meaning?
[22:57] <alfredodeza> I cannot replicate your problem with a 12.04 machine from scratch
[22:57] <alfredodeza> can you purge and purgedata on that host?
[22:57] <saumya> alfredodeza: I was initially making the mistake of not having the same hostname, but even after I changed it, the error remains
[22:57] <alfredodeza> that destroys all ceph-related stuff, including disks that may have been set up
[22:58] <xarses> alternately, you can edit /etc/puppet/modules/ceph/manifests/init.pp and add class { 'ceph::mon': } to one of the other roles, in which case it should do the same thing.
[22:58] <alfredodeza> saumya: yep, but it seems that something is not right in that host and I cannot replicate your problem (if I try with a 12.04 machine from scratch)
[22:59] <alfredodeza> from scratch I mean: I run ceph-deploy new hostname, ceph-deploy install hostname, and ceph-deploy mon create hostname
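
A compact version of that from-scratch sequence, run from the cluster directory with saumya-Lapi standing in for the hostname as above; purge/purgedata remove everything ceph-related on the target, so this is deliberately destructive (the purge-before-purgedata order mirrors the reset command xarses quotes later in the log):

    ceph-deploy purge saumya-Lapi       # uninstall the ceph packages
    ceph-deploy purgedata saumya-Lapi   # remove the ceph data and config directories
    rm -f ceph.conf ceph.mon.keyring    # drop the old local files before 'new'

    ceph-deploy new saumya-Lapi
    ceph-deploy install saumya-Lapi
    ceph-deploy mon create saumya-Lapi
    ceph-deploy gatherkeys saumya-Lapi
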
[23:00] * iggy_ (~iggy@theiggy.com) Quit (Quit: No Ping reply in 180 seconds.)
[23:01] * Steki (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:02] * iggy (~iggy@theiggy.com) Quit (Remote host closed the connection)
[23:02] * MooingLemur (~troy@phx-pnap.pinchaser.com) Quit (Read error: Connection reset by peer)
[23:02] * imjustmatthew (~imjustmat@pool-173-53-100-217.rcmdva.fios.verizon.net) Quit (Read error: Connection reset by peer)
[23:02] * MooingLemur (~troy@phx-pnap.pinchaser.com) has joined #ceph
[23:02] * Meths_ (~meths@2.25.213.185) has joined #ceph
[23:03] * markl (~mark@tpsit.com) Quit (Read error: Connection reset by peer)
[23:03] * markl (~mark@tpsit.com) has joined #ceph
[23:03] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Read error: Connection reset by peer)
[23:03] * iggy (~iggy@theiggy.com) has joined #ceph
[23:03] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[23:03] * Meths (~meths@2.25.213.185) Quit (Read error: Connection reset by peer)
[23:04] <saumya> alfredodeza: that doesn't harm the ceph installation right?
[23:04] <saumya> alfredodeza: should I run ceph-deploy purgedata {hostname} [{hostname} ...] ?
[23:04] <saumya> alfredodeza: so purge or purgedata or both?
[23:04] <saumya> alfredodeza: I am doing all this on the same machine
[23:04] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[23:04] <alfredodeza> wait
[23:04] <alfredodeza> it totally does
[23:04] <alfredodeza> it blows away everything
[23:04] * imjustmatthew (~imjustmat@pool-173-53-100-217.rcmdva.fios.verizon.net) has joined #ceph
[23:04] <saumya> alfredodeza: I just did purgedata
[23:05] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Ping timeout: 480 seconds)
[23:05] * thomnico (~thomnico@207.96.227.9) has joined #ceph
[23:06] * sprachgenerator (~sprachgen@150.sub-70-208-128.myvzw.com) Quit (Quit: sprachgenerator)
[23:06] <xarses> MACscr, I've added a point to try and add ceph-mon to the role list in the UI
[23:06] <xarses> If we have time it will make 3.2
[23:07] * antoinerg (~antoine@dsl.static-187-116-74-220.electronicbox.net) Quit (Ping timeout: 480 seconds)
[23:07] * antoinerg (~antoine@dsl.static-187-116-74-220.electronicbox.net) has joined #ceph
[23:07] * iggy_ (~iggy@theiggy.com) has joined #ceph
[23:08] <MACscr> xarses: cool. Crossing my fingers =)
[23:08] <MACscr> and thanks
[23:08] * Meths_ is now known as Meths
[23:08] <saumya> alfredodeza: what do I do now? I did purgedata and then ran all the commands, part of it is here http://tny.cz/d895170d
[23:09] <alfredodeza> did you run `ceph-deploy purge {hostname}` as well?
[23:09] * mtanski (~mtanski@69.193.178.202) Quit (Ping timeout: 480 seconds)
[23:09] <saumya> alfredodeza: nope
[23:09] <alfredodeza> that host is in a very inconsistent state, try running both purge commands
[23:09] <alfredodeza> and start from scratch
[23:10] <alfredodeza> but be mindful that those commands are *very* destructive
[23:10] <alfredodeza> to everything that is ceph-related
[23:10] <saumya> alfredodeza: will that harm my ceph-deploy installation, because installing ceph-deploy was a little trouble too :-/
[23:11] * sjm (~sjm@64.34.151.178) has joined #ceph
[23:11] <xarses> saumya: ceph-deploy purgedata erases folders that ceph-deploy expects to exist
[23:11] <xarses> you have to re-install the package or create the folders by hand
[23:12] <alfredodeza> saumya: reinstalling with ceph-deploy is straightforward
[23:12] * dty (~derek@proxy00.umiacs.umd.edu) Quit (Quit: dty)
[23:13] <saumya> xarses: hence the error this time, right? So alfredodeza, if I do purge, I'll have to start from creating a cluster again, right? No harm to the ceph-deploy installation?
[23:14] <alfredodeza> saumya: purge and purgedata destroys/removes/uninstalls everything ceph-related
[23:14] <alfredodeza> if you are worried about your ceph installation you should probably try with something else :/
[23:15] <saumya> alfredodeza: It will be a problem to me only if I have to install ceph-deploy again
[23:15] <saumya> :-/
[23:15] <alfredodeza> how so?
[23:15] <alfredodeza> `ceph-deploy install {hostname}` was giving you trouble?
[23:15] <saumya> alfredodeza: nope, not that, installing ceph-deploy itself was troublesome
[23:16] <alfredodeza> I am sorry to hear that
[23:16] <alfredodeza> there are numerous ways to install ceph-deploy, how did you get it installed?
[23:16] <saumya> alfredodeza: I have been following this http://ceph.com/docs/next/start/quick-start-preflight/
[23:17] * sjm (~sjm@64.34.151.178) Quit (Remote host closed the connection)
[23:17] <xarses> saumya, with ceph-deploy 1.2.3 ceph-deploy isn't removed
[23:17] <xarses> just ceph packages
[23:17] * KevinPerks (~Adium@64.34.151.178) Quit (Quit: Leaving.)
[23:18] <saumya> alfredodeza: there were some python-pushy errors on my end, but that's done now, so np :) I am stuck with this :-/
[23:18] <xarses> MACscr, the iso is uploaded now
[23:19] <saumya> xarses: so if I run purge, I will have to just start from creating a cluster... the stuff that is mentioned here http://ceph.com/docs/next/start/quick-ceph-deploy/ or something else?
[23:20] <Tamil> saumya: please make sure when you retry, you remove ceph.conf and mon.keyring; --overwrite-conf is not required unless you have modified your ceph.conf for some reason
[23:20] <MACscr> xarses: so if I were to test it, it's going to use centos for the nodes?
[23:20] * vata (~vata@2607:fad8:4:6:1446:1d9e:1a85:3b9) Quit (Quit: Leaving.)
[23:21] <saumya> Tamil: okay, I'll try again
[23:21] <MACscr> xarses: lol, nvm, it was obvious when I saw the iso
[23:22] * The_Bishop__ (~bishop@f049184244.adsl.alicedsl.de) Quit (Quit: Who the hell is this Peer? If I ever catch him I'm going to reset his connection!)
[23:23] <saumya> Tamil: I still get the error :-/
[23:24] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Quit: www.adiirc.com - Free mIRC alternative)
[23:24] <xarses> the fuel master node will install with centos, I don't think there is any option for that
[23:25] <xarses> MACscr, after that you can select ubuntu 12.04 when you create a new cluster. But I don't know where it will get its packages from since they aren't on that iso (and would normally be sourced from there)
[23:27] <saumya> xarses: alfredodeza so I am only left with the purge option?
[23:27] * thomnico (~thomnico@207.96.227.9) Quit (Quit: Ex-Chat)
[23:27] <alfredodeza> it seems like it
[23:28] <MACscr> xarses: ha. It's at least worth looking at. I haven't messed with openstack or ceph yet
[23:28] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) has joined #ceph
[23:29] <xarses> saumya: i use this to reset after testing
[23:29] <xarses> ps axu | grep ceph | awk '//{system("kill "$2)}' && ceph-deploy purge localhost && ceph-deploy purgedata localhost && yum install -y ceph && rm ~/ceph*
[23:29] <xarses> then ceph-deploy new ....
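
A lightly annotated version of that reset recipe, kept as a sketch rather than a drop-in script: pkill stands in for the ps|grep|awk kill, and the yum line assumes a CentOS admin node as xarses is using:

    pkill ceph || true                  # stop any running ceph daemons
    ceph-deploy purge localhost         # remove the ceph packages
    ceph-deploy purgedata localhost     # remove the ceph data and config dirs
    yum install -y ceph                 # reinstall the packages
    rm -f ~/ceph*                       # drop the old ceph.conf / keyrings
    # then start again with: ceph-deploy new ...
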
[23:29] <Tamil> saumya: what is the error you are seeing now? which distro?
[23:31] * tziOm (~bjornar@ti0099a340-dhcp0395.bb.online.no) Quit (Remote host closed the connection)
[23:33] * BillK (~BillK-OFT@58-7-172-nwork.dyn.iinet.net.au) has joined #ceph
[23:36] <saumya> Tamil, xarses alfredodeza purging worked guys! Thanks a lot! :)
[23:37] <Tamil> saumya: do you have a running cluster now?
[23:38] <saumya> Tamil: nope, I have just created a cluster, added a monitor and gathered keys. I am a beginner, trying to learn how ceph works for a project :)
[23:38] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:39] <Tamil> saumya: kool
[23:39] <saumya> Tamil: thanks :)
[23:46] * mcatudal (~mcatudal@142-217-209-54.telebecinternet.net) Quit (Ping timeout: 480 seconds)
[23:48] * ScOut3R (~scout3r@91EC1DC5.catv.pool.telekom.hu) Quit (Remote host closed the connection)
[23:54] * mikedawson_ (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:56] * smiley_ (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley_)
[23:58] * sage (~sage@76.89.177.113) Quit (Ping timeout: 480 seconds)
[23:59] * jmlowe (~Adium@c-98-223-198-138.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.