#ceph IRC Log

IRC Log for 2011-11-14

Timestamps are in GMT/BST.

[0:12] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) has joined #ceph
[0:12] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) Quit ()
[0:19] * aa (~aa@r190-135-225-242.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[0:38] * gregorg (~Greg@78.155.152.6) has joined #ceph
[0:38] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[1:22] <grape> Does anyone have any tips on setting up the ssh connections between the machines in the cluster? Looking at the docs here http://ceph.newdream.net/docs/latest/ops/install/mkcephfs/#ssh-config
[1:24] <grape> Is this just for the setup of the cluster or for routine operations?
[1:39] <grape> all it was missing was a beer. that was easy :-)
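
mkcephfs drives the other nodes over SSH as root, so the setup in those docs boils down to passphrase-less key authentication from the admin host to every node. A minimal sketch, assuming root access and made-up host names:

    # on the admin host: generate a key with no passphrase
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    # push the public key to each node in the cluster
    ssh-copy-id root@node1
    ssh-copy-id root@node2

As for grape's follow-up: the daemons themselves talk over TCP, not SSH, so the keys matter only for setup and for SSH-driven helpers like the init script's -a/--allhosts mode.
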
[1:55] * Nightdog (~karl@190.84-48-62.nextgentel.com) Quit (Read error: Connection reset by peer)
[1:55] * Nightdog (~karl@190.84-48-62.nextgentel.com) has joined #ceph
[2:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:51] * Nightdog (~karl@190.84-48-62.nextgentel.com) Quit (Read error: Connection reset by peer)
[3:52] * Nightdog (~karl@190.84-48-62.nextgentel.com) has joined #ceph
[3:53] * aa (~aa@r190-135-146-140.dialup.adsl.anteldata.net.uy) has joined #ceph
[3:56] * Nightdog (~karl@190.84-48-62.nextgentel.com) Quit (Remote host closed the connection)
[4:30] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:09] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) has joined #ceph
[5:10] * cp (~cp@c-98-234-218-251.hsd1.ca.comcast.net) Quit ()
[5:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[8:47] * Nightdog (~karl@190.84-48-62.nextgentel.com) has joined #ceph
[10:39] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[10:43] <psomas> Can you modify a live cluster conf so that it uses cluster/public_addr for osd-osd/osd-client communication?
[10:44] <psomas> I tried it a couple of days ago, and apparently there was some kind of loop where osds were repeatedly marked as down, then rejoined the cluster, etc etc
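
For reference, the kind of ceph.conf change being discussed: per-daemon address options that split OSD traffic across two networks. A sketch with made-up addresses (gregaf's caution later in the log about changing one OSD at a time applies):

    [osd.0]
        host = node0
        public addr = 192.168.1.10   ; client-facing network
        cluster addr = 10.0.0.10     ; OSD-to-OSD replication network
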
[10:47] * morse (~morse@supercomputing.univpm.it) Quit (Ping timeout: 480 seconds)
[10:49] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) Quit (Quit: Ex-Chat)
[10:53] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[11:28] * stefanha (~stefanha@yuzuki.vmsplice.net) has left #ceph
[11:49] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[11:49] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[12:49] * MK_FG (~MK_FG@188.226.51.71) Quit (Ping timeout: 480 seconds)
[13:34] <chaos__> hello ;-)
[13:36] <chaos__> i want to contribute some code to the ceph wiki: how to connect to perf sockets using something besides c/c++, and maybe a munin plugin which uses the ceph perf socket
[13:36] <chaos__> how can i do that?
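
The perf (admin) socket is just a UNIX domain socket, so any language can read it. The wire protocol has varied across Ceph versions; the sketch below speaks the newer JSON-command dialect (send a null-terminated JSON command, read a 4-byte big-endian length, then that many bytes of JSON), and the socket path is an assumption:

    #!/usr/bin/env python
    import json, socket, struct

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect('/var/run/ceph/ceph-osd.0.asok')  # assumed path
    sock.sendall(b'{"prefix": "perf dump"}\0')
    # reply: 4-byte big-endian payload length, then the JSON payload
    length = struct.unpack('>I', sock.recv(4))[0]
    payload = b''
    while len(payload) < length:
        payload += sock.recv(length - len(payload))
    sock.close()
    print(json.dumps(json.loads(payload), indent=2))

A munin plugin would wrap something like this and emit the counters it cares about in munin's key.value format.
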
[13:52] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) has joined #ceph
[14:55] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) Quit (Remote host closed the connection)
[14:56] * MK_FG (~MK_FG@219.91-157-90.telenet.ru) has joined #ceph
[15:21] * mgalkiewicz (~maciej.ga@85.89.186.247) has joined #ceph
[15:24] <mgalkiewicz> after enabling cephx only the first mon is able to use the ceph tool to get statistics etc.
[15:29] <mgalkiewicz> How do I allow other machines to do this? I have changed caps (ceph auth) but I still can't execute ceph -s. It hangs.
[15:43] * aa (~aa@r190-135-146-140.dialup.adsl.anteldata.net.uy) Quit (Remote host closed the connection)
[16:59] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) has joined #ceph
[17:02] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) Quit ()
[17:02] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) has joined #ceph
[17:08] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[18:12] * adjohn (~adjohn@70-36-139-211.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[18:28] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[18:33] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[18:49] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:57] * mtk (~mtk@ool-44c35967.dyn.optonline.net) has joined #ceph
[19:25] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[19:49] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:50] * Hugh (~hughmacdo@soho-94-143-249-50.sohonet.co.uk) has joined #ceph
[20:02] <gregaf> mgalkiewicz: if you didn't set up your cluster with cephx initially you're going to need to distribute the appropriate keys to your cluster
[20:02] <gregaf> I think this is documented in the wiki
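
In practice that means copying the client.admin keyring (or a keyring for a dedicated client) to the other machines. A sketch, with paths that depend on how the cluster was set up:

    # on the first mon, where the admin keyring lives (path is an assumption)
    scp /etc/ceph/keyring.admin node1:/etc/ceph/keyring.admin
    # on node1, point the ceph tool at the keyring
    ceph -k /etc/ceph/keyring.admin -c /etc/ceph/ceph.conf -s

A hanging ceph -s is the usual symptom of the tool authenticating with a key the monitors don't recognize.
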
[20:02] <gregaf> chaos_: you should be able to edit the wiki just by registering :)
[20:05] <gregaf> psomas: switching communications like that is going to be tricky; though I think you should be able to do it, you probably only want to do one OSD at a time or you'll set off a recovery storm (not sufficiently tested/optimized right now)
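
A rolling change along the lines gregaf suggests, using the init script of that era (daemon ids are made up):

    for id in 0 1 2; do
        /etc/init.d/ceph restart osd.$id
        ceph -s   # check: wait for all PGs active+clean before the next OSD
    done
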
[20:45] <stingray> why can a pg be in "down+peering"?
[20:49] * sagelap (~sage@wireless-wscc-users-2930.sc11.org) has joined #ceph
[20:49] <gregaf> stingray: peering is when the "primary" OSD for that PG is trying to talk to everybody who has data for it, and data in that PG is temporarily inaccessible
[20:50] <stingray> interesting
[20:50] <stingray> all osds are there
[20:50] <stingray> but it's still down+peering
[20:50] <gregaf> I don't remember what "down" is (sjust might)
[20:50] <gregaf> but it might just mean that nobody's working on it yet?
[20:50] <stingray> I kicked both osds and monitor
[20:50] <stingray> but nothing changed
[20:51] <stingray> I still have it as peering+down
[20:51] <sjust> stingray: iirc, it means that for some prior interval, no osd survived
[20:51] <sjust> so it is waiting for at least one of those osds to come back up
[20:52] <todin> is the output of ceph pg dump documented somewhere? what it all means?
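
A sketch of the usual way to poke at PG states from the CLI while the docs catch up:

    ceph pg stat              # one-line summary of PG state counts
    ceph pg dump              # full table: one row per PG with state and acting OSDs
    ceph pg dump | grep down  # filter for the stuck ones, as in stingray's case
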
[20:54] <stingray> sjust: so now all of them are up
[20:54] <stingray> sjust: how long is it going to wait?
[20:56] <stingray> ?
[21:09] <stingray> no matter what I do it's stuck in down+peering
[21:09] * stingray headdesk
[21:15] * verwilst (~verwilst@dD57690CA.access.telenet.be) has joined #ceph
[21:28] * sagelap (~sage@wireless-wscc-users-2930.sc11.org) Quit (Quit: Leaving.)
[21:28] <stingray> okay, so, this was because one osd was out
[21:28] <stingray> although I have 3 replicas, it doesn't help
[21:29] <sjust> stingray: odd, it should resume peering now
[21:29] <stingray> as I brought up that osd, it did
[21:30] <sjust> it's not still down+peering?
[21:30] <stingray> it went from down+peering to peering to clean
[21:32] <stingray> that one osd was refusing to start because it tried to create directories in current/
[21:32] <stingray> that already exist
[21:32] <stingray> so it was failing with error -17 file already exists not handled
[21:32] <stingray> so after rmdir *_head
[21:32] <stingray> it started working
[21:33] <stingray> and then, from down+peering it went to clean, and now it's 1 peering but it is cleaning up extra replicas I believe
[21:45] * sagelap (~sage@wireless-wscc-users-2930.sc11.org) has joined #ceph
[21:53] * conner (~conner@leo.tuc.noao.edu) Quit (Remote host closed the connection)
[22:32] <grape> are there any differences in installation procedures if I am only going to be using block storage? (before I learn the hard way)
[22:33] <grape> also, where do I define the mount points for the storage volumes?
[22:35] <wido> I'm getting a "Read failed" from Qemu when booting a just-imported VM. This seems to be a Qemu message, any ideas?
[22:35] <wido> I exported a RBD image from one cluster and imported it on a second, both import and export completed successfully
[22:37] <todin> wido: you could try and mount the rbd image as a block device and see if everything is ok with the image
[22:37] <wido> Yep, indeed an error from Qemu, while reading the boot sector
[22:38] <wido> todin: Could give that a try indeed
[22:38] <joshd> wido: you could also try exporting from the second cluster to make sure it's readable (easier to debug than qemu)
[22:39] <gregaf> grape: should all be the same, although you won't need to do stuff like define an MDS
[22:39] <grape> regarding my second question, is osd data = /srv/ceph/osd$id still accurate?
[22:40] <grape> gregaf: ah yes, MDS is what I was forgetting... I knew there was something
[22:41] <wido> hmm, I think I see it. /dev/rbd0 is 3G big, while the disk had a virtual size of 100G before the export
[22:41] <wido> there was only 3G of data on it, but somehow the image got corrupted during the export
[22:42] <gregaf> grape: yep, OSD data specification hasn't changed
[22:45] <todin> wido: maybe the image got corrupted in transport from one cluster to the other? did you do a checksum of the image on both sides?
[22:45] <grape> gregaf: thanks
[22:46] <wido> todin: No, I didn't, so that could be the problem. Should have looked at that first
[22:46] <wido> anyway, it's just a test cluster, so no problem
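
The check todin suggested, sketched end to end (pool and image names are made up):

    # on the source cluster
    rbd export rbd/vmdisk /tmp/vmdisk.img
    md5sum /tmp/vmdisk.img
    scp /tmp/vmdisk.img otherhost:/tmp/
    # on the destination cluster: verify the checksum matches, then import
    md5sum /tmp/vmdisk.img
    rbd import /tmp/vmdisk.img rbd/vmdisk

Comparing checksums before importing pins down whether corruption happened in export, transport, or import.
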
[22:46] <grape> gregaf: should the MDS simply be omitted from the conf file prior to setup?
[22:47] <gregaf> yep
[22:47] <grape> gregaf: sweet! less to break ;-)
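
Pulling the thread together, a minimal ceph.conf for an RBD-only cluster is the usual one with the [mds] sections simply left out. A sketch with made-up hosts and addresses:

    [global]
        auth supported = cephx
    [mon]
        mon data = /srv/ceph/mon$id
    [mon.a]
        host = node0
        mon addr = 192.168.1.10:6789
    [osd]
        osd data = /srv/ceph/osd$id
        osd journal = /srv/ceph/osd$id/journal
    [osd.0]
        host = node0
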
[22:58] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:07] * adjohn is now known as Guest17054
[23:07] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[23:09] <yehudasa_> Tv: I'm missing permissions on that tree (can't push)
[23:10] <Tv> yehudasa_: oh sorry didn't realize the defaults weren't good enough
[23:10] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[23:10] <Tv> yehudasa_: try again
[23:11] <yehudasa_> Tv: working, thanks!
[23:11] * Guest17054 (~adjohn@208.90.214.43) Quit (Ping timeout: 480 seconds)
[23:20] * adjohn is now known as Guest17056
[23:20] * adjohn (~adjohn@m870536d0.tmodns.net) has joined #ceph
[23:22] <grape> do line comments in ceph.conf use a semi-colon or a #?
[23:23] <stingray> shall I try to have some fun and run ceph daemons as non-root?
[23:25] <stingray> FIEMAP needs CAP_SYS_RAWIO
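
If anyone tries it: file capabilities can grant just that one capability instead of full root. A sketch, with the binary path and name as assumptions (the OSD daemon was called cosd in this era):

    setcap cap_sys_rawio+ep /usr/bin/cosd
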
[23:27] * Guest17056 (~adjohn@208.90.214.43) Quit (Ping timeout: 480 seconds)
[23:27] <yehudasa_> grape: both
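
So either style works; a trivial illustration:

    ; this is a comment
    # so is this
    [osd]
        osd data = /srv/ceph/osd$id
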
[23:32] * verwilst (~verwilst@dD57690CA.access.telenet.be) Quit (Quit: Ex-Chat)
[23:36] <grape> yehudasa_: thanks
[23:38] * sagelap (~sage@wireless-wscc-users-2930.sc11.org) Quit (Ping timeout: 480 seconds)
[23:46] <grape> just to make sure I get it straight, I partition the drive for the osd data partition, and ceph takes care of mounting & formatting it?
[23:47] <Tv> grape: if you choose to use it like that
[23:47] <grape> Tv: Is that a sane choice?
[23:47] <Tv> though i might have my eye on removing that feature too ;)
[23:47] * Tv is big on taking features out of ceph core
[23:47] <grape> Tv: the less there is for me to break the better
[23:48] <Tv> yeah, exactly
[23:48] <Tv> and we don't want to start mkfs'ing all the possible fs'es, etc
[23:50] <Tv> also, mkcephfs won't handle replacing faulty hardware etc anyway
[23:50] <grape> Tv: so if I just go ahead and set it up and give it the proper directives, it will still work and my wife won't have to listen to me carrying on this evening ;-)
[23:50] <grape> Tv: Yeah, the hardware is my job. I'm not giving it up.
[23:56] <grape> Tv: so "btrfs devs = /dev/sda" can simply turn into "osd data = /srv/ceph/osd.$id & osd journal = /srv/ceph/osd.$id.journal"
[23:56] <Tv> grape: you'll need osd data & osd journal anyway i recall
[23:57] <grape> Tv: so do I put it in /etc/fstab or does ceph do the mounting?
[23:57] <Tv> btrfs devs means two things 1) mkfs.btrfs call in mkcephfs 2) mount/umount in ceph init script
[23:58] <grape> Tv: so should it be more specific and include the partition?
[23:58] <Tv> grape: unless you like to wipe the whole disk ;)
[23:58] <grape> lol
[23:58] <Tv> grape: personally, i think you're better off not using it
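
The hands-on alternative Tv is recommending, sketched: format and mount the OSD filesystem yourself, keep btrfs devs out of ceph.conf, and let the OS do the mounting. Device and mount point are made up:

    # once, by hand: format the partition (not the whole disk)
    mkfs.btrfs /dev/sda1
    # /etc/fstab entry so the OS mounts it at boot
    /dev/sda1  /srv/ceph/osd0  btrfs  defaults,noatime  0 0

ceph.conf then only needs osd data / osd journal pointing at the mounted path.
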

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.