#ceph IRC Log


IRC Log for 2012-07-14

Timestamps are in GMT/BST.

[0:01] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[0:15] <dmick> http://ceph.com/docs/master/rbd/rbd-ko/ seems somewhat mute on the subject of snapshots
[0:16] <elder> create_snap
[0:16] <elder> You should magically know the rest.
[0:17] <dmick> ceph-rbdnamer implies a connection, at least. but that seems to imply that image@snapname is one path component
[0:18] <dmick> NAME_MAX will certainly be not only big enough, but too big, it seems
[0:23] <elder> I think all it will do is define the size of the buffer into which stuff coming in from the wire gets copied.
[0:23] <elder> I'm sure it's plenty.
[0:23] <dmick> yeah.
[0:24] <dmick> I wonder, though, about limit-checking somewhere on the Ceph side. It would potentially be a shame to invent a snapshot that could not be accessed from the kernel rbd.
[0:25] * nhmlap (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[0:27] <elder> That's true.
[0:27] <elder> It's nice architecturally to not be limited, but where there are practical limits imposed by the environment you should try to be cognizant of them.
[0:31] <dmick> if nothing else, try to avoid *encouraging* users to hang themselves with the rope you've given them :)
[0:38] * joshd (~jdurgin@2602:306:c5db:310:1e6f:65ff:feaa:beb7) has joined #ceph
[0:42] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) Quit (Quit: getting boxed in)
[0:50] <elder> Looks to me like the comment for output is wrong for get_features() in cls_rbd.cc
[0:51] <elder> It's returning both a compatible features (le64) and an incompatible features (le64) value.
[0:52] <joshd> indeed, I'll fix that
[0:54] <elder> Is it really necessary to supply both the compatible and the incompatible features? Is that to distinguish between the features at one time supported by an rbd image, and the features the implementation serving it right now offers?
[0:55] <sagewk> elder: yay, kdb works now :)
[0:55] <elder> What was wrong?
[0:55] <elder> (I noticed it wasn't working but I thought I'd wedged my machine but good when it happened.)
[0:55] <sagewk> the kernel task was neither installing the kernel nor enabling kdb when the yaml had more than 1 entry in it
[0:55] <elder> more than one entry?
[0:55] <sagewk> now i get kdb on wedged machines instead of a black hole
[0:55] <elder> In that section of the file?
[0:56] <sagewk> kernel:
[0:56] <sagewk> branch: foo
[0:56] <sagewk> ok
[0:56] <elder> Oh.
[0:56] <sagewk> kernel:
[0:56] <sagewk> branch: foo
[0:56] <sagewk> kdb: true
[0:56] <sagewk> not ok
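
Reconstructed with indentation (IRC strips leading whitespace, so the exact layout is assumed), the two teuthology YAML fragments sagewk is contrasting are:

    # worked: a single entry under kernel
    kernel:
      branch: foo

    # did not work: more than one entry under kernel
    kernel:
      branch: foo
      kdb: true
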
[0:56] <elder> Well, I have that.
[0:56] <elder> So the kdb thing was happening.
[0:56] <sagewk> maybe you are running an old teuthology commit
[0:56] <elder> But I'm pretty sure I've been getting my kernels to install.
[0:56] <elder> Could be, but I think I updated it at least this week.
[0:56] <elder> Maybe all my testing is for naught.
[0:57] <elder> Everything seemed to be hunky-dory though.
[1:01] <dmick> it's not just another timezone, it's another reality
[1:01] <dmick> sage: powercycling 37 now
[1:01] <dmick> sorry sagewk
[1:02] <sagewk> dmick: thanks!
[1:05] <joshd> elder: incompatible features are all the features the image uses that the client needs to understand to correctly use the image - if the client doesn't support an incompatible feature, it should return an error when trying to open the image
[1:06] <dmick> so, "required"?
[1:06] <elder> Right, but the get_features method returns features and incompatible features, the latter being just the result of:
[1:06] <elder> incompatible = features & RBD_FEATURES_INCOMPATIBLE
[1:07] <elder> This method has nothing to do with what features the client supports.
[1:07] <joshd> the osd may have new features the client does not know about, so it needs to do masking on its end, and the client can then check that it supports those features
[1:08] <elder> Oh wait, I think I see what you're saying.
[1:08] <elder> If the client has a feature that's incompatible with the image, the client shouldn't support it.
[1:08] <elder> But that means we should just return RBD_FEATURES_INCOMPATIBLE, right?
[1:09] <elder> Seems to me that:
[1:09] <joshd> yeah, that would work too
[1:09] <elder> The server should say "these are the features I support"
[1:09] <elder> And also "these features, if you use them, are not compatible with this image"
[1:09] <joshd> there's get_all_features for the former
[1:10] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[1:10] <dmick> well, "features the server supports" may be different from "features this image has" which may be different from "features the client must support to be able to interpret this image successfully"
[1:10] <elder> Yes.
[1:11] <joshd> we could have a get_incompatible_features that just returns the server's RBD_FEATURES_INCOMPATIBLE though
[1:11] <elder> What the client cares about is "features supported by this image that are also supported by this server" and "features that I must not use with this image"
[1:11] <joshd> that would be a bit cleaner than doing it in get_features
[1:12] <elder> Are we blurring server features and image features though? Are they overlapping but not 100% so?
[1:12] <dmick> I think the client also cares about "features this image requires for use"
[1:12] <joshd> elder: also "features this image supports that I do not, but are backwards-compatible" and "features this image supports that I do not, but are not backwards-compatible"
[1:13] <elder> I.e., server might support some sort of quick response protocol or something, which has nothing to do with an image.
[1:14] <elder> And an image might support some magic encoding of a field that requires no server support.
[1:14] <elder> I don't know, just thinking aloud.
[1:15] <dmick> I hate feature bit discussions
[1:15] <elder> For now I just need to know what I should expect to get in response to my requests...
[1:16] <joshd> just go with the existing implementation for now (features and incompatible aka required features)
[1:17] <elder> That's what I'm doing.
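
A minimal C++ sketch of the masking being discussed (an illustration, not the actual cls_rbd.cc code; the constants' values and the client-side mask are made up):

    #include <cstdint>
    #include <stdexcept>

    // Bits marking features a client MUST understand to use an image safely.
    const uint64_t RBD_FEATURES_INCOMPATIBLE = 0x1;   // assumed value
    // Features this particular client implementation understands.
    const uint64_t CLIENT_SUPPORTED_FEATURES = 0x1;   // hypothetical

    void check_image_features(uint64_t image_features) {
        // Server side (as in get_features): mask the image's feature bits
        // down to the ones a client is required to support.
        uint64_t incompatible = image_features & RBD_FEATURES_INCOMPATIBLE;
        // Client side: any required feature the client lacks is a hard error,
        // so opening the image must fail rather than risk misusing it.
        uint64_t unsupported = incompatible & ~CLIENT_SUPPORTED_FEATURES;
        if (unsupported)
            throw std::runtime_error("image requires features this client lacks");
    }
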
[1:22] <James259> Hi Josh. The same 24 pgs are still stuck. I have copied everything off the cluster onto a portable hard disk so that I can re-initialize it. Is there a simple command I can run to clear and re-create all pgs, or should I just reformat and start again?
[1:26] <joshd> you could remove the pools and create new ones, but starting clean is safest
[1:29] * BManojlovic (~steki@212.200.241.106) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:41] <elder> joshd, am I able to create a snapshot for v2 right now?
[1:42] <elder> rbd snap create <image> <snapname>?
[1:42] <elder> (The man page is not very clear.)
[1:42] <joshd> elder: yes, it's just like a normal image
[1:42] <joshd> but it's rbd snap create --snap <snapname> <image>
[1:43] <joshd> or 'rbd snap create <image>@<snapname>'
[1:44] <joshd> elder: which userspace branch are you using? I don't want to delete it out from under you
[1:46] <elder> master now I think. Let me check.
[1:47] <elder> Yes, master
[1:47] <joshd> ok, well that's one I won't delete :)
[1:50] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:01] * sagelap (~sage@2607:f298:a:607:d942:1186:3b57:31fa) Quit (Read error: Operation timed out)
[2:13] * Tv_ (~tv@38.122.20.226) Quit (Quit: Tv_)
[2:24] * JJ1 (~JJ@12.248.40.138) Quit (Quit: Leaving.)
[2:35] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[2:37] * lofejndif (~lsqavnbok@04ZAAEIEE.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[2:38] * lofejndif (~lsqavnbok@83TAAHD8D.tor-irc.dnsbl.oftc.net) has joined #ceph
[3:13] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[3:49] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:51] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:58] * lofejndif (~lsqavnbok@83TAAHD8D.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[4:06] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[4:28] * joshd (~jdurgin@2602:306:c5db:310:1e6f:65ff:feaa:beb7) Quit (Quit: Leaving.)
[5:00] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[5:02] * loicd (~loic@magenta.dachary.org) has joined #ceph
[5:29] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[5:31] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[6:13] * renzhi (~renzhi@180.169.73.90) has joined #ceph
[6:13] <renzhi> morning
[6:41] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[6:44] <renzhi> I'm setting up a new cluster, with multiple osds per host. mkcephfs seems to finish correctly, but none of the mds/osd keyrings are generated. How come?
[6:44] <renzhi> running 0.48 on Debian wheezy
[6:45] <renzhi> the osd journal and data directory are generated though, the mon folder is too
[6:47] <dmick> renzhi: you're using -x?
[6:47] <dmick> er, sorry, I'm thinking of vstart.sh. hang on
[6:50] <renzhi> nolan, just mkcephfs, same as when I set up the testing cluster
[6:50] <renzhi> but for testing, I have only one osd per node.
[6:51] <renzhi> For this, I have 6 nodes, with 10 osds each, one per disk
[6:51] <dmick> it can be useful to try -v
[6:51] <dmick> mkcephfs is just a shell script
[6:52] <renzhi> yeah, I'm going to re-run it again, but it's annoying to go clean up all the directories :)
[6:53] <dmick> yeah
[6:53] <dmick> dsh/cssh is your friend :)
[6:54] <dmick> it's been a while since I used mkcephfs; I'm not sure if it's changed recently with respect to the keyrings
[6:54] <renzhi> we are ready to go production, and bang, got one obstacle right in the morning already :)
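
For reference, a typical mkcephfs invocation of that era with -v added as dmick suggests (flags per the 0.48-era documentation; the keyring path is illustrative):

    mkcephfs -v -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring
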
[6:57] <dmick> I need to go catch a train. Good luck!
[6:57] * dmick (~dmick@38.122.20.226) Quit (Quit: Leaving.)
[6:58] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[6:59] <renzhi> thanks
[7:40] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[8:04] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:06] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:15] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[8:21] <renzhi> ok, I made a mistake in the ceph.conf file; the keyring was generated correctly, but it just overwrote the keyring file. Now everything is created correctly, but ceph does not start
[8:21] <renzhi> running the command service ceph start
[8:21] <renzhi> and nothing happens
[8:21] <renzhi> :(
[8:21] <renzhi> no log, nothing
[8:22] <renzhi> I had setup two test clusters, everything was flawless
[8:23] <renzhi> anyone have an idea where to look?
[8:50] <renzhi> manually starting each daemon seems to work, but that's a hassle
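
One plausible cause, offered here as an assumption rather than something diagnosed in the log: the sysvinit script only starts daemons whose host entry in ceph.conf matches the local short hostname, and silently does nothing on a mismatch. A quick check, plus the -a flag to act on every configured host:

    # compare the local short hostname with the host lines in ceph.conf
    hostname -s
    grep 'host' /etc/ceph/ceph.conf
    # start the daemons on all hosts named in ceph.conf, not just this one
    service ceph -a start
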
[8:55] * widodh (~widodh@minotaur.apache.org) Quit (Read error: Connection reset by peer)
[8:55] * widodh (~widodh@minotaur.apache.org) has joined #ceph
[8:57] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:15] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:21] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[10:24] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:31] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Ping timeout: 480 seconds)
[12:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Operation timed out)
[12:09] * LarsFronius (~LarsFroni@95-91-243-243-dynip.superkabel.de) has joined #ceph
[12:11] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[12:11] * loicd (~loic@magenta.dachary.org) has joined #ceph
[12:23] * renzhi (~renzhi@180.169.73.90) Quit (Quit: Leaving)
[12:33] * widodh (~widodh@minotaur.apache.org) Quit (Read error: Operation timed out)
[12:35] * widodh (~widodh@minotaur.apache.org) has joined #ceph
[12:50] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[13:04] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[13:48] * BManojlovic (~steki@212.200.241.106) has joined #ceph
[14:04] * LarsFronius (~LarsFroni@95-91-243-243-dynip.superkabel.de) Quit (Quit: LarsFronius)
[14:12] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) has joined #ceph
[14:33] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) Quit (Quit: getting boxed in)
[14:56] * nhmlap (~Adium@65-128-158-48.mpls.qwest.net) has joined #ceph
[15:19] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) Quit (Read error: Connection reset by peer)
[15:24] * mtk (~mtk@ool-44c35bb4.dyn.optonline.net) has joined #ceph
[16:24] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[16:27] * tremon (~aschuring@d594e6a3.dsl.concepts.nl) has joined #ceph
[16:30] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[16:30] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has left #ceph
[16:43] * The_Bishop_ (~bishop@f052101193.adsl.alicedsl.de) has joined #ceph
[16:47] * The_Bishop (~bishop@e179019194.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[16:57] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[17:00] * loicd (~loic@magenta.dachary.org) has joined #ceph
[18:39] * nhmlap (~Adium@65-128-158-48.mpls.qwest.net) has left #ceph
[18:43] * BManojlovic (~steki@212.200.241.106) Quit (Ping timeout: 480 seconds)
[19:08] * brambles (brambles@79.133.200.49) Quit (Quit: leaving)
[19:08] * brambles (brambles@79.133.200.49) has joined #ceph
[19:38] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) has joined #ceph
[19:39] <ryant5000> i'm trying to set up a simple one-node ceph system using the debian repositories, and i keep getting errors like this in mkcephfs: cat: /tmp/mkcephfs.Y6Of2XaODF/key.*: No such file or directory
[19:39] <ryant5000> i'm following the steps at http://ceph.com/docs/master/start/quick-start/
[19:39] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[19:40] <ryant5000> is there some way i can fix or work around that error?
[19:42] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:50] <tremon> ryant5000: look inside the mkcephfs script. It's quite easy to do by hand, especially for one node
[19:51] <ryant5000> hm, alright
[19:51] <ryant5000> i started going through it a bit, but i wasn't quite clear on which keys needed to be where
[19:52] <ryant5000> (i've just started playing around with ceph)
[19:54] <tremon> you can probably put all keys into /etc/ceph/keyring, otherwise in the data (root) directory of the daemon
[19:57] <ryant5000> hm, alright; i'll give that a try
[19:57] <ryant5000> thanks
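
The by-hand key generation tremon describes might look like the following sketch (daemon names and data-directory paths are illustrative):

    # create an admin key, then one key per daemon; each daemon reads its
    # key from its data directory or from /etc/ceph/keyring
    ceph-authtool --create-keyring /etc/ceph/keyring --gen-key -n client.admin
    ceph-authtool --create-keyring /srv/ceph/osd0/keyring --gen-key -n osd.0
    ceph-authtool --create-keyring /srv/ceph/mds.a/keyring --gen-key -n mds.a
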
[19:57] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) Quit (Remote host closed the connection)
[20:05] * lofejndif (~lsqavnbok@04ZAAEITM.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:10] * The_Bishop_ (~bishop@f052101193.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[20:12] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:14] <tremon> any particular reason why increasing the journal size 10x would decrease performance 10x?
[20:14] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:15] <tremon> I'm running ceph-osd on a particularly underpowered arm-based nas, and I'm seeing cpu usage jump through the roof if the (btrfs, ssd) journal is larger than ~600MB
[20:15] <tremon> (under sustained write)
[20:46] * LarsFronius (~LarsFroni@95-91-243-243-dynip.superkabel.de) has joined #ceph
[20:47] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) has joined #ceph
[20:51] <tremon> to be specific, when monitoring the system with sar, %sys jumps to 100% and all other measurements (blk i/o, context switches, i/o tps) drop to zero. Could it be that the system is spending too much time syncing the journal?
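
For context, the journal under discussion is configured in ceph.conf; the values below are illustrative, sized at the ~600MB threshold tremon reports:

    [osd]
        osd journal = /mnt/ssd/osd$id/journal   ; path is illustrative
        osd journal size = 600                  ; in megabytes
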
[20:55] * brambles (brambles@79.133.200.49) Quit (Remote host closed the connection)
[20:56] <ryant5000> when i run ceph-osd -i 0 --mkfs, it takes a few seconds, and then nothing seems to happen
[20:56] <ryant5000> nothing shows up in /var/lib/ceph/osd/ceph-0
[20:57] <ryant5000> that directory is a symlink to a btrfs mount at /mnt/ceph-0
[20:57] <ryant5000> (not sure if that matters)
[21:02] <ryant5000> also, it seems like ceph status just hangs forever
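
A hedged checklist for this symptom (these steps are general suggestions, not from the log): --mkfs needs a reachable monitor, and with cephx it is usually paired with --mkkey; a 'ceph status' that hangs forever likewise points at the monitors:

    # re-run mkfs, also generating the osd's cephx key
    ceph-osd -i 0 --mkfs --mkkey
    # look for the real error in the daemon logs (path may vary by version)
    tail /var/log/ceph/*.log
    # ceph status blocks until a monitor answers, so confirm ceph-mon is running
    ps aux | grep ceph-mon
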
[21:51] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) Quit (Remote host closed the connection)
[21:56] * BManojlovic (~steki@212.200.241.106) has joined #ceph
[21:56] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) has joined #ceph
[21:57] <ryant5000> so, i've managed to get my ceph cluster up, i think, but all the PGs seem to be stuck creating
[21:57] <ryant5000> when i run ceph health detail, i get a ton of lines that look like this: pg 0.0 is stuck creating, last acting []
[21:57] <ryant5000> from what i can see, the last acting list should not be empty
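
A general first check for pgs stuck in creating with an empty acting set (a suggestion, not advice given in the log): confirm some OSDs are actually up and in, since CRUSH cannot map a placement group onto an empty set of usable OSDs:

    # show the OSD hierarchy with up/in state
    ceph osd tree
    # dump per-pg state, filtered to the stuck ones
    ceph pg dump | grep creating
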
[22:12] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:13] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[22:20] <ryant5000> do the ceph extended attributes show up in the fuse client?
[22:20] <ryant5000> i'm running getfattr -d and not getting anything
[22:20] <ryant5000> *getfattr -d .
[22:22] * MarkDude (~MT@67.23.204.5) has joined #ceph
[23:07] <ryant5000> huh; everything seems to be working now, but the upstart script doesn't seem to do anything at all
[23:16] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[23:17] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.