#ceph IRC Log

Index

IRC Log for 2011-12-16

Timestamps are in GMT/BST.

[0:02] * sjustlaptop (~sam@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[0:07] * fronlius (~fronlius@g231139059.adsl.alicedsl.de) Quit (Quit: fronlius)
[0:25] * adjohn (~adjohn@208.90.214.43) Quit (Remote host closed the connection)
[0:25] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[0:41] * gregaf1 (~Adium@aon.hq.newdream.net) has joined #ceph
[0:48] * gregaf (~Adium@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[1:24] * aa (~aa@r190-135-25-156.dialup.adsl.anteldata.net.uy) Quit (Quit: Konversation terminated!)
[1:25] * aa (~aa@r190-135-25-156.dialup.adsl.anteldata.net.uy) has joined #ceph
[1:30] * andresambrois (~aa@r190-64-64-78.dialup.adsl.anteldata.net.uy) has joined #ceph
[1:30] * adjohn is now known as Guest20553
[1:30] * _adjohn (~adjohn@208.90.214.43) has joined #ceph
[1:30] * _adjohn is now known as adjohn
[1:34] * aa (~aa@r190-135-25-156.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[1:37] * Guest20553 (~adjohn@208.90.214.43) Quit (Ping timeout: 480 seconds)
[1:44] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[1:52] * cp (~cp@76-220-17-197.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[2:13] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:31] * sjustlaptop (~sam@96-41-121-194.dhcp.mtpk.ca.charter.com) has joined #ceph
[2:42] * aliguori_ (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[2:48] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:51] * sjustlaptop (~sam@96-41-121-194.dhcp.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[3:25] * adjohn (~adjohn@208.90.214.43) Quit (Remote host closed the connection)
[3:26] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[3:26] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:31] * cp (~cp@76-220-17-197.lightspeed.sntcca.sbcglobal.net) Quit (Quit: cp)
[3:45] * aliguori_ (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:21] * gregaf1 (~Adium@aon.hq.newdream.net) Quit (Read error: Connection reset by peer)
[4:21] * gregaf (~Adium@aon.hq.newdream.net) has joined #ceph
[5:04] * adjohn (~adjohn@208.90.214.43) Quit (Quit: adjohn)
[5:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:31] * Ludo (~Ludo@88-191-129-65.rev.dedibox.fr) has joined #ceph
[7:47] * votz (~votz@pool-108-52-122-97.phlapa.fios.verizon.net) Quit (Remote host closed the connection)
[8:14] <wonko_be> wido: thx
[8:20] * verwilst (~verwilst@d51A5B517.access.telenet.be) has joined #ceph
[8:54] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) has joined #ceph
[8:56] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[8:57] * fronlius (~fronlius@testing78.jimdo-server.com) has left #ceph
[9:19] <wido> np wonko_be
[9:54] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[11:03] * NightDog (~karl@52.84-48-58.nextgentel.com) Quit (Quit: This computer has gone to sleep)
[11:04] * cp (~cp@adsl-75-6-253-220.dsl.pltn13.sbcglobal.net) has joined #ceph
[11:04] * cp (~cp@adsl-75-6-253-220.dsl.pltn13.sbcglobal.net) Quit ()
[11:33] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[11:42] * fronlius_ (~fronlius@testing78.jimdo-server.com) has joined #ceph
[11:42] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[11:42] * fronlius_ is now known as fronlius
[11:52] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[11:52] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[12:42] * verwilst (~verwilst@d51A5B517.access.telenet.be) Quit (Quit: Ex-Chat)
[13:11] * `gregorg` (~Greg@78.155.152.6) Quit (Quit: Quitte)
[13:30] * aa__ (~aa@r190-135-26-243.dialup.adsl.anteldata.net.uy) has joined #ceph
[13:34] * andresambrois (~aa@r190-64-64-78.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[14:00] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) Quit (Remote host closed the connection)
[14:00] * tjikkun (~tjikkun@2001:7b8:356:0:225:22ff:fed2:9f1f) has joined #ceph
[14:20] * fronlius_ (~fronlius@testing78.jimdo-server.com) has joined #ceph
[14:20] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[14:20] * fronlius_ is now known as fronlius
[14:27] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) Quit (Read error: Connection reset by peer)
[14:28] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:38] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[15:04] * aa__ (~aa@r190-135-26-243.dialup.adsl.anteldata.net.uy) Quit (Ping timeout: 480 seconds)
[15:07] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[15:07] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[15:24] * aa (~aa@r190-135-26-243.dialup.adsl.anteldata.net.uy) has joined #ceph
[15:54] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) Quit (Quit: Leaving)
[16:02] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[16:25] * julienhuang (~julienhua@77130.cuirdandy.com) has joined #ceph
[16:34] * yiH (~rh@83.217.113.221) has joined #ceph
[16:35] <yiH> hi, I'm testing ceph 0.39
[16:35] <yiH> 3 mon, 2 mds, 3 osd
[16:36] <yiH> (on three machines). when one of the machines is _down_ it often freezes the whole FS.
[16:37] <yiH> now one of the osd-s was down, and all the processes trying to access the fs were frozen although both mds were up, and each file is replicated in at least two places
[16:38] <yiH> yesterday I unplugged the power cord from one of the mds and ceph kept waiting for it to come back, and didn't switch to the other mds
[16:42] <wido> yiH: what did 'ceph -s' show you?
[16:42] <wido> did it go into a degraded state?
[17:00] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[17:02] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[17:10] * fronlius_ (~fronlius@testing78.jimdo-server.com) has joined #ceph
[17:10] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Read error: Connection reset by peer)
[17:10] * fronlius_ is now known as fronlius
[17:12] <yiH> the first hit for that term shows: "replay(laggy or crashed)"
[17:12] <yiH> I think that's what I've seen
[17:14] <yiH> but let me reproduce it (will take some time, bloody sysengineers messing with sg.. :/ )
[17:14] <yiH> do you need anything else besides `ceph -w`?
[17:15] <wido> yiH: you should see something like: osd e340: 40 osds: 40 up, 40 in
[17:15] <wido> the line starting with "pg", like: pg v1170309: 7808 pgs: 7807 active+clean
[17:30] <todin> wido: I don't have the scrubbing stall like you, but I also have only a tenth of the pgs, so perhaps I'm less likely to run into it
[17:41] <wido> todin: Hmm, weird. Currently it is blocking my RBD VM's which are running
[17:42] <wido> like I said, it takes some time, but eventually a PG will block
[17:42] <wido> I could crank up the logs now, but I don't think it will reveal why the PG is blocking; there's no information leading up to the event
[17:45] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[17:47] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:59] * mgalkiewicz (~maciej.ga@staticline18746.toya.net.pl) has joined #ceph
[18:01] <mgalkiewicz> Hello I have a problem with adding new monitors. I tried to follow instructions from https://github.com/NewDreamNetwork/ceph/commit/7178f1caa8cf7c6f65f8d72a28718da417426af8
[18:01] <mgalkiewicz> I have enabled cephx and the commands are not working as expected
[18:02] <mgalkiewicz> Can anyone help me with extending the cluster?
[18:02] * julienhuang (~julienhua@77130.cuirdandy.com) Quit (Quit: julienhuang)
[18:07] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) has joined #ceph
[18:11] * Tv (~Tv|work@aon.hq.newdream.net) has joined #ceph
[18:24] * adjohn (~adjohn@70-36-197-80.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[18:31] * mgalkiewicz (~maciej.ga@staticline18746.toya.net.pl) Quit (Ping timeout: 480 seconds)
[18:41] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:41] * mgalkiewicz (~maciej.ga@staticline18746.toya.net.pl) has joined #ceph
[19:01] <wido> mgalkiewicz: what is not working? At what step do you get stuck?
[19:04] * sagelap (~sage@adsl-99-107-84-120.dsl.pltn13.sbcglobal.net) has joined #ceph
[19:05] <wido> The docs for the RADOS gw seem to be a bit outdated
[19:06] <wido> I'd volunteer to get them up to date, but is there a sample virtual host somewhere for fcgid?
[19:06] <sagelap> wido: wiki or ceph.git docs?
[19:06] <wido> sagelap: the manpage of radosgw, so I checked /docs online
[19:06] <wido> I keep hitting "radosgw: must specify 'rgw socket path' to run as a daemon"
[19:06] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[19:07] <wido> http://ceph.newdream.net/docs/latest/man/8/radosgw/?highlight=radosgw
[19:07] <wido> IfModule mod_fcgid.c and FastCgiExternalServer
[19:07] <wido> those conflict
[19:08] <wido> I'm not trying to run the radosgw multithreaded
[19:08] <sagelap> that's the best way to run it.. :)
[19:09] <wido> Ok, no problem, but I'm currently in the dark to get it up and running
[19:09] <sagelap> i'll open a bug to update the man page. there are a few things that should go in the conf too.
[19:10] <wido> Simply adding --rgw-socket-path in s3gw.fcgi won't do the job, that will keep spawning processes, but there is no "connection" back to Apache
[19:10] <mgalkiewicz> wido: first step: ceph mon getmap -o /tmp/monmap
[19:10] <sagelap> wido: the short answer is
[19:10] <sagelap> FastCgiExternalServer /var/www/dummyradosgw.fcgi -socket /var/run/ceph/radosgw.client.radosgw.peon1435
[19:10] <wido> Like I said, I'd volunteer to update the docs, I just need an example (pastebin) virtualhost
[19:10] <sagelap> to match the socket path for radosgw. (and you are responsible for starting it via /etc/init.d/radosgw or whatever)
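Pulling sagelap's fragments together, a minimal mod_fastcgi virtual host for radosgw might look roughly like the sketch below. The server name, document root, stub script name, and socket path are illustrative guesses, not a verified config; the `-socket` path has to match whatever radosgw was started with.

```apache
<VirtualHost *:80>
    ServerName rgw.example.com
    DocumentRoot /var/www

    # Apache does not spawn radosgw; it connects to the unix socket that
    # the separately started radosgw daemon (e.g. via /etc/init.d/radosgw)
    # listens on. The -socket path must match 'rgw socket path' in ceph.conf.
    FastCgiExternalServer /var/www/s3gw.fcgi -socket /var/run/ceph/radosgw.client.radosgw.peon1435

    RewriteEngine On
    # Send every request to the fcgi stub, preserving the Authorization
    # header that S3-style clients sign requests with.
    RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]

    <Directory /var/www>
        Options +ExecCGI
        AllowOverride All
    </Directory>
</VirtualHost>
```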
[19:10] <mgalkiewicz> how am I supposed to getmap from an unauthorized machine?
[19:11] <wido> sagelap: But that is if you use mod_fastcgi
[19:11] <wido> mgalkiewicz: dump the map on a machine which is authorized and scp it there
[19:11] <wido> sagelap: the example vhost shows mod_fcgid and mod_fastcgi mixed up
[19:11] <sagelap> wido: oh, right. not sure where things stand with mod_fcgid... yehudasa_?
[19:11] <sagelap> k
[19:12] <mgalkiewicz> ok but when I execute step three ceph complains that my new machine is not in monmap
[19:12] <mgalkiewicz> and what ceph auth export mon. -o /tmp/monkey is for?
[19:13] <mgalkiewicz> why do I need keys at all if cephx is disabled (I assume that this documentation is for ceph without authentication)?
[19:13] <mgalkiewicz> and it is weird that I can export the key from new machine
[19:14] <wido> mgalkiewicz: you need to add the new machine first to the monmap
[19:14] <wido> give me a sec
[19:14] <mgalkiewicz> ok
[19:14] <mgalkiewicz> brb
[19:16] <wido> mgalkiewicz: I actually think that doc misses a piece, see: http://ceph.newdream.net/wiki/Monitor_cluster_expansion
[19:16] <wido> you need to add the monitor to the cluster, then dump the monmap
[19:16] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[19:17] <sagelap> wido: btw that wiki page is somewhat obsolete now.. there is an update process in the new doc tree
[19:18] <darkfader> sagelap: sayyyy.... about those beta support contracts.... any date on the horizon for them yet?
[19:18] <sagelap> http://ceph.newdream.net/docs/latest/ops/manage/grow/mon/
[19:19] <sagelap> darkfader: the person to talk to about that is bryan@dreamhost.com. he'll be interested in talking about exactly what you're after, and when we can provide it
[19:19] <darkfader> ok
[19:20] <darkfader> thanks
[19:20] * adjohn (~adjohn@208.90.214.43) Quit ()
[19:21] <darkfader> last weekend i spent many hours doing a full backup for a md raid with truncated lvm config. and at some point i figured... where's the difference... ceph is one well designed layer that is experimental, vs. 2-3 layers that are "Stable" yet badly designed
[19:22] <wido> sagelap: I know the Wiki is outdated, but in this case it showed the missing step
[19:22] <wido> outdated = obsolete
[19:22] <sagelap> ah
[19:23] <sagelap> hmm. lemme check
[19:25] <wido> imho the step of adding the new monitor to the cluster is missing
[19:25] <mgalkiewicz> wido: ok so I should add monitor to monmap and then follow steps from wiki you have pointed?
[19:25] <wido> mgalkiewicz: Yes, add the new monitor, then dump the monmap and scp/rsync/copy it to the new monitor
[19:25] <wido> otherwise the monitor will start, but won't find itself in the monmap
[19:26] <mgalkiewicz> ok so the second step which exports mon. key is also invalid?
[19:26] <sagelap> mgalkiewicz,wido: the new process should work. let me test to verify.
[19:27] <sagelap> the main point of the new code is to avoid the cumbersome rsync and manual monmap manipulation. testing..
[19:27] <mgalkiewicz> sagelap: notice that I am using cephx
[19:27] <sagelap> yeah
[19:27] <mgalkiewicz> :)
[19:27] <wido> sagelap: Ah, with '/tmp/monkey' the monitor should be able to add himself?
[19:27] <sagelap> yeah
[19:28] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:28] <sagelap> as long as it can find at least one existing mon in quorum and has the secret key and matching fsid
[19:29] <mgalkiewicz> so the first step should be about getting secret key
[19:32] <sagelap> mgalkiewicz: just tested the process in the docs and it worked for me..
[19:32] <mgalkiewicz> ok so how did your provide secret key to new mon?
[19:32] <sagelap> ceph auth export mon. -o /tmp/monkey
[19:32] <sagelap> and pass --keyring /tmp/monkey when you ceph-mon --mkfs
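Collected from the exchange above, the whole sequence for bringing up an extra monitor with cephx might look like this sketch. The hostname `newhost`, the monitor id `newname`, and the temp paths are illustrative; run the first commands on a machine where the `ceph` tool already works, the rest on the new monitor host.

```shell
# on a machine already authorized to talk to the monitors:
ceph mon getmap -o /tmp/monmap
ceph auth export mon. -o /tmp/monkey
scp /tmp/monmap /tmp/monkey newhost:/tmp/

# on the new monitor host:
ceph-mon -i newname --mkfs --monmap /tmp/monmap --keyring /tmp/monkey
ceph-mon -i newname --public-addr <ip of newhost>
```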
[19:33] <mgalkiewicz> executed on which machine?
[19:33] <sagelap> the export on some machine that's allowed to talk to the existing monitors (i.e. the ceph command works)
[19:33] <mgalkiewicz> and there is a wrong monmap file name on wiki
[19:33] <sagelap> may need to copy it to the target machine
[19:34] <mgalkiewicz> ok it is not clear from this doc
[19:34] <mgalkiewicz> but logical
[19:34] <sagelap> i'll make a note to clarify that.
[19:35] <sagelap> in reality, you should probably copy the /etc/ceph/keyring with the client.admin key to the machine where the new monitor will run too, so that the 'ceph' command will work there as well
[19:35] <mgalkiewicz> ok and the first command ceph mon getmap -o /tmp/monmap was executed where?
[19:35] <sagelap> i replaced the wiki page with a link to the new docs.
[19:35] <sagelap> same.. somewhere where the 'ceph' command already works
[19:35] <mgalkiewicz> ok I did sth like this
[19:35] <sagelap> (ceph.conf and keyring in /etc/ceph)
[19:36] <mgalkiewicz> and in the third step I got complaints that my new machine is not in monmap
[19:36] <mgalkiewicz> I assume that I have to execute the third command on the new machine, right?
[19:37] <sagelap> yeah
[19:38] <sagelap> the ceph-mon --mkfs call you mean?
[19:38] <mgalkiewicz> ceph-mon -i newname --mkfs --monmap /tmp/foo --keyring /tmp/monkey
[19:38] <sagelap> oh, that should be /tmp/monmap not /tmp/foo?
[19:38] <mgalkiewicz> yes there is a mistake on wiki
[19:39] <mgalkiewicz> and when I should add new mon to ceph.conf?
[19:40] <sagelap> when you're done
[19:40] <mgalkiewicz> can I do this at the beginning?
[19:40] <sagelap> yeah. it doesn't really matter when
[19:41] <mgalkiewicz> my questions might be weird but I am trying to automate the whole process with chef
[19:41] <sagelap> great
[19:41] <mgalkiewicz> monmap changes only during cluster expansion?
[19:42] <sagelap> are you working against our cookbooks on github?
[19:42] <sagelap> fwiw, our plan was to take a slightly different process for chef
[19:42] <mgalkiewicz> I havent seen your cookbooks
[19:42] <mgalkiewicz> are they available on github?
[19:42] <sagelap> basically, you would declare a priori in the environment or something what your monitor machines were going to be, and that would translate into a 'mon host = host1, host2, host3' line in [global] of ceph.conf
[19:43] <mgalkiewicz> I am using roles: attribute from chef's node
[19:44] <mgalkiewicz> ceph.conf is generated based on machines which have role ceph_mon
[19:44] <mgalkiewicz> ok I will check this process once again now brb
[19:45] <sagelap> the new doc lists the 3 pieces of info that ceph-mon needs to create itself: the mon. key, the fsid, and a list of existing monitors. if those all come from the environment, or attributes, or whatever, then there's no need to modify ceph.conf and move the monmap around or anything like that, and it works during the initial cluster bootstrap/mkfs too.
[19:46] <sagelap> tv knows more about the specific plan there, but it's probably a few weeks before we were going to get to it.
[19:47] <sagelap> bottom line, you can also run ceph-mon --mkfs -i `hostname` --key <mon. secret> and get --mon-host and fsid from [global] in ceph.conf. no [mon.foo] sections needed.
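A [global] section along the lines sagelap describes might look like this sketch (the fsid, host names, and keyring path are made-up values for illustration):

```
[global]
    fsid = 11111111-2222-3333-4444-555555555555
    mon host = host1, host2, host3
    keyring = /etc/ceph/keyring
```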
[19:49] <mgalkiewicz> I have a problem
[19:49] <mgalkiewicz> http://pastie.org/3027680
[19:49] <mgalkiewicz> monmap and monkey are copied from existing mon
[19:50] <sagelap> add --public-addr <your ip> or --public-subnet <your subnet> (and it'll find a local ip in that subnet)
[19:50] <mgalkiewicz> ?
[19:50] <mgalkiewicz> ok now i get it
[19:51] * nhm (~nh@68.168.168.19) Quit (Read error: Operation timed out)
[19:51] <Tv> mgalkiewicz: i'll get back to you in a bit, if there's still something unclear
[19:51] <mgalkiewicz> ok mon started
[19:52] <mgalkiewicz> and it is listed in ceph -s
[19:52] <sagelap> ceph quorum_status will also show you
[19:52] <sagelap> yay
[19:52] <mgalkiewicz> ok and now I should add new mon to ceph.conf on the old one?
[19:52] <sagelap> i'll fix up the public-addr step in the docs.
[19:53] <sagelap> yeah, at your leisure.
[19:53] <sagelap> it only needs to be there so that daemons/tools starting up will initially try that monitor when hunting for a connection.
[19:53] <sagelap> and so the init script will start the ceph-mon daemon
[19:54] <mgalkiewicz> ok and how new mon is authenticated?
[19:55] <mgalkiewicz> monkey file is still needed?
[19:56] <mgalkiewicz> and does monmap change only during cluster expansion?
[19:57] <mgalkiewicz> note that the init.d script should be modified as well
[20:02] * sagelap (~sage@adsl-99-107-84-120.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[20:04] <mgalkiewicz> and I am not able to execute ceph -s from the new mon
[20:04] <mgalkiewicz> there is no output from that command
[20:10] * nhm (~nh@68.168.168.19) has joined #ceph
[20:14] * fronlius (~fronlius@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[20:16] * fronlius (~fronlius@testing78.jimdo-server.com) has joined #ceph
[20:19] <wido> I think I got a bit further with setting up the radosgw, set up a new keyring, added to the ceph.conf, but I'm just missing the Apache conf
[20:20] <joshd> wido: this is the conf we use in our tests https://github.com/NewDreamNetwork/teuthology/blob/master/teuthology/task/apache.conf
[20:21] <wido> joshd: ah, great! That will help
[20:21] <sjust> wido: about how often are you seeing the scrub hang bug?
[20:23] <wido> sjust: I've been busy with some other stuff the last few weeks, so I'm not 100% sure, but it would show up every 48 hours or so
[20:23] <wido> I've also got a smaller cluster with about a tenth of the PGs; I've only seen it block there once
[20:24] <sjust> wido: it looks like there is a bug in the watch timeout handling that could cause the bug, how often do you think you end up killing an rbd-using process without shutting it down cleanly?
[20:24] <sjust> *that could cause the scrub hang
[20:24] <wido> sjust: I never kill my processes
[20:24] <sjust> wido: hmm, ok
[20:25] <wido> That smaller cluster has a VM that I never shut down, running semi-production
[20:25] <sjust> wido: could still happen through network hiccups, perhaps
[20:25] <wido> sjust: don't think so, all machines are on the same switch, nothing else going on there
[20:25] <sjust> wido: ok, thanks for the info
[20:25] <wido> same subnet, no router
[20:26] <wido> running IPv6 though
[20:26] <wido> shouldn't make a difference I think
[20:27] <sjust> wido: no, just trying to gauge whether this bug caused the behavior in your case, doesn't look like it though
[20:28] <wido> the cluster I sent the e-mail about had been idle for 24 hours
[20:28] <wido> upgraded to 0.39 yesterday and did no I/O
[20:29] <wido> I'm running RBD only, not using the filesystem
[20:31] <sjust> wido: is there a live vm attached?
[20:33] <wido> sjust: At the moment 4 VM's running from 4 different hosts
[20:34] <wido> sjust: If you want access, just give me your public key
[20:34] <sjust> wido: not at the moment
[20:35] <mgalkiewicz> hmm after a while I cant get any statistics from any mon machine
[20:35] <mgalkiewicz> ceph -s just hangs
[20:36] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[20:36] <mgalkiewicz> I have tried restarting both mons
[20:37] <mgalkiewicz> logs: http://pastie.org/3027868
[20:55] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) has joined #ceph
[20:56] * adjohn (~adjohn@208.90.214.43) Quit (Quit: adjohn)
[21:09] * adjohn (~adjohn@208.90.214.43) has joined #ceph
[21:11] <yiH> guys, did a test, this is what I see: http://pastebin.com/sZBNTpaU
[21:13] * jclendenan (~jclendena@204.244.194.20) Quit (Read error: Connection reset by peer)
[21:13] <Tv> gmane cracks me up.. http://article.gmane.org/gmane.comp.file-systems.ceph.devel/
[21:13] <yiH> 'data' size: 3, 'metadata' size: 3. I have 2 mds... so why does it freeze if I shut down the master mds?
[21:24] <yiH> http://pastebin.com/R4MgdhUy
[21:25] <Tv> yiH: the MDS system expects to have the same number of active MDSs always available
[21:25] <Tv> yiH: it'll automatically bring ones from standby to active, but the number is expected to remain the same in normal operation
[21:25] <yiH> so.. how can I achieve HA?
[21:26] <Tv> yiH: by having standby MDSes
[21:26] <yiH> I have one..
[21:26] <Tv> yiH: if you have a total of 2 servers, make one active and the other standby
[21:26] <Tv> yiH: it sounds like you had both active
[21:26] <yiH> if one MDS crashes, then, by definition there will be less MDS than needed... right?
[21:26] <yiH> aaaaaah
[21:26] <yiH> I see
[21:26] <yiH> didn't know that. THX. How do I do that?
[21:27] <yiH> "1 up:standby"
[21:27] <Tv> yiH: my current understanding is that shrinking the number of active MDSes is not supported :(
[21:27] <yiH> doesn't this mean that the other was in standby?
[21:27] <Tv> actually yes it does
[21:28] <Tv> i see it is detected as "laggy or crashed"
[21:28] <Tv> there might be timeout there that's longer than you waited for
[21:29] <yiH> 10 minutes is a looong time for me :>
[21:29] <Tv> yeah the delay might be 15min by default, i'm not sure
[21:29] <yiH> configurable? have to recompile?
[21:29] <Tv> most of the QA is focusing on rados, radosgw, rbd so that's where i've been working more, let's see if we have an mds expert at hand
[21:30] <Tv> gregaf: ping
[21:30] <yiH> I'm not even sure it's mds related
[21:30] <Tv> i'm 95% sure it is ;)
[21:30] <yiH> seen the same stuff when crashed an mon+osd node
[21:31] <yiH> it froze the FS
[21:33] <Tv> anyone: when does a laggy mds go down?
[21:36] <yiH> big thanks for your suggestions, unfortunately I really have to go now
[21:37] <Tv> yiH: try "ceph mds fail 0"
[21:37] <yiH> will try it tomorrow.. maybe the mailing list
[21:37] <yiH> thx, going to test this tomorrow!!!
[21:37] <yiH> bye
[21:37] <Tv> actually "ceph mds fail a"
[21:38] <Tv> err
[21:38] <Tv> actually "ceph mds fail beta"
[21:38] * NightDog (~karl@52.84-48-58.nextgentel.com) has joined #ceph
[21:38] <Tv> mgalkiewicz: are you still having trouble? i'm done with my previous task, i can help you now
[21:50] <mgalkiewicz> hmm restarting once again new mon helped
[21:51] <mgalkiewicz> I'll check one thing
[21:52] <mgalkiewicz> still have one problem
[21:53] <mgalkiewicz> when I added the second mon and shut it down, I couldn't get stats from ceph -s on the first one
[21:53] <mgalkiewicz> ceph -s hangs
[21:54] <mgalkiewicz> Tv: do you need some logs or it is a normal behaviour?
[21:54] <Tv> mgalkiewicz: i'd like to know the exact commands you tried
[21:54] <mgalkiewicz> ceph -s
[21:54] <Tv> mgalkiewicz: more like, how you set it up
[21:54] <Tv> how many monitors did you have in the first place
[21:55] <mgalkiewicz> one
[21:55] <mgalkiewicz> and I added another one, I am aware that the number must be odd, I am going to add one more
[21:55] <Tv> if you have a 2-monitor cluster, you can't lose either one of them
[21:55] <Tv> because 1 is not a majority of 2
[21:56] <Tv> http://ceph.newdream.net/docs/latest/ops/manage/grow/mon/
[21:56] <Tv> bring it up again (to make the cluster reach majority), then use "ceph mon remove" to downsize it
[21:56] <Tv> if you really want to go back to 1
[21:56] <mgalkiewicz> I just wanted to stop it
[21:57] <mgalkiewicz> Im going to add another one
[21:57] <Tv> that's fine
[21:57] <Tv> but you have to understand
[21:57] <Tv> during the time you have 1 of 2 monitors up, it cannot do anything
[21:58] <Tv> there's no damage done, but it also fundamentally cannot operate under those circumstances
[21:58] <mgalkiewicz> ok so in case of 3 mons I always need at least 2 right?
[21:58] <Tv> yes
[21:58] <Tv> you always need a clear majority
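Tv's majority rule is just floor(n/2) + 1, which explains both cases above: with 2 monitors you still need 2 up, and with 3 you need 2, so exactly one may be down. A quick sketch:

```shell
# monitors needed for quorum: a strict majority, i.e. floor(n/2) + 1
for n in 1 2 3 4 5; do
  echo "$n monitors -> need $(( n / 2 + 1 )) up"
done
```

Note that going from 2 to 3 monitors adds fault tolerance, while going from 1 to 2 does not.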
[21:59] <mgalkiewicz> ok
[21:59] <mgalkiewicz> ahh
[22:00] <mgalkiewicz> I have tried to bring it up again
[22:00] <Tv> as long as you didn't delete its data directory, it should be able to come up and recover
[22:02] <mgalkiewicz> http://pastie.org/3028233
[22:03] <mgalkiewicz> I did start once again with the same command and now it works
[22:04] <mgalkiewicz> however my pastie looks like a bug for me
[22:06] <Tv> oh yes
[22:07] <Tv> mgalkiewicz: if you can describe how that happened (and especially if it happens again), and put that backtrace in a bug report, that would be much appreciated -- http://tracker.newdream.net/
[22:07] <mgalkiewicz> ok
[22:08] <Tv> also what node is that log from -- is it from 1.1.1.1 or not, etc
[22:08] <Tv> i think not but i'm not that familiar with the log entries
[22:08] <mgalkiewicz> Tv: ok one more question
[22:09] <mgalkiewicz> why does running ceph -s from the new monitor not work?
[22:09] <mgalkiewicz> there is no output
[22:09] <mgalkiewicz> and the command returns 1
[22:09] <Tv> "ceph --log-to-stderr --debug-ms=1 -s" will tell you more
[22:10] <mgalkiewicz> http://pastie.org/3028264
[22:11] <Tv> mgalkiewicz: it's missing the crypto key
[22:11] <mgalkiewicz> yeah:) but how to workaround this
[22:11] <Tv> mgalkiewicz: your ceph.conf says keyring=, in the [client.*] or [client] or [default] section
[22:11] <Tv> mgalkiewicz: that path should contain the key to use
[22:12] <mgalkiewicz> I dont have client or default section at all only global and sections for mds, mon and osd
[22:13] <mgalkiewicz> I will try to add keyring= to mon section
[22:13] <Tv> sorry global not default
[22:13] <mgalkiewicz> hmm adding to mon section did not help
[22:13] <Tv> the ceph command-line tool will not read a [mon] section
[22:14] <Tv> it looks at [client.$id], [client], and [global]
[22:14] <mgalkiewicz> in global there is an entry with keyring.admin
[22:14] <Tv> default id == "admin"
[22:14] <Tv> so there you go
[22:14] <Tv> that path has to be readable and a valid keyring
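Per Tv's explanation, the command-line tools look for a keyring in [client.$id], [client], or [global], and the default id is admin. A ceph.conf fragment along those lines might look like this (the path is illustrative):

```
[global]
    ; read by the 'ceph' tool (as client.admin by default) and by daemons
    keyring = /etc/ceph/keyring
```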
[22:14] <sjust> wido: I've pushed a patch which may fix the scrub hanging problem, I also pushed a patch with some asserts that may tell us where the problem is coming if I'm wrong
[22:14] <mgalkiewicz> but I dont have admin keyring on new monitors and dont like to have
[22:15] <sjust> 061e7619aacf60a828e0ce84a108d5a0bea247c6 and 5274e88d2cb8c0449a4ecd1ff0cf8bb0af2cfc97
[22:15] <Tv> mgalkiewicz: then don't run "ceph -s" there
[22:15] <Tv> mgalkiewicz: "ceph -s" needs some way of telling the cluster it's authorized to see that information
[22:15] <mgalkiewicz> ok but I still have mon. key
[22:16] <Tv> mgalkiewicz: but a command line tool is not a monitor
[22:16] <Tv> mgalkiewicz: it expects a client.* key
[22:16] <Tv> mgalkiewicz: strictly, you *could* use the mon. key, but the system is just not built to accommodate that
[22:17] <mgalkiewicz> I can use mds/osd key like this ceph -s -n mds.my_node
[22:17] <gregaf> yiH: it looks like maybe the standby MDS you had isn't running any more either — check if the daemon is still up; see if the logs have a backtrace or there's a core dump or anything
[22:17] <mgalkiewicz> why not to use mon key in the same way?
[22:17] <Tv> mgalkiewicz: you might be able to get away with -n mon.
[22:18] <Tv> mgalkiewicz: but you're hiking off trail, there may be rattlesnakes and poison ivy there
[22:18] <Tv> (it seems "-n mon." makes my ceph -s hang)
[22:18] <mgalkiewicz> does not work
[22:19] <mgalkiewicz> how does ceph know where my keyring.mon is
[22:19] <mgalkiewicz> when there is no entry in ceph.conf
[22:19] <Tv> well then it wouldn't
[22:20] <mgalkiewicz> I added keyring=... to mon section just like I did for mds and osd but still does not work
[22:20] <Tv> as i said, "-n mon." makes ceph -s hang on my known-good installation
[22:20] <Tv> so just don't do that, mmkay?
[22:20] * verwilst (~verwilst@d51A5B517.access.telenet.be) has joined #ceph
[22:20] <Tv> if you want to run "ceph -s", you need a client key
[22:20] <mgalkiewicz> a little bit inconvenient and incoherent
[22:20] <wido> sjust: Building now
[22:21] <mgalkiewicz> well I would like to use it in case of failure of my "primary" mon
[22:22] <Tv> mgalkiewicz: i'm not sure what your perceived difficulty is.. you'll need to add keys to the system to have things contact the cluster.
[22:23] <Tv> mgalkiewicz: mkcephfs creates a client.admin key and adds it for you
[22:23] <Tv> default name the command line tools use is client.admin
[22:23] <Tv> mgalkiewicz: is there something else i can explain about this? I know the docs aren't very clear on this currently.
[22:24] <mgalkiewicz> the problem is that I dont use mkcephfs and I am preparing chef cookbook
[22:24] <mgalkiewicz> so it is not so easy as running few commands by hand
[22:25] <mgalkiewicz> however everything seems to work fine and I am quite satisfied with the results
[22:25] <mgalkiewicz> thank you very much for your help
[22:27] <Tv> mgalkiewicz: https://github.com/NewDreamNetwork/ceph-cookbooks
[22:27] <mgalkiewicz> maybe your documentation is not very helpful however I really appreciate your help on irc:)
[22:28] <Tv> mgalkiewicz: there are two major features going into the code, without which building cookbooks will be an exercise in frustration
[22:28] <Tv> mgalkiewicz: i frankly recommend waiting, or actually figuring out the new features with me
[22:28] <mgalkiewicz> oh great
[22:29] <mgalkiewicz> I will take a look however I already have cookbooks for installing mon, mds, osd and some scripts for managing rbd
[22:29] <Tv> http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/4380
[22:35] * fghaas (~florian@85-127-155-32.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[22:37] * Kioob (~kioob@luuna.daevel.fr) Quit (Quit: Leaving.)
[22:49] <wido> sjust: have you compiled the code and verified that OSD's start with it?
[22:50] <wido> all my OSD's crash on boot: http://pastebin.com/37L9AhJs
[22:50] <wido> same backtrace with all OSD's
[22:54] <wido> hmm, seems to be something different? Doesn't go out on the new asserts
[23:03] <sjust> wido: looking
[23:04] <sjust> wido: you were running 0.39 before?
[23:05] <wido> sjust: Yes
[23:05] <sjust> and you upgraded to current master? or did you cherry-pick those commits?
[23:07] <wido> sjust: I upgraded to the current master, no cherry-pick
[23:08] <sjust> wido: ok
[23:19] * jclendenan (~jclendena@204.244.194.20) has joined #ceph
[23:23] * verwilst (~verwilst@d51A5B517.access.telenet.be) Quit (Quit: Ex-Chat)
[23:35] <todin> wido: sjust: current master failed for me as well, but the trace is different http://pastebin.com/tTsDtR5Z
[23:37] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:37] <gregaf> todin: that's an odd one
[23:40] * adjohn (~adjohn@208.90.214.43) Quit (Quit: adjohn)
[23:40] <todin> gregaf: yep, the core is 32G big, I am looking into it
[23:40] <gregaf> yeah, that's pthread_create failing
[23:40] <gregaf> 32GB?
[23:40] <gregaf> okay, so probably we ran out of memory
[23:40] <sjust> gregaf: oom?
[23:41] <todin> yep, the osd has only 4G
[23:41] <todin> but no oom in dmesg
[23:41] <gregaf> I guess maybe the kernel didn't need RAM, then
[23:41] <gregaf> but it still failed to allocate the requested stack space
[23:42] <gregaf> did you have any logging on?
[23:42] <todin> gregaf: no
[23:43] <wido> I'm going afk, ttyl. sjust: If you find something, I'll try tomorrow
[23:43] <sjust> wido: ok, thanks
[23:43] <gregaf> bummer
[23:43] <gregaf> if it's repeatable you should try it with logging; I'd expect the problem to become apparent pretty quickly if this is happening right on startup
[23:44] <todin> gregaf: which logging should I turn on?
[23:45] <gregaf> try the OSD on high and messenger on 1 for now
[23:46] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[23:48] <sjust> wido: if you're still here, it's possible that the patch I just pushed to master will fix that for you
[23:48] <sjust> bfbde5b18525406fc3b678751459e989ea5d4977
[23:50] <todin> gregaf: http://pastebin.com/AYzsPEe4
[23:51] <gregaf> that's the tail?
[23:51] <gregaf> todin: does the rest of the log contain that cycle too?
[23:52] <todin> gregaf: you mean this reconnecting thing?
[23:52] <gregaf> yeah
[23:52] <todin> gregaf: yes
[23:53] <gregaf> huh
[23:53] <todin> gregaf: if you want I can upload the log.
[23:53] <gregaf> yes, please :)
[23:56] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.