#ceph IRC Log


IRC Log for 2011-06-08

Timestamps are in GMT/BST.

[0:18] * MarkN (~nathan@59.167.240.178) has left #ceph
[0:37] * verwilst_ (~verwilst@d51A5B6F6.access.telenet.be) Quit (Quit: Ex-Chat)
[0:51] * allsystemsarego (~allsystem@188.27.167.240) Quit (Quit: Leaving)
[2:03] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[2:15] * df (davidf@dog.thdo.woaf.net) Quit (Remote host closed the connection)
[2:15] * df (davidf@dog.thdo.woaf.net) has joined #ceph
[2:15] * jrosser (jrosser@dog.thdo.woaf.net) Quit (Remote host closed the connection)
[2:20] * jrosser (jrosser@dog.thdo.woaf.net) has joined #ceph
[2:45] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[3:06] * cmccabe (~cmccabe@208.80.64.174) has left #ceph
[3:07] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[3:51] * eternaleye__ (~eternaley@195.215.30.181) has joined #ceph
[3:52] * eternaleye_ (~eternaley@195.215.30.181) Quit (Remote host closed the connection)
[4:28] * eternaleye__ is now known as eternaleye
[4:41] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[4:45] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[4:59] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) has joined #ceph
[5:14] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[5:23] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) has joined #ceph
[5:40] * lidongyang_ (~lidongyan@222.126.194.154) Quit (Remote host closed the connection)
[5:49] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:49] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[5:54] * lidongyang (~lidongyan@222.126.194.154) has joined #ceph
[6:08] * djlee (~dlee064@des152.esc.auckland.ac.nz) Quit (Quit: Ex-Chat)
[6:59] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) has joined #ceph
[7:57] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (synthon.oftc.net weber.oftc.net)
[7:57] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (synthon.oftc.net weber.oftc.net)
[7:57] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) Quit (synthon.oftc.net weber.oftc.net)
[7:57] * nolan (~nolan@phong.sigbus.net) Quit (synthon.oftc.net weber.oftc.net)
[7:57] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) has joined #ceph
[7:57] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[7:57] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[7:57] * dwm (~dwm@vm-shell4.doc.ic.ac.uk) has joined #ceph
[8:16] <iggy> 6y
[8:49] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[9:32] * bhem (~bhem@82VAABXQH.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:24] * allsystemsarego (~allsystem@188.27.167.240) has joined #ceph
[10:54] * bhem (~bhem@82VAABXQH.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[11:11] * bhem (~bhem@9YYAAA5M8.tor-irc.dnsbl.oftc.net) has joined #ceph
[12:28] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[12:32] * mrfree (~mrfree@host2-89-static.40-88-b.business.telecomitalia.it) has joined #ceph
[12:32] <mrfree> hi all
[12:33] * mtk (~mtk@ool-182c8e6c.dyn.optonline.net) has joined #ceph
[12:32] <mrfree> using the squeeze repo suggested in the wiki I can't see any ceph package that doesn't depend on gtk
[13:03] <mrfree> I don't want to pull all X deps on my server... is there a way to install ceph without X stuff?
[13:06] * bhem (~bhem@9YYAAA5M8.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[13:30] <wonko_be> doesn't look like it, you will have to compile it yourself if you don't want the X dependency
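
A minimal from-source build sketch, assuming the autotools layout of the 0.2x-era ceph tarballs; check ./configure --help for the actual switch that disables the gtk GUI tool rather than guessing a flag:

    ./autogen.sh       # only needed when building from a git checkout
    ./configure        # look here for an option to skip the gtk-based GUI
    make
    sudo make install
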
[13:31] * sugoruyo (~george@athedsl-408992.home.otenet.gr) has joined #ceph
[13:32] <sugoruyo> hi all, i'm having problems with my ceph setup - while all osds seem to start only two of them are listed as up/in
[13:51] <sugoruyo> there are 3 machines, two disks each, 1 OSD/disk which means 2 OSDs/machine. all OSDs seem to start up ok, however only the first two (which are on the same machine) seem to join the cluster, the other four are considered non-existent. even though max osds is 6, when i do `ceph osd in 2` to mark the 3rd one in, it says it doesn't exist.
[13:53] <sugoruyo> I have noticed that when i start everything up, the OSDs on the first machine bind to the first available port above 6800 (the first machine also runs a standby MDS) but the other OSDs mention 6800 in the output from the start up script
[13:58] <wonko_be> the MDSes should connect to the monitor to report as being online
[13:58] <wonko_be> so you should check the connectivity from the osd to the mds
[14:01] <sugoruyo> wonko_be: well the machines can sure talk to each other, i can ping and ssh from one another
[14:02] <sugoruyo> what's weird is that on the 4th machine which runs mon0 and mds0 i can see ports bound, but not on the other ones
[14:04] <sugoruyo> netstat -anp shows the ports mentioned by the start-up scripts bound to the appropriate processes on the 4th machine
[14:05] <sugoruyo> as well as the first which runs the backup mds and 2 osds, i see 6800 bound to cmds, 6801-6803 to cosd, 6804-6806 to cosd (diff. pid)
[14:09] <sugoruyo> `ps` indicates no cosd process on either of the machines with problems...
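
The checks being described amount to the following (a sketch; cosd/cmon/cmds are the daemon names of this era):

    ps ax | grep -E 'c(osd|mon|mds)'         # is each daemon actually running?
    netstat -anp | grep -E 'c(osd|mon|mds)'  # which ports did each one bind?
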
[14:47] * mrfree_ (~mrfree@host18-207-static.115-2-b.business.telecomitalia.it) has joined #ceph
[14:53] * mrfree (~mrfree@host2-89-static.40-88-b.business.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:02] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[16:03] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[16:08] * alexxy (~alexxy@79.173.81.171) Quit (Ping timeout: 480 seconds)
[16:11] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:15] * MK_FG (~MK_FG@188.226.51.71) Quit ()
[16:18] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:20] * MK_FG (~MK_FG@188.226.51.71) Quit ()
[16:23] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:24] * MK_FG (~MK_FG@188.226.51.71) Quit ()
[16:26] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:31] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[16:37] * MK_FG (~MK_FG@188.226.51.71) Quit (Quit: o//)
[16:38] * mrfree_ (~mrfree@host18-207-static.115-2-b.business.telecomitalia.it) Quit (Quit: Leaving)
[16:38] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[16:42] * MK_FG (~MK_FG@188.226.51.71) Quit ()
[16:43] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[17:11] <greglap> sugoruyo: are you sure the OSDs are getting started? If they are starting, the logs should tell you why they're shutting down
[17:39] * greglap (~Adium@cpe-76-170-84-245.socal.res.rr.com) Quit (Quit: Leaving.)
[17:46] * bchrisman (~Adium@c-98-207-207-62.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:50] * greglap (~Adium@mobile-198-228-210-060.mycingular.net) has joined #ceph
[18:28] <sugoruyo> greglap: sorry, i was afk, I reran mkcephfs and got all the osds on the second and third machines to work, but now osd0 is always down/out
[18:29] <sugoruyo> i can't figure out how to get that thing in and up
[18:29] <greglap> what do the logs say?
[18:29] <sugoruyo> which of all the logs?
[18:29] <greglap> the ones on osd0
[18:29] <greglap> which aren't working ;)
[18:35] <sugoruyo> they're filled with messages like this
[18:35] <sugoruyo> 2011-06-08 19:29:59.131931 aeef0b70 osd0 0 handle_osd_map fsid 1d3c0f59-b495-9bf1-8aa4-0bd362d14a94 != c30b0c94-a057-4416-2ac0-3a9ddf2d64af
[18:35] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[18:36] <greglap> okay, so that means the fsid (FileSystem ID) they're getting from the monitors is different than what they have in their local stores
[18:36] <greglap> so something's gone wrong in setting up the on-disk pieces (ie, mkcephfs)
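
Both fsids appear side by side in the handle_osd_map message quoted above, so the quickest check is to grep the OSD's own log (the log path here is an assumption; adjust it to your ceph.conf):

    grep 'handle_osd_map fsid' /var/log/ceph/osd.0.log
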
[18:38] <sugoruyo> i did not notice any abnormal thing in mkcephfs output
[18:39] <greglap> how did you run it?
[18:39] <sugoruyo> you mean the command?
[18:39] <greglap> yeah
[18:39] <greglap> and what ceph.conf did you use?
[18:39] <sugoruyo> you want me to pastie those?
[18:39] <greglap> yes, please :)
[18:40] <greglap> my train's at the station, gotta run ... back in 15 :)
[18:40] * greglap (~Adium@mobile-198-228-210-060.mycingular.net) Quit (Quit: Leaving.)
[18:43] * Tv (~Tv|work@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:50] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:54] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[18:55] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit ()
[18:57] <gregaf> back
[18:57] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[19:00] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit ()
[19:00] * cmccabe (~cmccabe@c-24-23-254-199.hsd1.ca.comcast.net) has joined #ceph
[19:00] <sugoruyo> gregaf: are you the same as greglap?
[19:00] <gregaf> heh, yeah
[19:01] <sugoruyo> ok so here's the pastie URL for the ceph.conf and the CRUSH map I used: http://pastie.org/2038535
[19:02] <gregaf> okay, that looks fine
[19:03] <gregaf> so you ran mkcephfs with which options?
[19:03] <sugoruyo> sudo mkcephfs -c /etc/ceph/ceph.conf --crushmap /etc/ceph/crush.map --allhosts -v
[19:04] <sugoruyo> I'm on Ubuntu 10.10 Server, my OSDs use ext4 partitions
[19:04] <gregaf> and you didn't have any old nodes running while you did this?
[19:05] <sugoruyo> i stopped everything, rebooted the machines just in case something was running accessing something in the f/s of the partitions for the OSD
[19:05] <sugoruyo> and ran mkcephfs
[19:06] <sugoruyo> in fact this was run just after an mkfs.ext4 on the partitions
[19:06] <gregaf> hmmm
[19:07] <sugoruyo> in the osd1 log i see this:
[19:07] <sugoruyo> 2011-06-08 18:49:00.833947 a956bb70 -- 0.0.0.0:6805/3965 >> 10.254.254.30:6804/2450 pipe(0x98938c0 sd=13 pgs=0 cs=0 l=0).connect claims to be 0.0.0.0:6804/2450 not 10.254.254.30:6804/2450 - presumably this is the same node!
[19:07] <gregaf> yeah, that's not a problem
[19:08] <sugoruyo> i'm guessing that's just because 2 OSDs run on the same machine
[19:08] <gregaf> it's the mismatch in fsids and I'm not sure how that could be happening if you don't have remnants of an old filesystem sticking around either in memory or on-disk
[19:08] <sugoruyo> i also see this
[19:08] <sugoruyo> 2011-06-08 18:48:59.855792 aef78b70 osd1 0 handle_osd_map fsid 1d3c0f59-b495-9bf1-8aa4-0bd362d14a94 != 00000000-0000-0000-0000-000000000000
[19:09] <gregaf> hmm, I don't remember if that's a separate problem from the other fsid mismatch or just something that happens on startup
[19:09] <gregaf> but I think it's just a misleading startup output message
[19:10] <sugoruyo> I do have a cosd -i 0 process running on the machine
[19:11] <gregaf> an extra process?
[19:11] <sugoruyo> 2 OSDs on each machine
[19:12] <gregaf> yeah, so you should have two cosd processes running, one of which would be cosd -i 0 ...
[19:12] <sugoruyo> this one has osd.0, osd.1, and i see a process called with cosd -i 0 on the machine and one called with cosd -i 1
[19:12] <gregaf> yep, those are what they should be
[19:12] <sugoruyo> they also both have connections in ESTABLISHED state to my mon and the other machines on the ports where the OSDs listen
[19:14] <gregaf> huh
[19:16] <gregaf> I'm not sure what could have gone wrong then, although I'm pretty sure it was in the mkcephfs stage
[19:16] <gregaf> I think Tv's done some stuff with that, maybe he has an idea
[19:16] <sugoruyo> do you know how to read the output from `ceph -s`?
[19:16] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[19:17] <gregaf> yeah
[19:17] <gregaf> but you said you had 4 OSDs in and up
[19:17] <Tv> gregaf: i haven't actually touched mkcephfs in master, just looked at it
[19:17] <sugoruyo> could you explain it to me?
[19:17] <Tv> i am somewhat familiar with the underlying lower-level actions
[19:18] <sugoruyo> i have 5 OSD in & up out of 6
[19:18] <cmccabe> did anything crash?
[19:19] <sugoruyo> cmccabe: no, i ran mkcephfs, then started everything up and ran ceph -s
[19:19] <cmccabe> did mkcephfs succeed?
[19:19] <sugoruyo> i didn't notice any error messages, although i did not read its output line by line
[19:20] <cmccabe> try it again and look at the output, as well as the return code
[19:21] <sugoruyo> i just stopped and restarted everything and it shows all 6 are up & in...
[19:21] <Tv> cmccabe: fwif i don't have 100% faith in mkcephfs error checking..
[19:21] <Tv> *fwiw
[19:22] <Tv> reading the osd log of the failing node should be interesting
[19:22] <sugoruyo> not sure what went wrong earlier, or whether it'll stay up
[19:22] <Tv> sugoruyo: and this is without re-mkcephfs'ing or anything like that, just stop & start the daemons?
[19:22] <sugoruyo> however i see some stuff in `ceph -s` output that seems a little weird and I'd like if someone could explain the output of `ceph -s`
[19:23] <Tv> sugoruyo: getting the log from the osd that misbehaved earlier would be interesting
[19:23] <sugoruyo> Tv: correct
[19:23] <cmccabe> yeah, mkcephfs should perhaps be using sh -e
[19:23] <cmccabe> since it's not checking the return code of most commands
[19:23] <Tv> cmccabe: it does set -e, but it has `` and pipes in it, set -e is not enough
[19:24] <Tv> $ false | cat; echo $?
[19:24] <Tv> 0
[19:24] <gregaf> sugoruyo: well it sounds like there was an osd which got initialized with the old info then and got stuck in memory
[19:24] <Tv> $ echo `false`; echo $?
[19:24] <cmccabe> tv: yeah, I know.
[19:24] <Tv> 0
[19:24] <cmccabe> tv: however, those unchecked calls are mostly sed and grep, and occasionally cconf
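
Tv's two examples, spelled out as a self-contained bash sketch (pipefail is a bash extension that plain sh lacks):

    #!/bin/bash
    set -e                     # aborts on a failing simple command...
    false | cat                # ...but a pipeline reports cat's status, so no abort
    echo "survived: $?"        # prints "survived: 0"

    echo `false`               # echo itself succeeds, masking the substitution
    echo "survived: $?"        # prints "survived: 0" again

    set -o pipefail            # now a pipeline fails if any stage fails
    false | cat || echo "pipefail caught it"
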
[19:27] <sugoruyo> Tv: I'm looking at that log right now, there's a bunch of messages about nodes claiming to be a certain node and some startup stuff
[19:27] <sugoruyo> 2011-06-08 20:27:29.716684 pg v470: 1188 pgs: 396 creating, 792 active+clean; 24 KB data, 1463 MB used, 281 GB / 297 GB avail
[19:27] <sugoruyo> what does this mean?
[19:27] <gregaf> that's normal startup noise, the nodes initially don't know what their IP is and they need others to tell them who they are
[19:28] <gregaf> you don't have any detailed logging turned on so you're not going to see anything interesting unless a daemon crashes, basically.
[19:29] <sugoruyo> can someone explain the line i just pasted above as well as this one?
[19:29] <sugoruyo> 2011-06-08 20:27:29.724247 mds e5: 1/1/1 up {0=0=up:active}, 1 up:standby
[19:29] <gregaf> the ceph -s output there is telling you about the placement groups, which are grouping that objects are placed into for data placement and tracking purposes
[19:30] <sugoruyo> gregaf: yeah I know, but it says 24 KB data, 1463 MB used, 281 GB / 297 GB avail
[19:30] <gregaf> so you have 1188 placement groups, 396 of them are in the creating state (where the OSDs are setting up their initial metadata), 792 are active and clean (so they're available and happy)
[19:30] <gregaf> so that line is a little confusing
[19:31] <gregaf> Ceph is storing 24 KB of data itself
[19:31] <sugoruyo> i'm guessing there's 396 PGs for each pool
[19:31] <gregaf> it's using 1463MB (it's counting the space in your OSD journals)
[19:32] <sugoruyo> why 281/297 GB avail though?
[19:32] <gregaf> and the partitions that are hosting OSD data stores have 281 out of 297 GB free
[19:32] <sugoruyo> it's never even been mounted... unless that's just overhead
[19:32] <gregaf> there might be other things besides Ceph using up space on them though
[19:32] <sugoruyo> they're mounted to each osd's data dir directly
[19:33] <sugoruyo> what about the line about the mds?
[19:33] <gregaf> are there any other files in those partitions?
[19:33] <sugoruyo> nope
[19:33] <sugoruyo> i mkfs.ext4'd them and then mkcephfs'd right after that (they're mounted at /srv/osd{0,1,2,3,4,5})
[19:34] <gregaf> you've got one mds up; it's mds.0 and it's acting as mds 0, and you have one daemon in standby
[19:34] <sagewk1> skype!
[19:34] <gregaf> sorry, team standup now, we'll be back
[19:34] <sugoruyo> what does the 1/1/1 {0=0=up:active} mean though?
[19:36] <cmccabe> 1 osd is in the cluster and it is both up and active
[19:49] <sugoruyo> after starting the whole thing, I mounted it and tried to cp a file into it but it's hung after about a hundred megs
[19:49] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has left #ceph
[19:49] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[19:56] <gregaf> that's MDS data, not OSD data...
[19:56] <gregaf> it means that you have one active daemon, I forget what the middle number is for, and the third number is the number of allowed daemons
[19:57] <gregaf> the middle number might be the number of MDSes that the cluster has, which can be different from the other two if max_mds is six but you've only ever had 4 running simultaneously
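
Collecting gregaf's explanations, the two status lines quoted earlier break down as follows (the middle mds figure is the one he was unsure about):

    pg v470: 1188 pgs: 396 creating, 792 active+clean; 24 KB data, 1463 MB used, 281 GB / 297 GB avail
        1188 pgs           total placement groups
        396 creating       PGs still setting up their initial metadata
        792 active+clean   PGs available and fully replicated
        24 KB data         data stored by Ceph itself
        1463 MB used       space consumed on the OSD partitions (journals included)
        281 GB / 297 GB    free / total across the OSD partitions

    mds e5: 1/1/1 up {0=0=up:active}, 1 up:standby
        1/1/1              active / (per gregaf, probably existing) / allowed MDS daemons
        {0=0=up:active}    mds.0 is acting as rank 0 and is up:active
        1 up:standby       one daemon waiting in standby
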
[19:58] <gregaf> sugoruyo, did the output of ceph -s change after the copy hung?
[20:02] * jantje (~jan@paranoid.nl) Quit (Read error: Connection reset by peer)
[20:02] * jantje (~jan@paranoid.nl) has joined #ceph
[20:02] <sugoruyo> gregaf: no the output did not change, except for the fact it now says 48KB data instead of 24 KB
[20:03] <gregaf> remind me which version you're using?
[20:03] <sugoruyo> which version of ceph?
[20:03] <gregaf> yeah
[20:04] <sugoruyo> ceph -v output: ceph version 0.28.2 (commit:23242045db6b0ec87400441acbe0ea14eedbe6cc)
[20:04] <gregaf> it's probably hanging because it ran out of buffer space on the client and is for some reason failing to flush that data out to the OSDs
[20:04] <gregaf> but in a freshly-created cluster without cephx that shouldn't be a problem
[20:05] <sugoruyo> well the cp process is using 100% now and has fallen to an S+ state according to top and ps
[20:05] <sugoruyo> whenever I've managed to get all daemons running with Ceph this always happens
[20:05] <Tv> no hatin' on the cephx ;)
[20:06] <Tv> you should see how hard the hadoop guys are scrambling to get a security story together now, it's better when you have the answer from the start..
[20:06] <sugoruyo> this would be the sixth attempt, and it always hangs after writing at most 120MB to it
[20:06] <gregaf> that's... bizarre
[20:06] <gregaf> kernel client?
[20:06] <bchrisman> Tv: it is quite nice that it's there underneath and low overhead...
[20:06] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[20:07] <sugoruyo> gregaf: you want me to give you the version of the kernel client?
[20:07] <bchrisman> We just recently took it out for a matter of simplicity... but it's absolutely essential for anybody using ceph as it's intended.
[20:07] <gregaf> Tv: not hating on cephx, just it has some UI issues...
[20:07] <gregaf> just making sure you were using the kclient and not the uclient
[20:07] <Tv> gregaf: what part of ceph doesn't ;)
[20:07] <gregaf> I know a lot more about the userspace one :(
[20:08] <sugoruyo> gregaf: well how can I tell which one I'm using?
[20:09] <gregaf> it probably doesn't matter which one, but I'm going to have to pass this off to yehudasa now
[20:09] <gregaf> he asks if there's anything in dmesg
[20:10] <sugoruyo> gregaf: yeah, there's a bunch of messages
[20:10] <sugoruyo> should I paste 4 lines here or in pastie?
[20:11] <yehudasa> sugoruyo: wherever
[20:11] <sugoruyo> [10565.138028] ceph: loaded (mon/mds/osd proto 15/32/24, osdmap 5/5 5/5)
[20:11] <sugoruyo> [10565.143161] ceph: client4210 fsid 1d3c0f59-b495-9bf1-8aa4-0bd362d14a94
[20:11] <sugoruyo> [10565.143681] ceph: mon0 10.254.254.100:6789 session established
[20:11] <sugoruyo> [10613.681065] ceph: no crush rule pool 0 type 1 size 2
[20:12] <sugoruyo> that last one repeats about twenty times
[20:14] <sugoruyo> i don't even know how to kill the `cp` process... kill -9 won't work
[20:15] <Tv> that looks fatal, and would explain cp hanging
[20:15] <Tv> the last line, that is
[20:16] <Tv> it literally doesn't know what osd to talk to, to serve that request
[20:16] <Tv> no clue how it got to that state..
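
Decoded, that last kernel line means the client looked for a CRUSH rule matching pool 0's ruleset, of rule type 1 (replicated), able to produce 2 replicas, and found none; that reading fits the ruleset-numbering mismatch yehudasa identifies below.
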
[20:16] <yehudasa> sugoruyo: we'll need you to dump the crush map
[20:19] <sugoruyo> yehudasa: the one that's currently in the cluster, or the one i set it up with?
[20:20] <yehudasa> sugoruyo: the one currently in the cluster for now
[20:20] <sugoruyo> pastie?
[20:20] <yehudasa> pastie
[20:21] <sugoruyo> ok, just to be sure, i get the crush map by doing: ceph osd getmap -o <file>, and then crushtool -d <file> -o <file> to make a human readable one right?
[20:22] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:22] <yehudasa> ceph osd getcrushmap -p <file>
[20:24] <Tv> anyone fiddling with teuthology: rm -rf virtualenv && ./bootstrap
[20:24] <sugoruyo> yehudasa: that doesn't seem to work
[20:24] <Tv> (it needs a newer version of one of the dependencies)
[20:24] <yehudasa> should be -o, not -p
[20:24] <yehudasa> sorry
[20:25] <sugoruyo> oh yeah, that's what i did, i just wrote getmap instead of getcrushmap above
[20:26] <sugoruyo> that's already in pastie, here: http://pastie.org/2038896
[20:31] * jbd (~jbd@ks305592.kimsufi.com) Quit (Remote host closed the connection)
[20:32] * jbd (~jbd@ks305592.kimsufi.com) has joined #ceph
[20:33] <yehudasa> sugoruyo: you don't have ruleset 0 specified
[20:34] <sugoruyo> yehudasa: are you saying the rules in my crush map should start to be numbered from 0?
[20:34] <yehudasa> can you post your osdmap?
[20:35] <sugoruyo> gimme a sec
[20:36] <yehudasa> sugoruyo: what kernel version are you using/
[20:36] <yehudasa> ?
[20:36] <sugoruyo> here you go; http://pastie.org/2038939
[20:37] <sugoruyo> kernel 2.6.35-28-generic #50-Ubuntu SMP Fri Mar 18 18:42:20 UTC 2011 x86_64
[20:38] <yehudasa> sugoruyo: the ruleset lines in your crushmap need to match your osdmap
[20:38] <yehudasa> so you'll need to start the numbering from 0, as specified there
[20:38] <yehudasa> also, note that in your crushmap you have a typo 'rdb' instead of 'rbd'
[20:39] <yehudasa> and your kernel client is kinda old..
[20:39] <sugoruyo> yehudasa: ok, so should i just fix the text file, recompile the map and feed it to Ceph using `ceph osd setcrushmap`?
[20:39] <yehudasa> yes
[20:39] <yehudasa> but it might be that you'll need to restart your client
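
The full round trip, assembled from the commands above (file names are placeholders):

    ceph osd getcrushmap -o crush.bin      # grab the map the cluster is using
    crushtool -d crush.bin -o crush.txt    # decompile it to editable text
    # edit crush.txt: renumber the rulesets from 0 and fix the rdb -> rbd typo
    crushtool -c crush.txt -o crush.new    # recompile
    ceph osd setcrushmap -i crush.new      # inject it back into the cluster
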
[20:39] <sugoruyo> as for the client i'm not so familiar with fiddling with my linux kernel so I'm not sure what i can do to update it
[20:41] <sugoruyo> yehudasa: the problem with that is that it hangs so badly that restarting hangs as well... one time i tried to do that and the machine hung at the shutdown screen for 2 days
[20:41] <yehudasa> reboot -f -n
[20:41] <sugoruyo> what does -n do?
[20:41] <yehudasa> don't sync filesystems
[20:43] <sugoruyo> ok, i just uploaded the new map
[20:44] <sugoruyo> and it seems to have unhung; i had hit ^C before i mentioned it here, and right after i sent the map back up it gave me my prompt back
[20:45] <yehudasa> cool
[20:45] <sugoruyo> i'll restart the copy and see how it goes
[20:46] * aliguori (~anthony@32.97.110.64) has joined #ceph
[20:46] <sugoruyo> so far all the problems i've had with it (and I've had more than I'd like) have been due to something wrong with the configuration or the setup or something like that
[20:47] <sugoruyo> the sad thing is, it turns out that for software that's considered experimental/under heavy development (according to the website) it's pretty good, quite stable and reliable, which is not something I can say about the available documentation
[20:48] <sugoruyo> some of my first problems were due to inconsistencies in the syntax of e.g. crushtool: one thing in the wiki, another in the man page, and a third in the usage message when calling it
[20:52] <sugoruyo> I restarted the copy, it seems to be going well although i don't understand why it takes up so much space...
[20:53] <gregaf> you're seeing some startup costs, and the reporting is admittedly a little weird
[20:53] <gregaf> see how much it changes after you finish your writes ... that should just be 2x as much as you wrote (for the replication)
[20:54] <sugoruyo> gregaf: could you elaborate? i expect to see between 2 and 3 times the amount i put in
[20:55] <gregaf> the information that's reported in the usage is:
[20:55] <gregaf> 1) the data the filesystem contains
[20:55] <gregaf> 2) the amount of data used across all the OSD partitions
[20:55] <gregaf> 3) the amount of space available on all the OSD partitions
[20:55] <gregaf> with some odd summations going on, sometimes
[20:56] <gregaf> like a Ceph instance that's running with one OSD on my local machine produces this right now:
[20:56] <gregaf> 2011-06-08 12:42:34.932464 pg v33: 18 pgs: 18 active+clean+degraded; 102364 KB data, 25005 MB used, 794 GB / 863 GB avail; 133/266 degraded (50.000%)
[20:56] <gregaf> it started out at 24789MB used, but it's not actually Ceph using all that because there's a lot of other stuff on the disk in question
[20:57] <sugoruyo> my corresponding output is
[20:57] <sugoruyo> 2011-06-08 21:56:20.091871 pg v845: 1188 pgs: 1188 active+clean; 3813 MB data, 10271 MB used, 272 GB / 297 GB avail
[20:57] <gregaf> all those numbers are pulled from the same source as df
[20:57] <gregaf> which sometimes provides strange output
[20:58] <gregaf> /dev/sda3 864G 25G 795G 4% /home
[20:58] <gregaf> is what mine gives me
[20:58] <sugoruyo> the data amount is correct, now df corroborates the output: it says i have 11GB used
[20:58] <gregaf> that's where the numbers are coming from in ceph -s, and they don't seem to match but Ceph can't do much about it
[20:58] <sugoruyo> and i'm down by 15GB on the space available
[20:59] <gregaf> anyway, as I keep writing data to the Ceph install, because I've now already paid the startup costs in terms of journal spaces and things, I notice that the space used goes up twice as fast as the data stored, because it's got 2x replication on and the metadata isn't large enough to really register :)
[20:59] <sugoruyo> i'm just wondering whether these are misreported, or if the extra data i'm seeing is just stuff that gets created early on, like the journal
[21:00] <gregaf> some of it's from odd df output, some of it's from early creation of stuff :)
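
Plugging in the numbers above: 2 x 3813 MB of data is roughly 7626 MB of replicated objects, and the reported 10271 MB used leaves about 2645 MB, i.e. around 440 MB per OSD across the six OSDs, which is plausibly the journal and per-OSD metadata overhead paid once at startup (assuming the journals live on the OSD partitions).
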
[21:00] <sugoruyo> gregaf: i'll feed it another large file and see how much the space used changes, i'll also look into the mountpoints
[21:02] <sugoruyo> gregaf: i read something about replication strategies (primary-copy, splay, chain) do you know anything about those?
[21:02] <gregaf> in Sage's thesis?
[21:03] <gregaf> Ceph only uses primary-copy these days; it's much simpler in terms of data safety and management :)
[21:03] <gregaf> going to lunch, bbl
[21:44] * aliguori (~anthony@32.97.110.64) Quit (Quit: Ex-Chat)
[21:44] * aliguori (~anthony@32.97.110.64) has joined #ceph
[21:47] * jbd (~jbd@ks305592.kimsufi.com) has left #ceph
[22:06] * pombreda1 (~Administr@149.96-136-217.adsl-dyn.isp.belgacom.be) has joined #ceph
[22:07] * pombreda1 (~Administr@149.96-136-217.adsl-dyn.isp.belgacom.be) Quit ()
[22:16] * DanielFriesen (~dantman@S0106001731dfdb56.vs.shawcable.net) has joined #ceph
[22:21] * Dantman (~dantman@S0106001731dfdb56.vs.shawcable.net) Quit (Read error: Operation timed out)
[22:38] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[23:05] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[23:06] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[23:32] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)
[23:43] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) has joined #ceph
[23:52] * ghaskins_mobile (~ghaskins_@66-189-113-47.dhcp.oxfr.ma.charter.com) Quit (Quit: This computer has gone to sleep)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.