#ceph IRC Log

IRC Log for 2012-03-16

Timestamps are in GMT/BST.

[0:00] <Tv|work> invisible mistake
[0:00] <Tv|work> in other news, a reinstalled plana box is perfectly fine for running teuthology jobs
[0:06] <elder> I bet it was actually *fast*
[0:06] <elder> That's a lot of 0 bits.
[0:08] * softcrack (de8084f0@ircip3.mibbit.com) has joined #ceph
[0:11] <softcrack> Hello, is Mr. Greg Farnum here?
[0:11] <Tv|work> softcrack: he walked away from his keyboard a few minutes ago; he'll be back soon
[0:12] <softcrack> ok, thanks
[0:13] <Tv|work> "Storing the power control usernames and passwords in Cobbler means that information is essentially public (this data is available via XMLRPC without access control)"
[0:13] <Tv|work> how much worse can you design software?
[0:14] <Tv|work> i do believe this same thing will let me edit files as root, over the web...
[0:14] <Tv|work> anyway, good news is, it wouldn't do much good with that information anyway; literally it'd perform exactly what you request, no automation or smarts
[0:16] <Tv|work> yeah, it won't even understand a "please reimage this" that'd Do The Right Thing
[0:18] <joshd> Tv|work: is this cobbler itself, or something on top of it?
[0:18] <Tv|work> cobbler
[0:19] <nhm> Tv|work: Huh, I was thinking about using cobbler before. Maybe I'm glad I never ended up doing anything with it.
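
For context on the complaint above: Cobbler exposes its data over a well-known XMLRPC endpoint, and read queries require no authentication, so anything stored in a system record (power-management credentials included) is readable by anyone who can reach the server. A minimal sketch of such a lookup, assuming a Cobbler server at the placeholder hostname cobbler.example.com:

    # query the unauthenticated XMLRPC API for all system records, which
    # carry any stored power_user/power_pass fields
    curl -s http://cobbler.example.com/cobbler_api \
      -H 'Content-Type: text/xml' \
      --data '<?xml version="1.0"?><methodCall><methodName>get_systems</methodName><params/></methodCall>'
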
[0:28] <gregaf> softcrack: hey, wasn't watching my window :)
[0:29] * softcrack (de8084f0@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[0:37] * softcrack (~samuel@202.85.210.57) has joined #ceph
[0:37] * lofejndif (~lsqavnbok@04ZAAB2Y4.tor-irc.dnsbl.oftc.net) Quit (Quit: Leaving)
[0:40] <nhm> Man, the day goes by so fast.
[0:41] <gregaf> softcrack: I've gotta run for a bit, be back later…meanwhile if you're having trouble gathering up information or understanding what things mean joshd should be able to help you :)
[0:43] <softcrack> ok, thanks. Is Mr. joshd here?
[0:43] <joshd> yup
[0:44] <softcrack> Shall I post 'ceph -s' here?
[0:44] <joshd> pastebin or similar is preferred
[0:44] * softcrack (~samuel@202.85.210.57) has left #ceph
[0:46] * softcrack (~samuel@202.85.210.57) has joined #ceph
[1:00] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:30] <joao> sagewk, still there?
[1:30] <dmick> softcrack: do you need help with pastebin?
[1:31] <nhm> joao: what time is it out there? :P
[1:31] <dmick> 12:31 AM
[1:31] * tnt_ (~tnt@148.47-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:31] <joao> yeah, what dan said
[1:31] <dmick> he's a grad student, he doesn't care
[1:31] <nhm> dmick: that was impressively fast. :)
[1:31] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[1:31] <dmick> I haz a configurationz
[1:32] <softcrack> Perhaps it's blocked by the Great Firewall of China
[1:32] <nhm> dmick: I can't stay up that late anymore. 6:30am is sleeping in.
[1:32] <dmick> softcrack: what happens when you try?
[1:32] <joao> so, do any of you happen to have super powers on metropolis?
[1:32] <softcrack> Just a blank page.
[1:33] <dmick> softcrack: http://pastebin.com?
[1:33] <softcrack> yes, dmick
[1:33] <dmick> :(
[1:34] <softcrack> It works now.
[1:34] <nhm> softcrack: how about http://pastie.org?
[1:34] <joshd> dmick: it's ok, he can access fpaste.org
[1:35] <softcrack> thank you, dmick
[1:40] <joshd> joao: what do you need on metropolis?
[1:41] <dmick> joshd: I got it
[1:41] <dmick> https://github.com/pypa/virtualenv/pull/231
[1:42] <joshd> hooray! I've hit that way too many times
[1:42] <dmick> everyone has
[1:42] <dmick> it's really annoying
[1:54] <joao> does anyone have a clue on why would teuthology fail to connect to a target, even though the target is reachable?
[1:55] <joao> something like this: ValueError: failed connect to ubuntu@plana19.front.sepia.ceph.com
[2:01] <joshd> joao: what's the stacktrace before that?
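
The stacktrace never got posted, but a quick way to separate a teuthology problem from a plain SSH problem is to attempt the same connection by hand, assuming the same user and key teuthology would use:

    # verbose output shows where the handshake fails (DNS, auth, host key)
    ssh -v ubuntu@plana19.front.sepia.ceph.com true
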
[2:05] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[2:28] * softcrack (~samuel@202.85.210.57) has left #ceph
[2:34] * gohko (~gohko@natter.interq.or.jp) Quit (Quit: Leaving...)
[2:35] * joao (~JL@89.181.145.13) Quit (Remote host closed the connection)
[2:41] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[2:53] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[2:55] * ottod_ (~ANONYMOUS@li127-75.members.linode.com) has joined #ceph
[2:57] * ottod (~ANONYMOUS@9KCAAC20S.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[3:02] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:03] * softcrack (~samuel@202.85.210.57) has joined #ceph
[3:05] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:11] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) Quit (Quit: Leaving.)
[3:13] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[3:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:30] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[3:37] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[3:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:41] * softcrack (~samuel@202.85.210.57) Quit (Quit: softcrack)
[3:46] * chutzpah (~chutz@216.174.109.254) Quit (Quit: Leaving)
[4:17] <darkfader> sagewk: i'm gonna try to leave $early and make it for thu night beverages
[4:18] <darkfader> ideally we'd be able to waste 15 mins on ceph monitoring so i can account it as work time :)))
[5:02] <sage> darkfader: great!
[5:17] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[7:04] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[7:16] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) has joined #ceph
[7:55] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[8:17] * tnt_ (~tnt@148.47-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:41] * matthew_ (~imjustmat@pool-96-228-59-130.rcmdva.fios.verizon.net) Quit (Remote host closed the connection)
[8:49] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[8:57] * tnt_ (~tnt@148.47-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[8:59] * Liam_SA (~Liam_SA@41.161.35.80) has joined #ceph
[8:59] <Liam_SA> Hi all, can anyone help me with an error I'm having?
[9:01] * Liam_SA (~Liam_SA@41.161.35.80) Quit (Remote host closed the connection)
[9:02] * Liam_SA (~Liam_SA@41.161.35.80) has joined #ceph
[9:04] * tnt_ (~tnt@office.intopix.com) has joined #ceph
[9:09] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:12] <wonko_be> try putting the error here, or if it is spread over multiple lines, in a pastie
[9:15] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[9:51] * yoshi (~yoshi@p1062-ipngn1901marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[9:53] <wido> Liam_SA: What error?
[9:55] * Liam_SA (~Liam_SA@41.161.35.80) Quit (Quit: MegaIRC v4.06 http://ironfist.at.tut.by)
[10:01] * Liam_SA (~Liam_SA@41.161.35.68) has joined #ceph
[10:05] <Liam_SA> hi, i am having the following error after running mkcephfs and then ceph -w
[10:05] <Liam_SA> failed to open keyring from /etc/ceph/client.admin.keyring
[10:05] <Liam_SA> 2012-03-16 11:03:40.161382 b7167710 monclient(hunting): failed to open keyring: (2) No such file or directory
[10:05] <Liam_SA> 2012-03-16 11:03:40.161405 b7167710 ceph_tool_common_init failed.
[10:06] <Liam_SA> help would be much appreciated
[10:07] <Liam_SA> after starting it i get the following error
[10:07] <Liam_SA> ** ERROR: unable to open OSD superblock on /mnt/ceph/osd.1: (2) No such file or directory
[10:07] <Liam_SA> failed: ' /usr/bin/ceph-osd -i 1 -c /etc/ceph/ceph.conf
[10:39] <NaioN> euhmmm when do you get the error?
[10:39] <NaioN> did mkcephfs return errors?
[10:53] <Liam_SA> no, mkcephfs does not return any errors, but if i run ceph -w afterwards i get
[10:53] <Liam_SA> failed to open keyring from /etc/ceph/client.admin.keyring
[10:53] <Liam_SA> b70d4710 monclient(hunting): failed to open keyring: (2) No such file or directory
[10:53] <Liam_SA> b70d4710 ceph_tool_common_init failed.
[10:55] <NaioN> does the file exist?
[10:58] <NaioN> do you need auth at the moment?
[10:59] <NaioN> if you're experimenting, it's easier to build a cluster without auth and give it a try before enabling auth
[11:06] <wonko_be> go in /etc/ceph
[11:06] <wonko_be> create a symlink from client.admin.keyring to admin.keyring (or keyring.admin)
[11:08] <Liam_SA> no, the file doesn't exist, but i thought mkcephfs would create that file?
[11:09] <wonko_be> it is a "feature" of mkcephfs
[11:09] <wonko_be> :)
[11:09] <wonko_be> just do the symlink, and try ceph -w again
[11:11] <Liam_SA> i ran my mkcephfs like this: mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/dcceph.keyring, so i obviously have a dcceph.keyring
[11:12] <wonko_be> symlink to yours then
[11:12] <wonko_be> do ceph-authtool -l -n client.admin /etc/ceph/dcceph.keyring
[11:12] <wonko_be> it should output a strange string
[11:13] <Liam_SA> ok i will try that now thanks, will let you know
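
Pulling the workaround together in one place, a sketch that assumes the keyring really is at /etc/ceph/dcceph.keyring as described above:

    cd /etc/ceph
    # confirm the keyring holds a client.admin entry; this should print
    # the key (the "strange string")
    ceph-authtool -l -n client.admin /etc/ceph/dcceph.keyring
    # give it the filename the ceph tool looks for by default
    ln -s dcceph.keyring client.admin.keyring
    ceph -w    # retry
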
[11:16] <wonko_be> btw, anyone interested in ceph chef cookbooks? I just published mine at https://github.com/wonko/ceph-cookbook
[11:16] <wonko_be> it sets up a mon and some osds for now, handles the keyrings and the config file
[11:16] <wonko_be> input is appreciated (pull requests even more)
[11:18] <Liam_SA> if i run ceph-authtool i do get the key
[11:18] <wonko_be> so, do the symlink
[11:20] <Liam_SA> i did, but when i run ceph -w now i don't get anything; does that mean there are no errors?
[11:20] <wonko_be> is your mon running?
[11:23] <Liam_SA> it is now, i don't get the error anymore, thanks
[11:26] <Liam_SA> if i run ceph -k dcceph.keyring -c ceph.conf health i get mon.0 -> 'HEALTH_ERR no osds' (0), but i'll go check out your cookbook now; if i have more problems i'll ask.
[11:28] <wonko_be> start your osds
[11:28] <wonko_be> there are no osds running apparently
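
For the mkcephfs-era setup in this log, starting that osd would look something like the following sketch; the direct invocation is taken from the "failed:" line earlier, and the init-script form is the usual alternative:

    /etc/init.d/ceph start osd.1
    # or run the daemon directly:
    /usr/bin/ceph-osd -i 1 -c /etc/ceph/ceph.conf
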
[11:40] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:00] <verwilst> hi wonko_be :)
[12:03] * ArtemGr (~6dbccbdb@webuser.thegrebs.com) has joined #ceph
[12:03] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[12:03] <Liam_SA> i'm just (trying to) run 1 osd until i get it working; it does start the OSD, but on IP 0.0.0.0; is that the problem?
[12:03] <Liam_SA> === osd.1 ===
[12:04] <Liam_SA> Starting Ceph osd.1 on DCCeph1...
[12:04] <Liam_SA> ** WARNING: Ceph is still under development. Any feedback can be directed **
[12:04] <Liam_SA> ** at ceph-devel@vger.kernel.org or http://ceph.newdream.net/. **
[12:04] <Liam_SA> starting osd.1 at 0.0.0.0:6801/4532 osd_data /mnt/ceph/osd.1 /mnt/ceph/osd.1.jou
[12:08] <ArtemGr> `futimens` used in src/common/HeartbeatMap.cc is not available in older glibc versions, and with CentOS 5 the glibc is very hard to upgrade. I don't see a reason not to use `utimes` in HeartbeatMap.cc to update the file timestamp?
[12:13] * ArtemGr (~6dbccbdb@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC (EOF))
[12:29] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[12:36] * joao (~JL@89-181-145-28.net.novis.pt) has joined #ceph
[13:10] <wonko_be> Liam_SA: ip "0.0.0.0" usually means "any ip on the host"
[13:10] <wonko_be> verwilst: hey!
[13:13] <Liam_SA> wonko_be: once i start it and run ceph -w I get
[13:13] <Liam_SA> pg v2: 198 pgs: 198 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
[13:13] <Liam_SA> mds e3: 1/1/1 up {0=0=up:creating}
[13:13] <Liam_SA> osd e1: 0 osds: 0 up, 0 in
[13:13] <Liam_SA> log 2012-03-16 14:12:28.260977 mon.0 192.168.1.39:6789/0 3 : [INF] mds.? 192.168.1.39:6801/7994 up:boot
[13:13] <Liam_SA> mon e1: 1 mons at {0=192.168.1.39:6789/0}
[13:13] <wonko_be> your osd is not registering with the mon
[13:13] <wonko_be> 13:13 < Liam_SA> osd e1: 0 osds: 0 up, 0 in
[13:14] <wonko_be> that should say 1 up, 1 in
[13:14] <wonko_be> check the osd logs
[13:18] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has left #ceph
[13:20] <Liam_SA> wonko_be: sorry, noob question: where are the osd logs?
[13:20] <wonko_be> /var/log/ceph/
[13:20] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[13:20] <wonko_be> Liam_SA: don't expect ceph to be noob-proof yet, you got a steep learning path ahead
[13:31] <Liam_SA> wonko_be: this is the only error i find in the osd.0.log file
[13:32] <Liam_SA> filestore(/mnt/ceph/osd.0) error (17) File exists not handled on operation 20 (op num 1, counting from 1)
[13:32] <Liam_SA> filestore(/mnt/ceph/osd.0) unexpected error code
[13:32] <Liam_SA> filestore(/mnt/ceph/osd.0) transaction dump:
[13:35] <Liam_SA> :-)
[13:54] * oliver1 (~oliver@p4FD061CF.dip.t-dialin.net) has joined #ceph
[13:58] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[13:58] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[13:58] * Eduard_Munteanu (~Eduard_Mu@188.25.92.98) has joined #ceph
[13:58] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:00] <wonko_be> Liam_SA: http://tracker.newdream.net/issues/2105
[14:38] <Liam_SA> wonko_be: i can't find the solution. The ticket says: "The mkfs doesn't create an initial snap, so if we crash/stop before creating one, our first journal events will get replayed against a dirty current/. See wip-2105 for fix."
[14:39] <Liam_SA> wonko_be?
[14:43] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) has joined #ceph
[14:44] <wonko_be> you hit a bug
[14:45] <wonko_be> it will be fixed in the next version of ceph (.44)
[14:45] <wonko_be> removing everything and recreating your whole cluster will be the easiest way to get everything going again
[14:49] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[14:49] <Liam_SA> wonko_be: thanks for all the help, enjoy your weekend:-)
[14:50] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[14:57] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:04] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:05] * Liam_SA (~Liam_SA@41.161.35.68) Quit (Remote host closed the connection)
[15:31] <sage> artemgr: is futimes(2) available?
[15:31] <sage> bla nm
[15:37] <joao> sage, how long should that teuthology run for?
[15:37] <joao> I mean, it usually ends in about 40 minutes
[15:41] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[16:00] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) Quit (Remote host closed the connection)
[16:06] * aliguori (~anthony@cpe-70-123-132-139.austin.res.rr.com) has joined #ceph
[16:06] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has joined #ceph
[16:14] * Eduard_Munteanu (~Eduard_Mu@188.25.92.98) Quit (Quit: Lost terminal)
[16:22] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[16:25] <sagewk> joao yeah.. i run it in a loop w/ hammer.sh
[16:27] * ghaskins (~ghaskins@68-116-192-32.dhcp.oxfr.ma.charter.com) has left #ceph
[16:32] <oliver1> @sage: Hi, did you notice the update of #2132? Another crash today, only 2/4 OSDs running as of now :-\
[16:42] <oliver1> We have a VIP customer's image where I already updated the header, but the first rb.* block seems to be broken too. Is there a safe way to identify the block in /osd/data/current/*/*..., so that perhaps I can replace the broken one with one of the replicas? Just my last resort...
[16:48] <oliver1> Sage? U there? We just had a third OSD crash... and are waiting for journal replay... Perhaps you can drop in?
[16:49] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[16:56] <sagewk> oliver1: here
[16:57] <sagewk> elder: was your rbd fix in testing last night?
[16:57] <elder> Yes, why?
[16:57] <elder> Failure?
[16:57] <sagewk> yeah, same rbd thrashing job failed
[16:57] <sagewk> ubuntu@teuthology:/a/nightly_coverage_2012-03-16-a/1614
[16:58] <oliver1> @sage: Dropped you an email, too... uhm, would be cool to handle things offline from #ceph...?
[16:58] <elder> You answered my next question.
[16:59] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:01] <elder> Is it possible to see the console again?
[17:01] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[17:05] <sagewk> when tv gets in
[17:06] <elder> OK. It probably won't help much but it would help complete the picture
[17:08] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[17:08] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit ()
[17:23] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[17:34] <Tv|work> whhaaa vercoi reinstall is prompting me about iSCSI? how.. what..
[17:35] <Tv|work> elder, sagewk: what host?
[17:39] <elder> plana31, 51, or 39. Just a sec, trying to find which is the client.
[17:39] <sagewk> elder: grep role teuthology.log
[17:40] <elder> 39
[17:40] * joao (~JL@89-181-145-28.net.novis.pt) Quit (Remote host closed the connection)
[17:42] <elder> Actually it looks like the other two might be the dead ones. But I'm not very adept at teuthology scatology (yet).
[17:43] <Tv|work> elder: which one can't you ssh in to?
[17:45] <elder> Hmm. Actually I can get into all of them. Sage reported that testing last night crashed on the same tests as it did the night before. In that case the console showed the machine was dead. So I assumed it would be helpful to again get the console content.
[17:45] <elder> Maybe it was different from the previous test.
[17:45] <elder> (result)
[17:47] <elder> SSHException: SSH session not active
[17:47] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[17:49] * joao (~JL@89.181.145.28) has joined #ceph
[17:49] <Tv|work> if you can ssh in, you can dmesg
[17:49] <wonko_be> just a little self-promotion for any of the ceph-people in another timezone: 11:16 < wonko_be> btw, anyone interested in ceph chef cookbooks? I just published mine at https://github.com/wonko/ceph-cookbook
[17:49] <Tv|work> and then you don't need this crap
[17:50] <elder> Wait, I mistyped. I believe it's 39 after all.
[17:51] <elder> Yes, I mistakenly got into 59. It's plana39 and I can't get to it via ssh.
[17:51] <Tv|work> http://ceph.newdream.net/temp/plana39.png
[17:52] <Tv|work> (download and keep a copy, i'll rm -rf that temp dir after we're done here)
[17:52] <elder> Got it. Thank you.
[17:56] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[17:57] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[18:00] * BManojlovic (~steki@212.200.240.216) has joined #ceph
[18:00] <Tv|work> this progress bar isn't making progress :-(
[18:00] <nhm> Tv|work: hrm, maybe it should be called "stationary bar"?
[18:01] <Tv|work> oh i crashed the iDRAC
[18:01] <nhm> awesome
[18:01] <Tv|work> i can crash a drac like 5 times a day
[18:03] <Tv|work> http://ceph.newdream.net/temp/plana23.png
[18:03] <sagewk> wonko_be: nice. are you the one with the open pull request on github? (guilhem)
[18:03] <wonko_be> no
[18:03] <wonko_be> i talked to guilhem, he will incorporate his radosgw (which I don't use) in mine if he feels like it
[18:04] <dmick> Tv|work: that's the drac crashing? looks like the kernel
[18:04] <wonko_be> i looked at your recipes, but sorry to say, they are a crude way of setting up a default setup
[18:05] <wonko_be> a lot of big ruby blocks, no lwrp's, etc...
[18:05] <wonko_be> mine aren't all that nice either, I know, but they should be more versatile
[18:05] * chutzpah (~chutz@216.174.109.254) has joined #ceph
[18:05] <Tv|work> wonko_be: the truth is, i don't know much about ruby or chef
[18:06] <Tv|work> wonko_be: two things 1) read the "brain dump" email i sent to ceph-devel a while ago 2) do not use mkcephfs in a cookbook
[18:07] <Tv|work> wonko_be: apart from that, you're probably better off making progress with them yourself right now; i'll try to get back to it, but it's not realistic -- i'll happily review stuff next week, but to sit down & spend days on this, it'll take a few weeks until i get to that point
[18:08] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:08] <Tv|work> http://ceph.newdream.net/temp/plana37.png
[18:08] <wonko_be> Tv|work: i don't use mkcephfs, i tried to manage the keyrings, etc... through resources
[18:08] <Tv|work> sagewk, elder: 3 screenshots posted, please download and ack
[18:08] <Tv|work> wonko_be: did you see the bootstrap trick in my cookbooks?
[18:08] <elder> 37, 39, and...
[18:08] <elder> Oh, 23?
[18:08] <wonko_be> so I analyzed what mkcephfs actually does, pulled it a bit apart, and compiled it all
[18:08] <wonko_be> Tv|work: with the attribute
[18:09] <elder> Got all three. Thank you.
[18:09] <wonko_be> yeah, got a glimpse of it after i did most of my work, ... i kind of got to the same result, but a bit different
[18:10] <wonko_be> Tv|work: please don't be offended that I started my own cookbooks, but I needed a good cookbook for testing, as we do it here... it is the only thing I can contribute to ceph, so ...
[18:10] <Tv|work> wonko_be: i'm just afraid a lot of people are writing "ceph cookbooks" that are of no help in actually maintaining a cluster
[18:10] <Tv|work> wonko_be: i'm looking at things like hard drive replacement; the way we do those will *also* cover the initial install
[18:11] <wonko_be> Tv|work: eventually, it would be the goal to have that in them also
[18:11] <Tv|work> wonko_be: and that's pretty much what the brain dump email & current cookbooks are all about; i could have done a mkcephfs-based cookbook ages ago, i just don't feel it's useful
[18:11] <wonko_be> can you get me the link to your braindump-email?
[18:12] <wonko_be> i'm not following the mailing list; I'll have a look at it, and might either try and incorporate my cookbooks/ideas in a pull to yours, or do it vice-versa
[18:12] <Tv|work> wonko_be: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/5567
[18:13] <Tv|work> i know the whole concept is not coherently explained; it's not coherently formed as an idea yet
[18:13] <wonko_be> give me a sec, I'll skim it quickly
[18:13] <Tv|work> all i know is that mkcephfs is a poor model for ongoing operation
[18:14] <sagewk> wonko_be: please follow up with any comments on the list.. would love to start moving us in the right direction
[18:14] <wonko_be> well, as a start, I've pulled apart all the keyring management stuff, authentication, etc...
[18:15] * ceph (~hylick@32.97.110.63) has joined #ceph
[18:17] <ceph> has anyone else run into connection timeouts when using ceph-fuse to mount ceph? it has worked in the past, but now it is not mounting ceph any longer. I did not update or change anything. i simply unmounted ceph, restarted ceph, and tried to mount ceph again.
[18:17] <ceph> any ideas?
[18:17] <sagewk> elder, nhm: join the vidYO room, yo!
[18:17] <elder> Oh yeah
[18:18] <nhm> oh whoops
[18:18] <wonko_be> sagewk: i'll subscribe
[18:18] <nhm> sagewk: might be a bit
[18:18] <elder> Dayum. Go ahead without me.
[18:18] <wonko_be> Tv|work: I'll digest your mail (phew, lengthy) over the weekend
[18:18] <elder> Yeah, I have to go get the e-mail.
[18:18] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:20] * ceph (~hylick@32.97.110.63) Quit (Quit: Leaving.)
[18:20] <nhm> hrm, their debian package is apparently "of bad quality"
[18:21] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) Quit (Quit: Leaving.)
[18:21] * hylick (~hylick@32.97.110.63) has joined #ceph
[18:22] <hylick> (repost: with my name correct this time) has anyone else run into connection timeouts when using ceph-fuse to mount ceph? it has worked in the past, but now it is not mounting ceph any longer. I did not update or change anything. i simply unmounted ceph, restarted ceph, and tried to mount ceph again.
[18:22] <wonko_be> Tv|work: a lot of stuff in your mail actually would make the cookbooks a lot cleaner
[18:23] <elder> nhm yes, I also was given that observation.
[18:23] <wonko_be> i'm willing to maintain and update the cookbooks as ceph moves forward, but I don't have a say in the development
[18:23] <wonko_be> so I created them to do just my thing (which is "test ceph once every month, at least")
[18:26] <wonko_be> Tv|work: parts of what you say in your mail are already in mine... /ceph is my base, /ceph/osd/$id, /ceph/mon/$id, ... I keep the keyrings neatly per function, adding the keys as needed on hosts...
[18:27] <wonko_be> as for all the "moving data, etc..." a lot of work/input from you guys is needed
[18:28] <elder> nhm, you have any luck installing?
[18:34] <nhm> elder: I got it to install finally, and then it kept erroring out with "could not start client" or something like that. After screwing around for a while it mysteriously started working.
[18:35] <elder> Hmm.
[18:35] <elder> I keep getting "it isn't running"
[18:35] <nhm> elder: yeah, I think I was able to get it to run by starting from the command line.
[18:35] <elder> The install, after saying "I don't care if it's crappy", ended with an un-reassuring "Install" button still there after it apparently completed.
[18:35] <joshd> hylick: can you pastebin the output of 'ceph -s'?
[18:35] <elder> Oh, I'll try that. Just VidYo, yo?
[18:36] * tnt_ (~tnt@office.intopix.com) Quit (Ping timeout: 480 seconds)
[18:36] <nhm> elder: yeah, that happened to me too. I think it's actually installed. I ended up just doing "dpkg --install" of the deb from the commandline to be sure.
[18:39] <elder> What the hell would this thing need with libblkid?
[18:40] <elder> OK, what command do you run to start it, once you got it installed with dpkg?
[18:41] <nhm> I ended up running /opt/vidyo/VidyoDesktop/VidyoDesktop which ultimately seemed to make it work.
[18:41] <nhm> never worked when I tried through the gnome menus.
[18:41] <elder> /opt/vidyo/VidyoDesktop/VidyoDesktop: error while loading shared libraries: libblkid.so.1: cannot open shared object file: No such file or directory
[18:41] <nhm> huh
[18:42] <sagewk> for me it showed up under applications > internet (ubuntu, gnome classic)
[18:42] <nhm> sagewk: yeah, it's there for me too, but it never worked when I tried that.
[18:42] <elder> Well I see it there, but it doesn't work on Ubuntu 11.10 with the default fancy pants UI
[18:43] * tnt_ (~tnt@148.47-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:43] <elder> I presume we're done with our meeting now. I'm going to set this aside, I'm losing focus, and patience.
[18:43] <joao> so, is there any chance I can deploy a custom kernel for a teuthology run without going through the gitbuilder server?
[18:46] <dmick> joao: not yet. kernel installs are Coming Very Soon
[18:47] <Tv|work> elder: "apt-get install libblkid1" perhaps?
[18:47] <Tv|work> joao: lock a machine, install a kernel manually, run teuth job specifying explicit targets, cackle like the evil mastermind you are
[18:48] <joao> eheh
[18:48] <joao> will do, thanks :)
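
A sketch of the manual route described above; the hostname and host key are placeholders, and it relies on teuthology merging the yaml files given on its command line:

    # after locking a machine and installing the custom kernel on it by
    # hand, point teuthology straight at that host:
    cat > targets.yaml <<'EOF'
    targets:
      ubuntu@plana19.front.sepia.ceph.com: ssh-rsa AAAA...placeholder...
    EOF
    ./virtualenv/bin/teuthology job.yaml targets.yaml
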
[18:49] <dmick> the two problems I had on Ubuntu were: 1) the package whined about its "installed size" being invalid, and 2) it hadn't run ldconfig afterward, so couldn't find libs it installed in /usr/lib32
[18:50] <dmick> AFAICT the menu option is just to start the daemon; you then have to browse to the portal to get the client actually running
[18:50] <joao> that's why I've grown to love mac osx
[18:50] <nhm> dmick: yeah, tried that. It kept erroring out that the client couldn't start until I ran it manually from the command line.
[18:50] <dmick> (the preceding two messages were about Vid-wye-oh)
[18:56] <wonko_be> actually, there are a lot of hidden gems in there to build cookbooks: "Each cluster has a uuid, and each ceph-osd instance "
[18:56] <wonko_be> gets a uuid when you do ceph-osd --mkfs. That uuid is recorded in the osd
[18:56] <wonko_be> data dir and in the journal, so you know that they go together.
[18:57] <elder> Tv|work, already installed
[18:57] <wonko_be> i've been struggling to keep everything tied together in the cookbooks
[18:58] <dmick> elder: which dir is it in? could be you also need ldconfig
[18:59] <elder> Perhaps. I couldn't remember the name of that program.
[18:59] <elder> Or whether it was needed...
[19:00] <elder> Apparently it was.
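
The sequence that worked for elder, reconstructed from the exchange above:

    sudo apt-get install libblkid1    # provides libblkid.so.1 (already present in this case)
    sudo ldconfig                     # rebuild the loader cache so the library is found
    /opt/vidyo/VidyoDesktop/VidyoDesktop
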
[19:12] <oliver1> uhm... as Sage had to leave, he advised me to cherry-pick f2e6b8d7501f5f7cbae024b399a7938f7509d796, anybody there to confirm that the result: v0.43-1-g3e789bb
[19:12] <oliver1> is most current?
[19:13] <Tv|work> oliver1: if you cherry-pick, the resulting sha1 is essentially random, we can't help you on that one
[19:13] <Tv|work> (it includes your email & current time)
[19:14] <yehudasa> oliver1: on top of what commit did you cherry-pick it?
[19:15] <oliver1> git describe (make sure it says 0.43)
[19:15] <oliver1> all done on top of "git clone git://github.com/ceph/ceph.git"
[19:16] <yehudasa> oliver1: did you check out a specific branch?
[19:17] <oliver1> Nope.
[19:17] <yehudasa> oliver1: just cloning that would not get you onto 0.43, but rather the top of the devel branch; pretty sure that's not what you were aiming at
[19:17] <Tv|work> oliver1: i did it here, verify that this output matches:
[19:17] <Tv|work> $ git rev-parse HEAD^{tree}
[19:17] <Tv|work> 4ddb5e9d8281b7af734c09374b602fe2d18ca5ee
[19:17] <oliver1> git describe showed 0.43
[19:18] <Tv|work> that's f2e6b8 cherry-picked on top of v0.43
[19:19] <Tv|work> to reproduce: git checkout v0.43 && git cherry-pick f2e6b8d7501f5f7cbae024b399a7938f7509d796
[19:19] <oliver1> git rev-parse HEAD^{tree} is OK.
[19:19] <joao> sheer curiosity: how does one avoid the cleanup of /tmp/cephtest after a teuthology run dies a terrible death?
[19:19] <joao> I mean, without resorting to --archive
[19:19] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) has joined #ceph
[19:20] <dmick> insert an 'interactive:' task; that makes teuth wait in a python shell before cleaning up
[19:20] <joao> yeah, it's there
[19:20] <dmick> (assuming it's the test case dying and not teuthology itself)
[19:20] <joao> interactive-on-error: true
[19:21] <dmick> - interactive:
[19:21] <dmick> as the last task
[19:22] <joao> I guess that would wait for user input whenever a test run finished, when using hammer.sh
[19:22] <joao> no?
[19:22] <joao> in any case, I have no idea what just happened here
[19:22] <joao> INFO:teuthology.task.ceph:Checking cluster ceph.log for badness...
[19:22] <joao> WARNING:teuthology.task.ceph:Found errors (ERR|WRN|SEC) in cluster log
[19:22] <joao> but one thing I know for sure: it wasn't what I was looking for
[19:22] <joao> :(
[19:23] <dmick> oh. I don't know about hammer
[19:23] <dmick> I would expect it to pause after each run in that loop, yes
[19:23] <yehudasa> oliver1: just to make sure: what does git rev-parse HEAD^{tree} show you?
[19:24] <joao> dmick, thanks anyway. I'll re-hammer it and hope to trigger all the bad things I want to trigger ;)
[19:24] <dmick> I would have thought interactive-on-error: true would have been what you wanted, but maybe the caveat there applies
[19:25] <joao> dmick, yeah, I supposed so as well
[19:25] <Tv|work> joao: why would it not be what you want?
[19:25] <joao> Tv|work, because I want to reproduce #1975, and no warns appeared on dmesg on either machine
[19:26] <joao> so I gather it was something else which I'm totally unable to explain since /tmp/cephtest is gone
[19:26] <oliver1> @yehudasa, "is ok" should read: 4ddb5e9d8281b7af734c09374b602fe2d18ca5ee ;-)
[19:26] <Tv|work> joao: you said "after a teuthology dies a terrible death"?
[19:27] <joao> too strong?
[19:27] <Tv|work> joao: on errors, interactive-on-error: true will drop you into an interactive python shell
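
The two mechanisms being contrasted here, as they would sit in a teuthology job file; a sketch with an illustrative task list:

    cat > job.yaml <<'EOF'
    interactive-on-error: true    # drop into a python shell when a task errors out
    tasks:
    - ceph:
    - interactive:                # always pause here, before teardown and cleanup
    EOF
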
[19:27] <yehudasa> oliver1: then you're ok
[19:27] <joao> Tv|work, it didn't
[19:27] <Tv|work> joao: *and* there was an error? That'd be a bug. Share the output.
[19:27] <oliver1> An interesting point remains: once it's compiled... will there be another version conflict with the other nodes running ceph version 0.43 (commit:9fa8781c0147d66fcef7c2dd0e09cd3c69747d37)?
[19:28] <joao> Tv|work, those two lines were the only thing that popped up
[19:28] * groovious (~Adium@64-126-49-62.dyn.everestkc.net) has joined #ceph
[19:28] <joao> after the warning, it just cleaned up
[19:28] <yehudasa> oliver1: no
[19:28] <joao> and previous to that, it was restarting a run
[19:28] <joao> *starting a new run
[19:29] <oliver1> @yehudasa: "no" is a much appreciated answer :-D
[19:29] <joao> but unfortunately that log is gone, started another run meanwhile
[19:29] <Tv|work> joao: oh yes, the log check just marks success=False, it doesn't actually make that an error
[19:29] <joao> I'll make sure to keep it if it happens again
[19:30] <Tv|work> joao: i got what i needed, thanks; not sure yet whether that justifies a behavior change
[19:32] <Tv|work> basically, the log check was written to be a "silent failure"
[19:34] <joao> I wonder if the log contains any relevant info though
[19:36] <Tv|work> joao: yeah always use --archive=, it's just better that way
[19:37] * oliver1 (~oliver@p4FD061CF.dip.t-dialin.net) has left #ceph
[19:37] <joao> Tv|work, yeah, I meant to change hammer.sh to archive runs in different dirs depending on the run
[19:38] <Tv|work> $ cat /home/tv/src/teuthology.git/run
[19:38] <Tv|work> #!/bin/sh
[19:38] <Tv|work> set -e
[19:38] <Tv|work> exec ./virtualenv/bin/teuthology --lock --archive="archive/$(date +%Y-%m-%dT%H-%M-%S)" "$@"
[19:38] <Tv|work> <3 that
[19:38] <Tv|work> i've been meaning to add --archive-dir=archive as an option to teuthology, and let it do the timestamp for you
[19:38] <joao> but in-between thinking it and actually doing it, I started a new run and now I'm waiting :p
[19:39] * Oliver1 (~oliver1@p4FD061CF.dip.t-dialin.net) has joined #ceph
[19:51] * sagewk is very happy those ugly autoconf errors are finally gone
[19:52] <dmick> someone should thank the guy that fixed those
[20:00] <gregaf> so it looks like git 1.7.5 does The Right Thing with submodules, fyi...
[20:01] <gregaf> updates them when the master repo changes, etc
[20:03] <sagewk> gregaf: is that a new feature, or did they perhaps make an existing option enabled by default?
[20:07] <joao> oh joy
[20:07] <joao> I triggered #1975 :D
[20:08] <sagewk> joao: yay!
[20:08] <joao> yay! indeed :D
[20:08] <joao> now, to the kernel-instrumentation mobile!
[20:09] <joao> sometimes I think this happiness I'm feeling can't be a good sign of mental health
[20:10] <sagewk> "hooray, i broke it!"
[20:12] <joao> unfortunately, the dog is nagging me and I'll have to walk him before I can look at this :(
[20:12] <joao> brb
[20:16] <dwm__> Howdy chaps. Ceph in London?
[20:18] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[20:19] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) has joined #ceph
[20:28] <sagewk> sjust: see updated wip_watchers branch
[20:31] * adjohn (~adjohn@50-0-164-119.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[20:41] * Oliver1 (~oliver1@p4FD061CF.dip.t-dialin.net) has left #ceph
[20:42] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:55] * Oliver1 (~oliver1@ip-88-153-227-154.unitymediagroup.de) has joined #ceph
[20:56] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[21:04] * hylick (~hylick@32.97.110.63) has left #ceph
[21:05] * absynth (~absynth@mail.absynth.de) has joined #ceph
[21:05] <absynth> evening, everyone
[21:05] <absynth> sagewk: i'm Oliver1's replacement, he's hitting the road. stxShadow should be around soonish, too
[21:06] <Oliver1> … driving 170km… should not take _that_ long ;)
[21:06] <absynth> heh
[21:06] * Oliver1 (~oliver1@ip-88-153-227-154.unitymediagroup.de) Quit (Quit: Leaving.)
[21:09] * stxShadow (~jens@ip-88-153-224-220.unitymediagroup.de) has joined #ceph
[21:20] <sagewk> sjust: repushed wip_watchers
[21:20] <sagewk> stxshadow: this is your fix. reviewing and testing now.
[21:20] <sagewk> absynth: you too :)
[21:21] <stxShadow> sagewk :) thanks a lot
[21:21] <elder> sagewk, I don't expect to fix the rbd thing today.
[21:21] <gregaf> dwm__: not sure what you mean by "Ceph in London"?
[21:21] <sagewk> elder: ok. is it the same thing or something else?
[21:21] <elder> I don't know. I haven't actually been looking at it (yet).
[21:21] <sagewk> gregaf: dona and bryan are in uk today
[21:22] <gregaf> yeah, wasn't sure if it was that or something else — and nobody else will be there
[21:22] <sagewk> dwm__: dona got your info@ email, but i'm not sure about her availability
[21:22] <gregaf> but if you want to talk about giving us money, I'm sure they could squeeze out some time ;)
[21:22] <sagewk> elder: ok
[21:23] <elder> Just a minute
[21:24] <elder> It looks the same, but apparently due to something else. The message is:
[21:24] <elder> INFO: rcu_sched detected stalls on CPUs/tasks: { 0} (detected by 3, t=3215698 jiffies)
[21:25] <elder> (where "3" is a CPU other than the stalled one, and "3215698" is different each time.) Tv gave me three console screen grabs; all look similar.
[21:25] <elder> Due to a locking problem though. Were the old sepia machines single-processor?
[21:25] <sjust> sagewk: looking
[21:27] <sagewk> elder: 4 proc, i think
[21:27] <elder> Hmm. And the new ones?
[21:27] <elder> plana
[21:27] <sagewk> 8 core
[21:27] <elder> Interesting. Well as before there is very little information to go on.
[21:28] <sagewk> gregaf: can you peek at wip-config sometime today?
[21:28] <elder> I could maybe try out my core dumping stuff though,
[21:28] <sagewk> yeah
[21:29] <elder> I'm doing a test right now but my target machine takes a ridiculously long time to dump its core.
[21:30] <elder> It's been 40 minutes since I triggered it. All I know is, when I let it go and went to bed, the core was complete in the morning...
[21:32] * Tv|work (~Tv_@aon.hq.newdream.net) Quit (Quit: Tv|work)
[21:36] * Tv|work (~Tv_@aon.hq.newdream.net) has joined #ceph
[21:40] <dwm__> gregaf, sagewk: Regretfully, I have no money to spare..
[21:41] <sagewk> sjust: repushed
[21:41] <sjust> lookin
[21:42] <sjust> typo in create_object_context, populate_object_watchers
[21:42] <sagewk> where?
[21:43] <sagewk> ah
[21:44] <sjust> populate_object_watchers rather than populate_object_watcher
[21:44] <sjust> looks good otherwise
[21:45] <sagewk> stxshadow, absynth: i'll push a wip-watchers-stable branch for you guys
[21:46] <stxShadow> oliver is afk at the moment .... he will be back in half an hour
[21:47] <gregaf> sagewk: wip-config is just that one commit on top of master, right? looks all good!
[21:47] <sagewk> yeah thanks
[21:47] <gregaf> and the git submodule stuff just isn't in 1.7.2, unfortunately :(
[21:47] <sagewk> oh well
[21:48] <gregaf> they've got a new one in squeeze-backports; I think I'm just going to pull that down
[21:48] <gregaf> otherwise I know I'll break the repo
[21:51] <absynth> sagewk: purely out of curiosity, are the issues we're seeing very exotic or rather common?
[21:53] <gregaf> absynth: Christian Brunner reported one of them on the mailing list today — it's the only other report
[21:53] <gregaf> but I gather that you're running a significant fraction of the RBD-backed VMs in existence right now...
[21:55] <absynth> heh. good for us ;)
[21:55] <absynth> who of you is going to be in Germany next week?
[21:56] <gregaf> a bunch of business people, and Sage
[21:56] <absynth> i hope he isn't sx. i reckon there will be beer.
[21:56] <sagewk> it's a race with rbd watchers and recovery. it's a little frustrating our qa hasn't turned it up.. there's a ticket open now to improve the test coverage for this code
[21:57] <sagewk> sx?
[21:57] <absynth> straight edge
[21:57] <sagewk> thankfully no. looking forward to the beer
[21:57] <absynth> your first time over here?
[21:58] <sagewk> i lived in southern germany for a month in high school, and have visited a few times since then
[21:58] <absynth> ah, ok
[21:58] <sagewk> but that was a while ago.. my german is a bit rusty
[21:59] <darkfader> hallo ich haette gerne drei bier ("hello, I'd like three beers")
[21:59] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) has joined #ceph
[21:59] <darkfader> it'll work out.
[21:59] <absynth> yeah, there's pretty much only "ein bier bitte" ("one beer, please") and "prost" ("cheers") to remember
[21:59] <sagewk> absynth, stxshadow: wip-watchers-stable branch is passing my smoke tests. you guys should pull that down and build it
[21:59] <sagewk> :)
[22:00] <dmick> ein? surely zwei or drei :)
[22:00] <absynth> can you query stxShadow the pull details? oliver took his history with him
[22:00] <nhm> sagewk: I bet you are looking forward to the beer! :D
[22:01] <sagewk> stxshadow: just checkout wip-watchers-stable and build that
[22:01] <stxShadow> ok
[22:01] <sagewk> which will get you v0.43-6-gcc2c383
[22:01] <nhm> btw, Macallan or Glenmorangie for a congratulations gift?
[22:03] <absynth> oh, you're a Malt drinker?
[22:05] <gregaf> isn't Glenmorangie a proper little thing and Macallan comes from all over?
[22:05] <absynth> both are single malts
[22:05] <nhm> absynth: not a serious one, but my brother-in-law just got a new job so I want to get him something and we both occasionally enjoy scotch. :)
[22:06] <absynth> get him whatever he didn't have yet
[22:06] <absynth> both are good, IMHO
[22:06] <gregaf> unfortunately I much prefer non-Scotch whiskeys; scotch always feels like I ran into a truck if I drink more than a sip :(
[22:06] <absynth> (if one is into highland/speyside whiskies)
[22:06] <nhm> absynth: He likes Oban and I like Balvenie. I'm not sure if either of us have had the above mentioned.
[22:07] <absynth> well... if you like oban, try Cragganmore
[22:07] <absynth> another good one is Glenrothes. there's hundreds of editions available (literally)
[22:08] <nhm> ok. I'll have to see what I can pick up on the way there. Won't have time to hit up the good store.
[22:10] <nhm> gregaf: I don't drink much anymore so I know what you mean. Mostly a glass of one of my favorite beers on the weekend is about all I do now.
[22:10] <gregaf> it's not just the alcohol; there's something about scotches specifically even compared to Irish whiskeys or bourbon :/
[22:11] <gregaf> *shrug*
[22:11] <gregaf> I buy my dad one or two nice bottles every year and at least get to taste them ;)
[22:11] <gregaf> and it turns out you can get a pretty good variety of regular whiskey too
[22:12] <nhm> huh, I haven't noticed that, but everyone is different. I used to love stouts but now I get raging headaches from many of them.
[22:13] <absynth> i can't drink "weizen" (the cloudy german beer made with lots of yeast). gives me headaches after the first pint
[22:15] <sagewk> sjust: wip-2080?
[22:18] <sjust> sagewk: looks good
[22:18] <sjust> sagewk: don't merge the wip_watchers branch yet, seems to have a bug
[22:18] <sagewk> sjust: what i'm more worried about is that we didn't catch the ENOENT on truncate. i think the filestore error checks need to be much more strict
[22:19] <absynth> should we hold back on our experiments?
[22:19] <sagewk> stxshadow, absynth: hold off on pushing out that new binary, yeah
[22:21] <absynth> understood
[22:21] <joao> is there any server on the planas subnet with the ceph-client repo?
[22:22] <joao> (just because I get easily bored waiting for github)
[22:24] <sagewk> fixing the mirror at git://ceph.newdream.net/git/ceph-client.git
[22:25] <absynth> i gotta get some sleep. presume oliver's gonna be back soon, so if you get another fix up soon, he can deploy it tonight or in the morning or something
[22:25] <absynth> sagewk: when are you flying over?
[22:26] <sagewk> arrive monday midday
[22:26] <absynth> we'll be in rust on thursday. dona said we should aim for a 2pm meeting
[22:27] <joao> sagewk, had I known you were coming to Europe last month, I would have made arrangements to drop by to say hi :p
[22:27] <nhm> sagewk: ah, too bad. when are you going to be back?
[22:27] <nhm> sagewk: I'll be in late on sunday.
[22:28] * Oliver1 (~oliver1@p5483D33E.dip.t-dialin.net) has joined #ceph
[22:28] <sagewk> nhm: argh that's right, bad timing :(
[22:29] <Oliver1> ouch, driven too fast, not ready yet?
[22:30] <nhm> sagewk: I'll have to cause mayhem while you are gone.
[22:30] <sagewk> :)
[22:30] <elder> I'm getting nowhere and am going to call it a day. Too nice outside right now.
[22:30] <elder> Bad way to end my week though.
[22:31] <sagewk> elder: :( ok
[22:31] <elder> Enjoy Deutschland Sage.
[22:31] <sagewk> enjoy your freakish weather :)
[22:31] <nhm> hehe
[22:31] <elder> I know, it's like LA.
[22:31] <elder> I'm hoping to water ski Sunday. On a lake.
[22:32] <dmick> much better than trying to waterski on a field
[22:32] <nhm> elder: wow
[22:32] <nhm> elder: water will still be cold. ;)
[22:33] <elder> But it will be liquid.
[22:33] <elder> I had the chance to go last Saturday but skipped it.
[22:33] <elder> That was on the river though.
[22:34] <nhm> We started planting our vegetable garden a couple of days ago.
[22:34] <elder> Fools!
[22:34] <elder> It's going to freeze.
[22:34] <elder> You do that in May.
[22:34] <nhm> elder: naw, it's ok. Cold weather crops.
[22:35] <elder> Anyway, have a good trip Mark.
[22:35] <nhm> or at least that's what my wife tells me. ;)
[22:35] <nhm> elder: thanks, enjoy the lake!
[22:35] <elder> I'll see if I can get Veedie Yoyoyo going on Friday.
[22:35] <elder> Monday.
[22:35] <sagewk> stxshadow, absynth: carry on, the bug doesn't affect you guys
[22:36] <elder> Or maybe tonight If that's how the journal club will be done.
[22:36] <sagewk> (you aren't using rbd snapshots, right?)
[22:36] <stxShadow> not right now
[22:36] <sagewk> ok, then you can go ahead with wip-watchers-stable
[22:36] <nhm> ah that's right. I won't be making it to journal club, family get together.
[22:36] * ^conner (~conner@leo.tuc.noao.edu) Quit (Read error: Operation timed out)
[22:36] <nhm> oh well
[22:37] <gregaf> elder: I was thinking Skype, but I guess I should go see what's working right now
[22:37] <gregaf> nhm: bummer :(
[22:38] <gregaf> elder: but if you're going waterskiing now I don't think you'll be at a video camera in 80 minutes?
[22:39] <nhm> gregaf: yeah, I was planning on it but my brother-in-law went and got a new job.
[22:39] <gregaf> some people are so inconsiderate
[22:39] <gregaf> what's he doing now?
[22:40] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Operation timed out)
[22:40] <nhm> gregaf: working sales for some kind of parts distributor. Arrow I think?
[22:41] <gregaf> aw crap, the new git doesn't actually make it less stupid about submodules, I just didn't init them on my test instance
[22:41] <gregaf> argh
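
For anyone hitting the same wall: newer git keeps initialized submodules in sync, but the one-time init is still manual. A sketch:

    git submodule update --init    # one-time: register and clone each submodule
    git submodule update           # afterwards: check out the commits the superproject records
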
[22:51] * ArtemGr (~chatzilla@94.188.74.88) has joined #ceph
[22:51] <ArtemGr> sage: no, the futimes manpage is not available.
[22:52] * ^conner (~conner@leo.tuc.noao.edu) has joined #ceph
[22:52] <ArtemGr> sage: cat /etc/redhat-release: CentOS release 5.8 (Final); man 2 futimes: No entry for futimes in section 2 of the manual
[22:54] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) Quit (Quit: Leaving.)
[22:56] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:00] <sagewk> artemgr: we can make autoconf detect the syscall, i guess
[23:01] <sagewk> artemgr: bleh, or just use the patch.
[23:02] <ArtemGr> sagewk: yeah, I've changed it to "::utimes(path.c_str(), NULL);"
[23:02] <ArtemGr> sagewk: I wonder if GLIBC version could be checked with something like #elseif defined GLIBC && GLIBC < 123
[23:04] <sagewk> well, it's easy to detect the syscall, but it's probably not worth all the #ifdef noise. just using utimes().
[23:05] <ArtemGr> sagewk: I don't know if utimes is slower due to extra path->inode lookup
[23:06] * lofejndif (~lsqavnbok@83TAAD6BX.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:06] * lofejndif (~lsqavnbok@83TAAD6BX.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[23:15] <ArtemGr> sagewk: don't know why you're asking about the man pages, but I did a "yum install man-pages man-pages-overrides" and now "man futimes" works, although it's in the third section, e.g. "man 3 futimes"
[23:16] <sagewk> artemgr: no worries, i'm just switching to utimes() to be done with it.
[23:19] <ArtemGr> sagewk: okay, thanks!
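
Background for the exchange above: futimens() first shipped in glibc 2.6, and CentOS 5 carries glibc 2.5, hence the build failure. Two quick checks on a build host (the library path assumes a 64-bit system):

    getconf GNU_LIBC_VERSION                     # prints e.g. "glibc 2.5" on CentOS 5
    nm -D /lib64/libc.so.6 | grep -w futimens    # no output means the symbol is absent
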
[23:31] * Jaykra (~Jamie@64-126-89-248.dyn.everestkc.net) has joined #ceph
[23:36] <absynth> oh man
[23:36] <absynth> now our third osd is dying on us, after each recovery
[23:37] <absynth> sagewk: do you have any idea how we can achieve a stable state now?
[23:50] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[23:50] * lofejndif (~lsqavnbok@28IAADDRG.tor-irc.dnsbl.oftc.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.