#ceph IRC Log

Index

IRC Log for 2012-07-06

Timestamps are in GMT/BST.

[0:00] <darkfader> not.a.san.
[0:00] <neerbeer> darkfader: correct. sorry
[0:00] <darkfader> if you know then i take it back
[0:00] <darkfader> you only had one mon running right?
[0:01] <neerbeer> yes. mons can only run in odd numbers if I understand correctly.
[0:01] <darkfader> oh greg is around, i'll shush. more competent people first :>
[0:01] <darkfader> yes, thats right
[0:02] <neerbeer> unless there's some way to run 3 mons on two hosts by placing one on a different port ... but I'll check the docs for that. This is just my test rig.
[0:02] <darkfader> i think it can go blocking if you shut down before the data arrived on both osds
[0:02] <darkfader> no, dont bother for now, i think i understand what happens. maybe :>
[0:03] <darkfader> can you first try to let the write/sync complete and everything stabilize, then fail the single OSD and see if you can still read?
[0:03] <neerbeer> I was just trying to simulate a osd/host failure .
[0:03] <neerbeer> I can try that
[0:03] <darkfader> i know, but try to do it in two steps, first see if it stays accessible for the read, then try the second thing
[0:03] <darkfader> if you have a replication of 1 (so two copies), it might HAVE to block if one osd goes offline
[0:04] <darkfader> since it couldn't fulfill policy
[0:05] <darkfader> and doing a "does read after fully syncing also block" makes troubleshooting a lot clearer
[0:05] <darkfader> and good night
[0:05] <neerbeer> replication = 1 is two copies ?
[0:05] <dmick> no, one
[0:05] <dmick> er, I think
[0:05] <darkfader> dmick: sorry for the wrong number, it's midnight. but by default he gets two copies in a ceph fs, right?
[0:06] <gregaf1> sorry, got distracted
[0:06] <dmick> the default replication is 2
[0:06] <dmick> yes
[0:06] <darkfader> we shall call it replication count as it is correct[tm]
[0:06] <darkfader> and then it's definitely 2 :)
[0:06] <gregaf1> you actually configure the "size", so when we say replication is two it means it tries to create two total copies of the data
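[Editor's note] The "size" gregaf1 describes is a per-pool setting; a minimal sketch of inspecting and changing it with the ceph CLI (the pool name "data" is just the default pool from this era, and exact command forms may vary between releases, so this needs a running cluster and is illustrative only):

```shell
# Show the current replica count ("size") of the data pool.
ceph osd pool get data size

# Ask Ceph to keep two total copies of every object in the pool.
ceph osd pool set data size 2
```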
[0:06] <gregaf1> anyway, neerbeer, how long did you wait with it hanging?
[0:07] <gregaf1> and did you look at ceph -s/ceph -w during that time?
[0:07] <neerbeer> about a minute or so .. and then I restarted the 2nd osd .
[0:07] <gregaf1> with the defaults you should have expected it to hang for about 35 seconds before the system decided the second OSD was really down
[0:07] <neerbeer> yes. I did see it running in degraded mode
[0:07] <gregaf1> and then the remaining OSD would have taken over responsibility for all the data, but that might have taken a little bit longer
[0:10] <gregaf1> anyway, if you try again and watch ceph -w you should see that progression, and you can watch the PGs go through their stages, and you want them to all end up in active+degraded (or active+degraded+clean? sjust1?)
[0:11] <gregaf1> if that doesn't happen, or it takes a long time, let us know, but "about a minute" isn't quite long enough for me to get concerned ;)
[0:11] <neerbeer> isn't a minute a rather long time to have VMs not running ?
[0:12] <neerbeer> or just tell me what the expectations should be
[0:12] <gregaf1> they're tunable settings; I'm trying to dig up the email I sent recently
[0:12] <neerbeer> I'm about to start testing again now .
[0:13] <gregaf1> and in *most* (though definitely not all) VMs they aren't going to do enough disk activity for a minute's partial inaccessibility to even be noticed
[0:13] <neerbeer> root@myhost:/rbd# dd if=/dev/zero of=outfile bs=1024 count=10000
[0:13] <neerbeer> 10000+0 records in
[0:13] <neerbeer> 10000+0 records out
[0:13] <neerbeer> 10240000 bytes (10 MB) copied, 0.0258244 s, 397 MB/s
[0:13] <neerbeer> that's with both osds on a freshly mkfs'd ext4
[0:14] <gregaf1> hurray cache! ;)
[0:15] <gregaf1> neerbeer: description of some relevant config options at http://www.spinics.net/lists/ceph-devel/msg07305.html
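[Editor's note] The thread linked above covers the failure-detection timings being discussed. As a hedged illustration (option names and defaults as of roughly this era; verify against current docs), the relevant knobs live in ceph.conf:

```
[osd]
    ; how long peers wait without a heartbeat reply before
    ; reporting an OSD down (the ~35 s hang gregaf1 mentions)
    osd heartbeat grace = 20

[mon]
    ; how long a down OSD stays "in" before the monitors mark it
    ; out and re-replication starts (matches "down for 304" below)
    mon osd down out interval = 300
```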
[0:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: No route to host)
[0:15] * LarsFronius (~LarsFroni@2a02:8108:380:90:8c61:ae23:fa5f:b913) Quit (Quit: LarsFronius)
[0:16] <darkfader> neerbeer: add conv=fdatasync for non-cached
[0:17] <neerbeer> gregaf1: reading ... good stuff so far ..
[0:21] <neerbeer> darkfader: conv=fdatasync ... okay. I'm not as cool as I thought.
[0:21] <darkfader> dont have to know that
[0:21] <darkfader> just a linuxism
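[Editor's note] darkfader's suggestion in runnable form: without conv=fdatasync, dd reports page-cache speed (hence the 397 MB/s above); with it, dd calls fdatasync() on the file before printing, so the figure includes the flush to disk. A small sketch against a local temp file:

```shell
# Cached write: a 397 MB/s-style number, mostly page cache.
dd if=/dev/zero of=/tmp/ddtest.out bs=1024 count=10000

# Honest write: dd fdatasync()s the file before reporting speed.
dd if=/dev/zero of=/tmp/ddtest.out bs=1024 count=10000 conv=fdatasync

rm -f /tmp/ddtest.out
```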
[0:22] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) has joined #ceph
[0:23] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:30] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[0:36] <neerbeer> I tried my dd test again and my dd if=/dev/zero of=outfile7 bs=1k count=100000 conv=fdatasync has been hung now for 5 min
[0:37] <neerbeer> The mon did detect the 2nd osd down and marked it down appropriately .
[0:37] <neerbeer> 2012-07-05 18:30:33.190679 log 2012-07-05 18:30:32.806673 mon.0 10.1.101.8:6789/0 15 : [INF] osd.1 10.1.101.9:6800/15404 failed (by osd.0 10.1.101.8:6801/25352)
[0:39] <gregaf1> neerbeer: what's the full output of ceph -s
[0:39] <gregaf1> ?
[0:40] <neerbeer> root@myhost:~# ceph -s
[0:40] <neerbeer> 2012-07-05 18:40:03.986167 pg v8019: 396 pgs: 396 active+clean; 15818 MB data, 20984 MB used, 5539 GB / 5560 GB avail
[0:40] <neerbeer> 2012-07-05 18:40:03.987772 mds e4: 1/1/1 up {0=a=up:active}
[0:40] <neerbeer> 2012-07-05 18:40:03.987837 osd e17: 2 osds: 1 up, 1 in
[0:40] <neerbeer> 2012-07-05 18:40:03.987953 log 2012-07-05 18:35:47.268815 mon.0 10.1.101.8:6789/0 18 : [INF] osd.1 out (down for 304.270043)
[0:40] <neerbeer> 2012-07-05 18:40:03.988094 mon e1: 1 mons at {a=10.1.101.8:6789/0}
[0:41] <gregaf1> can you run "rados -p data bench 60 write" and make sure that completes without hanging?
[0:42] <gregaf1> (given that ceph -s output it looks like the rbd client is busted somehow, and I want to make sure)
[0:42] <neerbeer> do that from the host or from one of the osd machines?
[0:42] <neerbeer> s/host/client/
[0:42] <gregaf1> either one
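[Editor's note] The benchmark gregaf1 asks for, spelled out; it writes 4 MB objects into the "data" pool for 60 seconds and reports the bandwidth/latency figures neerbeer pastes below. It needs a running cluster with that pool, so treat it as a sketch:

```shell
# Exercise the OSDs directly through librados, bypassing rbd,
# to check whether the cluster itself can still take writes.
rados -p data bench 60 write
```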
[0:58] <neerbeer> I ran this from the up osd:
[0:58] <neerbeer> Total time run: 61.795925
[0:58] <neerbeer> Total writes made: 367
[0:58] <neerbeer> Write size: 4194304
[0:58] <neerbeer> Bandwidth (MB/sec): 23.756
[0:59] <neerbeer> Average Latency: 2.69363
[0:59] <neerbeer> Max latency: 7.84352
[0:59] <neerbeer> Min latency: 0.074748
[0:59] <neerbeer> However, I was not able to run this from the client machine
[0:59] <neerbeer> http://f.imgtmp.com/EUgjl.png
[0:59] <gregaf1> you couldn't run it from the client machine?
[0:59] <neerbeer> the client machine looks like it went belly up
[1:00] <gregaf1> oh, from the previous rbd mount, got it
[1:01] <gregaf1> elder, you there?
[1:01] <neerbeer> previous rbd mount, yes
[1:02] <gregaf1> neerbeer: I think this is probably a bug in the kernelspace libceph messaging code when nodes disappear, and I think it might even have been fixed in 3.4 or 3.5-rc, but I'll need to check with one of our kernel guys
[1:02] <neerbeer> ok
[1:04] <gregaf1> if it's feasible, you might want to do testing using QEMU and the userspace rbd stuff ... that's seen more testing, includes stuff like client-side caches, and won't bust up the rest of your machine if it breaks ;)
[1:06] <neerbeer> okay. that was my next step .
[1:06] <neerbeer> thanks
[1:20] <neerbeer> using qem-img would require running that cmd on the same host as an osd, correct ?
[1:21] <neerbeer> s/qem-img/qemu-img/
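[Editor's note] On the question above: qemu-img talks to the cluster through librbd, so it can run from any machine with a ceph.conf and keyring, not only an OSD host. A hedged sketch in the syntax of this era (pool and image names here are made up):

```shell
# Create a 10 GB RBD image named "vmdisk" in the rbd pool;
# librbd reads the cluster details from /etc/ceph/ceph.conf.
qemu-img create -f rbd rbd:rbd/vmdisk 10G

# Inspect it through the same rbd: protocol path.
qemu-img info rbd:rbd/vmdisk
```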
[1:27] * neerbeer (~Adium@65-125-22-154.dia.static.qwest.net) Quit (Quit: Leaving.)
[1:44] * lofejndif (~lsqavnbok@82VAAEXA5.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:05] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:07] <sagewk> elder: added hch's xfs patches to testing; in my test it fixes the lockdep warning we were seeing
[2:08] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[2:32] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[3:24] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[3:24] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Ping timeout: 480 seconds)
[3:31] <renzhi> Ceph on debian wheezy still shows the version number as 0.47.2:
[3:31] <renzhi> Setting up ceph (0.48argonaut-1~bpo70+1) ...
[3:31] <renzhi> Setting up ceph-mds (0.48argonaut-1~bpo70+1) ...
[3:31] <renzhi> root@beijing:~# ceph -v
[3:31] <renzhi> ceph version 0.47.2 (commit:8bf9fde89bd6ebc4b0645b2fe02dadb1c17ad372)
[3:31] <gregaf1> I think sage has that on his list (but here I am pinging his home computer in case it's not)
[3:31] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[3:32] <renzhi> thanks
[3:40] * neerbeer (~Adium@c-75-75-33-53.hsd1.va.comcast.net) has joined #ceph
[3:46] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) has joined #ceph
[3:53] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[4:14] <elder> gregaf1 I was not here but am now. Sorry about that, away longer than expected.
[4:14] <elder> I don't suppose you're on any more though.
[4:36] * Qten (~qgrasso@ip-121-0-1-110.static.dsl.onqcomms.net) Quit (Read error: Connection reset by peer)
[4:50] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[5:52] * deepsa (~deepsa@122.167.169.82) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[6:10] * deepsa (~deepsa@122.167.169.82) has joined #ceph
[6:33] <sage> renzhi: 'ceph' is in the ceph-common package
[6:33] <sage> renzhi: maybe that's it?
[7:19] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[7:22] <sage> elder: re-pushed testing, squashing that last fix and a new one into previous stuff
[7:22] * fghaas (~florian@194.158.199.28) has joined #ceph
[7:29] * fghaas (~florian@194.158.199.28) Quit (Read error: Connection reset by peer)
[7:33] * neerbeer (~Adium@c-75-75-33-53.hsd1.va.comcast.net) Quit (Quit: Leaving.)
[8:11] <renzhi> sage: not sure, usually, I just need to install ceph, and that should install ceph-common.
[8:11] <renzhi> but uninstall ceph won't uninstall ceph-common
[8:12] <sage> yeah, it won't upgrade it to make the versions match, tho, unless you mention it by name
[8:12] <sage> apt-get install ceph-common should trigger the upgrade
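[Editor's note] sage's point in command form: the `ceph` binary lives in ceph-common (per his earlier message), and upgrading the ceph package alone doesn't pull ceph-common forward, so name it explicitly. A sketch, assuming standard Debian tooling:

```shell
# Upgrading the main packages can leave an old ceph-common behind...
apt-get install ceph ceph-mds

# ...so upgrade it by name, then confirm the versions now match.
apt-get install ceph-common
ceph -v
dpkg -l ceph ceph-common
```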
[8:12] <renzhi> ok, I see
[8:13] <renzhi> I'm fine with my test cluster now
[8:13] <renzhi> glad 0.48 came out before we go live
[8:20] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) has joined #ceph
[8:23] * fghaas (~florian@86.57.255.94) has joined #ceph
[8:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[9:05] <pmjdebruijn> hi guys
[9:09] * deepsa (~deepsa@122.167.169.82) Quit (Ping timeout: 480 seconds)
[9:16] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[9:19] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) has joined #ceph
[9:21] * fghaas (~florian@86.57.255.94) Quit (Quit: Leaving.)
[9:21] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:30] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) Quit (Quit: LarsFronius)
[9:34] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: No route to host)
[9:39] * deepsa (~deepsa@115.184.126.32) has joined #ceph
[9:45] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[9:48] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:58] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[10:03] * adjohn (~adjohn@50-0-133-101.dsl.static.sonic.net) Quit (Quit: adjohn)
[10:06] * deepsa_ (~deepsa@122.167.172.173) has joined #ceph
[10:08] * deepsa (~deepsa@115.184.126.32) Quit (Ping timeout: 480 seconds)
[10:08] * deepsa_ is now known as deepsa
[10:22] * loicd (~loic@83.167.43.235) has joined #ceph
[10:28] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:37] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:37] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:53] * Liam_Sa (~Liam_Sa@41.161.35.68) has joined #ceph
[11:59] <Liam_Sa> hey all. I'm using 0.48 on ubuntu; when i run gceph it starts but shows no data, and I get a "monclient: hunting for new mon" error in the terminal.
[11:59] <Liam_Sa> can anyone help?
[12:01] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:09] * Liam_Sa (~Liam_Sa@41.161.35.68) Quit ()
[12:10] * Liam_Sa (~Liam_Sa@41.161.35.68) has joined #ceph
[12:11] <Liam_Sa> can anyone help
[12:23] <joao> gceph was slated to be phased out
[12:23] <joao> I thought it had been removed by now
[12:23] <joao> you probably had an older version of gceph lying around in the system
[12:36] * deepsa (~deepsa@122.167.172.173) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[12:38] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) has joined #ceph
[12:38] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[12:39] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[12:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[12:59] <elder> nhm, are you free for lunch today?
[12:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[13:32] * renzhi (~renzhi@69.163.36.54) Quit (Quit: Leaving)
[13:55] * stxShadow (~Jens@ip-78-94-238-69.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[14:11] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[14:26] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) has joined #ceph
[14:51] <nhm> elder: heya, I think so, let me just make sure the car isn't needed.
[14:53] <nhm> elder: yeah, I'm game
[14:54] <elder> What time?
[14:54] <Liam_Sa> joao: thanx, was very confused; i thought i read somewhere that it was removed but couldn't find it again to confirm.
[14:54] <elder> You want to meet before the standup and tune in while there?
[14:54] <nhm> elder: Sounds good
[14:54] <elder> Rosedale area a reasonable meeting place for you too?
[14:54] <nhm> elder: maybe 11:30 or 12:00?
[14:55] <nhm> Yeah, Rosedale is fine
[14:56] <elder> Let's do noon. I'm about to take my dog to the vet and that will give me a slightly longer time to get work done before I have to leave again.
[14:56] <nhm> sure
[14:56] <elder> Any preference on restaurant?
[14:57] <nhm> I'm pretty open. Big Bowl?
[14:57] <elder> Sure.
[14:57] <elder> Big bowl at noon. Meet in the hall outside it.
[14:57] <nhm> sounds good
[14:57] <elder> Sounds good.
[14:57] <elder> Jinx.
[14:57] <nhm> lol
[15:01] <joao> Liam_Sa, np :)
[15:05] <elder> OK, back in about an hour and a half I think.
[15:21] * Liam_Sa (~Liam_Sa@41.161.35.68) Quit ()
[15:31] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:33] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[15:42] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) has joined #ceph
[15:49] * fghaas (~florian@194.158.199.28) has joined #ceph
[15:54] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[15:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[16:00] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:05] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) has joined #ceph
[16:18] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) Quit (Remote host closed the connection)
[16:23] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) has joined #ceph
[16:48] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[16:52] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[17:06] * brambles (brambles@79.133.200.49) Quit (Remote host closed the connection)
[17:06] * brambles (brambles@79.133.200.49) has joined #ceph
[17:20] * fghaas (~florian@194.158.199.28) Quit (Quit: Leaving.)
[17:37] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:48] * deepsa (~deepsa@122.167.172.173) has joined #ceph
[17:51] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[17:51] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[17:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[18:06] * lofejndif (~lsqavnbok@9YYAAHVWM.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:16] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:33] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[18:34] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) has joined #ceph
[18:36] * loicd (~loic@83.167.43.235) Quit (Quit: Leaving.)
[18:39] * joshd (~jdurgin@2602:306:c5db:310:1e6f:65ff:feaa:beb7) has joined #ceph
[18:41] <iggy> fyi... <mjt> iggy: ceph is currently i386-only (or, maybe, little-endian-only). Or if not ceph itself, it is one of the libraries (libleveldb) it uses. So this either will be fixed, or ceph will be removed from wheezy. I tend to think it will be the latter.
[18:43] <iggy> I told him I'd be surprised if that were true, but either way I'd hate to see ceph pulled from debian because of a misunderstanding
[18:44] <iggy> that statement came about because he was saying rbd support would be going away from qemu/kvm in debian
[18:45] * yehudasa (~yehudasa@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[18:45] <gregaf1> iggy: there's been a little chatter about it on the mailing list, and we're doing our best but it's largely in the hands of third-party maintainers right now :/
[18:45] * yehudasa_ (~yehudasa@aon.hq.newdream.net) has joined #ceph
[18:51] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[18:59] * loicd (~loic@magenta.dachary.org) has joined #ceph
[19:38] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[19:43] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:44] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[19:45] * Ryan_Lane (~Adium@c-98-210-205-93.hsd1.ca.comcast.net) has joined #ceph
[19:49] * The_Bishop_ (~bishop@e179021019.adsl.alicedsl.de) has joined #ceph
[19:57] * The_Bishop (~bishop@e179021099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[19:58] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[20:01] * LarsFronius (~LarsFroni@95-91-243-249-dynip.superkabel.de) Quit (Quit: LarsFronius)
[20:04] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[20:07] * loicd (~loic@magenta.dachary.org) has joined #ceph
[20:21] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Ping timeout: 480 seconds)
[20:22] * The_Bishop_ (~bishop@e179021019.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[20:52] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[20:56] <Dr_O_> There has been some chatter on #debian-bugs about it also
[20:56] * Dr_O_ is now known as Dr_O
[21:15] <iggy> I'm talking to the qemu/kvm maintainer about it, it seems a bit like people are pointing fingers more than actually trying to fix things
[21:26] <elder> sagewk, let me know when you're around.
[21:53] * Kioob`Taff1 (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[21:54] * Dr_O is now known as Dr_O_
[22:09] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:10] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:17] <dspano> I just got cephfs working. My thanks to the devs. This project rocks! I can't wait to use it with Openstack. Time for a beer.
[22:17] <nhm> dspano: congrats! :)
[22:18] <dspano> Thank you! This guy's tutorial was great. http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
[22:19] <dspano> Have a great weekend everyone!
[22:19] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: Leaving)
[22:40] <gregaf1> iggy: hmmm, from what's come across my desk all the problems are getting fixed (I think? there was an issue with libatomic-ops, and an issue with leveldb) in the upstreams, but I don't follow debian packaging much
[22:50] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[23:02] <yehudasa> iggy: I looked at the leveldb big endian issue, and it was just a bug in the unit test.. I updated the debian maintainer about that but I haven't heard anything since
[23:06] <nhm> Anyone know if Dan is around today?
[23:08] <nhm> or, can anyone else deal with ipmi not working correctly? I think Dan said something about needing to use the java thing to fix it.
[23:12] <gregaf1> nhm: he's here; I'll poke him
[23:12] <nhm> gregaf1: thanks much
[23:15] <dmick> nhm: I'll get there in a sec (so the rest of the channel knows)
[23:16] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:36] * yehudasa_ (~yehudasa@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[23:58] <nhm> dmick: thanks for taking care of that!

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.