#ceph IRC Log

IRC Log for 2012-05-18

Timestamps are in GMT/BST.

[0:05] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[0:06] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[0:10] * kb_gt (~kb_gt@adsl-89-217-38-150.adslplus.ch) has joined #ceph
[0:12] <kb_gt> free online comic book kh43.com
[0:12] * kb_gt (~kb_gt@adsl-89-217-38-150.adslplus.ch) Quit (autokilled: This host triggered network flood protection. please mail support@oftc.net if you feel this is in error, quoting this message. (2012-05-17 22:12:44))
[0:18] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[0:19] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[0:26] <SpamapS> sagewk: http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/wip-quorum/pool/main/c/ceph/ .. full of debs .. :) re-trying
[0:32] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[0:38] <sagewk> tv_: wip-quorum has the initial_members stuff. passing my single-host tests. barring bad interactions with the auto-ip-detection stuff, it should behave!
[0:47] <Tv_> yay
[0:50] <SpamapS> sage 2012-05-17 22:50:06.985337 mon e3: 3 mons at {cmon-debug2-0=10.252.69.248:6789/0,cmon-debug2-1=10.252.87.105:6800/0,cmon-debug2-2=10.252.10.239:6800/0}
[0:50] <SpamapS> woot
[0:50] <SpamapS> sagewk: ^^
[0:50] <sagewk> yay!
[0:50] <sagewk> tv_: working on socket piece now
[1:10] * BManojlovic (~steki@212.200.243.232) Quit (Remote host closed the connection)
[1:14] * brambles (brambles@79.133.200.49) Quit (Quit: leaving)
[1:32] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:36] * brambles_ (brambles@79.133.200.49) Quit (Quit: leaving)
[1:37] * brambles (brambles@79.133.200.49) has joined #ceph
[1:42] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:43] <SpamapS> Tv_: hey, I found this for doing declarative stuff, much lighter weight than chef/puppet: http://ansible.github.com/
[1:45] <Tv_> SpamapS: i've seen that but didn't have time to dig in
[1:46] <SpamapS> Tv_: looks nice.. 1000 lines of python total.. very simple, and extending it works a lot like juju
[1:47] <Tv_> SpamapS: my notes say 1) it's push and hence sucks for large setups 2) looks very imperative to me (each op is in charge of idempotency, no help from the platform) 3) the syntax sucks: "command /path/foo bar baz creates=/path/quux" jumps between shell args and meta-information
[1:49] <SpamapS> Tv_: less imperative than shell scripts, and the push part is ok, I just want it to write charms w/o having to make tempfiles
[1:50] <SpamapS> Tv_: I'd say it's less declarative than I ultimately want, but far more so than shell. :)
[1:52] * Tv_ (~tv@aon.hq.newdream.net) Quit (Quit: Tv_)
[2:10] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[2:13] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[2:15] * lofejndif (~lsqavnbok@82VAADV7O.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:20] * bchrisman (~Adium@108.60.121.114) Quit (Quit: Leaving.)
[2:23] * dennisj_ (~chatzilla@p5DCF7D6F.dip.t-dialin.net) Quit (Quit: ChatZilla 0.9.88.2 [Firefox 12.0/20120424092743])
[2:36] <Qten> anyone know of any issues combining compute & storage on the same server?
[2:37] <joshd> yeah, you shouldn't use the kernel clients on a storage server, since there may be a deadlock
[2:37] <joshd> it's the same problem nfs has
[2:39] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) Quit (Remote host closed the connection)
[2:39] <Qten> ahh ok
[2:40] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:43] <iggy> fuse client/rbd should be fine from what i've heard
[2:43] <iggy> barring other issues
[2:44] <Qten> rbd, isn't that basically a kernel client?
[2:45] <joshd> there's a userspace library, librbd, that qemu and the command line rbd tool use to access images
[2:45] <joshd> there's also a kernel rbd module
[2:49] <Qten> so the kernel rbd module may cause issues? or just the kernel-based object storage client?
[2:50] * rturk (~rturk@aon.hq.newdream.net) has left #ceph
[2:53] <joshd> doing i/o through the rbd kernel module or the ceph filesystem kernel module may cause issues
[2:56] <Qten> yah, that's what i thought :), just confused me a little when you mentioned rbd was fine
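
To make the distinction above concrete, here is a minimal sketch of the two access paths, assuming the default pool and a made-up image name; exact flags may differ across versions:

    # userspace path (librbd): fine to run on an OSD host
    rbd create --size 1024 test      # 1 GB image in the default rbd pool
    rbd export test /tmp/test.img    # reads the image entirely through librbd

    # kernel path: avoid on an OSD host, can deadlock under memory pressure
    rbd map test                     # exposes the image as a /dev/rbd* block device
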
[2:58] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) has joined #ceph
[3:09] * joshd (~joshd@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:11] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[3:29] * dmick (~dmick@aon.hq.newdream.net) Quit (Quit: Leaving.)
[3:39] * joao (~JL@aon.hq.newdream.net) Quit (Ping timeout: 480 seconds)
[4:16] * jeffp (~jplaisanc@net66-219-41-161.static-customer.corenap.com) has left #ceph
[5:04] <Qten> for a ceph MDS/OSD/MON server, would 2 x 4-core 2.2GHz AMD be fast enough?
[5:56] <sage> qten: it all depends on how fast you want it to go :)
[5:57] <sage> qten: sounds fine
[6:02] * Ryan_Lane (~Adium@ip98-178-220-200.no.no.cox.net) Quit (Quit: Leaving.)
[6:36] * f4m8_ is now known as f4m8
[7:22] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Read error: Connection reset by peer)
[7:24] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[7:50] * Theuni (~Theuni@46.253.59.219) has joined #ceph
[8:40] <Qten> sage: thanks ;)
[8:42] <Qten> i was also considering using each disk as a separate OSD instead of raid, I imagine this will give me great performance as ceph does striping and replication across the OSDs.
[8:44] <Qten> would this be accurate? :)
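
For reference, a one-OSD-per-disk layout was usually expressed in the ceph.conf of that era roughly as below; the hostname and mount points are made up:

    [osd]
        osd data = /srv/osd.$id              ; each id lives on its own disk
        osd journal = /srv/osd.$id/journal

    [osd.0]
        host = node1
    [osd.1]
        host = node1
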
[8:44] * Theuni (~Theuni@46.253.59.219) Quit (Quit: Leaving.)
[9:02] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[9:09] * ogelbukh (~weechat@nat3.4c.ru) Quit (Remote host closed the connection)
[9:10] * ogelbukh (~weechat@nat3.4c.ru) has joined #ceph
[9:16] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:16] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[9:19] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[9:30] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[9:30] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[9:49] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[9:49] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[9:50] * cattelan (~cattelan@c-66-41-26-220.hsd1.mn.comcast.net) has joined #ceph
[9:50] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[10:05] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[10:06] * s[X]_ (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[10:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:14] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:15] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[10:20] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[10:47] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[11:39] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[11:50] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[11:52] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[11:59] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[12:00] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[12:06] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) has joined #ceph
[12:13] * s[X]_ (~sX]@60-241-151-10.tpgi.com.au) Quit (Remote host closed the connection)
[12:20] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[12:56] * yoshi (~yoshi@p3167-ipngn3601marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[13:42] * ivan` (~ivan`@li125-242.members.linode.com) Quit (Quit: ERC Version 5.3 (IRC client for Emacs))
[13:44] * lofejndif (~lsqavnbok@28IAAETKQ.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:47] * ivan` (~ivan`@li125-242.members.linode.com) has joined #ceph
[14:31] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[14:31] * mgalkiewicz (~mgalkiewi@staticline58611.toya.net.pl) has joined #ceph
[14:32] <mgalkiewicz> Hi guys. Is it possible to force rbd removal? I constantly get errors "delete error: image still has watchers This means the image is still open or the client using it crashed. Try again after closing/unmapping it or waiting 30s for the crashed client to timeout."
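
The usual way out of that state, assuming the image is still mapped via the kernel client somewhere (the device and image names below are placeholders), is to drop the watcher and retry:

    rbd showmapped          # find which /dev/rbd* device holds the image
    rbd unmap /dev/rbd0     # releases the watcher held by the kernel client
    rbd rm mypool/myimage   # retry the removal once the watcher is gone
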
[14:33] * lofejndif (~lsqavnbok@28IAAETKQ.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[14:37] * ivan` (~ivan`@li125-242.members.linode.com) Quit (Quit: ERC Version 5.3 (IRC client for Emacs))
[14:41] * ivan` (~ivan`@li125-242.members.linode.com) has joined #ceph
[16:00] * f4m8 is now known as f4m8_
[16:09] * LarsFronius (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) has joined #ceph
[16:10] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:11] * LarsFronius (~LarsFroni@p578b21b6.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[16:18] * LarsFronius_ (~LarsFroni@testing78.jimdo-server.com) Quit (Ping timeout: 480 seconds)
[16:42] * lofejndif (~lsqavnbok@9YYAAF65Q.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:49] * doet (~doet@46.33.130.1) has joined #ceph
[16:51] * Theuni (~Theuni@82.113.99.215) has joined #ceph
[16:58] * mgalkiewicz (~mgalkiewi@staticline58611.toya.net.pl) Quit (Quit: Ex-Chat)
[16:58] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:05] * Theuni (~Theuni@82.113.99.215) Quit (Quit: Leaving.)
[17:20] * Ryan_Lane (~Adium@208-117-193-99.static.idsno.net) has joined #ceph
[17:28] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[17:33] * nrheckman (4b9538f1@ircip1.mibbit.com) has joined #ceph
[17:34] <nrheckman> Morning everybody. I have an odd state on my single-node ceph setup which I'm having trouble resolving. "2012-05-18 08:31:19.360104 mon.0 -> 'HEALTH_WARN 262 pgs degraded; 262 pgs stale; 262 pgs stuck unclean; recovery 1695/3390 degraded (50.000%)' (0)".
[17:35] <nrheckman> When I check list_missing, it gives me an empty list
[17:42] * MarkDude (~MT@c-71-198-138-155.hsd1.ca.comcast.net) has joined #ceph
[17:53] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:53] * CristianDM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[17:54] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) has joined #ceph
[17:58] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Ping timeout: 480 seconds)
[18:14] * Tv_ (~tv@aon.hq.newdream.net) has joined #ceph
[18:14] <Tv_> i wonder why my irc client didn't reconnect automatically.. oh well
[18:21] * yehudasa (~yehudasa@aon.hq.newdream.net) Quit (Remote host closed the connection)
[18:24] * yehudasa (~yehudasa@aon.hq.newdream.net) has joined #ceph
[18:29] <sjust> nrheckman: if you have one osd, the degraded part is normal
[18:30] <gregaf> but not the stale part
[18:30] <sjust> yeah
[18:30] <gregaf> Qten: that's how most production deployments are using it so far, but we aren't sure which is better yet -- not enough data points :)
[18:32] <sjust> nrheckman: is it responding to requests?
[18:35] <sagewk> tv_: let me know when you rebase the chef branch
[18:35] <Tv_> sagewk: yeah i wanted to put the mds upstart work into chef-3 but ran into something stupid
[18:35] <Tv_> sagewk: to do it sort of like the osds, i need a "bootstrap-mds" key, and i'm trying to find a nicer way than copy-pasting etc
[18:36] <sagewk> k
[18:36] <Tv_> i can make chef-3 without that, with just using the new features though
[18:38] <sagewk> it can wait. at some point wip-quorum just needs to be rebased
[18:38] <sagewk> actually, i can drop out chef entirely for now.. it was only to get clint's issue fixed.
[18:52] <Tv_> sagewk: wait now i no longer understand what you need
[18:52] <sagewk> tv_: i don't think i need anything.. no worries.
[18:53] * Theuni (~Theuni@91-65-217-125-dynip.superkabel.de) has joined #ceph
[18:53] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) Quit (Quit: adjohn)
[18:55] * joshd (~joshd@aon.hq.newdream.net) has joined #ceph
[18:55] * BManojlovic (~steki@212.200.243.232) has joined #ceph
[18:56] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[19:00] * doet (~doet@46.33.130.1) Quit (Ping timeout: 480 seconds)
[19:07] * joao (~JL@aon.hq.newdream.net) has joined #ceph
[19:07] <joao> hello #ceph
[19:07] <joao> best abstract ever: http://iopscience.iop.org/1751-8121/44/49/492001/article
[19:11] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[19:13] * aa (~aa@r200-40-114-26.ae-static.anteldata.net.uy) has joined #ceph
[19:14] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) has joined #ceph
[19:15] * adjohn (~adjohn@70-36-139-109.dsl.dynamic.sonic.net) Quit ()
[19:19] <nrheckman> sjust: yeah, it seems to be working fine
[19:20] <nrheckman> sjust: I attempted to change the replication factor to 1, that seems to have worked for any new data that gets inserted... But I can't seem to clear the existing data.
[19:21] * rturk (~rturk@aon.hq.newdream.net) Quit (Remote host closed the connection)
[19:22] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[19:32] * lofejndif (~lsqavnbok@9YYAAF65Q.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[19:35] <sjust> nrheckman: how did you change the replication factor?
[19:40] * dmick (~dmick@aon.hq.newdream.net) has joined #ceph
[19:41] <nrheckman> sjust: info from the wiki (http://ceph.com/wiki/Adjusting_replication_level)
[19:44] <nrheckman> I was going to try and mount it, copy all the data off and re-insert it to see if that might clear it... But I can't get the kernel module to compile. Using latest available kernel in CentOS 6
[19:49] <sjust> did you set all of the pools to replication size 1?
[19:51] <nrheckman> yeah, i did
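
For reference, the wiki procedure amounts to setting the size property on each pool; 'data', 'metadata', and 'rbd' were the default pool names at the time:

    ceph osd pool set data size 1
    ceph osd pool set metadata size 1
    ceph osd pool set rbd size 1
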
[19:52] <nrheckman> Oh hey, look at that. Maybe i'm just too impatient. It started logging lines like the following: "2012-05-18 10:51:53.540404 log 2012-05-18 10:51:50.472915 osd.0 127.0.0.1:6801/6112 107 : [INF] 1.3b scrub ok"
[19:52] <nrheckman> ceph health is reporting fewer pgs stuck
[20:02] <joshd> nrheckman: if you do 'ceph tell osd.0 flush_pg_stats' you should get more up to date info
[20:07] * doet (~doet@46.33.130.1) has joined #ceph
[20:13] * Ryan_Lane (~Adium@208-117-193-99.static.idsno.net) Quit (Quit: Leaving.)
[20:23] * doet (~doet@46.33.130.1) Quit (Ping timeout: 480 seconds)
[20:28] * Theuni (~Theuni@91-65-217-125-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[20:32] * doet (~doet@46.33.130.1) has joined #ceph
[20:36] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[20:36] * lofejndif (~lsqavnbok@1RDAABXTH.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:02] * Ryan_Lane (~Adium@208-117-193-99.static.idsno.net) has joined #ceph
[21:02] * dennisj (~chatzilla@p5DCF7D6F.dip.t-dialin.net) has joined #ceph
[21:04] * ulyn (~ulyn@82VAADWVU.tor-irc.dnsbl.oftc.net) has joined #ceph
[21:06] * adjohn is now known as Guest592
[21:06] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[21:09] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) Quit (Ping timeout: 480 seconds)
[21:12] * Guest592 (~adjohn@69.170.166.146) Quit (Ping timeout: 480 seconds)
[21:15] * The_Bishop (~bishop@cable-86-56-102-91.cust.telecolumbus.net) has joined #ceph
[21:17] * doet (~doet@46.33.130.1) Quit (Quit: Ex-Chat)
[21:18] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[21:30] * lofejndif (~lsqavnbok@1RDAABXTH.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[21:39] <nhm> good afternoon #ceph
[21:40] <dmick> why hello mr nhm sir
[22:01] <liiwi> good evening
[22:04] <ulyn> what is ceph
[22:07] <nrheckman> joshd: It's much more caught up now, but appears to have stalled? It's only showing 64 pgs degraded now. "2012-05-18 13:06:29.204187 mon.0 -> 'HEALTH_WARN 64 pgs degraded; 64 pgs stuck unclean; recovery 1685/3391 degraded (49.690%)' (0)"
[22:09] <nrheckman> Almost feels like I made a mistake bringing up a single node cluster and not setting the replication factor to 1 BEFORE adding data? :)
[22:09] <joshd> what does 'ceph pg dump | grep degraded' show? are they all mapped to osd.0 like they should be?
[22:10] <nrheckman> which column is that?
[22:11] <joshd> the up/acting column - there should be [0] [0]
[22:11] <joshd> er, columns
[22:12] <nrheckman> yup, zeros
[22:23] <joshd> for one of those pgs, could you pastebin the output of 'ceph tell osd.0 pg [pgid] query'?
[22:23] <joshd> the pgid is like 0.a, the first column in the pg dump
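
Joining joshd's two steps together, the check looks like this; the pgid 9.6 is simply the one that comes up later in this log:

    ceph pg dump | grep degraded     # first column is the pgid, e.g. 9.6
    ceph tell osd.0 pg 9.6 query     # dump that pg's state from the primary OSD
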
[22:24] * rturk (~rturk@aon.hq.newdream.net) Quit (Remote host closed the connection)
[22:24] * rturk (~rturk@aon.hq.newdream.net) has joined #ceph
[22:32] <nrheckman> joshd: sure.
[22:33] <nrheckman> joshd: ceph tell osd.0 pg 9.6 query > http://pastebin.com/UJWYhp30
[22:34] <joshd> nrheckman: this might be a special case where clearing the degraded flag doesn't happen for replication factor 1
[22:35] <nrheckman> joshd: understandable, should I dump and reload my data? Or is that not going to clear it up?
[22:36] <joshd> I'm curious if restarting the osd will reset that state
[22:36] <nrheckman> I tried it, didn't work.
[22:38] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[22:38] <joshd> the degraded flag is reset when recovery finishes, but with 1 replica it seems not to get triggered
[22:40] <nrheckman> Well... I suppose I could completely rebuild it with the appropriate replication factor of 1
[22:40] <joshd> yeah, that seems simplest
[22:40] <nrheckman> but I need to mount it to pull the data out first. Can't seem to compile the ceph kernel module in centos 6 though (2.6.32-220.17.1.el6.x86_64)
[22:41] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[22:41] <joshd> what about fuse?
[22:41] <nrheckman> haven't tried that yet!
[22:44] <nrheckman> Mounted with 'ceph-fuse -m localhost /tmp/ceph' but /tmp/ceph/ is empty!
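
For anyone retracing this, the ceph-fuse mount and cleanup look roughly like the following; the monitor address and mount point are whatever fits your setup:

    mkdir -p /tmp/ceph
    ceph-fuse -m localhost:6789 /tmp/ceph    # -m takes a monitor address
    fusermount -u /tmp/ceph                  # unmount when finished
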
[22:44] * CristianDMM (~CristianD@host217.190-230-240.telecom.net.ar) has joined #ceph
[22:44] * CristianDMM (~CristianD@host217.190-230-240.telecom.net.ar) Quit ()
[22:45] * nhorman (~nhorman@99-127-245-201.lightspeed.rlghnc.sbcglobal.net) Quit (Quit: Leaving)
[22:46] <joshd> wait, what kind of data did you store before if you couldn't use the kernel module?
[22:49] <CristianDM> joshd: Hi
[22:50] <CristianDM> joshd: I am building a ceph cluster in a datacenter. For a mon, is 10GB fine, or is it possible to run out of space?
[22:56] <joshd> yeah that should be fine, it's technically possible to run out of space still, but pretty unlikely
[22:56] <CristianDM> Thanks
[22:57] <nrheckman> joshd: just files, though they were stored using the s3 api
[22:59] <nrheckman> joshd: only reason I can't use the kernel module is because it won't compile...
[23:04] * doet (~doet@46.33.129.2) has joined #ceph
[23:07] <sjust> s3 api?
[23:07] <sjust> radosgw, you mean?
[23:07] <nrheckman> sjust: right, that
[23:07] <sjust> you can't get at those files through the filesystem
[23:08] <sjust> you'll need to extract them using radosgw
[23:08] <nrheckman> sjust: gotcha
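
A sketch of pulling the data back out through the S3 interface, using s3cmd as one possible client; the endpoint, bucket, and object names are placeholders:

    s3cmd --configure                        # enter the radosgw access/secret keys;
                                             # point host_base at the radosgw endpoint
    s3cmd ls s3://mybucket                   # list what was stored
    s3cmd get s3://mybucket/myfile ./myfile  # download one object
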
[23:12] * BManojlovic (~steki@212.200.243.232) Quit (Ping timeout: 480 seconds)
[23:12] <ulyn> what is ceph?
[23:13] <CristianDM> ulyn: http://ceph.com/
[23:14] <CristianDM> ulyn: Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability.
[23:15] <Tv_> sjust: I find it hard to justify, but f4b0cda17875c27d8b945be6cf5db9b356bb2dab is the commit that fixes the automake bug
[23:16] <Tv_> sjust: totally reproducible, checkout that~1 and start from a clean tree -> no Makefile.in
[23:16] <sjust> Tv_ yup
[23:16] <sjust> it makes perfect sense
[23:16] <Tv_> sjust: so cherry-picking that oughta help
[23:16] <Tv_> sjust: but you cherrypicked it! :-o
[23:17] <Tv_> sjust: though you cherry-picked something else first
[23:18] <Tv_> sjust: i re-cherrypicked just that one, and resolved the conflict, and it looks good here
[23:20] <Tv_> sjust: pushed as for-caleb-automake-cherry-pick
[23:27] * doet (~doet@46.33.129.2) Quit (Quit: Ex-Chat)
[23:27] * tod (~tod@46.33.129.2) has joined #ceph
[23:40] * ulyn (~ulyn@82VAADWVU.tor-irc.dnsbl.oftc.net) has left #ceph
[23:42] * tod (~tod@46.33.129.2) Quit (Quit: Ex-Chat)
[23:42] * todon (~todon@46.33.129.2) has joined #ceph
[23:48] * dennisj_ (~chatzilla@p5DCF7625.dip.t-dialin.net) has joined #ceph
[23:49] * lofejndif (~lsqavnbok@82VAADWZE.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:53] * dennisj (~chatzilla@p5DCF7D6F.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[23:53] * dennisj_ is now known as dennisj

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.