#ceph IRC Log

IRC Log for 2012-07-18

Timestamps are in GMT/BST.

[0:02] * darkfader (~floh@188.40.175.2) Quit (Remote host closed the connection)
[0:02] * darkfader (~floh@188.40.175.2) has joined #ceph
[0:14] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[0:15] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[0:15] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[0:16] * Dr_O (~owen@host-78-149-118-190.as13285.net) Quit (Remote host closed the connection)
[0:26] * LarsFronius (~LarsFroni@95-91-243-240-dynip.superkabel.de) Quit (Quit: LarsFronius)
[0:45] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[0:45] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[1:02] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[1:02] * widodh (~widodh@minotaur.apache.org) Quit (Read error: Connection reset by peer)
[1:02] * widodh (~widodh@minotaur.apache.org) has joined #ceph
[1:05] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[1:13] * tnt__ (~tnt@87.67.184.106) Quit (Read error: Operation timed out)
[1:41] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[1:55] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[1:57] * BManojlovic (~steki@212.200.241.106) Quit (Quit: Ja odoh a vi sta 'ocete...)
[2:09] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) Quit (Remote host closed the connection)
[2:15] * sagelap (~sage@38.122.20.226) Quit (Quit: Leaving.)
[2:15] * sagelap (~sage@38.122.20.226) has joined #ceph
[2:18] * Tv_ (~tv@2607:f298:a:607:b435:f9f6:cf25:1ca2) Quit (Quit: Tv_)
[2:24] * lofejndif (~lsqavnbok@28IAAF0XI.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:28] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[2:28] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Read error: Connection reset by peer)
[2:31] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[2:31] * s[X]_ (~sX]@eth589.qld.adsl.internode.on.net) Quit (Read error: Connection reset by peer)
[2:34] * James259 (~James259@94.199.25.228) has joined #ceph
[2:36] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[2:38] * James_259 (~James259@94.199.25.228) Quit (Ping timeout: 480 seconds)
[2:46] * renzhi (~renzhi@178.162.174.69) has joined #ceph
[3:01] * cking (~king@74-95-45-185-Oregon.hfc.comcastbusiness.net) Quit (Quit: It's BIOS Jim, but not as we know it..)
[3:01] * aliguori (~anthony@cpe-70-123-145-39.austin.res.rr.com) has joined #ceph
[3:04] * chutzpah (~chutz@100.42.98.5) Quit (Quit: Leaving)
[3:10] * Ryan_Lane (~Adium@216.38.130.164) Quit (Quit: Leaving.)
[3:30] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[3:30] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[3:39] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * sdouglas (~sdouglas@c-24-6-44-231.hsd1.ca.comcast.net) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * jantje (~jan@paranoid.nl) Quit (resistance.oftc.net larich.oftc.net)
[3:39] * Solver (~robert@atlas.opentrend.net) Quit (resistance.oftc.net larich.oftc.net)
[3:40] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[3:40] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[3:40] * Solver (~robert@atlas.opentrend.net) has joined #ceph
[3:40] * jpieper (~josh@209-6-86-62.c3-0.smr-ubr2.sbo-smr.ma.cable.rcn.com) has joined #ceph
[3:40] * sdouglas (~sdouglas@c-24-6-44-231.hsd1.ca.comcast.net) has joined #ceph
[3:40] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[3:40] * jantje (~jan@paranoid.nl) has joined #ceph
[3:40] * cclien (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[3:54] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Remote host closed the connection)
[3:58] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) Quit (Quit: Leaving.)
[4:01] * sagelap1 (~sage@211.sub-166-250-72.myvzw.com) has joined #ceph
[4:04] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[4:09] * sagelap1 (~sage@211.sub-166-250-72.myvzw.com) Quit (Ping timeout: 480 seconds)
[4:21] * sagelap (~sage@2600:1012:b003:13e6:6155:4a42:a114:d539) has joined #ceph
[4:44] * sagelap (~sage@2600:1012:b003:13e6:6155:4a42:a114:d539) Quit (Ping timeout: 480 seconds)
[5:05] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[5:10] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) has joined #ceph
[5:10] <ryant5000> is there any way to determine the total space usage of a directory *including* all of its snapshots?
[5:11] <ryant5000> also, does getfattr work with ceph-fuse or just the kernel implementation?
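For reference, CephFS exposes recursive directory statistics as virtual extended attributes, which is the closest standard answer to the first question; whether snapshot data is counted, and whether it behaves identically under ceph-fuse, is not confirmed in this log. A minimal sketch, with the mount point and directory as placeholders:

    # Recursive byte count for a directory tree via CephFS virtual xattrs
    getfattr -n ceph.dir.rbytes /mnt/ceph/somedir
    # Dump the other recursive stats (rfiles, rsubdirs, ...) in one go
    getfattr -d -m 'ceph.' /mnt/ceph/somedir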
[5:32] * thafreak (~thafreak@198.144.180.21) has left #ceph
[5:48] * dmick (~dmick@2607:f298:a:607:7578:af16:6927:7001) Quit (Quit: Leaving.)
[6:21] <elder> sage, snapshot ids are monotonically increasing, aren't they?
[6:21] <sage> elder: yeah
[6:22] <elder> There should never be a case where a new set of snapshots would have anything in the middle that wasn't present in an older one.
[6:23] <elder> If a new set of snapshots has anything different from a previous set for the same image, it will be either: 1) one that was there before is missing; or 2) one or more new ones have been added to the set--and they will be higher than the previous high snapshot id.
[6:23] <elder> Right?
[6:24] <elder> Or is there some weird scenario where something else might occur that you can think of?
[6:27] <elder> I'll let you ponder that. I think I better get some sleep.
[6:28] <elder> If you conclude anything one way or the other, let me know (e-mail is fine too)
[6:28] <elder> Good night.
[6:44] * deepsa (~deepsa@122.172.3.58) Quit (Remote host closed the connection)
[6:46] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[6:47] * deepsa (~deepsa@115.242.146.235) has joined #ceph
[6:52] * widodh_ (~widodh@minotaur.apache.org) has joined #ceph
[6:57] * widodh (~widodh@minotaur.apache.org) Quit (Ping timeout: 480 seconds)
[7:33] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) has joined #ceph
[7:37] * tnt_ (~tnt@106.184-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[7:38] * tnt_ (~tnt@106.184-67-87.adsl-dyn.isp.belgacom.be) Quit ()
[8:46] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[8:46] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[9:08] * s[X] (~sX]@eth589.qld.adsl.internode.on.net) Quit (Ping timeout: 480 seconds)
[9:21] * widodh_ (~widodh@minotaur.apache.org) Quit (Read error: Operation timed out)
[9:23] * widodh (~widodh@minotaur.apache.org) has joined #ceph
[9:34] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:41] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:43] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[9:44] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[9:53] * renzhi (~renzhi@178.162.174.69) Quit (Quit: Leaving)
[9:57] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[10:08] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[10:12] * yoshi (~yoshi@p22043-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[10:20] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[10:20] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[11:16] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[11:36] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[11:37] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[12:03] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Read error: Operation timed out)
[12:04] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[12:05] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[12:15] * sage (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[12:16] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) has joined #ceph
[12:42] * Dr_O (~owen@heppc049.ph.qmul.ac.uk) has joined #ceph
[13:13] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[14:14] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) has joined #ceph
[14:43] * deepsa_ (~deepsa@122.172.1.251) has joined #ceph
[14:48] * deepsa (~deepsa@115.242.146.235) Quit (Remote host closed the connection)
[14:48] * deepsa_ is now known as deepsa
[14:59] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) has joined #ceph
[15:12] * ryant5000 (~ryan@cpe-67-247-9-63.nyc.res.rr.com) has left #ceph
[15:12] <elder> sage, whenever you're up I'd like to talk about the thing I mentioned last night. I think I've convinced myself that new snapshots can show up with lower ids than the previous maximum. But I'd like to talk through it.
[15:17] <elder> The reason I started thinking about it though was that I think there's a bug in the code that merges in an updated set of snapshots and this (wrong) theory was one possible reason why it didn't matter.
[15:17] * lofejndif (~lsqavnbok@09GAAGUKF.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:48] * lxo (~aoliva@lxo.user.oftc.net) Quit (Read error: Operation timed out)
[15:52] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[16:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[16:02] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[16:02] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[16:19] <elder> ceph tracker isn't working for me.
[16:25] <joao> looks like RoR is to blame
[16:29] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[16:29] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[16:33] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[16:52] * sagelap (~sage@cpe-76-94-40-34.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:11] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:18] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[17:28] * sagelap (~sage@240.sub-166-250-72.myvzw.com) has joined #ceph
[17:30] * lofejndif (~lsqavnbok@09GAAGUKF.tor-irc.dnsbl.oftc.net) Quit (Ping timeout: 480 seconds)
[17:35] * bchrisman1 (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:38] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[17:38] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) has joined #ceph
[17:44] * loicd (~loic@173-12-167-177-oregon.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[17:45] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[17:46] * gregorg_taf (~Greg@78.155.152.6) Quit (Quit: Quitte)
[18:02] * loicd (~loic@67.23.204.5) has joined #ceph
[18:08] <nhm> this is a bit late, but good morning #ceph. :)
[18:11] * sagelap (~sage@240.sub-166-250-72.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:17] * Tv_ (~tv@2607:f298:a:607:b435:f9f6:cf25:1ca2) has joined #ceph
[18:18] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[18:18] * LarsFronius (~LarsFroni@testing78.jimdo-server.com) Quit (Remote host closed the connection)
[18:42] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[18:48] * loicd (~loic@67.23.204.5) has joined #ceph
[18:49] * loicd (~loic@67.23.204.5) Quit ()
[18:50] <sagewk> joshd: can you look at branch bug-2796 ?
[18:51] <sagewk> passed regression suite
[18:51] <joshd> elder: you were right about snapids being monotonically increasing. the snapcontext is always sorted as well
[18:52] <elder> Yes but I was wrong about the other thing.
[18:52] <joshd> sagewk: the kernel doesn't have this problem too, does it?
[18:52] * bchrisman (~Adium@108.60.121.114) has joined #ceph
[18:52] <elder> At least in principle, it's possible for a snapshot id to show up that's less than what had been the previous maximum.
[18:52] * BManojlovic (~steki@212.200.241.106) has joined #ceph
[18:53] <sagewk> joshd: hmm, might.. need to look.
[18:53] <joshd> elder: do you mean in the snapshot context? or the list of snapshots? that won't happen (if it does, it's a bug)
[18:53] <elder> In the snapshot context.
[18:54] <elder> I say in principle, because it could occur if two clients concurrently attempted to create a snapshot on the same rbd image.
[18:54] <elder> (Sorry, no, not in the snapshot context)
[18:54] <elder> I mean on the client, as it's processing a new snapshot context.
[18:56] <elder> When I create a snapshot, it should just stash it away, right? It should not affect the existing image's behavior at all, right?
[18:57] <joshd> elder: the new class method for adding snapshots prevents the race from multiple clients
[18:57] <elder> I have a fix for a bug in the kernel code for this condition for old images.
[18:58] <joshd> ok
[18:58] <elder> I'll be posting it soon, but I need to verify it's working right. It's close but I just got an I/O error so I have to figure out why.
[18:58] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[18:58] <joao> sjust, around?
[18:59] <joshd> a new snapshot doesn't affect the existing image, but all subsequent writes should use the new snapshot context - that's what actually creates the snapshot
[18:59] * loicd (~loic@67.23.204.5) has joined #ceph
[18:59] <elder> OK, well if I create a snapshot, I should still be able to write to the original image shouldn't I?
[18:59] <elder> I think I'm getting an I/O error after I created a snapshot and tried to write to the original image.
[18:59] <joshd> yes, that's unrelated
[19:00] <elder> OK, that was my understanding.
[19:02] <elder> Nope, getting the same problem without any of my patches applied.
[19:02] <elder> Bummer.
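A minimal sketch of the write-after-snapshot sequence elder is testing, using the standard rbd CLI; the image and snapshot names are placeholders and the mapped device number may differ:

    # Create and map a test image, then write to it
    rbd create test-img --size 1024
    rbd map test-img
    dd if=/dev/zero of=/dev/rbd0 bs=4M count=4 oflag=direct
    # Take a snapshot, then write to the original image again;
    # the bug discussed above made this second write fail with EIO
    rbd snap create --snap snap1 test-img
    dd if=/dev/zero of=/dev/rbd0 bs=4M count=4 oflag=direct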
[19:12] * jluis (~JL@89.181.150.156) has joined #ceph
[19:12] * chutzpah (~chutz@100.42.98.5) has joined #ceph
[19:14] * Cube (~Adium@12.248.40.138) has joined #ceph
[19:15] * allsystemsarego (~allsystem@188.25.131.234) has joined #ceph
[19:18] * joao (~JL@89.181.156.255) Quit (Ping timeout: 480 seconds)
[19:20] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[19:20] * loicd (~loic@67.23.204.5) has joined #ceph
[19:21] * LarsFronius (~LarsFroni@2a02:8108:380:90:992f:e637:b392:68e) has joined #ceph
[19:36] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[19:40] * loicd (~loic@67.23.204.5) has joined #ceph
[19:43] * MarkDude (~MT@74-92-171-141-Oregon.hfc.comcastbusiness.net) has joined #ceph
[19:44] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:44] * loicd (~loic@67.23.204.5) Quit ()
[19:50] * Ryan_Lane (~Adium@216.38.130.164) has joined #ceph
[20:02] * loicd (~loic@67.23.204.5) has joined #ceph
[20:07] * loicd (~loic@67.23.204.5) Quit (Read error: Connection reset by peer)
[20:09] * loicd (~loic@67.23.204.5) has joined #ceph
[20:13] * izdubar (~MT@74-92-171-141-Oregon.hfc.comcastbusiness.net) has joined #ceph
[20:14] * izdubar (~MT@74-92-171-141-Oregon.hfc.comcastbusiness.net) Quit ()
[20:17] * JJ (~JJ@cpe-76-175-17-226.socal.res.rr.com) has left #ceph
[20:22] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[20:22] * loicd (~loic@67.23.204.5) has joined #ceph
[20:23] * loicd (~loic@67.23.204.5) Quit ()
[20:24] <sjust> joao: I'm around
[20:30] <nhm> ooh, I think I might have just caught a break on this controller issue.
[20:31] <nhm> apparently the cache setting "drive default (off)" and "off" aren't the same thing.
[20:31] <sjust> heh
[20:32] <gregaf> what was it set to before?
[20:32] <nhm> cache "on": sucks, cache "drive default (off)": sucks, cache "off": promising
[20:32] <nhm> gregaf: drive default
[20:32] <gregaf> weird... I think that's the opposite of what dho saw
[20:32] * deepsa (~deepsa@122.172.1.251) Quit (Quit: Computer has gone to sleep.)
[20:32] <gregaf> (but I could be misremembering)
[20:33] <nhm> gregaf: you'd think it wouldn't matter with the controller cache at all really.
[20:38] <gregaf> well, if the drive is caching stuff then the controller might overestimate the drive's speed and try and throw too much data at it at once
[20:38] * kfranklin (~kfranklin@adsl-99-64-33-43.dsl.pltn13.sbcglobal.net) has joined #ceph
[20:38] <nhm> the controller knows if the cache is on or off though. Seems like it should be able to handle that.
[20:39] <gregaf> yeah, I'm just saying it does matter if it's on or off, even though the controller has a cache ;)
[20:42] <nhm> gregaf: well yes, it clearly does since the results are different ;)
[20:42] * jluis is now known as joao
[20:42] <nhm> gregaf: actually, I'm more disturbed that "drive default (off)" and "off" seem to behave differently.
[20:43] <gregaf> yes, that is disturbing
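As an aside, when the drives are exposed directly to the OS (an assumption; behind many RAID controllers they are not), hdparm can report and toggle the on-drive write cache, which is one way to see what a controller's "drive default" actually resolves to. The device path is a placeholder:

    # Report the drive's current write-cache state
    hdparm -W /dev/sdb
    # Explicitly disable or enable the on-drive write cache
    hdparm -W0 /dev/sdb
    hdparm -W1 /dev/sdb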
[20:43] * James_259 (~James259@94.199.25.228) has joined #ceph
[20:43] <Tv_> so no explosions in the sepia network?
[20:43] <nhm> Tv_: seems fine here
[20:44] <elder> It'll take a while for the sound to reach me.
[20:44] <Tv_> elder: the ground wave will be there faster
[20:45] * James259 (~James259@94.199.25.228) Quit (Ping timeout: 480 seconds)
[20:49] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:51] * dmick (~dmick@2607:f298:a:607:651d:75bd:9b07:b19e) has joined #ceph
[20:55] * glowell (~glowell@dhcp6-19.nersc.gov) has joined #ceph
[21:00] <dspano> Good afternoon everyone. I removed one of my osds from my cluster and rebuilt it. Now when I add it back in with the other OSD, it locks the cluster until I issue the ceph osd lost 0 --yes-i-really-mean-it command.
[21:00] <dspano> My guess is I'm making some rookie mistake, but I can't for the life of me figure out what.
[21:02] <dspano> It seems that it's stuck at peering. pg v228812: 412 pgs: 193 active+clean, 219 peering;
[21:03] <elder> joshd, sagewk the problem goes back at least to rbd in Linux 3.2. After a snapshot, writes to the original image produce EIO.
[21:03] <elder> Returning to my current code and will figure it out in that environment.
[21:06] <Tv_> dspano: can you be more explicit about how you removed it, how you added it, are you reusing the osd id or not
[21:08] <dspano> Tv_: I stopped it, then told the cluster it was lost. Removed it with ceph osd crush remove osd.0 and ceph osd rm 0
[21:09] <dspano> Tv_: Then ran ceph osd create, received 0 as the id to use, then ran ceph-osd -i 0 --mkfs --mkkey and ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /etc/ceph/keyring.admin.
[21:10] <Tv_> dspano: did you mkfs -t xfs the osd data disk?
[21:10] <Tv_> dspano: or just ceph-osd --mkfs?
[21:11] <dspano> Tv_: Initially I ran this the first time. mkfs.xfs -f /dev/mapper/vg--ha-cep
[21:11] <Tv_> dspano: but not between the ceph osd rm 0 and ceph-osd --mkfs?
[21:11] <dspano> Tv_: Nope.
[21:11] <Tv_> ceph-osd --mkfs doesn't delete the data (these days)
[21:12] <Tv_> if you want to throw it away, you need to throw it away
[21:12] <dspano> Tv_: I'll give that a try.
[21:30] <dspano> Tv_: I knew it was a rookie mistake.
[21:30] <dspano> Tv_: I'll go get my dunce cap and my stool.
[21:31] <dspano> Tv_: Thanks for your help.
[21:34] <dspano> Tv_: I thought just clearing out the directory would do it.
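Putting Tv_'s point together with the commands dspano quoted above, a sketch of the full rebuild sequence; the mount point, device path, and keyring location are placeholders, and the key extra step is re-running mkfs on the data disk before ceph-osd --mkfs, since the latter does not clear old data:

    # Remove the failed OSD (id 0) from the cluster
    ceph osd crush remove osd.0
    ceph osd rm 0
    # Wipe and remount the data disk -- ceph-osd --mkfs will not do this for you
    umount /var/lib/ceph/osd/ceph-0
    mkfs.xfs -f /dev/VGNAME/LVNAME
    mount /dev/VGNAME/LVNAME /var/lib/ceph/osd/ceph-0
    # Recreate the OSD with the same id and register its new key
    ceph osd create                 # should hand back id 0 again
    ceph-osd -i 0 --mkfs --mkkey
    ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring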
[21:40] <elder> sjust, I have an old patch from you (circa Thanksgiving 2011). It is not committed. I may have asked you about it before. Can I send it to you, and have you tell me whether to either commit it or discard it?
[21:40] <sjust> sure
[21:43] <sjust> elder: it's sort of irrelevant now since we don't do preferred pgs anymore
[21:44] <elder> So discard.
[21:44] <sjust> yeah
[21:44] <elder> OK.
[21:44] <elder> Thanks a lot.
[21:45] * benner (~benner@193.200.124.63) Quit (Ping timeout: 480 seconds)
[21:47] * Enigmagic (enigmo@c-24-6-51-229.hsd1.ca.comcast.net) has joined #ceph
[21:48] <Enigmagic> does anyone know what these errors mean? i restarted the cluster and it's gone away but some files that were written while the warnings were occurring contain null data.
[21:48] <Enigmagic> 2012-07-14 20:41:48.362942 7fd9f9c85700 0 log [WRN] : client.5631 10.0.2.2:0/1441501938 misdirected client.5631.1:130173 0.127 to osd.2 not [3,2,7] in e1219/1219
[21:50] <Enigmagic> and if the client is in fact chucking data into the wrong osd, shouldn't it return an error back to the client?
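One quick cross-check for a misdirected-op warning like that is to ask the cluster which OSDs it currently maps the named PG (0.127 here) or a specific object to; the pool and object names below are placeholders:

    # Up/acting OSD sets for the PG named in the warning
    ceph pg map 0.127
    # Or map a specific object to its PG and OSDs
    ceph osd map mypool myobject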
[21:59] <elder> joshd, let me know when you return.
[21:59] <joshd> elder: I'm back
[22:00] * MarkDude (~MT@74-92-171-141-Oregon.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:04] <dmick> Enigmagic: I *think* I just saw a bug about that
[22:06] <dmick> http://tracker.newdream.net/issues/2022 could be related
[22:09] <Enigmagic> dmick: certainly looks like it could be the same thing
[22:10] <Enigmagic> is there any way to request that it gets backported to v0.48 (if it's not already..) ?
[22:12] <dmick> that's a good question
[22:15] <Tv_> Enigmagic: that's already there ("Backport: argonaut")
[22:16] <dmick> ff67210ec2e754c13d7d8bcbf0f01121ee82f722 is not so marked, however
[22:16] <dmick> could we do that after the fact, or is there another place besides the commit message to do it?...
[22:16] <Tv_> dmick: flagging commits like that assumes you can see into the future, so i wouldn't worry about that
[22:16] <Tv_> dmick: bug tracker
[22:17] * nhorman (~nhorman@2001:470:8:a08:7aac:c0ff:fec2:933b) Quit (Quit: Leaving)
[22:17] <dmick> (I've just been asking about this process and have gotten agreement from our fearless leader that it probably needs writing up)
[22:17] <joshd> it's in the stable branch already
[22:18] <Tv_> oh yeah, ef6beec99207f6f42b14c1e71fd944c7246ea49a
[22:18] <Tv_> uh git says v0.48argonaut contains that
[22:18] <Tv_> so not just in the stable branch, released
[22:19] <Tv_> 41a570778a51fe9a36a5b67a177d173889e58363 looks related, and is not released
[22:20] <Tv_> but next argonaut minor release should contain it, so nothing to do here
[22:20] <Enigmagic> k
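For the record, the containment checks Tv_ describes can be reproduced against a ceph git checkout with git's --contains queries (the hashes are the ones quoted above):

    # Which tags already contain the backported fix?
    git tag --contains ef6beec99207f6f42b14c1e71fd944c7246ea49a
    # Which branches contain the related commit that is not yet in a release?
    git branch -a --contains 41a570778a51fe9a36a5b67a177d173889e58363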
[22:25] <Tv_> ruh-roh i think tracker broke
[22:25] * dmick nods
[22:32] <Cube> back up!
[22:34] * benner (~benner@193.200.124.63) has joined #ceph
[22:36] * loicd (~loic@67.23.204.5) has joined #ceph
[22:43] * danieagle (~Daniel@177.43.213.15) has joined #ceph
[22:50] * loicd (~loic@67.23.204.5) Quit (Quit: Leaving.)
[22:53] * lofejndif (~lsqavnbok@82VAAE6S1.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:01] * danieagle (~Daniel@177.43.213.15) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[23:03] * chutzpah (~chutz@100.42.98.5) Quit (Ping timeout: 480 seconds)
[23:09] * allsystemsarego (~allsystem@188.25.131.234) Quit (Quit: Leaving)
[23:28] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) has joined #ceph
[23:35] * s[X] (~sX]@ppp59-167-157-96.static.internode.on.net) Quit (Remote host closed the connection)
[23:41] * lofejndif (~lsqavnbok@82VAAE6S1.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[23:55] <elder> joshd, I've ported all your patches. Doesn't help the I/O error on write after snap, but at least I'll have your other fixes in.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.