#ceph IRC Log

IRC Log for 2010-11-30

Timestamps are in GMT/BST.

[0:22] <DeHackEd> If I wanted to use ceph in a disaster-recovery scenario (i.e., a whole building knocked off the map), I think I'd have to have disk servers in both locations suitably mapped via CRUSH, and then disk-mirror the metadata servers between sites. The idea is that the disk servers are in both locations while the metadata/monitor servers are local only, with fail-over. Does that sound good?
[0:23] <cmccabe> johnl: I submitted a fix for the infinite recursion in the stack trace dumper
[0:23] <johnl> sweet
[0:23] <cmccabe> johnl: might improve your experience at least a bit with that
[0:23] <johnl> ah, so it's a single crash but the dumper is stuck in a loop then!
[0:24] <cmccabe> johnl: although it won't prevent the segfault :\
[0:24] <johnl> of course
[0:24] <johnl> was trying to understand how it was looping a crash but not changing pid, heh
[0:24] <johnl> makes sense now
[0:24] <gregaf1> DeHackEd: I'm not quite sure I understand what you're proposing
[0:25] <cmccabe> johnl: I've seen this before with heap corruption... whenever the signal handler touches the heap it segfaults again
[0:25] <gregaf1> the MDS doesn't have any local storage; all its data is stored on the OSDs
[0:25] <cmccabe> johnl: it could be worth cranking up malloc debugging on your cluster to see if that gives any warnings
[0:26] <johnl> sagewk: think you'll need anything more from me on #614? I'd like to wipe the data and start my testing again (rendering the bug harder to reproduce I'd imagine)
[0:26] <johnl> cmccabe: sounds good. how'd I do that?
[0:26] <DeHackEd> gregaf1: oh... that simplifies things
[0:27] <cmccabe> johnl: try adding this to your bashrc:
[0:27] <cmccabe> export MALLOC_PERTURB_=$(($RANDOM % 255 + 1))
[0:27] <DeHackEd> the objective is to build something like a SAN, with live off-site redundancy, on the cheap.
[0:28] <cmccabe> johnl: also try export MALLOC_CHECK_=2
[0:28] <gregaf1> are you expecting to have a wide pipe between the sites, DeHackEd?
[0:28] <cmccabe> johnl: notice the trailing underscore... yeah, it's weird
[0:28] <DeHackEd> yeah, gigabit or 10gig
[0:28] <johnl> that perturb just an effort to change the offsets or something, to avoid the same heap corruption?
[0:28] <cmccabe> johnl: well, it's actually the opposite... to randomize it so that it occurs more often... heh
[0:29] <johnl> heh
[0:29] <cmccabe> johnl: actually maybe just try MALLOC_CHECK_ for now. If you run the daemons using /etc/init.d/ceph you probably want to put the exports into there
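(For reference, a minimal sketch of the glibc malloc debugging setup being discussed; putting the exports into /etc/init.d/ceph as cmccabe suggests is shown here, though exactly where in that script they need to go is an assumption, any point before the daemons are launched should do.)

    # glibc heap-debugging knobs (note the trailing underscores)
    export MALLOC_CHECK_=2                          # print a diagnostic and abort when heap corruption is detected
    export MALLOC_PERTURB_=$(($RANDOM % 255 + 1))   # fill allocated/freed memory with a known byte to flush out bad accesses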
[0:29] <gregaf1> DeHackEd: oh, that's not too bad then if you just design the CRUSH map carefully
[0:29] <DeHackEd> as much as I'd like to keep IO as local as possible (writes excepted), I can make do with reads coming from all over the place
[0:30] <michael-ndn> DeHackEd: if you had a third location you could place one metadata server there
[0:30] <gregaf1> you'll want to set it up so that at least one copy of the data is on each site and split the MDSes and monitors across the sites
[0:30] <michael-ndn> then if one site went offline, the third location could decide who was still online
[0:30] <gregaf1> turn on backup MDSes at each site so if you lose one site you have enough MDS machines to keep everything running
[0:31] <DeHackEd> michael-ndn: that's actually a good idea. I think I need to do some more research though
[0:31] <johnl> cmccabe: as long as I start ceph from the same shell I exported the env, it should inherit it
[0:31] <cmccabe> johnl: how repeatable is this segv? Have you tried restarting your cluster?
[0:31] <michael-ndn> just balance the metadata servers between the two locations
[0:31] <johnl> (and indeed, I've confirmed it is indeed inherited)
[0:31] <gregaf1> and then you'll want one (or 1/3 of) monitor (not MDS, michael-ndn :) ) off-site to make sure losing one site doesn't bring down your ability to change the map
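(A rough ceph.conf sketch of the layout gregaf1 and michael-ndn are describing: monitors and MDSes split across sites A and B, plus a tiebreaker monitor at a third site. The host names and addresses are placeholders, and the exact section layout may differ between versions.)

    [mon.a]
            host = mon-site-a
            mon addr = 10.0.1.10:6789
    [mon.b]
            host = mon-site-b
            mon addr = 10.0.2.10:6789
    [mon.c]
            ; third-site tiebreaker: keeps a monitor majority alive if either main site is lost
            host = mon-site-c
            mon addr = 10.0.3.10:6789
    [mds.a]
            host = mds-site-a
    [mds.b]
            ; standby MDS at the other site
            host = mds-site-b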
[0:32] <johnl> cmccabe: every single time. whole cluster is currently down. bring mon, mds and osd up on one node and it crashes very quickly
[0:32] <michael-ndn> since a majority of mds will be enough to decide who is the up system
[0:32] <michael-ndn> ah ok, yeah monitor
[0:32] <cmccabe> johnl: can I log in and try something?
[0:32] <DeHackEd> yeah I got that. and while latency isn't bad between the hypothetical third site (<5ms) I'm still worried that it might impact performance
[0:32] <johnl> in fact, just running the osd on its own crashes
[0:33] <johnl> yep, gimme your ssh key
[0:33] <cmccabe> ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxT/EmlXe8YO4mJHpa8zMd4yibsO7ygg25n+8lIfkUeU2ugAn+Xt05IJKbofZ+6gok1dRO+sIUp4QMolCs2Sf9AuOJrvMbgZj398VQMmGyOc/3m9nUPiwzEXalrppn7TU5QLIHx0XccOuand2km/r3Bcoc3olc7VrIVBpJ8jBxOhaABoPtTYp6QiVYbeAYGpUqY+OyVpHVe23h5LFupMNr5EOgWDnA/8RViMHO/TO4Gw2Dkf7//o3r8BRY/HZHSQTRMA02Oq1D2kZK6Q1o3eQX528CaZkfVpd8RSSxIh9fiqVRJhXVZX/DkHoZbTOchFQtBpO9PnhjTkV84XTj3uH7Q== cmccabe@flab
[0:34] <gregaf1> DeHackEd: the monitors actually don't need super-low latency since they're maintaining a fairly simple set of Paxos state machines; a normal internet connection within one continent is probably fine for them
[0:34] <johnl> root@109.107.35.141
[0:34] <gregaf1> for the MDSes high latency would be bad
[0:34] <johnl> /usr/bin/cosd -i 0 -c /etc/ceph/ceph.conf -D
[0:35] <johnl> do as you will. is just a test box. no real data
[0:35] <cmccabe> johnl: ok
[0:37] <cmccabe> johnl: can I recompile this?
[0:38] <johnl> cmccabe: it's installed from the package repository
[0:38] <johnl> though I did at one time have a compiled version on there
[0:38] <cmccabe> hmm
[0:38] <johnl> feel free to remove the package and git update and build.
[0:38] <johnl> there is a git repo in /root/
[0:41] <jantje> hi !
[0:47] <gregaf1> hi jantje
[0:50] <gregaf1> DeHackEd: you wouldn't want to mirror the monitor storage either, since each monitor needs to be distinct
[0:50] <gregaf1> but as long as you have a third site tiebreaker it's not a real problem if (less than half your) monitors are inaccessible
[0:51] <DeHackEd> gregaf1: acting on the assumption it wasn't, the intention was that I'd have two monitors with the same name and disk at sites A and B, but site B would only run if site A was F'd.
[0:52] <gregaf1> I think you'd run into issues doing that
[0:54] <DeHackEd> I like the third-site method a lot...
[0:54] <DeHackEd> (and I can provide)
[0:54] <gregaf1> separate from that, depending on what exactly you're after you could also set it up so that you had a primary data center which serviced all your reads unless it died, and then failed over to the secondary site
[0:55] <gregaf1> that would work by setting up your CRUSH map so that it always chose OSDs out of one data center as the primary
[0:55] <DeHackEd> that's roughly what I'm going for. the majority of disk IO would be entirely within said datacenter. writes would be mirrored via replication to datacenter B, which could take over should the worst happen
[0:56] <gregaf1> as long as you have sufficient bandwidth (and low latency) between the sites this can be done, although you have to set up your configuration *very* carefully
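(A sketch of the kind of CRUSH rule being described, assuming the CRUSH map defines buckets named site-a and site-b for the two data centers; the exact syntax and type names can differ between versions, so treat this as illustrative only. Since the first OSD emitted becomes the primary, taking from site A first keeps reads and primary writes in site A.)

    rule primary_in_site_a {
            ruleset 1
            type replicated
            min_size 2
            max_size 2
            # first replica (the primary) always comes from site A
            step take site-a
            step choose firstn 1 type osd
            step emit
            # second replica always lands in site B
            step take site-b
            step choose firstn 1 type osd
            step emit
    }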
[0:57] <DeHackEd> at this point I haven't even tried running a 1-machine cluster yet
[0:58] <gregaf1> the problem with this scenario for most people is that Ceph was really designed to run in one data center so there's no delayed replication or anything — your inter-site connection needs to be fat enough to handle all the write bandwidth as it happens
[0:58] <cmccabe> johnl: I see you have some other cosd running, can I bring those down?
[0:59] <johnl> running where?
[0:59] <johnl> I don't see one
[0:59] <cmccabe> 10.135.211.78
[0:59] <DeHackEd> bandwidth won't be the dealbreaker
[0:59] <cmccabe> johnl: oh sorry, looking at the wrong window
[0:59] <johnl> whole cluster is down
[0:59] <cmccabe> johnl: nvm
[1:00] <johnl> k
[1:01] <gregaf1> I think what you're trying to do is feasible then, DeHackEd
[1:01] <gregaf1> which is pleasant after the number of times we've had someone come in trying to do this over a 10Mb inter-site connection :)
[1:01] <DeHackEd> oh dear god no
[1:02] <DeHackEd> OC-12 minimum
[1:02] <DeHackEd> (we're still arguing over exactly which sites will be A, B and C)
[1:02] <gregaf1> nice
[1:11] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[1:20] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[1:28] * Mark22 is now known as Mark23
[1:36] * jantje_ (~jan@paranoid.nl) has joined #ceph
[1:43] * jantje (~jan@paranoid.nl) Quit (Ping timeout: 480 seconds)
[1:53] <DeHackEd> I can understand why you'd be put off by that. But anyways, you think that's a feasible strategy?
[1:55] <michael-ndn> you will be the first to try that configuration I believe, so I would consider it a test, heh
[1:55] * timg (~tim@leibniz.catalyst.net.nz) has joined #ceph
[1:57] <DeHackEd> oh good. for a minute there I was worried this was going to be difficult
[1:58] <cmccabe> johnl: #614 should be resolved
[2:03] <greglap> DeHackEd: like I said, it'll require careful configuration but I think it should be okay
[2:03] <DeHackEd> and on that note I should start reading the documentation a bit more carefully this time...
[2:04] <greglap> probably the biggest challenge will be that to make it work properly you'll need to play around with a lot of our more primitive configuration tools to set up the CRUSH map and such properly :)
[2:04] <greglap> feel free to ask lots of questions
[2:04] <DeHackEd> can you select the preferred read targets as well as replication hierarchy?
[2:04] * timg (~tim@leibniz.catalyst.net.nz) Quit (Ping timeout: 480 seconds)
[2:05] <greglap> despite some earlier attempts to set up different read/write strategies, these days all reads and writes go through the "primary" on a PG
[2:05] <greglap> so what you'll need to do is set up the CRUSH map so that it always selects primaries that are in site A
[2:06] <greglap> and always selects at least one replica in site B
[2:06] <greglap> and then whenever a client machine does a write it'll go to an OSD in site A, and get replicated from that machine
[2:06] <greglap> and all reads will also be serviced by that machine
[2:07] <greglap> unless it goes down, in which case the next OSD in the list will become the acting primary
[2:08] <greglap> generally speaking of course once a machine goes down its data is re-replicated, but there are configuration options to specify how long the delay needs to be before that happens
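(The configuration option greglap is most likely referring to is the monitors' down-to-out interval; the option name below is my best understanding rather than something confirmed in the log, so double-check it against the documentation for your version.)

    [mon]
            ; how long (in seconds) an OSD can be down before it is marked out
            ; and its data gets re-replicated elsewhere
            mon osd down out interval = 600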
[2:08] <greglap> (I'm not sure how much reading you've done so I'm not sure how much detail you're after here)
[2:09] <DeHackEd> I've looked over most stuff on the wiki
[2:11] <DeHackEd> I'll be running non-btrfs for now. we have RAID batteries so it shouldn't cause any troubling corruption.
[2:11] <DeHackEd> (right?)
[2:12] <greglap> we recommend btrfs largely because it gives us much better hooks into the filesystem
[2:12] <greglap> so we can make certain types of things more efficient to some degree
[2:12] <greglap> also, snapshots are way cheaper with btrfs
[2:12] <greglap> since it has a mechanic for that natively
[2:12] * bchrisman (~Adium@70-35-37-146.static.wiline.com) has joined #ceph
[2:13] <greglap> with other FSes we need to do manual copies so it takes more disk space and means that the data needs to get copied once you try to write to it
[2:13] <greglap> (well, to the chunk, which by default has a 4MB granularity)
[2:14] <greglap> the journaling and consistency checks have improved enough in the past year or so that you're not going to get corruption with other FSes
[2:15] * cmccabe (~cmccabe@dsl081-243-128.sfo1.dsl.speakeasy.net) has left #ceph
[2:17] * greglap1 (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[2:17] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Read error: Connection reset by peer)
[3:00] * eternaleye_ (~eternaley@195.215.30.181) has joined #ceph
[3:00] * eternaleye (~eternaley@195.215.30.181) Quit (Remote host closed the connection)
[3:31] <DeHackEd> whoops
[3:35] * eternaleye_ is now known as eternaleye
[3:37] <greglap1> what's the problem, DeHackEd?
[3:37] <greglap1> I'm going to go offline for about 20 minutes but I'll check the logs and help you out when I get back :)
[3:38] * greglap1 (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[3:38] <DeHackEd> I wrecked the VM I was going to test with
[3:38] <DeHackEd> :)
[3:38] <DeHackEd> ah... well...
[3:43] * lidongyang_ (~lidongyan@222.126.194.154) has joined #ceph
[3:50] * lidongyang (~lidongyan@61.14.130.209) Quit (Ping timeout: 480 seconds)
[3:50] * Jiaju (~jjzhang@61.14.130.209) Quit (Ping timeout: 480 seconds)
[3:50] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[4:02] * greglap (~Adium@166.205.136.123) has joined #ceph
[4:04] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[4:05] <greglap> well, pretty boring for me then ;)
[4:06] * sjust (~sam@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[4:34] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[4:40] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[4:53] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[4:53] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[5:02] * greglap (~Adium@166.205.136.123) Quit (Read error: Connection reset by peer)
[5:02] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[5:11] * greglap (~Adium@cpe-76-90-74-194.socal.res.rr.com) has joined #ceph
[5:11] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) has joined #ceph
[6:27] * ijuz__ (~ijuz@p4FFF662E.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[6:39] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) Quit (Ping timeout: 480 seconds)
[7:29] * f4m8_ is now known as f4m8
[7:39] * NoahWatkins (~jayhawk@kyoto.soe.ucsc.edu) has joined #ceph
[8:10] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Quit: WeeChat 0.2.6)
[8:17] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[8:20] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[8:41] * ijuz (~ijuz@p57999A8A.dip.t-dialin.net) has joined #ceph
[9:31] * allsystemsarego (~allsystem@188.25.130.158) has joined #ceph
[11:34] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) has joined #ceph
[12:12] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) Quit (Quit: bye)
[13:02] * allsystemsarego (~allsystem@188.25.130.158) Quit (Quit: Leaving)
[13:09] * hijacker (~hijacker@213.91.163.5) Quit (Remote host closed the connection)
[13:12] * verwilst (~verwilst@router.begen1.office.netnoc.eu) has joined #ceph
[14:54] * verwilst (~verwilst@router.begen1.office.netnoc.eu) Quit (Ping timeout: 480 seconds)
[15:39] * verwilst (~verwilst@router.begen1.office.netnoc.eu) has joined #ceph
[16:05] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[16:10] * f4m8 is now known as f4m8_
[16:19] * allsystemsarego (~allsystem@188.25.130.158) has joined #ceph
[16:33] * Yoric (~David@213.144.210.93) has joined #ceph
[17:35] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[17:39] * greglap (~Adium@cpe-76-90-74-194.socal.res.rr.com) Quit (Quit: Leaving.)
[17:49] * greglap (~Adium@166.205.138.206) has joined #ceph
[18:33] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[18:34] * sjust (~sam@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:38] * cmccabe1 (~cmccabe@adsl-76-199-100-125.dsl.pltn13.sbcglobal.net) has joined #ceph
[18:39] * greglap (~Adium@166.205.138.206) Quit (Quit: Leaving.)
[18:53] * fred_ (~fred@80-219-183-100.dclient.hispeed.ch) has joined #ceph
[18:53] <fred_> hi
[18:54] <cmccabe1> hi fred
[18:54] <fred_> cmccabe1, I tested the unstable branch as you suggested
[18:54] <fred_> cmccabe1, I'm hitting FAILED assert(0 == "ENOSPC handling not implemented")
[18:55] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Remote host closed the connection)
[18:55] <cmccabe1> fred_: what is the disk utilization like on your system?
[18:55] <fred_> cmccabe1, I guess this is because when 1 of my 3 osds crashed, the other 2 tried to share the data, which was too much for them
[18:56] <cmccabe1> fred_: that makes sense
[18:57] <cmccabe1> fred_: unfortunately, like it says, ENOSPC handling is still on the TODO list
[18:57] <fred_> pg v2610312: 792 pgs: 242 active+clean, 439 active+clean+degraded, 111 crashed+down+degraded+peering; 287 GB data, 428 GB used, 110 GB / 539 GB avail
[18:57] <cmccabe1> fred_: of course, even if it weren't, you probably still wouldn't be able to do anything with no disk space :)
[18:58] <fred_> still 1.9G available
[18:58] <cmccabe1> fred_: how about the individual volumes?
[18:59] <fred_> but if that osd could join the cluster, it could free a lot of space. But it seems it asserts before that point :(
[19:01] <cmccabe1> fred_: yeah, it would be nice if you could somehow redistribute the load. There is a feature called rebalancing, but I don't know if you'll be able to use it with a full disk
[19:01] * gregaf1 (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[19:02] <fred_> cmccabe1, the problem is that my osd hits that assertion as soon as it starts
[19:03] <cmccabe1> fred_: can you reduce disk consumption somehow so you're not at 100%... then do a rebalance
[19:04] <fred_> this partition is dedicated to the osd, so nothing else on it ..
[19:06] <fred_> hmm, never played with snapshots, but I've got snap_* dirs. does ceph create them without its users asking? maybe I could remove them...
[19:06] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:06] <cmccabe1> fred_: are you using btrfs?
[19:06] <fred_> yep
[19:06] <cmccabe1> fred_: I think there might be some ways to squeeze extra space out of a btrfs partition... let me check
[19:07] <fred_> cool, thanks
[19:08] <cmccabe1> fred_: hmm... you could try "btrfs filesystem defragment" on the partition
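(For reference, the commands in question; the mount point is a placeholder for wherever the OSD's btrfs partition lives.)

    btrfs filesystem df /data/osd1          # show how btrfs is actually allocating the space
    btrfs filesystem defragment /data/osd1  # the defragment cmccabe1 suggests trying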
[19:09] <cmccabe1> fred_: like I said, the underlying problem is that we need to handle ENOSPC better. But that's not really a quick bugfix, it's kind of a new feature that's planned
[19:10] <gregaf> planned in the sense that maybe we can solve this problem that nobody else has solved :p
[19:11] <fred_> this would not be the first unsolvable problem for which you find a nice solution
[19:12] <cmccabe1> gregaf: nobody ever solves ENOSPC perfectly, but you can get reasonably good solutions
[19:12] <fred_> anyway, is there a way to remove some objects by hand?
[19:13] <fred_> ok, it seems btrfs defrag ate 0.1G :(
[19:13] <cmccabe1> gregaf: reading about btrfs' difficulties with ENOSPC is kind of interesting. It's not an easy problem for sure.
[19:13] <gregaf> yeah
[19:13] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:13] <gregaf> I was thinking specifically of distributed FSes though
[19:13] <cmccabe1> fred_: that might be enough to get you to the point where you can rebalance
[19:13] <gregaf> I haven't seen any very good solutions/workarounds, although I'm not that widely read
[19:13] <fred_> cmccabe1, I mean, I've got 0.1G less available space after the defrag...
[19:14] <cmccabe1> fred_: I don't know as much about rebalancing as some of the others here, but I assume that it will distribute the storage in a more even way and hopefully solve your space problem for now...
[19:14] <cmccabe1> fred_: um... that's unexpected... and how is that even possible?
[19:14] <fred_> still asserts
[19:14] <gregaf> fred_: which machine is asserting?
[19:14] <fred_> osd1
[19:15] <gregaf> is this the one that crashed or one of the ones that tried to rereplicate the remaining data?
[19:15] <fred_> the ones that tried to rereplicate the remaining data
[19:15] <fred_> one of the 2 remaining
[19:15] <fred_> the other one is fine
[19:16] <gregaf> and the one that crashed, is that back up?
[19:16] <fred_> no, stopped it so that it does not also get ENOSPC...
[19:17] <cmccabe1> fred_: let me log in and see where everything is..
[19:17] <cmccabe1> fred_: do you mind if I log in?
[19:17] <fred_> sorry, can't
[19:18] <gregaf> and there really isn't enough room for them to re-replicate onto each other
[19:18] <gregaf> it's a bummer we can't just temporarily adjust the full flag threshold and then bring the third OSD back up
[19:18] <fred_> gregaf, that's my guess
[19:18] <sagewk> wido: around?
[19:19] <fred_> gregaf, can't I remove some random objects (they should exist on the 2 other osds...)
[19:20] <fred_> afk 2-3 minutes
[19:21] <gregaf> fred_: if you're certain that nothing's been changed since the crash, you can go into your OSD data dir and delete entire PG folders if you know that they exist elsewhere
[19:22] <gregaf> you'll want to bring down the cluster, delete PGs that you know exist on a different machine, and then bring all 3 OSDs back up together
[19:22] <gregaf> the placement of PGs will be recalculated and it'll go back to being split among all three OSDs
[19:22] <gregaf> but you need to be really, really certain about what you're deleting
[19:23] * todinini (tuxadero@kudu.in-berlin.de) Quit (Read error: Connection reset by peer)
[19:23] <gregaf> if you can, you might want to back up each OSD somewhere, making sure to preserve xattrs (rsync by default doesn't)
[19:23] * todinini (tuxadero@kudu.in-berlin.de) has joined #ceph
[19:29] <fred_> gregaf, thanks, will do that tomorrow
[19:31] <fred_> as a way to verify that I can safely delete a pg, may I diff the contents of the pg folder on the 2 osds, and assume it is safe to delete if they are the same?
[19:31] <cmccabe1> fred_: yeah, as Greg said, make sure you're using rsync -X to preserve xattrs if you back it up that way
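(A sketch of the backup command being suggested; the source and destination paths are placeholders. The -X flag is what preserves the extended attributes that ceph stores alongside each object.)

    # -a keeps permissions, ownership and timestamps; -X keeps xattrs
    rsync -aX /data/osd1/ /backup/osd1/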
[19:32] <fred_> great thanks. I'll do that then we will see if #590 is fixed.
[19:32] <fred_> have a nice day, bye
[19:32] * fred_ (~fred@80-219-183-100.dclient.hispeed.ch) Quit (Quit: Leaving)
[19:47] * NoahWatkins (~jayhawk@kyoto.soe.ucsc.edu) Quit (Remote host closed the connection)
[19:58] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Read error: No route to host)
[20:13] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[20:30] <gregaf> failboat: have you had a chance to try those fixes for file locking?
[21:08] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[21:11] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[21:13] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[21:14] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[21:14] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[22:15] <wido> sagewk: I'm around for a few minutes now
[22:16] <sagewk> i sent an email.. just wanted to verify what the latest btrfs warnings you were seeing looked like
[22:16] <wido> I'm a bit caught up here at some work, we just got a bunch of new hardware for our production clusters which I've got to get setup
[22:16] <wido> So I didn't get a chance yet to check it with your patch, to see if the messages change
[22:17] <sagewk> no problem. when you do get some time let me know :)
[22:18] <wido> Yes, I hope to get some time to test these things later on this week
[22:18] <wido> I'll keep you updated!
[22:18] <sagewk> thanks :)
[22:18] <wido> got to go now, I'll reply on your other e-mail tomorrow
[22:19] <wido> ttyl!
[22:19] <sagewk> k
[22:19] <sagewk> ttyl
[22:23] * alexxy (~alexxy@79.173.81.171) Quit (Remote host closed the connection)
[22:23] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) has joined #ceph
[22:27] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[23:21] * allsystemsarego (~allsystem@188.25.130.158) Quit (Quit: Leaving)
[23:32] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.