#ceph IRC Log

IRC Log for 2010-10-25

Timestamps are in GMT/BST.

[0:01] * allsystemsarego (~allsystem@188.27.167.113) Quit (Quit: Leaving)
[0:06] * greglap1 (~Adium@cpe-76-90-74-194.socal.res.rr.com) has joined #ceph
[0:13] * greglap (~Adium@cpe-76-90-74-194.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:55] * terang (~me@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[1:02] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) Quit (Ping timeout: 480 seconds)
[1:27] * terang (~me@ip-66-33-206-8.dreamhost.com) has joined #ceph
[2:09] * Guest218 (quasselcor@bas11-montreal02-1128536392.dsl.bell.ca) Quit (Remote host closed the connection)
[2:12] * bbigras (quasselcor@bas11-montreal02-1128536392.dsl.bell.ca) has joined #ceph
[2:12] * bbigras is now known as Guest514
[2:19] * Meths (rift@91.106.136.30) has joined #ceph
[2:29] * deksai (~deksai@96.35.100.192) has joined #ceph
[2:41] * deksai (~deksai@96.35.100.192) Quit (Ping timeout: 480 seconds)
[5:30] * terang (~me@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[7:55] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[7:59] * Yoric_ (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[8:05] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[8:05] * Yoric_ is now known as Yoric
[9:03] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[9:14] <jantje> morning
[9:22] <hijacker> morning
[9:37] * allsystemsarego (~allsystem@188.27.167.113) has joined #ceph
[10:14] <jantje> Hmm, isn't ceph using parallelism to compile?
[10:15] <jantje> I just see one cc1plus process running
[10:15] * terang (~me@pool-173-55-24-140.lsanca.fios.verizon.net) has joined #ceph
[10:16] * gregorg (~Greg@epoc-01.easyrencontre.com) has joined #ceph
[10:43] * Yoric (~David@213.144.210.93) has joined #ceph
[10:55] * johnl (~johnl@cpc3-brad19-2-0-cust563.barn.cable.virginmedia.com) has joined #ceph
[12:41] * gregorg (~Greg@epoc-01.easyrencontre.com) Quit (Quit: Quitte)
[13:34] <jantje> just wondering, I have 3x2 OSDs, 3 MONs and 3 MDSs
[13:34] <jantje> only the logfile from osd0 is telling me the journal is full
[13:34] <jantje> is it only writing to OSD0 ?
[13:36] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Read error: No route to host)
[13:42] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[13:42] <jantje> i can see traffic going to all servers, might be replication
[16:25] * Guest514 (quasselcor@bas11-montreal02-1128536392.dsl.bell.ca) Quit (Ping timeout: 480 seconds)
[16:43] <jantje> suggestion for osd stat; list which OSDs are down
[16:47] * Meths_ (rift@91.106.245.205) has joined #ceph
[16:48] * Yoric (~David@213.144.210.93) Quit (Quit: Yoric)
[16:50] * Yoric (~David@213.144.210.93) has joined #ceph
[16:54] * Meths (rift@91.106.136.30) Quit (Ping timeout: 480 seconds)
[16:54] * Meths_ is now known as Meths
[17:09] * andret (~andre@pcandre.nine.ch) Quit (Remote host closed the connection)
[17:16] * allsystemsarego (~allsystem@188.27.167.113) Quit (Quit: Leaving)
[17:21] * andret (~andre@pcandre.nine.ch) has joined #ceph
[17:39] * greglap1 (~Adium@cpe-76-90-74-194.socal.res.rr.com) Quit (Quit: Leaving.)
[17:44] <jantje> journal throttle <= I'm only seeing this on OSD0 and OSD1 (which is one server, the same server as the active MDS and the MON)
[17:47] * deksai (~deksai@96-35-100-192.dhcp.bycy.mi.charter.com) has joined #ceph
[17:52] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[17:53] * greglap (~Adium@166.205.136.151) has joined #ceph
[17:55] <sage> jantje: i put this in my .bashrc: export MAKEFLAGS="-j"`grep -c processor /proc/cpuinfo`
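(Editor's note: the MAKEFLAGS export above simply makes every make invocation parallel; a minimal one-off equivalent is sketched below. nproc is an assumption about available coreutils, hence the fallback to the same grep sage uses.)

    # build with one make job per CPU core
    make -j"$(nproc)"
    # or, without nproc:
    make -j"$(grep -c processor /proc/cpuinfo)"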
[17:56] <sage> the journal throttle message shows up if the osd's local file system isn't keeping up with the writeahead journal
[18:01] * Meths_ (rift@91.106.247.93) has joined #ceph
[18:08] * Meths (rift@91.106.245.205) Quit (Ping timeout: 480 seconds)
[18:12] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[18:17] * Meths_ is now known as Meths
[18:20] <sage> sick, working from home today
[18:21] <yehudasa> sage: feel better
[18:21] <sage> thanks :)
[18:21] <sage> wanna look at that btrfs hang?
[18:21] <yehudasa> yep
[18:22] <yehudasa> it took 11 hours to reproduce?
[18:22] <sage> yeah :(
[18:23] <yehudasa> how much time did it take to reproduce with the async delete?
[18:23] <sage> i can add back in the debugging printks from before and get it started again
[18:23] <sage> 45 min
[18:24] <sage> i'm just worried that has its own issues
[18:24] <sage> i'll get a machine going with the standard ioctls
[18:25] <yehudasa> can try figuring it out first and see whether it's specific to the async delete or more general
[18:26] <sage> in the 10 hour case the async delete was reverted, it was async create and sync delete
[18:41] * Yoric (~David@213.144.210.93) Quit (Read error: Connection reset by peer)
[18:41] * Yoric_ (~David@213.144.210.93) has joined #ceph
[18:41] <yehudasa> sage: can I free some of your space on skinny:/home/sage?
[18:42] <sage> yeah
[18:42] <sage> rm -r ceph/src/dev/*
[18:42] <sage> probably
[18:42] <yehudasa> not much there
[18:42] <sage> out/?
[18:43] <yehudasa> hmm.. nope
[18:43] <yehudasa> src/linux-2.6.21.5?
[18:43] * greglap (~Adium@166.205.136.151) Quit (Read error: Connection reset by peer)
[18:44] <sage> removing it
[18:44] <yehudasa> ok, thanks
[18:45] <yehudasa> actually, what's been filling up everything is joshd's ceph/src/out
[18:45] <yehudasa> I'll ask him to move it somewhere else
[18:45] * sjust (~sam@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:46] * sentinel_e86 (~sentinel_@188.226.51.71) has joined #ceph
[18:47] * sentinel_e86 (~sentinel_@188.226.51.71) Quit (Remote host closed the connection)
[18:48] * sentinel_e86 (~sentinel_@188.226.51.71) has joined #ceph
[18:48] * sentinel_e86 (~sentinel_@188.226.51.71) Quit (Remote host closed the connection)
[18:48] * sentinel_e86 (~sentinel_@188.226.51.71) has joined #ceph
[18:54] <johnl> I injected a corrupted crushmap and now all my monitors crash when they get quorum. any clues on how I could recover from this?
[18:55] <sage> johnl: i'll take a look.. there's an issue open right?
[18:55] <johnl> yeah, I opened an issue about it the other day
[18:56] <johnl> but assuming that gets fixed, then nobody will be in this situation again, heh
[18:56] <johnl> my data isn't important, I could just trash and start again
[18:56] <johnl> but am interested to know where to even start
[18:57] <sage> if you remove the latest numbered file in each monitor's osdmap/ and osdmap_full/ dirs (they should be the same), and adjust the last_committed file accordingly (subtract 1), then start the monitors, you should be okay
[18:58] <sage> then we need to add a check that the provided crushmap is valid before using it :)
[18:58] <johnl> ta
[18:58] <johnl> was just peeking at those
[18:59] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:01] <johnl> the numbered file in osdmap_full/ didn't match the one in osdmap/
[19:01] <johnl> it was 547 in osdmap/547 and osdmap/last_committed
[19:02] <johnl> but 546 was highest in osdmap_full/
[19:02] <johnl> same on both mons
[19:02] <sage> 247 is probably the offender. it probably crashed while trying to apply the incremental and update osdmap_full
[19:03] <johnl> 547
[19:03] <sage> right :)
[19:03] <johnl> removing it (and updating last_committed) fixed it!
[19:03] <sage> cool.
[19:03] <johnl> back up now, thanks sage.
[19:03] <sage> np
[19:03] <johnl> nice to see it's quite simple to poke around in there
[19:03] <sage> sometimes, at least ;)
[19:03] <johnl> felt like one of those situations where it's a hexeditor or nothing.
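(Editor's note: a sketch of the recovery sage describes above, for reference only. The monitor data path /srv/mon0 is an assumption, the map number 547 is taken from johnl's case, and last_committed is assumed to be the small plain-text counter used by the MonitorStore of this era; stop the monitor before touching any of it.)

    # on each monitor, with cmon stopped
    cd /srv/mon0/osdmap              # hypothetical mon data dir
    ls | sort -n | tail -1           # highest numbered incremental map, e.g. 547
    rm 547                           # drop the map that crashes the monitor
    echo 546 > last_committed        # previous value minus one
    rm -f ../osdmap_full/547         # remove the matching full map, if it exists
    # then restart the monitors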
[19:08] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:09] <johnl> while I'm here, I keep seeing warning about lease_expire being in the future
[19:09] <sage> clocks are out of sync.
[19:09] <johnl> yeah, but by milliseconds
[19:09] <sage> yeah
[19:09] <johnl> lease_expire from mon0 was sent from future time 2010-10-25 17:07:07.059735 with expected time <=2010-10-25 17:07:07.025769, clocks not synchronized
[19:09] <johnl> that expected?
[19:10] <sage> yeah
[19:10] <johnl> heh, ok!
[19:10] <sage> we can make it less picky by default
[19:10] <gregaf1> the default warning period is a tenth of a second
[19:10] <gregaf1> you can adjust it if you like by setting mon_clock_drift_allowed to something larger
[19:11] <johnl> ta.
[19:11] <gregaf1> but the lease period is only 2 seconds and ntp or whatever should have no trouble keeping clock drift much closer than that
[19:11] <gregaf1> oh, actually I guess it defaults to 1/100 of a second, thought we'd turned it up more than that
[19:12] <johnl> gregaf1: that was my next question. if ntp can be expected to keep it closer than that then fine
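(Editor's note: for reference, the option gregaf1 mentions goes in ceph.conf on the monitors; the 0.05 second value below is only an illustration, not a recommendation.)

    [mon]
            ; widen the allowed clock drift before the lease warning fires (seconds)
            mon clock drift allowed = 0.05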
[19:12] <sage> gregaf1: btw we should make sure the lease timeout subtracts off the allowed drift so that people can set it to whatever they want (as long as it's still << lease_interval)
[19:13] <gregaf1> huh?
[19:15] <sage> the drift only matters because of absolute timestamps used for the mon leases.. so we should subtract off the allowed drift when using those lease timeouts on the slaves
[19:15] <sage> if it doesn't do that already, then it won't matter too much how high you set the allowed drift
[19:17] <gregaf1> so make the leases shorter?
[19:18] <sage> on the slaves...
[19:18] <gregaf1> yeah
[19:18] <sage> Paxos::handle_lease() i think
[19:23] <gregaf1> I'll look at it, then
[19:23] <gregaf1> sage: do you know what would cause client operations to hang if there are no outstanding mdsc or osdc requests?
[19:23] <sage> usually caps
[19:24] <gregaf1> like the client is waiting to get issued caps?
[19:24] <sage> ... or lack thereof, blocking a read or write request
[19:24] <gregaf1> that would make sense, it happened while running qa on the mix_stale stuff over the weekend
[19:24] <sage> if you crank up debugging, and then kill -STOP, -CONT the blocked process, you'll see the get_caps helper complaining about what it does/doesn't have with the ino number and all that
[19:25] <gregaf1> yep
[19:25] <gregaf1> I just forgot that cap requests didn't leave a hanging tid so I wasn't sure where to start
[19:25] <gregaf1> thanks!
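(Editor's note: the STOP/CONT trick sage describes is just ordinary signals; a sketch follows, with <pid> standing in for the blocked process. With client debugging turned up, the get_caps helper logs the inode and the caps it is still waiting for once the process resumes.)

    kill -STOP <pid>    # freeze the blocked process
    kill -CONT <pid>    # resume it; watch the client debug log for get_caps output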
[19:30] <johnl> sweet, added a new osd to the crushmap properly this time and it's rebalancing.
[19:33] * gregaf1 (~Adium@ip-66-33-206-8.dreamhost.com) has left #ceph
[19:33] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[19:35] <johnl> love being able to cat the directories for stats!
[19:35] <johnl> bet the linux kernel devs hate that one
[19:35] <sage> they do
[19:36] <sage> you can also 'getfattr -d -m . thedir'
[19:38] <johnl> my mount died, will try in a min
[19:40] <johnl> suppose the getattr makes a bit more sense
[19:41] <johnl> well, dunno. I already had the tools to read stats cat style
[19:45] * Meths_ (rift@91.106.140.149) has joined #ceph
[19:46] * greglap (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[19:52] * Meths (rift@91.106.247.93) Quit (Ping timeout: 480 seconds)
[20:00] <sage> yehudasa: getting anywhere?
[20:00] <sage> i reproduced a hang where it's blocked at the schedule_timeout() in commit_transaction. wondering if an end_transaction() didn't wake_up the waitqueue.
[20:01] <yehudasa> sage: still trying to figure out what's going on
[20:02] <yehudasa> sage: did you reproduce it on sepia27?
[20:02] <sage> yeah
[20:02] <sage> with the async delete. didn't take too long. adding more printk's to the transaction stuff and doing it again
[20:02] <sage> 26 is running the vanilla ioctls as a sanity check.
[20:02] <yehudasa> yeah, I'm actually looking at that now.. assuming your kernel tree is on ~sage/src/btrfs-unstable?
[20:03] <sage> yeah
[20:04] * Yoric_ (~David@213.144.210.93) Quit (Quit: Yoric_)
[20:05] <sage> yehudasa: fwiw this is what i got: http://pastebin.com/E8SPh1t8
[20:09] <sage> hmm, this looks fishy:
[20:09] <sage> if (waitqueue_active(&cur_trans->writer_wait))
[20:09] <sage> wake_up(&cur_trans->writer_wait);
[20:10] <sage> the waiter does prepare_to_wait and then schedule_timeout() without any locks held. isn't that racy?
[20:10] <yehudasa> could be
[20:11] <sage> well i added a printk for when it's false. if i can reproduce we'll know if that's to blame.
[20:11] <yehudasa> static inline int waitqueue_active(wait_queue_head_t *q)
[20:11] <yehudasa> {
[20:11] <yehudasa>     return !list_empty(&q->task_list);
[20:11] <yehudasa> }
[20:15] * Meths_ is now known as Meths
[20:15] <sage> yeah, the synchronization here makes no sense to me.
[20:16] <sage> btrfs_end_transaction() and btrfs_commit_transaction(), where it's adjusting and waiting on cur_trans->num_writers
[20:16] <yehudasa> the prepare_to_wait takes the queue lock
[20:17] <yehudasa> maybe missing the same spin_lock_irqsave around the waitqueue_active?
[20:18] <sage> oh i see, it's just an ordering thing. the waiter adds to the queue, then checks the condition, and the waker adjusts the condition, then checks the queue.
[20:29] <sage> yehudasa: oh, i think i see the problem?
[20:30] <yehudasa> what?
[20:30] <sage> in commit_transaction, at the top of that do {}, we set timeout based on num_writers and should_grow, with locks held.
[20:30] <sage> then way way later, we wait for that timeout depending on num_writers and should_grow.. but num_writers may have since gone to 1, in which case we wait for MAX_SCHEDULE_TIMEOUT instead of 1
[20:32] <yehudasa> but that shouldn't deadlock?
[20:33] <yehudasa> oh
[20:33] <yehudasa> MAX_SCHEDULE_TIMEOUT is LONG_MAX
[20:34] <sage> it should just drop the timeout var and do
[20:34] <sage> if (cur_trans->num_writers > 1)
[20:34] <sage> schedule_timeout(MAX_SCHEDULE_TIMEOUT);
[20:34] <sage> else if (should_grow)
[20:34] <sage> schedule_timeout(1);
[20:35] <yehudasa> yeah
[20:41] <sage> aha.. yeah.
[20:41] <sage> so i'm seeing the same thing happen periodically, but it usually gets woken up every 30 seconds by the normal commit thread, that does a start/end and (re-)kicks the waitqueue. but if it happened to be that thread that deadlocked/raced in the first place, nobody wakes it up.
[20:43] <yehudasa> yep, great
[20:43] <sage> ok i'm going to send all this off to the list then
[20:50] * wido (~wido@fubar.widodh.nl) has joined #ceph
[20:50] <wido> hi
[20:50] <sage> wido: hi
[20:51] <wido> Just wanted to drop by and say that i'm a bit busy for now, we're moving datacenters
[20:52] <wido> My Ceph cluster will be back next week, then I can continue testing
[20:53] <gregaf> good luck with your move!
[20:53] <wido> tnx, it's a lot of work...
[20:53] <wido> One thing I noticed, that is #502
[20:54] <wido> The VM I was running was running qemu-rbd, that one kept writing even when the cluster was full
[20:54] <wido> this was NOT the kernel client RBD code
[20:55] <gregaf> yeah, the cluster full blockers aren't in a lot of the different clients…thus the ticket! :)
[20:56] <wido> yes, but I saw the ticket was in the kclient section, while it is valid for the qemu-kvm code too
[20:56] <wido> or even librados? Since librados will keep writing too I assume
[20:57] <wido> librados::write will not block
[20:57] <gregaf> oh, so it is
[20:58] <sage> wido: i added the full checks to the userspace side the other day.. should be in unstable
[20:58] <sage> the kernel rbd still needs fixing
[20:58] <wido> sage: librados or qemu-kvm?
[20:59] <sage> both; the check is in the internal Objecter interface
[20:59] <sage> rbd was merged for 2.6.37-rc1 btw!
[20:59] <wido> ok, i'll give it a try then. But i think i need a fresh mkcephfs, since I have a lot of corrupted objects
[20:59] <wido> sage: that is really cool! :)
[20:59] <sage> wido: yeah, i think that's best at this stage :)
[20:59] <wido> congrats
[21:11] * seibert (~seibert__@drl-dhcp42-115.sas.upenn.edu) has joined #ceph
[21:30] * cvaske (~cvaske@hgfw-01.soe.ucsc.edu) has joined #ceph
[21:53] <yehudasa> sage: did chris pull the other fixes already?
[21:53] <sage> that's what he said. dunno if it's in his tree yet
[21:59] <yehudasa> sage: I was referring to the clone ioctl and delalloc issues, they weren't in your latest pull request..
[21:59] <sage> oh
[21:59] <sage> i'll ask him
[22:00] <wido> btw, one more question. I was following the discussion about the bdrv_flush operation
[22:01] <wido> isn't this almost the same as cache=writeback with KVM?
[22:01] <wido> Imho, I would like the host to cache writes, although I know I could lose data
[22:01] <sage> yeah
[22:02] <wido> So, implementing bdrv_flush would make cache=writeback work
[22:02] <wido> at least, giving extra performance
[22:02] <sage> although from the sound of it its not necessarily unsafe; fsync() and sync() still flush data all the way through. it's similar to the write cache on a SATA disk.
[22:03] <wido> Yes, then I understood it correctly
[22:03] <wido> I did some tests a few weeks ago, couldn't find any performance differences between cache=none and cache=writeback
[22:03] <sage> it sounds like the caching happens in the rbd layer, though, not above it. so we would look at whether cache=writeback is set, and cache accordingly, taking care to implement a bdrv_flush so that safety isn't compromised
[22:03] <gregaf> I think Christian just didn't know that bdrv_flush was an option when he wrote the original kvm rbd
[22:04] <gregaf> and everything since then has been modeled on that understanding
[22:05] <wido> Ok, right. But it should behave like a SATA disk indeed, that's normal.
[22:06] * terang (~me@pool-173-55-24-140.lsanca.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[22:11] <seibert> Hi, I have a quick question: I'm running some tests with ceph (so far very impressed), and trying to configure the system so that each replica of a file is always localized to one osd. (Doesn't matter which one.)
[22:12] <seibert> I found some old list posts from 2008 about this, but I can't tell what the current status is.
[22:12] <sage> there is a set_layout ioctl that lets you set preferred_osd to something >= 0 (-1, the default, disables the feature)
[22:12] <sage> can i ask what you're using it for?
[22:13] <seibert> I'm looking to design a cluster where jobs are scheduled to run (when possible) on the nodes where the data is located.
[22:13] <sage> there's a get_layout that does that half (finding out where the data is).
[22:13] <seibert> so the performance advantages of striping are not as important as the ability to localize a file to a few nodes.
[22:14] <sage> i would avoid set_layout preferred_osd if possible, though, as it screws up the space balancing. and the benefits are a bit dubious in practice (at least according to what i've heard from the hadoop guys)
[22:15] <seibert> Would increasing the stripe size achieve what I'm describing?
[22:15] <seibert> (not even sure if that's possible)
[22:15] <sage> yeah, you can set the stripe size to whatever you want. 4mb is just the default.
[22:15] <sage> you can even set the default for a directory/subtree (or root dir for the whole fs)
[22:16] <sage> to clarify though: you want to make _writes_ go somewhere specific, or you want to find out where things previously got stored to move your computation there?
[22:16] <seibert> so a large value (say 500 MB) shouldn't have any negative repercussions? (aside from losing the striping throughput benefit)
[22:16] <seibert> the usual computation model is reading a file
[22:16] <seibert> and producing some output where it won't be important where it goes
[22:17] <seibert> since it will generally be much smaller
[22:17] <sage> it means the balancing will be much more coarse. objects are placed pseudo-randomly, so making the objects bigger will increase the variance in osd utilizations and the risk of ENOSPC
[22:17] <seibert> OK, so the stripe size needs to be small compared to the underlying volume size on the osds.
[22:18] <sage> yeah
[22:18] <seibert> I'll have some control over the size of the files, so that should be achievable.
[22:19] <seibert> to be clear: is there a utility to control the stripe size now?
[22:20] <sage> there's an ioctl. i forget if the command line util is there (gregaf?)
[22:20] <gregaf> cephfs is pushed, but I'm not sure which branches hold it
[22:20] <gregaf> and if you want it fs-wide isn't that something that can be set without the file_layouts stuff?
[22:21] <sage> the 4mb default is compiled in and set on the root inode during mkfs
[22:22] <darkfader> seibert: why is it better for you to have the file local to the job instead of reading with more speed from multiple (local or not) osds?
[22:22] * mib_jw83r5 (a3011351@ircip3.mibbit.com) has joined #ceph
[22:23] <seibert> darkfader: Keeping the data off the network in the first place is even better than having lots of network bandwidth. :)
[22:23] <seibert> darkfader: It's not a hard requirement, and there are use cases where we will have to access files over the network (which is why I want a distributed system in the first place). But the heavy usage will be batch jobs operating on one file at a time.
[22:25] <sage> gregaf: i don't see cephfs in unstable...
[22:25] <sage> well, not in the Makefile.am at least
[22:26] <gregaf> hmmm, it may just be a cpp file for now
[22:27] <gregaf> yeah, no Makefile instructions yet, I'm not sure we ever decided what to actually call it/where to put it
[22:28] <darkfader> seibert: ok i see ;) if a single node gives very good speed for the file already... my servers only have 4 sata disks but fast network and I got too used to thinking in that [box] :)
[22:38] <sage> gregaf: hmm right. any votes/preferences between a new tool ('cephfs setlayout foo ...') and extending the old tool to do local ops ('ceph file setlayout ...') as well as administrative cluster stuff?
[22:41] <gregaf> well, I just put it into the makefile as cephfs
[22:41] <gregaf> it seems like a good idea to separate use and administration
[22:42] <gregaf> although this stuff still doesn't require any extra perms as far as I remember
[22:42] <sage> ok
[22:42] <gregaf> should I push it into testing or rc as well?
[22:42] <sage> unstable's ok i think
[22:43] <gregaf> k
[22:43] <gregaf> sage: did you see Jim's email about the cosd journal assert?
[22:44] <sage> yeah. haven't looked yet
[22:49] <sage> gregaf: need a manpage for cephfs too at some point
[22:49] <gregaf> k
[22:49] <sage> replying to jim now
[22:52] * cvaske (~cvaske@hgfw-01.soe.ucsc.edu) Quit (Quit: Leaving)
[22:55] <seibert> sage: So I should take a look at the cephfs tool in the unstable tree to control the stripe size for a filesystem?
[22:55] <sage> yeah
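(Editor's note: a hypothetical invocation of the cephfs tool being discussed, shown only to give the general shape of the interface; the flag names and values are the editor's assumptions, so check the tool's own usage output once it builds from unstable.)

    # assumed syntax: give new files under a directory a coarser layout
    # (stripe unit, object size in bytes, stripe count -- illustrative only)
    cephfs /mnt/ceph/data set_layout -u 67108864 -s 67108864 -c 1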
[22:57] <seibert> ok, I'm currently working from the ceph debian repository. I assume that this means I get to compile stuff. :)
[22:57] <seibert> Unless the unstable tree gets auto-compiled to debian packages on some schedule?
[22:58] <sage> those packages are only rebuilt when i happen to need them for something.
[22:58] <sage> eventually that'll be automated...
[22:58] <seibert> OK, compiling stuff is no problem. Thanks for the help! It is very much appreciated.
[22:59] * terang (~me@ip-66-33-206-8.dreamhost.com) has joined #ceph
[23:00] * dubst (~me@ip-66-33-206-8.dreamhost.com) has joined #ceph
[23:00] <seibert> And thanks for the work on ceph. This is the 2nd distributed filesystem I actually understood. :) (The first was gluster, but the node management is far too crude for my needs.)
[23:07] * terang (~me@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[23:25] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Read error: Connection timed out)
[23:25] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[23:32] * bbigras (quasselcor@bas11-montreal02-1128536101.dsl.bell.ca) has joined #ceph
[23:33] * bbigras is now known as Guest614

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.